hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6fe673f06aad8b8bdab20ae1a7a96f08d197a130 | 205 | py | Python | bitstamp.py | Tulip-HFT/market-crawler | a6572459a1b6dd1609d61e01c01f197911c8b144 | [
"MIT"
] | null | null | null | bitstamp.py | Tulip-HFT/market-crawler | a6572459a1b6dd1609d61e01c01f197911c8b144 | [
"MIT"
] | null | null | null | bitstamp.py | Tulip-HFT/market-crawler | a6572459a1b6dd1609d61e01c01f197911c8b144 | [
"MIT"
] | null | null | null | import interface
class Bitstamp(interface.MarketExplorer):
def __init__(self):
pass
def exchange_name(self):
return 'bitstamp'
def markets(self):
return ['btc/usd']
| 15.769231 | 41 | 0.634146 | 22 | 205 | 5.681818 | 0.681818 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.268293 | 205 | 12 | 42 | 17.083333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.073529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.125 | 0.125 | 0.25 | 0.875 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
b507388f2988cd14fd92d51249cba1766ca6584a | 151 | py | Python | app/schemas/rule.py | ninoseki/ayashige | e24d42979b06f420c97bbc00316545d075fdec07 | [
"MIT"
] | 28 | 2018-11-24T09:00:04.000Z | 2022-02-17T01:31:40.000Z | app/schemas/rule.py | ninoseki/ayashige | e24d42979b06f420c97bbc00316545d075fdec07 | [
"MIT"
] | 5 | 2018-11-24T04:41:09.000Z | 2021-10-31T23:36:35.000Z | app/schemas/rule.py | ninoseki/ayashige | e24d42979b06f420c97bbc00316545d075fdec07 | [
"MIT"
] | 7 | 2019-06-07T16:26:29.000Z | 2021-11-15T19:38:26.000Z | from typing import List, Optional
from .api_model import APIModel
class Rule(APIModel):
name: str
score: int
notes: Optional[List[str]]
| 15.1 | 33 | 0.708609 | 21 | 151 | 5.047619 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211921 | 151 | 9 | 34 | 16.777778 | 0.890756 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
82efca060b5bffc1d37cbe30bfb77f0e14967bbf | 134 | py | Python | conftest.py | icemac/icemac.ab.calendar | c0cdedd3a8fdd39520156c2ea7cf83aca742e3d9 | [
"BSD-2-Clause"
] | 1 | 2020-04-21T19:34:04.000Z | 2020-04-21T19:34:04.000Z | conftest.py | icemac/icemac.ab.calendar | c0cdedd3a8fdd39520156c2ea7cf83aca742e3d9 | [
"BSD-2-Clause"
] | null | null | null | conftest.py | icemac/icemac.ab.calendar | c0cdedd3a8fdd39520156c2ea7cf83aca742e3d9 | [
"BSD-2-Clause"
] | null | null | null | pytest_plugins = (
'icemac.addressbook.fixtures',
'icemac.addressbook.browser.fixtures',
'icemac.ab.calendar.fixtures',
)
| 22.333333 | 42 | 0.708955 | 13 | 134 | 7.230769 | 0.615385 | 0.361702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141791 | 134 | 5 | 43 | 26.8 | 0.817391 | 0 | 0 | 0 | 0 | 0 | 0.664179 | 0.664179 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d214f88306d81c0cf1980034d6ddbdc6e30e58a4 | 8,003 | py | Python | M3_feature_zone/retipy/create_datasets.py | rmaphoh/AutoMorph | 0c82ce322c6cd8bd80f06bbd85c5c2542e534cb8 | [
"Apache-2.0"
] | 1 | 2022-01-28T00:56:23.000Z | 2022-01-28T00:56:23.000Z | M3_feature_zone/retipy/create_datasets.py | rmaphoh/AutoMorph | 0c82ce322c6cd8bd80f06bbd85c5c2542e534cb8 | [
"Apache-2.0"
] | null | null | null | M3_feature_zone/retipy/create_datasets.py | rmaphoh/AutoMorph | 0c82ce322c6cd8bd80f06bbd85c5c2542e534cb8 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# Retipy - Retinal Image Processing on Python
# Copyright (C) 2017 Alejandro Valdes
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
Script to estimate the linear tortuosity of a set of retinal images. It will output the values
to a file in the output folder defined in the configuration. The output will only contain the
estimated values, sorted by image file name.
"""
import argparse
import glob
# import numpy as np
import os
import h5py
import shutil
import pandas as pd
# import scipy.stats as stats
from retipy import configuration, retina, tortuosity_measures
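# Clear any Jupyter checkpoint folders left in the skeleton directories so they are not picked up as inputs.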
if os.path.exists('/home/jupyter/Deep_rias/Results/M2/artery_vein/artery_binary_skeleton/.ipynb_checkpoints'):
shutil.rmtree('/home/jupyter/Deep_rias/Results/M2/artery_vein/artery_binary_skeleton/.ipynb_checkpoints')
if os.path.exists('/home/jupyter/Deep_rias/Results/M2/binary_vessel/binary_skeleton/.ipynb_checkpoints'):
shutil.rmtree('/home/jupyter/Deep_rias/Results/M2/binary_vessel/binary_skeleton/.ipynb_checkpoints')
if os.path.exists('/home/jupyter/Deep_rias/Results/M2/artery_vein/vein_binary_skeleton/.ipynb_checkpoints'):
shutil.rmtree('/home/jupyter/Deep_rias/Results/M2/artery_vein/vein_binary_skeleton/.ipynb_checkpoints')
if not os.path.exists('../../Results/M3/Width/'):
os.makedirs('../../Results/M3/Width/')
#if os.path.exists('./DDR/av_seg/raw/.ipynb_checkpoints'):
# shutil.rmtree('./DDR/av_seg/raw/.ipynb_checkpoints')
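# Read run settings (image directory, window size, sampling, thresholds) from the configuration file.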
parser = argparse.ArgumentParser()
parser.add_argument(
"-c",
"--configuration",
help="the configuration file location",
default="resources/retipy.config")
args = parser.parse_args()
CONFIG = configuration.Configuration(args.configuration)
t2_list = []
t4_list = []
t5_list = []
name_list = []
Artery_PATH = '/home/jupyter/Deep_rias/Results/M2/artery_vein/artery_binary_skeleton'
Vein_PATH = '/home/jupyter/Deep_rias/Results/M2/artery_vein/vein_binary_skeleton'
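# Pass 1: measure tortuosity and vessel width on the full binary vessel skeletons.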
for filename in sorted(glob.glob(os.path.join(CONFIG.image_directory, '*.png'))):
segmentedImage = retina.Retina(None, filename, store_path='/home/jupyter/Deep_rias/Results/M2/binary_vessel/binary_process')
#segmentedImage.threshold_image()
#segmentedImage.reshape_square()
#window_sizes = segmentedImage.get_window_sizes()
window_sizes = [912]
window = retina.Window(
segmentedImage, window_sizes[-1], min_pixels=CONFIG.pixels_per_window)
    t1, t2, t3, t4, td, tfi, tft, vessel_density, average_caliber, vessel_count, tcurve, bifurcation_t, vessel_count_1, vessel_count_list, w1_list = tortuosity_measures.evaluate_window(window, CONFIG.pixels_per_window, CONFIG.sampling_size, CONFIG.r_2_threshold, store_path='/home/jupyter/Deep_rias/Results/M2/binary_vessel/binary_process/')
#print(window.tags)
t2_list.append(t2)
t4_list.append(t4)
t5_list.append(td)
name_list.append(filename.split('/')[-1])
#hf = h5py.File(CONFIG.output_folder + "/" + segmentedImage.filename + ".h5", 'w')
#hf.create_dataset('windows', data=window.windows)
#hf.create_dataset('tags', data=window.tags)
#hf.close()
Data4stage2 = pd.DataFrame({'Order':vessel_count_list, 'Width':w1_list})
Data4stage2.to_csv('../../Results/M3/Width/width_results_{}.csv'.format(segmentedImage._file_name), index = None, encoding='utf8')
Exit_file = pd.read_csv('../../Results/M3/Binary_Features_Measurement.csv')
Data4stage2 = pd.DataFrame({'Distance_Tortuosity':t2_list, 'Squared_Curvature_Tortuosity':t4_list, 'Tortuosity_density':t5_list})
Data4stage2 = pd.concat([Exit_file, Data4stage2], axis=1)
Data4stage2.to_csv('../../Results/M3/Binary_Tortuosity_Measurement.csv', index = None, encoding='utf8')
####################################
t2_list = []
t4_list = []
t5_list = []
name_list = []
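# Pass 2: repeat the measurements on the artery-only skeletons.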
for filename in sorted(glob.glob(os.path.join(Artery_PATH, '*.png'))):
segmentedImage = retina.Retina(None, filename,store_path='/home/jupyter/Deep_rias/Results/M2/artery_vein/artery_binary_process')
#segmentedImage.threshold_image()
#segmentedImage.reshape_square()
#window_sizes = segmentedImage.get_window_sizes()
window_sizes = [912]
window = retina.Window(
segmentedImage, window_sizes[-1], min_pixels=CONFIG.pixels_per_window)
    t1, t2, t3, t4, td, tfi, tft, vessel_density, average_caliber, vessel_count, tcurve, bifurcation_t, vessel_count_1, vessel_count_list, w1_list = tortuosity_measures.evaluate_window(window, CONFIG.pixels_per_window, CONFIG.sampling_size, CONFIG.r_2_threshold, store_path='/home/jupyter/Deep_rias/Results/M2/artery_vein/artery_binary_process/')
#print(window.tags)
t2_list.append(t2)
t4_list.append(t4)
t5_list.append(td)
name_list.append(filename.split('/')[-1])
#hf = h5py.File(CONFIG.output_folder + "/" + segmentedImage.filename + ".h5", 'w')
#hf.create_dataset('windows', data=window.windows)
#hf.create_dataset('tags', data=window.tags)
#hf.close()
print(filename.split('/')[-1])
Data4stage2 = pd.DataFrame({'Order':vessel_count_list, 'Width':w1_list})
Data4stage2.to_csv('../../Results/M3/Width/artery_width_results_{}.csv'.format(segmentedImage._file_name), index = None, encoding='utf8')
Exit_file = pd.read_csv('../../Results/M3/Artery_Features_Measurement.csv')
Data4stage2 = pd.DataFrame({'Tortuosity':t2_list, 'Squared_Curvature_Tortuosity':t4_list, 'Tortuosity_density':t5_list})
Data4stage2 = pd.concat([Exit_file, Data4stage2], axis=1)
Data4stage2.to_csv('../../Results/M3/Artery_Tortuosity_Measurement.csv', index = None, encoding='utf8')
####################################
t2_list = []
t4_list = []
t5_list = []
name_list = []
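# Pass 3: repeat the measurements on the vein-only skeletons.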
for filename in sorted(glob.glob(os.path.join(Vein_PATH, '*.png'))):
segmentedImage = retina.Retina(None, filename,store_path='/home/jupyter/Deep_rias/Results/M2/artery_vein/vein_binary_process')
#segmentedImage.threshold_image()
#segmentedImage.reshape_square()
#window_sizes = segmentedImage.get_window_sizes()
window_sizes = [912]
window = retina.Window(
segmentedImage, window_sizes[-1], min_pixels=CONFIG.pixels_per_window)
    t1, t2, t3, t4, td, tfi, tft, vessel_density, average_caliber, vessel_count, tcurve, bifurcation_t, vessel_count_1, vessel_count_list, w1_list = tortuosity_measures.evaluate_window(window, CONFIG.pixels_per_window, CONFIG.sampling_size, CONFIG.r_2_threshold, store_path='/home/jupyter/Deep_rias/Results/M2/artery_vein/vein_binary_process/')
#print(window.tags)
t2_list.append(t2)
t4_list.append(t4)
t5_list.append(td)
name_list.append(filename.split('/')[-1])
#hf = h5py.File(CONFIG.output_folder + "/" + segmentedImage.filename + ".h5", 'w')
#hf.create_dataset('windows', data=window.windows)
#hf.create_dataset('tags', data=window.tags)
#hf.close()
Data4stage2 = pd.DataFrame({'Order':vessel_count_list, 'Width':w1_list})
Data4stage2.to_csv('../../Results/M3/Width/vein_width_results_{}.csv'.format(segmentedImage._file_name), index = None, encoding='utf8')
Exit_file = pd.read_csv('../../Results/M3/Vein_Features_Measurement.csv')
Data4stage2 = pd.DataFrame({'Image_id':name_list, 'Tortuosity':t2_list, 'Squared_Curvature_Tortuosity':t4_list, 'Tortuosity_density':t5_list})
Data4stage2 = pd.concat([Exit_file, Data4stage2], axis=1)
Data4stage2.to_csv('../../Results/M3/Vein_Tortuosity_Measurement.csv', index = None, encoding='utf8')
| 48.79878 | 343 | 0.744596 | 1,102 | 8,003 | 5.173321 | 0.197822 | 0.027013 | 0.036836 | 0.046658 | 0.784599 | 0.779688 | 0.737239 | 0.729346 | 0.724785 | 0.718295 | 0 | 0.022172 | 0.109584 | 8,003 | 163 | 344 | 49.09816 | 0.777856 | 0.256154 | 0 | 0.453488 | 0 | 0 | 0.317893 | 0.279808 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.081395 | 0 | 0.081395 | 0.011628 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d219e8aafd4f27e03ca4666f2e206766e0ed873b | 51 | py | Python | testy.py | yonradz/sirwalter | d71e2f10eeaf5fc08ea84f17719330d9ed613a6a | [
"Apache-2.0"
] | null | null | null | testy.py | yonradz/sirwalter | d71e2f10eeaf5fc08ea84f17719330d9ed613a6a | [
"Apache-2.0"
] | null | null | null | testy.py | yonradz/sirwalter | d71e2f10eeaf5fc08ea84f17719330d9ed613a6a | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
print "Disregard this test!"
| 17 | 28 | 0.72549 | 8 | 51 | 4.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 51 | 2 | 29 | 25.5 | 0.822222 | 0.392157 | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
d223692d0a1aa94547a58cfe2d937308586051ab | 97 | py | Python | invokers/python/tests/functions/http/default/func.py | andrew-su/function-buildpacks-for-knative | dcb9a8c1e07a6288dbc096e2f5270eb5e16a625a | [
"BSD-2-Clause"
] | 19 | 2021-11-03T15:02:24.000Z | 2022-03-23T04:33:56.000Z | invokers/python/tests/functions/http/default/func.py | andrew-su/function-buildpacks-for-knative | dcb9a8c1e07a6288dbc096e2f5270eb5e16a625a | [
"BSD-2-Clause"
] | 36 | 2021-11-05T14:33:37.000Z | 2022-03-24T20:13:40.000Z | invokers/python/tests/functions/http/default/func.py | andrew-su/function-buildpacks-for-knative | dcb9a8c1e07a6288dbc096e2f5270eb5e16a625a | [
"BSD-2-Clause"
] | 4 | 2021-11-16T08:27:58.000Z | 2022-02-03T02:58:24.000Z | # Copyright 2021-2022 VMware, Inc.
# SPDX-License-Identifier: BSD-2-Clause
def main():
pass
| 16.166667 | 39 | 0.701031 | 14 | 97 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.164948 | 97 | 5 | 40 | 19.4 | 0.728395 | 0.721649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
d2282dd6a1edc923a6a50ec18ed4e3d06380715b | 203 | py | Python | example/image_caption_with_attention/utils/__init__.py | yulongfan/tryEverything | 2f66a8d33c3539e46d91527186bc52515ce5b14f | [
"Apache-2.0"
] | 1 | 2020-10-01T08:52:45.000Z | 2020-10-01T08:52:45.000Z | example/image_caption_with_attention/utils/__init__.py | yulongfan/tryEverything | 2f66a8d33c3539e46d91527186bc52515ce5b14f | [
"Apache-2.0"
] | null | null | null | example/image_caption_with_attention/utils/__init__.py | yulongfan/tryEverything | 2f66a8d33c3539e46d91527186bc52515ce5b14f | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# @File : image_caption/__init__.py
# @Info : @ TSMC-SIGGRAPH, 2018/8/27
# @Desc :
# -.-.. - ... -- -.-. .-.. .- -... .---. -.-- ..- .-.. --- -. --. ..-. .- -.
| 25.375 | 81 | 0.325123 | 16 | 203 | 3.8125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053333 | 0.261084 | 203 | 7 | 82 | 29 | 0.353333 | 0.931034 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d229b21889a2f256c85dade1f7e7160d3d961722 | 88 | py | Python | 6 kyu/Multiples of 3 or 5/Multiples of 3 or 5.py | anthonyjatoba/codewars | 76b0d66dd1ba76a4d136b658920cdf85fd5c4b06 | [
"MIT"
] | null | null | null | 6 kyu/Multiples of 3 or 5/Multiples of 3 or 5.py | anthonyjatoba/codewars | 76b0d66dd1ba76a4d136b658920cdf85fd5c4b06 | [
"MIT"
] | null | null | null | 6 kyu/Multiples of 3 or 5/Multiples of 3 or 5.py | anthonyjatoba/codewars | 76b0d66dd1ba76a4d136b658920cdf85fd5c4b06 | [
"MIT"
] | null | null | null | def solution(number):
return sum(n for n in range(number) if n % 3 == 0 or n % 5 == 0) | 44 | 66 | 0.613636 | 19 | 88 | 2.842105 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059701 | 0.238636 | 88 | 2 | 66 | 44 | 0.746269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
d241dc6a6ee87d473828f4d6bd362cf08e759169 | 39 | py | Python | models/architectures/__init__.py | hzh8311/2nd-Solution-for-CVPR2020-face-anti-spoofing-challenge | 5c21d934904bbcfc9b373da3f578d03ede842b06 | [
"MIT"
] | 3 | 2021-02-11T07:59:34.000Z | 2021-05-19T02:28:27.000Z | models/architectures/__init__.py | hzh8311/2nd-Solution-for-CVPR2020-face-anti-spoofing-challenge | 5c21d934904bbcfc9b373da3f578d03ede842b06 | [
"MIT"
] | null | null | null | models/architectures/__init__.py | hzh8311/2nd-Solution-for-CVPR2020-face-anti-spoofing-challenge | 5c21d934904bbcfc9b373da3f578d03ede842b06 | [
"MIT"
] | null | null | null | # from .mobilenetv2b import MobileNetV2 | 39 | 39 | 0.846154 | 4 | 39 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.102564 | 39 | 1 | 39 | 39 | 0.885714 | 0.948718 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d25e1f8c9c836092042257acfa6266132e8260e9 | 28,772 | py | Python | tests/query/v2/match/test_variable_length_relationships.py | nevermore3/nebula-graph | 6f24438289c2b20575bc6acdf607cd2a3648d30d | [
"Apache-2.0"
] | null | null | null | tests/query/v2/match/test_variable_length_relationships.py | nevermore3/nebula-graph | 6f24438289c2b20575bc6acdf607cd2a3648d30d | [
"Apache-2.0"
] | null | null | null | tests/query/v2/match/test_variable_length_relationships.py | nevermore3/nebula-graph | 6f24438289c2b20575bc6acdf607cd2a3648d30d | [
"Apache-2.0"
] | null | null | null | # --coding:utf-8--
#
# Copyright (c) 2020 vesoft inc. All rights reserved.
#
# This source code is licensed under Apache 2.0 License,
# attached with Common Clause Condition 1.0, found in the LICENSES directory.
import pytest
from tests.common.nebula_test_suite import NebulaTestSuite
@pytest.mark.usefixtures('set_vertices_and_edges')
class TestVariableLengthRelationshipMatch(NebulaTestSuite):
@classmethod
def prepare(cls):
cls.use_nba()
@pytest.mark.skip
def test_to_be_deleted(self):
# variable steps
stmt = 'MATCH (v:player:{name: "abc"}) -[r*1..3]-> () return *'
self.fail_query(stmt)
stmt = 'MATCH (v:player:{name: "abc"}) -[r*..3]-> () return *'
self.fail_query(stmt)
stmt = 'MATCH (v:player:{name: "abc"}) -[r*1..]-> () return *'
self.fail_query(stmt)
@pytest.mark.skip
def test_hops_0_to_1(self, like, serve):
VERTICES, EDGES = self.VERTEXS, self.EDGS
def like_row(dst: str):
return [[like('Tracy McGrady', dst)], VERTICES[dst]]
def serve_row(dst):
return [[serve('Tracy McGrady', dst)], VERTICES[dst]]
# single both direction edge with properties
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:serve*0..1{start_year: 2000}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
serve_row("Magic")
]
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:like*0..1{likeness: 90}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
like_row("Vince Carter"),
like_row("Yao Ming"),
like_row("Grant Hill"), # like each other
]
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:like*1{likeness: 90}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
like_row("Vince Carter"),
like_row("Yao Ming"),
like_row("Grant Hill"), # like each other
]
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:like*0{likeness: 90}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
]
}
self.check_rows_with_header(stmt, expected)
# single direction edge with properties
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:like*0..1{likeness: 90}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
]
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:like*0{likeness: 90}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
]
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:like*1{likeness: 90}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
]
}
self.check_rows_with_header(stmt, expected)
# single both direction edge without properties
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:serve*0..1]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
serve_row("Raptors"),
serve_row("Magic"),
serve_row("Spurs"),
serve_row("Rockets"),
]
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:like*0..1]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
like_row("Vince Carter"),
like_row("Yao Ming"),
like_row("Grant Hill"), # like each other
]
}
self.check_rows_with_header(stmt, expected)
# multiple both direction edge with properties
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:serve|like*0..1{start_year: 2000}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
serve_row("Magic"),
]
}
self.check_rows_with_header(stmt, expected)
# multiple single direction edge with properties
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:serve|like*0..1{start_year: 2000}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
serve_row("Magic"),
]
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:serve|like*0..1{likeness: 90}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
]
}
self.check_rows_with_header(stmt, expected)
# multiple both direction edge with properties
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:serve|like*0..1]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
like_row("Vince Carter"),
like_row("Yao Ming"),
like_row("Grant Hill"),
serve_row("Raptors"),
serve_row("Magic"),
serve_row("Spurs"),
serve_row("Rockets"),
]
}
self.check_rows_with_header(stmt, expected)
# multiple single direction edge with properties
stmt = '''
MATCH (:player{name:"Tracy McGrady"})-[e:serve|like*0..1]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[[], VERTICES["Tracy McGrady"]],
like_row("Kobe Bryant"),
like_row("Grant Hill"),
like_row("Rudy Gay"),
serve_row("Raptors"),
serve_row("Magic"),
serve_row("Spurs"),
serve_row("Rockets"),
]
}
self.check_rows_with_header(stmt, expected)
def test_hops_m_to_n(self,
like,
serve,
serve_2hop,
serve_3hop,
like_2hop,
like_3hop):
VERTICES = self.VERTEXS
def like_row_2hop(dst1: str, dst2: str):
return [like_2hop('Tim Duncan', dst1, dst2), VERTICES[dst2]]
def like_row_3hop(dst1: str, dst2: str, dst3: str):
return [like_3hop('Tim Duncan', dst1, dst2, dst3), VERTICES[dst3]]
def serve_row_2hop(team, player, r1=0, r2=0):
return [[serve('Tim Duncan', team, r1), serve(player, team, r2)], VERTICES[player]]
def serve_row_3hop(team1, player, team2, r1=0, r2=0, r3=0):
return [
[serve('Tim Duncan', team1, r1), serve(player, team1, r2), serve(player, team2, r3)],
VERTICES[team2]
]
# single both direction edge with properties
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:serve*2..3{start_year: 2000}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [],
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:like*2..3{likeness: 90}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[like_2hop("Tiago Splitter", "Manu Ginobili", "Tim Duncan"), VERTICES["Tiago Splitter"]],
],
}
self.check_rows_with_header(stmt, expected)
# single direction edge with properties
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:serve*2..3{start_year: 2000}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [],
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:like*2..3{likeness: 90}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [],
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tim Duncan"})<-[e:like*2..3{likeness: 90}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[like_2hop("Tiago Splitter", "Manu Ginobili", "Tim Duncan"), VERTICES["Tiago Splitter"]],
],
}
self.check_rows_with_header(stmt, expected)
# single both direction edge without properties
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:serve*2..3]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
serve_row_2hop("Spurs", "Dejounte Murray"),
serve_row_2hop("Spurs", "Marco Belinelli"),
serve_row_3hop("Spurs", "Marco Belinelli", "Bulls"),
serve_row_3hop("Spurs", "Marco Belinelli", "Hornets"),
serve_row_3hop("Spurs", "Marco Belinelli", "Hawks"),
serve_row_3hop("Spurs", "Marco Belinelli", "76ers"),
serve_row_3hop("Spurs", "Marco Belinelli", "Spurs", 0, 0, 1),
serve_row_3hop("Spurs", "Marco Belinelli", "Hornets", 0, 0, 1),
serve_row_3hop("Spurs", "Marco Belinelli", "Raptors"),
serve_row_3hop("Spurs", "Marco Belinelli", "Warriors"),
serve_row_3hop("Spurs", "Marco Belinelli", "Kings"),
serve_row_2hop("Spurs", "Danny Green"),
serve_row_3hop("Spurs", "Danny Green", "Cavaliers"),
serve_row_3hop("Spurs", "Danny Green", "Raptors"),
serve_row_2hop("Spurs", "Aron Baynes"),
serve_row_3hop("Spurs", "Aron Baynes", "Pistons"),
serve_row_3hop("Spurs", "Aron Baynes", "Celtics"),
serve_row_2hop("Spurs", "Jonathon Simmons"),
serve_row_3hop("Spurs", "Jonathon Simmons", "76ers"),
serve_row_3hop("Spurs", "Jonathon Simmons", "Magic"),
serve_row_2hop("Spurs", "Rudy Gay"),
serve_row_3hop("Spurs", "Rudy Gay", "Raptors"),
serve_row_3hop("Spurs", "Rudy Gay", "Kings"),
serve_row_3hop("Spurs", "Rudy Gay", "Grizzlies"),
serve_row_2hop("Spurs", "Tony Parker"),
serve_row_3hop("Spurs", "Tony Parker", "Hornets"),
serve_row_2hop("Spurs", "Manu Ginobili"),
serve_row_2hop("Spurs", "David West"),
serve_row_3hop("Spurs", "David West", "Pacers"),
serve_row_3hop("Spurs", "David West", "Warriors"),
serve_row_3hop("Spurs", "David West", "Hornets"),
serve_row_2hop("Spurs", "Tracy McGrady"),
serve_row_3hop("Spurs", "Tracy McGrady", "Raptors"),
serve_row_3hop("Spurs", "Tracy McGrady", "Magic"),
serve_row_3hop("Spurs", "Tracy McGrady", "Rockets"),
serve_row_2hop("Spurs", "Marco Belinelli", 0, 1),
serve_row_3hop("Spurs", "Marco Belinelli", "Bulls", 0, 1, 0),
serve_row_3hop("Spurs", "Marco Belinelli", "Spurs", 0, 1, 0),
serve_row_3hop("Spurs", "Marco Belinelli", "Hornets", 0, 1, 0),
serve_row_3hop("Spurs", "Marco Belinelli", "Hawks", 0, 1, 0),
serve_row_3hop("Spurs", "Marco Belinelli", "76ers", 0, 1, 0),
serve_row_3hop("Spurs", "Marco Belinelli", "Hornets", 0, 1, 1),
serve_row_3hop("Spurs", "Marco Belinelli", "Raptors", 0, 1, 0),
serve_row_3hop("Spurs", "Marco Belinelli", "Warriors", 0, 1, 0),
serve_row_3hop("Spurs", "Marco Belinelli", "Kings", 0, 1, 0),
serve_row_2hop("Spurs", "Paul Gasol"),
serve_row_3hop("Spurs", "Paul Gasol", "Lakers"),
serve_row_3hop("Spurs", "Paul Gasol", "Bulls"),
serve_row_3hop("Spurs", "Paul Gasol", "Grizzlies"),
serve_row_3hop("Spurs", "Paul Gasol", "Bucks"),
serve_row_2hop("Spurs", "LaMarcus Aldridge"),
serve_row_3hop("Spurs", "LaMarcus Aldridge", "Trail Blazers"),
serve_row_2hop("Spurs", "Tiago Splitter"),
serve_row_3hop("Spurs", "Tiago Splitter", "Hawks"),
serve_row_3hop("Spurs", "Tiago Splitter", "76ers"),
serve_row_2hop("Spurs", "Cory Joseph"),
serve_row_3hop("Spurs", "Cory Joseph", "Pacers"),
serve_row_3hop("Spurs", "Cory Joseph", "Raptors"),
serve_row_2hop("Spurs", "Kyle Anderson"),
serve_row_3hop("Spurs", "Kyle Anderson", "Grizzlies"),
serve_row_2hop("Spurs", "Boris Diaw"),
serve_row_3hop("Spurs", "Boris Diaw", "Suns"),
serve_row_3hop("Spurs", "Boris Diaw", "Jazz"),
serve_row_3hop("Spurs", "Boris Diaw", "Hawks"),
serve_row_3hop("Spurs", "Boris Diaw", "Hornets"),
],
}
self.check_rows_with_header(stmt, expected)
# stmt = '''
# MATCH (:player{name: "Tim Duncan"})-[e:like*2..3]-(v)
# RETURN count(*)
# '''
# expected = {
# "column_names": ['count(*)'],
# "rows": [292],
# }
# self.check_rows_with_header(stmt, expected)
# single direction edge without properties
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:serve*2..3]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [],
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name: "Tim Duncan"})-[e:like*2..3]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
like_row_2hop("Tony Parker", "Tim Duncan"),
like_row_3hop("Tony Parker", "Tim Duncan", "Manu Ginobili"),
like_row_2hop("Tony Parker", "Manu Ginobili"),
like_row_3hop("Tony Parker", "Manu Ginobili", "Tim Duncan"),
like_row_2hop("Tony Parker", "LaMarcus Aldridge"),
like_row_3hop("Tony Parker", "LaMarcus Aldridge", "Tony Parker"),
like_row_3hop("Tony Parker", "LaMarcus Aldridge", "Tim Duncan"),
like_row_2hop("Manu Ginobili", "Tim Duncan"),
like_row_3hop("Manu Ginobili", "Tim Duncan", "Tony Parker"),
],
}
self.check_rows_with_header(stmt, expected)
# multiple both direction edge with properties
stmt = '''
MATCH (:player{name: "Tim Duncan"})-[e:serve|like*2..3{likeness: 90}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[like_2hop("Tiago Splitter", "Manu Ginobili", "Tim Duncan"), VERTICES["Tiago Splitter"]]
],
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:serve|like*2..3{start_year: 2000}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [],
}
self.check_rows_with_header(stmt, expected)
# multiple direction edge with properties
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:serve|like*2..3{likeness: 90}]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [],
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (:player{name:"Tim Duncan"})<-[e:serve|like*2..3{likeness: 90}]-(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
[like_2hop("Tiago Splitter", "Manu Ginobili", "Tim Duncan"), VERTICES['Tiago Splitter']]
],
}
self.check_rows_with_header(stmt, expected)
# multiple both direction edge without properties
# stmt = '''
# MATCH (:player{name:"Tim Duncan"})-[e:serve|like*2..3]-(v)
# RETURN count(*)
# '''
# expected = {
# "column_names": ['COUNT(*)'],
# "rows": [927],
# }
# self.check_rows_with_header(stmt, expected)
# multiple direction edge without properties
stmt = '''
MATCH (:player{name: "Tim Duncan"})-[e:serve|like*2..3]->(v)
RETURN e, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": [
like_row_2hop("Tony Parker", "Tim Duncan"),
like_row_3hop("Tony Parker", "Tim Duncan", "Manu Ginobili"),
[
[
like("Tim Duncan", "Tony Parker"),
like("Tony Parker", "Tim Duncan"),
serve("Tim Duncan", "Spurs"),
],
VERTICES['Spurs'],
],
like_row_2hop("Tony Parker", "Manu Ginobili"),
like_row_3hop("Tony Parker", "Manu Ginobili", "Tim Duncan"),
[
[
like("Tim Duncan", "Tony Parker"),
like("Tony Parker", "Manu Ginobili"),
serve("Manu Ginobili", "Spurs"),
],
VERTICES['Spurs'],
],
like_row_2hop("Tony Parker", "LaMarcus Aldridge"),
like_row_3hop("Tony Parker", "LaMarcus Aldridge", "Tony Parker"),
like_row_3hop("Tony Parker", "LaMarcus Aldridge", "Tim Duncan"),
[
[
like("Tim Duncan", "Tony Parker"),
like("Tony Parker", "LaMarcus Aldridge"),
serve("LaMarcus Aldridge", "Trail Blazers"),
],
VERTICES['Trail Blazers'],
],
[
[
like("Tim Duncan", "Tony Parker"),
like("Tony Parker", "LaMarcus Aldridge"),
serve("LaMarcus Aldridge", "Spurs"),
],
VERTICES['Spurs'],
],
[
[
like("Tim Duncan", "Tony Parker"),
serve("Tony Parker", "Hornets"),
],
VERTICES['Hornets'],
],
[
[
like("Tim Duncan", "Tony Parker"),
serve("Tony Parker", "Spurs"),
],
VERTICES['Spurs'],
],
like_row_2hop("Manu Ginobili", "Tim Duncan"),
like_row_3hop("Manu Ginobili", "Tim Duncan", "Tony Parker"),
[
[
like("Tim Duncan", "Manu Ginobili"),
like("Manu Ginobili", "Tim Duncan"),
serve("Tim Duncan", "Spurs"),
],
VERTICES['Spurs'],
],
[
[
like("Tim Duncan", "Manu Ginobili"),
serve("Manu Ginobili", "Spurs"),
],
VERTICES['Spurs'],
],
],
}
self.check_rows_with_header(stmt, expected)
@pytest.mark.skip
def test_mix_hops(self):
stmt = '''
MATCH (:player{name: "Tim Duncan"})-[e1:like]->()-[e2:serve*0..3]->()<-[e3:serve]-(v)
RETURN e1, e2, e3, v
'''
expected = {
"column_names": ['e', 'v'],
"rows": []
}
self.check_rows_with_header(stmt, expected)
def test_return_all(self, like_2hop_start_with, like_3hop_start_with):
like_row_2hop = like_2hop_start_with('Tim Duncan')
like_row_3hop = like_3hop_start_with('Tim Duncan')
stmt = '''
MATCH (:player{name:"Tim Duncan"})-[e:like*2..3]->()
RETURN *
'''
expected = {
"column_names": ['e'],
"rows": [
[like_row_2hop("Tony Parker", "Tim Duncan")],
[like_row_3hop("Tony Parker", "Tim Duncan", "Manu Ginobili")],
[like_row_2hop("Tony Parker", "Manu Ginobili")],
[like_row_3hop("Tony Parker", "Manu Ginobili", "Tim Duncan")],
[like_row_2hop("Tony Parker", "LaMarcus Aldridge")],
[like_row_3hop("Tony Parker", "LaMarcus Aldridge", "Tony Parker")],
[like_row_3hop("Tony Parker", "LaMarcus Aldridge", "Tim Duncan")],
[like_row_2hop("Manu Ginobili", "Tim Duncan")],
[like_row_3hop("Manu Ginobili", "Tim Duncan", "Tony Parker")],
],
}
self.check_rows_with_header(stmt, expected)
def test_more_cases(self, like, serve, like_2hop):
# stmt = '''
# MATCH (v:player{name: 'Tim Duncan'})-[e:like*0]-()
# RETURN e
# '''
stmt = '''
MATCH (v:player{name: 'Tim Duncan'})-[e:like*1]-()
RETURN e
'''
expected = {
"column_names": ['e'],
"rows": [
[[like('Tim Duncan', 'Manu Ginobili')]],
[[like('Tim Duncan', 'Tony Parker')]],
[[like('Dejounte Murray', 'Tim Duncan')]],
[[like('Shaquile O\'Neal', 'Tim Duncan')]],
[[like('Marco Belinelli', 'Tim Duncan')]],
[[like('Boris Diaw', 'Tim Duncan')]],
[[like('Manu Ginobili', 'Tim Duncan')]],
[[like('Danny Green', 'Tim Duncan')]],
[[like('Tiago Splitter', 'Tim Duncan')]],
[[like('Aron Baynes', 'Tim Duncan')]],
[[like('Tony Parker', 'Tim Duncan')]],
[[like('LaMarcus Aldridge', 'Tim Duncan')]],
],
}
self.check_rows_with_header(stmt, expected)
# stmt = '''
# MATCH (v:player{name: 'Tim Duncan'})-[e:like*0..0]-()
# RETURN e
# '''
stmt = '''
MATCH (v:player{name: 'Tim Duncan'})-[e:like*1..1]-()
RETURN e
'''
expected = {
"column_names": ['e'],
"rows": [
[[like('Tim Duncan', 'Manu Ginobili')]],
[[like('Tim Duncan', 'Tony Parker')]],
[[like('Dejounte Murray', 'Tim Duncan')]],
[[like('Shaquile O\'Neal', 'Tim Duncan')]],
[[like('Marco Belinelli', 'Tim Duncan')]],
[[like('Boris Diaw', 'Tim Duncan')]],
[[like('Manu Ginobili', 'Tim Duncan')]],
[[like('Danny Green', 'Tim Duncan')]],
[[like('Tiago Splitter', 'Tim Duncan')]],
[[like('Aron Baynes', 'Tim Duncan')]],
[[like('Tony Parker', 'Tim Duncan')]],
[[like('LaMarcus Aldridge', 'Tim Duncan')]],
],
}
self.check_rows_with_header(stmt, expected)
# stmt = '''
# MATCH (v:player{name: 'Tim Duncan'})-[e:like*]-()
# RETURN e
# '''
# stmt = '''
# MATCH (v:player{name: 'Tim Duncan'})-[e:like*0..0]-()-[e2:like*0..0]-()
# RETURN e, e2
# '''
stmt = '''
MATCH (v:player{name: 'Tim Duncan'})-[e:like*2..3]-()
WHERE e[1].likeness>95 AND e[2].likeness==100
RETURN e
'''
expected = {
"column_names": ['e'],
"rows": [[[
like('Dejounte Murray', 'Tim Duncan'),
like('Dejounte Murray', 'LeBron James'),
like('LeBron James', 'Ray Allen'),
]]],
}
self.check_rows_with_header(stmt, expected)
stmt = '''
MATCH (v:player{name: 'Tim Duncan'})-[e1:like*1..2]-(v2{name: 'Tony Parker'})-[e2:serve]-(v3{name: 'Spurs'})
RETURN e1, e2
'''
expected = {
"column_names": ['e1', 'e2'],
"rows": [
[[like('Dejounte Murray', 'Tim Duncan'), like('Dejounte Murray', 'Tony Parker')], serve('Tony Parker', 'Spurs')],
[[like('Tony Parker', 'Manu Ginobili'), like('Tim Duncan', 'Manu Ginobili')], serve('Tony Parker', 'Spurs')],
[[like('Marco Belinelli', 'Tim Duncan'), like('Marco Belinelli', 'Tony Parker')], serve('Tony Parker', 'Spurs')],
[[like('Boris Diaw', 'Tim Duncan'), like('Boris Diaw', 'Tony Parker')], serve('Tony Parker', 'Spurs')],
[like_2hop('Tony Parker', 'Manu Ginobili', 'Tim Duncan'), serve('Tony Parker', 'Spurs')],
[[like('LaMarcus Aldridge', 'Tim Duncan'), like('LaMarcus Aldridge', 'Tony Parker')], serve('Tony Parker', 'Spurs')],
[[like('LaMarcus Aldridge', 'Tim Duncan'), like('Tony Parker', 'LaMarcus Aldridge')], serve('Tony Parker', 'Spurs')],
[[like('Tim Duncan', 'Tony Parker')], serve('Tony Parker', 'Spurs')],
[[like('Tony Parker', 'Tim Duncan')], serve('Tony Parker', 'Spurs')],
],
}
self.check_rows_with_header(stmt, expected)
        stmt = '''
MATCH p=(v:player{name: 'Tim Duncan'})-[:like|:serve*1..3]->(v1)
WHERE e[0].likeness>90
RETURN p
'''
resp = self.execute(stmt)
self.check_resp_failed(resp)
self.check_error_msg(resp, "SemanticError: Alias used but not defined: `e'")
        stmt = '''
MATCH p=(v:player{name: 'Tim Duncan'})-[:like|:serve*1..3]->(v1)
RETURN e
'''
resp = self.execute(stmt)
self.check_resp_failed(resp)
self.check_error_msg(resp, "SemanticError: Alias used but not defined: `e'")
        stmt = '''
MATCH p=(v:player{name: 'Tim Duncan'})-[:like|:serve*1..3]->(v1)
WHERE e[0].likeness+e[1].likeness>90
RETURN p
'''
resp = self.execute(stmt)
self.check_resp_failed(resp)
self.check_error_msg(resp, "SemanticError: Alias used but not defined: `e'")
| 37.75853 | 133 | 0.470666 | 2,913 | 28,772 | 4.491933 | 0.072434 | 0.070157 | 0.044937 | 0.062361 | 0.86603 | 0.826442 | 0.743447 | 0.718456 | 0.693542 | 0.639893 | 0 | 0.020413 | 0.366606 | 28,772 | 761 | 134 | 37.808147 | 0.697597 | 0.057104 | 0 | 0.623077 | 0 | 0.047692 | 0.354021 | 0.04773 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0 | 0.003077 | 0.009231 | 0.033846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d26aa6e25e22222cdc2a12769be3f40d1ad6c394 | 135 | py | Python | pyaz/acr/config/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/acr/config/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/acr/config/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | 1 | 2022-02-03T09:12:01.000Z | 2022-02-03T09:12:01.000Z | '''
Configure policies for Azure Container Registries.
'''
from ... pyaz_utils import _call_az
from . import content_trust, retention
| 19.285714 | 50 | 0.77037 | 17 | 135 | 5.882353 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140741 | 135 | 6 | 51 | 22.5 | 0.862069 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9637900382a758a694ecb6f30a4c5a0a08d9fdd0 | 1,544 | py | Python | rough.py | AkashKumarSingh11032001/Twitter-Automation-Using-Python | 563029e8ef3a72824a04ad86c18a35695132a651 | [
"MIT"
] | null | null | null | rough.py | AkashKumarSingh11032001/Twitter-Automation-Using-Python | 563029e8ef3a72824a04ad86c18a35695132a651 | [
"MIT"
] | null | null | null | rough.py | AkashKumarSingh11032001/Twitter-Automation-Using-Python | 563029e8ef3a72824a04ad86c18a35695132a651 | [
"MIT"
] | null | null | null | from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import time
driver = webdriver.Firefox()
driver.get('https://twitter.com/login')
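# Sign-in credentials; the placeholder values below must be replaced with a real account.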
email = "yourtwitteremail@gmail.com"
password = "yourtwitterpassword"
loginField = driver.find_element_by_xpath('/html/body/div/div/div/div[2]/main/div/div/div[1]/form/div/div[1]/label/div/div[2]/div/input')
passwordField = driver.find_element_by_xpath(
'/html/body/div/div/div/div[2]/main/div/div/div[1]/form/div/div[2]/label/div/div[2]/div/input')
loginButton = driver.find_element_by_xpath('/html/body/div/div/div/div[2]/main/div/div/div[1]/form/div/div[3]/div')
loginField.send_keys(email)
passwordField.send_keys(password)
time.sleep(1)
loginButton.click()
######################## new code to add 👇 #######################
tweet = "Hello Everyone! This is a tweet that I am sending from a selenium automated script written in Python ( It feels really awesome (: ) . \n If you too want to learn this supercool trick then visit Hack Club Workshops.\n https://workshops.hackclub.com"
tweetInputField = driver.find_element_by_xpath(
'/html/body/div/div/div/div[2]/main/div/div/div/div/div/div[2]/div/div[2]/div[1]/div/div/div/div[2]/div[1]/div/div/div/div/div/div/div/div/div/div[1]/div/div/div/div[2]/div')
tweetInputField.send_keys(tweet)
tweetButton = driver.find_element_by_xpath(
'/html/body/div/div/div/div[2]/main/div/div/div/div/div/div[2]/div/div[2]/div[1]/div/div/div/div[2]/div[4]/div/div/div[2]/div[3]')
time.sleep(1)
tweetButton.click() | 38.6 | 257 | 0.726036 | 265 | 1,544 | 4.162264 | 0.309434 | 0.315503 | 0.293744 | 0.228468 | 0.451496 | 0.43971 | 0.403445 | 0.403445 | 0.388033 | 0.359927 | 0 | 0.020351 | 0.077073 | 1,544 | 40 | 258 | 38.6 | 0.752982 | 0.01101 | 0 | 0.086957 | 0 | 0.26087 | 0.586883 | 0.390128 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.130435 | 0.130435 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
963ad3ad5bbdc7f5ca025a3d0f67cb90cc22b9d1 | 2,257 | py | Python | CEC/dimension.py | ZongSingHuang/Metaheuristic-benchmark | a454ee02ffe206d925a6193a60cf6bcb772213a0 | [
"MIT"
] | null | null | null | CEC/dimension.py | ZongSingHuang/Metaheuristic-benchmark | a454ee02ffe206d925a6193a60cf6bcb772213a0 | [
"MIT"
] | null | null | null | CEC/dimension.py | ZongSingHuang/Metaheuristic-benchmark | a454ee02ffe206d925a6193a60cf6bcb772213a0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Aug 26 15:01:28 2021
@author: zongsing.huang
"""
def Sphere(D):
return D
def Rastrigin(D):
return D
def Ackley(D):
return D
def Griewank(D):
return D
def Schwefel_P222(D):
return D
def Rosenbrock(D):
return D
def Schwefel_P221(D):
return D
def Quartic(D):
return D
def Schwefel_P12(D):
return D
def Penalized1(D):
return D
def Penalized2(D):
return D
def Schwefel_226(D):
return D
def Step(D):
return D
def Kowalik():
return 4
def ShekelFoxholes():
return 2
def GoldsteinPrice():
return 2
def Shekel():
return 4
def Branin():
return 2
def Hartmann3():
return 3
def SixHumpCamelBack():
return 2
def Hartmann6():
return 6
def Zakharov(D):
return D
def SumSquares(D):
return D
def Alpine(D):
return D
def Michalewicz():
return 2
def Exponential(D):
return D
def Schaffer():
return 2
def BentCigar(D):
return D
def Bohachevsky1():
return 2
def Elliptic(D):
return D
def DropWave():
return 2
def CosineMixture(D):
return D
def Ellipsoidal(D):
return D
def LevyandMontalvo1(D):
return D
#%%
def Easom():
return 2
def SumofDifferentPower(D):
return D
def LevyandMontalvo2(D):
return D
def Holzman(D):
return D
def XinSheYang1(D):
return D
def XinSheYang6(D):
return D
def Beale():
return 2
def Shubert():
return 2
def InvertedCosineMixture(D):
return D
def Salomon(D):
return D
def Matyas():
return 2
def Leon():
return 2
def Paviani():
return 10
def Sinusoidal(D):
return D
def ktablet(D):
return D
def NoncontinuousRastrigin(D):
return D
def Fletcher(D):
return D
def Levy(D):
return D
def Davis():
return 2
def Pathological(D):
return D
def Schwefel_P220(D):
return D
def Booth():
return 2
def Zettl():
return 2
def PowellQuartic():
return 4
def Tablet(D):
return D
def Brown(D):
return D
def ChungReynolds(D):
return D
def Csendes(D):
return D
def Bohachevsky2():
return 2
def Bohachevsky3():
return 2
def Colville():
return 4
def BartelsConn():
return 2
def Bird():
return 2 | 10.799043 | 35 | 0.623837 | 328 | 2,257 | 4.277439 | 0.280488 | 0.199572 | 0.228083 | 0.313614 | 0.05417 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040441 | 0.276916 | 2,257 | 209 | 36 | 10.799043 | 0.81924 | 0.037661 | 0 | 0.477612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
96757d66f87c424a9902b2b70ab0ac475600e768 | 30 | py | Python | data/studio21_generated/introductory/2705/starter_code.py | vijaykumawat256/Prompt-Summarization | 614f5911e2acd2933440d909de2b4f86653dc214 | [
"Apache-2.0"
] | null | null | null | data/studio21_generated/introductory/2705/starter_code.py | vijaykumawat256/Prompt-Summarization | 614f5911e2acd2933440d909de2b4f86653dc214 | [
"Apache-2.0"
] | null | null | null | data/studio21_generated/introductory/2705/starter_code.py | vijaykumawat256/Prompt-Summarization | 614f5911e2acd2933440d909de2b4f86653dc214 | [
"Apache-2.0"
] | null | null | null | def generate_integers(m, n):
| 15 | 28 | 0.733333 | 5 | 30 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 2 | 29 | 15 | 0.807692 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
967dca070080d934f0546ed1d1859ef9c49a444b | 128 | py | Python | vit/formatter/wait_julian.py | kinifwyne/vit | e2cbafce922b1e09c4a66e7dc9592c51fe628e9d | [
"MIT"
] | 179 | 2020-07-28T08:21:51.000Z | 2022-03-30T21:39:37.000Z | vit/formatter/wait_julian.py | kinifwyne/vit | e2cbafce922b1e09c4a66e7dc9592c51fe628e9d | [
"MIT"
] | 255 | 2017-02-01T11:49:12.000Z | 2020-07-26T22:31:25.000Z | vit/formatter/wait_julian.py | kinifwyne/vit | e2cbafce922b1e09c4a66e7dc9592c51fe628e9d | [
"MIT"
] | 26 | 2017-01-17T20:31:13.000Z | 2020-06-17T13:09:01.000Z | from vit.formatter.wait import Wait
class WaitJulian(Wait):
def format(self, wait, task):
return self.julian(wait)
| 21.333333 | 35 | 0.703125 | 18 | 128 | 5 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195313 | 128 | 5 | 36 | 25.6 | 0.873786 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
969f8d98dbcd79b38ebf16091e23ed298b6a6593 | 819 | py | Python | src/fexrd/exceptions.py | FFRI/FEXRD | 28d45511378ab9b46c6d292a5a4d241b2c7bbe33 | [
"Apache-2.0"
] | 4 | 2020-10-16T12:00:18.000Z | 2022-01-13T07:00:03.000Z | src/fexrd/exceptions.py | FFRI/FEXRD | 28d45511378ab9b46c6d292a5a4d241b2c7bbe33 | [
"Apache-2.0"
] | null | null | null | src/fexrd/exceptions.py | FFRI/FEXRD | 28d45511378ab9b46c6d292a5a4d241b2c7bbe33 | [
"Apache-2.0"
] | 1 | 2021-08-20T13:10:07.000Z | 2021-08-20T13:10:07.000Z | class FexrdBaseException(Exception):
pass
class InvalidVersion(FexrdBaseException):
def __init__(self, ver: int) -> None:
self.ver = ver
def __str__(self) -> str:
return f"{self.ver} is not valid version"
class NotImplementedYet(FexrdBaseException):
def __init__(self, ver: int, cls_name: str) -> None:
self.ver = ver
self.cls_name = cls_name
def __str__(self) -> str:
return f"{self.cls_name} is not implemented for FFRI Dataset version v{self.ver}"
class NotSupported(FexrdBaseException):
def __init__(self, ver: int, cls_name: str) -> None:
self.ver = ver
self.cls_name = cls_name
def __str__(self) -> str:
return f"{self.cls_name} is not supported for FFRI Dataset version v{self.ver}"
| 28.241379 | 90 | 0.638584 | 106 | 819 | 4.632075 | 0.264151 | 0.12831 | 0.089613 | 0.177189 | 0.698574 | 0.698574 | 0.627291 | 0.460285 | 0.460285 | 0.460285 | 0 | 0 | 0.260073 | 819 | 28 | 91 | 29.25 | 0.810231 | 0 | 0 | 0.526316 | 0 | 0 | 0.216182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.315789 | false | 0.052632 | 0 | 0.157895 | 0.684211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
73c3c62507b9f59cdeacf7d9506590cba8796cda | 86 | py | Python | epitator/__init__.py | AugustT/EpiTator | 00fed228c45846232c5e85601f00db97ca9ec3d2 | [
"Apache-2.0"
] | 40 | 2017-05-27T03:53:22.000Z | 2021-08-07T16:33:58.000Z | epitator/__init__.py | AugustT/EpiTator | 00fed228c45846232c5e85601f00db97ca9ec3d2 | [
"Apache-2.0"
] | 25 | 2017-07-17T14:33:24.000Z | 2021-04-09T10:27:56.000Z | epitator/__init__.py | AugustT/EpiTator | 00fed228c45846232c5e85601f00db97ca9ec3d2 | [
"Apache-2.0"
] | 9 | 2017-11-15T05:13:53.000Z | 2021-08-07T16:33:59.000Z | from __future__ import absolute_import
from .version import __version__ # noqa: F401
| 28.666667 | 46 | 0.825581 | 11 | 86 | 5.636364 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040541 | 0.139535 | 86 | 2 | 47 | 43 | 0.797297 | 0.116279 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fb4724f486bd8c7e0d84040102c501be7b11fd27 | 2,406 | py | Python | tests/test_TAPCLASSES_SC_valueConstraintType_picklist.py | kcoyle/dctap-python | e688ed244327bc2b92d68b98a66b81d9b03cd60a | [
"MIT"
] | 6 | 2021-06-01T18:53:35.000Z | 2021-12-08T14:38:01.000Z | tests/test_TAPCLASSES_SC_valueConstraintType_picklist.py | kcoyle/dctap-python | e688ed244327bc2b92d68b98a66b81d9b03cd60a | [
"MIT"
] | 9 | 2021-06-02T08:14:38.000Z | 2021-07-13T07:39:56.000Z | tests/test_TAPCLASSES_SC_valueConstraintType_picklist.py | kcoyle/dctap-python | e688ed244327bc2b92d68b98a66b81d9b03cd60a | [
"MIT"
] | 3 | 2021-06-13T20:03:11.000Z | 2021-11-21T16:25:29.000Z | """Tests for private functions called by TAPStatementConstraint.normalize()."""
import os
import pytest
from pathlib import Path
from dctap.config import get_config
from dctap.tapclasses import TAPStatementConstraint
from dctap.csvreader import csvreader
config_dict = get_config()
def test_valueConstraintType_picklist_parse():
"""If valueConstraintType picklist, valueConstraint parsed on whitespace."""
sc = TAPStatementConstraint()
sc.propertyID = "dcterms:creator"
sc.valueConstraintType = "picklist"
sc.valueConstraint = "one two three"
sc._valueConstraintType_picklist_parse(config_dict)
assert sc.valueConstraint == ["one", "two", "three"]
def test_valueConstraintType_picklist_parse_case_insensitive():
"""Value constraint types are processed as case-insensitive."""
sc = TAPStatementConstraint()
sc.propertyID = "dcterms:creator"
sc.valueConstraintType = "PICKLIST"
sc.valueConstraint = "one two three" # extra whitespace
sc._valueConstraintType_picklist_parse(config_dict)
assert sc.valueConstraint == ["one", "two", "three"]
def test_valueConstraintType_picklist_item_separator_comma(tmp_path):
"""@@@"""
config_dict = get_config()
config_dict["picklist_item_separator"] = ","
config_dict["default_shape_identifier"] = "default"
os.chdir(tmp_path)
csvfile_path = Path(tmp_path).joinpath("some.csv")
csvfile_path.write_text(
(
'PropertyID,valueConstraintType,valueConstraint\n'
'ex:foo,picklist,"one, two, three"\n'
)
)
value_constraint = csvreader(open(csvfile_path), config_dict)[0]["shapes"][0]["statement_constraints"][0]["valueConstraint"]
assert value_constraint == ["one", "two", "three"]
def test_valueConstraintType_picklist_item_separator_pipe(tmp_path):
"""@@@"""
config_dict = get_config()
config_dict["picklist_item_separator"] = "|"
config_dict["default_shape_identifier"] = "default"
os.chdir(tmp_path)
csvfile_path = Path(tmp_path).joinpath("some.csv")
csvfile_path.write_text(
(
'PropertyID,valueConstraintType,valueConstraint\n'
'ex:foo,picklist,"one|two|three"\n'
)
)
value_constraint = csvreader(open(csvfile_path), config_dict)[0]["shapes"][0]["statement_constraints"][0]["valueConstraint"]
assert value_constraint == ["one", "two", "three"]
| 38.806452 | 128 | 0.713217 | 261 | 2,406 | 6.318008 | 0.268199 | 0.066707 | 0.053366 | 0.082474 | 0.767738 | 0.741055 | 0.741055 | 0.741055 | 0.741055 | 0.70467 | 0 | 0.002978 | 0.16251 | 2,406 | 61 | 129 | 39.442623 | 0.815385 | 0.094763 | 0 | 0.55102 | 0 | 0 | 0.231985 | 0.132961 | 0 | 0 | 0 | 0 | 0.081633 | 1 | 0.081633 | false | 0 | 0.122449 | 0 | 0.204082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fb493b3055e6d40e2676b4ee457dffc53da12e65 | 36 | py | Python | tests/pwb/__init__.py | nizz009/pywikibot | fcf860fee8477852c33980fd1f612637c52d42de | [
"MIT"
] | 326 | 2017-11-21T07:04:19.000Z | 2022-03-26T01:25:44.000Z | tests/pwb/__init__.py | nizz009/pywikibot | fcf860fee8477852c33980fd1f612637c52d42de | [
"MIT"
] | 17 | 2017-12-20T13:41:32.000Z | 2022-02-16T16:42:41.000Z | tests/pwb/__init__.py | nizz009/pywikibot | fcf860fee8477852c33980fd1f612637c52d42de | [
"MIT"
] | 147 | 2017-11-22T19:13:40.000Z | 2022-03-29T04:47:07.000Z | """Dummy package initialisation."""
| 18 | 35 | 0.722222 | 3 | 36 | 8.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.787879 | 0.805556 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fb55c94c6227e6b277001704be0ae62b1d27e076 | 303 | py | Python | backend/base/admin.py | AimeneNouri/Invetory-Management-WebApp | 83db8ebecc315a00ff1b974af5ba31d44d0377a2 | [
"MIT"
] | null | null | null | backend/base/admin.py | AimeneNouri/Invetory-Management-WebApp | 83db8ebecc315a00ff1b974af5ba31d44d0377a2 | [
"MIT"
] | null | null | null | backend/base/admin.py | AimeneNouri/Invetory-Management-WebApp | 83db8ebecc315a00ff1b974af5ba31d44d0377a2 | [
"MIT"
] | null | null | null | from django.contrib import admin
from . import models
# Register your models here.
admin.site.register(models.Compte)
admin.site.register(models.Article)
admin.site.register(models.Category)
admin.site.register(models.Client)
admin.site.register(models.Fournisseurs)
admin.site.register(models.Commande) | 33.666667 | 40 | 0.828383 | 42 | 303 | 5.97619 | 0.380952 | 0.215139 | 0.406375 | 0.549801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059406 | 303 | 9 | 41 | 33.666667 | 0.880702 | 0.085809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fb736c7760937f84a87b4c1139bfee28a79c4e74 | 112 | py | Python | tests/samples/have-pipeline-project/pipelines/pipeline-1/pipeline.py | riiid/krsh | 2238daa591b19d88722892f9a9f6ada3fe83c742 | [
"Apache-2.0"
] | 133 | 2021-05-28T07:41:49.000Z | 2022-02-21T23:07:31.000Z | tests/samples/have-pipeline-project/pipelines/pipeline-1/pipeline.py | DolceLatte/krsh | 2238daa591b19d88722892f9a9f6ada3fe83c742 | [
"Apache-2.0"
] | null | null | null | tests/samples/have-pipeline-project/pipelines/pipeline-1/pipeline.py | DolceLatte/krsh | 2238daa591b19d88722892f9a9f6ada3fe83c742 | [
"Apache-2.0"
] | 7 | 2021-06-04T00:53:04.000Z | 2022-01-10T15:26:29.000Z | import sys
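# Make the directory two levels up importable.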
sys.path.append("../..")
import kfp
@kfp.dsl.pipeline(name="pipeline-1")
def pipeline():
pass
| 11.2 | 36 | 0.651786 | 16 | 112 | 4.5625 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010417 | 0.142857 | 112 | 9 | 37 | 12.444444 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.133929 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0.166667 | 0.333333 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 5 |
fb8e545dd8ae6be9be8300cef376f3be048bed70 | 98 | py | Python | prediction/apis/user_controller.py | EcoJesss/ecosystem-notebooks | 095b2bc59b9749129a454a7b16c97e20d9484fd4 | [
"MIT"
] | 2 | 2020-08-30T12:50:47.000Z | 2020-11-24T12:59:43.000Z | prediction/apis/user_controller.py | EcoJesss/ecosystem-notebooks | 095b2bc59b9749129a454a7b16c97e20d9484fd4 | [
"MIT"
] | null | null | null | prediction/apis/user_controller.py | EcoJesss/ecosystem-notebooks | 095b2bc59b9749129a454a7b16c97e20d9484fd4 | [
"MIT"
] | 2 | 2020-09-02T16:54:25.000Z | 2021-06-20T20:30:11.000Z | from prediction.endpoints import user_controller as endpoints
from prediction import request_utils | 49 | 61 | 0.897959 | 13 | 98 | 6.615385 | 0.692308 | 0.325581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091837 | 98 | 2 | 62 | 49 | 0.966292 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
fb93cdd640ba2ec3a6dfe04c7d210718adf030e2 | 307 | py | Python | tests/test_multiqc.py | PavlidisLab/luigi-biotasks | fec1c247752278518b2906a2ce968477349fee45 | [
"Apache-2.0"
] | 5 | 2019-11-14T18:41:46.000Z | 2020-03-21T17:56:32.000Z | tests/test_multiqc.py | PavlidisLab/luigi-biotasks | fec1c247752278518b2906a2ce968477349fee45 | [
"Apache-2.0"
] | 8 | 2019-11-13T21:40:32.000Z | 2022-03-04T20:31:37.000Z | tests/test_multiqc.py | PavlidisLab/luigi-biotasks | fec1c247752278518b2906a2ce968477349fee45 | [
"Apache-2.0"
] | null | null | null | from bioluigi.tasks import multiqc
def test_generate_report():
task = multiqc.GenerateReport(['indir'], 'outdir')
args = task.program_args()
assert '--outdir' in args
assert 'outdir' in args
assert '--title' not in args
assert '--comment' not in args
assert args[-1] == 'indir'
| 27.909091 | 54 | 0.661238 | 40 | 307 | 5 | 0.525 | 0.25 | 0.24 | 0.18 | 0.23 | 0.23 | 0 | 0 | 0 | 0 | 0 | 0.004115 | 0.208469 | 307 | 10 | 55 | 30.7 | 0.81893 | 0 | 0 | 0 | 1 | 0 | 0.149837 | 0 | 0 | 0 | 0 | 0 | 0.555556 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8375bbd4983f19f0bd351a4aea30566016fe3d78 | 606 | py | Python | tests/conllz/test_reader.py | lanSeFangZhou/tokenizer_tools | edd931ae86a6e381b57e50f8b59ae19d3151d26b | [
"MIT"
] | null | null | null | tests/conllz/test_reader.py | lanSeFangZhou/tokenizer_tools | edd931ae86a6e381b57e50f8b59ae19d3151d26b | [
"MIT"
] | null | null | null | tests/conllz/test_reader.py | lanSeFangZhou/tokenizer_tools | edd931ae86a6e381b57e50f8b59ae19d3151d26b | [
"MIT"
] | null | null | null | from tokenizer_tools.conllz.reader import read_conllx_from_string, read_conllz_from_string,\
read_conllz,read_conllx
def test_read_conllx_from_string():
    # TODO: does this way of testing have any effect?
    for i in read_conllx_from_string('today is a happy day'):
        print('read_conllx_from_string:', i)
def test_read_conllz_from_string():
for i in read_conllz_from_string(" the weather is nice"):
print('read_conllz_from_string:', i)
def test_read_conllz():
s = read_conllz(open('corpus1.txt'))
print(s)
def test_read_conllx():
s = read_conllx(open('corpus1.txt'))
print(s)
| 28.857143 | 92 | 0.734323 | 97 | 606 | 4.226804 | 0.350515 | 0.219512 | 0.204878 | 0.292683 | 0.434146 | 0.336585 | 0.336585 | 0.209756 | 0.209756 | 0.209756 | 0 | 0.003945 | 0.163366 | 606 | 20 | 93 | 30.3 | 0.804734 | 0.059406 | 0 | 0.285714 | 0 | 0 | 0.193662 | 0.084507 | 0 | 0 | 0 | 0.05 | 0 | 1 | 0.285714 | false | 0 | 0.071429 | 0 | 0.357143 | 0.285714 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8384506fed6f10b1d87f2f270191209bcf7324ca | 213 | py | Python | data/data_loader.py | Linus4world/mrs-gan | 64669251584a7421cce3a5173983a2275dcb438a | [
"BSD-2-Clause"
] | 1 | 2022-01-03T16:08:35.000Z | 2022-01-03T16:08:35.000Z | data/data_loader.py | Linus4world/mrs-gan | 64669251584a7421cce3a5173983a2275dcb438a | [
"BSD-2-Clause"
] | null | null | null | data/data_loader.py | Linus4world/mrs-gan | 64669251584a7421cce3a5173983a2275dcb438a | [
"BSD-2-Clause"
] | null | null | null |
def CreateDataLoader(opt, phase):
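    # Imported locally, presumably to defer the dependency until a loader is created.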
from data.custom_dataset_data_loader import CustomDatasetDataLoader
data_loader = CustomDatasetDataLoader()
data_loader.initialize(opt, phase)
return data_loader
| 30.428571 | 71 | 0.798122 | 23 | 213 | 7.130435 | 0.565217 | 0.243902 | 0.402439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14554 | 213 | 6 | 72 | 35.5 | 0.901099 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
83894ee4340621300684795986c0361348776808 | 51 | py | Python | autotesting/benchmarks_ground_truth/three_linear.py | ualberta-smr/SOAR | 325a6ed2518088b9800299c81271db51b645816a | [
"BSD-3-Clause-Clear"
] | 8 | 2021-01-13T14:59:18.000Z | 2021-06-29T17:01:37.000Z | autotesting/benchmarks_ground_truth/three_linear.py | squaresLab/SOAR | 72a35a4014d3e74548aab7d2a5cf1bdfaab149c1 | [
"BSD-3-Clause-Clear"
] | null | null | null | autotesting/benchmarks_ground_truth/three_linear.py | squaresLab/SOAR | 72a35a4014d3e74548aab7d2a5cf1bdfaab149c1 | [
"BSD-3-Clause-Clear"
] | 2 | 2021-01-16T00:09:54.000Z | 2021-08-05T01:14:40.000Z | {'tf.keras.layers.Dense': ('torch.nn.Linear', 8)}
| 17 | 49 | 0.627451 | 8 | 51 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.078431 | 51 | 2 | 50 | 25.5 | 0.659574 | 0 | 0 | 0 | 0 | 0 | 0.72 | 0.42 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
83a02634dc503e43987bc8be4b505681c9be0f48 | 152 | py | Python | ldap2jira/__init__.py | RedHat-Eng-PGM/ldap2jira | 72aa807414c8c819f4f704e8cb9f0b4aa3c47197 | [
"MIT"
] | null | null | null | ldap2jira/__init__.py | RedHat-Eng-PGM/ldap2jira | 72aa807414c8c819f4f704e8cb9f0b4aa3c47197 | [
"MIT"
] | 1 | 2021-03-03T09:18:20.000Z | 2021-03-03T09:25:23.000Z | ldap2jira/__init__.py | RedHat-Eng-PGM/python-ldap2jira | 72aa807414c8c819f4f704e8cb9f0b4aa3c47197 | [
"MIT"
] | null | null | null | from ldap2jira.ldap_lookup import ( # noqa: F401
LDAPLookup,
LDAPQueryNotFoundError
)
from ldap2jira.map import LDAP2JiraUserMap # noqa: F401
| 25.333333 | 56 | 0.763158 | 16 | 152 | 7.1875 | 0.6875 | 0.226087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072 | 0.177632 | 152 | 5 | 57 | 30.4 | 0.848 | 0.138158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
83eed92bbff2a791c2ed9889d8dd2e18b393f13c | 68 | py | Python | core/src/autogluon/core/Convertor_base/__init__.py | engsarah2050/autogluon | a77d462924dac8f8763635518eadcc523a23e18f | [
"Apache-2.0"
] | null | null | null | core/src/autogluon/core/Convertor_base/__init__.py | engsarah2050/autogluon | a77d462924dac8f8763635518eadcc523a23e18f | [
"Apache-2.0"
] | null | null | null | core/src/autogluon/core/Convertor_base/__init__.py | engsarah2050/autogluon | a77d462924dac8f8763635518eadcc523a23e18f | [
"Apache-2.0"
] | null | null | null | from autogluon.core.Convertor_base.Covert import BaseImage_converter | 68 | 68 | 0.911765 | 9 | 68 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044118 | 68 | 1 | 68 | 68 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f7d1488848daf04208c39fd4715218c0a7504ea1 | 4,963 | py | Python | advent/day05/data.py | benjackwhite/adventofcode2017 | ce29e625cbe11fd5f36cff6b36a879c6a3955581 | [
"MIT"
] | null | null | null | advent/day05/data.py | benjackwhite/adventofcode2017 | ce29e625cbe11fd5f36cff6b36a879c6a3955581 | [
"MIT"
] | null | null | null | advent/day05/data.py | benjackwhite/adventofcode2017 | ce29e625cbe11fd5f36cff6b36a879c6a3955581 | [
"MIT"
] | null | null | null | data = """0
1
0
0
1
-3
0
0
2
-2
-6
-3
2
-5
-6
-3
-3
0
-8
-12
1
-9
-12
-9
0
-7
-17
-6
-18
-7
-6
-21
-28
-14
-23
-14
-17
-5
-35
-17
-26
-14
1
-27
-19
-40
-32
-44
2
-14
-15
-12
-35
0
-49
-12
-7
-46
-47
-32
-33
-47
-7
-62
-20
-35
-4
-35
-8
-3
-61
-38
-63
-27
-33
-57
-48
-66
-68
-11
-61
-50
-34
-31
-36
-79
-49
-71
1
-34
-65
-61
-91
-12
-21
-82
-85
-51
-89
0
-83
-53
-44
-7
1
-19
-39
-27
-94
-36
-31
-35
-97
-45
-90
-15
-106
-30
-79
-18
-25
-105
-30
-63
-109
-32
-91
-96
-87
-121
-116
-103
-71
-1
-113
-10
-47
-109
-107
-38
-66
-26
-8
-38
-31
-129
-42
-91
-89
-107
-125
-75
-118
-81
-45
-111
-27
-63
-106
-110
-64
-63
-80
-44
-33
-130
-55
-90
-144
-15
-132
-122
-155
-122
-94
-159
-5
-89
-6
-97
-129
-159
-15
-44
-156
-124
-113
-154
-95
-96
-29
-121
-30
-73
-118
-57
-76
-141
-138
-108
-185
-56
-136
-161
-138
-192
2
-126
-12
-39
-60
-125
-149
-193
-146
-116
-101
-16
-207
-122
-92
-204
-42
-112
-28
-93
-96
-57
-136
-19
-36
-107
-170
-19
-20
-96
-229
-59
-172
-58
-89
-31
-57
-223
-37
-189
-43
-135
-90
-150
-22
-152
-243
-37
-231
-112
-57
-168
-30
-77
-162
-181
-176
-202
-138
-206
-183
-190
-257
-181
-47
-23
-248
-114
-98
-77
-143
-168
-166
-30
-155
-237
-51
-113
-243
-41
-142
-231
-139
-20
-190
-262
-142
-238
-200
-270
-113
-35
-296
-146
-205
-129
-198
-68
-139
-56
-196
-133
-16
-229
-258
-91
-63
-249
-274
-156
-273
-182
-166
-115
-154
-296
-115
-89
-120
-201
-44
-287
-8
1
-260
-297
-282
-114
-323
-326
-166
-241
-109
-21
-236
-280
-19
-80
-77
-271
-292
-340
-300
-206
-308
-99
-156
-277
-245
-132
-56
-172
-53
-271
-32
-5
-235
-329
-1
-150
-247
-268
-133
-341
-221
-2
-43
-229
-190
-337
-40
-71
-72
-149
-25
-253
-44
-113
-164
-370
-284
-235
-9
-234
-291
1
-152
-302
-393
-47
-289
-75
-140
-349
-140
-353
-298
-27
-292
-380
-55
-62
-208
-221
-41
-316
-411
-367
-220
-248
-59
-177
-372
-55
-241
-240
-140
-315
-297
-42
-118
-141
-70
-183
-153
-30
-63
-306
-110
-8
-356
-80
-314
-323
-41
-176
-165
-41
-230
-132
-222
-2
-404
-38
-130
2
-16
-141
-136
-336
-245
-6
-348
-172
-267
-208
-291
-285
-67
-219
-216
-136
-325
-27
-382
-242
-50
-284
-149
-454
-336
-346
-293
-402
-76
-324
-219
-336
-24
-446
-123
-185
-196
-295
-173
-400
-137
-414
-14
-104
-62
-252
-17
-398
-490
-440
-89
-347
-101
-142
-228
-301
-396
-320
-52
-508
-122
-436
-311
-344
-240
-434
-220
-197
-31
-295
-44
-452
-269
-430
-373
-409
-438
-365
-13
-241
-418
-20
-24
-141
-1
-148
-307
-63
-423
-254
-8
-438
-326
-19
-135
-109
-394
2
-398
-273
-158
-453
-346
-86
-431
-536
-549
-379
-483
-85
-476
-483
-104
-87
-462
-249
-540
-164
-360
-100
-238
-45
-390
-59
-156
-248
-257
-150
-164
-160
-545
-520
-364
-384
-237
-456
-28
-366
-147
0
-303
-583
-420
-370
-299
-154
-380
-188
-491
-258
-598
-429
-349
-333
-569
-4
-556
-421
-182
-441
-407
-542
-364
-370
-384
1
-529
-45
-319
-395
-279
-160
-575
-193
-25
-565
-548
-445
-266
-304
-361
-348
-303
-159
-39
-75
-437
-608
-622
-556
-108
-343
-283
-68
-632
-393
-68
-140
-126
-531
-87
-519
-334
-56
-70
-275
-247
-370
-439
-118
-497
-630
-594
-612
-541
-161
-646
-397
-100
-284
-313
0
-59
-200
-601
-663
-529
-676
-610
-7
-228
-50
-494
-382
-250
-306
-274
-163
-110
-375
-124
-237
-98
-645
-692
-495
-593
-647
-178
-531
-336
-697
-646
-671
-633
-542
-461
-200
-658
-525
-389
-643
-258
-329
-656
-400
-692
-557
-506
-594
-67
-623
-113
-459
-211
-713
-115
-602
-131
-181
-30
-227
-53
-719
-631
-641
-434
-552
-716
-368
-19
-439
-443
-552
-85
-79
-449
-254
-620
-474
-121
-210
-285
-608
-456
-513
-496
-13
-418
-399
-437
-258
-15
-623
-178
-336
-379
-721
-299
-729
-742
-64
-13
-438
-603
-666
-278
-767
-200
-686
-497
-256
-541
-491
-360
-615
-326
-682
-759
-524
-580
-323
-578
-793
-478
-107
-440
-657
-790
-605
-21
-163
-392
-560
-336
-430
-613
-182
-15
-782
-607
-281
-269
-25
-699
-89
-593
-280
-269
-438
-103
-359
-387
-157
-747
-619
-176
-772
-500
-735
-691
-797
-612
-573
-36
-617
-630
-357
-718
-210
-48
-185
-20
-556
-206
-722
-559
-416
-578
-745
-564
-273
-62
-300
-218
-711
-744
-805
-277
-522
-346
-280
-762
-438
-381
-379
-198
-737
-555
-466
-218
-511
-334
-353
-259
-225
-675
-350
-585
-647
-52
-395
-324
-106
-826
-279
-81
-396
-611
-312
-529
-291
-129
-594
-437
-188
-649
-820
-237
-673
-6
-387
-195
-503
-350
-83
-88
-626
-30
-313
-13
-633
-403
-319
-832
-185
-146
-839
-9
-557
-799
-841
-700
-465
-669
-769
-235
-849
-863
-819
-76
-912
-931
-909
-762
-607
-522
-64
-769
-377
-133
-414
-772
-206
-746
-730
-393
-901
-72
-33
-811
-372
-298
-835
-637
-302
-481
-958
-878
-867
-25
-260
-448
-21
-930
-903
-581
-547
-664
-843
-140
-337
-383
-513
-368
-221
-474
-169
-673
-728
-266
-862
-753
-815
-647
-106
-15
-728
-912
-147
-828
-6
-694
-434
-737
-335
-183
-732
-841
-364
-155
-116
-966
-822
-65
-22
-853
-208
-326
-826
-472
-491
-436
-771
-1009
-98
-401
-915
-275
-574
-313
-884
-648
-935
-94
-326
-553
-744
-723
-782
-719
-175
-868
-190
-153
-48
-218
-414
-721
-715
-995
-991
-575
-264
-70
-366
-381
-130
-409
-817
-258
-1028
-552
-878
-449
-138
-900
-45
-119
-677
-844
-869
-985
-1019
-60
-649
-915
-93
-1053
-121
-631
-156
-332
-193""" | 4.612454 | 11 | 0.570018 | 1,077 | 4,963 | 2.626741 | 0.498607 | 0.001414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.726969 | 0.217006 | 4,963 | 1,076 | 12 | 4.612454 | 0.001029 | 0 | 0 | 0.732342 | 0 | 0 | 0.99718 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f7d8297dd41687562828852ae83ae1b93ddc8b92 | 166 | py | Python | lab_assignment/lab_bla/linux_mac/sample/reduce.py | caru1613/introduction_to_python_TEAMLAB_MOOC | e0ac95f7a6b889e7d18b7bdaaab49820e73d5477 | [
"MIT"
] | null | null | null | lab_assignment/lab_bla/linux_mac/sample/reduce.py | caru1613/introduction_to_python_TEAMLAB_MOOC | e0ac95f7a6b889e7d18b7bdaaab49820e73d5477 | [
"MIT"
] | null | null | null | lab_assignment/lab_bla/linux_mac/sample/reduce.py | caru1613/introduction_to_python_TEAMLAB_MOOC | e0ac95f7a6b889e7d18b7bdaaab49820e73d5477 | [
"MIT"
] | null | null | null | from functools import reduce
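# Fold the list 1..7 with addition: prints their sum, 28.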
print(reduce(lambda x,y: x+y, [1,2,3,4,5,6,7]))
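# factorial(n): fold multiplication over 1..n; factorial(5) prints 120.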
def factorial(n):
return reduce(lambda x,y: x*y, range(1,n+1))
print(factorial(5)) | 20.75 | 48 | 0.668675 | 35 | 166 | 3.171429 | 0.571429 | 0.072072 | 0.234234 | 0.252252 | 0.288288 | 0.288288 | 0 | 0 | 0 | 0 | 0 | 0.069444 | 0.13253 | 166 | 8 | 49 | 20.75 | 0.701389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.4 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
f7ebcba75f272236e57ad4b7f10eab87305861dc | 296 | py | Python | tests/test_setup.py | jviloria96744/acg-covid-challenge-backend | 9df12790fa6688b8c53f9706bb583d05c08d2423 | [
"MIT"
] | null | null | null | tests/test_setup.py | jviloria96744/acg-covid-challenge-backend | 9df12790fa6688b8c53f9706bb583d05c08d2423 | [
"MIT"
] | null | null | null | tests/test_setup.py | jviloria96744/acg-covid-challenge-backend | 9df12790fa6688b8c53f9706bb583d05c08d2423 | [
"MIT"
] | null | null | null | import sys
import os
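# Put each lambda package directory on sys.path so tests can import their modules directly.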
root_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), 'lambdas')
sys.path.append(os.path.join(root_dir, 'python_etl'))
sys.path.append(os.path.join(root_dir, 'covid_api'))
sys.path.append(os.path.join(root_dir, 'sns_lambda'))
| 32.888889 | 99 | 0.712838 | 50 | 296 | 4 | 0.36 | 0.21 | 0.2 | 0.225 | 0.61 | 0.45 | 0.45 | 0.45 | 0 | 0 | 0 | 0 | 0.067568 | 296 | 8 | 100 | 37 | 0.724638 | 0 | 0 | 0 | 0 | 0 | 0.121622 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
79324acc861621fc510e95eeb671ab46da14ef40 | 1,789 | py | Python | api/migrations/0021_auto_20210610_1009.py | mayone-du/harusmile-backend | 4f6b90ab5c2278401ee50aa54920709effd7f323 | [
"MIT"
] | null | null | null | api/migrations/0021_auto_20210610_1009.py | mayone-du/harusmile-backend | 4f6b90ab5c2278401ee50aa54920709effd7f323 | [
"MIT"
] | 1 | 2021-06-23T09:15:50.000Z | 2021-06-23T09:15:50.000Z | api/migrations/0021_auto_20210610_1009.py | mayone-du/harusmile-backend | 4f6b90ab5c2278401ee50aa54920709effd7f323 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.3 on 2021-06-10 01:09
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0020_auto_20210610_1008'),
]
operations = [
migrations.AlterField(
model_name='profile',
name='admission_format',
field=models.CharField(blank=True, default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='profile',
name='age',
field=models.PositiveSmallIntegerField(blank=True, default=0, null=True),
),
migrations.AlterField(
model_name='profile',
name='club_activities',
field=models.CharField(blank=True, default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='profile',
name='department',
field=models.CharField(blank=True, default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='profile',
name='favorite_subject',
field=models.CharField(blank=True, default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='profile',
name='problem',
field=models.CharField(blank=True, default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='profile',
name='undergraduate',
field=models.CharField(blank=True, default='', max_length=100, null=True),
),
migrations.AlterField(
model_name='profile',
name='want_hear',
field=models.CharField(blank=True, default='', max_length=100, null=True),
),
]
| 33.12963 | 86 | 0.573505 | 175 | 1,789 | 5.737143 | 0.297143 | 0.159363 | 0.199203 | 0.231076 | 0.717131 | 0.717131 | 0.677291 | 0.677291 | 0.629482 | 0.629482 | 0 | 0.042197 | 0.297932 | 1,789 | 53 | 87 | 33.754717 | 0.757166 | 0.025154 | 0 | 0.659574 | 1 | 0 | 0.098163 | 0.013203 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021277 | 0 | 0.085106 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f7101a8b865c1f87f28c1270c97bd9246634db2e | 69 | py | Python | emailtrail/__init__.py | akshaykmr/emailtrail | 8298e4b68c70f9b64198f54e4f3baf77d5fe54fa | [
"MIT"
] | 11 | 2020-04-05T07:24:46.000Z | 2021-01-10T06:58:00.000Z | emailtrail/__init__.py | akshaykmr/emailtrail | 8298e4b68c70f9b64198f54e4f3baf77d5fe54fa | [
"MIT"
] | 1 | 2021-09-09T16:46:18.000Z | 2021-09-09T16:46:18.000Z | emailtrail/__init__.py | akshaykmr/emailtrail | 8298e4b68c70f9b64198f54e4f3baf77d5fe54fa | [
"MIT"
] | 1 | 2020-10-26T17:50:10.000Z | 2020-10-26T17:50:10.000Z | from .module import * # noqa
from .models import Trail, Hop # noqa
| 17.25 | 37 | 0.695652 | 10 | 69 | 4.8 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 69 | 3 | 38 | 23 | 0.888889 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f71a4417980584c697fd59995b017ae74c4d8707 | 210 | py | Python | visualisation/core/__init__.py | dashings/CAMVIS | fb7e4e5d885ae227140f7ab40b5f47e730ec249b | [
"MIT"
] | 213 | 2018-12-20T12:09:07.000Z | 2022-03-21T10:09:58.000Z | visualisation/core/__init__.py | dashings/CAMVIS | fb7e4e5d885ae227140f7ab40b5f47e730ec249b | [
"MIT"
] | 3 | 2020-07-16T05:11:25.000Z | 2022-03-16T13:59:07.000Z | visualisation/core/__init__.py | dashings/CAMVIS | fb7e4e5d885ae227140f7ab40b5f47e730ec249b | [
"MIT"
] | 41 | 2019-03-06T12:01:24.000Z | 2022-03-09T07:55:56.000Z | from .SaliencyMap import SaliencyMap
from .DeepDream import DeepDream
from .GradCam import GradCam
from .Weights import Weights
from .Base import Base
from .ClassActivationMapping import ClassActivationMapping
| 30 | 58 | 0.857143 | 24 | 210 | 7.5 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 210 | 6 | 59 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f72c88bad07b64edf6012e96a4a6af0ebf4b41c8 | 12,698 | py | Python | mai_version/trees/TILDEQueryScorer.py | joschout/tilde | 1403b50842b83f2edd6b16b1fbe24b9bec2d0048 | [
"Apache-2.0"
] | 16 | 2019-03-06T06:11:33.000Z | 2022-02-07T21:30:25.000Z | mai_version/trees/TILDEQueryScorer.py | joschout/tilde | 1403b50842b83f2edd6b16b1fbe24b9bec2d0048 | [
"Apache-2.0"
] | 4 | 2019-10-08T14:48:23.000Z | 2020-03-26T00:31:57.000Z | mai_version/trees/TILDEQueryScorer.py | krishnangovindraj/tilde | 5243a02d92f375d56ffc49ab8c3d1a87e31e99b9 | [
"Apache-2.0"
] | 4 | 2019-08-14T05:40:47.000Z | 2020-08-05T13:21:16.000Z | import math
from typing import Iterable, Set, List, Optional
import problog
import time
from problog.logic import And, Term
from mai_version.classification.example_partitioning import ExamplePartitioner
from mai_version.representation.TILDE_query import TILDEQuery
from mai_version.representation.example import ExampleWrapper
from mai_version.representation.example import Label
from mai_version.trees.scoring import entropy, information_gain2
class QueryScoreInfo:
"""Wrapper around the information about best scoring query"""
def __init__(self, best_query: TILDEQuery, score_of_best_query: float,
examples_satisfying_best_query: Set[ExampleWrapper],
examples_not_satisfying_best_query: Set[ExampleWrapper]):
self.best_query = best_query # type: TILDEQuery
self.score_of_best_query = score_of_best_query # type: float
self.examples_satisfying_best_query = examples_satisfying_best_query # type: Set[ExampleWrapper]
self.examples_not_satisfying_best_query = examples_not_satisfying_best_query # type: Set[ExampleWrapper]
class TILDEQueryScorer:
@staticmethod
def get_best_refined_query(refined_queries: Iterable[TILDEQuery], examples: Set[ExampleWrapper],
example_partitioner: ExamplePartitioner, possible_targets: List[Label],
probabilistic: Optional[bool] = False) -> QueryScoreInfo:
# Tuple[Optional[TILDEQuery], float, Optional[Set[ExampleWrapper]], Optional[Set[ExampleWrapper]]]:
best_query = None # type: Optional[TILDEQuery]
score_best_query = - math.inf # type: float
examples_satisfying_best_query = None # type: Optional[Set[ExampleWrapper]]
examples_not_satisfying_best_query = None # type: Optional[Set[ExampleWrapper]]
entropy_complete_set = entropy(examples, possible_targets)
nb_of_examples_complete_set = len(examples)
for q in refined_queries: # type: TILDEQuery
print(q)
# compute the score of the queries
conj_of_tilde_query = q.to_conjunction() # type: And
examples_satisfying_q, examples_not_satisfying_q = example_partitioner.get_examples_satisfying_query(
examples, conj_of_tilde_query) # type: Set[ExampleWrapper]
# examples_not_satisfying_q = examples - examples_satisfying_q # type: Set[ExampleWrapper]
            # TODO: no longer probabilistic!
score = information_gain2(examples_satisfying_q, examples_not_satisfying_q, possible_targets, nb_of_examples_complete_set, entropy_complete_set)
if score > score_best_query:
best_query = q
score_best_query = score
examples_satisfying_best_query = examples_satisfying_q
examples_not_satisfying_best_query = examples_not_satisfying_q
return QueryScoreInfo(best_query, score_best_query, examples_satisfying_best_query,
examples_not_satisfying_best_query)
class TILDEQueryScorer2:
@staticmethod
def get_best_refined_query(refined_queries: Iterable[TILDEQuery], examples: Set[ExampleWrapper],
example_partitioner: ExamplePartitioner, possible_targets: List[Label],
probabilistic: Optional[bool] = False) -> QueryScoreInfo:
# Tuple[Optional[TILDEQuery], float, Optional[Set[ExampleWrapper]], Optional[Set[ExampleWrapper]]]:
best_query = None # type: Optional[TILDEQuery]
score_best_query = - math.inf # type: float
# examples_satisfying_best_query = None # type: Optional[Set[ExampleWrapper]]
# examples_not_satisfying_best_query = None # type: Optional[Set[ExampleWrapper]]
entropy_complete_set = entropy(examples, possible_targets)
nb_of_examples_complete_set = len(examples)
# ided_queries = list(zip(range(0,len(refined_queries)), refined_queries))
entropy_dict = {label: 0 for label in possible_targets}
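        # One (satisfied, not-satisfied) pair of label-count dicts per candidate query.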
query_entropy_dicts = [(entropy_dict.copy(), entropy_dict.copy()) for q in refined_queries]
for clause_db_ex in examples:
db_to_query = clause_db_ex.extend() # type: ClauseDB
if clause_db_ex.classification_term is not None:
db_to_query += clause_db_ex.classification_term
for id, q in zip(range(0,len(refined_queries)), refined_queries):
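                # Register each candidate as query q<id> so a single ProbLog evaluation scores them all.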
to_query = Term('q' + str(id))
db_to_query += Term('query')(to_query)
db_to_query += (to_query << q.to_conjunction())
start_time = time.time()
evaluatable = problog.get_evaluatable()
mid_time1 = time.time()
something = evaluatable.create_from(db_to_query, engine=example_partitioner.engine)
mid_time2 = time.time()
results = something.evaluate()
end_time = time.time()
example_partitioner.nb_partitions_calculated += 1
get_evaluatable_duration = mid_time1 - start_time
example_partitioner.sum_get_evaluatable += get_evaluatable_duration
structure_creation_duration = mid_time2 - mid_time1
example_partitioner.sum_structure_creation_duration += structure_creation_duration
if structure_creation_duration > example_partitioner.max_structure_creation_duration:
example_partitioner.max_structure_creation_duration = structure_creation_duration
if structure_creation_duration < example_partitioner.min_structure_creation_duration:
example_partitioner.min_structure_creation_duration = structure_creation_duration
if structure_creation_duration < 0.000001:
example_partitioner.nb_structure_creation_zero += 1
evalutation_duration = end_time - mid_time2
example_partitioner.sum_evaluation_duration += evalutation_duration
if evalutation_duration > example_partitioner.max_evaluation_duration:
example_partitioner.max_evaluation_duration = evalutation_duration
if evalutation_duration < example_partitioner.min_evaluation_duration:
example_partitioner.min_evaluation_duration = evalutation_duration
if evalutation_duration < 0.000001:
example_partitioner.nb_evaluation_zero += 1
# results = problog.get_evaluatable().create_from(db_to_query, engine=example_partitioner.engine).evaluate()
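            # Tally this example into each query's satisfied (prob > 0.5) or not-satisfied label counts.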
for to_query, prob in results.items():
id = int(to_query.functor[1:])
if prob > 0.5:
query_entropy_dicts[id][0][clause_db_ex.get_label()] = query_entropy_dicts[id][0][clause_db_ex.get_label()] + 1
else:
query_entropy_dicts[id][1][clause_db_ex.get_label()] = query_entropy_dicts[id][1][
clause_db_ex.get_label()] + 1
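        # Score each candidate query by information gain over its accumulated counts.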
for query, (left_dic, right_dic) in zip(refined_queries, query_entropy_dicts):
# -- ig --
ig = 0
if nb_of_examples_complete_set != 0:
ig = entropy_complete_set
nb_examples_left = sum(left_dic.values())
if nb_examples_left > 0:
entropy_left = 0
for label in left_dic.keys():
label_value = left_dic[label]
if label_value != 0:
entropy_left -= label_value / nb_examples_left \
* math.log2(label_value / nb_examples_left)
ig -= nb_examples_left / nb_of_examples_complete_set * entropy_left
# ------
nb_examples_right = sum(right_dic.values())
if nb_examples_right > 0:
entropy_right = 0
for label in right_dic.keys():
label_value = right_dic[label]
if label_value != 0:
entropy_right -= label_value / nb_examples_right \
* math.log2(label_value / nb_examples_right)
ig -= nb_examples_right / nb_of_examples_complete_set * entropy_right
if ig > score_best_query:
best_query = query
score_best_query = ig
# --- we now know the best query, so create the partition again:
examples_satisfying_best_query = set() # type: Optional[Set[ExampleWrapper]]
examples_not_satisfying_best_query = set() # type: Optional[Set[ExampleWrapper]]
to_query = Term('to_query')
to_add1 = Term('query')(to_query)
to_add2 = (to_query << best_query.to_conjunction())
for clause_db_ex in examples:
db_to_query = clause_db_ex.extend() # type: ClauseDB
if clause_db_ex.classification_term is not None:
db_to_query += clause_db_ex.classification_term
# db_to_query = example_db.extend()
db_to_query += to_add1
db_to_query += to_add2
start_time = time.time()
evaluatable = problog.get_evaluatable()
mid_time1 = time.time()
something = evaluatable.create_from(db_to_query, engine=example_partitioner.engine)
mid_time2 = time.time()
query_result = something.evaluate()
end_time = time.time()
example_partitioner.nb_partitions_calculated += 1
get_evaluatable_duration = mid_time1 - start_time
example_partitioner.sum_get_evaluatable += get_evaluatable_duration
structure_creation_duration = mid_time2 - mid_time1
example_partitioner.sum_structure_creation_duration += structure_creation_duration
if structure_creation_duration > example_partitioner.max_structure_creation_duration:
example_partitioner.max_structure_creation_duration = structure_creation_duration
if structure_creation_duration < example_partitioner.min_structure_creation_duration:
example_partitioner.min_structure_creation_duration = structure_creation_duration
if structure_creation_duration < 0.000001:
example_partitioner.nb_structure_creation_zero += 1
evalutation_duration = end_time - mid_time2
example_partitioner.sum_evaluation_duration += evalutation_duration
if evalutation_duration > example_partitioner.max_evaluation_duration:
example_partitioner.max_evaluation_duration = evalutation_duration
if evalutation_duration < example_partitioner.min_evaluation_duration:
example_partitioner.min_evaluation_duration = evalutation_duration
if evalutation_duration < 0.000001:
example_partitioner.nb_evaluation_zero += 1
# query_result = problog.get_evaluatable().create_from(db_to_query,
# engine=example_partitioner.engine).evaluate()
if query_result[to_query] > 0.5:
examples_satisfying_best_query.add(clause_db_ex)
else:
examples_not_satisfying_best_query.add(clause_db_ex)
# for qid, q in enumerate(refined_queries): # type: TILDEQuery
# # compute the score of the queries
# conj_of_tilde_query = q.to_conjunction() # type: And
#
# examples_satisfying_q, examples_not_satisfying_q = example_partitioner.get_examples_satisfying_query(
# examples, conj_of_tilde_query) # type: Set[ExampleWrapper]
# # examples_not_satisfying_q = examples - examples_satisfying_q # type: Set[ExampleWrapper]
#
# #TODO: no longer probabilistic!
# score = information_gain2(examples_satisfying_q, examples_not_satisfying_q, possible_targets, nb_of_examples_complete_set, entropy_complete_set)
#
# if score > score_best_query:
# best_query = q
# score_best_query = score
# examples_satisfying_best_query = examples_satisfying_q
# examples_not_satisfying_best_query = examples_not_satisfying_q
return QueryScoreInfo(best_query, score_best_query, examples_satisfying_best_query,
examples_not_satisfying_best_query) | 52.040984 | 158 | 0.654198 | 1,362 | 12,698 | 5.67768 | 0.104993 | 0.054701 | 0.07759 | 0.038407 | 0.817018 | 0.78456 | 0.746282 | 0.730764 | 0.699211 | 0.691582 | 0 | 0.008551 | 0.281619 | 12,698 | 244 | 159 | 52.040984 | 0.83918 | 0.175461 | 0 | 0.506173 | 0 | 0 | 0.001826 | 0 | 0 | 0 | 0 | 0.004098 | 0 | 1 | 0.018519 | false | 0 | 0.061728 | 0 | 0.111111 | 0.006173 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f7466ef116c6e0267b787907c8763fbd487f5e8d | 8,859 | py | Python | test/integration_test/test_multi_transfer.py | heshu-by/likelib | 76a06b0df7dea5520d5b43e6cf9ef4b9e81dad83 | [
"Apache-2.0"
] | 1 | 2020-10-23T19:09:27.000Z | 2020-10-23T19:09:27.000Z | test/integration_test/test_multi_transfer.py | heshu-by/likelib | 76a06b0df7dea5520d5b43e6cf9ef4b9e81dad83 | [
"Apache-2.0"
] | null | null | null | test/integration_test/test_multi_transfer.py | heshu-by/likelib | 76a06b0df7dea5520d5b43e6cf9ef4b9e81dad83 | [
"Apache-2.0"
] | 1 | 2020-12-08T11:16:30.000Z | 2020-12-08T11:16:30.000Z | from tester import test_case, Node, NodePoll
import concurrent.futures
@test_case("multi_transfer")
def main(env, logger):
settings_node_1 = Node.Settings(Node.Id(20300, 50150))
settings_node_2 = Node.Settings(Node.Id(20301, 50151), nodes=[settings_node_1.id, ])
with Node(env, settings_node_1, logger) as node_1:
node_1.run_check_test()
with Node(env, settings_node_2, logger) as node_2:
node_2.run_check_test()
target_address = node_1.create_new_address(keys_path="keys1")
node_1.run_check_balance(address=target_address, balance=0)
node_2.run_check_balance(address=target_address, balance=0)
distributor_address = node_1.load_address(keys_path=node_1.DISTRIBUTOR_ADDRESS_PATH)
amount = 333
transaction_wait = 5
transaction_timeout = 3
node_2.run_check_transfer(to_address=target_address, amount=amount,
from_address=distributor_address, fee=0, timeout=transaction_timeout, wait=transaction_wait)
node_2.run_check_balance(address=target_address, balance=amount)
node_1.run_check_balance(address=target_address, balance=amount)
return 0
@test_case("multi_transfer_connected_with_everything")
def main(env, logger):
count_nodes = 10
start_sync_port = 20302
start_rpc_port = 50152
waiting_time = 5
transaction_timeout = 5
transaction_wait = 5
with NodePoll() as pool:
pool.append(Node(env, Node.Settings(Node.Id(start_sync_port, start_rpc_port)), logger))
pool.last.start_node(waiting_time)
pool.last.run_check_test()
# initializing connections with nodes
for i in range(1, count_nodes):
curent_sync_port = start_sync_port + i
curent_rpc_port = start_rpc_port + i
pool.append(
Node(env, Node.Settings(Node.Id(curent_sync_port, curent_rpc_port), nodes=pool.ids),
logger))
pool.last.start_node(waiting_time)
for node in pool:
node.run_check_test()
addresses = [pool.last.create_new_address(keys_path=f"keys{i}") for i in range(1, len(pool))]
init_amount = 1000
distributor_address = pool.last.load_address(keys_path=Node.DISTRIBUTOR_ADDRESS_PATH)
# init addresses with amount
for to_address in addresses:
pool.last.run_check_balance(address=to_address, balance=0)
pool.last.run_check_transfer(to_address=to_address, amount=init_amount,
from_address=distributor_address, fee=0, timeout=transaction_timeout, wait=transaction_wait)
for node in pool:
node.run_check_balance(address=to_address, balance=init_amount)
for i in range(1, len(addresses) - 1):
from_address = addresses[i]
to_address = addresses[i + 1]
amount = i * 100
pool.last.run_check_transfer(to_address=to_address, amount=amount, from_address=from_address,
fee=0, timeout=transaction_timeout, wait=transaction_wait)
for node in pool:
node.run_check_balance(address=to_address, balance=amount + init_amount)
first_address = addresses[0]
first_address_balance = init_amount
for node in pool:
node.run_check_balance(address=first_address, balance=first_address_balance)
return 0
@test_case("multi_transfer_connected_one_by_one")
def main(env, logger):
count_nodes = 10
start_sync_port = 20310
start_rpc_port = 50160
waiting_time = 5
transaction_timeout = 7
transaction_wait = 4
with NodePoll() as pool:
pool.append(Node(env, Node.Settings(Node.Id(start_sync_port, start_rpc_port)), logger))
pool.last.start_node(waiting_time)
pool.last.run_check_test()
# initializing connections with nodes
for i in range(1, count_nodes):
curent_sync_port = start_sync_port + i
curent_rpc_port = start_rpc_port + i
pool.append(
Node(env, Node.Settings(Node.Id(curent_sync_port, curent_rpc_port), nodes=[pool.last.settings.id, ]),
logger))
pool.last.start_node(waiting_time)
for node in pool:
node.run_check_test()
addresses = [pool.last.create_new_address(keys_path=f"keys{i}") for i in range(1, len(pool))]
init_amount = 1000
distributor_address = pool.last.load_address(keys_path=Node.DISTRIBUTOR_ADDRESS_PATH)
# init addresses with amount
for to_address in addresses:
pool.last.run_check_balance(address=to_address, balance=0)
pool.last.run_check_transfer(to_address=to_address, amount=init_amount,
from_address=distributor_address, fee=0, timeout=transaction_timeout,
wait=transaction_wait)
for node in pool:
node.run_check_balance(address=to_address, balance=init_amount)
for i in range(1, len(addresses) - 1):
from_address = addresses[i]
to_address = addresses[i + 1]
amount = i * 100
pool.last.run_check_transfer(to_address=to_address, amount=amount, from_address=from_address,
fee=0, timeout=transaction_timeout,
wait=transaction_wait)
for node in pool:
node.run_check_balance(address=to_address, balance=amount + init_amount)
first_address = addresses[0]
first_address_balance = init_amount
for node in pool:
node.run_check_balance(address=first_address, balance=first_address_balance)
return 0
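# Helper for the parallel test: repeatedly pass a fixed amount around the given addresses.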
def node_transfers(node, addresses, transaction_wait):
shift = len(addresses) - 1
pos = 0
from_address = addresses[pos]
amount = 300
transaction_timeout = 40
for _ in range(len(addresses) * 5):
pos = (pos + shift) % len(addresses)
to_address = addresses[pos]
node.run_check_transfer(to_address=to_address, amount=amount, from_address=from_address, fee=0,
timeout=transaction_timeout, wait=transaction_wait)
from_address = to_address
@test_case("parallel_transfer_connected_with_everything")
def main(env, logger):
count_nodes = 7
start_sync_port = 20330
start_rpc_port = 50180
node_startup_time = 5
transaction_wait = 10
transaction_timeout = 42
init_amount = 1000
address_per_nodes = 3
with NodePoll() as pool:
pool.append(Node(env, Node.Settings(Node.Id(start_sync_port, start_rpc_port)), logger))
pool.last.start_node(node_startup_time)
pool.last.run_check_test()
# initializing connections with nodes
for i in range(1, count_nodes):
curent_sync_port = start_sync_port + i
curent_rpc_port = start_rpc_port + i
pool.append(
Node(env, Node.Settings(Node.Id(curent_sync_port, curent_rpc_port), nodes=pool.ids),
logger))
pool.last.start_node(node_startup_time)
for node in pool:
node.run_check_test()
addresses = [pool.last.create_new_address(keys_path=f"keys{i}") for i in
range(1, count_nodes * address_per_nodes + 1)]
distributor_address = pool.last.load_address(keys_path=Node.DISTRIBUTOR_ADDRESS_PATH)
# init addresses with amount
for to_address in addresses:
pool.last.run_check_balance(address=to_address, balance=0)
pool.last.run_check_transfer(to_address=to_address, amount=init_amount,
from_address=distributor_address, fee=0, timeout=transaction_timeout,
wait=transaction_wait)
for node in pool:
node.run_check_balance(address=to_address, balance=init_amount)
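        # Drive transfers from all nodes in parallel, one worker thread per node.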
with concurrent.futures.ThreadPoolExecutor(len(pool)) as executor:
threads = []
for i in range(len(pool)):
first_address_number = i * address_per_nodes
last_address_number = (i * address_per_nodes) + address_per_nodes
threads.append(
executor.submit(node_transfers, pool[i], addresses[first_address_number:last_address_number],
transaction_wait))
for i in threads:
i.result()
for address in addresses:
for node in pool:
node.run_check_balance(address=address, balance=init_amount)
return 0
| 40.085973 | 133 | 0.637205 | 1,110 | 8,859 | 4.777477 | 0.091892 | 0.045257 | 0.042429 | 0.062229 | 0.787856 | 0.751839 | 0.740901 | 0.729587 | 0.722798 | 0.679992 | 0 | 0.02319 | 0.284456 | 8,859 | 220 | 134 | 40.268182 | 0.813378 | 0.021221 | 0 | 0.621302 | 0 | 0 | 0.018236 | 0.01362 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029586 | false | 0 | 0.011834 | 0 | 0.065089 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f754265a4563f6595547699085ac86c96d9efe1a | 131 | py | Python | evan/api/permissions/contents.py | eillarra/evan | befe0f8daedd1b1f629097110d92e68534e43da1 | [
"MIT"
] | null | null | null | evan/api/permissions/contents.py | eillarra/evan | befe0f8daedd1b1f629097110d92e68534e43da1 | [
"MIT"
] | 20 | 2021-03-31T20:10:46.000Z | 2022-02-15T09:58:13.000Z | evan/api/permissions/contents.py | eillarra/evan | befe0f8daedd1b1f629097110d92e68534e43da1 | [
"MIT"
] | null | null | null | from .events import EventRelatedObjectPermission
class ContentPermission(EventRelatedObjectPermission):
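    # Content objects may not be deleted through the API.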
allow_delete = False
| 21.833333 | 54 | 0.847328 | 10 | 131 | 11 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114504 | 131 | 5 | 55 | 26.2 | 0.948276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f797ae1a1aeec558dc1c2688498060799db3b312 | 4,196 | py | Python | tests/template_tests/test_extends_relative.py | MisterNayDev/django | 0788e7c1b339b903a119ce701863e4f4562d83ca | [
"CNRI-Python-GPL-Compatible",
"BSD-3-Clause"
] | null | null | null | tests/template_tests/test_extends_relative.py | MisterNayDev/django | 0788e7c1b339b903a119ce701863e4f4562d83ca | [
"CNRI-Python-GPL-Compatible",
"BSD-3-Clause"
] | null | null | null | tests/template_tests/test_extends_relative.py | MisterNayDev/django | 0788e7c1b339b903a119ce701863e4f4562d83ca | [
"CNRI-Python-GPL-Compatible",
"BSD-3-Clause"
] | null | null | null | import os
from django.template import Context, Engine, TemplateSyntaxError
from django.test import SimpleTestCase
from .utils import ROOT
RELATIVE = os.path.join(ROOT, 'relative_templates')
class ExtendsRelativeBehaviorTests(SimpleTestCase):
def test_normal_extend(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('one.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one')
def test_dir1_extend(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/one.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one dir1 one')
def test_dir1_extend1(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/one1.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one dir1 one')
def test_dir1_extend2(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/one2.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one dir1 one')
def test_dir1_extend3(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/one3.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one dir1 one')
def test_dir2_extend(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/dir2/one.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one dir2 one')
def test_extend_error(self):
engine = Engine(dirs=[RELATIVE])
msg = (
"The relative path '\"./../two.html\"' points outside the file "
"hierarchy that template 'error_extends.html' is in."
)
with self.assertRaisesMessage(TemplateSyntaxError, msg):
engine.render_to_string('error_extends.html')
class IncludeRelativeBehaviorTests(SimpleTestCase):
def test_normal_include(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/dir2/inc2.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'dir2 include')
def test_normal_include_variable(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/dir2/inc3.html')
output = template.render(Context({'tmpl': './include_content.html'}))
self.assertEqual(output.strip(), 'dir2 include')
def test_dir2_include(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/dir2/inc1.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three')
def test_include_error(self):
engine = Engine(dirs=[RELATIVE])
msg = (
"The relative path '\"./../three.html\"' points outside the file "
"hierarchy that template 'error_include.html' is in."
)
with self.assertRaisesMessage(TemplateSyntaxError, msg):
engine.render_to_string('error_include.html')
class ExtendsMixedBehaviorTests(SimpleTestCase):
def test_mixing1(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/two.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one dir2 one dir1 two')
def test_mixing2(self):
engine = Engine(dirs=[RELATIVE])
template = engine.get_template('dir1/three.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three dir1 three')
def test_mixing_loop(self):
engine = Engine(dirs=[RELATIVE])
msg = (
"The relative path '\"./dir2/../looped.html\"' was translated to "
"template name \'dir1/looped.html\', the same template in which "
"the tag appears."
)
with self.assertRaisesMessage(TemplateSyntaxError, msg):
engine.render_to_string('dir1/looped.html')
| 37.464286 | 78 | 0.646806 | 469 | 4,196 | 5.680171 | 0.162047 | 0.036787 | 0.084084 | 0.105105 | 0.748123 | 0.736486 | 0.736486 | 0.736486 | 0.713213 | 0.611111 | 0 | 0.013505 | 0.223546 | 4,196 | 111 | 79 | 37.801802 | 0.804174 | 0 | 0 | 0.409091 | 0 | 0 | 0.184461 | 0.010248 | 0 | 0 | 0 | 0 | 0.159091 | 1 | 0.159091 | false | 0 | 0.045455 | 0 | 0.238636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e3959f11397b73f3f3091706d63e3faf2356e9f2 | 1,683 | py | Python | deepnlpf/config.py | deepnlpf/deepnlpf | 6508ab1e8fd395575d606ee20223f25591541e25 | [
"Apache-2.0"
] | 3 | 2020-04-11T14:12:45.000Z | 2020-05-30T16:31:06.000Z | deepnlpf/config.py | deepnlpf/deepnlpf | 6508ab1e8fd395575d606ee20223f25591541e25 | [
"Apache-2.0"
] | 34 | 2020-03-20T19:36:40.000Z | 2022-03-20T13:00:32.000Z | deepnlpf/config.py | deepnlpf/deepnlpf | 6508ab1e8fd395575d606ee20223f25591541e25 | [
"Apache-2.0"
] | 1 | 2020-09-05T06:44:15.000Z | 2020-09-05T06:44:15.000Z | from configparser import ConfigParser
from deepnlpf.global_parameters import FILE_CONFIG
class Config(object):
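    """Typed accessors over the framework's configuration file."""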
def __init__(self) -> None:
self.config = ConfigParser()
self.config.read(FILE_CONFIG)
def get_debug(self) -> str:
return self.config.get("debug", "is_enabled")
def set_debug(self, status: str):
self.config.set("debug", "is_enabled", status)
def get_notification_toast(self):
return self.config.get("notification", "toast")
def set_notification_toast(self, status: str):
return self.config.set("notification", "toast", status)
def get_notification_email_smtp(self):
value = self.config.get("notification", "email.smtp")
return str(value)
def set_notification_email_smtp(self, smtp: str):
return self.config.set("notification", "email.smtp", smtp)
def get_notification_email_port(self):
value = self.config.get("notification", "email.port")
return int(value)
def set_notification_email_port(self, port: str):
return self.config.set("notification", "email.port", port)
def get_notification_email_address(self):
value = self.config.get("notification", "email.email_address")
return str(value)
def set_notification_email_address(self, email_address: str):
return self.config.set("notification", "email.email_address", email_address)
def get_notification_email_pass(self):
value = self.config.get("notification", "email.pass")
return str(value)
def set_notification_email_pass(self, password: str):
return self.config.set("notification", "email.pass", password)
| 31.166667 | 84 | 0.688057 | 208 | 1,683 | 5.360577 | 0.158654 | 0.243946 | 0.143498 | 0.102242 | 0.434978 | 0.409865 | 0.379372 | 0 | 0 | 0 | 0 | 0 | 0.192513 | 1,683 | 53 | 85 | 31.754717 | 0.820456 | 0 | 0 | 0.088235 | 0 | 0 | 0.153298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.382353 | false | 0.117647 | 0.058824 | 0.205882 | 0.794118 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
e39bab2321b2aedc4caa50f2886daefb1a45ce8f | 139 | py | Python | pyqtlet/leaflet/layer/__init__.py | samhattangady/pyqtlet | 2242f63b0dce6dd6357aaa0c6fe23a991451bfdd | [
"BSD-2-Clause-FreeBSD"
] | 30 | 2018-05-24T17:38:11.000Z | 2021-11-02T19:34:03.000Z | pyqtlet/leaflet/layer/__init__.py | samhattangady/pyqtlet | 2242f63b0dce6dd6357aaa0c6fe23a991451bfdd | [
"BSD-2-Clause-FreeBSD"
] | 27 | 2018-02-21T07:22:11.000Z | 2021-10-12T06:24:18.000Z | pyqtlet/leaflet/layer/__init__.py | samhattangady/pyqtlet | 2242f63b0dce6dd6357aaa0c6fe23a991451bfdd | [
"BSD-2-Clause-FreeBSD"
] | 9 | 2018-06-11T06:50:44.000Z | 2021-05-17T15:26:26.000Z | from .featuregroup import FeatureGroup
from .layer import Layer
from .layergroup import LayerGroup
from .imageoverlay import imageOverlay
| 23.166667 | 38 | 0.848921 | 16 | 139 | 7.375 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122302 | 139 | 5 | 39 | 27.8 | 0.967213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e3a6d221ee8301fd5fd344cf1504d97a8879be8f | 809 | py | Python | setka/pipes/__init__.py | RomanovMikeV/setka | cad6f17429a4bb3479c5557ad58c15fee568f410 | [
"MIT"
] | 11 | 2019-04-16T11:41:24.000Z | 2021-05-28T15:01:17.000Z | setka/pipes/__init__.py | RomanovMikeV/cv_utilities | cad6f17429a4bb3479c5557ad58c15fee568f410 | [
"MIT"
] | 15 | 2019-12-05T22:25:37.000Z | 2020-03-18T20:09:03.000Z | setka/pipes/__init__.py | RomanovMikeV/setka | cad6f17429a4bb3479c5557ad58c15fee568f410 | [
"MIT"
] | 6 | 2019-04-24T15:35:22.000Z | 2021-08-10T07:48:39.000Z | from setka.pipes.Pipe import Pipe
from setka.pipes.Lambda import Lambda
from setka.pipes.basic.ComputeMetrics import ComputeMetrics
from setka.pipes.basic.DatasetHandler import DatasetHandler
from setka.pipes.basic.ModelHandler import ModelHandler
from setka.pipes.basic.UseCuda import UseCuda
from setka.pipes.logging.Logger import Logger
from setka.pipes.logging.Checkpointer import Checkpointer
from setka.pipes.logging.SaveResult import SaveResult
from setka.pipes.logging.TensorBoard import TensorBoard
from setka.pipes.logging.ProgressBar import ProgressBar
import setka.pipes.logging.progressbar
from setka.pipes.optimization.LossHandler import LossHandler
from setka.pipes.optimization.OneStepOptimizers import OneStepOptimizers
from setka.pipes.optimization.WeightAveraging import WeightAveraging
| 42.578947 | 72 | 0.871446 | 101 | 809 | 6.980198 | 0.207921 | 0.212766 | 0.278014 | 0.148936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075402 | 809 | 18 | 73 | 44.944444 | 0.942513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e3c39cce60d0f7b1621e8f888f80040bc6e88f69 | 12,164 | py | Python | tests/test_topics.py | new-valley/new-valley | 8810739cab52ad4dea2f4005a59b8b7afea1e2db | [
"MIT"
] | null | null | null | tests/test_topics.py | new-valley/new-valley | 8810739cab52ad4dea2f4005a59b8b7afea1e2db | [
"MIT"
] | null | null | null | tests/test_topics.py | new-valley/new-valley | 8810739cab52ad4dea2f4005a59b8b7afea1e2db | [
"MIT"
] | null | null | null | import time
def test_client_can_get_topics(client):
resp = client.get('/api/topics')
assert resp.status_code == 200
def test_client_gets_correct_topics_fields(client):
resp = client.get('/api/topics')
assert 'offset' in resp.json
assert resp.json['offset'] is None
assert 'total' in resp.json
assert 'data' in resp.json
assert resp.json['total'] == len(resp.json['data'])
assert {
'topic_id',
'title',
'status',
'user',
'subforum',
'n_posts',
'last_post',
'created_at',
'updated_at',
} == set(resp.json['data'][0].keys())
def test_client_filters_topics_fields(client):
resp = client.get('/api/topics?fields=topic_id,user,title')
topics = resp.json['data']
assert {
'topic_id',
'user',
'title',
} == set(topics[0].keys())
def test_client_offsets_topics(client):
resp_1 = client.get('/api/topics')
resp_2 = client.get('/api/topics?offset=2')
assert len(resp_1.json['data']) \
== len(resp_2.json['data']) + min(2, len(resp_1.json['data']))
def test_client_limits_topics(client):
resp_1 = client.get('/api/topics?max_n_results=1')
resp_2 = client.get('/api/topics?max_n_results=2')
assert len(resp_1.json['data']) <= 1
assert len(resp_2.json['data']) <= 2
def test_client_filter_topics_by_statuses(admin_with_tok, client):
admin_with_tok.post('/api/topics',
data={
'title': 'hey',
'status': 'published',
}
)
admin_with_tok.post('/api/topics',
data={
'title': 'hey2',
'status': 'pinned',
}
)
resp_1 = client.get('/api/topics?statuses=published,pinned')
resp_2 = client.get('/api/topics?statuses=published')
assert {p['status'] for p in resp_1.json['data']} <= {'published', 'pinned'}
assert {p['status'] for p in resp_2.json['data']} <= {'published'}
def test_client_can_get_topic(client, topic_id):
resp = client.get('/api/topics/{}'.format(topic_id))
assert resp.status_code == 200
assert 'data' in resp.json
def test_client_gets_correct_topic_fields(client, topic_id):
resp = client.get('/api/topics/{}'.format(topic_id))
assert 'data' in resp.json
assert {
'topic_id',
'title',
'status',
'user',
'subforum',
'n_posts',
'last_post',
'created_at',
'updated_at',
} == set(resp.json['data'].keys())
def test_logged_off_client_cannot_delete_topic(client, topic_id):
resp = client.delete('/api/topics/{}'.format(topic_id))
assert resp.status_code == 401
def test_logged_in_client_cannot_delete_other_users_topic(
client_with_tok_getter, topic_id_getter):
client_with_tok = client_with_tok_getter('user')
other_user_topic_id = topic_id_getter('user_b')
resp = client_with_tok.delete('/api/topics/{}'.format(other_user_topic_id))
assert resp.status_code == 401
def test_logged_in_client_can_delete_their_topic(
client_with_tok_getter, topic_id_getter):
client_with_tok = client_with_tok_getter('user')
user_topic_id = topic_id_getter('user')
resp = client_with_tok.delete('/api/topics/{}'.format(user_topic_id))
assert resp.status_code == 204
def test_logged_in_mod_can_delete_topic(mod_with_tok, topic_id):
resp = mod_with_tok.delete('/api/topics/{}'.format(topic_id))
assert resp.status_code == 204
def test_logged_in_admin_can_delete_topic(admin_with_tok, topic_id):
resp = admin_with_tok.delete('/api/topics/{}'.format(topic_id))
assert resp.status_code == 204
def test_logged_in_admin_correctly_deletes_topic(admin_with_tok, topic_id):
resp_1 = admin_with_tok.get('/api/topics/{}'.format(topic_id))
resp_2 = admin_with_tok.delete('/api/topics/{}'.format(topic_id))
resp_3 = admin_with_tok.get('/api/topics/{}'.format(topic_id))
assert resp_1.status_code == 200
assert resp_2.status_code == 204
assert resp_3.status_code == 404
def test_logged_off_client_cannot_edit_topic(client, topic_id):
resp = client.put('/api/topics/{}'.format(topic_id),
data={
'title': 'updated',
}
)
assert resp.status_code == 401
def test_logged_in_client_cannot_edit_other_users_topic(
client_with_tok_getter, topic_id_getter):
client_with_tok = client_with_tok_getter('user')
other_user_topic_id = topic_id_getter('user_b')
resp = client_with_tok.put('/api/topics/{}'.format(other_user_topic_id),
data={
'title': 'updated',
}
)
assert resp.status_code == 401
def test_logged_in_client_can_edit_their_topic(
client_with_tok_getter, topic_id_getter):
client_with_tok = client_with_tok_getter('user')
user_topic_id = topic_id_getter('user')
resp = client_with_tok.put('/api/topics/{}'.format(user_topic_id),
data={
'title': 'updated',
}
)
assert resp.status_code == 200
def test_logged_in_mod_can_edit_topic(mod_with_tok, topic_id):
resp = mod_with_tok.put('/api/topics/{}'.format(topic_id),
data={
'title': 'updated',
}
)
assert resp.status_code == 200
def test_logged_in_admin_can_edit_topic(admin_with_tok, topic_id):
resp = admin_with_tok.put('/api/topics/{}'.format(topic_id),
data={
'title': 'updated',
}
)
assert resp.status_code == 200
def test_logged_in_client_gets_correct_put_fields(
client_with_tok_getter, topic_id_getter):
client_with_tok = client_with_tok_getter('user')
topic_id = topic_id_getter('user')
resp = client_with_tok.put('/api/topics/{}'.format(topic_id),
data={
'title': 'new',
}
)
assert 'data' in resp.json
assert {
'topic_id',
'title',
'status',
'user',
'subforum',
'n_posts',
'last_post',
'created_at',
'updated_at',
} == set(resp.json['data'].keys())
def test_logged_in_client_correctly_edits_its_topic(
client_with_tok_getter, topic_id_getter):
client_with_tok = client_with_tok_getter('user')
topic_id = topic_id_getter('user')
resp_1 = client_with_tok.get('/api/topics/{}'.format(topic_id))
resp_2 = client_with_tok.put('/api/topics/{}'.format(topic_id),
data={
'title': resp_1.json['data']['title'] + '_altered',
}
)
resp_3 = client_with_tok.get('/api/topics/{}'.format(topic_id))
assert resp_1.status_code == 200
assert resp_2.status_code == 200
assert resp_3.status_code == 200
assert resp_3.json['data']['title'] \
== resp_1.json['data']['title'] + '_altered'
def test_client_can_get_topic_posts(client, topic_id):
resp = client.get('/api/topics/{}/posts'.format(topic_id))
assert resp.status_code == 200
def test_client_gets_correct_topic_posts_fields(client, topic_id):
resp = client.get('/api/topics/{}/posts'.format(topic_id))
assert 'offset' in resp.json
assert resp.json['offset'] is None
assert 'total' in resp.json
assert 'data' in resp.json
assert len(resp.json['data']) > 0
assert resp.json['total'] == len(resp.json['data'])
assert {
'post_id',
'topic',
'user',
'content',
'status',
'created_at',
'updated_at',
} == set(resp.json['data'][0].keys())
def test_logged_off_client_cannot_create_post_in_topic(client, topic_id):
resp = client.post('/api/topics/{}/posts'.format(topic_id))
assert resp.status_code == 401
def test_logged_in_client_can_create_post_in_topic(client_with_tok, topic_id):
resp = client_with_tok.post('/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar',
}
)
assert resp.status_code == 200
def test_logged_in_client_gets_correct_n_posts(
client_with_tok, topic_id):
resp_1 = client_with_tok.get('/api/me')
resp_2 = client_with_tok.post(
'/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar',
}
)
resp_3 = client_with_tok.get('/api/me')
resp_4 = client_with_tok.delete(
'/api/posts/{}'.format(resp_2.json['data']['post_id']))
resp_5 = client_with_tok.get('/api/me')
assert \
resp_3.json['data']['n_posts'] == resp_1.json['data']['n_posts'] + 1
assert \
resp_5.json['data']['n_posts'] == resp_3.json['data']['n_posts'] - 1
def test_logged_in_client_gets_correct_fields_in_post_creation(
client_with_tok, topic_id):
resp = client_with_tok.post('/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar',
}
)
assert {
'post_id',
'topic',
'user',
'content',
'status',
'created_at',
'updated_at',
} == set(resp.json['data'].keys())
def test_logged_in_client_correctly_creates_post_in_topic(
client_with_tok, topic_id):
resp = client_with_tok.post('/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar',
}
)
assert resp.json['data']['status'] == 'published'
assert resp.json['data']['content'] == 'olar'
assert resp.json['data']['topic']['topic_id'] == str(topic_id)
def test_logged_in_client_correctly_gets_last_post(
client_with_tok, topic_id):
    # test is sensitive to the precision of datetime
time.sleep(1)
resp_1 = client_with_tok.post('/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar',
}
)
resp_2 = client_with_tok.get('/api/topics/{}'.format(topic_id))
time.sleep(1)
resp_3 = client_with_tok.post('/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar2',
}
)
resp_4 = client_with_tok.get('/api/topics/{}'.format(topic_id))
assert resp_2.json['data']['last_post']['post_id'] \
== resp_1.json['data']['post_id']
assert resp_4.json['data']['last_post']['post_id'] \
== resp_3.json['data']['post_id']
def test_logged_in_client_gets_correct_n_posts_in_topic(
client_with_tok, topic_id):
resp_1 = client_with_tok.get('/api/topics/{}'.format(topic_id))
resp_2 = client_with_tok.post('/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar',
}
)
resp_3 = client_with_tok.get('/api/topics/{}'.format(topic_id))
assert \
resp_3.json['data']['n_posts'] == resp_1.json['data']['n_posts'] + 1
def test_logged_in_client_under_antiflood_cannot_post_in_interval(
client_with_tok_under_antifloood, antiflood_time, topic_id):
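    # Wait out any prior antiflood window, then fire two posts back-to-back:
    # the second one must be rejected while still inside the window.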
time.sleep(antiflood_time)
start_time = time.time()
resp_1 = client_with_tok_under_antifloood.post(
'/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar',
}
)
resp_2 = client_with_tok_under_antifloood.post(
'/api/topics/{}/posts'.format(topic_id),
data={
'content': 'olar2',
}
)
end_time = time.time()
assert end_time - start_time < antiflood_time
assert resp_1.status_code == 200
assert resp_2.status_code == 429
def test_client_correctly_gets_topics_by_newest_last_post(
client_with_tok, topic_id_getter):
topic_id_1 = topic_id_getter('user')
topic_id_2 = topic_id_getter('user_b')
    # test is sensitive to the datetime precision of objects
time.sleep(1)
resp_1 = client_with_tok.post('/api/topics/{}/posts'.format(topic_id_1),
data={
'content': 'olar',
}
)
resp_2 = client_with_tok.get('/api/topics?order=newest_last_post')
time.sleep(1)
resp_3 = client_with_tok.post('/api/topics/{}/posts'.format(topic_id_2),
data={
'content': 'olar2',
}
)
resp_4 = client_with_tok.get('/api/topics?order=newest_last_post')
assert resp_2.json['data'][0]['topic_id'] == str(topic_id_1)
assert resp_4.json['data'][0]['topic_id'] == str(topic_id_2)
assert resp_4.json['data'][1]['topic_id'] == str(topic_id_1)
| 30.951654 | 80 | 0.6354 | 1,684 | 12,164 | 4.219715 | 0.065321 | 0.090628 | 0.100619 | 0.053476 | 0.884745 | 0.84112 | 0.773431 | 0.71517 | 0.659021 | 0.631579 | 0 | 0.016967 | 0.215061 | 12,164 | 392 | 81 | 31.030612 | 0.727273 | 0.007317 | 0 | 0.527607 | 0 | 0 | 0.164831 | 0.018802 | 0 | 0 | 0 | 0 | 0.187117 | 1 | 0.09816 | false | 0 | 0.003067 | 0 | 0.101227 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e3c98f3bdbdd327e41661fe25ea8846d7a0b0174 | 2,838 | py | Python | finetune/base_models/gpt2/model.py | IndicoDataSolutions/finetune-transformer-lm | 3534658e5de281e5634c8481b0fb37635b0cb3af | [
"MIT"
] | null | null | null | finetune/base_models/gpt2/model.py | IndicoDataSolutions/finetune-transformer-lm | 3534658e5de281e5634c8481b0fb37635b0cb3af | [
"MIT"
] | null | null | null | finetune/base_models/gpt2/model.py | IndicoDataSolutions/finetune-transformer-lm | 3534658e5de281e5634c8481b0fb37635b0cb3af | [
"MIT"
] | null | null | null | import os
from urllib.parse import urljoin
from finetune.base_models import SourceModel
from finetune.base_models.gpt2.encoder import GPT2Encoder
from finetune.base_models.gpt2.featurizer import gpt2_featurizer
from finetune.util.download import GPT2_BASE_URL, FINETUNE_BASE_FOLDER
class GPT2Model(SourceModel):
is_bidirectional = False
encoder = GPT2Encoder
featurizer = gpt2_featurizer
settings = {
'max_length': 1024,
'n_embed': 768,
'n_heads': 12,
'n_layer': 12,
'l2_reg': 0.001,
'act_fn': "gelu",
'base_model_path': os.path.join("gpt2", "model-sm.jl")
}
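    # Files to fetch (encoder, BPE vocab, weights), resolved against
    # FINETUNE_BASE_FOLDER locally and GPT2_BASE_URL remotely.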
required_files = [
{
'file': os.path.join(FINETUNE_BASE_FOLDER, 'model', 'gpt2', filename),
'url': urljoin(GPT2_BASE_URL, filename)
}
for filename in ['encoder.json', 'vocab.bpe', 'model-sm.jl']
]
class GPT2Model345(SourceModel):
is_bidirectional = False
encoder = GPT2Encoder
featurizer = gpt2_featurizer
settings = {
'max_length': 1024,
'n_embed': 1024,
'n_heads': 16,
'n_layer': 24,
'num_layers_trained': 24,
'l2_reg': 0.001,
'act_fn': "gelu",
'base_model_path': os.path.join("gpt2", "model-med.jl")
}
required_files = [
{
'file': os.path.join(FINETUNE_BASE_FOLDER, 'model', 'gpt2', filename),
'url': urljoin(GPT2_BASE_URL, filename)
}
for filename in ['encoder.json', 'vocab.bpe', 'model-med.jl']
]
class GPT2Model762(SourceModel):
is_bidirectional = False
encoder = GPT2Encoder
featurizer = gpt2_featurizer
settings = {
'max_length': 1024,
'n_embed': 1280,
'n_heads': 20,
'n_layer': 36,
'num_layers_trained': 36,
'l2_reg': 0.001,
'act_fn': "gelu",
'base_model_path': os.path.join("gpt2", "model-lg.jl")
}
required_files = [
{
'file': os.path.join(FINETUNE_BASE_FOLDER, 'model', 'gpt2', filename),
'url': urljoin(GPT2_BASE_URL, filename)
}
for filename in ['encoder.json', 'vocab.bpe', 'model-lg.jl']
]
class GPT2Model1558(SourceModel):
is_bidirectional = False
encoder = GPT2Encoder
featurizer = gpt2_featurizer
settings = {
'max_length': 1024,
'n_embed': 1600,
'n_heads': 25,
'n_layer': 48,
'num_layers_trained': 48,
'l2_reg': 0.001,
'act_fn': "gelu",
'base_model_path': os.path.join("gpt2", "model-xl.jl")
}
required_files = [
{
'file': os.path.join(FINETUNE_BASE_FOLDER, 'model', 'gpt2', filename),
'url': urljoin(GPT2_BASE_URL, filename)
}
for filename in ['encoder.json', 'vocab.bpe', 'model-xl.jl']
]
| 28.09901 | 82 | 0.584567 | 329 | 2,838 | 4.817629 | 0.215805 | 0.060568 | 0.050473 | 0.078233 | 0.739432 | 0.706625 | 0.706625 | 0.706625 | 0.706625 | 0.706625 | 0 | 0.054848 | 0.280479 | 2,838 | 100 | 83 | 28.38 | 0.721352 | 0 | 0 | 0.449438 | 0 | 0 | 0.195913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.067416 | 0 | 0.337079 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e3d987c4c24e9b04df29a1fe6dbef224705e77ff | 3,699 | py | Python | web/datasets/tests/test_services.py | andressadotpy/maria-quiteria | eb0dae395d2eb12b354aedb50810419d3b512875 | [
"MIT"
] | 151 | 2019-11-10T02:18:25.000Z | 2022-01-18T14:28:25.000Z | web/datasets/tests/test_services.py | andressadotpy/maria-quiteria | eb0dae395d2eb12b354aedb50810419d3b512875 | [
"MIT"
] | 202 | 2019-11-09T16:27:19.000Z | 2022-03-22T12:41:27.000Z | web/datasets/tests/test_services.py | andressadotpy/maria-quiteria | eb0dae395d2eb12b354aedb50810419d3b512875 | [
"MIT"
] | 69 | 2020-02-05T01:33:35.000Z | 2022-03-30T10:39:27.000Z | import os
from pathlib import Path
from django.conf import settings
from web.datasets.services import get_s3_client
client = get_s3_client(settings)
class TestS3Client:
def test_upload_file(self):
relative_path = "TestModel/2020/10/23/"
s3_url, bucket_file_path = client.upload_file(
"https://www.google.com/robots.txt", relative_path
)
expected_file_path = f"maria-quiteria-local/files/{relative_path}robots.txt"
expected_s3_url = f"https://teste.s3.brasil.amazonaws.com/{bucket_file_path}"
real_path = f"{os.getcwd()}/data/tmp/{expected_file_path}"
assert s3_url == expected_s3_url
assert bucket_file_path == expected_file_path
assert Path(real_path).exists() is False
def test_create_temp_file(self):
url = (
"http://www.feiradesantana.ba.gov.br/licitacoes/"
"respostas/4924SUSPENS%C3%83O.pdf"
)
temp_file_name, temp_file_path = client.create_temp_file(url)
assert temp_file_name == "4924SUSPENS%C3%83O.pdf"
assert Path(temp_file_path).is_file() is True
client.delete_temp_file(temp_file_path)
assert Path(temp_file_path).is_file() is False
def test_create_temp_file_with_prefix(self):
url = (
"http://www.feiradesantana.ba.gov.br/licitacoes/"
"respostas/4924SUSPENS%C3%83O.pdf"
)
prefix = "eu-sou-um-checksum"
expected_file_name = f"{prefix}-4924SUSPENS%C3%83O.pdf"
temp_file_name, temp_file_path = client.create_temp_file(url, prefix=prefix)
assert temp_file_name == expected_file_name
assert Path(temp_file_path).is_file() is True
client.delete_temp_file(temp_file_path)
assert Path(temp_file_path).is_file() is False
def test_create_temp_file_with_relative_file_path(self):
url = (
"http://www.feiradesantana.ba.gov.br/licitacoes/"
"respostas/4924SUSPENS%C3%83O.pdf"
)
relative_file_path = "extra/"
temp_file_name, temp_file_path = client.create_temp_file(
url, relative_file_path=relative_file_path
)
assert temp_file_name == "4924SUSPENS%C3%83O.pdf"
assert Path(temp_file_path).is_file() is True
client.delete_temp_file(temp_file_path)
assert Path(temp_file_path).is_file() is False
def test_download_file(self):
relative_path = "TestModel/2020/10/23/"
s3_url, relative_file_path = client.upload_file(
"https://www.google.com/robots.txt", relative_path
)
expected_file_path = f"maria-quiteria-local/files/{relative_path}robots.txt"
expected_s3_url = f"https://teste.s3.brasil.amazonaws.com/{expected_file_path}"
real_path = f"{os.getcwd()}/data/tmp/{expected_file_path}"
assert s3_url == expected_s3_url
assert relative_file_path == expected_file_path
assert Path(real_path).exists() is False
absolute_file_path = client.download_file(relative_file_path)
assert absolute_file_path == real_path
def test_upload_file_from_local_path(self):
local_path = Path("conteudo.txt")
local_path.write_text("Testando")
relative_path = "TestModel/2021/06/23/"
s3_url, bucket_file_path = client.upload_file(str(local_path), relative_path)
expected_file_path = f"maria-quiteria-local/files/{relative_path}conteudo.txt"
expected_s3_url = f"https://teste.s3.brasil.amazonaws.com/{bucket_file_path}"
assert s3_url == expected_s3_url
assert bucket_file_path == expected_file_path
assert Path(local_path).exists() is False
| 36.99 | 87 | 0.682076 | 507 | 3,699 | 4.635108 | 0.163708 | 0.122553 | 0.061277 | 0.051064 | 0.75234 | 0.75234 | 0.75234 | 0.743404 | 0.743404 | 0.725106 | 0 | 0.028976 | 0.216275 | 3,699 | 99 | 88 | 37.363636 | 0.781649 | 0 | 0 | 0.493333 | 0 | 0 | 0.243039 | 0.129224 | 0 | 0 | 0 | 0 | 0.253333 | 1 | 0.08 | false | 0 | 0.053333 | 0 | 0.146667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e3eb340c6608ce5b85a1741d90bd8d8ba777eae1 | 44 | py | Python | src/kgmk/download/__init__.py | kagemeka/python | 486ce39d97360b61029527bacf00a87fdbcf552c | [
"MIT"
] | null | null | null | src/kgmk/download/__init__.py | kagemeka/python | 486ce39d97360b61029527bacf00a87fdbcf552c | [
"MIT"
] | null | null | null | src/kgmk/download/__init__.py | kagemeka/python | 486ce39d97360b61029527bacf00a87fdbcf552c | [
"MIT"
] | null | null | null | from .download_zip import (
DownloadZip,
) | 14.666667 | 27 | 0.75 | 5 | 44 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 3 | 28 | 14.666667 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e3f2a4e8689001f42e9c972b053a71171fe581c1 | 140 | py | Python | core/bootstrapd/command.py | SwarmDHS/warp | 0ac8725998f4afd094830a1c683c76cdebdb0eb3 | [
"MIT"
] | null | null | null | core/bootstrapd/command.py | SwarmDHS/warp | 0ac8725998f4afd094830a1c683c76cdebdb0eb3 | [
"MIT"
] | 1 | 2021-11-11T20:04:15.000Z | 2021-11-11T20:04:15.000Z | core/bootstrapd/command.py | SwarmDHS/warp | 0ac8725998f4afd094830a1c683c76cdebdb0eb3 | [
"MIT"
] | null | null | null | import subprocess
def run(command: list) -> str:
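    # Run the command, capture its stdout, and return it decoded from UTF-8 and stripped.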
return subprocess.run(command, stdout=subprocess.PIPE).stdout.decode("utf-8").strip()
| 28 | 89 | 0.735714 | 19 | 140 | 5.421053 | 0.736842 | 0.194175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008 | 0.107143 | 140 | 4 | 90 | 35 | 0.816 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 5 |
e3f73d8b046cd8ffbcb463745b92e0b81ded3984 | 12,669 | py | Python | obp/policy/logistic.py | nmasahiro/zr-obp | dde815dfe75fc6cc3c9ee6479d97db1e5567de6d | [
"Apache-2.0"
] | null | null | null | obp/policy/logistic.py | nmasahiro/zr-obp | dde815dfe75fc6cc3c9ee6479d97db1e5567de6d | [
"Apache-2.0"
] | null | null | null | obp/policy/logistic.py | nmasahiro/zr-obp | dde815dfe75fc6cc3c9ee6479d97db1e5567de6d | [
"Apache-2.0"
] | null | null | null | # Copyright (c) Yuta Saito, Yusuke Narita, and ZOZO Technologies, Inc. All rights reserved.
# Licensed under the Apache 2.0 License.
"""Contextual Logistic Bandit Algorithms."""
from dataclasses import dataclass
from typing import Optional
import numpy as np
from sklearn.utils import check_random_state
from scipy.optimize import minimize
from .base import BaseContextualPolicy
from ..utils import sigmoid
@dataclass
class LogisticEpsilonGreedy(BaseContextualPolicy):
"""Logistic Epsilon Greedy.
Parameters
-----------
dim: int
Number of dimensions of context vectors.
n_actions: int
Number of actions.
len_list: int, default=1
Length of a list of actions recommended in each impression.
When Open Bandit Dataset is used, 3 should be set.
batch_size: int, default=1
Number of samples used in a batch parameter update.
alpha_: float, default=1.
Prior parameter for the online logistic regression.
lambda_: float, default=1.
Regularization hyperparameter for the online logistic regression.
random_state: int, default=None
Controls the random seed in sampling actions.
epsilon: float, default=0.
Exploration hyperparameter that must take value in the range of [0., 1.].
"""
epsilon: float = 0.0
def __post_init__(self) -> None:
"""Initialize class."""
if not 0 <= self.epsilon <= 1:
raise ValueError(
f"epsilon must be between 0 and 1, but {self.epsilon} is given"
)
self.policy_name = f"logistic_egreedy_{self.epsilon}"
super().__post_init__()
self.model_list = [
MiniBatchLogisticRegression(
lambda_=self.lambda_list[i], alpha=self.alpha_list[i], dim=self.dim
)
for i in np.arange(self.n_actions)
]
self.reward_lists = [[] for _ in np.arange(self.n_actions)]
self.context_lists = [[] for _ in np.arange(self.n_actions)]
def select_action(self, context: np.ndarray) -> np.ndarray:
"""Select action for new data.
Parameters
----------
context: array-like, shape (1, dim_context)
Observed context vector.
Returns
----------
selected_actions: array-like, shape (len_list, )
List of selected actions.
"""
if self.random_.rand() > self.epsilon:
theta = np.array(
[model.predict_proba(context) for model in self.model_list]
).flatten()
return theta.argsort()[::-1][: self.len_list]
else:
return self.random_.choice(
self.n_actions, size=self.len_list, replace=False
)
def update_params(self, action: int, reward: float, context: np.ndarray) -> None:
"""Update policy parameters.
Parameters
----------
action: int
Selected action by the policy.
reward: float
Observed reward for the chosen action and position.
context: array-like, shape (1, dim_context)
Observed context vector.
"""
self.n_trial += 1
self.action_counts[action] += 1
self.reward_lists[action].append(reward)
self.context_lists[action].append(context)
if self.n_trial % self.batch_size == 0:
for action, model in enumerate(self.model_list):
if not len(self.reward_lists[action]) == 0:
model.fit(
X=np.concatenate(self.context_lists[action], axis=0),
y=np.array(self.reward_lists[action]),
)
self.reward_lists = [[] for _ in np.arange(self.n_actions)]
self.context_lists = [[] for _ in np.arange(self.n_actions)]
@dataclass
class LogisticUCB(BaseContextualPolicy):
"""Logistic Upper Confidence Bound.
Parameters
------------
dim: int
Number of dimensions of context vectors.
n_actions: int
Number of actions.
len_list: int, default=1
Length of a list of actions recommended in each impression.
When Open Bandit Dataset is used, 3 should be set.
batch_size: int, default=1
Number of samples used in a batch parameter update.
alpha_: float, default=1.
Prior parameter for the online logistic regression.
lambda_: float, default=1.
Regularization hyperparameter for the online logistic regression.
random_state: int, default=None
Controls the random seed in sampling actions.
epsilon: float, default=0.
Exploration hyperparameter that must take value in the range of [0., 1.].
References
----------
Lihong Li, Wei Chu, John Langford, and Robert E Schapire.
"A Contextual-bandit Approach to Personalized News Article Recommendation," 2010.
"""
epsilon: float = 0.0
def __post_init__(self) -> None:
"""Initialize class."""
if self.epsilon < 0:
raise ValueError(
f"epsilon must be positive scalar, but {self.epsilon} is given"
)
self.policy_name = f"logistic_ucb_{self.epsilon}"
super().__post_init__()
self.model_list = [
MiniBatchLogisticRegression(
lambda_=self.lambda_list[i], alpha=self.alpha_list[i], dim=self.dim
)
for i in np.arange(self.n_actions)
]
self.reward_lists = [[] for _ in np.arange(self.n_actions)]
self.context_lists = [[] for _ in np.arange(self.n_actions)]
def select_action(self, context: np.ndarray) -> np.ndarray:
"""Select action for new data.
Parameters
------------
context: array-like, shape (1, dim_context)
Observed context vector.
Returns
----------
selected_actions: array-like, shape (len_list, )
List of selected actions.
"""
theta = np.array(
[model.predict_proba(context) for model in self.model_list]
).flatten()
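        # Per-arm exploration width derived from the diagonal posterior variance.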
std = np.array(
[
np.sqrt(np.sum((model._q ** (-1)) * (context ** 2)))
for model in self.model_list
]
).flatten()
ucb_score = theta + self.epsilon * std
return ucb_score.argsort()[::-1][: self.len_list]
def update_params(self, action: int, reward: float, context: np.ndarray) -> None:
"""Update policy parameters.
Parameters
------------
action: int
Selected action by the policy.
reward: float
Observed reward for the chosen action and position.
context: array-like, shape (1, dim_context)
Observed context vector.
"""
self.n_trial += 1
self.action_counts[action] += 1
self.reward_lists[action].append(reward)
self.context_lists[action].append(context)
if self.n_trial % self.batch_size == 0:
for action, model in enumerate(self.model_list):
if not len(self.reward_lists[action]) == 0:
model.fit(
X=np.concatenate(self.context_lists[action], axis=0),
y=np.array(self.reward_lists[action]),
)
self.reward_lists = [[] for _ in np.arange(self.n_actions)]
self.context_lists = [[] for _ in np.arange(self.n_actions)]
@dataclass
class LogisticTS(BaseContextualPolicy):
"""Logistic Thompson Sampling.
Parameters
----------
dim: int
Number of dimensions of context vectors.
n_actions: int
Number of actions.
len_list: int, default=1
Length of a list of actions recommended in each impression.
When Open Bandit Dataset is used, 3 should be set.
batch_size: int, default=1
Number of samples used in a batch parameter update.
alpha_: float, default=1.
Prior parameter for the online logistic regression.
lambda_: float, default=1.
Regularization hyperparameter for the online logistic regression.
random_state: int, default=None
Controls the random seed in sampling actions.
References
----------
Olivier Chapelle and Lihong Li.
"An empirical evaluation of thompson sampling," 2011.
"""
policy_name: str = "logistic_ts"
def __post_init__(self) -> None:
"""Initialize class."""
super().__post_init__()
self.model_list = [
MiniBatchLogisticRegression(
lambda_=self.lambda_list[i],
alpha=self.alpha_list[i],
dim=self.dim,
random_state=self.random_state,
)
for i in np.arange(self.n_actions)
]
self.reward_lists = [[] for _ in np.arange(self.n_actions)]
self.context_lists = [[] for _ in np.arange(self.n_actions)]
def select_action(self, context: np.ndarray) -> np.ndarray:
"""Select action for new data.
Parameters
----------
context: array-like, shape (1, dim_context)
Observed context vector.
Returns
----------
selected_actions: array-like, shape (len_list, )
List of selected actions.
"""
theta = np.array(
[model.predict_proba_with_sampling(context) for model in self.model_list]
).flatten()
return theta.argsort()[::-1][: self.len_list]
def update_params(self, action: int, reward: float, context: np.ndarray) -> None:
"""Update policy parameters.
Parameters
----------
action: int
Selected action by the policy.
reward: float
Observed reward for the chosen action and position.
context: array-like, shape (1, dim_context)
Observed context vector.
"""
self.n_trial += 1
self.action_counts[action] += 1
self.reward_lists[action].append(reward)
self.context_lists[action].append(context)
if self.n_trial % self.batch_size == 0:
for action, model in enumerate(self.model_list):
if not len(self.reward_lists[action]) == 0:
model.fit(
X=np.concatenate(self.context_lists[action], axis=0),
y=np.array(self.reward_lists[action]),
)
self.reward_lists = [[] for _ in np.arange(self.n_actions)]
self.context_lists = [[] for _ in np.arange(self.n_actions)]
@dataclass
class MiniBatchLogisticRegression:
"""MiniBatch Online Logistic Regression Model."""
lambda_: float
alpha: float
dim: int
random_state: Optional[int] = None
def __post_init__(self) -> None:
"""Initialize Class."""
self._m = np.zeros(self.dim)
self._q = np.ones(self.dim) * self.lambda_
self.random_ = check_random_state(self.random_state)
def loss(self, w: np.ndarray, *args) -> float:
"""Calculate loss function."""
X, y = args
return (
0.5 * (self._q * (w - self._m)).dot(w - self._m)
+ np.log(1 + np.exp(-y * w.dot(X.T))).sum()
)
def grad(self, w: np.ndarray, *args) -> np.ndarray:
"""Calculate gradient."""
X, y = args
return self._q * (w - self._m) + (-1) * (
((y * X.T) / (1.0 + np.exp(y * w.dot(X.T)))).T
).sum(axis=0)
def sample(self) -> np.ndarray:
"""Sample coefficient vector from the prior distribution."""
return self.random_.normal(self._m, self.sd(), size=self.dim)
def fit(self, X: np.ndarray, y: np.ndarray):
"""Update coefficient vector by the mini-batch data."""
self._m = minimize(
self.loss,
self._m,
args=(X, y),
jac=self.grad,
method="L-BFGS-B",
options={"maxiter": 20, "disp": False},
).x
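        # Update the diagonal precision with P * (1 - P) curvature terms (Laplace approximation).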
P = (1 + np.exp(1 + X.dot(self._m))) ** (-1)
self._q = self._q + (P * (1 - P)).dot(X ** 2)
def sd(self) -> np.ndarray:
"""Standard deviation for the coefficient vector."""
return self.alpha * (self._q) ** (-1.0)
def predict_proba(self, X: np.ndarray) -> np.ndarray:
"""Predict extected probability by the expected coefficient."""
return sigmoid(X.dot(self._m))
def predict_proba_with_sampling(self, X: np.ndarray) -> np.ndarray:
"""Predict extected probability by the sampled coefficient."""
return sigmoid(X.dot(self.sample()))
| 31.992424 | 91 | 0.583945 | 1,514 | 12,669 | 4.747688 | 0.157199 | 0.015303 | 0.026711 | 0.029215 | 0.758347 | 0.743044 | 0.726071 | 0.709099 | 0.709099 | 0.709099 | 0 | 0.009287 | 0.303023 | 12,669 | 395 | 92 | 32.073418 | 0.804757 | 0.346673 | 0 | 0.517857 | 0 | 0 | 0.027916 | 0.007784 | 0 | 0 | 0 | 0 | 0 | 1 | 0.10119 | false | 0 | 0.041667 | 0 | 0.267857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
54251f1c62b56be3a46f317eccf39402d6d2510e | 245 | py | Python | define_unit_test.py | neha-060/pygrader | 073f79de56523f5484e2ec3a44801f25da6e66e1 | [
"MIT"
] | 2 | 2020-11-29T16:23:37.000Z | 2020-11-29T16:46:50.000Z | define_unit_test.py | neha-060/pygrader | 073f79de56523f5484e2ec3a44801f25da6e66e1 | [
"MIT"
] | 2 | 2019-11-24T19:05:32.000Z | 2019-11-24T19:06:54.000Z | define_unit_test.py | neha-060/pygrader | 073f79de56523f5484e2ec3a44801f25da6e66e1 | [
"MIT"
] | 9 | 2019-12-30T10:07:07.000Z | 2022-01-21T12:08:48.000Z | import unittest
# Change the module name below to match your test file.
# The class that defines the unit tests must be named "unit_test".
from unit_test_file import unit_test
class assignment_test:
    @staticmethod
    def get_unit_test():
        return unittest.makeSuite(unit_test)
| 20.416667 | 44 | 0.763265 | 38 | 245 | 4.710526 | 0.578947 | 0.268156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187755 | 245 | 11 | 45 | 22.272727 | 0.899497 | 0.363265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
58152b4514877f3ca27ce8643357b45e34fc697e | 188 | py | Python | rentomatic/use_cases/request_objects.py | pachecobruno/python-ddd | 81812848a567d4605df346ef3630718d320706cc | [
"MIT"
] | null | null | null | rentomatic/use_cases/request_objects.py | pachecobruno/python-ddd | 81812848a567d4605df346ef3630718d320706cc | [
"MIT"
] | null | null | null | rentomatic/use_cases/request_objects.py | pachecobruno/python-ddd | 81812848a567d4605df346ef3630718d320706cc | [
"MIT"
] | null | null | null |
class StorageRoomListRequestObject(object):
@classmethod
def from_dict(cls, adict):
return StorageRoomListRequestObject()
    def __bool__(self):
        # Python 3 truthiness hook (the Python 2 name was __nonzero__).
        return True
| 18.8 | 45 | 0.702128 | 16 | 188 | 7.9375 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228723 | 188 | 9 | 46 | 20.888889 | 0.875862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
58295f65210bda46e6a9207ddc7ec98ebaee4664 | 59 | py | Python | transcar/__init__.py | space-physics/transcar | a9305bd29723beb45004a8882627fa518d8a1bb6 | [
"Apache-2.0"
] | 3 | 2019-06-13T11:32:22.000Z | 2020-12-02T10:31:46.000Z | transcar/__init__.py | scivision/transcar | a9305bd29723beb45004a8882627fa518d8a1bb6 | [
"Apache-2.0"
] | null | null | null | transcar/__init__.py | scivision/transcar | a9305bd29723beb45004a8882627fa518d8a1bb6 | [
"Apache-2.0"
] | 1 | 2019-07-08T19:03:24.000Z | 2019-07-08T19:03:24.000Z | from .base import beam_spectrum_arbiter, mono_beam_arbiter
| 29.5 | 58 | 0.881356 | 9 | 59 | 5.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 59 | 1 | 59 | 59 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5869904d865483bb593521dee78b5f36ff6b0cc1 | 236 | py | Python | ja/code_snippets/results/result.api-monitor-get-downtimes.py | quotecenter/documentation-1 | f365703264761aa2b19d5d1d8ec55a3a6082ef4d | [
"BSD-3-Clause"
] | null | null | null | ja/code_snippets/results/result.api-monitor-get-downtimes.py | quotecenter/documentation-1 | f365703264761aa2b19d5d1d8ec55a3a6082ef4d | [
"BSD-3-Clause"
] | null | null | null | ja/code_snippets/results/result.api-monitor-get-downtimes.py | quotecenter/documentation-1 | f365703264761aa2b19d5d1d8ec55a3a6082ef4d | [
"BSD-3-Clause"
] | null | null | null | [{'active': False,
'disabled': True,
'end': 1412793983,
'id': 1625,
'scope': ['env:staging'],
'start': 1412792983},
{'active': False,
'disabled': True,
'end': None,
'id': 1626,
'scope': ['*'],
'start': 1412792985}]
| 18.153846 | 27 | 0.542373 | 24 | 236 | 5.333333 | 0.666667 | 0.171875 | 0.296875 | 0.359375 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198953 | 0.190678 | 236 | 12 | 28 | 19.666667 | 0.471204 | 0 | 0 | 0.166667 | 0 | 0 | 0.29661 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
587b597be19bd181786f1f0bc4cfb23ac3717597 | 141 | py | Python | Script/run.py | PyRectangle/GreyRectangle | 21c19002f52563a096566e9166040815005b830b | [
"MIT"
] | 3 | 2017-09-28T16:53:09.000Z | 2018-03-18T20:01:41.000Z | Script/run.py | PyRectangle/GreyRectangle | 21c19002f52563a096566e9166040815005b830b | [
"MIT"
] | null | null | null | Script/run.py | PyRectangle/GreyRectangle | 21c19002f52563a096566e9166040815005b830b | [
"MIT"
] | null | null | null | def _execute(__code, gr):
exec(compile(__code, "", "exec"))
def _executeBlock(__code, gr, block):
exec(compile(__code, "", "exec"))
| 23.5 | 37 | 0.638298 | 17 | 141 | 4.705882 | 0.470588 | 0.15 | 0.375 | 0.475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156028 | 141 | 5 | 38 | 28.2 | 0.672269 | 0 | 0 | 0.5 | 0 | 0 | 0.056738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
545d4cdf317396ca1a805152e6ffbbdb835361f7 | 66 | py | Python | tests/resources/incorrect_naming/incorrect_naming.py | lleites/topyn | 69e2bd100e71bb0323adadb857aea724647f456e | [
"MIT"
] | 10 | 2019-11-21T22:25:34.000Z | 2022-01-13T13:44:54.000Z | tests/resources/incorrect_naming/incorrect_naming.py | lleites/topyn | 69e2bd100e71bb0323adadb857aea724647f456e | [
"MIT"
] | null | null | null | tests/resources/incorrect_naming/incorrect_naming.py | lleites/topyn | 69e2bd100e71bb0323adadb857aea724647f456e | [
"MIT"
] | null | null | null | class PythonIsNotJAVA:
def iLikeCamelCase(self):
pass
| 16.5 | 29 | 0.681818 | 6 | 66 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257576 | 66 | 3 | 30 | 22 | 0.918367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
54649078a72112cac01a5f67fb86aa795a59af4f | 5,574 | py | Python | dataset/Dataset.py | qiujunlin/Segmentation | b1514ca33bdf35737426de89850349aaf4ef59d4 | [
"MIT"
] | 1 | 2022-03-28T02:42:40.000Z | 2022-03-28T02:42:40.000Z | dataset/Dataset.py | qiujunlin/Segmentation | b1514ca33bdf35737426de89850349aaf4ef59d4 | [
"MIT"
] | null | null | null | dataset/Dataset.py | qiujunlin/Segmentation | b1514ca33bdf35737426de89850349aaf4ef59d4 | [
"MIT"
] | null | null | null | import torch
import glob
import os
import sys
import numpy as np
from torchvision import transforms
from torchvision.transforms import functional as F
#import cv2
from PIL import Image
import random
class Dataset(torch.utils.data.Dataset):
    def __init__(self, dataset_path, scale=(352, 352), augmentations=True, has_edge=False):
super().__init__()
self.augmentations = augmentations
self.img_path=dataset_path+'/images/'
self.mask_path=dataset_path+'/masks/'
        self.edge_path = dataset_path + '/edgs/'
        self.edge_flag = has_edge
        self.images = [self.img_path + f for f in os.listdir(self.img_path) if f.endswith('.jpg') or f.endswith('.png')]
        self.gts = [self.mask_path + f for f in os.listdir(self.mask_path) if f.endswith('.png') or f.endswith(".jpg")]
        if self.edge_flag:
            # Only scan the edge directory when edge maps are requested.
            self.edges = [self.edge_path + f for f in os.listdir(self.edge_path) if f.endswith('.png') or f.endswith(".jpg")]
if self.augmentations :
print('Using RandomRotation, RandomFlip')
self.img_transform = transforms.Compose([
transforms.RandomVerticalFlip(p=0.5),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomRotation(90, resample=False, expand=False, center=None),
transforms.Resize(scale,Image.NEAREST),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
self.gt_transform = transforms.Compose([
transforms.RandomVerticalFlip(p=0.5),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomRotation(90, resample=False, expand=False, center=None),
transforms.Resize(scale,Image.BILINEAR),
transforms.ToTensor()])
else:
print('no augmentation')
self.img_transform = transforms.Compose([
transforms.Resize(scale,Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
self.gt_transform = transforms.Compose([
transforms.Resize(scale,Image.BILINEAR),
transforms.ToTensor()])
def __getitem__(self, index):
image = self.rgb_loader(self.images[index])
gt = self.binary_loader(self.gts[index])
# image = self.img_transform(image)
#gt = self.gt_transform(gt)
seed = np.random.randint(2147483647) # make a seed with numpy generator
random.seed(seed) # apply this seed to img tranfsorms
torch.manual_seed(seed) # needed for torchvision 0.7
if self.img_transform is not None:
image = self.img_transform(image)
random.seed(seed) # apply this seed to img tranfsorms
torch.manual_seed(seed) # needed for torchvision 0.7
if self.gt_transform is not None:
gt = self.gt_transform(gt)
        if self.edge_flag:
edge = self.binary_loader(self.edges[index])
random.seed(seed) # apply this seed to img tranfsorms
torch.manual_seed(seed) # needed for torchvision 0.7
edge = self.gt_transform(edge)
return image, gt, edge
else:
return image, gt
# return image, gt
def rgb_loader(self, path):
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
def binary_loader(self, path):
with open(path, 'rb') as f:
img = Image.open(f)
# return img.convert('1')
return img.convert('L')
def __len__(self):
return len(self.images)
class TestDataset(torch.utils.data.Dataset):
def __init__(self, dataset_path,scale=(256,448)):
super().__init__()
self.img_path=dataset_path+'/images/'
self.mask_path=dataset_path+'/masks/'
self.images = [self.img_path + f for f in os.listdir(self.img_path) if f.endswith('.jpg') or f.endswith('.png')]
self.gts = [self.mask_path + f for f in os.listdir(self.mask_path) if f.endswith('.png') or f.endswith(".jpg")]
self.img_transform = transforms.Compose([
transforms.Resize((scale)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# self.img_transform = transforms.Compose([
# transforms.Resize((scale)),
# transforms.ToTensor()])
self.gt_transform = transforms.ToTensor()
def __getitem__(self, index):
image = self.rgb_loader(self.images[index])
gt = self.binary_loader(self.gts[index])
image = self.img_transform(image)
gt = self.gt_transform(gt)
name = self.images[index].split('/')[-1]
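        # Derive the mask filename from the image filename (.jpg images map to *_segmentation.png masks).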
if name.endswith('.jpg'):
name = name.split('.jpg')[0] + '_segmentation.png'
# print(gt.shape[1:])
return image,gt,name
def rgb_loader(self, path):
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
def binary_loader(self, path):
with open(path, 'rb') as f:
img = Image.open(f)
# return img.convert('1')
return img.convert('L')
def __len__(self):
return len(self.images)
if __name__ == '__main__':
data = Dataset('E:\dataset\dataset\TrainDataset')
print(data.__getitem__(0))
| 36.194805 | 122 | 0.591137 | 686 | 5,574 | 4.661808 | 0.172012 | 0.030644 | 0.040025 | 0.067542 | 0.742026 | 0.727955 | 0.725766 | 0.717636 | 0.707942 | 0.697936 | 0 | 0.02984 | 0.284535 | 5,574 | 153 | 123 | 36.431373 | 0.772066 | 0.112307 | 0 | 0.669725 | 0 | 0 | 0.038579 | 0.006294 | 0 | 0 | 0 | 0 | 0 | 1 | 0.091743 | false | 0 | 0.082569 | 0.018349 | 0.275229 | 0.027523 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
54824584f1ad01b6e3a0929764ab93651e74faed | 1,064 | py | Python | src/tests/common/test_common_auth.py | codingcatgirl/pretalx | 26554967772efa5248ae9b6a0fa838b0e8713807 | [
"Apache-2.0"
] | null | null | null | src/tests/common/test_common_auth.py | codingcatgirl/pretalx | 26554967772efa5248ae9b6a0fa838b0e8713807 | [
"Apache-2.0"
] | null | null | null | src/tests/common/test_common_auth.py | codingcatgirl/pretalx | 26554967772efa5248ae9b6a0fa838b0e8713807 | [
"Apache-2.0"
] | null | null | null | import pytest
from django.test import Client
from rest_framework.authtoken.models import Token
@pytest.mark.flaky(reruns=3)
@pytest.mark.django_db
def test_can_see_schedule_with_bearer_token(event, schedule, slot, orga_user):
Token.objects.create(user=orga_user)
client = Client(HTTP_AUTHORIZATION="Token " + orga_user.auth_token.key)
event.feature_flags["show_schedule"] = False
event.save()
response = client.get(f"/{event.slug}/schedule.xml")
assert response.status_code == 200
assert slot.submission.title in response.content.decode()
@pytest.mark.flaky(reruns=3)
@pytest.mark.django_db
def test_cannot_see_schedule_with_wrong_bearer_token(event, schedule, slot, orga_user):
Token.objects.create(user=orga_user)
client = Client(HTTP_AUTHORIZATION="Token " + orga_user.auth_token.key + "xxx")
event.feature_flags["show_schedule"] = False
event.save()
response = client.get(f"/{event.slug}/schedule.xml")
assert response.status_code == 404
assert slot.submission.title not in response.content.decode()
| 38 | 87 | 0.760338 | 151 | 1,064 | 5.145695 | 0.370861 | 0.061776 | 0.03861 | 0.054054 | 0.700129 | 0.700129 | 0.700129 | 0.700129 | 0.700129 | 0.700129 | 0 | 0.008574 | 0.12312 | 1,064 | 27 | 88 | 39.407407 | 0.824223 | 0 | 0 | 0.521739 | 0 | 0 | 0.087406 | 0.048872 | 0 | 0 | 0 | 0 | 0.173913 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
5492f1d513149d3e221df667fe40e04a4dea807c | 746 | py | Python | server/src/tests/samples/annotations1.py | higoshi/pyright | 183c0ef56d2c010d28018149949cda1a40aa59b8 | [
"MIT"
] | null | null | null | server/src/tests/samples/annotations1.py | higoshi/pyright | 183c0ef56d2c010d28018149949cda1a40aa59b8 | [
"MIT"
] | null | null | null | server/src/tests/samples/annotations1.py | higoshi/pyright | 183c0ef56d2c010d28018149949cda1a40aa59b8 | [
"MIT"
] | null | null | null | # THis sample tests the handling of type annotations within a
# python source file (as opposed to a stub file).
from typing import Optional
class ClassA:
# This should generate an error because ClassA
# is not yet defined at the time it's used.
def func0(self) -> Optional[ClassA]:
return None
class ClassB(ClassA):
def func1(self) -> ClassA:
return ClassA()
# This should generate an error because ClassC
# is a forward reference, which is not allowed
# in a python source file.
def func2(self) -> Optional[ClassC]:
return None
def func3(self) -> "Optional[ClassC]":
return None
def func4(self) -> Optional["ClassC"]:
return None
class ClassC:
pass
| 21.314286 | 61 | 0.655496 | 102 | 746 | 4.794118 | 0.529412 | 0.09816 | 0.110429 | 0.147239 | 0.339468 | 0.282209 | 0.155419 | 0 | 0 | 0 | 0 | 0.009141 | 0.266756 | 746 | 34 | 62 | 21.941176 | 0.884826 | 0.414209 | 0 | 0.266667 | 1 | 0 | 0.051643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.066667 | 0.066667 | 0.333333 | 0.933333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
54abe1aba83426816dfd1ecfe5afa985b4854f55 | 8,907 | py | Python | src/tests/test_pagure_flask_api_project_git_alias.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_api_project_git_alias.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_api_project_git_alias.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
"""
(c) 2020 - Copyright Red Hat Inc
Authors:
Pierre-Yves Chibon <pingou@pingoured.fr>
"""
from __future__ import unicode_literals, absolute_import
import unittest
import shutil
import sys
import os
import json
import pygit2
from mock import patch, MagicMock
sys.path.insert(
0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "..")
)
import pagure.api
import pagure.flask_app
import pagure.lib.query
import tests
def set_projects_up(self):
tests.create_projects(self.session)
tests.create_projects_git(os.path.join(self.path, "repos"), bare=True)
tests.add_content_git_repo(os.path.join(self.path, "repos", "test.git"))
tests.create_tokens(self.session)
tests.create_tokens_acl(self.session)
self.session.commit()
def set_up_board(self):
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
data = json.dumps({"dev": {"active": True, "tag": "dev"}})
output = self.app.post("/api/0/test/boards", headers=headers, data=data)
self.assertEqual(output.status_code, 200)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"boards": [
{
"active": True,
"full_url": "http://localhost.localdomain/test/boards/dev",
"name": "dev",
"status": [],
"tag": {
"tag": "dev",
"tag_color": "DeepBlueSky",
"tag_description": "",
},
}
]
},
)
class PagureFlaskApiProjectGitAliastests(tests.SimplePagureTest):
""" Tests for flask API for branch alias in pagure """
maxDiff = None
def setUp(self):
super(PagureFlaskApiProjectGitAliastests, self).setUp()
set_projects_up(self)
self.repo_obj = pygit2.Repository(
os.path.join(self.path, "repos", "test.git")
)
def test_api_git_alias_view_no_project(self):
output = self.app.get("/api/0/invalid/git/alias")
self.assertEqual(output.status_code, 404)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data, {"error": "Project not found", "error_code": "ENOPROJECT"}
)
def test_api_git_alias_view_empty(self):
output = self.app.get("/api/0/test/git/alias")
self.assertEqual(output.status_code, 200)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(data, {})
def test_api_new_git_alias_no_data(self):
data = "{}"
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/new", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Invalid or incomplete input submitted",
"error_code": "EINVALIDREQ",
},
)
def test_api_new_git_alias_invalid_data(self):
data = json.dumps({"dev": "foobar"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/new", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Invalid or incomplete input submitted",
"error_code": "EINVALIDREQ",
},
)
def test_api_new_git_alias_missing_data(self):
data = json.dumps({"alias_from": "mster"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/new", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Invalid or incomplete input submitted",
"error_code": "EINVALIDREQ",
},
)
def test_api_new_git_alias_no_existant_branch(self):
data = json.dumps({"alias_from": "master", "alias_to": "main"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/new", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Branch not found in this git repository",
"error_code": "EBRANCHNOTFOUND",
},
)
def test_api_new_git_alias(self):
data = json.dumps({"alias_from": "main", "alias_to": "master"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/new", headers=headers, data=data
)
self.assertEqual(output.status_code, 200)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(data, {"refs/heads/main": "refs/heads/master"})
def test_api_drop_git_alias_no_data(self):
data = "{}"
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/drop", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Invalid or incomplete input submitted",
"error_code": "EINVALIDREQ",
},
)
def test_api_drop_git_alias_invalid_data(self):
data = json.dumps({"dev": "foobar"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/drop", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Invalid or incomplete input submitted",
"error_code": "EINVALIDREQ",
},
)
def test_api_drop_git_alias_missing_data(self):
data = json.dumps({"alias_from": "mster"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/drop", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Invalid or incomplete input submitted",
"error_code": "EINVALIDREQ",
},
)
def test_api_drop_git_alias_no_existant_branch(self):
data = json.dumps({"alias_from": "master", "alias_to": "main"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/drop", headers=headers, data=data
)
self.assertEqual(output.status_code, 400)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(
data,
{
"error": "Branch not found in this git repository",
"error_code": "EBRANCHNOTFOUND",
},
)
def test_api_drop_git_alias(self):
data = json.dumps({"alias_from": "main", "alias_to": "master"})
headers = {
"Authorization": "token aaabbbcccddd",
"Content-Type": "application/json",
}
output = self.app.post(
"/api/0/test/git/alias/drop", headers=headers, data=data
)
self.assertEqual(output.status_code, 200)
data = json.loads(output.get_data(as_text=True))
self.assertDictEqual(data, {})
if __name__ == "__main__":
unittest.main(verbosity=2)
| 30.820069 | 79 | 0.559448 | 943 | 8,907 | 5.109226 | 0.149523 | 0.039851 | 0.035077 | 0.072852 | 0.790369 | 0.789539 | 0.77418 | 0.74367 | 0.71814 | 0.71814 | 0 | 0.009936 | 0.310767 | 8,907 | 288 | 80 | 30.927083 | 0.774882 | 0.017514 | 0 | 0.523013 | 0 | 0 | 0.217337 | 0.034352 | 0 | 0 | 0 | 0 | 0.108787 | 1 | 0.062762 | false | 0 | 0.050209 | 0 | 0.121339 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
54b0f4c2d58be5f5ba765b9e6c4062c2648fdf5f | 150 | py | Python | package/custompack/custom.py | bygui86/python-practical-programming-tutorial | cec941dc971b53018c6bd1d085eca84969a85502 | [
"Apache-2.0"
] | null | null | null | package/custompack/custom.py | bygui86/python-practical-programming-tutorial | cec941dc971b53018c6bd1d085eca84969a85502 | [
"Apache-2.0"
] | null | null | null | package/custompack/custom.py | bygui86/python-practical-programming-tutorial | cec941dc971b53018c6bd1d085eca84969a85502 | [
"Apache-2.0"
] | 1 | 2019-08-21T14:35:28.000Z | 2019-08-21T14:35:28.000Z |
class PackageTest():
msg = None
def __init__(self, msg):
self.msg = msg
def __str__(self):
return "Message: " + self.msg
| 18.75 | 37 | 0.566667 | 18 | 150 | 4.277778 | 0.555556 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.313333 | 150 | 7 | 38 | 21.428571 | 0.747573 | 0 | 0 | 0 | 0 | 0 | 0.060403 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
54c0902f488e8aaa23b1571e45dbdaf8df9f499f | 1,451 | py | Python | tests/test_footer_scale.py | mdales/CMForestPlots | 1cef9d4f59b6910c7adb7b1f36fb08426a70ce5f | [
"Apache-2.0"
] | null | null | null | tests/test_footer_scale.py | mdales/CMForestPlots | 1cef9d4f59b6910c7adb7b1f36fb08426a70ce5f | [
"Apache-2.0"
] | null | null | null | tests/test_footer_scale.py | mdales/CMForestPlots | 1cef9d4f59b6910c7adb7b1f36fb08426a70ce5f | [
"Apache-2.0"
] | 1 | 2019-09-15T14:12:51.000Z | 2019-09-15T14:12:51.000Z | import unittest
from forestplots import SPSSForestPlot
class DecodeExampleSPSSForestPlotFooterScale(unittest.TestCase):
    def test_example_1(self):
        example = """
enema nore gaomeenenenreennreneena genera aaa
0.01 0.4 1 10 100
Favours [Pedicle screw] Favours [Hybrid Instrumentation]
"""
        groups, mid_scale = SPSSForestPlot._decode_footer_scale_ocr(example)
        self.assertEqual(groups, ("Pedicle screw", "Hybrid Instrumentation"))
        self.assertEqual(mid_scale, 1.0)

    def test_example_2(self):
        example = """
NN EE —
-10 5 0 5 10
Favours [Pedicle screw] Favours [Hybrid Instrumentation}
"""
        groups, mid_scale = SPSSForestPlot._decode_footer_scale_ocr(example)
        self.assertEqual(groups, ("Pedicle screw", "Hybrid Instrumentation"))
        self.assertEqual(mid_scale, 0.0)

    def test_example_3(self):
        example = """
NN EE —
-10 5 0 5 10
Favours [Pedicle Screw] Favours [Hybrid Instrumentation]
"""
        groups, mid_scale = SPSSForestPlot._decode_footer_scale_ocr(example)
        self.assertEqual(groups, ("Pedicle Screw", "Hybrid Instrumentation"))
        self.assertEqual(mid_scale, 0.0)

    def test_example_4(self):
        example = """
0.01 01 1 10 100
Favours [WBRT plus TMZ] Favours (WBRT]
"""
        groups, mid_scale = SPSSForestPlot._decode_footer_scale_ocr(example)
        self.assertEqual(groups, ("WBRT plus TMZ", "WBRT"))
        self.assertEqual(mid_scale, 1.0)
| 30.87234 | 77 | 0.700896 | 180 | 1,451 | 5.483333 | 0.244444 | 0.064843 | 0.056738 | 0.113475 | 0.727457 | 0.727457 | 0.700101 | 0.700101 | 0.700101 | 0.700101 | 0 | 0.041096 | 0.195038 | 1,451 | 46 | 78 | 31.543478 | 0.802226 | 0 | 0 | 0.578947 | 0 | 0 | 0.317712 | 0.015851 | 0 | 0 | 0 | 0 | 0.210526 | 1 | 0.105263 | false | 0 | 0.052632 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
49ab03ba3d5f3dec291945719a2eef83f888b1a6 | 79 | py | Python | charcoal/__init__.py | cknv/statsd | 95148bfebeff5dc0995397c21cfbdc762730852e | [
"MIT"
] | null | null | null | charcoal/__init__.py | cknv/statsd | 95148bfebeff5dc0995397c21cfbdc762730852e | [
"MIT"
] | 1 | 2016-04-18T19:44:36.000Z | 2016-04-18T19:44:36.000Z | charcoal/__init__.py | cknv/charcoal | 95148bfebeff5dc0995397c21cfbdc762730852e | [
"MIT"
] | null | null | null | """Simple StatsD client, with minimul fuzz."""
from .client import StatsClient
| 26.333333 | 46 | 0.759494 | 10 | 79 | 6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126582 | 79 | 2 | 47 | 39.5 | 0.869565 | 0.506329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3f779b443aadc47848ae08f64c5935174a1088aa | 128 | py | Python | apps/spider/admin.py | pjpv/python_django_video | 0cfa5c568ed2e1b6adea2b2c27aa0dd4f74b417f | [
"MIT"
] | null | null | null | apps/spider/admin.py | pjpv/python_django_video | 0cfa5c568ed2e1b6adea2b2c27aa0dd4f74b417f | [
"MIT"
] | null | null | null | apps/spider/admin.py | pjpv/python_django_video | 0cfa5c568ed2e1b6adea2b2c27aa0dd4f74b417f | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import spiderModel
# Register your models here.
admin.site.register(spiderModel)
| 21.333333 | 32 | 0.820313 | 17 | 128 | 6.176471 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117188 | 128 | 5 | 33 | 25.6 | 0.929204 | 0.203125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3f8dc7a3f77035aeea36e350543b4661af9ac6eb | 95,097 | py | Python | classes.py | kawanovs/WI_Data_Handling_Tool_v2 | 6c9d6f27d684ebeb2d5b3bb676a4093fc90e1b99 | [
"MIT"
] | null | null | null | classes.py | kawanovs/WI_Data_Handling_Tool_v2 | 6c9d6f27d684ebeb2d5b3bb676a4093fc90e1b99 | [
"MIT"
] | null | null | null | classes.py | kawanovs/WI_Data_Handling_Tool_v2 | 6c9d6f27d684ebeb2d5b3bb676a4093fc90e1b99 | [
"MIT"
] | null | null | null | import json
import os
from bs4 import BeautifulSoup
import re
import statistics
from datetime import datetime, date
from xml.dom import minidom
from xml.etree.ElementTree import tostring, SubElement, Element, ElementTree
import plotly
import plotly.graph_objs as go
from plotly import subplots
import pandas as pd
from tabulate import tabulate
class Configuration:
    """Configurations"""

    def serviceTypeOptions(self):
        file = open('configuration/service type.txt')
        choices = []
        servicetype = []
        i = 0
        for line in file:
            index = line.find('\n')
            line1 = line[:index]
            servicetype.append(line1.split(' : '))
            choices.append((servicetype[i][1], servicetype[i][0]))
            i += 1
        return servicetype, choices

    def serviceTypeOptionsforXML(self):
        file = open('configuration/service type for XML.txt')
        choices = []
        servicetype = []
        i = 0
        for line in file:
            index = line.find('\n')
            line1 = line[:index]
            servicetype.append(line1.split(' : '))
            choices.append((servicetype[i][1], servicetype[i][0]))
            i += 1
        return servicetype, choices

    def dataTypeOptions(self):
        file = open('configuration/data type.txt')
        choices = []
        datatype = []
        i = 0
        for line in file:
            index = line.find('\n')
            line1 = line[:index]
            datatype.append(line1.split(' : '))
            choices.append((datatype[i][1], datatype[i][0]))
            i += 1
        return datatype, choices

    def KDIunits(self):
        dataframe1 = pd.read_excel('configuration/KDIunits.xlsx', index_col=0)
        dataframe1.columns = ['Units', 'Description']
        return dataframe1
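
# Usage sketch (hypothetical file contents; assumes each line of
# 'configuration/service type.txt' reads like "WL : Wireline"):
#   servicetype, choices = Configuration().serviceTypeOptions()
#   # servicetype -> [['WL', 'Wireline'], ...]
#   # choices     -> [('Wireline', 'WL'), ...]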
class IndexType:
    """Functions to find index, index curve"""

    def findindex(self, file, type1):
        # determine indexes [depth, time] for visualization only
        index1 = None
        index2 = None
        if type1 == 'las':
            for curve in file.curves:
                if str(curve.mnemonic).lower().find(r'tim') != -1:
                    index1 = 'Time'
                elif str(curve.mnemonic).lower().find(r'dep') != -1:
                    index2 = 'Depth'
        elif type1 == 'csv':
            for col in file.columns:
                if str(col).lower().find(r'tim') != -1:
                    index1 = 'Time'
                elif str(col).lower().find(r'dept') != -1:
                    index2 = 'Depth'
        elif type1 == 'dlis':
            f = file
            indextype = []
            for frame in f.frames:
                indextype.append(frame.index_type)
            indexset = set(indextype)
            indextype1 = list(indexset)
            for each in indextype1:
                if str(each).lower().find(r'tim') != -1:
                    index1 = 'Time'
                elif str(each).lower().find(r'dept') != -1:
                    index2 = 'Depth'
        elif type1 == 'xml':
            Bs_data = BeautifulSoup(file, "xml")
            line1 = Bs_data.find_all('indexType')
            line0 = []
            for row in line1:
                line0.append(row.get_text())
            line = str(line0[0])
            if line.lower().find(r'tim') != -1:
                index1 = 'Time'
            elif line.lower().find(r'dep') != -1:
                index2 = 'Depth'
        if index1 is not None and index2 is not None:
            indextype = index1 + ', ' + index2
        elif index1 is not None:
            indextype = index1
        elif index2 is not None:
            indextype = index2
        else:
            indextype = 'Not found'
        return indextype, index1, index2

    def LASmnemonic(self, indextype, lf):
        # find index curve in LAS
        if indextype == 'Time':
            timemnem = None  # fallback when no time curve is recognized
            j = 0
            for curve in lf.curves:
                if str(curve['mnemonic']).find(r'ETIM') != -1:
                    timemnem = curve['mnemonic']
                    j += 1
            if j == 0:
                for curve in lf.curves:
                    if str(curve['mnemonic']).find(r'TIM') != -1:
                        timemnem = curve['mnemonic']
                        break
            return timemnem
        elif indextype == 'Depth':
            depthmnem = None  # fallback when no depth curve is recognized
            for curve in lf.curves:
                if str(curve['mnemonic']).lower().find(r'dep') != -1:
                    depthmnem = curve['mnemonic']
            return depthmnem

    def CSVindex(self, df2):
        # find csv index type
        indexType = None  # fallback when no index column is recognized
        for col in df2.columns:
            if col.lower().find('dept') != -1:
                indexType = 'measured depth'
                break
            elif col.lower().find('time') != -1:
                indexType = 'date time'
                break
        return indexType
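
# Usage sketch (assumes a LAS file already loaded with lasio as `lf`):
#   indextype, time_idx, depth_idx = IndexType().findindex(lf, 'las')
#   # e.g. ('Time, Depth', 'Time', 'Depth') when both index curves exist,
#   # or ('Not found', None, None) when neither mnemonic matches.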
class InputXMLprocessing:
    def curvesnumber(self, data1):
        # count curves by splitting the comma-separated mnemonic list
        Bs_data = BeautifulSoup(data1, "xml")
        line1 = Bs_data.find_all('mnemonicList')
        line0 = []
        for row in line1:
            line0.append(row.get_text())
        line = str(line0[0])
        index = line.find('>')
        index1 = line[index + 1:].find('<')
        indextype = line[index + 1:index + 1 + index1]
        curvesnumber = len(indextype.split(','))
        return curvesnumber

    def dataframeFromXml(self, data1):
        Bs_data = BeautifulSoup(data1, "xml")
        line1 = Bs_data.find_all('mnemonicList')
        line2 = Bs_data.find_all('unitList')
        line0 = []
        for row in line1:
            line0.append(row.get_text())
        line01 = []
        for row in line2:
            line01.append(row.get_text())
        line = str(line0[0])
        mnem = line.split(',')
        line = str(line01[0])
        units = line.split(',')
        curves = []
        i = 0
        for each in mnem:
            string = each + ' ' + units[i]
            curves.append(string)
            i += 1
        # strip whitespace from the headers (reassigning the loop variable,
        # as the original did, had no effect)
        curves = [curve.strip() for curve in curves]
        line1 = Bs_data.find_all('data')
        line0 = []
        for row in line1:
            line0.append(row.get_text())
        datablock = []
        for line in line0:
            x = line.split(',')
            datablock.append(x)
        df = pd.DataFrame(data=datablock, columns=curves)
        return df
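
# Usage sketch (hypothetical WITSML fragment): dataframeFromXml joins each
# mnemonic with its unit to form the column headers, so for
#   <mnemonicList>DEPT,ROP</mnemonicList><unitList>m,m/h</unitList>
# the call
#   df = InputXMLprocessing().dataframeFromXml(xml_string)
# returns a DataFrame with columns ['DEPT m', 'ROP m/h'] and one row per
# <data> node.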
class DLISprocessing:
    """Functions to process DLIS file"""

    def dlisInfo(self, f):
        # dlis summary
        indextype = []
        operation = []
        channelsnumber = 0
        for frame in f.frames:
            indextype.append(frame.index_type)
            operation.append(frame.direction)
            channelsnumber += int(len(frame.channels))
        indexset = set(indextype)
        indextype1 = list(indexset)
        operationset = set(operation)
        operation1 = list(operationset)
        ops = []
        for op in operation1:
            if op == 'DECREASING':
                ops.append('POOH')
            else:
                ops.append('RIH')
        return indextype1, channelsnumber, ops
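
# Usage sketch (assumes a logical file loaded with dlisio as `f`):
#   indextypes, nchannels, ops = DLISprocessing().dlisInfo(f)
#   # 'DECREASING' frames are reported as POOH (pulling out of hole),
#   # anything else as RIH (running in hole).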
class LASprocessing:
    def splitlogs(self, lf, repr):
        df0 = lf.df()
        df1 = df0
        for col in df0.columns:
            if str(col).lower().find(r'tim') != -1:
                coltime = col
                df1 = df1.drop(col, axis=1)
        df1 = df1.reset_index()
        RIH = []
        POOH = []
        index1 = 0
        index2 = 0
        for col in df1.columns:
            if str(col).lower().find(r'dept') != -1:
                col1 = col
        for i in range(len(df1)):
            if i != 0 and i != len(df1) - 1:
                if df1[col1].iloc[i - 1] < df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                    RIH.append(df1.iloc[i].tolist())
                    index1 += 1
                elif df1[col1].iloc[i - 1] > df1[col1].iloc[i] > df1[col1].iloc[i + 1]:
                    POOH.append(df1.iloc[i].tolist())
                    index2 += 1
                elif df1[col1].iloc[i - 1] < df1[col1].iloc[i] > df1[col1].iloc[i + 1] or \
                        df1[col1].iloc[i - 1] > df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                    RIH.append(df1.iloc[i].tolist())
                    index1 += 1
                    POOH.append(df1.iloc[i].tolist())
                    index2 += 1
                elif df1[col1].iloc[i - 1] != df1[col1].iloc[i] and df1[col1].iloc[i] == df1[col1].iloc[i + 1]:
                    j1 = i
                elif df1[col1].iloc[i - 1] == df1[col1].iloc[i] and df1[col1].iloc[i] != df1[col1].iloc[i + 1]:
                    j2 = i
                    if j1 != 0:
                        RIH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                RIH[index1].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                RIH[index1].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                RIH[index1].append(max(df1[col].iloc[j1:j2 + 1]))
                        index1 += 1
                        POOH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                POOH[index2].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                POOH[index2].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                POOH[index2].append(max(df1[col].iloc[j1:j2 + 1]))
                        index2 += 1
                    else:
                        if df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                            RIH.append([])
                            for col in df1.columns:
                                if repr == 'mean':
                                    RIH[index1].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'min':
                                    RIH[index1].append(min(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'max':
                                    RIH[index1].append(max(df1[col].iloc[j1:j2 + 1]))
                            index1 += 1
                        else:
                            POOH.append([])
                            for col in df1.columns:
                                if repr == 'mean':
                                    POOH[index2].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'min':
                                    POOH[index2].append(min(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'max':
                                    POOH[index2].append(max(df1[col].iloc[j1:j2 + 1]))
                            index2 += 1
            elif i == 0:
                if df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                    RIH.append(df1.iloc[i].tolist())
                    index1 += 1
                elif df1[col1].iloc[i] > df1[col1].iloc[i + 1]:
                    POOH.append(df1.iloc[i].tolist())
                    index2 += 1
                else:
                    j1 = 0
            elif i == len(df1) - 1:
                if df1[col1].iloc[i] > df1[col1].iloc[i - 1]:
                    RIH.append(df1.iloc[i].tolist())
                elif df1[col1].iloc[i] < df1[col1].iloc[i - 1]:
                    POOH.append(df1.iloc[i].tolist())
                else:
                    j2 = i
                    if df1[col1].iloc[j1 - 1] > df1[col1].iloc[j1]:
                        POOH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                POOH[index2].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                POOH[index2].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                POOH[index2].append(max(df1[col].iloc[j1:j2 + 1]))
                    else:
                        RIH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                RIH[index1].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                RIH[index1].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                RIH[index1].append(max(df1[col].iloc[j1:j2 + 1]))
        if len(RIH) != 0:
            RIH = RIH[:RIH.index(max(RIH)) + 1]
        if len(POOH) != 0:
            POOH = POOH[POOH.index(max(POOH)):]
        # if df0[coltime].iloc[0] > df0[coltime].iloc[1]:
        #     RIH1 = RIH
        #     RIH = POOH
        #     POOH = RIH1
        return RIH, POOH
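
# How splitlogs classifies samples (a sketch of the idea, not extra code):
# with depth as the index column, a monotonically increasing neighbourhood
#   depth[i-1] < depth[i] < depth[i+1]   ->  RIH (running in hole)
# a decreasing one -> POOH, a local extremum goes to both lists, and flat
# stretches (stationary tool) are collapsed into one row using the chosen
# `repr` statistic ('mean', 'min' or 'max').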
class CSVprocessing:
    """Functions to process CSV file"""

    def csvpreprocess(self, df0):
        # formatting csv for visualization
        df1 = pd.DataFrame()
        # drop empty columns
        for col in df0.columns:
            if df0[col].isnull().sum() != len(df0[col]):
                df1[col] = df0[col]
        # delete error cells
        for col in df1.columns:
            for i in range(len(df1[col])):
                if df1[col].iloc[i] == '-99999.99' or df1[col].iloc[i] == '-999.25':
                    df1[col].iloc[i] = None
        # drop NaNs
        df1 = df1.dropna(thresh=2)
        df1 = df1.reset_index(drop=True)
        return df1
    def splitlogs(self, df1, repr):
        for col in df1.columns:
            if str(col).lower().find(r'tim') != -1:
                df1 = df1.drop(col, axis=1)
        df1 = df1.reset_index(drop=True)
        df1 = self.csvnumeric(df1)
        RIH = []
        POOH = []
        index1 = 0
        index2 = 0
        for col in df1.columns:
            if str(col).lower().find('depth') != -1:
                col1 = col
        for i in range(len(df1)):
            if i != 0 and i != len(df1) - 1:
                if df1[col1].iloc[i - 1] < df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                    RIH.append(df1.iloc[i].tolist())
                    index1 += 1
                elif df1[col1].iloc[i - 1] > df1[col1].iloc[i] > df1[col1].iloc[i + 1]:
                    POOH.append(df1.iloc[i].tolist())
                    index2 += 1
                elif df1[col1].iloc[i - 1] < df1[col1].iloc[i] > df1[col1].iloc[i + 1] or \
                        df1[col1].iloc[i - 1] > df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                    RIH.append(df1.iloc[i].tolist())
                    index1 += 1
                    POOH.append(df1.iloc[i].tolist())
                    index2 += 1
                elif df1[col1].iloc[i - 1] != df1[col1].iloc[i] and df1[col1].iloc[i] == df1[col1].iloc[i + 1]:
                    j1 = i
                elif df1[col1].iloc[i - 1] == df1[col1].iloc[i] and df1[col1].iloc[i] != df1[col1].iloc[i + 1]:
                    j2 = i
                    if j1 != 0:
                        RIH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                RIH[index1].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                RIH[index1].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                RIH[index1].append(max(df1[col].iloc[j1:j2 + 1]))
                        index1 += 1
                        POOH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                POOH[index2].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                POOH[index2].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                POOH[index2].append(max(df1[col].iloc[j1:j2 + 1]))
                        index2 += 1
                    else:
                        if df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                            RIH.append([])
                            for col in df1.columns:
                                if repr == 'mean':
                                    RIH[index1].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'min':
                                    RIH[index1].append(min(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'max':
                                    RIH[index1].append(max(df1[col].iloc[j1:j2 + 1]))
                            index1 += 1
                        else:
                            POOH.append([])
                            for col in df1.columns:
                                if repr == 'mean':
                                    POOH[index2].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'min':
                                    POOH[index2].append(min(df1[col].iloc[j1:j2 + 1]))
                                elif repr == 'max':
                                    POOH[index2].append(max(df1[col].iloc[j1:j2 + 1]))
                            index2 += 1
            elif i == 0:
                if df1[col1].iloc[i] < df1[col1].iloc[i + 1]:
                    RIH.append(df1.iloc[i].tolist())
                    index1 += 1
                elif df1[col1].iloc[i] > df1[col1].iloc[i + 1]:
                    POOH.append(df1.iloc[i].tolist())
                    index2 += 1
                else:
                    j1 = 0
            elif i == len(df1) - 1:
                if df1[col1].iloc[i] > df1[col1].iloc[i - 1]:
                    RIH.append(df1.iloc[i].tolist())
                elif df1[col1].iloc[i] < df1[col1].iloc[i - 1]:
                    POOH.append(df1.iloc[i].tolist())
                else:
                    j2 = i
                    if df1[col1].iloc[j1 - 1] > df1[col1].iloc[j1]:
                        POOH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                POOH[index2].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                POOH[index2].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                POOH[index2].append(max(df1[col].iloc[j1:j2 + 1]))
                    else:
                        RIH.append([])
                        for col in df1.columns:
                            if repr == 'mean':
                                RIH[index1].append(statistics.mean(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'min':
                                RIH[index1].append(min(df1[col].iloc[j1:j2 + 1]))
                            elif repr == 'max':
                                RIH[index1].append(max(df1[col].iloc[j1:j2 + 1]))
        RIH = RIH[:RIH.index(max(RIH)) + 1]
        POOH = POOH[POOH.index(max(POOH)):]
        return RIH, POOH
    def operationDefine(self, index1, index2, df2):
        # determine RIH/POOH operation
        operation = 'No data'
        if index1 is not None and index2 is not None:
            RIH, POOH = self.splitlogs(df2, 'mean')
            if RIH != [] and POOH != []:
                operation = 'RIH, POOH'
            elif RIH:
                operation = 'RIH'
            elif POOH:
                operation = 'POOH'
        elif index1 is not None:
            operation = 'No data'
        elif index2 is not None:
            RIH, POOH = self.splitlogs(df2, 'mean')
            if RIH != [] and POOH != []:
                operation = 'RIH, POOH'
            elif RIH:
                operation = 'RIH'
            elif POOH:
                operation = 'POOH'
        else:
            operation = 'Not defined'
        return operation
    def csvcolumns(self, df0, x, y, c):
        df01 = self.csvpreprocess(df0)
        # update csv depending on column header location
        if x != '':
            x = int(x)
            columns = []
            for col in df01.columns:
                if y != '':
                    columns.append(str(df01[col].iloc[x]) + ', ' + str(df01[col].iloc[y]))
                else:
                    columns.append(str(df01[col].iloc[x]))
            df01.columns = columns
            df2 = pd.DataFrame(data=df01.iloc[c:].values, columns=df01.columns)
        else:
            columns = []
            for col in df01.columns:
                if y != '':
                    columns.append(str(col) + ', ' + str(df01[col].iloc[y]))
                else:
                    columns.append(str(col))
            df01.columns = columns
            df2 = pd.DataFrame(data=df01.iloc[c:].values, columns=df01.columns)
        return df2

    def csvnumeric(self, df1):
        for col in df1.columns:
            if str(col).lower().find('time') == -1:
                df1[col] = df1[col].astype('str')
                df1[col] = df1[col].astype('float')
        return df1
    def summary_dataframe(self, object, **kwargs):
        df = pd.DataFrame()
        for i, (key, value) in enumerate(kwargs.items()):
            list_of_values = []
            for item in object:
                try:
                    x = getattr(item, key)
                    list_of_values.append(x)
                except AttributeError:
                    # missing attribute: keep the row, leave the cell blank
                    list_of_values.append('')
                    continue
            df[value] = list_of_values
        return df.sort_values(df.columns[0])
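
# summary_dataframe usage sketch (hypothetical attribute names): one column
# per keyword argument, collected with getattr and blank where missing:
#   summary = CSVprocessing().summary_dataframe(lf.curves,
#                                               mnemonic='Mnemonic',
#                                               unit='Unit')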
class Visualization:
    """Functions to visualize data"""

    def generate_axis_title(self, mnemonic, descr, unit):
        if descr != '':
            title_words = descr.split(" ")
            current_line = ""
            lines = []
            for word in title_words:
                if len(current_line) + len(word) > 15:
                    lines.append(current_line[:-1])
                    current_line = ""
                current_line += "{} ".format(word)
            lines.append(current_line)
            title = "<br>".join(lines)
            if title[1] == " ":
                title = title[2:]
            elif title[2] == " ":
                title = title[3:]
            title += "<br>({})".format(unit)
        else:
            title = mnemonic
        return title
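
    # Example of the wrapping above (illustrative): a description of
    # "Annular pressure while drilling" with unit "kPa" becomes roughly
    # "Annular<br>pressure while<br>drilling<br>(kPa)" - ~15-character
    # lines joined with Plotly's <br> line breaks.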
def generate_curvesTime(self, lf, mnem):
# Visualize LAS from Time
plots = []
for i in range(len(lf.curves)):
if str(lf.curves[i]['mnemonic']).lower().find(r'tim') == -1:
plots.append([lf.curves[i]["mnemonic"]])
xvals = mnem
xvalsvalues = []
for k in range(len(lf.curves[xvals].data)):
if str(lf.curves[xvals].data[k]).find(r'NaN') == -1:
xvalsvalues.append(float(lf.curves[xvals].data[k]))
fig = subplots.make_subplots(
rows=len(plots), cols=1, shared_xaxes=True, horizontal_spacing=0.01, vertical_spacing=0.01, print_grid=True
)
for i in range(len(plots)):
list_of_floats = []
for k in range(len(lf.curves[plots[i][0]].data)):
if str(lf.curves[plots[i][0]].data[k]).find(r'NaN') == -1 and str(lf.curves[plots[i][0]].data[k]).find(
r'-999.25') == -1:
list_of_floats.append(float(lf.curves[plots[i][0]].data[k]))
fig.append_trace(
go.Scatter(
x=xvalsvalues,
y=list_of_floats,
name=lf.curves[plots[i][0]]["mnemonic"],
line={"dash": "solid", },
),
row=i + 1,
col=1,
)
fig["layout"]["yaxis{}".format(i + 1)].update(
title_text=self.generate_axis_title(lf.curves[plots[i][0]]["mnemonic"], lf.curves[plots[i][0]]["descr"],
lf.curves[plots[i][0]]["unit"]))
if i == len(plots) - 1:
fig.update_xaxes(
title_text=self.generate_axis_title(lf.curves[xvals]["mnemonic"], lf.curves[xvals]["descr"],
lf.curves[xvals]["unit"]), row=i + 1, col=1)
fig["layout"].update(
height=200 * len(plots),
width=1600,
font=dict(
size=10),
paper_bgcolor='rgba(0,0,0,0)',
hovermode="y",
margin=go.layout.Margin(r=100, t=100, b=50, l=80, autoexpand=True),
)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
return graphJSON
def generate_curves(self, lf, mnem):
# visualize LAS from Depth
plots = []
for i in range(len(lf.curves)):
if str(lf.curves[i]['mnemonic']).find(r'TIM') == -1 and str(lf.curves[i]['mnemonic']).find(r'DEP') == -1:
plots.append([lf.curves[i]["mnemonic"]])
yvals = mnem
yvalsvalues = []
for k in range(len(lf.curves[yvals].data)):
if str(lf.curves[yvals].data[k]).find(r'NaN') == -1:
yvalsvalues.append(float(lf.curves[yvals].data[k]))
rows1 = round(len(plots) / 8) + 1
if len(plots) < 8:
cols1 = len(plots)
else:
cols1 = 8
fig = subplots.make_subplots(
rows=rows1, cols=cols1, shared_yaxes=True, horizontal_spacing=0.01, vertical_spacing=0.05, print_grid=True
)
t = 0
for i in range(rows1):
for j in range(cols1):
if t < len(plots):
list_of_floats = []
for k in range(len(lf.curves[plots[t][0]].data)):
if str(lf.curves[plots[t][0]].data[k]).find(r'NaN') == -1 and str(
lf.curves[plots[t][0]].data[k]).find(r'-999.25') == -1:
list_of_floats.append(float(lf.curves[plots[t][0]].data[k]))
fig.append_trace(
go.Scatter(
x=list_of_floats,
y=yvalsvalues,
name=lf.curves[plots[t][0]]["mnemonic"],
line={"dash": "solid", },
),
row=i + 1,
col=j + 1,
)
fig["layout"]["xaxis{}".format(t + 1)].update(
title=go.layout.xaxis.Title(text=self.generate_axis_title(lf.curves[plots[t][0]]["mnemonic"],
lf.curves[plots[t][0]]["descr"],
lf.curves[plots[t][0]]["unit"]),
), side="top",
type="log" if lf.curves[plots[t][0]]["mnemonic"] in plots[1] else "linear",
mirror=True,
)
fig.update_yaxes(
title_text=self.generate_axis_title(lf.curves[yvals]["mnemonic"], lf.curves[yvals]["descr"],
lf.curves[yvals]["unit"]), autorange="reversed", row=i + 1,
col=1)
t += 1
fig["layout"].update(
height=1000 * rows1,
width=1600,
font=dict(
size=10),
paper_bgcolor='rgba(0,0,0,0)',
hovermode="y",
margin=go.layout.Margin(r=100, t=100, b=50, l=80, autoexpand=True),
)
fig.update_yaxes(showline=True, linewidth=0.2, spikedash='dash', linecolor='black', mirror=False)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
return graphJSON
def generate_curvesCSV(self, df):
# visualize CSV data from Time
for col in df.columns:
if str(col).lower().find(r'tim') != -1:
col1 = col
pass
random_x = df[col1].values
y = []
y1 = []
for col in df.columns:
if str(col).find(r'Time') == -1 and str(col).find(r'No.') == -1:
y.append(pd.to_numeric(df[col].values))
y1.append(col)
fig = subplots.make_subplots(
rows=len(y), cols=1, shared_xaxes=True, vertical_spacing=0.02, print_grid=True
)
for i in range(len(y)):
fig.append_trace(
go.Scatter(
x=random_x,
y=y[i],
name=y1[i],
line={"dash": "solid", }, ),
row=i + 1, col=1)
for i in range(len(y)):
fig.update_yaxes(
title_text=y1[i], row=i + 1, col=1)
fig.update_xaxes(title_text='Time', side='top', row=1, col=1, ticks='outside')
fig.update_xaxes(title_text='Time', side='bottom', row=len(y), col=1, ticks='outside')
fig["layout"].update(
height=250 * len(y),
width=1600,
font=dict(
size=10),
paper_bgcolor='rgba(0,0,0,0)',
hovermode="y",
margin=go.layout.Margin(r=100, t=100, b=50, l=80, autoexpand=True),
)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
return graphJSON
def generate_curvesDepthCSV(self, dataframe1):
for col in dataframe1.columns:
if str(col).lower().find('dept') != -1:
col1 = col
random_y = pd.to_numeric(dataframe1[col1].values)
x = []
x1 = []
for col in dataframe1.columns:
if str(col).lower().find(r'tim') == -1 or str(col).lower().find(r'dept') == -1:
x.append(pd.to_numeric(dataframe1[col].values))
x1.append(col)
fig = subplots.make_subplots(
rows=1, cols=len(x), shared_yaxes=True, vertical_spacing=0.05, print_grid=True
)
for i in range(len(x)):
fig.append_trace(
go.Scatter(
x=x[i],
y=random_y,
name=x1[i],
line={"dash": "solid", }, ),
row=1, col=i + 1)
for i in range(len(x)):
fig.update_xaxes(
title_text=x1[i], row=1, col=i + 1)
fig.update_yaxes(title_text='Depth', autorange='reversed', row=1, col=1)
fig["layout"].update(
height=1000,
width=1600,
font=dict(
size=10),
paper_bgcolor='rgba(0,0,0,0)',
hovermode="y",
margin=go.layout.Margin(r=100, t=100, b=50, l=80, autoexpand=False),
)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
return graphJSON
def curvesDepthDLIS(self, frame1):
curves = frame1.curves()
channels_names = []
for i in range(len(frame1.channels)):
if int(frame1.channels[i].dimension[0]) == 1:
x = str(frame1.channels[i]).find('(')
x1 = str(frame1.channels[i]).find(')')
channels_names.append([str(frame1.channels[i])[x + 1:x1],
str(frame1.channels[i].long_name) + ',' + str(frame1.channels[i].units)])
yvals = curves[channels_names[0][0]]
channels_names1 = channels_names[1:]
rows1 = round(len(channels_names1) / 5) + 1
cols1 = 5
fig = subplots.make_subplots(
rows=rows1, cols=cols1, shared_yaxes=True, horizontal_spacing=0.01, vertical_spacing=0.01, print_grid=True
)
t = 0
for i in range(rows1):
for j in range(cols1):
if t < len(channels_names1):
fig.append_trace(
go.Scatter(
x=curves[channels_names1[t][0]],
y=yvals,
name=channels_names1[t][1],
line={"dash": "solid", },
),
row=i + 1,
col=j + 1,
)
fig["layout"]["xaxis{}".format(t + 1)].update(
title=channels_names1[t][1], side="top",
mirror=True,
)
fig.update_yaxes(
title_text=channels_names[0][1], row=i + 1, col=1)
t += 1
fig["layout"].update(
height=650 * rows1,
width=1600,
font=dict(
size=10),
paper_bgcolor='rgba(0,0,0,0)',
hovermode="y",
margin=go.layout.Margin(r=100, t=100, b=50, l=80, autoexpand=False),
)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
return graphJSON
pass
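
# Each generate_* method returns the figure serialized via
# plotly.utils.PlotlyJSONEncoder. A minimal sketch of reusing such a
# payload back in Python (assumes plotly is installed; `graphJSON` comes
# from any method above):
#   fig = plotly.io.from_json(graphJSON)
#   fig.show()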
class CheckFunctions:
"""Functions to check files according to the WD and SiteCom requirements"""
def unitsrecognized(self, data, type1):
dataframe1 = Configuration().KDIunits()
recognized = []
if type1 == 'las':
lf = data
for curve in lf.curves:
j = 0
for i in range(len(dataframe1)):
if dataframe1['Units'].iloc[i] == curve.unit:
recognized.append(dataframe1['Units'].iloc[i])
j += 1
if j == 0:
recognized.append('Not found')
return recognized
elif type1 == 'csv':
df2 = data
mnemoniclist = []
units = []
for col in df2.columns:
mnemoniclist.append(col.split(",")[0])
units.append(col.split(",")[1].replace(" ", ""))
k = 0
for unit in units:
j = 0
for i in range(len(dataframe1)):
if dataframe1['Units'].iloc[i] == unit:
recognized.append(dataframe1['Units'].iloc[i])
j += 1
if j == 0:
recognized.append('')
k += 1
return recognized, mnemoniclist, units
elif type1 == 'xml':
Bs_data = BeautifulSoup(data, "xml")
line1 = Bs_data.find_all('mnemonicList')
line2 = Bs_data.find_all('unitList')
line0 = []
for row in line1:
line0.append(row.get_text())
line01 = []
for row in line2:
line01.append(row.get_text())
line = str(line0[0])
mnemoniclist = line.split(',')
line = str(line01[0])
units = line.split(',')
k = 0
for unit in units:
j = 0
for i in range(len(dataframe1)):
if dataframe1['Units'].iloc[i] == unit:
recognized.append(dataframe1['Units'].iloc[i])
j += 1
if j == 0:
recognized.append('')
k += 1
return recognized, mnemoniclist, units
def checklasfunction(self, lf):
if lf.curves[0].descr.lower().find('dept') != -1:
indexType = 'measured depth'
elif lf.curves[0].descr.lower().find('time') != -1:
indexType = 'date time'
structure = []
# check_structure
st, st1 = Configuration().serviceTypeOptions()
WD_equipmentType = dict(st)
WD_equipmentType_list = list(WD_equipmentType.values())
st, st1 = Configuration().dataTypeOptions()
WD_dataType = dict(st)
WD_dataType_list = list(WD_dataType.values())
file = open('configuration/lognames.txt')
lognames = [line.rstrip('\n') for line in file]
if indexType == 'date time':
for curve in lf.curves:
if re.search(r'^[A-Z]+_[A-Z]+_[A-Z]+$', curve.mnemonic) is not None:
structure.append('Yes')
else:
structure.append('No')
else:
for curve in lf.curves:
if re.search(r'^[A-Z]+_[A-Z]+_[0-9]+_[A-Z]+$', curve.mnemonic) is not None:
structure.append('Yes')
else:
structure.append('No')
equipmenttype = []
datatype = []
runnumbers = []
lognamesrec = []
for i in range(len(lf.curves)):
if structure[i] == 'Yes':
s1 = lf.curves[i].mnemonic.split('_')
k = 0
for mnem in WD_equipmentType_list:
if s1[0] == mnem:
k += 1
equipmenttype.append(mnem)
if k == 0:
equipmenttype.append('Not found')
k = 0
for item in WD_dataType_list:
if s1[1] == item:
datatype.append(item)
k += 1
if k == 0:
datatype.append('Not found')
if indexType == 'date time':
k = 0
for item in lognames:
if s1[2] == item:
lognamesrec.append(item)
k += 1
if k == 0:
lognamesrec.append('Not found')
else:
k = 0
if re.search('[0-9]', s1[2]) is not None:
runnumbers.append(s1[2])
k += 1
if k == 0:
runnumbers.append('Not found')
k = 0
for item in lognames:
if s1[3] == item:
lognamesrec.append(item)
k += 1
if k == 0:
lognamesrec.append('Not found')
else:
equipmenttype.append('Not found')
datatype.append('Not found')
runnumbers.append('Not found')
lognamesrec.append('Not found')
return structure, equipmenttype, datatype, runnumbers, lognamesrec
    def lastimestamp(self, lf):
        if lf.curves[0].descr.lower().find('time') != -1:
            string1 = str(lf.curves[0].data[0])
            check = re.search(
                '^[0-9]{4}(-)[0-9]{2}(-)[0-9]{2}(T)[0-9]{2}(:)[0-9]{2}(:)[0-9]{2}(.)[0-9]{3}.[0-9]{2}(:)[0-9]{2}$',
                string1)
            check1 = re.search('^[0-9]{4}(-)[0-9]{2}(-)[0-9]{2}(T)[0-9]{2}(:)[0-9]{2}(:)[0-9]{2}(.)[0-9]{3}Z$', string1)
            if check is not None or check1 is not None:
                result = 'Correct'
            else:
                result = 'Incorrect'
        else:
            result = 'No timestamp - Depth Index'
        return result
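
    # Timestamps the two patterns above accept (illustrative):
    #   2021-03-05T10:15:30.123+02:00   (offset form)
    #   2021-03-05T10:15:30.123Z        (UTC 'Z' form)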
def lasWDtags(self, lf):
description = ''
serviceCategory = ''
dataSource = ''
for item in lf.well:
if str(item).find('description') != -1:
description += 'Yes'
if str(item).find('serviceCategory') != -1:
serviceCategory += 'Yes'
if str(item).find('dataSource') != -1:
dataSource += 'Yes'
for item in lf.other:
if str(item).find('description') != -1:
description += 'Yes'
if str(item).find('serviceCategory') != -1:
serviceCategory += 'Yes'
if str(item).find('dataSource') != -1:
dataSource += 'Yes'
for item in lf.params:
if str(item).find('description') != -1:
description += 'Yes'
if str(item).find('serviceCategory') != -1:
serviceCategory += 'Yes'
if str(item).find('dataSource') != -1:
dataSource += 'Yes'
if description == '':
description = 'No data'
if serviceCategory == '':
serviceCategory = 'No data'
if dataSource == '':
dataSource = 'No data'
return description, serviceCategory, dataSource
def checkcsvfunction(self, indexType, mnemoniclist):
structure = []
# check_structure
st, st1 = Configuration().serviceTypeOptions()
WD_equipmentType = dict(st)
WD_equipmentType_list = list(WD_equipmentType.values())
st, st1 = Configuration().dataTypeOptions()
WD_dataType = dict(st)
WD_dataType_list = list(WD_dataType.values())
file = open('configuration/lognames.txt')
lognames = [line.rstrip('\n') for line in file]
if indexType == 'date time':
for mnem in mnemoniclist:
if re.search(r'^[A-Z]+_[A-Z]+_[A-Z]+$', str(mnem)) is not None:
structure.append('Yes')
else:
structure.append('No')
else:
for mnem in mnemoniclist:
if re.search(r'^[A-Z]+_[A-Z]+_[0-9]+_[A-Z]+$', str(mnem)) is not None:
structure.append('Yes')
else:
structure.append('No')
equipmenttype = []
datatype = []
runnumbers = []
lognamesrec = []
for i in range(len(mnemoniclist)):
if structure[i] == 'Yes':
s1 = mnemoniclist[i].split('_')
k = 0
for mnem in WD_equipmentType_list:
if s1[0] == mnem:
k += 1
equipmenttype.append(mnem)
if k == 0:
equipmenttype.append('Not found')
k = 0
for item in WD_dataType_list:
if s1[1] == item:
datatype.append(item)
k += 1
if k == 0:
datatype.append('Not found')
if indexType == 'date time':
k = 0
for item in lognames:
if s1[2] == item:
lognamesrec.append(item)
k += 1
if k == 0:
lognamesrec.append('Not found')
else:
k = 0
if re.search('[0-9]', s1[2]) is not None:
runnumbers.append(s1[2])
k += 1
if k == 0:
runnumbers.append('Not found')
k = 0
for item in lognames:
if s1[3] == item:
lognamesrec.append(item)
k += 1
if k == 0:
lognamesrec.append('Not found')
else:
equipmenttype.append('Not found')
datatype.append('Not found')
runnumbers.append('Not found')
lognamesrec.append('Not found')
return structure, equipmenttype, datatype, runnumbers, lognamesrec
def csvtimestamp(self, df2):
k = 0
for col in df2.columns:
if col.lower().find(r'tim') != -1:
string1 = str(df2[col].iloc[0])
check = re.search(
'^[0-9]{4}(-)[0-9]{2}(-)[0-9]{2}(T)[0-9]{2}(:)[0-9]{2}(:)[0-9]{2}(.)[0-9]{3}.[0-9]{2}(:)[0-9]{2}$',
string1)
check1 = re.search('^[0-9]{4}(-)[0-9]{2}(-)[0-9]{2}(T)[0-9]{2}(:)[0-9]{2}(:)[0-9]{2}(.)[0-9]{3}Z$',
string1)
if check is not None or check1 is not None:
result = 'Correct'
else:
result = 'Incorrect'
k += 1
if k == 0:
result = 'No timestamp - Depth Index'
return result
def csvWDtags(self, df2):
description = ''
serviceCategory = ''
dataSource = ''
x = df2.to_string()
if x.find('description') != -1:
description += 'Yes'
if x.find('serviceCategory') != -1:
serviceCategory += 'Yes'
if x.find('dataSource') != -1:
dataSource += 'Yes'
if description == '':
description = 'No data'
if serviceCategory == '':
serviceCategory = 'No data'
if dataSource == '':
dataSource = 'No data'
return description, serviceCategory, dataSource
def xmlWDtags(self, data1):
description = 'No data'
serviceCategory = 'No data'
dataSource = 'No data'
Bs_data = BeautifulSoup(data1, "xml")
line1 = Bs_data.find_all('description')
line2 = Bs_data.find_all('serviceCategory')
line3 = Bs_data.find_all('dataSource')
if len(line1) != 0:
description = 'Yes'
if len(line2) != 0:
serviceCategory = 'Yes'
if len(line3) != 0:
dataSource = 'Yes'
line0 = []
for row in line2:
line0.append(row.get_text())
line00 = line0[0].split(',')
if len(line00) == 4:
servicetype, choices = Configuration().serviceTypeOptions()
strreg = ''
for service in servicetype:
if line00[2] == service[1]:
strreg = 'Yes'
datatype, choices = Configuration().dataTypeOptions()
strreg1 = ''
for datat in datatype:
if line00[3] == datat[1]:
strreg1 = 'Yes'
if strreg == 'Yes' and strreg1 == 'Yes':
result = 'Recognized'
else:
result = 'Not found'
else:
result = 'No tag'
return description, serviceCategory, dataSource, result
def xmlKDItags(self, data1):
Bs_data = BeautifulSoup(data1, "xml")
line1 = Bs_data.find_all('indexType')
index = []
for row in line1:
index.append(row.get_text())
if index[0] == 'date time':
mandatory = ['name', 'indexType', 'minDateTimeIndex', 'maxDateTimeIndex', 'typeLogData', 'mnemonicList',
'unitList']
else:
mandatory = ['name', 'indexType', 'minIndex', 'maxIndex', 'typeLogData', 'mnemonicList', 'unitList']
missing = []
for each in mandatory:
line = Bs_data.find_all(each)
if len(line) == 0:
missing.append(each)
if len(missing) != 0:
missing_string = 'Missing tags: ' + ','.join(missing)
else:
missing_string = 'No missing tags'
line1 = Bs_data.find_all('mnemonicList')
line2 = Bs_data.find_all('unitList')
line0 = []
for row in line1:
line0.append(row.get_text())
line01 = []
for row in line2:
line01.append(row.get_text())
line = str(line0[0])
mnem = line.split(',')
line = str(line01[0])
units = line.split(',')
if len(mnem) != len(units):
missing_string += '\n Mnemonics do not correspond to units'
return missing_string
def checkdlisfunction(self, indexType, mnemoniclist):
structure = []
# check_structure
st, st1 = Configuration().serviceTypeOptions()
WD_equipmentType = dict(st)
WD_equipmentType_list = list(WD_equipmentType.values())
st, st1 = Configuration().dataTypeOptions()
WD_dataType = dict(st)
WD_dataType_list = list(WD_dataType.values())
file = open('configuration/lognames.txt')
lognames = [line.rstrip('\n') for line in file]
if indexType == 'date time':
for mnem in mnemoniclist:
if re.search(r'^[A-Z]+_[A-Z]+_[A-Z]+$', str(mnem)) is not None:
structure.append('Yes')
else:
structure.append('No')
else:
for mnem in mnemoniclist:
if re.search(r'^[A-Z]+_[A-Z]+_[0-9]+_[A-Z]+$', str(mnem)) is not None:
structure.append('Yes')
else:
structure.append('No')
equipmenttype = []
datatype = []
runnumbers = []
lognamesrec = []
for i in range(len(mnemoniclist)):
if structure[i] == 'Yes':
s1 = mnemoniclist[i].split('_')
k = 0
for mnem in WD_equipmentType_list:
if s1[0] == mnem:
k += 1
equipmenttype.append(mnem)
if k == 0:
equipmenttype.append('Not found')
k = 0
for item in WD_dataType_list:
if s1[1] == item:
datatype.append(item)
k += 1
if k == 0:
datatype.append('Not found')
if indexType == 'date time':
k = 0
for item in lognames:
if s1[2] == item:
lognamesrec.append(item)
k += 1
if k == 0:
lognamesrec.append('Not found')
else:
k = 0
if re.search('[0-9]', s1[2]) is not None:
runnumbers.append(s1[2])
k += 1
if k == 0:
runnumbers.append('Not found')
k = 0
for item in lognames:
if s1[3] == item:
lognamesrec.append(item)
k += 1
if k == 0:
lognamesrec.append('Not found')
else:
equipmenttype.append('Not found')
datatype.append('Not found')
runnumbers.append('Not found')
lognamesrec.append('Not found')
return structure, equipmenttype, datatype, runnumbers, lognamesrec
def dlistimestamp(self, f):
for frame in f.frames:
if str(frame.index_type).lower().find('tim') != -1:
check = re.search(
'^[0-9]{4}(-)[0-9]{2}(-)[0-9]{2}(T)[0-9]{2}(:)[0-9]{2}(:)[0-9]{2}(.)[0-9]{3}.[0-9]{2}(:)[0-9]{2}$',
frame.index_type)
check1 = re.search('^[0-9]{4}(-)[0-9]{2}(-)[0-9]{2}(T)[0-9]{2}(:)[0-9]{2}(:)[0-9]{2}(.)[0-9]{3}Z$',
frame.index_type)
if check is not None or check1 is not None:
result = 'Correct'
else:
result = 'Incorrect'
else:
result = 'No timestamp - Depth Index'
return result
def dlisWDtags(self, f):
description = ''
serviceCategory = ''
dataSource = ''
if len(f.find('description')) > 0:
description += 'Yes'
if len(f.find('serviceCategory')) > 0:
serviceCategory += 'Yes'
if len(f.find('dataSource')) > 0:
dataSource += 'Yes'
if description == '':
description = 'No data'
if serviceCategory == '':
serviceCategory = 'No data'
if dataSource == '':
dataSource = 'No data'
return description, serviceCategory, dataSource
    def errorLog(self, generalInfo, fileInfo, df, summary):
        now = datetime.now()
        dt_string = now.strftime("%d-%m-%Y %H-%M-%S")
        with open('errorlog/' + str(dt_string) + 'errorlog.txt', 'w') as f:
            f.write('Date: ' + str(date.today()))
            f.write('\n\n\nFile Information:' + '\n')
            f.write(tabulate(fileInfo, headers='keys', tablefmt='rst', showindex=False))
            f.write('\n\n\nGeneral Information:' + '\n')
            f.write(tabulate(generalInfo, headers='keys', tablefmt='rst', showindex=False))
            f.write('\n\n\nTimestamp and WD tag information check:' + '\n')
            f.write(tabulate(df, headers='keys', tablefmt='rst', showindex=False))
            f.write('\n\n\nMnemonic and Units Recognition:' + '\n')
            f.write(tabulate(summary, headers='keys', tablefmt='rst', showindex=False))
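
# errorLog usage sketch (the four arguments are pandas DataFrames built
# elsewhere in the tool): writes a timestamped report such as
# 'errorlog/05-03-2021 10-15-30errorlog.txt' containing one
# reStructuredText-style tabulate table per section.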
class XmlGeneration:
    """Functions to generate XML"""

    # pretty-printed XML
    def prettify(self, elem):
        rough_string = tostring(elem, encoding='utf-8', method='xml')
        reparsed = minidom.parseString(rough_string)
        return reparsed.toprettyxml(indent=" ")
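
    # prettify example (illustrative): minidom re-serializes the tree with
    # indentation and an XML declaration, e.g.
    #   XmlGeneration().prettify(Element('logs'))
    #   -> '<?xml version="1.0" ?>\n<logs/>\n'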
def lastoxml(self, lf, filename, uidWell, uidWellbore, BU, asset, purpose1, servicecompany, wellname1, idwi, runid,
servicetype,
datatype, uid,
creationDate,
wellbore_name,
direction,
datasource,
nullValue,
indexType,
startDateTimeIndex,
endDateTimeIndex,
indexCurve,
startIndex,
endIndex,
dataSize):
mandatory_time = ['name', 'indexType', 'minDateTimeIndex', 'maxDateTimeIndex', 'typeLogData', 'mnemonicList',
'unitList']
mandatory_depth = ['name', 'indexType', 'minIndex', 'maxIndex', 'typeLogData', 'mnemonicList', 'unitList']
name = filename
wellname = wellname1
wellbore = wellbore_name
SC = servicecompany
runNumber = runid
creationDate = creationDate
indexType = indexType
startDateTimeIndex = startDateTimeIndex
endDateTimeIndex = endDateTimeIndex
indexCurve = indexCurve
nullValue = nullValue
startIndex = startIndex
endIndex = endIndex
direction = direction
datasource = datasource
if dataSize < 10000:
maxDataNodes = dataSize
filesplit = False
else:
maxDataNodes = 10000
filesplit = True
comments = 'BU: ' + str(BU) + '\nAsset:' + str(asset)
servicecategory = str(idwi) + ',' + str(runid) + ',' + str(servicetype) + ',' + str(datatype)
description = str(purpose1)
mnemoniclist = []
units = []
for curve in lf.curves:
mnemoniclist.append(curve['mnemonic'])
if str(curve['unit']) != '':
units.append(curve['unit'])
else:
units.append('unitless')
unitstring = ','.join(units)
mnemonicstring = ','.join(mnemoniclist)
splitcount_bottom = 0
splitcount_top = maxDataNodes
file_counter = 1
while lf.data.shape[0] >= splitcount_top:
# print(lf.data.shape[0] >= splitcount_top)
top = Element('logs', xmlns="http://www.witsml.org/schemas/1series", version="1.4.1.1")
top_1 = SubElement(top, 'log', uidWell=uidWell, uidWellbore=uidWellbore, uid=uid)
top_1_1 = SubElement(top_1, 'nameWell')
top_1_1.text = str(wellname)
top_1_2 = SubElement(top_1, 'nameWellbore')
top_1_2.text = str(wellbore_name)
top_1_3 = SubElement(top_1, 'name')
top_1_3.text = str(name)
top_1_4 = SubElement(top_1, 'serviceCompany')
top_1_4.text = str(SC)
top_1_5 = SubElement(top_1, 'runNumber')
top_1_5.text = str(runNumber)
top_1_6 = SubElement(top_1, 'creationDate')
top_1_6.text = str(creationDate)
top_1_7 = SubElement(top_1, 'description')
top_1_7.text = str(description)
top_1_8 = SubElement(top_1, 'indexType')
top_1_8.text = str(indexType)
if indexType == 'date time':
top_1_9 = SubElement(top_1, 'startDateTimeIndex')
top_1_9.text = str(startDateTimeIndex)
top_1_10 = SubElement(top_1, 'endDateTimeIndex')
top_1_10.text = str(endDateTimeIndex)
else:
top_1_9a = SubElement(top_1, 'startIndex', uom='m')
top_1_9a.text = str(startIndex)
top_1_10a = SubElement(top_1, 'endIndex', uom='m')
top_1_10a.text = str(endIndex)
top_1_10b = SubElement(top_1, 'direction')
top_1_10b.text = str(direction)
top_1_11 = SubElement(top_1, 'indexCurve')
top_1_11.text = str(indexCurve)
top_1_12 = SubElement(top_1, 'nullValue')
top_1_12.text = str(nullValue)
j = 1
for curve in lf.curves:
top_2 = SubElement(top_1, 'logCurveInfo', uid=curve.mnemonic)
child1 = SubElement(top_2, 'mnemonic')
child1.text = str(curve.mnemonic)
child1a = SubElement(top_2, 'unit')
child1a.text = str(units[j - 1])
if indexType == 'date time':
child2 = SubElement(top_2, 'minDateTimeIndex')
child2.text = str(startDateTimeIndex)
child3 = SubElement(top_2, 'maxDateTimeIndex')
child3.text = str(endDateTimeIndex)
else:
child2 = SubElement(top_2, 'minIndex', uom='m')
child2.text = str(startIndex)
child3 = SubElement(top_2, 'maxIndex', uom='m')
child3.text = str(endIndex)
child4 = SubElement(top_2, 'curveDescription')
child4.text = str(re.sub(' +', ' ', curve.descr))
child4a = SubElement(top_2, 'dataSource')
child4a.text = str(datasource)
child5 = SubElement(top_2, 'typeLogData')
if curve['mnemonic'].lower().find('time') != -1:
child5.text = 'date time'
else:
child5.text = 'double'
j += 1
top_3 = SubElement(top_1, 'logData')
top_3_1 = SubElement(top_3, 'mnemonicList')
top_3_1.text = str(mnemonicstring)
top_3_2 = SubElement(top_3, 'unitList')
top_3_2.text = str(unitstring)
# for i in range(len(lf.data)):
for i in range(splitcount_bottom, splitcount_top):
top_3_3 = SubElement(top_3, 'data')
text = ','.join(str(v) for v in lf.data[i])
# if text.find('NaN') != -1:
# text = text.replace('NaN', '-0.0')
top_3_3.text = text
# print(i)
# print(text)
top_4 = SubElement(top_1, 'commonData')
top_4_1 = SubElement(top_4, 'dTimCreation')
date1 = str(datetime.today().strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3])
date1 += '+00:00'
top_4_1.text = date1
# serviceCategory should come before comments
top_4_3 = SubElement(top_4, 'serviceCategory')
top_4_3.text = servicecategory
top_4_2 = SubElement(top_4, 'comments')
top_4_2.text = comments
stringfile = self.prettify(top)
now = datetime.now()
dt_string = now.strftime("%d-%m-%Y %H-%M-%S")
if filesplit:
naming_string = filename + '_part_{}'.format(file_counter)
else:
naming_string = filename
desktop = os.path.expanduser("generatedXML/" + str(naming_string) + '.xml')
with open(desktop, "w") as f:
f.write(stringfile)
file_counter += 1
splitcount_bottom += maxDataNodes
data_dif = lf.data.shape[0] - splitcount_top
if data_dif < maxDataNodes and data_dif != 0:
splitcount_top = splitcount_top + data_dif
else:
splitcount_top += maxDataNodes
# print(lf.data.shape[0])
# print(splitcount_top)
# tree = ElementTree(top)
# tree.write(os.path.expanduser("~/Desktop/filename1.xml"))
missingData = []
lst = top.findall('log/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('commonData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logCurveInfo/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
missingMandatory = []
missingOptional = []
for each in missingData:
j = 0
if indexType == 'date time':
for each1 in mandatory_time:
if each == each1:
missingMandatory.append(each)
j += 1
else:
for each1 in mandatory_depth:
if each == each1:
missingMandatory.append(each)
j += 1
if j == 0:
missingOptional.append(each)
missingOptional1 = set(missingOptional)
missingMandatory1 = set(missingMandatory)
if len(missingMandatory1) != 0:
missingMandatoryString = ', '.join(missingMandatory1)
else:
missingMandatoryString = 'None'
if len(missingOptional1) != 0:
missingOptionalString = ', '.join(missingOptional1)
else:
missingOptionalString = 'None'
if len(mnemoniclist) == len(units):
missing3 = 'Yes'
else:
missing3 = 'None'
return stringfile, missingMandatoryString, missingOptionalString, missing3
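
# lastoxml output sketch (values illustrative): the method writes one file
# per 10 000 data nodes to generatedXML/<filename>[_part_N].xml, shaped like
#   <logs xmlns="http://www.witsml.org/schemas/1series" version="1.4.1.1">
#     <log uidWell="..." uidWellbore="..." uid="...">
#       <logCurveInfo uid="...">...</logCurveInfo>
#       <logData>
#         <mnemonicList>DEPT,ROP</mnemonicList>
#         ...
#       </logData>
#     </log>
#   </logs>
# and returns the pretty-printed string plus reports of missing
# mandatory/optional tags.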
# def lascheck(self, lf):
def csvtoxml(self, df, df2, x, c, filename, uidWell, uidWellbore, BU, asset, purpose1, servicecompany, wellname1,
idwi,
runid,
servicetype, datatype, uid,
creationDate,
wellbore_name,
direction,
datasource,
nullValue,
indexType,
startDateTimeIndex,
endDateTimeIndex,
indexCurve,
startIndex,
endIndex,
dataSize):
mandatory_time = ['name', 'indexType', 'minDateTimeIndex', 'maxDateTimeIndex', 'typeLogData', 'mnemonicList',
'unitList']
mandatory_depth = ['name', 'indexType', 'minIndex', 'maxIndex', 'typeLogData', 'mnemonicList', 'unitList']
name = filename
wellname = wellname1
wellbore = wellbore_name
SC = servicecompany
runNumber = runid
creationDate = creationDate
indexType = indexType
startDateTimeIndex = startDateTimeIndex
endDateTimeIndex = endDateTimeIndex
indexCurve = indexCurve
nullValue = nullValue
startIndex = startIndex
endIndex = endIndex
direction = direction
datasource = datasource
if dataSize < 10000:
maxDataNodes = dataSize
filesplit = False
else:
maxDataNodes = 10000
filesplit = True
comments = 'BU: ' + str(BU) + '\nAsset:' + str(asset)
servicecategory = str(idwi) + ',' + str(runid) + ',' + str(servicetype) + ',' + str(datatype)
description = str(purpose1)
mnemoniclist = []
units = []
for col in df2.columns:
mnemoniclist.append(col.split(",")[0])
units.append(col.split(",")[1].replace(" ", ""))
for unit in units:
if unit == ' ':
unit = 'unitless'
unitstring = ','.join(str(v) for v in units)
unitstring = unitstring.replace(" ", "")
mnemonicstring = ','.join(str(v) for v in mnemoniclist)
splitcount_bottom = c
splitcount_top = maxDataNodes
file_counter = 1
p = 0
while df2.shape[0] >= splitcount_top:
top = Element('logs', xmlns="http://www.witsml.org/schemas/1series", version="1.4.1.1")
top_1 = SubElement(top, 'log', uidWell=uidWell, uidWellbore=uidWellbore, uid=uid)
top_1_1 = SubElement(top_1, 'nameWell')
top_1_1.text = wellname
top_1_2 = SubElement(top_1, 'nameWellbore')
top_1_2.text = str(wellbore)
top_1_3 = SubElement(top_1, 'name')
top_1_3.text = name
top_1_4 = SubElement(top_1, 'serviceCompany')
top_1_4.text = SC
top_1_5 = SubElement(top_1, 'runNumber')
top_1_5.text = str(runNumber)
top_1_6 = SubElement(top_1, 'creationDate')
top_1_6.text = str(creationDate)
top_1_7 = SubElement(top_1, 'description')
top_1_7.text = description
top_1_8 = SubElement(top_1, 'indexType')
top_1_8.text = indexType
if indexType == 'date time':
top_1_9 = SubElement(top_1, 'startDateTimeIndex')
top_1_9.text = str(startDateTimeIndex)
top_1_10 = SubElement(top_1, 'endDateTimeIndex')
top_1_10.text = str(endDateTimeIndex)
else:
top_1_9a = SubElement(top_1, 'startIndex', uom='m')
top_1_9a.text = str(startIndex)
top_1_10a = SubElement(top_1, 'endIndex', uom='m')
top_1_10a.text = str(endIndex)
top_1_10b = SubElement(top_1, 'direction')
top_1_10b.text = str(direction)
top_1_11 = SubElement(top_1, 'indexCurve')
top_1_11.text = indexCurve
top_1_12 = SubElement(top_1, 'nullValue')
top_1_12.text = str(nullValue)
j = 1
for col in df2.columns:
top_2 = SubElement(top_1, 'logCurveInfo', uid=str(mnemoniclist[j - 1]))
child1 = SubElement(top_2, 'mnemonic')
child1.text = str(mnemoniclist[j - 1])
child1a = SubElement(top_2, 'unit')
child1a.text = str(units[j - 1])
if indexType == 'date time':
child2 = SubElement(top_2, 'minDateTimeIndex')
child2.text = str(startDateTimeIndex)
child3 = SubElement(top_2, 'maxDateTimeIndex')
child3.text = str(endDateTimeIndex)
else:
child2 = SubElement(top_2, 'minIndex', uom='m')
child2.text = str(startIndex)
child3 = SubElement(top_2, 'maxIndex', uom='m')
child3.text = str(endIndex)
child4 = SubElement(top_2, 'curveDescription')
child4.text = str(mnemoniclist[j - 1])
child4a = SubElement(top_2, 'dataSource')
child4a.text = str(datasource)
child5 = SubElement(top_2, 'typeLogData')
if col.lower().find('time') != -1:
child5.text = 'date time'
else:
child5.text = 'double'
j += 1
top_3 = SubElement(top_1, 'logData')
top_3_1 = SubElement(top_3, 'mnemonicList')
top_3_1.text = mnemonicstring
top_3_2 = SubElement(top_3, 'unitList')
top_3_2.text = unitstring
for i in range(splitcount_bottom, splitcount_top+c):
top_3_3 = SubElement(top_3, 'data')
# both branches of the old `if x` check built the same string, so the
# header-location flag does not change the serialization
top_3_3.text = ','.join(str(v) for v in df.iloc[c + p].to_list())
p += 1
# print(p)
# print('split')
top_4 = SubElement(top_1, 'commonData')
top_4_1 = SubElement(top_4, 'dTimCreation')
date1 = str(datetime.today().strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3])
date1 += '+00:00'
top_4_1.text = date1
top_4_3 = SubElement(top_4, 'serviceCategory')
top_4_3.text = servicecategory
top_4_2 = SubElement(top_4, 'comments')
top_4_2.text = comments
stringfile = self.prettify(top)
now = datetime.now()
dt_string = now.strftime("%d-%m-%Y %H-%M-%S")
if filesplit:
naming_string = filename + '_part_{}'.format(file_counter)
else:
naming_string = filename
desktop = os.path.expanduser("generatedXML/" + str(naming_string) + '.xml')
with open(desktop, "w") as f:
f.write(stringfile)
file_counter += 1
splitcount_bottom += maxDataNodes
data_dif = df2.shape[0] - splitcount_top
if data_dif < maxDataNodes and data_dif != 0:
splitcount_top = splitcount_top + data_dif
else:
splitcount_top += maxDataNodes
# tree = ElementTree(top)
# tree.write(os.path.expanduser("generatedXML/"+ str(date.today()) +'.xml'))
missingData = []
lst = top.findall('log/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('commonData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logCurveInfo/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
# missing = ', '.join(missingData)
missingMandatory = []
missingOptional = []
for each in missingData:
j = 0
if indexType == 'date time':
for each1 in mandatory_time:
if each == each1:
missingMandatory.append(each)
j += 1
else:
for each1 in mandatory_depth:
if each == each1:
missingMandatory.append(each)
j += 1
if j == 0:
missingOptional.append(each)
missingOptional1 = set(missingOptional)
missingMandatory1 = set(missingMandatory)
if len(missingMandatory1) != 0:
missingMandatoryString = ', '.join(missingMandatory1)
else:
missingMandatoryString = 'None'
if len(missingOptional1) != 0:
missingOptionalString = ', '.join(missingOptional1)
else:
missingOptionalString = 'None'
if len(mnemoniclist) == len(units):
missing3 = 'Yes'
else:
missing3 = 'None'
return stringfile, missingMandatoryString, missingOptionalString, missing3
def xmltoxml(self, data, uidWell, uidWellbore, BU, asset, purpose1, servicecompany, wellname1,
idwi,
runid,
servicetype, datatype, uid):
mandatory_time = ['name', 'indexType', 'minDateTimeIndex', 'maxDateTimeIndex', 'typeLogData', 'mnemonicList',
'unitList']
mandatory_depth = ['name', 'indexType', 'minIndex', 'maxIndex', 'typeLogData', 'mnemonicList', 'unitList']
Bs_data = BeautifulSoup(data, 'xml')
wellname = wellname1
description = str(purpose1)
comments = 'BU: ' + str(BU) + '\nAsset:' + str(asset)
servicecategory = str(idwi) + ',' + str(runid) + ',' + str(servicetype) + ',' + str(datatype)
top = Element('logs', xmlns="http://www.witsml.org/schemas/1series", version="1.4.1.1")
top_1 = SubElement(top, 'log', uidWell=uidWell, uidWellbore=uidWellbore, uid=uid)
top_1_1 = SubElement(top_1, 'nameWell')
line1 = Bs_data.find_all(top_1_1.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_1.text = ''.join(index)
else:
top_1_1.text = wellname
top_1_2 = SubElement(top_1, 'nameWellbore')
line1 = Bs_data.find_all(top_1_2.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_2.text = ''.join(index)
else:
top_1_2.text = ''
top_1_3 = SubElement(top_1, 'name')
line1 = Bs_data.find_all(top_1_3.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_3.text = ''.join(index)
else:
top_1_3.text = ''
top_1_4 = SubElement(top_1, 'serviceCompany')
line1 = Bs_data.find_all(top_1_4.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_4.text = ''.join(index)
else:
top_1_4.text = servicecompany
top_1_5 = SubElement(top_1, 'runNumber')
line1 = Bs_data.find_all(top_1_5.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_5.text = ''.join(index)
else:
top_1_5.text = ''
top_1_6 = SubElement(top_1, 'creationDate')
line1 = Bs_data.find_all(top_1_6.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_6.text = ''.join(index)
else:
top_1_6.text = ''
top_1_7 = SubElement(top_1, 'description')
line1 = Bs_data.find_all(top_1_7.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_7.text = ''.join(index)
else:
top_1_7.text = description
top_1_8 = SubElement(top_1, 'indexType')
line1 = Bs_data.find_all(top_1_8.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_8.text = ''.join(index)
else:
top_1_8.text = ''
if top_1_8.text == 'date time':
top_1_9 = SubElement(top_1, 'startDateTimeIndex')
line1 = Bs_data.find_all(top_1_9.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_9.text = ''.join(index)
else:
top_1_9.text = ''
top_1_10 = SubElement(top_1, 'endDateTimeIndex')
line1 = Bs_data.find_all(top_1_10.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_10.text = ''.join(index)
else:
top_1_10.text = ''
elif top_1_8.text == 'measured depth':
top_1_9a = SubElement(top_1, 'startIndex')
line1 = Bs_data.find_all(top_1_9a.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_9a.text = ''.join(index)
else:
top_1_9a.text = ''
top_1_10a = SubElement(top_1, 'endIndex')
line1 = Bs_data.find_all(top_1_10a.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_10a.text = ''.join(index)
else:
top_1_10a.text = ''
top_1_11 = SubElement(top_1, 'indexCurve')
line1 = Bs_data.find_all(top_1_11.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_11.text = ''.join(index)
else:
top_1_11.text = ''
top_1_12 = SubElement(top_1, 'nullValue')
line1 = Bs_data.find_all(top_1_12.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_1_12.text = ''.join(index)
else:
top_1_12.text = ''
line1 = Bs_data.find_all('logCurveInfo')
line2 = Bs_data.find_all('mnemonic')
index = []
if len(line2) > 0:
for each in line2:
index.append(each.get_text())
line3 = Bs_data.find_all('unit')
index1 = []
if len(line3) > 0:
for each in line3:
index1.append(each.get_text())
line4 = Bs_data.find_all('curveDescription')
index2 = []
if len(line4) > 0:
for each in line4:
index2.append(each.get_text())
line5 = Bs_data.find_all('dataSource')
index3 = []
if len(line5) > 0:
for each in line5:
index3.append(each.get_text())
line6 = Bs_data.find_all('typeLogData')
index4 = []
if len(line6) > 0:
for each in line6:
index4.append(each.get_text())
for i in range(len(line1)):
top_2 = SubElement(top_1, 'logCurveInfo', uid=str(i))
child1 = SubElement(top_2, 'mnemonic')
if len(line2) > i:
child1.text = index[i]
else:
child1.text = ''
child1a = SubElement(top_2, 'unit')
if len(line3) > i:
child1a.text = index1[i]
else:
child1a.text = ''
child4 = SubElement(top_2, 'curveDescription')
if len(line4) > i:
                child4.text = index2[i]
else:
child4.text = ''
child4a = SubElement(top_2, 'dataSource')
if len(line5) > i:
                child4a.text = index3[i]
else:
child4a.text = ''
child5 = SubElement(top_2, 'typeLogData')
if len(line6) > i:
                child5.text = index4[i]
else:
child5.text = ''
top_3 = SubElement(top_1, 'logData')
top_3_1 = SubElement(top_3, 'mnemonicList')
line1 = Bs_data.find_all(top_3_1.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_3_1.text = ''.join(index)
else:
top_3_1.text = ''
top_3_2 = SubElement(top_3, 'unitList')
line1 = Bs_data.find_all(top_3_2.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_3_2.text = ''.join(index)
else:
top_3_2.text = ''
line4 = Bs_data.find_all('data')
index = []
if len(line4) > 0:
for each in line4:
index.append(each.get_text())
for i in range(len(line4)):
top_3_3 = SubElement(top_3, 'data')
top_3_3.text = index[i]
top_4 = SubElement(top_1, 'commonData')
top_4_1 = SubElement(top_4, 'dTimCreation')
line1 = Bs_data.find_all(top_4_1.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_4_1.text = ''.join(index)
else:
top_4_1.text = str(datetime.today().strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3]) + '+00:00'
top_4_2 = SubElement(top_4, 'comments')
line1 = Bs_data.find_all(top_4_2.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_4_2.text = ''.join(index)
else:
top_4_2.text = comments
top_4_3 = SubElement(top_4, 'serviceCategory')
line1 = Bs_data.find_all(top_4_3.tag)
index = []
if len(line1) > 0:
for each in line1:
index.append(each.get_text())
top_4_3.text = ''.join(index)
else:
top_4_3.text = servicecategory
stringfile = self.prettify(top)
now = datetime.now()
dt_string = now.strftime("%d-%m-%Y %H-%M-%S")
desktop = os.path.expanduser("generatedXML/" + str(dt_string) + '.xml')
with open(desktop, "w") as f:
f.write(stringfile)
# tree = ElementTree(top)
# tree.write(os.path.expanduser("generatedXML/"+ str(date.today()) +'.xml'))
missingData = []
lst = top.findall('log/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('commonData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logCurveInfo/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
# missing = ', '.join(missingData)
missingMandatory = []
missingOptional = []
for each in missingData:
j = 0
if top_1_8.text == 'date time':
for each1 in mandatory_time:
if each == each1:
missingMandatory.append(each)
j += 1
else:
for each1 in mandatory_depth:
if each == each1:
missingMandatory.append(each)
j += 1
if j == 0:
missingOptional.append(each)
missingOptional1 = set(missingOptional)
missingMandatory1 = set(missingMandatory)
if len(missingMandatory1) != 0:
missingMandatoryString = ', '.join(missingMandatory1)
else:
missingMandatoryString = 'None'
if len(missingOptional1) != 0:
missingOptionalString = ', '.join(missingOptional1)
else:
missingOptionalString = 'None'
line1 = Bs_data.find_all('mnemonicList')
line2 = Bs_data.find_all('unitList')
if len(line1) == len(line2):
missing3 = 'Yes'
else:
missing3 = 'None'
return stringfile, missingMandatoryString, missingOptionalString, missing3
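    # NOTE: the repeated "SubElement -> find_all -> join text or fall back" pattern
    # above could be collapsed into a single helper; a minimal sketch (the name
    # fill_element is hypothetical, not part of the original code):
    # def fill_element(parent, tag, soup, default=''):
    #     node = SubElement(parent, tag)
    #     hits = soup.find_all(tag)
    #     node.text = ''.join(h.get_text() for h in hits) if hits else default
    #     return node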
def dlistoxml(self, frame1, filename, uidWell, uidWellbore, BU, asset, purpose1, servicecompany, wellname1, idwi,
runid,
servicetype, datatype, uid):
mandatory_time = ['name', 'indexType', 'minDateTimeIndex', 'maxDateTimeIndex', 'typeLogData', 'mnemonicList',
'unitList']
mandatory_depth = ['name', 'indexType', 'minIndex', 'maxIndex', 'typeLogData', 'mnemonicList', 'unitList']
wellname = wellname1
wellbore = ''
name = filename
SC = servicecompany
runNumber = ''
creationDate = ''
indexType = ''
startDateTimeIndex = ''
endDateTimeIndex = ''
startIndex = ''
endIndex = ''
indexCurve = ''
nullValue = ''
description = str(purpose1)
comments = 'BU: ' + str(BU) + '\nAsset:' + str(asset)
servicecategory = str(idwi) + ',' + str(runid) + ',' + str(servicetype) + ',' + str(datatype)
channels = str(frame1.channels)
channels = channels.replace('[', '')
channels = channels.replace(']', '')
channels = channels.replace('Channel(', '')
channels = channels.replace(')', '')
mnemonicstring = channels
mnemonicstring = mnemonicstring.replace(' ', '')
mnemonic = mnemonicstring.split(',')
units = []
for channel in frame1.channels:
units.append(channel.units)
        # blank units were meant to become 'unitless'; rebinding the loop variable
        # never updated the list, so rewrite the entries with a comprehension
        units = ['unitless' if unit == ' ' else unit for unit in units]
unitstring = ','.join(str(v) for v in units)
unitstring = unitstring.replace(" ", "")
curves = frame1.curves()
if str(frame1.index_type).lower().find(r'tim') != -1:
indexType = 'date time'
startDateTimeIndex = str(curves[mnemonic[0]][0])
endDateTimeIndex = str(curves[mnemonic[0]][len(curves) - 1])
elif str(frame1.index_type).lower().find(r'dept') != -1:
indexType = 'measured depth'
startIndex = str(curves[mnemonic[0]][0])
endIndex = str(curves[mnemonic[0]][len(curves) - 1])
indexCurve = str(mnemonic[0])
top = Element('logs', xmlns="http://www.witsml.org/schemas/1series", version="1.4.1.1")
top_1 = SubElement(top, 'log', uidWell=uidWell, uidWellbore=uidWellbore, uid=uid)
top_1_1 = SubElement(top_1, 'nameWell')
top_1_1.text = wellname
top_1_2 = SubElement(top_1, 'nameWellbore')
top_1_2.text = wellbore
top_1_3 = SubElement(top_1, 'name')
top_1_3.text = name
top_1_4 = SubElement(top_1, 'serviceCompany')
top_1_4.text = SC
top_1_5 = SubElement(top_1, 'runNumber')
top_1_5.text = runNumber
top_1_6 = SubElement(top_1, 'creationDate')
top_1_6.text = creationDate
top_1_7 = SubElement(top_1, 'description')
top_1_7.text = description
top_1_8 = SubElement(top_1, 'indexType')
top_1_8.text = indexType
if indexType == 'date time':
top_1_9 = SubElement(top_1, 'startDateTimeIndex')
top_1_9.text = str(startDateTimeIndex)
top_1_10 = SubElement(top_1, 'endDateTimeIndex')
top_1_10.text = str(endDateTimeIndex)
else:
top_1_9a = SubElement(top_1, 'startIndex')
top_1_9a.text = str(startIndex)
top_1_10a = SubElement(top_1, 'endIndex')
top_1_10a.text = str(endIndex)
top_1_11 = SubElement(top_1, 'indexCurve')
top_1_11.text = indexCurve
top_1_12 = SubElement(top_1, 'nullValue')
top_1_12.text = str(nullValue)
j = 1
for mnem in mnemonic:
top_2 = SubElement(top_1, 'logCurveInfo', uid=mnem)
child1 = SubElement(top_2, 'mnemonic')
child1.text = str(mnem)
child1a = SubElement(top_2, 'unit')
child1a.text = str(units[j - 1])
if indexType == 'date time':
child2 = SubElement(top_2, 'minDateTimeIndex')
child2.text = str(startDateTimeIndex)
child3 = SubElement(top_2, 'maxDateTimeIndex')
child3.text = str(endDateTimeIndex)
child4 = SubElement(top_2, 'curveDescription')
child4.text = ''
child4a = SubElement(top_2, 'dataSource')
child4a.text = ''
child5 = SubElement(top_2, 'typeLogData')
if str(mnem).lower().find('time') != -1:
child5.text = 'date time'
else:
child5.text = 'double'
j += 1
top_3 = SubElement(top_1, 'logData')
top_3_1 = SubElement(top_3, 'mnemonicList')
top_3_1.text = mnemonicstring
top_3_2 = SubElement(top_3, 'unitList')
top_3_2.text = unitstring
for curve in curves:
top_3_3 = SubElement(top_3, 'data')
x = ','.join(str(v) for v in curve)
x1 = x.find(',')
x2 = x[x1 + 1:]
top_3_3.text = x2
top_4 = SubElement(top_1, 'commonData')
top_4_1 = SubElement(top_4, 'dTimCreation')
date1 = str(datetime.today().strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3])
date1 += '+00:00'
top_4_1.text = date1
top_4_2 = SubElement(top_4, 'comments')
top_4_2.text = comments
top_4_3 = SubElement(top_4, 'serviceCategory')
top_4_3.text = servicecategory
stringfile = self.prettify(top)
# tree = ElementTree(top)
# tree.write(os.path.expanduser("~/Desktop/filename1.xml"))
now = datetime.now()
dt_string = now.strftime("%d-%m-%Y %H-%M-%S")
desktop = os.path.expanduser("generatedXML/" + str(dt_string) + '.xml')
with open(desktop, "w") as f:
f.write(stringfile)
missingData = []
lst = top.findall('log/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('commonData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logCurveInfo/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
lst = top.findall('logData/')
for item in lst:
if item.text == '':
missingData.append(item.tag)
# missing = ', '.join(missingData)
missingMandatory = []
missingOptional = []
for each in missingData:
j = 0
if indexType == 'date time':
for each1 in mandatory_time:
if each == each1:
missingMandatory.append(each)
j += 1
else:
for each1 in mandatory_depth:
if each == each1:
missingMandatory.append(each)
j += 1
if j == 0:
missingOptional.append(each)
missingOptional1 = set(missingOptional)
missingMandatory1 = set(missingMandatory)
if len(missingMandatory1) != 0:
missingMandatoryString = ', '.join(missingMandatory1)
else:
missingMandatoryString = 'None'
if len(missingOptional1) != 0:
missingOptionalString = ', '.join(missingOptional1)
else:
missingOptionalString = 'None'
if len(mnemonic) == len(units):
missing3 = 'Yes'
else:
missing3 = 'None'
return stringfile, missingMandatoryString, missingOptionalString, missing3
pass
class APISupplementary:
"""Functions to support application interface"""
def uploadedpage(self, index1, index2):
        # determine which HTML template to serve for visualization; depends on the index type [buttons 'Visualize vs. Depth', ...]
if index1 is not None and index2 is not None:
template = 'uploaded.html'
elif index1 is not None:
template = 'uploadedTIME.html'
elif index2 is not None:
template = 'uploadedDEPTH.html'
else:
template = 'uploaded_base.html'
return template
def uploadedpageXML(self, index1, index2):
        # determine which HTML template to serve for visualization; depends on the index type [buttons 'Visualize vs. Depth', ...]
if index1 is not None and index2 is not None:
template = 'uploaded1.html'
elif index1 is not None:
template = 'uploadedTIME1.html'
elif index2 is not None:
template = 'uploadedDEPTH1.html'
else:
template = 'uploaded1_base.html'
return template
pass
| 38.894479 | 121 | 0.46318 | 9,817 | 95,097 | 4.39014 | 0.057044 | 0.020511 | 0.022739 | 0.016706 | 0.804353 | 0.772983 | 0.740336 | 0.689498 | 0.670727 | 0.650958 | 0 | 0.043191 | 0.415197 | 95,097 | 2,444 | 122 | 38.910393 | 0.731771 | 0.018465 | 0 | 0.731378 | 0 | 0.00419 | 0.070851 | 0.009019 | 0.000466 | 0 | 0 | 0 | 0 | 1 | 0.020019 | false | 0.006052 | 0.006052 | 0 | 0.051676 | 0.002793 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
3fb6afc9058f00011feb10fc88f71d4daad1f88a | 232 | py | Python | gspread_pandas/exceptions.py | henu/gspread-pandas | fb2942d3ea468333bb63e1f4bb01832deef10532 | [
"BSD-3-Clause"
] | 303 | 2016-10-27T19:13:30.000Z | 2022-03-22T19:10:18.000Z | gspread_pandas/exceptions.py | henu/gspread-pandas | fb2942d3ea468333bb63e1f4bb01832deef10532 | [
"BSD-3-Clause"
] | 58 | 2016-10-18T18:01:28.000Z | 2022-03-20T21:02:51.000Z | gspread_pandas/exceptions.py | henu/gspread-pandas | fb2942d3ea468333bb63e1f4bb01832deef10532 | [
"BSD-3-Clause"
] | 45 | 2017-11-21T22:47:02.000Z | 2022-01-17T11:22:28.000Z | class GspreadPandasException(Exception):
pass
class ConfigException(GspreadPandasException):
pass
class NoWorksheetException(GspreadPandasException):
pass
class MissMatchException(GspreadPandasException):
pass
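# Usage sketch (not part of the original module): catching the shared base class
# handles every subclass, e.g.
#   try:
#       ...
#   except GspreadPandasException:
#       ...  # ConfigException, NoWorksheetException, MissMatchException all land here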
| 15.466667 | 51 | 0.801724 | 16 | 232 | 11.625 | 0.4375 | 0.145161 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146552 | 232 | 14 | 52 | 16.571429 | 0.939394 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
b76106a24afe84cc0dbe5560d905d647ee837864 | 259 | py | Python | chat/views.py | pcb00001/channels-chat-docker | 7245e1f46696c12249e4892c14151a2b4431fc69 | [
"MIT"
] | null | null | null | chat/views.py | pcb00001/channels-chat-docker | 7245e1f46696c12249e4892c14151a2b4431fc69 | [
"MIT"
] | null | null | null | chat/views.py | pcb00001/channels-chat-docker | 7245e1f46696c12249e4892c14151a2b4431fc69 | [
"MIT"
] | null | null | null | from django.shortcuts import render
import json
def index(request):
return render(request, 'chat/index.html', {})
def room(request, room_name: str):
return render(request, 'chat/room.html', {
'room_name_json': json.dumps(room_name)
})
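# room_name is JSON-encoded above so it can be embedded safely in the template's
# inline JavaScript. A matching URLconf entry might look like this (a sketch;
# the route name is assumed, not part of this file):
# from django.urls import path
# urlpatterns = [path('<str:room_name>/', room, name='room')]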
| 19.923077 | 49 | 0.683398 | 35 | 259 | 4.942857 | 0.457143 | 0.138728 | 0.219653 | 0.265896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181467 | 259 | 12 | 50 | 21.583333 | 0.816038 | 0 | 0 | 0 | 0 | 0 | 0.166023 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
b7872aee1ffa67c30c100af796535aea26e3930b | 124 | py | Python | pywi/benchmark/metrics/__init__.py | jeremiedecock/mrif | 094b0dd81ff2be0e24bf3871caab48da1b5d138b | [
"MIT"
] | 1 | 2021-07-06T06:02:45.000Z | 2021-07-06T06:02:45.000Z | pywi/benchmark/metrics/__init__.py | jeremiedecock/mrif | 094b0dd81ff2be0e24bf3871caab48da1b5d138b | [
"MIT"
] | null | null | null | pywi/benchmark/metrics/__init__.py | jeremiedecock/mrif | 094b0dd81ff2be0e24bf3871caab48da1b5d138b | [
"MIT"
] | 1 | 2019-01-07T10:50:38.000Z | 2019-01-07T10:50:38.000Z | """Benchmark modules
This package contains modules used to assess image processing algorithms.
"""
from . import refbased
| 17.714286 | 73 | 0.782258 | 15 | 124 | 6.466667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 124 | 6 | 74 | 20.666667 | 0.92381 | 0.741935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b7bfca6d0aefebcaed19e8a9d933e2640a2c1bf1 | 158 | py | Python | CodeWars/Python/6 kyu/Grill it!/main.py | opastushkov/codewars-solutions | 0132a24259a4e87f926048318332dcb4d94858ca | [
"MIT"
] | null | null | null | CodeWars/Python/6 kyu/Grill it!/main.py | opastushkov/codewars-solutions | 0132a24259a4e87f926048318332dcb4d94858ca | [
"MIT"
] | null | null | null | CodeWars/Python/6 kyu/Grill it!/main.py | opastushkov/codewars-solutions | 0132a24259a4e87f926048318332dcb4d94858ca | [
"MIT"
] | null | null | null | def grille(message, code):
return ''.join([message[x] for x in reversed(range(-1, -len(message) - 1, -1)) if bin(code)[2:].zfill(len(message))[x] == '1']) | 79 | 131 | 0.613924 | 27 | 158 | 3.592593 | 0.62963 | 0.164948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036232 | 0.126582 | 158 | 2 | 131 | 79 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.006289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
b7dfffd8b85eaa9cb37316bce85c2cadbfa8e2b5 | 211 | py | Python | dynamicserialize/dstypes/com/raytheon/uf/common/message/Body.py | srcarter3/python-awips | d981062662968cf3fb105e8e23d955950ae2497e | [
"BSD-3-Clause"
] | 33 | 2016-03-17T01:21:18.000Z | 2022-02-08T10:41:06.000Z | dynamicserialize/dstypes/com/raytheon/uf/common/message/Body.py | srcarter3/python-awips | d981062662968cf3fb105e8e23d955950ae2497e | [
"BSD-3-Clause"
] | 15 | 2016-04-19T16:34:08.000Z | 2020-09-09T19:57:54.000Z | dynamicserialize/dstypes/com/raytheon/uf/common/message/Body.py | Unidata/python-awips | 8459aa756816e5a45d2e5bea534d23d5b1dd1690 | [
"BSD-3-Clause"
] | 20 | 2016-03-12T01:46:58.000Z | 2022-02-08T06:53:22.000Z |
class Body(object):
def __init__(self):
self.responses = None
def getResponses(self):
return self.responses
def setResponses(self, responses):
self.responses = responses
| 16.230769 | 38 | 0.63981 | 22 | 211 | 5.954545 | 0.5 | 0.396947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.274882 | 211 | 12 | 39 | 17.583333 | 0.856209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
4d158df5c9a4634c20205d2b74b30d9064a4d55a | 21,351 | py | Python | drugsets.py | nybell/drugsea | 883ca381d84dab95ac8b2c8c37174301038dbc08 | [
"MIT"
] | null | null | null | drugsets.py | nybell/drugsea | 883ca381d84dab95ac8b2c8c37174301038dbc08 | [
"MIT"
] | null | null | null | drugsets.py | nybell/drugsea | 883ca381d84dab95ac8b2c8c37174301038dbc08 | [
"MIT"
] | null | null | null | # python script to run drug gene set analysis
# -------------------------------------------------------------------------- #
##### ----- PART 1: IMPORT PACKAGES, PARSE ARGUMENTS, CHECK INPUTS ----- #####
# -------------------------------------------------------------------------- #
# import packages
import os
import argparse
import subprocess
import numpy as np
import pandas as pd
import drugsets_func as df
# parse arguments
parser = argparse.ArgumentParser()
parser.add_argument('--geneassoc', '-g', default=None, type=str,
help='Filename of gene associations from MAGMA (.genes.raw).',
required=True)
parser.add_argument('--drugsets', '-d', default='solo', type=str, choices=['solo', 'atc', 'moa', 'ind', 'all'],
help='Type of drug gene set to use (individual, ATC code, mechanism of action, clinical indication).',
required=True)
parser.add_argument('--out', '-o', default=None, type=str,
help='Filename of output.',
required=True)
parser.add_argument('--conditional', '-c', default='yes', type=str, choices=['yes','no'],
help='"yes" will run competitive gene-set analysis in MAGMA while conditioning on a gene set of all druggable genes, "no" will run competitive gene-set analysis without any conditional analysis',
required=False)
parser.add_argument('--setsize', '-s', default=2, type=int,
help='Minimum drug gene set size. Minimum size is 2.',
required=False)
parser.add_argument('--id', '-i', default='entrez', type=str, choices=['entrez', 'ensembl', 'ensembl92'],
                    help='Indicate which gene naming convention is used for your genes.raw file. Options are "entrez", "ensembl" (v105), and "ensembl92" (v92). \
                        If you ran MAGMA using FUMA, use "ensembl92".',
required=False)
parser.add_argument('--enrich', '-e', default=None, type=str, choices=['atc', 'moa', 'ind', 'all'],
help='Test drug category for enrichment.',
required=False)
parser.add_argument('--nsize', '-n', default=5, type=float,
help = 'Set minimum sample size for drug categories being tested for enrichment.',
required=False)
# parse arguments
args = parser.parse_args()
# print welcome
print('\n| ----- Welcome to DRUGSETS v1.0 ----- |\n')
print('Reading input...\n')
# check input data
if not args.geneassoc.endswith('.genes.raw'):
    print('ERROR: Gene association file does not end in ".genes.raw". Please check the MAGMA gene association input file and try again.')
    quit()
# print input arguments
print('Input arguments used:\n')
for arg in vars(args):
print('\t', arg,'=', getattr(args, arg))
# ------------------------------------------------------------------------------------------------- #
##### ----- PART 2: DEFINE FILEPATHS AND GENESETS BASED ON ID, SIZE, AND CONDITION INPUTS ----- #####
# ------------------------------------------------------------------------------------------------- #
# set base directories
DIR = os.path.dirname(__file__)
DATADIR = os.path.normpath(os.path.join(DIR, 'DATA'))
OUTDIR = os.path.normpath(os.path.join(DIR, 'OUTPUT'))
GENESETDIR = os.path.normpath(os.path.join(DATADIR, 'GENESETS'))
ANNOTDIR = os.path.normpath(os.path.join(DATADIR, 'MAGMA_ANNOT'))
# set file paths and minimum gene set size if genes are named using ENTREZ
if args.id == 'entrez':
if args.conditional == 'no':
# set gene sets filepaths if setsize is default
if args.setsize == 2:
solo = os.path.normpath(os.path.join(GENESETDIR, 'entrez_genesets.txt'))
atc = os.path.normpath(os.path.join(GENESETDIR, 'atc_entrez_sets.txt'))
moa = os.path.normpath(os.path.join(GENESETDIR, 'moa_entrez_sets.txt'))
ind = os.path.normpath(os.path.join(GENESETDIR, 'ind_entrez_sets.txt'))
# set file paths for custom minimum gene set size
else:
# create new gene set file for individual drug gene sets
df.setsize(GENESETDIR,'/entrez_genesets.txt', args.setsize)
solo = os.path.normpath(os.path.join(GENESETDIR, 'tmp/entrez_genesets_min%d.txt' % args.setsize))
# create new gene set file for ATC III code gene sets
df.setsize(GENESETDIR,'/atc_entrez_sets.txt', args.setsize)
atc = os.path.normpath(os.path.join(GENESETDIR, 'tmp/atc_entrez_sets_min%d.txt' % args.setsize))
# create new gene set file for MOA gene sets
df.setsize(GENESETDIR,'/moa_entrez_sets.txt', args.setsize)
moa = os.path.normpath(os.path.join(GENESETDIR, 'tmp/moa_entrez_sets_min%d.txt' % args.setsize))
# create new gene set file for clinical indication gene sets
df.setsize(GENESETDIR,'/ind_entrez_sets.txt', args.setsize)
ind = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ind_entrez_sets_min%d.txt' % args.setsize))
elif args.conditional == 'yes':
# set gene sets filepaths if setsize is default
if args.setsize == 2:
solo = os.path.normpath(os.path.join(GENESETDIR, 'entrez_cond_sets.txt'))
atc = os.path.normpath(os.path.join(GENESETDIR, 'atc_cond_sets.txt'))
moa = os.path.normpath(os.path.join(GENESETDIR, 'moa_cond_sets.txt'))
ind = os.path.normpath(os.path.join(GENESETDIR, 'ind_cond_sets.txt'))
# set file paths for custom minimum gene set size
else:
# create new gene set file for individual drug gene sets
df.setsize(GENESETDIR,'/entrez_cond_sets.txt', args.setsize)
solo = os.path.normpath(os.path.join(GENESETDIR, 'tmp/entrez_cond_sets_min%d.txt' % args.setsize))
# create new gene set file for ATC III code gene sets
df.setsize(GENESETDIR,'/atc_cond_sets.txt', args.setsize)
atc = os.path.normpath(os.path.join(GENESETDIR, 'tmp/atc_cond_sets_min%d.txt' % args.setsize))
# create new gene set file for MOA gene sets
df.setsize(GENESETDIR,'/moa_cond_sets.txt', args.setsize)
moa = os.path.normpath(os.path.join(GENESETDIR, 'tmp/moa_cond_sets_min%d.txt' % args.setsize))
# create new gene set file for clinical indication gene sets
df.setsize(GENESETDIR,'/ind_cond_sets.txt', args.setsize)
ind = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ind_cond_sets_min%d.txt' % args.setsize))
# set file paths and minimum gene set size if genes are named using ENSEMBL
elif args.id == 'ensembl':
if args.conditional == 'no':
# set gene sets filepaths if setsize is default
if args.setsize == 2:
solo = os.path.normpath(os.path.join(GENESETDIR, 'ensembl_genesets.txt'))
atc = os.path.normpath(os.path.join(GENESETDIR, 'atc_ensembl_sets.txt'))
moa = os.path.normpath(os.path.join(GENESETDIR, 'moa_ensembl_sets.txt'))
ind = os.path.normpath(os.path.join(GENESETDIR, 'ind_ensembl_sets.txt'))
# set file paths for custom minimum gene set size
else:
df.setsize(GENESETDIR,'/ensembl_genesets.txt',args.setsize) # individual drug gene sets
            solo = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ensembl_genesets_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/atc_ensembl_sets.txt',args.setsize) # ATC code gene sets
atc = os.path.normpath(os.path.join(GENESETDIR, 'tmp/atc_ensembl_sets_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/moa_ensembl_sets.txt',args.setsize) # MOA gene sets
moa = os.path.normpath(os.path.join(GENESETDIR, 'tmp/moa_ensembl_sets_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/ind_ensembl_sets.txt',args.setsize) # clinical indication gene sets
ind = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ind_ensembl_sets_min%d.txt' % args.setsize))
elif args.conditional =='yes':
# set gene sets filepaths if setsize is default
if args.setsize == 2:
solo = os.path.normpath(os.path.join(GENESETDIR, 'ensembl_cond_sets.txt'))
atc = os.path.normpath(os.path.join(GENESETDIR, 'atc_ensembl_cond_sets.txt'))
moa = os.path.normpath(os.path.join(GENESETDIR, 'moa_ensembl_cond_sets.txt'))
ind = os.path.normpath(os.path.join(GENESETDIR, 'ind_ensembl_cond_sets.txt'))
# set file paths for custom minimum gene set size
else:
df.setsize(GENESETDIR,'/ensembl_cond_sets.txt',args.setsize) # individual drug gene sets
            solo = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ensembl_cond_sets_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/atc_ensembl_cond_sets.txt',args.setsize) # ATC code gene sets
atc = os.path.normpath(os.path.join(GENESETDIR, 'tmp/atc_ensembl_cond_sets_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/moa_ensembl_cond_sets.txt',args.setsize) # MOA gene sets
moa = os.path.normpath(os.path.join(GENESETDIR, 'tmp/moa_ensembl_cond_sets_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/ind_ensembl_cond_sets.txt',args.setsize) # clinical indication gene sets
ind = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ind_ensembl_cond_sets_min%d.txt' % args.setsize))
# set file paths and minimum gene set size if genes are named using ENSEMBL v92
elif args.id == 'ensembl92':
if args.conditional == 'no':
# set gene sets filepaths if setsize is default
if args.setsize == 2:
solo = os.path.normpath(os.path.join(GENESETDIR, 'ensembl_genesets92.txt'))
atc = os.path.normpath(os.path.join(GENESETDIR, 'atc_ensembl_sets92.txt'))
moa = os.path.normpath(os.path.join(GENESETDIR, 'moa_ensembl_sets92.txt'))
ind = os.path.normpath(os.path.join(GENESETDIR, 'ind_ensembl_sets92.txt'))
# set file paths for custom minimum gene set size
else:
df.setsize(GENESETDIR,'/ensembl_genesets92.txt',args.setsize)
solo = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ensembl_genesets92_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/atc_ensembl_sets92.txt',args.setsize)
            atc = os.path.normpath(os.path.join(GENESETDIR, 'tmp/atc_ensembl_sets92_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/moa_ensembl_sets92.txt',args.setsize)
moa = os.path.normpath(os.path.join(GENESETDIR, 'tmp/moa_ensembl_sets92_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/ind_ensembl_sets92.txt',args.setsize)
ind = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ind_ensembl_sets92_min%d.txt' % args.setsize))
elif args.conditional == 'yes':
# set gene sets filepaths if setsize is default
if args.setsize == 2:
solo = os.path.normpath(os.path.join(GENESETDIR, 'ensembl_cond_sets92.txt'))
atc = os.path.normpath(os.path.join(GENESETDIR, 'atc_ensembl_cond_sets92.txt'))
moa = os.path.normpath(os.path.join(GENESETDIR, 'moa_ensembl_cond_sets92.txt'))
ind = os.path.normpath(os.path.join(GENESETDIR, 'ind_ensembl_cond_sets92.txt'))
# set file paths for custom minimum gene set size
else:
df.setsize(GENESETDIR,'/ensembl_cond_sets92.txt',args.setsize)
solo = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ensembl_cond_sets92_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/atc_ensembl_cond_sets92.txt',args.setsize)
atc = os.path.normpath(os.path.join(GENESETDIR, 'tmp/atc_ensembl_cond_sets92_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/moa_ensembl_cond_sets92.txt',args.setsize)
moa = os.path.normpath(os.path.join(GENESETDIR, 'tmp/moa_ensembl_cond_sets92_min%d.txt' % args.setsize))
df.setsize(GENESETDIR,'/ind_ensembl_cond_sets92.txt',args.setsize)
ind = os.path.normpath(os.path.join(GENESETDIR, 'tmp/ind_ensembl_cond_sets92_min%d.txt' % args.setsize))
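# NOTE: the branching above only varies the gene-set file name per combination of
# args.id, args.conditional, and args.setsize; a table-driven sketch (layout is
# hypothetical, not part of this script):
# stem = {'entrez': 'entrez', 'ensembl': 'ensembl', 'ensembl92': 'ensembl92'}[args.id]
# the solo/atc/moa/ind paths could then be assembled from stem, the 'cond'/'genesets'
# variant, and the optional '_min%d' suffix instead of being spelled out per branch.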
# set MAGMA annotation filepath
annot = os.path.normpath(os.path.join(ANNOTDIR, args.geneassoc))
# set OUTPUT filepath
output = os.path.normpath(os.path.join(OUTDIR, args.out))
# ------------------------------------------------------ #
##### ----- PART 3: RUN DRUG GENE SET ANALYSIS ----- #####
# ------------------------------------------------------ #
# individual drug gene set analysis
if (args.drugsets == 'solo' or args.drugsets == 'all'):
if args.conditional == 'no':
print('\nRunning SOLO drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --out %s' % (annot, solo, (output+'_SOLO')))
# print log
warnings = open(f'{output+"_SOLO"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_SOLO')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_SOLO')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_SOLO')))
elif args.conditional == 'yes':
print('\nRunning conditional SOLO drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --model condition=druggable --out %s' % (annot, solo, (output+'_SOLO')))
# print log
warnings = open(f'{output+"_SOLO"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_SOLO')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_SOLO')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_SOLO')))
# ATC code drug gene set analysis
if (args.drugsets == 'atc' or args.drugsets == 'all'):
if args.conditional == 'no':
print('\nRunning ATC drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --out %s' % (annot, atc, (output+'_ATC')))
# print log
warnings = open(f'{output+"_ATC"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_ATC')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_ATC')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_ATC')))
elif args.conditional == 'yes':
print('\nRunning conditional ATC drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --model condition=druggable --out %s' % (annot, atc, (output+'_ATC')))
# print log
warnings = open(f'{output+"_ATC"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_ATC')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_ATC')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_ATC')))
# MOA drug gene set analysis
if (args.drugsets == 'moa' or args.drugsets == 'all'):
if args.conditional =='no':
print('\nRunning MOA drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --out %s' % (annot, moa, (output+'_MOA')))
# print log
warnings = open(f'{output+"_MOA"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_MOA')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_MOA')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_MOA')))
elif args.conditional == 'yes':
print('\nRunning conditional MOA drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --model condition=druggable --out %s' % (annot, moa, (output+'_MOA')))
# print log
warnings = open(f'{output+"_MOA"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_MOA')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_MOA')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_MOA')))
# Clinical indication drug gene set analysis
if (args.drugsets == 'ind' or args.drugsets == 'all'):
if args.conditional == 'no':
print('\nRunning CLINICAL INDICATION drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --out %s' % (annot, ind, (output+'_IND')))
# print log
warnings = open(f'{output+"_IND"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_IND')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_IND')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_IND')))
elif args.conditional == 'yes':
print('\nRunning conditional CLINICAL INDICATION drug gene set analysis in MAGMA...\n')
df.run_task('magma --gene-results %s --set-annot %s --settings gene-info --model condition=druggable --out %s' % (annot, ind, (output+'_IND')))
# print log
warnings = open(f'{output+"_IND"}.log').read().count('WARNING:')
print('\n\t%s warnings found (see %s.log for details)' % (int(warnings), (output+'_IND')))
# print result locations
print('\tResults for all drug gene sets saving to %s' % (OUTDIR+'/%s.gsa.out' % (args.out+'_IND')))
print('\tResults for significant drug gene sets saving to %s\n' % (OUTDIR+'/%s.gsa.set.genes.out' % (args.out+'_IND')))
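# NOTE: the four analysis blocks above are near-identical except for the gene-set
# file and the output suffix; a loop-driven sketch (names are hypothetical, not
# part of this script; the per-block log/result prints are omitted):
# for key, genesets, suffix in [('solo', solo, '_SOLO'), ('atc', atc, '_ATC'),
#                               ('moa', moa, '_MOA'), ('ind', ind, '_IND')]:
#     if args.drugsets in (key, 'all'):
#         cond = ' --model condition=druggable' if args.conditional == 'yes' else ''
#         df.run_task('magma --gene-results %s --set-annot %s --settings gene-info%s --out %s'
#                     % (annot, genesets, cond, output + suffix))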
# print done
print('Drug gene set analysis finished.\n')
# ----------------------------------------------- #
##### ----- PART 4: DRUG GROUP ANALYSIS ----- #####
# ----------------------------------------------- #
# enrichment analysis
if args.enrich is not None:
if (args.drugsets == 'solo' or args.drugsets == 'all'):
# set full file paths for .raw file, gene set file
full = os.path.dirname(os.path.abspath(__file__)) + '/'
#print
print('Running %s enrichment analysis...\n\n' % (args.enrich.upper()))
# set file path for .gsa.out results file
gsa = (output+'_SOLO'+'.gsa.out')
# load gsa results
gsa_results = pd.read_csv(gsa, delimiter= "\s+", comment='#')
# set file path for gsa results
gsa_path = 'OUTPUT/%s.gsa.out' % (args.out+'_SOLO')
# compute covariance
print('\tComputing correlation matrix...')
df.run_task_silent('Rscript --vanilla %s %s %s %s %s %s' % (full+'compute_corrs.R', annot, solo, (full+gsa_path), ('/'+args.out), full))
# define filepath to set.corrs.rdata and to metadata.rdata file
corrdata = full+'OUTPUT/%s_setcorrs.rdata' % args.out
metaRdata = full+'DATA/metadata.rdata'
# run either one group or all
if args.enrich != 'all':
# compute dependent linear regression
print('\tRunning dependent linear regression model...')
df.run_task_silent('Rscript --vanilla %s %s %s %s %s %s %s' % (full+'compute_lnreg.R', corrdata, metaRdata, args.enrich.lower(), args.nsize, args.out, OUTDIR))
elif args.enrich == 'all':
# define groups to loop through
groups = ['atc','moa','ind']
# loop though types of drug groups and run dependent linear regression model for each type
for g in groups:
# compute dependent linear regression
print('\tRunning dependent linear regression model for %s groups...' % g.upper())
df.run_task_silent('Rscript --vanilla %s %s %s %s %s %s %s' % (full+'compute_lnreg.R', corrdata, metaRdata, g, args.nsize, (args.out+'_'+g.upper()), OUTDIR))
# remove correlation matrix file
df.run_task_silent('rm %s' % (OUTDIR+'/'+args.out+'_setcorrs.rdata'))
# remove new gene set files if created new
# if args.setsize == 2:
# next
# else:
# subprocess.run('rm %s/*min%d.txt' % (GENESETDIR, args.setsize), shell=True)
# print finished
print('\nEnrichment analysis finished.\n')
else:
        print('To test for enrichment, "--drugsets" must be set to "solo" or "all".')
| 49.195853 | 199 | 0.618425 | 2,870 | 21,351 | 4.508014 | 0.088502 | 0.051476 | 0.058433 | 0.06678 | 0.795022 | 0.762405 | 0.747952 | 0.70977 | 0.702968 | 0.702968 | 0 | 0.004562 | 0.209405 | 21,351 | 434 | 200 | 49.195853 | 0.761908 | 0.165707 | 0 | 0.30837 | 0 | 0.048458 | 0.358493 | 0.092961 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.026432 | 0 | 0.026432 | 0.193833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4d31ce08570b4d973219c5f07a3d7e11501ab0b6 | 213 | py | Python | backend/api/room/selectors.py | jeraldlyh/HoloRPG | e835eb1f7a6b18c87007ecf8168d959b4e176a23 | [
"MIT"
] | null | null | null | backend/api/room/selectors.py | jeraldlyh/HoloRPG | e835eb1f7a6b18c87007ecf8168d959b4e176a23 | [
"MIT"
] | null | null | null | backend/api/room/selectors.py | jeraldlyh/HoloRPG | e835eb1f7a6b18c87007ecf8168d959b4e176a23 | [
"MIT"
] | null | null | null | from django.db.models.query import QuerySet
from .models import Room, Dungeon
def get_all_rooms() -> QuerySet:
return Room.objects.all()
def get_all_dungeons() -> QuerySet:
return Dungeon.objects.all() | 21.3 | 43 | 0.741784 | 30 | 213 | 5.133333 | 0.533333 | 0.077922 | 0.116883 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150235 | 213 | 10 | 44 | 21.3 | 0.850829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
4d32ceebd2d87581ce41ecc8fe8c906f3bde1523 | 3,858 | py | Python | insert_tables.py | allenwangs/data-engineer-capstone | 9495bd70247f4e9e7ce4e721b723e8ea2b923766 | [
"MIT"
] | null | null | null | insert_tables.py | allenwangs/data-engineer-capstone | 9495bd70247f4e9e7ce4e721b723e8ea2b923766 | [
"MIT"
] | null | null | null | insert_tables.py | allenwangs/data-engineer-capstone | 9495bd70247f4e9e7ce4e721b723e8ea2b923766 | [
"MIT"
] | null | null | null | import psycopg2
import pandas as pd
import datetime
from sql_queries import weather_hourly_insert, metro_bound_insert, metro_traffic_insert, time_insert
def connect_database():
"""Connect to database
Returns:
cur: cursor of the database
conn: connection of the database
"""
# connect to metro database
print("connect to metro database")
conn = psycopg2.connect("host=127.0.0.1 dbname=metro user=allen.wang password=")
cur = conn.cursor()
return cur, conn
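# Typical call pattern (a sketch; the dates are placeholders, not from this module):
# cur, conn = connect_database()
# stage_weather_hourly_insert('2018-01-01', '2018-01-02', conn)
# conn.close()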
def stage_weather_hourly_insert(start_date, end_date, conn):
"""insert hourly weather data in stage_weather_hourly
Args:
start_date(str): stage_weather query start_date in YYYY-MM-DD
end_date(str): stage_weather query end_date in YYYY-MM-DD
conn: connection of the database
"""
print(f"Start INSERT stage_weather_hourly data from {start_date} to {end_date}")
query_start_time = datetime.datetime.strptime(start_date, '%Y-%m-%d')
query_end_time = query_start_time + datetime.timedelta(hours=1)
while query_end_time <= datetime.datetime.strptime(end_date, '%Y-%m-%d'):
sql = weather_hourly_insert.format(query_start_time=query_start_time, query_end_time=query_end_time)
#print(sql)
cur = conn.cursor()
cur.execute(sql)
conn.commit()
query_start_time = query_end_time
query_end_time = query_start_time + datetime.timedelta(hours=1)
print("Finish INSERT stage_weather_hourly")
def stage_metro_bound_insert(start_date, end_date, conn):
"""insert hourly weather data in stage_weather_hourly
Args:
start_date(str): stage_metro query start_date in YYYY-MM-DD
end_date(str): stage_metro query end_date in YYYY-MM-DD
conn: connection of the database
"""
print(f"Start INSERT stage_metro_bound data from {start_date} to {end_date}")
query_start_time = datetime.datetime.strptime(start_date, '%Y-%m-%d')
query_end_time = query_start_time + datetime.timedelta(days=1)
while query_end_time <= datetime.datetime.strptime(end_date, '%Y-%m-%d'):
sql = metro_bound_insert.format(query_start_date=query_start_time, query_end_date=query_end_time)
#print(sql)
cur = conn.cursor()
cur.execute(sql)
conn.commit()
query_start_time = query_end_time
query_end_time = query_start_time + datetime.timedelta(days=1)
print("Finish INSERT stage_metro_bound")
def fact_metro_traffic_insert(start_date, end_date, conn):
"""insert hourly weather data in stage_weather_hourly
Args:
start_date(str): stage_metro query start_date in YYYY-MM-DD
end_date(str): stage_metro query end_date in YYYY-MM-DD
conn: connection of the database
"""
print(f"Start INSERT fact_metro_traffic data from {start_date} to {end_date}")
query_start_time = datetime.datetime.strptime(start_date, '%Y-%m-%d')
query_end_time = query_start_time + datetime.timedelta(hours=1)
while query_end_time <= datetime.datetime.strptime(end_date, '%Y-%m-%d'):
sql = metro_traffic_insert.format(query_start_time=query_start_time, query_end_time=query_end_time)
#print(sql)
cur = conn.cursor()
cur.execute(sql)
conn.commit()
query_start_time = query_end_time
query_end_time = query_start_time + datetime.timedelta(hours=1)
print("Finish INSERT fact_metro_traffic")
def dim_time_insert(conn):
"""insert hourly weather data in stage_weather_hourly
Args:
conn: connection of the database
"""
print(f"Start INSERT dim_time data")
sql = time_insert
#print(sql)
cur = conn.cursor()
cur.execute(sql)
conn.commit()
print("Finish INSERT dim_time") | 37.096154 | 108 | 0.689217 | 545 | 3,858 | 4.590826 | 0.119266 | 0.083933 | 0.095124 | 0.07474 | 0.767386 | 0.731415 | 0.731415 | 0.731415 | 0.731415 | 0.713829 | 0 | 0.004637 | 0.21747 | 3,858 | 104 | 109 | 37.096154 | 0.824114 | 0.237429 | 0 | 0.538462 | 0 | 0 | 0.168794 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096154 | false | 0.019231 | 0.076923 | 0 | 0.192308 | 0.173077 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4d520ccd345156f08623bfe5a25a9ed7c94ab3f0 | 46,637 | py | Python | rigid_bodys_gen/rigidBodyGen.py | nepia11/rigid_bodys_gen | e9198a252974a0a3c6dcdcccc9bc7cd0254ee0fc | [
"MIT"
] | 90 | 2016-05-08T00:47:55.000Z | 2022-02-24T00:28:58.000Z | rigid_bodys_gen/rigidBodyGen.py | nepia11/rigid_bodys_gen | e9198a252974a0a3c6dcdcccc9bc7cd0254ee0fc | [
"MIT"
] | 4 | 2016-11-27T16:25:52.000Z | 2022-03-14T07:48:20.000Z | rigid_bodys_gen/rigidBodyGen.py | nepia11/rigid_bodys_gen | e9198a252974a0a3c6dcdcccc9bc7cd0254ee0fc | [
"MIT"
] | 20 | 2017-04-13T13:46:37.000Z | 2022-03-16T05:48:33.000Z | import bpy
import time, sys
from bpy.props import *
bl_info = {
"name": "rigid bodys gen",
"author": "12funkeys",
"version": (2, 0, 0),
"blender": (2, 82, 0),
"location": "pose > selected bones",
"description": "Set rigid body and constraint easily",
"warning": "",
"support": "COMMUNITY",
"wiki_url": "",
"tracker_url": "",
"category": "Rigging"
}
translation_dict = {
"en_US" : {
("*", "Make Rigid Body Tools") : "Make Rigid Body Tools",
#("*", "Rigid Body Gen") : "Rigid Body Gen",
("*", "Make Rigid Bodys") : "Make Rigid Bodys",
("*", "Add Passive(on bones)") : "Add Passive(on bones)",
("*", "make rigibodys move on bones") : "make rigibodys move on bones",
("*", "Add Active") : "Add Active",
("*", "Add Joints") : "Add Joints",
("*", "Add Active & Joints") : "Add Active & Joints"
},
"ja_JP" : {
("*", "Make Rigid Body Tools") : "選択ボーン",
#("*", "Rigid Body Gen") : "剛体ツール",
("*", "Make Rigid Bodys") : "選択ボーン",
("*", "Add Passive(on bones)") : "基礎剛体の作成‐ボーン追従",
("*", "make rigibodys move on bones") : "ボーンに追従する剛体を作成します",
("*", "Add Active") : "基礎剛体の作成‐物理演算",
("*", "Add Joints") : "基礎Jointの作成",
("*", "Add Active & Joints") : "基礎剛体/連結Jointの作成"
}
}
shapes = [
('MESH', 'Mesh', 'Mesh'),
('CONVEX_HULL', 'Convex Hull', 'Convex Hull'),
('CONE', 'Cone', 'Cone'),
('CYLINDER', 'Cylinder', 'Cylinder'),
('CAPSULE', 'Capsule', 'Capsule'),
('SPHERE', 'Sphere', 'Sphere'),
('BOX', 'Box', 'Box')]
types = [('MOTOR', 'Motor', 'Motor'),
('GENERIC_SPRING', 'Generic Spring', 'Generic Spring'),
('GENERIC', 'Generic', 'Generic')]
### add Tool Panel
class RBG_PT_MenuRigidBodyTools(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_category = "Rigid Body Gen"
bl_context = ".posemode"
bl_label = "Make Rigid Body Tools"
@classmethod
def poll(cls, context):
return (context.object is not None)
def draw(self, context):
pass
class RBG_PT_Add_Passive(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_category = "Rigid Body Gen"
bl_context = ".posemode"
bl_label = "Add Passive(on bones)"
bl_parent_id = "RBG_PT_MenuRigidBodyTools"
bl_options = {'DEFAULT_CLOSED'}
def draw(self, context):
layout = self.layout
col = layout.column(align=True)
col.operator(RBG_OT_CreateRigidBodysOnBones.bl_idname, text=bpy.app.translations.pgettext("Add Passive(on bones)"), icon='BONE_DATA')
scene = context.scene
layout = self.layout
box = layout.box()
box.label(text="Options:")
box.prop(scene, 'rbg_rb_shape')
box.prop(scene, 'rbg_rc_dim')
box.prop(scene, 'rbg_rc_mass')
box.prop(scene, 'rbg_rc_friction')
box.prop(scene, 'rbg_rc_bounciness')
box.label(text="Damping:")
box.prop(scene, 'rbg_rc_translation')
box.prop(scene, 'rbg_rc_rotation')
box.prop(scene, 'rbg_rc_rootbody_passive')
box.prop(scene, 'rbg_rc_rootbody_animated')
box.prop(scene, 'rbg_rc_parent_armature')
class RBG_PT_Add_Active(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_category = "Rigid Body Gen"
bl_context = ".posemode"
bl_label = "Add Active"
bl_parent_id = "RBG_PT_MenuRigidBodyTools"
bl_options = {'DEFAULT_CLOSED'}
def draw(self, context):
layout = self.layout
col = layout.column(align=True)
col.operator(RBG_OT_CreateRigidBodysPhysics.bl_idname, text=bpy.app.translations.pgettext("Add Active"), icon='PHYSICS')
scene = context.scene
layout = self.layout
box = layout.box()
box.label(text="Options:")
box.prop(scene, 'rbg_rb_shape')
box.prop(scene, 'rbg_rc_dim')
box.prop(scene, 'rbg_rc_mass')
box.prop(scene, 'rbg_rc_friction')
box.prop(scene, 'rbg_rc_bounciness')
box.prop(scene, 'rbg_rc_translation')
box.prop(scene, 'rbg_rc_rotation')
box.prop(scene, 'rbg_rc_rootbody_animated')
box.prop(scene, 'rbg_rc_parent_armature')
class RBG_PT_Add_Joints(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_category = "Rigid Body Gen"
bl_context = ".posemode"
bl_label = "Add Joints"
bl_parent_id = "RBG_PT_MenuRigidBodyTools"
bl_options = {'DEFAULT_CLOSED'}
def draw(self, context):
layout = self.layout
col = layout.column(align=True)
col.operator(RBG_OT_CreateRigidBodysJoints.bl_idname, text=bpy.app.translations.pgettext("Add Joints"), icon='RIGID_BODY_CONSTRAINT')
scene = context.scene
layout = self.layout
box = layout.box()
box.label(text="Options:")
box.prop(scene, 'rbg_jo_type')
box.prop(scene, 'rbg_jo_dim')
col = box.column(align=True)
col.label(text="Limits:")
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_lin_x', toggle=True)
sub.prop(scene, 'rbg_jo_limit_lin_x_lower')
sub.prop(scene, 'rbg_jo_limit_lin_x_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_lin_y', toggle=True)
sub.prop(scene, 'rbg_jo_limit_lin_y_lower')
sub.prop(scene, 'rbg_jo_limit_lin_y_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_lin_z', toggle=True)
sub.prop(scene, 'rbg_jo_limit_lin_z_lower')
sub.prop(scene, 'rbg_jo_limit_lin_z_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_ang_x', toggle=True)
sub.prop(scene, 'rbg_jo_limit_ang_x_lower')
sub.prop(scene, 'rbg_jo_limit_ang_x_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_ang_y', toggle=True)
sub.prop(scene, 'rbg_jo_limit_ang_y_lower')
sub.prop(scene, 'rbg_jo_limit_ang_y_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_ang_z', toggle=True)
sub.prop(scene, 'rbg_jo_limit_ang_z_lower')
sub.prop(scene, 'rbg_jo_limit_ang_z_upper')
col.label(text="Springs:")
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_use_spring_x', toggle=True)
sub.prop(scene, 'rbg_jo_spring_stiffness_x')
sub.prop(scene, 'rbg_jo_spring_damping_x')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_use_spring_y', toggle=True)
sub.prop(scene, 'rbg_jo_spring_stiffness_y')
sub.prop(scene, 'rbg_jo_spring_damping_y')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_use_spring_z', toggle=True)
sub.prop(scene, 'rbg_jo_spring_stiffness_z')
sub.prop(scene, 'rbg_jo_spring_damping_z')
col.prop(scene, 'rbg_rc_parent_armature')
class RBG_PT_Add_Active_Joints(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_category = "Rigid Body Gen"
bl_context = ".posemode"
bl_label = "Add Active & Joints"
bl_parent_id = "RBG_PT_MenuRigidBodyTools"
bl_options = {'DEFAULT_CLOSED'}
def draw(self, context):
layout = self.layout
col = layout.column(align=True)
col.operator(RBG_OT_CreateRigidBodysPhysicsJoints.bl_idname, text=bpy.app.translations.pgettext("Add Active & Joints"), icon='RIGID_BODY')
scene = context.scene
###Rigid Body Object
layout = self.layout
box = layout.box()
box.label(text="Options:")
box.prop(scene, 'rbg_rb_shape')
box.prop(scene, 'rbg_rc_dim')
box.prop(scene, 'rbg_rc_mass')
box.prop(scene, 'rbg_rc_friction')
box.prop(scene, 'rbg_rc_bounciness')
box.prop(scene, 'rbg_rc_translation')
box.prop(scene, 'rbg_rc_rotation')
#Joint Object
layout = self.layout
box = layout.box()
box.prop(scene, 'rbg_jo_type')
box.prop(scene, 'rbg_jo_constraint_object')
box.prop(scene, 'rbg_rc_add_pole_rootbody')
box.prop(scene, 'rbg_rc_parent_armature')
box.prop(scene, 'rbg_jo_dim')
col = box.column(align=True)
col.label(text="Limits:")
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_lin_x', toggle=True)
sub.prop(scene, 'rbg_jo_limit_lin_x_lower')
sub.prop(scene, 'rbg_jo_limit_lin_x_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_lin_y', toggle=True)
sub.prop(scene, 'rbg_jo_limit_lin_y_lower')
sub.prop(scene, 'rbg_jo_limit_lin_y_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_lin_z', toggle=True)
sub.prop(scene, 'rbg_jo_limit_lin_z_lower')
sub.prop(scene, 'rbg_jo_limit_lin_z_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_ang_x', toggle=True)
sub.prop(scene, 'rbg_jo_limit_ang_x_lower')
sub.prop(scene, 'rbg_jo_limit_ang_x_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_ang_y', toggle=True)
sub.prop(scene, 'rbg_jo_limit_ang_y_lower')
sub.prop(scene, 'rbg_jo_limit_ang_y_upper')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_limit_ang_z', toggle=True)
sub.prop(scene, 'rbg_jo_limit_ang_z_lower')
sub.prop(scene, 'rbg_jo_limit_ang_z_upper')
col.label(text="Springs:")
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_use_spring_x', toggle=True)
sub.prop(scene, 'rbg_jo_spring_stiffness_x')
sub.prop(scene, 'rbg_jo_spring_damping_x')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_use_spring_y', toggle=True)
sub.prop(scene, 'rbg_jo_spring_stiffness_y')
sub.prop(scene, 'rbg_jo_spring_damping_y')
row = col.row(align=True)
sub = row.row(align=True)
sub.prop(scene, 'rbg_jo_use_spring_z', toggle=True)
sub.prop(scene, 'rbg_jo_spring_stiffness_z')
sub.prop(scene, 'rbg_jo_spring_damping_z')
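# NOTE: the limit/spring rows in RBG_PT_Add_Joints.draw and
# RBG_PT_Add_Active_Joints.draw are duplicated; a shared helper would remove the
# repetition (a sketch; the function name is hypothetical, not part of the add-on):
# def draw_limit_rows(col, scene, kind):  # kind is 'lin' or 'ang'
#     for axis in 'xyz':
#         row = col.row(align=True)
#         row.prop(scene, 'rbg_jo_limit_%s_%s' % (kind, axis), toggle=True)
#         row.prop(scene, 'rbg_jo_limit_%s_%s_lower' % (kind, axis))
#         row.prop(scene, 'rbg_jo_limit_%s_%s_upper' % (kind, axis))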
### add MainMenu
class RBG_MT_MenuRigidBodys(bpy.types.Menu):
# bl_idname = "menu_MT_create_rigidbodys"
bl_label = "Make Rigid Bodys"
bl_description = "make rigibodys & constraint"
def draw(self, context):
layout = self.layout
layout.operator(RBG_OT_CreateRigidBodysOnBones.bl_idname, icon='BONE_DATA')
layout.operator(RBG_OT_CreateRigidBodysPhysics.bl_idname, icon='PHYSICS')
layout.operator(RBG_OT_CreateRigidBodysJoints.bl_idname, icon='RIGID_BODY_CONSTRAINT')
layout.operator(RBG_OT_CreateRigidBodysPhysicsJoints.bl_idname, icon='RIGID_BODY')
# add menu
def menu_fn(self, context):
self.layout.separator()
self.layout.menu(self.bl_idname, icon='MESH_ICOSPHERE')
@classmethod
def register(cls):
bpy.app.translations.register(__name__, translation_dict)
bpy.types.VIEW3D_MT_pose.append(cls.menu_fn)
@classmethod
def unregister(cls):
bpy.types.VIEW3D_MT_pose.remove(cls.menu_fn)
bpy.app.translations.unregister(__name__)
### user prop
def user_props():
scene = bpy.types.Scene
scene.rbg_rb_shape = EnumProperty(
name='Shape',
description='Choose Rigid Body Shape',
items=shapes,
default='CAPSULE')
scene.rbg_rc_dim = FloatVectorProperty(
name = "Dimensions",
description = "rigid body Dimensions XYZ",
default = (1, 1, 1),
subtype = 'XYZ',
unit = 'NONE',
min = 0,
max = 5)
scene.rbg_rc_mass = FloatProperty(
name = "Mass",
description = "rigid body mass",
default = 1.0,
subtype = 'NONE',
min = 0.001,)
scene.rbg_rc_friction = FloatProperty(
name = "Friction",
description = "rigid body friction",
default = 0.5,
subtype = 'NONE',
min = 0,
max = 1)
scene.rbg_rc_bounciness = FloatProperty(
name = "Bounciness",
description = "rigid body bounciness",
default = 0.5,
subtype = 'NONE',
min = 0,
max = 1)
scene.rbg_rc_translation = FloatProperty(
name = "Translation",
description = "rigid body translation",
default = 0.5,
subtype = 'NONE',
min = 0,
max = 1)
scene.rbg_rc_rotation = FloatProperty(
name = "Rotation",
description = "rigid body rotation",
default = 0.5,
subtype = 'NONE',
min = 0,
max = 1)
scene.rbg_jo_type = EnumProperty(
name='Type',
description='Choose Contstraint Type',
items=types,
default='GENERIC_SPRING')
scene.rbg_jo_dim = FloatVectorProperty(
name = "joint Dimensions",
description = "joint Dimensions XYZ",
default = (1, 1, 1),
subtype = 'XYZ',
unit = 'NONE',
min = 0,
max = 5)
scene.rbg_jo_limit_lin_x = BoolProperty(
name='X Axis',
description='limit x',
default=True,
options={'ANIMATABLE'})
scene.rbg_jo_limit_lin_y = BoolProperty(
name='Y Axis',
description='limit y',
default=True)
scene.rbg_jo_limit_lin_z = BoolProperty(
name='Z Axis',
description='limit z',
default=True)
scene.rbg_jo_limit_lin_x_lower = FloatProperty(
name = "Lower",
description = "joint limit_lin_x_lower",
default = 0,
subtype = 'NONE')
scene.rbg_jo_limit_lin_y_lower = FloatProperty(
name = "Lower",
description = "joint limit_lin_y_lower",
default = 0,
subtype = 'NONE')
scene.rbg_jo_limit_lin_z_lower = FloatProperty(
name = "Lower",
description = "joint limit_lin_z_lower",
default = 0,
subtype = 'NONE')
scene.rbg_jo_limit_lin_x_upper = FloatProperty(
name = "Upper",
description = "joint limit_lin_x_upper",
default = 0,
subtype = 'NONE')
scene.rbg_jo_limit_lin_y_upper = FloatProperty(
name = "Upper",
description = "joint limit_lin_y_upper",
default = 0,
subtype = 'NONE')
scene.rbg_jo_limit_lin_z_upper = FloatProperty(
name = "Upper",
description = "joint limit_lin_z_upper",
default = 0,
subtype = 'NONE')
scene.rbg_jo_limit_ang_x = BoolProperty(
name='X Angle',
description='Angle limit x',
default=True,
options={'ANIMATABLE'})
scene.rbg_jo_limit_ang_y = BoolProperty(
name='Y Angle',
description='Angle limit y',
default=True)
scene.rbg_jo_limit_ang_z = BoolProperty(
name='Z Angle',
description='Angle limit z',
default=True)
scene.rbg_jo_limit_ang_x_lower = FloatProperty(
name = "Lower",
description = "joint limit_ang_x_lower",
default = -0.785398,
subtype = 'ANGLE')
scene.rbg_jo_limit_ang_y_lower = FloatProperty(
name = "Lower",
description = "joint limit_ang_y_lower",
default = -0.785398,
subtype = 'ANGLE')
scene.rbg_jo_limit_ang_z_lower = FloatProperty(
name = "Lower",
description = "joint limit_ang_z_lower",
default = -0.785398,
subtype = 'ANGLE')
scene.rbg_jo_limit_ang_x_upper = FloatProperty(
name = "Upper",
description = "joint limit_ang_x_upper",
default = 0.785398,
subtype = 'ANGLE')
scene.rbg_jo_limit_ang_y_upper = FloatProperty(
name = "Upper",
description = "joint limit_ang_y_upper",
default = 0.785398,
subtype = 'ANGLE')
scene.rbg_jo_limit_ang_z_upper = FloatProperty(
name = "Upper",
description = "joint limit_ang_z_upper",
default = 0.785398,
subtype = 'ANGLE')
scene.rbg_jo_use_spring_x = BoolProperty(
name='X',
description='use spring x',
default=False)
scene.rbg_jo_use_spring_y = BoolProperty(
name='Y',
description='use spring y',
default=False)
scene.rbg_jo_use_spring_z = BoolProperty(
name='Z',
description='use spring z',
default=False)
scene.rbg_jo_spring_stiffness_x = FloatProperty(
name = "Stiffness",
description = "Stiffness on the X Axis",
default = 10.000,
subtype = 'NONE',
min = 0)
scene.rbg_jo_spring_stiffness_y = FloatProperty(
name = "Stiffness",
description = "Stiffness on the Y Axis",
default = 10.000,
subtype = 'NONE',
min = 0)
scene.rbg_jo_spring_stiffness_z = FloatProperty(
name = "Stiffness",
description = "Stiffness on the Z Axis",
default = 10.000,
subtype = 'NONE',
min = 0)
scene.rbg_jo_spring_damping_x = FloatProperty(
name = "Damping X",
description = "Damping on the X Axis",
default = 0.5,
subtype = 'NONE',
min = 0,
max = 1)
scene.rbg_jo_spring_damping_y = FloatProperty(
name = "Damping Y",
description = "Damping on the Y Axis",
default = 0.5,
subtype = 'NONE',
min = 0,
max = 1)
scene.rbg_jo_spring_damping_z = FloatProperty(
name = "Damping Z",
description = "Damping on the Z Axis",
default = 0.5,
subtype = 'NONE',
min = 0,
max = 1)
scene.rbg_jo_constraint_object = BoolProperty(
name='Auto Constraint Object',
description='Constraint Object',
default=True)
scene.rbg_rc_rootbody_passive = BoolProperty(
name='Passive',
description='Rigid Body Type Passive',
default=True)
scene.rbg_rc_add_pole_rootbody = BoolProperty(
name='Add Pole Object',
description='Add Pole Object',
default=True)
scene.rbg_rc_rootbody_animated = BoolProperty(
name='animated',
description='Root Rigid Body sets animated',
default=True)
scene.rbg_rc_parent_armature = BoolProperty(
name='Parent to armature',
description='Parent to armature',
default=True)
def del_props():
    # Remove every rbg_* Scene property that user_props() registered,
    # not just rbg_rb_shape, so unregister leaves no stale properties behind.
    scene = bpy.types.Scene
    for prop_name in [name for name in dir(scene) if name.startswith("rbg_")]:
        delattr(scene, prop_name)
### Create Rigid Bodies On Bones
class RBG_OT_CreateRigidBodysOnBones(bpy.types.Operator):
bl_idname = "rigidbody.on_bones"
bl_label = "Add Passive(on bones)"
bl_description = "make rigibodys move on bones"
bl_options = {'REGISTER', 'UNDO'}
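    # Base scale factors multiplied by each bone's length (together with the
    # user's rbg_rc_dim settings) when sizing the generated rigid body cube;
    # see the dimensions block in execute() below.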
init_rc_dimX = 0.28
init_rc_dimY = 0.28
init_rc_dimZ = 1.30
###
def execute(self, context):
scene = context.scene
###selected Armature
ob = bpy.context.active_object
newCollectionName = 'RBGcollection.' + ob.name
### Apply Object transform
bpy.ops.object.posemode_toggle()
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
bpy.ops.object.posemode_toggle()
### create collection
if bpy.data.collections.get(newCollectionName) is None:
newcollection = bpy.data.collections.new(newCollectionName)
bpy.context.scene.collection.children.link(newcollection)
if len(bpy.context.selected_pose_bones) == 0:
return {'FINISHED'}
for selected_bones in bpy.context.selected_pose_bones:
#self.report({'INFO'}, str(selected_bones.vector[0]))
###Create Rigidbody Cube
bpy.ops.mesh.primitive_cube_add(size=1.0, calc_uvs=False, enter_editmode=False, align='WORLD', location=selected_bones.center, rotation=(0.0, 0.0, 0.0))
rc = bpy.context.active_object
rc.name = "rbg." + selected_bones.name
viewport_display(self, rc)
rc.show_in_front = True
rc.hide_render = True
### link to collection
bpy.data.collections[newCollectionName].objects.link(rc)
###Damped Track
bpy.ops.object.constraint_add(type='DAMPED_TRACK')
dt = bpy.context.object.constraints["Damped Track"]
dt.target = ob
dt.subtarget = selected_bones.name
dt.head_tail = 1
dt.track_axis = 'TRACK_Z'
            ### Apply Transform
bpy.ops.object.visual_transform_apply()
rc.constraints.remove(dt)
### Rigid Body Dimensions
bpy.context.object.dimensions = [
selected_bones.length * self.init_rc_dimX * scene.rbg_rc_dim[0],
selected_bones.length * self.init_rc_dimY * scene.rbg_rc_dim[1],
selected_bones.length * self.init_rc_dimZ * scene.rbg_rc_dim[2]]
### Scale Apply
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
### Set Rigid Body
bpy.ops.rigidbody.object_add()
if scene.rbg_rc_rootbody_passive == True:
bpy.context.object.rigid_body.type = "PASSIVE"
else:
bpy.context.object.rigid_body.type = "ACTIVE"
bpy.context.object.rigid_body.collision_shape = scene.rbg_rb_shape
bpy.context.object.rigid_body.kinematic = scene.rbg_rc_rootbody_animated
bpy.context.object.rigid_body.mass = scene.rbg_rc_mass
bpy.context.object.rigid_body.friction = scene.rbg_rc_friction
bpy.context.object.rigid_body.restitution = scene.rbg_rc_bounciness
bpy.context.object.rigid_body.linear_damping = scene.rbg_rc_translation
bpy.context.object.rigid_body.angular_damping = scene.rbg_rc_rotation
### Child OF
CoC = rc.constraints.new("CHILD_OF")
CoC.name = 'Child_Of_' + selected_bones.name
CoC.target = ob
CoC.subtarget = selected_bones.name
#without ops way to childof_set_inverse
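            # Computing the inverse of the target pose bone's matrix by hand
            # reproduces what the Child Of constraint's "Set Inverse" button
            # (an operator) does, avoiding the need for an operator context here.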
sub_target = bpy.data.objects[ob.name].pose.bones[selected_bones.name]
#self.report({'INFO'}, str(sub_target))
CoC.inverse_matrix = sub_target.matrix.inverted()
rc.update_tag(refresh={'OBJECT'})
bpy.context.scene.update_tag()
#parent to armature
if scene.rbg_rc_parent_armature == True:
rc.parent = ob
###clear object select
bpy.context.view_layer.objects.active = ob
bpy.ops.object.posemode_toggle()
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.posemode_toggle()
bpy.ops.pose.select_all(action='DESELECT')
self.report({'INFO'}, "OK")
return {'FINISHED'}
#
class RBG_OT_CreateRigidBodysPhysics(bpy.types.Operator):
bl_idname = "rigidbody.physics"
bl_label = "Add Active"
bl_description = "make physics engine on rigibodys"
bl_options = {'REGISTER', 'UNDO'}
init_rc_dimX = 0.28
init_rc_dimY = 0.28
init_rc_dimZ = 1.30
###
def execute(self, context):
scene = context.scene
###selected Armature
ob = bpy.context.active_object
#self.report({'INFO'}, ob.data)
### Apply Object transform
bpy.ops.object.posemode_toggle()
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
bpy.ops.object.posemode_toggle()
for selected_bones in bpy.context.selected_pose_bones:
###Create Rigidbody Cube
bpy.ops.mesh.primitive_cube_add(size=1.0, calc_uvs=False, enter_editmode=False, align='WORLD', location=selected_bones.center, rotation=(0.0, 0.0, 0.0))
rc = bpy.context.active_object
rc.name = "rbg." + selected_bones.name
viewport_display(self, rc)
rc.show_in_front = True
bpy.data.objects[rc.name].hide_render = True
###Damped Track
bpy.ops.object.constraint_add(type='DAMPED_TRACK')
dt = bpy.context.object.constraints["Damped Track"]
dt.target = ob
dt.subtarget = selected_bones.name
dt.head_tail = 1
dt.track_axis = 'TRACK_Z'
            ### Apply Transform
bpy.ops.object.visual_transform_apply()
rc.constraints.remove(dt)
### Rigid Body Dimensions
bpy.context.object.dimensions = [
selected_bones.length * self.init_rc_dimX * scene.rbg_rc_dim[0],
selected_bones.length * self.init_rc_dimY * scene.rbg_rc_dim[1],
selected_bones.length * self.init_rc_dimZ * scene.rbg_rc_dim[2]]
### Scale Apply
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
### Set Rigid Body
bpy.ops.rigidbody.object_add()
bpy.context.object.rigid_body.type = "ACTIVE"
bpy.context.object.rigid_body.collision_shape = scene.rbg_rb_shape
bpy.context.object.rigid_body.kinematic = scene.rbg_rc_rootbody_animated
bpy.context.object.rigid_body.mass = scene.rbg_rc_mass
bpy.context.object.rigid_body.friction = scene.rbg_rc_friction
bpy.context.object.rigid_body.restitution = scene.rbg_rc_bounciness
bpy.context.object.rigid_body.linear_damping = scene.rbg_rc_translation
bpy.context.object.rigid_body.angular_damping = scene.rbg_rc_rotation
### Child OF
bpy.context.view_layer.objects.active = ob
bpy.ops.pose.armature_apply()
bpy.ops.pose.select_all(action='DESELECT')
bpy.context.object.data.bones.active = bpy.context.object.data.bones[selected_bones.name]
ab = bpy.context.active_pose_bone
CoC = ab.constraints.new("CHILD_OF")
CoC.name = 'Child_Of_' + rc.name
CoC.target = rc
#without ops way to childof_set_inverse
CoC_target = rc
#self.report({'INFO'}, str(rc))
CoC.inverse_matrix = CoC_target.matrix_world.inverted()
rc.update_tag(refresh={'OBJECT'})
bpy.context.scene.update_tag()
###parent none
bpy.ops.object.editmode_toggle()
bpy.context.active_bone.parent = None
bpy.ops.object.posemode_toggle()
#parent to armature
if scene.rbg_rc_parent_armature == True:
rc.parent = ob
###clear object select
bpy.context.view_layer.objects.active = ob
bpy.ops.object.posemode_toggle()
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.posemode_toggle()
bpy.ops.pose.select_all(action='DESELECT')
self.report({'INFO'}, "OK")
return {'FINISHED'}
#
class RBG_OT_CreateRigidBodysJoints(bpy.types.Operator):
bl_idname = "rigidbody.joints"
bl_label = "Add Joints"
bl_description = "add Add Joints on bones"
bl_options = {'REGISTER', 'UNDO'}
init_rbg_jo_dimX = 0.33
init_rbg_jo_dimY = 0.33
init_rbg_jo_dimZ = 0.33
###
def execute(self, context):
scene = context.scene
add_RigidBody_World()
###selected Armature
ob = bpy.context.active_object
### Apply Object transform
bpy.ops.object.posemode_toggle()
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
bpy.ops.object.posemode_toggle()
for selected_bones in bpy.context.selected_pose_bones:
#self.report({'INFO'}, str(selected_bones.vector[0]))
###Create Rigidbody Cube
# bpy.ops.mesh.primitive_cube_add(size=1.0, calc_uvs=False, enter_editmode=False, align='WORLD', location=selected_bones.head, rotation=(0.0, 0.0, 0.0))
bpy.ops.object.posemode_toggle()
bpy.ops.object.empty_add(type='PLAIN_AXES', radius=0.1, align='WORLD', location=selected_bones.head)
rc = bpy.context.active_object
rc.name = "joint." + selected_bones.name
viewport_display(self, rc)
rc.show_in_front = True
bpy.data.objects[rc.name].hide_render = True
### Rigid Body Dimensions
bpy.context.object.dimensions = [
selected_bones.length * self.init_rbg_jo_dimX * scene.rbg_jo_dim[0],
selected_bones.length * self.init_rbg_jo_dimY * scene.rbg_jo_dim[1],
selected_bones.length * self.init_rbg_jo_dimZ * scene.rbg_jo_dim[2]]
### Scale Apply
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
### Set Rigid Body
bpy.ops.rigidbody.constraint_add()
bpy.context.object.rigid_body_constraint.type = scene.rbg_jo_type
bpy.context.object.rigid_body_constraint.use_breaking = False
bpy.context.object.rigid_body_constraint.use_override_solver_iterations = True
bpy.context.object.rigid_body_constraint.breaking_threshold = 10
bpy.context.object.rigid_body_constraint.solver_iterations = 10
bpy.context.object.rigid_body_constraint.use_limit_lin_x = scene.rbg_jo_limit_lin_x
bpy.context.object.rigid_body_constraint.use_limit_lin_y = scene.rbg_jo_limit_lin_y
bpy.context.object.rigid_body_constraint.use_limit_lin_z = scene.rbg_jo_limit_lin_z
bpy.context.object.rigid_body_constraint.limit_lin_x_lower = scene.rbg_jo_limit_lin_x_lower
bpy.context.object.rigid_body_constraint.limit_lin_y_lower = scene.rbg_jo_limit_lin_y_lower
bpy.context.object.rigid_body_constraint.limit_lin_z_lower = scene.rbg_jo_limit_lin_z_lower
bpy.context.object.rigid_body_constraint.limit_lin_x_upper = scene.rbg_jo_limit_lin_x_upper
bpy.context.object.rigid_body_constraint.limit_lin_y_upper = scene.rbg_jo_limit_lin_y_upper
bpy.context.object.rigid_body_constraint.limit_lin_z_upper = scene.rbg_jo_limit_lin_z_upper
bpy.context.object.rigid_body_constraint.use_limit_ang_x = scene.rbg_jo_limit_ang_x
bpy.context.object.rigid_body_constraint.use_limit_ang_y = scene.rbg_jo_limit_ang_y
bpy.context.object.rigid_body_constraint.use_limit_ang_z = scene.rbg_jo_limit_ang_z
bpy.context.object.rigid_body_constraint.limit_ang_x_lower = scene.rbg_jo_limit_ang_x_lower
bpy.context.object.rigid_body_constraint.limit_ang_y_lower = scene.rbg_jo_limit_ang_y_lower
bpy.context.object.rigid_body_constraint.limit_ang_z_lower = scene.rbg_jo_limit_ang_z_lower
bpy.context.object.rigid_body_constraint.limit_ang_x_upper = scene.rbg_jo_limit_ang_x_upper
bpy.context.object.rigid_body_constraint.limit_ang_y_upper = scene.rbg_jo_limit_ang_y_upper
bpy.context.object.rigid_body_constraint.limit_ang_z_upper = scene.rbg_jo_limit_ang_z_upper
bpy.context.object.rigid_body_constraint.use_spring_x = scene.rbg_jo_use_spring_x
bpy.context.object.rigid_body_constraint.use_spring_y = scene.rbg_jo_use_spring_y
bpy.context.object.rigid_body_constraint.use_spring_z = scene.rbg_jo_use_spring_z
bpy.context.object.rigid_body_constraint.spring_stiffness_x = scene.rbg_jo_spring_stiffness_x
bpy.context.object.rigid_body_constraint.spring_stiffness_y = scene.rbg_jo_spring_stiffness_y
bpy.context.object.rigid_body_constraint.spring_stiffness_z = scene.rbg_jo_spring_stiffness_z
bpy.context.object.rigid_body_constraint.spring_damping_x = scene.rbg_jo_spring_damping_x
bpy.context.object.rigid_body_constraint.spring_damping_y = scene.rbg_jo_spring_damping_y
bpy.context.object.rigid_body_constraint.spring_damping_z = scene.rbg_jo_spring_damping_z
#parent to armature
if scene.rbg_rc_parent_armature == True:
rc.parent = ob
###clear object select
bpy.context.view_layer.objects.active = ob
# bpy.ops.object.posemode_toggle()
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.posemode_toggle()
bpy.ops.pose.select_all(action='DESELECT')
self.report({'INFO'}, "OK")
return {'FINISHED'}
class RBG_OT_CreateRigidBodysPhysicsJoints(bpy.types.Operator):
bl_idname = "rigidbody.physics_joints"
bl_label = "Add Active & Joints"
bl_description = "Add Active & Joints"
bl_options = {'REGISTER', 'UNDO'}
init_rc_dimX = 0.28
init_rc_dimY = 0.28
init_rc_dimZ = 1.30
init_rbg_jo_dimX = 0.33
init_rbg_jo_dimY = 0.33
init_rbg_jo_dimZ = 0.33
#
def execute(self, context):
scene = context.scene
add_RigidBody_World()
###selected Armature
ob = bpy.context.active_object
# self.report({'INFO'}, "ob:" + str(ob))
### Apply Object transform
bpy.ops.object.posemode_toggle()
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
bpy.ops.object.posemode_toggle()
parent_bones_ob = ""
pole_dict = {}
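        # pole_dict maps a parent pose bone -> its passive "pole" rigid body, so a
        # parent that sits outside the selection gets at most one pole object.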
wm = bpy.context.window_manager
spb = bpy.context.selected_pose_bones
tot = len(spb)
# wm.progress_begin(0, tot)
i = 0
# self.report({'INFO'}, "pole_dict:" + str(pole_dict))
for selected_bones in spb:
#self.report({'INFO'}, str(selected_bones.vector[0]))
i += 1
# wm.progress_update(i)
###Joint Session
###Create Rigidbody Cube
# bpy.ops.mesh.primitive_cube_add(size=1.0, calc_uvs=False, enter_editmode=False, align='WORLD', location=selected_bones.head, rotation=(0.0, 0.0, 0.0))
bpy.ops.object.posemode_toggle()
bpy.ops.object.empty_add(type='PLAIN_AXES', radius=0.1, align='WORLD', location=selected_bones.head)
jc = bpy.context.active_object
jc.name = "joint." + ob.name + "." + selected_bones.name
viewport_display(self, jc)
jc.show_in_front = True
bpy.data.objects[jc.name].hide_render = True
### Rigid Body Dimensions
bpy.context.object.dimensions = [
selected_bones.length * self.init_rbg_jo_dimX * scene.rbg_jo_dim[0],
selected_bones.length * self.init_rbg_jo_dimY * scene.rbg_jo_dim[1],
selected_bones.length * self.init_rbg_jo_dimZ * scene.rbg_jo_dim[2]]
### Scale Apply
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
### Set Rigid Body
bpy.ops.rigidbody.constraint_add()
jc.rigid_body_constraint.type = scene.rbg_jo_type
jc.rigid_body_constraint.use_breaking = False
jc.rigid_body_constraint.use_override_solver_iterations = True
jc.rigid_body_constraint.breaking_threshold = 10
jc.rigid_body_constraint.solver_iterations = 10
jc.rigid_body_constraint.use_limit_lin_x = scene.rbg_jo_limit_lin_x
jc.rigid_body_constraint.use_limit_lin_y = scene.rbg_jo_limit_lin_y
jc.rigid_body_constraint.use_limit_lin_z = scene.rbg_jo_limit_lin_z
jc.rigid_body_constraint.limit_lin_x_lower = scene.rbg_jo_limit_lin_x_lower
jc.rigid_body_constraint.limit_lin_y_lower = scene.rbg_jo_limit_lin_y_lower
jc.rigid_body_constraint.limit_lin_z_lower = scene.rbg_jo_limit_lin_z_lower
jc.rigid_body_constraint.limit_lin_x_upper = scene.rbg_jo_limit_lin_x_upper
jc.rigid_body_constraint.limit_lin_y_upper = scene.rbg_jo_limit_lin_y_upper
jc.rigid_body_constraint.limit_lin_z_upper = scene.rbg_jo_limit_lin_z_upper
jc.rigid_body_constraint.use_limit_ang_x = scene.rbg_jo_limit_ang_x
jc.rigid_body_constraint.use_limit_ang_y = scene.rbg_jo_limit_ang_y
jc.rigid_body_constraint.use_limit_ang_z = scene.rbg_jo_limit_ang_z
jc.rigid_body_constraint.limit_ang_x_lower = scene.rbg_jo_limit_ang_x_lower
jc.rigid_body_constraint.limit_ang_y_lower = scene.rbg_jo_limit_ang_y_lower
jc.rigid_body_constraint.limit_ang_z_lower = scene.rbg_jo_limit_ang_z_lower
jc.rigid_body_constraint.limit_ang_x_upper = scene.rbg_jo_limit_ang_x_upper
jc.rigid_body_constraint.limit_ang_y_upper = scene.rbg_jo_limit_ang_y_upper
jc.rigid_body_constraint.limit_ang_z_upper = scene.rbg_jo_limit_ang_z_upper
jc.rigid_body_constraint.use_spring_x = scene.rbg_jo_use_spring_x
jc.rigid_body_constraint.use_spring_y = scene.rbg_jo_use_spring_y
jc.rigid_body_constraint.use_spring_z = scene.rbg_jo_use_spring_z
jc.rigid_body_constraint.spring_stiffness_x = scene.rbg_jo_spring_stiffness_x
jc.rigid_body_constraint.spring_stiffness_y = scene.rbg_jo_spring_stiffness_y
jc.rigid_body_constraint.spring_stiffness_z = scene.rbg_jo_spring_stiffness_z
jc.rigid_body_constraint.spring_damping_x = scene.rbg_jo_spring_damping_x
jc.rigid_body_constraint.spring_damping_y = scene.rbg_jo_spring_damping_y
jc.rigid_body_constraint.spring_damping_z = scene.rbg_jo_spring_damping_z
# self.report({'INFO'}, "selected_bones.parent:" + str(selected_bones.parent))
if selected_bones.parent is not None and selected_bones.parent not in spb and selected_bones.parent not in pole_dict and scene.rbg_rc_add_pole_rootbody == True:
###Create Rigidbody Cube
bpy.ops.mesh.primitive_cube_add(size=1.0, calc_uvs=False, enter_editmode=False, align='WORLD', location=selected_bones.parent.center, rotation=(0.0, 0.0, 0.0))
rc2 = bpy.context.active_object
rc2.name = "rbg.pole." + ob.name + "." + selected_bones.parent.name
viewport_display(self, rc2)
rc2.show_in_front = True
rc2.hide_render = True
### Rigid Body Dimensions
bpy.context.object.dimensions = [
selected_bones.parent.length * self.init_rbg_jo_dimX,
selected_bones.parent.length * self.init_rbg_jo_dimY,
selected_bones.parent.length * self.init_rbg_jo_dimZ]
### Scale Apply
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
### Set Rigid Body
bpy.ops.rigidbody.object_add()
rc2.rigid_body.type = "PASSIVE"
rc2.rigid_body.collision_shape = "BOX"
rc2.rigid_body.kinematic = True
### Child OF
CoC2 = rc2.constraints.new("CHILD_OF")
CoC2.name = 'Child_Of_' + selected_bones.parent.name
CoC2.target = ob
CoC2.subtarget = selected_bones.parent.name
#without ops way to childof_set_inverse
sub_target = bpy.data.objects[ob.name].pose.bones[selected_bones.parent.name]
#self.report({'INFO'}, str(sub_target))
CoC2.inverse_matrix = sub_target.matrix.inverted()
rc2.update_tag(refresh={'OBJECT'})
bpy.context.scene.update_tag()
#parent to armature
if scene.rbg_rc_parent_armature == True:
rc2.parent = ob
###constraint.object1
if selected_bones.parent is not None and selected_bones.parent not in spb and scene.rbg_rc_add_pole_rootbody == True:
if selected_bones.parent not in pole_dict:
pole_dict[selected_bones.parent] = rc2
# self.report({'INFO'}, "pole_dict:" + str(pole_dict))
jc.rigid_body_constraint.object1 = rc2
parent_bones_ob = "rbg." + ob.name + "." + selected_bones.name
else:
jc.rigid_body_constraint.object1 = pole_dict[selected_bones.parent]
parent_bones_ob = "rbg." + ob.name + "." + selected_bones.name
else:
if parent_bones_ob != "":
jc.rigid_body_constraint.object1 = bpy.data.objects[parent_bones_ob]
parent_bones_ob = "rbg." + ob.name + "." + selected_bones.name
#self.report({'INFO'}, "recursive:" + str(selected_bones.children_recursive))
#self.report({'INFO'}, "parent_bones_ob:" + str(parent_bones_ob))
#parent to armature
if scene.rbg_rc_parent_armature == True:
jc.parent = ob
###Rigid Body Session
###Create Rigidbody Cube
bpy.ops.mesh.primitive_cube_add(size=1.0, calc_uvs=False, enter_editmode=False, align='WORLD', location=selected_bones.center, rotation=(0.0, 0.0, 0.0))
rc = bpy.context.active_object
rc.name = parent_bones_ob
viewport_display(self, rc)
rc.show_in_front = True
bpy.data.objects[rc.name].hide_render = True
###constraint.object2
if parent_bones_ob != "":
jc.rigid_body_constraint.object2 = bpy.data.objects[parent_bones_ob]
if len(selected_bones.children_recursive) == 0:
parent_bones_ob = ""
###Damped Track
bpy.ops.object.constraint_add(type='DAMPED_TRACK')
dt = bpy.context.object.constraints["Damped Track"]
dt.target = ob
dt.subtarget = selected_bones.name
dt.head_tail = 1
dt.track_axis = 'TRACK_Z'
            ### Apply Transform
bpy.ops.object.visual_transform_apply()
rc.constraints.remove(dt)
### Rigid Body Dimensions
bpy.context.object.dimensions = [
selected_bones.length * self.init_rc_dimX * scene.rbg_rc_dim[0],
selected_bones.length * self.init_rc_dimY * scene.rbg_rc_dim[1],
selected_bones.length * self.init_rc_dimZ * scene.rbg_rc_dim[2]]
### Scale Apply
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
### Set Rigid Body
bpy.ops.rigidbody.object_add()
bpy.context.object.rigid_body.type = "ACTIVE"
bpy.context.object.rigid_body.collision_shape = scene.rbg_rb_shape
bpy.context.object.rigid_body.mass = scene.rbg_rc_mass
bpy.context.object.rigid_body.friction = scene.rbg_rc_friction
bpy.context.object.rigid_body.restitution = scene.rbg_rc_bounciness
bpy.context.object.rigid_body.linear_damping = scene.rbg_rc_translation
bpy.context.object.rigid_body.angular_damping = scene.rbg_rc_rotation
bpy.ops.object.posemode_toggle()
### Child OF
bpy.context.view_layer.objects.active = ob
bpy.ops.object.posemode_toggle()
bpy.ops.pose.armature_apply()
#bpy.ops.pose.visual_transform_apply()
bpy.ops.pose.select_all(action='DESELECT')
bpy.context.object.data.bones.active = bpy.context.object.data.bones[selected_bones.name]
ab = bpy.context.active_pose_bone
#self.report({'INFO'}, str(rc.name))
CoC = ab.constraints.new("CHILD_OF")
CoC.name = 'Child_Of_' + rc.name
CoC.target = rc
#without ops way to childof_set_inverse
CoC_target = rc
#self.report({'INFO'}, str(rc))
CoC.inverse_matrix = CoC_target.matrix_world.inverted()
rc.update_tag(refresh={'OBJECT'})
bpy.context.scene.update_tag()
#parent to armature
if scene.rbg_rc_parent_armature == True:
rc.parent = ob
###parent none
bpy.ops.object.editmode_toggle()
bpy.context.active_bone.parent = None
bpy.ops.object.posemode_toggle()
update_progress("Rigidbody_gen Progress", i/tot)
###clear object select
bpy.context.view_layer.objects.active = ob
bpy.ops.object.posemode_toggle()
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.posemode_toggle()
bpy.ops.pose.select_all(action='DESELECT')
# update_progress("Rigidbody_gen:FINISHED", 1)
# wm.progress_end()
self.report({'INFO'}, "FINISHED")
return {'FINISHED'}
def viewport_display(self, rb):
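    """Show a generated helper object as a wireframe drawn in front, without shadows, hidden from renders."""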
rb.display_type = 'WIRE'
rb.show_in_front = True
rb.display.show_shadows = False
rb.hide_render = True
def add_RigidBody_World():
scene = bpy.context.scene
if scene.rigidbody_world is None:
bpy.ops.rigidbody.world_add()
def update_progress(job_title, progress):
length = 20 # modify this to change the length
block = int(round(length*progress))
msg = "\r{0}: [{1}] {2}%".format(job_title, "#"*block + "-"*(length-block), round(progress*100, 2))
if progress >= 1: msg += " DONE\r\n"
sys.stdout.write(msg)
sys.stdout.flush()
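# Example: update_progress("Export", 0.5) writes
# "\rExport: [##########----------] 50.0%" to stdout.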
classes = [
RBG_PT_MenuRigidBodyTools,
RBG_PT_Add_Passive,
RBG_PT_Add_Active,
RBG_PT_Add_Joints,
RBG_PT_Add_Active_Joints,
RBG_MT_MenuRigidBodys,
RBG_OT_CreateRigidBodysOnBones,
RBG_OT_CreateRigidBodysPhysics,
RBG_OT_CreateRigidBodysJoints,
RBG_OT_CreateRigidBodysPhysicsJoints
]
# Register classes
def register():
for cls in classes:
bpy.utils.register_class(cls)
user_props()
# Unregister classes
def unregister():
del_props()
for cls in classes:
bpy.utils.unregister_class(cls)
# main
if __name__ == "__main__":
register()
| 36.664308 | 179 | 0.628514 | 5,990 | 46,637 | 4.592821 | 0.056594 | 0.066882 | 0.054887 | 0.049071 | 0.812839 | 0.787503 | 0.751845 | 0.719639 | 0.661517 | 0.642725 | 0 | 0.0085 | 0.263375 | 46,637 | 1,271 | 180 | 36.693155 | 0.792251 | 0.05828 | 0 | 0.592308 | 0 | 0 | 0.122528 | 0.028314 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023077 | false | 0.016484 | 0.003297 | 0.001099 | 0.116484 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4dcfc181416b9866a4142d09750315ec30a735d7 | 98 | py | Python | seer/__init__.py | cshenton/seer-python | 72ff88edf4148c2b2a13deb8e1ad984647124874 | [
"Apache-2.0"
] | 2 | 2019-05-22T21:36:01.000Z | 2020-01-16T12:23:45.000Z | seer/__init__.py | cshenton/seer-python | 72ff88edf4148c2b2a13deb8e1ad984647124874 | [
"Apache-2.0"
] | null | null | null | seer/__init__.py | cshenton/seer-python | 72ff88edf4148c2b2a13deb8e1ad984647124874 | [
"Apache-2.0"
] | 1 | 2020-01-14T23:53:19.000Z | 2020-01-14T23:53:19.000Z | """Package seer is a client for interacting with a seer server."""
from seer.client import Client
| 32.666667 | 66 | 0.765306 | 16 | 98 | 4.6875 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153061 | 98 | 2 | 67 | 49 | 0.903614 | 0.612245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4de1331cd971953eda37c01e272dcd027eaf8a3b | 290 | py | Python | androidemu/java/classes/executable.py | DXCyber409/AndroidNativeEmulator | 11a0360a947114375757724eecd9bd9dbca43a56 | [
"Apache-2.0"
] | 3 | 2020-05-21T09:15:11.000Z | 2022-01-12T13:52:20.000Z | androidemu/java/classes/executable.py | DXCyber409/AndroidNativeEmulator | 11a0360a947114375757724eecd9bd9dbca43a56 | [
"Apache-2.0"
] | null | null | null | androidemu/java/classes/executable.py | DXCyber409/AndroidNativeEmulator | 11a0360a947114375757724eecd9bd9dbca43a56 | [
"Apache-2.0"
] | null | null | null | from androidemu.java.java_class_def import JavaClassDef
from androidemu.java.java_field_def import JavaFieldDef
class Executable(metaclass=JavaClassDef, jvm_name='java/lang/reflect/Executable',
                 jvm_fields=[JavaFieldDef('accessFlags', 'I', False)]):
    def __init__(self):
        pass
| 36.25 | 138 | 0.782759 | 36 | 290 | 6.027778 | 0.611111 | 0.129032 | 0.165899 | 0.202765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113793 | 290 | 7 | 139 | 41.428571 | 0.844358 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 0.096552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
4de774099ce2e540e0e17b4158c990be8ca91814 | 6,298 | py | Python | tests/cli_tests.py | psbsgic/rabbitai | 769e120ba605d56ac076f810a549c38dac410c8e | [
"Apache-2.0"
] | null | null | null | tests/cli_tests.py | psbsgic/rabbitai | 769e120ba605d56ac076f810a549c38dac410c8e | [
"Apache-2.0"
] | null | null | null | tests/cli_tests.py | psbsgic/rabbitai | 769e120ba605d56ac076f810a549c38dac410c8e | [
"Apache-2.0"
] | 1 | 2021-07-09T16:29:50.000Z | 2021-07-09T16:29:50.000Z | import importlib
import json
from pathlib import Path
from unittest import mock
from zipfile import is_zipfile, ZipFile
import pytest
import yaml
from freezegun import freeze_time
import rabbitai.cli
from rabbitai import app
from tests.fixtures.birth_names_dashboard import load_birth_names_dashboard_with_slices
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
def test_export_dashboards_original(app_context, fs):
"""
Test that a JSON file is exported.
"""
# pylint: disable=reimported, redefined-outer-name
import rabbitai.cli # noqa: F811
# reload to define export_dashboards correctly based on the
# feature flags
importlib.reload(rabbitai.cli)
runner = app.test_cli_runner()
response = runner.invoke(rabbitai.cli.export_dashboards, ("-f", "dashboards.json"))
assert response.exit_code == 0
assert Path("dashboards.json").exists()
# check that file is valid JSON
with open("dashboards.json") as fp:
contents = fp.read()
json.loads(contents)
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
def test_export_datasources_original(app_context, fs):
"""
Test that a YAML file is exported.
"""
# pylint: disable=reimported, redefined-outer-name
import rabbitai.cli # noqa: F811
# reload to define export_dashboards correctly based on the
# feature flags
importlib.reload(rabbitai.cli)
runner = app.test_cli_runner()
response = runner.invoke(
rabbitai.cli.export_datasources, ("-f", "datasources.yaml")
)
assert response.exit_code == 0
assert Path("datasources.yaml").exists()
# check that file is valid JSON
with open("datasources.yaml") as fp:
contents = fp.read()
yaml.safe_load(contents)
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
@mock.patch.dict(
"rabbitai.config.DEFAULT_FEATURE_FLAGS", {"VERSIONED_EXPORT": True}, clear=True
)
def test_export_dashboards_versioned_export(app_context, fs):
"""
Test that a ZIP file is exported.
"""
# pylint: disable=reimported, redefined-outer-name
import rabbitai.cli # noqa: F811
# reload to define export_dashboards correctly based on the
# feature flags
importlib.reload(rabbitai.cli)
runner = app.test_cli_runner()
with freeze_time("2021-01-01T00:00:00Z"):
response = runner.invoke(rabbitai.cli.export_dashboards, ())
assert response.exit_code == 0
assert Path("dashboard_export_20210101T000000.zip").exists()
assert is_zipfile("dashboard_export_20210101T000000.zip")
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
@mock.patch.dict(
"rabbitai.config.DEFAULT_FEATURE_FLAGS", {"VERSIONED_EXPORT": True}, clear=True
)
def test_export_datasources_versioned_export(app_context, fs):
"""
Test that a ZIP file is exported.
"""
# pylint: disable=reimported, redefined-outer-name
import rabbitai.cli # noqa: F811
# reload to define export_dashboards correctly based on the
# feature flags
importlib.reload(rabbitai.cli)
runner = app.test_cli_runner()
with freeze_time("2021-01-01T00:00:00Z"):
response = runner.invoke(rabbitai.cli.export_datasources, ())
assert response.exit_code == 0
assert Path("dataset_export_20210101T000000.zip").exists()
assert is_zipfile("dataset_export_20210101T000000.zip")
@mock.patch.dict(
"rabbitai.config.DEFAULT_FEATURE_FLAGS", {"VERSIONED_EXPORT": True}, clear=True
)
@mock.patch("rabbitai.dashboards.commands.importers.dispatcher.ImportDashboardsCommand")
def test_import_dashboards_versioned_export(import_dashboards_command, app_context, fs):
"""
Test that both ZIP and JSON can be imported.
"""
# pylint: disable=reimported, redefined-outer-name
import rabbitai.cli # noqa: F811
# reload to define export_dashboards correctly based on the
# feature flags
importlib.reload(rabbitai.cli)
# write JSON file
with open("dashboards.json", "w") as fp:
fp.write('{"hello": "world"}')
runner = app.test_cli_runner()
response = runner.invoke(rabbitai.cli.import_dashboards, ("-p", "dashboards.json"))
assert response.exit_code == 0
expected_contents = {"dashboards.json": '{"hello": "world"}'}
import_dashboards_command.assert_called_with(expected_contents, overwrite=True)
# write ZIP file
with ZipFile("dashboards.zip", "w") as bundle:
with bundle.open("dashboards/dashboard.yaml", "w") as fp:
fp.write(b"hello: world")
runner = app.test_cli_runner()
response = runner.invoke(rabbitai.cli.import_dashboards, ("-p", "dashboards.zip"))
assert response.exit_code == 0
expected_contents = {"dashboard.yaml": "hello: world"}
import_dashboards_command.assert_called_with(expected_contents, overwrite=True)
@mock.patch.dict(
"rabbitai.config.DEFAULT_FEATURE_FLAGS", {"VERSIONED_EXPORT": True}, clear=True
)
@mock.patch("rabbitai.datasets.commands.importers.dispatcher.ImportDatasetsCommand")
def test_import_datasets_versioned_export(import_datasets_command, app_context, fs):
"""
Test that both ZIP and YAML can be imported.
"""
# pylint: disable=reimported, redefined-outer-name
import rabbitai.cli # noqa: F811
# reload to define export_datasets correctly based on the
# feature flags
importlib.reload(rabbitai.cli)
# write YAML file
with open("datasets.yaml", "w") as fp:
fp.write("hello: world")
runner = app.test_cli_runner()
response = runner.invoke(rabbitai.cli.import_datasources, ("-p", "datasets.yaml"))
assert response.exit_code == 0
expected_contents = {"datasets.yaml": "hello: world"}
import_datasets_command.assert_called_with(expected_contents, overwrite=True)
# write ZIP file
with ZipFile("datasets.zip", "w") as bundle:
with bundle.open("datasets/dataset.yaml", "w") as fp:
fp.write(b"hello: world")
runner = app.test_cli_runner()
response = runner.invoke(rabbitai.cli.import_datasources, ("-p", "datasets.zip"))
assert response.exit_code == 0
expected_contents = {"dataset.yaml": "hello: world"}
import_datasets_command.assert_called_with(expected_contents, overwrite=True)
| 32.802083 | 88 | 0.720387 | 796 | 6,298 | 5.513819 | 0.140704 | 0.052632 | 0.023696 | 0.029164 | 0.809068 | 0.800866 | 0.790613 | 0.683527 | 0.664388 | 0.630212 | 0 | 0.02102 | 0.169101 | 6,298 | 191 | 89 | 32.973822 | 0.817695 | 0.18101 | 0 | 0.485714 | 0 | 0 | 0.220921 | 0.124653 | 0 | 0 | 0 | 0 | 0.171429 | 1 | 0.057143 | false | 0 | 0.333333 | 0 | 0.390476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
1503adcd3f2b2aa5e7993b18bcdf2bacf13cf4b7 | 712 | py | Python | src/ml_gym/data_handling/iterators.py | SofiaTraba/mlgym | 3ececb4dcaa32119ced20987b81bd790303e7cb7 | [
"Apache-2.0"
] | 6 | 2020-11-28T16:56:01.000Z | 2022-01-06T12:49:12.000Z | src/ml_gym/data_handling/iterators.py | SofiaTraba/mlgym | 3ececb4dcaa32119ced20987b81bd790303e7cb7 | [
"Apache-2.0"
] | 6 | 2020-12-18T14:51:43.000Z | 2021-12-01T17:20:55.000Z | src/ml_gym/data_handling/iterators.py | SofiaTraba/mlgym | 3ececb4dcaa32119ced20987b81bd790303e7cb7 | [
"Apache-2.0"
] | 2 | 2021-11-16T08:47:36.000Z | 2022-01-06T12:49:15.000Z | from data_stack.dataset.iterator import DatasetIteratorIF
from ml_gym.data_handling.postprocessors.postprocessor import PostProcessorIf
from typing import List
class PostProcessedDatasetIterator(DatasetIteratorIF):
def __init__(self, dataset_iterator: DatasetIteratorIF, post_processor: PostProcessorIf):
self._dataset_iterator = dataset_iterator
self._post_processor = post_processor
def __len__(self):
return len(self._dataset_iterator)
def __getitem__(self, index: int):
return self._post_processor.postprocess(self._dataset_iterator[index])
@property
def underlying_iterators(self) -> List[DatasetIteratorIF]:
return [self._dataset_iterator]
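
# Minimal usage sketch (the iterator and post-processor names are illustrative):
#
#   it = PostProcessedDatasetIterator(raw_iterator, my_post_processor)
#   sample = it[0]  # returns my_post_processor.postprocess(raw_iterator[0])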
| 33.904762 | 93 | 0.779494 | 75 | 712 | 6.973333 | 0.413333 | 0.200765 | 0.181644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15309 | 712 | 20 | 94 | 35.6 | 0.86733 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.214286 | 0.214286 | 0.785714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
15137fab8f3c73c64287cd516aeb302bf0aef0f3 | 155 | py | Python | backend/app/__main__.py | garet2gis/video-processing-system | 68dd83722119cc8c0104567c0b466a4ebae2315f | [
"Apache-2.0"
] | null | null | null | backend/app/__main__.py | garet2gis/video-processing-system | 68dd83722119cc8c0104567c0b466a4ebae2315f | [
"Apache-2.0"
] | null | null | null | backend/app/__main__.py | garet2gis/video-processing-system | 68dd83722119cc8c0104567c0b466a4ebae2315f | [
"Apache-2.0"
] | null | null | null | import uvicorn
from .settings import settings
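# settings.server_host, settings.server_port and settings.is_debug come from the
# imported settings object; passing reload=settings.is_debug makes uvicorn
# auto-restart on code changes only when debug mode is enabled.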
uvicorn.run("app.app:app", host=settings.server_host, port=settings.server_port, reload=settings.is_debug)
| 25.833333 | 106 | 0.812903 | 23 | 155 | 5.347826 | 0.521739 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077419 | 155 | 5 | 107 | 31 | 0.86014 | 0 | 0 | 0 | 0 | 0 | 0.070968 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
151d84ab0d1925d222e5a2e5c24e14531bf32d8d | 52 | py | Python | Solution1/py_code/my_env/my_env/envs/__init__.py | Delapre/baby-steps-of-rl-ja | a8bd7477d8e191d219a73d5c865bfa943c6b0a70 | [
"Apache-2.0"
] | null | null | null | Solution1/py_code/my_env/my_env/envs/__init__.py | Delapre/baby-steps-of-rl-ja | a8bd7477d8e191d219a73d5c865bfa943c6b0a70 | [
"Apache-2.0"
] | null | null | null | Solution1/py_code/my_env/my_env/envs/__init__.py | Delapre/baby-steps-of-rl-ja | a8bd7477d8e191d219a73d5c865bfa943c6b0a70 | [
"Apache-2.0"
] | null | null | null | from my_env.envs.env_centrifuge import CentrifugeEnv | 52 | 52 | 0.903846 | 8 | 52 | 5.625 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057692 | 52 | 1 | 52 | 52 | 0.918367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
12769c5da45fe3bbe1f88d7937d6d66ddc5b6c6e | 33 | py | Python | wsu/tools/simx/simx/python/simx/act/__init__.py | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | 1 | 2020-02-28T20:35:09.000Z | 2020-02-28T20:35:09.000Z | wsu/tools/simx/simx/python/simx/act/__init__.py | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | wsu/tools/simx/simx/python/simx/act/__init__.py | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | from errors import LoadException
| 16.5 | 32 | 0.878788 | 4 | 33 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
12c7aa5ac7ad5f4c63516ad90ef7d8baeafe7b5d | 272 | py | Python | aiostrike/utils/jsonutil.py | Ali-TM-original/aiostrike | 13094ba945ce525e93f5bd4e571e53f4e246dc36 | [
"MIT"
] | null | null | null | aiostrike/utils/jsonutil.py | Ali-TM-original/aiostrike | 13094ba945ce525e93f5bd4e571e53f4e246dc36 | [
"MIT"
] | null | null | null | aiostrike/utils/jsonutil.py | Ali-TM-original/aiostrike | 13094ba945ce525e93f5bd4e571e53f4e246dc36 | [
"MIT"
] | null | null | null | import json
class JsonUtils:
    def __init__(self, obj_to_jsonify):
        self.Object_To_jsonify = obj_to_jsonify

    def cvt_json(self):
        # Parse the stored JSON string into a Python object
        return json.loads(self.Object_To_jsonify)

    def parse_entities(self):
        # Placeholder: entity parsing is not implemented yet
        pass
| 19.428571 | 56 | 0.683824 | 37 | 272 | 4.594595 | 0.486486 | 0.211765 | 0.141176 | 0.223529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 272 | 13 | 57 | 20.923077 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.111111 | 0.111111 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
12cb171c41ef33518b0144d16d783120d1c8eb9f | 371 | py | Python | binarysearch.io/largest_element_in_rotated_array.py | mishrakeshav/Competitive-Programming | b25dcfeec0fb9a9c71bf3a05644b619f4ca83dd2 | [
"MIT"
] | 2 | 2020-06-25T21:10:32.000Z | 2020-12-10T06:53:45.000Z | binarysearch.io/largest_element_in_rotated_array.py | mishrakeshav/Competitive-Programming | b25dcfeec0fb9a9c71bf3a05644b619f4ca83dd2 | [
"MIT"
] | null | null | null | binarysearch.io/largest_element_in_rotated_array.py | mishrakeshav/Competitive-Programming | b25dcfeec0fb9a9c71bf3a05644b619f4ca83dd2 | [
"MIT"
] | 3 | 2020-05-15T14:17:09.000Z | 2021-07-25T13:18:20.000Z | class Solution:
def solve(self, arr):
# Write your code here
n = len(arr)
if len(arr) == 1:
return arr[0]
for i in range(n-1):
if arr[i+1] < arr[i]:
return arr[i]
return arr[-1]
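    # A possible O(log n) alternative (a sketch, not part of the original
    # solution; assumes distinct values): binary-search for the rotation pivot,
    # then return the element just before the minimum.
    def solve_log_n(self, arr):
        lo, hi = 0, len(arr) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if arr[mid] > arr[hi]:
                lo = mid + 1  # minimum lies strictly right of mid
            else:
                hi = mid  # minimum is at mid or to its left
        # lo now indexes the minimum; its cyclic predecessor is the maximum
        return arr[lo - 1]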
| 18.55 | 33 | 0.309973 | 38 | 371 | 3.026316 | 0.526316 | 0.234783 | 0.173913 | 0.226087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034014 | 0.603774 | 371 | 19 | 34 | 19.526316 | 0.748299 | 0.053908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
42a430fb322e84ae18d14568cdb87661c03c501d | 454 | py | Python | test/simple_source/def/02_closure.py | gauravssnl/python-uncompyle6 | 136f42a610c0701e0770c1c278efd1107b1c6ed1 | [
"MIT"
] | 1 | 2021-03-24T11:54:03.000Z | 2021-03-24T11:54:03.000Z | test/simple_source/def/02_closure.py | gauravssnl/python-uncompyle6 | 136f42a610c0701e0770c1c278efd1107b1c6ed1 | [
"MIT"
] | null | null | null | test/simple_source/def/02_closure.py | gauravssnl/python-uncompyle6 | 136f42a610c0701e0770c1c278efd1107b1c6ed1 | [
"MIT"
] | null | null | null | # Tests
# Python3:
# funcdef ::= mkfunc designator
# designator ::= STORE_DEREF
# mkfunc ::= load_closure BUILD_TUPLE_1 LOAD_CONST LOAD_CONST MAKE_CLOSURE_0
# load_closure ::= LOAD_CLOSURE
#
# Python2:
# funcdef ::= mkfunc designator
# designator ::= STORE_DEREF
# mkfunc ::= load_closure LOAD_CONST MAKE_CLOSURE_0
# load_closure ::= LOAD_CLOSURE
def bug():
def convert(node):
return node and convert(node.left)
return
| 22.7 | 78 | 0.696035 | 56 | 454 | 5.339286 | 0.410714 | 0.220736 | 0.150502 | 0.220736 | 0.688963 | 0.688963 | 0.688963 | 0.688963 | 0.688963 | 0 | 0 | 0.013774 | 0.200441 | 454 | 19 | 79 | 23.894737 | 0.809917 | 0.744493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
35ee25fdd4082750ee20046029e87d488683294c | 131 | py | Python | python_stock/7/Python7-7.py | hollo08/stockstrategy | 09ece2457d653439a8ace80a6ac7dd4da9813846 | [
"MIT"
] | 1 | 2020-09-18T15:08:46.000Z | 2020-09-18T15:08:46.000Z | python_stock/7/Python7-7.py | hollo08/stockstrategy | 09ece2457d653439a8ace80a6ac7dd4da9813846 | [
"MIT"
] | null | null | null | python_stock/7/Python7-7.py | hollo08/stockstrategy | 09ece2457d653439a8ace80a6ac7dd4da9813846 | [
"MIT"
] | 2 | 2022-01-23T03:26:22.000Z | 2022-03-28T16:21:01.000Z | #导入包
import mypack
print("-------------------")
print("包的说明性文档:",mypack.__doc__)
print("包的类型:",type(mypack))
print("包的位置:",mypack)
| 18.714286 | 32 | 0.603053 | 15 | 131 | 5 | 0.6 | 0.293333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053435 | 131 | 6 | 33 | 21.833333 | 0.604839 | 0.022901 | 0 | 0 | 0 | 0 | 0.291339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.8 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
35eeca1632f4ac528bf66999cf6a34b9f10ee59e | 15 | py | Python | test1/testy.py | Pots8/funtutorial | 6d049e744d9db4a3f88122a7b773973f2cade5ac | [
"MIT"
] | null | null | null | test1/testy.py | Pots8/funtutorial | 6d049e744d9db4a3f88122a7b773973f2cade5ac | [
"MIT"
] | null | null | null | test1/testy.py | Pots8/funtutorial | 6d049e744d9db4a3f88122a7b773973f2cade5ac | [
"MIT"
] | null | null | null | print ("hello") | 15 | 15 | 0.666667 | 2 | 15 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 15 | 1 | 15 | 15 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
c431e293e431aea52d62147d437dfdd2df38c0aa | 101,693 | py | Python | source/code/tag_tamer.py | Frankovich73/tag-tamer | d30983493191ec7b402d542be5f80b6d07645444 | [
"MIT",
"MIT-0"
] | null | null | null | source/code/tag_tamer.py | Frankovich73/tag-tamer | d30983493191ec7b402d542be5f80b6d07645444 | [
"MIT",
"MIT-0"
] | null | null | null | source/code/tag_tamer.py | Frankovich73/tag-tamer | d30983493191ec7b402d542be5f80b6d07645444 | [
"MIT",
"MIT-0"
] | 1 | 2021-09-17T17:42:49.000Z | 2021-09-17T17:42:49.000Z | #!/usr/bin/env python3
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
# Tag Tamer Admin UI
# Import administrative functions
from admin import date_time_now, execution_status, assume_role_multi_account
# Import Collections module to manipulate dictionaries
import collections
from collections import defaultdict, OrderedDict
# Import getter functions for Amazon Cognito
from cognito_idp import get_user_group_arns, get_user_credentials
# Import getter/setter module for AWS Config
import config
from config import config
# Import getter/setter module for AWS resources & tags
import resources_tags
from resources_tags import resources_tags
# Import getter/setter module for AWS IAM
import iam
from iam import roles
# Import getter module for TagOption Groups
import get_tag_groups
from get_tag_groups import get_tag_groups
# Import setter module for TagOption Groups
import set_tag_groups
from set_tag_groups import set_tag_group
# Import getter/setter module for AWS Service Catalog
import service_catalog
from service_catalog import service_catalog
# Import getter/setter module for AWS SSM Parameter Store
import ssm_parameter_store
from ssm_parameter_store import ssm_parameter_store
# Import AWS STS functions
# from sts import get_session_credentials
# Import Tag Tamer utility functions
from utilities import get_aws_regions, get_resource_type_unit, verify_jwt
# Import flask framework module & classes to build API's
import flask, flask_wtf
from flask import (
Flask,
flash,
jsonify,
make_response,
redirect,
render_template,
request,
send_file,
url_for,
)
# Use flask_awscognito version 1.2.8 or higher with Tag Tamer
from flask_awscognito import AWSCognitoAuthentication
# from flask_jwt_extended import JWTManager, jwt_required, create_access_token, get_jwt_identity, set_access_cookies, unset_jwt_cookies
from flask_wtf.csrf import CSRFProtect
# Import JSON parser
import json
# Import logging module
import logging
# Import Regex
import re
# import OS module
import os
# import systems library
import sys
# import epoch time method
from time import time
# Read in Tag Tamer version
tag_tamer_version_file = open("tag_tamer_version.json", "rt")
tag_tamer_version = json.load(tag_tamer_version_file)
# Read in Tag Tamer solution parameters
tag_tamer_parameters_file = open("tag_tamer_parameters.json", "rt")
tag_tamer_parameters = json.load(tag_tamer_parameters_file)
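# tag_tamer_parameters.json supplies the keys read below; an illustrative shape
# (keys taken from the reads in this file, values are examples only):
#   {"logging_level": "INFO", "log_file_location": "/var/log/tag_tamer.log",
#    "base_region": "us-east-1", "additional_regions": [],
#    "ssm_parameter_path": "/tag-tamer"}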
# logLevel options are DEBUG, INFO, WARNING, ERROR or CRITICAL
# Set logLevel specified in tag_tamer_parameters.json parameters file
if re.search(
"DEBUG|INFO|WARNING|ERROR|CRITICAL",
tag_tamer_parameters.get("logging_level").upper(),
):
logLevel = tag_tamer_parameters.get("logging_level").upper()
else:
logLevel = "INFO"
logging.basicConfig(
filename=tag_tamer_parameters.get("log_file_location"),
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %I:%M:%S %p",
)
# Set the base/root logging level for tag_tamer.py & all imported modules
logging.getLogger().setLevel(logLevel)
log = logging.getLogger("tag_tamer_main")
# Raise logging level for flask_wtf.csrf
logging.getLogger("flask_wtf.csrf").setLevel("WARNING")
# Raise logging level for WSGI tool kit "werkzeug" that's German for "tool"
logging.getLogger("werkzeug").setLevel("ERROR")
# Get user-specified AWS regions
all_current_regions = get_aws_regions()
additional_regions = tag_tamer_parameters.get("additional_regions")
validated_regions = list()
if all_current_regions:
if tag_tamer_parameters.get("base_region") in all_current_regions:
validated_regions.append(tag_tamer_parameters.get("base_region"))
else:
log.info(
"Terminating Tag Tamer application on {} because the base AWS region is not available. Please check the tag_tamer_parameters.json file.".format(
date_time_now()
)
)
sys.exit()
if additional_regions:
for reg in additional_regions:
if reg in all_current_regions:
validated_regions.append(reg)
else:
log.info(
"Terminating Tag Tamer application on {} because no AWS regions are available. Please check the tag_tamer_parameters.json file.".format(
date_time_now()
)
)
sys.exit()
log.debug('The validated AWS regions are: "%s"', validated_regions)
# Get AWS Service parameters from AWS SSM Parameter Store
ssm_ps = ssm_parameter_store(tag_tamer_parameters.get("base_region"))
# Get SSM Parameters names & values
ssm_parameters = ssm_ps.ssm_get_parameter_details(
tag_tamer_parameters.get("ssm_parameter_path")
)
if not ssm_parameters:
log.info(
"Terminating Tag Tamer application on {} because no AWS SSM Parameters are available. Please check the tag_tamer_parameters.json file & the AWS SSM Parameter Store.".format(
date_time_now()
)
)
sys.exit()
# Multi-account feature - get any additional AWS accounts to manage using Tag Tamer
multi_accounts = list()
if ssm_parameters.get("multi-accounts"):
raw_multi_accounts = list()
raw_multi_accounts = ssm_parameters.get("multi-accounts").split(",")
for account in raw_multi_accounts:
if re.search("\d{12}", account):
multi_accounts.append(account.strip(" "))
# Instantiate flask API application
app = Flask(__name__)
app.secret_key = os.urandom(16)
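# Note: a fresh random secret key is generated on every process start, so any
# Flask session state does not survive an application restart.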
try:
app.config["AWS_DEFAULT_REGION"] = ssm_parameters.get(
"cognito-default-region-value"
)
app.config["AWS_COGNITO_DOMAIN"] = ssm_parameters.get("cognito-domain-value")
app.config["AWS_COGNITO_USER_POOL_ID"] = ssm_parameters.get(
"cognito-user-pool-id-value"
)
app.config["AWS_COGNITO_USER_POOL_CLIENT_ID"] = ssm_parameters.get(
"cognito-app-client-id"
)
app.config["AWS_COGNITO_USER_POOL_CLIENT_SECRET"] = ssm_parameters.get(
"cognito-app-client-secret-value"
)
app.config["AWS_COGNITO_REDIRECT_URL"] = ssm_parameters.get(
"cognito-redirect-url-value"
)
app.config["JWT_TOKEN_LOCATION"] = ssm_parameters.get("jwt-token-location")
app.config["JWT_ACCESS_COOKIE_NAME"] = ssm_parameters.get("jwt-access-cookie-name")
app.config["JWT_COOKIE_SECURE"] = ssm_parameters.get("jwt-cookie-secure")
app.config["JWT_COOKIE_CSRF_PROTECT"] = ssm_parameters.get(
"jwt-cookie-csrf-protect"
)
except Exception:
log.info(
"Terminating Tag Tamer application on {} because some required AWS SSM Parameters are undefined. Please check the tag_tamer_parameters.json file & the AWS SSM Parameter Store.".format(
date_time_now()
)
)
sys.exit()
csrf = CSRFProtect(app)
csrf.init_app(app)
aws_auth = AWSCognitoAuthentication(app)
# Get the user's session credentials based on username passed in JWT
def get_user_session_credentials(cognito_id_token):
user_session_credentials = get_user_credentials(
cognito_id_token,
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-identity-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
return user_session_credentials
# Verify user's email & source IP address
def get_user_email_ip(route):
access_token = False
id_token = False
access_token = request.cookies.get("access_token")
id_token = request.cookies.get("id_token")
if access_token and id_token:
id_token_claims = dict()
id_token_claims = verify_jwt(
ssm_parameters.get("cognito-default-region-value"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-app-client-id"),
"id_token",
id_token,
)
if id_token_claims.get("email"):
user_email = id_token_claims.get("email")
else:
user_email = False
else:
user_email = False
if request.headers.get("X-Forwarded-For"):
source = request.headers.get("X-Forwarded-For")
elif request.remote_addr:
source = request.remote_addr
else:
source = False
return user_email, source
# Allow users to sign into Tag Tamer via an Amazon Cognito User Pool
@app.route("/log-in")
@app.route("/sign-in")
def sign_in():
return redirect(aws_auth.get_sign_in_url())
# Redirect the user to the Tag Tamer home page after successful AWS Cognito login
@app.route("/aws_cognito_redirect", methods=["GET"])
def aws_cognito_redirect():
access_token = False
id_token = False
access_token, id_token = aws_auth.get_tokens(request.args)
if access_token and id_token:
response = make_response(render_template("redirect.html"))
log.debug(
"function: {} - Received the request arguments".format(
sys._getframe().f_code.co_name
)
)
response.set_cookie(
"access_token",
value=access_token,
secure=True,
httponly=True,
samesite="Lax",
)
response.set_cookie(
"id_token", value=id_token, secure=True, httponly=True, samesite="Lax"
)
return response, 200
else:
return redirect(url_for("sign_in"))
# Get response delivers Tag Tamer home page
@app.route("/index.html", methods=["GET"])
@app.route("/index.htm", methods=["GET"])
@app.route("/index", methods=["GET"])
@app.route("/", methods=["GET"])
@aws_auth.authentication_required
def index():
claims = aws_auth.claims
user_email, user_source = get_user_email_ip(request)
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
# Grant access if session time not expired & user assigned to Cognito user pool group
if (
time() < claims.get("exp")
and user_email
and user_source
and cognito_user_group_arn
):
log.info(
'Successful login. User "{}" with email: "{}" signed in on {} from location: "{}"'.format(
claims.get("username"), user_email, date_time_now(), user_source
)
)
return render_template(
"index.html",
user_name=claims.get("username"),
version=tag_tamer_version.get("tag_tamer_version_number"),
)
else:
log.info(
'Failed login attempt. User "{}" with email: "{}" attempted to sign in on {} from location: "{}"'.format(
claims.get("username"), user_email, date_time_now(), user_source
)
)
return redirect("/sign-in")
# Get response delivers Tag Tamer actions page showing user choices as clickable buttons
@app.route("/actions", methods=["GET"])
@aws_auth.authentication_required
def actions():
return render_template("actions.html")
"""
# NO LONGER USED - select_resource_type() function/route used instead
# Get response delivers HTML UI to select AWS resource types that Tag Tamer will find
@app.route('/find-tags', methods=['GET'])
@aws_auth.authentication_required
def find_tags():
user_email, user_source = get_user_email_ip(request)
if user_email:
log.info("\"{}\" invoked \"{}\" on {} from location: \"{}\" - SUCCESS".format(user_email, sys._getframe().f_code.co_name, date_time_now(), user_source))
return render_template('find-tags.html')
else:
log.error("Unknown user attempted to invoke \"{}\" on {} from location: \"{}\" - FAILURE".format(sys._getframe().f_code.co_name, date_time_now(), user_source))
flash('You are not authorized to view these resources', 'danger')
return render_template('blank.html')
"""
# Post action initiates tag finding for user-selected AWS resource types
# Pass Get response to found-tags HTML UI
@app.route("/found-tags", methods=["POST"])
@aws_auth.authentication_required
def found_tags():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
if request.form.get("resource_type"):
filter_elements = dict()
if request.form.get("tag_key1"):
filter_elements["tag_key1"] = request.form.get("tag_key1")
if request.form.get("tag_value1"):
filter_elements["tag_value1"] = request.form.get("tag_value1")
if request.form.get("tag_key2"):
filter_elements["tag_key2"] = request.form.get("tag_key2")
if request.form.get("tag_value2"):
filter_elements["tag_value2"] = request.form.get("tag_value2")
if request.form.get("conjunction"):
filter_elements["conjunction"] = request.form.get("conjunction")
resource_type, unit = get_resource_type_unit(
request.form.get("resource_type")
)
log.debug(
"function: {} - Received the request arguments".format(
sys._getframe().f_code.co_name
)
)
all_execution_status_alert_levels = list()
all_sorted_tagged_inventory = dict()
claims = aws_auth.claims
my_regions = list()
# Multi-region resource tag getter
def _get_multi_region_tags(my_regions, account_number, file_open_method):
for region in my_regions:
inventory = resources_tags(resource_type, unit, region)
chosen_resources = OrderedDict()
(
chosen_resources,
resources_execution_status,
) = inventory.get_resources(filter_elements, **session_credentials)
session_credentials["region"] = region
session_credentials["chosen_resources"] = chosen_resources
(
sorted_tagged_inventory,
sorted_tagged_inventory_execution_status,
) = inventory.get_resources_tags(**session_credentials)
region_sorted_tagged_inventory[region] = sorted_tagged_inventory
inventory.download_csv(
file_open_method,
account_number,
region,
sorted_tagged_inventory,
claims.get("username"),
)
                    # Switch file_open_method to append ("a") so the remaining
                    # validated regions extend the same CSV file
                    file_open_method = "a"
all_execution_status_alert_levels.append(
sorted_tagged_inventory_execution_status.get("alert_level")
)
region_execution_status_message = (
str(account_number)
+ " - "
+ str(region)
+ " - "
+ str(
sorted_tagged_inventory_execution_status.get(
"status_message"
)
)
)
flash(
region_execution_status_message,
sorted_tagged_inventory_execution_status.get("alert_level"),
)
return region_sorted_tagged_inventory
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
if resource_type == "s3":
my_regions.append(tag_tamer_parameters.get("base_region"))
else:
my_regions = validated_regions
base_account_number = re.search("\d{12}", cognito_user_group_arn)
file_open_method = "w"
region_sorted_tagged_inventory = dict()
# Get base account's tags from all regions
region_sorted_tagged_inventory = _get_multi_region_tags(
my_regions, base_account_number.group(), file_open_method
)
all_sorted_tagged_inventory[
base_account_number.group()
] = region_sorted_tagged_inventory
# Get additional multi accounts' tags from all regions
for account_number in multi_accounts:
# Swap account number in Cognito user's assigned IAM role ARN
# Cognito user's assumed IAM role name must be identical in all AWS accounts
account_role_arn = re.sub(
"\d{12}", account_number, cognito_user_group_arn
)
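# Illustrative swap (hypothetical account IDs and role name):
#   arn:aws:iam::111111111111:role/TagTamerUserRole
#   -> arn:aws:iam::222222222222:role/TagTamerUserRole
# Only the 12-digit account ID changes; the role name must be identical
# in every target account.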
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
file_open_method = "a"
region_sorted_tagged_inventory = dict()
# Get the multi account's tags from all regions
region_sorted_tagged_inventory = _get_multi_region_tags(
my_regions, account_number, file_open_method
)
all_sorted_tagged_inventory[
account_number
] = region_sorted_tagged_inventory
# Execution status will be "success" if at least one AWS region contained the tag-filtered resources
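# e.g. alert levels ["warning", "success", "warning"] count as an overall
# success because at least one region returned matching resources.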
if "success" in all_execution_status_alert_levels:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"found-tags.html", all_inventory=all_sorted_tagged_inventory
)
elif "warning" in all_execution_status_alert_levels:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template("blank.html")
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
"An error occurred. Please contact your Tag Tamer administrator for assistance.",
"danger",
)
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Download CSV file of found tags
@app.route("/download", methods=["GET"])
@aws_auth.authentication_required
def download_file():
user_email, user_source = get_user_email_ip(request)
if user_email:
log.info(
'"{}" invoked "{}" on {} from location: "{}" - SUCCESS'.format(
user_email, sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
claims = aws_auth.claims
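# NOTE: the path below is built from the verified Cognito username claim;
# if that guarantee ever changes, sanitize the value first (for example
# with werkzeug.utils.secure_filename) before using it in a filesystem path.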
download_file = "./downloads/" + claims.get("username") + "-download.csv"
return send_file(download_file, as_attachment=True)
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to download these resources", "danger")
return render_template("blank.html")
# Delivers HTML UI to select AWS resource types to manage Tag Groups for
@app.route("/type-to-tag-group", methods=["GET"])
@aws_auth.authentication_required
def type_to_tag_group():
user_email, user_source = get_user_email_ip(request)
if user_email:
log.info(
'"{}" invoked "{}" on {} from location: "{}" - SUCCESS'.format(
user_email, sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
return render_template("type-to-tag-group.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# POST route that renders the Tag Group attributes UI
@app.route("/get-tag-group-names", methods=["POST"])
@aws_auth.authentication_required
def get_tag_group_names():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
all_tag_groups = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
tag_group_names,
tag_group_names_execution_status,
) = all_tag_groups.get_tag_group_names()
flash(
tag_group_names_execution_status["status_message"],
tag_group_names_execution_status["alert_level"],
)
if tag_group_names_execution_status.get("alert_level") == "success":
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
resource_type, _ = get_resource_type_unit(request.form.get("resource_type"))
return render_template(
"display-tag-groups.html",
inventory=tag_group_names,
resource_type=resource_type,
)
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# POST method to display the edit UI for the chosen Tag Group
@app.route("/edit-tag-group", methods=["POST"])
@aws_auth.authentication_required
def edit_tag_group():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
resource_type, unit = get_resource_type_unit(request.form.get("resource_type"))
all_execution_status_alert_levels = list()
all_sorted_tag_values_inventory = list()
claims = aws_auth.claims
my_regions = list()
# Multi-region resource tag values getter
def _get_multi_region_tag_values(
my_regions, account_number, all_sorted_tag_values_inventory
):
for region in my_regions:
inventory = resources_tags(resource_type, unit, region)
(
sorted_tag_values_inventory,
sorted_tag_values_inventory_execution_status,
) = inventory.get_tag_values(**session_credentials)
all_sorted_tag_values_inventory = (
all_sorted_tag_values_inventory + sorted_tag_values_inventory
)
all_execution_status_alert_levels.append(
sorted_tag_values_inventory_execution_status.get("alert_level")
)
region_execution_status_message = (
str(account_number)
+ " - "
+ str(region)
+ " - "
+ str(
sorted_tag_values_inventory_execution_status.get(
"status_message"
)
)
)
flash(
region_execution_status_message,
sorted_tag_values_inventory_execution_status.get("alert_level"),
)
return all_sorted_tag_values_inventory
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
if resource_type == "s3":
my_regions.append(tag_tamer_parameters.get("base_region"))
else:
my_regions = validated_regions
# Get Tag Tamer base account's tag values from all regions
base_account_number = re.search("\d{12}", cognito_user_group_arn)
all_sorted_tag_values_inventory = _get_multi_region_tag_values(
my_regions, base_account_number.group(), all_sorted_tag_values_inventory
)
# Get additional multi accounts' tag values from all regions
for account_number in multi_accounts:
# Swap account number in Cognito user's assigned IAM role ARN
# Cognito user's assumed IAM role name must be identical in all AWS accounts
account_role_arn = re.sub("\d{12}", account_number, cognito_user_group_arn)
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
all_sorted_tag_values_inventory = _get_multi_region_tag_values(
my_regions, account_number, all_sorted_tag_values_inventory
)
# Remove duplicate tag values & sort
all_sorted_tag_values_inventory = list(set(all_sorted_tag_values_inventory))
all_sorted_tag_values_inventory.sort(key=str.lower)
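# Illustrative: ["Prod", "dev", "Prod"] -> {"Prod", "dev"} -> ["dev", "Prod"];
# set() removes exact (case-sensitive) duplicates and the sort is
# case-insensitive, so differently-cased variants are kept side by side.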
# Execution status will be "success" if at least one AWS region contains the tag-filtered resources
if (
"success" in all_execution_status_alert_levels
or "warning" in all_execution_status_alert_levels
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
# First conditional checks if the user wants to edit an existing Tag Group
if request.form.get("tag_group_name"):
selected_tag_group_name = request.form.get("tag_group_name")
tag_group = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
tag_group_key_values,
tag_group_key_values_execution_status,
) = tag_group.get_tag_group_key_values(selected_tag_group_name)
if (
tag_group_key_values_execution_status.get("alert_level")
== "success"
):
return render_template(
"edit-tag-group.html",
resource_type=resource_type,
selected_tag_group_name=selected_tag_group_name,
selected_tag_group_attributes=tag_group_key_values,
selected_resource_type_tag_values_inventory=all_sorted_tag_values_inventory,
)
else:
flash(
tag_group_key_values_execution_status["status_message"],
tag_group_key_values_execution_status["alert_level"],
)
return render_template("blank.html")
# Second conditional checks if the user is creating a brand-new Tag Group
elif request.form.get("new_tag_group_name") and re.search(
"[\w\-\.\:\/\=\+\@ ]{1,128}", request.form.get("new_tag_group_name")
):
selected_tag_group_name = request.form.get("new_tag_group_name")
tag_group_key_values = dict()
return render_template(
"edit-tag-group.html",
resource_type=resource_type,
selected_tag_group_name=selected_tag_group_name,
selected_tag_group_attributes=tag_group_key_values,
selected_resource_type_tag_values_inventory=all_sorted_tag_values_inventory,
)
# If the user neither selects an existing Tag Group nor enters a new
# Tag Group name, reload this route until valid input is given
else:
return render_template("type-to-tag-group.html")
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
"An error occurred. Please contact your Tag Tamer administrator for assistance.",
"danger",
)
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# POST method to add or update a Tag Group
@app.route("/add-update-tag-group", methods=["POST"])
@aws_auth.authentication_required
def add_update_tag_group():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
if (
request.form.get("new_tag_group_name")
and re.search(
"[\w\-\.\:\/\=\+\@ ]{1,128}", request.form.get("new_tag_group_name")
)
and request.form.get("new_tag_group_key_name")
and re.search(
"[\w\-\.\:\/\=\+\@ ]{1,128}", request.form.get("new_tag_group_key_name")
)
):
tag_group_name = request.form.get("new_tag_group_name")
tag_group_key_name = request.form.get("new_tag_group_key_name")
tag_group_action = "create"
elif (
request.form.get("selected_tag_group_name")
and re.search(
"[\w\-\.\:\/\=\+\@ ]{1,128}",
request.form.get("selected_tag_group_name"),
)
and request.form.get("selected_tag_group_key_name")
and re.search(
"[\w\-\.\:\/\=\+\@ ]{1,128}",
request.form.get("selected_tag_group_key_name"),
)
):
tag_group_name = request.form.get("selected_tag_group_name")
tag_group_key_name = request.form.get("selected_tag_group_key_name")
tag_group_action = "update"
else:
return render_template("type-to-tag-group.html")
tag_group_value_options = []
form_contents = request.form.to_dict()
for key, value in form_contents.items():
if value == "checked" and re.search("[\w\-\.\:\/\=\+\@ ]{1,256}", key):
tag_group_value_options.append(key)
if request.form.get("new_tag_group_values"):
approved_new_tag_group_values = list()
new_tag_group_values = request.form.get("new_tag_group_values").split(",")
for value in new_tag_group_values:
core_value = value.strip(" ")
if re.search("[\w\-\.\:\/\=\+\@ ]{1,256}", core_value):
approved_new_tag_group_values.append(core_value)
tag_group_value_options.extend(approved_new_tag_group_values)
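# Illustrative parse (hypothetical input): "dev, test ,prod" ->
# ["dev", "test", "prod"] after split(",") and strip(" "); candidates that
# fail the character check are silently dropped.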
tag_group = set_tag_group(
tag_tamer_parameters.get("base_region"), **session_credentials
)
if tag_group_action == "create":
tag_group_execution_status = tag_group.create_tag_group(
tag_group_name, tag_group_key_name, tag_group_value_options
)
else:
tag_group_execution_status = tag_group.update_tag_group(
tag_group_name, tag_group_key_name, tag_group_value_options
)
if tag_group_execution_status.get("alert_level") == "success":
tag_groups = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
tag_group_key_values,
tag_group_key_values_execution_status,
) = tag_groups.get_tag_group_key_values(tag_group_name)
if tag_group_key_values_execution_status.get("alert_level") == "success":
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
resource_type, unit = get_resource_type_unit(
request.form.get("resource_type")
)
all_execution_status_alert_levels = list()
all_sorted_tag_values_inventory = list()
claims = aws_auth.claims
my_regions = list()
# Multi-region resource tag values getter
def _get_multi_region_tag_values(
my_regions, account_number, all_sorted_tag_values_inventory
):
for region in my_regions:
inventory = resources_tags(resource_type, unit, region)
(
sorted_tag_values_inventory,
sorted_tag_values_inventory_execution_status,
) = inventory.get_tag_values(**session_credentials)
all_sorted_tag_values_inventory = (
all_sorted_tag_values_inventory
+ sorted_tag_values_inventory
)
all_execution_status_alert_levels.append(
sorted_tag_values_inventory_execution_status.get(
"alert_level"
)
)
region_execution_status_message = (
str(account_number)
+ " - "
+ str(region)
+ " - "
+ str(
sorted_tag_values_inventory_execution_status.get(
"status_message"
)
)
)
flash(
region_execution_status_message,
sorted_tag_values_inventory_execution_status.get(
"alert_level"
),
)
return all_sorted_tag_values_inventory
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
if resource_type == "s3":
my_regions.append(tag_tamer_parameters.get("base_region"))
else:
my_regions = validated_regions
# Get Tag Tamer base account's tag values from all regions
base_account_number = re.search("\d{12}", cognito_user_group_arn)
all_sorted_tag_values_inventory = _get_multi_region_tag_values(
my_regions,
base_account_number.group(),
all_sorted_tag_values_inventory,
)
# Get additional multi accounts' tag values from all regions
for account_number in multi_accounts:
# Swap account number in Cognito user's assigned IAM role ARN
# Cognito user's assumed IAM role name must be identical in all AWS accounts
account_role_arn = re.sub(
"\d{12}", account_number, cognito_user_group_arn
)
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
all_sorted_tag_values_inventory = _get_multi_region_tag_values(
my_regions, account_number, all_sorted_tag_values_inventory
)
# Remove duplicate tag values & sort
all_sorted_tag_values_inventory = list(
set(all_sorted_tag_values_inventory)
)
all_sorted_tag_values_inventory.sort(key=str.lower)
return render_template(
"edit-tag-group.html",
resource_type=resource_type,
selected_tag_group_name=tag_group_name,
selected_tag_group_attributes=tag_group_key_values,
selected_resource_type_tag_values_inventory=all_sorted_tag_values_inventory,
)
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
tag_group_key_values_execution_status["status_message"],
tag_group_key_values_execution_status["alert_level"],
)
return render_template("blank.html")
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
tag_group_execution_status.get("status_message"),
tag_group_execution_status.get("alert_level"),
)
return render_template("blank.html")
# Delivers HTML UI to select AWS resource type to tag using Tag Groups
@app.route("/select-resource-type", methods=["POST"])
@aws_auth.authentication_required
def select_resource_type():
user_email, user_source = get_user_email_ip(request)
if user_email:
log.info(
'"{}" invoked "{}" on {} from location: "{}" - SUCCESS'.format(
user_email, sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
next_route = request.form.get("next_route")
if not next_route:
next_route = "found_tags"
return render_template(
"select-resource-type.html", destination_route=next_route
)
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Let the user search existing tags, then tag the matching existing resources
@app.route("/tag-filter", methods=["POST"])
@aws_auth.authentication_required
def tag_filter():
user_email, user_source = get_user_email_ip(request)
if user_email:
log.info(
'"{}" invoked "{}" on {} from location: "{}" - SUCCESS'.format(
user_email, sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
if request.form.get("resource_type") and request.form.get("destination_route"):
return render_template(
"search-tag-resources-container.html",
destination_route=request.form.get("destination_route"),
resource_type=request.form.get("resource_type"),
)
else:
return render_template(
"select-resource-type.html", destination_route="tag_display"
)
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Enter existing tag keys & values to search
@app.route("/tag-based-search", methods=["GET"])
@aws_auth.authentication_required
def tag_based_search():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
if request.args.get("resource_type"):
resource_type, unit = get_resource_type_unit(
request.args.get("resource_type")
)
all_execution_status_alert_levels = list()
all_selected_tag_keys = list()
all_selected_tag_values = list()
claims = aws_auth.claims
my_regions = list()
# Multi-region resource tag keys & values getter
def _get_multi_region_tag_keys_values(
my_regions,
account_number,
all_selected_tag_keys,
all_selected_tag_values,
):
for region in my_regions:
inventory = resources_tags(resource_type, unit, region)
(
selected_tag_keys,
execution_status_tag_keys,
) = inventory.get_tag_keys(**session_credentials)
all_selected_tag_keys = all_selected_tag_keys + selected_tag_keys
all_execution_status_alert_levels.append(
execution_status_tag_keys.get("alert_level")
)
(
selected_tag_values,
execution_status_tag_values,
) = inventory.get_tag_values(**session_credentials)
all_selected_tag_values = (
all_selected_tag_values + selected_tag_values
)
all_execution_status_alert_levels.append(
execution_status_tag_values.get("alert_level")
)
return all_selected_tag_keys, all_selected_tag_values
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
if resource_type == "s3":
my_regions.append(tag_tamer_parameters.get("base_region"))
else:
my_regions = validated_regions
# Get Tag Tamer base account's tag keys & values from all regions
base_account_number = re.search("\d{12}", cognito_user_group_arn)
(
all_selected_tag_keys,
all_selected_tag_values,
) = _get_multi_region_tag_keys_values(
my_regions,
base_account_number.group(),
all_selected_tag_keys,
all_selected_tag_values,
)
# Get additional multi accounts' tag keys & values from all regions
for account_number in multi_accounts:
# Swap account number in Cognito user's assigned IAM role ARN
# Cognito user's assumed IAM role name must be identical in all AWS accounts
account_role_arn = re.sub(
"\d{12}", account_number, cognito_user_group_arn
)
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
(
all_selected_tag_keys,
all_selected_tag_values,
) = _get_multi_region_tag_keys_values(
my_regions,
account_number,
all_selected_tag_keys,
all_selected_tag_values,
)
# Remove duplicate tag values & sort
all_selected_tag_keys = list(set(all_selected_tag_keys))
all_selected_tag_keys.sort(key=str.lower)
all_selected_tag_values = list(set(all_selected_tag_values))
all_selected_tag_values.sort(key=str.lower)
if "success" in all_execution_status_alert_levels:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"tag-search.html",
destination_route=request.args.get("destination_route"),
resource_type=request.args.get("resource_type"),
tag_keys=all_selected_tag_keys,
tag_values=all_selected_tag_values,
)
elif "warning" in all_execution_status_alert_levels:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("No tag keys or values found!", "warning")
return render_template("blank.html")
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
"An error occurred. Please contact your Tag Tamer administrator for assistance.",
"danger",
)
return render_template("blank.html")
else:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"select-resource-type.html", destination_route="tag_filter"
)
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Delivers HTML UI to assign tags from Tag Groups to chosen AWS resources
@app.route("/tag_resources", methods=["GET", "POST"])
@aws_auth.authentication_required
def tag_resources():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
if request.form.get("resource_type"):
filter_elements = dict()
if request.form.get("tag_key1"):
filter_elements["tag_key1"] = request.form.get("tag_key1")
if request.form.get("tag_value1"):
filter_elements["tag_value1"] = request.form.get("tag_value1")
if request.form.get("tag_key2"):
filter_elements["tag_key2"] = request.form.get("tag_key2")
if request.form.get("tag_value2"):
filter_elements["tag_value2"] = request.form.get("tag_value2")
if request.form.get("conjunction"):
filter_elements["conjunction"] = request.form.get("conjunction")
resource_type, unit = get_resource_type_unit(
request.form.get("resource_type")
)
all_execution_status_alert_levels = list()
all_chosen_resources = dict()
claims = aws_auth.claims
my_regions = list()
# Multi-region getter for resource tags that match the user-selected filter elements
def _get_multi_region_matching_resources(my_regions, account_number):
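# Returns a dict keyed by region; illustrative (hypothetical IDs, assuming
# (resource_id, name) tuples): {"us-east-1": [("i-0abc123", "web-server")]}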
account_chosen_resources = dict()
for region in my_regions:
inventory = resources_tags(resource_type, unit, region)
# get_resources() returns chosen_resources as a list of tuples;
# the OrderedDict initialization below is only a placeholder
chosen_resources = OrderedDict()
(
chosen_resources,
resources_execution_status,
) = inventory.get_resources(filter_elements, **session_credentials)
# Only include AWS regions with matching filtered resources
if chosen_resources[0][0] != "No matching resources found":
account_chosen_resources[region] = chosen_resources
all_execution_status_alert_levels.append(
resources_execution_status.get("alert_level")
)
# region_execution_status_message = str(account_number) + " - " + str(region) + " - " + str(resources_execution_status.get('status_message'))
# flash(region_execution_status_message, resources_execution_status.get('alert_level'))
return account_chosen_resources
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
if resource_type == "s3":
my_regions.append(tag_tamer_parameters.get("base_region"))
else:
my_regions = validated_regions
base_account_number = re.search("\d{12}", cognito_user_group_arn)
all_chosen_resources[
base_account_number.group()
] = _get_multi_region_matching_resources(
my_regions, base_account_number.group()
)
# Get additional multi accounts' tags from all regions
for account_number in multi_accounts:
# Swap account number in Cognito user's assigned IAM role ARN
# Cognito user's assumed IAM role name must be identical in all AWS accounts
account_role_arn = re.sub(
"\d{12}", account_number, cognito_user_group_arn
)
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
all_chosen_resources[
account_number
] = _get_multi_region_matching_resources(my_regions, account_number)
tag_group_inventory = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
tag_groups_all_info,
tag_groups_execution_status,
) = tag_group_inventory.get_all_tag_groups_key_values(
tag_tamer_parameters.get("base_region"), **session_credentials
)
if (
"success" in all_execution_status_alert_levels
and tag_groups_execution_status.get("alert_level") == "success"
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"tag-resources.html",
resource_type=resource_type,
all_resource_inventory=all_chosen_resources,
tag_groups_all_info=tag_groups_all_info,
filter_elements=filter_elements,
)
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to modify these resources", "danger")
return render_template("blank.html")
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
"An error occurred. Please contact your Tag Tamer administrator for assistance.",
"danger",
)
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Applies selected tags to selected resources then displays resources with updated tags
@app.route("/apply-tags-to-resources", methods=["POST"])
@aws_auth.authentication_required
def apply_tags_to_resources():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
all_execution_status_alert_levels = list()
all_resources_to_tag = dict()
all_updated_sorted_tagged_inventory = dict()
chosen_tags = list()
filter_elements = dict()
claims = aws_auth.claims
resource_type, unit = get_resource_type_unit(request.form.get("resource_type"))
form_contents = request.form.to_dict()
if request.form.get("tag_key1"):
filter_elements["tag_key1"] = request.form.get("tag_key1")
form_contents.pop("tag_key1")
if request.form.get("tag_value1"):
filter_elements["tag_value1"] = request.form.get("tag_value1")
form_contents.pop("tag_value1")
if request.form.get("tag_key2"):
filter_elements["tag_key2"] = request.form.get("tag_key2")
form_contents.pop("tag_key2")
if request.form.get("tag_value2"):
filter_elements["tag_value2"] = request.form.get("tag_value2")
form_contents.pop("tag_value2")
if request.form.get("conjunction"):
filter_elements["conjunction"] = request.form.get("conjunction")
form_contents.pop("conjunction")
form_contents.pop("csrf_token")
form_contents.pop("resource_type")
for key, value in form_contents.items():
if re.search("^resource", key):
resource_metadata = list()
# resource_metadata is a list of "resource",account_number,region,resource_id
resource_metadata = key.split(",")
# Create nested dictionaries of resources to tag using account & region as the dictionary keys
if not all_resources_to_tag.get(resource_metadata[1]):
all_resources_to_tag[resource_metadata[1]] = dict()
if not all_resources_to_tag[resource_metadata[1]].get(
resource_metadata[2]
):
all_resources_to_tag[resource_metadata[1]][
resource_metadata[2]
] = list()
all_resources_to_tag[resource_metadata[1]][resource_metadata[2]].append(
resource_metadata[3]
)
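# Resulting nested shape (hypothetical values):
# {"123456789012": {"us-east-1": ["i-0abc123", "i-0def456"]}}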
# After processing a key that begins with "resource", skip to the next key:value pair in this loop
continue
# Only user-selected Tag Groups will have values
if value:
tag_kv = dict()
tag_kv["Key"] = key
tag_kv["Value"] = value
chosen_tags.append(tag_kv)
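# chosen_tags uses the standard AWS tag dict shape, e.g. (hypothetical):
# [{"Key": "Environment", "Value": "dev"}, {"Key": "Owner", "Value": "alice"}]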
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
# Assign user-selected Tag Groups to user-selected resources in accounts & regions
for account_number, region_resources_to_tag in all_resources_to_tag.items():
account_role_arn = re.sub("\d{12}", account_number, cognito_user_group_arn)
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
region_sorted_tagged_inventory = dict()
for region, resources_to_tag in region_resources_to_tag.items():
chosen_resources_to_tag = resources_tags(
resource_type, unit, region
)
set_resources_tags_execution_status = (
chosen_resources_to_tag.set_resources_tags(
resources_to_tag, chosen_tags, **session_credentials
)
)
all_execution_status_alert_levels.append(
set_resources_tags_execution_status.get("alert_level")
)
region_execution_status_message = (
str(account_number)
+ " - "
+ str(region)
+ " - "
+ str(set_resources_tags_execution_status.get("status_message"))
)
flash(
region_execution_status_message,
set_resources_tags_execution_status.get("alert_level"),
)
# Get updated resources & tags after setting user-selected Tag Options
inventory = resources_tags(resource_type, unit, region)
chosen_resources = OrderedDict()
(
chosen_resources,
resources_execution_status,
) = inventory.get_resources(filter_elements, **session_credentials)
session_credentials["region"] = region
session_credentials["chosen_resources"] = chosen_resources
(
sorted_tagged_inventory,
sorted_tagged_inventory_execution_status,
) = inventory.get_resources_tags(**session_credentials)
region_sorted_tagged_inventory[region] = sorted_tagged_inventory
all_updated_sorted_tagged_inventory[
account_number
] = region_sorted_tagged_inventory
if all_updated_sorted_tagged_inventory:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"updated-tags.html",
all_updated_inventory=all_updated_sorted_tagged_inventory,
)
else:
log.warning(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Retrieves AWS Service Catalog products & Tag Groups
@app.route("/get-service-catalog", methods=["GET"])
@aws_auth.authentication_required
def get_service_catalog():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
# Get the Tag Group names & associated tag keys
tag_group_inventory = dict()
tag_groups = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
tag_group_inventory,
tag_groups_execution_status,
) = tag_groups.get_tag_group_names()
# Get the Service Catalog product templates
sc_product_ids_names = dict()
sc_products = service_catalog(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
sc_product_ids_names,
sc_product_ids_names_execution_status,
) = sc_products.get_sc_product_templates()
if (
sc_product_ids_names_execution_status.get("alert_level") == "success"
and tag_groups_execution_status.get("alert_level") == "success"
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"update-service-catalog.html",
tag_group_inventory=tag_group_inventory,
sc_product_ids_names=sc_product_ids_names,
)
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to modify these resources", "danger")
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Updates AWS Service Catalog product templates with TagOptions using Tag Groups
@app.route("/set-service-catalog", methods=["POST"])
@aws_auth.authentication_required
def set_service_catalog():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
if request.form.getlist("tag_groups_to_assign") and request.form.getlist(
"chosen_sc_product_template_ids"
):
selected_tag_groups = list()
selected_tag_groups = request.form.getlist("tag_groups_to_assign")
sc_product_templates = list()
sc_product_templates = request.form.getlist(
"chosen_sc_product_template_ids"
)
# Get the Service Catalog product templates
sc_product_ids_names = dict()
sc_products = service_catalog(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
sc_product_ids_names,
sc_product_ids_names_execution_status,
) = sc_products.get_sc_product_templates()
# Assign every tag in selected Tag Groups to selected SC product templates
updated_product_temp_tagoptions = defaultdict(list)
sc_response = dict()
for sc_prod_template_id in sc_product_templates:
for tag_group_name in selected_tag_groups:
sc_response.clear()
(
sc_response,
sc_response_execution_status,
) = sc_products.assign_tg_sc_product_template(
tag_group_name, sc_prod_template_id, **session_credentials
)
updated_product_temp_tagoptions[sc_prod_template_id].append(
sc_response
)
if (
sc_response_execution_status.get("alert_level") == "success"
and sc_product_ids_names_execution_status.get("alert_level")
== "success"
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("TagOptions update succeeded!", "success")
return render_template(
"updated-service-catalog.html",
sc_product_ids_names=sc_product_ids_names,
updated_product_temp_tagoptions=updated_product_temp_tagoptions,
)
elif (
sc_response_execution_status.get("alert_level") == "warning"
and sc_product_ids_names_execution_status.get("alert_level")
== "success"
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
sc_response_execution_status["status_message"],
sc_response_execution_status["alert_level"],
)
return render_template(
"updated-service-catalog.html",
sc_product_ids_names=sc_product_ids_names,
updated_product_temp_tagoptions=updated_product_temp_tagoptions,
)
# for the case of Boto3 errors & unauthorized users
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to modify these resources", "danger")
return render_template("blank.html")
else:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
"Please select at least one Tag Group and Service Catalog product.",
"warning",
)
return redirect(url_for("get_service_catalog"))
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Retrieves AWS Config Rules & Tag Groups
@app.route("/find-config-rules", methods=["GET"])
@aws_auth.authentication_required
def find_config_rules():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
all_chosen_resources = dict()
all_execution_status_alert_levels = list()
claims = aws_auth.claims
# Get the Tag Group names & associated tag keys
tag_group_inventory = dict()
tag_groups = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
tag_group_inventory,
tag_groups_execution_status,
) = tag_groups.get_tag_group_names()
# Multi-region config rule getter
def _get_multi_region_matching_config_rules(account_number):
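# Returns a dict keyed by region; illustrative (hypothetical values,
# assuming get_config_rules_ids_names() maps rule IDs to rule names):
# {"us-east-1": {"config-rule-abc123": "required-tags"}}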
account_chosen_resources = dict()
for region in validated_regions:
# Get the AWS Config Rules
config_rules_ids_names = dict()
config_rules = config(region, **session_credentials)
(
config_rules_ids_names,
config_rules_execution_status,
) = config_rules.get_config_rules_ids_names()
# Only include AWS regions with matching filtered resources
if config_rules_ids_names:
account_chosen_resources[region] = config_rules_ids_names
region_execution_status_message = (
str(account_number)
+ " - "
+ str(region)
+ " - "
+ str(config_rules_execution_status.get("status_message"))
)
flash(
region_execution_status_message,
config_rules_execution_status.get("alert_level"),
)
all_execution_status_alert_levels.append(
config_rules_execution_status.get("alert_level")
)
return account_chosen_resources
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
base_account_number = re.search("\d{12}", cognito_user_group_arn)
all_chosen_resources[
base_account_number.group()
] = _get_multi_region_matching_config_rules(base_account_number.group())
# Get additional multi accounts' tags from all regions
for account_number in multi_accounts:
# Swap account number in Cognito user's assigned IAM role ARN
# Cognito user's assumed IAM role name must be identical in all AWS accounts
account_role_arn = re.sub("\d{12}", account_number, cognito_user_group_arn)
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
all_chosen_resources[
account_number
] = _get_multi_region_matching_config_rules(account_number)
if (
"success" in all_execution_status_alert_levels
and tag_groups_execution_status.get("alert_level") == "success"
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"find-config-rules.html",
tag_group_inventory=tag_group_inventory,
all_resource_inventory=all_chosen_resources,
)
elif (
"warning" in all_execution_status_alert_levels
and tag_groups_execution_status.get("alert_level") == "success"
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
# flash(config_rules_execution_status['status_message'], config_rules_execution_status['alert_level'])
return render_template(
"find-config-rules.html",
tag_group_inventory=tag_group_inventory,
all_resource_inventory=all_chosen_resources,
)
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to modify these resources", "danger")
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Updates AWS Config's required-tags rule using Tag Groups
@app.route("/update-config-rules", methods=["POST"])
@aws_auth.authentication_required
def set_config_rules():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
if request.form.getlist("tag_groups_to_assign"):
selected_tag_groups = list()
selected_tag_groups = request.form.getlist("tag_groups_to_assign")
selected_config_rules = dict()
all_execution_status_alert_levels = list()
all_updated_config_rules = dict()
claims = aws_auth.claims
form_contents = request.form.to_dict()
form_contents.pop("csrf_token")
form_contents.pop("tag_groups_to_assign")
# Ignore the form value. Only need the key/name
for rule_id, rule_name in form_contents.items():
if re.search("^resource", rule_id):
resource_metadata = list()
# resource_metadata is a list of "resource",account_number,region,resource_id
resource_metadata = rule_id.split(",")
# Create nested dictionaries of resources to tag using account & region as the dictionary keys
if not selected_config_rules.get(resource_metadata[1]):
selected_config_rules[resource_metadata[1]] = dict()
if not selected_config_rules[resource_metadata[1]].get(
resource_metadata[2]
):
selected_config_rules[resource_metadata[1]][
resource_metadata[2]
] = dict()
selected_config_rules[resource_metadata[1]][resource_metadata[2]][
resource_metadata[3]
] = rule_name
tag_groups = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
tag_group_key_values = dict()
tag_groups_keys_values = dict()
tag_count = 1
for group in selected_tag_groups:
# A Required_Tags Config Rule instance accepts up to 6 Tag Groups
if tag_count < 7:
(
tag_group_key_values,
key_values_execution_status,
) = tag_groups.get_tag_group_key_values(group)
key_name = "tag{}Key".format(tag_count)
value_name = "tag{}Value".format(tag_count)
tag_groups_keys_values[key_name] = tag_group_key_values[
"tag_group_key"
]
tag_group_values_string = ",".join(
tag_group_key_values["tag_group_values"]
)
tag_groups_keys_values[value_name] = tag_group_values_string
tag_count += 1
else:
flash(
"AWS Config allows 6 Tag Groups per rule. The first 6 selected Tag Groups are applied",
"warning",
)
break
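# Illustrative tag_groups_keys_values shape for the required-tags rule's
# InputParameters (hypothetical Tag Group contents):
# {"tag1Key": "Environment", "tag1Value": "dev,test,prod",
#  "tag2Key": "Owner", "tag2Value": "alice,bob"}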
# Get the user's assigned Cognito user pool group's IAM role ARN
cognito_user_group_arn = get_user_group_arns(
claims.get("username"),
ssm_parameters.get("cognito-user-pool-id-value"),
ssm_parameters.get("cognito-default-region-value"),
)
# Assign user-selected Tag Groups to user-selected resources in accounts & regions
for (
account_number,
region_resources_to_tag,
) in selected_config_rules.items():
account_role_arn = re.sub(
"\d{12}", account_number, cognito_user_group_arn
)
kwargs = dict()
kwargs["account_role_arn"] = account_role_arn
kwargs["user_email"] = user_email
kwargs["user_id"] = claims.get("username")
kwargs["user_source"] = user_source
kwargs["session_credentials"] = session_credentials
multi_account_role_session = assume_role_multi_account(**kwargs)
session_credentials[
"multi_account_role_session"
] = multi_account_role_session
if session_credentials.get("multi_account_role_session"):
for region, resources_to_tag in region_resources_to_tag.items():
config_rules = config(region, **session_credentials)
for (
config_rule_id,
config_rule_name,
) in resources_to_tag.items():
set_rules_execution_status = config_rules.set_config_rules(
tag_groups_keys_values, config_rule_id, config_rule_name
)
(
updated_config_rule,
get_rule_execution_status,
) = config_rules.get_config_rule(config_rule_name)
all_execution_status_alert_levels.append(
set_rules_execution_status.get("alert_level")
)
# region_execution_status_message = str(region) + " - " + str(set_rules_execution_status.get('status_message'))
# flash(region_execution_status_message, set_rules_execution_status.get('alert_level'))
if not all_updated_config_rules.get(account_number):
all_updated_config_rules[account_number] = dict()
if not all_updated_config_rules[account_number].get(region):
all_updated_config_rules[account_number][
region
] = list()
all_updated_config_rules[account_number][region].append(
updated_config_rule
)
if "success" in all_execution_status_alert_levels:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"updated-config-rules.html",
all_resource_inventory=all_updated_config_rules,
)
elif "warning" in all_execution_status_alert_levels:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash(
"Please select at least one Tag Group and Config rule.", "warning"
)
return redirect(url_for("find_config_rules"))
# for the case of Boto3 errors & unauthorized users
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
else:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("Please select at least one Tag Group and Config rule.", "warning")
return redirect(url_for("find_config_rules"))
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Retrieves AWS IAM Roles & Tag Groups
@app.route("/select-roles-tags", methods=["GET"])
@aws_auth.authentication_required
def select_roles_tags():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
tag_group_inventory = get_tag_groups(
tag_tamer_parameters.get("base_region"), **session_credentials
)
(
tag_groups_all_info,
tag_groups_all_execution_status,
) = tag_group_inventory.get_all_tag_groups_key_values(
tag_tamer_parameters.get("base_region"), **session_credentials
)
iam_roles = roles(
tag_tamer_parameters.get("base_region"), **session_credentials
)
# The initial Tag Tamer release retrieves AWS SSO roles only
path_prefix = "/aws-reserved/sso.amazonaws.com/"
roles_inventory, roles_execution_status = iam_roles.get_roles(path_prefix)
# Notify the user based on their permission to access IAM Roles
flash(
roles_execution_status["status_message"],
roles_execution_status["alert_level"],
)
if (
roles_execution_status.get("alert_level") == "success"
and tag_groups_all_execution_status.get("alert_level") == "success"
):
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return render_template(
"tag-roles.html",
roles_inventory=roles_inventory,
tag_groups_all_info=tag_groups_all_info,
)
# for the case of Boto3 errors & unauthorized users
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
# Assigns selected tags to roles for tagging newly created AWS resources
@app.route("/set-roles-tags", methods=["POST"])
@aws_auth.authentication_required
def set_roles_tags():
user_email, user_source = get_user_email_ip(request)
session_credentials = get_user_session_credentials(request.cookies.get("id_token"))
if user_email and session_credentials.get("AccessKeyId"):
if request.form.get("roles_to_tag"):
role_name = request.form.get("roles_to_tag")
form_contents = request.form.to_dict()
form_contents.pop("roles_to_tag")
form_contents.pop("csrf_token")
chosen_tags = list()
for key, value in form_contents.items():
if value:
tag_kv = dict()
tag_kv["Key"] = key
tag_kv["Value"] = value
chosen_tags.append(tag_kv)
role_to_tag = roles(
tag_tamer_parameters.get("base_region"), **session_credentials
)
set_role_tags_execution_status = role_to_tag.set_role_tags(
role_name, chosen_tags
)
flash(
set_role_tags_execution_status["status_message"],
set_role_tags_execution_status["alert_level"],
)
if set_role_tags_execution_status.get("alert_level") == "success":
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
return redirect(url_for("select_roles_tags"))
# for the case of Boto3 errors & unauthorized users
else:
log.error(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - FAILURE'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
else:
log.info(
'"{}" invoked "{}" on {} from location: "{}" using AWSAuth access key id: {} - SUCCESS'.format(
user_email,
sys._getframe().f_code.co_name,
date_time_now(),
user_source,
session_credentials["AccessKeyId"],
)
)
flash("Please select at least one Tag Group and IAM SSO Role.", "warning")
return redirect(url_for("select_roles_tags"))
else:
log.error(
'Unknown user attempted to invoke "{}" on {} from location: "{}" - FAILURE'.format(
sys._getframe().f_code.co_name, date_time_now(), user_source
)
)
flash("You are not authorized to view these resources", "danger")
return render_template("blank.html")
@app.route("/logout", methods=["GET"])
@aws_auth.authentication_required
def logout():
claims = aws_auth.claims
user_email, user_source = get_user_email_ip(request)
log.info(
'Successful logout. User "{}" with email "{}" signed out on {} from location "{}"'.format(
claims.get("username"), user_email, date_time_now(), user_source
)
)
response = make_response(render_template("logout.html"))
response.delete_cookie("access_token")
response.delete_cookie("id_token")
response.delete_cookie("session")
return response, 200
| 45.17681 | 194 | 0.553558 | 10,330 | 101,693 | 5.108809 | 0.047241 | 0.049797 | 0.014174 | 0.017907 | 0.812123 | 0.778262 | 0.74675 | 0.705973 | 0.679425 | 0.653144 | 0 | 0.002248 | 0.361411 | 101,693 | 2,250 | 195 | 45.196889 | 0.81041 | 0.080379 | 0 | 0.611282 | 0 | 0.002051 | 0.170311 | 0.025995 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0 | 0.014872 | 0.001026 | 0.070256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
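The roles helper class used in the route above is not included in this fragment. As a rough sketch only (an assumption, not the project's actual implementation), its set_role_tags method presumably wraps boto3's IAM tagging call along these lines:

# Hypothetical sketch: the real Tag Tamer "roles" class is not shown in this row.
import boto3


def set_role_tags(role_name, chosen_tags, region="us-east-1"):
    # chosen_tags is a list of {"Key": ..., "Value": ...} dicts, as built in the route above.
    iam = boto3.client("iam", region_name=region)
    try:
        # tag_role overwrites the values of any keys that already exist on the role.
        iam.tag_role(RoleName=role_name, Tags=chosen_tags)
        return {"status_message": "Tags applied to " + role_name, "alert_level": "success"}
    except iam.exceptions.NoSuchEntityException:
        return {"status_message": "Role " + role_name + " not found", "alert_level": "danger"}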
c4516b1f3b90f5cdace751537f1fba7b359d005e | 69 | py | Python | __init__.py | xshaokun/skpy | d7e33c80f9741234bb98670bf8845fc1a92cdce5 | [
"MIT"
] | null | null | null | __init__.py | xshaokun/skpy | d7e33c80f9741234bb98670bf8845fc1a92cdce5 | [
"MIT"
] | 2 | 2021-09-06T12:32:50.000Z | 2021-09-07T03:54:17.000Z | __init__.py | xshaokun/skpy | d7e33c80f9741234bb98670bf8845fc1a92cdce5 | [
"MIT"
] | null | null | null | from skpy.utilities import tools as tls
from . import astroeqs as eqs | 34.5 | 39 | 0.811594 | 12 | 69 | 4.666667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 69 | 2 | 40 | 34.5 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c456ac645a32b787c7628d937220932f5aefa33a | 31 | py | Python | mypypackage/__init__.py | joaovicente/mypypackage | 722184149466555c0df055151264e1633151656e | [
"Apache-2.0"
] | null | null | null | mypypackage/__init__.py | joaovicente/mypypackage | 722184149466555c0df055151264e1633151656e | [
"Apache-2.0"
] | null | null | null | mypypackage/__init__.py | joaovicente/mypypackage | 722184149466555c0df055151264e1633151656e | [
"Apache-2.0"
] | null | null | null | from .api import hello, goodbye | 31 | 31 | 0.806452 | 5 | 31 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
c47ec27d0d3c83f0070ab0264d94846ade64fce8 | 143 | py | Python | examples/backend.py | capitalrx/locald | db3b8663aac481c26f58a52709985fa8a67216d3 | [
"MIT"
] | 5 | 2021-05-06T17:58:26.000Z | 2021-11-10T22:15:19.000Z | examples/backend.py | capitalrx/locald | db3b8663aac481c26f58a52709985fa8a67216d3 | [
"MIT"
] | null | null | null | examples/backend.py | capitalrx/locald | db3b8663aac481c26f58a52709985fa8a67216d3 | [
"MIT"
] | 4 | 2021-04-13T18:14:31.000Z | 2021-07-09T22:09:54.000Z | import sys
import time
while True:
    time.sleep(5)
    sys.stdout.write(str(time.time()))
    sys.stdout.write("\n")
    sys.stdout.flush()
| 15.888889 | 38 | 0.643357 | 22 | 143 | 4.181818 | 0.545455 | 0.293478 | 0.304348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008621 | 0.188811 | 143 | 8 | 39 | 17.875 | 0.784483 | 0 | 0 | 0 | 0 | 0 | 0.013986 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
c4840c24a530fcf748956e3c5aba9374275d3d33 | 663 | py | Python | colossalai/builder/__init__.py | RichardoLuo/ColossalAI | 797a9dc5a9e801d7499b8667c3ef039a38aa15ba | [
"Apache-2.0"
] | 1,630 | 2021-10-30T01:00:27.000Z | 2022-03-31T23:02:41.000Z | colossalai/builder/__init__.py | RichardoLuo/ColossalAI | 797a9dc5a9e801d7499b8667c3ef039a38aa15ba | [
"Apache-2.0"
] | 166 | 2021-10-30T01:03:01.000Z | 2022-03-31T14:19:07.000Z | colossalai/builder/__init__.py | RichardoLuo/ColossalAI | 797a9dc5a9e801d7499b8667c3ef039a38aa15ba | [
"Apache-2.0"
] | 253 | 2021-10-30T06:10:29.000Z | 2022-03-31T13:30:06.000Z | from .builder import (build_schedule, build_lr_scheduler, build_model,
                      build_optimizer, build_layer, build_loss, build_hooks,
                      build_dataset, build_transform, build_data_sampler,
                      build_gradient_handler, build_ophooks)
from .pipeline import build_pipeline_model, build_pipeline_model_from_cfg

__all__ = [
    'build_schedule', 'build_lr_scheduler', 'build_model', 'build_optimizer',
    'build_layer', 'build_loss', 'build_hooks', 'build_dataset',
    'build_transform', 'build_data_sampler', 'build_gradient_handler',
    'build_pipeline_model', 'build_pipeline_model_from_cfg', 'build_ophooks'
]
| 51 | 77 | 0.726998 | 77 | 663 | 5.662338 | 0.285714 | 0.091743 | 0.165138 | 0.091743 | 0.869266 | 0.869266 | 0.869266 | 0.869266 | 0.683486 | 0.683486 | 0 | 0 | 0.182504 | 663 | 12 | 78 | 55.25 | 0.804428 | 0 | 0 | 0 | 0 | 0 | 0.331825 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
671498b931f35a8b927939b7a8ff0c9a12fd2f9c | 86 | py | Python | tests/data/format/summary_splitter/max_summary_lines/max_lines_with_dot.py | DanielNoord/pydocstringformatter | a69302cee6bd32b9b5cc48912a47d0e8ad3f7abe | [
"MIT"
] | 4 | 2022-01-02T22:50:59.000Z | 2022-02-09T09:04:37.000Z | tests/data/format/summary_splitter/max_summary_lines/max_lines_with_dot.py | DanielNoord/pydocstringformatter | a69302cee6bd32b9b5cc48912a47d0e8ad3f7abe | [
"MIT"
] | 80 | 2022-01-02T09:02:50.000Z | 2022-03-30T13:34:10.000Z | tests/data/format/summary_splitter/max_summary_lines/max_lines_with_dot.py | DanielNoord/pydocstringformatter | a69302cee6bd32b9b5cc48912a47d0e8ad3f7abe | [
"MIT"
] | 2 | 2022-01-02T11:58:29.000Z | 2022-01-04T18:53:29.000Z | def func():
"""My long. summary
is way
too long.
Description
"""
| 10.75 | 23 | 0.5 | 10 | 86 | 4.3 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.372093 | 86 | 7 | 24 | 12.285714 | 0.796296 | 0.534884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | true | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
672fcd58821714ab30dbd0561abbe0a4f356dd13 | 379 | py | Python | provider/models.py | Unous1996/Pikachu-Housing | acd1f06ddc3b0e5b300ccd5b500e0c2bad5cd1af | [
"Apache-2.0"
] | null | null | null | provider/models.py | Unous1996/Pikachu-Housing | acd1f06ddc3b0e5b300ccd5b500e0c2bad5cd1af | [
"Apache-2.0"
] | null | null | null | provider/models.py | Unous1996/Pikachu-Housing | acd1f06ddc3b0e5b300ccd5b500e0c2bad5cd1af | [
"Apache-2.0"
] | 1 | 2019-04-24T06:40:49.000Z | 2019-04-24T06:40:49.000Z | from django.db import models
# Create your models here.
class Provider(models.Model):
    name = models.CharField(max_length=32, )
    url = models.CharField(max_length=128, blank=True)
    email = models.CharField(max_length=128, blank=True)
    phone = models.CharField(max_length=128, blank=True)

    def __str__(self):
        return str(self.id) + ' (' + self.name + ')' | 29.153846 | 56 | 0.686016 | 52 | 379 | 4.846154 | 0.519231 | 0.238095 | 0.285714 | 0.380952 | 0.428571 | 0.428571 | 0.428571 | 0 | 0 | 0 | 0 | 0.035484 | 0.182058 | 379 | 13 | 57 | 29.153846 | 0.777419 | 0.063325 | 0 | 0 | 0 | 0 | 0.008475 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.125 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
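A minimal usage sketch for the model above, assuming a configured Django project with this app installed under the app label "provider" (the data values are illustrative):

from provider.models import Provider

p = Provider.objects.create(
    name="ACME Housing",              # hypothetical record
    url="https://acme.example",
    email="contact@acme.example",
)
print(p)  # the __str__ above renders e.g. "1 (ACME Housing)"
first_match = Provider.objects.filter(name__icontains="acme").first()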
673e86f57b1503374df4657fba6cea0e0f20d3e2 | 39 | py | Python | Chapter 02/ch2_7.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 02/ch2_7.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 02/ch2_7.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | print(round(45.236,0))
print(locals()) | 19.5 | 23 | 0.692308 | 7 | 39 | 3.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 0.051282 | 39 | 2 | 24 | 19.5 | 0.567568 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
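For reference, the two exercise statements above behave as follows (round() returns a float whenever ndigits is passed, even as 0):

print(round(45.236, 0))  # -> 45.0 (a float, because ndigits was supplied)
print(locals())          # at module scope, locals() is the module namespace, same as globals()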
679bc04a73f2c7209d18cc3a5fc7b06ef1a86037 | 90 | py | Python | flexget/ui/plugins/history/__init__.py | tvcsantos/Flexget | e08ce2957dd4f0668911d1e56347369939e4d0a5 | [
"MIT"
] | null | null | null | flexget/ui/plugins/history/__init__.py | tvcsantos/Flexget | e08ce2957dd4f0668911d1e56347369939e4d0a5 | [
"MIT"
] | 1 | 2018-06-09T18:03:35.000Z | 2018-06-09T18:03:35.000Z | flexget/ui/plugins/history/__init__.py | tvcsantos/Flexget | e08ce2957dd4f0668911d1e56347369939e4d0a5 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals, division, absolute_import
from .history import *
| 30 | 66 | 0.844444 | 11 | 90 | 6.363636 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 90 | 2 | 67 | 45 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
67bcfbb4b7208553dc6f5ad1712a4eb580b0b921 | 317 | py | Python | aries_cloudagent/protocols/problem_report/message_types.py | dhiway/aries-cloudagent-python | f08b7d65e404c8274c6606328c959a865a22c706 | [
"Apache-2.0"
] | 2 | 2020-02-26T14:22:44.000Z | 2021-05-06T20:13:36.000Z | aries_cloudagent/protocols/problem_report/message_types.py | dhiway/aries-cloudagent-python | f08b7d65e404c8274c6606328c959a865a22c706 | [
"Apache-2.0"
] | 5 | 2019-10-13T01:28:48.000Z | 2019-10-21T20:10:47.000Z | aries_cloudagent/protocols/problem_report/message_types.py | dhiway/aries-cloudagent-python | f08b7d65e404c8274c6606328c959a865a22c706 | [
"Apache-2.0"
] | 4 | 2019-07-09T20:41:03.000Z | 2021-06-06T10:45:23.000Z | """Message type identifiers for problem reports."""
PROTOCOL_URI = "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/notification/1.0"
PROBLEM_REPORT = f"{PROTOCOL_URI}/problem-report"
PROTOCOL_PACKAGE = "aries_cloudagent.protocols.problem_report"
MESSAGE_TYPES = {PROBLEM_REPORT: f"{PROTOCOL_PACKAGE}.message.ProblemReport"}
| 31.7 | 77 | 0.807571 | 38 | 317 | 6.5 | 0.605263 | 0.210526 | 0.11336 | 0.178138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006803 | 0.072555 | 317 | 9 | 78 | 35.222222 | 0.833333 | 0.141956 | 0 | 0 | 0 | 0 | 0.609023 | 0.609023 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
67c42408096d93122b3def25b939fd9449d8493a | 60 | py | Python | overhave/cli/db/__init__.py | TinkoffCreditSystems/overhave | b0ab705ef5c5c5a65fa0b14b173b64fd7310e187 | [
"Apache-2.0"
] | 33 | 2021-02-01T15:49:37.000Z | 2021-12-20T00:44:43.000Z | overhave/cli/db/__init__.py | TinkoffCreditSystems/overhave | b0ab705ef5c5c5a65fa0b14b173b64fd7310e187 | [
"Apache-2.0"
] | 46 | 2021-02-03T12:56:52.000Z | 2021-12-19T18:50:27.000Z | overhave/cli/db/__init__.py | TinkoffCreditSystems/overhave | b0ab705ef5c5c5a65fa0b14b173b64fd7310e187 | [
"Apache-2.0"
] | 1 | 2021-12-07T09:02:44.000Z | 2021-12-07T09:02:44.000Z | # flake8: noqa
from .group import db, set_config_to_context
| 20 | 44 | 0.8 | 10 | 60 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.133333 | 60 | 2 | 45 | 30 | 0.846154 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
67d5c8895d699be9d4ac866f66b0ab5c6add6f23 | 12 | py | Python | shared.py | MJChku/tssx | ea0fc583c6bcffc83b55f1320743ac511b2e709c | [
"MIT"
] | 2 | 2021-11-02T07:16:21.000Z | 2021-11-02T07:18:58.000Z | shared.py | MJChku/tssx | ea0fc583c6bcffc83b55f1320743ac511b2e709c | [
"MIT"
] | null | null | null | shared.py | MJChku/tssx | ea0fc583c6bcffc83b55f1320743ac511b2e709c | [
"MIT"
] | null | null | null | c = 1000000
| 6 | 11 | 0.666667 | 2 | 12 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.777778 | 0.25 | 12 | 1 | 12 | 12 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
db02b09df3f32ea1213fcb61ee54a72c69d25802 | 169 | py | Python | routines/trans/_routines/__init__.py | avcopan/mechdriver | 63069cfb21d6fdb6d0b091dfe204b1e09c8e10a1 | [
"Apache-2.0"
] | null | null | null | routines/trans/_routines/__init__.py | avcopan/mechdriver | 63069cfb21d6fdb6d0b091dfe204b1e09c8e10a1 | [
"Apache-2.0"
] | null | null | null | routines/trans/_routines/__init__.py | avcopan/mechdriver | 63069cfb21d6fdb6d0b091dfe204b1e09c8e10a1 | [
"Apache-2.0"
] | null | null | null | """
Routines for the OneDMin Python Driver
"""
from routines.trans._routines import lj
from routines.trans._routines import build
__all__ = [
    'lj',
    'build'
]
| 13 | 42 | 0.698225 | 21 | 169 | 5.333333 | 0.571429 | 0.214286 | 0.303571 | 0.446429 | 0.553571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195266 | 169 | 12 | 43 | 14.083333 | 0.823529 | 0.224852 | 0 | 0 | 0 | 0 | 0.056911 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
db045794dc5992752aeee1a4b7224b507f4b824a | 110 | py | Python | deepr/examples/movielens/readers/__init__.py | drohde/deepr | 672772ea3ce9cf391f9f8efc7ae9c9d438957817 | [
"Apache-2.0"
] | null | null | null | deepr/examples/movielens/readers/__init__.py | drohde/deepr | 672772ea3ce9cf391f9f8efc7ae9c9d438957817 | [
"Apache-2.0"
] | null | null | null | deepr/examples/movielens/readers/__init__.py | drohde/deepr | 672772ea3ce9cf391f9f8efc7ae9c9d438957817 | [
"Apache-2.0"
] | null | null | null | # pylint: disable=unused-import,missing-docstring
from deepr.examples.movielens.readers.csv import CSVReader
| 27.5 | 58 | 0.836364 | 14 | 110 | 6.571429 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 110 | 3 | 59 | 36.666667 | 0.901961 | 0.427273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
db1b0c98cfa878f36cd6f2f5ac1d408366747ad4 | 163 | py | Python | runner/ConnectionData.py | asyre/bachelor_degree | 274415a32dc642ecde53935d6b9c8bb23d21cf29 | [
"MIT"
] | null | null | null | runner/ConnectionData.py | asyre/bachelor_degree | 274415a32dc642ecde53935d6b9c8bb23d21cf29 | [
"MIT"
] | null | null | null | runner/ConnectionData.py | asyre/bachelor_degree | 274415a32dc642ecde53935d6b9c8bb23d21cf29 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from typing import Optional


@dataclass
class ConnectionData:
    username: str
    password: Optional[str] = None  # was annotated plain "str"; the None default needs Optional
    hostname: str = "localhost"
    port: int = 22
| 16.3 | 33 | 0.693252 | 18 | 163 | 6.277778 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016129 | 0.239264 | 163 | 9 | 34 | 18.111111 | 0.895161 | 0 | 0 | 0 | 0 | 0 | 0.055215 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.142857 | 0.142857 | 0 | 0.857143 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
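A short usage sketch for the dataclass above; the paramiko hand-off is an assumption, since the runner code that consumes ConnectionData is not part of this row:

local = ConnectionData(username="deploy")  # defaults: no password, localhost:22
remote = ConnectionData(username="deploy", password="s3cret",
                        hostname="build-01.internal", port=2222)  # hypothetical host

import paramiko  # assumed transport

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(remote.hostname, port=remote.port,
               username=remote.username, password=remote.password)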
c0002ed0a5ffc1557bdad452888f67e747a8a792 | 127 | py | Python | tests/test_fitting.py | elmanuelito/pyDatView | 3516ffaff601c122d62ffc94abd842958354ece8 | [
"MIT"
] | 50 | 2018-10-15T18:10:15.000Z | 2022-03-15T15:53:50.000Z | tests/test_fitting.py | elmanuelito/pyDatView | 3516ffaff601c122d62ffc94abd842958354ece8 | [
"MIT"
] | 99 | 2018-10-31T16:30:28.000Z | 2022-02-18T04:25:07.000Z | tests/test_fitting.py | elmanuelito/pyDatView | 3516ffaff601c122d62ffc94abd842958354ece8 | [
"MIT"
] | 20 | 2018-10-23T21:44:32.000Z | 2022-02-09T17:21:37.000Z | import unittest
import numpy as np
from pydatview.tools.curve_fitting import *
if __name__ == '__main__':
    unittest.main()
| 18.142857 | 43 | 0.755906 | 17 | 127 | 5.117647 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15748 | 127 | 6 | 44 | 21.166667 | 0.813084 | 0 | 0 | 0 | 0 | 0 | 0.062992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
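The skeleton above defines no test cases yet. A hypothetical case it could host (exercising plain numpy, since curve_fitting's public API is not visible in this row) would look like:

class TestLinearFit(unittest.TestCase):
    def test_polyfit_recovers_coefficients(self):
        x = np.linspace(0.0, 10.0, 50)
        y = 3.0 * x + 1.0                      # exact line, no noise
        slope, intercept = np.polyfit(x, y, deg=1)
        np.testing.assert_allclose([slope, intercept], [3.0, 1.0], atol=1e-10)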
c0153655a0428f65e44ec63e5d542a09ed79ba4c | 88 | py | Python | src/xyz/entities/__init__.py | justanr/xyz | 0096d77866498b6f569af57053d1c6a736a7eb1b | [
"MIT"
] | 1 | 2018-07-29T00:02:46.000Z | 2018-07-29T00:02:46.000Z | src/xyz/entities/__init__.py | justanr/xyz | 0096d77866498b6f569af57053d1c6a736a7eb1b | [
"MIT"
] | null | null | null | src/xyz/entities/__init__.py | justanr/xyz | 0096d77866498b6f569af57053d1c6a736a7eb1b | [
"MIT"
] | null | null | null | """
xyz.entities
~~~~~~~~~~~
"""
from .post import Post
from .user import User
| 11 | 22 | 0.534091 | 10 | 88 | 4.7 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238636 | 88 | 7 | 23 | 12.571429 | 0.701493 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |