hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e2f550e1be14fc4de3d3f2c4274e4cb59e59e02d | 3,550 | py | Python | lib/click-6.6/tests/test_formatting.py | brianrodri/google_appengine | ec4e7cdfa1afd99de23b0a32eb94563fe5e6ef43 | [
"Apache-2.0"
] | 26 | 2015-01-20T08:02:38.000Z | 2020-06-10T04:57:41.000Z | lib/click-6.6/tests/test_formatting.py | brianrodri/google_appengine | ec4e7cdfa1afd99de23b0a32eb94563fe5e6ef43 | [
"Apache-2.0"
] | 4 | 2016-02-28T05:53:54.000Z | 2017-01-03T07:39:50.000Z | lib/click-6.6/tests/test_formatting.py | brianrodri/google_appengine | ec4e7cdfa1afd99de23b0a32eb94563fe5e6ef43 | [
"Apache-2.0"
] | 13 | 2016-02-28T00:14:23.000Z | 2021-05-03T15:47:36.000Z | # -*- coding: utf-8 -*-
import click


def test_basic_functionality(runner):
    @click.command()
    def cli():
        """First paragraph.

        This is a very long second
        paragraph and not correctly
        wrapped but it will be rewrapped.

        \b
        This is
        a paragraph
        without rewrapping.

        \b
        1
         2
          3

        And this is a paragraph
        that will be rewrapped again.
        """

    result = runner.invoke(cli, ['--help'], terminal_width=60)
    assert not result.exception
    assert result.output.splitlines() == [
        'Usage: cli [OPTIONS]',
        '',
        '  First paragraph.',
        '',
        '  This is a very long second paragraph and not correctly',
        '  wrapped but it will be rewrapped.',
        '',
        '  This is',
        '  a paragraph',
        '  without rewrapping.',
        '',
        '  1',
        '   2',
        '    3',
        '',
        '  And this is a paragraph that will be rewrapped again.',
        '',
        'Options:',
        '  --help  Show this message and exit.',
    ]


def test_wrapping_long_options_strings(runner):
    @click.group()
    def cli():
        """Top level command
        """

    @cli.group()
    def a_very_long():
        """Second level
        """

    @a_very_long.command()
    @click.argument('first')
    @click.argument('second')
    @click.argument('third')
    @click.argument('fourth')
    @click.argument('fifth')
    @click.argument('sixth')
    def command():
        """A command.
        """

    # 54 is chosen as a length where the second line is one character
    # longer than the maximum length.
    result = runner.invoke(cli, ['a_very_long', 'command', '--help'],
                           terminal_width=54)
    assert not result.exception
    assert result.output.splitlines() == [
        'Usage: cli a_very_long command [OPTIONS] FIRST SECOND',
        '                               THIRD FOURTH FIFTH',
        '                               SIXTH',
        '',
        '  A command.',
        '',
        'Options:',
        '  --help  Show this message and exit.',
    ]


def test_wrapping_long_command_name(runner):
    @click.group()
    def cli():
        """Top level command
        """

    @cli.group()
    def a_very_very_very_long():
        """Second level
        """

    @a_very_very_very_long.command()
    @click.argument('first')
    @click.argument('second')
    @click.argument('third')
    @click.argument('fourth')
    @click.argument('fifth')
    @click.argument('sixth')
    def command():
        """A command.
        """

    result = runner.invoke(cli, ['a_very_very_very_long', 'command', '--help'],
                           terminal_width=54)
    assert not result.exception
    assert result.output.splitlines() == [
        'Usage: cli a_very_very_very_long command ',
        '           [OPTIONS] FIRST SECOND THIRD FOURTH FIFTH',
        '           SIXTH',
        '',
        '  A command.',
        '',
        'Options:',
        '  --help  Show this message and exit.',
    ]


def test_formatting_empty_help_lines(runner):
    @click.command()
    def cli():
        """Top level command

        """

    result = runner.invoke(cli, ['--help'])
    assert not result.exception
    assert result.output.splitlines() == [
        'Usage: cli [OPTIONS]',
        '',
        '  Top level command',
        '',
        '',
        '',
        'Options:',
        '  --help  Show this message and exit.',
    ]
| 23.986486 | 79 | 0.510704 | 362 | 3,550 | 4.895028 | 0.21547 | 0.088036 | 0.023702 | 0.036117 | 0.888826 | 0.822799 | 0.738149 | 0.72912 | 0.706546 | 0.706546 | 0 | 0.006573 | 0.357183 | 3,550 | 147 | 80 | 24.14966 | 0.769939 | 0.141127 | 0 | 0.659574 | 0 | 0 | 0.298513 | 0.014528 | 0 | 0 | 0 | 0 | 0.085106 | 1 | 0.12766 | false | 0 | 0.010638 | 0 | 0.138298 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
391766e831921eaed8b2ee02ec5c93bd98c4153f | 1,197 | py | Python | test/example_algorithm.py | fluxtransport/hazel2 | 4121df2fa6bf96bf8f193f287bbf11c70c5a519e | [
"MIT"
] | 17 | 2018-08-31T11:13:59.000Z | 2022-01-12T02:30:56.000Z | test/example_algorithm.py | fluxtransport/hazel2 | 4121df2fa6bf96bf8f193f287bbf11c70c5a519e | [
"MIT"
] | 26 | 2018-04-03T15:09:21.000Z | 2021-05-27T10:10:45.000Z | test/example_algorithm.py | fluxtransport/hazel2 | 4121df2fa6bf96bf8f193f287bbf11c70c5a519e | [
"MIT"
] | 3 | 2018-05-01T13:47:21.000Z | 2019-09-23T20:49:08.000Z | import numpy as np
import matplotlib.pyplot as pl
import hazel
import h5py
from scipy.optimize import minimize
import gc
# Test a single inversion in non-iterator mode
mod = hazel.Model('conf_single.ini', working_mode='inversion', verbose=2)
mod.read_observation()
mod.open_output()
mod.invert()
mod.write_output()
mod.close_output()
final = np.loadtxt('photospheres/model_photosphere.1d', skiprows=4)
start = np.loadtxt('photospheres/model_photosphere_200.1d', skiprows=4)
f = h5py.File('output.h5')
pl.plot(f['ph1']['T'][0,0,:], label='inverted')
pl.plot(final[:,1], label='target')
pl.plot(start[:,1], 'x', label='initial')
f.close()
pl.legend()
mod = hazel.Model('conf_single.ini', working_mode='inversion', verbose=2)
mod.read_observation()
mod.open_output()
mod.invert_external(minimize, method='Nelder-Mead')
mod.write_output()
mod.close_output()
final = np.loadtxt('photospheres/model_photosphere.1d', skiprows=4)
start = np.loadtxt('photospheres/model_photosphere_200.1d', skiprows=4)
f = h5py.File('output.h5')
pl.figure()
pl.plot(f['ph1']['T'][0,0,:], label='inverted')
pl.plot(final[:,1], label='target')
pl.plot(start[:,1], 'x', label='initial')
f.close()
pl.legend()
pl.show() | 27.204545 | 73 | 0.730994 | 187 | 1,197 | 4.57754 | 0.347594 | 0.042056 | 0.098131 | 0.121495 | 0.787383 | 0.787383 | 0.787383 | 0.787383 | 0.787383 | 0.787383 | 0 | 0.028131 | 0.079365 | 1,197 | 44 | 74 | 27.204545 | 0.748639 | 0.036759 | 0 | 0.722222 | 0 | 0 | 0.233507 | 0.121528 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
392a862eb4cf1206505179391fd2906fb0b5c6e2 | 32 | py | Python | lxdspawner/__init__.py | KeioAIConsortium/jupyterhub-lxd-spawner | 7c5123990cfa51fb1214b7d5c7eb882dda6a50c6 | [
"MIT"
] | 1 | 2021-11-25T01:17:51.000Z | 2021-11-25T01:17:51.000Z | lxdspawner/__init__.py | KeioAIConsortium/jupyterhub-lxd-spawner | 7c5123990cfa51fb1214b7d5c7eb882dda6a50c6 | [
"MIT"
] | 9 | 2020-05-29T05:36:28.000Z | 2021-03-13T09:21:26.000Z | lxdspawner/__init__.py | KeioAIConsortium/jupyterhub-lxd-spawner | 7c5123990cfa51fb1214b7d5c7eb882dda6a50c6 | [
"MIT"
] | null | null | null | from .spawner import LXDSpawner
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a92abfa6c5aa128578bbcefafd065a659dc1fe9 | 12,086 | py | Python | oscars_test.py | srio/shadow3-scripts | 10712641333c29ca9854e9cc60d86cb321f3762b | [
"MIT"
] | 1 | 2019-10-30T10:06:15.000Z | 2019-10-30T10:06:15.000Z | oscars_test.py | srio/shadow3-scripts | 10712641333c29ca9854e9cc60d86cb321f3762b | [
"MIT"
] | null | null | null | oscars_test.py | srio/shadow3-scripts | 10712641333c29ca9854e9cc60d86cb321f3762b | [
"MIT"
] | null | null | null |
# coding: utf-8

# Plots inline for notebook
#get_ipython().run_line_magic('matplotlib', 'inline')

# Import the OSCARS SR module
from srxraylib.plot.gol import set_qt

import oscars.sr
from oscars.plots_mpl import *
from oscars.parametric_surfaces import PSCylinder

import numpy


def undulator_spectrum():
    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear all existing fields and create an undulator field
    osr.clear_bfields()
    osr.add_bfield_undulator(bfield=[0, 1, 0], period=[0, 0, 0.050], nperiods=31)

    # Define simple electron beam
    osr.set_particle_beam(energy_GeV=3, x0=[0, 0, -1], current=0.5)

    # Define the start and stop times for the calculation
    osr.set_ctstartstop(0, 2)

    # Calculate spectrum at 30 [m]
    spectrum = osr.calculate_spectrum(obs=[0, 0, 30], energy_range_eV=[100, 2000])

    # Optionally import the plotting tools (matplotlib)
    # Plot spectrum
    plot_spectrum(spectrum)


def undulator_flux():
    # # coding: utf-8
    #
    # # Plots inline for notebook
    # # get_ipython().run_line_magic('matplotlib', 'inline')
    #
    # # Import the OSCARS SR module
    # import oscars.sr
    #
    # # Optionally import the plotting tools (matplotlib)
    # from oscars.plots_mpl import *

    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear all existing fields and create an undulator field
    osr.clear_bfields()
    osr.add_bfield_undulator(bfield=[0, 1, 0], period=[0, 0, 0.050], nperiods=31)

    # Define simple electron beam
    osr.set_particle_beam(energy_GeV=3, x0=[0, 0, -1], current=0.5)

    # Define the start and stop times for the calculation
    osr.set_ctstartstop(0, 2)

    # Calculate spectrum at 30 [m]. Note use of the nthreads argument.
    flux = osr.calculate_flux_rectangle(
        plane='XY',
        energy_eV=143.8,
        width=[0.01, 0.01],
        npoints=[101, 101],
        translation=[0, 0, 30]
    )

    # Plot flux
    plot_flux(flux)


def undulator_power_density():
    # # coding: utf-8
    #
    # # Plots inline for notebook
    # get_ipython().run_line_magic('matplotlib', 'inline')
    #
    # # Import the OSCARS SR module
    # import oscars.sr
    #
    # # Optionally import the plotting tools (matplotlib)
    # from oscars.plots_mpl import *

    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear all existing fields and create an undulator field
    osr.clear_bfields()
    osr.add_bfield_undulator(bfield=[0, 1, 0], period=[0, 0, 0.050], nperiods=31)

    # Define simple electron beam
    osr.set_particle_beam(energy_GeV=3, x0=[0, 0, -1], current=0.5)

    # Define the start and stop times for the calculation
    osr.set_ctstartstop(0, 2)

    # Calculate spectrum at 30 [m]. Note use of the nthreads argument.
    power_density = osr.calculate_power_density_rectangle(
        plane='XY',
        width=[0.05, 0.05],
        npoints=[101, 101],
        translation=[0, 0, 30]
    )

    # Plot power density
    plot_power_density(power_density)


def undulator_3d_power_density():
    # # coding: utf-8
    #
    # # Plots inline for notebook
    # get_ipython().run_line_magic('matplotlib', 'inline')
    #
    # # Import the OSCARS SR module
    # import oscars.sr
    #
    # # Import the 3D and parametric surfaces utilities
    # from oscars.plots3d_mpl import *
    # from oscars.parametric_surfaces import *

    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear all existing fields and create an undulator field
    osr.clear_bfields()
    osr.add_bfield_undulator(bfield=[0, 1, 0], period=[0, 0, 0.050], nperiods=31)

    # Define simple electron beam
    osr.set_particle_beam(energy_GeV=3, x0=[0, 0, -1], current=0.5)

    # Define the start and stop times for the calculation
    osr.set_ctstartstop(0, 2)

    # First create the surface of interest
    cylinder = PSCylinder(R=0.020, L=0.010, nu=101, nv=101)

    # Run calculation and plotting
    pd = power_density_3d(osr, cylinder, rotations=[osr.pi() / 2, 0, 0], translation=[0, 0, 30])


def example_032_undulator_flux():
    # # Import the OSCARS SR module
    # import oscars.sr
    #
    # # Import basic plot utilities (matplotlib). You don't need these to run OSCARS, but it's used here for basic plots
    # from oscars.plots_mpl import *

    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear any existing fields (just good habit in notebook style) and add an undulator field
    osr.clear_bfields()
    osr.add_bfield_undulator(bfield=[0, 1, 0], period=[0, 0, 0.049], nperiods=21)

    # Just to check the field that we added seems visually correct
    plot_bfield(osr)

    # Setup beam similar to NSLSII
    osr.clear_particle_beams()
    osr.set_particle_beam(x0=[0, 0, -1], energy_GeV=3, current=0.500)

    # Set the start and stop times for the calculation
    osr.set_ctstartstop(0, 2)

    # Run the particle trajectory calculation
    trajectory = osr.calculate_trajectory()

    # Plot the trajectory position and velocity
    plot_trajectory_position(trajectory)
    plot_trajectory_velocity(trajectory)

    # Calculate spectrum zoom
    spectrum = osr.calculate_spectrum(obs=[0, 0, 30], energy_range_eV=[145, 160], npoints=200)
    plot_spectrum(spectrum)

    flux = osr.calculate_flux_rectangle(
        plane='XY',
        energy_eV=153,
        width=[0.01, 0.01],
        npoints=[101, 101],
        translation=[0, 0, 30]
    )
    plot_flux(flux)


def example_042_undulator_power_density():
    # # Import the OSCARS SR module
    # import oscars.sr
    #
    # # Import basic plot utilities (matplotlib). You don't need these to run OSCARS, but it's used here for basic plots
    # from oscars.plots_mpl import *

    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear any existing fields (just good habit in notebook style) and add an undulator field
    osr.clear_bfields()
    osr.add_bfield_undulator(bfield=[0, 1, 0], period=[0, 0, 0.049], nperiods=21)

    # Just to check the field that we added seems visually correct
    plot_bfield(osr)

    # Setup beam similar to NSLSII
    osr.clear_particle_beams()
    osr.set_particle_beam(x0=[0, 0, -1], energy_GeV=3, current=0.500)

    # Set the start and stop times for the calculation
    osr.set_ctstartstop(0, 2)

    # Run the particle trajectory calculation
    trajectory = osr.calculate_trajectory()

    # Plot the trajectory position and velocity
    plot_trajectory_position(trajectory)
    plot_trajectory_velocity(trajectory)

    power_density = osr.calculate_power_density_rectangle(
        plane='XY',
        width=[0.05, 0.05],
        npoints=[101, 101],
        translation=[0, 0, 30]
    )
    plot_power_density(power_density)


def example_001_dipole_trajectory():
    # Import the OSCARS SR module
    # import oscars.sr
    #
    # # Import basic plot utilities. You don't need these to run OSCARS, but it's used here for basic plots
    # from oscars.plots_mpl import *

    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear any existing fields (just good habit in notebook style) and add an undulator field
    osr.clear_bfields()
    osr.add_bfield_uniform(bfield=[0, -0.4, 0], width=[0, 0, 1])

    # Just to check the field that we added seems visually correct
    plot_bfield(osr)

    # Setup beam similar to NSLSII
    osr.clear_particle_beams()
    osr.set_particle_beam(x0=[0, 0, -1], energy_GeV=3, current=0.500)

    # Set the start and stop times for the calculation
    osr.set_ctstartstop(0, 2)

    # Verify input information - print all to screen
    osr.print_all()

    # Run the particle trajectory calculation
    trajectory = osr.calculate_trajectory()

    # Plot the trajectory position and velocity
    plot_trajectory_position(trajectory)
    plot_trajectory_velocity(trajectory)

    # Setup beam similar to NSLSII
    osr.clear_particle_beams()
    osr.set_particle_beam(energy_GeV=3, current=0.500)

    # Set the start and stop times for the calculation
    osr.set_ctstartstop(-1, 1)

    # Run the particle trajectory calculation
    trajectory = osr.calculate_trajectory()

    # Plot the trajectory position and velocity
    plot_trajectory_position(trajectory)
    plot_trajectory_velocity(trajectory)


def undulator_radiation_srio():
    # # Import the OSCARS SR module
    # import oscars.sr
    #
    # # Import basic plot utilities (matplotlib). You don't need these to run OSCARS, but it's used here for basic plots
    # from oscars.plots_mpl import *

    # Create a new OSCARS object. Default to 8 threads and always use the GPU if available
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    # Clear any existing fields (just good habit in notebook style) and add an undulator field
    osr.clear_bfields()
    osr.add_bfield_undulator(bfield=[0, 0.69, 0], period=[0, 0, 0.038], nperiods=55)

    # # Just to check the field that we added seems visually correct
    # plot_bfield(osr)

    # Setup beam similar to NSLSII
    osr.clear_particle_beams()
    osr.set_particle_beam(x0=[0, 0, -2], energy_GeV=2.0, current=0.500)

    # Set the start and stop times for the calculation
    osr.set_ctstartstop(0, 4)

    # # Run the particle trajectory calculation
    # trajectory = osr.calculate_trajectory()

    # # Plot the trajectory position and velocity
    # plot_trajectory_position(trajectory)
    # plot_trajectory_velocity(trajectory)

    # # Calculate spectrum zoom
    # spectrum = osr.calculate_spectrum(obs=[0, 0, 10], energy_range_eV=[100, 400], npoints=2000)
    # # print(">>>",spectrum)
    # plot_spectrum(spectrum)

    import time
    t0 = time.time()
    flux = osr.calculate_flux_rectangle(
        plane='XY',
        energy_eV=249+6,
        width=[0.0025, 0.0025],
        npoints=[101, 101],
        translation=[0, 0, 10]
    )
    print(">>>>",time.time()-t0)

    plot_flux(flux)
    print(">>>", flux,type(flux))
    print(numpy.array(flux).shape)


def undulator_radiation_howard():
    osr = oscars.sr.sr(nthreads=8, gpu=1)

    osr.clear_bfields()

    K = 0.6
    import scipy.constants as codata
    B = K * 2 * numpy.pi * codata.m_e * codata.c / (codata.e * 0.0288)
    osr.add_bfield_undulator(bfield=[0, B, 0], period=[0, 0, 0.0288], nperiods=138)

    osr.clear_particle_beams()
    osr.set_particle_beam(x0=[0, 0, -138*0.0288], energy_GeV=2.0, current=0.500)

    # Set the start and stop times for the calculation
    osr.set_ctstartstop(0, 4)

    # # Run the particle trajectory calculation
    # trajectory = osr.calculate_trajectory()

    # # Plot the trajectory position and velocity
    # plot_trajectory_position(trajectory)
    # plot_trajectory_velocity(trajectory)

    # # Calculate spectrum zoom
    # spectrum = osr.calculate_spectrum(obs=[0, 0, 10], energy_range_eV=[100, 400], npoints=2000)
    # # print(">>>",spectrum)
    # plot_spectrum(spectrum)

    import time
    t0 = time.time()
    flux = osr.calculate_flux_rectangle(
        plane='XY',
        energy_eV=830, #1117.74,
        width=[0.006, 0.006],
        npoints=[251, 251],
        translation=[0, 0, 13]
    )
    print(">>>>", time.time() - t0)

    plot_flux(flux)
    print(">>>", flux, type(flux))
    print(numpy.array(flux).shape)


if __name__ == "__main__":
    set_qt()
    # undulator_spectrum()
    # undulator_flux()
    # undulator_power_density()
    # undulator_3d_power_density() #??????????
    # example_032_undulator_flux()
    # example_042_undulator_power_density()
    # example_001_dipole_trajectory()
    undulator_radiation_howard() | 31.310881 | 121 | 0.676485 | 1,734 | 12,086 | 4.579585 | 0.119954 | 0.010074 | 0.01763 | 0.022667 | 0.878101 | 0.865886 | 0.848004 | 0.83793 | 0.834656 | 0.822818 | 0 | 0.049989 | 0.22042 | 12,086 | 386 | 122 | 31.310881 | 0.792825 | 0.457471 | 0 | 0.688742 | 0 | 0 | 0.005324 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059603 | false | 0 | 0.05298 | 0 | 0.112583 | 0.046358 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
46b7d92ab60cce8857403f110c10a7f7acc9e8c1 | 137 | py | Python | pyrimidine/local_search/__init__.py | Freakwill/pyrimidine | ff05998f110a69a002180d0dae2ae514a5807cfb | [
"MIT"
] | 1 | 2021-03-04T17:03:14.000Z | 2021-03-04T17:03:14.000Z | pyrimidine/local_search/__init__.py | Freakwill/pyrimidine | ff05998f110a69a002180d0dae2ae514a5807cfb | [
"MIT"
] | null | null | null | pyrimidine/local_search/__init__.py | Freakwill/pyrimidine | ff05998f110a69a002180d0dae2ae514a5807cfb | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from .simulated_annealing import *
from .random_walk import *
from .tabu_search import * | 19.571429 | 34 | 0.708029 | 19 | 137 | 4.947368 | 0.789474 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.145985 | 137 | 7 | 35 | 19.571429 | 0.786325 | 0.313869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
46d8c9d44e9dbc1ea9309c2a2e8cbfd2d94c2d06 | 145 | py | Python | dvc/utils/collections.py | vyloy/dvc | 60c89adeb5dcc293d8661d6aabeb1da6d05466f5 | [
"Apache-2.0"
] | 1 | 2019-04-16T19:51:03.000Z | 2019-04-16T19:51:03.000Z | dvc/utils/collections.py | vyloy/dvc | 60c89adeb5dcc293d8661d6aabeb1da6d05466f5 | [
"Apache-2.0"
] | null | null | null | dvc/utils/collections.py | vyloy/dvc | 60c89adeb5dcc293d8661d6aabeb1da6d05466f5 | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
# just simple check for Nones and empty strings
def compact(args):
    return list(filter(bool, args))
| 20.714286 | 47 | 0.772414 | 21 | 145 | 5.095238 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165517 | 145 | 6 | 48 | 24.166667 | 0.884298 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
46ddd2452e2167c4d040cbcb09e2c5f4dfefaa71 | 1,111 | py | Python | operation.py | lucasma8795/chess | 2d8a1f6472dc12e83bace2eb7e8329358edb6b4c | [
"Unlicense"
] | null | null | null | operation.py | lucasma8795/chess | 2d8a1f6472dc12e83bace2eb7e8329358edb6b4c | [
"Unlicense"
] | null | null | null | operation.py | lucasma8795/chess | 2d8a1f6472dc12e83bace2eb7e8329358edb6b4c | [
"Unlicense"
] | null | null | null | class Operation():
    def __init__(self):
        pass


class op_pawn_move(Operation):
    def __init__(self, old_pos, new_pos, two_squares=False):
        super().__init__()
        self.old_pos = old_pos
        self.new_pos = new_pos
        self.two_squares = two_squares


class op_en_passant(Operation):
    def __init__(self, old_pos, new_pos, ep_pos):
        super().__init__()
        self.old_pos = old_pos
        self.new_pos = new_pos
        self.ep_pos = ep_pos


class op_move(Operation):
    def __init__(self, old_pos, new_pos):
        super().__init__()
        self.old_pos = old_pos
        self.new_pos = new_pos


class op_capture(Operation):
    def __init__(self, old_pos, new_pos):
        super().__init__()
        self.old_pos = old_pos
        self.new_pos = new_pos


class op_castle(Operation):
    def __init__(self, old_pos, new_pos, color, side):
        super().__init__()
        self.old_pos = old_pos
        self.new_pos = new_pos
        self.color = color # white, black
        self.side = side # 0: queenside(left), 1: kingside(right)


class op_promotion(Operation):
    def __init__(self, old_pos, new_pos, color):
        super().__init__()
        self.old_pos = old_pos
        self.new_pos = new_pos
        self.color = color
| 24.152174 | 59 | 0.723672 | 181 | 1,111 | 3.878453 | 0.176796 | 0.153846 | 0.188034 | 0.239316 | 0.725071 | 0.725071 | 0.725071 | 0.725071 | 0.679487 | 0.517094 | 0 | 0.002132 | 0.155716 | 1,111 | 45 | 60 | 24.688889 | 0.746269 | 0.045905 | 0 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184211 | false | 0.052632 | 0 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
645c421a7c0a65c7f4ed2f6036aaba467e9a0a8a | 7,307 | py | Python | tests/unit/test_install.py | markharley/pip-install-privates | a55b82020db0813ed8bac95e175edf5e8363bf32 | [
"MIT"
] | null | null | null | tests/unit/test_install.py | markharley/pip-install-privates | a55b82020db0813ed8bac95e175edf5e8363bf32 | [
"MIT"
] | null | null | null | tests/unit/test_install.py | markharley/pip-install-privates | a55b82020db0813ed8bac95e175edf5e8363bf32 | [
"MIT"
] | null | null | null | import os
import tempfile
from unittest import TestCase

from pip_install_privates.install import collect_requirements


class TestInstall(TestCase):
    def _create_reqs_file(self, reqs):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write('\n'.join(reqs).encode('utf-8'))

        self.addCleanup(os.unlink, f.name)
        return f.name

    def test_considers_all_requirements_in_file(self):
        fname = self._create_reqs_file(['mock==2.0.0', 'nose==1.3.7', 'fso==0.3.1'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0', 'nose==1.3.7', 'fso==0.3.1'])

    def test_removes_comments(self):
        fname = self._create_reqs_file(['mock==2.0.0', '# for testing', 'nose==1.3.7', 'fso==0.3.1'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0', 'nose==1.3.7', 'fso==0.3.1'])

    def test_removes_trailing_comments(self):
        fname = self._create_reqs_file(['mock==2.0.0', 'nose==1.3.7 # for testing', 'fso==0.3.1'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0', 'nose==1.3.7', 'fso==0.3.1'])

    def test_skips_empty_lines(self):
        fname = self._create_reqs_file(['mock==2.0.0', '', 'nose==1.3.7', '', 'fso==0.3.1'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0', 'nose==1.3.7', 'fso==0.3.1'])

    def test_strips_whitespaces(self):
        fname = self._create_reqs_file([' mock==2.0.0 ', ' ', 'nose==1.3.7 '])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0', 'nose==1.3.7'])

    def test_reads_included_files(self):
        basename = self._create_reqs_file(['mock==2.0.0', 'nose==1.3.7'])
        fname = self._create_reqs_file(['-r {}'.format(basename), 'fso==0.3.1'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0', 'nose==1.3.7', 'fso==0.3.1'])

    def test_reads_chain_of_included_files(self):
        file1 = self._create_reqs_file(['mock==2.0.0', 'nose==1.3.7'])
        file2 = self._create_reqs_file(['-r {}'.format(file1), 'Django==1.10'])
        file3 = self._create_reqs_file(['amqp==1.4.7', '-r {}'.format(file2), 'six==1.10.0'])
        file4 = self._create_reqs_file(['-r {}'.format(file3), 'fso==0.3.1'])
        ret = collect_requirements(file4)
        self.assertEqual(ret, ['amqp==1.4.7', 'mock==2.0.0', 'nose==1.3.7',
                               'Django==1.10', 'six==1.10.0', 'fso==0.3.1'])

    def test_honors_vcs_urls(self):
        fname = self._create_reqs_file(['git+https://github.com/ByteInternet/...'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['git+https://github.com/ByteInternet/...'])

    def test_transforms_vcs_git_url_to_oauth(self):
        fname = self._create_reqs_file(['git+git@github.com:ByteInternet/...'])
        ret = collect_requirements(fname, transform_with_token='my-token')
        self.assertEqual(ret, ['git+https://my-token:x-oauth-basic@github.com/ByteInternet/...'])

    def test_transforms_vcs_git_url_to_oauth_dashe_option(self):
        fname = self._create_reqs_file(['-e git+git@github.com:ByteInternet/...'])
        ret = collect_requirements(fname, transform_with_token='my-token')
        self.assertEqual(ret, ['-e', 'git+https://my-token:x-oauth-basic@github.com/ByteInternet/...'])

    def test_transforms_vcs_ssh_url_to_oauth(self):
        fname = self._create_reqs_file(['git+ssh://git@github.com/ByteInternet/...'])
        ret = collect_requirements(fname, transform_with_token='my-token')
        self.assertEqual(ret, ['git+https://my-token:x-oauth-basic@github.com/ByteInternet/...'])

    def test_transforms_vcs_ssh_url_to_oauth_dashe_option(self):
        fname = self._create_reqs_file(['-e git+ssh://git@github.com/ByteInternet/...'])
        ret = collect_requirements(fname, transform_with_token='my-token')
        self.assertEqual(ret, ['-e', 'git+https://my-token:x-oauth-basic@github.com/ByteInternet/...'])

    def test_transforms_urls_in_included_files(self):
        file1 = self._create_reqs_file(['mock==2.0.0', '-e git+git@github.com:ByteInternet/...', 'nose==1.3.7'])
        fname = self._create_reqs_file(['-r {}'.format(file1), 'fso==0.3.1'])
        ret = collect_requirements(fname, transform_with_token='my-token')
        self.assertEqual(ret, ['mock==2.0.0',
                               '-e', 'git+https://my-token:x-oauth-basic@github.com/ByteInternet/...',
                               'nose==1.3.7', 'fso==0.3.1'])

    def test_transforms_git_plus_git_urls_to_regular_url_if_no_token_provided(self):
        file1 = self._create_reqs_file(['mock==2.0.0', '-e git+git@github.com:ByteInternet/...', 'nose==1.3.7'])
        fname = self._create_reqs_file(['-r {}'.format(file1), 'fso==0.3.1'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0',
                               '-e', 'git+https://github.com/ByteInternet/...',
                               'nose==1.3.7', 'fso==0.3.1'])

    def test_transforms_git_plus_ssh_urls_to_regular_url_if_no_token_provided(self):
        file1 = self._create_reqs_file(['mock==2.0.0', '-e git+ssh://git@github.com/ByteInternet/...', 'nose==1.3.7'])
        fname = self._create_reqs_file(['-r {}'.format(file1), 'fso==0.3.1'])
        ret = collect_requirements(fname)
        self.assertEqual(ret, ['mock==2.0.0',
                               '-e', 'git+https://github.com/ByteInternet/...',
                               'nose==1.3.7', 'fso==0.3.1'])

    def test_transforms_git_plus_https_urls_to_https_url_with_oauth_token_if_token_provided(self):
        file1 = self._create_reqs_file(['mock==2.0.0', 'git+https://github.com/ByteInternet/...', 'nose==1.3.7'])
        fname = self._create_reqs_file(['-r {}'.format(file1), 'fso==0.3.1'])
        ret = collect_requirements(fname, transform_with_token='my-token')
        self.assertEqual(ret, ['mock==2.0.0',
                               'git+https://my-token:x-oauth-basic@github.com/ByteInternet/...',
                               'nose==1.3.7', 'fso==0.3.1'])

    def test_transforms_editable_git_plus_https_urls_to_editable_https_url_with_oauth_token_if_token_provided(self):
file1 = self._create_reqs_file(['mock==2.0.0', '-e git+https://github.com/ByteInternet/...', 'nose==1.3.7'])
fname = self._create_reqs_file(['-r {}'.format(file1), 'fso==0.3.1'])
ret = collect_requirements(fname, transform_with_token='my-token')
self.assertEqual(ret, ['mock==2.0.0',
'-e', 'git+https://my-token:x-oauth-basic@github.com/ByteInternet/...',
'nose==1.3.7', 'fso==0.3.1'])
def test_does_not_transform_git_plus_https_urls_to_https_url_with_oauth_token_if_no_token_provided(self):
file1 = self._create_reqs_file(['mock==2.0.0', '-e git+https://github.com/ByteInternet/...', 'nose==1.3.7'])
fname = self._create_reqs_file(['-r {}'.format(file1), 'fso==0.3.1'])
ret = collect_requirements(fname)
self.assertEqual(ret, ['mock==2.0.0',
'-e', 'git+https://github.com/ByteInternet/...',
'nose==1.3.7', 'fso==0.3.1'])
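The behaviour pinned down by the tests above (recursive `-r` includes, `-e` handling, and rewriting `git+git@` / `git+ssh://` remotes to HTTPS with an optional OAuth token) can be summarised in a small sketch. This is an implementation inferred from the expected values in the tests, not the project's actual `collect_requirements`:

```python
import re


def collect_requirements(path, transform_with_token=None):
    """Flatten a requirements file, following -r includes and rewriting
    git remotes. Sketch inferred from the tests above; the real
    implementation may differ."""
    collected = []
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line:
                continue
            if line.startswith('-r '):
                # Recurse into included requirements files.
                collected.extend(collect_requirements(line[3:], transform_with_token))
                continue
            editable = line.startswith('-e ')
            if editable:
                line = line[3:]
            # git+git@host:org/repo and git+ssh://git@host/org/repo -> https form
            line = re.sub(r'^git\+(?:ssh://)?git@([^:/]+)[:/]', r'git+https://\1/', line)
            if transform_with_token:
                line = line.replace(
                    'git+https://',
                    'git+https://%s:x-oauth-basic@' % transform_with_token, 1)
            if editable:
                collected.append('-e')
            collected.append(line)
    return collected
```

With a token the rewritten URL carries `<token>:x-oauth-basic@` credentials; without one the SSH-style remotes still collapse to plain HTTPS, matching the "no token provided" tests.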
# File: matid/__init__.py | repo: markus1978/matid | license: Apache-2.0
from matid.classification.classifier import Classifier
from matid.classification.periodicfinder import PeriodicFinder
from matid.symmetry.symmetryanalyzer import SymmetryAnalyzer
# File: modules/dials/algorithms/background/median/__init__.py | repo: jorgediazjr/dials-dev20191018 | license: BSD-3-Clause
from __future__ import absolute_import, division, print_function
from dials.algorithms.background.median.algorithm import BackgroundAlgorithm
from dials_algorithms_background_median_ext import *
# File: core/exception/__init__.py | repo: ryanolee/pager-duty-sync | license: MIT
from .httpExceptions import HTTPException
# File: aws_mock/requests/modify_subnet_attribute.py | repo: enaydanov/aws_mock | license: Apache-2.0
from aws_mock.lib import aws_response
@aws_response
def modify_subnet_attribute() -> None:
    pass
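The handler above does nothing itself; the `aws_response` decorator supplies the response. The real helper lives in `aws_mock.lib` and its behaviour is not shown here, so the following is a hypothetical stand-in only, illustrating the pattern of a decorator that wraps an empty handler in an EC2-style XML envelope:

```python
from functools import wraps


def aws_response(handler):
    """Hypothetical stand-in for aws_mock.lib.aws_response: renders the
    handler's (possibly empty) result inside an EC2-style XML response
    envelope derived from the handler's name. The real helper may differ."""
    @wraps(handler)
    def wrapper(*args, **kwargs):
        body = handler(*args, **kwargs) or ""
        # modify_subnet_attribute -> ModifySubnetAttribute
        action = handler.__name__.title().replace("_", "")
        return "<%sResponse>%s<return>true</return></%sResponse>" % (action, body, action)
    return wrapper


@aws_response
def modify_subnet_attribute() -> None:
    pass
```

The decorator lets every mock request handler stay a trivial function while the envelope formatting is centralised in one place.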
# File: loldib/getratings/models/NA/na_cassiopeia/na_cassiopeia_sup.py | repo: koliupy/loldib | license: Apache-2.0
from getratings.models.ratings import Ratings
class NA_Cassiopeia_Sup_Aatrox(Ratings):
    pass
class NA_Cassiopeia_Sup_Ahri(Ratings):
    pass
class NA_Cassiopeia_Sup_Akali(Ratings):
    pass
class NA_Cassiopeia_Sup_Alistar(Ratings):
    pass
class NA_Cassiopeia_Sup_Amumu(Ratings):
    pass
class NA_Cassiopeia_Sup_Anivia(Ratings):
    pass
class NA_Cassiopeia_Sup_Annie(Ratings):
    pass
class NA_Cassiopeia_Sup_Ashe(Ratings):
    pass
class NA_Cassiopeia_Sup_AurelionSol(Ratings):
    pass
class NA_Cassiopeia_Sup_Azir(Ratings):
    pass
class NA_Cassiopeia_Sup_Bard(Ratings):
    pass
class NA_Cassiopeia_Sup_Blitzcrank(Ratings):
    pass
class NA_Cassiopeia_Sup_Brand(Ratings):
    pass
class NA_Cassiopeia_Sup_Braum(Ratings):
    pass
class NA_Cassiopeia_Sup_Caitlyn(Ratings):
    pass
class NA_Cassiopeia_Sup_Camille(Ratings):
    pass
class NA_Cassiopeia_Sup_Cassiopeia(Ratings):
    pass
class NA_Cassiopeia_Sup_Chogath(Ratings):
    pass
class NA_Cassiopeia_Sup_Corki(Ratings):
    pass
class NA_Cassiopeia_Sup_Darius(Ratings):
    pass
class NA_Cassiopeia_Sup_Diana(Ratings):
    pass
class NA_Cassiopeia_Sup_Draven(Ratings):
    pass
class NA_Cassiopeia_Sup_DrMundo(Ratings):
    pass
class NA_Cassiopeia_Sup_Ekko(Ratings):
    pass
class NA_Cassiopeia_Sup_Elise(Ratings):
    pass
class NA_Cassiopeia_Sup_Evelynn(Ratings):
    pass
class NA_Cassiopeia_Sup_Ezreal(Ratings):
    pass
class NA_Cassiopeia_Sup_Fiddlesticks(Ratings):
    pass
class NA_Cassiopeia_Sup_Fiora(Ratings):
    pass
class NA_Cassiopeia_Sup_Fizz(Ratings):
    pass
class NA_Cassiopeia_Sup_Galio(Ratings):
    pass
class NA_Cassiopeia_Sup_Gangplank(Ratings):
    pass
class NA_Cassiopeia_Sup_Garen(Ratings):
    pass
class NA_Cassiopeia_Sup_Gnar(Ratings):
    pass
class NA_Cassiopeia_Sup_Gragas(Ratings):
    pass
class NA_Cassiopeia_Sup_Graves(Ratings):
    pass
class NA_Cassiopeia_Sup_Hecarim(Ratings):
    pass
class NA_Cassiopeia_Sup_Heimerdinger(Ratings):
    pass
class NA_Cassiopeia_Sup_Illaoi(Ratings):
    pass
class NA_Cassiopeia_Sup_Irelia(Ratings):
    pass
class NA_Cassiopeia_Sup_Ivern(Ratings):
    pass
class NA_Cassiopeia_Sup_Janna(Ratings):
    pass
class NA_Cassiopeia_Sup_JarvanIV(Ratings):
    pass
class NA_Cassiopeia_Sup_Jax(Ratings):
    pass
class NA_Cassiopeia_Sup_Jayce(Ratings):
    pass
class NA_Cassiopeia_Sup_Jhin(Ratings):
    pass
class NA_Cassiopeia_Sup_Jinx(Ratings):
    pass
class NA_Cassiopeia_Sup_Kalista(Ratings):
    pass
class NA_Cassiopeia_Sup_Karma(Ratings):
    pass
class NA_Cassiopeia_Sup_Karthus(Ratings):
    pass
class NA_Cassiopeia_Sup_Kassadin(Ratings):
    pass
class NA_Cassiopeia_Sup_Katarina(Ratings):
    pass
class NA_Cassiopeia_Sup_Kayle(Ratings):
    pass
class NA_Cassiopeia_Sup_Kayn(Ratings):
    pass
class NA_Cassiopeia_Sup_Kennen(Ratings):
    pass
class NA_Cassiopeia_Sup_Khazix(Ratings):
    pass
class NA_Cassiopeia_Sup_Kindred(Ratings):
    pass
class NA_Cassiopeia_Sup_Kled(Ratings):
    pass
class NA_Cassiopeia_Sup_KogMaw(Ratings):
    pass
class NA_Cassiopeia_Sup_Leblanc(Ratings):
    pass
class NA_Cassiopeia_Sup_LeeSin(Ratings):
    pass
class NA_Cassiopeia_Sup_Leona(Ratings):
    pass
class NA_Cassiopeia_Sup_Lissandra(Ratings):
    pass
class NA_Cassiopeia_Sup_Lucian(Ratings):
    pass
class NA_Cassiopeia_Sup_Lulu(Ratings):
    pass
class NA_Cassiopeia_Sup_Lux(Ratings):
    pass
class NA_Cassiopeia_Sup_Malphite(Ratings):
    pass
class NA_Cassiopeia_Sup_Malzahar(Ratings):
    pass
class NA_Cassiopeia_Sup_Maokai(Ratings):
    pass
class NA_Cassiopeia_Sup_MasterYi(Ratings):
    pass
class NA_Cassiopeia_Sup_MissFortune(Ratings):
    pass
class NA_Cassiopeia_Sup_MonkeyKing(Ratings):
    pass
class NA_Cassiopeia_Sup_Mordekaiser(Ratings):
    pass
class NA_Cassiopeia_Sup_Morgana(Ratings):
    pass
class NA_Cassiopeia_Sup_Nami(Ratings):
    pass
class NA_Cassiopeia_Sup_Nasus(Ratings):
    pass
class NA_Cassiopeia_Sup_Nautilus(Ratings):
    pass
class NA_Cassiopeia_Sup_Nidalee(Ratings):
    pass
class NA_Cassiopeia_Sup_Nocturne(Ratings):
    pass
class NA_Cassiopeia_Sup_Nunu(Ratings):
    pass
class NA_Cassiopeia_Sup_Olaf(Ratings):
    pass
class NA_Cassiopeia_Sup_Orianna(Ratings):
    pass
class NA_Cassiopeia_Sup_Ornn(Ratings):
    pass
class NA_Cassiopeia_Sup_Pantheon(Ratings):
    pass
class NA_Cassiopeia_Sup_Poppy(Ratings):
    pass
class NA_Cassiopeia_Sup_Quinn(Ratings):
    pass
class NA_Cassiopeia_Sup_Rakan(Ratings):
    pass
class NA_Cassiopeia_Sup_Rammus(Ratings):
    pass
class NA_Cassiopeia_Sup_RekSai(Ratings):
    pass
class NA_Cassiopeia_Sup_Renekton(Ratings):
    pass
class NA_Cassiopeia_Sup_Rengar(Ratings):
    pass
class NA_Cassiopeia_Sup_Riven(Ratings):
    pass
class NA_Cassiopeia_Sup_Rumble(Ratings):
    pass
class NA_Cassiopeia_Sup_Ryze(Ratings):
    pass
class NA_Cassiopeia_Sup_Sejuani(Ratings):
    pass
class NA_Cassiopeia_Sup_Shaco(Ratings):
    pass
class NA_Cassiopeia_Sup_Shen(Ratings):
    pass
class NA_Cassiopeia_Sup_Shyvana(Ratings):
    pass
class NA_Cassiopeia_Sup_Singed(Ratings):
    pass
class NA_Cassiopeia_Sup_Sion(Ratings):
    pass
class NA_Cassiopeia_Sup_Sivir(Ratings):
    pass
class NA_Cassiopeia_Sup_Skarner(Ratings):
    pass
class NA_Cassiopeia_Sup_Sona(Ratings):
    pass
class NA_Cassiopeia_Sup_Soraka(Ratings):
    pass
class NA_Cassiopeia_Sup_Swain(Ratings):
    pass
class NA_Cassiopeia_Sup_Syndra(Ratings):
    pass
class NA_Cassiopeia_Sup_TahmKench(Ratings):
    pass
class NA_Cassiopeia_Sup_Taliyah(Ratings):
    pass
class NA_Cassiopeia_Sup_Talon(Ratings):
    pass
class NA_Cassiopeia_Sup_Taric(Ratings):
    pass
class NA_Cassiopeia_Sup_Teemo(Ratings):
    pass
class NA_Cassiopeia_Sup_Thresh(Ratings):
    pass
class NA_Cassiopeia_Sup_Tristana(Ratings):
    pass
class NA_Cassiopeia_Sup_Trundle(Ratings):
    pass
class NA_Cassiopeia_Sup_Tryndamere(Ratings):
    pass
class NA_Cassiopeia_Sup_TwistedFate(Ratings):
    pass
class NA_Cassiopeia_Sup_Twitch(Ratings):
    pass
class NA_Cassiopeia_Sup_Udyr(Ratings):
    pass
class NA_Cassiopeia_Sup_Urgot(Ratings):
    pass
class NA_Cassiopeia_Sup_Varus(Ratings):
    pass
class NA_Cassiopeia_Sup_Vayne(Ratings):
    pass
class NA_Cassiopeia_Sup_Veigar(Ratings):
    pass
class NA_Cassiopeia_Sup_Velkoz(Ratings):
    pass
class NA_Cassiopeia_Sup_Vi(Ratings):
    pass
class NA_Cassiopeia_Sup_Viktor(Ratings):
    pass
class NA_Cassiopeia_Sup_Vladimir(Ratings):
    pass
class NA_Cassiopeia_Sup_Volibear(Ratings):
    pass
class NA_Cassiopeia_Sup_Warwick(Ratings):
    pass
class NA_Cassiopeia_Sup_Xayah(Ratings):
    pass
class NA_Cassiopeia_Sup_Xerath(Ratings):
    pass
class NA_Cassiopeia_Sup_XinZhao(Ratings):
    pass
class NA_Cassiopeia_Sup_Yasuo(Ratings):
    pass
class NA_Cassiopeia_Sup_Yorick(Ratings):
    pass
class NA_Cassiopeia_Sup_Zac(Ratings):
    pass
class NA_Cassiopeia_Sup_Zed(Ratings):
    pass
class NA_Cassiopeia_Sup_Ziggs(Ratings):
    pass
class NA_Cassiopeia_Sup_Zilean(Ratings):
    pass
class NA_Cassiopeia_Sup_Zyra(Ratings):
    pass
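The file above is one empty subclass per champion matchup, which is clearly machine-generated boilerplate. Purely as an illustration (this is not how loldib actually builds these classes), the same definitions could be produced dynamically with `type()`; `Ratings` is stubbed locally here so the sketch is self-contained:

```python
class Ratings:
    """Local stand-in for getratings.models.ratings.Ratings."""
    pass


# Abbreviated champion list; the real module enumerates ~138 names.
CHAMPIONS = ["Aatrox", "Ahri", "Akali"]

for champ in CHAMPIONS:
    name = "NA_Cassiopeia_Sup_%s" % champ
    # type(name, bases, namespace) creates an empty subclass, exactly
    # equivalent to the literal "class ...(Ratings): pass" definitions.
    globals()[name] = type(name, (Ratings,), {})
```

Generating the classes at import time would keep the module a few lines long at the cost of making the names invisible to static tooling, which is presumably why the generated-file approach was taken.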
# File: tests/test_smartCompare.py | repo: salesforce/smartACL | license: BSD-3-Clause
# Copyright (c) 2018, salesforce.com, inc.
# All rights reserved.
# Licensed under the BSD 3-Clause license.
# For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
import unittest
import sys
import os
from smartACL import linkdef
from smartACL import link_cisco
from smartACL import link_juniper
from smartACL import smartACL
class smartTest(unittest.TestCase):
    def setUp(self):
        self.filet1 = 'tests/test_data/test_acl_smartCompare1'
        self.filet2 = 'tests/test_data/test_acl_smartCompare2'
        self.filet2a = 'tests/test_data/test_acl_smartCompare2a'
        self.filet3 = 'tests/test_data/test_acl_smartCompare3'
        self.filet4 = 'tests/test_data/test_acl_smartCompare4'
        self.filet5 = 'tests/test_data/test_acl_smartCompare5'
        self.filet6 = 'tests/test_data/test_acl_smartCompare6'
        self.filet7 = 'tests/test_data/test_acl_smartCompare7'
        self.filet8 = 'tests/test_data/test_acl_smartCompare8'
        self.filet9 = 'tests/test_data/test_acl_smartCompare9'
        self.filet10 = 'tests/test_data/test_acl_smartCompare10'
        self.filet11 = 'tests/test_data/test_acl_smartCompare11'
        self.filet12 = 'tests/test_data/test_acl_smartCompare12'
        self.filet13 = 'tests/test_data/test_acl_smartCompare13'
        self.filet14 = 'tests/test_data/test_acl_smartCompare14'
        self.filet15 = 'tests/test_data/test_acl_smartCompare15'
        self.filet16 = 'tests/test_data/test_acl_smartCompare16'
        self.filet17 = 'tests/test_data/test_acl_smartCompare17'
        self.filet18 = 'tests/test_data/test_acl_smartCompare18'
        self.filet19 = 'tests/test_data/test_acl_smartCompare19'

        self.results_t1_t2 = ['permit tcp 10.230.0.0 0.0.0.127 10.240.0.0 0.0.0.127', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.128 0.0.0.127 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.64 0.0.0.63 eq 7080']
        self.results_t1_t2a = ['permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.128 0.0.0.127 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.64 0.0.0.63 eq 7080']
        self.results_t2a_t2a = ['deny tcp 10.230.0.0 0.0.0.127 10.240.0.0 0.0.0.127 eq 22', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.255 eq 7080']
        self.results_t3_t4 = ['permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.255 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.255 eq 7081']
        self.results_t5_t6 = ['permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.128 0.0.0.127 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.64 0.0.0.63 eq 7080']
        self.results_t7_t8 = ['permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.64 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.128 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.192 0.0.0.63 eq 7080']
        self.results_t8_t7 = ['permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.254 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.1 0.0.0.254 eq 7080']
        self.results_t7_t7 = ['permit tcp 10.231.69.128 0.0.0.127 10.0.0.0 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.64 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.128 0.0.0.63 eq 7080', 'permit tcp 10.231.69.128 0.0.0.127 10.0.0.192 0.0.0.63 eq 7080']
        '''
        The T9 - T9 comparison is an interesting case. T9 contains two shadowed rules, so when we compare it
        with itself, those two rules are reported as NOT matched. That is completely TRUE. Although it could
        seem inconsistent, these two lines will never be matched, so in this case smartCompare is working fine.
        The same applies to T10 - T10 and T9 - T10.
        '''
        self.results_t9_t9 = ['term testt1', 'term testt2', 'term testt3', 'term testt4']
        self.results_t10_t10 = ['term testt1', "term testt2{2{1{['10.0.0.192/255.255.255.192', '10.0.1.0/255.255.255.128']", "term testt2{2{2{['10.0.0.192/255.255.255.192', '10.0.1.128/255.255.255.192']", 'term testt3', 'term testt4']
        self.results_t9_t10 = ['term testt3', 'term testt4']
        self.results_t11_t12 = ['term testt2', 'term testt3', 'term testt5']
        self.results_t11_t12_is = ['term testt3', 'term testt5']
        self.results_t13_t13 = ['permit udp 0.0.0.0 0.0.0.0 eq 67 255.255.255.255 0.0.0.0 eq 68', 'permit udp any eq 68 255.255.255.255 0.0.0.0 eq 67', 'permit udp 192.168.1.0 0.0.0.63 eq 68 any eq 68', 'permit udp 192.168.1.192 0.0.0.63 eq 68 any eq 68']
        self.results_t14_t15 = ['permit tcp 10.230.0.0 0.0.0.127 10.240.0.0 0.0.0.127']
        self.results_t15_t14 = []
        self.results_t16_t17 = []
        self.results_t17_t16 = []
        self.results_t18_t19 = ['term testt1', 'term testt2']
        self.results_t19_t18 = ["term testt1{1{1{['10.0.0.0/255.255.255.0', '10.0.1.0/255.255.255.0']", "term testt1{1{2{['10.0.0.0/255.255.255.0', '10.0.1.0/255.255.255.0']"]

        # Silence the tool's output while the tests run.
        null = open(os.devnull, 'w')
        self.stdout = sys.stdout
        sys.stdout = null
        self.longMessage = True
    def test_smartCompare_t1_t2(self):
        policy1 = linkdef.FWPolicy('', self.filet1, False)
        link_cisco.acl_parser(self.filet1, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet2, False)
        link_cisco.acl_parser(self.filet2, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t1_t2, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t1_t2, 'Ignoring Shadowed Rules')

    def test_smartCompare_t1_t2a(self):
        policy1 = linkdef.FWPolicy('', self.filet1, False)
        link_cisco.acl_parser(self.filet1, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet2a, False)
        link_cisco.acl_parser(self.filet2a, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t1_t2a, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t1_t2a, 'Ignoring Shadowed Rules')

    def test_smartCompare_t2a_t2a(self):
        policy1 = linkdef.FWPolicy('', self.filet2a, False)
        link_cisco.acl_parser(self.filet2a, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet2a, False)
        link_cisco.acl_parser(self.filet2a, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t2a_t2a, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t2a_t2a, 'Ignoring Shadowed Rules')

    def test_smartCompare_t3_t4(self):
        policy1 = linkdef.FWPolicy('', self.filet3, False)
        link_cisco.acl_parser(self.filet3, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet4, False)
        link_cisco.acl_parser(self.filet4, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t3_t4, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t3_t4, 'Ignoring Shadowed Rules')

    def test_smartCompare_t5_t6(self):
        policy1 = linkdef.FWPolicy('', self.filet5, False)
        link_cisco.acl_parser(self.filet5, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet6, False)
        link_cisco.acl_parser(self.filet6, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t5_t6, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t5_t6, 'Ignoring Shadowed Rules')

    def test_smartCompare_t7_t7(self):
        policy1 = linkdef.FWPolicy('', self.filet7, False)
        link_cisco.acl_parser(self.filet7, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet7, False)
        link_cisco.acl_parser(self.filet7, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t7_t7, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t7_t7, 'Ignoring Shadowed Rules')

    def test_smartCompare_t7_t8(self):
        policy1 = linkdef.FWPolicy('', self.filet7, False)
        link_cisco.acl_parser(self.filet7, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet8, False)
        link_cisco.acl_parser(self.filet8, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t7_t8, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t7_t8, 'Ignoring Shadowed Rules')

    def test_smartCompare_t8_t7(self):
        policy1 = linkdef.FWPolicy('', self.filet8, False)
        link_cisco.acl_parser(self.filet8, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet7, False)
        link_cisco.acl_parser(self.filet7, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t8_t7, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t8_t7, 'Ignoring Shadowed Rules')

    def test_smartCompare_t9_t9(self):
        policy1 = linkdef.FWPolicy('', self.filet9, False)
        link_juniper.jcl_parser(self.filet9, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet9, False)
        link_juniper.jcl_parser(self.filet9, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t9_t9, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        # Because the shadowed rule is removed, both lists need to be sorted first.
        # Note: list.sort() sorts in place and returns None, so sorted() is used here.
        self.assertEqual(sorted(smartacl_result), sorted(self.results_t9_t9), 'Ignoring Shadowed Rules')

    def test_smartCompare_t10_t10(self):
        policy1 = linkdef.FWPolicy('', self.filet10, False)
        link_juniper.jcl_parser(self.filet10, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet10, False)
        link_juniper.jcl_parser(self.filet10, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t10_t10, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t10_t10, 'Ignoring Shadowed Rules')

    def test_smartCompare_t9_t10(self):
        policy1 = linkdef.FWPolicy('', self.filet9, False)
        link_juniper.jcl_parser(self.filet9, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet10, False)
        link_juniper.jcl_parser(self.filet10, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t9_t10, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t9_t10, 'Ignoring Shadowed Rules')

    def test_smartCompare_t11_t12(self):
        policy1 = linkdef.FWPolicy('', self.filet11, False)
        link_juniper.jcl_parser(self.filet11, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet12, False)
        link_juniper.jcl_parser(self.filet12, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t11_t12, 'Normal Test')
    '''
    This is a very special case that is better kept separate, because the results differ with and without the
    "ignoreshadow" option.
    Explanation (simplified):
    - ACL1:
        - Rule1
        - Rule2 -> Shadowed by Rule1
    - ACL2:
        - Rule2
    With ignoreshadow FALSE:
        - Rule1 is NOT in ACL2
        - Rule2 is in ACL2
        - The output shows that Rule1 is missing
    With ignoreshadow TRUE:
        - Rule2 is removed because it is shadowed by Rule1
        - Rule1 is NOT in ACL2
        - The output shows Rule1 and Rule2 as missing (Rule1 logically, but also all shadowed rules like Rule2)
    '''
    def test_smartCompare_t11_t12_ignoreshadowed(self):
        policy1 = linkdef.FWPolicy('', self.filet11, False)
        link_juniper.jcl_parser(self.filet11, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet12, False)
        link_juniper.jcl_parser(self.filet12, policy2, False)
        policy1.split_ips()
        policy2.split_ips()
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t11_t12_is, 'Ignoring Shadowed Rules')

    def test_smartCompare_t13_t13(self):
        policy1 = linkdef.FWPolicy('', self.filet13, False)
        link_cisco.acl_parser(self.filet13, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet13, False)
        link_cisco.acl_parser(self.filet13, policy2, False)
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t13_t13, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t13_t13, 'Ignoring Shadowed Rules')

    def test_smartCompare_t14_t15(self):
        policy1 = linkdef.FWPolicy('', self.filet14, False)
        link_cisco.acl_parser(self.filet14, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet15, False)
        link_cisco.acl_parser(self.filet15, policy2, False)
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t14_t15, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t14_t15, 'Ignoring Shadowed Rules')

    def test_smartCompare_t15_t14(self):
        policy1 = linkdef.FWPolicy('', self.filet15, False)
        link_cisco.acl_parser(self.filet15, policy1, False)
        policy2 = linkdef.FWPolicy('', self.filet14, False)
        link_cisco.acl_parser(self.filet14, policy2, False)
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=False, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t15_t14, 'Normal Test')
        smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False, only_different=False, outprint=False, ignore_lines='', ignoredeny=False, ignoreshadowed=True, DEBUG=False)
        self.assertEqual(smartacl_result, self.results_t15_t14, 'Ignoring Shadowed Rules')
def test_smartCompare_t16_t17(self):
policy1 = linkdef.FWPolicy('', self.filet16, False)
link_cisco.acl_parser(self.filet16, policy1, False)
policy2 = linkdef.FWPolicy('', self.filet17, False)
link_cisco.acl_parser(self.filet17, policy2, False)
smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False,only_different=False,outprint=False,ignore_lines='',ignoredeny=False, ignoreshadowed=False, DEBUG=False)
self.assertEqual(smartacl_result, self.results_t16_t17, 'Normal Test')
smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False,only_different=False,outprint=False,ignore_lines='',ignoredeny=False, ignoreshadowed=True, DEBUG=False)
self.assertEqual(smartacl_result, self.results_t16_t17, 'Ignoring Shadowed Rules')
def test_smartCompare_t17_t16(self):
policy1 = linkdef.FWPolicy('', self.filet17, False)
link_cisco.acl_parser(self.filet17, policy1, False)
policy2 = linkdef.FWPolicy('', self.filet16, False)
link_cisco.acl_parser(self.filet16, policy2, False)
smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False,only_different=False,outprint=False,ignore_lines='',ignoredeny=False, ignoreshadowed=False, DEBUG=False)
self.assertEqual(smartacl_result, self.results_t17_t16, 'Normal Test')
smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False,only_different=False,outprint=False,ignore_lines='',ignoredeny=False, ignoreshadowed=True, DEBUG=False)
self.assertEqual(smartacl_result, self.results_t17_t16, 'Ignoring Shadowed Rules')
def test_smartCompare_t18_t19(self):
policy1 = linkdef.FWPolicy('', self.filet18, False)
link_juniper.jcl_parser(self.filet18, policy1, False)
policy2 = linkdef.FWPolicy('', self.filet19, False)
link_juniper.jcl_parser(self.filet19, policy2, False)
policy1.split_ips()
policy2.split_ips()
smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False,only_different=False,outprint=False,ignore_lines='',ignoredeny=False, ignoreshadowed=False, DEBUG=False)
self.assertEqual(smartacl_result, self.results_t18_t19, 'Normal Test')
def test_smartCompare_t19_t18(self):
policy1 = linkdef.FWPolicy('', self.filet19, False)
link_juniper.jcl_parser(self.filet19, policy1, False)
policy2 = linkdef.FWPolicy('', self.filet18, False)
link_juniper.jcl_parser(self.filet18, policy2, False)
policy1.split_ips()
policy2.split_ips()
smartacl_result = smartACL.smartCompare2(policy1, policy2, verbose=False,only_different=False,outprint=False,ignore_lines='',ignoredeny=False, ignoreshadowed=False, DEBUG=False)
self.assertEqual(smartacl_result, self.results_t19_t18, 'Normal Test')
def tearDown(self):
sys.stdout = self.stdout
if __name__ == '__main__':
    unittest.main()


# File: code_export/run.py | repo: EPFLMachineLearningTeamYoor/Project02 | license: MIT
import os
os.system("python 00clean.py")
os.system("python 01classify.py")
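The two `os.system` calls above work, but they discard each script's exit code, so a failure in the first step does not stop the second. A sketch of the same two-step pipeline using `subprocess` (the script names come from run.py; `run_pipeline` is an illustrative helper, not part of the repo):

```python
import subprocess
import sys

PIPELINE = ("00clean.py", "01classify.py")


def run_pipeline(scripts=PIPELINE):
    # check=True raises CalledProcessError when a step exits non-zero,
    # which os.system silently ignores; sys.executable reuses the
    # interpreter that is running this script.
    for script in scripts:
        subprocess.run([sys.executable, script], check=True)


if __name__ == "__main__":
    run_pipeline()
```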

# File: gmplot/__init__.py | repo: Monti03/gmplot | license: MIT
from .google_map_plotter import GoogleMapPlotter

# File: projects/vdk-control-cli/tests/vdk/internal/control/command_groups/job/test_execute.py
# repo: alod83/versatile-data-kit | license: Apache-2.0
# Copyright 2021 VMware, Inc.
# SPDX-License-Identifier: Apache-2.0
import json
import os
from unittest.mock import patch
from click.testing import CliRunner
from py._path.local import LocalPath
from pytest_httpserver.pytest_plugin import PluginHTTPServer
from taurus_datajob_api import DataJobDeployment
from taurus_datajob_api import DataJobExecution
from vdk.internal import test_utils
from vdk.internal.control.command_groups.job.execute import execute
from werkzeug import Response
test_utils.disable_vdk_authentication()
def test_execute(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/deployments/production/executions",
        method="POST",
    ).respond_with_response(
        Response(
            status=200,
            headers=dict(
                Location=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/foo"
            ),
        )
    )
    runner = CliRunner()
    result = runner.invoke(
        execute, ["-n", job_name, "-t", team_name, "--start", "-u", rest_api_url]
    )
    assert result.exit_code == 0, (
        f"result exit code is not 0, result output: {result.output}, "
        f"result.exception: {result.exception}"
    )


def test_cancel(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    execution_id = "test-execution"
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/{execution_id}",
        method="DELETE",
    ).respond_with_response(Response(status=200, headers={}))
    runner = CliRunner()
    result = runner.invoke(
        execute,
        [
            "-n",
            job_name,
            "-t",
            team_name,
            "-i",
            execution_id,
            "--cancel",
            "-u",
            rest_api_url,
        ],
    )
    assert result.exit_code == 0, (
        f"result exit code is not 0, result output: {result.output}, "
        f"result.exception: {result.exception}"
    )


def test_execute_without_url(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    runner = CliRunner()
    result = runner.invoke(execute, ["-n", "job_name", "-t", "team_name", "-u", ""])
    assert (
        result.exit_code == 2
    ), f"result exit code is not 2, result output: {result.output}, exc: {result.exc_info}"
    assert "what" in result.output and "why" in result.output


def test_execute_with_empty_url(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    runner = CliRunner()
    result = runner.invoke(execute, ["-n", "job_name", "-t", "team_name", "-u", ""])
    assert (
        result.exit_code == 2
    ), f"result exit code is not 2, result output: {result.output}, exc: {result.exc_info}"
    assert "what" in result.output and "why" in result.output


def test_execute_start_output_text(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/deployments/production/executions",
        method="POST",
    ).respond_with_response(
        Response(
            status=200,
            headers=dict(
                Location=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/foo"
            ),
        )
    )
    runner = CliRunner()
    result = runner.invoke(
        execute, ["-n", job_name, "-t", team_name, "--start", "-u", rest_api_url]
    )
    assert f"-n {job_name}" in result.output
    assert f"-t {team_name}" in result.output


def test_execute_start_output_json(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/deployments/production/executions",
        method="POST",
    ).respond_with_response(
        Response(
            status=200,
            headers=dict(
                Location=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/foo"
            ),
        )
    )
    runner = CliRunner()
    result = runner.invoke(
        execute,
        ["-n", job_name, "-t", team_name, "--start", "-u", rest_api_url, "-o", "json"],
    )
    json_output = json.loads(result.output)
    assert job_name == json_output.get("job_name")
    assert team_name == json_output.get("team")


def test_execute_with_exception(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    runner = CliRunner()
    result = runner.invoke(
        execute, ["--start", "-n", "job_name", "-t", "team_name", "-u", "localhost"]
    )
    assert (
        result.exit_code == 2
    ), f"result exit code is not 2, result output: {result.output}, exc: {result.exc_info}"
    assert "what" in result.output and "why" in result.output


def test_execute_no_execution_id(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    execution: DataJobExecution = DataJobExecution(
        id="1",
        job_name=job_name,
        logs_url="",
        deployment=DataJobDeployment(),
        start_time="2021-09-24T14:14:03.922Z",
    )
    older_execution = DataJobExecution(
        id="2",
        job_name=job_name,
        logs_url="",
        deployment=DataJobDeployment(),
        start_time="2020-09-24T14:14:03.922Z",
    )
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions",
        method="GET",
    ).respond_with_json(
        [older_execution.to_dict(), execution.to_dict(), older_execution.to_dict()]
    )
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/1/logs",
        method="GET",
    ).respond_with_json({"logs": "We are the logs! We are awesome!"})
    runner = CliRunner()
    result = runner.invoke(
        execute,
        ["-n", job_name, "-t", team_name, "--logs", "-u", rest_api_url],
    )
    test_utils.assert_click_status(result, 0)
    assert result.output.strip() == "We are the logs! We are awesome!".strip()


def test_execute_logs_using_api(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    id = "1"
    execution: DataJobExecution = DataJobExecution(
        id=id, job_name=job_name, logs_url="", deployment=DataJobDeployment()
    )
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/1",
        method="GET",
    ).respond_with_json(execution.to_dict())
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/1/logs",
        method="GET",
    ).respond_with_json({"logs": "We are the logs! We are awesome!"})
    runner = CliRunner()
    result = runner.invoke(
        execute,
        ["-n", job_name, "-t", team_name, "-i", id, "--logs", "-u", rest_api_url],
    )
    test_utils.assert_click_status(result, 0)
    assert result.output.strip() == "We are the logs! We are awesome!".strip()


def test_execute_logs_with_external_log_url(
    httpserver: PluginHTTPServer, tmpdir: LocalPath
):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    id = "1"
    execution: DataJobExecution = DataJobExecution(
        id=id,
        job_name=job_name,
        logs_url="http://external-service-job-logs",
        deployment=DataJobDeployment(),
    )
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/1",
        method="GET",
    ).respond_with_json(execution.to_dict())
    with patch("webbrowser.open") as mock_browser_open:
        mock_browser_open.return_value = False
        runner = CliRunner()
        result = runner.invoke(
            execute,
            ["-n", job_name, "-t", team_name, "-i", id, "--logs", "-u", rest_api_url],
        )
        test_utils.assert_click_status(result, 0)
        mock_browser_open.assert_called_once_with("http://external-service-job-logs")


def test_execute_start_extra_arguments_invalid_json(
    httpserver: PluginHTTPServer, tmpdir: LocalPath
):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/deployments/production/executions",
        method="POST",
    )
    runner = CliRunner()
    result = runner.invoke(
        execute,
        [
            "-n",
            job_name,
            "-t",
            team_name,
            "--start",
            "-u",
            rest_api_url,
            "--arguments",
            '{key1": "value1", "key2": "value2"}',
        ],
    )
    assert (
        result.exit_code == 2
    ), f"Result exit code not 2. result output {result.output}, exc: {result.exc_info}"
    assert "Failed to validate job arguments" in result.output
    assert "what" in result.output and "why" in result.output
    assert "Make sure provided --arguments is a valid JSON string." in result.output


def test_execute_start_extra_arguments(httpserver: PluginHTTPServer, tmpdir: LocalPath):
    rest_api_url = httpserver.url_for("")
    team_name = "test-team"
    job_name = "test-job"
    arguments = '{"key1": "value1", "key2": "value2"}'
    httpserver.expect_request(
        uri=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/deployments/production/executions",
        method="POST",
        json=json.loads(
            '{"args": {"key1": "value1", "key2": "value2"}, "started_by": "vdk-control-cli"}'
        ),
    ).respond_with_response(
        Response(
            status=200,
            headers=dict(
                Location=f"/data-jobs/for-team/{team_name}/jobs/{job_name}/executions/foo"
            ),
        )
    )
    runner = CliRunner()
    result = runner.invoke(
        execute,
        [
            "-n",
            job_name,
            "-t",
            team_name,
            "--start",
            "-u",
            rest_api_url,
            "--arguments",
            arguments,
        ],
    )
    assert (
        result.exit_code == 0
    ), f"Result exit code not 0. result output {result.output}, exc: {result.exc_info}"

# File: main/admin.py | repo: drhoet/photo-workflow | license: MIT
from django.contrib import admin
from .models import Author, Directory, Image
admin.site.register(Directory)
admin.site.register(Image)
admin.site.register(Author)

# File: maps/lotus_island/__init__.py | repo: 56kyle/bloons_auto | license: MIT
from .lotus_island import LotusIsland

# File: tests/test_model_decisions.py | repo: djmhunt/TTpy | license: MIT
# -*- coding: utf-8 -*-
"""
:Author: Dominic
"""
import collections
import model.decision.binary as binary
import model.decision.discrete as discrete
import numpy as np
#%% For binary.single
class TestClass_decSingle:

    def test_S_normal(self):
        np.random.seed(100)
        d = binary.single()
        result = d(0.23)
        correct_result = (0, collections.OrderedDict([(0, 0.77), (1, 0.23)]))
        assert result == correct_result

    def test_S_normal_2(self):
        last_action = 0
        np.random.seed(100)
        d = binary.single()
        result = d(0.23, last_action)
        correct_result = (0, collections.OrderedDict([(0, 0.77), (1, 0.23)]))
        assert result == correct_result

    def test_S_normal_3(self):
        last_action = 0
        np.random.seed(104)
        d = binary.single()
        result = d(0.23, last_action)
        correct_result = (1, collections.OrderedDict([(0, 0.77), (1, 0.23)]))
        assert result == correct_result

    def test_S_valid_1(self):
        np.random.seed(100)
        d = binary.single()
        result = d(0.23, trial_responses=[1])
        correct_result = (1, collections.OrderedDict([(0, 0), (1, 1)]))
        assert result == correct_result

    def test_S_valid_2(self):
        np.random.seed(100)
        d = binary.single()
        result = d(0.23, trial_responses=[])
        correct_result = (None, collections.OrderedDict([(0, 0.77), (1, 0.23)]))
        assert result == correct_result
#%% For discrete.weightProb
class TestClass_decWeightProb:

    def test_WP_normal(self):
        np.random.seed(100)
        d = discrete.weightProb(task_responses=[1, 2, 3])
        result = d([0.8, 0.5, 0.7])
        correct_result = (2, collections.OrderedDict([(1, 0.4), (2, 0.25), (3, 0.35)]))
        assert result == correct_result

    def test_WP_normal_2(self):
        np.random.seed(101)
        d = discrete.weightProb(task_responses=[1, 2, 3])
        result = d([0.2, 0.3, 0.5])
        correct_result = (3, collections.OrderedDict([(1, 0.2), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_WP_valid(self):
        np.random.seed(100)
        d = discrete.weightProb(task_responses=[1, 2, 3])
        result = d([0.2, 0.3, 0.5], trial_responses=[1, 2])
        correct_result = (2, collections.OrderedDict([(1, 0.4), (2, 0.6), (3, 0)]))
        assert result == correct_result

    def test_WP_valid_2(self):
        np.random.seed(100)
        d = discrete.weightProb(task_responses=[1, 2, 3])
        result = d([0.2, 0.3, 0.5], trial_responses=[1])
        correct_result = (1, collections.OrderedDict([(1, 1), (2, 0), (3, 0)]))
        assert result == correct_result

    def test_WP_no_valid(self):
        np.random.seed(100)
        d = discrete.weightProb(task_responses=[1, 2, 3])
        result = d([0.2, 0.3, 0.5], trial_responses=[])
        correct_result = (None, collections.OrderedDict([(1, 0.2), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_WP_string(self):
        np.random.seed(100)
        d = discrete.weightProb(["A", "B", "C"])
        result = d([0.2, 0.3, 0.5], trial_responses=["A", "B"])
        correct_result = ('B', collections.OrderedDict([('A', 0.4), ('B', 0.6), ('C', 0)]))
        assert result == correct_result

    def test_WP_err(self):
        np.random.seed(100)
        d = discrete.weightProb(task_responses=[1, 2, 3])
        result = d([0.6, 0.3, 0.5], trial_responses=[0, 3])
        correct_result = (3, collections.OrderedDict([(1, 0), (2, 0), (3, 1)]))
        assert result == correct_result

    def test_WP_err_2(self):
        np.random.seed(100)
        d = discrete.weightProb(task_responses=[1, 2, 3])
        result = d([0.6, 0.3, 0.5], trial_responses=[1, 1])
        correct_result = (1, collections.OrderedDict([(1, 1), (2, 0), (3, 0)]))
        assert result == correct_result
#%% For discrete.maxProb
class TestClass_decMaxProb:

    def test_MP_normal(self):
        np.random.seed(100)
        d = discrete.maxProb(task_responses=[1, 2, 3])
        result = d([0.6, 0.3, 0.5])
        correct_result = (1, collections.OrderedDict([(1, 0.6), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_MP_normal_2(self):
        np.random.seed(101)
        d = discrete.maxProb(task_responses=[1, 2, 3])
        result = d([0.5, 0.3, 0.5])
        correct_result = (3, collections.OrderedDict([(1, 0.5), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_MP_valid(self):
        np.random.seed(100)
        d = discrete.maxProb(task_responses=[1, 2, 3])
        result = d([0.2, 0.3, 0.5], trial_responses=[1, 2])
        correct_result = (2, collections.OrderedDict([(1, 0.2), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_MP_valid_2(self):
        np.random.seed(100)
        d = discrete.maxProb(task_responses=[1, 2, 3])
        result = d([0.2, 0.3, 0.5], trial_responses=[1])
        correct_result = (1, collections.OrderedDict([(1, 0.2), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_MP_no_valid(self):
        np.random.seed(100)
        d = discrete.maxProb(task_responses=[1, 2, 3])
        result = d([0.2, 0.3, 0.5], trial_responses=[])
        correct_result = (None, collections.OrderedDict([(1, 0.2), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_MP_string(self):
        np.random.seed(100)
        d = discrete.maxProb(["A", "B", "C"])
        result = d([0.2, 0.3, 0.5], trial_responses=["A", "B"])
        correct_result = ('B', collections.OrderedDict([('A', 0.2), ('B', 0.3), ('C', 0.5)]))
        assert result == correct_result

    def test_MP_err(self):
        np.random.seed(100)
        d = discrete.maxProb(task_responses=[1, 2, 3])
        result = d([0.6, 0.3, 0.5], trial_responses=[0, 3])
        correct_result = (3, collections.OrderedDict([(1, 0.6), (2, 0.3), (3, 0.5)]))
        assert result == correct_result

    def test_MP_err_2(self):
        np.random.seed(100)
        d = discrete.maxProb(task_responses=[1, 2, 3])
        result = d([0.6, 0.3, 0.5], trial_responses=[1, 1])
        correct_result = (1, collections.OrderedDict([(1, 0.6), (2, 0.3), (3, 0.5)]))
        assert result == correct_result
#%% For discrete.probThresh
class TestClass_decProbThresh:

    def test_PT_normal(self):
        np.random.seed(100)
        d = discrete.probThresh(task_responses=[0, 1, 2, 3], eta=0.8)
        correct_result = (1, collections.OrderedDict([(0, 0.2), (1, 0.8), (2, 0.3), (3, 0.5)]))
        result = d([0.2, 0.8, 0.3, 0.5])
        assert result == correct_result

    def test_PT_normal_2(self):
        np.random.seed(100)
        d = discrete.probThresh(task_responses=[0, 1, 2, 3], eta=0.8)
        correct_result = (0, collections.OrderedDict([(0, 0.2), (1, 0.5), (2, 0.3), (3, 0.5)]))
        result = d([0.2, 0.5, 0.3, 0.5])
        assert result == correct_result

    def test_PT_normal_3(self):
        np.random.seed(101)
        d = discrete.probThresh(task_responses=[0, 1, 2, 3], eta=0.8)
        correct_result = (3, collections.OrderedDict([(0, 0.2), (1, 0.5), (2, 0.3), (3, 0.5)]))
        result = d([0.2, 0.5, 0.3, 0.5])
        assert result == correct_result

    def test_PT_valid(self):
        np.random.seed(100)
        d = discrete.probThresh(task_responses=[0, 1, 2, 3], eta=0.8)
        correct_result = (0, collections.OrderedDict([(0, 0.2), (1, 0.8), (2, 0.3), (3, 0.5)]))
        result = d([0.2, 0.8, 0.3, 0.5], trial_responses=[0, 2])
        assert result == correct_result

    def test_PT_no_valid(self):
        np.random.seed(100)
        d = discrete.probThresh(task_responses=[0, 1, 2, 3], eta=0.8)
        correct_result = (None, collections.OrderedDict([(0, 0.2), (1, 0.8), (2, 0.3), (3, 0.5)]))
        result = d([0.2, 0.8, 0.3, 0.5], trial_responses=[])
        assert result == correct_result

    def test_PT_string(self):
        np.random.seed(100)
        d = discrete.probThresh(["A", "B", "C"])
        correct_result = ('A', collections.OrderedDict([('A', 0.2), ('B', 0.3), ('C', 0.8)]))
        result = d([0.2, 0.3, 0.8], trial_responses=["A", "B"])
        assert result == correct_result

    def test_PT_err(self):
        np.random.seed(100)
        d = discrete.probThresh(["A", "B", "C"])
        correct_result = ('A', collections.OrderedDict([('A', 0.2), ('B', 0.3), ('C', 0.8)]))
        result = d([0.2, 0.3, 0.8], trial_responses=["A", "D"])
        assert result == correct_result
#%% For discrete._validProbabilities
class TestClass_validProbabilities:

    def test_VP_reduced_int(self):
        correct_result = (np.array([0.1, 0.7]), np.array([2, 3]))
        result = discrete._validProbabilities([0.2, 0.1, 0.7], [1, 2, 3], [2, 3])
        assert (result[0] == correct_result[0]).all()
        assert (result[1] == correct_result[1]).all()

    def test_VP_reduced_str(self):
        correct_result = (np.array([0.1, 0.7]), np.array(['B', 'C']))
        result = discrete._validProbabilities([0.2, 0.1, 0.7], ["A", "B", "C"], ["B", "C"])
        assert (result[0] == correct_result[0]).all()
        assert (result[1] == correct_result[1]).all()

    def test_VP_normal(self):
        correct_result = (np.array([0.2, 0.1, 0.7]), np.array(["A", "B", "C"]))
        result = discrete._validProbabilities([0.2, 0.1, 0.7], ["A", "B", "C"], ["A", "B", "C"])
        assert (result[0] == correct_result[0]).all()
        assert (result[1] == correct_result[1]).all()

    def test_VP_err(self):
        correct_result = (np.array([0.2]), np.array(['A']))
        result = discrete._validProbabilities([0.2, 0.1, 0.7], ["A", "B", "C"], ["A", "D"])
        assert (result[0] == correct_result[0]).all()
        assert (result[1] == correct_result[1]).all()

    def test_VP_err_2(self):
        correct_result = (np.array([0.2]), np.array(['A']))
        result = discrete._validProbabilities([0.2, 0.1, 0.7], ["A", "B", "C"], ["A", "A"])
        assert (result[0] == correct_result[0]).all()
        assert (result[1] == correct_result[1]).all()


# File: interactions/__init__.py | repo: Jalancar/discord-interactions | license: MIT
"""
(interactions)
discord-interactions
Easy, simple, scalable and modular: a Python API wrapper for interactions.
To see the documentation, please head over to the link here:
https://discord-interactions.rtfd.io/en/latest for ``stable`` builds.
https://discord-interactions.rtfd.io/en/unstable for ``unstable`` builds.
(c) 2021 goverfl0w.
Co-authored by DeltaXW.
"""
from .api.models.channel import * # noqa: F401 F403
from .api.models.flags import * # noqa: F401 F403
from .api.models.guild import * # noqa: F401 F403
from .api.models.gw import * # noqa: F401 F403
from .api.models.member import * # noqa: F401 F403
from .api.models.message import * # noqa: F401 F403
from .api.models.misc import * # noqa: F401 F403
from .api.models.presence import * # noqa: F401 F403
from .api.models.role import * # noqa: F401 F403
from .api.models.team import * # noqa: F401 F403
from .api.models.user import * # noqa: F401 F403
from .base import * # noqa: F401 F403
from .client import * # noqa: F401 F403
from .context import * # noqa: F401 F403
from .decor import * # noqa: F401 F403
from .enums import * # noqa: F401 F403
from .models.command import * # noqa: F401 F403
from .models.component import * # noqa: F401 F403
from .models.misc import * # noqa: F401 F403

# File: tests/conftest.py | repo: rhanka/addok | license: WTFPL
def pytest_configure():
    from addok.config import config as addok_config
    addok_config.SYNONYMS_PATH = 'tests/synonyms.txt'

# File: camp/Core/__init__.py | repo: blakezim/CAMP | license: MIT
from .StructuredGridClass import *
from .TriangleMeshClass import *
from .Display import * | 30 | 34 | 0.811111 | 9 | 90 | 8.111111 | 0.555556 | 0.273973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122222 | 90 | 3 | 35 | 30 | 0.924051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
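These wildcard re-exports pull every public name from each submodule into `camp.Core`. A quick, self-contained illustration of the rule that governs `import *` (the `toy` module below is synthetic):

```python
import sys
import types

# Build a throwaway module: one public name, one underscore-private name,
# and an __all__ that exports only the former.
toy = types.ModuleType("toy")
toy.public = 1
toy._private = 2
toy.__all__ = ["public"]
sys.modules["toy"] = toy

ns = {}
exec("from toy import *", ns)
names = sorted(k for k in ns if not k.startswith("__"))
print(names)  # ['public'] -- __all__ controls what the star import exposes
```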
ff1ff5eb8a3d4b3cfe2f72163ab368db619cda37 | 195 | py | Python | pasa/dict/__init__.py | sonoisa/pasa | 90dbcd72890bfe390d2a58f2a4cdb79d42a9f9f8 | [
"MIT"
] | 5 | 2018-07-23T05:45:24.000Z | 2021-04-04T14:59:15.000Z | pasa/dict/__init__.py | sonoisa/pasa | 90dbcd72890bfe390d2a58f2a4cdb79d42a9f9f8 | [
"MIT"
] | 2 | 2019-01-28T04:33:12.000Z | 2019-11-20T14:30:27.000Z | pasa/dict/__init__.py | sonoisa/pasa | 90dbcd72890bfe390d2a58f2a4cdb79d42a9f9f8 | [
"MIT"
] | 1 | 2020-02-07T08:09:12.000Z | 2020-02-07T08:09:12.000Z | # -*- coding: utf-8 -*-
from . import category
from . import cchart
from . import filter
from . import frame
from . import idiom
from . import compound_predicate
from .load_json import LoadJson
| 19.5 | 32 | 0.74359 | 27 | 195 | 5.296296 | 0.555556 | 0.41958 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006211 | 0.174359 | 195 | 9 | 33 | 21.666667 | 0.881988 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
209cc331543d15ad5bdae8f487f601e323119dd4 | 516 | py | Python | cbpro/__init__.py | Cattes/coinbasepro-python | 0a9c9ba2188f6bfa08a842a666ab12fe1cc02276 | [
"MIT"
] | null | null | null | cbpro/__init__.py | Cattes/coinbasepro-python | 0a9c9ba2188f6bfa08a842a666ab12fe1cc02276 | [
"MIT"
] | null | null | null | cbpro/__init__.py | Cattes/coinbasepro-python | 0a9c9ba2188f6bfa08a842a666ab12fe1cc02276 | [
"MIT"
] | null | null | null | from cbpro.auth import Auth
from cbpro.messenger import Messenger
from cbpro.public import PublicClient
from cbpro.public import public_client
from cbpro.private import PrivateClient
from cbpro.private import private_client
from cbpro.models import PublicModel
from cbpro.models import PrivateModel
from cbpro.websocket import get_message
from cbpro.websocket import WebsocketHeader
from cbpro.websocket import WebsocketStream
from cbpro.websocket import WebsocketEvent
from cbpro.websocket import WebsocketClient | 32.25 | 43 | 0.870155 | 68 | 516 | 6.558824 | 0.294118 | 0.262332 | 0.201794 | 0.269058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104651 | 516 | 16 | 44 | 32.25 | 0.965368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
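This `__init__.py` is a facade: it flattens names from several submodules into the top-level `cbpro` namespace so callers never have to import `cbpro.websocket` and friends directly. The mechanism, demonstrated with a throwaway package written to a temp directory (all names here are illustrative, not cbpro's real behavior):

```python
import os
import sys
import tempfile

# Write a two-file package: a submodule defining a class, and an
# __init__ that re-exports it at the package top level.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "minipkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "auth.py"), "w") as f:
    f.write("class Auth:\n    pass\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from minipkg.auth import Auth\n")

sys.path.insert(0, root)
import minipkg  # noqa: E402

# The class is defined in minipkg.auth but reachable as minipkg.Auth.
print(minipkg.Auth.__module__)  # minipkg.auth
```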
20b43209e8d703bc2cd7dc94334a85f61041e9d0 | 9,938 | py | Python | tests/test_decompressor_fuzzing.py | thewtex/python-zstandard | 51d4c71ab6ca6aa915e4caf3f51a73fa6fc3b43a | [
"BSD-3-Clause"
] | null | null | null | tests/test_decompressor_fuzzing.py | thewtex/python-zstandard | 51d4c71ab6ca6aa915e4caf3f51a73fa6fc3b43a | [
"BSD-3-Clause"
] | null | null | null | tests/test_decompressor_fuzzing.py | thewtex/python-zstandard | 51d4c71ab6ca6aa915e4caf3f51a73fa6fc3b43a | [
"BSD-3-Clause"
] | null | null | null | import io
import os
import unittest

try:
    import hypothesis
    import hypothesis.strategies as strategies
except ImportError:
    raise unittest.SkipTest('hypothesis not available')

import zstandard as zstd

from . common import (
    make_cffi,
    random_input_data,
)


@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
@make_cffi
class TestDecompressor_stream_reader_fuzzing(unittest.TestCase):
    @hypothesis.settings(
        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                      level=strategies.integers(min_value=1, max_value=5),
                      source_read_size=strategies.integers(1, 16384),
                      read_sizes=strategies.data())
    def test_stream_source_read_variance(self, original, level,
                                         source_read_size, read_sizes):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        dctx = zstd.ZstdDecompressor()
        source = io.BytesIO(frame)

        chunks = []
        with dctx.stream_reader(source, read_size=source_read_size) as reader:
            while True:
                read_size = read_sizes.draw(strategies.integers(1, 16384))
                chunk = reader.read(read_size)
                if not chunk:
                    break

                chunks.append(chunk)

        self.assertEqual(b''.join(chunks), original)

    @hypothesis.settings(
        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                      level=strategies.integers(min_value=1, max_value=5),
                      source_read_size=strategies.integers(1, 16384),
                      read_sizes=strategies.data())
    def test_buffer_source_read_variance(self, original, level,
                                         source_read_size, read_sizes):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        dctx = zstd.ZstdDecompressor()

        chunks = []
        with dctx.stream_reader(frame, read_size=source_read_size) as reader:
            while True:
                read_size = read_sizes.draw(strategies.integers(1, 16384))
                chunk = reader.read(read_size)
                if not chunk:
                    break

                chunks.append(chunk)

        self.assertEqual(b''.join(chunks), original)

    @hypothesis.settings(
        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
    @hypothesis.given(
        original=strategies.sampled_from(random_input_data()),
        level=strategies.integers(min_value=1, max_value=5),
        source_read_size=strategies.integers(1, 16384),
        seek_amounts=strategies.data(),
        read_sizes=strategies.data())
    def test_relative_seeks(self, original, level, source_read_size,
                            seek_amounts, read_sizes):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        dctx = zstd.ZstdDecompressor()

        with dctx.stream_reader(frame, read_size=source_read_size) as reader:
            while True:
                amount = seek_amounts.draw(strategies.integers(0, 16384))
                reader.seek(amount, os.SEEK_CUR)

                offset = reader.tell()
                read_amount = read_sizes.draw(strategies.integers(1, 16384))
                chunk = reader.read(read_amount)

                if not chunk:
                    break

                self.assertEqual(original[offset:offset + len(chunk)], chunk)


@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
@make_cffi
class TestDecompressor_write_to_fuzzing(unittest.TestCase):
    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                      level=strategies.integers(min_value=1, max_value=5),
                      write_size=strategies.integers(min_value=1, max_value=8192),
                      input_sizes=strategies.data())
    def test_write_size_variance(self, original, level, write_size, input_sizes):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        dctx = zstd.ZstdDecompressor()
        source = io.BytesIO(frame)
        dest = io.BytesIO()

        with dctx.write_to(dest, write_size=write_size) as decompressor:
            while True:
                input_size = input_sizes.draw(strategies.integers(1, 4096))
                chunk = source.read(input_size)
                if not chunk:
                    break

                decompressor.write(chunk)

        self.assertEqual(dest.getvalue(), original)


@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
@make_cffi
class TestDecompressor_copy_stream_fuzzing(unittest.TestCase):
    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                      level=strategies.integers(min_value=1, max_value=5),
                      read_size=strategies.integers(min_value=1, max_value=8192),
                      write_size=strategies.integers(min_value=1, max_value=8192))
    def test_read_write_size_variance(self, original, level, read_size,
                                      write_size):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        source = io.BytesIO(frame)
        dest = io.BytesIO()

        dctx = zstd.ZstdDecompressor()
        dctx.copy_stream(source, dest, read_size=read_size,
                         write_size=write_size)

        self.assertEqual(dest.getvalue(), original)


@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
@make_cffi
class TestDecompressor_decompressobj_fuzzing(unittest.TestCase):
    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                      level=strategies.integers(min_value=1, max_value=5),
                      chunk_sizes=strategies.data())
    def test_random_input_sizes(self, original, level, chunk_sizes):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        source = io.BytesIO(frame)

        dctx = zstd.ZstdDecompressor()
        dobj = dctx.decompressobj()

        chunks = []
        while True:
            chunk_size = chunk_sizes.draw(strategies.integers(1, 4096))
            chunk = source.read(chunk_size)
            if not chunk:
                break

            chunks.append(dobj.decompress(chunk))

        self.assertEqual(b''.join(chunks), original)

    @hypothesis.given(
        original=strategies.sampled_from(random_input_data()),
        level=strategies.integers(min_value=1, max_value=5),
        write_size=strategies.integers(
            min_value=1,
            max_value=4 * zstd.DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE),
        chunk_sizes=strategies.data())
    def test_random_output_sizes(self, original, level, write_size, chunk_sizes):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        source = io.BytesIO(frame)

        dctx = zstd.ZstdDecompressor()
        dobj = dctx.decompressobj(write_size=write_size)

        chunks = []
        while True:
            chunk_size = chunk_sizes.draw(strategies.integers(1, 4096))
            chunk = source.read(chunk_size)
            if not chunk:
                break

            chunks.append(dobj.decompress(chunk))

        self.assertEqual(b''.join(chunks), original)


@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
@make_cffi
class TestDecompressor_read_to_iter_fuzzing(unittest.TestCase):
    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                      level=strategies.integers(min_value=1, max_value=5),
                      read_size=strategies.integers(min_value=1, max_value=4096),
                      write_size=strategies.integers(min_value=1, max_value=4096))
    def test_read_write_size_variance(self, original, level, read_size,
                                      write_size):
        cctx = zstd.ZstdCompressor(level=level)
        frame = cctx.compress(original)

        source = io.BytesIO(frame)

        dctx = zstd.ZstdDecompressor()
        chunks = list(dctx.read_to_iter(source, read_size=read_size,
                                        write_size=write_size))

        self.assertEqual(b''.join(chunks), original)


@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
class TestDecompressor_multi_decompress_to_buffer_fuzzing(unittest.TestCase):
    @hypothesis.given(original=strategies.lists(
                          strategies.sampled_from(random_input_data()),
                          min_size=1, max_size=1024),
                      threads=strategies.integers(min_value=1, max_value=8),
                      use_dict=strategies.booleans())
    def test_data_equivalence(self, original, threads, use_dict):
        kwargs = {}
        if use_dict:
            kwargs['dict_data'] = zstd.ZstdCompressionDict(original[0])

        cctx = zstd.ZstdCompressor(level=1,
                                   write_content_size=True,
                                   write_checksum=True,
                                   **kwargs)

        frames_buffer = cctx.multi_compress_to_buffer(original, threads=-1)

        dctx = zstd.ZstdDecompressor(**kwargs)

        result = dctx.multi_decompress_to_buffer(frames_buffer)

        self.assertEqual(len(result), len(original))

        for i, frame in enumerate(result):
            self.assertEqual(frame.tobytes(), original[i])

        frames_list = [f.tobytes() for f in frames_buffer]
        result = dctx.multi_decompress_to_buffer(frames_list)

        self.assertEqual(len(result), len(original))

        for i, frame in enumerate(result):
            self.assertEqual(frame.tobytes(), original[i])
| 39.280632 | 111 | 0.639666 | 1,113 | 9,938 | 5.47619 | 0.115903 | 0.073831 | 0.051682 | 0.063987 | 0.801805 | 0.783429 | 0.761936 | 0.717801 | 0.717801 | 0.700902 | 0 | 0.015048 | 0.26444 | 9,938 | 252 | 112 | 39.436508 | 0.818741 | 0 | 0 | 0.649485 | 0 | 0 | 0.026263 | 0 | 0 | 0 | 0 | 0 | 0.061856 | 1 | 0.046392 | false | 0 | 0.041237 | 0 | 0.118557 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
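Every fuzz test above checks one invariant: however the compressed frame is sliced into read or write chunks, the reassembled output equals the original input. The same chunked round-trip check can be run anywhere using the stdlib's zlib in place of zstd (the data and chunk sizes below are arbitrary):

```python
import io
import os
import random
import zlib

def chunked_roundtrip(original, read_sizes):
    """Decompress a zlib stream in arbitrary-sized chunks, mirroring the
    fuzz tests' structure, and report whether the output matches."""
    frame = zlib.compress(original, 6)
    dobj = zlib.decompressobj()
    source = io.BytesIO(frame)
    chunks = []
    for size in read_sizes:
        data = source.read(size)
        if not data:
            break
        chunks.append(dobj.decompress(data))
    chunks.append(dobj.decompress(source.read()))  # drain any remainder
    chunks.append(dobj.flush())
    return b"".join(chunks) == original

original = os.urandom(4096) + b"compressible " * 512
sizes = [random.randint(1, 257) for _ in range(10_000)]
print(chunked_roundtrip(original, sizes))  # True
```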
20e678b65518d51c75c0bea4ebbfc07e9a58fa53 | 511 | py | Python | relfs/relfs/error.py | matus-chochlik/various | 2a9f5eddd964213f7d1e1ce8328e2e0b2a8e998b | [
"MIT"
] | 1 | 2020-10-25T12:28:50.000Z | 2020-10-25T12:28:50.000Z | relfs/relfs/error.py | matus-chochlik/various | 2a9f5eddd964213f7d1e1ce8328e2e0b2a8e998b | [
"MIT"
] | null | null | null | relfs/relfs/error.py | matus-chochlik/various | 2a9f5eddd964213f7d1e1ce8328e2e0b2a8e998b | [
"MIT"
] | null | null | null | # coding=utf-8
#------------------------------------------------------------------------------#
from __future__ import print_function
import sys
#------------------------------------------------------------------------------#
class RelFsError(Exception):
    pass
#------------------------------------------------------------------------------#
def print_error(error):
    print("relfs error: %s" % (str(error)), file=sys.stderr)
#------------------------------------------------------------------------------#
| 42.583333 | 80 | 0.268102 | 27 | 511 | 4.851852 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002083 | 0.060665 | 511 | 11 | 81 | 46.454545 | 0.270833 | 0.634051 | 0 | 0 | 0 | 0 | 0.084746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.333333 | 0 | 0.666667 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 6 |
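How the two definitions are meant to combine: application code raises `RelFsError` for expected failures, and the top level formats it for the user via `print_error`. A minimal sketch (the error message here is made up):

```python
import sys

class RelFsError(Exception):
    pass

def print_error(error):
    # Errors go to stderr with a consistent "relfs error:" prefix.
    print("relfs error: %s" % (str(error)), file=sys.stderr)

try:
    raise RelFsError("repository not initialized")
except RelFsError as err:
    print_error(err)  # writes "relfs error: repository not initialized"
```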
45ac294f52215a510ce54899c0cff4967a472747 | 30 | py | Python | BHT_ARIMA/__init__.py | ridwanfathin/BHT-ARIMA | 65df2af999cb13e15e39c4729638d31eb553c9c2 | [
"MIT"
] | 74 | 2020-02-25T13:28:47.000Z | 2022-03-29T09:10:41.000Z | BHT_ARIMA/__init__.py | tongnie/BHT-ARIMA | d88c7cedaa9c60b317d501eb595ec6f6ee72dced | [
"MIT"
] | 6 | 2020-02-27T20:04:58.000Z | 2021-12-04T08:58:01.000Z | BHT_ARIMA/__init__.py | tongnie/BHT-ARIMA | d88c7cedaa9c60b317d501eb595ec6f6ee72dced | [
"MIT"
] | 29 | 2020-03-09T03:14:14.000Z | 2022-03-29T09:09:21.000Z | from .BHTARIMA import BHTARIMA | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
affe65db0edb95b01441f0d328b87045db9c06de | 95 | py | Python | fcmeans/__init__.py | agrande-analog/fuzzy-c-means | f005223362a978ea183abc7cc4dd91f2e299f34a | [
"MIT"
] | null | null | null | fcmeans/__init__.py | agrande-analog/fuzzy-c-means | f005223362a978ea183abc7cc4dd91f2e299f34a | [
"MIT"
] | null | null | null | fcmeans/__init__.py | agrande-analog/fuzzy-c-means | f005223362a978ea183abc7cc4dd91f2e299f34a | [
"MIT"
] | null | null | null | """fuzzy-c-means - A simple implementation of Fuzzy C-means algorithm."""
from .fcm import FCM
| 31.666667 | 73 | 0.736842 | 15 | 95 | 4.666667 | 0.733333 | 0.171429 | 0.314286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136842 | 95 | 2 | 74 | 47.5 | 0.853659 | 0.705263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
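The `FCM` class re-exported here implements fuzzy c-means: like k-means, but each point carries a fractional membership weight for every cluster. A minimal NumPy sketch of the update loop (this is the textbook algorithm, not the package's actual code):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=50, seed=0):
    """Alternate centroid and membership updates (fuzzifier m > 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        w = u ** m                               # fuzzified weights
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        u = 1.0 / np.maximum(dist, 1e-12) ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)        # renormalize memberships
    return centers, u

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, u = fuzzy_c_means(X, n_clusters=2)  # centers near x=0 and x=5
```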
b33e6bdebf0f161a4e73a547d7ce5d9d2ae121f7 | 44 | py | Python | scripts/pipeline/try.py | ahsanbarkati/DRQA | d03dbd3ee12e80594e47f3003e6576e86d037f81 | [
"BSD-3-Clause"
] | null | null | null | scripts/pipeline/try.py | ahsanbarkati/DRQA | d03dbd3ee12e80594e47f3003e6576e86d037f81 | [
"BSD-3-Clause"
] | null | null | null | scripts/pipeline/try.py | ahsanbarkati/DRQA | d03dbd3ee12e80594e47f3003e6576e86d037f81 | [
"BSD-3-Clause"
] | null | null | null | import images
print(images.images("apple"))
| 14.666667 | 29 | 0.772727 | 6 | 44 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 2 | 30 | 22 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
2fced5b34fc78bcda8f8146448f6f22365d26f60 | 2,889 | py | Python | src/tests/services/metrics/test_console.py | cicadatesting/cicada-distributed | cb9caa4107fd5da30e508f34e6e11d0f8f58c142 | [
"Apache-2.0"
] | 6 | 2021-07-12T20:53:13.000Z | 2022-01-14T19:34:25.000Z | src/tests/services/metrics/test_console.py | cicadatesting/cicada-distributed | cb9caa4107fd5da30e508f34e6e11d0f8f58c142 | [
"Apache-2.0"
] | 9 | 2021-04-24T04:20:12.000Z | 2022-03-22T02:14:17.000Z | src/tests/services/metrics/test_console.py | cicadatesting/cicada-distributed | cb9caa4107fd5da30e508f34e6e11d0f8f58c142 | [
"Apache-2.0"
] | null | null | null | from unittest.mock import patch
from cicadad.metrics import console
def sample_collector(latest_results):
    return [
        float(result.output) for result in latest_results
        if result.exception is None
    ]


@patch("cicadad.services.datastore.get_metric_statistics")
def test_console_stats(metrics_mock):
    metrics_mock.return_value = {
        "min": 1.23456,
        "median": 1.23456,
        "max": 1.23456,
        "average": 1.23456,
        "len": 1,
    }

    console_stats = console.console_stats()
    metrics_string = console_stats("foo", "bar")

    assert (
        metrics_string
        == "Min: 1.235, Median: 1.235, Average: 1.235, Max: 1.235, Len: 1"
    ), "Metrics string not equal to expected"


@patch("cicadad.services.datastore.get_metric_statistics")
def test_console_stats_none(metrics_mock):
    metrics_mock.return_value = None

    console_stats = console.console_stats()
    metrics_string = console_stats("foo", "bar")

    assert metrics_string is None, "Metrics string not equal to expected"


@patch("cicadad.services.datastore.get_metric_total")
def test_console_count(metrics_mock):
    metrics_mock.return_value = 60

    console_count = console.console_count()
    metrics_string = console_count("foo", "bar")

    assert metrics_string == "60", "Metrics string not equal to expected"


@patch("cicadad.services.datastore.get_metric_total")
def test_console_count_none(metrics_mock):
    metrics_mock.return_value = None

    console_count = console.console_count()
    metrics_string = console_count("foo", "bar")

    assert metrics_string is None, "Metrics string not equal to expected"


@patch("cicadad.services.datastore.get_last_metric")
def test_console_latest(metrics_mock):
    metrics_mock.return_value = 1.2345

    console_latest = console.console_latest()
    metrics_string = console_latest("foo", "bar")

    assert metrics_string == "1.234", "Metrics string not equal to expected"


@patch("cicadad.services.datastore.get_last_metric")
def test_console_latest_none(metrics_mock):
    metrics_mock.return_value = None

    console_latest = console.console_latest()
    metrics_string = console_latest("foo", "bar")

    assert metrics_string is None, "Metrics string not equal to expected"


@patch("cicadad.services.datastore.get_metric_rate")
def test_console_percent(metrics_mock):
    metrics_mock.return_value = 1.2345

    console_percent = console.console_percent(1)
    metrics_string = console_percent("foo", "bar")

    assert metrics_string == "1.234", "Metrics string not equal to expected"


@patch("cicadad.services.datastore.get_metric_rate")
def test_console_percent_none(metrics_mock):
    metrics_mock.return_value = None

    console_percent = console.console_percent(1)
    metrics_string = console_percent("foo", "bar")

    assert metrics_string is None, "Metrics string not equal to expected"
| 27 | 85 | 0.731741 | 381 | 2,889 | 5.278215 | 0.144357 | 0.155147 | 0.079562 | 0.115365 | 0.876181 | 0.876181 | 0.843362 | 0.843362 | 0.843362 | 0.724018 | 0 | 0.027386 | 0.165801 | 2,889 | 106 | 86 | 27.254717 | 0.807054 | 0 | 0 | 0.571429 | 0 | 0.015873 | 0.270336 | 0.121149 | 0 | 0 | 0 | 0 | 0.126984 | 1 | 0.142857 | false | 0 | 0.031746 | 0.015873 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2fdfb4c931e0a5abf9c4fd2206cab5adb4db2618 | 68 | py | Python | jwtornadodemo/account/cache.py | jaggerwang/jw-pyserver | 80d621e5fe5474c3ee38b78395778c59543916cf | [
"MIT"
] | 10 | 2019-03-07T02:11:17.000Z | 2021-08-24T06:51:13.000Z | jwtornadodemo/account/cache.py | jaggerwang/jw-pyserver | 80d621e5fe5474c3ee38b78395778c59543916cf | [
"MIT"
] | 1 | 2021-06-01T21:50:48.000Z | 2021-06-01T21:50:48.000Z | jwtornadodemo/account/cache.py | jaggerwang/jw-pyserver | 80d621e5fe5474c3ee38b78395778c59543916cf | [
"MIT"
] | 3 | 2019-03-07T02:11:18.000Z | 2020-06-22T07:13:02.000Z | from ..common import cache
class UserCache(cache.Cache):
    pass
| 11.333333 | 29 | 0.720588 | 9 | 68 | 5.444444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191176 | 68 | 5 | 30 | 13.6 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6417cdfcc4d25762f0bddd3d8e12b5d04ce98023 | 29 | py | Python | libsaas/services/compete/__init__.py | MidtownFellowship/libsaas | 541bb731b996b08ede1d91a235cb82895765c38a | [
"MIT"
] | 155 | 2015-01-27T15:17:59.000Z | 2022-02-20T00:14:08.000Z | libsaas/services/compete/__init__.py | MidtownFellowship/libsaas | 541bb731b996b08ede1d91a235cb82895765c38a | [
"MIT"
] | 14 | 2015-01-12T08:22:37.000Z | 2021-06-16T19:49:31.000Z | libsaas/services/compete/__init__.py | MidtownFellowship/libsaas | 541bb731b996b08ede1d91a235cb82895765c38a | [
"MIT"
] | 43 | 2015-01-28T22:41:45.000Z | 2021-09-21T04:44:26.000Z | from .service import Compete
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6432ccabf89a81ad9e221e5310bc810e793b5fd5 | 20,674 | py | Python | integration/sawtooth_integration/tests/test_tp_validator_registry.py | lcarranco/sawtooth-core | 70cd65bfe4204545501d73f748d908e6695828f3 | [
"Apache-2.0"
] | null | null | null | integration/sawtooth_integration/tests/test_tp_validator_registry.py | lcarranco/sawtooth-core | 70cd65bfe4204545501d73f748d908e6695828f3 | [
"Apache-2.0"
] | 1 | 2021-12-09T23:11:26.000Z | 2021-12-09T23:11:26.000Z | integration/sawtooth_integration/tests/test_tp_validator_registry.py | lcarranco/sawtooth-core | 70cd65bfe4204545501d73f748d908e6695828f3 | [
"Apache-2.0"
] | null | null | null | # Copyright 2017 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------------
import unittest
import json
import base64
import hashlib

from sawtooth_integration.message_factories.validator_reg_message_factory \
    import ValidatorRegistryMessageFactory

from sawtooth_poet_common import sgx_structs
from sawtooth_poet_common.protobuf.validator_registry_pb2 import \
    ValidatorRegistryPayload


class TestValidatorRegistry(unittest.TestCase):
    """
    Set of tests to run in a test suite with an existing TPTester and
    transaction processor.
    """

    def __init__(self, test_name, tester):
        super().__init__(test_name)
        self.tester = tester
        self.private_key = '5HsjpyQzpeoGAAvNeG5PzQsn1Ght18GgSmDaEUCd1c1HpA2a' \
                           'vzc'
        self.public_key = '02f3d385777ab35888fc47af6d123bba6f8b04817a4746e97' \
                          '446ce1562fc4307d7'
        self.factory = ValidatorRegistryMessageFactory(
            private=self.private_key, public=self.public_key)

    def _expect_invalid_transaction(self):
        self.tester.expect(
            self.factory.create_tp_response("INVALID_TRANSACTION"))

    def _expect_ok(self):
        self.tester.expect(self.factory.create_tp_response("OK"))

    def test_valid_signup_info(self):
        """
        Test a valid validator_registry transaction, including sending new
        signup info for a validator that has already been registered.
        """
        signup_info = self.factory.create_signup_info(
            self.factory.pubkey_hash, "000")
        payload = ValidatorRegistryPayload(
            verb="reg", name="val_1", id=self.factory.public_key,
            signup_info=signup_info)

        # Send validator registry payload
        self.tester.send(
            self.factory.create_tp_process_request(payload.id, payload))

        # Expect a request for the ValidatorMap
        received = self.tester.expect(
            self.factory.create_get_request_validator_map())

        # Respond with an empty ValidatorMap
        self.tester.respond(
            self.factory.create_get_empty_response_validator_map(), received)

        # Expect a request to add the new validator to the ValidatorMap
        received = self.tester.expect(
            self.factory.create_set_request_validator_map())

        # Respond with the ValidatorMap address
        self.tester.respond(self.factory.create_set_response_validator_map(),
                            received)

        # Expect a request to set ValidatorInfo for val_1
        received = self.tester.expect(
            self.factory.create_set_request_validator_info("val_1",
                                                           "registered"))

        # Respond with the address for val_1.
        # The val_1 address is derived from the validator's id, which is the
        # same as the pubkey for the factory.
        self.tester.respond(self.factory.create_set_response_validator_info(),
                            received)

        self._expect_ok()

        # --------------------------
        signup_info = self.factory.create_signup_info(
            self.factory.pubkey_hash, "000")
        payload = ValidatorRegistryPayload(
            verb="reg", name="val_1", id=self.factory.public_key,
            signup_info=signup_info)

        # Send validator registry payload
        self.tester.send(
            self.factory.create_tp_process_request(payload.id, payload))

        # Expect a request for the ValidatorMap
        received = self.tester.expect(
            self.factory.create_get_request_validator_map())

        # Respond with a ValidatorMap
        self.tester.respond(self.factory.create_get_response_validator_map(),
                            received)

        # Expect to receive a validator_info request
        received = self.tester.expect(
            self.factory.create_get_request_validator_info())

        # Respond with the ValidatorInfo
        self.tester.respond(
            self.factory.create_get_response_validator_info("val_1"), received)

        # Expect a request to set ValidatorInfo for val_1
        received = self.tester.expect(
            self.factory.create_set_request_validator_info("val_1", "revoked"))

        # Respond with the address for val_1 (derived from the validator's id,
        # which is the same as the pubkey for the factory)
        self.tester.respond(
            self.factory.create_set_response_validator_info(), received)

        # Expect a request to set ValidatorInfo for val_1
        received = self.tester.expect(
            self.factory.create_set_request_validator_info("val_1",
                                                           "registered"))

        # Respond with the address for val_1
        self.tester.respond(self.factory.create_set_response_validator_info(),
                            received)

        self._expect_ok()

    def test_invalid_name(self):
        """
        Test that a transaction with an invalid name returns an invalid
        transaction.
        """
        signup_info = self.factory.create_signup_info(
            self.factory.pubkey_hash, "000")

        # The name is longer than 64 characters
        payload = ValidatorRegistryPayload(
            verb="reg",
            name="val_11111111111111111111111111111111111111111111111111111111"
                 "11111",
            id=self.factory.public_key,
            signup_info=signup_info)

        # Send validator registry payload
        self.tester.send(
            self.factory.create_tp_process_request(payload.id, payload))

        self._expect_invalid_transaction()

    def test_invalid_id(self):
        """
        Test that a transaction with an id that does not match the
        signer_pubkey returns an invalid transaction.
        """
        signup_info = self.factory.create_signup_info(
            self.factory.pubkey_hash, "000")

        # The id should match the signer_pubkey in the transaction_header
        payload = ValidatorRegistryPayload(
            verb="reg",
            name="val_1",
            id="bad",
            signup_info=signup_info
        )

        # Send validator registry payload
        self.tester.send(
            self.factory.create_tp_process_request(payload.id, payload))

        self._expect_invalid_transaction()

    def test_invalid_poet_pubkey(self):
        """
        Test that a transaction without a valid poet_public_key returns an
        invalid transaction.
        """
        signup_info = self.factory.create_signup_info(
            self.factory.pubkey_hash, "000")

        signup_info.poet_public_key = "bad"

        payload = ValidatorRegistryPayload(
            verb="reg",
            name="val_1",
            id=self.factory.public_key,
            signup_info=signup_info)

        # Send validator registry payload
        self.tester.send(
            self.factory.create_tp_process_request(payload.id, payload))

        self._expect_invalid_transaction()

    def _test_bad_signup_info(self, signup_info):
        payload = ValidatorRegistryPayload(
            verb="reg",
            name="val_1",
            id=self.factory.public_key,
            signup_info=signup_info)

        # Send validator registry payload
        self.tester.send(
            self.factory.create_tp_process_request(payload.id, payload))

        self._expect_invalid_transaction()

    def test_invalid_verification_report(self):
        """
        Test that a transaction whose verification report is invalid returns
        an invalid transaction.
        """
        signup_info = self.factory.create_signup_info(
            self.factory.pubkey_hash, "000")

        # Verification report is None
        proof_data = signup_info.proof_data
        signup_info.proof_data = json.dumps({})
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # No verification signature
        proof_data_dict = json.loads(proof_data)
        del proof_data_dict["signature"]
        signup_info.proof_data = json.dumps(proof_data_dict)
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # Bad verification signature
        proof_data_dict["signature"] = "bads"
        signup_info.proof_data = json.dumps(proof_data_dict)
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # No EPID pseudonym
        proof_data_dict = json.loads(proof_data)
        verification_report = \
            json.loads(proof_data_dict["verification_report"])
        del verification_report["epidPseudonym"]
        signup_info.proof_data = \
            self.factory.create_proof_data(
                verification_report=verification_report,
                evidence_payload=proof_data_dict.get('evidence_payload'))
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # Altered EPID pseudonym (does not match anti_sybil_id)
        proof_data_dict = json.loads(proof_data)
        verification_report = \
            json.loads(proof_data_dict["verification_report"])
        verification_report["epidPseudonym"] = "altered"
        signup_info.proof_data = \
            self.factory.create_proof_data(
                verification_report=verification_report,
                evidence_payload=proof_data_dict.get('evidence_payload'))
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # No nonce
        proof_data_dict = json.loads(proof_data)
        verification_report = \
            json.loads(proof_data_dict["verification_report"])
        del verification_report["nonce"]
        signup_info.proof_data = \
            self.factory.create_proof_data(
                verification_report=verification_report,
                evidence_payload=proof_data_dict.get('evidence_payload'))
        self._test_bad_signup_info(signup_info)

    def test_invalid_pse_manifest(self):
        """
        Test that a transaction whose pse_manifest is invalid returns an
        invalid transaction.
        """
        signup_info = self.factory.create_signup_info(
            self.factory.pubkey_hash, "000")

        proof_data = signup_info.proof_data
        proof_data_dict = json.loads(proof_data)

        # ------------------------------------------------------
        # No pseManifestStatus
        verification_report = \
            json.loads(proof_data_dict["verification_report"])
        del verification_report['pseManifestStatus']
        signup_info.proof_data = \
            self.factory.create_proof_data(
                verification_report=verification_report,
                evidence_payload=proof_data_dict.get('evidence_payload'))
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # Bad pseManifestStatus
        verification_report = \
            json.loads(proof_data_dict["verification_report"])
        verification_report['pseManifestStatus'] = "bad"
        signup_info.proof_data = \
            self.factory.create_proof_data(
                verification_report=verification_report,
                evidence_payload=proof_data_dict.get('evidence_payload'))
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # No pseManifestHash
        verification_report = \
            json.loads(proof_data_dict["verification_report"])
        del verification_report['pseManifestHash']
        signup_info.proof_data = \
            self.factory.create_proof_data(
                verification_report=verification_report,
                evidence_payload=proof_data_dict.get('evidence_payload'))
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # Bad pseManifestHash
        verification_report = \
            json.loads(proof_data_dict["verification_report"])
        verification_report['pseManifestHash'] = "Bad"
        signup_info.proof_data = \
            self.factory.create_proof_data(
                verification_report=verification_report,
                evidence_payload=proof_data_dict.get('evidence_payload'))
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # Missing evidence payload
        evidence_payload = proof_data_dict["evidence_payload"]
        del proof_data_dict["evidence_payload"]
        signup_info.proof_data = json.dumps(proof_data_dict)
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # Missing PSE manifest
        del evidence_payload["pse_manifest"]
        proof_data_dict["evidence_payload"] = evidence_payload
        signup_info.proof_data = json.dumps(proof_data_dict)
        self._test_bad_signup_info(signup_info)

        # ------------------------------------------------------
        # Bad PSE manifest
        evidence_payload["pse_manifest"] = "bad"
        signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
def test_invalid_enclave_body(self):
"""
Test that a transaction whose enclave_body is invalid returns an
invalid transaction.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
proof_data = signup_info.proof_data
proof_data_dict = json.loads(proof_data)
# ------------------------------------------------------
# No isvEnclaveQuoteStatus
verification_report = \
json.loads(proof_data_dict["verification_report"])
enclave_status = verification_report["isvEnclaveQuoteStatus"]
verification_report["isvEnclaveQuoteStatus"] = None
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Bad isvEnclaveQuoteStatus
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report["isvEnclaveQuoteStatus"] = "Bad"
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# No isvEnclaveQuoteBody
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report["isvEnclaveQuoteStatus"] = enclave_status
verification_report['isvEnclaveQuoteBody'] = None
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Malformed isvEnclaveQuoteBody (decode the enclave quote, chop off
# the last byte, and re-encode)
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report['isvEnclaveQuoteBody'] = \
base64.b64encode(
base64.b64decode(
verification_report['isvEnclaveQuoteBody'].encode())[1:])\
.decode()
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Invalid basename
verification_report = \
json.loads(proof_data_dict["verification_report"])
sgx_quote = sgx_structs.SgxQuote()
sgx_quote.parse_from_bytes(
base64.b64decode(
verification_report['isvEnclaveQuoteBody'].encode()))
sgx_quote.basename.name = \
b'\xCC' * sgx_structs.SgxBasename.STRUCT_SIZE
verification_report['isvEnclaveQuoteBody'] = \
base64.b64encode(sgx_quote.serialize_to_bytes()).decode()
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Report data is not valid (bad OPK hash)
verification_report = \
json.loads(proof_data_dict["verification_report"])
sgx_quote = sgx_structs.SgxQuote()
sgx_quote.parse_from_bytes(
base64.b64decode(
verification_report['isvEnclaveQuoteBody'].encode()))
hash_input = \
'{0}{1}'.format(
'Not a valid OPK Hash',
self.factory.poet_public_key).upper().encode()
sgx_quote.report_body.report_data.d = \
hashlib.sha256(hash_input).digest()
verification_report['isvEnclaveQuoteBody'] = \
base64.b64encode(sgx_quote.serialize_to_bytes()).decode()
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Report data is not valid (bad PPK)
verification_report = \
json.loads(proof_data_dict["verification_report"])
sgx_quote = sgx_structs.SgxQuote()
sgx_quote.parse_from_bytes(
base64.b64decode(
verification_report['isvEnclaveQuoteBody'].encode()))
hash_input = \
'{0}{1}'.format(
self.factory.pubkey_hash,
"Not a valid PPK").encode()
sgx_quote.report_body.report_data.d = \
hashlib.sha256(hash_input).digest()
verification_report['isvEnclaveQuoteBody'] = \
base64.b64encode(sgx_quote.serialize_to_bytes()).decode()
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Invalid enclave measurement
verification_report = \
json.loads(proof_data_dict["verification_report"])
sgx_quote = sgx_structs.SgxQuote()
sgx_quote.parse_from_bytes(
base64.b64decode(
verification_report['isvEnclaveQuoteBody'].encode()))
sgx_quote.report_body.mr_enclave.m = \
b'\xCC' * sgx_structs.SgxMeasurement.STRUCT_SIZE
verification_report['isvEnclaveQuoteBody'] = \
base64.b64encode(sgx_quote.serialize_to_bytes()).decode()
signup_info.proof_data = \
self.factory.create_proof_data(
verification_report=verification_report,
evidence_payload=proof_data_dict.get('evidence_payload'))
self._test_bad_signup_info(signup_info)
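The tests above repeat one pattern many times: copy the verification report, corrupt one field, rebuild the proof data, and expect rejection. A minimal, standalone sketch of that pattern as a parametrized helper is shown below; the `corrupted_reports` name and the example mutations are illustrative only, not part of the PoET test factory.

```python
import copy
import json

def corrupted_reports(verification_report, mutations):
    """Yield one corrupted deep copy of the report per labeled mutation.

    `mutations` maps a label to a function that edits a dict in place.
    """
    for label, mutate in mutations.items():
        report = copy.deepcopy(verification_report)
        mutate(report)
        yield label, report

# Example mutations mirroring the cases above: drop a field or overwrite it.
mutations = {
    "no_nonce": lambda r: r.pop("nonce"),
    "bad_pse_status": lambda r: r.update(pseManifestStatus="bad"),
}

report = {"nonce": "0123", "pseManifestStatus": "OK"}
for label, bad in corrupted_reports(report, mutations):
    print(label, json.dumps(bad, sort_keys=True))
```

Because each mutation works on a deep copy, the pristine report survives for the next case, which is why the tests above can keep re-parsing the same `proof_data`.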
| 37.452899 | 80 | 0.608494 | 2,086 | 20,674 | 5.698466 | 0.110259 | 0.083284 | 0.050307 | 0.045428 | 0.785228 | 0.764701 | 0.740809 | 0.733154 | 0.724994 | 0.697064 | 0 | 0.014414 | 0.26173 | 20,674 | 551 | 81 | 37.520871 | 0.764398 | 0.20475 | 0 | 0.755556 | 0 | 0 | 0.087097 | 0.01495 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034921 | false | 0 | 0.022222 | 0 | 0.060317 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ff7a59fd8cce612a32e2603d1d2db5d3a9c23b13 | 204 | py | Python | jupyterlab2pymolpysnips/Pymolrc/fetchPath.py | MooersLab/pymolpysnips | 50a89c85adf8006d85c1d6cd3f8aad7e440a0b92 | [
"MIT"
] | null | null | null | jupyterlab2pymolpysnips/Pymolrc/fetchPath.py | MooersLab/pymolpysnips | 50a89c85adf8006d85c1d6cd3f8aad7e440a0b92 | [
"MIT"
] | null | null | null | jupyterlab2pymolpysnips/Pymolrc/fetchPath.py | MooersLab/pymolpysnips | 50a89c85adf8006d85c1d6cd3f8aad7e440a0b92 | [
"MIT"
] | null | null | null | """
cmd.do('set fetch_path, ${1:/Users/blaine/pdbFiles};')
"""
cmd.do('set fetch_path, /Users/blaine/pdbFiles;')
# Description: Set the path where fetched PDB files are saved.
# Source: placeHolder
| 22.666667 | 65 | 0.696078 | 29 | 204 | 4.827586 | 0.655172 | 0.071429 | 0.114286 | 0.185714 | 0.242857 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00565 | 0.132353 | 204 | 8 | 66 | 25.5 | 0.785311 | 0.686275 | 0 | 0 | 0 | 0 | 0.722222 | 0.425926 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
440b6a3eea96c535aa8895dd14241914f910d2a1 | 45 | py | Python | server/parsing_lib/__init__.py | cnavrides/wireless-debugging | 9c057d0127a5f8eebca4193af4bdb7e96c3ae6dd | [
"Apache-2.0"
] | 3 | 2017-06-23T15:19:31.000Z | 2018-03-07T01:31:37.000Z | server/parsing_lib/__init__.py | cnavrides/wireless-debugging | 9c057d0127a5f8eebca4193af4bdb7e96c3ae6dd | [
"Apache-2.0"
] | 75 | 2017-06-15T20:09:32.000Z | 2018-01-17T01:30:26.000Z | server/parsing_lib/__init__.py | cnavrides/wireless-debugging | 9c057d0127a5f8eebca4193af4bdb7e96c3ae6dd | [
"Apache-2.0"
] | 3 | 2017-06-17T04:39:10.000Z | 2017-08-16T15:25:00.000Z | from parsing_lib.log_parser import LogParser
| 22.5 | 44 | 0.888889 | 7 | 45 | 5.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4411dff36e164ce52bde15bdbb30214fe31d9aae | 344 | py | Python | tests/test_agents/test_bandit/__init__.py | matrig/genrl | 25eb018f18a9a1d0865c16e5233a2a7ccddbfd78 | [
"MIT"
] | 390 | 2020-05-03T17:34:02.000Z | 2022-03-05T11:29:07.000Z | tests/test_agents/test_bandit/__init__.py | matrig/genrl | 25eb018f18a9a1d0865c16e5233a2a7ccddbfd78 | [
"MIT"
] | 306 | 2020-05-03T05:53:53.000Z | 2022-03-12T00:27:28.000Z | tests/test_agents/test_bandit/__init__.py | matrig/genrl | 25eb018f18a9a1d0865c16e5233a2a7ccddbfd78 | [
"MIT"
] | 64 | 2020-05-05T20:23:30.000Z | 2022-03-30T08:43:10.000Z | from tests.test_agents.test_bandit.test_cb_agents import TestCBAgent # noqa
from tests.test_agents.test_bandit.test_data_bandits import TestDataBandit # noqa
from tests.test_agents.test_bandit.test_mab_agents import TestMABAgent # noqa
from tests.test_agents.test_bandit.test_multi_armed_bandits import (
TestMultiArmedBandit, # noqa
)
| 49.142857 | 82 | 0.84593 | 49 | 344 | 5.591837 | 0.346939 | 0.131387 | 0.189781 | 0.277372 | 0.525547 | 0.525547 | 0.525547 | 0.405109 | 0 | 0 | 0 | 0 | 0.098837 | 344 | 6 | 83 | 57.333333 | 0.883871 | 0.055233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4421cf48be1b7210e4fd2a9b6594eb5c4459f905 | 77 | py | Python | modules/activation_functions/sigmoid.py | df424/ml | e12232ca4b90f983bfb14718afd314d3d6cc1bf9 | [
"MIT"
] | null | null | null | modules/activation_functions/sigmoid.py | df424/ml | e12232ca4b90f983bfb14718afd314d3d6cc1bf9 | [
"MIT"
] | null | null | null | modules/activation_functions/sigmoid.py | df424/ml | e12232ca4b90f983bfb14718afd314d3d6cc1bf9 | [
"MIT"
] | null | null | null |
import numpy as np
def sigmoid(Y: np.ndarray) -> np.ndarray:
    # Element-wise logistic function 1 / (1 + e^(-y)), mapping R to (0, 1).
    return 1 / (1 + np.exp(-Y))
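For very negative inputs, `np.exp(-Y)` overflows and emits a runtime warning (the result is still correct). A common numerically stable variant, sketched here as a standalone function (not part of this repo), splits on the sign of the input:

```python
import numpy as np

def stable_sigmoid(y: np.ndarray) -> np.ndarray:
    # Evaluate the logistic function without overflowing np.exp:
    # use 1/(1+e^-y) for y >= 0 and e^y/(1+e^y) for y < 0.
    out = np.empty_like(y, dtype=float)
    pos = y >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-y[pos]))
    ey = np.exp(y[~pos])
    out[~pos] = ey / (1.0 + ey)
    return out

print(stable_sigmoid(np.array([-1000.0, 0.0, 1000.0])))
```

Both branches are algebraically identical; the split only ensures `np.exp` is always called on a non-positive argument.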
| 12.833333 | 27 | 0.636364 | 15 | 77 | 3.266667 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.194805 | 77 | 5 | 28 | 15.4 | 0.758065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
44939478cce1f268d37c5060a1ed2866f2780550 | 130 | py | Python | app/app.py | Zuoxiaoxian/falcon_gq | 58fd22e91864789bffca26d0f2e16797b7393d3a | [
"MIT"
] | null | null | null | app/app.py | Zuoxiaoxian/falcon_gq | 58fd22e91864789bffca26d0f2e16797b7393d3a | [
"MIT"
] | null | null | null | app/app.py | Zuoxiaoxian/falcon_gq | 58fd22e91864789bffca26d0f2e16797b7393d3a | [
"MIT"
] | null | null | null | # 这里是接口方法
from jsonrpc import dispatcher
@dispatcher.add_method
def foobar(**kwargs):
return kwargs['name'] + kwargs['age']
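`@dispatcher.add_method` registers `foobar` under its name so a JSON-RPC request naming `"foobar"` is routed to it. A minimal stand-in for that dispatch (illustrative only, not the `jsonrpc` package's implementation) looks like this:

```python
# Registry mapping method names to callables, as @dispatcher.add_method does.
registry = {}

def add_method(func):
    registry[func.__name__] = func
    return func

@add_method
def foobar(**kwargs):
    # Works when both values are strings, e.g. {"name": "li", "age": "30"}.
    return kwargs['name'] + kwargs['age']

def handle(request):
    # Route a JSON-RPC-style request dict to the registered method.
    return registry[request['method']](**request['params'])

print(handle({'method': 'foobar', 'params': {'name': 'li', 'age': '30'}}))  # li30
```

Note that `'name' + 'age'` is plain string concatenation, so both parameters are expected to arrive as strings from the JSON payload.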
| 18.571429 | 41 | 0.723077 | 16 | 130 | 5.8125 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146154 | 130 | 6 | 42 | 21.666667 | 0.837838 | 0.053846 | 0 | 0 | 0 | 0 | 0.057851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
9264ed44fbc67a6d209397197586c50ca1196e97 | 121 | py | Python | mainRoadMap/views.py | h1gfun4/h1gfun4.github.io | e460467cb505b525ecd5b01b9eb3fd73de7ec6e1 | [
"MIT"
] | null | null | null | mainRoadMap/views.py | h1gfun4/h1gfun4.github.io | e460467cb505b525ecd5b01b9eb3fd73de7ec6e1 | [
"MIT"
] | null | null | null | mainRoadMap/views.py | h1gfun4/h1gfun4.github.io | e460467cb505b525ecd5b01b9eb3fd73de7ec6e1 | [
"MIT"
] | null | null | null | from django.shortcuts import render
def RoadMapView(request):
return render(request, 'mainRoadMap/roadmapPage.html') | 30.25 | 58 | 0.801653 | 14 | 121 | 6.928571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107438 | 121 | 4 | 58 | 30.25 | 0.898148 | 0 | 0 | 0 | 0 | 0 | 0.229508 | 0.229508 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
928e11591af78268ba0af04278f08144a844085d | 38 | py | Python | src/gbt/__init__.py | firiceguo/Recommendation-NLP | 526c0d50deb05331eebc1f4c82f76d12b6ba80c6 | [
"MIT"
] | null | null | null | src/gbt/__init__.py | firiceguo/Recommendation-NLP | 526c0d50deb05331eebc1f4c82f76d12b6ba80c6 | [
"MIT"
] | null | null | null | src/gbt/__init__.py | firiceguo/Recommendation-NLP | 526c0d50deb05331eebc1f4c82f76d12b6ba80c6 | [
"MIT"
] | 3 | 2017-03-14T17:27:29.000Z | 2019-06-11T14:02:59.000Z | import traingbt
import dataprocessing
| 12.666667 | 21 | 0.894737 | 4 | 38 | 8.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 2 | 22 | 19 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2bab62cca2e407ccf7184884547dd4ec530f8380 | 1,990 | py | Python | tatau_core/models/payments.py | makar21/core | e6a0c8d5456567dd3139ee3fd3cf6cd4acdd4a05 | [
"Apache-2.0"
] | null | null | null | tatau_core/models/payments.py | makar21/core | e6a0c8d5456567dd3139ee3fd3cf6cd4acdd4a05 | [
"Apache-2.0"
] | null | null | null | tatau_core/models/payments.py | makar21/core | e6a0c8d5456567dd3139ee3fd3cf6cd4acdd4a05 | [
"Apache-2.0"
] | null | null | null | from logging import getLogger
from tatau_core.db import models, fields
from tatau_core.models.task import TaskDeclaration
from tatau_core.models.nodes import ProducerNode, WorkerNode, VerifierNode
from tatau_core.utils import cached_property
logger = getLogger('tatau_core')
class WorkerPayment(models.Model):
producer_id = fields.CharField(immutable=True)
worker_id = fields.CharField(immutable=True)
task_declaration_id = fields.CharField(immutable=True)
train_iteration = fields.IntegerField(immutable=True)
train_iteration_retry = fields.IntegerField(immutable=True)
tflops = fields.FloatField(immutable=True)
tokens = fields.FloatField(immutable=True)
@cached_property
def producer(self) -> ProducerNode:
return ProducerNode.get(self.producer_id, db=self.db, encryption=self.encryption)
@cached_property
def worker(self) -> WorkerNode:
return WorkerNode.get(self.worker_id, db=self.db, encryption=self.encryption)
@cached_property
def task_declaration(self) -> TaskDeclaration:
return TaskDeclaration.get(self.task_declaration_id, db=self.db, encryption=self.encryption)
class VerifierPayment(models.Model):
producer_id = fields.CharField(immutable=True)
verifier_id = fields.CharField(immutable=True)
task_declaration_id = fields.CharField(immutable=True)
train_iteration = fields.IntegerField(immutable=True)
tflops = fields.FloatField(immutable=True)
tokens = fields.FloatField(immutable=True)
@cached_property
def producer(self) -> ProducerNode:
return ProducerNode.get(self.producer_id, db=self.db, encryption=self.encryption)
@cached_property
def verifier(self) -> VerifierNode:
return VerifierNode.get(self.verifier_id, db=self.db, encryption=self.encryption)
@cached_property
def task_declaration(self) -> TaskDeclaration:
return TaskDeclaration.get(self.task_declaration_id, db=self.db, encryption=self.encryption)
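The `cached_property` decorator matters here because each accessor hides a database lookup (`ProducerNode.get`, etc.). Assuming `tatau_core.utils.cached_property` behaves like the standard memoizing descriptor (`functools.cached_property`), the lookup runs once per instance and later accesses hit the cache. A small sketch with a hypothetical `Payment` class and a counter in place of the DB call:

```python
from functools import cached_property  # stand-in for tatau_core.utils.cached_property

class Payment:
    def __init__(self):
        self.fetches = 0

    @cached_property
    def producer(self):
        # In WorkerPayment this would be a ProducerNode.get(...) DB lookup;
        # here we just count how often the body runs.
        self.fetches += 1
        return "producer-node"

p = Payment()
assert p.producer == "producer-node"
assert p.producer == "producer-node"
print(p.fetches)  # 1: the lookup ran once, the second access hit the cache
```

This is why code can freely write `payment.producer` in several places without issuing repeated queries.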
| 37.54717 | 100 | 0.765327 | 235 | 1,990 | 6.33617 | 0.182979 | 0.113499 | 0.068502 | 0.104768 | 0.727334 | 0.727334 | 0.727334 | 0.727334 | 0.661518 | 0.661518 | 0 | 0 | 0.142714 | 1,990 | 52 | 101 | 38.269231 | 0.872802 | 0 | 0 | 0.615385 | 0 | 0 | 0.005028 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.128205 | 0.153846 | 0.820513 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
9200f83b61582c0c017bfc63101bb48b64375ecf | 157 | py | Python | tests/test_network.py | brunorijsman/quantum-path-computation-engine | 021bca03f8555cd9cd0cdbd7d5c6a32050ab6271 | [
"Apache-2.0"
] | null | null | null | tests/test_network.py | brunorijsman/quantum-path-computation-engine | 021bca03f8555cd9cd0cdbd7d5c6a32050ab6271 | [
"Apache-2.0"
] | 1 | 2021-06-01T23:56:41.000Z | 2021-06-01T23:56:41.000Z | tests/test_network.py | brunorijsman/quantum-path-computation-element | 021bca03f8555cd9cd0cdbd7d5c6a32050ab6271 | [
"Apache-2.0"
] | null | null | null | """Unit tests for module network."""
from network import Network
def test_create_network():
"""Test creation of a network."""
_network = Network()
| 19.625 | 37 | 0.687898 | 20 | 157 | 5.25 | 0.65 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184713 | 157 | 7 | 38 | 22.428571 | 0.820313 | 0.369427 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6007dc4ef5a842518819773606454e3a597339e | 42 | py | Python | gendist/gendist/__init__.py | probml/shift-happens | 67bd65a7652e0cd148d94a2085d6e546ace584b2 | [
"MIT"
] | 5 | 2022-01-19T18:58:25.000Z | 2022-03-08T16:08:54.000Z | gendist/gendist/__init__.py | probml/shift-happens | 67bd65a7652e0cd148d94a2085d6e546ace584b2 | [
"MIT"
] | null | null | null | gendist/gendist/__init__.py | probml/shift-happens | 67bd65a7652e0cd148d94a2085d6e546ace584b2 | [
"MIT"
] | 1 | 2022-01-20T01:56:55.000Z | 2022-01-20T01:56:55.000Z | from . import training, processing, models | 42 | 42 | 0.809524 | 5 | 42 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 1 | 42 | 42 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6250c01d95465ac7f8a79f65b0c4b979c6ea023 | 15,766 | py | Python | Utils/util_opt.py | soshishimada/PhysCap_demo_release | 542756ed9ecdca77eda8b6b44ba2348253b999c3 | [
"Unlicense"
] | 62 | 2021-09-05T19:36:06.000Z | 2022-03-29T11:47:09.000Z | Utils/util_opt.py | soshishimada/PhysCap_demo_release | 542756ed9ecdca77eda8b6b44ba2348253b999c3 | [
"Unlicense"
] | 4 | 2021-09-21T09:52:02.000Z | 2022-03-27T09:08:30.000Z | Utils/util_opt.py | soshishimada/PhysCap_demo_release | 542756ed9ecdca77eda8b6b44ba2348253b999c3 | [
"Unlicense"
] | 10 | 2021-09-05T00:27:17.000Z | 2022-03-22T13:25:57.000Z | import numpy as np
import sys
import math
sys.path.append("/HPS/Shimada/work/rbdl37/rbdl/build/python")
import rbdl
from cvxopt import matrix, solvers
solvers.options['show_progress'] = False
class RbdlOpt():
def __init__(self, delta_t, l_kafth_ids, r_kafth_ids):
self.delta_t = delta_t
self.l_kafth_ids = l_kafth_ids
self.r_kafth_ids = r_kafth_ids
    def c2d_func(self, v):
        # Skew-symmetric ("cross-to-dot") matrix: c2d_func(v) @ w == np.cross(v, w).
        vec = np.array([[0, -v[2], v[1]],
                        [v[2], 0, -v[0]],
                        [-v[1], v[0], 0],
                        ])
        return vec
def mat_concatenate(self, mat):
out = None
for i in range(len(mat)):
if i == 0:
out = mat[i]
else:
out = np.concatenate((out, mat[i]), 1)
return out
def wrench_separator(self, wrench, contact_info, wrench_dim=6):
extract_index = [np.arange(i * wrench_dim, (i + 1) * wrench_dim) for i in range(int(len(wrench) / wrench_dim))
if contact_info[i]]
return wrench[np.array(extract_index).reshape(-1)]
def cross2dot_convert(self, vectors):
out = np.array(list(map(self.c2d_func, vectors)))
out = self.mat_concatenate(out)
return out
    def big_G_getter(self, Gtau):
        # Stack the torque map on an identity force block, then pad with zeros.
        G = np.concatenate((Gtau, np.eye(3)), 0)
        G = np.concatenate((G, np.zeros(G.shape)), 1)
        return G
    def big_G_getter2(self, Gtau):
        # Variant without zero padding: a single 6x3 wrench block per contact.
        G = np.concatenate((Gtau, np.eye(3)), 0)
        return G
def get_wrench(self, model, com, q, body_id):
contact = rbdl.CalcBodyToBaseCoordinates(model, q, body_id, np.zeros(3))
contact_vec = contact - com
G_tau_converted = self.cross2dot_convert(np.array([contact_vec]))
return G_tau_converted
def jacobi_separator(self, jacobi, contact_info, jacobi_dim=6):
extract_index = [np.arange(i * jacobi_dim, (i + 1) * jacobi_dim) for i in range(int(len(jacobi) / jacobi_dim))
if contact_info[i]]
if len(extract_index) != 0:
return jacobi[np.array(extract_index).reshape(-1)]
else:
return []
    def jacobi_separator2(self, jacobi, contact_info, jacobi_dim=6):
        # Zero out the 6-row Jacobian block of every link that is not in
        # contact, keeping the stacked matrix shape unchanged.
        h, w = jacobi.shape
        jacobi = jacobi.reshape(4, int(h / 4), w)
        jacobi = contact_info.reshape(-1, 1, 1) * jacobi
        return jacobi.reshape(h, w)
def qp_force_estimation_toe_heel(self, bullet_contacts_lth_rth, model, M, q, qdot, des_qddot, gcc, lr_J6D):
M = M[:6]
mass = np.zeros(q.shape)
com = np.zeros(3)
rbdl.CalcCenterOfMass(model, q, qdot, mass, com)
l_toe_G_tau_converted = self.get_wrench(model, com, q, self.l_kafth_ids[3])
l_heel_G_tau_converted = self.get_wrench(model, com, q, self.l_kafth_ids[4])
r_toe_G_tau_converted = self.get_wrench(model, com, q, self.r_kafth_ids[3])
r_heel_G_tau_converted = self.get_wrench(model, com, q, self.r_kafth_ids[4])
R_l_toe = self.big_G_getter(l_toe_G_tau_converted)
R_l_heel = self.big_G_getter(l_heel_G_tau_converted)
R_r_toe = self.big_G_getter(r_toe_G_tau_converted)
R_r_heel = self.big_G_getter(r_heel_G_tau_converted)
R = np.concatenate((R_l_toe, np.concatenate((R_l_heel, np.concatenate((R_r_toe, R_r_heel), 0)), 0)), 0)
jacobi = self.jacobi_separator(lr_J6D, bullet_contacts_lth_rth)
if len(jacobi) == 0:
return 0, 0
jacobi = jacobi[:, :6]
R = self.wrench_separator(R, bullet_contacts_lth_rth)
A = np.dot(jacobi.T, R)
b = np.dot(M, des_qddot) + gcc[:6]
W = np.dot(A.T, A)
Q = -np.dot(b.T, A)
mu = 1 / math.sqrt(2)
G = np.array([[0, 0, -1, 0, 0, 0],
[1, 0, -mu, 0, 0, 0],
[-1, 0, -mu, 0, 0, 0],
[0, 1, -mu, 0, 0, 0],
[0, -1, -mu, 0, 0, 0],
[0, 0, 0, 0, 0, -1],
[0, 0, 0, 1, 0, -mu],
[0, 0, 0, -1, 0, -mu],
[0, 0, 0, 0, 1, -mu],
[0, 0, 0, 0, -1, -mu]
])
        h = np.zeros(10)
W = matrix(W.astype(np.double))
Q = matrix(Q.astype(np.double))
G = matrix(G.astype(np.double))
h = matrix(h.astype(np.double))
sol = solvers.qp(W, Q, G=G, h=h)
GRF_opt = np.array(sol["x"]).reshape(-1)
return GRF_opt, R
def qp_force_estimation_toe_heel2(self, bullet_contacts_lth_rth, model, M, q, qdot, des_qddot, gcc, lr_J6D):
M = M[:6]
mass = np.zeros(q.shape)
com = np.zeros(3)
rbdl.CalcCenterOfMass(model, q, qdot, mass, com)
l_toe_G_tau_converted = self.get_wrench(model, com, q, self.l_kafth_ids[3])
l_heel_G_tau_converted = self.get_wrench(model, com, q, self.l_kafth_ids[4])
r_toe_G_tau_converted = self.get_wrench(model, com, q, self.r_kafth_ids[3])
r_heel_G_tau_converted = self.get_wrench(model, com, q, self.r_kafth_ids[4])
R_l_toe = self.big_G_getter2(l_toe_G_tau_converted)
R_l_heel = self.big_G_getter2(l_heel_G_tau_converted)
R_r_toe = self.big_G_getter2(r_toe_G_tau_converted)
R_r_heel = self.big_G_getter2(r_heel_G_tau_converted)
        # Assemble the block-diagonal wrench map R: one 6x3 block per contact.
        R_list = [R_l_toe, R_l_heel, R_r_toe, R_r_heel]
        R = np.zeros((24, 12))
        for i in range(len(R_list)):
            R[i * 6:(i + 1) * 6, i * 3:(i + 1) * 3] = R_list[i]
jacobi = self.jacobi_separator2(lr_J6D, bullet_contacts_lth_rth)
#print('jacobi',jacobi.shape)
if len(jacobi) == 0:
return 0, 0
jacobi = jacobi[:, :6]
        # Least-squares QP terms: minimize ||A f - b||^2 over contact forces f.
        A = np.dot(jacobi.T, R)
        b = np.dot(M, des_qddot) + gcc[:6]
        W = np.dot(A.T, A)
        Q = -np.dot(b.T, A)
        mu = 1 / math.sqrt(2)  # friction coefficient of the linearized cone
        # Linearized friction-cone constraints, one 5x3 block per contact
        # (y is the contact normal): f_y >= 0, |f_x| <= mu*f_y, |f_z| <= mu*f_y.
        cone = np.array([[0, -1, 0],
                         [1, -mu, 0],
                         [-1, -mu, 0],
                         [0, -mu, 1],
                         [0, -mu, -1]])
        G = np.zeros((20, 12))
        for i in range(4):
            G[i * 5:(i + 1) * 5, i * 3:(i + 1) * 3] = cone
        h = np.zeros(20)
W = matrix(W.astype(np.double))
Q = matrix(Q.astype(np.double))
G = matrix(G.astype(np.double))
h = matrix(h.astype(np.double))
sol = solvers.qp(W, Q, G=G, h=h)
GRF_opt = np.array(sol["x"]).reshape(-1)
return GRF_opt, R
def qp_control_hc(self, bullet_contacts_lth_rth, M, qdot, des_qddot, gcc,lr_J6D, GRF_opt, R):
        lr_F_J6D = self.jacobi_separator2(lr_J6D, bullet_contacts_lth_rth)
        if len(lr_F_J6D) != 0:
            # Map the optimized contact forces back to generalized forces.
            general_GRF = np.dot(lr_F_J6D.T, np.dot(R, GRF_opt))
else:
general_GRF = 0
S = np.eye(M.shape[0])
A = np.concatenate((M, -S), 1)
        b = general_GRF - gcc
G_top = np.concatenate((-self.delta_t * lr_J6D, np.zeros((lr_J6D.shape[0], M.shape[1]))), 1)
G_bottom = np.concatenate((self.delta_t * lr_J6D, np.zeros((lr_J6D.shape[0], M.shape[1]))), 1)
G = np.concatenate((G_top, G_bottom), 0)
        # Per-contact twist bounds (components: [wx, wy, wz, vx, vy, vz]);
        # 10000 is effectively "unbounded" for links not in contact.
        free = 10000
        max_vel = 0.01
        max_vel_floor = 0
        max_vel2 = 0.1
        lower_caps = []
        upper_caps = []
        for in_contact in bullet_contacts_lth_rth:  # order: l_toe, l_heel, r_toe, r_heel
            if in_contact:
                lower_caps += [free, free, free, max_vel, max_vel_floor, max_vel]
                upper_caps += [free, free, free, max_vel2, free, max_vel2]
            else:
                lower_caps += [free] * 6
                upper_caps += [free] * 6
        h_top = np.dot(lr_J6D, qdot) + np.array(lower_caps)
        h_bottom = np.array(upper_caps) - np.dot(lr_J6D, qdot)
        h = np.concatenate((h_top, h_bottom), 0)
a = np.eye(len(des_qddot))
bb = des_qddot
W = np.dot(a.T, a)
W = np.concatenate((W, np.zeros(M.shape)), 1)
Q = -np.dot(bb.T, a)
W_tau_bottom = 0.00045*np.eye(M.shape[0])
W_bottom = np.concatenate((np.zeros(M.shape), W_tau_bottom), 1)
W = np.concatenate((W, W_bottom), 0)
Q = np.concatenate((Q, np.zeros(des_qddot.shape[0])), 0)
A = matrix(A.astype(np.double))
b = matrix(b.astype(np.double))
W = matrix(W.astype(np.double))
Q = matrix(Q.astype(np.double))
G = matrix(G.astype(np.double))
h = matrix(h.astype(np.double))
sol = solvers.qp(W, Q, A=A, b=b, G=G, h=h)
x = np.array(sol['x']).reshape(-1)
tau = x[int(len(x) / 2):].reshape(-1)
acc = x[:int(len(x) / 2)].reshape(-1)
return tau, acc, general_GRF
def qp_control_fast(self, bullet_contacts_lth_rth, M, qdot, des_qddot, gcc,lr_J6D, GRF_opt, R):
lr_F_J6D = self.jacobi_separator(lr_J6D, bullet_contacts_lth_rth)
if len(lr_F_J6D) != 0:
general_GRF = np.dot(lr_F_J6D.T, np.dot(R, GRF_opt))
else:
general_GRF = 0
S = np.eye(M.shape[0])
A = np.concatenate((M, -S), 1)
        b = general_GRF - gcc
a = np.eye(len(des_qddot))
bb = des_qddot
W = np.dot(a.T, a)
W = np.concatenate((W, np.zeros(M.shape)), 1)
Q = -np.dot(bb.T, a)
W_tau_bottom = 0.00001 * np.eye(M.shape[0])
W_bottom = np.concatenate((np.zeros(M.shape), W_tau_bottom), 1)
W = np.concatenate((W, W_bottom), 0)
Q = np.concatenate((Q, np.zeros(des_qddot.shape[0])), 0)
A = matrix(A.astype(np.double))
b = matrix(b.astype(np.double))
W = matrix(W.astype(np.double))
Q = matrix(Q.astype(np.double))
sol = solvers.qp(W, Q, A=A, b=b)#, G=G, h=h)
x = np.array(sol['x']).reshape(-1)
tau = x[int(len(x) / 2):].reshape(-1)
acc = x[:int(len(x) / 2)].reshape(-1)
return tau, acc, general_GRF
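Both methods hand cvxopt's `solvers.qp` a quadratic program of the form min ½xᵀWx + Qᵀx subject to Ax = b (plus Gx ≤ h in the slow path). For the equality-only `qp_control_fast` case, the optimum can also be read off the KKT linear system directly; a minimal numpy sketch on a hypothetical 2-variable problem (not the robot's matrices):

```python
import numpy as np

# minimize 0.5 * x^T W x + q^T x  subject to  A x = b
W = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Stationarity (W x + A^T lam = -q) plus feasibility (A x = b)
# stacked into one symmetric KKT system.
n, m = W.shape[0], A.shape[0]
K = np.block([[W, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate((-q, b)))
x = sol[:n]          # primal optimum
lam = sol[n:]        # equality multipliers
```

For this toy problem the optimum is x = (0, 1): minimizing x₁² + x₂² − 2x₁ − 4x₂ on the line x₁ + x₂ = 1 pulls the solution toward the second coordinate.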
| 40.950649 | 132 | 0.528796 | 2,484 | 15,766 | 3.071659 | 0.061594 | 0.065269 | 0.079817 | 0.085976 | 0.808126 | 0.773001 | 0.753866 | 0.739843 | 0.728047 | 0.7173 | 0 | 0.055466 | 0.328745 | 15,766 | 384 | 133 | 41.057292 | 0.665501 | 0.068375 | 0 | 0.477099 | 0 | 0 | 0.004277 | 0.003044 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053435 | false | 0 | 0.019084 | 0 | 0.137405 | 0.007634 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a63afe42f4ad77faa3963963290020b0ea93e16c | 35 | py | Python | sloth/uff/bin/__init__.py | frank26080115/jetbot | 19354e5f8b2e3e4853f7b197b5e2714502822c3d | [
"MIT"
] | null | null | null | sloth/uff/bin/__init__.py | frank26080115/jetbot | 19354e5f8b2e3e4853f7b197b5e2714502822c3d | [
"MIT"
] | null | null | null | sloth/uff/bin/__init__.py | frank26080115/jetbot | 19354e5f8b2e3e4853f7b197b5e2714502822c3d | [
"MIT"
] | null | null | null | from uff.bin import convert_to_uff
| 17.5 | 34 | 0.857143 | 7 | 35 | 4 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a663c8a30961d2064cef6deb84386ff7f1efea11 | 68 | py | Python | algorithms/map/__init__.py | upzone/algorithms | d56792abe70ddbb444fd7fd2662e087da0d225c8 | [
"MIT"
] | 4 | 2018-10-21T04:12:12.000Z | 2019-07-24T08:38:20.000Z | algorithms/map/__init__.py | Tw1stFate/algorithms | a9ed2ce490c6ac1dd220530fe1afb765a51656f4 | [
"MIT"
] | null | null | null | algorithms/map/__init__.py | Tw1stFate/algorithms | a9ed2ce490c6ac1dd220530fe1afb765a51656f4 | [
"MIT"
] | 7 | 2019-03-21T10:18:22.000Z | 2021-09-22T07:34:10.000Z | from .hashtable import *
from .separate_chaining_hashtable import *
| 22.666667 | 42 | 0.823529 | 8 | 68 | 6.75 | 0.625 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 68 | 2 | 43 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6965c015bc59d41e762a97ec717be86f37c0499 | 167 | py | Python | readpaircluster/svcf/__init__.py | RCollins13/Holmes | 3eb0119638bb93c1cab914af60a1dfd472146e28 | [
"MIT"
] | 3 | 2017-03-14T17:49:16.000Z | 2017-03-29T18:19:00.000Z | readpaircluster/svcf/__init__.py | talkowski-lab/holmes | 3eb0119638bb93c1cab914af60a1dfd472146e28 | [
"MIT"
] | null | null | null | readpaircluster/svcf/__init__.py | talkowski-lab/holmes | 3eb0119638bb93c1cab914af60a1dfd472146e28 | [
"MIT"
] | null | null | null | __all__ = []
from .rpc import RPC
# from .svcall import SVCall, SVCallCluster, LumpyCall
from .svcall import SVCall, SVCFRecord, LumpyCall
from .svfile import SVFile
| 27.833333 | 55 | 0.778443 | 21 | 167 | 6 | 0.428571 | 0.15873 | 0.253968 | 0.349206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149701 | 167 | 5 | 56 | 33.4 | 0.887324 | 0.311377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4700ca84e05cf86a46c3eeac4da2eec28b570289 | 25 | py | Python | apps/entity/models/group/__init__.py | dy1zan/softwarecapstone | c121a2b2d43b72aac19b75c31519711c0ace9c02 | [
"MIT"
] | null | null | null | apps/entity/models/group/__init__.py | dy1zan/softwarecapstone | c121a2b2d43b72aac19b75c31519711c0ace9c02 | [
"MIT"
] | 16 | 2018-11-10T21:46:40.000Z | 2018-11-11T15:08:36.000Z | apps/entity/models/group/__init__.py | dy1zan/softwarecapstone | c121a2b2d43b72aac19b75c31519711c0ace9c02 | [
"MIT"
] | null | null | null | from .group import Group
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b2b2429cbb37b823092275aa72379a1bd141465 | 38 | py | Python | vega/modules/necks/__init__.py | jie311/vega | 1bba6100ead802697e691403b951e6652a99ccae | [
"MIT"
] | 724 | 2020-06-22T12:05:30.000Z | 2022-03-31T07:10:54.000Z | vega/modules/necks/__init__.py | jie311/vega | 1bba6100ead802697e691403b951e6652a99ccae | [
"MIT"
] | 147 | 2020-06-30T13:34:46.000Z | 2022-03-29T11:30:17.000Z | vega/modules/necks/__init__.py | jie311/vega | 1bba6100ead802697e691403b951e6652a99ccae | [
"MIT"
] | 160 | 2020-06-29T18:27:58.000Z | 2022-03-23T08:42:21.000Z | from .parallel_fpn import ParallelFPN
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b35d0c948a0d8061ccfbc934d4df3b07235bd21 | 107 | py | Python | udemy/01_walkthrough/script2.py | inderpal2406/python | 7bd7d03a6b3cd09ff16a4447ff495a2393a87a33 | [
"MIT"
] | null | null | null | udemy/01_walkthrough/script2.py | inderpal2406/python | 7bd7d03a6b3cd09ff16a4447ff495a2393a87a33 | [
"MIT"
] | null | null | null | udemy/01_walkthrough/script2.py | inderpal2406/python | 7bd7d03a6b3cd09ff16a4447ff495a2393a87a33 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import script1
print(f"This is script2 and __name__ variable value is {__name__}")
| 21.4 | 67 | 0.766355 | 17 | 107 | 4.352941 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.130841 | 107 | 4 | 68 | 26.75 | 0.763441 | 0.196262 | 0 | 0 | 0 | 0 | 0.670588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
5b71d8f30fe552731a5f2190b3ffe243d27c1405 | 2,029 | py | Python | 2015/day/13/solution.py | iangregson/advent-of-code | e2a2dde30dcaed027a5ba78f9270f8a1976577f1 | [
"MIT"
] | null | null | null | 2015/day/13/solution.py | iangregson/advent-of-code | e2a2dde30dcaed027a5ba78f9270f8a1976577f1 | [
"MIT"
] | null | null | null | 2015/day/13/solution.py | iangregson/advent-of-code | e2a2dde30dcaed027a5ba78f9270f8a1976577f1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import os
from collections import defaultdict
from itertools import permutations
dir_path = os.path.dirname(os.path.realpath(__file__))
file = open(dir_path + "/input.txt", "r")
lines = file.readlines()
lines = [line.strip().rstrip('.') for line in lines]
# file = open(dir_path + "/test_input.txt", "r")
# lines = file.readlines()
# lines = [line.strip().rstrip('.') for line in lines]
# print(lines)
# Build graph
G = defaultdict(dict)
E = []
Ns = set()
for line in lines:
    tokens = line.split(" ")
    n, N = tokens[0], tokens[-1]
    cost = 0
    if tokens[2] == 'gain':
        cost += int(tokens[3])
    else:
        cost -= int(tokens[3])
    edge = (n, N, cost)
    G[n][N] = cost
    E.append(edge)
    Ns.add(n)

# print(G)
# print(E)
# print(Ns)

happiness = []
for tour in permutations(Ns):
    tour_happiness = 0
    for (idx, person) in enumerate(tour):
        next_idx = (idx + 1) % len(tour)
        next_person = tour[next_idx]
        cost1 = G[person][next_person]
        cost2 = G[next_person][person]
        tour_happiness += cost1 + cost2
    happiness.append(tour_happiness)

print("Part 1 answer:", max(happiness))

# Build graph
G = defaultdict(dict)
E = []
Ns = set()
Ns.add('Me')
for line in lines:
    tokens = line.split(" ")
    n, N = tokens[0], tokens[-1]
    cost = 0
    if tokens[2] == 'gain':
        cost += int(tokens[3])
    else:
        cost -= int(tokens[3])
    edge = (n, N, cost)
    G[n][N] = cost
    G['Me'][N] = 0
    G[n]['Me'] = 0
    E.append(edge)
    E.append(('Me', N, 0))
    E.append((n, 'Me', 0))
    Ns.add(n)

# print(G)
# print(E)
# print(Ns)

happiness = []
for tour in permutations(Ns):
    tour_happiness = 0
    for (idx, person) in enumerate(tour):
        next_idx = (idx + 1) % len(tour)
        next_person = tour[next_idx]
        cost1 = G[person][next_person]
        cost2 = G[next_person][person]
        tour_happiness += cost1 + cost2
    happiness.append(tour_happiness)

print("Part 2 answer:", max(happiness))
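The token slicing above (`tokens[0]`, `tokens[2]`, `tokens[3]`, `tokens[-1]`) can be isolated into a single helper; a sketch assuming the standard puzzle input format, with a hypothetical `parse_line` name:

```python
def parse_line(line):
    """Turn 'A would gain|lose N happiness units by sitting next to B.'
    into the edge tuple (A, B, +/-N)."""
    tokens = line.strip().rstrip('.').split(" ")
    sign = 1 if tokens[2] == 'gain' else -1
    return tokens[0], tokens[-1], sign * int(tokens[3])

edge = parse_line("Alice would lose 79 happiness units by sitting next to Bob.")
```

Here `edge` comes out as `('Alice', 'Bob', -79)`, matching the `(n, N, cost)` tuples built in both parsing loops.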
| 21.817204 | 54 | 0.583539 | 295 | 2,029 | 3.932203 | 0.216949 | 0.010345 | 0.031034 | 0.048276 | 0.768966 | 0.768966 | 0.768966 | 0.768966 | 0.713793 | 0.713793 | 0 | 0.020222 | 0.244455 | 2,029 | 92 | 55 | 22.054348 | 0.736464 | 0.117792 | 0 | 0.78125 | 0 | 0 | 0.033765 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.046875 | 0 | 0.046875 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ba03eea79e6526c17e85e553d782fae7b34e434 | 108 | py | Python | tests/fixtures/__init__.py | andreroggeri/br-to-ynab | c5d0ef3804bb575badc05ac6dc771f6a9281f955 | [
"MIT"
] | 5 | 2021-09-20T13:15:37.000Z | 2022-03-01T01:03:27.000Z | tests/fixtures/__init__.py | andreroggeri/br-to-ynab | c5d0ef3804bb575badc05ac6dc771f6a9281f955 | [
"MIT"
] | 4 | 2021-04-28T14:11:42.000Z | 2021-10-09T16:18:15.000Z | tests/fixtures/__init__.py | andreroggeri/br-to-ynab | c5d0ef3804bb575badc05ac6dc771f6a9281f955 | [
"MIT"
] | 1 | 2021-09-27T15:13:30.000Z | 2021-09-27T15:13:30.000Z | from .config import config_for_nubank, config_for_bradesco, config_for_alelo
from .ynab import ynab_account
| 36 | 76 | 0.87037 | 17 | 108 | 5.117647 | 0.529412 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 108 | 2 | 77 | 54 | 0.887755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ba169d04296c4c3800e341f00e9e47e8becca97 | 3,088 | py | Python | tests/plugins/mockserver/test_tracing_enabled.py | okutane/yandex-taxi-testsuite | 7e2e3dd5a65869ecbf37bf3f79cba7bb4e782b0c | [
"MIT"
] | 128 | 2020-03-10T09:13:41.000Z | 2022-02-11T20:16:16.000Z | tests/plugins/mockserver/test_tracing_enabled.py | okutane/yandex-taxi-testsuite | 7e2e3dd5a65869ecbf37bf3f79cba7bb4e782b0c | [
"MIT"
] | 3 | 2021-11-01T12:31:27.000Z | 2022-02-11T13:08:38.000Z | tests/plugins/mockserver/test_tracing_enabled.py | okutane/yandex-taxi-testsuite | 7e2e3dd5a65869ecbf37bf3f79cba7bb4e782b0c | [
"MIT"
] | 22 | 2020-03-05T07:13:12.000Z | 2022-03-15T10:30:58.000Z | # pylint: disable=protected-access
import aiohttp.web
import pytest
from testsuite.plugins import mockserver as mockserver_module
# pylint: disable=invalid-name
async def test_mockserver_responds_with_handler_to_current_test(
        mockserver, create_service_client,
):
    @mockserver.handler('/arbitrary/path')
    def _handler(request):
        return aiohttp.web.Response(text='arbitrary text', status=200)

    client = create_service_client(
        mockserver.base_url,
        service_headers={mockserver.trace_id_header: mockserver.trace_id},
    )
    response = await client.post('arbitrary/path')
    assert response.status_code == 200
    assert response.text == 'arbitrary text'


async def test_mockserver_responds_with_json_handler_to_current_test(
        mockserver, create_service_client,
):
    @mockserver.json_handler('/arbitrary/path')
    def _json_handler(request):
        return {'arbitrary_key': True}

    client = create_service_client(
        mockserver.base_url,
        service_headers={mockserver.trace_id_header: mockserver.trace_id},
    )
    response = await client.post('arbitrary/path')
    assert response.status_code == 200
    assert response.json() == {'arbitrary_key': True}


async def test_mockserver_skips_handler_and_responds_500_to_other_test(
        mockserver, create_service_client,
):
    @mockserver.handler('/arbitrary/path')
    def _handler(request):
        return aiohttp.web.Response(text='arbitrary text', status=200)

    client = create_service_client(
        mockserver.base_url,
        service_headers={
            mockserver.trace_id_header: mockserver_module._generate_trace_id(),
        },
    )
    response = await client.post('arbitrary/path')
    assert response.status_code == 500
    assert response.text == mockserver_module.REQUEST_FROM_ANOTHER_TEST_ERROR


async def test_mockserver_skips_json_handler_and_responds_500_to_other_test(
        mockserver, create_service_client,
):
    @mockserver.json_handler('/arbitrary/path')
    def _json_handler(request):
        return {'arbitrary_key': True}

    client = create_service_client(
        mockserver.base_url,
        service_headers={
            mockserver.trace_id_header: mockserver_module._generate_trace_id(),
        },
    )
    response = await client.post('arbitrary/path')
    assert response.status_code == 500
    assert response.text == mockserver_module.REQUEST_FROM_ANOTHER_TEST_ERROR


@pytest.mark.parametrize(
    'http_headers',
    [
        {},  # no trace_id in http headers
        {mockserver_module._DEFAULT_TRACE_ID_HEADER: ''},
        {
            mockserver_module._DEFAULT_TRACE_ID_HEADER: (
                'id_without_testsuite-_prefix'
            ),
        },
    ],
)
async def test_mockserver_responds_500_on_unhandled_request_from_other_sources(
        mockserver, http_headers, create_service_client,
):
    client = create_service_client(
        mockserver.base_url, service_headers=http_headers,
    )
    response = await client.post('arbitrary/path')
    assert response.status_code == 500
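The routing rule these tests exercise, where the mock server only dispatches to handlers registered by the test whose trace id is in the request header, can be sketched in a few lines. This is an illustration with hypothetical names (`TRACE_ID_HEADER`, `classify`), not testsuite's actual implementation:

```python
import uuid

TRACE_ID_HEADER = 'X-YaTraceId'  # hypothetical header name for this sketch

def generate_trace_id():
    # testsuite-style ids carry a recognizable prefix so the server can tell
    # its own traffic from requests coming from other sources
    return 'testsuite-' + uuid.uuid4().hex

def classify(request_headers, current_trace_id):
    trace_id = request_headers.get(TRACE_ID_HEADER, '')
    if trace_id == current_trace_id:
        return 'dispatch'           # request belongs to the running test
    if trace_id.startswith('testsuite-'):
        return 'other-test-500'     # stale request from another test
    return 'unknown-source-500'     # empty or foreign trace id

current = generate_trace_id()
```

This mirrors the three outcomes asserted above: same trace id is served by the registered handler, a different `testsuite-` id gets the "request from another test" 500, and anything else is treated as an unhandled request.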
| 29.692308 | 79 | 0.714054 | 351 | 3,088 | 5.897436 | 0.188034 | 0.037198 | 0.091787 | 0.126087 | 0.833333 | 0.792754 | 0.725121 | 0.725121 | 0.725121 | 0.682609 | 0 | 0.012097 | 0.196891 | 3,088 | 103 | 80 | 29.980583 | 0.822581 | 0.028821 | 0 | 0.607595 | 0 | 0 | 0.083806 | 0.009349 | 0 | 0 | 0 | 0 | 0.113924 | 1 | 0.050633 | false | 0 | 0.037975 | 0.050633 | 0.139241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
75104617008938af895fcbe46077b0c8d76d36d0 | 129 | py | Python | lambdapool/exceptions.py | rorodata/lambdapool | da7b514496f75484541ebcbcc596f1f72dab09bf | [
"Apache-2.0"
] | null | null | null | lambdapool/exceptions.py | rorodata/lambdapool | da7b514496f75484541ebcbcc596f1f72dab09bf | [
"Apache-2.0"
] | null | null | null | lambdapool/exceptions.py | rorodata/lambdapool | da7b514496f75484541ebcbcc596f1f72dab09bf | [
"Apache-2.0"
] | 1 | 2019-12-30T12:46:24.000Z | 2019-12-30T12:46:24.000Z |
class LambdaPoolError(Exception):
    pass


class LambdaFunctionError(Exception):
    pass


class AWSError(Exception):
    pass
| 12.9 | 37 | 0.744186 | 12 | 129 | 8 | 0.5 | 0.40625 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 129 | 9 | 38 | 14.333333 | 0.914286 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
75306ab244d8390e38c5c1d7687d2acf47eaaf1c | 3,591 | py | Python | oop/property/class_book_property.py | levs72/pyneng-examples | d6288292dcf9d1ebc5a9db4a0d620bd11b4a2df9 | [
"MIT"
] | 11 | 2021-04-05T09:30:23.000Z | 2022-03-09T13:27:56.000Z | oop/property/class_book_property.py | levs72/pyneng-examples | d6288292dcf9d1ebc5a9db4a0d620bd11b4a2df9 | [
"MIT"
] | null | null | null | oop/property/class_book_property.py | levs72/pyneng-examples | d6288292dcf9d1ebc5a9db4a0d620bd11b4a2df9 | [
"MIT"
] | 11 | 2021-04-06T03:44:35.000Z | 2022-03-04T21:20:40.000Z | ## Standard use of property without a setter
class Book:
    def __init__(self, title, price, quantity):
        self.title = title
        self.price = price
        self.quantity = quantity

    # a method decorated with property becomes the getter
    @property
    def total(self):
        print("getter")
        return self.price * self.quantity


## Standard use of property with a setter
class Book:
    def __init__(self, title, price, quantity):
        self.title = title
        self.price = price
        self.quantity = quantity

    # total remains a read-only attribute
    @property
    def total(self):
        return round(self.price * self.quantity, 2)

    # while price is readable and writable
    @property  # this method becomes the getter
    def price(self):
        print("price getter")
        return self._price

    # the value is validated on assignment
    @price.setter
    def price(self, value):
        print("price setter")
        if not isinstance(value, (int, float)):
            raise TypeError("The value must be a number")
        if not value >= 0:
            raise ValueError("The value must be positive")
        self._price = float(value)


# Decorators with an explicit getter
class Book:
    def __init__(self, title, price, quantity):
        self.title = title
        self.price = price
        self.quantity = quantity

    # create an empty property for total
    total = property()

    @total.getter
    def total(self):
        return round(self.price * self.quantity, 2)

    # create an empty property for price
    price = property()

    # attach the getter later
    @price.getter
    def price(self):
        print("price getter")
        return self._price

    @price.setter
    def price(self, value):
        print("price setter")
        if not isinstance(value, (int, float)):
            raise TypeError("The value must be a number")
        if not value >= 0:
            raise ValueError("The value must be positive")
        self._price = float(value)


# property without decorators
class Book:
    def __init__(self, title, price, quantity):
        self.title = title
        self.price = price
        self.quantity = quantity

    def _get_total(self):
        return round(self.price * self.quantity, 2)

    def _get_price(self):
        print("price getter")
        return self._price

    def _set_price(self, value):
        print("price setter")
        if not isinstance(value, (int, float)):
            raise TypeError("The value must be a number")
        if not value >= 0:
            raise ValueError("The value must be positive")
        self._price = float(value)

    total = property(_get_total)
    price = property(_get_price, _set_price)


# property without decorators, ver 2
class Book:
    def __init__(self, title, price, quantity):
        self.title = title
        self.price = price
        self.quantity = quantity

    def _get_total(self):
        return round(self.price * self.quantity, 2)

    def _get_price(self):
        print("price getter")
        return self._price

    def _set_price(self, value):
        print("price setter")
        if not isinstance(value, (int, float)):
            raise TypeError("The value must be a number")
        if not value >= 0:
            raise ValueError("The value must be positive")
        self._price = float(value)

    total = property()
    total = total.getter(_get_total)
    price = property()
    price = price.getter(_get_price)
    price = price.setter(_set_price)
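All five variants expose the same interface. A short usage sketch (a self-contained copy of the decorator version, with the debug prints dropped) showing the computed read-only `total` and the validating `price` setter:

```python
class Book:
    def __init__(self, title, price, quantity):
        self.title = title
        self.price = price          # routed through the price setter below
        self.quantity = quantity

    @property
    def total(self):                # read-only, recomputed on each access
        return round(self.price * self.quantity, 2)

    @property
    def price(self):
        return self._price

    @price.setter
    def price(self, value):
        if not isinstance(value, (int, float)):
            raise TypeError("The value must be a number")
        if not value >= 0:
            raise ValueError("The value must be positive")
        self._price = float(value)

book = Book("Fluent Python", 9.99, 3)
```

Reading `book.total` now yields `29.97`, updating `book.price` changes `total` on the next access, and assigning a negative price raises `ValueError`.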
| 27.412214 | 66 | 0.628516 | 420 | 3,591 | 5.25 | 0.159524 | 0.073469 | 0.077098 | 0.036281 | 0.712925 | 0.712925 | 0.712925 | 0.712925 | 0.712925 | 0.712925 | 0 | 0.00349 | 0.281816 | 3,591 | 130 | 67 | 27.623077 | 0.851493 | 0.133668 | 0 | 0.902174 | 0 | 0 | 0.111793 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.195652 | false | 0 | 0 | 0.043478 | 0.445652 | 0.097826 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
753f6f871578fee0abddaadd84fe83762fc5fd23 | 48 | py | Python | ppln/schedulers/__init__.py | amirassov/mmcv | aa517bbc62823d2c68753dd33ca1f840a75ceb2c | [
"MIT"
] | 10 | 2020-02-10T10:43:59.000Z | 2021-04-22T11:32:16.000Z | ppln/schedulers/__init__.py | amirassov/mmcv | aa517bbc62823d2c68753dd33ca1f840a75ceb2c | [
"MIT"
] | 3 | 2020-07-16T14:06:28.000Z | 2020-07-16T14:06:38.000Z | ppln/schedulers/__init__.py | amirassov/mmcv | aa517bbc62823d2c68753dd33ca1f840a75ceb2c | [
"MIT"
] | 2 | 2019-12-23T07:52:56.000Z | 2019-12-23T08:20:30.000Z | from .cosine import CosineAnnealingWithWarmupLR
| 24 | 47 | 0.895833 | 4 | 48 | 10.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.977273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
754e29d616b767924d014f4530e050d00950481e | 46 | py | Python | criss_cross/envs/__init__.py | mark-gluzman/RLinOR | 2c54dfebdd8248353baf4c0e703ef29d642cc5a7 | [
"MIT"
] | null | null | null | criss_cross/envs/__init__.py | mark-gluzman/RLinOR | 2c54dfebdd8248353baf4c0e703ef29d642cc5a7 | [
"MIT"
] | null | null | null | criss_cross/envs/__init__.py | mark-gluzman/RLinOR | 2c54dfebdd8248353baf4c0e703ef29d642cc5a7 | [
"MIT"
] | null | null | null | from criss_cross.envs.cc_env import CrissCross | 46 | 46 | 0.891304 | 8 | 46 | 4.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 1 | 46 | 46 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
755ebf006e7e23b978e12adf996be785ba00cf9d | 12,720 | py | Python | reclist/metrics/metadata_distribution.py | bbc/datalab-reclist | 42ea321856e02e46cca8e3b032b3b088ff328f57 | [
"MIT"
] | null | null | null | reclist/metrics/metadata_distribution.py | bbc/datalab-reclist | 42ea321856e02e46cca8e3b032b3b088ff328f57 | [
"MIT"
] | null | null | null | reclist/metrics/metadata_distribution.py | bbc/datalab-reclist | 42ea321856e02e46cca8e3b032b3b088ff328f57 | [
"MIT"
] | null | null | null | import os
import matplotlib.pyplot as plt
import numpy as np
from collections import defaultdict, Counter
from reclist.current import current
from reclist.utils.vectorise_sounds_data import generate_genre_dict
def round_up(number):
    return int(number) + (number % 1 > 0)


def genre_distribution_by_gender(enriched_items, y_test, y_preds, k=10, top_genres=10,
                                 first_genre_only=False, debug=True):
    """
    Calculates the distribution of genres by gender in testing data
    """
    genre_dict = generate_genre_dict(enriched_items)
    genres_per_gender = defaultdict(list)
    genres_count_per_gender = {}
    for target, pred in zip(y_test, y_preds):
        predicted_genres = []
        target_gender = target[0]["gender"]
        if not target_gender:
            target_gender = 'unknown'
        for resource_obj in pred[:k]:
            try:
                predicted_genres.extend(
                    [genre_dict.get(resource_obj["resourceId"])[0]]) if first_genre_only else predicted_genres.extend(
                    genre_dict.get(resource_obj["resourceId"]))
            except IndexError:
                print(genre_dict.get(resource_obj["resourceId"]))
        genres_per_gender[target_gender].append(Counter(predicted_genres))
    genres_counter = defaultdict(list)
    for gender in sorted(genres_per_gender.keys()):
        gender_len = len(genres_per_gender[gender])
        for result_set in genres_per_gender[gender]:
            for genre, occurrences in result_set.items():
                genres_counter[genre].append(occurrences)
        # padding to account for genres which appear in smaller number of recs
        for genre in genres_counter.keys():
            padded_genre = genres_counter[genre]
            padded_genre += [0] * (gender_len - len(padded_genre))
            genres_counter[genre] = padded_genre
        genres_count_per_gender[gender] = Counter(
            {genre: np.mean(genres_counter[genre]) for genre in genres_counter.keys()}).most_common(top_genres)
    if debug:
        nrows = round_up(len(genres_count_per_gender.keys()) / 2)
        ncols = 2
        fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(15, 12))
        gen_counter = 0
        for row in ax:
            for col in row:
                try:
                    gender = genres_count_per_gender[list(genres_count_per_gender.keys())[gen_counter]]
                    x_tick_names = [genre[0] for genre in gender]
                    x_tick_idx = list(range(len(x_tick_names)))
                    col.barh(
                        # x_tick_names,
                        x_tick_idx,
                        [genre[1] for genre in gender],
                        align='center', tick_label=x_tick_names
                    )
                    col.set_title(list(genres_count_per_gender.keys())[gen_counter], y=1.0, pad=-14, fontsize=8)
                    col.set_xlabel(f'Mean no. of items \n per genre (top {k} recs)', fontsize=8)
                    gen_counter += 1
                except IndexError:
                    pass
        fig.tight_layout()
        plt.savefig(os.path.join(current.report_path,
                                 'plots',
                                 f'genres_count_per_gender.png'))
        plt.clf()
    return genres_count_per_gender
def genre_distribution_by_agerange(enriched_items, y_test, y_preds, k=10, top_genres=10,
                                   first_genre_only=False, debug=True):
    """
    Calculates the distribution of genre by age range in testing data
    """
    # extract genres
    genre_dict = generate_genre_dict(enriched_items)
    genres_per_age_range = defaultdict(list)
    genres_count_per_age_range = {}
    for target, pred in zip(y_test, y_preds):
        predicted_genres = []
        target_age_range = target[0]["age_range"]
        if not target_age_range:
            target_age_range = 'unknown'
        for resource_obj in pred[:k]:
            try:
                predicted_genres.extend(
                    [genre_dict.get(resource_obj["resourceId"])[0]]) if first_genre_only else predicted_genres.extend(
                    genre_dict.get(resource_obj["resourceId"]))
            except IndexError:
                print(genre_dict.get(resource_obj["resourceId"]))
        genres_per_age_range[target_age_range].append(Counter(predicted_genres))
    genres_counter = defaultdict(list)
    for age_range in sorted(genres_per_age_range.keys()):
        age_range_len = len(genres_per_age_range[age_range])
        for result_set in genres_per_age_range[age_range]:
            for genre, occurrences in result_set.items():
                genres_counter[genre].append(occurrences)
        # padding to account for genres which appear in smaller number of recs
        for genre in genres_counter.keys():
            padded_genre = genres_counter[genre]
            padded_genre += [0] * (age_range_len - len(padded_genre))
            genres_counter[genre] = padded_genre
        genres_count_per_age_range[age_range] = Counter(
            {genre: np.mean(genres_counter[genre]) for genre in genres_counter.keys()}).most_common(top_genres)
    # plots
    if debug:
        nrows = round_up(len(genres_count_per_age_range.keys()) / 2)
        ncols = 2
        fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(15, 12))
        gen_counter = 0
        for row in ax:
            for col in row:
                try:
                    age_range = genres_count_per_age_range[list(genres_count_per_age_range.keys())[gen_counter]]
                    x_tick_names = [genre[0] for genre in age_range]
                    x_tick_idx = list(range(len(x_tick_names)))
                    col.barh(
                        x_tick_idx,
                        [genre[1] for genre in age_range],
                        align='center', tick_label=x_tick_names
                    )
                    col.set_title(list(genres_count_per_age_range.keys())[gen_counter], y=1.0, pad=-14, fontsize=8)
                    col.set_xlabel(f'Mean no. of items \n per genre (top {k} recs)', fontsize=8)
                    gen_counter += 1
                except IndexError:
                    pass
        fig.tight_layout()
        plt.savefig(os.path.join(current.report_path,
                                 'plots',
                                 f'genres_count_per_age_range.png'))
        plt.clf()
    return genres_count_per_age_range
def masterbrand_distribution_by_gender(enriched_items, y_test, y_preds, k=10, top_masterbrands=10, debug=True):
    """
    Calculates the distribution of masterbrands by gender in testing data
    """
    masterbrand_dict = {item['resource_id']: item['master_brand'] for item in enriched_items}
    masterbrand_per_gender = defaultdict(list)
    masterbrand_count_per_gender = {}
    for target, pred in zip(y_test, y_preds):
        predicted_masterbrands = []
        target_gender = target[0]["gender"]
        if not target_gender:
            target_gender = 'unknown'
        for resource_obj in pred[:k]:
            predicted_masterbrands.append(masterbrand_dict.get(resource_obj["resourceId"]))
        masterbrand_per_gender[target_gender].append(Counter(predicted_masterbrands))
    masterbrands_counter = defaultdict(list)
    for gender in sorted(masterbrand_per_gender.keys()):
        gender_len = len(masterbrand_per_gender[gender])
        for result_set in masterbrand_per_gender[gender]:
            for genre, occurrences in result_set.items():
                masterbrands_counter[genre].append(occurrences)
        # padding to account for genres which appear in smaller number of recs
        for genre in masterbrands_counter.keys():
            padded_genre = masterbrands_counter[genre]
            padded_genre += [0] * (gender_len - len(padded_genre))
            masterbrands_counter[genre] = padded_genre
        masterbrand_count_per_gender[gender] = Counter(
            {masterbrand: np.mean(masterbrands_counter[masterbrand]) for masterbrand in
             masterbrands_counter.keys()}).most_common(top_masterbrands)
    if debug:
        nrows = round_up(len(masterbrand_count_per_gender.keys()) / 2)
        ncols = 2
        fig, ax = plt.subplots(nrows=nrows, ncols=ncols)
        mb_counter = 0
        for row in ax:
            for col in row:
                try:
                    gender = masterbrand_count_per_gender[list(masterbrand_count_per_gender.keys())[mb_counter]]
                    x_tick_names = [masterbrand[0] for masterbrand in gender]
                    x_tick_idx = list(range(len(x_tick_names)))
                    col.barh(
                        # x_tick_names,
                        x_tick_idx,
                        [masterbrand[1] for masterbrand in gender],
                        align='center', tick_label=x_tick_names
                    )
                    col.set_title(list(masterbrand_count_per_gender.keys())[mb_counter], y=1.0, pad=-14, fontsize=8)
                    col.set_xlabel(f'Avg. no. of items per \nmasterbrand (top {k} recs)', fontsize=8)
                    mb_counter += 1
                except IndexError:
                    pass
        fig.tight_layout()
        plt.savefig(os.path.join(current.report_path,
                                 'plots',
                                 f'masterbrand_count_per_gender.png'))
        plt.clf()
    return masterbrand_count_per_gender
def masterbrand_distribution_by_agerange(enriched_items, y_test, y_preds, k=10, top_masterbrands=10, debug=True):
    """
    Calculates the distribution of masterbrand by age range in testing data
    """
    masterbrand_dict = {item['resource_id']: item['master_brand'] for item in enriched_items}
    # hits = defaultdict(int)
    masterbrands_per_age_range = defaultdict(list)
    masterbrand_count_per_age_range = {}
    for target, pred in zip(y_test, y_preds):
        predicted_masterbrands = []
        target_age_range = target[0]["age_range"]
        if not target_age_range:
            target_age_range = 'unknown'
        for resource_obj in pred[:k]:
            predicted_masterbrands.append(masterbrand_dict.get(resource_obj["resourceId"]))
        masterbrands_per_age_range[target_age_range].append(Counter(predicted_masterbrands))
    masterbrands_counter = defaultdict(list)
    for age_range in sorted(masterbrands_per_age_range.keys()):
        age_range_len = len(masterbrands_per_age_range[age_range])
        for result_set in masterbrands_per_age_range[age_range]:
            for masterbrand, occurrences in result_set.items():
                masterbrands_counter[masterbrand].append(occurrences)
        # padding to account for genres which appear in smaller number of recs
        for masterbrand in masterbrands_counter.keys():
            padded_masterbrand = masterbrands_counter[masterbrand]
            padded_masterbrand += [0] * (age_range_len - len(padded_masterbrand))
            masterbrands_counter[masterbrand] = padded_masterbrand
        masterbrand_count_per_age_range[age_range] = Counter(
            {masterbrand: np.mean(masterbrands_counter[masterbrand]) for masterbrand in
             masterbrands_counter.keys()}).most_common(top_masterbrands)
    # plots
    if debug:
        nrows = round_up(len(masterbrand_count_per_age_range.keys()) / 2)
        ncols = 2
        fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(15, 10))
        mb_counter = 0
        for row in ax:
            for col in row:
                try:
                    age_range = masterbrand_count_per_age_range[
                        list(masterbrand_count_per_age_range.keys())[mb_counter]]
                    x_tick_names = [masterbrand[0] for masterbrand in age_range]
                    x_tick_idx = list(range(len(x_tick_names)))
                    col.barh(
                        x_tick_idx,
                        [masterbrand[1] for masterbrand in age_range],
                        align='center', tick_label=x_tick_names
                    )
                    col.set_title(list(masterbrand_count_per_age_range.keys())[mb_counter], y=1.0, pad=-14, fontsize=8)
                    col.set_xlabel(f'Avg. no. of items per \nmasterbrand (top {k} recs)', fontsize=8)
                    mb_counter += 1
                except IndexError:
                    pass
        fig.tight_layout()
        plt.savefig(os.path.join(current.report_path,
                                 'plots',
                                 f'masterbrands_count_per_age_range.png'))
        plt.clf()
    return masterbrand_count_per_age_range
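All four functions share the same aggregation step: collect per-result-set occurrence counts, zero-pad keys that are missing from some result sets, then average. A minimal standalone sketch of just that step, with a hypothetical `padded_mean_counts` helper name:

```python
from collections import Counter, defaultdict

def padded_mean_counts(result_sets):
    """Mean occurrences per key across result sets; keys absent from a
    result set count as zero, mirroring the padding step above."""
    occurrences = defaultdict(list)
    for result_set in result_sets:
        for key, count in result_set.items():
            occurrences[key].append(count)
    n = len(result_sets)
    return {
        key: sum(counts + [0] * (n - len(counts))) / n
        for key, counts in occurrences.items()
    }

means = padded_mean_counts([
    Counter({'drama': 2, 'news': 1}),
    Counter({'drama': 1}),
])
```

With the two sample result sets, 'drama' appears [2, 1] and 'news' appears [1, 0] after padding, so the means are 1.5 and 0.5 respectively; without the zero-padding, 'news' would be overstated as 1.0.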
| 40.253165 | 119 | 0.609591 | 1,531 | 12,720 | 4.766166 | 0.093403 | 0.061395 | 0.039194 | 0.035083 | 0.928875 | 0.895025 | 0.871317 | 0.787447 | 0.747842 | 0.656845 | 0 | 0.009936 | 0.303695 | 12,720 | 315 | 120 | 40.380952 | 0.813932 | 0.048349 | 0 | 0.621145 | 0 | 0 | 0.045118 | 0.010386 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022026 | false | 0.017621 | 0.026432 | 0.004405 | 0.070485 | 0.008811 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f351124db469647604e6a3f2c4659b97b636a995 | 211 | py | Python | apps/secure_url/mixins.py | fryta/sercure-url | 06029e8e3a95616f939f62f04c260d14d128f0b4 | [
"MIT"
] | null | null | null | apps/secure_url/mixins.py | fryta/sercure-url | 06029e8e3a95616f939f62f04c260d14d128f0b4 | [
"MIT"
] | 7 | 2020-02-11T23:49:48.000Z | 2022-01-13T01:05:42.000Z | apps/secure_url/mixins.py | fryta/secure-url | 06029e8e3a95616f939f62f04c260d14d128f0b4 | [
"MIT"
] | null | null | null | from django.contrib.auth.mixins import UserPassesTestMixin
class EditOnlyOwnSecuredEntitiesMixin(UserPassesTestMixin):
    def test_func(self):
        return self.get_object().user.pk == self.request.user.pk
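A sketch of the access check this mixin performs: inside Django's `UserPassesTestMixin` dispatch, `test_func` compares the owner of the object being edited against the requesting user. The stand-in objects below are hypothetical (no Django required); they only mirror the comparison logic:

```python
from types import SimpleNamespace


class FakeSecuredView:
    """Hypothetical stand-in mimicking EditOnlyOwnSecuredEntitiesMixin.test_func."""

    def __init__(self, owner_pk, request_user_pk):
        # The secured entity owned by `owner_pk`
        self._obj = SimpleNamespace(user=SimpleNamespace(pk=owner_pk))
        # The incoming request's authenticated user
        self.request = SimpleNamespace(user=SimpleNamespace(pk=request_user_pk))

    def get_object(self):
        return self._obj

    def test_func(self):
        # Same comparison as the mixin: allow editing only one's own entities
        return self.get_object().user.pk == self.request.user.pk
```

In the real mixin, a `False` result makes `UserPassesTestMixin` return a 403 (or redirect to login) before the view runs.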
| 30.142857 | 64 | 0.78673 | 24 | 211 | 6.833333 | 0.791667 | 0.073171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123223 | 211 | 6 | 65 | 35.166667 | 0.886486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.5 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
f36803e44081834548ac1f3e13a66c28b269f627 | 115 | py | Python | core/validators/cat.py | mzrwalzy/NJUPTCats | 325dde6f48cac7dc85490935d78439d3b23a8395 | [
"MIT"
] | null | null | null | core/validators/cat.py | mzrwalzy/NJUPTCats | 325dde6f48cac7dc85490935d78439d3b23a8395 | [
"MIT"
] | null | null | null | core/validators/cat.py | mzrwalzy/NJUPTCats | 325dde6f48cac7dc85490935d78439d3b23a8395 | [
"MIT"
] | null | null | null |
import typing as tp
from core.validators._base import BaseValidator
class CatValidator(BaseValidator):
    pass
| 14.375 | 47 | 0.8 | 14 | 115 | 6.5 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156522 | 115 | 7 | 48 | 16.428571 | 0.938144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f37990b546183d4d6faa45d1d602c02abaefaee9 | 10,427 | py | Python | tests/test_get_scores.py | hsbc/pyratings | f71d7d2e9030f0e34eb6dd8f2e753611d049302c | [
"Apache-2.0"
] | 9 | 2022-03-25T12:48:28.000Z | 2022-03-28T15:17:49.000Z | tests/test_get_scores.py | rbirkby/pyratings | eb9dfa35dfec9676e4d16152bbcae278444869d4 | [
"Apache-2.0"
] | null | null | null | tests/test_get_scores.py | rbirkby/pyratings | eb9dfa35dfec9676e4d16152bbcae278444869d4 | [
"Apache-2.0"
] | 3 | 2022-03-25T13:27:55.000Z | 2022-03-28T10:06:28.000Z | """
Copyright 2022 HSBC Global Asset Management (Deutschland) GmbH
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import numpy as np
import pandas as pd
import pytest
from pandas.testing import assert_frame_equal, assert_series_equal
import pyratings as rtg
from tests import conftest
# --- input: single rating/warf
@pytest.mark.parametrize(
    ["rating_provider", "rating", "score"],
    list(
        pd.concat(
            [
                conftest.rtg_df_long,
                conftest.scores_df_long["rtg_score"],
            ],
            axis=1,
        ).to_records(index=False)
    ),
)
def test_get_scores_from_single_rating_longterm(rating_provider, rating, score):
    """Tests if function can handle single string objects."""
    act = rtg.get_scores_from_ratings(
        ratings=rating, rating_provider=rating_provider, tenor="long-term"
    )
    assert act == score
@pytest.mark.parametrize(
    ["rating_provider", "rating", "score"],
    list(
        pd.concat(
            [
                conftest.rtg_df_long_st,
                conftest.scores_df_long_st["rtg_score"],
            ],
            axis=1,
        ).to_records(index=False)
    ),
)
def test_get_scores_from_single_rating_shortterm(rating_provider, rating, score):
    """Tests if function can handle single string objects."""
    act = rtg.get_scores_from_ratings(
        ratings=rating, rating_provider=rating_provider, tenor="short-term"
    )
    assert act == score
@pytest.mark.parametrize("tenor", ["long-term", "short-term"])
def test_get_scores_from_single_rating_invalid_rating_provider(tenor):
    """Tests if correct error message will be raised."""
    with pytest.raises(AssertionError) as err:
        rtg.get_scores_from_ratings(ratings="AA", rating_provider="foo", tenor=tenor)
    assert str(err.value) == conftest.ERR_MSG


@pytest.mark.parametrize("tenor", ["long-term", "short-term"])
def test_get_scores_with_invalid_single_rating(tenor):
    """Tests if function returns NaN for invalid inputs."""
    act = rtg.get_scores_from_ratings(
        ratings="foo", rating_provider="Fitch", tenor=tenor
    )
    assert pd.isna(act)


@pytest.mark.parametrize("tenor", ["long-term", "short-term"])
def test_get_scores_with_single_rating_and_no_rating_provider(tenor):
    """Tests if correct error message will be raised."""
    with pytest.raises(ValueError) as err:
        rtg.get_scores_from_ratings(ratings="BBB", tenor=tenor)
    assert str(err.value) == "'rating_provider' must not be None."
@pytest.mark.parametrize(
    "warf, score",
    [
        (1, 1),
        (6, 2),
        (54.9999, 4),
        (55, 5),
        (55.00001, 5),
        (400, 9),
        (10_000, 22),
    ],
)
def test_get_scores_from_single_warf(warf, score):
    """Tests if function can correctly handle individual warf (float)."""
    act = rtg.get_scores_from_warf(warf=warf)
    assert act == score


@pytest.mark.parametrize("warf", [np.nan, -5, 20000.5])
def test_get_scores_from_invalid_single_warf(warf):
    """Tests if function returns NaN for invalid inputs."""
    assert pd.isna(rtg.get_scores_from_warf(warf=warf))
# --- input: ratings series
@pytest.mark.parametrize(
    ["rating_provider", "scores_series", "ratings_series"],
    conftest.params_provider_scores_ratings_lt,
)
def test_get_scores_from_ratings_series_longterm(
    rating_provider, ratings_series, scores_series
):
    """Tests if function can correctly handle pd.Series objects."""
    scores_series.name = f"rtg_score_{rating_provider}"
    act = rtg.get_scores_from_ratings(
        ratings=ratings_series, rating_provider=rating_provider
    )
    assert_series_equal(act, scores_series)


@pytest.mark.parametrize(
    ["rating_provider", "scores_series", "ratings_series"],
    conftest.params_provider_scores_ratings_st,
)
def test_get_scores_from_ratings_series_shortterm(
    rating_provider, ratings_series, scores_series
):
    """Tests if function can correctly handle pd.Series objects."""
    scores_series.name = f"rtg_score_{rating_provider}"
    act = rtg.get_scores_from_ratings(
        ratings=ratings_series, rating_provider=rating_provider, tenor="short-term"
    )
    assert_series_equal(act, scores_series)
@pytest.mark.parametrize("tenor", ["long-term", "short-term"])
def test_get_scores_from_ratings_series_invalid_rating_provider(tenor):
    """Tests if correct error message will be raised."""
    with pytest.raises(AssertionError) as err:
        rtg.get_scores_from_ratings(
            ratings=pd.Series(data=["AAA", "AA", "D"], name="rating"),
            rating_provider="foo",
            tenor=tenor,
        )
    assert str(err.value) == conftest.ERR_MSG


@pytest.mark.parametrize("tenor", ["long-term", "short-term"])
def test_get_scores_from_invalid_ratings_series(tenor):
    """Tests if function can correctly handle pd.Series objects."""
    ratings_series = pd.Series(data=[np.nan, "foo", -10], name="rating")
    scores_series = pd.Series(data=[np.nan, np.nan, np.nan], name="rtg_score_Fitch")
    act = rtg.get_scores_from_ratings(
        ratings=ratings_series, rating_provider="Fitch", tenor=tenor
    )
    assert_series_equal(act, scores_series)


def test_get_scores_from_warf_series():
    """Tests if function can correctly handle pd.Series objects."""
    warf_series = conftest.warf_df_wide.iloc[:, 0]
    scores_series = conftest.scores_df_wide.iloc[:, 0]
    scores_series.name = "rtg_score"
    act = rtg.get_scores_from_warf(warf=warf_series)
    assert_series_equal(act, scores_series)


def test_get_scores_from_invalid_warf_series():
    """Tests if function can correctly handle pd.Series objects."""
    warf_series = pd.Series(data=[np.nan, "foo", -10], name="warf")
    scores_series = pd.Series(data=[np.nan, np.nan, np.nan], name="rtg_score")
    act = rtg.get_scores_from_warf(warf=warf_series)
    assert_series_equal(act, scores_series)
# --- input: ratings dataframe
exp_lt = conftest.scores_df_wide
exp_lt = pd.concat(
    [
        exp_lt,
        pd.DataFrame(
            data=[[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]],
            columns=exp_lt.columns,
        ),
    ],
    axis=0,
    ignore_index=True,
)
exp_lt.columns = [
    "rtg_score_Fitch",
    "rtg_score_Moody",
    "rtg_score_SP",
    "rtg_score_Bloomberg",
    "rtg_score_DBRS",
    "rtg_score_ICE",
]

exp_st = conftest.scores_df_wide_st
exp_st = pd.concat(
    [
        exp_st,
        pd.DataFrame(data=[[np.nan, np.nan, np.nan, np.nan]], columns=exp_st.columns),
    ],
    axis=0,
    ignore_index=True,
)
exp_st.columns = [
    "rtg_score_Fitch",
    "rtg_score_Moody",
    "rtg_score_SP",
    "rtg_score_DBRS",
]
def test_get_scores_from_ratings_dataframe_with_explicit_rating_provider_longterm():
    """Tests if function can correctly handle pd.DataFrame objects."""
    act = rtg.get_scores_from_ratings(
        ratings=conftest.rtg_df_wide_with_err_row,
        rating_provider=[
            "rtg_Fitch",
            "Moody's rating",
            "Rating S&P",
            "Bloomberg Bloomberg RATING",
            "DBRS",
            "ICE",
        ],
        tenor="long-term",
    )
    # noinspection PyTypeChecker
    assert_frame_equal(act, exp_lt)


def test_get_scores_from_ratings_dataframe_with_explicit_rating_provider_shortterm():
    """Tests if function can correctly handle pd.DataFrame objects."""
    act = rtg.get_scores_from_ratings(
        ratings=conftest.rtg_df_wide_st_with_err_row,
        rating_provider=[
            "rtg_Fitch",
            "Moody's rating",
            "Rating S&P",
            "DBRS",
        ],
        tenor="short-term",
    )
    # noinspection PyTypeChecker
    assert_frame_equal(act, exp_st)


def test_get_scores_from_ratings_dataframe_by_inferring_rating_provider_longterm():
    """Tests if function can correctly handle pd.DataFrame objects."""
    act = rtg.get_scores_from_ratings(
        ratings=conftest.rtg_df_wide_with_err_row, tenor="long-term"
    )
    # noinspection PyTypeChecker
    assert_frame_equal(act, exp_lt)


def test_get_scores_from_ratings_dataframe_by_inferring_rating_provider_shortterm():
    """Tests if function can correctly handle pd.DataFrame objects."""
    act = rtg.get_scores_from_ratings(
        ratings=conftest.rtg_df_wide_st_with_err_row, tenor="short-term"
    )
    # noinspection PyTypeChecker
    assert_frame_equal(act, exp_st)
@pytest.mark.parametrize("tenor", ["long-term", "short-term"])
def test_get_scores_from_ratings_dataframe_invalid_rating_provider(tenor):
    """Tests if correct error message will be raised."""
    with pytest.raises(AssertionError) as err:
        rtg.get_scores_from_ratings(
            ratings=conftest.rtg_df_wide, rating_provider="foo", tenor=tenor
        )
    assert str(err.value) == conftest.ERR_MSG


@pytest.mark.parametrize("tenor", ["long-term", "short-term"])
def test_get_scores_from_invalid_ratings_dataframe(tenor):
    """Tests if function can correctly handle pd.DataFrame objects."""
    act = rtg.get_scores_from_ratings(ratings=conftest.input_invalid_df, tenor=tenor)
    expectations = conftest.exp_invalid_df
    expectations.columns = ["rtg_score_Fitch", "rtg_score_DBRS"]
    # noinspection PyTypeChecker
    assert_frame_equal(act, expectations)


def test_get_scores_from_warf_dataframe():
    """Tests if function can correctly handle pd.DataFrame objects."""
    act = rtg.get_scores_from_warf(warf=conftest.warf_df_wide_with_err_row)
    # noinspection PyTypeChecker
    assert_frame_equal(act, exp_lt)


def test_get_scores_from_invalid_warf_dataframe():
    """Tests if function can correctly handle pd.DataFrame objects."""
    act = rtg.get_scores_from_warf(warf=conftest.input_invalid_df)
    expectations = conftest.exp_invalid_df
    expectations.columns = ["rtg_score_Fitch", "rtg_score_DBRS"]
    # noinspection PyTypeChecker
    assert_frame_equal(act, expectations)
| 32.584375 | 86 | 0.698955 | 1,386 | 10,427 | 4.955988 | 0.134199 | 0.05503 | 0.075702 | 0.066968 | 0.814529 | 0.811472 | 0.773038 | 0.735478 | 0.708254 | 0.684379 | 0 | 0.006871 | 0.190467 | 10,427 | 319 | 87 | 32.68652 | 0.806895 | 0.197756 | 0 | 0.463303 | 0 | 0 | 0.107485 | 0.006551 | 0 | 0 | 0 | 0 | 0.114679 | 1 | 0.09633 | false | 0 | 0.027523 | 0 | 0.123853 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f3b671835453bcad9342db7a5d778bed4d3e924e | 1,127 | py | Python | torchexpo/vision/image_classification/shufflenet.py | torchexpo/torchexpo | 88c875358e830065ee23f49f47d4995b5b2d3e3c | [
"Apache-2.0"
] | 23 | 2020-09-08T05:08:46.000Z | 2021-08-12T07:16:53.000Z | torchexpo/vision/image_classification/shufflenet.py | torchexpo/torchexpo | 88c875358e830065ee23f49f47d4995b5b2d3e3c | [
"Apache-2.0"
] | 1 | 2021-12-05T06:15:18.000Z | 2021-12-20T08:10:19.000Z | torchexpo/vision/image_classification/shufflenet.py | torchexpo/torchexpo | 88c875358e830065ee23f49f47d4995b5b2d3e3c | [
"Apache-2.0"
] | 2 | 2021-01-12T06:10:53.000Z | 2021-07-24T08:21:59.000Z | import torchvision
from torchexpo.modules import ImageClassificationModule
def shufflenet_v2_x0_5():
    """ShuffleNet V2 0.5x Model pre-trained on ImageNet"""
    model = torchvision.models.shufflenet_v2_x0_5(pretrained=True)
    obj = ImageClassificationModule(model, "ShuffleNet_v2_x0_5", model_example="default")
    return obj


def shufflenet_v2_x1_0():
    """ShuffleNet V2 1.0x Model pre-trained on ImageNet"""
    model = torchvision.models.shufflenet_v2_x1_0(pretrained=True)
    obj = ImageClassificationModule(model, "ShuffleNet_v2_x1_0", model_example="default")
    return obj
# def shufflenet_v2_x1_5():
# """ShuffleNet V2 1.5x Model pre-trained on ImageNet"""
# model = torchvision.models.shufflenet_v2_x1_5(pretrained=True)
# obj = ImageClassificationModule(model, "ShuffleNet_v2_x1_5", model_example="default")
# return obj
# def shufflenet_v2_x2_0():
# """ShuffleNet V2 2.0x Model pre-trained on ImageNet"""
# model = torchvision.models.shufflenet_v2_x2_0(pretrained=True)
# obj = ImageClassificationModule(model, "ShuffleNet_v2_x2_0", model_example="default")
# return obj | 41.740741 | 91 | 0.75244 | 147 | 1,127 | 5.496599 | 0.204082 | 0.237624 | 0.10396 | 0.084158 | 0.813119 | 0.813119 | 0.77599 | 0.77599 | 0.47401 | 0.306931 | 0 | 0.049638 | 0.14197 | 1,127 | 27 | 92 | 41.740741 | 0.785936 | 0.543035 | 0 | 0.2 | 0 | 0 | 0.100806 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
341415e48060a622d4be670188996ee2a77ade0f | 53 | py | Python | tests/core/test_import.py | vaporydev/bimini | 7c26efec585742ef870bf58ea5d96e2deb242775 | [
"MIT"
] | 7 | 2019-02-28T01:42:27.000Z | 2021-11-04T14:25:49.000Z | tests/core/test_import.py | vaporydev/bimini | 7c26efec585742ef870bf58ea5d96e2deb242775 | [
"MIT"
] | null | null | null | tests/core/test_import.py | vaporydev/bimini | 7c26efec585742ef870bf58ea5d96e2deb242775 | [
"MIT"
] | 3 | 2020-10-01T03:02:26.000Z | 2022-03-28T18:55:40.000Z |
def test_import():
    import bimini  # noqa: F401
| 10.6 | 31 | 0.641509 | 7 | 53 | 4.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.264151 | 53 | 4 | 32 | 13.25 | 0.769231 | 0.188679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3417d30c8003a78f64f68acf07fb416e55d09463 | 32 | py | Python | app.py | SRtsuki/SELab02 | 33a3728e2141838f3c797c1df40b1aa1758f23fd | [
"MIT"
] | null | null | null | app.py | SRtsuki/SELab02 | 33a3728e2141838f3c797c1df40b1aa1758f23fd | [
"MIT"
] | null | null | null | app.py | SRtsuki/SELab02 | 33a3728e2141838f3c797c1df40b1aa1758f23fd | [
"MIT"
] | null | null | null | #app.py
print("this is app.py")
| 10.666667 | 23 | 0.65625 | 7 | 32 | 3 | 0.714286 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 2 | 24 | 16 | 0.75 | 0.1875 | 0 | 0 | 0 | 0 | 0.56 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
343c54335bc955777387748ef0208b37c9bd5607 | 6,323 | py | Python | tests-manual/manual_testing.py | rashidalyahyai/piecewise-regression | 2cccd101ac6f44b5e2fe9e79d6782e3ce2a06ed7 | [
"MIT"
] | 24 | 2021-11-15T13:26:05.000Z | 2022-03-19T16:42:23.000Z | tests-manual/manual_testing.py | rashidalyahyai/piecewise-regression | 2cccd101ac6f44b5e2fe9e79d6782e3ce2a06ed7 | [
"MIT"
] | 5 | 2021-11-30T10:17:40.000Z | 2022-03-30T23:22:24.000Z | tests-manual/manual_testing.py | rashidalyahyai/piecewise-regression | 2cccd101ac6f44b5e2fe9e79d6782e3ce2a06ed7 | [
"MIT"
] | 4 | 2021-11-22T17:42:28.000Z | 2022-03-20T18:36:15.000Z |
import os
import sys

# The path tweak must run before importing piecewise_regression from the parent dir.
sys.path.insert(1, os.path.join(sys.path[0], '..'))

import numpy as np
import matplotlib.pyplot as plt

from piecewise_regression import ModelSelection
from piecewise_regression import Fit
def on_data_1():
    alpha = -4
    beta_1 = -2
    intercept = 100
    breakpoint_1 = 7
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx + beta_1 * \
        np.maximum(xx - breakpoint_1, 0) + np.random.normal(size=n_points)

    pw_fit = Fit(xx, yy, start_values=[5])
    print("p-value is ", pw_fit.davies)

    pw_results = pw_fit.get_results()
    pw_estimates = pw_results["estimates"]
    print(pw_results)
    print(pw_estimates)

    pw_bootstrap_history = pw_fit.bootstrap_history
    print(pw_bootstrap_history)

    # print(bp_fit.breakpoint_history)
    # bp_fit.plot_data()
    # plt.show()


def on_data_1b():
    alpha = -4
    beta_1 = -4
    beta_2 = 4
    intercept = 100
    breakpoint_1 = 7
    breakpoint_2 = 12
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx + beta_1 * np.maximum(
        xx - breakpoint_1, 0) + beta_2 * np.maximum(
        xx-breakpoint_2, 0) + np.random.normal(size=n_points)

    bp_fit = Fit(xx, yy, start_values=[5, 10])
    bp_fit.summary()

    bp_fit.plot_best_muggeo_breakpoint_history()
    plt.show()

    bp_fit.plot_data()
    bp_fit.plot_fit(color="red", linewidth=4)
    bp_fit.plot_breakpoints()
    bp_fit.plot_breakpoint_confidence_intervals()
    plt.show()
def on_data_1c():
    alpha = -4
    beta_1 = -2
    beta_2 = 4
    beta_3 = 1
    intercept = 100
    breakpoint_1 = 7
    breakpoint_2 = 13
    breakpoint_3 = 14
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx
    yy += beta_1 * np.maximum(xx - breakpoint_1, 0)
    yy += beta_2 * np.maximum(xx - breakpoint_2, 0)
    yy += beta_3 * np.maximum(xx - breakpoint_3, 0)
    yy += np.random.normal(size=n_points)

    bp_fit = Fit(xx, yy, start_values=[5, 10, 16])
    bp_fit.summary()

    bp_fit.plot_data()
    bp_fit.plot_fit(color="red", linewidth=4)
    bp_fit.plot_breakpoints()
    bp_fit.plot_breakpoint_confidence_intervals()
    print("The fit data: ", bp_fit.__dict__)
    plt.show()
    plt.close()

    bp_fit.plot_best_muggeo_breakpoint_history()
    plt.legend()
    plt.show()
    plt.close()

    bp_fit.plot_bootstrap_restarting_history()
    plt.legend()
    plt.show()
    plt.close()
def model_selection_1():
    alpha = -4
    beta_1 = -2
    intercept = 100
    breakpoint_1 = 17
    n_points = 100
    xx = np.linspace(10, 30, n_points)
    yy = intercept + alpha*xx + beta_1 * \
        np.maximum(xx - breakpoint_1, 0) + np.random.normal(size=n_points)

    ModelSelection(xx, yy, max_breakpoints=6)


def model_selection_2():
    alpha = -4
    beta_1 = -4
    beta_2 = 4
    intercept = 100
    breakpoint_1 = 7
    breakpoint_2 = 12
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx + beta_1 * np.maximum(
        xx - breakpoint_1, 0) + beta_2 * np.maximum(
        xx-breakpoint_2, 0) + np.random.normal(size=n_points)

    ModelSelection(xx, yy)
def fit_3_check_this_makes_sense():
    np.random.seed(0)
    alpha = 10
    beta_1 = -8
    beta_2 = -6
    beta_3 = 10
    intercept = 100
    breakpoint_1 = 7
    breakpoint_2 = 10
    breakpoint_3 = 14
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx
    yy += beta_1 * np.maximum(xx - breakpoint_1, 0)
    yy += beta_2 * np.maximum(xx - breakpoint_2, 0)
    yy += beta_3 * np.maximum(xx - breakpoint_3, 0)
    yy += np.random.normal(size=n_points)

    pr = Fit(xx, yy, n_breakpoints=2)
    pr.plot()
    plt.show()

    pr3 = Fit(xx, yy, n_breakpoints=3)
    pr3.plot()
    plt.show()

    pr4 = Fit(xx, yy, n_breakpoints=4)
    pr4.plot()
    plt.show()

    ModelSelection(xx, yy, max_breakpoints=6)


def fit_with_initally_diverging():
    np.random.seed(2)
    alpha = 10
    beta_1 = -8
    beta_2 = 3
    beta_3 = 10
    intercept = 100
    breakpoint_1 = 7
    breakpoint_2 = 10
    breakpoint_3 = 14
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx
    yy += beta_1 * np.maximum(xx - breakpoint_1, 0)
    yy += beta_2 * np.maximum(xx - breakpoint_2, 0)
    yy += beta_3 * np.maximum(xx - breakpoint_3, 0)
    yy += np.random.normal(size=n_points)

    pr = Fit(xx, yy, n_breakpoints=2)
    print(pr.summary)
def fit_with_initially_diverging_start_values():
    np.random.seed(0)
    alpha = 10
    beta_1 = -8
    beta_2 = 3
    beta_3 = 10
    intercept = 100
    breakpoint_1 = 7
    breakpoint_2 = 10
    breakpoint_3 = 14
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx
    yy += beta_1 * np.maximum(xx - breakpoint_1, 0)
    yy += beta_2 * np.maximum(xx - breakpoint_2, 0)
    yy += beta_3 * np.maximum(xx - breakpoint_3, 0)
    yy += np.random.normal(size=n_points)

    pr = Fit(xx, yy, start_values=[2.15646833, 0.98300926], n_boot=20)
    pr.summary()


def fit_with_initially_diverging_start_values_b():
    np.random.seed(0)
    alpha = 10
    beta_1 = -8
    beta_2 = 3
    beta_3 = 10
    intercept = 100
    breakpoint_1 = 7
    breakpoint_2 = 10
    breakpoint_3 = 14
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx
    yy += beta_1 * np.maximum(xx - breakpoint_1, 0)
    yy += beta_2 * np.maximum(xx - breakpoint_2, 0)
    yy += beta_3 * np.maximum(xx - breakpoint_3, 0)
    yy += np.random.normal(size=n_points)

    pr = Fit(xx, yy, start_values=[1.2, 0.53], n_boot=25)
    pr.summary()
def fit_with_straight_line():
    np.random.seed(0)
    alpha = 10
    intercept = 100
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx
    yy += np.random.normal(size=n_points)

    pr = Fit(xx, yy, n_breakpoints=0, n_boot=25)
    pr.summary()


def model_comparision_straight_line():
    np.random.seed(0)
    alpha = 10
    intercept = 100
    n_points = 200
    xx = np.linspace(0, 20, n_points)
    yy = intercept + alpha*xx
    yy += np.random.normal(size=n_points)

    ModelSelection(xx, yy, max_breakpoints=6)
if __name__ == "__main__":
model_comparision_straight_line() | 20.201278 | 74 | 0.625969 | 977 | 6,323 | 3.802457 | 0.113613 | 0.06218 | 0.06218 | 0.118708 | 0.806999 | 0.782773 | 0.766083 | 0.719785 | 0.698789 | 0.672948 | 0 | 0.073501 | 0.253361 | 6,323 | 313 | 75 | 20.201278 | 0.713408 | 0.009805 | 0 | 0.762136 | 0 | 0 | 0.007992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053398 | false | 0 | 0.029126 | 0 | 0.082524 | 0.029126 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
34500cbc5d609d33d0365ed63be410b2cd714077 | 47 | py | Python | src/chart/example.py | hasta13/chart | 3292b81a15cdf361b0b85e120c58dc1f6a22ddb8 | [
"MIT"
] | null | null | null | src/chart/example.py | hasta13/chart | 3292b81a15cdf361b0b85e120c58dc1f6a22ddb8 | [
"MIT"
] | null | null | null | src/chart/example.py | hasta13/chart | 3292b81a15cdf361b0b85e120c58dc1f6a22ddb8 | [
"MIT"
] | null | null | null | def test():
print('this is a placeholder')
| 15.666667 | 34 | 0.638298 | 7 | 47 | 4.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212766 | 47 | 2 | 35 | 23.5 | 0.810811 | 0 | 0 | 0 | 0 | 0 | 0.446809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
346f7f3c8862ce237b48d855353cec42fdc02259 | 77 | py | Python | src/python/stup/twistedutils/__init__.py | Wizmann/STUP-Protocol | e06a3442082e5061d2be32be3ffd681675e7ffb5 | [
"MIT"
] | 14 | 2017-05-06T10:14:32.000Z | 2018-07-17T02:58:00.000Z | src/python/stup/twistedutils/__init__.py | Wizmann/STUP-Protocol | e06a3442082e5061d2be32be3ffd681675e7ffb5 | [
"MIT"
] | 2 | 2017-06-13T05:40:18.000Z | 2017-06-13T16:23:01.000Z | src/python/stup/twistedutils/__init__.py | Wizmann/STUP-Protocol | e06a3442082e5061d2be32be3ffd681675e7ffb5 | [
"MIT"
] | 4 | 2017-06-09T20:20:54.000Z | 2018-07-17T02:58:10.000Z | from .deferred_deque import *
from .utils import *
from .time_wheel import *
| 19.25 | 29 | 0.766234 | 11 | 77 | 5.181818 | 0.636364 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155844 | 77 | 3 | 30 | 25.666667 | 0.876923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3479c13138309154e6c90e770d2f2209b49c8376 | 252 | py | Python | nndet/arch/heads/__init__.py | joeranbosma/nnDetection | 2ebbf1cdc8a8794c73e325f06fea50632c78ae8c | [
"BSD-3-Clause"
] | 242 | 2021-05-17T12:31:39.000Z | 2022-03-31T11:51:29.000Z | nndet/arch/heads/__init__.py | joeranbosma/nnDetection | 2ebbf1cdc8a8794c73e325f06fea50632c78ae8c | [
"BSD-3-Clause"
] | 59 | 2021-06-02T07:32:10.000Z | 2022-03-31T18:45:52.000Z | nndet/arch/heads/__init__.py | joeranbosma/nnDetection | 2ebbf1cdc8a8794c73e325f06fea50632c78ae8c | [
"BSD-3-Clause"
] | 38 | 2021-05-31T14:01:37.000Z | 2022-03-21T08:24:40.000Z | from nndet.arch.heads.classifier import ClassifierType, Classifier
from nndet.arch.heads.comb import HeadType, AbstractHead
from nndet.arch.heads.regressor import RegressorType, Regressor
from nndet.arch.heads.segmenter import SegmenterType, Segmenter
| 50.4 | 66 | 0.857143 | 32 | 252 | 6.75 | 0.4375 | 0.166667 | 0.240741 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079365 | 252 | 4 | 67 | 63 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
347ed8336a84485625e89e82067bdc54d0863113 | 90,321 | py | Python | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_asr9k_netflow_oper.py | tkamata-test/ydk-py | b637e7853a8edbbd31fbc05afa3aa4110b31c5f9 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_asr9k_netflow_oper.py | tkamata-test/ydk-py | b637e7853a8edbbd31fbc05afa3aa4110b31c5f9 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_asr9k_netflow_oper.py | tkamata-test/ydk-py | b637e7853a8edbbd31fbc05afa3aa4110b31c5f9 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | """ Cisco_IOS_XR_asr9k_netflow_oper
This module contains a collection of YANG definitions
for Cisco IOS\-XR asr9k\-netflow package operational data.
This module contains definitions
for the following management objects\:
net\-flow\: NetFlow operational data
Copyright (c) 2013\-2016 by Cisco Systems, Inc.
All rights reserved.
"""
import re
import collections
from enum import Enum
from ydk.types import Empty, YList, YLeafList, DELETE, Decimal64, FixedBitsDict
from ydk.errors import YPYError, YPYModelError
class NfmgrFemEdmExpVerEnum(Enum):
    """
    NfmgrFemEdmExpVerEnum

    Netflow export version

    .. data:: v9 = 0

        Version 9 export format

    .. data:: ip_fix = 1

        IPFIX export format

    """

    v9 = 0

    ip_fix = 1

    @staticmethod
    def _meta_info():
        from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
        return meta._meta_table['NfmgrFemEdmExpVerEnum']
class NfmgrFemEdmTransProtoEnum(Enum):
    """
    NfmgrFemEdmTransProtoEnum

    Netflow export transport protocol

    .. data:: unspecified = 0

        Unspecified transport protocol

    .. data:: udp = 1

        UDP transport protocol

    """

    unspecified = 0

    udp = 1

    @staticmethod
    def _meta_info():
        from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
        return meta._meta_table['NfmgrFemEdmTransProtoEnum']
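The generated enums above map YANG-modeled identities to integer codes. A self-contained sketch of that mapping using the standard-library `enum.Enum` (independent of the ydk runtime; `ExportVersion` is a hypothetical name, not part of this module):

```python
from enum import Enum


class ExportVersion(Enum):
    # Mirrors NfmgrFemEdmExpVerEnum: Version 9 vs. IPFIX export formats
    v9 = 0
    ip_fix = 1
```

Decoding an operational value received from the device is then a constructor call, e.g. `ExportVersion(1)` yields `ExportVersion.ip_fix`.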
class NetFlow(object):
"""
NetFlow operational data
.. attribute:: configuration
NetFlow configuration information
**type**\: :py:class:`Configuration <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration>`
.. attribute:: statistics
Node\-specific NetFlow statistics information
**type**\: :py:class:`Statistics <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.configuration = NetFlow.Configuration()
self.configuration.parent = self
self.statistics = NetFlow.Statistics()
self.statistics.parent = self
class Configuration(object):
"""
NetFlow configuration information
.. attribute:: flow_exporter_maps
Flow exporter map configuration information
**type**\: :py:class:`FlowExporterMaps <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowExporterMaps>`
.. attribute:: flow_monitor_maps
Flow monitor map configuration information
**type**\: :py:class:`FlowMonitorMaps <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowMonitorMaps>`
.. attribute:: flow_sampler_maps
Flow sampler map configuration information
**type**\: :py:class:`FlowSamplerMaps <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowSamplerMaps>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.flow_exporter_maps = NetFlow.Configuration.FlowExporterMaps()
self.flow_exporter_maps.parent = self
self.flow_monitor_maps = NetFlow.Configuration.FlowMonitorMaps()
self.flow_monitor_maps.parent = self
self.flow_sampler_maps = NetFlow.Configuration.FlowSamplerMaps()
self.flow_sampler_maps.parent = self
class FlowExporterMaps(object):
"""
Flow exporter map configuration information
.. attribute:: flow_exporter_map
Flow exporter map information
**type**\: list of :py:class:`FlowExporterMap <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowExporterMaps.FlowExporterMap>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.flow_exporter_map = YList()
self.flow_exporter_map.parent = self
self.flow_exporter_map.name = 'flow_exporter_map'
class FlowExporterMap(object):
"""
Flow exporter map information
.. attribute:: exporter_name <key>
Exporter name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: collector
Export collector array
**type**\: list of :py:class:`Collector <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Collector>`
.. attribute:: id
Unique ID in the global flow exporter ID space
**type**\: int
**range:** 0..4294967295
.. attribute:: name
Name of the flow exporter map
**type**\: str
.. attribute:: version
Export version data
**type**\: :py:class:`Version <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.exporter_name = None
self.collector = YList()
self.collector.parent = self
self.collector.name = 'collector'
self.id = None
self.name = None
self.version = NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version()
self.version.parent = self
class Version(object):
"""
Export version data
.. attribute:: ipfix
ipfix
**type**\: :py:class:`Ipfix <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version.Ipfix>`
.. attribute:: version
version
**type**\: :py:class:`NfmgrFemEdmExpVerEnum <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NfmgrFemEdmExpVerEnum>`
.. attribute:: version9
version9
**type**\: :py:class:`Version9 <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version.Version9>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.ipfix = NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version.Ipfix()
self.ipfix.parent = self
self.version = None
self.version9 = NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version.Version9()
self.version9.parent = self
class Version9(object):
"""
version9
.. attribute:: common_template_export_timeout
Common template export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: data_template_export_timeout
Data template export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: interface_table_export_timeout
Interface table export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: options_template_export_timeout
Options template export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: sampler_table_export_timeout
Sampler table export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: vrf_table_export_timeout
VRF table export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.common_template_export_timeout = None
self.data_template_export_timeout = None
self.interface_table_export_timeout = None
self.options_template_export_timeout = None
self.sampler_table_export_timeout = None
self.vrf_table_export_timeout = None
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path + '/Cisco-IOS-XR-asr9k-netflow-oper:version9'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.common_template_export_timeout is not None:
return True
if self.data_template_export_timeout is not None:
return True
if self.interface_table_export_timeout is not None:
return True
if self.options_template_export_timeout is not None:
return True
if self.sampler_table_export_timeout is not None:
return True
if self.vrf_table_export_timeout is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version.Version9']['meta_info']
class Ipfix(object):
"""
ipfix
.. attribute:: common_template_export_timeout
Common template export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: data_template_export_timeout
Data template export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: interface_table_export_timeout
Interface table export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: options_template_export_timeout
Options template export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: sampler_table_export_timeout
Sampler table export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: vrf_table_export_timeout
VRF table export timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.common_template_export_timeout = None
self.data_template_export_timeout = None
self.interface_table_export_timeout = None
self.options_template_export_timeout = None
self.sampler_table_export_timeout = None
self.vrf_table_export_timeout = None
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path + '/Cisco-IOS-XR-asr9k-netflow-oper:ipfix'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.common_template_export_timeout is not None:
return True
if self.data_template_export_timeout is not None:
return True
if self.interface_table_export_timeout is not None:
return True
if self.options_template_export_timeout is not None:
return True
if self.sampler_table_export_timeout is not None:
return True
if self.vrf_table_export_timeout is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version.Ipfix']['meta_info']
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set . Cannot derive path.')
return self.parent._common_path +'/Cisco-IOS-XR-asr9k-netflow-oper:version'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.ipfix is not None and self.ipfix._has_data():
return True
if self.version is not None:
return True
if self.version9 is not None and self.version9._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Version']['meta_info']
class Collector(object):
"""
Export collector array
.. attribute:: destination_address
Destination IPv4 address in AAA.BBB.CCC.DDD format
**type**\: str
.. attribute:: destination_port
Transport destination port number
**type**\: int
**range:** 0..65535
.. attribute:: dscp
DSCP
**type**\: int
**range:** 0..255
.. attribute:: source_address
Source IPv4 address in AAA.BBB.CCC.DDD format
**type**\: str
.. attribute:: source_interface
Source interface name
**type**\: str
.. attribute:: transport_protocol
Transport protocol
**type**\: :py:class:`NfmgrFemEdmTransProtoEnum <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NfmgrFemEdmTransProtoEnum>`
.. attribute:: vrf_name
VRF name
**type**\: str
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.destination_address = None
self.destination_port = None
self.dscp = None
self.source_address = None
self.source_interface = None
self.transport_protocol = None
self.vrf_name = None
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path + '/Cisco-IOS-XR-asr9k-netflow-oper:collector'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.destination_address is not None:
return True
if self.destination_port is not None:
return True
if self.dscp is not None:
return True
if self.source_address is not None:
return True
if self.source_interface is not None:
return True
if self.transport_protocol is not None:
return True
if self.vrf_name is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowExporterMaps.FlowExporterMap.Collector']['meta_info']
@property
def _common_path(self):
if self.exporter_name is None:
raise YPYModelError('Key property exporter_name is None')
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:configuration/Cisco-IOS-XR-asr9k-netflow-oper:flow-exporter-maps/Cisco-IOS-XR-asr9k-netflow-oper:flow-exporter-map[Cisco-IOS-XR-asr9k-netflow-oper:exporter-name = ' + str(self.exporter_name) + ']'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.exporter_name is not None:
return True
if self.collector is not None:
for child_ref in self.collector:
if child_ref._has_data():
return True
if self.id is not None:
return True
if self.name is not None:
return True
if self.version is not None and self.version._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowExporterMaps.FlowExporterMap']['meta_info']
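The generated `_common_path` properties above build an absolute, module-qualified path, and keyed list entries (such as `FlowExporterMap`) append a key predicate derived from the `<key>` attribute. A minimal standalone sketch of that pattern follows; it is illustrative only and not part of the generated bindings, and the names `MODULE` and `exporter_map_path` are hypothetical.

```python
# Illustrative sketch of the keyed-path construction used by _common_path.
# Not part of the generated bindings; names here are hypothetical.
MODULE = 'Cisco-IOS-XR-asr9k-netflow-oper'

def exporter_map_path(exporter_name):
    """Build the keyed path for one flow-exporter-map list entry."""
    if exporter_name is None:
        # Mirrors the YPYModelError raised when a key property is unset.
        raise ValueError('Key property exporter_name is None')
    base = ('/%(m)s:net-flow/%(m)s:configuration/%(m)s:flow-exporter-maps'
            % {'m': MODULE})
    return base + ('/%(m)s:flow-exporter-map[%(m)s:exporter-name = %(k)s]'
                   % {'m': MODULE, 'k': str(exporter_name)})
```

The real property differs only in that it raises `YPYModelError` and reads the key from `self.exporter_name`.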
@property
def _common_path(self):
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:configuration/Cisco-IOS-XR-asr9k-netflow-oper:flow-exporter-maps'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.flow_exporter_map is not None:
for child_ref in self.flow_exporter_map:
if child_ref._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowExporterMaps']['meta_info']
class FlowMonitorMaps(object):
"""
Flow monitor map configuration information
.. attribute:: flow_monitor_map
Flow monitor map information
**type**\: list of :py:class:`FlowMonitorMap <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowMonitorMaps.FlowMonitorMap>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.flow_monitor_map = YList()
self.flow_monitor_map.parent = self
self.flow_monitor_map.name = 'flow_monitor_map'
class FlowMonitorMap(object):
"""
Flow monitor map information
.. attribute:: monitor_name <key>
Monitor name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: cache_active_timeout
Cache active flow timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: cache_aging_mode
Aging mode for flow cache
**type**\: str
.. attribute:: cache_inactive_timeout
Cache inactive flow timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: cache_max_entry
Maximum number of entries in the flow cache
**type**\: int
**range:** 0..4294967295
.. attribute:: cache_timeout_rate_limit
Maximum number of entries to age each second
**type**\: int
**range:** 0..4294967295
.. attribute:: cache_update_timeout
Cache update timeout in seconds
**type**\: int
**range:** 0..4294967295
**units**\: second
.. attribute:: exporter
Name of the flow exporters used by the flow monitor
**type**\: list of :py:class:`Exporter <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowMonitorMaps.FlowMonitorMap.Exporter>`
.. attribute:: id
Unique ID in the global flow monitor ID space
**type**\: int
**range:** 0..4294967295
.. attribute:: name
Name of the flow monitor map
**type**\: str
.. attribute:: number_of_labels
Number of MPLS labels in key
**type**\: int
**range:** 0..4294967295
.. attribute:: options
Options applied to the flow monitor
**type**\: int
**range:** 0..4294967295
.. attribute:: record_map
Name of the flow record map
**type**\: str
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.monitor_name = None
self.cache_active_timeout = None
self.cache_aging_mode = None
self.cache_inactive_timeout = None
self.cache_max_entry = None
self.cache_timeout_rate_limit = None
self.cache_update_timeout = None
self.exporter = YList()
self.exporter.parent = self
self.exporter.name = 'exporter'
self.id = None
self.name = None
self.number_of_labels = None
self.options = None
self.record_map = None
class Exporter(object):
"""
Name of the flow exporters used by the flow
monitor
.. attribute:: name
Exporter name
**type**\: str
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.name = None
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path + '/Cisco-IOS-XR-asr9k-netflow-oper:exporter'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.name is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowMonitorMaps.FlowMonitorMap.Exporter']['meta_info']
@property
def _common_path(self):
if self.monitor_name is None:
raise YPYModelError('Key property monitor_name is None')
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:configuration/Cisco-IOS-XR-asr9k-netflow-oper:flow-monitor-maps/Cisco-IOS-XR-asr9k-netflow-oper:flow-monitor-map[Cisco-IOS-XR-asr9k-netflow-oper:monitor-name = ' + str(self.monitor_name) + ']'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.monitor_name is not None:
return True
if self.cache_active_timeout is not None:
return True
if self.cache_aging_mode is not None:
return True
if self.cache_inactive_timeout is not None:
return True
if self.cache_max_entry is not None:
return True
if self.cache_timeout_rate_limit is not None:
return True
if self.cache_update_timeout is not None:
return True
if self.exporter is not None:
for child_ref in self.exporter:
if child_ref._has_data():
return True
if self.id is not None:
return True
if self.name is not None:
return True
if self.number_of_labels is not None:
return True
if self.options is not None:
return True
if self.record_map is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowMonitorMaps.FlowMonitorMap']['meta_info']
@property
def _common_path(self):
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:configuration/Cisco-IOS-XR-asr9k-netflow-oper:flow-monitor-maps'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.flow_monitor_map is not None:
for child_ref in self.flow_monitor_map:
if child_ref._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowMonitorMaps']['meta_info']
class FlowSamplerMaps(object):
"""
Flow sampler map configuration information
.. attribute:: flow_sampler_map
Flow sampler map information
**type**\: list of :py:class:`FlowSamplerMap <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Configuration.FlowSamplerMaps.FlowSamplerMap>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.flow_sampler_map = YList()
self.flow_sampler_map.parent = self
self.flow_sampler_map.name = 'flow_sampler_map'
class FlowSamplerMap(object):
"""
Flow sampler map information
.. attribute:: sampler_name <key>
Sampler name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: id
Unique ID in the global flow sampler ID space
**type**\: int
**range:** 0..4294967295
.. attribute:: name
Name of the flow sampler map
**type**\: str
.. attribute:: sampling_mode
Sampling mode and parameters
**type**\: str
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.sampler_name = None
self.id = None
self.name = None
self.sampling_mode = None
@property
def _common_path(self):
if self.sampler_name is None:
raise YPYModelError('Key property sampler_name is None')
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:configuration/Cisco-IOS-XR-asr9k-netflow-oper:flow-sampler-maps/Cisco-IOS-XR-asr9k-netflow-oper:flow-sampler-map[Cisco-IOS-XR-asr9k-netflow-oper:sampler-name = ' + str(self.sampler_name) + ']'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.sampler_name is not None:
return True
if self.id is not None:
return True
if self.name is not None:
return True
if self.sampling_mode is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowSamplerMaps.FlowSamplerMap']['meta_info']
@property
def _common_path(self):
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:configuration/Cisco-IOS-XR-asr9k-netflow-oper:flow-sampler-maps'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.flow_sampler_map is not None:
for child_ref in self.flow_sampler_map:
if child_ref._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration.FlowSamplerMaps']['meta_info']
@property
def _common_path(self):
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:configuration'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.flow_exporter_maps is not None and self.flow_exporter_maps._has_data():
return True
if self.flow_monitor_maps is not None and self.flow_monitor_maps._has_data():
return True
if self.flow_sampler_maps is not None and self.flow_sampler_maps._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Configuration']['meta_info']
class Statistics(object):
"""
Node\-specific NetFlow statistics information
.. attribute:: statistic
NetFlow statistics information for a particular node
**type**\: list of :py:class:`Statistic <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.statistic = YList()
self.statistic.parent = self
self.statistic.name = 'statistic'
class Statistic(object):
"""
NetFlow statistics information for a particular
node
.. attribute:: node <key>
Node location
**type**\: str
**pattern:** ([a\-zA\-Z0\-9\_]\*\\d+/){1,2}([a\-zA\-Z0\-9\_]\*\\d+)
.. attribute:: producer
NetFlow producer statistics
**type**\: :py:class:`Producer <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Producer>`
.. attribute:: server
NetFlow server statistics
**type**\: :py:class:`Server <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Server>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.node = None
self.producer = NetFlow.Statistics.Statistic.Producer()
self.producer.parent = self
self.server = NetFlow.Statistics.Statistic.Server()
self.server.parent = self
class Producer(object):
"""
NetFlow producer statistics
.. attribute:: statistics
Statistics information
**type**\: :py:class:`Statistics_ <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Producer.Statistics_>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.statistics = NetFlow.Statistics.Statistic.Producer.Statistics_()
self.statistics.parent = self
class Statistics_(object):
"""
Statistics information
.. attribute:: drops_no_space
Drops (no space)
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: drops_others
Drops (others)
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: flow_packet_counts
Number of received flow packets
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: ipv4_egress_flows
IPv4 egress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: ipv4_ingress_flows
IPv4 ingress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: ipv6_egress_flows
IPv6 egress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: ipv6_ingress_flows
IPv6 ingress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: last_cleared
Last time statistics were cleared, in 'Mon Jan 1 12\:00\:00 2xxx' format
**type**\: str
.. attribute:: mpls_egress_flows
MPLS egress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: mpls_ingress_flows
MPLS ingress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: spp_rx_counts
Number of received SPP packets
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: unknown_egress_flows
Unknown egress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: unknown_ingress_flows
Unknown ingress flows
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: waiting_servers
Number of waiting servers
**type**\: int
**range:** 0..18446744073709551615
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.drops_no_space = None
self.drops_others = None
self.flow_packet_counts = None
self.ipv4_egress_flows = None
self.ipv4_ingress_flows = None
self.ipv6_egress_flows = None
self.ipv6_ingress_flows = None
self.last_cleared = None
self.mpls_egress_flows = None
self.mpls_ingress_flows = None
self.spp_rx_counts = None
self.unknown_egress_flows = None
self.unknown_ingress_flows = None
self.waiting_servers = None
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path + '/Cisco-IOS-XR-asr9k-netflow-oper:statistics'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.drops_no_space is not None:
return True
if self.drops_others is not None:
return True
if self.flow_packet_counts is not None:
return True
if self.ipv4_egress_flows is not None:
return True
if self.ipv4_ingress_flows is not None:
return True
if self.ipv6_egress_flows is not None:
return True
if self.ipv6_ingress_flows is not None:
return True
if self.last_cleared is not None:
return True
if self.mpls_egress_flows is not None:
return True
if self.mpls_ingress_flows is not None:
return True
if self.spp_rx_counts is not None:
return True
if self.unknown_egress_flows is not None:
return True
if self.unknown_ingress_flows is not None:
return True
if self.waiting_servers is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Producer.Statistics_']['meta_info']
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path + '/Cisco-IOS-XR-asr9k-netflow-oper:producer'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.statistics is not None and self.statistics._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Producer']['meta_info']
class Server(object):
"""
NetFlow server statistics
.. attribute:: flow_exporters
Flow exporter information
**type**\: :py:class:`FlowExporters <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Server.FlowExporters>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.flow_exporters = NetFlow.Statistics.Statistic.Server.FlowExporters()
self.flow_exporters.parent = self
class FlowExporters(object):
"""
Flow exporter information
.. attribute:: flow_exporter
Exporter information
**type**\: list of :py:class:`FlowExporter <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.flow_exporter = YList()
self.flow_exporter.parent = self
self.flow_exporter.name = 'flow_exporter'
class FlowExporter(object):
"""
Exporter information
.. attribute:: exporter_name <key>
Exporter name
**type**\: str
**pattern:** [\\w\\\-\\.\:,\_@#%$\\+=\\\|;]+
.. attribute:: exporter
Statistics information for the exporter
**type**\: :py:class:`Exporter <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter.Exporter>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.exporter_name = None
self.exporter = NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter.Exporter()
self.exporter.parent = self
class Exporter(object):
"""
Statistics information for the exporter
.. attribute:: statistic
Array of flow exporters
**type**\: list of :py:class:`Statistic_ <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter.Exporter.Statistic_>`
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.statistic = YList()
self.statistic.parent = self
self.statistic.name = 'statistic'
class Statistic_(object):
"""
Array of flow exporters
.. attribute:: collector
Statistics of all collectors
**type**\: list of :py:class:`Collector <ydk.models.cisco_ios_xr.Cisco_IOS_XR_asr9k_netflow_oper.NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter.Exporter.Statistic_.Collector>`
.. attribute:: memory_usage
Memory usage
**type**\: int
**range:** 0..4294967295
.. attribute:: name
Exporter name
**type**\: str
.. attribute:: used_by_flow_monitor
List of flow monitors that use the exporter
**type**\: list of str
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.collector = YList()
self.collector.parent = self
self.collector.name = 'collector'
self.memory_usage = None
self.name = None
self.used_by_flow_monitor = YLeafList()
self.used_by_flow_monitor.parent = self
self.used_by_flow_monitor.name = 'used_by_flow_monitor'
class Collector(object):
"""
Statistics of all collectors
.. attribute:: bytes_dropped
Bytes dropped
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: bytes_sent
Bytes sent
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: destination_address
Destination IPv4 address in AAA.BBB.CCC.DDD format
**type**\: str
.. attribute:: destination_port
Destination port number
**type**\: int
**range:** 0..65535
.. attribute:: exporter_state
Exporter state
**type**\: str
.. attribute:: flow_bytes_dropped
Flow bytes dropped
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: flow_bytes_sent
Flow bytes sent
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: flows_dropped
Flows dropped
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: flows_sent
Flows sent
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: last_hour_bytes_sent
Total bytes exported over the last one hour
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: last_hour_flows_sent
Total flows exported over the last one hour
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: last_hour_packest_sent
Total packets exported over the last one hour
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: last_minute_bytes_sent
Total bytes exported over the last one minute
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: last_minute_flows_sent
Total flows exported over the last one minute
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: last_minute_packets
Total packets exported over the last one minute
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: last_second_bytes_sent
Total bytes exported over the last one second
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: last_second_flows_sent
Total flows exported over the last one second
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: last_second_packets_sent
Total packets exported over the last one second
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: option_data_bytes_dropped
Option data bytes dropped
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: option_data_bytes_sent
Option data bytes sent
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: option_data_dropped
Option data dropped
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: option_data_sent
Option data sent
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: option_template_bytes_dropped
Option template bytes dropped
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: option_template_bytes_sent
Option template bytes sent
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: option_templates_dropped
Option templates dropped
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: option_templates_sent
Option templates sent
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: packets_dropped
Packets dropped
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: packets_sent
Packets sent
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: souce_port
Source port number
**type**\: int
**range:** 0..65535
.. attribute:: source_address
Source IPv4 address in AAA.BBB.CCC.DDD format
**type**\: str
.. attribute:: template_bytes_dropped
Template bytes dropped
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: template_bytes_sent
Template bytes sent
**type**\: int
**range:** 0..18446744073709551615
**units**\: byte
.. attribute:: templates_dropped
Templates dropped
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: templates_sent
Templates sent
**type**\: int
**range:** 0..18446744073709551615
.. attribute:: transport_protocol
Transport protocol
**type**\: str
.. attribute:: vrf_name
VRF Name
**type**\: str
"""
_prefix = 'asr9k-netflow-oper'
_revision = '2015-11-09'
def __init__(self):
self.parent = None
self.bytes_dropped = None
self.bytes_sent = None
self.destination_address = None
self.destination_port = None
self.exporter_state = None
self.flow_bytes_dropped = None
self.flow_bytes_sent = None
self.flows_dropped = None
self.flows_sent = None
self.last_hour_bytes_sent = None
self.last_hour_flows_sent = None
self.last_hour_packest_sent = None
self.last_minute_bytes_sent = None
self.last_minute_flows_sent = None
self.last_minute_packets = None
self.last_second_bytes_sent = None
self.last_second_flows_sent = None
self.last_second_packets_sent = None
self.option_data_bytes_dropped = None
self.option_data_bytes_sent = None
self.option_data_dropped = None
self.option_data_sent = None
self.option_template_bytes_dropped = None
self.option_template_bytes_sent = None
self.option_templates_dropped = None
self.option_templates_sent = None
self.packets_dropped = None
self.packets_sent = None
self.souce_port = None
self.source_address = None
self.template_bytes_dropped = None
self.template_bytes_sent = None
self.templates_dropped = None
self.templates_sent = None
self.transport_protocol = None
self.vrf_name = None
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path +'/Cisco-IOS-XR-asr9k-netflow-oper:collector'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.bytes_dropped is not None:
return True
if self.bytes_sent is not None:
return True
if self.destination_address is not None:
return True
if self.destination_port is not None:
return True
if self.exporter_state is not None:
return True
if self.flow_bytes_dropped is not None:
return True
if self.flow_bytes_sent is not None:
return True
if self.flows_dropped is not None:
return True
if self.flows_sent is not None:
return True
if self.last_hour_bytes_sent is not None:
return True
if self.last_hour_flows_sent is not None:
return True
if self.last_hour_packest_sent is not None:
return True
if self.last_minute_bytes_sent is not None:
return True
if self.last_minute_flows_sent is not None:
return True
if self.last_minute_packets is not None:
return True
if self.last_second_bytes_sent is not None:
return True
if self.last_second_flows_sent is not None:
return True
if self.last_second_packets_sent is not None:
return True
if self.option_data_bytes_dropped is not None:
return True
if self.option_data_bytes_sent is not None:
return True
if self.option_data_dropped is not None:
return True
if self.option_data_sent is not None:
return True
if self.option_template_bytes_dropped is not None:
return True
if self.option_template_bytes_sent is not None:
return True
if self.option_templates_dropped is not None:
return True
if self.option_templates_sent is not None:
return True
if self.packets_dropped is not None:
return True
if self.packets_sent is not None:
return True
if self.souce_port is not None:
return True
if self.source_address is not None:
return True
if self.template_bytes_dropped is not None:
return True
if self.template_bytes_sent is not None:
return True
if self.templates_dropped is not None:
return True
if self.templates_sent is not None:
return True
if self.transport_protocol is not None:
return True
if self.vrf_name is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter.Exporter.Statistic_.Collector']['meta_info']
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path +'/Cisco-IOS-XR-asr9k-netflow-oper:statistic'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.collector is not None:
for child_ref in self.collector:
if child_ref._has_data():
return True
if self.memory_usage is not None:
return True
if self.name is not None:
return True
if self.used_by_flow_monitor is not None:
for child in self.used_by_flow_monitor:
if child is not None:
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter.Exporter.Statistic_']['meta_info']
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path +'/Cisco-IOS-XR-asr9k-netflow-oper:exporter'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.statistic is not None:
for child_ref in self.statistic:
if child_ref._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter.Exporter']['meta_info']
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
if self.exporter_name is None:
raise YPYModelError('Key property exporter_name is None')
return self.parent._common_path +'/Cisco-IOS-XR-asr9k-netflow-oper:flow-exporter[Cisco-IOS-XR-asr9k-netflow-oper:exporter-name = ' + str(self.exporter_name) + ']'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.exporter_name is not None:
return True
if self.exporter is not None and self.exporter._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Server.FlowExporters.FlowExporter']['meta_info']
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path +'/Cisco-IOS-XR-asr9k-netflow-oper:flow-exporters'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.flow_exporter is not None:
for child_ref in self.flow_exporter:
if child_ref._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Server.FlowExporters']['meta_info']
@property
def _common_path(self):
if self.parent is None:
raise YPYModelError('parent is not set. Cannot derive path.')
return self.parent._common_path +'/Cisco-IOS-XR-asr9k-netflow-oper:server'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.flow_exporters is not None and self.flow_exporters._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic.Server']['meta_info']
@property
def _common_path(self):
if self.node is None:
raise YPYModelError('Key property node is None')
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:statistics/Cisco-IOS-XR-asr9k-netflow-oper:statistic[Cisco-IOS-XR-asr9k-netflow-oper:node = ' + str(self.node) + ']'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.node is not None:
return True
if self.producer is not None and self.producer._has_data():
return True
if self.server is not None and self.server._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics.Statistic']['meta_info']
@property
def _common_path(self):
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow/Cisco-IOS-XR-asr9k-netflow-oper:statistics'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.statistic is not None:
for child_ref in self.statistic:
if child_ref._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow.Statistics']['meta_info']
@property
def _common_path(self):
return '/Cisco-IOS-XR-asr9k-netflow-oper:net-flow'
def is_config(self):
''' Returns True if this instance represents config data else returns False '''
return False
def _has_data(self):
if not self.is_config():
return False
if self.configuration is not None and self.configuration._has_data():
return True
if self.statistics is not None and self.statistics._has_data():
return True
return False
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_asr9k_netflow_oper as meta
return meta._meta_table['NetFlow']['meta_info']
| 40.887732 | 298 | 0.399929 | 6,800 | 90,321 | 5.092206 | 0.035882 | 0.033962 | 0.042452 | 0.042452 | 0.823606 | 0.778988 | 0.751119 | 0.717504 | 0.670864 | 0.63875 | 0 | 0.0369 | 0.540937 | 90,321 | 2,208 | 299 | 40.90625 | 0.79823 | 0.224643 | 0 | 0.692661 | 0 | 0.011468 | 0.098759 | 0.064931 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134174 | false | 0 | 0.034404 | 0.006881 | 0.472477 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1b20735c563570dc520739ec4ddd9cd22be22ca3 | 96 | py | Python | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydev_imps/_pydev_xmlrpclib.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydev_imps/_pydev_xmlrpclib.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydev_imps/_pydev_xmlrpclib.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/be/43/ef/7355ddd663c90414eaec755b395245888202140781f7fcbdb1bdf2fea5 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1b97da5065661dcc6d77181421005001b148607d | 1,937 | py | Python | processl.py | rustylocks79/Figgie | fa1af3224eba83c0f0f0e070acc160583f5a68e1 | [
"MIT"
] | null | null | null | processl.py | rustylocks79/Figgie | fa1af3224eba83c0f0f0e070acc160583f5a68e1 | [
"MIT"
] | null | null | null | processl.py | rustylocks79/Figgie | fa1af3224eba83c0f0f0e070acc160583f5a68e1 | [
"MIT"
] | null | null | null | import pickle
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.tree import DecisionTreeRegressor
print('\nResults\n')
prefix = 'faded_custom_'
# (label, x file, y file, feature names, output image) for each pricer model
configs = [
    ('ask pricer', 'asking_x.pickle', 'asking_y.pickle',
     ['market price', 'model util', 'last_price'], 'asking_tree.png'),
    ('asking empty pricer', 'asking_x_empty.pickle', 'asking_y_empty.pickle',
     ['model util', 'last_price'], 'asking_tree_empty.png'),
    ('bidding pricer', 'bidding_x.pickle', 'bidding_y.pickle',
     ['market price', 'model util', 'last_price'], 'bidding_tree.png'),
    ('bidding empty pricer', 'bidding_x_empty.pickle', 'bidding_y_empty.pickle',
     ['model util', 'last_price'], 'bidding_tree_empty.png'),
]
for label, x_file, y_file, features, out_file in configs:
    print(prefix + label + ': ')
    # Load the pickled training data; context managers ensure the files are closed.
    with open('data/' + x_file, 'rb') as f:
        x = pickle.load(f)
    with open('data/' + y_file, 'rb') as f:
        y = pickle.load(f)
    regression = DecisionTreeRegressor(max_depth=3)
    regression.fit(x, y)
    fig = plt.figure(figsize=(16, 10))
    tree.plot_tree(regression, feature_names=features, filled=True, fontsize=10)
    fig.savefig('img/' + prefix + out_file)
    plt.close(fig)  # free the figure so the four runs do not accumulate memory
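Each of the fits above follows the same fit-then-plot pipeline. A self-contained sanity check of that pipeline, with synthetic arrays standing in for the pickled data (the feature names mirror the script, but the data here is made up), can be run without the `data/` directory:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic stand-ins for the pickled (x, y) training arrays used above.
rng = np.random.RandomState(0)
x = rng.rand(50, 3)            # columns: market price, model util, last_price
y = 2 * x[:, 0] + x[:, 1]      # simple target so the tree learns real splits

regression = DecisionTreeRegressor(max_depth=3)
regression.fit(x, y)

# export_text is a console-friendly view of the same tree that plot_tree draws.
report = export_text(regression,
                     feature_names=['market price', 'model util', 'last_price'])
print(report)
```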
| 32.830508 | 112 | 0.726897 | 286 | 1,937 | 4.797203 | 0.188811 | 0.058309 | 0.081633 | 0.104956 | 0.841108 | 0.795918 | 0.795918 | 0.795918 | 0.795918 | 0.690962 | 0 | 0.018359 | 0.100155 | 1,937 | 58 | 113 | 33.396552 | 0.768789 | 0.056273 | 0 | 0.421053 | 0 | 0 | 0.27073 | 0.104887 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0.131579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
59ed25256496b74bf5ed5d0694727e1f77897e61 | 16,360 | py | Python | platform/core/polyaxon/db/migrations/0021_auto_20190418_1600_v05.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | platform/core/polyaxon/db/migrations/0021_auto_20190418_1600_v05.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | platform/core/polyaxon/db/migrations/0021_auto_20190418_1600_v05.py | hackerwins/polyaxon | ff56a098283ca872abfbaae6ba8abba479ffa394 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.2 on 2019-04-18 14:00
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
from django.db.models import ExpressionWrapper, F
import django.core.validators
import libs.blacklist
import re
def create_cluster_owner(apps, schema_editor):
Cluster = apps.get_model('db', 'Cluster')
Owner = apps.get_model('db', 'Owner')
ContentType = apps.get_model('contenttypes', 'ContentType')
cluster_type_id = ContentType.objects.get_for_model(Cluster).id
for cluster in Cluster.objects.all():
Owner.objects.create(object_id=cluster.id,
content_type_id=cluster_type_id,
name=cluster.uuid.hex)
def migrate_experimentgroup_config(apps, schema_editor):
ExperimentGroup = apps.get_model('db', 'ExperimentGroup')
ExperimentGroup.objects.filter(backend__isnull=True).update(backend='native')
def migrate_build_jobs_config(apps, schema_editor):
BuildJob = apps.get_model('db', 'BuildJob')
BuildJob.objects.update(content=ExpressionWrapper(F('config'), output_field=models.TextField()))
def migrate_experiments_config(apps, schema_editor):
Experiment = apps.get_model('db', 'Experiment')
Experiment.objects.update(content=ExpressionWrapper(F('config'), output_field=models.TextField()))
def migrate_jobs_config(apps, schema_editor):
Job = apps.get_model('db', 'Job')
Job.objects.update(content=ExpressionWrapper(F('config'), output_field=models.TextField()))
def migrate_notebook_jobs_config(apps, schema_editor):
NotebookJob = apps.get_model('db', 'NotebookJob')
NotebookJob.objects.update(content=ExpressionWrapper(F('config'), output_field=models.TextField()))
def migrate_tensorboard_jobs_config(apps, schema_editor):
TensorboardJob = apps.get_model('db', 'TensorboardJob')
TensorboardJob.objects.update(content=ExpressionWrapper(F('config'), output_field=models.TextField()))
def migrate_experimentgroup_hptuning(apps, schema_editor):
ExperimentGroup = apps.get_model('db', 'ExperimentGroup')
groups = []
for group in ExperimentGroup.objects.exclude(hptuning__early_stopping=None):
hptuning = group.hptuning
[e.pop('policy', None) for e in hptuning['early_stopping']]
group.hptuning = hptuning
groups.append(group)
ExperimentGroup.objects.bulk_update(groups, ['hptuning'])
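The hptuning migration above hinges on one small transform: dropping the deprecated `policy` key from every early-stopping entry. A minimal, Django-free sketch of that transform (the sample dict is illustrative, not taken from a real group):

```python
def strip_early_stopping_policy(hptuning):
    """Drop the deprecated 'policy' key from each early-stopping entry in place."""
    for entry in hptuning.get('early_stopping') or []:
        entry.pop('policy', None)
    return hptuning


sample = {'early_stopping': [{'metric': 'loss', 'value': 0.1, 'policy': 'experiment'}]}
print(strip_early_stopping_policy(sample))
```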
class Migration(migrations.Migration):
dependencies = [
('db', '0020_auto_20190307_1611'),
]
operations = [
migrations.RenameField(
model_name='buildjob',
old_name='in_cluster',
new_name='is_managed',
),
migrations.RenameField(
model_name='experiment',
old_name='in_cluster',
new_name='is_managed',
),
migrations.RenameField(
model_name='job',
old_name='in_cluster',
new_name='is_managed',
),
migrations.RenameField(
model_name='notebookjob',
old_name='in_cluster',
new_name='is_managed',
),
migrations.RenameField(
model_name='tensorboardjob',
old_name='in_cluster',
new_name='is_managed',
),
migrations.AddField(
model_name='job',
name='backend',
field=models.CharField(blank=True,
help_text='The default backend use for running this entity.',
max_length=16, null=True),
),
migrations.AlterField(
model_name='buildjob',
name='backend',
field=models.CharField(blank=True,
help_text='The default backend use for running this entity.',
max_length=16, null=True),
),
migrations.AlterField(
model_name='experiment',
name='backend',
field=models.CharField(blank=True,
help_text='The default backend use for running this entity.',
max_length=16, null=True),
),
migrations.AddField(
model_name='experimentgroup',
name='backend',
field=models.CharField(blank=True,
help_text='The default backend use for running this entity.',
max_length=16, null=True),
),
migrations.AddField(
model_name='pipeline',
name='backend',
field=models.CharField(blank=True,
help_text='The default backend use for running this entity.',
max_length=16, null=True),
),
migrations.AlterField(
model_name='buildjob',
name='is_managed',
field=models.BooleanField(default=True,
help_text='If this entity is managed by the platform.'),
),
migrations.AlterField(
model_name='experiment',
name='is_managed',
field=models.BooleanField(default=True,
help_text='If this entity is managed by the platform.'),
),
migrations.AlterField(
model_name='job',
name='is_managed',
field=models.BooleanField(default=True,
help_text='If this entity is managed by the platform.'),
),
migrations.AlterField(
model_name='notebookjob',
name='is_managed',
field=models.BooleanField(default=True,
help_text='If this entity is managed by the platform.'),
),
migrations.AlterField(
model_name='tensorboardjob',
name='is_managed',
field=models.BooleanField(default=True,
help_text='If this entity is managed by the platform.'),
),
migrations.AddField(
model_name='experimentgroup',
name='is_managed',
field=models.BooleanField(default=True,
help_text='If this entity is managed by the platform.'),
),
migrations.AddField(
model_name='pipeline',
name='is_managed',
field=models.BooleanField(default=True,
help_text='If this entity is managed by the platform.'),
),
migrations.AlterField(
model_name='buildjob',
name='config',
field=django.contrib.postgres.fields.jsonb.JSONField(
blank=True,
help_text='The compiled polyaxonfile for the build job.',
null=True),
),
migrations.AlterField(
model_name='job',
name='config',
field=django.contrib.postgres.fields.jsonb.JSONField(
blank=True,
help_text='The compiled polyaxonfile for the run job.',
null=True),
),
migrations.AddField(
model_name='buildjob',
name='content',
field=models.TextField(blank=True,
help_text='The yaml content of the polyaxonfile/specification.',
null=True),
),
migrations.AddField(
model_name='experiment',
name='content',
field=models.TextField(blank=True,
help_text='The yaml content of the polyaxonfile/specification.',
null=True),
),
migrations.AddField(
model_name='job',
name='content',
field=models.TextField(blank=True,
help_text='The yaml content of the polyaxonfile/specification.',
null=True),
),
migrations.AddField(
model_name='notebookjob',
name='content',
field=models.TextField(blank=True,
help_text='The yaml content of the polyaxonfile/specification.',
null=True),
),
migrations.AddField(
model_name='tensorboardjob',
name='content',
field=models.TextField(blank=True,
help_text='The yaml content of the polyaxonfile/specification.',
null=True),
),
migrations.AlterField(
model_name='pipelinerunstatus',
name='status',
field=models.CharField(blank=True,
choices=[('created', 'created'), ('warning', 'warning'),
('scheduled', 'scheduled'), ('running', 'running'),
('done', 'done'), ('failed', 'failed'),
('upstream_failed', 'upstream_failed'),
('stopped', 'stopped'), ('succeeded', 'succeeded'),
('stopping', 'stopping'), ('skipped', 'skipped'),
('unknown', 'unknown')], default='created',
max_length=64, null=True),
),
migrations.AlterField(
model_name='buildjob',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='experiment',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='experimentchartview',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='experimentgroup',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='experimentgroupchartview',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='job',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='notebookjob',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='operation',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='pipeline',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='search',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AlterField(
model_name='tensorboardjob',
name='name',
field=models.CharField(blank=True, default=None, max_length=128, null=True, validators=[
django.core.validators.RegexValidator(re.compile('^[-a-zA-Z0-9_]+\\Z'),
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens.",
'invalid'),
libs.blacklist.validate_blacklist_name]),
),
migrations.AddField(
model_name='buildjob',
name='valid',
field=models.NullBooleanField(default=True),
),
migrations.RenameField(
model_name='experiment',
old_name='declarations',
new_name='params',
),
migrations.RunPython(migrate_build_jobs_config),
migrations.RunPython(migrate_experiments_config),
migrations.RunPython(migrate_jobs_config),
migrations.RunPython(migrate_notebook_jobs_config),
migrations.RunPython(migrate_tensorboard_jobs_config),
migrations.RunPython(migrate_experimentgroup_config),
migrations.RunPython(migrate_experimentgroup_hptuning),
migrations.RunPython(create_cluster_owner),
]
| 45.444444 | 133 | 0.531418 | 1,467 | 16,360 | 5.777778 | 0.111793 | 0.040349 | 0.06194 | 0.07185 | 0.788462 | 0.744691 | 0.710005 | 0.70328 | 0.697971 | 0.678858 | 0 | 0.009305 | 0.362836 | 16,360 | 359 | 134 | 45.571031 | 0.803818 | 0.002628 | 0 | 0.776435 | 1 | 0 | 0.190009 | 0.011155 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024169 | false | 0 | 0.018127 | 0 | 0.05136 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
942abf8e6237eb68eec6a3cb18a4d59f6642d6e1 | 26 | py | Python | packer/__init__.py | thekashifmalik/packer | 736d052d2536ada7733f4b8459e32fb771af2e1c | [
"MIT"
] | null | null | null | packer/__init__.py | thekashifmalik/packer | 736d052d2536ada7733f4b8459e32fb771af2e1c | [
"MIT"
] | null | null | null | packer/__init__.py | thekashifmalik/packer | 736d052d2536ada7733f4b8459e32fb771af2e1c | [
"MIT"
] | null | null | null | from .packer import Packer | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
945d6f6dbe61afc3aecec22d285574976adcecbc | 37 | py | Python | tf_image_classification/deploy_utils/__init__.py | ciandt-d1/tf_image_classification | 76ff4cb9ec35418eb20ea3240221bbfb88970737 | [
"MIT"
] | null | null | null | tf_image_classification/deploy_utils/__init__.py | ciandt-d1/tf_image_classification | 76ff4cb9ec35418eb20ea3240221bbfb88970737 | [
"MIT"
] | null | null | null | tf_image_classification/deploy_utils/__init__.py | ciandt-d1/tf_image_classification | 76ff4cb9ec35418eb20ea3240221bbfb88970737 | [
"MIT"
] | null | null | null | import pb_viewer
import freeze_graph | 18.5 | 19 | 0.891892 | 6 | 37 | 5.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 2 | 19 | 18.5 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
94759511ab16e6c5bbc155be542bb1cd2225688a | 106 | py | Python | SigProfilerClusters/version.py | AlexandrovLab/SigProfilerClusters | 804a6333bdde8df68241736ec1adba4faaa1adce | [
"BSD-2-Clause"
] | 7 | 2022-02-20T09:12:38.000Z | 2022-03-30T20:01:55.000Z | SigProfilerClusters/version.py | AlexandrovLab/SigProfilerClusters | 804a6333bdde8df68241736ec1adba4faaa1adce | [
"BSD-2-Clause"
] | 5 | 2022-02-21T09:34:45.000Z | 2022-03-30T19:57:27.000Z | SigProfilerClusters/version.py | AlexandrovLab/SigProfilerClusters | 804a6333bdde8df68241736ec1adba4faaa1adce | [
"BSD-2-Clause"
] | null | null | null |
# THIS FILE IS GENERATED FROM SIGPROFILECLUSTERS SETUP.PY
short_version = '1.0.11'
version = '1.0.11'
| 17.666667 | 57 | 0.716981 | 17 | 106 | 4.411765 | 0.764706 | 0.213333 | 0.24 | 0.293333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.169811 | 106 | 6 | 58 | 17.666667 | 0.761364 | 0.518868 | 0 | 0 | 1 | 0 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
84b021b3f02732e0ea7aad74bbe55301578a7f6c | 40 | py | Python | fcs_trade/gateway/idcm/__init__.py | fcscolorstone/fcs-trade | d76c5b8338ab55f49d78b218817326c2d1168151 | [
"MIT"
] | 2 | 2019-09-26T06:46:03.000Z | 2020-01-29T23:28:07.000Z | fcs_trade/gateway/idcm/__init__.py | fcscolorstone/fcs-trade | d76c5b8338ab55f49d78b218817326c2d1168151 | [
"MIT"
] | null | null | null | fcs_trade/gateway/idcm/__init__.py | fcscolorstone/fcs-trade | d76c5b8338ab55f49d78b218817326c2d1168151 | [
"MIT"
] | 2 | 2019-05-31T00:15:37.000Z | 2022-02-11T08:32:27.000Z | from .idcm_gateway import IdcmGateway
| 10 | 37 | 0.825 | 5 | 40 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 40 | 3 | 38 | 13.333333 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca476e2a445d23aa004180ad3976155d29b184d7 | 77 | py | Python | FastAPIMongoEngineGraphQL/app/util/get_json.py | scionoftech/FastAPI-Full-Stack-Samples | e7d42661ed59324ff20f419d05c6cd1e7dab7e97 | [
"MIT"
] | 29 | 2021-03-31T02:42:59.000Z | 2022-03-12T16:20:05.000Z | FastAPIMongoEngine/app/util/get_json.py | scionoftech/FastAPI-Full-Stack-Samples | e7d42661ed59324ff20f419d05c6cd1e7dab7e97 | [
"MIT"
] | null | null | null | FastAPIMongoEngine/app/util/get_json.py | scionoftech/FastAPI-Full-Stack-Samples | e7d42661ed59324ff20f419d05c6cd1e7dab7e97 | [
"MIT"
] | 4 | 2021-08-21T01:02:00.000Z | 2022-01-09T15:33:51.000Z | import json
def get_json(data):
return json.loads(data.to_json())
| 12.833333 | 38 | 0.662338 | 12 | 77 | 4.083333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220779 | 77 | 5 | 39 | 15.4 | 0.816667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
ca5e3cf48ab9ba2db1285063f80648945de13bb9 | 4,235 | py | Python | tests/unittests/test_authentication.py | ZPascal/grafana_api_sdk | 97c347790200e8e9a2aafd47e322297aa97b964c | [
"Apache-2.0"
] | 2 | 2022-02-01T20:18:48.000Z | 2022-02-02T01:22:14.000Z | tests/unittests/test_authentication.py | ZPascal/grafana_api_sdk | 97c347790200e8e9a2aafd47e322297aa97b964c | [
"Apache-2.0"
] | 5 | 2022-01-12T06:55:54.000Z | 2022-03-26T13:35:50.000Z | tests/unittests/test_authentication.py | ZPascal/grafana_api_sdk | 97c347790200e8e9a2aafd47e322297aa97b964c | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
from unittest.mock import MagicMock, Mock, patch
from src.grafana_api.model import APIModel
from src.grafana_api.authentication import Authentication
class AuthenticationTestCase(TestCase):
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_api_tokens(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=list([{"id": "test"}]))
call_the_api_mock.return_value = mock
self.assertEqual(
list([{"id": "test"}]),
authentication.get_api_tokens(),
)
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_api_tokens_no_valid_result(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=list())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
authentication.get_api_tokens()
@patch("src.grafana_api.api.Api.call_the_api")
def test_create_api_token(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"id": "test"}))
call_the_api_mock.return_value = mock
self.assertEqual(
dict({"id": "test"}),
authentication.create_api_token("name", "View"),
)
@patch("src.grafana_api.api.Api.call_the_api")
def test_create_api_token_no_name(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(ValueError):
authentication.create_api_token("", "")
@patch("src.grafana_api.api.Api.call_the_api")
def test_create_api_token_no_valid_result(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
authentication.create_api_token("name", "View")
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_api_token(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"message": "API key deleted"}))
call_the_api_mock.return_value = mock
self.assertEqual(
None,
authentication.delete_api_token(1),
)
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_api_token_no_token_id(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(ValueError):
authentication.delete_api_token(0)
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_api_token_no_valid_result(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
authentication: Authentication = Authentication(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
authentication.delete_api_token(1)
| 35.889831 | 80 | 0.683117 | 514 | 4,235 | 5.317121 | 0.093385 | 0.061471 | 0.087816 | 0.081961 | 0.900476 | 0.884376 | 0.884376 | 0.884376 | 0.884376 | 0.868277 | 0 | 0.000894 | 0.207556 | 4,235 | 117 | 81 | 36.196581 | 0.813468 | 0 | 0 | 0.626506 | 0 | 0 | 0.082645 | 0.068005 | 0 | 0 | 0 | 0 | 0.096386 | 1 | 0.096386 | false | 0 | 0.048193 | 0 | 0.156627 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
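Every test above follows the same pattern: patch the low-level `call_the_api` method, stub the response's `.json()` payload, and assert on the wrapper's behavior. A self-contained sketch of that pattern, with hypothetical `Api` and `get_api_tokens` stand-ins rather than the real grafana_api_sdk classes:

```python
from unittest.mock import Mock, patch

class Api:
    def call_the_api(self):
        raise RuntimeError("would hit the network")  # never reached under patch

def get_api_tokens(api: Api) -> list:
    result = api.call_the_api().json()
    if not result:
        raise Exception("no API tokens returned")
    return result

with patch.object(Api, "call_the_api") as call_the_api_mock:
    response = Mock()
    response.json = Mock(return_value=[{"id": "test"}])
    call_the_api_mock.return_value = response
    tokens = get_api_tokens(Api())

assert tokens == [{"id": "test"}]
```

Patching at the class level means any `Api` instance created inside the code under test picks up the mock, which is why the real tests patch the dotted path `"src.grafana_api.api.Api.call_the_api"`.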
04bcf98918f6433495dc5b34ff3810062c9864f9 | 39 | py | Python | runtime/pubsub/__init__.py | akrantz01/backendless | 27acada7ab5ee4e81f9e23e0079cfb15b9f6b09e | [
"MIT"
] | 1 | 2020-10-17T04:39:29.000Z | 2020-10-17T04:39:29.000Z | runtime/pubsub/__init__.py | akrantz01/backendless | 27acada7ab5ee4e81f9e23e0079cfb15b9f6b09e | [
"MIT"
] | null | null | null | runtime/pubsub/__init__.py | akrantz01/backendless | 27acada7ab5ee4e81f9e23e0079cfb15b9f6b09e | [
"MIT"
] | null | null | null | from .setup import configure, shutdown
| 19.5 | 38 | 0.820513 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 1 | 39 | 39 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b6f13a5419347d84a6288c6e719d119541a68613 | 110 | py | Python | build/disco_f407vg/rpc_extra_script.py | swansk/avatar-fw | 48bb98285ca1d5a102d17bc4df8bd77593c37dd4 | [
"MIT"
] | null | null | null | build/disco_f407vg/rpc_extra_script.py | swansk/avatar-fw | 48bb98285ca1d5a102d17bc4df8bd77593c37dd4 | [
"MIT"
] | null | null | null | build/disco_f407vg/rpc_extra_script.py | swansk/avatar-fw | 48bb98285ca1d5a102d17bc4df8bd77593c37dd4 | [
"MIT"
] | null | null | null | Import('env')
env.Prepend(CPPPATH=['/home/karl/.platformio/packages/framework-mbed/features/unsupported/rpc']) | 55 | 96 | 0.790909 | 14 | 110 | 6.214286 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009091 | 110 | 2 | 96 | 55 | 0.798165 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0.63964 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f3dc8e97a1de0f3177c93e2c134479dec218fad9 | 163 | py | Python | Python/Polar Coordinates.py | MonwarAdeeb/HackerRank-Solutions | 571327e9688061745000ae81c5fd74ff7a2976d4 | [
"MIT"
] | null | null | null | Python/Polar Coordinates.py | MonwarAdeeb/HackerRank-Solutions | 571327e9688061745000ae81c5fd74ff7a2976d4 | [
"MIT"
] | null | null | null | Python/Polar Coordinates.py | MonwarAdeeb/HackerRank-Solutions | 571327e9688061745000ae81c5fd74ff7a2976d4 | [
"MIT"
] | null | null | null | # Enter your code here. Read input from STDIN. Print output to STDOUT
import cmath
r = complex(input().strip())
print(cmath.polar(r)[0])
print(cmath.polar(r)[1]) | 23.285714 | 69 | 0.717791 | 28 | 163 | 4.178571 | 0.714286 | 0.17094 | 0.25641 | 0.273504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.128834 | 163 | 7 | 70 | 23.285714 | 0.809859 | 0.411043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
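`cmath.polar` returns the pair `(r, phi)` (modulus and phase in radians), so one call can replace the two above, and `cmath.rect` inverts it:

```python
import cmath
import math

z = complex(1, 1)
r, phi = cmath.polar(z)  # single call instead of two
assert math.isclose(r, math.sqrt(2))
assert math.isclose(phi, math.pi / 4)
assert cmath.isclose(cmath.rect(r, phi), z)  # round-trip back to Cartesian
```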
f3f2711cb5accf77b96ae587e85ce00bbc4ef71f | 11,835 | py | Python | tests/basic_tests.py | dhdsjy/Feature_auto_ml | d30aa3f884c51cc060e26d38a8c648f9744f43c1 | [
"MIT"
] | null | null | null | tests/basic_tests.py | dhdsjy/Feature_auto_ml | d30aa3f884c51cc060e26d38a8c648f9744f43c1 | [
"MIT"
] | null | null | null | tests/basic_tests.py | dhdsjy/Feature_auto_ml | d30aa3f884c51cc060e26d38a8c648f9744f43c1 | [
"MIT"
] | null | null | null | """
To get standard out, run nosetests as follows:
$ nosetests -s tests
"""
import datetime
import os
import random
import sys
sys.path = [os.path.abspath(os.path.dirname(__file__))] + sys.path
from auto_ml import Predictor
from nose.tools import assert_equal, assert_not_equal, with_setup
from sklearn.metrics import accuracy_score
import dill
import numpy as np
import utils_testing as utils
def test_binary_classification():
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
ml_predictor = utils.train_basic_binary_classifier(df_titanic_train)
test_score = ml_predictor.score(df_titanic_test, df_titanic_test.survived, verbose=0)
# Right now we're getting a score of -.205
# Make sure our score is good, but not unreasonably good
assert -0.215 < test_score < -0.17
def test_multilabel_classification():
np.random.seed(0)
df_twitter_train, df_twitter_test = utils.get_twitter_sentiment_multilabel_classification_dataset()
ml_predictor = utils.train_basic_multilabel_classifier(df_twitter_train)
test_score = ml_predictor.score(df_twitter_test, df_twitter_test.airline_sentiment, verbose=0)
# Right now we're getting a score of -.205
# Make sure our score is good, but not unreasonably good
print('test_score')
print(test_score)
assert 0.67 < test_score < 0.79
def test_nlp_multilabel_classification():
np.random.seed(0)
df_twitter_train, df_twitter_test = utils.get_twitter_sentiment_multilabel_classification_dataset()
column_descriptions = {
'airline_sentiment': 'output'
, 'airline': 'categorical'
, 'text': 'nlp'
, 'tweet_location': 'categorical'
, 'user_timezone': 'categorical'
, 'tweet_created': 'date'
}
ml_predictor = Predictor(type_of_estimator='classifier', column_descriptions=column_descriptions)
ml_predictor.train(df_twitter_train)
test_score = ml_predictor.score(df_twitter_test, df_twitter_test.airline_sentiment, verbose=0)
# Make sure our score is good, but not unreasonably good
print('test_score')
print(test_score)
assert 0.67 < test_score < 0.79
def test_regression():
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
ml_predictor = utils.train_basic_regressor(df_boston_train)
test_score = ml_predictor.score(df_boston_test, df_boston_test.MEDV, verbose=0)
# Currently, we expect to get a score of -3.09
# Make sure our score is good, but not unreasonably good
assert -3.2 < test_score < -2.8
def test_saving_trained_pipeline_regression():
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
ml_predictor = utils.train_basic_regressor(df_boston_train)
file_name = ml_predictor.save(str(random.random()))
with open(file_name, 'rb') as read_file:
saved_ml_pipeline = dill.load(read_file)
os.remove(file_name)
test_score = saved_ml_pipeline.score(df_boston_test, df_boston_test.MEDV)
# Make sure our score is good, but not unreasonably good
assert -3.2 < test_score < -2.8
def test_saving_trained_pipeline_binary_classification():
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
ml_predictor = utils.train_basic_binary_classifier(df_titanic_train)
file_name = ml_predictor.save(str(random.random()))
with open(file_name, 'rb') as read_file:
saved_ml_pipeline = dill.load(read_file)
os.remove(file_name)
test_score = saved_ml_pipeline.score(df_titanic_test, df_titanic_test.survived)
# Right now we're getting a score of -.205
assert -0.215 < test_score < -0.17
def test_saving_trained_pipeline_multilabel_classification():
np.random.seed(0)
df_twitter_train, df_twitter_test = utils.get_twitter_sentiment_multilabel_classification_dataset()
ml_predictor = utils.train_basic_multilabel_classifier(df_twitter_train)
file_name = ml_predictor.save(str(random.random()))
with open(file_name, 'rb') as read_file:
saved_ml_pipeline = dill.load(read_file)
os.remove(file_name)
test_score = saved_ml_pipeline.score(df_twitter_test, df_twitter_test.airline_sentiment)
# Right now we're getting a score of -.205
# Make sure our score is good, but not unreasonably good
print('test_score')
print(test_score)
assert 0.67 < test_score < 0.79
def test_getting_single_predictions_regression():
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
ml_predictor = utils.train_basic_regressor(df_boston_train)
file_name = ml_predictor.save(str(random.random()))
with open(file_name, 'rb') as read_file:
saved_ml_pipeline = dill.load(read_file)
os.remove(file_name)
df_boston_test_dictionaries = df_boston_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
first_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
assert -3.2 < first_score < -2.8
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_boston_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_boston_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro, this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() / 1.0 < 3
# 3. make sure we're not modifying the dictionaries (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
second_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert -3.2 < second_score < -2.8
def test_getting_single_predictions_classification():
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
ml_predictor = utils.train_basic_binary_classifier(df_titanic_train)
file_name = ml_predictor.save(str(random.random()))
with open(file_name, 'rb') as read_file:
saved_ml_pipeline = dill.load(read_file)
os.remove(file_name)
df_titanic_test_dictionaries = df_titanic_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
first_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
assert -0.215 < first_score < -0.17
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_titanic_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_titanic_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro, this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() < 3
# 3. make sure we're not modifying the dictionaries (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
print('df_titanic_test_dictionaries')
print(df_titanic_test_dictionaries)
second_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert -0.215 < second_score < -0.17
# Note that while there is the raw data here to perform NLP, we are not actually performing any NLP for this test
def test_getting_single_predictions_multilabel_classification_with_dates():
np.random.seed(0)
df_twitter_train, df_twitter_test = utils.get_twitter_sentiment_multilabel_classification_dataset()
ml_predictor = utils.train_basic_multilabel_classifier(df_twitter_train)
file_name = ml_predictor.save(str(random.random()))
with open(file_name, 'rb') as read_file:
saved_ml_pipeline = dill.load(read_file)
os.remove(file_name)
df_twitter_test_dictionaries = df_twitter_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_twitter_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
print('predictions')
print(predictions)
first_score = accuracy_score(df_twitter_test.airline_sentiment, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
assert 0.67 < first_score < 0.79
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_twitter_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_twitter_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro, this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() < 3
# 3. make sure we're not modifying the dictionaries (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_twitter_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
print('predictions')
print(predictions)
print('df_twitter_test_dictionaries')
print(df_twitter_test_dictionaries)
second_score = accuracy_score(df_twitter_test.airline_sentiment, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert 0.67 < second_score < 0.79
| 35.64759 | 135 | 0.738826 | 1,735 | 11,835 | 4.769452 | 0.116427 | 0.030453 | 0.03142 | 0.023202 | 0.887251 | 0.860665 | 0.857764 | 0.852931 | 0.837221 | 0.822961 | 0 | 0.018482 | 0.181665 | 11,835 | 331 | 136 | 35.755287 | 0.835932 | 0.237854 | 0 | 0.675393 | 0 | 0 | 0.047731 | 0.014275 | 0 | 0 | 0 | 0 | 0.089005 | 1 | 0.052356 | false | 0 | 0.052356 | 0 | 0.104712 | 0.188482 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
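Each single-prediction test above times 1000 calls with `datetime` and bounds the total duration. That micro-benchmark pattern, sketched with a trivial stand-in for `saved_ml_pipeline.predict`:

```python
import datetime

def predict(row):
    # hypothetical stand-in for saved_ml_pipeline.predict
    return sum(row.values())

rows = [{"a": i, "b": 2 * i} for i in range(10)]

start_time = datetime.datetime.now()
for idx in range(1000):
    predict(rows[idx % len(rows)])
duration = datetime.datetime.now() - start_time

# generous upper bound, as in the tests: absolute speed varies by machine
assert duration.total_seconds() < 3
```

As the tests themselves note, wall-clock thresholds like this are fragile across machines, which is why they multiply the expected latency by a safety factor.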
6d0e1b1c0be4d8c3dacf7757e9a6671119c7bbf7 | 106 | py | Python | train-or-finetune-model/tmp/test.py | ysh329/kaggle-invasive-species-monitoring-classification | 782a15998900fcb60de6fc1cb9fd8a3eb525435c | [
"MIT"
] | null | null | null | train-or-finetune-model/tmp/test.py | ysh329/kaggle-invasive-species-monitoring-classification | 782a15998900fcb60de6fc1cb9fd8a3eb525435c | [
"MIT"
] | null | null | null | train-or-finetune-model/tmp/test.py | ysh329/kaggle-invasive-species-monitoring-classification | 782a15998900fcb60de6fc1cb9fd8a3eb525435c | [
"MIT"
] | null | null | null | import sys
print sys.argv, len(sys.argv)
for idx in xrange(len(sys.argv)):
print idx, sys.argv[idx]
| 15.142857 | 33 | 0.688679 | 20 | 106 | 3.65 | 0.45 | 0.383562 | 0.273973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169811 | 106 | 6 | 34 | 17.666667 | 0.829545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
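The script above uses Python 2 syntax (the `print` statement and `xrange`); a Python 3 equivalent would be:

```python
import sys

print(sys.argv, len(sys.argv))
for idx in range(len(sys.argv)):  # xrange was renamed to range in Python 3
    print(idx, sys.argv[idx])
```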
6d2b93cdf981eea9a5331fe0e85086a51a6e7e1b | 132 | py | Python | app/models/__init__.py | uk-gov-mirror/alphagov.digitalmarketplace-api | 5a1db63691d0c4a435714837196ab6914badaf62 | [
"MIT"
] | 25 | 2015-01-14T10:45:13.000Z | 2021-05-26T17:21:41.000Z | app/models/__init__.py | uk-gov-mirror/alphagov.digitalmarketplace-api | 5a1db63691d0c4a435714837196ab6914badaf62 | [
"MIT"
] | 641 | 2015-01-15T11:10:50.000Z | 2021-06-15T22:18:42.000Z | app/models/__init__.py | uk-gov-mirror/alphagov.digitalmarketplace-api | 5a1db63691d0c4a435714837196ab6914badaf62 | [
"MIT"
] | 22 | 2015-06-13T15:37:45.000Z | 2021-08-19T23:40:49.000Z | from .main import * # noqa
from .direct_award import * # noqa
from .buyer_domains import * # noqa
from .outcomes import * # noqa
| 26.4 | 36 | 0.704545 | 18 | 132 | 5.055556 | 0.5 | 0.43956 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204545 | 132 | 4 | 37 | 33 | 0.866667 | 0.143939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6d2c9aa2eb55a138b2cac46ac0839af9f13f1fda | 140 | py | Python | pilog.humid_temp.py | marcheiligers/piscripts | 453986872dd5dc784a4953607da6d70429417668 | [
"MIT"
] | null | null | null | pilog.humid_temp.py | marcheiligers/piscripts | 453986872dd5dc784a4953607da6d70429417668 | [
"MIT"
] | null | null | null | pilog.humid_temp.py | marcheiligers/piscripts | 453986872dd5dc784a4953607da6d70429417668 | [
"MIT"
] | null | null | null | import socket
from pilog import *
from dht11 import *
humidity, temperature = read_humidity_and_temp()
post_weather(humidity, temperature)
| 20 | 48 | 0.814286 | 18 | 140 | 6.111111 | 0.666667 | 0.345455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01626 | 0.121429 | 140 | 6 | 49 | 23.333333 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed957652349ce731a9f084d193e38af668bc087c | 41 | py | Python | python-renascence/module/__init__.py | jxt1234/Genetic-Program-Frame | c0a801e337a31de05f49047fd11920a3c2e32ed6 | [
"Apache-2.0"
] | 3 | 2016-01-04T09:23:31.000Z | 2019-08-06T11:52:07.000Z | python-renascence/module/__init__.py | jxt1234/Renascence | c0a801e337a31de05f49047fd11920a3c2e32ed6 | [
"Apache-2.0"
] | null | null | null | python-renascence/module/__init__.py | jxt1234/Renascence | c0a801e337a31de05f49047fd11920a3c2e32ed6 | [
"Apache-2.0"
] | 6 | 2016-05-10T16:05:12.000Z | 2019-12-30T09:14:21.000Z | import RenascenceBasic
import Renascence
| 13.666667 | 22 | 0.902439 | 4 | 41 | 9.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 2 | 23 | 20.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eda63e60cb70e627e96422e4ec0d03f0c200810e | 125 | py | Python | net/model/decoder/__init__.py | lhq1/legal-predicetion | 0919732d9aecba17630a3dcaedd3611ca990010c | [
"MIT"
] | 87 | 2018-08-27T14:59:11.000Z | 2022-03-01T07:29:27.000Z | net/model/decoder/__init__.py | lllybi/TopJudge | c9186b132e79830fd4e855777b06a601d76bf0a2 | [
"MIT"
] | 6 | 2018-10-11T09:29:05.000Z | 2020-12-14T02:29:28.000Z | net/model/decoder/__init__.py | lllybi/TopJudge | c9186b132e79830fd4e855777b06a601d76bf0a2 | [
"MIT"
] | 31 | 2018-08-28T00:44:59.000Z | 2022-02-18T18:17:01.000Z | from .fc_decoder import FCDecoder
from .lstm_article_decoder import LSTMArticleDecoder
from .lstm_decoder import LSTMDecoder
| 31.25 | 52 | 0.88 | 16 | 125 | 6.625 | 0.5625 | 0.367925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096 | 125 | 3 | 53 | 41.666667 | 0.938053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
edb5dc17eea618b45e20b4dc8709a561e933224e | 22,434 | py | Python | tests/utils.py | ruth-ann/deepsnap | 35eeb5abdb304c53b2e0a68cbbeeaa55dca286a0 | [
"MIT"
] | 412 | 2020-06-20T01:37:29.000Z | 2022-03-29T11:32:55.000Z | tests/utils.py | ruth-ann/deepsnap | 35eeb5abdb304c53b2e0a68cbbeeaa55dca286a0 | [
"MIT"
] | 43 | 2020-06-21T09:16:10.000Z | 2022-02-28T03:07:50.000Z | tests/utils.py | ruth-ann/deepsnap | 35eeb5abdb304c53b2e0a68cbbeeaa55dca286a0 | [
"MIT"
] | 46 | 2020-06-20T02:00:48.000Z | 2022-03-16T21:25:20.000Z | import numpy as np
import networkx as nx
import random
import torch
import itertools
np.random.seed(0)
def pyg_to_dicts(dataset, task="enzyme"):
ds = []
for data in dataset:
d = {}
d["node_feature"] = data.x
if task == "enzyme":
            d["graph_label"] = data.y
elif task == "cora":
d["node_label"] = data.y
d["directed"] = data.is_directed()
edge_index = data.edge_index
if not data.is_directed():
row, col = edge_index
mask = row < col
row, col = row[mask], col[mask]
edge_index = torch.stack([row, col], dim=0)
edge_index = torch.cat([edge_index, torch.flip(edge_index, [0])], dim=1)
d["edge_index"] = edge_index
ds.append(d)
return ds
def simple_networkx_small_graph(directed=True):
if directed:
G = nx.DiGraph()
else:
G = nx.Graph()
G.add_node(0, node_label=0)
G.add_node(1, node_label=1)
G.add_node(2, node_label=2)
G.add_node(3, node_label=0)
G.add_node(4, node_label=1)
G.add_edge(0, 1, edge_label=0)
G.add_edge(0, 4, edge_label=1)
G.add_edge(1, 2, edge_label=3)
G.add_edge(1, 3, edge_label=3)
G.add_edge(2, 4, edge_label=0)
return G
def simple_networkx_dense_multigraph(num_edges_removed=0):
# TODO: restrict value of num_edges_removed
G = nx.MultiDiGraph()
for i in range(5):
G.add_node(i, node_label=0)
cnt = 0
for i in range(5):
for j in range(5):
if cnt >= num_edges_removed:
for k in range(3):
G.add_edge(i, j, edge_label=0)
cnt += 1
return G
def simple_networkx_dense_graph(num_edges_removed=0):
# TODO: restrict value of num_edges_removed
G = nx.DiGraph()
for i in range(5):
G.add_node(i, node_label=0)
cnt = 0
for i in range(5):
for j in range(5):
if cnt >= num_edges_removed:
G.add_edge(i, j, edge_label=0)
cnt += 1
return G
# TODO: update graph generator s.t. homogeneous & heterogeneous graph share the same format.
def simple_networkx_graph(directed=True):
num_nodes = 10
edge_index = (
torch.tensor(
[
[0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 6, 6, 7, 7, 9],
[1, 2, 2, 3, 3, 8, 4, 5, 6, 5, 6, 7, 8, 9, 8, 9, 8]
]
).long()
)
x = torch.zeros([num_nodes, 2])
y = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]).long()
for i in range(num_nodes):
x[i] = np.random.randint(1, num_nodes)
edge_x = torch.zeros([edge_index.shape[1], 2])
edge_y = torch.tensor(
[0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3]
).long()
for i in range(edge_index.shape[1]):
edge_x[i] = np.random.randint(1, num_nodes)
G = nx.DiGraph()
G.add_nodes_from(range(num_nodes))
for i, (u, v) in enumerate(edge_index.T.tolist()):
        G.add_edge(u, v, edge_attr=edge_x[i], edge_y=edge_y[i])  # attrs are read back below in the undirected branch
# if it is undirected, modify the edge attributes
if directed is False:
G = G.to_undirected()
H = G.to_directed()
edge_index = np.zeros([2, edge_index.shape[1] * 2]).astype(np.int64)
edge_x = np.zeros([edge_x.shape[0] * 2, edge_x.shape[1]])
edge_y = np.zeros(edge_y.shape[0] * 2).astype(np.int64)
for i, nx_edge in enumerate(nx.to_edgelist(H)):
edge_index[:, i] = (
np.array([nx_edge[0], nx_edge[1]]).astype(np.int64)
)
edge_x[i] = nx_edge[2]['edge_attr']
edge_y[i] = nx_edge[2]['edge_y']
graph_x = torch.tensor([[0, 1]])
graph_y = torch.tensor([0])
return G, x, y, edge_x, edge_y, edge_index, graph_x, graph_y
def simple_networkx_graph_alphabet(directed=True):
num_nodes = 10
edge_index = (
torch.tensor(
[
[0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 6, 6, 7, 7, 9],
[1, 2, 2, 3, 3, 8, 4, 5, 6, 5, 6, 7, 8, 9, 8, 9, 8]
]
).long()
)
x = torch.zeros([num_nodes, 2])
y = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]).long()
for i in range(num_nodes):
x[i] = np.random.randint(1, num_nodes)
edge_x = torch.zeros([edge_index.shape[1], 2])
edge_y = torch.tensor(
[0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3]
).long()
for i in range(edge_index.shape[1]):
edge_x[i] = np.random.randint(1, num_nodes)
G = nx.DiGraph()
G.add_nodes_from(range(num_nodes))
for i, (u, v) in enumerate(edge_index.T.tolist()):
        G.add_edge(u, v, edge_attr=edge_x[i], edge_y=edge_y[i])  # attrs are read back below in the undirected branch
# if it is undirected, modify the edge attributes
if directed is False:
G = G.to_undirected()
H = G.to_directed()
edge_index = np.zeros([2, edge_index.shape[1] * 2]).astype(np.int64)
edge_x = np.zeros([edge_x.shape[0] * 2, edge_x.shape[1]])
edge_y = np.zeros(edge_y.shape[0] * 2).astype(np.int64)
for i, nx_edge in enumerate(nx.to_edgelist(H)):
edge_index[:, i] = (
np.array([nx_edge[0], nx_edge[1]]).astype(np.int64)
)
edge_x[i] = nx_edge[2]['edge_attr']
edge_y[i] = nx_edge[2]['edge_y']
graph_x = torch.tensor([[0, 1]])
graph_y = torch.tensor([0])
# number -> alphabet transform
keys = list(G.nodes)
vals = [chr(x + 97) for x in list(range(len(keys)))]
mapping = dict(zip(keys, vals))
G = nx.relabel_nodes(G, mapping, copy=True)
return G, x, y, edge_x, edge_y, edge_index, graph_x, graph_y
def simple_networkx_multigraph():
num_nodes = 10
edge_index = (
torch.tensor(
[
[0, 0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 6, 6, 7, 7, 9],
[1, 1, 1, 2, 2, 3, 3, 8, 8, 4, 5, 6, 5, 6, 7, 8, 8, 9, 8, 9, 8]
]
).long()
)
x = torch.zeros([num_nodes, 2])
y = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]).long()
for i in range(num_nodes):
x[i] = np.random.randint(1, num_nodes)
edge_x = torch.zeros([edge_index.shape[1], 2])
edge_y = torch.tensor(
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3]
).long()
for i in range(edge_index.shape[1]):
edge_x[i] = np.random.randint(1, num_nodes)
G = nx.MultiDiGraph()
G.add_nodes_from(range(num_nodes))
for i, (u, v) in enumerate(edge_index.T.tolist()):
G.add_edge(u, v)
graph_x = torch.tensor([[0, 1]])
graph_y = torch.tensor([0])
return G, x, y, edge_x, edge_y, edge_index, graph_x, graph_y
def sample_neigh(graph, size):
while True:
start_node = np.random.choice(list(graph.nodes))
neigh = [start_node]
frontier = list(set(graph.neighbors(start_node)) - set(neigh))
visited = set([start_node])
while len(neigh) < size and frontier:
new_node = np.random.choice(list(frontier))
assert new_node not in neigh
neigh.append(new_node)
visited.add(new_node)
frontier += list(graph.neighbors(new_node))
frontier = [x for x in frontier if x not in visited]
if len(neigh) == size:
return graph, neigh
def gen_graph(size, graph):
graph, neigh = sample_neigh(graph, size)
return graph.subgraph(neigh)
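`sample_neigh` grows a connected neighborhood by frontier expansion, restarting from a fresh random node whenever the frontier empties before reaching `size`. The same algorithm, sketched here over a plain adjacency dict so it runs standalone without networkx or numpy:

```python
import random

def sample_neigh(adj, size):
    # adj: node -> set of neighbors; grow a connected neighborhood of `size` nodes
    while True:
        start = random.choice(list(adj))
        neigh = [start]
        visited = {start}
        frontier = list(adj[start] - visited)
        while len(neigh) < size and frontier:
            new_node = random.choice(frontier)
            neigh.append(new_node)
            visited.add(new_node)
            frontier += list(adj[new_node])
            frontier = [x for x in frontier if x not in visited]
        if len(neigh) == size:
            return neigh

# path graph 0-1-2-3-4-5 as an adjacency dict
adj = {i: set() for i in range(6)}
for u, v in zip(range(5), range(1, 6)):
    adj[u].add(v)
    adj[v].add(u)

random.seed(0)
neigh = sample_neigh(adj, 3)
assert len(neigh) == 3
# every node after the first is adjacent to an earlier one, so the sample is connected
assert all(adj[neigh[i]] & set(neigh[:i]) for i in range(1, len(neigh)))
```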
def generate_simple_dense_hete_graph(num_edges_removed=0):
# TODO: restrict value of num_edges_removed
G = nx.DiGraph()
for i in range(3):
G.add_node(i, node_label=0, node_type=0)
for i in range(3, 5):
G.add_node(i, node_label=0, node_type=1)
# message_type (0, 0, 0)
cnt = 0
for i in range(3):
for j in range(3):
if cnt >= num_edges_removed:
G.add_edge(i, j, edge_label=1, edge_type=0)
cnt += 1
# message_type (1, 1, 1)
cnt = 0
for i in range(3, 5):
for j in range(3, 5):
if cnt >= num_edges_removed:
G.add_edge(i, j, edge_label=1, edge_type=1)
cnt += 1
return G
def generate_simple_dense_hete_multigraph(num_edges_removed=0):
# TODO: restrict value of num_edges_removed
G = nx.MultiDiGraph()
for i in range(3):
G.add_node(i, node_label=0, node_type=0)
for i in range(3, 5):
G.add_node(i, node_label=0, node_type=1)
# message_type (0, 0, 0)
cnt = 0
for i in range(3):
for j in range(3):
if cnt >= num_edges_removed:
for k in range(3):
G.add_edge(i, j, edge_label=1, edge_type=0)
cnt += 1
# message_type (1, 1, 1)
cnt = 0
for i in range(3, 5):
for j in range(3, 5):
if cnt >= num_edges_removed:
for k in range(3):
G.add_edge(i, j, edge_label=1, edge_type=1)
cnt += 1
return G
def generate_simple_small_hete_graph(directed=True):
    if directed:
        G = nx.DiGraph()
    else:
        G = nx.Graph()
    G.add_node(0, node_label=0, node_type=0)
    G.add_node(1, node_label=1, node_type=0)
    G.add_node(2, node_label=2, node_type=0)
    G.add_node(3, node_label=0, node_type=0)
    G.add_node(4, node_label=1, node_type=1)
    G.add_node(5, node_label=1, node_type=1)
    G.add_node(6, node_label=1, node_type=1)
    # message_type (0, 0, 0)
    G.add_edge(0, 1, edge_label=0, edge_type=0)
    G.add_edge(0, 2, edge_label=0, edge_type=0)
    G.add_edge(0, 3, edge_label=0, edge_type=0)
    # message_type (0, 1, 1)
    G.add_edge(0, 4, edge_label=1, edge_type=1)
    G.add_edge(2, 4, edge_label=0, edge_type=1)
    G.add_edge(3, 5, edge_label=0, edge_type=1)
    G.add_edge(3, 6, edge_label=0, edge_type=1)
    # message_type (0, 1, 0)
    G.add_edge(1, 2, edge_label=3, edge_type=1)
    G.add_edge(1, 3, edge_label=3, edge_type=1)
    G.add_edge(2, 3, edge_label=3, edge_type=1)
    return G
def generate_simple_hete_graph(add_edge_type=True):
    G = nx.DiGraph()
    for i in range(9):
        if i < 2:
            node_feature = torch.rand([10, ])
            node_type = "n1"
            node_label = 0
        elif 2 <= i < 4:
            node_feature = torch.rand([12, ])
            node_type = "n2"
            node_label = 0
        elif 4 <= i < 6:
            node_feature = torch.rand([10, ])
            node_type = "n1"
            node_label = 1
        else:
            node_feature = torch.rand([12, ])
            node_type = "n2"
            node_label = 1
        G.add_node(
            i,
            node_type=node_type,
            node_label=node_label,
            node_feature=node_feature,
        )
    if add_edge_type:
        G.add_edge(0, 1, edge_label=0, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(0, 2, edge_label=1, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(0, 5, edge_label=0, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(1, 3, edge_label=0, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(1, 5, edge_label=1, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(2, 3, edge_label=1, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(2, 4, edge_label=2, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(3, 4, edge_label=2, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(4, 0, edge_label=1, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(4, 5, edge_label=1, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(5, 7, edge_label=1, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(6, 1, edge_label=1, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(6, 2, edge_label=1, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(7, 3, edge_label=2, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(8, 0, edge_label=0, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(8, 1, edge_label=0, edge_feature=torch.rand([12, ]), edge_type="e2")
    else:
        G.add_edge(0, 1, edge_label=0, edge_feature=torch.rand([8, ]))
        G.add_edge(0, 2, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(0, 5, edge_label=0, edge_feature=torch.rand([8, ]))
        G.add_edge(1, 3, edge_label=0, edge_feature=torch.rand([8, ]))
        G.add_edge(1, 5, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(2, 3, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(2, 4, edge_label=2, edge_feature=torch.rand([8, ]))
        G.add_edge(3, 4, edge_label=2, edge_feature=torch.rand([8, ]))
        G.add_edge(4, 0, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(4, 5, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(5, 7, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(6, 1, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(6, 2, edge_label=1, edge_feature=torch.rand([8, ]))
        G.add_edge(7, 3, edge_label=2, edge_feature=torch.rand([8, ]))
        G.add_edge(8, 0, edge_label=0, edge_feature=torch.rand([8, ]))
        G.add_edge(8, 1, edge_label=0, edge_feature=torch.rand([8, ]))
    return G
def generate_simple_hete_dataset(add_edge_type=True):
    G = nx.DiGraph()
    node_label_options = [0, 1, 2]
    for i in range(9):
        node_label = random.choice(node_label_options)
        if i < 2:
            node_feature = torch.rand([10, ])
            node_type = "n1"
        elif 2 <= i < 4:
            node_feature = torch.rand([12, ])
            node_type = "n2"
        elif 4 <= i < 6:
            node_feature = torch.rand([10, ])
            node_type = "n1"
        else:
            node_feature = torch.rand([12, ])
            node_type = "n2"
        G.add_node(
            i,
            node_type=node_type,
            node_label=node_label,
            node_feature=node_feature,
        )
    if add_edge_type:
        G.add_edge(0, 1, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(0, 2, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(0, 5, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(1, 3, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(1, 5, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(2, 3, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(2, 4, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(3, 4, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(4, 0, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(4, 5, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(5, 7, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(6, 1, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(6, 2, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(7, 3, edge_feature=torch.rand([8, ]), edge_type="e1")
        G.add_edge(8, 0, edge_feature=torch.rand([12, ]), edge_type="e2")
        G.add_edge(8, 1, edge_feature=torch.rand([12, ]), edge_type="e2")
    else:
        G.add_edge(0, 1, edge_feature=torch.rand([8, ]))
        G.add_edge(0, 2, edge_feature=torch.rand([8, ]))
        G.add_edge(0, 5, edge_feature=torch.rand([8, ]))
        G.add_edge(1, 3, edge_feature=torch.rand([8, ]))
        G.add_edge(1, 5, edge_feature=torch.rand([8, ]))
        G.add_edge(2, 3, edge_feature=torch.rand([8, ]))
        G.add_edge(2, 4, edge_feature=torch.rand([8, ]))
        G.add_edge(3, 4, edge_feature=torch.rand([8, ]))
        G.add_edge(4, 0, edge_feature=torch.rand([8, ]))
        G.add_edge(4, 5, edge_feature=torch.rand([8, ]))
        G.add_edge(5, 7, edge_feature=torch.rand([8, ]))
        G.add_edge(6, 1, edge_feature=torch.rand([8, ]))
        G.add_edge(6, 2, edge_feature=torch.rand([8, ]))
        G.add_edge(7, 3, edge_feature=torch.rand([8, ]))
        G.add_edge(8, 0, edge_feature=torch.rand([8, ]))
        G.add_edge(8, 1, edge_feature=torch.rand([8, ]))
    return G
def generate_dense_hete_graph(add_edge_type=True, directed=True):
    if directed:
        G = nx.DiGraph()
    else:
        G = nx.Graph()
    num_node = 20
    for i in range(num_node):
        if i < 10:
            node_feature = torch.rand([10, ])
            node_type = "n1"
            node_label = 0
        else:
            node_feature = torch.rand([12, ])
            node_type = "n2"
            node_label = 1
        G.add_node(
            i,
            node_type=node_type,
            node_label=node_label,
            node_feature=node_feature,
        )
    if add_edge_type:
        for i, j in itertools.permutations(range(num_node), 2):
            rand = np.random.random()
            if rand > 0.8:
                continue
            elif rand > 0.4:
                G.add_edge(i, j, edge_label=0,
                           edge_feature=torch.rand([8, ]), edge_type='e1')
            else:
                G.add_edge(i, j, edge_label=0,
                           edge_feature=torch.rand([8, ]), edge_type='e2')
    else:
        for i, j in itertools.permutations(range(num_node), 2):
            rand = np.random.random()
            if rand > 0.8:
                continue
            # without edge types, both probability branches added the same
            # edge, so they collapse into a single call
            G.add_edge(i, j, edge_label=0, edge_feature=torch.rand([8, ]))
    return G
def generate_dense_hete_dataset(add_edge_type=True):
    G = nx.DiGraph()
    num_node = 20
    node_label_options = [0, 1, 2, 3]
    edge_label_options = [0, 1, 2]
    for i in range(num_node):
        node_feature = torch.rand([1, ])
        if i < 10:
            node_type = "n1"
        else:
            node_type = "n2"
        node_label = random.choice(node_label_options)
        G.add_node(
            i,
            node_type=node_type,
            node_label=node_label,
            node_feature=node_feature,
        )
    if add_edge_type:
        for i, j in itertools.permutations(range(num_node), 2):
            rand = np.random.random()
            if rand > 0.8:
                continue
            elif rand > 0.4:
                edge_type = "e1"
            else:
                edge_type = "e2"
            edge_label = random.choice(edge_label_options)
            G.add_edge(
                i, j, edge_feature=torch.rand([1, ]),
                edge_label=edge_label, edge_type=edge_type,
            )
    else:
        for i, j in itertools.permutations(range(num_node), 2):
            rand = np.random.random()
            if rand > 0.8:
                continue
            elif rand > 0.4:
                edge_label = 0
            else:
                edge_label = 1
            G.add_edge(
                i, j, edge_feature=torch.rand([1, ]),
                edge_label=edge_label,
            )
    return G
def generate_dense_hete_multigraph(add_edge_type=True):
    G = nx.MultiDiGraph()
    num_node = 20
    for i in range(num_node):
        if i < 10:
            node_feature = torch.rand([10, ])
            node_type = "n1"
            node_label = 0
        else:
            node_feature = torch.rand([12, ])
            node_type = "n2"
            node_label = 1
        G.add_node(
            i,
            node_type=node_type,
            node_label=node_label,
            node_feature=node_feature,
        )
    if add_edge_type:
        for i, j in itertools.permutations(range(num_node), 2):
            rand = np.random.random()
            if rand > 0.8:
                continue
            elif rand > 0.4:
                edge_type = 'e1'
            else:
                edge_type = 'e2'
            # add each surviving edge twice to exercise the multigraph
            G.add_edge(i, j, edge_label=0,
                       edge_feature=torch.rand([8, ]), edge_type=edge_type)
            G.add_edge(i, j, edge_label=0,
                       edge_feature=torch.rand([8, ]), edge_type=edge_type)
    else:
        for i, j in itertools.permutations(range(num_node), 2):
            rand = np.random.random()
            if rand > 0.8:
                continue
            # both probability branches added identical edge pairs, so they
            # collapse into one branch; each edge is still added twice
            G.add_edge(i, j, edge_label=0, edge_feature=torch.rand([8, ]))
            G.add_edge(i, j, edge_label=0, edge_feature=torch.rand([8, ]))
    return G
# Source: MV3D_TF_release/lib/rpn_msr/proposal_layer_tf.py
# Repo: ZiningWang/Sparse_Pooling (license: Unlicense)
# Faster R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick and Sean Bell
# --------------------------------------------------------
import numpy as np
import yaml
from fast_rcnn.config import cfg
from rpn_msr.generate_anchors import generate_anchors_bv, generate_anchors
from rpn_msr.anchor_target_layer_tf import clip_anchors
from fast_rcnn.bbox_transform import bbox_transform_inv, clip_boxes, bbox_transform_inv_3d
from fast_rcnn.nms_wrapper import nms
from utils.transform import bv_anchor_to_lidar, lidar_to_bv, lidar_3d_to_bv, lidar_3d_to_corners, lidar_cnr_to_img
import pdb, time
#DEBUG = False
"""
Outputs object detection proposals by applying estimated bounding-box
transformations to a set of regular boxes (called "anchors").
"""
def proposal_layer_3d_debug(rpn_cls_prob_reshape, rpn_bbox_pred, im_info, calib, cfg_in,
                            _feat_stride=[8, ], anchor_scales=[1.0, 1.0], debug_state=True):
    # copies part of proposal_layer_3d below, for debugging
    _anchors = generate_anchors_bv()
    # _anchors = generate_anchors(scales=np.array(anchor_scales))
    _num_anchors = _anchors.shape[0]
    im_info = im_info[0]
    assert rpn_cls_prob_reshape.shape[0] == 1, \
        'Only single item batches are supported'
    # the first set of _num_anchors channels are bg probs,
    # the second set are the fg probs, which we want
    height, width = rpn_cls_prob_reshape.shape[1:3]
    scores = np.reshape(
        np.reshape(rpn_cls_prob_reshape,
                   [1, height, width, _num_anchors, 2])[:, :, :, :, 1],
        [1, height, width, _num_anchors])
    bbox_deltas = rpn_bbox_pred
    if debug_state:
        print('im_size: ({}, {})'.format(im_info[0], im_info[1]))
        print('scale: {}'.format(im_info[2]))
        print('score map size: {}'.format(scores.shape))
    # Enumerate all shifts
    shift_x = np.arange(0, width) * _feat_stride
    shift_y = np.arange(0, height) * _feat_stride
    shift_x, shift_y = np.meshgrid(shift_x, shift_y)
    shifts = np.vstack((shift_x.ravel(), shift_y.ravel(),
                        shift_x.ravel(), shift_y.ravel())).transpose()
    # Enumerate all shifted anchors:
    #   add A anchors (1, A, 4) to
    #   cell K shifts (K, 1, 4) to get
    #   shifted anchors (K, A, 4),
    #   then reshape to (K*A, 4)
    A = _num_anchors
    K = shifts.shape[0]
    anchors = _anchors.reshape((1, A, 4)) + \
        shifts.reshape((1, K, 4)).transpose((1, 0, 2))
    anchors = anchors.reshape((K * A, 4))
    bbox_deltas = bbox_deltas.reshape((-1, 6))
    scores = scores.reshape((-1, 1))
    # convert bird's-eye-view anchors to 3D lidar anchors
    anchors_3d = bv_anchor_to_lidar(anchors)
    # convert anchors into proposals via bbox transformations
    proposals_3d = bbox_transform_inv_3d(anchors_3d, bbox_deltas)
    # convert back to lidar bird's-eye view, [x1, y1, x2, y2]
    proposals_bv = lidar_3d_to_bv(proposals_3d)
    lidar_corners = lidar_3d_to_corners(proposals_3d)
    proposals_img = lidar_cnr_to_img(lidar_corners,
                                     calib[3], calib[2], calib[0])
    if debug_state:
        print("proposals_bv shape: ", proposals_bv.shape)
        print("proposals_3d shape: ", proposals_3d.shape)
        print("scores shape:", scores.shape)
    # 2. clip predicted boxes to image
    # WZN: delete those not in image
    ind_inside = clip_anchors(anchors, im_info[:2])
    # ind_inside = np.logical_and(ind_inside, clip_anchors(proposals_bv, im_info[:2]))
    proposals_bv = proposals_bv[ind_inside, :]
    proposals_3d = proposals_3d[ind_inside, :]
    proposals_img = proposals_img[ind_inside, :]
    scores = scores[ind_inside, :]
    proposals_bv = clip_boxes(proposals_bv, im_info[:2])
    # TODO: pass real image_info
    # keep = _filter_img_boxes(proposals_img, [375, 1242])
    # proposals_bv = proposals_bv[keep, :]
    # proposals_3d = proposals_3d[keep, :]
    # proposals_img = proposals_img[keep, :]
    # scores = scores[keep]
    if debug_state:
        print("proposals after clip")
        print("proposals_bv shape: ", proposals_bv.shape)
        print("proposals_3d shape: ", proposals_3d.shape)
        print("proposals_img shape: ", proposals_img.shape)
    # 4. sort all (proposal, score) pairs by score from highest to lowest
    # 5. take top pre_keep_topN (e.g. 6000)
    order = scores.ravel().argsort()[::-1]
    if cfg_in['pre_keep_topN'] > 0:
        order = order[:cfg_in['pre_keep_topN']]
    proposals_bv = proposals_bv[order, :]
    proposals_3d = proposals_3d[order, :]
    proposals_img = proposals_img[order, :]
    scores = scores[order]
    # 6. apply nms (e.g. threshold = 0.7)
    # 7. take after_nms_topN (e.g. 300)
    # 8. return the top proposals (-> RoIs top)
    if cfg_in['use_nms']:
        keep = nms(np.hstack((proposals_bv, scores)), cfg_in['nms_thresh'])
        if cfg_in['nms_topN'] > 0:
            keep = keep[:cfg_in['nms_topN']]
        proposals_bv = proposals_bv[keep, :]
        proposals_3d = proposals_3d[keep, :]
        proposals_img = proposals_img[keep, :]
        scores = scores[keep]
        if debug_state:
            print("proposals after nms")
            print("proposals_bv shape: ", proposals_bv.shape)
            print("proposals_3d shape: ", proposals_3d.shape)
    # debug only: keep probabilities above a threshold
    if cfg_in['prob_thresh']:
        keep_ind = scores[:, 0] > cfg_in['prob_thresh']
        print('scores: ', scores)
        print('threshold: ', cfg_in['prob_thresh'])
        print('score shape:', scores.shape)
        proposals_bv = proposals_bv[keep_ind, :]
        proposals_3d = proposals_3d[keep_ind, :]
        proposals_img = proposals_img[keep_ind, :]
        scores = scores[keep_ind]
    return proposals_bv, proposals_3d, proposals_img, scores
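The anchor enumeration in these functions leans on NumPy broadcasting: a (1, A, 4) anchor array plus a (K, 1, 4) shift array yields all K*A shifted anchors in a single add (the `reshape((1, K, 4)).transpose((1, 0, 2))` in the code produces the same (K, 1, 4) layout). A toy numeric sketch with made-up anchors and an 8-pixel stride:

```python
import numpy as np

A, K = 2, 3
base_anchors = np.array([[0, 0, 7, 7],
                         [0, 0, 15, 15]])   # (A, 4) boxes at the origin
shifts = np.array([[0, 0, 0, 0],
                   [8, 0, 8, 0],
                   [0, 8, 0, 8]])           # (K, 4) per-cell offsets

# (1, A, 4) + (K, 1, 4) broadcasts to (K, A, 4); flatten to (K*A, 4)
anchors = (base_anchors.reshape((1, A, 4)) +
           shifts.reshape((K, 1, 4))).reshape((K * A, 4))
print(anchors.shape)  # (6, 4)
print(anchors[2])     # first anchor shifted right by 8: [ 8  0 15  7]
```

The flattened array is ordered shift-major, anchor-minor, which is why the predicted deltas and scores are reshaped with the same (h, w, a) convention before being paired with the anchors.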
def proposal_layer_3d(rpn_cls_prob_reshape, rpn_bbox_pred, im_info, calib, cfg_key,
                      _feat_stride=[8, ], anchor_scales=[1.0, 1.0], DEBUG=False):
    # Algorithm:
    #
    # for each (H, W) location i
    #     generate A anchor boxes centered on cell i
    #     apply predicted bbox deltas at cell i to each of the A anchors
    # clip predicted boxes to image
    # remove predicted boxes with either height or width < threshold
    # sort all (proposal, score) pairs by score from highest to lowest
    # take top pre_nms_topN proposals before NMS
    # apply NMS with threshold 0.7 to remaining proposals
    # take after_nms_topN proposals after NMS
    # return the top proposals (-> RoIs top, scores top)
    _anchors = generate_anchors_bv()
    # _anchors = generate_anchors(scales=np.array(anchor_scales))
    _num_anchors = _anchors.shape[0]
    im_info = im_info[0]
    assert rpn_cls_prob_reshape.shape[0] == 1, \
        'Only single item batches are supported'
    # cfg_key is either 'TRAIN' or 'TEST'; it may arrive as bytes from TensorFlow
    if type(cfg_key) is bytes:
        cfg_key = cfg_key.decode('UTF-8', 'ignore')
    pre_score_filt = cfg[cfg_key].RPN_SCORE_FILT
    pre_nms_topN = cfg[cfg_key].RPN_PRE_NMS_TOP_N
    post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N
    nms_thresh = cfg[cfg_key].RPN_NMS_THRESH
    min_size = cfg[cfg_key].RPN_MIN_SIZE
    # the first set of _num_anchors channels are bg probs,
    # the second set are the fg probs, which we want
    height, width = rpn_cls_prob_reshape.shape[1:3]
    scores = np.reshape(
        np.reshape(rpn_cls_prob_reshape,
                   [1, height, width, _num_anchors, 2])[:, :, :, :, 1],
        [1, height, width, _num_anchors])
    bbox_deltas = rpn_bbox_pred
    if DEBUG:
        print('im_size: ({}, {})'.format(im_info[0], im_info[1]))
        print('scale: {}'.format(im_info[2]))
    # 1. Generate proposals from bbox deltas and shifted anchors
    if DEBUG:
        print('score map size: {}'.format(scores.shape))
    # Enumerate all shifts
    shift_x = np.arange(0, width) * _feat_stride
    shift_y = np.arange(0, height) * _feat_stride
    shift_x, shift_y = np.meshgrid(shift_x, shift_y)
    shifts = np.vstack((shift_x.ravel(), shift_y.ravel(),
                        shift_x.ravel(), shift_y.ravel())).transpose()
    # Enumerate all shifted anchors:
    #   add A anchors (1, A, 4) to
    #   cell K shifts (K, 1, 4) to get
    #   shifted anchors (K, A, 4),
    #   then reshape to (K*A, 4)
    A = _num_anchors
    K = shifts.shape[0]
    anchors = _anchors.reshape((1, A, 4)) + \
        shifts.reshape((1, K, 4)).transpose((1, 0, 2))
    anchors = anchors.reshape((K * A, 4))
    # deltas and scores already come in (1, H, W, A * 6) / (1, H, W, A) layout
    # from TensorFlow, so a plain reshape lines them up with the anchors,
    # rows ordered by (h, w, a)
    bbox_deltas = bbox_deltas.reshape((-1, 6))
    scores = scores.reshape((-1, 1))
    # WZN: pre-filter by score to shrink the workload downstream
    score_filter = scores[:, 0] > pre_score_filt
    scores = scores[score_filter, :]
    anchors = anchors[score_filter, :]
    bbox_deltas = bbox_deltas[score_filter, :]
    # 4./5. sort by score and take top pre_nms_topN (e.g. 6000) early,
    # before the bbox transform, to save time
    order = scores.ravel().argsort()[::-1]
    if pre_nms_topN > 0 and pre_nms_topN < order.shape[0]:
        order = order[:pre_nms_topN]
    scores = scores[order, :]
    anchors = anchors[order, :]
    bbox_deltas = bbox_deltas[order, :]
    # convert bird's-eye-view anchors to 3D lidar anchors
    anchors_3d = bv_anchor_to_lidar(anchors)
    # convert anchors into proposals via bbox transformations
    proposals_3d = bbox_transform_inv_3d(anchors_3d, bbox_deltas)
    # convert back to lidar bird's-eye view
    proposals_bv = lidar_3d_to_bv(proposals_3d)
    lidar_corners = lidar_3d_to_corners(proposals_3d)
    proposals_img = lidar_cnr_to_img(lidar_corners,
                                     calib[3], calib[2], calib[0])
    # WZN: delete those not in image
    ind_inside = clip_anchors(anchors, im_info[:2])
    proposals_bv = proposals_bv[ind_inside, :]
    proposals_3d = proposals_3d[ind_inside, :]
    proposals_img = proposals_img[ind_inside, :]
    scores = scores[ind_inside, :]
    if DEBUG:
        print("proposals_bv shape: ", proposals_bv.shape)
        print("proposals_3d shape: ", proposals_3d.shape)
    # 2. clip predicted boxes to image
    proposals_bv = clip_boxes(proposals_bv, im_info[:2])
    # 3. remove predicted boxes with either height or width < threshold
    # (NOTE: convert min_size to input image scale stored in im_info[2])
    keep = _filter_boxes(proposals_bv, min_size * im_info[2])
    proposals_bv = proposals_bv[keep, :]
    proposals_3d = proposals_3d[keep, :]
    proposals_img = proposals_img[keep, :]
    scores = scores[keep]
    if DEBUG:
        print("proposals after clip")
        print("proposals_bv shape: ", proposals_bv.shape)
        print("proposals_3d shape: ", proposals_3d.shape)
        print("proposals_img shape: ", proposals_img.shape)
    # 6. apply nms (e.g. threshold = 0.7)
    # 7. take after_nms_topN (e.g. 300)
    # 8. return the top proposals (-> RoIs top)
    keep = nms(np.hstack((proposals_bv, scores)), nms_thresh)
    if post_nms_topN > 0:
        keep = keep[:post_nms_topN]
    proposals_bv = proposals_bv[keep, :]
    proposals_3d = proposals_3d[keep, :]
    proposals_img = proposals_img[keep, :]
    scores = scores[keep]
    if DEBUG:
        print("proposals after nms")
        print("proposals_bv shape: ", proposals_bv.shape)
        print("proposals_3d shape: ", proposals_3d.shape)
    # Output rois blobs. This RPN implementation only supports a single
    # input image, so all batch inds are 0.
    batch_inds = np.zeros((proposals_bv.shape[0], 1), dtype=np.float32)
    blob_bv = np.hstack((batch_inds, proposals_bv.astype(np.float32, copy=False)))
    blob_img = np.hstack((batch_inds, proposals_img.astype(np.float32, copy=False)))
    blob_3d = np.hstack((batch_inds, proposals_3d.astype(np.float32, copy=False)))
    if DEBUG:
        print("blob shape ====================:")
        print(blob_bv.shape)
        print(blob_img.shape)
    return blob_bv, blob_img, blob_3d, scores
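The score ranking used before NMS in these layers is just a descending `argsort` plus a top-N slice, with the same index array reused to reorder every parallel proposal array. A self-contained sketch with toy scores:

```python
import numpy as np

scores = np.array([[0.2], [0.9], [0.5], [0.7]])
proposals = np.arange(8).reshape(4, 2)  # stand-in boxes, one row per score

order = scores.ravel().argsort()[::-1]  # indices from highest to lowest score
top_n = 2
order = order[:top_n]

print(order)                 # [1 3]
print(proposals[order, :])   # rows of the two best-scoring proposals
```

Because `order` is a plain index array, fancy indexing with it keeps `proposals_bv`, `proposals_3d`, `proposals_img`, and `scores` aligned row for row.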
def proposal_layer(rpn_cls_prob_reshape, rpn_bbox_pred, im_info, cfg_key,
                   _feat_stride=[16, ], anchor_scales=[8, 16, 32], DEBUG=False):
    # Algorithm:
    #
    # for each (H, W) location i
    #     generate A anchor boxes centered on cell i
    #     apply predicted bbox deltas at cell i to each of the A anchors
    # clip predicted boxes to image
    # remove predicted boxes with either height or width < threshold
    # sort all (proposal, score) pairs by score from highest to lowest
    # take top pre_nms_topN proposals before NMS
    # apply NMS with threshold 0.7 to remaining proposals
    # take after_nms_topN proposals after NMS
    # return the top proposals (-> RoIs top, scores top)
    _anchors = generate_anchors(scales=np.array(anchor_scales))
    _num_anchors = _anchors.shape[0]
    rpn_cls_prob_reshape = np.transpose(rpn_cls_prob_reshape, [0, 3, 1, 2])
    rpn_bbox_pred = np.transpose(rpn_bbox_pred, [0, 3, 1, 2])
    im_info = im_info[0]
    assert rpn_cls_prob_reshape.shape[0] == 1, \
        'Only single item batches are supported'
    # cfg_key is either 'TRAIN' or 'TEST'
    pre_nms_topN = cfg[cfg_key].RPN_PRE_NMS_TOP_N
    post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N
    nms_thresh = cfg[cfg_key].RPN_NMS_THRESH
    min_size = cfg[cfg_key].RPN_MIN_SIZE
    # the first set of _num_anchors channels are bg probs,
    # the second set are the fg probs, which we want
    scores = rpn_cls_prob_reshape[:, _num_anchors:, :, :]
    bbox_deltas = rpn_bbox_pred
    if DEBUG:
        print('im_size: ({}, {})'.format(im_info[0], im_info[1]))
        print('scale: {}'.format(im_info[2]))
    # 1. Generate proposals from bbox deltas and shifted anchors
    height, width = scores.shape[-2:]
    if DEBUG:
        print('score map size: {}'.format(scores.shape))
    # Enumerate all shifts
    shift_x = np.arange(0, width) * _feat_stride
    shift_y = np.arange(0, height) * _feat_stride
    shift_x, shift_y = np.meshgrid(shift_x, shift_y)
    shifts = np.vstack((shift_x.ravel(), shift_y.ravel(),
                        shift_x.ravel(), shift_y.ravel())).transpose()
    # Enumerate all shifted anchors:
    #   add A anchors (1, A, 4) to
    #   cell K shifts (K, 1, 4) to get
    #   shifted anchors (K, A, 4),
    #   then reshape to (K*A, 4)
    A = _num_anchors
    K = shifts.shape[0]
    anchors = _anchors.reshape((1, A, 4)) + \
        shifts.reshape((1, K, 4)).transpose((1, 0, 2))
    anchors = anchors.reshape((K * A, 4))
    # Transpose and reshape predicted bbox transformations to get them
    # into the same order as the anchors:
    # bbox deltas are (1, 4 * A, H, W); transpose to (1, H, W, 4 * A) and
    # reshape to (1 * H * W * A, 4), rows ordered by (h, w, a)
    bbox_deltas = bbox_deltas.transpose((0, 2, 3, 1)).reshape((-1, 4))
    # Same story for the scores: (1, A, H, W) -> (1 * H * W * A, 1)
    scores = scores.transpose((0, 2, 3, 1)).reshape((-1, 1))
    anchors_3d = bv_anchor_to_lidar(anchors)
    # Convert anchors into proposals via bbox transformations
    proposals = bbox_transform_inv_3d(anchors_3d, bbox_deltas)
    # 2. clip predicted boxes to image
    proposals = clip_boxes(proposals, im_info[:2])
    # 3. remove predicted boxes with either height or width < threshold
    # (NOTE: convert min_size to input image scale stored in im_info[2])
    keep = _filter_boxes(proposals, min_size * im_info[2])
    proposals = proposals[keep, :]
    scores = scores[keep]
    # 4. sort all (proposal, score) pairs by score from highest to lowest
    # 5. take top pre_nms_topN (e.g. 6000)
    order = scores.ravel().argsort()[::-1]
    if pre_nms_topN > 0:
        order = order[:pre_nms_topN]
    proposals = proposals[order, :]
    scores = scores[order]
    # 6. apply nms (e.g. threshold = 0.7)
    # 7. take after_nms_topN (e.g. 300)
    # 8. return the top proposals (-> RoIs top)
    keep = nms(np.hstack((proposals, scores)), nms_thresh)
    if post_nms_topN > 0:
        keep = keep[:post_nms_topN]
    proposals = proposals[keep, :]
    scores = scores[keep]
    # Output rois blob. This RPN implementation only supports a single
    # input image, so all batch inds are 0.
    batch_inds = np.zeros((proposals.shape[0], 1), dtype=np.float32)
    blob = np.hstack((batch_inds, proposals.astype(np.float32, copy=False)))
    return blob
def _filter_boxes(boxes, min_size):
    """Remove all boxes with any side smaller than min_size."""
    ws = boxes[:, 2] - boxes[:, 0] + 1
    hs = boxes[:, 3] - boxes[:, 1] + 1
    # WZN: ds could also filter boxes too far away, e.g. & (ds < 460)
    ds = (boxes[:, 3] + boxes[:, 1]) / 2
    keep = np.where((ws >= min_size) & (hs >= min_size))[0]
    return keep
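A quick numeric check of the min-size filter (the helper is restated here under a hypothetical name so the snippet runs on its own; note the inclusive +1 pixel convention for widths and heights):

```python
import numpy as np

def filter_boxes(boxes, min_size):
    # Keep indices of boxes whose width and height are both >= min_size.
    ws = boxes[:, 2] - boxes[:, 0] + 1
    hs = boxes[:, 3] - boxes[:, 1] + 1
    return np.where((ws >= min_size) & (hs >= min_size))[0]

boxes = np.array([[0, 0, 10, 10],   # 11 x 11 pixels -> kept
                  [0, 0, 2, 10],    # 3 x 11 pixels  -> dropped
                  [5, 5, 9, 9]])    # 5 x 5 pixels   -> kept
keep = filter_boxes(boxes, 5)
print(keep)  # [0 2]
```

The returned index array is then used with fancy indexing (`proposals[keep, :]`, `scores[keep]`) to drop the degenerate boxes.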
def _filter_img_boxes(boxes, im_info):
    """Remove all boxes that fall outside the image extent plus a padding margin."""
    padding = 50
    w_min = -padding
    w_max = im_info[1] + padding
    h_min = -padding
    h_max = im_info[0] + padding
    keep = np.where((w_min <= boxes[:, 0]) & (boxes[:, 2] <= w_max) &
                    (h_min <= boxes[:, 1]) & (boxes[:, 3] <= h_max))[0]
    return keep
# Source: pycsw/core/test_spatialSimilarity.py
# Repo: Anika2/aahll-pycsw (license: MIT)
import spatialSimilarity
# Geometry
def test_spatialdistance_Geometry():
total = spatialSimilarity.spatialDistance([13.0078125, 50.62507306341435, 5.44921875, 45.82879925192134], [
17.7978515625, 52.09300763963822, 7.27294921875, 46.14939437647686])
assert total == 74.02
def test_spatialOverlap_Geometry():
total = spatialSimilarity.spatialOverlap([13.0078125, 50.62507306341435, 5.44921875, 45.82879925192134], [
17.7978515625, 52.09300763963822, 7.27294921875, 46.14939437647686])
assert total == 41.26
def test_similarArea_Geometry():
total = spatialSimilarity.similarArea([13.0078125, 50.62507306341435, 5.44921875, 45.82879925192134], [
17.7978515625, 52.09300763963822, 7.27294921875, 46.14939437647686])
assert total == 58.98
# Points
def test_spatialdistance_Points():
total = spatialSimilarity.spatialDistance([13.0078125, 50.62507306341435, 13.0078125, 50.62507306341435], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 99.1
def test_spatialOverlap_Points():
total = spatialSimilarity.spatialOverlap([13.0078125, 50.62507306341435, 13.0078125, 50.62507306341435], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 81.45
def test_similarArea_Points():
total = spatialSimilarity.similarArea([13.0078125, 50.62507306341435, 13.0078125, 50.62507306341435], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 100.0
##Line and Point
def test_spatialdistance_LineAndPoint():
total = spatialSimilarity.spatialDistance([11.0078125, 50.62507306341435, 13.0078125, 50.62507306341435], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 49.97
def test_spatialOverlap_LineAndPoint():
total = spatialSimilarity.spatialOverlap([11.0078125, 50.62507306341435, 13.0078125, 50.62507306341435], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 0.09
def test_similarArea_LineAndPoint():
total = spatialSimilarity.similarArea([11.0078125, 50.62507306341435, 13.0078125, 50.62507306341435], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 100.0
## Polygon and Point
def test_spatialdistance_PolygonAndPoint():
total = spatialSimilarity.spatialDistance([13.0078125, 50.62507306341435, 5.44921875, 45.82879925192134], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 50.57
def test_spatialOverlap_PolygonAndPoint():
total = spatialSimilarity.spatialOverlap([13.0078125, 50.62507306341435, 5.44921875, 45.82879925192134], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 0.0
def test_similarArea_PolygonAndPoint():
total = spatialSimilarity.similarArea([13.0078125, 50.62507306341435, 5.44921875, 45.82879925192134], [
13.0082125, 50.62513301341435, 13.0082125, 50.62513301341435])
assert total == 0.0
# SameBoundingBox
def test_spatialdistance_SameBoundingBox():
total = spatialSimilarity.spatialDistance([0.439453, 29.688053, 3.911133, 31.765537], [
0.439453, 29.688053, 3.911133, 31.765537])
assert total == 100.00
def test_spatialOverlap_SameBoundingBox():
total = spatialSimilarity.spatialOverlap([0.439453, 29.688053, 3.911133, 31.765537], [
0.439453, 29.688053, 3.911133, 31.765537])
assert total == 100.0
def test_similarArea_SameBoundingBox():
total = spatialSimilarity.similarArea([0.439453, 29.688053, 3.911133, 31.765537], [
0.439453, 29.688053, 3.911133, 31.765537])
assert total == 100.0
# similar boundingBoxes that are close together
def test_spatialdistance_SBBTACT():
total = spatialSimilarity.spatialDistance([7.596703, 51.950402, 7.656441, 51.978536], [
7.588205, 51.952412, 7.616014, 51.967644])
assert total == 66.08
def test_spatialOverlap_SSBBTACT():
total = spatialSimilarity.spatialOverlap([7.596703, 51.950402, 7.656441, 51.978536], [
7.588205, 51.952412, 7.616014, 51.967644])
assert total == 17.5
def test_similarArea_SBBTACT():
total = spatialSimilarity.similarArea([7.596703, 51.950402, 7.656441, 51.978536], [
7.588205, 51.952412, 7.616014, 51.967644])
assert total == 25.2
# Far away boundingboxes
def test_spatialdistance_fABB():
total = spatialSimilarity.spatialDistance(
[7.596703, 51.950402, 7.656441, 51.978536], [-96.800194, 32.760085, -96.796353, 32.761385])
assert total == 0
def test_spatialOverlap_fABB():
total = spatialSimilarity.spatialOverlap(
[7.596703, 51.950402, 7.656441, 51.978536], [-96.800194, 32.760085, -96.796353, 32.761385])
assert total == 0.0
def test_similarArea_fABB():
total = spatialSimilarity.similarArea(
[7.596703, 51.950402, 7.656441, 51.978536], [-96.800194, 32.760085, -96.796353, 32.761385])
assert total == 0.42

# === File: stubs/micropython-v1_13-95-pyboard/pyb.py (repo: mattytrentini/micropython-stubs, license: MIT) ===
"""
Module: 'pyb' on pyboard 1.13.0-95
"""
# MCU: (sysname='pyboard', nodename='pyboard', release='1.13.0', version='v1.13-95-g0fff2e03f on 2020-10-03', machine='PYBv1.1 with STM32F405RG')
# Stubber: 1.3.4 - updated
from typing import Any

class ADC:
    """"""
    def read(self, *args) -> Any:
        pass
    def read_timed(self, *args) -> Any:
        pass
    def read_timed_multi(self, *args) -> Any:
        pass

class ADCAll:
    """"""
    def read_channel(self, *args) -> Any:
        pass
    def read_core_temp(self, *args) -> Any:
        pass
    def read_core_vbat(self, *args) -> Any:
        pass
    def read_core_vref(self, *args) -> Any:
        pass
    def read_vref(self, *args) -> Any:
        pass

class Accel:
    """"""
    def filtered_xyz(self, *args) -> Any:
        pass
    def read(self, *args) -> Any:
        pass
    def tilt(self, *args) -> Any:
        pass
    def write(self, *args) -> Any:
        pass
    def x(self, *args) -> Any:
        pass
    def y(self, *args) -> Any:
        pass
    def z(self, *args) -> Any:
        pass

class CAN:
    """"""
    BUS_OFF = 4
    ERROR_ACTIVE = 1
    ERROR_PASSIVE = 3
    ERROR_WARNING = 2
    LIST16 = 1
    LIST32 = 3
    LOOPBACK = 67108864
    MASK16 = 0
    MASK32 = 2
    NORMAL = 0
    SILENT = 134217728
    SILENT_LOOPBACK = 201326592
    STOPPED = 0
    def any(self, *args) -> Any:
        pass
    def clearfilter(self, *args) -> Any:
        pass
    def deinit(self, *args) -> Any:
        pass
    def info(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def initfilterbanks(self, *args) -> Any:
        pass
    def recv(self, *args) -> Any:
        pass
    def restart(self, *args) -> Any:
        pass
    def rxcallback(self, *args) -> Any:
        pass
    def send(self, *args) -> Any:
        pass
    def setfilter(self, *args) -> Any:
        pass
    def state(self, *args) -> Any:
        pass

class DAC:
    """"""
    CIRCULAR = 256
    NORMAL = 0
    def deinit(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def noise(self, *args) -> Any:
        pass
    def triangle(self, *args) -> Any:
        pass
    def write(self, *args) -> Any:
        pass
    def write_timed(self, *args) -> Any:
        pass

class ExtInt:
    """"""
    EVT_FALLING = 270663680
    EVT_RISING = 269615104
    EVT_RISING_FALLING = 271712256
    IRQ_FALLING = 270598144
    IRQ_RISING = 269549568
    IRQ_RISING_FALLING = 271646720
    def disable(self, *args) -> Any:
        pass
    def enable(self, *args) -> Any:
        pass
    def line(self, *args) -> Any:
        pass
    def regs(self, *args) -> Any:
        pass
    def swint(self, *args) -> Any:
        pass

class Flash:
    """"""
    def ioctl(self, *args) -> Any:
        pass
    def readblocks(self, *args) -> Any:
        pass
    def writeblocks(self, *args) -> Any:
        pass

class I2C:
    """"""
    MASTER = 0
    SLAVE = 1
    def deinit(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def is_ready(self, *args) -> Any:
        pass
    def mem_read(self, *args) -> Any:
        pass
    def mem_write(self, *args) -> Any:
        pass
    def recv(self, *args) -> Any:
        pass
    def scan(self, *args) -> Any:
        pass
    def send(self, *args) -> Any:
        pass

class LCD:
    """"""
    def command(self, *args) -> Any:
        pass
    def contrast(self, *args) -> Any:
        pass
    def fill(self, *args) -> Any:
        pass
    def get(self, *args) -> Any:
        pass
    def light(self, *args) -> Any:
        pass
    def pixel(self, *args) -> Any:
        pass
    def show(self, *args) -> Any:
        pass
    def text(self, *args) -> Any:
        pass
    def write(self, *args) -> Any:
        pass

class LED:
    """"""
    def intensity(self, *args) -> Any:
        pass
    def off(self, *args) -> Any:
        pass
    def on(self, *args) -> Any:
        pass
    def toggle(self, *args) -> Any:
        pass

class Pin:
    """"""
    AF1_TIM1 = 1
    AF1_TIM2 = 1
    AF2_TIM3 = 2
    AF2_TIM4 = 2
    AF2_TIM5 = 2
    AF3_TIM10 = 3
    AF3_TIM11 = 3
    AF3_TIM8 = 3
    AF3_TIM9 = 3
    AF4_I2C1 = 4
    AF4_I2C2 = 4
    AF5_SPI1 = 5
    AF5_SPI2 = 5
    AF7_USART1 = 7
    AF7_USART2 = 7
    AF7_USART3 = 7
    AF8_UART4 = 8
    AF8_USART6 = 8
    AF9_CAN1 = 9
    AF9_CAN2 = 9
    AF9_TIM12 = 9
    AF9_TIM13 = 9
    AF9_TIM14 = 9
    AF_OD = 18
    AF_PP = 2
    ALT = 2
    ALT_OPEN_DRAIN = 18
    ANALOG = 3
    IN = 0
    IRQ_FALLING = 270598144
    IRQ_RISING = 269549568
    OPEN_DRAIN = 17
    OUT = 1
    OUT_OD = 17
    OUT_PP = 1
    PULL_DOWN = 2
    PULL_NONE = 0
    PULL_UP = 1
    def af(self, *args) -> Any:
        pass
    def af_list(self, *args) -> Any:
        pass
    board = None
    cpu = None
    def debug(self, *args) -> Any:
        pass
    def dict(self, *args) -> Any:
        pass
    def gpio(self, *args) -> Any:
        pass
    def high(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def irq(self, *args) -> Any:
        pass
    def low(self, *args) -> Any:
        pass
    def mapper(self, *args) -> Any:
        pass
    def mode(self, *args) -> Any:
        pass
    def name(self, *args) -> Any:
        pass
    def names(self, *args) -> Any:
        pass
    def off(self, *args) -> Any:
        pass
    def on(self, *args) -> Any:
        pass
    def pin(self, *args) -> Any:
        pass
    def port(self, *args) -> Any:
        pass
    def pull(self, *args) -> Any:
        pass
    def value(self, *args) -> Any:
        pass

class RTC:
    """"""
    def calibration(self, *args) -> Any:
        pass
    def datetime(self, *args) -> Any:
        pass
    def info(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def wakeup(self, *args) -> Any:
        pass

SD = None

class SDCard:
    """"""
    def info(self, *args) -> Any:
        pass
    def ioctl(self, *args) -> Any:
        pass
    def power(self, *args) -> Any:
        pass
    def present(self, *args) -> Any:
        pass
    def read(self, *args) -> Any:
        pass
    def readblocks(self, *args) -> Any:
        pass
    def write(self, *args) -> Any:
        pass
    def writeblocks(self, *args) -> Any:
        pass

class SPI:
    """"""
    LSB = 128
    MASTER = 260
    MSB = 0
    SLAVE = 0
    def deinit(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def read(self, *args) -> Any:
        pass
    def readinto(self, *args) -> Any:
        pass
    def recv(self, *args) -> Any:
        pass
    def send(self, *args) -> Any:
        pass
    def send_recv(self, *args) -> Any:
        pass
    def write(self, *args) -> Any:
        pass
    def write_readinto(self, *args) -> Any:
        pass

class Servo:
    """"""
    def angle(self, *args) -> Any:
        pass
    def calibration(self, *args) -> Any:
        pass
    def pulse_width(self, *args) -> Any:
        pass
    def speed(self, *args) -> Any:
        pass

class Switch:
    """"""
    def callback(self, *args) -> Any:
        pass
    def value(self, *args) -> Any:
        pass

class Timer:
    """"""
    BOTH = 10
    BRK_HIGH = 2
    BRK_LOW = 1
    BRK_OFF = 0
    CENTER = 32
    DOWN = 16
    ENC_A = 9
    ENC_AB = 11
    ENC_B = 10
    FALLING = 2
    HIGH = 0
    IC = 8
    LOW = 2
    OC_ACTIVE = 3
    OC_FORCED_ACTIVE = 6
    OC_FORCED_INACTIVE = 7
    OC_INACTIVE = 4
    OC_TIMING = 2
    OC_TOGGLE = 5
    PWM = 0
    PWM_INVERTED = 1
    RISING = 0
    UP = 0
    def callback(self, *args) -> Any:
        pass
    def channel(self, *args) -> Any:
        pass
    def counter(self, *args) -> Any:
        pass
    def deinit(self, *args) -> Any:
        pass
    def freq(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def period(self, *args) -> Any:
        pass
    def prescaler(self, *args) -> Any:
        pass
    def source_freq(self, *args) -> Any:
        pass

class UART:
    """"""
    CTS = 512
    IRQ_RXIDLE = 16
    RTS = 256
    def any(self, *args) -> Any:
        pass
    def deinit(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def irq(self, *args) -> Any:
        pass
    def read(self, *args) -> Any:
        pass
    def readchar(self, *args) -> Any:
        pass
    def readinto(self, *args) -> Any:
        pass
    def readline(self, *args) -> Any:
        pass
    def sendbreak(self, *args) -> Any:
        pass
    def write(self, *args) -> Any:
        pass
    def writechar(self, *args) -> Any:
        pass

class USB_HID:
    """"""
    def recv(self, *args) -> Any:
        pass
    def send(self, *args) -> Any:
        pass

class USB_VCP:
    """"""
    CTS = 2
    RTS = 1
    def any(self, *args) -> Any:
        pass
    def close(self, *args) -> Any:
        pass
    def init(self, *args) -> Any:
        pass
    def isconnected(self, *args) -> Any:
        pass
    def read(self, *args) -> Any:
        pass
    def readinto(self, *args) -> Any:
        pass
    def readline(self, *args) -> Any:
        pass
    def readlines(self, *args) -> Any:
        pass
    def recv(self, *args) -> Any:
        pass
    def send(self, *args) -> Any:
        pass
    def setinterrupt(self, *args) -> Any:
        pass
    def write(self, *args) -> Any:
        pass

def bootloader(*args) -> Any:
    pass

def country(*args) -> Any:
    pass

def delay(*args) -> Any:
    pass

def dht_readinto(*args) -> Any:
    pass

def disable_irq(*args) -> Any:
    pass

def elapsed_micros(*args) -> Any:
    pass

def elapsed_millis(*args) -> Any:
    pass

def enable_irq(*args) -> Any:
    pass

def fault_debug(*args) -> Any:
    pass

def freq(*args) -> Any:
    pass

def hard_reset(*args) -> Any:
    pass

def have_cdc(*args) -> Any:
    pass

def hid(*args) -> Any:
    pass

hid_keyboard = None
hid_mouse = None

def info(*args) -> Any:
    pass

def main(*args) -> Any:
    pass

def micros(*args) -> Any:
    pass

def millis(*args) -> Any:
    pass

def mount(*args) -> Any:
    pass

def pwm(*args) -> Any:
    pass

def repl_info(*args) -> Any:
    pass

def repl_uart(*args) -> Any:
    pass

def rng(*args) -> Any:
    pass

def servo(*args) -> Any:
    pass

def standby(*args) -> Any:
    pass

def stop(*args) -> Any:
    pass

def sync(*args) -> Any:
    pass

def udelay(*args) -> Any:
    pass

def unique_id(*args) -> Any:
    pass

def usb_mode(*args) -> Any:
    pass

def wfi(*args) -> Any:
    pass

# === File: src/__init__.py (repo: m1k1o/ext4-backup-pointers, license: Apache-2.0) ===
from src.console import start


def console():
    start()

# === File: mlxtend/mlxtend/regressor/tests/test_linear_regression.py (repo: WhiteWolf21/fp-growth, license: MIT) ===
# Sebastian Raschka 2014-2020
# mlxtend Machine Learning Library Extensions
# Author: Sebastian Raschka <sebastianraschka.com>
#
# License: BSD 3 clause
from mlxtend.regressor import LinearRegression
from mlxtend.data import boston_housing_data
import numpy as np
from numpy.testing import assert_almost_equal
from sklearn.base import clone

X, y = boston_housing_data()
X_rm = X[:, 5][:, np.newaxis]
X_rm_lstat = X[:, [5, -1]]

# standardized variables
X_rm_std = (X_rm - X_rm.mean(axis=0)) / X_rm.std(axis=0)
X_rm_lstat_std = ((X_rm_lstat - X_rm_lstat.mean(axis=0)) /
                  X_rm_lstat.std(axis=0))
y_std = (y - y.mean()) / y.std()


def test_univariate_normal_equation():
    w_exp = np.array([[9.1]])
    b_exp = np.array([-34.7])
    ne_lr = LinearRegression()
    ne_lr.fit(X_rm, y)
    assert_almost_equal(ne_lr.w_, w_exp, decimal=1)
    assert_almost_equal(ne_lr.b_, b_exp, decimal=1)


def test_univariate_normal_equation_std():
    w_exp = np.array([[0.7]])
    b_exp = np.array([0.0])
    ne_lr = LinearRegression()
    ne_lr.fit(X_rm_std, y_std)
    assert_almost_equal(ne_lr.w_, w_exp, decimal=1)
    assert_almost_equal(ne_lr.b_, b_exp, decimal=1)


def test_univariate_gradient_descent():
    w_exp = np.array([[0.7]])
    b_exp = np.array([0.0])
    gd_lr = LinearRegression(method='sgd',
                             minibatches=1,
                             eta=0.001,
                             epochs=500,
                             random_seed=0)
    gd_lr.fit(X_rm_std, y_std)
    assert_almost_equal(gd_lr.w_, w_exp, decimal=1)
    assert_almost_equal(gd_lr.b_, b_exp, decimal=1)


def test_univariate_qr():
    w_exp = np.array([[9.1]])
    b_exp = np.array([-34.7])
    qr_lr = LinearRegression(method='qr')
    qr_lr.fit(X_rm, y)
    assert_almost_equal(qr_lr.w_, w_exp, decimal=1)
    assert_almost_equal(qr_lr.b_, b_exp, decimal=1)


def test_univariate_svd():
    w_exp = np.array([[9.1]])
    b_exp = np.array([-34.7])
    svd_lr = LinearRegression(method='svd')
    svd_lr.fit(X_rm, y)
    assert_almost_equal(svd_lr.w_, w_exp, decimal=1)
    assert_almost_equal(svd_lr.b_, b_exp, decimal=1)


def test_progress_1():
    gd_lr = LinearRegression(method='sgd',
                             minibatches=1,
                             eta=0.001,
                             epochs=1,
                             print_progress=1,
                             random_seed=0)
    gd_lr.fit(X_rm_std, y_std)


def test_progress_2():
    gd_lr = LinearRegression(method='sgd',
                             minibatches=1,
                             eta=0.001,
                             epochs=1,
                             print_progress=2,
                             random_seed=0)
    gd_lr.fit(X_rm_std, y_std)


def test_progress_3():
    gd_lr = LinearRegression(method='sgd',
                             minibatches=1,
                             eta=0.001,
                             epochs=1,
                             print_progress=2,
                             random_seed=0)
    gd_lr.fit(X_rm_std, y_std)


def test_univariate_stochastic_gradient_descent():
    w_exp = np.array([[0.7]])
    b_exp = np.array([0.0])
    sgd_lr = LinearRegression(method='sgd',
                              minibatches=len(y),
                              eta=0.0001,
                              epochs=150,
                              random_seed=0)
    sgd_lr.fit(X_rm_std, y_std)
    assert_almost_equal(sgd_lr.w_, w_exp, decimal=1)
    assert_almost_equal(sgd_lr.b_, b_exp, decimal=1)


def test_multivariate_normal_equation():
    w_exp = np.array([[5.1], [-0.6]])
    b_exp = np.array([-1.5])
    ne_lr = LinearRegression()
    ne_lr.fit(X_rm_lstat, y)
    assert_almost_equal(ne_lr.w_, w_exp, decimal=1)
    assert_almost_equal(ne_lr.b_, b_exp, decimal=1)


def test_multivariate_gradient_descent():
    w_exp = np.array([[0.4], [-0.5]])
    b_exp = np.array([0.0])
    gd_lr = LinearRegression(method='sgd',
                             eta=0.001,
                             epochs=500,
                             minibatches=1,
                             random_seed=0)
    gd_lr.fit(X_rm_lstat_std, y_std)
    assert_almost_equal(gd_lr.w_, w_exp, decimal=1)
    assert_almost_equal(gd_lr.b_, b_exp, decimal=1)


def test_multivariate_stochastic_gradient_descent():
    w_exp = np.array([[0.4], [-0.5]])
    b_exp = np.array([0.0])
    sgd_lr = LinearRegression(method='sgd',
                              eta=0.0001,
                              epochs=500,
                              minibatches=len(y),
                              random_seed=0)
    sgd_lr.fit(X_rm_lstat_std, y_std)
    assert_almost_equal(sgd_lr.w_, w_exp, decimal=1)
    assert_almost_equal(sgd_lr.b_, b_exp, decimal=1)


def test_ary_persistency_in_shuffling():
    orig = X_rm_lstat_std.copy()
    sgd_lr = LinearRegression(method='sgd',
                              eta=0.0001,
                              epochs=500,
                              minibatches=len(y),
                              random_seed=0)
    sgd_lr.fit(X_rm_lstat_std, y_std)
    np.testing.assert_almost_equal(orig, X_rm_lstat_std, 6)


def test_multivariate_qr():
    w_exp = np.array([[5.1], [-0.6]])
    b_exp = np.array([-1.5])
    qr_lr = LinearRegression(method='qr')
    qr_lr.fit(X_rm_lstat, y)
    assert_almost_equal(qr_lr.w_, w_exp, decimal=1)
    assert_almost_equal(qr_lr.b_, b_exp, decimal=1)


def test_multivariate_svd():
    w_exp = np.array([[5.1], [-0.6]])
    b_exp = np.array([-1.5])
    svd_lr = LinearRegression(method='svd')
    svd_lr.fit(X_rm_lstat, y)
    assert_almost_equal(svd_lr.w_, w_exp, decimal=1)
    assert_almost_equal(svd_lr.b_, b_exp, decimal=1)


def test_clone():
    regr = LinearRegression()
    clone(regr)

# === File: 4_class/utils/__init__.py (repo: Acrophase/Sleep_Staging_KD, license: MIT) ===
from .arg_utils import get_args
from .dataset_utils import get_data
from .callback_utils import get_callbacks
from .model_utils import get_model

# === File: imcsdk/mometa/comm/CommMailAlert.py (repo: vadimkuznetsov/imcsdk, license: Apache-2.0) ===
"""This module contains the general information for CommMailAlert ManagedObject."""
from ...imcmo import ManagedObject
from ...imccoremeta import MoPropertyMeta, MoMeta
from ...imcmeta import VersionMeta


class CommMailAlertConsts:
    MIN_SEVERITY_LEVEL_CONDITION = "condition"
    MIN_SEVERITY_LEVEL_CRITICAL = "critical"
    MIN_SEVERITY_LEVEL_MAJOR = "major"
    MIN_SEVERITY_LEVEL_MINOR = "minor"
    MIN_SEVERITY_LEVEL_WARNING = "warning"


class CommMailAlert(ManagedObject):
    """This is CommMailAlert class."""

    consts = CommMailAlertConsts()
    naming_props = set([])

    mo_meta = {
        "classic": MoMeta("CommMailAlert", "commMailAlert", "mail-alert-svc", VersionMeta.Version303a, "InputOutput", 0xff, [], ["admin", "read-only", "user"], [u'commSvcEp'], [u'mailRecipient'], ["Get", "Set"]),
        "modular": MoMeta("CommMailAlert", "commMailAlert", "mail-alert-svc", VersionMeta.Version303a, "InputOutput", 0xff, [], ["admin", "read-only", "user"], [u'commSvcEp'], [u'mailRecipient'], ["Get", "Set"])
    }

    prop_meta = {
        "classic": {
            "admin_state": MoPropertyMeta("admin_state", "adminState", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x2, None, None, None, ["Disabled", "Enabled", "disabled", "enabled"], []),
            "child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version303a, MoPropertyMeta.INTERNAL, None, None, None, None, [], []),
            "dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x4, 0, 255, None, [], []),
            "ip_address": MoPropertyMeta("ip_address", "ipAddress", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x8, 0, 255, r"""(([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{0,4}|:[0-9A-Fa-f]{1,4})?|(:[0-9A-Fa-f]{1,4}){0,2})|(:[0-9A-Fa-f]{1,4}){0,3})|(:[0-9A-Fa-f]{1,4}){0,4})|:(:[0-9A-Fa-f]{1,4}){0,5})((:[0-9A-Fa-f]{1,4}){2}|:(25[0-5]|(2[0-4]|1[0-9]|[1-9])?[0-9])(\.(25[0-5]|(2[0-4]|1[0-9]|[1-9])?[0-9])){3})|(([0-9A-Fa-f]{1,4}:){1,6}|:):[0-9A-Fa-f]{0,4}|([0-9A-Fa-f]{1,4}:){7}:) |((([a-zA-Z0-9]([a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,6})|(([a-zA-Z0-9]([a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])?)+)|([1-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([1-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5]))""", [], []),
            "min_severity_level": MoPropertyMeta("min_severity_level", "minSeverityLevel", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x10, None, None, None, ["condition", "critical", "major", "minor", "warning"], []),
            "port": MoPropertyMeta("port", "port", "uint", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x20, None, None, None, [], ["1-65535"]),
            "rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x40, 0, 255, None, [], []),
            "status": MoPropertyMeta("status", "status", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x80, None, None, None, ["", "created", "deleted", "modified", "removed"], []),
        },
        "modular": {
            "admin_state": MoPropertyMeta("admin_state", "adminState", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x2, None, None, None, ["Disabled", "Enabled", "disabled", "enabled"], []),
            "child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version303a, MoPropertyMeta.INTERNAL, None, None, None, None, [], []),
            "dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x4, 0, 255, None, [], []),
            "ip_address": MoPropertyMeta("ip_address", "ipAddress", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x8, 0, 255, r"""([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:([0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{0,4}|:[0-9A-Fa-f]{1,4})?|(:[0-9A-Fa-f]{1,4}){0,2})|(:[0-9A-Fa-f]{1,4}){0,3})|(:[0-9A-Fa-f]{1,4}){0,4})|:(:[0-9A-Fa-f]{1,4}){0,5})((:[0-9A-Fa-f]{1,4}){2}|:(25[0-5]|(2[0-4]|1[0-9]|[1-9])?[0-9])(\.(25[0-5]|(2[0-4]|1[0-9]|[1-9])?[0-9])){3})|(([0-9A-Fa-f]{1,4}:){1,6}|:):[0-9A-Fa-f]{0,4}|([0-9A-Fa-f]{1,4}:){7}:""", [], []),
            "min_severity_level": MoPropertyMeta("min_severity_level", "minSeverityLevel", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x10, None, None, None, ["condition", "critical", "major", "minor", "warning"], []),
            "port": MoPropertyMeta("port", "port", "uint", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x20, None, None, None, [], ["1-65535"]),
            "rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x40, 0, 255, None, [], []),
            "status": MoPropertyMeta("status", "status", "string", VersionMeta.Version303a, MoPropertyMeta.READ_WRITE, 0x80, None, None, None, ["", "created", "deleted", "modified", "removed"], []),
        },
    }

    prop_map = {
        "classic": {
            "adminState": "admin_state",
            "childAction": "child_action",
            "dn": "dn",
            "ipAddress": "ip_address",
            "minSeverityLevel": "min_severity_level",
            "port": "port",
            "rn": "rn",
            "status": "status",
        },
        "modular": {
            "adminState": "admin_state",
            "childAction": "child_action",
            "dn": "dn",
            "ipAddress": "ip_address",
            "minSeverityLevel": "min_severity_level",
            "port": "port",
            "rn": "rn",
            "status": "status",
        },
    }

    def __init__(self, parent_mo_or_dn, **kwargs):
        self._dirty_mask = 0
        self.admin_state = None
        self.child_action = None
        self.ip_address = None
        self.min_severity_level = None
        self.port = None
        self.status = None

        ManagedObject.__init__(self, "CommMailAlert", parent_mo_or_dn, **kwargs)

# === File: nblog/core/models/__init__.py (repo: NestorMonroy/BlogTemplate, license: MIT) ===
from .notifications import *

# === File: tests/modules/test_math.py (repo: HelloMelanieC/batavia, license: BSD-3-Clause) ===
import sys
from unittest import skipUnless
from ..utils import ModuleFunctionTestCase, TranspileTestCase
class MathTests(ModuleFunctionTestCase, TranspileTestCase):
substitutions = {
# A
'7.32747...e-15': [
'7.35784...e-15'
],
'1.53745...e-12': [
'1.53743...e-12'
],
}
@classmethod
def add_math_tests(klass):
klass.add_one_arg_tests('math', [
'acos',
'acosh',
'asin',
'asinh',
'atan',
'atanh',
'ceil',
'cos',
'cosh',
'degrees',
'exp',
'expm1',
'erf',
'erfc',
'fabs',
'factorial',
'floor',
'frexp',
'fsum',
'gamma',
'isfinite',
'isinf',
'isnan',
'lgamma',
'log',
'log10',
'log1p',
'log2',
'modf',
'radians',
'sin',
'sinh',
'sqrt',
'tan',
'tanh',
'trunc',
], numerics_only=True)
klass.add_two_arg_tests('math', [
'atan2',
'copysign',
'fmod',
'hypot',
'ldexp',
'log',
'pow',
], numerics_only=True)
if sys.version_info >= (3, 5):
klass.add_two_arg_tests('math', [
'gcd',
'isclose',
], numerics_only=True)
not_implemented = [
'test_math_acos_float',
'test_math_acos_int',
'test_math_asin_float',
'test_math_asin_int',
'test_math_fsum_NotImplemented',
'test_math_fsum_bytearray',
'test_math_fsum_bytes',
'test_math_fsum_complex',
'test_math_fsum_dict',
'test_math_fsum_range',
]
def test_constants(self):
self.assertCodeExecution("""
import math
print(math.e)
print(math.pi)
""")
@skipUnless(sys.version_info >= (3, 5), reason="Need CPython 3.5")
def test_constants_35(self):
self.assertCodeExecution("""
import math
print(math.inf)
print(math.nan)
""")
def test_erf(self):
# test some of the edge cases of erf to 15 digits of precision
self.assertCodeExecution("""
import math
print(round(math.erf(0.75) * (10**15)))
print(round(math.erf(1.40) * (10**15)))
print(round(math.erf(1.60) * (10**15)))
""")
def test_frexp(self):
# test some of the edge cases of for frexp
self.assertCodeExecution("""
import math
print(math.frexp(float('nan')))
print(math.frexp(float('inf')))
print(math.frexp(float('-inf')))
print(math.frexp(-0.0))
print(math.frexp(0.0))
print(math.frexp(2**-1026)) # denormal
print(math.frexp(2**-1027)) # denormal
print(math.frexp(1.9**-1150)) # denormal
""")
def test_docstrings(self):
self.assertCodeExecution("""
import math
print(math.acos.__doc__)
print(math.acosh.__doc__)
print(math.asin.__doc__)
print(math.asinh.__doc__)
print(math.atan.__doc__)
print(math.atan2.__doc__)
print(math.atanh.__doc__)
print(math.ceil.__doc__)
print(math.copysign.__doc__)
print(math.cos.__doc__)
print(math.cosh.__doc__)
print(math.degrees.__doc__)
print(math.erf.__doc__)
print(math.erfc.__doc__)
print(math.exp.__doc__)
print(math.expm1.__doc__)
print(math.fabs.__doc__)
print(math.factorial.__doc__)
print(math.floor.__doc__)
print(math.fmod.__doc__)
print(math.frexp.__doc__)
print(math.fsum.__doc__)
print(math.gamma.__doc__)
print(math.hypot.__doc__)
print(math.isfinite.__doc__)
print(math.isinf.__doc__)
print(math.isnan.__doc__)
print(math.ldexp.__doc__)
print(math.lgamma.__doc__)
print(math.log.__doc__)
print(math.log10.__doc__)
print(math.log1p.__doc__)
print(math.log2.__doc__)
print(math.modf.__doc__)
print(math.pow.__doc__)
print(math.radians.__doc__)
print(math.sin.__doc__)
print(math.sinh.__doc__)
print(math.sqrt.__doc__)
print(math.tan.__doc__)
print(math.tanh.__doc__)
print(math.trunc.__doc__)
""")
@skipUnless(sys.version_info >= (3, 5), reason="Need CPython 3.5")
def test_docstrings_35(self):
self.assertCodeExecution("""
import math
print(math.gcd.__doc__)
print(math.isclose.__doc__)
""")
def test_big_log(self):
self.assertCodeExecution("""
import math
print(math.log(3**4000, 2**8100))
""")
def test_big_log2(self):
self.assertCodeExecution("""
import math
print(math.log2(7098747782095054491645470670544589510839311939266427474950171641866457516557448754212690985821049203906051242376333493427178625073197436156263048753471986746440249213554433460657568120779713849253859766883795877257547707815228465701963497040930461077331805348544345242199643588692387207661900043944768197140012580600506137415846442040750517990519054127737977649526067979491512698024168424544848787984736550738768513714916482889580910283377195968557932301881897720704258205547544585564222804185782185559239373871002306669958325615348197655592945196883761252704295215289017672107418470347242053545227403654830176922498209777060717660393505545715195221933254688924969018988170502950777679564458202661811784288733893854576833846903380277732161281567783721029653373415505876187036929434846569972127719856520658804799035424845586773815424387616815336590234520464087039630288157319670511762327842005330798725170658689274395081798871997473073820272719923679733987522039813709322496536182608696527063324931883904385131509638112971637980710381590449286350416379680520761963570896735780119917078791683419317695772803409193253505865202473711709961972380852084516033590093500999787226636746827587017141505975293618220208963476736184468023728537050867627350767539587430093170508910175443752355010249660868371138952503746834057944391145886026890575423976058994029970072780707469289062175650818844063720715882518494689610391076880766213464433166779034147607139972087529390613547734840403169696713765658331631794475042522383559304403530529255332047371999878067410140667889602041619214216618423621082798518167613424647701873378095101957030929240988138753288891501432009782045538126646566098860550946250586985843395356258402979529841220262480257065831983985766427954666822578729869786210249207817272322577999549824436146214249670549988133672930223092788828140787448974092965500399502941309523827512242741164230614689746532430106832175872565958663622856666905401296065227480877812117003053236178092572588
69461082545385948387418641936602911595988585341866304680338433228594272988376062066145489663624587207289726229598577225356620186185104940607264294055665578166855271669760127452193136730694091219875987070929716353044606178328132861866481581176064425671499601899418528529810234656671652825363206294954548973847080477558394872610479878723418423615445410371409596278175917142530165486625362513678722139651643157453275449743997375003463533610556290097654020572842895402524185827121030107))
""")
def test_isfinite(self):
self.assertCodeExecution("""
import math
print(math.isfinite(1))
print(math.isfinite(float('-inf')))
print(math.isfinite(float('inf')))
print(math.isfinite(float('nan')))
""")
def test_isinf(self):
self.assertCodeExecution("""
import math
print(math.isinf(1))
print(math.isinf(float('-inf')))
print(math.isinf(float('inf')))
print(math.isinf(float('nan')))
""")
def test_isnan(self):
self.assertCodeExecution("""
import math
print(math.isnan(1))
print(math.isnan(float('-inf')))
print(math.isnan(float('inf')))
print(math.isnan(float('nan')))
""")
def test_ldexp_zero(self):
self.assertCodeExecution("""
import math
print(math.ldexp(0.0, 100000))
print(math.ldexp(-0.0, 100000))
""")
def test_ldexp_int_exps_edge_cases(self):
self.assertCodeExecution("""
import math
for exp in range(-1100, -900):
print(exp)
print(math.ldexp(1.0, exp))
""")
@skipUnless(sys.version_info >= (3, 5), reason="Need CPython 3.5")
def test_isclose_kwargs(self):
self.assertCodeExecution("""
import math
print(math.isclose(1.0, 0.9))
print(math.isclose(1.0, 0.9, rel_tol=0.09))
print(math.isclose(1.0, 0.9, rel_tol=0.1))
print(math.isclose(1.0, 0.9, rel_tol=0.11))
print(math.isclose(1.0, 0.9, rel_tol=0.09, abs_tol=0.1))
print(math.isclose(1.0, 1.000000001, rel_tol=1.0, abs_tol=1.0))
""")
MathTests.add_math_tests()
| 38.232 | 2,496 | 0.634442 | 760 | 9,558 | 7.639474 | 0.189474 | 0.12246 | 0.086807 | 0.079573 | 0.243713 | 0.22804 | 0.206855 | 0.126076 | 0.080262 | 0.046159 | 0 | 0.377626 | 0.263026 | 9,558 | 249 | 2,497 | 38.385542 | 0.446621 | 0.010776 | 0 | 0.243243 | 0 | 0.009009 | 0.714528 | 0.474553 | 0 | 1 | 0 | 0 | 0.063063 | 1 | 0.067568 | false | 0 | 0.076577 | 0 | 0.157658 | 0.373874 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fcf2ca34ce5d9ed7f03a3b101826e237fe90a9c9 | 23,227 | py | Python | tests/tools/test_aux_methods_labels_descriptor_manager.py | nipy/nilabels | b065febc611eef638785651b4642d53bb61f1321 | [
"MIT"
] | 15 | 2019-04-09T21:47:47.000Z | 2022-02-01T14:11:51.000Z | tests/tools/test_aux_methods_labels_descriptor_manager.py | SebastianoF/LabelsManager | b065febc611eef638785651b4642d53bb61f1321 | [
"MIT"
] | 4 | 2018-08-24T09:25:49.000Z | 2018-08-29T10:47:50.000Z | tests/tools/test_aux_methods_labels_descriptor_manager.py | nipy/nilabels | b065febc611eef638785651b4642d53bb61f1321 | [
"MIT"
] | 1 | 2019-04-06T20:49:48.000Z | 2019-04-06T20:49:48.000Z | import collections
from os.path import join as jph
import pytest
from nilabels.tools.aux_methods.label_descriptor_manager import LabelsDescriptorManager, \
generate_dummy_label_descriptor
from tests.tools.decorators_tools import write_and_erase_temporary_folder, pfo_tmp_test, \
write_and_erase_temporary_folder_with_dummy_labels_descriptor, is_a_string_number, \
write_and_erase_temporary_folder_with_left_right_dummy_labels_descriptor
# TESTING:
# --- > Testing generate dummy descriptor
@write_and_erase_temporary_folder
def test_generate_dummy_labels_descriptor_wrong_input1():
with pytest.raises(IOError):
generate_dummy_label_descriptor(jph(pfo_tmp_test, 'labels_descriptor.txt'), list_labels=range(5),
list_roi_names=['1', '2'])
@write_and_erase_temporary_folder
def test_generate_dummy_labels_descriptor_wrong_input2():
with pytest.raises(IOError):
generate_dummy_label_descriptor(jph(pfo_tmp_test, 'labels_descriptor.txt'), list_labels=range(5),
list_roi_names=['1', '2', '3', '4', '5'],
list_colors_triplets=[[0, 0, 0], [1, 1, 1]])
@write_and_erase_temporary_folder
def test_generate_labels_descriptor_list_roi_names_None():
d = generate_dummy_label_descriptor(jph(pfo_tmp_test, 'dummy_labels_descriptor.txt'), list_labels=range(5),
list_roi_names=None, list_colors_triplets=[[1, 1, 1], ] * 5)
for k in d.keys():
assert d[k][-1] == 'label {}'.format(k)
@write_and_erase_temporary_folder
def test_generate_labels_descriptor_list_colors_triplets_None():
d = generate_dummy_label_descriptor(jph(pfo_tmp_test, 'dummy_labels_descriptor.txt'), list_labels=range(5),
list_roi_names=None, list_colors_triplets=[[1, 1, 1], ] * 5)
for k in d.keys():
assert len(d[k][1]) == 3
@write_and_erase_temporary_folder
def test_generate_none_list_colour_triples():
generate_dummy_label_descriptor(jph(pfo_tmp_test, 'labels_descriptor.txt'), list_labels=range(5),
list_roi_names=['1', '2', '3', '4', '5'], list_colors_triplets=None)
loaded_dummy_ldm = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
for k in loaded_dummy_ldm.dict_label_descriptor.keys():
assert len(loaded_dummy_ldm.dict_label_descriptor[k][0]) == 3
for k_rgb in loaded_dummy_ldm.dict_label_descriptor[k][0]:
assert 0 <= k_rgb < 256
@write_and_erase_temporary_folder
def test_generate_labels_descriptor_general():
list_labels = [1, 2, 3, 4, 5]
list_color_triplets = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]
list_roi_names = ['one', 'two', 'three', 'four', 'five']
d = generate_dummy_label_descriptor(jph(pfo_tmp_test, 'dummy_label_descriptor.txt'), list_labels=list_labels,
list_roi_names=list_roi_names, list_colors_triplets=list_color_triplets)
for k_num, k in enumerate(d.keys()):
assert int(k) == list_labels[k_num]
assert d[k][0] == list_color_triplets[k_num]
assert d[k][-1] == list_roi_names[k_num]
# --- > Testing basics methods labels descriptor class manager
@write_and_erase_temporary_folder
def test_basics_methods_labels_descriptor_manager_wrong_input_path():
pfi_unexisting_label_descriptor_manager = 'zzz_path_to_spam'
with pytest.raises(IOError):
LabelsDescriptorManager(pfi_unexisting_label_descriptor_manager)
@write_and_erase_temporary_folder
def test_basics_methods_labels_descriptor_manager_wrong_input_convention():
not_allowed_convention_name = 'just_spam'
with pytest.raises(IOError):
LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'), not_allowed_convention_name)
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_basic_dict_input():
dict_ld = collections.OrderedDict()
# note: the dictionary values carry no double quotes, while the descriptor file stores names as '"label name"'
dict_ld.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
dict_ld.update({1: [[255, 0, 0], [1, 1, 1], 'label one (l1)']})
dict_ld.update({2: [[204, 0, 0], [1, 1, 1], 'label two (l2)']})
dict_ld.update({3: [[51, 51, 255], [1, 1, 1], 'label three']})
dict_ld.update({4: [[102, 102, 255], [1, 1, 1], 'label four']})
dict_ld.update({5: [[0, 204, 51], [1, 1, 1], 'label five (l5)']})
dict_ld.update({6: [[51, 255, 102], [1, 1, 1], 'label six']})
dict_ld.update({7: [[255, 255, 0], [1, 1, 1], 'label seven']})
dict_ld.update({8: [[255, 50, 50], [1, 1, 1], 'label eight']})
ldm = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
for k in ldm.dict_label_descriptor.keys():
assert ldm.dict_label_descriptor[k] == dict_ld[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_load_save_and_compare():
ldm = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
ldm.save_label_descriptor(jph(pfo_tmp_test, 'labels_descriptor2.txt'))
f1 = open(jph(pfo_tmp_test, 'labels_descriptor.txt'), 'r')
f2 = open(jph(pfo_tmp_test, 'labels_descriptor2.txt'), 'r')
for l1, l2 in zip(f1.readlines(), f2.readlines()):
split_l1 = [float(a) if is_a_string_number(a) else a for a in [a.strip() for a in l1.split(' ') if a != '']]
split_l2 = [float(b) if is_a_string_number(b) else b for b in [b.strip() for b in l2.split(' ') if b != '']]
assert split_l1 == split_l2
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_save_in_fsl_convention_reload_as_dict_and_compare():
ldm_itk = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
# change convention
ldm_itk.convention = 'fsl'
ldm_itk.save_label_descriptor(jph(pfo_tmp_test, 'labels_descriptor_fsl.txt'))
ldm_fsl = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor_fsl.txt'),
labels_descriptor_convention='fsl')
# NOTE: this test only works with the default 1.0 values - the fsl convention is less informative than itk-snap's.
for k in ldm_itk.dict_label_descriptor.keys():
assert ldm_itk.dict_label_descriptor[k] == ldm_fsl.dict_label_descriptor[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_signature_for_variable_convention_wrong_input():
with pytest.raises(IOError):
LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'),
labels_descriptor_convention='spam')
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_signature_for_variable_convention_wrong_input_after_initialisation():
my_ldm = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'),
labels_descriptor_convention='itk-snap')
with pytest.raises(IOError):
my_ldm.convention = 'spam'
my_ldm.save_label_descriptor(jph(pfo_tmp_test, 'labels_descriptor_again.txt'))
# --> Testing labels permutations - permute_labels_in_descriptor
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_relabel_labels_descriptor():
dict_expected = collections.OrderedDict()
dict_expected.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
dict_expected.update({10: [[255, 0, 0], [1, 1, 1], 'label one (l1)']})
dict_expected.update({11: [[204, 0, 0], [1, 1, 1], 'label two (l2)']})
dict_expected.update({12: [[51, 51, 255], [1, 1, 1], 'label three']})
dict_expected.update({4: [[102, 102, 255], [1, 1, 1], 'label four']})
dict_expected.update({5: [[0, 204, 51], [1, 1, 1], 'label five (l5)']})
dict_expected.update({6: [[51, 255, 102], [1, 1, 1], 'label six']})
dict_expected.update({7: [[255, 255, 0], [1, 1, 1], 'label seven']})
dict_expected.update({8: [[255, 50, 50], [1, 1, 1], 'label eight']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
old_labels = [1, 2, 3]
new_labels = [10, 11, 12]
ldm_relabelled = ldm_original.relabel(old_labels, new_labels, sort=True)
for k in dict_expected.keys():
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_relabel_labels_descriptor_with_merging():
dict_expected = collections.OrderedDict()
dict_expected.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
# dict_expected.update({1: [[255, 0, 0], [1, 1, 1], 'label one (l1)']}) # copied over label two
dict_expected.update({1: [[204, 0, 0], [1, 1, 1], 'label two (l2)']})
dict_expected.update({5: [[51, 51, 255], [1, 1, 1], 'label three']})  # overwritten below: a later update with the same key wins
dict_expected.update({4: [[102, 102, 255], [1, 1, 1], 'label four']})
dict_expected.update({5: [[0, 204, 51], [1, 1, 1], 'label five (l5)']})
dict_expected.update({6: [[51, 255, 102], [1, 1, 1], 'label six']})
dict_expected.update({7: [[255, 255, 0], [1, 1, 1], 'label seven']})
dict_expected.update({8: [[255, 50, 50], [1, 1, 1], 'label eight']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
old_labels = [1, 2, 3]
new_labels = [1, 1, 5]
ldm_relabelled = ldm_original.relabel(old_labels, new_labels, sort=True)
for k in dict_expected.keys():
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_permute_labels_from_descriptor_wrong_input_permutation():
ldm = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
perm = [[1, 2, 3], [1, 1]]
with pytest.raises(IOError):
ldm.permute_labels(perm)
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_permute_labels_from_descriptor_check():
dict_expected = collections.OrderedDict()
dict_expected.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
dict_expected.update({3: [[255, 0, 0], [1, 1, 1], 'label one (l1)']})
dict_expected.update({4: [[204, 0, 0], [1, 1, 1], 'label two (l2)']})
dict_expected.update({2: [[51, 51, 255], [1, 1, 1], 'label three']})
dict_expected.update({1: [[102, 102, 255], [1, 1, 1], 'label four']})
dict_expected.update({5: [[0, 204, 51], [1, 1, 1], 'label five (l5)']})
dict_expected.update({6: [[51, 255, 102], [1, 1, 1], 'label six']})
dict_expected.update({7: [[255, 255, 0], [1, 1, 1], 'label seven']})
dict_expected.update({8: [[255, 50, 50], [1, 1, 1], 'label eight']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
perm = [[1, 2, 3, 4], [3, 4, 2, 1]]
ldm_relabelled = ldm_original.permute_labels(perm)
for k in dict_expected.keys():
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_erase_labels():
dict_expected = collections.OrderedDict()
dict_expected.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
dict_expected.update({1: [[255, 0, 0], [1, 1, 1], 'label one (l1)']})
dict_expected.update({4: [[102, 102, 255], [1, 1, 1], 'label four']})
dict_expected.update({5: [[0, 204, 51], [1, 1, 1], 'label five (l5)']})
dict_expected.update({6: [[51, 255, 102], [1, 1, 1], 'label six']})
dict_expected.update({8: [[255, 50, 50], [1, 1, 1], 'label eight']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
labels_to_erase = [2, 3, 7]
ldm_relabelled = ldm_original.erase_labels(labels_to_erase)
for k in dict_expected.keys():
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
# -> multi-labels dict
@write_and_erase_temporary_folder_with_left_right_dummy_labels_descriptor
def test_save_multi_labels_descriptor_custom():
# load it into a labels descriptor manager
ldm_lr = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor_RL.txt'))
# save it as labels descriptor text file
pfi_multi_ld = jph(pfo_tmp_test, 'multi_labels_descriptor_LR.txt')
ldm_lr.save_as_multi_label_descriptor(pfi_multi_ld)
# expected lines:
expected_lines = [['background', 0],
['label A Left', 1], ['label A Right', 2], ['label A', 1, 2],
['label B Left', 3], ['label B Right', 4], ['label B', 3, 4],
['label C', 5], ['label D', 6],
['label E Left', 7], ['label E Right', 8], ['label E', 7, 8]]
# load saved labels descriptor
with open(pfi_multi_ld, 'r') as g:
multi_ld_lines = g.readlines()
# modify as list of lists as the expected lines.
multi_ld_lines_a_list_of_lists = [[int(a) if a.isdigit() else a
for a in [n.strip() for n in m.split('&') if not n.startswith('#')]]
for m in multi_ld_lines]
# Compare:
for li1, li2 in zip(expected_lines, multi_ld_lines_a_list_of_lists):
assert li1 == li2
@write_and_erase_temporary_folder_with_left_right_dummy_labels_descriptor
def test_get_multi_label_dict_standard_combine():
ldm_lr = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor_RL.txt'))
multi_labels_dict_from_ldm = ldm_lr.get_multi_label_dict(combine_right_left=True)
expected_multi_labels_dict = collections.OrderedDict()
expected_multi_labels_dict.update({'background': [0]})
expected_multi_labels_dict.update({'label A Left': [1]})
expected_multi_labels_dict.update({'label A Right': [2]})
expected_multi_labels_dict.update({'label A': [1, 2]})
expected_multi_labels_dict.update({'label B Left': [3]})
expected_multi_labels_dict.update({'label B Right': [4]})
expected_multi_labels_dict.update({'label B': [3, 4]})
expected_multi_labels_dict.update({'label C': [5]})
expected_multi_labels_dict.update({'label D': [6]})
expected_multi_labels_dict.update({'label E Left': [7]})
expected_multi_labels_dict.update({'label E Right': [8]})
expected_multi_labels_dict.update({'label E': [7, 8]})
for k1, k2 in zip(multi_labels_dict_from_ldm.keys(), expected_multi_labels_dict.keys()):
assert k1 == k2
assert multi_labels_dict_from_ldm[k1] == expected_multi_labels_dict[k2]
@write_and_erase_temporary_folder_with_left_right_dummy_labels_descriptor
def test_get_multi_label_dict_standard_not_combine():
ldm_lr = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor_RL.txt'))
multi_labels_dict_from_ldm = ldm_lr.get_multi_label_dict(combine_right_left=False)
expected_multi_labels_dict = collections.OrderedDict()
expected_multi_labels_dict.update({'background': [0]})
expected_multi_labels_dict.update({'label A Left': [1]})
expected_multi_labels_dict.update({'label A Right': [2]})
expected_multi_labels_dict.update({'label B Left': [3]})
expected_multi_labels_dict.update({'label B Right': [4]})
expected_multi_labels_dict.update({'label C': [5]})
expected_multi_labels_dict.update({'label D': [6]})
expected_multi_labels_dict.update({'label E Left': [7]})
expected_multi_labels_dict.update({'label E Right': [8]})
for k1, k2 in zip(multi_labels_dict_from_ldm.keys(), expected_multi_labels_dict.keys()):
assert k1 == k2
assert multi_labels_dict_from_ldm[k1] == expected_multi_labels_dict[k2]
@write_and_erase_temporary_folder
def test_save_multi_labels_descriptor_custom_test_robustness():
# save this as a multi-labels descriptor file, then read it back and check that the entries kept their order
d = collections.OrderedDict()
d.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
d.update({1: [[255, 0, 0], [1, 1, 1], 'label A Right']})
d.update({2: [[204, 0, 0], [1, 1, 1], 'label A Left']})
d.update({3: [[51, 51, 255], [1, 1, 1], 'label B left']})
d.update({4: [[102, 102, 255], [1, 1, 1], 'label B Right']})
d.update({5: [[0, 204, 51], [1, 1, 1], 'label C ']})
d.update({6: [[51, 255, 102], [1, 1, 1], 'label D Right']}) # unpaired label
d.update({7: [[255, 255, 0], [1, 1, 1], 'label E right ']}) # small r and spaces
d.update({8: [[255, 50, 50], [1, 1, 1], 'label E Left ']}) # ... paired with small l and spaces
with open(jph(pfo_tmp_test, 'labels_descriptor_RL.txt'), 'w+') as f:
for j in d.keys():
line = '{0: >5}{1: >6}{2: >6}{3: >6}{4: >9}{5: >6}{6: >6} "{7}"\n'.format(
j, d[j][0][0], d[j][0][1], d[j][0][2], d[j][1][0], d[j][1][1], d[j][1][2], d[j][2])
f.write(line)
# load it with an instance of LabelsDescriptorManager
ldm_lr = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor_RL.txt'))
multi_labels_dict_from_ldm = ldm_lr.get_multi_label_dict(combine_right_left=True)
expected_multi_labels_dict = collections.OrderedDict()
expected_multi_labels_dict.update({'background': [0]})
expected_multi_labels_dict.update({'label A Right': [1]})
expected_multi_labels_dict.update({'label A Left': [2]})
expected_multi_labels_dict.update({'label A': [1, 2]})
expected_multi_labels_dict.update({'label B left': [3]})
expected_multi_labels_dict.update({'label B Right': [4]})
expected_multi_labels_dict.update({'label C': [5]})
expected_multi_labels_dict.update({'label D Right': [6]})
expected_multi_labels_dict.update({'label E right': [7]})
expected_multi_labels_dict.update({'label E Left': [8]})
for k1, k2 in zip(multi_labels_dict_from_ldm.keys(), expected_multi_labels_dict.keys()):
assert k1 == k2
assert multi_labels_dict_from_ldm[k1] == expected_multi_labels_dict[k2]
# -> erase, assign and keep only one label relabeller.
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_relabel_standard():
dict_expected = collections.OrderedDict()
dict_expected.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
dict_expected.update({1: [[255, 0, 0], [1, 1, 1], 'label one (l1)']})
dict_expected.update({9: [[204, 0, 0], [1, 1, 1], 'label two (l2)']})
dict_expected.update({3: [[51, 51, 255], [1, 1, 1], 'label three']})
dict_expected.update({10: [[102, 102, 255], [1, 1, 1], 'label four']})
dict_expected.update({5: [[0, 204, 51], [1, 1, 1], 'label five (l5)']})
dict_expected.update({6: [[51, 255, 102], [1, 1, 1], 'label six']})
dict_expected.update({7: [[255, 255, 0], [1, 1, 1], 'label seven']})
dict_expected.update({8: [[255, 50, 50], [1, 1, 1], 'label eight']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
old_labels = [2, 4]
new_labels = [9, 10]
ldm_relabelled = ldm_original.relabel(old_labels, new_labels)
for k in dict_expected.keys():
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_relabel_bad_input():
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
old_labels = [2, 4, 180]
new_labels = [9, 10, 12]
with pytest.raises(IOError):
ldm_original.relabel(old_labels, new_labels)
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_erase_labels_unexisting_labels():
dict_expected = collections.OrderedDict()
dict_expected.update({0: [[0, 0, 0], [0, 0, 0], 'background']})
dict_expected.update({1: [[255, 0, 0], [1, 1, 1], 'label one (l1)']})
dict_expected.update({3: [[51, 51, 255], [1, 1, 1], 'label three']})
dict_expected.update({5: [[0, 204, 51], [1, 1, 1], 'label five (l5)']})
dict_expected.update({6: [[51, 255, 102], [1, 1, 1], 'label six']})
dict_expected.update({7: [[255, 255, 0], [1, 1, 1], 'label seven']})
dict_expected.update({8: [[255, 50, 50], [1, 1, 1], 'label eight']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
labels_to_erase = [2, 4, 16, 32]
ldm_relabelled = ldm_original.erase_labels(labels_to_erase)
for k in dict_expected.keys():
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_assign_all_other_labels_the_same_value():
dict_expected = collections.OrderedDict()
dict_expected.update({0: [[0, 0, 0], [0, 0, 0], 'background']}) # Possible bug
dict_expected.update({1: [[255, 0, 0], [1, 1, 1], 'label one (l1)']})
dict_expected.update({4: [[102, 102, 255], [1, 1, 1], 'label four']})
dict_expected.update({7: [[255, 255, 0], [1, 1, 1], 'label seven']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
labels_to_keep = [0, 1, 4, 7]
other_value = 12
ldm_relabelled = ldm_original.assign_all_other_labels_the_same_value(labels_to_keep, other_value)
print(dict_expected)
print(ldm_relabelled.dict_label_descriptor)
for k in dict_expected.keys():
print()
print(dict_expected[k])
print(ldm_relabelled.dict_label_descriptor[k])
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
@write_and_erase_temporary_folder_with_dummy_labels_descriptor
def test_keep_one_label():
dict_expected = collections.OrderedDict()
dict_expected.update({3: [[51, 51, 255], [1, 1, 1], 'label three']})
ldm_original = LabelsDescriptorManager(jph(pfo_tmp_test, 'labels_descriptor.txt'))
label_to_keep = 3
ldm_relabelled = ldm_original.keep_one_label(label_to_keep)
for k in dict_expected.keys():
assert dict_expected[k] == ldm_relabelled.dict_label_descriptor[k]
if __name__ == '__main__':
test_generate_dummy_labels_descriptor_wrong_input1()
test_generate_dummy_labels_descriptor_wrong_input2()
test_generate_labels_descriptor_list_roi_names_None()
test_generate_labels_descriptor_list_colors_triplets_None()
test_generate_none_list_colour_triples()
test_generate_labels_descriptor_general()
test_basics_methods_labels_descriptor_manager_wrong_input_path()
test_basics_methods_labels_descriptor_manager_wrong_input_convention()
test_basic_dict_input()
test_load_save_and_compare()
test_save_in_fsl_convention_reload_as_dict_and_compare()
test_signature_for_variable_convention_wrong_input()
test_signature_for_variable_convention_wrong_input_after_initialisation()
test_relabel_labels_descriptor()
test_relabel_labels_descriptor_with_merging()
test_permute_labels_from_descriptor_wrong_input_permutation()
test_permute_labels_from_descriptor_check()
test_erase_labels()
test_save_multi_labels_descriptor_custom()
test_get_multi_label_dict_standard_combine()
test_get_multi_label_dict_standard_not_combine()
test_save_multi_labels_descriptor_custom_test_robustness()
test_relabel_standard()
test_relabel_bad_input()
test_erase_labels_unexisting_labels()
test_assign_all_other_labels_the_same_value()
test_keep_one_label()
| 46.085317 | 120 | 0.687002 | 3,407 | 23,227 | 4.336073 | 0.070737 | 0.018547 | 0.013606 | 0.034116 | 0.828268 | 0.797739 | 0.77249 | 0.736343 | 0.684492 | 0.630745 | 0 | 0.052304 | 0.170276 | 23,227 | 503 | 121 | 46.176938 | 0.714249 | 0.044302 | 0 | 0.468144 | 1 | 0.00277 | 0.100717 | 0.035722 | 0 | 0 | 0 | 0 | 0.058172 | 1 | 0.074792 | false | 0 | 0.01385 | 0 | 0.088643 | 0.01385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1e644bb033782b715202a9df50b757be66e108c6 | 232 | py | Python | spikeforestwidgets/__init__.py | tjd2002/spikeforest2 | 2e393564b858b2995aa2ccccd9bd73065681b5de | [
"Apache-2.0"
] | null | null | null | spikeforestwidgets/__init__.py | tjd2002/spikeforest2 | 2e393564b858b2995aa2ccccd9bd73065681b5de | [
"Apache-2.0"
] | null | null | null | spikeforestwidgets/__init__.py | tjd2002/spikeforest2 | 2e393564b858b2995aa2ccccd9bd73065681b5de | [
"Apache-2.0"
] | null | null | null | from .timeserieswidget import TimeseriesWidget
from .electrodegeometrywidget import ElectrodeGeometryWidget
from .unitwaveformswidget import UnitWaveformWidget, UnitWaveformsWidget
from .correlogramswidget import CorrelogramsWidget
| 46.4 | 72 | 0.905172 | 17 | 232 | 12.352941 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073276 | 232 | 4 | 73 | 58 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |