hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b04cc2429377f6c443fc5eeec9df83818028ccb0 | 78 | py | Python | calendar.py | manishaverma1012/Hackerank_Solution | 9b13e8805fd244fbc7df2955f57e2a880772460d | [
"MIT"
] | null | null | null | calendar.py | manishaverma1012/Hackerank_Solution | 9b13e8805fd244fbc7df2955f57e2a880772460d | [
"MIT"
] | null | null | null | calendar.py | manishaverma1012/Hackerank_Solution | 9b13e8805fd244fbc7df2955f57e2a880772460d | [
"MIT"
] | null | null | null | import calendar
print (calendar.TextCalendar(firstweekday=6).formatyear(2015)) | 39 | 62 | 0.846154 | 9 | 78 | 7.333333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.038462 | 78 | 2 | 62 | 39 | 0.813333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
b05c8e9936ad5e55f163f2fdf2777e498d41665e | 1,231 | py | Python | sdk/python/pulumi_oci/loadbalancer/__init__.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/loadbalancer/__init__.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/loadbalancer/__init__.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
from .. import _utilities
import typing
# Export this package's modules as members:
from .backend import *
from .backend_set import *
from .certificate import *
from .get_backend_health import *
from .get_backend_set_health import *
from .get_backend_sets import *
from .get_backends import *
from .get_certificates import *
from .get_health import *
from .get_hostnames import *
from .get_listener_rules import *
from .get_load_balancer_routing_policies import *
from .get_load_balancer_routing_policy import *
from .get_load_balancers import *
from .get_path_route_sets import *
from .get_policies import *
from .get_protocols import *
from .get_rule_set import *
from .get_rule_sets import *
from .get_shapes import *
from .get_ssl_cipher_suite import *
from .get_ssl_cipher_suites import *
from .hostname import *
from .listener import *
from .load_balancer import *
from .load_balancer_routing_policy import *
from .path_route_set import *
from .rule_set import *
from .ssl_cipher_suite import *
from ._inputs import *
from . import outputs
| 31.564103 | 87 | 0.787977 | 184 | 1,231 | 4.994565 | 0.358696 | 0.326442 | 0.26877 | 0.065288 | 0.261153 | 0.125136 | 0 | 0 | 0 | 0 | 0 | 0.000944 | 0.139724 | 1,231 | 38 | 88 | 32.394737 | 0.866856 | 0.177904 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
c69a49cd74ac7f7464798bde8fc6270ca15bad94 | 3,939 | py | Python | grid/test/test_cell.py | CDT-AIMLAC/testing_exercise | b4a62d6ef0f61708618487ae94b89a9a4301be4c | [
"CC-BY-4.0"
] | null | null | null | grid/test/test_cell.py | CDT-AIMLAC/testing_exercise | b4a62d6ef0f61708618487ae94b89a9a4301be4c | [
"CC-BY-4.0"
] | null | null | null | grid/test/test_cell.py | CDT-AIMLAC/testing_exercise | b4a62d6ef0f61708618487ae94b89a9a4301be4c | [
"CC-BY-4.0"
] | null | null | null | from ..grid import Cell
def test_bulk():
""" Test that a cell in the bulk of the grid is correct. """
# Instantiate a cell in the bulk of a 4x4 grid.
c = Cell(2, 2, 4, 4)
# Make sure that the cell has 4 neighbours.
assert c.neighbours() == 4
# Check the coordinates of the neighbours.
assert c.left() == (1, 2)
assert c.right() == (3, 2)
assert c.up() == (2, 3)
assert c.down() == (2, 1)
def test_left_edge():
""" Test that a cell on the left edge of the grid is correct. """
# Instantiate a cell on the left edge of a 4x4 grid.
c = Cell(0, 2, 4, 4)
# Make sure that the cell has 3 neighbours.
assert c.neighbours() == 3
# Check the coordinates of the neighbours.
assert c.left() == None
assert c.right() == (1, 2)
assert c.up() == (0, 3)
assert c.down() == (0, 1)
def test_right_edge():
""" Test that a cell on the right edge of the grid is correct. """
# Instantiate a cell on the right edge of a 4x4 grid.
c = Cell(3, 2, 4, 4)
# Make sure that the cell has 3 neighbours.
assert c.neighbours() == 3
# Check the coordinates of the neighbours.
assert c.left() == (2, 2)
assert c.right() == None
assert c.up() == (3, 3)
assert c.down() == (3, 1)
def test_bottom_edge():
""" Test that a cell on the bottom edge of the grid is correct. """
# Instantiate a cell on the bottom edge of a 4x4 grid.
c = Cell(2, 0, 4, 4)
# Make sure that the cell has 3 neighbours.
assert c.neighbours() == 3
# Check the coordinates of the neighbours.
assert c.left() == (1, 0)
assert c.right() == (3, 0)
assert c.up() == (2, 1)
assert c.down() == None
def test_top_edge():
""" Test that a cell on the top edge of the grid is correct. """
# Instantiate a cell on the top edge of a 4x4 grid.
c = Cell(2, 3, 4, 4)
# Make sure that the cell has 3 neighbours.
assert c.neighbours() == 3
# Check the coordinates of the neighbours.
assert c.left() == (1, 3)
assert c.right() == (3, 3)
assert c.up() == None
assert c.down() == (2, 2)
def test_bottom_left_corner():
""" Test that a cell on the bottom left corner of the grid is correct. """
# Instantiate a cell at the bottom left corner of a 4x4 grid.
c = Cell(0, 0, 4, 4)
# Make sure that the cell has 2 neighbours.
assert c.neighbours() == 2
# Check the coordinates of the neighbours.
assert c.left() == None
assert c.right() == (1, 0)
assert c.up() == (0, 1)
assert c.down() == None
def test_bottom_right_corner():
""" Test that a cell at the bottom right corner of the grid is correct. """
# Instantiate a cell on the bottom right corner of a 4x4 grid.
c = Cell(3, 0, 4, 4)
# Make sure that the cell has 2 neighbours.
assert c.neighbours() == 2
# Check the coordinates of the neighbours.
assert c.left() == (2, 0)
assert c.right() == None
assert c.up() == (3, 1)
assert c.down() == None
def test_top_left_corner():
""" Test that a cell on the top left corner of the grid is correct. """
# Instantiate a cell on the top left corner of a 4x4 grid.
c = Cell(0, 3, 4, 4)
# Make sure that the cell has 2 neighbours.
assert c.neighbours() == 2
# Check the coordinates of the neighbours.
assert c.left() == None
assert c.right() == (1, 3)
assert c.up() == None
assert c.down() == (0, 2)
def test_top_right_corner():
""" Test that a cell on the top right corner of the grid is correct. """
# Instantiate a cell at the top right corner of a 4x4 grid.
c = Cell(3, 3, 4, 4)
# Make sure that the cell has 2 neighbours.
assert c.neighbours() == 2
# Check the coordinates of the neighbours.
assert c.left() == (2, 3)
assert c.right() == None
assert c.up() == None
assert c.down() == (3, 2)
| 28.751825 | 79 | 0.591013 | 649 | 3,939 | 3.5547 | 0.061633 | 0.136541 | 0.13264 | 0.05635 | 0.883832 | 0.875163 | 0.852622 | 0.748591 | 0.612484 | 0.554833 | 0 | 0.042478 | 0.282813 | 3,939 | 136 | 80 | 28.963235 | 0.774159 | 0.455699 | 0 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.703125 | 1 | 0.140625 | false | 0 | 0.015625 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
c6b22b64371d2ddbea5dca7488d88b2b0a4e19dd | 101 | py | Python | semantic-representation/project/errors/__init__.py | SOFIE-project/SMAUG-Marketplace | 404b6caa7c5ea58c27c20d716dffa60904fb7f46 | [
"Apache-2.0"
] | 1 | 2021-03-29T15:11:46.000Z | 2021-03-29T15:11:46.000Z | semantic-representation/project/errors/__init__.py | SOFIE-project/SMAUG-Marketplace | 404b6caa7c5ea58c27c20d716dffa60904fb7f46 | [
"Apache-2.0"
] | null | null | null | semantic-representation/project/errors/__init__.py | SOFIE-project/SMAUG-Marketplace | 404b6caa7c5ea58c27c20d716dffa60904fb7f46 | [
"Apache-2.0"
] | 1 | 2021-01-30T02:49:38.000Z | 2021-01-30T02:49:38.000Z | from flask import Blueprint
bp = Blueprint('errors', __name__)
from project.errors import handlers
| 16.833333 | 35 | 0.792079 | 13 | 101 | 5.846154 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138614 | 101 | 5 | 36 | 20.2 | 0.873563 | 0 | 0 | 0 | 0 | 0 | 0.059406 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
c6c2ff36ef4a2b825e33d2ad9edfc24191a20f17 | 11,130 | py | Python | api/sensor.py | icarus213/iotku | 4aa70002b88ad160861b2b9cd14f5d20706a6643 | [
"MIT"
] | 8 | 2018-07-06T10:40:53.000Z | 2019-07-31T09:12:10.000Z | api/sensor.py | echobots/iotku | 4aa70002b88ad160861b2b9cd14f5d20706a6643 | [
"MIT"
] | null | null | null | api/sensor.py | echobots/iotku | 4aa70002b88ad160861b2b9cd14f5d20706a6643 | [
"MIT"
] | 7 | 2018-07-06T11:02:48.000Z | 2021-01-06T06:32:01.000Z | from flask import Blueprint, request, session, jsonify, url_for
from . import api, iotku, c
#------------------SENSOR-------------------------
@api.route('/api/sensor/name', methods=['GET'])
def sensor_name():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
return jsonify({'result':sensor.get('sensor_name')})
@api.route('/api/sensor/time_added', methods=['GET'])
def sensor_time_added():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
return jsonify({'result':sensor.get('time_added')})
@api.route('/api/sensor/get_data', methods=['GET'])
def sensor_get_data():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result':False,'reason':"Invalid format"})
else:
try:
from_number = int(content['from'])
assert from_number >= 0
except:
from_number = 0
try:
to_number = int(content['to'])
assert from_number < to_number and to_number < 25
except:
to_number = from_number + 25
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session['email'])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False, 'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
data_collection = sensor.get_data(from_number,to_number)
return jsonify({'result':data_collection})
@api.route('/api/sensor/total_data_entry', methods=['GET'])
def sensor_total_data_entry():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
return jsonify({'result':sensor.get('total_data_entry')})
@api.route('/api/sensor/last_data_added_time', methods=['GET'])
def sensor_last_data_added_time():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
return jsonify({'result':sensor.get('last_data_added_time')})
@api.route('/api/sensor/post_data', methods=['POST'])
def sensor_post_data():
content = request.get_json(silent=True)
if not all(x in session.keys() for x in ["logged_in","api_key","device_id"]):
return jsonify({'result': False, 'reason': 'Not connected to any account / Invalid login type'})
elif not all(x in content.keys() for x in ["data","sensor_id"]):
return jsonify({'result': False,'reason': "Invalid format"})
else:
data = str(content['data'])
device_id = session['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(api_key=session['api_key'])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False, 'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
formatted = session['api_key'].encode("ascii").hex() + ' , ' + device_id.encode("ascii").hex() + ' , ' + sensor_id.encode("ascii").hex() + ' , ' + data.encode("ascii").hex()
c.publish(subject='post', payload=bytes(formatted, 'utf-8'))
return jsonify({'result': True})
@api.route('/api/sensor/total_rule', methods=['GET'])
def sensor_total_rule():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
return jsonify({'result':sensor.get('total_rule')})
@api.route('/api/sensor/rule_list', methods=['GET'])
def sensor_rule_list():
content = request.args
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id"]):
return jsonify({'result':False,'reason':"Invalid format"})
else:
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result':False,'reason':'Device ID not found'})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result':False,'reason':'Sensor ID not found'})
else:
rules = sensor.get_rule_list()
rule_id = [x.get('rule_id') for x in rules]
rule_name = [x.get('rule_name') for x in rules]
rule_list = [{'rule_id':x,'rule_name':y} for x,y in zip(rule_id, rule_name)]
return jsonify({'result':rule_list})
@api.route('/api/sensor/add_rule', methods=['POST'])
def sensor_add_rule():
content = request.get_json(silent=True)
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id","rule_id","rule_name","expected_type","condition","endpoint","command"]):
return jsonify({'result': False, 'reason': "Invalid format"})
elif not content['expected_type'].upper() in ['STR','INT']:
return jsonify({'result': False, 'reason': "Invalid expected_type"})
else:
try:
for x in content['condition']:
assert x['operator'] in ['EQU','NEQ','LSS','LEQ','GTR','GEQ']
if content['expected_type'].upper() == 'STR':
x['value'] = str(x['value'])
elif content['expected_type'].upper() == 'INT':
x['value'] = int(x['value'])
except Exception as e:
print(str(e))
return jsonify({'result': False, 'reason': "Invalid format for one of the conditions"})
device_id = content['device_id']
sensor_id = content['sensor_id']
endpoint = content["endpoint"]
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device or not user.find_device(endpoint):
return jsonify({'result': False, 'reason': "Device ID and/or Endpoint not found"})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result': False, 'reason': "Sensor ID not found"})
else:
rule_id = content["rule_id"]
rule_name = content["rule_name"]
expected_type = content["expected_type"]
condition = content["condition"]
command = content["command"]
if sensor.find_rule(rule_id):
return jsonify({'result': False, 'reason': "Rule ID exists"})
else:
sensor.add_rule(rule_id,rule_name,expected_type,condition,endpoint,command)
return jsonify({'result': True})
@api.route('/api/sensor/remove_rule', methods=['POST'])
def sensor_remove_rule():
content = request.get_json(silent=True)
if not all(x in session.keys() for x in ["logged_in","email"]):
return jsonify({'result':False,'reason':'Not logged in / Unauthorized'})
elif not all(x in content.keys() for x in ["device_id","sensor_id","rule_id"]):
return jsonify({'result': False, 'reason': 'Invalid format'})
else:
device_id = content['device_id']
sensor_id = content['sensor_id']
user = iotku.find_user(email=session["email"])
device = user.find_device(device_id)
if not device:
return jsonify({'result': False, 'reason': "Device ID not found"})
else:
sensor = device.find_sensor(sensor_id)
if not sensor:
return jsonify({'result': False, 'reason': "Sensor ID not found"})
else:
rule_id = content["rule_id"]
if not sensor.find_rule(rule_id):
return jsonify({'result': False, 'reason': "Rule ID not found"})
else:
sensor.remove_rule(rule_id)
return jsonify({'result': True})
#------------------/SENSOR------------------------- | 42.480916 | 181 | 0.645912 | 1,529 | 11,130 | 4.554611 | 0.071288 | 0.100804 | 0.147329 | 0.151637 | 0.785181 | 0.750287 | 0.735784 | 0.720706 | 0.711947 | 0.711947 | 0 | 0.000772 | 0.185175 | 11,130 | 262 | 182 | 42.480916 | 0.767119 | 0.008895 | 0 | 0.688259 | 0 | 0 | 0.245966 | 0.015322 | 0 | 0 | 0 | 0 | 0.012146 | 1 | 0.040486 | false | 0 | 0.008097 | 0 | 0.267206 | 0.008097 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
c6cbd63ab97e0c74120b465a229146a532ab0191 | 73 | py | Python | src/__init__.py | cornzz/robolab-tud-spring18 | 958915181fc8cab9a3567d9b2670be8a1b704488 | [
"MIT"
] | null | null | null | src/__init__.py | cornzz/robolab-tud-spring18 | 958915181fc8cab9a3567d9b2670be8a1b704488 | [
"MIT"
] | null | null | null | src/__init__.py | cornzz/robolab-tud-spring18 | 958915181fc8cab9a3567d9b2670be8a1b704488 | [
"MIT"
] | null | null | null | from . import pilot, pid_controller, planet, events, main, shortest_path
| 36.5 | 72 | 0.794521 | 10 | 73 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123288 | 73 | 1 | 73 | 73 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c6e7a24c05a3e2131720922c04d254fac11d0c31 | 53 | py | Python | pyexlatex/exc.py | whoopnip/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | 4 | 2020-06-08T07:17:12.000Z | 2021-11-04T21:39:52.000Z | pyexlatex/exc.py | nickderobertis/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | 24 | 2020-02-17T17:20:44.000Z | 2021-12-20T00:10:19.000Z | pyexlatex/exc.py | nickderobertis/py-ex-latex | 66f5fadc35a0bfdce5f1ccb3c80dce8885b061b6 | [
"MIT"
] | null | null | null | class NoPackageWithNameException(Exception):
pass | 26.5 | 44 | 0.830189 | 4 | 53 | 11 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113208 | 53 | 2 | 45 | 26.5 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
059fdcf55e94643c3b71ab47b3a34e0ca43ca483 | 23 | py | Python | calc.py | JYF9711/python | 65e4c427f7068794b1ee76035cd8fab166b60ffa | [
"Apache-2.0"
] | null | null | null | calc.py | JYF9711/python | 65e4c427f7068794b1ee76035cd8fab166b60ffa | [
"Apache-2.0"
] | null | null | null | calc.py | JYF9711/python | 65e4c427f7068794b1ee76035cd8fab166b60ffa | [
"Apache-2.0"
] | null | null | null | PK
| 11.5 | 22 | 0.086957 | 1 | 23 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 23 | 1 | 23 | 23 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
05b0280ddeee642470db28b3bd2e82c1d109b314 | 112 | py | Python | bin/parentclass.py | odingsy/NGStoolkit | 68d73810351550b9ba75f9184f26bc8e55708fcc | [
"MIT"
] | 2 | 2018-05-05T06:24:51.000Z | 2021-07-04T22:24:13.000Z | bin/parentclass.py | odingsy/NGStoolkit | 68d73810351550b9ba75f9184f26bc8e55708fcc | [
"MIT"
] | null | null | null | bin/parentclass.py | odingsy/NGStoolkit | 68d73810351550b9ba75f9184f26bc8e55708fcc | [
"MIT"
] | 2 | 2020-12-27T22:02:29.000Z | 2021-05-28T20:28:26.000Z |
def hey(a):
return a + '3333'
class ThisIsParent:
def fun_(self, param1):
return hey(param1)
| 12.444444 | 27 | 0.598214 | 15 | 112 | 4.4 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 0.285714 | 112 | 8 | 28 | 14 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.036036 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
05b475784f1980d87da642c7020a760ff749b2d9 | 82 | py | Python | tests/fixtures/contrib/gcp/firestore/__init__.py | douwevandermeij/fractal | 66b04892b4d6fd8ee6a0c07b6e230f4321165085 | [
"MIT"
] | 2 | 2021-08-12T05:19:08.000Z | 2022-01-29T16:22:37.000Z | tests/fixtures/contrib/gcp/firestore/__init__.py | douwevandermeij/fractal | 66b04892b4d6fd8ee6a0c07b6e230f4321165085 | [
"MIT"
] | null | null | null | tests/fixtures/contrib/gcp/firestore/__init__.py | douwevandermeij/fractal | 66b04892b4d6fd8ee6a0c07b6e230f4321165085 | [
"MIT"
] | null | null | null | from tests.fixtures.contrib.gcp.firestore.specifications import * # NOQA NOSONAR
| 41 | 81 | 0.817073 | 10 | 82 | 6.7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 82 | 1 | 82 | 82 | 0.905405 | 0.146341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
05e7ccb1b5e24ac5c06375d64dc2a836ce824f0b | 3,320 | py | Python | app/cascade/data_model/language/cqlVisitor.py | inmadria/cascade-server | 0ae612d97a5bad60b57a611ac59d491495e4cef1 | [
"BSD-3-Clause"
] | 205 | 2017-08-30T19:53:53.000Z | 2022-03-29T17:55:32.000Z | app/cascade/data_model/language/cqlVisitor.py | inmadria/cascade-server | 0ae612d97a5bad60b57a611ac59d491495e4cef1 | [
"BSD-3-Clause"
] | 14 | 2017-08-31T15:00:11.000Z | 2021-06-01T22:00:06.000Z | app/cascade/data_model/language/cqlVisitor.py | inmadria/cascade-server | 0ae612d97a5bad60b57a611ac59d491495e4cef1 | [
"BSD-3-Clause"
] | 51 | 2017-08-30T19:58:06.000Z | 2022-03-30T15:54:03.000Z | # Generated from C:/Users/...cascade-server/app/cascade/data_model/language\cql.g4 by ANTLR 4.6
from antlr4 import *


# This class defines a complete generic visitor for a parse tree produced by cqlParser.

class cqlVisitor(ParseTreeVisitor):

    # Visit a parse tree produced by cqlParser#floatValue.
    def visitFloatValue(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#intValue.
    def visitIntValue(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#number.
    def visitNumber(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#value.
    def visitValue(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#field.
    def visitField(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#allQueries.
    def visitAllQueries(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#searchQuery.
    def visitSearchQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#dataModelQuery.
    def visitDataModelQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#eventObject.
    def visitEventObject(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#eventAction.
    def visitEventAction(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#analyticReferenceByID.
    def visitAnalyticReferenceByID(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#analyticReferenceByName.
    def visitAnalyticReferenceByName(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#valueComparator.
    def visitValueComparator(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#string.
    def visitString(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#AnalyticReferenceQuery.
    def visitAnalyticReferenceQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#OrQuery.
    def visitOrQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#NestedDataModelQuery.
    def visitNestedDataModelQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#NotQuery.
    def visitNotQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#ScopedQuery.
    def visitScopedQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#ValueComparatorQuery.
    def visitValueComparatorQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#AndQuery.
    def visitAndQuery(self, ctx):
        return self.visitChildren(ctx)

    # Visit a parse tree produced by cqlParser#RegExQuery.
    def visitRegExQuery(self, ctx):
        return self.visitChildren(ctx)
| 28.135593 | 95 | 0.712651 | 389 | 3,320 | 6.079692 | 0.210797 | 0.058351 | 0.097252 | 0.175053 | 0.635518 | 0.635518 | 0.609302 | 0.594926 | 0.594926 | 0.594926 | 0 | 0.001534 | 0.214759 | 3,320 | 117 | 96 | 28.376068 | 0.905639 | 0.411145 | 0 | 0.478261 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.478261 | false | 0 | 0.021739 | 0.478261 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
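The generated visitor above is a no-op skeleton: every method delegates to `visitChildren`, and a consumer overrides only the rules it cares about. The name-based dispatch it relies on can be sketched without the antlr4 runtime; all node and class names below are hypothetical stand-ins, not the real antlr4 API:

```python
# Minimal sketch of visitor-pattern dispatch, as a generated ParseTreeVisitor
# subclass would use it. Node/visitor names here are invented for illustration.

class Node:
    def __init__(self, *children):
        self.children = children

class IntValue(Node):
    def __init__(self, value):
        super().__init__()  # leaf node: no children
        self.value = value

class Number(Node):
    pass

class Visitor:
    def visit(self, node):
        # Dispatch to visit<ClassName> when the subclass defines it,
        # otherwise fall through to the generic visitChildren.
        method = getattr(self, 'visit' + type(node).__name__, self.visitChildren)
        return method(node)

    def visitChildren(self, node):
        result = None
        for child in node.children:
            result = self.visit(child)
        return result

class IntCollector(Visitor):
    """Overrides a single rule, like a cqlVisitor subclass would."""
    def __init__(self):
        self.ints = []

    def visitIntValue(self, node):
        self.ints.append(node.value)

tree = Number(IntValue(1), IntValue(2))
collector = IntCollector()
collector.visit(tree)
print(collector.ints)  # [1, 2]
```

This is why the generated file can stay untouched: subclasses override only the `visit...` hooks they need.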
af111e674a323d6b57d91b6d06bf8be2f3bcbf75 | 22,455 | py | Python | tests/test_vis_ddijoin.py | FedeMPouzols/cngi_prototype | 421a99c460f4092b79120f5bec122de7ce9b8b96 | [
"Apache-2.0"
] | null | null | null | tests/test_vis_ddijoin.py | FedeMPouzols/cngi_prototype | 421a99c460f4092b79120f5bec122de7ce9b8b96 | [
"Apache-2.0"
] | null | null | null | tests/test_vis_ddijoin.py | FedeMPouzols/cngi_prototype | 421a99c460f4092b79120f5bec122de7ce9b8b96 | [
"Apache-2.0"
] | null | null | null | import cngi.dio
import cngi.vis
import unittest
import xarray as xr
import numpy as np


class DdiJoinBase(unittest.TestCase):

    def setUp(self):
        try:
            self.vis_xds0 = cngi.dio.read_vis("../data/sis14_twhya_calibrated_flagged.vis.zarr", ddi=0)
            self.global_xds = cngi.dio.read_vis("../data/sis14_twhya_calibrated_flagged.vis.zarr", ddi='global')
        except Exception as ee:
            raise RuntimeError("These tests assume that sis14_twhya_calibrated_flagged.ms has been previously converted and stored in the ../data directory.") from ee

        # limit data a bit to speed up test times
        self.vis_xds0 = self.vis_xds0.where(self.vis_xds0.time >= self.vis_xds0.time[-10]).dropna("time")
        self.vis_xds0 = self.vis_xds0.where(self.vis_xds0.baseline < 10).dropna("baseline")

    def helper_get_joinable_ddis(self, cppy_change_times, deep_copy_both=False):
        """I don't have an obvious pair of compatible DDIs to join, so I create
        one compatible DDI by starting with a copy of an existing one."""
        # create the ddi copy
        orig = self.vis_xds0
        if deep_copy_both:
            orig = self.vis_xds0.copy(deep=True)
        cppy = orig.copy(deep=True)

        # start the copy's times 1 second after the end of the original's end times
        if cppy_change_times:
            start_time = orig.time[-1] + 10**9  # time stored as nanoseconds, so start 1 second after the last time
            delta_time = start_time - orig.time[0]
            cppy = cppy.assign_coords(time=orig.time + delta_time)

        return orig, cppy

    def helper_add_nondim_coord(self, xds, coord_name, parent_dim_name=None):
        if parent_dim_name is None:
            parent_dim_name = list(xds.dims.keys())[0]
        parent_dim_len = xds.dims[parent_dim_name]
        ret = xds.assign_coords({coord_name: xr.DataArray(range(parent_dim_len), dims=parent_dim_name)})
        self.assertTrue(coord_name in ret.coords, f"Dataset should have the \"{coord_name}\" coordinate!")
        self.assertFalse(coord_name in ret.dims, f"The coordinate \"{coord_name}\" should not be a dimension coordinate!")
        return ret, parent_dim_len

    def helper_add_dim_coord(self, xds, coord_name, length=None, values=None):
        if length is not None:
            values = range(length)
        ret = xds.assign_coords({coord_name: values})
        self.assertTrue(coord_name in ret.coords, f"Dataset should have the \"{coord_name}\" coordinate!")
        self.assertTrue(coord_name in ret.dims, f"The coordinate \"{coord_name}\" should be a dimension coordinate!")
        return ret, len(values)

    def helper_add_data_var(self, xds, data_var_name, dim_name):
        dim_len = len(xds.coords[dim_name])
        ret = xds.assign({data_var_name: xr.DataArray(range(dim_len), dims=dim_name)})
        return ret, dim_len

    def helper_get_nonnan_index(self, data_var):
        for i in range(data_var.shape[0]):
            for j in range(data_var.shape[1]):
                if not np.isnan(data_var[i][j]):
                    break
            if not np.isnan(data_var[i][j]):
                break
        self.assertFalse(np.isnan(data_var[i][j]), "No non-nan values in data_var")
        return i, j


class TestGoodJoins(DdiJoinBase):

    def test_join_same_everything(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=False)

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did any of the coordinates change?
        for coord_name in join.coords:
            self.assertEqual(len(join.coords[coord_name]), len(orig.coords[coord_name]), f"joined coordinate \"{coord_name}\" length does not match original coordinate length")
            self.assertTrue((join.coords[coord_name].values == orig.coords[coord_name].values).all(), f"joined coordinate \"{coord_name}\" values do not match original coordinate values")

        # did all the attributes get updated?
        for attr_name in join.attrs:
            if attr_name == 'ddi':
                continue
            self.assertEqual(join.attrs[attr_name], orig.attrs[attr_name], f"attribute values for \"{attr_name}\" do not match between original and joined")

    def test_join_different_coords(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=True)

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the time dimension get merged correctly?
        orig_times = set(orig.time.values)
        cppy_times = set(cppy.time.values)
        join_times = set(join.time.values)
        self.assertEqual(len(orig_times), len(orig.time), "sets of times are missing values from their corresponding lists")
        self.assertEqual(len(cppy_times), len(cppy.time), "sets of times are missing values from their corresponding lists")
        self.assertEqual(len(join_times.difference(orig_times).difference(cppy_times)), 0, "ERROR: there are extra values in the joint times")
        self.assertEqual(len(orig.time)*2, len(join.time), "unexpected number of values in joint times")
        self.assertEqual(len(cppy.time)*2, len(join.time), "unexpected number of values in joint times")

        # did any of the other coordinates change?
        for coord_name in join.coords:
            if coord_name == 'time':
                continue
            if 'time' in join.coords[coord_name].dims:
                self.assertEqual(len(join.coords[coord_name]), len(orig.coords[coord_name])*2, f"joined coordinate \"{coord_name}\" length does not match original coordinate length x2")
                join_coords = join.coords[coord_name].sel(time=orig.time)  # limit values to those from the original time coordinate
                self.assertTrue((join_coords == orig.coords[coord_name].values).all(), f"joined coordinate \"{coord_name}\" values do not match original coordinate values")
            else:
                self.assertEqual(len(join.coords[coord_name]), len(orig.coords[coord_name]), f"joined coordinate \"{coord_name}\" length does not match original coordinate length")
                self.assertTrue((join.coords[coord_name].values == orig.coords[coord_name].values).all(), f"joined coordinate \"{coord_name}\" values do not match original coordinate values")

    def test_join_different_coords_inputs_unchanged(self):
        """This is a special test to verify that ddi_join does not modify the inputs."""
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(deep_copy_both=True, cppy_change_times=True)

        # set an extra attribute on vis0 (orig)
        orig.attrs['testing_extra_attr'] = 'foo'

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the attribute get carried over
        self.assertTrue('testing_extra_attr' in orig.attrs, "vis0 should have an attribute \"testing_extra_attr\"")
        self.assertEqual(orig.testing_extra_attr, 'foo', "vis0 should have an attribute \"testing_extra_attr\" with the value \"foo\"")
        self.assertFalse('testing_extra_attr' in cppy.attrs, "vis1 should NOT have an attribute \"testing_extra_attr\"")
        self.assertTrue('testing_extra_attr' in join.attrs, "join should have an attribute \"testing_extra_attr\"")
        self.assertEqual(join.testing_extra_attr, 'foo', "join should have an attribute \"testing_extra_attr\" with the value \"foo\"")

    def test_data_vars_offset_coords(self):
        """Verify that both data_var values from time 0 and time 1 are merged in."""
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=True)
        time_len = len(orig.time)

        # set some values for the copy so that we can verify the values took
        cppy.EXPOSURE.load()
        i, j = self.helper_get_nonnan_index(cppy.EXPOSURE)
        cppy.EXPOSURE[i][j] += 1  # data variable to be merged

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the EXPOSURE data variables get updated?
        self.assertAlmostEqual(join.EXPOSURE[i+time_len][j].values + 0, orig.EXPOSURE[i][j].values + 1, 5, "unexpected value in joined EXPOSURE data variable")

    def test_join_different_attrs(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=True)

        # set some values for the copy so that we can verify the values took
        cppy.attrs['num_chan'] += 1  # attribute to be overwritten with value from orig
        cppy.attrs['ddi'] += 1  # special attribute to be combined

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the attributes get updated?
        self.assertNotEqual(orig.ddi, cppy.ddi, "original and copy should have different \"ddi\" values")
        self.assertTrue(str(orig.ddi) in join.ddi, "could not find orig's \"ddi\" value in join's \"ddi\" value")
        self.assertTrue(str(cppy.ddi) in join.ddi, "could not find cppy's \"ddi\" value in join's \"ddi\" value")
        self.assertEqual(join.num_chan, orig.num_chan, "bad value for num_chan, should have been overwritten with value from orig")

    def test_join_base(self):
        #############
        ### setup ###
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=True)

        ####################
        ### do the merge ###
        join = cngi.vis.ddijoin(orig, cppy)

        ############################
        ### validate the results ###
        # did the time dimension get merged correctly?
        orig_times = set(orig.time.values)
        cppy_times = set(cppy.time.values)
        join_times = set(join.time.values)
        self.assertEqual(len(orig_times), len(orig.time), "sets of times are missing values from their corresponding lists")
        self.assertEqual(len(cppy_times), len(cppy.time), "sets of times are missing values from their corresponding lists")
        self.assertEqual(len(join_times.difference(orig_times).difference(cppy_times)), 0, "ERROR: there are extra values in the joint times")
        self.assertEqual(len(orig.time)*2, len(join.time), "unexpected number of values in joint times")

        # did any of the other coordinates change?
        for coord_name in join.coords:
            if coord_name == 'time':
                continue
            if 'time' in join.coords[coord_name].dims:
                self.assertEqual(len(join.coords[coord_name]), len(orig.coords[coord_name])*2, f"joined coordinate \"{coord_name}\" length does not match original coordinate length x2")
                join_coords = join.coords[coord_name].sel(time=orig.time)  # limit values to those from the original time coordinate
                self.assertTrue((join_coords == orig.coords[coord_name].values).all(), f"joined coordinate \"{coord_name}\" values do not match original coordinate values")
            else:
                self.assertEqual(len(join.coords[coord_name]), len(orig.coords[coord_name]), f"joined coordinate \"{coord_name}\" length does not match original coordinate length")
                self.assertTrue((join.coords[coord_name].values == orig.coords[coord_name].values).all(), f"joined coordinate \"{coord_name}\" values do not match original coordinate values")

        # did all of the data variables get updated?
        for data_var_name in join.data_vars:
            self.assertEqual(len(orig.data_vars[data_var_name])*2, len(join.data_vars[data_var_name]), f"unexpected number of values in joined data var \"{data_var_name}\"")

        # did all the attributes get updated?
        for attr_name in join.attrs:
            if attr_name == 'ddi':
                self.assertTrue(str(orig.ddi) in join.ddi, "could not find orig's \"ddi\" value in join's \"ddi\" value")
                self.assertTrue(str(cppy.ddi) in join.ddi, "could not find cppy's \"ddi\" value in join's \"ddi\" value")
            else:
                self.assertEqual(join.attrs[attr_name], orig.attrs[attr_name], f"attribute values for \"{attr_name}\" do not match between original and joined")

    def test_extra_attr_vis0(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(deep_copy_both=True, cppy_change_times=True)

        # set an extra attribute on vis0 (orig)
        orig.attrs['testing_extra_attr'] = 'foo'

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the attribute get carried over
        self.assertTrue('testing_extra_attr' in orig.attrs, "vis0 should have an attribute \"testing_extra_attr\"")
        self.assertEqual(orig.testing_extra_attr, 'foo', "vis0 should have an attribute \"testing_extra_attr\" with the value \"foo\"")
        self.assertFalse('testing_extra_attr' in cppy.attrs, "vis1 should NOT have an attribute \"testing_extra_attr\"")
        self.assertTrue('testing_extra_attr' in join.attrs, "join should have an attribute \"testing_extra_attr\"")
        self.assertEqual(join.testing_extra_attr, 'foo', "join should have an attribute \"testing_extra_attr\" with the value \"foo\"")

    def test_extra_attr_vis1(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(deep_copy_both=True, cppy_change_times=True)
        self.assertFalse('testing_extra_attr' in orig.attrs, "vis0 should NOT have an attribute \"testing_extra_attr\"")

        # set an extra attribute on vis1 (cppy)
        cppy.attrs['testing_extra_attr'] = 'foo'

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the attribute get carried over
        self.assertTrue('testing_extra_attr' in cppy.attrs, "vis1 should have an attribute \"testing_extra_attr\"")
        self.assertEqual(cppy.testing_extra_attr, 'foo', "vis1 should have an attribute \"testing_extra_attr\" with the value \"foo\"")
        self.assertFalse('testing_extra_attr' in orig.attrs, "vis0 should NOT have an attribute \"testing_extra_attr\"")
        self.assertTrue('testing_extra_attr' in join.attrs, "join should have an attribute \"testing_extra_attr\"")
        self.assertEqual(join.testing_extra_attr, 'foo', "join should have an attribute \"testing_extra_attr\" with the value \"foo\"")

    def test_missing_nondim_coord_vis0(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=True)

        # add an extra non-dimension coordinate to the copy that is not in the original
        cppy, parent_dim_len = self.helper_add_nondim_coord(cppy, "new_coord")

        # try to merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the extra non-dimension coordinate come through?
        self.assertFalse("new_coord" in orig.coords, "Dataset \"orig\" should NOT have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in cppy.coords, "Dataset \"cppy\" should have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in join.coords, "Dataset \"join\" should have the \"new_coord\" coordinate!")
        self.assertEqual(len(join.coords["new_coord"]), parent_dim_len, "New \"new_coord\" coordinate length mismatch!")

    def test_missing_nondim_coord_vis1(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(deep_copy_both=True, cppy_change_times=True)

        # add an extra non-dimension coordinate to the original that is not in the copy
        orig, parent_dim_len = self.helper_add_nondim_coord(orig, "new_coord")

        # try to merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the extra non-dimension coordinate come through?
        self.assertFalse("new_coord" in cppy.coords, "Dataset \"cppy\" should NOT have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in orig.coords, "Dataset \"orig\" should have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in join.coords, "Dataset \"join\" should have the \"new_coord\" coordinate!")
        self.assertEqual(len(join.coords["new_coord"]), parent_dim_len, "New \"new_coord\" coordinate length mismatch!")

    def test_missing_dim_coord_vis0(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=True)

        # add an extra dimension coordinate to the copy that is not in the original
        cppy, dim_len = self.helper_add_dim_coord(cppy, "new_coord", length=4)

        # try to merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the extra dimension coordinate come through?
        self.assertFalse("new_coord" in orig.coords, "Dataset \"orig\" should NOT have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in cppy.coords, "Dataset \"cppy\" should have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in join.coords, "Dataset \"join\" should have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in join.dims, "Dataset \"join\" should have the \"new_coord\" dimension!")
        self.assertEqual(len(join.coords["new_coord"]), dim_len, "New \"new_coord\" coordinate length mismatch!")

    def test_missing_dim_coord_vis1(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(deep_copy_both=True, cppy_change_times=True)

        # add an extra dimension coordinate to the original that is not in the copy
        orig, dim_len = self.helper_add_dim_coord(orig, "new_coord", length=4)

        # try to merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the extra dimension coordinate come through?
        self.assertFalse("new_coord" in cppy.coords, "Dataset \"cppy\" should NOT have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in orig.coords, "Dataset \"orig\" should have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in join.coords, "Dataset \"join\" should have the \"new_coord\" coordinate!")
        self.assertTrue("new_coord" in join.dims, "Dataset \"join\" should have the \"new_coord\" dimension!")
        self.assertEqual(len(join.coords["new_coord"]), dim_len, "New \"new_coord\" coordinate length mismatch!")

    def test_same_coords_one_nan_data_var(self):
        """Two datasets should be joinable if the same data_var has different
        values at the same coordinate position and one of those values is NaN."""
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=False)

        # find a non-nan index and change the original to be a nan value
        orig.EXPOSURE.load()
        cppy.EXPOSURE.load()
        i, j = self.helper_get_nonnan_index(cppy.EXPOSURE)
        orig.EXPOSURE[i][j] = np.nan
        cppy.EXPOSURE[i][j] = 1
        self.assertNotEqual(orig.EXPOSURE[i][j], cppy.EXPOSURE[i][j])

        # do the merge
        join = cngi.vis.ddijoin(orig, cppy)

        # verify the non-nan value got copied
        self.assertTrue(np.isnan(orig.EXPOSURE[i][j]))
        self.assertEqual(float(cppy.EXPOSURE[i][j].values), 1)
        self.assertEqual(float(join.EXPOSURE[i][j].values), 1)

    def test_extra_data_var(self):
        """Should be able to merge two datasets that are identical, except that
        one of them has an extra data_var that the other doesn't."""
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=False)

        # add an extra data_var to the copy that is not in the original
        cppy, data_var_len = self.helper_add_data_var(cppy, "new_data_var", "time")

        # try to merge
        join = cngi.vis.ddijoin(orig, cppy)

        # did the extra data_var come through?
        self.assertFalse("new_data_var" in orig.data_vars, "Dataset \"orig\" should NOT have the \"new_data_var\" data_var!")
        self.assertTrue("new_data_var" in cppy.data_vars, "Dataset \"cppy\" should have the \"new_data_var\" data_var!")
        self.assertTrue("new_data_var" in join.data_vars, "Dataset \"join\" should have the \"new_data_var\" data_var!")
        self.assertEqual(len(join.data_vars["new_data_var"]), data_var_len, "New \"new_data_var\" data_var length mismatch!")


class TestBadJoins(DdiJoinBase):

    def test_different_nondim_coords(self):
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=True)

        # change one of the non-dimension coordinates to be incompatible with the orig coordinate
        cppy.antennas.load()
        cppy.antennas[-1] += 1

        # try to merge
        with self.assertRaises(Exception, msg="ddi_join should not allow datasets with differing non-dimension coordinates to be merged"):
            cngi.vis.ddijoin(orig, cppy)

    def test_same_datavar_diff_coords(self):
        """Two datasets should not be joinable if the same data_var has different coordinates."""
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(deep_copy_both=True, cppy_change_times=True)

        # add two different dimension coordinates
        orig, dim_len = self.helper_add_dim_coord(orig, "orig_coord", length=4)
        cppy, dim_len = self.helper_add_dim_coord(cppy, "cppy_coord", length=4)

        # add the same data_var, but reference the different coordinates
        orig = orig.assign({"new_data_var": xr.DataArray([1, 2, 3, 4], dims="orig_coord")})
        cppy = cppy.assign({"new_data_var": xr.DataArray([1, 2, 3, 4], dims="cppy_coord")})

        # try to merge
        with self.assertRaises(Exception, msg="ddi_join should not allow datasets with the same data_var referencing different coordinates to be merged"):
            cngi.vis.ddijoin(orig, cppy)

    def test_same_coords_diff_data_var(self):
        """Two datasets should not be joinable if the same data_var has
        conflicting non-NaN values at the same coordinate position."""
        # get the ddis
        orig, cppy = self.helper_get_joinable_ddis(cppy_change_times=False)

        # find a non-nan index
        cppy.EXPOSURE.load()
        i, j = self.helper_get_nonnan_index(cppy.EXPOSURE)

        # change one of the data_vars values in the copy
        cppy.EXPOSURE[i][j] += 1
        self.assertNotEqual(orig.EXPOSURE[i][j], cppy.EXPOSURE[i][j])

        # try to merge
        with self.assertRaises(Exception, msg="ddi_join should not allow datasets with a different value for their data_var to be merged"):
            cngi.vis.ddijoin(orig, cppy)
| 54.23913 | 191 | 0.674104 | 3,215 | 22,455 | 4.537792 | 0.079627 | 0.030228 | 0.038385 | 0.02591 | 0.782987 | 0.749332 | 0.73809 | 0.736719 | 0.725615 | 0.703955 | 0 | 0.004892 | 0.21719 | 22,455 | 413 | 192 | 54.37046 | 0.825065 | 0.151859 | 0 | 0.565957 | 0 | 0 | 0.237817 | 0.006742 | 0 | 0 | 0 | 0 | 0.340426 | 1 | 0.097872 | false | 0 | 0.021277 | 0 | 0.153191 | 0.004255 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
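The pair of NaN tests above (`test_same_coords_one_nan_data_var` vs. `test_same_coords_diff_data_var`) pins down a merge rule that can be sketched without xarray: when the same cell differs between the two datasets, a NaN on one side is filled from the other, while two conflicting non-NaN values are rejected. The helper names below are invented for illustration, not part of cngi.

```python
# Stdlib sketch of the NaN-tolerant combine rule the tests above exercise.
import math

def combine_cells(a, b):
    # NaN acts as "missing": fill it from the other side.
    if math.isnan(a):
        return b
    if math.isnan(b) or a == b:
        return a
    # Two real, conflicting values cannot be merged (the "bad join" case).
    raise ValueError('conflicting non-NaN values: %r vs %r' % (a, b))

def combine(xs, ys):
    return [combine_cells(a, b) for a, b in zip(xs, ys)]

nan = float('nan')
print(combine([1.0, nan, 3.0], [1.0, 2.0, nan]))  # [1.0, 2.0, 3.0]
```

xarray's `combine_first` implements the fill half of this rule; the conflict check is what distinguishes a good join from a bad one in the tests.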
af3335665fc114b011e75e79ba6a3f18abe09730 | 157 | py | Python | area/admin.py | LeonMaxwell/mianco | 88c1969bebbdf39314927976497f11f830b4c58e | [
"MIT"
] | null | null | null | area/admin.py | LeonMaxwell/mianco | 88c1969bebbdf39314927976497f11f830b4c58e | [
"MIT"
] | null | null | null | area/admin.py | LeonMaxwell/mianco | 88c1969bebbdf39314927976497f11f830b4c58e | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from area.models import Country, City
admin.site.register(Country)
admin.site.register(City)
| 19.625 | 37 | 0.802548 | 23 | 157 | 5.478261 | 0.565217 | 0.142857 | 0.269841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11465 | 157 | 7 | 38 | 22.428571 | 0.906475 | 0.165605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
af7ee6cda57f0203cbfc2a8c6bb179608d8790e3 | 36 | py | Python | theonionbox/tob/utils/__init__.py | ralphwetzel/theonionbox | 9812fce48153955e179755ea7a58413c3bee182f | [
"MIT"
] | 120 | 2015-12-30T09:41:56.000Z | 2022-03-23T02:30:05.000Z | theonionbox/tob/utils/__init__.py | nwithan8/theonionbox | 9e51fe0b4d07fc89a8a133fdeceb5f97d5d58713 | [
"MIT"
] | 57 | 2015-12-29T21:55:14.000Z | 2022-01-07T09:48:51.000Z | theonionbox/tob/utils/__init__.py | nwithan8/theonionbox | 9e51fe0b4d07fc89a8a133fdeceb5f97d5d58713 | [
"MIT"
] | 17 | 2018-02-05T08:57:46.000Z | 2022-02-28T16:44:41.000Z | from .attrdict import AttributedDict | 36 | 36 | 0.888889 | 4 | 36 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
af7f90c258b46293141137475b40cae92a74322c | 7,867 | py | Python | tests/run_pending_tests.py | qmagico/gae-migrations | d7b9463ab65920b9b8a88454ed16bdb2ed05e2dc | [
"MIT"
] | null | null | null | tests/run_pending_tests.py | qmagico/gae-migrations | d7b9463ab65920b9b8a88454ed16bdb2ed05e2dc | [
"MIT"
] | null | null | null | tests/run_pending_tests.py | qmagico/gae-migrations | d7b9463ab65920b9b8a88454ed16bdb2ed05e2dc | [
"MIT"
] | null | null | null | import migrations
from migrations import task_enqueuer, MigrationException
from migrations.model import DBMigration
from test_utils import GAETestCase
from my.models import QueDoidura, QuantaLoucura
from google.appengine.api import taskqueue, namespace_manager
import my.migrations
import my.migrations_empty_namespace
import my.migrations_pau_na_query
import my.migrations_pau_na_migration
def sync_task_add(*args, **kwargs):
task_enqueuer.execute(kwargs['params']['funcpath'], kwargs['params']['kwargs_json'])
class TestRunMigrations(GAETestCase):
def setUp(self):
GAETestCase.setUp(self)
namespace_manager.set_namespace('ns1')
QueDoidura(v1=3).put()
QueDoidura(v1=4).put()
QueDoidura(v1=5).put()
namespace_manager.set_namespace('ns2')
QueDoidura(v1=30).put()
QueDoidura(v1=40).put()
self._old_task_add = taskqueue.add
taskqueue.add = sync_task_add
self._old_ns = namespace_manager.get_namespace()
def tearDown(self):
GAETestCase.tearDown(self)
taskqueue.add = self._old_task_add
namespace_manager.set_namespace(self._old_ns)
def test_run_migrations(self):
# Roda as migracoes
first_migration = migrations.run_pending(my.migrations)
self.assertEqual('migration_0001_one', first_migration)
count = 0
v1sum = 0
namespace_manager.set_namespace('ns1')
for qd in QueDoidura.query():
self.assertEqual(qd.v2, 2 * qd.v1)
self.assertEqual(qd.v3, 3 * qd.v1)
count += 1
v1sum += qd.v1
self.assertEqual(3, QuantaLoucura.query().get().quanto)
namespace_manager.set_namespace('ns2')
for qd in QueDoidura.query():
self.assertEqual(qd.v2, 2 * qd.v1)
self.assertEqual(qd.v3, 3 * qd.v1)
count += 1
v1sum += qd.v1
self.assertEqual(2, QuantaLoucura.query().get().quanto)
self.assertEqual(5, count)
self.assertEqual(82, v1sum)
# E depois nao roda mais nada
first_migration = migrations.run_pending(my.migrations)
self.assertIsNone(first_migration)
# E os DBMigrations estao ok
namespace_manager.set_namespace('')
dbmigrations = []
for dbmigration in DBMigration.query(DBMigration.module == 'my.migrations'):
self.assertTrue(dbmigration.status == 'DONE')
dbmigrations.append(dbmigration.name)
self.assertEqual(3, len(dbmigrations))
self.assertTrue('migration_0001_one' in dbmigrations)
self.assertTrue('migration_0002_two' in dbmigrations)
self.assertTrue('migration_0003_three' in dbmigrations)
class TestRunMigrationsOnEmptyNamespace(GAETestCase):
def setUp(self):
GAETestCase.setUp(self)
self._old_ns = namespace_manager.get_namespace()
namespace_manager.set_namespace('')
QueDoidura(v1=3).put()
QueDoidura(v1=4).put()
QueDoidura(v1=5).put()
self._old_task_add = taskqueue.add
taskqueue.add = sync_task_add
def tearDown(self):
GAETestCase.tearDown(self)
taskqueue.add = self._old_task_add
namespace_manager.set_namespace(self._old_ns)
def test_run_migrations(self):
# Roda as migracoes
first_migration = migrations.run_pending(my.migrations_empty_namespace)
self.assertEqual('migration_empty_0001', first_migration)
count = 0
v1sum = 0
namespace_manager.set_namespace('')
for qd in QueDoidura.query():
self.assertEqual(qd.v2, 2 * qd.v1)
self.assertEqual(qd.v3, 3 * qd.v1)
count += 1
v1sum += qd.v1
self.assertEqual(3, count)
self.assertEqual(12, v1sum)
# E depois nao roda mais nada
first_migration = migrations.run_pending(my.migrations_empty_namespace)
self.assertIsNone(first_migration)
# E os DBMigrations estao ok
namespace_manager.set_namespace('')
dbmigrations = []
for dbmigration in DBMigration.query():
self.assertTrue(dbmigration.status == 'DONE')
dbmigrations.append(dbmigration.name)
self.assertEqual(2, len(dbmigrations))
self.assertTrue('migration_empty_0001' in dbmigrations)
self.assertTrue('migration_empty_0002' in dbmigrations)
class TestRunMigrationsWithQueryError(GAETestCase):
def setUp(self):
GAETestCase.setUp(self)
self._old_ns = namespace_manager.get_namespace()
namespace_manager.set_namespace('ns1')
QueDoidura(v1=3).put()
QueDoidura(v1=4).put()
QueDoidura(v1=5).put()
self._old_task_add = taskqueue.add
taskqueue.add = sync_task_add
def tearDown(self):
GAETestCase.tearDown(self)
taskqueue.add = self._old_task_add
namespace_manager.set_namespace(self._old_ns)
def test_run_migrations(self):
# Roda as migracoes
migrations.run_pending(my.migrations_pau_na_query)
count = 0
v1sum = 0
namespace_manager.set_namespace('ns1')
for qd in QueDoidura.query():
self.assertEqual(qd.v2, 2 * qd.v1)
self.assertIsNone(qd.v3)
count += 1
v1sum += qd.v1
self.assertEqual(3, count)
self.assertEqual(12, v1sum)
# E os DBMigrations estao ok
namespace_manager.set_namespace('')
dbmigrations = {dbm.name: dbm for dbm in DBMigration.query()}
self.assertEqual(2, len(dbmigrations))
self.assertTrue('DONE', dbmigrations['migration_paunaquery_0001'].status)
self.assertTrue('ERROR', dbmigrations['migration_paunaquery_0002'].status)
# E a migration 1 nao vai rodar de novo
dones = DBMigration.last_1000_names_done_or_running('my.migrations_pau_na_query')
self.assertEqual(1, len(dones))
self.assertEqual('migration_paunaquery_0001', dones[0])
class TestRunMigrationsWithMigrationError(GAETestCase):
def setUp(self):
GAETestCase.setUp(self)
self._old_ns = namespace_manager.get_namespace()
namespace_manager.set_namespace('ns1')
QueDoidura(v1=3).put()
QueDoidura(v1=4).put()
QueDoidura(v1=5).put()
self._old_task_add = taskqueue.add
taskqueue.add = sync_task_add
def tearDown(self):
GAETestCase.tearDown(self)
taskqueue.add = self._old_task_add
namespace_manager.set_namespace(self._old_ns)
def test_run_migrations(self):
# Roda as migracoes
try:
migrations.run_pending(my.migrations_pau_na_migration)
except MigrationException as e:
deupau = True
migration_exception = e
# self.assertTrue(deupau)
# self.assertEqual('Deu pau na migracao', migration_exception.cause.message)
count = 0
v1sum = 0
namespace_manager.set_namespace('ns1')
for qd in QueDoidura.query():
self.assertEqual(qd.v2, 2 * qd.v1)
self.assertIsNone(qd.v3)
count += 1
v1sum += qd.v1
self.assertEqual(3, count)
self.assertEqual(12, v1sum)
# E os DBMigrations estao ok
namespace_manager.set_namespace('')
dbmigrations = {dbm.name: dbm for dbm in DBMigration.query()}
self.assertEqual(2, len(dbmigrations))
self.assertTrue('DONE', dbmigrations['migration_paunamigration_0001'].status)
self.assertTrue('ERROR', dbmigrations['migration_paunamigration_0002'].status)
# E a migration 1 nao vai rodar de novo
dones = DBMigration.last_1000_names_done_or_running('my.migrations_pau_na_migration')
self.assertEqual(1, len(dones))
self.assertEqual('migration_paunamigration_0001', dones[0])
| 37.28436 | 93 | 0.657938 | 912 | 7,867 | 5.474781 | 0.134868 | 0.087122 | 0.068496 | 0.100941 | 0.80012 | 0.736431 | 0.736431 | 0.679551 | 0.659523 | 0.659523 | 0 | 0.030089 | 0.243803 | 7,867 | 210 | 94 | 37.461905 | 0.809212 | 0.052116 | 0 | 0.729412 | 0 | 0 | 0.05993 | 0.029293 | 0 | 0 | 0 | 0 | 0.252941 | 1 | 0.076471 | false | 0 | 0.058824 | 0 | 0.158824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
af9fa1b759f74fc795c22ec253437021e884ce77 | 815 | py | Python | app/admin/models.py | dwisulfahnur/personal-website | a884d20657c7b97d7a16e2083597bfd97c877f89 | [
"Apache-2.0"
] | null | null | null | app/admin/models.py | dwisulfahnur/personal-website | a884d20657c7b97d7a16e2083597bfd97c877f89 | [
"Apache-2.0"
] | null | null | null | app/admin/models.py | dwisulfahnur/personal-website | a884d20657c7b97d7a16e2083597bfd97c877f89 | [
"Apache-2.0"
] | null | null | null | from app.core.db import db
from werkzeug.security import generate_password_hash


class User(db.Model):
    __tablename__ = 'user'

    id = db.Column(db.Integer, primary_key=True)
    full_name = db.Column(db.String())
    email = db.Column(db.String())
    username = db.Column(db.String())
    password = db.Column(db.String())

    def __init__(self, full_name, email, username, password):
        self.full_name = full_name
        self.email = email
        self.username = username
        self.password = generate_password_hash(password)

    def is_authenticated(self):
        return True

    def is_active(self):
        return True

    def is_anonymous(self):
        return False

    def get_id(self):
        return self.id

    def __repr__(self):
        return '<User %s>' % self.username
| 24.69697 | 61 | 0.645399 | 106 | 815 | 4.726415 | 0.349057 | 0.07984 | 0.0998 | 0.127745 | 0.075848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.250307 | 815 | 32 | 62 | 25.46875 | 0.819967 | 0 | 0 | 0.083333 | 1 | 0 | 0.015951 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.166667 | 0.083333 | 0.208333 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
afa0db1e751cfc0af721b1d72a5de9c7954c0f4e | 390 | py | Python | tests/asp/weakConstraints/file.gringo.test.py | bernardocuteri/wasp | 05c8f961776dbdbf7afbf905ee00fc262eba51ad | [
"Apache-2.0"
] | 19 | 2015-12-03T08:53:45.000Z | 2022-03-31T02:09:43.000Z | tests/asp/weakConstraints/file.gringo.test.py | bernardocuteri/wasp | 05c8f961776dbdbf7afbf905ee00fc262eba51ad | [
"Apache-2.0"
] | 80 | 2017-11-25T07:57:32.000Z | 2018-06-10T19:03:30.000Z | tests/asp/weakConstraints/file.gringo.test.py | bernardocuteri/wasp | 05c8f961776dbdbf7afbf905ee00fc262eba51ad | [
"Apache-2.0"
] | 6 | 2015-01-15T07:51:48.000Z | 2020-06-18T14:47:48.000Z | input = """
8 2 2 3 0 0
8 2 8 9 0 0
8 2 10 11 0 0
8 2 12 13 0 0
8 2 20 21 0 0
8 2 22 23 0 0
8 2 24 25 0 0
1 1 2 0 24 22
1 1 2 0 24 8
1 1 2 0 24 6
1 1 2 0 24 4
1 1 2 0 24 2
1 1 2 0 22 20
1 1 2 0 22 12
1 1 2 0 22 10
1 1 2 0 22 8
1 1 2 0 22 2
1 1 2 0 20 12
1 1 2 0 20 10
6 0 7 0 25 23 21 13 11 9 3 1 1 1 1 1 1 1
0
2 a
8 d
10 e
12 f
20 j
22 k
24 l
0
B+
0
B-
1
0
1
"""
output = """
COST 3@1
"""
| 9.512195 | 40 | 0.535897 | 159 | 390 | 1.314465 | 0.188679 | 0.172249 | 0.172249 | 0.229665 | 0.416268 | 0.033493 | 0 | 0 | 0 | 0 | 0 | 0.818584 | 0.420513 | 390 | 40 | 41 | 9.75 | 0.106195 | 0 | 0 | 0.05 | 0 | 0 | 0.920513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
afa8813ca26d8eb48e05818b7b2954db8e8ee0cc | 39 | py | Python | src/base/tests/__init__.py | Kautenja/playing-mario-with-deep-reinforcement-learning | bf61b8babfd06b6e6c26eb3694b84e8c7ff4c076 | [
"MIT"
] | 57 | 2018-04-24T07:07:29.000Z | 2022-01-19T17:07:13.000Z | src/base/tests/__init__.py | Kautenja/playing-mario-with-deep-reinforcement-learning | bf61b8babfd06b6e6c26eb3694b84e8c7ff4c076 | [
"MIT"
] | 10 | 2018-06-07T14:29:19.000Z | 2019-07-29T13:48:03.000Z | src/base/tests/__init__.py | Kautenja/playing-mario-with-deep-reinforcement-learning | bf61b8babfd06b6e6c26eb3694b84e8c7ff4c076 | [
"MIT"
] | 11 | 2018-09-11T23:14:37.000Z | 2021-06-30T03:56:55.000Z | """Test cases for the base package."""
| 19.5 | 38 | 0.666667 | 6 | 39 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 39 | 1 | 39 | 39 | 0.787879 | 0.820513 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
afaf530a17ed8ca2ed433242f3da807177342ed5 | 3,998 | py | Python | tests/common/utils/conftest.py | peaudecastor/checkov | a4804b61c1b1390b7abd44ab53285fcbc3e7e80b | [
"Apache-2.0"
] | null | null | null | tests/common/utils/conftest.py | peaudecastor/checkov | a4804b61c1b1390b7abd44ab53285fcbc3e7e80b | [
"Apache-2.0"
] | null | null | null | tests/common/utils/conftest.py | peaudecastor/checkov | a4804b61c1b1390b7abd44ab53285fcbc3e7e80b | [
"Apache-2.0"
] | null | null | null | from typing import Dict, Any

import pytest

from checkov.common.bridgecrew.bc_source import SourceType
from checkov.common.bridgecrew.platform_integration import BcPlatformIntegration, bc_integration


@pytest.fixture()
def mock_bc_integration() -> BcPlatformIntegration:
    bc_integration.bc_api_key = "abcd1234-abcd-1234-abcd-1234abcd1234"
    bc_integration.setup_bridgecrew_credentials(
        repo_id="bridgecrewio/checkov",
        skip_fixes=True,
        skip_download=True,
        source=SourceType("Github", False),
        source_version="1.0",
        repo_branch="master",
    )
    return bc_integration


@pytest.fixture()
def scan_result_success_response() -> Dict[str, Any]:
    return {'outputType': 'Result',
'outputData': "H4sIAN22X2IC/8WY23LbOBKGX6VLN5tUWRQp"
"+SCrZi88drL2VqKkLMUzNZu5gEjIQkwSXAKUrU3l3fdvgDofnNS4MheJKbIJdAP"
"/193g10YpC22U1eWs0aNGy2ZFq5SmSq3B3/9WqpSZzK0J7JNtHFGjEMbIBKa2rKT7HT"
"+Ie2lw5z9fG3ZWSB6mmNmJztk+F5m7k3wR+b3mO1NZGoWHuBkFbT+mnTw"
"/+bcjOjTBOBXmYWP8MDh9sfHZXhprNqZoB+3TIPyBWf6EaayzIlUij"
"+WNMZVbvLxK07UnV8rYUo0q6yf62ohLZVUsUvwIYTlR95P6MpOJqrL6R6of6yurrbf+xi5XaS5LMVIpRplvl"
"+KNbFzevWm2w+i0eRp13XoZK2xl3KKqJ5mQyikKzoPuEf50g+iYbeKpYYvTIHLrEUNAfqzBoNcJwtbFXa"
"/furjsvWt9vMXVp5vebWvQu2zxnRv8u+j1eZhEGsRV1EE2LkttTBNylFTfz+/p1e"
"+DwWtaDWDGLtmJpESZTBlzOYG45K1MhZXJh9EXuPNRF1VB4yqPeWy2j3XOC9oSSabyFseo4vrHF1NfrA3x"
"TuuHqjDBF8OvXzkB00iOdSnrhThy6/K0vOkWSeQJLqNw9UEUlnFEIsXuGIIeNCIU1gIeSImsxgw8JYlypGw"
"pyhk9ylG9BqRLuh6+f0dTJcgvNXs01emUl6fKjRhLqgwwJD2mN6mTW6ByrBe/F7g9lRAtVo5XudbLkt7+Fq"
"T1g7stVlOVP/DPibWF6bVa+TQJckg1uNfTFu9RK5FWqLS1papSmYe3wnnP6mtcuPDJKV4+wbUesXZhWT/xoQ"
"INaR916dh+81SkWlmCfY3itTAElfLlexcWLSJl0lSGSLCjdSB+5l/+6bav4yyKapQqM5HJFXbeEc1Oh91meD"
"KMTnon570w/MNJVZlYY+jnLeHPQROXaTbYOzsOowPsRWEN38kKfGfByffC1wd8nwBfH/Bd74FvCKBiaF5JKgS"
"WC9qKdSL3iP+k1vmG/tnRx4nMoUd4D/IEOZwflZ3Qv7S+TyVd5CKdgT5zdIiI0YxrDWaBA1bmCYa7HNy+paKE"
"qac6kzHIRw6AMfbdunSxRMgHY7b071LnT1L/fF9fQv0rWr9GCN+v9OMDSo/CZtgZRt3nlb7DclPpmybbSj9vR"
"ufd4+M9Uod4uaCgqAbnLPgI6362oniXX7cVHx1Q/HWt+Osdit+UtZuOp19IuvaEpd5Z3nVuzrUr4lhXuSUrHi"
"SvWkAXZCplxSiFBEsxxqaQzKAKEkmCxsDQKzsRlpQhdAciZbFD5S6nuYRuZPn58z/Mxks8UEmxMBK9l8gNPMmE"
"wwBJ/1OuHKvgoWQZlOY1PeoqTbyXPP4cLscW6itqBOBkxqCvBAgCIDx7ALsY2pVXjB9DL86jeZwBvfqQ4xHaiH"
"sxr61snMtHDJJKOGg4NkyDOZKdMxhkBqwOTPjNUt4jdNabn2kt7uD1Jr6LRuinILwi2Jdg+LJ2fgnvCtj7OZ6js"
"I9jeNluRt1hdN5DnTnE8U7LdY63TbY4bkfNTqcddvZy3A7QHYHMIGp7nturbeMxQ/Usx9ebHPf3VK51jv1sR+vA"
"ekc8x+3VB3CUJlh/oIC6kluFvUnQsrNrMyaNm33c4+bLyyuoO8nA9YyJjk1AAyvGY9SjbMTVK3bkVUwq5D2U0Ad"
"W98rp6k6Bkyn/B/WDLsiEjRz9En0/07wsYWOVooDRRZIoDhYwz45IjekVR+IYUuPXvhWWY4EjBy2cIlvPaxDfVC"
"JikB1XxupM/Q/bhJIpChw2Yk9xAj2muqiLr0gN8J0gc7AHrnfmI8wRT5VTrm0N8JrnOKz4F/n22nt1Llr6IZ9w6p"
"RJQDdIXzAviVOEHx/bgvSxaw90ZY1K3BLxFPMAqdRwaP7Cror/EzveVTJeuOhvNri4dStjzuNrh6MDaaQGcU8Wge"
"shd6tcwA9nkd2Wa1lkh8l2N9BFxxCGpyf7jp0os+2g89db3v6eBoBb3o+ASkKiH0vtjmDuQwLVR3z61ecJ74jTs1"
"A5Z4vL3ziCHt1kaEiBDZRcQP93IlWJJ2rrxOpHdrU/RkH23wj4wTtR4uwmMtdHaJdHWPj+SAdQjOJeAqU14f4AeC"
"Yy5zwFU9TLqYplQMMJoPE1nrmWYl7opT8zoR2RLoldzPsA+D0FTYb+PfjQJzgt2BeVx7pkkghMa56wHns9HD+Fyx"
"VLqjf2LaD+h+GbHiLG65mYEcsoFQUtS2uI/e92vrdDX3zj2Ya1/tTzo9V9TYB7cN0B5ZUeHGzIl0Rvsr6fzVrr+y"
"p8l4+R7ZCLcic6WOF3Wq5X+G2TnZ263529x9Lw54L51uGzaNZDrhKoupKXEtWsR1UOrfufqwSxflFS3KLjnd5ueu"
"anz3q7neGie2cS8HcBin/BL8U8U/AL0XOSX+jt75P82r6+RIV6DoZDXW14oKMNz5rR2TA6fr6j3WG52dFumrjvsG"
"sp7dAH12j5wbWz+sG1vfOD6+m3b/8HQd/FwVgXAAA=",
'compressionMethod': 'gzip'} | 68.931034 | 109 | 0.753627 | 195 | 3,998 | 15.348718 | 0.810256 | 0.021717 | 0.01136 | 0.018042 | 0.019379 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12935 | 0.187844 | 3,998 | 58 | 110 | 68.931034 | 0.792424 | 0 | 0 | 0.038462 | 0 | 0 | 0.661665 | 0.64116 | 0 | 1 | 0 | 0 | 0 | 1 | 0.038462 | true | 0 | 0.076923 | 0.019231 | 0.153846 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
afb2cf274ec63a93d5b35e310f344e34085e2291 | 246 | py | Python | AlgoExpert/array/sortedSquaredArray/data.py | Muzque/Leetcode | d06365792c9ef48e0a290da00ba5e71f212554d5 | [
"MIT"
] | 1 | 2021-05-11T09:52:38.000Z | 2021-05-11T09:52:38.000Z | AlgoExpert/array/sortedSquaredArray/data.py | Muzque/Leetcode | d06365792c9ef48e0a290da00ba5e71f212554d5 | [
"MIT"
] | null | null | null | AlgoExpert/array/sortedSquaredArray/data.py | Muzque/Leetcode | d06365792c9ef48e0a290da00ba5e71f212554d5 | [
"MIT"
] | 1 | 2021-05-05T04:13:17.000Z | 2021-05-05T04:13:17.000Z | samples = [
    {
        "input": {
            "array": [1, 2, 3, 5, 6, 8, 9],
        },
        "output": [1, 4, 9, 25, 36, 64, 81],
    },
    {
        "input": {
            "array": [-2, -1],
        },
        "output": [1, 4],
    },
]
| 16.4 | 44 | 0.247967 | 25 | 246 | 2.44 | 0.64 | 0.327869 | 0.262295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180328 | 0.504065 | 246 | 14 | 45 | 17.571429 | 0.319672 | 0 | 0 | 0.142857 | 0 | 0 | 0.130081 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
bb6378dcc823660a95ac6a4ed033da9b6ce9a7ba | 121 | py | Python | tests/missing_data/test_missing_data_ozone_Interpolate_Constant.py | shaido987/pyaf | b9afd089557bed6b90b246d3712c481ae26a1957 | [
"BSD-3-Clause"
] | 377 | 2016-10-13T20:52:44.000Z | 2022-03-29T18:04:14.000Z | tests/missing_data/test_missing_data_ozone_Interpolate_Constant.py | ysdede/pyaf | b5541b8249d5a1cfdc01f27fdfd99b6580ed680b | [
"BSD-3-Clause"
] | 160 | 2016-10-13T16:11:53.000Z | 2022-03-28T04:21:34.000Z | tests/missing_data/test_missing_data_ozone_Interpolate_Constant.py | ysdede/pyaf | b5541b8249d5a1cfdc01f27fdfd99b6580ed680b | [
"BSD-3-Clause"
] | 63 | 2017-03-09T14:51:18.000Z | 2022-03-27T20:52:57.000Z | import tests.missing_data.test_missing_data_ozone_generic as gen
gen.test_ozone_missing_data('Interpolate', 'Constant')
| 30.25 | 64 | 0.859504 | 18 | 121 | 5.333333 | 0.611111 | 0.34375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057851 | 121 | 3 | 65 | 40.333333 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0.157025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
bb83ad7e139cee9e80067f870ea76fcedf5329f0 | 181 | py | Python | __init__.py | sparsh9012/mykit | fe252ca61ef591e4cd2ba76b25bd7712f01baa3b | [
"MIT"
] | 1 | 2019-04-17T18:26:00.000Z | 2019-04-17T18:26:00.000Z | __init__.py | sparsh9012/mykit | fe252ca61ef591e4cd2ba76b25bd7712f01baa3b | [
"MIT"
] | null | null | null | __init__.py | sparsh9012/mykit | fe252ca61ef591e4cd2ba76b25bd7712f01baa3b | [
"MIT"
] | null | null | null | __version__ = "0.0.4"
# py setup.py sdist bdist_wheel
# py -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*
# pip install -i https://test.pypi.org/simple/ mykit
| 30.166667 | 74 | 0.718232 | 31 | 181 | 4.032258 | 0.774194 | 0.144 | 0.208 | 0.256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018987 | 0.127072 | 181 | 5 | 75 | 36.2 | 0.772152 | 0.845304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
bb8ce1bb55ece6881c440a4a116b70c534f09904 | 70 | py | Python | liboptpy/constr_solvers/__init__.py | amkatrutsa/liboptpy | 8e89b3f5a16aaed759c3cd727639c927ed5741cf | [
"MIT"
] | 57 | 2018-08-17T12:58:07.000Z | 2022-03-22T16:18:28.000Z | liboptpy/constr_solvers/__init__.py | amkatrutsa/liboptpy | 8e89b3f5a16aaed759c3cd727639c927ed5741cf | [
"MIT"
] | 6 | 2018-05-13T10:00:15.000Z | 2021-04-04T12:08:02.000Z | liboptpy/constr_solvers/__init__.py | amkatrutsa/liboptpy | 8e89b3f5a16aaed759c3cd727639c927ed5741cf | [
"MIT"
] | 16 | 2019-01-12T07:15:29.000Z | 2022-03-22T11:52:29.000Z | from ._frank_wolfe import FrankWolfe
from ._proj_gd import ProjectedGD | 35 | 36 | 0.871429 | 10 | 70 | 5.7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 70 | 2 | 37 | 35 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
bb9c7363be053a4ef849ca88f25e4354f7724832 | 80 | py | Python | tempCodeRunnerFile.py | skeptycal/textcolors | 4137bc5f37406ac9831bf0008b9b3dda229b118e | [
"ISC"
] | null | null | null | tempCodeRunnerFile.py | skeptycal/textcolors | 4137bc5f37406ac9831bf0008b9b3dda229b118e | [
"ISC"
] | null | null | null | tempCodeRunnerFile.py | skeptycal/textcolors | 4137bc5f37406ac9831bf0008b9b3dda229b118e | [
"ISC"
] | null | null | null | # run test code
print(' ', test)
exec(test) | 26.666667 | 29 | 0.35 | 7 | 80 | 4 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.55 | 80 | 3 | 30 | 26.666667 | 0.777778 | 0.1625 | 0 | 0 | 0 | 0 | 0.030303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
bbe7bb72156e18123d47e5ef234f9cb433ac49ff | 56 | py | Python | bencode/__init__.py | hexlab/bencode | 10b506ff82de07a7baf154b9b1d78aac7903af5f | [
"MIT"
] | null | null | null | bencode/__init__.py | hexlab/bencode | 10b506ff82de07a7baf154b9b1d78aac7903af5f | [
"MIT"
] | null | null | null | bencode/__init__.py | hexlab/bencode | 10b506ff82de07a7baf154b9b1d78aac7903af5f | [
"MIT"
] | null | null | null | from .decode import bdecode
from .encode import bencode
| 18.666667 | 27 | 0.821429 | 8 | 56 | 5.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 2 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a52c9e342de979289628a29fabe55eac335108b8 | 107 | py | Python | exercises/chapter_02/exercise_02_02/exercise_02_02.py | HenrikSamuelsson/python-crash-course | 0550343d413e4636f402a66041860bc1a319fc8f | [
"MIT"
] | 1 | 2017-04-30T18:05:26.000Z | 2017-04-30T18:05:26.000Z | exercises/chapter_02/exercise_02_02/exercise_02_02.py | HenrikSamuelsson/python-crash-course | 0550343d413e4636f402a66041860bc1a319fc8f | [
"MIT"
] | null | null | null | exercises/chapter_02/exercise_02_02/exercise_02_02.py | HenrikSamuelsson/python-crash-course | 0550343d413e4636f402a66041860bc1a319fc8f | [
"MIT"
] | null | null | null | # 2-2 Simple Messages
message = "My message"
print(message)
message = "My updated message"
print(message)
| 15.285714 | 30 | 0.738318 | 15 | 107 | 5.266667 | 0.533333 | 0.227848 | 0.481013 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 0.149533 | 107 | 6 | 31 | 17.833333 | 0.846154 | 0.17757 | 0 | 0.5 | 0 | 0 | 0.325581 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
a5813554a5b6ef3a995b0e89eecbc999352553f4 | 180 | py | Python | src/py-devtools-builtin/__devtools_builtin.py | Cielquan/py-devtools-builtin | 181a90fcf269e4e0119f4953df42a575a8310a14 | [
"Unlicense"
] | null | null | null | src/py-devtools-builtin/__devtools_builtin.py | Cielquan/py-devtools-builtin | 181a90fcf269e4e0119f4953df42a575a8310a14 | [
"Unlicense"
] | null | null | null | src/py-devtools-builtin/__devtools_builtin.py | Cielquan/py-devtools-builtin | 181a90fcf269e4e0119f4953df42a575a8310a14 | [
"Unlicense"
] | null | null | null | # Import devtools if installed and add to builtins
from importlib.util import find_spec

if find_spec('devtools'):
    import devtools
    __builtins__.update(debug=devtools.debug)
| 30 | 50 | 0.788889 | 25 | 180 | 5.44 | 0.6 | 0.205882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144444 | 180 | 5 | 51 | 36 | 0.883117 | 0.266667 | 0 | 0 | 0 | 0 | 0.061538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
a5985c2bf9e1a5444eed817c47a439c3d36ce357 | 36 | py | Python | TDD_Test_Driven_Development/Carrinho_compras/excecoes.py | alaanlimaa/QA_tests | 2e6c2b982bf4f09c7027a4bbde4aa0adac63a7cf | [
"MIT"
] | null | null | null | TDD_Test_Driven_Development/Carrinho_compras/excecoes.py | alaanlimaa/QA_tests | 2e6c2b982bf4f09c7027a4bbde4aa0adac63a7cf | [
"MIT"
] | null | null | null | TDD_Test_Driven_Development/Carrinho_compras/excecoes.py | alaanlimaa/QA_tests | 2e6c2b982bf4f09c7027a4bbde4aa0adac63a7cf | [
"MIT"
] | null | null | null | class ValorZero(Exception):
    pass | 18 | 27 | 0.75 | 4 | 36 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 36 | 2 | 28 | 18 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
a59fcab7983c7c2e91f5d7425176f545f9185532 | 314 | py | Python | python__fundamentals/functions_exercise/02.add and_subtract.py | EmilianStoyanov/Projects-in-SoftUni | e83996670fe00424a158905d537a7bbbeee8fb59 | [
"MIT"
] | 1 | 2020-07-14T12:32:47.000Z | 2020-07-14T12:32:47.000Z | python__fundamentals/functions_exercise/02.add and_subtract.py | EmilianStoyanov/Projects-in-SoftUni | e83996670fe00424a158905d537a7bbbeee8fb59 | [
"MIT"
] | null | null | null | python__fundamentals/functions_exercise/02.add and_subtract.py | EmilianStoyanov/Projects-in-SoftUni | e83996670fe00424a158905d537a7bbbeee8fb59 | [
"MIT"
] | null | null | null | num1 = int(input())
num2 = int(input())
num3 = int(input())


def sum_numbers(num1, num2):
    return num1 + num2


def subtract(num1, num2):
    return num1 - num2


def add_and_subtract(num1, num2, num3):
    result = sum_numbers(num1, num2)
    print(subtract(result, num3))
add_and_subtract(num1, num2, num3) | 16.526316 | 39 | 0.675159 | 46 | 314 | 4.478261 | 0.304348 | 0.271845 | 0.23301 | 0.174757 | 0.495146 | 0.495146 | 0 | 0 | 0 | 0 | 0 | 0.078431 | 0.187898 | 314 | 19 | 40 | 16.526316 | 0.729412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0 | 0.181818 | 0.454545 | 0.090909 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
a5a5a6dda0e028391550039425f495cfbd60e4f7 | 123 | py | Python | scripts/cloud/run_slave_loop.py | sfpd/rlreloaded | 650c64ec22ad45996c8c577d85b1a4f20aa1c692 | [
"MIT"
] | null | null | null | scripts/cloud/run_slave_loop.py | sfpd/rlreloaded | 650c64ec22ad45996c8c577d85b1a4f20aa1c692 | [
"MIT"
] | null | null | null | scripts/cloud/run_slave_loop.py | sfpd/rlreloaded | 650c64ec22ad45996c8c577d85b1a4f20aa1c692 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import sys
from control4.cloud.slave_loop import slave_loop
print(sys.argv[1])
slave_loop(sys.argv[1]) | 24.6 | 48 | 0.796748 | 23 | 123 | 4.130435 | 0.608696 | 0.284211 | 0.168421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026549 | 0.081301 | 123 | 5 | 49 | 24.6 | 0.814159 | 0.162602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.25 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
a5a707c77bf297bfef0852da7bc1acacf2a73642 | 143 | py | Python | src/rl/tf/advantages/__init__.py | djjh/reinforcement-learning-labs | 22706dab9e7f16e364ee4ed79c0bd67a343e5b08 | [
"MIT"
] | 1 | 2019-10-06T11:45:52.000Z | 2019-10-06T11:45:52.000Z | src/rl/tf/advantages/__init__.py | djjh/reinforcement-learning-labs | 22706dab9e7f16e364ee4ed79c0bd67a343e5b08 | [
"MIT"
] | null | null | null | src/rl/tf/advantages/__init__.py | djjh/reinforcement-learning-labs | 22706dab9e7f16e364ee4ed79c0bd67a343e5b08 | [
"MIT"
] | null | null | null | from .advantage_function import AdvantageFunction
from .cumulative import Cumulative
from .gae import Gae
from .reward_to_go import RewardToGo
| 28.6 | 49 | 0.86014 | 19 | 143 | 6.315789 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111888 | 143 | 4 | 50 | 35.75 | 0.944882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3c086d4718302c7185dd48c27076ddc8422f530a | 139 | py | Python | first/app/infra/external/interfaces/climate_api.py | OscarSilvaOfficial/Ifood-Challenges | d97290b26ca4dec62e92823fe2c6e27a9e4c8248 | [
"MIT"
] | null | null | null | first/app/infra/external/interfaces/climate_api.py | OscarSilvaOfficial/Ifood-Challenges | d97290b26ca4dec62e92823fe2c6e27a9e4c8248 | [
"MIT"
] | null | null | null | first/app/infra/external/interfaces/climate_api.py | OscarSilvaOfficial/Ifood-Challenges | d97290b26ca4dec62e92823fe2c6e27a9e4c8248 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod


class ClimateAPI(ABC):
    @abstractmethod
    def get_city_temperature(self, city, lat, lon):
        pass | 19.857143 | 49 | 0.741007 | 18 | 139 | 5.611111 | 0.777778 | 0.336634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179856 | 139 | 7 | 50 | 19.857143 | 0.885965 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
3c0b0a436078be79f60b31335bd33b37cd3a83d8 | 3,079 | py | Python | tests/storage/test_memory.py | djmattyg007/freiner | 4acff72c55c37495862ea642a70b443da1278894 | [
"MIT"
] | null | null | null | tests/storage/test_memory.py | djmattyg007/freiner | 4acff72c55c37495862ea642a70b443da1278894 | [
"MIT"
] | null | null | null | tests/storage/test_memory.py | djmattyg007/freiner | 4acff72c55c37495862ea642a70b443da1278894 | [
"MIT"
] | null | null | null | import time

import pytest

from freiner import RateLimitItemPerMinute, RateLimitItemPerSecond
from freiner.storage.memory import MemoryStorage
from freiner.strategies.fixed_window import FixedWindowRateLimiter
from freiner.strategies.moving_window import MovingWindowRateLimiter

from ..util import freeze_time


@pytest.fixture
def storage() -> MemoryStorage:
    memory_storage = MemoryStorage()
    assert memory_storage.check() is True
    return memory_storage


def test_in_memory(storage: MemoryStorage):
    limiter = FixedWindowRateLimiter(storage)
    with freeze_time() as frozen_datetime:
        per_min = RateLimitItemPerMinute(10)
        for _ in range(0, 10):
            assert limiter.hit(per_min) is True
        assert limiter.hit(per_min) is False
        frozen_datetime.tick(60)
        assert limiter.hit(per_min) is True


def test_fixed_window_clear(storage: MemoryStorage):
    limiter = FixedWindowRateLimiter(storage)
    with freeze_time():
        per_min = RateLimitItemPerMinute(1)
        assert limiter.hit(per_min) is True
        assert limiter.hit(per_min) is False
        limiter.clear(per_min)
        assert limiter.hit(per_min) is True


def test_moving_window_clear(storage: MemoryStorage):
    limiter = MovingWindowRateLimiter(storage)
    with freeze_time():
        per_min = RateLimitItemPerMinute(1)
        assert limiter.hit(per_min) is True
        assert limiter.hit(per_min) is False
        limiter.clear(per_min)
        assert limiter.hit(per_min) is True


def test_reset(storage: MemoryStorage):
    limiter = FixedWindowRateLimiter(storage)
    with freeze_time():
        per_min = RateLimitItemPerMinute(10)
        for _ in range(0, 10):
            assert limiter.hit(per_min) is True
        assert limiter.hit(per_min) is False
        storage.reset()
        for _ in range(0, 10):
            assert limiter.hit(per_min) is True
        assert limiter.hit(per_min) is False


def test_expiry_fixed_window(storage: MemoryStorage):
    limiter = FixedWindowRateLimiter(storage)
    with freeze_time() as frozen_datetime:
        per_min = RateLimitItemPerMinute(10)
        per_sec = RateLimitItemPerSecond(1)
        for _ in range(0, 10):
            assert limiter.hit(per_min) is True
        assert limiter.hit(per_min) is False
        frozen_datetime.tick(60)
        # touch another key and yield
        assert limiter.hit(per_sec) is True
        time.sleep(0.02)
        assert per_min.key_for() not in storage.storage


def test_expiry_moving_window(storage: MemoryStorage):
    limiter = MovingWindowRateLimiter(storage)
    with freeze_time() as frozen_datetime:
        per_min = RateLimitItemPerMinute(10)
        per_sec = RateLimitItemPerSecond(1)
        for _ in range(0, 2):
            for _ in range(0, 10):
                assert limiter.hit(per_min) is True
            frozen_datetime.tick(60)
        # touch another key and yield
        assert limiter.hit(per_sec) is True
        time.sleep(0.02)
        assert storage.events[per_min.key_for()] == []
| 29.605769 | 68 | 0.687236 | 379 | 3,079 | 5.395778 | 0.150396 | 0.076284 | 0.140831 | 0.167237 | 0.739853 | 0.729095 | 0.729095 | 0.729095 | 0.680196 | 0.663081 | 0 | 0.017506 | 0.239363 | 3,079 | 103 | 69 | 29.893204 | 0.855679 | 0.017863 | 0 | 0.694444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 1 | 0.097222 | false | 0 | 0.097222 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
3c67ef28076316f02c44f2ee042d2dbaa7e1f836 | 1,345 | py | Python | test/test_account_api.py | brainbeanapps/payoneer-mobile-api-python | bd26f3f6219ba6e3df36b86f7c6b6f83abb879c3 | [
"MIT"
] | null | null | null | test/test_account_api.py | brainbeanapps/payoneer-mobile-api-python | bd26f3f6219ba6e3df36b86f7c6b6f83abb879c3 | [
"MIT"
] | null | null | null | test/test_account_api.py | brainbeanapps/payoneer-mobile-api-python | bd26f3f6219ba6e3df36b86f7c6b6f83abb879c3 | [
"MIT"
] | null | null | null | # coding: utf-8

"""
    Payoneer Mobile API

    Swagger specification for https://mobileapi.payoneer.com

    OpenAPI spec version: 0.9.0
    Generated by: https://github.com/swagger-api/swagger-codegen.git
"""

from __future__ import absolute_import

import os
import sys
import unittest

import payoneer_mobile_api
from payoneer_mobile_api.rest import ApiException
from payoneer_mobile_api.apis.account_api import AccountApi


class TestAccountApi(unittest.TestCase):
    """ AccountApi unit test stubs """

    def setUp(self):
        self.api = payoneer_mobile_api.apis.account_api.AccountApi()

    def tearDown(self):
        pass

    def test_account_account_post(self):
        """
        Test case for account_account_post

        Account Details
        """
        pass

    def test_account_billing_address_post(self):
        """
        Test case for account_billing_address_post

        Billing Address
        """
        pass

    def test_account_currencies_get_post(self):
        """
        Test case for account_currencies_get_post

        Currencies
        """
        pass

    def test_account_payment_method_details_post(self):
        """
        Test case for account_payment_method_details_post

        Payment Method Details
        """
        pass


if __name__ == '__main__':
    unittest.main()
| 19.492754 | 68 | 0.664684 | 157 | 1,345 | 5.369427 | 0.363057 | 0.083037 | 0.10083 | 0.085409 | 0.257414 | 0.196916 | 0 | 0 | 0 | 0 | 0 | 0.004036 | 0.263197 | 1,345 | 68 | 69 | 19.779412 | 0.84662 | 0.33829 | 0 | 0.227273 | 1 | 0 | 0.010929 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.227273 | 0.318182 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
3c896dff53c3aaa488bd336a48179b36caa719be | 160 | py | Python | src/utils/elmo/data/tokenizers/__init__.py | nicolas-ivanov/MimicAndRephrase | 446674e1e6af133618e0e9888c3650c0ce9012e4 | [
"MIT"
] | 12 | 2019-06-17T19:41:35.000Z | 2022-02-17T19:51:45.000Z | src/utils/elmo/data/tokenizers/__init__.py | nicolas-ivanov/MimicAndRephrase | 446674e1e6af133618e0e9888c3650c0ce9012e4 | [
"MIT"
] | 1 | 2021-02-23T15:28:32.000Z | 2021-02-23T15:28:32.000Z | src/utils/elmo/data/tokenizers/__init__.py | isabella232/MimicAndRephrase | bd29a995b211cb4f7933fa990b0bba1564c22450 | [
"MIT"
] | 3 | 2020-09-07T16:44:11.000Z | 2020-11-14T19:00:05.000Z | """
This module contains various classes for performing
tokenization, stemming, and filtering.
"""
from elmo.data.tokenizers.tokenizer import Token, Tokenizer
| 22.857143 | 59 | 0.8 | 19 | 160 | 6.736842 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11875 | 160 | 6 | 60 | 26.666667 | 0.907801 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3c94b5180770a89d3c116208789382530dffa65d | 118 | py | Python | pllm/__init__.py | sorki/pllm | 3a933ecfa75696c698739c169f2bbb757d8b9ea6 | [
"BSD-3-Clause"
] | 4 | 2015-02-08T08:54:57.000Z | 2021-08-06T09:00:08.000Z | pllm/__init__.py | sorki/pllm | 3a933ecfa75696c698739c169f2bbb757d8b9ea6 | [
"BSD-3-Clause"
] | 2 | 2015-02-10T09:16:31.000Z | 2020-01-31T16:30:36.000Z | pllm/__init__.py | sorki/pllm | 3a933ecfa75696c698739c169f2bbb757d8b9ea6 | [
"BSD-3-Clause"
] | null | null | null | import backends
import config
import domains
import interpret
import manhole
import monitor
import util
import vision
| 13.111111 | 16 | 0.864407 | 16 | 118 | 6.375 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135593 | 118 | 8 | 17 | 14.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b1b912a7920f1b459f7de417da4b3fbab147fe2b | 177 | py | Python | ldapdb/models/__init__.py | kishorkunal-raj/django-ldapdb | 7e14b23a669a96c97ac9206e8d665da1e121594a | [
"BSD-2-Clause"
] | 141 | 2016-04-15T15:29:02.000Z | 2022-03-24T10:42:38.000Z | ldapdb/models/__init__.py | kishorkunal-raj/django-ldapdb | 7e14b23a669a96c97ac9206e8d665da1e121594a | [
"BSD-2-Clause"
] | 134 | 2016-03-09T17:42:48.000Z | 2022-01-20T23:02:40.000Z | ldapdb/models/__init__.py | kishorkunal-raj/django-ldapdb | 7e14b23a669a96c97ac9206e8d665da1e121594a | [
"BSD-2-Clause"
] | 72 | 2016-03-14T19:35:41.000Z | 2021-11-12T03:28:45.000Z | # -*- coding: utf-8 -*-
# This software is distributed under the two-clause BSD license.
# Copyright (c) The django-ldapdb project
from ldapdb.models.base import Model # noqa
| 29.5 | 64 | 0.728814 | 26 | 177 | 4.961538 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006757 | 0.163842 | 177 | 5 | 65 | 35.4 | 0.864865 | 0.728814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
590906f6b9185140c86091ec70311866d67d4225 | 138 | py | Python | devdeck/filters.py | evroon/devdeck | 8172f9c1cc2bce8bbe341975cec262b27d11bfea | [
"BSD-3-Clause"
] | 63 | 2021-02-02T15:53:36.000Z | 2022-03-29T16:24:39.000Z | devdeck/filters.py | evroon/devdeck | 8172f9c1cc2bce8bbe341975cec262b27d11bfea | [
"BSD-3-Clause"
] | 18 | 2021-02-02T16:18:58.000Z | 2022-03-31T17:12:24.000Z | devdeck/filters.py | evroon/devdeck | 8172f9c1cc2bce8bbe341975cec262b27d11bfea | [
"BSD-3-Clause"
] | 8 | 2021-08-01T21:26:55.000Z | 2022-01-21T21:16:21.000Z | import logging
class InfoFilter(logging.Filter):
def filter(self, rec):
return rec.levelno in (logging.DEBUG, logging.INFO)
| 19.714286 | 59 | 0.710145 | 18 | 138 | 5.444444 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188406 | 138 | 6 | 60 | 23 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
5925db0a3ec406417d95bccd155432804e6d2e22 | 82 | py | Python | program_synthesis/tools/__init__.py | sunblaze-ucb/SED | ab7772927aae81e4ea8bd92980d7b64281467bd8 | [
"MIT"
] | 6 | 2020-10-19T05:22:00.000Z | 2022-02-26T01:27:54.000Z | program_synthesis/tools/__init__.py | sunblaze-ucb/SED | ab7772927aae81e4ea8bd92980d7b64281467bd8 | [
"MIT"
] | 1 | 2021-11-24T04:51:43.000Z | 2021-11-24T05:04:13.000Z | program_synthesis/tools/__init__.py | sunblaze-ucb/SED | ab7772927aae81e4ea8bd92980d7b64281467bd8 | [
"MIT"
] | 2 | 2020-12-08T06:48:55.000Z | 2020-12-28T00:22:21.000Z |
from .reporter import Reporter
from .saver import Saver, save_args, restore_args
| 20.5 | 49 | 0.817073 | 12 | 82 | 5.416667 | 0.583333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134146 | 82 | 3 | 50 | 27.333333 | 0.915493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3ccdf549310d1c10291d371e3807c060ab2fe1c2 | 2,130 | py | Python | bindings/python/benchmark.py | wangjia3015/marisa-trie | da2924831c1e8f90dae7223cfe7a2bc1bd8b5132 | [
"BSD-2-Clause"
] | 388 | 2016-01-28T15:16:43.000Z | 2022-03-28T08:18:07.000Z | bindings/python/benchmark.py | wangjia3015/marisa-trie | da2924831c1e8f90dae7223cfe7a2bc1bd8b5132 | [
"BSD-2-Clause"
] | 38 | 2016-02-12T14:51:12.000Z | 2022-02-12T09:10:25.000Z | bindings/python/benchmark.py | wangjia3015/marisa-trie | da2924831c1e8f90dae7223cfe7a2bc1bd8b5132 | [
"BSD-2-Clause"
] | 79 | 2016-03-16T15:47:50.000Z | 2022-03-15T22:21:08.000Z | import datetime
import marisa
import sys
time_begin = datetime.datetime.now()
keys = []
for line in sys.stdin:
keys.append(line.rstrip())
time_end = datetime.datetime.now()
print "input:", time_end - time_begin
time_begin = datetime.datetime.now()
dic = dict()
for i in range(len(keys)):
dic[keys[i]] = i
time_end = datetime.datetime.now()
print "dict_build:", time_end - time_begin
time_begin = datetime.datetime.now()
for key in keys:
dic.get(key)
time_end = datetime.datetime.now()
print "dict_lookup:", time_end - time_begin
time_begin = datetime.datetime.now()
keyset = marisa.Keyset()
for key in keys:
keyset.push_back(key)
time_end = datetime.datetime.now()
print "keyset_build:", time_end - time_begin
time_begin = datetime.datetime.now()
trie = marisa.Trie()
trie.build(keyset)
time_end = datetime.datetime.now()
print "trie_build:", time_end - time_begin
time_begin = datetime.datetime.now()
agent = marisa.Agent()
for key in keys:
agent.set_query(key)
trie.lookup(agent)
agent.key_id()
time_end = datetime.datetime.now()
print "trie_agent_lookup:", time_end - time_begin
time_begin = datetime.datetime.now()
for key in keys:
trie.lookup(key)
time_end = datetime.datetime.now()
print "trie_lookup:", time_end - time_begin
time_begin = datetime.datetime.now()
for i in range(len(keys)):
agent.set_query(i)
trie.reverse_lookup(agent)
agent.key_str()
time_end = datetime.datetime.now()
print "trie_agent_reverse_lookup:", time_end - time_begin
time_begin = datetime.datetime.now()
for i in range(len(keys)):
trie.reverse_lookup(i)
time_end = datetime.datetime.now()
print "trie_reverse_lookup:", time_end - time_begin
time_begin = datetime.datetime.now()
for key in keys:
agent.set_query(key)
while trie.common_prefix_search(agent):
agent.key_str()
time_end = datetime.datetime.now()
print "trie_agent_common_prefix_search:", time_end - time_begin
time_begin = datetime.datetime.now()
for key in keys:
agent.set_query(key)
while trie.predictive_search(agent):
agent.key_str()
time_end = datetime.datetime.now()
print "trie_agent_predictive_search:", time_end - time_begin
| 25.97561 | 63 | 0.753052 | 331 | 2,130 | 4.613293 | 0.123867 | 0.129666 | 0.273739 | 0.180092 | 0.787819 | 0.751146 | 0.719057 | 0.576948 | 0.532417 | 0.503602 | 0 | 0 | 0.120188 | 2,130 | 81 | 64 | 26.296296 | 0.814835 | 0 | 0 | 0.528571 | 0 | 0 | 0.089202 | 0.040845 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042857 | null | null | 0.157143 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a71d4168a00acb6469047cb69b9b38a173af1168 | 119 | py | Python | demo01.py | 2951121599/utils | 382fe05d744a4b549d4677bc5700dafe7acb5d94 | [
"MIT"
] | null | null | null | demo01.py | 2951121599/utils | 382fe05d744a4b549d4677bc5700dafe7acb5d94 | [
"MIT"
] | null | null | null | demo01.py | 2951121599/utils | 382fe05d744a4b549d4677bc5700dafe7acb5d94 | [
"MIT"
] | null | null | null | # -*-coding:utf-8-*-
# Author: 29511
# Filename: demo01.py
# Date/time: 2022/3/17, 0:29
print(123)
for i in range(4):
print(i)
| 14.875 | 21 | 0.571429 | 23 | 119 | 2.956522 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231579 | 0.201681 | 119 | 7 | 22 | 17 | 0.484211 | 0.563025 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
a72babd86e2193d24908d753b84ec87a0268cd8e | 208 | py | Python | qcfractal/queue/__init__.py | ChayaSt/QCFractal | 2d3c737b0e755d6e5bac743a0beb0714b5a92d0b | [
"BSD-3-Clause"
] | null | null | null | qcfractal/queue/__init__.py | ChayaSt/QCFractal | 2d3c737b0e755d6e5bac743a0beb0714b5a92d0b | [
"BSD-3-Clause"
] | null | null | null | qcfractal/queue/__init__.py | ChayaSt/QCFractal | 2d3c737b0e755d6e5bac743a0beb0714b5a92d0b | [
"BSD-3-Clause"
] | null | null | null | """
Initializer for the queue_handler folder
"""
from .adapters import build_queue_adapter
from .handlers import TaskQueueHandler, ServiceQueueHandler, QueueManagerHandler
from .managers import QueueManager
| 26 | 80 | 0.841346 | 22 | 208 | 7.818182 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105769 | 208 | 7 | 81 | 29.714286 | 0.924731 | 0.192308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a72db82da91e8fe16703bf8afea6344474d791e1 | 296 | py | Python | presto_types_parser/src/complex_column_process/array_processor.py | ofekby/presto-types-parser | b1f2d8b686c1a0b2eeec71b8545e1693a92965e7 | [
"Apache-2.0"
] | null | null | null | presto_types_parser/src/complex_column_process/array_processor.py | ofekby/presto-types-parser | b1f2d8b686c1a0b2eeec71b8545e1693a92965e7 | [
"Apache-2.0"
] | null | null | null | presto_types_parser/src/complex_column_process/array_processor.py | ofekby/presto-types-parser | b1f2d8b686c1a0b2eeec71b8545e1693a92965e7 | [
"Apache-2.0"
] | null | null | null | def new_process_raw_cell_function(process_inner_raw_cell):
def process_raw_cell(raw_cell):
if raw_cell is None:
return None
return list(
process_inner_raw_cell(value)
for value in raw_cell
)
return process_raw_cell
| 24.666667 | 59 | 0.621622 | 39 | 296 | 4.282051 | 0.384615 | 0.335329 | 0.251497 | 0.227545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.341216 | 296 | 11 | 60 | 26.909091 | 0.85641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
a739b20356bfc0a452eb606545f0ee6e8f15cf67 | 136 | py | Python | redirects/admin.py | simonw/simonwillisonblog | fd3170cd5473dbd6520d4e48c57923d2b894972d | [
"Apache-2.0"
] | 45 | 2017-10-01T23:21:22.000Z | 2022-03-31T08:20:46.000Z | redirects/admin.py | simonw/simonwillisonblog | fd3170cd5473dbd6520d4e48c57923d2b894972d | [
"Apache-2.0"
] | 109 | 2017-10-05T06:40:00.000Z | 2022-03-31T13:13:44.000Z | redirects/admin.py | simonw/simonwillisonblog | fd3170cd5473dbd6520d4e48c57923d2b894972d | [
"Apache-2.0"
] | 11 | 2017-10-17T15:16:26.000Z | 2022-02-20T07:22:32.000Z | from django.contrib import admin
from .models import Redirect
admin.site.register(Redirect, list_display=("domain", "path", "target"))
| 27.2 | 72 | 0.772059 | 18 | 136 | 5.777778 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095588 | 136 | 4 | 73 | 34 | 0.845528 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
599176c65b7d12ee32639d7c607875a75603742d | 89 | py | Python | src/helpers/__init__.py | mikerako/iot-farm | 7e979d775d0d7fb92db13c3c8513370671ad0562 | [
"MIT"
] | 1 | 2020-07-09T01:39:31.000Z | 2020-07-09T01:39:31.000Z | src/helpers/__init__.py | mikerako/iot-farm | 7e979d775d0d7fb92db13c3c8513370671ad0562 | [
"MIT"
] | null | null | null | src/helpers/__init__.py | mikerako/iot-farm | 7e979d775d0d7fb92db13c3c8513370671ad0562 | [
"MIT"
] | null | null | null | '''
Init file for helpers module.
Author: Kevin Kraydich <kevin.kraydich@gmail.com>
'''
| 14.833333 | 49 | 0.719101 | 12 | 89 | 5.333333 | 0.833333 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134831 | 89 | 5 | 50 | 17.8 | 0.831169 | 0.898876 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
59df7c55670988f2668580c2f8b32357c412147e | 30 | py | Python | leak_detect/__init__.py | abhayspawar/leak-detect | bbd0aea5755d52c5cac4f2a21a0a50c766090041 | [
"MIT"
] | 8 | 2020-07-22T06:32:19.000Z | 2022-02-21T16:17:49.000Z | leak_detect/__init__.py | abhayspawar/leak-detect | bbd0aea5755d52c5cac4f2a21a0a50c766090041 | [
"MIT"
] | null | null | null | leak_detect/__init__.py | abhayspawar/leak-detect | bbd0aea5755d52c5cac4f2a21a0a50c766090041 | [
"MIT"
] | 5 | 2020-10-05T04:57:07.000Z | 2022-03-27T17:14:28.000Z | from leak_detect.base import * | 30 | 30 | 0.833333 | 5 | 30 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f9ee0b898d454251ae965f476c6c725faa3738d0 | 88 | py | Python | proxySTAR_V3/certbot/venv/lib/python2.7/site-packages/pylint/test/functional/missing_final_newline.py | mami-project/lurk | 98c293251e9b1e9c9a4b02789486c5ddaf46ba3c | [
"Apache-2.0"
] | 2 | 2017-07-05T09:57:33.000Z | 2017-11-14T23:05:53.000Z | Libraries/Python/pylint/v1.4.4/pylint/test/functional/missing_final_newline.py | davidbrownell/Common_Environment | 4015872aeac8d5da30a6aa7940e1035a6aa6a75d | [
"BSL-1.0"
] | 1 | 2019-01-17T14:26:22.000Z | 2019-01-17T22:56:26.000Z | Libraries/Python/pylint/v1.4.4/pylint/test/functional/missing_final_newline.py | davidbrownell/Common_Environment | 4015872aeac8d5da30a6aa7940e1035a6aa6a75d | [
"BSL-1.0"
] | 1 | 2017-08-31T14:33:03.000Z | 2017-08-31T14:33:03.000Z | """This file does not have a final newline."""
# +1:[missing-final-newline]
print(1) | 22 | 47 | 0.659091 | 14 | 88 | 4.142857 | 0.785714 | 0.413793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.159091 | 88 | 4 | 48 | 22 | 0.756757 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
e60fae08807dcd88b962b865821a04518252de9b | 144 | py | Python | python/pyarmnn/src/pyarmnn/_utilities/__init__.py | PetervdPerk-NXP/pyarmnn-release | 2008c270f7c7c84a930842c845138628c8b95713 | [
"MIT"
] | null | null | null | python/pyarmnn/src/pyarmnn/_utilities/__init__.py | PetervdPerk-NXP/pyarmnn-release | 2008c270f7c7c84a930842c845138628c8b95713 | [
"MIT"
] | null | null | null | python/pyarmnn/src/pyarmnn/_utilities/__init__.py | PetervdPerk-NXP/pyarmnn-release | 2008c270f7c7c84a930842c845138628c8b95713 | [
"MIT"
] | null | null | null | # Copyright © 2019 Arm Ltd. All rights reserved.
# SPDX-License-Identifier: MIT
from .profiling_helper import ProfilerData, get_profiling_data
| 28.8 | 62 | 0.805556 | 20 | 144 | 5.7 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031746 | 0.125 | 144 | 4 | 63 | 36 | 0.865079 | 0.520833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
052a55058c0b1cd530e998dc9b33f913d2f6ae05 | 1,373 | py | Python | llvm-projects/iree-compiler-api/build_tools/smoketest.py | CyberFlameGO/iree | 7a9ce1a089d7c4d7179e938757736b19ed84697d | [
"Apache-2.0"
] | null | null | null | llvm-projects/iree-compiler-api/build_tools/smoketest.py | CyberFlameGO/iree | 7a9ce1a089d7c4d7179e938757736b19ed84697d | [
"Apache-2.0"
] | null | null | null | llvm-projects/iree-compiler-api/build_tools/smoketest.py | CyberFlameGO/iree | 7a9ce1a089d7c4d7179e938757736b19ed84697d | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 The IREE Authors
#
# Licensed under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
from iree.compiler import ir
from iree.compiler.dialects import chlo
from iree.compiler.dialects import mhlo
from iree.compiler.dialects import iree_public
from iree.compiler.dialects import builtin
from iree.compiler.dialects import std
from iree.compiler.dialects import linalg
try:
from iree.compiler.dialects import linalg
except ImportError as e:
print("KNOWN ISSUE: Linalg has an absolute path dependency issue:", e)
from iree.compiler.dialects import math
from iree.compiler.dialects import memref
from iree.compiler.dialects import shape
from iree.compiler.dialects import tensor
from iree.compiler.dialects import tosa
from iree.compiler.dialects import vector
with ir.Context() as ctx:
try:
chlo.register_chlo_dialect(ctx)
except ImportError as e:
print(
"KNOWN ISSUE: For hidden visibility builds extensions need "
"an explicit dep on LLVMSupport (chlo):", e)
try:
mhlo.register_mhlo_dialect(ctx)
except ImportError as e:
print(
"KNOWN ISSUE: For hidden visibility builds extensions need "
"an explicit dep on LLVMSupport (mhlo):", e)
iree_public.register_iree_public_dialect(ctx)
| 35.205128 | 72 | 0.774945 | 197 | 1,373 | 5.35533 | 0.340102 | 0.106161 | 0.212322 | 0.295735 | 0.622749 | 0.309953 | 0.241706 | 0.208531 | 0.208531 | 0.208531 | 0 | 0.006903 | 0.155863 | 1,373 | 38 | 73 | 36.131579 | 0.903365 | 0.150036 | 0 | 0.387097 | 0 | 0 | 0.215332 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.548387 | 0 | 0.548387 | 0.096774 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0540a3b80784dc3e732969c76f22872290c65abc | 73 | py | Python | neureca/shared/models/__init__.py | hojinYang/neureca | b1eb7246b731b7a0c7264b47c1c27239b9fe1224 | [
"Apache-2.0"
] | 7 | 2021-08-24T14:34:33.000Z | 2021-12-10T12:43:50.000Z | neureca/shared/models/__init__.py | hojinYang/neureca | b1eb7246b731b7a0c7264b47c1c27239b9fe1224 | [
"Apache-2.0"
] | null | null | null | neureca/shared/models/__init__.py | hojinYang/neureca | b1eb7246b731b7a0c7264b47c1c27239b9fe1224 | [
"Apache-2.0"
] | 1 | 2021-09-10T17:50:38.000Z | 2021-09-10T17:50:38.000Z | from .mlp import MLP
from .lstm import LSTM
from .autorec import AutoRec
| 18.25 | 28 | 0.794521 | 12 | 73 | 4.833333 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 73 | 3 | 29 | 24.333333 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0545be9b81cf95c36613c76fd972fa40e4af98f2 | 10,819 | py | Python | tests/test_fuelfactors.py | marcelosalles/pyidf | c2f744211572b5e14e29522aac1421ba88addb0e | [
"Apache-2.0"
] | 19 | 2015-12-08T23:33:51.000Z | 2022-01-31T04:41:10.000Z | tests/test_fuelfactors.py | marcelosalles/pyidf | c2f744211572b5e14e29522aac1421ba88addb0e | [
"Apache-2.0"
] | 2 | 2019-10-04T10:57:00.000Z | 2021-10-01T06:46:17.000Z | tests/test_fuelfactors.py | marcelosalles/pyidf | c2f744211572b5e14e29522aac1421ba88addb0e | [
"Apache-2.0"
] | 7 | 2015-11-04T02:25:01.000Z | 2021-12-08T03:14:28.000Z | import os
import tempfile
import unittest
import logging
from pyidf import ValidationLevel
import pyidf
from pyidf.idf import IDF
from pyidf.output_reporting import FuelFactors
log = logging.getLogger(__name__)
class TestFuelFactors(unittest.TestCase):
def setUp(self):
self.fd, self.path = tempfile.mkstemp()
def tearDown(self):
os.remove(self.path)
def test_create_fuelfactors(self):
pyidf.validation_level = ValidationLevel.error
obj = FuelFactors()
# alpha
var_existing_fuel_resource_name = "Electricity"
obj.existing_fuel_resource_name = var_existing_fuel_resource_name
# alpha
var_units_of_measure = "Units of Measure"
obj.units_of_measure = var_units_of_measure
# real
var_energy_per_unit_factor = 3.3
obj.energy_per_unit_factor = var_energy_per_unit_factor
# real
var_source_energy_factor = 4.4
obj.source_energy_factor = var_source_energy_factor
# object-list
var_source_energy_schedule_name = "object-list|Source Energy Schedule Name"
obj.source_energy_schedule_name = var_source_energy_schedule_name
# real
var_co2_emission_factor = 6.6
obj.co2_emission_factor = var_co2_emission_factor
# object-list
var_co2_emission_factor_schedule_name = "object-list|CO2 Emission Factor Schedule Name"
obj.co2_emission_factor_schedule_name = var_co2_emission_factor_schedule_name
# real
var_co_emission_factor = 8.8
obj.co_emission_factor = var_co_emission_factor
# object-list
var_co_emission_factor_schedule_name = "object-list|CO Emission Factor Schedule Name"
obj.co_emission_factor_schedule_name = var_co_emission_factor_schedule_name
# real
var_ch4_emission_factor = 10.1
obj.ch4_emission_factor = var_ch4_emission_factor
# object-list
var_ch4_emission_factor_schedule_name = "object-list|CH4 Emission Factor Schedule Name"
obj.ch4_emission_factor_schedule_name = var_ch4_emission_factor_schedule_name
# real
var_nox_emission_factor = 12.12
obj.nox_emission_factor = var_nox_emission_factor
# object-list
var_nox_emission_factor_schedule_name = "object-list|NOx Emission Factor Schedule Name"
obj.nox_emission_factor_schedule_name = var_nox_emission_factor_schedule_name
# real
var_n2o_emission_factor = 14.14
obj.n2o_emission_factor = var_n2o_emission_factor
# object-list
var_n2o_emission_factor_schedule_name = "object-list|N2O Emission Factor Schedule Name"
obj.n2o_emission_factor_schedule_name = var_n2o_emission_factor_schedule_name
# real
var_so2_emission_factor = 16.16
obj.so2_emission_factor = var_so2_emission_factor
# object-list
var_so2_emission_factor_schedule_name = "object-list|SO2 Emission Factor Schedule Name"
obj.so2_emission_factor_schedule_name = var_so2_emission_factor_schedule_name
# real
var_pm_emission_factor = 18.18
obj.pm_emission_factor = var_pm_emission_factor
# object-list
var_pm_emission_factor_schedule_name = "object-list|PM Emission Factor Schedule Name"
obj.pm_emission_factor_schedule_name = var_pm_emission_factor_schedule_name
# real
var_pm10_emission_factor = 20.2
obj.pm10_emission_factor = var_pm10_emission_factor
# object-list
var_pm10_emission_factor_schedule_name = "object-list|PM10 Emission Factor Schedule Name"
obj.pm10_emission_factor_schedule_name = var_pm10_emission_factor_schedule_name
# real
var_pm2_5_emission_factor = 22.22
obj.pm2_5_emission_factor = var_pm2_5_emission_factor
# object-list
var_pm2_5_emission_factor_schedule_name = "object-list|PM2.5 Emission Factor Schedule Name"
obj.pm2_5_emission_factor_schedule_name = var_pm2_5_emission_factor_schedule_name
# real
var_nh3_emission_factor = 24.24
obj.nh3_emission_factor = var_nh3_emission_factor
# object-list
var_nh3_emission_factor_schedule_name = "object-list|NH3 Emission Factor Schedule Name"
obj.nh3_emission_factor_schedule_name = var_nh3_emission_factor_schedule_name
# real
var_nmvoc_emission_factor = 26.26
obj.nmvoc_emission_factor = var_nmvoc_emission_factor
# object-list
var_nmvoc_emission_factor_schedule_name = "object-list|NMVOC Emission Factor Schedule Name"
obj.nmvoc_emission_factor_schedule_name = var_nmvoc_emission_factor_schedule_name
# real
var_hg_emission_factor = 28.28
obj.hg_emission_factor = var_hg_emission_factor
# object-list
var_hg_emission_factor_schedule_name = "object-list|Hg Emission Factor Schedule Name"
obj.hg_emission_factor_schedule_name = var_hg_emission_factor_schedule_name
# real
var_pb_emission_factor = 30.3
obj.pb_emission_factor = var_pb_emission_factor
# object-list
var_pb_emission_factor_schedule_name = "object-list|Pb Emission Factor Schedule Name"
obj.pb_emission_factor_schedule_name = var_pb_emission_factor_schedule_name
# real
var_water_emission_factor = 32.32
obj.water_emission_factor = var_water_emission_factor
# object-list
var_water_emission_factor_schedule_name = "object-list|Water Emission Factor Schedule Name"
obj.water_emission_factor_schedule_name = var_water_emission_factor_schedule_name
# real
var_nuclear_high_level_emission_factor = 34.34
obj.nuclear_high_level_emission_factor = var_nuclear_high_level_emission_factor
# object-list
var_nuclear_high_level_emission_factor_schedule_name = "object-list|Nuclear High Level Emission Factor Schedule Name"
obj.nuclear_high_level_emission_factor_schedule_name = var_nuclear_high_level_emission_factor_schedule_name
# real
var_nuclear_low_level_emission_factor = 36.36
obj.nuclear_low_level_emission_factor = var_nuclear_low_level_emission_factor
# object-list
var_nuclear_low_level_emission_factor_schedule_name = "object-list|Nuclear Low Level Emission Factor Schedule Name"
obj.nuclear_low_level_emission_factor_schedule_name = var_nuclear_low_level_emission_factor_schedule_name
idf = IDF()
idf.add(obj)
idf.save(self.path, check=False)
with open(self.path, mode='r') as f:
for line in f:
log.debug(line.strip())
idf2 = IDF(self.path)
self.assertEqual(idf2.fuelfactorss[0].existing_fuel_resource_name, var_existing_fuel_resource_name)
self.assertEqual(idf2.fuelfactorss[0].units_of_measure, var_units_of_measure)
self.assertAlmostEqual(idf2.fuelfactorss[0].energy_per_unit_factor, var_energy_per_unit_factor)
self.assertAlmostEqual(idf2.fuelfactorss[0].source_energy_factor, var_source_energy_factor)
self.assertEqual(idf2.fuelfactorss[0].source_energy_schedule_name, var_source_energy_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].co2_emission_factor, var_co2_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].co2_emission_factor_schedule_name, var_co2_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].co_emission_factor, var_co_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].co_emission_factor_schedule_name, var_co_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].ch4_emission_factor, var_ch4_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].ch4_emission_factor_schedule_name, var_ch4_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].nox_emission_factor, var_nox_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].nox_emission_factor_schedule_name, var_nox_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].n2o_emission_factor, var_n2o_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].n2o_emission_factor_schedule_name, var_n2o_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].so2_emission_factor, var_so2_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].so2_emission_factor_schedule_name, var_so2_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].pm_emission_factor, var_pm_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].pm_emission_factor_schedule_name, var_pm_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].pm10_emission_factor, var_pm10_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].pm10_emission_factor_schedule_name, var_pm10_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].pm2_5_emission_factor, var_pm2_5_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].pm2_5_emission_factor_schedule_name, var_pm2_5_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].nh3_emission_factor, var_nh3_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].nh3_emission_factor_schedule_name, var_nh3_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].nmvoc_emission_factor, var_nmvoc_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].nmvoc_emission_factor_schedule_name, var_nmvoc_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].hg_emission_factor, var_hg_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].hg_emission_factor_schedule_name, var_hg_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].pb_emission_factor, var_pb_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].pb_emission_factor_schedule_name, var_pb_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].water_emission_factor, var_water_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].water_emission_factor_schedule_name, var_water_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].nuclear_high_level_emission_factor, var_nuclear_high_level_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].nuclear_high_level_emission_factor_schedule_name, var_nuclear_high_level_emission_factor_schedule_name)
self.assertAlmostEqual(idf2.fuelfactorss[0].nuclear_low_level_emission_factor, var_nuclear_low_level_emission_factor)
self.assertEqual(idf2.fuelfactorss[0].nuclear_low_level_emission_factor_schedule_name, var_nuclear_low_level_emission_factor_schedule_name) | 59.445055 | 149 | 0.775395 | 1,422 | 10,819 | 5.397328 | 0.076653 | 0.321042 | 0.275179 | 0.325212 | 0.877524 | 0.773681 | 0.674528 | 0.446775 | 0.430749 | 0.310619 | 0 | 0.026088 | 0.167391 | 10,819 | 182 | 150 | 59.445055 | 0.825933 | 0.028191 | 0 | 0 | 0 | 0 | 0.078171 | 0 | 0 | 0 | 0 | 0 | 0.274074 | 1 | 0.022222 | false | 0 | 0.059259 | 0 | 0.088889 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0564df7e533ef1fd43cd5250226689c5ba4e6b1e | 96 | py | Python | application/bridgesystem/__init__.py | Ragora/PyBridge | 146bf8dea53d8af1ec7547cb18e4d373b2af6ec8 | [
"MIT"
] | null | null | null | application/bridgesystem/__init__.py | Ragora/PyBridge | 146bf8dea53d8af1ec7547cb18e4d373b2af6ec8 | [
"MIT"
] | null | null | null | application/bridgesystem/__init__.py | Ragora/PyBridge | 146bf8dea53d8af1ec7547cb18e4d373b2af6ec8 | [
"MIT"
] | null | null | null | from .util import *
from .bridgebase import BridgeBase
from .configuration import Configuration
| 24 | 40 | 0.833333 | 11 | 96 | 7.272727 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 3 | 41 | 32 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
057431a846434a74fc78e5d9bf82bee90265e0ff | 11,773 | py | Python | n3jet/LO/fks_ensemble/LO_piecewise_balance_datasets.py | JosephPB/n3jet | f097c5e829b5f86fc0ce9007c5fa76ea229dfc56 | [
"MIT"
] | 3 | 2020-06-03T13:50:59.000Z | 2021-12-01T08:21:34.000Z | n3jet/LO/fks_ensemble/LO_piecewise_balance_datasets.py | JosephPB/n3jet | f097c5e829b5f86fc0ce9007c5fa76ea229dfc56 | [
"MIT"
] | 2 | 2021-08-25T16:15:38.000Z | 2022-02-10T03:36:12.000Z | n3jet/LO/fks_ensemble/LO_piecewise_balance_datasets.py | JosephPB/n3jet | f097c5e829b5f86fc0ce9007c5fa76ea229dfc56 | [
"MIT"
] | 1 | 2020-06-12T15:00:56.000Z | 2020-06-12T15:00:56.000Z | import sys
sys.path.append('./../')
sys.path.append('./../utils')
sys.path.append('./../FKS')
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import random
from matplotlib import rc
import time
import cPickle as pickle
import multiprocessing
import rambo_while
from njet_run_functions import *
from model import Model
from fks_partition import *
from keras.models import load_model
from tqdm import tqdm
from fks_utils import *
from piecewise_utils import *
from utils import *
from uniform_utils import *
from rambo_piecewise_balance import *
#import rambo_piecewise_balance as rpb
parser = argparse.ArgumentParser(description='Generate LO piecewise balanced data')
parser.add_argument(
'--n_gluon',
dest='n_gluon',
help='number of gluons',
type=int
)
parser.add_argument(
'--delta_cut',
dest='delta_cut',
help='proximity of jets according to JADE algorithm',
type=float
)
parser.add_argument(
'--delta_near',
dest='delta_near',
help='proximity of jets according to JADE algorithm',
type=float
)
parser.add_argument(
'--points',
dest='points',
help='number of trianing phase space points',
type=int
)
parser.add_argument(
'--test_points',
dest='test_points',
help='number of testing phase space points, default: 0',
type=int,
default = 0
)
parser.add_argument(
'--generate',
dest='generate_points',
help='override point generation, default: False',
default='False',
type=str
)
parser.add_argument(
'--save_dir',
dest='save_dir',
help='parent directory in which to save data and models, default: /mt/batch/jbullock/njet/RAMBO/',
default='/mt/batch/jbullock/njet/RAMBO/',
type=str
)
parser.add_argument(
'--data_dir',
dest='data_dir',
help='directory in which to save data, default: /data/piecewise_balance/',
default='/data/piecewise_balance/',
type=str
)
args = parser.parse_args()
n_gluon= args.n_gluon
delta_cut = args.delta_cut
delta_near = args.delta_near
points = args.points
test_points = args.test_points
save_dir = args.save_dir
data_dir = args.data_dir
generate_points = args.generate_points
order = 'LO'
delta = delta_cut
data_dir = save_dir + data_dir
if not os.path.exists(data_dir + 'PS_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, points)) \
        and not os.path.exists(data_dir + 'PS_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, points)):
    print('############### Creating both cut and near phasespace points ###############')
    cut_momenta, near_momenta = generate(n_gluon+2, points, 1000., delta_cut, delta_near)
    np.save(data_dir + 'PS_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, points), cut_momenta)
    np.save(data_dir + 'PS_near_{}_{}_{}'.format(n_gluon+2, delta_near, points), near_momenta)
elif generate_points == 'True':
    print('generate is True')
    print('############### Creating both cut and near phasespace points ###############')
    cut_momenta, near_momenta = generate(n_gluon+2, points, 1000., delta_cut, delta_near)
    np.save(data_dir + 'PS_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, points), cut_momenta)
    np.save(data_dir + 'PS_near_{}_{}_{}'.format(n_gluon+2, delta_near, points), near_momenta)
elif not os.path.exists(data_dir + 'PS_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, points)):
    print('############### Creating cut phasespace points ###############')
    cut_momenta, near_momenta = generate(n_gluon+2, points, 1000., delta_cut, delta_near)
    np.save(data_dir + 'PS_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, points), cut_momenta)
elif not os.path.exists(data_dir + 'PS_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, points)):
    print('############### Creating near phasespace points ###############')
    cut_momenta, near_momenta = generate(n_gluon+2, points, 1000., delta_cut, delta_near)
    np.save(data_dir + 'PS_near_{}_{}_{}'.format(n_gluon+2, delta_near, points), near_momenta)
else:
    print('############### All phasespace files exist ###############')

print('############### Loading momenta ###############')
cut_momenta = np.load(data_dir + 'PS_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, points), allow_pickle=True)
near_momenta = np.load(data_dir + 'PS_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, points), allow_pickle=True)
cut_momenta = cut_momenta.tolist()
near_momenta = near_momenta.tolist()
test_data, _, _ = run_njet(n_gluon)
if not os.path.exists(data_dir + 'NJ_LO_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, points)) \
        and not os.path.exists(data_dir + 'NJ_LO_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, points)):
    print('############### Calculating both cut and near njet points ###############')
    NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(cut_momenta, near_momenta, test_data)
    np.save(data_dir + 'NJ_LO_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, points), np.array(NJ_cut_treevals))
    np.save(data_dir + 'NJ_LO_near_{}_{}_{}'.format(n_gluon+2, delta_near, points), np.array(NJ_near_treevals))
elif generate_points == 'True':
    print('############### Calculating both cut and near njet points ###############')
    NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(cut_momenta, near_momenta, test_data)
    np.save(data_dir + 'NJ_LO_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, points), np.array(NJ_cut_treevals))
    np.save(data_dir + 'NJ_LO_near_{}_{}_{}'.format(n_gluon+2, delta_near, points), np.array(NJ_near_treevals))
elif not os.path.exists(data_dir + 'NJ_LO_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, points)):
    print('############### Calculating cut njet points ###############')
    NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(cut_momenta, near_momenta, test_data)
    np.save(data_dir + 'NJ_LO_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, points), np.array(NJ_cut_treevals))
elif not os.path.exists(data_dir + 'NJ_LO_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, points)):
    print('############### Calculating near njet points ###############')
    NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(cut_momenta, near_momenta, test_data)
    np.save(data_dir + 'NJ_LO_near_{}_{}_{}'.format(n_gluon+2, delta_near, points), np.array(NJ_near_treevals))
else:
    print('############### All njet files exist ###############')

print('############### Loading NJet points ###############')
NJ_cut_treevals = np.load(data_dir + 'NJ_LO_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, points), allow_pickle=True)
NJ_near_treevals = np.load(data_dir + 'NJ_LO_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, points), allow_pickle=True)
_, pairs = D_ij(mom=near_momenta[0],n_gluon=n_gluon)
if test_points != 0:
    if not os.path.exists(data_dir + 'PS_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, test_points)) \
            and not os.path.exists(data_dir + 'PS_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, test_points)):
        print('############### Creating both cut and near phasespace test_points ###############')
        cut_momenta, near_momenta = generate(n_gluon+2, test_points, 1000., delta_cut, delta_near)
        np.save(data_dir + 'PS_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, test_points), cut_momenta)
        np.save(data_dir + 'PS_near_{}_{}_{}'.format(n_gluon+2, delta_near, test_points), near_momenta)
    elif generate_points == 'True':
        print('generate is True')
        print('############### Creating both cut and near phasespace test_points ###############')
        cut_momenta, near_momenta = generate(n_gluon+2, test_points, 1000., delta_cut, delta_near)
        np.save(data_dir + 'PS_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, test_points), cut_momenta)
        np.save(data_dir + 'PS_near_{}_{}_{}'.format(n_gluon+2, delta_near, test_points), near_momenta)
    elif not os.path.exists(data_dir + 'PS_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, test_points)):
        print('############### Creating cut phasespace test_points ###############')
        cut_momenta, near_momenta = generate(n_gluon+2, test_points, 1000., delta_cut, delta_near)
        np.save(data_dir + 'PS_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, test_points), cut_momenta)
    elif not os.path.exists(data_dir + 'PS_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, test_points)):
        print('############### Creating near phasespace test_points ###############')
        cut_momenta, near_momenta = generate(n_gluon+2, test_points, 1000., delta_cut, delta_near)
        np.save(data_dir + 'PS_near_{}_{}_{}'.format(n_gluon+2, delta_near, test_points), near_momenta)
    else:
        print('############### All phasespace files exist ###############')

    print('############### Loading test momenta ###############')
    test_near_momenta = np.load(data_dir + 'PS_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, test_points), allow_pickle=True)
    test_cut_momenta = np.load(data_dir + 'PS_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, test_points), allow_pickle=True)
    test_near_momenta = test_near_momenta.tolist()
    test_cut_momenta = test_cut_momenta.tolist()

    if not os.path.exists(data_dir + 'NJ_LO_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, test_points)) \
            and not os.path.exists(data_dir + 'NJ_LO_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, test_points)):
        print('############### Calculating both cut and near njet test_points ###############')
        NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(test_cut_momenta, test_near_momenta, test_data)
        np.save(data_dir + 'NJ_LO_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, test_points), np.array(NJ_cut_treevals))
        np.save(data_dir + 'NJ_LO_near_{}_{}_{}'.format(n_gluon+2, delta_near, test_points), np.array(NJ_near_treevals))
    elif generate_points == 'True':
        print('############### Calculating both cut and near njet test_points ###############')
        NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(test_cut_momenta, test_near_momenta, test_data)
        np.save(data_dir + 'NJ_LO_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, test_points), np.array(NJ_cut_treevals))
        np.save(data_dir + 'NJ_LO_near_{}_{}_{}'.format(n_gluon+2, delta_near, test_points), np.array(NJ_near_treevals))
    elif not os.path.exists(data_dir + 'NJ_LO_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, test_points)):
        print('############### Calculating cut njet test_points ###############')
        NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(test_cut_momenta, test_near_momenta, test_data)
        np.save(data_dir + 'NJ_LO_cut_{}_{}_{}'.format(n_gluon+2, delta_cut, test_points), np.array(NJ_cut_treevals))
    elif not os.path.exists(data_dir + 'NJ_LO_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, test_points)):
        print('############### Calculating near njet test_points ###############')
        NJ_cut_treevals, NJ_near_treevals = generate_LO_piecewise_njet(test_cut_momenta, test_near_momenta, test_data)
        np.save(data_dir + 'NJ_LO_near_{}_{}_{}'.format(n_gluon+2, delta_near, test_points), np.array(NJ_near_treevals))
    else:
        print('############### All njet files exist ###############')

    print('############### Loading NJet test points ###############')
    NJ_cut_test_treevals = np.load(data_dir + 'NJ_LO_cut_{}_{}_{}.npy'.format(n_gluon+2, delta_cut, test_points), allow_pickle=True)
    NJ_near_test_treevals = np.load(data_dir + 'NJ_LO_near_{}_{}_{}.npy'.format(n_gluon+2, delta_near, test_points), allow_pickle=True)
print('############### All points generated ###############')
# File: simple_youtube_api/decorators.py (MCOfficer/simple-youtube-api, MIT)
"""
All the decorators used in Simple YouTube API go here
"""
import decorator
@decorator.decorator
def video_snippet_set(f, video, *a, **k):
    video.snippet_set = True
    return f(video, *a, **k)


@decorator.decorator
def video_status_set(f, video, *a, **k):
    video.status_set = True
    return f(video, *a, **k)


@decorator.decorator
def require_channel_auth(f, video, *a, **k):
    if video.channel is not None:
        return f(video, *a, **k)
    else:
        raise Exception(
            "Setting channel authentication is required before calling "
            + f.__name__
        )


@decorator.decorator
def require_youtube_auth(f, video, *a, **k):
    if video.youtube is not None:
        return f(video, *a, **k)
    else:
        raise Exception(
            "Setting youtube authentication is required before calling "
            + f.__name__
        )


@decorator.decorator
def require_channel_or_youtube_auth(f, video, *a, **k):
    if video.youtube is not None or video.channel is not None:
        return f(video, *a, **k)
    else:
        raise Exception(
            "Setting youtube or channel authentication is required before calling "
            + f.__name__
        )
# File: test/test_interfaces_stub.py (fabaff/smarthomeconnect, Apache-2.0)
import unittest
class InterfaceImportTest(unittest.TestCase):
    def test_knx(self):
        import shc.interfaces.knx

    def test_dmx(self):
        import shc.interfaces.dmx

    def test_midi(self):
        import shc.interfaces.midi
# File: src/tastopo/__init__.py (JonathanHolvey/tastopo, MIT)
from .listmap.location import Location
from .map import Sheet, Image
from .layout import Layout
from .export import export_map, clean_filename
| 28.6 | 46 | 0.825175 | 21 | 143 | 5.52381 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125874 | 143 | 4 | 47 | 35.75 | 0.928 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
# File: mds/db/__init__.py (KnockSoftware/mds-provider, MIT)
"""
Work with MDS Provider data.
"""
from mds.db.load import ProviderDataLoader
# File: backend/stakeholder/views.py (DaasDaham/Team_9_Club_Management_Portal, MIT)
from django.shortcuts import render
# Create your views here.
def home(request):
    # data = {
    #     'hero_banner': hero_banner.objects.all(),
    #     'member': lab_member.objects.all(),
    #     'publication': publications.objects.all(),
    #     'gallery': gallery.objects.all(),
    #     'web_server': web_server.objects.all()
    # }
    return render(request, 'stakeholder/index.html')
# File: backend/core/models/user.py (block-id/wallet, Apache-2.0)
from django.db import models
from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
    @property
    def public_key(self):
        return self.keypair.public_key
# File: zfs/replicate/compress/__init__.py (tuffnatty/zfs-replicate, BSD-2-Clause)
"""ZFS Replication Stream Compression."""
from .command import command as command # noqa: F401
from .type import Compression as Compression # noqa: F401
# File: world/world/views.py (nigelmathes/GAME, MIT)
from django.shortcuts import render
def home_view(request):
    return render(request, 'templates/index.html')
# File: my_skills/urls.py (Williano/CV, MIT)
from django.urls import path
from . import views
app_name = 'my_skills'
urlpatterns = [
    path('', views.my_skills, name='my_skills'),
]
# File: day_22/test_solution.py (jamsidedown/adventofcode2020, MIT)
from day_22.solution import parse, part_1, part_2
def test_part_1():
    player_1, player_2 = parse('day_22/test_input.txt')
    assert part_1(player_1, player_2) == 306


def test_part_2():
    player_1, player_2 = parse('day_22/test_input.txt')
    assert part_2(player_1, player_2) == 291


def test_part_2_recursive_ends():
    player_1, player_2 = parse('day_22/test_input_2.txt')
    assert part_2(player_1, player_2) > 0
# File: templates/python/HelloWorld/tests/test_greeter.py (timmattison/aws-greengrass-component-templates, Apache-2.0)
import pytest
import src.greeter as greeter
def test_greeting_world():
    greeting = greeter.get_greeting('World')
    assert "Hello World!" == greeting
# File: server/galaxyls/tests/unit/test_xsd_service.py (galaxyproject/galaxy-language-server, Apache-2.0)
from pytest_mock import MockerFixture
from ...services.xsd.constants import MSG_NO_DOCUMENTATION_AVAILABLE
from ...services.xsd.service import GalaxyToolXsdService
class TestGalaxyToolXsdServiceClass:
    def test_get_documentation_for_unknown_node_attribute_returns_no_documentation(self, mocker: MockerFixture) -> None:
        service = GalaxyToolXsdService("Test")
        fake_context = mocker.Mock()
        fake_context.xsd_element = None

        doc = service.get_documentation_for(fake_context)

        assert doc.value == MSG_NO_DOCUMENTATION_AVAILABLE

    def test_get_documentation_for_annotated_element(self, mocker: MockerFixture) -> None:
        service = GalaxyToolXsdService("Test")
        fake_context = mocker.Mock()
        fake_context.is_tag = True
        fake_context.is_attribute_key = False
        fake_context.stack = ["tool"]

        doc = service.get_documentation_for(fake_context)

        assert doc.value
        assert doc.value != MSG_NO_DOCUMENTATION_AVAILABLE

    def test_get_documentation_for_element_using_annotated_type(self, mocker: MockerFixture) -> None:
        service = GalaxyToolXsdService("Test")
        fake_context = mocker.Mock()
        fake_context.is_tag = True
        fake_context.is_attribute_key = False
        fake_context.stack = ["tool", "macros"]

        doc = service.get_documentation_for(fake_context)

        assert doc.value
        assert doc.value != MSG_NO_DOCUMENTATION_AVAILABLE

    def test_get_documentation_for_annotated_attribute(self, mocker: MockerFixture) -> None:
        service = GalaxyToolXsdService("Test")
        fake_context = mocker.Mock()
        fake_context.is_tag = False
        fake_context.is_attribute_key = True
        fake_context.token.name = "id"
        fake_context.stack = ["tool"]

        doc = service.get_documentation_for(fake_context)

        assert doc.value
        assert doc.value != MSG_NO_DOCUMENTATION_AVAILABLE
# File: pyststorygen/__init__.py (ahmaddoulat/pyststorygen, MIT)
from pyststorygen.pyststorygen import StudentStory
# File: Desafios/desafio076.py (josivantarcio/Desafios-em-Python, MIT)
prod = (input('Digite produto: '), input('Digite o Preco: '),
input('Digite produto: '),input('Digite o Preco: '),
input('Digite produto: '),input('Digite o Preco: '),
input('Digite produto: '),input('Digite o Preco: '))
print('=' * 29)
print('Tabela de Preço')
print('=' * 29)
for i in range(len(prod)):
    if i % 2 == 0:
        print(f'{prod[i]:.<23}', end='')
    else:
        print(f'R${prod[i]}')
# File: tests/src/python/test_qgsserver_accesscontrol_wfs_transactional.py (dyna-mis/Hilabeling, MIT)
# -*- coding: utf-8 -*-
"""QGIS Unit tests for QgsServer.
.. note:: This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
"""
__author__ = 'Stephane Brunner'
__date__ = '28/08/2015'
__copyright__ = 'Copyright 2015, The QGIS Project'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '176c06ceefb5f555205e72b20c962740cc0ec183'
print('CTEST_FULL_OUTPUT')
from qgis.testing import unittest
from test_qgsserver_accesscontrol import TestQgsServerAccessControl, XML_NS
WFS_TRANSACTION_INSERT = """<?xml version="1.0" encoding="UTF-8"?>
<wfs:Transaction {xml_ns}>
  <wfs:Insert idgen="GenerateNew">
    <qgs:db_point>
      <qgs:geometry>
        <gml:Point srsDimension="2" srsName="http://www.opengis.net/def/crs/EPSG/0/4326">
          <gml:coordinates decimal="." cs="," ts=" ">{x},{y}</gml:coordinates>
        </gml:Point>
      </qgs:geometry>
      <qgs:name>{name}</qgs:name>
      <qgs:color>{color}</qgs:color>
    </qgs:db_point>
  </wfs:Insert>
</wfs:Transaction>""".format(x=1000, y=2000, name="test", color="{color}", xml_ns=XML_NS)
WFS_TRANSACTION_UPDATE = """<?xml version="1.0" encoding="UTF-8"?>
<wfs:Transaction {xml_ns}>
  <wfs:Update typeName="db_point">
    <wfs:Property>
      <wfs:Name>color</wfs:Name>
      <wfs:Value>{color}</wfs:Value>
    </wfs:Property>
    <ogc:Filter>
      <ogc:FeatureId fid="{id}"/>
    </ogc:Filter>
  </wfs:Update>
</wfs:Transaction>"""
WFS_TRANSACTION_DELETE = """<?xml version="1.0" encoding="UTF-8"?>
<wfs:Transaction {xml_ns}>
  <wfs:Delete typeName="db_point">
    <ogc:Filter>
      <ogc:FeatureId fid="{id}"/>
    </ogc:Filter>
  </wfs:Delete>
</wfs:Transaction>"""
class TestQgsServerAccessControlWFSTransactional(TestQgsServerAccessControl):

    def test_wfstransaction_insert(self):
        data = WFS_TRANSACTION_INSERT.format(x=1000, y=2000, name="test", color="{color}", xml_ns=XML_NS)
        self._test_colors({1: "blue"})

        response, headers = self._post_fullaccess(data.format(color="red"))
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Insert is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") != -1,
            "WFS/Transactions Insert didn't succeed\n%s" % response)
        self._test_colors({2: "red"})

        response, headers = self._post_restricted(data.format(color="blue"))
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Insert is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") == -1,
            "WFS/Transactions Insert succeeded unexpectedly\n%s" % response)

        response, headers = self._post_restricted(data.format(color="red"), "LAYER_PERM=no")
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Insert is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find(
                '<ServiceException code="Security">No permissions to do WFS changes on layer \'db_point\'</ServiceException>') != -1,
            "WFS/Transactions Insert succeeded unexpectedly\n%s" % response)

        response, headers = self._post_restricted(data.format(color="yellow"), "LAYER_PERM=yes")
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Insert is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") != -1,
            "WFS/Transactions Insert didn't succeed\n%s" % response)
        self._test_colors({3: "yellow"})

    def test_wfstransaction_update(self):
        data = WFS_TRANSACTION_UPDATE.format(id="1", color="{color}", xml_ns=XML_NS)
        self._test_colors({1: "blue"})

        response, headers = self._post_restricted(data.format(color="yellow"))
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Update is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") == -1,
            "WFS/Transactions Update succeeded unexpectedly\n%s" % response)
        self._test_colors({1: "blue"})

        response, headers = self._post_fullaccess(data.format(color="red"))
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Update is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") != -1,
            "WFS/Transactions Update didn't succeed\n%s" % response)
        self._test_colors({1: "red"})

        response, headers = self._post_restricted(data.format(color="blue"))
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Update is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") == -1,
            "WFS/Transactions Update succeeded unexpectedly\n%s" % response)
        self._test_colors({1: "red"})

        response, headers = self._post_restricted(data.format(color="yellow"), "LAYER_PERM=no")
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Update is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find(
                '<ServiceException code="Security">No permissions to do WFS changes on layer \'db_point\'</ServiceException>') != -1,
            "WFS/Transactions Update succeeded unexpectedly\n%s" % response)
        self._test_colors({1: "red"})

        response, headers = self._post_restricted(data.format(color="yellow"), "LAYER_PERM=yes")
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Update is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") != -1,
            "WFS/Transactions Update didn't succeed\n%s" % response)
        self._test_colors({1: "yellow"})

    def test_wfstransaction_delete_fullaccess(self):
        data = WFS_TRANSACTION_DELETE.format(id="1", xml_ns=XML_NS)
        self._test_colors({1: "blue"})

        response, headers = self._post_fullaccess(data)
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Delete is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") != -1,
            "WFS/Transactions Delete didn't succeed\n%s" % response)

    def test_wfstransaction_delete_restricted(self):
        data = WFS_TRANSACTION_DELETE.format(id="1", xml_ns=XML_NS)
        self._test_colors({1: "blue"})

        response, headers = self._post_restricted(data)
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Delete is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") == -1,
            "WFS/Transactions Delete succeeded unexpectedly\n%s" % response)

        data_update = WFS_TRANSACTION_UPDATE.format(id="1", color="red", xml_ns=XML_NS)
        response, headers = self._post_fullaccess(data_update)
        self._test_colors({1: "red"})

        response, headers = self._post_restricted(data, "LAYER_PERM=no")
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Delete is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find(
                '<ServiceException code="Security">No permissions to do WFS changes on layer \'db_point\'</ServiceException>') != -1,
            "WFS/Transactions Delete succeeded unexpectedly\n%s" % response)

        response, headers = self._post_restricted(data, "LAYER_PERM=yes")
        self.assertEqual(
            headers.get("Content-Type"), "text/xml; charset=utf-8",
            "Content type for Delete is wrong: %s" % headers.get("Content-Type"))
        self.assertTrue(
            str(response).find("<SUCCESS/>") != -1,
            "WFS/Transactions Delete didn't succeed\n%s" % response)


if __name__ == "__main__":
    unittest.main()
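The WFS templates above are filled in two stages: the first `.format()` call substitutes the literal string `"{color}"` for the `color` field, which re-emits the placeholder so each assertion can pick its own color later. A minimal sketch of that two-stage pattern (the template and names here are illustrative, not from the test suite):

```python
# Two-stage str.format: keep one placeholder alive through the first pass
# by substituting the literal string "{color}" for it.
template = '<point name="{name}" color="{color}"/>'

# First pass: fix the name, re-emit {color} untouched.
partial = template.format(name="test", color="{color}")

# Second pass: each caller picks its own color.
red = partial.format(color="red")
blue = partial.format(color="blue")

print(red)   # <point name="test" color="red"/>
print(blue)  # <point name="test" color="blue"/>
```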
34a7d405001e3cb662a10c338e9136089014091b | 35 | py | Python | src/dropSQL/__main__.py | LakshyaaSoni/dropSQL | da07ad3edf2d55f0521a385ad10678fc353b4b2b | ["MIT"] | 35 | 2017-11-27T22:24:46.000Z | 2022-01-16T23:50:01.000Z | src/dropSQL/__main__.py | LakshyaaSoni/dropSQL | da07ad3edf2d55f0521a385ad10678fc353b4b2b | ["MIT"] | null | null | null | src/dropSQL/__main__.py | LakshyaaSoni/dropSQL | da07ad3edf2d55f0521a385ad10678fc353b4b2b | ["MIT"] | 2 | 2018-02-20T06:06:12.000Z | 2021-10-16T18:30:15.000Z |
from .repl import launch
launch()
34d28469533fd4c2116c1dbb541642cbecde5d16 | 221 | py | Python | armstrong/core/arm_content/video/settings.py | cirlabs/armstrong.core.arm_content | 91b2022bc19f0ddb10402d928c9b68c9faf242b6 | ["Apache-2.0"] | null | null | null | armstrong/core/arm_content/video/settings.py | cirlabs/armstrong.core.arm_content | 91b2022bc19f0ddb10402d928c9b68c9faf242b6 | ["Apache-2.0"] | null | null | null | armstrong/core/arm_content/video/settings.py | cirlabs/armstrong.core.arm_content | 91b2022bc19f0ddb10402d928c9b68c9faf242b6 | ["Apache-2.0"] | null | null | null |
from django.conf import settings
ARMSTRONG_EMBED_VIDEO_HEIGHT = getattr(settings,
    'ARMSTRONG_EMBED_VIDEO_HEIGHT', 390)
ARMSTRONG_EMBED_VIDEO_WIDTH = getattr(settings,
    'ARMSTRONG_EMBED_VIDEO_WIDTH', 640)
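The `getattr(settings, NAME, default)` idiom above lets a Django project override either dimension in its own settings module while falling back to the packaged defaults. The same pattern sketched with a stand-in settings object (not Django itself; the override value is illustrative):

```python
class FakeSettings:
    """Stand-in for django.conf.settings with one value overridden."""
    ARMSTRONG_EMBED_VIDEO_HEIGHT = 480  # hypothetical project override

settings = FakeSettings()

# Overridden attribute wins; missing attribute falls back to the default.
height = getattr(settings, 'ARMSTRONG_EMBED_VIDEO_HEIGHT', 390)
width = getattr(settings, 'ARMSTRONG_EMBED_VIDEO_WIDTH', 640)

print(height, width)  # 480 640
```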
551b69587f15679b1ef17d93f97ea745f4a9f5d2 | 122 | py | Python | server/goap.py | cutec-chris/sce | da1f906ff4a049722b8968eeab8a07b411c92a8c | ["MIT"] | null | null | null | server/goap.py | cutec-chris/sce | da1f906ff4a049722b8968eeab8a07b411c92a8c | ["MIT"] | null | null | null | server/goap.py | cutec-chris/sce | da1f906ff4a049722b8968eeab8a07b411c92a8c | ["MIT"] | null | null | null |
class Action:
    def __init__(self, Name, Dependencies):
        self.Name = Name
        self.Dependencies = Dependencies


class Goal(Action): pass
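The two classes above only record a name and a dependency list; a GOAP-style planner would walk those dependencies to order actions. A hypothetical resolver over such classes (the `resolve` helper and the action chain are illustrative, not part of the file):

```python
class Action:
    def __init__(self, Name, Dependencies):
        self.Name = Name
        self.Dependencies = Dependencies

class Goal(Action): pass

def resolve(goal):
    """Depth-first walk: dependencies come before the actions needing them."""
    order = []
    def visit(action):
        for dep in action.Dependencies:
            visit(dep)
        order.append(action.Name)
    visit(goal)
    return order

# Hypothetical chain: an axe is needed to chop wood, which satisfies the goal.
get_axe = Action("get_axe", [])
chop_wood = Action("chop_wood", [get_axe])
goal = Goal("have_wood", [chop_wood])
print(resolve(goal))  # ['get_axe', 'chop_wood', 'have_wood']
```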
9b347c2c193e8fad7fa27ed52e0c2d23774c0d8a | 137 | py | Python | tests/tests.py | hdknr/django-mywords | eaaa84e32591e92a798f27afe6e51df2f1b6e9ea | ["BSD-2-Clause-FreeBSD"] | null | null | null | tests/tests.py | hdknr/django-mywords | eaaa84e32591e92a798f27afe6e51df2f1b6e9ea | ["BSD-2-Clause-FreeBSD"] | null | null | null | tests/tests.py | hdknr/django-mywords | eaaa84e32591e92a798f27afe6e51df2f1b6e9ea | ["BSD-2-Clause-FreeBSD"] | null | null | null |
# coding: utf-8
from unittest import TestCase as UnitTestCase
class SimpleCase(UnitTestCase):

    def test_simple(self):
        pass
9b395b79ec3f6c90d4671c2cc11df38f51c83535 | 188 | py | Python | network/__init__.py | houzeyu2683/SFDDD | a245ca97df4be85c4116f5833c378d6968eaa215 | ["Apache-2.0"] | null | null | null | network/__init__.py | houzeyu2683/SFDDD | a245ca97df4be85c4116f5833c378d6968eaa215 | ["Apache-2.0"] | null | null | null | network/__init__.py | houzeyu2683/SFDDD | a245ca97df4be85c4116f5833c378d6968eaa215 | ["Apache-2.0"] | null | null | null |
## Modules.
from .model import model
from .criterion import criterion
from .optimizer import optimizer
from .machine import machine
from .metric import metric
from .report import report
9b508bd9047cdd4d07acb66e18df82d52c01f62d | 253 | py | Python | tests/__init__.py | alonbl/wg-srle | 12c07ae2cface6a8525b7409a0083299e4c507ce | ["Zlib"] | null | null | null | tests/__init__.py | alonbl/wg-srle | 12c07ae2cface6a8525b7409a0083299e4c507ce | ["Zlib"] | null | null | null | tests/__init__.py | alonbl/wg-srle | 12c07ae2cface6a8525b7409a0083299e4c507ce | ["Zlib"] | null | null | null |
# -*- coding: utf-8 -*-
import unittest
#
# >= python-2 compatibility
#
if not hasattr(unittest.TestCase, 'assertRaisesRegex'):
    setattr(
        unittest.TestCase,
        'assertRaisesRegex',
        unittest.TestCase.assertRaisesRegexp,
    )
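With the alias in place, test code can use the Python 3 spelling `assertRaisesRegex` everywhere and still run on interpreters that only ship the old `assertRaisesRegexp` name. A small sketch of a test relying on the modern name (the test case itself is illustrative):

```python
import unittest

class RegexCase(unittest.TestCase):
    def test_bad_int(self):
        # Python 3 name; on old interpreters the shim above aliases it
        # to assertRaisesRegexp.
        with self.assertRaisesRegex(ValueError, r"invalid literal"):
            int("not-a-number")

# Run the single test case programmatically.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(RegexCase))
print(result.wasSuccessful())  # True
```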
9b92ee864d15376d8413ee1d6e0f81a6290b05b2 | 132 | py | Python | model/payload.py | mateob93/mqtt_subscriber | cb90f3c2d6e48685ae65ca98194b978f6b18732f | ["MIT"] | null | null | null | model/payload.py | mateob93/mqtt_subscriber | cb90f3c2d6e48685ae65ca98194b978f6b18732f | ["MIT"] | null | null | null | model/payload.py | mateob93/mqtt_subscriber | cb90f3c2d6e48685ae65ca98194b978f6b18732f | ["MIT"] | null | null | null |
class Payload:
    def to_dict(self):
        raise NotImplementedError

    def to_tuple(self):
        raise NotImplementedError
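`Payload` above is an abstract-style base: subclasses are expected to supply both conversions. A hypothetical concrete payload (the class and field names are illustrative, not from the repository):

```python
class Payload:
    def to_dict(self):
        raise NotImplementedError

    def to_tuple(self):
        raise NotImplementedError

class TemperaturePayload(Payload):
    """Hypothetical MQTT payload carrying one sensor reading."""
    def __init__(self, sensor_id, celsius):
        self.sensor_id = sensor_id
        self.celsius = celsius

    def to_dict(self):
        return {"sensor_id": self.sensor_id, "celsius": self.celsius}

    def to_tuple(self):
        return (self.sensor_id, self.celsius)

p = TemperaturePayload("kitchen", 21.5)
print(p.to_dict())   # {'sensor_id': 'kitchen', 'celsius': 21.5}
print(p.to_tuple())  # ('kitchen', 21.5)
```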
9b99c4960d872a22351794625621bf12d69dce8e | 25,668 | py | Python | pyboto3/marketplacecatalog.py | gehad-shaat/pyboto3 | 4a0c2851a8bc04fb1c71c36086f7bb257e48181d | ["MIT"] | 91 | 2016-12-31T11:38:37.000Z | 2021-09-16T19:33:23.000Z | pyboto3/marketplacecatalog.py | gehad-shaat/pyboto3 | 4a0c2851a8bc04fb1c71c36086f7bb257e48181d | ["MIT"] | 7 | 2017-01-02T18:54:23.000Z | 2020-08-11T13:54:02.000Z | pyboto3/marketplacecatalog.py | gehad-shaat/pyboto3 | 4a0c2851a8bc04fb1c71c36086f7bb257e48181d | ["MIT"] | 26 | 2016-12-31T13:11:00.000Z | 2022-03-03T21:01:12.000Z |
'''
The MIT License (MIT)
Copyright (c) 2016 WavyCloud
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
def can_paginate(operation_name=None):
    """
    Check if an operation can be paginated.

    :type operation_name: string
    :param operation_name: The operation name. This is the same name
        as the method name on the client. For example, if the
        method name is create_foo, and you'd normally invoke the
        operation as client.create_foo(**kwargs), if the
        create_foo operation can be paginated, you can use the
        call client.get_paginator('create_foo').
    """
    pass
def cancel_change_set(Catalog=None, ChangeSetId=None):
    """
    Used to cancel an open change request. Must be sent before the status of the request changes to APPLYING, the final stage of completing your change request. You can describe a change during the 60-day request history retention period for API calls.
    See also: AWS API Documentation

    :example: response = client.cancel_change_set(
        Catalog='string',
        ChangeSetId='string'
    )

    :type Catalog: string
    :param Catalog: [REQUIRED]
        Required. The catalog related to the request. Fixed value: AWSMarketplace.

    :type ChangeSetId: string
    :param ChangeSetId: [REQUIRED]
        Required. The unique identifier of the StartChangeSet request that you want to cancel.

    :rtype: dict

    Response Syntax
    {
        'ChangeSetId': 'string',
        'ChangeSetArn': 'string'
    }

    Response Structure
    (dict) --
        ChangeSetId (string) -- The unique identifier for the change set referenced in this request.
        ChangeSetArn (string) -- The ARN associated with the change set referenced in this request.

    Exceptions
    MarketplaceCatalog.Client.exceptions.InternalServiceException
    MarketplaceCatalog.Client.exceptions.AccessDeniedException
    MarketplaceCatalog.Client.exceptions.ValidationException
    MarketplaceCatalog.Client.exceptions.ResourceNotFoundException
    MarketplaceCatalog.Client.exceptions.ResourceInUseException
    MarketplaceCatalog.Client.exceptions.ThrottlingException
    """
    pass
def describe_change_set(Catalog=None, ChangeSetId=None):
    """
    Provides information about a given change set.
    See also: AWS API Documentation

    :example: response = client.describe_change_set(
        Catalog='string',
        ChangeSetId='string'
    )

    :type Catalog: string
    :param Catalog: [REQUIRED]
        Required. The catalog related to the request. Fixed value: AWSMarketplace

    :type ChangeSetId: string
    :param ChangeSetId: [REQUIRED]
        Required. The unique identifier for the StartChangeSet request that you want to describe the details for.

    :rtype: dict

    Response Syntax
    {
        'ChangeSetId': 'string',
        'ChangeSetArn': 'string',
        'ChangeSetName': 'string',
        'StartTime': 'string',
        'EndTime': 'string',
        'Status': 'PREPARING'|'APPLYING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
        'FailureDescription': 'string',
        'ChangeSet': [
            {
                'ChangeType': 'string',
                'Entity': {
                    'Type': 'string',
                    'Identifier': 'string'
                },
                'ErrorDetailList': [
                    {
                        'ErrorCode': 'string',
                        'ErrorMessage': 'string'
                    },
                ]
            },
        ]
    }

    Response Structure
    (dict) --
        ChangeSetId (string) -- Required. The unique identifier for the change set referenced in this request.
        ChangeSetArn (string) -- The ARN associated with the unique identifier for the change set referenced in this request.
        ChangeSetName (string) -- The optional name provided in the StartChangeSet request. If you do not provide a name, one is set by default.
        StartTime (string) -- The date and time, in ISO 8601 format (2018-02-27T13:45:22Z), the request started.
        EndTime (string) -- The date and time, in ISO 8601 format (2018-02-27T13:45:22Z), the request transitioned to a terminal state. The change cannot transition to a different state. Null if the request is not in a terminal state.
        Status (string) -- The status of the change request.
        FailureDescription (string) -- Returned if there is a failure on the change set, but that failure is not related to any of the changes in the request.
        ChangeSet (list) -- An array of ChangeSummary objects.
            (dict) -- This object is a container for common summary information about the change. The summary doesn't contain the whole change structure.
                ChangeType (string) -- The type of the change.
                Entity (dict) -- The entity to be changed.
                    Type (string) -- The type of entity.
                    Identifier (string) -- The identifier for the entity.
                ErrorDetailList (list) -- An array of ErrorDetail objects associated with the change.
                    (dict) -- Details about the error.
                        ErrorCode (string) -- The error code that identifies the type of error.
                        ErrorMessage (string) -- The message for the error.

    Exceptions
    MarketplaceCatalog.Client.exceptions.InternalServiceException
    MarketplaceCatalog.Client.exceptions.AccessDeniedException
    MarketplaceCatalog.Client.exceptions.ValidationException
    MarketplaceCatalog.Client.exceptions.ResourceNotFoundException
    MarketplaceCatalog.Client.exceptions.ThrottlingException
    """
    pass
def describe_entity(Catalog=None, EntityId=None):
    """
    Returns the metadata and content of the entity.
    See also: AWS API Documentation

    :example: response = client.describe_entity(
        Catalog='string',
        EntityId='string'
    )

    :type Catalog: string
    :param Catalog: [REQUIRED]
        Required. The catalog related to the request. Fixed value: AWSMarketplace

    :type EntityId: string
    :param EntityId: [REQUIRED]
        Required. The unique ID of the entity to describe.

    :rtype: dict

    Response Syntax
    {
        'EntityType': 'string',
        'EntityIdentifier': 'string',
        'EntityArn': 'string',
        'LastModifiedDate': 'string',
        'Details': 'string'
    }

    Response Structure
    (dict) --
        EntityType (string) -- The named type of the entity, in the format of EntityType@Version.
        EntityIdentifier (string) -- The identifier of the entity, in the format of EntityId@RevisionId.
        EntityArn (string) -- The ARN associated to the unique identifier for the change set referenced in this request.
        LastModifiedDate (string) -- The last modified date of the entity, in ISO 8601 format (2018-02-27T13:45:22Z).
        Details (string) -- This stringified JSON object includes the details of the entity.

    Exceptions
    MarketplaceCatalog.Client.exceptions.InternalServiceException
    MarketplaceCatalog.Client.exceptions.AccessDeniedException
    MarketplaceCatalog.Client.exceptions.ValidationException
    MarketplaceCatalog.Client.exceptions.ResourceNotSupportedException
    MarketplaceCatalog.Client.exceptions.ResourceNotFoundException
    MarketplaceCatalog.Client.exceptions.ThrottlingException
    """
    pass
def generate_presigned_url(ClientMethod=None, Params=None, ExpiresIn=None, HttpMethod=None):
    """
    Generate a presigned url given a client, its method, and arguments.

    :type ClientMethod: string
    :param ClientMethod: The client method to presign for

    :type Params: dict
    :param Params: The parameters normally passed to
        ClientMethod.

    :type ExpiresIn: int
    :param ExpiresIn: The number of seconds the presigned url is valid
        for. By default it expires in an hour (3600 seconds)

    :type HttpMethod: string
    :param HttpMethod: The http method to use on the generated url. By
        default, the http method is whatever is used in the method's model.
    """
    pass
def get_paginator(operation_name=None):
    """
    Create a paginator for an operation.

    :type operation_name: string
    :param operation_name: The operation name. This is the same name
        as the method name on the client. For example, if the
        method name is create_foo, and you'd normally invoke the
        operation as client.create_foo(**kwargs), if the
        create_foo operation can be paginated, you can use the
        call client.get_paginator('create_foo').

    :rtype: L{botocore.paginate.Paginator}
    :returns: A paginator object.
    """
    pass
def get_waiter(waiter_name=None):
    """
    Returns an object that can wait for some condition.

    :type waiter_name: str
    :param waiter_name: The name of the waiter to get. See the waiters
        section of the service docs for a list of available waiters.

    :rtype: botocore.waiter.Waiter
    """
    pass
def list_change_sets(Catalog=None, FilterList=None, Sort=None, MaxResults=None, NextToken=None):
    """
    Returns the list of change sets owned by the account being used to make the call. You can filter this list by providing any combination of entityId, ChangeSetName, and status. If you provide more than one filter, the API operation applies a logical AND between the filters.
    You can describe a change during the 60-day request history retention period for API calls.
    See also: AWS API Documentation

    :example: response = client.list_change_sets(
        Catalog='string',
        FilterList=[
            {
                'Name': 'string',
                'ValueList': [
                    'string',
                ]
            },
        ],
        Sort={
            'SortBy': 'string',
            'SortOrder': 'ASCENDING'|'DESCENDING'
        },
        MaxResults=123,
        NextToken='string'
    )

    :type Catalog: string
    :param Catalog: [REQUIRED]
        The catalog related to the request. Fixed value: AWSMarketplace

    :type FilterList: list
    :param FilterList: An array of filter objects.
        (dict) -- A filter object, used to optionally filter results from calls to the ListEntities and ListChangeSets actions.
            Name (string) -- For ListEntities, the supported value for this is an EntityId.
                For ListChangeSets, the supported values are as follows:
            ValueList (list) --
                ListEntities - This is a list of unique EntityIds.
                ListChangeSets - The supported filter names and associated ValueLists are as follows:
                    ChangeSetName - The supported ValueList is a list of non-unique ChangeSetNames. These are defined when you call the StartChangeSet action.
                    Status - The supported ValueList is a list of statuses for all change set requests.
                    EntityId - The supported ValueList is a list of unique EntityIds.
                    BeforeStartTime - The supported ValueList is a list of all change sets that started before the filter value.
                    AfterStartTime - The supported ValueList is a list of all change sets that started after the filter value.
                    BeforeEndTime - The supported ValueList is a list of all change sets that ended before the filter value.
                    AfterEndTime - The supported ValueList is a list of all change sets that ended after the filter value.
                (string) --

    :type Sort: dict
    :param Sort: An object that contains two attributes, sortBy and sortOrder.
        SortBy (string) -- For ListEntities, supported attributes include LastModifiedDate (default), Visibility, EntityId, and Name.
            For ListChangeSets, supported attributes include StartTime and EndTime.
        SortOrder (string) -- The sorting order. Can be ASCENDING or DESCENDING. The default value is DESCENDING.

    :type MaxResults: integer
    :param MaxResults: The maximum number of results returned by a single call. This value must be provided in the next call to retrieve the next set of results. By default, this value is 20.

    :type NextToken: string
    :param NextToken: The token value retrieved from a previous call to access the next page of results.

    :rtype: dict

    Response Syntax
    {
        'ChangeSetSummaryList': [
            {
                'ChangeSetId': 'string',
                'ChangeSetArn': 'string',
                'ChangeSetName': 'string',
                'StartTime': 'string',
                'EndTime': 'string',
                'Status': 'PREPARING'|'APPLYING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
                'EntityIdList': [
                    'string',
                ]
            },
        ],
        'NextToken': 'string'
    }

    Response Structure
    (dict) --
        ChangeSetSummaryList (list) -- Array of ChangeSetSummaryListItem objects.
            (dict) -- A summary of a change set returned in a list of change sets when the ListChangeSets action is called.
                ChangeSetId (string) -- The unique identifier for a change set.
                ChangeSetArn (string) -- The ARN associated with the unique identifier for the change set referenced in this request.
                ChangeSetName (string) -- The non-unique name for the change set.
                StartTime (string) -- The time, in ISO 8601 format (2018-02-27T13:45:22Z), when the change set was started.
                EndTime (string) -- The time, in ISO 8601 format (2018-02-27T13:45:22Z), when the change set was finished.
                Status (string) -- The current status of the change set.
                EntityIdList (list) -- This object is a list of entity IDs (string) that are a part of a change set. The entity ID list is a maximum of 20 entities. It must contain at least one entity.
                    (string) --
        NextToken (string) -- The value of the next token, if it exists. Null if there are no more results.

    Exceptions
    MarketplaceCatalog.Client.exceptions.InternalServiceException
    MarketplaceCatalog.Client.exceptions.AccessDeniedException
    MarketplaceCatalog.Client.exceptions.ValidationException
    MarketplaceCatalog.Client.exceptions.ThrottlingException
    """
    pass
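Per the request syntax above, a caller assembles `FilterList` and `Sort` as plain dicts before invoking the operation. A sketch that only builds the keyword arguments (the filter values are illustrative; no AWS call is made, so no credentials are needed):

```python
# Build kwargs for list_change_sets following the request syntax above.
kwargs = {
    "Catalog": "AWSMarketplace",
    "FilterList": [
        # Illustrative: restrict results to in-flight change sets.
        {"Name": "Status", "ValueList": ["PREPARING", "APPLYING"]},
    ],
    "Sort": {"SortBy": "StartTime", "SortOrder": "DESCENDING"},
    "MaxResults": 20,
}

# With a real boto3 client this would then be (not executed here):
#   client = boto3.client("marketplace-catalog")
#   response = client.list_change_sets(**kwargs)
print(sorted(kwargs))  # ['Catalog', 'FilterList', 'MaxResults', 'Sort']
```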
def list_entities(Catalog=None, EntityType=None, FilterList=None, Sort=None, NextToken=None, MaxResults=None):
"""
Provides the list of entities of a given type.
See also: AWS API Documentation
Exceptions
:example: response = client.list_entities(
Catalog='string',
EntityType='string',
FilterList=[
{
'Name': 'string',
'ValueList': [
'string',
]
},
],
Sort={
'SortBy': 'string',
'SortOrder': 'ASCENDING'|'DESCENDING'
},
NextToken='string',
MaxResults=123
)
:type Catalog: string
:param Catalog: [REQUIRED]\nThe catalog related to the request. Fixed value: AWSMarketplace\n
:type EntityType: string
:param EntityType: [REQUIRED]\nThe type of entities to retrieve.\n
:type FilterList: list
:param FilterList: An array of filter objects. Each filter object contains two attributes, filterName and filterValues .\n\n(dict) --A filter object, used to optionally filter results from calls to the ListEntities and ListChangeSets actions.\n\nName (string) --For ListEntities , the supported value for this is an EntityId .\nFor ListChangeSets , the supported values are as follows:\n\nValueList (list) --\nListEntities - This is a list of unique EntityId s.ListChangeSets - The supported filter names and associated ValueList s is as follows:\n\n\nChangeSetName - The supported ValueList is a list of non-unique ChangeSetName s. These are defined when you call the StartChangeSet action.\nStatus - The supported ValueList is a list of statuses for all change set requests.\nEntityId - The supported ValueList is a list of unique EntityId s.\nBeforeStartTime - The supported ValueList is a list of all change sets that started before the filter value.\nAfterStartTime - The supported ValueList is a list of all change sets that started after the filter value.\nBeforeEndTime - The supported ValueList is a list of all change sets that ended before the filter value.\nAfterEndTime - The supported ValueList is a list of all change sets that ended after the filter value.\n\n\n(string) --\n\n\n\n\n\n
:type Sort: dict
:param Sort: An object that contains two attributes, sortBy and sortOrder .\n\nSortBy (string) --For ListEntities , supported attributes include LastModifiedDate (default), Visibility , EntityId , and Name .\nFor ListChangeSets , supported attributes include StartTime and EndTime .\n\nSortOrder (string) --The sorting order. Can be ASCENDING or DESCENDING . The default value is DESCENDING .\n\n\n
:type NextToken: string
:param NextToken: The value of the next token, if it exists. Null if there are no more results.
:type MaxResults: integer
:param MaxResults: Specifies the upper limit of the elements on a single page. If a value isn\'t provided, the default value is 20.
:rtype: dict
ReturnsResponse Syntax
{
'EntitySummaryList': [
{
'Name': 'string',
'EntityType': 'string',
'EntityId': 'string',
'EntityArn': 'string',
'LastModifiedDate': 'string',
'Visibility': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
EntitySummaryList (list) --
Array of EntitySummary object.
(dict) --
This object is a container for common summary information about the entity. The summary doesn\'t contain the whole entity structure, but it does contain information common across all entities.
Name (string) --
The name for the entity. This value is not unique. It is defined by the provider.
EntityType (string) --
The type of the entity.
EntityId (string) --
The unique identifier for the entity.
EntityArn (string) --
The ARN associated with the unique identifier for the entity.
LastModifiedDate (string) --
The last time the entity was published, using ISO 8601 format (2018-02-27T13:45:22Z).
Visibility (string) --
The visibility status of the entity to subscribers. This value can be Public (everyone can view the entity), Limited (the entity is visible to limited accounts only), or Restricted (the entity was published and then unpublished and only existing subscribers can view it).
NextToken (string) --
The value of the next token if it exists. Null if there is no more result.
Exceptions
MarketplaceCatalog.Client.exceptions.InternalServiceException
MarketplaceCatalog.Client.exceptions.AccessDeniedException
MarketplaceCatalog.Client.exceptions.ValidationException
MarketplaceCatalog.Client.exceptions.ResourceNotFoundException
MarketplaceCatalog.Client.exceptions.ThrottlingException
:return: {
'EntitySummaryList': [
{
'Name': 'string',
'EntityType': 'string',
'EntityId': 'string',
'EntityArn': 'string',
'LastModifiedDate': 'string',
'Visibility': 'string'
},
],
'NextToken': 'string'
}
:returns:
MarketplaceCatalog.Client.exceptions.InternalServiceException
MarketplaceCatalog.Client.exceptions.AccessDeniedException
MarketplaceCatalog.Client.exceptions.ValidationException
MarketplaceCatalog.Client.exceptions.ResourceNotFoundException
MarketplaceCatalog.Client.exceptions.ThrottlingException
"""
pass
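The NextToken field described above drives pagination: keep requesting pages until the token comes back null. A minimal sketch, assuming `client` is a boto3 `marketplace-catalog` client; the helper name and the default entity type are illustrative, not part of the API stub:

```python
def list_all_entities(client, catalog="AWSMarketplace", entity_type="SaaSProduct"):
    """Drain every page of list_entities by following NextToken."""
    summaries, token = [], None
    while True:
        kwargs = {"Catalog": catalog, "EntityType": entity_type}
        if token:
            kwargs["NextToken"] = token
        page = client.list_entities(**kwargs)
        summaries.extend(page.get("EntitySummaryList", []))
        token = page.get("NextToken")
        if not token:  # a null/absent token means there are no more results
            return summaries
```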
def start_change_set(Catalog=None, ChangeSet=None, ChangeSetName=None, ClientRequestToken=None):
"""
This operation allows you to request changes in your entities.
See also: AWS API Documentation
Exceptions
:example: response = client.start_change_set(
Catalog='string',
ChangeSet=[
{
'ChangeType': 'string',
'Entity': {
'Type': 'string',
'Identifier': 'string'
},
'Details': 'string'
},
],
ChangeSetName='string',
ClientRequestToken='string'
)
:type Catalog: string
:param Catalog: [REQUIRED]\nThe catalog related to the request. Fixed value: AWSMarketplace\n
:type ChangeSet: list
:param ChangeSet: [REQUIRED]\nArray of change object.\n\n(dict) --An object that contains the ChangeType , Details , and Entity .\n\nChangeType (string) -- [REQUIRED]Change types are single string values that describe your intention for the change. Each change type is unique for each EntityType provided in the change\'s scope.\n\nEntity (dict) -- [REQUIRED]The entity to be changed.\n\nType (string) -- [REQUIRED]The type of entity.\n\nIdentifier (string) --The identifier for the entity.\n\n\n\nDetails (string) -- [REQUIRED]This object contains details specific to the change type of the requested change.\n\n\n\n\n
:type ChangeSetName: string
:param ChangeSetName: Optional case sensitive string of up to 100 ASCII characters. The change set name can be used to filter the list of change sets.
:type ClientRequestToken: string
:param ClientRequestToken: A unique token to identify the request to ensure idempotency.
:rtype: dict
Returns
Response Syntax
{
'ChangeSetId': 'string',
'ChangeSetArn': 'string'
}
Response Structure
(dict) --
ChangeSetId (string) --
Unique identifier generated for the request.
ChangeSetArn (string) --
The ARN associated with the unique identifier generated for the request.
Exceptions
MarketplaceCatalog.Client.exceptions.InternalServiceException
MarketplaceCatalog.Client.exceptions.AccessDeniedException
MarketplaceCatalog.Client.exceptions.ValidationException
MarketplaceCatalog.Client.exceptions.ResourceNotFoundException
MarketplaceCatalog.Client.exceptions.ResourceInUseException
MarketplaceCatalog.Client.exceptions.ThrottlingException
MarketplaceCatalog.Client.exceptions.ServiceQuotaExceededException
:return: {
'ChangeSetId': 'string',
'ChangeSetArn': 'string'
}
:returns:
MarketplaceCatalog.Client.exceptions.InternalServiceException
MarketplaceCatalog.Client.exceptions.AccessDeniedException
MarketplaceCatalog.Client.exceptions.ValidationException
MarketplaceCatalog.Client.exceptions.ResourceNotFoundException
MarketplaceCatalog.Client.exceptions.ResourceInUseException
MarketplaceCatalog.Client.exceptions.ThrottlingException
MarketplaceCatalog.Client.exceptions.ServiceQuotaExceededException
"""
pass
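Since Details must be a JSON string rather than a dict, assembling the ChangeSet argument is a common stumbling point. A sketch of building the payload; the change type and versioned entity type below are placeholders, not values confirmed by this stub:

```python
import json


def build_change_set(entity_id, change_type="AddRevisions", details=None):
    """Assemble the ChangeSet list expected by start_change_set.

    Details is serialized to a JSON string, as the API requires.
    """
    return [
        {
            "ChangeType": change_type,
            "Entity": {"Type": "SaaSProduct@1.0", "Identifier": entity_id},
            "Details": json.dumps(details or {}),
        }
    ]
```

The result would be passed as `ChangeSet=` alongside `Catalog='AWSMarketplace'` in a `start_change_set` call.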
# File: Back-End/Python/CursoPyhton/Mundo 01/Aulas/Aula11.py (repo: DiegoT-dev/Estudos, MIT)
# ----------- Terminal Colors -----------
# ANSI escape sequence ---- \033[X;Y;Zm
# where X is the STYLE, Y is the TEXT (foreground) and Z is the BACKGROUND
# STYLE: 0-(none) / 1-(bold) / 4-(underline) / 7-(negative)
# TEXT: 30-black / 31-red / 32-green / 33-yellow / 34-blue / 35-magenta / 36-cyan / 37-white
# BACK: 40-black / 41-red / 42-green / 43-yellow / 44-blue / 45-magenta / 46-cyan / 47-white
print('\033[1;31;40mHello World!\033[m')
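The same escape sequence can be wrapped in a small helper so the style, text and background codes are named instead of inlined (a sketch, not part of the original lesson):

```python
def colorize(text, style=0, fg=37, bg=40):
    """Wrap text in an ANSI SGR sequence: ESC[style;fg;bg m ... ESC[m resets."""
    return f"\033[{style};{fg};{bg}m{text}\033[m"


print(colorize("Hello World!", style=1, fg=31, bg=40))  # bold red on black
```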
# File: saxo_openapi/endpoints/rootservices/diagnostics.py (repo: shyrwinsia/saxo_openapi, MIT)
# -*- encoding: utf-8 -*-
"""Handle root-services diagnostics endpoints."""
from .base import RootService
from ..decorators import endpoint
@endpoint("openapi/root/v1/diagnostics/get/")
class Get(RootService):
"""Send a GET request and get a 200 OK response back."""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate a Get request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Get()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Get, self).__init__()
@endpoint("openapi/root/v1/diagnostics/post/", "POST")
class Post(RootService):
"""Send a POST request and get a 200 OK response back."""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate a Post request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Post()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Post, self).__init__()
@endpoint("openapi/root/v1/diagnostics/put/", "PUT")
class Put(RootService):
"""Send a PUT request and get a 200 OK response back."""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate a Put request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Put()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Put, self).__init__()
@endpoint("openapi/root/v1/diagnostics/delete/", "DELETE")
class Delete(RootService):
"""Send a DELETE request and get a 200 OK response back."""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate a Delete request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Delete()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Delete, self).__init__()
@endpoint("openapi/root/v1/diagnostics/patch/", "PATCH")
class Patch(RootService):
"""Send a PATCH request and get a 200 OK response back."""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate a Patch request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Patch()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Patch, self).__init__()
@endpoint("openapi/root/v1/diagnostics/head/", "HEAD")
class Head(RootService):
"""Send a HEAD request and get a 200 OK response back."""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate a Head request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Head()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Head, self).__init__()
@endpoint("openapi/root/v1/diagnostics/options/", "OPTIONS")
class Options(RootService):
"""Send a OPTIONS request and get a 200 OK response back."""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate a Options request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Options()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Options, self).__init__()
@endpoint("openapi/root/v1/diagnostics/echo/")
class Echo(RootService):
"""Send a any request and get a 200 OK response with verb, url,
headers and body in the response body.
"""
RESPONSE_DATA = None
def __init__(self):
"""Instantiate an Echo request.
>>> import saxo_openapi
>>> import saxo_openapi.endpoints.rootservices as rs
>>> import json
>>> client = saxo_openapi.API(access_token=...)
>>> r = rs.diagnostics.Echo()
>>> rv = client.request(r)
>>> assert r.status_code == r.expected_status
No data is returned.
"""
super(Echo, self).__init__()
# File: refnx/reduce/test/test_reduce.py (repo: dcortie/refnx, BSD-3-Clause)
import os
from os.path import join as pjoin
import os.path
from refnx.reduce.platypusnexus import calculate_wavelength_bins
from refnx.reduce.reduce import PolarisationEfficiency
import warnings
import pytest
from numpy.testing import assert_equal, assert_allclose
import xml.etree.ElementTree as ET
from refnx.reduce import (
reduce_stitch,
PlatypusReduce,
ReductionOptions,
SpatzReduce,
SpinSet,
PolarisedReduce,
)
from refnx.dataset import ReflectDataset
class TestPlatypusReduce:
@pytest.mark.usefixtures("no_data_directory")
@pytest.fixture(autouse=True)
def setup_method(self, tmpdir, data_directory):
self.pth = pjoin(data_directory, "reduce")
self.cwd = os.getcwd()
self.tmpdir = tmpdir.strpath
os.chdir(self.tmpdir)
return 0
def teardown_method(self):
os.chdir(self.cwd)
def test_smoke(self):
# a quick smoke test to check that the reduction can occur
# warnings filter for pixel size
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
a, fname = reduce_stitch(
[708, 709, 710],
[711, 711, 711],
data_folder=self.pth,
reduction_options={"rebin_percent": 2},
)
a.save("test1.dat")
assert os.path.isfile("./test1.dat")
# reduce_stitch should take a ReductionOptions dict
opts = ReductionOptions()
opts["rebin_percent"] = 2
a2, fname = reduce_stitch(
[708, 709, 710],
[711, 711, 711],
data_folder=self.pth,
reduction_options=[opts] * 3,
)
a2.save("test2.dat")
assert os.path.isfile("./test2.dat")
assert_allclose(a.y, a2.y)
def test_reduction_method(self):
# a quick smoke test to check that the reduction can occur
# warnings filter for pixel size
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
a = PlatypusReduce("PLP0000711.nx.hdf", data_folder=self.pth)
# try reduction with the reduce method
a.reduce(
"PLP0000708.nx.hdf",
data_folder=self.pth,
rebin_percent=4,
)
# try reduction with the __call__ method
a(
"PLP0000708.nx.hdf",
data_folder=self.pth,
rebin_percent=4,
)
# this should also have saved a couple of files in the current
# directory
assert os.path.isfile("./PLP0000708_0.dat")
assert os.path.isfile("./PLP0000708_0.xml")
# can we read the file
ReflectDataset("./PLP0000708_0.dat")
# try writing offspecular data
a.write_offspecular("offspec.xml", 0)
def test_free_liquids(self):
# smoke test for free liquids
a0 = PlatypusReduce("PLP0038418.nx.hdf", data_folder=self.pth)
a1 = PlatypusReduce("PLP0038417.nx.hdf", data_folder=self.pth)
# try reduction with the reduce method
d0, r0 = a0.reduce(
"PLP0038420.nx.hdf", data_folder=self.pth, rebin_percent=4
)
d1, r1 = a1.reduce(
"PLP0038421.nx.hdf", data_folder=self.pth, rebin_percent=4
)
def test_event_reduction(self):
# check that eventmode reduction can occur, and that there are the
# correct number of datasets produced.
# warnings filter for pixel size
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
a = PlatypusReduce(pjoin(self.pth, "PLP0011613.nx.hdf"))
a.reduce(
pjoin(self.pth, "PLP0011641.nx.hdf"),
integrate=0,
rebin_percent=2,
eventmode=[0, 900, 1800],
)
assert_equal(a.y.shape[0], 2)
# check that two datasets are written out.
assert os.path.isfile("PLP0011641_0.dat")
assert os.path.isfile("PLP0011641_1.dat")
# check that the resolutions are pretty much the same
assert_allclose(
a.x_err[0] / a.x[0], a.x_err[1] / a.x[1], atol=0.001
)
# check that the (right?) timestamps are written into the datafile
tree = ET.parse(pjoin(os.getcwd(), "PLP0011641_1.xml"))
tree.find(".//REFentry").attrib["time"]
# TODO, timestamp is created in the local time stamp of the testing
# machine. The following works if reduced with a computer in
# Australian EST.
# assert_(t == '2012-01-20T22:05:32')
# # what happens if you have too many frame bins
# # you should get the same number of fr
# a = PlatypusReduce(
# os.path.join(self.pth, 'PLP0011613.nx.hdf'),
# reflect=os.path.join(self.pth, 'PLP0011641.nx.hdf'),
# integrate=0, rebin_percent=2,
# eventmode=[0, 25200, 27000, 30000])
# assert_equal(a.ydata.shape[0], 3)
class TestSpatzReduce:
@pytest.mark.usefixtures("no_data_directory")
@pytest.fixture(autouse=True)
def setup_method(self, tmpdir, data_directory):
self.pth = pjoin(data_directory, "reduce")
self.cwd = os.getcwd()
self.tmpdir = tmpdir.strpath
os.chdir(self.tmpdir)
return 0
def teardown_method(self):
os.chdir(self.cwd)
def test_smoke(self):
# a quick smoke test to check that the reduction can occur
# warnings filter for pixel size
a, fname = reduce_stitch(
[660, 661],
[658, 659],
data_folder=self.pth,
prefix="SPZ",
reduction_options={"rebin_percent": 2},
)
a.save("test1.dat")
assert os.path.isfile("./test1.dat")
# reduce_stitch should take a list of ReductionOptions dict,
# separate dicts are used for different angles
opts = ReductionOptions()
opts["rebin_percent"] = 2
a2, fname = reduce_stitch(
[660, 661],
[658, 659],
data_folder=self.pth,
prefix="SPZ",
reduction_options=[opts] * 2,
)
a2.save("test2.dat")
assert os.path.isfile("./test2.dat")
assert_allclose(a.y, a2.y)
def test_reduction_method(self):
# a quick smoke test to check that the reduction can occur
a = SpatzReduce("SPZ0000658.nx.hdf", data_folder=self.pth)
# try reduction with the reduce method
a.reduce(
"SPZ0000660.nx.hdf",
data_folder=self.pth,
rebin_percent=4,
)
# try reduction with the __call__ method
a(
"SPZ0000660.nx.hdf",
data_folder=self.pth,
rebin_percent=4,
)
# this should also have saved a couple of files in the current
# directory
assert os.path.isfile("./SPZ0000660_0.dat")
assert os.path.isfile("./SPZ0000660_0.xml")
# try writing offspecular data
a.write_offspecular("offspec.xml", 0)
class TestPolarisedReduce:
@pytest.mark.usefixtures("no_data_directory")
@pytest.fixture(autouse=True)
def setup_method(self, tmpdir, data_directory):
self.pth = pjoin(data_directory, "reduce", "PNR_files")
self.cwd = os.getcwd()
self.tmpdir = tmpdir.strpath
os.chdir(self.tmpdir)
return 0
def teardown_method(self):
os.chdir(self.cwd)
def test_polarised_reduction_method_4sc(self):
# a quick smoke test to check that the reduction can occur
# warnings filter for pixel size
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
spinset_rb = SpinSet(
down_down=pjoin(self.pth, "PLP0012785.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012787.nx.hdf"),
up_down=pjoin(self.pth, "PLP0012786.nx.hdf"),
down_up=pjoin(self.pth, "PLP0012788.nx.hdf"),
)
spinset_db = SpinSet(
down_down=pjoin(self.pth, "PLP0012793.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012795.nx.hdf"),
up_down=pjoin(self.pth, "PLP0012794.nx.hdf"),
down_up=pjoin(self.pth, "PLP0012796.nx.hdf"),
)
a = PolarisedReduce(spinset_db)
# try reduction with the reduce method
a.reduce(
spinset_rb,
data_folder=self.pth,
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=4,
)
# try reduction with the __call__ method
a(
spinset_rb,
data_folder=self.pth,
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=4,
)
# this should also have saved a couple of files in the current
# directory
assert os.path.isfile("./PLP0012785_0_PolCorr.dat")
assert os.path.isfile("./PLP0012786_0_PolCorr.dat")
assert os.path.isfile("./PLP0012787_0_PolCorr.dat")
assert os.path.isfile("./PLP0012788_0_PolCorr.dat")
# can we read the file
dd = ReflectDataset("./PLP0012785_0_PolCorr.dat")
uu = ReflectDataset("./PLP0012787_0_PolCorr.dat")
ud = ReflectDataset("./PLP0012786_0_PolCorr.dat")
du = ReflectDataset("./PLP0012788_0_PolCorr.dat")
# check if the written data is the same as what is in the reducers
assert_equal(dd.x, list(reversed(a.reducers["dd"].x[0])))
assert_equal(du.x, list(reversed(a.reducers["du"].x[0])))
assert_equal(ud.x, list(reversed(a.reducers["ud"].x[0])))
assert_equal(uu.x, list(reversed(a.reducers["uu"].x[0])))
assert_equal(dd.y, list(reversed(a.reducers["dd"].y_corr[0])))
assert_equal(du.y, list(reversed(a.reducers["du"].y_corr[0])))
assert_equal(ud.y, list(reversed(a.reducers["ud"].y_corr[0])))
assert_equal(uu.y, list(reversed(a.reducers["uu"].y_corr[0])))
assert_equal(
a.reducers["dd"].direct_beam.m_spec,
a.reducers["du"].direct_beam.m_spec,
)
assert_equal(
a.reducers["uu"].direct_beam.m_spec,
a.reducers["ud"].direct_beam.m_spec,
)
# Check that the shapes of all the spectra that need to be the
# same are the same.
for sc in a._reduced_successfully:
reducer = a.reducers[sc]
assert (
reducer.reflected_beam.m_spec_polcorr.shape
== reducer.reflected_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec_polcorr.shape
== reducer.direct_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec_polcorr.shape
== reducer.reflected_beam.m_spec_polcorr.shape
)
assert (
reducer.reflected_beam.m_spec.shape
== reducer.reflected_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec.shape
== reducer.direct_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec.shape
== reducer.reflected_beam.m_spec.shape
)
# Make sure there is a difference between m_spec and
# m_spec_polcorr
with pytest.raises(AssertionError):
assert_equal(
reducer.reflected_beam.m_spec,
reducer.reflected_beam.m_spec_polcorr,
)
def test_polarised_reduction_method_3sc(self):
# a quick smoke test to check that the reduction can occur
# warnings filter for pixel size
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
spinset_rb = SpinSet(
down_down=pjoin(self.pth, "PLP0012785.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012787.nx.hdf"),
up_down=pjoin(self.pth, "PLP0012786.nx.hdf"),
)
spinset_db = SpinSet(
down_down=pjoin(self.pth, "PLP0012793.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012795.nx.hdf"),
up_down=pjoin(self.pth, "PLP0012794.nx.hdf"),
)
a = PolarisedReduce(spinset_db)
# try reduction with the reduce method
a.reduce(
spinset_rb,
data_folder=self.pth,
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=4,
)
# try reduction with the __call__ method
a(
spinset_rb,
data_folder=self.pth,
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=4,
)
# this should also have saved a couple of files in the current
# directory
assert os.path.isfile("./PLP0012785_0_PolCorr.dat")
assert os.path.isfile("./PLP0012786_0_PolCorr.dat")
assert os.path.isfile("./PLP0012787_0_PolCorr.dat")
# can we read the file
dd = ReflectDataset("./PLP0012785_0_PolCorr.dat")
uu = ReflectDataset("./PLP0012787_0_PolCorr.dat")
ud = ReflectDataset("./PLP0012786_0_PolCorr.dat")
# check if the written data is the same as what is in the reducers
# Note: the order of the data is reversed in the reducers
# compared to the .dat file
assert_equal(dd.x, list(reversed(a.reducers["dd"].x[0])))
assert_equal(ud.x, list(reversed(a.reducers["ud"].x[0])))
assert_equal(uu.x, list(reversed(a.reducers["uu"].x[0])))
assert_equal(dd.y, list(reversed(a.reducers["dd"].y_corr[0])))
assert_equal(ud.y, list(reversed(a.reducers["ud"].y_corr[0])))
assert_equal(uu.y, list(reversed(a.reducers["uu"].y_corr[0])))
assert_equal(
a.reducers["dd"].direct_beam.m_spec,
a.reducers["du"].direct_beam.m_spec,
)
assert_equal(
a.reducers["uu"].direct_beam.m_spec,
a.reducers["ud"].direct_beam.m_spec,
)
# Check that the shapes of all the spectra that need to be the
# same are the same.
for sc in a._reduced_successfully:
reducer = a.reducers[sc]
assert (
reducer.reflected_beam.m_spec_polcorr.shape
== reducer.reflected_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec_polcorr.shape
== reducer.direct_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec_polcorr.shape
== reducer.reflected_beam.m_spec_polcorr.shape
)
assert (
reducer.reflected_beam.m_spec.shape
== reducer.reflected_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec.shape
== reducer.direct_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec.shape
== reducer.reflected_beam.m_spec.shape
)
# Make sure there is a difference between m_spec and
# m_spec_polcorr
with pytest.raises(AssertionError):
assert_equal(
reducer.reflected_beam.m_spec,
reducer.reflected_beam.m_spec_polcorr,
)
def test_polarised_reduction_method_2sc(self):
# a quick smoke test to check that the reduction can occur
# warnings filter for pixel size
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
spinset_rb = SpinSet(
down_down=pjoin(self.pth, "PLP0012785.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012787.nx.hdf"),
)
spinset_db = SpinSet(
down_down=pjoin(self.pth, "PLP0012793.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012795.nx.hdf"),
)
a = PolarisedReduce(spinset_db)
# try reduction with the reduce method
a.reduce(
spinset_rb,
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=3,
)
# try reduction with the __call__ method
a(
spinset_rb,
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=3,
)
# this should also have saved a couple of files in the current
# directory
assert os.path.isfile("./PLP0012785_0_PolCorr.dat")
assert os.path.isfile("./PLP0012787_0_PolCorr.dat")
# can we read the file
dd = ReflectDataset("./PLP0012785_0_PolCorr.dat")
uu = ReflectDataset("./PLP0012787_0_PolCorr.dat")
# check if the written data is the same as what is in the reducers
assert_equal(dd.x, list(reversed(a.reducers["dd"].x[0])))
assert_equal(uu.x, list(reversed(a.reducers["uu"].x[0])))
assert_equal(dd.y, list(reversed(a.reducers["dd"].y_corr[0])))
assert_equal(uu.y, list(reversed(a.reducers["uu"].y_corr[0])))
assert_equal(
a.reducers["dd"].direct_beam.m_spec,
a.reducers["du"].direct_beam.m_spec,
)
assert_equal(
a.reducers["uu"].direct_beam.m_spec,
a.reducers["ud"].direct_beam.m_spec,
)
# Check that the shapes of all the spectra that need to be the
# same are the same.
for sc in a._reduced_successfully:
reducer = a.reducers[sc]
assert (
reducer.reflected_beam.m_spec_polcorr.shape
== reducer.reflected_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec_polcorr.shape
== reducer.direct_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec_polcorr.shape
== reducer.reflected_beam.m_spec_polcorr.shape
)
assert (
reducer.reflected_beam.m_spec.shape
== reducer.reflected_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec.shape
== reducer.direct_beam.m_spec_sd.shape
)
assert (
reducer.direct_beam.m_spec.shape
== reducer.reflected_beam.m_spec.shape
)
# Make sure there is a difference between m_spec and
# m_spec_polcorr
with pytest.raises(AssertionError):
assert_equal(
reducer.reflected_beam.m_spec,
reducer.reflected_beam.m_spec_polcorr,
)
def test_nsf_spin_channels(self):
"""
Check that the efficiency-corrected R++ channel is assigned properly
in the reducer by comparing where the reflectivity drops below a
certain threshold
"""
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
spinset_rb = SpinSet(
down_down=pjoin(self.pth, "PLP0012785.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012787.nx.hdf"),
)
spinset_db = SpinSet(
down_down=pjoin(self.pth, "PLP0012793.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012795.nx.hdf"),
)
a = PolarisedReduce(spinset_db)
# Reduce and correct data
a.reduce(
spinset_rb,
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=3,
)
# Get Q position where the reflectivity drops to 0.5 for the
# "uu" and "dd" datasets
q_dd = a.reducers["dd"].x[0][
abs(a.reducers["dd"].y - 0.5).argmin()
]
q_uu = a.reducers["uu"].x[0][
abs(a.reducers["uu"].y - 0.5).argmin()
]
# Check x_uu is larger than x_dd. This is because the spin up
# neutrons see a higher potential for the magnetically saturated
# permalloy film. This checks that the "uu" and "dd" spin channels
# didn't get mixed up during the reduction process
assert q_uu > q_dd
class TestPolarisationEfficiency:
@pytest.mark.usefixtures("no_data_directory")
@pytest.fixture(autouse=True)
def setup_method(self, tmpdir, data_directory):
self.pth = pjoin(data_directory, "reduce", "PNR_files")
self.cwd = os.getcwd()
self.tmpdir = tmpdir.strpath
os.chdir(self.tmpdir)
return 0
def teardown_method(self):
os.chdir(self.cwd)
def test_smoke(self):
wavelength_axis = calculate_wavelength_bins(2.5, 12.5, 3)
peff = PolarisationEfficiency(wavelength_axis)
assert peff.combined_efficiency_matrix.shape == tuple(
[len(wavelength_axis), 4, 4]
)
def test_config_difference(self):
wavelength_axis = calculate_wavelength_bins(2.5, 12.5, 3)
p_PF = PolarisationEfficiency(wavelength_axis, config="PF")
p_full = PolarisationEfficiency(wavelength_axis, config="full")
with pytest.raises(AssertionError):
assert_equal(
p_PF.combined_efficiency_matrix,
p_full.combined_efficiency_matrix,
)
def test_input(self):
wavelength_axis = calculate_wavelength_bins(2.5, 12.5, 3).reshape(
1, -1
)
with pytest.raises(ValueError):
peff = PolarisationEfficiency(wavelength_axis)
assert peff
def test_manual_input_to_PolarisedReduce(self):
"""
Test the manual input of pol_eff into a
`refnx.reduce.PolarisedReduce` object and see if it is the same as
an automatically created one
"""
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
# Create SpinSets, PolarisedReduce object and ReductionOptions
spinset_rb = SpinSet(
down_down=pjoin(self.pth, "PLP0012785.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012787.nx.hdf"),
)
spinset_db = SpinSet(
down_down=pjoin(self.pth, "PLP0012793.nx.hdf"),
up_up=pjoin(self.pth, "PLP0012795.nx.hdf"),
)
a = PolarisedReduce(spinset_db)
rdo = ReductionOptions(
lo_wavelength=2.5,
hi_wavelength=12.5,
rebin_percent=3,
)
# Reduce and correct reflected beams
a.reduce(spinset_rb, **rdo)
# Create another polarised reducer to compare
b = PolarisedReduce(spinset_db)
# Create PolarisationEfficiency object and reduce
pol_eff = PolarisationEfficiency(
a.reducers["dd"].direct_beam.m_lambda[0],
config="full",
)
b.reduce(spinset_rb, pol_eff=pol_eff, **rdo)
for sc in a._reduced_successfully:
assert_equal(a.reducers[sc].x, b.reducers[sc].x)
assert_equal(a.reducers[sc].y, b.reducers[sc].y)
assert_equal(a.reducers[sc].y_err, b.reducers[sc].y_err)
assert_equal(a.reducers[sc].y_corr, b.reducers[sc].y_corr)
# File: utils/libritts/__init__.py (repo: shaun95/torch-tacospawn, MIT)
from speechset import Config
from .dataset import LibriTTS, LibriTTSDataset
from .dump import DumpDataset
# File: readCSV.py (repo: timeNeverComesBack/DownloadPhotosUsingSQL, MIT)
def readCSV():
print("打开CSV文件……")
print("测试!") | 18 | 22 | 0.537037 | 7 | 54 | 5 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203704 | 54 | 3 | 23 | 18 | 0.674419 | 0 | 0 | 0 | 0 | 0 | 0.218182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
# File: tests/__init__.py (repo: otchita/not-alone, MIT)
"""Unit test package for nalone."""
# File: neoOkpara/Phase-1/Day7/stderrPrint.py (repo: CodedLadiesInnovateTech/-python-challenge-solutions, MIT)
from __future__ import print_function
import sys
message = "import statement failed"
print("Error Message: ", message, file=sys.stderr, end="\n")
print("Nothing to show ", message)
| 22.875 | 60 | 0.748634 | 25 | 183 | 5.28 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131148 | 183 | 7 | 61 | 26.142857 | 0.830189 | 0 | 0 | 0 | 0 | 0 | 0.306011 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0.6 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
bd4fa56aa992df75b7a964daaf44530712ff65a4 | 184 | py | Python | django_slack_oauth/admin.py | District-so/django-slack-oauth | 33e25e732f21ace9d24dddf06af15212ef0c0ecc | [
"MIT"
] | 51 | 2015-04-09T20:25:38.000Z | 2022-01-14T08:45:08.000Z | django_slack_oauth/admin.py | District-so/django-slack-oauth | 33e25e732f21ace9d24dddf06af15212ef0c0ecc | [
"MIT"
] | 23 | 2015-07-06T14:37:11.000Z | 2021-09-22T09:21:58.000Z | django_slack_oauth/admin.py | District-so/django-slack-oauth | 33e25e732f21ace9d24dddf06af15212ef0c0ecc | [
"MIT"
] | 36 | 2015-07-06T13:14:26.000Z | 2021-04-15T06:39:57.000Z | # -*- coding: utf-8 -*-
from django.contrib import admin
from .models import SlackOAuthRequest
@admin.register(SlackOAuthRequest)
class SlackOAuthAdmin(admin.ModelAdmin):
pass
| 16.727273 | 40 | 0.76087 | 20 | 184 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006289 | 0.13587 | 184 | 10 | 41 | 18.4 | 0.874214 | 0.11413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
1fccd65b5f8abd5ec71ca2c521164bfc5017fa06 | 100 | py | Python | bitmovin_api_sdk/encoding/encodings/muxings/mxf/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 11 | 2019-07-03T10:41:16.000Z | 2022-02-25T21:48:06.000Z | bitmovin_api_sdk/encoding/encodings/muxings/mxf/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 8 | 2019-11-23T00:01:25.000Z | 2021-04-29T12:30:31.000Z | bitmovin_api_sdk/encoding/encodings/muxings/mxf/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 13 | 2020-01-02T14:58:18.000Z | 2022-03-26T12:10:30.000Z | from bitmovin_api_sdk.encoding.encodings.muxings.mxf.customdata.customdata_api import CustomdataApi
| 50 | 99 | 0.9 | 13 | 100 | 6.692308 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 100 | 1 | 100 | 100 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
1ffe0ae41d606cb845a0bd68a82597d54c8b1588 | 22 | py | Python | never_cache_post/models.py | incuna/django-never-cache-post | b74b60ce980472f72e68d0cd819f1ccc70d80eaf | [
"BSD-2-Clause"
] | 2 | 2016-10-11T02:03:25.000Z | 2017-09-29T23:58:44.000Z | never_cache_post/models.py | incuna/django-never-cache-post | b74b60ce980472f72e68d0cd819f1ccc70d80eaf | [
"BSD-2-Clause"
] | null | null | null | never_cache_post/models.py | incuna/django-never-cache-post | b74b60ce980472f72e68d0cd819f1ccc70d80eaf | [
"BSD-2-Clause"
] | null | null | null | # Required by django.
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0.863636 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9503daf272b2c7529ac7e3cf512a2fa67efb158d | 18 | py | Python | SIE/base.py | lz356/sensor_impact_FDD | c0ed39bc2e9c30536738b548eca9930a1ed62600 | [
"MIT"
] | 2 | 2022-03-07T17:53:02.000Z | 2022-03-21T17:14:47.000Z | SIE/base.py | NREL/sensor-impact-evaluation-and-verification | 8aeca97d9872ae024ed5846cf1da1c948df8cbc1 | [
"MIT"
] | null | null | null | SIE/base.py | NREL/sensor-impact-evaluation-and-verification | 8aeca97d9872ae024ed5846cf1da1c948df8cbc1 | [
"MIT"
] | null | null | null | Class base()
pass
| 6 | 12 | 0.722222 | 3 | 18 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 2 | 13 | 9 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.5 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
1f0176844fb2e3206044d826300070b2d3e3edf3 | 28 | py | Python | buttle_res.py | kagihiro/discordpy-startup | 0e358d5eb594d0b7b50c51adac92497b82481f86 | [
"MIT"
] | null | null | null | buttle_res.py | kagihiro/discordpy-startup | 0e358d5eb594d0b7b50c51adac92497b82481f86 | [
"MIT"
] | null | null | null | buttle_res.py | kagihiro/discordpy-startup | 0e358d5eb594d0b7b50c51adac92497b82481f86 | [
"MIT"
] | null | null | null | # def send_battle_res(ctx):
| 14 | 27 | 0.75 | 5 | 28 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.76 | 0.892857 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
1f2e85af327454b164c5b40bc9e8d14ee0234a62 | 46 | py | Python | tests/__init__.py | scality/changelog-binder | a2c7c0ef5fef768cfd4bc4d71bfb8264721fe92a | [
"Apache-2.0"
] | 1 | 2021-01-11T16:19:37.000Z | 2021-01-11T16:19:37.000Z | tests/__init__.py | scality/changelog-binder | a2c7c0ef5fef768cfd4bc4d71bfb8264721fe92a | [
"Apache-2.0"
] | 65 | 2021-01-11T16:03:42.000Z | 2022-03-28T15:15:12.000Z | tests/__init__.py | scality/changelog-binder | a2c7c0ef5fef768cfd4bc4d71bfb8264721fe92a | [
"Apache-2.0"
] | null | null | null | """Tests for the changelog-binder package."""
| 23 | 45 | 0.717391 | 6 | 46 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 1 | 46 | 46 | 0.804878 | 0.847826 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
1f89a0f96a61ad2c942ab90fd97e8166320fe160 | 134 | py | Python | WelcomeLiteApp/admin.py | HarryLinux/WelcomeLite | 9102be5a62d3bcb471d3d39e7a7a1780ae61589e | [
"MIT"
] | 1 | 2020-12-21T20:20:17.000Z | 2020-12-21T20:20:17.000Z | WelcomeLiteApp/admin.py | HarryLinux/WelcomeLite | 9102be5a62d3bcb471d3d39e7a7a1780ae61589e | [
"MIT"
] | null | null | null | WelcomeLiteApp/admin.py | HarryLinux/WelcomeLite | 9102be5a62d3bcb471d3d39e7a7a1780ae61589e | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import JobOffer, JobTitle
admin.site.register(JobOffer)
admin.site.register(JobTitle)
| 19.142857 | 38 | 0.820896 | 18 | 134 | 6.111111 | 0.555556 | 0.163636 | 0.309091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097015 | 134 | 6 | 39 | 22.333333 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
1f921554763505cf7edd6ddccc9a5df80269227f | 21 | py | Python | realtime_voice_conversion/__init__.py | karanokuri/realtime-yukarin | a8cd36b180c93d7251327b3ca4a027d2a9f0c868 | [
"MIT"
] | 296 | 2019-06-07T22:33:21.000Z | 2022-03-30T13:41:37.000Z | realtime_voice_conversion/__init__.py | Hiroshiba/realtime-voice-conversion | a8cd36b180c93d7251327b3ca4a027d2a9f0c868 | [
"MIT"
] | 6 | 2019-10-21T10:32:20.000Z | 2021-07-01T05:33:35.000Z | realtime_voice_conversion/__init__.py | karanokuri/realtime-yukarin | a8cd36b180c93d7251327b3ca4a027d2a9f0c868 | [
"MIT"
] | 38 | 2019-06-13T00:23:04.000Z | 2022-03-29T02:17:49.000Z | from . import stream
| 10.5 | 20 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
2f13bc2349618150d608da3e6e040be2cf3e0c19 | 185 | py | Python | _app/accounts/models.py | OmarThinks/cantiin_django | 3c80ba0aa1b6a92d232b1147e217a0d6ac8fc834 | [
"MIT"
] | 1 | 2021-08-17T21:27:32.000Z | 2021-08-17T21:27:32.000Z | _app/accounts/models.py | OmarThinks/cantiin_django | 3c80ba0aa1b6a92d232b1147e217a0d6ac8fc834 | [
"MIT"
] | null | null | null | _app/accounts/models.py | OmarThinks/cantiin_django | 3c80ba0aa1b6a92d232b1147e217a0d6ac8fc834 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
def __str__(self):
return self.username
from django.contrib import admin
admin.site.register(User)
| 16.818182 | 51 | 0.794595 | 25 | 185 | 5.72 | 0.68 | 0.13986 | 0.237762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124324 | 185 | 10 | 52 | 18.5 | 0.882716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
2f3b13fd37ad6bded76835888eb98a41ef06f4c2 | 31 | py | Python | parser/team28/models/symbol.py | susanliss/tytus | a613a2352cf4a1d0e90ce27bb346ab60ed8039cc | [
"MIT"
] | null | null | null | parser/team28/models/symbol.py | susanliss/tytus | a613a2352cf4a1d0e90ce27bb346ab60ed8039cc | [
"MIT"
] | null | null | null | parser/team28/models/symbol.py | susanliss/tytus | a613a2352cf4a1d0e90ce27bb346ab60ed8039cc | [
"MIT"
] | null | null | null | class Symbol(object):
pass
| 10.333333 | 21 | 0.677419 | 4 | 31 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.225806 | 31 | 2 | 22 | 15.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
2f45261ae369c525a795da5f51055f6372572d5e | 41 | py | Python | pptb/nn/__init__.py | hanknewbird/paddle-toolbox | 1f1e4d2dd38e797092c1bba0ec3797dd4bef43f6 | [
"Apache-2.0",
"MIT"
] | 1 | 2021-12-08T03:50:11.000Z | 2021-12-08T03:50:11.000Z | pptb/nn/__init__.py | hanknewbird/paddle-toolbox | 1f1e4d2dd38e797092c1bba0ec3797dd4bef43f6 | [
"Apache-2.0",
"MIT"
] | null | null | null | pptb/nn/__init__.py | hanknewbird/paddle-toolbox | 1f1e4d2dd38e797092c1bba0ec3797dd4bef43f6 | [
"Apache-2.0",
"MIT"
] | null | null | null | from .loss import *
from .layer import *
| 13.666667 | 20 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 21 | 20.5 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
2f5c3a60ff6a0c18a935b4c0b81b599c68b5cfc9 | 222 | py | Python | Chapter03/return_funct.py | PacktPublishing/Secret-Recipes-of-the-Python-Ninja | 805d00c7a54927ba94c9077e9a580508ee3c5e56 | [
"MIT"
] | 13 | 2018-06-21T01:44:49.000Z | 2021-12-01T10:49:53.000Z | Chapter03/return_funct.py | PacktPublishing/Secret-Recipes-of-the-Python-Ninja | 805d00c7a54927ba94c9077e9a580508ee3c5e56 | [
"MIT"
] | null | null | null | Chapter03/return_funct.py | PacktPublishing/Secret-Recipes-of-the-Python-Ninja | 805d00c7a54927ba94c9077e9a580508ee3c5e56 | [
"MIT"
] | 6 | 2018-10-05T08:29:24.000Z | 2022-01-11T14:49:50.000Z | >>> def func_creator():
... def return_saying():
... return "Blessed are the cheese makers"
... return return_saying
...
>>> statement = func_creator()
>>> print(statement())
Blessed are the cheese makers
| 24.666667 | 50 | 0.635135 | 25 | 222 | 5.48 | 0.48 | 0.160584 | 0.189781 | 0.277372 | 0.364964 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202703 | 222 | 8 | 51 | 27.75 | 0.774011 | 0 | 0 | 0 | 0 | 0 | 0.130631 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.125 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
2f8598edff5682fcf5e31f301fa09429aed08aae | 120 | py | Python | test/__init__.py | bollwyvl/pytest-check-links | 2fdef4efea54a4ca147bd7c6b0f185f00a674af6 | [
"BSD-3-Clause"
] | null | null | null | test/__init__.py | bollwyvl/pytest-check-links | 2fdef4efea54a4ca147bd7c6b0f185f00a674af6 | [
"BSD-3-Clause"
] | null | null | null | test/__init__.py | bollwyvl/pytest-check-links | 2fdef4efea54a4ca147bd7c6b0f185f00a674af6 | [
"BSD-3-Clause"
] | null | null | null | import os
here = os.path.dirname(os.path.abspath(__file__))
examples = os.path.join(os.path.dirname(here), 'examples')
| 24 | 58 | 0.741667 | 19 | 120 | 4.473684 | 0.473684 | 0.282353 | 0.305882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 120 | 4 | 59 | 30 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
85d70bc7881e00c01cfe75b42f0a2c585a5ee3cd | 176 | py | Python | tests/test_task1.py | CloudReactor/postgres_to_snowflake | d25549e70df41cf3388322c9e36096166c1f4209 | [
"BSD-2-Clause"
] | 1 | 2022-01-15T09:45:22.000Z | 2022-01-15T09:45:22.000Z | tests/test_task1.py | CloudReactor/postgres_to_snowflake | d25549e70df41cf3388322c9e36096166c1f4209 | [
"BSD-2-Clause"
] | 3 | 2020-09-13T05:04:13.000Z | 2021-03-07T09:18:48.000Z | tests/test_task1.py | CloudReactor/postgres_to_snowflake | d25549e70df41cf3388322c9e36096166c1f4209 | [
"BSD-2-Clause"
] | 2 | 2020-06-24T23:40:49.000Z | 2020-07-27T04:16:25.000Z | from task_1 import make_start_message
class TestTask1:
def test_make_start_message(self):
x = make_start_message('Not')
assert x == 'Not sleeping!' | 29.333333 | 43 | 0.670455 | 24 | 176 | 4.583333 | 0.666667 | 0.245455 | 0.436364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.25 | 176 | 6 | 43 | 29.333333 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.090395 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
c8148c5ddea3950a7ac3b973f158f2c5d3021f73 | 76 | py | Python | utils/file.py | hhandika/py-seq | 20d552076998897a2a17f8b17d5574f9fa0192e2 | [
"MIT"
] | null | null | null | utils/file.py | hhandika/py-seq | 20d552076998897a2a17f8b17d5574f9fa0192e2 | [
"MIT"
] | null | null | null | utils/file.py | hhandika/py-seq | 20d552076998897a2a17f8b17d5574f9fa0192e2 | [
"MIT"
] | null | null | null | def get_path(path: str, fname: str) -> str :
return path + "/" + fname | 38 | 44 | 0.578947 | 11 | 76 | 3.909091 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 76 | 2 | 45 | 38 | 0.754386 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
c83cd61a129771f422f7947202d4bfd328cebea0 | 146 | py | Python | bitl/datasets/lastfm.py | tritas/bitl | f394e633e38f983fed10c3e672af7be2883cbdbb | [
"BSD-3-Clause"
] | 1 | 2017-06-30T08:50:49.000Z | 2017-06-30T08:50:49.000Z | bitl/datasets/lastfm.py | tritas/bitl | f394e633e38f983fed10c3e672af7be2883cbdbb | [
"BSD-3-Clause"
] | null | null | null | bitl/datasets/lastfm.py | tritas/bitl | f394e633e38f983fed10c3e672af7be2883cbdbb | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Author: Aris Tritas
# License: BSD 3-clause
def load_lastfm():
"""Load the LastFM dataset of ratings."""
pass
| 16.222222 | 45 | 0.616438 | 20 | 146 | 4.45 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.219178 | 146 | 8 | 46 | 18.25 | 0.763158 | 0.684932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
c07207b84b9409bb0fbcacbe365351ced00ded9b | 56 | py | Python | src/main/resources/docs/tests/W0311.py | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | src/main/resources/docs/tests/W0311.py | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | src/main/resources/docs/tests/W0311.py | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | ##Patterns: W0311
if 1 == 2:
##Info: W0311
pass
| 11.2 | 18 | 0.535714 | 8 | 56 | 3.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25641 | 0.303571 | 56 | 4 | 19 | 14 | 0.512821 | 0.464286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |