hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
654e9640e586b471b5fa96b3f75ca5fd0981375d | 130 | py | Python | src/__init__.py | nulinspiratie/notebook-website-generator | 63b3cea29185c5a59ba6b3487bd75e03f9797399 | [
"MIT"
] | null | null | null | src/__init__.py | nulinspiratie/notebook-website-generator | 63b3cea29185c5a59ba6b3487bd75e03f9797399 | [
"MIT"
] | null | null | null | src/__init__.py | nulinspiratie/notebook-website-generator | 63b3cea29185c5a59ba6b3487bd75e03f9797399 | [
"MIT"
] | null | null | null | from .tools import *
from .converter_preprocessors import *
from .compilers import *
from .notebooks import *
from .tools import * | 26 | 38 | 0.776923 | 16 | 130 | 6.25 | 0.4375 | 0.4 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146154 | 130 | 5 | 39 | 26 | 0.900901 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
65a69067b9cef38622fa22a0316813deb2729745 | 156 | py | Python | src/gptsum/__main__.py | NicolasT/gptsum | 5a316e9d0261bf4f6309d0571a51ba2971a7e1cf | [
"Apache-2.0"
] | null | null | null | src/gptsum/__main__.py | NicolasT/gptsum | 5a316e9d0261bf4f6309d0571a51ba2971a7e1cf | [
"Apache-2.0"
] | 37 | 2021-04-19T10:05:56.000Z | 2022-02-18T08:12:33.000Z | src/gptsum/__main__.py | NicolasT/gptsum | 5a316e9d0261bf4f6309d0571a51ba2971a7e1cf | [
"Apache-2.0"
] | null | null | null | """Main entrypoint for the gptsum package.
Simply invokes :func:`gptsum.cli.main`.
"""
import gptsum.cli
if __name__ == "__main__":
gptsum.cli.main()
| 17.333333 | 42 | 0.698718 | 21 | 156 | 4.809524 | 0.619048 | 0.267327 | 0.257426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147436 | 156 | 8 | 43 | 19.5 | 0.759399 | 0.512821 | 0 | 0 | 0 | 0 | 0.115942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
65ae3b7e07865a1706047d4686d443e82243b7b4 | 45 | py | Python | octadocs/octiron/__init__.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | 1 | 2021-11-19T22:48:27.000Z | 2021-11-19T22:48:27.000Z | octadocs/octiron/__init__.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | 34 | 2020-12-27T11:49:08.000Z | 2021-10-05T04:58:54.000Z | octadocs/octiron/__init__.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | null | null | null | from octadocs.octiron.octiron import Octiron
| 22.5 | 44 | 0.866667 | 6 | 45 | 6.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
65d52302c3ef3631b10b70a2a39485f6cad70208 | 2,455 | py | Python | polymath/srdfg/templates/data_analytics.py | lite-david/polymath | cf1addc75e203fa606ebc6d32bc552fb3975ea99 | [
"Apache-2.0"
] | 15 | 2021-05-09T05:46:04.000Z | 2022-03-06T20:46:32.000Z | polymath/srdfg/templates/data_analytics.py | lite-david/polymath | cf1addc75e203fa606ebc6d32bc552fb3975ea99 | [
"Apache-2.0"
] | null | null | null | polymath/srdfg/templates/data_analytics.py | lite-david/polymath | cf1addc75e203fa606ebc6d32bc552fb3975ea99 | [
"Apache-2.0"
] | 4 | 2021-08-24T07:46:29.000Z | 2022-03-05T18:23:07.000Z | import polymath as pm
from polymath.srdfg.util import squeeze_shape
from numbers import Integral
import numpy as np
import functools
class svm_classifier_train(pm.Template):
def define_graph(self, x, w, y, mu, m):
i = pm.index(0, (m - 1).set_name("m-1"), name="i")
h = pm.sum([i], (x[i] * w[i]), name="h")
c = (y*h).set_name("c")
ny = (0 - y).set_name("ny")
p = ((c > 1)*ny).set_name("p")
g = (p * x[i]).set_name("g")
w[i] = w[i] - mu * g[i]
class logistic_regressor_train(pm.Template):
def define_graph(self, x, w, y, mu, m):
i = pm.index(0, (m - 1).set_name("m-1"), name="i")
h = pm.sigmoid(pm.sum([i], (x[i] * w[i]), name="h"))
d = (h - y).set_name("h-y")
g = (d * x[i]).set_name("d*x")
w[i] = w[i] - mu * g[i]
class linear_regressor(pm.Template):
def define_graph(self, x, w, y_pred, mu, m):
i = pm.index(0, (m - 1).set_name("m-1"), name="i")
y_pred.write(pm.sum([i], (x[i] * w[i]), name="h"))
class logistic_regressor(pm.Template):
def define_graph(self, x, w, y_pred, mu, m):
i = pm.index(0, (m - 1).set_name("m-1"), name="i")
y_pred.write(pm.sigmoid(pm.sum([i], (x[i] * w[i]), name="h")))
class mc_logistic_regressor_train(pm.Template):
def define_graph(self, x, w, y, y_pred, mu, m):
i = pm.index(0, (m - 1).set_name("m-1"), name="i")
h = pm.temp(name="h", shape=(m))
h = pm.sigmoid(pm.sum([i], (x[i] * w[i]), name="h"))
d = (h - y).set_name("h-y")
g = (d * x[i]).set_name("d*x")
w[i] = w[i] - mu * g[i]
class mc_logistic_regressor(pm.Template):
def define_graph(self, x, w, y_pred, mu, m):
i = pm.index(0, (m - 1).set_name("m-1"), name="i")
h = pm.sigmoid(pm.sum([i], (x[i] * w[i]), name="h"))
class linear_regressor_train(pm.Template):
def define_graph(self, x, w, y, mu, m):
i = pm.index(0, (m - 1).set_name("m-1"), name="i")
h = pm.sum([i], (x[i] * w[i]), name="h")
d = (h - y).set_name("h-y")
g = (d * x[i]).set_name("d*x")
w[i] = w[i] - mu * g[i]
class ppo(pm.Template):
def define_graph(self, obs, action, states,
gamma=0.99,
clip=0.2,
ent_coeff=0.01,
lam=0.95,
adam_eps=1e-5):
pass
# TODO: Add reshape operator, constant operator, gemm
| 29.22619 | 70 | 0.50387 | 438 | 2,455 | 2.716895 | 0.166667 | 0.1 | 0.027731 | 0.127731 | 0.735294 | 0.735294 | 0.711765 | 0.711765 | 0.70084 | 0.70084 | 0 | 0.020282 | 0.276986 | 2,455 | 83 | 71 | 29.578313 | 0.650141 | 0.020774 | 0 | 0.5 | 0 | 0 | 0.024573 | 0 | 0 | 0 | 0 | 0.012048 | 0 | 1 | 0.142857 | false | 0.017857 | 0.089286 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
028c7bc865a6c95bb8ce5ed7c6da64d2c8d23077 | 19,238 | py | Python | reddash/app/home/routes.py | PredaaA/Red-Dashboard | 5fcc2ba24a75b22c06f09208095c3a4e5ed27b88 | [
"MIT"
] | null | null | null | reddash/app/home/routes.py | PredaaA/Red-Dashboard | 5fcc2ba24a75b22c06f09208095c3a4e5ed27b88 | [
"MIT"
] | null | null | null | reddash/app/home/routes.py | PredaaA/Red-Dashboard | 5fcc2ba24a75b22c06f09208095c3a4e5ed27b88 | [
"MIT"
] | null | null | null | # -*- encoding: utf-8 -*-
"""
License: MIT
Copyright (c) 2019 - present AppSeed.us
"""
from reddash.app import app
from reddash.app.home import blueprint
from flask import render_template, redirect, url_for, session, request, jsonify, Response, g
from flask_babel import _, refresh
from jinja2 import TemplateNotFound
import websocket
import json
import time
import random
import logging
import datetime
dashlog = logging.getLogger("reddash")
def get_result(app, requeststr):
app.ws.send(json.dumps(requeststr))
result = json.loads(app.ws.recv())
if "error" in result:
if result["error"]["message"] == "Method not found":
return jsonify({"status": 0, "message": _("Not connected to bot")})
dashlog.error(result["error"])
return jsonify({"status": 0, "message": _("Something went wrong")})
if isinstance(result["result"], dict) and result["result"].get("disconnected", False):
return jsonify({"status": 0, "message": _("Not connected to bot")})
return jsonify({"status": 1, "data": result["result"]})
# --------------------------------------- API ---------------------------------------
@blueprint.route("/api/getservers")
def getservers():
if not session.get("id"):
return jsonify({"status": 0, "message": _("Not logged in")})
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC__GET_USERS_SERVERS",
"params": [str(g.id)],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": _("Not connected to bot")})
@blueprint.route("/api/<guild>/serverprefix", methods=["POST"])
def serverprefix(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": _("Not logged in")})
if (
end := app.cooldowns["serverprefix"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["serverprefix"][
session.get("id")
] = datetime.datetime.now() + datetime.timedelta(seconds=5)
data = request.json
userid = g.id
method = "set"
prefixes = data.get("prefixes")
if not prefixes:
return jsonify({"status": 0, "message": _("Prefixes must be specified")})
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_BOTSETTINGS__SERVERPREFIX",
"params": [str(guild), str(userid), method, prefixes],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/adminroles", methods=["POST"])
def adminroles(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["adminroles"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["adminroles"][session.get("id")] = datetime.datetime.now() + datetime.timedelta(
seconds=5
)
data = request.json
userid = g.id
method = "set"
roles = data.get("roles")
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_BOTSETTINGS__ADMINROLES",
"params": [str(guild), str(userid), method, roles],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/modroles", methods=["POST"])
def modroles(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["modroles"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["modroles"][session.get("id")] = datetime.datetime.now() + datetime.timedelta(
seconds=5
)
data = request.json
userid = g.id
method = "set"
roles = data.get("roles")
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_BOTSETTINGS__MODROLES",
"params": [str(guild), str(userid), method, roles],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/fetchrules", methods=["GET"])
def fetchrules(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["fetchrules"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["fetchrules"][session.get("id")] = datetime.datetime.now() + datetime.timedelta(
seconds=5
)
userid = g.id
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_PERMISSIONS__FETCH_GUILD_RULES",
"params": [str(guild), str(userid)],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/fetchtargets", methods=["GET"])
def fetchtargets(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["fetchtargets"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=10)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["fetchtargets"][
session.get("id")
] = datetime.datetime.now() + datetime.timedelta(seconds=10)
userid = g.id
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_PERMISSIONS__FETCH_GUILD_TARGETS",
"params": [str(guild), str(userid)],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/fetchcogcommands", methods=["GET"])
def fetchcogcommands(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["fetchcogcommands"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["fetchcogcommands"][
session.get("id")
] = datetime.datetime.now() + datetime.timedelta(seconds=5)
userid = g.id
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_PERMISSIONS__FETCH_COG_COMMANDS",
"params": [str(guild), str(userid)],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/addrule", methods=["POST"])
def addrule(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["addrule"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["addrule"][session.get("id")] = datetime.datetime.now() + datetime.timedelta(
seconds=5
)
data = request.json
userid = g.id
allow_or_deny = data.get("ad")
who_or_what = data.get("ww")
cog_or_command = data.get("cc")
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_PERMISSIONS__ADD_RULE",
"params": [str(guild), str(userid), allow_or_deny, str(who_or_what), cog_or_command],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/adddefaultrule", methods=["POST"])
def adddefaultrule(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["adddefaultrule"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["adddefaultrule"][
session.get("id")
] = datetime.datetime.now() + datetime.timedelta(seconds=5)
data = request.json
userid = g.id
allow_or_deny = data.get("ad")
cog_or_command = data.get("cc")
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_PERMISSIONS__ADD_DEFAULT_RULE",
"params": [str(guild), str(userid), allow_or_deny, cog_or_command],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/removerule", methods=["POST"])
def removerule(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["removerule"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["removerule"][session.get("id")] = datetime.datetime.now() + datetime.timedelta(
seconds=5
)
data = request.json
userid = g.id
who_or_what = data.get("ww")
cog_or_command = data.get("cc")
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_PERMISSIONS__REMOVE_RULE",
"params": [str(guild), str(userid), str(who_or_what), cog_or_command],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/removedefaultrule", methods=["POST"])
def removedefaultrule(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["removedefaultrule"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["removedefaultrule"][
session.get("id")
] = datetime.datetime.now() + datetime.timedelta(seconds=5)
data = request.json
userid = g.id
cog_or_command = data.get("cc")
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_PERMISSIONS__REMOVE_DEFAULT_RULE",
"params": [str(guild), str(userid), cog_or_command],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
@blueprint.route("/api/<guild>/fetchaliases", methods=["GET"])
def fetchaliases(guild):
if not session.get("id"):
return jsonify({"status": 0, "message": "Not logged in"})
if (
end := app.cooldowns["fetchaliases"].get(
session.get("id"), datetime.datetime.now() - datetime.timedelta(seconds=5)
)
) > datetime.datetime.now():
return jsonify(
{
"status": 0,
"message": _("You are doing that too much. Try again in {wait} seconds").format(
wait=(end - datetime.datetime.now()).seconds
),
}
)
app.cooldowns["fetchaliases"][
session.get("id")
] = datetime.datetime.now() + datetime.timedelta(seconds=5)
userid = g.id
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
try:
requeststr = {
"jsonrpc": "2.0",
"id": 0,
"method": "DASHBOARDRPC_ALIASCC__FETCH_ALIASES",
"params": [str(guild), str(userid)],
}
with app.lock:
return get_result(app, requeststr)
except:
return jsonify({"status": 0, "message": "Not connected to bot"})
# --------------------------------------- API ---------------------------------------
# -------------------------------------- Routes -------------------------------------
@blueprint.route("/index")
@blueprint.route("/")
def index():
return render_template("index.html")
@blueprint.route("/commands")
def commands():
data = app.commanddata
prefix = app.variables.get("prefix", ["[p]"])
return render_template(
"commands.html", cogs=[k["name"] for k in data], data=data, prefixes=prefix
)
@blueprint.route("/credits")
def credits():
return render_template("credits.html")
@blueprint.route("/dashboard")
def dashboard():
if not session.get("id"):
return redirect(url_for("base_blueprint.login"))
return render_template("dashboard.html")
@blueprint.route("/guild/<guild>")
def guild(guild):
if not session.get("id"):
return redirect(url_for("base_blueprint.login"))
try:
int(guild)
except ValueError:
raise ValueError("Guild ID must be integer")
# We won't disconnect the websocket here, even if it fails, so that the main updating thread doesnt run into issues
try:
request = {
"jsonrpc": "2.0",
"id": random.randint(1, 1000),
"method": "DASHBOARDRPC__GET_SERVER",
"params": [int(g.id), int(guild)],
}
with app.lock:
app.ws.send(json.dumps(request))
result = json.loads(app.ws.recv())
data = {}
if "error" in result:
if result["error"]["message"] == "Method not found":
data = {"status": 0, "message": "Not connected to bot"}
else:
dashlog.error(result["error"])
data = {"status": 0, "message": "Something went wrong"}
if isinstance(result["result"], dict) and result["result"].get("disconnected", False):
data = {"status": 0, "message": "Not connected to bot"}
if not data:
data = {"status": 1, "data": result["result"]}
except:
data = {"status": 0, "message": "Not connected to bot"}
return render_template("guild.html", data=data)
| 31.434641 | 120 | 0.526146 | 1,970 | 19,238 | 5.072589 | 0.094416 | 0.070449 | 0.083659 | 0.078055 | 0.804563 | 0.792355 | 0.781947 | 0.777344 | 0.765636 | 0.744921 | 0 | 0.009019 | 0.31994 | 19,238 | 611 | 121 | 31.486088 | 0.754796 | 0.023027 | 0 | 0.651575 | 0 | 0 | 0.212195 | 0.040667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035433 | false | 0 | 0.021654 | 0.003937 | 0.173228 | 0.041339 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
02c54a0e5617abc0a344f5b93456ed8dc3e400e8 | 134 | py | Python | mord/__init__.py | david-macleod/mord | 8236d546d92f807cc5c5ec46ed7355311914060f | [
"BSD-3-Clause"
] | 237 | 2015-06-19T04:30:13.000Z | 2022-03-10T15:37:47.000Z | mord/__init__.py | Moyookc/mord | ef578a79bf8374d84b77f246454b06d81a620630 | [
"BSD-3-Clause"
] | 24 | 2015-12-09T17:22:22.000Z | 2021-04-02T16:45:52.000Z | mord/__init__.py | Moyookc/mord | ef578a79bf8374d84b77f246454b06d81a620630 | [
"BSD-3-Clause"
] | 67 | 2015-06-26T05:15:59.000Z | 2022-03-09T09:32:03.000Z | from .threshold_based import *
from .regression_based import *
from . import utils
from . import threshold_based
__version__ = '0.6'
| 19.142857 | 31 | 0.776119 | 18 | 134 | 5.388889 | 0.5 | 0.28866 | 0.309278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.149254 | 134 | 6 | 32 | 22.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.022388 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b83eb35196a56d2a2259982d1e9f461da617b0cd | 3,812 | py | Python | python/example_code/s3/s3_versioning/test/test_remove_delete_marker.py | gabehollombe-aws/aws-doc-sdk-examples | dfc0e06ebe1762ab127f3ef5f425507644c6a99c | [
"Apache-2.0"
] | 5,166 | 2016-09-02T08:48:38.000Z | 2022-03-31T19:12:43.000Z | python/example_code/s3/s3_versioning/test/test_remove_delete_marker.py | gabehollombe-aws/aws-doc-sdk-examples | dfc0e06ebe1762ab127f3ef5f425507644c6a99c | [
"Apache-2.0"
] | 1,186 | 2016-09-28T23:05:19.000Z | 2022-03-31T18:07:47.000Z | python/example_code/s3/s3_versioning/test/test_remove_delete_marker.py | gabehollombe-aws/aws-doc-sdk-examples | dfc0e06ebe1762ab127f3ef5f425507644c6a99c | [
"Apache-2.0"
] | 4,003 | 2016-08-29T19:51:40.000Z | 2022-03-31T16:40:02.000Z | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Unit tests for remove_delete_marker.py functions.
"""
import json
from urllib import parse
import pytest
import remove_delete_marker
def test_remove_delete_marker(make_stubber, make_unique_name, make_event, make_result):
s3_stubber = make_stubber(remove_delete_marker.s3)
bucket_name = make_unique_name('bucket')
# include a space in the object key to verify url-encoding/decoding
obj_key = make_unique_name('prefix object')
version_id = 'test-version-id'
event = make_event(bucket_name, parse.quote(obj_key), version_id=version_id)
s3_stubber.stub_head_object(
bucket_name, obj_key, obj_version_id=version_id, error_code='405',
response_meta={'HTTPHeaders': {'x-amz-delete-marker': 'true'}}
)
s3_stubber.stub_delete_object(bucket_name, obj_key, obj_version_id=version_id)
result = remove_delete_marker.lambda_handler(event, None)
assert result == make_result('Succeeded')
for res in result['results']:
# S3 processor serializes to JSON. Make sure it works.
json.dumps(res['resultString'])
def test_remove_delete_marker_not_deleted(
make_stubber, make_unique_name, make_event, make_result):
s3_stubber = make_stubber(remove_delete_marker.s3)
bucket_name = make_unique_name('bucket')
obj_key = make_unique_name('object')
version_id = 'test-version-id'
event = make_event(bucket_name, obj_key, version_id=version_id)
s3_stubber.stub_head_object(bucket_name, obj_key, obj_version_id=version_id)
result = remove_delete_marker.lambda_handler(event, None)
assert result == make_result('PermanentFailure')
for res in result['results']:
# S3 processor serializes to JSON. Make sure it works.
json.dumps(res['resultString'])
def test_remove_delete_marker_no_delete_marker(
make_stubber, make_unique_name, make_event, make_result):
s3_stubber = make_stubber(remove_delete_marker.s3)
bucket_name = make_unique_name('bucket')
obj_key = make_unique_name('object')
version_id = 'test-version-id'
event = make_event(bucket_name, obj_key, version_id=version_id)
s3_stubber.stub_head_object(
bucket_name, obj_key, obj_version_id=version_id, error_code='405',
response_meta={'HTTPHeaders': {'some-other-header': 'nonsense'}}
)
result = remove_delete_marker.lambda_handler(event, None)
assert result == make_result('PermanentFailure')
for res in result['results']:
# S3 processor serializes to JSON. Make sure it works.
json.dumps(res['resultString'])
@pytest.mark.parametrize('error_code', ['TestException', 'RequestTimeout'])
def test_remove_delete_marker_delete_fails(
make_stubber, make_unique_name, make_event, make_result, error_code):
s3_stubber = make_stubber(remove_delete_marker.s3)
bucket_name = make_unique_name('bucket')
obj_key = make_unique_name('object')
version_id = 'test-version-id'
event = make_event(bucket_name, obj_key, version_id=version_id)
s3_stubber.stub_head_object(
bucket_name, obj_key, obj_version_id=version_id, error_code='405',
response_meta={'HTTPHeaders': {'x-amz-delete-marker': 'true'}}
)
s3_stubber.stub_delete_object(bucket_name, obj_key, obj_version_id=version_id,
error_code=error_code)
result = remove_delete_marker.lambda_handler(event, None)
if error_code == 'RequestTimeout':
assert result == make_result('TemporaryFailure')
else:
assert result == make_result('PermanentFailure')
for res in result['results']:
# S3 processor serializes to JSON. Make sure it works.
json.dumps(res['resultString'])
| 37.742574 | 87 | 0.728751 | 521 | 3,812 | 4.980806 | 0.1881 | 0.09711 | 0.09711 | 0.069364 | 0.823892 | 0.801541 | 0.801541 | 0.801541 | 0.783815 | 0.766859 | 0 | 0.009177 | 0.171039 | 3,812 | 100 | 88 | 38.12 | 0.812025 | 0.113064 | 0 | 0.641791 | 0 | 0 | 0.127116 | 0 | 0 | 0 | 0 | 0 | 0.074627 | 1 | 0.059701 | false | 0 | 0.059701 | 0 | 0.119403 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b88a55fbbf69b69d325fa033465f90738c224424 | 4,421 | py | Python | main.py | kyomukyomupurin/AtCoder_base64 | 1c34649d060b2e58f270095ef178d3365b2b8f1d | [
"CC0-1.0"
] | 2 | 2020-10-03T14:50:53.000Z | 2021-01-16T11:11:13.000Z | main.py | kyomukyomupurin/AtCoder_base64 | 1c34649d060b2e58f270095ef178d3365b2b8f1d | [
"CC0-1.0"
] | null | null | null | main.py | kyomukyomupurin/AtCoder_base64 | 1c34649d060b2e58f270095ef178d3365b2b8f1d | [
"CC0-1.0"
] | null | null | null | # This code is generated by [Atcoder_base64](https://github.com/kyomukyomupurin/AtCoder_base64)
from base64 import b85decode
import subprocess
from pathlib import Path
from zlib import decompress
binary = "c%0=~Z){uD6~DIg$7xf?EiD9ERs&&bS7PF}DJfls?ZoXfIB96o(v?4#7u$)w`A=p)mxLi^skEwB(nV7@`hiJB>;pr5fN4VefV!Xz#-wcuQ>P)pq>fQ7>Bh)3sImd^&VBEm=l8DPqZ`_{-N-rb{N6d|-h1xfbKbWivAs@*gJ5xyuMoyP+@kS8j-T78ivZR`THybUWD{uuzQG_T(q)I4y7hWAy&k8=i*yq)Q-^^z68lnaHB&--x*lt5YK?}b*7!{HIbQFQ%dF?*dS$Lx=JH4h*F%c^Yv8{Tj*GOPm}vv2Lzk2Ud;V~h#)~vY%oO$R1-&NnpBFEu2Y7vUemTqAixkJ<_Dp(o*UsBBiOx(qSDNgc?Ag`1Yp1`M_itC*<mIya1`n$8%&q7u_I^Fb*!!i(*iFwXr(dqV``Wh19Vh2MKKc9!ChH-!aU-M)o*Pt*mBR7Z#tmm%`|SF64u0z_@E91p*#iHyjr?5>Yy52+`4?>PlQ#G@Hug6YoPp+Yzm2@tMt+YCK4H@yHd;4>mxuU#!{x9I>Rn6PNzkiV^WsE+c~Ddm@on4K*zY2gj%V{ZT2$nMLMfrqkpY@W7Lw!XqLM6(4D@C4x#WmEnn`N!tlu0Nyk}VHjw@3W$!I*%lgQIY(urg)5@)Z5m7a9INJr&ja%VJ`R{8^xqks=9fnAg;sY3n{c$c7Yxu_gf<?}lxN{XgyG#l7K2Q<4h4Q3)-xO*&9DyF!}Ofd>}1M!qxph`hbD@9O@8~|(w)Sr&id{HSR<!m%QLL<Z7bm$(-CiY<ycgFK2h50Ow?ut*!R92FcX@xUm>71NNKMG!;Opd!UE2nd0Uo0B#quc%4{oUI0;lp%?f2ThXFtUWLG#CFhz|X0VqZ8JUwhA5a*X^jyo6-&G7ABSD<LXdlYx{(}{FIZe+$pZ3Z2hjA**nSQalgR*FUaT5gn)PPb7fM%372<!%{cP&Z5nMNwS9z@w+wiT0l#R#R~hgH1HRgTFB$Mw1HNp)*BEeI2kb0rGvH^rd^4%-m8_gI;4TCHoB?kz;4d0*<9YW}18&^!XAO9>0e{1QyA8N>?5}R=RO6*)ACZpFD$eS>bnNHuIa2K|093tIf&XorgYd-o6jNEeScP)y*BH;bwm7fyql{-mvN)^qM;On#xcH*VA7DJ|+TuBtznk%_ON%oqA7(u3%3@jN?*x8q2HP1Lk*2%%fO%=UcLv@ZWDQ^VJ_t&c*Q8TKIu%@qo%&(p637l5e@{6aDnHiILlj@A?Yz>_CsjUJ*sTixVj;vRbTTmeVCaFlSmn<P6p-kthK}=WhDg&J*k%s~yQJOo%4;eJQq}pk^Gknuc_9qKV>7DXrhBh-`^Z9&89dedQWN+P?1F)p+2eHgGBkAY-4!Muct2Ws^C7A7u5|3ZrJ<47bmI|L$@H2(sa5uFVtw9l2=qQ|+uW~0Q>u~10wcSZl?^aV{w8ghTB;Y@HkX-S<~RlUs8(s1k*zCIWl4JVZ+A(rF1sYh>(ZMm%6c%+$PKuw7su3kP`}(8g~pQ7?cmtn^NsK*RW2#5(5=hBEM%b*3n^&g>y0;ooa4bcq5j2hGPn5KHb0>b*Gy$;uEw=(CZYR6hoY6=hYp6I@3&rYvJsr_Tmd>9s(c!)T#0SHr26#LPhAV+pMpb=zpwZLZ=;D=<zKPNmHx^Hq3Zf~q+@dqY4;yWmzlfohfxeY7<wo~=VoeE{$q~MZFM$l`<Rr>Wb(K9?#mZ4iJKU<(X~h7>tPdAc?RgyKwkm+0?-AZUxtpahmpzvO#uG@bn^nx4A9#Ee+Q`I(C#s;I37Jf9Ftzh#@1%{jHB7haP{sA`r8ca&RU+34lZB*q*_%S>+$aOwBOyf`Vn`T+;zjAFYV~K8THwn@hy-uu4lF<Kfv^%KHOin5bd0Q1!PY%Anfrz>D=#WKjG^4_#R*7Y47uR!yb2NwPyEe`1zp!ekR-R@!scgt9AVZem!6}$!&bc8TPcFc11kC>4vap%ae_gr|U$M<moxqJm3iyJUt;#SIDy^?D2tISgk^B-zV^U80;AL@Bf#A=OLbdcqkd;8YAuI4|o<W^GzMkpbK38dd}ndi(wj`+c$8E=PkxlICnl;spc8Q`#qjDL4FwF88hoNHJhB_?ZY#)ozo4(TyQRy`8kAR{~V{d9`MY%j?<xLGcGlm+wJD{qdT$wFA%fb$5pL=4i2qwewlZ)%6V+p66Y`Q_y6R)@ofHI7Cnv>$`^SDc5%9w(?gt&ae9=~6UNQpa}>O1?d$8i)3@c|Xep<Ze1R_i4u4l?pro>a#{)h7uAPquw(=KOv*`k__gE};b}t>V#9c(O#2fVaX~rA%d1=O*^!aPXoAq%u<8FOEnDG{U-kI@L`usHGtMzqi##@QkVtib%nrAF=4>@Ovx9RKNjC;vNOMI=KH!$PtNN}|kevQ75nDO=c{$a*9Sk7M;xmI8AW*k<x*9x!2m1g|<I*cv*2J-i+h_ijb34X)<)z9%8Rh@Qn?^>U(&&q8!cJAPIW?IbnZjR3ic#Pvj@bf6IcSxu=r#{#F^LyM*P_XliDqlY?cI}^4<=2tNTE#ck^1o64*Uytbs&?wf_p<u`n!5IX%yF@u9tX2iKOZ`5@)*5pe0d%53iIJfCyP_-=TE<ym#EEuT=v5@c^-C7tTU{y(;Uw;JI?F$eGtoIHtk$-GQ560KIO2^pZwUy&WuxQ59V9(uJ|h(`QNJY_4~|Pr`CSaZ#O!v?F`!BMW=P1=i5$gTtxq0VKN|pc&!<K%BkUEd!Dta_YL#|_gj+VZ#k{=S08fuc`mQ#7G2i%Z*f`M*=~c!Z0r!0i1Ekcgzk&&4~Jru7E7Zvs^w=`h9}lTX*$mmN+e!Tib`p0%pWJU6c|;qG|p08MM7yJPscO)Q8_~sO1@B}a%qyp^Vx|^Qb{J@fVGrh8KN|m3k7+KCUZ(*ii{QHY?3BQ+3Xak7&tYtWfG-y?}5-jghmGYS$dA{8$3uO5(lOJ1BCW}bucs#?SpqF<%Ahfvr@Vf_V3+092ub_p>QlhjXAeq(3qH_$%L%Pghuy+ej=TtrD8H6q`kNX&vJnSig}una|taiSWC|dmOmq7Xh^XM#y?+zPpFxW4D_KpuaaOiX|k-HP1aKE8_yex6*FhzK+w@*5udH|kD|7kT#Oe>YTG_LQCpK`Q?EWTDUMlv3fL~qYn972H<b8`Q&~kG1*#M@n!=}a4oprEe=e^i{h@HQQ<29xH=Zl`M@#8UqBET!DwmRrDdJB|<-ndsm4f!>XtGdD=W`~8!n;B;BQrt%G?7tQmEbJ)=pWC+1I#M;uX@E_$g8v4pG@&3m`W6CoTjR+9!&$Ep$0jdj>Chh4g*R2unA;g!CGxV|6O76Alt%mf3*9~hQ%T9w-7A`Lw)f(mQRouuiXJbAMf`_JFcS7@())a219-Ed(;+=<Mmy<HV+alMn!#$7m${DhuJmXVbI5T@(?k9yTZ5v=^4RcqW>NuT8xSM7=Ix33GEj?Cz7udEoMa?;}oPrq}F~Xsqg=zMDr=?V?3kB(++d}V*j7u`eDvv+=CS3c&2}?g+9Aq)xP1?@)$@l-p3-2e?`#8I0|W5koOAojG&M47E;k)Zh`(l(8qft((OY0={2+DSwSD~i%5lVqOSixBIfmr_gtjn9YxfCNzk|Zo@}T8GeIBYMx6>1Ui9ZLiS`{A@4>b2%Nj5GOZ5Nmi1vL2^)c?4Tw`6IsL$e<CWC(M`#lGE$C?B^Y}Z+$^<T_mmH9`X@>S};Ea>a^XRo;)Q6I|^$53CtmvhB-!T&))|07<%n5;qlP64;dM_T1;gIs@!J8mPlC9eM#L)(p|i!c0d2rtF`bFGd3Gd!`mBH*I_zX5t8jN<"
Path("077b5b43ed6ae0f2ad2b26d4f6fb1be45713b723ae814448a294f8d77118b1e9.bin").write_bytes(decompress(b85decode(binary)))
Path("077b5b43ed6ae0f2ad2b26d4f6fb1be45713b723ae814448a294f8d77118b1e9.bin").chmod(0o755)
subprocess.run("./077b5b43ed6ae0f2ad2b26d4f6fb1be45713b723ae814448a294f8d77118b1e9.bin")
# Original source code:
"""
#include <iostream>
using namespace std;
int main() {
cout << "Hello, World!" << endl;
return 0;
}
""" | 176.84 | 3,784 | 0.748473 | 825 | 4,421 | 3.957576 | 0.752727 | 0.061562 | 0.043492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13529 | 0.013572 | 4,421 | 25 | 3,785 | 176.84 | 0.613391 | 0.026012 | 0 | 0 | 1 | 0.125 | 0.949415 | 0.949415 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b89c9339ea7447031b3f1c213a1fae7d10ebdba7 | 721 | py | Python | mihaelanistor/kernels/get_kernels.py | s-shailja/challenge-iclr-2021 | 28ad9d126597166bc41715f77c8cf366b8fba975 | [
"MIT"
] | 39 | 2021-03-12T07:30:14.000Z | 2022-03-24T06:37:02.000Z | mihaelanistor/kernels/get_kernels.py | s-shailja/challenge-iclr-2021 | 28ad9d126597166bc41715f77c8cf366b8fba975 | [
"MIT"
] | 34 | 2021-03-09T03:19:55.000Z | 2021-09-07T18:30:59.000Z | mihaelanistor/kernels/get_kernels.py | s-shailja/challenge-iclr-2021 | 28ad9d126597166bc41715f77c8cf366b8fba975 | [
"MIT"
] | 29 | 2021-03-13T21:21:14.000Z | 2022-02-02T05:52:44.000Z | from kernels.extended_projection_kernels import GrassmannianRBFKernel, GrassmannianProjectionKernel
from kernels.extended_projection_kernels import LinearProjectionKernel, AffineProjectionKernel
from kernels.extended_projection_kernels import LinearScaledProjectionKernel, AffineScaledProjectionKernel
from kernels.extended_projection_kernels import LinearSpherizedProjectionKernel, AffineSpherisedProjectionKernel
def get_callable_kernels():
return [GrassmannianRBFKernel, GrassmannianProjectionKernel,
LinearProjectionKernel, AffineProjectionKernel,
LinearScaledProjectionKernel, AffineScaledProjectionKernel,
LinearSpherizedProjectionKernel, AffineSpherisedProjectionKernel] | 72.1 | 112 | 0.869626 | 45 | 721 | 13.711111 | 0.377778 | 0.071313 | 0.123177 | 0.188006 | 0.272285 | 0.272285 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102635 | 721 | 10 | 113 | 72.1 | 0.953632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | true | 0 | 0.444444 | 0.111111 | 0.666667 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b2123ab992fa0e69c2bb3cf5943ef0d0f0cd03ec | 164 | py | Python | channels_presence/channels.py | Uncensored-Developer/django-channels-presence | 92dfa4b308d5525a5a3f18c835fabd3cf1a288d1 | [
"MIT"
] | 1 | 2019-10-30T08:41:19.000Z | 2019-10-30T08:41:19.000Z | channels_presence/channels.py | Uncensored-Developer/django-channels-presence | 92dfa4b308d5525a5a3f18c835fabd3cf1a288d1 | [
"MIT"
] | null | null | null | channels_presence/channels.py | Uncensored-Developer/django-channels-presence | 92dfa4b308d5525a5a3f18c835fabd3cf1a288d1 | [
"MIT"
] | 2 | 2019-04-23T05:00:47.000Z | 2020-03-08T17:14:08.000Z | from __future__ import unicode_literals
from channels_presence.decorators import remove_presence
@remove_presence
def ws_disconnect(message, **kwargs):
pass
| 18.222222 | 56 | 0.829268 | 20 | 164 | 6.35 | 0.75 | 0.220472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 164 | 8 | 57 | 20.5 | 0.881944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b2450c5ce83712400647d8e058380736638b1c1c | 14,890 | py | Python | tests/blueprints/test_cherrypicked_plates.py | BenTopping/lighthouse | 649b442ca89f7deb7c01411faa883c1894b72986 | [
"MIT"
] | null | null | null | tests/blueprints/test_cherrypicked_plates.py | BenTopping/lighthouse | 649b442ca89f7deb7c01411faa883c1894b72986 | [
"MIT"
] | null | null | null | tests/blueprints/test_cherrypicked_plates.py | BenTopping/lighthouse | 649b442ca89f7deb7c01411faa883c1894b72986 | [
"MIT"
] | null | null | null | from http import HTTPStatus
from unittest.mock import patch
import responses
from lighthouse.constants.error_messages import (
ERROR_SAMPLE_DATA_MISMATCH,
ERROR_SAMPLE_DATA_MISSING,
ERROR_SAMPLES_MISSING_UUIDS,
ERROR_UNEXPECTED_CHERRYPICKING_FAILURE,
ERROR_UPDATE_MLWH_WITH_COG_UK_IDS,
)
from lighthouse.messages.message import Message
# ---------- cherrypicked-plates/create tests ----------
def test_get_cherrypicked_plates_endpoint_successful(
app,
client,
dart_samples,
samples,
mocked_responses,
mlwh_lh_samples,
source_plates,
):
with patch(
"lighthouse.blueprints.cherrypicked_plates.add_cog_barcodes_from_different_centres",
return_value="TC1",
):
ss_url = f"http://{app.config['SS_HOST']}/api/v2/heron/plates"
mocked_responses.add(
responses.POST,
ss_url,
json={"barcode": "des_plate_1"},
status=HTTPStatus.OK,
)
response = client.get(
"/cherrypicked-plates/create?barcode=des_plate_1&robot=BKRB0001&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.OK
assert response.json == {"data": {"plate_barcode": "des_plate_1", "centre": "TC1", "number_of_fit_to_pick": 5}}
def test_get_cherrypicked_plates_endpoint_no_barcode_in_request(app, client, samples):
response = client.get("/cherrypicked-plates/create?user_id=test&robot=BKRB0001", content_type="application/json")
assert response.status_code == HTTPStatus.BAD_REQUEST
assert response.json == {"errors": ["GET request needs 'barcode' in URL"]}
def test_get_cherrypicked_plates_endpoint_no_robot_number_in_request(app, client, samples):
response = client.get(
"/cherrypicked-plates/create?barcode=plate_1&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert response.json == {"errors": ["GET request needs 'robot' in URL"]}
def test_get_cherrypicked_plates_endpoint_no_user_id_in_request(app, client, samples):
response = client.get(
"/cherrypicked-plates/create?barcode=plate_1&robot=1234",
content_type="application/json",
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert response.json == {"errors": ["GET request needs 'user_id' in URL"]}
def test_get_cherrypicked_plates_endpoint_add_cog_barcodes_failed(
app, client, dart_samples, samples, centres, mocked_responses
):
baracoda_url = f"http://{app.config['BARACODA_URL']}/barcodes_group/TC1/new?count=5"
mocked_responses.add(
responses.POST,
baracoda_url,
status=HTTPStatus.BAD_REQUEST,
)
barcode = "des_plate_1"
response = client.get(
f"/cherrypicked-plates/create?barcode={barcode}&robot=BKRB0001&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert response.json == {"errors": [f"Failed to add COG barcodes to plate: {barcode}"]}
def test_get_cherrypicked_plates_endpoint_ss_failure(
app, client, dart_samples, samples, mocked_responses, source_plates
):
with patch(
"lighthouse.blueprints.cherrypicked_plates.add_cog_barcodes_from_different_centres",
return_value="TC1",
):
ss_url = f"http://{app.config['SS_HOST']}/api/v2/heron/plates"
barcode = "des_plate_1"
body = {"errors": [f"The barcode '{barcode}' is not a recognised format."]}
mocked_responses.add(
responses.POST,
ss_url,
json=body,
status=HTTPStatus.UNPROCESSABLE_ENTITY,
)
response = client.get(
f"/cherrypicked-plates/create?barcode={barcode}&robot=BKRB0001&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.UNPROCESSABLE_ENTITY
assert response.json == {"errors": [f"The barcode '{barcode}' is not a recognised format."]}
def test_get_cherrypicked_plates_mlwh_update_failure(
app, client, dart_samples, samples, mocked_responses, source_plates
):
with patch(
"lighthouse.blueprints.cherrypicked_plates.add_cog_barcodes_from_different_centres",
return_value="TC1",
):
with patch(
"lighthouse.blueprints.cherrypicked_plates.update_mlwh_with_cog_uk_ids",
side_effect=Exception(),
):
ss_url = f"http://{app.config['SS_HOST']}/api/v2/heron/plates"
body = {"barcode": "plate_1"}
mocked_responses.add(
responses.POST,
ss_url,
json=body,
status=HTTPStatus.OK,
)
response = client.get(
"/cherrypicked-plates/create?barcode=des_plate_1&robot=BKRB0001&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert response.json == {
"errors": [
(
"Failed to update MLWH with COG UK ids. The samples should have been "
"successfully inserted into Sequencescape."
)
]
}
assert response.json == {"errors": [ERROR_UPDATE_MLWH_WITH_COG_UK_IDS]}
def test_post_plates_endpoint_mismatched_sample_numbers(app, client, dart_samples, samples):
with patch(
"lighthouse.blueprints.cherrypicked_plates.add_cog_barcodes_from_different_centres",
return_value="TC1",
):
with patch(
"lighthouse.blueprints.cherrypicked_plates.check_matching_sample_numbers",
return_value=False,
):
barcode = "des_plate_1"
response = client.get(
f"/cherrypicked-plates/create?barcode={barcode}&robot=BKRB0001&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert response.json == {"errors": [f"{ERROR_SAMPLE_DATA_MISMATCH} {barcode}"]}
def test_post_cherrypicked_plates_endpoint_missing_dart_data(app, client):
with patch("lighthouse.blueprints.cherrypicked_plates.find_dart_source_samples_rows", return_value=[]):
barcode = "des_plate_1"
response = client.get(
f"/cherrypicked-plates/create?barcode={barcode}&robot=BKRB0001&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert response.json == {"errors": [f"{ERROR_SAMPLE_DATA_MISSING} {barcode}"]}
def test_post_cherrypicked_plates_endpoint_missing_source_plate_uuids(app, client, dart_samples, samples):
with patch(
"lighthouse.blueprints.cherrypicked_plates.add_cog_barcodes_from_different_centres",
return_value="TC1",
):
with patch(
"lighthouse.blueprints.cherrypicked_plates.get_source_plates_for_samples",
return_value=[],
):
barcode = "des_plate_1"
response = client.get(
f"/cherrypicked-plates/create?barcode={barcode}&robot=BKRB0001&user_id=test",
content_type="application/json",
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert response.json == {"errors": [f"{ERROR_SAMPLES_MISSING_UUIDS} {barcode}"]}
# ---------- cherrypicked-plates/fail tests ----------
def test_fail_plate_from_barcode_bad_request_no_barcode(client):
response = client.get("/cherrypicked-plates/fail?user_id=test_user&robot=BKRB0001&failure_type=robot_crashed")
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
response = client.get(
"/cherrypicked-plates/fail?user_id=test_user&robot=BKRB0001" "&failure_type=robot_crashed&barcode="
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
def test_fail_plate_from_barcode_bad_request_no_user_id(client):
response = client.get("/cherrypicked-plates/fail?barcode=ABC123&robot=BKRB0001&failure_type=robot_crashed")
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
response = client.get(
"/cherrypicked-plates/fail?barcode=ABC123&robot=BKRB0001" "&failure_type=robot_crashed&user_id="
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
def test_fail_plate_from_barcode_bad_request_no_robot(client):
response = client.get("/cherrypicked-plates/fail?barcode=ABC123&user_id=test_user&failure_type=robot_crashed")
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
response = client.get(
"/cherrypicked-plates/fail?barcode=ABC123&user_id=test_user" "&failure_type=robot_crashed&robot="
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
def test_fail_plate_from_barcode_bad_request_no_failure_type(client):
response = client.get("/cherrypicked-plates/fail?barcode=ABC123&user_id=test_user&robot=BKRB0001")
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
response = client.get("/cherrypicked-plates/fail?barcode=ABC123&user_id=test_user&robot=BKRB0001&failure_type=")
assert response.status_code == HTTPStatus.BAD_REQUEST
assert len(response.json["errors"]) == 1
def test_fail_plate_from_barcode_bad_request_unrecognised_failure_type(app, client):
with app.app_context():
failure_type = "notAFailureType"
response = client.get(
"/cherrypicked-plates/fail?barcode=ABC123&user_id=test_user" f"&robot=BKRB0001&failure_type={failure_type}"
)
assert response.status_code == HTTPStatus.BAD_REQUEST
assert f"'{failure_type}' is not a known cherrypicked plate failure type" in response.json["errors"]
def test_fail_plate_from_barcode_internal_server_error_constructing_message_failure(app, client):
with app.app_context():
with patch(
"lighthouse.blueprints.cherrypicked_plates.construct_cherrypicking_plate_failed_message",
side_effect=Exception(),
):
response = client.get(
"/cherrypicked-plates/fail?barcode=ABC123&user_id=test_user"
"&robot=BKRB0001&failure_type=robot_crashed"
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert ERROR_UNEXPECTED_CHERRYPICKING_FAILURE in response.json["errors"][0]
def test_fail_plate_from_barcode_internal_server_error_constructing_message_none(app, client):
with app.app_context():
test_error = "this is a test error"
with patch(
"lighthouse.blueprints.cherrypicked_plates.construct_cherrypicking_plate_failed_message",
return_value=([test_error], None),
):
response = client.get(
"/cherrypicked-plates/fail?barcode=ABC123&user_id=test_user&robot=BKRB0001&failure_type=robot_crashed"
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert test_error in response.json["errors"][0]
def test_fail_plate_from_barcode_internal_error_failed_broker_initialise(client):
with patch(
"lighthouse.blueprints.cherrypicked_plates.construct_cherrypicking_plate_failed_message"
) as mock_construct:
with patch("lighthouse.blueprints.cherrypicked_plates.Broker", side_effect=Exception()):
mock_construct.return_value = [], Message("test message content")
response = client.get(
"/cherrypicked-plates/fail?barcode=plate_1&user_id=test_user"
"&robot=BKRB0001&failure_type=robot_crashed"
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert ERROR_UNEXPECTED_CHERRYPICKING_FAILURE in response.json["errors"][0]
def test_fail_plate_from_barcode_internal_error_failed_broker_connect(client):
with patch(
"lighthouse.blueprints.cherrypicked_plates.construct_cherrypicking_plate_failed_message"
) as mock_construct:
with patch("lighthouse.blueprints.cherrypicked_plates.Broker.connect", side_effect=Exception()):
mock_construct.return_value = [], Message("test message content")
response = client.get(
"/cherrypicked-plates/fail?barcode=plate_1&user_id=test_user"
"&robot=BKRB0001&failure_type=robot_crashed"
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert ERROR_UNEXPECTED_CHERRYPICKING_FAILURE in response.json["errors"][0]
def test_fail_plate_from_barcode_internal_error_failed_broker_publish(client):
with patch(
"lighthouse.blueprints.cherrypicked_plates.construct_cherrypicking_plate_failed_message"
) as mock_construct:
with patch("lighthouse.blueprints.cherrypicked_plates.Broker") as mock_broker:
mock_broker().publish.side_effect = Exception()
mock_construct.return_value = [], Message("test message content")
response = client.get(
"/cherrypicked-plates/fail?barcode=plate_1&user_id=test_user"
"&robot=BKRB0001&failure_type=robot_crashed"
)
assert response.status_code == HTTPStatus.INTERNAL_SERVER_ERROR
assert ERROR_UNEXPECTED_CHERRYPICKING_FAILURE in response.json["errors"][0]
def test_fail_plate_from_barcode_success(client):
with patch(
"lighthouse.blueprints.cherrypicked_plates.construct_cherrypicking_plate_failed_message"
) as mock_construct:
routing_key = "test.routing.key"
with patch("lighthouse.blueprints.cherrypicked_plates.get_routing_key", return_value=routing_key):
with patch("lighthouse.blueprints.cherrypicked_plates.Broker") as mock_broker:
test_errors = ["error 1", "error 2"]
test_message = Message("test message content")
mock_construct.return_value = test_errors, test_message
response = client.get(
"/cherrypicked-plates/fail?barcode=plate_1&user_id=test_user"
"&robot=BKRB0001&failure_type=robot_crashed"
)
mock_broker().publish.assert_called_with(test_message, routing_key)
mock_broker().close_connection.assert_called()
assert response.status_code == HTTPStatus.OK
assert response.json["errors"] == test_errors
| 41.019284 | 119 | 0.687173 | 1,700 | 14,890 | 5.697059 | 0.088235 | 0.104078 | 0.058544 | 0.061951 | 0.832938 | 0.820444 | 0.799897 | 0.77935 | 0.753536 | 0.73237 | 0 | 0.013459 | 0.211619 | 14,890 | 362 | 120 | 41.132597 | 0.811568 | 0.007186 | 0 | 0.565972 | 0 | 0.006944 | 0.331867 | 0.246211 | 0 | 0 | 0 | 0 | 0.184028 | 1 | 0.072917 | false | 0 | 0.017361 | 0 | 0.090278 | 0.069444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b26827804a7f91593b72a7e371b0367eb7ed8e7b | 4,343 | py | Python | linguaf/lexical_diversity.py | Perevalov/language_features | bd85ae835162fbf53de4a087bd9cef7d38896ff5 | [
"MIT"
] | 3 | 2021-05-10T01:35:50.000Z | 2021-09-21T00:47:55.000Z | linguaf/lexical_diversity.py | Perevalov/language_features | bd85ae835162fbf53de4a087bd9cef7d38896ff5 | [
"MIT"
] | 6 | 2021-06-01T09:35:59.000Z | 2021-06-17T11:53:45.000Z | linguaf/lexical_diversity.py | Perevalov/language_features | bd85ae835162fbf53de4a087bd9cef7d38896ff5 | [
"MIT"
] | null | null | null | import collections
import math
from linguaf.descriptive_statistics import get_words, get_lexical_items
from linguaf import __check_bool_param, __check_documents_param, __check_lang_param
def lexical_density(documents: list, lang: str = 'en', remove_stopwords: bool = False) -> float:
"""Calculates lexical density based on a list of documents.
Lexical density is the ratio between number of lexical items and number of words in total.
See Wikipedia article: https://en.wikipedia.org/wiki/Lexical_density
Keyword arguments:
documents -- the list of textual documents
lang -- language of the textual documents
remove_stopwords -- boolean flag that shows if the function should exclude stopwords
"""
__check_documents_param(documents)
__check_lang_param(lang)
__check_bool_param(remove_stopwords)
words = get_words(documents=documents, lang=lang, remove_stopwords=remove_stopwords)
lex_items = get_lexical_items(documents=documents, remove_stopwords=remove_stopwords, lang=lang)
return len(lex_items)/len(words)*100
def type_token_ratio(documents: list, lang: str = 'en', remove_stopwords: bool = False) -> float:
"""Calculates Type-Token Ratio based on a list of documents.
Types -- are unique words, Tokens (in this context) -- are words in total.
See Wikipedia article: https://de.wikipedia.org/wiki/Type-Token-Relation
Keyword arguments:
documents -- the list of textual documents
lang -- language of the textual documents
remove_stopwords -- boolean flag that shows if the function should exclude stopwords
"""
__check_documents_param(documents)
__check_lang_param(lang)
__check_bool_param(remove_stopwords)
words = get_words(documents=documents, lang=lang, remove_stopwords=remove_stopwords)
num_unq = len(collections.Counter(words).keys())
return num_unq/len(words)
def log_type_token_ratio(documents: list, lang: str = 'en', remove_stopwords: bool = False) -> float:
"""Calculates Log Type-Token Ratio (Herdan's Constant) based on a list of documents.
Types -- are unique words, Tokens (in this context) -- are words in total.
Publicaiton: Herdan, 1960, as cited in Tweedie & Baayen, 1998
Keyword arguments:
documents -- the list of textual documents
lang -- language of the textual documents
remove_stopwords -- boolean flag that shows if the function should exclude stopwords
"""
__check_documents_param(documents)
__check_lang_param(lang)
__check_bool_param(remove_stopwords)
words = get_words(documents=documents, lang=lang, remove_stopwords=remove_stopwords)
num_unq = len(collections.Counter(words).keys())
return math.log(num_unq)/math.log(len(words)) if len(words) != 1 else 0
def summer_index(documents: list, lang: str = 'en', remove_stopwords: bool = False) -> float:
"""Calculates Summer's Index based on a list of documents.
The index is the same as Double Log Type-Token Ratio.
Keyword arguments:
documents -- the list of textual documents
lang -- language of the textual documents
remove_stopwords -- boolean flag that shows if the function should exclude stopwords
"""
__check_documents_param(documents)
__check_lang_param(lang)
__check_bool_param(remove_stopwords)
words = get_words(documents=documents, lang=lang, remove_stopwords=remove_stopwords)
num_unq = len(collections.Counter(words).keys())
if num_unq == 0:
num_unq = 10**-10
return math.log(math.log(num_unq))/math.log(math.log(len(words))) if len(words) != 1 else 0
def root_type_token_ratio(documents: list, lang: str = 'en', remove_stopwords: bool = False) -> float:
"""Calculates Root Type-Token Ratio based on a list of documents.
Publication: Guiraud, 1954. Also cited in Tweedie & Baayen, 1998
Keyword arguments:
documents -- the list of textual documents
lang -- language of the textual documents
remove_stopwords -- boolean flag that shows if the function should exclude stopwords
"""
__check_documents_param(documents)
__check_lang_param(lang)
__check_bool_param(remove_stopwords)
words = get_words(documents=documents, lang=lang, remove_stopwords=remove_stopwords)
num_unq = len(collections.Counter(words).keys())
return num_unq/(len(words)**0.5)
| 43.43 | 102 | 0.743495 | 592 | 4,343 | 5.22973 | 0.16723 | 0.130814 | 0.031654 | 0.05814 | 0.801034 | 0.801034 | 0.775517 | 0.756137 | 0.756137 | 0.739664 | 0 | 0.00834 | 0.171771 | 4,343 | 99 | 103 | 43.868687 | 0.852377 | 0.421137 | 0 | 0.585366 | 0 | 0 | 0.004254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121951 | false | 0 | 0.097561 | 0 | 0.341463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b27b2f69348b2c45d8c9ea6b23a326199d2a0455 | 106 | py | Python | tests/test_core.py | BroBossa/Api_PogoP | 871b22c33f329abd1cf1cfa19b0b78d39b7e4c26 | [
"Apache-2.0"
] | null | null | null | tests/test_core.py | BroBossa/Api_PogoP | 871b22c33f329abd1cf1cfa19b0b78d39b7e4c26 | [
"Apache-2.0"
] | null | null | null | tests/test_core.py | BroBossa/Api_PogoP | 871b22c33f329abd1cf1cfa19b0b78d39b7e4c26 | [
"Apache-2.0"
] | null | null | null |
def test_version():
from hogwarts_apitest import __version__
assert isinstance(__version__, str)
| 21.2 | 44 | 0.773585 | 12 | 106 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169811 | 106 | 4 | 45 | 26.5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a22d5c1d80822ea471ff274c3f610a75af83c190 | 522 | py | Python | 3rdparty/huawei-lte-api/huawei_lte_api/api/SNtp.py | tux1c0/plugin-huawei4g | af83ce6321db23fcfd51eb204e00c287d3ea2325 | [
"MIT"
] | null | null | null | 3rdparty/huawei-lte-api/huawei_lte_api/api/SNtp.py | tux1c0/plugin-huawei4g | af83ce6321db23fcfd51eb204e00c287d3ea2325 | [
"MIT"
] | 1 | 2020-06-21T20:35:27.000Z | 2020-07-14T20:08:55.000Z | 3rdparty/huawei-lte-api/huawei_lte_api/api/SNtp.py | tux1c0/plugin-huawei4g | af83ce6321db23fcfd51eb204e00c287d3ea2325 | [
"MIT"
] | 3 | 2020-02-27T11:35:55.000Z | 2021-03-25T08:51:51.000Z | from huawei_lte_api.ApiGroup import ApiGroup
from huawei_lte_api.Connection import GetResponseType
class SNtp(ApiGroup):
def get_settings(self) -> GetResponseType:
return self._connection.get('sntp/settings')
def sntpswitch(self) -> GetResponseType:
return self._connection.get('sntp/sntpswitch')
def serverinfo(self) -> GetResponseType:
return self._connection.get('sntp/serverinfo')
def timeinfo(self) -> GetResponseType:
return self._connection.get('sntp/timeinfo')
| 30.705882 | 54 | 0.731801 | 58 | 522 | 6.431034 | 0.310345 | 0.203753 | 0.268097 | 0.310992 | 0.493298 | 0.493298 | 0.493298 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 522 | 16 | 55 | 32.625 | 0.857471 | 0 | 0 | 0 | 0 | 0 | 0.10728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0.181818 | 0.363636 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a23ff4f970fe9900e1378b9ec62b11cbafe38e11 | 23 | py | Python | related_word/__init__.py | yu-su-ke/easily_language_processing | 820d0d8faccd2d87246a2e0951868f435c6dcdfb | [
"MIT"
] | null | null | null | related_word/__init__.py | yu-su-ke/easily_language_processing | 820d0d8faccd2d87246a2e0951868f435c6dcdfb | [
"MIT"
] | 4 | 2019-12-16T12:33:08.000Z | 2022-01-13T01:56:56.000Z | related_word/__init__.py | yu-su-ke/easily_language_processing | 820d0d8faccd2d87246a2e0951868f435c6dcdfb | [
"MIT"
] | null | null | null | from . import wordnet
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 23 | 1 | 23 | 23 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a2867ef987c4697b728b19d668a96ec8d219e656 | 328 | py | Python | configs/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_ycbvPbr_SO/FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_Pbr_12_21BleachCleanser.py | THU-DA-6D-Pose-Group/self6dpp | c267cfa55e440e212136a5e9940598720fa21d16 | [
"Apache-2.0"
] | 33 | 2021-12-15T07:11:47.000Z | 2022-03-29T08:58:32.000Z | configs/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_ycbvPbr_SO/FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_Pbr_12_21BleachCleanser.py | THU-DA-6D-Pose-Group/self6dpp | c267cfa55e440e212136a5e9940598720fa21d16 | [
"Apache-2.0"
] | 3 | 2021-12-15T11:39:54.000Z | 2022-03-29T07:24:23.000Z | configs/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_ycbvPbr_SO/FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_Pbr_12_21BleachCleanser.py | THU-DA-6D-Pose-Group/self6dpp | c267cfa55e440e212136a5e9940598720fa21d16 | [
"Apache-2.0"
] | null | null | null | _base_ = "./FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_Pbr_01_02MasterChefCan.py"
OUTPUT_DIR = "output/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_ycbvPbr_SO/12_21BleachCleanser"
DATASETS = dict(TRAIN=("ycbv_021_bleach_cleanser_train_pbr",))
| 82 | 147 | 0.89939 | 43 | 328 | 6.162791 | 0.697674 | 0.083019 | 0.196226 | 0.279245 | 0.513208 | 0.513208 | 0.513208 | 0.513208 | 0.513208 | 0.513208 | 0 | 0.097179 | 0.027439 | 328 | 3 | 148 | 109.333333 | 0.733542 | 0 | 0 | 0 | 0 | 0 | 0.82622 | 0.82622 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a2b272636ae154c7c70a00e43c57982cef81e702 | 112 | py | Python | gridnetwork/routes/__init__.py | shashigharti/PyGridNetwork | b69c0988031177c39f555d6e561ecdf6661e5d56 | [
"Apache-2.0"
] | 1 | 2022-02-01T14:48:32.000Z | 2022-02-01T14:48:32.000Z | gridnetwork/routes/__init__.py | Benardi/PyGridNetwork | b915fafeca7fe1427162d4f1c4842aa3e38b56b6 | [
"Apache-2.0"
] | null | null | null | gridnetwork/routes/__init__.py | Benardi/PyGridNetwork | b915fafeca7fe1427162d4f1c4842aa3e38b56b6 | [
"Apache-2.0"
] | 1 | 2021-07-06T04:32:24.000Z | 2021-07-06T04:32:24.000Z | from .general import *
from .nodes import *
from .dataset import *
from .models import *
from .network import *
| 18.666667 | 22 | 0.732143 | 15 | 112 | 5.466667 | 0.466667 | 0.487805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 112 | 5 | 23 | 22.4 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a2bec5b90cb790df09d6320b6f8f2e9b1cf987d3 | 987 | py | Python | src/grocsvs/stages/__init__.py | Uditgulati/grocsvs | e7225b0e65e40138053a214130ebaeec1e1448d8 | [
"MIT"
] | 39 | 2016-09-11T03:11:09.000Z | 2021-04-27T17:08:05.000Z | src/grocsvs/stages/__init__.py | Uditgulati/grocsvs | e7225b0e65e40138053a214130ebaeec1e1448d8 | [
"MIT"
] | 34 | 2016-10-24T22:24:49.000Z | 2021-04-12T14:08:54.000Z | src/grocsvs/stages/__init__.py | Uditgulati/grocsvs | e7225b0e65e40138053a214130ebaeec1e1448d8 | [
"MIT"
] | 7 | 2017-10-26T00:55:47.000Z | 2020-07-28T10:57:28.000Z | from grocsvs.stages import preflight
from grocsvs.stages import constants
from grocsvs.stages import sample_info
from grocsvs.stages import qc
from grocsvs.stages import call_readclouds
from grocsvs.stages import window_barcodes
from grocsvs.stages import barcode_overlaps
from grocsvs.stages import sv_candidate_regions
from grocsvs.stages import sv_candidates
from grocsvs.stages import refine_grid_search_breakpoints
from grocsvs.stages import supporting_barcodes
from grocsvs.stages import pair_evidence
from grocsvs.stages import refine_breakpoints
from grocsvs.stages import barcodes_from_graphs
from grocsvs.stages import collect_reads_for_barcodes
from grocsvs.stages import assembly
from grocsvs.stages import walk_assemblies
from grocsvs.stages import postassembly_merge
from grocsvs.stages import cluster_svs
from grocsvs.stages import final_clustering
from grocsvs.stages import genotyping
from grocsvs.stages import postprocessing
from grocsvs.stages import visualize
| 32.9 | 57 | 0.877406 | 137 | 987 | 6.160584 | 0.306569 | 0.299763 | 0.46327 | 0.626777 | 0.31872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099291 | 987 | 29 | 58 | 34.034483 | 0.949381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a2cc1740c7c92ccf224f0f6a695b6d8693cdebc2 | 91 | py | Python | torchdyn/__init__.py | cfinlay/torchdyn | 300436c675c4ae51e6bd92fcf4c27fe7d3fc71a5 | [
"Apache-2.0"
] | null | null | null | torchdyn/__init__.py | cfinlay/torchdyn | 300436c675c4ae51e6bd92fcf4c27fe7d3fc71a5 | [
"Apache-2.0"
] | null | null | null | torchdyn/__init__.py | cfinlay/torchdyn | 300436c675c4ae51e6bd92fcf4c27fe7d3fc71a5 | [
"Apache-2.0"
] | null | null | null | from .adjoint import *
from .learner import *
from .plot import *
from ._internals import * | 22.75 | 25 | 0.747253 | 12 | 91 | 5.583333 | 0.5 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164835 | 91 | 4 | 25 | 22.75 | 0.881579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a2d2a7f76aa30d1d29c1862804ac1728dceb522d | 115 | py | Python | stRT/plot/two_d_plot/basic_plot/__init__.py | Yao-14/stAnalysis | d08483ce581f5b03cfcad8be500aaa64b0293f74 | [
"BSD-3-Clause"
] | null | null | null | stRT/plot/two_d_plot/basic_plot/__init__.py | Yao-14/stAnalysis | d08483ce581f5b03cfcad8be500aaa64b0293f74 | [
"BSD-3-Clause"
] | null | null | null | stRT/plot/two_d_plot/basic_plot/__init__.py | Yao-14/stAnalysis | d08483ce581f5b03cfcad8be500aaa64b0293f74 | [
"BSD-3-Clause"
] | null | null | null | from .lineplot import basic_line
from .utils import basic_FacetGrid, save_fig
from .violinplot import basic_violin
| 28.75 | 44 | 0.852174 | 17 | 115 | 5.529412 | 0.647059 | 0.351064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113043 | 115 | 3 | 45 | 38.333333 | 0.921569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0c5f81b541dd9f32070bddec96b7d6bf60c98fd4 | 71 | py | Python | py_tdlib/constructors/message_chat_delete_photo.py | Mr-TelegramBot/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 24 | 2018-10-05T13:04:30.000Z | 2020-05-12T08:45:34.000Z | py_tdlib/constructors/message_chat_delete_photo.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 3 | 2019-06-26T07:20:20.000Z | 2021-05-24T13:06:56.000Z | py_tdlib/constructors/message_chat_delete_photo.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 5 | 2018-10-05T14:29:28.000Z | 2020-08-11T15:04:10.000Z | from ..factory import Type
class messageChatDeletePhoto(Type):
pass
| 11.833333 | 35 | 0.788732 | 8 | 71 | 7 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140845 | 71 | 5 | 36 | 14.2 | 0.918033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a74a96def1934973a9ab667809fb83552134d6a3 | 35 | py | Python | gilded-rose/item_update_brie.py | padawin/katas | b22d2337dc92351a1349172dadcd4cc1cd29431e | [
"MIT"
] | null | null | null | gilded-rose/item_update_brie.py | padawin/katas | b22d2337dc92351a1349172dadcd4cc1cd29431e | [
"MIT"
] | null | null | null | gilded-rose/item_update_brie.py | padawin/katas | b22d2337dc92351a1349172dadcd4cc1cd29431e | [
"MIT"
] | null | null | null | def update(item):
return 1, -1
| 11.666667 | 17 | 0.6 | 6 | 35 | 3.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.257143 | 35 | 2 | 18 | 17.5 | 0.730769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a791d47fac23f16807f38b1da6bc39f6fbff442d | 54 | py | Python | text_clf/evaluate.py | kejunxiao/TextClf | aa1c195cb5908c32a3e6ed6891142603cb198d87 | [
"BSD-3-Clause"
] | 2 | 2018-05-13T13:00:10.000Z | 2018-05-13T13:00:12.000Z | text_clf/evaluate.py | kejunxiao/TextClf | aa1c195cb5908c32a3e6ed6891142603cb198d87 | [
"BSD-3-Clause"
] | null | null | null | text_clf/evaluate.py | kejunxiao/TextClf | aa1c195cb5908c32a3e6ed6891142603cb198d87 | [
"BSD-3-Clause"
] | null | null | null | import tensorflow as tf
def evaluate(FLAGS):
pass | 13.5 | 23 | 0.740741 | 8 | 54 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203704 | 54 | 4 | 24 | 13.5 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a7b06f1988fca8fbec656a8715a144c4a37df971 | 23 | py | Python | sophus/__init__.py | apl-ocean-engineering/SophusPy | 1ae21879047444de91c23e86ba8f3e81d9060759 | [
"MIT"
] | 24 | 2018-05-30T02:27:20.000Z | 2021-08-14T12:21:03.000Z | sophus/__init__.py | apl-ocean-engineering/SophusPy | 1ae21879047444de91c23e86ba8f3e81d9060759 | [
"MIT"
] | 6 | 2019-06-29T15:05:57.000Z | 2021-12-03T23:32:29.000Z | sophus/__init__.py | apl-ocean-engineering/SophusPy | 1ae21879047444de91c23e86ba8f3e81d9060759 | [
"MIT"
] | 3 | 2020-07-30T21:52:07.000Z | 2021-07-07T22:38:37.000Z | from .sophuspy import * | 23 | 23 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a7d4ec67f9ca2b4fe222c81aae202c0ecef7ba9b | 39 | py | Python | modelclone/__init__.py | philsjh/django-modelclone | d7c9f18c6e95b5a67f96f65bb73fd4612286913c | [
"MIT"
] | 51 | 2015-01-28T13:51:59.000Z | 2022-02-08T11:22:23.000Z | modelclone/__init__.py | philsjh/django-modelclone | d7c9f18c6e95b5a67f96f65bb73fd4612286913c | [
"MIT"
] | 28 | 2015-01-13T11:21:40.000Z | 2022-02-01T11:03:54.000Z | modelclone/__init__.py | philsjh/django-modelclone | d7c9f18c6e95b5a67f96f65bb73fd4612286913c | [
"MIT"
] | 33 | 2015-07-09T15:04:26.000Z | 2022-03-22T17:37:53.000Z | from .admin import ClonableModelAdmin
| 13 | 37 | 0.846154 | 4 | 39 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 2 | 38 | 19.5 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac3900b94fabac45a3fd7eeac3c281a4afc15108 | 115 | py | Python | vwgec/utils/softmax.py | snukky/vwgec | 1638b24c91ed0324309a91a351986bdff5229a18 | [
"MIT"
] | null | null | null | vwgec/utils/softmax.py | snukky/vwgec | 1638b24c91ed0324309a91a351986bdff5229a18 | [
"MIT"
] | null | null | null | vwgec/utils/softmax.py | snukky/vwgec | 1638b24c91ed0324309a91a351986bdff5229a18 | [
"MIT"
] | null | null | null | import math
def softmax(xs):
norm = sum(math.exp(x) for x in xs)
return [math.exp(x) / norm for x in xs]
| 16.428571 | 43 | 0.617391 | 23 | 115 | 3.086957 | 0.521739 | 0.197183 | 0.225352 | 0.225352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252174 | 115 | 6 | 44 | 19.166667 | 0.825581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
ac56d07b623281c921f21624a21b3881502eda1b | 7,097 | py | Python | rotaryswitch_example.py | PyTis/wx_aic | 38c23ab744bea60767f6bc9fd9f098a3b38584ce | [
"MIT"
] | 1 | 2021-03-20T08:45:07.000Z | 2021-03-20T08:45:07.000Z | rotaryswitch_example.py | PyTis/wx_aic | 38c23ab744bea60767f6bc9fd9f098a3b38584ce | [
"MIT"
] | null | null | null | rotaryswitch_example.py | PyTis/wx_aic | 38c23ab744bea60767f6bc9fd9f098a3b38584ce | [
"MIT"
] | 1 | 2021-03-20T08:45:15.000Z | 2021-03-20T08:45:15.000Z | import os
import wx
import wx.lib.inspection
from aic import ImageControlPanel, ImageControlFrame
from aic import RotarySwitch
from aic.rotary_switch import EVT_RS_CHANGE
RESOURCES = 'res'
class ICPanel(ImageControlPanel):
def __init__(self, parent, bmp, *args, tiled=False, **kwargs):
super().__init__(parent, bmp, *args, tiled, **kwargs)
self._populate()
self.Bind(EVT_RS_CHANGE, self.on_dial_change, id=self.dial.GetId())
def _populate(self):
panel_sizer = wx.BoxSizer(wx.VERTICAL)
top_sizer = wx.BoxSizer(wx.HORIZONTAL)
top_sizer.Add((0, 0), 1, wx.EXPAND, 5)
self.Text1 = wx.StaticText(self, wx.ID_ANY, "Static Text", wx.DefaultPosition, wx.DefaultSize, 0)
top_sizer.Add(self.Text1, 0, wx.ALL, 10)
top_sizer.Add((0, 0), 1, wx.EXPAND, 5)
mid_sizer = wx.BoxSizer(wx.HORIZONTAL)
mid_sizer.Add((0, 0), 1, wx.EXPAND, 5)
# Add a rotary dial control #
dial_pair = (
wx.Bitmap(os.path.join(RESOURCES, 'sticky_knob1.png')),
wx.Bitmap(os.path.join(RESOURCES, 'sticky_knob1_handle.png')))
self.dial = RotarySwitch(self, dial_pair)
self.dial.set_padding((10, 10))
self.dial.set_rotation_point_offset((-1, 0))
self.dial.set_zero_angle_offset(-225)
self.dial.set_pointer_rot_offset(-135)
self.dial.set_initial_angle(120)
self.dial.set_max_angle(270)
self.dial.set_step(2, 4)
self.dial.set_highlighting()
self.dial.highlight_box = ((0,0), (0,0))
mid_sizer.Add(self.dial,0,0, 10)
mid_sizer.Add((0, 0), 1, wx.EXPAND, 5)
bot_sizer = wx.BoxSizer(wx.HORIZONTAL)
bot_sizer.Add((0, 0), 1, wx.EXPAND, 5)
bot_sizer.Add((0, 0), 1, wx.EXPAND, 5)
panel_sizer.Add(top_sizer, 1, wx.EXPAND, 5)
panel_sizer.Add(mid_sizer, 1, wx.EXPAND, 5)
panel_sizer.Add(bot_sizer, 1, wx.EXPAND, 5)
self.SetSizer(panel_sizer)
self.Layout()
def on_dial_change(self,event):
self.Text1.SetLabel(str(event.state))
event.Skip()
def __del__(self):
pass
class StdPanel(wx.Panel):
def __init__(self, parent, id=wx.ID_ANY, pos=wx.DefaultPosition, size=wx.Size(1024, 768), style=wx.TAB_TRAVERSAL,
name=wx.EmptyString):
super().__init__(parent, id=id, pos=pos, size=size, style=style, name=name)
self._populate()
def _populate(self):
panel_sizer = wx.BoxSizer(wx.VERTICAL)
top_sizer = wx.BoxSizer(wx.HORIZONTAL)
top_sizer.Add((0, 0), 1, wx.EXPAND, 5)
self.Text1 = wx.StaticText(self, wx.ID_ANY, "Static Text", wx.DefaultPosition, wx.DefaultSize, 0)
top_sizer.Add(self.Text1, 0, wx.ALL, 10)
top_sizer.Add((0, 0), 1, wx.EXPAND, 5)
mid_sizer = wx.BoxSizer(wx.HORIZONTAL)
mid_sizer.Add((0, 0), 1, wx.EXPAND, 5)
# Add a rotary dial control #
dial_pair = (
wx.Bitmap(os.path.join(RESOURCES, 'sticky_knob1.png')),
wx.Bitmap(os.path.join(RESOURCES, 'sticky_knob1_mark.png')))
self.dial = RotarySwitch(self, dial_pair)
self.dial.set_padding((10, 10))
self.dial.set_rotation_point_offset((-1, 0))
self.dial.set_zero_angle_offset(-225)
self.dial.set_pointer_rot_offset(-135)
self.dial.set_initial_angle(0)
self.dial.set_max_angle(270)
self.dial.set_step(2, 4)
self.dial.set_highlighting()
self.dial.highlight_box = ((10,10), (3,3))
mid_sizer.Add(self.dial,0,0, 10)
mid_sizer.Add((0, 0), 1, wx.EXPAND, 5)
bot_sizer = wx.BoxSizer(wx.HORIZONTAL)
bot_sizer.Add((0, 0), 1, wx.EXPAND, 5)
bot_sizer.Add((0, 0), 1, wx.EXPAND, 5)
panel_sizer.Add(top_sizer, 1, wx.EXPAND, 5)
panel_sizer.Add(mid_sizer, 1, wx.EXPAND, 5)
panel_sizer.Add(bot_sizer, 1, wx.EXPAND, 5)
self.SetSizer(panel_sizer)
self.Layout()
self.Bind(EVT_RS_CHANGE, self.on_dial_change, id=self.dial.GetId())
def on_dial_change(self,event):
self.Text1.SetLabel(str(event.state))
event.Skip()
def __del__(self):
pass
class StdFrame(wx.Frame):
def __init__(self, parent):
super().__init__(parent, id=wx.ID_ANY, title="Example wxFrame", pos=wx.DefaultPosition, size=wx.Size(1024, 768),
style=wx.DEFAULT_FRAME_STYLE | wx.TAB_TRAVERSAL)
# self.SetSizeHints(wx.DefaultSize, wx.DefaultSize)
frame_sizer = wx.BoxSizer(wx.HORIZONTAL)
""" Choose a panel """
# self.main_panel = StdPanel(self)
panel_bmp = wx.Bitmap(os.path.join(RESOURCES, 'sticky_bg.png'))
self.main_panel = ICPanel(self, panel_bmp, tiled=True)
""" Optional second panel """
# # self.main_panel2 = StdPanel(self)
# panel_bmp = wx.Bitmap(os.path.join(RES, 'sticky_bg.png'))
# self.main_panel2 = ICPanel(self, panel_bmp, tiled=True)
frame_sizer.Add(self.main_panel, 1, wx.EXPAND, 5)
# frame_sizer.Add(self.main_panel2, 1, wx.EXPAND, 5)
self.SetSizer(frame_sizer)
self.Layout()
self.Centre(wx.BOTH)
self.Bind(wx.EVT_CLOSE, self.on_close)
def __del__(self):
pass
def on_close(self, event):
pass
event.Skip()
class ICFrame(ImageControlFrame):
bmp = (os.path.join(RESOURCES, 'led1rect_active_dark.png'))
def __init__(self, parent, resizable=True, bitmap=bmp, tiled=True):
super().__init__(parent, resizable=resizable, bitmap=bitmap, tiled=tiled, id=wx.ID_ANY, title="Example ICFrame",
pos=wx.DefaultPosition, size=wx.Size(1024, 768), style=wx.DEFAULT_FRAME_STYLE)
# self.SetSizeHints(wx.Size(1024, 768), wx.DefaultSize, wx.DefaultSize) # Minimum size
# self.SetTransparent(125)
# self.set_background(wx.Bitmap(os.path.join(RES, 'led1rect_active_dark.png')))
# self.set_tiled(True)
# self.set_stored(True)
frame_sizer = wx.BoxSizer(wx.HORIZONTAL)
""" Choose a panel """
# self.main_panel = StdPanel(self)
panel_bmp = wx.Bitmap(os.path.join(RESOURCES, 'sticky_bg.png'))
self.main_panel = ICPanel(self, panel_bmp, tiled=True)
""" Optional second panel """
# self.main_panel2 = StdPanel(self)
# panel_bmp = wx.Bitmap(os.path.join(RES, 'sticky_bg.png'))
# # self.main_panel2 = ICPanel(self, panel_bmp, tiled=True)
frame_sizer.Add(self.main_panel, 1, wx.EXPAND, 5)
# frame_sizer.Add(self.main_panel2, 1, wx.EXPAND, 5)
self.SetSizer(frame_sizer)
self.Layout()
self.Centre(wx.BOTH)
self.Bind(wx.EVT_CLOSE, self.on_close)
def on_close(self, event):
pass
event.Skip()
def main():
app = wx.App(False)
# wx.lib.inspection.InspectionTool().Show()
""" Choose a frame """
# mainframe = StdFrame(None)
mainframe = ICFrame(None)
mainframe.Show()
app.MainLoop()
if __name__ == '__main__':
main()
| 33.163551 | 120 | 0.624771 | 999 | 7,097 | 4.229229 | 0.143143 | 0.049231 | 0.046864 | 0.052071 | 0.760473 | 0.755503 | 0.740592 | 0.740592 | 0.726154 | 0.726154 | 0 | 0.033389 | 0.236156 | 7,097 | 213 | 121 | 33.319249 | 0.745988 | 0.122446 | 0 | 0.727273 | 0 | 0 | 0.031096 | 0.011188 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106061 | false | 0.037879 | 0.045455 | 0 | 0.189394 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ce3c5d892f80be546e7b52413fc58b176cc5c7be | 79 | py | Python | Code/gettingstarted.py | patvin80/inframonitoring | 62e9b3d5a9a51e1427164160eb313a4807978fae | [
"Apache-2.0"
] | null | null | null | Code/gettingstarted.py | patvin80/inframonitoring | 62e9b3d5a9a51e1427164160eb313a4807978fae | [
"Apache-2.0"
] | null | null | null | Code/gettingstarted.py | patvin80/inframonitoring | 62e9b3d5a9a51e1427164160eb313a4807978fae | [
"Apache-2.0"
] | null | null | null | import json
def lambda_handler(event, context):
return event["key1"][::-1] | 19.75 | 35 | 0.696203 | 11 | 79 | 4.909091 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.139241 | 79 | 4 | 36 | 19.75 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
65fffb94c4a06e73865f72d09b27457dd1cbd1a3 | 34 | py | Python | tartiflette/scalar/__init__.py | alexchamberlain/tartiflette | 6904b0f47770c348553e907be5f5bdb0929fe149 | [
"MIT"
] | null | null | null | tartiflette/scalar/__init__.py | alexchamberlain/tartiflette | 6904b0f47770c348553e907be5f5bdb0929fe149 | [
"MIT"
] | null | null | null | tartiflette/scalar/__init__.py | alexchamberlain/tartiflette | 6904b0f47770c348553e907be5f5bdb0929fe149 | [
"MIT"
] | null | null | null | from .custom_scalar import Scalar
| 17 | 33 | 0.852941 | 5 | 34 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5a77a34af898b2312a01fb70a0fbe104676f817f | 38 | py | Python | src/snake/__init__.py | main76/snake-master | 3087aed8164f1795e9558c142fc6983c18888056 | [
"MIT"
] | 8 | 2017-06-01T10:57:53.000Z | 2018-10-20T16:57:18.000Z | src/snake/__init__.py | main76/snake-master | 3087aed8164f1795e9558c142fc6983c18888056 | [
"MIT"
] | null | null | null | src/snake/__init__.py | main76/snake-master | 3087aed8164f1795e9558c142fc6983c18888056 | [
"MIT"
] | 2 | 2017-06-09T06:21:11.000Z | 2019-02-23T09:13:29.000Z | from .handler import Handler, CHANNEL
| 19 | 37 | 0.815789 | 5 | 38 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ce4b7c8ad94ee923bd84380b818f1acbf5ff5e45 | 209 | py | Python | paper_uploads/admin/__init__.py | dldevinc/paper-uploads | 9414b6e6fbaa52eadacd9852ce3c4d84c6c2c939 | [
"BSD-3-Clause"
] | 3 | 2020-06-05T10:43:05.000Z | 2022-02-22T16:46:16.000Z | paper_uploads/admin/__init__.py | dldevinc/paper-uploads | 9414b6e6fbaa52eadacd9852ce3c4d84c6c2c939 | [
"BSD-3-Clause"
] | 2 | 2021-04-03T12:25:20.000Z | 2022-02-02T06:10:46.000Z | paper_uploads/admin/__init__.py | dldevinc/paper-uploads | 9414b6e6fbaa52eadacd9852ce3c4d84c6c2c939 | [
"BSD-3-Clause"
] | null | null | null | from .base import UploadedFileBase # noqa: F401
from .collection import CollectionAdminBase # noqa: F401
from .file import UploadedFileAdmin # noqa: F401
from .image import UploadedImageAdmin # noqa: F401
| 41.8 | 57 | 0.789474 | 24 | 209 | 6.875 | 0.5 | 0.193939 | 0.218182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 0.15311 | 209 | 4 | 58 | 52.25 | 0.864407 | 0.205742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cebd4162d3194622d4d30054b7136281be7c13bb | 2,612 | py | Python | system-tests/testMainPage.py | FISA-Team/FISA | 0508841e70aaa66220e75016cf1166647184b02a | [
"MIT"
] | null | null | null | system-tests/testMainPage.py | FISA-Team/FISA | 0508841e70aaa66220e75016cf1166647184b02a | [
"MIT"
] | null | null | null | system-tests/testMainPage.py | FISA-Team/FISA | 0508841e70aaa66220e75016cf1166647184b02a | [
"MIT"
] | 2 | 2020-09-09T19:54:25.000Z | 2020-09-13T16:20:36.000Z | import time
from baseTest import BaseTest
class TestMainPage(BaseTest):
def test_MainPageHeading(self):
self.navigate("")
assert "FISA" in self.driver.title
self.driver.find_element_by_id('projectHeading')
def test_ChangeThemes(self):
# Light theme to dark theme
self.navigate("")
self.driver.find_element_by_id('themeLanguageMenu').click()
time.sleep(0.2)
self.driver.find_element_by_xpath('/html/body/div[2]/div[3]/ul/fieldset/div[1]/label[2]/span[1]/span[1]/input').click()
time.sleep(0.2)
color = self.driver.find_element_by_xpath('/html/body/div/div/div/div/div[1]').value_of_css_property('background-color')
assert color == 'rgb(44, 49, 58)'
# dark theme to light theme
self.driver.find_element_by_id('themeLanguageMenu').click()
time.sleep(0.2)
self.driver.find_element_by_xpath('/html/body/div[2]/div[3]/ul/fieldset/div[1]/label[1]/span[1]').click()
time.sleep(0.2)
color = self.driver.find_element_by_xpath('/html/body/div/div/div/div/div[1]').value_of_css_property('background-color')
assert color == 'rgb(242, 242, 242)'
def test_ChangeLanguages(self):
# EN to DE
self.navigate("")
manageUsecases = self.driver.find_element_by_xpath('/html/body/div/div/div/div/div[2]/h4').text
assert manageUsecases == 'Manage Use-Cases'
self.driver.find_element_by_id('themeLanguageMenu').click()
time.sleep(0.2)
self.driver.find_element_by_xpath('/html/body/div[2]/div[3]/ul/fieldset/div[2]/label[2]/span[1]/span[1]/input').click()
time.sleep(0.2)
manageUsecases = self.driver.find_element_by_xpath('/html/body/div/div/div/div/div[2]/h4').text
assert manageUsecases == 'Verwalte Anwendungszenarien'
# DE to EN
self.driver.find_element_by_id('themeLanguageMenu').click()
time.sleep(0.2)
self.driver.find_element_by_xpath('/html/body/div[2]/div[3]/ul/fieldset/div[2]/label[1]/span[1]/span[1]/input').click()
time.sleep(0.2)
manageUsecases = self.driver.find_element_by_xpath('/html/body/div/div/div/div/div[2]/h4').text
assert manageUsecases == 'Manage Use-Cases'
def test_openDeveloperMenu(self):
self.navigate("")
self.driver.find_element_by_xpath('/html/body/div/div/div/div/div[2]/button').click()
time.sleep(0.2)
developerMenuHeading = self.driver.find_element_by_xpath('/html/body/div[2]/div[3]/div/div[1]/h2').text
assert developerMenuHeading == "Developer-Menu" | 44.271186 | 128 | 0.664242 | 380 | 2,612 | 4.413158 | 0.184211 | 0.089445 | 0.096601 | 0.200358 | 0.756112 | 0.746571 | 0.731664 | 0.717352 | 0.717352 | 0.717352 | 0 | 0.032048 | 0.175727 | 2,612 | 59 | 129 | 44.271186 | 0.746865 | 0.026417 | 0 | 0.55814 | 0 | 0.093023 | 0.298543 | 0.210319 | 0 | 0 | 0 | 0 | 0.162791 | 1 | 0.093023 | false | 0 | 0.046512 | 0 | 0.162791 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0c9b412858adf93dcb37e7dd774003d1ec04bcc4 | 21 | py | Python | pwnlib/rop/__init__.py | IMULMUL/python3-pwntools | 61210a68cd88e9084c72292d3119c38c44f07966 | [
"MIT"
] | 325 | 2016-01-25T08:38:06.000Z | 2022-03-30T14:31:50.000Z | pwnlib/rop/__init__.py | IMULMUL/python3-pwntools | 61210a68cd88e9084c72292d3119c38c44f07966 | [
"MIT"
] | 8 | 2016-08-23T10:15:27.000Z | 2019-01-16T02:49:34.000Z | pwnlib/rop/__init__.py | IMULMUL/python3-pwntools | 61210a68cd88e9084c72292d3119c38c44f07966 | [
"MIT"
] | 71 | 2016-07-13T10:03:52.000Z | 2022-01-10T11:57:34.000Z | from .rop import ROP
| 10.5 | 20 | 0.761905 | 4 | 21 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0b2d616b065b2d969ee30305e61730bf71047185 | 98 | py | Python | filekeeper/__main__.py | aaronang/filekeeper | b6fedb304de1644c1b28ad30bcb78b1cf13695cd | [
"MIT"
] | null | null | null | filekeeper/__main__.py | aaronang/filekeeper | b6fedb304de1644c1b28ad30bcb78b1cf13695cd | [
"MIT"
] | null | null | null | filekeeper/__main__.py | aaronang/filekeeper | b6fedb304de1644c1b28ad30bcb78b1cf13695cd | [
"MIT"
] | null | null | null | from filekeeper import get_files, delete_files
delete_files([f['id'] for f in get_files()[:10]])
| 24.5 | 49 | 0.744898 | 17 | 98 | 4.058824 | 0.647059 | 0.231884 | 0.463768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022989 | 0.112245 | 98 | 3 | 50 | 32.666667 | 0.770115 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0b48e2d90183a864c59d3876a797a43ff19edfbe | 37,709 | py | Python | instances/passenger_demand/pas-20210421-2109-int1/31.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int1/31.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int1/31.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 2351
passenger_arriving = (
(3, 7, 4, 1, 3, 0, 8, 4, 2, 3, 3, 0), # 0
(3, 4, 7, 3, 0, 0, 12, 3, 4, 4, 0, 0), # 1
(1, 3, 6, 2, 0, 0, 5, 6, 5, 4, 3, 0), # 2
(2, 5, 8, 0, 2, 0, 8, 3, 7, 5, 2, 0), # 3
(2, 7, 2, 2, 1, 0, 5, 7, 5, 3, 4, 0), # 4
(5, 2, 7, 2, 1, 0, 2, 7, 1, 7, 1, 0), # 5
(1, 5, 2, 4, 4, 0, 6, 9, 5, 7, 3, 0), # 6
(4, 3, 5, 2, 2, 0, 4, 8, 4, 5, 1, 0), # 7
(3, 8, 7, 0, 0, 0, 3, 9, 2, 8, 3, 0), # 8
(3, 10, 5, 0, 2, 0, 9, 4, 3, 5, 0, 0), # 9
(3, 10, 10, 0, 1, 0, 5, 8, 4, 3, 1, 0), # 10
(4, 9, 8, 4, 0, 0, 2, 6, 6, 2, 0, 0), # 11
(4, 4, 3, 0, 2, 0, 6, 3, 2, 2, 3, 0), # 12
(4, 10, 4, 3, 1, 0, 6, 10, 3, 1, 1, 0), # 13
(6, 5, 5, 2, 2, 0, 9, 5, 1, 5, 1, 0), # 14
(4, 5, 3, 4, 0, 0, 5, 7, 4, 3, 0, 0), # 15
(8, 8, 2, 5, 1, 0, 6, 7, 7, 0, 1, 0), # 16
(1, 5, 4, 6, 1, 0, 4, 8, 5, 4, 1, 0), # 17
(2, 4, 5, 1, 3, 0, 5, 3, 4, 2, 2, 0), # 18
(3, 5, 6, 1, 1, 0, 3, 6, 7, 1, 1, 0), # 19
(3, 6, 8, 2, 1, 0, 5, 4, 1, 7, 5, 0), # 20
(3, 9, 3, 2, 0, 0, 4, 6, 3, 6, 2, 0), # 21
(3, 10, 7, 4, 1, 0, 2, 4, 8, 3, 0, 0), # 22
(5, 5, 5, 2, 6, 0, 3, 7, 7, 4, 2, 0), # 23
(5, 8, 3, 3, 3, 0, 7, 5, 3, 7, 4, 0), # 24
(6, 7, 2, 1, 1, 0, 8, 4, 2, 2, 3, 0), # 25
(4, 8, 4, 3, 3, 0, 8, 6, 3, 4, 0, 0), # 26
(3, 9, 2, 0, 2, 0, 4, 9, 3, 4, 0, 0), # 27
(2, 8, 7, 1, 2, 0, 6, 3, 4, 3, 2, 0), # 28
(2, 4, 6, 2, 2, 0, 4, 5, 4, 2, 4, 0), # 29
(3, 9, 6, 2, 1, 0, 4, 6, 1, 2, 1, 0), # 30
(2, 3, 3, 4, 1, 0, 9, 6, 10, 4, 2, 0), # 31
(1, 11, 6, 5, 0, 0, 6, 4, 5, 3, 2, 0), # 32
(2, 6, 3, 3, 1, 0, 2, 4, 5, 5, 6, 0), # 33
(4, 6, 9, 2, 0, 0, 7, 6, 8, 2, 1, 0), # 34
(5, 5, 7, 4, 1, 0, 6, 2, 4, 5, 6, 0), # 35
(1, 4, 5, 3, 1, 0, 1, 5, 5, 3, 2, 0), # 36
(3, 4, 9, 2, 2, 0, 6, 4, 4, 4, 2, 0), # 37
(3, 6, 5, 2, 2, 0, 3, 6, 1, 4, 0, 0), # 38
(1, 10, 7, 3, 3, 0, 4, 9, 3, 11, 2, 0), # 39
(2, 11, 6, 1, 0, 0, 6, 3, 1, 5, 1, 0), # 40
(4, 5, 11, 5, 0, 0, 6, 9, 2, 3, 5, 0), # 41
(2, 8, 5, 2, 0, 0, 5, 8, 7, 4, 0, 0), # 42
(4, 5, 2, 2, 4, 0, 2, 7, 3, 1, 0, 0), # 43
(2, 6, 6, 3, 0, 0, 5, 10, 9, 2, 3, 0), # 44
(3, 4, 4, 2, 1, 0, 7, 3, 4, 3, 4, 0), # 45
(3, 3, 5, 4, 1, 0, 4, 7, 3, 3, 0, 0), # 46
(3, 5, 9, 2, 3, 0, 2, 7, 2, 1, 3, 0), # 47
(4, 6, 6, 4, 6, 0, 5, 6, 7, 4, 1, 0), # 48
(5, 8, 4, 7, 1, 0, 4, 9, 4, 3, 3, 0), # 49
(6, 9, 4, 4, 4, 0, 4, 3, 5, 2, 3, 0), # 50
(5, 7, 8, 4, 2, 0, 3, 7, 6, 4, 1, 0), # 51
(1, 6, 3, 2, 5, 0, 1, 7, 1, 4, 5, 0), # 52
(5, 6, 4, 3, 3, 0, 2, 4, 8, 4, 0, 0), # 53
(4, 8, 1, 1, 2, 0, 4, 7, 4, 1, 3, 0), # 54
(3, 8, 8, 1, 4, 0, 2, 7, 3, 5, 1, 0), # 55
(2, 7, 7, 6, 2, 0, 5, 3, 3, 2, 3, 0), # 56
(4, 3, 6, 3, 2, 0, 4, 7, 4, 2, 2, 0), # 57
(3, 10, 3, 1, 4, 0, 3, 6, 5, 2, 3, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(2.649651558384548, 6.796460700757575, 7.9942360218509, 6.336277173913043, 7.143028846153846, 4.75679347826087), # 0
(2.6745220100478, 6.872041598712823, 8.037415537524994, 6.371564387077295, 7.196566506410256, 4.7551721391908215), # 1
(2.699108477221734, 6.946501402918069, 8.07957012282205, 6.406074879227053, 7.248974358974359, 4.753501207729468), # 2
(2.72339008999122, 7.019759765625, 8.120668982969152, 6.4397792119565205, 7.300204326923078, 4.7517809103260875), # 3
(2.747345978441128, 7.091736339085298, 8.160681323193373, 6.472647946859904, 7.350208333333334, 4.750011473429951), # 4
(2.7709552726563262, 7.162350775550646, 8.199576348721793, 6.504651645531401, 7.39893830128205, 4.748193123490338), # 5
(2.794197102721686, 7.231522727272727, 8.237323264781493, 6.535760869565218, 7.446346153846154, 4.746326086956522), # 6
(2.817050598722076, 7.299171846503226, 8.273891276599542, 6.565946180555556, 7.492383814102565, 4.744410590277778), # 7
(2.8394948907423667, 7.365217785493826, 8.309249589403029, 6.595178140096618, 7.537003205128205, 4.7424468599033816), # 8
(2.8615091088674274, 7.429580196496212, 8.343367408419024, 6.623427309782609, 7.580156249999999, 4.740435122282609), # 9
(2.8830723831821286, 7.492178731762065, 8.376213938874606, 6.65066425120773, 7.621794871794872, 4.738375603864734), # 10
(2.9041638437713395, 7.55293304354307, 8.407758385996857, 6.676859525966184, 7.661870993589743, 4.736268531099034), # 11
(2.92476262071993, 7.611762784090908, 8.437969955012854, 6.7019836956521734, 7.700336538461538, 4.734114130434782), # 12
(2.944847844112769, 7.668587605657268, 8.46681785114967, 6.726007321859903, 7.737143429487181, 4.731912628321256), # 13
(2.9643986440347283, 7.723327160493828, 8.494271279634388, 6.748900966183574, 7.772243589743589, 4.729664251207729), # 14
(2.9833941505706756, 7.775901100852272, 8.520299445694086, 6.770635190217391, 7.8055889423076925, 4.7273692255434785), # 15
(3.001813493805482, 7.826229078984287, 8.544871554555842, 6.791180555555555, 7.8371314102564105, 4.725027777777778), # 16
(3.019635803824017, 7.874230747141554, 8.567956811446729, 6.810507623792271, 7.866822916666667, 4.722640134359904), # 17
(3.03684021071115, 7.919825757575757, 8.589524421593831, 6.82858695652174, 7.894615384615387, 4.72020652173913), # 18
(3.053405844551751, 7.962933762538579, 8.609543590224222, 6.845389115338164, 7.9204607371794875, 4.717727166364734), # 19
(3.0693118354306894, 8.003474414281705, 8.62798352256498, 6.860884661835749, 7.944310897435898, 4.71520229468599), # 20
(3.084537313432836, 8.041367365056816, 8.644813423843189, 6.875044157608696, 7.9661177884615375, 4.712632133152174), # 21
(3.099061408643059, 8.076532267115601, 8.660002499285918, 6.887838164251208, 7.985833333333332, 4.710016908212561), # 22
(3.1128632511462295, 8.108888772709737, 8.673519954120252, 6.899237243357488, 8.003409455128205, 4.707356846316426), # 23
(3.125921971027217, 8.138356534090908, 8.685334993573264, 6.909211956521739, 8.018798076923076, 4.704652173913043), # 24
(3.1382166983708903, 8.164855203510802, 8.695416822872037, 6.917732865338165, 8.03195112179487, 4.701903117451691), # 25
(3.1497265632621207, 8.188304433221099, 8.703734647243644, 6.9247705314009655, 8.042820512820512, 4.699109903381642), # 26
(3.160430695785777, 8.208623875473483, 8.710257671915166, 6.930295516304349, 8.051358173076924, 4.696272758152174), # 27
(3.1703082260267292, 8.22573318251964, 8.714955102113683, 6.934278381642512, 8.057516025641025, 4.69339190821256), # 28
(3.1793382840698468, 8.239552006611252, 8.717796143066266, 6.936689689009662, 8.061245993589743, 4.690467580012077), # 29
(3.1875, 8.25, 8.71875, 6.9375, 8.0625, 4.6875), # 30
(3.1951370284526854, 8.258678799715907, 8.718034948671496, 6.937353656045752, 8.062043661347518, 4.683376259786773), # 31
(3.202609175191816, 8.267242897727273, 8.715910024154589, 6.93691748366013, 8.06068439716312, 4.677024758454107), # 32
(3.2099197969948845, 8.275691228693182, 8.712405570652175, 6.936195772058824, 8.058436835106383, 4.66850768365817), # 33
(3.217072250639386, 8.284022727272728, 8.70755193236715, 6.935192810457517, 8.05531560283688, 4.657887223055139), # 34
(3.224069892902813, 8.292236328124998, 8.701379453502415, 6.933912888071895, 8.051335328014185, 4.645225564301183), # 35
(3.23091608056266, 8.300330965909092, 8.69391847826087, 6.932360294117648, 8.046510638297873, 4.630584895052474), # 36
(3.2376141703964194, 8.308305575284091, 8.68519935084541, 6.9305393178104575, 8.040856161347516, 4.614027402965184), # 37
(3.2441675191815853, 8.31615909090909, 8.675252415458937, 6.9284542483660125, 8.034386524822695, 4.595615275695485), # 38
(3.250579483695652, 8.323890447443182, 8.664108016304347, 6.926109375, 8.027116356382978, 4.57541070089955), # 39
(3.2568534207161126, 8.331498579545455, 8.651796497584542, 6.923508986928105, 8.019060283687942, 4.5534758662335495), # 40
(3.26299268702046, 8.338982421874999, 8.638348203502416, 6.920657373366013, 8.010232934397163, 4.529872959353657), # 41
(3.269000639386189, 8.34634090909091, 8.62379347826087, 6.917558823529411, 8.000648936170213, 4.504664167916042), # 42
(3.2748806345907933, 8.353572975852272, 8.608162666062801, 6.914217626633987, 7.990322916666666, 4.477911679576878), # 43
(3.2806360294117645, 8.360677556818182, 8.591486111111111, 6.910638071895424, 7.979269503546099, 4.449677681992337), # 44
(3.286270180626598, 8.367653586647727, 8.573794157608697, 6.906824448529411, 7.967503324468085, 4.420024362818591), # 45
(3.291786445012788, 8.374500000000001, 8.555117149758455, 6.902781045751634, 7.955039007092199, 4.389013909711811), # 46
(3.297188179347826, 8.381215731534091, 8.535485431763284, 6.898512152777777, 7.941891179078015, 4.356708510328169), # 47
(3.3024787404092075, 8.387799715909091, 8.514929347826087, 6.894022058823529, 7.928074468085106, 4.323170352323839), # 48
(3.307661484974424, 8.39425088778409, 8.493479242149759, 6.889315053104576, 7.91360350177305, 4.288461623354989), # 49
(3.312739769820972, 8.40056818181818, 8.471165458937199, 6.884395424836602, 7.898492907801418, 4.252644511077794), # 50
(3.317716951726343, 8.406750532670454, 8.448018342391304, 6.879267463235294, 7.882757313829787, 4.215781203148426), # 51
(3.322596387468031, 8.412796875, 8.424068236714975, 6.87393545751634, 7.86641134751773, 4.177933887223055), # 52
(3.3273814338235295, 8.41870614346591, 8.39934548611111, 6.868403696895425, 7.849469636524823, 4.139164750957854), # 53
(3.332075447570333, 8.424477272727271, 8.373880434782608, 6.8626764705882355, 7.831946808510638, 4.099535982008995), # 54
(3.336681785485933, 8.430109197443182, 8.347703426932366, 6.856758067810458, 7.813857491134752, 4.05910976803265), # 55
(3.341203804347826, 8.435600852272726, 8.320844806763285, 6.8506527777777775, 7.795216312056738, 4.017948296684991), # 56
(3.345644860933504, 8.440951171875001, 8.29333491847826, 6.844364889705882, 7.77603789893617, 3.9761137556221886), # 57
(3.3500083120204605, 8.44615909090909, 8.265204106280192, 6.837898692810458, 7.756336879432624, 3.9336683325004165), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(3, 7, 4, 1, 3, 0, 8, 4, 2, 3, 3, 0), # 0
(6, 11, 11, 4, 3, 0, 20, 7, 6, 7, 3, 0), # 1
(7, 14, 17, 6, 3, 0, 25, 13, 11, 11, 6, 0), # 2
(9, 19, 25, 6, 5, 0, 33, 16, 18, 16, 8, 0), # 3
(11, 26, 27, 8, 6, 0, 38, 23, 23, 19, 12, 0), # 4
(16, 28, 34, 10, 7, 0, 40, 30, 24, 26, 13, 0), # 5
(17, 33, 36, 14, 11, 0, 46, 39, 29, 33, 16, 0), # 6
(21, 36, 41, 16, 13, 0, 50, 47, 33, 38, 17, 0), # 7
(24, 44, 48, 16, 13, 0, 53, 56, 35, 46, 20, 0), # 8
(27, 54, 53, 16, 15, 0, 62, 60, 38, 51, 20, 0), # 9
(30, 64, 63, 16, 16, 0, 67, 68, 42, 54, 21, 0), # 10
(34, 73, 71, 20, 16, 0, 69, 74, 48, 56, 21, 0), # 11
(38, 77, 74, 20, 18, 0, 75, 77, 50, 58, 24, 0), # 12
(42, 87, 78, 23, 19, 0, 81, 87, 53, 59, 25, 0), # 13
(48, 92, 83, 25, 21, 0, 90, 92, 54, 64, 26, 0), # 14
(52, 97, 86, 29, 21, 0, 95, 99, 58, 67, 26, 0), # 15
(60, 105, 88, 34, 22, 0, 101, 106, 65, 67, 27, 0), # 16
(61, 110, 92, 40, 23, 0, 105, 114, 70, 71, 28, 0), # 17
(63, 114, 97, 41, 26, 0, 110, 117, 74, 73, 30, 0), # 18
(66, 119, 103, 42, 27, 0, 113, 123, 81, 74, 31, 0), # 19
(69, 125, 111, 44, 28, 0, 118, 127, 82, 81, 36, 0), # 20
(72, 134, 114, 46, 28, 0, 122, 133, 85, 87, 38, 0), # 21
(75, 144, 121, 50, 29, 0, 124, 137, 93, 90, 38, 0), # 22
(80, 149, 126, 52, 35, 0, 127, 144, 100, 94, 40, 0), # 23
(85, 157, 129, 55, 38, 0, 134, 149, 103, 101, 44, 0), # 24
(91, 164, 131, 56, 39, 0, 142, 153, 105, 103, 47, 0), # 25
(95, 172, 135, 59, 42, 0, 150, 159, 108, 107, 47, 0), # 26
(98, 181, 137, 59, 44, 0, 154, 168, 111, 111, 47, 0), # 27
(100, 189, 144, 60, 46, 0, 160, 171, 115, 114, 49, 0), # 28
(102, 193, 150, 62, 48, 0, 164, 176, 119, 116, 53, 0), # 29
(105, 202, 156, 64, 49, 0, 168, 182, 120, 118, 54, 0), # 30
(107, 205, 159, 68, 50, 0, 177, 188, 130, 122, 56, 0), # 31
(108, 216, 165, 73, 50, 0, 183, 192, 135, 125, 58, 0), # 32
(110, 222, 168, 76, 51, 0, 185, 196, 140, 130, 64, 0), # 33
(114, 228, 177, 78, 51, 0, 192, 202, 148, 132, 65, 0), # 34
(119, 233, 184, 82, 52, 0, 198, 204, 152, 137, 71, 0), # 35
(120, 237, 189, 85, 53, 0, 199, 209, 157, 140, 73, 0), # 36
(123, 241, 198, 87, 55, 0, 205, 213, 161, 144, 75, 0), # 37
(126, 247, 203, 89, 57, 0, 208, 219, 162, 148, 75, 0), # 38
(127, 257, 210, 92, 60, 0, 212, 228, 165, 159, 77, 0), # 39
(129, 268, 216, 93, 60, 0, 218, 231, 166, 164, 78, 0), # 40
(133, 273, 227, 98, 60, 0, 224, 240, 168, 167, 83, 0), # 41
(135, 281, 232, 100, 60, 0, 229, 248, 175, 171, 83, 0), # 42
(139, 286, 234, 102, 64, 0, 231, 255, 178, 172, 83, 0), # 43
(141, 292, 240, 105, 64, 0, 236, 265, 187, 174, 86, 0), # 44
(144, 296, 244, 107, 65, 0, 243, 268, 191, 177, 90, 0), # 45
(147, 299, 249, 111, 66, 0, 247, 275, 194, 180, 90, 0), # 46
(150, 304, 258, 113, 69, 0, 249, 282, 196, 181, 93, 0), # 47
(154, 310, 264, 117, 75, 0, 254, 288, 203, 185, 94, 0), # 48
(159, 318, 268, 124, 76, 0, 258, 297, 207, 188, 97, 0), # 49
(165, 327, 272, 128, 80, 0, 262, 300, 212, 190, 100, 0), # 50
(170, 334, 280, 132, 82, 0, 265, 307, 218, 194, 101, 0), # 51
(171, 340, 283, 134, 87, 0, 266, 314, 219, 198, 106, 0), # 52
(176, 346, 287, 137, 90, 0, 268, 318, 227, 202, 106, 0), # 53
(180, 354, 288, 138, 92, 0, 272, 325, 231, 203, 109, 0), # 54
(183, 362, 296, 139, 96, 0, 274, 332, 234, 208, 110, 0), # 55
(185, 369, 303, 145, 98, 0, 279, 335, 237, 210, 113, 0), # 56
(189, 372, 309, 148, 100, 0, 283, 342, 241, 212, 115, 0), # 57
(192, 382, 312, 149, 104, 0, 286, 348, 246, 214, 118, 0), # 58
(192, 382, 312, 149, 104, 0, 286, 348, 246, 214, 118, 0), # 59
)
passenger_arriving_rate = (
(2.649651558384548, 5.43716856060606, 4.79654161311054, 2.534510869565217, 1.428605769230769, 0.0, 4.75679347826087, 5.714423076923076, 3.801766304347826, 3.1976944087403596, 1.359292140151515, 0.0), # 0
(2.6745220100478, 5.497633278970258, 4.822449322514997, 2.5486257548309177, 1.439313301282051, 0.0, 4.7551721391908215, 5.757253205128204, 3.8229386322463768, 3.2149662150099974, 1.3744083197425645, 0.0), # 1
(2.699108477221734, 5.557201122334455, 4.8477420736932295, 2.562429951690821, 1.4497948717948717, 0.0, 4.753501207729468, 5.799179487179487, 3.8436449275362317, 3.23182804912882, 1.3893002805836137, 0.0), # 2
(2.72339008999122, 5.6158078125, 4.872401389781491, 2.575911684782608, 1.4600408653846155, 0.0, 4.7517809103260875, 5.840163461538462, 3.863867527173912, 3.2482675931876606, 1.403951953125, 0.0), # 3
(2.747345978441128, 5.673389071268238, 4.896408793916024, 2.589059178743961, 1.4700416666666667, 0.0, 4.750011473429951, 5.880166666666667, 3.883588768115942, 3.2642725292773487, 1.4183472678170594, 0.0), # 4
(2.7709552726563262, 5.729880620440516, 4.919745809233076, 2.6018606582125603, 1.47978766025641, 0.0, 4.748193123490338, 5.91915064102564, 3.9027909873188404, 3.279830539488717, 1.432470155110129, 0.0), # 5
(2.794197102721686, 5.785218181818181, 4.942393958868895, 2.614304347826087, 1.4892692307692306, 0.0, 4.746326086956522, 5.957076923076922, 3.9214565217391306, 3.294929305912597, 1.4463045454545453, 0.0), # 6
(2.817050598722076, 5.83933747720258, 4.964334765959725, 2.626378472222222, 1.498476762820513, 0.0, 4.744410590277778, 5.993907051282052, 3.939567708333333, 3.309556510639817, 1.459834369300645, 0.0), # 7
(2.8394948907423667, 5.89217422839506, 4.985549753641817, 2.638071256038647, 1.5074006410256409, 0.0, 4.7424468599033816, 6.0296025641025635, 3.9571068840579704, 3.3236998357612113, 1.473043557098765, 0.0), # 8
(2.8615091088674274, 5.943664157196969, 5.006020445051414, 2.649370923913043, 1.5160312499999997, 0.0, 4.740435122282609, 6.064124999999999, 3.9740563858695652, 3.3373469633676094, 1.4859160392992423, 0.0), # 9
(2.8830723831821286, 5.993742985409652, 5.025728363324764, 2.660265700483092, 1.5243589743589743, 0.0, 4.738375603864734, 6.097435897435897, 3.990398550724638, 3.3504855755498424, 1.498435746352413, 0.0), # 10
(2.9041638437713395, 6.042346434834456, 5.044655031598114, 2.6707438103864733, 1.5323741987179484, 0.0, 4.736268531099034, 6.129496794871794, 4.0061157155797105, 3.3631033543987425, 1.510586608708614, 0.0), # 11
(2.92476262071993, 6.089410227272726, 5.062781973007712, 2.680793478260869, 1.5400673076923075, 0.0, 4.734114130434782, 6.16026923076923, 4.021190217391304, 3.375187982005141, 1.5223525568181815, 0.0), # 12
(2.944847844112769, 6.134870084525814, 5.080090710689802, 2.690402928743961, 1.547428685897436, 0.0, 4.731912628321256, 6.189714743589744, 4.035604393115942, 3.386727140459868, 1.5337175211314535, 0.0), # 13
(2.9643986440347283, 6.1786617283950624, 5.096562767780632, 2.699560386473429, 1.5544487179487176, 0.0, 4.729664251207729, 6.217794871794871, 4.049340579710144, 3.397708511853755, 1.5446654320987656, 0.0), # 14
(2.9833941505706756, 6.220720880681816, 5.112179667416451, 2.708254076086956, 1.5611177884615384, 0.0, 4.7273692255434785, 6.2444711538461535, 4.062381114130434, 3.408119778277634, 1.555180220170454, 0.0), # 15
(3.001813493805482, 6.26098326318743, 5.126922932733505, 2.716472222222222, 1.5674262820512819, 0.0, 4.725027777777778, 6.2697051282051275, 4.074708333333333, 3.4179486218223363, 1.5652458157968574, 0.0), # 16
(3.019635803824017, 6.299384597713242, 5.140774086868038, 2.724203049516908, 1.5733645833333332, 0.0, 4.722640134359904, 6.293458333333333, 4.0863045742753625, 3.4271827245786914, 1.5748461494283106, 0.0), # 17
(3.03684021071115, 6.3358606060606055, 5.153714652956299, 2.7314347826086958, 1.578923076923077, 0.0, 4.72020652173913, 6.315692307692308, 4.097152173913043, 3.435809768637532, 1.5839651515151514, 0.0), # 18
(3.053405844551751, 6.370347010030863, 5.165726154134533, 2.738155646135265, 1.5840921474358973, 0.0, 4.717727166364734, 6.336368589743589, 4.107233469202898, 3.4438174360896885, 1.5925867525077158, 0.0), # 19
(3.0693118354306894, 6.402779531425363, 5.1767901135389875, 2.7443538647342995, 1.5888621794871793, 0.0, 4.71520229468599, 6.355448717948717, 4.11653079710145, 3.4511934090259917, 1.6006948828563408, 0.0), # 20
(3.084537313432836, 6.433093892045452, 5.186888054305913, 2.750017663043478, 1.5932235576923073, 0.0, 4.712632133152174, 6.372894230769229, 4.125026494565217, 3.4579253695372754, 1.608273473011363, 0.0), # 21
(3.099061408643059, 6.46122581369248, 5.19600149957155, 2.7551352657004826, 1.5971666666666662, 0.0, 4.710016908212561, 6.388666666666665, 4.132702898550725, 3.464000999714367, 1.61530645342312, 0.0), # 22
(3.1128632511462295, 6.487111018167789, 5.204111972472151, 2.759694897342995, 1.6006818910256408, 0.0, 4.707356846316426, 6.402727564102563, 4.139542346014493, 3.4694079816481005, 1.6217777545419472, 0.0), # 23
(3.125921971027217, 6.5106852272727265, 5.211200996143958, 2.763684782608695, 1.6037596153846152, 0.0, 4.704652173913043, 6.415038461538461, 4.1455271739130435, 3.474133997429305, 1.6276713068181816, 0.0), # 24
(3.1382166983708903, 6.531884162808641, 5.217250093723222, 2.7670931461352657, 1.606390224358974, 0.0, 4.701903117451691, 6.425560897435896, 4.150639719202899, 3.4781667291488145, 1.6329710407021603, 0.0), # 25
(3.1497265632621207, 6.550643546576878, 5.222240788346187, 2.7699082125603858, 1.6085641025641022, 0.0, 4.699109903381642, 6.434256410256409, 4.154862318840579, 3.4814938588974575, 1.6376608866442195, 0.0), # 26
(3.160430695785777, 6.566899100378786, 5.226154603149099, 2.772118206521739, 1.6102716346153847, 0.0, 4.696272758152174, 6.441086538461539, 4.158177309782609, 3.484103068766066, 1.6417247750946966, 0.0), # 27
(3.1703082260267292, 6.580586546015712, 5.228973061268209, 2.7737113526570045, 1.6115032051282048, 0.0, 4.69339190821256, 6.446012820512819, 4.160567028985507, 3.4859820408454727, 1.645146636503928, 0.0), # 28
(3.1793382840698468, 6.591641605289001, 5.230677685839759, 2.7746758756038647, 1.6122491987179486, 0.0, 4.690467580012077, 6.448996794871794, 4.162013813405797, 3.487118457226506, 1.6479104013222503, 0.0), # 29
(3.1875, 6.6, 5.23125, 2.775, 1.6124999999999998, 0.0, 4.6875, 6.449999999999999, 4.1625, 3.4875, 1.65, 0.0), # 30
(3.1951370284526854, 6.606943039772726, 5.230820969202898, 2.7749414624183006, 1.6124087322695035, 0.0, 4.683376259786773, 6.449634929078014, 4.162412193627451, 3.4872139794685983, 1.6517357599431814, 0.0), # 31
(3.202609175191816, 6.613794318181818, 5.229546014492753, 2.7747669934640515, 1.6121368794326238, 0.0, 4.677024758454107, 6.448547517730495, 4.162150490196078, 3.4863640096618354, 1.6534485795454545, 0.0), # 32
(3.2099197969948845, 6.620552982954545, 5.227443342391305, 2.774478308823529, 1.6116873670212764, 0.0, 4.66850768365817, 6.446749468085105, 4.161717463235294, 3.4849622282608697, 1.6551382457386363, 0.0), # 33
(3.217072250639386, 6.627218181818182, 5.224531159420289, 2.7740771241830067, 1.6110631205673758, 0.0, 4.657887223055139, 6.444252482269503, 4.16111568627451, 3.4830207729468596, 1.6568045454545455, 0.0), # 34
(3.224069892902813, 6.633789062499998, 5.220827672101449, 2.773565155228758, 1.6102670656028368, 0.0, 4.645225564301183, 6.441068262411347, 4.160347732843137, 3.480551781400966, 1.6584472656249996, 0.0), # 35
(3.23091608056266, 6.6402647727272734, 5.2163510869565215, 2.7729441176470586, 1.6093021276595745, 0.0, 4.630584895052474, 6.437208510638298, 4.159416176470589, 3.477567391304347, 1.6600661931818184, 0.0), # 36
(3.2376141703964194, 6.6466444602272725, 5.211119610507246, 2.7722157271241827, 1.6081712322695032, 0.0, 4.614027402965184, 6.432684929078013, 4.158323590686274, 3.474079740338164, 1.6616611150568181, 0.0), # 37
(3.2441675191815853, 6.652927272727272, 5.205151449275362, 2.7713816993464047, 1.6068773049645388, 0.0, 4.595615275695485, 6.427509219858155, 4.157072549019607, 3.4701009661835744, 1.663231818181818, 0.0), # 38
(3.250579483695652, 6.659112357954545, 5.198464809782608, 2.7704437499999996, 1.6054232712765955, 0.0, 4.57541070089955, 6.421693085106382, 4.155665625, 3.4656432065217384, 1.6647780894886361, 0.0), # 39
(3.2568534207161126, 6.6651988636363635, 5.191077898550724, 2.7694035947712417, 1.6038120567375882, 0.0, 4.5534758662335495, 6.415248226950353, 4.154105392156863, 3.4607185990338163, 1.6662997159090909, 0.0), # 40
(3.26299268702046, 6.671185937499998, 5.1830089221014495, 2.768262949346405, 1.6020465868794325, 0.0, 4.529872959353657, 6.40818634751773, 4.152394424019608, 3.455339281400966, 1.6677964843749995, 0.0), # 41
(3.269000639386189, 6.677072727272728, 5.174276086956522, 2.767023529411764, 1.6001297872340425, 0.0, 4.504664167916042, 6.40051914893617, 4.150535294117646, 3.4495173913043478, 1.669268181818182, 0.0), # 42
(3.2748806345907933, 6.682858380681817, 5.164897599637681, 2.7656870506535944, 1.5980645833333331, 0.0, 4.477911679576878, 6.3922583333333325, 4.148530575980392, 3.4432650664251203, 1.6707145951704543, 0.0), # 43
(3.2806360294117645, 6.688542045454545, 5.154891666666667, 2.7642552287581696, 1.5958539007092198, 0.0, 4.449677681992337, 6.383415602836879, 4.146382843137254, 3.4365944444444443, 1.6721355113636363, 0.0), # 44
(3.286270180626598, 6.694122869318181, 5.144276494565218, 2.7627297794117642, 1.593500664893617, 0.0, 4.420024362818591, 6.374002659574468, 4.144094669117647, 3.4295176630434785, 1.6735307173295453, 0.0), # 45
(3.291786445012788, 6.6996, 5.133070289855073, 2.761112418300653, 1.5910078014184397, 0.0, 4.389013909711811, 6.364031205673759, 4.14166862745098, 3.4220468599033818, 1.6749, 0.0), # 46
(3.297188179347826, 6.704972585227273, 5.12129125905797, 2.759404861111111, 1.588378235815603, 0.0, 4.356708510328169, 6.353512943262412, 4.139107291666666, 3.4141941727053133, 1.6762431463068181, 0.0), # 47
(3.3024787404092075, 6.710239772727273, 5.108957608695651, 2.757608823529411, 1.5856148936170211, 0.0, 4.323170352323839, 6.3424595744680845, 4.136413235294117, 3.4059717391304343, 1.6775599431818182, 0.0), # 48
(3.307661484974424, 6.715400710227271, 5.096087545289855, 2.75572602124183, 1.5827207003546098, 0.0, 4.288461623354989, 6.330882801418439, 4.133589031862745, 3.3973916968599034, 1.6788501775568176, 0.0), # 49
(3.312739769820972, 6.720454545454543, 5.082699275362319, 2.7537581699346405, 1.5796985815602835, 0.0, 4.252644511077794, 6.318794326241134, 4.130637254901961, 3.388466183574879, 1.6801136363636358, 0.0), # 50
(3.317716951726343, 6.725400426136363, 5.068811005434783, 2.7517069852941174, 1.5765514627659571, 0.0, 4.215781203148426, 6.306205851063829, 4.127560477941176, 3.3792073369565214, 1.6813501065340908, 0.0), # 51
(3.322596387468031, 6.730237499999999, 5.054440942028985, 2.7495741830065357, 1.573282269503546, 0.0, 4.177933887223055, 6.293129078014184, 4.124361274509804, 3.3696272946859898, 1.6825593749999999, 0.0), # 52
(3.3273814338235295, 6.7349649147727275, 5.039607291666666, 2.7473614787581697, 1.5698939273049646, 0.0, 4.139164750957854, 6.279575709219858, 4.121042218137255, 3.359738194444444, 1.6837412286931819, 0.0), # 53
(3.332075447570333, 6.739581818181817, 5.024328260869565, 2.745070588235294, 1.5663893617021276, 0.0, 4.099535982008995, 6.2655574468085105, 4.117605882352941, 3.3495521739130427, 1.6848954545454542, 0.0), # 54
(3.336681785485933, 6.744087357954545, 5.008622056159419, 2.7427032271241827, 1.5627714982269503, 0.0, 4.05910976803265, 6.251085992907801, 4.114054840686275, 3.3390813707729463, 1.6860218394886362, 0.0), # 55
(3.341203804347826, 6.74848068181818, 4.9925068840579705, 2.740261111111111, 1.5590432624113475, 0.0, 4.017948296684991, 6.23617304964539, 4.110391666666667, 3.328337922705314, 1.687120170454545, 0.0), # 56
(3.345644860933504, 6.752760937500001, 4.976000951086956, 2.7377459558823527, 1.5552075797872338, 0.0, 3.9761137556221886, 6.220830319148935, 4.106618933823529, 3.317333967391304, 1.6881902343750002, 0.0), # 57
(3.3500083120204605, 6.756927272727271, 4.959122463768115, 2.7351594771241827, 1.5512673758865245, 0.0, 3.9336683325004165, 6.205069503546098, 4.102739215686275, 3.3060816425120767, 1.6892318181818178, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibiliy. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
#initial entropy
entropy = 258194110137029475889902652135037600173
#index for seed sequence child
child_seed_index = (
1, # 0
30, # 1
)
| 112.564179 | 215 | 0.72781 | 5,147 | 37,709 | 5.330095 | 0.217602 | 0.314938 | 0.249326 | 0.472407 | 0.331158 | 0.329956 | 0.329956 | 0.329956 | 0.329956 | 0.329956 | 0 | 0.818045 | 0.119706 | 37,709 | 334 | 216 | 112.901198 | 0.008405 | 0.032114 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e7f5537d1b212063f8cb3963209b7c68c772db72 | 3,213 | py | Python | test/dlc_tests/ec2/tensorflow/training/test_tensorflow_training.py | Satish615/deep-learning-containers-1 | 76e750e828b6f583a6b7b1c291057059a14285b1 | [
"Apache-2.0"
] | 1 | 2021-12-17T15:50:48.000Z | 2021-12-17T15:50:48.000Z | test/dlc_tests/ec2/tensorflow/training/test_tensorflow_training.py | Satish615/deep-learning-containers-1 | 76e750e828b6f583a6b7b1c291057059a14285b1 | [
"Apache-2.0"
] | null | null | null | test/dlc_tests/ec2/tensorflow/training/test_tensorflow_training.py | Satish615/deep-learning-containers-1 | 76e750e828b6f583a6b7b1c291057059a14285b1 | [
"Apache-2.0"
] | null | null | null | import os
import pytest
from test.test_utils import CONTAINER_TESTS_PREFIX, is_tf1
from test.test_utils.ec2 import execute_ec2_training_test
TF1_STANDALONE_CMD = os.path.join(CONTAINER_TESTS_PREFIX, "testTensorflow1Standalone")
TF2_STANDALONE_CMD = os.path.join(CONTAINER_TESTS_PREFIX, "testTensorflow2Standalone")
TF_MNIST_CMD = os.path.join(CONTAINER_TESTS_PREFIX, "testTensorFlow")
TF1_HVD_CMD = os.path.join(CONTAINER_TESTS_PREFIX, "testTF1HVD")
TF2_HVD_CMD = os.path.join(CONTAINER_TESTS_PREFIX, "testTF2HVD")
TF_OPENCV_CMD = os.path.join(CONTAINER_TESTS_PREFIX, "testOpenCV")
TF_EC2_GPU_INSTANCE_TYPE = "p2.xlarge"
TF_EC2_CPU_INSTANCE_TYPE = "c5.4xlarge"
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_GPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_standalone_gpu(tensorflow_training, ec2_connection, gpu_only):
test_script = TF1_STANDALONE_CMD if is_tf1(tensorflow_training) else TF2_STANDALONE_CMD
execute_ec2_training_test(ec2_connection, tensorflow_training, test_script)
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_CPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_standalone_cpu(tensorflow_training, ec2_connection, cpu_only):
test_script = TF1_STANDALONE_CMD if is_tf1(tensorflow_training) else TF2_STANDALONE_CMD
execute_ec2_training_test(ec2_connection, tensorflow_training, test_script)
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_GPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_train_mnist_gpu(tensorflow_training, ec2_connection, gpu_only):
execute_ec2_training_test(ec2_connection, tensorflow_training, TF_MNIST_CMD)
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_CPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_train_mnist_cpu(tensorflow_training, ec2_connection, cpu_only):
execute_ec2_training_test(ec2_connection, tensorflow_training, TF_MNIST_CMD)
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_GPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_with_horovod_gpu(tensorflow_training, ec2_connection, gpu_only):
test_script = TF1_HVD_CMD if is_tf1(tensorflow_training) else TF2_HVD_CMD
execute_ec2_training_test(ec2_connection, tensorflow_training, test_script)
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_CPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_with_horovod_cpu(tensorflow_training, ec2_connection, cpu_only):
test_script = TF1_HVD_CMD if is_tf1(tensorflow_training) else TF2_HVD_CMD
execute_ec2_training_test(ec2_connection, tensorflow_training, test_script)
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_GPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_opencv_gpu(tensorflow_training, ec2_connection, gpu_only):
if is_tf1(tensorflow_training):
pytest.skip("This test is for TF2 only")
execute_ec2_training_test(ec2_connection, tensorflow_training, TF_OPENCV_CMD)
@pytest.mark.parametrize("ec2_instance_type", [TF_EC2_CPU_INSTANCE_TYPE], indirect=True)
def test_tensorflow_opencv_cpu(tensorflow_training, ec2_connection, cpu_only):
if is_tf1(tensorflow_training):
pytest.skip("This test is for TF2 only")
execute_ec2_training_test(ec2_connection, tensorflow_training, TF_OPENCV_CMD)
| 47.955224 | 91 | 0.839091 | 460 | 3,213 | 5.373913 | 0.117391 | 0.160194 | 0.065534 | 0.080097 | 0.897249 | 0.881068 | 0.881068 | 0.794498 | 0.717638 | 0.700647 | 0 | 0.02439 | 0.081232 | 3,213 | 66 | 92 | 48.681818 | 0.813008 | 0 | 0 | 0.545455 | 0 | 0 | 0.093059 | 0.015562 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f007c539e36efa8cdf1d7475bfa01e6fd5a6a927 | 120 | py | Python | dskc/visualization/graphs/types/bars/__init__.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | dskc/visualization/graphs/types/bars/__init__.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | dskc/visualization/graphs/types/bars/__init__.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | from .multiple_variables import *
from .one_variable import *
from dskc.visualization.graphs.types.bars.target import *
| 30 | 57 | 0.816667 | 16 | 120 | 6 | 0.75 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 120 | 3 | 58 | 40 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f06a91e8d6c12d9fed1a162d885a8e95768f12e9 | 30 | py | Python | unn/models/heads/attribute_head/__init__.py | zongdaoming/TinyTransformer | 8e64f8816117048c388b4b20e3a56760ce149fe3 | [
"Apache-2.0"
] | 2 | 2021-08-08T11:23:14.000Z | 2021-09-16T04:05:23.000Z | unn/models/heads/attribute_head/__init__.py | zongdaoming/TinyTransformer | 8e64f8816117048c388b4b20e3a56760ce149fe3 | [
"Apache-2.0"
] | 1 | 2021-08-08T11:25:47.000Z | 2021-08-08T11:26:15.000Z | unn/models/heads/attribute_head/__init__.py | zongdaoming/TinyTransformer | 8e64f8816117048c388b4b20e3a56760ce149fe3 | [
"Apache-2.0"
] | null | null | null | from .attribute_head import *
| 15 | 29 | 0.8 | 4 | 30 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b2bd5e4cb61c61fbad9cd22ef0ac8515ab9824dd | 109 | py | Python | application/urls.py | Falldog/appengine-flask-template-light | b87b73b1d1862db6dbb388fee716b60d29b3b10c | [
"Apache-2.0"
] | null | null | null | application/urls.py | Falldog/appengine-flask-template-light | b87b73b1d1862db6dbb388fee716b60d29b3b10c | [
"Apache-2.0"
] | null | null | null | application/urls.py | Falldog/appengine-flask-template-light | b87b73b1d1862db6dbb388fee716b60d29b3b10c | [
"Apache-2.0"
] | null | null | null | from application import app
from application import views
app.add_url_rule('/', 'index', views.index)
| 18.166667 | 44 | 0.733945 | 15 | 109 | 5.2 | 0.6 | 0.384615 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165138 | 109 | 5 | 45 | 21.8 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.058252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b2ce9a5a2d96a6c62e907a5933b13b1b19c0b521 | 27 | py | Python | arq.py | jonfisik/cmd-git | 8cff22e5fe91243bee963decb269c2986ed8f85c | [
"MIT"
] | 1 | 2020-09-05T22:25:48.000Z | 2020-09-05T22:25:48.000Z | arq.py | jonfisik/cmd-git | 8cff22e5fe91243bee963decb269c2986ed8f85c | [
"MIT"
] | null | null | null | arq.py | jonfisik/cmd-git | 8cff22e5fe91243bee963decb269c2986ed8f85c | [
"MIT"
] | 1 | 2020-07-22T00:37:46.000Z | 2020-07-22T00:37:46.000Z | print('Ola mundo git!!!!')
| 13.5 | 26 | 0.592593 | 4 | 27 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.62963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
3312edc77b51243c16e3b5e471a66aed3c50cd5e | 163 | py | Python | ccbrowse/storage/null.py | peterkuma/ccbrowse | 19d93108d55badf983fda29497b523f59b4f1231 | [
"MIT"
] | 4 | 2022-02-06T17:22:04.000Z | 2022-03-09T03:15:11.000Z | ccbrowse/storage/null.py | peterkuma/ccbrowse | 19d93108d55badf983fda29497b523f59b4f1231 | [
"MIT"
] | null | null | null | ccbrowse/storage/null.py | peterkuma/ccbrowse | 19d93108d55badf983fda29497b523f59b4f1231 | [
"MIT"
] | null | null | null | from .driver import Driver
class NullDriver(Driver):
def store(self, obj):
pass
def retrieve(self, obj, exclude=[]):
return None
| 18.111111 | 40 | 0.595092 | 19 | 163 | 5.105263 | 0.736842 | 0.14433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.306748 | 163 | 8 | 41 | 20.375 | 0.858407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
33501dd7f31e3eb470de88743d560f8f764d9b52 | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/polynomial/tests/test_polyutils.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/polynomial/tests/test_polyutils.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/polynomial/tests/test_polyutils.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/23/19/1b/55fa5c06a7b994e665b871543db2c04cbbb5af0560ee084b012a5f62b6 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3355f4edacd4442e7d353c82663ffdbfdf945bd0 | 65 | py | Python | allure-pytest/test/plugin/register_unregister/sample.py | vdsbenoit/allure-python | 7b56b031c42369dd73844105382e9ceb9a88d6cd | [
"Apache-2.0"
] | 1 | 2021-02-19T21:00:11.000Z | 2021-02-19T21:00:11.000Z | allure-pytest/test/plugin/register_unregister/sample.py | vdsbenoit/allure-python | 7b56b031c42369dd73844105382e9ceb9a88d6cd | [
"Apache-2.0"
] | null | null | null | allure-pytest/test/plugin/register_unregister/sample.py | vdsbenoit/allure-python | 7b56b031c42369dd73844105382e9ceb9a88d6cd | [
"Apache-2.0"
] | 1 | 2020-08-05T05:40:44.000Z | 2020-08-05T05:40:44.000Z | import allure
def test_sample():
allure.attach('Peace!!!')
| 10.833333 | 29 | 0.661538 | 8 | 65 | 5.25 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169231 | 65 | 5 | 30 | 13 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.123077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
682becf90639f2e77baedd01c4a3b89b80b69bd1 | 201 | py | Python | collaborator_app/admin.py | calixo888/collaboratory | 087c6454f9f7581c16798b8a8fa697509894d9cc | [
"FTL"
] | null | null | null | collaborator_app/admin.py | calixo888/collaboratory | 087c6454f9f7581c16798b8a8fa697509894d9cc | [
"FTL"
] | 5 | 2020-06-05T23:35:48.000Z | 2021-06-09T18:30:56.000Z | collaborator_app/admin.py | calixo888/collaboratory | 087c6454f9f7581c16798b8a8fa697509894d9cc | [
"FTL"
] | null | null | null | from django.contrib import admin
from . import models
# Register your models here.
admin.site.register(models.UserProfile)
admin.site.register(models.Project)
admin.site.register(models.ToDoListItem)
| 25.125 | 40 | 0.820896 | 27 | 201 | 6.111111 | 0.481481 | 0.163636 | 0.309091 | 0.418182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084577 | 201 | 7 | 41 | 28.714286 | 0.896739 | 0.129353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
689ec040805ccf79bed0bcb7a68bc6038c4f5f80 | 39 | py | Python | __init__.py | KasperskyZiv/DMG_utils | d520f372d3e7ea9aa9f569fc49b85a75ad98eadd | [
"Apache-2.0"
] | 2 | 2020-07-01T08:32:40.000Z | 2021-04-12T02:26:31.000Z | __init__.py | KasperskyZiv/osx_utils | d520f372d3e7ea9aa9f569fc49b85a75ad98eadd | [
"Apache-2.0"
] | null | null | null | __init__.py | KasperskyZiv/osx_utils | d520f372d3e7ea9aa9f569fc49b85a75ad98eadd | [
"Apache-2.0"
] | null | null | null | from dmg import DMG
from utils import * | 19.5 | 19 | 0.794872 | 7 | 39 | 4.428571 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 2 | 20 | 19.5 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d7b231606bf8a538533f4eb166b907a6be975cd4 | 8,606 | py | Python | src/django_reactive_framework/core/reactive_binary_operators.py | Tal500/django-reactive-framework | aa691d245515f504794d0f6f9a2087077e25a68b | [
"MIT"
] | null | null | null | src/django_reactive_framework/core/reactive_binary_operators.py | Tal500/django-reactive-framework | aa691d245515f504794d0f6f9a2087077e25a68b | [
"MIT"
] | 18 | 2021-09-05T09:40:11.000Z | 2021-10-05T20:04:34.000Z | src/django_reactive_framework/core/reactive_binary_operators.py | Tal500/django-reactive | aa691d245515f504794d0f6f9a2087077e25a68b | [
"MIT"
] | null | null | null | from abc import abstractmethod
from typing import Dict, List
from django import template
from .base import ReactContext, ReactValType
from .expressions.interfaces import Expression
from .utils import manual_non_empty_sum
class ReactiveBinaryOperator:
operators: Dict[str, 'ReactiveBinaryOperator'] = dict()
def validate_args(self, args: List['Expression']) -> None:
if len(args) < 2:
raise Exception(f'Internal Reactive Error: Not enough args for binary operator! Args: {args}')
def eval_initial_from_values(self, vals: List['ReactValType']) -> 'ReactValType':
pass
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
pass
def eval_initial(self, reactive_context: 'ReactContext', args: List['Expression']) -> 'ReactValType':
self.validate_args(args)
vals = [arg.eval_initial(reactive_context) for arg in args]
return self.eval_initial_from_values(vals)
def eval_js(self, reactive_context: 'ReactContext', args: List['Expression'], delimiter: str) -> str:
self.validate_args(args)
vals = [arg.eval_js_and_hooks(reactive_context, delimiter)[0] for arg in args]
return self.eval_js_from_js(vals, delimiter)
class StrictEqualityOperator(ReactiveBinaryOperator):
def validate_args(self, args: List['Expression']) -> None:
if len(args) != 2:
raise template.TemplateSyntaxError(f'Strict equality operator must have exactly two args! Args: {args}')
def eval_initial_from_values(self, vals: List['ReactValType']) -> 'ReactValType':
lhs_val, rhs_val = vals
result = lhs_val is rhs_val
if not result and isinstance(lhs_val, str) and isinstance(rhs_val, str):
result = (lhs_val == rhs_val)
return result
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
lhs_js, rhs_js = js_expressions
return f'({lhs_js}==={rhs_js})'
ReactiveBinaryOperator.operators['==='] = StrictEqualityOperator()
class StrictInequalityOperator(ReactiveBinaryOperator):
def validate_args(self, args: List['Expression']) -> None:
if len(args) != 2:
raise template.TemplateSyntaxError(f'Strict inequality operator must have exactly two args! Args: {args}')
def eval_initial_from_values(self, vals: List['ReactValType']) -> 'ReactValType':
lhs_val, rhs_val = vals
result = lhs_val is not rhs_val
if result and isinstance(lhs_val, str) and isinstance(rhs_val, str):
result = (lhs_val != rhs_val)
return result
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
lhs_js, rhs_js = js_expressions
return f'({lhs_js}!=={rhs_js})'
ReactiveBinaryOperator.operators['!=='] = StrictInequalityOperator()
class BoolComparingOperator(ReactiveBinaryOperator):
@abstractmethod
def eval_initial_from_values(self, vals: List[bool]) -> bool:
pass
def eval_initial(self, reactive_context: 'ReactContext', args: List['Expression']) -> bool:
return super().eval_initial(reactive_context, args)
class AndOperator(BoolComparingOperator):
def eval_initial_from_values(self, vals: List[bool]) -> bool:
for val in vals:
if val is False:
return False
# otherwise
return True
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
return '&&'.join(js_expressions)
ReactiveBinaryOperator.operators['&&'] = AndOperator()
class OrOperator(BoolComparingOperator):
def eval_initial_from_values(self, vals: List[bool]) -> bool:
for val in vals:
if val is True:
return True
# otherwise
return False
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
return '||'.join(js_expressions)
ReactiveBinaryOperator.operators['||'] = OrOperator()
class NumberInequalityOperator(ReactiveBinaryOperator):
@abstractmethod
def eval_initial_from_two_values(self, lhs_val: int, rhs_val: int) -> bool:
pass
@abstractmethod
def eval_js_from_two_js(self, lhs_js: List[str], rhs_js: List[str], delimiter: str) -> str:
pass
def validate_args(self, args: List['Expression']):
super().validate_args(args)
if len(args) != 2:
raise template.TemplateSyntaxError(f'Number inequality operators must have exactly two args! Args: {args}')
def eval_initial_from_values(self, vals: List[int]) -> bool:
return self.eval_initial_from_two_values(vals[0], vals[1])
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
return self.eval_js_from_two_js(js_expressions[0], js_expressions[1], delimiter)
def eval_initial(self, reactive_context: 'ReactContext', args: List['Expression']) -> bool:
self.validate_args(args)
vals = [arg.eval_initial(reactive_context) for arg in args]
for i, val in enumerate(vals):
if not isinstance(val, int):
raise template.TemplateSyntaxError(f'Error: Argument {i} value isn\'t int in number inequality operator. ' + \
f'argument value: {val}, ' + f'argument expression: {args[i]}')
return self.eval_initial_from_values(vals)
class GreaterOrEqualOperator(NumberInequalityOperator):
def eval_initial_from_two_values(self, lhs_val: 'ReactValType', rhs_val: 'ReactValType') -> bool:
return lhs_val >= rhs_val
def eval_js_from_two_js(self, lhs_js: str, rhs_js: str, delimiter: str) -> str:
return f'{lhs_js}>={rhs_js}'
ReactiveBinaryOperator.operators['>='] = GreaterOrEqualOperator()
class LessOrEqualOperator(NumberInequalityOperator):
def eval_initial_from_two_values(self, lhs_val: 'ReactValType', rhs_val: 'ReactValType') -> bool:
return lhs_val <= rhs_val
def eval_js_from_two_js(self, lhs_js: str, rhs_js: str, delimiter: str) -> str:
return f'{lhs_js}<={rhs_js}'
ReactiveBinaryOperator.operators['<='] = LessOrEqualOperator()
# Notice that >= and <= must be registered before > and <, for best matching
class GreaterOperator(NumberInequalityOperator):
def eval_initial_from_two_values(self, lhs_val: int, rhs_val: int) -> bool:
return lhs_val > rhs_val
def eval_js_from_two_js(self, lhs_js: str, rhs_js: str, delimiter: str) -> str:
return f'{lhs_js}>{rhs_js}'
ReactiveBinaryOperator.operators['>'] = GreaterOperator()
class LessOperator(NumberInequalityOperator):
def eval_initial_from_value(self, lhs_val: 'ReactValType', rhs_val: 'ReactValType') -> bool:
return lhs_val < rhs_val
def eval_js_from_two_js(self, lhs_js: str, rhs_js: str, delimiter: str) -> str:
return f'{lhs_js}<{rhs_js}'
ReactiveBinaryOperator.operators['<'] = LessOperator()
class SumOperator(ReactiveBinaryOperator):
def eval_initial_from_values(self, vals: List['ReactValType']) -> 'ReactValType':
return manual_non_empty_sum(vals)
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
return '+'.join(js_expressions)
ReactiveBinaryOperator.operators['+'] = SumOperator()
class SubstructOperator(ReactiveBinaryOperator):
def eval_initial_from_values(self, vals: List['ReactValType']) -> 'ReactValType':
result = vals[0]
for val in vals[1:]:
result -= val
return result
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
return '-'.join(js_expressions)
ReactiveBinaryOperator.operators['-'] = SubstructOperator()
class MultiplyOperator(ReactiveBinaryOperator):
def eval_initial_from_values(self, vals: List['ReactValType']) -> 'ReactValType':
result = vals[0]
for val in vals[1:]:
result *= val
return result
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
return '*'.join(js_expressions)
ReactiveBinaryOperator.operators['*'] = MultiplyOperator()
class DivideOperator(ReactiveBinaryOperator):
def eval_initial_from_values(self, vals: List['ReactValType']) -> 'ReactValType':
result = vals[0]
for val in vals[1:]:
result /= val
return result
def eval_js_from_js(self, js_expressions: List[str], delimiter: str) -> str:
return '/'.join(js_expressions)
ReactiveBinaryOperator.operators['/'] = DivideOperator() | 37.094828 | 126 | 0.679294 | 1,035 | 8,606 | 5.425121 | 0.103382 | 0.043633 | 0.047373 | 0.051291 | 0.743366 | 0.725378 | 0.712556 | 0.669991 | 0.662333 | 0.657524 | 0 | 0.002202 | 0.208343 | 8,606 | 232 | 127 | 37.094828 | 0.821958 | 0.010923 | 0 | 0.464516 | 0 | 0 | 0.105653 | 0.007521 | 0 | 0 | 0 | 0 | 0 | 1 | 0.251613 | false | 0.032258 | 0.03871 | 0.116129 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d7dd1d4caa42956af24f210df57e50ba5bca5866 | 11,899 | py | Python | tests/test_check_runs.py | cclauss/algobot | f131905c9f4f7e7c8d6d3a85023ac9eb668ba522 | [
"MIT"
] | null | null | null | tests/test_check_runs.py | cclauss/algobot | f131905c9f4f7e7c8d6d3a85023ac9eb668ba522 | [
"MIT"
] | null | null | null | tests/test_check_runs.py | cclauss/algobot | f131905c9f4f7e7c8d6d3a85023ac9eb668ba522 | [
"MIT"
] | null | null | null | import pytest
from _pytest.monkeypatch import MonkeyPatch
from gidgethub import apps, sansio
from algobot import check_runs
from algobot.check_runs import FAILURE_LABEL
from .utils import MOCK_INSTALLATION_ID, MockGitHubAPI, mock_return
def setup_module(module, monkeypatch=MonkeyPatch()):
"""Monkeypatch for this module to store the MOCK_TOKEN in cache.
We cannot use `pytest.fixture(scope="module")` as `monkeypatch` fixture
only works at function level (this is a module level setup function),
so we will directly use the MonkeyPatch class where the fixture is
generated from and pass it as the default argument to this function.
This will only be executed once in this module, storing the token in the
cache for later use.
"""
monkeypatch.setattr(apps, "get_installation_access_token", mock_return)
@pytest.mark.asyncio
async def test_check_run_created():
repository = "TheAlgorithms/Python"
data = {
"action": "created",
"check_run": {
"status": "queued",
"conclusion": None,
},
"name": "pre-commit",
"repository": {"full_name": repository},
"installation": {"id": MOCK_INSTALLATION_ID},
}
event = sansio.Event(data, event="check_run", delivery_id="1")
gh = MockGitHubAPI()
result = await check_runs.router.dispatch(event, gh)
assert result is None
@pytest.mark.asyncio
async def test_check_run_not_from_pr_commit():
sha = "a06212064d8e1c349c75c5ea4568ecf155368c21"
repository = "TheAlgorithms/Python"
data = {
"action": "completed",
"check_run": {
"head_sha": sha,
"status": "completed",
"conclusion": "success",
},
"name": "pre-commit",
"repository": {"full_name": repository},
"installation": {"id": MOCK_INSTALLATION_ID},
}
event = sansio.Event(data, event="check_run", delivery_id="2")
getitem = {
f"/search/issues?q=type:pr+repo:{repository}+sha:{sha}": {
"total_count": 0,
"incomplete_results": False,
"items": [],
}
}
gh = MockGitHubAPI(getitem=getitem)
result = await check_runs.router.dispatch(event, gh)
assert gh.getitem_url == f"/search/issues?q=type:pr+repo:{repository}+sha:{sha}"
assert result is None
@pytest.mark.asyncio
async def test_check_run_completed_some_in_progress():
sha = "a06212064d8e1c349c75c5ea4568ecf155368c21"
repository = "TheAlgorithms/Python"
data = {
"action": "completed",
"check_run": {
"head_sha": sha,
"status": "completed",
"conclusion": "cancelled",
},
"name": "validate-solutions",
"repository": {"full_name": repository},
"installation": {"id": MOCK_INSTALLATION_ID},
}
event = sansio.Event(data, event="check_run", delivery_id="3")
getitem = {
f"/search/issues?q=type:pr+repo:{repository}+sha:{sha}": {
"total_count": 1,
"incomplete_results": False,
"items": [
{
"number": 3378,
"title": "Create GitHub action only for Project Euler",
"state": "open",
"labels": [
{"name": "Status: awaiting reviews"},
],
}
],
},
f"/repos/{repository}/commits/{sha}/check-runs": {
"total_count": 2,
"check_runs": [
{
"status": "completed",
"conclusion": "cancelled",
"name": "validate-solutions",
},
{
"status": "in_progress",
"conclusion": None,
"name": "pre-commit",
},
],
},
}
gh = MockGitHubAPI(getitem=getitem)
result = await check_runs.router.dispatch(event, gh)
assert result is None
assert gh.post_data is None # does not add any label
assert gh.post_url is None
assert gh.delete_url is None # does not delete any label
@pytest.mark.asyncio
async def test_check_run_completed_passing_no_label():
sha = "a06212064d8e1c349c75c5ea4568ecf155368c21"
repository = "TheAlgorithms/Python"
data = {
"action": "completed",
"check_run": {
"head_sha": sha,
"status": "completed",
"conclusion": "success",
},
"name": "validate-solutions",
"repository": {"full_name": repository},
"installation": {"id": MOCK_INSTALLATION_ID},
}
event = sansio.Event(data, event="check_run", delivery_id="4")
getitem = {
f"/search/issues?q=type:pr+repo:{repository}+sha:{sha}": {
"total_count": 1,
"incomplete_results": False,
"items": [
{
"number": 3378,
"title": "Create GitHub action only for Project Euler",
"state": "open",
"labels": [
{"name": "Status: awaiting reviews"},
],
}
],
},
f"/repos/{repository}/commits/{sha}/check-runs": {
"total_count": 2,
"check_runs": [
{
"status": "completed",
"conclusion": "success",
"name": "validate-solutions",
},
{
"status": "completed",
"conclusion": "skipped",
"name": "pre-commit",
},
],
},
}
gh = MockGitHubAPI(getitem=getitem)
result = await check_runs.router.dispatch(event, gh)
assert result is None
assert gh.post_data is None # does not add any label
assert gh.post_url is None
assert gh.delete_url is None # does not delete any label
@pytest.mark.asyncio
async def test_check_run_completed_passing_with_label():
sha = "a06212064d8e1c349c75c5ea4568ecf155368c21"
repository = "TheAlgorithms/Python"
data = {
"action": "completed",
"check_run": {
"head_sha": sha,
"status": "completed",
"conclusion": "action_required",
},
"name": "validate-solutions",
"repository": {"full_name": repository},
"installation": {"id": MOCK_INSTALLATION_ID},
}
event = sansio.Event(data, event="check_run", delivery_id="5")
getitem = {
f"/search/issues?q=type:pr+repo:{repository}+sha:{sha}": {
"total_count": 1,
"incomplete_results": False,
"items": [
{
"number": 3378,
"title": "Create GitHub action only for Project Euler",
"state": "open",
"labels": [
{"name": "Status: awaiting reviews"},
{"name": FAILURE_LABEL},
],
}
],
},
f"/repos/{repository}/commits/{sha}/check-runs": {
"total_count": 2,
"check_runs": [
{
"status": "completed",
"conclusion": "action_required",
"name": "validate-solutions",
},
{
"status": "completed",
"conclusion": "success",
"name": "pre-commit",
},
],
},
}
delete = [{"name": "Status: awaiting reviews"}]
gh = MockGitHubAPI(getitem=getitem, delete=delete)
result = await check_runs.router.dispatch(event, gh)
assert result is None
assert gh.post_data is None # does not add any label
assert (
gh.delete_url
== f"/repos/{repository}/issues/3378/labels/Status%3A%20Tests%20are%20failing"
)
@pytest.mark.asyncio
async def test_check_run_completed_failing_no_label():
sha = "a06212064d8e1c349c75c5ea4568ecf155368c21"
repository = "TheAlgorithms/Python"
data = {
"action": "completed",
"check_run": {
"head_sha": sha,
"status": "completed",
"conclusion": "success",
},
"name": "validate-solutions",
"repository": {"full_name": repository},
"installation": {"id": MOCK_INSTALLATION_ID},
}
event = sansio.Event(data, event="check_run", delivery_id="6")
getitem = {
f"/search/issues?q=type:pr+repo:{repository}+sha:{sha}": {
"total_count": 1,
"incomplete_results": False,
"items": [
{
"number": 3378,
"title": "Create GitHub action only for Project Euler",
"state": "open",
"labels": [
{"name": "Status: awaiting reviews"},
],
}
],
},
f"/repos/{repository}/commits/{sha}/check-runs": {
"total_count": 2,
"check_runs": [
{
"status": "completed",
"conclusion": "success",
"name": "validate-solutions",
},
{
"status": "completed",
"conclusion": "failure",
"name": "pre-commit",
},
],
},
}
post = [
{"name": "Status: awaiting reviews"},
{"name": FAILURE_LABEL},
]
gh = MockGitHubAPI(getitem=getitem, post=post)
result = await check_runs.router.dispatch(event, gh)
assert result is None
assert gh.delete_url is None # does not delete any label
assert gh.post_url == f"/repos/{repository}/issues/3378/labels"
assert gh.post_data == {"labels": [FAILURE_LABEL]}
@pytest.mark.asyncio
async def test_check_run_completed_failing_with_label():
sha = "a06212064d8e1c349c75c5ea4568ecf155368c21"
repository = "TheAlgorithms/Python"
data = {
"action": "completed",
"check_run": {
"head_sha": sha,
"status": "completed",
"conclusion": "timed_out",
},
"name": "Travis CI - Pull Request",
"repository": {"full_name": repository},
"installation": {"id": MOCK_INSTALLATION_ID},
}
event = sansio.Event(data, event="check_run", delivery_id="7")
getitem = {
f"/search/issues?q=type:pr+repo:{repository}+sha:{sha}": {
"total_count": 1,
"incomplete_results": False,
"items": [
{
"number": 3378,
"title": "Create GitHub action only for Project Euler",
"state": "open",
"labels": [
{"name": "Status: awaiting reviews"},
{"name": FAILURE_LABEL},
],
}
],
},
f"/repos/{repository}/commits/{sha}/check-runs": {
"total_count": 2,
"check_runs": [
{
"status": "completed",
"conclusion": "timed_out",
"name": "Travis CI - Pull Request",
},
{
"status": "completed",
"conclusion": "success",
"name": "pre-commit",
},
],
},
}
gh = MockGitHubAPI(getitem=getitem)
result = await check_runs.router.dispatch(event, gh)
assert result is None
assert gh.post_data is None # does not add any label
assert gh.post_url is None
assert gh.delete_url is None # does not delete any label
| 33.51831 | 86 | 0.511388 | 1,089 | 11,899 | 5.447199 | 0.144169 | 0.028321 | 0.063216 | 0.018881 | 0.833277 | 0.821982 | 0.81996 | 0.781693 | 0.767195 | 0.765678 | 0 | 0.029056 | 0.360787 | 11,899 | 354 | 87 | 33.612994 | 0.750855 | 0.053114 | 0 | 0.684049 | 0 | 0 | 0.300134 | 0.085791 | 0 | 0 | 0 | 0 | 0.067485 | 1 | 0.003067 | false | 0.006135 | 0.018405 | 0 | 0.021472 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0bce53594a39ef59f862faf282ceec7e298c71a5 | 19,536 | py | Python | sdk/python/pulumi_gcp/bigquery/table.py | pellizzetti/pulumi-gcp | fad74dd55a0cf7723f73046bb0e6fcbfd948ba84 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2019-12-20T22:08:20.000Z | 2019-12-20T22:08:20.000Z | sdk/python/pulumi_gcp/bigquery/table.py | pellizzetti/pulumi-gcp | fad74dd55a0cf7723f73046bb0e6fcbfd948ba84 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_gcp/bigquery/table.py | pellizzetti/pulumi-gcp | fad74dd55a0cf7723f73046bb0e6fcbfd948ba84 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class Table(pulumi.CustomResource):
clusterings: pulumi.Output[list]
"""
Specifies column names to use for data clustering.
Up to four top-level columns are allowed, and should be specified in
descending priority order.
"""
creation_time: pulumi.Output[float]
"""
The time when this table was created, in milliseconds since the epoch.
"""
dataset_id: pulumi.Output[str]
"""
The dataset ID to create the table in.
Changing this forces a new resource to be created.
"""
description: pulumi.Output[str]
"""
The field description.
"""
encryption_configuration: pulumi.Output[dict]
"""
Specifies how the table should be encrypted.
If left blank, the table will be encrypted with a Google-managed key; that process
is transparent to the user. Structure is documented below.
* `kmsKeyName` (`str`)
"""
etag: pulumi.Output[str]
"""
A hash of the resource.
"""
expiration_time: pulumi.Output[float]
"""
The time when this table expires, in
milliseconds since the epoch. If not present, the table will persist
indefinitely. Expired tables will be deleted and their storage
reclaimed.
"""
external_data_configuration: pulumi.Output[dict]
"""
Describes the data format,
location, and other properties of a table stored outside of BigQuery.
By defining these properties, the data source can then be queried as
if it were a standard BigQuery table. Structure is documented below.
* `autodetect` (`bool`)
* `compression` (`str`)
* `csvOptions` (`dict`)
* `allowJaggedRows` (`bool`)
* `allowQuotedNewlines` (`bool`)
* `encoding` (`str`)
* `fieldDelimiter` (`str`)
* `quote` (`str`)
* `skipLeadingRows` (`float`)
* `googleSheetsOptions` (`dict`)
* `range` (`str`)
* `skipLeadingRows` (`float`)
* `ignoreUnknownValues` (`bool`)
* `maxBadRecords` (`float`)
* `sourceFormat` (`str`)
* `sourceUris` (`list`)
"""
friendly_name: pulumi.Output[str]
"""
A descriptive name for the table.
"""
labels: pulumi.Output[dict]
"""
A mapping of labels to assign to the resource.
"""
last_modified_time: pulumi.Output[float]
"""
The time when this table was last modified, in milliseconds since the epoch.
"""
location: pulumi.Output[str]
"""
The geographic location where the table resides. This value is inherited from the dataset.
"""
num_bytes: pulumi.Output[float]
"""
The size of this table in bytes, excluding any data in the streaming buffer.
"""
num_long_term_bytes: pulumi.Output[float]
"""
The number of bytes in the table that are considered "long-term storage".
"""
num_rows: pulumi.Output[float]
"""
The number of rows of data in this table, excluding any data in the streaming buffer.
"""
project: pulumi.Output[str]
"""
The ID of the project in which the resource belongs. If it
is not provided, the provider project is used.
"""
schema: pulumi.Output[str]
"""
A JSON schema for the table. Schema is required
for CSV and JSON formats and is disallowed for Google Cloud
Bigtable, Cloud Datastore backups, and Avro formats when using
external tables. For more information see the
[BigQuery API documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource).
"""
self_link: pulumi.Output[str]
"""
The URI of the created resource.
"""
table_id: pulumi.Output[str]
"""
A unique ID for the resource.
Changing this forces a new resource to be created.
"""
time_partitioning: pulumi.Output[dict]
"""
If specified, configures time-based
partitioning for this table. Structure is documented below.
* `expirationMs` (`float`)
* `field` (`str`)
* `requirePartitionFilter` (`bool`)
* `type` (`str`) - Describes the table type.
"""
type: pulumi.Output[str]
"""
Describes the table type.
"""
view: pulumi.Output[dict]
"""
If specified, configures this table as a view.
Structure is documented below.
* `query` (`str`)
* `useLegacySql` (`bool`)
"""
def __init__(__self__, resource_name, opts=None, clusterings=None, dataset_id=None, description=None, encryption_configuration=None, expiration_time=None, external_data_configuration=None, friendly_name=None, labels=None, project=None, schema=None, table_id=None, time_partitioning=None, view=None, __props__=None, __name__=None, __opts__=None):
"""
Creates a table resource in a dataset for Google BigQuery. For more information see
[the official documentation](https://cloud.google.com/bigquery/docs/) and
[API](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables).
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[list] clusterings: Specifies column names to use for data clustering.
Up to four top-level columns are allowed, and should be specified in
descending priority order.
:param pulumi.Input[str] dataset_id: The dataset ID to create the table in.
Changing this forces a new resource to be created.
:param pulumi.Input[str] description: The field description.
:param pulumi.Input[dict] encryption_configuration: Specifies how the table should be encrypted.
If left blank, the table will be encrypted with a Google-managed key; that process
is transparent to the user. Structure is documented below.
:param pulumi.Input[float] expiration_time: The time when this table expires, in
milliseconds since the epoch. If not present, the table will persist
indefinitely. Expired tables will be deleted and their storage
reclaimed.
:param pulumi.Input[dict] external_data_configuration: Describes the data format,
location, and other properties of a table stored outside of BigQuery.
By defining these properties, the data source can then be queried as
if it were a standard BigQuery table. Structure is documented below.
:param pulumi.Input[str] friendly_name: A descriptive name for the table.
:param pulumi.Input[dict] labels: A mapping of labels to assign to the resource.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs. If it
is not provided, the provider project is used.
:param pulumi.Input[str] schema: A JSON schema for the table. Schema is required
for CSV and JSON formats and is disallowed for Google Cloud
Bigtable, Cloud Datastore backups, and Avro formats when using
external tables. For more information see the
[BigQuery API documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource).
:param pulumi.Input[str] table_id: A unique ID for the resource.
Changing this forces a new resource to be created.
:param pulumi.Input[dict] time_partitioning: If specified, configures time-based
partitioning for this table. Structure is documented below.
:param pulumi.Input[dict] view: If specified, configures this table as a view.
Structure is documented below.
The **encryption_configuration** object supports the following:
* `kmsKeyName` (`pulumi.Input[str]`)
The **external_data_configuration** object supports the following:
* `autodetect` (`pulumi.Input[bool]`)
* `compression` (`pulumi.Input[str]`)
* `csvOptions` (`pulumi.Input[dict]`)
* `allowJaggedRows` (`pulumi.Input[bool]`)
* `allowQuotedNewlines` (`pulumi.Input[bool]`)
* `encoding` (`pulumi.Input[str]`)
* `fieldDelimiter` (`pulumi.Input[str]`)
* `quote` (`pulumi.Input[str]`)
* `skipLeadingRows` (`pulumi.Input[float]`)
* `googleSheetsOptions` (`pulumi.Input[dict]`)
* `range` (`pulumi.Input[str]`)
* `skipLeadingRows` (`pulumi.Input[float]`)
* `ignoreUnknownValues` (`pulumi.Input[bool]`)
* `maxBadRecords` (`pulumi.Input[float]`)
* `sourceFormat` (`pulumi.Input[str]`)
* `sourceUris` (`pulumi.Input[list]`)
The **time_partitioning** object supports the following:
* `expirationMs` (`pulumi.Input[float]`)
* `field` (`pulumi.Input[str]`)
* `requirePartitionFilter` (`pulumi.Input[bool]`)
* `type` (`pulumi.Input[str]`) - Describes the table type.
The **view** object supports the following:
* `query` (`pulumi.Input[str]`)
* `useLegacySql` (`pulumi.Input[bool]`)
> This content is derived from https://github.com/terraform-providers/terraform-provider-google/blob/master/website/docs/r/bigquery_table.html.markdown.
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
__props__['clusterings'] = clusterings
if dataset_id is None:
raise TypeError("Missing required property 'dataset_id'")
__props__['dataset_id'] = dataset_id
__props__['description'] = description
__props__['encryption_configuration'] = encryption_configuration
__props__['expiration_time'] = expiration_time
__props__['external_data_configuration'] = external_data_configuration
__props__['friendly_name'] = friendly_name
__props__['labels'] = labels
__props__['project'] = project
__props__['schema'] = schema
if table_id is None:
raise TypeError("Missing required property 'table_id'")
__props__['table_id'] = table_id
__props__['time_partitioning'] = time_partitioning
__props__['view'] = view
__props__['creation_time'] = None
__props__['etag'] = None
__props__['last_modified_time'] = None
__props__['location'] = None
__props__['num_bytes'] = None
__props__['num_long_term_bytes'] = None
__props__['num_rows'] = None
__props__['self_link'] = None
__props__['type'] = None
super(Table, __self__).__init__(
'gcp:bigquery/table:Table',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name, id, opts=None, clusterings=None, creation_time=None, dataset_id=None, description=None, encryption_configuration=None, etag=None, expiration_time=None, external_data_configuration=None, friendly_name=None, labels=None, last_modified_time=None, location=None, num_bytes=None, num_long_term_bytes=None, num_rows=None, project=None, schema=None, self_link=None, table_id=None, time_partitioning=None, type=None, view=None):
"""
Get an existing Table resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[list] clusterings: Specifies column names to use for data clustering.
Up to four top-level columns are allowed, and should be specified in
descending priority order.
:param pulumi.Input[float] creation_time: The time when this table was created, in milliseconds since the epoch.
:param pulumi.Input[str] dataset_id: The dataset ID to create the table in.
Changing this forces a new resource to be created.
:param pulumi.Input[str] description: The field description.
:param pulumi.Input[dict] encryption_configuration: Specifies how the table should be encrypted.
If left blank, the table will be encrypted with a Google-managed key; that process
is transparent to the user. Structure is documented below.
:param pulumi.Input[str] etag: A hash of the resource.
:param pulumi.Input[float] expiration_time: The time when this table expires, in
milliseconds since the epoch. If not present, the table will persist
indefinitely. Expired tables will be deleted and their storage
reclaimed.
:param pulumi.Input[dict] external_data_configuration: Describes the data format,
location, and other properties of a table stored outside of BigQuery.
By defining these properties, the data source can then be queried as
if it were a standard BigQuery table. Structure is documented below.
:param pulumi.Input[str] friendly_name: A descriptive name for the table.
:param pulumi.Input[dict] labels: A mapping of labels to assign to the resource.
:param pulumi.Input[float] last_modified_time: The time when this table was last modified, in milliseconds since the epoch.
:param pulumi.Input[str] location: The geographic location where the table resides. This value is inherited from the dataset.
:param pulumi.Input[float] num_bytes: The size of this table in bytes, excluding any data in the streaming buffer.
:param pulumi.Input[float] num_long_term_bytes: The number of bytes in the table that are considered "long-term storage".
:param pulumi.Input[float] num_rows: The number of rows of data in this table, excluding any data in the streaming buffer.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs. If it
is not provided, the provider project is used.
:param pulumi.Input[str] schema: A JSON schema for the table. Schema is required
for CSV and JSON formats and is disallowed for Google Cloud
Bigtable, Cloud Datastore backups, and Avro formats when using
external tables. For more information see the
[BigQuery API documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource).
:param pulumi.Input[str] self_link: The URI of the created resource.
:param pulumi.Input[str] table_id: A unique ID for the resource.
Changing this forces a new resource to be created.
:param pulumi.Input[dict] time_partitioning: If specified, configures time-based
partitioning for this table. Structure is documented below.
:param pulumi.Input[str] type: Describes the table type.
:param pulumi.Input[dict] view: If specified, configures this table as a view.
Structure is documented below.
The **encryption_configuration** object supports the following:
* `kmsKeyName` (`pulumi.Input[str]`)
The **external_data_configuration** object supports the following:
* `autodetect` (`pulumi.Input[bool]`)
* `compression` (`pulumi.Input[str]`)
* `csvOptions` (`pulumi.Input[dict]`)
* `allowJaggedRows` (`pulumi.Input[bool]`)
* `allowQuotedNewlines` (`pulumi.Input[bool]`)
* `encoding` (`pulumi.Input[str]`)
* `fieldDelimiter` (`pulumi.Input[str]`)
* `quote` (`pulumi.Input[str]`)
* `skipLeadingRows` (`pulumi.Input[float]`)
* `googleSheetsOptions` (`pulumi.Input[dict]`)
* `range` (`pulumi.Input[str]`)
* `skipLeadingRows` (`pulumi.Input[float]`)
* `ignoreUnknownValues` (`pulumi.Input[bool]`)
* `maxBadRecords` (`pulumi.Input[float]`)
* `sourceFormat` (`pulumi.Input[str]`)
* `sourceUris` (`pulumi.Input[list]`)
The **time_partitioning** object supports the following:
* `expirationMs` (`pulumi.Input[float]`)
* `field` (`pulumi.Input[str]`)
* `requirePartitionFilter` (`pulumi.Input[bool]`)
* `type` (`pulumi.Input[str]`) - Describes the table type.
The **view** object supports the following:
* `query` (`pulumi.Input[str]`)
* `useLegacySql` (`pulumi.Input[bool]`)
> This content is derived from https://github.com/terraform-providers/terraform-provider-google/blob/master/website/docs/r/bigquery_table.html.markdown.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["clusterings"] = clusterings
__props__["creation_time"] = creation_time
__props__["dataset_id"] = dataset_id
__props__["description"] = description
__props__["encryption_configuration"] = encryption_configuration
__props__["etag"] = etag
__props__["expiration_time"] = expiration_time
__props__["external_data_configuration"] = external_data_configuration
__props__["friendly_name"] = friendly_name
__props__["labels"] = labels
__props__["last_modified_time"] = last_modified_time
__props__["location"] = location
__props__["num_bytes"] = num_bytes
__props__["num_long_term_bytes"] = num_long_term_bytes
__props__["num_rows"] = num_rows
__props__["project"] = project
__props__["schema"] = schema
__props__["self_link"] = self_link
__props__["table_id"] = table_id
__props__["time_partitioning"] = time_partitioning
__props__["type"] = type
__props__["view"] = view
return Table(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
| 48.118227 | 447 | 0.647011 | 2,291 | 19,536 | 5.334788 | 0.12309 | 0.072901 | 0.041237 | 0.024873 | 0.784896 | 0.746032 | 0.724431 | 0.714449 | 0.701276 | 0.685403 | 0 | 0.000346 | 0.25947 | 19,536 | 405 | 448 | 48.237037 | 0.844474 | 0.465397 | 0 | 0.018519 | 1 | 0 | 0.149886 | 0.02214 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0.009259 | 0.055556 | 0.018519 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
045e518ca6775d782dbfeafdb884c8e7e7bd2ffa | 5,474 | py | Python | symbol/transform_nets.py | Zehaos/mx-pointnet | e21c2de87ae55f714bfb897e2c8f7cf3e04a574e | [
"MIT"
] | 16 | 2018-01-13T04:57:44.000Z | 2020-02-13T18:53:09.000Z | symbol/transform_nets.py | Zehaos/mx-pointnet | e21c2de87ae55f714bfb897e2c8f7cf3e04a574e | [
"MIT"
] | null | null | null | symbol/transform_nets.py | Zehaos/mx-pointnet | e21c2de87ae55f714bfb897e2c8f7cf3e04a574e | [
"MIT"
] | 5 | 2018-01-13T15:53:51.000Z | 2019-04-22T16:54:51.000Z | import mxnet as mx
import numpy as np
from mx_constant import MyConstant
eps = 1e-5
def input_transform_net(data, batch_size, num_points, workspace, bn_mom=0.9, scope="itn_"):
data = mx.sym.expand_dims(data, axis=1) # (32,1,1024,3)
conv0 = mx.sym.Convolution(data=data, num_filter=64, kernel=(1, 3), stride=(1, 1), name=scope + "conv0",
workspace=workspace)
conv0 = mx.sym.BatchNorm(data=conv0, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn0')
conv0 = mx.sym.Activation(data=conv0, act_type='relu', name=scope + 'relu0')
conv1 = mx.sym.Convolution(data=conv0, num_filter=128, kernel=(1, 1), stride=(1, 1), name=scope + "conv1",
workspace=workspace)
conv1 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn1')
conv1 = mx.sym.Activation(data=conv1, act_type='relu', name=scope + 'relu1')
conv2 = mx.sym.Convolution(data=conv1, num_filter=1024, kernel=(1, 1), stride=(1, 1), name=scope + "conv2",
workspace=workspace)
conv2 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn2')
conv2 = mx.sym.Activation(data=conv2, act_type='relu', name=scope + 'relu2')
pool3 = mx.sym.Pooling(data=conv2, kernel=(num_points, 1), pool_type='max', name=scope + 'pool3')
pool3_reshaped = mx.sym.Reshape(data=pool3, shape=(batch_size, -1))
fc4 = mx.sym.FullyConnected(data=pool3_reshaped, num_hidden=512, name=scope + 'fc4')
fc4 = mx.sym.BatchNorm(data=fc4, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn4')
fc4 = mx.sym.Activation(data=fc4, act_type='relu', name=scope + 'relu4')
fc5 = mx.sym.FullyConnected(data=fc4, num_hidden=256, name=scope + 'fc5')
fc5 = mx.sym.BatchNorm(data=fc5, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn5')
fc5 = mx.sym.Activation(data=fc5, act_type='relu', name=scope + 'relu5')
input_transformer_weight = mx.sym.Variable(name="input_transformer_weight", shape=(9, 256), init=mx.init.Zero())
input_transformer_bias = mx.sym.Variable(name="input_transformer_bias", shape=(9), init=mx.init.Zero())
transform = mx.sym.FullyConnected(data=fc5, num_hidden=9, weight=input_transformer_weight, bias=input_transformer_bias, name=scope + 'fc6')
const_arr = [1, 0, 0, 0, 1, 0, 0, 0, 1]
a = mx.sym.Variable('itn_addi_bias', shape=(batch_size, 9), init=MyConstant(value=[const_arr]*batch_size))
a = mx.sym.BlockGrad(a) # now variable a is a constant
transform = mx.sym.elemwise_add(transform, a, name=scope + "add_eye")
transform_reshaped = mx.sym.Reshape(data=transform, shape=(batch_size, 3, 3), name=scope + "reshape_transform")
return transform_reshaped
def feature_transform_net(data, batch_size, num_points, workspace, bn_mom=0.9, scope="ftn_"):
conv0 = mx.sym.Convolution(data=data, num_filter=64, kernel=(1, 1), stride=(1, 1), name=scope + "conv0",
workspace=workspace)
conv0 = mx.sym.BatchNorm(data=conv0, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn0')
conv0 = mx.sym.Activation(data=conv0, act_type='relu', name=scope + 'relu0')
conv1 = mx.sym.Convolution(data=conv0, num_filter=128, kernel=(1, 1), stride=(1, 1), name=scope + "conv1",
workspace=workspace)
conv1 = mx.sym.BatchNorm(data=conv1, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn1')
conv1 = mx.sym.Activation(data=conv1, act_type='relu', name=scope + 'relu1')
conv2 = mx.sym.Convolution(data=conv1, num_filter=1024, kernel=(1, 1), stride=(1, 1), name=scope + "conv2",
workspace=workspace)
conv2 = mx.sym.BatchNorm(data=conv2, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn2')
conv2 = mx.sym.Activation(data=conv2, act_type='relu', name=scope + 'relu2')
pool3 = mx.sym.Pooling(data=conv2, kernel=(num_points, 1), pool_type='max', name=scope + 'pool3')
pool3_reshaped = mx.sym.Reshape(data=pool3, shape=(batch_size, -1))
fc4 = mx.sym.FullyConnected(data=pool3_reshaped, num_hidden=512, name=scope + 'fc4')
fc4 = mx.sym.BatchNorm(data=fc4, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn4')
fc4 = mx.sym.Activation(data=fc4, act_type='relu', name=scope + 'relu4')
fc5 = mx.sym.FullyConnected(data=fc4, num_hidden=256, name=scope + 'fc5')
fc5 = mx.sym.BatchNorm(data=fc5, fix_gamma=False, eps=eps, momentum=bn_mom, name=scope + 'bn5')
fc5 = mx.sym.Activation(data=fc5, act_type='relu', name=scope + 'relu5')
feat_transformer_weight = mx.sym.Variable(name="feat_transformer_weight", shape=(64*64, 256), init=mx.init.Zero())
feat_transformer_bias = mx.sym.Variable(name="feat_transformer_bias", shape=(64*64), init=mx.init.Zero())
transform = mx.sym.FullyConnected(data=fc5, num_hidden=64 * 64, weight=feat_transformer_weight, bias=feat_transformer_bias, name=scope + 'fc6')
const_arr = np.eye(64, dtype=np.float32).flatten().tolist()
a = mx.sym.Variable('ftn_addi_bias', shape=(batch_size, 64 * 64), init=MyConstant(value=[const_arr]*batch_size))
a = mx.sym.BlockGrad(a) # now variable a is a constant
transform = mx.sym.elemwise_add(transform, a, name=scope + "add_eye")
transform_reshaped = mx.sym.Reshape(data=transform, shape=(batch_size, 64, 64), name=scope + "reshape_transform")
return transform_reshaped
| 61.505618 | 147 | 0.68122 | 823 | 5,474 | 4.392467 | 0.126367 | 0.067773 | 0.038728 | 0.049793 | 0.905118 | 0.878838 | 0.825173 | 0.779253 | 0.778976 | 0.778976 | 0 | 0.047826 | 0.159664 | 5,474 | 88 | 148 | 62.204545 | 0.738043 | 0.01297 | 0 | 0.676923 | 0 | 0 | 0.065938 | 0.01667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030769 | false | 0 | 0.046154 | 0 | 0.107692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b08ebb990cb0442cb30f5c70e04c1deefce0e14 | 12,721 | py | Python | pypeFLOW/src/tests/test_pypeflow_task.py | WangGenomicsLab/FALCON | 07ce2944ebd9aa624cc6ed32af59ec36a2590e7a | [
"BSD-3-Clause-Clear"
] | 13 | 2015-06-29T08:02:25.000Z | 2020-05-11T02:28:19.000Z | pypeFLOW/src/tests/test_pypeflow_task.py | WangGenomicsLab/FALCON | 07ce2944ebd9aa624cc6ed32af59ec36a2590e7a | [
"BSD-3-Clause-Clear"
] | 40 | 2015-06-11T20:04:20.000Z | 2020-01-21T16:57:50.000Z | pypeFLOW/src/tests/test_pypeflow_task.py | WangGenomicsLab/FALCON | 07ce2944ebd9aa624cc6ed32af59ec36a2590e7a | [
"BSD-3-Clause-Clear"
] | 26 | 2015-06-06T15:11:51.000Z | 2020-12-28T10:36:04.000Z | from nose.tools import assert_equal
from nose import SkipTest
import pypeflow.task
import pypeflow.data
class TestPypeTaskBase:
def test___call__(self):
# pype_task_base = PypeTaskBase(URL, *argv, **kwargv)
# assert_equal(expected, pype_task_base.__call__(*argv, **kwargv))
raise SkipTest # TODO: implement your test here
def test___init__(self):
# pype_task_base = PypeTaskBase(URL, *argv, **kwargv)
raise SkipTest # TODO: implement your test here
def test_finalize(self):
# pype_task_base = PypeTaskBase(URL, *argv, **kwargv)
# assert_equal(expected, pype_task_base.finalize())
raise SkipTest # TODO: implement your test here
def test_setInputs(self):
# pype_task_base = PypeTaskBase(URL, *argv, **kwargv)
# assert_equal(expected, pype_task_base.setInputs(inputDataObjs))
raise SkipTest # TODO: implement your test here
def test_setOutputs(self):
# pype_task_base = PypeTaskBase(URL, *argv, **kwargv)
# assert_equal(expected, pype_task_base.setOutputs(outputDataObjs))
raise SkipTest # TODO: implement your test here
def test_setReferenceMD5(self):
# pype_task_base = PypeTaskBase(URL, *argv, **kwargv)
# assert_equal(expected, pype_task_base.setReferenceMD5(md5Str))
raise SkipTest # TODO: implement your test here
def test_status(self):
# pype_task_base = PypeTaskBase(URL, *argv, **kwargv)
# assert_equal(expected, pype_task_base.status())
raise SkipTest # TODO: implement your test here
class TestPypeThreadTaskBase:
def test___call__(self):
# pype_thread_task_base = PypeThreadTaskBase()
# assert_equal(expected, pype_thread_task_base.__call__(*argv, **kwargv))
raise SkipTest # TODO: implement your test here
def test_nSlots(self):
# pype_thread_task_base = PypeThreadTaskBase()
# assert_equal(expected, pype_thread_task_base.nSlots())
raise SkipTest # TODO: implement your test here
def test_setMessageQueue(self):
# pype_thread_task_base = PypeThreadTaskBase()
# assert_equal(expected, pype_thread_task_base.setMessageQueue(q))
raise SkipTest # TODO: implement your test here
class TestPypeDistributiableTaskBase:
def test___init__(self):
# pype_distributiable_task_base = PypeDistributiableTaskBase(URL, *argv, **kwargv)
raise SkipTest # TODO: implement your test here
class TestPypeTask:
def test_pype_task(self):
# assert_equal(expected, PypeTask(*argv, **kwargv))
raise SkipTest # TODO: implement your test here
class TestPypeShellTask:
def test_pype_shell_task(self):
# assert_equal(expected, PypeShellTask(*argv, **kwargv))
raise SkipTest # TODO: implement your test here
class TestPypeSGETask:
def test_pype_sge_task(self):
# assert_equal(expected, PypeSGETask(*argv, **kwargv))
raise SkipTest # TODO: implement your test here
class TestPypeDistributibleTask:
def test_pype_distributible_task(self):
# assert_equal(expected, PypeDistributibleTask(*argv, **kwargv))
raise SkipTest # TODO: implement your test here
class TestTimeStampCompare:
def test_time_stamp_compare(self):
# assert_equal(expected, timeStampCompare(inputDataObjs, outputDataObjs, parameters))
raise SkipTest # TODO: implement your test here
class TestPypeTaskCollectionBase:
def test___init__(self):
# pype_task_collection_base = PypeTaskCollectionBase(URL, tasks)
raise SkipTest # TODO: implement your test here
def test_getTasks(self):
# pype_task_collection_base = PypeTaskCollectionBase(URL, tasks)
# assert_equal(expected, pype_task_collection_base.getTasks())
raise SkipTest # TODO: implement your test here
class TestPypeTaskCollection:
def test___init__(self):
# pype_task_collection = PypeTaskCollection(URL, tasks)
raise SkipTest # TODO: implement your test here
def test_addTask(self):
# pype_task_collection = PypeTaskCollection(URL, tasks)
# assert_equal(expected, pype_task_collection.addTask(task))
raise SkipTest # TODO: implement your test here
def test_getTasks(self):
# pype_task_collection = PypeTaskCollection(URL, tasks)
# assert_equal(expected, pype_task_collection.getTasks())
raise SkipTest # TODO: implement your test here
class TestPypeScatteredTasks:
def test_pype_scattered_tasks(self):
import os
#os.system("rm -rf /tmp/pypetest/*")
nChunk = 5
infileObj =\
pypeflow.data.PypeSplittableLocalFile(
"splittablefile://localhost/tmp/pypetest/test_in_1.txt",
nChunk = nChunk)
with open(infileObj.localFileName, "w") as f:
for i in range(nChunk):
f.write("file%02d\n" % i)
def scatter(*argv, **kwargv):
outputObjs = sorted( kwargv["outputDataObjs"].items() )
nOut = len(outputObjs)
outputObjs = [ (o[0], o[1], open(o[1].localFileName, "w")) for o in outputObjs]
with open(kwargv["inputDataObjs"]["completeFile"].localFileName,"r") as f:
i = 0
for l in f:
outf = outputObjs[i % nOut][2]
outf.write(l)
i += 1
for o in outputObjs:
o[2].close()
PypeShellTask = pypeflow.task.PypeShellTask
PypeTask = pypeflow.task.PypeTask
PypeTaskBase = pypeflow.task.PypeTaskBase
infileObj.setScatterTask(PypeTask, PypeTaskBase, scatter)
infileObj.getScatterTask()()
def gather(*argv, **kwargv):
inputObjs = sorted( kwargv["inputDataObjs"].items() )
with open(kwargv["outputDataObjs"]["completeFile"].localFileName,"w") as outf:
for k, subfile in inputObjs:
f = open(subfile.localFileName)
outf.write(f.read())
f.close()
outfileObj =\
pypeflow.data.PypeSplittableLocalFile(
"splittablefile://localhost/tmp/pypetest/test_out_1.txt",
nChunk = nChunk)
outfileObj.setGatherTask(PypeTask, PypeTaskBase, gather)
PypeScatteredTasks = pypeflow.task.PypeScatteredTasks
@PypeScatteredTasks( inputDataObjs = {"inf":infileObj},
outputDataObjs = {"outf":outfileObj} )
def test_fun(*argv, **kwargv):
chunk_id = kwargv["chunk_id"]
self = test_fun[chunk_id]
assert self.inf._path == "/tmp/pypetest/%03d_test_in_1.txt" % chunk_id
with open( self.outf._path , "w") as f:
in_f = open(self.inf.localFileName,"r")
f.write("out:"+in_f.read())
in_f.close()
return self.inf._path
assert len(test_fun.getTasks()) == nChunk
for i in range(nChunk):
test_fun[i]()
outfileObj.getGatherTask()()
def test_pype_scattered_tasks_2(self):
import os
#os.system("rm -rf /tmp/pypetest/*")
nChunk = 5
infileObj =\
pypeflow.data.PypeSplittableLocalFile(
"splittablefile://localhost/tmp/pypetest/test_in_2.txt",
nChunk = nChunk)
with open(infileObj.localFileName, "w") as f:
for i in range(nChunk):
f.write("file%02d\n" % i)
with open("/tmp/pypetest/scatter.sh", "w") as f:
f.write("#!/bin/bash\n")
f.write("for f in %s;" % " ".join( ["%03d" % i for i in range(nChunk)] ))
f.write('do if [ -e /tmp/pypetest/%f"_test_in.txt" ];\
then rm /tmp/pypetest/$f"_test_in.txt"; fi;\n')
f.write("done\n")
for i in range(nChunk):
f.write("echo file%02d > /tmp/pypetest/%03d_test_in_2.txt\n" % (i, i))
PypeShellTask = pypeflow.task.PypeShellTask
PypeTask = pypeflow.task.PypeTask
PypeTaskBase = pypeflow.task.PypeTaskBase
infileObj.setScatterTask(PypeShellTask, PypeTaskBase, "/tmp/pypetest/scatter.sh")
infileObj.getScatterTask()()
def gather(*argv, **kwargv):
inputObjs = sorted( kwargv["inputDataObjs"].items() )
with open(kwargv["outputDataObjs"]["completeFile"].localFileName,"w") as outf:
for k, subfile in inputObjs:
f = open(subfile.localFileName)
outf.write("out:"+f.read())
f.close()
outfileObj =\
pypeflow.data.PypeSplittableLocalFile(
"splittablefile://localhost/tmp/pypetest/test_out_2.txt",
nChunk = nChunk)
outfileObj.setGatherTask(PypeTask, PypeTaskBase, gather)
PypeScatteredTasks = pypeflow.task.PypeScatteredTasks
@PypeScatteredTasks( inputDataObjs = {"inf":infileObj},
outputDataObjs = {"outf":outfileObj},
comment="xyz")
def test_fun_2(*argv, **kwargv):
assert kwargv["comment"] == "xyz"
chunk_id = kwargv["chunk_id"]
self = test_fun_2[chunk_id]
assert self.inf._path == "/tmp/pypetest/%03d_test_in_2.txt" % chunk_id
with open( self.outf._path , "w") as f:
f.write("file%02d\n" % chunk_id)
return self.inf._path
assert len(test_fun_2.getTasks()) == nChunk
for i in range(nChunk):
test_fun_2[i]()
outfileObj.getGatherTask()()
def test_pype_scattered_tasks_3(self):
import os
#os.system("rm -rf /tmp/pypetest/*")
nChunk = 5
infileObj0 =\
pypeflow.data.PypeLocalFile(
"file://localhost/tmp/pypetest/test_in_0.txt")
with open(infileObj0.localFileName,"w") as f:
f.write("prefix:")
infileObj =\
pypeflow.data.PypeSplittableLocalFile(
"splittablefile://localhost/tmp/pypetest/test_in_3.txt",
nChunk = nChunk)
with open(infileObj.localFileName, "w") as f:
for i in range(nChunk):
f.write("file%02d\n" % i)
def scatter(*argv, **kwargv):
outputObjs = sorted( kwargv["outputDataObjs"].items() )
nOut = len(outputObjs)
outputObjs = [ (o[0], o[1], open(o[1].localFileName, "w")) for o in outputObjs]
with open(kwargv["inputDataObjs"]["completeFile"].localFileName,"r") as f:
i = 0
for l in f:
outf = outputObjs[i % nOut][2]
outf.write(l)
i += 1
for o in outputObjs:
o[2].close()
PypeShellTask = pypeflow.task.PypeShellTask
PypeTask = pypeflow.task.PypeTask
PypeTaskBase = pypeflow.task.PypeTaskBase
infileObj.setScatterTask(PypeTask, PypeTaskBase, scatter)
infileObj.getScatterTask()()
def gather(*argv, **kwargv):
inputObjs = sorted( kwargv["inputDataObjs"].items() )
with open(kwargv["outputDataObjs"]["completeFile"].localFileName,"w") as outf:
for k, subfile in inputObjs:
f = open(subfile.localFileName)
outf.write(f.read())
f.close()
outfileObj3 =\
pypeflow.data.PypeSplittableLocalFile(
"splittablefile://localhost/tmp/pypetest/test_out_3.txt",
nChunk = nChunk)
outfileObj3.setGatherTask(PypeTask, PypeTaskBase, gather)
PypeScatteredTasks = pypeflow.task.PypeScatteredTasks
@PypeScatteredTasks( inputDataObjs = {"inf":infileObj, "prefix":infileObj0},
outputDataObjs = {"outf":outfileObj3} )
def test_fun_3(*argv, **kwargv):
chunk_id = kwargv["chunk_id"]
self = test_fun_3[chunk_id]
assert self.inf._path == "/tmp/pypetest/%03d_test_in_3.txt" % chunk_id
with open( self.prefix.localFileName, "r") as f:
prefix = f.read()
with open( self.outf._path, "w") as f:
in_f = open(self.inf.localFileName,"r")
f.write(prefix + in_f.read())
in_f.close()
return self.inf._path
assert len(test_fun_3.getTasks()) == nChunk
for i in range(nChunk):
test_fun_3[i]()
outfileObj3.getGatherTask()()
| 38.902141 | 93 | 0.601289 | 1,351 | 12,721 | 5.487787 | 0.113249 | 0.025492 | 0.048152 | 0.073644 | 0.8314 | 0.799838 | 0.790666 | 0.772727 | 0.71365 | 0.643917 | 0 | 0.007546 | 0.291644 | 12,721 | 326 | 94 | 39.021472 | 0.815226 | 0.208474 | 0 | 0.644444 | 0 | 0 | 0.094896 | 0.05414 | 0 | 0 | 0 | 0.003067 | 0.035556 | 1 | 0.142222 | false | 0 | 0.031111 | 0 | 0.235556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9bd597cac12e48bc99dfd6071fe411640bb284a7 | 31 | py | Python | src/actions/help.py | bluzi/pypm | 5d31fda70b76628f15a94d10cb50bc91fdcc9115 | [
"MIT"
] | 1 | 2018-08-11T12:43:23.000Z | 2018-08-11T12:43:23.000Z | src/actions/help.py | bluzi/pypm | 5d31fda70b76628f15a94d10cb50bc91fdcc9115 | [
"MIT"
] | null | null | null | src/actions/help.py | bluzi/pypm | 5d31fda70b76628f15a94d10cb50bc91fdcc9115 | [
"MIT"
] | null | null | null | def handle():
print('help') | 15.5 | 17 | 0.580645 | 4 | 31 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 31 | 2 | 17 | 15.5 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
ac99f87727ec1ec295559ebcdd4dcc6f4e77c7e3 | 66 | py | Python | __init__.py | odeliss/wapetex | 1ad6740f537724e3640175234ea5d120912a0c3c | [
"MIT"
] | null | null | null | __init__.py | odeliss/wapetex | 1ad6740f537724e3640175234ea5d120912a0c3c | [
"MIT"
] | null | null | null | __init__.py | odeliss/wapetex | 1ad6740f537724e3640175234ea5d120912a0c3c | [
"MIT"
] | null | null | null | from wapetex.datagenerationpipeline import dataGenerationPipeline
| 33 | 65 | 0.924242 | 5 | 66 | 12.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 66 | 1 | 66 | 66 | 0.983871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
acc09ccf5243003cd79e6b41555cf7870edab638 | 3,990 | py | Python | tests/nomulti_test.py | victronenergy/dbus-systemcalc-py | 3d3c31d2d01bbe12aad634279e6569f4cba902b4 | [
"MIT"
] | 5 | 2018-07-08T20:05:52.000Z | 2021-11-29T03:07:00.000Z | tests/nomulti_test.py | victronenergy/dbus-systemcalc-py | 3d3c31d2d01bbe12aad634279e6569f4cba902b4 | [
"MIT"
] | 2 | 2016-10-13T13:02:54.000Z | 2021-03-05T17:08:55.000Z | tests/nomulti_test.py | victronenergy/dbus-systemcalc-py | 3d3c31d2d01bbe12aad634279e6569f4cba902b4 | [
"MIT"
] | 13 | 2015-04-13T12:21:24.000Z | 2022-01-24T16:28:35.000Z | #!/usr/bin/env python3
# This adapts sys.path to include all relevant packages
import context
# our own packages
from base import TestSystemCalcBase
# Monkey patching for unit tests
import patches
class TestSystemCalcNoMulti(TestSystemCalcBase):
def __init__(self, methodName='runTest'):
TestSystemCalcBase.__init__(self, methodName)
def test_noservices(self):
self._update_values()
self._check_values({
'/Dc/Battery/Soc': None,
'/AutoSelectedBatteryService': 'No battery monitor found'})
def test_no_battery_service(self):
self._set_setting('/Settings/SystemSetup/BatteryService', 'nobattery')
self._add_device('com.victronenergy.battery.ttyO2',
product_name='battery',
values={
'/Dc/0/Voltage': 12.3,
'/Dc/0/Current': 5.3,
'/Dc/0/Power': 65,
'/Soc': 15.3,
'/DeviceInstance': 2})
self._update_values()
self._check_values({
'/Dc/Battery/Power': None,
'/AutoSelectedBatteryService': None})
self._set_setting('/Settings/SystemSetup/BatteryService', 'default')
self._update_values()
self._check_values({
'/Dc/Battery/Power': 65,
'/AutoSelectedBatteryService': 'battery on dummy'})
def test_hub1_control_vedirect_solarcharger_bms_battery(self):
self._set_setting('/Settings/Services/Bol', 1)
self._add_device('com.victronenergy.solarcharger.ttyO2', {
'/State': 3,
'/Settings/ChargeCurrentLimit': 100,
'/Link/NetworkMode': 0,
'/Link/ChargeVoltage': None,
'/Link/ChargeCurrent': None,
'/Link/VoltageSense': None,
'/Dc/0/Voltage': 12.6,
'/Dc/0/Current': 24,
'/FirmwareVersion': 0x0129},
connection='VE.Direct')
self._add_device('com.victronenergy.battery.ttyO2',
product_name='battery',
values={
'/Dc/0/Voltage': 12.3,
'/Dc/0/Current': 5.3,
'/Dc/0/Power': 65,
'/Soc': 15.3,
'/DeviceInstance': 2,
'/Info/BatteryLowVoltage': 47,
'/Info/MaxChargeCurrent': 25,
'/Info/MaxChargeVoltage': 58.2,
'/Info/MaxDischargeCurrent': 50})
self._update_values(interval=10000)
self._check_external_values({
'com.victronenergy.solarcharger.ttyO2': {
'/Link/NetworkMode': 13,
'/Link/ChargeCurrent': 25,
'/Link/ChargeVoltage': 58.2}})
self._check_values({
'/Control/SolarChargeCurrent': 1,
'/Control/SolarChargeVoltage': 1,
'/Control/BmsParameters': 1})
def test_hub1_control_bms_battery_vedirect_solarcharger_off(self):
self._set_setting('/Settings/Services/Bol', 1)
self._add_device('com.victronenergy.solarcharger.ttyO0', {
'/State': 0,
'/Settings/ChargeCurrentLimit': 100,
'/Link/NetworkMode': 0,
'/Link/ChargeVoltage': None,
'/Link/ChargeCurrent': None,
'/Link/VoltageSense': None,
'/Dc/0/Voltage': 12.6,
'/Dc/0/Current': 0,
'/FirmwareVersion': 0x0129},
connection='VE.Direct')
self._add_device('com.victronenergy.solarcharger.ttyO2', {
'/State': 3,
'/Settings/ChargeCurrentLimit': 100,
'/Link/NetworkMode': 0,
'/Link/ChargeVoltage': None,
'/Link/ChargeCurrent': None,
'/Link/VoltageSense': None,
'/Dc/0/Voltage': 12.6,
'/Dc/0/Current': 24,
'/FirmwareVersion': 0x0129},
connection='VE.Direct')
self._add_device('com.victronenergy.battery.ttyUSB0',
product_name='battery',
values={
'/Dc/0/Voltage': 12.3,
'/Dc/0/Current': 5.3,
'/Dc/0/Power': 65,
'/Soc': 15.3,
'/DeviceInstance': 2,
'/Info/BatteryLowVoltage': 47,
'/Info/MaxChargeCurrent': 25,
'/Info/MaxChargeVoltage': 58.2,
'/Info/MaxDischargeCurrent': 50})
self._update_values(interval=10000)
self._check_external_values({
'com.victronenergy.solarcharger.ttyO0': {
'/Link/NetworkMode': 13,
'/Link/ChargeCurrent': None,
'/Link/ChargeVoltage': 58.2},
'com.victronenergy.solarcharger.ttyO2': {
'/Link/NetworkMode': 13,
'/Link/ChargeCurrent': 25,
'/Link/ChargeVoltage': 58.2}})
self._check_values({
'/Control/SolarChargeCurrent': 1,
'/Control/SolarChargeVoltage': 1,
'/Control/BmsParameters': 1})
| 30.930233 | 72 | 0.677193 | 453 | 3,990 | 5.801325 | 0.256071 | 0.017123 | 0.02968 | 0.03653 | 0.770167 | 0.755708 | 0.719939 | 0.719939 | 0.704718 | 0.670472 | 0 | 0.047549 | 0.151378 | 3,990 | 128 | 73 | 31.171875 | 0.728588 | 0.030827 | 0 | 0.782609 | 0 | 0 | 0.461817 | 0.240228 | 0 | 0 | 0.00466 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.026087 | 0 | 0.078261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
acd357ff389ec7d4d1cef5722aa5fcb54f6bb146 | 424 | py | Python | rastervision/v2/core/filesystem/__init__.py | carderne/raster-vision | 915fbcd3263d8f2193e65c2cd0eb53e050a47a01 | [
"Apache-2.0"
] | 1 | 2019-11-07T10:02:23.000Z | 2019-11-07T10:02:23.000Z | rastervision/v2/core/filesystem/__init__.py | carderne/raster-vision | 915fbcd3263d8f2193e65c2cd0eb53e050a47a01 | [
"Apache-2.0"
] | null | null | null | rastervision/v2/core/filesystem/__init__.py | carderne/raster-vision | 915fbcd3263d8f2193e65c2cd0eb53e050a47a01 | [
"Apache-2.0"
] | null | null | null | # flake8: noqa
from rastervision.v2.core.filesystem.filesystem import (
FileSystem, NotReadableError, NotWritableError, ProtobufParseException)
from rastervision.v2.core.filesystem.local_filesystem import LocalFileSystem
from rastervision.v2.core.filesystem.s3_filesystem import S3FileSystem
from rastervision.v2.core.filesystem.http_filesystem import HttpFileSystem
from rastervision.v2.core.filesystem.utils import *
| 47.111111 | 76 | 0.858491 | 47 | 424 | 7.680851 | 0.382979 | 0.221607 | 0.249307 | 0.304709 | 0.443213 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.075472 | 424 | 8 | 77 | 53 | 0.90051 | 0.028302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ace37fbed6d60b7165772a91d5e045c52828a215 | 46 | py | Python | vnpy/gateway/onetoken/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 19,529 | 2015-03-02T12:17:35.000Z | 2022-03-31T17:18:27.000Z | vnpy/gateway/onetoken/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 2,186 | 2015-03-04T23:16:33.000Z | 2022-03-31T03:44:01.000Z | vnpy/gateway/onetoken/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 8,276 | 2015-03-02T05:21:04.000Z | 2022-03-31T13:13:13.000Z | from .onetoken_gateway import OnetokenGateway
| 23 | 45 | 0.891304 | 5 | 46 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a01d06f5195438416debbf98d914807536e8b31 | 36 | py | Python | backups/__init__.py | daqbroker/daqbrokerServer | e8d2b72b4e3ab12c26dfa7b52e9d77097ede3f33 | [
"MIT"
] | null | null | null | backups/__init__.py | daqbroker/daqbrokerServer | e8d2b72b4e3ab12c26dfa7b52e9d77097ede3f33 | [
"MIT"
] | null | null | null | backups/__init__.py | daqbroker/daqbrokerServer | e8d2b72b4e3ab12c26dfa7b52e9d77097ede3f33 | [
"MIT"
] | null | null | null | from .rsyncServer import rsyncServer | 36 | 36 | 0.888889 | 4 | 36 | 8 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c582cd5b7c7bc2ac28d166d6acbd928fc6e4a9f0 | 360 | py | Python | python/testData/inspections/PyArgumentListInspection/slice.py | tgodzik/intellij-community | f5ef4191fc30b69db945633951fb160c1cfb7b6f | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/inspections/PyArgumentListInspection/slice.py | tgodzik/intellij-community | f5ef4191fc30b69db945633951fb160c1cfb7b6f | [
"Apache-2.0"
] | 2 | 2022-02-19T09:45:05.000Z | 2022-02-27T20:32:55.000Z | python/testData/inspections/PyArgumentListInspection/slice.py | tgodzik/intellij-community | f5ef4191fc30b69db945633951fb160c1cfb7b6f | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | print(slice(<warning descr="Parameter(s) unfilledPossible callees:slice(self: slice, stop)slice(self: slice, start, stop, step=...)">)</warning>)
print(slice(1))
print(slice(1, 2))
print(slice(1, 2, 3))
print(slice<warning descr="Unexpected argument(s)Possible callees:slice(self: slice, stop)slice(self: slice, start, stop, step=...)">(1, 2, 3, 4)</warning>)
| 60 | 156 | 0.708333 | 55 | 360 | 4.636364 | 0.345455 | 0.196078 | 0.219608 | 0.172549 | 0.407843 | 0.407843 | 0.407843 | 0.407843 | 0.407843 | 0.407843 | 0 | 0.030211 | 0.080556 | 360 | 5 | 157 | 72 | 0.740181 | 0 | 0 | 0 | 0 | 0.4 | 0.575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c5833b5fca0a051f55dedd516d8c3b1c1155e700 | 49 | py | Python | python/testData/inspections/PyArgumentListInspection/TimetupleOnAssertedDate/_datetime.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/inspections/PyArgumentListInspection/TimetupleOnAssertedDate/_datetime.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/inspections/PyArgumentListInspection/TimetupleOnAssertedDate/_datetime.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | class date:
def timetuple(self):
pass | 16.333333 | 24 | 0.591837 | 6 | 49 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.326531 | 49 | 3 | 25 | 16.333333 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
c5a4cd3620f8a6662d0d0749fc67b878b119fa12 | 142 | py | Python | moneymanagerapp/admin.py | Viniciuspxf/moneyManager | 5180d833e0cb28565cbefca4f3c346a0cb936661 | [
"MIT"
] | null | null | null | moneymanagerapp/admin.py | Viniciuspxf/moneyManager | 5180d833e0cb28565cbefca4f3c346a0cb936661 | [
"MIT"
] | null | null | null | moneymanagerapp/admin.py | Viniciuspxf/moneyManager | 5180d833e0cb28565cbefca4f3c346a0cb936661 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import *
admin.site.register(Account)
admin.site.register(Expense)
admin.site.register(Income)
| 20.285714 | 32 | 0.809859 | 20 | 142 | 5.75 | 0.55 | 0.234783 | 0.443478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084507 | 142 | 6 | 33 | 23.666667 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c5d5a9d1db44c66deec6d7b772a2f193aa667a77 | 163 | py | Python | rsna_heme/__init__.py | johncolby/rsna_heme | 14e3ecafa0587ebdce2a04b239edecb32dbaa6d0 | [
"MIT"
] | 1 | 2020-05-30T13:59:47.000Z | 2020-05-30T13:59:47.000Z | rsna_heme/__init__.py | johncolby/rsna_heme | 14e3ecafa0587ebdce2a04b239edecb32dbaa6d0 | [
"MIT"
] | null | null | null | rsna_heme/__init__.py | johncolby/rsna_heme | 14e3ecafa0587ebdce2a04b239edecb32dbaa6d0 | [
"MIT"
] | null | null | null | from . import process
from . import cnn
from . import dicom
from . import io
from . import labels
from . import logger
from . import transforms
from . import util
| 18.111111 | 24 | 0.754601 | 24 | 163 | 5.125 | 0.416667 | 0.650407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196319 | 163 | 8 | 25 | 20.375 | 0.938931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c5ea5d4d926e071eb9e34da6cd11cd621dbfcf74 | 220 | py | Python | strimadec/models/__init__.py | borea17/StrImaDec | 711e14d50ff816585b43c1509355983738b45ecb | [
"MIT"
] | null | null | null | strimadec/models/__init__.py | borea17/StrImaDec | 711e14d50ff816585b43c1509355983738b45ecb | [
"MIT"
] | null | null | null | strimadec/models/__init__.py | borea17/StrImaDec | 711e14d50ff816585b43c1509355983738b45ecb | [
"MIT"
] | null | null | null | import strimadec.models.modules
import strimadec.models.utils
from strimadec.models.DVAE import DVAE
from strimadec.models.DVAEST import DVAEST
from strimadec.models.AIR import AIR
from strimadec.models.DAIR import DAIR | 31.428571 | 42 | 0.854545 | 32 | 220 | 5.875 | 0.3125 | 0.478723 | 0.404255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 220 | 7 | 43 | 31.428571 | 0.94 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a845e79976cb199db5681dec2762bea126a0d27b | 9,150 | py | Python | pyocd/target/builtin/target_PIC32CXMTG.py | CristianGuemes/pyOCD | 1307acd7aa7f4dd4472af4d11eadea90f6c4fe5c | [
"Apache-2.0"
] | null | null | null | pyocd/target/builtin/target_PIC32CXMTG.py | CristianGuemes/pyOCD | 1307acd7aa7f4dd4472af4d11eadea90f6c4fe5c | [
"Apache-2.0"
] | null | null | null | pyocd/target/builtin/target_PIC32CXMTG.py | CristianGuemes/pyOCD | 1307acd7aa7f4dd4472af4d11eadea90f6c4fe5c | [
"Apache-2.0"
] | null | null | null | # pyOCD debugger
# Copyright (c) 2020 Arm Limited
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ...flash.flash import Flash
from ...coresight.coresight_target import CoreSightTarget
from ...core.memory_map import (FlashRegion, RamRegion, MemoryMap)
FLASH_ALGO = {
'load_address' : 0x20000000,
# Flash algorithm as a hex string
'instructions': [
0xE00ABE00, 0x062D780D, 0x24084068, 0xD3000040, 0x1E644058, 0x1C49D1FA, 0x2A001E52, 0x4770D1F2,
0xbf042800, 0x47702001, 0x60012100, 0x60816041, 0x490b60c1, 0x604a4a09, 0x7280f04f, 0x60036813,
0x60436853, 0x60836893, 0x60c268d2, 0x60484805, 0xf0106888, 0xd0fb0f01, 0x47702000, 0x5a00000e,
0x460e0000, 0x5a00000f, 0xf0416801, 0x60010101, 0x68014770, 0x0101f021, 0x47706001, 0xf44f6802,
0xf4226370, 0xea036270, 0x43112101, 0x47706001, 0x47706880, 0x477068c0, 0xf1b1b510, 0xd3027f80,
0x7f90f1b1, 0xf3afd901, 0x28008000, 0xf8dfbf1c, 0xf8c0c094, 0xf1a1c000, 0x2a007080, 0x0a41bf1c,
0x2b008011, 0xf3c0bf1c, 0x80180008, 0xb510bd10, 0xf5b1b128, 0xbf985f80, 0x7f00f5b2, 0xf3afd301,
0xeb028000, 0xf1002041, 0x2b007080, 0x6018bf18, 0xb2c9bd10, 0x4604b510, 0x407ff06f, 0x2002ea00,
0x2b004308, 0xf040bf04, 0x606040b4, 0xf04fd012, 0x688a7100, 0x4449490c, 0xf1a4600a, 0xf5b1418c,
0xd1032160, 0x41b4f040, 0x47902000, 0xf00068a0, 0xbd10000e, 0xf01068a0, 0xd0fb0f01, 0x000ef000,
0x0000bd10, 0x460e0000, 0x00000004, 0x460cb510, 0xf7ff48fe, 0x48feff8e, 0x60044448, 0xb500bd10,
0x4601b083, 0xaa01ab02, 0xf7ff4668, 0x48f8ff95, 0x44482200, 0x68032105, 0xf7ff9800, 0xb003ffba,
0xb500bd00, 0x4601b083, 0xaa01ab02, 0xf7ff4668, 0x48efff83, 0x2004f8bd, 0x21114448, 0x98006803,
0xffa7f7ff, 0xbd00b003, 0x4df0e92d, 0xb0844fe9, 0x000e4614, 0xd006444f, 0x7f80f1b0, 0x1901d303,
0x7f90f1b1, 0xf3afd901, 0x46018000, 0xaa02ab01, 0xf7ffa803, 0x2c00ff61, 0xf8dfbf1f, 0x44c88378,
0xa36cf8df, 0xd04f44ca, 0x0004f8bd, 0x7500f5c0, 0xbf2842a5, 0x466b4625, 0xf8bd2200, 0x98031008,
0xff65f7ff, 0xf8bd9800, 0xf0402004, 0xf5c24120, 0x1b407000, 0xfb80fa1f, 0x46389100, 0xf9fef000,
0x0004f8bd, 0x4438462a, 0xf0004631, 0xf8bdf9f7, 0x99000004, 0x4401465a, 0x44294438, 0xf0004428,
0x9900f9ed, 0xf8582000, 0x1c402020, 0x2b04f841, 0xd3f82880, 0x2008f8bd, 0xf8da2101, 0x98033000,
0xff47f7ff, 0xbf1c2800, 0xe8bdb004, 0x442e8df0, 0x0008f8bd, 0xf1001b64, 0xf8ad0001, 0xf04f0008,
0xf8ad0000, 0xd1af0004, 0x2000b004, 0x8df0e8bd, 0xb086b570, 0x4616461c, 0xab032510, 0xf000aa02,
0x2e00f933, 0x9802bf1c, 0x2c006030, 0x9803bf1c, 0x23006020, 0xa804466a, 0xf7ff9902, 0x2300feed,
0x4618aa01, 0xf7ff9903, 0xf8bdfee7, 0xf8bd0000, 0x42881004, 0x4c9ebf3c, 0xd213444c, 0x2108b282,
0x98046823, 0xff05f7ff, 0xbf1c2800, 0xbd70b006, 0x0000f8bd, 0x1004f8bd, 0xb2804428, 0x0000f8ad,
0xd3eb4288, 0x2000b006, 0xb570bd70, 0x461cb086, 0x25104616, 0xaa02ab03, 0xf8f6f000, 0xbf1c2e00,
0x60309802, 0xbf1c2c00, 0x60209803, 0x466a2300, 0x9902a804, 0xfeb0f7ff, 0xaa012300, 0x99034618,
0xfeaaf7ff, 0x0000f8bd, 0x1004f8bd, 0xbf3c4288, 0x444c4c7f, 0xb282d213, 0x68232109, 0xf7ff9804,
0x2800fec8, 0xb006bf1c, 0xf8bdbd70, 0xf8bd0000, 0x44281004, 0xf8adb280, 0x42880000, 0xb006d3eb,
0xbd702000, 0x41f0e92d, 0xb08c2700, 0x4281460c, 0xf1b0bf28, 0xd3027f80, 0x7f90f1b4, 0xf3afd901,
0x46018000, 0xaa092300, 0xf7ffa808, 0x2300fe75, 0x4621aa0a, 0xf7ff4618, 0xf8bdfe6f, 0x20101024,
0xf1f0fbb1, 0xf8bdb2cc, 0xfbb11028, 0xb2d6f2f0, 0xf2f0fbb1, 0x1012fb00, 0xbf1c2800, 0xb2c61c70,
0x2200485b, 0x210a4448, 0x98086803, 0xfe81f7ff, 0x46e82500, 0xf7ff9808, 0xf848fe4d, 0x1c6d0025,
0xd3f72d08, 0xbf3842b4, 0xd20d2201, 0xf0040961, 0xf858001f, 0xfa021021, 0x4201f000, 0x1c7fbf18,
0xb2c41c60, 0xd3f142b4, 0x4638b00c, 0x81f0e8bd, 0x4604b570, 0xd3012809, 0x8000f3af, 0x4d434844,
0x22004448, 0x210d6803, 0xf7ff4628, 0x4628fe52, 0xfe20f7ff, 0x40a12101, 0xbf184008, 0xbd702001,
0xb5702809, 0xd3014604, 0x8000f3af, 0x4e374d38, 0x2200444d, 0x4630210d, 0xf7ff682b, 0x4630fe3a,
0xfe08f7ff, 0x40a12101, 0xbf1c4201, 0xbd702000, 0x46304622, 0xe8bd682b, 0x210b4070, 0xbe29f7ff,
0x4604b570, 0xd3012809, 0x8000f3af, 0x4e274d28, 0x2200444d, 0x4630210d, 0xf7ff682b, 0x4630fe1a,
0xfde8f7ff, 0x40a12101, 0xbf084008, 0x4622bd70, 0x682b4630, 0x4070e8bd, 0xf7ff210c, 0x4a1bbe0a,
0xf0116891, 0xd0fb0f01, 0x41b4f04f, 0x68916051, 0x0f01f011, 0x2100d0fb, 0xf84068d3, 0x1c493021,
0xd3f92904, 0x47702000, 0xb084b570, 0x461e460c, 0x46014615, 0xaa022300, 0xf7ff4668, 0x2300fdbd,
0x4621aa03, 0xf7ffa801, 0xf8bdfdb7, 0x20101008, 0xf2f0fbb1, 0x1212fb00, 0xf8bd1a89, 0xb289400c,
0xf2f0fbb4, 0x4212fb00, 0xe005b172, 0x460e0000, 0x00000008, 0x00000010, 0xf2f0fbb4, 0x4012fb00,
0x0010f1c0, 0xb2844420, 0x2200462b, 0xf7ff9800, 0x4633fdae, 0x46212200, 0xf7ff9801, 0xb004fda8,
0x0000bd70, 0x4604b510, 0x46082100, 0xfddef7ff, 0xf7ff2005, 0x2006ff65, 0xff62f7ff, 0xbf1c2800,
0xbd102001, 0x4448481b, 0x20006004, 0x2000bd10, 0x48184770, 0x68004448, 0xbdd1f7ff, 0x4604b510,
0xf0204915, 0x2300407e, 0x461a4401, 0xfe9df7ff, 0x4620b928, 0xfdd5f7ff, 0xbf082800, 0x2001bd10,
0x4613bd10, 0x460ab510, 0x407ef020, 0xf7ff4619, 0x2800fddb, 0x2001bf18, 0x4613bd10, 0x460db570,
0x4604460a, 0xf0004619, 0x2800f81b, 0x1960bf0c, 0xbd704620, 0x0000000c, 0x0001ffff, 0x0301ea40,
0xd003079b, 0xc908e009, 0xc0081f12, 0xd2fa2a04, 0xf811e003, 0xf8003b01, 0x1e523b01, 0x4770d2f9,
0x4604b530, 0x46032000, 0x1c5be000, 0xd2034293, 0x5ccd5ce0, 0xd0f81b40, 0x0000bd30, 0x00000000,
0x00000000, 0x00000001, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000
],
# Relative function addresses
'pc_init': 0x20000585,
'pc_unInit': 0x200005af,
'pc_program_page': 0x200005e3,
'pc_erase_sector': 0x200005bd,
'pc_eraseAll': 0x200005b3,
'static_base' : 0x20000000 + 0x00000020 + 0x0000063c,
'begin_stack' : 0x20000900,
'begin_data' : 0x20000000 + 0x1000,
'page_size' : 0x200,
'analyzer_supported' : True,
'analyzer_address' : 0x20002000,
'page_buffers' : [0x20001000, 0x20001200], # Enable double buffering
'min_program_length' : 0x200,
# Flash information
'flash_start': 0x1000000,
'flash_size': 0x200000,
'sector_sizes': (
(0x0, 0x20000),
(0x20000, 0x20000),
(0x40000, 0x20000),
(0x60000, 0x20000),
(0x80000, 0x20000),
(0xa0000, 0x20000),
(0xc0000, 0x20000),
(0xe0000, 0x20000),
(0x100000, 0x20000),
(0x120000, 0x20000),
(0x140000, 0x20000),
(0x160000, 0x20000),
(0x180000, 0x20000),
(0x1a0000, 0x20000),
(0x1c0000, 0x20000),
(0x1e0000, 0x20000),
)
}
class PIC32CXMTG(CoreSightTarget):
VENDOR = "Microchip"
MEMORY_MAP = MemoryMap(
FlashRegion( start=0x01000000, length=0x200000, blocksize=0x1000, is_boot_memory=True,
algo=FLASH_ALGO),
RamRegion( start=0x20000000, length=0x80000)
)
def __init__(self, session):
super(PIC32CXMTG, self).__init__(session, self.MEMORY_MAP)
| 61.824324 | 112 | 0.762842 | 802 | 9,150 | 8.658354 | 0.660848 | 0.371544 | 0.548675 | 0.725806 | 0.185772 | 0.185772 | 0.185772 | 0.185772 | 0.185772 | 0.185772 | 0 | 0.553925 | 0.15082 | 9,150 | 147 | 113 | 62.244898 | 0.339768 | 0.077049 | 0 | 0.127119 | 0 | 0 | 0.027062 | 0 | 0 | 0 | 0.692226 | 0 | 0 | 1 | 0.008475 | false | 0 | 0.025424 | 0 | 0.059322 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a8682892037f6ea42cdf9c04685d946a1a003853 | 34 | py | Python | src/bullets/data_source/__init__.py | CaballeroM/BullETS | 3071387128a51fd85cdfa807c9e6c0b8e3bfc0db | [
"Apache-2.0"
] | 6 | 2021-05-12T23:51:47.000Z | 2022-02-04T02:39:41.000Z | src/bullets/data_source/__init__.py | CaballeroM/BullETS | 3071387128a51fd85cdfa807c9e6c0b8e3bfc0db | [
"Apache-2.0"
] | 73 | 2021-05-12T02:12:02.000Z | 2022-03-31T22:26:29.000Z | src/bullets/data_source/__init__.py | CaballeroM/BullETS | 3071387128a51fd85cdfa807c9e6c0b8e3bfc0db | [
"Apache-2.0"
] | 7 | 2021-05-12T03:34:19.000Z | 2021-11-21T09:15:07.000Z | from bullets.data_source import *
| 17 | 33 | 0.823529 | 5 | 34 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a869403091a13d751561b8254178b44e6771944c | 8,874 | py | Python | test/test_snyk_access.py | mergermarket/snyk-access | 663caed22a58bf113e24236f75876b2a19a1a846 | [
"MIT"
] | null | null | null | test/test_snyk_access.py | mergermarket/snyk-access | 663caed22a58bf113e24236f75876b2a19a1a846 | [
"MIT"
] | 91 | 2019-08-21T09:46:01.000Z | 2022-03-25T07:17:28.000Z | test/test_snyk_access.py | mergermarket/snyk-access | 663caed22a58bf113e24236f75876b2a19a1a846 | [
"MIT"
] | null | null | null | import json
import unittest
from unittest.mock import MagicMock, patch, call, mock_open
import snyk
import snyk_access
class TestFindOrg(unittest.TestCase):
def test_find_org(self):
snyk_client = MagicMock(spec=snyk.Snyk)
http_client = MagicMock(spec=snyk.HTTPClient)
group = MagicMock(spec=snyk.Group)
orgs = [
snyk.Org(http_client, f'team-{i}', str(i), group)
for i in range(10)
]
org_name = 'myorg'
orgs.append(snyk.Org(http_client, org_name, '42', group))
snyk_client.orgs.return_value = orgs
org = snyk_access.find_org(snyk_client, org_name)
assert org.id == '42'
class TestFindRepos(unittest.TestCase):
def test_snyk_app_in_one_block(self):
data = [
{
'teams': {
'foo-team': 'pull',
},
'apps': {
'snyk': [
'project-a',
'project-b',
],
},
'repos': [
'project-a',
'project-b',
'project-c',
'project-d',
],
},
{
'teams': {
'bar-team': 'push',
},
'repos': [
'project-e',
],
},
]
repos = snyk_access.repos_to_import(data)
assert sorted(repos) == ['project-a', 'project-b']
def test_snyk_app_enabled_in_two_blocks(self):
data = [
{
'teams': {
'foo-team': 'pull',
},
'apps': {
'snyk': True,
},
'repos': [
'project-a',
'project-b',
'project-c',
'project-d',
],
},
{
'teams': {
'bar-team': 'push',
},
'apps': {
'snyk': True,
},
'repos': [
'project-e',
],
},
]
repos = snyk_access.repos_to_import(data)
assert sorted(repos) == [
'project-a',
'project-b',
'project-c',
'project-d',
'project-e',
]
def test_snyk_app_in_two_blocks_enabled_and_listed(self):
data = [
{
'teams': {
'foo-team': 'pull',
},
'apps': {
'snyk': [
'project-a',
'project-b',
],
},
'repos': [
'project-a',
'project-b',
'project-c',
'project-d',
],
},
{
'teams': {
'bar-team': 'push',
},
'apps': {
'snyk': True,
},
'repos': [
'project-e',
],
},
]
repos = snyk_access.repos_to_import(data)
assert sorted(repos) == ['project-a', 'project-b', 'project-e']
def test_snyk_app_in_two_blocks(self):
data = [
{
'teams': {
'foo-team': 'pull',
},
'apps': {
'snyk': [
'project-a',
'project-b',
],
},
'repos': [
'project-a',
'project-b',
'project-c',
'project-d',
],
},
{
'teams': {
'bar-team': 'push',
},
'apps': {
'snyk': [
'project-e',
],
},
'repos': [
'project-e',
],
},
]
repos = snyk_access.repos_to_import(data)
assert sorted(repos) == ['project-a', 'project-b', 'project-e']
class TestSnykAccess(unittest.TestCase):
@patch('snyk_access.Snyk')
@patch.dict('snyk_access.os.environ', {'SNYK_TOKEN': 'token'})
def test_project_import(self, Snyk):
snyk_client = MagicMock(spec=snyk.Snyk)
http_client = MagicMock(spec=snyk.HTTPClient)
group = MagicMock(spec=snyk.Group)
orgs = [
snyk.Org(http_client, f'team-{i}', str(i), group)
for i in range(10)
]
org_name = 'myorg'
org = MagicMock(spec=snyk.Org)
org.client = http_client
org.name = org_name
org.id = '42'
org.group = group
orgs.append(org)
snyk_client.orgs.return_value = orgs
Snyk.return_value = snyk_client
data = [
{
'teams': {
'foo-team': 'pull',
},
'apps': {
'snyk': [
'project-a',
'project-b',
],
},
'repos': [
'project-a',
'project-b',
'project-c',
'project-d',
],
},
{
'teams': {
'bar-team': 'push',
},
'repos': [
'project-e',
],
},
]
with patch(
'snyk_access.open',
mock_open(read_data=json.dumps(data)),
) as open_:
snyk_access.main('owner', 'myorg', 'access.json')
open_.assert_called_once_with('access.json')
assert org.import_github_project.call_count == 2
org.import_github_project.assert_has_calls([
call('owner', 'project-a'),
call('owner', 'project-b'),
])
@patch('snyk_access.Snyk')
@patch.dict('snyk_access.os.environ', {'SNYK_TOKEN': 'token'})
def test_remove_projects_not_listed(self, Snyk):
snyk_client = MagicMock(spec=snyk.Snyk)
http_client = MagicMock(spec=snyk.HTTPClient)
group = MagicMock(spec=snyk.Group)
orgs = [
snyk.Org(http_client, f'team-{i}', str(i), group)
for i in range(10)
]
org_name = 'myorg'
org = MagicMock(spec=snyk.Org)
org.client = http_client
org.name = org_name
org.id = '42'
org.group = group
org.projects = [
snyk.Project(
http_client,
{
'id': '1',
'name': 'owner/project-c:requirements.txt',
'origin': 'github',
},
org,
),
snyk.Project(
http_client,
{
'id': '2',
'name': 'owner/project-d:requirements.txt',
'origin': 'github',
},
org,
),
snyk.Project(
http_client,
{
'id': '3',
'name': 'project-x',
'origin': 'cli',
},
org,
),
]
orgs.append(org)
snyk_client.orgs.return_value = orgs
Snyk.return_value = snyk_client
data = [
{
'teams': {
'foo-team': 'pull',
},
'apps': {
'snyk': [
'project-a',
'project-b',
],
},
'repos': [
'project-a',
'project-b',
'project-c',
'project-d',
],
},
{
'teams': {
'bar-team': 'push',
},
'repos': [
'project-e',
],
},
]
with patch('snyk_access.open', mock_open(read_data=json.dumps(data))):
snyk_access.main('owner', 'myorg', 'access.json')
assert http_client.delete.call_count == 2
http_client.delete.assert_has_calls([
call('org/42/project/1'),
call('org/42/project/2'),
])
| 26.890909 | 78 | 0.353167 | 686 | 8,874 | 4.405248 | 0.139942 | 0.042356 | 0.074454 | 0.079418 | 0.764064 | 0.749835 | 0.740238 | 0.717737 | 0.717737 | 0.699206 | 0 | 0.0059 | 0.522538 | 8,874 | 329 | 79 | 26.972644 | 0.70734 | 0 | 0 | 0.646259 | 0 | 0 | 0.140523 | 0.01217 | 0 | 0 | 0 | 0 | 0.034014 | 1 | 0.02381 | false | 0 | 0.040816 | 0 | 0.07483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a872f7d837249591890121d5033e96952f4dd3e1 | 28 | py | Python | taskqueue_cli/__init__.py | supersergiy/python-task-queue | 5438f454f7753cedb60893089b9af44908022f88 | [
"BSD-3-Clause"
] | 18 | 2019-01-25T14:54:44.000Z | 2022-02-22T19:58:41.000Z | taskqueue_cli/__init__.py | supersergiy/python-task-queue | 5438f454f7753cedb60893089b9af44908022f88 | [
"BSD-3-Clause"
] | 21 | 2018-10-16T14:09:10.000Z | 2022-02-11T18:35:45.000Z | taskqueue_cli/__init__.py | supersergiy/python-task-queue | 5438f454f7753cedb60893089b9af44908022f88 | [
"BSD-3-Clause"
] | 9 | 2019-01-25T21:49:21.000Z | 2021-12-21T09:52:39.000Z | from .taskqueue_cli import * | 28 | 28 | 0.821429 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8a83564f564c964df1891d39b91eb8141103275 | 151 | py | Python | day2-task2.py | anoop123gupta/Amar_exercise | 43fe2897004b0a4f6f2bcf5a4ab2464d04c075e1 | [
"MIT"
] | 6 | 2019-03-03T06:07:55.000Z | 2019-03-03T06:51:09.000Z | day2-task2.py | anoop123gupta/Amar_exercise | 43fe2897004b0a4f6f2bcf5a4ab2464d04c075e1 | [
"MIT"
] | null | null | null | day2-task2.py | anoop123gupta/Amar_exercise | 43fe2897004b0a4f6f2bcf5a4ab2464d04c075e1 | [
"MIT"
] | null | null | null | user=raw_input(" Enter String ")
a=user[-3:]
if user[-2:] =='ly':
print user
elif a!='ing':
print user+'ing'
elif a=='ing':
print user+'ly' | 18.875 | 32 | 0.576159 | 26 | 151 | 3.307692 | 0.5 | 0.313953 | 0.186047 | 0.302326 | 0.395349 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 0.192053 | 151 | 8 | 33 | 18.875 | 0.688525 | 0 | 0 | 0 | 0 | 0 | 0.177632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.375 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a8c168ff20506b0753ded03fcbce800bde82a218 | 379 | py | Python | Plugins/UnrealEnginePython/Binaries/Win64/Lib/site-packages/tensorflow/_api/v1/keras/datasets/reuters/__init__.py | JustinACoder/H22-GR3-UnrealAI | 361eb9ef1147f8a2991e5f98c4118cd823184adf | [
"MIT"
] | 6 | 2022-02-04T18:12:24.000Z | 2022-03-21T23:57:12.000Z | Lib/site-packages/tensorflow/_api/v1/keras/datasets/reuters/__init__.py | shfkdroal/Robot-Learning-in-Mixed-Adversarial-and-Collaborative-Settings | 1fa4cd6a566c8745f455fc3d2273208f21f88ced | [
"bzip2-1.0.6"
] | null | null | null | Lib/site-packages/tensorflow/_api/v1/keras/datasets/reuters/__init__.py | shfkdroal/Robot-Learning-in-Mixed-Adversarial-and-Collaborative-Settings | 1fa4cd6a566c8745f455fc3d2273208f21f88ced | [
"bzip2-1.0.6"
] | 1 | 2022-02-08T03:53:23.000Z | 2022-02-08T03:53:23.000Z | # This file is MACHINE GENERATED! Do not edit.
# Generated by: tensorflow/python/tools/api/generator/create_python_api.py script.
"""Reuters topic classification dataset.
"""
from __future__ import print_function
from tensorflow.python.keras.datasets.reuters import get_word_index
from tensorflow.python.keras.datasets.reuters import load_data
del print_function
| 29.153846 | 83 | 0.799472 | 51 | 379 | 5.72549 | 0.666667 | 0.164384 | 0.136986 | 0.171233 | 0.315068 | 0.315068 | 0.315068 | 0 | 0 | 0 | 0 | 0 | 0.129288 | 379 | 12 | 84 | 31.583333 | 0.884848 | 0.432718 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
763f61ad9a317443c8a1c5e65c3b74ad7823e432 | 3,316 | py | Python | tests/test_link.py | asavinov/lambdo | 202b530b9bc20658782ae7bc62c15062e9213383 | [
"MIT"
] | 287 | 2018-07-15T08:06:45.000Z | 2022-02-26T10:02:45.000Z | tests/test_link.py | asavinov/lambdo | 202b530b9bc20658782ae7bc62c15062e9213383 | [
"MIT"
] | null | null | null | tests/test_link.py | asavinov/lambdo | 202b530b9bc20658782ae7bc62c15062e9213383 | [
"MIT"
] | 16 | 2018-09-12T12:42:26.000Z | 2022-02-16T07:28:48.000Z | import unittest
from lambdo.Workflow import *
class LinkTestCase(unittest.TestCase):
def setUp(self):
pass
def test_one_key(self):
#
# One key to another table
#
wf_json = {
"id": "My workflow",
"tables": [
{
"id": "Table 1",
"columns": [
{
"id": "My Link",
"operation": "link",
"keys": ["A"],
"linked_table": "Table 2",
"linked_keys": ["B"]
}
]
},
{
"id": "Table 2",
"operation": "noop",
"columns": [
]
}
]
}
wf = Workflow(wf_json)
# Main table
df = pd.DataFrame({'A': ['a', 'a', 'b', 'b']})
main_tb = wf.tables[0]
main_tb.data = df
# Secondary table (more data than used in the main table)
df = pd.DataFrame({'B': ['a', 'b', 'c'], 'C': [1, 2, 3]})
sec_tb = wf.tables[1]
sec_tb.data = df
wf.execute()
merged_tb = wf.tables[0]
self.assertEqual(len(merged_tb.data), 4) # Same number of rows
self.assertEqual(len(merged_tb.data.columns), 2)
link_column = main_tb.data['My Link']
self.assertEqual(link_column[0], 0)
self.assertEqual(link_column[1], 0)
self.assertEqual(link_column[2], 1)
self.assertEqual(link_column[3], 1)
def test_two_keys(self):
#
# One key to another table
#
wf_json = {
"id": "My workflow",
"tables": [
{
"id": "Table 1",
"columns": [
{
"id": "My Link",
"operation": "link",
"keys": ["A", "B"],
"linked_table": "Table 2",
"linked_keys": ["A", "B"]
}
]
},
{
"id": "Table 2",
"operation": "noop",
"columns": [
]
}
]
}
wf = Workflow(wf_json)
# Main table
df = pd.DataFrame({'A': ['a', 'b', 'b', 'a'], 'B': ['b', 'c', 'c', 'a']})
main_tb = wf.tables[0]
main_tb.data = df
# Secondary table (more data than used in the main table)
df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'c', 'c'], 'C': [1, 2, 3]})
sec_tb = wf.tables[1]
sec_tb.data = df
wf.execute()
merged_tb = wf.tables[0]
self.assertEqual(len(merged_tb.data), 4) # Same number of rows
self.assertEqual(len(merged_tb.data.columns), 3)
link_column = main_tb.data['My Link']
self.assertEqual(link_column[0], 0)
self.assertEqual(link_column[1], 1)
self.assertEqual(link_column[2], 1)
self.assertTrue(pd.isna(link_column[3]))
if __name__ == '__main__':
unittest.main()
| 27.404959 | 87 | 0.391134 | 331 | 3,316 | 3.779456 | 0.193353 | 0.131894 | 0.106315 | 0.139888 | 0.859313 | 0.840927 | 0.789768 | 0.740208 | 0.73701 | 0.73701 | 0 | 0.020892 | 0.465923 | 3,316 | 120 | 88 | 27.633333 | 0.685488 | 0.06725 | 0 | 0.505747 | 0 | 0 | 0.096429 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 1 | 0.034483 | false | 0.011494 | 0.022989 | 0 | 0.068966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7646269922217f0d7887740e675a5f01dbd3d3a1 | 358 | py | Python | nanome_postgnome/menus/__init__.py | nanome-ai/plugin-post-nome | 1b87bfd54b244c15126b40d502fd17f617a4bc29 | [
"MIT"
] | 1 | 2020-03-06T14:24:20.000Z | 2020-03-06T14:24:20.000Z | nanome_postgnome/menus/__init__.py | nanome-ai/plugin-post-nome | 1b87bfd54b244c15126b40d502fd17f617a4bc29 | [
"MIT"
] | 1 | 2021-02-22T17:56:11.000Z | 2021-02-22T17:56:11.000Z | nanome_postgnome/menus/__init__.py | nanome-ai/plugin-postgnome | 1b87bfd54b244c15126b40d502fd17f617a4bc29 | [
"MIT"
] | 1 | 2020-03-23T17:12:04.000Z | 2020-03-23T17:12:04.000Z | from .ResponseConfigurationMenu import ResponseConfigurationMenu
from .ResourceConfigurationMenu import ResourceConfigurationMenu
from .RequestConfigurationMenu import RequestConfigurationMenu
from .RequestsMenu import RequestsMenu
from .ResourcesMenu import ResourcesMenu
from .MakeRequestMenu import MakeRequestMenu
from .VariablesMenu import VariablesMenu | 51.142857 | 64 | 0.905028 | 28 | 358 | 11.571429 | 0.321429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075419 | 358 | 7 | 65 | 51.142857 | 0.978852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7655f55f76ce434f4c0f380c7a9800dd87922290 | 113 | py | Python | mmdet/version.py | floatingstarZ/loc_cls_exp | 8b971db671753d3571914aaa760cc13ac47018e8 | [
"Apache-2.0"
] | null | null | null | mmdet/version.py | floatingstarZ/loc_cls_exp | 8b971db671753d3571914aaa760cc13ac47018e8 | [
"Apache-2.0"
] | null | null | null | mmdet/version.py | floatingstarZ/loc_cls_exp | 8b971db671753d3571914aaa760cc13ac47018e8 | [
"Apache-2.0"
] | null | null | null | # GENERATED VERSION FILE
# TIME: Fri Feb 21 14:03:52 2020
__version__ = '1.0.0+unknown'
short_version = '1.0.0'
| 18.833333 | 32 | 0.699115 | 21 | 113 | 3.52381 | 0.714286 | 0.216216 | 0.243243 | 0.27027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189474 | 0.159292 | 113 | 5 | 33 | 22.6 | 0.589474 | 0.469027 | 0 | 0 | 1 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
765b44728905893edf76bb510828a7c7e6abf093 | 54 | py | Python | elliot/evaluation/metrics/accuracy/f1/__init__.py | gategill/elliot | 113763ba6d595976e14ead2e3d460d9705cd882e | [
"Apache-2.0"
] | 175 | 2021-03-04T15:46:25.000Z | 2022-03-31T05:56:58.000Z | elliot/evaluation/metrics/accuracy/f1/__init__.py | gategill/elliot | 113763ba6d595976e14ead2e3d460d9705cd882e | [
"Apache-2.0"
] | 15 | 2021-03-06T17:53:56.000Z | 2022-03-24T17:02:07.000Z | elliot/evaluation/metrics/accuracy/f1/__init__.py | gategill/elliot | 113763ba6d595976e14ead2e3d460d9705cd882e | [
"Apache-2.0"
] | 39 | 2021-03-04T15:46:26.000Z | 2022-03-09T15:37:12.000Z | from .f1 import F1
from .extended_f1 import ExtendedF1 | 27 | 35 | 0.833333 | 9 | 54 | 4.888889 | 0.555556 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 0.12963 | 54 | 2 | 35 | 27 | 0.851064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
767523df6057e6d3e20f29143d4f71936431a06a | 90 | py | Python | stock_me/commands/suggest.py | Vaibhav/stock-me | 1d5398ceb571afbf8dca2b07aa0ea60c8a185f4b | [
"MIT"
] | 1 | 2018-07-05T05:45:05.000Z | 2018-07-05T05:45:05.000Z | stock_me/commands/suggest.py | Vaibhav/stock-me | 1d5398ceb571afbf8dca2b07aa0ea60c8a185f4b | [
"MIT"
] | null | null | null | stock_me/commands/suggest.py | Vaibhav/stock-me | 1d5398ceb571afbf8dca2b07aa0ea60c8a185f4b | [
"MIT"
] | 1 | 2018-09-18T09:44:07.000Z | 2018-09-18T09:44:07.000Z | #!/usr/bin/env python
import ystockquote
def run(stocks):
# suggest stocks
pass
| 11.25 | 21 | 0.677778 | 12 | 90 | 5.083333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 90 | 7 | 22 | 12.857143 | 0.871429 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
76994d9e63e1060b8503aeba00d05b708a102a1c | 10,149 | py | Python | tests/test_goes.py | clintecker/supercell | 6baa174c8fca1a6f805d3853bdcf9cd5148b53ca | [
"MIT"
] | null | null | null | tests/test_goes.py | clintecker/supercell | 6baa174c8fca1a6f805d3853bdcf9cd5148b53ca | [
"MIT"
] | 1 | 2020-09-06T19:45:25.000Z | 2020-09-06T19:45:25.000Z | tests/test_goes.py | clintecker/supercell | 6baa174c8fca1a6f805d3853bdcf9cd5148b53ca | [
"MIT"
] | null | null | null | """Supercell Package Tests"""
# Standard Library
import datetime
import tempfile
import unittest
# Third Party Code
import responses
# Supercell Code
from supercell import goes
class SupercellGoesTestSuite(unittest.TestCase):
def test_get_image_data(self):
d = goes.get_image_data(
"https:/doesnt.matter.com/GOES16/ABI/SECTOR/sec/Band/20201011012_GOES16-ABI-sec-band-600x600.jpg"
)
self.assertEqual(
{
"t_year": 2020,
"t_daynum": 101,
"t_hour": "10",
"t_minute": "12",
"i_width": 600,
"i_height": 600,
"sat_num": 16,
"sector": "sec",
"band": "Band",
},
d,
)
def test_get_image_data_bad_url(self):
d = goes.get_image_data("https:/doesnt.matter.com/something_different.jpg")
self.assertEqual({}, d)
@responses.activate
def test_does_exist(self):
responses.add(
method=responses.HEAD, url="https://example.com/bloop.jpg", status=200
)
responses.add(
method=responses.HEAD, url="https://example.com/nope.png", status=404
)
self.assertTrue(goes.does_exist("https://example.com/bloop.jpg"))
self.assertFalse(goes.does_exist("https://example.com/nope.png"))
def test_image_url_from_data(self):
d = goes.get_image_data(
"https:/doesnt.matter.com/GOES16/ABI/SECTOR/sec/Band/20201011012_GOES16-ABI-sec-band-600x600.jpg"
)
d["i_width"] = 2100
d["i_height"] = 2100
self.assertEqual(
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/sec/Band/20201011012_GOES16-ABI-sec-Band-2100x2100"
".jpg",
goes.image_url_from_data(d),
)
def test_in_cache(self):
with tempfile.NamedTemporaryFile(mode="r") as f:
f_parts = f.name.split("/")
directory = "/".join(f_parts[:-1])
key = f_parts[-1]
full_key = goes.in_cache(key=key, directory=directory)
self.assertEqual(f.name, str(full_key))
with self.assertRaises(ValueError):
goes.in_cache(key="the_key.txt", directory="/tmp")
with self.assertRaises(ValueError):
goes.in_cache(key="the_key.txt", directory="/tmp/bloop")
def test_store_in_cache(self):
with tempfile.TemporaryDirectory() as d:
goes.store_in_cache(key="bloop.txt", directory=d, data=b"1234567890")
goes.in_cache(key="bloop.txt", directory=d)
# After context, file is deleted, so no longer is in cache
with self.assertRaises(ValueError):
goes.in_cache(key="bloop.txt", directory=d)
@responses.activate
def test_full_day_images(self):
html = """<html>
<head><title>Index of /cdn02/GOES/data/GOES16/ABI/SECTOR/nr/GEOCOLOR/</title></head>
<body>
<h1>Index of /cdn02/GOES/data/GOES16/ABI/SECTOR/nr/GEOCOLOR/</h1><hr><pre><a href="../">../</a>
<a href="1200x1200.jpg">1200x1200.jpg</a> 01-Jun-2020 20:03 1227726
<a href="20201222001_GOES16-ABI-nr-GEOCOLOR-1200x1200.jpg">20201222001_GOES16-ABI-nr-GEOCOLOR-1200x1200.jpg</a>
28-May-2020 20:07 1313965
<a href="20201222001_GOES16-ABI-nr-GEOCOLOR-2400x2400.jpg">20201222001_GOES16-ABI-nr-GEOCOLOR-2400x2400.jpg</a>
28-May-2020 20:07 4457998
<a href="20201222001_GOES16-ABI-nr-GEOCOLOR-300x300.jpg">20201222001_GOES16-ABI-nr-GEOCOLOR-300x300.jpg</a>
28-May-2020 20:07 105850
<a href="20201222001_GOES16-ABI-nr-GEOCOLOR-600x600.jpg">20201222001_GOES16-ABI-nr-GEOCOLOR-600x600.jpg</a>
28-May-2020 20:07 378507
<a href="20201222006_GOES16-ABI-nr-GEOCOLOR-1200x1200.jpg">20201222006_GOES16-ABI-nr-GEOCOLOR-1200x1200.jpg</a>
28-May-2020 20:11 1316732
<a href="20201222006_GOES16-ABI-nr-GEOCOLOR-2400x2400.jpg">20201222006_GOES16-ABI-nr-GEOCOLOR-2400x2400.jpg</a>
28-May-2020 20:11 4470444
<a href="20201222006_GOES16-ABI-nr-GEOCOLOR-300x300.jpg">20201222006_GOES16-ABI-nr-GEOCOLOR-300x300.jpg</a>
28-May-2020 20:11 106199
<a href="20201222006_GOES16-ABI-nr-GEOCOLOR-600x600.jpg">20201222006_GOES16-ABI-nr-GEOCOLOR-600x600.jpg</a>
28-May-2020 20:11 379714
<a href="20201222011_GOES16-ABI-nr-GEOCOLOR-1200x1200.jpg">20201222011_GOES16-ABI-nr-GEOCOLOR-1200x1200.jpg</a>
28-May-2020 20:16 1320366
<a href="2400x2400.jpg">2400x2400.jpg</a> 01-Jun-2020 20:03 4124628
<a href="600x600.jpg">600x600.jpg</a> 01-Jun-2020 20:03 361617
<a href="GOES16-NR-GEOCOLOR-600x600.gif">GOES16-NR-GEOCOLOR-600x600.gif</a> 01-Jun-2020 20:00
13666927
<a href="latest.jpg">latest.jpg</a> 01-Jun-2020 20:03 4124628
<a href="thumbnail.jpg">thumbnail.jpg</a> 01-Jun-2020 20:03 103347
</pre><hr></body>
</html>"""
first_day_html = """<html>
<head><title>Index of /cdn02/GOES/data/GOES16/ABI/SECTOR/yy/GEOCOLOR/</title></head>
<body>
<h1>Index of /cdn02/GOES/data/GOES16/ABI/SECTOR/yy/GEOCOLOR/</h1><hr><pre><a href="../">../</a>
<a href="1200x1200.jpg">1200x1200.jpg</a> 01-Jun-2020 20:03 1227726
<a href="202012001_GOES16-ABI-yy-GEOCOLOR-1200x1200.jpg">20200012001_GOES16-ABI-yy-GEOCOLOR-1200x1200.jpg</a>
28-May-2020 20:07 1313965
<a href="202012001_GOES16-ABI-yy-GEOCOLOR-2400x2400.jpg">20200012001_GOES16-ABI-yy-GEOCOLOR-2400x2400.jpg</a>
28-May-2020 20:07 4457998
<a href="202012001_GOES16-ABI-yy-GEOCOLOR-300x300.jpg">20200012001_GOES16-ABI-yy-GEOCOLOR-300x300.jpg</a>
28-May-2020 20:07 105850
<a href="20193662001_GOES16-ABI-yy-GEOCOLOR-600x600.jpg">20200012001_GOES16-ABI-yy-GEOCOLOR-600x600.jpg</a>
28-May-2020 20:07 378507
<a href="20193662306_GOES16-ABI-yy-GEOCOLOR-1200x1200.jpg">20200012006_GOES16-ABI-yy-GEOCOLOR-1200x1200.jpg</a>
28-May-2020 20:11 1316732
<a href="20193662306_GOES16-ABI-yy-GEOCOLOR-2400x2400.jpg">20200012006_GOES16-ABI-yy-GEOCOLOR-2400x2400.jpg</a>
28-May-2020 20:11 4470444
<a href="20193662306_GOES16-ABI-yy-GEOCOLOR-300x300.jpg">20200012006_GOES16-ABI-yy-GEOCOLOR-300x300.jpg</a>
28-May-2020 20:11 106199
<a href="20193662006_GOES16-ABI-yy-GEOCOLOR-600x600.jpg">20200012006_GOES16-ABI-yy-GEOCOLOR-600x600.jpg</a>
28-May-2020 20:11 379714
<a href="20193662311_GOES16-ABI-yy-GEOCOLOR-1200x1200.jpg">20200012011_GOES16-ABI-yy-GEOCOLOR-1200x1200.jpg</a>
28-May-2020 20:16 1320366
<a href="2400x2400.jpg">2400x2400.jpg</a> 01-Jun-2020 20:03 4124628
<a href="600x600.jpg">600x600.jpg</a> 01-Jun-2020 20:03 361617
<a href="GOES16-yy-GEOCOLOR-600x600.gif">GOES16-yy-GEOCOLOR-600x600.gif</a> 01-Jun-2020 20:00
13666927
<a href="latest.jpg">latest.jpg</a> 01-Jun-2020 20:03 4124628
<a href="thumbnail.jpg">thumbnail.jpg</a> 01-Jun-2020 20:03 103347
</pre><hr></body>
</html>"""
responses.add(
responses.GET,
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/nr/GEOCOLOR/",
body=html,
status=200,
)
responses.add(
responses.GET,
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/xx/GEOCOLOR/",
body=None,
status=500,
)
responses.add(
responses.GET,
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/yy/GEOCOLOR/",
body=first_day_html,
)
urls = goes.full_day_images(
sat="G16",
sector="nr",
band="GEOCOLOR",
size=1200,
anchor_datetime=datetime.datetime(2020, 5, 1, 20, 30, 21),
)
self.assertEqual(
[
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/nr/GEOCOLOR/20201222001_GOES16-ABI-nr-GEOCOLOR"
"-1200x1200.jpg",
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/nr/GEOCOLOR/20201222006_GOES16-ABI-nr-GEOCOLOR"
"-1200x1200.jpg",
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/nr/GEOCOLOR/20201222011_GOES16-ABI-nr-GEOCOLOR"
"-1200x1200.jpg",
],
urls,
)
self.assertEqual(
[],
goes.full_day_images(
sat="G16",
sector="xx",
band="GEOCOLOR",
size=1200,
anchor_datetime=datetime.datetime(2020, 5, 1, 20, 30, 21),
),
)
self.maxDiff = 1024
u = goes.full_day_images(
sat="G16",
sector="yy",
band="GEOCOLOR",
size=1200,
anchor_datetime=datetime.datetime(2020, 1, 1, 20, 30, 21),
)
self.assertEqual(
[
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/yy/GEOCOLOR/20193662306_GOES16-ABI-yy-"
"GEOCOLOR-1200x1200.jpg",
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/yy/GEOCOLOR/20193662311_GOES16-ABI-yy-"
"GEOCOLOR-1200x1200.jpg",
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/yy/GEOCOLOR/20200012001_GOES16-ABI-yy-"
"GEOCOLOR-1200x1200.jpg",
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/yy/GEOCOLOR/20200012006_GOES16-ABI-yy-"
"GEOCOLOR-1200x1200.jpg",
"https://cdn.star.nesdis.noaa.gov/GOES16/ABI/SECTOR/yy/GEOCOLOR/20200012011_GOES16-ABI-yy-"
"GEOCOLOR-1200x1200.jpg",
],
u,
)
| 46.342466 | 116 | 0.589319 | 1,290 | 10,149 | 4.545736 | 0.143411 | 0.099761 | 0.043145 | 0.074523 | 0.85249 | 0.815825 | 0.77558 | 0.6059 | 0.58987 | 0.56395 | 0 | 0.232514 | 0.270273 | 10,149 | 218 | 117 | 46.555046 | 0.559276 | 0.012809 | 0 | 0.411168 | 0 | 0.243655 | 0.631868 | 0.3005 | 0 | 0 | 0 | 0 | 0.060914 | 1 | 0.035533 | false | 0 | 0.025381 | 0 | 0.06599 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
76a8d06dfe8b04f7b10b6442b036a35686106655 | 44 | py | Python | model/spacy/dffml_model_spacy/accuracy/__init__.py | agriyakhetarpal/dffml | f76f2ce94c3972634053377b00e7c16530f7f0a4 | [
"MIT"
] | 171 | 2019-03-08T19:02:06.000Z | 2022-03-29T16:17:23.000Z | model/spacy/dffml_model_spacy/accuracy/__init__.py | agriyakhetarpal/dffml | f76f2ce94c3972634053377b00e7c16530f7f0a4 | [
"MIT"
] | 1,158 | 2019-03-08T19:07:50.000Z | 2022-03-25T08:28:27.000Z | model/spacy/dffml_model_spacy/accuracy/__init__.py | agriyakhetarpal/dffml | f76f2ce94c3972634053377b00e7c16530f7f0a4 | [
"MIT"
] | 183 | 2019-03-10T02:40:56.000Z | 2022-03-27T18:51:26.000Z | from .sner_accuracy import SpacyNerAccuracy
| 22 | 43 | 0.886364 | 5 | 44 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
76c42e41718858c437f4328c4d3282165e9322ad | 130 | py | Python | chainer_openpose/datasets/fashion_landmark/__init__.py | iory/chainer-openpose | 020cf7c6946ecaa3e110d937a73565f4e76e3c42 | [
"MIT"
] | 3 | 2020-10-20T12:58:48.000Z | 2020-10-20T20:38:27.000Z | chainer_openpose/datasets/fashion_landmark/__init__.py | iory/chainer-openpose | 020cf7c6946ecaa3e110d937a73565f4e76e3c42 | [
"MIT"
] | null | null | null | chainer_openpose/datasets/fashion_landmark/__init__.py | iory/chainer-openpose | 020cf7c6946ecaa3e110d937a73565f4e76e3c42 | [
"MIT"
] | null | null | null | from chainer_openpose.datasets.fashion_landmark.fashion_landmark_keypoints_dataset import FashionLandmarkKeypointsDataset # NOQA
| 65 | 129 | 0.915385 | 13 | 130 | 8.769231 | 0.846154 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053846 | 130 | 1 | 130 | 130 | 0.926829 | 0.030769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
76c92e7cbcc1fcefeaa15ca4e61c97dd8dc57ca8 | 54,611 | py | Python | tests/test_physical_numpyfuncs.py | juliotux/astropop | cb8e5b7527fe04de82d1322615c78510bf0ae5b0 | [
"BSD-3-Clause"
] | 10 | 2018-05-30T19:18:58.000Z | 2021-07-27T08:15:51.000Z | tests/test_physical_numpyfuncs.py | sparc4-dev/astropop | 6d329f09e2274490dc15b2a41d0c5e43c37ee955 | [
"BSD-3-Clause"
] | 8 | 2021-06-16T15:52:50.000Z | 2022-03-30T21:27:38.000Z | tests/test_physical_numpyfuncs.py | juliotux/astropop | cb8e5b7527fe04de82d1322615c78510bf0ae5b0 | [
"BSD-3-Clause"
] | 9 | 2019-06-20T00:33:34.000Z | 2022-03-03T21:52:47.000Z | # Licensed under a 3-clause BSD style license - see LICENSE.rst
import numpy as np
import pytest
from astropop.math.physical import QFloat, UnitsError, units
from astropop.testing import assert_almost_equal, assert_equal
from packaging import version
# Testing qfloat compatibility with Numpy ufuncs and array functions.
class TestQFloatNumpyArrayFuncs:
"""Test numpy array functions for numpy comatibility."""
def test_qfloat_np_append(self):
qf1 = QFloat([1.0, 2.0, 3.0], [0.1, 0.2, 0.3], unit="m")
qf2 = QFloat([1.0], [0.1], unit="km")
qf3 = QFloat(1.0, 0.1, unit="km")
qf4 = QFloat(0, 0)
qf = np.append(qf1, qf1)
assert_equal(qf.nominal, [1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
assert_equal(qf.std_dev, [0.1, 0.2, 0.3, 0.1, 0.2, 0.3])
assert_equal(qf.unit, qf1.unit)
# This should work and convert the unit.
qf = np.append(qf1, qf2)
assert_equal(qf.nominal, [1.0, 2.0, 3.0, 1000.0])
assert_equal(qf.std_dev, [0.1, 0.2, 0.3, 100.0])
assert_equal(qf.unit, qf1.unit)
# Also this should work and convert the unit in the same way.
qf = np.append(qf1, qf3)
assert_equal(qf.nominal, [1.0, 2.0, 3.0, 1000.0])
assert_equal(qf.std_dev, [0.1, 0.2, 0.3, 100.0])
assert_equal(qf.unit, qf1.unit)
# This should fail due to unit
with pytest.raises(UnitsError):
qf = np.append(qf1, qf4)
# Testing with axis
qf1 = QFloat([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], "m",)
qf = np.append(qf1, QFloat([[8.0], [9.0]], [[0.8], [0.9]], "m"),
axis=1)
assert_equal(qf.nominal, [[1.0, 2.0, 3.0, 8.0], [4.0, 5.0, 6.0, 9.0]])
assert_equal(qf.std_dev, [[0.1, 0.2, 0.3, 0.8], [0.4, 0.5, 0.6, 0.9]])
qf = np.append(qf1, QFloat([[7.0, 8.0, 9.0]], [[0.7, 0.8, 0.9]], "m"),
axis=0)
assert_equal(qf.nominal, [[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]])
assert_equal(qf.std_dev, [[0.1, 0.2, 0.3],
[0.4, 0.5, 0.6],
[0.7, 0.8, 0.9]])
def test_qfloat_np_around(self):
# single case
qf = np.around(QFloat(1.02549, 0.135964))
assert_equal(qf.nominal, 1)
assert_equal(qf.std_dev, 0)
qf = np.around(QFloat(1.02549, 0.135964), decimals=2)
assert_equal(qf.nominal, 1.03)
assert_equal(qf.std_dev, 0.14)
# just check array too
qf = np.around(QFloat([1.03256, 2.108645], [0.01456, 0.594324]),
decimals=2)
assert_equal(qf.nominal, [1.03, 2.11])
assert_equal(qf.std_dev, [0.01, 0.59])
def test_qfloat_np_atleast_1d(self):
# This function is not implemented, so should raise
with pytest.raises(TypeError):
np.atleast_1d(QFloat([1.0, 2.0], [0.1, 0.2], "m"))
def test_qfloat_np_atleast_2d(self):
# This function is not implemented, so should raise
with pytest.raises(TypeError):
np.atleast_2d(QFloat([1.0, 2.0], [0.1, 0.2], "m"))
def test_qfloat_np_atleast_3d(self):
# This function is not implemented, so should raise
with pytest.raises(TypeError):
np.atleast_3d(QFloat([1.0, 2.0], [0.1, 0.2], "m"))
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_broadcast(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_broadcast_to(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_ceil(self):
raise NotImplementedError
def test_qfloat_np_clip(self):
arr = np.arange(10)
qf = QFloat(arr, arr * 0.1, "m")
res = np.clip(qf, 2, 8)
tgt = [2, 2, 2, 3, 4, 5, 6, 7, 8, 8]
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, arr * 0.1)
assert_equal(qf.unit, res.unit)
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_columnstack(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_concatenate(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_copyto(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_cross(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_cumprod(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_cumsum(self):
raise NotImplementedError
def test_qfloat_np_delete(self):
a = np.array([[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, 10.0, 11.0, 12.0]])
qf = QFloat(a, a * 0.1, "m")
res1 = np.delete(qf, 1, axis=0)
assert_almost_equal(res1.nominal, [[1.0, 2.0, 3.0, 4.0],
[9.0, 10.0, 11.0, 12.0]])
assert_almost_equal(res1.std_dev, [[0.1, 0.2, 0.3, 0.4],
[0.9, 1.0, 1.1, 1.2]])
assert_equal(res1.unit, qf.unit)
res2 = np.delete(qf, 1, axis=1)
assert_almost_equal(res2.nominal, [[1.0, 3.0, 4.0],
[5.0, 7.0, 8.0],
[9.0, 11.0, 12.0]])
assert_almost_equal(res2.std_dev, [[0.1, 0.3, 0.4],
[0.5, 0.7, 0.8],
[0.9, 1.1, 1.2]])
assert_equal(res2.unit, qf.unit)
res3 = np.delete(qf, np.s_[::2], 1)
assert_almost_equal(res3.nominal,
[[2.0, 4.0], [6.0, 8.0], [10.0, 12.0]])
assert_almost_equal(res3.std_dev,
[[0.2, 0.4], [0.6, 0.8], [1.0, 1.2]])
assert_equal(res3.unit, qf.unit)
res4 = np.delete(qf, [1, 3, 5])
assert_almost_equal(res4.nominal,
[1.0, 3.0, 5.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0])
assert_almost_equal(res4.std_dev,
[0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2])
assert_equal(res4.unit, qf.unit)
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_diff(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_dstack(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_ediff1d(self):
raise NotImplementedError
def test_qfloat_np_expand_dims(self):
qf = QFloat(1.0, 0.1, "m")
res1 = np.expand_dims(qf, axis=0)
assert_almost_equal(res1.nominal, [1.0])
assert_almost_equal(res1.std_dev, [0.1])
assert_equal(res1.unit, qf.unit)
assert_equal(res1.shape, (1,))
qf = QFloat([1.0, 2.0], [0.1, 0.2], "m")
res2 = np.expand_dims(qf, axis=0)
assert_almost_equal(res2.nominal, [[1.0, 2.0]])
assert_almost_equal(res2.std_dev, [[0.1, 0.2]])
assert_equal(res2.unit, qf.unit)
assert_equal(res2.shape, (1, 2))
res3 = np.expand_dims(qf, axis=1)
assert_almost_equal(res3.nominal, [[1.0], [2.0]])
assert_almost_equal(res3.std_dev, [[0.1], [0.2]])
assert_equal(res3.unit, qf.unit)
assert_equal(res3.shape, (2, 1))
if version.parse(np.version.full_version) >= version.parse('1.18.0'):
res4 = np.expand_dims(qf, axis=(2, 0))
assert_almost_equal(res4.nominal, [[[1.0], [2.0]]])
assert_almost_equal(res4.std_dev, [[[0.1], [0.2]]])
assert_equal(res4.unit, qf.unit)
assert_equal(res4.shape, (1, 2, 1))
def test_qfloat_np_flip(self):
a = np.arange(8).reshape((2, 2, 2))
qf = QFloat(a, a * 0.1, "m")
res1 = np.flip(qf)
assert_equal(res1.nominal, a[::-1, ::-1, ::-1])
assert_equal(res1.std_dev, a[::-1, ::-1, ::-1] * 0.1)
assert_equal(res1.unit, qf.unit)
res2 = np.flip(qf, 0)
assert_equal(res2.nominal, a[::-1, :, :])
assert_equal(res2.std_dev, a[::-1, :, :] * 0.1)
assert_equal(res2.unit, qf.unit)
res3 = np.flip(qf, 1)
assert_equal(res3.nominal, a[:, ::-1, :])
assert_equal(res3.std_dev, a[:, ::-1, :] * 0.1)
assert_equal(res3.unit, qf.unit)
res4 = np.flip(qf, 2)
assert_equal(res4.nominal, a[:, :, ::-1])
assert_equal(res4.std_dev, a[:, :, ::-1] * 0.1)
assert_equal(res4.unit, qf.unit)
# just some static check
qf = QFloat([[1, 2], [3, 4]], [[0.1, 0.2], [0.3, 0.4]], "m")
res5 = np.flip(qf)
assert_equal(res5.nominal, [[4, 3], [2, 1]])
assert_equal(res5.std_dev, [[0.4, 0.3], [0.2, 0.1]])
assert_equal(res5.unit, qf.unit)
res6 = np.flip(qf, 0)
assert_equal(res6.nominal, [[3, 4], [1, 2]])
assert_equal(res6.std_dev, [[0.3, 0.4], [0.1, 0.2]])
assert_equal(res6.unit, qf.unit)
res7 = np.flip(qf, 1)
assert_equal(res7.nominal, [[2, 1], [4, 3]])
assert_equal(res7.std_dev, [[0.2, 0.1], [0.4, 0.3]])
assert_equal(res7.unit, qf.unit)
def test_qfloat_np_fliplr(self):
a = np.arange(8).reshape((2, 2, 2))
qf = QFloat(a, a * 0.1, "m")
res = np.fliplr(qf)
assert_equal(res.nominal, a[:, ::-1, :])
assert_equal(res.std_dev, a[:, ::-1, :] * 0.1)
assert_equal(res.unit, qf.unit)
qf = QFloat([[1, 2], [3, 4]], [[0.1, 0.2], [0.3, 0.4]], "m")
res = np.fliplr(qf)
assert_equal(res.nominal, [[2, 1], [4, 3]])
assert_equal(res.std_dev, [[0.2, 0.1], [0.4, 0.3]])
assert_equal(res.unit, qf.unit)
def test_qfloat_np_flipud(self):
a = np.arange(8).reshape((2, 2, 2))
qf = QFloat(a, a * 0.1, "m")
res = np.flipud(qf)
assert_equal(res.nominal, a[::-1, :, :])
assert_equal(res.std_dev, a[::-1, :, :] * 0.1)
assert_equal(res.unit, qf.unit)
qf = QFloat([[1, 2], [3, 4]], [[0.1, 0.2], [0.3, 0.4]], "m")
res = np.flipud(qf)
assert_equal(res.nominal, [[3, 4], [1, 2]])
assert_equal(res.std_dev, [[0.3, 0.4], [0.1, 0.2]])
assert_equal(res.unit, qf.unit)
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_hstack(self):
raise NotImplementedError
def test_qfloat_np_insert(self):
a = np.array([[1, 2], [3, 4], [5, 6]])
qf = QFloat(a, a * 0.1, "m")
res = np.insert(qf, 5, QFloat(999, 0.1, unit="m"))
assert_almost_equal(res.nominal, [1, 2, 3, 4, 5, 999, 6])
assert_almost_equal(res.std_dev, [0.1, 0.2, 0.3, 0.4, 0.5, 0.1, 0.6])
assert_equal(res.unit, qf.unit)
res = np.insert(qf, 1, QFloat(999, 0.1, unit="m"), axis=1)
assert_almost_equal(res.nominal,
[[1, 999, 2], [3, 999, 4], [5, 999, 6]])
assert_almost_equal(res.std_dev, [[0.1, 0.1, 0.2],
[0.3, 0.1, 0.4],
[0.5, 0.1, 0.6]])
assert_equal(res.unit, qf.unit)
def test_qfloat_np_moveaxis(self):
arr = np.zeros((3, 4, 5))
qf = QFloat(arr, unit='m')
res = np.moveaxis(qf, 0, -1)
assert_equal(res.shape, (4, 5, 3))
assert_equal(res.unit, qf.unit)
res = np.moveaxis(qf, -1, 0)
assert_equal(res.shape, (5, 3, 4))
assert_equal(res.unit, qf.unit)
res = np.moveaxis(qf, (0, 1), (-1, -2))
assert_equal(res.shape, (5, 4, 3))
assert_equal(res.unit, qf.unit)
res = np.moveaxis(qf, [0, 1, 2], [-1, -2, -3])
assert_equal(res.shape, (5, 4, 3))
assert_equal(res.unit, qf.unit)
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_nancumprod(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_nancumsum(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_nanprod(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_nansum(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_prod(self):
raise NotImplementedError
def test_qfloat_np_ravel(self):
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
tgt = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
qf = QFloat(arr, arr * 0.1, "m")
res = np.ravel(qf)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
def test_qfloat_np_repeat(self):
arr = np.array([1, 2, 3])
tgt = np.array([1, 1, 2, 2, 3, 3])
qf = QFloat(arr, arr * 0.1, "m")
res = np.repeat(qf, 2)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
def test_qfloat_np_reshape(self):
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
tgt = np.array([[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]])
qf = QFloat(arr, arr * 0.1, "m")
res = np.reshape(qf, (2, 6))
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
assert_equal(res.shape, (2, 6))
def test_qfloat_np_resize(self):
arr = np.array([[1, 2], [3, 4]])
qf = QFloat(arr, arr * 0.1, "m")
shp = (2, 4)
tgt = np.array([[1, 2, 3, 4], [1, 2, 3, 4]])
res = np.resize(qf, shp)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
assert_equal(res.shape, shp)
shp = (4, 2)
tgt = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
res = np.resize(qf, shp)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
assert_equal(res.shape, shp)
shp = (4, 3)
tgt = np.array([[1, 2, 3], [4, 1, 2], [3, 4, 1], [2, 3, 4]])
res = np.resize(qf, shp)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
assert_equal(res.shape, shp)
shp = (0,)
tgt = np.array([])
res = np.resize(qf, shp)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
assert_equal(res.shape, shp)
def test_qfloat_np_roll(self):
arr = np.arange(10)
qf = QFloat(arr, arr * 0.01, "m")
off = 2
tgt = np.array([8, 9, 0, 1, 2, 3, 4, 5, 6, 7])
res = np.roll(qf, off)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.01)
assert_equal(res.unit, qf.unit)
off = -2
tgt = np.array([2, 3, 4, 5, 6, 7, 8, 9, 0, 1])
res = np.roll(qf, off)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.01)
assert_equal(res.unit, qf.unit)
arr = np.arange(12).reshape((4, 3))
qf = QFloat(arr, arr * 0.01, "m")
ax = 0
off = 1
tgt = np.array([[9, 10, 11], [0, 1, 2], [3, 4, 5], [6, 7, 8]])
res = np.roll(qf, off, axis=ax)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.01)
assert_equal(res.unit, qf.unit)
ax = 1
off = 1
tgt = np.array([[2, 0, 1], [5, 3, 4], [8, 6, 7], [11, 9, 10]])
res = np.roll(qf, off, axis=ax)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.01)
assert_equal(res.unit, qf.unit)
def test_qfloat_np_rollaxis(self):
arr = np.ones((3, 4, 5, 6))
qf = QFloat(arr, arr * 0.01, "m")
res = np.rollaxis(qf, 3, 1)
assert_equal(res.shape, (3, 6, 4, 5))
res = np.rollaxis(qf, 2)
assert_equal(res.shape, (5, 3, 4, 6))
res = np.rollaxis(qf, 1, 4)
assert_equal(res.shape, (3, 5, 6, 4))
def test_qfloat_np_round(self):
# single case
qf = np.round(QFloat(1.02549, 0.135964))
assert_equal(qf.nominal, 1)
assert_equal(qf.std_dev, 0)
qf = np.round(QFloat(1.02549, 0.135964), decimals=2)
assert_equal(qf.nominal, 1.03)
assert_equal(qf.std_dev, 0.14)
# just check array too
qf = np.round(QFloat([1.03256, 2.108645], [0.01456, 0.594324]),
decimals=2)
assert_equal(qf.nominal, [1.03, 2.11])
assert_equal(qf.std_dev, [0.01, 0.59])
def test_qfloat_np_rot90(self):
arr = np.array([[0, 1, 2], [3, 4, 5]])
b1 = np.array([[2, 5], [1, 4], [0, 3]])
b2 = np.array([[5, 4, 3], [2, 1, 0]])
b3 = np.array([[3, 0], [4, 1], [5, 2]])
b4 = np.array([[0, 1, 2], [3, 4, 5]])
qf = QFloat(arr, arr * 0.1, "m")
for k in range(-3, 13, 4):
res = np.rot90(qf, k=k)
assert_equal(res.nominal, b1)
assert_equal(res.std_dev, b1 * 0.1)
assert_equal(res.unit, qf.unit)
for k in range(-2, 13, 4):
res = np.rot90(qf, k=k)
assert_equal(res.nominal, b2)
assert_equal(res.std_dev, b2 * 0.1)
assert_equal(res.unit, qf.unit)
for k in range(-1, 13, 4):
res = np.rot90(qf, k=k)
assert_equal(res.nominal, b3)
assert_equal(res.std_dev, b3 * 0.1)
assert_equal(res.unit, qf.unit)
for k in range(0, 13, 4):
res = np.rot90(qf, k=k)
assert_equal(res.nominal, b4)
assert_equal(res.std_dev, b4 * 0.1)
assert_equal(res.unit, qf.unit)
arr = np.arange(8).reshape((2, 2, 2))
qf = QFloat(arr, arr * 0.1, "m")
ax = (0, 1)
tgt = np.array([[[2, 3], [6, 7]], [[0, 1], [4, 5]]])
res = np.rot90(qf, axes=ax)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
ax = (1, 2)
tgt = np.array([[[1, 3], [0, 2]], [[5, 7], [4, 6]]])
res = np.rot90(qf, axes=ax)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
ax = (2, 0)
tgt = np.array([[[4, 0], [6, 2]], [[5, 1], [7, 3]]])
res = np.rot90(qf, axes=ax)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
ax = (1, 0)
tgt = np.array([[[4, 5], [0, 1]], [[6, 7], [2, 3]]])
res = np.rot90(qf, axes=ax)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
def test_qfloat_np_shape(self):
for shp in [(10,), (11, 12), (11, 12, 13)]:
qf = QFloat(np.ones(shp), np.ones(shp), "m")
assert_equal(np.shape(qf), shp)
def test_qfloat_np_size(self):
for shp in [(10,), (11, 12), (11, 12, 13)]:
qf = QFloat(np.ones(shp), np.ones(shp), "m")
assert_equal(np.size(qf), np.prod(shp))
def test_qfloat_np_squeeze(self):
arr = np.array([[[0], [1], [2]]])
qf = QFloat(arr, arr * 0.01, "m")
res = np.squeeze(qf)
assert_equal(res.shape, (3,))
assert_almost_equal(res.nominal, [0, 1, 2])
assert_almost_equal(res.std_dev, [0, 0.01, 0.02])
assert_equal(res.unit, qf.unit)
res = np.squeeze(qf, axis=0)
assert_equal(res.shape, (3, 1))
assert_almost_equal(res.nominal, [[0], [1], [2]])
assert_almost_equal(res.std_dev, [[0], [0.01], [0.02]])
assert_equal(res.unit, qf.unit)
with pytest.raises(ValueError):
np.squeeze(qf, axis=1)
res = np.squeeze(qf, axis=2)
assert_equal(res.shape, (1, 3))
assert_almost_equal(res.nominal, [[0, 1, 2]])
assert_almost_equal(res.std_dev, [[0, 0.01, 0.02]])
assert_equal(res.unit, qf.unit)
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_sum(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_stack(self):
raise NotImplementedError
def test_qfloat_np_swapaxes(self):
arr = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
tgt = np.array([[[0, 4], [2, 6]], [[1, 5], [3, 7]]])
qf = QFloat(arr, arr * 0.1, "m")
res = np.swapaxes(qf, 0, 2)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
def test_qfloat_np_take(self):
arr = np.array([1, 2, 3, 4, 5])
tgt = np.array([2, 3, 5])
ind = [1, 2, 4]
qf = QFloat(arr, arr * 0.1, "m")
res = np.take(qf, ind)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
def test_qfloat_np_tile(self):
arr = np.array([0, 1, 2])
qf = QFloat(arr, arr * 0.1)
tile = 2
tgt = np.array([0, 1, 2, 0, 1, 2])
res = np.tile(qf, tile)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
tile = (2, 2)
tgt = np.array([[0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2]])
res = np.tile(qf, tile)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
# More checking
arr = np.array([[1, 2], [3, 4]])
qf = QFloat(arr, arr * 0.1)
tile = 2
tgt = np.array([[1, 2, 1, 2], [3, 4, 3, 4]])
res = np.tile(qf, tile)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
tile = (2, 1)
tgt = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
res = np.tile(qf, tile)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
def test_qfloat_np_transpose(self):
arr = np.array([[1, 2], [3, 4], [5, 6]])
tgt = np.array([[1, 3, 5], [2, 4, 6]])
qf = QFloat(arr, arr * 0.1, "m")
res = np.transpose(qf)
assert_almost_equal(res.nominal, tgt)
assert_almost_equal(res.std_dev, tgt * 0.1)
assert_equal(res.unit, qf.unit)
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_trunc(self):
raise NotImplementedError
class TestQFloatNumpyUfuncs:
"""Test numpy array functions for numpy comatibility."""
@pytest.mark.parametrize('func', [np.abs, np.absolute])
def test_qfloat_np_absolute(self, func):
qf1 = QFloat(1.0, 0.1, 'm')
qf2 = QFloat(-1.0, 0.1, 'm')
qf3 = QFloat(-5.0, 0.1)
qf4 = QFloat(-6)
qf5 = QFloat([1, -1, 2, -2])
assert_equal(func(qf1), QFloat(1.0, 0.1, 'm'))
assert_equal(func(qf2), QFloat(1.0, 0.1, 'm'))
assert_equal(func(qf3), QFloat(5.0, 0.1))
assert_equal(func(qf4), QFloat(6))
assert_equal(func(qf5), [1, 1, 2, 2])
with pytest.raises(NotImplementedError):
# out argument should fail
func(qf1, out=[])
def test_qfloat_np_add(self):
qf1 = QFloat(2.0, 0.2, 'm')
qf2 = QFloat(1.0, 0.1, 'm')
qf3 = QFloat([1, 2, 3], [0.1, 0.2, 0.3], 'm')
qf4 = QFloat(1.0, 0.1, 's')
qf5 = QFloat(1.0)
res = np.add(qf1, qf2)
assert_equal(res.nominal, 3.0)
assert_almost_equal(res.std_dev, 0.223606797749979)
assert_equal(res.unit, units.Unit('m'))
res = np.add(qf1, qf3)
assert_equal(res.nominal, [3, 4, 5])
assert_almost_equal(res.std_dev, [0.2236068, 0.28284271, 0.36055513])
assert_equal(res.unit, units.Unit('m'))
res = np.add(qf3, qf1)
assert_equal(res.nominal, [3, 4, 5])
assert_almost_equal(res.std_dev, [0.2236068, 0.28284271, 0.36055513])
assert_equal(res.unit, units.Unit('m'))
with pytest.raises(UnitsError):
np.add(qf1, qf4)
with pytest.raises(UnitsError):
np.add(qf1, qf5)
with pytest.raises(UnitsError):
np.add(qf1, 1.0)
with pytest.raises(NotImplementedError):
# out argument should fail
np.add(qf1, qf2, out=[])
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_cbrt(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_ceil(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_copysign(self):
raise NotImplementedError
@pytest.mark.parametrize('func', [np.divide, np.true_divide])
def test_qfloat_np_divide(self, func):
qf1 = QFloat(2.0, 0.2, 'm')
qf2 = QFloat(1.0, 0.1, 'm')
qf3 = QFloat([1, 2, 4], [0.1, 0.2, 0.4], 'cm')
qf4 = QFloat(1.0, 0.1, 's')
qf5 = QFloat(1.0)
res = func(qf1, qf2)
assert_equal(res.nominal, 2)
assert_almost_equal(res.std_dev, 0.28284271)
assert_equal(res.unit, units.dimensionless_unscaled)
res = func(qf1, qf3)
assert_equal(res.nominal, [2, 1, 0.5])
assert_almost_equal(res.std_dev, [0.28284271, 0.14142136, 0.07071068])
assert_equal(res.unit, units.Unit('m/cm'))
res = func(qf3, qf1)
assert_equal(res.nominal, [0.5, 1, 2])
assert_almost_equal(res.std_dev, [0.0707107, 0.1414214, 0.2828427])
assert_equal(res.unit, units.Unit('cm/m'))
res = func(qf1, qf4)
assert_equal(res.nominal, 2.0)
assert_almost_equal(res.std_dev, 0.28284271247461906)
assert_equal(res.unit, units.Unit('m/s'))
res = func(qf1, qf5)
assert_equal(res.nominal, 2.0)
assert_almost_equal(res.std_dev, 0.2)
assert_equal(res.unit, units.Unit('m'))
with pytest.raises(NotImplementedError):
# out argument should fail
func(qf1, qf2, out=[])
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_divmod(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_exp(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_exp2(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_expm1(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_fabs(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_floor(self):
raise NotImplementedError
def test_qfloat_np_floor_divide(self):
qf1 = QFloat(2.0, 0.2, 'm')
qf2 = QFloat(1.0, 0.1, 'm')
qf3 = QFloat([1, 2, 4], [0.1, 0.2, 0.4], 'cm')
qf4 = QFloat(1.0, 0.1, 's')
qf5 = QFloat(1.0)
res = np.floor_divide(qf1, qf2)
assert_equal(res.nominal, 2)
assert_almost_equal(res.std_dev, 0)
assert_equal(res.unit, units.dimensionless_unscaled)
res = np.floor_divide(qf1, qf3)
assert_equal(res.nominal, [2, 1, 0])
assert_almost_equal(res.std_dev, [0, 0, 0])
assert_equal(res.unit, units.Unit('m/cm'))
res = np.floor_divide(qf3, qf1)
assert_equal(res.nominal, [0, 1, 2])
assert_almost_equal(res.std_dev, [0, 0, 0])
assert_equal(res.unit, units.Unit('cm/m'))
res = np.floor_divide(qf1, qf4)
assert_equal(res.nominal, 2)
assert_almost_equal(res.std_dev, 0)
assert_equal(res.unit, units.Unit('m/s'))
res = np.floor_divide(qf1, qf5)
assert_equal(res.nominal, 2)
assert_almost_equal(res.std_dev, 0)
assert_equal(res.unit, units.Unit('m'))
with pytest.raises(NotImplementedError):
# out argument should fail
np.floor_divide(qf1, qf2, out=[])
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_fmax(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_fmin(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_fmod(self):
raise NotImplementedError
def test_qfloat_np_hypot(self):
qf1 = QFloat(3, 0.3, 'm')
qf2 = QFloat(4, 0.4, 'm')
qf3 = QFloat(3*np.ones((5, 5)), unit='m')
qf4 = QFloat(4*np.ones((5, 5)), unit='m')
res = np.hypot(qf1, qf2)
assert_equal(res.nominal, 5)
assert_almost_equal(res.std_dev, 0.36715119501371646)
assert_equal(res.unit, units.Unit('m'))
res = np.hypot(qf3, qf4)
assert_equal(res.nominal, 5*np.ones((5, 5)))
assert_almost_equal(res.std_dev, np.zeros((5, 5)))
assert_equal(res.unit, units.Unit('m'))
res = np.hypot(qf1, qf4)
assert_equal(res.nominal, 5*np.ones((5, 5)))
assert_almost_equal(res.std_dev, 0.18*np.ones((5, 5)))
assert_equal(res.unit, units.Unit('m'))
with pytest.raises(UnitsError):
np.hypot(qf1, 1)
with pytest.raises(UnitsError):
np.hypot(qf1, QFloat(1, unit='s'))
with pytest.raises(NotImplementedError):
# out argument should fail
np.multiply(qf1, qf2, out=[])
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_isfinit(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_isinf(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_isnan(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_log(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_log2(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_log10(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_log1p(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_maximum(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_minimum(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_modf(self):
raise NotImplementedError
def test_qfloat_np_multiply(self):
qf1 = QFloat(2.0, 0.2, 'm')
qf2 = QFloat(1.2, 0.1, 'm')
qf3 = QFloat([1, 2, 4], [0.1, 0.2, 0.4], 'cm')
res = np.multiply(qf1, 2)
assert_equal(res.nominal, 4)
assert_equal(res.std_dev, 0.4)
assert_equal(res.unit, units.Unit('m'))
res = np.multiply(qf1, qf2)
assert_equal(res.nominal, 2.4)
assert_almost_equal(res.std_dev, 0.3124099870362662)
assert_equal(res.unit, units.Unit('m2'))
res = np.multiply(qf1, qf3)
assert_equal(res.nominal, [2, 4, 8])
assert_almost_equal(res.std_dev, [0.28284271, 0.56568542, 1.13137085])
assert_equal(res.unit, units.Unit('m*cm'))
with pytest.raises(NotImplementedError):
# out argument should fail
np.multiply(qf1, qf2, out=[])
def test_qfloat_np_negative(self):
qf1 = QFloat(1.0, 0.1, 'm')
qf2 = QFloat(-1.0, 0.1, 'm')
qf3 = QFloat(-5.0, 0.1)
qf4 = QFloat(6)
qf5 = QFloat([1, -1, 2, -2])
assert_equal(np.negative(qf1), QFloat(-1.0, 0.1, 'm'))
assert_equal(np.negative(qf2), QFloat(1.0, 0.1, 'm'))
assert_equal(np.negative(qf3), QFloat(5.0, 0.1))
assert_equal(np.negative(qf4), QFloat(-6))
assert_equal(np.negative(qf5), QFloat([-1, 1, -2, 2]))
with pytest.raises(NotImplementedError):
# out argument should fail
np.negative(qf1, out=[])
def test_qfloat_np_positive(self):
qf1 = QFloat(1.0, 0.1, 'm')
qf2 = QFloat(-1.0, 0.1, 'm')
qf3 = QFloat(-5.0, 0.1)
qf4 = QFloat(6)
qf5 = QFloat([1, -1, 2, -2])
assert_equal(np.positive(qf1), QFloat(1.0, 0.1, 'm'))
assert_equal(np.positive(qf2), QFloat(-1.0, 0.1, 'm'))
assert_equal(np.positive(qf3), QFloat(-5.0, 0.1))
assert_equal(np.positive(qf4), QFloat(6))
assert_equal(np.positive(qf5), QFloat([1, -1, 2, -2]))
with pytest.raises(NotImplementedError):
# out argument should fail
np.positive(qf1, out=[])
@pytest.mark.parametrize('func', [np.power, np.float_power])
def test_qfloat_np_power(self, func):
qf1 = QFloat(2.0, 0.1, 'm')
qf2 = QFloat([2, 3, 4], [0.1, 0.2, 0.3], 'm')
qf3 = QFloat(2.0, 0.1)
qf4 = QFloat([2, 3, 4])
res = func(qf1, 2)
assert_equal(res.nominal, 4)
assert_equal(res.std_dev, 0.4)
assert_equal(res.unit, units.Unit('m2'))
res = func(qf1, 1.5)
assert_almost_equal(res.nominal, 2.8284271247461903)
assert_almost_equal(res.std_dev, 0.2121320343559643)
assert_equal(res.unit, units.Unit('m(3/2)'))
res = func(qf2, 2)
assert_equal(res.nominal, [4, 9, 16])
assert_almost_equal(res.std_dev, [0.4, 1.2, 2.4])
assert_equal(res.unit, units.Unit('m2'))
res = func(qf2, 1.5)
assert_almost_equal(res.nominal, [2.82842712, 5.19615242, 8])
assert_almost_equal(res.std_dev, [0.21213203, 0.51961524, 0.9])
assert_equal(res.unit, units.Unit('m(3/2)'))
res = func(qf1, qf3)
assert_equal(res.nominal, 4)
assert_almost_equal(res.std_dev, 0.4866954717550927)
assert_equal(res.unit, units.Unit('m2'))
with pytest.raises(ValueError):
func(qf1, qf4)
with pytest.raises(ValueError):
func(qf2, qf4)
with pytest.raises(ValueError):
func(qf4, qf1)
with pytest.raises(NotImplementedError):
# out argument should fail
func(qf1, 2, out=[])
@pytest.mark.parametrize('func', [np.mod, np.remainder])
def test_qfloat_np_remainder(self, func):
qf1 = QFloat(5.0, 0.1, 'm')
qf2 = QFloat(3.5, 0.1, 'm')
qf3 = QFloat(1.0, 0.1, 's')
qf4 = QFloat([1, 2, 3])
res = func(qf1, 2)
assert_equal(res.nominal, 1)
assert_equal(res.std_dev, 0.1)
assert_equal(res.unit, units.Unit('m'))
res = func(qf1, qf2)
assert_equal(res.nominal, 1.5)
assert_equal(res.std_dev, 0.14142135623730953)
assert_equal(res.unit, units.Unit('m'))
res = func(qf1, qf3)
assert_equal(res.nominal, 0)
assert_equal(res.std_dev, np.inf)
assert_equal(res.unit, units.Unit('m'))
res = func(qf1, qf4)
assert_equal(res.nominal, [0, 1, 2])
assert_equal(res.std_dev, [np.nan, 0.1, 0.1])
assert_equal(res.unit, units.Unit('m'))
res = func(qf4, 1.5)
assert_equal(res.nominal, [1, 0.5, 0])
assert_equal(res.std_dev, [0, 0, np.nan])
assert_equal(res.unit, units.Unit(''))
with pytest.raises(NotImplementedError):
# out argument should fail
func(qf1, qf2, out=[])
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_rint(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_sign(self):
raise NotImplementedError
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_signbit(self):
raise NotImplementedError
def test_qfloat_np_sqrt(self):
qf1 = QFloat(4, 0.1, 'm2')
qf2 = QFloat([9, 100], [0.1, 0.1], 's2')
res = np.sqrt(qf1)
assert_equal(res.nominal, 2)
assert_equal(res.std_dev, 0.025)
assert_equal(res.unit, units.Unit('m'))
res = np.sqrt(qf2)
assert_equal(res.nominal, [3, 10])
assert_almost_equal(res.std_dev, [0.01666667, 0.005])
assert_equal(res.unit, units.Unit('s'))
with pytest.raises(NotImplementedError):
# out argument should fail
np.sqrt(qf1, out=[])
def test_qfloat_np_square(self):
qf1 = QFloat(2.0, 0.1, 'm')
qf2 = QFloat([1, 2, 3], [0.1, 0.2, 0.3], 'cm')
res = np.square(qf1)
assert_equal(res.nominal, 4)
assert_almost_equal(res.std_dev, 0.28284271247461906)
assert_equal(res.unit, units.Unit('m2'))
res = np.square(qf2)
assert_equal(res.nominal, [1, 4, 9])
assert_almost_equal(res.std_dev, [0.14142136, 0.56568542, 1.27279221])
assert_equal(res.unit, units.Unit('cm2'))
with pytest.raises(NotImplementedError):
# out argument should fail
np.square(qf1, out=[])
def test_qfloat_np_subtract(self):
qf1 = QFloat(2.0, 0.2, 'm')
qf2 = QFloat(1.0, 0.1, 'm')
qf3 = QFloat([1, 2, 3], [0.1, 0.2, 0.3], 'm')
qf4 = QFloat(1.0, 0.1, 's')
qf5 = QFloat(1.0)
res = np.subtract(qf1, qf2)
assert_equal(res.nominal, 1.0)
assert_almost_equal(res.std_dev, 0.223606797749979)
assert_equal(res.unit, units.Unit('m'))
res = np.subtract(qf1, qf3)
assert_equal(res.nominal, [1, 0, -1])
assert_almost_equal(res.std_dev, [0.2236068, 0.28284271, 0.36055513])
assert_equal(res.unit, units.Unit('m'))
res = np.subtract(qf3, qf1)
assert_equal(res.nominal, [-1, 0, 1])
assert_almost_equal(res.std_dev, [0.2236068, 0.28284271, 0.36055513])
assert_equal(res.unit, units.Unit('m'))
with pytest.raises(UnitsError):
np.subtract(qf1, qf4)
with pytest.raises(UnitsError):
np.subtract(qf1, qf5)
with pytest.raises(UnitsError):
np.subtract(qf1, 1.0)
with pytest.raises(NotImplementedError):
# out argument should fail
np.subtract(qf1, qf2, out=[])
@pytest.mark.skip(reason="Not Implemented Yet")
def test_qfloat_np_trunc(self):
raise NotImplementedError
class TestQFloatNumpyUfuncTrigonometric:
"""Test the numpy trigonometric and inverse trigonometric functions."""
# Both radians and deg2rad must work in the same way
@pytest.mark.parametrize('func', [np.radians, np.deg2rad])
def test_qfloat_np_radians(self, func):
qf = QFloat(180, 0.1, 'degree')
res = func(qf)
assert_almost_equal(res.nominal, 3.141592653589793)
assert_almost_equal(res.std_dev, 0.001745329251994)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat(-180, 0.1, 'degree')
res = func(qf)
assert_almost_equal(res.nominal, -3.141592653589793)
assert_almost_equal(res.std_dev, 0.001745329251994)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat([0, 30, 45, 60, 90], [0.1, 0.2, 0.3, 0.4, 0.5], 'degree')
res = func(qf)
assert_almost_equal(res.nominal, [0, 0.52359878, 0.78539816,
1.04719755, 1.57079633])
assert_almost_equal(res.std_dev, [0.00174533, 0.00349066, 0.00523599,
0.00698132, 0.00872665])
assert_equal(res.unit, units.Unit('rad'))
# radian should no change
qf = QFloat(1.0, 0.1, 'radian')
res = func(qf)
assert_equal(res.nominal, 1.0)
assert_equal(res.std_dev, 0.1)
assert_equal(res.unit, units.Unit('rad'))
# Invalid units
for unit in ('m', None, 'm/s'):
with pytest.raises(UnitsError):
func(QFloat(1.0, 0.1, unit))
with pytest.raises(NotImplementedError):
# out argument should fail
func(qf, out=[])
# Both degrees and rad2deg must work in the same way
@pytest.mark.parametrize('func', [np.degrees, np.rad2deg])
def test_qfloat_np_degrees(self, func):
qf = QFloat(np.pi, 0.05, 'radian')
res = func(qf)
assert_almost_equal(res.nominal, 180.0)
assert_almost_equal(res.std_dev, 2.8647889756541165)
assert_equal(res.unit, units.Unit('deg'))
qf = QFloat(-np.pi, 0.05, 'radian')
res = func(qf)
assert_almost_equal(res.nominal, -180.0)
assert_almost_equal(res.std_dev, 2.8647889756541165)
assert_equal(res.unit, units.Unit('deg'))
qf = QFloat([np.pi, np.pi/2, np.pi/4, np.pi/6],
[0.01, 0.02, 0.03, 0.04], 'rad')
res = func(qf)
assert_almost_equal(res.nominal, [180.0, 90.0, 45.0, 30.0])
assert_almost_equal(res.std_dev, [0.5729578, 1.14591559,
1.71887339, 2.29183118])
assert_equal(res.unit, units.Unit('deg'))
# deg should no change
qf = QFloat(1.0, 0.1, 'deg')
res = func(qf)
assert_equal(res.nominal, 1.0)
assert_equal(res.std_dev, 0.1)
assert_equal(res.unit, units.Unit('deg'))
# Invalid units
for unit in ('m', None, 'm/s'):
with pytest.raises(UnitsError):
func(QFloat(1.0, 0.1, unit))
with pytest.raises(NotImplementedError):
# out argument should fail
func(qf, out=[])
def test_qfloat_np_sin(self):
qf = QFloat(np.pi, 0.05, 'radian')
res = np.sin(qf)
assert_almost_equal(res.nominal, 0.0)
assert_almost_equal(res.std_dev, 0.05)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(90, 0.05, 'deg')
res = np.sin(qf)
assert_almost_equal(res.nominal, 1.0)
assert_almost_equal(res.std_dev, 0.0)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat([30, 45, 60], [0.1, 0.2, 0.3], 'deg')
res = np.sin(qf)
assert_almost_equal(res.nominal, [0.5, 0.70710678, 0.8660254])
assert_almost_equal(res.std_dev, [0.0015115, 0.00246827, 0.00261799])
assert_equal(res.unit, units.dimensionless_unscaled)
for unit in ['m', 'm/s', None]:
with pytest.raises(UnitsError):
np.sin(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.sin(qf, out=[])
def test_qfloat_np_cos(self):
qf = QFloat(180, 0.05, 'deg')
res = np.cos(qf)
assert_almost_equal(res.nominal, -1.0)
assert_almost_equal(res.std_dev, 0.0)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(np.pi/2, 0.05, 'rad')
res = np.cos(qf)
assert_almost_equal(res.nominal, 0.0)
assert_almost_equal(res.std_dev, 0.05)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat([30, 45, 60], [0.1, 0.2, 0.3], 'deg')
res = np.cos(qf)
assert_almost_equal(res.nominal, [0.8660254, 0.70710678, 0.5])
assert_almost_equal(res.std_dev, [0.00087266, 0.00246827, 0.0045345])
assert_equal(res.unit, units.dimensionless_unscaled)
for unit in ['m', 'm/s', None]:
with pytest.raises(UnitsError):
np.cos(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.cos(qf, out=[])
def test_qfloat_np_tan(self):
qf = QFloat(45, 0.05, 'deg')
res = np.tan(qf)
assert_almost_equal(res.nominal, 1.0)
assert_almost_equal(res.std_dev, 0.0017453292519943294)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(np.pi/4, 0.05, 'rad')
res = np.tan(qf)
assert_almost_equal(res.nominal, 1.0)
assert_almost_equal(res.std_dev, 0.1)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat([0, 30, 60], [0.1, 0.2, 0.3], 'deg')
res = np.tan(qf)
assert_almost_equal(res.nominal, [0, 0.57735027, 1.73205081])
assert_almost_equal(res.std_dev, [0.00174533, 0.00465421, 0.02094395])
assert_equal(res.unit, units.dimensionless_unscaled)
for unit in ['m', 'm/s', None]:
with pytest.raises(UnitsError):
np.tan(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.tan(qf, out=[])
def test_qfloat_np_sinh(self):
qf = QFloat(0, 0.05, 'radian')
res = np.sinh(qf)
assert_almost_equal(res.nominal, 0.0)
assert_almost_equal(res.std_dev, 0.05)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(np.pi, 0.05, 'radian')
res = np.sinh(qf)
assert_almost_equal(res.nominal, 11.548739357257748)
assert_almost_equal(res.std_dev, 0.5795976637760759)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(90, 0.05, 'deg')
res = np.sinh(qf)
assert_almost_equal(res.nominal, 2.3012989023072947)
assert_almost_equal(res.std_dev, 0.002189671298638268)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat([30, 45, 60], [0.1, 0.2, 0.3], 'deg')
res = np.sinh(qf)
assert_almost_equal(res.nominal, [0.5478535, 0.86867096, 1.24936705])
assert_almost_equal(res.std_dev, [0.0019901, 0.0046238, 0.0083791])
assert_equal(res.unit, units.dimensionless_unscaled)
for unit in ['m', 'm/s', None]:
with pytest.raises(UnitsError):
np.sinh(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.sinh(qf, out=[])
def test_qfloat_np_cosh(self):
qf = QFloat(0, 0.05, 'radian')
res = np.cosh(qf)
assert_almost_equal(res.nominal, 1.0)
assert_almost_equal(res.std_dev, 0.0)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(np.pi, 0.05, 'radian')
res = np.cosh(qf)
assert_almost_equal(res.nominal, 11.591953275521519)
assert_almost_equal(res.std_dev, 0.5774369678628875)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(90, 0.05, 'deg')
res = np.cosh(qf)
assert_almost_equal(res.nominal, 2.5091784786580567)
assert_almost_equal(res.std_dev, 0.0020082621458896)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat([30, 45, 60], [0.1, 0.2, 0.3], 'deg')
res = np.cosh(qf)
assert_almost_equal(res.nominal, [1.14023832, 1.32460909, 1.60028686])
assert_almost_equal(res.std_dev, [0.00095618, 0.00303223, 0.00654167])
assert_equal(res.unit, units.dimensionless_unscaled)
for unit in ['m', 'm/s', None]:
with pytest.raises(UnitsError):
np.cosh(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.cosh(qf, out=[])
def test_qfloat_np_tanh(self):
qf = QFloat(0, 0.05, 'radian')
res = np.tanh(qf)
assert_almost_equal(res.nominal, 0.0)
assert_almost_equal(res.std_dev, 0.05)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(np.pi, 0.05, 'radian')
res = np.tanh(qf)
assert_almost_equal(res.nominal, 0.99627207622075)
assert_almost_equal(res.std_dev, 0.00037209750714)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat(90, 0.05, 'deg')
res = np.tanh(qf)
assert_almost_equal(res.nominal, 0.9171523356672744)
assert_almost_equal(res.std_dev, 0.0001386067128590)
assert_equal(res.unit, units.dimensionless_unscaled)
qf = QFloat([30, 45, 60], [0.1, 0.2, 0.3], 'deg')
res = np.tanh(qf)
assert_almost_equal(res.nominal, [0.48047278, 0.6557942, 0.78071444])
assert_almost_equal(res.std_dev, [0.00134241, 0.00198944, 0.00204457])
assert_equal(res.unit, units.dimensionless_unscaled)
for unit in ['m', 'm/s', None]:
with pytest.raises(UnitsError):
np.tanh(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.tanh(qf, out=[])
def test_qfloat_np_arcsin(self):
qf = QFloat(np.sqrt(2)/2, 0.01)
res = np.arcsin(qf)
assert_almost_equal(res.nominal, 0.7853981633974484)
assert_almost_equal(res.std_dev, 0.0141421356237309)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat([0, 0.5, 1], [0.01, 0.2, 0.3])
res = np.arcsin(qf)
assert_almost_equal(res.nominal, [0, 0.52359878, 1.57079633])
assert_almost_equal(res.std_dev, [0.01, 0.23094011, np.inf])
assert_equal(res.unit, units.Unit('rad'))
# Invalid units
for unit in ['m', 'm/s', 'rad', 'deg']:
with pytest.raises(UnitsError):
np.arcsin(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.arcsin(qf, out=[])
def test_qfloat_np_arccos(self):
qf = QFloat(np.sqrt(2)/2, 0.01)
res = np.arccos(qf)
assert_almost_equal(res.nominal, 0.7853981633974484)
assert_almost_equal(res.std_dev, 0.0141421356237309)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat([0, 0.5, 1], [0.01, 0.2, 0.3])
res = np.arccos(qf)
assert_almost_equal(res.nominal, [1.57079633, 1.04719755, 0])
assert_almost_equal(res.std_dev, [0.01, 0.23094011, np.inf])
assert_equal(res.unit, units.Unit('rad'))
# Invalid units
for unit in ['m', 'm/s', 'rad', 'deg']:
with pytest.raises(UnitsError):
np.arccos(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.arccos(qf, out=[])
def test_qfloat_np_arctan(self):
qf = QFloat(1.0, 0.01)
res = np.arctan(qf)
assert_almost_equal(res.nominal, 0.7853981633974484)
assert_almost_equal(res.std_dev, 0.005)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat([0, 0.5, 1], [0.01, 0.2, 0.3])
res = np.arctan(qf)
assert_almost_equal(res.nominal, [0, 0.4636476, 0.7853982])
assert_almost_equal(res.std_dev, [0.01, 0.16, 0.15])
assert_equal(res.unit, units.Unit('rad'))
# Invalid units
for unit in ['m', 'm/s', 'rad', 'deg']:
with pytest.raises(UnitsError):
np.arctan(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.arctan(qf, out=[])
def test_qfloat_np_arcsinh(self):
qf = QFloat(0.0, 0.01)
res = np.arcsinh(qf)
assert_almost_equal(res.nominal, 0.0)
assert_almost_equal(res.std_dev, 0.01)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat([0.5, 1.0, 10], [0.01, 0.2, 0.3])
res = np.arcsinh(qf)
assert_almost_equal(res.nominal, [0.4812118, 0.8813736, 2.998223])
assert_almost_equal(res.std_dev, [0.0089443, 0.1414214, 0.0298511])
assert_equal(res.unit, units.Unit('rad'))
# Invalid units
for unit in ['m', 'm/s', 'rad', 'deg']:
with pytest.raises(UnitsError):
np.arcsinh(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.arcsinh(qf, out=[])
def test_qfloat_np_arccosh(self):
qf = QFloat(1.0, 0.01)
res = np.arccosh(qf)
assert_almost_equal(res.nominal, 0.0)
# assert_almost_equal(res.std_dev, np.inf)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat([1.5, 5.0, 10], [0.01, 0.2, 0.3])
res = np.arccosh(qf)
assert_almost_equal(res.nominal, [0.9624237, 2.2924317, 2.9932228])
assert_almost_equal(res.std_dev, [0.0089443, 0.0408248, 0.0301511])
assert_equal(res.unit, units.Unit('rad'))
# Invalid units
for unit in ['m', 'm/s', 'rad', 'deg']:
with pytest.raises(UnitsError):
np.arccosh(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.arccosh(qf, out=[])
def test_qfloat_np_arctanh(self):
qf = QFloat(0.0, 0.01)
res = np.arctanh(qf)
assert_almost_equal(res.nominal, 0.0)
assert_almost_equal(res.std_dev, 0.01)
assert_equal(res.unit, units.Unit('rad'))
qf = QFloat([0.1, 0.5, 1.0], [0.01, 0.2, 0.3])
res = np.arctanh(qf)
assert_almost_equal(res.nominal, [0.1003353, 0.5493061, np.inf])
assert_almost_equal(res.std_dev, [0.010101, 0.2666667, np.inf])
assert_equal(res.unit, units.Unit('rad'))
# Invalid units
for unit in ['m', 'm/s', 'rad', 'deg']:
with pytest.raises(UnitsError):
np.arctanh(QFloat(1.0, unit=unit))
with pytest.raises(NotImplementedError):
# out argument should fail
np.arctanh(qf, out=[])
| 36.070674 | 78 | 0.570142 | 8,136 | 54,611 | 3.688913 | 0.048058 | 0.095159 | 0.090028 | 0.109286 | 0.873721 | 0.838803 | 0.799753 | 0.737847 | 0.703962 | 0.650118 | 0 | 0.101204 | 0.27445 | 54,611 | 1,513 | 79 | 36.094514 | 0.65626 | 0.030617 | 0 | 0.536942 | 0 | 0 | 0.025743 | 0 | 0 | 0 | 0 | 0 | 0.385739 | 1 | 0.087629 | false | 0 | 0.004296 | 0 | 0.094502 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4f5359ec3eda3c01015a1ecae5ba8a1cc8060159 | 19,889 | py | Python | main/convenience_redirect/tests.py | csev/class2go | f9419ae16448d20fc882170f95cfd1c4dc3331ca | [
"Apache-2.0"
] | 2 | 2015-10-31T23:12:52.000Z | 2021-01-19T11:03:00.000Z | main/convenience_redirect/tests.py | csev/class2go | f9419ae16448d20fc882170f95cfd1c4dc3331ca | [
"Apache-2.0"
] | null | null | null | main/convenience_redirect/tests.py | csev/class2go | f9419ae16448d20fc882170f95cfd1c4dc3331ca | [
"Apache-2.0"
] | null | null | null | """
This file demonstrates writing tests using the unittest module. These will pass
when you run "manage.py test".
Replace this with more appropriate tests for your application.
"""
from django.test import TestCase
from convenience_redirect.redirector import convenience_redirector
from django.test.client import RequestFactory
from django.http import HttpResponseRedirect
from c2g.models import CurrentTermMap, Course
from random import randrange
from django.contrib.auth.models import User,Group
class SimpleTest(TestCase):
#for a ton of URLs in our system that should not get redirects (b/c they are not course specific), make sure they don't redirect
no_course_paths = ['/_health', '/_throw500', '/_throw404', '/email_optout/afda923sdmadf/', '/shib-login', '/impersonate/jbau@stanford.edu',
'/videos/save/', '/accounts/login/', '/accounts/logout/', '/accounts/profile/save/', '/admin/', '/admin/doc/', '/courses/new/',
'/commit', '/revert/', '/change_live_datetime/', '/save_order/', '/content_section/get_children/2342/',
'content_section/get_children_as_contentgroup_parents/155/?', '/']
#These paths should be redirected if preceded by /course_prefix/course_suffix, or if the host is a convenience redirect
course_path_endings = ['/videos/', '/exams/', '/surveys', '/exams/abcd/submit/', '/', '/surveys/abcd/', '/exams/abcd/record/23/',
'/problemsets/test/record/55']
def setUp(self):
# Every test needs access to the request factory.
self.factory = RequestFactory()
self.redir = convenience_redirector()
#db class map
m1 = CurrentTermMap(course_prefix="db", course_suffix="Winter2013")
m1.save()
m2 = CurrentTermMap(course_prefix="class2go", course_suffix="tutorial")
m2.save()
m3 = CurrentTermMap(course_prefix="EE364A", course_suffix="Winter2013")
m3.save()
for (course,suffix) in (('nlp','Fall2012'),
('test','Fall2012'),
('networking','Fall2012'),
('crypto','Fall2012'),
('security','Fall2012'),
('cs144','Fall2012'),
('cs224n','Fall2012'),
('solar','Fall2012'),
('matsci256','Fall2012'),
('psych30','Fall2012'),
('nano','Fall2012'),
('msande111','Fall2012'),
('db','Winter2013'),
('class2go','tutorial'),
('EE364A','Winter2013'),
('networking', 'WallaWalla'),
):
### Create the new Course ###
r = randrange(0,100000000)
student_group = Group.objects.create(name="Student Group for class2go course " + course + " %d" % r)
instructor_group = Group.objects.create(name="Instructor Group for class2go course " + course + " %d" % r)
tas_group = Group.objects.create(name="TAS Group for class2go course " + course + " %d" % r)
readonly_tas_group = Group.objects.create(name="Readonly TAS Group for class2go course " + course + " %d" % r)
c = Course(handle=course+'--'+suffix,
student_group_id = student_group.id,
instructor_group_id = instructor_group.id,
tas_group_id = tas_group.id,
readonly_tas_group_id = readonly_tas_group.id,
)
c.save()
def test_noop(self):
for host in ('class.stanford.edu', 'www.class.stanford.edu', 'staging.class.stanford.edu', 'www.staging.class.stanford.edu' \
'class2go.stanford.edu', 'www.class2go.stanford.edu', 'staging.class2go.stanford.edu', 'www.staging.class2go.stanford.edu'):
request = self.factory.get('/')
request.META['HTTP_HOST']=host
response = self.redir.process_request(request)
self.assertIsNone(response)
def test_no_redirect_loop1(self):
for path in self.course_path_endings:
for host in ('class.stanford.edu', 'www.class.stanford.edu', 'staging.class.stanford.edu', 'www.staging.class.stanford.edu'):
request = self.factory.get('/networking/Fall2012%s' % path)
request.META['HTTP_HOST']=host
response = self.redir.process_request(request)
self.assertIsNone(response)
def test_no_direct_loop2(self):
for path in self.course_path_endings:
for host in ('class2go.stanford.edu', 'www.class2go.stanford.edu', 'staging.class2go.stanford.edu', 'www.staging.class2go.stanford.edu'):
request = self.factory.get('/db/Winter2013%s' % path)
request.META['HTTP_HOST']=host
response = self.redir.process_request(request)
self.assertIsNone(response)
def test_malformed(self):
for host in ('www.cnn.com', 'cs144.stanford.edu', 'class.stanford.edu.au', 'bad.prefix.class.stanford.edu'):
request = self.factory.get('/')
request.META['HTTP_HOST']=host
response = self.redir.process_request(request)
self.assertIsNone(response)
def test_class_networking(self):
#HTTP to '/'
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://class.stanford.edu/networking/Fall2012/')
#HTTPS
request.is_secure=lambda: True
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'https://class.stanford.edu/networking/Fall2012/')
#HTTP to '/', should redirect class2go to class
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://class.stanford.edu/networking/Fall2012/')
#HTTPS should redirect class2go to class
request.is_secure=lambda: True
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'https://class.stanford.edu/networking/Fall2012/')
def test_url_paths_and_params(self):
#path
request = self.factory.get('/videos/TheInternet/')
request.META['HTTP_HOST']='networking.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://class.stanford.edu/networking/Fall2012/videos/TheInternet/')
#query param
request = self.factory.get('/preview/?login=login&cnn=cnn')
request.META['HTTP_HOST']='networking.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://class.stanford.edu/networking/Fall2012/preview/?login=login&cnn=cnn')
#query param2
request = self.factory.get('?a=a;b=b')
request.META['HTTP_HOST']='networking.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://class.stanford.edu/networking/Fall2012/?a=a;b=b')
def test_staging_networking(self):
#HTTP to '/'
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.staging.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://staging.class.stanford.edu/networking/Fall2012/')
#HTTPS
request.is_secure=lambda: True
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'https://staging.class.stanford.edu/networking/Fall2012/')
#HTTP to '/', redirect class2go to class
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.staging.class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://staging.class.stanford.edu/networking/Fall2012/')
#HTTPS redirect class2go to class
request.is_secure=lambda: True
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'https://staging.class.stanford.edu/networking/Fall2012/')
def test_ports(self):
#HTTP 80
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.class.stanford.edu:80'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://class.stanford.edu/networking/Fall2012/')
#HTTPS 443
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.class.stanford.edu:443'
request.is_secure=lambda: True
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'https://class.stanford.edu/networking/Fall2012/')
#HTTP 8080
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.class.stanford.edu:8080'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'http://class.stanford.edu:8080/networking/Fall2012/')
#HTTPS 4443
request = self.factory.get('/')
request.META['HTTP_HOST']='networking.class.stanford.edu:4443'
request.is_secure=lambda: True
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],'https://class.stanford.edu:4443/networking/Fall2012/')
def test_active_classes(self):
for path in self.course_path_endings:
for course in ('nlp','test','networking','crypto','security','cs144','cs224n','solar','matsci256','psych30','nano','msande111'):
request = self.factory.get(path)
request.META['HTTP_HOST']=course+'.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class.stanford.edu/%s/Fall2012%s" % (course, path))
def test_Fall2012_classes_redir_to_class(self):
#make request to class2go for fall2012 classes, should end up with class
for path in self.course_path_endings:
for course in ('nlp','test','networking','crypto','security','cs144','cs224n','solar','matsci256','psych30','nano','msande111'):
request = self.factory.get(path)
request.META['HTTP_HOST']=course+'.class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class.stanford.edu/%s/Fall2012%s" % (course, path))
def test_cur_term_map_classes(self):
for path in self.course_path_endings:
#db--Winter2013
request = self.factory.get(path)
request.META['HTTP_HOST']='db.class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/db/Winter2013%s" %path)
#EE364A--Winter2013
request = self.factory.get(path)
request.META['HTTP_HOST']='EE364A.class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/EE364A/Winter2013%s" %path)
#class2go--tutorial
request = self.factory.get(path)
request.META['HTTP_HOST']='class2go.class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/class2go/tutorial%s" %path)
#db--Winter2013, redirect class to class2go
request = self.factory.get(path)
request.META['HTTP_HOST']='db.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/db/Winter2013%s" %path)
#EE364A--Winter2013
request = self.factory.get(path)
request.META['HTTP_HOST']='EE364A.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/EE364A/Winter2013%s" %path)
#class2go--tutorial, redirect class to class2go
request = self.factory.get(path)
request.META['HTTP_HOST']='class2go.class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/class2go/tutorial%s" %path)
def test_redir_class_path(self):
for ending in self.course_path_endings:
#test that we can redirect to the old codebase based on path
for course in ('nlp','test','networking','crypto','security','cs144','cs224n','solar','matsci256','psych30','nano','msande111'):
#GETs
request = self.factory.get('/%s/Fall2012%s' % (course, ending))
request.META['HTTP_HOST']='class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class.stanford.edu/%s/Fall2012%s" % (course, ending))
request = self.factory.get('/%s/Fall2012%s' % (course, ending))
request.META['HTTP_HOST']='class.stanford.edu'
response = self.redir.process_request(request)
self.assertIsNone(response)
#POSTS
request = self.factory.post('/%s/Fall2012%s' % (course, ending))
request.META['HTTP_HOST']='class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class.stanford.edu/%s/Fall2012%s" % (course, ending))
request = self.factory.post('/%s/Fall2012%s' % (course, ending))
request.META['HTTP_HOST']='class.stanford.edu'
response = self.redir.process_request(request)
self.assertIsNone(response)
#test the Walla Walla course (Should be old codebase)
for course in ('networking',):
#GET
request = self.factory.get('/%s/WallaWalla%s' % (course, ending))
request.META['HTTP_HOST']='class.stanford.edu'
response = self.redir.process_request(request)
self.assertIsNone(response)
request = self.factory.get('/%s/WallaWalla%s' % (course, ending))
request.META['HTTP_HOST']='class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class.stanford.edu/%s/WallaWalla%s" % (course, ending))
#POST
request = self.factory.post('/%s/WallaWalla%s' % (course, ending))
request.META['HTTP_HOST']='class.stanford.edu'
response = self.redir.process_request(request)
self.assertIsNone(response)
request = self.factory.post('/%s/WallaWalla%s' % (course, ending))
request.META['HTTP_HOST']='class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class.stanford.edu/%s/WallaWalla%s" % (course, ending))
#test that we can redirect to the new codebase based on path
for course in ('EE364A','db'):
#GET
request = self.factory.get('/%s/Winter2013%s' % (course, ending))
request.META['HTTP_HOST']='class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertIsNone(response)
request = self.factory.get('/%s/Winter2013%s' % (course, ending))
request.META['HTTP_HOST']='class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/%s/Winter2013%s" % (course, ending))
#POST
request = self.factory.post('/%s/Winter2013%s' % (course, ending))
request.META['HTTP_HOST']='class2go.stanford.edu'
response = self.redir.process_request(request)
self.assertIsNone(response)
request = self.factory.post('/%s/Winter2013%s' % (course, ending))
request.META['HTTP_HOST']='class.stanford.edu'
response = self.redir.process_request(request)
self.assertTrue(isinstance(response,HttpResponseRedirect))
self.assertEqual(response['Location'],"http://class2go.stanford.edu/%s/Winter2013%s" % (course, ending))
def test_no_redirect(self):
for path in self.no_course_paths:
for host in ('class.stanford.edu', 'www.class.stanford.edu', 'staging.class.stanford.edu', 'www.staging.class.stanford.edu' \
'class2go.stanford.edu', 'www.class2go.stanford.edu', 'staging.class2go.stanford.edu', 'www.staging.class2go.stanford.edu'):
request = self.factory.get(path)
request.META['HTTP_HOST']=host
response = self.redir.process_request(request)
self.assertIsNone(response)
| 57.317003 | 149 | 0.625823 | 2,076 | 19,889 | 5.910405 | 0.108382 | 0.078892 | 0.070416 | 0.07824 | 0.80489 | 0.789324 | 0.782967 | 0.762184 | 0.756805 | 0.751915 | 0 | 0.029113 | 0.241842 | 19,889 | 346 | 150 | 57.482659 | 0.784601 | 0.058977 | 0 | 0.642066 | 0 | 0.00369 | 0.259777 | 0.083842 | 0 | 0 | 0 | 0 | 0.254613 | 1 | 0.051661 | false | 0 | 0.02583 | 0 | 0.088561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4f7b8a96b7f18e9d98e886ccdb9f6b6435e9ee8e | 1,639 | py | Python | raspberry/allskySCRIPT/meteoRRD_createRRD.py | broadcastyourseb/SADR | 5a8f7d5a30ae14a58427c9c09c56347eca6a17e9 | [
"Apache-2.0"
] | 1 | 2020-02-17T03:48:44.000Z | 2020-02-17T03:48:44.000Z | raspberry/allskySCRIPT/meteoRRD_createRRD.py | broadcastyourseb/SADR | 5a8f7d5a30ae14a58427c9c09c56347eca6a17e9 | [
"Apache-2.0"
] | 15 | 2017-08-07T11:59:47.000Z | 2017-08-14T21:28:01.000Z | raspberry/allskySCRIPT/meteoRRD_createRRD.py | broadcastyourseb/SADR | 5a8f7d5a30ae14a58427c9c09c56347eca6a17e9 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
#-*- coding: iso-8859-15 -*-
# SADR METEOLLSKY
# http://www.sadr.fr
# SEBASTIEN LECLERC 2017
# Inspired by :
# NACHO MAS 2013
# http://induino.wordpress.com
#
import sys, os
import rrdtool
from meteollskyconfig import *
#10s raw values for 3hour, 1min for 24 hours, 5 min for 24*7 hours,
# 1hour for 1 year, 1day dor 10 years!
ret = rrdtool.create(CHARTPATH+"meteo.rrd", "--step", "1", "--start", '0',
"DS:T:GAUGE:600:U:U",
"DS:frezzingFlag:GAUGE:600:U:U",
"DS:Light:GAUGE:600:U:U",
"DS:daylightFlag:GAUGE:600:U:U",
"DS:Thrint:GAUGE:600:U:U",
"DS:HRint:GAUGE:600:U:U",
"DS:Thr:GAUGE:600:U:U",
"DS:HR:GAUGE:600:U:U",
"DS:Dew:GAUGE:600:U:U",
"DS:dewFlag:GAUGE:600:U:U",
"DS:Tir:GAUGE:600:U:U",
"DS:IR:GAUGE:600:U:U",
"DS:skyT:GAUGE:600:U:U",
"DS:clouds:GAUGE:600:U:U",
"DS:cloudFlag:GAUGE:600:U:U",
"DS:P:GAUGE:600:U:U",
"DS:Tp:GAUGE:600:U:U",
"DS:Wind:GAUGE:600:U:U",
"DS:WindMax:GAUGE:600:U:U",
"DS:windFlag:GAUGE:600:U:U",
"DS:CRain:GAUGE:600:U:U",
"DS:rainFlag:GAUGE:600:U:U",
"DS:TargetRain:GAUGE:600:U:U",
"DS:TRain:GAUGE:600:U:U",
"DS:PIDRain:GAUGE:600:U:U",
"RRA:AVERAGE:0.5:1:10800",
"RRA:AVERAGE:0.5:60:1440",
"RRA:AVERAGE:0.5:300:1008",
"RRA:AVERAGE:0.5:3600:8760",
"RRA:AVERAGE:0.5:86400:3650",
"RRA:MAX:0.5:1:10800",
"RRA:MAX:0.5:60:1440",
"RRA:MAX:0.5:300:1008",
"RRA:MAX:0.5:3600:8760",
"RRA:MAX:0.5:86400:3650",
"RRA:MIN:0.5:1:10800",
"RRA:MIN:0.5:60:1440",
"RRA:MIN:0.5:300:1008",
"RRA:MIN:0.5:3600:8760",
"RRA:MIN:0.5:86400:3650")
if ret:
print rrdtool.error()
| 22.763889 | 74 | 0.607688 | 311 | 1,639 | 3.202572 | 0.318328 | 0.200803 | 0.225904 | 0.251004 | 0.458835 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175919 | 0.153752 | 1,639 | 71 | 75 | 23.084507 | 0.542177 | 0.161074 | 0 | 0 | 0 | 0 | 0.666911 | 0.451944 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.065217 | null | null | 0.021739 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4faf4acfac00887ed88e304d636affe65248a35e | 208 | py | Python | tests/test_generate_sent.py | PyThaiNLP/thai-analysis-rule | df62ee3a1e9b818b5280d842f6876581eca8d5eb | [
"Apache-2.0"
] | null | null | null | tests/test_generate_sent.py | PyThaiNLP/thai-analysis-rule | df62ee3a1e9b818b5280d842f6876581eca8d5eb | [
"Apache-2.0"
] | null | null | null | tests/test_generate_sent.py | PyThaiNLP/thai-analysis-rule | df62ee3a1e9b818b5280d842f6876581eca8d5eb | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import unittest
from thaianalysisrule import generate_sent
class TestGenerateSent(unittest.TestCase):
def test_generate_sent(self):
self.assertIsNotNone(generate_sent()) | 23.111111 | 45 | 0.754808 | 23 | 208 | 6.652174 | 0.695652 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005618 | 0.144231 | 208 | 9 | 45 | 23.111111 | 0.853933 | 0.100962 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8c3ab3040d900d88d48aef7072cb13a45c39950f | 43 | py | Python | valipede/__init__.py | cooper-software/valipede | 38aca6b95c842ec51f7ce1d92f5f9844dcbbf079 | [
"MIT"
] | null | null | null | valipede/__init__.py | cooper-software/valipede | 38aca6b95c842ec51f7ce1d92f5f9844dcbbf079 | [
"MIT"
] | 1 | 2018-03-04T20:57:58.000Z | 2018-03-04T20:57:58.000Z | valipede/__init__.py | cooper-software/valipede | 38aca6b95c842ec51f7ce1d92f5f9844dcbbf079 | [
"MIT"
] | 1 | 2018-03-04T20:52:31.000Z | 2018-03-04T20:52:31.000Z | from .fields import *
from .schema import * | 21.5 | 21 | 0.744186 | 6 | 43 | 5.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 2 | 22 | 21.5 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4fd5d42ff93f3377dab9b906d998b92a4675700b | 2,533 | py | Python | z2/part3/updated_part2_batch/jm/parser_errors_2/371745755.py | kozakusek/ipp-2020-testy | 09aa008fa53d159672cc7cbf969a6b237e15a7b8 | [
"MIT"
] | 1 | 2020-04-16T12:13:47.000Z | 2020-04-16T12:13:47.000Z | z2/part3/updated_part2_batch/jm/parser_errors_2/371745755.py | kozakusek/ipp-2020-testy | 09aa008fa53d159672cc7cbf969a6b237e15a7b8 | [
"MIT"
] | 18 | 2020-03-06T17:50:15.000Z | 2020-05-19T14:58:30.000Z | z2/part3/updated_part2_batch/jm/parser_errors_2/371745755.py | kozakusek/ipp-2020-testy | 09aa008fa53d159672cc7cbf969a6b237e15a7b8 | [
"MIT"
] | 18 | 2020-03-06T17:45:13.000Z | 2020-06-09T19:18:31.000Z | from part1 import (
gamma_board,
gamma_busy_fields,
gamma_delete,
gamma_free_fields,
gamma_golden_move,
gamma_golden_possible,
gamma_move,
gamma_new,
)
"""
scenario: test_random_actions
uuid: 371745755
"""
"""
random actions, total chaos
"""
board = gamma_new(4, 4, 3, 6)
assert board is not None
assert gamma_move(board, 1, 3, 0) == 1
assert gamma_move(board, 1, 2, 3) == 1
assert gamma_move(board, 2, 2, 2) == 1
board922846933 = gamma_board(board)
assert board922846933 is not None
assert board922846933 == ("..1.\n" "..2.\n" "....\n" "...1\n")
del board922846933
board922846933 = None
assert gamma_move(board, 1, 3, 2) == 1
assert gamma_move(board, 2, 0, 1) == 1
assert gamma_free_fields(board, 2) == 11
assert gamma_move(board, 3, 2, 1) == 1
assert gamma_move(board, 1, 1, 1) == 1
assert gamma_move(board, 1, 0, 0) == 1
assert gamma_move(board, 2, 0, 3) == 1
assert gamma_move(board, 2, 1, 3) == 1
assert gamma_move(board, 3, 2, 1) == 0
assert gamma_move(board, 3, 1, 0) == 1
assert gamma_move(board, 1, 2, 0) == 1
assert gamma_move(board, 3, 1, 3) == 0
assert gamma_move(board, 1, 3, 3) == 1
assert gamma_free_fields(board, 2) == 3
assert gamma_move(board, 3, 1, 3) == 0
assert gamma_move(board, 3, 3, 0) == 0
assert gamma_move(board, 1, 2, 1) == 0
assert gamma_move(board, 1, 0, 1) == 0
assert gamma_golden_possible(board, 1) == 1
assert gamma_move(board, 2, 2, 1) == 0
assert gamma_move(board, 2, 1, 0) == 0
assert gamma_golden_move(board, 2, 3, 3) == 1
assert gamma_move(board, 3, 2, 1) == 0
assert gamma_move(board, 1, 0, 2) == 1
assert gamma_golden_possible(board, 1) == 1
assert gamma_move(board, 2, 1, 3) == 0
assert gamma_golden_move(board, 2, 1, 2) == 0
assert gamma_move(board, 3, 2, 3) == 0
assert gamma_move(board, 1, 3, 0) == 0
assert gamma_golden_move(board, 1, 0, 1) == 1
assert gamma_move(board, 2, 2, 2) == 0
assert gamma_busy_fields(board, 2) == 4
assert gamma_move(board, 3, 2, 1) == 0
assert gamma_golden_possible(board, 1) == 0
assert gamma_move(board, 2, 2, 1) == 0
assert gamma_busy_fields(board, 2) == 4
assert gamma_move(board, 3, 2, 1) == 0
assert gamma_move(board, 1, 2, 1) == 0
assert gamma_golden_possible(board, 1) == 0
assert gamma_move(board, 2, 2, 1) == 0
assert gamma_move(board, 2, 3, 1) == 1
assert gamma_free_fields(board, 2) == 1
assert gamma_move(board, 1, 2, 1) == 0
assert gamma_free_fields(board, 1) == 1
assert gamma_move(board, 2, 2, 1) == 0
assert gamma_move(board, 2, 3, 0) == 0
assert gamma_golden_possible(board, 2) == 0
gamma_delete(board)
| 30.518072 | 62 | 0.677063 | 462 | 2,533 | 3.536797 | 0.075758 | 0.336597 | 0.330477 | 0.440636 | 0.783354 | 0.773562 | 0.734394 | 0.561812 | 0.434517 | 0.427173 | 0 | 0.114272 | 0.16739 | 2,533 | 82 | 63 | 30.890244 | 0.660503 | 0 | 0 | 0.279412 | 0 | 0 | 0.009816 | 0 | 0 | 0 | 0 | 0 | 0.779412 | 1 | 0 | false | 0 | 0.014706 | 0 | 0.014706 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4fd789a7927924eeb3e4298495bbc6d10e3e94d4 | 48 | py | Python | vnpy_sqlserver/__init__.py | vnpy/vnpy_sqlserver | 6f2dc32f4f48f0215c0ca7514595e26194bdff33 | [
"MIT"
] | 1 | 2021-07-10T02:04:51.000Z | 2021-07-10T02:04:51.000Z | vnpy_sqlserver/__init__.py | vnpy/vnpy_sqlserver | 6f2dc32f4f48f0215c0ca7514595e26194bdff33 | [
"MIT"
] | null | null | null | vnpy_sqlserver/__init__.py | vnpy/vnpy_sqlserver | 6f2dc32f4f48f0215c0ca7514595e26194bdff33 | [
"MIT"
] | 3 | 2021-04-13T08:39:52.000Z | 2022-01-18T14:05:57.000Z | from .sqlserver_database import database_manager | 48 | 48 | 0.916667 | 6 | 48 | 7 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4fe4205ad64db334f50b123ca5d4c4d07bc36057 | 1,155 | py | Python | app/stats/handlers.py | projectweekend/THPL-Data-API | 28995ad93d5d16cb9da10f52f30ae16d34c4c5c3 | [
"MIT"
] | null | null | null | app/stats/handlers.py | projectweekend/THPL-Data-API | 28995ad93d5d16cb9da10f52f30ae16d34c4c5c3 | [
"MIT"
] | null | null | null | app/stats/handlers.py | projectweekend/THPL-Data-API | 28995ad93d5d16cb9da10f52f30ae16d34c4c5c3 | [
"MIT"
] | null | null | null | import falcon
from app.stats import data
from app.stats.hooks import validate_query_params
class LatestReadingResource(object):
def on_get(self, req, res, sensor):
result = data.latest_reading(sensor=sensor)
if result is None:
raise falcon.HTTPNotFound
res.status = falcon.HTTP_200
res.body = result
class HourlyStatsResource(object):
@falcon.before(validate_query_params)
def on_get(self, req, res, sensor):
result = data.hourly_stats(
sensor=sensor,
start_day=req.context['start_day'],
end_day=req.context['end_day'])
if result is None:
raise falcon.HTTPNotFound
res.status = falcon.HTTP_200
res.body = result
class DailyStatsResource(object):
@falcon.before(validate_query_params)
def on_get(self, req, res, sensor):
result = data.daily_stats(
sensor=sensor,
start_day=req.context['start_day'],
end_day=req.context['end_day'])
if result is None:
raise falcon.HTTPNotFound
res.status = falcon.HTTP_200
res.body = result
| 27.5 | 51 | 0.637229 | 142 | 1,155 | 5.021127 | 0.288732 | 0.044881 | 0.072931 | 0.050491 | 0.746143 | 0.746143 | 0.746143 | 0.746143 | 0.746143 | 0.698457 | 0 | 0.01074 | 0.274459 | 1,155 | 41 | 52 | 28.170732 | 0.840095 | 0 | 0 | 0.71875 | 0 | 0 | 0.027706 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.09375 | 0 | 0.28125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4ff843775934f549f19dc40b88eb6311faf9201f | 249 | py | Python | jans-pycloudlib/jans/pycloudlib/config/__init__.py | JanssenProject/jans | 8d57d01b998bfe87a2377bbe9023dd97fb03cc9f | [
"Apache-2.0"
] | 18 | 2022-01-13T13:45:13.000Z | 2022-03-30T04:41:18.000Z | jans-pycloudlib/jans/pycloudlib/config/__init__.py | JanssenProject/jans | 8d57d01b998bfe87a2377bbe9023dd97fb03cc9f | [
"Apache-2.0"
] | 604 | 2022-01-13T12:32:50.000Z | 2022-03-31T20:27:36.000Z | jans-pycloudlib/jans/pycloudlib/config/__init__.py | JanssenProject/jans | 8d57d01b998bfe87a2377bbe9023dd97fb03cc9f | [
"Apache-2.0"
] | 8 | 2022-01-28T00:23:25.000Z | 2022-03-16T05:12:12.000Z | # noqa: D104
from jans.pycloudlib.config.consul_config import ConsulConfig # noqa: F401
from jans.pycloudlib.config.kubernetes_config import KubernetesConfig # noqa: F401
from jans.pycloudlib.config.google_config import GoogleConfig # noqa: F401
| 49.8 | 83 | 0.823293 | 32 | 249 | 6.3125 | 0.4375 | 0.118812 | 0.267327 | 0.356436 | 0.316832 | 0.316832 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.108434 | 249 | 4 | 84 | 62.25 | 0.855856 | 0.172691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b3f28da7e01d3d6a0d2cf3478c785d5e0f30cdc | 163 | py | Python | tests/example_app/api/__init__.py | Loudr/pale | dc002ee6032c856551143af222ff8f71ed9853fe | [
"MIT"
] | 13 | 2015-06-18T02:35:31.000Z | 2019-03-15T14:39:28.000Z | tests/example_app/api/__init__.py | Loudr/pale | dc002ee6032c856551143af222ff8f71ed9853fe | [
"MIT"
] | 34 | 2015-05-18T17:13:16.000Z | 2021-03-25T21:40:42.000Z | tests/example_app/api/__init__.py | Loudr/pale | dc002ee6032c856551143af222ff8f71ed9853fe | [
"MIT"
] | 3 | 2016-06-08T01:05:47.000Z | 2020-02-04T17:50:17.000Z | import pale
from tests.example_app.api import endpoints, resources
_module_type = pale.ImplementationModule
__all__ = ['endpoints', 'resources', '_module_type']
| 23.285714 | 54 | 0.797546 | 19 | 163 | 6.368421 | 0.684211 | 0.297521 | 0.396694 | 0.46281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104294 | 163 | 6 | 55 | 27.166667 | 0.828767 | 0 | 0 | 0 | 0 | 0 | 0.184049 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8cb2870538f046487849175833b19933c22cd9fb | 26 | py | Python | CrowdClient/__init__.py | tyler-tee/CrowdClient | 7d3ddaf16a015832e1ccad7c3034174b456a3286 | [
"MIT"
] | 5 | 2020-07-23T16:48:53.000Z | 2021-11-08T10:30:16.000Z | CrowdClient/__init__.py | tyler-tee/CrowdClient | 7d3ddaf16a015832e1ccad7c3034174b456a3286 | [
"MIT"
] | 2 | 2021-05-26T12:57:27.000Z | 2021-09-24T16:49:45.000Z | CrowdClient/__init__.py | tyler-tee/CrowdClient | 7d3ddaf16a015832e1ccad7c3034174b456a3286 | [
"MIT"
] | 1 | 2020-12-28T14:33:01.000Z | 2020-12-28T14:33:01.000Z | from .crowdclient import * | 26 | 26 | 0.807692 | 3 | 26 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8cb93b7df9beb39d6b26f9ab90ee500c270ba8b2 | 3,555 | py | Python | tests/codegen/dynamic_memlet_test.py | jnice-81/dace | 5211794a2d17b7189037ac485ab0b292fb02aa0d | [
"BSD-3-Clause"
] | 227 | 2019-03-15T23:39:06.000Z | 2022-03-30T07:49:08.000Z | tests/codegen/dynamic_memlet_test.py | jnice-81/dace | 5211794a2d17b7189037ac485ab0b292fb02aa0d | [
"BSD-3-Clause"
] | 834 | 2019-07-31T22:49:31.000Z | 2022-03-28T14:01:32.000Z | tests/codegen/dynamic_memlet_test.py | jnice-81/dace | 5211794a2d17b7189037ac485ab0b292fb02aa0d | [
"BSD-3-Clause"
] | 64 | 2019-03-19T05:40:37.000Z | 2022-03-11T15:02:42.000Z | # Copyright 2019-2021 ETH Zurich and the DaCe authors. All rights reserved.
""" Tests dereferencing issues with tasklets that use dynamic memlets. """
import dace
import numpy as np
def test_dynamic_memlets():
""" Tests dynamic memlet dereferencing on one value. """
sdfg = dace.SDFG('test')
state = sdfg.add_state('state')
sdfg.add_array('out_arr1', dtype=dace.float64, shape=(3, 3))
sdfg.add_array('out_arr2', dtype=dace.float64, shape=(3, 3))
tasklet = state.add_tasklet('tasklet',
inputs={},
outputs={'o1', 'o2'},
code='o1 = 1.0; o2 = 2 * o1')
map_entry, map_exit = state.add_map('map', ndrange=dict(i='0:3', j='0:3'))
state.add_edge(map_entry, None, tasklet, None, dace.Memlet())
state.add_memlet_path(tasklet,
map_exit,
state.add_write('out_arr1'),
src_conn='o1',
memlet=dace.Memlet.simple('out_arr1',
subset_str='i,j'))
state.add_memlet_path(tasklet,
map_exit,
state.add_write('out_arr2'),
src_conn='o2',
memlet=dace.Memlet.simple('out_arr2',
subset_str='i,j'))
sdfg.validate()
for state in sdfg.nodes():
for node in state.nodes():
if isinstance(node, (dace.nodes.Tasklet, dace.nodes.MapExit)):
for edge in state.out_edges(node):
edge.data.dynamic = True
A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
sdfg(out_arr1=A, out_arr2=B)
assert np.allclose(A, 1)
assert np.allclose(B, 2)
def test_dynamic_memlets_subset():
"""
Tests dynamic memlet dereferencing when subset/pointer is used
in tasklet connector.
"""
sdfg = dace.SDFG('test')
state = sdfg.add_state('state')
sdfg.add_array('out_arr1', dtype=dace.float64, shape=(3, 3))
sdfg.add_array('out_arr2', dtype=dace.float64, shape=(3, 3))
tasklet = state.add_tasklet('tasklet',
inputs={},
outputs={'o1', 'o2'},
code='o1 = 1.0; o2[i, j] = 2 * o1')
map_entry, map_exit = state.add_map('map', ndrange=dict(i='0:3', j='0:3'))
state.add_edge(map_entry, None, tasklet, None, dace.Memlet())
state.add_memlet_path(tasklet,
map_exit,
state.add_write('out_arr1'),
src_conn='o1',
memlet=dace.Memlet.simple('out_arr1',
subset_str='i,j'))
state.add_memlet_path(tasklet,
map_exit,
state.add_write('out_arr2'),
src_conn='o2',
memlet=dace.Memlet('out_arr2[0:3, 0:3]'))
sdfg.validate()
for state in sdfg.nodes():
for node in state.nodes():
if isinstance(node, (dace.nodes.Tasklet, dace.nodes.MapExit)):
for edge in state.out_edges(node):
edge.data.dynamic = True
A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
sdfg(out_arr1=A, out_arr2=B)
assert np.allclose(A, 1)
assert np.allclose(B, 2)
if __name__ == '__main__':
test_dynamic_memlets()
test_dynamic_memlets_subset()
| 40.397727 | 78 | 0.513924 | 434 | 3,555 | 4.032258 | 0.209677 | 0.064 | 0.041143 | 0.051429 | 0.781143 | 0.776 | 0.776 | 0.776 | 0.776 | 0.776 | 0 | 0.036874 | 0.359212 | 3,555 | 87 | 79 | 40.862069 | 0.731343 | 0.077918 | 0 | 0.847222 | 0 | 0 | 0.073035 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 1 | 0.027778 | false | 0 | 0.027778 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8cd913023fa7eb33f0789d3e525770286562b326 | 44 | py | Python | CodeWars/8 Kyu/You only need one - Beginner.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/You only need one - Beginner.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/You only need one - Beginner.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | def check(seq, elem):
return elem in seq | 22 | 22 | 0.681818 | 8 | 44 | 3.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 44 | 2 | 22 | 22 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
50e549dbb667969d62fe6635239f1df47a35cf03 | 161 | py | Python | freshdi/freshdi/doctype/news_comment/test_news_comment.py | JackyLN/demo-freshdi | 343b11a28a4ff4bd3a8970c9e6406841bc7fbb85 | [
"MIT"
] | null | null | null | freshdi/freshdi/doctype/news_comment/test_news_comment.py | JackyLN/demo-freshdi | 343b11a28a4ff4bd3a8970c9e6406841bc7fbb85 | [
"MIT"
] | null | null | null | freshdi/freshdi/doctype/news_comment/test_news_comment.py | JackyLN/demo-freshdi | 343b11a28a4ff4bd3a8970c9e6406841bc7fbb85 | [
"MIT"
] | null | null | null | # Copyright (c) 2021, lenghia1991@gmail.com and Contributors
# See license.txt
# import frappe
import unittest
class TestNewsComment(unittest.TestCase):
pass
| 17.888889 | 60 | 0.78882 | 20 | 161 | 6.35 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.130435 | 161 | 8 | 61 | 20.125 | 0.85 | 0.546584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0fa3d819fae5ba6bd96870709961580611d98916 | 176 | py | Python | 25/02/all.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | null | null | null | 25/02/all.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | 46 | 2017-06-30T22:19:07.000Z | 2017-07-31T22:51:31.000Z | 25/02/all.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | null | null | null | print(all([]))
print(all([True, True]))
print(all([True, False]))
print(all([False, False]))
print()
print(all([1]))
print(all(['a']))
print(all([None]))
print(all([1, None]))
| 17.6 | 26 | 0.607955 | 28 | 176 | 3.821429 | 0.25 | 0.598131 | 0.224299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01227 | 0.073864 | 176 | 9 | 27 | 19.555556 | 0.644172 | 0 | 0 | 0 | 0 | 0 | 0.005682 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0ffa4ca75389c0caa9664d1f69477202f5caf48c | 15,587 | py | Python | src/openprocurement/planning/api/tests/plan_status.py | pontostroy/openprocurement.api | 6651ef29413d155c83f893ee64a611cf75f4daaf | [
"Apache-2.0"
] | null | null | null | src/openprocurement/planning/api/tests/plan_status.py | pontostroy/openprocurement.api | 6651ef29413d155c83f893ee64a611cf75f4daaf | [
"Apache-2.0"
] | 2 | 2021-03-25T23:33:30.000Z | 2022-03-21T22:18:19.000Z | src/openprocurement/planning/api/tests/plan_status.py | scrubele/prozorro-testing | 42b93ea2f25d8cc40e66c596f582c7c05e2a9d76 | [
"Apache-2.0"
] | 1 | 2020-08-20T06:09:14.000Z | 2020-08-20T06:09:14.000Z | # -*- coding: utf-8 -*-
from openprocurement.planning.api.tests.base import app, singleton_app, test_plan_data, generate_docservice_url
from copy import deepcopy
import pytest
def test_plan_default_status(app):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data.pop("status", None)
response = app.post_json("/plans", {"data": test_data})
assert response.json["data"].get("status") == "scheduled"
test_data["status"] = None
response = app.post_json("/plans", {"data": test_data})
assert response.json["data"].get("status") == "scheduled"
response = app.get("/plans")
assert response.status == "200 OK"
assert len(response.json["data"]) == 2
@pytest.mark.parametrize("mode", ["real", "test", "_all_"])
def test_plan_draft_status(app, mode):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
if mode != "real":
test_data["mode"] = "test"
test_data["status"] = "draft"
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"].get("status") == "draft"
response = app.get("/plans?mode={}".format(mode))
assert response.status == "200 OK"
assert len(response.json["data"]) == 0
response = app.get("/plans?feed=changes&mode={}".format(mode))
assert response.status == "200 OK"
assert len(response.json["data"]) == 0
@pytest.mark.parametrize("initial_status", ["scheduled", None])
def test_fail_update_back_to_draft(app, initial_status):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = initial_status
response = app.post_json("/plans", {"data": test_data})
assert response.json["data"].get("status") == "scheduled"
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
if initial_status is None:
plan = app.app.registry.db.get(plan_id)
del plan["status"]
app.app.registry.db.save(plan)
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"status": "draft"}}, status=422
)
assert response.json == {
u"status": u"error",
u"errors": [
{
u"description": u"Plan status can not be changed back to 'draft'",
u"location": u"data",
u"name": u"status",
}
],
}
def test_update_status_invalid(app):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = "draft"
response = app.post_json("/plans", {"data": test_data})
assert response.json["data"].get("status") == "draft"
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"status": "invalid"}}, status=422
)
assert response.json == {
"status": "error",
"errors": [
{
"location": "body",
"name": "status",
"description": ["Value must be one of ['draft', 'scheduled', 'cancelled', 'complete']."],
}
],
}
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"status": "cancelled"}}, status=422
)
assert response.json == {
u"status": u"error",
u"errors": [
{u"description": [u"An active cancellation object is required"], u"location": u"body", u"name": u"status"}
],
}
@pytest.mark.parametrize("status", ["scheduled", "complete"])
@pytest.mark.parametrize("mode", ["real", "test", "_all_"])
def test_plan_update_draft(app, mode, status):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
if mode != "real":
test_data["mode"] = "test"
test_data["status"] = "draft"
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"].get("status") == "draft"
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
response = app.patch_json("/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"status": status}})
assert response.status == "200 OK"
assert response.json["data"].get("status") == status
response = app.get("/plans?mode={}".format(mode))
assert response.status == "200 OK"
assert len(response.json["data"]) == 1
assert response.json["data"][0]["id"] == plan_id
response = app.get("/plans?feed=changes&mode={}".format(mode))
assert response.status == "200 OK"
assert len(response.json["data"]) == 1
assert response.json["data"][0]["id"] == plan_id
def test_plan_update_scheduled_to_complete(app):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = "scheduled"
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"].get("status") == "scheduled"
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
response = app.patch_json("/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"status": "complete"}})
assert response.status == "200 OK"
assert response.json["data"].get("status") == "complete"
@pytest.mark.parametrize("initial_status", ["draft", "scheduled"])
def test_cancel_plan_2_steps(app, initial_status):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = initial_status
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"].get("status") == initial_status
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token),
{"data": {"cancellation": {"reason": "Because", "status": "pending"}}},
)
assert response.status == "200 OK"
assert response.json["data"]["cancellation"]["status"] == "pending"
assert response.json["data"].get("status") == initial_status
create_time = response.json["data"]["cancellation"]["date"]
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"cancellation": {"status": "active"}}}
)
assert response.status == "200 OK"
assert response.json["data"]["cancellation"]["status"] == "active"
assert response.json["data"]["cancellation"]["date"] > create_time
assert response.json["data"]["status"] == "cancelled"
get_response = app.get("/plans/{}".format(plan_id))
assert get_response.json["data"]["cancellation"]["date"] == response.json["data"]["cancellation"]["date"]
def test_cancel_plan_1_step(app):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = "scheduled"
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"].get("status") == "scheduled"
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token),
{"data": {"cancellation": {"reason": "", "status": "active"}}},
status=422,
)
assert response.json == {
"status": "error",
"errors": [
{"location": "body", "name": "cancellation", "description": {"reason": ["String value is too short."]}}
],
}
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token),
{"data": {"cancellation": {"reason": "Because", "status": "active"}}},
)
assert response.status == "200 OK"
assert response.json["data"]["cancellation"]["status"] == "active"
assert response.json["data"]["status"] == "cancelled"
plan = app.app.registry.db.get(plan_id)
assert {c["path"] for c in plan["revisions"][-1]["changes"]} == {"/cancellation", "/status"}
@pytest.mark.parametrize("replaced_status", ["draft", "scheduled", "complete"])
def test_create_cancelled(app, replaced_status):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = replaced_status # this will be replaced by "switch_status" serializable
test_data["cancellation"] = {"reason": "Because it's possible", "status": "active"}
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"].get("status") == "cancelled"
def test_cancel_compatibility_completed_plan(app):
"""
well I don't know if it's an appropriate case. it's probably not
"""
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
plan = response.json["data"]
acc_token = response.json["access"]["token"]
obj = app.app.registry.db.get(plan["id"])
del obj["status"]
obj["tender_id"] = "a" * 32
app.app.registry.db.save(obj)
response = app.get("/plans/{}".format(plan["id"]))
assert response.json["data"]["status"] == "complete" # complete !
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan["id"], acc_token),
{"data": {"cancellation": {"reason": "Because it's possible", "status": "active"}}}
)
assert response.status == "200 OK"
assert response.json["data"]["status"] == "cancelled" # cancelled !
@pytest.mark.parametrize("status", ["cancelled", "complete"])
def test_fail_update_complete_or_cancelled_plan(app, status):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["documents"] = [
{
"title": u"укр.doc",
"url": generate_docservice_url(app),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
]
test_data["status"] = status
if status == "cancelled":
test_data["cancellation"] = dict(reason="Because", status="active")
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"].get("status") == status
plan_id = response.json["data"]["id"]
doc_id = response.json["data"]["documents"][0]["id"]
acc_token = response.json["access"]["token"]
# patch
response = app.patch_json("/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {}}, status=422)
assert response.json == {
u"status": u"error",
u"errors": [
{
u"description": u"Can't update plan in '{}' status".format(status),
u"location": u"data",
u"name": u"status",
}
],
}
# docs
response = app.post_json(
"/plans/{}/documents?acc_token={}".format(plan_id, acc_token),
{
"data": {
"title": u"укр.doc",
"url": generate_docservice_url(app),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=422,
)
assert response.json == {
u"status": u"error",
u"errors": [
{
u"description": u"Can't update plan in '{}' status".format(status),
u"location": u"data",
u"name": u"status",
}
],
}
response = app.put_json(
"/plans/{}/documents/{}?acc_token={}".format(plan_id, doc_id, acc_token),
{
"data": {
"title": u"укр_2.doc",
"url": generate_docservice_url(app),
"hash": "md5:" + "0" * 32,
"format": "application/msword",
}
},
status=422,
)
assert response.json == {
u"status": u"error",
u"errors": [
{
u"description": u"Can't update plan in '{}' status".format(status),
u"location": u"data",
u"name": u"status",
}
],
}
response = app.patch_json(
"/plans/{}/documents/{}?acc_token={}".format(plan_id, doc_id, acc_token),
{"data": {"title": u"whatever.doc"}},
status=422,
)
assert response.json == {
u"status": u"error",
u"errors": [
{
u"description": u"Can't update plan in '{}' status".format(status),
u"location": u"data",
u"name": u"status",
}
],
}
# tender creation
response = app.post_json("/plans/{}/tenders".format(plan_id), {"data": {}}, status=422)
assert response.json == {
u"status": u"error",
u"errors": [
{
u"description": u"Can't update plan in '{}' status".format(status),
u"location": u"data",
u"name": u"status",
}
],
}
@pytest.mark.parametrize(
"value",
[
"aboveThresholdUA",
"aboveThresholdUA.defense",
"aboveThresholdEU",
"esco",
"competitiveDialogueUA",
"competitiveDialogueEU",
"closeFrameworkAgreementUA",
],
)
def test_fail_complete_manually(app, value):
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = "scheduled"
test_data["tender"]["procurementMethodType"] = value
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"]["status"] == "scheduled"
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"status": "complete"}}, status=422
)
assert response.json == {
"status": "error",
"errors": [
{
"location": "body",
"name": "status",
"description": ["Can't complete plan with '{}' tender.procurementMethodType".format(value)],
}
],
}
@pytest.mark.parametrize("value", [("open", "belowThreshold"), ("limited", "reporting"), ("", "")])
def test_success_complete_manually(app, value):
procurement_method, procurement_method_type = value
app.authorization = ("Basic", ("broker", "broker"))
test_data = deepcopy(test_plan_data)
test_data["status"] = "scheduled"
test_data["tender"]["procurementMethod"] = procurement_method
test_data["tender"]["procurementMethodType"] = procurement_method_type
response = app.post_json("/plans", {"data": test_data})
assert response.status == "201 Created"
assert response.json["data"]["status"] == "scheduled"
plan_id = response.json["data"]["id"]
acc_token = response.json["access"]["token"]
response = app.patch_json(
"/plans/{}?acc_token={}".format(plan_id, acc_token), {"data": {"status": "complete"}}, status=200
)
assert response.json["data"]["status"] == "complete"
| 36.164733 | 118 | 0.587477 | 1,767 | 15,587 | 5.03056 | 0.09451 | 0.089099 | 0.082799 | 0.066824 | 0.796603 | 0.775003 | 0.748453 | 0.734278 | 0.700979 | 0.695241 | 0 | 0.010174 | 0.224354 | 15,587 | 430 | 119 | 36.248837 | 0.725062 | 0.012318 | 0 | 0.617729 | 1 | 0 | 0.254797 | 0.039225 | 0 | 0 | 0 | 0 | 0.180055 | 1 | 0.036011 | false | 0 | 0.00831 | 0 | 0.044321 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ba18d9928ea3223c29859f8c9079052fcd44f5d3 | 84 | py | Python | src/word_segmentation/__init__.py | ruanchaves/word_segmentation | d9ae6cf2c00c512015fd38ec21007d5ed594084c | [
"MIT"
] | 1 | 2020-05-21T12:47:20.000Z | 2020-05-21T12:47:20.000Z | src/word_segmentation/__init__.py | ruanchaves/word_segmentation | d9ae6cf2c00c512015fd38ec21007d5ed594084c | [
"MIT"
] | 4 | 2020-06-03T21:42:36.000Z | 2021-06-08T21:43:11.000Z | src/word_segmentation/__init__.py | ruanchaves/word_segmentation | d9ae6cf2c00c512015fd38ec21007d5ed594084c | [
"MIT"
] | null | null | null | from .segmenter import *
from .argument_classes import *
from .translator import * | 28 | 32 | 0.77381 | 10 | 84 | 6.4 | 0.6 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154762 | 84 | 3 | 33 | 28 | 0.901408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ba2231727bd95dc008df8f656a8fef0187df948d | 43,796 | py | Python | lib/OntologyAPI/OntologyAPIImpl.py | slebras/ontology_api | 92a2aaf04d69eff45c9a9ff26afdd17e1fb962a0 | [
"MIT"
] | null | null | null | lib/OntologyAPI/OntologyAPIImpl.py | slebras/ontology_api | 92a2aaf04d69eff45c9a9ff26afdd17e1fb962a0 | [
"MIT"
] | null | null | null | lib/OntologyAPI/OntologyAPIImpl.py | slebras/ontology_api | 92a2aaf04d69eff45c9a9ff26afdd17e1fb962a0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#BEGIN_HEADER
import logging
import re
from OntologyAPI.utils import re_api, misc
#END_HEADER
class OntologyAPI:
'''
Module Name:
OntologyAPI
Module Description:
A KBase module: OntologyAPI
'''
######## WARNING FOR GEVENT USERS ####### noqa
# Since asynchronous IO can lead to methods - even the same method -
# interrupting each other, you must be *very* careful when using global
# state. A method could easily clobber the state set by another while
# the latter method is running.
######################################### noqa
VERSION = "0.3.11"
GIT_URL = "git@github.com:zhlu9890/ontology_api.git"
GIT_COMMIT_HASH = "3252a0ee19c691570da69379a63fd091b3432782"
#BEGIN_CLASS_HEADER
#END_CLASS_HEADER
# config contains contents of config file in a hash or None if it couldn't
# be found
def __init__(self, config):
#BEGIN_CONSTRUCTOR
self.shared_folder = config['scratch']
logging.basicConfig(format='%(created)s %(levelname)s: %(message)s',
level=logging.INFO)
#END_CONSTRUCTOR
pass
def get_descendants(self, ctx, GenericParams):
"""
Retrieve descendants of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_descendants
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_descendants", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_descendants
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_descendants return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_ancestors(self, ctx, GenericParams):
"""
Retrieve ancestors of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_ancestors
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_ancestors", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_ancestors
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_ancestors return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_children(self, ctx, GenericParams):
"""
Retrieve children of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_children
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_children", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_children
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_children return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_parents(self, ctx, GenericParams):
"""
Retrieve parents of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_parents
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_parents", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_parents
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_parents return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_related(self, ctx, GenericParams):
"""
Retrieve related terms of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_related
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_related", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_related
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_related return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_siblings(self, ctx, GenericParams):
"""
Retrieve siblings terms of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_siblings
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_siblings", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_siblings
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_siblings return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_terms(self, ctx, GetTermsParams):
"""
Retrieve metadata of a list of ontology terms by IDs
:param GetTermsParams: instance of type "GetTermsParams" (Parameters
for get_terms ids - required - a list of name ontology term id,
such as '["GO:0000002", "GO:0000266"]' ts - optional - fetch
documents with this active timestamp, defaults to now ns -
optional - ontology namespace to use, defaults to "go" limit -
optional - number of results to return (defaults to 20) offset -
optional - number of results to skip (defaults to 0)) ->
structure: parameter "ids" of list of type "ID" (Ontology term id,
such as "GO:0000002"), parameter "ts" of Long, parameter "ns" of
String, parameter "limit" of Long, parameter "offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_terms
validated_params=misc.validate_params(GetTermsParams, "get_terms")
results = re_api.query("get_terms", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_terms
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_terms return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_hierarchical_ancestors(self, ctx, GenericParams):
"""
Retrieve hierarchical_ancestors of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_hierarchical_ancestors
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_hierarchicalAncestors", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_hierarchical_ancestors
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_hierarchical_ancestors return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_hierarchical_children(self, ctx, GenericParams):
"""
Retrieve hierarchical_children of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_hierarchical_children
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_hierarchicalChildren", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_hierarchical_children
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_hierarchical_children return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_hierarchical_descendants(self, ctx, GenericParams):
"""
Retrieve hierarchical_descendants of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_hierarchical_descendants
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_hierarchicalDescendants", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_hierarchical_descendants
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_hierarchical_descendants return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_hierarchical_parents(self, ctx, GenericParams):
"""
Retrieve hierarchical_parents of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GenericResults" (Generic results stats -
Query execution information from ArangoDB. results - array of
objects of results. ts - Timestamp used in the request ns -
Ontology namespace used in the request.) -> structure: parameter
"stats" of unspecified object, parameter "results" of list of
unspecified object, parameter "ts" of Long, parameter "ns" of
String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_hierarchical_parents
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_hierarchicalParents", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_hierarchical_parents
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_hierarchical_parents return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_associated_ws_genomes(self, ctx, GenericParams):
"""
Retrieve associated workspace genome objects of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GetAssociatedWSObjectsResults" (Results
from get_associated_ws_objects stats - Query execution information
from ArangoDB. results - array of WSObjectsResults objects. ts -
Timestamp used in the request ns - Ontology namespace used in the
request. total_count - total count of associated workspace
objects) -> structure: parameter "stats" of unspecified object,
parameter "results" of list of type "WSObjectsWithFeatureCount"
(Workspace obj with count of associated workspace genome features
feature_count - count of features associated. ws_obj - WSObj
object) -> structure: parameter "feature_count" of Long, parameter
"ws_obj" of type "WSObj" (workspace object) -> structure:
parameter "workspace_id" of Long, parameter "object_id" of Long,
parameter "version" of Long, parameter "name" of String, parameter
"ts" of Long, parameter "ns" of String, parameter "total_count" of
Long
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_associated_ws_genomes
validated_params=misc.validate_params(GenericParams)
results = re_api.query("get_associated_ws_genomes", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["total_count"]=0
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"][0]["results"]
returnVal["total_count"]=results["results"][0]["total_count"]
#END get_associated_ws_genomes
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_associated_ws_genomes return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_associated_ws_features(self, ctx, GetAssociatedWSFeaturesParams):
"""
Retrieve associated workspace genome features of an ontology term by ID and workspace obj_ref
:param GetAssociatedWSFeaturesParams: instance of type
"GetAssociatedWSFeaturesParams" (Parameters for
get_terms_from_ws_feature id - required - ontology term id, such
as "GO:0016209" obj_ref - optional - workspace object ref, such as
"6976/926/2" ts - optional - fetch documents with this active
timestamp, defaults to now ns - optional - ontology namespace to
use, defaults to "go" limit - optional - number of results to
return (defaults to 20) offset - optional - number of results to
skip (defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "obj_ref" of
String, parameter "ns" of String, parameter "ts" of Long,
parameter "limit" of Long, parameter "offset" of Long
:returns: instance of type "GetAssociatedWSFeaturesResults" (Results
from get_associated_ws_features stats - Query execution
information from ArangoDB. results - array of WSObjectsResults
objects. ts - Timestamp used in the request ns - Ontology
namespace used in the request. total_count - total count of
associated workspace features) -> structure: parameter "stats" of
unspecified object, parameter "results" of list of type
"WSObjWithWSFeatures" (Workspace obj with associated workspace
genome features ws_obj - WSObj object features - a list of
FeatureLite object) -> structure: parameter "ws_obj" of type
"WSObj" (workspace object) -> structure: parameter "workspace_id"
of Long, parameter "object_id" of Long, parameter "version" of
Long, parameter "name" of String, parameter "features" of list of
type "FeatureLite" (workspace genome feature, lite version) ->
structure: parameter "feature_id" of String, parameter
"updated_at" of Long, parameter "ts" of Long, parameter "ns" of
String, parameter "total_count" of Long
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_associated_ws_features
validated_params=misc.validate_params(GetAssociatedWSFeaturesParams, "get_associated_ws_features")
validated_params['obj_ref']=re.sub('/', ':', validated_params['obj_ref'])
results = re_api.query("get_associated_ws_features", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["total_count"]=0
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"][0]["results"]
returnVal["total_count"]=results["results"][0]["total_count"]
#END get_associated_ws_features
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_associated_ws_features return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_terms_from_ws_feature(self, ctx, GetTermsFromWSFeatureParams):
"""
Retrieve ontology terms of an workspace genome feature by workspace obj_ref and feature id
:param GetTermsFromWSFeatureParams: instance of type
"GetTermsFromWSFeatureParams" (Parameters for
get_terms_from_ws_feature obj_ref - required - workspace object
ref, such as "6976/926/2" feature_id - required - workspace
feature id, such as "b3908" ts - optional - fetch documents with
this active timestamp, defaults to now ns - optional - ontology
namespace to use, defaults to "go" limit - optional - number of
results to return (defaults to 20) offset - optional - number of
results to skip (defaults to 0)) -> structure: parameter "obj_ref"
of String, parameter "feature_id" of String, parameter "ns" of
String, parameter "ts" of Long, parameter "limit" of Long,
parameter "offset" of Long
:returns: instance of type "GetTermsFromWSFeatureResults" (Results
from get_terms_from_ws_feature stats - Query execution information
from ArangoDB. results - array of TermsWithWSFeature objects. ts -
Timestamp used in the request ns - Ontology namespace used in the
request.) -> structure: parameter "stats" of unspecified object,
parameter "results" of list of type "TermsWithWSFeature" (Ontology
terms with associated workspace genome feature terms - a list of
Term object feature - Feature object) -> structure: parameter
"terms" of list of type "Term" (Ontology term) -> structure:
parameter "id" of type "ID" (Ontology term id, such as
"GO:0000002"), parameter "name" of String, parameter "namespace"
of String, parameter "alt_ids" of list of String, parameter "def"
of unspecified object, parameter "comments" of list of String,
parameter "synonyms" of list of unspecified object, parameter
"xrefs" of list of unspecified object, parameter "created" of
Long, parameter "expired" of Long, parameter "feature" of type
"Feature" (workspace genome feature) -> structure: parameter
"feature_id" of String, parameter "updated_at" of Long, parameter
"workspace_id" of Long, parameter "object_id" of Long, parameter
"version" of Long, parameter "ts" of Long, parameter "ns" of String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_terms_from_ws_feature
validated_params=misc.validate_params(GetTermsFromWSFeatureParams, "get_terms_from_ws_feature")
validated_params['obj_ref']=re.sub('/', ':', validated_params['obj_ref'])
validated_params['feature_id']=validated_params['obj_ref'] + '_' + validated_params['feature_id']
del validated_params['obj_ref']
results = re_api.query("get_terms_from_ws_feature", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_terms_from_ws_feature
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_terms_from_ws_feature return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_terms_from_ws_object(self, ctx, GetTermsFromWSObjParams):
"""
Retrieve ontology terms of an workspace object by workspace obj_ref
:param GetTermsFromWSObjParams: instance of type
"GetTermsFromWSObjParams" (Parameters for get_terms_from_ws_object
obj_ref - required - workspace object ref, such as "6976/926/2" ts
- optional - fetch documents with this active timestamp, defaults
to now ns - optional - ontology namespace to use, defaults to "go"
limit - optional - number of results to return (defaults to 20)
offset - optional - number of results to skip (defaults to 0)) ->
structure: parameter "obj_ref" of String, parameter "ns" of
String, parameter "ts" of Long, parameter "limit" of Long,
parameter "offset" of Long
:returns: instance of type "GetTermsFromWSObjResults" (Results from
get_terms_from_ws_obj stats - Query execution information from
ArangoDB. results - array of TermsWithWSFeature objects. ts -
Timestamp used in the request ns - Ontology namespace used in the
request.) -> structure: parameter "stats" of unspecified object,
parameter "results" of list of type "TermsWithWSFeature" (Ontology
terms with associated workspace genome feature terms - a list of
Term object feature - Feature object) -> structure: parameter
"terms" of list of type "Term" (Ontology term) -> structure:
parameter "id" of type "ID" (Ontology term id, such as
"GO:0000002"), parameter "name" of String, parameter "namespace"
of String, parameter "alt_ids" of list of String, parameter "def"
of unspecified object, parameter "comments" of list of String,
parameter "synonyms" of list of unspecified object, parameter
"xrefs" of list of unspecified object, parameter "created" of
Long, parameter "expired" of Long, parameter "feature" of type
"Feature" (workspace genome feature) -> structure: parameter
"feature_id" of String, parameter "updated_at" of Long, parameter
"workspace_id" of Long, parameter "object_id" of Long, parameter
"version" of Long, parameter "ts" of Long, parameter "ns" of String
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_terms_from_ws_object
validated_params=misc.validate_params(GetTermsFromWSObjParams, "get_terms_from_ws_object")
validated_params['obj_ref']=re.sub('/', ':', validated_params['obj_ref'])
results = re_api.query("get_terms_from_ws_object", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["error"]=results.get('error')
else:
returnVal["results"]=results["results"]
#END get_terms_from_ws_object
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_terms_from_ws_object return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def get_associated_samples(self, ctx, GenericParams):
"""
Retrieve associated samples of an ontology term by ID
:param GenericParams: instance of type "GenericParams" (Generic
Parameters id - required - ontology term id, such as "GO:0016209"
ts - optional - fetch documents with this active timestamp,
defaults to now ns - optional - ontology namespace to use,
defaults to "go" limit - optional - number of results to return
(defaults to 20) offset - optional - number of results to skip
(defaults to 0)) -> structure: parameter "id" of type "ID"
(Ontology term id, such as "GO:0000002"), parameter "ts" of Long,
parameter "ns" of String, parameter "limit" of Long, parameter
"offset" of Long
:returns: instance of type "GetAssociatedSamplesResults" (Results
from get_associated_samples stats - Query execution information
from ArangoDB. results - array of SampleWithMetadataKey objects.
ts - Timestamp used in the request ns - Ontology namespace used in
the request. total_count - total count of associated samples) ->
structure: parameter "stats" of unspecified object, parameter
"results" of list of type "SampleWithMetadataKey" (Sample data
with sample_metadata_key id - sample id name - sample name
node_tree - sample metadata save_date - sample data saved date
version - sample data version sample_metadata_key - metadata key
referencing ontology term) -> structure: parameter "id" of String,
parameter "name" of String, parameter "node_tree" of unspecified
object, parameter "save_date" of Long, parameter "version" of
Long, parameter "sample_metadata_key" of String, parameter "ts" of
Long, parameter "ns" of String, parameter "total_count" of Long
"""
# ctx is the context object
# return variables are: returnVal
#BEGIN get_associated_samples
user_id=ctx.get('user_id')
validated_params=misc.validate_params(GenericParams, 'get_associated_samples')
results = re_api.query("get_associated_samples", validated_params)
returnVal={"stats": results["stats"], "ts": validated_params["ts"], "ns": validated_params["ns"]}
if results.get('error'):
returnVal["results"]=[]
returnVal["total_count"]=0
returnVal["error"]=results.get('error')
else:
returnVal["total_count"]=results["results"][0]["total_count"]
returnVal["results"]=[]
for x in results["results"][0]["results"]:
_sample_access=x.get("sample_access")
_sample_metadata_key=x.get("sample_metadata_key")
_sample=x.get("sample")
if None in [_sample, _sample_metadata_key, _sample_access]:
continue
if not _sample_access["acls"]["pubread"] \
and user_id != _sample_access["acls"]["owner"] \
and user_id not in _sample_access["acls"]["admin"] \
and user_id not in _sample_access["acls"]["read"]:
continue
sample={"id": _sample["id"], "save_date": int(_sample["saved"] * 1000),
"version": _sample["ver"], "user": _sample_access["acls"]["owner"]}
node_tree={"id": _sample["name"], "type": _sample["type"], "parent": _sample["parent"]}
meta_controlled={}
for m in _sample.get("cmeta", []):
if m["ok"] not in meta_controlled:
meta_controlled[m["ok"]]={}
meta_controlled[m["ok"]][m["k"]]=m["v"]
sample_name=meta_controlled.get("name")
sample["name"]=sample_name["value"] if sample_name is not None else None
node_tree["meta_controlled"]=meta_controlled
meta_user={}
for m in _sample.get("ucmeta", []):
if m["ok"] not in meta_user:
meta_user[m["ok"]]={}
meta_user[m["ok"]][m["k"]]=m["v"]
node_tree["meta_user"]=meta_user
source_meta={}
for m in _sample.get("smeta", []):
source_meta["key"]=m["k"]
source_meta["skey"]=m["sk"]
source_meta["svalue"]=m["v"]
node_tree["source_meta"]=source_meta
sample["node_tree"]=[node_tree]
returnVal["results"].append({"sample": sample, "sample_metadata_key": _sample_metadata_key})
#END get_associated_samples
# At some point might do deeper type checking...
if not isinstance(returnVal, dict):
raise ValueError('Method get_associated_samples return value ' +
'returnVal is not type dict as required.')
# return the results
return [returnVal]
def status(self, ctx):
#BEGIN_STATUS
returnVal = {'state': "OK",
'message': "",
'version': self.VERSION,
'git_url': self.GIT_URL,
'git_commit_hash': self.GIT_COMMIT_HASH}
#END_STATUS
return [returnVal]
| 52.89372 | 108 | 0.626792 | 5,000 | 43,796 | 5.3958 | 0.0544 | 0.019793 | 0.038919 | 0.035287 | 0.864154 | 0.836651 | 0.82112 | 0.811668 | 0.80559 | 0.802068 | 0 | 0.010998 | 0.283747 | 43,796 | 827 | 109 | 52.957678 | 0.849055 | 0.559092 | 0 | 0.607273 | 0 | 0 | 0.200136 | 0.038638 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065455 | false | 0.003636 | 0.010909 | 0 | 0.152727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ba285978c1b9f6078cf6e157071b628e760cf0f9 | 170 | py | Python | docker-files/sovrin_config.py | sovrin-foundation/old-sovrin | d4e705054b7252c62fea00114060035c6eb314a4 | [
"Apache-2.0"
] | 3 | 2017-07-19T14:26:31.000Z | 2020-05-16T16:09:37.000Z | docker-files/sovrin_config.py | sovrin-foundation/old-sovrin | d4e705054b7252c62fea00114060035c6eb314a4 | [
"Apache-2.0"
] | null | null | null | docker-files/sovrin_config.py | sovrin-foundation/old-sovrin | d4e705054b7252c62fea00114060035c6eb314a4 | [
"Apache-2.0"
] | 3 | 2017-10-28T08:19:00.000Z | 2021-06-06T10:48:55.000Z | OrientDB = {
"user": "sovrin",
"password": "password",
"startScript": "/opt/orientdb/bin/server.sh",
"shutdownScript": "/opt/orientdb/bin/shutdown.sh"
}
| 21.25 | 53 | 0.617647 | 17 | 170 | 6.176471 | 0.647059 | 0.209524 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170588 | 170 | 7 | 54 | 24.285714 | 0.744681 | 0 | 0 | 0 | 0 | 0 | 0.633136 | 0.331361 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e8449930467f0ad72e5d030dbebffb88074f7745 | 209 | py | Python | src/part_2_automation/bdd_example/features/environment.py | AndreiHustiuc/IT_Factory_Course | c6f3e4a9282a1c19c0f52c79f0c81f026814a02a | [
"MIT"
] | null | null | null | src/part_2_automation/bdd_example/features/environment.py | AndreiHustiuc/IT_Factory_Course | c6f3e4a9282a1c19c0f52c79f0c81f026814a02a | [
"MIT"
] | null | null | null | src/part_2_automation/bdd_example/features/environment.py | AndreiHustiuc/IT_Factory_Course | c6f3e4a9282a1c19c0f52c79f0c81f026814a02a | [
"MIT"
] | 1 | 2022-03-16T10:39:03.000Z | 2022-03-16T10:39:03.000Z | def before_all(context):
print('Setup driver connection')
def after_all(contex):
print('Close driver connection')
# todo : test all hooks from https://www.tutorialspoint.com/behave/behave_hooks.htm
| 23.222222 | 83 | 0.746411 | 29 | 209 | 5.275862 | 0.724138 | 0.20915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138756 | 209 | 8 | 84 | 26.125 | 0.85 | 0.38756 | 0 | 0 | 0 | 0 | 0.365079 | 0 | 0 | 0 | 0 | 0.125 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e848001206b80d1922ba5ceb24fb55a4922ce369 | 131 | py | Python | src/utils/config.py | QingfengTang/YTSystem | 1b4540fd61e3f45fb4bf24ac670e3b0f0e41d701 | [
"Apache-2.0"
] | null | null | null | src/utils/config.py | QingfengTang/YTSystem | 1b4540fd61e3f45fb4bf24ac670e3b0f0e41d701 | [
"Apache-2.0"
] | null | null | null | src/utils/config.py | QingfengTang/YTSystem | 1b4540fd61e3f45fb4bf24ac670e3b0f0e41d701 | [
"Apache-2.0"
] | 1 | 2021-08-29T09:29:02.000Z | 2021-08-29T09:29:02.000Z | import os
# 基础路径 BASE_DIR = ././YTSystem/
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) | 26.2 | 87 | 0.732824 | 21 | 131 | 4.285714 | 0.47619 | 0.266667 | 0.433333 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0 | 0 | 0 | 0 | 0.083969 | 131 | 5 | 87 | 26.2 | 0.75 | 0.221374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e869510e4da236f4cecfcea7e900adc462aa673d | 80 | py | Python | coms.py | michaelescue/Code | dc052d206f9df424c8826c61b2d854bb79e43b86 | [
"MIT"
] | null | null | null | coms.py | michaelescue/Code | dc052d206f9df424c8826c61b2d854bb79e43b86 | [
"MIT"
] | null | null | null | coms.py | michaelescue/Code | dc052d206f9df424c8826c61b2d854bb79e43b86 | [
"MIT"
] | null | null | null | import serial
import time
import os
from serial.serialwin32 import Serial
| 8 | 37 | 0.7875 | 11 | 80 | 5.727273 | 0.545455 | 0.380952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.2 | 80 | 9 | 38 | 8.888889 | 0.953125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8a18ab18fd9ed5ca2e9cb66c5a7d3b6f5260a43 | 86 | py | Python | src/bgmtinygrail/db/__init__.py | no1xsyzy/bgmtinygrail | 4e762a58337f3021440a070967f1cb7a0213f8a6 | [
"MIT"
] | 5 | 2020-05-17T02:41:01.000Z | 2020-07-01T23:24:41.000Z | src/bgmtinygrail/db/__init__.py | no1xsyzy/bgmtinygrail | 4e762a58337f3021440a070967f1cb7a0213f8a6 | [
"MIT"
] | null | null | null | src/bgmtinygrail/db/__init__.py | no1xsyzy/bgmtinygrail | 4e762a58337f3021440a070967f1cb7a0213f8a6 | [
"MIT"
] | 1 | 2021-02-09T04:41:15.000Z | 2021-02-09T04:41:15.000Z | from . import _base
from . import accounts
from . import strategy
_base.create_all()
| 14.333333 | 22 | 0.767442 | 12 | 86 | 5.25 | 0.583333 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 86 | 5 | 23 | 17.2 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8ae464912e041af89c3fac54872a44833bc5b20 | 143 | py | Python | config_template.py | seaty6/forgotten-songs | 2395d3e20494e6081890ab3c625c4edc73cfc931 | [
"MIT"
] | null | null | null | config_template.py | seaty6/forgotten-songs | 2395d3e20494e6081890ab3c625c4edc73cfc931 | [
"MIT"
] | null | null | null | config_template.py | seaty6/forgotten-songs | 2395d3e20494e6081890ab3c625c4edc73cfc931 | [
"MIT"
] | null | null | null | last_fm_apikey = 'last_fm_apikey'
last_fm_secret = 'last_fm_secret'
last_fm_username = 'last_fm_username'
last_fm_password = 'last_fm_password' | 35.75 | 37 | 0.839161 | 24 | 143 | 4.333333 | 0.25 | 0.461538 | 0.230769 | 0.307692 | 0.788462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 143 | 4 | 38 | 35.75 | 0.787879 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.25 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e8d104091654a0e42b8a6b6015a8caa66f518eeb | 236 | py | Python | home/admin.py | shreyjha2309/pixelvibe | abd478cfa1c1c26b21d5728b628577aac65feade | [
"MIT"
] | 35 | 2020-12-17T14:45:37.000Z | 2021-05-30T09:36:33.000Z | home/admin.py | shreyjha2309/pixelvibe | abd478cfa1c1c26b21d5728b628577aac65feade | [
"MIT"
] | 327 | 2020-12-17T14:37:13.000Z | 2021-06-12T06:52:09.000Z | home/admin.py | shreyjha2309/pixelvibe | abd478cfa1c1c26b21d5728b628577aac65feade | [
"MIT"
] | 116 | 2020-12-17T06:28:28.000Z | 2021-06-04T15:16:28.000Z | from django.contrib import admin
from import_export.admin import ImportExportModelAdmin
from home.models import Contact,Gallery
admin.site.register(Gallery)
@admin.register(Contact)
class ContactAdmin(ImportExportModelAdmin):
pass | 26.222222 | 54 | 0.84322 | 28 | 236 | 7.071429 | 0.571429 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09322 | 236 | 9 | 55 | 26.222222 | 0.925234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.142857 | 0.571429 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.