hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4115154b2793e618bb09dcef63234d601449a08c | 127 | py | Python | Solutions/Habitable exoplanets.py | GuardsmanPanda/ProjectLovelace | 50549114acfe98ae9511e3ec5d0e6c1335e30db9 | [
"MIT"
] | null | null | null | Solutions/Habitable exoplanets.py | GuardsmanPanda/ProjectLovelace | 50549114acfe98ae9511e3ec5d0e6c1335e30db9 | [
"MIT"
] | null | null | null | Solutions/Habitable exoplanets.py | GuardsmanPanda/ProjectLovelace | 50549114acfe98ae9511e3ec5d0e6c1335e30db9 | [
"MIT"
] | null | null | null | def habitable_exoplanet(L, r):
return 'too hot' if r < (L/1.11)**0.5 else 'too cold' if r > (L/0.54)**0.5 else 'just right' | 63.5 | 96 | 0.614173 | 28 | 127 | 2.75 | 0.642857 | 0.077922 | 0.103896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 0.181102 | 127 | 2 | 96 | 63.5 | 0.644231 | 0 | 0 | 0 | 0 | 0 | 0.195313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
f5c91deeb3888b319de3cd1e9d15bd34c498b056 | 229 | py | Python | contact/views.py | asis2016/momo-ristorante-v1 | d46c36d1b92212ade34d781c4e2adc91cb52cac7 | [
"MIT"
] | null | null | null | contact/views.py | asis2016/momo-ristorante-v1 | d46c36d1b92212ade34d781c4e2adc91cb52cac7 | [
"MIT"
] | null | null | null | contact/views.py | asis2016/momo-ristorante-v1 | d46c36d1b92212ade34d781c4e2adc91cb52cac7 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.views.generic import TemplateView
class ContactView(TemplateView):
template_name = 'contact/index.html'
def get_success_url(self):
return reverse('contact:index')
| 25.444444 | 45 | 0.764192 | 28 | 229 | 6.142857 | 0.785714 | 0.116279 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152838 | 229 | 8 | 46 | 28.625 | 0.886598 | 0 | 0 | 0 | 0 | 0 | 0.135371 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
f5d1ca3568dbffb69a1334a2ae790c58b186bc45 | 256 | py | Python | tests/test_map_dictionary_keys.py | melwell89/map-dictionary-keys | 4133957f32b5e3f4b987694281cec7692ee1b0f0 | [
"MIT"
] | null | null | null | tests/test_map_dictionary_keys.py | melwell89/map-dictionary-keys | 4133957f32b5e3f4b987694281cec7692ee1b0f0 | [
"MIT"
] | null | null | null | tests/test_map_dictionary_keys.py | melwell89/map-dictionary-keys | 4133957f32b5e3f4b987694281cec7692ee1b0f0 | [
"MIT"
] | null | null | null | from .test_data import input_dict, expected_output
from map_dictionary_keys import map_dictionary_keys
class TestMapDictionaryKeys:
def test_valid_case(self):
assert map_dictionary_keys(input_dict, lambda key: key.upper()) == expected_output
| 32 | 90 | 0.808594 | 35 | 256 | 5.542857 | 0.6 | 0.201031 | 0.262887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132813 | 256 | 7 | 91 | 36.571429 | 0.873874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f5de34216649cfc00fd21d4ceede0ad047c01206 | 1,147 | py | Python | tests/test_float.py | anistark/Parsenvy | 280636676b80236b95e48c34fda1fb1e7343021e | [
"BSD-3-Clause"
] | 39 | 2017-02-24T04:43:00.000Z | 2021-02-24T02:13:29.000Z | tests/test_float.py | anistark/Parsenvy | 280636676b80236b95e48c34fda1fb1e7343021e | [
"BSD-3-Clause"
] | 41 | 2017-04-28T02:45:21.000Z | 2021-02-25T06:50:51.000Z | tests/test_float.py | anistark/Parsenvy | 280636676b80236b95e48c34fda1fb1e7343021e | [
"BSD-3-Clause"
] | 11 | 2017-04-04T01:38:18.000Z | 2021-02-24T01:58:05.000Z | import pytest
import parsenvy
def test_float_positive_integer(monkeypatch):
monkeypatch.setenv("foo", str(float(13)))
assert parsenvy.float("foo") == float(13)
def test_float_positive_decimal(monkeypatch):
monkeypatch.setenv("foo", str(float(13.42)))
assert parsenvy.float("foo") == float(13.42)
def test_float_negative_integer(monkeypatch):
monkeypatch.setenv("foo", str(float(-13)))
assert parsenvy.float("foo") == float(-13)
def test_float_negative_decimal(monkeypatch):
monkeypatch.setenv("foo", str(float(-13.42)))
assert parsenvy.float("foo") == float(-13.42)
def test_float_zero(monkeypatch):
monkeypatch.setenv("foo", str(float(0)))
assert parsenvy.float("foo") == float(0)
def test_float_negative_zero(monkeypatch):
monkeypatch.setenv("foo", str(float(-0)))
assert parsenvy.float("foo") == float(-0)
def test_float_invalid(monkeypatch):
monkeypatch.setenv("foo", "bar")
with pytest.raises(TypeError):
parsenvy.float("foo")
def test_float_empty(monkeypatch):
monkeypatch.setenv("foo", "")
with pytest.raises(TypeError):
parsenvy.float("foo")
| 24.934783 | 49 | 0.699215 | 146 | 1,147 | 5.349315 | 0.171233 | 0.071703 | 0.122919 | 0.317542 | 0.786172 | 0.786172 | 0.786172 | 0.681178 | 0.681178 | 0.681178 | 0 | 0.028455 | 0.14211 | 1,147 | 45 | 50 | 25.488889 | 0.765244 | 0 | 0 | 0.142857 | 0 | 0 | 0.044464 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.285714 | false | 0 | 0.071429 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
de477a57ca703415ab09d1bd35d0f1d6d2478d63 | 651 | py | Python | plugins/github/komand_github/actions/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/github/komand_github/actions/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/github/komand_github/actions/__init__.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
from .add_collaborator.action import AddCollaborator
from .add_issue_label.action import AddIssueLabel
from .add_membership.action import AddMembership
from .block_user.action import BlockUser
from .close_issue.action import CloseIssue
from .create.action import Create
from .create_issue_comment.action import CreateIssueComment
from .get_issues_by_repo.action import GetIssuesByRepo
from .get_my_issues.action import GetMyIssues
from .get_repo.action import GetRepo
from .remove.action import Remove
from .search.action import Search
from .unblock_user.action import UnblockUser
from .user.action import User
| 40.6875 | 59 | 0.854071 | 92 | 651 | 5.880435 | 0.402174 | 0.310536 | 0.088725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099846 | 651 | 15 | 60 | 43.4 | 0.923208 | 0.056836 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
de89de7c62cfa5032824293ea143797f51d17805 | 184 | py | Python | datamart/materializers/parsers/csv_parser.py | juancroldan/datamart | 9ec3b99f36192f812edd74ad2262bebccc22bc66 | [
"MIT"
] | 7 | 2018-10-02T01:32:23.000Z | 2020-10-08T00:42:35.000Z | datamart/materializers/parsers/csv_parser.py | juancroldan/datamart | 9ec3b99f36192f812edd74ad2262bebccc22bc66 | [
"MIT"
] | 47 | 2018-10-02T05:41:13.000Z | 2021-02-02T21:50:31.000Z | datamart/materializers/parsers/csv_parser.py | juancroldan/datamart | 9ec3b99f36192f812edd74ad2262bebccc22bc66 | [
"MIT"
] | 19 | 2018-10-01T22:27:20.000Z | 2019-02-28T18:59:53.000Z | from datamart.materializers.parsers.parser_base import *
class CSVParser(ParserBase):
def get_all(self, url: str) -> typing.List[pd.DataFrame]:
return [pd.read_csv(url)] | 26.285714 | 61 | 0.722826 | 25 | 184 | 5.2 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 184 | 7 | 62 | 26.285714 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
de8a4ff49d12559c89bd27d9aa94d880d3487660 | 179 | py | Python | app/api_v1/__init__.py | Medeirox/siots | 96e0ac2ffec723d7343298daeae14f8755948763 | [
"MIT"
] | null | null | null | app/api_v1/__init__.py | Medeirox/siots | 96e0ac2ffec723d7343298daeae14f8755948763 | [
"MIT"
] | null | null | null | app/api_v1/__init__.py | Medeirox/siots | 96e0ac2ffec723d7343298daeae14f8755948763 | [
"MIT"
] | null | null | null | from flask import Blueprint
api_v1 = Blueprint('api_v1', __name__)
from . import views
from . import models
models.create_table(models.Feed)
models.create_table(models.Device)
| 17.9 | 38 | 0.793296 | 26 | 179 | 5.153846 | 0.5 | 0.179104 | 0.208955 | 0.343284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.117318 | 179 | 9 | 39 | 19.888889 | 0.835443 | 0 | 0 | 0 | 0 | 0 | 0.03352 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.333333 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
de8b70c0a06e176fada9e39910000eaed196af12 | 659 | py | Python | vshare/utils/getnow.py | jeyrce/vshare | 269fe05a4dc36f6fbf831ddf5057af95312b75ca | [
"Apache-2.0"
] | 4 | 2019-11-30T06:07:14.000Z | 2020-10-27T08:48:23.000Z | vshare/utils/getnow.py | jeeyshe/vshare | 269fe05a4dc36f6fbf831ddf5057af95312b75ca | [
"Apache-2.0"
] | null | null | null | vshare/utils/getnow.py | jeeyshe/vshare | 269fe05a4dc36f6fbf831ddf5057af95312b75ca | [
"Apache-2.0"
] | null | null | null | # coding = utf-8
# env = python3.5.2
# author = lujianxin
# time = 2018-04-20
# purpose= 获得格式化当前时间
import time
def now():
now_ = time.strftime('%Y-%m-%d %H:%M:%S')
return now_
def date_time():
return time.strftime('%Y-%m-%d')
def time_time():
return time.strftime('%H:%M:%S')
def this_year():
return time.strftime('%Y')
def this_month():
return time.strftime('%m')
def this_day():
return time.strftime('%d')
def this_hour():
return time.strftime('%H')
def this_minute():
return time.strftime('%M')
def this_second():
return time.strftime('%S')
if __name__ == '__main__':
print(now(), type(now()))
pass
| 19.969697 | 45 | 0.616085 | 98 | 659 | 3.959184 | 0.397959 | 0.278351 | 0.371134 | 0.072165 | 0.21134 | 0.134021 | 0 | 0 | 0 | 0 | 0 | 0.022472 | 0.189681 | 659 | 32 | 46 | 20.59375 | 0.70412 | 0.141123 | 0 | 0 | 0 | 0 | 0.094812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.391304 | false | 0.043478 | 0.043478 | 0.347826 | 0.826087 | 0.043478 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
72179d7661aca0826064ae0fb9ddcdaa2e0b0326 | 85 | py | Python | sango/ext.py | short-greg/sango | 68bcdbe8f4784fef6f7fc382ec2c4e81911c2a8a | [
"MIT"
] | null | null | null | sango/ext.py | short-greg/sango | 68bcdbe8f4784fef6f7fc382ec2c4e81911c2a8a | [
"MIT"
] | null | null | null | sango/ext.py | short-greg/sango | 68bcdbe8f4784fef6f7fc382ec2c4e81911c2a8a | [
"MIT"
] | 1 | 2022-01-27T15:39:10.000Z | 2022-01-27T15:39:10.000Z | from ._nodes import *
from ._vars import *
from ._utils import *
from ._ext import *
| 17 | 21 | 0.717647 | 12 | 85 | 4.75 | 0.5 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188235 | 85 | 4 | 22 | 21.25 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
72209e135cf101ab138efd0f1a0f2cd87a3d5c7e | 151 | py | Python | redux/cluster.py | devsearchcomponent/redux-python | 6a026dd6ff9fcfb6631b42880f96341492ddbda9 | [
"MIT"
] | 1 | 2018-08-27T12:29:13.000Z | 2018-08-27T12:29:13.000Z | redux/cluster.py | xdusongwei/redux-python | 6a026dd6ff9fcfb6631b42880f96341492ddbda9 | [
"MIT"
] | null | null | null | redux/cluster.py | xdusongwei/redux-python | 6a026dd6ff9fcfb6631b42880f96341492ddbda9 | [
"MIT"
] | null | null | null | """
一些未来打算增加的功能
redux自身把数据都集中在了state中,如果state可以序列化,很可能reducer也可以在任意网络中的进程中进行数据迁移
另一方便,如果redux作为服务,所有在此运行的reducer可以作为容器承载在redux中,可能可以是热升级的机制的实现办法
"""
| 18.875 | 64 | 0.880795 | 8 | 151 | 16.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046358 | 151 | 7 | 65 | 21.571429 | 0.923611 | 0.940397 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
72435c87dd3c50a103223aa07ba6d57340e84d0c | 5,332 | py | Python | farmer/ncc/losses/losses.py | aiorhiroki/farmer | cf3bb93173efbf4a34b782be9faf40c707152ab8 | [
"Apache-2.0"
] | 10 | 2019-04-04T07:32:47.000Z | 2021-01-07T00:40:50.000Z | farmer/ncc/losses/losses.py | aiorhiroki/farmer | cf3bb93173efbf4a34b782be9faf40c707152ab8 | [
"Apache-2.0"
] | 59 | 2019-04-18T05:44:31.000Z | 2021-05-02T10:33:02.000Z | farmer/ncc/losses/losses.py | aiorhiroki/farmer | cf3bb93173efbf4a34b782be9faf40c707152ab8 | [
"Apache-2.0"
] | 4 | 2020-01-23T14:01:43.000Z | 2021-02-11T04:16:14.000Z | import tensorflow as tf
import segmentation_models
from segmentation_models.base import Loss
from segmentation_models.losses import CategoricalCELoss
from ..losses import functional as F
segmentation_models.set_framework('tf.keras')
class DiceLoss(Loss):
def __init__(self, beta=1, class_weights=None, flooding_level=0.):
super().__init__(name='dice_loss')
self.beta = beta
self.class_weights = class_weights if class_weights is not None else 1
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.dice_loss(
gt=gt,
pr=pr,
beta=self.beta,
class_weights=self.class_weights
), self.flooding_level)
class JaccardLoss(Loss):
def __init__(self, class_weights=None, flooding_level=0.):
super().__init__(name='jaccard_loss')
self.class_weights = class_weights if class_weights is not None else 1
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.jaccard_loss(
gt=gt,
pr=pr,
class_weights=self.class_weights
), self.flooding_level)
class TverskyLoss(Loss):
def __init__(self, alpha=0.45, beta=0.55, class_weights=None, flooding_level=0.):
super().__init__(name='tversky_loss')
self.alpha = alpha
self.beta = beta
self.class_weights = class_weights if class_weights is not None else 1.
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.tversky_loss(
gt=gt,
pr=pr,
alpha=self.alpha,
beta=self.beta,
class_weights=self.class_weights
), self.flooding_level)
class FocalTverskyLoss(Loss):
def __init__(self, alpha=0.45, beta=0.55, gamma=2.5, class_weights=None, flooding_level=0.):
super().__init__(name='focal_tversky_loss')
self.alpha = alpha
self.beta = beta
self.gamma = gamma
self.class_weights = class_weights if class_weights is not None else 1.
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.focal_tversky_loss(
gt=gt,
pr=pr,
alpha=self.alpha,
beta=self.beta,
gamma=self.gamma,
class_weights=self.class_weights
), self.flooding_level)
class CategoricalFocalLoss(Loss):
def __init__(self, alpha=0.25, gamma=2., class_weights=None, flooding_level=0.):
super().__init__(name='categorical_focal_loss')
self.alpha = alpha
self.gamma = gamma
self.class_weights = class_weights if class_weights is not None else 1.
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.categorical_focal_loss(
gt,
pr,
alpha=self.alpha,
gamma=self.gamma,
class_weights=self.class_weights
), self.flooding_level)
class LogCoshDiceLoss(Loss):
def __init__(self, beta=1, class_weights=None, flooding_level=0.):
super().__init__(name='log_cosh_dice_loss')
self.beta = beta
self.class_weights = class_weights if class_weights is not None else 1
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.log_cosh_dice_loss(
gt=gt,
pr=pr,
beta=self.beta,
class_weights=self.class_weights
), self.flooding_level)
class LogCoshTverskyLoss(Loss):
def __init__(self, alpha=0.3, beta=0.7, class_weights=None, flooding_level=0.):
super().__init__(name='log_cosh_tversky_loss')
self.alpha = alpha
self.beta = beta
self.class_weights = class_weights if class_weights is not None else 1.
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.log_cosh_tversky_loss(
gt=gt,
pr=pr,
alpha=self.alpha,
beta=self.beta,
class_weights=self.class_weights
), self.flooding_level)
class LogCoshFocalTverskyLoss(Loss):
def __init__(self, alpha=0.3, beta=0.7, gamma=1.3, class_weights=None, flooding_level=0.):
super().__init__(name='log_cosh_focal_tversky_loss')
self.alpha = alpha
self.beta = beta
self.gamma = gamma
self.class_weights = class_weights if class_weights is not None else 1.
self.flooding_level = flooding_level
def __call__(self, gt, pr):
return F.flooding(F.log_cosh_focal_tversky_loss(
gt=gt,
pr=pr,
alpha=self.alpha,
beta=self.beta,
gamma=self.gamma,
class_weights=self.class_weights
), self.flooding_level)
class LogCoshLoss(Loss):
def __init__(self, base_loss, flooding_level=0., **kwargs):
super().__init__(name=f'log_cosh_{base_loss}')
self.loss = getattr(F, base_loss)
self.flooding_level = flooding_level
self.kwargs = kwargs
def __call__(self, gt, pr):
x = self.loss(gt, pr, **self.kwargs)
return F.flooding(
tf.math.log((tf.exp(x) + tf.exp(-x)) / 2.0),
self.flooding_level)
| 32.91358 | 96 | 0.632408 | 697 | 5,332 | 4.499283 | 0.096126 | 0.183673 | 0.097577 | 0.043048 | 0.797832 | 0.773916 | 0.767219 | 0.767219 | 0.767219 | 0.696429 | 0 | 0.012571 | 0.268942 | 5,332 | 161 | 97 | 33.118012 | 0.791945 | 0 | 0 | 0.646617 | 0 | 0 | 0.03132 | 0.013128 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135338 | false | 0 | 0.037594 | 0.06015 | 0.308271 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a0d4dd994b446c9305599feaf6fb2f202da378bb | 33 | py | Python | main_start/config_var.py | aviskumar/speedo | 758e8ac1fdeeb0b72c3a57742032ca5c79f0b2fa | [
"BSD-3-Clause"
] | null | null | null | main_start/config_var.py | aviskumar/speedo | 758e8ac1fdeeb0b72c3a57742032ca5c79f0b2fa | [
"BSD-3-Clause"
] | null | null | null | main_start/config_var.py | aviskumar/speedo | 758e8ac1fdeeb0b72c3a57742032ca5c79f0b2fa | [
"BSD-3-Clause"
] | 3 | 2021-10-12T08:17:01.000Z | 2021-12-21T01:17:54.000Z | from session.config_var import *
| 16.5 | 32 | 0.818182 | 5 | 33 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9d09cbad278532a5ece80abaa8c83bca65292c8e | 86 | py | Python | data_hacking/lsh_sims/__init__.py | c4pr1c3/data_hacking | a2c746375a2b8704eb8f263f6e2b3250ad7ec0ab | [
"MIT"
] | 1 | 2022-02-19T11:36:37.000Z | 2022-02-19T11:36:37.000Z | data_hacking/lsh_sims/__init__.py | c4pr1c3/data_hacking | a2c746375a2b8704eb8f263f6e2b3250ad7ec0ab | [
"MIT"
] | null | null | null | data_hacking/lsh_sims/__init__.py | c4pr1c3/data_hacking | a2c746375a2b8704eb8f263f6e2b3250ad7ec0ab | [
"MIT"
] | 3 | 2017-09-23T01:17:54.000Z | 2022-03-23T13:11:37.000Z | '''Package for the LSH (Locality Sensitive Hashing) Module'''
from .lsh_sims import *
| 28.666667 | 61 | 0.744186 | 12 | 86 | 5.25 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 86 | 2 | 62 | 43 | 0.851351 | 0.639535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9d358430f00ecead9b8842c993c9f1dfce303c82 | 77 | py | Python | Contest/ABC017/a/main.py | mpses/AtCoder | 9c101fcc0a1394754fcf2385af54b05c30a5ae2a | [
"CC0-1.0"
] | null | null | null | Contest/ABC017/a/main.py | mpses/AtCoder | 9c101fcc0a1394754fcf2385af54b05c30a5ae2a | [
"CC0-1.0"
] | null | null | null | Contest/ABC017/a/main.py | mpses/AtCoder | 9c101fcc0a1394754fcf2385af54b05c30a5ae2a | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
print(eval("+eval(input().replace(' ','*'))"*3) // 10) | 38.5 | 54 | 0.571429 | 11 | 77 | 4 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.064935 | 77 | 2 | 54 | 38.5 | 0.555556 | 0.272727 | 0 | 0 | 0 | 0 | 0.553571 | 0.410714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
9d36117e34cf09d2bbd19c030a83ea0ef89a9f46 | 108 | py | Python | feed/admin.py | mentix02/instagram | 561f10a026fd25cb661e901c3404d050ab16620e | [
"MIT"
] | 3 | 2021-03-31T08:46:17.000Z | 2021-11-09T12:50:26.000Z | feed/admin.py | mentix02/instagram | 561f10a026fd25cb661e901c3404d050ab16620e | [
"MIT"
] | null | null | null | feed/admin.py | mentix02/instagram | 561f10a026fd25cb661e901c3404d050ab16620e | [
"MIT"
] | null | null | null | from django.contrib import admin
from feed.models import FollowRequest
admin.site.register(FollowRequest)
| 18 | 37 | 0.842593 | 14 | 108 | 6.5 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101852 | 108 | 5 | 38 | 21.6 | 0.938144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c232b25fc17c6e2f0c81d6d652fca4e89cfd8b4a | 130 | py | Python | server/schedule/admin.py | adamA113/servize | 89933c3864d997188ec79ad690b37f51bca54aa3 | [
"MIT"
] | null | null | null | server/schedule/admin.py | adamA113/servize | 89933c3864d997188ec79ad690b37f51bca54aa3 | [
"MIT"
] | null | null | null | server/schedule/admin.py | adamA113/servize | 89933c3864d997188ec79ad690b37f51bca54aa3 | [
"MIT"
] | 2 | 2020-12-26T09:50:17.000Z | 2020-12-26T09:52:45.000Z | from django.contrib import admin
from schedule.models import Schedule
# Register your models here.
admin.site.register(Schedule)
| 21.666667 | 36 | 0.823077 | 18 | 130 | 5.944444 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 130 | 5 | 37 | 26 | 0.930435 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c24db8227b4279b41a1e6cf364054b485c4b88f6 | 83 | py | Python | todo/backend/todos/models/__init__.py | idle-solutions/vk-game | 08aeff3fdd2a74ee1942bfe064fff988973aacdc | [
"MIT"
] | null | null | null | todo/backend/todos/models/__init__.py | idle-solutions/vk-game | 08aeff3fdd2a74ee1942bfe064fff988973aacdc | [
"MIT"
] | 1 | 2019-10-23T15:32:53.000Z | 2019-10-23T15:32:53.000Z | todo/backend/todos/models/__init__.py | idle-solutions/vk-game | 08aeff3fdd2a74ee1942bfe064fff988973aacdc | [
"MIT"
] | null | null | null | from .character import Character
from .player import Player
from .todo import Todo
| 20.75 | 32 | 0.819277 | 12 | 83 | 5.666667 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 83 | 3 | 33 | 27.666667 | 0.957746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
dfd22cef763145d2705b8139e9754eca5355838d | 38 | py | Python | tests/__init__.py | mechanicbuddy/djangito | 07c08a83c57577cbf945bba461219bc0ef2a7695 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | mechanicbuddy/djangito | 07c08a83c57577cbf945bba461219bc0ef2a7695 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | mechanicbuddy/djangito | 07c08a83c57577cbf945bba461219bc0ef2a7695 | [
"Apache-2.0"
] | null | null | null | """Unit test package for djangito."""
| 19 | 37 | 0.684211 | 5 | 38 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.787879 | 0.815789 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
dfdb025fd0ebe22a13185257e8f511f2ac9dbe8b | 70 | py | Python | UI/__init__.py | mjbogusz/TSPGen | 4916cf6276fda41b73ebdf24a7969167c63d0650 | [
"MIT"
] | null | null | null | UI/__init__.py | mjbogusz/TSPGen | 4916cf6276fda41b73ebdf24a7969167c63d0650 | [
"MIT"
] | null | null | null | UI/__init__.py | mjbogusz/TSPGen | 4916cf6276fda41b73ebdf24a7969167c63d0650 | [
"MIT"
] | null | null | null | from UI.MapPainter import MapPainter
from UI.TSPGenUI import TSPGenUI
| 23.333333 | 36 | 0.857143 | 10 | 70 | 6 | 0.5 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 70 | 2 | 37 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a03149fd4e1b34c12ec32d95d384805f87ae773c | 66 | py | Python | lithium/tests/__init__.py | PressLabs/lithium | a222e4021aabcbec0fd24bcecf904a0ee7ec852d | [
"Apache-2.0"
] | 2 | 2015-03-20T10:57:14.000Z | 2015-03-20T11:03:39.000Z | lithium/tests/__init__.py | PressLabs/lithium | a222e4021aabcbec0fd24bcecf904a0ee7ec852d | [
"Apache-2.0"
] | null | null | null | lithium/tests/__init__.py | PressLabs/lithium | a222e4021aabcbec0fd24bcecf904a0ee7ec852d | [
"Apache-2.0"
] | null | null | null | from .base import BaseTest
from .fixtures import fixtures_wrapper
| 22 | 38 | 0.848485 | 9 | 66 | 6.111111 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 66 | 2 | 39 | 33 | 0.948276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a034d68fcc0b6a57467b70d2088ec76ee3a07bdc | 80 | py | Python | tr.py | dpr-ankit/BeeKeepers | ef8ad12cec5e9f45e69182ef9d69fedfd7afed84 | [
"CC-BY-3.0"
] | null | null | null | tr.py | dpr-ankit/BeeKeepers | ef8ad12cec5e9f45e69182ef9d69fedfd7afed84 | [
"CC-BY-3.0"
] | null | null | null | tr.py | dpr-ankit/BeeKeepers | ef8ad12cec5e9f45e69182ef9d69fedfd7afed84 | [
"CC-BY-3.0"
] | null | null | null | #!C:\Python27\python.exe
print "Content-type: text/html"
import cgi
print "ytfg" | 20 | 31 | 0.75 | 13 | 80 | 4.615385 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 0.0875 | 80 | 4 | 32 | 20 | 0.794521 | 0.2875 | 0 | 0 | 0 | 0 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.333333 | null | null | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
a05323f93b6850c2f86aedb3b1a5dee16358027f | 41 | py | Python | Lib/site-packages/PIL/__main__.py | ashutoshsuman99/Web-Blog-D19 | a01a0ccc40e8823110c01ebe4f43d9351df57295 | [
"bzip2-1.0.6"
] | 1,738 | 2017-09-21T10:59:12.000Z | 2022-03-31T21:05:46.000Z | virtual/lib/python3.6/site-packages/PIL/__main__.py | kahenya-anita/Insta-Clone | 4894e959c17170505e73aee6dc497aeb29d55a71 | [
"MIT"
] | 427 | 2017-09-29T22:54:36.000Z | 2022-02-15T19:26:50.000Z | virtual/lib/python3.6/site-packages/PIL/__main__.py | kahenya-anita/Insta-Clone | 4894e959c17170505e73aee6dc497aeb29d55a71 | [
"MIT"
] | 671 | 2017-09-21T08:04:01.000Z | 2022-03-29T14:30:07.000Z | from .features import pilinfo
pilinfo()
| 10.25 | 29 | 0.780488 | 5 | 41 | 6.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 3 | 30 | 13.666667 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
a07036b086a219eee210bb12ceb84fc268002c98 | 134 | py | Python | omics/stats/__init__.py | choyichen/pybcb | 60ba382be28bdbce466a9b24760fe44d421aa5ae | [
"MIT"
] | 3 | 2017-05-11T02:13:03.000Z | 2020-08-04T06:59:11.000Z | omics/stats/__init__.py | choyichen/pybcb | 60ba382be28bdbce466a9b24760fe44d421aa5ae | [
"MIT"
] | null | null | null | omics/stats/__init__.py | choyichen/pybcb | 60ba382be28bdbce466a9b24760fe44d421aa5ae | [
"MIT"
] | 1 | 2020-07-03T06:57:51.000Z | 2020-07-03T06:57:51.000Z | """Statistics functions.
"""
from .fisher import fisher_exact_test
from .PCA import run_pca, plot_pca, plot_explained_variance_ratio
| 22.333333 | 65 | 0.813433 | 19 | 134 | 5.368421 | 0.684211 | 0.137255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104478 | 134 | 5 | 66 | 26.8 | 0.85 | 0.156716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a098e837b2c800703a93696782f8a15118513e5f | 499 | py | Python | akutil/akutil/__init__.py | hokiegeek2/arkouda | 1eed2df96aad212b9c6424b0d423d9375604c0ba | [
"MIT"
] | 51 | 2021-05-15T01:35:20.000Z | 2022-03-31T00:41:17.000Z | akutil/akutil/__init__.py | hokiegeek2/arkouda | 1eed2df96aad212b9c6424b0d423d9375604c0ba | [
"MIT"
] | 321 | 2021-05-12T16:02:45.000Z | 2022-03-31T17:10:27.000Z | akutil/akutil/__init__.py | hokiegeek2/arkouda | 1eed2df96aad212b9c6424b0d423d9375604c0ba | [
"MIT"
] | 13 | 2021-06-03T13:44:21.000Z | 2022-03-31T17:38:36.000Z | from akutil.dataframe import *
from akutil.util import *
from akutil.row import *
from akutil.alignment import *
from akutil.plotting import *
from akutil.join import *
from akutil.hdbscan import *
from akutil.read import *
from akutil.dtypes import *
from akutil.segarray import *
from akutil.series import *
from akutil.index import *
from pkg_resources import get_distribution, DistributionNotFound
try:
__version__ = get_distribution(__name__).version
except DistributionNotFound:
pass
| 26.263158 | 64 | 0.799599 | 64 | 499 | 6.0625 | 0.390625 | 0.309278 | 0.453608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138277 | 499 | 18 | 65 | 27.722222 | 0.902326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.058824 | 0.764706 | 0 | 0.764706 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
a0a3d168004d283dd920401fb8de689a6815f286 | 125 | py | Python | pyonepassword/op_items/__init__.py | pschelle/pyonepassword | 2258c0fa851ad6a63c4f959982a66c715706b654 | [
"MIT"
] | null | null | null | pyonepassword/op_items/__init__.py | pschelle/pyonepassword | 2258c0fa851ad6a63c4f959982a66c715706b654 | [
"MIT"
] | null | null | null | pyonepassword/op_items/__init__.py | pschelle/pyonepassword | 2258c0fa851ad6a63c4f959982a66c715706b654 | [
"MIT"
] | null | null | null | from ._op_item_type_registry import OPItemFactory
from ._op_items_base import OPAbstractItem
from .login import OPLoginItem
| 25 | 49 | 0.872 | 17 | 125 | 6 | 0.705882 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104 | 125 | 4 | 50 | 31.25 | 0.910714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
261a0e05b6b4bda58e73e85b1b7324575ba6cb8e | 137 | py | Python | NUAAiCal/tests/test_main.py | NUAA-Open-Source/NUAA-iCal-Python | f71796545aa9d2f7a943f7c5d0dc2d80c6dfa4b6 | [
"MIT"
] | 17 | 2018-05-04T17:47:34.000Z | 2021-07-28T11:35:17.000Z | NUAAiCal/tests/test_main.py | NUAA-Open-Source/NUAA-iCal-Python | f71796545aa9d2f7a943f7c5d0dc2d80c6dfa4b6 | [
"MIT"
] | 4 | 2018-04-27T09:16:28.000Z | 2018-12-03T06:45:19.000Z | NUAAiCal/tests/test_main.py | NUAA-Open-Source/NUAA-iCal-Python | f71796545aa9d2f7a943f7c5d0dc2d80c6dfa4b6 | [
"MIT"
] | 5 | 2018-05-20T14:41:38.000Z | 2019-11-13T05:01:21.000Z | # -*- coding:utf-8 -*-
from __future__ import unicode_literals
import pytest
from NUAAiCal.main import main
class TestMain:
pass
| 12.454545 | 39 | 0.737226 | 18 | 137 | 5.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008929 | 0.182482 | 137 | 10 | 40 | 13.7 | 0.848214 | 0.145985 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
cd14a7790034b4df6492bb887c5baef80df8a334 | 128 | py | Python | gremlinpy/__init__.py | emedgene/gremlinpy | 80ccb02da3089317115190dc2b889b0d83be5e0e | [
"MIT"
] | 59 | 2015-01-11T18:04:40.000Z | 2022-03-09T13:15:52.000Z | gremlinpy/__init__.py | emedgene/gremlinpy | 80ccb02da3089317115190dc2b889b0d83be5e0e | [
"MIT"
] | 6 | 2015-12-17T14:40:19.000Z | 2017-07-17T18:59:14.000Z | gremlinpy/__init__.py | emedgene/gremlinpy | 80ccb02da3089317115190dc2b889b0d83be5e0e | [
"MIT"
] | 7 | 2015-10-01T15:25:09.000Z | 2017-07-28T10:02:00.000Z | from .version import __version__
from .gremlin import *
from .config import *
from .exception import *
from .statement import *
| 21.333333 | 32 | 0.773438 | 16 | 128 | 5.9375 | 0.4375 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 128 | 5 | 33 | 25.6 | 0.87963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
cd5700bf9e2d7e897fdf3339655a76eb522a717f | 101 | py | Python | codesignal/competitiveEating.py | andraantariksa/code-exercise-answer | 69b7dbdc081cdb094cb110a72bc0c9242d3d344d | [
"MIT"
] | 1 | 2019-11-06T15:17:48.000Z | 2019-11-06T15:17:48.000Z | codesignal/competitiveEating.py | andraantariksa/code-exercise-answer | 69b7dbdc081cdb094cb110a72bc0c9242d3d344d | [
"MIT"
] | null | null | null | codesignal/competitiveEating.py | andraantariksa/code-exercise-answer | 69b7dbdc081cdb094cb110a72bc0c9242d3d344d | [
"MIT"
] | 1 | 2018-11-13T08:43:26.000Z | 2018-11-13T08:43:26.000Z | def competitiveEating(t, width, precision):
return "{0:.{1}f}".format(t,precision).center(width)
| 33.666667 | 56 | 0.70297 | 14 | 101 | 5.071429 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 0.09901 | 101 | 2 | 57 | 50.5 | 0.758242 | 0 | 0 | 0 | 0 | 0 | 0.089109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
26a1b57a92df6eb0a5edde538508491302480661 | 82 | py | Python | ryu/ofproto/ether.py | MrCocoaCat/ryu | 9e9571991a73380099b7ba7c6f37e0e587080a6a | [
"Apache-2.0"
] | null | null | null | ryu/ofproto/ether.py | MrCocoaCat/ryu | 9e9571991a73380099b7ba7c6f37e0e587080a6a | [
"Apache-2.0"
] | null | null | null | ryu/ofproto/ether.py | MrCocoaCat/ryu | 9e9571991a73380099b7ba7c6f37e0e587080a6a | [
"Apache-2.0"
] | null | null | null | # This module is for backward compat
from ryu.lib.packet.ether_types import *
| 20.5 | 41 | 0.756098 | 13 | 82 | 4.692308 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182927 | 82 | 3 | 42 | 27.333333 | 0.910448 | 0.414634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
26da9f6a6c41191707a07beb0a09ce2e63b2ab46 | 241 | py | Python | tests/test_core.py | totalhack/toolbox | f5095a824620af1fd5552cd0895fc76f2f843b6f | [
"MIT"
] | 1 | 2019-09-09T18:53:03.000Z | 2019-09-09T18:53:03.000Z | tests/test_core.py | totalhack/tlbx | f5095a824620af1fd5552cd0895fc76f2f843b6f | [
"MIT"
] | null | null | null | tests/test_core.py | totalhack/tlbx | f5095a824620af1fd5552cd0895fc76f2f843b6f | [
"MIT"
] | null | null | null | import pytest
from tlbx.core_utils import raiseif, raiseifnot
def test_raiseif():
with pytest.raises(AssertionError):
raiseif(1 != 2)
def test_raiseifnot():
with pytest.raises(AssertionError):
raiseifnot(1 == 2)
| 17.214286 | 47 | 0.692946 | 29 | 241 | 5.655172 | 0.517241 | 0.085366 | 0.195122 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020942 | 0.207469 | 241 | 13 | 48 | 18.538462 | 0.837696 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f82019e32d1db678c1696d6e878460496cd838a0 | 184 | py | Python | etcaetera/formatters.py | oleiade/etcaetera | e1370cff5fc302ccdba24e7638b720d6cc43ffc0 | [
"MIT"
] | 4 | 2015-08-19T21:12:33.000Z | 2022-01-25T01:13:46.000Z | etcaetera/formatters.py | bitprophet/etcaetera | f94e1a5a063744a55dfce94593ead59f32701c19 | [
"MIT"
] | 2 | 2015-08-13T12:45:43.000Z | 2017-11-27T05:53:35.000Z | etcaetera/formatters.py | bitprophet/etcaetera | f94e1a5a063744a55dfce94593ead59f32701c19 | [
"MIT"
] | 2 | 2015-02-03T10:15:55.000Z | 2016-10-21T14:20:10.000Z | from collections import namedtuple
def uppercased(s):
return s.upper()
def lowercased(s):
return s.lower()
def environ(s):
return s.strip().upper().replace(' ', '_')
| 13.142857 | 46 | 0.646739 | 24 | 184 | 4.916667 | 0.583333 | 0.177966 | 0.20339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195652 | 184 | 13 | 47 | 14.153846 | 0.797297 | 0 | 0 | 0 | 0 | 0 | 0.01087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0.142857 | 0.428571 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
f83f4b0ddd95776833f1ac4e4389c5d80570c159 | 10,722 | py | Python | backend/web/service/LagHandler.py | asterfusion/Tapplet-Web | d32a077810ae27d50f010e058242f04e497ef68a | [
"MIT"
] | 3 | 2019-12-24T03:52:39.000Z | 2019-12-30T11:47:53.000Z | backend/web/service/LagHandler.py | asterfusion/Tapplet-Web | d32a077810ae27d50f010e058242f04e497ef68a | [
"MIT"
] | null | null | null | backend/web/service/LagHandler.py | asterfusion/Tapplet-Web | d32a077810ae27d50f010e058242f04e497ef68a | [
"MIT"
] | 6 | 2019-12-16T08:38:07.000Z | 2020-12-02T19:37:25.000Z | #!/usr/bin/python3
from tornado.web import url,RequestHandler
from tornado.web import Application, RequestHandler
from tornado.ioloop import IOLoop
from tornado.httpserver import HTTPServer
import tornado.autoreload
from tornado.concurrent import run_on_executor
from concurrent.futures import ThreadPoolExecutor
import json
import logging
import sys
sys.path.append('./web/control/')
sys.path.append('./web/database/')
import interface_http
import User
import permiss
import Lag
import Rule
import session
import data_operation
import Logconfig
from Logconfig import Web_log
import BaseHandler
Open_permiss=0
class ListOutgroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_read')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def get(self):
self.set_header("Content-Type","application/json")
Laglist=yield Lag.outlist_select(self)
LagData=json.dumps(Laglist)
self.write(LagData)
class ListIngroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_read')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def get(self):
self.set_header("Content-Type","application/json")
Laglist=yield Lag.inlist_select(self)
LagData=json.dumps(Laglist)
self.write(LagData)
class InsertOutgroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
res=yield Lag.outlist_write(self,data)
if(res==True):
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','创建出端口组',ip_address,json.dumps(data),200)
self.write(json.dumps({"status_code":200,"res":"OK"}))
self.set_status(200,'ok')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','创建出端口组',ip_address,json.dumps(data),400)
self.write(json.dumps({"status_code":400,"res":"outgroupname is exsited"}))
self.set_status(400,'existed')
class InsertIngroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
res=yield Lag.inlist_write(self,data)
if(res==True):
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','创建入端口组',ip_address,json.dumps(data),200)
self.set_status(200,'OK')
self.write(json.dumps("inlist insert ok"))
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','创建入端口组',ip_address,json.dumps(data)+'\nname is exsited',400)
self.write(json.dumps({"status_code":400,"res":"name is exsited"}))
self.set_status(400,'existed')
class DeleteOutgroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
data=data["name"]
res=yield Lag.outlist_delete(self,data)
if(res==True):
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','删除出端口组',ip_address,json.dumps(data),200)
self.write(json.dumps('OK'))
self.set_status(200,'ok')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','删除出端口组',ip_address,json.dumps(data)+'\nnot exsited',400)
self.write(json.dumps({"status_code":400,"res":"outgroupname is not exsited"}))
self.set_status(400,'not existed')
class DeleteIngroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
data=data["name"]
res=yield Lag.inlist_delete(self,data)
if(res==True):
res_rule=Rule.rulegroup_delete('',data)
if res_rule==True:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','删除入端口组',ip_address,json.dumps(data),200)
self.write(json.dumps('OK'))
self.set_status(200,'ok')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','删除入端口组',ip_address,json.dumps(data)+'\nrule is not exsited',400)
self.write(json.dumps({"status_code":400,"res":"rule is not exsited"}))
self.set_status(400,'not existed')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','删除入端口组',ip_address,json.dumps(data)+'\ningroupname is not exsited',400)
self.write(json.dumps({"status_code":400,"res":"ingroupname is not exsited"}))
self.set_status(400,'not existed')
class UpdateIngroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
data=data_operation.ByteToJson(self.request.body)
self.set_header("Content-Type","application/json")
if "deduplication_cfg" not in data:
data["deduplication_cfg"]=''
res=yield Lag.inlist_update(self,data)
if(res==True):
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新入端口组',ip_address,json.dumps(data),200)
self.write(json.dumps('OK'))
self.set_status(200,'ok')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新入端口组',ip_address,json.dumps(data)+'\ningroupname is not exsited',400)
self.write(json.dumps({"status_code":400,"res":"ingroupname is not exsited"}))
self.set_status(400,'not existed')
class ReplaceOutgroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
res=Lag.outlist_update(data)
if(res==True):
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新出端口组',ip_address,json.dumps(data),200)
self.write(json.dumps('OK'))
self.set_status(200,'ok')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新出端口组',ip_address,json.dumps(data),400)
self.write(json.dumps({"status_code":400,"res":"update failed"}))
self.set_status(400,'not existed')
class UpdateOutgroupHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
res=Lag.outlist_put(data)
if(res==True):
self.write(json.dumps('OK'))
self.set_status(200,'ok')
else:
self.write(json.dumps({"status_code":400,"res":"outgroupname is not exsited"}))
self.set_status(400,'not existed')
class UpdatePortHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
res=Lag.interlist_update(data["type"],data["groupname"],data["port"])
if data["type"]=='Egress':
log_name='出'
else:
log_name='入'
if(res==True):
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新'+log_name+'端口组',ip_address,json.dumps(data),200)
self.write(json.dumps('OK'))
self.set_status(200,'ok')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新'+log_name+'端口组',ip_address,json.dumps(data),400)
self.write(json.dumps({"status_code":400,"res":"outgroupname is not exsited"}))
self.set_status(400,'not existed')
class DeletePortHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_write')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def post(self):
ip_address=self.request.remote_ip
self.set_header("Content-Type","application/json")
data=data_operation.ByteToJson(self.request.body)
res=Lag.interlist_delete(data["type"],data["groupname"],data["port"])
if data["type"]=='Egress':
log_name='出'
else:
log_name='入'
if(res==True):
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新'+log_name+'端口组',ip_address,json.dumps(data),200)
self.write(json.dumps('OK'))
self.set_status(200,'ok')
else:
yield Logconfig.Write_Sys_Log(self,self.username,'转发策略','更新'+log_name+'端口组',ip_address,json.dumps(data),400)
self.write(json.dumps({"status_code":400,"res":"outgroupname is not exsited"}))
self.set_status(400,'not existed')
class getDefaultRuleInterfaceHandler(BaseHandler.BaseHandler):
executor = ThreadPoolExecutor(20)
@tornado.gen.coroutine
def prepare(self):
try:
super().prepare('policy_read')
except :
self.set_status(400,'')
@tornado.gen.coroutine
def get(self):
self.set_header("Content-Type","application/json")
data2=Lag.getDefaultInterface()
data1={
"status_code": 200,
"message": "success",
"data": data2
}
self.set_status(200,'')
self.write(json.dumps(data1))
if __name__ == "__main__":
app = Application(
[ (r"/api/policy/ListIngroup",ListIngroupHandler),
(r"/api/policy/InsertIngroup",InsertIngroupHandler),
(r"/api/policy/DeleteIngroup",DeleteIngroupHandler),
(r"/api/policy/UpdateIngroup",UpdateIngroupHandler),
(r"/api/policy/ListOutgroup",ListOutgroupHandler),
(r"/api/policy/InsertOutgroup",InsertOutgroupHandler),
(r"/api/policy/DeleteOutgroup",DeleteOutgroupHandler),
(r"/api/policy/UpdatePort",UpdatePortHandler),
(r"/api/policy/UpdatePort",DeletePortHandler),
(r"/api/policy/UpdateOutgroup",UpdateOutgroupHandler)],cookie_secret="12334")
app.listen(8000)
IOLoop.current().start()
| 31.259475 | 131 | 0.737922 | 1,457 | 10,722 | 5.304736 | 0.105697 | 0.03985 | 0.053823 | 0.068314 | 0.794152 | 0.789624 | 0.784578 | 0.769181 | 0.759865 | 0.754949 | 0 | 0.023137 | 0.105111 | 10,722 | 342 | 132 | 31.350877 | 0.782387 | 0.001586 | 0 | 0.690236 | 0 | 0 | 0.159006 | 0.022795 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080808 | false | 0 | 0.06734 | 0 | 0.228956 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f84ee815bcdd20bca9595ad4e58c311ad4ae29bf | 367 | py | Python | dynamodb_meta_store/exceptions.py | sergeymazin/dynamodb-meta-store | 33757240fd823f830f1d36ef6f04c2a82ee88118 | [
"Apache-2.0"
] | null | null | null | dynamodb_meta_store/exceptions.py | sergeymazin/dynamodb-meta-store | 33757240fd823f830f1d36ef6f04c2a82ee88118 | [
"Apache-2.0"
] | null | null | null | dynamodb_meta_store/exceptions.py | sergeymazin/dynamodb-meta-store | 33757240fd823f830f1d36ef6f04c2a82ee88118 | [
"Apache-2.0"
] | null | null | null | class TableNotReadyException(Exception):
""" Exception thrown if the table is not in ACTIVE or UPDATING state """
pass
class MisconfiguredSchemaException(Exception):
""" Exception thrown if the table does not match the configuration """
pass
class ItemNotFound(Exception):
""" Exception thrown if the item does not exist in table """
pass
| 26.214286 | 76 | 0.722071 | 44 | 367 | 6.022727 | 0.5 | 0.203774 | 0.271698 | 0.29434 | 0.366038 | 0.256604 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20436 | 367 | 13 | 77 | 28.230769 | 0.907534 | 0.495913 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
f85af2aa54df7f04580f0e826b8cf304ae9e57f6 | 85 | py | Python | cube_js_client/__init__.py | downstreamimpact/CubeJsClient | 6dc6c0e66e7bcd9a099105f22f1c6b1b610298fe | [
"MIT"
] | 10 | 2019-11-12T04:37:26.000Z | 2021-05-02T19:57:31.000Z | cube_js_client/__init__.py | downstreamimpact/CubeJsClient | 6dc6c0e66e7bcd9a099105f22f1c6b1b610298fe | [
"MIT"
] | null | null | null | cube_js_client/__init__.py | downstreamimpact/CubeJsClient | 6dc6c0e66e7bcd9a099105f22f1c6b1b610298fe | [
"MIT"
] | 1 | 2020-04-19T03:36:18.000Z | 2020-04-19T03:36:18.000Z | from .exceptions import CubeError, CubeTimeoutError
from .client import CubeJsClient
| 28.333333 | 51 | 0.858824 | 9 | 85 | 8.111111 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105882 | 85 | 2 | 52 | 42.5 | 0.960526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f8925017ca366f2941e9294288a12581434805d4 | 303 | py | Python | tests/__init__.py | FelixKleineBoesing/FeatureSelector | b33454be39d53881b1c1b5b7b6dca8d782cabd36 | [
"MIT"
] | null | null | null | tests/__init__.py | FelixKleineBoesing/FeatureSelector | b33454be39d53881b1c1b5b7b6dca8d782cabd36 | [
"MIT"
] | 6 | 2019-02-25T08:09:48.000Z | 2019-02-25T08:11:55.000Z | tests/__init__.py | FelixKleineBoesing/pyFeatSel | b33454be39d53881b1c1b5b7b6dca8d782cabd36 | [
"MIT"
] | null | null | null | from pyFeatSel.Models.Model import XGBoostModel, Model
from pyFeatSel.FeatureSelectors.GreedySearch import GreedySearch
from pyFeatSel.FeatureSelectors.CompleteFeatureSpace import CompleteFeatureSpace
from pyFeatSel.Evaluator.Evaluator import EvaluatorBase, RMSE, Recall, Accuracy, FOneScore, Precision
| 60.6 | 101 | 0.881188 | 30 | 303 | 8.9 | 0.533333 | 0.194757 | 0.217228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072607 | 303 | 4 | 102 | 75.75 | 0.950178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f8a0fad66e76f163dc451b423f669802e70cbde5 | 134 | py | Python | sqlite3_kernel/__main__.py | brownan/sqlite3-kernel | a24301d9c53765c4f23151c2f4fb8c8fba053fa0 | [
"BSD-3-Clause"
] | 15 | 2016-10-19T17:18:38.000Z | 2022-03-14T00:28:44.000Z | sqlite3_kernel/__main__.py | brownan/sqlite3-kernel | a24301d9c53765c4f23151c2f4fb8c8fba053fa0 | [
"BSD-3-Clause"
] | 4 | 2016-10-24T10:28:37.000Z | 2019-01-23T14:14:21.000Z | sqlite3_kernel/__main__.py | brownan/sqlite3-kernel | a24301d9c53765c4f23151c2f4fb8c8fba053fa0 | [
"BSD-3-Clause"
] | 13 | 2017-08-12T14:35:37.000Z | 2020-07-07T13:30:48.000Z | from ipykernel.kernelapp import IPKernelApp
from .kernel import Sqlite3Kernel
IPKernelApp.launch_instance(kernel_class=Sqlite3Kernel)
| 33.5 | 55 | 0.88806 | 15 | 134 | 7.8 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016 | 0.067164 | 134 | 3 | 56 | 44.666667 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3e4ffb9c4c69ee3c927d45cf1692479b3359c012 | 54 | py | Python | k_choice/graphical/two_choice/strategies/__init__.py | varikakasandor/dissertation-balls-into-bins | fba69dd5ffd0b4984795c9a5ec119bf8c6f47d9e | [
"Apache-2.0"
] | null | null | null | k_choice/graphical/two_choice/strategies/__init__.py | varikakasandor/dissertation-balls-into-bins | fba69dd5ffd0b4984795c9a5ec119bf8c6f47d9e | [
"Apache-2.0"
] | null | null | null | k_choice/graphical/two_choice/strategies/__init__.py | varikakasandor/dissertation-balls-into-bins | fba69dd5ffd0b4984795c9a5ec119bf8c6f47d9e | [
"Apache-2.0"
] | null | null | null | from k_choice.graphical.two_choice.strategies import * | 54 | 54 | 0.87037 | 8 | 54 | 5.625 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 54 | 1 | 54 | 54 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e40de875bc299e7947d04c32ab82a8cf017d14ce | 137 | py | Python | nasco_analysis/__init__.py | nanten2/NASCO-analysis | 1966de41fb1dd214d067e9c26656495f34c22dea | [
"MIT"
] | 1 | 2020-12-16T01:59:04.000Z | 2020-12-16T01:59:04.000Z | nasco_analysis/__init__.py | nanten2/NASCO-analysis | 1966de41fb1dd214d067e9c26656495f34c22dea | [
"MIT"
] | 42 | 2020-11-27T08:30:50.000Z | 2021-04-25T06:35:08.000Z | nasco_analysis/__init__.py | nanten2/NASCO-analysis | 1966de41fb1dd214d067e9c26656495f34c22dea | [
"MIT"
] | null | null | null | __version__ = "0.1.0"
from . import kisa_rev
from . import io
from . import doppler
from . import grid_convolve
from . import Planet_OTF
| 19.571429 | 27 | 0.759124 | 22 | 137 | 4.409091 | 0.590909 | 0.515464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.167883 | 137 | 6 | 28 | 22.833333 | 0.824561 | 0 | 0 | 0 | 0 | 0 | 0.036496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e40e6572fb6634ff0805c21bed13985d775ca948 | 102 | py | Python | codes/requirements.py | LivLilli/EPL | 357f9eec1109619362c32efd8a6bb6a9eb3c2ee6 | [
"MIT"
] | null | null | null | codes/requirements.py | LivLilli/EPL | 357f9eec1109619362c32efd8a6bb6a9eb3c2ee6 | [
"MIT"
] | null | null | null | codes/requirements.py | LivLilli/EPL | 357f9eec1109619362c32efd8a6bb6a9eb3c2ee6 | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter | 25.5 | 31 | 0.843137 | 17 | 102 | 5.058824 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 102 | 4 | 32 | 25.5 | 0.988506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e441c18714db586b0df6c10f0f09a1ffde305b46 | 3,391 | py | Python | enroll/migrations/0002_auto_20200620_1311.py | sudama-Inc/crud_project_python | 7615c1eeafdc51b61c6e7cba217d37527307e105 | [
"MIT"
] | null | null | null | enroll/migrations/0002_auto_20200620_1311.py | sudama-Inc/crud_project_python | 7615c1eeafdc51b61c6e7cba217d37527307e105 | [
"MIT"
] | null | null | null | enroll/migrations/0002_auto_20200620_1311.py | sudama-Inc/crud_project_python | 7615c1eeafdc51b61c6e7cba217d37527307e105 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2020-06-20 07:41
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('enroll', '0001_initial'),
]
operations = [
migrations.RenameField(
model_name='user',
old_name='name',
new_name='bank',
),
migrations.RemoveField(
model_name='user',
name='email',
),
migrations.RemoveField(
model_name='user',
name='password',
),
migrations.AddField(
model_name='user',
name='bankcharges',
field=models.IntegerField(default=0),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='brand',
field=models.CharField(default=0, max_length=70),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='cheque',
field=models.IntegerField(default=0),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='cltnamt',
field=models.IntegerField(default=0),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='cltndate',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='user',
name='collectedby',
field=models.CharField(default=0, max_length=70),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='customer',
field=models.CharField(default=0, max_length=70),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='customercode',
field=models.CharField(default=0, max_length=70),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='doptdate',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='user',
name='duedate',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='user',
name='invamt',
field=models.IntegerField(default=0),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='invdate',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='user',
name='paymentmode',
field=models.CharField(default=0, max_length=70),
preserve_default=False,
),
migrations.AddField(
model_name='user',
name='status',
field=models.CharField(default='Pending', max_length=70),
),
migrations.AddField(
model_name='user',
name='utrno',
field=models.IntegerField(default=0),
preserve_default=False,
),
]
| 30.276786 | 70 | 0.503686 | 288 | 3,391 | 5.802083 | 0.222222 | 0.096948 | 0.140036 | 0.17295 | 0.777379 | 0.777379 | 0.690006 | 0.690006 | 0.659485 | 0.659485 | 0 | 0.019655 | 0.384842 | 3,391 | 111 | 71 | 30.54955 | 0.7814 | 0.01327 | 0 | 0.733333 | 1 | 0 | 0.072997 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.009524 | 0.009524 | 0 | 0.038095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e496fdd0a68b822cd97e3daf90327fa4452b9658 | 113 | py | Python | reqe/__init__.py | ophlr/reqe | 6c9eccc40d163bb0903ab1a9b54e062a7bccffdf | [
"Apache-2.0"
] | 1 | 2020-08-11T10:17:59.000Z | 2020-08-11T10:17:59.000Z | reqe/__init__.py | ophlr/reqe | 6c9eccc40d163bb0903ab1a9b54e062a7bccffdf | [
"Apache-2.0"
] | null | null | null | reqe/__init__.py | ophlr/reqe | 6c9eccc40d163bb0903ab1a9b54e062a7bccffdf | [
"Apache-2.0"
] | null | null | null | from .api import request, get, head, post, patch, put, delete, options
from .session import session, ReqeSession
| 37.666667 | 70 | 0.769912 | 16 | 113 | 5.4375 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141593 | 113 | 2 | 71 | 56.5 | 0.896907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e4a30c03290163ac91c04b30652675e29a650a89 | 68 | py | Python | data_io/minibatches/__init__.py | Rekrau/PyGreentea | 457d7dc5be12b15c3c7663ceaf6d74301de56e43 | [
"BSD-2-Clause"
] | null | null | null | data_io/minibatches/__init__.py | Rekrau/PyGreentea | 457d7dc5be12b15c3c7663ceaf6d74301de56e43 | [
"BSD-2-Clause"
] | 4 | 2016-04-22T15:39:21.000Z | 2016-11-15T21:23:58.000Z | data_io/minibatches/__init__.py | Rekrau/PyGreentea | 457d7dc5be12b15c3c7663ceaf6d74301de56e43 | [
"BSD-2-Clause"
] | 4 | 2017-05-12T00:17:55.000Z | 2019-07-01T19:23:32.000Z | from .augmentation import augment_data_elastic, augment_data_simple
| 34 | 67 | 0.897059 | 9 | 68 | 6.333333 | 0.777778 | 0.385965 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073529 | 68 | 1 | 68 | 68 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
901bbc41c1ba670fb481aff9d38e18a69297ccd7 | 36 | py | Python | upnpavcontrol/web/__init__.py | mikedevnull/upnp-av-control | fee17c29bb713f1e4191b2ba64a7f552b4b663d8 | [
"MIT"
] | 2 | 2020-04-27T21:33:27.000Z | 2022-01-12T22:17:52.000Z | upnpavcontrol/web/__init__.py | mikedevnull/upnp-av-control | fee17c29bb713f1e4191b2ba64a7f552b4b663d8 | [
"MIT"
] | 165 | 2020-04-18T23:41:58.000Z | 2022-03-31T11:33:09.000Z | upnpavcontrol/web/__init__.py | mikedevnull/upnp-av-control | fee17c29bb713f1e4191b2ba64a7f552b4b663d8 | [
"MIT"
] | null | null | null | from .application import app # noqa | 36 | 36 | 0.777778 | 5 | 36 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 36 | 1 | 36 | 36 | 0.933333 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
902dbaebbd6a23654edae6e48b8ec26e8a4c1948 | 877 | py | Python | commands/parceiros.py | macoin-finance/telegram-bot | ea4a76356b0a80eb1ef0994c6be2048a0fc3c486 | [
"BSD-3-Clause"
] | 9 | 2021-07-10T04:43:30.000Z | 2022-02-23T04:57:15.000Z | commands/parceiros.py | macoin-finance/telegram-bot | ea4a76356b0a80eb1ef0994c6be2048a0fc3c486 | [
"BSD-3-Clause"
] | 2 | 2021-07-12T01:56:18.000Z | 2021-07-12T02:05:42.000Z | commands/parceiros.py | macoin-finance/telegram-bot | ea4a76356b0a80eb1ef0994c6be2048a0fc3c486 | [
"BSD-3-Clause"
] | 6 | 2021-07-10T04:44:32.000Z | 2021-07-28T16:34:51.000Z | MESSAGE_TEXT='**Carteira de MACOIN da Associação Reconstruir Cannabis:**\t0xE14EA0C3FCF43b0f8423c23840Cf34F26F8d0cBe\n\nSite:\thttps://reconstruir.org.br\n\nInstagram:\thttps://instagram.com/reconstruircannabis\n\n\n#################################\n\nCarteira de MACOIN do Dr. Emílio Figueiredo 0xe034335EDD9966d4d0d27f99075B10308e664e49\n\nAdvogado do GROWROOM de 2009 a 2016 e atual advogado da COMUNIDADE MACOIN, o Bitcoin da maconha medicinal\nTrabalha a anos pela defesa criminal para usuários, cultivadores e pacientes, Habeas Corpus preventivo para cultivo de cannabis e consultoria para pessoas jurídicas (empresas e associações) que querem empreender com cannabis\n\nTelefone: +5521-991253403'
def parceiros(update, context):
context.bot.send_message(chat_id=update.effective_chat.id, text=MESSAGE_TEXT , parse_mode='markdown', disable_web_page_preview='True')
| 175.4 | 704 | 0.800456 | 113 | 877 | 6.132743 | 0.699115 | 0.008658 | 0.008658 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092269 | 0.085519 | 877 | 4 | 705 | 219.25 | 0.77182 | 0 | 0 | 0 | 0 | 0.333333 | 0.799316 | 0.085519 | 0 | 0 | 0.047891 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
902e27a64b964676f7b70c52d94df8f49a354dea | 80 | py | Python | ddd_seedwork/flask_utils/__init__.py | aherculano/ddd-seedwork | 9b8b4ad722681190ac3c2d3e2ee5b471af02958f | [
"MIT"
] | 1 | 2020-07-07T13:45:21.000Z | 2020-07-07T13:45:21.000Z | ddd_seedwork/flask_utils/__init__.py | aherculano/ddd-seedwork | 9b8b4ad722681190ac3c2d3e2ee5b471af02958f | [
"MIT"
] | null | null | null | ddd_seedwork/flask_utils/__init__.py | aherculano/ddd-seedwork | 9b8b4ad722681190ac3c2d3e2ee5b471af02958f | [
"MIT"
] | 1 | 2020-07-07T13:45:56.000Z | 2020-07-07T13:45:56.000Z | from .ErrorHandlers import register_errors
from .ApiResponse import ApiResponse
| 26.666667 | 42 | 0.875 | 9 | 80 | 7.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 80 | 2 | 43 | 40 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5f4e694e9a72304263e985c672f9ec4860901960 | 233 | py | Python | 3.string/1.string_array.py | Tazri/Python | f7ca625800229c8a7e20b64810d6e162ccb6b09f | [
"DOC"
] | null | null | null | 3.string/1.string_array.py | Tazri/Python | f7ca625800229c8a7e20b64810d6e162ccb6b09f | [
"DOC"
] | null | null | null | 3.string/1.string_array.py | Tazri/Python | f7ca625800229c8a7e20b64810d6e162ccb6b09f | [
"DOC"
] | null | null | null | name = "Md Tazri";
print("name : ",name);
print("name[0] : ",name[0]);
print("name[-1] : ",name[-1]);
print("name[2:] : ",name[2:]);
print("name[:3] : ",name[:3]);
print("name[3:-2] :",name[3:-2]);
print("name[::-1] : ",name[::-1]); | 25.888889 | 34 | 0.497854 | 38 | 233 | 3.052632 | 0.210526 | 0.543103 | 0.172414 | 0.241379 | 0.258621 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066986 | 0.103004 | 233 | 9 | 34 | 25.888889 | 0.488038 | 0 | 0 | 0 | 0 | 0 | 0.354701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.875 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
5f95837c5e250fcbe9fd9790e3c0f9459a4bc21a | 2,061 | py | Python | tests/test_select.py | felipefrancisco/spectacles | 92f7af5810e2669343dd18425b2a8cb49d7167d2 | [
"MIT"
] | 150 | 2019-10-05T18:35:36.000Z | 2022-03-26T21:21:44.000Z | tests/test_select.py | felipefrancisco/spectacles | 92f7af5810e2669343dd18425b2a8cb49d7167d2 | [
"MIT"
] | 406 | 2019-10-03T14:54:22.000Z | 2022-03-28T04:02:31.000Z | tests/test_select.py | felipefrancisco/spectacles | 92f7af5810e2669343dd18425b2a8cb49d7167d2 | [
"MIT"
] | 26 | 2019-11-08T16:21:50.000Z | 2022-03-28T06:06:14.000Z | import pytest
from spectacles.select import is_selected, selector_to_pattern
from spectacles.exceptions import SpectaclesException
def test_invalid_format_should_raise_value_error():
with pytest.raises(SpectaclesException):
selector_to_pattern("model_a.explore_a")
with pytest.raises(SpectaclesException):
selector_to_pattern("model_a/")
with pytest.raises(SpectaclesException):
selector_to_pattern("explore_a")
def test_empty_selector_should_raise_value_error():
with pytest.raises(ValueError):
is_selected("model_a", "explore_a", [], [])
def test_select_wildcard_should_match():
assert is_selected("model_a", "explore_a", ["*/*"], [])
assert is_selected("model_a", "explore_a", ["model_b/explore_a", "*/*"], [])
def test_select_model_wildcard_should_match():
assert is_selected("model_a", "explore_a", ["model_a/*"], [])
assert is_selected("model_a", "explore_b", ["model_a/*"], [])
def test_select_explore_wildcard_should_match():
assert is_selected("model_a", "explore_a", ["*/explore_a"], [])
assert is_selected("model_b", "explore_a", ["*/explore_a"], [])
def test_select_exact_model_and_explore_should_match():
assert is_selected("model_a", "explore_a", ["model_a/explore_a"], [])
def test_select_wrong_model_should_not_match():
assert not is_selected("model_a", "explore_a", ["model_b/explore_a"], [])
def test_select_wrong_explore_should_not_match():
assert not is_selected("model_a", "explore_a", ["model_a/explore_b"], [])
def test_exclude_wildcard_should_not_match():
assert not is_selected("model_a", "explore_a", ["*/*"], ["*/*"])
def test_exclude_model_wildcard_should_not_match():
assert not is_selected("model_a", "explore_a", ["*/*"], ["model_a/*"])
def test_exclude_explore_wildcard_should_not_match():
assert not is_selected("model_a", "explore_a", ["*/*"], ["*/explore_a"])
def test_exclude_exact_model_and_explore_should_not_match():
assert not is_selected("model_a", "explore_a", ["*/*"], ["model_a/explore_a"])
| 32.714286 | 82 | 0.721494 | 278 | 2,061 | 4.841727 | 0.129496 | 0.130758 | 0.120357 | 0.156018 | 0.808321 | 0.783804 | 0.739227 | 0.644874 | 0.560921 | 0.47474 | 0 | 0 | 0.12033 | 2,061 | 62 | 83 | 33.241935 | 0.742416 | 0 | 0 | 0.083333 | 0 | 0 | 0.205725 | 0 | 0 | 0 | 0 | 0 | 0.361111 | 1 | 0.333333 | true | 0 | 0.083333 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
5fb926ce3a6f3702d9bb54b2ec76e4739f73971f | 2,218 | py | Python | tests/test_printer.py | eracle/traceflow | 88e8f41953ecf1e31984e36e89cc6566d7d4f120 | [
"BSD-3-Clause"
] | 68 | 2019-09-23T07:59:18.000Z | 2022-02-25T04:39:12.000Z | tests/test_printer.py | eracle/traceflow | 88e8f41953ecf1e31984e36e89cc6566d7d4f120 | [
"BSD-3-Clause"
] | 22 | 2019-09-23T07:38:50.000Z | 2021-11-22T03:51:41.000Z | tests/test_printer.py | eracle/traceflow | 88e8f41953ecf1e31984e36e89cc6566d7d4f120 | [
"BSD-3-Clause"
] | 22 | 2019-09-23T17:44:16.000Z | 2022-01-29T14:21:49.000Z | import unittest
from traceflow import printer
class TestPrinter(unittest.TestCase):
traces = {
1: {
1: "136.243.212.25",
2: "213.239.229.57",
3: "213.239.203.153",
4: "62.69.146.42",
5: "1.1.1.1",
},
2: {
1: "136.243.212.25",
2: "213.239.229.57",
3: "213.239.203.153",
4: "62.69.146.42",
5: "1.1.1.1",
},
3: {
1: "136.243.212.25",
2: "213.239.229.57",
3: "213.239.203.153",
4: "62.69.146.42",
5: "1.1.1.1",
},
4: {
1: "136.243.212.25",
2: "213.239.229.61",
3: "213.239.229.77",
4: "62.69.146.42",
5: "1.1.1.1",
},
}
def test___build_nodes(self):
json_traces = printer._build_nodes(self.traces)
self.assertEqual(
json_traces,
'{"nodes": [{"id": "136.243.212.25", "label": "136.243.212.25"}, {"id": "213.239.229.57", "label": "213.239.229.57"}, {"id": "213.239.203.153", "label": "213.239.203.153"}, {"id": "62.69.146.42", "label": "62.69.146.42"}, {"id": "1.1.1.1", "label": "1.1.1.1"}, {"id": "213.239.229.61", "label": "213.239.229.61"}, {"id": "213.239.229.77", "label": "213.239.229.77"}], "links": [{"from": "136.243.212.25", "to": "213.239.229.57"}, {"from": "213.239.229.57", "to": "213.239.203.153"}, {"from": "213.239.203.153", "to": "62.69.146.42"}, {"from": "62.69.146.42", "to": "1.1.1.1"}, {"from": "136.243.212.25", "to": "213.239.229.57"}, {"from": "213.239.229.57", "to": "213.239.203.153"}, {"from": "213.239.203.153", "to": "62.69.146.42"}, {"from": "62.69.146.42", "to": "1.1.1.1"}, {"from": "136.243.212.25", "to": "213.239.229.57"}, {"from": "213.239.229.57", "to": "213.239.203.153"}, {"from": "213.239.203.153", "to": "62.69.146.42"}, {"from": "62.69.146.42", "to": "1.1.1.1"}, {"from": "136.243.212.25", "to": "213.239.229.61"}, {"from": "213.239.229.61", "to": "213.239.229.77"}, {"from": "213.239.229.77", "to": "62.69.146.42"}, {"from": "62.69.146.42", "to": "1.1.1.1"}]}',
)
if __name__ == "__main__":
unittest.main()
| 47.191489 | 1,186 | 0.458521 | 368 | 2,218 | 2.720109 | 0.119565 | 0.191808 | 0.188811 | 0.125874 | 0.557443 | 0.557443 | 0.557443 | 0.557443 | 0.557443 | 0.535465 | 0 | 0.396057 | 0.245266 | 2,218 | 46 | 1,187 | 48.217391 | 0.201912 | 0 | 0 | 0.439024 | 0 | 0.02439 | 0.642922 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 1 | 0.02439 | false | 0 | 0.04878 | 0 | 0.121951 | 0.04878 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
399fd9ba9b44ee903e33ccb2a0b92edbd068c357 | 1,403 | py | Python | kutub/migrations/0028_auto_20210916_0726.py | rbturnbull/kutub | 46d88cad0fe7b3de70843daeefa7cca6d4a4a840 | [
"Apache-2.0"
] | null | null | null | kutub/migrations/0028_auto_20210916_0726.py | rbturnbull/kutub | 46d88cad0fe7b3de70843daeefa7cca6d4a4a840 | [
"Apache-2.0"
] | null | null | null | kutub/migrations/0028_auto_20210916_0726.py | rbturnbull/kutub | 46d88cad0fe7b3de70843daeefa7cca6d4a4a840 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.7 on 2021-09-16 07:26
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('kutub', '0027_auto_20210916_0039'),
]
operations = [
migrations.AlterField(
model_name='contentitem',
name='end_folio',
field=models.PositiveIntegerField(blank=True, default=None, help_text='The folio number where this content item ends.', null=True),
),
migrations.AlterField(
model_name='contentitem',
name='end_folio_side',
field=models.CharField(blank=True, choices=[('', 'Unknown'), ('r', 'Recto'), ('v', 'Verso')], default='', help_text='The folio side (i.e. recto or verso) where this content item ends.', max_length=1),
),
migrations.AlterField(
model_name='contentitem',
name='start_folio',
field=models.PositiveIntegerField(blank=True, default=None, help_text='The folio number where this content item begins.', null=True),
),
migrations.AlterField(
model_name='contentitem',
name='start_folio_side',
field=models.CharField(blank=True, choices=[('', 'Unknown'), ('r', 'Recto'), ('v', 'Verso')], default='', help_text='The folio side (i.e. recto or verso) where this content item begins.', max_length=1),
),
]
| 41.264706 | 214 | 0.613685 | 161 | 1,403 | 5.229814 | 0.397516 | 0.095012 | 0.118765 | 0.137767 | 0.800475 | 0.776722 | 0.776722 | 0.776722 | 0.529691 | 0.529691 | 0 | 0.031103 | 0.243763 | 1,403 | 33 | 215 | 42.515152 | 0.762488 | 0.032074 | 0 | 0.444444 | 1 | 0 | 0.286136 | 0.016962 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
39bd30531f8ddb33e7a0c31c2a8a15568403026f | 12,005 | py | Python | test/active_learning_strategies/test_strategy.py | ansunsujoe/distil | cf6cae2b88ef129d09c159aae0569978190e9f98 | [
"MIT"
] | 83 | 2021-01-06T06:50:30.000Z | 2022-03-31T05:16:32.000Z | test/active_learning_strategies/test_strategy.py | ansunsujoe/distil | cf6cae2b88ef129d09c159aae0569978190e9f98 | [
"MIT"
] | 30 | 2021-02-27T06:09:47.000Z | 2021-12-23T11:03:36.000Z | test/active_learning_strategies/test_strategy.py | ansunsujoe/distil | cf6cae2b88ef129d09c159aae0569978190e9f98 | [
"MIT"
] | 13 | 2021-03-05T18:26:58.000Z | 2022-03-12T01:53:17.000Z | from distil.utils.models.simple_net import TwoLayerNet
from distil.active_learning_strategies.strategy import Strategy
from test.utils import MyLabeledDataset, MyUnlabeledDataset
import unittest
import torch
class TestStrategy(unittest.TestCase):
def setUp(self):
# Create model
self.input_dimension = 50
self.classes = 10
self.hidden_units = 20
mymodel = TwoLayerNet(self.input_dimension, self.classes, self.hidden_units)
# Create labeled dataset
self.num_labeled_points = 1000
rand_data_tensor = torch.randn((self.num_labeled_points, self.input_dimension), requires_grad=True)
rand_label_tensor = torch.randint(low=0,high=self.classes,size=(self.num_labeled_points,))
rand_labeled_dataset = MyLabeledDataset(rand_data_tensor, rand_label_tensor)
# Create unlabeled dataset
self.num_unlabeled_points = 10000
rand_data_tensor = torch.randn((self.num_unlabeled_points, self.input_dimension), requires_grad=True)
rand_unlabeled_dataset = MyUnlabeledDataset(rand_data_tensor)
# Create args array
device = 'cuda' if torch.cuda.is_available() else 'cpu'
args = {'batch_size': 1, 'device': device, 'loss': torch.nn.functional.cross_entropy}
self.strategy = Strategy(rand_labeled_dataset, rand_unlabeled_dataset, mymodel, self.classes, args)
def test_update_data(self):
old_unlabeled_dataset = self.strategy.unlabeled_dataset
old_labeled_dataset = self.strategy.labeled_dataset
# Create new labeled dataset
rand_l_data_tensor = torch.randn((self.num_labeled_points, self.input_dimension), requires_grad=True)
rand_label_tensor = torch.randint(low=0,high=self.classes,size=(self.num_labeled_points,))
rand_labeled_dataset = MyLabeledDataset(rand_l_data_tensor, rand_label_tensor)
# Create unlabeled dataset
rand_data_tensor = torch.randn((self.num_unlabeled_points, self.input_dimension), requires_grad=True)
rand_unlabeled_dataset = MyUnlabeledDataset(rand_data_tensor)
# Update the data
self.strategy.update_data(rand_labeled_dataset, rand_unlabeled_dataset)
# Make sure the tensors are different
self.assertFalse(torch.equal(self.strategy.labeled_dataset.wrapped_data_tensor, old_labeled_dataset.wrapped_data_tensor))
self.assertFalse(torch.equal(self.strategy.labeled_dataset.wrapped_label_tensor, old_labeled_dataset.wrapped_label_tensor))
self.assertFalse(torch.equal(self.strategy.unlabeled_dataset.wrapped_data_tensor, old_unlabeled_dataset.wrapped_data_tensor))
# Make sure the updated datasets are the same
self.assertTrue(torch.equal(self.strategy.labeled_dataset.wrapped_data_tensor, rand_l_data_tensor))
self.assertTrue(torch.equal(self.strategy.labeled_dataset.wrapped_label_tensor, rand_label_tensor))
self.assertTrue(torch.equal(self.strategy.unlabeled_dataset.wrapped_data_tensor, rand_data_tensor))
# Update works; revert back to old datasets
self.strategy.update_data(old_labeled_dataset, old_unlabeled_dataset)
# Make sure the tensors are the same
self.assertTrue(torch.equal(self.strategy.labeled_dataset.wrapped_data_tensor, old_labeled_dataset.wrapped_data_tensor))
self.assertTrue(torch.equal(self.strategy.labeled_dataset.wrapped_label_tensor, old_labeled_dataset.wrapped_label_tensor))
self.assertTrue(torch.equal(self.strategy.unlabeled_dataset.wrapped_data_tensor, old_unlabeled_dataset.wrapped_data_tensor))
def test_update_model(self):
# Create a new model with two extra hidden units
old_model = self.strategy.model
my_model = TwoLayerNet(self.input_dimension, self.classes, self.hidden_units + 2)
self.strategy.update_model(my_model)
# Make sure the models are not equal
self.assertNotEqual(old_model, self.strategy.model)
# Update works; revert back to old model
self.strategy.update_model(old_model)
# Make sure the models are equal
self.assertEqual(self.strategy.model, old_model)
def test_predict(self):
# Predict labels for the unlabeled dataset
predicted_labels = self.strategy.predict(self.strategy.unlabeled_dataset)
# Ensure the same number of labels exist as the number of points
self.assertEqual(len(predicted_labels), len(self.strategy.unlabeled_dataset))
# Ensure none of the predicted labels are outside the expected range
for predicted_label in predicted_labels:
self.assertLess(predicted_label, self.strategy.target_classes)
self.assertGreaterEqual(predicted_label, 0)
def test_predict_prob(self):
# Predict probabilities for the unlabeled dataset
predict_probs = self.strategy.predict_prob(self.strategy.unlabeled_dataset)
# Ensure the same number of probability vectors and number of probabilities
self.assertEqual(predict_probs.shape[0], len(self.strategy.unlabeled_dataset))
self.assertEqual(predict_probs.shape[1], self.strategy.target_classes)
# Ensure probabilities sum to 1
for predicted_prob_vector in predict_probs:
self.assertAlmostEqual(predicted_prob_vector.sum().item(), 1, places=6)
# Ensure probabilities are geq 0, leq 1
for predicted_prob_vector in predict_probs:
for predicted_prob in predicted_prob_vector:
self.assertLessEqual(predicted_prob, 1)
self.assertGreaterEqual(predicted_prob, 0)
def test_predict_prob_dropout(self):
# Predict probabilities for the unlabeled dataset
predict_probs = self.strategy.predict_prob_dropout(self.strategy.unlabeled_dataset, n_drop=5)
# Ensure the same number of probability vectors and number of probabilities
self.assertEqual(predict_probs.shape[0], len(self.strategy.unlabeled_dataset))
self.assertEqual(predict_probs.shape[1], self.strategy.target_classes)
# Ensure probabilities sum to 1
for predicted_prob_vector in predict_probs:
self.assertAlmostEqual(predicted_prob_vector.sum().item(), 1, places=6)
# Ensure probabilities are geq 0, leq 1
for predicted_prob_vector in predict_probs:
for predicted_prob in predicted_prob_vector:
self.assertLessEqual(predicted_prob, 1)
self.assertGreaterEqual(predicted_prob, 0)
def test_predict_prob_dropout_split(self):
# Predict probabilities for the unlabeled dataset
n_drop = 5
predict_probs = self.strategy.predict_prob_dropout_split(self.strategy.unlabeled_dataset, n_drop=n_drop)
# Ensure the same number of probability vectors and number of probabilities and number of dropout samples
self.assertEqual(predict_probs.shape[0], n_drop)
self.assertEqual(predict_probs.shape[1], len(self.strategy.unlabeled_dataset))
self.assertEqual(predict_probs.shape[2], self.strategy.target_classes)
# Ensure probabilities sum to 1
for predict_prob_dropout in predict_probs:
for predicted_prob_vector in predict_prob_dropout:
self.assertAlmostEqual(predicted_prob_vector.sum().item(), 1, places=6)
# Ensure probabilities are geq 0, leq 1
for predict_prob_dropout in predict_probs:
for predicted_prob_vector in predict_prob_dropout:
for predicted_prob in predicted_prob_vector:
self.assertLessEqual(predicted_prob, 1)
self.assertGreaterEqual(predicted_prob, 0)
def test_get_embedding(self):
# Get a last linear layer embedding
embedding = self.strategy.get_embedding(self.strategy.unlabeled_dataset)
# Ensure embedding has number of points equal to the unlabeled dataset
self.assertEqual(embedding.shape[0], len(self.strategy.unlabeled_dataset))
# Ensure embedding has number of features equal to the embedding of the model
self.assertEqual(embedding.shape[1], self.strategy.model.get_embedding_dim())
def test_get_grad_embedding(self):
# Get grad embedding (bias)
bias_grad_embedding = self.strategy.get_grad_embedding(self.strategy.unlabeled_dataset, predict_labels=True, grad_embedding_type='bias')
# Ensure grad embedding has correct number of points / dimension
self.assertEqual(bias_grad_embedding.shape[0], len(self.strategy.unlabeled_dataset))
self.assertEqual(bias_grad_embedding.shape[1], self.strategy.target_classes)
# Get grad embedding (linear)
linear_grad_embedding = self.strategy.get_grad_embedding(self.strategy.unlabeled_dataset, predict_labels=True, grad_embedding_type='linear')
# Ensure grad embedding has correct number of points / dimension
self.assertEqual(linear_grad_embedding.shape[0], len(self.strategy.unlabeled_dataset))
self.assertEqual(linear_grad_embedding.shape[1], self.strategy.model.get_embedding_dim() * self.strategy.target_classes)
# Get grad embedding (bias_linear)
bias_linear_grad_embedding = self.strategy.get_grad_embedding(self.strategy.unlabeled_dataset, predict_labels=True, grad_embedding_type='bias_linear')
# Ensure grad embedding has correct number of points / dimension
self.assertEqual(bias_linear_grad_embedding.shape[0], len(self.strategy.unlabeled_dataset))
self.assertEqual(bias_linear_grad_embedding.shape[1], self.strategy.model.get_embedding_dim() * self.strategy.target_classes + self.strategy.target_classes)
# Get grad embedding on labeled dataset (bias)
bias_grad_embedding = self.strategy.get_grad_embedding(self.strategy.labeled_dataset, predict_labels=False, grad_embedding_type='bias')
# Ensure grad embedding has correct number of points / dimension
self.assertEqual(bias_grad_embedding.shape[0], len(self.strategy.labeled_dataset))
self.assertEqual(bias_grad_embedding.shape[1], self.strategy.target_classes)
# Get grad embedding on labeled dataset (linear)
linear_grad_embedding = self.strategy.get_grad_embedding(self.strategy.labeled_dataset, predict_labels=False, grad_embedding_type='linear')
# Ensure grad embedding has correct number of points / dimension
self.assertEqual(linear_grad_embedding.shape[0], len(self.strategy.labeled_dataset))
self.assertEqual(linear_grad_embedding.shape[1], self.strategy.model.get_embedding_dim() * self.strategy.target_classes)
# Get grad embedding on labeled dataset (bias_linear)
bias_linear_grad_embedding = self.strategy.get_grad_embedding(self.strategy.labeled_dataset, predict_labels=False, grad_embedding_type='bias_linear')
# Ensure grad embedding has correct number of points / dimension
self.assertEqual(bias_linear_grad_embedding.shape[0], len(self.strategy.labeled_dataset))
self.assertEqual(bias_linear_grad_embedding.shape[1], self.strategy.model.get_embedding_dim() * self.strategy.target_classes + self.strategy.target_classes)
# Make sure that ValueError is raised on invalid grad_embedding_type
with self.assertRaises(ValueError):
self.strategy.get_grad_embedding(self.strategy.unlabeled_dataset, predict_labels=True, grad_embedding_type='invalid_type')
if __name__ == "__main__":
unittest.main() | 54.321267 | 171 | 0.713369 | 1,457 | 12,005 | 5.612217 | 0.109815 | 0.104195 | 0.053932 | 0.071909 | 0.798337 | 0.782194 | 0.741715 | 0.719579 | 0.705882 | 0.660878 | 0 | 0.006788 | 0.214661 | 12,005 | 221 | 172 | 54.321267 | 0.860522 | 0.174344 | 0 | 0.315789 | 0 | 0 | 0.009021 | 0 | 0 | 0 | 0 | 0 | 0.394737 | 1 | 0.078947 | false | 0 | 0.04386 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
39e06327863d26d43ce7f4ffbfcc92d739ce8f71 | 2,558 | py | Python | test/test_dataform_sparse.py | hirano1412/bdpy | cee6f36dcdf4f4d29fc3a6980777e1c3d7c66cbb | [
"MIT"
] | 18 | 2018-01-22T04:18:48.000Z | 2022-03-12T09:42:03.000Z | test/test_dataform_sparse.py | hirano1412/bdpy | cee6f36dcdf4f4d29fc3a6980777e1c3d7c66cbb | [
"MIT"
] | 13 | 2018-05-01T08:31:14.000Z | 2022-02-21T06:45:34.000Z | test/test_dataform_sparse.py | hirano1412/bdpy | cee6f36dcdf4f4d29fc3a6980777e1c3d7c66cbb | [
"MIT"
] | 15 | 2019-03-04T02:43:46.000Z | 2022-02-17T00:41:47.000Z | '''Tests for dataform'''
from unittest import TestCase, TestLoader, TextTestRunner
import numpy as np
from bdpy.dataform import load_array, save_array
class TestUtil(TestCase):
def test_load_save_dense_array(self):
# ndim = 1
data = np.random.rand(10)
save_array('./tmp/test_array_dense_ndim1.mat', data, key='testdata')
testdata = load_array('./tmp/test_array_dense_ndim1.mat', key='testdata')
np.testing.assert_array_equal(data, testdata)
# ndim = 2
data = np.random.rand(3, 2)
save_array('./tmp/test_array_dense_ndim2.mat', data, key='testdata')
testdata = load_array('./tmp/test_array_dense_ndim2.mat', key='testdata')
np.testing.assert_array_equal(data, testdata)
# ndim = 3
data = np.random.rand(4, 3, 2)
save_array('./tmp/test_array_dense_ndim3.mat', data, key='testdata')
testdata = load_array('./tmp/test_array_dense_ndim3.mat', key='testdata')
np.testing.assert_array_equal(data, testdata)
def test_load_save_sparse_array(self):
# ndim = 1
data = np.random.rand(10)
data[data < 0.8] = 0
save_array('./tmp/test_array_sparse_ndim1.mat', data, key='testdata', sparse=True)
testdata = load_array('./tmp/test_array_sparse_ndim1.mat', key='testdata')
np.testing.assert_array_equal(data, testdata)
# ndim = 2
data = np.random.rand(3, 2)
data[data < 0.8] = 0
save_array('./tmp/test_array_sparse_ndim2.mat', data, key='testdata', sparse=True)
testdata = load_array('./tmp/test_array_sparse_ndim2.mat', key='testdata')
np.testing.assert_array_equal(data, testdata)
# ndim = 3
data = np.random.rand(4, 3, 2)
data[data < 0.8] = 0
save_array('./tmp/test_array_sparse_ndim3.mat', data, key='testdata', sparse=True)
testdata = load_array('./tmp/test_array_sparse_ndim3.mat', key='testdata')
np.testing.assert_array_equal(data, testdata)
def test_load_array_jl(self):
data = np.array([[1, 0, 0, 0],
[2, 2, 0, 0],
[3, 3, 3, 0]])
testdata = load_array('data/array_jl_dense_v1.mat', key='a')
np.testing.assert_array_equal(data, testdata)
testdata = load_array('data/array_jl_sparse_v1.mat', key='a')
np.testing.assert_array_equal(data, testdata)
if __name__ == '__main__':
suite = TestLoader().loadTestsFromTestCase(TestUtil)
TextTestRunner(verbosity=2).run(suite)
| 31.975 | 90 | 0.636044 | 350 | 2,558 | 4.382857 | 0.157143 | 0.062581 | 0.093872 | 0.132986 | 0.802477 | 0.802477 | 0.763364 | 0.729465 | 0.695567 | 0.653846 | 0 | 0.02834 | 0.227522 | 2,558 | 79 | 91 | 32.379747 | 0.747976 | 0.028538 | 0 | 0.386364 | 0 | 0 | 0.221908 | 0.179062 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.068182 | false | 0 | 0.068182 | 0 | 0.159091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8426625005dc915bfb65b6d448323a0f45237104 | 92 | py | Python | producer.py | zwcn/celery-example | f32959b156315f9cc41398c9a057fa15b9bb24a6 | [
"MIT"
] | null | null | null | producer.py | zwcn/celery-example | f32959b156315f9cc41398c9a057fa15b9bb24a6 | [
"MIT"
] | null | null | null | producer.py | zwcn/celery-example | f32959b156315f9cc41398c9a057fa15b9bb24a6 | [
"MIT"
] | null | null | null | from tasks.add import add
from tasks.minus import minus
add.delay(6, 6)
minus.delay(5, 5)
| 13.142857 | 29 | 0.73913 | 18 | 92 | 3.777778 | 0.444444 | 0.264706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051282 | 0.152174 | 92 | 6 | 30 | 15.333333 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
8439ea11699528f7c728fc675790aa64fc6e6b29 | 1,055 | py | Python | delira/data_loading/__init__.py | muizzk/delira | eb7fdfedd6bbeffa20ffad1ef6c918a4cd9abfbf | [
"BSD-2-Clause"
] | 1 | 2019-10-03T21:00:20.000Z | 2019-10-03T21:00:20.000Z | delira/data_loading/__init__.py | muizzk/delira | eb7fdfedd6bbeffa20ffad1ef6c918a4cd9abfbf | [
"BSD-2-Clause"
] | null | null | null | delira/data_loading/__init__.py | muizzk/delira | eb7fdfedd6bbeffa20ffad1ef6c918a4cd9abfbf | [
"BSD-2-Clause"
] | null | null | null |
from delira import get_backends as _get_backends
from delira.data_loading.data_loader import BaseDataLoader
from delira.data_loading.data_manager import BaseDataManager
from delira.data_loading.dataset import AbstractDataset
from delira.data_loading.dataset import BaseCacheDataset
from delira.data_loading.dataset import BaseLazyDataset
from delira.data_loading.dataset import ConcatDataset
from delira.data_loading.dataset import BaseExtendCacheDataset
from delira.data_loading.load_utils import default_load_fn_2d
from delira.data_loading.load_utils import LoadSample
from delira.data_loading.load_utils import LoadSampleLabel
from delira.data_loading.sampler import LambdaSampler
from delira.data_loading.sampler import RandomSampler
from delira.data_loading.sampler import SequentialSampler
if "TORCH" in _get_backends():
from delira.data_loading.dataset import TorchvisionClassificationDataset
try:
from delira.data_loading.numba_transform import NumbaTransform, \
NumbaTransformWrapper, NumbaCompose
except ImportError:
pass
| 42.2 | 76 | 0.867299 | 133 | 1,055 | 6.661654 | 0.315789 | 0.180587 | 0.23702 | 0.35553 | 0.548533 | 0.515801 | 0.121896 | 0 | 0 | 0 | 0 | 0.001047 | 0.094787 | 1,055 | 24 | 77 | 43.958333 | 0.926702 | 0 | 0 | 0 | 0 | 0 | 0.004744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.047619 | 0.809524 | 0 | 0.809524 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
845e74e737e2ae4f1ea3e6b562c4ad647dc1496c | 175 | py | Python | User_Crawler/get_graph_by_month.py | lifei96/Medium-crawler-with-data-parser | fed1a99c0b524871d430b3090a6bd8f501654535 | [
"MIT"
] | 4 | 2018-02-03T10:57:59.000Z | 2020-05-17T09:40:36.000Z | User_Crawler/get_graph_by_month.py | lifei96/Medium-crawler-with-data-parser | fed1a99c0b524871d430b3090a6bd8f501654535 | [
"MIT"
] | null | null | null | User_Crawler/get_graph_by_month.py | lifei96/Medium-crawler-with-data-parser | fed1a99c0b524871d430b3090a6bd8f501654535 | [
"MIT"
] | 6 | 2017-03-02T10:30:12.000Z | 2021-08-10T11:14:27.000Z | # -*- coding: utf-8 -*-
from util_graph import *
if __name__ == '__main__':
get_graph_by_month('./data/graph/graph.dat', './data/cross-site-linking/date_username.csv')
| 21.875 | 95 | 0.68 | 25 | 175 | 4.24 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006536 | 0.125714 | 175 | 7 | 96 | 25 | 0.686275 | 0.12 | 0 | 0 | 0 | 0 | 0.480263 | 0.427632 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ffdc49e8c1fa26cc3c4c4fd91c9564bc7fd64ac6 | 23 | py | Python | tests/helpers/__init__.py | NickleDave/conbirt | 71db6c6fd68dfef1bdbdcfacd8b2a16b21b86089 | [
"BSD-3-Clause"
] | null | null | null | tests/helpers/__init__.py | NickleDave/conbirt | 71db6c6fd68dfef1bdbdcfacd8b2a16b21b86089 | [
"BSD-3-Clause"
] | 3 | 2018-12-16T17:57:22.000Z | 2018-12-16T20:12:33.000Z | tests/helpers/__init__.py | NickleDave/conbirt | 71db6c6fd68dfef1bdbdcfacd8b2a16b21b86089 | [
"BSD-3-Clause"
] | null | null | null | from . import keywords
| 11.5 | 22 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ffe6f18f7c1746d0506e56c945c3e79c92849154 | 269 | py | Python | app/FW/views/__init__.py | uncle-lu/FuckingWords | 2191d657de5ac5b663ba879f2fd5acec0d856924 | [
"MIT"
] | null | null | null | app/FW/views/__init__.py | uncle-lu/FuckingWords | 2191d657de5ac5b663ba879f2fd5acec0d856924 | [
"MIT"
] | null | null | null | app/FW/views/__init__.py | uncle-lu/FuckingWords | 2191d657de5ac5b663ba879f2fd5acec0d856924 | [
"MIT"
] | null | null | null | #coding:utf-8
from flask import Blueprint
Words_views = Blueprint('Words_views',__name__)
Units_views = Blueprint('Units_views',__name__)
Create_pdf_views = Blueprint('Create_pdf_views',__name__)
from FW.views import W
from FW.views import U
from FW.views import C | 20.692308 | 57 | 0.799257 | 42 | 269 | 4.642857 | 0.404762 | 0.215385 | 0.169231 | 0.261538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004184 | 0.111524 | 269 | 13 | 58 | 20.692308 | 0.811715 | 0.04461 | 0 | 0 | 0 | 0 | 0.14786 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0.571429 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
08055b930fc392cba22bc2764162b35a16d46e82 | 62 | py | Python | cee/__init__.py | gautierdag/cultural-evolution-engine | 54ea8d374ff4345c05f03eccfb2e93161e16a050 | [
"MIT"
] | 4 | 2019-03-20T15:31:44.000Z | 2020-11-28T13:40:13.000Z | cee/__init__.py | gautierdag/cultural-evolution-engine | 54ea8d374ff4345c05f03eccfb2e93161e16a050 | [
"MIT"
] | null | null | null | cee/__init__.py | gautierdag/cultural-evolution-engine | 54ea8d374ff4345c05f03eccfb2e93161e16a050 | [
"MIT"
] | 1 | 2021-11-06T01:15:28.000Z | 2021-11-06T01:15:28.000Z | from .BaseAgent import BaseAgent
from .BaseCEE import BaseCEE
| 20.666667 | 32 | 0.83871 | 8 | 62 | 6.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 62 | 2 | 33 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
08194c24e31495b4fd2fe462b66e280ce62112bd | 330 | py | Python | genienlp/ned/__init__.py | Krish-sysadmin/genienlp | 3586e4368eb0b0756a772294daedc043ce55454c | [
"BSD-3-Clause"
] | 6 | 2019-04-30T19:47:17.000Z | 2019-11-30T04:16:47.000Z | genienlp/ned/__init__.py | Krish-sysadmin/genienlp | 3586e4368eb0b0756a772294daedc043ce55454c | [
"BSD-3-Clause"
] | null | null | null | genienlp/ned/__init__.py | Krish-sysadmin/genienlp | 3586e4368eb0b0756a772294daedc043ce55454c | [
"BSD-3-Clause"
] | null | null | null | from .abstract import AbstractEntityDisambiguator # noqa
from .bootleg import BatchBootlegEntityDisambiguator, ServingBootlegEntityDisambiguator # noqa
from .main import ( # noqa
EntityAndTypeOracleEntityDisambiguator,
EntityOracleEntityDisambiguator,
NaiveEntityDisambiguator,
TypeOracleEntityDisambiguator,
)
| 36.666667 | 95 | 0.830303 | 19 | 330 | 14.421053 | 0.684211 | 0.058394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130303 | 330 | 8 | 96 | 41.25 | 0.954704 | 0.042424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.375 | 0 | 0.375 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
0841859e7de1857a4642a618ddecffccf3118669 | 52 | py | Python | renconstruct/__init__.py | devorbitus/renconstruct | 6aa5fae05989ede2e4cd14c632d8eb520b4cfe60 | [
"MIT"
] | 8 | 2020-04-27T00:46:15.000Z | 2021-05-31T15:19:53.000Z | renconstruct/__init__.py | devorbitus/renconstruct | 6aa5fae05989ede2e4cd14c632d8eb520b4cfe60 | [
"MIT"
] | 3 | 2021-06-03T15:52:57.000Z | 2021-12-14T12:54:24.000Z | renconstruct/__init__.py | devorbitus/renconstruct | 6aa5fae05989ede2e4cd14c632d8eb520b4cfe60 | [
"MIT"
] | 2 | 2021-05-31T15:54:44.000Z | 2021-06-03T15:10:09.000Z | from .renconstruct import cli, logger # noqa: F401
| 26 | 51 | 0.75 | 7 | 52 | 5.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 0.173077 | 52 | 1 | 52 | 52 | 0.837209 | 0.192308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
084cf057d912a22d476875da356b12d8168530ba | 131 | py | Python | flask_app.py | ongzhixian/zhixian.pythonanywhere.com | 73ad6da5a1edb0b59c7b5ae6251d0260f23e58dd | [
"MIT"
] | null | null | null | flask_app.py | ongzhixian/zhixian.pythonanywhere.com | 73ad6da5a1edb0b59c7b5ae6251d0260f23e58dd | [
"MIT"
] | null | null | null | flask_app.py | ongzhixian/zhixian.pythonanywhere.com | 73ad6da5a1edb0b59c7b5ae6251d0260f23e58dd | [
"MIT"
] | null | null | null | import logging
import os
from forum_app import app
if __name__ == '__main__':
app.run(host='0.0.0.0', port=31000, debug=True) | 18.714286 | 51 | 0.709924 | 23 | 131 | 3.652174 | 0.695652 | 0.071429 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 0.152672 | 131 | 7 | 51 | 18.714286 | 0.675676 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f26535526825a7073803171a5a1aa54c37de11c7 | 204 | py | Python | humann2/maintenance/make_map_pfam_name.py | bmpbos/humann | 4a8fee5596d89d805af6568d3260844f80c8f9a2 | [
"MIT"
] | null | null | null | humann2/maintenance/make_map_pfam_name.py | bmpbos/humann | 4a8fee5596d89d805af6568d3260844f80c8f9a2 | [
"MIT"
] | null | null | null | humann2/maintenance/make_map_pfam_name.py | bmpbos/humann | 4a8fee5596d89d805af6568d3260844f80c8f9a2 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import os
url = "ftp://ftp.ebi.ac.uk/pub/databases/Pfam/current_release/Pfam-A.clans.tsv.gz"
os.system( "curl {} | zcat | cut -f1,5 | gzip > map_pfam_name.txt.gz".format( url ) )
| 25.5 | 85 | 0.676471 | 37 | 204 | 3.648649 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011173 | 0.122549 | 204 | 7 | 86 | 29.142857 | 0.743017 | 0.098039 | 0 | 0 | 0 | 0.333333 | 0.710383 | 0.404372 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f26ae86cb226413d230642ce1be82529afb80300 | 93 | py | Python | compipe/__init__.py | ImagineersHub/compipe | dd14c2701717d7d0901eb1139f59e7fbfeba7517 | [
"MIT"
] | null | null | null | compipe/__init__.py | ImagineersHub/compipe | dd14c2701717d7d0901eb1139f59e7fbfeba7517 | [
"MIT"
] | null | null | null | compipe/__init__.py | ImagineersHub/compipe | dd14c2701717d7d0901eb1139f59e7fbfeba7517 | [
"MIT"
] | null | null | null | from .cmd_enroller import cmd_enroller, command_list
from .cmd_wrapper import CommandWrapper
| 31 | 52 | 0.870968 | 13 | 93 | 5.923077 | 0.615385 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 93 | 2 | 53 | 46.5 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f2a49c981a4031ece373adcc48393268c32b1498 | 197 | py | Python | holobot/sdk/network/exceptions/__init__.py | rexor12/holobot | 89b7b416403d13ccfeee117ef942426b08d3651d | [
"MIT"
] | 1 | 2021-05-24T00:17:46.000Z | 2021-05-24T00:17:46.000Z | holobot/sdk/network/exceptions/__init__.py | rexor12/holobot | 89b7b416403d13ccfeee117ef942426b08d3651d | [
"MIT"
] | 41 | 2021-03-24T22:50:09.000Z | 2021-12-17T12:15:13.000Z | holobot/sdk/network/exceptions/__init__.py | rexor12/holobot | 89b7b416403d13ccfeee117ef942426b08d3651d | [
"MIT"
] | null | null | null | from .header_utils import try_get_retry_after
from .http_status_error import HttpStatusError
from .im_a_teapot_error import ImATeapotError
from .too_many_requests_error import TooManyRequestsError
| 39.4 | 57 | 0.898477 | 28 | 197 | 5.892857 | 0.714286 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081218 | 197 | 4 | 58 | 49.25 | 0.911602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4b42b13fedb782c2dca41aa9f58228a4ec3bfe16 | 18,373 | py | Python | pysnmp-with-texts/IANAifType-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/IANAifType-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/IANAifType-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module IANAifType-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/IANAifType-MIB
# Produced by pysmi-0.3.4 at Wed May 1 11:03:40 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ConstraintsIntersection, ConstraintsUnion, ValueRangeConstraint, ValueSizeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ConstraintsIntersection", "ConstraintsUnion", "ValueRangeConstraint", "ValueSizeConstraint")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
Gauge32, IpAddress, Counter32, mib_2, Unsigned32, NotificationType, iso, ModuleIdentity, Counter64, ObjectIdentity, MibIdentifier, Integer32, TimeTicks, MibScalar, MibTable, MibTableRow, MibTableColumn, Bits = mibBuilder.importSymbols("SNMPv2-SMI", "Gauge32", "IpAddress", "Counter32", "mib-2", "Unsigned32", "NotificationType", "iso", "ModuleIdentity", "Counter64", "ObjectIdentity", "MibIdentifier", "Integer32", "TimeTicks", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Bits")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
ianaifType = ModuleIdentity((1, 3, 6, 1, 2, 1, 30))
ianaifType.setRevisions(('2017-03-30 00:00', '2017-01-19 00:00', '2016-11-23 00:00', '2016-06-16 00:00', '2016-06-09 00:00', '2016-06-08 00:00', '2016-05-19 00:00', '2016-05-03 00:00', '2016-04-29 00:00', '2014-09-24 00:00', '2014-09-19 00:00', '2014-07-03 00:00', '2014-05-22 00:00', '2012-05-17 00:00', '2012-01-11 00:00', '2011-12-18 00:00', '2011-10-26 00:00', '2011-09-07 00:00', '2011-07-22 00:00', '2011-06-03 00:00', '2010-09-21 00:00', '2010-07-21 00:00', '2010-02-11 00:00', '2010-02-08 00:00', '2009-05-06 00:00', '2009-02-06 00:00', '2008-10-09 00:00', '2008-08-12 00:00', '2008-07-22 00:00', '2008-06-24 00:00', '2008-05-29 00:00', '2007-09-13 00:00', '2007-05-29 00:00', '2007-03-08 00:00', '2007-01-23 00:00', '2006-10-17 00:00', '2006-09-25 00:00', '2006-08-17 00:00', '2006-08-11 00:00', '2006-07-25 00:00', '2006-06-14 00:00', '2006-03-31 00:00', '2006-03-30 00:00', '2005-12-22 00:00', '2005-10-10 00:00', '2005-09-09 00:00', '2005-05-27 00:00', '2005-03-03 00:00', '2004-11-22 00:00', '2004-06-17 00:00', '2004-05-12 00:00', '2004-05-07 00:00', '2003-08-25 00:00', '2003-08-18 00:00', '2003-08-07 00:00', '2003-03-18 00:00', '2003-01-13 00:00', '2002-10-17 00:00', '2002-07-16 00:00', '2002-07-10 00:00', '2002-06-19 00:00', '2002-01-04 00:00', '2001-12-20 00:00', '2001-11-15 00:00', '2001-11-06 00:00', '2001-11-02 00:00', '2001-10-16 00:00', '2001-09-19 00:00', '2001-05-11 00:00', '2001-01-12 00:00', '2000-12-19 00:00', '2000-12-07 00:00', '2000-12-04 00:00', '2000-10-17 00:00', '2000-10-02 00:00', '2000-09-01 00:00', '2000-08-24 00:00', '2000-08-23 00:00', '2000-08-22 00:00', '2000-04-25 00:00', '2000-03-06 00:00', '1999-10-08 14:30', '1994-01-31 00:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: ianaifType.setRevisionsDescriptions(('Registration of new IANAifType 290.', 'Registration of new IANAifType 289.', 'Registration of new IANAifTypes 283-288.', 'Updated IANAtunnelType DESCRIPTION per RFC 7870', 'Registration of new IANAifType 282.', 'Updated description for tunnelType 17.', 'Updated description for tunnelType 16.', 'Registration of new IANAifType 281.', 'Registration of new tunnelTypes 16 and 17.', 'Registration of new IANAifType 280.', 'Registration of new IANAifType 279.', 'Registration of new IANAifTypes 277-278.', 'Updated contact info.', 'Registration of new IANAifType 272.', 'Registration of new IANAifTypes 266-271.', 'Registration of new IANAifTypes 263-265.', 'Registration of new IANAifType 262.', 'Registration of new IANAifTypes 260 and 261.', 'Registration of new IANAifType 259.', 'Registration of new IANAifType 258.', 'Registration of new IANAifTypes 256 and 257.', 'Registration of new IANAifType 255.', 'Registration of new IANAifType 254.', 'Registration of new IANAifTypes 252 and 253.', 'Registration of new IANAifType 251.', 'Registration of new IANAtunnelType 15.', 'Registration of new IANAifType 250.', 'Registration of new IANAifType 249.', 'Registration of new IANAifTypes 247 and 248.', 'Registration of new IANAifType 246.', 'Registration of new IANAifType 245.', 'Registration of new IANAifTypes 243 and 244.', 'Changed the description for IANAifType 228.', 'Registration of new IANAifType 242.', 'Registration of new IANAifTypes 239, 240, and 241.', 'Deprecated/Obsoleted IANAifType 230. Registration of IANAifType 238.', 'Changed the description for IANA ifType 184 and added new IANA ifType 237.', 'Changed the descriptions for IANAifTypes 20 and 21.', 'Changed the descriptions for IANAifTypes 7, 11, 62, 69, and 117.', 'Registration of new IANA ifType 236.', 'Registration of new IANA ifType 235.', 'Registration of new IANA ifType 234.', 'Registration of new IANA ifType 233.', 'Registration of new IANA ifTypes 231 and 232.', 'Registration of new IANA ifType 230.', 'Registration of new IANA ifType 229.', 'Registration of new IANA ifType 228.', 'Added the IANAtunnelType TC and deprecated IANAifType sixToFour (215) per RFC4087.', 'Registration of new IANA ifType 227 per RFC4631.', 'Registration of new IANA ifType 226.', 'Added description for IANAifType 6, and changed the descriptions for IANAifTypes 180, 181, and 182.', 'Registration of new IANAifType 225.', 'Deprecated IANAifTypes 7 and 11. Obsoleted IANAifTypes 62, 69, and 117. ethernetCsmacd (6) should be used instead of these values', 'Registration of new IANAifType 224.', 'Registration of new IANAifTypes 222 and 223.', 'Registration of new IANAifType 221.', 'Registration of new IANAifType 220.', 'Registration of new IANAifType 219.', 'Registration of new IANAifTypes 217 and 218.', 'Registration of new IANAifTypes 215 and 216.', 'Registration of new IANAifType 214.', 'Registration of new IANAifTypes 211, 212 and 213.', 'Registration of new IANAifTypes 209 and 210.', 'Registration of new IANAifTypes 207 and 208.', 'Registration of new IANAifType 206.', 'Registration of new IANAifType 205.', 'Registration of new IANAifTypes 199, 200, 201, 202, 203, and 204.', 'Registration of new IANAifType 198.', 'Registration of new IANAifType 197.', 'Registration of new IANAifTypes 195 and 196.', 'Registration of new IANAifTypes 193 and 194.', 'Registration of new IANAifTypes 191 and 192.', 'Registration of new IANAifType 190.', 'Registration of new IANAifTypes 188 and 189.', 'Registration of new IANAifType 187.', 'Registration of new IANAifTypes 184, 185, and 186.', 'Registration of new IANAifType 183.', 'Registration of new IANAifTypes 174-182.', 'Registration of new IANAifTypes 170, 171, 172 and 173.', 'Registration of new IANAifTypes 168 and 169.', 'Fixed a missing semi-colon in the IMPORT. Also cleaned up the REVISION log a bit. It is not complete, but from now on it will be maintained and kept up to date with each change to this MIB module.', 'Include new name assignments up to cnr(85). This is the first version available via the WWW at: ftp://ftp.isi.edu/mib/ianaiftype.mib', 'Initial version of this MIB as published in RFC 1573.',))
if mibBuilder.loadTexts: ianaifType.setLastUpdated('201703300000Z')
if mibBuilder.loadTexts: ianaifType.setOrganization('IANA')
if mibBuilder.loadTexts: ianaifType.setContactInfo(' Internet Assigned Numbers Authority Postal: ICANN 12025 Waterfront Drive, Suite 300 Los Angeles, CA 90094-2536 Tel: +1 310-301-5800 E-Mail: iana&iana.org')
if mibBuilder.loadTexts: ianaifType.setDescription("This MIB module defines the IANAifType Textual Convention, and thus the enumerated values of the ifType object defined in MIB-II's ifTable.")
class IANAifType(TextualConvention, Integer32):
description = "This data type is used as the syntax of the ifType object in the (updated) definition of MIB-II's ifTable. The definition of this textual convention with the addition of newly assigned values is published periodically by the IANA, in either the Assigned Numbers RFC, or some derivative of it specific to Internet Network Management number assignments. (The latest arrangements can be obtained by contacting the IANA.) Requests for new values should be made to IANA via email (iana&iana.org). The relationship between the assignment of ifType values and of OIDs to particular media-specific MIBs is solely the purview of IANA and is subject to change without notice. Quite often, a media-specific MIB's OID-subtree assignment within MIB-II's 'transmission' subtree will be the same as its ifType value. However, in some circumstances this will not be the case, and implementors must not pre-assume any specific relationship between ifType values and transmission subtree OIDs."
status = 'current'
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255), SingleValueConstraint(256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290))
namedValues = NamedValues(("other", 1), ("regular1822", 2), ("hdh1822", 3), ("ddnX25", 4), ("rfc877x25", 5), ("ethernetCsmacd", 6), ("iso88023Csmacd", 7), ("iso88024TokenBus", 8), ("iso88025TokenRing", 9), ("iso88026Man", 10), ("starLan", 11), ("proteon10Mbit", 12), ("proteon80Mbit", 13), ("hyperchannel", 14), ("fddi", 15), ("lapb", 16), ("sdlc", 17), ("ds1", 18), ("e1", 19), ("basicISDN", 20), ("primaryISDN", 21), ("propPointToPointSerial", 22), ("ppp", 23), ("softwareLoopback", 24), ("eon", 25), ("ethernet3Mbit", 26), ("nsip", 27), ("slip", 28), ("ultra", 29), ("ds3", 30), ("sip", 31), ("frameRelay", 32), ("rs232", 33), ("para", 34), ("arcnet", 35), ("arcnetPlus", 36), ("atm", 37), ("miox25", 38), ("sonet", 39), ("x25ple", 40), ("iso88022llc", 41), ("localTalk", 42), ("smdsDxi", 43), ("frameRelayService", 44), ("v35", 45), ("hssi", 46), ("hippi", 47), ("modem", 48), ("aal5", 49), ("sonetPath", 50), ("sonetVT", 51), ("smdsIcip", 52), ("propVirtual", 53), ("propMultiplexor", 54), ("ieee80212", 55), ("fibreChannel", 56), ("hippiInterface", 57), ("frameRelayInterconnect", 58), ("aflane8023", 59), ("aflane8025", 60), ("cctEmul", 61), ("fastEther", 62), ("isdn", 63), ("v11", 64), ("v36", 65), ("g703at64k", 66), ("g703at2mb", 67), ("qllc", 68), ("fastEtherFX", 69), ("channel", 70), ("ieee80211", 71), ("ibm370parChan", 72), ("escon", 73), ("dlsw", 74), ("isdns", 75), ("isdnu", 76), ("lapd", 77), ("ipSwitch", 78), ("rsrb", 79), ("atmLogical", 80), ("ds0", 81), ("ds0Bundle", 82), ("bsc", 83), ("async", 84), ("cnr", 85), ("iso88025Dtr", 86), ("eplrs", 87), ("arap", 88), ("propCnls", 89), ("hostPad", 90), ("termPad", 91), ("frameRelayMPI", 92), ("x213", 93), ("adsl", 94), ("radsl", 95), ("sdsl", 96), ("vdsl", 97), ("iso88025CRFPInt", 98), ("myrinet", 99), ("voiceEM", 100), ("voiceFXO", 101), ("voiceFXS", 102), ("voiceEncap", 103), ("voiceOverIp", 104), ("atmDxi", 105), ("atmFuni", 106), ("atmIma", 107), ("pppMultilinkBundle", 108), ("ipOverCdlc", 109), ("ipOverClaw", 110), ("stackToStack", 111), ("virtualIpAddress", 112), ("mpc", 113), ("ipOverAtm", 114), ("iso88025Fiber", 115), ("tdlc", 116), ("gigabitEthernet", 117), ("hdlc", 118), ("lapf", 119), ("v37", 120), ("x25mlp", 121), ("x25huntGroup", 122), ("transpHdlc", 123), ("interleave", 124), ("fast", 125), ("ip", 126), ("docsCableMaclayer", 127), ("docsCableDownstream", 128), ("docsCableUpstream", 129), ("a12MppSwitch", 130), ("tunnel", 131), ("coffee", 132), ("ces", 133), ("atmSubInterface", 134), ("l2vlan", 135), ("l3ipvlan", 136), ("l3ipxvlan", 137), ("digitalPowerline", 138), ("mediaMailOverIp", 139), ("dtm", 140), ("dcn", 141), ("ipForward", 142), ("msdsl", 143), ("ieee1394", 144), ("if-gsn", 145), ("dvbRccMacLayer", 146), ("dvbRccDownstream", 147), ("dvbRccUpstream", 148), ("atmVirtual", 149), ("mplsTunnel", 150), ("srp", 151), ("voiceOverAtm", 152), ("voiceOverFrameRelay", 153), ("idsl", 154), ("compositeLink", 155), ("ss7SigLink", 156), ("propWirelessP2P", 157), ("frForward", 158), ("rfc1483", 159), ("usb", 160), ("ieee8023adLag", 161), ("bgppolicyaccounting", 162), ("frf16MfrBundle", 163), ("h323Gatekeeper", 164), ("h323Proxy", 165), ("mpls", 166), ("mfSigLink", 167), ("hdsl2", 168), ("shdsl", 169), ("ds1FDL", 170), ("pos", 171), ("dvbAsiIn", 172), ("dvbAsiOut", 173), ("plc", 174), ("nfas", 175), ("tr008", 176), ("gr303RDT", 177), ("gr303IDT", 178), ("isup", 179), ("propDocsWirelessMaclayer", 180), ("propDocsWirelessDownstream", 181), ("propDocsWirelessUpstream", 182), ("hiperlan2", 183), ("propBWAp2Mp", 184), ("sonetOverheadChannel", 185), ("digitalWrapperOverheadChannel", 186), ("aal2", 187), ("radioMAC", 188), ("atmRadio", 189), ("imt", 190), ("mvl", 191), ("reachDSL", 192), ("frDlciEndPt", 193), ("atmVciEndPt", 194), ("opticalChannel", 195), ("opticalTransport", 196), ("propAtm", 197), ("voiceOverCable", 198), ("infiniband", 199), ("teLink", 200), ("q2931", 201), ("virtualTg", 202), ("sipTg", 203), ("sipSig", 204), ("docsCableUpstreamChannel", 205), ("econet", 206), ("pon155", 207), ("pon622", 208), ("bridge", 209), ("linegroup", 210), ("voiceEMFGD", 211), ("voiceFGDEANA", 212), ("voiceDID", 213), ("mpegTransport", 214), ("sixToFour", 215), ("gtp", 216), ("pdnEtherLoop1", 217), ("pdnEtherLoop2", 218), ("opticalChannelGroup", 219), ("homepna", 220), ("gfp", 221), ("ciscoISLvlan", 222), ("actelisMetaLOOP", 223), ("fcipLink", 224), ("rpr", 225), ("qam", 226), ("lmp", 227), ("cblVectaStar", 228), ("docsCableMCmtsDownstream", 229), ("adsl2", 230), ("macSecControlledIF", 231), ("macSecUncontrolledIF", 232), ("aviciOpticalEther", 233), ("atmbond", 234), ("voiceFGDOS", 235), ("mocaVersion1", 236), ("ieee80216WMAN", 237), ("adsl2plus", 238), ("dvbRcsMacLayer", 239), ("dvbTdm", 240), ("dvbRcsTdma", 241), ("x86Laps", 242), ("wwanPP", 243), ("wwanPP2", 244), ("voiceEBS", 245), ("ifPwType", 246), ("ilan", 247), ("pip", 248), ("aluELP", 249), ("gpon", 250), ("vdsl2", 251), ("capwapDot11Profile", 252), ("capwapDot11Bss", 253), ("capwapWtpVirtualRadio", 254), ("bits", 255)) + NamedValues(("docsCableUpstreamRfPort", 256), ("cableDownstreamRfPort", 257), ("vmwareVirtualNic", 258), ("ieee802154", 259), ("otnOdu", 260), ("otnOtu", 261), ("ifVfiType", 262), ("g9981", 263), ("g9982", 264), ("g9983", 265), ("aluEpon", 266), ("aluEponOnu", 267), ("aluEponPhysicalUni", 268), ("aluEponLogicalLink", 269), ("aluGponOnu", 270), ("aluGponPhysicalUni", 271), ("vmwareNicTeam", 272), ("docsOfdmDownstream", 277), ("docsOfdmaUpstream", 278), ("gfast", 279), ("sdci", 280), ("xboxWireless", 281), ("fastdsl", 282), ("docsCableScte55d1FwdOob", 283), ("docsCableScte55d1RetOob", 284), ("docsCableScte55d2DsOob", 285), ("docsCableScte55d2UsOob", 286), ("docsCableNdf", 287), ("docsCableNdr", 288), ("ptm", 289), ("ghn", 290))
class IANAtunnelType(TextualConvention, Integer32):
description = 'The encapsulation method used by a tunnel. The value direct indicates that a packet is encapsulated directly within a normal IP header, with no intermediate header, and unicast to the remote tunnel endpoint (e.g., an RFC 2003 IP-in-IP tunnel, or an RFC 1933 IPv6-in-IPv4 tunnel). The value minimal indicates that a Minimal Forwarding Header (RFC 2004) is inserted between the outer header and the payload packet. The value UDP indicates that the payload packet is encapsulated within a normal UDP packet (e.g., RFC 1234). The values sixToFour, sixOverFour, and isatap indicates that an IPv6 packet is encapsulated directly within an IPv4 header, with no intermediate header, and unicast to the destination determined by the 6to4, 6over4, or ISATAP protocol. The remaining protocol-specific values indicate that a header of the protocol of that name is inserted between the outer header and the payload header. The IP Tunnel MIB [RFC4087] is designed to manage tunnels of any type over IPv4 and IPv6 networks; therefore, it already supports IP-in-IP tunnels. But in a DS-Lite scenario, the tunnel type is point-to-multipoint IP-in-IP tunnels. The direct(2) defined in the IP Tunnel MIB only supports point-to-point tunnels. So, it needs to define a new tunnel type for DS-Lite. The assignment policy for IANAtunnelType values is identical to the policy for assigning IANAifType values.'
status = 'current'
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17))
namedValues = NamedValues(("other", 1), ("direct", 2), ("gre", 3), ("minimal", 4), ("l2tp", 5), ("pptp", 6), ("l2f", 7), ("udp", 8), ("atmp", 9), ("msdp", 10), ("sixToFour", 11), ("sixOverFour", 12), ("isatap", 13), ("teredo", 14), ("ipHttps", 15), ("softwireMesh", 16), ("dsLite", 17))
mibBuilder.exportSymbols("IANAifType-MIB", IANAtunnelType=IANAtunnelType, ianaifType=ianaifType, IANAifType=IANAifType, PYSNMP_MODULE_ID=ianaifType)
| 510.361111 | 5,747 | 0.688837 | 2,608 | 18,373 | 4.85161 | 0.35046 | 0.025923 | 0.091362 | 0.06615 | 0.111041 | 0.05801 | 0.05801 | 0.05801 | 0.05801 | 0.043942 | 0 | 0.205919 | 0.124585 | 18,373 | 35 | 5,748 | 524.942857 | 0.580763 | 0.017526 | 0 | 0.08 | 0 | 0.28 | 0.612571 | 0.023833 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.28 | 0 | 0.68 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4b6ec9a96cc7fe8203910b07c32e22611f1d1e45 | 1,542 | py | Python | tests/marltoolbox/utils/test_log.py | longtermrisk/marltoolbox | cae1ba94ccb44700b66a32e0734a0f11c9c6c7fe | [
"MIT"
] | 17 | 2021-01-17T21:21:08.000Z | 2022-01-27T00:57:30.000Z | tests/marltoolbox/utils/test_log.py | longtermrisk/marltoolbox | cae1ba94ccb44700b66a32e0734a0f11c9c6c7fe | [
"MIT"
] | 5 | 2021-02-21T21:43:00.000Z | 2021-05-04T12:27:23.000Z | tests/marltoolbox/utils/test_log.py | longtermrisk/marltoolbox | cae1ba94ccb44700b66a32e0734a0f11c9c6c7fe | [
"MIT"
] | 3 | 2021-02-21T11:38:22.000Z | 2022-03-04T12:06:19.000Z | import numpy as np
from marltoolbox.utils.log import _add_entropy_to_log
def test__add_entropy_to_log():
to_log = {}
train_batch = {"action_dist_inputs": np.array([[0.0, 1.0]])}
to_log = _add_entropy_to_log(train_batch, to_log)
assert_close(to_log[f"entropy_buffer_samples_avg"], 0.00, 0.001)
assert_close(to_log[f"entropy_buffer_samples_single"], 0.00, 0.001)
to_log = {}
train_batch = {"action_dist_inputs": np.array([[0.75, 0.25]])}
to_log = _add_entropy_to_log(train_batch, to_log)
assert_close(to_log[f"entropy_buffer_samples_avg"], 0.562335145, 0.001)
assert_close(to_log[f"entropy_buffer_samples_single"], 0.562335145, 0.001)
to_log = {}
train_batch = {"action_dist_inputs": np.array([[0.62, 0.12, 0.13, 0.13]])}
to_log = _add_entropy_to_log(train_batch, to_log)
assert_close(to_log[f"entropy_buffer_samples_avg"], 1.081271236, 0.001)
assert_close(to_log[f"entropy_buffer_samples_single"], 1.081271236, 0.001)
to_log = {}
train_batch = {
"action_dist_inputs": np.array(
[
[0.62, 0.12, 0.13, 0.13],
[0.75, 0.25, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
]
)
}
to_log = _add_entropy_to_log(train_batch, to_log)
assert_close(to_log[f"entropy_buffer_samples_avg"], 0.547868794, 0.001)
assert_close(to_log[f"entropy_buffer_samples_single"], 0.00, 0.001)
return to_log
def assert_close(a, b, threshold):
abs_diff = np.abs(a - b)
assert abs_diff < threshold
| 34.266667 | 78 | 0.659533 | 258 | 1,542 | 3.565891 | 0.170543 | 0.146739 | 0.086957 | 0.130435 | 0.765217 | 0.754348 | 0.754348 | 0.754348 | 0.754348 | 0.754348 | 0 | 0.115883 | 0.199741 | 1,542 | 44 | 79 | 35.045455 | 0.62966 | 0 | 0 | 0.285714 | 0 | 0 | 0.189364 | 0.142672 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.057143 | false | 0 | 0.057143 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4b7f9ea57e6aaf467941f352514c49be7309459a | 138 | py | Python | respa_outlook/__init__.py | pnuz3n/respa | 0f48eb9bec18013b27970b44b8648f03eee8dcf4 | [
"MIT"
] | 1 | 2019-12-17T10:02:17.000Z | 2019-12-17T10:02:17.000Z | respa_outlook/__init__.py | pnuz3n/respa | 0f48eb9bec18013b27970b44b8648f03eee8dcf4 | [
"MIT"
] | 38 | 2020-01-24T11:30:53.000Z | 2022-01-28T12:42:47.000Z | respa_outlook/__init__.py | digipointtku/respa | a529e0df4d3f072df7801adb5bf97a5f4abd1243 | [
"MIT"
] | 14 | 2020-02-26T08:17:34.000Z | 2021-09-14T07:57:21.000Z | default_app_config = 'respa_outlook.apps.RespaOutlookConfig'
__all__ = [
'RespaOutlookConfiguration',
'RespaOutlookReservation'
] | 23 | 60 | 0.782609 | 10 | 138 | 10.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123188 | 138 | 6 | 61 | 23 | 0.834711 | 0 | 0 | 0 | 0 | 0 | 0.611511 | 0.611511 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
298b51a0094182bdba15b792d0f47d1b453367fe | 270 | py | Python | remote_control/exceptions.py | lsapan/django-remote-control | 4dc6adfaaa1d12b2ee69d3fa3745c0040de49192 | [
"MIT"
] | 2 | 2018-09-12T13:13:44.000Z | 2021-09-17T05:08:01.000Z | remote_control/exceptions.py | lsapan/django-remote-control | 4dc6adfaaa1d12b2ee69d3fa3745c0040de49192 | [
"MIT"
] | null | null | null | remote_control/exceptions.py | lsapan/django-remote-control | 4dc6adfaaa1d12b2ee69d3fa3745c0040de49192 | [
"MIT"
] | null | null | null | class CommandNotRegistered(ValueError):
"""
An error that is raised when the requested command is not registered.
"""
pass
class CommandNotFound(ValueError):
"""
An error that is raised when the requested command is not found.
"""
pass
| 20.769231 | 73 | 0.674074 | 32 | 270 | 5.6875 | 0.53125 | 0.131868 | 0.186813 | 0.230769 | 0.626374 | 0.626374 | 0.626374 | 0.626374 | 0.626374 | 0.626374 | 0 | 0 | 0.251852 | 270 | 12 | 74 | 22.5 | 0.90099 | 0.496296 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
2992a93d6d093bd7981860f803e07362f30b3ff1 | 45 | py | Python | k2pix/__init__.py | stephtdouglas/k2-pix | c206732ebc82f09b051748cac2fbb66910d22c78 | [
"MIT"
] | 3 | 2017-01-07T17:36:19.000Z | 2017-11-30T01:01:05.000Z | k2pix/__init__.py | stephtdouglas/k2-pix | c206732ebc82f09b051748cac2fbb66910d22c78 | [
"MIT"
] | 9 | 2017-01-07T22:42:18.000Z | 2018-01-18T15:34:23.000Z | k2pix/__init__.py | stephtdouglas/k2-pix | c206732ebc82f09b051748cac2fbb66910d22c78 | [
"MIT"
] | 5 | 2017-01-07T17:36:20.000Z | 2021-12-02T02:43:39.000Z | #from . import main, figure, tpf, surveyquery | 45 | 45 | 0.755556 | 6 | 45 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 1 | 45 | 45 | 0.871795 | 0.977778 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
29a54647d757070b862306e8716d8b71e38b0f81 | 182 | py | Python | src/dataset/__init__.py | Jasonsey/BlurredImageDetection | df89079813fe8e2b66075f366d89b141af9f2501 | [
"MIT"
] | 5 | 2019-05-20T11:18:24.000Z | 2020-04-09T13:27:02.000Z | src/dataset/__init__.py | Jasonsey/BlurredImageDetection | df89079813fe8e2b66075f366d89b141af9f2501 | [
"MIT"
] | null | null | null | src/dataset/__init__.py | Jasonsey/BlurredImageDetection | df89079813fe8e2b66075f366d89b141af9f2501 | [
"MIT"
] | 2 | 2019-10-27T15:44:57.000Z | 2021-09-26T06:10:37.000Z | # Bluerred Image Detection
#
# Author: Jasonsey
# Email: 2627866800@qq.com
#
# =============================================================================
"""the data set api"""
| 22.75 | 79 | 0.379121 | 13 | 182 | 5.307692 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061728 | 0.10989 | 182 | 7 | 80 | 26 | 0.364198 | 0.901099 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
29e38b12567519195af78c3b9df2d8dec8a0ad94 | 518 | py | Python | winton_kafka_streams/state/_abc.py | jkramarz/winton-kafka-streams | 22526da71454a8b9c7bba53e4f59f645535de602 | [
"Apache-2.0"
] | null | null | null | winton_kafka_streams/state/_abc.py | jkramarz/winton-kafka-streams | 22526da71454a8b9c7bba53e4f59f645535de602 | [
"Apache-2.0"
] | null | null | null | winton_kafka_streams/state/_abc.py | jkramarz/winton-kafka-streams | 22526da71454a8b9c7bba53e4f59f645535de602 | [
"Apache-2.0"
] | 1 | 2019-04-28T23:31:24.000Z | 2019-04-28T23:31:24.000Z | """
Abstract classes for implementations of state classes
"""
import abc
import collections.abc
class StoreBase(collections.abc.Iterator):
"""
Interface that must be implemented by all state classes
"""
def __init__(self, _name):
self.name = _name
@abc.abstractmethod
def add(self, v):
pass
@abc.abstractmethod
def empty(self):
pass
@abc.abstractmethod
def clear(self):
pass
@abc.abstractmethod
def __iter__(self):
pass
| 15.235294 | 59 | 0.629344 | 58 | 518 | 5.448276 | 0.517241 | 0.21519 | 0.253165 | 0.227848 | 0.177215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.283784 | 518 | 33 | 60 | 15.69697 | 0.851752 | 0.210425 | 0 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0.235294 | 0.117647 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
29f711a08c4629252bf62057f189b4500b914a24 | 98 | py | Python | app/search/__init__.py | AniaPeszek/ReclamationAndTicketSystem | 42551732dcc9af42dc7401fbc13b8fdb6e3c132f | [
"MIT"
] | null | null | null | app/search/__init__.py | AniaPeszek/ReclamationAndTicketSystem | 42551732dcc9af42dc7401fbc13b8fdb6e3c132f | [
"MIT"
] | null | null | null | app/search/__init__.py | AniaPeszek/ReclamationAndTicketSystem | 42551732dcc9af42dc7401fbc13b8fdb6e3c132f | [
"MIT"
] | null | null | null | from flask import Blueprint
bp = Blueprint("search_bp", __name__)
from app.search import search
| 16.333333 | 37 | 0.785714 | 14 | 98 | 5.142857 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 98 | 5 | 38 | 19.6 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.091837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
4b08435cf591441e30ed05c29b4f4a5f376686e5 | 14,684 | py | Python | quantsbin/derivativepricing/instruments.py | quantsbin/Quantsbin | 362522653b4b8ebcf14461e3f44fe22dea465adc | [
"MIT"
] | 132 | 2018-06-20T08:40:48.000Z | 2022-03-24T11:34:22.000Z | quantsbin/derivativepricing/instruments.py | williamjiamin/Quantsbin | 14a135174d9f08a70e36ed55279fbd7458e1ad48 | [
"MIT"
] | 5 | 2018-07-08T06:23:53.000Z | 2021-08-08T06:30:43.000Z | quantsbin/derivativepricing/instruments.py | williamjiamin/Quantsbin | 14a135174d9f08a70e36ed55279fbd7458e1ad48 | [
"MIT"
] | 35 | 2018-07-12T10:07:30.000Z | 2022-03-01T04:00:17.000Z | """
developed by Quantsbin - Jun'18
"""
from abc import ABCMeta, abstractmethod
from datetime import datetime
from .engineconfig import PricingEngine
from .namesnmapper import VanillaOptionType, ExpiryType, DEFAULT_MODEL, UdlType, OBJECT_MODEL, DerivativeType
class Instrument(metaclass=ABCMeta):
"""
Instrument - Metaclass to define financial instrument
@abstract functions:
payoff => defines payoff on instrument.
engine => attach the instrument with the pricing model and market data.
"""
@abstractmethod
def payoff(self):
pass
def engine(self, **kwargs):
"""
Binds pricing model class and market data to the object
Args required:
model: pricing model (default value set to BSM for European expiry)
**kwargs: Dictionary of parameters and their corresponding value required for valuation.
For arguments required and method available for each model check\
help(.derivativepricing.pricingmodels.<model name>)
"""
if not kwargs['model']:
kwargs['model'] = DEFAULT_MODEL[self.undl][self.derivative_type][self.expiry_type]
return PricingEngine(self, **kwargs)
def list_models(self):
return ", ".join(OBJECT_MODEL[self.undl][self.expiry_type])
class VanillaOption(Instrument):
"""
Parent class for all Vanilla options on different underlying.
Methods:
payoff(spot0) ->
Calculates the payoff of the function
engine(model, **kwargs)
Binds the inout parameter with pricing models.
To check valid models for underlying use .models()
"""
def __init__(self, option_type, expiry_type, strike, expiry_date, derivative_type):
self.option_type = option_type or VanillaOptionType.CALL.value
self.expiry_type = expiry_type or ExpiryType.EUROPEAN.value
self.strike = strike
self.expiry_date = datetime.strptime(expiry_date, '%Y%m%d')
self.derivative_type = derivative_type or DerivativeType.VANILLA_OPTION.value
@property
def _option_type_flag(self):
if self.option_type == VanillaOptionType.CALL.value:
return 1
else:
return -1
def payoff(self, spot0=None):
"""
Calculates the payoff of option
Defines payoff of the option
Payoff(Call) = max(S-K,0)
Payoff(Put) = max(K-S,0)
Args required:
spot0: Value of underlying e.g. 110
"""
return max(self._option_type_flag * (spot0 - self.strike), 0.0)
class EqOption(VanillaOption):
"""
Defines object for vanilla options on equity with both European and American expiry type.
Args required:
option_type: 'Call' or 'Put' (default value is set to 'Call')
expiry_type: 'European' or 'American' (default is set to 'European')
strike: (Float in same unit as underlying price) e.g. 110.0
expiry_date: (Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210"
derivative_type: Default value as "Vanilla Option".
"""
def __init__(self, option_type=VanillaOptionType.CALL.value, expiry_type=ExpiryType.EUROPEAN.value,
strike=None, expiry_date=None, derivative_type=None
):
super().__init__(option_type, expiry_type, strike, expiry_date, derivative_type)
self.undl = UdlType.STOCK.value
def engine(self, model=None, spot0=None, rf_rate=0, yield_div=0, div_list=None, volatility=None,
pricing_date=None, **kwargs):
"""
Binds pricing model class and market data to the object
Args required:
Core Arguments:
model: pricing model (default value set to BSM for European expiry)
To check available list of models use print(option_object.list_models())
fwd0: (float) current underlying price/value e.g. 110.0
rf_rate: (Float < 1) risk free continuously compounded discount rate e.g. 5% as 0.05
volatility: (Float < 1) Underlying price/value return annualized volatility.
Volatility in decimal e.g. Volatility of 10% => 0.10
pricing_Date: Date on which option value need to be calculated.
(Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210".
yield_div: (Float < 1) div yield continuously compounded (for index options) e.g. 5% as 0.05
div_list: List of tuples for discrete dividends with dates. e.g. [("20180610", 2), ("20180624", 4)]
[("Date", div amount),...]
Model specific arguments:
MonteCarlo
no_of_path = (Integer). Number of paths to be generated for simulation e.g. 10000
no_of_steps = (Integer). Number of steps(nodes) for the premium calculation e.g. 100
seed = (Integer). Used for seeding
antithetic = (Boolean). A variance reduction process in Montecarlo Simulation.
Default False
Binomial
no_of_steps = (Integer). Number of steps (nodes) for the premium calculation.
Maximum value accepted is 100. This limit will be increased
in future release.
"""
return super().engine(model=model, spot0=spot0, rf_rate=rf_rate, cnv_yield=yield_div, pv_cnv=0,
div_list=div_list, volatility=volatility, pricing_date=pricing_date, **kwargs)
class FutOption(VanillaOption):
"""
Defines object for vanilla options on futures with both European and American expiry type.
Args required:
option_type: 'Call' or 'Put' (default value is set to 'Call')
expiry_type: 'European' or 'American' (default is set to 'European')
strike: (Float in same unit as underlying price) e.g. 110.0
expiry_date: (Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210".
"""
def __init__(self, option_type=VanillaOptionType.CALL.value, expiry_type=ExpiryType.EUROPEAN.value,
strike=None, expiry_date=None, derivative_type=None
):
super().__init__(option_type, expiry_type, strike, expiry_date, derivative_type)
self.undl = UdlType.FUTURES.value
def engine(self, model=None, fwd0=None, rf_rate=0, volatility=None, pricing_date=None, **kwargs):
"""
Binds pricing model class and market data to the object
Args required:
Core Arguments:
model: pricing model (default value set to BSM for European expiry)
To check available list of models use print(option_object.list_models())
fwd0: (float) current future price quote e.g. 110.0
rf_rate: (Float < 1) risk free continuously compounded discount rate e.g. 5% as 0.05
volatility: (Float < 1) Underlying price/value return annualized volatility.
Volatility in decimal e.g. Volatility of 10% => 0.10
pricing_Date: Date on which option value need to be calculated.
(Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210".
Model specific arguments:
MonteCarlo
no_of_path = (Integer). Number of paths to be generated for simulation e.g. 10000
no_of_steps = (Integer). Number of steps(nodes) for the premium calculation e.g. 100
seed = (Integer). Used for seeding
antithetic = (Boolean). A variance reduction process in Montecarlo Simulation.
Default False
Binomial
no_of_steps = (Integer). Number of steps (nodes) for the premium calculation.
Maximum value accepted is 100. This limit will be increased
in future release.
"""
return super().engine(model=model, spot0=fwd0, rf_rate=rf_rate, cnv_yield=rf_rate,
volatility=volatility, pricing_date=pricing_date, **kwargs)
class FXOption(VanillaOption):
"""
Defines object for vanilla options on fx rates with both European and American expiry type.
Args required:
option_type: 'Call' or 'Put' (default value is set to 'Call')
expiry_type: 'European' or 'American' (default is set to 'European')
strike: (Float in same unit as underlying price) e.g. 110.0
expiry_date: (Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210".
"""
def __init__(self, option_type=VanillaOptionType.CALL.value, expiry_type=ExpiryType.EUROPEAN.value,
strike=None, expiry_date=None, derivative_type=None
):
super().__init__(option_type, expiry_type, strike, expiry_date, derivative_type)
self.undl = UdlType.FX.value
def engine(self, model=None, spot0=None, rf_rate_local=0, rf_rate_foreign=0, volatility=None,
pricing_date=None, **kwargs):
"""
Binds pricing model class and market data to the object
Args required:
Core Arguments:
model: pricing model (default value set to BSM for European expiry)
To check available list of models use print(option_object.list_models())
spot0: (float) current underlying price/value e.g. 110.0
rf_rate_local: (Float < 1) risk free continuously compounded discount rate of local currency
e.g. 5% as 0.05
rf_rate_foreign: (Float < 1) risk free continuously compounded discount rate of
foreign currency e.g. 5% as 0.05
volatility: (Float < 1) Underlying price/value return annualized volatility.
Volatility in decimal e.g. Volatility of 10% => 0.10
pricing_Date: Date on which option value need to be calculated.
(Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210".
Model specific arguments:
MonteCarlo
no_of_path = (Integer). Number of paths to be generated for simulation e.g. 10000
no_of_steps = (Integer). Number of steps(nodes) for the premium calculation e.g. 100
seed = (Integer). Used for seeding
antithetic = (Boolean). A variance reduction process in Montecarlo Simulation.
Default False
Binomial
no_of_steps = (Integer). Number of steps (nodes) for the premium calculation.
Maximum value accepted is 100. This limit will be increased
in future release.
"""
return super().engine(model=model, spot0=spot0, rf_rate=rf_rate_local, cnv_yield=rf_rate_foreign,
volatility=volatility, pricing_date=pricing_date, **kwargs)
class ComOption(VanillaOption):
"""
Defines object for vanilla options on commodities with both European and American expiry type.
Args required:
option_type: 'Call' or 'Put' (default value is set to 'Call')
expiry_type: 'European' or 'American' (default is set to 'European')
strike: (Float in same unit as underlying price) e.g. 110.0
expiry_date: (Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210".
"""
def __init__(self, option_type=VanillaOptionType.CALL.value, expiry_type=ExpiryType.EUROPEAN.value,
strike=None, expiry_date=None, derivative_type=None
):
super().__init__(option_type, expiry_type, strike, expiry_date, derivative_type)
self.undl = UdlType.COMMODITY.value
def engine(self, model=None, spot0=None, rf_rate=0, cnv_yield=0, cost_yield=0, volatility=None,
pricing_date=None, **kwargs):
"""
Binds pricing model class and market data to the object
Args required:
Core Arguments:
model: pricing model (default value set to BSM for European expiry)
To check available list of models use print(option_object.list_models())
spot0: (float) current underlying price/value e.g. 110.0
rf_rate: (Float < 1) risk free continuously compounded discount rate e.g. 5% as 0.05
cnv_yield: (Float < 1) Convenience yield continuously compounded e.g. 4% as 0.04
cost_yield: (Float < 1) Cost yield continuously compounded e.g. 2% as 0.02
volatility: (Float < 1) Underlying price/value return annualized volatility.
Volatility in decimal e.g. Volatility of 10% => 0.10
pricing_Date: Date on which option value need to be calculated.
(Date in string format "YYYYMMDD") e.g. 10 Dec 2018 as "20181210".
Model specific arguments:
MonteCarlo
no_of_path = (Integer). Number of paths to be generated for simulation e.g. 10000
no_of_steps = (Integer). Number of steps(nodes) for the premium calculation e.g. 100
seed = (Integer). Used for seeding
antithetic = (Boolean). A variance reduction process in Montecarlo Simulation.
Default False
Binomial
no_of_steps = (Integer). Number of steps (nodes) for the premium calculation.
Maximum value accepted is 100. This limit will be increased
in future release.
"""
return super().engine(model=model, spot0=spot0, rf_rate=rf_rate, cnv_yield=cnv_yield, cost_yield=cost_yield,
volatility=volatility, pricing_date=pricing_date, **kwargs)
| 53.202899 | 119 | 0.591732 | 1,719 | 14,684 | 4.937173 | 0.1274 | 0.008955 | 0.021209 | 0.005656 | 0.764345 | 0.756215 | 0.745022 | 0.715329 | 0.696595 | 0.684812 | 0 | 0.031727 | 0.334582 | 14,684 | 275 | 120 | 53.396364 | 0.836864 | 0.635181 | 0 | 0.323529 | 0 | 0 | 0.004299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0.014706 | 0.058824 | 0.014706 | 0.485294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4b10646c72ad54b7b717960d8284b91081e28c1f | 34 | py | Python | theme/fidashtheme/__init__.py | fidash/fiware-fidash | 900e79a629b51d811e1d3eaa8ca7951138d8994c | [
"Apache-2.0"
] | null | null | null | theme/fidashtheme/__init__.py | fidash/fiware-fidash | 900e79a629b51d811e1d3eaa8ca7951138d8994c | [
"Apache-2.0"
] | null | null | null | theme/fidashtheme/__init__.py | fidash/fiware-fidash | 900e79a629b51d811e1d3eaa8ca7951138d8994c | [
"Apache-2.0"
] | null | null | null | parent="wirecloud.fiwarelabtheme"
| 17 | 33 | 0.852941 | 3 | 34 | 9.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 34 | 1 | 34 | 34 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0.705882 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4b26702144197ceaadbfcc25feb651d26e27a0db | 158 | py | Python | setup.py | open-traffic-generator/otg-grpc | 71a2bcaaf28ebda0e1cb202dffa6e18d67463bdb | [
"MIT"
] | 3 | 2021-12-16T06:32:49.000Z | 2022-03-17T04:12:55.000Z | setup.py | open-traffic-generator/otg-gnmi | 77c33659df76a148fad9eda5950b09ed514fab30 | [
"MIT"
] | 2 | 2021-11-30T13:34:50.000Z | 2022-01-25T21:40:45.000Z | setup.py | open-traffic-generator/otg-gnmi | 77c33659df76a148fad9eda5950b09ed514fab30 | [
"MIT"
] | null | null | null | """Build distributions
To build `python setup.py sdist --formats=gztar bdist_wheel --universal`
"""
import os
print('Setup: Started')
print('Setup: Ended') | 17.555556 | 72 | 0.727848 | 21 | 158 | 5.428571 | 0.809524 | 0.175439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120253 | 158 | 9 | 73 | 17.555556 | 0.820144 | 0.588608 | 0 | 0 | 0 | 0 | 0.440678 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
d9a828f2660c836ba19f5e2cc9c533d69287e0aa | 243 | py | Python | sgnlp/models/rumour_detection_twitter/__init__.py | raymondng76/sgnlp | f09eada90ef5b1ee979901e5c14413d32e758049 | [
"MIT"
] | 14 | 2021-08-02T01:52:18.000Z | 2022-01-14T10:16:02.000Z | sgnlp/models/rumour_detection_twitter/__init__.py | raymondng76/sgnlp | f09eada90ef5b1ee979901e5c14413d32e758049 | [
"MIT"
] | 29 | 2021-08-02T01:53:46.000Z | 2022-03-30T05:40:46.000Z | sgnlp/models/rumour_detection_twitter/__init__.py | raymondng76/sgnlp | f09eada90ef5b1ee979901e5c14413d32e758049 | [
"MIT"
] | 7 | 2021-08-02T01:54:19.000Z | 2022-01-07T06:37:45.000Z | from .config import RumourDetectionTwitterConfig
from .tokenization import RumourDetectionTwitterTokenizer
from .modeling import RumourDetectionTwitterModel
from .train import train_model
from .utils import download_tokenizer_files_from_azure
| 40.5 | 57 | 0.897119 | 25 | 243 | 8.52 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082305 | 243 | 5 | 58 | 48.6 | 0.955157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d9b1c342655751dc1c795049df14a20ba83fe23d | 171 | py | Python | pset6/hello/hello.py | ashudva/CS50_PSets | 662886dd89063be330ba0aeb9e6e74c8776b91f6 | [
"MIT"
] | null | null | null | pset6/hello/hello.py | ashudva/CS50_PSets | 662886dd89063be330ba0aeb9e6e74c8776b91f6 | [
"MIT"
] | null | null | null | pset6/hello/hello.py | ashudva/CS50_PSets | 662886dd89063be330ba0aeb9e6e74c8776b91f6 | [
"MIT"
] | null | null | null | # import get_string
from cs50 import get_string
# prompt for name
print("What is your name?")
name = get_string("")
# prints "hello, {your name}"
print(f"hello, {name}") | 19 | 29 | 0.701754 | 27 | 171 | 4.333333 | 0.555556 | 0.230769 | 0.25641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013793 | 0.152047 | 171 | 9 | 30 | 19 | 0.793103 | 0.356725 | 0 | 0 | 0 | 0 | 0.28972 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
d9b2b70ed44ab128b3b6e827dfe8be147915efae | 124 | py | Python | restAPIapp/admin.py | shawonAlam/Article-REST-API-django | ffbc92899f014b3f656496cc2e28f33bb84c055d | [
"MIT"
] | null | null | null | restAPIapp/admin.py | shawonAlam/Article-REST-API-django | ffbc92899f014b3f656496cc2e28f33bb84c055d | [
"MIT"
] | null | null | null | restAPIapp/admin.py | shawonAlam/Article-REST-API-django | ffbc92899f014b3f656496cc2e28f33bb84c055d | [
"MIT"
] | null | null | null | from django.contrib import admin
from . models import Article
admin.site.register(Article)
# Register your models here.
| 24.8 | 33 | 0.782258 | 17 | 124 | 5.705882 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 124 | 4 | 34 | 31 | 0.92381 | 0.209677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d9c1a66ed22c8b2de4fb310baeabf88ef07ba7d8 | 39 | py | Python | test/conductivity/__init__.py | MSimoncelli/phono3py | b28b45a025c279833e9269e5d91330c75d3f6ae0 | [
"BSD-3-Clause"
] | 38 | 2016-04-27T04:43:25.000Z | 2020-05-01T07:46:56.000Z | test/conductivity/__init__.py | MSimoncelli/phono3py | b28b45a025c279833e9269e5d91330c75d3f6ae0 | [
"BSD-3-Clause"
] | 36 | 2016-12-22T12:42:54.000Z | 2020-05-02T07:31:53.000Z | test/conductivity/__init__.py | MSimoncelli/phono3py | b28b45a025c279833e9269e5d91330c75d3f6ae0 | [
"BSD-3-Clause"
] | 30 | 2016-02-11T13:33:56.000Z | 2020-05-01T21:36:50.000Z | """Tests for conductivity routines."""
| 19.5 | 38 | 0.717949 | 4 | 39 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.8 | 0.820513 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d9c22e8bbc313e5e33cae1e03eed3e5261fa5dff | 15,309 | py | Python | chainlibpy/generated/cosmos/bank/v1beta1/bank_pb2.py | MaCong-crypto/chainlibpy | 8f91869fdf068359ebd9a3b206a7e856d8fa84f3 | [
"Apache-2.0"
] | null | null | null | chainlibpy/generated/cosmos/bank/v1beta1/bank_pb2.py | MaCong-crypto/chainlibpy | 8f91869fdf068359ebd9a3b206a7e856d8fa84f3 | [
"Apache-2.0"
] | null | null | null | chainlibpy/generated/cosmos/bank/v1beta1/bank_pb2.py | MaCong-crypto/chainlibpy | 8f91869fdf068359ebd9a3b206a7e856d8fa84f3 | [
"Apache-2.0"
] | null | null | null |
'Generated protocol buffer code.'
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
_sym_db = _symbol_database.Default()
from ....gogoproto import gogo_pb2 as gogoproto_dot_gogo__pb2
from ....cosmos_proto import cosmos_pb2 as cosmos__proto_dot_cosmos__pb2
from ....cosmos.base.v1beta1 import coin_pb2 as cosmos_dot_base_dot_v1beta1_dot_coin__pb2
DESCRIPTOR = _descriptor.FileDescriptor(name='cosmos/bank/v1beta1/bank.proto', package='cosmos.bank.v1beta1', syntax='proto3', serialized_options=b'Z)github.com/cosmos/cosmos-sdk/x/bank/types', create_key=_descriptor._internal_create_key, serialized_pb=b'\n\x1ecosmos/bank/v1beta1/bank.proto\x12\x13cosmos.bank.v1beta1\x1a\x14gogoproto/gogo.proto\x1a\x19cosmos_proto/cosmos.proto\x1a\x1ecosmos/base/v1beta1/coin.proto"\xb2\x01\n\x06Params\x12Y\n\x0csend_enabled\x18\x01 \x03(\x0b2 .cosmos.bank.v1beta1.SendEnabledB!\xf2\xde\x1f\x1dyaml:"send_enabled,omitempty"\x12G\n\x14default_send_enabled\x18\x02 \x01(\x08B)\xf2\xde\x1f%yaml:"default_send_enabled,omitempty":\x04\x98\xa0\x1f\x00"7\n\x0bSendEnabled\x12\r\n\x05denom\x18\x01 \x01(\t\x12\x0f\n\x07enabled\x18\x02 \x01(\x08:\x08\xe8\xa0\x1f\x01\x98\xa0\x1f\x00"~\n\x05Input\x12\x0f\n\x07address\x18\x01 \x01(\t\x12Z\n\x05coins\x18\x02 \x03(\x0b2\x19.cosmos.base.v1beta1.CoinB0\xc8\xde\x1f\x00\xaa\xdf\x1f(github.com/cosmos/cosmos-sdk/types.Coins:\x08\xe8\xa0\x1f\x00\x88\xa0\x1f\x00"\x7f\n\x06Output\x12\x0f\n\x07address\x18\x01 \x01(\t\x12Z\n\x05coins\x18\x02 \x03(\x0b2\x19.cosmos.base.v1beta1.CoinB0\xc8\xde\x1f\x00\xaa\xdf\x1f(github.com/cosmos/cosmos-sdk/types.Coins:\x08\xe8\xa0\x1f\x00\x88\xa0\x1f\x00"\xac\x01\n\x06Supply\x12Z\n\x05total\x18\x01 \x03(\x0b2\x19.cosmos.base.v1beta1.CoinB0\xc8\xde\x1f\x00\xaa\xdf\x1f(github.com/cosmos/cosmos-sdk/types.Coins:F\x18\x01\xe8\xa0\x1f\x01\x88\xa0\x1f\x00\xd2\xb4-8*github.com/cosmos/cosmos-sdk/x/bank/legacy/v040.SupplyI"=\n\tDenomUnit\x12\r\n\x05denom\x18\x01 \x01(\t\x12\x10\n\x08exponent\x18\x02 \x01(\r\x12\x0f\n\x07aliases\x18\x03 \x03(\t"\x91\x01\n\x08Metadata\x12\x13\n\x0bdescription\x18\x01 \x01(\t\x123\n\x0bdenom_units\x18\x02 \x03(\x0b2\x1e.cosmos.bank.v1beta1.DenomUnit\x12\x0c\n\x04base\x18\x03 \x01(\t\x12\x0f\n\x07display\x18\x04 \x01(\t\x12\x0c\n\x04name\x18\x05 \x01(\t\x12\x0e\n\x06symbol\x18\x06 \x01(\tB+Z)github.com/cosmos/cosmos-sdk/x/bank/typesb\x06proto3', dependencies=[gogoproto_dot_gogo__pb2.DESCRIPTOR, cosmos__proto_dot_cosmos__pb2.DESCRIPTOR, cosmos_dot_base_dot_v1beta1_dot_coin__pb2.DESCRIPTOR])
_PARAMS = _descriptor.Descriptor(name='Params', full_name='cosmos.bank.v1beta1.Params', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='send_enabled', full_name='cosmos.bank.v1beta1.Params.send_enabled', index=0, number=1, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xf2\xde\x1f\x1dyaml:"send_enabled,omitempty"', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='default_send_enabled', full_name='cosmos.bank.v1beta1.Params.default_send_enabled', index=1, number=2, type=8, cpp_type=7, label=1, has_default_value=False, default_value=False, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xf2\xde\x1f%yaml:"default_send_enabled,omitempty"', file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=b'\x98\xa0\x1f\x00', is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=137, serialized_end=315)
_SENDENABLED = _descriptor.Descriptor(name='SendEnabled', full_name='cosmos.bank.v1beta1.SendEnabled', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='denom', full_name='cosmos.bank.v1beta1.SendEnabled.denom', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='enabled', full_name='cosmos.bank.v1beta1.SendEnabled.enabled', index=1, number=2, type=8, cpp_type=7, label=1, has_default_value=False, default_value=False, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=b'\xe8\xa0\x1f\x01\x98\xa0\x1f\x00', is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=317, serialized_end=372)
_INPUT = _descriptor.Descriptor(name='Input', full_name='cosmos.bank.v1beta1.Input', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='address', full_name='cosmos.bank.v1beta1.Input.address', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='coins', full_name='cosmos.bank.v1beta1.Input.coins', index=1, number=2, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\xaa\xdf\x1f(github.com/cosmos/cosmos-sdk/types.Coins', file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=b'\xe8\xa0\x1f\x00\x88\xa0\x1f\x00', is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=374, serialized_end=500)
_OUTPUT = _descriptor.Descriptor(name='Output', full_name='cosmos.bank.v1beta1.Output', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='address', full_name='cosmos.bank.v1beta1.Output.address', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='coins', full_name='cosmos.bank.v1beta1.Output.coins', index=1, number=2, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\xaa\xdf\x1f(github.com/cosmos/cosmos-sdk/types.Coins', file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=b'\xe8\xa0\x1f\x00\x88\xa0\x1f\x00', is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=502, serialized_end=629)
_SUPPLY = _descriptor.Descriptor(name='Supply', full_name='cosmos.bank.v1beta1.Supply', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='total', full_name='cosmos.bank.v1beta1.Supply.total', index=0, number=1, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\xc8\xde\x1f\x00\xaa\xdf\x1f(github.com/cosmos/cosmos-sdk/types.Coins', file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=b'\x18\x01\xe8\xa0\x1f\x01\x88\xa0\x1f\x00\xd2\xb4-8*github.com/cosmos/cosmos-sdk/x/bank/legacy/v040.SupplyI', is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=632, serialized_end=804)
_DENOMUNIT = _descriptor.Descriptor(name='DenomUnit', full_name='cosmos.bank.v1beta1.DenomUnit', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='denom', full_name='cosmos.bank.v1beta1.DenomUnit.denom', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='exponent', full_name='cosmos.bank.v1beta1.DenomUnit.exponent', index=1, number=2, type=13, cpp_type=3, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='aliases', full_name='cosmos.bank.v1beta1.DenomUnit.aliases', index=2, number=3, type=9, cpp_type=9, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=806, serialized_end=867)
_METADATA = _descriptor.Descriptor(name='Metadata', full_name='cosmos.bank.v1beta1.Metadata', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='description', full_name='cosmos.bank.v1beta1.Metadata.description', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='denom_units', full_name='cosmos.bank.v1beta1.Metadata.denom_units', index=1, number=2, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='base', full_name='cosmos.bank.v1beta1.Metadata.base', index=2, number=3, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='display', full_name='cosmos.bank.v1beta1.Metadata.display', index=3, number=4, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='name', full_name='cosmos.bank.v1beta1.Metadata.name', index=4, number=5, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='symbol', full_name='cosmos.bank.v1beta1.Metadata.symbol', index=5, number=6, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=870, serialized_end=1015)
_PARAMS.fields_by_name['send_enabled'].message_type = _SENDENABLED
_INPUT.fields_by_name['coins'].message_type = cosmos_dot_base_dot_v1beta1_dot_coin__pb2._COIN
_OUTPUT.fields_by_name['coins'].message_type = cosmos_dot_base_dot_v1beta1_dot_coin__pb2._COIN
_SUPPLY.fields_by_name['total'].message_type = cosmos_dot_base_dot_v1beta1_dot_coin__pb2._COIN
_METADATA.fields_by_name['denom_units'].message_type = _DENOMUNIT
DESCRIPTOR.message_types_by_name['Params'] = _PARAMS
DESCRIPTOR.message_types_by_name['SendEnabled'] = _SENDENABLED
DESCRIPTOR.message_types_by_name['Input'] = _INPUT
DESCRIPTOR.message_types_by_name['Output'] = _OUTPUT
DESCRIPTOR.message_types_by_name['Supply'] = _SUPPLY
DESCRIPTOR.message_types_by_name['DenomUnit'] = _DENOMUNIT
DESCRIPTOR.message_types_by_name['Metadata'] = _METADATA
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Params = _reflection.GeneratedProtocolMessageType('Params', (_message.Message,), {'DESCRIPTOR': _PARAMS, '__module__': 'cosmos.bank.v1beta1.bank_pb2'})
_sym_db.RegisterMessage(Params)
SendEnabled = _reflection.GeneratedProtocolMessageType('SendEnabled', (_message.Message,), {'DESCRIPTOR': _SENDENABLED, '__module__': 'cosmos.bank.v1beta1.bank_pb2'})
_sym_db.RegisterMessage(SendEnabled)
Input = _reflection.GeneratedProtocolMessageType('Input', (_message.Message,), {'DESCRIPTOR': _INPUT, '__module__': 'cosmos.bank.v1beta1.bank_pb2'})
_sym_db.RegisterMessage(Input)
Output = _reflection.GeneratedProtocolMessageType('Output', (_message.Message,), {'DESCRIPTOR': _OUTPUT, '__module__': 'cosmos.bank.v1beta1.bank_pb2'})
_sym_db.RegisterMessage(Output)
Supply = _reflection.GeneratedProtocolMessageType('Supply', (_message.Message,), {'DESCRIPTOR': _SUPPLY, '__module__': 'cosmos.bank.v1beta1.bank_pb2'})
_sym_db.RegisterMessage(Supply)
DenomUnit = _reflection.GeneratedProtocolMessageType('DenomUnit', (_message.Message,), {'DESCRIPTOR': _DENOMUNIT, '__module__': 'cosmos.bank.v1beta1.bank_pb2'})
_sym_db.RegisterMessage(DenomUnit)
Metadata = _reflection.GeneratedProtocolMessageType('Metadata', (_message.Message,), {'DESCRIPTOR': _METADATA, '__module__': 'cosmos.bank.v1beta1.bank_pb2'})
_sym_db.RegisterMessage(Metadata)
DESCRIPTOR._options = None
_PARAMS.fields_by_name['send_enabled']._options = None
_PARAMS.fields_by_name['default_send_enabled']._options = None
_PARAMS._options = None
_SENDENABLED._options = None
_INPUT.fields_by_name['coins']._options = None
_INPUT._options = None
_OUTPUT.fields_by_name['coins']._options = None
_OUTPUT._options = None
_SUPPLY.fields_by_name['total']._options = None
_SUPPLY._options = None
| 268.578947 | 2,707 | 0.812986 | 2,227 | 15,309 | 5.283341 | 0.092052 | 0.041475 | 0.059748 | 0.046405 | 0.7684 | 0.741713 | 0.667602 | 0.663947 | 0.641849 | 0.600374 | 0 | 0.047531 | 0.042132 | 15,309 | 56 | 2,708 | 273.375 | 0.754842 | 0.002025 | 0 | 0 | 1 | 0.090909 | 0.262216 | 0.216357 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.127273 | 0 | 0.127273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d9f6edaaa3593333e1e0ca7149034077132efb7d | 48 | py | Python | viewstate/exceptions.py | TiberiuD/viewstate | 7100e9aee3088c6cfc91bacc1580a9a371140407 | [
"MIT"
] | null | null | null | viewstate/exceptions.py | TiberiuD/viewstate | 7100e9aee3088c6cfc91bacc1580a9a371140407 | [
"MIT"
] | null | null | null | viewstate/exceptions.py | TiberiuD/viewstate | 7100e9aee3088c6cfc91bacc1580a9a371140407 | [
"MIT"
] | null | null | null |
class ViewStateException(Exception):
pass
| 9.6 | 36 | 0.75 | 4 | 48 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 48 | 4 | 37 | 12 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
8a099419c57fa200228fe43ac19cb9eb27f61377 | 41 | py | Python | rockpaperscissors.py | blackstrawberry/gitkraken-demo-11 | 3c814a13fda2a26527150e73984bd9f43de6b804 | [
"MIT"
] | null | null | null | rockpaperscissors.py | blackstrawberry/gitkraken-demo-11 | 3c814a13fda2a26527150e73984bd9f43de6b804 | [
"MIT"
] | null | null | null | rockpaperscissors.py | blackstrawberry/gitkraken-demo-11 | 3c814a13fda2a26527150e73984bd9f43de6b804 | [
"MIT"
] | null | null | null | # copy and paste
# no time to learn haha
| 13.666667 | 23 | 0.707317 | 8 | 41 | 3.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.243902 | 41 | 2 | 24 | 20.5 | 0.935484 | 0.878049 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8a0b4a3e3be6d02048841ccaa58185a26b148745 | 143 | py | Python | scripts.py | demophoon/Google-Voice-Takeout-Analyser | 622fd85b2c32371afdb84d3b29a368cb68230f78 | [
"MIT"
] | null | null | null | scripts.py | demophoon/Google-Voice-Takeout-Analyser | 622fd85b2c32371afdb84d3b29a368cb68230f78 | [
"MIT"
] | null | null | null | scripts.py | demophoon/Google-Voice-Takeout-Analyser | 622fd85b2c32371afdb84d3b29a368cb68230f78 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
from munge import main
from model import import_data
def run_import():
main()
import_data()
| 13 | 29 | 0.699301 | 22 | 143 | 4.409091 | 0.681818 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008696 | 0.195804 | 143 | 10 | 30 | 14.3 | 0.834783 | 0.251748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.8 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a13eec3bc00185b8c5222cea5f58a83f438d67a | 167 | py | Python | faker/__init__.py | svisser/faker-1 | 52e5018e3fcf5d2b176d2031672c56bc2140ebc9 | [
"MIT"
] | 1 | 2021-07-23T02:41:54.000Z | 2021-07-23T02:41:54.000Z | faker/__init__.py | svisser/faker-1 | 52e5018e3fcf5d2b176d2031672c56bc2140ebc9 | [
"MIT"
] | null | null | null | faker/__init__.py | svisser/faker-1 | 52e5018e3fcf5d2b176d2031672c56bc2140ebc9 | [
"MIT"
] | 1 | 2021-05-04T04:53:57.000Z | 2021-05-04T04:53:57.000Z | from faker.factory import Factory
from faker.generator import Generator
from faker.proxy import Faker
VERSION = '8.10.1'
__all__ = ('Factory', 'Generator', 'Faker')
| 20.875 | 43 | 0.754491 | 23 | 167 | 5.304348 | 0.478261 | 0.221311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027586 | 0.131737 | 167 | 7 | 44 | 23.857143 | 0.813793 | 0 | 0 | 0 | 0 | 0 | 0.161677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a27aca7cbae64e218a7a2be00aed29e1a38dfae | 324 | py | Python | home_web/webapp.py | tengshan2008/home_web | 6702defee466184edd38848d96efc1842ce6a2f9 | [
"MIT"
] | null | null | null | home_web/webapp.py | tengshan2008/home_web | 6702defee466184edd38848d96efc1842ce6a2f9 | [
"MIT"
] | null | null | null | home_web/webapp.py | tengshan2008/home_web | 6702defee466184edd38848d96efc1842ce6a2f9 | [
"MIT"
] | null | null | null | # Entry point for the application.
from . import app # For application discovery by the 'flask' command.
from . import views # For import side-effects of setting up routes.
# Time-saver: output a URL to the VS Code terminal
# so you can easily Ctrl+click to open a browser
# print('http://127.0.0.1:5000/hello/VSCode')
| 40.5 | 72 | 0.731481 | 55 | 324 | 4.309091 | 0.8 | 0.084388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037453 | 0.175926 | 324 | 7 | 73 | 46.285714 | 0.850187 | 0.82716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a337c6e479dd7d8d660b56dbd91b4d988f45459 | 24 | py | Python | Services/LoanLiquidator/LoanBook.py | xan-crypto/CurveZero | d2e734be1dbbc4b2704adf08f627b66820f02904 | [
"MIT"
] | 18 | 2022-02-15T09:12:27.000Z | 2022-03-27T14:40:13.000Z | Services/LoanLiquidator/LoanBook.py | tygakim/CurveZero | c671630efbf5e379840636e632a8dcef3ec57de2 | [
"MIT"
] | 2 | 2022-03-18T22:55:09.000Z | 2022-03-21T19:40:38.000Z | Services/LoanLiquidator/LoanBook.py | tygakim/CurveZero | c671630efbf5e379840636e632a8dcef3ec57de2 | [
"MIT"
] | 4 | 2022-03-10T19:33:51.000Z | 2022-03-28T15:32:31.000Z | # get and run loan book
| 12 | 23 | 0.708333 | 5 | 24 | 3.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 24 | 1 | 24 | 24 | 0.944444 | 0.875 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8a4f09431e4f2683304326dcb15c677eb4a58135 | 91 | py | Python | Python Programs/Dictionary.py | Chibi-Shem/Hacktoberfest2020-Expert | 324843464aec039e130e85a16e74b76d310f1497 | [
"MIT"
] | 77 | 2020-10-01T10:06:59.000Z | 2021-11-08T08:57:18.000Z | Python Programs/Dictionary.py | Chibi-Shem/Hacktoberfest2020-Expert | 324843464aec039e130e85a16e74b76d310f1497 | [
"MIT"
] | 46 | 2020-09-27T04:55:36.000Z | 2021-05-14T18:49:06.000Z | Python Programs/Dictionary.py | Chibi-Shem/Hacktoberfest2020-Expert | 324843464aec039e130e85a16e74b76d310f1497 | [
"MIT"
] | 327 | 2020-09-26T17:06:03.000Z | 2021-10-09T06:04:39.000Z | dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
print "dict['Alice']: ", dict['Alice']
| 30.333333 | 51 | 0.538462 | 12 | 91 | 4.083333 | 0.75 | 0.367347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.131868 | 91 | 2 | 52 | 45.5 | 0.607595 | 0 | 0 | 0 | 0 | 0 | 0.450549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
8aac4c2318c59149be9eb4742320081f47ee07eb | 175 | py | Python | src/wl/commands/config_path.py | AlphaTechnolog/wl | 09a8f883f397ba7aae80c06f61fedd1887975d3f | [
"MIT"
] | 6 | 2021-07-13T16:34:45.000Z | 2022-03-02T17:34:39.000Z | src/wl/commands/config_path.py | AlphaTechnolog/wl | 09a8f883f397ba7aae80c06f61fedd1887975d3f | [
"MIT"
] | null | null | null | src/wl/commands/config_path.py | AlphaTechnolog/wl | 09a8f883f397ba7aae80c06f61fedd1887975d3f | [
"MIT"
] | null | null | null | from .command import Command
from ..paths import config_path
class ConfigPath(Command):
def run(self):
print('The config path is:', str(config_path.absolute()))
| 21.875 | 65 | 0.708571 | 24 | 175 | 5.083333 | 0.666667 | 0.245902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177143 | 175 | 7 | 66 | 25 | 0.847222 | 0 | 0 | 0 | 0 | 0 | 0.108571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8ab5fb1a5e8679007e7adbcf55a2b674f408609f | 83 | py | Python | src/poliastro/tests/test_examples.py | sundeshgupta/poliastro | 0a269d43c8a082df3323d38ce73f5e1ae3262ccd | [
"MIT"
] | 634 | 2015-05-11T08:50:42.000Z | 2022-03-28T10:13:13.000Z | src/poliastro/tests/test_examples.py | sundeshgupta/poliastro | 0a269d43c8a082df3323d38ce73f5e1ae3262ccd | [
"MIT"
] | 1,386 | 2015-04-29T20:54:36.000Z | 2022-03-30T13:06:34.000Z | src/poliastro/tests/test_examples.py | sundeshgupta/poliastro | 0a269d43c8a082df3323d38ce73f5e1ae3262ccd | [
"MIT"
] | 324 | 2015-04-29T20:52:43.000Z | 2022-03-06T23:19:15.000Z | # This line tests all the statements so far
from poliastro import examples # noqa
| 27.666667 | 43 | 0.783133 | 13 | 83 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192771 | 83 | 2 | 44 | 41.5 | 0.970149 | 0.554217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0a0d318a4621b0b81da23883acaeda56b9fed208 | 443 | py | Python | 19_blastomatic/tests/unit_test.py | ilaydabozan/biofx_python | b7bef85dcf0b0a9e049f10a0766b9da20bf676c7 | [
"MIT"
] | 74 | 2020-12-18T16:04:31.000Z | 2022-03-02T09:05:54.000Z | 19_blastomatic/tests/unit_test.py | ilaydabozan/biofx_python | b7bef85dcf0b0a9e049f10a0766b9da20bf676c7 | [
"MIT"
] | 6 | 2021-06-30T19:42:04.000Z | 2022-02-07T04:45:31.000Z | 19_blastomatic/tests/unit_test.py | ilaydabozan/biofx_python | b7bef85dcf0b0a9e049f10a0766b9da20bf676c7 | [
"MIT"
] | 169 | 2020-11-06T19:44:36.000Z | 2022-03-30T08:38:42.000Z | """ Unit tests for blastomatic """
from blastomatic import guess_delimiter
# --------------------------------------------------
def test_guess_delimiter() -> None:
""" Test guess_delimiter """
assert guess_delimiter('/foo/bar.csv') == ','
assert guess_delimiter('/foo/bar.txt') == '\t'
assert guess_delimiter('/foo/bar.tsv') == '\t'
assert guess_delimiter('/foo/bar.tab') == '\t'
assert guess_delimiter('') == '\t'
| 29.533333 | 52 | 0.564334 | 48 | 443 | 5.020833 | 0.395833 | 0.46473 | 0.414938 | 0.381743 | 0.439834 | 0.224066 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153499 | 443 | 14 | 53 | 31.642857 | 0.642667 | 0.227991 | 0 | 0 | 0 | 0 | 0.173252 | 0 | 0 | 0 | 0 | 0 | 0.714286 | 1 | 0.142857 | true | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0a139a2e56f75fb8bbd427f649f65549f110a2b3 | 117 | py | Python | dwork/mechanisms/__init__.py | kiprotect/dwork | abf2cdddf701da0e3d1987399f32f6edeed9493d | [
"BSD-3-Clause"
] | 2 | 2020-11-17T20:05:07.000Z | 2021-11-18T10:43:42.000Z | dwork/mechanisms/__init__.py | kiprotect/dwork | abf2cdddf701da0e3d1987399f32f6edeed9493d | [
"BSD-3-Clause"
] | null | null | null | dwork/mechanisms/__init__.py | kiprotect/dwork | abf2cdddf701da0e3d1987399f32f6edeed9493d | [
"BSD-3-Clause"
] | 2 | 2020-11-17T20:05:09.000Z | 2021-01-11T21:15:28.000Z | from .geometric import geometric_noise
from .laplace import laplace_noise
from .exponential import exponential_noise
| 29.25 | 42 | 0.871795 | 15 | 117 | 6.6 | 0.4 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 117 | 3 | 43 | 39 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0a183988d3d9d66efaf038d7a571ca3f4fb3bfc5 | 348 | py | Python | exercicios/Curso_Udemy_Python/sec4_aula78.py | IgoPereiraBarros/maratona-data-science-brasil | cc07476579134a2764f00d229d415657555dcdd1 | [
"MIT"
] | null | null | null | exercicios/Curso_Udemy_Python/sec4_aula78.py | IgoPereiraBarros/maratona-data-science-brasil | cc07476579134a2764f00d229d415657555dcdd1 | [
"MIT"
] | null | null | null | exercicios/Curso_Udemy_Python/sec4_aula78.py | IgoPereiraBarros/maratona-data-science-brasil | cc07476579134a2764f00d229d415657555dcdd1 | [
"MIT"
] | null | null | null |
class Operacoes:
def __init__(self, x, y):
self.x = x
self.y = y
def soma(self):
return self.x + self.y
def sub(self):
return self.x - self.y
def mult(self):
return self.x * self.y
def divisao(self):
return self.x / self.y
def potencia(self):
return self.x ** self.y
def divisaoInteira(self):
return self.x // self.y
| 13.92 | 26 | 0.635057 | 61 | 348 | 3.557377 | 0.229508 | 0.184332 | 0.193548 | 0.414747 | 0.62212 | 0.62212 | 0.529954 | 0 | 0 | 0 | 0 | 0 | 0.227011 | 348 | 24 | 27 | 14.5 | 0.806691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4375 | false | 0 | 0 | 0.375 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
0a1fea93bbfc96d66e03aa5e6ee9310c3364d537 | 13,792 | py | Python | tests/test_functions.py | geosharma/eqsig | 3083022ab9e48ee422eff261560ee60846e766e2 | [
"MIT"
] | 15 | 2018-10-08T19:18:06.000Z | 2022-02-05T16:03:31.000Z | tests/test_functions.py | geosharma/eqsig | 3083022ab9e48ee422eff261560ee60846e766e2 | [
"MIT"
] | 2 | 2019-11-06T05:07:45.000Z | 2021-04-19T09:59:25.000Z | tests/test_functions.py | geosharma/eqsig | 3083022ab9e48ee422eff261560ee60846e766e2 | [
"MIT"
] | 8 | 2018-10-08T19:18:09.000Z | 2022-02-03T12:08:33.000Z | import numpy as np
import eqsig
from eqsig import functions as fns
import pytest
from tests.conftest import TEST_DATA_DIR
def test_determine_pseudo_cyclic_peak_only_series_with_triangle_series():
values = [0, 1, 0, -1, 0, 1, 0, -1, 0, 1, 0]
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values / 2
peaks_only = eqsig.determine_pseudo_cyclic_peak_only_series(values)
cum_peaks = np.sum(np.abs(peaks_only))
assert np.isclose(cum_peaks, expected_sum)
def test_determine_peaks_only_delta_series_with_triangle_series():
values = [0, 1, 0, -1, 0, 1, 0, -1, 0, 1, 0]
peaks_only = eqsig.determine_peaks_only_delta_series(values)
cum_peaks = np.sum(np.abs(peaks_only))
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
cum_diff = np.sum(peaks_only)
assert np.isclose(cum_diff, 0), (cum_diff, 0)
assert np.isclose(cum_abs_delta_values, 10), (cum_abs_delta_values, 10)
def test_determine_pseudo_cyclic_peak_only_series_with_sine_wave():
time = np.arange(99)
values = np.sin(time)
values[-1] = 0
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values / 2
peaks_only = eqsig.determine_pseudo_cyclic_peak_only_series(values)
cum_peaks = np.sum(np.abs(peaks_only))
assert np.isclose(cum_peaks, expected_sum)
def test_determine_peaks_only_delta_series_with_sine_wave():
time = np.arange(99)
values = np.sin(time)
values[-1] = 0
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values
peaks_only = eqsig.determine_peaks_only_delta_series(values)
cum_peaks = np.sum(np.abs(peaks_only))
assert np.isclose(cum_peaks, expected_sum), (cum_peaks, expected_sum)
def test_determine_pseudo_cyclic_peak_only_series_with_ground_motion():
record_path = TEST_DATA_DIR
record_filename = 'test_motion_dt0p01.txt'
rec = np.loadtxt(record_path + record_filename, skiprows=2)
cum_abs_delta_values = np.sum(np.abs(np.diff(rec)))
expected_sum = cum_abs_delta_values / 2
peaks_only = eqsig.determine_pseudo_cyclic_peak_only_series(rec)
cum_peaks = np.sum(peaks_only)
assert np.isclose(cum_peaks, expected_sum)
def test_determine_peaks_only_delta_series_with_ground_motion():
record_path = TEST_DATA_DIR
record_filename = 'test_motion_dt0p01.txt'
rec = np.loadtxt(record_path + record_filename, skiprows=2)
cum_abs_delta_values = np.sum(np.abs(np.diff(rec)))
expected_sum = cum_abs_delta_values
delta_peaks_only = eqsig.determine_peaks_only_delta_series(rec)
cum_peaks = np.sum(np.abs(delta_peaks_only))
assert np.isclose(cum_peaks, expected_sum), (cum_peaks, expected_sum)
def test_determine_pseudo_cyclic_peak_only_series_with_a_double_peak_and_offset():
values = np.array([0, 2, 1, 2, 0, 1, 0, -1, 0, 1, 0]) + 4
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values / 2
peaks_only = eqsig.determine_pseudo_cyclic_peak_only_series(values)
cum_peaks = np.sum(peaks_only)
expected_series = np.array([0, 2, -1, 2, 0, 1, 0, 1, 0, 1, 0])
assert np.sum(np.abs(peaks_only - expected_series)) == 0.0
assert np.isclose(cum_peaks, expected_sum)
def test_determine_peaks_only_delta_series_with_a_double_peak_and_offset():
values = np.array([0, 2, 1, 2, 0, 1, 0, -1, 0, 1, 0]) + 4
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values
delta_peaks_only = eqsig.determine_peaks_only_delta_series(values)
cum_peaks = np.sum(np.abs(delta_peaks_only))
expected_series = np.array([0, 2, -1, 1, -2, 1, 0, -2, 0, 2, -1])
assert np.sum(np.abs(delta_peaks_only - expected_series)) == 0.0
assert np.isclose(cum_peaks, expected_sum)
def test_determine_pseudo_cyclic_peak_only_series_with_non_zero_end():
end_value = 1.
values = np.array([0, 2, -1, 2, 0, end_value])
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values / 2 + end_value / 2
peaks_only = eqsig.determine_pseudo_cyclic_peak_only_series(values)
cum_peaks = np.sum(peaks_only)
assert np.isclose(cum_peaks, expected_sum)
def test_determine_peaks_only_series_with_non_zero_end():
end_value = 1.
values = np.array([0, 2, -1, 2, 0, end_value])
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values
delta_peaks_only = eqsig.determine_peaks_only_delta_series(values)
cum_peaks = np.sum(np.abs(delta_peaks_only))
assert np.isclose(cum_peaks, expected_sum), (cum_peaks, expected_sum)
def test_determine_peaks_only_series_with_nonchanging_values():
values = np.array([0, 1, 1, -3, -5, 0]) # constant then reverse
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values / 2
peaks_only = eqsig.determine_pseudo_cyclic_peak_only_series(values)
cum_peaks = np.sum(peaks_only)
assert np.isclose(cum_peaks, expected_sum), cum_peaks
values = np.array([0, 1, 1, 3, -5, 0]) # constant the no reverse
cum_abs_delta_values = np.sum(np.abs(np.diff(values)))
expected_sum = cum_abs_delta_values / 2
peaks_only = eqsig.determine_pseudo_cyclic_peak_only_series(values)
cum_peaks = np.sum(peaks_only)
assert np.isclose(cum_peaks, expected_sum), cum_peaks
def test_fa_spectrum_conversion():
record_path = TEST_DATA_DIR
record_filename = 'test_motion_dt0p01.txt'
dt = 0.01
values = np.loadtxt(record_path + record_filename, skiprows=2)
npts = len(values)
n_factor = 2 ** int(np.ceil(np.log2(npts)))
fa = np.fft.fft(values, n=n_factor)
points = int(n_factor / 2)
fas = fa[range(points)] * dt
faf = np.arange(points) / (2 * points * dt)
n = 2 * len(fas)
asig = eqsig.AccSignal(values, dt)
fas_eqsig, faf_eqsig = fns.generate_fa_spectrum(asig)
assert np.isclose(fas, fas_eqsig).all()
assert np.isclose(faf, faf_eqsig).all()
a = np.zeros(len(fa), dtype=complex)
a[1:n // 2] = fas[1:]
a[n // 2 + 1:] = np.flip(np.conj(fas[1:]), axis=0)
a /= dt
sig = np.fft.ifft(fa, n=n_factor)
sig = sig[:len(values)]
assert np.isclose(np.sum(np.abs(sig)), np.sum(np.abs(values)))
asig2 = fns.fas2signal(fas_eqsig, dt, stype="signal")
trimmed = asig2.values[:len(values)]
assert np.isclose(np.sum(np.abs(trimmed)), np.sum(np.abs(values)))
def test_get_peak_indices():
values = np.array([0, 2, 1, 2, -1, 1, 1, 0.3, -1, 0.2, 1, 0.2])
peak_indices = fns.get_peak_array_indices(values)
peaks_series = np.zeros_like(values)
np.put(peaks_series, peak_indices, values)
expected = np.array([0, 1, 2, 3, 4, 5, 8, 10, 11])
assert np.sum(abs(peak_indices - expected)) == 0
values = np.array([2, 1, -1, 1])
peak_indices = fns.get_peak_array_indices(values)
expected = np.array([0, 2, 3])
assert np.sum(abs(peak_indices - expected)) == 0
values = np.array([1, 2, -1, 1])
peak_indices = fns.get_peak_array_indices(values)
expected = np.array([0, 1, 2, 3])
assert np.sum(abs(peak_indices - expected)) == 0
def test_get_zero_crossings_array_indices():
vs = np.array([0, 2, 1, 2, -1, 1, 0, 0, 1, 0.3, 0, -1, 0.2, 1, 0.2])
zci = fns.get_zero_crossings_array_indices(vs, keep_adj_zeros=True)
expected = np.array([0, 4, 5, 6, 7, 10, 12])
assert np.array_equal(zci, expected)
zci = fns.get_zero_crossings_array_indices(vs, keep_adj_zeros=False)
expected = np.array([0, 4, 5, 6, 10, 12])
assert np.array_equal(zci, expected), zci
# no zeros
vs = np.array([1, 2, 1, 2, -1, 1, 1, 1, 1, 0.3, 1, -1, 0.2, 1, 0.2])
zci = fns.get_zero_crossings_array_indices(vs, keep_adj_zeros=False)
expected = np.array([0, 4, 5, 11, 12])
assert np.array_equal(zci, expected), zci
vs = np.array([-1, -2, 1, 2, -1, 1, 1, 1, 1, 0.3, 1, -1, 0.2, 1, 0.2])
zci = fns.get_zero_crossings_array_indices(vs, keep_adj_zeros=False)
expected = np.array([0, 2, 4, 5, 11, 12])
assert np.array_equal(zci, expected), zci
def test_put_array_in_2d_array():
vals = np.arange(1, 5)
sfs = np.array([1, 2, 3])
expected_full = np.array([[0, 1, 2, 3, 4, 0, 0],
[0, 0, 1, 2, 3, 4, 0],
[0, 0, 0, 1, 2, 3, 4]])
out = fns.put_array_in_2d_array(vals, sfs)
assert np.array_equal(out, expected_full), out
# expected = np.array([[0, 1, 2, 3],
# [0, 0, 1, 2],
# [0, 0, 0, 1]])
out = fns.put_array_in_2d_array(vals, sfs, clip='end')
assert np.array_equal(out, expected_full[:, :-3]), out
out = fns.put_array_in_2d_array(vals, sfs, clip='start')
assert np.array_equal(out, expected_full), out
out = fns.put_array_in_2d_array(vals, sfs, clip='both')
assert np.array_equal(out, expected_full[:, :-3]), out
# neg shift
vals = np.arange(4, 6)
sfs = np.array([-1, 2])
expected_full = np.array([[4, 5, 0, 0, 0],
[0, 0, 0, 4, 5],
])
out = fns.put_array_in_2d_array(vals, sfs, clip='none')
assert np.array_equal(out, expected_full), out
out = fns.put_array_in_2d_array(vals, sfs, clip='end')
assert np.array_equal(out, expected_full[:, :-2]), out
out = fns.put_array_in_2d_array(vals, sfs, clip='start')
assert np.array_equal(out, expected_full[:, 1:]), out
out = fns.put_array_in_2d_array(vals, sfs, clip='both')
assert np.array_equal(out, expected_full[:, 1:-2]), out
def test_join_values_w_shifts():
vals = np.arange(1, 5)
sfs = np.array([1, 2, 3])
expected = np.array([[1, 3, 5, 7, 4, 0, 0],
[1, 2, 4, 6, 3, 4, 0],
[1, 2, 3, 5, 2, 3, 4]])
out = fns.join_values_w_shifts(vals, sfs)
assert np.array_equal(out, expected), out
expected = np.array([[ 1, 1, 1, 1, -4, 0, 0],
[ 1, 2, 2, 2, -3, -4, 0],
[ 1, 2, 3, 3, -2, -3, -4]])
def test_calc_step_fn_error():
assert min(fns.calc_step_fn_vals_error([4, 4, 4, 4, 1, 1, 1, 1])) == 0.0
assert min(fns.calc_step_fn_vals_error([4, 4, 4, 4, 1, 1, 1, 1], pow=2)) == 0.0
assert min(fns.calc_step_fn_vals_error([4, 5, 4, 4, 1, 1, 2, 1])) == 3.0
assert min(fns.calc_step_fn_vals_error([4, 5, 4, 4, 1, 1, 2, 1], pow=2)) == 1.0
def test_calc_step_fn_steps_val():
vals = [4, 4, 4, 4, 1, 1, 1, 1]
ind = np.argmin(fns.calc_step_fn_vals_error(vals))
pre, post = fns.calc_step_fn_steps_vals(vals, ind)
assert ind == 3
assert pre == 4
assert post == 1
vals = [4, 5, 4, 4, 1, 1, 2, 1]
ind = np.argmin(fns.calc_step_fn_vals_error(vals))
pre, post = fns.calc_step_fn_steps_vals(vals, ind)
assert ind == 3
assert np.isclose(pre, 4.333333)
assert post == 1.25
def test_roll_av_vals():
expected = np.array([4, 4, 3, 2, 1, 1, 1, 1])
assert np.sum(fns.calc_roll_av_vals([4, 4, 4, 4, 1, 1, 1, 1], steps=3) - expected) == 0
expected = np.array([4, 4, 4, 4, 3, 2, 1, 1])
assert np.sum(fns.calc_roll_av_vals([4, 4, 4, 4, 1, 1, 1, 1], steps=3, mode='backward') - expected) == 0
expected = np.array([4, 4, 4, 3, 2, 1, 1, 1])
assert np.sum(fns.calc_roll_av_vals([4, 4, 4, 4, 1, 1, 1, 1], steps=3, mode='centre') - expected) == 0
def test_interp2d():
y = np.linspace(1, 10, 3)
yf = np.linspace(0, 22, 5)
f = np.arange(len(yf))[:, np.newaxis] * np.ones((len(yf), 10))
f_interp = fns.interp2d(y, yf, f)
assert np.isclose(f_interp[0][0], (y[0] - yf[0]) / (yf[1] - yf[0])), (f_interp[0][0], (y[0] - yf[0]) / (yf[1] - yf[0]))
assert np.isclose(f_interp[1][0], 1), (f_interp[0][0], 1)
assert len(f_interp) == 3
assert len(f_interp[0]) == 10
def test_interp2d_2():
f = np.array([[0, 0, 0], # 0
[0, 1, 4], # 5
[2, 6, 2], # 10
[10, 10, 10] # 30
])
yf = np.array([0, 1, 2, 3])
y = np.array([0.5, 1, 2.2, 2.5])
f_interp = fns.interp2d(y, yf, f)
print(f_interp)
assert f_interp[0][0] == 0
assert f_interp[0][1] == 0.5
assert f_interp[0][2] == 2.0
assert f_interp[1][0] == f[1][0]
assert f_interp[1][1] == f[1][1]
assert f_interp[1][2] == f[1][2]
assert np.isclose(f_interp[2][0], f[2][0] + 8 * 0.2)
assert np.isclose(f_interp[3][2], 6.)
def test_interp2d_at_edge():
f = np.array([[0, 0, 0], # 0
[10, 10, 10] # 30
])
xf = np.array([0, 3])
x = np.array([0.0, 3.0])
f_interp = fns.interp2d(x, xf, f)
print(f_interp)
assert f_interp[0][0] == 0
assert f_interp[1][0] == 10.
def test_interp_left():
x0 = [0, 1, 5]
x = [0, 2, 6]
y = [1.5, 2.5, 3.5]
y_new = fns.interp_left(x0, x, y)
expected = np.array([1.5, 1.5, 2.5])
assert np.isclose(y_new, expected).all(), y_new
x0 = [0, 2, 6]
y_new = fns.interp_left(x0, x, y)
expected = np.array([1.5, 2.5, 3.5])
assert np.isclose(y_new, expected).all(), y_new
x0 = [-1, 2, 6]
with pytest.raises(AssertionError):
y_new = fns.interp_left(x0, x, y)
if __name__ == '__main__':
# test_interp2d()
test_interp2d_at_edge()
# test_put_array_in_2d_array()
# test_fa_spectrum_conversion()
# test_determine_peaks_only_series_with_sine_wave()
# test_determine_peaks_only_series_with_triangle_series()
# test_determine_peaks_only_series_with_ground_motion()
# test_determine_peaks_only_series_with_a_double_peak_and_offset()
# test_determine_peaks_only_series_with_nonchanging_values()
# test_determine_peaks_only_series_with_non_zero_end()
| 38.633053 | 123 | 0.640661 | 2,382 | 13,792 | 3.439127 | 0.075147 | 0.045288 | 0.033569 | 0.05188 | 0.832886 | 0.795166 | 0.75769 | 0.741821 | 0.693604 | 0.646851 | 0 | 0.062792 | 0.207874 | 13,792 | 356 | 124 | 38.741573 | 0.687048 | 0.043286 | 0 | 0.463768 | 0 | 0 | 0.009262 | 0.005011 | 0 | 0 | 0 | 0 | 0.235507 | 1 | 0.083333 | false | 0 | 0.018116 | 0 | 0.101449 | 0.007246 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0a35b0d3a49c9ccbacefa856998c0032bf6c6bf1 | 30 | py | Python | text/_elisp/type/primitive.py | jedhsu/text | 8525b602d304ac571a629104c48703443244545c | [
"Apache-2.0"
] | null | null | null | text/_elisp/type/primitive.py | jedhsu/text | 8525b602d304ac571a629104c48703443244545c | [
"Apache-2.0"
] | null | null | null | text/_elisp/type/primitive.py | jedhsu/text | 8525b602d304ac571a629104c48703443244545c | [
"Apache-2.0"
] | null | null | null | class Type(Integer):
pass
| 10 | 20 | 0.666667 | 4 | 30 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 30 | 2 | 21 | 15 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
0a43f89ed6fed32d4d877bb7811ad205ba262549 | 227,036 | py | Python | torch/testing/_internal/common_nn.py | pradeep90/pytorch | 9ddbce203d418fe24f8d9540bcff03faac257614 | [
"Intel"
] | null | null | null | torch/testing/_internal/common_nn.py | pradeep90/pytorch | 9ddbce203d418fe24f8d9540bcff03faac257614 | [
"Intel"
] | null | null | null | torch/testing/_internal/common_nn.py | pradeep90/pytorch | 9ddbce203d418fe24f8d9540bcff03faac257614 | [
"Intel"
] | null | null | null | from abc import abstractmethod
import math
import tempfile
import unittest
from copy import deepcopy
from functools import reduce
from itertools import product
from operator import mul
from math import pi
import torch
import torch.cuda
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import _reduction as _Reduction
from torch.testing._internal.common_utils import TestCase, to_gpu, freeze_rng_state, is_iterable, \
TEST_WITH_ROCM, gradcheck, gradgradcheck
from torch.testing._internal.common_cuda import TEST_CUDA
from torch.autograd.gradcheck import _get_numerical_jacobian, _iter_tensors
from torch.autograd import Variable
from torch.types import _TensorOrTensors
import torch.backends.cudnn
from typing import Dict, Callable, Tuple, List, Sequence, Union, Any
TemporaryFile = tempfile.TemporaryFile
PRECISION = 1e-5
def get_reduction(m):
result = getattr(m, 'reduction', None)
if result is None:
result = _Reduction.legacy_get_string(getattr(m, 'sizeAverage', None), True, emit_warning=False)
assert result is not None
return result
def get_weight(m):
result = getattr(m, 'weight', None)
if result is not None:
return result
return getattr(m, 'weights', None)
# NOTE [How to check NN module / functional API parity between Python and C++ frontends]
#
# The way to check API parity is to add parity tests for the NN module / functional of interest.
# Here are the detailed steps:
#
# For NN module:
# 1. Make sure you already have a test dict with the module configuration you want to test.
# 2. Add `cpp_constructor_args` entry to the test dict, with its value exactly matching
# the Python module constructor arguments. For example, if in the test dict we pass
# `(10, 8)` to `torch.nn.Linear` constructor, then we should pass `torch::nn::LinearOptions(10, 8)`
# as the corresponding C++ constructor argument to `torch::nn::Linear`.
# 3. If in the process of performing the above step you referenced any variables
# in the `cpp_constructor_args` entry, you must add `cpp_var_map` entry
# to the test dict to make sure that those variables are populated with the right Python values.
# For example, if the Python constructor call is
# `torch.nn.FractionalMaxPool2d(2, output_ratio=0.5, _random_samples=random_samples)`,
# the corresponding C++ constructor argument is
# `torch::nn::FractionalMaxPool2dOptions(2).output_ratio(0.5)._random_samples(random_samples)`,
# and the `cpp_var_map` entry must be
# `{'random_samples': random_samples}` in order to populate the C++ variable `random_samples`
# used in the C++ constructor argument with the Python tensor value `random_samples`.
#
# For NN functional:
# 1. Make sure you already have a test dict with the functional configuration you want to test.
# 2. If the test dict's `constructor` entry looks like `wrap_functional(F.some_functional_name, ...)`,
# then you must add `cpp_options_args` entry to the test dict, with its value exactly matching the Python
# functional optional arguments. For example, if the test dict's `constructor` entry is
# `wrap_functional(F.interpolate, size=12, scale_factor=None, mode='nearest')`,
# then the `cpp_options_args` entry should be
# "F::InterpolateFuncOptions().size(std::vector<int64_t>({12})).scale_factor(c10::nullopt).mode(torch::kNearest)".
# 3. Otherwise, if the test dict's `constructor` entry looks like
# `wrap_functional(lambda i: F.some_functional_name(...))`,
# then you must add `cpp_function_call` entry to the test dict, with its value exactly matching the Python
# functional function call. For example, if the test dict's `constructor` entry is
# `wrap_functional(lambda i: F.poisson_nll_loss(i, t.type_as(i), reduction='none'))`,
# then the `cpp_function_call` entry should be
# "F::poisson_nll_loss(i, t.to(i.options()), F::PoissonNLLLossFuncOptions().reduction(torch::kNone))".
# 4. If in the process of performing the above two steps you referenced any variables
# in the `cpp_options_args` or `cpp_function_call` entry, you must
# add `cpp_var_map` entry to the test dict to make sure that those variables
# are populated with the right Python values. For example, if the test dict's `constructor` entry is
# `wrap_functional(lambda i: F.poisson_nll_loss(i, t.type_as(i), reduction='none'))`,
# then the `cpp_function_call` entry should be
# "F::poisson_nll_loss(i, t.to(i.options()), F::PoissonNLLLossFuncOptions().reduction(torch::kNone))".
# Notice that there are two variables `i` and `t` that need to have their values provided,
# and the way to do so is to add a `cpp_var_map` entry: `cpp_var_map={'i': '_get_input()', 't': t}`.
# (Note that for `i`, since we want it to take the Python input value, we pass '_get_input()' string as value
# and the C++ parity test mechanism will populate `i` with the Python input value correctly.)
#
# There are also a few optional flags in the test dict to control the C++ parity test behavior:
#
# - `test_cpp_api_parity`: if `False`, skips the C++ parity test for this test dict. Default: True.
# - `has_parity`: if `False`, expects this test dict to fail the C++ parity test. Default: True.
module_tests = [
dict(
module_name='Linear',
constructor_args=(10, 8),
cpp_constructor_args='torch::nn::LinearOptions(10, 8)',
input_size=(4, 10),
reference_fn=lambda i, p, _: torch.mm(i, p[0].t()) + p[1].view(1, -1).expand(4, 8),
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Linear',
constructor_args=(10, 8, False),
cpp_constructor_args='torch::nn::LinearOptions(10, 8).bias(false)',
input_size=(4, 10),
desc='no_bias',
reference_fn=lambda i, p, _: torch.mm(i, p[0].t()),
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Threshold',
constructor_args=(2., 1.),
cpp_constructor_args='torch::nn::ThresholdOptions(2., 1.)',
input_size=(2, 3, 4, 5),
check_inplace=True,
desc='threshold_value'
),
dict(
module_name='Threshold',
constructor_args=(2., 10.),
cpp_constructor_args='torch::nn::ThresholdOptions(2., 10.)',
input_size=(2, 3, 4, 5),
desc='large_value'
),
dict(
module_name='ReLU',
input_size=(2, 3, 4, 5),
check_inplace=True,
),
dict(
module_name='ReLU6',
input_size=(2, 3, 4, 5),
check_inplace=True,
),
dict(
module_name='RReLU',
input_size=(1, 2, 2),
test_cuda=False,
),
dict(
module_name='RReLU',
constructor_args=(0.1, 0.9),
cpp_constructor_args='torch::nn::RReLUOptions().lower(0.1).upper(0.9)',
input_size=(4, 4, 5),
desc='with_up_down',
test_cuda=False,
),
dict(
module_name='Hardtanh',
input_size=(3, 2, 5),
reference_fn=lambda i, *_: i.clamp(-1, 1),
),
dict(
module_name='Sigmoid',
input_size=(2, 3, 4, 5),
),
dict(
module_name='Tanh',
input_size=(2, 3, 4, 5),
),
dict(
module_name='Flatten',
input_size=(2, 3, 4, 5),
reference_fn=lambda i, *_: torch.flatten(i, 1)
),
dict(
module_name='Softmax',
constructor_args=(1,),
cpp_constructor_args='torch::nn::SoftmaxOptions(1)',
input_size=(10, 20),
reference_fn=lambda i, *_: torch.exp(i).div(torch.exp(i).sum(1, True).expand(10, 20)),
),
dict(
module_name='Softmax2d',
input_size=(1, 3, 10, 20),
reference_fn=lambda i, *_: torch.exp(i).div(torch.exp(i).sum(1, False)),
),
dict(
module_name='LogSoftmax',
constructor_args=(1,),
cpp_constructor_args='torch::nn::LogSoftmaxOptions(1)',
input_size=(10, 20),
reference_fn=lambda i, *_: torch.exp(i).div_(torch.exp(i).sum(1, True).expand(10, 20)).log_(),
),
dict(
module_name='LogSoftmax',
constructor_args=(1,),
cpp_constructor_args='torch::nn::LogSoftmaxOptions(1)',
input_size=(1, 3, 10, 20),
reference_fn=lambda i, *_: torch.exp(i).div_(torch.exp(i).sum(1, False)).log_(),
desc='multiparam',
),
dict(
module_name='ELU',
constructor_args=(2.,),
cpp_constructor_args='torch::nn::ELUOptions().alpha(2.)',
input_size=(3, 2, 5),
reference_fn=lambda x, *_: torch.where(x >= 0, x, 2 * (x.exp() - 1)),
),
# TODO: reference function
dict(
module_name='Hardshrink',
constructor_args=(2.,),
cpp_constructor_args='torch::nn::HardshrinkOptions(2.)',
input_size=(4, 3, 2, 4),
),
dict(
module_name='LeakyReLU',
input_size=(3, 2, 5),
check_inplace=True
),
dict(
module_name='LeakyReLU',
constructor_args=(0.5,),
cpp_constructor_args='torch::nn::LeakyReLUOptions().negative_slope(0.5)',
input_size=(3, 2, 5),
check_inplace=True,
desc='with_negval'
),
dict(
module_name='LeakyReLU',
constructor_args=(0.0,),
cpp_constructor_args='torch::nn::LeakyReLUOptions().negative_slope(0.0)',
input_fn=lambda: torch.randn(10, 10),
check_inplace=True,
desc='with_zero_negval'
),
dict(
module_name='LogSigmoid',
input_size=(2, 3, 4),
reference_fn=lambda i, *_: i.sigmoid().log(),
),
dict(
module_name='Softplus',
input_size=(10, 20),
reference_fn=lambda i, *_: torch.log(1 + torch.exp(i)),
),
dict(
module_name='Softplus',
constructor_args=(2,),
cpp_constructor_args='torch::nn::SoftplusOptions().beta(2)',
input_size=(10, 20),
reference_fn=lambda i, *_: 1. / 2. * torch.log(1 + torch.exp(2 * i)),
desc='beta',
),
dict(
module_name='Softplus',
constructor_args=(2, -100),
cpp_constructor_args='torch::nn::SoftplusOptions().beta(2).threshold(-100)',
input_size=(10, 20),
reference_fn=(
lambda i, *_: ((i * 2) > -100).type_as(i) * i
+ ((i * 2) <= -100).type_as(i) * 1. / 2. * torch.log(1 + torch.exp(2 * i))
),
desc='beta_threshold',
),
dict(
module_name='Softshrink',
input_size=(3, 2, 5),
),
dict(
module_name='Softshrink',
constructor_args=(1,),
cpp_constructor_args='torch::nn::SoftshrinkOptions(1)',
input_size=(3, 2, 5),
desc='lambda',
),
dict(
module_name='CrossMapLRN2d',
constructor_args=(5, 5e-3, 1e-3, 2),
cpp_constructor_args='torch::nn::CrossMapLRN2dOptions(5).alpha(5e-3).beta(1e-3).k(2)',
input_size=(2, 3, 6, 6),
check_gradgrad=False,
# TODO(#50743): Figure out the error. "RuntimeError: Unrecognized tensor type ID: Batched"
check_batched_grad=False,
),
dict(
module_name='PReLU',
input_size=(2, 3, 4),
reference_fn=lambda i, p, _: torch.clamp(i, min=0) + torch.clamp(i, max=0) * p[0][0],
desc='1d',
),
dict(
module_name='PReLU',
constructor_args=(3,),
cpp_constructor_args='torch::nn::PReLUOptions().num_parameters(3)',
input_size=(2, 3, 4),
desc='1d_multiparam',
reference_fn=lambda i, p, _: torch.clamp(i, min=0) + torch.clamp(i, max=0) * p[0][0],
),
dict(
module_name='PReLU',
input_size=(2, 3, 4, 5),
desc='2d',
reference_fn=lambda i, p, _: torch.clamp(i, min=0) + torch.clamp(i, max=0) * p[0][0],
),
dict(
module_name='PReLU',
constructor_args=(3,),
cpp_constructor_args='torch::nn::PReLUOptions().num_parameters(3)',
input_size=(2, 3, 4, 5),
desc='2d_multiparam',
reference_fn=lambda i, p, _: torch.clamp(i, min=0) + torch.clamp(i, max=0) * p[0][0],
),
dict(
module_name='PReLU',
input_size=(2, 3, 4, 5, 6),
reference_fn=lambda i, p, _: torch.clamp(i, min=0) + torch.clamp(i, max=0) * p[0][0],
desc='3d',
),
dict(
module_name='PReLU',
constructor_args=(3,),
cpp_constructor_args='torch::nn::PReLUOptions().num_parameters(3)',
input_size=(2, 3, 4, 5, 6),
desc='3d_multiparam',
reference_fn=lambda i, p, _: torch.clamp(i, min=0) + torch.clamp(i, max=0) * p[0][0],
),
dict(
module_name='Softsign',
input_size=(3, 2, 5),
reference_fn=lambda i, *_: i.div(1 + torch.abs(i)),
),
dict(
module_name='Softmin',
constructor_args=(1,),
cpp_constructor_args='torch::nn::SoftminOptions(1)',
input_size=(10, 20),
),
dict(
module_name='Softmin',
constructor_args=(1,),
cpp_constructor_args='torch::nn::SoftminOptions(1)',
input_size=(2, 3, 5, 10),
desc='multidim',
),
dict(
module_name='Tanhshrink',
input_size=(2, 3, 4, 5),
),
]
# Generates rand tensor with non-equal values. This ensures that duplicate
# values won't be causing test failure for modules like MaxPooling.
# size should be small, otherwise randperm fails / long overflows.
def _rand_tensor_non_equal(*size):
total = reduce(mul, size, 1)
return torch.randperm(total).view(*size).double()
def wrap_functional(fn, **kwargs):
class FunctionalModule(nn.Module):
def forward(self, *args):
return fn(*args, **kwargs)
return FunctionalModule
def poissonnllloss_no_reduce_test():
t = torch.randn(10, 10)
return dict(
fullname='PoissonNLLLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.poisson_nll_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='F::poisson_nll_loss('
'i, t.to(i.options()), F::PoissonNLLLossFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.rand(10, 10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: i.exp() - t.mul(i),
pickle=False)
def bceloss_no_reduce_test():
t = Variable(torch.randn(15, 10).gt(0).double())
return dict(
fullname='BCELoss_no_reduce',
constructor=wrap_functional(
lambda i: F.binary_cross_entropy(i, t.type_as(i), reduction='none')),
cpp_function_call='F::binary_cross_entropy('
'i, t.to(i.options()), F::BinaryCrossEntropyFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.rand(15, 10).clamp_(2.8e-2, 1 - 2.8e-2),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: -(t * i.log() + (1 - t) * (1 - i).log()),
pickle=False,
precision=7e-4)
def bceloss_no_reduce_scalar_test():
t = torch.randn(()).gt(0).double()
return dict(
fullname='BCELoss_no_reduce_scalar',
constructor=wrap_functional(
lambda i: F.binary_cross_entropy(i, t.type_as(i), reduction='none')),
cpp_function_call='F::binary_cross_entropy('
'i, t.to(i.options()), F::BinaryCrossEntropyFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.rand(()).clamp_(2.8e-2, 1 - 2.8e-2),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: -(t * i.log() + (1 - t) * (1 - i).log()),
pickle=False)
def bceloss_weights_no_reduce_test():
t = Variable(torch.randn(15, 10).gt(0).double())
weights = torch.rand(10)
return dict(
fullname='BCELoss_weights_no_reduce',
constructor=wrap_functional(
lambda i: F.binary_cross_entropy(i, t.type_as(i),
weight=weights.type_as(i), reduction='none')),
cpp_function_call='F::binary_cross_entropy('
'i, t.to(i.options()), '
'F::BinaryCrossEntropyFuncOptions().weight(weights.to(i.options())).reduction(torch::kNone))',
input_fn=lambda: torch.rand(15, 10).clamp_(2.8e-2, 1 - 2.8e-2),
cpp_var_map={'i': '_get_input()', 't': t, 'weights': weights},
reference_fn=lambda i, p, m: -(t * i.log() + (1 - t) * (1 - i).log()) * weights,
pickle=False,
precision=3e-4
)
def bceloss_weights_no_reduce_scalar_test():
t = torch.randn(()).double()
weights = torch.rand(())
return dict(
fullname='BCELoss_weights_no_reduce_scalar',
constructor=wrap_functional(
lambda i: F.binary_cross_entropy(i, t.type_as(i),
weight=weights.type_as(i), reduction='none')),
cpp_function_call='''F::binary_cross_entropy(
i, t.to(i.options()),
F::BinaryCrossEntropyFuncOptions().weight(weights.to(i.options())).reduction(torch::kNone))''',
cpp_var_map={'i': '_get_input()', 't': t, 'weights': weights},
input_fn=lambda: torch.rand(()).clamp_(2.8e-2, 1 - 2.8e-2),
reference_fn=lambda i, *_: -(t * i.log() + (1 - t) * (1 - i).log()) * weights,
pickle=False
)
def bce_with_logistic_legacy_enum_test():
t = Variable(torch.randn(15, 10).gt(0).double())
sigmoid = nn.Sigmoid()
return dict(
fullname='BCEWithLogitsLoss_legacy_enum',
constructor=wrap_functional(
lambda i: F.binary_cross_entropy_with_logits(i, t.type_as(i), reduce=False)),
cpp_function_call='''F::binary_cross_entropy_with_logits(
i, t.to(i.options()), F::BinaryCrossEntropyWithLogitsFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.rand(15, 10).clamp_(2.8e-2, 1 - 2.8e-2),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: -(t * sigmoid(i).log() + (1 - t) * (1 - sigmoid(i)).log()),
check_gradgrad=False,
pickle=False,
)
def bce_with_logistic_no_reduce_test():
t = Variable(torch.randn(15, 10).gt(0).double())
sigmoid = nn.Sigmoid()
return dict(
fullname='BCEWithLogitsLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.binary_cross_entropy_with_logits(i, t.type_as(i), reduction='none')),
cpp_function_call='''F::binary_cross_entropy_with_logits(
i, t.to(i.options()), F::BinaryCrossEntropyWithLogitsFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.rand(15, 10).clamp_(2.8e-2, 1 - 2.8e-2),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: -(t * sigmoid(i).log() + (1 - t) * (1 - sigmoid(i)).log()),
check_gradgrad=False,
pickle=False,
)
def bce_with_logistic_no_reduce_scalar_test():
t = torch.randn(()).gt(0).double()
sigmoid = nn.Sigmoid()
return dict(
fullname='BCEWithLogitsLoss_no_reduce_scalar',
constructor=wrap_functional(
lambda i: F.binary_cross_entropy_with_logits(i, t.type_as(i), reduction='none')),
cpp_function_call='''F::binary_cross_entropy_with_logits(
i, t.to(i.options()), F::BinaryCrossEntropyWithLogitsFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.rand(()).clamp_(2.8e-2, 1 - 2.8e-2),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: -(t * sigmoid(i).log() + (1 - t) * (1 - sigmoid(i)).log()),
check_gradgrad=False,
pickle=False
)
def kldivloss_with_target_no_reduce_test():
i = torch.rand(10, 10).log()
return dict(
fullname='KLDivLoss_with_target_no_reduce',
constructor=wrap_functional(
lambda t: F.kl_div(i.type_as(t), t, reduction='none')),
cpp_function_call='F::kl_div(i.to(t.options()), t, F::KLDivFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.rand(10, 10),
cpp_var_map={'i': i, 't': '_get_input()'},
reference_fn=lambda t, *_:
loss_reference_fns['KLDivLoss'](i.type_as(t), t, reduction='none'),
pickle=False)
def kldivloss_no_reduce_test():
t = torch.randn(10, 10)
return dict(
fullname='KLDivLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.kl_div(i, t.type_as(i), reduction='none')),
cpp_function_call='F::kl_div(i, t.to(i.options()), F::KLDivFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.rand(10, 10).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['KLDivLoss'](i, t.type_as(i), reduction='none'),
pickle=False,
)
def kldivloss_no_reduce_scalar_test():
t = torch.randn(())
return dict(
fullname='KLDivLoss_no_reduce_scalar',
constructor=wrap_functional(
lambda i: F.kl_div(i, t.type_as(i), reduction='none')),
cpp_function_call='F::kl_div(i, t.to(i.options()), F::KLDivFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.rand(()).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['KLDivLoss'](i, t.type_as(i), reduction='none'),
pickle=False)
def kldivloss_with_log_target_no_reduce_test():
i = torch.rand(10, 10).log()
return dict(
fullname='KLDivLoss_with_log_target_no_reduce',
constructor=wrap_functional(
lambda t: F.kl_div(i.type_as(t), t, reduction='none', log_target=True)),
cpp_function_call='F::kl_div(i.to(t.options()), t, F::KLDivFuncOptions().reduction(torch::kNone).log_target(true))',
input_fn=lambda: torch.rand(10, 10),
cpp_var_map={'i': i, 't': '_get_input()'},
reference_fn=lambda t, *_:
loss_reference_fns['KLDivLoss_log_target'](i.type_as(t), t, reduction='none'),
pickle=False)
def kldivloss_no_reduce_log_target_test():
t = torch.randn(10, 10)
return dict(
fullname='KLDivLoss_no_reduce_log_target',
constructor=wrap_functional(
lambda i: F.kl_div(i, t.type_as(i), reduction='none', log_target=True)),
cpp_function_call='F::kl_div(i, t.to(i.options()), F::KLDivFuncOptions().reduction(torch::kNone).log_target(true))',
input_fn=lambda: torch.rand(10, 10).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['KLDivLoss_log_target'](i, t.type_as(i), reduction='none'),
pickle=False,
)
def kldivloss_no_reduce_scalar_log_target_test():
t = torch.randn(())
return dict(
fullname='KLDivLoss_no_reduce_scalar_log_target',
constructor=wrap_functional(
lambda i: F.kl_div(i, t.type_as(i), reduction='none', log_target=True)),
cpp_function_call='F::kl_div(i, t.to(i.options()), F::KLDivFuncOptions().reduction(torch::kNone).log_target(true))',
input_fn=lambda: torch.rand(()).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['KLDivLoss_log_target'](i, t.type_as(i), reduction='none'),
pickle=False)
def l1loss_no_reduce_test():
t = torch.randn(2, 3, 4)
return dict(
fullname='L1Loss_no_reduce',
constructor=wrap_functional(
lambda i: F.l1_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='F::l1_loss(i, t.to(i.options()), F::L1LossFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.randn(2, 3, 4),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: (i - t.type_as(i)).abs(),
pickle=False)
def l1loss_no_reduce_complex_test():
t = torch.randn(2, 3, 4, dtype=torch.cdouble)
return dict(
fullname='L1Loss_no_reduce_complex',
constructor=wrap_functional(
lambda i: F.l1_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='F::l1_loss(i, t.to(i.options()), F::L1LossFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.randn(2, 3, 4, dtype=torch.cdouble),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: (i - t.type_as(i)).abs(),
pickle=False)
def l1loss_no_reduce_scalar_test():
t = torch.randn(())
return dict(
fullname='L1Loss_no_reduce_scalar',
constructor=wrap_functional(
lambda i: F.l1_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='F::l1_loss(i, t.to(i.options()), F::L1LossFuncOptions().reduction(torch::kNone))',
input_fn=lambda: torch.randn(()),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_: (i - t.type_as(i)).abs(),
pickle=False)
def mseloss_no_reduce_test():
input_size = (2, 3, 4, 5)
target = torch.randn(*input_size)
return dict(
fullname='MSELoss_no_reduce',
constructor=wrap_functional(
lambda i: F.mse_loss(i, target.type_as(i), reduction='none')),
cpp_function_call='F::mse_loss(i, target.to(i.options()), F::MSELossFuncOptions().reduction(torch::kNone))',
input_size=input_size,
cpp_var_map={'i': '_get_input()', 'target': target},
reference_fn=lambda i, *_: (i - target).pow(2),
pickle=False)
def mseloss_no_reduce_scalar_test():
input_size = ()
target = torch.randn(input_size)
return dict(
fullname='MSELoss_no_reduce_scalar',
constructor=wrap_functional(
lambda i: F.mse_loss(i, target.type_as(i), reduction='none')),
cpp_function_call='F::mse_loss(i, target.to(i.options()), F::MSELossFuncOptions().reduction(torch::kNone))',
input_size=input_size,
cpp_var_map={'i': '_get_input()', 'target': target},
reference_fn=lambda i, *_: (i - target).pow(2),
pickle=False)
def nllloss_no_reduce_test():
t = Variable(torch.empty(15).uniform_().mul(10).floor().long())
kwargs = {'reduction': 'none'}
return dict(
fullname='NLLLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), reduction=kwargs['reduction'])),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong), F::NLLLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.rand(15, 10).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLoss'](i, t.type_as(i).long(), **kwargs),
pickle=False)
def nllloss_no_reduce_ignore_index_test():
t = Variable(torch.empty(15).uniform_().mul(10).floor().long())
kwargs: Dict[str, Union[int, str]] = {'ignore_index': 2, 'reduction': 'none'}
return dict(
fullname='NLLLoss_no_reduce_ignore_index',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), ignore_index=int(kwargs['ignore_index']),
reduction=str(kwargs['reduction']))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong), F::NLLLossFuncOptions().ignore_index(2).reduction(torch::kNone))''',
input_fn=lambda: torch.rand(15, 10).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLoss'](i, t.type_as(i).long(), **kwargs),
pickle=False)
def nllloss_no_reduce_weights_test():
t = Variable(torch.empty(15).uniform_().mul(10).floor().long())
weight = torch.rand(10)
def kwargs(i):
return {'weight': weight.type_as(i), 'reduction': 'none'}
return dict(
fullname='NLLLoss_no_reduce_weights',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), **kwargs(i))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong),
F::NLLLossFuncOptions().weight(weight.to(i.options())).reduction(torch::kNone))''',
input_fn=lambda: torch.rand(15, 10).add(1e-2).log(),
cpp_var_map={'i': '_get_input()', 't': t, 'weight': weight},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLoss'](i, t.type_as(i).long(), **kwargs(i)),
pickle=False)
def nllloss_no_reduce_weights_ignore_index_test():
t = Variable(torch.empty(15).uniform_().mul(10).floor().long())
weight = torch.rand(10)
def kwargs(i):
return {'weight': weight.type_as(i), 'reduction': 'none',
'ignore_index': 2}
return dict(
fullname='NLLLoss_no_reduce_weights_ignore_index',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), **kwargs(i.data))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong),
F::NLLLossFuncOptions().weight(weight.to(i.options())).reduction(torch::kNone).ignore_index(2))''',
input_fn=lambda: torch.rand(15, 10).add(1e-2).log(),
cpp_var_map={'i': '_get_input()', 't': t, 'weight': weight},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLoss'](i, t.type_as(i).long(), **kwargs(i)),
pickle=False)
def nllloss_no_reduce_weights_ignore_index_neg_test():
t = Variable(torch.empty(15).uniform_().mul(10).floor().long())
weight = torch.rand(10)
def kwargs(i):
return {'weight': weight.type_as(i), 'reduction': 'none',
'ignore_index': -1}
return dict(
fullname='NLLLoss_no_reduce_weights_ignore_index_neg',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), **kwargs(i))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong),
F::NLLLossFuncOptions().weight(weight.to(i.options())).reduction(torch::kNone).ignore_index(-1))''',
input=torch.rand(15, 10).add(1e-2).log(),
cpp_var_map={'i': '_get_input()', 't': t, 'weight': weight},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLoss'](i, t.type_as(i).long(), **kwargs(i)),
pickle=False)
def nllloss2d_no_reduce_test():
t = Variable(torch.rand(2, 5, 5).mul(3).floor().long())
kwargs = {'reduction': 'none'}
return dict(
fullname='NLLLoss2d_no_reduce',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), reduction=kwargs['reduction'])),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong), F::NLLLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.rand(2, 3, 5, 5).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLossNd'](i, t.type_as(i).long(), **kwargs),
pickle=False)
def nllloss2d_no_reduce_ignore_index_test():
t = Variable(torch.rand(2, 5, 5).mul(3).floor().long())
kwargs: Dict[str, Union[int, str]] = {'ignore_index': 1, 'reduction': 'none'}
return dict(
fullname='NLLLoss2d_no_reduce_ignore_index',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), ignore_index=int(kwargs['ignore_index']),
reduction=str(kwargs['reduction']))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong), F::NLLLossFuncOptions().ignore_index(1).reduction(torch::kNone))''',
input_fn=lambda: torch.rand(2, 3, 5, 5).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLossNd'](i, t.type_as(i).long(), **kwargs),
pickle=False)
def nllloss2d_no_reduce_weights_test():
t = Variable(torch.rand(2, 5, 5).mul(3).floor().long())
weight = torch.rand(3)
def kwargs(i):
return {'weight': weight.type_as(i), 'reduction': 'none'}
return dict(
fullname='NLLLoss2d_no_reduce_weights',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), **kwargs(i))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong),
F::NLLLossFuncOptions().weight(weight.to(i.options())).reduction(torch::kNone))''',
input_fn=lambda: torch.rand(2, 3, 5, 5).log(),
cpp_var_map={'i': '_get_input()', 't': t, 'weight': weight},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLossNd'](i, t.type_as(i).long(), **kwargs(i)),
pickle=False)
def nlllossNd_no_reduce_test():
t = Variable(torch.rand(2, 5, 5, 2, 2).mul(3).floor().long())
kwargs = {'reduction': 'none'}
return dict(
fullname='NLLLossNd_no_reduce',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), reduction=kwargs['reduction'])),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong), F::NLLLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.rand(2, 3, 5, 5, 2, 2).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLossNd'](i, t.type_as(i).long(), **kwargs),
pickle=False)
def nlllossNd_no_reduce_ignore_index_test():
t = Variable(torch.rand(2, 5, 5, 2, 2).mul(3).floor().long())
kwargs: Dict[str, Union[int, str]] = {'ignore_index': 1, 'reduction': 'none'}
return dict(
fullname='NLLLossNd_no_reduce_ignore_index',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), ignore_index=int(kwargs['ignore_index']),
reduction=str(kwargs['reduction']))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong), F::NLLLossFuncOptions().ignore_index(1).reduction(torch::kNone))''',
input_fn=lambda: torch.rand(2, 3, 5, 5, 2, 2).log(),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLossNd'](i, t.type_as(i).long(), **kwargs),
pickle=False)
def nlllossNd_no_reduce_weights_test():
t = Variable(torch.rand(2, 5, 5, 2, 2).mul(3).floor().long())
weight = torch.rand(3)
def kwargs(i):
return {'weight': weight.type_as(i), 'reduction': 'none'}
return dict(
fullname='NLLLossNd_no_reduce_weights',
constructor=wrap_functional(
lambda i: F.nll_loss(i, t.type_as(i).long(), **kwargs(i))),
cpp_function_call='''F::nll_loss(
i, t.to(i.options()).to(torch::kLong),
F::NLLLossFuncOptions().weight(weight.to(i.options())).reduction(torch::kNone))''',
input_fn=lambda: torch.rand(2, 3, 5, 5, 2, 2).log(),
cpp_var_map={'i': '_get_input()', 't': t, 'weight': weight},
reference_fn=lambda i, *_:
loss_reference_fns['NLLLossNd'](i, t.type_as(i).long(), **kwargs(i)),
pickle=False)
def smoothl1loss_no_reduce_test():
t = torch.randn(2, 3, 4)
return dict(
fullname='SmoothL1Loss_no_reduce',
constructor=wrap_functional(
lambda i: F.smooth_l1_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='''F::smooth_l1_loss(
i, t.to(i.options()), F::SmoothL1LossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(2, 3, 4),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['SmoothL1Loss'](i, t.type_as(i), reduction='none'),
pickle=False)
def smoothl1loss_no_reduce_scalar_test():
t = torch.randn(())
return dict(
fullname='SmoothL1Loss_no_reduce_scalar',
constructor=wrap_functional(
lambda i: F.smooth_l1_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='''F::smooth_l1_loss(
i, t.to(i.options()), F::SmoothL1LossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(()),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['SmoothL1Loss'](i, t.type_as(i), reduction='none'),
pickle=False)
def smoothl1loss_beta_test():
t = torch.randn(2, 3, 4)
return dict(
fullname='SmoothL1Loss_beta',
constructor=wrap_functional(
lambda i: F.smooth_l1_loss(i, t.type_as(i), reduction='none', beta=0.5)),
cpp_function_call='''F::smooth_l1_loss(
i, t.to(i.options()), F::SmoothL1LossFuncOptions().reduction(torch::kNone), 0.5)''',
input_fn=lambda: torch.randn(2, 3, 4),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['SmoothL1Loss'](i, t.type_as(i), reduction='none', beta=0.5),
pickle=False)
def smoothl1loss_zero_beta_test():
t = torch.randn(2, 3, 4)
return dict(
fullname='SmoothL1Loss_zero_beta',
constructor=wrap_functional(
lambda i: F.smooth_l1_loss(i, t.type_as(i), reduction='none', beta=0)),
cpp_function_call='''F::smooth_l1_loss(
i, t.to(i.options()), F::SmoothL1LossFuncOptions().reduction(torch::kNone), 0)''',
input_fn=lambda: torch.randn(2, 3, 4),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['SmoothL1Loss'](i, t.type_as(i), reduction='none', beta=0),
pickle=False)
def huberloss_delta_test():
t = torch.randn(2, 3, 4)
return dict(
fullname='HuberLoss_delta',
constructor=wrap_functional(
lambda i: F.huber_loss(i, t.type_as(i), reduction='none', delta=0.5)),
cpp_function_call='''F::huber_loss(
i, t.to(i.options()), F::HuberLossFuncOptions().reduction(torch::kNone).delta(0.5))''',
input_fn=lambda: torch.randn(2, 3, 4),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['HuberLoss'](i, t.type_as(i), reduction='none', delta=0.5),
pickle=False)
def multilabelmarginloss_0d_no_reduce_test():
t = torch.zeros(()).long()
return dict(
fullname='MultiLabelMarginLoss_0d_no_reduce',
constructor=wrap_functional(
lambda i: F.multilabel_margin_loss(i, t.type_as(i).long(), reduction='none')),
cpp_function_call='''F::multilabel_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultilabelMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(()),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiLabelMarginLoss'](i, t.data.type_as(i).long(), reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multilabelmarginloss_1d_no_reduce_test():
t = Variable(torch.rand(10).mul(10).floor().long())
return dict(
fullname='MultiLabelMarginLoss_1d_no_reduce',
constructor=wrap_functional(
lambda i: F.multilabel_margin_loss(i, t.type_as(i).long(), reduction='none')),
cpp_function_call='''F::multilabel_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultilabelMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiLabelMarginLoss'](i, t.data.type_as(i).long(), reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multilabelmarginloss_index_neg_test():
t = Variable(torch.clamp(torch.rand(5, 10).add(-.5).mul(20).floor().long(), min=-1))
return dict(
fullname='MultiLabelMarginLoss_index_neg',
constructor=wrap_functional(
lambda i: F.multilabel_margin_loss(i, t.type_as(i).long(), reduction='none')),
cpp_function_call='''F::multilabel_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultilabelMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiLabelMarginLoss'](i, t.data.type_as(i).long(), reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multilabelmarginloss_no_reduce_test():
t = Variable(torch.rand(5, 10).mul(10).floor().long())
return dict(
fullname='MultiLabelMarginLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.multilabel_margin_loss(i, t.type_as(i).long(), reduction='none')),
cpp_function_call='''F::multilabel_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultilabelMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiLabelMarginLoss'](i, t.data.type_as(i).long(), reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def hingeembeddingloss_no_reduce_test():
t = Variable(torch.randn(10).gt(0).double().mul_(2).sub(1))
return dict(
fullname='HingeEmbeddingLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.hinge_embedding_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='''F::hinge_embedding_loss(
i, t.to(i.options()), F::HingeEmbeddingLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['HingeEmbeddingLoss'](i, t.type_as(i), reduction='none'),
check_sum_reduction=True,
pickle=False)
def hingeembeddingloss_margin_no_reduce_test():
t = Variable(torch.randn(10).gt(0).double().mul_(2).sub(1))
return dict(
fullname='HingeEmbeddingLoss_margin_no_reduce',
constructor=wrap_functional(
lambda i: F.hinge_embedding_loss(i, t.type_as(i), margin=0.5, reduction='none')),
cpp_function_call='''F::hinge_embedding_loss(
i, t.to(i.options()), F::HingeEmbeddingLossFuncOptions().margin(0.5).reduction(torch::kNone))''',
input_fn=lambda: torch.randn(10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['HingeEmbeddingLoss'](i, t.type_as(i), margin=0.5, reduction='none'),
check_sum_reduction=True,
pickle=False)
def softmarginloss_no_reduce_test():
t = torch.randn(5, 5)
return dict(
fullname='SoftMarginLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.soft_margin_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='''F::soft_margin_loss(
i, t.to(i.options()), F::SoftMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 5),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['SoftMarginLoss'](i, t.type_as(i), reduction='none'),
pickle=False)
def multilabelsoftmarginloss_no_reduce_test():
t = torch.rand(5, 10).mul(2).floor()
return dict(
fullname='MultiLabelSoftMarginLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.multilabel_soft_margin_loss(i, t.type_as(i), reduction='none')),
cpp_function_call='''F::multilabel_soft_margin_loss(
i, t.to(i.options()), F::MultilabelSoftMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
(-(t * i.sigmoid().log() + (1 - t) * (-i).sigmoid().log())).sum(dim=1) / i.size(1),
check_gradgrad=False,
pickle=False)
def multilabelsoftmarginloss_weights_no_reduce_test():
t = torch.rand(5, 10).mul(2).floor()
weights = torch.rand(10)
return dict(
fullname='MultiLabelSoftMarginLoss_weights_no_reduce',
constructor=wrap_functional(
lambda i: F.multilabel_soft_margin_loss(i, t.type_as(i),
weight=weights.type_as(i), reduction='none')),
cpp_function_call='''F::multilabel_soft_margin_loss(
i, t.to(i.options()),
F::MultilabelSoftMarginLossFuncOptions().weight(weights.to(i.options())).reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10),
cpp_var_map={'i': '_get_input()', 't': t, 'weights': weights},
reference_fn=lambda i, *_:
(-(t * i.sigmoid().log() + (1 - t) * (-i).sigmoid().log()) * weights).sum(dim=1) / i.size(1),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multimarginloss_no_reduce_test():
t = torch.rand(5).mul(8).floor().long()
return dict(
fullname='MultiMarginLoss_no_reduce',
constructor=wrap_functional(
lambda i: F.multi_margin_loss(i, t.type_as(i).long(), reduction='none')),
cpp_function_call='''F::multi_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultiMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiMarginLoss'](i, t.data.type_as(i).long(), reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multimarginloss_1d_no_reduce_test():
t = torch.rand(1).mul(8).floor().long()
return dict(
fullname='MultiMarginLoss_1d_no_reduce',
constructor=wrap_functional(
lambda i: F.multi_margin_loss(i, t.type_as(i).long(), reduction='none')),
cpp_function_call='''F::multi_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultiMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiMarginLoss'](i, t.data.type_as(i).long(), reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multimarginloss_1d_input_0d_target_no_reduce_test():
t = torch.rand(()).mul(8).floor().long()
return dict(
fullname='multimarginloss_1d_input_0d_target_no_reduce',
constructor=wrap_functional(
lambda i: F.multi_margin_loss(i, t.type_as(i).long(), reduction='none')),
cpp_function_call='''F::multi_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultiMarginLossFuncOptions().reduction(torch::kNone))''',
input_fn=lambda: torch.randn(10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiMarginLoss'](i, t.data.type_as(i).long(), reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multimarginloss_p_no_reduce_test():
t = torch.rand(5).mul(8).floor().long()
return dict(
fullname='MultiMarginLoss_p_no_reduce',
constructor=wrap_functional(
lambda i: F.multi_margin_loss(i, t.type_as(i).long(), p=2, reduction='none')),
cpp_function_call='''F::multi_margin_loss(
i, t.to(i.options()).to(torch::kLong), F::MultiMarginLossFuncOptions().p(2).reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10).clamp_(1e-2, 1 - 1e-2),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiMarginLoss'](i, t.data.type_as(i).long(), p=2, reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multimarginloss_margin_no_reduce_test():
t = torch.rand(5).mul(8).floor().long()
return dict(
fullname='MultiMarginLoss_margin_no_reduce',
constructor=wrap_functional(
lambda i: F.multi_margin_loss(i, t.type_as(i).long(), margin=0.5, reduction='none')),
cpp_function_call='''F::multi_margin_loss(
i, t.to(i.options()).to(torch::kLong),
F::MultiMarginLossFuncOptions().margin(0.5).reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10),
cpp_var_map={'i': '_get_input()', 't': t},
reference_fn=lambda i, *_:
loss_reference_fns['MultiMarginLoss'](i, t.data.type_as(i).long(),
margin=0.5, reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def multimarginloss_weights_no_reduce_test():
t = torch.rand(5).mul(8).floor().long()
weights = torch.rand(10)
return dict(
fullname='MultiMarginLoss_weights_no_reduce',
constructor=wrap_functional(
lambda i: F.multi_margin_loss(i, t.type_as(i).long(), weight=weights.type_as(i),
reduction='none')),
cpp_function_call='''F::multi_margin_loss(
i, t.to(i.options()).to(torch::kLong),
F::MultiMarginLossFuncOptions().weight(weights.to(i.options())).reduction(torch::kNone))''',
input_fn=lambda: torch.randn(5, 10),
cpp_var_map={'i': '_get_input()', 't': t, 'weights': weights},
reference_fn=lambda i, *_:
loss_reference_fns['MultiMarginLoss'](i, t.data.type_as(i).long(),
weight=weights, reduction='none'),
check_sum_reduction=True,
check_gradgrad=False,
pickle=False)
def fractional_max_pool2d_test(test_case):
random_samples = torch.empty((1, 3, 2), dtype=torch.double).uniform_()
if test_case == 'ratio':
return dict(
constructor=lambda: nn.FractionalMaxPool2d(
2, output_ratio=0.5, _random_samples=random_samples),
cpp_constructor_args='''torch::nn::FractionalMaxPool2dOptions(2)
.output_ratio(0.5)
._random_samples(random_samples)''',
input_size=(1, 3, 5, 7),
cpp_var_map={'random_samples': random_samples},
fullname='FractionalMaxPool2d_ratio')
elif test_case == 'size':
return dict(
constructor=lambda: nn.FractionalMaxPool2d((2, 3), output_size=(
4, 3), _random_samples=random_samples),
cpp_constructor_args='''torch::nn::FractionalMaxPool2dOptions({2, 3})
.output_size(std::vector<int64_t>({4, 3}))
._random_samples(random_samples)''',
input_size=(1, 3, 7, 6),
cpp_var_map={'random_samples': random_samples},
fullname='FractionalMaxPool2d_size')
def fractional_max_pool3d_test(test_case):
random_samples = torch.empty((2, 4, 3), dtype=torch.double).uniform_()
if test_case == 'ratio':
return dict(
constructor=lambda: nn.FractionalMaxPool3d(
2, output_ratio=0.5, _random_samples=random_samples),
cpp_constructor_args='''torch::nn::FractionalMaxPool3dOptions(2)
.output_ratio(0.5)
._random_samples(random_samples)''',
input_size=(2, 4, 5, 5, 5),
cpp_var_map={'random_samples': random_samples},
fullname='FractionalMaxPool3d_ratio')
elif test_case == 'size':
return dict(
constructor=lambda: nn.FractionalMaxPool3d((2, 2, 2), output_size=(
4, 4, 4), _random_samples=random_samples),
cpp_constructor_args='''torch::nn::FractionalMaxPool3dOptions({2, 2, 2})
.output_size(std::vector<int64_t>({4, 4, 4}))
._random_samples(random_samples)''',
input_size=(2, 4, 7, 7, 7),
cpp_var_map={'random_samples': random_samples},
fullname='FractionalMaxPool3d_size')
elif test_case == 'asymsize':
return dict(
constructor=lambda: nn.FractionalMaxPool3d((4, 2, 3), output_size=(
10, 3, 2), _random_samples=random_samples),
cpp_constructor_args='''torch::nn::FractionalMaxPool3dOptions({4, 2, 3})
.output_size(std::vector<int64_t>({10, 3, 2}))
._random_samples(random_samples)''',
input_size=(2, 4, 16, 7, 5),
cpp_var_map={'random_samples': random_samples},
fullname='FractionalMaxPool3d_asymsize')
def single_batch_reference_fn(input, parameters, module):
"""Reference function for modules supporting no batch dimensions.
The module is passed the input and target in batched form with a single item.
The output is squeezed to compare with the no-batch input.
"""
single_batch_input = input.unsqueeze(0)
with freeze_rng_state():
return module(single_batch_input).squeeze(0)
new_module_tests = [
poissonnllloss_no_reduce_test(),
bceloss_no_reduce_test(),
bceloss_weights_no_reduce_test(),
bce_with_logistic_legacy_enum_test(),
bce_with_logistic_no_reduce_test(),
bceloss_no_reduce_scalar_test(),
bceloss_weights_no_reduce_scalar_test(),
bce_with_logistic_no_reduce_scalar_test(),
kldivloss_with_target_no_reduce_test(),
kldivloss_no_reduce_test(),
kldivloss_no_reduce_scalar_test(),
kldivloss_with_log_target_no_reduce_test(),
kldivloss_no_reduce_log_target_test(),
kldivloss_no_reduce_scalar_log_target_test(),
l1loss_no_reduce_test(),
l1loss_no_reduce_complex_test(),
l1loss_no_reduce_scalar_test(),
mseloss_no_reduce_test(),
mseloss_no_reduce_scalar_test(),
nllloss_no_reduce_test(),
nllloss_no_reduce_ignore_index_test(),
nllloss_no_reduce_weights_test(),
nllloss_no_reduce_weights_ignore_index_test(),
nllloss_no_reduce_weights_ignore_index_neg_test(),
nllloss2d_no_reduce_test(),
nllloss2d_no_reduce_weights_test(),
nllloss2d_no_reduce_ignore_index_test(),
nlllossNd_no_reduce_test(),
nlllossNd_no_reduce_weights_test(),
nlllossNd_no_reduce_ignore_index_test(),
smoothl1loss_no_reduce_test(),
smoothl1loss_no_reduce_scalar_test(),
smoothl1loss_beta_test(),
smoothl1loss_zero_beta_test(),
huberloss_delta_test(),
multilabelmarginloss_0d_no_reduce_test(),
multilabelmarginloss_1d_no_reduce_test(),
multilabelmarginloss_index_neg_test(),
multilabelmarginloss_no_reduce_test(),
hingeembeddingloss_no_reduce_test(),
hingeembeddingloss_margin_no_reduce_test(),
softmarginloss_no_reduce_test(),
multilabelsoftmarginloss_no_reduce_test(),
multilabelsoftmarginloss_weights_no_reduce_test(),
multimarginloss_no_reduce_test(),
multimarginloss_1d_no_reduce_test(),
multimarginloss_1d_input_0d_target_no_reduce_test(),
multimarginloss_p_no_reduce_test(),
multimarginloss_margin_no_reduce_test(),
multimarginloss_weights_no_reduce_test(),
fractional_max_pool2d_test('ratio'),
fractional_max_pool2d_test('size'),
fractional_max_pool3d_test('ratio'),
fractional_max_pool3d_test('size'),
fractional_max_pool3d_test('asymsize'),
dict(
module_name='BatchNorm1d',
constructor_args=(10,),
cpp_constructor_args='torch::nn::BatchNorm1dOptions(10)',
input_size=(4, 10),
cudnn=True,
check_eval=True,
desc='affine',
),
dict(
module_name='BatchNorm1d',
constructor_args=(5,),
cpp_constructor_args='torch::nn::BatchNorm1dOptions(5)',
input_size=(4, 5, 3),
cudnn=True,
check_eval=True,
desc='3d_input',
),
dict(
module_name='BatchNorm1d',
constructor_args=(10, 1e-3, None),
cpp_constructor_args='torch::nn::BatchNorm1dOptions(10).eps(1e-3).momentum(c10::nullopt)',
input_size=(4, 10),
cudnn=True,
check_eval=True,
desc='affine_simple_average',
),
dict(
module_name='BatchNorm1d',
constructor_args=(10, 1e-3, 0.3, False),
cpp_constructor_args='torch::nn::BatchNorm1dOptions(10).eps(1e-3).momentum(0.3).affine(false)',
input_size=(4, 10),
cudnn=True,
check_eval=True,
desc='not_affine',
),
dict(
module_name='BatchNorm1d',
constructor_args=(10, 1e-3, 0.3, True, False),
cpp_constructor_args='''torch::nn::BatchNorm1dOptions(10)
.eps(1e-3).momentum(0.3).affine(true).track_running_stats(false)''',
input_size=(4, 10),
cudnn=True,
check_eval=True,
desc='not_tracking_stats',
),
dict(
module_name='BatchNorm1d',
constructor_args=(5, 1e-3, 0.3, False),
cpp_constructor_args='torch::nn::BatchNorm1dOptions(5).eps(1e-3).momentum(0.3).affine(false)',
input_size=(4, 5, 3),
cudnn=True,
check_eval=True,
desc='3d_input_not_affine',
),
dict(
module_name='BatchNorm1d',
constructor_args=(5, 1e-3, 0.3, False),
cpp_constructor_args='torch::nn::BatchNorm1dOptions(5).eps(1e-3).momentum(0.3).affine(false)',
input_size=(0, 5, 9),
cudnn=True,
check_eval=True,
desc='zero_batch',
),
dict(
module_name='BatchNorm2d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::BatchNorm2dOptions(3)',
input_size=(2, 3, 6, 6),
cudnn=True,
check_eval=True,
),
dict(
module_name='BatchNorm2d',
constructor_args=(3, 1e-3, None),
cpp_constructor_args='torch::nn::BatchNorm2dOptions(3).eps(1e-3).momentum(c10::nullopt)',
input_size=(2, 3, 6, 6),
cudnn=True,
check_eval=True,
desc='2d_simple_average',
),
dict(
module_name='BatchNorm2d',
constructor_args=(3, 1e-3, 0.8),
cpp_constructor_args='torch::nn::BatchNorm2dOptions(3).eps(1e-3).momentum(0.8)',
input_size=(2, 3, 6, 6),
cudnn=True,
check_eval=True,
desc='momentum',
),
dict(
module_name='BatchNorm2d',
constructor_args=(3, 1e-3, 0.8, False),
cpp_constructor_args='torch::nn::BatchNorm2dOptions(3).eps(1e-3).momentum(0.8).affine(false)',
input_size=(2, 3, 6, 6),
cudnn=True,
check_eval=True,
desc='not_affine',
),
dict(
module_name='BatchNorm2d',
constructor_args=(3, 1e-3, 0.8, True, False),
cpp_constructor_args='''torch::nn::BatchNorm2dOptions(3)
.eps(1e-3).momentum(0.8).affine(true).track_running_stats(false)''',
input_size=(2, 3, 6, 6),
cudnn=True,
check_eval=True,
desc='not_tracking_stats',
),
dict(
module_name='BatchNorm2d',
constructor_args=(5, 1e-3, 0.3, False),
cpp_constructor_args='torch::nn::BatchNorm2dOptions(5).eps(1e-3).momentum(0.3).affine(false)',
input_size=(0, 5, 2, 2),
cudnn=True,
check_eval=True,
desc='zero_batch',
),
dict(
module_name='BatchNorm3d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::BatchNorm3dOptions(3)',
input_size=(2, 3, 4, 4, 4),
cudnn=True,
check_eval=True,
),
dict(
module_name='BatchNorm3d',
constructor_args=(3, 1e-3, None),
cpp_constructor_args='torch::nn::BatchNorm3dOptions(3).eps(1e-3).momentum(c10::nullopt)',
input_size=(2, 3, 4, 4, 4),
cudnn=True,
check_eval=True,
desc='3d_simple_average',
),
dict(
module_name='BatchNorm3d',
constructor_args=(3, 1e-3, 0.7),
cpp_constructor_args='torch::nn::BatchNorm3dOptions(3).eps(1e-3).momentum(0.7)',
input_size=(2, 3, 4, 4, 4),
cudnn=True,
check_eval=True,
desc='momentum',
),
dict(
module_name='BatchNorm3d',
constructor_args=(3, 1e-3, 0.7, False),
cpp_constructor_args='torch::nn::BatchNorm3dOptions(3).eps(1e-3).momentum(0.7).affine(false)',
input_size=(2, 3, 4, 4, 4),
cudnn=True,
check_eval=True,
desc='not_affine',
),
dict(
module_name='BatchNorm3d',
constructor_args=(3, 1e-3, 0.7, True, False),
cpp_constructor_args='''torch::nn::BatchNorm3dOptions(3)
.eps(1e-3).momentum(0.7).affine(true).track_running_stats(false)''',
input_size=(2, 3, 4, 4, 4),
cudnn=True,
check_eval=True,
desc='not_tracking_stats',
),
dict(
module_name='BatchNorm3d',
constructor_args=(5, 1e-3, 0.3, False),
cpp_constructor_args='torch::nn::BatchNorm3dOptions(5).eps(1e-3).momentum(0.3).affine(false)',
input_size=(0, 5, 2, 2, 2),
cudnn=True,
check_eval=True,
desc='zero_batch',
),
dict(
module_name='InstanceNorm1d',
constructor_args=(3, 1e-3, 0.3),
cpp_constructor_args='torch::nn::InstanceNorm1dOptions(3).eps(1e-3).momentum(0.3)',
input_size=(4, 3, 15),
cudnn=True,
check_eval=True,
),
dict(
module_name='InstanceNorm1d',
constructor_args=(3, 1e-3, 0.3, False, True),
cpp_constructor_args='''torch::nn::InstanceNorm1dOptions(3)
.eps(1e-3).momentum(0.3).affine(false).track_running_stats(true)''',
input_size=(4, 3, 15),
cudnn=True,
check_eval=True,
desc='tracking_stats',
),
dict(
module_name='InstanceNorm2d',
constructor_args=(3, 1e-3, 0.3),
cpp_constructor_args='torch::nn::InstanceNorm2dOptions(3).eps(1e-3).momentum(0.3)',
input_size=(2, 3, 6, 6),
cudnn=True,
check_eval=True,
),
dict(
module_name='InstanceNorm2d',
constructor_args=(3, 1e-3, 0.3, False, True),
cpp_constructor_args='''torch::nn::InstanceNorm2dOptions(3)
.eps(1e-3).momentum(0.3).affine(false).track_running_stats(true)''',
input_size=(2, 3, 6, 6),
cudnn=True,
check_eval=True,
desc='tracking_stats',
),
dict(
module_name='InstanceNorm3d',
constructor_args=(3, 1e-3, 0.3),
cpp_constructor_args='torch::nn::InstanceNorm3dOptions(3).eps(1e-3).momentum(0.3)',
input_size=(2, 3, 4, 4, 4),
cudnn=True,
check_eval=True,
),
dict(
module_name='InstanceNorm3d',
constructor_args=(3, 1e-3, 0.3, False, True),
cpp_constructor_args='''torch::nn::InstanceNorm3dOptions(3)
.eps(1e-3).momentum(0.3).affine(false).track_running_stats(true)''',
input_size=(2, 3, 4, 4, 4),
cudnn=True,
check_eval=True,
desc='tracking_stats',
),
dict(
module_name='LayerNorm',
constructor_args=([5], 1e-3),
cpp_constructor_args='torch::nn::LayerNormOptions({5}).eps(1e-3)',
input_size=(4, 5, 5),
cudnn=True,
check_eval=True,
desc='1d_elementwise_affine',
),
dict(
module_name='LayerNorm',
constructor_args=([5], 1e-3, False),
cpp_constructor_args='torch::nn::LayerNormOptions({5}).eps(1e-3).elementwise_affine(false)',
input_size=(4, 5, 5),
cudnn=True,
check_eval=True,
desc='1d_no_elementwise_affine',
),
dict(
module_name='LayerNorm',
constructor_args=([2, 2, 5], 1e-3),
cpp_constructor_args='torch::nn::LayerNormOptions({2, 2, 5}).eps(1e-3)',
input_size=(4, 2, 2, 5),
cudnn=True,
check_eval=True,
desc='3d_elementwise_affine',
),
dict(
module_name='LayerNorm',
constructor_args=([2, 2, 5], 1e-3, False),
cpp_constructor_args='torch::nn::LayerNormOptions({2, 2, 5}).eps(1e-3).elementwise_affine(false)',
input_size=(4, 2, 2, 5),
cudnn=True,
check_eval=True,
desc='3d_no_elementwise_affine',
),
dict(
module_name='LayerNorm',
constructor_args=([56, 56, 56], 1e-5, False),
cpp_constructor_args='torch::nn::LayerNormOptions({56, 56, 56}).eps(1e-5).elementwise_affine(false)',
input_size=(4, 56, 56, 56),
cudnn=True,
check_eval=True,
gradcheck_fast_mode=True,
desc='3d_no_affine_large_feature',
),
dict(
module_name='LayerNorm',
constructor_args=([5], 1e-3),
cpp_constructor_args='torch::nn::LayerNormOptions({5}).eps(1e-3)',
input_size=(0, 5),
cudnn=True,
check_eval=True,
desc='1d_empty_elementwise_affine',
),
dict(
module_name='GroupNorm',
constructor_args=(3, 6, 1e-3),
cpp_constructor_args='torch::nn::GroupNormOptions(3, 6).eps(1e-3)',
input_size=(4, 6, 5),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='1d_affine',
),
dict(
module_name='GroupNorm',
constructor_args=(3, 12, 1e-3),
cpp_constructor_args='torch::nn::GroupNormOptions(3, 12).eps(1e-3)',
input_size=(4, 12),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='1d_affine_GN',
),
dict(
module_name='GroupNorm',
constructor_args=(1, 6, 1e-3),
cpp_constructor_args='torch::nn::GroupNormOptions(1, 6).eps(1e-3)',
input_size=(150, 6),
cudnn=True,
check_eval=True,
desc='1d_affine_large_batch', # For large batch_size
check_bfloat16=True,
test_cpu=False,
),
dict(
module_name='GroupNorm',
constructor_args=(5, 5, 1e-3, False),
cpp_constructor_args='torch::nn::GroupNormOptions(5, 5).eps(1e-3).affine(false)',
input_size=(4, 5, 5),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='1d_no_affine_IN', # this setting is equivalent with InstanceNormi
),
dict(
module_name='GroupNorm',
constructor_args=(1, 10, 1e-3, False),
cpp_constructor_args='torch::nn::GroupNormOptions(1, 10).eps(1e-3).affine(false)',
input_size=(4, 10),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='1d_no_affine_LN', # this setting is equivalent with LayerNorm
),
dict(
module_name='GroupNorm',
constructor_args=(3, 6, 1e-3),
cpp_constructor_args='torch::nn::GroupNormOptions(3, 6).eps(1e-3)',
input_size=(4, 6, 2, 3),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='2d_affine',
),
dict(
module_name='GroupNorm',
constructor_args=(3, 6, 1e-3),
cpp_constructor_args='torch::nn::GroupNormOptions(3, 6).eps(1e-3)',
input_size=(4, 6, 28, 28),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='2d_affine_large_feature',
test_cpu=False,
),
dict(
module_name='GroupNorm',
constructor_args=(3, 51, 1e-5, False),
cpp_constructor_args='torch::nn::GroupNormOptions(3, 51).eps(1e-5).affine(false)',
input_size=(2, 51, 28, 28),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='2d_no_affine_large_feature',
test_cpu=False,
),
dict(
module_name='GroupNorm',
constructor_args=(3, 3, 1e-3, False),
cpp_constructor_args='torch::nn::GroupNormOptions(3, 3).eps(1e-3).affine(false)',
input_size=(4, 3, 2, 3),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='2d_no_affine_IN', # this setting is equivalent with InstanceNorm
),
dict(
module_name='GroupNorm',
constructor_args=(1, 3, 1e-3, False),
cpp_constructor_args='torch::nn::GroupNormOptions(1, 3).eps(1e-3).affine(false)',
input_size=(4, 3, 2, 3),
cudnn=True,
check_eval=True,
check_bfloat16=True,
desc='2d_no_affine_LN', # this setting is equivalent with LayerNorm
),
dict(
module_name='Conv1d',
constructor_args=(4, 5, 3),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3)',
input_size=(2, 4, 10),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv1d',
constructor_args=(4, 5, 3, 2),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3).stride(2)',
input_size=(2, 4, 10),
cudnn=True,
desc='stride',
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv1d',
constructor_args=(4, 5, 3, 1, 1),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3).stride(1).padding(1)',
input_size=(2, 4, 10),
cudnn=True,
desc='pad1',
with_tf32=True,
tf32_precision=0.01,
),
dict(
module_name='Conv1d',
constructor_args=(4, 5, 5, 1, 2),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 5).stride(1).padding(2)',
input_size=(2, 4, 10),
cudnn=True,
desc='pad2',
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv1d',
constructor_args=(4, 4, 3, 1, 1),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 4, 3).stride(1).padding(1)',
input_size=(1, 4, 1),
cudnn=True,
desc='pad1size1',
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv1d',
constructor_args=(4, 4, 5, 1, 2),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 4, 5).stride(1).padding(2)',
input_size=(1, 4, 1),
cudnn=True,
desc='pad2size1',
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv1d',
constructor_args=(4, 5, 3),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3)',
input_size=(0, 4, 10),
cudnn=True,
desc='zero_batch',
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv1d_dilated',
constructor=lambda: nn.Conv1d(4, 5, kernel_size=3, dilation=2),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3).dilation(2)',
input_size=(2, 4, 10),
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv1d_groups',
constructor=lambda: nn.Conv1d(4, 6, kernel_size=3, groups=2),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 6, 3).groups(2)',
input_size=(2, 4, 6),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv1d_pad_valid',
constructor=lambda: nn.Conv1d(4, 5, 3, padding="valid"),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3).padding(torch::kValid)',
input_size=(2, 4, 10),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv1d_pad_same',
constructor=lambda: nn.Conv1d(4, 5, 3, padding="same"),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3).padding(torch::kSame)',
input_size=(2, 4, 10),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv1d_pad_same2',
constructor=lambda: nn.Conv1d(4, 5, 4, padding="same"),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 4).padding(torch::kSame)',
input_size=(2, 4, 10),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv1d_pad_same_dilated',
constructor=lambda: nn.Conv1d(4, 5, 4, padding="same", dilation=2),
cpp_constructor_args='torch::nn::Conv1dOptions(4, 5, 3).padding(torch::kSame).dilation(2)',
input_size=(2, 4, 10),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='ConvTranspose1d',
constructor=lambda: nn.ConvTranspose1d(3, 4, kernel_size=3, stride=(3,), padding=1, output_padding=(1,)),
cpp_constructor_args='torch::nn::ConvTranspose1dOptions(3, 4, 3).stride(3).padding(1).output_padding(1)',
cudnn=True,
input_size=(1, 3, 7),
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='ConvTranspose1d',
constructor_args=(3, 4, 3, 2, 1, 1, 1, False),
cpp_constructor_args='''torch::nn::ConvTranspose1dOptions(3, 4, 3)
.stride(2).padding(1).output_padding(1).groups(1).bias(false)''',
input_size=(1, 3, 6),
cudnn=True,
desc='no_bias',
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='ConvTranspose1d',
constructor_args=(3, 4, 3, 2, 1, 1, 1, True, 2),
cpp_constructor_args='''torch::nn::ConvTranspose1dOptions(3, 4, 3)
.stride(2).padding(1).output_padding(1).groups(1).bias(true).dilation(2)''',
input_size=(1, 3, 6),
cudnn=True,
desc='dilated',
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='ConvTranspose1d_groups',
constructor=lambda: nn.ConvTranspose1d(4, 6, 3, stride=(3,), padding=1, output_padding=(1,), groups=2),
cpp_constructor_args='''torch::nn::ConvTranspose1dOptions(4, 6, 3)
.stride(3).padding(1).output_padding(1).groups(2)''',
cudnn=True,
input_size=(2, 4, 7),
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='MaxPool1d',
constructor_args=(4,),
cpp_constructor_args='torch::nn::MaxPool1dOptions(4)',
input_size=(2, 10, 4),
),
dict(
module_name='MaxPool1d',
constructor_args=(4, 4),
cpp_constructor_args='torch::nn::MaxPool1dOptions(4).stride(4)',
input_size=(2, 10, 4),
desc='stride',
),
dict(
module_name='Conv2d',
constructor_args=(3, 4, (3, 2)),
cpp_constructor_args='torch::nn::Conv2dOptions(3, 4, {3, 2})',
input_size=(2, 3, 7, 5),
cudnn=True,
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv2d',
constructor_args=(3, 4, (3, 3), (2, 2)),
cpp_constructor_args='torch::nn::Conv2dOptions(3, 4, {3, 3}).stride({2, 2})',
input_size=(2, 3, 6, 6),
cudnn=True,
desc='strided',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv2d',
constructor_args=(3, 4, (3, 3), (2, 2), (1, 1)),
cpp_constructor_args='torch::nn::Conv2dOptions(3, 4, {3, 3}).stride({2, 2}).padding({1, 1})',
input_size=(2, 3, 6, 6),
cudnn=True,
desc='padding',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv2d',
constructor_args=(3, 2, (3, 3), (2, 2), (1, 1), (2, 2)),
cpp_constructor_args='torch::nn::Conv2dOptions(3, 2, {3, 3}).stride({2, 2}).padding({1, 1}).dilation({2, 2})',
input_size=(2, 3, 8, 8),
cudnn=True,
desc='dilated',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='Conv2d',
constructor_args=(3, 4, (3, 2), 1, 0, 1, 1, False),
cpp_constructor_args='''torch::nn::Conv2dOptions(3, 4, {3, 2})
.stride(1).padding(0).dilation(1).groups(1).bias(false)''',
input_size=(2, 3, 6, 5),
cudnn=True,
desc='no_bias',
check_with_long_tensor=True,
with_tf32=True,
),
dict(
module_name='Conv2d',
constructor_args=(3, 4, (3, 2)),
cpp_constructor_args='torch::nn::Conv2dOptions(3, 4, {3, 2})',
input_size=(0, 3, 7, 5),
cudnn=True,
desc='zero_batch',
check_with_long_tensor=True,
with_tf32=True,
),
dict(
fullname='Conv2d_groups',
constructor=lambda: nn.Conv2d(4, 6, (3, 2), groups=2),
cpp_constructor_args='torch::nn::Conv2dOptions(4, 6, {3, 2}).groups(2)',
input_size=(2, 4, 6, 5),
cudnn=True,
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv2d_groups_thnn',
constructor=lambda: nn.Conv2d(4, 6, (3, 2), groups=2),
cpp_constructor_args='torch::nn::Conv2dOptions(4, 6, {3, 2}).groups(2)',
input_size=(2, 4, 6, 5),
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv2d_pad_valid',
constructor=lambda: nn.Conv2d(2, 4, (3, 4), padding="valid"),
cpp_constructor_args='torch::nn::Conv2dOptions(2, 4, {3, 4}).padding(torch::kValid)',
input_size=(2, 2, 6, 5),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv2d_pad_same',
constructor=lambda: nn.Conv2d(2, 4, (3, 4), padding="same"),
cpp_constructor_args='torch::nn::Conv2dOptions(2, 4, {3, 4}).padding(torch::kSame)',
input_size=(2, 2, 6, 5),
cudnn=True,
with_tf32=True,
tf32_precision=0.01,
),
dict(
fullname='Conv2d_pad_same_dilated',
constructor=lambda: nn.Conv2d(2, 4, (3, 4), padding="same", dilation=2),
cpp_constructor_args='torch::nn::Conv2dOptions(2, 4, {3, 4}).padding(torch::kSame).dilation(2)',
input_size=(2, 2, 6, 5),
cudnn=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='ConvTranspose2d',
constructor_args=(3, 4, 3, (3, 2), 1, (1, 1)),
cpp_constructor_args='''torch::nn::ConvTranspose2dOptions(3, 4, 3)
.stride({3, 2}).padding(1).output_padding({1, 1})''',
cudnn=True,
input_size=(1, 3, 7, 6),
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.01,
),
dict(
module_name='ConvTranspose2d',
constructor_args=(3, 4, 3, (2, 3), 1, (1, 1), 1, False, (2, 2)),
cpp_constructor_args='''torch::nn::ConvTranspose2dOptions(3, 4, 3)
.stride({2, 3})
.padding(1)
.output_padding({1, 1})
.groups(1)
.bias(false)
.dilation({2, 2})''',
input_size=(1, 3, 6, 7),
cudnn=True,
desc='dilated',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='ConvTranspose2d',
constructor_args=(3, 4, 3, (2, 3), 1, (1, 1), 1, False),
cpp_constructor_args='''torch::nn::ConvTranspose2dOptions(3, 4, 3)
.stride({2, 3}).padding(1).output_padding({1, 1}).groups(1).bias(false)''',
input_size=(1, 3, 6, 7),
cudnn=True,
desc='no_bias',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='ConvTranspose2d_groups',
constructor=lambda: nn.ConvTranspose2d(2, 4, (2, 3), groups=2),
cpp_constructor_args='torch::nn::ConvTranspose2dOptions(2, 4, {2, 3}).groups(2)',
input_size=(1, 2, 4, 5),
cudnn=True,
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.01,
),
dict(
fullname='Conv2d_depthwise',
constructor=lambda: nn.Conv2d(4, 4, (3, 3), groups=4),
cpp_constructor_args='torch::nn::Conv2dOptions(4, 4, {3, 3}).groups(4)',
input_size=(2, 4, 6, 6),
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv2d_depthwise_with_multiplier',
constructor=lambda: nn.Conv2d(4, 8, (3, 3), groups=4),
cpp_constructor_args='torch::nn::Conv2dOptions(4, 8, {3, 3}).groups(4)',
input_size=(2, 4, 6, 6),
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv2d_depthwise_strided',
constructor=lambda: nn.Conv2d(4, 4, (3, 3), stride=(2, 2), groups=4),
cpp_constructor_args='torch::nn::Conv2dOptions(4, 4, {3, 3}).stride({2, 2}).groups(4)',
input_size=(2, 4, 6, 6),
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv2d_depthwise_padded',
constructor=lambda: nn.Conv2d(4, 4, (3, 3), padding=(1, 1), groups=4),
cpp_constructor_args='torch::nn::Conv2dOptions(4, 4, {3, 3}).padding({1, 1}).groups(4)',
input_size=(2, 4, 6, 6),
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv2d_depthwise_dilated',
constructor=lambda: nn.Conv2d(4, 4, (2, 2), dilation=(2, 2), groups=4),
cpp_constructor_args='torch::nn::Conv2dOptions(4, 4, {2, 2}).dilation({2, 2}).groups(4)',
input_size=(2, 4, 5, 5),
with_tf32=True,
tf32_precision=0.005,
),
dict(
module_name='MaxPool2d',
constructor_args=((3, 3), (2, 2), (1, 1)),
cpp_constructor_args='torch::nn::MaxPool2dOptions({3, 3}).stride({2, 2}).padding({1, 1})',
input_size=(3, 7, 7),
desc='3d_input'
),
dict(
module_name='MaxPool2d',
constructor_args=((3, 3), (2, 2), (1, 1)),
cpp_constructor_args='torch::nn::MaxPool2dOptions({3, 3}).stride({2, 2}).padding({1, 1})',
input_size=(1, 3, 7, 7),
check_with_channels_last=True,
desc='4d_input'
),
dict(
module_name='AvgPool1d',
constructor_args=(2,),
cpp_constructor_args='torch::nn::AvgPool1dOptions(2)',
input_size=(2, 3, 6),
),
dict(
module_name='AvgPool1d',
constructor_args=((2,), (2,)),
cpp_constructor_args='torch::nn::AvgPool1dOptions(2).stride(2)',
input_size=(2, 3, 6),
desc='stride',
),
dict(
module_name='AvgPool1d',
constructor_args=(2, 2, 1),
cpp_constructor_args='torch::nn::AvgPool1dOptions(2).stride(2).padding(1)',
input_size=(2, 3, 6),
desc='stride_pad',
),
dict(
module_name='AvgPool1d',
constructor_args=(2,),
cpp_constructor_args='torch::nn::AvgPool1dOptions(2)',
input_size=(3, 6),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim',
),
dict(
module_name='AvgPool2d',
constructor_args=((2, 2),),
cpp_constructor_args='torch::nn::AvgPool2dOptions({2, 2})',
input_size=(2, 3, 6, 6),
),
dict(
module_name='AvgPool2d',
constructor_args=((2, 2),),
cpp_constructor_args='torch::nn::AvgPool2dOptions({2, 2})',
input_size=(3, 6, 6),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim'
),
dict(
module_name='AvgPool2d',
constructor_args=((2, 2), (2, 2)),
cpp_constructor_args='torch::nn::AvgPool2dOptions({2, 2}).stride({2, 2})',
input_size=(2, 3, 6, 6),
desc='stride',
),
dict(
module_name='AvgPool2d',
constructor_args=((2, 2), (2, 2), (1, 1)),
cpp_constructor_args='torch::nn::AvgPool2dOptions({2, 2}).stride({2, 2}).padding({1, 1})',
input_size=(2, 3, 6, 6),
desc='stride_pad',
),
dict(
fullname='AvgPool2d_divisor',
constructor=lambda: nn.AvgPool2d((2, 2), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool2dOptions({2, 2}).divisor_override(1)',
input_size=(2, 3, 6, 6),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool2d_divisor_stride',
constructor=lambda: nn.AvgPool2d((2, 2), (2, 2), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool2dOptions({2, 2}).stride({2, 2}).divisor_override(1)',
input_size=(2, 3, 6, 6),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool2d_divisor_stride_pad',
constructor=lambda: nn.AvgPool2d((2, 2), (2, 2), (1, 1), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool2dOptions({2, 2}).stride({2, 2}).padding({1, 1}).divisor_override(1)',
input_size=(2, 3, 6, 6),
check_with_long_tensor=True,
),
dict(
module_name='LPPool2d',
constructor_args=(2, 2, 2),
cpp_constructor_args='torch::nn::LPPool2dOptions(2, 2).stride(2)',
input_size=(1, 3, 7, 7),
),
dict(
module_name='LPPool2d',
constructor_args=(1.5, 2),
cpp_constructor_args='torch::nn::LPPool2dOptions(1.5, 2)',
input_fn=lambda: torch.rand(1, 3, 7, 7),
desc='norm',
),
dict(
module_name='LPPool1d',
constructor_args=(1.5, 2),
cpp_constructor_args='torch::nn::LPPool1dOptions(1.5, 2)',
input_fn=lambda: torch.rand(1, 3, 7),
desc='norm',
),
dict(
module_name='LPPool1d',
constructor_args=(2, 2, 3),
cpp_constructor_args='torch::nn::LPPool1dOptions(2, 2).stride(3)',
input_size=(1, 3, 7),
),
dict(
module_name='LocalResponseNorm',
constructor_args=(3, ),
cpp_constructor_args='torch::nn::LocalResponseNormOptions(3)',
input_size=(1, 5, 7),
desc='1d',
),
dict(
module_name='LocalResponseNorm',
constructor_args=(2, ),
cpp_constructor_args='torch::nn::LocalResponseNormOptions(2)',
input_size=(1, 5, 7, 7),
desc='2d_uneven_pad',
),
dict(
module_name='LocalResponseNorm',
constructor_args=(1, 1., 0.5, 2.),
cpp_constructor_args='torch::nn::LocalResponseNormOptions(1).alpha(1.).beta(0.5).k(2.)',
input_size=(1, 5, 7, 7, 7),
desc='3d_custom_params',
),
dict(
module_name='ReflectionPad1d',
constructor_args=((1, 2),),
cpp_constructor_args='torch::nn::ReflectionPad1dOptions({1, 2})',
input_size=(2, 3, 8),
),
dict(
module_name='ReflectionPad1d',
constructor_args=((1, 2),),
cpp_constructor_args='torch::nn::ReflectionPad1dOptions({1, 2})',
input_size=(3, 8),
reference_fn=single_batch_reference_fn,
desc='batch',
),
dict(
module_name='ReflectionPad1d',
constructor_args=((1, 2),),
cpp_constructor_args='torch::nn::ReflectionPad1dOptions({1, 2})',
input_fn=lambda: torch.rand(2, 3, 8, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ReflectionPad2d',
constructor_args=((1, 2, 3, 4),),
cpp_constructor_args='torch::nn::ReflectionPad2dOptions({1, 2, 3, 4})',
input_size=(2, 3, 8, 8),
),
dict(
module_name='ReflectionPad2d',
constructor_args=((1, 2, 3, 4),),
cpp_constructor_args='torch::nn::ReflectionPad2dOptions({1, 2, 3, 4})',
input_fn=lambda: torch.rand(2, 3, 8, 8, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ReflectionPad3d',
constructor_args=((1, 2, 0, 2, 1, 2),),
cpp_constructor_args='torch::nn::ReflectionPad3dOptions({1, 2, 0, 2, 1, 2})',
input_size=(2, 3, 8, 8, 8),
),
dict(
module_name='ReflectionPad3d',
constructor_args=((1, 2, 0, 2, 1, 2),),
cpp_constructor_args='torch::nn::ReflectionPad3dOptions({1, 2, 0, 2, 1, 2})',
input_fn=lambda: torch.rand(2, 3, 8, 8, 8, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ReplicationPad1d',
constructor_args=((1, 2),),
cpp_constructor_args='torch::nn::ReplicationPad1dOptions({1, 2})',
input_size=(2, 3, 4),
),
dict(
module_name='ReplicationPad1d',
constructor_args=((1, 2),),
cpp_constructor_args='torch::nn::ReplicationPad1dOptions({1, 2})',
input_size=(3, 4),
reference_fn=single_batch_reference_fn,
desc='batch',
),
dict(
module_name='ReplicationPad1d',
constructor_args=((1, 2),),
cpp_constructor_args='torch::nn::ReplicationPad1dOptions({1, 2})',
input_fn=lambda: torch.rand(2, 3, 4, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ReplicationPad2d',
constructor_args=((1, 2, 3, 4),),
cpp_constructor_args='torch::nn::ReplicationPad2dOptions({1, 2, 3, 4})',
input_size=(2, 3, 4, 4),
),
dict(
module_name='ReplicationPad2d',
constructor_args=((1, 2, 3, 4),),
cpp_constructor_args='torch::nn::ReplicationPad2dOptions({1, 2, 3, 4})',
input_fn=lambda: torch.rand(2, 3, 4, 4, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ZeroPad2d',
constructor_args=((1, 2, 3, 4),),
cpp_constructor_args='torch::nn::ZeroPad2dOptions({1, 2, 3, 4})',
input_size=(2, 3, 4, 4),
),
dict(
module_name='ZeroPad2d',
constructor_args=((1, 2, 3, 4),),
cpp_constructor_args='torch::nn::ZeroPad2dOptions({1, 2, 3, 4})',
input_fn=lambda: torch.rand(2, 3, 4, 4, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ZeroPad2d',
constructor_args=((-1, -1, -1, -2),),
cpp_constructor_args='torch::nn::ZeroPad2dOptions({-1, -1, -1, -2})',
input_size=(2, 3, 4, 4),
desc='negative_dims'
),
dict(
module_name='ConstantPad1d',
constructor_args=((1, 2), 2.),
cpp_constructor_args='torch::nn::ConstantPad1dOptions({1, 2}, 2.)',
input_size=(2, 3, 4),
),
dict(
module_name='ConstantPad1d',
constructor_args=((1, 2), 2.),
cpp_constructor_args='torch::nn::ConstantPad1dOptions({1, 2}, 2.)',
input_size=(3, 4),
reference_fn=single_batch_reference_fn,
desc='batch',
),
dict(
module_name='ConstantPad1d',
constructor_args=((1, 2), 2.),
cpp_constructor_args='torch::nn::ConstantPad1dOptions({1, 2}, 2.)',
input_fn=lambda: torch.rand(2, 3, 4, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ConstantPad2d',
constructor_args=((1, 2, 3, 4), 2.),
cpp_constructor_args='torch::nn::ConstantPad2dOptions({1, 2, 3, 4}, 2.)',
input_size=(2, 3, 4, 4),
),
dict(
module_name='ConstantPad2d',
constructor_args=((1, 2, 3, 4), 2.),
cpp_constructor_args='torch::nn::ConstantPad2dOptions({1, 2, 3, 4}, 2.)',
input_size=(3, 4, 4),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim'
),
dict(
module_name='ConstantPad2d',
constructor_args=((1, 2, 3, 4), 2.),
cpp_constructor_args='torch::nn::ConstantPad2dOptions({1, 2, 3, 4}, 2.)',
input_fn=lambda: torch.rand(2, 3, 4, 4, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='ConstantPad3d',
constructor_args=((1, 2, 3, 4, 1, 0), 2.),
cpp_constructor_args='torch::nn::ConstantPad3dOptions({1, 2, 3, 4, 1, 0}, 2.)',
input_size=(2, 3, 4, 4, 5),
),
dict(
module_name='ConstantPad3d',
constructor_args=((1, 2, 3, 4, 1, 0), 2.),
cpp_constructor_args='torch::nn::ConstantPad3dOptions({1, 2, 3, 4, 1, 0}, 2.)',
input_size=(3, 4, 4, 5),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim'
),
dict(
module_name='ConstantPad3d',
constructor_args=((1, 2, 3, 4, 1, 0), 2.),
cpp_constructor_args='torch::nn::ConstantPad3dOptions({1, 2, 3, 4, 1, 0}, 2.)',
input_fn=lambda: torch.rand(2, 3, 4, 4, 5, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='Conv3d',
constructor_args=(2, 3, (2, 3, 2)),
cpp_constructor_args='torch::nn::Conv3dOptions(2, 3, {2, 3, 2})',
input_size=(1, 2, 4, 5, 4),
cudnn=True,
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='Conv3d',
constructor_args=(2, 3, (2, 3, 4), 1, 0, 1, 1, False),
cpp_constructor_args='''torch::nn::Conv3dOptions(2, 3, {2, 3, 4})
.stride(1).padding(0).dilation(1).groups(1).bias(false)''',
input_size=(1, 2, 3, 4, 5),
cudnn=True,
desc='no_bias',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='Conv3d',
constructor_args=(2, 3, (1, 1, 1), 1, 0, 1, 1, False),
cpp_constructor_args='''torch::nn::Conv3dOptions(2, 3, {2, 3, 4})
.stride(1).padding(0).dilation(1).groups(1).bias(false)''',
input_size=(1, 2, 3, 4, 5),
cudnn=True,
desc='1x1x1_no_bias',
check_with_long_tensor=False,
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='Conv3d',
constructor_args=(3, 4, 2, 2),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, 2).stride(2)',
input_size=(2, 3, 5, 5, 5),
cudnn=True,
desc='stride',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='Conv3d',
constructor_args=(3, 4, 2, 2, 1),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, 2).stride(2).padding(1)',
input_size=(2, 3, 5, 5, 5),
cudnn=True,
desc='stride_padding',
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='Conv3d',
constructor_args=(3, 4, (2, 3, 4)),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, {2, 3, 4})',
input_size=(0, 3, 3, 4, 5),
cudnn=True,
check_with_long_tensor=True,
desc='zero_batch',
with_tf32=True,
),
dict(
fullname='Conv3d_groups',
constructor=lambda: nn.Conv3d(2, 4, kernel_size=3, groups=2),
cpp_constructor_args='torch::nn::Conv3dOptions(2, 4, 3).groups(2)',
input_size=(1, 2, 4, 5, 4),
cudnn=True,
check_with_long_tensor=True,
with_tf32=True,
tf32_precision=0.005,
),
dict(
fullname='Conv3d_dilated',
constructor=lambda: nn.Conv3d(3, 4, kernel_size=2, dilation=2),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, 2).dilation(2)',
input_size=(2, 3, 5, 5, 5),
with_tf32=True,
tf32_precision=0.05,
),
dict(
fullname='Conv3d_dilated_strided',
constructor=lambda: nn.Conv3d(3, 4, kernel_size=2, dilation=2, stride=2),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, 2).dilation(2).stride(2)',
input_size=(2, 3, 5, 5, 5),
with_tf32=True,
tf32_precision=0.05
),
dict(
fullname='Conv3d_pad_valid',
constructor=lambda: nn.Conv3d(3, 4, (2, 3, 4), padding="valid"),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, {2, 3, 4}).padding(torch::kValid)',
input_size=(2, 3, 6, 5, 4),
cudnn=True,
with_tf32=True,
tf32_precision=0.05,
),
dict(
fullname='Conv3d_pad_same',
constructor=lambda: nn.Conv3d(3, 4, (2, 3, 4), padding="same"),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, {2, 3, 4}).padding(torch::kSame)',
input_size=(2, 3, 6, 5, 4),
cudnn=True,
with_tf32=True,
tf32_precision=0.05,
),
dict(
fullname='Conv3d_pad_same_dilated',
constructor=lambda: nn.Conv3d(3, 4, (2, 3, 4), padding="same", dilation=2),
cpp_constructor_args='torch::nn::Conv3dOptions(3, 4, {2, 3, 4}).padding(torch::kSame).dilation(2)',
input_size=(2, 3, 6, 5, 4),
cudnn=True,
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='ConvTranspose3d',
constructor_args=(2, 3, (2, 3, 2)),
cpp_constructor_args='torch::nn::ConvTranspose3dOptions(2, 3, {2, 3, 2})',
cudnn=True,
input_size=(1, 2, 4, 5, 4),
with_tf32=True,
tf32_precision=0.05
),
dict(
module_name='ConvTranspose3d',
constructor_args=(2, 3, (2, 3, 2), 1, 0, 0, 1, True, (2, 2, 2)),
cpp_constructor_args='''torch::nn::ConvTranspose3dOptions(2, 3, {2, 3, 2})
.stride(1).padding(0).output_padding(0).groups(1).bias(true).dilation({2, 2, 2})''',
cudnn=True,
input_size=(1, 2, 4, 5, 4),
desc='dilated',
with_tf32=True,
tf32_precision=0.05
),
dict(
module_name='MaxPool3d',
constructor_args=((2, 2, 2),),
cpp_constructor_args='torch::nn::MaxPool3dOptions({2, 2, 2})',
input_size=(2, 3, 5, 5, 5),
),
dict(
module_name='MaxPool3d',
constructor_args=(2, (2, 2, 2)),
cpp_constructor_args='torch::nn::MaxPool3dOptions(2).stride({2, 2, 2})',
input_size=(2, 3, 5, 5, 5),
desc='stride',
),
dict(
module_name='MaxPool3d',
constructor_args=(2, 2, (1, 1, 1)),
cpp_constructor_args='torch::nn::MaxPool3dOptions(2).stride(2).padding({1, 1, 1})',
input_size=(2, 3, 5, 5, 5),
desc='stride_padding',
),
dict(
module_name='AvgPool3d',
constructor_args=((2, 2, 2),),
cpp_constructor_args='torch::nn::AvgPool3dOptions({2, 2, 2})',
input_size=(2, 3, 4, 4, 4),
),
dict(
module_name='AvgPool3d',
constructor_args=((2, 2, 2),),
cpp_constructor_args='torch::nn::AvgPool3dOptions({2, 2, 2})',
input_size=(3, 4, 4, 4),
desc='no_batch_dim',
),
dict(
module_name='AvgPool3d',
constructor_args=(2, (2, 2, 2)),
cpp_constructor_args='torch::nn::AvgPool3dOptions(2).stride({2, 2, 2})',
input_size=(2, 3, 5, 5, 5),
desc='stride',
),
dict(
module_name='AvgPool3d',
constructor_args=(2, 2, (1, 1, 1)),
cpp_constructor_args='torch::nn::AvgPool3dOptions(2).stride(2).padding({1, 1, 1})',
input_size=(2, 3, 5, 5, 5),
desc='stride_pad',
),
dict(
module_name='AvgPool3d',
constructor_args=(4, 2, (1, 2, 1)),
cpp_constructor_args='torch::nn::AvgPool3dOptions(4).stride(2).padding({1, 2, 1})',
input_size=(2, 3, 5, 5, 5),
desc='stride_pad_gpu_fixedkw_output',
),
dict(
module_name='AvgPool3d',
constructor_args=((2, 4, 8), 1, (1, 1, 2)),
cpp_constructor_args='torch::nn::AvgPool3dOptions({2, 4, 8}).stride(1).padding({1, 1, 2})',
input_size=(2, 3, 2, 4, 8),
desc='stride_pad_gpu_general_output',
),
dict(
module_name='AvgPool3d',
constructor_args=(3, 1, 0),
cpp_constructor_args='torch::nn::AvgPool3dOptions(3).stride(1).padding(0)',
input_size=(2, 3, 4, 4, 4),
desc='stride1_pad0_gpu_input',
),
dict(
module_name='AvgPool3d',
constructor_args=(2, 2, (1, 1, 1)),
cpp_constructor_args='torch::nn::AvgPool3dOptions(2).stride(2).padding({1, 1, 1})',
input_size=(2, 3, 4, 4, 4),
desc='stride_pad_gpu_input_nooverlap',
),
dict(
fullname='AvgPool3d_divisor',
constructor=lambda: nn.AvgPool3d((2, 2, 2), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool3dOptions({2, 2, 2}).divisor_override(1)',
input_size=(2, 3, 4, 4, 4),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool3d_divisor_stride',
constructor=lambda: nn.AvgPool3d(2, (2, 2, 2), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool3dOptions(2).stride({2, 2, 2}).divisor_override(1)',
input_size=(2, 3, 5, 5, 5),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool3d_divisor_stride_pad',
constructor=lambda: nn.AvgPool3d(2, 2, (1, 1, 1), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool3dOptions(2).stride(2).padding({1, 1, 1}).divisor_override(1)',
input_size=(2, 3, 5, 5, 5),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool3d_divisor_stride_pad_gpu_fixedkw_output',
constructor=lambda: nn.AvgPool3d(4, 2, (1, 2, 1), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool3dOptions(4).stride(2).padding({1, 2, 1}).divisor_override(1)',
input_size=(2, 3, 5, 5, 5),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool3d_divisor_stride_pad_gpu_general_output',
constructor=lambda: nn.AvgPool3d((2, 4, 8), 1, (1, 1, 2), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool3dOptions({2, 4, 8}).stride(1).padding({1, 1, 2}).divisor_override(1)',
input_size=(2, 3, 2, 4, 8),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool3d_divisor_stride1_pad0_gpu_input',
constructor=lambda: nn.AvgPool3d(3, 1, 0, divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool3dOptions(3).stride(1).padding(0).divisor_override(1)',
input_size=(2, 3, 4, 4, 4),
check_with_long_tensor=True,
),
dict(
fullname='AvgPool3d_divisor_stride_pad_gpu_input_nooverlap',
constructor=lambda: nn.AvgPool3d(2, 2, (1, 1, 1), divisor_override=1),
cpp_constructor_args='torch::nn::AvgPool3dOptions(2).stride(2).padding({1, 1, 1}).divisor_override(1)',
input_size=(2, 3, 4, 4, 4),
check_with_long_tensor=True,
),
dict(
module_name='ReplicationPad3d',
constructor_args=((1, 2, 3, 3, 2, 1),),
cpp_constructor_args='torch::nn::ReplicationPad3dOptions({1, 2, 3, 3, 2, 1})',
input_size=(2, 3, 2, 2, 2),
),
dict(
module_name='ReplicationPad3d',
constructor_args=((1, 2, 3, 3, 2, 1),),
cpp_constructor_args='torch::nn::ReplicationPad3dOptions({1, 2, 3, 3, 2, 1})',
input_fn=lambda: torch.rand(2, 3, 2, 2, 2, dtype=torch.complex128, requires_grad=True),
skip_half=True,
desc='complex'
),
dict(
module_name='Embedding',
constructor_args=(4, 3),
cpp_constructor_args='torch::nn::EmbeddingOptions(4, 3)',
input_fn=lambda: torch.empty(2, 3, dtype=torch.long).random_(4),
check_gradgrad=False,
),
dict(
module_name='EmbeddingBag',
constructor_args=(4, 3),
cpp_constructor_args='torch::nn::EmbeddingBagOptions(4, 3)',
input_fn=lambda: torch.empty(2, 3, dtype=torch.long).random_(4),
check_gradgrad=False,
desc='mean',
),
dict(
module_name='EmbeddingBag',
constructor_args=(4, 3, None, 2., False, 'sum'),
cpp_constructor_args='''torch::nn::EmbeddingBagOptions(4, 3)
.max_norm(c10::nullopt).norm_type(2.).scale_grad_by_freq(false).mode(torch::kSum)''',
input_fn=lambda: torch.empty(2, 3, dtype=torch.long).random_(4),
check_gradgrad=False,
desc='sum',
),
dict(
module_name='EmbeddingBag',
constructor_args=(4, 3, None, 2., False, 'max'),
cpp_constructor_args='''torch::nn::EmbeddingBagOptions(4, 3)
.max_norm(c10::nullopt).norm_type(2.).scale_grad_by_freq(false).mode(torch::kMax)''',
input_fn=lambda: torch.empty(2, 3, dtype=torch.long).random_(4),
check_gradgrad=False,
desc='max',
),
dict(
fullname='EmbeddingBag_mean_padding_idx',
constructor=lambda: nn.EmbeddingBag(4, 3, padding_idx=1),
cpp_constructor_args='torch::nn::EmbeddingBagOptions(4, 3).padding_idx(1)',
input_fn=lambda: torch.stack([torch.randperm(3), torch.randperm(3)]),
check_gradgrad=False,
),
dict(
fullname='EmbeddingBag_sum_padding_idx',
constructor=lambda: nn.EmbeddingBag(4, 3, None, 2., False, 'sum', padding_idx=1),
cpp_constructor_args='''torch::nn::EmbeddingBagOptions(4, 3)
.max_norm(c10::nullopt).norm_type(2.).scale_grad_by_freq(false).mode(torch::kSum).padding_idx(1)''',
input_fn=lambda: torch.stack([torch.randperm(3), torch.randperm(3)]),
check_gradgrad=False,
),
dict(
fullname='EmbeddingBag_max_padding_idx',
constructor=lambda: nn.EmbeddingBag(4, 3, None, 2., False, 'max', padding_idx=1),
cpp_constructor_args='''torch::nn::EmbeddingBagOptions(4, 3)
.max_norm(c10::nullopt).norm_type(2.).scale_grad_by_freq(false).mode(torch::kMax).padding_idx(1)''',
input_fn=lambda: torch.stack([torch.randperm(3), torch.randperm(3)]),
check_gradgrad=False,
),
dict(
fullname='EmbeddingBag_sparse',
constructor=lambda: nn.EmbeddingBag(4, 3, sparse=True),
cpp_constructor_args='torch::nn::EmbeddingBagOptions(4, 3).sparse(true)',
input_fn=lambda: torch.randperm(2).repeat(1, 2),
check_gradgrad=False,
has_sparse_gradients=True,
),
dict(
constructor=lambda: nn.Embedding(4, 3, sparse=True),
cpp_constructor_args='torch::nn::EmbeddingOptions(4, 3).sparse(true)',
input_fn=lambda: torch.randperm(2).repeat(1, 2),
fullname='Embedding_sparse',
check_gradgrad=False,
has_sparse_gradients=True,
),
dict(
module_name='PixelShuffle',
constructor_args=(3,),
cpp_constructor_args='torch::nn::PixelShuffleOptions(3)',
input_size=(1, 9, 4, 4),
),
dict(
module_name='PixelUnshuffle',
constructor_args=(3,),
cpp_constructor_args='torch::nn::PixelUnshuffleOptions(3)',
input_size=(1, 1, 12, 12),
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12})).scale_factor(c10::nullopt).mode(torch::kNearest)''',
input_size=(1, 2, 4),
fullname='interpolate_nearest_1d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12})).scale_factor(c10::nullopt).mode(torch::kNearest)''',
input_size=(0, 2, 4),
fullname='interpolate_nearest_1d_zero_dim',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(12, ), scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12})).scale_factor(c10::nullopt).mode(torch::kNearest)''',
input_size=(1, 2, 3),
fullname='interpolate_nearest_tuple_1d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=4., mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt).scale_factor(std::vector<double>({4.})).mode(torch::kNearest)''',
input_size=(1, 2, 4),
fullname='interpolate_nearest_scale_1d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='linear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12}))
.scale_factor(c10::nullopt)
.mode(torch::kLinear)
.align_corners(false)''',
input_size=(1, 2, 4),
fullname='interpolate_linear_1d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(4, ), scale_factor=None, mode='linear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({4}))
.scale_factor(c10::nullopt)
.mode(torch::kLinear)
.align_corners(false)''',
input_size=(1, 2, 3),
fullname='interpolate_linear_tuple_1d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=4., mode='linear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({4.}))
.mode(torch::kLinear)
.align_corners(false)''',
input_size=(1, 2, 4),
fullname='interpolate_linear_scale_1d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='linear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12}))
.scale_factor(c10::nullopt)
.mode(torch::kLinear)
.align_corners(false)''',
input_size=(0, 2, 4),
fullname='interpolate_linear_1d_zero_dim',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='linear', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12}))
.scale_factor(c10::nullopt)
.mode(torch::kLinear)
.align_corners(true)''',
input_size=(1, 2, 4),
fullname='interpolate_linear_1d_align_corners',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=4., mode='linear', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({4.}))
.mode(torch::kLinear)
.align_corners(true)''',
input_size=(1, 2, 4),
fullname='interpolate_linear_scale_1d_align_corners',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=2, scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({2, 2}))
.scale_factor(c10::nullopt)
.mode(torch::kNearest)''',
input_size=(1, 128, 1, 1),
fullname='interpolate_nearest_2d_launch_configs',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kNearest)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_nearest_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(12, 16), scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 16}))
.scale_factor(c10::nullopt)
.mode(torch::kNearest)''',
input_size=(1, 2, 3, 4),
fullname='interpolate_nearest_tuple_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=4., mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({4., 4.}))
.mode(torch::kNearest)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_nearest_scale_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kNearest)''',
input_size=(0, 2, 4, 4),
fullname='interpolate_nearest_2d_zero_dim',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='bilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kBilinear)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bilinear_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='bilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kBilinear)
.align_corners(false)''',
input_size=(0, 2, 4, 4),
fullname='interpolate_bilinear_2d_zero_dim',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(4, 6), scale_factor=None,
mode='bilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({4, 6}))
.scale_factor(c10::nullopt)
.mode(torch::kBilinear)
.align_corners(false)''',
input_size=(1, 2, 2, 3),
fullname='interpolate_bilinear_tuple_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=4.,
mode='bilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({4., 4.}))
.mode(torch::kBilinear)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bilinear_scale_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=(2., 2.),
mode='bilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({2., 2.}))
.mode(torch::kBilinear)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bilinear_scale_tuple_shared_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=(2., 1.),
mode='bilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({2., 1.}))
.mode(torch::kBilinear)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bilinear_scale_tuple_skewed_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(4, 6), scale_factor=None, mode='bilinear', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({4, 6}))
.scale_factor(c10::nullopt)
.mode(torch::kBilinear)
.align_corners(true)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bilinear_tuple_2d_align_corners',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=(2., 1.),
mode='bilinear', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({2., 1.}))
.mode(torch::kBilinear)
.align_corners(true)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bilinear_scale_tuple_skewed_2d_align_corners',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='bicubic', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kBicubic)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bicubic_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='bicubic', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kBicubic)
.align_corners(false)''',
input_size=(0, 2, 4, 4),
fullname='interpolate_bicubic_2d_zero_dim',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(4, 6), scale_factor=None,
mode='bicubic', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({4, 6}))
.scale_factor(c10::nullopt)
.mode(torch::kBicubic)
.align_corners(false)''',
input_size=(1, 2, 2, 3),
fullname='interpolate_bicubic_tuple_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=4., mode='bicubic', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({4., 4.}))
.mode(torch::kBicubic)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bicubic_scale_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=(2., 2.),
mode='bicubic', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({2., 2.}))
.mode(torch::kBicubic)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bicubic_scale_tuple_shared_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=(2., 1.),
mode='bicubic', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({2., 1.}))
.mode(torch::kBicubic)
.align_corners(false)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bicubic_scale_tuple_skewed_2d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(4, 6), scale_factor=None, mode='bicubic', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({4, 6}))
.scale_factor(c10::nullopt)
.mode(torch::kBicubic)
.align_corners(true)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bicubic_tuple_2d_align_corners',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=(2., 1.),
mode='bicubic', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({2., 1.}))
.mode(torch::kBicubic)
.align_corners(true)''',
input_size=(1, 2, 4, 4),
fullname='interpolate_bicubic_scale_tuple_skewed_2d_align_corners',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kNearest)''',
input_size=(1, 2, 4, 4, 4),
fullname='interpolate_nearest_3d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kNearest)''',
input_size=(0, 2, 4, 4, 4),
fullname='interpolate_nearest_3d_zero_dim',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(12, 16, 16), scale_factor=None, mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 16, 16}))
.scale_factor(c10::nullopt)
.mode(torch::kNearest)''',
input_size=(1, 2, 3, 4, 4),
fullname='interpolate_nearest_tuple_3d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=4., mode='nearest'),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({4., 4., 4.}))
.mode(torch::kNearest)''',
input_size=(1, 2, 4, 4, 4),
fullname='interpolate_nearest_scale_3d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='trilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kTrilinear)
.align_corners(false)''',
input_size=(1, 2, 4, 4, 4),
fullname='interpolate_trilinear_3d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=12, scale_factor=None, mode='trilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({12, 12, 12}))
.scale_factor(c10::nullopt)
.mode(torch::kTrilinear)
.align_corners(false)''',
input_size=(0, 2, 4, 4, 4),
fullname='interpolate_trilinear_3d_zero_dim',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(4, 6, 6),
scale_factor=None, mode='trilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({4, 6, 6}))
.scale_factor(c10::nullopt)
.mode(torch::kTrilinear)
.align_corners(false)''',
input_size=(1, 2, 2, 3, 3),
fullname='interpolate_trilinear_tuple_3d',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=3., mode='trilinear', align_corners=False),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({3., 3., 3.}))
.mode(torch::kTrilinear)
.align_corners(false)''',
input_size=(1, 2, 3, 4, 5),
fullname='interpolate_trilinear_scale_3d',
# See https://github.com/pytorch/pytorch/issues/5006
precision=3e-4,
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=(4, 6, 6), scale_factor=None,
mode='trilinear', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(std::vector<int64_t>({4, 6, 6}))
.scale_factor(c10::nullopt)
.mode(torch::kTrilinear)
.align_corners(true)''',
input_size=(1, 2, 2, 3, 3),
fullname='interpolate_trilinear_tuple_3d_align_corners',
pickle=False,
),
dict(
constructor=wrap_functional(F.interpolate, size=None, scale_factor=3., mode='trilinear', align_corners=True),
cpp_options_args='''F::InterpolateFuncOptions()
.size(c10::nullopt)
.scale_factor(std::vector<double>({3., 3., 3.}))
.mode(torch::kTrilinear)
.align_corners(true)''',
input_size=(1, 2, 3, 4, 4),
fullname='interpolate_trilinear_scale_3d_align_corners',
# See https://github.com/pytorch/pytorch/issues/5006
precision=3e-4,
pickle=False,
),
dict(
module_name='AdaptiveMaxPool1d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveMaxPool1dOptions(3)',
input_fn=lambda: _rand_tensor_non_equal(1, 3, 5),
),
dict(
module_name='AdaptiveMaxPool1d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveMaxPool1dOptions(3)',
input_fn=lambda: _rand_tensor_non_equal(3, 5),
desc='no_batch_dim',
),
dict(
module_name='AdaptiveMaxPool2d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveMaxPool2dOptions(3)',
input_fn=lambda: _rand_tensor_non_equal(1, 3, 5, 6),
desc='single',
),
dict(
module_name='AdaptiveMaxPool2d',
constructor_args=((3, 4),),
cpp_constructor_args='torch::nn::AdaptiveMaxPool2dOptions({3, 4})',
input_fn=lambda: _rand_tensor_non_equal(1, 3, 5, 6),
desc='tuple',
),
dict(
module_name='AdaptiveMaxPool2d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveMaxPool2dOptions(3)',
input_fn=lambda: _rand_tensor_non_equal(3, 5, 6),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim',
),
dict(
module_name='AdaptiveMaxPool2d',
constructor_args=((3, None),),
cpp_constructor_args='torch::nn::AdaptiveMaxPool2dOptions({3, c10::nullopt})',
input_fn=lambda: _rand_tensor_non_equal(1, 3, 5, 6),
desc='tuple_none',
),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveMaxPool3dOptions(3)',
input_fn=lambda: _rand_tensor_non_equal(2, 3, 5, 6, 7),
desc='single',
),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveMaxPool3dOptions(3)',
input_fn=lambda: _rand_tensor_non_equal(3, 5, 6, 7),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim',
),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=((3, 4, 5),),
cpp_constructor_args='torch::nn::AdaptiveMaxPool3dOptions({3, 4, 5})',
input_fn=lambda: _rand_tensor_non_equal(2, 3, 5, 6, 7),
desc='tuple',
),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=((3, None, 5),),
cpp_constructor_args='torch::nn::AdaptiveMaxPool3dOptions({3, c10::nullopt, 5})',
input_fn=lambda: _rand_tensor_non_equal(2, 3, 5, 6, 7),
desc='tuple_none',
),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveMaxPool3dOptions(3)',
input_fn=lambda: _rand_tensor_non_equal(2, 3, 12, 9, 3),
desc='single_nonatomic',
),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=((3, 4, 5),),
cpp_constructor_args='torch::nn::AdaptiveMaxPool3dOptions({3, 4, 5})',
input_fn=lambda: _rand_tensor_non_equal(2, 3, 6, 4, 10),
desc='tuple_nonatomic',
),
dict(
module_name='AdaptiveAvgPool1d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool1dOptions(3)',
input_fn=lambda: torch.rand(1, 3, 5),
),
dict(
module_name='AdaptiveAvgPool1d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool1dOptions(3)',
input_fn=lambda: torch.rand(3, 5),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim',
),
dict(
module_name='AdaptiveAvgPool1d',
constructor_args=(1,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool1dOptions(1)',
input_fn=lambda: torch.rand(1, 3, 5),
desc='one_output',
),
dict(
module_name='AdaptiveAvgPool2d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool2dOptions(3)',
input_fn=lambda: torch.rand(1, 3, 5, 6),
desc='single',
),
dict(
module_name='AdaptiveAvgPool2d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool2dOptions(3)',
input_fn=lambda: torch.rand(3, 5, 6),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim',
),
dict(
module_name='AdaptiveAvgPool2d',
constructor_args=(1,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool2dOptions(1)',
input_fn=lambda: torch.rand(1, 3, 5, 6),
desc='single_1x1output',
),
dict(
module_name='AdaptiveAvgPool2d',
constructor_args=((3, 4),),
cpp_constructor_args='torch::nn::AdaptiveAvgPool2dOptions({3, 4})',
input_fn=lambda: torch.rand(1, 3, 5, 6),
desc='tuple',
),
dict(
module_name='AdaptiveAvgPool2d',
constructor_args=((3, None),),
cpp_constructor_args='torch::nn::AdaptiveAvgPool2dOptions({3, c10::nullopt})',
input_fn=lambda: torch.rand(1, 3, 5, 6),
desc='tuple_none',
),
dict(
module_name='AdaptiveAvgPool3d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool3dOptions(3)',
input_fn=lambda: torch.rand(2, 3, 5, 2, 7),
desc='single',
),
dict(
module_name='AdaptiveAvgPool3d',
constructor_args=(3,),
cpp_constructor_args='torch::nn::AdaptiveAvgPool3dOptions(3)',
input_fn=lambda: torch.rand(3, 5, 2, 7),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim',
),
dict(
module_name='AdaptiveAvgPool3d',
constructor_args=((3, 4, 5),),
cpp_constructor_args='torch::nn::AdaptiveAvgPool3dOptions({3, 4, 5})',
input_fn=lambda: torch.rand(2, 3, 5, 3, 7),
desc='tuple',
),
dict(
module_name='AdaptiveAvgPool3d',
constructor_args=((None, 4, 5),),
cpp_constructor_args='torch::nn::AdaptiveAvgPool3dOptions({c10::nullopt, 4, 5})',
input_fn=lambda: torch.rand(2, 3, 5, 3, 7),
desc='tuple_none',
),
dict(
module_name='AdaptiveAvgPool3d',
constructor_args=((3, 2, 2),),
cpp_constructor_args='torch::nn::AdaptiveAvgPool3dOptions({3, 2, 2})',
input_fn=lambda: torch.rand(1, 1, 3, 2, 6),
desc='last_dim',
),
dict(
module_name='SELU',
input_size=(3, 2, 5),
check_inplace=True
),
dict(
module_name='SELU',
input_size=(),
check_inplace=True,
desc='scalar'
),
dict(
module_name='CELU',
input_size=(3, 2, 5),
constructor_args=(2.,),
cpp_constructor_args='torch::nn::CELUOptions().alpha(2.)',
check_inplace=True,
reference_fn=lambda x, *_: torch.where(x >= 0, x, 2. * ((.5 * x).exp() - 1)),
),
dict(
module_name='CELU',
input_size=(),
constructor_args=(2.,),
cpp_constructor_args='torch::nn::CELUOptions().alpha(2.)',
check_inplace=True,
reference_fn=lambda x, *_: torch.where(x >= 0, x, 2. * ((.5 * x).exp() - 1)),
desc='scalar'
),
dict(
module_name='GLU',
input_size=(5, 6),
),
dict(
module_name='GLU',
constructor_args=(1,),
cpp_constructor_args='torch::nn::GLUOptions(1)',
input_size=(5, 6, 7),
desc='dim',
),
dict(
module_name='GELU',
input_size=(),
desc='scalar',
reference_fn=lambda x, *_: x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))),
),
dict(
module_name='GELU',
input_size=(3, 2, 5),
reference_fn=lambda x, *_: x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))),
),
dict(
module_name='SiLU',
input_size=(),
desc='scalar',
reference_fn=lambda x, *_: x * torch.sigmoid(x),
),
dict(
module_name='SiLU',
input_size=(5, 6, 7),
reference_fn=lambda x, *_: x * torch.sigmoid(x),
),
dict(
module_name='Mish',
input_size=(),
desc='scalar',
reference_fn=lambda x, *_: x * torch.tanh(F.softplus(x)),
),
dict(
module_name='Mish',
input_size=(5, 6, 7),
reference_fn=lambda x, *_: x * torch.tanh(F.softplus(x)),
),
dict(
constructor=wrap_functional(F.softmax, dim=-1),
cpp_options_args='F::SoftmaxFuncOptions(-1)',
input_size=(2, 128), # trigger the last-dim algo in CUDA
fullname='softmax_lastdim',
pickle=False,
),
dict(
constructor=wrap_functional(F.softmax, dim=1, dtype=torch.float64),
cpp_options_args='F::SoftmaxFuncOptions(1).dtype(torch::kFloat64)',
input_size=(2, 128),
fullname='softmax_lastdim_dtype',
pickle=False,
test_cuda=False
),
dict(
constructor=wrap_functional(F.softmax, dim=1),
cpp_options_args='F::SoftmaxFuncOptions(1)',
input_size=(2, 128, 2, 2), # trigger special case of spatial CUDA algo
fullname='softmax_spatial_special',
pickle=False,
),
dict(
constructor=wrap_functional(F.softmax, dim=1),
cpp_options_args='F::SoftmaxFuncOptions(1)',
input_size=(2, 2, 4, 4), # regular spatial algorithm
fullname='softmax_spatial',
pickle=False,
),
dict(
constructor=wrap_functional(F.softmax, dim=1, dtype=torch.float64),
cpp_options_args='F::SoftmaxFuncOptions(1).dtype(torch::kFloat64)',
input_size=(2, 2, 4, 4), # regular spatial algorithm
fullname='softmax_spatial_dtype',
pickle=False,
test_cuda=False
),
dict(
constructor=wrap_functional(F.softmax, dim=0),
cpp_options_args='F::SoftmaxFuncOptions(0)',
input_size=(2, 3, 4, 5),
fullname='softmax_functional_dim0',
test_cuda=False,
pickle=False,
),
dict(
constructor=wrap_functional(F.softmax, dim=3),
cpp_options_args='F::SoftmaxFuncOptions(3)',
input_size=(2, 3, 4, 5),
fullname='softmax_functional_dim3',
test_cuda=False,
pickle=False,
),
dict(
constructor=wrap_functional(F.softmax, dim=-1),
cpp_options_args='F::SoftmaxFuncOptions(-1)',
input_size=(),
fullname='softmax_functional_scalar',
test_cuda=False,
pickle=False,
),
dict(
constructor=wrap_functional(F.log_softmax, dim=-1),
cpp_options_args='F::LogSoftmaxFuncOptions(-1)',
input_size=(2, 128), # trigger the last-dim algo in CUDA
fullname='log_softmax_lastdim',
pickle=False,
),
dict(
constructor=wrap_functional(F.log_softmax, dim=1),
cpp_options_args='F::LogSoftmaxFuncOptions(1)',
input_size=(2, 128, 2, 2), # trigger special case of spatial CUDA algo
fullname='log_softmax_spatial_special',
pickle=False,
),
dict(
constructor=wrap_functional(F.log_softmax, dim=1),
cpp_options_args='F::LogSoftmaxFuncOptions(1)',
input_size=(2, 2, 4, 4), # regular spatial algorithm
fullname='log_softmax_spatial',
pickle=False,
),
dict(
constructor=wrap_functional(F.log_softmax, dim=0),
cpp_options_args='F::LogSoftmaxFuncOptions(0)',
input_size=(2, 3, 4, 5),
fullname='log_softmax_dim0',
pickle=False,
),
dict(
constructor=wrap_functional(F.log_softmax, dim=3),
cpp_options_args='F::LogSoftmaxFuncOptions(3)',
input_size=(2, 3, 4, 5),
fullname='log_softmax_dim3',
pickle=False,
),
dict(
constructor=wrap_functional(F.log_softmax, dim=0),
cpp_options_args='F::LogSoftmaxFuncOptions(0)',
input_size=(),
fullname='log_softmax_scalar',
pickle=False,
),
dict(
fullname='Unfold',
constructor=lambda: nn.Unfold((2, 2), (1, 1), (0, 0), (1, 1)),
cpp_constructor_args='torch::nn::UnfoldOptions({2, 2}).dilation({1, 1}).padding({0, 0}).stride({1, 1})',
input_size=(2, 4, 3, 3),
check_gradgrad=False,
test_cuda=True,
),
dict(
fullname='Fold',
constructor=lambda: nn.Fold((3, 3), (2, 2), (1, 1), (0, 0), (1, 1)),
cpp_constructor_args='torch::nn::FoldOptions({3, 3}, {2, 2}).dilation({1, 1}).padding({0, 0}).stride({1, 1})',
input_size=(2, 16, 4),
check_gradgrad=False,
test_cuda=True,
),
dict(
fullname='Unfold_int_input',
constructor=lambda: nn.Unfold(2, 1, 0, 1),
cpp_constructor_args='torch::nn::UnfoldOptions(2).dilation(1).padding(0).stride(1)',
input_size=(2, 4, 3, 3),
check_gradgrad=False,
test_cuda=True,
),
dict(
fullname='Fold_int_input',
constructor=lambda: nn.Fold(3, 2, 1, 0, 1),
cpp_constructor_args='torch::nn::FoldOptions(3, 2).dilation(1).padding(0).stride(1)',
input_size=(2, 16, 4),
check_gradgrad=False,
test_cuda=True,
),
dict(
module_name='Threshold',
constructor_args=(2., 1.),
cpp_constructor_args='torch::nn::ThresholdOptions(2., 1.)',
input_size=(),
check_inplace=True,
desc='threshold_value_scalar'
),
dict(
module_name='ReLU',
input_size=(),
check_inplace=True,
desc='scalar'
),
dict(
module_name='ReLU6',
input_size=(),
check_inplace=True,
desc='scalar'
),
dict(
module_name='RReLU',
constructor_args=(0.1, 0.9),
cpp_constructor_args='torch::nn::RReLUOptions().lower(0.1).upper(0.9)',
input_size=(),
desc='with_up_down_scalar',
test_cuda=False,
),
dict(
module_name='Hardtanh',
input_size=(),
reference_fn=lambda i, *_: i.clamp(-1, 1),
desc='scalar'
),
dict(
module_name='Sigmoid',
input_size=(),
desc='scalar',
),
dict(
module_name='Tanh',
input_size=(),
desc='scalar',
),
dict(
module_name='Softmax',
constructor_args=(0,),
cpp_constructor_args='torch::nn::SoftmaxOptions(0)',
input_size=(),
reference_fn=lambda i, *_: torch.exp(i).div(torch.exp(i).sum(0, True)),
desc='scalar',
),
dict(
module_name='LogSoftmax',
constructor_args=(0,),
cpp_constructor_args='torch::nn::LogSoftmaxOptions(0)',
input_size=(),
reference_fn=lambda i, *_: torch.exp(i).div_(torch.exp(i).sum(0, False)).log_(),
desc='multiparam_scalar',
),
dict(
module_name='ELU',
constructor_args=(2.,),
cpp_constructor_args='torch::nn::ELUOptions().alpha(2.)',
input_size=(),
desc='scalar',
),
dict(
module_name='Hardshrink',
constructor_args=(2.,),
cpp_constructor_args='torch::nn::HardshrinkOptions(2.)',
input_size=(),
desc='scalar',
),
dict(
module_name='LeakyReLU',
constructor_args=(0.5,),
cpp_constructor_args='torch::nn::LeakyReLUOptions().negative_slope(0.5)',
input_size=(),
check_inplace=True,
desc='with_negval_scalar'
),
dict(
module_name='LogSigmoid',
input_size=(),
reference_fn=lambda i, *_: i.sigmoid().log(),
desc='scalar'
),
dict(
module_name='Softplus',
constructor_args=(2, -100),
cpp_constructor_args='torch::nn::SoftplusOptions().beta(2).threshold(-100)',
input_size=(),
reference_fn=(
lambda i, *_: ((i * 2) > -100).type_as(i) * i
+ ((i * 2) <= -100).type_as(i) * 1.0 / 2.0 * torch.log(1 + torch.exp(2 * i))
),
desc='beta_threshold_scalar',
),
dict(
module_name='Softshrink',
constructor_args=(1,),
cpp_constructor_args='torch::nn::SoftshrinkOptions(1)',
input_size=(),
desc='lambda_scalar',
),
dict(
module_name='PReLU',
input_size=(),
reference_fn=lambda i, p, _: torch.clamp(i, min=0) + torch.clamp(i, max=0) * p[0][0],
desc='scalar',
),
dict(
module_name='Softsign',
input_size=(),
reference_fn=lambda i, *_: i.div(1 + torch.abs(i)),
desc='scalar',
),
dict(
module_name='Softmin',
constructor_args=(0,),
cpp_constructor_args='torch::nn::SoftminOptions(0)',
input_size=(),
desc='scalar',
),
dict(
module_name='Tanhshrink',
input_size=(),
desc='scalar',
),
dict(
fullname='Padding12_1dcircular',
constructor=wrap_functional(F.pad, pad=(1, 2), mode='circular'),
cpp_options_args='F::PadFuncOptions({1, 2}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(6, out=torch.DoubleTensor()).reshape([1, 2, 3]),
reference_fn=lambda i, *_: padding1d_circular(i, (1, 2)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding31_1dcircular',
constructor=wrap_functional(F.pad, pad=(3, 1), mode='circular'),
cpp_options_args='F::PadFuncOptions({3, 1}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(6, out=torch.DoubleTensor()).reshape([1, 2, 3]),
reference_fn=lambda i, *_: padding1d_circular(i, (3, 1)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding33_1dcircular',
constructor=wrap_functional(F.pad, pad=(3, 3), mode='circular'),
cpp_options_args='F::PadFuncOptions({3, 3}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(6, out=torch.DoubleTensor()).reshape([1, 2, 3]),
reference_fn=lambda i, *_: padding1d_circular(i, (3, 3)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding1221_2dcircular',
constructor=wrap_functional(F.pad, pad=(1, 2, 2, 1), mode='circular'),
cpp_options_args='F::PadFuncOptions({1, 2, 2, 1}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(6, out=torch.DoubleTensor()).reshape([1, 1, 2, 3]),
reference_fn=lambda i, *_: padding2d_circular(i, (1, 2, 2, 1)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding2322_2dcircular',
constructor=wrap_functional(F.pad, pad=(2, 3, 2, 2), mode='circular'),
cpp_options_args='F::PadFuncOptions({2, 3, 2, 2}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(6, out=torch.DoubleTensor()).reshape([1, 1, 2, 3]),
reference_fn=lambda i, *_: padding2d_circular(i, (2, 3, 2, 2)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding3331_2dcircular',
constructor=wrap_functional(F.pad, pad=(3, 3, 3, 1), mode='circular'),
cpp_options_args='F::PadFuncOptions({3, 3, 3, 1}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(9, out=torch.DoubleTensor()).reshape([1, 1, 3, 3]),
reference_fn=lambda i, *_: padding2d_circular(i, (3, 3, 3, 1)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding122112_3dcircular',
constructor=wrap_functional(F.pad, pad=(1, 2, 2, 1, 1, 2), mode='circular'),
cpp_options_args='F::PadFuncOptions({1, 2, 2, 1, 1, 2}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(12, out=torch.DoubleTensor()).reshape([1, 1, 2, 2, 3]),
reference_fn=lambda i, *_: padding3d_circular(i, (1, 2, 2, 1, 1, 2)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding322112_3dcircular',
constructor=wrap_functional(F.pad, pad=(3, 2, 2, 1, 1, 2), mode='circular'),
cpp_options_args='F::PadFuncOptions({3, 2, 2, 1, 1, 2}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(12, out=torch.DoubleTensor()).reshape([1, 1, 2, 2, 3]),
reference_fn=lambda i, *_: padding3d_circular(i, (3, 2, 2, 1, 1, 2)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
fullname='Padding332122_3dcircular',
constructor=wrap_functional(F.pad, pad=(3, 3, 2, 1, 2, 2), mode='circular'),
cpp_options_args='F::PadFuncOptions({3, 3, 2, 1, 2, 2}).mode(torch::kCircular)',
input_fn=lambda: torch.arange(12, out=torch.DoubleTensor()).reshape([1, 1, 2, 2, 3]),
reference_fn=lambda i, *_: padding3d_circular(i, (3, 3, 2, 1, 2, 2)),
skip_double=TEST_WITH_ROCM,
pickle=False,
),
dict(
module_name='TransformerEncoderLayer',
constructor_args=(4, 2, 16, 0.0),
cpp_constructor_args='''torch::nn::TransformerEncoderLayerOptions(4, 2)
.dim_feedforward(16)
.dropout(0.0)''',
input_size=(2, 3, 4),
desc='relu_activation',
with_tf32=True,
tf32_precision=0.1,
# TODO(#50743): figure out the error
# RuntimeError: The size of tensor a (6) must match the size of tensor b (4)
# at non-singleton dimension 2
check_batched_grad=False,
),
dict(
module_name='TransformerEncoderLayer',
constructor_args=(4, 2, 8, 0.0, 'gelu'),
cpp_constructor_args='''torch::nn::TransformerEncoderLayerOptions(4, 2)
.dim_feedforward(8)
.dropout(0.0)
.activation(torch::kGELU)''',
input_size=(2, 3, 4),
check_gradgrad=False,
desc='gelu_activation',
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='TransformerDecoderLayer',
constructor_args=(4, 2, 8, 0.0),
cpp_constructor_args='''torch::nn::TransformerDecoderLayerOptions(4, 2)
.dim_feedforward(8)
.dropout(0.0)''',
input_fn=lambda: (torch.rand(3, 3, 4), torch.rand(2, 3, 4)),
check_gradgrad=False,
desc='relu_activation',
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='TransformerDecoderLayer',
constructor_args=(4, 2, 8, 0.0, 'gelu'),
cpp_constructor_args='''torch::nn::TransformerDecoderLayerOptions(4, 2)
.dim_feedforward(8)
.dropout(0.0)
.activation(torch::kGELU)''',
input_fn=lambda: (torch.rand(3, 3, 4), torch.rand(2, 3, 4)),
check_gradgrad=False,
desc='gelu_activation',
with_tf32=True,
tf32_precision=0.05,
),
dict(
module_name='Transformer',
constructor_args=(4, 2, 2, 2, 8, 0.0, "relu"),
cpp_constructor_args='''torch::nn::TransformerOptions()
.d_model(4)
.nhead(2)
.num_encoder_layers(2)
.num_decoder_layers(2)
.dim_feedforward(8)
.dropout(0.0)
.activation(torch::kReLU)''',
input_fn=lambda:(torch.rand(3, 3, 4), torch.rand(2, 3, 4), torch.rand(3, 3)),
check_gradgrad=False,
desc='multilayer_coder',
with_tf32=True,
tf32_precision=0.01,
),
dict(
module_name='Linear',
constructor_args=(3, 5),
cpp_constructor_args='torch::nn::LinearOptions(3, 5)',
input_fn=lambda: torch.rand(3),
reference_fn=lambda i, p, _: torch.mm(i.view(1, -1), p[0].t()).view(-1) + p[1],
desc="no_batch_dim",
with_tf32=True,
tf32_precision=0.005,
),
]
# add conv padding mode tests:
for padding_mode, cpp_padding_mode in zip(
['reflect', 'circular', 'replicate', 'zeros'],
['torch::kReflect', 'torch::kCircular', 'torch::kReplicate', 'torch::kZeros']):
# conv signature:
# in_channels, out_channels, kernel_size, stride=1,
# padding=0, dilation=1, groups=1,
# bias=True, padding_mode='zeros'
for d in (1, 2, 3):
if d == 3 and padding_mode == 'reflect':
# FIXME: remove after implementing reflection pad 3d
# https://github.com/pytorch/pytorch/issues/27655
continue
padding = tuple(range(1, d + 1))
cpp_padding = '{' + ', '.join(map(str, padding)) + '}'
input_size = (2, 2) + (4,) * d
output_size = (2, 3) + tuple(p + 1 for p in padding) # simplified from `(4 + 2 * p - 3) // 2 + 1`
new_module_tests.append(
dict(
module_name='Conv{}d'.format(d),
constructor_args=(2, 3, 3, 2, padding, 1, 1, True, padding_mode),
cpp_constructor_args='''torch::nn::Conv{}dOptions(2, 3, 3)
.stride(2)
.padding({})
.dilation(1)
.groups(1)
.bias(true)
.padding_mode({})'''.format(d, cpp_padding, cpp_padding_mode),
input_size=input_size,
output_size=output_size,
cudnn=True,
desc='{}_stride2_pad2'.format(padding_mode),
with_tf32=True,
tf32_precision=0.05
),
)
# Check that non linear activations work with no batch dimensions
non_linear_activations_no_batch = [
'ELU', 'Hardshrink', 'Hardsigmoid', 'Hardtanh', 'Hardswish', 'LeakyReLU',
'LogSigmoid', 'PReLU', 'ReLU', 'ReLU6', 'RReLU', 'SELU', 'CELU', 'GELU',
'Sigmoid', 'SiLU', 'Mish', 'Softplus', 'Softshrink', 'Softsign', 'Tanh',
'Tanhshrink', 'Threshold'
]
non_linear_activations_extra_info: Dict[str, dict] = {
'CELU': {'constructor_args': (2.,)},
'Threshold': {'constructor_args': (2., 1.)},
'Hardsigmoid': {'check_gradgrad': False, 'check_jit': False},
'Hardswish': {'check_gradgrad': False, 'check_jit': False},
# For RRelu, test that compare CPU and GPU results fail because RNG
# is different between CPU and GPU
'RReLU': {'test_cuda': False},
}
for non_linear_activation in non_linear_activations_no_batch:
activation_test_info = dict(
module_name=non_linear_activation,
input_size=(3,),
reference_fn=single_batch_reference_fn,
desc='no_batch_dim',
test_cpp_api_parity=False,
)
extra_info = non_linear_activations_extra_info.get(non_linear_activation, {})
activation_test_info.update(extra_info)
new_module_tests.append(activation_test_info)
def kldivloss_reference(input, target, reduction='mean'):
safe_target = target * (target > 0).type_as(target)
safe_target_log = (safe_target + (target <= 0).type_as(target)).log()
result = safe_target * (safe_target_log - input)
if reduction == 'mean':
return result.mean()
elif reduction == 'sum':
return result.sum()
elif reduction == 'batchmean' and result.dim() != 0:
return result.sum() / result.size(0)
return result
def kldivloss_log_target_reference(input, target, reduction='mean'):
result = torch.exp(target) * (target - input)
if reduction == 'mean':
return result.mean()
elif reduction == 'sum':
return result.sum()
elif reduction == 'batchmean' and result.dim() != 0:
return result.sum() / result.size(0)
return result
def nlllossNd_reference(input, target, weight=None, ignore_index=-100,
reduction='mean'):
assert input.dim() >= 3
N = input.size(0)
C = input.size(1)
out_size = (N,) + input.size()[2:]
output = torch.zeros(out_size).type_as(input)
if weight is None:
weight = torch.ones(C).type_as(input)
total_weight = 0
for tup in product(*[range(size) for size in out_size]):
t_nx = target[tup]
norm = 0. if ignore_index == t_nx else weight[t_nx].item()
input_index = list(tup)
input_index.insert(1, t_nx)
output[tup] = -input[tuple(input_index)] * norm
total_weight += norm
if reduction == 'mean':
return output.sum() / total_weight
elif reduction == 'sum':
return output.sum()
return output
def cross_entropy_loss_reference(input, target, weight=None, ignore_index=-100, reduction='mean'):
return nlllossNd_reference(
torch.log_softmax(input, 1),
target,
weight,
ignore_index=ignore_index,
reduction=reduction)
def nllloss_reference(input, target, weight=None, ignore_index=-100,
reduction='mean'):
def nll_loss_helper(input, target, weight, ignore_index):
if target == ignore_index:
return (0, 0)
norm = 1 if weight is None else weight[target]
result = -input[target] * norm
return (result, norm)
losses_and_weights = [nll_loss_helper(i, t, weight, ignore_index)
for i, t in zip(input, target)]
losses, weights = zip(*losses_and_weights)
losses_tensor = input.new_tensor(losses)
if reduction == 'mean':
return sum(losses_tensor) / sum(weights)
elif reduction == 'sum':
return sum(losses_tensor)
else:
return losses_tensor
def smoothl1loss_reference(input, target, reduction='mean', beta=1.0):
abs_diff = (input - target).abs()
ge_beta_mask = (abs_diff >= beta).type_as(abs_diff)
lt_beta_mask = (abs_diff < beta).type_as(abs_diff)
# when beta <= 0 we should just use l1_loss
if beta == 0:
output = abs_diff
else:
output = ge_beta_mask * (abs_diff - 0.5 * beta) + lt_beta_mask * 0.5 * (abs_diff ** 2) / beta
if reduction == 'mean':
return output.mean()
elif reduction == 'sum':
return output.sum()
return output
def huberloss_reference(input, target, reduction='mean', delta=1.0):
abs_diff = (input - target).abs()
ge_delta_mask = (abs_diff >= delta)
lt_delta_mask = (abs_diff < delta)
output = ge_delta_mask * delta * (abs_diff - 0.5 * delta) + lt_delta_mask * 0.5 * (abs_diff ** 2)
if reduction == 'mean':
return output.mean()
elif reduction == 'sum':
return output.sum()
return output
def _multilabelmarginloss_reference(input, target):
targets = []
for target_index in target:
if target_index < 0:
break
targets.append(target_index)
sum = 0
for target_index in targets:
for i in range(0, len(input)):
if i not in targets:
sum += max(0, 1 - input[target_index] + input[i])
return sum
def multilabelmarginloss_reference(input, target, reduction='mean'):
# make everything 2-dimensional
input_dim = input.dim()
if input.dim() < 2:
assert target.dim() < 2
input = input.unsqueeze(0) if input.dim() == 1 else input.unsqueeze(0).unsqueeze(0)
target = target.unsqueeze(0) if target.dim() == 1 else target.unsqueeze(0).unsqueeze(0)
n = input.size(0)
dim = input.size(1)
output = input.new(n).zero_()
for i in range(0, n):
output[i] = _multilabelmarginloss_reference(input[i], target[i])
if reduction == 'mean':
return output.mean() / dim
elif reduction == 'sum':
return output.sum() / dim
elif input_dim < 2:
# we know we have (1, C) X (1, C) -> (1,), so squeeze will get us
# back to correct dimensionality
return output.squeeze() / dim
else:
return output / dim
def hingeembeddingloss_reference(input, target, margin=1.0, reduction='mean'):
margin_clamp = (margin - input).clamp(min=0).type_as(input)
output = torch.where(target == 1, input, margin_clamp)
if reduction == 'mean':
return output.mean()
elif reduction == 'sum':
return output.sum()
return output
def softmarginloss_reference(input, target, reduction='mean'):
output = (1 + (-input * target).exp()).log()
if reduction == 'mean':
return output.mean()
elif reduction == 'sum':
return output.sum()
return output
def _multimarginloss_reference(input, target_idx, p, margin, weight):
if weight is None:
weight = input.new(len(input)).fill_(1)
output = 0
for i in range(0, len(input)):
if i != target_idx:
output += max(0, weight[target_idx] * (margin - input[target_idx] + input[i]) ** p)
return output
def multimarginloss_reference(input, target, p=1, margin=1, weight=None, reduction='mean'):
if input.dim() < 2:
input = input.unsqueeze(0) if input.dim() == 1 else input.unsqueeze(0).unsqueeze(0)
target_dim = target.dim()
if target.dim() == 0:
target = target.unsqueeze(0)
n = input.size(0)
dim = input.size(1)
output = input.new(n)
for x in range(0, n):
output[x] = _multimarginloss_reference(input[x], target[x], p, margin, weight)
if reduction == 'mean':
return output.mean() / dim
elif reduction == 'sum':
return output.sum() / dim
elif target_dim == 0:
return output.squeeze(0) / dim
return output / dim
def cosineembeddingloss_reference(input1, input2, target, margin=0, reduction='mean'):
def _cos(a, b):
cos = a.new(a.size(0))
for i in range(0, a.size(0)):
cos[i] = (a[i] * b[i]).sum() / ((((a[i] * a[i]).sum() + 1e-12) * ((b[i] * b[i]).sum() + 1e-12)) ** 0.5)
return cos
output = torch.where(target == 1, 1 - _cos(input1, input2), (_cos(input1, input2) - margin).clamp(min=0))
if reduction == 'mean':
return output.mean()
elif reduction == 'sum':
return output.sum()
return output
def tripletmarginloss_reference(anchor, positive, negative, margin=1.0, p=2, eps=1e-6, swap=False,
reduction='mean'):
d_p = torch.pairwise_distance(anchor, positive, p, eps)
d_n = torch.pairwise_distance(anchor, negative, p, eps)
if swap:
d_s = torch.pairwise_distance(positive, negative, p, eps)
d_n = torch.min(d_n, d_s)
output = torch.clamp(margin + d_p - d_n, min=0.0)
if reduction == 'mean':
return output.mean()
elif reduction == 'sum':
return output.sum()
return output
def marginrankingloss_reference(input1, input2, target, margin=0, reduction='mean'):
output = (-target * (input1 - input2) + margin).clamp(min=0)
if reduction == 'mean':
return output.mean()
elif reduction == 'sum':
return output.sum()
return output
# this directly follows Graves et al's paper, in contrast to the production implementation, it does not use log-space
def ctcloss_reference(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean'):
input_lengths = torch.as_tensor(input_lengths, dtype=torch.long)
target_lengths = torch.as_tensor(target_lengths, dtype=torch.long)
dt = log_probs.dtype
log_probs = log_probs.double() # we need the accuracy as we are not in logspace
targets = targets.long()
cum_target_lengths = target_lengths.cumsum(0)
losses = []
for i in range(log_probs.size(1)):
input_length = input_lengths[i].item()
target_length = target_lengths[i].item()
cum_target_length = cum_target_lengths[i].item()
targets_prime = targets.new_full((2 * target_length + 1,), blank)
if targets.dim() == 2:
targets_prime[1::2] = targets[i, :target_length]
else:
targets_prime[1::2] = targets[cum_target_length - target_length:cum_target_length]
probs = log_probs[:input_length, i].exp()
alpha = log_probs.new_zeros((target_length * 2 + 1,))
alpha[0] = probs[0, blank]
alpha[1] = probs[0, targets_prime[1]]
mask_third = (targets_prime[:-2] != targets_prime[2:])
for t in range(1, input_length):
alpha_next = alpha.clone()
alpha_next[1:] += alpha[:-1]
alpha_next[2:] += torch.where(mask_third, alpha[:-2], alpha.new_zeros(1))
alpha = probs[t, targets_prime] * alpha_next
losses.append(-alpha[-2:].sum().log()[None])
output = torch.cat(losses, 0)
if reduction == 'mean':
return (output / target_lengths.to(dtype=output.dtype, device=output.device)).mean()
elif reduction == 'sum':
return output.sum()
output = output.to(dt)
return output
def padding1d_circular(input, pad):
r""" input:
[[[0., 1., 2.],
[3., 4., 5.]]]
pad: (1, 2)
output:
[[[2., 0., 1., 2., 0., 1.],
[5., 3., 4., 5., 3., 4.]]]
"""
return torch.cat([input[:, :, -pad[0]:], input,
input[:, :, 0:pad[1]]], dim=2)
def padding2d_circular(input, pad):
r"""input:
[[[[0., 1., 2],
[3., 4., 5.]]]]
pad: (1, 2, 2, 1)
output:
[[[[2., 0., 1., 2., 0., 1.],
[5., 3., 4., 5., 3., 4.],
[2., 0., 1., 2., 0., 1.],
[5., 3., 4., 5., 3., 4.],
[2., 0., 1., 2., 0., 1.]]]]
"""
input = torch.cat([input[:, :, -pad[2]:], input, input[:, :, 0:pad[3]]], dim=2)
return torch.cat([input[:, :, :, -pad[0]:], input, input[:, :, :, 0:pad[1]]], dim=3)
def padding3d_circular(input, pad):
r"""input:
[[[[[ 0., 1., 2.],
[ 3., 4., 5.]],
[[ 6., 7., 8.],
[ 9., 10., 11.]]]]]
pad: (1, 2, 2, 1, 1, 2)
output: [[[[[ 8., 6., 7., 8., 6., 7.],
[11., 9., 10., 11., 9., 10.],
[ 8., 6., 7., 8., 6., 7.],
[11., 9., 10., 11., 9., 10.],
[ 8., 6., 7., 8., 6., 7.]],
[[ 2., 0., 1., 2., 0., 1.],
[ 5., 3., 4., 5., 3., 4.],
[ 2., 0., 1., 2., 0., 1.],
[ 5., 3., 4., 5., 3., 4.],
[ 2., 0., 1., 2., 0., 1.]],
[[ 8., 6., 7., 8., 6., 7.],
[11., 9., 10., 11., 9., 10.],
[ 8., 6., 7., 8., 6., 7.],
[11., 9., 10., 11., 9., 10.],
[ 8., 6., 7., 8., 6., 7.]],
[[ 2., 0., 1., 2., 0., 1.],
[ 5., 3., 4., 5., 3., 4.],
[ 2., 0., 1., 2., 0., 1.],
[ 5., 3., 4., 5., 3., 4.],
[ 2., 0., 1., 2., 0., 1.]],
[[ 8., 6., 7., 8., 6., 7.],
[11., 9., 10., 11., 9., 10.],
[ 8., 6., 7., 8., 6., 7.],
[11., 9., 10., 11., 9., 10.],
[ 8., 6., 7., 8., 6., 7.]]]]]
"""
input = torch.cat([input[:, :, -pad[4]:], input, input[:, :, 0:pad[5]]], dim=2)
input = torch.cat([input[:, :, :, -pad[2]:], input, input[:, :, :, 0:pad[3]]], dim=3)
return torch.cat([input[:, :, :, :, -pad[0]:], input, input[:, :, :, :, 0:pad[1]]], dim=4)
loss_reference_fns: Dict['str', Callable] = {
'KLDivLoss': kldivloss_reference,
'KLDivLoss_log_target': kldivloss_log_target_reference,
'NLLLoss': nllloss_reference,
'NLLLossNd': nlllossNd_reference,
'SmoothL1Loss': smoothl1loss_reference,
'HuberLoss': huberloss_reference,
'MultiLabelMarginLoss': multilabelmarginloss_reference,
'HingeEmbeddingLoss': hingeembeddingloss_reference,
'SoftMarginLoss': softmarginloss_reference,
'MultiMarginLoss': multimarginloss_reference,
'CosineEmbeddingLoss': cosineembeddingloss_reference,
'TripletMarginLoss': tripletmarginloss_reference,
'MarginRankingLoss': marginrankingloss_reference,
'CTCLoss': ctcloss_reference,
'CrossEntropyLoss': cross_entropy_loss_reference
}
criterion_tests = [
dict(
module_name='L1Loss',
input_size=(2, 3, 4),
target_fn=lambda: torch.randn((2, 3, 4), requires_grad=True),
reference_fn=lambda i, t, _: 1. / i.numel() *
sum((a - b).abs().sum() for a, b in zip(i, t)),
check_complex=True,
),
dict(
module_name='NLLLoss',
input_fn=lambda: torch.rand(15, 10).log(),
target_fn=lambda: torch.empty(15).uniform_().mul(10).floor().long(),
reference_fn=lambda i, t, m:
nllloss_reference(i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
constructor_args=(None, None, 2),
cpp_constructor_args='torch::nn::NLLLossOptions().weight({}).ignore_index(2)',
input_fn=lambda: torch.rand(15, 10).log(),
target_fn=lambda: torch.empty(15).uniform_().mul(10).floor().long(),
reference_fn=lambda i, t, _: nllloss_reference(i, t, ignore_index=2),
desc='ignore_index',
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
constructor_args_fn=lambda: (torch.rand(10),),
cpp_constructor_args='torch::nn::NLLLossOptions().weight(torch::rand(10))',
input_fn=lambda: torch.rand(15, 10).add(1e-2).log(),
target_fn=lambda: torch.empty(15).uniform_().mul(10).floor().long(),
reference_fn=lambda i, t, m:
nllloss_reference(i, t, weight=get_weight(m)),
desc='weights',
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
constructor_args_fn=lambda: (torch.rand(10), None, 2),
cpp_constructor_args='torch::nn::NLLLossOptions().weight(torch::rand(10)).ignore_index(2)',
input_fn=lambda: torch.rand(15, 10).add(1e-2).log(),
target_fn=lambda: torch.empty(15).uniform_().mul(10).floor().long(),
reference_fn=lambda i, t, m:
nllloss_reference(i, t, weight=get_weight(m), ignore_index=2),
desc='weights_ignore_index',
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
constructor_args_fn=lambda: (torch.rand(10), None, -1),
cpp_constructor_args='torch::nn::NLLLossOptions().weight(torch::rand(10)).ignore_index(-1)',
input_fn=lambda: torch.rand(15, 10).add(1e-2).log(),
target_fn=lambda: torch.empty(15).uniform_().mul(10 + 1).floor().long() - 1,
reference_fn=lambda i, t, m:
nllloss_reference(i, t, weight=get_weight(m), ignore_index=-1),
desc='weights_ignore_index_neg',
check_bfloat16=True,
),
dict(
module_name='KLDivLoss',
input_fn=lambda: torch.rand(10, 10).log(),
target_fn=lambda: torch.rand(10, 10),
reference_fn=lambda i, t, m:
kldivloss_reference(i, t, get_reduction(m)),
check_sum_reduction=True,
),
dict(
module_name='KLDivLoss',
input_fn=lambda: torch.rand(10, 10).log(),
target_fn=lambda: torch.rand(10, 10),
reference_fn=lambda i, t, m:
kldivloss_log_target_reference(i, t.log(), get_reduction(m)),
check_sum_reduction=True,
desc='log_target',
),
dict(
module_name='MSELoss',
input_size=(2, 3, 4, 5),
target_fn=lambda: torch.randn((2, 3, 4, 5), requires_grad=True),
reference_fn=lambda i, t, m: ((i - t).abs().pow(2).sum() / (i.numel()
if get_reduction(m) == 'mean' else 1)),
check_sum_reduction=True,
),
dict(
module_name='BCELoss',
input_fn=lambda: torch.rand(15, 10).clamp_(1e-2, 1 - 1e-2),
target_fn=lambda: torch.randn(15, 10).gt(0).double(),
reference_fn=lambda i, t, m: -(t * i.log() + (1 - t) * (1 - i).log()).sum() /
(i.numel() if get_reduction(m) else 1),
check_bfloat16=True,
),
dict(
module_name='BCELoss',
constructor_args_fn=lambda: (torch.rand(10),),
cpp_constructor_args='torch::nn::BCELossOptions().weight(torch::rand(10))',
input_fn=lambda: torch.rand(15, 10).clamp_(1e-2, 1 - 1e-2),
target_fn=lambda: torch.randn(15, 10).gt(0).double(),
reference_fn=lambda i, t, m: -((t * i.log() + (1 - t) * (1 - i).log()) * get_weight(m)).sum() /
(i.numel() if get_reduction(m) else 1),
desc='weights',
check_bfloat16=True,
),
dict(
module_name='CrossEntropyLoss',
input_size=(15, 10),
target_fn=lambda: torch.empty(15).uniform_().mul(10).floor().long(),
),
dict(
module_name='CrossEntropyLoss',
constructor_args_fn=lambda: (torch.rand(10),),
cpp_constructor_args='torch::nn::CrossEntropyLossOptions().weight(torch::rand(10))',
input_size=(15, 10),
target_fn=lambda: torch.empty(15).uniform_().mul(10).floor().long(),
desc='weights',
),
dict(
module_name='HingeEmbeddingLoss',
input_size=(10,),
target_fn=lambda: torch.randn(10).gt(0).double().mul_(2).sub(1),
reference_fn=lambda i, t, m:
hingeembeddingloss_reference(i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
),
dict(
module_name='HingeEmbeddingLoss',
constructor_args=(0.5,),
cpp_constructor_args='torch::nn::HingeEmbeddingLossOptions().margin(0.5)',
input_size=(10,),
target_fn=lambda: torch.randn(10).gt(0).double().mul_(2).sub(1),
reference_fn=lambda i, t, m:
hingeembeddingloss_reference(i, t, margin=0.5, reduction=get_reduction(m)),
desc='margin',
check_sum_reduction=True,
),
dict(
module_name='MultiLabelMarginLoss',
input_size=(10,),
target_fn=lambda: torch.rand(10).mul(10).floor().long(),
reference_fn=lambda i, t, m:
multilabelmarginloss_reference(i, t, reduction=get_reduction(m)),
desc="1d",
check_sum_reduction=True,
check_gradgrad=False,
check_bfloat16=True,
),
dict(
module_name='MultiLabelMarginLoss',
input_size=(5, 10),
target_fn=lambda: torch.rand(5, 10).mul(10).floor().long(),
reference_fn=lambda i, t, m:
multilabelmarginloss_reference(i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
check_gradgrad=False,
check_bfloat16=True,
),
dict(
module_name='MultiLabelSoftMarginLoss',
input_size=(5, 10),
target_fn=lambda: torch.rand(5, 10).mul(2).floor(),
reference_fn=lambda i, t, m: -(t * i.sigmoid().log() + (1 - t) * (-i).sigmoid().log()).sum() / i.numel(),
check_gradgrad=False,
),
dict(
module_name='MultiMarginLoss',
input_size=(5, 10),
target_fn=lambda: torch.rand(5).mul(8).floor().long(),
reference_fn=lambda i, t, m:
multimarginloss_reference(i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
check_gradgrad=False,
),
dict(
module_name='MultiMarginLoss',
input_size=(10,),
target_fn=lambda: torch.rand(1).mul(8).floor().long(),
reference_fn=lambda i, t, m:
multimarginloss_reference(i, t, reduction=get_reduction(m)),
desc='1d',
check_sum_reduction=True,
check_gradgrad=False,
),
dict(
module_name='MultiMarginLoss',
constructor_args=(2,),
cpp_constructor_args='torch::nn::MultiMarginLossOptions().p(2)',
input_fn=lambda: torch.rand(5, 10).clamp_(1e-2, 1 - 1e-2),
target_fn=lambda: torch.rand(5).mul(8).floor().long(),
reference_fn=lambda i, t, m:
multimarginloss_reference(i, t, p=2, reduction=get_reduction(m)),
desc='p',
check_sum_reduction=True,
check_gradgrad=False,
),
dict(
module_name='MultiMarginLoss',
constructor_args=(1, 0.5),
cpp_constructor_args='torch::nn::MultiMarginLossOptions().p(1).margin(0.5)',
legacy_constructor_args=(1, None, 0.5),
input_size=(5, 10),
target_fn=lambda: torch.rand(5).mul(8).floor().long(),
reference_fn=lambda i, t, m:
multimarginloss_reference(i, t, margin=0.5, reduction=get_reduction(m)),
desc='margin',
check_sum_reduction=True,
check_gradgrad=False,
),
dict(
module_name='MultiMarginLoss',
constructor_args=(1, 1., torch.rand(10).double()),
cpp_constructor_args='torch::nn::MultiMarginLossOptions().p(1).margin(1.).weight(torch::rand(10))',
legacy_constructor_args=(1, torch.rand(10).double()),
input_size=(5, 10),
target_fn=lambda: torch.rand(5).mul(8).floor().long(),
reference_fn=lambda i, t, m:
multimarginloss_reference(i, t, weight=get_weight(m), reduction=get_reduction(m)),
desc='weights',
check_sum_reduction=True,
check_gradgrad=False,
),
dict(
module_name='SmoothL1Loss',
input_size=(5, 10),
target_fn=lambda: torch.randn((5, 10), requires_grad=True),
check_sum_reduction=True,
reference_fn=lambda i, t, m, b=1.0:
smoothl1loss_reference(i, t, reduction=get_reduction(m), beta=b),
),
dict(
module_name='HuberLoss',
input_size=(5, 10),
target_fn=lambda: torch.randn((5, 10), requires_grad=True),
check_sum_reduction=True,
check_half=True,
check_bfloat16=True,
reference_fn=lambda i, t, m:
huberloss_reference(i, t, reduction=get_reduction(m)),
),
dict(
module_name='SoftMarginLoss',
input_size=(5, 5),
target_fn=lambda: torch.randn(5, 5).sign(),
reference_fn=lambda i, t, m:
softmarginloss_reference(i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
),
dict(
module_name='CosineEmbeddingLoss',
input_fn=lambda: (torch.rand(15, 10), torch.rand(15, 10)),
target_fn=lambda: torch.randn(15).sign(),
reference_fn=lambda i, t, m:
cosineembeddingloss_reference(i[0], i[1], t, reduction=get_reduction(m)),
check_sum_reduction=True,
),
dict(
module_name='CosineEmbeddingLoss',
constructor_args=(0.7,),
cpp_constructor_args='torch::nn::CosineEmbeddingLossOptions().margin(0.7)',
input_fn=lambda: (torch.rand(15, 10), torch.rand(15, 10)),
target_fn=lambda: torch.randn(15).sign(),
reference_fn=lambda i, t, m:
cosineembeddingloss_reference(i[0], i[1], t, margin=0.7, reduction=get_reduction(m)),
desc='margin',
check_sum_reduction=True,
),
dict(
module_name='MarginRankingLoss',
input_fn=lambda: (torch.randn(50).mul(10), torch.randn(50).mul(10)),
target_fn=lambda: torch.randn(50).sign(),
reference_fn=lambda i, t, m:
marginrankingloss_reference(i[0], i[1], t, reduction=get_reduction(m)),
check_sum_reduction=True,
),
dict(
module_name='MarginRankingLoss',
constructor_args=(0.5,),
cpp_constructor_args='torch::nn::MarginRankingLossOptions().margin(0.5)',
input_fn=lambda: (torch.randn(50).mul(10), torch.randn(50).mul(10)),
target_fn=lambda: torch.randn(50).sign(),
reference_fn=lambda i, t, m:
marginrankingloss_reference(i[0], i[1], t, margin=0.5, reduction=get_reduction(m)),
desc='margin',
check_sum_reduction=True,
),
dict(
module_name='BCEWithLogitsLoss',
input_fn=lambda: torch.rand(15, 10).clamp_(1e-2, 1 - 1e-2),
target_fn=lambda: torch.randn(15, 10).gt(0).double(),
),
dict(
module_name='BCEWithLogitsLoss',
constructor_args=(torch.rand(10),),
cpp_constructor_args='torch::nn::BCEWithLogitsLossOptions().weight(torch::rand(10))',
input_fn=lambda: torch.rand(15, 10).clamp_(1e-2, 1 - 1e-2),
target_fn=lambda: torch.randn(15, 10).gt(0).double(),
desc='weights',
),
dict(
module_name='BCEWithLogitsLoss',
constructor_args=(torch.rand(()),),
cpp_constructor_args='torch::nn::BCEWithLogitsLossOptions().weight(torch::rand({}))',
input_fn=lambda: torch.rand(()).clamp_(1e-2, 1 - 1e-2),
target_fn=lambda: torch.randn(()).gt(0).double(),
desc='scalar_weights'
),
dict(
module_name='NLLLoss',
input_size=(2, 3, 5, 5),
target_fn=lambda: torch.rand(2, 5, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['NLLLossNd'](i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
desc='2d',
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
constructor_args_fn=lambda: (torch.rand(3),),
cpp_constructor_args='torch::nn::NLLLossOptions().weight(torch::rand(3))',
input_size=(2, 3, 5, 5),
target=torch.rand(2, 5, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['NLLLossNd'](i, t, weight=get_weight(m)),
desc='2d_weights',
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
constructor_args=(None, None, 1),
cpp_constructor_args='torch::nn::NLLLossOptions().weight({}).ignore_index(1)',
input_size=(2, 3, 5, 5),
target_fn=lambda: torch.rand(2, 5, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['NLLLossNd'](i, t, ignore_index=1),
desc='2d_ignore_index',
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
input_size=(2, 3, 5, 5, 2, 2),
target_fn=lambda: torch.rand(2, 5, 5, 2, 2).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['NLLLossNd'](i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
desc='higher_dim',
check_bfloat16=True,
),
dict(
module_name='NLLLoss',
input_size=(2, 3, 5),
target_fn=lambda: torch.rand(2, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['NLLLossNd'](i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
desc='dim_is_3',
check_bfloat16=True,
),
dict(
module_name='CrossEntropyLoss',
input_size=(2, 3, 5, 5),
target_fn=lambda: torch.rand(2, 5, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['CrossEntropyLoss'](i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
desc='2d',
check_bfloat16=False,
),
dict(
module_name='CrossEntropyLoss',
constructor_args_fn=lambda: (torch.rand(3),),
cpp_constructor_args='torch::nn::CrossEntropyLossOptions().weight(torch::rand(3))',
input_size=(2, 3, 5, 5),
target=torch.rand(2, 5, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['CrossEntropyLoss'](i, t, weight=get_weight(m)),
desc='2d_weights',
check_bfloat16=False,
),
dict(
module_name='CrossEntropyLoss',
constructor_args=(None, None, 1),
cpp_constructor_args='torch::nn::CrossEntropyLossOptions().weight({}).ignore_index(1)',
input_size=(2, 3, 5, 5),
target_fn=lambda: torch.rand(2, 5, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['CrossEntropyLoss'](i, t, ignore_index=1),
desc='2d_ignore_index',
check_bfloat16=False,
),
dict(
module_name='CrossEntropyLoss',
input_size=(2, 3, 5, 5, 2, 2),
target_fn=lambda: torch.rand(2, 5, 5, 2, 2).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['CrossEntropyLoss'](i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
desc='higher_dim',
check_bfloat16=False,
),
dict(
module_name='CrossEntropyLoss',
input_size=(2, 3, 5),
target_fn=lambda: torch.rand(2, 5).mul(3).floor().long(),
reference_fn=lambda i, t, m:
loss_reference_fns['CrossEntropyLoss'](i, t, reduction=get_reduction(m)),
check_sum_reduction=True,
desc='dim_is_3',
check_bfloat16=False,
),
dict(
module_name='PoissonNLLLoss', # Default is log_input=True, full=False
input_size=(2, 3, 4, 5),
target_fn=lambda: torch.randn(2, 3, 4, 5).floor_().abs_(),
reference_fn=lambda i, t, _: (i.exp() - t.mul(i)).mean(),
desc='no_full_loss',
),
dict(
module_name='PoissonNLLLoss',
constructor_args=(False, False), # log_input=False, full=False
cpp_constructor_args='torch::nn::PoissonNLLLossOptions().log_input(false).full(false)',
input_fn=lambda: torch.randn(2, 3, 4, 5).abs_().add_(0.001),
target_fn=lambda: torch.randn(2, 3, 4, 5).floor_().abs_(),
reference_fn=lambda i, t, _: (i - t.mul((i + 1e-8).log())).mean(),
desc='no_full_loss_no_log_input',
),
dict(
module_name='PoissonNLLLoss',
constructor_args=(True, True), # log_input=True, full=True
cpp_constructor_args='torch::nn::PoissonNLLLossOptions().log_input(true).full(true)',
input_size=(2, 3, 4, 5),
target_fn=lambda: torch.randn(2, 3, 4, 5).floor_().abs_(),
reference_fn=lambda i, t, _:
(i.exp() - t.mul(i) + (t.mul(t.log()) - t + 0.5 * (2. * pi * t).log()).masked_fill(t <= 1, 0)).mean(),
desc='full_loss',
),
dict(
module_name='PoissonNLLLoss',
constructor_args=(False, True), # log_input=False, full=True
cpp_constructor_args='torch::nn::PoissonNLLLossOptions().log_input(false).full(true)',
input_fn=lambda: torch.randn(2, 3, 4, 5).abs_().add_(0.001),
target_fn=lambda: torch.randn(2, 3, 4, 5).floor_().abs_(),
reference_fn=lambda i, t, _: (
i - t.mul((i + 1e-8).log()) + (t.mul(t.log()) - t + 0.5 * (2. * pi * t).log()).masked_fill(t <= 1, 0)
).mean(),
desc='full_loss_no_log_input',
),
dict(
module_name='L1Loss',
input_size=(),
target_fn=lambda: torch.randn((), requires_grad=True),
reference_fn=lambda i, t, _: 1. / i.numel() * (i - t).abs().sum(),
desc='scalar',
check_complex=True,
),
dict(
module_name='KLDivLoss',
input_fn=lambda: torch.rand(()).log(),
target_fn=lambda: torch.rand(()),
reference_fn=lambda i, t, m:
kldivloss_reference(i, t, get_reduction(m)),
check_sum_reduction=True,
desc='scalar',
),
dict(
module_name='KLDivLoss',
input_fn=lambda: torch.rand(()).log(),
target_fn=lambda: torch.rand(()),
reference_fn=lambda i, t, m:
kldivloss_log_target_reference(i, t.log(), get_reduction(m)),
check_sum_reduction=True,
desc='scalar_log_target',
),
dict(
module_name='MSELoss',
input_size=(),
target_fn=lambda: torch.randn((), requires_grad=True),
reference_fn=lambda i, t, m: ((i - t).abs().pow(2).sum() /
(i.numel() if get_reduction(m) == 'mean' else 1)),
check_sum_reduction=True,
desc='scalar',
check_bfloat16=True,
),
dict(
module_name='MSELoss',
input_fn=lambda: torch.ones(5, 68, 64, 64, dtype=torch.float) / 10,
target_fn=lambda: torch.zeros(5, 68, 64, 64, dtype=torch.float),
reference_fn=lambda i, t, m: ((i - t).abs().pow(2).sum() /
(i.numel() if get_reduction(m) == 'mean' else 1)),
check_forward_only=True,
desc='prec',
check_bfloat16=True,
),
dict(
module_name='BCELoss',
constructor_args_fn=lambda: (torch.rand(()),),
cpp_constructor_args='torch::nn::BCELossOptions().weight(torch::rand({}))',
input_fn=lambda: torch.rand(()).clamp_(1e-2, 1 - 1e-2),
target_fn=lambda: torch.rand(()).gt(0).double(),
reference_fn=lambda i, t, m: -((t * i.log() + (1 - t) * (1 - i).log()) * get_weight(m)).sum() /
(i.numel() if get_reduction(m) == 'mean' else 1),
desc='scalar_weights',
check_bfloat16=True,
),
dict(
module_name='HingeEmbeddingLoss',
constructor_args=(0.5,),
cpp_constructor_args='torch::nn::HingeEmbeddingLossOptions().margin(0.5)',
input_size=(),
target_fn=lambda: torch.randn(()).gt(0).double().mul_(2).sub(1),
desc='scalar_margin',
check_sum_reduction=True,
),
dict(
module_name='SmoothL1Loss',
input_size=(),
target_fn=lambda: torch.randn((), requires_grad=True),
check_sum_reduction=True,
reference_fn=lambda i, t, m, b=1.0:
smoothl1loss_reference(i, t, reduction=get_reduction(m), beta=b),
desc='scalar',
),
dict(
module_name='MultiLabelSoftMarginLoss',
constructor_args=(torch.rand(10),),
cpp_constructor_args='torch::nn::MultiLabelSoftMarginLossOptions().weight(torch::rand(10))',
input_fn=lambda: torch.randn(5, 10),
target_fn=lambda: torch.rand(5, 10).mul(2).floor(),
reference_fn=lambda i, t, m: -((t * i.sigmoid().log() + (1 - t) * (-i).sigmoid().log()) * get_weight(m)).sum() /
(i.numel() if get_reduction(m) == 'mean' else i.size(1) if get_reduction(m) == 'sum' else 1),
desc='weights',
check_sum_reduction=True,
check_gradgrad=False,
),
dict(
module_name='CTCLoss',
constructor_args=(14,), # blank=14
extra_args=([50, 50, 50], [30, 25, 20]), # input_lengths, target_lengths
input_fn=lambda: torch.randn(50, 3, 15).log_softmax(2),
target_fn=lambda: torch.randint(0, 14, (3, 30), dtype=torch.long),
reference_fn=lambda i, t, il, tl, m:
ctcloss_reference(i, t, il, tl, blank=14, reduction=get_reduction(m)),
desc='lengths_intlists',
check_forward_only=True,
check_sum_reduction=True,
check_gradgrad=False,
check_half=False,
# `CTCLoss` in C++ frontend doesn't accept integer list for `input_lengths` or `target_lengths`
test_cpp_api_parity=False,
check_jit=False,
),
dict(
module_name='CTCLoss',
constructor_args=(14,), # blank=14
cpp_constructor_args='torch::nn::CTCLossOptions().blank(14)',
extra_args=(torch.tensor([50, 50, 50]), torch.tensor([30, 25, 20])), # input_lengths, target_lengths
input_fn=lambda: torch.randn(50, 3, 15).log_softmax(2),
target_fn=lambda: torch.randint(0, 14, (3, 30), dtype=torch.long),
reference_fn=lambda i, t, il, tl, m:
ctcloss_reference(i, t, il, tl, blank=14, reduction=get_reduction(m)),
desc='lengths_tensors',
check_forward_only=True,
check_sum_reduction=True,
check_gradgrad=False,
check_half=False,
),
# Test is flaky
# See https://github.com/pytorch/pytorch/issues/29380.
# dict(
# module_name='CTCLoss',
# desc='1d_target',
# constructor_args=(14,), # blank=14
# extra_args=([50, 50, 50], [30, 25, 20]), # input_lengths, target_lengths
# input_fn=lambda: torch.randn(50, 3, 15).log_softmax(2),
# target_fn=lambda: torch.randint(0, 14, (3, 30), dtype=torch.long),
# reference_fn=lambda i, t, il, tl, m:
# ctcloss_reference(i, t, il, tl, blank=14, reduction=get_reduction(m)),
# check_sum_reduction=True,
# check_gradgrad=False,
# check_half=False,
# ),
dict(
module_name='CTCLoss',
desc='2d_int_target_lengths_intlists',
constructor_args=(0,), # blank=0
extra_args=([50, 50, 50], [30, 25, 20]), # input_lengths, target_lengths
input_fn=lambda: torch.randn(50, 3, 15).log_softmax(2),
target_fn=lambda: torch.randint(1, 15, (3, 30), dtype=torch.int),
reference_fn=lambda i, t, il, tl, m:
ctcloss_reference(i, t, il, tl, blank=0, reduction=get_reduction(m)),
check_forward_only=True,
check_sum_reduction=True,
check_gradgrad=False,
check_half=False,
# `CTCLoss` in C++ frontend doesn't accept integer list for `input_lengths` or `target_lengths`
test_cpp_api_parity=False,
check_jit=False,
),
dict(
module_name='CTCLoss',
desc='2d_int_target_lengths_tensors',
constructor_args=(0,), # blank=0
cpp_constructor_args='torch::nn::CTCLossOptions().blank(0)',
extra_args=(torch.tensor([50, 50, 50]), torch.tensor([30, 25, 20])), # input_lengths, target_lengths
input_fn=lambda: torch.randn(50, 3, 15).log_softmax(2),
target_fn=lambda: torch.randint(1, 15, (3, 30), dtype=torch.int),
reference_fn=lambda i, t, il, tl, m:
ctcloss_reference(i, t, il, tl, blank=0, reduction=get_reduction(m)),
check_forward_only=True,
check_sum_reduction=True,
check_gradgrad=False,
check_half=False,
),
dict(
module_name='CTCLoss',
desc='2d_lengths_tensors',
constructor_args=(0,), # blank=0
cpp_constructor_args='torch::nn::CTCLossOptions().blank(0)',
extra_args=(torch.tensor([50, 50, 50]), torch.tensor([30, 25, 20])), # input_lengths, target_lengths
input_fn=lambda: torch.randn(50, 3, 15).log_softmax(2),
target_fn=lambda: torch.randint(1, 15, (3, 30), dtype=torch.int),
reference_fn=lambda i, t, il, tl, m:
ctcloss_reference(i, t, il, tl, blank=0, reduction=get_reduction(m)),
check_forward_only=True,
check_sum_reduction=True,
check_gradgrad=False,
check_half=False,
),
]
def single_batch_reference_criterion_fn(*args):
"""Reference function for criterion supporting no batch dimensions.
The criterion is passed the input and target in batched form with a single item.
The output is squeezed to compare with the no-batch input.
"""
criterion = args[-1]
single_batch_input_args = [input.unsqueeze(0) for input in args[:-1]]
output = criterion(*single_batch_input_args)
reduction = get_reduction(criterion)
if reduction == 'none':
return output.squeeze(0)
# reduction is 'sum' or 'mean' which results in a scalar
return output
# Check that regression criterion work with no batch dimensions
regression_criterion_no_batch = [
'L1Loss', 'MSELoss', 'PoissonNLLLoss', 'KLDivLoss', 'HuberLoss', 'SmoothL1Loss'
]
reductions = ['none', 'mean', 'sum']
for regression_criterion, reduction in product(regression_criterion_no_batch,
reductions):
regression_test_info = dict(
fullname="{}_no_batch_dim_{}".format(regression_criterion, reduction),
constructor=lambda *args: getattr(nn, regression_criterion)(reduction=reduction),
input_size=(3, ),
target_fn=lambda: torch.randn(3),
reference_fn=single_batch_reference_criterion_fn,
test_cpp_api_parity=False,
)
criterion_tests.append(regression_test_info)
class NNTestCase(TestCase):
# _forward is defined in classes inheriting from NNTestCase
@abstractmethod
def _forward(self, *args, **kwargs):
raise NotImplementedError
@abstractmethod
def _get_parameters(self, module: nn.Module) -> Tuple[List[nn.Parameter], List[nn.Parameter]]:
raise NotImplementedError
@abstractmethod
def _zero_grad_parameters(self, module: nn.Module) -> None:
raise NotImplementedError
@abstractmethod
def _backward(self, module: nn.Module,
input: _TensorOrTensors, output: torch.Tensor,
grad_output: Union[torch.Tensor, Sequence[torch.Tensor]],
create_graph: bool = False):
raise NotImplementedError
def _jacobian(self, input, num_out):
if isinstance(input, tuple):
return tuple(self._jacobian(elem, num_out) for elem in input)
elif isinstance(input, list):
return [self._jacobian(elem, num_out) for elem in input]
else:
return torch.zeros(input.nelement(), num_out)
def _flatten_tensors(self, x):
if isinstance(x, torch.Tensor):
if x.is_sparse:
return x.to_dense().view(-1)
else:
return x.view(-1)
else:
return tuple(self._flatten_tensors(a) for a in x)
def _zero_grad_input(self, input):
if isinstance(input, torch.Tensor):
if input.requires_grad and input.grad is not None:
input.grad.zero_()
input.grad.detach_()
else:
for i in input:
self._zero_grad_input(i)
def _analytical_jacobian(self, module, input: _TensorOrTensors, jacobian_input=True, jacobian_parameters=True):
output = self._forward(module, input)
output_size = output.nelement()
if jacobian_input:
jacobian_inp = self._jacobian(input, output_size)
flat_jacobian_input = list(_iter_tensors(jacobian_inp))
if jacobian_parameters:
num_param = sum(p.numel() for p in self._get_parameters(module)[0])
jacobian_param = torch.zeros(num_param, output_size)
for i in range(output_size):
param, d_param = self._get_parameters(module)
# make non grad zeros
d_param = [torch.zeros_like(p) if d is None else d for (p, d) in zip(param, d_param)]
d_out = torch.zeros_like(output)
flat_d_out = d_out.view(-1)
flat_d_out[i] = 1
if jacobian_parameters:
self._zero_grad_parameters(module)
# Tensors will accumulate gradient from multiple steps
if jacobian_input:
self._zero_grad_input(input)
d_input = self._backward(module, input, output, d_out)
if jacobian_input:
for jacobian_x, d_x in zip(flat_jacobian_input, _iter_tensors(d_input)):
jacobian_x[:, i] = d_x.contiguous().view(-1)
if jacobian_parameters:
jacobian_param[:, i] = torch.cat(self._flatten_tensors(d_param), 0)
res: Tuple[torch.Tensor, ...] = tuple()
if jacobian_input:
res += jacobian_inp,
if jacobian_parameters:
res += jacobian_param,
return res
def _numerical_jacobian(self, module, input: _TensorOrTensors, jacobian_input=True, jacobian_parameters=True):
def fw(*input):
return self._forward(module, input).detach()
res: Tuple[torch.Tensor, ...] = tuple()
if jacobian_input:
res += _get_numerical_jacobian(fw, input, eps=1e-6),
if jacobian_parameters:
param, _ = self._get_parameters(module)
to_cat = []
for p in param:
jacobian = _get_numerical_jacobian(fw, input, target=p, eps=1e-6)
# get_numerical_jacobian returns a list of tuples but we require a tensor
to_cat.append(jacobian[0][0])
res += (torch.cat(to_cat, 0),)
return res
def check_jacobian(self, module, input: _TensorOrTensors, jacobian_input=True):
jacobian_parameters = bool(self._get_parameters(module)[0])
analytical = self._analytical_jacobian(module, input, jacobian_input, jacobian_parameters)
numerical = self._numerical_jacobian(module, input, jacobian_input, jacobian_parameters)
analytical_t = list(_iter_tensors(analytical))
numerical_t = list(_iter_tensors(numerical))
differences = []
for a, n in zip(analytical_t, numerical_t):
if a.numel() != 0:
differences.append(a.add(n, alpha=-1).abs().max())
# TODO: compare structure (ensure analytic jacobian has correct shape)
if len(differences) > 0:
self.assertLessEqual(max(differences), PRECISION) # type: ignore[type-var]
class TestBase(object):
_required_arg_names = {'constructor_args', 'input', 'extra_args'}
def __init__(self, constructor, desc='', reference_fn=None, fullname=None, **kwargs):
self.desc = desc
self.fullname = fullname
self.constructor = constructor
self.reference_fn = reference_fn
for name in self._required_arg_names:
if name not in kwargs and name + '_fn' not in kwargs and name + '_size' not in kwargs:
if name in {'constructor_args', 'extra_args'}:
kwargs[name] = tuple()
else:
raise ValueError("{}: Specify {} by a value, a function to generate it, or it's size!"
.format(self.get_name(), name))
self._extra_kwargs = kwargs
self._arg_cache = {}
def get_name(self):
if self.fullname is not None:
return 'test_' + self.fullname
test_name = 'test_' + self.constructor.__name__
if self.desc:
test_name += '_' + self.desc
return test_name
def _unpack(self, value):
if isinstance(value, torch.Tensor):
return value
elif is_iterable(value):
return type(value)(self._unpack(v) for v in value)
else:
return value
@property
def constructor_args(self):
return self._get_arg('constructor_args', True)
@property
def extra_args(self):
return self._get_arg('extra_args', True)
def _get_arg(self, name, unpack):
assert name in self._required_arg_names
if name not in self._arg_cache:
fn_name = name + '_fn'
size_name = name + '_size'
if name in self._extra_kwargs:
self._arg_cache[name] = self._extra_kwargs[name]
elif fn_name in self._extra_kwargs:
self._arg_cache[name] = self._extra_kwargs[fn_name]()
else:
assert size_name in self._extra_kwargs, \
"Missing `{}`, `{}` or `{}` for {}".format(name, size_name, fn_name, self.get_name())
def map_tensor_sizes(sizes):
if isinstance(sizes, list):
return [map_tensor_sizes(s) for s in sizes]
elif isinstance(sizes, torch.Tensor):
return sizes.double()
else:
return torch.randn(sizes)
self._arg_cache[name] = map_tensor_sizes(self._extra_kwargs[size_name])
return self._unpack(self._arg_cache[name]) if unpack else self._arg_cache[name]
def _get_input(self, unpack=True):
return self._get_arg('input', unpack)
def __call__(self, test_case):
raise NotImplementedError
class ModuleTest(TestBase):
@abstractmethod
def _do_test(self, test_case: Any, module: nn.Module, input: Any) -> Any:
raise NotImplementedError
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.jacobian_input = kwargs.get('jacobian_input', True)
self.should_test_cuda = kwargs.get('test_cuda', True)
self.should_test_pickle = kwargs.get('pickle', True)
self.check_gradgrad = kwargs.get('check_gradgrad', True)
self.FIXME_no_cuda_gradgrad_comparison = \
kwargs.get('FIXME_no_cuda_gradgrad_comparison', False)
self.precision = kwargs.get('precision', 2e-4)
self.check_forward_only = kwargs.get('check_forward_only', False)
def __call__(self, test_case):
module = self.constructor(*self.constructor_args)
input = self._get_input()
if self.reference_fn is not None:
out = test_case._forward(module, input)
ref_input = deepcopy(input)
ref_module = deepcopy(module)
expected_out = self.reference_fn(ref_input, test_case._get_parameters(module)[0], ref_module)
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(out, expected_out)
if self.check_forward_only:
return
self.test_noncontig(test_case, module, input)
if self.should_test_pickle:
# TODO: do this with in-memory files as soon as torch.save will support it
with tempfile.TemporaryFile() as f:
test_case._forward(module, input)
torch.save(module, f)
f.seek(0)
module_copy = torch.load(f)
test_case.assertEqual(test_case._forward(module, input), test_case._forward(module_copy, input))
self._do_test(test_case, module, input)
def noncontiguize(self, obj):
if isinstance(obj, list):
return [self.noncontiguize(o) for o in obj]
elif isinstance(obj, tuple):
return tuple(self.noncontiguize(o) for o in obj)
tensor = obj
ndim = tensor.dim()
# Always making only the last dimension noncontiguous is easy to hide
# bugs because .view(-1) will still work. So try to find a dim with size
# > 1 and make that non-contiguous, i.e., stack + select on the
# dimension directly after that.
dim = ndim
for d in range(ndim):
if tensor.size(d) > 1:
dim = d + 1
break
noncontig = torch.stack([torch.empty_like(tensor), tensor], dim).select(dim, 1).detach()
assert noncontig.numel() == 1 or noncontig.numel() == 0 or not noncontig.is_contiguous()
noncontig.requires_grad = tensor.requires_grad
return noncontig
def test_noncontig(self, test_case, module, input):
# check no scalars, can't make non-contig
if isinstance(input, torch.Tensor) and input.dim() == 0:
return
if any(i.dim() == 0 for i in input if isinstance(i, torch.Tensor)):
return
test_case._zero_grad_parameters(module)
test_case._zero_grad_input(input)
with freeze_rng_state():
output = test_case._forward(module, input)
grad_output = output.new(output.shape).normal_()
output = output.clone()
d_input = deepcopy(test_case._backward(module, input, output, grad_output))
d_param = deepcopy(test_case._get_parameters(module)[1])
nc_input = self.noncontiguize(input)
nc_grad_output = self.noncontiguize(grad_output)
for contig_i, contig_g in product((True, False), repeat=2):
i = input if contig_i else nc_input
# Some ops, e.g., nn.Flatten, return gradient that shares
# storage with the grad_output. Hence we copy here.
go = deepcopy(grad_output if contig_g else nc_grad_output)
test_case._zero_grad_parameters(module)
test_case._zero_grad_input(i)
with freeze_rng_state():
out = test_case._forward(module, i)
grad = test_case._backward(module, i, out, go)
test_case.assertEqual(out, output)
test_case.assertEqual(grad, d_input, atol=1e-4, rtol=0)
test_case.assertEqual(test_case._get_parameters(module)[1], d_param)
def test_cuda(self, test_case):
if not TEST_CUDA or not self.should_test_cuda:
raise unittest.SkipTest('Excluded from CUDA tests')
cpu_input = self._get_input()
type_map = {torch.double: torch.float}
cpu_input_tuple = cpu_input if isinstance(cpu_input, tuple) else (cpu_input,)
gpu_input_tuple = to_gpu(cpu_input_tuple, type_map=type_map)
cpu_module = self.constructor(*self.constructor_args)
gpu_module = self.constructor(*self.constructor_args).float().cuda()
cpu_param = test_case._get_parameters(cpu_module)
gpu_param = test_case._get_parameters(gpu_module)
for cpu_p, gpu_p in zip(cpu_param[0], gpu_param[0]):
gpu_p.data.copy_(cpu_p)
test_case._zero_grad_input(cpu_input_tuple)
test_case._zero_grad_input(gpu_input_tuple)
test_case._zero_grad_parameters(cpu_module)
test_case._zero_grad_parameters(gpu_module)
cpu_output = test_case._forward(cpu_module, cpu_input_tuple)
gpu_output = test_case._forward(gpu_module, gpu_input_tuple)
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(cpu_output, gpu_output, atol=self.precision, rtol=0)
# Run backwards on CPU and GPU and compare results
for _ in range(5):
cpu_gradOutput = cpu_output.clone().normal_()
gpu_gradOutput = cpu_gradOutput.type_as(gpu_output)
cpu_gradInput = test_case._backward(cpu_module, cpu_input_tuple, cpu_output, cpu_gradOutput)
gpu_gradInput = test_case._backward(gpu_module, gpu_input_tuple, gpu_output, gpu_gradOutput)
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(cpu_gradInput, gpu_gradInput, atol=self.precision, rtol=0)
for cpu_d_p, gpu_d_p in zip(cpu_param[1], gpu_param[1]):
test_case.assertEqual(cpu_d_p, gpu_d_p, atol=self.precision, rtol=0)
# Run double-backwards on CPU and GPU and compare results
if self.check_gradgrad and not self.FIXME_no_cuda_gradgrad_comparison:
cpu_output = cpu_module(*cpu_input_tuple)
gpu_output = gpu_module(*gpu_input_tuple)
cpu_gradOutput = torch.randn_like(cpu_output, requires_grad=True)
gpu_gradOutput = cpu_gradOutput.type_as(gpu_output).detach()
gpu_gradOutput.requires_grad = True
cpu_gradInputs = torch.autograd.grad(
cpu_output,
cpu_input_tuple + tuple(cpu_module.parameters()),
cpu_gradOutput,
create_graph=True)
gpu_gradInputs = torch.autograd.grad(
gpu_output,
gpu_input_tuple + tuple(gpu_module.parameters()),
gpu_gradOutput,
create_graph=True)
for cpu_d_i, gpu_d_i in zip(cpu_gradInputs, gpu_gradInputs):
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(cpu_d_i, gpu_d_i, atol=self.precision, rtol=0)
# We mix output into the second backwards computation so that
# torch.autograd.grad doesn't complain that some inputs
# are unreachable (which can happen if you differentiate
# only on the gradient.
cpu_gg = torch.autograd.grad(
cpu_output.sum() + sum(x.sum() for x in cpu_gradInputs),
cpu_input_tuple + (cpu_gradOutput,) + tuple(cpu_module.parameters()),
retain_graph=True)
gpu_gg = torch.autograd.grad(
gpu_output.sum() + sum(x.sum() for x in gpu_gradInputs),
gpu_input_tuple + (gpu_gradOutput,) + tuple(gpu_module.parameters()),
retain_graph=True)
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(cpu_gradInput, gpu_gradInput, atol=self.precision, rtol=0)
for cpu_d_p, gpu_d_p in zip(cpu_gg, gpu_gg):
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(cpu_d_p, gpu_d_p, atol=self.precision, rtol=0)
self.test_noncontig(test_case, gpu_module, gpu_input_tuple)
class InputVariableMixin(object):
def _get_input(self):
input = TestBase._get_input(self, False) # type: ignore[arg-type]
def map_variables(i):
if isinstance(i, torch.Tensor):
if i.is_floating_point() or i.is_complex():
i.requires_grad = True
return i
else:
return type(i)(map_variables(elem) for elem in i)
return map_variables(input)
class NewModuleTest(InputVariableMixin, ModuleTest): # type: ignore[misc]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.cudnn = kwargs.get('cudnn', False)
self.check_inplace = kwargs.get('check_inplace', False)
self.check_gradgrad = kwargs.get('check_gradgrad', True)
self.skip_double = kwargs.get('skip_double', False)
self.skip_half = kwargs.get('skip_half', False)
self.with_tf32 = kwargs.get('with_tf32', False)
self.tf32_precision = kwargs.get('tf32_precision', 0.001)
self.test_cpu = kwargs.get('test_cpu', True)
self.has_sparse_gradients = kwargs.get('has_sparse_gradients', False)
self.check_batched_grad = kwargs.get('check_batched_grad', True)
self.gradcheck_fast_mode = kwargs.get('gradcheck_fast_mode', None)
def _check_gradients(self, test_case, module, input_tuple):
params = tuple(x for x in module.parameters())
num_inputs = len(input_tuple)
def fn_to_gradcheck(*inputs_and_params, **kwargs):
assert not kwargs
return test_case._forward(module, inputs_and_params[:num_inputs])
# gradcheck doesn't support operators that take in dense inputs but
# return sparse parameters. This only happens in the case of nn.Embedding
# and nn.EmbeddingBag. Instead, we call `self.check_jacobian`, which
# is a slightly different version of gradcheck that can handle this.
if self.has_sparse_gradients:
assert num_inputs == 1
test_input_jacobian = torch.is_floating_point(input_tuple[0])
test_case.check_jacobian(module, input_tuple[0], test_input_jacobian)
else:
test_case.assertTrue(gradcheck(fn_to_gradcheck, input_tuple + params,
check_batched_grad=self.check_batched_grad,
fast_mode=self.gradcheck_fast_mode))
if self.check_gradgrad:
test_case.assertTrue(gradgradcheck(fn_to_gradcheck, input_tuple + params,
check_batched_grad=self.check_batched_grad,
fast_mode=self.gradcheck_fast_mode))
def _do_test(self, test_case, module, input):
num_threads = torch.get_num_threads()
torch.set_num_threads(1)
input_tuple = input if isinstance(input, tuple) else (input,)
self._check_gradients(test_case, module, input_tuple)
# check if module can be printed
module.__repr__()
if self.check_inplace:
# check if the inplace variant of the module gives the same result
# as the out-of-place
# check_inplace doesn't support multiple input tensors, since we don't have any modules
# that modify the inputs in-place and that accept more than one input
assert len(input_tuple) == 1
input = input_tuple[0]
module_ip = self.constructor(*self.constructor_args, inplace=True)
input_version = input._version
with freeze_rng_state():
output = module(input)
test_case.assertEqual(input._version, input_version)
input_ip = deepcopy(input)
input_ip_clone = input_ip.clone()
with freeze_rng_state():
output_ip = module_ip(input_ip_clone)
test_case.assertNotEqual(input_ip_clone._version, input_version)
test_case.assertEqual(output, output_ip)
grad = output.data.clone().normal_()
if input.grad is not None:
with torch.no_grad():
input.grad.zero_()
if input_ip.grad is not None:
with torch.no_grad():
input_ip.grad.zero_()
output.backward(grad)
output_ip.backward(grad)
test_case.assertEqual(input.grad, input_ip.grad)
def assert_module_parameters_are(tensor_type, device_id=None):
for p in module.parameters():
test_case.assertIsInstance(p, tensor_type)
if device_id is not None:
test_case.assertEqual(p.get_device(), device_id)
if all(isinstance(t, torch.LongTensor) for t in input_tuple) and TEST_CUDA:
# check that cuda() moves module parameters to correct GPU device,
# and that float() casts parameters correctly
input_tuple = tuple(t.cuda() for t in input_tuple)
module.float().cuda()
module(*input_tuple)
assert_module_parameters_are(torch.cuda.FloatTensor, 0) # type: ignore[attr-defined]
if torch.cuda.device_count() > 1:
input_tuple = tuple(t.cuda(1) for t in input_tuple)
module.cuda(1)
with torch.cuda.device(1):
module(*input_tuple)
assert_module_parameters_are(torch.cuda.FloatTensor, 1) # type: ignore[attr-defined]
else:
# check that float()/double() casters work correctly
def to_type(tensor, real, complex):
if tensor.is_complex():
return tensor.to(complex)
elif tensor.is_floating_point():
return tensor.to(real)
else:
return tensor
def to_half(x):
# TODO: torch.complex32 when properly supported
return to_type(x, torch.float16, None)
def to_single(x):
return to_type(x, torch.float32, torch.complex64)
def to_double(x):
return to_type(x, torch.float64, torch.complex128)
# to float
input_tuple = tuple(to_single(t) for t in input_tuple)
module.float()
module(*input_tuple)
assert_module_parameters_are(torch.FloatTensor)
# and back to double
input_tuple = tuple(to_double(t) for t in input_tuple)
module.double()
module(*input_tuple)
assert_module_parameters_are(torch.DoubleTensor)
if TEST_CUDA and self.should_test_cuda:
# check that cuda() moves module parameters to correct GPU device,
# and that float() casts parameters correctly
# to GPU0
input_tuple = tuple(to_single(t).cuda() for t in input_tuple)
module.float().cuda()
module(*input_tuple)
assert_module_parameters_are(torch.cuda.FloatTensor, 0) # type: ignore[attr-defined]
# to CPU
input_tuple = tuple(t.cpu() for t in input_tuple)
module.cpu()
module(*input_tuple)
assert_module_parameters_are(torch.FloatTensor)
# back to GPU0
input_tuple = tuple(t.cuda() for t in input_tuple)
module.cuda()
module(*input_tuple)
assert_module_parameters_are(torch.cuda.FloatTensor, 0) # type: ignore[attr-defined]
# test that forwards of module runs correctly without cuDNN
if self.cudnn:
with torch.backends.cudnn.flags(enabled=False):
module(*input_tuple)
assert_module_parameters_are(torch.cuda.FloatTensor, 0) # type: ignore[attr-defined]
if torch.cuda.device_count() >= 2:
# test cross-GPU transfer works
# to GPU1
input_tuple = tuple(t.cuda(1) for t in input_tuple)
module.cuda(1)
with torch.cuda.device(1):
module(*input_tuple)
assert_module_parameters_are(torch.cuda.FloatTensor, 1) # type: ignore[attr-defined]
if not self.skip_double:
# test double()
input_tuple = tuple(to_double(t).cuda() for t in input_tuple)
module.double().cuda()
module(*input_tuple)
assert_module_parameters_are(torch.cuda.DoubleTensor, 0) # type: ignore[attr-defined]
# test half()
if not self.skip_half:
input_tuple = tuple(to_half(t).cuda() for t in input_tuple)
module.half().cuda()
module(*input_tuple)
assert_module_parameters_are(torch.cuda.HalfTensor, 0) # type: ignore[attr-defined]
torch.set_num_threads(num_threads)
def _get_target(self):
return self._get_arg('target', False)
@property
def constructor_args(self):
return self._get_arg('constructor_args', False)
class CriterionTest(InputVariableMixin, TestBase): # type: ignore[misc]
# TODO: check that criterions don't ignore grad_output
_required_arg_names = TestBase._required_arg_names.union({'target'})
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.should_test_cuda = kwargs.get('test_cuda', True)
self.check_forward_only = kwargs.get('check_forward_only', False)
self.check_gradgrad = kwargs.get('check_gradgrad', True)
self.check_half = kwargs.get('check_half', True)
self.check_bfloat16 = kwargs.get('check_bfloat16', False)
self.check_complex = kwargs.get('check_complex', False)
self.test_cpu = kwargs.get('test_cpu', True)
self.with_tf32 = kwargs.get('with_tf32', True)
self.tf32_precision = kwargs.get('tf32_precision', 0.001)
self.check_batched_grad = kwargs.get('check_batched_grad', True)
def __call__(self, test_case):
module = self.constructor(*self.constructor_args)
input = self._get_input()
# Check that these methods don't raise errors
module.__repr__()
str(module)
target = self._get_target()
if self.reference_fn is not None:
out = test_case._forward_criterion(module, input, target, extra_args=self.extra_args)
ref_args = (deepcopy(input), deepcopy(target)) + self.extra_args + (module,)
expected_out = self.reference_fn(*ref_args)
test_case.assertEqual(out, expected_out)
if self.check_forward_only:
return
params = tuple(x for x in module.parameters())
if not isinstance(input, tuple):
inputs = (input,) + params + (target,)
def apply_fn(input, target, *params):
return module(input, target)
else:
inputs = input + params + (target,)
def apply_fn(input1, input2, target, *params): # type: ignore[misc]
return module(input1, input2, target)
gradcheck(apply_fn, inputs, check_batched_grad=self.check_batched_grad)
if self.check_gradgrad:
gradgradcheck(apply_fn, inputs, check_batched_grad=self.check_batched_grad)
def test_cuda(self, test_case, dtype, extra_args=None):
def convert_dtype(obj, dtype, requires_grad=False):
if isinstance(obj, torch.Tensor):
return obj.detach().to(dtype=dtype).requires_grad_(requires_grad)
elif isinstance(obj, tuple):
return tuple(convert_dtype(o, dtype, requires_grad) for o in obj)
else:
return obj
if not TEST_CUDA or not self.should_test_cuda:
raise unittest.SkipTest('Excluded from CUDA tests')
cpu_input = self._get_input()
cpu_target = self._get_target()
cpu_module = self.constructor(*self.constructor_args)
gpu_module = self.constructor(*self.constructor_args)
# Convert input, target and module parameters to dtype
cpu_input = convert_dtype(cpu_input, dtype, True)
if cpu_target.is_floating_point() or cpu_target.is_complex():
cpu_target = convert_dtype(cpu_target, dtype)
cpu_module.type(dtype)
gpu_module.type(dtype)
# GPU setup
gpu_input = to_gpu(cpu_input)
gpu_target = to_gpu(cpu_target)
gpu_module.cuda()
# torch.HalfTensor doesn't support most operations, converting back to default
if dtype in {torch.half, torch.bfloat16}:
cpu_input = self._get_input()
cpu_target = self._get_target()
# Loss modules with weights require consistent input/module weight types
cpu_module = self.constructor(*self.constructor_args)
cpu_output = test_case._forward_criterion(cpu_module, cpu_input, cpu_target, extra_args=extra_args)
gpu_output = test_case._forward_criterion(gpu_module, gpu_input, gpu_target, extra_args=extra_args)
# dtype used to be able to be None, so set precision in this way instead of a precision map
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(cpu_output, gpu_output,
atol=1e-1 if dtype in {torch.half, torch.bfloat16} else 4e-4, rtol=0)
cpu_gradInput = test_case._backward_criterion(cpu_module, cpu_input, cpu_output, cpu_target, extra_args=extra_args)
gpu_gradInput = test_case._backward_criterion(gpu_module, gpu_input, gpu_output, gpu_target, extra_args=extra_args)
# dtype used to be able to be None, so set precision in this way instead of a precision map
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
test_case.assertEqualIgnoreType(cpu_gradInput, gpu_gradInput,
atol=1e-1 if dtype in {torch.half, torch.bfloat16} else 4e-4, rtol=0)
def _get_target(self):
return self._get_arg('target', False)
@property
def constructor_args(self):
return self._get_arg('constructor_args', False)
@property
def extra_args(self):
return self._get_arg('extra_args', False)
| 39.907892 | 132 | 0.588211 | 28,399 | 227,036 | 4.473327 | 0.035846 | 0.061163 | 0.03229 | 0.049064 | 0.827075 | 0.788221 | 0.759355 | 0.725988 | 0.690274 | 0.65175 | 0 | 0.041203 | 0.263681 | 227,036 | 5,688 | 133 | 39.914909 | 0.718725 | 0.053414 | 0 | 0.68547 | 0 | 0.01809 | 0.221244 | 0.127193 | 0.000195 | 0 | 0 | 0.000527 | 0.008559 | 1 | 0.026065 | false | 0 | 0.004085 | 0.004085 | 0.065357 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0a6a974a8615f792f525f72d49e7171fa68fa8ff | 207 | py | Python | conveyancer_ui/views/utils.py | LandRegistry/digital-street-conveyancer-ui | 54667ffe25dc816591eeb1c347b42f9beee21b03 | [
"MIT"
] | null | null | null | conveyancer_ui/views/utils.py | LandRegistry/digital-street-conveyancer-ui | 54667ffe25dc816591eeb1c347b42f9beee21b03 | [
"MIT"
] | null | null | null | conveyancer_ui/views/utils.py | LandRegistry/digital-street-conveyancer-ui | 54667ffe25dc816591eeb1c347b42f9beee21b03 | [
"MIT"
] | 4 | 2019-04-26T06:37:56.000Z | 2021-04-11T05:22:23.000Z |
def suffix(d):
return 'th' if 11 <= d <= 13 else {1: 'st', 2: 'nd', 3: 'rd'}.get(d % 10, 'th')
def custom_strftime(format, t):
return t.strftime(format).replace('{S}', str(t.day) + suffix(t.day))
| 25.875 | 83 | 0.565217 | 37 | 207 | 3.135135 | 0.675676 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053571 | 0.188406 | 207 | 7 | 84 | 29.571429 | 0.636905 | 0 | 0 | 0 | 0 | 0 | 0.063107 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
0a6c10ecbc8790e8fcece489276ec037cfe2cca8 | 116 | py | Python | example-generation.py | szymanskir/TAIO | 0a818cd2d9073d5cca322fcdc2a11afe425e73ef | [
"MIT"
] | null | null | null | example-generation.py | szymanskir/TAIO | 0a818cd2d9073d5cca322fcdc2a11afe425e73ef | [
"MIT"
] | null | null | null | example-generation.py | szymanskir/TAIO | 0a818cd2d9073d5cca322fcdc2a11afe425e73ef | [
"MIT"
] | null | null | null | import networkx as nx
from src.utils import save_graph
G1 = nx.Graph()
G1.add_edge(0, 1)
save_graph(G1, 'g1.txt')
| 14.5 | 32 | 0.724138 | 23 | 116 | 3.521739 | 0.652174 | 0.259259 | 0.271605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0.146552 | 116 | 7 | 33 | 16.571429 | 0.757576 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
0a6e1ad622e11beb33d63f94b9179fe97b220680 | 4,232 | py | Python | migrations/versions/eee104ca831d_.py | Kbaek11/Drug-Education-Project | 57500813179f901e601d3e809e05ca24115ee7d5 | [
"MIT"
] | null | null | null | migrations/versions/eee104ca831d_.py | Kbaek11/Drug-Education-Project | 57500813179f901e601d3e809e05ca24115ee7d5 | [
"MIT"
] | null | null | null | migrations/versions/eee104ca831d_.py | Kbaek11/Drug-Education-Project | 57500813179f901e601d3e809e05ca24115ee7d5 | [
"MIT"
] | null | null | null | """empty message
Revision ID: eee104ca831d
Revises:
Create Date: 2018-04-23 22:06:22.684637
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'eee104ca831d'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('users',
sa.Column('userId', sa.String(length=40), nullable=False),
sa.Column('team', sa.String(length=80), nullable=True),
sa.PrimaryKeyConstraint('userId')
)
op.create_table('calendar',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('userId', sa.String(length=40), nullable=True),
sa.Column('answeredDate', sa.DateTime(), nullable=False),
sa.Column('day1a', sa.String(length=80), nullable=True),
sa.Column('day1b', sa.String(length=80), nullable=True),
sa.Column('day1c', sa.String(length=80), nullable=True),
sa.Column('day2a', sa.String(length=80), nullable=True),
sa.Column('day2b', sa.String(length=80), nullable=True),
sa.Column('day2c', sa.String(length=80), nullable=True),
sa.Column('day3a', sa.String(length=80), nullable=True),
sa.Column('day3b', sa.String(length=80), nullable=True),
sa.Column('day3c', sa.String(length=80), nullable=True),
sa.Column('day4a', sa.String(length=80), nullable=True),
sa.Column('day4b', sa.String(length=80), nullable=True),
sa.Column('day4c', sa.String(length=80), nullable=True),
sa.Column('day5a', sa.String(length=80), nullable=True),
sa.Column('day5b', sa.String(length=80), nullable=True),
sa.Column('day5c', sa.String(length=80), nullable=True),
sa.Column('day6a', sa.String(length=80), nullable=True),
sa.Column('day6b', sa.String(length=80), nullable=True),
sa.Column('day6c', sa.String(length=80), nullable=True),
sa.Column('day7a', sa.String(length=80), nullable=True),
sa.Column('day7b', sa.String(length=80), nullable=True),
sa.Column('day7c', sa.String(length=80), nullable=True),
sa.Column('day8a', sa.String(length=80), nullable=True),
sa.Column('day8b', sa.String(length=80), nullable=True),
sa.Column('day8c', sa.String(length=80), nullable=True),
sa.Column('day9a', sa.String(length=80), nullable=True),
sa.Column('day9b', sa.String(length=80), nullable=True),
sa.Column('day9c', sa.String(length=80), nullable=True),
sa.Column('day10a', sa.String(length=80), nullable=True),
sa.Column('day10b', sa.String(length=80), nullable=True),
sa.Column('day10c', sa.String(length=80), nullable=True),
sa.Column('day11a', sa.String(length=80), nullable=True),
sa.Column('day11b', sa.String(length=80), nullable=True),
sa.Column('day11c', sa.String(length=80), nullable=True),
sa.Column('day12a', sa.String(length=80), nullable=True),
sa.Column('day12b', sa.String(length=80), nullable=True),
sa.Column('day12c', sa.String(length=80), nullable=True),
sa.Column('day13a', sa.String(length=80), nullable=True),
sa.Column('day13b', sa.String(length=80), nullable=True),
sa.Column('day13c', sa.String(length=80), nullable=True),
sa.Column('day14a', sa.String(length=80), nullable=True),
sa.Column('day14b', sa.String(length=80), nullable=True),
sa.Column('day14c', sa.String(length=80), nullable=True),
sa.Column('q1', sa.String(length=80), nullable=True),
sa.Column('q2', sa.String(length=80), nullable=True),
sa.Column('q3', sa.String(length=80), nullable=True),
sa.Column('q4', sa.String(length=80), nullable=True),
sa.Column('q5', sa.String(length=80), nullable=True),
sa.Column('q6', sa.String(length=80), nullable=True),
sa.Column('q7', sa.String(length=80), nullable=True),
sa.Column('q8', sa.String(length=80), nullable=True),
sa.Column('q9', sa.String(length=80), nullable=True),
sa.Column('q10', sa.String(length=80), nullable=True),
sa.ForeignKeyConstraint(['userId'], ['users.userId'], ),
sa.PrimaryKeyConstraint('id')
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('calendar')
op.drop_table('users')
# ### end Alembic commands ###
| 45.505376 | 65 | 0.670841 | 599 | 4,232 | 4.72788 | 0.183639 | 0.161017 | 0.271893 | 0.299435 | 0.727401 | 0.727401 | 0.727401 | 0.706215 | 0 | 0 | 0 | 0.056972 | 0.129017 | 4,232 | 92 | 66 | 46 | 0.71134 | 0.066871 | 0 | 0 | 0 | 0 | 0.086912 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.027027 | 0 | 0.054054 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6a59fc41fe9acfa4b268a431ce6c579b99481536 | 1,699 | py | Python | tests/test_reactions.py | DleanJeans/dpytest | 36713995b8cbccfbb597a53c499c1c8bb7fab8ff | [
"MIT"
] | 71 | 2019-04-23T09:28:30.000Z | 2022-02-07T15:32:07.000Z | tests/test_reactions.py | DleanJeans/dpytest | 36713995b8cbccfbb597a53c499c1c8bb7fab8ff | [
"MIT"
] | 61 | 2019-05-04T09:35:32.000Z | 2022-03-19T16:37:20.000Z | tests/test_reactions.py | DleanJeans/dpytest | 36713995b8cbccfbb597a53c499c1c8bb7fab8ff | [
"MIT"
] | 29 | 2019-04-12T12:24:49.000Z | 2022-01-20T19:09:30.000Z | import pytest
import discord.ext.test as dpytest
@pytest.mark.asyncio
async def test_add_reaction(bot):
g = bot.guilds[0]
c = g.text_channels[0]
message = await c.send("Test Message")
await message.add_reaction("😂")
# This is d.py/discord's fault, the message object from send isn't the same as the one in the state
message = await c.fetch_message(message.id)
assert len(message.reactions) == 1
@pytest.mark.asyncio
async def test_remove_reaction(bot):
g = bot.guilds[0]
c = g.text_channels[0]
message = await c.send("Test Message")
await message.add_reaction("😂") # Assumes the test above passed
await message.remove_reaction("😂", g.me)
message = await c.fetch_message(message.id)
assert len(message.reactions) == 0
@pytest.mark.asyncio
async def test_user_add_reaction(bot):
g = bot.guilds[0]
c = g.text_channels[0]
m = g.members[0]
message = await c.send("Test Message")
await dpytest.add_reaction(m, message, "😂")
# Assumes the above tests pass
message = await c.fetch_message(message.id)
react = message.reactions[0]
assert react.emoji == "😂"
assert react.me is False
@pytest.mark.asyncio
async def test_user_remove_reaction(bot):
g = bot.guilds[0]
c = g.text_channels[0]
m = g.members[0]
message = await c.send("Test Message")
await message.add_reaction("😂")
await dpytest.add_reaction(m, message, "😂")
await dpytest.remove_reaction(m, message, "😂")
# Assumes the above tests pass
message = await c.fetch_message(message.id)
react = message.reactions[0]
assert react.emoji == "😂"
assert react.count == 1
assert react.me is True
| 26.546875 | 103 | 0.676869 | 262 | 1,699 | 4.328244 | 0.229008 | 0.126984 | 0.091711 | 0.077601 | 0.801587 | 0.801587 | 0.750441 | 0.655203 | 0.655203 | 0.655203 | 0 | 0.011078 | 0.203061 | 1,699 | 63 | 104 | 26.968254 | 0.819793 | 0.108888 | 0 | 0.704545 | 0 | 0 | 0.037773 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6a86304b826be5cd24e22821a8ab54ebcf716d66 | 69 | py | Python | electrum_gui/common/provider/__init__.py | BixinKey/electrum | f5de4e74e313b9b569f13ba6ab9142a38bf095f2 | [
"MIT"
] | 12 | 2020-11-12T08:53:05.000Z | 2021-07-06T17:30:39.000Z | electrum_gui/common/provider/__init__.py | liyanhrxy/electrum | 107608ef201ff1d20d2f6091c257b1ceff9b7362 | [
"MIT"
] | 209 | 2020-09-23T06:58:18.000Z | 2021-11-18T11:25:41.000Z | electrum_gui/common/provider/__init__.py | liyanhrxy/electrum | 107608ef201ff1d20d2f6091c257b1ceff9b7362 | [
"MIT"
] | 19 | 2020-10-13T11:42:26.000Z | 2022-02-06T01:26:34.000Z | from electrum_gui.common.provider import manager as provider_manager
| 34.5 | 68 | 0.884058 | 10 | 69 | 5.9 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 69 | 1 | 69 | 69 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
6ab4271b474ac04fbcd7b2b60bc201e377143cbc | 109 | py | Python | alastria_identity/types/config_parser.py | alastria/alastria-identity-lib-py | 63ec9d9e60d267c3900d2a827b5d4adb7d265acb | [
"MIT"
] | null | null | null | alastria_identity/types/config_parser.py | alastria/alastria-identity-lib-py | 63ec9d9e60d267c3900d2a827b5d4adb7d265acb | [
"MIT"
] | 2 | 2020-12-01T08:50:25.000Z | 2020-12-16T15:10:33.000Z | alastria_identity/types/config_parser.py | alastria/alastria-identity-lib-py | 63ec9d9e60d267c3900d2a827b5d4adb7d265acb | [
"MIT"
] | 2 | 2020-10-21T11:22:40.000Z | 2021-04-17T15:36:56.000Z | from abc import ABC, abstractmethod
class ConfigParser(ABC):
@abstractmethod
def parse(self): pass
| 15.571429 | 35 | 0.733945 | 13 | 109 | 6.153846 | 0.769231 | 0.425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192661 | 109 | 6 | 36 | 18.166667 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.