hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a2ed35941a9dac9a4f7611dd4446d90469c16f94 | 194 | py | Python | irida_uploader_cl/core/__init__.py | duanjunhyq/irida_uploader_cl | d0e5d404c5b5b10c3411ded71a20f5ab062aabba | [
"MIT"
] | null | null | null | irida_uploader_cl/core/__init__.py | duanjunhyq/irida_uploader_cl | d0e5d404c5b5b10c3411ded71a20f5ab062aabba | [
"MIT"
] | null | null | null | irida_uploader_cl/core/__init__.py | duanjunhyq/irida_uploader_cl | d0e5d404c5b5b10c3411ded71a20f5ab062aabba | [
"MIT"
] | null | null | null | from irida_uploader_cl.core import logger
from irida_uploader_cl.core import cli_entry
from irida_uploader_cl.core import exit_return
from irida_uploader_cl.core.cli_entry import VERSION_NUMBER
| 38.8 | 59 | 0.891753 | 33 | 194 | 4.878788 | 0.393939 | 0.223602 | 0.42236 | 0.47205 | 0.68323 | 0.540373 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082474 | 194 | 4 | 60 | 48.5 | 0.904494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a7480f8b02005bd706fd79a2d64bd58d5d417334 | 11,250 | py | Python | models/seed_resnet.py | VITA-Group/Peek-a-Boo | 9290d4e5e3aee0dff994e1a664ec91bd6ec93176 | [
"MIT"
] | 2 | 2022-01-22T03:57:21.000Z | 2022-01-30T20:44:32.000Z | models/seed_resnet.py | VITA-Group/Peek-a-Boo | 9290d4e5e3aee0dff994e1a664ec91bd6ec93176 | [
"MIT"
] | null | null | null | models/seed_resnet.py | VITA-Group/Peek-a-Boo | 9290d4e5e3aee0dff994e1a664ec91bd6ec93176 | [
"MIT"
] | 2 | 2022-01-30T12:26:56.000Z | 2022-03-14T12:42:06.000Z | '''ResNet in PyTorch.
For Pre-activation ResNet, see 'preact_resnet.py'.
Reference:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
'''
import torch
import torch.nn as nn
import torch.nn.functional as F
from .seed_conv import SeedConv2d
from masked_layers import layers
class LambdaLayer(nn.Module):
def __init__(self, lambd):
super(LambdaLayer, self).__init__()
self.lambd = lambd
def forward(self, x):
return self.lambd(x)
class SeedBasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False):
super(SeedBasicBlock, self).__init__()
self.conv1 = SeedConv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = SeedConv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
SeedConv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class SeedBasicBlock2(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False):
super(SeedBasicBlock2, self).__init__()
self.conv1 = SeedConv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = SeedConv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = LambdaLayer(
lambda x: F.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes//4, planes//4), "constant", 0)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class SeedBottleneck(nn.Module):
expansion = 4
def __init__(self, in_planes, planes, stride=1, sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False):
super(SeedBottleneck, self).__init__()
self.conv1 = SeedConv2d(in_planes, planes, kernel_size=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = SeedConv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = SeedConv2d(planes, self.expansion*planes, kernel_size=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn3 = nn.BatchNorm2d(self.expansion*planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
SeedConv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
out = self.bn3(self.conv3(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class SeedResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10, sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False):
super(SeedResNet, self).__init__()
self.in_planes = 64
self.conv1 = SeedConv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.linear = nn.Linear(512*block.expansion, num_classes)
def _make_layer(self, block, planes, num_blocks, stride, sign_grouped_dim, init_method, hidden_act, scaling_input):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride, sign_grouped_dim, init_method, hidden_act, scaling_input=scaling_input))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
class SeedResNetCifar(nn.Module):
def __init__(self, block, num_blocks, num_classes=10, sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False):
super(SeedResNetCifar, self).__init__()
self.in_planes = 16
self.conv1 = SeedConv2d(3, 16, kernel_size=3, stride=1, padding=1, bias=False,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.bn1 = nn.BatchNorm2d(16)
self.layer1 = self._make_layer(block, 16, num_blocks[0], stride=1,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.layer2 = self._make_layer(block, 32, num_blocks[1], stride=2,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.layer3 = self._make_layer(block, 64, num_blocks[2], stride=2,
sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input)
self.linear = nn.Linear(64, num_classes)
def _make_layer(self, block, planes, num_blocks, stride, sign_grouped_dim, init_method, hidden_act, scaling_input):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride, sign_grouped_dim, init_method, hidden_act, scaling_input=scaling_input))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = F.avg_pool2d(out, out.size()[3])
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def SeedResNet18(sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False, num_classes=10):
return SeedResNet(SeedBasicBlock, [2,2,2,2], sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input, num_classes=num_classes)
def SeedResNet20(sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False, num_classes=10):
return SeedResNetCifar(SeedBasicBlock2, [3, 3, 3], sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input, num_classes=num_classes)
def SeedResNet34(sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False, num_classes=10):
return SeedResNet(SeedBasicBlock, [3,4,6,3], sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input, num_classes=num_classes)
def SeedResNet50(sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False, num_classes=10):
return SeedResNet(SeedBottleneck, [3,4,6,3], sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input, num_classes=num_classes)
def SeedResNet101(sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False, num_classes=10):
return SeedResNet(SeedBottleneck, [3,4,23,3], sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input, num_classes=num_classes)
def SeedResNet152(sign_grouped_dim=(), init_method='standard', hidden_act='none', scaling_input=False, num_classes=10):
return SeedResNet(SeedBottleneck, [3,8,36,3], sign_grouped_dim=sign_grouped_dim, init_method=init_method, hidden_act=hidden_act, scaling_input=scaling_input, num_classes=num_classes)
def test():
net = SeedResNet18()
y = net(torch.randn(1,3,32,32))
print(y.size())
# test()
| 53.827751 | 191 | 0.6808 | 1,520 | 11,250 | 4.742105 | 0.086184 | 0.108213 | 0.122364 | 0.097392 | 0.87389 | 0.863208 | 0.853496 | 0.842397 | 0.842397 | 0.842397 | 0 | 0.028186 | 0.208444 | 11,250 | 208 | 192 | 54.086538 | 0.781246 | 0.018756 | 0 | 0.590062 | 0 | 0 | 0.01269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.031056 | 0.043478 | 0.304348 | 0.006211 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
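The `_make_layer` helpers in the `seed_resnet.py` content above build each stage from a strides list, `[stride] + [1]*(num_blocks-1)`: only the first block of a stage downsamples, and the rest run at stride 1. A minimal standalone sketch of that computation (the `make_strides` name is introduced here for illustration, not taken from the file):

```python
def make_strides(stride, num_blocks):
    # Only the first block of a stage downsamples (stride may be 2);
    # the remaining blocks keep the spatial size (stride 1).
    return [stride] + [1] * (num_blocks - 1)

# Stage configuration used by SeedResNet18: num_blocks = [2, 2, 2, 2],
# with per-stage strides 1, 2, 2, 2 as set in SeedResNet.__init__.
print([make_strides(s, n) for s, n in zip([1, 2, 2, 2], [2, 2, 2, 2])])
# → [[1, 1], [2, 1], [2, 1], [2, 1]]
```

With an input of 32×32 (as in `test()`), the three stride-2 stages halve the feature map down to 4×4, which is why `forward` ends with `F.avg_pool2d(out, 4)`.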
a750055dd66c90b301cc30edbe231fdc5b082d34 | 143 | py | Python | swig/python/gdalconst.py | VisualAwarenessTech/gdal-2.2.1 | 5ea1c6671d6f0f3b93e9e9bf2a71da618c834e8d | [
"Apache-2.0"
] | 13 | 2015-11-18T18:26:34.000Z | 2021-05-09T13:59:46.000Z | swig/python/gdalconst.py | VisualAwarenessTech/gdal-2.2.1 | 5ea1c6671d6f0f3b93e9e9bf2a71da618c834e8d | [
"Apache-2.0"
] | 7 | 2021-06-04T23:45:15.000Z | 2022-03-12T00:44:14.000Z | swig/python/gdalconst.py | VisualAwarenessTech/gdal-2.2.1 | 5ea1c6671d6f0f3b93e9e9bf2a71da618c834e8d | [
"Apache-2.0"
] | 6 | 2019-02-03T14:19:32.000Z | 2021-12-19T06:36:49.000Z | # import osgeo.gdalconst as a convenience
from osgeo.gdal import deprecation_warn
deprecation_warn('gdalconst')
from osgeo.gdalconst import *
| 23.833333 | 41 | 0.825175 | 19 | 143 | 6.105263 | 0.526316 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111888 | 143 | 5 | 42 | 28.6 | 0.913386 | 0.272727 | 0 | 0 | 0 | 0 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
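The `gdalconst.py` row above is a deprecation shim: it re-exports everything from `osgeo.gdalconst` and calls `deprecation_warn` so legacy `import gdalconst` code keeps working while nudging users to the new path. A rough sketch of how such a shim can be built with the standard library (`deprecation_warn` below is a stand-in written for this example, not the real `osgeo.gdal` implementation):

```python
import warnings

def deprecation_warn(module_name):
    # Stand-in for osgeo.gdal.deprecation_warn: point callers at the
    # new import path via a DeprecationWarning.
    warnings.warn(
        f"'{module_name}' is deprecated; use 'osgeo.{module_name}' instead",
        DeprecationWarning,
        stacklevel=2,
    )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecation_warn("gdalconst")

print(len(caught), caught[0].category.__name__)
# → 1 DeprecationWarning
```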
a773e75f4491872ee81258c12c495865b570e2c8 | 277,391 | py | Python | test/unit/test_discovery_v2.py | timgates42/python-sdk | 8a6647636266f987816eb1d747782c07c3cfcba3 | [
"Apache-2.0"
] | 1 | 2021-06-11T03:12:15.000Z | 2021-06-11T03:12:15.000Z | test/unit/test_discovery_v2.py | timgates42/python-sdk | 8a6647636266f987816eb1d747782c07c3cfcba3 | [
"Apache-2.0"
] | null | null | null | test/unit/test_discovery_v2.py | timgates42/python-sdk | 8a6647636266f987816eb1d747782c07c3cfcba3 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# (C) Copyright IBM Corp. 2019, 2020.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Unit Tests for DiscoveryV2
"""
from datetime import datetime, timezone
from ibm_cloud_sdk_core.authenticators.no_auth_authenticator import NoAuthAuthenticator
import inspect
import io
import json
import pytest
import re
import requests
import responses
import tempfile
import urllib
from ibm_watson.discovery_v2 import *
version = 'testString'
service = DiscoveryV2(
authenticator=NoAuthAuthenticator(),
version=version
)
base_url = 'https://api.us-south.discovery.watson.cloud.ibm.com'
service.set_service_url(base_url)
##############################################################################
# Start of Service: Collections
##############################################################################
# region
class TestListCollections():
"""
Test Class for list_collections
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_list_collections_all_params(self):
"""
list_collections()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections')
mock_response = '{"collections": [{"collection_id": "collection_id", "name": "name"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.list_collections(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_list_collections_value_error(self):
"""
test_list_collections_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections')
mock_response = '{"collections": [{"collection_id": "collection_id", "name": "name"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.list_collections(**req_copy)
class TestCreateCollection():
"""
Test Class for create_collection
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_create_collection_all_params(self):
"""
create_collection()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections')
mock_response = '{"collection_id": "collection_id", "name": "name", "description": "description", "created": "2019-01-01T12:00:00", "language": "language", "enrichments": [{"enrichment_id": "enrichment_id", "fields": ["fields"]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a CollectionEnrichment model
collection_enrichment_model = {}
collection_enrichment_model['enrichment_id'] = 'testString'
collection_enrichment_model['fields'] = ['testString']
# Set up parameter values
project_id = 'testString'
name = 'testString'
description = 'testString'
language = 'testString'
enrichments = [collection_enrichment_model]
# Invoke method
response = service.create_collection(
project_id,
name,
description=description,
language=language,
enrichments=enrichments,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['name'] == 'testString'
assert req_body['description'] == 'testString'
assert req_body['language'] == 'testString'
assert req_body['enrichments'] == [collection_enrichment_model]
@responses.activate
def test_create_collection_value_error(self):
"""
test_create_collection_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections')
mock_response = '{"collection_id": "collection_id", "name": "name", "description": "description", "created": "2019-01-01T12:00:00", "language": "language", "enrichments": [{"enrichment_id": "enrichment_id", "fields": ["fields"]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a CollectionEnrichment model
collection_enrichment_model = {}
collection_enrichment_model['enrichment_id'] = 'testString'
collection_enrichment_model['fields'] = ['testString']
# Set up parameter values
project_id = 'testString'
name = 'testString'
description = 'testString'
language = 'testString'
enrichments = [collection_enrichment_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"name": name,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.create_collection(**req_copy)
class TestGetCollection():
"""
Test Class for get_collection
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_collection_all_params(self):
"""
get_collection()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString')
mock_response = '{"collection_id": "collection_id", "name": "name", "description": "description", "created": "2019-01-01T12:00:00", "language": "language", "enrichments": [{"enrichment_id": "enrichment_id", "fields": ["fields"]}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Invoke method
response = service.get_collection(
project_id,
collection_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_collection_value_error(self):
"""
test_get_collection_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString')
mock_response = '{"collection_id": "collection_id", "name": "name", "description": "description", "created": "2019-01-01T12:00:00", "language": "language", "enrichments": [{"enrichment_id": "enrichment_id", "fields": ["fields"]}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"collection_id": collection_id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.get_collection(**req_copy)
class TestUpdateCollection():
"""
Test Class for update_collection
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_update_collection_all_params(self):
"""
update_collection()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString')
mock_response = '{"collection_id": "collection_id", "name": "name", "description": "description", "created": "2019-01-01T12:00:00", "language": "language", "enrichments": [{"enrichment_id": "enrichment_id", "fields": ["fields"]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a CollectionEnrichment model
collection_enrichment_model = {}
collection_enrichment_model['enrichment_id'] = 'testString'
collection_enrichment_model['fields'] = ['testString']
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
name = 'testString'
description = 'testString'
enrichments = [collection_enrichment_model]
# Invoke method
response = service.update_collection(
project_id,
collection_id,
name=name,
description=description,
enrichments=enrichments,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['name'] == 'testString'
assert req_body['description'] == 'testString'
assert req_body['enrichments'] == [collection_enrichment_model]
@responses.activate
def test_update_collection_value_error(self):
"""
test_update_collection_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString')
mock_response = '{"collection_id": "collection_id", "name": "name", "description": "description", "created": "2019-01-01T12:00:00", "language": "language", "enrichments": [{"enrichment_id": "enrichment_id", "fields": ["fields"]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a CollectionEnrichment model
collection_enrichment_model = {}
collection_enrichment_model['enrichment_id'] = 'testString'
collection_enrichment_model['fields'] = ['testString']
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
name = 'testString'
description = 'testString'
enrichments = [collection_enrichment_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"collection_id": collection_id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.update_collection(**req_copy)
class TestDeleteCollection():
"""
Test Class for delete_collection
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_collection_all_params(self):
"""
delete_collection()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Invoke method
response = service.delete_collection(
project_id,
collection_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 204
@responses.activate
def test_delete_collection_value_error(self):
"""
test_delete_collection_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"collection_id": collection_id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.delete_collection(**req_copy)
# endregion
##############################################################################
# End of Service: Collections
##############################################################################
##############################################################################
# Start of Service: Queries
##############################################################################
# region
class TestQuery():
"""
Test Class for query
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_query_all_params(self):
"""
query()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/query')
mock_response = '{"matching_results": 16, "results": [{"document_id": "document_id", "metadata": {"mapKey": {"anyKey": "anyValue"}}, "result_metadata": {"document_retrieval_source": "search", "collection_id": "collection_id", "confidence": 10}, "document_passages": [{"passage_text": "passage_text", "start_offset": 12, "end_offset": 10, "field": "field"}]}], "aggregations": [{"type": "filter", "match": "match", "matching_results": 16}], "retrieval_details": {"document_retrieval_strategy": "untrained"}, "suggested_query": "suggested_query", "suggested_refinements": [{"text": "text"}], "table_results": [{"table_id": "table_id", "source_document_id": "source_document_id", "collection_id": "collection_id", "table_html": "table_html", "table_html_offset": 17, "table": {"location": {"begin": 5, "end": 3}, "text": "text", "section_title": {"text": "text", "location": {"begin": 5, "end": 3}}, "title": {"text": "text", "location": {"begin": 5, "end": 3}}, "table_headers": [{"cell_id": "cell_id", "location": {"anyKey": "anyValue"}, "text": "text", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "row_headers": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text", "text_normalized": "text_normalized", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "column_headers": [{"cell_id": "cell_id", "location": {"anyKey": "anyValue"}, "text": "text", "text_normalized": "text_normalized", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "key_value_pairs": [{"key": {"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text"}, "value": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text"}]}], "body_cells": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16, "row_header_ids": 
[{"id": "id"}], "row_header_texts": [{"text": "text"}], "row_header_texts_normalized": [{"text_normalized": "text_normalized"}], "column_header_ids": [{"id": "id"}], "column_header_texts": [{"text": "text"}], "column_header_texts_normalized": [{"text_normalized": "text_normalized"}], "attributes": [{"type": "type", "text": "text", "location": {"begin": 5, "end": 3}}]}], "contexts": [{"text": "text", "location": {"begin": 5, "end": 3}}]}}], "passages": [{"passage_text": "passage_text", "passage_score": 13, "document_id": "document_id", "collection_id": "collection_id", "start_offset": 12, "end_offset": 10, "field": "field"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a QueryLargeTableResults model
query_large_table_results_model = {}
query_large_table_results_model['enabled'] = True
query_large_table_results_model['count'] = 38
# Construct a dict representation of a QueryLargeSuggestedRefinements model
query_large_suggested_refinements_model = {}
query_large_suggested_refinements_model['enabled'] = True
query_large_suggested_refinements_model['count'] = 1
# Construct a dict representation of a QueryLargePassages model
query_large_passages_model = {}
query_large_passages_model['enabled'] = True
query_large_passages_model['per_document'] = True
query_large_passages_model['max_per_document'] = 38
query_large_passages_model['fields'] = ['testString']
query_large_passages_model['count'] = 100
query_large_passages_model['characters'] = 50
# Set up parameter values
project_id = 'testString'
collection_ids = ['testString']
filter = 'testString'
query = 'testString'
natural_language_query = 'testString'
aggregation = 'testString'
count = 38
return_ = ['testString']
offset = 38
sort = 'testString'
highlight = True
spelling_suggestions = True
table_results = query_large_table_results_model
suggested_refinements = query_large_suggested_refinements_model
passages = query_large_passages_model
# Invoke method
response = service.query(
project_id,
collection_ids=collection_ids,
filter=filter,
query=query,
natural_language_query=natural_language_query,
aggregation=aggregation,
count=count,
return_=return_,
offset=offset,
sort=sort,
highlight=highlight,
spelling_suggestions=spelling_suggestions,
table_results=table_results,
suggested_refinements=suggested_refinements,
passages=passages,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['collection_ids'] == ['testString']
assert req_body['filter'] == 'testString'
assert req_body['query'] == 'testString'
assert req_body['natural_language_query'] == 'testString'
assert req_body['aggregation'] == 'testString'
assert req_body['count'] == 38
assert req_body['return'] == ['testString']
assert req_body['offset'] == 38
assert req_body['sort'] == 'testString'
assert req_body['highlight'] is True
assert req_body['spelling_suggestions'] is True
assert req_body['table_results'] == query_large_table_results_model
assert req_body['suggested_refinements'] == query_large_suggested_refinements_model
assert req_body['passages'] == query_large_passages_model
@responses.activate
def test_query_required_params(self):
"""
test_query_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/query')
mock_response = '{"matching_results": 16, "results": [{"document_id": "document_id", "metadata": {"mapKey": {"anyKey": "anyValue"}}, "result_metadata": {"document_retrieval_source": "search", "collection_id": "collection_id", "confidence": 10}, "document_passages": [{"passage_text": "passage_text", "start_offset": 12, "end_offset": 10, "field": "field"}]}], "aggregations": [{"type": "filter", "match": "match", "matching_results": 16}], "retrieval_details": {"document_retrieval_strategy": "untrained"}, "suggested_query": "suggested_query", "suggested_refinements": [{"text": "text"}], "table_results": [{"table_id": "table_id", "source_document_id": "source_document_id", "collection_id": "collection_id", "table_html": "table_html", "table_html_offset": 17, "table": {"location": {"begin": 5, "end": 3}, "text": "text", "section_title": {"text": "text", "location": {"begin": 5, "end": 3}}, "title": {"text": "text", "location": {"begin": 5, "end": 3}}, "table_headers": [{"cell_id": "cell_id", "location": {"anyKey": "anyValue"}, "text": "text", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "row_headers": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text", "text_normalized": "text_normalized", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "column_headers": [{"cell_id": "cell_id", "location": {"anyKey": "anyValue"}, "text": "text", "text_normalized": "text_normalized", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "key_value_pairs": [{"key": {"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text"}, "value": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text"}]}], "body_cells": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16, "row_header_ids": 
[{"id": "id"}], "row_header_texts": [{"text": "text"}], "row_header_texts_normalized": [{"text_normalized": "text_normalized"}], "column_header_ids": [{"id": "id"}], "column_header_texts": [{"text": "text"}], "column_header_texts_normalized": [{"text_normalized": "text_normalized"}], "attributes": [{"type": "type", "text": "text", "location": {"begin": 5, "end": 3}}]}], "contexts": [{"text": "text", "location": {"begin": 5, "end": 3}}]}}], "passages": [{"passage_text": "passage_text", "passage_score": 13, "document_id": "document_id", "collection_id": "collection_id", "start_offset": 12, "end_offset": 10, "field": "field"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.query(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_query_value_error(self):
"""
test_query_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/query')
mock_response = '{"matching_results": 16, "results": [{"document_id": "document_id", "metadata": {"mapKey": {"anyKey": "anyValue"}}, "result_metadata": {"document_retrieval_source": "search", "collection_id": "collection_id", "confidence": 10}, "document_passages": [{"passage_text": "passage_text", "start_offset": 12, "end_offset": 10, "field": "field"}]}], "aggregations": [{"type": "filter", "match": "match", "matching_results": 16}], "retrieval_details": {"document_retrieval_strategy": "untrained"}, "suggested_query": "suggested_query", "suggested_refinements": [{"text": "text"}], "table_results": [{"table_id": "table_id", "source_document_id": "source_document_id", "collection_id": "collection_id", "table_html": "table_html", "table_html_offset": 17, "table": {"location": {"begin": 5, "end": 3}, "text": "text", "section_title": {"text": "text", "location": {"begin": 5, "end": 3}}, "title": {"text": "text", "location": {"begin": 5, "end": 3}}, "table_headers": [{"cell_id": "cell_id", "location": {"anyKey": "anyValue"}, "text": "text", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "row_headers": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text", "text_normalized": "text_normalized", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "column_headers": [{"cell_id": "cell_id", "location": {"anyKey": "anyValue"}, "text": "text", "text_normalized": "text_normalized", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16}], "key_value_pairs": [{"key": {"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text"}, "value": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text"}]}], "body_cells": [{"cell_id": "cell_id", "location": {"begin": 5, "end": 3}, "text": "text", "row_index_begin": 15, "row_index_end": 13, "column_index_begin": 18, "column_index_end": 16, "row_header_ids": 
[{"id": "id"}], "row_header_texts": [{"text": "text"}], "row_header_texts_normalized": [{"text_normalized": "text_normalized"}], "column_header_ids": [{"id": "id"}], "column_header_texts": [{"text": "text"}], "column_header_texts_normalized": [{"text_normalized": "text_normalized"}], "attributes": [{"type": "type", "text": "text", "location": {"begin": 5, "end": 3}}]}], "contexts": [{"text": "text", "location": {"begin": 5, "end": 3}}]}}], "passages": [{"passage_text": "passage_text", "passage_score": 13, "document_id": "document_id", "collection_id": "collection_id", "start_offset": 12, "end_offset": 10, "field": "field"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.query(**req_copy)
class TestGetAutocompletion():
"""
Test Class for get_autocompletion
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_autocompletion_all_params(self):
"""
get_autocompletion()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/autocompletion')
mock_response = '{"completions": ["completions"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
prefix = 'testString'
collection_ids = ['testString']
field = 'testString'
count = 38
# Invoke method
response = service.get_autocompletion(
project_id,
prefix,
collection_ids=collection_ids,
field=field,
count=count,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'prefix={}'.format(prefix) in query_string
assert 'collection_ids={}'.format(','.join(collection_ids)) in query_string
assert 'field={}'.format(field) in query_string
assert 'count={}'.format(count) in query_string
@responses.activate
def test_get_autocompletion_required_params(self):
"""
test_get_autocompletion_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/autocompletion')
mock_response = '{"completions": ["completions"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
prefix = 'testString'
# Invoke method
response = service.get_autocompletion(
project_id,
prefix,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'prefix={}'.format(prefix) in query_string
@responses.activate
def test_get_autocompletion_value_error(self):
"""
test_get_autocompletion_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/autocompletion')
mock_response = '{"completions": ["completions"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
prefix = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"prefix": prefix,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.get_autocompletion(**req_copy)
class TestQueryNotices():
"""
Test Class for query_notices
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_query_notices_all_params(self):
"""
query_notices()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/notices')
mock_response = '{"matching_results": 16, "notices": [{"notice_id": "notice_id", "created": "2019-01-01T12:00:00", "document_id": "document_id", "collection_id": "collection_id", "query_id": "query_id", "severity": "warning", "step": "step", "description": "description"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
filter = 'testString'
query = 'testString'
natural_language_query = 'testString'
count = 38
offset = 38
# Invoke method
response = service.query_notices(
project_id,
filter=filter,
query=query,
natural_language_query=natural_language_query,
count=count,
offset=offset,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'filter={}'.format(filter) in query_string
assert 'query={}'.format(query) in query_string
assert 'natural_language_query={}'.format(natural_language_query) in query_string
assert 'count={}'.format(count) in query_string
assert 'offset={}'.format(offset) in query_string
@responses.activate
def test_query_notices_required_params(self):
"""
test_query_notices_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/notices')
mock_response = '{"matching_results": 16, "notices": [{"notice_id": "notice_id", "created": "2019-01-01T12:00:00", "document_id": "document_id", "collection_id": "collection_id", "query_id": "query_id", "severity": "warning", "step": "step", "description": "description"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.query_notices(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_query_notices_value_error(self):
"""
test_query_notices_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/notices')
mock_response = '{"matching_results": 16, "notices": [{"notice_id": "notice_id", "created": "2019-01-01T12:00:00", "document_id": "document_id", "collection_id": "collection_id", "query_id": "query_id", "severity": "warning", "step": "step", "description": "description"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.query_notices(**req_copy)
class TestListFields():
"""
Test Class for list_fields
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_list_fields_all_params(self):
"""
list_fields()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/fields')
mock_response = '{"fields": [{"field": "field", "type": "nested", "collection_id": "collection_id"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_ids = ['testString']
# Invoke method
response = service.list_fields(
project_id,
collection_ids=collection_ids,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'collection_ids={}'.format(','.join(collection_ids)) in query_string
@responses.activate
def test_list_fields_required_params(self):
"""
test_list_fields_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/fields')
mock_response = '{"fields": [{"field": "field", "type": "nested", "collection_id": "collection_id"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.list_fields(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_list_fields_value_error(self):
"""
test_list_fields_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/fields')
mock_response = '{"fields": [{"field": "field", "type": "nested", "collection_id": "collection_id"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.list_fields(**req_copy)
# endregion
##############################################################################
# End of Service: Queries
##############################################################################
##############################################################################
# Start of Service: ComponentSettings
##############################################################################
# region
class TestGetComponentSettings():
"""
Test Class for get_component_settings
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_component_settings_all_params(self):
"""
get_component_settings()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/component_settings')
mock_response = '{"fields_shown": {"body": {"use_passage": false, "field": "field"}, "title": {"field": "field"}}, "autocomplete": true, "structured_search": false, "results_per_page": 16, "aggregations": [{"name": "name", "label": "label", "multiple_selections_allowed": false, "visualization_type": "auto"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.get_component_settings(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_component_settings_value_error(self):
"""
test_get_component_settings_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/component_settings')
mock_response = '{"fields_shown": {"body": {"use_passage": false, "field": "field"}, "title": {"field": "field"}}, "autocomplete": true, "structured_search": false, "results_per_page": 16, "aggregations": [{"name": "name", "label": "label", "multiple_selections_allowed": false, "visualization_type": "auto"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.get_component_settings(**req_copy)
# endregion
##############################################################################
# End of Service: ComponentSettings
##############################################################################
##############################################################################
# Start of Service: Documents
##############################################################################
# region
class TestAddDocument():
"""
Test Class for add_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_add_document_all_params(self):
"""
add_document()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents')
mock_response = '{"document_id": "document_id", "status": "processing"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
file = io.BytesIO(b'This is a mock file.').getvalue()
filename = 'testString'
file_content_type = 'application/json'
metadata = 'testString'
x_watson_discovery_force = True
# Invoke method
response = service.add_document(
project_id,
collection_id,
file=file,
filename=filename,
file_content_type=file_content_type,
metadata=metadata,
x_watson_discovery_force=x_watson_discovery_force,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
@responses.activate
def test_add_document_required_params(self):
"""
test_add_document_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents')
mock_response = '{"document_id": "document_id", "status": "processing"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Invoke method
response = service.add_document(
project_id,
collection_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
@responses.activate
def test_add_document_value_error(self):
"""
test_add_document_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents')
mock_response = '{"document_id": "document_id", "status": "processing"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"collection_id": collection_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.add_document(**req_copy)
class TestUpdateDocument():
"""
Test Class for update_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_update_document_all_params(self):
"""
update_document()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents/testString')
mock_response = '{"document_id": "document_id", "status": "processing"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
document_id = 'testString'
file = io.BytesIO(b'This is a mock file.').getvalue()
filename = 'testString'
file_content_type = 'application/json'
metadata = 'testString'
x_watson_discovery_force = True
# Invoke method
response = service.update_document(
project_id,
collection_id,
document_id,
file=file,
filename=filename,
file_content_type=file_content_type,
metadata=metadata,
x_watson_discovery_force=x_watson_discovery_force,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
@responses.activate
def test_update_document_required_params(self):
"""
test_update_document_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents/testString')
mock_response = '{"document_id": "document_id", "status": "processing"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
document_id = 'testString'
# Invoke method
response = service.update_document(
project_id,
collection_id,
document_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
@responses.activate
def test_update_document_value_error(self):
"""
test_update_document_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents/testString')
mock_response = '{"document_id": "document_id", "status": "processing"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
document_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"collection_id": collection_id,
"document_id": document_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.update_document(**req_copy)
class TestDeleteDocument():
"""
Test Class for delete_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_document_all_params(self):
"""
delete_document()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents/testString')
mock_response = '{"document_id": "document_id", "status": "deleted"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
document_id = 'testString'
x_watson_discovery_force = True
# Invoke method
response = service.delete_document(
project_id,
collection_id,
document_id,
x_watson_discovery_force=x_watson_discovery_force,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_delete_document_required_params(self):
"""
test_delete_document_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents/testString')
mock_response = '{"document_id": "document_id", "status": "deleted"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
document_id = 'testString'
# Invoke method
response = service.delete_document(
project_id,
collection_id,
document_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_delete_document_value_error(self):
"""
test_delete_document_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/documents/testString')
mock_response = '{"document_id": "document_id", "status": "deleted"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
document_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"collection_id": collection_id,
"document_id": document_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.delete_document(**req_copy)
# endregion
##############################################################################
# End of Service: Documents
##############################################################################
##############################################################################
# Start of Service: TrainingData
##############################################################################
# region
class TestListTrainingQueries():
"""
Test Class for list_training_queries
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
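# The preprocess_url helper relies on the responses library accepting either a
# literal string URL or a compiled pattern. A standalone sketch of its
# trailing-slash handling follows; the re.escape call is an addition here (the
# helper above uses the raw URL as the pattern), shown only to make the sketch
# safe for URLs containing regex metacharacters.

```python
import re

def preprocess_url(request_url: str):
    # URLs without a trailing slash are matched literally; URLs ending in one
    # or more slashes become a pattern tolerant of any number of extra slashes.
    if re.fullmatch('.*/+', request_url) is None:
        return request_url
    return re.compile(re.escape(request_url.rstrip('/')) + '/+')

assert preprocess_url('https://example.com/v2/projects') == 'https://example.com/v2/projects'
pattern = preprocess_url('https://example.com/v2/projects/')
assert pattern.fullmatch('https://example.com/v2/projects///') is not None
```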
@responses.activate
def test_list_training_queries_all_params(self):
"""
list_training_queries()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries')
mock_response = '{"queries": [{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.list_training_queries(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_list_training_queries_value_error(self):
"""
test_list_training_queries_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries')
mock_response = '{"queries": [{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.list_training_queries(**req_copy)
class TestDeleteTrainingQueries():
"""
Test Class for delete_training_queries
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_training_queries_all_params(self):
"""
delete_training_queries()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.delete_training_queries(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 204
@responses.activate
def test_delete_training_queries_value_error(self):
"""
test_delete_training_queries_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.delete_training_queries(**req_copy)
class TestCreateTrainingQuery():
"""
Test Class for create_training_query
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_create_training_query_all_params(self):
"""
create_training_query()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries')
mock_response = '{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a TrainingExample model
training_example_model = {}
training_example_model['document_id'] = 'testString'
training_example_model['collection_id'] = 'testString'
training_example_model['relevance'] = 38
# Set up parameter values
project_id = 'testString'
natural_language_query = 'testString'
examples = [training_example_model]
filter = 'testString'
# Invoke method
response = service.create_training_query(
project_id,
natural_language_query,
examples,
filter=filter,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['natural_language_query'] == 'testString'
assert req_body['examples'] == [training_example_model]
assert req_body['filter'] == 'testString'
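# The body-validation step above reads the raw bytes the mock recorded and
# decodes them back into a dict before asserting on individual fields. In
# isolation, the round trip looks like this (the payload values here are
# illustrative, mirroring the test fixtures):

```python
import json

# Simulate what the mock records: the SDK serializes the payload to UTF-8 bytes.
payload = {"natural_language_query": "testString", "filter": "testString"}
recorded_body = json.dumps(payload).encode('utf-8')

# The assertion side decodes bytes -> str -> dict, then checks fields.
req_body = json.loads(str(recorded_body, 'utf-8'))
assert req_body['natural_language_query'] == 'testString'
assert req_body['filter'] == 'testString'
```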
@responses.activate
def test_create_training_query_value_error(self):
"""
test_create_training_query_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries')
mock_response = '{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a TrainingExample model
training_example_model = {}
training_example_model['document_id'] = 'testString'
training_example_model['collection_id'] = 'testString'
training_example_model['relevance'] = 38
# Set up parameter values
project_id = 'testString'
natural_language_query = 'testString'
examples = [training_example_model]
filter = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"natural_language_query": natural_language_query,
"examples": examples,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.create_training_query(**req_copy)
class TestGetTrainingQuery():
"""
Test Class for get_training_query
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_training_query_all_params(self):
"""
get_training_query()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries/testString')
mock_response = '{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
query_id = 'testString'
# Invoke method
response = service.get_training_query(
project_id,
query_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_training_query_value_error(self):
"""
test_get_training_query_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries/testString')
mock_response = '{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
query_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"query_id": query_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.get_training_query(**req_copy)
class TestUpdateTrainingQuery():
"""
Test Class for update_training_query
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_update_training_query_all_params(self):
"""
update_training_query()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries/testString')
mock_response = '{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a TrainingExample model
training_example_model = {}
training_example_model['document_id'] = 'testString'
training_example_model['collection_id'] = 'testString'
training_example_model['relevance'] = 38
# Set up parameter values
project_id = 'testString'
query_id = 'testString'
natural_language_query = 'testString'
examples = [training_example_model]
filter = 'testString'
# Invoke method
response = service.update_training_query(
project_id,
query_id,
natural_language_query,
examples,
filter=filter,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['natural_language_query'] == 'testString'
assert req_body['examples'] == [training_example_model]
assert req_body['filter'] == 'testString'
@responses.activate
def test_update_training_query_value_error(self):
"""
test_update_training_query_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/training_data/queries/testString')
mock_response = '{"query_id": "query_id", "natural_language_query": "natural_language_query", "filter": "filter", "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00", "examples": [{"document_id": "document_id", "collection_id": "collection_id", "relevance": 9, "created": "2019-01-01T12:00:00", "updated": "2019-01-01T12:00:00"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a TrainingExample model
training_example_model = {}
training_example_model['document_id'] = 'testString'
training_example_model['collection_id'] = 'testString'
training_example_model['relevance'] = 38
# Set up parameter values
project_id = 'testString'
query_id = 'testString'
natural_language_query = 'testString'
examples = [training_example_model]
filter = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"query_id": query_id,
"natural_language_query": natural_language_query,
"examples": examples,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.update_training_query(**req_copy)
# endregion
##############################################################################
# End of Service: TrainingData
##############################################################################
##############################################################################
# Start of Service: Analyze
##############################################################################
# region
class TestAnalyzeDocument():
"""
Test Class for analyze_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_analyze_document_all_params(self):
"""
analyze_document()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/analyze')
mock_response = '{"notices": [{"notice_id": "notice_id", "created": "2019-01-01T12:00:00", "document_id": "document_id", "collection_id": "collection_id", "query_id": "query_id", "severity": "warning", "step": "step", "description": "description"}], "result": {"metadata": {"mapKey": {"anyKey": "anyValue"}}}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
file = io.BytesIO(b'This is a mock file.').getvalue()
filename = 'testString'
file_content_type = 'application/json'
metadata = 'testString'
# Invoke method
response = service.analyze_document(
project_id,
collection_id,
file=file,
filename=filename,
file_content_type=file_content_type,
metadata=metadata,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_analyze_document_required_params(self):
"""
test_analyze_document_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/analyze')
mock_response = '{"notices": [{"notice_id": "notice_id", "created": "2019-01-01T12:00:00", "document_id": "document_id", "collection_id": "collection_id", "query_id": "query_id", "severity": "warning", "step": "step", "description": "description"}], "result": {"metadata": {"mapKey": {"anyKey": "anyValue"}}}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Invoke method
response = service.analyze_document(
project_id,
collection_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_analyze_document_value_error(self):
"""
test_analyze_document_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/collections/testString/analyze')
mock_response = '{"notices": [{"notice_id": "notice_id", "created": "2019-01-01T12:00:00", "document_id": "document_id", "collection_id": "collection_id", "query_id": "query_id", "severity": "warning", "step": "step", "description": "description"}], "result": {"metadata": {"mapKey": {"anyKey": "anyValue"}}}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
collection_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"collection_id": collection_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.analyze_document(**req_copy)
# endregion
##############################################################################
# End of Service: Analyze
##############################################################################
##############################################################################
# Start of Service: Enrichments
##############################################################################
# region
class TestListEnrichments():
"""
Test Class for list_enrichments
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_list_enrichments_all_params(self):
"""
list_enrichments()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments')
mock_response = '{"enrichments": [{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.list_enrichments(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_list_enrichments_value_error(self):
"""
test_list_enrichments_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments')
mock_response = '{"enrichments": [{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.list_enrichments(**req_copy)
class TestCreateEnrichment():
"""
Test Class for create_enrichment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_create_enrichment_all_params(self):
"""
create_enrichment()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments')
mock_response = '{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an EnrichmentOptions model
enrichment_options_model = {}
enrichment_options_model['languages'] = ['testString']
enrichment_options_model['entity_type'] = 'testString'
enrichment_options_model['regular_expression'] = 'testString'
enrichment_options_model['result_field'] = 'testString'
# Construct a dict representation of a CreateEnrichment model
create_enrichment_model = {}
create_enrichment_model['name'] = 'testString'
create_enrichment_model['description'] = 'testString'
create_enrichment_model['type'] = 'dictionary'
create_enrichment_model['options'] = enrichment_options_model
# Set up parameter values
project_id = 'testString'
enrichment = create_enrichment_model
file = io.BytesIO(b'This is a mock file.').getvalue()
# Invoke method
response = service.create_enrichment(
project_id,
enrichment,
file=file,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
@responses.activate
def test_create_enrichment_required_params(self):
"""
test_create_enrichment_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments')
mock_response = '{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an EnrichmentOptions model
enrichment_options_model = {}
enrichment_options_model['languages'] = ['testString']
enrichment_options_model['entity_type'] = 'testString'
enrichment_options_model['regular_expression'] = 'testString'
enrichment_options_model['result_field'] = 'testString'
# Construct a dict representation of a CreateEnrichment model
create_enrichment_model = {}
create_enrichment_model['name'] = 'testString'
create_enrichment_model['description'] = 'testString'
create_enrichment_model['type'] = 'dictionary'
create_enrichment_model['options'] = enrichment_options_model
# Set up parameter values
project_id = 'testString'
enrichment = create_enrichment_model
# Invoke method
response = service.create_enrichment(
project_id,
enrichment,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
@responses.activate
def test_create_enrichment_value_error(self):
"""
test_create_enrichment_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments')
mock_response = '{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an EnrichmentOptions model
enrichment_options_model = {}
enrichment_options_model['languages'] = ['testString']
enrichment_options_model['entity_type'] = 'testString'
enrichment_options_model['regular_expression'] = 'testString'
enrichment_options_model['result_field'] = 'testString'
# Construct a dict representation of a CreateEnrichment model
create_enrichment_model = {}
create_enrichment_model['name'] = 'testString'
create_enrichment_model['description'] = 'testString'
create_enrichment_model['type'] = 'dictionary'
create_enrichment_model['options'] = enrichment_options_model
# Set up parameter values
project_id = 'testString'
enrichment = create_enrichment_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"enrichment": enrichment,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.create_enrichment(**req_copy)
class TestGetEnrichment():
"""
Test Class for get_enrichment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_enrichment_all_params(self):
"""
get_enrichment()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments/testString')
mock_response = '{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
enrichment_id = 'testString'
# Invoke method
response = service.get_enrichment(
project_id,
enrichment_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_enrichment_value_error(self):
"""
test_get_enrichment_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments/testString')
mock_response = '{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
enrichment_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"enrichment_id": enrichment_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.get_enrichment(**req_copy)
class TestUpdateEnrichment():
"""
Test Class for update_enrichment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_update_enrichment_all_params(self):
"""
update_enrichment()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments/testString')
mock_response = '{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
enrichment_id = 'testString'
name = 'testString'
description = 'testString'
# Invoke method
response = service.update_enrichment(
project_id,
enrichment_id,
name,
description=description,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['name'] == 'testString'
assert req_body['description'] == 'testString'
@responses.activate
def test_update_enrichment_value_error(self):
"""
test_update_enrichment_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments/testString')
mock_response = '{"enrichment_id": "enrichment_id", "name": "name", "description": "description", "type": "part_of_speech", "options": {"languages": ["languages"], "entity_type": "entity_type", "regular_expression": "regular_expression", "result_field": "result_field"}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
enrichment_id = 'testString'
name = 'testString'
description = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"enrichment_id": enrichment_id,
"name": name,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.update_enrichment(**req_copy)
class TestDeleteEnrichment():
"""
Test Class for delete_enrichment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_enrichment_all_params(self):
"""
delete_enrichment()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments/testString')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
enrichment_id = 'testString'
# Invoke method
response = service.delete_enrichment(
project_id,
enrichment_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 204
@responses.activate
def test_delete_enrichment_value_error(self):
"""
test_delete_enrichment_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString/enrichments/testString')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
enrichment_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
"enrichment_id": enrichment_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.delete_enrichment(**req_copy)
# endregion
##############################################################################
# End of Service: Enrichments
##############################################################################
##############################################################################
# Start of Service: Projects
##############################################################################
# region
class TestListProjects():
"""
Test Class for list_projects
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_list_projects_all_params(self):
"""
list_projects()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects')
mock_response = '{"projects": [{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = service.list_projects()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_list_projects_value_error(self):
"""
test_list_projects_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects')
mock_response = '{"projects": [{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Pass in all but one required param and check for a ValueError
        # list_projects() has no required parameters, so req_param_dict is
        # empty and the omission loop below never executes.
        req_param_dict = {}
for param in req_param_dict.keys():
            req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.list_projects(**req_copy)
class TestCreateProject():
"""
Test Class for create_project
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_create_project_all_params(self):
"""
create_project()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects')
mock_response = '{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16, "default_query_parameters": {"collection_ids": ["collection_ids"], "passages": {"enabled": false, "count": 5, "fields": ["fields"], "characters": 10, "per_document": true, "max_per_document": 16}, "table_results": {"enabled": false, "count": 5, "per_document": 12}, "aggregation": "aggregation", "suggested_refinements": {"enabled": false, "count": 5}, "spelling_suggestions": true, "highlight": false, "count": 5, "sort": "sort", "return": ["return_"]}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a DefaultQueryParamsPassages model
default_query_params_passages_model = {}
default_query_params_passages_model['enabled'] = True
default_query_params_passages_model['count'] = 38
default_query_params_passages_model['fields'] = ['testString']
default_query_params_passages_model['characters'] = 38
default_query_params_passages_model['per_document'] = True
default_query_params_passages_model['max_per_document'] = 38
# Construct a dict representation of a DefaultQueryParamsTableResults model
default_query_params_table_results_model = {}
default_query_params_table_results_model['enabled'] = True
default_query_params_table_results_model['count'] = 38
default_query_params_table_results_model['per_document'] = 38
# Construct a dict representation of a DefaultQueryParamsSuggestedRefinements model
default_query_params_suggested_refinements_model = {}
default_query_params_suggested_refinements_model['enabled'] = True
default_query_params_suggested_refinements_model['count'] = 38
# Construct a dict representation of a DefaultQueryParams model
default_query_params_model = {}
default_query_params_model['collection_ids'] = ['testString']
default_query_params_model['passages'] = default_query_params_passages_model
default_query_params_model['table_results'] = default_query_params_table_results_model
default_query_params_model['aggregation'] = 'testString'
default_query_params_model['suggested_refinements'] = default_query_params_suggested_refinements_model
default_query_params_model['spelling_suggestions'] = True
default_query_params_model['highlight'] = True
default_query_params_model['count'] = 38
default_query_params_model['sort'] = 'testString'
default_query_params_model['return'] = ['testString']
# Set up parameter values
name = 'testString'
type = 'document_retrieval'
default_query_parameters = default_query_params_model
# Invoke method
response = service.create_project(
name,
type,
default_query_parameters=default_query_parameters,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['name'] == 'testString'
assert req_body['type'] == 'document_retrieval'
assert req_body['default_query_parameters'] == default_query_params_model
@responses.activate
def test_create_project_value_error(self):
"""
test_create_project_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects')
mock_response = '{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16, "default_query_parameters": {"collection_ids": ["collection_ids"], "passages": {"enabled": false, "count": 5, "fields": ["fields"], "characters": 10, "per_document": true, "max_per_document": 16}, "table_results": {"enabled": false, "count": 5, "per_document": 12}, "aggregation": "aggregation", "suggested_refinements": {"enabled": false, "count": 5}, "spelling_suggestions": true, "highlight": false, "count": 5, "sort": "sort", "return": ["return_"]}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a DefaultQueryParamsPassages model
default_query_params_passages_model = {}
default_query_params_passages_model['enabled'] = True
default_query_params_passages_model['count'] = 38
default_query_params_passages_model['fields'] = ['testString']
default_query_params_passages_model['characters'] = 38
default_query_params_passages_model['per_document'] = True
default_query_params_passages_model['max_per_document'] = 38
# Construct a dict representation of a DefaultQueryParamsTableResults model
default_query_params_table_results_model = {}
default_query_params_table_results_model['enabled'] = True
default_query_params_table_results_model['count'] = 38
default_query_params_table_results_model['per_document'] = 38
# Construct a dict representation of a DefaultQueryParamsSuggestedRefinements model
default_query_params_suggested_refinements_model = {}
default_query_params_suggested_refinements_model['enabled'] = True
default_query_params_suggested_refinements_model['count'] = 38
# Construct a dict representation of a DefaultQueryParams model
default_query_params_model = {}
default_query_params_model['collection_ids'] = ['testString']
default_query_params_model['passages'] = default_query_params_passages_model
default_query_params_model['table_results'] = default_query_params_table_results_model
default_query_params_model['aggregation'] = 'testString'
default_query_params_model['suggested_refinements'] = default_query_params_suggested_refinements_model
default_query_params_model['spelling_suggestions'] = True
default_query_params_model['highlight'] = True
default_query_params_model['count'] = 38
default_query_params_model['sort'] = 'testString'
default_query_params_model['return'] = ['testString']
# Set up parameter values
name = 'testString'
type = 'document_retrieval'
default_query_parameters = default_query_params_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"name": name,
"type": type,
}
for param in req_param_dict.keys():
            req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.create_project(**req_copy)
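        # Illustrative expansion of the loop above (not executed): with
        # req_param_dict = {"name": name, "type": type}, the loop omits one
        # required parameter at a time, so create_project is expected to
        # raise ValueError for each of:
        #   service.create_project(name=None, type='document_retrieval')
        #   service.create_project(name='testString', type=None)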
class TestGetProject():
"""
Test Class for get_project
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_project_all_params(self):
"""
get_project()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString')
mock_response = '{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16, "default_query_parameters": {"collection_ids": ["collection_ids"], "passages": {"enabled": false, "count": 5, "fields": ["fields"], "characters": 10, "per_document": true, "max_per_document": 16}, "table_results": {"enabled": false, "count": 5, "per_document": 12}, "aggregation": "aggregation", "suggested_refinements": {"enabled": false, "count": 5}, "spelling_suggestions": true, "highlight": false, "count": 5, "sort": "sort", "return": ["return_"]}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.get_project(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_get_project_value_error(self):
"""
test_get_project_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString')
mock_response = '{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16, "default_query_parameters": {"collection_ids": ["collection_ids"], "passages": {"enabled": false, "count": 5, "fields": ["fields"], "characters": 10, "per_document": true, "max_per_document": 16}, "table_results": {"enabled": false, "count": 5, "per_document": 12}, "aggregation": "aggregation", "suggested_refinements": {"enabled": false, "count": 5}, "spelling_suggestions": true, "highlight": false, "count": 5, "sort": "sort", "return": ["return_"]}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict.keys():
            req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.get_project(**req_copy)
class TestUpdateProject():
"""
Test Class for update_project
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_update_project_all_params(self):
"""
update_project()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString')
mock_response = '{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16, "default_query_parameters": {"collection_ids": ["collection_ids"], "passages": {"enabled": false, "count": 5, "fields": ["fields"], "characters": 10, "per_document": true, "max_per_document": 16}, "table_results": {"enabled": false, "count": 5, "per_document": 12}, "aggregation": "aggregation", "suggested_refinements": {"enabled": false, "count": 5}, "spelling_suggestions": true, "highlight": false, "count": 5, "sort": "sort", "return": ["return_"]}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
name = 'testString'
# Invoke method
response = service.update_project(
project_id,
name=name,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['name'] == 'testString'
@responses.activate
def test_update_project_required_params(self):
"""
test_update_project_required_params()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString')
mock_response = '{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16, "default_query_parameters": {"collection_ids": ["collection_ids"], "passages": {"enabled": false, "count": 5, "fields": ["fields"], "characters": 10, "per_document": true, "max_per_document": 16}, "table_results": {"enabled": false, "count": 5, "per_document": 12}, "aggregation": "aggregation", "suggested_refinements": {"enabled": false, "count": 5}, "spelling_suggestions": true, "highlight": false, "count": 5, "sort": "sort", "return": ["return_"]}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.update_project(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
@responses.activate
def test_update_project_value_error(self):
"""
test_update_project_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString')
mock_response = '{"project_id": "project_id", "name": "name", "type": "document_retrieval", "relevancy_training_status": {"data_updated": "data_updated", "total_examples": 14, "sufficient_label_diversity": true, "processing": true, "minimum_examples_added": true, "successfully_trained": "successfully_trained", "available": false, "notices": 7, "minimum_queries_added": false}, "collection_count": 16, "default_query_parameters": {"collection_ids": ["collection_ids"], "passages": {"enabled": false, "count": 5, "fields": ["fields"], "characters": 10, "per_document": true, "max_per_document": 16}, "table_results": {"enabled": false, "count": 5, "per_document": 12}, "aggregation": "aggregation", "suggested_refinements": {"enabled": false, "count": 5}, "spelling_suggestions": true, "highlight": false, "count": 5, "sort": "sort", "return": ["return_"]}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict.keys():
            req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.update_project(**req_copy)
class TestDeleteProject():
"""
Test Class for delete_project
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_project_all_params(self):
"""
delete_project()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
# Invoke method
response = service.delete_project(
project_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 204
@responses.activate
def test_delete_project_value_error(self):
"""
test_delete_project_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/projects/testString')
responses.add(responses.DELETE,
url,
status=204)
# Set up parameter values
project_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"project_id": project_id,
}
for param in req_param_dict.keys():
            req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.delete_project(**req_copy)
# endregion
##############################################################################
# End of Service: Projects
##############################################################################
##############################################################################
# Start of Service: UserData
##############################################################################
# region
class TestDeleteUserData():
"""
Test Class for delete_user_data
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_user_data_all_params(self):
"""
delete_user_data()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/user_data')
responses.add(responses.DELETE,
url,
status=200)
# Set up parameter values
customer_id = 'testString'
# Invoke method
response = service.delete_user_data(
customer_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'customer_id={}'.format(customer_id) in query_string
@responses.activate
def test_delete_user_data_value_error(self):
"""
test_delete_user_data_value_error()
"""
# Set up mock
url = self.preprocess_url(base_url + '/v2/user_data')
responses.add(responses.DELETE,
url,
status=200)
# Set up parameter values
customer_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"customer_id": customer_id,
}
for param in req_param_dict.keys():
            req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
service.delete_user_data(**req_copy)
# endregion
##############################################################################
# End of Service: UserData
##############################################################################
##############################################################################
# Start of Model Tests
##############################################################################
# region
class TestAnalyzedDocument():
"""
Test Class for AnalyzedDocument
"""
def test_analyzed_document_serialization(self):
"""
Test serialization/deserialization for AnalyzedDocument
"""
# Construct dict forms of any model objects needed in order to build this model.
notice_model = {} # Notice
notice_model['notice_id'] = 'testString'
notice_model['created'] = '2020-01-28T18:40:40.123456Z'
notice_model['document_id'] = 'testString'
notice_model['collection_id'] = 'testString'
notice_model['query_id'] = 'testString'
notice_model['severity'] = 'warning'
notice_model['step'] = 'testString'
notice_model['description'] = 'testString'
analyzed_result_model = {} # AnalyzedResult
analyzed_result_model['metadata'] = {}
analyzed_result_model['foo'] = { 'foo': 'bar' }
        # Construct a json representation of an AnalyzedDocument model
analyzed_document_model_json = {}
analyzed_document_model_json['notices'] = [notice_model]
analyzed_document_model_json['result'] = analyzed_result_model
# Construct a model instance of AnalyzedDocument by calling from_dict on the json representation
analyzed_document_model = AnalyzedDocument.from_dict(analyzed_document_model_json)
assert analyzed_document_model != False
        # Construct a second model instance of AnalyzedDocument by passing the first instance's attribute dict to the constructor
analyzed_document_model_dict = AnalyzedDocument.from_dict(analyzed_document_model_json).__dict__
analyzed_document_model2 = AnalyzedDocument(**analyzed_document_model_dict)
# Verify the model instances are equivalent
assert analyzed_document_model == analyzed_document_model2
# Convert model instance back to dict and verify no loss of data
analyzed_document_model_json2 = analyzed_document_model.to_dict()
assert analyzed_document_model_json2 == analyzed_document_model_json
class TestAnalyzedResult():
"""
Test Class for AnalyzedResult
"""
def test_analyzed_result_serialization(self):
"""
Test serialization/deserialization for AnalyzedResult
"""
        # Construct a json representation of an AnalyzedResult model
analyzed_result_model_json = {}
analyzed_result_model_json['metadata'] = {}
analyzed_result_model_json['foo'] = { 'foo': 'bar' }
# Construct a model instance of AnalyzedResult by calling from_dict on the json representation
analyzed_result_model = AnalyzedResult.from_dict(analyzed_result_model_json)
assert analyzed_result_model != False
        # Construct a second model instance of AnalyzedResult by passing the first instance's attribute dict to the constructor
analyzed_result_model_dict = AnalyzedResult.from_dict(analyzed_result_model_json).__dict__
analyzed_result_model2 = AnalyzedResult(**analyzed_result_model_dict)
# Verify the model instances are equivalent
assert analyzed_result_model == analyzed_result_model2
# Convert model instance back to dict and verify no loss of data
analyzed_result_model_json2 = analyzed_result_model.to_dict()
assert analyzed_result_model_json2 == analyzed_result_model_json
class TestCollection():
"""
Test Class for Collection
"""
def test_collection_serialization(self):
"""
Test serialization/deserialization for Collection
"""
# Construct a json representation of a Collection model
collection_model_json = {}
collection_model_json['collection_id'] = 'testString'
collection_model_json['name'] = 'testString'
# Construct a model instance of Collection by calling from_dict on the json representation
collection_model = Collection.from_dict(collection_model_json)
assert collection_model != False
        # Construct a second model instance of Collection by passing the first instance's attribute dict to the constructor
collection_model_dict = Collection.from_dict(collection_model_json).__dict__
collection_model2 = Collection(**collection_model_dict)
# Verify the model instances are equivalent
assert collection_model == collection_model2
# Convert model instance back to dict and verify no loss of data
collection_model_json2 = collection_model.to_dict()
assert collection_model_json2 == collection_model_json
class TestCollectionDetails():
"""
Test Class for CollectionDetails
"""
def test_collection_details_serialization(self):
"""
Test serialization/deserialization for CollectionDetails
"""
# Construct dict forms of any model objects needed in order to build this model.
collection_enrichment_model = {} # CollectionEnrichment
collection_enrichment_model['enrichment_id'] = 'testString'
collection_enrichment_model['fields'] = ['testString']
# Construct a json representation of a CollectionDetails model
collection_details_model_json = {}
collection_details_model_json['collection_id'] = 'testString'
collection_details_model_json['name'] = 'testString'
collection_details_model_json['description'] = 'testString'
collection_details_model_json['created'] = '2020-01-28T18:40:40.123456Z'
collection_details_model_json['language'] = 'testString'
collection_details_model_json['enrichments'] = [collection_enrichment_model]
# Construct a model instance of CollectionDetails by calling from_dict on the json representation
collection_details_model = CollectionDetails.from_dict(collection_details_model_json)
assert collection_details_model != False
        # Construct a second model instance of CollectionDetails by passing the first instance's attribute dict to the constructor
collection_details_model_dict = CollectionDetails.from_dict(collection_details_model_json).__dict__
collection_details_model2 = CollectionDetails(**collection_details_model_dict)
# Verify the model instances are equivalent
assert collection_details_model == collection_details_model2
# Convert model instance back to dict and verify no loss of data
collection_details_model_json2 = collection_details_model.to_dict()
assert collection_details_model_json2 == collection_details_model_json
class TestCollectionEnrichment():
"""
Test Class for CollectionEnrichment
"""
def test_collection_enrichment_serialization(self):
"""
Test serialization/deserialization for CollectionEnrichment
"""
# Construct a json representation of a CollectionEnrichment model
collection_enrichment_model_json = {}
collection_enrichment_model_json['enrichment_id'] = 'testString'
collection_enrichment_model_json['fields'] = ['testString']
# Construct a model instance of CollectionEnrichment by calling from_dict on the json representation
collection_enrichment_model = CollectionEnrichment.from_dict(collection_enrichment_model_json)
assert collection_enrichment_model != False
        # Construct a second model instance of CollectionEnrichment by passing the first instance's attribute dict to the constructor
collection_enrichment_model_dict = CollectionEnrichment.from_dict(collection_enrichment_model_json).__dict__
collection_enrichment_model2 = CollectionEnrichment(**collection_enrichment_model_dict)
# Verify the model instances are equivalent
assert collection_enrichment_model == collection_enrichment_model2
# Convert model instance back to dict and verify no loss of data
collection_enrichment_model_json2 = collection_enrichment_model.to_dict()
assert collection_enrichment_model_json2 == collection_enrichment_model_json
class TestCompletions():
"""
Test Class for Completions
"""
def test_completions_serialization(self):
"""
Test serialization/deserialization for Completions
"""
# Construct a json representation of a Completions model
completions_model_json = {}
completions_model_json['completions'] = ['testString']
# Construct a model instance of Completions by calling from_dict on the json representation
completions_model = Completions.from_dict(completions_model_json)
assert completions_model != False
        # Construct a second model instance of Completions by passing the first instance's attribute dict to the constructor
completions_model_dict = Completions.from_dict(completions_model_json).__dict__
completions_model2 = Completions(**completions_model_dict)
# Verify the model instances are equivalent
assert completions_model == completions_model2
# Convert model instance back to dict and verify no loss of data
completions_model_json2 = completions_model.to_dict()
assert completions_model_json2 == completions_model_json
class TestComponentSettingsAggregation():
"""
Test Class for ComponentSettingsAggregation
"""
def test_component_settings_aggregation_serialization(self):
"""
Test serialization/deserialization for ComponentSettingsAggregation
"""
# Construct a json representation of a ComponentSettingsAggregation model
component_settings_aggregation_model_json = {}
component_settings_aggregation_model_json['name'] = 'testString'
component_settings_aggregation_model_json['label'] = 'testString'
component_settings_aggregation_model_json['multiple_selections_allowed'] = True
component_settings_aggregation_model_json['visualization_type'] = 'auto'
# Construct a model instance of ComponentSettingsAggregation by calling from_dict on the json representation
component_settings_aggregation_model = ComponentSettingsAggregation.from_dict(component_settings_aggregation_model_json)
assert component_settings_aggregation_model != False
        # Construct a second model instance of ComponentSettingsAggregation by passing the first instance's attribute dict to the constructor
component_settings_aggregation_model_dict = ComponentSettingsAggregation.from_dict(component_settings_aggregation_model_json).__dict__
component_settings_aggregation_model2 = ComponentSettingsAggregation(**component_settings_aggregation_model_dict)
# Verify the model instances are equivalent
assert component_settings_aggregation_model == component_settings_aggregation_model2
# Convert model instance back to dict and verify no loss of data
component_settings_aggregation_model_json2 = component_settings_aggregation_model.to_dict()
assert component_settings_aggregation_model_json2 == component_settings_aggregation_model_json
class TestComponentSettingsFieldsShown():
"""
Test Class for ComponentSettingsFieldsShown
"""
def test_component_settings_fields_shown_serialization(self):
"""
Test serialization/deserialization for ComponentSettingsFieldsShown
"""
# Construct dict forms of any model objects needed in order to build this model.
component_settings_fields_shown_body_model = {} # ComponentSettingsFieldsShownBody
component_settings_fields_shown_body_model['use_passage'] = True
component_settings_fields_shown_body_model['field'] = 'testString'
component_settings_fields_shown_title_model = {} # ComponentSettingsFieldsShownTitle
component_settings_fields_shown_title_model['field'] = 'testString'
# Construct a json representation of a ComponentSettingsFieldsShown model
component_settings_fields_shown_model_json = {}
component_settings_fields_shown_model_json['body'] = component_settings_fields_shown_body_model
component_settings_fields_shown_model_json['title'] = component_settings_fields_shown_title_model
# Construct a model instance of ComponentSettingsFieldsShown by calling from_dict on the json representation
component_settings_fields_shown_model = ComponentSettingsFieldsShown.from_dict(component_settings_fields_shown_model_json)
assert component_settings_fields_shown_model != False
        # Construct a second model instance of ComponentSettingsFieldsShown by passing the first instance's attribute dict to the constructor
component_settings_fields_shown_model_dict = ComponentSettingsFieldsShown.from_dict(component_settings_fields_shown_model_json).__dict__
component_settings_fields_shown_model2 = ComponentSettingsFieldsShown(**component_settings_fields_shown_model_dict)
# Verify the model instances are equivalent
assert component_settings_fields_shown_model == component_settings_fields_shown_model2
# Convert model instance back to dict and verify no loss of data
component_settings_fields_shown_model_json2 = component_settings_fields_shown_model.to_dict()
assert component_settings_fields_shown_model_json2 == component_settings_fields_shown_model_json
class TestComponentSettingsFieldsShownBody():
"""
Test Class for ComponentSettingsFieldsShownBody
"""
def test_component_settings_fields_shown_body_serialization(self):
"""
Test serialization/deserialization for ComponentSettingsFieldsShownBody
"""
# Construct a json representation of a ComponentSettingsFieldsShownBody model
component_settings_fields_shown_body_model_json = {}
component_settings_fields_shown_body_model_json['use_passage'] = True
component_settings_fields_shown_body_model_json['field'] = 'testString'
# Construct a model instance of ComponentSettingsFieldsShownBody by calling from_dict on the json representation
component_settings_fields_shown_body_model = ComponentSettingsFieldsShownBody.from_dict(component_settings_fields_shown_body_model_json)
assert component_settings_fields_shown_body_model != False
        # Construct a second model instance of ComponentSettingsFieldsShownBody by passing the first instance's attribute dict to the constructor
component_settings_fields_shown_body_model_dict = ComponentSettingsFieldsShownBody.from_dict(component_settings_fields_shown_body_model_json).__dict__
component_settings_fields_shown_body_model2 = ComponentSettingsFieldsShownBody(**component_settings_fields_shown_body_model_dict)
# Verify the model instances are equivalent
assert component_settings_fields_shown_body_model == component_settings_fields_shown_body_model2
# Convert model instance back to dict and verify no loss of data
component_settings_fields_shown_body_model_json2 = component_settings_fields_shown_body_model.to_dict()
assert component_settings_fields_shown_body_model_json2 == component_settings_fields_shown_body_model_json

class TestComponentSettingsFieldsShownTitle():
    """
    Test Class for ComponentSettingsFieldsShownTitle
    """

    def test_component_settings_fields_shown_title_serialization(self):
        """
        Test serialization/deserialization for ComponentSettingsFieldsShownTitle
        """

        # Construct a json representation of a ComponentSettingsFieldsShownTitle model
        component_settings_fields_shown_title_model_json = {}
        component_settings_fields_shown_title_model_json['field'] = 'testString'

        # Construct a model instance of ComponentSettingsFieldsShownTitle by calling from_dict on the json representation
        component_settings_fields_shown_title_model = ComponentSettingsFieldsShownTitle.from_dict(component_settings_fields_shown_title_model_json)
        assert component_settings_fields_shown_title_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        component_settings_fields_shown_title_model_dict = ComponentSettingsFieldsShownTitle.from_dict(component_settings_fields_shown_title_model_json).__dict__
        component_settings_fields_shown_title_model2 = ComponentSettingsFieldsShownTitle(**component_settings_fields_shown_title_model_dict)

        # Verify the model instances are equivalent
        assert component_settings_fields_shown_title_model == component_settings_fields_shown_title_model2

        # Convert model instance back to dict and verify no loss of data
        component_settings_fields_shown_title_model_json2 = component_settings_fields_shown_title_model.to_dict()
        assert component_settings_fields_shown_title_model_json2 == component_settings_fields_shown_title_model_json

class TestComponentSettingsResponse():
    """
    Test Class for ComponentSettingsResponse
    """

    def test_component_settings_response_serialization(self):
        """
        Test serialization/deserialization for ComponentSettingsResponse
        """

        # Construct dict forms of any model objects needed in order to build this model.
        component_settings_fields_shown_body_model = {} # ComponentSettingsFieldsShownBody
        component_settings_fields_shown_body_model['use_passage'] = True
        component_settings_fields_shown_body_model['field'] = 'testString'

        component_settings_fields_shown_title_model = {} # ComponentSettingsFieldsShownTitle
        component_settings_fields_shown_title_model['field'] = 'testString'

        component_settings_fields_shown_model = {} # ComponentSettingsFieldsShown
        component_settings_fields_shown_model['body'] = component_settings_fields_shown_body_model
        component_settings_fields_shown_model['title'] = component_settings_fields_shown_title_model

        component_settings_aggregation_model = {} # ComponentSettingsAggregation
        component_settings_aggregation_model['name'] = 'testString'
        component_settings_aggregation_model['label'] = 'testString'
        component_settings_aggregation_model['multiple_selections_allowed'] = True
        component_settings_aggregation_model['visualization_type'] = 'auto'

        # Construct a json representation of a ComponentSettingsResponse model
        component_settings_response_model_json = {}
        component_settings_response_model_json['fields_shown'] = component_settings_fields_shown_model
        component_settings_response_model_json['autocomplete'] = True
        component_settings_response_model_json['structured_search'] = True
        component_settings_response_model_json['results_per_page'] = 38
        component_settings_response_model_json['aggregations'] = [component_settings_aggregation_model]

        # Construct a model instance of ComponentSettingsResponse by calling from_dict on the json representation
        component_settings_response_model = ComponentSettingsResponse.from_dict(component_settings_response_model_json)
        assert component_settings_response_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        component_settings_response_model_dict = ComponentSettingsResponse.from_dict(component_settings_response_model_json).__dict__
        component_settings_response_model2 = ComponentSettingsResponse(**component_settings_response_model_dict)

        # Verify the model instances are equivalent
        assert component_settings_response_model == component_settings_response_model2

        # Convert model instance back to dict and verify no loss of data
        component_settings_response_model_json2 = component_settings_response_model.to_dict()
        assert component_settings_response_model_json2 == component_settings_response_model_json

class TestCreateEnrichment():
    """
    Test Class for CreateEnrichment
    """

    def test_create_enrichment_serialization(self):
        """
        Test serialization/deserialization for CreateEnrichment
        """

        # Construct dict forms of any model objects needed in order to build this model.
        enrichment_options_model = {} # EnrichmentOptions
        enrichment_options_model['languages'] = ['testString']
        enrichment_options_model['entity_type'] = 'testString'
        enrichment_options_model['regular_expression'] = 'testString'
        enrichment_options_model['result_field'] = 'testString'

        # Construct a json representation of a CreateEnrichment model
        create_enrichment_model_json = {}
        create_enrichment_model_json['name'] = 'testString'
        create_enrichment_model_json['description'] = 'testString'
        create_enrichment_model_json['type'] = 'dictionary'
        create_enrichment_model_json['options'] = enrichment_options_model

        # Construct a model instance of CreateEnrichment by calling from_dict on the json representation
        create_enrichment_model = CreateEnrichment.from_dict(create_enrichment_model_json)
        assert create_enrichment_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        create_enrichment_model_dict = CreateEnrichment.from_dict(create_enrichment_model_json).__dict__
        create_enrichment_model2 = CreateEnrichment(**create_enrichment_model_dict)

        # Verify the model instances are equivalent
        assert create_enrichment_model == create_enrichment_model2

        # Convert model instance back to dict and verify no loss of data
        create_enrichment_model_json2 = create_enrichment_model.to_dict()
        assert create_enrichment_model_json2 == create_enrichment_model_json

class TestDefaultQueryParams():
    """
    Test Class for DefaultQueryParams
    """

    def test_default_query_params_serialization(self):
        """
        Test serialization/deserialization for DefaultQueryParams
        """

        # Construct dict forms of any model objects needed in order to build this model.
        default_query_params_passages_model = {} # DefaultQueryParamsPassages
        default_query_params_passages_model['enabled'] = True
        default_query_params_passages_model['count'] = 38
        default_query_params_passages_model['fields'] = ['testString']
        default_query_params_passages_model['characters'] = 38
        default_query_params_passages_model['per_document'] = True
        default_query_params_passages_model['max_per_document'] = 38

        default_query_params_table_results_model = {} # DefaultQueryParamsTableResults
        default_query_params_table_results_model['enabled'] = True
        default_query_params_table_results_model['count'] = 38
        default_query_params_table_results_model['per_document'] = 38

        default_query_params_suggested_refinements_model = {} # DefaultQueryParamsSuggestedRefinements
        default_query_params_suggested_refinements_model['enabled'] = True
        default_query_params_suggested_refinements_model['count'] = 38

        # Construct a json representation of a DefaultQueryParams model
        default_query_params_model_json = {}
        default_query_params_model_json['collection_ids'] = ['testString']
        default_query_params_model_json['passages'] = default_query_params_passages_model
        default_query_params_model_json['table_results'] = default_query_params_table_results_model
        default_query_params_model_json['aggregation'] = 'testString'
        default_query_params_model_json['suggested_refinements'] = default_query_params_suggested_refinements_model
        default_query_params_model_json['spelling_suggestions'] = True
        default_query_params_model_json['highlight'] = True
        default_query_params_model_json['count'] = 38
        default_query_params_model_json['sort'] = 'testString'
        default_query_params_model_json['return'] = ['testString']

        # Construct a model instance of DefaultQueryParams by calling from_dict on the json representation
        default_query_params_model = DefaultQueryParams.from_dict(default_query_params_model_json)
        assert default_query_params_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        default_query_params_model_dict = DefaultQueryParams.from_dict(default_query_params_model_json).__dict__
        default_query_params_model2 = DefaultQueryParams(**default_query_params_model_dict)

        # Verify the model instances are equivalent
        assert default_query_params_model == default_query_params_model2

        # Convert model instance back to dict and verify no loss of data
        default_query_params_model_json2 = default_query_params_model.to_dict()
        assert default_query_params_model_json2 == default_query_params_model_json

class TestDefaultQueryParamsPassages():
    """
    Test Class for DefaultQueryParamsPassages
    """

    def test_default_query_params_passages_serialization(self):
        """
        Test serialization/deserialization for DefaultQueryParamsPassages
        """

        # Construct a json representation of a DefaultQueryParamsPassages model
        default_query_params_passages_model_json = {}
        default_query_params_passages_model_json['enabled'] = True
        default_query_params_passages_model_json['count'] = 38
        default_query_params_passages_model_json['fields'] = ['testString']
        default_query_params_passages_model_json['characters'] = 38
        default_query_params_passages_model_json['per_document'] = True
        default_query_params_passages_model_json['max_per_document'] = 38

        # Construct a model instance of DefaultQueryParamsPassages by calling from_dict on the json representation
        default_query_params_passages_model = DefaultQueryParamsPassages.from_dict(default_query_params_passages_model_json)
        assert default_query_params_passages_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        default_query_params_passages_model_dict = DefaultQueryParamsPassages.from_dict(default_query_params_passages_model_json).__dict__
        default_query_params_passages_model2 = DefaultQueryParamsPassages(**default_query_params_passages_model_dict)

        # Verify the model instances are equivalent
        assert default_query_params_passages_model == default_query_params_passages_model2

        # Convert model instance back to dict and verify no loss of data
        default_query_params_passages_model_json2 = default_query_params_passages_model.to_dict()
        assert default_query_params_passages_model_json2 == default_query_params_passages_model_json

class TestDefaultQueryParamsSuggestedRefinements():
    """
    Test Class for DefaultQueryParamsSuggestedRefinements
    """

    def test_default_query_params_suggested_refinements_serialization(self):
        """
        Test serialization/deserialization for DefaultQueryParamsSuggestedRefinements
        """

        # Construct a json representation of a DefaultQueryParamsSuggestedRefinements model
        default_query_params_suggested_refinements_model_json = {}
        default_query_params_suggested_refinements_model_json['enabled'] = True
        default_query_params_suggested_refinements_model_json['count'] = 38

        # Construct a model instance of DefaultQueryParamsSuggestedRefinements by calling from_dict on the json representation
        default_query_params_suggested_refinements_model = DefaultQueryParamsSuggestedRefinements.from_dict(default_query_params_suggested_refinements_model_json)
        assert default_query_params_suggested_refinements_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        default_query_params_suggested_refinements_model_dict = DefaultQueryParamsSuggestedRefinements.from_dict(default_query_params_suggested_refinements_model_json).__dict__
        default_query_params_suggested_refinements_model2 = DefaultQueryParamsSuggestedRefinements(**default_query_params_suggested_refinements_model_dict)

        # Verify the model instances are equivalent
        assert default_query_params_suggested_refinements_model == default_query_params_suggested_refinements_model2

        # Convert model instance back to dict and verify no loss of data
        default_query_params_suggested_refinements_model_json2 = default_query_params_suggested_refinements_model.to_dict()
        assert default_query_params_suggested_refinements_model_json2 == default_query_params_suggested_refinements_model_json

class TestDefaultQueryParamsTableResults():
    """
    Test Class for DefaultQueryParamsTableResults
    """

    def test_default_query_params_table_results_serialization(self):
        """
        Test serialization/deserialization for DefaultQueryParamsTableResults
        """

        # Construct a json representation of a DefaultQueryParamsTableResults model
        default_query_params_table_results_model_json = {}
        default_query_params_table_results_model_json['enabled'] = True
        default_query_params_table_results_model_json['count'] = 38
        default_query_params_table_results_model_json['per_document'] = 38

        # Construct a model instance of DefaultQueryParamsTableResults by calling from_dict on the json representation
        default_query_params_table_results_model = DefaultQueryParamsTableResults.from_dict(default_query_params_table_results_model_json)
        assert default_query_params_table_results_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        default_query_params_table_results_model_dict = DefaultQueryParamsTableResults.from_dict(default_query_params_table_results_model_json).__dict__
        default_query_params_table_results_model2 = DefaultQueryParamsTableResults(**default_query_params_table_results_model_dict)

        # Verify the model instances are equivalent
        assert default_query_params_table_results_model == default_query_params_table_results_model2

        # Convert model instance back to dict and verify no loss of data
        default_query_params_table_results_model_json2 = default_query_params_table_results_model.to_dict()
        assert default_query_params_table_results_model_json2 == default_query_params_table_results_model_json

class TestDeleteDocumentResponse():
    """
    Test Class for DeleteDocumentResponse
    """

    def test_delete_document_response_serialization(self):
        """
        Test serialization/deserialization for DeleteDocumentResponse
        """

        # Construct a json representation of a DeleteDocumentResponse model
        delete_document_response_model_json = {}
        delete_document_response_model_json['document_id'] = 'testString'
        delete_document_response_model_json['status'] = 'deleted'

        # Construct a model instance of DeleteDocumentResponse by calling from_dict on the json representation
        delete_document_response_model = DeleteDocumentResponse.from_dict(delete_document_response_model_json)
        assert delete_document_response_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        delete_document_response_model_dict = DeleteDocumentResponse.from_dict(delete_document_response_model_json).__dict__
        delete_document_response_model2 = DeleteDocumentResponse(**delete_document_response_model_dict)

        # Verify the model instances are equivalent
        assert delete_document_response_model == delete_document_response_model2

        # Convert model instance back to dict and verify no loss of data
        delete_document_response_model_json2 = delete_document_response_model.to_dict()
        assert delete_document_response_model_json2 == delete_document_response_model_json

class TestDocumentAccepted():
    """
    Test Class for DocumentAccepted
    """

    def test_document_accepted_serialization(self):
        """
        Test serialization/deserialization for DocumentAccepted
        """

        # Construct a json representation of a DocumentAccepted model
        document_accepted_model_json = {}
        document_accepted_model_json['document_id'] = 'testString'
        document_accepted_model_json['status'] = 'processing'

        # Construct a model instance of DocumentAccepted by calling from_dict on the json representation
        document_accepted_model = DocumentAccepted.from_dict(document_accepted_model_json)
        assert document_accepted_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        document_accepted_model_dict = DocumentAccepted.from_dict(document_accepted_model_json).__dict__
        document_accepted_model2 = DocumentAccepted(**document_accepted_model_dict)

        # Verify the model instances are equivalent
        assert document_accepted_model == document_accepted_model2

        # Convert model instance back to dict and verify no loss of data
        document_accepted_model_json2 = document_accepted_model.to_dict()
        assert document_accepted_model_json2 == document_accepted_model_json

class TestDocumentAttribute():
    """
    Test Class for DocumentAttribute
    """

    def test_document_attribute_serialization(self):
        """
        Test serialization/deserialization for DocumentAttribute
        """

        # Construct dict forms of any model objects needed in order to build this model.
        table_element_location_model = {} # TableElementLocation
        table_element_location_model['begin'] = 26
        table_element_location_model['end'] = 26

        # Construct a json representation of a DocumentAttribute model
        document_attribute_model_json = {}
        document_attribute_model_json['type'] = 'testString'
        document_attribute_model_json['text'] = 'testString'
        document_attribute_model_json['location'] = table_element_location_model

        # Construct a model instance of DocumentAttribute by calling from_dict on the json representation
        document_attribute_model = DocumentAttribute.from_dict(document_attribute_model_json)
        assert document_attribute_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        document_attribute_model_dict = DocumentAttribute.from_dict(document_attribute_model_json).__dict__
        document_attribute_model2 = DocumentAttribute(**document_attribute_model_dict)

        # Verify the model instances are equivalent
        assert document_attribute_model == document_attribute_model2

        # Convert model instance back to dict and verify no loss of data
        document_attribute_model_json2 = document_attribute_model.to_dict()
        assert document_attribute_model_json2 == document_attribute_model_json

class TestEnrichment():
    """
    Test Class for Enrichment
    """

    def test_enrichment_serialization(self):
        """
        Test serialization/deserialization for Enrichment
        """

        # Construct dict forms of any model objects needed in order to build this model.
        enrichment_options_model = {} # EnrichmentOptions
        enrichment_options_model['languages'] = ['testString']
        enrichment_options_model['entity_type'] = 'testString'
        enrichment_options_model['regular_expression'] = 'testString'
        enrichment_options_model['result_field'] = 'testString'

        # Construct a json representation of an Enrichment model
        enrichment_model_json = {}
        enrichment_model_json['enrichment_id'] = 'testString'
        enrichment_model_json['name'] = 'testString'
        enrichment_model_json['description'] = 'testString'
        enrichment_model_json['type'] = 'part_of_speech'
        enrichment_model_json['options'] = enrichment_options_model

        # Construct a model instance of Enrichment by calling from_dict on the json representation
        enrichment_model = Enrichment.from_dict(enrichment_model_json)
        assert enrichment_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        enrichment_model_dict = Enrichment.from_dict(enrichment_model_json).__dict__
        enrichment_model2 = Enrichment(**enrichment_model_dict)

        # Verify the model instances are equivalent
        assert enrichment_model == enrichment_model2

        # Convert model instance back to dict and verify no loss of data
        enrichment_model_json2 = enrichment_model.to_dict()
        assert enrichment_model_json2 == enrichment_model_json

class TestEnrichmentOptions():
    """
    Test Class for EnrichmentOptions
    """

    def test_enrichment_options_serialization(self):
        """
        Test serialization/deserialization for EnrichmentOptions
        """

        # Construct a json representation of an EnrichmentOptions model
        enrichment_options_model_json = {}
        enrichment_options_model_json['languages'] = ['testString']
        enrichment_options_model_json['entity_type'] = 'testString'
        enrichment_options_model_json['regular_expression'] = 'testString'
        enrichment_options_model_json['result_field'] = 'testString'

        # Construct a model instance of EnrichmentOptions by calling from_dict on the json representation
        enrichment_options_model = EnrichmentOptions.from_dict(enrichment_options_model_json)
        assert enrichment_options_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        enrichment_options_model_dict = EnrichmentOptions.from_dict(enrichment_options_model_json).__dict__
        enrichment_options_model2 = EnrichmentOptions(**enrichment_options_model_dict)

        # Verify the model instances are equivalent
        assert enrichment_options_model == enrichment_options_model2

        # Convert model instance back to dict and verify no loss of data
        enrichment_options_model_json2 = enrichment_options_model.to_dict()
        assert enrichment_options_model_json2 == enrichment_options_model_json

class TestEnrichments():
    """
    Test Class for Enrichments
    """

    def test_enrichments_serialization(self):
        """
        Test serialization/deserialization for Enrichments
        """

        # Construct dict forms of any model objects needed in order to build this model.
        enrichment_options_model = {} # EnrichmentOptions
        enrichment_options_model['languages'] = ['testString']
        enrichment_options_model['entity_type'] = 'testString'
        enrichment_options_model['regular_expression'] = 'testString'
        enrichment_options_model['result_field'] = 'testString'

        enrichment_model = {} # Enrichment
        enrichment_model['enrichment_id'] = 'testString'
        enrichment_model['name'] = 'testString'
        enrichment_model['description'] = 'testString'
        enrichment_model['type'] = 'part_of_speech'
        enrichment_model['options'] = enrichment_options_model

        # Construct a json representation of an Enrichments model
        enrichments_model_json = {}
        enrichments_model_json['enrichments'] = [enrichment_model]

        # Construct a model instance of Enrichments by calling from_dict on the json representation
        enrichments_model = Enrichments.from_dict(enrichments_model_json)
        assert enrichments_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        enrichments_model_dict = Enrichments.from_dict(enrichments_model_json).__dict__
        enrichments_model2 = Enrichments(**enrichments_model_dict)

        # Verify the model instances are equivalent
        assert enrichments_model == enrichments_model2

        # Convert model instance back to dict and verify no loss of data
        enrichments_model_json2 = enrichments_model.to_dict()
        assert enrichments_model_json2 == enrichments_model_json

class TestField():
    """
    Test Class for Field
    """

    def test_field_serialization(self):
        """
        Test serialization/deserialization for Field
        """

        # Construct a json representation of a Field model
        field_model_json = {}
        field_model_json['field'] = 'testString'
        field_model_json['type'] = 'nested'
        field_model_json['collection_id'] = 'testString'

        # Construct a model instance of Field by calling from_dict on the json representation
        field_model = Field.from_dict(field_model_json)
        assert field_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        field_model_dict = Field.from_dict(field_model_json).__dict__
        field_model2 = Field(**field_model_dict)

        # Verify the model instances are equivalent
        assert field_model == field_model2

        # Convert model instance back to dict and verify no loss of data
        field_model_json2 = field_model.to_dict()
        assert field_model_json2 == field_model_json

class TestListCollectionsResponse():
    """
    Test Class for ListCollectionsResponse
    """

    def test_list_collections_response_serialization(self):
        """
        Test serialization/deserialization for ListCollectionsResponse
        """

        # Construct dict forms of any model objects needed in order to build this model.
        collection_model = {} # Collection
        collection_model['collection_id'] = 'f1360220-ea2d-4271-9d62-89a910b13c37'
        collection_model['name'] = 'example'

        # Construct a json representation of a ListCollectionsResponse model
        list_collections_response_model_json = {}
        list_collections_response_model_json['collections'] = [collection_model]

        # Construct a model instance of ListCollectionsResponse by calling from_dict on the json representation
        list_collections_response_model = ListCollectionsResponse.from_dict(list_collections_response_model_json)
        assert list_collections_response_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        list_collections_response_model_dict = ListCollectionsResponse.from_dict(list_collections_response_model_json).__dict__
        list_collections_response_model2 = ListCollectionsResponse(**list_collections_response_model_dict)

        # Verify the model instances are equivalent
        assert list_collections_response_model == list_collections_response_model2

        # Convert model instance back to dict and verify no loss of data
        list_collections_response_model_json2 = list_collections_response_model.to_dict()
        assert list_collections_response_model_json2 == list_collections_response_model_json

class TestListFieldsResponse():
    """
    Test Class for ListFieldsResponse
    """

    def test_list_fields_response_serialization(self):
        """
        Test serialization/deserialization for ListFieldsResponse
        """

        # Construct dict forms of any model objects needed in order to build this model.
        field_model = {} # Field
        field_model['field'] = 'testString'
        field_model['type'] = 'nested'
        field_model['collection_id'] = 'testString'

        # Construct a json representation of a ListFieldsResponse model
        list_fields_response_model_json = {}
        list_fields_response_model_json['fields'] = [field_model]

        # Construct a model instance of ListFieldsResponse by calling from_dict on the json representation
        list_fields_response_model = ListFieldsResponse.from_dict(list_fields_response_model_json)
        assert list_fields_response_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        list_fields_response_model_dict = ListFieldsResponse.from_dict(list_fields_response_model_json).__dict__
        list_fields_response_model2 = ListFieldsResponse(**list_fields_response_model_dict)

        # Verify the model instances are equivalent
        assert list_fields_response_model == list_fields_response_model2

        # Convert model instance back to dict and verify no loss of data
        list_fields_response_model_json2 = list_fields_response_model.to_dict()
        assert list_fields_response_model_json2 == list_fields_response_model_json

class TestListProjectsResponse():
    """
    Test Class for ListProjectsResponse
    """

    def test_list_projects_response_serialization(self):
        """
        Test serialization/deserialization for ListProjectsResponse
        """

        # Construct dict forms of any model objects needed in order to build this model.
        project_list_details_relevancy_training_status_model = {} # ProjectListDetailsRelevancyTrainingStatus
        project_list_details_relevancy_training_status_model['data_updated'] = 'testString'
        project_list_details_relevancy_training_status_model['total_examples'] = 38
        project_list_details_relevancy_training_status_model['sufficient_label_diversity'] = True
        project_list_details_relevancy_training_status_model['processing'] = True
        project_list_details_relevancy_training_status_model['minimum_examples_added'] = True
        project_list_details_relevancy_training_status_model['successfully_trained'] = 'testString'
        project_list_details_relevancy_training_status_model['available'] = True
        project_list_details_relevancy_training_status_model['notices'] = 38
        project_list_details_relevancy_training_status_model['minimum_queries_added'] = True

        project_list_details_model = {} # ProjectListDetails
        project_list_details_model['project_id'] = 'testString'
        project_list_details_model['name'] = 'testString'
        project_list_details_model['type'] = 'document_retrieval'
        project_list_details_model['relevancy_training_status'] = project_list_details_relevancy_training_status_model
        project_list_details_model['collection_count'] = 38

        # Construct a json representation of a ListProjectsResponse model
        list_projects_response_model_json = {}
        list_projects_response_model_json['projects'] = [project_list_details_model]

        # Construct a model instance of ListProjectsResponse by calling from_dict on the json representation
        list_projects_response_model = ListProjectsResponse.from_dict(list_projects_response_model_json)
        assert list_projects_response_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        list_projects_response_model_dict = ListProjectsResponse.from_dict(list_projects_response_model_json).__dict__
        list_projects_response_model2 = ListProjectsResponse(**list_projects_response_model_dict)

        # Verify the model instances are equivalent
        assert list_projects_response_model == list_projects_response_model2

        # Convert model instance back to dict and verify no loss of data
        list_projects_response_model_json2 = list_projects_response_model.to_dict()
        assert list_projects_response_model_json2 == list_projects_response_model_json

class TestNotice():
    """
    Test Class for Notice
    """

    def test_notice_serialization(self):
        """
        Test serialization/deserialization for Notice
        """

        # Construct a json representation of a Notice model
        notice_model_json = {}
        notice_model_json['notice_id'] = 'testString'
        notice_model_json['created'] = '2020-01-28T18:40:40.123456Z'
        notice_model_json['document_id'] = 'testString'
        notice_model_json['collection_id'] = 'testString'
        notice_model_json['query_id'] = 'testString'
        notice_model_json['severity'] = 'warning'
        notice_model_json['step'] = 'testString'
        notice_model_json['description'] = 'testString'

        # Construct a model instance of Notice by calling from_dict on the json representation
        notice_model = Notice.from_dict(notice_model_json)
        assert notice_model != False

        # Construct a second model instance by passing the dict of model properties to the constructor
        notice_model_dict = Notice.from_dict(notice_model_json).__dict__
        notice_model2 = Notice(**notice_model_dict)

        # Verify the model instances are equivalent
        assert notice_model == notice_model2

        # Convert model instance back to dict and verify no loss of data
        notice_model_json2 = notice_model.to_dict()
        assert notice_model_json2 == notice_model_json
class TestProjectDetails():
"""
Test Class for ProjectDetails
"""
def test_project_details_serialization(self):
"""
Test serialization/deserialization for ProjectDetails
"""
# Construct dict forms of any model objects needed in order to build this model.
project_list_details_relevancy_training_status_model = {} # ProjectListDetailsRelevancyTrainingStatus
project_list_details_relevancy_training_status_model['data_updated'] = 'testString'
project_list_details_relevancy_training_status_model['total_examples'] = 38
project_list_details_relevancy_training_status_model['sufficient_label_diversity'] = True
project_list_details_relevancy_training_status_model['processing'] = True
project_list_details_relevancy_training_status_model['minimum_examples_added'] = True
project_list_details_relevancy_training_status_model['successfully_trained'] = 'testString'
project_list_details_relevancy_training_status_model['available'] = True
project_list_details_relevancy_training_status_model['notices'] = 38
project_list_details_relevancy_training_status_model['minimum_queries_added'] = True
default_query_params_passages_model = {} # DefaultQueryParamsPassages
default_query_params_passages_model['enabled'] = True
default_query_params_passages_model['count'] = 38
default_query_params_passages_model['fields'] = ['testString']
default_query_params_passages_model['characters'] = 38
default_query_params_passages_model['per_document'] = True
default_query_params_passages_model['max_per_document'] = 38
default_query_params_table_results_model = {} # DefaultQueryParamsTableResults
default_query_params_table_results_model['enabled'] = True
default_query_params_table_results_model['count'] = 38
default_query_params_table_results_model['per_document'] = 38
default_query_params_suggested_refinements_model = {} # DefaultQueryParamsSuggestedRefinements
default_query_params_suggested_refinements_model['enabled'] = True
default_query_params_suggested_refinements_model['count'] = 38
default_query_params_model = {} # DefaultQueryParams
default_query_params_model['collection_ids'] = ['testString']
default_query_params_model['passages'] = default_query_params_passages_model
default_query_params_model['table_results'] = default_query_params_table_results_model
default_query_params_model['aggregation'] = 'testString'
default_query_params_model['suggested_refinements'] = default_query_params_suggested_refinements_model
default_query_params_model['spelling_suggestions'] = True
default_query_params_model['highlight'] = True
default_query_params_model['count'] = 38
default_query_params_model['sort'] = 'testString'
default_query_params_model['return'] = ['testString']
# Construct a json representation of a ProjectDetails model
project_details_model_json = {}
project_details_model_json['project_id'] = 'testString'
project_details_model_json['name'] = 'testString'
project_details_model_json['type'] = 'document_retrieval'
project_details_model_json['relevancy_training_status'] = project_list_details_relevancy_training_status_model
project_details_model_json['collection_count'] = 38
project_details_model_json['default_query_parameters'] = default_query_params_model
# Construct a model instance of ProjectDetails by calling from_dict on the json representation
project_details_model = ProjectDetails.from_dict(project_details_model_json)
assert project_details_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
project_details_model_dict = ProjectDetails.from_dict(project_details_model_json).__dict__
project_details_model2 = ProjectDetails(**project_details_model_dict)
# Verify the model instances are equivalent
assert project_details_model == project_details_model2
# Convert model instance back to dict and verify no loss of data
project_details_model_json2 = project_details_model.to_dict()
assert project_details_model_json2 == project_details_model_json
class TestProjectListDetails():
"""
Test Class for ProjectListDetails
"""
def test_project_list_details_serialization(self):
"""
Test serialization/deserialization for ProjectListDetails
"""
# Construct dict forms of any model objects needed in order to build this model.
project_list_details_relevancy_training_status_model = {} # ProjectListDetailsRelevancyTrainingStatus
project_list_details_relevancy_training_status_model['data_updated'] = 'testString'
project_list_details_relevancy_training_status_model['total_examples'] = 38
project_list_details_relevancy_training_status_model['sufficient_label_diversity'] = True
project_list_details_relevancy_training_status_model['processing'] = True
project_list_details_relevancy_training_status_model['minimum_examples_added'] = True
project_list_details_relevancy_training_status_model['successfully_trained'] = 'testString'
project_list_details_relevancy_training_status_model['available'] = True
project_list_details_relevancy_training_status_model['notices'] = 38
project_list_details_relevancy_training_status_model['minimum_queries_added'] = True
# Construct a json representation of a ProjectListDetails model
project_list_details_model_json = {}
project_list_details_model_json['project_id'] = 'testString'
project_list_details_model_json['name'] = 'testString'
project_list_details_model_json['type'] = 'document_retrieval'
project_list_details_model_json['relevancy_training_status'] = project_list_details_relevancy_training_status_model
project_list_details_model_json['collection_count'] = 38
# Construct a model instance of ProjectListDetails by calling from_dict on the json representation
project_list_details_model = ProjectListDetails.from_dict(project_list_details_model_json)
assert project_list_details_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
project_list_details_model_dict = ProjectListDetails.from_dict(project_list_details_model_json).__dict__
project_list_details_model2 = ProjectListDetails(**project_list_details_model_dict)
# Verify the model instances are equivalent
assert project_list_details_model == project_list_details_model2
# Convert model instance back to dict and verify no loss of data
project_list_details_model_json2 = project_list_details_model.to_dict()
assert project_list_details_model_json2 == project_list_details_model_json
class TestProjectListDetailsRelevancyTrainingStatus():
"""
Test Class for ProjectListDetailsRelevancyTrainingStatus
"""
def test_project_list_details_relevancy_training_status_serialization(self):
"""
Test serialization/deserialization for ProjectListDetailsRelevancyTrainingStatus
"""
# Construct a json representation of a ProjectListDetailsRelevancyTrainingStatus model
project_list_details_relevancy_training_status_model_json = {}
project_list_details_relevancy_training_status_model_json['data_updated'] = 'testString'
project_list_details_relevancy_training_status_model_json['total_examples'] = 38
project_list_details_relevancy_training_status_model_json['sufficient_label_diversity'] = True
project_list_details_relevancy_training_status_model_json['processing'] = True
project_list_details_relevancy_training_status_model_json['minimum_examples_added'] = True
project_list_details_relevancy_training_status_model_json['successfully_trained'] = 'testString'
project_list_details_relevancy_training_status_model_json['available'] = True
project_list_details_relevancy_training_status_model_json['notices'] = 38
project_list_details_relevancy_training_status_model_json['minimum_queries_added'] = True
# Construct a model instance of ProjectListDetailsRelevancyTrainingStatus by calling from_dict on the json representation
project_list_details_relevancy_training_status_model = ProjectListDetailsRelevancyTrainingStatus.from_dict(project_list_details_relevancy_training_status_model_json)
assert project_list_details_relevancy_training_status_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
project_list_details_relevancy_training_status_model_dict = ProjectListDetailsRelevancyTrainingStatus.from_dict(project_list_details_relevancy_training_status_model_json).__dict__
project_list_details_relevancy_training_status_model2 = ProjectListDetailsRelevancyTrainingStatus(**project_list_details_relevancy_training_status_model_dict)
# Verify the model instances are equivalent
assert project_list_details_relevancy_training_status_model == project_list_details_relevancy_training_status_model2
# Convert model instance back to dict and verify no loss of data
project_list_details_relevancy_training_status_model_json2 = project_list_details_relevancy_training_status_model.to_dict()
assert project_list_details_relevancy_training_status_model_json2 == project_list_details_relevancy_training_status_model_json
class TestQueryAggregation():
"""
Test Class for QueryAggregation
"""
def test_query_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryAggregation
"""
# Construct a json representation of a QueryAggregation model
query_aggregation_model_json = {}
query_aggregation_model_json['type'] = 'testString'
# Construct a model instance of QueryAggregation by calling from_dict on the json representation
query_aggregation_model = QueryAggregation.from_dict(query_aggregation_model_json)
assert query_aggregation_model is not None
# Construct a copy of the model instance by calling from_dict on the output of to_dict
query_aggregation_model_json2 = query_aggregation_model.to_dict()
query_aggregation_model2 = QueryAggregation.from_dict(query_aggregation_model_json2)
# Verify the model instances are equivalent
assert query_aggregation_model == query_aggregation_model2
# Verify the dict form matches the original json representation with no loss of data
assert query_aggregation_model_json2 == query_aggregation_model_json
class TestQueryGroupByAggregationResult():
"""
Test Class for QueryGroupByAggregationResult
"""
def test_query_group_by_aggregation_result_serialization(self):
"""
Test serialization/deserialization for QueryGroupByAggregationResult
"""
# Construct dict forms of any model objects needed in order to build this model.
query_aggregation_model = {} # QueryFilterAggregation
query_aggregation_model['type'] = 'filter'
query_aggregation_model['match'] = 'testString'
query_aggregation_model['matching_results'] = 26
# Construct a json representation of a QueryGroupByAggregationResult model
query_group_by_aggregation_result_model_json = {}
query_group_by_aggregation_result_model_json['key'] = 'testString'
query_group_by_aggregation_result_model_json['matching_results'] = 38
query_group_by_aggregation_result_model_json['relevancy'] = 72.5
query_group_by_aggregation_result_model_json['total_matching_documents'] = 38
query_group_by_aggregation_result_model_json['estimated_matching_documents'] = 38
query_group_by_aggregation_result_model_json['aggregations'] = [query_aggregation_model]
# Construct a model instance of QueryGroupByAggregationResult by calling from_dict on the json representation
query_group_by_aggregation_result_model = QueryGroupByAggregationResult.from_dict(query_group_by_aggregation_result_model_json)
assert query_group_by_aggregation_result_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_group_by_aggregation_result_model_dict = QueryGroupByAggregationResult.from_dict(query_group_by_aggregation_result_model_json).__dict__
query_group_by_aggregation_result_model2 = QueryGroupByAggregationResult(**query_group_by_aggregation_result_model_dict)
# Verify the model instances are equivalent
assert query_group_by_aggregation_result_model == query_group_by_aggregation_result_model2
# Convert model instance back to dict and verify no loss of data
query_group_by_aggregation_result_model_json2 = query_group_by_aggregation_result_model.to_dict()
assert query_group_by_aggregation_result_model_json2 == query_group_by_aggregation_result_model_json
class TestQueryHistogramAggregationResult():
"""
Test Class for QueryHistogramAggregationResult
"""
def test_query_histogram_aggregation_result_serialization(self):
"""
Test serialization/deserialization for QueryHistogramAggregationResult
"""
# Construct dict forms of any model objects needed in order to build this model.
query_aggregation_model = {} # QueryFilterAggregation
query_aggregation_model['type'] = 'filter'
query_aggregation_model['match'] = 'testString'
query_aggregation_model['matching_results'] = 26
# Construct a json representation of a QueryHistogramAggregationResult model
query_histogram_aggregation_result_model_json = {}
query_histogram_aggregation_result_model_json['key'] = 26
query_histogram_aggregation_result_model_json['matching_results'] = 38
query_histogram_aggregation_result_model_json['aggregations'] = [query_aggregation_model]
# Construct a model instance of QueryHistogramAggregationResult by calling from_dict on the json representation
query_histogram_aggregation_result_model = QueryHistogramAggregationResult.from_dict(query_histogram_aggregation_result_model_json)
assert query_histogram_aggregation_result_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_histogram_aggregation_result_model_dict = QueryHistogramAggregationResult.from_dict(query_histogram_aggregation_result_model_json).__dict__
query_histogram_aggregation_result_model2 = QueryHistogramAggregationResult(**query_histogram_aggregation_result_model_dict)
# Verify the model instances are equivalent
assert query_histogram_aggregation_result_model == query_histogram_aggregation_result_model2
# Convert model instance back to dict and verify no loss of data
query_histogram_aggregation_result_model_json2 = query_histogram_aggregation_result_model.to_dict()
assert query_histogram_aggregation_result_model_json2 == query_histogram_aggregation_result_model_json
class TestQueryLargePassages():
"""
Test Class for QueryLargePassages
"""
def test_query_large_passages_serialization(self):
"""
Test serialization/deserialization for QueryLargePassages
"""
# Construct a json representation of a QueryLargePassages model
query_large_passages_model_json = {}
query_large_passages_model_json['enabled'] = True
query_large_passages_model_json['per_document'] = True
query_large_passages_model_json['max_per_document'] = 38
query_large_passages_model_json['fields'] = ['testString']
query_large_passages_model_json['count'] = 100
query_large_passages_model_json['characters'] = 50
# Construct a model instance of QueryLargePassages by calling from_dict on the json representation
query_large_passages_model = QueryLargePassages.from_dict(query_large_passages_model_json)
assert query_large_passages_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_large_passages_model_dict = QueryLargePassages.from_dict(query_large_passages_model_json).__dict__
query_large_passages_model2 = QueryLargePassages(**query_large_passages_model_dict)
# Verify the model instances are equivalent
assert query_large_passages_model == query_large_passages_model2
# Convert model instance back to dict and verify no loss of data
query_large_passages_model_json2 = query_large_passages_model.to_dict()
assert query_large_passages_model_json2 == query_large_passages_model_json
class TestQueryLargeSuggestedRefinements():
"""
Test Class for QueryLargeSuggestedRefinements
"""
def test_query_large_suggested_refinements_serialization(self):
"""
Test serialization/deserialization for QueryLargeSuggestedRefinements
"""
# Construct a json representation of a QueryLargeSuggestedRefinements model
query_large_suggested_refinements_model_json = {}
query_large_suggested_refinements_model_json['enabled'] = True
query_large_suggested_refinements_model_json['count'] = 1
# Construct a model instance of QueryLargeSuggestedRefinements by calling from_dict on the json representation
query_large_suggested_refinements_model = QueryLargeSuggestedRefinements.from_dict(query_large_suggested_refinements_model_json)
assert query_large_suggested_refinements_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_large_suggested_refinements_model_dict = QueryLargeSuggestedRefinements.from_dict(query_large_suggested_refinements_model_json).__dict__
query_large_suggested_refinements_model2 = QueryLargeSuggestedRefinements(**query_large_suggested_refinements_model_dict)
# Verify the model instances are equivalent
assert query_large_suggested_refinements_model == query_large_suggested_refinements_model2
# Convert model instance back to dict and verify no loss of data
query_large_suggested_refinements_model_json2 = query_large_suggested_refinements_model.to_dict()
assert query_large_suggested_refinements_model_json2 == query_large_suggested_refinements_model_json
class TestQueryLargeTableResults():
"""
Test Class for QueryLargeTableResults
"""
def test_query_large_table_results_serialization(self):
"""
Test serialization/deserialization for QueryLargeTableResults
"""
# Construct a json representation of a QueryLargeTableResults model
query_large_table_results_model_json = {}
query_large_table_results_model_json['enabled'] = True
query_large_table_results_model_json['count'] = 38
# Construct a model instance of QueryLargeTableResults by calling from_dict on the json representation
query_large_table_results_model = QueryLargeTableResults.from_dict(query_large_table_results_model_json)
assert query_large_table_results_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_large_table_results_model_dict = QueryLargeTableResults.from_dict(query_large_table_results_model_json).__dict__
query_large_table_results_model2 = QueryLargeTableResults(**query_large_table_results_model_dict)
# Verify the model instances are equivalent
assert query_large_table_results_model == query_large_table_results_model2
# Convert model instance back to dict and verify no loss of data
query_large_table_results_model_json2 = query_large_table_results_model.to_dict()
assert query_large_table_results_model_json2 == query_large_table_results_model_json
class TestQueryNoticesResponse():
"""
Test Class for QueryNoticesResponse
"""
def test_query_notices_response_serialization(self):
"""
Test serialization/deserialization for QueryNoticesResponse
"""
# Construct dict forms of any model objects needed in order to build this model.
notice_model = {} # Notice
notice_model['notice_id'] = 'testString'
notice_model['created'] = '2020-01-28T18:40:40.123456Z'
notice_model['document_id'] = 'testString'
notice_model['collection_id'] = 'testString'
notice_model['query_id'] = 'testString'
notice_model['severity'] = 'warning'
notice_model['step'] = 'testString'
notice_model['description'] = 'testString'
# Construct a json representation of a QueryNoticesResponse model
query_notices_response_model_json = {}
query_notices_response_model_json['matching_results'] = 38
query_notices_response_model_json['notices'] = [notice_model]
# Construct a model instance of QueryNoticesResponse by calling from_dict on the json representation
query_notices_response_model = QueryNoticesResponse.from_dict(query_notices_response_model_json)
assert query_notices_response_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_notices_response_model_dict = QueryNoticesResponse.from_dict(query_notices_response_model_json).__dict__
query_notices_response_model2 = QueryNoticesResponse(**query_notices_response_model_dict)
# Verify the model instances are equivalent
assert query_notices_response_model == query_notices_response_model2
# Convert model instance back to dict and verify no loss of data
query_notices_response_model_json2 = query_notices_response_model.to_dict()
assert query_notices_response_model_json2 == query_notices_response_model_json
class TestQueryResponse():
"""
Test Class for QueryResponse
"""
def test_query_response_serialization(self):
"""
Test serialization/deserialization for QueryResponse
"""
# Construct dict forms of any model objects needed in order to build this model.
query_result_metadata_model = {} # QueryResultMetadata
query_result_metadata_model['document_retrieval_source'] = 'search'
query_result_metadata_model['collection_id'] = 'testString'
query_result_metadata_model['confidence'] = 72.5
query_result_passage_model = {} # QueryResultPassage
query_result_passage_model['passage_text'] = 'testString'
query_result_passage_model['start_offset'] = 38
query_result_passage_model['end_offset'] = 38
query_result_passage_model['field'] = 'testString'
query_result_model = {} # QueryResult
query_result_model['document_id'] = 'testString'
query_result_model['metadata'] = {}
query_result_model['result_metadata'] = query_result_metadata_model
query_result_model['document_passages'] = [query_result_passage_model]
query_result_model['foo'] = { 'foo': 'bar' }
query_aggregation_model = {} # QueryFilterAggregation
query_aggregation_model['type'] = 'filter'
query_aggregation_model['match'] = 'testString'
query_aggregation_model['matching_results'] = 26
retrieval_details_model = {} # RetrievalDetails
retrieval_details_model['document_retrieval_strategy'] = 'untrained'
query_suggested_refinement_model = {} # QuerySuggestedRefinement
query_suggested_refinement_model['text'] = 'testString'
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
table_text_location_model = {} # TableTextLocation
table_text_location_model['text'] = 'testString'
table_text_location_model['location'] = table_element_location_model
table_headers_model = {} # TableHeaders
table_headers_model['cell_id'] = 'testString'
table_headers_model['location'] = { 'foo': 'bar' }
table_headers_model['text'] = 'testString'
table_headers_model['row_index_begin'] = 26
table_headers_model['row_index_end'] = 26
table_headers_model['column_index_begin'] = 26
table_headers_model['column_index_end'] = 26
table_row_headers_model = {} # TableRowHeaders
table_row_headers_model['cell_id'] = 'testString'
table_row_headers_model['location'] = table_element_location_model
table_row_headers_model['text'] = 'testString'
table_row_headers_model['text_normalized'] = 'testString'
table_row_headers_model['row_index_begin'] = 26
table_row_headers_model['row_index_end'] = 26
table_row_headers_model['column_index_begin'] = 26
table_row_headers_model['column_index_end'] = 26
table_column_headers_model = {} # TableColumnHeaders
table_column_headers_model['cell_id'] = 'testString'
table_column_headers_model['location'] = { 'foo': 'bar' }
table_column_headers_model['text'] = 'testString'
table_column_headers_model['text_normalized'] = 'testString'
table_column_headers_model['row_index_begin'] = 26
table_column_headers_model['row_index_end'] = 26
table_column_headers_model['column_index_begin'] = 26
table_column_headers_model['column_index_end'] = 26
table_cell_key_model = {} # TableCellKey
table_cell_key_model['cell_id'] = 'testString'
table_cell_key_model['location'] = table_element_location_model
table_cell_key_model['text'] = 'testString'
table_cell_values_model = {} # TableCellValues
table_cell_values_model['cell_id'] = 'testString'
table_cell_values_model['location'] = table_element_location_model
table_cell_values_model['text'] = 'testString'
table_key_value_pairs_model = {} # TableKeyValuePairs
table_key_value_pairs_model['key'] = table_cell_key_model
table_key_value_pairs_model['value'] = [table_cell_values_model]
table_row_header_ids_model = {} # TableRowHeaderIds
table_row_header_ids_model['id'] = 'testString'
table_row_header_texts_model = {} # TableRowHeaderTexts
table_row_header_texts_model['text'] = 'testString'
table_row_header_texts_normalized_model = {} # TableRowHeaderTextsNormalized
table_row_header_texts_normalized_model['text_normalized'] = 'testString'
table_column_header_ids_model = {} # TableColumnHeaderIds
table_column_header_ids_model['id'] = 'testString'
table_column_header_texts_model = {} # TableColumnHeaderTexts
table_column_header_texts_model['text'] = 'testString'
table_column_header_texts_normalized_model = {} # TableColumnHeaderTextsNormalized
table_column_header_texts_normalized_model['text_normalized'] = 'testString'
document_attribute_model = {} # DocumentAttribute
document_attribute_model['type'] = 'testString'
document_attribute_model['text'] = 'testString'
document_attribute_model['location'] = table_element_location_model
table_body_cells_model = {} # TableBodyCells
table_body_cells_model['cell_id'] = 'testString'
table_body_cells_model['location'] = table_element_location_model
table_body_cells_model['text'] = 'testString'
table_body_cells_model['row_index_begin'] = 26
table_body_cells_model['row_index_end'] = 26
table_body_cells_model['column_index_begin'] = 26
table_body_cells_model['column_index_end'] = 26
table_body_cells_model['row_header_ids'] = [table_row_header_ids_model]
table_body_cells_model['row_header_texts'] = [table_row_header_texts_model]
table_body_cells_model['row_header_texts_normalized'] = [table_row_header_texts_normalized_model]
table_body_cells_model['column_header_ids'] = [table_column_header_ids_model]
table_body_cells_model['column_header_texts'] = [table_column_header_texts_model]
table_body_cells_model['column_header_texts_normalized'] = [table_column_header_texts_normalized_model]
table_body_cells_model['attributes'] = [document_attribute_model]
table_result_table_model = {} # TableResultTable
table_result_table_model['location'] = table_element_location_model
table_result_table_model['text'] = 'testString'
table_result_table_model['section_title'] = table_text_location_model
table_result_table_model['title'] = table_text_location_model
table_result_table_model['table_headers'] = [table_headers_model]
table_result_table_model['row_headers'] = [table_row_headers_model]
table_result_table_model['column_headers'] = [table_column_headers_model]
table_result_table_model['key_value_pairs'] = [table_key_value_pairs_model]
table_result_table_model['body_cells'] = [table_body_cells_model]
table_result_table_model['contexts'] = [table_text_location_model]
query_table_result_model = {} # QueryTableResult
query_table_result_model['table_id'] = 'testString'
query_table_result_model['source_document_id'] = 'testString'
query_table_result_model['collection_id'] = 'testString'
query_table_result_model['table_html'] = 'testString'
query_table_result_model['table_html_offset'] = 38
query_table_result_model['table'] = table_result_table_model
query_response_passage_model = {} # QueryResponsePassage
query_response_passage_model['passage_text'] = 'testString'
query_response_passage_model['passage_score'] = 72.5
query_response_passage_model['document_id'] = 'testString'
query_response_passage_model['collection_id'] = 'testString'
query_response_passage_model['start_offset'] = 38
query_response_passage_model['end_offset'] = 38
query_response_passage_model['field'] = 'testString'
# Construct a json representation of a QueryResponse model
query_response_model_json = {}
query_response_model_json['matching_results'] = 38
query_response_model_json['results'] = [query_result_model]
query_response_model_json['aggregations'] = [query_aggregation_model]
query_response_model_json['retrieval_details'] = retrieval_details_model
query_response_model_json['suggested_query'] = 'testString'
query_response_model_json['suggested_refinements'] = [query_suggested_refinement_model]
query_response_model_json['table_results'] = [query_table_result_model]
query_response_model_json['passages'] = [query_response_passage_model]
# Construct a model instance of QueryResponse by calling from_dict on the json representation
query_response_model = QueryResponse.from_dict(query_response_model_json)
assert query_response_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_response_model_dict = QueryResponse.from_dict(query_response_model_json).__dict__
query_response_model2 = QueryResponse(**query_response_model_dict)
# Verify the model instances are equivalent
assert query_response_model == query_response_model2
# Convert model instance back to dict and verify no loss of data
query_response_model_json2 = query_response_model.to_dict()
assert query_response_model_json2 == query_response_model_json
class TestQueryResponsePassage():
"""
Test Class for QueryResponsePassage
"""
def test_query_response_passage_serialization(self):
"""
Test serialization/deserialization for QueryResponsePassage
"""
# Construct a json representation of a QueryResponsePassage model
query_response_passage_model_json = {}
query_response_passage_model_json['passage_text'] = 'testString'
query_response_passage_model_json['passage_score'] = 72.5
query_response_passage_model_json['document_id'] = 'testString'
query_response_passage_model_json['collection_id'] = 'testString'
query_response_passage_model_json['start_offset'] = 38
query_response_passage_model_json['end_offset'] = 38
query_response_passage_model_json['field'] = 'testString'
# Construct a model instance of QueryResponsePassage by calling from_dict on the json representation
query_response_passage_model = QueryResponsePassage.from_dict(query_response_passage_model_json)
assert query_response_passage_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_response_passage_model_dict = QueryResponsePassage.from_dict(query_response_passage_model_json).__dict__
query_response_passage_model2 = QueryResponsePassage(**query_response_passage_model_dict)
# Verify the model instances are equivalent
assert query_response_passage_model == query_response_passage_model2
# Convert model instance back to dict and verify no loss of data
query_response_passage_model_json2 = query_response_passage_model.to_dict()
assert query_response_passage_model_json2 == query_response_passage_model_json
class TestQueryResult():
"""
Test Class for QueryResult
"""
def test_query_result_serialization(self):
"""
Test serialization/deserialization for QueryResult
"""
# Construct dict forms of any model objects needed in order to build this model.
query_result_metadata_model = {} # QueryResultMetadata
query_result_metadata_model['document_retrieval_source'] = 'search'
query_result_metadata_model['collection_id'] = 'testString'
query_result_metadata_model['confidence'] = 72.5
query_result_passage_model = {} # QueryResultPassage
query_result_passage_model['passage_text'] = 'testString'
query_result_passage_model['start_offset'] = 38
query_result_passage_model['end_offset'] = 38
query_result_passage_model['field'] = 'testString'
# Construct a json representation of a QueryResult model
query_result_model_json = {}
query_result_model_json['document_id'] = 'testString'
query_result_model_json['metadata'] = {}
query_result_model_json['result_metadata'] = query_result_metadata_model
query_result_model_json['document_passages'] = [query_result_passage_model]
query_result_model_json['foo'] = { 'foo': 'bar' }
# Construct a model instance of QueryResult by calling from_dict on the json representation
query_result_model = QueryResult.from_dict(query_result_model_json)
assert query_result_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_result_model_dict = QueryResult.from_dict(query_result_model_json).__dict__
query_result_model2 = QueryResult(**query_result_model_dict)
# Verify the model instances are equivalent
assert query_result_model == query_result_model2
# Convert model instance back to dict and verify no loss of data
query_result_model_json2 = query_result_model.to_dict()
assert query_result_model_json2 == query_result_model_json
class TestQueryResultMetadata():
"""
Test Class for QueryResultMetadata
"""
def test_query_result_metadata_serialization(self):
"""
Test serialization/deserialization for QueryResultMetadata
"""
# Construct a json representation of a QueryResultMetadata model
query_result_metadata_model_json = {}
query_result_metadata_model_json['document_retrieval_source'] = 'search'
query_result_metadata_model_json['collection_id'] = 'testString'
query_result_metadata_model_json['confidence'] = 72.5
# Construct a model instance of QueryResultMetadata by calling from_dict on the json representation
query_result_metadata_model = QueryResultMetadata.from_dict(query_result_metadata_model_json)
assert query_result_metadata_model is not None
# Construct a second model instance by passing the dict form of the first instance to the constructor
query_result_metadata_model_dict = QueryResultMetadata.from_dict(query_result_metadata_model_json).__dict__
query_result_metadata_model2 = QueryResultMetadata(**query_result_metadata_model_dict)
# Verify the model instances are equivalent
assert query_result_metadata_model == query_result_metadata_model2
# Convert model instance back to dict and verify no loss of data
query_result_metadata_model_json2 = query_result_metadata_model.to_dict()
assert query_result_metadata_model_json2 == query_result_metadata_model_json
class TestQueryResultPassage():
"""
Test Class for QueryResultPassage
"""
def test_query_result_passage_serialization(self):
"""
Test serialization/deserialization for QueryResultPassage
"""
# Construct a json representation of a QueryResultPassage model
query_result_passage_model_json = {}
query_result_passage_model_json['passage_text'] = 'testString'
query_result_passage_model_json['start_offset'] = 38
query_result_passage_model_json['end_offset'] = 38
query_result_passage_model_json['field'] = 'testString'
# Construct a model instance of QueryResultPassage by calling from_dict on the json representation
query_result_passage_model = QueryResultPassage.from_dict(query_result_passage_model_json)
        assert query_result_passage_model is not None
        # Construct a second model instance of QueryResultPassage by passing the first instance's attribute dict to the constructor
query_result_passage_model_dict = QueryResultPassage.from_dict(query_result_passage_model_json).__dict__
query_result_passage_model2 = QueryResultPassage(**query_result_passage_model_dict)
# Verify the model instances are equivalent
assert query_result_passage_model == query_result_passage_model2
# Convert model instance back to dict and verify no loss of data
query_result_passage_model_json2 = query_result_passage_model.to_dict()
assert query_result_passage_model_json2 == query_result_passage_model_json
class TestQuerySuggestedRefinement():
"""
Test Class for QuerySuggestedRefinement
"""
def test_query_suggested_refinement_serialization(self):
"""
Test serialization/deserialization for QuerySuggestedRefinement
"""
# Construct a json representation of a QuerySuggestedRefinement model
query_suggested_refinement_model_json = {}
query_suggested_refinement_model_json['text'] = 'testString'
# Construct a model instance of QuerySuggestedRefinement by calling from_dict on the json representation
query_suggested_refinement_model = QuerySuggestedRefinement.from_dict(query_suggested_refinement_model_json)
        assert query_suggested_refinement_model is not None
        # Construct a second model instance of QuerySuggestedRefinement by passing the first instance's attribute dict to the constructor
query_suggested_refinement_model_dict = QuerySuggestedRefinement.from_dict(query_suggested_refinement_model_json).__dict__
query_suggested_refinement_model2 = QuerySuggestedRefinement(**query_suggested_refinement_model_dict)
# Verify the model instances are equivalent
assert query_suggested_refinement_model == query_suggested_refinement_model2
# Convert model instance back to dict and verify no loss of data
query_suggested_refinement_model_json2 = query_suggested_refinement_model.to_dict()
assert query_suggested_refinement_model_json2 == query_suggested_refinement_model_json
class TestQueryTableResult():
"""
Test Class for QueryTableResult
"""
def test_query_table_result_serialization(self):
"""
Test serialization/deserialization for QueryTableResult
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
table_text_location_model = {} # TableTextLocation
table_text_location_model['text'] = 'testString'
table_text_location_model['location'] = table_element_location_model
table_headers_model = {} # TableHeaders
table_headers_model['cell_id'] = 'testString'
table_headers_model['location'] = { 'foo': 'bar' }
table_headers_model['text'] = 'testString'
table_headers_model['row_index_begin'] = 26
table_headers_model['row_index_end'] = 26
table_headers_model['column_index_begin'] = 26
table_headers_model['column_index_end'] = 26
table_row_headers_model = {} # TableRowHeaders
table_row_headers_model['cell_id'] = 'testString'
table_row_headers_model['location'] = table_element_location_model
table_row_headers_model['text'] = 'testString'
table_row_headers_model['text_normalized'] = 'testString'
table_row_headers_model['row_index_begin'] = 26
table_row_headers_model['row_index_end'] = 26
table_row_headers_model['column_index_begin'] = 26
table_row_headers_model['column_index_end'] = 26
table_column_headers_model = {} # TableColumnHeaders
table_column_headers_model['cell_id'] = 'testString'
table_column_headers_model['location'] = { 'foo': 'bar' }
table_column_headers_model['text'] = 'testString'
table_column_headers_model['text_normalized'] = 'testString'
table_column_headers_model['row_index_begin'] = 26
table_column_headers_model['row_index_end'] = 26
table_column_headers_model['column_index_begin'] = 26
table_column_headers_model['column_index_end'] = 26
table_cell_key_model = {} # TableCellKey
table_cell_key_model['cell_id'] = 'testString'
table_cell_key_model['location'] = table_element_location_model
table_cell_key_model['text'] = 'testString'
table_cell_values_model = {} # TableCellValues
table_cell_values_model['cell_id'] = 'testString'
table_cell_values_model['location'] = table_element_location_model
table_cell_values_model['text'] = 'testString'
table_key_value_pairs_model = {} # TableKeyValuePairs
table_key_value_pairs_model['key'] = table_cell_key_model
table_key_value_pairs_model['value'] = [table_cell_values_model]
table_row_header_ids_model = {} # TableRowHeaderIds
table_row_header_ids_model['id'] = 'testString'
table_row_header_texts_model = {} # TableRowHeaderTexts
table_row_header_texts_model['text'] = 'testString'
table_row_header_texts_normalized_model = {} # TableRowHeaderTextsNormalized
table_row_header_texts_normalized_model['text_normalized'] = 'testString'
table_column_header_ids_model = {} # TableColumnHeaderIds
table_column_header_ids_model['id'] = 'testString'
table_column_header_texts_model = {} # TableColumnHeaderTexts
table_column_header_texts_model['text'] = 'testString'
table_column_header_texts_normalized_model = {} # TableColumnHeaderTextsNormalized
table_column_header_texts_normalized_model['text_normalized'] = 'testString'
document_attribute_model = {} # DocumentAttribute
document_attribute_model['type'] = 'testString'
document_attribute_model['text'] = 'testString'
document_attribute_model['location'] = table_element_location_model
table_body_cells_model = {} # TableBodyCells
table_body_cells_model['cell_id'] = 'testString'
table_body_cells_model['location'] = table_element_location_model
table_body_cells_model['text'] = 'testString'
table_body_cells_model['row_index_begin'] = 26
table_body_cells_model['row_index_end'] = 26
table_body_cells_model['column_index_begin'] = 26
table_body_cells_model['column_index_end'] = 26
table_body_cells_model['row_header_ids'] = [table_row_header_ids_model]
table_body_cells_model['row_header_texts'] = [table_row_header_texts_model]
table_body_cells_model['row_header_texts_normalized'] = [table_row_header_texts_normalized_model]
table_body_cells_model['column_header_ids'] = [table_column_header_ids_model]
table_body_cells_model['column_header_texts'] = [table_column_header_texts_model]
table_body_cells_model['column_header_texts_normalized'] = [table_column_header_texts_normalized_model]
table_body_cells_model['attributes'] = [document_attribute_model]
table_result_table_model = {} # TableResultTable
table_result_table_model['location'] = table_element_location_model
table_result_table_model['text'] = 'testString'
table_result_table_model['section_title'] = table_text_location_model
table_result_table_model['title'] = table_text_location_model
table_result_table_model['table_headers'] = [table_headers_model]
table_result_table_model['row_headers'] = [table_row_headers_model]
table_result_table_model['column_headers'] = [table_column_headers_model]
table_result_table_model['key_value_pairs'] = [table_key_value_pairs_model]
table_result_table_model['body_cells'] = [table_body_cells_model]
table_result_table_model['contexts'] = [table_text_location_model]
# Construct a json representation of a QueryTableResult model
query_table_result_model_json = {}
query_table_result_model_json['table_id'] = 'testString'
query_table_result_model_json['source_document_id'] = 'testString'
query_table_result_model_json['collection_id'] = 'testString'
query_table_result_model_json['table_html'] = 'testString'
query_table_result_model_json['table_html_offset'] = 38
query_table_result_model_json['table'] = table_result_table_model
# Construct a model instance of QueryTableResult by calling from_dict on the json representation
query_table_result_model = QueryTableResult.from_dict(query_table_result_model_json)
        assert query_table_result_model is not None
        # Construct a second model instance of QueryTableResult by passing the first instance's attribute dict to the constructor
query_table_result_model_dict = QueryTableResult.from_dict(query_table_result_model_json).__dict__
query_table_result_model2 = QueryTableResult(**query_table_result_model_dict)
# Verify the model instances are equivalent
assert query_table_result_model == query_table_result_model2
# Convert model instance back to dict and verify no loss of data
query_table_result_model_json2 = query_table_result_model.to_dict()
assert query_table_result_model_json2 == query_table_result_model_json
class TestQueryTermAggregationResult():
"""
Test Class for QueryTermAggregationResult
"""
def test_query_term_aggregation_result_serialization(self):
"""
Test serialization/deserialization for QueryTermAggregationResult
"""
# Construct dict forms of any model objects needed in order to build this model.
query_aggregation_model = {} # QueryFilterAggregation
query_aggregation_model['type'] = 'filter'
query_aggregation_model['match'] = 'testString'
query_aggregation_model['matching_results'] = 26
# Construct a json representation of a QueryTermAggregationResult model
query_term_aggregation_result_model_json = {}
query_term_aggregation_result_model_json['key'] = 'testString'
query_term_aggregation_result_model_json['matching_results'] = 38
query_term_aggregation_result_model_json['relevancy'] = 72.5
query_term_aggregation_result_model_json['total_matching_documents'] = 38
query_term_aggregation_result_model_json['estimated_matching_documents'] = 38
query_term_aggregation_result_model_json['aggregations'] = [query_aggregation_model]
# Construct a model instance of QueryTermAggregationResult by calling from_dict on the json representation
query_term_aggregation_result_model = QueryTermAggregationResult.from_dict(query_term_aggregation_result_model_json)
        assert query_term_aggregation_result_model is not None
        # Construct a second model instance of QueryTermAggregationResult by passing the first instance's attribute dict to the constructor
query_term_aggregation_result_model_dict = QueryTermAggregationResult.from_dict(query_term_aggregation_result_model_json).__dict__
query_term_aggregation_result_model2 = QueryTermAggregationResult(**query_term_aggregation_result_model_dict)
# Verify the model instances are equivalent
assert query_term_aggregation_result_model == query_term_aggregation_result_model2
# Convert model instance back to dict and verify no loss of data
query_term_aggregation_result_model_json2 = query_term_aggregation_result_model.to_dict()
assert query_term_aggregation_result_model_json2 == query_term_aggregation_result_model_json
class TestQueryTimesliceAggregationResult():
"""
Test Class for QueryTimesliceAggregationResult
"""
def test_query_timeslice_aggregation_result_serialization(self):
"""
Test serialization/deserialization for QueryTimesliceAggregationResult
"""
# Construct dict forms of any model objects needed in order to build this model.
query_aggregation_model = {} # QueryFilterAggregation
query_aggregation_model['type'] = 'filter'
query_aggregation_model['match'] = 'testString'
query_aggregation_model['matching_results'] = 26
# Construct a json representation of a QueryTimesliceAggregationResult model
query_timeslice_aggregation_result_model_json = {}
query_timeslice_aggregation_result_model_json['key_as_string'] = 'testString'
query_timeslice_aggregation_result_model_json['key'] = 26
query_timeslice_aggregation_result_model_json['matching_results'] = 26
query_timeslice_aggregation_result_model_json['aggregations'] = [query_aggregation_model]
# Construct a model instance of QueryTimesliceAggregationResult by calling from_dict on the json representation
query_timeslice_aggregation_result_model = QueryTimesliceAggregationResult.from_dict(query_timeslice_aggregation_result_model_json)
        assert query_timeslice_aggregation_result_model is not None
        # Construct a second model instance of QueryTimesliceAggregationResult by passing the first instance's attribute dict to the constructor
query_timeslice_aggregation_result_model_dict = QueryTimesliceAggregationResult.from_dict(query_timeslice_aggregation_result_model_json).__dict__
query_timeslice_aggregation_result_model2 = QueryTimesliceAggregationResult(**query_timeslice_aggregation_result_model_dict)
# Verify the model instances are equivalent
assert query_timeslice_aggregation_result_model == query_timeslice_aggregation_result_model2
# Convert model instance back to dict and verify no loss of data
query_timeslice_aggregation_result_model_json2 = query_timeslice_aggregation_result_model.to_dict()
assert query_timeslice_aggregation_result_model_json2 == query_timeslice_aggregation_result_model_json
class TestQueryTopHitsAggregationResult():
"""
Test Class for QueryTopHitsAggregationResult
"""
def test_query_top_hits_aggregation_result_serialization(self):
"""
Test serialization/deserialization for QueryTopHitsAggregationResult
"""
# Construct a json representation of a QueryTopHitsAggregationResult model
query_top_hits_aggregation_result_model_json = {}
query_top_hits_aggregation_result_model_json['matching_results'] = 38
query_top_hits_aggregation_result_model_json['hits'] = [{}]
# Construct a model instance of QueryTopHitsAggregationResult by calling from_dict on the json representation
query_top_hits_aggregation_result_model = QueryTopHitsAggregationResult.from_dict(query_top_hits_aggregation_result_model_json)
        assert query_top_hits_aggregation_result_model is not None
        # Construct a second model instance of QueryTopHitsAggregationResult by passing the first instance's attribute dict to the constructor
query_top_hits_aggregation_result_model_dict = QueryTopHitsAggregationResult.from_dict(query_top_hits_aggregation_result_model_json).__dict__
query_top_hits_aggregation_result_model2 = QueryTopHitsAggregationResult(**query_top_hits_aggregation_result_model_dict)
# Verify the model instances are equivalent
assert query_top_hits_aggregation_result_model == query_top_hits_aggregation_result_model2
# Convert model instance back to dict and verify no loss of data
query_top_hits_aggregation_result_model_json2 = query_top_hits_aggregation_result_model.to_dict()
assert query_top_hits_aggregation_result_model_json2 == query_top_hits_aggregation_result_model_json
class TestRetrievalDetails():
"""
Test Class for RetrievalDetails
"""
def test_retrieval_details_serialization(self):
"""
Test serialization/deserialization for RetrievalDetails
"""
# Construct a json representation of a RetrievalDetails model
retrieval_details_model_json = {}
retrieval_details_model_json['document_retrieval_strategy'] = 'untrained'
# Construct a model instance of RetrievalDetails by calling from_dict on the json representation
retrieval_details_model = RetrievalDetails.from_dict(retrieval_details_model_json)
        assert retrieval_details_model is not None
        # Construct a second model instance of RetrievalDetails by passing the first instance's attribute dict to the constructor
retrieval_details_model_dict = RetrievalDetails.from_dict(retrieval_details_model_json).__dict__
retrieval_details_model2 = RetrievalDetails(**retrieval_details_model_dict)
# Verify the model instances are equivalent
assert retrieval_details_model == retrieval_details_model2
# Convert model instance back to dict and verify no loss of data
retrieval_details_model_json2 = retrieval_details_model.to_dict()
assert retrieval_details_model_json2 == retrieval_details_model_json
class TestTableBodyCells():
"""
Test Class for TableBodyCells
"""
def test_table_body_cells_serialization(self):
"""
Test serialization/deserialization for TableBodyCells
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
table_row_header_ids_model = {} # TableRowHeaderIds
table_row_header_ids_model['id'] = 'testString'
table_row_header_texts_model = {} # TableRowHeaderTexts
table_row_header_texts_model['text'] = 'testString'
table_row_header_texts_normalized_model = {} # TableRowHeaderTextsNormalized
table_row_header_texts_normalized_model['text_normalized'] = 'testString'
table_column_header_ids_model = {} # TableColumnHeaderIds
table_column_header_ids_model['id'] = 'testString'
table_column_header_texts_model = {} # TableColumnHeaderTexts
table_column_header_texts_model['text'] = 'testString'
table_column_header_texts_normalized_model = {} # TableColumnHeaderTextsNormalized
table_column_header_texts_normalized_model['text_normalized'] = 'testString'
document_attribute_model = {} # DocumentAttribute
document_attribute_model['type'] = 'testString'
document_attribute_model['text'] = 'testString'
document_attribute_model['location'] = table_element_location_model
# Construct a json representation of a TableBodyCells model
table_body_cells_model_json = {}
table_body_cells_model_json['cell_id'] = 'testString'
table_body_cells_model_json['location'] = table_element_location_model
table_body_cells_model_json['text'] = 'testString'
table_body_cells_model_json['row_index_begin'] = 26
table_body_cells_model_json['row_index_end'] = 26
table_body_cells_model_json['column_index_begin'] = 26
table_body_cells_model_json['column_index_end'] = 26
table_body_cells_model_json['row_header_ids'] = [table_row_header_ids_model]
table_body_cells_model_json['row_header_texts'] = [table_row_header_texts_model]
table_body_cells_model_json['row_header_texts_normalized'] = [table_row_header_texts_normalized_model]
table_body_cells_model_json['column_header_ids'] = [table_column_header_ids_model]
table_body_cells_model_json['column_header_texts'] = [table_column_header_texts_model]
table_body_cells_model_json['column_header_texts_normalized'] = [table_column_header_texts_normalized_model]
table_body_cells_model_json['attributes'] = [document_attribute_model]
# Construct a model instance of TableBodyCells by calling from_dict on the json representation
table_body_cells_model = TableBodyCells.from_dict(table_body_cells_model_json)
        assert table_body_cells_model is not None
        # Construct a second model instance of TableBodyCells by passing the first instance's attribute dict to the constructor
table_body_cells_model_dict = TableBodyCells.from_dict(table_body_cells_model_json).__dict__
table_body_cells_model2 = TableBodyCells(**table_body_cells_model_dict)
# Verify the model instances are equivalent
assert table_body_cells_model == table_body_cells_model2
# Convert model instance back to dict and verify no loss of data
table_body_cells_model_json2 = table_body_cells_model.to_dict()
assert table_body_cells_model_json2 == table_body_cells_model_json
class TestTableCellKey():
"""
Test Class for TableCellKey
"""
def test_table_cell_key_serialization(self):
"""
Test serialization/deserialization for TableCellKey
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
# Construct a json representation of a TableCellKey model
table_cell_key_model_json = {}
table_cell_key_model_json['cell_id'] = 'testString'
table_cell_key_model_json['location'] = table_element_location_model
table_cell_key_model_json['text'] = 'testString'
# Construct a model instance of TableCellKey by calling from_dict on the json representation
table_cell_key_model = TableCellKey.from_dict(table_cell_key_model_json)
        assert table_cell_key_model is not None
        # Construct a second model instance of TableCellKey by passing the first instance's attribute dict to the constructor
table_cell_key_model_dict = TableCellKey.from_dict(table_cell_key_model_json).__dict__
table_cell_key_model2 = TableCellKey(**table_cell_key_model_dict)
# Verify the model instances are equivalent
assert table_cell_key_model == table_cell_key_model2
# Convert model instance back to dict and verify no loss of data
table_cell_key_model_json2 = table_cell_key_model.to_dict()
assert table_cell_key_model_json2 == table_cell_key_model_json
class TestTableCellValues():
"""
Test Class for TableCellValues
"""
def test_table_cell_values_serialization(self):
"""
Test serialization/deserialization for TableCellValues
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
# Construct a json representation of a TableCellValues model
table_cell_values_model_json = {}
table_cell_values_model_json['cell_id'] = 'testString'
table_cell_values_model_json['location'] = table_element_location_model
table_cell_values_model_json['text'] = 'testString'
# Construct a model instance of TableCellValues by calling from_dict on the json representation
table_cell_values_model = TableCellValues.from_dict(table_cell_values_model_json)
        assert table_cell_values_model is not None
        # Construct a second model instance of TableCellValues by passing the first instance's attribute dict to the constructor
table_cell_values_model_dict = TableCellValues.from_dict(table_cell_values_model_json).__dict__
table_cell_values_model2 = TableCellValues(**table_cell_values_model_dict)
# Verify the model instances are equivalent
assert table_cell_values_model == table_cell_values_model2
# Convert model instance back to dict and verify no loss of data
table_cell_values_model_json2 = table_cell_values_model.to_dict()
assert table_cell_values_model_json2 == table_cell_values_model_json
class TestTableColumnHeaderIds():
"""
Test Class for TableColumnHeaderIds
"""
def test_table_column_header_ids_serialization(self):
"""
Test serialization/deserialization for TableColumnHeaderIds
"""
# Construct a json representation of a TableColumnHeaderIds model
table_column_header_ids_model_json = {}
table_column_header_ids_model_json['id'] = 'testString'
# Construct a model instance of TableColumnHeaderIds by calling from_dict on the json representation
table_column_header_ids_model = TableColumnHeaderIds.from_dict(table_column_header_ids_model_json)
        assert table_column_header_ids_model is not None
        # Construct a second model instance of TableColumnHeaderIds by passing the first instance's attribute dict to the constructor
table_column_header_ids_model_dict = TableColumnHeaderIds.from_dict(table_column_header_ids_model_json).__dict__
table_column_header_ids_model2 = TableColumnHeaderIds(**table_column_header_ids_model_dict)
# Verify the model instances are equivalent
assert table_column_header_ids_model == table_column_header_ids_model2
# Convert model instance back to dict and verify no loss of data
table_column_header_ids_model_json2 = table_column_header_ids_model.to_dict()
assert table_column_header_ids_model_json2 == table_column_header_ids_model_json
class TestTableColumnHeaderTexts():
"""
Test Class for TableColumnHeaderTexts
"""
def test_table_column_header_texts_serialization(self):
"""
Test serialization/deserialization for TableColumnHeaderTexts
"""
# Construct a json representation of a TableColumnHeaderTexts model
table_column_header_texts_model_json = {}
table_column_header_texts_model_json['text'] = 'testString'
# Construct a model instance of TableColumnHeaderTexts by calling from_dict on the json representation
table_column_header_texts_model = TableColumnHeaderTexts.from_dict(table_column_header_texts_model_json)
        assert table_column_header_texts_model is not None
        # Construct a second model instance of TableColumnHeaderTexts by passing the first instance's attribute dict to the constructor
table_column_header_texts_model_dict = TableColumnHeaderTexts.from_dict(table_column_header_texts_model_json).__dict__
table_column_header_texts_model2 = TableColumnHeaderTexts(**table_column_header_texts_model_dict)
# Verify the model instances are equivalent
assert table_column_header_texts_model == table_column_header_texts_model2
# Convert model instance back to dict and verify no loss of data
table_column_header_texts_model_json2 = table_column_header_texts_model.to_dict()
assert table_column_header_texts_model_json2 == table_column_header_texts_model_json
class TestTableColumnHeaderTextsNormalized():
"""
Test Class for TableColumnHeaderTextsNormalized
"""
def test_table_column_header_texts_normalized_serialization(self):
"""
Test serialization/deserialization for TableColumnHeaderTextsNormalized
"""
# Construct a json representation of a TableColumnHeaderTextsNormalized model
table_column_header_texts_normalized_model_json = {}
table_column_header_texts_normalized_model_json['text_normalized'] = 'testString'
# Construct a model instance of TableColumnHeaderTextsNormalized by calling from_dict on the json representation
table_column_header_texts_normalized_model = TableColumnHeaderTextsNormalized.from_dict(table_column_header_texts_normalized_model_json)
        assert table_column_header_texts_normalized_model is not None
        # Construct a second model instance of TableColumnHeaderTextsNormalized by passing the first instance's attribute dict to the constructor
table_column_header_texts_normalized_model_dict = TableColumnHeaderTextsNormalized.from_dict(table_column_header_texts_normalized_model_json).__dict__
table_column_header_texts_normalized_model2 = TableColumnHeaderTextsNormalized(**table_column_header_texts_normalized_model_dict)
# Verify the model instances are equivalent
assert table_column_header_texts_normalized_model == table_column_header_texts_normalized_model2
# Convert model instance back to dict and verify no loss of data
table_column_header_texts_normalized_model_json2 = table_column_header_texts_normalized_model.to_dict()
assert table_column_header_texts_normalized_model_json2 == table_column_header_texts_normalized_model_json
class TestTableColumnHeaders():
"""
Test Class for TableColumnHeaders
"""
def test_table_column_headers_serialization(self):
"""
Test serialization/deserialization for TableColumnHeaders
"""
# Construct a json representation of a TableColumnHeaders model
table_column_headers_model_json = {}
table_column_headers_model_json['cell_id'] = 'testString'
table_column_headers_model_json['location'] = { 'foo': 'bar' }
table_column_headers_model_json['text'] = 'testString'
table_column_headers_model_json['text_normalized'] = 'testString'
table_column_headers_model_json['row_index_begin'] = 26
table_column_headers_model_json['row_index_end'] = 26
table_column_headers_model_json['column_index_begin'] = 26
table_column_headers_model_json['column_index_end'] = 26
# Construct a model instance of TableColumnHeaders by calling from_dict on the json representation
table_column_headers_model = TableColumnHeaders.from_dict(table_column_headers_model_json)
        assert table_column_headers_model is not None
        # Construct a second model instance of TableColumnHeaders by passing the first instance's attribute dict to the constructor
table_column_headers_model_dict = TableColumnHeaders.from_dict(table_column_headers_model_json).__dict__
table_column_headers_model2 = TableColumnHeaders(**table_column_headers_model_dict)
# Verify the model instances are equivalent
assert table_column_headers_model == table_column_headers_model2
# Convert model instance back to dict and verify no loss of data
table_column_headers_model_json2 = table_column_headers_model.to_dict()
assert table_column_headers_model_json2 == table_column_headers_model_json
class TestTableElementLocation():
"""
Test Class for TableElementLocation
"""
def test_table_element_location_serialization(self):
"""
Test serialization/deserialization for TableElementLocation
"""
# Construct a json representation of a TableElementLocation model
table_element_location_model_json = {}
table_element_location_model_json['begin'] = 26
table_element_location_model_json['end'] = 26
# Construct a model instance of TableElementLocation by calling from_dict on the json representation
table_element_location_model = TableElementLocation.from_dict(table_element_location_model_json)
        assert table_element_location_model is not None
        # Construct a second model instance of TableElementLocation by passing the first instance's attribute dict to the constructor
table_element_location_model_dict = TableElementLocation.from_dict(table_element_location_model_json).__dict__
table_element_location_model2 = TableElementLocation(**table_element_location_model_dict)
# Verify the model instances are equivalent
assert table_element_location_model == table_element_location_model2
# Convert model instance back to dict and verify no loss of data
table_element_location_model_json2 = table_element_location_model.to_dict()
assert table_element_location_model_json2 == table_element_location_model_json
class TestTableHeaders():
"""
Test Class for TableHeaders
"""
def test_table_headers_serialization(self):
"""
Test serialization/deserialization for TableHeaders
"""
# Construct a json representation of a TableHeaders model
table_headers_model_json = {}
table_headers_model_json['cell_id'] = 'testString'
table_headers_model_json['location'] = { 'foo': 'bar' }
table_headers_model_json['text'] = 'testString'
table_headers_model_json['row_index_begin'] = 26
table_headers_model_json['row_index_end'] = 26
table_headers_model_json['column_index_begin'] = 26
table_headers_model_json['column_index_end'] = 26
# Construct a model instance of TableHeaders by calling from_dict on the json representation
table_headers_model = TableHeaders.from_dict(table_headers_model_json)
        assert table_headers_model is not None
        # Construct a second model instance of TableHeaders by passing the first instance's attribute dict to the constructor
table_headers_model_dict = TableHeaders.from_dict(table_headers_model_json).__dict__
table_headers_model2 = TableHeaders(**table_headers_model_dict)
# Verify the model instances are equivalent
assert table_headers_model == table_headers_model2
# Convert model instance back to dict and verify no loss of data
table_headers_model_json2 = table_headers_model.to_dict()
assert table_headers_model_json2 == table_headers_model_json
class TestTableKeyValuePairs():
"""
Test Class for TableKeyValuePairs
"""
def test_table_key_value_pairs_serialization(self):
"""
Test serialization/deserialization for TableKeyValuePairs
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
table_cell_key_model = {} # TableCellKey
table_cell_key_model['cell_id'] = 'testString'
table_cell_key_model['location'] = table_element_location_model
table_cell_key_model['text'] = 'testString'
table_cell_values_model = {} # TableCellValues
table_cell_values_model['cell_id'] = 'testString'
table_cell_values_model['location'] = table_element_location_model
table_cell_values_model['text'] = 'testString'
# Construct a json representation of a TableKeyValuePairs model
table_key_value_pairs_model_json = {}
table_key_value_pairs_model_json['key'] = table_cell_key_model
table_key_value_pairs_model_json['value'] = [table_cell_values_model]
# Construct a model instance of TableKeyValuePairs by calling from_dict on the json representation
table_key_value_pairs_model = TableKeyValuePairs.from_dict(table_key_value_pairs_model_json)
        assert table_key_value_pairs_model is not None
        # Construct a second model instance of TableKeyValuePairs by passing the first instance's attribute dict to the constructor
table_key_value_pairs_model_dict = TableKeyValuePairs.from_dict(table_key_value_pairs_model_json).__dict__
table_key_value_pairs_model2 = TableKeyValuePairs(**table_key_value_pairs_model_dict)
# Verify the model instances are equivalent
assert table_key_value_pairs_model == table_key_value_pairs_model2
# Convert model instance back to dict and verify no loss of data
table_key_value_pairs_model_json2 = table_key_value_pairs_model.to_dict()
assert table_key_value_pairs_model_json2 == table_key_value_pairs_model_json
class TestTableResultTable():
"""
Test Class for TableResultTable
"""
def test_table_result_table_serialization(self):
"""
Test serialization/deserialization for TableResultTable
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
table_text_location_model = {} # TableTextLocation
table_text_location_model['text'] = 'testString'
table_text_location_model['location'] = table_element_location_model
table_headers_model = {} # TableHeaders
table_headers_model['cell_id'] = 'testString'
table_headers_model['location'] = {'foo': 'bar'}
table_headers_model['text'] = 'testString'
table_headers_model['row_index_begin'] = 26
table_headers_model['row_index_end'] = 26
table_headers_model['column_index_begin'] = 26
table_headers_model['column_index_end'] = 26
table_row_headers_model = {} # TableRowHeaders
table_row_headers_model['cell_id'] = 'testString'
table_row_headers_model['location'] = table_element_location_model
table_row_headers_model['text'] = 'testString'
table_row_headers_model['text_normalized'] = 'testString'
table_row_headers_model['row_index_begin'] = 26
table_row_headers_model['row_index_end'] = 26
table_row_headers_model['column_index_begin'] = 26
table_row_headers_model['column_index_end'] = 26
table_column_headers_model = {} # TableColumnHeaders
table_column_headers_model['cell_id'] = 'testString'
table_column_headers_model['location'] = {'foo': 'bar'}
table_column_headers_model['text'] = 'testString'
table_column_headers_model['text_normalized'] = 'testString'
table_column_headers_model['row_index_begin'] = 26
table_column_headers_model['row_index_end'] = 26
table_column_headers_model['column_index_begin'] = 26
table_column_headers_model['column_index_end'] = 26
table_cell_key_model = {} # TableCellKey
table_cell_key_model['cell_id'] = 'testString'
table_cell_key_model['location'] = table_element_location_model
table_cell_key_model['text'] = 'testString'
table_cell_values_model = {} # TableCellValues
table_cell_values_model['cell_id'] = 'testString'
table_cell_values_model['location'] = table_element_location_model
table_cell_values_model['text'] = 'testString'
table_key_value_pairs_model = {} # TableKeyValuePairs
table_key_value_pairs_model['key'] = table_cell_key_model
table_key_value_pairs_model['value'] = [table_cell_values_model]
table_row_header_ids_model = {} # TableRowHeaderIds
table_row_header_ids_model['id'] = 'testString'
table_row_header_texts_model = {} # TableRowHeaderTexts
table_row_header_texts_model['text'] = 'testString'
table_row_header_texts_normalized_model = {} # TableRowHeaderTextsNormalized
table_row_header_texts_normalized_model['text_normalized'] = 'testString'
table_column_header_ids_model = {} # TableColumnHeaderIds
table_column_header_ids_model['id'] = 'testString'
table_column_header_texts_model = {} # TableColumnHeaderTexts
table_column_header_texts_model['text'] = 'testString'
table_column_header_texts_normalized_model = {} # TableColumnHeaderTextsNormalized
table_column_header_texts_normalized_model['text_normalized'] = 'testString'
document_attribute_model = {} # DocumentAttribute
document_attribute_model['type'] = 'testString'
document_attribute_model['text'] = 'testString'
document_attribute_model['location'] = table_element_location_model
table_body_cells_model = {} # TableBodyCells
table_body_cells_model['cell_id'] = 'testString'
table_body_cells_model['location'] = table_element_location_model
table_body_cells_model['text'] = 'testString'
table_body_cells_model['row_index_begin'] = 26
table_body_cells_model['row_index_end'] = 26
table_body_cells_model['column_index_begin'] = 26
table_body_cells_model['column_index_end'] = 26
table_body_cells_model['row_header_ids'] = [table_row_header_ids_model]
table_body_cells_model['row_header_texts'] = [table_row_header_texts_model]
table_body_cells_model['row_header_texts_normalized'] = [table_row_header_texts_normalized_model]
table_body_cells_model['column_header_ids'] = [table_column_header_ids_model]
table_body_cells_model['column_header_texts'] = [table_column_header_texts_model]
table_body_cells_model['column_header_texts_normalized'] = [table_column_header_texts_normalized_model]
table_body_cells_model['attributes'] = [document_attribute_model]
# Construct a json representation of a TableResultTable model
table_result_table_model_json = {}
table_result_table_model_json['location'] = table_element_location_model
table_result_table_model_json['text'] = 'testString'
table_result_table_model_json['section_title'] = table_text_location_model
table_result_table_model_json['title'] = table_text_location_model
table_result_table_model_json['table_headers'] = [table_headers_model]
table_result_table_model_json['row_headers'] = [table_row_headers_model]
table_result_table_model_json['column_headers'] = [table_column_headers_model]
table_result_table_model_json['key_value_pairs'] = [table_key_value_pairs_model]
table_result_table_model_json['body_cells'] = [table_body_cells_model]
table_result_table_model_json['contexts'] = [table_text_location_model]
# Construct a model instance of TableResultTable by calling from_dict on the json representation
table_result_table_model = TableResultTable.from_dict(table_result_table_model_json)
assert table_result_table_model != False
# Construct a second model instance of TableResultTable from the dict form of the first
table_result_table_model_dict = TableResultTable.from_dict(table_result_table_model_json).__dict__
table_result_table_model2 = TableResultTable(**table_result_table_model_dict)
# Verify the model instances are equivalent
assert table_result_table_model == table_result_table_model2
# Convert model instance back to dict and verify no loss of data
table_result_table_model_json2 = table_result_table_model.to_dict()
assert table_result_table_model_json2 == table_result_table_model_json
class TestTableRowHeaderIds():
"""
Test Class for TableRowHeaderIds
"""
def test_table_row_header_ids_serialization(self):
"""
Test serialization/deserialization for TableRowHeaderIds
"""
# Construct a json representation of a TableRowHeaderIds model
table_row_header_ids_model_json = {}
table_row_header_ids_model_json['id'] = 'testString'
# Construct a model instance of TableRowHeaderIds by calling from_dict on the json representation
table_row_header_ids_model = TableRowHeaderIds.from_dict(table_row_header_ids_model_json)
assert table_row_header_ids_model != False
# Construct a second model instance of TableRowHeaderIds from the dict form of the first
table_row_header_ids_model_dict = TableRowHeaderIds.from_dict(table_row_header_ids_model_json).__dict__
table_row_header_ids_model2 = TableRowHeaderIds(**table_row_header_ids_model_dict)
# Verify the model instances are equivalent
assert table_row_header_ids_model == table_row_header_ids_model2
# Convert model instance back to dict and verify no loss of data
table_row_header_ids_model_json2 = table_row_header_ids_model.to_dict()
assert table_row_header_ids_model_json2 == table_row_header_ids_model_json
class TestTableRowHeaderTexts():
"""
Test Class for TableRowHeaderTexts
"""
def test_table_row_header_texts_serialization(self):
"""
Test serialization/deserialization for TableRowHeaderTexts
"""
# Construct a json representation of a TableRowHeaderTexts model
table_row_header_texts_model_json = {}
table_row_header_texts_model_json['text'] = 'testString'
# Construct a model instance of TableRowHeaderTexts by calling from_dict on the json representation
table_row_header_texts_model = TableRowHeaderTexts.from_dict(table_row_header_texts_model_json)
assert table_row_header_texts_model != False
# Construct a second model instance of TableRowHeaderTexts from the dict form of the first
table_row_header_texts_model_dict = TableRowHeaderTexts.from_dict(table_row_header_texts_model_json).__dict__
table_row_header_texts_model2 = TableRowHeaderTexts(**table_row_header_texts_model_dict)
# Verify the model instances are equivalent
assert table_row_header_texts_model == table_row_header_texts_model2
# Convert model instance back to dict and verify no loss of data
table_row_header_texts_model_json2 = table_row_header_texts_model.to_dict()
assert table_row_header_texts_model_json2 == table_row_header_texts_model_json
class TestTableRowHeaderTextsNormalized():
"""
Test Class for TableRowHeaderTextsNormalized
"""
def test_table_row_header_texts_normalized_serialization(self):
"""
Test serialization/deserialization for TableRowHeaderTextsNormalized
"""
# Construct a json representation of a TableRowHeaderTextsNormalized model
table_row_header_texts_normalized_model_json = {}
table_row_header_texts_normalized_model_json['text_normalized'] = 'testString'
# Construct a model instance of TableRowHeaderTextsNormalized by calling from_dict on the json representation
table_row_header_texts_normalized_model = TableRowHeaderTextsNormalized.from_dict(table_row_header_texts_normalized_model_json)
assert table_row_header_texts_normalized_model != False
# Construct a second model instance of TableRowHeaderTextsNormalized from the dict form of the first
table_row_header_texts_normalized_model_dict = TableRowHeaderTextsNormalized.from_dict(table_row_header_texts_normalized_model_json).__dict__
table_row_header_texts_normalized_model2 = TableRowHeaderTextsNormalized(**table_row_header_texts_normalized_model_dict)
# Verify the model instances are equivalent
assert table_row_header_texts_normalized_model == table_row_header_texts_normalized_model2
# Convert model instance back to dict and verify no loss of data
table_row_header_texts_normalized_model_json2 = table_row_header_texts_normalized_model.to_dict()
assert table_row_header_texts_normalized_model_json2 == table_row_header_texts_normalized_model_json
class TestTableRowHeaders():
"""
Test Class for TableRowHeaders
"""
def test_table_row_headers_serialization(self):
"""
Test serialization/deserialization for TableRowHeaders
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
# Construct a json representation of a TableRowHeaders model
table_row_headers_model_json = {}
table_row_headers_model_json['cell_id'] = 'testString'
table_row_headers_model_json['location'] = table_element_location_model
table_row_headers_model_json['text'] = 'testString'
table_row_headers_model_json['text_normalized'] = 'testString'
table_row_headers_model_json['row_index_begin'] = 26
table_row_headers_model_json['row_index_end'] = 26
table_row_headers_model_json['column_index_begin'] = 26
table_row_headers_model_json['column_index_end'] = 26
# Construct a model instance of TableRowHeaders by calling from_dict on the json representation
table_row_headers_model = TableRowHeaders.from_dict(table_row_headers_model_json)
assert table_row_headers_model != False
# Construct a second model instance of TableRowHeaders from the dict form of the first
table_row_headers_model_dict = TableRowHeaders.from_dict(table_row_headers_model_json).__dict__
table_row_headers_model2 = TableRowHeaders(**table_row_headers_model_dict)
# Verify the model instances are equivalent
assert table_row_headers_model == table_row_headers_model2
# Convert model instance back to dict and verify no loss of data
table_row_headers_model_json2 = table_row_headers_model.to_dict()
assert table_row_headers_model_json2 == table_row_headers_model_json
class TestTableTextLocation():
"""
Test Class for TableTextLocation
"""
def test_table_text_location_serialization(self):
"""
Test serialization/deserialization for TableTextLocation
"""
# Construct dict forms of any model objects needed in order to build this model.
table_element_location_model = {} # TableElementLocation
table_element_location_model['begin'] = 26
table_element_location_model['end'] = 26
# Construct a json representation of a TableTextLocation model
table_text_location_model_json = {}
table_text_location_model_json['text'] = 'testString'
table_text_location_model_json['location'] = table_element_location_model
# Construct a model instance of TableTextLocation by calling from_dict on the json representation
table_text_location_model = TableTextLocation.from_dict(table_text_location_model_json)
assert table_text_location_model != False
# Construct a second model instance of TableTextLocation from the dict form of the first
table_text_location_model_dict = TableTextLocation.from_dict(table_text_location_model_json).__dict__
table_text_location_model2 = TableTextLocation(**table_text_location_model_dict)
# Verify the model instances are equivalent
assert table_text_location_model == table_text_location_model2
# Convert model instance back to dict and verify no loss of data
table_text_location_model_json2 = table_text_location_model.to_dict()
assert table_text_location_model_json2 == table_text_location_model_json
class TestTrainingExample():
"""
Test Class for TrainingExample
"""
def test_training_example_serialization(self):
"""
Test serialization/deserialization for TrainingExample
"""
# Construct a json representation of a TrainingExample model
training_example_model_json = {}
training_example_model_json['document_id'] = 'testString'
training_example_model_json['collection_id'] = 'testString'
training_example_model_json['relevance'] = 38
training_example_model_json['created'] = '2020-01-28T18:40:40.123456Z'
training_example_model_json['updated'] = '2020-01-28T18:40:40.123456Z'
# Construct a model instance of TrainingExample by calling from_dict on the json representation
training_example_model = TrainingExample.from_dict(training_example_model_json)
assert training_example_model != False
# Construct a second model instance of TrainingExample from the dict form of the first
training_example_model_dict = TrainingExample.from_dict(training_example_model_json).__dict__
training_example_model2 = TrainingExample(**training_example_model_dict)
# Verify the model instances are equivalent
assert training_example_model == training_example_model2
# Convert model instance back to dict and verify no loss of data
training_example_model_json2 = training_example_model.to_dict()
assert training_example_model_json2 == training_example_model_json
class TestTrainingQuery():
"""
Test Class for TrainingQuery
"""
def test_training_query_serialization(self):
"""
Test serialization/deserialization for TrainingQuery
"""
# Construct dict forms of any model objects needed in order to build this model.
training_example_model = {} # TrainingExample
training_example_model['document_id'] = 'testString'
training_example_model['collection_id'] = 'testString'
training_example_model['relevance'] = 38
training_example_model['created'] = '2020-01-28T18:40:40.123456Z'
training_example_model['updated'] = '2020-01-28T18:40:40.123456Z'
# Construct a json representation of a TrainingQuery model
training_query_model_json = {}
training_query_model_json['query_id'] = 'testString'
training_query_model_json['natural_language_query'] = 'testString'
training_query_model_json['filter'] = 'testString'
training_query_model_json['created'] = '2020-01-28T18:40:40.123456Z'
training_query_model_json['updated'] = '2020-01-28T18:40:40.123456Z'
training_query_model_json['examples'] = [training_example_model]
# Construct a model instance of TrainingQuery by calling from_dict on the json representation
training_query_model = TrainingQuery.from_dict(training_query_model_json)
assert training_query_model != False
# Construct a second model instance of TrainingQuery from the dict form of the first
training_query_model_dict = TrainingQuery.from_dict(training_query_model_json).__dict__
training_query_model2 = TrainingQuery(**training_query_model_dict)
# Verify the model instances are equivalent
assert training_query_model == training_query_model2
# Convert model instance back to dict and verify no loss of data
training_query_model_json2 = training_query_model.to_dict()
assert training_query_model_json2 == training_query_model_json
class TestTrainingQuerySet():
"""
Test Class for TrainingQuerySet
"""
def test_training_query_set_serialization(self):
"""
Test serialization/deserialization for TrainingQuerySet
"""
# Construct dict forms of any model objects needed in order to build this model.
training_example_model = {} # TrainingExample
training_example_model['document_id'] = 'testString'
training_example_model['collection_id'] = 'testString'
training_example_model['relevance'] = 38
training_example_model['created'] = '2020-01-28T18:40:40.123456Z'
training_example_model['updated'] = '2020-01-28T18:40:40.123456Z'
training_query_model = {} # TrainingQuery
training_query_model['query_id'] = 'testString'
training_query_model['natural_language_query'] = 'testString'
training_query_model['filter'] = 'testString'
training_query_model['created'] = '2020-01-28T18:40:40.123456Z'
training_query_model['updated'] = '2020-01-28T18:40:40.123456Z'
training_query_model['examples'] = [training_example_model]
# Construct a json representation of a TrainingQuerySet model
training_query_set_model_json = {}
training_query_set_model_json['queries'] = [training_query_model]
# Construct a model instance of TrainingQuerySet by calling from_dict on the json representation
training_query_set_model = TrainingQuerySet.from_dict(training_query_set_model_json)
assert training_query_set_model != False
# Construct a second model instance of TrainingQuerySet from the dict form of the first
training_query_set_model_dict = TrainingQuerySet.from_dict(training_query_set_model_json).__dict__
training_query_set_model2 = TrainingQuerySet(**training_query_set_model_dict)
# Verify the model instances are equivalent
assert training_query_set_model == training_query_set_model2
# Convert model instance back to dict and verify no loss of data
training_query_set_model_json2 = training_query_set_model.to_dict()
assert training_query_set_model_json2 == training_query_set_model_json
class TestQueryCalculationAggregation():
"""
Test Class for QueryCalculationAggregation
"""
def test_query_calculation_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryCalculationAggregation
"""
# Construct a json representation of a QueryCalculationAggregation model
query_calculation_aggregation_model_json = {}
query_calculation_aggregation_model_json['type'] = 'unique_count'
query_calculation_aggregation_model_json['field'] = 'testString'
query_calculation_aggregation_model_json['value'] = 72.5
# Construct a model instance of QueryCalculationAggregation by calling from_dict on the json representation
query_calculation_aggregation_model = QueryCalculationAggregation.from_dict(query_calculation_aggregation_model_json)
assert query_calculation_aggregation_model != False
# Construct a second model instance of QueryCalculationAggregation from the dict form of the first
query_calculation_aggregation_model_dict = QueryCalculationAggregation.from_dict(query_calculation_aggregation_model_json).__dict__
query_calculation_aggregation_model2 = QueryCalculationAggregation(**query_calculation_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_calculation_aggregation_model == query_calculation_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_calculation_aggregation_model_json2 = query_calculation_aggregation_model.to_dict()
assert query_calculation_aggregation_model_json2 == query_calculation_aggregation_model_json
class TestQueryFilterAggregation():
"""
Test Class for QueryFilterAggregation
"""
def test_query_filter_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryFilterAggregation
"""
# Construct a json representation of a QueryFilterAggregation model
query_filter_aggregation_model_json = {}
query_filter_aggregation_model_json['type'] = 'filter'
query_filter_aggregation_model_json['match'] = 'testString'
query_filter_aggregation_model_json['matching_results'] = 26
# Construct a model instance of QueryFilterAggregation by calling from_dict on the json representation
query_filter_aggregation_model = QueryFilterAggregation.from_dict(query_filter_aggregation_model_json)
assert query_filter_aggregation_model != False
# Construct a second model instance of QueryFilterAggregation from the dict form of the first
query_filter_aggregation_model_dict = QueryFilterAggregation.from_dict(query_filter_aggregation_model_json).__dict__
query_filter_aggregation_model2 = QueryFilterAggregation(**query_filter_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_filter_aggregation_model == query_filter_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_filter_aggregation_model_json2 = query_filter_aggregation_model.to_dict()
assert query_filter_aggregation_model_json2 == query_filter_aggregation_model_json
class TestQueryGroupByAggregation():
"""
Test Class for QueryGroupByAggregation
"""
def test_query_group_by_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryGroupByAggregation
"""
# Construct a json representation of a QueryGroupByAggregation model
query_group_by_aggregation_model_json = {}
query_group_by_aggregation_model_json['type'] = 'group_by'
# Construct a model instance of QueryGroupByAggregation by calling from_dict on the json representation
query_group_by_aggregation_model = QueryGroupByAggregation.from_dict(query_group_by_aggregation_model_json)
assert query_group_by_aggregation_model != False
# Construct a second model instance of QueryGroupByAggregation from the dict form of the first
query_group_by_aggregation_model_dict = QueryGroupByAggregation.from_dict(query_group_by_aggregation_model_json).__dict__
query_group_by_aggregation_model2 = QueryGroupByAggregation(**query_group_by_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_group_by_aggregation_model == query_group_by_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_group_by_aggregation_model_json2 = query_group_by_aggregation_model.to_dict()
assert query_group_by_aggregation_model_json2 == query_group_by_aggregation_model_json
class TestQueryHistogramAggregation():
"""
Test Class for QueryHistogramAggregation
"""
def test_query_histogram_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryHistogramAggregation
"""
# Construct a json representation of a QueryHistogramAggregation model
query_histogram_aggregation_model_json = {}
query_histogram_aggregation_model_json['type'] = 'histogram'
query_histogram_aggregation_model_json['field'] = 'testString'
query_histogram_aggregation_model_json['interval'] = 38
query_histogram_aggregation_model_json['name'] = 'testString'
# Construct a model instance of QueryHistogramAggregation by calling from_dict on the json representation
query_histogram_aggregation_model = QueryHistogramAggregation.from_dict(query_histogram_aggregation_model_json)
assert query_histogram_aggregation_model != False
# Construct a second model instance of QueryHistogramAggregation from the dict form of the first
query_histogram_aggregation_model_dict = QueryHistogramAggregation.from_dict(query_histogram_aggregation_model_json).__dict__
query_histogram_aggregation_model2 = QueryHistogramAggregation(**query_histogram_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_histogram_aggregation_model == query_histogram_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_histogram_aggregation_model_json2 = query_histogram_aggregation_model.to_dict()
assert query_histogram_aggregation_model_json2 == query_histogram_aggregation_model_json
class TestQueryNestedAggregation():
"""
Test Class for QueryNestedAggregation
"""
def test_query_nested_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryNestedAggregation
"""
# Construct a json representation of a QueryNestedAggregation model
query_nested_aggregation_model_json = {}
query_nested_aggregation_model_json['type'] = 'nested'
query_nested_aggregation_model_json['path'] = 'testString'
query_nested_aggregation_model_json['matching_results'] = 26
# Construct a model instance of QueryNestedAggregation by calling from_dict on the json representation
query_nested_aggregation_model = QueryNestedAggregation.from_dict(query_nested_aggregation_model_json)
assert query_nested_aggregation_model != False
# Construct a second model instance of QueryNestedAggregation from the dict form of the first
query_nested_aggregation_model_dict = QueryNestedAggregation.from_dict(query_nested_aggregation_model_json).__dict__
query_nested_aggregation_model2 = QueryNestedAggregation(**query_nested_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_nested_aggregation_model == query_nested_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_nested_aggregation_model_json2 = query_nested_aggregation_model.to_dict()
assert query_nested_aggregation_model_json2 == query_nested_aggregation_model_json
class TestQueryTermAggregation():
"""
Test Class for QueryTermAggregation
"""
def test_query_term_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryTermAggregation
"""
# Construct a json representation of a QueryTermAggregation model
query_term_aggregation_model_json = {}
query_term_aggregation_model_json['type'] = 'term'
query_term_aggregation_model_json['field'] = 'testString'
query_term_aggregation_model_json['count'] = 38
query_term_aggregation_model_json['name'] = 'testString'
# Construct a model instance of QueryTermAggregation by calling from_dict on the json representation
query_term_aggregation_model = QueryTermAggregation.from_dict(query_term_aggregation_model_json)
assert query_term_aggregation_model != False
# Construct a second model instance of QueryTermAggregation from the dict form of the first
query_term_aggregation_model_dict = QueryTermAggregation.from_dict(query_term_aggregation_model_json).__dict__
query_term_aggregation_model2 = QueryTermAggregation(**query_term_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_term_aggregation_model == query_term_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_term_aggregation_model_json2 = query_term_aggregation_model.to_dict()
assert query_term_aggregation_model_json2 == query_term_aggregation_model_json
class TestQueryTimesliceAggregation():
"""
Test Class for QueryTimesliceAggregation
"""
def test_query_timeslice_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryTimesliceAggregation
"""
# Construct a json representation of a QueryTimesliceAggregation model
query_timeslice_aggregation_model_json = {}
query_timeslice_aggregation_model_json['type'] = 'timeslice'
query_timeslice_aggregation_model_json['field'] = 'testString'
query_timeslice_aggregation_model_json['interval'] = 'testString'
query_timeslice_aggregation_model_json['name'] = 'testString'
# Construct a model instance of QueryTimesliceAggregation by calling from_dict on the json representation
query_timeslice_aggregation_model = QueryTimesliceAggregation.from_dict(query_timeslice_aggregation_model_json)
assert query_timeslice_aggregation_model != False
# Construct a second model instance of QueryTimesliceAggregation from the dict form of the first
query_timeslice_aggregation_model_dict = QueryTimesliceAggregation.from_dict(query_timeslice_aggregation_model_json).__dict__
query_timeslice_aggregation_model2 = QueryTimesliceAggregation(**query_timeslice_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_timeslice_aggregation_model == query_timeslice_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_timeslice_aggregation_model_json2 = query_timeslice_aggregation_model.to_dict()
assert query_timeslice_aggregation_model_json2 == query_timeslice_aggregation_model_json
class TestQueryTopHitsAggregation():
"""
Test Class for QueryTopHitsAggregation
"""
def test_query_top_hits_aggregation_serialization(self):
"""
Test serialization/deserialization for QueryTopHitsAggregation
"""
# Construct dict forms of any model objects needed in order to build this model.
query_top_hits_aggregation_result_model = {} # QueryTopHitsAggregationResult
query_top_hits_aggregation_result_model['matching_results'] = 38
query_top_hits_aggregation_result_model['hits'] = [{}]
# Construct a json representation of a QueryTopHitsAggregation model
query_top_hits_aggregation_model_json = {}
query_top_hits_aggregation_model_json['type'] = 'top_hits'
query_top_hits_aggregation_model_json['size'] = 38
query_top_hits_aggregation_model_json['name'] = 'testString'
query_top_hits_aggregation_model_json['hits'] = query_top_hits_aggregation_result_model
# Construct a model instance of QueryTopHitsAggregation by calling from_dict on the json representation
query_top_hits_aggregation_model = QueryTopHitsAggregation.from_dict(query_top_hits_aggregation_model_json)
assert query_top_hits_aggregation_model != False
# Construct a second model instance of QueryTopHitsAggregation from the dict form of the first
query_top_hits_aggregation_model_dict = QueryTopHitsAggregation.from_dict(query_top_hits_aggregation_model_json).__dict__
query_top_hits_aggregation_model2 = QueryTopHitsAggregation(**query_top_hits_aggregation_model_dict)
# Verify the model instances are equivalent
assert query_top_hits_aggregation_model == query_top_hits_aggregation_model2
# Convert model instance back to dict and verify no loss of data
query_top_hits_aggregation_model_json2 = query_top_hits_aggregation_model.to_dict()
assert query_top_hits_aggregation_model_json2 == query_top_hits_aggregation_model_json
# endregion
##############################################################################
# End of Model Tests
##############################################################################
] | 82 | 2021-03-26T10:07:57.000Z | 2022-03-29T11:08:27.000Z | from .base_actor import BaseActor
from .stark_s import STARKSActor
from .stark_st import STARKSTActor
from .stark_lightningXtrt import STARKLightningXtrtActor
from .stark_lightningXtrt_distill import STARKLightningXtrtdistillActor
| 38.5 | 71 | 0.891775 | 26 | 231 | 7.692308 | 0.538462 | 0.18 | 0.22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08658 | 231 | 5 | 72 | 46.2 | 0.947867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
] | 5 | 2020-01-23T13:28:32.000Z | 2020-08-06T13:13:21.000Z | # -*- coding: utf-8 -*-
import io
import unittest
import unittest.mock as mock
from fastapi.testclient import TestClient
from datasets.api import app
import tests.util as util
TEST_CLIENT = TestClient(app)
class TestCreateDataset(unittest.TestCase):
maxDiff = None
@mock.patch(
"datasets.datasets.stat_dataset",
side_effect=util.FILE_NOT_FOUND_ERROR,
)
@mock.patch(
"datasets.datasets.save_dataset",
)
def test_create_dataset_with_iris_csv(self, mock_save_dataset, mock_stat_dataset):
"""
Should call platiagro.save_dataset using given file, filename, and metadata
(columns, featurestypes, total, original-filename).
"""
dataset_name = util.IRIS_DATASET_NAME
rv = TEST_CLIENT.post(
"/datasets",
files={
"file": (
dataset_name,
io.StringIO(util.IRIS_DATA),
"multipart/form-data",
)
},
)
result = rv.json()
expected = {
"name": dataset_name,
"filename": dataset_name,
"total": len(util.IRIS_DATA_ARRAY),
"columns": util.IRIS_COLUMNS_FEATURETYPES,
"data": util.IRIS_DATA_ARRAY,
}
self.assertEqual(result, expected)
self.assertEqual(rv.status_code, 200)
mock_stat_dataset.assert_any_call(dataset_name)
mock_save_dataset.assert_any_call(
dataset_name,
mock.ANY,
metadata={
"columns": util.IRIS_COLUMNS,
"featuretypes": util.IRIS_FEATURETYPES,
"original-filename": util.IRIS_DATASET_NAME,
"total": len(util.IRIS_DATA_ARRAY),
},
)
@mock.patch(
"datasets.datasets.stat_dataset",
side_effect=util.FILE_NOT_FOUND_ERROR,
)
@mock.patch(
"datasets.datasets.save_dataset",
)
def test_create_dataset_with_iris_csv_one_column(
self, mock_save_dataset, mock_stat_dataset
):
"""
Should call platiagro.save_dataset using given file, filename, and metadata
(columns, featurestypes, total, original-filename).
"""
dataset_name = util.IRIS_DATASET_NAME
rv = TEST_CLIENT.post(
"/datasets",
files={
"file": (
dataset_name,
io.StringIO(util.IRIS_DATA_ONE_COLUMN),
"multipart/form-data",
)
},
)
result = rv.json()
expected = {
"name": dataset_name,
"filename": dataset_name,
"total": len(util.IRIS_DATA_ARRAY_ONE_COLUMN),
"columns": util.IRIS_COLUMNS_FEATURETYPES_ONE_COLUMN,
"data": util.IRIS_DATA_ARRAY_ONE_COLUMN,
}
self.assertEqual(result, expected)
self.assertEqual(rv.status_code, 200)
mock_stat_dataset.assert_any_call(dataset_name)
mock_save_dataset.assert_any_call(
dataset_name,
mock.ANY,
metadata={
"columns": util.IRIS_ONE_COLUMN,
"featuretypes": util.IRIS_FEATURETYPES_ONE_COLUMN,
"original-filename": util.IRIS_DATASET_NAME,
"total": len(util.IRIS_DATA_ARRAY_ONE_COLUMN),
},
)
@mock.patch(
"datasets.datasets.stat_dataset",
side_effect=util.FILE_NOT_FOUND_ERROR,
)
@mock.patch(
"datasets.datasets.save_dataset",
)
def test_create_dataset_with_iris_csv_headerless(
self, mock_save_dataset, mock_stat_dataset
):
"""
Should call platiagro.save_dataset using given file, filename, and metadata (columns, featurestypes, total, original-filename).
"""
dataset_name = util.IRIS_DATASET_NAME
rv = TEST_CLIENT.post(
"/datasets",
files={
"file": (
dataset_name,
io.StringIO(util.IRIS_DATA_HEADERLESS),
"multipart/form-data",
)
},
)
result = rv.json()
expected = {
"name": dataset_name,
"filename": dataset_name,
"total": len(util.IRIS_DATA_ARRAY),
"columns": util.IRIS_HEADERLESS_COLUMNS_FEATURETYPES,
"data": util.IRIS_DATA_ARRAY,
}
self.assertEqual(result, expected)
self.assertEqual(rv.status_code, 200)
mock_stat_dataset.assert_any_call(dataset_name)
mock_save_dataset.assert_any_call(
dataset_name,
mock.ANY,
metadata={
"columns": util.IRIS_HEADERLESS_COLUMNS,
"featuretypes": util.IRIS_FEATURETYPES,
"original-filename": util.IRIS_DATASET_NAME,
"total": len(util.IRIS_DATA_ARRAY),
},
)
@mock.patch(
"datasets.datasets.stat_dataset",
side_effect=util.FILE_NOT_FOUND_ERROR,
)
@mock.patch(
"datasets.datasets.save_dataset",
)
def test_create_dataset_with_png(self, mock_save_dataset, mock_stat_dataset):
"""
Should call platiagro.save_dataset using given file, filename, and metadata (original-filename).
"""
dataset_name = util.PNG_DATASET_NAME
rv = TEST_CLIENT.post(
"/datasets",
files={
"file": (
dataset_name,
io.BytesIO(util.PNG_DATA),
"multipart/form-data",
)
},
)
result = rv.json()
expected = {
"name": dataset_name,
"filename": dataset_name,
}
self.assertEqual(result, expected)
self.assertEqual(rv.status_code, 200)
mock_stat_dataset.assert_any_call(dataset_name)
mock_save_dataset.assert_any_call(
dataset_name,
mock.ANY,
metadata={
"original-filename": util.PNG_DATASET_NAME,
},
)
def test_create_dataset_with_gfile_client_unauthorized(self):
"""
Should raise http status 400 client unauthorized when given clientId and clientSecret are invalid.
"""
rv = TEST_CLIENT.post(
"/datasets",
json={
"gfile": {
"clientId": "clientId",
"clientSecret": "clientSecret",
"id": "id",
"mimeType": "text/csv",
"name": "iris.csv",
"token": "123",
}
},
)
result = rv.json()
expected = {
"message": "Invalid token: client unauthorized",
}
self.assertEqual(result, expected)
self.assertEqual(rv.status_code, 400)
@mock.patch(
"datasets.datasets.stat_dataset",
side_effect=util.FILE_NOT_FOUND_ERROR,
)
@mock.patch(
"datasets.datasets.save_dataset",
)
def test_create_dataset_with_predict_file_csv(
self, mock_save_dataset, mock_stat_dataset
):
"""
Should call platiagro.save_dataset using given file, filename, and metadata (columns, featurestypes, total, original-filename).
"""
dataset_name = util.PREDICT_FILE
rv = TEST_CLIENT.post(
"/datasets",
files={
"file": (
dataset_name,
util.PREDICT_FILE_HEADER,
"multipart/form-data",
)
},
)
result = rv.json()
expected = {
"name": dataset_name,
"filename": dataset_name,
"total": len(util.PREDICT_FILE_DATA),
"columns": util.PREDICT_FILE_COLUMNS,
"data": util.PREDICT_FILE_DATA,
}
self.assertEqual(result, expected)
self.assertEqual(rv.status_code, 200)
mock_stat_dataset.assert_any_call(dataset_name)
mock_save_dataset.assert_any_call(
dataset_name,
mock.ANY,
metadata={
"columns": util.PREDICT_COLUMNS,
"featuretypes": util.PREDICT_FEATURETYPES,
"original-filename": util.PREDICT_FILE,
"total": len(util.PREDICT_FILE_DATA),
},
)
@mock.patch(
"datasets.datasets.stat_dataset",
side_effect=util.FILE_NOT_FOUND_ERROR,
)
@mock.patch(
"datasets.datasets.save_dataset",
)
def test_create_dataset_with_predict_file_headerless_csv(
self, mock_save_dataset, mock_stat_dataset
):
"""
Should call platiagro.save_dataset using given file, filename, and metadata (columns, featurestypes, total, original-filename).
"""
dataset_name = util.PREDICT_HEADERLESS
rv = TEST_CLIENT.post(
"/datasets",
files={
"file": (
dataset_name,
util.PREDICT_FILE_HEADERLESS,
"multipart/form-data",
)
},
)
result = rv.json()
expected = {
"name": dataset_name,
"filename": dataset_name,
"total": len(util.PREDICT_FILE_DATA),
"columns": util.PREDICT_FILE_COLUMNS_HEADERLESS,
"data": util.PREDICT_FILE_DATA,
}
self.assertEqual(result, expected)
self.assertEqual(rv.status_code, 200)
mock_stat_dataset.assert_any_call(dataset_name)
mock_save_dataset.assert_any_call(
dataset_name,
mock.ANY,
metadata={
"columns": util.PREDICT_COLUMNS_HEADERLESS,
"featuretypes": util.PREDICT_FEATURETYPES,
"original-filename": util.PREDICT_HEADERLESS,
"total": len(util.PREDICT_FILE_DATA),
},
)
| 31.34472 | 135 | 0.550183 | 966 | 10,093 | 5.435818 | 0.097308 | 0.092173 | 0.03885 | 0.057132 | 0.872596 | 0.849552 | 0.830508 | 0.830508 | 0.806894 | 0.796229 | 0 | 0.004299 | 0.354701 | 10,093 | 321 | 136 | 31.442368 | 0.801935 | 0.085009 | 0 | 0.623616 | 0 | 0 | 0.117147 | 0.039748 | 0 | 0 | 0 | 0 | 0.095941 | 1 | 0.02583 | false | 0 | 0.02214 | 0 | 0.055351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a7cffbe8fcfd94428c0db34abf693ec733a4c7e6 | 41 | py | Python | opac/queries/renewing/__init__.py | rimphyd/Django-OPAC | d86f2e28fee7f2ec551aeeb98ec67caefc06a3fb | [
"MIT"
] | 1 | 2020-11-26T05:25:46.000Z | 2020-11-26T05:25:46.000Z | opac/queries/renewing/__init__.py | rimphyd/Django-OPAC | d86f2e28fee7f2ec551aeeb98ec67caefc06a3fb | [
"MIT"
] | null | null | null | opac/queries/renewing/__init__.py | rimphyd/Django-OPAC | d86f2e28fee7f2ec551aeeb98ec67caefc06a3fb | [
"MIT"
] | null | null | null | from .create import * # noqa: F401 F403
| 20.5 | 40 | 0.682927 | 6 | 41 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 0.219512 | 41 | 1 | 41 | 41 | 0.6875 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac2644b93c65944d236d3550d34423686e5e23cf | 23 | py | Python | wepppy/nodb/mods/baer/__init__.py | hwbeeson/wepppy | 6358552df99853c75be8911e7ef943108ae6923e | [
"BSD-3-Clause"
] | null | null | null | wepppy/nodb/mods/baer/__init__.py | hwbeeson/wepppy | 6358552df99853c75be8911e7ef943108ae6923e | [
"BSD-3-Clause"
] | null | null | null | wepppy/nodb/mods/baer/__init__.py | hwbeeson/wepppy | 6358552df99853c75be8911e7ef943108ae6923e | [
"BSD-3-Clause"
] | null | null | null | from .baer import Baer
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac3335b436d56e5107995134ab737ac80c74ade9 | 109 | py | Python | backend/app/app/rearq_/__init__.py | PY-GZKY/Tplan | 9f5335f9a9a28afce608744bebed1d9827068e6d | [
"MIT"
] | 121 | 2021-10-29T20:21:37.000Z | 2022-03-21T03:33:52.000Z | backend/app/app/rearq_/__init__.py | GZKY-PY/Tplan | 425ca8a497cdb3438bdbf6c72ed8dc234479dd00 | [
"MIT"
] | null | null | null | backend/app/app/rearq_/__init__.py | GZKY-PY/Tplan | 425ca8a497cdb3438bdbf6c72ed8dc234479dd00 | [
"MIT"
] | 8 | 2021-11-06T07:02:11.000Z | 2022-02-28T11:53:23.000Z | """
rearq start:rearq timer # scheduled tasks
rearq start:rearq worker --consumer-name nihao # naturally, this specifies a worker name
""" | 27.25 | 69 | 0.724771 | 15 | 109 | 5.266667 | 0.6 | 0.253165 | 0.379747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155963 | 109 | 4 | 70 | 27.25 | 0.858696 | 0.926606 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ac4661af7049d6962da999af166f1c553f50ddb3 | 5,647 | py | Python | src/test_mst.py | saulno/graph_lib | d89c8e55cad964bef7e0369c2d66ecf4fd9e4ba7 | [
"MIT"
] | null | null | null | src/test_mst.py | saulno/graph_lib | d89c8e55cad964bef7e0369c2d66ecf4fd9e4ba7 | [
"MIT"
] | null | null | null | src/test_mst.py | saulno/graph_lib | d89c8e55cad964bef7e0369c2d66ecf4fd9e4ba7 | [
"MIT"
] | null | null | null | from graph.GraphFactory import GraphFactory, GridBuilder
from graph.GraphFactory import BarabasiAlbertBuilder, DorogovtsevMendesBuilder, ErdosRenyiBuilder, GeographicBuilder, GilbertBuilder, GraphFactory, GridBuilder
factory = GraphFactory()
factory.set_builder(GridBuilder())
g = factory.build_graph(columns=5, rows=6, directed=True)
g.to_graphviz("../output_4/grid/generated_30")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Grid 30 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/grid/kruskal_30")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Grid 30 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/grid/prim_30")
g = factory.build_graph(columns=10, rows=10, directed=True)
g.to_graphviz("../output_4/grid/generated_100")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Grid 100 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/grid/kruskal_100")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Grid 100 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/grid/prim_100")
factory.set_builder(ErdosRenyiBuilder())
g = factory.build_graph(nodes=30, edges=15,)
g.to_graphviz("../output_4/erdos/generated_30")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Erdos 30 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/erdos/kruskal_30")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Erdos 30 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/erdos/prim_30")
g = factory.build_graph(nodes=100, edges=40,)
g.to_graphviz("../output_4/erdos/generated_100")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Erdos 100 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/erdos/kruskal_100")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Erdos 100 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/erdos/prim_100")
factory.set_builder(GilbertBuilder())
g = factory.build_graph(nodes=30, p=0.1, loops=True)
g.to_graphviz("../output_4/gilbert/generated_30")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Gilbert 30 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/gilbert/kruskal_30")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Gilbert 30 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/gilbert/prim_30")
g = factory.build_graph(nodes=100, p=0.1, loops=True)
g.to_graphviz("../output_4/gilbert/generated_100")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Gilbert 100 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/gilbert/kruskal_100")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Gilbert 100 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/gilbert/prim_100")
factory.set_builder(GeographicBuilder())
g = factory.build_graph(nodes=30, max_dist=0.1, loops=False)
g.to_graphviz("../output_4/geographic/generated_30")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Geographic 30 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/geographic/kruskal_30")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Geographic 30 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/geographic/prim_30")
g = factory.build_graph(nodes=100, max_dist=0.1, loops=False)
g.to_graphviz("../output_4/geographic/generated_100")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Geographic 100 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/geographic/kruskal_100")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Geographic 100 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/geographic/prim_100")
factory.set_builder(BarabasiAlbertBuilder())
g = factory.build_graph(nodes=30, degree=10, loops=False)
g.to_graphviz("../output_4/barabasi/generated_30")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Barabasi 30 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/barabasi/kruskal_30")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Barabasi 30 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/barabasi/prim_30")
g = factory.build_graph(nodes=100, degree=40, loops=False)
g.to_graphviz("../output_4/barabasi/generated_100")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Barabasi 100 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/barabasi/kruskal_100")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Barabasi 100 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/barabasi/prim_100")
factory.set_builder(DorogovtsevMendesBuilder())
g = factory.build_graph(nodes=30)
g.to_graphviz("../output_4/dorogovstev/generated_30")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Dorogovstev 30 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/dorogovstev/kruskal_30")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Dorogovstev 30 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/dorogovstev/prim_30")
g = factory.build_graph(nodes=100)
g.to_graphviz("../output_4/dorogovstev/generated_100")
g_kruskal, mst_kruskal = g.kruskal()
print(f"Dorogovstev 100 nodes -> Kruskal MST: {mst_kruskal}")
g_kruskal.to_graphviz("../output_4/dorogovstev/kruskal_100")
g_prim, mst_prim = g.prim(g.get_node_by_id(list(g.nodes.keys())[0]))
print(f"Dorogovstev 100 nodes -> Prim MST: {mst_prim}")
g_prim.to_graphviz("../output_4/dorogovstev/prim_100") | 40.92029 | 159 | 0.755268 | 949 | 5,647 | 4.220232 | 0.061117 | 0.089888 | 0.14382 | 0.152809 | 0.905119 | 0.87216 | 0.835955 | 0.809988 | 0.775031 | 0.717603 | 0 | 0.046783 | 0.072605 | 5,647 | 138 | 160 | 40.92029 | 0.717968 | 0 | 0 | 0.228571 | 0 | 0 | 0.382436 | 0.195467 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019048 | 0 | 0.019048 | 0.228571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ac6154cc46d725e2d9ec1714fd6a4c546c6038f8 | 1,450 | py | Python | tests/test_ner.py | easynlp/easynlp | 4b3b405a64ca166cc19ee9c43b79a475cf699996 | [
"MIT"
] | 6 | 2021-07-09T08:13:44.000Z | 2021-11-10T04:09:33.000Z | tests/test_ner.py | easynlp/easynlp | 4b3b405a64ca166cc19ee9c43b79a475cf699996 | [
"MIT"
] | 1 | 2021-07-09T17:18:16.000Z | 2021-07-09T17:18:16.000Z | tests/test_ner.py | easynlp/easynlp | 4b3b405a64ca166cc19ee9c43b79a475cf699996 | [
"MIT"
] | 1 | 2022-02-09T15:37:14.000Z | 2022-02-09T15:37:14.000Z | import easynlp
def test_single_ner():
data = {
"text": [
"My name is Ben. I live in Scotland and work for Microsoft.",
]
}
input_column = "text"
output_column = "ner"
output_dataset = easynlp.ner(data, input_column, output_column)
ner_tags = [["PER", "LOC", "ORG"]]
ner_start_offsets = [[11, 26, 48]]
ner_end_offsets = [[14, 34, 57]]
assert len(output_dataset) == 1
assert output_dataset[f"{output_column}_tags"] == ner_tags
assert output_dataset[f"{output_column}_start_offsets"] == ner_start_offsets
assert output_dataset[f"{output_column}_end_offsets"] == ner_end_offsets
def test_ner():
data = {
"text": [
"My name is Ben. I live in Scotland and work for Microsoft.",
"My name is Ben.",
"I live in Scotland.",
"I work for Microsoft.",
]
}
input_column = "text"
output_column = "ner"
output_dataset = easynlp.ner(data, input_column, output_column)
ner_tags = [["PER", "LOC", "ORG"], ["PER"], ["LOC"], ["ORG"]]
ner_start_offsets = [[11, 26, 48], [11], [10], [11]]
ner_end_offsets = [[14, 34, 57], [14], [18], [20]]
assert len(output_dataset) == 4
assert output_dataset[f"{output_column}_tags"] == ner_tags
assert output_dataset[f"{output_column}_start_offsets"] == ner_start_offsets
assert output_dataset[f"{output_column}_end_offsets"] == ner_end_offsets
| 35.365854 | 80 | 0.61931 | 194 | 1,450 | 4.335052 | 0.231959 | 0.142687 | 0.135553 | 0.142687 | 0.890606 | 0.890606 | 0.845422 | 0.845422 | 0.814507 | 0.753864 | 0 | 0.034265 | 0.235172 | 1,450 | 40 | 81 | 36.25 | 0.724076 | 0 | 0 | 0.5 | 0 | 0 | 0.256552 | 0.077241 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.055556 | false | 0 | 0.027778 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ac801a15042a4a7e4448e168ebdd437ace585bce | 84 | py | Python | hikari_views/__init__.py | tandemdude/hikari-views | 7a1544b72722cd8935c4d0855c921f7577e8903c | [
"MIT"
] | 1 | 2022-01-19T13:39:01.000Z | 2022-01-19T13:39:01.000Z | hikari_views/__init__.py | tandemdude/hikari-views | 7a1544b72722cd8935c4d0855c921f7577e8903c | [
"MIT"
] | null | null | null | hikari_views/__init__.py | tandemdude/hikari-views | 7a1544b72722cd8935c4d0855c921f7577e8903c | [
"MIT"
] | null | null | null | from .button import *
from .item import *
from .select import *
from .view import *
| 16.8 | 21 | 0.714286 | 12 | 84 | 5 | 0.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 84 | 4 | 22 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3baed4dee41cf553a1f0c18c4250e26f8ddef4b6 | 234 | py | Python | h1st/h1st/schema/validators/__init__.py | tanrobotix/h1st | c9d67305726f11235751bc5abfd766279cba463b | [
"Apache-2.0"
] | null | null | null | h1st/h1st/schema/validators/__init__.py | tanrobotix/h1st | c9d67305726f11235751bc5abfd766279cba463b | [
"Apache-2.0"
] | 3 | 2020-11-13T19:06:07.000Z | 2022-02-10T02:06:03.000Z | h1st/h1st/schema/validators/__init__.py | diophung/h1st | ca4245996448717cf9701e17929eca8daa5d80a4 | [
"Apache-2.0"
] | null | null | null | from .list_validator import ListValidator
from .union_validator import UnionValidator
from .pyarrow_validator import PyArrowSchemaValidator
from .numpy_validator import NumpySchemaValidator
from .field_validator import FieldValidator
| 39 | 53 | 0.893162 | 25 | 234 | 8.16 | 0.52 | 0.367647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08547 | 234 | 5 | 54 | 46.8 | 0.953271 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3bf681c4b5f9d9fc4ee699c14363086d09679006 | 4,225 | py | Python | tests/units/Vault/test_withdraw_mock.py | benber86/alcom_contracts | 57136d97d0d30088679e358a2fc3345e82ccb0f7 | [
"MIT"
] | 2 | 2021-07-14T16:26:14.000Z | 2021-08-01T22:24:51.000Z | tests/units/Vault/test_withdraw_mock.py | benber86/alcom_contracts | 57136d97d0d30088679e358a2fc3345e82ccb0f7 | [
"MIT"
] | null | null | null | tests/units/Vault/test_withdraw_mock.py | benber86/alcom_contracts | 57136d97d0d30088679e358a2fc3345e82ccb0f7 | [
"MIT"
] | null | null | null | import brownie
import pytest
import math
@pytest.mark.parametrize("amount", [1, 100, 10**18])
def test_mutiple_withdraw_mock(amount, alice, bob, charlie, dave, mock_vault, alcx, mock_ss_compounder):
prior_alcx_balance_alice = alcx.balanceOf(alice)
prior_alcx_balance_bob = alcx.balanceOf(bob)
prior_alcx_balance_charlie = alcx.balanceOf(charlie)
prior_alcx_balance_dave = alcx.balanceOf(dave)
for account in [alice, bob, charlie, dave]:
alcx.approve(mock_vault, alcx.totalSupply(), {'from': account})
mock_vault.deposit(amount, {'from': account})
for account in [bob, charlie, dave]:
assert mock_vault.balanceOf(account) == mock_vault.balanceOf(alice)
mock_vault.withdraw(amount, {'from': alice})
alice_fee = amount * 250 // 10000
assert alcx.balanceOf(alice) == prior_alcx_balance_alice - alice_fee
mock_vault.withdraw(amount, {'from': bob})
bob_gain = (alice_fee // 3)
bob_fee = (amount + bob_gain) * 250 // 10000
assert alcx.balanceOf(bob) == prior_alcx_balance_bob + bob_gain - bob_fee
mock_vault.withdraw(amount, {'from': charlie})
charlie_gain = bob_gain + (bob_fee // 2)
charlie_fee = (amount + charlie_gain) * 250 // 10000
assert math.isclose(alcx.balanceOf(charlie), prior_alcx_balance_charlie + charlie_gain - charlie_fee, rel_tol=1)
pool_balance = mock_ss_compounder.totalPoolBalance()
mock_vault.withdraw(amount, {'from': dave})
dave_gain = charlie_gain + charlie_fee
assert math.isclose(alcx.balanceOf(dave), prior_alcx_balance_dave + pool_balance - amount, rel_tol=1)
assert math.isclose(alcx.balanceOf(dave), prior_alcx_balance_dave + dave_gain, rel_tol=1)
assert mock_ss_compounder.totalPoolBalance() == 0
assert mock_vault.totalSupply() == 0
balances = 0
for account in [alice, bob, charlie, dave]:
balances += alcx.balanceOf(account)
assert mock_vault.balanceOf(account) == 0
assert balances == (prior_alcx_balance_alice + prior_alcx_balance_bob +
prior_alcx_balance_charlie + prior_alcx_balance_dave)
def test_with_simulated_harvest_mock(alice, bob, charlie, dave, mock_vault, alcx, mock_ss_compounder, mock_pool, owner):
amount = 1000
harvest = 400
prior_alcx_balance_alice = alcx.balanceOf(alice)
prior_alcx_balance_bob = alcx.balanceOf(bob)
prior_alcx_balance_charlie = alcx.balanceOf(charlie)
prior_alcx_balance_dave = alcx.balanceOf(dave)
for account in [alice, bob, charlie, dave]:
alcx.approve(mock_vault, alcx.totalSupply(), {'from': account})
mock_vault.deposit(amount, {'from': account})
for account in [bob, charlie, dave]:
assert mock_vault.balanceOf(account) == mock_vault.balanceOf(alice)
alcx.approve(mock_pool, harvest, {'from': owner})
mock_pool.deposit(0, harvest, {'from': owner})
mock_vault.withdraw(amount, {'from': alice})
harvest_gain = harvest // 4
alice_fee = (amount + harvest_gain) * 250 // 10000
assert alcx.balanceOf(alice) == prior_alcx_balance_alice + harvest_gain - alice_fee
mock_vault.withdraw(amount, {'from': bob})
bob_gain = (alice_fee // 3) + harvest_gain
bob_fee = (amount + bob_gain) * 250 // 10000
assert alcx.balanceOf(bob) == prior_alcx_balance_bob + bob_gain - bob_fee
mock_vault.withdraw(amount, {'from': charlie})
charlie_gain = bob_gain + (bob_fee // 2) + harvest_gain
charlie_fee = (amount + charlie_gain) * 250 // 10000
assert math.isclose(alcx.balanceOf(charlie), prior_alcx_balance_charlie + charlie_gain - charlie_fee, rel_tol=1)
pool_balance = mock_ss_compounder.totalPoolBalance()
mock_vault.withdraw(amount, {'from': dave})
dave_gain = charlie_gain + charlie_fee + harvest_gain
assert math.isclose(alcx.balanceOf(dave), prior_alcx_balance_dave + pool_balance - amount, rel_tol=1)
assert math.isclose(alcx.balanceOf(dave), prior_alcx_balance_dave + dave_gain, rel_tol=1)
assert mock_ss_compounder.totalPoolBalance() == 0
assert mock_vault.totalSupply() == 0
balances = 0
for account in [alice, bob, charlie, dave]:
balances += alcx.balanceOf(account)
assert mock_vault.balanceOf(account) == 0
| 42.676768 | 120 | 0.713846 | 561 | 4,225 | 5.083779 | 0.098039 | 0.069425 | 0.123422 | 0.064516 | 0.873422 | 0.84993 | 0.826087 | 0.826087 | 0.826087 | 0.826087 | 0 | 0.023782 | 0.173965 | 4,225 | 98 | 121 | 43.112245 | 0.79341 | 0 | 0 | 0.693333 | 0 | 0 | 0.014675 | 0 | 0 | 0 | 0 | 0 | 0.253333 | 1 | 0.026667 | false | 0 | 0.04 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
026d209ac335b0d720042098fa2e2891adfb020c | 167 | py | Python | backend/rating/admin.py | ankile/budbua-classifieds | 5e85edab4747501b0110cf56e1bfea524308dfff | [
"MIT"
] | null | null | null | backend/rating/admin.py | ankile/budbua-classifieds | 5e85edab4747501b0110cf56e1bfea524308dfff | [
"MIT"
] | null | null | null | backend/rating/admin.py | ankile/budbua-classifieds | 5e85edab4747501b0110cf56e1bfea524308dfff | [
"MIT"
] | null | null | null | from django.contrib import admin
from rating.models import Rating
# Register your models here.
@admin.register(Rating)
class RatingAdmin(admin.ModelAdmin):
pass
| 18.555556 | 36 | 0.790419 | 22 | 167 | 6 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137725 | 167 | 8 | 37 | 20.875 | 0.916667 | 0.155689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
028240ec0f8db6eac4aaef02276bb18a572c1cad | 96 | py | Python | venv/lib/python3.8/site-packages/clikit/io/input_stream/null_input_stream.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/clikit/io/input_stream/null_input_stream.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/clikit/io/input_stream/null_input_stream.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/a1/7e/50/f915f39cc05a20660c21d59727fecc2940e54cafe21c2b2ba707d671e7 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a021ed3082f5dadc5a556663149f0206bc5d8a6 | 78 | py | Python | Codewars/7kyu/sort-numbers/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codewars/7kyu/sort-numbers/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codewars/7kyu/sort-numbers/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python - 3.6.0
test.assert_equals(solution([1, 2, 10, 5]), [1, 2, 5, 10])
| 19.5 | 59 | 0.564103 | 16 | 78 | 2.6875 | 0.75 | 0.093023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203125 | 0.179487 | 78 | 3 | 60 | 26 | 0.46875 | 0.179487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a5d972cc9fc7d1c4c5f08d039cba8a6f55453ac | 387 | py | Python | ethevents/test/conftest.py | ezdac/ethevents | 9f4b0ff1ba0d303180abe3b5336805335bc0765b | [
"MIT"
] | 2 | 2018-08-21T01:06:30.000Z | 2019-03-05T08:15:55.000Z | ethevents/test/conftest.py | ezdac/ethevents | 9f4b0ff1ba0d303180abe3b5336805335bc0765b | [
"MIT"
] | 1 | 2018-04-23T14:01:51.000Z | 2018-04-23T14:09:51.000Z | ethevents/test/conftest.py | ezdac/ethevents | 9f4b0ff1ba0d303180abe3b5336805335bc0765b | [
"MIT"
] | 1 | 2022-03-22T04:57:16.000Z | 2022-03-22T04:57:16.000Z | from microraiden.test.fixtures import * # flake8: noqa
del globals()['session']
from microraiden.test.fixtures import session as usession # flake8: noqa
from microraiden.test.conftest import pytest_addoption
from .fixtures import * # flake8: noqa
from gevent import monkey
# Thread is false due to clash when testing both contract/microraiden modules.
monkey.patch_all(thread=False)
| 38.7 | 78 | 0.79845 | 53 | 387 | 5.792453 | 0.566038 | 0.14658 | 0.185668 | 0.175896 | 0.214984 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008902 | 0.129199 | 387 | 9 | 79 | 43 | 0.902077 | 0.297158 | 0 | 0 | 0 | 0 | 0.026217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ce55c7ecc23537675704ddb02bc9a5ce09f3550b | 22,006 | py | Python | src/Fig_5_supplement_2_Morphing_changing_Jie2_plotting.py | fmi-basel/gzenke-nonlinear-transient-amplification | f3b0c8c89b42c34f1aad740c7026865cf3164f1d | [
"MIT"
] | null | null | null | src/Fig_5_supplement_2_Morphing_changing_Jie2_plotting.py | fmi-basel/gzenke-nonlinear-transient-amplification | f3b0c8c89b42c34f1aad740c7026865cf3164f1d | [
"MIT"
] | 3 | 2021-12-16T10:15:10.000Z | 2021-12-16T12:54:24.000Z | src/Fig_5_supplement_2_Morphing_changing_Jie2_plotting.py | fmi-basel/gzenke-nonlinear-transient-amplification | f3b0c8c89b42c34f1aad740c7026865cf3164f1d | [
"MIT"
] | 1 | 2021-12-16T10:02:43.000Z | 2021-12-16T10:02:43.000Z | import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.io as sio
import math
# plotting configuration
ratio = 1.5
figure_len, figure_width = 15*ratio, 12*ratio
font_size_1, font_size_2 = 36*ratio, 36*ratio
legend_size = 18*ratio
line_width, tick_len = 3*ratio, 10*ratio
marker_size = 30*ratio
plot_line_width = 5*ratio
hfont = {'fontname': 'Arial'}
marker_edge_width = 4
pal = sns.color_palette("deep")
l_color = ['#85C1E9', '#3498DB', '#2874A6']
U_max = 6
l_p = [0, 0.025, 0.05, 0.075, 0.1, 0.125, 0.15, 0.175, 0.2, 0.225, 0.25, 0.275, 0.3, 0.325, 0.35, 0.375, 0.4, 0.425, 0.45, 0.475, 0.5, 0.525, 0.55, 0.575, 0.6, 0.625, 0.65, 0.675, 0.7, 0.725, 0.75, 0.775, 0.8, 0.825, 0.85, 0.875, 0.9, 0.925, 0.95, 0.975, 1.0]
l_peak_E1_EE_STP, l_peak_E2_EE_STP, l_ss_E1_EE_STP, l_ss_E2_EE_STP = [], [], [], []
l_peak_E1_EI_STP, l_peak_E2_EI_STP, l_ss_E1_EI_STP, l_ss_E2_EI_STP = [], [], [], []
l_peak_E1_EE_STP_2, l_peak_E2_EE_STP_2, l_ss_E1_EE_STP_2, l_ss_E2_EE_STP_2 = [], [], [], []
l_peak_E1_EI_STP_2, l_peak_E2_EI_STP_2, l_ss_E1_EI_STP_2, l_ss_E2_EI_STP_2 = [], [], [], []
l_peak_E1_EE_STP_3, l_peak_E2_EE_STP_3, l_ss_E1_EE_STP_3, l_ss_E2_EE_STP_3 = [], [], [], []
l_peak_E1_EI_STP_3, l_peak_E2_EI_STP_3, l_ss_E1_EI_STP_3, l_ss_E2_EI_STP_3 = [], [], [], []
l_bs_E2_EE_STP, l_bs_E2_EI_STP = [], []
s_path = '../Redo_part/'
for p in l_p:
l_r_e_1_EE_STP = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EE_STP_E1_Jie2_0.3_p_' + str(p) + '.mat')['E1'][0]
l_r_e_2_EE_STP = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EE_STP_E2_Jie2_0.3_p_' + str(p) + '.mat')['E2'][0]
l_r_e_1_EI_STP = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EI_STP_E1_Jie2_0.3_p_' + str(p) + '.mat')['E1'][0]
l_r_e_2_EI_STP = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EI_STP_E2_Jie2_0.3_p_' + str(p) + '.mat')['E2'][0]
l_r_e_1_EE_STP_2 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EE_STP_E1_Jie2_0.4_p_' + str(p) + '.mat')['E1'][0]
l_r_e_2_EE_STP_2 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EE_STP_E2_Jie2_0.4_p_' + str(p) + '.mat')['E2'][0]
l_r_e_1_EI_STP_2 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EI_STP_E1_Jie2_0.4_p_' + str(p) + '.mat')['E1'][0]
l_r_e_2_EI_STP_2 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EI_STP_E2_Jie2_0.4_p_' + str(p) + '.mat')['E2'][0]
l_r_e_1_EE_STP_3 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EE_STP_E1_Jie2_0.5_p_' + str(p) + '.mat')['E1'][0]
l_r_e_2_EE_STP_3 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EE_STP_E2_Jie2_0.5_p_' + str(p) + '.mat')['E2'][0]
l_r_e_1_EI_STP_3 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EI_STP_E1_Jie2_0.5_p_' + str(p) + '.mat')['E1'][0]
l_r_e_2_EI_STP_3 = sio.loadmat(s_path + 'data/Fig_6_Morphing_activity_EI_STP_E2_Jie2_0.5_p_' + str(p) + '.mat')['E2'][0]
l_peak_E1_EE_STP.append(np.nanmax(l_r_e_1_EE_STP[50000:70000]))
l_ss_E1_EE_STP.append(np.nanmean(l_r_e_1_EE_STP[65000:69000]))
l_peak_E2_EE_STP.append(np.nanmax(l_r_e_2_EE_STP[50000:70000]))
l_ss_E2_EE_STP.append(np.nanmean(l_r_e_2_EE_STP[65000:69000]))
l_peak_E1_EI_STP.append(np.nanmax(l_r_e_1_EI_STP[50000:70000]))
l_ss_E1_EI_STP.append(np.nanmean(l_r_e_1_EI_STP[65000:69000]))
l_peak_E2_EI_STP.append(np.nanmax(l_r_e_2_EI_STP[50000:70000]))
l_ss_E2_EI_STP.append(np.nanmean(l_r_e_2_EI_STP[65000:69000]))
l_peak_E1_EE_STP_2.append(np.nanmax(l_r_e_1_EE_STP_2[50000:70000]))
l_ss_E1_EE_STP_2.append(np.nanmean(l_r_e_1_EE_STP_2[65000:69000]))
l_peak_E2_EE_STP_2.append(np.nanmax(l_r_e_2_EE_STP_2[50000:70000]))
l_ss_E2_EE_STP_2.append(np.nanmean(l_r_e_2_EE_STP_2[65000:69000]))
l_peak_E1_EI_STP_2.append(np.nanmax(l_r_e_1_EI_STP_2[50000:70000]))
l_ss_E1_EI_STP_2.append(np.nanmean(l_r_e_1_EI_STP_2[65000:69000]))
l_peak_E2_EI_STP_2.append(np.nanmax(l_r_e_2_EI_STP_2[50000:70000]))
l_ss_E2_EI_STP_2.append(np.nanmean(l_r_e_2_EI_STP_2[65000:69000]))
l_peak_E1_EE_STP_3.append(np.nanmax(l_r_e_1_EE_STP_3[50000:70000]))
l_ss_E1_EE_STP_3.append(np.nanmean(l_r_e_1_EE_STP_3[65000:69000]))
l_peak_E2_EE_STP_3.append(np.nanmax(l_r_e_2_EE_STP_3[50000:70000]))
l_ss_E2_EE_STP_3.append(np.nanmean(l_r_e_2_EE_STP_3[65000:69000]))
l_peak_E1_EI_STP_3.append(np.nanmax(l_r_e_1_EI_STP_3[50000:70000]))
l_ss_E1_EI_STP_3.append(np.nanmean(l_r_e_1_EI_STP_3[65000:69000]))
l_peak_E2_EI_STP_3.append(np.nanmax(l_r_e_2_EI_STP_3[50000:70000]))
l_ss_E2_EI_STP_3.append(np.nanmean(l_r_e_2_EI_STP_3[65000:69000]))
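# The loop above reduces each rate trace to two scalars: a transient peak
# (np.nanmax over the stimulation window) and a steady-state rate (np.nanmean
# over a late window). A minimal stdlib sketch of the same reduction, with
# illustrative window indices rather than the 50000/65000/69000 sample
# boundaries used above:

```python
from statistics import fmean

# Toy rate trace: baseline, transient overshoot, then steady state.
rate = [0.0] * 5 + [4.0, 3.0, 2.0, 2.0, 2.0]

stim = rate[5:]        # analysis window during stimulation
peak = max(stim)       # transient peak (np.nanmax above)
ss = fmean(stim[-3:])  # steady state: mean over the late window (np.nanmean above)

print(peak, ss)  # 4.0 2.0
```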
l_idx_peak_EE_STP, l_idx_peak_EI_STP, l_idx_ss_EE_STP, l_idx_ss_EI_STP = [], [], [], []
l_idx_peak_EE_STP_2, l_idx_peak_EI_STP_2, l_idx_ss_EE_STP_2, l_idx_ss_EI_STP_2 = [], [], [], []
l_idx_peak_EE_STP_3, l_idx_peak_EI_STP_3, l_idx_ss_EE_STP_3, l_idx_ss_EI_STP_3 = [], [], [], []
for i in range(len(l_peak_E1_EE_STP)):
l_idx_peak_EE_STP.append((l_peak_E1_EE_STP[i]-l_peak_E2_EE_STP[i])/(l_peak_E1_EE_STP[i]+l_peak_E2_EE_STP[i]))
l_idx_peak_EI_STP.append((l_peak_E1_EI_STP[i]-l_peak_E2_EI_STP[i])/(l_peak_E1_EI_STP[i]+l_peak_E2_EI_STP[i]))
l_idx_ss_EE_STP.append((l_ss_E1_EE_STP[i]-l_ss_E2_EE_STP[i])/(l_ss_E1_EE_STP[i]+l_ss_E2_EE_STP[i]))
l_idx_ss_EI_STP.append((l_ss_E1_EI_STP[i]-l_ss_E2_EI_STP[i])/(l_ss_E1_EI_STP[i]+l_ss_E2_EI_STP[i]))
l_idx_peak_EE_STP_2.append((l_peak_E1_EE_STP_2[i]-l_peak_E2_EE_STP_2[i])/(l_peak_E1_EE_STP_2[i]+l_peak_E2_EE_STP_2[i]))
l_idx_peak_EI_STP_2.append((l_peak_E1_EI_STP_2[i]-l_peak_E2_EI_STP_2[i])/(l_peak_E1_EI_STP_2[i]+l_peak_E2_EI_STP_2[i]))
l_idx_ss_EE_STP_2.append((l_ss_E1_EE_STP_2[i]-l_ss_E2_EE_STP_2[i])/(l_ss_E1_EE_STP_2[i]+l_ss_E2_EE_STP_2[i]))
l_idx_ss_EI_STP_2.append((l_ss_E1_EI_STP_2[i]-l_ss_E2_EI_STP_2[i])/(l_ss_E1_EI_STP_2[i]+l_ss_E2_EI_STP_2[i]))
l_idx_peak_EE_STP_3.append((l_peak_E1_EE_STP_3[i]-l_peak_E2_EE_STP_3[i])/(l_peak_E1_EE_STP_3[i]+l_peak_E2_EE_STP_3[i]))
l_idx_peak_EI_STP_3.append((l_peak_E1_EI_STP_3[i]-l_peak_E2_EI_STP_3[i])/(l_peak_E1_EI_STP_3[i]+l_peak_E2_EI_STP_3[i]))
l_idx_ss_EE_STP_3.append((l_ss_E1_EE_STP_3[i]-l_ss_E2_EE_STP_3[i])/(l_ss_E1_EE_STP_3[i]+l_ss_E2_EE_STP_3[i]))
l_idx_ss_EI_STP_3.append((l_ss_E1_EI_STP_3[i]-l_ss_E2_EI_STP_3[i])/(l_ss_E1_EI_STP_3[i]+l_ss_E2_EI_STP_3[i]))
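# The quantity computed in the loop above is a separation index,
# (E1 - E2) / (E1 + E2); a self-contained sketch showing its range:

```python
def separation_index(r1, r2):
    # +1 when only E1 is active, -1 when only E2 is active, 0 when equal.
    return (r1 - r2) / (r1 + r2)

print(separation_index(3.0, 1.0))   # 0.5
print(separation_index(0.0, 2.0))   # -1.0
```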
# l_dis_peak_EE_STP, l_dis_peak_EI_STP, l_dis_ss_EE_STP, l_dis_ss_EI_STP = [], [], [], []
# l_dis_peak_EE_STP_2, l_dis_peak_EI_STP_2, l_dis_ss_EE_STP_2, l_dis_ss_EI_STP_2 = [], [], [], []
# l_dis_peak_EE_STP_3, l_dis_peak_EI_STP_3, l_dis_ss_EE_STP_3, l_dis_ss_EI_STP_3 = [], [], [], []
#
# for i in range(len(l_peak_E1_EE_STP)):
#
# if i < len(l_p)/2:
# l_dis_peak_EE_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E1_EE_STP[i] / np.sqrt(np.power(l_peak_E1_EE_STP[i], 2) + np.power(l_peak_E2_EE_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EE_STP[i], 2) + np.power(l_peak_E2_EE_STP[i], 2)))
# l_dis_peak_EI_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E1_EI_STP[i] / np.sqrt(np.power(l_peak_E1_EI_STP[i], 2) + np.power(l_peak_E2_EI_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EI_STP[i], 2) + np.power(l_peak_E2_EI_STP[i], 2)))
# l_dis_ss_EE_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E1_EE_STP[i] / np.sqrt(np.power(l_ss_E1_EE_STP[i], 2) + np.power(l_ss_E2_EE_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EE_STP[i], 2) + np.power(l_ss_E2_EE_STP[i], 2)))
# l_dis_ss_EI_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E1_EI_STP[i] / np.sqrt(np.power(l_ss_E1_EI_STP[i], 2) + np.power(l_ss_E2_EI_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EI_STP[i], 2) + np.power(l_ss_E2_EI_STP[i], 2)))
#
#
#
# l_dis_peak_EE_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E1_EE_STP_2[i] / np.sqrt(np.power(l_peak_E1_EE_STP_2[i], 2) + np.power(l_peak_E2_EE_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EE_STP_2[i], 2) + np.power(l_peak_E2_EE_STP_2[i], 2)))
# l_dis_peak_EI_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E1_EI_STP_2[i] / np.sqrt(np.power(l_peak_E1_EI_STP_2[i], 2) + np.power(l_peak_E2_EI_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EI_STP_2[i], 2) + np.power(l_peak_E2_EI_STP_2[i], 2)))
# l_dis_ss_EE_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E1_EE_STP_2[i] / np.sqrt(np.power(l_ss_E1_EE_STP_2[i], 2) + np.power(l_ss_E2_EE_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EE_STP_2[i], 2) + np.power(l_ss_E2_EE_STP_2[i], 2)))
# l_dis_ss_EI_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E1_EI_STP_2[i] / np.sqrt(np.power(l_ss_E1_EI_STP_2[i], 2) + np.power(l_ss_E2_EI_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EI_STP_2[i], 2) + np.power(l_ss_E2_EI_STP_2[i], 2)))
#
#
#
# l_dis_peak_EE_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E1_EE_STP_3[i] / np.sqrt(np.power(l_peak_E1_EE_STP_3[i], 2) + np.power(l_peak_E2_EE_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EE_STP_3[i], 2) + np.power(l_peak_E2_EE_STP_3[i], 2)))
# l_dis_peak_EI_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E1_EI_STP_3[i] / np.sqrt(np.power(l_peak_E1_EI_STP_3[i], 2) + np.power(l_peak_E2_EI_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EI_STP_3[i], 2) + np.power(l_peak_E2_EI_STP_3[i], 2)))
# l_dis_ss_EE_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E1_EE_STP_3[i] / np.sqrt(np.power(l_ss_E1_EE_STP_3[i], 2) + np.power(l_ss_E2_EE_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EE_STP_3[i], 2) + np.power(l_ss_E2_EE_STP_3[i], 2)))
# l_dis_ss_EI_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E1_EI_STP_3[i] / np.sqrt(np.power(l_ss_E1_EI_STP_3[i], 2) + np.power(l_ss_E2_EI_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EI_STP_3[i], 2) + np.power(l_ss_E2_EI_STP_3[i], 2)))
#
# else:
# l_dis_peak_EE_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E2_EE_STP[i] / np.sqrt(np.power(l_peak_E1_EE_STP[i], 2) + np.power(l_peak_E2_EE_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EE_STP[i], 2) + np.power(l_peak_E2_EE_STP[i], 2)))
# l_dis_peak_EI_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E2_EI_STP[i] / np.sqrt(np.power(l_peak_E1_EI_STP[i], 2) + np.power(l_peak_E2_EI_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EI_STP[i], 2) + np.power(l_peak_E2_EI_STP[i], 2)))
# l_dis_ss_EE_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E2_EE_STP[i] / np.sqrt(np.power(l_ss_E1_EE_STP[i], 2) + np.power(l_ss_E2_EE_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EE_STP[i], 2) + np.power(l_ss_E2_EE_STP[i], 2)))
# l_dis_ss_EI_STP.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E2_EI_STP[i] / np.sqrt(np.power(l_ss_E1_EI_STP[i], 2) + np.power(l_ss_E2_EI_STP[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EI_STP[i], 2) + np.power(l_ss_E2_EI_STP[i], 2)))
#
#
#
# l_dis_peak_EE_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E2_EE_STP_2[i] / np.sqrt(np.power(l_peak_E1_EE_STP_2[i], 2) + np.power(l_peak_E2_EE_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EE_STP_2[i], 2) + np.power(l_peak_E2_EE_STP_2[i], 2)))
# l_dis_peak_EI_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E2_EI_STP_2[i] / np.sqrt(np.power(l_peak_E1_EI_STP_2[i], 2) + np.power(l_peak_E2_EI_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EI_STP_2[i], 2) + np.power(l_peak_E2_EI_STP_2[i], 2)))
# l_dis_ss_EE_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E2_EE_STP_2[i] / np.sqrt(np.power(l_ss_E1_EE_STP_2[i], 2) + np.power(l_ss_E2_EE_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EE_STP_2[i], 2) + np.power(l_ss_E2_EE_STP_2[i], 2)))
# l_dis_ss_EI_STP_2.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E2_EI_STP_2[i] / np.sqrt(np.power(l_ss_E1_EI_STP_2[i], 2) + np.power(l_ss_E2_EI_STP_2[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EI_STP_2[i], 2) + np.power(l_ss_E2_EI_STP_2[i], 2)))
#
#
#
# l_dis_peak_EE_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E2_EE_STP_3[i] / np.sqrt(np.power(l_peak_E1_EE_STP_3[i], 2) + np.power(l_peak_E2_EE_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EE_STP_3[i], 2) + np.power(l_peak_E2_EE_STP_3[i], 2)))
# l_dis_peak_EI_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_peak_E2_EI_STP_3[i] / np.sqrt(np.power(l_peak_E1_EI_STP_3[i], 2) + np.power(l_peak_E2_EI_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_peak_E1_EI_STP_3[i], 2) + np.power(l_peak_E2_EI_STP_3[i], 2)))
# l_dis_ss_EE_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E2_EE_STP_3[i] / np.sqrt(np.power(l_ss_E1_EE_STP_3[i], 2) + np.power(l_ss_E2_EE_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EE_STP_3[i], 2) + np.power(l_ss_E2_EE_STP_3[i], 2)))
# l_dis_ss_EI_STP_3.append(math.sin(math.radians(45 - round(math.degrees(
# math.asin(l_ss_E2_EI_STP_3[i] / np.sqrt(np.power(l_ss_E1_EI_STP_3[i], 2) + np.power(l_ss_E2_EI_STP_3[i], 2)))),
# 2))) * np.sqrt(
# np.power(l_ss_E1_EI_STP_3[i], 2) + np.power(l_ss_E2_EI_STP_3[i], 2)))
#
#
plt.figure(figsize=(figure_len, figure_width))
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(True)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(line_width)
plt.tick_params(width=line_width, length=tick_len)
plt.plot(l_idx_peak_EE_STP, color='blue', linewidth=plot_line_width, alpha=0.4)
plt.plot(l_idx_peak_EE_STP_2, color='blue', linewidth=plot_line_width, alpha=0.6)
plt.plot(l_idx_peak_EE_STP_3, color='blue', linewidth=plot_line_width, alpha=1.0)
plt.plot(l_idx_ss_EE_STP, color='blue', linestyle='dashed', linewidth=plot_line_width, alpha=0.4)
plt.plot(l_idx_ss_EE_STP_2, color='blue', linestyle='dashed', linewidth=plot_line_width, alpha=0.6)
plt.plot(l_idx_ss_EE_STP_3, color='blue', linestyle='dashed', linewidth=plot_line_width, alpha=1.0)
plt.xticks([0, 10, 20, 30, 40], [0, 0.25, 0.5, 0.75, 1.0], fontsize=font_size_1, **hfont)
plt.yticks([-1.0, -0.5, 0.0, 0.5, 1.0], fontsize=font_size_1, **hfont)
plt.xlabel('$p$', fontsize=font_size_1, **hfont)
plt.ylabel('Separation index', fontsize=font_size_1, **hfont)
plt.ylim([-1.05, 1.05])
plt.legend([r"$J_{IE2}$: 0.3", r"$J_{IE2}$: 0.4", r"$J_{IE2}$: 0.5"], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
plt.savefig('paper_figures/png/Revision_Fig_Point_2_9_Morphing_EE_STP_changing_Jie2_U_max_' + str(U_max) + '.png')
plt.savefig('paper_figures/pdf/Revision_Fig_Point_2_9_Morphing_EE_STP_changing_Jie2_U_max_' + str(U_max) + '.pdf')
plt.figure(figsize=(figure_len, figure_width))
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(True)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(line_width)
plt.tick_params(width=line_width, length=tick_len)
plt.plot(l_idx_peak_EI_STP, color='red', linewidth=plot_line_width, alpha=0.4)
plt.plot(l_idx_peak_EI_STP_2, color='red', linewidth=plot_line_width, alpha=0.6)
plt.plot(l_idx_peak_EI_STP_3, color='red', linewidth=plot_line_width, alpha=1.0)
plt.plot(l_idx_ss_EI_STP, color='red', linestyle='dashed', linewidth=plot_line_width, alpha=0.4)
plt.plot(l_idx_ss_EI_STP_2, color='red', linestyle='dashed', linewidth=plot_line_width, alpha=0.6)
plt.plot(l_idx_ss_EI_STP_3, color='red', linestyle='dashed', linewidth=plot_line_width, alpha=1.0)
plt.xticks([0, 10, 20, 30, 40], [0, 0.25, 0.5, 0.75, 1.0], fontsize=font_size_1, **hfont)
plt.yticks([-1.0, -0.5, 0.0, 0.5, 1.0], fontsize=font_size_1, **hfont)
plt.xlabel('$p$', fontsize=font_size_1, **hfont)
plt.ylabel('Separation index', fontsize=font_size_1, **hfont)
plt.ylim([-1.05, 1.05])
plt.legend([r"$J_{IE2}$: 0.3", r"$J_{IE2}$: 0.4", r"$J_{IE2}$: 0.5"], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
plt.savefig('paper_figures/png/Revision_Fig_Point_2_9_Morphing_EI_STP_changing_Jie2_U_max_' + str(U_max) + '.png')
plt.savefig('paper_figures/pdf/Revision_Fig_Point_2_9_Morphing_EI_STP_changing_Jie2_U_max_' + str(U_max) + '.pdf')
# plt.figure(figsize=(figure_len, figure_width))
# ax = plt.gca()
# ax.spines['top'].set_visible(False)
# ax.spines['right'].set_visible(False)
# ax.spines['bottom'].set_visible(True)
# ax.spines['left'].set_visible(True)
# for axis in ['top', 'bottom', 'left', 'right']:
# ax.spines[axis].set_linewidth(line_width)
# plt.tick_params(width=line_width, length=tick_len)
#
# plt.yscale('symlog', linthreshy=0.1)
#
#
# plt.plot(l_peak_E1_EE_STP, color=pal[0], linewidth=plot_line_width, alpha=0.4)
# plt.plot(l_peak_E1_EE_STP_2, color=pal[0], linewidth=plot_line_width, alpha=0.7)
# plt.plot(l_peak_E1_EE_STP_3, color=pal[0], linewidth=plot_line_width, alpha=1.0)
#
# plt.plot(l_peak_E2_EE_STP, color=pal[1], linewidth=plot_line_width, alpha=0.4)
# plt.plot(l_peak_E2_EE_STP_2, color=pal[1], linewidth=plot_line_width, alpha=0.7)
# plt.plot(l_peak_E2_EE_STP_3, color=pal[1], linewidth=plot_line_width, alpha=1.0)
#
# plt.xticks([0, 10, 20, 30, 40], [0, 0.25, 0.5, 0.75, 1.0], fontsize=font_size_1, **hfont)
# plt.yticks([0, 0.1, 1, 10, 100, 1000], fontsize=font_size_1, **hfont)
# plt.ylabel('Firing rate (Hz)', fontsize=font_size_1, **hfont)
#
# plt.legend([r"$J_{IE2}$: 0.3", r"$J_{IE2}$: 0.4", r"$J_{IE2}$: 0.5"], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
# plt.savefig('paper_figures/png/Fig_6_Morphing_EI_STP_activity_changing_Jie2_U_max_' + str(U_max) + '_peak.png')
# plt.savefig('paper_figures/pdf/Fig_6_Morphing_EI_STP_activity_changing_Jie2_U_max_' + str(U_max) + '_peak.pdf')
# plt.figure(figsize=(figure_len, figure_width))
# ax = plt.gca()
# ax.spines['top'].set_visible(False)
# ax.spines['right'].set_visible(False)
# ax.spines['bottom'].set_visible(True)
# ax.spines['left'].set_visible(True)
# for axis in ['top', 'bottom', 'left', 'right']:
# ax.spines[axis].set_linewidth(line_width)
# plt.tick_params(width=line_width, length=tick_len)
#
# plt.yscale('symlog', linthreshy=0.1)
#
# plt.plot(l_ss_E1_EE_STP, color=pal[0], linewidth=plot_line_width, alpha=0.4)
# plt.plot(l_ss_E1_EE_STP_2, color=pal[0], linewidth=plot_line_width, alpha=0.7)
# plt.plot(l_ss_E1_EE_STP_3, color=pal[0], linewidth=plot_line_width, alpha=1.0)
#
# plt.plot(l_ss_E2_EE_STP, color=pal[1], linewidth=plot_line_width, alpha=0.4)
# plt.plot(l_ss_E2_EE_STP_2, color=pal[1], linewidth=plot_line_width, alpha=0.7)
# plt.plot(l_ss_E2_EE_STP_3, color=pal[1], linewidth=plot_line_width, alpha=1.0)
#
# plt.xticks([0, 10, 20, 30, 40], [0, 0.25, 0.5, 0.75, 1.0], fontsize=font_size_1, **hfont)
# plt.yticks([0, 0.1, 1, 10, 100, 1000], fontsize=font_size_1, **hfont)
# plt.ylabel('Firing rate (Hz)', fontsize=font_size_1, **hfont)
#
# plt.legend([r"$J_{IE2}$: 0.3", r"$J_{IE2}$: 0.4", r"$J_{IE2}$: 0.5"], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
# plt.savefig('paper_figures/png/Fig_6_Morphing_EI_STP_activity_changing_Jie2_U_max_' + str(U_max) + '_ss.png')
# plt.savefig('paper_figures/pdf/Fig_6_Morphing_EI_STP_activity_changing_Jie2_U_max_' + str(U_max) + '_ss.pdf')
#
#
| 63.054441 | 259 | 0.649732 | 4,407 | 22,006 | 2.785568 | 0.043113 | 0.07535 | 0.062561 | 0.050831 | 0.94754 | 0.933936 | 0.88726 | 0.843027 | 0.836999 | 0.781851 | 0 | 0.079234 | 0.181587 | 22,006 | 348 | 260 | 63.235632 | 0.602388 | 0.552122 | 0 | 0.232558 | 0 | 0 | 0.141662 | 0.093821 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.069767 | 0 | 0.069767 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ce8806db30151f6d0452aece64ee1f18a43470de | 41 | py | Python | mltk/marl/__init__.py | lqf96/mltk | 7187be5d616781695ee68674cd335fbb5a237ccc | [
"MIT"
] | null | null | null | mltk/marl/__init__.py | lqf96/mltk | 7187be5d616781695ee68674cd335fbb5a237ccc | [
"MIT"
] | 2 | 2019-12-24T01:54:21.000Z | 2019-12-24T02:23:54.000Z | mltk/marl/__init__.py | lqf96/mltk | 7187be5d616781695ee68674cd335fbb5a237ccc | [
"MIT"
] | null | null | null | from .agent import *
from .envs import *
| 13.666667 | 20 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 21 | 20.5 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ce8ecc17805c522e6afb1a3252871bad4c9b7152 | 154 | py | Python | story_untangling/predictors/__init__.py | dwlmt/Story-Untangling | c56354e305f06a508b63b913989ff8856e4db5b6 | [
"Unlicense"
] | 7 | 2020-09-12T22:32:33.000Z | 2022-02-07T08:37:04.000Z | story_untangling/predictors/__init__.py | dwlmt/Story-Untangling | c56354e305f06a508b63b913989ff8856e4db5b6 | [
"Unlicense"
] | 2 | 2021-08-31T15:46:16.000Z | 2021-09-01T15:19:52.000Z | story_untangling/predictors/__init__.py | dwlmt/Story-Untangling | c56354e305f06a508b63b913989ff8856e4db5b6 | [
"Unlicense"
] | 1 | 2021-06-02T09:33:27.000Z | 2021-06-02T09:33:27.000Z | from story_untangling.predictors import reading_thoughts_predictor, global_beam_pairwise_ordering_predictor, \
local_beam_pairwise_ordering_predictor
| 51.333333 | 110 | 0.902597 | 18 | 154 | 7.111111 | 0.722222 | 0.1875 | 0.3125 | 0.453125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 154 | 2 | 111 | 77 | 0.895105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
cea178158dd9a6204c703c45c4cd145b59b246f8 | 101 | py | Python | pybamm/models/submodels/interface/lithium_plating/__init__.py | manjunathnilugal/PyBaMM | 65d5cba534b4f163670e753714964aaa75d6a2d2 | [
"BSD-3-Clause"
] | 330 | 2019-04-17T11:36:57.000Z | 2022-03-28T16:49:55.000Z | pybamm/models/submodels/interface/lithium_plating/__init__.py | manjunathnilugal/PyBaMM | 65d5cba534b4f163670e753714964aaa75d6a2d2 | [
"BSD-3-Clause"
] | 1,530 | 2019-03-26T18:13:03.000Z | 2022-03-31T16:12:53.000Z | pybamm/models/submodels/interface/lithium_plating/__init__.py | manjunathnilugal/PyBaMM | 65d5cba534b4f163670e753714964aaa75d6a2d2 | [
"BSD-3-Clause"
] | 178 | 2019-03-27T13:48:04.000Z | 2022-03-31T09:30:11.000Z | from .base_plating import BasePlating
from .no_plating import NoPlating
from .plating import Plating
| 25.25 | 37 | 0.851485 | 14 | 101 | 6 | 0.5 | 0.464286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118812 | 101 | 3 | 38 | 33.666667 | 0.94382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cea677cdc47dd6084078ad767edaf762deaaa4b3 | 5,801 | py | Python | unit-tests/dcf_test_suites/related_collection_api/perms/post/post_perms.py | cozybearca/django_client_framework | 5056012714ff461f1f420ce9b7e47b4ee2d5167a | [
"MIT"
] | 1 | 2021-01-13T22:11:29.000Z | 2021-01-13T22:11:29.000Z | unit-tests/dcf_test_suites/related_collection_api/perms/post/post_perms.py | cozybearca/django_client_framework | 5056012714ff461f1f420ce9b7e47b4ee2d5167a | [
"MIT"
] | 1 | 2021-04-12T02:04:59.000Z | 2021-04-12T02:04:59.000Z | unit-tests/dcf_test_suites/related_collection_api/perms/post/post_perms.py | cozybearca/django_client_framework | 5056012714ff461f1f420ce9b7e47b4ee2d5167a | [
"MIT"
] | 1 | 2021-05-20T05:25:06.000Z | 2021-05-20T05:25:06.000Z | from django.test import TestCase
from rest_framework.test import APIClient
from django.contrib.auth import get_user_model
from dcf_test_app.models import Product
from dcf_test_app.models import Brand
from django_client_framework import permissions as p
class TestPostPerms(TestCase):
def setUp(self):
User = get_user_model()
self.user = User.objects.create_user(username="testuser")
self.user_client = APIClient()
self.user_client.force_authenticate(self.user)
self.brand = Brand.objects.create(name="brand")
self.products = [
Product.objects.create(barcode=f"product_{i+1}", brand=self.brand)
for i in range(100)
]
self.br2 = Brand.objects.create(name="nike")
self.new_products = [
Product.objects.create(barcode=f"product_{i+101}", brand=self.br2)
for i in range(50)
]
def test_post_no_permissions(self):
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
self.assertEquals(404, resp.status_code)
def test_post_incorrect_parent_permissions(self):
p.set_perms_shortcut(self.user, Brand, "r", field_name="products")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
self.assertEquals(404, resp.status_code)
def test_post_correct_parent_perms(self):
p.set_perms_shortcut(self.user, Brand, "w", field_name="products")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
self.assertEquals(404, resp.status_code)
def test_post_correct_parent_incorrect_reverse_field_perms(self):
p.set_perms_shortcut(self.user, Brand, "w", field_name="products")
p.set_perms_shortcut(self.user, Product, "r", field_name="brand")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
self.assertEquals(404, resp.status_code)
def test_post_correct_parent_incorrect_reverse_field_perms_ver_2(self):
p.set_perms_shortcut(self.user, Brand, "w", field_name="products")
p.set_perms_shortcut(self.user, Product, "r")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
self.assertEquals(403, resp.status_code)
def test_post_correct_parent_and_reverse_perms(self):
p.set_perms_shortcut(self.user, Brand, "w", field_name="products")
p.set_perms_shortcut(self.user, Product, "w")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
data = resp.json()
self.assertDictEqual({"success": True}, data)
self.assertEquals(1, Product.objects.get(id=101).brand_id)
def test_post_correct_parent_and_reverse_perms_ver_2(self):
p.set_perms_shortcut(self.user, Brand, "w", field_name="products")
p.set_perms_shortcut(self.user, Product, "w", field_name="brand")
p.set_perms_shortcut(self.user, Product, "r")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
data = resp.json()
self.assertDictEqual({"success": True}, data)
self.assertEquals(1, Product.objects.get(id=101).brand_id)
def test_post_correct_parent_and_reverse_perms_but_can_only_read_parent(self):
p.set_perms_shortcut(self.user, Brand, "w", field_name="products")
p.set_perms_shortcut(self.user, Brand, "r")
p.set_perms_shortcut(self.user, Product, "w", field_name="brand")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
data = resp.json()
self.assertEquals(1, Product.objects.get(id=101).brand_id)
self.assertEquals(0, len(data["objects"]))
def test_post_correct_parent_and_reverse_perms_with_correct_read_perms(self):
p.set_perms_shortcut(self.user, Brand, "w", field_name="products")
p.set_perms_shortcut(self.user, Brand, "r")
p.set_perms_shortcut(self.user, Product, "r")
p.set_perms_shortcut(self.user, Product, "w", field_name="brand")
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
data = resp.json()
self.assertEquals(50, len(data["objects"]))
self.assertDictEqual(
data["objects"][0], {"id": 1, "barcode": "product_1", "brand_id": 1}
)
self.assertEquals(1, Product.objects.get(id=101).brand_id)
def test_post_correct_parent_and_reverse_perms_with_correct_read_perms_v2(self):
p.set_perms_shortcut(
self.user, Brand.objects.get(id=1), "wr", field_name="products"
)
p.set_perms_shortcut(self.user, Product.objects.filter(id=10), "r")
p.set_perms_shortcut(self.user, Product.objects.filter(id=9), "r")
p.set_perms_shortcut(self.user, Product.objects.filter(id=11), "r")
p.set_perms_shortcut(
self.user, Product.objects.filter(id=101), "w", field_name="brand"
)
resp = self.user_client.post(
"/brand/1/products", data=[101], content_type="application/json"
)
data = resp.json()
self.assertEquals(3, len(data["objects"]))
self.assertDictEqual(
data["objects"][0], {"id": 9, "barcode": "product_9", "brand_id": 1}
)
self.assertEquals(101, Product.objects.filter(brand_id=1).count())
self.assertEquals(1, Product.objects.get(id=101).brand_id)
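    # For reference, the "r"/"w" shortcut strings passed to set_perms_shortcut
    # throughout these tests are compact permission flags. A hypothetical
    # expansion of the flag characters (illustrative only; the real
    # django_client_framework mapping may differ):

```python
# Assumed flag names, not the library's actual internals.
FLAGS = {"r": "read", "w": "write", "c": "create", "d": "delete"}

def expand(shortcut):
    return {FLAGS[ch] for ch in shortcut}

print(expand("wr"))  # {'write', 'read'} (set order may vary)
```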
| 44.968992 | 84 | 0.655059 | 769 | 5,801 | 4.69961 | 0.120936 | 0.084117 | 0.057277 | 0.10819 | 0.815163 | 0.806309 | 0.79192 | 0.79192 | 0.735196 | 0.709187 | 0 | 0.024896 | 0.210653 | 5,801 | 128 | 85 | 45.320313 | 0.764359 | 0 | 0 | 0.474138 | 0 | 0 | 0.102913 | 0 | 0 | 0 | 0 | 0 | 0.155172 | 1 | 0.094828 | false | 0 | 0.051724 | 0 | 0.155172 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cecc2e217ad2d50f8ae3f6676884fd705d5499cd | 157 | py | Python | Python/1_Introduction/Arithmetic_Operators/main.py | christosg88/hackerrank | 21bc44aac842325ad0a48265658f7674984aeff2 | [
"MIT"
] | null | null | null | Python/1_Introduction/Arithmetic_Operators/main.py | christosg88/hackerrank | 21bc44aac842325ad0a48265658f7674984aeff2 | [
"MIT"
] | null | null | null | Python/1_Introduction/Arithmetic_Operators/main.py | christosg88/hackerrank | 21bc44aac842325ad0a48265658f7674984aeff2 | [
"MIT"
] | null | null | null | if __name__ == '__main__':
first = int(input())
second = int(input())
print(first + second)
print(first - second)
print(first * second)
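# The three prints above can be captured as a pure function, which makes the
# expected output easy to check without patching stdin:

```python
def arithmetic(first, second):
    # Sum, difference, and product, in the order printed above.
    return first + second, first - second, first * second

print(arithmetic(3, 2))  # (5, 1, 6)
```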
| 19.625 | 26 | 0.592357 | 18 | 157 | 4.722222 | 0.444444 | 0.352941 | 0.564706 | 0.494118 | 0.564706 | 0.564706 | 0 | 0 | 0 | 0 | 0 | 0 | 0.254777 | 157 | 7 | 27 | 22.428571 | 0.726496 | 0 | 0 | 0 | 0 | 0 | 0.050955 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0c713346f452de0e3146d1e5f97f87b2e9f679b5 | 133 | py | Python | gapnlp/lg/__init__.py | gapml/NLP | cc21e866175e4af6ce5793d014ddad93900470a8 | [
"Apache-2.0"
] | 3 | 2018-09-10T23:29:32.000Z | 2020-05-09T12:39:10.000Z | gapnlp/lg/__init__.py | virtualdvid/NLP | a81fb60ab0272b9052c5602d48108039dc713223 | [
"Apache-2.0"
] | null | null | null | gapnlp/lg/__init__.py | virtualdvid/NLP | a81fb60ab0272b9052c5602d48108039dc713223 | [
"Apache-2.0"
] | 1 | 2018-09-10T23:28:10.000Z | 2018-09-10T23:28:10.000Z | from . import word2int_de
from . import word2int_en
from . import word2int_es
from . import word2int_fr
from . import word2int_it | 26.6 | 26 | 0.789474 | 20 | 133 | 5 | 0.4 | 0.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.172932 | 133 | 5 | 27 | 26.6 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0cc2abe2c7c30fe33e14ef1831ecb15a82a64bbf | 39 | py | Python | cupy_alias/padding/pad.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 142 | 2018-06-07T07:43:10.000Z | 2021-10-30T21:06:32.000Z | cupy_alias/padding/pad.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 282 | 2018-06-07T08:35:03.000Z | 2021-03-31T03:14:32.000Z | cupy_alias/padding/pad.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 19 | 2018-06-19T11:07:53.000Z | 2021-05-13T20:57:04.000Z | from clpy.padding.pad import * # NOQA
| 19.5 | 38 | 0.717949 | 6 | 39 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 1 | 39 | 39 | 0.875 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0b3b1d5f357d4aca85db21d2ce061e5714d5f7cd | 948 | py | Python | courseware/controllers/service_controller.py | OneTesseractInMultiverse/ITIL-CaseExample | e5f629e9c8173012d30c14f8f435238428f6fe9f | [
"MIT"
] | null | null | null | courseware/controllers/service_controller.py | OneTesseractInMultiverse/ITIL-CaseExample | e5f629e9c8173012d30c14f8f435238428f6fe9f | [
"MIT"
] | 5 | 2021-02-08T20:18:48.000Z | 2022-03-11T23:16:26.000Z | courseware/controllers/service_controller.py | OneTesseractInMultiverse/ITIL-CaseExample | e5f629e9c8173012d30c14f8f435238428f6fe9f | [
"MIT"
] | null | null | null | from courseware import app
from flask import render_template
# --------------------------------------------------------------------------
# GET /
# --------------------------------------------------------------------------
# Root resource
@app.route('/service/1', methods=['GET'])
def service_1():
    return render_template("home/auth_service.html")
# --------------------------------------------------------------------------
# GET /
# --------------------------------------------------------------------------
# Root resource
@app.route('/service/2', methods=['GET'])
def service_2():
    return render_template("home/consulting_service.html")
# --------------------------------------------------------------------------
# GET /
# --------------------------------------------------------------------------
# Root resource
@app.route('/service/3', methods=['GET'])
def service_3():
    return render_template("home/hosting_service.html") | 33.857143 | 76 | 0.362869 | 66 | 948 | 5.060606 | 0.348485 | 0.167665 | 0.134731 | 0.161677 | 0.335329 | 0.335329 | 0.245509 | 0.245509 | 0 | 0 | 0 | 0.006881 | 0.080169 | 948 | 28 | 77 | 33.857143 | 0.376147 | 0.53692 | 0 | 0 | 0 | 0 | 0.266979 | 0.175644 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | true | 0 | 0.181818 | 0.272727 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0bbc22f6c27de16ee3d21eabbe659b859d701621 | 41 | py | Python | pokemon_image_dataset/__init__.py | jneuendorf/pokemon-image-dataset | 120b5beb4f058ee7c8fa86cc7e5b8030b75a03f1 | [
"MIT"
] | null | null | null | pokemon_image_dataset/__init__.py | jneuendorf/pokemon-image-dataset | 120b5beb4f058ee7c8fa86cc7e5b8030b75a03f1 | [
"MIT"
] | null | null | null | pokemon_image_dataset/__init__.py | jneuendorf/pokemon-image-dataset | 120b5beb4f058ee7c8fa86cc7e5b8030b75a03f1 | [
"MIT"
] | null | null | null | from .dataset import PokemonImageDataset
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e7e3af220662cac536a1eee1686cbd0d25315145 | 25 | py | Python | cvutils/__init__.py | MercierLucas/cv_utils | 34683bfc06857c3ed293924201c9279606029ae0 | [
"MIT"
] | null | null | null | cvutils/__init__.py | MercierLucas/cv_utils | 34683bfc06857c3ed293924201c9279606029ae0 | [
"MIT"
] | null | null | null | cvutils/__init__.py | MercierLucas/cv_utils | 34683bfc06857c3ed293924201c9279606029ae0 | [
"MIT"
] | null | null | null | from .images import Image | 25 | 25 | 0.84 | 4 | 25 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f016a34bb2e52f8bddb55bd7fcfde37631216b72 | 1,666 | py | Python | sql/permission.py | real-fire/archer | 8e9e82a51125859c61d23496ad0cab0a4bbc5181 | [
"Apache-2.0"
] | 1,516 | 2017-02-18T07:17:21.000Z | 2022-03-30T07:00:22.000Z | sql/permission.py | real-fire/archer | 8e9e82a51125859c61d23496ad0cab0a4bbc5181 | [
"Apache-2.0"
] | 67 | 2017-02-13T07:24:56.000Z | 2022-03-22T04:56:41.000Z | sql/permission.py | real-fire/archer | 8e9e82a51125859c61d23496ad0cab0a4bbc5181 | [
"Apache-2.0"
] | 674 | 2017-03-02T02:03:25.000Z | 2022-03-31T03:43:52.000Z | # -*- coding: UTF-8 -*-
import simplejson as json
from django.shortcuts import render
from django.http import HttpResponse
from .models import users
# Verify that the user has superuser permissions for the operation
def superuser_required(func):
    def wrapper(request, *args, **kw):
        # Fetch the user's info and verify permissions
        loginUser = request.session.get('login_username', False)
        loginUserOb = users.objects.get(username=loginUser)
        if loginUserOb.is_superuser is False:
            if request.is_ajax():
                finalResult = {'status': 1, 'msg': '您无权操作,请联系管理员', 'data': []}
                return HttpResponse(json.dumps(finalResult), content_type='application/json')
            else:
                context = {'errMsg': "您无权操作,请联系管理员"}
                return render(request, "error.html", context)
        return func(request, *args, **kw)
    return wrapper


# Verify that the user's role permits the operation
def role_required(roles=()):
    def _deco(func):
        def wrapper(request, *args, **kw):
            # Fetch the user's info and verify permissions
            loginUser = request.session.get('login_username', False)
            loginUserOb = users.objects.get(username=loginUser)
            loginrole = loginUserOb.role
            if loginrole not in roles and loginUserOb.is_superuser is False:
                if request.is_ajax():
                    finalResult = {'status': 1, 'msg': '您无权操作,请联系管理员', 'data': []}
                    return HttpResponse(json.dumps(finalResult), content_type='application/json')
                else:
                    context = {'errMsg': "您无权操作,请联系管理员"}
                    return render(request, "error.html", context)
            return func(request, *args, **kw)
        return wrapper
    return _deco
| 33.32 | 97 | 0.591837 | 172 | 1,666 | 5.662791 | 0.360465 | 0.045175 | 0.053388 | 0.043121 | 0.753593 | 0.753593 | 0.753593 | 0.753593 | 0.753593 | 0.753593 | 0 | 0.002542 | 0.291717 | 1,666 | 49 | 98 | 34 | 0.822881 | 0.038415 | 0 | 0.666667 | 0 | 0 | 0.10401 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.151515 | false | 0 | 0.121212 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
f038b248338e1fc4cf77082c4e0cd4a150246c1c | 75 | py | Python | corefacility/authorizations/mailru/entity/authorization_token/__init__.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | corefacility/authorizations/mailru/entity/authorization_token/__init__.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | corefacility/authorizations/mailru/entity/authorization_token/__init__.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | from .authorization_token import AuthorizationToken, AuthorizationTokenSet
| 37.5 | 74 | 0.906667 | 6 | 75 | 11.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 75 | 1 | 75 | 75 | 0.957143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f04bad230f2c5cfc0feb23bb1d8534e39b79e0ae | 885 | py | Python | op_builder/transformer_inference.py | ScriptBox99/DeepSpeed | fead387f7837200fefbaba3a7b14709072d8d2cb | [
"MIT"
] | 1 | 2022-02-12T06:27:26.000Z | 2022-02-12T06:27:26.000Z | op_builder/transformer_inference.py | ScriptBox99/DeepSpeed | fead387f7837200fefbaba3a7b14709072d8d2cb | [
"MIT"
] | null | null | null | op_builder/transformer_inference.py | ScriptBox99/DeepSpeed | fead387f7837200fefbaba3a7b14709072d8d2cb | [
"MIT"
] | null | null | null | from .builder import CUDAOpBuilder
class InferenceBuilder(CUDAOpBuilder):
    BUILD_VAR = "DS_BUILD_TRANSFORMER_INFERENCE"
    NAME = "transformer_inference"

    def __init__(self, name=None):
        name = self.NAME if name is None else name
        super().__init__(name=name)

    def absolute_name(self):
        return f'deepspeed.ops.transformer_inference.{self.NAME}_op'

    def sources(self):
        return [
            'csrc/transformer/inference/csrc/pt_binding.cpp',
            'csrc/transformer/inference/csrc/gelu.cu',
            'csrc/transformer/inference/csrc/normalize.cu',
            'csrc/transformer/inference/csrc/softmax.cu',
            'csrc/transformer/inference/csrc/dequantize.cu',
            'csrc/transformer/inference/csrc/apply_rotary_pos_emb.cu',
        ]

    def include_paths(self):
        return ['csrc/transformer/inference/includes']
| 32.777778 | 70 | 0.671186 | 100 | 885 | 5.73 | 0.43 | 0.34904 | 0.293194 | 0.293194 | 0.328098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219209 | 885 | 26 | 71 | 34.038462 | 0.829233 | 0 | 0 | 0 | 0 | 0 | 0.459887 | 0.459887 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.05 | 0.15 | 0.55 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b2bb0ff9c2ec882645518a870086ee49d52173a0 | 44 | py | Python | taglets/modules/zsl_kg_lite/__init__.py | BatsResearch/taglets | 0fa9ebeccc9177069aa09b2da84746b7532e3495 | [
"Apache-2.0"
] | 13 | 2021-11-10T13:17:10.000Z | 2022-03-30T22:56:52.000Z | taglets/modules/zsl_kg_lite/__init__.py | BatsResearch/taglets | 0fa9ebeccc9177069aa09b2da84746b7532e3495 | [
"Apache-2.0"
] | 1 | 2021-11-10T16:01:47.000Z | 2021-11-10T16:01:47.000Z | taglets/modules/zsl_kg_lite/__init__.py | BatsResearch/taglets | 0fa9ebeccc9177069aa09b2da84746b7532e3495 | [
"Apache-2.0"
] | 2 | 2022-02-14T22:40:29.000Z | 2022-02-27T04:27:48.000Z | from .zsl_kg import ZSLKGModule, ZSLKGTaglet | 44 | 44 | 0.863636 | 6 | 44 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b2d12d88516af73cebee0d679e3875108ce10fcd | 29 | py | Python | pdf_layout_scanner/__init__.py | yoshihikoueno/pdfminer-layout-scanner | 437f7f2329db79c0f794fe41f4156218a982cec5 | [
"MIT"
] | 5 | 2019-12-18T06:41:11.000Z | 2021-06-21T03:15:15.000Z | pdf_layout_scanner/__init__.py | yoshihikoueno/pdfminer-layout-scanner | 437f7f2329db79c0f794fe41f4156218a982cec5 | [
"MIT"
] | null | null | null | pdf_layout_scanner/__init__.py | yoshihikoueno/pdfminer-layout-scanner | 437f7f2329db79c0f794fe41f4156218a982cec5 | [
"MIT"
] | 4 | 2020-07-01T00:47:01.000Z | 2021-05-04T06:17:15.000Z | from . import layout_scanner
| 14.5 | 28 | 0.827586 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
651f6429a2201aa4fedf0766d35d8ed50a553bb4 | 206 | py | Python | token_management_system/token_manager/apps.py | pawanvirsingh/token_management | b1ef01e19e37a61c627c6712917807424e77b823 | [
"Apache-2.0"
] | null | null | null | token_management_system/token_manager/apps.py | pawanvirsingh/token_management | b1ef01e19e37a61c627c6712917807424e77b823 | [
"Apache-2.0"
] | null | null | null | token_management_system/token_manager/apps.py | pawanvirsingh/token_management | b1ef01e19e37a61c627c6712917807424e77b823 | [
"Apache-2.0"
] | null | null | null | from django.apps import AppConfig
class TokenManagerConfig(AppConfig):
    name = 'token_management_system.token_manager'

    def ready(self):
        import token_management_system.token_manager.signals
| 25.75 | 60 | 0.786408 | 24 | 206 | 6.5 | 0.666667 | 0.192308 | 0.269231 | 0.333333 | 0.423077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150485 | 206 | 7 | 61 | 29.428571 | 0.891429 | 0 | 0 | 0 | 0 | 0 | 0.179612 | 0.179612 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6533187e17e54cd7b47ac182a20ba830d533d14b | 90 | py | Python | githubsurvivor/models/__init__.py | richo/githubsurvivor | 43c8648d9b956372de669e0d5b45844d24a72583 | [
"MIT"
] | null | null | null | githubsurvivor/models/__init__.py | richo/githubsurvivor | 43c8648d9b956372de669e0d5b45844d24a72583 | [
"MIT"
] | null | null | null | githubsurvivor/models/__init__.py | richo/githubsurvivor | 43c8648d9b956372de669e0d5b45844d24a72583 | [
"MIT"
] | null | null | null | from githubsurvivor.models.user import User
from githubsurvivor.models.issue import Issue
| 30 | 45 | 0.866667 | 12 | 90 | 6.5 | 0.5 | 0.461538 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 90 | 2 | 46 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6545529115e44abb5c57e772d917d4851eb60b6c | 96 | py | Python | venv/lib/python3.8/site-packages/cachecontrol/cache.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/cachecontrol/cache.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/cachecontrol/cache.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/78/c4/bd/067f49590907888600e463d106a29553de6e4bec97931af3a6869f4628 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.489583 | 0 | 96 | 1 | 96 | 96 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e8e3d4cd51ad1d353efe4415390961cf26245fbb | 127 | py | Python | pip_mockup/test/foo_test.py | jp74/pip-mockup | 40bb26ba88cd80e5beaef5bf63f87337c3bcb8ca | [
"IJG"
] | null | null | null | pip_mockup/test/foo_test.py | jp74/pip-mockup | 40bb26ba88cd80e5beaef5bf63f87337c3bcb8ca | [
"IJG"
] | null | null | null | pip_mockup/test/foo_test.py | jp74/pip-mockup | 40bb26ba88cd80e5beaef5bf63f87337c3bcb8ca | [
"IJG"
] | null | null | null | from ..foo import myFunction
import unittest
def test_return():
    assert myFunction('fred') == "Foo::myFunction says fred"
| 18.142857 | 60 | 0.724409 | 16 | 127 | 5.6875 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15748 | 127 | 6 | 61 | 21.166667 | 0.850467 | 0 | 0 | 0 | 0 | 0 | 0.228346 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8ed0f2b1e0a47943808ad17dbf5794c76f93845 | 115 | py | Python | test/__init__.py | vincenzopalazzo/AnalyticsPyBlock | 04e206f9dd34ecc00fb85ba9be483e5bfe566302 | [
"Apache-2.0"
] | 3 | 2020-11-08T22:09:12.000Z | 2021-10-09T09:38:16.000Z | test/__init__.py | vincenzopalazzo/AnalyticsPyBlock | 04e206f9dd34ecc00fb85ba9be483e5bfe566302 | [
"Apache-2.0"
] | null | null | null | test/__init__.py | vincenzopalazzo/AnalyticsPyBlock | 04e206f9dd34ecc00fb85ba9be483e5bfe566302 | [
"Apache-2.0"
] | 2 | 2021-03-25T20:03:15.000Z | 2021-04-10T18:11:22.000Z | from .persistence_test import EstimateTypeScriptTest
from .estimate_type_script_test import EstimateTypeScriptTest
| 38.333333 | 61 | 0.913043 | 12 | 115 | 8.416667 | 0.666667 | 0.19802 | 0.633663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069565 | 115 | 2 | 62 | 57.5 | 0.943925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
681bbb947ef9631dc631a8874ef74aa123520cba | 67 | py | Python | apiserver/apiserver/views.py | johnnykwwang/Halite-III | dd16463f1f13d652e7172e82687136f2217bb427 | [
"MIT"
] | 232 | 2017-09-11T14:28:41.000Z | 2022-01-19T10:26:07.000Z | apiserver/apiserver/views.py | johnnykwwang/Halite-III | dd16463f1f13d652e7172e82687136f2217bb427 | [
"MIT"
] | 302 | 2017-09-13T04:46:25.000Z | 2018-09-06T22:14:06.000Z | apiserver/apiserver/views.py | johnnykwwang/Halite-III | dd16463f1f13d652e7172e82687136f2217bb427 | [
"MIT"
] | 151 | 2017-09-11T21:03:07.000Z | 2020-11-28T04:58:55.000Z | from . import app
@app.route("/")
def index():
    return "Test"
| 9.571429 | 17 | 0.58209 | 9 | 67 | 4.333333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223881 | 67 | 6 | 18 | 11.166667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.074627 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d7d11b42117839a04f8ce7415bf737b9ccd07fc2 | 151 | py | Python | tests/test_dazed.py | calmdown13/dazed | 37243c72d75a988d872df8d18a6fe76e6a39d353 | [
"MIT"
] | null | null | null | tests/test_dazed.py | calmdown13/dazed | 37243c72d75a988d872df8d18a6fe76e6a39d353 | [
"MIT"
] | null | null | null | tests/test_dazed.py | calmdown13/dazed | 37243c72d75a988d872df8d18a6fe76e6a39d353 | [
"MIT"
] | null | null | null | """Dazed version test."""
from dazed import __version__
def test_version():
    """It returns correct version."""
    assert __version__ == "1.0.2"
| 16.777778 | 37 | 0.662252 | 19 | 151 | 4.789474 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.18543 | 151 | 8 | 38 | 18.875 | 0.715447 | 0.311258 | 0 | 0 | 0 | 0 | 0.053763 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0bcf3d027fe207607f0535e8887cb74b6ffcfa87 | 96 | py | Python | source/flux/apply.py | fr0stbite/flux | a545374eae067c0fd5b39301ec7e1cb7aaef960c | [
"MIT"
] | null | null | null | source/flux/apply.py | fr0stbite/flux | a545374eae067c0fd5b39301ec7e1cb7aaef960c | [
"MIT"
] | null | null | null | source/flux/apply.py | fr0stbite/flux | a545374eae067c0fd5b39301ec7e1cb7aaef960c | [
"MIT"
] | null | null | null | from .compose import compose
def apply(value, *functions):
    return compose(*functions)(value)
| 19.2 | 35 | 0.760417 | 12 | 96 | 6.083333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 4 | 36 | 24 | 0.869048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
0bd5f031e6fe2fb7b0aad477937df08383f24841 | 9,774 | py | Python | remodet_repository_wdh_part/Projects/PyLib/NetLib/ResNet.py | UrwLee/Remo_experience | a59d5b9d6d009524672e415c77d056bc9dd88c72 | [
"MIT"
] | null | null | null | remodet_repository_wdh_part/Projects/PyLib/NetLib/ResNet.py | UrwLee/Remo_experience | a59d5b9d6d009524672e415c77d056bc9dd88c72 | [
"MIT"
] | null | null | null | remodet_repository_wdh_part/Projects/PyLib/NetLib/ResNet.py | UrwLee/Remo_experience | a59d5b9d6d009524672e415c77d056bc9dd88c72 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
import caffe
from caffe import layers as L
from caffe import params as P
from caffe.proto import caffe_pb2
import sys
sys.dont_write_bytecode = True
from ConvBNLayer import *
# Create a ResNet unit layer
def ResUnitLayer(net, from_layer, block_name, out2a, out2b, out2c, stride, use_branch1, dilation=1):
    conv_prefix = 'res{}_'.format(block_name)
    conv_postfix = ''
    bn_prefix = 'bn{}_'.format(block_name)
    bn_postfix = ''
    scale_prefix = 'scale{}_'.format(block_name)
    scale_postfix = ''
    use_scale = True
    if use_branch1:
        branch_name = 'branch1'
        ConvBNUnitLayer(net, from_layer, branch_name, use_bn=True, use_relu=False,
                        num_output=out2c, kernel_size=1, pad=0, stride=stride, use_scale=use_scale,
                        conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                        bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                        scale_prefix=scale_prefix, scale_postfix=scale_postfix)
        branch1 = '{}{}'.format(conv_prefix, branch_name)
    else:
        branch1 = from_layer
    branch_name = 'branch2a'
    ConvBNUnitLayer(net, from_layer, branch_name, use_bn=True, use_relu=True,
                    num_output=out2a, kernel_size=1, pad=0, stride=stride, use_scale=use_scale,
                    conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                    bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                    scale_prefix=scale_prefix, scale_postfix=scale_postfix)
    out_name = '{}{}'.format(conv_prefix, branch_name)
    branch_name = 'branch2b'
    if dilation == 1:
        ConvBNUnitLayer(net, out_name, branch_name, use_bn=True, use_relu=True,
                        num_output=out2b, kernel_size=3, pad=1, stride=1, use_scale=use_scale,
                        conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                        bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                        scale_prefix=scale_prefix, scale_postfix=scale_postfix)
    else:
        # floor division keeps pad an int under both Python 2 and Python 3
        pad = (3 + (dilation - 1) * 2 - 1) // 2
        ConvBNUnitLayer(net, out_name, branch_name, use_bn=True, use_relu=True,
                        num_output=out2b, kernel_size=3, pad=pad, stride=1, use_scale=use_scale,
                        dilation=dilation, conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                        bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                        scale_prefix=scale_prefix, scale_postfix=scale_postfix)
    out_name = '{}{}'.format(conv_prefix, branch_name)
    branch_name = 'branch2c'
    ConvBNUnitLayer(net, out_name, branch_name, use_bn=True, use_relu=False,
                    num_output=out2c, kernel_size=1, pad=0, stride=1, use_scale=use_scale,
                    conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                    bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                    scale_prefix=scale_prefix, scale_postfix=scale_postfix)
    branch2 = '{}{}'.format(conv_prefix, branch_name)
    res_name = 'res{}'.format(block_name)
    net[res_name] = L.Eltwise(net[branch1], net[branch2])
    relu_name = '{}_relu'.format(res_name)
    net[relu_name] = L.ReLU(net[res_name], in_place=True)
# Create ResNet-50
def ResNet50Net(net, from_layer="data", use_pool5=False, use_dilation_conv5=False):
    conv_prefix = ''
    conv_postfix = ''
    bn_prefix = 'bn_'
    bn_postfix = ''
    scale_prefix = 'scale_'
    scale_postfix = ''
    ConvBNUnitLayer(net, from_layer, 'conv1', use_bn=True, use_relu=True,
                    num_output=64, kernel_size=7, pad=3, stride=2,
                    use_conv_bias=True,
                    conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                    bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                    scale_prefix=scale_prefix, scale_postfix=scale_postfix)
    net.pool1 = L.Pooling(net.conv1, pool=P.Pooling.MAX, kernel_size=3, stride=2)
    ResUnitLayer(net, 'pool1', '2a', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=True)
    ResUnitLayer(net, 'res2a', '2b', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res2b', '2c', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res2c', '3a', out2a=128, out2b=128, out2c=512, stride=2, use_branch1=True)
    ResUnitLayer(net, 'res3a', '3b', out2a=128, out2b=128, out2c=512, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res3b', '3c', out2a=128, out2b=128, out2c=512, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res3c', '3d', out2a=128, out2b=128, out2c=512, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res3d', '4a', out2a=256, out2b=256, out2c=1024, stride=2, use_branch1=True)
    ResUnitLayer(net, 'res4a', '4b', out2a=256, out2b=256, out2c=1024, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res4b', '4c', out2a=256, out2b=256, out2c=1024, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res4c', '4d', out2a=256, out2b=256, out2c=1024, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res4d', '4e', out2a=256, out2b=256, out2c=1024, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res4e', '4f', out2a=256, out2b=256, out2c=1024, stride=1, use_branch1=False)
    stride = 2
    dilation = 1
    if use_dilation_conv5:
        stride = 1
        dilation = 2
    ResUnitLayer(net, 'res4f', '5a', out2a=512, out2b=512, out2c=2048, stride=stride, use_branch1=True, dilation=dilation)
    ResUnitLayer(net, 'res5a', '5b', out2a=512, out2b=512, out2c=2048, stride=1, use_branch1=False, dilation=dilation)
    ResUnitLayer(net, 'res5b', '5c', out2a=512, out2b=512, out2c=2048, stride=1, use_branch1=False, dilation=dilation)
    if use_pool5:
        net.pool5 = L.Pooling(net.res5c, pool=P.Pooling.AVE, global_pooling=True)
    return net
# Create ResNet-101
def ResNet101Net(net, from_layer="data", use_pool5=True, use_dilation_conv5=False):
    conv_prefix = ''
    conv_postfix = ''
    bn_prefix = 'bn_'
    bn_postfix = ''
    scale_prefix = 'scale_'
    scale_postfix = ''
    ConvBNUnitLayer(net, from_layer, 'conv1', use_bn=True, use_relu=True,
                    num_output=64, kernel_size=7, pad=3, stride=2,
                    conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                    bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                    scale_prefix=scale_prefix, scale_postfix=scale_postfix)
    net.pool1 = L.Pooling(net.conv1, pool=P.Pooling.MAX, kernel_size=3, stride=2)
    ResUnitLayer(net, 'pool1', '2a', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=True)
    ResUnitLayer(net, 'res2a', '2b', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res2b', '2c', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res2c', '3a', out2a=128, out2b=128, out2c=512, stride=2, use_branch1=True)
    from_layer = 'res3a'
    for i in xrange(1, 4):
        block_name = '3b{}'.format(i)
        ResUnitLayer(net, from_layer, block_name, out2a=128, out2b=128, out2c=512, stride=1, use_branch1=False)
        from_layer = 'res{}'.format(block_name)
    ResUnitLayer(net, from_layer, '4a', out2a=256, out2b=256, out2c=1024, stride=2, use_branch1=True)
    from_layer = 'res4a'
    for i in xrange(1, 23):
        block_name = '4b{}'.format(i)
        ResUnitLayer(net, from_layer, block_name, out2a=256, out2b=256, out2c=1024, stride=1, use_branch1=False)
        from_layer = 'res{}'.format(block_name)
    stride = 2
    dilation = 1
    if use_dilation_conv5:
        stride = 1
        dilation = 2
    ResUnitLayer(net, from_layer, '5a', out2a=512, out2b=512, out2c=2048, stride=stride, use_branch1=True, dilation=dilation)
    ResUnitLayer(net, 'res5a', '5b', out2a=512, out2b=512, out2c=2048, stride=1, use_branch1=False, dilation=dilation)
    ResUnitLayer(net, 'res5b', '5c', out2a=512, out2b=512, out2c=2048, stride=1, use_branch1=False, dilation=dilation)
    if use_pool5:
        net.pool5 = L.Pooling(net.res5c, pool=P.Pooling.AVE, global_pooling=True)
    return net
# Create ResNet-152 Network
def ResNet152Net(net, from_layer="data", use_pool5=True, use_dilation_conv5=False):
    conv_prefix = ''
    conv_postfix = ''
    bn_prefix = 'bn_'
    bn_postfix = ''
    scale_prefix = 'scale_'
    scale_postfix = ''
    ConvBNUnitLayer(net, from_layer, 'conv1', use_bn=True, use_relu=True,
                    num_output=64, kernel_size=7, pad=3, stride=2,
                    conv_prefix=conv_prefix, conv_postfix=conv_postfix,
                    bn_prefix=bn_prefix, bn_postfix=bn_postfix,
                    scale_prefix=scale_prefix, scale_postfix=scale_postfix)
    net.pool1 = L.Pooling(net.conv1, pool=P.Pooling.MAX, kernel_size=3, stride=2)
    ResUnitLayer(net, 'pool1', '2a', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=True)
    ResUnitLayer(net, 'res2a', '2b', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res2b', '2c', out2a=64, out2b=64, out2c=256, stride=1, use_branch1=False)
    ResUnitLayer(net, 'res2c', '3a', out2a=128, out2b=128, out2c=512, stride=2, use_branch1=True)
    from_layer = 'res3a'
    for i in xrange(1, 8):
        block_name = '3b{}'.format(i)
        ResUnitLayer(net, from_layer, block_name, out2a=128, out2b=128, out2c=512, stride=1, use_branch1=False)
        from_layer = 'res{}'.format(block_name)
    ResUnitLayer(net, from_layer, '4a', out2a=256, out2b=256, out2c=1024, stride=2, use_branch1=True)
    from_layer = 'res4a'
    for i in xrange(1, 36):
        block_name = '4b{}'.format(i)
        ResUnitLayer(net, from_layer, block_name, out2a=256, out2b=256, out2c=1024, stride=1, use_branch1=False)
        from_layer = 'res{}'.format(block_name)
    stride = 2
    dilation = 1
    if use_dilation_conv5:
        stride = 1
        dilation = 2
    ResUnitLayer(net, from_layer, '5a', out2a=512, out2b=512, out2c=2048, stride=stride, use_branch1=True, dilation=dilation)
    ResUnitLayer(net, 'res5a', '5b', out2a=512, out2b=512, out2c=2048, stride=1, use_branch1=False, dilation=dilation)
    ResUnitLayer(net, 'res5b', '5c', out2a=512, out2b=512, out2c=2048, stride=1, use_branch1=False, dilation=dilation)
    if use_pool5:
        net.pool5 = L.Pooling(net.res5c, pool=P.Pooling.AVE, global_pooling=True)
    return net
| 44.630137 | 125 | 0.700225 | 1,434 | 9,774 | 4.555091 | 0.091353 | 0.058175 | 0.045928 | 0.070269 | 0.883037 | 0.868034 | 0.860839 | 0.850429 | 0.850429 | 0.850429 | 0 | 0.085002 | 0.162267 | 9,774 | 218 | 126 | 44.834862 | 0.71275 | 0.010231 | 0 | 0.668571 | 0 | 0 | 0.040546 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022857 | false | 0 | 0.04 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0bda4280ad4b2474f1449e9e702b78fcdea4314b | 852 | py | Python | build/quadrotor_control/mav_manager/catkin_generated/pkg.develspace.context.pc.py | MultiRobotUPenn/groundstation_ws_vio_swarm | 60e01af6bf32bafb5bc31626b055436278dc8311 | [
"MIT"
] | 1 | 2020-03-10T06:32:51.000Z | 2020-03-10T06:32:51.000Z | build/quadrotor_control/mav_manager/catkin_generated/pkg.develspace.context.pc.py | MultiRobotUPenn/groundstation_ws_vio_swarm | 60e01af6bf32bafb5bc31626b055436278dc8311 | [
"MIT"
] | null | null | null | build/quadrotor_control/mav_manager/catkin_generated/pkg.develspace.context.pc.py | MultiRobotUPenn/groundstation_ws_vio_swarm | 60e01af6bf32bafb5bc31626b055436278dc8311 | [
"MIT"
] | 1 | 2018-11-07T03:37:23.000Z | 2018-11-07T03:37:23.000Z | # generated from catkin/cmake/template/pkg.context.pc.in
CATKIN_PACKAGE_PREFIX = ""
PROJECT_PKG_CONFIG_INCLUDE_DIRS = "/home/aarow/ros/vio_swarm_groundstation_ws/devel/include;/home/aarow/ros/vio_swarm_groundstation_ws/src/quadrotor_control/mav_manager/include;/usr/include/eigen3".split(';') if "/home/aarow/ros/vio_swarm_groundstation_ws/devel/include;/home/aarow/ros/vio_swarm_groundstation_ws/src/quadrotor_control/mav_manager/include;/usr/include/eigen3" != "" else []
PROJECT_CATKIN_DEPENDS = "roscpp;nav_msgs;sensor_msgs;geometry_msgs;quadrotor_msgs;trackers_manager;std_trackers;std_msgs;message_runtime".replace(';', ' ')
PKG_CONFIG_LIBRARIES_WITH_PREFIX = "-lmav_manager".split(';') if "-lmav_manager" != "" else []
PROJECT_NAME = "mav_manager"
PROJECT_SPACE_DIR = "/home/aarow/ros/vio_swarm_groundstation_ws/devel"
PROJECT_VERSION = "1.0.0"
| 94.666667 | 389 | 0.81338 | 123 | 852 | 5.260163 | 0.430894 | 0.069552 | 0.092736 | 0.11592 | 0.476043 | 0.476043 | 0.476043 | 0.476043 | 0.414219 | 0.414219 | 0 | 0.00615 | 0.045775 | 852 | 8 | 390 | 106.5 | 0.789668 | 0.06338 | 0 | 0 | 1 | 0.285714 | 0.66206 | 0.604271 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
042e1ea7988f7d09690dc1a1870371d24fe03842 | 149 | py | Python | flashcardgui/admin.py | zserg/flashcard | 985b2966250d7719d8ff5575785e41b4d503eb8b | [
"MIT"
] | 12 | 2017-03-22T10:19:04.000Z | 2022-02-03T14:42:36.000Z | flashcardgui/admin.py | zserg/flashcard | 985b2966250d7719d8ff5575785e41b4d503eb8b | [
"MIT"
] | 2 | 2017-04-13T15:15:02.000Z | 2018-11-26T17:53:23.000Z | flashcardgui/admin.py | zserg/flashcard | 985b2966250d7719d8ff5575785e41b4d503eb8b | [
"MIT"
] | 9 | 2017-04-10T22:35:27.000Z | 2022-02-21T16:38:30.000Z | from django.contrib import admin
from api.models import Deck


class DeckAdmin(admin.ModelAdmin):
    pass


admin.site.register(Deck, DeckAdmin)
| 16.555556 | 36 | 0.765101 | 20 | 149 | 5.7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161074 | 149 | 8 | 37 | 18.625 | 0.912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
043afd61e706aef7271c101d40127c865b4ba4d6 | 108 | py | Python | braintree/facilitated_details.py | futureironman/braintree_python | 26bb8a857bc29322a8bca2e8e0fe6d99cfe6a1ac | [
"MIT"
] | 182 | 2015-01-09T05:26:46.000Z | 2022-03-16T14:10:06.000Z | braintree/facilitated_details.py | futureironman/braintree_python | 26bb8a857bc29322a8bca2e8e0fe6d99cfe6a1ac | [
"MIT"
] | 95 | 2015-02-24T23:29:56.000Z | 2022-03-13T03:27:58.000Z | braintree/facilitated_details.py | futureironman/braintree_python | 26bb8a857bc29322a8bca2e8e0fe6d99cfe6a1ac | [
"MIT"
] | 93 | 2015-02-19T17:59:06.000Z | 2022-03-19T17:01:25.000Z | from braintree.attribute_getter import AttributeGetter


class FacilitatedDetails(AttributeGetter):
    pass
| 21.6 | 54 | 0.851852 | 10 | 108 | 9.1 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 108 | 4 | 55 | 27 | 0.947917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f089902b160e4e1b9c05dd1956bc10b673f308da | 35 | py | Python | dialogue/moviebot_model/__init__.py | waynewu6250/ChatBoxer | ae73604d4778b3b5223049e73e696ad66239c0ff | [
"MIT"
] | 7 | 2019-04-18T14:40:37.000Z | 2021-05-11T08:36:21.000Z | dialogue/moviebot_model/__init__.py | waynewu6250/ChatBoxer | ae73604d4778b3b5223049e73e696ad66239c0ff | [
"MIT"
] | 6 | 2020-06-05T20:20:50.000Z | 2021-06-10T17:48:56.000Z | dialogue/moviebot_model/__init__.py | waynewu6250/ChatBoxer | ae73604d4778b3b5223049e73e696ad66239c0ff | [
"MIT"
] | 2 | 2019-07-26T06:07:00.000Z | 2020-06-25T17:34:47.000Z | from .new_seq2seq import NewSeq2seq | 35 | 35 | 0.885714 | 5 | 35 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.085714 | 35 | 1 | 35 | 35 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f0be8299252cfa0e4da50a9df395ade386fd676b | 445 | py | Python | winter/routing/__init__.py | EvgenySmekalin/winter | 24b6a02f958478547a4a120324823743a1f7e1a1 | [
"MIT"
] | 1 | 2020-03-28T14:54:28.000Z | 2020-03-28T14:54:28.000Z | winter/routing/__init__.py | EvgenySmekalin/winter | 24b6a02f958478547a4a120324823743a1f7e1a1 | [
"MIT"
] | null | null | null | winter/routing/__init__.py | EvgenySmekalin/winter | 24b6a02f958478547a4a120324823743a1f7e1a1 | [
"MIT"
] | null | null | null | from .argument_resolvers import PathParametersArgumentResolver
from .argument_resolvers import QueryParameterArgumentResolver
from .reverse import reverse
from .route import Route
from .route_annotation import RouteAnnotation
from .routing import get_route
from .routing import route
from .routing import route_delete
from .routing import route_get
from .routing import route_patch
from .routing import route_post
from .routing import route_put
| 34.230769 | 62 | 0.865169 | 57 | 445 | 6.596491 | 0.280702 | 0.204787 | 0.316489 | 0.351064 | 0.130319 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107865 | 445 | 12 | 63 | 37.083333 | 0.947103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f0d6ab9097e657fa72cc9978afb077cd3348c970 | 26,013 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/ocelot/phys/Phys_Studio_Base.py | PascalGuenther/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 82 | 2016-06-29T17:24:43.000Z | 2021-04-16T06:49:17.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/ocelot/phys/Phys_Studio_Base.py | PascalGuenther/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 6 | 2022-01-12T18:22:08.000Z | 2022-03-25T10:19:27.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/ocelot/phys/Phys_Studio_Base.py | PascalGuenther/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 56 | 2016-08-02T10:50:50.000Z | 2021-07-19T08:57:34.000Z | from pyradioconfig.calculator_model_framework.interfaces.iphy import IPhy
from pyradioconfig.parts.common.phys.phy_common import PHY_COMMON_FRAME_INTERNAL
class PHYS_Studio_Base_Ocelot(IPhy):
##########2FSK PHYS##########
#Base Functions
def Studio_2GFSK_base(self, phy, model):
# Required Inputs
phy.profile_inputs.baudrate_tol_ppm.value = 0
phy.profile_inputs.channel_spacing_hz.value = 1000000
phy.profile_inputs.diff_encoding_mode.value = model.vars.diff_encoding_mode.var_enum.DISABLED
phy.profile_inputs.dsss_chipping_code.value = 0
phy.profile_inputs.dsss_len.value = 0
phy.profile_inputs.dsss_spreading_factor.value = 0
phy.profile_inputs.fsk_symbol_map.value = model.vars.fsk_symbol_map.var_enum.MAP0
phy.profile_inputs.modulation_type.value = model.vars.modulation_type.var_enum.FSK2
phy.profile_inputs.preamble_pattern.value = 1
phy.profile_inputs.preamble_pattern_len.value = 2
phy.profile_inputs.preamble_length.value = 40
phy.profile_inputs.rx_xtal_error_ppm.value = 10
phy.profile_inputs.shaping_filter.value = model.vars.shaping_filter.var_enum.Gaussian
phy.profile_inputs.shaping_filter_param.value = 0.5
phy.profile_inputs.syncword_0.value = 0xf68d
phy.profile_inputs.syncword_1.value = 0x0
phy.profile_inputs.syncword_length.value = 16
phy.profile_inputs.tx_xtal_error_ppm.value = 10
phy.profile_inputs.xtal_frequency_hz.value = 39000000
# Common frame settings
PHY_COMMON_FRAME_INTERNAL(phy, model)
#Derivative PHYs
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-149
def PHY_Studio_915M_2GFSK_2Mbps_500K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M 2GFSK 2Mbps 500K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 2000000
phy.profile_inputs.deviation.value = 500000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 915000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-150
def PHY_Studio_915M_2GFSK_500Kbps_175K_mi0p7(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M 2GFSK 500Kbps 175K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 500000
phy.profile_inputs.deviation.value = 175000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 915000000
return phy
# Owner: Young-Joon Choi
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-148
def PHY_Studio_915M_2GFSK_100Kbps_50K_antdiv(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M 2GFSK 100Kbps 50K antenna diversity',
phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 100000
phy.profile_inputs.deviation.value = 50000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 915000000
# Configure a long preamble to support antenna diversity
phy.profile_inputs.preamble_length.value = 60
# Enable antenna diversity and configure options
phy.profile_inputs.antdivmode.value = model.vars.antdivmode.var_enum.PHDEMODANTDIV
phy.profile_inputs.skip2ant.value = model.vars.skip2ant.var_enum.SKIP2ANT
return phy
#Owner: Casey Weltzin
def PHY_Studio_915M_2GFSK_50Kbps_25K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M 2GFSK 50Kbps 25K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 50000
phy.profile_inputs.deviation.value = 25000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 915000000
return phy
# Owner: Casey Weltzin
def PHY_Studio_868M_2GFSK_50Kbps_25K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='868M 2GFSK 50Kbps 25K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 50000
phy.profile_inputs.deviation.value = 25000
# Add band-specific parameters
        phy.profile_inputs.base_frequency_hz.value = 868000000
        return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-146
def PHY_Studio_868M_2GFSK_38p4Kbps_20K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='868M 2GFSK 38.4Kbps 20K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 38400
phy.profile_inputs.deviation.value = 20000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 868000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-145
def PHY_Datasheet_868M_2GFSK_2p4Kbps_1p2K_ETSI(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='868MHz 2GFSK 2.4Kbps 1.2KHz ETSI',
phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 2400
phy.profile_inputs.deviation.value = 1200
phy.profile_inputs.channel_spacing_hz.value = 25000
# Add band-specific parameters
# Updating center frequency based on findings using SAW filter.
# Details available: https://jira.silabs.com/browse/MCUW_RADIO_CFG-1479
phy.profile_inputs.base_frequency_hz.value = 868300000
# Define PHY as ETSI compatible
phy.profile_inputs.etsi_cat1_compatible.value = model.vars.etsi_cat1_compatible.var_enum.Band_868
        # For the ETSI PHYs, define the ETSI BW that will be used to accommodate frequency tolerance
# Do this by setting the bandwidth explicitly instead of the xtal error
phy.profile_inputs.bandwidth_hz.value = 10000
# Set the xtal tol to match the forced AFC bandwidth
phy.profile_inputs.rx_xtal_error_ppm.value = 2
phy.profile_inputs.tx_xtal_error_ppm.value = 2
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-190
def PHY_Studio_868M_2GFSK_600bps_800(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='868MHz 2GFSK 600bps 800Hz', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 600
phy.profile_inputs.deviation.value = 800
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 868000000
# Use lower xtal tol only for low deviation PHYs
# 20ppm (default RX+TX tol) here is 17.36kHz which is much too wide compared to the deviation of this PHY
phy.profile_inputs.rx_xtal_error_ppm.value = 5
phy.profile_inputs.tx_xtal_error_ppm.value = 5
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-78
def PHY_Studio_490M_2GFSK_38p4Kbps_20K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='490MHz 2GFSK 38.4Kbps 20KHz', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 38400
phy.profile_inputs.deviation.value = 20000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 490000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-81
def PHY_Datasheet_490M_2GFSK_10Kbps_25K_20ppm(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='490MHz 2GFSK 10Kbps 25KHz 20ppm', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 10000
phy.profile_inputs.deviation.value = 25000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 490000000
# This PHY has a special requirement of 20ppm rx/tx tol (per Apps)
phy.profile_inputs.rx_xtal_error_ppm.value = 20
phy.profile_inputs.tx_xtal_error_ppm.value = 20
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-76
def PHY_Studio_490M_2GFSK_10Kbps_5K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='490MHz 2GFSK 10Kbps 5KHz', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 10000
phy.profile_inputs.deviation.value = 5000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 490000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-77
def PHY_Studio_490M_2GFSK_2p4Kbps_1p2K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='490MHz 2GFSK 2.4Kbps 1.2KHz', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 2400
phy.profile_inputs.deviation.value = 1200
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 490000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-58
def PHY_Studio_434M_2GFSK_100Kbps_50K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='434M 2GFSK 100Kbps 50K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 100000
phy.profile_inputs.deviation.value = 50000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 434000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-60
def PHY_Studio_434M_2GFSK_50Kbps_25K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='434M 2GFSK 50Kbps 25K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 50000
phy.profile_inputs.deviation.value = 25000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 434000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-59
def PHY_Studio_434M_2GFSK_2p4Kbps_1p2K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='434M 2GFSK 2.4Kbps 1.2K', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 2400
phy.profile_inputs.deviation.value = 1200
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 434000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-39
def PHY_Studio_315M_2GFSK_38p4Kbps_20K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='315MHz 2GFSK 38.4Kbps 20KHz', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 38400
phy.profile_inputs.deviation.value = 20000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 315000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-26
def PHY_Studio_169M_2GFSK_38p4Kbps_20K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='169MHz 2GFSK 38.4Kbps 20KHz', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 38400
phy.profile_inputs.deviation.value = 20000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 169000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-25
def PHY_Datasheet_169M_2GFSK_2p4Kbps_1p2K_ETSI(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='169MHz 2GFSK 2.4Kbps 1.2KHz ETSI',
phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 2400
phy.profile_inputs.deviation.value = 1200
phy.profile_inputs.channel_spacing_hz.value = 25000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 169000000
# Define PHY as ETSI compatible
phy.profile_inputs.etsi_cat1_compatible.value = model.vars.etsi_cat1_compatible.var_enum.Band_169
        # For the ETSI PHYs, define the ETSI BW that will be used to accommodate frequency tolerance
# Do this by setting the bandwidth explicitly instead of the xtal error
phy.profile_inputs.bandwidth_hz.value = 10000
# Set the xtal tol to match the forced AFC bandwidth
phy.profile_inputs.rx_xtal_error_ppm.value = 8
phy.profile_inputs.tx_xtal_error_ppm.value = 8
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-24
def PHY_Studio_169M_2GFSK_2p4Kbps_1p2K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='169MHz 2GFSK 2.4Kbps 1.2KHz', phy_name=phy_name)
# Start with the base function
self.Studio_2GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 2400
phy.profile_inputs.deviation.value = 1200
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 169000000
return phy
##########4FSK PHYS##########
# Base Functions
def Studio_4GFSK_base(self, phy, model):
# Required Inputs
phy.profile_inputs.baudrate_tol_ppm.value = 0
phy.profile_inputs.channel_spacing_hz.value = 1000000
phy.profile_inputs.diff_encoding_mode.value = model.vars.diff_encoding_mode.var_enum.DISABLED
phy.profile_inputs.dsss_chipping_code.value = 0
phy.profile_inputs.dsss_len.value = 0
phy.profile_inputs.dsss_spreading_factor.value = 0
phy.profile_inputs.fsk_symbol_map.value = model.vars.fsk_symbol_map.var_enum.MAP0
phy.profile_inputs.modulation_type.value = model.vars.modulation_type.var_enum.FSK4
phy.profile_inputs.preamble_pattern.value = 1
phy.profile_inputs.preamble_pattern_len.value = 2
phy.profile_inputs.preamble_length.value = 40
phy.profile_inputs.rx_xtal_error_ppm.value = 10
phy.profile_inputs.shaping_filter.value = model.vars.shaping_filter.var_enum.Gaussian
phy.profile_inputs.shaping_filter_param.value = 1.0
phy.profile_inputs.syncword_0.value = 0xf68d
phy.profile_inputs.syncword_1.value = 0x0
phy.profile_inputs.syncword_length.value = 16
phy.profile_inputs.tx_xtal_error_ppm.value = 10
phy.profile_inputs.xtal_frequency_hz.value = 39000000
# Common frame settings
PHY_COMMON_FRAME_INTERNAL(phy, model)
# Derivative PHYs
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-151
def PHY_Studio_915M_4GFSK_200Kbps_16p6K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M 4GFSK 200Kbps 16.6K', phy_name=phy_name)
# Start with the base function
self.Studio_4GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 200000
phy.profile_inputs.deviation.value = 16666 #Inner symbol deviation
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 915000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-65
def PHY_Studio_434M_4GFSK_50Kbps_8p33K(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='434M 4GFSK 50Kbps 8.33K', phy_name=phy_name)
# Start with the base function
self.Studio_4GFSK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 50000
phy.profile_inputs.deviation.value = 8330 #Inner symbol deviation
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 434000000
return phy
##########OOK PHYS##########
# Base Functions
def Studio_OOK_base(self, phy, model):
# Required Inputs
phy.profile_inputs.baudrate_tol_ppm.value = 1000
phy.profile_inputs.channel_spacing_hz.value = 1000000
phy.profile_inputs.deviation.value = 0
phy.profile_inputs.diff_encoding_mode.value = model.vars.diff_encoding_mode.var_enum.DISABLED
phy.profile_inputs.dsss_chipping_code.value = 0
phy.profile_inputs.dsss_len.value = 0
phy.profile_inputs.dsss_spreading_factor.value = 0
phy.profile_inputs.fsk_symbol_map.value = model.vars.fsk_symbol_map.var_enum.MAP0
phy.profile_inputs.modulation_type.value = model.vars.modulation_type.var_enum.OOK
phy.profile_inputs.preamble_pattern.value = 1
phy.profile_inputs.preamble_pattern_len.value = 2
phy.profile_inputs.preamble_length.value = 40
phy.profile_inputs.rx_xtal_error_ppm.value = 10
phy.profile_inputs.shaping_filter.value = model.vars.shaping_filter.var_enum.NONE
phy.profile_inputs.shaping_filter_param.value = 1.5
phy.profile_inputs.symbol_encoding.value = model.vars.symbol_encoding.var_enum.Manchester
phy.profile_inputs.syncword_0.value = 0xf68d
phy.profile_inputs.syncword_1.value = 0x0
phy.profile_inputs.syncword_length.value = 16
phy.profile_inputs.tx_xtal_error_ppm.value = 10
phy.profile_inputs.xtal_frequency_hz.value = 39000000
# Common frame settings
PHY_COMMON_FRAME_INTERNAL(phy, model)
# Derivative PHYs
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-147
def PHY_Studio_915M_OOK_120kbps(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M OOK 120kbps', phy_name=phy_name)
# Start with the base function
self.Studio_OOK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 120000
phy.profile_inputs.bandwidth_hz.value = 350000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 915000000
# Disable Manchester encoding for this PHY (baudrate not supported)
phy.profile_inputs.symbol_encoding.value = model.vars.symbol_encoding.var_enum.NRZ
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-152
def PHY_Studio_915M_OOK_4p8kbps(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M OOK 4.8kbps Manchester', phy_name=phy_name)
# Start with the base function
self.Studio_OOK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 4800
phy.profile_inputs.bandwidth_hz.value = 350000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 915000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-45
def PHY_Studio_433M_OOK_4p8kbps(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='433M OOK 4.8kbps Manchester', phy_name=phy_name)
# Start with the base function
self.Studio_OOK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 4800
phy.profile_inputs.bandwidth_hz.value = 350000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 433920000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-41
def PHY_Studio_315M_OOK_40kbps(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='315M OOK 40kbps Manchester', phy_name=phy_name)
# Start with the base function
self.Studio_OOK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 40000
phy.profile_inputs.bandwidth_hz.value = 350000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 315000000
return phy
# Owner: Casey Weltzin
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-36
def PHY_Studio_315M_OOK_4p8kbps(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='315M OOK 4.8kbps Manchester', phy_name=phy_name)
# Start with the base function
self.Studio_OOK_base(phy, model)
# Add data-rate specific parameters
phy.profile_inputs.bitrate.value = 4800
phy.profile_inputs.bandwidth_hz.value = 350000
# Add band-specific parameters
phy.profile_inputs.base_frequency_hz.value = 315000000
return phy
##########GMSK PHYS##########
# Base Functions
def Studio_GMSK_base(self, phy, model):
""" Modulation Type """
phy.profile_inputs.modulation_type.value = model.vars.modulation_type.var_enum.MSK
""" Symbol Mapping and Encoding """
phy.profile_inputs.fsk_symbol_map.value = model.vars.fsk_symbol_map.var_enum.MAP0
phy.profile_inputs.diff_encoding_mode.value = model.vars.diff_encoding_mode.var_enum.DISABLED
""" Baudrate """
phy.profile_inputs.baudrate_tol_ppm.value = 0
""" DSSS Parameters """
phy.profile_inputs.dsss_chipping_code.value = 0
phy.profile_inputs.dsss_len.value = 0
phy.profile_inputs.dsss_spreading_factor.value = 0
""" Shaping Filter Parameters """
phy.profile_inputs.shaping_filter.value = model.vars.shaping_filter.var_enum.Gaussian
phy.profile_inputs.shaping_filter_param.value = 0.5
""" Preamble Parameters """
phy.profile_inputs.preamble_pattern.value = 1
phy.profile_inputs.preamble_pattern_len.value = 2
phy.profile_inputs.preamble_length.value = 40
""" Syncword Parameters """
phy.profile_inputs.syncword_0.value = 0xf68d
phy.profile_inputs.syncword_1.value = 0x0
phy.profile_inputs.syncword_length.value = 16
""" XO Parameters """
phy.profile_inputs.xtal_frequency_hz.value = 39000000
phy.profile_inputs.rx_xtal_error_ppm.value = 10
phy.profile_inputs.tx_xtal_error_ppm.value = 10
# Common frame settings
PHY_COMMON_FRAME_INTERNAL(phy, model)
# Owner: Young-Joon Choi
# JIRA Link: https://jira.silabs.com/browse/PGOCELOTVALTEST-189
def PHY_Studio_868M_GMSK_500Kbps(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='868M GMSK 500Kbps', phy_name=phy_name)
# : Common base function for GMSK PHYs
self.Studio_GMSK_base(phy, model)
""" Frequency Planning """
phy.profile_inputs.base_frequency_hz.value = 868000000
phy.profile_inputs.channel_spacing_hz.value = 1000000
""" Datarate / Bandwidth """
phy.profile_inputs.bitrate.value = 500000
# : modulation index = 0.5 = 2 * deviation / data_rate for GMSK. Therefore, deviation = 0.25 * data_rate
phy.profile_inputs.deviation.value = 125000
return phy
        # : End PHY_Studio_868M_GMSK_500Kbps
| 40.02 | 123 | 0.70434 | 3,470 | 26,013 | 5.045245 | 0.088761 | 0.101102 | 0.161764 | 0.083167 | 0.904895 | 0.866796 | 0.862512 | 0.860456 | 0.833609 | 0.829611 | 0 | 0.061418 | 0.213855 | 26,013 | 649 | 124 | 40.081664 | 0.79467 | 0.230961 | 0 | 0.662207 | 0 | 0 | 0.034891 | 0 | 0 | 0 | 0.001844 | 0 | 0 | 1 | 0.103679 | false | 0.003344 | 0.006689 | 0 | 0.200669 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
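The GMSK base comment above states modulation index h = 2 * deviation / data_rate, so deviation = h * data_rate / 2. A small sketch of that arithmetic (the helper name is illustrative, not part of pyradioconfig):

```python
def gmsk_deviation(bitrate_bps, modulation_index=0.5):
    """Deviation implied by h = 2 * deviation / bitrate, i.e. deviation = h * bitrate / 2.

    For GMSK with h = 0.5 this reduces to deviation = 0.25 * bitrate.
    """
    return modulation_index * bitrate_bps / 2

print(gmsk_deviation(500000))  # 125000.0, matching PHY_Studio_868M_GMSK_500Kbps above
```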
9bcfa92c8b1ac16837502c3e6a257ded79f39ff4 | 45 | py | Python | bali/events/__init__.py | bali-framework/bali | d6d7024b6bed9291a93e3518d42d32250b524325 | [
"MIT"
] | 6 | 2022-02-24T15:34:37.000Z | 2022-03-30T02:04:47.000Z | bali/events/__init__.py | bali-framework/bali | d6d7024b6bed9291a93e3518d42d32250b524325 | [
"MIT"
] | null | null | null | bali/events/__init__.py | bali-framework/bali | d6d7024b6bed9291a93e3518d42d32250b524325 | [
"MIT"
] | 1 | 2022-02-23T06:07:14.000Z | 2022-02-23T06:07:14.000Z | from .dispatch import *
from .event import *
| 15 | 23 | 0.733333 | 6 | 45 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 24 | 22.5 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9bdb62cf0730b44dbf7eb08ea780f009891b9476 | 2,438 | py | Python | SULI/tests/test_env.py | jklynch/bad_seed | 88ca597f2786e3a9c9aec3471b181e75cce2f4dd | [
"BSD-3-Clause"
] | null | null | null | SULI/tests/test_env.py | jklynch/bad_seed | 88ca597f2786e3a9c9aec3471b181e75cce2f4dd | [
"BSD-3-Clause"
] | null | null | null | SULI/tests/test_env.py | jklynch/bad_seed | 88ca597f2786e3a9c9aec3471b181e75cce2f4dd | [
"BSD-3-Clause"
] | null | null | null | import pprint
from SULI.src.tensorForceEnv import CustomEnvironment
def test_done():
env = CustomEnvironment()
actions = env.actions()
print(f"actions: {actions}")
assert env.extraCounter == 3
print(f"extra Count: {env.extraCounter}")
states, reward, done = env.execute(actions=0)
print(f"states: {states}")
print(f"reward: {reward}")
print(f"done: {done}")
assert done is False
states, reward, done = env.execute(actions=0)
assert done is False
states, reward, done = env.execute(actions=0)
assert done is False
states, reward, done = env.execute(actions=0)
assert done is False
states, reward, done = env.execute(actions=0)
assert done is False
states, reward, done = env.execute(actions=0)
assert done is False
states, reward, done = env.execute(actions=0)
assert done is True
# states, reward, done = env.execute(actions=0)
print(f"states: {states}")
print(f"reward: {reward}")
print(f"done: {done}")
# assert done is False
def test_reset():
env = CustomEnvironment()
assert env.extraCounter == 3
assert env.agent_pos == 3
assert len(env.GRID) == env.SAMPLES
assert len(env.GRID[0]) == env.TRIALS
env.execute(actions=0)
env.reset()
assert env.extraCounter == 3
assert env.agent_pos == 3
assert len(env.GRID) == env.SAMPLES
assert len(env.GRID[0]) == env.TRIALS
env.execute(actions=0)
env.reset()
assert env.extraCounter == 3
assert env.agent_pos == 3
assert len(env.GRID) == env.SAMPLES
assert len(env.GRID[0]) == env.TRIALS
def test_seven_steps():
env = CustomEnvironment()
state_reward_done = []
for step in range(7):
state_reward_done.append(env.execute(actions=0))
pprint.pprint(state_reward_done)
assert env.extraCounter == 10
assert state_reward_done[6][2] is True
def test_stepthru_reset():
env = CustomEnvironment()
assert env.agent_pos == env.startingPoint
assert env.extraCounter == env.startingPoint
state_reward_done = []
for step in range(7):
state_reward_done.append(env.execute(actions=0))
env.reset()
for step in range(7):
state_reward_done.append(env.execute(actions=0))
pprint.pprint(state_reward_done)
assert env.extraCounter == 10
assert state_reward_done[6][2] is True
assert state_reward_done[-1][2] is True | 22.785047 | 56 | 0.661198 | 337 | 2,438 | 4.694362 | 0.142433 | 0.11378 | 0.139697 | 0.147914 | 0.788875 | 0.75158 | 0.746523 | 0.746523 | 0.746523 | 0.746523 | 0 | 0.018888 | 0.218212 | 2,438 | 107 | 57 | 22.785047 | 0.811123 | 0.027071 | 0 | 0.808824 | 0 | 0 | 0.057806 | 0 | 0 | 0 | 0 | 0 | 0.397059 | 1 | 0.058824 | false | 0 | 0.029412 | 0 | 0.088235 | 0.161765 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
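The tests above pin down an execute/reset contract: the counter starts at 3, `done` first fires on the seventh `execute` (counter reaching 10), and `reset` restores the starting state. A toy stand-in sketching that contract (`CounterEnv` is illustrative, not the real `CustomEnvironment`):

```python
class CounterEnv:
    """Toy stand-in for the environment under test: counter starts at 3,
    done fires once the counter reaches 10 (seventh step)."""
    START = 3
    LIMIT = 10

    def __init__(self):
        self.counter = self.START

    def execute(self, actions=0):
        # Each step advances the counter; done once the limit is reached.
        self.counter += 1
        done = self.counter >= self.LIMIT
        return None, 0, done

    def reset(self):
        self.counter = self.START

env = CounterEnv()
flags = [env.execute(actions=0)[2] for _ in range(7)]
print(flags)  # [False, False, False, False, False, False, True]
```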

# boucanpy/core/zone/data.py @ bbhunter/boucanpy (505eaf568d8f20d0d889869fdb9583128064696b, MIT)
from boucanpy.core.types import ZoneData

# tests/test_generalize.py @ darrenreger/zEpid (ac8f88859c6b009a744d33e93b7222edbd017d4c, MIT)
import pytest
import pandas as pd
import numpy.testing as npt
import zepid as ze
from zepid.causal.generalize import IPSW, GTransportFormula, AIPSW
from zepid.causal.ipw import IPTW

@pytest.fixture
def df_r():
    df = ze.load_generalize_data(False)
    df['W_sq'] = df['W'] ** 2
    return df


@pytest.fixture
def df_c():
    df = ze.load_generalize_data(True)
    df['W_sq'] = df['W'] ** 2
    return df


@pytest.fixture
def df_iptw(df_c):
    dfs = df_c.loc[df_c['S'] == 1].copy()
    ipt = IPTW(dfs, treatment='A', outcome='Y')
    ipt.treatment_model('L', stabilized=True, print_results=False)
    dfs['iptw'] = ipt.iptw
    return pd.concat([dfs, df_c.loc[df_c['S'] == 0]], ignore_index=True, sort=False)

class TestIPSW:

    def test_stabilize_error(self, df_c):
        ipsw = IPSW(df_c, exposure='A', outcome='Y', selection='S', stabilized=False)
        with pytest.raises(ValueError):
            ipsw.regression_models('L + W_sq', model_numerator='W', print_results=False)

    def test_no_model_error(self, df_c):
        ipsw = IPSW(df_c, exposure='A', outcome='Y', selection='S', generalize=True)
        with pytest.raises(ValueError):
            ipsw.fit()

    def test_generalize_unstabilized(self, df_r):
        ipsw = IPSW(df_r, exposure='A', outcome='Y', selection='S', stabilized=False)
        ipsw.regression_models('L + W_sq', print_results=False)
        ipsw.fit()
        npt.assert_allclose(ipsw.risk_difference, 0.046809, atol=1e-5)
        npt.assert_allclose(ipsw.risk_ratio, 1.13905, atol=1e-4)

    def test_generalize_stabilized(self, df_r):
        ipsw = IPSW(df_r, exposure='A', outcome='Y', selection='S', stabilized=True)
        ipsw.regression_models('L + W_sq', print_results=False)
        ipsw.fit()
        npt.assert_allclose(ipsw.risk_difference, 0.046809, atol=1e-5)
        npt.assert_allclose(ipsw.risk_ratio, 1.13905, atol=1e-4)

    def test_transport_unstabilized(self, df_r):
        ipsw = IPSW(df_r, exposure='A', outcome='Y', selection='S', stabilized=False, generalize=False)
        ipsw.regression_models('L + W_sq', print_results=False)
        ipsw.fit()
        npt.assert_allclose(ipsw.risk_difference, 0.034896, atol=1e-5)
        npt.assert_allclose(ipsw.risk_ratio, 1.097139, atol=1e-4)

    def test_transport_stabilized(self, df_r):
        ipsw = IPSW(df_r, exposure='A', outcome='Y', selection='S', stabilized=True, generalize=False)
        ipsw.regression_models('L + W_sq', print_results=False)
        ipsw.fit()
        npt.assert_allclose(ipsw.risk_difference, 0.034896, atol=1e-5)
        npt.assert_allclose(ipsw.risk_ratio, 1.097139, atol=1e-4)

    def test_generalize_iptw(self, df_iptw):
        ipsw = IPSW(df_iptw, exposure='A', outcome='Y', selection='S', generalize=True, weights='iptw')
        ipsw.regression_models('L + W + W_sq', print_results=False)
        ipsw.fit()
        npt.assert_allclose(ipsw.risk_difference, 0.055034, atol=1e-5)
        npt.assert_allclose(ipsw.risk_ratio, 1.167213, atol=1e-4)

    def test_transport_iptw(self, df_iptw):
        ipsw = IPSW(df_iptw, exposure='A', outcome='Y', selection='S', generalize=False, weights='iptw')
        ipsw.regression_models('L + W + W_sq', print_results=False)
        ipsw.fit()
        npt.assert_allclose(ipsw.risk_difference, 0.047296, atol=1e-5)
        npt.assert_allclose(ipsw.risk_ratio, 1.1372, atol=1e-4)

class TestGTransport:

    def test_no_model_error(self, df_c):
        gtf = GTransportFormula(df_c, exposure='A', outcome='Y', selection='S', generalize=True)
        with pytest.raises(ValueError):
            gtf.fit()

    def test_generalize(self, df_r):
        gtf = GTransportFormula(df_r, exposure='A', outcome='Y', selection='S', generalize=True)
        gtf.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        gtf.fit()
        npt.assert_allclose(gtf.risk_difference, 0.064038, atol=1e-5)
        npt.assert_allclose(gtf.risk_ratio, 1.203057, atol=1e-4)

    def test_transport(self, df_r):
        gtf = GTransportFormula(df_r, exposure='A', outcome='Y', selection='S', generalize=False)
        gtf.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        gtf.fit()
        npt.assert_allclose(gtf.risk_difference, 0.058573, atol=1e-5)
        npt.assert_allclose(gtf.risk_ratio, 1.176615, atol=1e-4)

    def test_generalize_conf(self, df_c):
        gtf = GTransportFormula(df_c, exposure='A', outcome='Y', selection='S', generalize=True)
        gtf.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        gtf.fit()
        npt.assert_allclose(gtf.risk_difference, 0.048949, atol=1e-5)
        npt.assert_allclose(gtf.risk_ratio, 1.149556, atol=1e-4)

    def test_transport_conf(self, df_c):
        gtf = GTransportFormula(df_c, exposure='A', outcome='Y', selection='S', generalize=False)
        gtf.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        gtf.fit()
        npt.assert_allclose(gtf.risk_difference, 0.042574, atol=1e-5)
        npt.assert_allclose(gtf.risk_ratio, 1.124257, atol=1e-4)

class TestAIPSW:

    def test_no_model_error(self, df_c):
        aipw = AIPSW(df_c, exposure='A', outcome='Y', selection='S', generalize=True)
        with pytest.raises(ValueError):
            aipw.fit()

        aipw.weight_model('L', print_results=False)
        with pytest.raises(ValueError):
            aipw.fit()

        aipw = AIPSW(df_c, exposure='A', outcome='Y', selection='S', generalize=True)
        aipw.outcome_model('A + L')
        with pytest.raises(ValueError):
            aipw.fit()

    def test_generalize(self, df_r):
        aipw = AIPSW(df_r, exposure='A', outcome='Y', selection='S', generalize=True)
        aipw.weight_model('L + W_sq', print_results=False)
        aipw.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        aipw.fit()
        npt.assert_allclose(aipw.risk_difference, 0.061382, atol=1e-5)
        npt.assert_allclose(aipw.risk_ratio, 1.193161, atol=1e-4)

    def test_transport(self, df_r):
        aipw = AIPSW(df_r, exposure='A', outcome='Y', selection='S', generalize=False)
        aipw.weight_model('L + W_sq', print_results=False)
        aipw.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        aipw.fit()
        npt.assert_allclose(aipw.risk_difference, 0.05479, atol=1e-5)
        npt.assert_allclose(aipw.risk_ratio, 1.16352, atol=1e-4)

    def test_generalize_conf(self, df_iptw):
        aipw = AIPSW(df_iptw, exposure='A', outcome='Y', selection='S', generalize=True, weights='iptw')
        aipw.weight_model('L + W_sq', print_results=False)
        aipw.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        aipw.fit()
        npt.assert_allclose(aipw.risk_difference, 0.048129, atol=1e-5)
        npt.assert_allclose(aipw.risk_ratio, 1.146787, atol=1e-4)

    def test_transport_conf(self, df_iptw):
        aipw = AIPSW(df_iptw, exposure='A', outcome='Y', selection='S', generalize=False, weights='iptw')
        aipw.weight_model('L + W_sq', print_results=False)
        aipw.outcome_model('A + L + L:A + W_sq + W_sq:A + W_sq:A:L', print_results=False)
        aipw.fit()
        npt.assert_allclose(aipw.risk_difference, 0.041407, atol=1e-5)
        npt.assert_allclose(aipw.risk_ratio, 1.120556, atol=1e-4)
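Every assertion above compares the estimators' `risk_difference` and `risk_ratio` attributes to reference values. As a reminder of the standard epidemiological definitions these attributes target (a sketch of the definitions only, not of zEpid's estimation machinery), for risks r1 and r0 in the exposed and unexposed groups:

```python
def risk_difference(r1, r0):
    # absolute difference in outcome risk between exposed (r1) and unexposed (r0)
    return r1 - r0


def risk_ratio(r1, r0):
    # relative risk of the outcome comparing exposed to unexposed
    return r1 / r0
```

A risk difference of 0 and a risk ratio of 1 both mean no association; the tolerances passed to `npt.assert_allclose` (`atol=1e-5` and `1e-4`) just allow for floating-point noise around the reference estimates.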

# allure-pytest/test/status/base_step_status_test.py @ Sup3rGeo/allure-python (c57abf3a13c0472f358551726c2583c45ec6f386, Apache-2.0)
import pytest

def test_broken_step():
    """
    >>> allure_report = getfixture('allure_report')
    >>> assert_that(allure_report,
    ...             has_test_case('test_broken_step',
    ...                           with_status('broken'),
    ...                           has_status_details(with_message_contains("ZeroDivisionError"),
    ...                                              with_trace_contains("def test_broken_step():")
    ...                                              ),
    ...                           has_step('Step',
    ...                                    with_status('broken'),
    ...                                    has_status_details(with_message_contains("ZeroDivisionError"),
    ...                                                       with_trace_contains("test_broken_step")
    ...                                                       )
    ...                                    )
    ...                           )
    ...             )
    """
    with pytest.allure.step('Step'):
        raise ZeroDivisionError


def test_pytest_fail_in_step():
    """
    >>> allure_report = getfixture('allure_report')
    >>> assert_that(allure_report,
    ...             has_test_case('test_pytest_fail_in_step',
    ...                           with_status('failed'),
    ...                           has_status_details(with_message_contains("Failed: <Failed instance>"),
    ...                                              with_trace_contains("def test_pytest_fail_in_step():")
    ...                                              ),
    ...                           has_step('Step',
    ...                                    with_status('failed'),
    ...                                    has_status_details(with_message_contains("Failed: <Failed instance>"),
    ...                                                       with_trace_contains("test_pytest_fail_in_step")
    ...                                                       )
    ...                                    )
    ...                           )
    ...             )
    """
    with pytest.allure.step('Step'):
        pytest.fail()

# flux/__init__.py @ omergertel/flux (c5d10d505acbe866e199a660a968b7db4ece7cef, BSD-3-Clause)
from .__version__ import __version__
from .timeline import Timeline
from .gevent_timeline import GeventTimeline
from . import current_timeline

# applications/FemToDemApplication/custom_problemtype/FemDemKratos.gid/KratosFemDemApplication.py @ lcirrott/Kratos (6808e85ec4095435fc7a3c16caba528abd6517b1, BSD-4-Clause)
import KratosMultiphysics
import KratosMultiphysics.SolidMechanicsApplication
import KratosMultiphysics.FemToDemApplication
import MainFemDem
model = KratosMultiphysics.Model()
MainFemDem.FEM_Solution(model).Run()

# foowise/channels/__init__.py @ ben-schulz/foowise (a84854f5444080d86d1bb295e9c00c7ea4da3343, MIT)
from . import Cla
from . import Infomorphism
from . import Invariant
from . import DistSys
from . import Channel
from . import Index

# model_zoo/__init__.py @ zhfeing/graduation-project (a84dd298a2fce0ef9a0cd41d56371cea7949b2c7, MIT)
print("[info]: load model zoo successfully")

# genda/transcripts/__init__.py @ jeffhsu3/genda (a884a8cefcf4994246c6d73b8471d71bfe404519, BSD-3-Clause)
from .transcripts_utils import *
from .gene import *
from .transcript import Transcript

# SL-GCN/model/__init__.py @ SnorlaxSE/CVPR21Chal-SLR (a8a7cbd11f3ca285cf44c3bb5858e8d75909a149, CC0-1.0)
from . import decouple_gcn_attn
from . import dropSke
from . import dropT

# TIPS_Data_Analysis/TIPS_Data_Analysis/TEC_Exam.py @ rnsheehan/TIPS_Data_Analysis (a8b247763ad7e8fa69ddd19c75247c56a6185bc6, MIT)
import os
import glob
import re
import sys # access system routines
import math
import scipy
import numpy as np
import matplotlib.pyplot as plt
import Common
import Plotting
labs = ['r*-', 'g^-', 'b+-', 'md-', 'cp-', 'yh-', 'ks-' ] # plot labels
labs_lins = ['r-', 'g-', 'b-', 'm-', 'c-', 'y-', 'k-' ] # plot labels
labs_dashed = ['r--', 'g--', 'b--', 'm--', 'c--', 'y--', 'k--' ] # plot labels
labs_pts = ['r*', 'g^', 'b+', 'md', 'cp', 'yh', 'ks' ] # plot labels
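`compute_power` is called throughout `Plot_TEC_Exam_Results` below but is defined elsewhere in the module. Judging by its inputs (TEC voltage in V, TEC current in A) and the `'$P_{TEC}$ (mW)'` axis label on the power plots, it is presumably an element-wise P = V * I conversion to milliwatts; this is a hedged sketch of that assumption, not the module's actual definition:

```python
def compute_power(voltages, currents):
    # element-wise electrical power P = V * I, converted from W to mW
    # (assumed implementation; the real compute_power lives elsewhere in this module)
    return [1000.0 * v * c for v, c in zip(voltages, currents)]
```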
def Plot_TEC_Exam_Results():
    # Device TIPS-2 exhibits unstable temperature once the current across its DFB and SOA sections increases to 120 mA, with the EAM bias held at 0 V
    # The same behaviour is not observed in TIPS-1; suspect that temperature control on TIPS-2 is not as good as it should be
    # However, I had to confirm that the problem did not lie with the TEC units
    # I forced a volt-limit error on TIPS-2 using both TEC units
    # Data shows that the problem is with the device and not the TEC unit; TIPS-1 operates with lower power and does not display unstable temperature
    # By unstable temperature I mean that the voltage across the TEC increases dramatically and the temperature is no longer fixed to the set point
    # For more details see Tyndall Notebook 2715
    # R. Sheehan 15 - 8 - 2017

    try:
        DATA_HOME = "C:/Users/Robert/Research/EU_TIPS/Data/Exp-2/TEC_Exam/"

        if os.path.isdir(DATA_HOME):
            os.chdir(DATA_HOME)

            Ivals = [0, 50, 100, 120, 140] # current (mA) across DFB and SOA sections, Veam = 0 V
            # Original Configuration
            Volt_1_T_20_TEC_125 = [0.16, 0.26, 0.5, 0.62, 0.77] # TEC output voltage across TIPS1 using TEC-125 at T = 20
            Curr_1_T_20_TEC_125 = [0.06, 0.11, 0.22, 0.28, 0.34] # TEC output current across TIPS1 using TEC-125 at T = 20
            Power_1_T_20_TEC_125 = compute_power(Volt_1_T_20_TEC_125, Curr_1_T_20_TEC_125) # TEC output power across TIPS1 using TEC-125 at T = 20

            Volt_1_T_25_TEC_125 = [0.06, 0.08, 0.28, 0.39, 0.53] # TEC output voltage across TIPS1 using TEC-125 at T = 25
            Curr_1_T_25_TEC_125 = [0.02, 0.04, 0.13, 0.18, 0.24] # TEC output current across TIPS1 using TEC-125 at T = 25
            Power_1_T_25_TEC_125 = compute_power(Volt_1_T_25_TEC_125, Curr_1_T_25_TEC_125) # TEC output power across TIPS1 using TEC-125 at T = 25

            Volt_1_T_20_TEC_125_Rpt = [0.13, 0.27, 0.51, 0.64, 0.79] # TEC output voltage across TIPS1 using TEC-125 at T = 20 Repeat
            Curr_1_T_20_TEC_125_Rpt = [0.05, 0.11, 0.22, 0.28, 0.35] # TEC output current across TIPS1 using TEC-125 at T = 20 Repeat
            Power_1_T_20_TEC_125_Rpt = compute_power(Volt_1_T_20_TEC_125_Rpt, Curr_1_T_20_TEC_125_Rpt) # TEC output power across TIPS1 using TEC-125 at T = 20 Repeat

            Volt_1_T_25_TEC_125_Rpt = [0.09, 0.04, 0.25, 0.37, 0.49] # TEC output voltage across TIPS1 using TEC-125 at T = 25 Repeat
            Curr_1_T_25_TEC_125_Rpt = [0.04, 0.03, 0.12, 0.17, 0.23] # TEC output current across TIPS1 using TEC-125 at T = 25 Repeat
            Power_1_T_25_TEC_125_Rpt = compute_power(Volt_1_T_25_TEC_125_Rpt, Curr_1_T_25_TEC_125_Rpt) # TEC output power across TIPS1 using TEC-125 at T = 25 Repeat

            Volt_2_T_20_TEC_124 = [0.38, 2.25, 3.49, 7, 7] # TEC output voltage across TIPS2 using TEC-124 at T = 20
            Curr_2_T_20_TEC_124 = [0.05, 0.13, 0.37, 4, 4] # TEC output current across TIPS2 using TEC-124 at T = 20
            Power_2_T_20_TEC_124 = compute_power(Volt_2_T_20_TEC_124, Curr_2_T_20_TEC_124) # TEC output power across TIPS2 using TEC-124 at T = 20

            Volt_2_T_25_TEC_124 = [0.16, 0.2, 0.82, 1.24, 1.92] # TEC output voltage across TIPS2 using TEC-124 at T = 25
            Curr_2_T_25_TEC_124 = [0.02, 0.03, 0.13, 0.2, 0.31] # TEC output current across TIPS2 using TEC-124 at T = 25
            Power_2_T_25_TEC_124 = compute_power(Volt_2_T_25_TEC_124, Curr_2_T_25_TEC_124) # TEC output power across TIPS2 using TEC-124 at T = 25

            Volt_2_T_20_TEC_124_Rpt = [1.02, 1.5, 2.5, 7, 7] # TEC output voltage across TIPS2 using TEC-124 at T = 20 Repeat
            Curr_2_T_20_TEC_124_Rpt = [0.04, 0.12, 0.29, 4, 4] # TEC output current across TIPS2 using TEC-124 at T = 20 Repeat
            Power_2_T_20_TEC_124_Rpt = compute_power(Volt_2_T_20_TEC_124_Rpt, Curr_2_T_20_TEC_124_Rpt) # TEC output power across TIPS2 using TEC-124 at T = 20 Repeat

            Volt_2_T_25_TEC_124_Rpt = [0.31, 0.2, 1.11, 1.6, 2.47] # TEC output voltage across TIPS2 using TEC-124 at T = 25 Repeat
            Curr_2_T_25_TEC_124_Rpt = [0.03, 0.02, 0.13, 0.2, 0.32] # TEC output current across TIPS2 using TEC-124 at T = 25 Repeat
            Power_2_T_25_TEC_124_Rpt = compute_power(Volt_2_T_25_TEC_124_Rpt, Curr_2_T_25_TEC_124_Rpt) # TEC output power across TIPS2 using TEC-124 at T = 25 Repeat

            # Switched Configuration
            Volt_1_T_20_TEC_124 = [0.14, 0.27, 0.5, 0.63, 0.78] # TEC output voltage across TIPS1 using TEC-124 at T = 20
            Curr_1_T_20_TEC_124 = [0.05, 0.11, 0.22, 0.28, 0.35] # TEC output current across TIPS1 using TEC-124 at T = 20
            Power_1_T_20_TEC_124 = compute_power(Volt_1_T_20_TEC_124, Curr_1_T_20_TEC_124) # TEC output power across TIPS1 using TEC-124 at T = 20

            Volt_1_T_25_TEC_124 = [0.08, 0.07, 0.27, 0.38, 0.51] # TEC output voltage across TIPS1 using TEC-124 at T = 25
            Curr_1_T_25_TEC_124 = [0.03, 0.04, 0.13, 0.18, 0.24] # TEC output current across TIPS1 using TEC-124 at T = 25
            Power_1_T_25_TEC_124 = compute_power(Volt_1_T_25_TEC_124, Curr_1_T_25_TEC_124) # TEC output power across TIPS1 using TEC-124 at T = 25

            Volt_2_T_20_TEC_125 = [0.37, 0.91, 1.85, 7, 7] # TEC output voltage across TIPS2 using TEC-125 at T = 20
            Curr_2_T_20_TEC_125 = [0.04, 0.12, 0.28, 4, 4] # TEC output current across TIPS2 using TEC-125 at T = 20
            Power_2_T_20_TEC_125 = compute_power(Volt_2_T_20_TEC_125, Curr_2_T_20_TEC_125) # TEC output power across TIPS2 using TEC-125 at T = 20

            Volt_2_T_25_TEC_125 = [0.18, 0.22, 0.95, 1.41, 2.15] # TEC output voltage across TIPS2 using TEC-125 at T = 25
            Curr_2_T_25_TEC_125 = [0.02, 0.03, 0.14, 0.21, 0.33] # TEC output current across TIPS2 using TEC-125 at T = 25
            Power_2_T_25_TEC_125 = compute_power(Volt_2_T_25_TEC_125, Curr_2_T_25_TEC_125) # TEC output power across TIPS2 using TEC-125 at T = 25
            args = Plotting.plot_arguments()

            # Voltage T = 20
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Volt_1_T_20_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Volt_1_T_20_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Volt_2_T_20_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Volt_2_T_20_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');

            args.loud = False
            args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$V_{TEC}$ (V)'
            args.plt_range = [0.0, 145, 0, 4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$V_{TEC}$ variation with DFB and SOA current at T = 20 (C)'
            args.fig_name = 'Voltage_T_20.png'

            Plotting.plot_multiple_curves(plt_data, args)
            # Voltage T = 20 Repeat
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Volt_1_T_20_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Volt_1_T_20_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Volt_1_T_20_TEC_125_Rpt]); labels.append('TIPS-1, TEC = 125 Rpt'); mark_list.append('r^');
            plt_data.append([Ivals, Volt_2_T_20_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Volt_2_T_20_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');
            plt_data.append([Ivals, Volt_2_T_20_TEC_124_Rpt]); labels.append('TIPS-2, TEC = 124 Rpt'); mark_list.append('g^');

            args.loud = False
            args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$V_{TEC}$ (V)'
            args.plt_range = [0.0, 145, 0, 4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$V_{TEC}$ variation with DFB and SOA current at T = 20 (C)'
            args.fig_name = 'Voltage_T_20_Rpt.png'

            Plotting.plot_multiple_curves(plt_data, args)
            # Current T = 20
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Curr_1_T_20_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Curr_1_T_20_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Curr_2_T_20_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Curr_2_T_20_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$I_{TEC}$ (A)'
            args.plt_range = [0.0, 145, 0, 0.4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$I_{TEC}$ variation with DFB and SOA current at T = 20 (C)'
            args.fig_name = 'Current_T_20.png'

            Plotting.plot_multiple_curves(plt_data, args)

            # Current T = 20 Repeat
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Curr_1_T_20_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Curr_1_T_20_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Curr_1_T_20_TEC_125_Rpt]); labels.append('TIPS-1, TEC = 125 Rpt'); mark_list.append('r^');
            plt_data.append([Ivals, Curr_2_T_20_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Curr_2_T_20_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');
            plt_data.append([Ivals, Curr_2_T_20_TEC_124_Rpt]); labels.append('TIPS-2, TEC = 124 Rpt'); mark_list.append('g^');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$I_{TEC}$ (A)'
            args.plt_range = [0.0, 145, 0, 0.4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$I_{TEC}$ variation with DFB and SOA current at T = 20 (C)'
            args.fig_name = 'Current_T_20_Rpt.png'

            Plotting.plot_multiple_curves(plt_data, args)
            # Power T = 20
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Power_1_T_20_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Power_1_T_20_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Power_2_T_20_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Power_2_T_20_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$P_{TEC}$ (mW)'
            args.plt_range = [0.0, 145, 0, 1000]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$P_{TEC}$ variation with DFB and SOA current at T = 20 (C)'
            args.fig_name = 'Power_T_20.png'

            Plotting.plot_multiple_curves(plt_data, args)

            # Power T = 20 Rpt
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Power_1_T_20_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Power_1_T_20_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Power_1_T_20_TEC_125_Rpt]); labels.append('TIPS-1, TEC = 125 Rpt'); mark_list.append('r^');
            plt_data.append([Ivals, Power_2_T_20_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Power_2_T_20_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');
            plt_data.append([Ivals, Power_2_T_20_TEC_124_Rpt]); labels.append('TIPS-2, TEC = 124 Rpt'); mark_list.append('g^');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$P_{TEC}$ (mW)'
            args.plt_range = [0.0, 145, 0, 1000]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$P_{TEC}$ variation with DFB and SOA current at T = 20 (C)'
            args.fig_name = 'Power_T_20_Rpt.png'

            Plotting.plot_multiple_curves(plt_data, args)
            # Voltage T = 25
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Volt_1_T_25_TEC_125]); labels.append('TIPS-1, T = 25, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Volt_1_T_25_TEC_124]); labels.append('TIPS-1, T = 25, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Volt_2_T_25_TEC_124]); labels.append('TIPS-2, T = 25, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Volt_2_T_25_TEC_125]); labels.append('TIPS-2, T = 25, TEC = 125'); mark_list.append('g--');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$V_{TEC}$ (V)'
            args.plt_range = [0.0, 145, 0, 4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$V_{TEC}$ variation with DFB and SOA current at T = 25 (C)'
            args.fig_name = 'Voltage_T_25.png'

            Plotting.plot_multiple_curves(plt_data, args)
            # Voltage T = 25 Repeat
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Volt_1_T_25_TEC_125]); labels.append('TIPS-1, T = 25, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Volt_1_T_25_TEC_124]); labels.append('TIPS-1, T = 25, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Volt_1_T_25_TEC_125_Rpt]); labels.append('TIPS-1, T = 25, TEC = 125 Rpt'); mark_list.append('r^');
            plt_data.append([Ivals, Volt_2_T_25_TEC_124]); labels.append('TIPS-2, T = 25, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Volt_2_T_25_TEC_125]); labels.append('TIPS-2, T = 25, TEC = 125'); mark_list.append('g--');
            plt_data.append([Ivals, Volt_2_T_25_TEC_124_Rpt]); labels.append('TIPS-2, T = 25, TEC = 124 Rpt'); mark_list.append('g^');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$V_{TEC}$ (V)'
            args.plt_range = [0.0, 145, 0, 4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$V_{TEC}$ variation with DFB and SOA current at T = 25 (C)'
            args.fig_name = 'Voltage_T_25_Rpt.png'

            Plotting.plot_multiple_curves(plt_data, args)
            # Current T = 25
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Curr_1_T_25_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Curr_1_T_25_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Curr_2_T_25_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Curr_2_T_25_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$I_{TEC}$ (A)'
            args.plt_range = [0.0, 145, 0, 0.4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$I_{TEC}$ variation with DFB and SOA current at T = 25 (C)'
            args.fig_name = 'Current_T_25.png'

            Plotting.plot_multiple_curves(plt_data, args)

            # Current T = 25 Repeat
            plt_data = []; labels = []; mark_list = []
            plt_data.append([Ivals, Curr_1_T_25_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
            plt_data.append([Ivals, Curr_1_T_25_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
            plt_data.append([Ivals, Curr_1_T_25_TEC_125_Rpt]); labels.append('TIPS-1, TEC = 125 Rpt'); mark_list.append('r^');
            plt_data.append([Ivals, Curr_2_T_25_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
            plt_data.append([Ivals, Curr_2_T_25_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');
            plt_data.append([Ivals, Curr_2_T_25_TEC_124_Rpt]); labels.append('TIPS-2, TEC = 124 Rpt'); mark_list.append('g^');

            #args.loud = True
            #args.x_label = '$I_{dev}$ (mA)'
            args.y_label = '$I_{TEC}$ (A)'
            args.plt_range = [0.0, 145, 0, 0.4]
            args.crv_lab_list = labels
            args.mrk_list = mark_list
            args.plt_title = '$I_{TEC}$ variation with DFB and SOA current at T = 25 (C)'
            args.fig_name = 'Current_T_25_Rpt.png'

            Plotting.plot_multiple_curves(plt_data, args)
# Power T = 25
plt_data = []; labels = []; mark_list = []
plt_data.append([Ivals, Power_1_T_25_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
plt_data.append([Ivals, Power_1_T_25_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
plt_data.append([Ivals, Power_2_T_25_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
plt_data.append([Ivals, Power_2_T_25_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');
#args.loud = True
#args.x_label = '$I_{dev}$ (mA)'
args.y_label = '$P_{TEC}$ (mW)'
args.plt_range = [0.0, 145, 0, 1000]
args.crv_lab_list = labels
args.mrk_list = mark_list
args.plt_title = '$P_{TEC}$ variation with DFB and SOA current at T = 25 (C)'
args.fig_name = 'Power_T_25.png'
Plotting.plot_multiple_curves(plt_data, args)
# Power T = 25 Repeat
plt_data = []; labels = []; mark_list = []
plt_data.append([Ivals, Power_1_T_25_TEC_125]); labels.append('TIPS-1, TEC = 125'); mark_list.append('r*-');
plt_data.append([Ivals, Power_1_T_25_TEC_124]); labels.append('TIPS-1, TEC = 124'); mark_list.append('r--');
plt_data.append([Ivals, Power_1_T_25_TEC_125_Rpt]); labels.append('TIPS-1, TEC = 125 Rpt'); mark_list.append('r^');
plt_data.append([Ivals, Power_2_T_25_TEC_124]); labels.append('TIPS-2, TEC = 124'); mark_list.append('g*-');
plt_data.append([Ivals, Power_2_T_25_TEC_125]); labels.append('TIPS-2, TEC = 125'); mark_list.append('g--');
plt_data.append([Ivals, Power_2_T_25_TEC_124_Rpt]); labels.append('TIPS-2, TEC = 124 Rpt'); mark_list.append('g^');
#args.loud = True
#args.x_label = '$I_{dev}$ (mA)'
args.y_label = '$P_{TEC}$ (mW)'
args.plt_range = [0.0, 145, 0, 1000]
args.crv_lab_list = labels
args.mrk_list = mark_list
args.plt_title = '$P_{TEC}$ variation with DFB and SOA current at T = 25 (C)'
args.fig_name = 'Power_T_25.png'
Plotting.plot_multiple_curves(plt_data, args)
else:
raise Exception
except Exception:
print("Error: TEC_Exam.Plot_TEC_Exam_Results")


def compute_power(voltage, current):
    # compute the power in mW from input voltage and current values
    try:
        c1 = voltage is not None
        c2 = current is not None
        # only compare lengths once both inputs are known to be non-None
        c3 = c1 and c2 and len(voltage) == len(current)
        if c1 and c2 and c3:
            power = []
            for i in range(len(voltage)):
                power.append(1000 * voltage[i] * current[i])
            return power
        else:
            raise Exception
    except Exception:
        print("Error: TEC_Exam.compute_power()")
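For reference, the elementwise calculation `compute_power` performs can be sketched as a standalone snippet (a minimal sketch with hypothetical values, assuming voltages in volts and currents in amperes, giving power in mW):

```python
def compute_power_sketch(voltage, current):
    """Elementwise electrical power in mW from voltage (V) and current (A) lists."""
    if voltage is None or current is None or len(voltage) != len(current):
        raise ValueError("voltage and current must be equal-length sequences")
    # P (mW) = 1000 * V * I
    return [1000.0 * v * i for v, i in zip(voltage, current)]


# e.g. 1.5 V at 0.2 A dissipates 300 mW
powers = compute_power_sketch([1.5, 3.0], [0.2, 0.1])
print(powers)  # [300.0, 300.0]
```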
# ---- syn/base_utils/tests/delete1.py | repo: mbodenhamer/syn | license: MIT ----
from syn.base_utils import harvest_metadata, delete
with delete(harvest_metadata, delete):
    harvest_metadata('metadata1.yml')
# ---- crawler/tests/test_models.py | repo: bkosawa/admin-recommendation | license: Apache-2.0 ----
import numpy as np
from django.test.testcases import SimpleTestCase
from scipy.sparse import dok_matrix
from crawler.models import convert_from_sparse_array, convert_from_dict_string
class UtilityMatrixTest(SimpleTestCase):
    def test_utility_matrix_serialization_return_not_none(self):
        sparse_array = dok_matrix((10, 100), dtype=np.int8)
        serialized_array = convert_from_sparse_array(sparse_array)
        self.assertIsNotNone(serialized_array)

    def test_utility_matrix_empty_array_return_an_empty_string(self):
        expected_serialized_array = '{}'
        sparse_array = dok_matrix((1, 100), dtype=np.int8)
        serialized_array = convert_from_sparse_array(sparse_array)
        self.assertEqual(serialized_array, expected_serialized_array)

    def test_utility_matrix_one_element_array_return_an_dict_as_string(self):
        expected_serialized_array = '{(0, 40): 1}'
        sparse_array = dok_matrix((1, 100), dtype=np.int8)
        sparse_array[0, 40] = 1
        serialized_array = convert_from_sparse_array(sparse_array)
        self.assertEqual(serialized_array, expected_serialized_array)

    def test_utility_matrix_two_elements_array_return_an_dict_as_string(self):
        expected_serialized_array = '{(0, 40): 1, (0, 80): 1}'
        sparse_array = dok_matrix((1, 100), dtype=np.int8)
        sparse_array[0, 40] = 1
        sparse_array[0, 80] = 1
        serialized_array = convert_from_sparse_array(sparse_array)
        self.assertEqual(serialized_array, expected_serialized_array)

    def test_utility_matrix_two_elements_matrix_two_by_one_hundred_return_an_dict_as_string(self):
        expected_serialized_array = '{(1, 80): 1, (0, 40): 1}'
        sparse_array = dok_matrix((2, 100), dtype=np.int8)
        sparse_array[0, 40] = 1
        sparse_array[1, 80] = 1
        serialized_array = convert_from_sparse_array(sparse_array)
        self.assertEqual(serialized_array, expected_serialized_array)

    def test_utility_matrix_empty_string_to_sparse_matrix(self):
        serialized_dict = ''
        matrix = convert_from_dict_string(serialized_dict)
        self.assertTrue(matrix.nnz == 0)

    def test_utility_matrix_empty_dict_to_sparse_matrix(self):
        serialized_dict = '{}'
        matrix = convert_from_dict_string(serialized_dict)
        self.assertTrue(matrix.nnz == 0)

    def test_utility_matrix_one_element_dict_to_sparse_matrix(self):
        serialized_dict = '{(0, 40): 1}'
        matrix = convert_from_dict_string(serialized_dict)
        self.assertTrue(matrix.nnz > 0)
# ---- core/models/__init__.py | repo: firehose-dataset/congrad | license: MIT ----
from core.models.mem_transformer import MemTransformerLM
from core.models.mtl_transformer import MTLMemTransformerLM
# ---- stella_nav_planning/src/stella_nav_planning/global_planner/__init__.py | repo: ymd-stella/stella_nav | license: MIT ----
from .carrot_planner import CarrotPlanner
from .ompl_planner import OmplPlanner
# ---- ontology-tools/CMCLABoxManagement/chemaboxwriters/chemaboxwriters/ontospecies/__init__.py | repo: mdhillmancmcl/TheWorldAvatar-CMCL-Fork | license: MIT ----
from chemaboxwriters.ontospecies.pipeline import OS_pipeline
from chemaboxwriters.ontospecies.writeabox import write_abox
# ---- ioant/ioant/proto/__init__.py | repo: ioants/pipy-packages | license: MIT ----
from .proto import *
# ---- .history/ClassFiles/Control Flow/ForLoopRangeFunc_20210101223124.py | repo: minefarmer/Comprehensive-Python | license: Unlicense ----
''' If
# ---- ver01/KieaPython01/comments.py | repo: grtlinux/KieaPython21 | license: Apache-2.0 ----
# Comment
print("This is a comment..")
"""
This is a comment
written in
more than just one line
""" | 12.375 | 28 | 0.676768 | 17 | 99 | 3.941176 | 0.764706 | 0.179104 | 0.208955 | 0.41791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191919 | 99 | 8 | 29 | 12.375 | 0.8375 | 0.060606 | 0 | 0 | 0 | 0 | 0.59375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
# ---- src/asldro/filters/test_add_noise_filter.py | repo: gold-standard-phantoms/asldro | license: MIT ----
""" Add noise filter tests """
# pylint: disable=duplicate-code
import pytest
import numpy as np
import numpy.testing
from asldro.filters.basefilter import FilterInputValidationError
from asldro.filters.add_noise_filter import AddNoiseFilter
from asldro.filters.fourier_filter import FftFilter
from asldro.containers.image import NumpyImageContainer
SNR_VALUE = 100.0
RANDOM_SEED = 1234


def test_add_noise_filter_validate_inputs():
    """Check a FilterInputValidationError is raised when the inputs
    to the add complex noise filter are incorrect or missing"""
    noise_filter = AddNoiseFilter()
    noise_filter.add_input("snr", 1)
    with pytest.raises(FilterInputValidationError):
        noise_filter.run()  # image not defined

    noise_filter.add_input("image", 1)
    with pytest.raises(FilterInputValidationError):
        noise_filter.run()  # image wrong type

    noise_filter = AddNoiseFilter()
    noise_filter.add_input("image", NumpyImageContainer(image=np.zeros((32, 32, 32))))
    with pytest.raises(FilterInputValidationError):
        noise_filter.run()  # snr not defined

    noise_filter.add_input("snr", "str")
    with pytest.raises(FilterInputValidationError):
        noise_filter.run()  # snr wrong type

    noise_filter = AddNoiseFilter()
    noise_filter.add_input("image", NumpyImageContainer(image=np.zeros((32, 32, 32))))
    noise_filter.add_input("snr", 1)
    noise_filter.add_input("reference_image", 1)
    with pytest.raises(FilterInputValidationError):
        noise_filter.run()  # reference_image wrong type

    noise_filter = AddNoiseFilter()
    noise_filter.add_input("image", NumpyImageContainer(image=np.zeros((32, 32, 32))))
    noise_filter.add_input("snr", 1)
    noise_filter.add_input(
        "reference_image", NumpyImageContainer(image=np.zeros((32, 32, 31)))
    )
    with pytest.raises(FilterInputValidationError):
        noise_filter.run()  # reference_image wrong shape


def add_noise_function(
    image: np.ndarray,
    snr: float,
    reference_image: np.ndarray = None,
    noise_scaling: float = 1.0,
):
    """
    Adds normally distributed random noise to an input array.

    Arguments:
        image (numpy.ndarray): numpy array containing the input image to add noise to
        snr (float): signal to noise ratio
        reference_image (numpy.ndarray): reference image to use to calculate the noise amplitude
        noise_scaling (float): scales the noise amplitude by this number

    Returns:
        numpy.ndarray: the input image with noise added
    """
    if reference_image is None:
        reference_image = image
    noise_amplitude = (
        noise_scaling
        * np.mean(np.abs(reference_image[reference_image.nonzero()]))
        / snr
    )
    if image.dtype in [np.complex128, np.complex64]:
        image_noise = (
            np.real(image) + np.random.normal(0, noise_amplitude, image.shape)
        ) + 1j * (np.imag(image) + np.random.normal(0, noise_amplitude, image.shape))
    else:
        image_noise = image + np.random.normal(0, noise_amplitude, image.shape)
    return image_noise


def calculate_snr_function(
    image_1: np.ndarray, image_2: np.ndarray, mask: np.ndarray = None
):
    """calculates the snr from two image arrays

    Image arrays should be of the same object and with the same amplitude
    of normally distributed random noise added. The noise component must be different
    on each image. The signal to noise ratio is calculated using the mean value (within
    an optional ROI defined by the input mask) divided by the standard deviation of the
    difference between image_1 and image_2. This is in accordance with
    "A comparison of two methods for measuring the signal to
    noise ratio on MR images", PMB, vol 44, no. 12, pp. N261-N264 (1999)

    Args:
        image_1 (np.ndarray): First image
        image_2 (np.ndarray): Second image
        mask (np.ndarray): mask, elements that are non-zero are used to define the
            object. If not supplied then all elements in the images will be considered.

    Returns:
        float: The calculated SNR
    """
    image_1 = np.abs(image_1)
    image_2 = np.abs(image_2)
    if mask is None:
        mask = np.ones(image_1.shape)
    diff = image_1 - image_2
    return np.sqrt(2) * (
        np.mean(image_1[mask.nonzero()]) / np.std(diff[mask.nonzero()])
    )


# Mock Data Fixtures
def image_container_function() -> NumpyImageContainer:
    """ Creates a NumpyImageContainer with mock real data """
    signal_level = 100.0
    np.random.seed(0)
    image = np.random.normal(signal_level, 10, (32, 32, 32))
    return NumpyImageContainer(image=image)


def complex_image_container_function() -> NumpyImageContainer:
    """ Creates a NumpyImageContainer with mock complex data """
    signal_level = 100.0
    np.random.seed(0)
    image = np.random.normal(
        signal_level / np.sqrt(2), 10, (32, 32, 32)
    ) + 1j * np.random.normal(signal_level / np.sqrt(2), 10, (32, 32, 32))
    return NumpyImageContainer(image=image)


def ft_image_container_function(img: NumpyImageContainer) -> NumpyImageContainer:
    """ Fourier transforms the input image container 'img' """
    fft_filter = FftFilter()
    fft_filter.add_input("image", img)
    fft_filter.run()
    return fft_filter.outputs["image"]


@pytest.fixture(name="image_container")
def image_container_fixture() -> NumpyImageContainer:
    """ Fixture that creates and returns a NumpyImageContainer """
    return image_container_function()


@pytest.fixture(name="complex_image_container")
def complex_image_container_fixture() -> NumpyImageContainer:
    """ Fixture that creates and returns a complex-valued NumpyImageContainer """
    return complex_image_container_function()


@pytest.fixture(name="ft_image_container")
def ft_image_container_fixture() -> NumpyImageContainer:
    """ Fixture that creates and returns the Fourier Transform of image_container """
    return ft_image_container_function(image_container_function())


@pytest.fixture(name="ft_complex_image_container")
def ft_complex_image_container_fixture() -> NumpyImageContainer:
    """ Fixture that creates and returns the Fourier Transform of complex_image_container """
    return ft_image_container_function(complex_image_container_function())


# 1. add noise to non-complex image, only image supplied
def test_add_noise_filter_with_mock_data_mag_image_only(image_container):
    """Test the add noise filter with magnitude (non-complex) image only"""
    np.random.seed(RANDOM_SEED)
    # calculate manually
    image_with_noise = add_noise_function(image_container.image, SNR_VALUE)

    # calculate using the filter
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    np.random.seed(RANDOM_SEED)  # set seed so RNG is in the same state
    add_noise_filter.run()
    image_with_noise_container = add_noise_filter.outputs["image"].clone()

    # image_with_noise and image_with_noise_container.image should be equal
    numpy.testing.assert_array_equal(image_with_noise, image_with_noise_container.image)

    # Run again and then check the SNR
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.run()
    measured_snr = calculate_snr_function(
        image_with_noise_container.image,
        add_noise_filter.outputs["image"].image,
    )
    print(f"calculated snr = {measured_snr}, desired snr = {SNR_VALUE}")
    # This should be almost equal to the desired snr
    numpy.testing.assert_array_almost_equal(measured_snr, SNR_VALUE, 0)


# 2. add noise to non-complex image, image and reference in the spatial domain
def test_add_noise_filter_with_mock_data_mag_image_reference_same_domain(
    image_container,
):
    """ Test the add noise filter with an image and reference image, both in the same domain """
    np.random.seed(RANDOM_SEED)
    # calculate manually
    image_with_noise = add_noise_function(
        image_container.image, SNR_VALUE, image_container.image
    )

    # calculate using the filter
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", image_container)
    np.random.seed(RANDOM_SEED)  # set seed so RNG is in the same state
    add_noise_filter.run()
    image_with_noise_container = add_noise_filter.outputs["image"].clone()

    # image_with_noise and image_with_noise_container.image should be equal
    numpy.testing.assert_array_equal(image_with_noise, image_with_noise_container.image)

    # Run again and then check the SNR
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", image_container)
    add_noise_filter.run()
    measured_snr = calculate_snr_function(
        image_with_noise_container.image,
        add_noise_filter.outputs["image"].image,
    )
    print(f"calculated snr = {measured_snr}, desired snr = {SNR_VALUE}")
    # This should be almost equal to the desired snr
    numpy.testing.assert_array_almost_equal(measured_snr, SNR_VALUE, 0)


# 3. add noise to non-complex image, image in spatial domain, reference in inverse domain
# Currently the calculated SNR does not match the desired
def test_add_noise_filter_with_mock_data_mag_image_spatial_reference_inverse(
    image_container, ft_image_container
):
    """Test the add noise filter with an image and reference image, image in SPATIAL_DOMAIN,
    reference in INVERSE_DOMAIN"""
    np.random.seed(RANDOM_SEED)
    # calculate manually
    image_with_noise = add_noise_function(
        image_container.image,
        SNR_VALUE,
        ft_image_container.image,
        1 / np.sqrt(ft_image_container.image.size),
    )
    image_with_noise_2 = add_noise_function(
        image_container.image,
        SNR_VALUE,
        ft_image_container.image,
        1 / np.sqrt(ft_image_container.image.size),
    )
    print(f"manual snr = {calculate_snr_function(image_with_noise, image_with_noise_2)}")

    # calculate using the filter
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", ft_image_container)
    np.random.seed(RANDOM_SEED)  # set seed so RNG is in the same state
    add_noise_filter.run()
    image_with_noise_container = add_noise_filter.outputs["image"].clone()

    # image_with_noise and image_with_noise_container.image should be equal
    numpy.testing.assert_array_equal(image_with_noise, image_with_noise_container.image)

    # Run again and then check the SNR
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", ft_image_container)
    add_noise_filter.run()
    measured_snr = calculate_snr_function(
        image_with_noise_container.image,
        add_noise_filter.outputs["image"].image,
    )
    print(f"calculated snr = {measured_snr}, desired snr = {SNR_VALUE}")
    # This isn't equal to the desired SNR
    with numpy.testing.assert_raises(AssertionError):
        numpy.testing.assert_array_almost_equal(measured_snr, SNR_VALUE, 0)


# 4. add noise to complex image, no reference supplied
def test_add_noise_filter_with_mock_data_complex_image(complex_image_container):
    """ Test the add noise filter with a complex image """
    np.random.seed(RANDOM_SEED)
    # calculate manually
    image_with_noise = add_noise_function(
        complex_image_container.image,
        SNR_VALUE,
    )
    image_with_noise_2 = add_noise_function(
        complex_image_container.image,
        SNR_VALUE,
    )
    print(f"manual snr = {calculate_snr_function(image_with_noise, image_with_noise_2)}")

    # calculate using the filter
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    np.random.seed(RANDOM_SEED)  # set seed so RNG is in the same state
    add_noise_filter.run()
    image_with_noise_container = add_noise_filter.outputs["image"].clone()

    # image_with_noise and image_with_noise_container.image should be equal
    numpy.testing.assert_array_equal(image_with_noise, image_with_noise_container.image)

    # Run again and then check the SNR
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.run()
    measured_snr = calculate_snr_function(
        image_with_noise_container.image,
        add_noise_filter.outputs["image"].image,
    )
    print(f"calculated snr = {measured_snr}, desired snr = {SNR_VALUE}")
    # This should be almost equal to the desired snr
    numpy.testing.assert_array_almost_equal(measured_snr, SNR_VALUE, 0)


# 5. add noise to complex image, image and reference in inverse domain
def test_add_noise_filter_with_mock_data_complex_image_reference_inverse(
    ft_complex_image_container,
):
    """ Test the add noise filter with a complex data in the INVERSE_DOMAIN """
    np.random.seed(RANDOM_SEED)
    # calculate manually
    image_with_noise = add_noise_function(
        ft_complex_image_container.image,
        SNR_VALUE,
    )
    image_with_noise_2 = add_noise_function(
        ft_complex_image_container.image,
        SNR_VALUE,
    )
    print(f"manual snr = {calculate_snr_function(image_with_noise, image_with_noise_2)}")

    # calculate using the filter
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", ft_complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    np.random.seed(RANDOM_SEED)  # set seed so RNG is in the same state
    add_noise_filter.run()
    image_with_noise_container = add_noise_filter.outputs["image"].clone()

    # image_with_noise and image_with_noise_container.image should be equal
    numpy.testing.assert_array_equal(image_with_noise, image_with_noise_container.image)

    # Run again and then check the SNR
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", ft_complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.run()
    measured_snr = calculate_snr_function(
        image_with_noise_container.image,
        add_noise_filter.outputs["image"].image,
    )
    print(f"calculated snr = {measured_snr}, desired snr = {SNR_VALUE}")
    # This should be almost equal to the desired snr
    numpy.testing.assert_array_almost_equal(measured_snr, SNR_VALUE, 0)


# 6. add noise to complex image, image in spatial domain, reference in inverse domain
# Currently the calculated SNR does not match the desired
def test_add_noise_filter_with_mock_data_complex_image_spatial_reference_inverse(
    complex_image_container,
    ft_complex_image_container,
):
    """Test the add noise filter with a complex image SPATIAL_DOMAIN,
    complex reference in the INVERSE_DOMAIN
    """
    np.random.seed(RANDOM_SEED)
    # calculate manually
    image_with_noise = add_noise_function(
        complex_image_container.image,
        SNR_VALUE,
        ft_complex_image_container.image,
        1.0 / np.sqrt(ft_complex_image_container.image.size),
    )
    image_with_noise_2 = add_noise_function(
        complex_image_container.image,
        SNR_VALUE,
        ft_complex_image_container.image,
        1.0 / np.sqrt(ft_complex_image_container.image.size),
    )
    print(f"manual snr = {calculate_snr_function(image_with_noise, image_with_noise_2)}")

    # calculate using the filter
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", ft_complex_image_container)
    np.random.seed(RANDOM_SEED)  # set seed so RNG is in the same state
    add_noise_filter.run()
    image_with_noise_container = add_noise_filter.outputs["image"].clone()

    # image_with_noise and image_with_noise_container.image should be equal
    numpy.testing.assert_array_equal(image_with_noise, image_with_noise_container.image)

    # Run again and then check the SNR
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", ft_complex_image_container)
    add_noise_filter.run()
    measured_snr = calculate_snr_function(
        image_with_noise_container.image,
        add_noise_filter.outputs["image"].image,
    )
    print(f"calculated snr = {measured_snr}, desired snr = {SNR_VALUE}")
    # This isn't equal to the desired SNR
    with numpy.testing.assert_raises(AssertionError):
        numpy.testing.assert_array_almost_equal(measured_snr, SNR_VALUE, 0)


# 7. add noise to complex image, image in inverse domain, reference in spatial domain
# Currently the calculated SNR does not match the desired
def test_add_noise_filter_with_mock_data_complex_image_inverse_reference_spatial(
    complex_image_container, ft_complex_image_container
):
    """Test the add noise filter with a complex image INVERSE_DOMAIN,
    complex reference in the SPATIAL_DOMAIN
    """
    np.random.seed(RANDOM_SEED)
    # calculate manually
    image_with_noise = add_noise_function(
        ft_complex_image_container.image,
        SNR_VALUE,
        complex_image_container.image,
        np.sqrt(complex_image_container.image.size),
    )
    image_with_noise_2 = add_noise_function(
        ft_complex_image_container.image,
        SNR_VALUE,
        complex_image_container.image,
        np.sqrt(complex_image_container.image.size),
    )
    print(f"manual snr = {calculate_snr_function(image_with_noise, image_with_noise_2)}")

    # calculate using the filter
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", ft_complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", complex_image_container)
    np.random.seed(RANDOM_SEED)  # set seed so RNG is in the same state
    add_noise_filter.run()
    image_with_noise_container = add_noise_filter.outputs["image"].clone()

    # image_with_noise and image_with_noise_container.image should be equal
    numpy.testing.assert_array_equal(image_with_noise, image_with_noise_container.image)

    # Run again and then check the SNR
    add_noise_filter = AddNoiseFilter()
    add_noise_filter.add_input("image", ft_complex_image_container)
    add_noise_filter.add_input("snr", SNR_VALUE)
    add_noise_filter.add_input("reference_image", complex_image_container)
    add_noise_filter.run()
    measured_snr = calculate_snr_function(
        image_with_noise_container.image,
        add_noise_filter.outputs["image"].image,
    )
    print(f"calculated snr = {measured_snr}, desired snr = {SNR_VALUE}")
    # This isn't equal to the desired SNR
    with numpy.testing.assert_raises(AssertionError):
        numpy.testing.assert_array_almost_equal(measured_snr, SNR_VALUE, 0)
def test_add_noise_filter_snr_zero(image_container):
"""Checks that the output image is equal to the input image when snr=0."""
# calculate using the filter
add_noise_filter = AddNoiseFilter()
add_noise_filter.add_input("image", image_container)
add_noise_filter.add_input("snr", 0.0)
np.random.seed(RANDOM_SEED) # set seed so RNG is in the same state
add_noise_filter.run()
# image_with_noise and image_with_noise_container.image should be equal
numpy.testing.assert_array_equal(
image_container.image, add_noise_filter.outputs["image"].image
)
| 39.32613 | 96 | 0.742119 | 2,772 | 20,017 | 5.040765 | 0.076479 | 0.069849 | 0.101195 | 0.065269 | 0.81679 | 0.804265 | 0.791026 | 0.773635 | 0.759178 | 0.730552 | 0 | 0.008906 | 0.175401 | 20,017 | 508 | 97 | 39.403543 | 0.837635 | 0.252385 | 0 | 0.678019 | 0 | 0 | 0.085072 | 0.024131 | 0 | 0 | 0 | 0 | 0.055728 | 1 | 0.055728 | false | 0 | 0.021672 | 0 | 0.105263 | 0.037152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4ffd4249fa556d5aab8d390141c20b25c92eb583 | 58 | py | Python | words/tools.py | beatbox4108/sentence_labo | fd1ef23450f0c81bfb9be7c80ed937a0e45b2d05 | [
"CC0-1.0"
] | null | null | null | words/tools.py | beatbox4108/sentence_labo | fd1ef23450f0c81bfb9be7c80ed937a0e45b2d05 | [
"CC0-1.0"
] | null | null | null | words/tools.py | beatbox4108/sentence_labo | fd1ef23450f0c81bfb9be7c80ed937a0e45b2d05 | [
"CC0-1.0"
] | null | null | null | def first_upper(word):
return word[0].upper()+word[1:] | 29 | 35 | 0.672414 | 10 | 58 | 3.8 | 0.7 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 0.12069 | 58 | 2 | 35 | 29 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
8b2a2fdaf070b28209eaa815ce6e556802f1fd0c | 131 | py | Python | PixivCrawler/__init__.py | AuthurExcalbern/PixivCrawler | 8158916e7cc806b6c98d2a09565d7c10e22905d1 | [
"MIT"
] | null | null | null | PixivCrawler/__init__.py | AuthurExcalbern/PixivCrawler | 8158916e7cc806b6c98d2a09565d7c10e22905d1 | [
"MIT"
] | null | null | null | PixivCrawler/__init__.py | AuthurExcalbern/PixivCrawler | 8158916e7cc806b6c98d2a09565d7c10e22905d1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from PixivCrawler.login import PixivLogin
from PixivCrawler.crawler import Crawler
| 21.833333 | 41 | 0.755725 | 17 | 131 | 5.823529 | 0.764706 | 0.323232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017391 | 0.122137 | 131 | 5 | 42 | 26.2 | 0.843478 | 0.328244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8c6606e8b6438d0d229794a46a2b94a25abe0500 | 9,135 | py | Python | dense_fusion/nn/loss.py | iory/dense-fusion | f08b9fc5257212a4f264d12845354f99ced57592 | [
"MIT"
] | 6 | 2020-02-27T11:25:33.000Z | 2021-06-19T05:10:47.000Z | dense_fusion/nn/loss.py | iory/dense-fusion | f08b9fc5257212a4f264d12845354f99ced57592 | [
"MIT"
] | null | null | null | dense_fusion/nn/loss.py | iory/dense-fusion | f08b9fc5257212a4f264d12845354f99ced57592 | [
"MIT"
] | 1 | 2020-12-02T11:07:40.000Z | 2020-12-02T11:07:40.000Z | import torch
from torch.nn.modules.loss import _Loss
def pairwise_dist(xyz1, xyz2):
r_xyz1 = torch.sum(xyz1 * xyz1, dim=2, keepdim=True)
r_xyz2 = torch.sum(xyz2 * xyz2, dim=2, keepdim=True)
mul = torch.matmul(xyz2, xyz1.permute(0, 2, 1))
dist = r_xyz2 - 2 * mul + r_xyz1.permute(0, 2, 1)
return dist
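`pairwise_dist` evaluates the expansion |a - b|² = |a|² - 2 a·b + |b|² with batched tensor ops; entry `[b, i, j]` of the result holds the squared distance between point `i` of `xyz2` and point `j` of `xyz1`. A torch-free sketch of the same identity (`pairwise_sq_dist` is a hypothetical helper for illustration, not part of this module):

```python
def pairwise_sq_dist(pts_a, pts_b):
    """Squared distances via |a - b|^2 = |a|^2 - 2 a.b + |b|^2, the same
    identity pairwise_dist evaluates on batched tensors.
    Rows index pts_a, columns index pts_b."""
    def sq_norm(p):
        return sum(c * c for c in p)

    def dot(p, q):
        return sum(u * v for u, v in zip(p, q))

    return [[sq_norm(a) - 2 * dot(a, b) + sq_norm(b) for b in pts_b] for a in pts_a]


# Origin to (0, 3, 4): squared distance 25; (1, 0, 0) to (0, 3, 4): 1 + 9 + 16 = 26.
out = pairwise_sq_dist([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [(0.0, 3.0, 4.0)])
assert out == [[25.0], [26.0]]
```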
def loss_calculation(
pred_r,
pred_t,
pred_c,
target,
model_points,
idx,
points,
w,
refine,
num_point_mesh,
sym_list,
):
bs, num_p, _ = pred_c.size()
pred_r = pred_r / (torch.norm(pred_r, dim=2).view(bs, num_p, 1))
base = (
torch.cat(
((1.0 - 2.0 * (pred_r[:, :, 2] ** 2 + pred_r[:, :, 3] ** 2)).view(
bs, num_p, 1
),
(
2.0 * pred_r[:, :, 1] * pred_r[:, :, 2]
- 2.0 * pred_r[:, :, 0] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
2.0 * pred_r[:, :, 0] * pred_r[:, :, 2]
+ 2.0 * pred_r[:, :, 1] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
2.0 * pred_r[:, :, 1] * pred_r[:, :, 2]
+ 2.0 * pred_r[:, :, 3] * pred_r[:, :, 0]
).view(bs, num_p, 1),
(1.0 - 2.0 * (pred_r[:, :, 1] ** 2 + pred_r[:, :, 3] ** 2)).view(
bs, num_p, 1
),
(
-2.0 * pred_r[:, :, 0] * pred_r[:, :, 1]
+ 2.0 * pred_r[:, :, 2] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
-2.0 * pred_r[:, :, 0] * pred_r[:, :, 2]
+ 2.0 * pred_r[:, :, 1] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
2.0 * pred_r[:, :, 0] * pred_r[:, :, 1]
+ 2.0 * pred_r[:, :, 2] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(1.0 - 2.0 * (pred_r[:, :, 1] ** 2 + pred_r[:, :, 2] ** 2)).view(
bs, num_p, 1
),
),
dim=2,
)
.contiguous()
.view(bs * num_p, 3, 3)
)
ori_base = base
base = base.contiguous().transpose(2, 1).contiguous()
model_points = (
model_points.view(bs, 1, num_point_mesh, 3)
.repeat(1, num_p, 1, 1)
.view(bs * num_p, num_point_mesh, 3)
)
target = (
target.view(bs, 1, num_point_mesh, 3)
.repeat(1, num_p, 1, 1)
.view(bs * num_p, num_point_mesh, 3)
)
ori_target = target
pred_t = pred_t.contiguous().view(bs * num_p, 1, 3)
ori_t = pred_t
points = points.contiguous().view(bs * num_p, 1, 3)
pred_c = pred_c.contiguous().view(bs * num_p)
pred = torch.add(torch.bmm(model_points, base), points + pred_t)
if not refine:
if idx[0].item() in sym_list:
target = target[0].transpose(1, 0).contiguous().view(3, -1)
pred = pred.permute(2, 0, 1).contiguous().view(3, -1)
inds = torch.kthvalue(
pairwise_dist(target.T.unsqueeze(0), pred.T.unsqueeze(0)), 1
)[1]
target = torch.index_select(target, 1, inds.view(-1).detach())
target = (
target.view(3, bs * num_p, num_point_mesh).permute(
1, 2, 0).contiguous()
)
pred = (
pred.view(3, bs * num_p, num_point_mesh).permute(
1, 2, 0).contiguous()
)
dis = torch.mean(torch.norm((pred - target), dim=2), dim=1)
loss = torch.mean((dis * pred_c - w * torch.log(pred_c)), dim=0)
pred_c = pred_c.view(bs, num_p)
how_max, which_max = torch.max(pred_c, 1)
dis = dis.view(bs, num_p)
t = ori_t[which_max[0]] + points[which_max[0]]
points = points.view(1, bs * num_p, 3)
ori_base = ori_base[which_max[0]].view(1, 3, 3).contiguous()
ori_t = t.repeat(bs * num_p, 1).contiguous().view(1, bs * num_p, 3)
new_points = torch.bmm((points - ori_t), ori_base).contiguous()
new_target = ori_target[0].view(1, num_point_mesh, 3).contiguous()
ori_t = t.repeat(num_point_mesh, 1).contiguous().view(1, num_point_mesh, 3)
new_target = torch.bmm((new_target - ori_t), ori_base).contiguous()
return loss, dis[0][which_max[0]], new_points.detach(), new_target.detach()
class DenseFusionLoss(_Loss):
def __init__(self, num_points_mesh, sym_list):
super(DenseFusionLoss, self).__init__(True)
self.num_pt_mesh = num_points_mesh
self.sym_list = sym_list
def forward(self,
pred_r,
pred_t,
pred_c,
target,
model_points,
idx,
points,
w,
refine):
return loss_calculation(
pred_r,
pred_t,
pred_c,
target,
model_points,
idx,
points,
w,
refine,
self.num_pt_mesh,
self.sym_list,
)
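The nine terms concatenated in `loss_calculation` (and again in `refine_loss_calculation`) are the entries of the standard rotation matrix of a unit quaternion stored as `pred_r[..., 0:4] = (w, x, y, z)`; the code then transposes it so it can act on row-vector point clouds via `torch.bmm`. A torch-free sketch of that matrix for a single quaternion (`quat_to_mat` is a hypothetical helper, not part of the library):

```python
import math


def quat_to_mat(w, x, y, z):
    """Rotation matrix of a unit quaternion (w, x, y, z) -- the same nine
    entries that loss_calculation concatenates per point."""
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ]


# Identity quaternion gives the identity matrix.
assert quat_to_mat(1.0, 0.0, 0.0, 0.0) == [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# A 90 degree rotation about z sends the x axis to the y axis (up to float error).
s = math.sqrt(0.5)
m = quat_to_mat(s, 0.0, 0.0, s)
assert abs(m[1][0] - 1.0) < 1e-9 and abs(m[0][1] + 1.0) < 1e-9
```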
def refine_loss_calculation(
pred_r, pred_t, target, model_points, idx, points, num_point_mesh, sym_list
):
pred_r = pred_r.view(1, 1, -1)
pred_t = pred_t.view(1, 1, -1)
bs, num_p, _ = pred_r.size()
num_input_points = len(points[0])
pred_r = pred_r / (torch.norm(pred_r, dim=2).view(bs, num_p, 1))
base = (torch.cat(
(
(1.0 - 2.0 * (pred_r[:, :, 2] ** 2 + pred_r[:, :, 3] ** 2)).view(
bs, num_p, 1
),
(
2.0 * pred_r[:, :, 1] * pred_r[:, :, 2]
- 2.0 * pred_r[:, :, 0] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
2.0 * pred_r[:, :, 0] * pred_r[:, :, 2]
+ 2.0 * pred_r[:, :, 1] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
2.0 * pred_r[:, :, 1] * pred_r[:, :, 2]
+ 2.0 * pred_r[:, :, 3] * pred_r[:, :, 0]
).view(bs, num_p, 1),
(1.0 - 2.0 * (pred_r[:, :, 1] ** 2 + pred_r[:, :, 3] ** 2)).view(
bs, num_p, 1
),
(
-2.0 * pred_r[:, :, 0] * pred_r[:, :, 1]
+ 2.0 * pred_r[:, :, 2] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
-2.0 * pred_r[:, :, 0] * pred_r[:, :, 2]
+ 2.0 * pred_r[:, :, 1] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(
2.0 * pred_r[:, :, 0] * pred_r[:, :, 1]
+ 2.0 * pred_r[:, :, 2] * pred_r[:, :, 3]
).view(bs, num_p, 1),
(1.0 - 2.0 * (pred_r[:, :, 1] ** 2 + pred_r[:, :, 2] ** 2)).view(
bs, num_p, 1
),
),
dim=2,
)
.contiguous()
.view(bs * num_p, 3, 3)
)
ori_base = base
base = base.contiguous().transpose(2, 1).contiguous()
model_points = (
model_points.view(bs, 1, num_point_mesh, 3)
.repeat(1, num_p, 1, 1)
.view(bs * num_p, num_point_mesh, 3)
)
target = (
target.view(bs, 1, num_point_mesh, 3)
.repeat(1, num_p, 1, 1)
.view(bs * num_p, num_point_mesh, 3)
)
ori_target = target
pred_t = pred_t.contiguous().view(bs * num_p, 1, 3)
ori_t = pred_t
pred = torch.add(torch.bmm(model_points, base), pred_t)
if idx[0].item() in sym_list:
target = target[0].transpose(1, 0).contiguous().view(3, -1)
pred = pred.permute(2, 0, 1).contiguous().view(3, -1)
inds = torch.kthvalue(
pairwise_dist(target.T.unsqueeze(0), pred.T.unsqueeze(0)), 1
)[1]
target = torch.index_select(target, 1, inds.view(-1) - 1)
target = (
target.view(
3, bs * num_p, num_point_mesh).permute(1, 2, 0).contiguous()
)
pred = pred.view(
3, bs * num_p, num_point_mesh).permute(1, 2, 0).contiguous()
dis = torch.mean(torch.norm((pred - target), dim=2), dim=1)
t = ori_t[0]
points = points.view(1, num_input_points, 3)
ori_base = ori_base[0].view(1, 3, 3).contiguous()
ori_t = (
t.repeat(bs * num_input_points, 1)
.contiguous()
.view(1, bs * num_input_points, 3)
)
new_points = torch.bmm((points - ori_t), ori_base).contiguous()
new_target = ori_target[0].view(1, num_point_mesh, 3).contiguous()
ori_t = t.repeat(num_point_mesh, 1).contiguous().view(1, num_point_mesh, 3)
new_target = torch.bmm((new_target - ori_t), ori_base).contiguous()
return dis, new_points.detach(), new_target.detach()
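For object indices in `sym_list`, both loss paths replace each predicted point's target with the nearest target point (via `pairwise_dist`, `torch.kthvalue`, and `torch.index_select`) before averaging, so rotations that map a symmetric model onto itself are not penalised. A torch-free sketch of that nearest-neighbour matching on plain tuples (`sym_distance` is a hypothetical helper mirroring the `sym_list` branch, not the library API):

```python
import math


def sym_distance(pred_pts, target_pts):
    """Mean distance after matching each predicted point to its nearest
    target point, as done for symmetric objects above."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    return sum(min(dist(p, t) for t in target_pts) for p in pred_pts) / len(pred_pts)


# A square rotated by 90 degrees maps onto itself, so the matched distance is 0,
# whereas naive index-by-index pairing would report a large error.
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
rotated = [(-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0), (1.0, 1.0)]
assert sym_distance(rotated, square) == 0.0
```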
class DenseFusionRefineLoss(_Loss):
def __init__(self, num_points_mesh, sym_list):
super(DenseFusionRefineLoss, self).__init__(True)
self.num_pt_mesh = num_points_mesh
self.sym_list = sym_list
def forward(self, pred_r, pred_t, target, model_points, idx, points):
return refine_loss_calculation(
pred_r,
pred_t,
target,
model_points,
idx,
points,
self.num_pt_mesh,
self.sym_list,
)
| 32.165493 | 79 | 0.473454 | 1,281 | 9,135 | 3.130367 | 0.061671 | 0.093516 | 0.061347 | 0.079801 | 0.846633 | 0.810224 | 0.796509 | 0.761596 | 0.74414 | 0.737656 | 0 | 0.055679 | 0.359059 | 9,135 | 283 | 80 | 32.279152 | 0.629206 | 0 | 0 | 0.649194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028226 | false | 0 | 0.008065 | 0.008065 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8ca8ecf552e434e360402e257ea5705bfb1ec694 | 295 | py | Python | Mundo 3/Ex108/teste.py | legna7/Python | 52e0b642d1b7acc592ec82dd360c5697fb0765db | [
"MIT"
] | null | null | null | Mundo 3/Ex108/teste.py | legna7/Python | 52e0b642d1b7acc592ec82dd360c5697fb0765db | [
"MIT"
] | null | null | null | Mundo 3/Ex108/teste.py | legna7/Python | 52e0b642d1b7acc592ec82dd360c5697fb0765db | [
"MIT"
] | null | null | null | #from ex108 import moeda
import moeda
p = float(input('Enter the price: R$ '))
print(f'Half of R${moeda.moeda(p)} is {moeda.moeda(moeda.metade(p))}')
print(f'Double of {moeda.moeda(p)} is {moeda.moeda(moeda.dobro(p))}')
print(f'Increasing by 10%, we get R${moeda.moeda(moeda.aumentar(p, 10))}') | 42.142857 | 74 | 0.691525 | 54 | 295 | 3.777778 | 0.407407 | 0.392157 | 0.220588 | 0.127451 | 0.27451 | 0.27451 | 0.27451 | 0 | 0 | 0 | 0 | 0.026415 | 0.101695 | 295 | 7 | 75 | 42.142857 | 0.743396 | 0.077966 | 0 | 0 | 0 | 0.4 | 0.746324 | 0.334559 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.6 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8cb18bf7dfcb10f25420fb15051e678b869ad9ea | 96 | py | Python | venv/lib/python3.8/site-packages/cachy/stores/memcached_store.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/cachy/stores/memcached_store.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/cachy/stores/memcached_store.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/85/9d/5b/30d0ea8f8f5e44a377ac2a4ef348f85d57e6d636dd7721515885c65d10 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8cb74c685a25ade3f5ca33e6563aa04ecfe1cceb | 3,294 | py | Python | algorithms/view/plotly_designed.py | warcraft12321/Hyperfoods | b995cd7afe10fcbd338158c80f53ce637bfffc0c | [
"MIT"
] | 51 | 2020-01-26T23:32:57.000Z | 2022-03-20T14:49:57.000Z | algorithms/view/plotly_designed.py | warcraft12321/Hyperfoods | b995cd7afe10fcbd338158c80f53ce637bfffc0c | [
"MIT"
] | 2 | 2020-12-19T20:00:28.000Z | 2021-03-03T20:22:45.000Z | algorithms/view/plotly_designed.py | warcraft12321/Hyperfoods | b995cd7afe10fcbd338158c80f53ce637bfffc0c | [
"MIT"
] | 33 | 2020-02-18T16:15:48.000Z | 2022-03-24T15:12:05.000Z | import plotly.graph_objs as go
from plotly.subplots import make_subplots
def plotly_function(x_ingredients_embedded1, x_ingredients_embedded2, words, colors, sizes, labels, title):
fig_plotly = make_subplots(rows=1, cols=2, subplot_titles=("PCA", "T-SNE"))
if labels == "true":
fig_plotly.add_trace(
go.Scatter(
x=x_ingredients_embedded1[:, 0],
y=x_ingredients_embedded1[:, 1],
mode="markers+text",
text=words,
textposition="bottom center",
textfont=dict(
family="sans serif",
size=10,
color="black"
),
marker=dict(
size=sizes,
sizemode='area',
sizeref=2.*max(sizes)/(40.**2),
sizemin=4,
color=colors,  # set color equal to a variable
),
hoverinfo="text"
),
row=1, col=1
)
fig_plotly.add_trace(
go.Scatter(
x=x_ingredients_embedded2[:, 0],
y=x_ingredients_embedded2[:, 1],
mode="markers+text",
text=words,
textposition="bottom center",
textfont=dict(
family="sans serif",
size=10,
color="black"
),
marker=dict(
size=sizes,
sizemode='area',
sizeref=2.*max(sizes)/(40.**2),
sizemin=4,
color=colors,  # set color equal to a variable
),
hoverinfo="text"
),
row=1, col=2
)
else:
fig_plotly.add_trace(
go.Scatter(
x=x_ingredients_embedded1[:, 0],
y=x_ingredients_embedded1[:, 1],
mode="markers",
text=words,
marker=dict(
size=sizes,
sizemode='area',
sizeref=2.*max(sizes)/(40.**2),
sizemin=4,
color=colors, # set color equal to a variable
),
hoverinfo="text"
),
row=1, col=1
)
fig_plotly.add_trace(
go.Scatter(
x=x_ingredients_embedded2[:, 0],
y=x_ingredients_embedded2[:, 1],
mode="markers",
text=words,
marker=dict(
size=sizes,
sizemode='area',
sizeref=2.*max(sizes)/(40.**2),
sizemin=4,
color=colors, # set color equal to a variable
),
hoverinfo="text"
),
row=1, col=2
)
fig_plotly.update_layout(
height=500,
width=1000,
title={
"text": title,
},
showlegend=False)
fig_plotly.show()
# Explicit trust -> jupyter trust /path/to/notebook.ipynb (at the command-line)
| 30.5 | 107 | 0.416515 | 295 | 3,294 | 4.525424 | 0.305085 | 0.089888 | 0.078652 | 0.050936 | 0.734082 | 0.734082 | 0.732584 | 0.732584 | 0.732584 | 0.732584 | 0 | 0.036799 | 0.480267 | 3,294 | 107 | 108 | 30.785047 | 0.742991 | 0.067092 | 0 | 0.808511 | 0 | 0 | 0.04636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010638 | false | 0 | 0.021277 | 0 | 0.031915 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8cec0869e382594a81bb42dccb19d39868345fa7 | 42 | py | Python | main2.py | ytyaru/Python.HelloMethod201612021510 | d7142f0183d1799f1fff82399600a73d887422bb | [
"CC0-1.0"
] | null | null | null | main2.py | ytyaru/Python.HelloMethod201612021510 | d7142f0183d1799f1fff82399600a73d887422bb | [
"CC0-1.0"
] | null | null | null | main2.py | ytyaru/Python.HelloMethod201612021510 | d7142f0183d1799f1fff82399600a73d887422bb | [
"CC0-1.0"
] | null | null | null | from hello import std_out
std_out.show()
| 10.5 | 25 | 0.785714 | 8 | 42 | 3.875 | 0.75 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 42 | 3 | 26 | 14 | 0.861111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
50915435dd815e0727d3fa40d7daa8ab2c936348 | 13,783 | py | Python | tests/test_execute.py | ExecutableBookProject/ipynb_parser | c72a3a21d37332e747fa1316c614d321e423300c | [
"BSD-3-Clause"
] | null | null | null | tests/test_execute.py | ExecutableBookProject/ipynb_parser | c72a3a21d37332e747fa1316c614d321e423300c | [
"BSD-3-Clause"
] | null | null | null | tests/test_execute.py | ExecutableBookProject/ipynb_parser | c72a3a21d37332e747fa1316c614d321e423300c | [
"BSD-3-Clause"
] | null | null | null | """Test sphinx builds which execute notebooks."""
import os
from pathlib import Path
from IPython import version_info as ipy_version
import pytest
from myst_nb.core.execute import ExecutionError
from myst_nb.sphinx_ import NbMetadataCollector
def regress_nb_doc(file_regression, sphinx_run, check_nbs):
try:
file_regression.check(
sphinx_run.get_nb(), check_fn=check_nbs, extension=".ipynb", encoding="utf8"
)
finally:
doctree_string = sphinx_run.get_doctree().pformat()
# TODO the hash of the complex_outputs_unrun.ipynb equation PNG
# differs on CI after execution, so it is normalised here
doctree_string = doctree_string.replace(
"438c56ea3dcf99d86cd64df1b23e2b436afb25846434efb1cfec7b660ef01127",
"e2dfbe330154316cfb6f3186e8f57fc4df8aee03b0303ed1345fc22cd51f66de",
)
doctree_string = doctree_string.replace(
"ba12df2746ada2238753ff8514da1431501f9de0fbf63eacda13f6e8c3e799c4",
"e2dfbe330154316cfb6f3186e8f57fc4df8aee03b0303ed1345fc22cd51f66de",
)
# sympy python > 3.7
doctree_string = doctree_string.replace(
"22b9ad367066892ac151e00c2cf0d7e815327649772d7623d80606baf78307cc",
"e2dfbe330154316cfb6f3186e8f57fc4df8aee03b0303ed1345fc22cd51f66de",
)
# change in matplotlib > 3.3
doctree_string = doctree_string.replace(
"1716e562622b606c639ae411adceadd2bdbbaaae765ca9e118500612099a4821",
"cc1d31550c7aaad5128f57d4f4cae576a29174f6cd515e37c0b911f6010659f3",
)
if os.name == "nt": # on Windows image file paths are absolute
doctree_string = doctree_string.replace(
Path(sphinx_run.app.srcdir).as_posix() + "/", ""
)
file_regression.check(doctree_string, extension=".xml", encoding="utf8")
@pytest.mark.sphinx_params("basic_unrun.ipynb", conf={"nb_execution_mode": "auto"})
def test_basic_unrun_auto(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.status())
assert sphinx_run.warnings() == ""
assert "test_name" in sphinx_run.app.env.metadata["basic_unrun"]
regress_nb_doc(file_regression, sphinx_run, check_nbs)
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "basic_unrun")
assert data
assert data["method"] == "auto"
assert data["succeeded"] is True
@pytest.mark.sphinx_params("basic_unrun.ipynb", conf={"nb_execution_mode": "cache"})
def test_basic_unrun_cache(sphinx_run, file_regression, check_nbs):
"""The outputs should be populated."""
sphinx_run.build()
assert sphinx_run.warnings() == ""
assert "test_name" in sphinx_run.app.env.metadata["basic_unrun"]
regress_nb_doc(file_regression, sphinx_run, check_nbs)
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "basic_unrun")
assert data
assert data["method"] == "cache"
assert data["succeeded"] is True
@pytest.mark.sphinx_params("basic_unrun.ipynb", conf={"nb_execution_mode": "inline"})
def test_basic_unrun_inline(sphinx_run, file_regression, check_nbs):
"""The outputs should be populated."""
sphinx_run.build()
assert sphinx_run.warnings() == ""
assert "test_name" in sphinx_run.app.env.metadata["basic_unrun"]
regress_nb_doc(file_regression, sphinx_run, check_nbs)
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "basic_unrun")
assert data
assert data["method"] == "inline"
assert data["succeeded"] is True
@pytest.mark.sphinx_params("basic_unrun.ipynb", conf={"nb_execution_mode": "cache"})
def test_rebuild_cache(sphinx_run):
"""The notebook should only be executed once."""
sphinx_run.build()
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
sphinx_run.invalidate_files()
sphinx_run.build()
assert "Using cached" in sphinx_run.status()
@pytest.mark.sphinx_params("basic_unrun.ipynb", conf={"nb_execution_mode": "force"})
def test_rebuild_force(sphinx_run):
"""The notebook should be executed twice."""
sphinx_run.build()
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
sphinx_run.invalidate_files()
sphinx_run.build()
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
@pytest.mark.sphinx_params(
"basic_unrun.ipynb",
conf={
"nb_execution_mode": "cache",
"nb_execution_excludepatterns": ["basic_*"],
},
)
def test_exclude_path(sphinx_run, file_regression):
"""The notebook should not be executed."""
sphinx_run.build()
assert not NbMetadataCollector.new_exec_data(sphinx_run.env)
assert "Executing" not in sphinx_run.status(), sphinx_run.status()
file_regression.check(
sphinx_run.get_doctree().pformat(), extension=".xml", encoding="utf8"
)
@pytest.mark.skipif(ipy_version[0] < 8, reason="Error message changes for ipython v8")
@pytest.mark.sphinx_params("basic_failing.ipynb", conf={"nb_execution_mode": "cache"})
def test_basic_failing_cache(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.warnings())
assert "Executing notebook failed" in sphinx_run.warnings()
regress_nb_doc(file_regression, sphinx_run, check_nbs)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "basic_failing")
assert data
assert data["method"] == "cache"
assert data["succeeded"] is False
sphinx_run.get_report_file()
@pytest.mark.skipif(ipy_version[0] < 8, reason="Error message changes for ipython v8")
@pytest.mark.sphinx_params("basic_failing.ipynb", conf={"nb_execution_mode": "auto"})
def test_basic_failing_auto(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
assert "Executing notebook failed" in sphinx_run.warnings()
regress_nb_doc(file_regression, sphinx_run, check_nbs)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "basic_failing")
assert data
assert data["method"] == "auto"
assert data["succeeded"] is False
assert data["traceback"]
sphinx_run.get_report_file()
@pytest.mark.skipif(ipy_version[0] < 8, reason="Error message changes for ipython v8")
@pytest.mark.sphinx_params("basic_failing.ipynb", conf={"nb_execution_mode": "inline"})
def test_basic_failing_inline(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
assert "Executing notebook failed" in sphinx_run.warnings()
regress_nb_doc(file_regression, sphinx_run, check_nbs)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "basic_failing")
assert data
assert data["method"] == "inline"
assert data["succeeded"] is False
assert data["traceback"]
sphinx_run.get_report_file()
@pytest.mark.skipif(ipy_version[0] < 8, reason="Error message changes for ipython v8")
@pytest.mark.sphinx_params(
"basic_failing.ipynb",
conf={"nb_execution_mode": "cache", "nb_execution_allow_errors": True},
)
def test_allow_errors_cache(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.status())
assert not sphinx_run.warnings()
regress_nb_doc(file_regression, sphinx_run, check_nbs)
@pytest.mark.skipif(ipy_version[0] < 8, reason="Error message changes for ipython v8")
@pytest.mark.sphinx_params(
"basic_failing.ipynb",
conf={"nb_execution_mode": "auto", "nb_execution_allow_errors": True},
)
def test_allow_errors_auto(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.status())
assert not sphinx_run.warnings()
regress_nb_doc(file_regression, sphinx_run, check_nbs)
@pytest.mark.sphinx_params(
"basic_failing.ipynb",
conf={"nb_execution_raise_on_error": True, "nb_execution_mode": "force"},
)
def test_raise_on_error_force(sphinx_run):
with pytest.raises(ExecutionError, match="basic_failing.ipynb"):
sphinx_run.build()
@pytest.mark.sphinx_params(
"basic_failing.ipynb",
conf={"nb_execution_raise_on_error": True, "nb_execution_mode": "cache"},
)
def test_raise_on_error_cache(sphinx_run):
with pytest.raises(ExecutionError, match="basic_failing.ipynb"):
sphinx_run.build()
@pytest.mark.sphinx_params(
"complex_outputs_unrun.ipynb", conf={"nb_execution_mode": "cache"}
)
def test_complex_outputs_unrun_cache(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.status())
assert sphinx_run.warnings() == ""
regress_nb_doc(file_regression, sphinx_run, check_nbs)
# Widget view and widget state should make it into the HTML
scripts = sphinx_run.get_html().select("script")
assert any(
"application/vnd.jupyter.widget-view+json" in script.get("type", "")
for script in scripts
)
assert any(
"application/vnd.jupyter.widget-state+json" in script.get("type", "")
for script in scripts
)
@pytest.mark.sphinx_params(
"complex_outputs_unrun.ipynb", conf={"nb_execution_mode": "auto"}
)
def test_complex_outputs_unrun_auto(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.status())
assert sphinx_run.warnings() == ""
regress_nb_doc(file_regression, sphinx_run, check_nbs)
# Widget view and widget state should make it into the HTML
scripts = sphinx_run.get_html().select("script")
assert any(
"application/vnd.jupyter.widget-view+json" in script.get("type", "")
for script in scripts
)
assert any(
"application/vnd.jupyter.widget-state+json" in script.get("type", "")
for script in scripts
)
@pytest.mark.sphinx_params("basic_unrun.ipynb", conf={"nb_execution_mode": "off"})
def test_no_execute(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.status())
assert sphinx_run.warnings() == ""
regress_nb_doc(file_regression, sphinx_run, check_nbs)
@pytest.mark.sphinx_params("basic_unrun.ipynb", conf={"nb_execution_mode": "cache"})
def test_jupyter_cache_path(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
assert "Cached executed notebook" in sphinx_run.status()
assert sphinx_run.warnings() == ""
regress_nb_doc(file_regression, sphinx_run, check_nbs)
# Testing relative paths within the notebook
@pytest.mark.sphinx_params("basic_relative.ipynb", conf={"nb_execution_mode": "cache"})
def test_relative_path_cache(sphinx_run):
sphinx_run.build()
assert "Execution Failed" not in sphinx_run.status(), sphinx_run.status()
@pytest.mark.sphinx_params("basic_relative.ipynb", conf={"nb_execution_mode": "force"})
def test_relative_path_force(sphinx_run):
sphinx_run.build()
assert "Execution Failed" not in sphinx_run.status(), sphinx_run.status()
@pytest.mark.sphinx_params(
"kernel_alias.md",
conf={"nb_execution_mode": "force", "nb_kernel_rgx_aliases": {"oth.+": "python3"}},
)
def test_kernel_rgx_aliases(sphinx_run):
sphinx_run.build()
assert sphinx_run.warnings() == ""
@pytest.mark.sphinx_params(
"sleep_10.ipynb",
conf={"nb_execution_mode": "cache", "nb_execution_timeout": 1},
)
def test_execution_timeout(sphinx_run):
"""Execution should fail given the low timeout value."""
sphinx_run.build()
# print(sphinx_run.warnings())
assert "Executing notebook failed" in sphinx_run.warnings()
@pytest.mark.sphinx_params(
"sleep_10_metadata_timeout.ipynb",
conf={"nb_execution_mode": "cache", "nb_execution_timeout": 60},
)
def test_execution_metadata_timeout(sphinx_run):
"""Notebook timeout metadata takes precedence over the execution_timeout config."""
sphinx_run.build()
# print(sphinx_run.warnings())
assert "Executing notebook failed" in sphinx_run.warnings()
@pytest.mark.sphinx_params(
"nb_exec_table.md",
conf={"nb_execution_mode": "auto"},
)
def test_nb_exec_table(sphinx_run, file_regression):
"""Test that the table gets output into the HTML,
including a row for the executed notebook.
"""
sphinx_run.build()
# print(sphinx_run.status())
assert not sphinx_run.warnings()
file_regression.check(
sphinx_run.get_doctree().pformat(), extension=".xml", encoding="utf8"
)
# print(sphinx_run.get_html())
rows = sphinx_run.get_html().select("table.docutils tr")
assert any("nb_exec_table" in row.text for row in rows)
@pytest.mark.sphinx_params(
"custom-formats.Rmd",
conf={
"nb_execution_mode": "auto",
"nb_custom_formats": {".Rmd": ["jupytext.reads", {"fmt": "Rmd"}]},
},
)
def test_custom_convert_auto(sphinx_run, file_regression, check_nbs):
sphinx_run.build()
# print(sphinx_run.status())
assert sphinx_run.warnings() == ""
regress_nb_doc(file_regression, sphinx_run, check_nbs)
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "custom-formats")
assert data
assert data["method"] == "auto"
assert data["succeeded"] is True
@pytest.mark.sphinx_params(
"custom-formats.Rmd",
conf={
"nb_execution_mode": "cache",
"nb_custom_formats": {".Rmd": ["jupytext.reads", {"fmt": "Rmd"}]},
},
)
def test_custom_convert_cache(sphinx_run, file_regression, check_nbs):
"""The outputs should be populated."""
sphinx_run.build()
assert sphinx_run.warnings() == ""
regress_nb_doc(file_regression, sphinx_run, check_nbs)
assert NbMetadataCollector.new_exec_data(sphinx_run.env)
data = NbMetadataCollector.get_exec_data(sphinx_run.env, "custom-formats")
assert data
assert data["method"] == "cache"
assert data["succeeded"] is True
| 36.852941 | 88 | 0.725096 | 1,759 | 13,783 | 5.379761 | 0.114838 | 0.131248 | 0.039945 | 0.058121 | 0.809151 | 0.768889 | 0.758639 | 0.747649 | 0.736342 | 0.695974 | 0 | 0.029212 | 0.155554 | 13,783 | 373 | 89 | 36.951743 | 0.78383 | 0.085105 | 0 | 0.625899 | 0 | 0 | 0.226558 | 0.07278 | 0 | 0 | 0 | 0.002681 | 0.23741 | 1 | 0.093525 | false | 0 | 0.021583 | 0 | 0.115108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
50b12cc3698a959d6a63a9181c1c3a2c13bd568c | 72 | py | Python | trade_remedies_api/core/exporters/writers/__init__.py | uktrade/trade-remedies-api | fbe2d142ef099c7244788a0f72dd1003eaa7edce | [
"MIT"
] | 1 | 2020-08-13T10:37:15.000Z | 2020-08-13T10:37:15.000Z | trade_remedies_api/core/exporters/writers/__init__.py | uktrade/trade-remedies-api | fbe2d142ef099c7244788a0f72dd1003eaa7edce | [
"MIT"
] | 4 | 2020-09-10T13:41:52.000Z | 2020-12-16T09:00:21.000Z | trade_remedies_api/core/exporters/writers/__init__.py | uktrade/trade-remedies-api | fbe2d142ef099c7244788a0f72dd1003eaa7edce | [
"MIT"
] | null | null | null | from .csv_writer import CSVWriter
from .excel_writer import ExcelWriter
| 24 | 37 | 0.861111 | 10 | 72 | 6 | 0.7 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 72 | 2 | 38 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ba23474178c0716352a9af312b31169d07319dd9 | 1,710 | py | Python | hummingbot/client/ui/scroll_handlers.py | BGTCapital/hummingbot | 2c50f50d67cedccf0ef4d8e3f4c8cdce3dc87242 | [
"Apache-2.0"
] | 3,027 | 2019-04-04T18:52:17.000Z | 2022-03-30T09:38:34.000Z | hummingbot/client/ui/scroll_handlers.py | BGTCapital/hummingbot | 2c50f50d67cedccf0ef4d8e3f4c8cdce3dc87242 | [
"Apache-2.0"
] | 4,080 | 2019-04-04T19:51:11.000Z | 2022-03-31T23:45:21.000Z | hummingbot/client/ui/scroll_handlers.py | BGTCapital/hummingbot | 2c50f50d67cedccf0ef4d8e3f4c8cdce3dc87242 | [
"Apache-2.0"
] | 1,342 | 2019-04-04T20:50:53.000Z | 2022-03-31T15:22:36.000Z | from prompt_toolkit.layout.containers import Window
from prompt_toolkit.buffer import Buffer
from typing import Optional
def scroll_down(event, window: Optional[Window] = None, buffer: Optional[Buffer] = None):
w = window or event.app.layout.current_window
b = buffer or event.app.current_buffer
if w and w.render_info:
info = w.render_info
ui_content = info.ui_content
# Height to scroll.
scroll_height = info.window_height // 2
# Calculate how many lines are equivalent to that vertical space.
y = b.document.cursor_position_row + 1
height = 0
while y < ui_content.line_count:
line_height = info.get_height_for_line(y)
if height + line_height < scroll_height:
height += line_height
y += 1
else:
break
b.cursor_position = b.document.translate_row_col_to_index(y, 0)
def scroll_up(event, window: Optional[Window] = None, buffer: Optional[Buffer] = None):
w = window or event.app.layout.current_window
b = buffer or event.app.current_buffer
if w and w.render_info:
info = w.render_info
# Height to scroll.
scroll_height = info.window_height // 2
# Calculate how many lines are equivalent to that vertical space.
y = max(0, b.document.cursor_position_row - 1)
height = 0
while y > 0:
line_height = info.get_height_for_line(y)
if height + line_height < scroll_height:
height += line_height
y -= 1
else:
break
b.cursor_position = b.document.translate_row_col_to_index(y, 0)
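The height-accumulation loop shared by both handlers can be illustrated in isolation; a minimal sketch with hypothetical per-line heights and no prompt_toolkit dependency (`lines_for_scroll` and the sample data are not part of the module above):

```python
def lines_for_scroll(line_heights, start, scroll_height):
    """Advance from `start`, consuming lines until roughly scroll_height is used."""
    y, height = start, 0
    while y < len(line_heights):
        line_height = line_heights[y]
        if height + line_height < scroll_height:
            height += line_height
            y += 1
        else:
            break
    return y

# A 10-row window scrolls by half its height; a wrapped line may count as 2.
print(lines_for_scroll([1, 1, 2, 1, 1, 1, 1, 1], start=1, scroll_height=10 // 2))  # → 4
```

The same loop runs in both directions in the real handlers; only the direction of the cursor movement (`y += 1` versus `y -= 1`) differs.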
| 31.090909 | 89 | 0.623977 | 229 | 1,710 | 4.449782 | 0.248908 | 0.058881 | 0.039254 | 0.049068 | 0.830226 | 0.830226 | 0.830226 | 0.830226 | 0.830226 | 0.830226 | 0 | 0.010067 | 0.302924 | 1,710 | 54 | 90 | 31.666667 | 0.844799 | 0.094152 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.083333 | 0 | 0.138889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e8644c62d16e634c014893042f42abea8be98435 | 10,117 | py | Python | hack/generators/release-controllers/content/redirect_resources.py | rhopp/release | 2835638ebcc62ad7e3c9778a6ce12b30c7c6576b | [
"Apache-2.0"
] | null | null | null | hack/generators/release-controllers/content/redirect_resources.py | rhopp/release | 2835638ebcc62ad7e3c9778a6ce12b30c7c6576b | [
"Apache-2.0"
] | null | null | null | hack/generators/release-controllers/content/redirect_resources.py | rhopp/release | 2835638ebcc62ad7e3c9778a6ce12b30c7c6576b | [
"Apache-2.0"
] | null | null | null |
def _add_redirect_resources(gendoc):
"""
Return resources necessary to redirect release controller requests to the
OSD cluster instances where they live now.
"""
context = gendoc.context
gendoc.add_comments("""
Bootstrap the environment for the amd64 tests image. The caches require an amd64 "tests" image to execute on
the cluster. This imagestream is used as a command-line parameter to the release-controller...
--tools-image-stream-tag=release-controller-bootstrap:tests
""")
gendoc.append({
'apiVersion': 'image.openshift.io/v1',
'kind': 'ImageStream',
'metadata': {
'name': 'release-controller-bootstrap',
'namespace': context.is_namespace
},
'spec': {
'lookupPolicy': {
'local': False
},
'tags': [
{
'from': {
'kind': 'DockerImage',
'name': 'image-registry.openshift-image-registry.svc:5000/ocp/4.6:tests'
},
'importPolicy': {
'scheduled': True
},
'name': 'tests',
'referencePolicy': {
'type': 'Source'
}
}]
}
})
gendoc.append({
'apiVersion': 'v1',
'kind': 'Route',
'metadata': {
'name': f'release-controller-{context.is_namespace}',
'namespace': 'ci'
},
'spec': {
'host': f'openshift-release{context.suffix}.svc.ci.openshift.org',
'tls': {
'insecureEdgeTerminationPolicy': 'Redirect',
'termination': 'Edge'
},
'to': {
'kind': 'Service',
'name': f'release-controller-{context.is_namespace}-redirect'
}
}
})
gendoc.append({
'apiVersion': 'v1',
'data': {
'default.conf': 'server {\n listen 8080;\n return 302 https://%s$request_uri;\n}\n' % context.rc_app_url
},
'kind': 'ConfigMap',
'metadata': {
'name': f'release-controller-{context.is_namespace}-redirect-config',
'namespace': context.config.rc_deployment_namespace
}
})
gendoc.append({
'apiVersion': 'apps/v1',
'kind': 'Deployment',
'metadata': {
'labels': {
'app': f'release-controller-{context.is_namespace}-redirect'
},
'name': f'release-controller-{context.is_namespace}-redirect',
'namespace': context.config.rc_deployment_namespace
},
'spec': {
'replicas': 2,
'selector': {
'matchLabels': {
'component': f'release-controller-{context.is_namespace}-redirect'
}
},
'template': {
'metadata': {
'labels': {
'app': 'prow',
'component': f'release-controller-{context.is_namespace}-redirect'
}
},
'spec': {
'affinity': {
'podAntiAffinity': {
'requiredDuringSchedulingIgnoredDuringExecution': [{
'labelSelector': {
'matchExpressions': [
{
'key': 'component',
'operator': 'In',
'values': [
f'release-controller-{context.is_namespace}-redirect']
}]
},
'topologyKey': 'kubernetes.io/hostname'
}]
}
},
'containers': [{
'image': 'nginxinc/nginx-unprivileged:1.17',
'name': 'nginx',
'volumeMounts': [{
'mountPath': '/etc/nginx/conf.d',
'name': 'config'
}]
}],
'volumes': [{
'configMap': {
'name': f'release-controller-{context.is_namespace}-redirect-config'
},
'name': 'config'
}]
}
}
}
})
gendoc.append({
'apiVersion': 'v1',
'kind': 'Service',
'metadata': {
'labels': {
'app': 'prow',
'component': f'release-controller-{context.is_namespace}-redirect'
},
'name': f'release-controller-{context.is_namespace}-redirect',
'namespace': 'ci'
},
'spec': {
'ports': [{
'name': 'main',
'port': 8080,
'protocol': 'TCP',
'targetPort': 8080
}],
'selector': {
'component': f'release-controller-{context.is_namespace}-redirect'
},
'sessionAffinity': 'None',
'type': 'ClusterIP'
}
})
def _add_files_cache_redirect_resources(gendoc):
"""
Return resources necessary to redirect the release controller's file-cache requests to the
OSD cluster instances where they live now.
"""
context = gendoc.context
gendoc.append({
'apiVersion': 'v1',
'kind': 'Route',
'metadata': {
'name': f'release-controller-files-cache-{context.is_namespace}',
'namespace': context.jobs_namespace
},
'spec': {
'host': f'{context.fc_api_url}',
'tls': {
'insecureEdgeTerminationPolicy': 'Redirect',
'termination': 'Edge'
},
'to': {
'kind': 'Service',
'name': f'release-controller-files-cache-{context.is_namespace}-redirect'
}
}
})
gendoc.append({
'apiVersion': 'v1',
'data': {
'default.conf': 'server {\n listen 8080;\n return 302 https://%s$request_uri;\n}\n' % context.fc_app_url
},
'kind': 'ConfigMap',
'metadata': {
'name': f'release-controller-files-cache-{context.is_namespace}-redirect-config',
'namespace': context.jobs_namespace
}
})
gendoc.append({
'apiVersion': 'apps/v1',
'kind': 'Deployment',
'metadata': {
'labels': {
'app': f'release-controller-files-cache-{context.is_namespace}-redirect'
},
'name': f'release-controller-files-cache-{context.is_namespace}-redirect',
'namespace': context.jobs_namespace
},
'spec': {
'replicas': 2,
'selector': {
'matchLabels': {
'component': f'release-controller-files-cache-{context.is_namespace}-redirect'
}
},
'template': {
'metadata': {
'labels': {
'app': 'prow',
'component': f'release-controller-files-cache-{context.is_namespace}-redirect'
}
},
'spec': {
'affinity': {
'podAntiAffinity': {
'requiredDuringSchedulingIgnoredDuringExecution': [{
'labelSelector': {
'matchExpressions': [
{
'key': 'component',
'operator': 'In',
'values': [
f'release-controller-files-cache-{context.is_namespace}-redirect']
}]
},
'topologyKey': 'kubernetes.io/hostname'
}]
}
},
'containers': [{
'image': 'nginxinc/nginx-unprivileged:1.17',
'name': 'nginx',
'volumeMounts': [{
'mountPath': '/etc/nginx/conf.d',
'name': 'config'
}]
}],
'volumes': [{
'configMap': {
'name': f'release-controller-files-cache-{context.is_namespace}-redirect-config'
},
'name': 'config'
}]
}
}
}
})
gendoc.append({
'apiVersion': 'v1',
'kind': 'Service',
'metadata': {
'labels': {
'app': 'prow',
'component': f'release-controller-files-cache-{context.is_namespace}-redirect'
},
'name': f'release-controller-files-cache-{context.is_namespace}-redirect',
'namespace': context.jobs_namespace
},
'spec': {
'ports': [{
'name': 'main',
'port': 80,
'protocol': 'TCP',
'targetPort': 8080
}],
'selector': {
'component': f'release-controller-files-cache-{context.is_namespace}-redirect'
},
'sessionAffinity': 'None',
'type': 'ClusterIP'
}
})
def add_redirect_resources(gendoc):
_add_redirect_resources(gendoc)
_add_files_cache_redirect_resources(gendoc)
| 34.766323 | 118 | 0.409805 | 670 | 10,117 | 6.098507 | 0.220896 | 0.120656 | 0.110132 | 0.13999 | 0.838962 | 0.808125 | 0.786344 | 0.778512 | 0.746207 | 0.696525 | 0 | 0.010116 | 0.462588 | 10,117 | 290 | 119 | 34.886207 | 0.741402 | 0.024711 | 0 | 0.632959 | 0 | 0.007491 | 0.371972 | 0.188988 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011236 | false | 0 | 0.003745 | 0 | 0.014981 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e87aa9f9ef41ab1460ee768b408c3b574e899d00 | 2,540 | py | Python | tests/test_counters.py | RI-imaging/ODTbrain | 063f9d1cf7803dd0dda9d68d2847f16c2496c205 | [
"BSD-3-Clause"
] | 15 | 2016-01-22T20:08:10.000Z | 2022-03-24T17:00:27.000Z | tests/test_counters.py | RI-imaging/ODTbrain | 063f9d1cf7803dd0dda9d68d2847f16c2496c205 | [
"BSD-3-Clause"
] | 15 | 2017-01-17T12:07:58.000Z | 2022-02-02T22:30:33.000Z | tests/test_counters.py | RI-imaging/ODTbrain | 063f9d1cf7803dd0dda9d68d2847f16c2496c205 | [
"BSD-3-Clause"
] | 6 | 2017-10-29T20:05:42.000Z | 2021-02-19T23:23:36.000Z | """Tests progress counters"""
import multiprocessing as mp
import numpy as np
import odtbrain
from common_methods import create_test_sino_2d, create_test_sino_3d, \
get_test_parameter_set
def test_integrate_2d():
sino, angles = create_test_sino_2d(N=10)
p = get_test_parameter_set(1)[0]
# complex
jmc = mp.Value("i", 0)
jmm = mp.Value("i", 0)
odtbrain.integrate_2d(sino, angles,
count=jmc,
max_count=jmm,
**p)
assert jmc.value == jmm.value
assert jmc.value != 0
def test_fmp_2d():
sino, angles = create_test_sino_2d(N=10)
p = get_test_parameter_set(1)[0]
# complex
jmc = mp.Value("i", 0)
jmm = mp.Value("i", 0)
odtbrain.fourier_map_2d(sino, angles,
count=jmc,
max_count=jmm,
**p)
assert jmc.value == jmm.value
assert jmc.value != 0
def test_bpp_2d():
sino, angles = create_test_sino_2d(N=10)
p = get_test_parameter_set(1)[0]
# complex
jmc = mp.Value("i", 0)
jmm = mp.Value("i", 0)
odtbrain.backpropagate_2d(sino, angles, padval=0,
count=jmc,
max_count=jmm,
**p)
assert jmc.value == jmm.value
assert jmc.value != 0
def test_back3d():
sino, angles = create_test_sino_3d(Nx=10, Ny=10)
p = get_test_parameter_set(1)[0]
# complex
jmc = mp.Value("i", 0)
jmm = mp.Value("i", 0)
odtbrain.backpropagate_3d(sino, angles, padval=0,
dtype=np.float64,
count=jmc,
max_count=jmm,
**p)
assert jmc.value == jmm.value
assert jmc.value != 0
def test_back3d_tilted():
sino, angles = create_test_sino_3d(Nx=10, Ny=10)
p = get_test_parameter_set(1)[0]
# complex
jmc = mp.Value("i", 0)
jmm = mp.Value("i", 0)
odtbrain.backpropagate_3d_tilted(sino, angles, padval=0,
dtype=np.float64,
count=jmc,
max_count=jmm,
**p)
assert jmc.value == jmm.value
assert jmc.value != 0
if __name__ == "__main__":
# Run all tests
loc = locals()
for key in list(loc.keys()):
if key.startswith("test_") and hasattr(loc[key], "__call__"):
loc[key]()
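The `jmc`/`jmm` pattern exercised above treats `multiprocessing.Value` as a shared progress counter: the algorithm sets `max_count` once and increments `count` per completed step, and the tests assert the two meet. A minimal sketch of that protocol (`run_steps` is a hypothetical stand-in, not part of odtbrain):

```python
import multiprocessing as mp

def run_steps(count, max_count, total=5):
    # Announce the total up front, then bump the counter once per finished step.
    max_count.value = total
    for _ in range(total):
        count.value += 1

jmc = mp.Value("i", 0)  # current progress
jmm = mp.Value("i", 0)  # announced maximum
run_steps(jmc, jmm)
print(jmc.value == jmm.value, jmc.value != 0)  # → True True
```

Because `mp.Value` wraps shared memory, the same assertions hold even when the reconstruction runs in worker processes.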
| 26.736842 | 70 | 0.515354 | 321 | 2,540 | 3.847352 | 0.202492 | 0.080972 | 0.064777 | 0.072874 | 0.744939 | 0.744939 | 0.744939 | 0.744939 | 0.744939 | 0.744939 | 0 | 0.039474 | 0.371654 | 2,540 | 94 | 71 | 27.021277 | 0.734336 | 0.030709 | 0 | 0.701493 | 0 | 0 | 0.012648 | 0 | 0 | 0 | 0 | 0 | 0.149254 | 1 | 0.074627 | false | 0 | 0.059701 | 0 | 0.134328 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e88002d292de16d7dd715f56d346ff9d86bbb42d | 47 | py | Python | ssseg/modules/models/backbones/bricks/normalization/groupnorm/__init__.py | zhizhangxian/sssegmentation | 90613f6e0abf4cdd729cf382ab2a915e106d8649 | [
"MIT"
] | 41 | 2021-08-28T01:29:19.000Z | 2022-03-30T11:28:37.000Z | ssseg/modules/models/backbones/bricks/normalization/groupnorm/__init__.py | zhizhangxian/sssegmentation | 90613f6e0abf4cdd729cf382ab2a915e106d8649 | [
"MIT"
] | 6 | 2021-08-31T08:54:39.000Z | 2021-11-02T10:45:47.000Z | ssseg/modules/models/backbones/bricks/normalization/groupnorm/__init__.py | zhizhangxian/sssegmentation | 90613f6e0abf4cdd729cf382ab2a915e106d8649 | [
"MIT"
] | 1 | 2021-09-08T01:41:10.000Z | 2021-09-08T01:41:10.000Z | '''initialize'''
from torch.nn import GroupNorm | 23.5 | 30 | 0.765957 | 6 | 47 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 2 | 30 | 23.5 | 0.837209 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e88ea7d64f1c0b0fc929d7a119d2071c4dede801 | 157 | py | Python | ionyweb/plugin_app/plugin_image/admin.py | makinacorpus/ionyweb | 2f18e3dc1fdc86c7e19bae3778e67e28a37567be | [
"BSD-3-Clause"
] | 4 | 2015-09-28T10:07:39.000Z | 2019-10-18T20:14:07.000Z | ionyweb/plugin_app/plugin_image/admin.py | makinacorpus/ionyweb | 2f18e3dc1fdc86c7e19bae3778e67e28a37567be | [
"BSD-3-Clause"
] | 1 | 2021-03-19T21:41:33.000Z | 2021-03-19T21:41:33.000Z | ionyweb/plugin_app/plugin_image/admin.py | makinacorpus/ionyweb | 2f18e3dc1fdc86c7e19bae3778e67e28a37567be | [
"BSD-3-Clause"
] | 1 | 2017-10-12T09:25:19.000Z | 2017-10-12T09:25:19.000Z | # -*- coding: utf-8 -*-
from django.contrib import admin
from ionyweb.plugin_app.plugin_image.models import Plugin_Image
admin.site.register(Plugin_Image)
| 22.428571 | 63 | 0.789809 | 23 | 157 | 5.217391 | 0.652174 | 0.275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007092 | 0.101911 | 157 | 6 | 64 | 26.166667 | 0.843972 | 0.133758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8940e6226c1eb8003e88924057751f98abb6952 | 36 | py | Python | ex34.py | dark-teal-coder/book-learn-python-the-hard-way | e63abddde8c29dcb1c24d8a98116a78b05be67eb | [
"MIT"
] | null | null | null | ex34.py | dark-teal-coder/book-learn-python-the-hard-way | e63abddde8c29dcb1c24d8a98116a78b05be67eb | [
"MIT"
] | null | null | null | ex34.py | dark-teal-coder/book-learn-python-the-hard-way | e63abddde8c29dcb1c24d8a98116a78b05be67eb | [
"MIT"
] | null | null | null | print("There is no code in Ex.34.")
| 18 | 35 | 0.666667 | 8 | 36 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.166667 | 36 | 1 | 36 | 36 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
fa1366e08afad60bba9f0bb932d0e68a496f99b6 | 2,653 | py | Python | tests/test_perfect_entanglers.py | qucontrol/weylchamber | 6859efe0b82de667c3c5b7123123268388c194db | [
"BSD-3-Clause"
] | 3 | 2019-10-10T11:51:34.000Z | 2021-02-09T14:50:10.000Z | tests/test_perfect_entanglers.py | qucontrol/weylchamber | 6859efe0b82de667c3c5b7123123268388c194db | [
"BSD-3-Clause"
] | 5 | 2018-11-24T03:04:19.000Z | 2022-03-18T03:54:39.000Z | tests/test_perfect_entanglers.py | qucontrol/weylchamber | 6859efe0b82de667c3c5b7123123268388c194db | [
"BSD-3-Clause"
] | 1 | 2018-11-24T10:46:46.000Z | 2018-11-24T10:46:46.000Z | import warnings
import os
import qutip
import numpy as np
from weylchamber import perfect_entanglers
def test_pe_chi_construction_uni(request):
"""Test the co-state construction for the perfect-entanglers functional
in the case of unitary dynamics in the 4x4 subspace"""
testdir = os.path.splitext(request.module.__file__)[0]
fw = [0 for i in range(4)]
chi_exp = [0 for i in range(4)]
for i in range(4):
file = os.path.join(testdir, 'fw_{}_uni.dat'.format(i + 1))
fw[i] = qutip.Qobj(
np.loadtxt(file, usecols=[0], unpack=True)
+ 1j * np.loadtxt(file, usecols=[1], unpack=True)
)
file = os.path.join(testdir, 'chis_{}_uni.dat'.format(i + 1))
chi_exp[i] = qutip.Qobj(
np.loadtxt(file, usecols=[0], unpack=True)
+ 1j * np.loadtxt(file, usecols=[1], unpack=True)
)
chi_exp[i] = qutip.Qobj(chi_exp[i])
psi_00 = qutip.Qobj(np.array([1, 0, 0, 0]))
psi_01 = qutip.Qobj(np.array([0, 1, 0, 0]))
psi_10 = qutip.Qobj(np.array([0, 0, 1, 0]))
psi_11 = qutip.Qobj(np.array([0, 0, 0, 1]))
chi_constructor = perfect_entanglers.make_PE_krotov_chi_constructor(
[psi_00, psi_01, psi_10, psi_11]
)
chi_out = chi_constructor(fw)
for i in range(4):
assert abs((chi_out[i] - chi_exp[i]).norm()) < 1e-12
def test_pe_chi_construction_nonuni(request):
"""Test the co-state construction for the perfect-entanglers functional
in the case of non-unitary dynamics in the 4x4 subspace"""
testdir = os.path.splitext(request.module.__file__)[0]
fw = [0 for i in range(4)]
chi_exp = [0 for i in range(4)]
for i in range(4):
file = os.path.join(testdir, 'fw_{}_nonuni.dat'.format(i + 1))
fw[i] = qutip.Qobj(
np.loadtxt(file, usecols=[0], unpack=True)
+ 1j * np.loadtxt(file, usecols=[1], unpack=True)
)
file = os.path.join(testdir, 'chis_{}_nonuni.dat'.format(i + 1))
chi_exp[i] = qutip.Qobj(
np.loadtxt(file, usecols=[0], unpack=True)
+ 1j * np.loadtxt(file, usecols=[1], unpack=True)
)
chi_exp[i] = qutip.Qobj(chi_exp[i])
psi_00 = qutip.Qobj(np.array([1, 0, 0, 0, 0]))
psi_01 = qutip.Qobj(np.array([0, 1, 0, 0, 0]))
psi_10 = qutip.Qobj(np.array([0, 0, 1, 0, 0]))
psi_11 = qutip.Qobj(np.array([0, 0, 0, 1, 0]))
chi_constructor = perfect_entanglers.make_PE_krotov_chi_constructor(
[psi_00, psi_01, psi_10, psi_11], unitarity_weight=0.1
)
chi_out = chi_constructor(fw)
for i in range(4):
assert abs((chi_out[i] - chi_exp[i]).norm()) < 1e-12
| 39.597015 | 75 | 0.609499 | 426 | 2,653 | 3.631455 | 0.166667 | 0.019392 | 0.085326 | 0.056884 | 0.925016 | 0.882353 | 0.882353 | 0.882353 | 0.882353 | 0.882353 | 0 | 0.054321 | 0.236713 | 2,653 | 66 | 76 | 40.19697 | 0.70963 | 0.089333 | 0 | 0.526316 | 0 | 0 | 0.025866 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 1 | 0.035088 | false | 0 | 0.087719 | 0 | 0.122807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fa3123761b1cca0b9ea4a9b012096873f777ad9d | 27 | py | Python | sutime/__init__.py | lli5ba/carpool-finder | c771c65928e6714f4be808df123e947a2cfb5ec9 | [
"MIT"
] | null | null | null | sutime/__init__.py | lli5ba/carpool-finder | c771c65928e6714f4be808df123e947a2cfb5ec9 | [
"MIT"
] | null | null | null | sutime/__init__.py | lli5ba/carpool-finder | c771c65928e6714f4be808df123e947a2cfb5ec9 | [
"MIT"
] | null | null | null | from .sutime import SUTime
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa64ba6946f9ccfe63038be86bd28ecd9e812adc | 1,708 | py | Python | tests/test_e2e_91_Exceptional_not_sorted.py | blue-monk/csv-diff-python2 | 8fbe9d149231b7d321d867497200e7c0c0118e57 | [
"MIT"
] | null | null | null | tests/test_e2e_91_Exceptional_not_sorted.py | blue-monk/csv-diff-python2 | 8fbe9d149231b7d321d867497200e7c0c0118e57 | [
"MIT"
] | null | null | null | tests/test_e2e_91_Exceptional_not_sorted.py | blue-monk/csv-diff-python2 | 8fbe9d149231b7d321d867497200e7c0c0118e57 | [
"MIT"
] | null | null | null | import sys
import textwrap
import pytest
from src.csvdiff2 import csvdiff
def test_string_matching_key_not_sorted(lhs, rhs, capfd):
lhs.write(textwrap.dedent('''
head1, head2, head3, head4
key1-1, value1-1, value2-1, value3-1
key1-2, value1-2, value2-2, value3-2
key1-3, value1-3, value2-3, value3-3
''').strip())
rhs.write(textwrap.dedent('''
head1, head2, head3, head4
key1-1, value1-1, value2-1, value3-1
key1-3, value1-2, value2-2, value3-2
key1-2, value1-3, value2-3, value3-3
''').strip())
sys.argv = ['csvdiff.py', lhs.strpath, rhs.strpath, '-d']
with pytest.raises(SystemExit) as e:
csvdiff.main()
assert e.type == SystemExit
assert e.value.code == 1
_, err = capfd.readouterr()
assert str(err).find("are not sorted. [current_key=['key1-2'], previous_key=['key1-3']") > 0
def test_numerical_matching_key_not_sorted(lhs, rhs, capfd):
lhs.write(textwrap.dedent('''
head1, head2, head3, head4
1, value1-1, value2-1, value3-1
103, value1-3, value2-3, value3-3
12, value1-2, value2-2, value3-2
''').strip())
rhs.write(textwrap.dedent('''
head1, head2, head3, head4
1, value1-1, value2-1, value3-1
12, value1-2, value2-2, value3-2
103, value1-3, value2-3, value3-3
''').strip())
sys.argv = ['csvdiff.py', lhs.strpath, rhs.strpath, '-k0:3', '-d']
with pytest.raises(SystemExit) as e:
csvdiff.main()
assert e.type == SystemExit
assert e.value.code == 1
_, err = capfd.readouterr()
assert str(err).find("are not sorted. [current_key=['012'], previous_key=['103']") > 0
| 28 | 96 | 0.607143 | 247 | 1,708 | 4.133603 | 0.238866 | 0.03526 | 0.074437 | 0.094025 | 0.863859 | 0.863859 | 0.863859 | 0.722821 | 0.722821 | 0.70715 | 0 | 0.098099 | 0.230094 | 1,708 | 60 | 97 | 28.466667 | 0.678327 | 0 | 0 | 0.681818 | 0 | 0 | 0.485044 | 0.039883 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d703ae0a41ee06e18f791d98f10a2def3839de87 | 106 | py | Python | dedup/cli.py | megacoder/dedup | 8d414300e517134420577168f591096faea00e2b | [
"MIT"
] | null | null | null | dedup/cli.py | megacoder/dedup | 8d414300e517134420577168f591096faea00e2b | [
"MIT"
] | null | null | null | dedup/cli.py | megacoder/dedup | 8d414300e517134420577168f591096faea00e2b | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# vim: noet sw=4 ts=4
def main():
import dedup
return dedup.Deduplicate().main()
| 15.142857 | 34 | 0.679245 | 18 | 106 | 4 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0.150943 | 106 | 6 | 35 | 17.666667 | 0.777778 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ad21bdf3e8dda72d467a303d98930193bbcb4a50 | 35 | py | Python | env/lib/python3.8/site-packages/plotly/graph_objs/layout/template/data/_area.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 76 | 2020-07-06T14:44:05.000Z | 2022-02-14T15:30:21.000Z | env/lib/python3.8/site-packages/plotly/graph_objs/layout/template/data/_area.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 27 | 2020-04-28T21:23:12.000Z | 2021-06-25T15:36:38.000Z | env/lib/python3.8/site-packages/plotly/graph_objs/layout/template/data/_area.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 11 | 2020-07-12T16:18:07.000Z | 2022-02-05T16:48:35.000Z | from plotly.graph_objs import Area
| 17.5 | 34 | 0.857143 | 6 | 35 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d1413c230e0f7537e6f83fac24fbebaaf5e52dcd | 134 | py | Python | sensorharm/__init__.py | MarujoRe/sensor_harmonization | bc05d0687a85a6bbd669c07eaec6e78a94be900c | [
"MIT"
] | 1 | 2021-03-03T20:19:51.000Z | 2021-03-03T20:19:51.000Z | sensorharm/__init__.py | marujore/sensor-harmonization | bc05d0687a85a6bbd669c07eaec6e78a94be900c | [
"MIT"
] | 3 | 2020-11-10T14:28:52.000Z | 2020-11-17T16:49:45.000Z | sensorharm/__init__.py | marujore/sensor-harmonization | bc05d0687a85a6bbd669c07eaec6e78a94be900c | [
"MIT"
] | 2 | 2020-02-07T13:43:13.000Z | 2020-11-01T16:38:07.000Z | from .landsat_harmonization import landsat_harmonize
from .sentinel2_harmonization import sentinel_harmonize, sentinel_harmonize_SAFE
| 44.666667 | 80 | 0.910448 | 15 | 134 | 7.733333 | 0.533333 | 0.327586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008 | 0.067164 | 134 | 2 | 81 | 67 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d1588f89ef587ca3b3b5683b5978cd7367973f64 | 36 | py | Python | mltoolkit/mlmo/eval/metrics/__init__.py | stungkit/Copycat-abstractive-opinion-summarizer | 04fe5393a7bb6883516766b762f6a0c530e95375 | [
"MIT"
] | 51 | 2020-09-25T07:05:01.000Z | 2022-03-17T12:07:40.000Z | mltoolkit/mlmo/eval/metrics/__init__.py | stungkit/Copycat-abstractive-opinion-summarizer | 04fe5393a7bb6883516766b762f6a0c530e95375 | [
"MIT"
] | 4 | 2020-10-19T10:00:22.000Z | 2022-03-14T17:02:47.000Z | mltoolkit/mlmo/eval/metrics/__init__.py | stungkit/Copycat-abstractive-opinion-summarizer | 04fe5393a7bb6883516766b762f6a0c530e95375 | [
"MIT"
] | 22 | 2020-09-22T01:06:47.000Z | 2022-01-26T14:20:09.000Z | from .base_metric import BaseMetric
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d16d60e31ee348a7f96c40063554459c1a120556 | 112 | py | Python | src/lexer/tokenizer/__init__.py | jklypchak13/MarkovTextGeneration | 0b9ac28db6998454f73af4eeabffa43f441fcc4b | [
"MIT"
] | null | null | null | src/lexer/tokenizer/__init__.py | jklypchak13/MarkovTextGeneration | 0b9ac28db6998454f73af4eeabffa43f441fcc4b | [
"MIT"
] | null | null | null | src/lexer/tokenizer/__init__.py | jklypchak13/MarkovTextGeneration | 0b9ac28db6998454f73af4eeabffa43f441fcc4b | [
"MIT"
] | null | null | null | from .base import Tokenizer
from .char_tokenizer import CharTokenizer
from .word_tokenizer import WordTokenizer
| 28 | 41 | 0.866071 | 14 | 112 | 6.785714 | 0.571429 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 112 | 3 | 42 | 37.333333 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0f2fa2199ba0f535e89700ff4c2ed0d1cc8a0092 | 304 | py | Python | test/conftest.py | mikss/pr3 | 0cab2a6edf0ff6ed56e1d91132bac72be95d8ff6 | [
"MIT"
] | null | null | null | test/conftest.py | mikss/pr3 | 0cab2a6edf0ff6ed56e1d91132bac72be95d8ff6 | [
"MIT"
] | null | null | null | test/conftest.py | mikss/pr3 | 0cab2a6edf0ff6ed56e1d91132bac72be95d8ff6 | [
"MIT"
] | null | null | null | import pytest
@pytest.fixture
def random_seed():
return 2021
@pytest.fixture
def p_dim():
return 100
@pytest.fixture
def q_dim():
return 2
@pytest.fixture
def sparsity():
return 10
@pytest.fixture
def n_samples():
return 1000
@pytest.fixture
def eps_std():
return 100
| 9.5 | 18 | 0.677632 | 43 | 304 | 4.674419 | 0.465116 | 0.38806 | 0.477612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072034 | 0.223684 | 304 | 31 | 19 | 9.806452 | 0.779661 | 0 | 0 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.315789 | true | 0 | 0.052632 | 0.315789 | 0.684211 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0f3ca2f662d8fb2ee3a835f4ef18f6b9ab739feb | 61 | py | Python | galsim_hsc/__init__.py | andrevitorelli/TenGU | 539a39552bb18cc19dc941003e2a44d646da98e1 | [
"MIT"
] | 1 | 2021-03-19T15:36:48.000Z | 2021-03-19T15:36:48.000Z | galsim_hsc/__init__.py | andrevitorelli/TenGU | 539a39552bb18cc19dc941003e2a44d646da98e1 | [
"MIT"
] | null | null | null | galsim_hsc/__init__.py | andrevitorelli/TenGU | 539a39552bb18cc19dc941003e2a44d646da98e1 | [
"MIT"
] | null | null | null | """galsim_hsc dataset."""
from .galsim_hsc import GalSimHSC
| 15.25 | 33 | 0.754098 | 8 | 61 | 5.5 | 0.75 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114754 | 61 | 3 | 34 | 20.333333 | 0.814815 | 0.311475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e38eab93b1935579305782d3dff3bdbb6ea0f13 | 42 | py | Python | wrapped_driver/__init__.py | balexander85/wrapped_driver | 2b5d5f13a8cbf52a3ed5fc4b21bf9ea282d3b7a1 | [
"MIT"
] | null | null | null | wrapped_driver/__init__.py | balexander85/wrapped_driver | 2b5d5f13a8cbf52a3ed5fc4b21bf9ea282d3b7a1 | [
"MIT"
] | null | null | null | wrapped_driver/__init__.py | balexander85/wrapped_driver | 2b5d5f13a8cbf52a3ed5fc4b21bf9ea282d3b7a1 | [
"MIT"
] | null | null | null | from .wrapped_driver import WrappedDriver
| 21 | 41 | 0.880952 | 5 | 42 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e7a0b59f0c01400197cc6d3057a240ddfa65cdd | 3,322 | py | Python | backend/src/data/github/graphql/user/follows/follows.py | rutvikpadhiyar000/github-trends | af66cd1419586c6c91b75c3e32013160b2c36bcb | [
"MIT"
] | 157 | 2021-09-11T15:53:52.000Z | 2022-03-27T07:03:09.000Z | backend/src/data/github/graphql/user/follows/follows.py | rutvikpadhiyar000/github-trends | af66cd1419586c6c91b75c3e32013160b2c36bcb | [
"MIT"
] | 120 | 2021-02-27T21:37:47.000Z | 2022-03-25T14:44:08.000Z | backend/src/data/github/graphql/user/follows/follows.py | rutvikpadhiyar000/github-trends | af66cd1419586c6c91b75c3e32013160b2c36bcb | [
"MIT"
] | 5 | 2021-12-06T18:43:01.000Z | 2022-01-31T07:06:16.000Z | # import json
from typing import Dict, Optional, Union
from src.data.github.graphql.template import get_template
from src.data.github.graphql.user.follows.models import RawFollows
def get_user_followers(
user_id: str, first: int = 100, after: str = "", access_token: Optional[str] = None
) -> RawFollows:
    """Gets a user's followers."""
variables: Dict[str, Union[str, int]] = (
{"login": user_id, "first": first, "after": after}
if after != ""
else {"login": user_id, "first": first}
)
query_str: str = (
"""
query getUser($login: String!, $first: Int!, $after: String!) {
user(login: $login){
followers(first: $first, after: $after){
nodes{
name,
login,
url
}
pageInfo{
hasNextPage,
endCursor
}
}
}
}
"""
if after != ""
else """
query getUser($login: String!, $first: Int!) {
user(login: $login){
followers(first: $first){
nodes{
name,
login,
url
}
pageInfo{
hasNextPage,
endCursor
}
}
}
}
"""
)
query = {
"variables": variables,
"query": query_str,
}
output_dict = get_template(query, access_token)["data"]["user"]["followers"]
return RawFollows.parse_obj(output_dict)
def get_user_following(
user_id: str, first: int = 10, after: str = "", access_token: Optional[str] = None
) -> RawFollows:
    """Gets the users a given user is following."""
variables: Dict[str, Union[str, int]] = (
{"login": user_id, "first": first, "after": after}
if after != ""
else {"login": user_id, "first": first}
)
query_str: str = (
"""
query getUser($login: String!, $first: Int!, $after: String!) {
user(login: $login){
following(first: $first, after: $after){
nodes{
name,
login,
url
}
pageInfo{
hasNextPage,
endCursor
}
}
}
}
"""
if after != ""
else """
query getUser($login: String!, $first: Int!) {
user(login: $login){
following(first: $first){
nodes{
name,
login,
url
}
pageInfo{
hasNextPage,
endCursor
}
}
}
}
"""
)
query = {
"variables": variables,
"query": query_str,
}
output_dict = get_template(query, access_token)["data"]["user"]["following"]
return RawFollows.parse_obj(output_dict)
| 27.229508 | 87 | 0.409693 | 259 | 3,322 | 5.150579 | 0.196911 | 0.05997 | 0.032984 | 0.047976 | 0.878561 | 0.817091 | 0.73913 | 0.73913 | 0.73913 | 0.73913 | 0 | 0.002884 | 0.478025 | 3,322 | 121 | 88 | 27.454545 | 0.766436 | 0.0295 | 0 | 0.619718 | 0 | 0 | 0.436975 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028169 | false | 0 | 0.042254 | 0 | 0.098592 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7e9f3af36e9f25653ed5e46980ba17e04935a760 | 48 | py | Python | python/testData/resolve/multiFile/transitiveImport/channel.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/resolve/multiFile/transitiveImport/channel.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/resolve/multiFile/transitiveImport/channel.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | from source import token # this re-exports token | 48 | 48 | 0.8125 | 8 | 48 | 4.875 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 1 | 48 | 48 | 0.95122 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7eb1ceeaaf19f45b33dfaa9a31e6097d0659f1e6 | 22 | py | Python | sfdc_api/wsdl/__init__.py | FernandoPicazo/sfdc_api | 7a40b51f61db285fc01f52ec2bba6d4ff78e8f2d | [
"MIT"
] | null | null | null | sfdc_api/wsdl/__init__.py | FernandoPicazo/sfdc_api | 7a40b51f61db285fc01f52ec2bba6d4ff78e8f2d | [
"MIT"
] | 1 | 2020-09-12T20:08:25.000Z | 2020-09-15T04:08:22.000Z | sfdc_api/wsdl/__init__.py | FernandoPicazo/sfdc_api | 7a40b51f61db285fc01f52ec2bba6d4ff78e8f2d | [
"MIT"
] | null | null | null | from .wsdl import WSDL | 22 | 22 | 0.818182 | 4 | 22 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e1cb81b4dc0960784c8ccaeafb8c5f1cc2a092f | 85 | py | Python | src/__init__.py | amynuno98/Model_Analysis | 75e03438c8787980850bfc79b9956d387a29a7d7 | [
"MIT"
] | null | null | null | src/__init__.py | amynuno98/Model_Analysis | 75e03438c8787980850bfc79b9956d387a29a7d7 | [
"MIT"
] | null | null | null | src/__init__.py | amynuno98/Model_Analysis | 75e03438c8787980850bfc79b9956d387a29a7d7 | [
"MIT"
] | null | null | null | # in __init__.py
from model_analysis import *
from statistical_analysis import *
| 10.625 | 34 | 0.776471 | 11 | 85 | 5.454545 | 0.727273 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 85 | 7 | 35 | 12.142857 | 0.857143 | 0.164706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |