hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21806b4d25e1fdbe294b272bdd0091d6f0990d01 | 71 | py | Python | App/MainPage.py | tartaruswh/SaaSCyberWaterSupplyGWAuto | 07b43c67e059a5b602957d94e9f441e74d12bde1 | [
"Apache-2.0"
] | null | null | null | App/MainPage.py | tartaruswh/SaaSCyberWaterSupplyGWAuto | 07b43c67e059a5b602957d94e9f441e74d12bde1 | [
"Apache-2.0"
] | null | null | null | App/MainPage.py | tartaruswh/SaaSCyberWaterSupplyGWAuto | 07b43c67e059a5b602957d94e9f441e74d12bde1 | [
"Apache-2.0"
] | null | null | null |
from App.BasePage import BasePage
class MainPage(BasePage):
    pass | 11.833333 | 33 | 0.760563 | 9 | 71 | 6 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183099 | 71 | 6 | 34 | 11.833333 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
21c527ed472b432a52781aafc005c4fd5a6eac61 | 792 | py | Python | password generator 2.py | Arjitg450/Python-Programs | 0630422c9002632a91b5ccf75f6cd02308c6e929 | [
"MIT"
] | null | null | null | password generator 2.py | Arjitg450/Python-Programs | 0630422c9002632a91b5ccf75f6cd02308c6e929 | [
"MIT"
] | null | null | null | password generator 2.py | Arjitg450/Python-Programs | 0630422c9002632a91b5ccf75f6cd02308c6e929 | [
"MIT"
] | null | null | null | # generate a password of length "lengthinput" with no duplicate characters in the password
import random

s = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()?"
lengthinput = int(input("Enter the length of password you want : "))
p = "".join(random.sample(s, lengthinput))  # sample() draws without replacement, so no character repeats
print(p)

# with duplicate characters allowed
lengthinput = int(input("Enter the length of password you want : "))
p = "".join(random.choice(s) for _ in range(lengthinput))
print(p)

# with duplicate characters, printed one character at a time
lengthinput = int(input("Enter the length of password you want : "))
for i in range(lengthinput):
    p = "".join(random.choices(s))  # choices(s) returns a one-element list here
    print(p, end="")
| 31.68 | 88 | 0.747475 | 88 | 792 | 6.715909 | 0.363636 | 0.324873 | 0.380711 | 0.395939 | 0.656514 | 0.656514 | 0.656514 | 0.656514 | 0.656514 | 0.656514 | 0 | 0.046809 | 0.109848 | 792 | 24 | 89 | 33 | 0.791489 | 0.141414 | 0 | 0.666667 | 1 | 0 | 0.50517 | 0.327917 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.133333 | 0 | 0.133333 | 0.2 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
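The row above contrasts `random.sample` (no repeated characters) with `random.choice`/`random.choices` (repeats allowed). A minimal self-contained sketch of the two behaviors; `ALPHABET` and the function names are illustrative, not taken from the original file:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()?"


def password_no_repeats(length, rng=random):
    # sample() draws without replacement: every character is distinct,
    # so length must not exceed len(ALPHABET)
    return "".join(rng.sample(ALPHABET, length))


def password_with_repeats(length, rng=random):
    # choices() draws with replacement: characters may repeat
    return "".join(rng.choices(ALPHABET, k=length))
```

Passing a seeded `random.Random` instance makes the output reproducible for testing.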
21f9f8bf2511d96b22545571c7d574d0b78a9423 | 3,292 | py | Python | rdr_service/alembic/versions/9cbaee181bc9_modify_gc_validation_metrics.py | all-of-us/raw-data-repository | d28ad957557587b03ff9c63d55dd55e0508f91d8 | [
"BSD-3-Clause"
] | 39 | 2017-10-13T19:16:27.000Z | 2021-09-24T16:58:21.000Z | rdr_service/alembic/versions/9cbaee181bc9_modify_gc_validation_metrics.py | all-of-us/raw-data-repository | d28ad957557587b03ff9c63d55dd55e0508f91d8 | [
"BSD-3-Clause"
] | 312 | 2017-09-08T15:42:13.000Z | 2022-03-23T18:21:40.000Z | rdr_service/alembic/versions/9cbaee181bc9_modify_gc_validation_metrics.py | all-of-us/raw-data-repository | d28ad957557587b03ff9c63d55dd55e0508f91d8 | [
"BSD-3-Clause"
] | 19 | 2017-09-15T13:58:00.000Z | 2022-02-07T18:33:20.000Z | """modify gc validation metrics
Revision ID: 9cbaee181bc9
Revises: 8cda4ff4eba7
Create Date: 2020-03-12 15:12:20.131031
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import mysql
# revision identifiers, used by Alembic.
revision = '9cbaee181bc9'
down_revision = '8cda4ff4eba7'
branch_labels = None
depends_on = None
def upgrade(engine_name):
    globals()["upgrade_%s" % engine_name]()


def downgrade(engine_name):
    globals()["downgrade_%s" % engine_name]()


def upgrade_rdr():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('genomic_gc_validation_metrics', sa.Column('chipwellbarcode', sa.String(length=80), nullable=True))
    op.add_column('genomic_gc_validation_metrics', sa.Column('idat_green_received', sa.SmallInteger(), nullable=False))
    op.add_column('genomic_gc_validation_metrics', sa.Column('idat_red_received', sa.SmallInteger(), nullable=False))
    op.add_column('genomic_gc_validation_metrics', sa.Column('tbi_received', sa.SmallInteger(), nullable=False))
    op.add_column('genomic_gc_validation_metrics', sa.Column('vcf_received', sa.SmallInteger(), nullable=False))
    op.drop_column('genomic_gc_validation_metrics', 'biobank_id')
    op.drop_column('genomic_gc_validation_metrics', 'sample_id')
    # ### end Alembic commands ###

    # Change datatypes
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `call_rate` VARCHAR(10);')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `mean_coverage` VARCHAR(10);')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `genome_coverage` VARCHAR(10);')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `contamination` VARCHAR(10);')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `site_id` VARCHAR(80);')


def downgrade_rdr():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('genomic_gc_validation_metrics', sa.Column('sample_id', mysql.VARCHAR(length=80), nullable=True))
    op.add_column('genomic_gc_validation_metrics', sa.Column('biobank_id', mysql.VARCHAR(length=80), nullable=False))
    op.drop_column('genomic_gc_validation_metrics', 'vcf_received')
    op.drop_column('genomic_gc_validation_metrics', 'tbi_received')
    op.drop_column('genomic_gc_validation_metrics', 'idat_red_received')
    op.drop_column('genomic_gc_validation_metrics', 'idat_green_received')
    op.drop_column('genomic_gc_validation_metrics', 'chipwellbarcode')
    # ### end Alembic commands ###

    # Change datatypes
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `call_rate` INTEGER;')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `mean_coverage` INTEGER;')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `genome_coverage` INTEGER;')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `contamination` INTEGER;')
    op.execute('ALTER TABLE genomic_gc_validation_metrics MODIFY `site_id` INTEGER;')


def upgrade_metrics():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###


def downgrade_metrics():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###
| 42.205128 | 119 | 0.753949 | 422 | 3,292 | 5.590047 | 0.201422 | 0.127173 | 0.201357 | 0.264519 | 0.778296 | 0.778296 | 0.753709 | 0.737601 | 0.698601 | 0.612972 | 0 | 0.01875 | 0.125152 | 3,292 | 77 | 120 | 42.753247 | 0.800347 | 0.151276 | 0 | 0.04878 | 0 | 0 | 0.504035 | 0.255319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146341 | false | 0.04878 | 0.073171 | 0 | 0.219512 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
df4162f3dd15818844c8b18fe747a42a7c90155f | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/f2py/f90mod_rules.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/f2py/f90mod_rules.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/f2py/f90mod_rules.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/a0/b6/6d/27f4f540d0ab6b7080096f2850b9908a8614e1b3957215ad810a75ccbf | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
800c06f0694a07708de19340d33999ea14363b71 | 111 | py | Python | src/spaceone/inventory/connector/aws_kinesis_data_stream_connector/__init__.py | jean1042/plugin-aws-cloud-services | 1cf192557b03478af33ae81f40b2a49f735716bb | [
"Apache-2.0"
] | 4 | 2020-06-22T01:48:07.000Z | 2020-08-24T00:51:09.000Z | src/spaceone/inventory/connector/aws_kinesis_data_stream_connector/__init__.py | jean1042/plugin-aws-cloud-services | 1cf192557b03478af33ae81f40b2a49f735716bb | [
"Apache-2.0"
] | 2 | 2020-07-20T01:58:32.000Z | 2020-08-04T07:41:37.000Z | src/spaceone/inventory/connector/aws_kinesis_data_stream_connector/__init__.py | jean1042/plugin-aws-cloud-services | 1cf192557b03478af33ae81f40b2a49f735716bb | [
"Apache-2.0"
] | 6 | 2020-06-22T09:19:40.000Z | 2020-09-17T06:35:37.000Z | from spaceone.inventory.connector.aws_kinesis_data_stream_connector.connector import KinesisDataStreamConnector | 111 | 111 | 0.936937 | 12 | 111 | 8.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 111 | 1 | 111 | 111 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8041dee96ba15fef4742a0d4164b7065957c5b50 | 207 | py | Python | meerschaum/connectors/sql/_tools.py | bmeares/Meerschaum | 37bd7a9923efce53e91c6a1d9c31f9533b9b4463 | [
"Apache-2.0"
] | 32 | 2020-09-14T16:29:19.000Z | 2022-03-08T00:51:28.000Z | meerschaum/connectors/sql/_tools.py | bmeares/Meerschaum | 37bd7a9923efce53e91c6a1d9c31f9533b9b4463 | [
"Apache-2.0"
] | 3 | 2020-10-04T20:03:30.000Z | 2022-02-02T21:04:46.000Z | meerschaum/connectors/sql/_tools.py | bmeares/Meerschaum | 37bd7a9923efce53e91c6a1d9c31f9533b9b4463 | [
"Apache-2.0"
] | 5 | 2021-04-22T23:49:21.000Z | 2022-02-02T12:59:08.000Z | #! /usr/bin/env python3
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
"""
Import everything from `meerschaum.connectors.sql.tools` for backwards compatability.
"""
from meerschaum.connectors.sql.tools import *
| 20.7 | 85 | 0.714976 | 27 | 207 | 5.481481 | 0.703704 | 0.054054 | 0.324324 | 0.364865 | 0.432432 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016484 | 0.120773 | 207 | 9 | 86 | 23 | 0.796703 | 0.700483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
33e3cbc7378b3c8d1a5f4b8b2c72d2d28a635c3e | 10,790 | py | Python | linlearn/estimator/llm.py | LinLearn/linlearn | de5752d47bbe8e2fb62d41b0dcf2526f87545e1c | [
"BSD-3-Clause"
] | null | null | null | linlearn/estimator/llm.py | LinLearn/linlearn | de5752d47bbe8e2fb62d41b0dcf2526f87545e1c | [
"BSD-3-Clause"
] | null | null | null | linlearn/estimator/llm.py | LinLearn/linlearn | de5752d47bbe8e2fb62d41b0dcf2526f87545e1c | [
"BSD-3-Clause"
] | null | null | null | # Authors: Stephane Gaiffas <stephane.gaiffas@gmail.com>
# Ibrahim Merad <imerad7@gmail.com>
# License: BSD 3 clause
"""
This module implement the ``LLM`` class for the Lecué - Lerasle - Mathieu robust
estimator.
``StateLLM`` is a place-holder for the LLM estimator containing:
gradient: numpy.ndarray
A numpy array of shape (n_weights,) containing gradients computed by the
`grad` function returned by the `grad_factory` factory function.
TODO: fill the missing things
"""
from collections import namedtuple
import numpy as np
from numba import jit
from ._base import Estimator, jit_kwargs
from .._utils import np_float
# Better implementation of argmedian ??
@jit(**jit_kwargs)
def argmedian(x):
    med = np.median(x)
    id = 0
    for a in x:
        if a == med:
            return id
        id += 1
    raise ValueError("Failed argmedian")
    # return np.argpartition(x, len(x) // 2)[len(x) // 2]


StateLLM = namedtuple(
    "StateLLM",
    [
        "block_means",
        "sample_indices",
        "gradient",
        "loss_derivative",
        "partial_derivative",
        "n_grad_calls",
        "n_pderiv_calls",
    ],
)


class LLM(Estimator):
    def __init__(self, X, y, loss, n_classes, fit_intercept, n_blocks):
        # assert n_blocks % 2 == 1
        super().__init__(X, y, loss, n_classes, fit_intercept)
        # n_blocks must be uneven
        self.n_blocks = n_blocks + ((n_blocks + 1) % 2)
        if self.n_blocks >= self.n_samples:
            self.n_blocks = self.n_samples - (self.n_samples % 2 + 1)
        self.n_samples_in_block = max(1, self.n_samples // n_blocks)
        # no last block size, the remaining samples are just ignored
        # self.last_block_size = self.n_samples % self.n_samples_in_block
        # if self.last_block_size > 0:
        #     self.n_blocks += 1

    def get_state(self):
        return StateLLM(
            block_means=np.empty(self.n_blocks, dtype=np_float),
            sample_indices=np.arange(self.n_samples, dtype=np.uintp),
            gradient=np.empty(
                (self.n_features + int(self.fit_intercept), self.n_classes),
                dtype=np_float,
            ),
            loss_derivative=np.empty(self.n_classes, dtype=np_float),
            partial_derivative=np.empty(self.n_classes, dtype=np_float),
            n_grad_calls=0,
            n_pderiv_calls=0,
        )

    def partial_deriv_factory(self):
        X = self.X
        y = self.y
        n_samples_in_block = self.n_samples_in_block
        n_blocks = self.n_blocks
        loss = self.loss
        n_classes = self.n_classes
        value_loss = loss.value_factory()
        deriv_loss = loss.deriv_factory()

        if self.fit_intercept:

            @jit(**jit_kwargs)
            def partial_deriv(j, inner_products, state):
                sample_indices = state.sample_indices
                n_calls = state.n_pderiv_calls
                n_calls += 1
                block_means = state.block_means
                np.random.shuffle(sample_indices)
                # Cumulative sum in the block
                objectives_sum_block = 0.0
                # Block counter
                counter = 0
                for i, idx in enumerate(sample_indices[:n_blocks * n_samples_in_block]):
                    objectives_sum_block += value_loss(y[idx], inner_products[idx])
                    if ((i != 0) and ((i + 1) % n_samples_in_block == 0)) or n_samples_in_block == 1:
                        block_means[counter] = objectives_sum_block / n_samples_in_block
                        counter += 1
                        objectives_sum_block = 0.0
                argmed = argmedian(block_means)
                deriv = state.loss_derivative
                partial_derivative = state.partial_derivative
                for k in range(n_classes):
                    partial_derivative[k] = 0.0
                if j == 0:
                    for i in sample_indices[
                        argmed * n_samples_in_block : (argmed + 1) * n_samples_in_block
                    ]:
                        deriv_loss(y[i], inner_products[i], deriv)
                        for k in range(n_classes):
                            partial_derivative[k] += deriv[k]
                else:
                    for i in sample_indices[
                        argmed * n_samples_in_block : (argmed + 1) * n_samples_in_block
                    ]:
                        deriv_loss(y[i], inner_products[i], deriv)
                        for k in range(n_classes):
                            partial_derivative[k] += deriv[k] * X[i, j - 1]
                for k in range(n_classes):
                    partial_derivative[k] /= n_samples_in_block

            return partial_deriv
        else:
            # Same function without an intercept
            @jit(**jit_kwargs)
            def partial_deriv(j, inner_products, state):
                sample_indices = state.sample_indices
                n_calls = state.n_pderiv_calls
                n_calls += 1
                block_means = state.block_means
                np.random.shuffle(sample_indices)
                # Cumulative sum in the block
                objectives_sum_block = 0.0
                # Block counter
                counter = 0
                for i, idx in enumerate(sample_indices[:n_blocks * n_samples_in_block]):
                    objectives_sum_block += value_loss(y[idx], inner_products[idx])
                    if ((i != 0) and ((i + 1) % n_samples_in_block == 0)) or n_samples_in_block == 1:
                        block_means[counter] = objectives_sum_block / n_samples_in_block
                        counter += 1
                        objectives_sum_block = 0.0
                argmed = argmedian(block_means)
                deriv = state.loss_derivative
                partial_derivative = state.partial_derivative
                for k in range(n_classes):
                    partial_derivative[k] = 0.0
                for i in sample_indices[
                    argmed * n_samples_in_block : (argmed + 1) * n_samples_in_block
                ]:
                    deriv_loss(y[i], inner_products[i], deriv)
                    for k in range(n_classes):
                        partial_derivative[k] += deriv[k] * X[i, j]
                for k in range(n_classes):
                    partial_derivative[k] /= n_samples_in_block

            return partial_deriv

    def grad_factory(self):
        X = self.X
        y = self.y
        loss = self.loss
        value_loss = loss.value_factory()
        deriv_loss = loss.deriv_factory()
        n_samples_in_block = self.n_samples_in_block
        n_blocks = self.n_blocks
        n_classes = self.n_classes
        n_features = self.n_features

        if self.fit_intercept:

            @jit(**jit_kwargs)
            def grad(inner_products, state):
                sample_indices = state.sample_indices
                n_calls = state.n_grad_calls
                n_calls += 1
                block_means = state.block_means
                gradient = state.gradient
                # for i in range(n_samples):
                #     sample_indices[i] = i
                np.random.shuffle(sample_indices)
                # Cumulative sum in the block
                objectives_sum_block = 0.0
                # Block counter
                counter = 0
                for i, idx in enumerate(sample_indices[:n_blocks * n_samples_in_block]):
                    objectives_sum_block += value_loss(y[idx], inner_products[idx])
                    if ((i != 0) and ((i + 1) % n_samples_in_block == 0)) or n_samples_in_block == 1:
                        block_means[counter] = objectives_sum_block / n_samples_in_block
                        counter += 1
                        objectives_sum_block = 0.0
                argmed = argmedian(block_means)
                for j in range(n_features + 1):
                    for k in range(n_classes):
                        gradient[j, k] = 0.0
                deriv = state.loss_derivative
                for i in sample_indices[
                    argmed * n_samples_in_block : (argmed + 1) * n_samples_in_block
                ]:
                    deriv_loss(y[i], inner_products[i], deriv)
                    for k in range(n_classes):
                        gradient[0, k] += deriv[k]
                        for j in range(n_features):
                            gradient[j + 1, k] += (
                                deriv[k] * X[i, j]
                            )  # np.outer(X[i], deriv)
                for j in range(n_features + 1):
                    for k in range(n_classes):
                        gradient[j, k] /= n_samples_in_block
                return 0

            return grad
        else:

            @jit(**jit_kwargs)
            def grad(inner_products, state):
                sample_indices = state.sample_indices
                n_calls = state.n_grad_calls
                n_calls += 1
                block_means = state.block_means
                gradient = state.gradient
                # for i in range(n_samples):
                #     sample_indices[i] = i
                np.random.shuffle(sample_indices)
                # Cumulative sum in the block
                objectives_sum_block = 0.0
                # Block counter
                counter = 0
                for i, idx in enumerate(sample_indices[:n_blocks * n_samples_in_block]):
                    objectives_sum_block += value_loss(y[idx], inner_products[idx])
                    if ((i != 0) and ((i + 1) % n_samples_in_block == 0)) or n_samples_in_block == 1:
                        block_means[counter] = objectives_sum_block / n_samples_in_block
                        counter += 1
                        objectives_sum_block = 0.0
                argmed = argmedian(block_means)
                for j in range(n_features):
                    for k in range(n_classes):
                        gradient[j, k] = 0.0
                deriv = state.loss_derivative
                for i in sample_indices[
                    argmed * n_samples_in_block : (argmed + 1) * n_samples_in_block
                ]:
                    deriv_loss(y[i], inner_products[i], deriv)
                    for j in range(n_features):
                        for k in range(n_classes):
                            gradient[j, k] += (
                                deriv[k] * X[i, j]
                            )  # np.outer(X[i], deriv)
                for j in range(n_features):
                    for k in range(n_classes):
                        gradient[j, k] /= n_samples_in_block
                return 0

            return grad
| 38.127208 | 101 | 0.519926 | 1,270 | 10,790 | 4.138583 | 0.111811 | 0.066971 | 0.068493 | 0.10274 | 0.756659 | 0.744102 | 0.729833 | 0.72812 | 0.704528 | 0.67637 | 0 | 0.012629 | 0.398239 | 10,790 | 282 | 102 | 38.262411 | 0.796858 | 0.107785 | 0 | 0.723301 | 0 | 0 | 0.012091 | 0 | 0 | 0 | 0 | 0.003546 | 0 | 1 | 0.043689 | false | 0 | 0.024272 | 0.004854 | 0.11165 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
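The estimator in the row above computes its gradient on the block whose mean loss is the median over all blocks (a median-of-means selection). A minimal NumPy sketch of that selection step, independent of linlearn; the function names are illustrative and the block count is assumed odd:

```python
import numpy as np


def argmedian(x):
    # index of the element equal to the median (len(x) assumed odd)
    return int(np.argpartition(x, len(x) // 2)[len(x) // 2])


def median_block_indices(values, n_blocks, rng):
    # shuffle sample indices, cut them into equal-sized blocks (leftover
    # samples are ignored), and return the indices of the block whose
    # mean is the median of the block means
    n = (len(values) // n_blocks) * n_blocks
    idx = rng.permutation(len(values))[:n]
    blocks = idx.reshape(n_blocks, -1)
    means = values[blocks].mean(axis=1)
    return blocks[argmedian(means)]
```

Averaging a loss or gradient over only the median block is what gives the estimator its robustness to outlier samples: a few corrupted values can ruin some blocks, but not the median block.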
1d1b054f5ace2d8bcc3dd3635d9ce5c0d90447af | 185 | py | Python | trec2015/cuttsum/classifiers/__init__.py | kedz/cuttsum | 992c21192af03fd2ef863f5ab7d10752f75580fa | [
"Apache-2.0"
] | 6 | 2015-09-10T02:22:21.000Z | 2021-10-01T16:36:46.000Z | trec2015/cuttsum/classifiers/__init__.py | kedz/cuttsum | 992c21192af03fd2ef863f5ab7d10752f75580fa | [
"Apache-2.0"
] | null | null | null | trec2015/cuttsum/classifiers/__init__.py | kedz/cuttsum | 992c21192af03fd2ef863f5ab7d10752f75580fa | [
"Apache-2.0"
] | 2 | 2018-04-04T10:44:32.000Z | 2021-10-01T16:37:26.000Z | from cuttsum.classifiers._nugget_classifier import NuggetClassifier
from cuttsum.classifiers._nugget_regressor import NuggetRegressor
__all__ = ["NuggetClassifier", "NuggetRegressor"]
| 37 | 67 | 0.864865 | 17 | 185 | 8.941176 | 0.588235 | 0.144737 | 0.289474 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07027 | 185 | 4 | 68 | 46.25 | 0.883721 | 0 | 0 | 0 | 0 | 0 | 0.167568 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d2d124f6956caf69c24d4ae7a2fa863cd0955d8 | 131 | py | Python | plasmapy_nei/eigen/tests/test_eigen.py | StanczakDominik/PlasmaPy-NEI | 1137689c0e5b2d1e1147ead64a02b9848a1b123d | [
"BSD-2-Clause-Patent",
"BSD-3-Clause"
] | null | null | null | plasmapy_nei/eigen/tests/test_eigen.py | StanczakDominik/PlasmaPy-NEI | 1137689c0e5b2d1e1147ead64a02b9848a1b123d | [
"BSD-2-Clause-Patent",
"BSD-3-Clause"
] | null | null | null | plasmapy_nei/eigen/tests/test_eigen.py | StanczakDominik/PlasmaPy-NEI | 1137689c0e5b2d1e1147ead64a02b9848a1b123d | [
"BSD-2-Clause-Patent",
"BSD-3-Clause"
] | null | null | null | """Tests for eigentables"""
def test_import():
    """Test that the subpackage can be imported."""
    import plasmapy_nei.eigen
| 18.714286 | 51 | 0.687023 | 17 | 131 | 5.176471 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183206 | 131 | 6 | 52 | 21.833333 | 0.82243 | 0.480916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d4b6b61ed95e5aea75a94fbf6c005592e9bec74 | 58 | py | Python | V0.2 RELEASE STABLE/smartphish.py | patrol114/TheSmartool | 7103015f1b2ccfbbc8b8aa6e7977e19d70c15ac0 | [
"MIT"
] | 31 | 2020-12-02T20:10:51.000Z | 2022-03-22T16:22:54.000Z | V0.2 RELEASE STABLE/smartphish.py | patrol114/TheSmartool | 7103015f1b2ccfbbc8b8aa6e7977e19d70c15ac0 | [
"MIT"
] | null | null | null | V0.2 RELEASE STABLE/smartphish.py | patrol114/TheSmartool | 7103015f1b2ccfbbc8b8aa6e7977e19d70c15ac0 | [
"MIT"
] | 6 | 2021-01-10T01:06:23.000Z | 2021-09-30T23:17:49.000Z | import os
import sys
def connect():
    print
| 7.25 | 15 | 0.551724 | 7 | 58 | 4.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.396552 | 58 | 7 | 16 | 8.285714 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d85a72f8435dfedb891e99a59e399348e9648ec | 17,252 | py | Python | backend/tests/integration/service/test_product_service.py | willrp/willstores-ws | 62c4f400f40fed1aef4f316c7e73dfecba98d026 | [
"MIT"
] | null | null | null | backend/tests/integration/service/test_product_service.py | willrp/willstores-ws | 62c4f400f40fed1aef4f316c7e73dfecba98d026 | [
"MIT"
] | null | null | null | backend/tests/integration/service/test_product_service.py | willrp/willstores-ws | 62c4f400f40fed1aef4f316c7e73dfecba98d026 | [
"MIT"
] | null | null | null | import pytest
from elasticsearch_dsl import Index, Search
from uuid import uuid4
from backend.service import ProductService
from backend.model import Product
from backend.tests.factories import ProductFactory
from backend.errors.no_content_error import NoContentError
from backend.errors.not_found_error import NotFoundError
from backend.errors.request_error import ValidationError
@pytest.fixture(scope="session")
def service():
service = ProductService()
return service
def test_product_service_products_count(service, es_object):
prod_list = ProductFactory.create_batch(2)
[prod_obj.save(using=es_object.connection) for prod_obj in prod_list]
Index("store", using=es_object.connection).refresh()
result = service.products_count()
assert result > 0
def test_product_service_super_discounts(service, es_object):
prod_list = ProductFactory.create_batch(2)
[prod_obj.save(using=es_object.connection) for prod_obj in prod_list]
Index("store", using=es_object.connection).refresh()
results = service.super_discounts()
assert len(results) > 0
assert type(results[0]) == Product
test_alt_id = "I_test_product_service_super_discounts"
ProductFactory.create(gender=test_alt_id).save(using=es_object.connection)
Index("store", using=es_object.connection).refresh()
results = service.super_discounts(gender=test_alt_id)
assert len(results) == 1
assert type(results[0]) == Product
with pytest.raises(NoContentError):
results = service.super_discounts(gender=str(uuid4()))
def test_product_service_search(service, es_object):
search = service._ProductService__search()
assert type(search) == Search
service._ProductService__search(query="query", gender="gender", sessionid="sessionid", sessionname="sessionname", brand="brand", kind="kind", pricerange={"min": 1.0, "max": 100.0})
assert type(search) == Search

def test_product_service_select_pricerange(service, es_object):
    ProductFactory.create(price={"outlet": 10.0, "retail": 100.0}).save(using=es_object.connection)
    ProductFactory.create(price={"outlet": 20.0, "retail": 120.0}).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_pricerange()
    for key in ["min", "max"]:
        assert key in results
    assert len(results.keys()) == 2
    assert results["min"] <= results["max"]
    test_alt_id = "I_test_product_service_select_pricerange"
    test_id = str(uuid4())
    ProductFactory.create(gender=test_alt_id, price={"outlet": 10.0, "retail": 100.0}).save(using=es_object.connection)
    ProductFactory.create(gender=test_alt_id, price={"outlet": 20.0, "retail": 120.0}).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_pricerange(gender=test_alt_id)
    assert results["min"] == 10.0
    assert results["max"] == 20.0
    with pytest.raises(NoContentError):
        service.select_pricerange(gender=str(uuid4()))
    ProductFactory.create(sessionid=test_id, price={"outlet": 10.0, "retail": 100.0}).save(using=es_object.connection)
    ProductFactory.create(sessionid=test_id, price={"outlet": 20.0, "retail": 120.0}).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_pricerange(sessionid=test_id)
    assert results["min"] == 10.0
    assert results["max"] == 20.0
    with pytest.raises(NoContentError):
        service.select_pricerange(sessionid=str(uuid4()))
    ProductFactory.create(sessionname=test_id, price={"outlet": 10.0, "retail": 100.0}).save(using=es_object.connection)
    ProductFactory.create(sessionname=test_id, price={"outlet": 20.0, "retail": 120.0}).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_pricerange(sessionname=test_id)
    assert results["min"] == 10.0
    assert results["max"] == 20.0
    with pytest.raises(NoContentError):
        service.select_pricerange(sessionname=str(uuid4()))
    ProductFactory.create(brand=test_id, price={"outlet": 10.0, "retail": 100.0}).save(using=es_object.connection)
    ProductFactory.create(brand=test_id, price={"outlet": 20.0, "retail": 120.0}).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_pricerange(brand=test_id)
    assert results["min"] == 10.0
    assert results["max"] == 20.0
    with pytest.raises(NoContentError):
        service.select_pricerange(brand=str(uuid4()))
    ProductFactory.create(kind=test_id, price={"outlet": 10.0, "retail": 100.0}).save(using=es_object.connection)
    ProductFactory.create(kind=test_id, price={"outlet": 20.0, "retail": 120.0}).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_pricerange(kind=test_id)
    assert results["min"] == 10.0
    assert results["max"] == 20.0
    with pytest.raises(NoContentError):
        service.select_pricerange(kind=str(uuid4()))
    results = service.select_pricerange(query=test_id)
    assert results["min"] == 10.0
    assert results["max"] == 20.0
    results = service.select_pricerange(query=test_alt_id)
    assert results["min"] == 10.0
    assert results["max"] == 20.0
    with pytest.raises(NoContentError):
        service.select_pricerange(query=str(uuid4()))
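The assertions above pin down the contract of `select_pricerange`: a dict with exactly the keys `"min"` and `"max"` over the outlet prices of the matching products. A rough in-memory stand-in for that aggregation (field names mirror the factory data above; the real service delegates this to an Elasticsearch min/max aggregation and raises `NoContentError` when nothing matches):

```python
def select_pricerange(products):
    # Mirrors the shape asserted in the test: exactly a "min" and a
    # "max" key, computed over each product's outlet price.
    if not products:
        raise LookupError("no matching products")  # stand-in for NoContentError
    outlet_prices = [product["price"]["outlet"] for product in products]
    return {"min": min(outlet_prices), "max": max(outlet_prices)}

products = [
    {"price": {"outlet": 10.0, "retail": 100.0}},
    {"price": {"outlet": 20.0, "retail": 120.0}},
]
print(select_pricerange(products))  # {'min': 10.0, 'max': 20.0}
```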

def test_product_service_get_total(service, es_object):
    ProductFactory.create().save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.get_total()
    assert results > 0
    test_alt_id = "I_test_product_service_get_total"
    test_id = str(uuid4())
    ProductFactory.create(gender=test_alt_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.get_total(gender=test_alt_id)
    assert results == 1
    results = service.get_total(gender=str(uuid4()))
    assert results == 0
    ProductFactory.create(sessionid=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.get_total(sessionid=test_id)
    assert results == 1
    results = service.get_total(sessionid=str(uuid4()))
    assert results == 0
    ProductFactory.create(sessionname=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.get_total(sessionname=test_id)
    assert results == 1
    results = service.get_total(sessionname=str(uuid4()))
    assert results == 0
    ProductFactory.create(brand=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.get_total(brand=test_id)
    assert results == 1
    results = service.get_total(brand=str(uuid4()))
    assert results == 0
    ProductFactory.create(kind=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.get_total(kind=test_id)
    assert results == 1
    results = service.get_total(kind=str(uuid4()))
    assert results == 0
    results = service.get_total(pricerange={"min": 1.0, "max": 100.0})
    assert results > 0
    results = service.get_total(pricerange={"min": 10000.0, "max": 20000.0})
    assert results == 0
    results = service.get_total(query=test_id)
    assert results == 2
    results = service.get_total(query=test_alt_id)
    assert results == 1
    results = service.get_total(query=str(uuid4()))
    assert results == 0
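`get_total` is exercised as a count of products matching the supplied field filters, with unmatched filters yielding `0` rather than an error. The same contract can be sketched over plain dicts (an illustrative stand-in, not the service's Elasticsearch implementation):

```python
def get_total(products, **filters):
    # Counts products whose fields equal every supplied filter value,
    # matching the get_total(gender=...) / get_total(brand=...) checks above.
    # With no filters, all() over an empty iterable is True, so every
    # product counts.
    return sum(
        1
        for product in products
        if all(product.get(field) == value for field, value in filters.items())
    )

products = [
    {"gender": "men", "brand": "acme"},
    {"gender": "women", "brand": "acme"},
]
print(get_total(products))                    # 2
print(get_total(products, gender="men"))      # 1
print(get_total(products, gender="unknown"))  # 0
```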

def test_product_service_select_brands(service, es_object):
    ProductFactory.create().save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_brands()
    assert len(results) > 0
    test_alt_id = "I_test_product_service_select_brands"
    test_id = str(uuid4())
    ProductFactory.create(gender=test_alt_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_brands(gender=test_alt_id)
    assert len(results) == 1
    for key in ["brand", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_brands(gender=str(uuid4()))
    ProductFactory.create(sessionid=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_brands(sessionid=test_id)
    assert len(results) == 1
    for key in ["brand", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_brands(sessionid=str(uuid4()))
    ProductFactory.create(sessionname=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_brands(sessionname=test_id)
    assert len(results) == 1
    for key in ["brand", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_brands(sessionname=str(uuid4()))
    ProductFactory.create(brand=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_brands(brand=test_id)
    assert len(results) == 1
    for key in ["brand", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_brands(brand=str(uuid4()))
    ProductFactory.create(kind=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_brands(kind=test_id)
    assert len(results) == 1
    for key in ["brand", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_brands(kind=str(uuid4()))
    results = service.select_brands(pricerange={"min": 1.0, "max": 100.0})
    assert len(results) > 0
    with pytest.raises(NoContentError):
        service.select_brands(pricerange={"min": 10000.0, "max": 20000.0})
    results = service.select_brands(query=test_id)
    assert len(results) == 2
    for key in ["brand", "amount"]:
        assert key in results[0]
    results = service.select_brands(query=test_alt_id)
    assert len(results) == 1
    for key in ["brand", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_brands(query=str(uuid4()))
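The repeated `for key in ["brand", "amount"]` loops above pin down the row shape `select_brands` (and, symmetrically, `select_kinds`) must return: a list of `{"brand": ..., "amount": ...}` dicts, one per distinct brand with its document count. That grouping can be sketched in memory with `collections.Counter` (a stand-in for the service's terms aggregation, with hypothetical brand values):

```python
from collections import Counter

def select_brands(products):
    # Builds the [{"brand": ..., "amount": ...}] rows the assertions expect:
    # one row per distinct brand, with the number of matching products.
    counts = Counter(product["brand"] for product in products)
    return [{"brand": brand, "amount": amount} for brand, amount in counts.items()]

products = [{"brand": "acme"}, {"brand": "acme"}, {"brand": "zenith"}]
rows = select_brands(products)
print(rows)  # [{'brand': 'acme', 'amount': 2}, {'brand': 'zenith', 'amount': 1}]
```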

def test_product_service_select_kinds(service, es_object):
    ProductFactory.create().save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_kinds()
    assert len(results) > 0
    test_alt_id = "I_test_product_service_select_kinds"
    test_id = str(uuid4())
    ProductFactory.create(gender=test_alt_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_kinds(gender=test_alt_id)
    assert len(results) == 1
    for key in ["kind", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_kinds(gender=str(uuid4()))
    ProductFactory.create(sessionid=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_kinds(sessionid=test_id)
    assert len(results) == 1
    for key in ["kind", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_kinds(sessionid=str(uuid4()))
    ProductFactory.create(sessionname=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_kinds(sessionname=test_id)
    assert len(results) == 1
    for key in ["kind", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_kinds(sessionname=str(uuid4()))
    ProductFactory.create(brand=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_kinds(brand=test_id)
    assert len(results) == 1
    for key in ["kind", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_kinds(brand=str(uuid4()))
    ProductFactory.create(kind=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select_kinds(kind=test_id)
    assert len(results) == 1
    for key in ["kind", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_kinds(kind=str(uuid4()))
    results = service.select_kinds(pricerange={"min": 1.0, "max": 100.0})
    assert len(results) > 0
    with pytest.raises(NoContentError):
        service.select_kinds(pricerange={"min": 10000.0, "max": 20000.0})
    results = service.select_kinds(query=test_id)
    assert len(results) == 2
    for key in ["kind", "amount"]:
        assert key in results[0]
    results = service.select_kinds(query=test_alt_id)
    assert len(results) == 1
    for key in ["kind", "amount"]:
        assert key in results[0]
    with pytest.raises(NoContentError):
        service.select_kinds(query=str(uuid4()))

def test_product_service_select(service, es_object):
    ProductFactory.create().save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select()
    assert len(results) > 0
    test_alt_id = "I_test_product_service_select"
    test_id = str(uuid4())
    ProductFactory.create(gender=test_alt_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select(gender=test_alt_id)
    assert len(results) == 1
    assert type(results[0]) == Product
    with pytest.raises(NoContentError):
        service.select(gender=str(uuid4()))
    ProductFactory.create(sessionid=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select(sessionid=test_id)
    assert len(results) == 1
    assert type(results[0]) == Product
    with pytest.raises(NoContentError):
        service.select(sessionid=str(uuid4()))
    ProductFactory.create(sessionname=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select(sessionname=test_id)
    assert len(results) == 1
    assert type(results[0]) == Product
    with pytest.raises(NoContentError):
        service.select(sessionname=str(uuid4()))
    ProductFactory.create(brand=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select(brand=test_id)
    assert len(results) == 1
    assert type(results[0]) == Product
    with pytest.raises(NoContentError):
        service.select(brand=str(uuid4()))
    ProductFactory.create(kind=test_id).save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    results = service.select(kind=test_id)
    assert len(results) == 1
    assert type(results[0]) == Product
    with pytest.raises(NoContentError):
        service.select(kind=str(uuid4()))
    results = service.select(pricerange={"min": 1.0, "max": 100.0})
    assert len(results) > 0
    assert type(results[0]) == Product
    with pytest.raises(NoContentError):
        service.select(pricerange={"min": 10000.0, "max": 20000.0})
    results = service.select(query=test_id)
    assert len(results) == 2
    assert type(results[0]) == Product
    results = service.select(query=test_alt_id)
    assert len(results) == 1
    assert type(results[0]) == Product
    with pytest.raises(NoContentError):
        service.select(query=str(uuid4()))

def test_product_service_select_by_id(service, es_object):
    obj = ProductFactory.create()
    obj.save(using=es_object.connection)
    Index("store", using=es_object.connection).refresh()
    obj_id = obj.meta["id"]
    results = service.select_by_id(obj_id)
    assert type(results) == Product
    assert results.meta["id"] == obj_id
    fake_id = str(uuid4())
    with pytest.raises(NotFoundError):
        service.select_by_id(fake_id)

def test_product_service_select_by_item_list(service, es_object):
    price = {"outlet": 10.0, "retail": 20.0}
    item_list = []
    for i in range(3):
        obj = ProductFactory.create(price=price)
        obj.save(using=es_object.connection)
        item_list.append({"item_id": obj.meta["id"], "amount": i + 1})
    Index("store", using=es_object.connection).refresh()
    results, total = service.select_by_item_list(item_list)
    assert len(results) == len(item_list)
    item_id_list = [item["item_id"] for item in item_list]
    for obj in results:
        assert type(obj) == Product
        assert obj.meta["id"] in item_id_list
    assert total["outlet"] == 60.0
    assert total["retail"] == 120.0
    fake_item_list = [{"item_id": str(uuid4()), "amount": 2} for x in range(2)]
    over_item_list = item_list + fake_item_list
    with pytest.raises(ValidationError):
        service.select_by_item_list(over_item_list)
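The `60.0` / `120.0` expectations in the item-list test come from amount-weighted sums: each product costs `10.0` outlet and `20.0` retail, and the amounts `1 + 2 + 3` give 6 units. A self-contained sketch of that totalling step (`totals_for_item_list` and `price_by_id` are illustrative names, not the service's internals):

```python
def totals_for_item_list(item_list, price_by_id):
    # Accumulates outlet/retail totals weighted by each item's amount,
    # which is where the 60.0 / 120.0 figures in the test come from.
    total = {"outlet": 0.0, "retail": 0.0}
    for item in item_list:
        price = price_by_id[item["item_id"]]
        total["outlet"] += price["outlet"] * item["amount"]
        total["retail"] += price["retail"] * item["amount"]
    return total

price = {"outlet": 10.0, "retail": 20.0}
item_list = [{"item_id": str(i), "amount": i + 1} for i in range(3)]
price_by_id = {str(i): price for i in range(3)}
print(totals_for_item_list(item_list, price_by_id))  # {'outlet': 60.0, 'retail': 120.0}
```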
| 34.43513 | 184 | 0.704208 | 2,263 | 17,252 | 5.199735 | 0.044631 | 0.058469 | 0.083964 | 0.148551 | 0.884338 | 0.841506 | 0.820685 | 0.791281 | 0.751763 | 0.718195 | 0 | 0.022997 | 0.160677 | 17,252 | 500 | 185 | 34.504 | 0.789641 | 0 | 0 | 0.558074 | 0 | 0 | 0.051936 | 0.012173 | 0 | 0 | 0 | 0 | 0.271955 | 1 | 0.031161 | false | 0 | 0.025496 | 0 | 0.05949 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d54c2d68c64b9d8ec38ca041918d09a87ea4a433 | 229 | py | Python | snakegame/entities/__init__.py | vedard/SnakeGame | 828f3e892084848a45d72a8ca62385e94cf96adb | [
"MIT"
] | null | null | null | snakegame/entities/__init__.py | vedard/SnakeGame | 828f3e892084848a45d72a8ca62385e94cf96adb | [
"MIT"
] | null | null | null | snakegame/entities/__init__.py | vedard/SnakeGame | 828f3e892084848a45d72a8ca62385e94cf96adb | [
"MIT"
] | null | null | null | from snakegame.entities.entity import Entity
from snakegame.entities.drawable import Drawable
from snakegame.entities.moveable import Moveable
from snakegame.entities.fruit import Fruit
from snakegame.entities.snake import Snake
| 38.166667 | 48 | 0.868996 | 30 | 229 | 6.633333 | 0.3 | 0.326633 | 0.527638 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087336 | 229 | 5 | 49 | 45.8 | 0.952153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d550243211f5daba2c41bfbe7b3b18e766b7ef7e | 125 | py | Python | numsim/__init__.py | ffernandoalves/NumSim | 44544cfa6a451835efafbc847780fdcb8ad9081c | [
"MIT"
] | 1 | 2021-05-26T07:14:21.000Z | 2021-05-26T07:14:21.000Z | numsim/__init__.py | ffernandoalves/NumSim | 44544cfa6a451835efafbc847780fdcb8ad9081c | [
"MIT"
] | null | null | null | numsim/__init__.py | ffernandoalves/NumSim | 44544cfa6a451835efafbc847780fdcb8ad9081c | [
"MIT"
] | null | null | null | """
source package
"""
from .computer import *
from .utils import load_data_generated
from .animation import start_animation | 17.857143 | 38 | 0.792 | 16 | 125 | 6 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128 | 125 | 7 | 39 | 17.857143 | 0.880734 | 0.112 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d550eb81ba376a2ac0d1f38cb80c6e08b626d5b2 | 30 | py | Python | Hello.py | LucaViolin/MB215Lab1 | d07e01fde189c53ed8b1b1f5266206a455fd9a50 | [
"MIT"
] | null | null | null | Hello.py | LucaViolin/MB215Lab1 | d07e01fde189c53ed8b1b1f5266206a455fd9a50 | [
"MIT"
] | null | null | null | Hello.py | LucaViolin/MB215Lab1 | d07e01fde189c53ed8b1b1f5266206a455fd9a50 | [
"MIT"
] | null | null | null | print("Hello world from Luca") | 30 | 30 | 0.766667 | 5 | 30 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d5772e94e5cd6fef97e00f69f3bc2b7e309250d7 | 124 | py | Python | mlcvs/tica/__init__.py | luigibonati/mlcvs | 6567fb0774dc354f9cf3472dc356fdcf10aba6f2 | [
"BSD-3-Clause"
] | 1 | 2022-02-14T10:06:42.000Z | 2022-02-14T10:06:42.000Z | mlcvs/tica/__init__.py | luigibonati/mlcvs | 6567fb0774dc354f9cf3472dc356fdcf10aba6f2 | [
"BSD-3-Clause"
] | 9 | 2021-10-31T09:28:09.000Z | 2022-03-23T15:13:21.000Z | mlcvs/tica/__init__.py | luigibonati/mlcvs | 6567fb0774dc354f9cf3472dc356fdcf10aba6f2 | [
"BSD-3-Clause"
] | null | null | null | __all__ = ["linear_tica","tica"]
from .linear_tica import TICA_CV
from .deep_tica import DeepTICA_CV
from .tica import TICA | 24.8 | 34 | 0.790323 | 20 | 124 | 4.45 | 0.4 | 0.337079 | 0.314607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120968 | 124 | 5 | 35 | 24.8 | 0.816514 | 0 | 0 | 0 | 0 | 0 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
638ee73b957def88f9ab6ba4fd829aafadd99b00 | 92 | py | Python | lib/gateway/ctpGateway/__init__.py | myron0330/metatrade | b0358ad3dce6ba50e4801b6af557d7883d8a5d9a | [
"MIT"
] | 1 | 2018-06-28T09:49:08.000Z | 2018-06-28T09:49:08.000Z | lib/gateway/ctpGateway/__init__.py | myron0330/metatrade | b0358ad3dce6ba50e4801b6af557d7883d8a5d9a | [
"MIT"
] | null | null | null | lib/gateway/ctpGateway/__init__.py | myron0330/metatrade | b0358ad3dce6ba50e4801b6af557d7883d8a5d9a | [
"MIT"
] | null | null | null | from .market_gateway import CTPMarketGateway
from .trader_gateway import CtpTraderGateway
| 30.666667 | 45 | 0.869565 | 10 | 92 | 7.8 | 0.7 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 92 | 2 | 46 | 46 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
639954b6db6959a843b2ca6b9283315569d93fa5 | 40 | py | Python | toolcraft/gui/widget/__init__.py | SpikingNeurons/_toolcraft_ | 070c79ea9248e4082bd69b5344f7b532e57f7730 | [
"BSD-3-Clause"
] | null | null | null | toolcraft/gui/widget/__init__.py | SpikingNeurons/_toolcraft_ | 070c79ea9248e4082bd69b5344f7b532e57f7730 | [
"BSD-3-Clause"
] | 1 | 2021-09-20T22:22:05.000Z | 2021-09-20T22:22:05.000Z | toolcraft/gui/widget/__init__.py | SpikingNeurons/_toolcraft_ | 070c79ea9248e4082bd69b5344f7b532e57f7730 | [
"BSD-3-Clause"
] | null | null | null | from .core import *
from .plot import *
| 13.333333 | 19 | 0.7 | 6 | 40 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 40 | 2 | 20 | 20 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63b3e8c01272f40661044ef1534bd15cbad17a27 | 7,153 | py | Python | test/test_getpy.py | atom-moyer/getpy | 8d5a846d030d345408a4dc71793d5918521180c4 | [
"MIT"
] | 83 | 2019-04-14T05:39:34.000Z | 2022-03-28T18:18:03.000Z | test/test_getpy.py | atom-moyer/getpy | 8d5a846d030d345408a4dc71793d5918521180c4 | [
"MIT"
] | 5 | 2019-04-30T16:23:10.000Z | 2021-03-26T12:15:52.000Z | test/test_getpy.py | atom-moyer/getpy | 8d5a846d030d345408a4dc71793d5918521180c4 | [
"MIT"
] | 5 | 2019-04-29T19:15:15.000Z | 2022-03-04T19:08:50.000Z | import pytest
import numpy as np
import getpy as gp

def test_getpy_methods():
    key_type = np.dtype('u8')
    value_type = np.dtype('u8')
    keys = np.random.randint(1, 1000, size=10**2, dtype=key_type)
    values = np.random.randint(1, 1000, size=10**2, dtype=value_type)
    gp_dict = gp.Dict(key_type, value_type)
    gp_dict[keys] = values
    p_dict = {key: value for key, value in zip(keys, values)}
    assert len(gp_dict) == len(np.unique(keys))
    assert all([gp_dict[key] == p_dict[key] for key in keys])

def test_getpy_methods_with_multidim():
    key_type = np.dtype('u8')
    value_type = np.dtype('u8')
    keys = np.random.randint(1, 1000, size=10**2, dtype=key_type).reshape(10, 10)
    values = np.random.randint(1, 1000, size=10**2, dtype=value_type).reshape(10, 10)
    gp_dict = gp.Dict(key_type, value_type)
    gp_dict[keys] = values
    p_dict = {key: value for key, value in zip(keys.flat, values.flat)}
    assert len(gp_dict) == len(np.unique(keys))
    assert all([gp_dict[key] == p_dict[key] for key in keys.flat])

def test_getpy_methods_with_strings():
    key_type = np.dtype('S8')
    value_type = np.dtype('S8')
    keys = np.array([np.random.bytes(8) for i in range(10**2)], dtype=key_type)
    values = np.array([np.random.bytes(8) for i in range(10**2)], dtype=value_type)
    gp_dict = gp.Dict(key_type, value_type)
    gp_dict[keys] = values
    p_dict = {key: value for key, value in zip(keys, values)}
    assert len(gp_dict) == len(np.unique(keys))
    assert all([gp_dict[key] == p_dict[key] for key in keys])

def test_getpy_methods_with_multidim_and_strings():
    key_type = np.dtype('S8')
    value_type = np.dtype('S8')
    keys = np.array([np.random.bytes(4) for i in range(10**2)], dtype=key_type).reshape(10, 10)
    values = np.array([np.random.bytes(4) for i in range(10**2)], dtype=value_type).reshape(10, 10)
    gp_dict = gp.Dict(key_type, value_type)
    gp_dict[keys] = values
    p_dict = {key: value for key, value in zip(keys.flat, values.flat)}
    assert len(gp_dict) == len(np.unique(keys.flat))
    assert all([gp_dict[key] == p_dict[key] for key in keys.flat])

def test_getpy_methods_with_default():
    key_type = np.dtype('u8')
    value_type = np.dtype('u8')
    keys = np.random.randint(1, 1000, size=10**2, dtype=key_type)
    values = np.random.randint(1, 1000, size=10**2, dtype=value_type)
    default_value = 4242
    gp_dict = gp.Dict(key_type, value_type, default_value=default_value)
    gp_dict[keys] = values
    random_keys = np.random.randint(1, 1000, size=500, dtype=key_type)
    random_values = gp_dict[random_keys]
    assert np.all(random_values[np.where(gp_dict.contains(random_keys))] != default_value)
    assert np.all(random_values[np.where(np.logical_not(gp_dict.contains(random_keys)))] == default_value)

def test_getpy_methods_with_default_and_strings():
    key_type = np.dtype('S8')
    value_type = np.dtype('S8')
    keys = np.array([np.random.bytes(8) for i in range(10**2)], dtype=key_type)
    values = np.array([np.random.bytes(8) for i in range(10**2)], dtype=value_type)
    default_value = np.random.bytes(8)
    gp_dict = gp.Dict(key_type, value_type, default_value=default_value)
    gp_dict[keys] = values
    random_keys = np.array([np.random.bytes(8) for i in range(10**3)], dtype=key_type)
    random_values = gp_dict[random_keys]
    assert np.all(random_values[np.where(gp_dict.contains(random_keys))] != default_value)
    assert np.all(random_values[np.where(np.logical_not(gp_dict.contains(random_keys)))] == default_value)
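The two default-value tests above rely on lookups of absent keys yielding `default_value` while present keys keep their stored values. That contract can be sketched with a plain `dict` subclass using `__missing__` (a stand-in to illustrate the semantics, not how `gp.Dict` is implemented):

```python
class DefaultLookupDict(dict):
    """Returns a fixed default for absent keys, like gp.Dict's default_value."""

    def __init__(self, default_value, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.default_value = default_value

    def __missing__(self, key):
        # dict.__getitem__ calls __missing__ only when the key is absent,
        # so stored values win and absent keys fall back to the default.
        return self.default_value

d = DefaultLookupDict(4242, {1: 10, 2: 20})
print(d[1], d[999])  # 10 4242
```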

def test_getpy_types():
    for key_type, value_type in gp.dict_types:
        gp_dict = gp.Dict(key_type, value_type)
        keys = np.array(range(256), dtype=key_type)
        values = np.array(range(256), dtype=value_type)
        gp_dict[keys] = values
        values = gp_dict[keys]

def test_getpy_dump_load():
    key_type = np.dtype('u8')
    value_type = np.dtype('u8')
    keys = np.random.randint(1, 1000, size=10**1, dtype=key_type)
    values = np.random.randint(1, 1000, size=10**1, dtype=value_type)
    gp_dict_1 = gp.Dict(key_type, value_type)
    gp_dict_1[keys] = values
    gp_dict_1.dump('test/test.bin')
    gp_dict_2 = gp.Dict(key_type, value_type)
    gp_dict_2.load('test/test.bin')
    assert len(gp_dict_1) == len(gp_dict_2)

def test_getpy_big_dict_u4_u4():
    key_type = np.dtype('u4')
    value_type = np.dtype('u4')
    gp_dict = gp.Dict(key_type, value_type)
    values = np.random.randint(10**9, size=10**4, dtype=value_type)
    for i in range(10**2):
        keys = np.random.randint(10**9, size=10**4, dtype=key_type)
        gp_dict[keys] = values

def test_getpy_big_dict_u8_u8():
    key_type = np.dtype('u8')
    value_type = np.dtype('u8')
    gp_dict = gp.Dict(key_type, value_type)
    values = np.random.randint(10**15, size=10**4, dtype=value_type)
    for i in range(10**2):
        keys = np.random.randint(10**15, size=10**4, dtype=key_type)
        gp_dict[keys] = values

def test_getpy_big_dict_u8_S8():
    key_type = np.dtype('u8')
    value_type = np.dtype('S8')
    gp_dict = gp.Dict(key_type, value_type)
    values = np.array([np.random.bytes(8) for i in range(10**4)], dtype=value_type)
    for i in range(10**2):
        keys = np.random.randint(10**15, size=10**4, dtype=key_type)
        gp_dict[keys] = values

def test_getpy_big_dict_u8_u8_lookup():
    key_type = np.dtype('u8')
    value_type = np.dtype('u8')
    gp_dict = gp.Dict(key_type, value_type)
    keys = np.random.randint(10**15, size=10**5, dtype=key_type)
    values = np.random.randint(10**15, size=10**5, dtype=value_type)
    gp_dict[keys] = values
    for i in range(10**2):
        values = gp_dict[keys]

def test_getpy_very_big_dict_u4_u4():
    key_type = np.dtype('u4')
    value_type = np.dtype('u4')
    gp_dict = gp.Dict(key_type, value_type)
    values = np.random.randint(10**9, size=10**5, dtype=value_type)
    for i in range(10**2):
        keys = np.random.randint(10**9, size=10**5, dtype=key_type)
        gp_dict[keys] = values

def test_getpy_very_big_dict_u8_u8():
    key_type = np.dtype('u8')
    value_type = np.dtype('u8')
    gp_dict = gp.Dict(key_type, value_type)
    values = np.random.randint(10**15, size=10**5, dtype=value_type)
    for i in range(10**2):
        keys = np.random.randint(10**15, size=10**5, dtype=key_type)
        gp_dict[keys] = values

def test_getpy_very_big_dict_u8_S8():
    key_type = np.dtype('u8')
    value_type = np.dtype('S8')
    gp_dict = gp.Dict(key_type, value_type)
    values = np.array([np.random.bytes(8) for i in range(10**5)], dtype=value_type)
    for i in range(10**2):
        keys = np.random.randint(10**15, size=10**5, dtype=key_type)
        gp_dict[keys] = values

def test_getpy_very_big_dict_u8_S16():
    key_type = np.dtype('u8')
    value_type = np.dtype('S16')
    gp_dict = gp.Dict(key_type, value_type)
    values = np.array([np.random.bytes(16) for i in range(10**5)], dtype=value_type)
    for i in range(10**2):
        keys = np.random.randint(10**15, size=10**5, dtype=key_type)
        gp_dict[keys] = values
| 29.804167 | 106 | 0.664337 | 1,234 | 7,153 | 3.63128 | 0.055105 | 0.095068 | 0.073644 | 0.064271 | 0.949342 | 0.938183 | 0.923008 | 0.88998 | 0.877483 | 0.868779 | 0 | 0.052415 | 0.183839 | 7,153 | 239 | 107 | 29.92887 | 0.715142 | 0 | 0 | 0.68 | 0 | 0 | 0.012163 | 0 | 0 | 0 | 0 | 0 | 0.086667 | 1 | 0.106667 | false | 0 | 0.02 | 0 | 0.126667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
63c219c2fe082a3e8cb86d1432a6f0d8338c3173 | 4,383 | py | Python | util/data/gen/ntmarta.dll.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | util/data/gen/ntmarta.dll.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | util/data/gen/ntmarta.dll.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | symbols = []
exports = [{'type': 'function', 'name': 'AccConvertAccessMaskToActrlAccess', 'address': '0x7ffb3a7b28a0'}, {'type': 'function', 'name': 'AccConvertAccessToSD', 'address': '0x7ffb3a7b2a20'}, {'type': 'function', 'name': 'AccConvertAccessToSecurityDescriptor', 'address': '0x7ffb3a7b2be0'}, {'type': 'function', 'name': 'AccConvertAclToAccess', 'address': '0x7ffb3a7b2d40'}, {'type': 'function', 'name': 'AccConvertSDToAccess', 'address': '0x7ffb3a7b2de0'}, {'type': 'function', 'name': 'AccFreeIndexArray', 'address': '0x7ffb3a7c10c0'}, {'type': 'function', 'name': 'AccGetAccessForTrustee', 'address': '0x7ffb3a7b30c0'}, {'type': 'function', 'name': 'AccGetExplicitEntries', 'address': '0x7ffb3a7b31b0'}, {'type': 'function', 'name': 'AccGetInheritanceSource', 'address': '0x7ffb3a7c11f0'}, {'type': 'function', 'name': 'AccLookupAccountName', 'address': '0x7ffb3a7b3280'}, {'type': 'function', 'name': 'AccLookupAccountSid', 'address': '0x7ffb3a7b3600'}, {'type': 'function', 'name': 'AccLookupAccountTrustee', 'address': '0x7ffb3a7b3a40'}, {'type': 'function', 'name': 'AccProvCancelOperation', 'address': '0x7ffb3a7bb5e0'}, {'type': 'function', 'name': 'AccProvGetAccessInfoPerObjectType', 'address': '0x7ffb3a7bb670'}, {'type': 'function', 'name': 'AccProvGetAllRights', 'address': '0x7ffb3a7bb720'}, {'type': 'function', 'name': 'AccProvGetCapabilities', 'address': '0x7ffb3a7a6480'}, {'type': 'function', 'name': 'AccProvGetOperationResults', 'address': '0x7ffb3a7bb930'}, {'type': 'function', 'name': 'AccProvGetTrusteesAccess', 'address': '0x7ffb3a7bbab0'}, {'type': 'function', 'name': 'AccProvGrantAccessRights', 'address': '0x7ffb3a7bbbd0'}, {'type': 'function', 'name': 'AccProvHandleGetAccessInfoPerObjectType', 'address': '0x7ffb3a7bbd60'}, {'type': 'function', 'name': 'AccProvHandleGetAllRights', 'address': '0x7ffb3a7bbe60'}, {'type': 'function', 'name': 'AccProvHandleGetTrusteesAccess', 'address': '0x7ffb3a7bc000'}, {'type': 'function', 'name': 'AccProvHandleGrantAccessRights', 
'address': '0x7ffb3a7baf60'}, {'type': 'function', 'name': 'AccProvHandleIsAccessAudited', 'address': '0x7ffb3a7bc080'}, {'type': 'function', 'name': 'AccProvHandleIsObjectAccessible', 'address': '0x7ffb3a7bc120'}, {'type': 'function', 'name': 'AccProvHandleRevokeAccessRights', 'address': '0x7ffb3a7bc290'}, {'type': 'function', 'name': 'AccProvHandleRevokeAuditRights', 'address': '0x7ffb3a7bc370'}, {'type': 'function', 'name': 'AccProvHandleSetAccessRights', 'address': '0x7ffb3a7bc450'}, {'type': 'function', 'name': 'AccProvIsAccessAudited', 'address': '0x7ffb3a7bc550'}, {'type': 'function', 'name': 'AccProvIsObjectAccessible', 'address': '0x7ffb3a7bc680'}, {'type': 'function', 'name': 'AccProvRevokeAccessRights', 'address': '0x7ffb3a7bcb30'}, {'type': 'function', 'name': 'AccProvRevokeAuditRights', 'address': '0x7ffb3a7bcc80'}, {'type': 'function', 'name': 'AccProvSetAccessRights', 'address': '0x7ffb3a7bcdd0'}, {'type': 'function', 'name': 'AccRewriteGetExplicitEntriesFromAcl', 'address': '0x7ffb3a7b91e0'}, {'type': 'function', 'name': 'AccRewriteGetHandleRights', 'address': '0x7ffb3a7a2ea0'}, {'type': 'function', 'name': 'AccRewriteGetNamedRights', 'address': '0x7ffb3a7c1890'}, {'type': 'function', 'name': 'AccRewriteSetEntriesInAcl', 'address': '0x7ffb3a7a4180'}, {'type': 'function', 'name': 'AccRewriteSetHandleRights', 'address': '0x7ffb3a7a22a0'}, {'type': 'function', 'name': 'AccRewriteSetNamedRights', 'address': '0x7ffb3a7a24b0'}, {'type': 'function', 'name': 'AccSetEntriesInAList', 'address': '0x7ffb3a7b3b60'}, {'type': 'function', 'name': 'AccTreeResetNamedSecurityInfo', 'address': '0x7ffb3a7a11d0'}, {'type': 'function', 'name': 'EventGuidToName', 'address': '0x7ffb3a7abaa0'}, {'type': 'function', 'name': 'EventNameFree', 'address': '0x7ffb3a7abb40'}, {'type': 'function', 'name': 'GetExplicitEntriesFromAclW', 'address': '0x7ffb3a7b1d10'}, {'type': 'function', 'name': 'GetMartaExtensionInterface', 'address': '0x7ffb3a7a5e60'}, {'type': 'function', 'name': 
'GetNamedSecurityInfoW', 'address': '0x7ffb3a7a3000'}, {'type': 'function', 'name': 'GetSecurityInfo', 'address': '0x7ffb3a7a2df0'}, {'type': 'function', 'name': 'SetEntriesInAclW', 'address': '0x7ffb3a7a4170'}, {'type': 'function', 'name': 'SetNamedSecurityInfoW', 'address': '0x7ffb3a7a2180'}, {'type': 'function', 'name': 'SetSecurityInfo', 'address': '0x7ffb3a7a2210'}] | 2,191.5 | 4,370 | 0.701346 | 302 | 4,383 | 10.178808 | 0.350993 | 0.195185 | 0.260247 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083354 | 0.069359 | 4,383 | 2 | 4,370 | 2,191.5 | 0.670262 | 0 | 0 | 0 | 0 | 0 | 0.697993 | 0.22833 | 0 | 0 | 0.159672 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8954a09b0f1137fa6660a8283930ea0464aee22e | 39 | py | Python | 100 Curso aulapharos/008 BibliotecasModulos/ImportarSoloUnaParte.py | malcabaut/AprendiendoPython | b1e8731f1614b08b5ace1b7d1ecbeb041b21f28b | [
"MIT"
] | null | null | null | 100 Curso aulapharos/008 BibliotecasModulos/ImportarSoloUnaParte.py | malcabaut/AprendiendoPython | b1e8731f1614b08b5ace1b7d1ecbeb041b21f28b | [
"MIT"
] | null | null | null | 100 Curso aulapharos/008 BibliotecasModulos/ImportarSoloUnaParte.py | malcabaut/AprendiendoPython | b1e8731f1614b08b5ace1b7d1ecbeb041b21f28b | [
"MIT"
] | null | null | null | from math import sqrt
print(sqrt(16))
| 13 | 22 | 0.74359 | 7 | 39 | 4.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0.153846 | 39 | 2 | 23 | 19.5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
9865e15bf3a1d78dcc619a788a0b10f26ab08200 | 185 | py | Python | Python/PythonOOP/packages_test/script.py | JosephAMumford/CodingDojo | 505be74d18d7a8f41c4b3576ca050b97f840f0a3 | [
"MIT"
] | 2 | 2018-08-18T15:14:45.000Z | 2019-10-16T16:14:13.000Z | Python/PythonOOP/packages_test/script.py | JosephAMumford/CodingDojo | 505be74d18d7a8f41c4b3576ca050b97f840f0a3 | [
"MIT"
] | null | null | null | Python/PythonOOP/packages_test/script.py | JosephAMumford/CodingDojo | 505be74d18d7a8f41c4b3576ca050b97f840f0a3 | [
"MIT"
] | 6 | 2018-05-05T18:13:05.000Z | 2021-05-20T11:32:48.000Z | from test_modules import arithmetic
print(arithmetic.add(5,8))
print(arithmetic.subtract(10,5))
print(arithmetic.multiply(12,6))
print(arithmetic.divide(6,2))
print(arithmetic.divide(6,0))
| 23.125 | 35 | 0.810811 | 30 | 185 | 4.966667 | 0.566667 | 0.503356 | 0.281879 | 0.295302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070588 | 0.081081 | 185 | 7 | 36 | 26.428571 | 0.805882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.833333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
986b7e3da19417abfe8c4dad830d126cdfb4ac53 | 94 | py | Python | phonescrubber/numbers/__init__.py | RagtagOpen/phone-number-validator | 762122efa03c3e6057204c8b5a7e3bdc468c94e4 | [
"MIT"
] | 1 | 2019-06-04T14:40:09.000Z | 2019-06-04T14:40:09.000Z | phonescrubber/numbers/__init__.py | RagtagOpen/phone-number-validator | 762122efa03c3e6057204c8b5a7e3bdc468c94e4 | [
"MIT"
] | 7 | 2019-02-05T16:31:37.000Z | 2019-06-17T12:18:25.000Z | phonescrubber/numbers/__init__.py | RagtagOpen/phone-number-validator | 762122efa03c3e6057204c8b5a7e3bdc468c94e4 | [
"MIT"
] | 1 | 2021-03-12T04:18:21.000Z | 2021-03-12T04:18:21.000Z | from flask import Blueprint
numbers_bp = Blueprint('numbers', __name__)
from . import views
| 15.666667 | 43 | 0.776596 | 12 | 94 | 5.666667 | 0.666667 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 94 | 5 | 44 | 18.8 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0.074468 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
7f42ed137187799aee134f2dd3ceb282d8d8f882 | 13,772 | py | Python | phennyfyxata/scores/tests.py | Venefyxatu/phennyfyxata | cc4031c8483166c1eb77a2e14c3f0074ace58374 | [
"BSD-2-Clause"
] | 2 | 2015-05-18T13:49:42.000Z | 2015-05-18T14:16:45.000Z | phennyfyxata/scores/tests.py | Venefyxatu/phennyfyxata | cc4031c8483166c1eb77a2e14c3f0074ace58374 | [
"BSD-2-Clause"
] | 2 | 2015-11-03T16:48:45.000Z | 2015-11-19T08:49:04.000Z | phennyfyxata/scores/tests.py | Venefyxatu/phennyfyxata | cc4031c8483166c1eb77a2e14c3f0074ace58374 | [
"BSD-2-Clause"
] | null | null | null | import json
import datetime

from django.test import TestCase
from django.test.client import Client

from scores.models import War
from scores.models import Writer
from scores.models import WarParticipants
from scores.models import ParticipantScore


class ParticipationHelper:

    def participate(self, war_id, writer_nick):
        response = Client().post('/api/war/participate/', {'id': war_id, 'writer': writer_nick})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code

    def withdraw(self, war_id, writer_nick):
        response = Client().post('/api/war/withdraw/', {'id': war_id, 'writer': writer_nick})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code


class ParticipantTests(TestCase):

    def setUp(self):
        starttime = datetime.datetime.now() + datetime.timedelta(0, seconds=300)
        endtime = starttime + datetime.timedelta(0, seconds=600)
        self.ph = ParticipationHelper()
        self.war = War(starttime=starttime, endtime=endtime)
        self.war.save()
        self.writer = Writer(nick='TestWriter')
        self.writer.save()
        self.c = Client()

    def test_participate_war(self):
        self.ph.participate(self.war.id, self.writer.nick)
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 1, 'Should have 1 participant, not %s' % len(participants)

    def test_withdraw_war(self):
        self.ph.participate(self.war.id, self.writer.nick)
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 1, 'Should have 1 participant, not %s' % len(participants)
        self.ph.withdraw(self.war.id, self.writer.nick)
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 0, 'Should have 0 participants, not %s' % len(participants)

    def test_add_new_participant(self):
        self.ph.participate(self.war.id, 'NewWriter')
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 1, 'Should have 1 participant, not %s' % len(participants)

    def test_add_same_participant(self):
        self.ph.participate(self.war.id, self.writer.nick)
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 1, 'Should have 1 participant, not %s' % len(participants)
        self.ph.participate(self.war.id, self.writer.nick)
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 1, 'Should have 1 participant, not %s' % len(participants)

    def test_withdraw_unparticipating_writer(self):
        self.ph.withdraw(self.war.id, 'NewWriter')
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 0, 'Should have 0 participants, not %s' % len(participants)

    def test_participate_nonexistant_war(self):
        response = self.c.post('/api/war/participate/', {'id': 9, 'writer': self.writer.nick})
        assert response.status_code == 404, 'Response status should be 404, not %s' % response.status_code

    def test_list_participants(self):
        self.ph.participate(self.war.id, self.writer.nick)
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 1, 'Should have 1 participant, not %s' % len(participants)
        response = self.c.post('/api/war/listparticipants/', {'id': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = [self.writer.nick]
        assert json.loads(response.content) == expected_response, 'Response should be "%s", not %s' % (expected_response, json.loads(response.content))
        self.ph.participate(self.war.id, 'NewWriter')
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 2, 'Should have 2 participants, not %s' % len(participants)
        response = self.c.post('/api/war/listparticipants/', {'id': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = [self.writer.nick, 'NewWriter']
        assert json.loads(response.content) == expected_response, 'Response should be "%s", not %s' % (expected_response, json.loads(response.content))
        self.ph.withdraw(self.war.id, self.writer.nick)
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 1, 'Should have 1 participant, not %s' % len(participants)
        response = self.c.post('/api/war/listparticipants/', {'id': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = ['NewWriter']
        assert json.loads(response.content) == expected_response, 'Response should be "%s", not %s' % (expected_response, json.loads(response.content))
        self.ph.withdraw(self.war.id, 'NewWriter')
        participants = WarParticipants.objects.filter(war__id=self.war.id)
        assert len(participants) == 0, 'Should have 0 participants, not %s' % len(participants)
        response = self.c.post('/api/war/listparticipants/', {'id': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        assert json.loads(response.content) == [], 'Response should be [], not %s' % json.loads(response.content)


class WarTests(TestCase):

    def setUp(self):
        self.c = Client()
        self.starttime = datetime.datetime.now() + datetime.timedelta(0, seconds=300)
        self.starttime = self.starttime - datetime.timedelta(0, seconds=self.starttime.second, microseconds=self.starttime.microsecond)
        self.endtime = self.starttime + datetime.timedelta(0, seconds=600)
        self.endtime = self.endtime - datetime.timedelta(0, seconds=self.endtime.second, microseconds=self.endtime.microsecond)

    def test_war_info(self):
        response = self.c.post('/api/war/new/', {'starttime': self.starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        response = self.c.post('/api/war/info/', {'id': 1})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = {'id': '1', 'starttime': self.starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')}
        assert json.loads(response.content) == expected_response, 'Response should be %s, not %s' % (expected_response, json.loads(response.content))

    def test_create_war(self):
        response = self.c.post('/api/war/new/', {'starttime': self.starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        assert len(War.objects.all()) == 1, 'There should be 1 war, not %s' % len(War.objects.all())

    def test_no_active_wars(self):
        response = self.c.post('/api/war/new/', {'starttime': self.starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        response = self.c.get('/api/war/active/')
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        assert json.loads(response.content) == [], 'Response should be [], not %s' % json.loads(response.content)

    def test_active_wars(self):
        starttime = datetime.datetime.now()
        starttime = starttime - datetime.timedelta(0, seconds=starttime.second, microseconds=starttime.microsecond)
        response = self.c.post('/api/war/new/', {'starttime': starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        response = self.c.get('/api/war/active/')
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = [{'id': 1, 'starttime': starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')}]
        assert json.loads(response.content) == expected_response, 'Response should be "%s", not %s' % (expected_response, json.loads(response.content))

    def test_planned_wars(self):
        response = self.c.post('/api/war/new/', {'starttime': self.starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        response = self.c.get('/api/war/planned/')
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = [{'id': 1, 'starttime': self.starttime.strftime('%s'), 'endtime': self.endtime.strftime('%s')}]
        assert json.loads(response.content) == expected_response, 'Response should be %s, not %s' % (expected_response, json.loads(response.content))

    def test_no_planned_wars(self):
        response = self.c.get('/api/war/planned/')
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = []
        assert json.loads(response.content) == expected_response, 'Response should be %s, not %s' % (expected_response, json.loads(response.content))


class ScoreTests(TestCase):

    def setUp(self):
        starttime = datetime.datetime.now() + datetime.timedelta(0, seconds=300)
        endtime = starttime + datetime.timedelta(0, seconds=600)
        self.war = War(starttime=starttime, endtime=endtime)
        self.war.save()
        self.writer = Writer(nick='TestWriter')
        self.writer.save()
        self.c = Client()

    def test_get_score_for_war(self):
        response = self.c.post('/api/score/register/', {'writer': self.writer.nick, 'score': 200, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        response = self.c.post('/api/writer/getscore/', {'writer': self.writer.nick, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        expected_response = {'war': str(self.war.id), 'writer': self.writer.nick, 'score': 200}
        assert json.loads(response.content) == expected_response, 'Response should be %s, not %s' % (expected_response, json.loads(response.content))

    def test_register_score(self):
        response = self.c.post('/api/score/register/', {'writer': self.writer.nick, 'score': 200, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        assert len(ParticipantScore.objects.all()) == 1, 'There should be one ParticipantScore object'
        ps = ParticipantScore.objects.all()[0]
        assert ps.writer == self.writer, 'ParticipantScore writer is not as expected'
        assert ps.score == 200, 'ParticipantScore score should be 200, not %s' % ps.score
        assert ps.war == self.war, 'ParticipantScore war is not as expected'

    def test_deregister_score(self):
        response = self.c.post('/api/score/register/', {'writer': self.writer.nick, 'score': 200, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        assert len(ParticipantScore.objects.all()) == 1, 'There should be one ParticipantScore object'
        ps = ParticipantScore.objects.all()[0]
        assert ps.writer == self.writer, 'ParticipantScore writer is not as expected'
        assert ps.score == 200, 'ParticipantScore score should be 200, not %s' % ps.score
        assert ps.war == self.war, 'ParticipantScore war is not as expected'
        response = self.c.post('/api/score/deregister/', {'writer': self.writer.nick, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        assert len(ParticipantScore.objects.all()) == 0, 'There should be no ParticipantScore objects'

    def test_update_score(self):
        response = self.c.post('/api/score/register/', {'writer': self.writer.nick, 'score': 200, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        assert len(ParticipantScore.objects.all()) == 1, 'There should be one ParticipantScore object'
        ps = ParticipantScore.objects.all()[0]
        assert ps.score == 200, 'ParticipantScore score should be 200, not %s' % ps.score
        response = self.c.post('/api/score/register/', {'writer': self.writer.nick, 'score': 400, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        ps = ParticipantScore.objects.all()[0]
        assert ps.score == 400, 'ParticipantScore score should be 400, not %s' % ps.score
        response = self.c.post('/api/score/register/', {'writer': self.writer.nick, 'score': 100, 'war': self.war.id})
        assert response.status_code == 200, 'Response status should be 200, not %s' % response.status_code
        ps = ParticipantScore.objects.all()[0]
        assert ps.score == 100, 'ParticipantScore score should be 100, not %s' % ps.score
| 54.434783 | 151 | 0.678405 | 1,786 | 13,772 | 5.149496 | 0.054311 | 0.114168 | 0.097858 | 0.0411 | 0.894205 | 0.879743 | 0.856584 | 0.851147 | 0.841905 | 0.841905 | 0 | 0.023584 | 0.181019 | 13,772 | 252 | 152 | 54.650794 | 0.791826 | 0 | 0 | 0.653409 | 0 | 0 | 0.220375 | 0.013724 | 0 | 0 | 0 | 0 | 0.340909 | 1 | 0.125 | false | 0 | 0.045455 | 0 | 0.193182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f5bd7ccfce6c2251e7369d0afb6c544acc3318f | 42 | py | Python | code/abc111_b_01.py | KoyanagiHitoshi/AtCoder | 731892543769b5df15254e1f32b756190378d292 | [
"MIT"
] | 3 | 2019-08-16T16:55:48.000Z | 2021-04-11T10:21:40.000Z | code/abc111_b_01.py | KoyanagiHitoshi/AtCoder | 731892543769b5df15254e1f32b756190378d292 | [
"MIT"
] | null | null | null | code/abc111_b_01.py | KoyanagiHitoshi/AtCoder | 731892543769b5df15254e1f32b756190378d292 | [
"MIT"
] | null | null | null | n=int(input())
print((((n-1)//111)+1)*111) | 21 | 27 | 0.547619 | 9 | 42 | 2.555556 | 0.666667 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 0.02381 | 42 | 2 | 27 | 21 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
7f815a8e673246786acfcd4eb086250939cd3d44 | 2,108 | py | Python | 5lab/inverse_test.py | vhall415/ee144_robotics | 090a8da4da682b1175790f7d8d3c655fca9831cb | [
"MIT"
] | null | null | null | 5lab/inverse_test.py | vhall415/ee144_robotics | 090a8da4da682b1175790f7d8d3c655fca9831cb | [
"MIT"
] | null | null | null | 5lab/inverse_test.py | vhall415/ee144_robotics | 090a8da4da682b1175790f7d8d3c655fca9831cb | [
"MIT"
] | 1 | 2021-10-07T18:35:29.000Z | 2021-10-07T18:35:29.000Z | from __future__ import division
import Arm
from Arm import position, makeVector
from math import pi
import unittest

link1 = 1.0
link2 = 0.5


class InverseTestCase(unittest.TestCase):

    def test_case1(self):
        arm = Arm.Arm(link1=link1, link2=link2)
        end_effector = position(link1+link2, 0)
        joints = arm.inverse_kinematics(end_effector)
        self.assertAlmostEqual(joints.theta1, 0.0)
        self.assertAlmostEqual(joints.theta2, 0.0)
        end_effector = position(link1, link2)
        joints = arm.inverse_kinematics(end_effector)
        self.assertAlmostEqual(joints.theta1, 0.0)
        self.assertAlmostEqual(joints.theta2, pi / 2)

    def test_case2(self):
        arm = Arm.Arm(link1=link1, link2=link2)
        end_effector = position(link2, -link1)
        joints = arm.inverse_kinematics(end_effector)
        self.assertAlmostEqual(joints.theta1, -pi / 2)
        self.assertAlmostEqual(joints.theta2, pi / 2)
        end_effector = position(0, link1+link2)
        joints = arm.inverse_kinematics(end_effector)
        self.assertAlmostEqual(joints.theta1, pi / 2)
        self.assertAlmostEqual(joints.theta2, 0.0)

    def test_case3(self):
        arm = Arm.Arm(link1=link1, link2=link2, origin=makeVector(1,1))
        end_effector = position(1+link2, 1-link1)
        joints = arm.inverse_kinematics(end_effector)
        self.assertAlmostEqual(joints.theta1, -pi / 2)
        self.assertAlmostEqual(joints.theta2, pi / 2)
        end_effector = position(1, 1+link1+link2)
        joints = arm.inverse_kinematics(end_effector)
        self.assertAlmostEqual(joints.theta1, pi / 2)
        self.assertAlmostEqual(joints.theta2, 0.0)

    def test_unreachable(self):
        arm = Arm.Arm(link1=link1, link2=link2)
        end_effector = position(1+link1+link2, 0)
        with self.assertRaises(ValueError):
            joints = arm.inverse_kinematics(end_effector)
        end_effector = position(0, 0)
        with self.assertRaises(ValueError):
            joints = arm.inverse_kinematics(end_effector)


if __name__ == '__main__':
    unittest.main()
| 34 | 71 | 0.675996 | 256 | 2,108 | 5.410156 | 0.160156 | 0.127076 | 0.233935 | 0.150181 | 0.790614 | 0.769675 | 0.766065 | 0.766065 | 0.742238 | 0.742238 | 0 | 0.048721 | 0.221063 | 2,108 | 61 | 72 | 34.557377 | 0.794762 | 0 | 0 | 0.520833 | 0 | 0 | 0.003795 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 1 | 0.083333 | false | 0 | 0.104167 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f9dbc3c6d7c747d4e033dda4f2de05b67da71bd | 28 | py | Python | cilva/__init__.py | GoodhillLab/CILVA | 322e48b197044312296be507d9f06e1f4440739a | [
"MIT"
] | 8 | 2019-07-06T09:25:27.000Z | 2022-03-11T15:30:16.000Z | cilva/__init__.py | GoodhillLab/CILVA | 322e48b197044312296be507d9f06e1f4440739a | [
"MIT"
] | null | null | null | cilva/__init__.py | GoodhillLab/CILVA | 322e48b197044312296be507d9f06e1f4440739a | [
"MIT"
] | 4 | 2019-07-03T01:55:26.000Z | 2020-11-22T06:38:47.000Z | from . import core, analysis | 28 | 28 | 0.785714 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6ae258fabc4343eeda2c6f83b7f59fd1fd40f40 | 31,729 | py | Python | assignment 0/python_lab/submit_python_lab.py | dhruvgairola/linearAlgebra-coursera | 20109133b9e53a7a38cbd17d8ca1fa1316bbf0d3 | [
"MIT"
] | 6 | 2015-09-18T02:07:21.000Z | 2020-04-22T17:05:11.000Z | submit_python_lab.py | tri2sing/LinearAlgebraPython | f3dde94f02f146089607eb520ebd4467becb5f9e | [
"Apache-2.0"
] | null | null | null | submit_python_lab.py | tri2sing/LinearAlgebraPython | f3dde94f02f146089607eb520ebd4467becb5f9e | [
"Apache-2.0"
] | 10 | 2015-09-05T03:54:00.000Z | 2020-04-21T12:56:40.000Z | ######## ########
# Hi there, curious student. #
# #
# This submission script runs some tests on your #
# code and then uploads it to Coursera for grading. #
# #
# Changing anything in this script might cause your #
# submissions to fail. #
######## ########
import io, os, sys, doctest, traceback, importlib, urllib.request, urllib.parse, urllib.error, base64, hashlib, random, ast
URL = 'matrix-001'
part_friendly_names = ['Minutes in a Week', 'Remainder', 'Divisibility', 'Assign y', 'Squares Comprehension', 'Powers of 2 Comprehension', 'Nine Element Set', 'Five Element Set', 'Base 10 Three Digit Numbers', 'Intersection of Sets', 'Average', 'Sum of Three Lists', 'Cartesian-Product Lists', 'Three Element Tuples', 'Remove (0,0,0)', 'First Element', 'List and Set Differences', 'Odd Numbers', 'Range and Zip', 'Zip Sum', 'Generate Dictionary', 'Modify Missing Key', 'Range Squared', 'Identity', 'List Integers', 'Names to Salaries', 'Next Ints', 'Cubes', 'dict2list', 'list2dict']
groups = [[('fFZfuj5BSf0z9vju', 'Minutes in a Week', '>>> print(test_format(minutes_in_week))\n')], [('RCTi7BRCbNnUGLrE', 'Remainder', '>>> print(test_format(remainder_without_mod))\n>>> print(test_format(line_contains_substr("remainder_without_mod", "%")))\n')], [('3ngJR6j6I6xGkMb6', 'Divisibility', '>>> print(test_format(divisible_by_3))\n')], [('SqKKP5oqXdJNQR54', 'Assign y', '>>> print(test_format(statement_val))\n')], [('LUxaUjsmC7dF6lFR', 'Squares Comprehension', '>>> print(test_format(first_five_squares))\n>>> print(test_format(use_comprehension("first_five_squares")))\n')], [('OO0ctDZZKI72zUu7', 'Powers of 2 Comprehension', '>>> print(test_format(first_five_pows_two))\n>>> print(test_format(use_comprehension("first_five_pows_two")))\n')], [('6jcpLOvzpLmsTSVm', 'Nine Element Set', '>>> nine_elements_set = {x*y for x in X1 for y in Y1}\n>>> print(test_format(len(nine_elements_set)))\n>>> print(test_format(len(X1)))\n>>> print(test_format(len(Y1)))\n')], [('Yc7Syvika5HHSjWY', 'Five Element Set', '>>> five_elements_set = {x*y for x in X2 for y in Y2}\n>>> print(test_format(len(five_elements_set)))\n>>> print(test_format(len(X2)))\n>>> print(test_format(len(Y2)))\n>>> print(test_format(len(X2 & Y2)))\n')], [('q4059GW7SFmhlnuV', 'Base 10 Three Digit Numbers', '>>> digits = {0,1,2,3,4,5,6,7,8,9}\n>>> base = 10\n>>> print(test_format(three_digits_set == set(range(1000))))\n>>> print(test_format(use_comprehension("three_digits_set")))\n>>> print(test_format(line_contains_substr("three_digits_set", "base")))\n')], [('apLtHTyCikXLtyrF', 'Intersection of Sets', '>>> print(test_format(S_intersect_T))\n>>> print(test_format(use_comprehension("S_intersect_T")))\n')], [('FnRTEV9wumMHDSRk', 'Average', '>>> print(test_format(L_average))\n')], [('xvF1K4mjiljgnWFv', 'Sum of Three Lists', '>>> print(test_format(LofL_sum))\n>>> print(test_format(use_comprehension("LofL_sum")))\n')], [('cXHf573AUNrEoFxH', 'Cartesian-Product Lists', '>>> print(test_format(set(map(tuple, 
cartesian_product))))\n>>> print(test_format(use_comprehension("cartesian_product")))\n')], [('Y4tuZGBVfLcQt8lN', 'Three Element Tuples', '>>> print(test_format(zero_sum_list == [(0, 0, 0), (0, 2, -2), (0, -2, 2), (1, 1, -2), (1, -2, 1), (2, 0, -2), (2, 2, -4), (2, -4, 2), (2, -2, 0), (-4, 2, 2), (-2, 0, 2), (-2, 1, 1), (-2, 2, 0)]))\n>>> print(test_format(use_comprehension("zero_sum_list")))\n')], [('S8w9l4cmOKXbgCfe', 'Remove (0,0,0)', '>>> print(test_format(exclude_zero_list))\n>>> print(test_format(use_comprehension("exclude_zero_list")))\n')], [('Qnq4vQ5vW6mORoqE', 'First Element', '>>> print(test_format(first_of_tuples_list == (0,0,0)))\n>>> print(test_format(len(first_of_tuples_list)))\n>>> print(test_format(sum(first_of_tuples_list)))\n>>> print(test_format(use_comprehension("first_of_tuples_list")))\n')], [('yoiJUxauMefcohSi', 'List and Set Differences', '>>> print(test_format(len(L1) == len(list(set(L1))) ))\n>>> L2_new = list(set(L2)) \n>>> print(test_format(len(L2) == len(L2_new)))\n>>> print(test_format(L2 == L2_new))\n')], [('3I5KzAun0uzowUp4', 'Odd Numbers', '>>> print(test_format(odd_num_list_range))\n>>> print(test_format(use_comprehension("odd_num_list_range")))\n')], [('D8ygxQYLerbhMhMX', 'Range and Zip', '>>> print(test_format(range_and_zip))\n>>> print(test_format(use_comprehension("range_and_zip")))\n')], [('klnmJrHwxgXXLTYT', 'Zip Sum', '>>> print(test_format(list_sum_zip))\n>>> print(test_format(use_comprehension("list_sum_zip")))\n')], [('FdPpEhCqyxB5Bt8S', 'Generate Dictionary', '>>> print(test_format(set(value_list)))\n>>> print(test_format(use_comprehension("value_list")))\n')], [('14cqsN8TvrYGNLVu', 'Modify Missing Key', '>>> print(test_format(use_comprehension("value_list_modified_1")))\n>>> print(test_format(use_comprehension("value_list_modified_2")))\n>>> print(test_format(value_list_modified_1))\n>>> print(test_format(value_list_modified_2))\n')], [('C5kNoKaPB4ApgbT7', 'Range Squared', '>>> print(test_format(square_dict == {0: 0, 
1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81, 10: 100, 11: 121, 12: 144, 13: 169, 14: 196, 15: 225, 16: 256, 17: 289, 18: 324, 19: 361, 20: 400, 21: 441, 22: 484, 23: 529, 24: 576, 25: 625, 26: 676, 27: 729, 28: 784, 29: 841, 30: 900, 31: 961, 32: 1024, 33: 1089, 34: 1156, 35: 1225, 36: 1296, 37: 1369, 38: 1444, 39: 1521, 40: 1600, 41: 1681, 42: 1764, 43: 1849, 44: 1936, 45: 2025, 46: 2116, 47: 2209, 48: 2304, 49: 2401, 50: 2500, 51: 2601, 52: 2704, 53: 2809, 54: 2916, 55: 3025, 56: 3136, 57: 3249, 58: 3364, 59: 3481, 60: 3600, 61: 3721, 62: 3844, 63: 3969, 64: 4096, 65: 4225, 66: 4356, 67: 4489, 68: 4624, 69: 4761, 70: 4900, 71: 5041, 72: 5184, 73: 5329, 74: 5476, 75: 5625, 76: 5776, 77: 5929, 78: 6084, 79: 6241, 80: 6400, 81: 6561, 82: 6724, 83: 6889, 84: 7056, 85: 7225, 86: 7396, 87: 7569, 88: 7744, 89: 7921, 90: 8100, 91: 8281, 92: 8464, 93: 8649, 94: 8836, 95: 9025, 96: 9216, 97: 9409, 98: 9604, 99: 9801}))\n>>> print(test_format(use_comprehension("square_dict")))\n')], [('tShbyCCphwno07CP', 'Identity', '>>> print(test_format(identity_dict))\n>>> print(test_format(use_comprehension("identity_dict")))\n')], [('pRtDJVxpnw3d5lFi', 'List Integers', '>>> print(test_format(representation_dict[135]))\n>>> print(test_format(representation_dict[291]))\n>>> print(test_format(use_comprehension("representation_dict")))\n>>> print(test_format(line_contains_substr("representation_dict", "base")))\n')], [('lZLRqgDYLYKTNaEx', 'Names to Salaries', '>>> print(test_format(listdict2dict))\n>>> print(test_format(use_comprehension("listdict2dict")))\n')], [('tGNwBWZTFRhlJgVe', 'Next Ints', '>>> print(test_format(nextInts([1, 5, 7])))\n>>> print(test_format(nextInts([0, 0, 0, 0, 0])))\n>>> print(test_format(nextInts([570, 968, 723, 179, 762, 377, 845, 320, 475, 952, 680, 874, 708, 493, 901, 896, 164, 165, 404, 147, 917, 936, 205, 615, 518, 254, 856, 584, 287, 336, 452, 551, 914, 706, 558, 842, 52, 593, 733, 398, 119, 874, 769, 585, 572, 261, 440, 404, 293, 176, 575, 
224, 647, 241, 319, 974, 5, 373, 367, 609, 661, 691, 47, 64, 79, 744, 606, 205, 424, 88, 648, 419, 165, 399, 594, 760, 348, 638, 385, 754, 491, 284, 531, 258, 745, 634, 51, 557, 346, 577, 375, 979, 773, 523, 441, 952, 50, 534, 641, 621, 813, 511, 279, 565, 228, 86, 187, 395, 261, 287, 717, 989, 614, 92, 8, 229, 372, 378, 53, 350, 936, 654, 74, 750, 20, 978, 506, 793, 148, 944, 23, 962, 996, 586, 404, 216, 148, 284, 797, 805, 501, 161, 64, 608, 287, 127, 136, 902, 879, 433, 553, 366, 155, 763, 728, 117, 300, 990, 345, 982, 767, 279, 814, 516, 342, 291, 410, 612, 961, 445, 472, 507, 251, 832, 737, 62, 384, 273, 352, 752, 455, 216, 731, 7, 868, 111, 42, 190, 841, 283, 215, 860, 628, 835, 145, 97, 337, 57, 791, 443, 271, 925, 666, 452, 601, 571, 218, 901, 479, 75, 912, 708, 33, 575, 252, 753, 857, 150, 625, 852, 921, 178, 832, 126, 929, 16, 427, 533, 119, 256, 937, 107, 740, 607, 801, 827, 667, 776, 95, 940, 66, 982, 930, 825, 878, 512, 961, 701, 657, 584, 204, 348, 564, 505, 303, 562, 399, 415, 784, 588, 2, 729, 478, 396, 314, 130, 493, 947, 724, 540, 608, 431, 107, 497, 68, 791, 521, 583, 359, 221, 713, 683, 945, 274, 568, 666, 517, 241, 401, 437, 958, 572, 561, 929, 342, 149, 971, 762, 249, 538, 277, 761, 489, 728, 372, 131, 366, 702, 73, 382, 58, 223, 423, 642, 628, 6, 158, 946, 710, 232, 211, 747, 215, 579, 396, 521, 597, 966, 401, 749, 546, 310, 786, 691, 333, 817, 162, 961, 674, 132, 235, 481, 410, 477, 311, 932, 352, 64, 771, 837, 609, 654, 535, 530, 346, 294, 441, 532, 824, 422, 912, 99, 894, 246, 99, 111, 806, 360, 652, 753, 489, 735, 996, 8, 742, 793, 341, 498, 790, 402, 542, 892, 573, 78, 994, 676, 225, 675, 904, 196, 156, 819, 959, 501, 554, 381, 525, 608, 401, 937, 875, 373, 803, 258, 530, 901, 175, 656, 533, 91, 304, 497, 321, 906, 893, 995, 238, 51, 419, 70, 673, 479, 852, 864, 143, 224, 911, 207, 41, 603, 824, 764, 257, 653, 521, 28, 673, 333, 536, 748, 92, 98, 951, 655, 278, 437, 167, 253, 849, 343, 554, 313, 333, 556, 919, 636, 21, 841, 854, 550, 
993, 291, 324, 224, 48, 927, 784, 387, 276, 652, 860, 100, 386, 153, 988, 805, 419, 75, 365, 920, 957, 23, 592, 280, 814, 800, 154, 776, 169, 635, 379, 919, 742, 145, 784, 201, 711, 209, 36, 317, 718, 84, 974, 768, 518, 884, 374, 447, 160, 295, 29, 23, 421, 384, 104, 123, 40, 945, 765, 32, 243, 696, 603, 129, 650, 957, 659, 863, 582, 165, 681, 33, 738, 917, 410, 803, 821, 636, 162, 662, 231, 75, 799, 591, 258, 722, 131, 805, 600, 704, 995, 793, 502, 624, 656, 43, 597, 353, 867, 116, 568, 26, 16, 251, 78, 764, 799, 287, 575, 190, 718, 619, 377, 465, 267, 688, 772, 359, 451, 459, 139, 71, 821, 312, 334, 988, 929, 797, 830, 26, 3, 90, 450, 715, 174, 910, 258, 229, 325, 517, 37, 260, 950, 20, 881, 156, 231, 114, 670, 287, 631, 982, 855, 841, 72, 561, 368, 289, 829, 428, 815, 207, 844, 68, 143, 707, 259, 669, 362, 943, 550, 133, 367, 900, 233, 109, 504, 803, 985, 333, 318, 680, 952, 408, 268, 890, 101, 423, 261, 641, 500, 389, 885, 76, 682, 811, 941, 142, 552, 401, 429, 973, 287, 472, 630, 383, 569, 630, 135, 823, 49, 507, 433, 550, 660, 403, 88, 879, 697, 571, 790, 896, 252, 172, 911, 485, 30, 657, 821, 412, 204, 801, 763, 329, 199, 315, 940, 515, 29, 22, 66, 221, 63, 678, 368, 545, 560, 301, 292, 987, 673, 573, 399, 148, 326, 418, 687, 85, 167, 774, 657, 754, 168, 113, 412, 353, 234, 923, 720, 691, 319, 711, 1000, 188, 969, 123, 547, 127, 69, 782, 533, 898, 574, 214, 848, 599, 112, 833, 26, 750, 462, 480, 511, 644, 929, 725, 310, 41, 559, 961, 399, 527, 960, 352, 468, 755, 732, 944, 115, 408, 642, 888, 922, 780, 727, 459, 473, 122, 716, 908, 576, 498, 196, 647, 912, 275, 238, 79, 75, 427, 299, 470, 347, 792, 969, 21, 424, 596, 88, 98, 475, 917, 683, 47, 843, 742, 673, 702, 983, 996, 430, 53, 327, 769, 666, 453, 93, 498, 942, 299, 200, 968, 202, 193, 508, 706, 247, 51, 721, 327, 484, 855, 565, 777, 33, 816, 827, 36, 962, 235, 297, 666, 111, 453, 445, 111, 653, 690, 325, 36, 187, 633, 854, 829, 74, 840, 744, 375, 124, 694, 236, 222, 88, 449, 134, 542, 812, 325, 373, 
975, 131, 78, 390, 114, 969, 633, 57, 110, 635, 396, 947, 913, 148, 215, 465, 72, 463, 830, 885, 532, 728, 701, 31, 541, 54, 411, 916, 268, 596, 72, 971, 907, 856, 65, 55, 108, 222, 24, 482, 150, 864, 768, 332, 40, 961, 80, 745, 984, 170, 424, 28, 442, 146, 724, 32, 786, 985, 386, 326, 840, 416, 931, 606, 746, 39, 295, 355, 80, 663, 463, 716, 849, 606, 83, 512, 144, 854, 384, 976, 675, 549, 318, 893, 193, 562, 419, 444, 427, 612, 362, 567, 529, 273, 807, 381, 120, 66, 397, 738, 948, 99, 427, 560, 916, 283, 722, 111, 740, 156, 942, 215, 67, 944, 161, 544, 597, 468, 441, 483, 961, 503, 162, 706, 57, 37, 307, 142, 537, 861, 944]) ))\n>>> print(test_format(use_comprehension("nextInts")))\n')], [('gbXfwMbLBL6ySr10', 'Cubes', '>>> print(test_format(cubes([0, 0, 0, 0])))\n>>> print(test_format(cubes([4, 5, 6])))\n>>> print(test_format(cubes([0.5, 1.5, 2.5, 3.5])))\n>>> print(test_format(cubes([768, 275, 645, 106, 332, 836, 109, 268, 721, 711, 642, 393, 671, 263, 480, 211, 819, 735, 797, 394, 625, 199, 308, 937, 552, 435, 70, 316, 987, 188, 291, 387, 844, 939, 781, 329, 484, 678, 223, 598, 135, 717, 444, 650, 40, 740, 799, 315, 933, 321, 81, 410, 512, 651, 471, 867, 910, 769, 657, 588, 769, 174, 347, 759, 222, 904, 248, 547, 158, 254, 966, 47, 980, 948, 461, 234, 266, 976, 105, 125, 468, 612, 468, 521, 828, 93, 562, 135, 751, 160, 159, 812, 212, 553, 456, 704, 683, 849, 529, 795])))\n>>> print(test_format(use_comprehension("cubes")))\n')], [('HikXLgYM3rySDfdy', 'dict2list', '>>> dct1 = {}\n>>> keylist1 = []\n>>> print(test_format(dict2list(dct1, keylist1)))\n>>> dct2 = {\'a\':\'A\', \'b\':\'B\', \'c\':\'C\'}\n>>> keylist2 = [\'b\',\'c\',\'a\']\n>>> print(test_format(dict2list(dct2, keylist2)))\n>>> dct3 = {3: 395, 8: 816, 9: 370, 10: 102, 11: 746, 18: 477, 20: 284, 26: 783, 27: 55, 35: 108, 43: 621, 45: 225, 46: 56, 51: 503, 54: 24, 55: 742, 62: 491, 64: 317, 66: 739, 70: 972, 71: 372, 74: 312, 76: 826, 77: 215, 78: 507, 80: 970, 87: 966, 90: 798, 91: 353, 94: 358, 101: 
880, 102: 730, 105: 514, 106: 867, 108: 723, 117: 412, 120: 870, 124: 511, 126: 904, 127: 196, 128: 758, 130: 89, 131: 631, 133: 45, 137: 345, 138: 246, 139: 141, 142: 963, 143: 583, 146: 626, 148: 615, 149: 581, 150: 889, 154: 662, 155: 993, 157: 765, 158: 7, 160: 67, 162: 862, 172: 212, 174: 493, 175: 676, 176: 915, 177: 220, 179: 7, 180: 362, 186: 586, 191: 632, 194: 755, 196: 537, 198: 398, 201: 330, 202: 337, 207: 767, 212: 41, 214: 341, 223: 84, 224: 651, 226: 898, 227: 926, 231: 801, 232: 751, 235: 216, 236: 234, 238: 445, 243: 534, 244: 81, 246: 860, 248: 478, 249: 659, 250: 107, 254: 609, 255: 488, 256: 108, 257: 497, 260: 649, 264: 684, 267: 964, 268: 294, 269: 327, 271: 621, 276: 713, 278: 195, 281: 559, 283: 858, 287: 931, 289: 89, 293: 850, 294: 277, 295: 537, 298: 430, 301: 244, 302: 950, 305: 594, 306: 98, 307: 438, 308: 564, 310: 643, 311: 363, 314: 109, 315: 295, 316: 604, 317: 268, 324: 166, 331: 853, 336: 123, 346: 46, 348: 186, 349: 404, 350: 426, 352: 34, 353: 741, 355: 385, 356: 115, 357: 613, 366: 369, 367: 513, 369: 36, 370: 755, 375: 77, 377: 780, 378: 57, 380: 123, 381: 914, 384: 575, 386: 866, 387: 377, 389: 915, 393: 36, 398: 895, 399: 215, 404: 317, 406: 711, 411: 490, 412: 752, 413: 879, 414: 344, 417: 723, 419: 431, 421: 279, 422: 518, 425: 346, 427: 992, 429: 758, 433: 48, 435: 66, 436: 349, 437: 429, 438: 616, 439: 186, 449: 917, 452: 807, 457: 916, 458: 548, 459: 601, 463: 891, 464: 897, 465: 404, 467: 241, 469: 510, 471: 66, 472: 688, 473: 797, 475: 252, 476: 408, 479: 79, 484: 307, 485: 462, 494: 492, 497: 841, 499: 200, 501: 451, 502: 494, 504: 754, 505: 56, 506: 234, 507: 849, 509: 984, 511: 902, 512: 156, 516: 721, 517: 905, 518: 728, 521: 505, 523: 29, 534: 256, 537: 179, 539: 820, 540: 199, 543: 358, 545: 626, 547: 21, 549: 456, 550: 447, 553: 316, 559: 997, 560: 513, 561: 171, 563: 231, 565: 913, 573: 330, 575: 697, 579: 682, 581: 92, 584: 65, 590: 393, 591: 258, 592: 0, 593: 978, 597: 407, 598: 497, 599: 420, 601: 13, 603: 
460, 611: 710, 614: 228, 623: 837, 626: 98, 629: 363, 630: 510, 634: 339, 635: 625, 637: 787, 639: 774, 642: 401, 643: 187, 644: 35, 646: 183, 647: 872, 651: 901, 652: 399, 654: 635, 659: 762, 660: 358, 661: 537, 664: 639, 665: 49, 672: 121, 675: 909, 676: 369, 679: 901, 680: 409, 685: 694, 688: 979, 690: 604, 692: 212, 695: 856, 696: 722, 697: 493, 699: 340, 700: 706, 701: 549, 702: 129, 708: 222, 709: 433, 710: 872, 711: 874, 713: 197, 714: 109, 715: 463, 716: 47, 717: 5, 718: 639, 719: 900, 722: 467, 723: 785, 725: 993, 726: 89, 727: 428, 729: 47, 731: 178, 732: 74, 735: 82, 736: 68, 737: 953, 739: 490, 740: 399, 744: 489, 747: 83, 751: 178, 756: 982, 758: 343, 759: 346, 762: 600, 775: 424, 776: 669, 781: 214, 785: 438, 789: 616, 790: 852, 791: 444, 795: 671, 796: 909, 797: 331, 798: 534, 800: 782, 803: 570, 804: 638, 807: 535, 808: 852, 809: 424, 812: 75, 816: 303, 818: 730, 824: 501, 827: 138, 828: 700, 829: 475, 834: 858, 844: 814, 846: 269, 848: 258, 851: 144, 856: 585, 857: 427, 858: 136, 862: 59, 864: 981, 868: 591, 870: 754, 871: 778, 874: 77, 875: 809, 877: 198, 878: 712, 880: 699, 881: 978, 882: 301, 883: 51, 885: 453, 888: 881, 889: 758, 890: 786, 891: 329, 893: 280, 894: 594, 897: 410, 898: 567, 901: 764, 902: 528, 904: 191, 909: 664, 911: 582, 912: 945, 915: 1000, 917: 818, 922: 165, 924: 111, 925: 624, 928: 215, 930: 304, 931: 625, 933: 621, 935: 625, 937: 685, 938: 477, 939: 857, 940: 471, 941: 720, 944: 516, 945: 27, 947: 216, 948: 926, 950: 78, 954: 391, 957: 260, 958: 461, 961: 415, 962: 374, 965: 516, 968: 832, 970: 121, 975: 181, 984: 834, 987: 517, 988: 752, 989: 241, 993: 7, 997: 523}\n>>> keylist3 = [3, 8, 9, 10, 11, 18, 20, 26, 27, 35, 43, 45, 46, 51, 54, 55, 62, 64, 66, 70, 71, 74, 76, 77, 78, 80, 87, 90, 91, 94, 101, 102, 105, 106, 108, 117, 120, 124, 126, 127, 128, 130, 131, 133, 137, 138, 139, 142, 143, 146, 148, 149, 150, 154, 155, 157, 158, 160, 162, 172, 174, 175, 176, 177, 179, 180, 186, 191, 194, 196, 198, 201, 202, 207, 212, 214, 
223, 224, 226, 227, 231, 232, 235, 236, 238, 243, 244, 246, 248, 249, 250, 254, 255, 256, 257, 260, 264, 267, 268, 269, 271, 276, 278, 281, 283, 287, 289, 293, 294, 295, 298, 301, 302, 305, 306, 307, 308, 310, 311, 314, 315, 316, 317, 324, 331, 336, 346, 348, 349, 350, 352, 353, 355, 356, 357, 366, 367, 369, 370, 375, 377, 378, 380, 381, 384, 386, 387, 389, 393, 398, 399, 404, 406, 411, 412, 413, 414, 417, 419, 421, 422, 425, 427, 429, 433, 435, 436, 437, 438, 439, 449, 452, 457, 458, 459, 463, 464, 465, 467, 469, 471, 472, 473, 475, 476, 479, 484, 485, 494, 497, 499, 501, 502, 504, 505, 506, 507, 509, 511, 512, 516, 517, 518, 521, 523, 534, 537, 539, 540, 543, 545, 547, 549, 550, 553, 559, 560, 561, 563, 565, 573, 575, 579, 581, 584, 590, 591, 592, 593, 597, 598, 599, 601, 603, 611, 614, 623, 626, 629, 630, 634, 635, 637, 639, 642, 643, 644, 646, 647, 651, 652, 654, 659, 660, 661, 664, 665, 672, 675, 676, 679, 680, 685, 688, 690, 692, 695, 696, 697, 699, 700, 701, 702, 708, 709, 710, 711, 713, 714, 715, 716, 717, 718, 719, 722, 723, 725, 726, 727, 729, 731, 732, 735, 736, 737, 739, 740, 744, 747, 751, 756, 758, 759, 762, 775, 776, 781, 785, 789, 790, 791, 795, 796, 797, 798, 800, 803, 804, 807, 808, 809, 812, 816, 818, 824, 827, 828, 829, 834, 844, 846, 848, 851, 856, 857, 858, 862, 864, 868, 870, 871, 874, 875, 877, 878, 880, 881, 882, 883, 885, 888, 889, 890, 891, 893, 894, 897, 898, 901, 902, 904, 909, 911, 912, 915, 917, 922, 924, 925, 928, 930, 931, 933, 935, 937, 938, 939, 940, 941, 944, 945, 947, 948, 950, 954, 957, 958, 961, 962, 965, 968, 970, 975, 984, 987, 988, 989, 993, 997]\n>>> print(test_format(dict2list(dct3, keylist3)))\n>>> print(test_format(use_comprehension("dict2list")))\n')], [('SgCJwpfcAVel9YfA', 'list2dict', '>>> L1 = []\n>>> keylist1 = []\n>>> print(test_format(list2dict(L1, keylist1)))\n>>> L2 =[\'A\',\'B\',\'C\']\n>>> keylist2 = [\'a\',\'b\',\'c\']\n>>> print(test_format(list2dict(L2, keylist2)))\n>>> L3 = [395, 816, 370, 102, 746, 477, 
284, 783, 55, 108, 621, 225, 56, 503, 24, 742, 491, 317, 739, 972, 372, 312, 826, 215, 507, 970, 966, 798, 353, 358, 880, 730, 514, 867, 723, 412, 870, 511, 904, 196, 758, 89, 631, 45, 345, 246, 141, 963, 583, 626, 615, 581, 889, 662, 993, 765, 7, 67, 862, 212, 493, 676, 915, 220, 7, 362, 586, 632, 755, 537, 398, 330, 337, 767, 41, 341, 84, 651, 898, 926, 801, 751, 216, 234, 445, 534, 81, 860, 478, 659, 107, 609, 488, 108, 497, 649, 684, 964, 294, 327, 621, 713, 195, 559, 858, 931, 89, 850, 277, 537, 430, 244, 950, 594, 98, 438, 564, 643, 363, 109, 295, 604, 268, 166, 853, 123, 46, 186, 404, 426, 34, 741, 385, 115, 613, 369, 513, 36, 755, 77, 780, 57, 123, 914, 575, 866, 377, 915, 36, 895, 215, 317, 711, 490, 752, 879, 344, 723, 431, 279, 518, 346, 992, 758, 48, 66, 349, 429, 616, 186, 917, 807, 916, 548, 601, 891, 897, 404, 241, 510, 66, 688, 797, 252, 408, 79, 307, 462, 492, 841, 200, 451, 494, 754, 56, 234, 849, 984, 902, 156, 721, 905, 728, 505, 29, 256, 179, 820, 199, 358, 626, 21, 456, 447, 316, 997, 513, 171, 231, 913, 330, 697, 682, 92, 65, 393, 258, 0, 978, 407, 497, 420, 13, 460, 710, 228, 837, 98, 363, 510, 339, 625, 787, 774, 401, 187, 35, 183, 872, 901, 399, 635, 762, 358, 537, 639, 49, 121, 909, 369, 901, 409, 694, 979, 604, 212, 856, 722, 493, 340, 706, 549, 129, 222, 433, 872, 874, 197, 109, 463, 47, 5, 639, 900, 467, 785, 993, 89, 428, 47, 178, 74, 82, 68, 953, 490, 399, 489, 83, 178, 982, 343, 346, 600, 424, 669, 214, 438, 616, 852, 444, 671, 909, 331, 534, 782, 570, 638, 535, 852, 424, 75, 303, 730, 501, 138, 700, 475, 858, 814, 269, 258, 144, 585, 427, 136, 59, 981, 591, 754, 778, 77, 809, 198, 712, 699, 978, 301, 51, 453, 881, 758, 786, 329, 280, 594, 410, 567, 764, 528, 191, 664, 582, 945, 1000, 818, 165, 111, 624, 215, 304, 625, 621, 625, 685, 477, 857, 471, 720, 516, 27, 216, 926, 78, 391, 260, 461, 415, 374, 516, 832, 121, 181, 834, 517, 752, 241, 7, 523]\n>>> keylist3 = [3, 8, 9, 10, 11, 18, 20, 26, 27, 35, 43, 45, 46, 51, 54, 55, 62, 64, 
66, 70, 71, 74, 76, 77, 78, 80, 87, 90, 91, 94, 101, 102, 105, 106, 108, 117, 120, 124, 126, 127, 128, 130, 131, 133, 137, 138, 139, 142, 143, 146, 148, 149, 150, 154, 155, 157, 158, 160, 162, 172, 174, 175, 176, 177, 179, 180, 186, 191, 194, 196, 198, 201, 202, 207, 212, 214, 223, 224, 226, 227, 231, 232, 235, 236, 238, 243, 244, 246, 248, 249, 250, 254, 255, 256, 257, 260, 264, 267, 268, 269, 271, 276, 278, 281, 283, 287, 289, 293, 294, 295, 298, 301, 302, 305, 306, 307, 308, 310, 311, 314, 315, 316, 317, 324, 331, 336, 346, 348, 349, 350, 352, 353, 355, 356, 357, 366, 367, 369, 370, 375, 377, 378, 380, 381, 384, 386, 387, 389, 393, 398, 399, 404, 406, 411, 412, 413, 414, 417, 419, 421, 422, 425, 427, 429, 433, 435, 436, 437, 438, 439, 449, 452, 457, 458, 459, 463, 464, 465, 467, 469, 471, 472, 473, 475, 476, 479, 484, 485, 494, 497, 499, 501, 502, 504, 505, 506, 507, 509, 511, 512, 516, 517, 518, 521, 523, 534, 537, 539, 540, 543, 545, 547, 549, 550, 553, 559, 560, 561, 563, 565, 573, 575, 579, 581, 584, 590, 591, 592, 593, 597, 598, 599, 601, 603, 611, 614, 623, 626, 629, 630, 634, 635, 637, 639, 642, 643, 644, 646, 647, 651, 652, 654, 659, 660, 661, 664, 665, 672, 675, 676, 679, 680, 685, 688, 690, 692, 695, 696, 697, 699, 700, 701, 702, 708, 709, 710, 711, 713, 714, 715, 716, 717, 718, 719, 722, 723, 725, 726, 727, 729, 731, 732, 735, 736, 737, 739, 740, 744, 747, 751, 756, 758, 759, 762, 775, 776, 781, 785, 789, 790, 791, 795, 796, 797, 798, 800, 803, 804, 807, 808, 809, 812, 816, 818, 824, 827, 828, 829, 834, 844, 846, 848, 851, 856, 857, 858, 862, 864, 868, 870, 871, 874, 875, 877, 878, 880, 881, 882, 883, 885, 888, 889, 890, 891, 893, 894, 897, 898, 901, 902, 904, 909, 911, 912, 915, 917, 922, 924, 925, 928, 930, 931, 933, 935, 937, 938, 939, 940, 941, 944, 945, 947, 948, 950, 954, 957, 958, 961, 962, 965, 968, 970, 975, 984, 987, 988, 989, 993, 997]\n>>> print(test_format(list2dict(L3, keylist3)))\n>>> 
print(test_format(use_comprehension("dict2list")))\n')]]
source_files = ['python_lab.py'] * len(sum(groups,[]))
try:
    import python_lab as solution
    test_vars = vars(solution).copy()
except Exception as exc:
    print(exc)
    print("!! It seems like you have an error in your stencil file. Please fix before submitting.")
    sys.exit(1)
def find_lines(varname):
    return list(filter(lambda l: varname in l, list(open("python_lab.py"))))
def find_line(varname):
    ls = find_lines(varname)
    return ls[0] if len(ls) else None
def use_comprehension(varname):
    lines = find_lines(varname)
    for line in lines:
        try:
            if "comprehension" in ast.dump(ast.parse(line)):
                return True
        except Exception:
            pass  # lines that are not complete statements simply fail to parse
    return False
def double_comprehension(varname):
    line = find_line(varname)
    return ast.dump(ast.parse(line)).count("comprehension") == 2
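The comprehension checks above rely on a substring search in `ast.dump()`. A standalone sketch (independent of the grader) of why counting that substring works:

```python
# ast.dump() names each generator node "comprehension", so counting the
# substring in the dump counts the comprehensions in one line of source.
import ast

assert ast.dump(ast.parse("xs = [x * x for x in range(3)]")).count("comprehension") == 1
assert ast.dump(ast.parse("m = [[0 for j in r] for i in r]")).count("comprehension") == 2
```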
def line_contains_substr(varname, word):
    lines = find_lines(varname)  # find_lines, not find_line: we need every matching line
    for line in lines:
        if word in line:
            return True
    return False
def test_format(obj, precision=6):
    tf = lambda o: test_format(o, precision)
    delimit = lambda o: ', '.join(o)
    otype = type(obj)
    if otype is str:
        return "'%s'" % obj
    elif otype is float:
        fstr = '%%.%dg' % precision
        return fstr % obj
    elif otype is set:
        if len(obj) == 0:
            return 'set()'
        return '{%s}' % delimit(sorted(map(tf, obj)))
    elif otype is dict:
        return '{%s}' % delimit(sorted(tf(k) + ': ' + tf(v) for k, v in obj.items()))
    elif otype is list:
        return '[%s]' % delimit(map(tf, obj))
    elif otype is tuple:
        return '(%s%s)' % (delimit(map(tf, obj)), ',' if len(obj) == 1 else '')
    elif otype.__name__ in ['Vec', 'Mat']:
        entries = delimit(map(tf, sorted(filter(lambda o: o[1] != 0, obj.f.items()))))
        return '<%s %s {%s}>' % (otype.__name__, test_format(obj.D), entries)
    else:
        return str(obj)
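A standalone sketch (not the grader itself) of the float branch above: `'%.<p>g'` caps the output at a fixed number of significant digits and trims trailing zeros, which keeps float output stable across runs.

```python
# Build the same kind of format string test_format uses for floats.
precision = 6
fstr = '%%.%dg' % precision  # yields the format string '%.6g'
assert fstr == '%.6g'
assert fstr % 0.5 == '0.5'            # trailing zeros are trimmed
assert fstr % (1 / 3) == '0.333333'   # capped at 6 significant digits
```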
def output(tests):
    dtst = doctest.DocTestParser().get_doctest(tests, test_vars, 0, '<string>', 0)
    runner = ModifiedDocTestRunner()
    runner.run(dtst)
    return runner.results
test_vars['test_format'] = test_vars['tf'] = test_format
test_vars['find_lines'] = find_lines
test_vars['find_line'] = find_line
test_vars['use_comprehension'] = use_comprehension
test_vars['double_comprehension'] = double_comprehension
test_vars['line_contains_substr'] = line_contains_substr
base_url = '://class.coursera.org/%s/assignment/' % URL
protocol = 'https'
colorize = False
verbose = False
class ModifiedDocTestRunner(doctest.DocTestRunner):
    def __init__(self, *args, **kwargs):
        self.results = []
        super(ModifiedDocTestRunner, self).__init__(*args, checker=OutputAccepter(), **kwargs)
    def report_success(self, out, test, example, got):
        self.results.append(got)
    def report_unexpected_exception(self, out, test, example, exc_info):
        exf = traceback.format_exception_only(exc_info[0], exc_info[1])[-1]
        self.results.append(exf)
class OutputAccepter(doctest.OutputChecker):
    def check_output(self, want, got, optionflags):
        return True
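The two classes above implement a record-everything doctest runner: the checker accepts any output, so the runner captures what the code printed rather than judging pass/fail. A minimal self-contained sketch (with stand-in names `Accepter`/`Recorder`) of the same pattern:

```python
import doctest

class Accepter(doctest.OutputChecker):
    # Accept any output so report_success always fires.
    def check_output(self, want, got, optionflags):
        return True

class Recorder(doctest.DocTestRunner):
    def __init__(self):
        super().__init__(checker=Accepter())
        self.results = []
    def report_success(self, out, test, example, got):
        self.results.append(got)

dt = doctest.DocTestParser().get_doctest('>>> print(1 + 1)\n', {}, 0, '<string>', 0)
runner = Recorder()
runner.run(dt)
assert runner.results == ['2\n']  # the captured stdout, not a pass/fail verdict
```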
def submit(parts_string, login, password):
    print('= Coding the Matrix Homework and Lab Submission')
    if not login:
        login = login_prompt()
    if not password:
        password = password_prompt()
    if not parts_string:
        parts_string = parts_prompt()
    parts = parse_parts(parts_string)
    if not all([parts, login, password]):
        return
    for sid, name, part_tests in parts:
        sys.stdout.write('== Submitting "%s"' % name)
        if 'DEV' in os.environ:
            sid += '-dev'
        (login, ch, state, ch_aux) = get_challenge(login, sid)
        if not all([login, ch, state]):
            print(' !! Error: %s\n' % login)
            return
        # to stop Coursera's strip() from doing anything, we surround in parens
        results = output(part_tests)
        prog_out = '(%s)' % ''.join(map(str.rstrip, results))
        token = challenge_response(login, password, ch)
        src = source(sid)
        feedback = submit_solution(login, token, sid, prog_out, src, state, ch_aux)
        if len(feedback.strip()) > 0:
            if colorize:
                good = 'incorrect' not in feedback.lower()
                print(': \033[1;3%dm%s\033[0m' % (2 if good else 1, feedback.strip()))
            else:
                print(': %s' % feedback.strip())
        if verbose:
            for t, r in zip(part_tests.split('\n'), results):
                sys.stdout.write('%s\n%s' % (t, r))
            sys.stdout.write('\n\n')
def login_prompt():
    return input('Login email address: ')
def password_prompt():
    return input("One-time password from the assignment page (NOT your own account's password): ")
def parts_prompt():
    print('These are the assignment parts that you can submit:')
    for i, name in enumerate(part_friendly_names):
        print(' %d) %s' % (i + 1, name))
    return input('\nWhich parts do you want to submit? (Ex: 1, 4-7): ')
def parse_parts(string):
    def extract_range(s):
        s = s.split('-')
        if len(s) == 1:
            return [int(s[0])]
        else:
            return list(range(int(s[0]), 1 + int(s[1])))
    parts = map(extract_range, string.split(','))
    flat_parts = sum(parts, [])
    return sum(list(map(lambda p: groups[p - 1], flat_parts)), [])
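A standalone restatement (using a hypothetical helper `demo_extract_range`, mirroring the nested `extract_range` above) of the range syntax accepted at the prompt — `"1, 4-7"` selects tasks 1 and 4 through 7:

```python
def demo_extract_range(s):
    # "3" -> [3]; "4-7" -> [4, 5, 6, 7] (inclusive on both ends)
    parts = s.strip().split('-')
    if len(parts) == 1:
        return [int(parts[0])]
    return list(range(int(parts[0]), 1 + int(parts[1])))

selected = sum((demo_extract_range(p) for p in '1, 4-7'.split(',')), [])
assert selected == [1, 4, 5, 6, 7]
```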
def get_challenge(email, sid):
    """Gets the challenge salt from the server. Returns (email, ch, state, ch_aux)."""
    params = {'email_address': email, 'assignment_part_sid': sid, 'response_encoding': 'delim'}
    challenge_url = '%s%schallenge' % (protocol, base_url)
    data = urllib.parse.urlencode(params).encode('utf-8')
    req = urllib.request.Request(challenge_url, data)
    resp = urllib.request.urlopen(req)
    # read(), not readall(): HTTPResponse has no readall() on current Python 3
    text = resp.read().decode('utf-8').strip().split('|')
    if len(text) != 9:
        print(' !! %s' % '|'.join(text))
        sys.exit(1)
    return tuple(text[x] for x in [2, 4, 6, 8])
def challenge_response(email, passwd, challenge):
    return hashlib.sha1((challenge + passwd).encode('utf-8')).hexdigest()
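Standalone illustration of the scheme above: the server-issued challenge salt is concatenated with the one-time password and SHA-1 hashed, so the password itself never travels over the wire (the salt and password values here are made up):

```python
import hashlib

# Same construction as challenge_response(), with placeholder inputs.
token = hashlib.sha1(('challenge-salt' + 'one-time-pass').encode('utf-8')).hexdigest()
assert len(token) == 40                                 # SHA-1 hex digest is 40 chars
assert all(c in '0123456789abcdef' for c in token)      # lowercase hex
```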
def submit_solution(email_address, ch_resp, sid, output, source, state, ch_aux):
    b64ize = lambda s: str(base64.encodebytes(s.encode('utf-8')), 'ascii')
    values = {'assignment_part_sid': sid,
              'email_address': email_address,
              'submission': b64ize(output),
              'submission_aux': b64ize(source),
              'challenge_response': ch_resp,
              'state': state}
    submit_url = '%s%ssubmit' % (protocol, base_url)
    data = urllib.parse.urlencode(values).encode('utf-8')
    req = urllib.request.Request(submit_url, data)
    response = urllib.request.urlopen(req)
    # read(), not readall(): HTTPResponse has no readall() on current Python 3
    return response.read().decode('utf-8').strip()
def source(sid):
    src = []
    for fn in set(source_files):
        with open(fn) as source_f:
            src.append(source_f.read())
    return '\n\n'.join(src)
if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    env = os.environ
    helps = ['numbers or ranges of tasks to submit',
             'the email address on your Coursera account',
             'your ONE-TIME password',
             'use ANSI color escape sequences',
             "show the test's interaction with your code",
             'use an encrypted connection to Coursera',
             'use an unencrypted connection to Coursera']
    parser.add_argument('tasks', default=env.get('COURSERA_TASKS'), nargs='*', help=helps[0])
    parser.add_argument('--email', default=env.get('COURSERA_EMAIL'), help=helps[1])
    parser.add_argument('--password', default=env.get('COURSERA_PASS'), help=helps[2])
    parser.add_argument('--colorize', default=False, action='store_true', help=helps[3])
    parser.add_argument('--verbose', default=False, action='store_true', help=helps[4])
    group = parser.add_mutually_exclusive_group()
    group.add_argument('--https', dest="protocol", const="https", action="store_const", help=helps[-2])
    group.add_argument('--http', dest="protocol", const="http", action="store_const", help=helps[-1])
    args = parser.parse_args()
    if args.protocol:
        protocol = args.protocol
    colorize = args.colorize
    verbose = args.verbose
    submit(','.join(args.tasks), args.email, args.password)
# --- data/scripts/radials/blue_frog.py (anhstudios/swganh, MIT license) ---
import swgpy
from swgpy.object import *
from swgpy.sui import *
from swgpy.utility import vector3, quat
from swgpy.combat import *
from swgpy.gamesystems import *
from swgpy import ACTION
import random
class PyRadialMenu(RadialMenu):
    def buildRadial(self, owner, target, radials):
        radial_list = RadialOptionsList()
        radial_list.append(RadialOptions(0, RadialIdentifier.itemUse, 1, 'Hack Universe'))
        radial_list.append(RadialOptions(0, RadialIdentifier.examine, 1, ''))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu1, 3, 'items'))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu2, 3, 'Weapon Pack'))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu3, 3, 'Armor Pack'))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu4, 3, 'Structures Pack'))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu5, 3, 'Pets Pack'))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu6, 3, 'Instrument Pack'))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu7, 3, 'Ham Options'))
        radial_list.append(RadialOptions(1, RadialIdentifier.serverMenu8, 3, 'Professions'))
        return radial_list

    levels = ('None', 'Light', 'Medium', 'Heavy')
    damage_types = ('Energy', 'Kinetic', 'Acid', 'Cold', 'Electricity', 'Heat')
    def defaultPostProcess(self, item):
        pass
    def weaponPostProcess(self, item):
        item.max_condition = random.randint(100, 10000)
        item.setStringAttribute('wpn_armor_pierce_rating', random.choice(self.levels))
        item.setFloatAttribute('wpn_attack_speed', random.uniform(0.1, 5))
        item.setStringAttribute('cat_wpn_damage.wpn_damage_type', random.choice(self.damage_types))
        # roll min first, then draw max from [min, 1000] so min <= max always holds
        min_damage = random.randint(1, 1000)
        max_damage = random.randint(min_damage, 1000)
        item.setIntAttribute('cat_wpn_damage.wpn_damage_min', min_damage)
        item.setIntAttribute('cat_wpn_damage.wpn_damage_max', max_damage)
        item.setFloatAttribute('cat_wpn_damage.wpn_wound_chance', random.uniform(0, 100))
        item.setIntAttribute('cat_wpn_rangemods.wpn_range_zero', 0)
        item.setIntAttribute('cat_wpn_rangemods.wpn_range_mid', 40)
        item.setIntAttribute('cat_wpn_rangemods.wpn_range_max', -80)
        item.setIntAttribute('cat_wpn_attack_cost.wpn_attack_cost_health', random.randint(1, 200))
        item.setIntAttribute('cat_wpn_attack_cost.wpn_attack_cost_action', random.randint(1, 200))
        item.setIntAttribute('cat_wpn_attack_cost.wpn_attack_cost_mind', random.randint(1, 200))
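A standalone sketch of the two-step damage roll used above: drawing `max_damage` from `[min_damage, 1000]` guarantees `min <= max` without any retry loop:

```python
import random

random.seed(7)  # deterministic for the example
for _ in range(100):
    min_damage = random.randint(1, 1000)
    max_damage = random.randint(min_damage, 1000)
    assert min_damage <= max_damage <= 1000
```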
    def armorPostProcess(self, item):
        item.max_condition = random.randint(100, 10000)
        item.setStringAttribute('armor_rating', random.choice(self.levels))
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_kinetic', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_energy', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_blast', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_stun', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_elemental_heat', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_elemental_cold', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_elemental_acid', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_elemental_electrical', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_special_protection.armor_eff_restraint', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_restraint', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_energy', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_blast', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_stun', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_elemental_heat', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_elemental_cold', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_elemental_acid', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_elemental_electrical', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_effectiveness.armor_eff_restraint', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_kinetic', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_energy', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_blast', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_stun', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_elemental_heat', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_elemental_cold', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_elemental_acid', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_elemental_electrical', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setFloatAttribute('cat_armor_vulnerability.armor_eff_restraint', random.uniform(0, 100) if random.random() > 0.7 else 0)
        item.setIntAttribute('cat_armor_encumbrance.armor_health_encumbrance', random.randint(20, 300))
        item.setIntAttribute('cat_armor_encumbrance.armor_action_encumbrance', random.randint(20, 300))
        item.setIntAttribute('cat_armor_encumbrance.armor_mind_encumbrance', random.randint(20, 300))
        item.setStringAttribute('crafter', 'Blue Frog, Inc.')
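A standalone sketch of the stat-roll idiom repeated above: each protection line has roughly a 30% chance (`random.random() > 0.7`) of receiving a nonzero value, so most rolled armor pieces end up with only a handful of special protections:

```python
import random

random.seed(42)  # deterministic for the example
rolls = [random.uniform(0, 100) if random.random() > 0.7 else 0 for _ in range(1000)]
nonzero = sum(1 for r in rolls if r != 0)
assert 200 < nonzero < 400  # roughly 30% of 1000 rolls
```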
    def giveItems(self, owner, names, postProcess):
        # 'names' renamed from 'list' to avoid shadowing the builtin
        sim = self.getKernel().serviceManager().simulationService()
        inv = self.getKernel().serviceManager().equipmentService().getEquippedObject(owner, "inventory")
        for name in names:
            item = sim.createObject(name, swgpy.ContainerPermission.DEFAULT)
            if item is not None:
                postProcess(item)
                inv.add(owner, item)
    def displaySUIList(self, owner, options_list, callbackName):
        # 'options_list' renamed from 'list' to avoid shadowing the builtin
        sui = self.getKernel().serviceManager().suiService()
        # if sui.getSUIWindowByScriptName(owner, 'Script.listBox') != None:
        #     return
        options = EventResultList()
        for option in options_list:
            options.append(option)
        window = sui.createListBox(ListBoxType.OK_CANCEL, '0xDEADBEEF', '...00010011010001...\n\n...[OVERRIDE]...\n\nWELCOME, JOHN SMEDLEY', options, owner)
        results = ResultList()
        results.append('List.lstList:SelectedRow')
        callback = PythonCallback(self, callbackName)
        window.subscribeToEventCallback(0, '', InputTrigger.OK, results, callback)
        window.subscribeToEventCallback(1, '', InputTrigger.CANCEL, results, callback)
        sui.openSUIWindow(window)
    def professionCallback(self, owner, event_id, results):
        if event_id == 0:
            if int(results[0]) == 0:
                self.displaySUIList(owner, ['grant entertainer_novice'], 'entertainerCallback')
        return True

    def entertainerCallback(self, owner, event_id, results):
        if event_id == 0:
            if int(results[0]) == 0:
                creature = owner.toCreature()
                game_systems = self.getKernel().serviceManager().gamesystemsService()
                game_systems.grantSkill(creature, "social_entertainer_novice")
        return True
    def itemCallback(self, owner, event_id, results):
        if event_id == 0:
            if int(results[0]) == 0:
                self.giveItems(owner, self.vehicleDeeds, self.defaultPostProcess)
            if int(results[0]) == 1:
                self.giveItems(owner, self.droidDeeds, self.defaultPostProcess)
        return True

    def weaponCallback(self, owner, event_id, results):
        if event_id == 0:
            self.giveItems(owner, self.weapons[int(results[0])], self.weaponPostProcess)
        return True

    def armorCallback(self, owner, event_id, results):
        if event_id == 0:
            self.giveItems(owner, self.armor[int(results[0])], self.armorPostProcess)
        return True

    def structureCallback(self, owner, event_id, results):
        if event_id == 0:
            self.giveItems(owner, self.structureDeeds[int(results[0])], self.defaultPostProcess)
        return True

    def hamCallback(self, owner, event_id, results):
        print('result : ' + "{0} : {1}".format(event_id, results[0]))
        if event_id == 0:
            if int(results[0]) == 0:
                self.displaySUIList(owner, ['Health Wounds', 'heal Health Wounds', 'Action Wounds', 'heal Action Wounds', 'Mind Wounds', 'heal Mind Wounds'], 'woundCallback')
            if int(results[0]) == 1:
                self.displaySUIList(owner, ['Health Damage', 'heal Health Damage', 'Action Damage', 'Mind Damage'], 'damageCallback')
        return True
    def woundCallback(self, owner, event_id, results):
        if event_id == 0:
            combat = self.getKernel().serviceManager().combatService()
            ham = combat.getHamManager()
            creature = owner.toCreature()
            if int(results[0]) == 0:
                ham.applyWound(creature, 0, 25)
            if int(results[0]) == 1:
                ham.removeWound(creature, 0, 25)
            if int(results[0]) == 2:
                ham.applyWound(creature, 3, 25)
            if int(results[0]) == 3:
                ham.removeWound(creature, 3, 25)
            if int(results[0]) == 4:
                ham.applyWound(creature, 6, 25)
            if int(results[0]) == 5:
                ham.removeWound(creature, 6, 25)
        return True
    def damageCallback(self, owner, event_id, results):
        combat = self.getKernel().serviceManager().combatService()
        ham = combat.getHamManager()
        creature = owner.toCreature()
        if event_id == 0:
            if int(results[0]) == 0:
                ham.updateCurrentHitpoints(creature, 0, -75)
            if int(results[0]) == 1:
                ham.updateCurrentHitpoints(creature, 0, 75)
            if int(results[0]) == 2:
                ham.updateCurrentHitpoints(creature, 3, -75)
            if int(results[0]) == 3:
                ham.updateCurrentHitpoints(creature, 3, 75)
            if int(results[0]) == 4:
                ham.updateCurrentHitpoints(creature, 6, -75)
            if int(results[0]) == 5:
                ham.updateCurrentHitpoints(creature, 6, 75)
        return True
    def handleRadial(self, owner, target, action):
        if action == RadialIdentifier.serverMenu1:
            self.displaySUIList(owner, ['vehicles', 'droids'], 'itemCallback')
        elif action == RadialIdentifier.serverMenu2:
            self.displaySUIList(owner, ['Melee Weapons', 'Ranged Weapons', 'Misc Weapons'], 'weaponCallback')
        elif action == RadialIdentifier.serverMenu3:
            self.displaySUIList(owner, ['Bone', 'Bounty Hunter', 'Chitin', 'Composite',
                                        'Ithorian Defender', 'Ithorian Guardian', 'Ithorian Sentinel', 'Mandalorian',
                                        'Marine', 'Padded', 'Ris', 'Stormtrooper', 'Tantel', 'Ubese'], 'armorCallback')
        elif action == RadialIdentifier.serverMenu4:
            self.displaySUIList(owner, ['Crafting Structures', 'Housing Structures', 'Corellia Civic Structures',
                                        'Naboo Civic Structures', 'Tatooine Civic Structures', 'Guild Structures', 'Faction Structures'], 'structureCallback')
        elif action == RadialIdentifier.serverMenu5:
            self.giveItems(owner, self.petDeeds, self.defaultPostProcess)
        elif action == RadialIdentifier.serverMenu6:
            self.giveItems(owner, self.instruments, self.defaultPostProcess)
        elif action == RadialIdentifier.serverMenu7:
            self.displaySUIList(owner, ['Wounds', 'Damage'], 'hamCallback')
        elif action == RadialIdentifier.serverMenu8:
            self.displaySUIList(owner, ['entertainer'], 'professionCallback')
    vehicleDeeds = ('object/tangible/deed/vehicle_deed/shared_jetpack_deed.iff',
                    'object/tangible/deed/vehicle_deed/shared_landspeeder_av21_deed.iff',
                    'object/tangible/deed/vehicle_deed/shared_landspeeder_x31_deed.iff',
                    'object/tangible/deed/vehicle_deed/shared_landspeeder_x34_deed.iff',
                    'object/tangible/deed/vehicle_deed/shared_speederbike_flash_deed.iff',
                    'object/tangible/deed/vehicle_deed/shared_speederbike_swoop_deed.iff')
    weapons = [('object/weapon/melee/2h_sword/shared_2h_sword_battleaxe.iff',
                'object/weapon/melee/2h_sword/shared_2h_sword_blacksun_hack.iff',
                'object/weapon/melee/2h_sword/shared_2h_sword_cleaver.iff',
                'object/weapon/melee/2h_sword/shared_2h_sword_katana.iff',
                'object/weapon/melee/2h_sword/shared_2h_sword_maul.iff',
                'object/weapon/melee/2h_sword/shared_2h_sword_scythe.iff',
                'object/weapon/melee/axe/shared_axe_heavy_duty.iff',
                'object/weapon/melee/axe/shared_axe_vibroaxe.iff',
                'object/weapon/melee/baton/shared_baton_gaderiffi.iff',
                'object/weapon/melee/baton/shared_baton_stun.iff',
                'object/weapon/melee/baton/shared_victor_baton_gaderiffi.iff',
                'object/weapon/melee/knife/shared_knife_dagger.iff',
                'object/weapon/melee/knife/shared_knife_donkuwah.iff',
                'object/weapon/melee/knife/shared_knife_janta.iff',
                'object/weapon/melee/knife/shared_knife_stone.iff',
                'object/weapon/melee/knife/shared_knife_stone_noob.iff',
                'object/weapon/melee/knife/shared_knife_survival.iff',
                'object/weapon/melee/knife/shared_knife_vibroblade.iff',
                'object/weapon/melee/polearm/shared_lance_nightsister.iff',
                'object/weapon/melee/polearm/shared_lance_staff_janta.iff',
                'object/weapon/melee/polearm/shared_lance_staff_metal.iff',
                'object/weapon/melee/polearm/shared_lance_staff_wood_s1.iff',
                'object/weapon/melee/polearm/shared_lance_staff_wood_s2.iff',
                'object/weapon/melee/polearm/shared_lance_vibrolance.iff',
                'object/weapon/melee/polearm/shared_polearm_vibro_axe.iff',
                'object/weapon/melee/special/shared_blacksun_razor.iff',
                'object/weapon/melee/special/shared_vibroknucler.iff'),
               ('object/weapon/ranged/carbine/shared_carbine_cdef.iff',
                'object/weapon/ranged/carbine/shared_carbine_cdef_corsec.iff',
                'object/weapon/ranged/carbine/shared_carbine_dh17.iff',
'object/weapon/ranged/carbine/shared_carbine_dh17_black.iff',
'object/weapon/ranged/carbine/shared_carbine_dh17_snubnose.iff',
'object/weapon/ranged/carbine/shared_carbine_dxr6.iff',
'object/weapon/ranged/carbine/shared_carbine_e11.iff',
'object/weapon/ranged/carbine/shared_carbine_ee3.iff',
'object/weapon/ranged/carbine/shared_carbine_elite.iff',
'object/weapon/ranged/carbine/shared_carbine_laser.iff',
'object/weapon/ranged/carbine/shared_carbine_nym_slugthrower.iff',
'object/weapon/ranged/heavy/shared_heavy_acid_beam.iff',
'object/weapon/ranged/heavy/shared_heavy_lightning_beam.iff',
'object/weapon/ranged/heavy/shared_heavy_particle_beam.iff',
'object/weapon/ranged/heavy/shared_heavy_rocket_launcher.iff',
'object/weapon/ranged/pistol/shared_pistol_cdef.iff',
'object/weapon/ranged/pistol/shared_pistol_cdef_corsec.iff',
'object/weapon/ranged/pistol/shared_pistol_cdef_noob.iff',
'object/weapon/ranged/pistol/shared_pistol_d18.iff',
'object/weapon/ranged/pistol/shared_pistol_de_10.iff',
'object/weapon/ranged/pistol/shared_pistol_dh17.iff',
'object/weapon/ranged/pistol/shared_pistol_dl44.iff',
'object/weapon/ranged/pistol/shared_pistol_dl44_metal.iff',
'object/weapon/ranged/pistol/shared_pistol_dx2.iff',
'object/weapon/ranged/pistol/shared_pistol_fwg5.iff',
'object/weapon/ranged/pistol/shared_pistol_geonosian_sonic_blaster_loot.iff',
'object/weapon/ranged/pistol/shared_pistol_launcher.iff',
'object/weapon/ranged/pistol/shared_pistol_power5.iff',
'object/weapon/ranged/pistol/shared_pistol_republic_blaster.iff',
'object/weapon/ranged/pistol/shared_pistol_scatter.iff',
'object/weapon/ranged/pistol/shared_pistol_scout_blaster.iff',
'object/weapon/ranged/pistol/shared_pistol_scout_blaster_corsec.iff',
'object/weapon/ranged/pistol/shared_pistol_srcombat.iff',
'object/weapon/ranged/pistol/shared_pistol_striker.iff',
'object/weapon/ranged/pistol/shared_pistol_striker_noob.iff',
'object/weapon/ranged/pistol/shared_pistol_tangle.iff',
'object/weapon/ranged/rifle/shared_rifle_acid_beam.iff',
'object/weapon/ranged/rifle/shared_rifle_beam.iff',
'object/weapon/ranged/rifle/shared_rifle_berserker.iff',
'object/weapon/ranged/rifle/shared_rifle_bowcaster.iff',
'object/weapon/ranged/rifle/shared_rifle_cdef.iff',
'object/weapon/ranged/rifle/shared_rifle_dlt20.iff',
'object/weapon/ranged/rifle/shared_rifle_dlt20a.iff',
'object/weapon/ranged/rifle/shared_rifle_e11.iff',
'object/weapon/ranged/rifle/shared_rifle_ewok_crossbow.iff',
'object/weapon/ranged/rifle/shared_rifle_flame_thrower.iff',
'object/weapon/ranged/rifle/shared_rifle_jawa_ion.iff',
'object/weapon/ranged/rifle/shared_rifle_laser.iff',
'object/weapon/ranged/rifle/shared_rifle_laser_noob.iff',
'object/weapon/ranged/rifle/shared_rifle_lightning.iff',
'object/weapon/ranged/rifle/shared_rifle_sg82.iff',
'object/weapon/ranged/rifle/shared_rifle_spraystick.iff',
'object/weapon/ranged/rifle/shared_rifle_t21.iff',
'object/weapon/ranged/rifle/shared_rifle_tenloss_dxr6_disruptor_loot.iff',
'object/weapon/ranged/rifle/shared_rifle_tusken.iff',
'object/weapon/ranged/rifle/shared_rifle_victor_tusken.iff'),
()]  # 'Misc Weapons' menu: empty tuple, no items wired up for this selection
armor = [ ('object/tangible/wearables/armor/bone/shared_armor_bone_s01_bicep_l.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_bicep_r.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_boots.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_bracer_l.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_bracer_r.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_chest_plate.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_gloves.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_helmet.iff',
'object/tangible/wearables/armor/bone/shared_armor_bone_s01_leggings.iff',),
('object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_belt.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_bicep_l.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_bicep_r.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_boots.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_bracer_l.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_bracer_r.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_chest_plate.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_gloves.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_helmet.iff',
'object/tangible/wearables/armor/bounty_hunter/shared_armor_bounty_hunter_leggings.iff'),
('object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_bicep_l.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_bicep_r.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_boots.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_bracer_l.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_bracer_r.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_chest_plate.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_gloves.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_helmet.iff',
'object/tangible/wearables/armor/chitin/shared_armor_chitin_s01_leggings.iff'),
('object/tangible/wearables/armor/composite/shared_armor_composite_bicep_l.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_bicep_r.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_boots.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_bracer_l.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_bracer_r.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_chest_plate.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_gloves.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_helmet.iff',
'object/tangible/wearables/armor/composite/shared_armor_composite_leggings.iff'),
('object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_bicep_l.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_bicep_r.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_boots.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_bracer_l.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_bracer_r.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_chest_plate.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_gloves.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_helmet.iff',
'object/tangible/wearables/armor/ithorian_defender/shared_ith_armor_s01_leggings.iff'),
('object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_bicep_l.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_bicep_r.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_boots.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_bracer_l.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_bracer_r.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_chest_plate.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_gloves.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_helmet.iff',
'object/tangible/wearables/armor/ithorian_guardian/shared_ith_armor_s02_leggings.iff'),
('object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_bicep_l.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_bicep_r.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_boots.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_bracer_l.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_bracer_r.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_chest_plate.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_gloves.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_helmet.iff',
'object/tangible/wearables/armor/ithorian_sentinel/shared_ith_armor_s03_leggings.iff'),
('object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_belt.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_bicep_l.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_bicep_r.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_bracer_l.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_bracer_r.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_chest_plate.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_gloves.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_helmet.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_leggings.iff',
'object/tangible/wearables/armor/mandalorian/shared_armor_mandalorian_shoes.iff'),
('object/tangible/wearables/armor/marine/shared_armor_marine_backpack.iff',
'object/tangible/wearables/armor/marine/shared_armor_marine_bicep_l.iff',
'object/tangible/wearables/armor/marine/shared_armor_marine_bicep_r.iff',
'object/tangible/wearables/armor/marine/shared_armor_marine_boots.iff',
'object/tangible/wearables/armor/marine/shared_armor_marine_chest_plate.iff',
'object/tangible/wearables/armor/marine/shared_armor_marine_chest_plate_rebel.iff',
'object/tangible/wearables/armor/marine/shared_armor_marine_helmet.iff',
'object/tangible/wearables/armor/marine/shared_armor_marine_leggings.iff'),
('object/tangible/wearables/armor/padded/shared_armor_padded_s01_belt.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_bicep_l.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_bicep_r.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_boots.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_bracer_l.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_bracer_r.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_chest_plate.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_gloves.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_helmet.iff',
'object/tangible/wearables/armor/padded/shared_armor_padded_s01_leggings.iff'),
('object/tangible/wearables/armor/ris/shared_armor_ris_bicep_l.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_bicep_r.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_boots.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_bracer_l.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_bracer_r.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_chest_plate.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_gloves.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_helmet.iff',
'object/tangible/wearables/armor/ris/shared_armor_ris_leggings.iff'),
('object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_bicep_l.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_bicep_r.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_boots.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_bracer_l.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_bracer_r.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_chest_plate.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_gloves.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_helmet.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_leggings.iff',
'object/tangible/wearables/armor/stormtrooper/shared_armor_stormtrooper_utility_belt.iff'),
('object/tangible/wearables/armor/tantel/shared_armor_tantel_skreej_boots.iff',
'object/tangible/wearables/armor/tantel/shared_armor_tantel_skreej_chest_plate.iff',
'object/tangible/wearables/armor/tantel/shared_armor_tantel_skreej_helmet.iff'),
('object/tangible/wearables/armor/ubese/shared_armor_ubese_bandolier.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_boots.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_bracer_l.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_bracer_r.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_gloves.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_helmet.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_jacket.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_pants.iff',
'object/tangible/wearables/armor/ubese/shared_armor_ubese_shirt.iff')]
structureDeeds = [('object/tangible/deed/factory_deed/shared_factory_clothing_deed.iff',
'object/tangible/deed/factory_deed/shared_factory_food_deed.iff',
'object/tangible/deed/factory_deed/shared_factory_item_deed.iff',
'object/tangible/deed/factory_deed/shared_factory_structure_deed.iff',
'object/tangible/deed/generator_deed/shared_generator_fusion_deed.iff',
'object/tangible/deed/generator_deed/shared_generator_photo_bio_deed.iff',
'object/tangible/deed/generator_deed/shared_generator_solar_deed.iff',
'object/tangible/deed/generator_deed/shared_generator_wind_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_creature_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_flora_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_flora_deed_heavy.iff',
'object/tangible/deed/harvester_deed/shared_harvester_flora_deed_medium.iff',
'object/tangible/deed/harvester_deed/shared_harvester_gas_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_gas_deed_heavy.iff',
'object/tangible/deed/harvester_deed/shared_harvester_gas_deed_medium.iff',
'object/tangible/deed/harvester_deed/shared_harvester_liquid_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_liquid_deed_heavy.iff',
'object/tangible/deed/harvester_deed/shared_harvester_liquid_deed_medium.iff',
'object/tangible/deed/harvester_deed/shared_harvester_moisture_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_moisture_deed_heavy.iff',
'object/tangible/deed/harvester_deed/shared_harvester_moisture_deed_medium.iff',
'object/tangible/deed/harvester_deed/shared_harvester_ore_heavy_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_ore_s1_deed.iff',
'object/tangible/deed/harvester_deed/shared_harvester_ore_s2_deed.iff'),
('object/tangible/deed/player_house_deed/shared_corellia_house_large_deed.iff',
'object/tangible/deed/player_house_deed/shared_corellia_house_large_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_corellia_house_medium_deed.iff',
'object/tangible/deed/player_house_deed/shared_corellia_house_medium_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_corellia_house_small_deed.iff',
'object/tangible/deed/player_house_deed/shared_corellia_house_small_floor_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_corellia_house_small_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_corellia_house_small_style_02_floor_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_large_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_large_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_medium_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_medium_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_small_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_small_floor_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_small_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_generic_house_small_style_02_floor_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_merchent_tent_style_01_deed.iff',
'object/tangible/deed/player_house_deed/shared_merchent_tent_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_merchent_tent_style_03_deed.iff',
'object/tangible/deed/player_house_deed/shared_naboo_house_large_deed.iff',
'object/tangible/deed/player_house_deed/shared_naboo_house_medium_deed.iff',
'object/tangible/deed/player_house_deed/shared_naboo_house_medium_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_naboo_house_small_deed.iff',
'object/tangible/deed/player_house_deed/shared_naboo_house_small_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_tatooine_house_large_deed.iff',
'object/tangible/deed/player_house_deed/shared_tatooine_house_medium_deed.iff',
'object/tangible/deed/player_house_deed/shared_tatooine_house_medium_style_02_deed.iff',
'object/tangible/deed/player_house_deed/shared_tatooine_house_small_deed.iff',
'object/tangible/deed/player_house_deed/shared_tatooine_house_small_style_02_deed.iff'),
('object/tangible/deed/city_deed/shared_bank_corellia_deed.iff',
'object/tangible/deed/city_deed/shared_cantina_corellia_deed.iff',
'object/tangible/deed/city_deed/shared_cityhall_corellia_deed.iff',
'object/tangible/deed/city_deed/shared_cloning_corellia_deed.iff',
'object/tangible/deed/city_deed/shared_garage_corellia_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_lrg_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_lrg_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_lrg_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_lrg_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_lrg_05_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_med_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_med_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_med_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_med_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_med_05_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_sml_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_sml_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_sml_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_sml_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_corellia_sml_05_deed.iff',
'object/tangible/deed/city_deed/shared_hospital_corellia_deed.iff',
'object/tangible/deed/city_deed/shared_shuttleport_corellia_deed.iff',
'object/tangible/deed/city_deed/shared_theater_corellia_deed.iff'),
('object/tangible/deed/city_deed/shared_bank_naboo_deed.iff',
'object/tangible/deed/city_deed/shared_cantina_naboo_deed.iff',
'object/tangible/deed/city_deed/shared_cityhall_naboo_deed.iff',
'object/tangible/deed/city_deed/shared_cloning_naboo_deed.iff',
'object/tangible/deed/city_deed/shared_garage_naboo_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_lrg_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_lrg_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_lrg_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_lrg_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_lrg_05_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_med_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_med_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_med_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_med_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_med_05_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_sml_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_sml_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_sml_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_sml_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_naboo_sml_05_deed.iff',
'object/tangible/deed/city_deed/shared_hospital_naboo_deed.iff',
'object/tangible/deed/city_deed/shared_shuttleport_naboo_deed.iff',
'object/tangible/deed/city_deed/shared_theater_naboo_deed.iff'),
('object/tangible/deed/city_deed/shared_bank_tatooine_deed.iff',
'object/tangible/deed/city_deed/shared_cantina_tatooine_deed.iff',
'object/tangible/deed/city_deed/shared_cityhall_tatooine_deed.iff',
'object/tangible/deed/city_deed/shared_cloning_tatooine_deed.iff',
'object/tangible/deed/city_deed/shared_garage_tatooine_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_lrg_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_lrg_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_lrg_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_lrg_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_lrg_05_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_med_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_med_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_med_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_med_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_med_05_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_sml_01_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_sml_02_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_sml_03_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_sml_04_deed.iff',
'object/tangible/deed/city_deed/shared_garden_tatooine_sml_05_deed.iff',
'object/tangible/deed/city_deed/shared_hospital_tatooine_deed.iff',
'object/tangible/deed/city_deed/shared_shuttleport_tatooine_deed.iff',
'object/tangible/deed/city_deed/shared_theater_tatooine_deed.iff'),
('object/tangible/deed/guild_deed/shared_corellia_guild_deed.iff',
'object/tangible/deed/guild_deed/shared_generic_guild_deed.iff',
'object/tangible/deed/guild_deed/shared_naboo_guild_deed.iff',
'object/tangible/deed/guild_deed/shared_tatooine_guild_deed.iff',
'object/tangible/deed/guild_deed/shared_tatooine_guild_style_02_deed.iff'),
('object/tangible/deed/faction_perk/covert_detector/shared_detector_32m_deed.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s01.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s01_pvp.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s02.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s02_pvp.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s03.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s03_pvp.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s04.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s04_pvp.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s05.iff',
'object/tangible/deed/faction_perk/hq/shared_hq_s05_pvp.iff',
'object/tangible/deed/faction_perk/minefield/shared_field_1x1_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_block_lg_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_block_med_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_block_sm_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_dish_lg_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_dish_sm_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_tower_lg_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_tower_med_deed.iff',
'object/tangible/deed/faction_perk/turret/shared_tower_sm_deed.iff')]
petDeeds = ('object/tangible/deed/pet_deed/shared_angler_deed.iff',
'object/tangible/deed/pet_deed/shared_bageraset_deed.iff',
'object/tangible/deed/pet_deed/shared_bantha_deed.iff',
'object/tangible/deed/pet_deed/shared_bearded_jax_deed.iff',
'object/tangible/deed/pet_deed/shared_blurrg_deed.iff',
'object/tangible/deed/pet_deed/shared_boar_wolf_deed.iff',
'object/tangible/deed/pet_deed/shared_bocatt_deed.iff',
'object/tangible/deed/pet_deed/shared_bol_deed.iff',
'object/tangible/deed/pet_deed/shared_bolle_bol_deed.iff',
'object/tangible/deed/pet_deed/shared_bolma_deed.iff',
'object/tangible/deed/pet_deed/shared_bordok_deed.iff',
'object/tangible/deed/pet_deed/shared_brackaset_deed.iff',
'object/tangible/deed/pet_deed/shared_carrion_spat_deed.iff',
'object/tangible/deed/pet_deed/shared_choku_deed.iff',
'object/tangible/deed/pet_deed/shared_cu_pa_deed.iff',
'object/tangible/deed/pet_deed/shared_dalyrake_deed.iff',
'object/tangible/deed/pet_deed/shared_dewback_deed.iff',
'object/tangible/deed/pet_deed/shared_dune_lizard_deed.iff',
'object/tangible/deed/pet_deed/shared_durni_deed.iff',
'object/tangible/deed/pet_deed/shared_eopie_deed.iff',
'object/tangible/deed/pet_deed/shared_falumpaset_deed.iff',
'object/tangible/deed/pet_deed/shared_fambaa_deed.iff',
'object/tangible/deed/pet_deed/shared_gnort_deed.iff',
'object/tangible/deed/pet_deed/shared_graul_deed.iff',
'object/tangible/deed/pet_deed/shared_gronda_deed.iff',
'object/tangible/deed/pet_deed/shared_gualama_deed.iff',
'object/tangible/deed/pet_deed/shared_guf_drolg_deed.iff',
'object/tangible/deed/pet_deed/shared_gurnaset_deed.iff',
'object/tangible/deed/pet_deed/shared_gurrcat_deed.iff',
'object/tangible/deed/pet_deed/shared_gurreck_deed.iff',
'object/tangible/deed/pet_deed/shared_hermit_spider_deed.iff',
'object/tangible/deed/pet_deed/shared_huf_dun_deed.iff',
'object/tangible/deed/pet_deed/shared_huurton_deed.iff',
'object/tangible/deed/pet_deed/shared_ikopi_deed.iff',
'object/tangible/deed/pet_deed/shared_kaadu_deed.iff',
'object/tangible/deed/pet_deed/shared_kahmurra_deed.iff',
'object/tangible/deed/pet_deed/shared_kima_deed.iff',
'object/tangible/deed/pet_deed/shared_kimogila_deed.iff',
'object/tangible/deed/pet_deed/shared_kliknik_deed.iff',
'object/tangible/deed/pet_deed/shared_krahbu_deed.iff',
'object/tangible/deed/pet_deed/shared_kusak_deed.iff',
'object/tangible/deed/pet_deed/shared_kwi_deed.iff',
'object/tangible/deed/pet_deed/shared_langlatch_deed.iff',
'object/tangible/deed/pet_deed/shared_malkloc_deed.iff',
'object/tangible/deed/pet_deed/shared_mawgax_deed.iff',
'object/tangible/deed/pet_deed/shared_marek_deed.iff',
'object/tangible/deed/pet_deed/shared_mott_deed.iff',
'object/tangible/deed/pet_deed/shared_narglatch_deed.iff',
'object/tangible/deed/pet_deed/shared_piket_deed.iff',
'object/tangible/deed/pet_deed/shared_pugoriss_deed.iff',
'object/tangible/deed/pet_deed/shared_rancor_deed.iff',
'object/tangible/deed/pet_deed/shared_roba_deed.iff',
'object/tangible/deed/pet_deed/shared_ronto_deed.iff',
'object/tangible/deed/pet_deed/shared_sand_panther_deed.iff',
'object/tangible/deed/pet_deed/shared_sharnaff_deed.iff',
'object/tangible/deed/pet_deed/shared_shear_mite_deed.iff',
'object/tangible/deed/pet_deed/shared_slice_hound_deed.iff',
'object/tangible/deed/pet_deed/shared_snorbal_deed.iff',
'object/tangible/deed/pet_deed/shared_squall_deed.iff',
'object/tangible/deed/pet_deed/shared_swirl_prong_deed.iff',
'object/tangible/deed/pet_deed/shared_thune_deed.iff',
'object/tangible/deed/pet_deed/shared_torton_deed.iff',
'object/tangible/deed/pet_deed/shared_tybis_deed.iff',
'object/tangible/deed/pet_deed/shared_veermok_deed.iff',
'object/tangible/deed/pet_deed/shared_verne_deed.iff',
'object/tangible/deed/pet_deed/shared_vesp_deed.iff',
'object/tangible/deed/pet_deed/shared_vir_vur_deed.iff',
'object/tangible/deed/pet_deed/shared_woolamander_deed.iff',
'object/tangible/deed/pet_deed/shared_zucca_boar_deed.iff')
droidDeeds = ( 'object/tangible/deed/pet_deed/shared_deed_3p0_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_3p0_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_binary_load_lifter_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_binary_load_lifter_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_dz70_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_dz70_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_le_repair_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_le_repair_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_mse_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_mse_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_power_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_power_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_probot_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_probot_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r2_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r2_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r3_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r3_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r4_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r4_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r5_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_r5_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_surgical_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_surgical_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_treadwell_advanced_basic.iff',
'object/tangible/deed/pet_deed/shared_deed_treadwell_basic.iff',
)
instruments = ( 'object/tangible/instrument/shared_bandfill.iff',
'object/tangible/instrument/shared_fanfar.iff',
'object/tangible/instrument/shared_fizz.iff',
'object/tangible/instrument/shared_flute_droopy.iff',
'object/tangible/instrument/shared_instrument_kloo_horn.iff',
'object/tangible/instrument/shared_mandoviol.iff',
'object/tangible/instrument/shared_nalargon.iff',
'object/tangible/instrument/shared_ommni_box.iff',
'object/tangible/instrument/shared_slitherhorn.iff',
'object/tangible/instrument/shared_traz.iff')
# File: chia/knowledge/messages.py (repo: cabrust/chia, license: BSD-3-Clause)
from chia import instrumentation
class ConceptChangeMessage(instrumentation.Message):
def __init__(self, sender: str):
super().__init__(sender=sender)
class RelationChangeMessage(instrumentation.Message):
def __init__(self, sender: str):
super().__init__(sender=sender)
# File: tests/shared.py (repo: mutalyzer/client, license: MIT)
from hashlib import md5
from io import StringIO
def md5_check(data, md5sum):
return md5(data.encode()).hexdigest() == md5sum
# File: __init__.py (repo: n-hachi/raytrace, license: CC0-1.0)
from . import objs
# File: cmt/converter/__init__.py (repo: IceflowRE/cmt, license: MIT)
from cmt.converter.cmap_v0 import Converter as Converter_cmap_0
from cmt.converter.cmap_v1 import Converter as Converter_cmap_1
from cmt.converter.cmap_v2 import Converter as Converter_cmap_2
from cmt.converter.ecmap_v0 import Converter as Converter_ecmap_0
from cmt.converter.ecmap_v1 import Converter as Converter_ecmap_1
from cmt.converter.ecmap_v2 import Converter as Converter_ecmap_2
from cmt.converter.ecmap_v4 import Converter as Converter_ecmap_4
__all__ = ["Converter_cmap_0", "Converter_cmap_1", "Converter_cmap_2", "Converter_ecmap_0", "Converter_ecmap_1", "Converter_ecmap_2", "Converter_ecmap_4"]
# File: nipype/workflows/dmri/mrtrix/__init__.py (repo: carlohamalainen/nipype, license: BSD-3-Clause)
from diffusion import create_mrtrix_dti_pipeline
from connectivity_mapping import create_connectivity_pipeline
from group_connectivity import (create_group_connectivity_pipeline)
# File: d1lod/d1lod/metadata/__init__.py (repo: DataONEorg/slinky, license: Apache-2.0)
import eml
import dryad
import fgdc
import iso
# File: Synthetic_InducedCycle/CodeZip_ST.py (repo: GKAT-NeurIPS2021/GKAT_Experiments, license: MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import networkx as nx
import matplotlib.pyplot as plt
import time
import numpy as np
import pickle
from tqdm.notebook import tqdm, trange
import random
import dgl
import dgl.function as fn
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from dgl.data import MiniGCDataset
from dgl.nn.pytorch import *
from torch.utils.data import DataLoader
import seaborn as sns
from random import shuffle
from multiprocessing import Pool
import multiprocessing
from functools import partial
from networkx.generators.classic import cycle_graph
from deepwalk import OnlyWalk
import os, sys
class HiddenPrints:
def __enter__(self):
self._original_stdout = sys.stdout
sys.stdout = open(os.devnull, 'w')
def __exit__(self, exc_type, exc_val, exc_tb):
sys.stdout.close()
sys.stdout = self._original_stdout
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
#### Graph Generations
def shuffle_two_lists(list_1, list_2):
c = list(zip(list_1, list_2))
random.shuffle(c)
return zip(*c)
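A quick usage sketch of the in-unison shuffle above (self-contained copy of the function; the sample graph/label values are illustrative): shuffling the zipped pairs keeps each label attached to its graph.

```python
import random

def shuffle_two_lists(list_1, list_2):
    # shuffle two lists in unison by zipping, shuffling, and unzipping
    c = list(zip(list_1, list_2))
    random.shuffle(c)
    return zip(*c)

graphs, labels = map(list, shuffle_two_lists(['g0', 'g1', 'g2'], [0, 1, 0]))
# the ('g1', 1) pair stays together wherever it lands after shuffling
```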
#%%
##### Spanning Tree Synthetic Graph
from collections import deque
# Return the farthest node from u and its distance, computed via BFS.
def BFS(graph_in, u):
# marking all nodes as unvisited
visited = [False for i in range(len(graph_in.nodes) + 1)]
# mark all distance with -1
distance = [-1 for i in range(len(graph_in.nodes) + 1)]
# distance of u from u will be 0
distance[u] = 0
# in-built library for queue which performs fast oprations on both the ends
queue = deque()
queue.append(u)
# mark node u as visited
visited[u] = True
while queue:
# pop the front of the queue(0th element)
front = queue.popleft()
# loop for all adjacent nodes of node front
for i in [x for x in graph_in.neighbors(front)]:
if not visited[i]:
# mark the ith node as visited
visited[i] = True
# make distance of i , one more than distance of front
distance[i] = distance[front]+1
# Push node into the stack only if it is not visited already
queue.append(i)
    maxDis = 0
    nodeIdx = u  # default to u so the return is defined even for a single-node graph
    # get the farthest node's distance and its index
    for i in range(len(graph_in.nodes)):
        if distance[i] > maxDis:
            maxDis = distance[i]
            nodeIdx = i
    return nodeIdx, maxDis
def FindLongestNodePairs(graph_in):
    # first BFS to find one endpoint of a longest path
    node, Dis = BFS(graph_in, 0)
    # second BFS, from that endpoint, to find the other endpoint
    node_2, LongDis = BFS(graph_in, node)
    return node, node_2
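BFS and FindLongestNodePairs above implement the classic double-sweep trick: a BFS from an arbitrary node reaches one endpoint of a longest path, and a second BFS from that endpoint reaches the other (exact on trees). A stdlib-only sketch under that assumption (the adjacency dict and the helper name `farthest` are illustrative, not from the original):

```python
from collections import deque

def farthest(adj, src):
    # BFS from src; return the farthest node and its distance
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    node = max(dist, key=dist.get)
    return node, dist[node]

# a small tree: path 0-1-2-3-4 with an extra leaf 5 hanging off node 2
adj = {0: [1], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
a, _ = farthest(adj, 5)   # one endpoint of a longest path
b, d = farthest(adj, a)   # the other endpoint; d is the tree diameter
```

Here the two sweeps recover the 0-1-2-3-4 path of length 4 regardless of the starting node.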
#%%
def generate_tree(num_nodes, num_graphs):
all_tree_graphs = []
print(f'Start generating tree graphs')
for i in trange(num_graphs):
all_tree_graphs.append(nx.generators.trees.random_tree(num_nodes))
return all_tree_graphs
def generate_tree_adding_edges(num_nodes, num_graphs, num_edges = 3):
all_tree_graphs_adding_edges = []
print(f'Start generating tree graphs with edges')
for i in trange(num_graphs):
tree = nx.generators.trees.random_tree(num_nodes)
for j in range(num_edges):
while True:
vertices = random.sample(range(num_nodes), 2)
if ((vertices[0], vertices[1]) not in tree.edges) and ((vertices[1], vertices[0]) not in tree.edges):
tree.add_edge(vertices[0], vertices[1])
break
else:
continue
all_tree_graphs_adding_edges.append(tree)
return all_tree_graphs_adding_edges
def generate_tree_adding_edges_with_longest_distance(num_nodes, num_graphs, num_edges = 3):
    all_tree_graphs_adding_edges = []
    print(f'Start generating tree graphs with edges with longest distance')
    for i in trange(num_graphs):
        tree = nx.generators.trees.random_tree(num_nodes)
        for j in range(num_edges):
            # connect the two endpoints of a longest path in the current tree
            vertices = FindLongestNodePairs(tree)
            tree.add_edge(vertices[0], vertices[1])
        all_tree_graphs_adding_edges.append(tree)
    return all_tree_graphs_adding_edges
def generate_tree_adding_edges_with_shortest_distance(num_nodes, num_graphs, num_edges = 3):
    all_tree_graphs_adding_edges = []
    print(f'Start generating tree graphs with edges with shortest distance')
    for i in trange(num_graphs):
        # resample trees until we can add an edge between two nodes whose
        # distance is at least 2 but strictly shorter than the longest path
        while True:
            tree = nx.generators.trees.random_tree(num_nodes)
            vertices = FindLongestNodePairs(tree)
            longest_len = nx.shortest_path_length(tree, source=vertices[0], target=vertices[1])
            source = np.random.choice(len(tree.nodes))
            target = np.random.choice(len(tree.nodes))
            if source != target:
                dist = nx.shortest_path_length(tree, source=source, target=target)
                if 2 <= dist < longest_len:
                    tree.add_edge(source, target)
                    all_tree_graphs_adding_edges.append(tree)
                    break
    return all_tree_graphs_adding_edges
#%%
def generate_graphs_labels(num_nodes, num_train_tree, num_train_edge_tree,
num_val_tree, num_val_edge_tree, num_test_tree,
num_test_edge_tree, num_edges, is_dgl_type = False,
path_length = 10, num_random_walk= 50, p=1e3, q=1, stopping_prob = 0.0):
tree_train_graphs = generate_tree_adding_edges_with_shortest_distance(num_nodes, num_train_tree, num_edges)
tree_train_labels = list(np.zeros(num_train_tree))
edge_tree_train_graphs = generate_tree_adding_edges_with_longest_distance(num_nodes, num_train_edge_tree, num_edges)
edge_tree_train_labels = list(np.ones(num_train_edge_tree))
tree_val_graphs = generate_tree_adding_edges_with_shortest_distance(num_nodes, num_val_tree, num_edges)
tree_val_labels = list(np.zeros(num_val_tree))
edge_tree_val_graphs = generate_tree_adding_edges_with_longest_distance(num_nodes, num_val_edge_tree, num_edges)
edge_tree_val_labels = list(np.ones(num_val_edge_tree))
tree_test_graphs = generate_tree_adding_edges_with_shortest_distance(num_nodes, num_test_tree, num_edges)
tree_test_labels = list(np.zeros(num_test_tree))
edge_tree_test_graphs = generate_tree_adding_edges_with_longest_distance(num_nodes, num_test_edge_tree, num_edges)
edge_tree_test_labels = list(np.ones(num_test_edge_tree))
all_train_graphs = tree_train_graphs + edge_tree_train_graphs
all_train_labels = tree_train_labels + edge_tree_train_labels
all_val_graphs = tree_val_graphs + edge_tree_val_graphs
all_val_labels = tree_val_labels + edge_tree_val_labels
all_test_graphs = tree_test_graphs + edge_tree_test_graphs
all_test_labels = tree_test_labels + edge_tree_test_labels
all_train_graphs_shuffled, all_train_labels_shuffled = \
shuffle_two_lists(all_train_graphs, all_train_labels)
all_val_graphs_shuffled, all_val_labels_shuffled = \
shuffle_two_lists(all_val_graphs, all_val_labels)
all_test_graphs_shuffled, all_test_labels_shuffled = \
shuffle_two_lists(all_test_graphs, all_test_labels)
all_train_graphs_shuffled = list(all_train_graphs_shuffled)
all_train_labels_shuffled = list(all_train_labels_shuffled)
all_val_graphs_shuffled = list(all_val_graphs_shuffled)
all_val_labels_shuffled = list(all_val_labels_shuffled)
all_test_graphs_shuffled = list(all_test_graphs_shuffled)
all_test_labels_shuffled = list(all_test_labels_shuffled)
return all_train_graphs_shuffled, all_train_labels_shuffled,\
all_val_graphs_shuffled, all_val_labels_shuffled,\
all_test_graphs_shuffled, all_test_labels_shuffled
def networkx_to_dgl_graphs(all_train_graphs_shuffled, all_val_graphs_shuffled, all_test_graphs_shuffled):
for i in range(len(all_train_graphs_shuffled)):
all_train_graphs_shuffled[i] = dgl.from_networkx(all_train_graphs_shuffled[i])
for i in range(len(all_val_graphs_shuffled)):
all_val_graphs_shuffled[i] = dgl.from_networkx(all_val_graphs_shuffled[i])
for i in range(len(all_test_graphs_shuffled)):
all_test_graphs_shuffled[i] = dgl.from_networkx(all_test_graphs_shuffled[i])
return all_train_graphs_shuffled, all_val_graphs_shuffled, all_test_graphs_shuffled
def dgl_to_networkx_graphs(all_train_graphs_shuffled, all_val_graphs_shuffled, all_test_graphs_shuffled):
for i in range(len(all_train_graphs_shuffled)):
all_train_graphs_shuffled[i] = nx.Graph(all_train_graphs_shuffled[i].to_networkx())
for i in range(len(all_val_graphs_shuffled)):
all_val_graphs_shuffled[i] = nx.Graph(all_val_graphs_shuffled[i].to_networkx())
for i in range(len(all_test_graphs_shuffled)):
all_test_graphs_shuffled[i] = nx.Graph(all_test_graphs_shuffled[i].to_networkx())
return all_train_graphs_shuffled, all_val_graphs_shuffled, all_test_graphs_shuffled
#%%
##### Generate masking
def generate_masking_GAT(train_graphs, val_graphs, test_graphs):
train_masking = []
val_masking = []
test_masking = []
print('Start generating GAT masking')
for graph in train_graphs:
adj = nx.linalg.graphmatrix.adjacency_matrix(graph).todense()
np.fill_diagonal(adj, 1)
train_masking.append(torch.from_numpy(adj))
for graph in val_graphs:
adj = nx.linalg.graphmatrix.adjacency_matrix(graph).todense()
np.fill_diagonal(adj, 1)
val_masking.append(torch.from_numpy(adj))
for graph in test_graphs:
adj = nx.linalg.graphmatrix.adjacency_matrix(graph).todense()
np.fill_diagonal(adj, 1)
test_masking.append(torch.from_numpy(adj))
return train_masking, val_masking, test_masking
def generate_masking_GWK(train_graphs, val_graphs, test_graphs, num_random_walk, path_length, stopping_prob, p, q, ignore_start = False):
train_masking = []
val_masking = []
test_masking = []
print('Start generating GWK masking')
for i in tqdm(range(len(train_graphs))):
graph = (train_graphs[i])
n2v = OnlyWalk.Node2vec_onlywalk(graph = graph, path_length=path_length, num_paths=num_random_walk, p=p, q=q, stop_prob = stopping_prob, with_freq_mat = True)
counting_atten = torch.from_numpy(n2v.walker.freq_mat)
if ignore_start:
counting_atten -= np.eye(len(counting_atten))*num_random_walk
train_masking.append(MinMaxScaler(counting_atten).float())
for i in tqdm(range(len(val_graphs))):
graph = (val_graphs[i])
n2v = OnlyWalk.Node2vec_onlywalk(graph = graph, path_length=path_length, num_paths=num_random_walk, p=p, q=q, stop_prob = stopping_prob, with_freq_mat = True)
counting_atten = torch.from_numpy(n2v.walker.freq_mat)
if ignore_start:
counting_atten -= np.eye(len(counting_atten))*num_random_walk
val_masking.append(MinMaxScaler(counting_atten).float())
for i in tqdm(range(len(test_graphs))):
graph = (test_graphs[i])
n2v = OnlyWalk.Node2vec_onlywalk(graph = graph, path_length=path_length, num_paths=num_random_walk, p=p, q=q, stop_prob = stopping_prob, with_freq_mat = True)
counting_atten = torch.from_numpy(n2v.walker.freq_mat)
if ignore_start:
counting_atten -= np.eye(len(counting_atten))*num_random_walk
test_masking.append(MinMaxScaler(counting_atten).float())
return train_masking, val_masking, test_masking
#%%
##### Better Version
def counting_attn(node, epsilon, adj_mat, discount_factor, rand_seed=None):
if rand_seed:
np.random.seed(rand_seed)
else:
np.random.seed()
counting_vector = np.zeros(adj_mat.shape[1])
counting_vector[node] = 1
step_length = 0
visited_nodes = [node]
loop_count = 0
while True:
continue_or_not = np.random.choice([0, 1], p = [epsilon, 1 - epsilon])
if continue_or_not:
while True:
next_available_nodes = np.where(adj_mat[node, :] != 0)[0]
if len(set(next_available_nodes) - set(visited_nodes))>0:
next = np.random.choice(list(set(next_available_nodes) - set(visited_nodes)))
step_length += 1
counting_vector[next] += discount_factor**step_length
visited_nodes.append(next)
node = next
else:
return counting_vector
else:
return counting_vector
def cal_counting_attn(adj, num_random_walks, stopping_prob, discounting_fact,
seed = 666):
np.random.seed(seed)
nb_nodes = adj.shape[0]
#counting_attn
vector_dict = []
for node in range(nb_nodes):
all_vectors = []
for i in range(num_random_walks):
try:
vector = counting_attn(node, stopping_prob, np.array(adj),
discounting_fact, i+1)
except:
vector = np.zeros(np.array(adj).shape[1])
vector[node] = 1
all_vectors.append(vector)
vector_dict.append(all_vectors)
return vector_dict
def collate(samples):
# The input `samples` is a list of pairs
# (graph, label).
graphs, labels = map(list, zip(*samples))
batched_graph = dgl.batch(graphs)
return batched_graph, torch.tensor(labels)
def MinMaxScaler(data):
    # scale each row into [0, 1]; rng avoids shadowing the builtin range
    diff = data.transpose(0, 1) - torch.min(data, axis = 1)[0]
    rng = torch.max(data, axis = 1)[0] - torch.min(data, axis = 1)[0]
    return (diff / (rng + 1e-7)).transpose(0, 1)
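MinMaxScaler above rescales each row of a 2-D tensor into [0, 1], with a small epsilon guarding constant rows. An equivalent NumPy sketch (the function name `min_max_rows` is illustrative), handy for sanity-checking the transpose-based torch version:

```python
import numpy as np

def min_max_rows(data):
    # row-wise min-max scaling: shift each row by its minimum and
    # divide by its range; the epsilon guards constant (zero-range) rows
    mins = data.min(axis=1, keepdims=True)
    rng = data.max(axis=1, keepdims=True) - mins
    return (data - mins) / (rng + 1e-7)

x = np.array([[1.0, 3.0, 5.0], [2.0, 2.0, 2.0]])
y = min_max_rows(x)  # first row -> ~[0, 0.5, 1]; constant row -> zeros
```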
#%%
# Graph NN
##### Attention Model definition
class GWKLayer(nn.Module):
def __init__(self,
in_dim,
out_dim,
feat_drop=0.,
attn_drop=0.,
alpha=0.2,
agg_activation=F.elu):
super(GWKLayer, self).__init__()
self.feat_drop = nn.Dropout(feat_drop)
self.fc = nn.Linear(in_dim, out_dim, bias=False)
torch.nn.init.xavier_uniform_(self.fc.weight)
#torch.nn.init.zeros_(self.fc.bias)
self.attn_l = nn.Parameter(torch.ones(size=(out_dim, 1)))
self.attn_r = nn.Parameter(torch.ones(size=(out_dim, 1)))
self.attn_drop = nn.Dropout(attn_drop)
self.activation = nn.LeakyReLU(alpha)
self.softmax = nn.Softmax(dim = 1)
self.agg_activation=agg_activation
def clean_data(self):
ndata_names = ['ft', 'a1', 'a2']
edata_names = ['a_drop']
for name in ndata_names:
self.g.ndata.pop(name)
for name in edata_names:
self.g.edata.pop(name)
    def forward(self, feat, bg, counting_attn):
        # inputs are of shape V x F, with V the number of nodes and F the input feature dim
        self.g = bg
        h = self.feat_drop(feat)
        head_ft = self.fc(h).reshape((h.shape[0], -1))
        a1 = torch.mm(head_ft, self.attn_l)  # V x 1
        a2 = torch.mm(head_ft, self.attn_r)  # V x 1
        a = self.attn_drop(a1 + a2.transpose(0, 1))
        a = self.activation(a)
        # mask the attention scores with the random-walk counting matrix,
        # then normalize row-wise (a masked softmax)
        a_nomi = torch.mul(torch.exp(a), counting_attn.float())
        a_deno = torch.sum(a_nomi, 1, keepdim=True)
        a_nor = a_nomi / (a_deno + 1e-9)
        ret = torch.mm(a_nor, head_ft)
        if self.agg_activation is not None:
            ret = self.agg_activation(ret)
        return ret
class GWKClassifier(nn.Module):
def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
attn_drop_=0.,):
super(GWKClassifier, self).__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.layers = nn.ModuleList([
nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop = feat_drop_,
attn_drop = attn_drop_, agg_activation=F.elu)
for _ in range(num_heads)]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads, hidden_dim,
feat_drop = feat_drop_, attn_drop = attn_drop_,
agg_activation=F.elu)
for _ in range(1)]),
])
self.classify = nn.Linear(hidden_dim, n_classes)
self.softmax = nn.Softmax(dim = 1)
    def forward(self, bg, counting_attn, normalize = 'normal'):
        # For undirected graphs, in_degree is the same as out_degree.
        h = bg.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        for i, gnn in enumerate(self.layers):
            all_h = []
            for j, att_head in enumerate(gnn):
                all_h.append(att_head(h, bg, counting_attn))
            h = torch.squeeze(torch.cat(all_h, dim=1))
        bg.ndata['h'] = h
        hg = dgl.mean_nodes(bg, 'h')
        return self.classify(hg)
class GWKClassifier_2hid(nn.Module):
def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
attn_drop_=0.,):
super(GWKClassifier_2hid, self).__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.layers = nn.ModuleList([
nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads)]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(1)])
])
self.classify = nn.Linear(hidden_dim * 1, n_classes)
self.softmax = nn.Softmax(dim = 1)
def forward(self, bg, counting_attn, normalize = 'normal'):
h = bg.in_degrees().view(-1, 1).float()
num_nodes = h.shape[0]
features = h.numpy().flatten()
if normalize == 'normal':
mean_ = np.mean(features)
std_ = np.std(features)
h = (h - mean_)/std_
elif normalize == 'minmax':
h = h/np.max(features)
for i, gnn in enumerate(self.layers):
all_h = []
for j, att_head in enumerate(gnn):
all_h.append(att_head(h, bg, counting_attn))
h = torch.squeeze(torch.cat(all_h, dim=1))
bg.ndata['h'] = h
hg = dgl.mean_nodes(bg, 'h')
return self.classify(hg)
class GWKClassifier_3hid(nn.Module):
def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
attn_drop_=0.,):
super(GWKClassifier_3hid, self).__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.layers = nn.ModuleList([
nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(1)])
])
self.classify = nn.Linear(hidden_dim , n_classes)
self.softmax = nn.Softmax(dim = 1)
def forward(self, bg, counting_attn, normalize = 'normal'):
h = bg.in_degrees().view(-1, 1).float()
num_nodes = h.shape[0]
features = h.numpy().flatten()
if normalize == 'normal':
mean_ = np.mean(features)
std_ = np.std(features)
h = (h - mean_)/std_
elif normalize == 'minmax':
h = h/np.max(features)
for i, gnn in enumerate(self.layers):
all_h = []
for j, att_head in enumerate(gnn):
all_h.append(att_head(h, bg, counting_attn))
h = torch.squeeze(torch.cat(all_h, dim=1))
bg.ndata['h'] = h
hg = dgl.mean_nodes(bg, 'h')
return self.classify(hg)
class GWKClassifier_4hid(nn.Module):
def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
attn_drop_=0.,):
super(GWKClassifier_4hid, self).__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.layers = nn.ModuleList([
nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[2])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[2], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(1)])
])
self.classify = nn.Linear(hidden_dim, n_classes)
self.softmax = nn.Softmax(dim = 1)
def forward(self, bg, counting_attn, normalize = 'normal'):
h = bg.in_degrees().view(-1, 1).float()
num_nodes = h.shape[0]
features = h.numpy().flatten()
if normalize == 'normal':
mean_ = np.mean(features)
std_ = np.std(features)
h = (h - mean_)/std_
elif normalize == 'minmax':
h = h/np.max(features)
for i, gnn in enumerate(self.layers):
all_h = []
for j, att_head in enumerate(gnn):
all_h.append(att_head(h, bg, counting_attn))
h = torch.squeeze(torch.cat(all_h, dim=1))
bg.ndata['h'] = h
hg = dgl.mean_nodes(bg, 'h')
return self.classify(hg)
class GWKClassifier_5hid(nn.Module):
def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
attn_drop_=0.,):
super(GWKClassifier_5hid, self).__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.layers = nn.ModuleList([
nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[2])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[2], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[3])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[3], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(1)])
])
self.classify = nn.Linear(hidden_dim, n_classes)
self.softmax = nn.Softmax(dim = 1)
def forward(self, bg, counting_attn, normalize = 'normal'):
h = bg.in_degrees().view(-1, 1).float()
num_nodes = h.shape[0]
features = h.numpy().flatten()
if normalize == 'normal':
mean_ = np.mean(features)
std_ = np.std(features)
h = (h - mean_)/std_
elif normalize == 'minmax':
h = h/np.max(features)
for i, gnn in enumerate(self.layers):
all_h = []
for j, att_head in enumerate(gnn):
all_h.append(att_head(h, bg, counting_attn))
h = torch.squeeze(torch.cat(all_h, dim=1))
bg.ndata['h'] = h
hg = dgl.mean_nodes(bg, 'h')
return self.classify(hg)
class GWKClassifier_6hid(nn.Module):
def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
attn_drop_=0.,):
super(GWKClassifier_6hid, self).__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.layers = nn.ModuleList([
nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[2])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[2], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[3])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[3], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads[4])]),
nn.ModuleList([GWKLayer(hidden_dim * num_heads[4], hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(1)])
])
self.classify = nn.Linear(hidden_dim, n_classes)
self.softmax = nn.Softmax(dim = 1)
def forward(self, bg, counting_attn, normalize = 'normal'):
h = bg.in_degrees().view(-1, 1).float()
num_nodes = h.shape[0]
features = h.numpy().flatten()
if normalize == 'normal':
mean_ = np.mean(features)
std_ = np.std(features)
h = (h - mean_)/std_
elif normalize == 'minmax':
h = h/np.max(features)
for i, gnn in enumerate(self.layers):
all_h = []
for j, att_head in enumerate(gnn):
all_h.append(att_head(h, bg, counting_attn))
h = torch.squeeze(torch.cat(all_h, dim=1))
bg.ndata['h'] = h
hg = dgl.mean_nodes(bg, 'h')
return self.classify(hg)


class GWKClassifier_7hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
                 attn_drop_=0.):
        super(GWKClassifier_7hid, self).__init__()
        self.num_heads = num_heads
        self.hidden_dim = hidden_dim
        self.layers = nn.ModuleList([
            nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[2])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[2], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[3])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[3], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[4])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[4], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[5])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[5], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(1)])
        ])
        self.classify = nn.Linear(hidden_dim, n_classes)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, bg, counting_attn, normalize='normal'):
        h = bg.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        for i, gnn in enumerate(self.layers):
            all_h = []
            for j, att_head in enumerate(gnn):
                all_h.append(att_head(h, bg, counting_attn))
            h = torch.squeeze(torch.cat(all_h, dim=1))
        bg.ndata['h'] = h
        hg = dgl.mean_nodes(bg, 'h')
        return self.classify(hg)


class GWKClassifier_8hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
                 attn_drop_=0.):
        super(GWKClassifier_8hid, self).__init__()
        self.num_heads = num_heads
        self.hidden_dim = hidden_dim
        self.layers = nn.ModuleList([
            nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[2])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[2], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[3])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[3], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[4])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[4], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[5])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[5], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[6])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[6], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(1)])
        ])
        self.classify = nn.Linear(hidden_dim, n_classes)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, bg, counting_attn, normalize='normal'):
        h = bg.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        for i, gnn in enumerate(self.layers):
            all_h = []
            for j, att_head in enumerate(gnn):
                all_h.append(att_head(h, bg, counting_attn))
            h = torch.squeeze(torch.cat(all_h, dim=1))
        bg.ndata['h'] = h
        hg = dgl.mean_nodes(bg, 'h')
        return self.classify(hg)


class GWKClassifier_9hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
                 attn_drop_=0.):
        super(GWKClassifier_9hid, self).__init__()
        self.num_heads = num_heads
        self.hidden_dim = hidden_dim
        self.layers = nn.ModuleList([
            nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[2])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[2], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[3])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[3], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[4])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[4], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[5])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[5], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[6])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[6], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[7])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[7], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(1)])
        ])
        self.classify = nn.Linear(hidden_dim, n_classes)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, bg, counting_attn, normalize='normal'):
        h = bg.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        for i, gnn in enumerate(self.layers):
            all_h = []
            for j, att_head in enumerate(gnn):
                all_h.append(att_head(h, bg, counting_attn))
            h = torch.squeeze(torch.cat(all_h, dim=1))
        bg.ndata['h'] = h
        hg = dgl.mean_nodes(bg, 'h')
        return self.classify(hg)


class GWKClassifier_10hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0.,
                 attn_drop_=0.):
        super(GWKClassifier_10hid, self).__init__()
        self.num_heads = num_heads
        self.hidden_dim = hidden_dim
        self.layers = nn.ModuleList([
            nn.ModuleList([GWKLayer(in_dim, hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[0])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[0], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[1])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[1], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[2])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[2], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[3])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[3], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[4])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[4], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[5])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[5], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[6])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[6], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[7])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[7], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(num_heads[8])]),
            nn.ModuleList([GWKLayer(hidden_dim * num_heads[8], hidden_dim, feat_drop=feat_drop_, attn_drop=attn_drop_, agg_activation=F.elu) for _ in range(1)])
        ])
        self.classify = nn.Linear(hidden_dim, n_classes)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, bg, counting_attn, normalize='normal'):
        h = bg.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        for i, gnn in enumerate(self.layers):
            all_h = []
            for j, att_head in enumerate(gnn):
                all_h.append(att_head(h, bg, counting_attn))
            h = torch.squeeze(torch.cat(all_h, dim=1))
        bg.ndata['h'] = h
        hg = dgl.mean_nodes(bg, 'h')
        return self.classify(hg)
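Across the `GWKClassifier_*hid` variants above, the input width of each multi-head block follows one rule: the first block consumes `in_dim`, and every later block consumes `hidden_dim * num_heads[k-1]`, because the previous block's head outputs are concatenated along `dim=1`. A dependency-free sketch of that sizing rule (the helper name is ours, not part of the module):

```python
def gwk_block_in_dims(in_dim, hidden_dim, num_heads):
    # num_heads: head counts of every block except the final single-head one.
    # Block 0 sees the raw feature width; block k sees the concatenation of
    # the num_heads[k-1] head outputs, each of width hidden_dim.
    return [in_dim] + [hidden_dim * h for h in num_heads]
```

For example, with degree features (`in_dim=1`), `hidden_dim=8`, and `num_heads=[4, 4, 4, 4, 4]`, the six blocks of `GWKClassifier_6hid` see widths `[1, 32, 32, 32, 32, 32]`.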


##### Convolutional model definition
class GCNClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim)
        self.conv2 = GraphConv(hidden_dim, hidden_dim)
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, g, normalize='normal'):
        # Use node degree as the initial node feature. For undirected graphs,
        # the in-degree is the same as the out-degree.
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        h = F.relu(self.conv2(g, h))
        g.ndata['h'] = h
        # Calculate the graph representation by averaging all node representations.
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)
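Every `forward` above starts from the same degree-based preprocessing: z-score the node degrees for `'normal'`, or divide by the maximum degree for `'minmax'`. A stdlib-only sketch of that step (the function name is ours; the models do this inline with numpy):

```python
from statistics import mean, pstdev


def normalize_degrees(degrees, mode='normal'):
    # 'normal': zero-mean / unit-variance, matching (h - mean_) / std_.
    # 'minmax': scale by the largest degree, matching h / np.max(features).
    # np.std computes the population std, hence pstdev here.
    if mode == 'normal':
        m, s = mean(degrees), pstdev(degrees)
        return [(d - m) / s for d in degrees]
    if mode == 'minmax':
        mx = max(degrees)
        return [d / mx for d in degrees]
    return list(degrees)
```

Note that the inline version divides by zero when all degrees are equal (`std_ == 0`); the commented-out `if std_:` fallback that once guarded this was removed from the model code.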


# 1 hidden layer
class GCNClassifier_1hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier_1hid, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim[0])
        self.classify = nn.Linear(hidden_dim[-1], n_classes)

    def forward(self, g, normalize='normal'):
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)


# 2 hidden layers
class GCNClassifier_2hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier_2hid, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim[0])
        self.conv2 = GraphConv(hidden_dim[0], hidden_dim[1])
        self.classify = nn.Linear(hidden_dim[-1], n_classes)

    def forward(self, g, normalize='normal'):
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        h = F.relu(self.conv2(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)


# 3 hidden layers
class GCNClassifier_3hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier_3hid, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim[0])
        self.conv2 = GraphConv(hidden_dim[0], hidden_dim[1])
        self.conv3 = GraphConv(hidden_dim[1], hidden_dim[2])
        self.classify = nn.Linear(hidden_dim[-1], n_classes)

    def forward(self, g, normalize='normal'):
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        h = F.relu(self.conv2(g, h))
        h = F.relu(self.conv3(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)


# 4 hidden layers
class GCNClassifier_4hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier_4hid, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim[0])
        self.conv2 = GraphConv(hidden_dim[0], hidden_dim[1])
        self.conv3 = GraphConv(hidden_dim[1], hidden_dim[2])
        self.conv4 = GraphConv(hidden_dim[2], hidden_dim[3])
        self.classify = nn.Linear(hidden_dim[-1], n_classes)

    def forward(self, g, normalize='normal'):
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        h = F.relu(self.conv2(g, h))
        h = F.relu(self.conv3(g, h))
        h = F.relu(self.conv4(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)


# 5 hidden layers
class GCNClassifier_5hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier_5hid, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim[0])
        self.conv2 = GraphConv(hidden_dim[0], hidden_dim[1])
        self.conv3 = GraphConv(hidden_dim[1], hidden_dim[2])
        self.conv4 = GraphConv(hidden_dim[2], hidden_dim[3])
        self.conv5 = GraphConv(hidden_dim[3], hidden_dim[4])
        self.classify = nn.Linear(hidden_dim[4], n_classes)

    def forward(self, g, normalize='normal'):
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        h = F.relu(self.conv2(g, h))
        h = F.relu(self.conv3(g, h))
        h = F.relu(self.conv4(g, h))
        h = F.relu(self.conv5(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)


# 6 hidden layers
class GCNClassifier_6hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier_6hid, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim[0])
        self.conv2 = GraphConv(hidden_dim[0], hidden_dim[1])
        self.conv3 = GraphConv(hidden_dim[1], hidden_dim[2])
        self.conv4 = GraphConv(hidden_dim[2], hidden_dim[3])
        self.conv5 = GraphConv(hidden_dim[3], hidden_dim[4])
        self.conv6 = GraphConv(hidden_dim[4], hidden_dim[5])
        self.classify = nn.Linear(hidden_dim[5], n_classes)

    def forward(self, g, normalize='normal'):
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        h = F.relu(self.conv2(g, h))
        h = F.relu(self.conv3(g, h))
        h = F.relu(self.conv4(g, h))
        h = F.relu(self.conv5(g, h))
        h = F.relu(self.conv6(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)


# 7 hidden layers
class GCNClassifier_7hid(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes):
        super(GCNClassifier_7hid, self).__init__()
        self.conv1 = GraphConv(in_dim, hidden_dim[0])
        self.conv2 = GraphConv(hidden_dim[0], hidden_dim[1])
        self.conv3 = GraphConv(hidden_dim[1], hidden_dim[2])
        self.conv4 = GraphConv(hidden_dim[2], hidden_dim[3])
        self.conv5 = GraphConv(hidden_dim[3], hidden_dim[4])
        self.conv6 = GraphConv(hidden_dim[4], hidden_dim[5])
        self.conv7 = GraphConv(hidden_dim[5], hidden_dim[6])
        self.classify = nn.Linear(hidden_dim[-1], n_classes)

    def forward(self, g, normalize='normal'):
        h = g.in_degrees().view(-1, 1).float()
        num_nodes = h.shape[0]
        features = h.numpy().flatten()
        if normalize == 'normal':
            mean_ = np.mean(features)
            std_ = np.std(features)
            h = (h - mean_) / std_
        elif normalize == 'minmax':
            h = h / np.max(features)
        # Perform graph convolution and activation function.
        h = F.relu(self.conv1(g, h))
        h = F.relu(self.conv2(g, h))
        h = F.relu(self.conv3(g, h))
        h = F.relu(self.conv4(g, h))
        h = F.relu(self.conv5(g, h))
        h = F.relu(self.conv6(g, h))
        h = F.relu(self.conv7(g, h))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')
        return self.classify(hg)
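The `GCNClassifier_Nhid` family above differs only in how many `GraphConv` layers it chains: `conv1` maps `in_dim -> hidden_dim[0]`, and each later `convK` maps `hidden_dim[K-2] -> hidden_dim[K-1]`. A small sketch of that dimension chaining (hypothetical helper, not part of the module):

```python
def gcn_conv_dims(in_dim, hidden_dims):
    # (in, out) width of each GraphConv in a GCNClassifier_Nhid model.
    dims = [in_dim] + list(hidden_dims)
    return list(zip(dims[:-1], dims[1:]))
```

For a 3-hidden-layer model on degree features, `gcn_conv_dims(1, [16, 32, 64])` yields `[(1, 16), (16, 32), (32, 64)]`, and the classifier head consumes the last output width.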
| 37.962412 | 176 | 0.627398 | 6,789 | 48,478 | 4.181028 | 0.05833 | 0.066584 | 0.047349 | 0.034138 | 0.815219 | 0.793623 | 0.764101 | 0.746486 | 0.723727 | 0.712806 | 0 | 0.01304 | 0.25494 | 48,478 | 1,276 | 177 | 37.992163 | 0.772834 | 0.062234 | 0 | 0.661502 | 0 | 0 | 0.014144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067938 | false | 0 | 0.034565 | 0 | 0.169249 | 0.007151 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d6266876c9339bc457089d5e3abb2749cda3759d | 68 | py | Python | projects/ex48/test-ex48_convert.py | ba-nyar-naing/python-excercises | c7d3a196b8fce317bbbf1cc1c61c50d496c331ca | [
"MIT"
] | null | null | null | projects/ex48/test-ex48_convert.py | ba-nyar-naing/python-excercises | c7d3a196b8fce317bbbf1cc1c61c50d496c331ca | [
"MIT"
] | null | null | null | projects/ex48/test-ex48_convert.py | ba-nyar-naing/python-excercises | c7d3a196b8fce317bbbf1cc1c61c50d496c331ca | [
"MIT"
] | 3 | 2018-06-10T17:19:05.000Z | 2018-06-26T13:49:33.000Z | import ex48_convert as c
c.convert_number(1)
c.convert_number('a')
| 13.6 | 24 | 0.779412 | 13 | 68 | 3.846154 | 0.615385 | 0.32 | 0.56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04918 | 0.102941 | 68 | 4 | 25 | 17 | 0.770492 | 0 | 0 | 0 | 0 | 0 | 0.014706 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c3ce041775d8d0500a4853c7b5b6534665c05477 | 2,314 | py | Python | epytope/Data/pssms/smm/mat/B_58_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smm/mat/B_58_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smm/mat/B_58_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_58_01_9 = {0: {'A': 0.135, 'C': -0.219, 'E': 0.531, 'D': 0.657, 'G': 0.145, 'F': 0.051, 'I': -0.371, 'H': -0.138, 'K': -0.736, 'M': -0.656, 'L': -0.231, 'N': 0.597, 'Q': -0.103, 'P': 0.946, 'S': 0.123, 'R': -0.534, 'T': 0.065, 'W': 0.056, 'V': -0.108, 'Y': -0.208}, 1: {'A': -1.089, 'C': 0.104, 'E': 0.602, 'D': 0.874, 'G': -0.205, 'F': -0.128, 'I': -0.148, 'H': 0.705, 'K': 0.455, 'M': -0.469, 'L': 0.006, 'N': 0.087, 'Q': 0.38, 'P': 0.559, 'S': -1.287, 'R': 0.435, 'T': -0.993, 'W': -0.015, 'V': -0.45, 'Y': 0.578}, 2: {'A': -0.394, 'C': 0.072, 'E': 0.829, 'D': 0.257, 'G': 0.06, 'F': -0.255, 'I': -0.389, 'H': 0.328, 'K': 0.438, 'M': -0.433, 'L': -0.219, 'N': -0.334, 'Q': 0.016, 'P': 0.763, 'S': -0.27, 'R': 0.487, 'T': -0.144, 'W': -0.154, 'V': -0.245, 'Y': -0.413}, 3: {'A': -0.276, 'C': 0.15, 'E': 0.09, 'D': 0.12, 'G': -0.018, 'F': 0.039, 'I': 0.047, 'H': -0.052, 'K': 0.107, 'M': 0.066, 'L': 0.156, 'N': 0.122, 'Q': -0.011, 'P': -0.232, 'S': -0.078, 'R': -0.07, 'T': -0.114, 'W': 0.005, 'V': -0.054, 'Y': 0.004}, 4: {'A': 0.01, 'C': 0.002, 'E': 0.125, 'D': 0.091, 'G': 0.098, 'F': 0.039, 'I': -0.078, 'H': 0.114, 'K': 0.18, 'M': -0.109, 'L': -0.075, 'N': 0.107, 'Q': -0.012, 'P': 0.076, 'S': -0.181, 'R': 0.186, 'T': -0.08, 'W': -0.068, 'V': -0.322, 'Y': -0.101}, 5: {'A': -0.068, 'C': 0.081, 'E': -0.011, 'D': -0.018, 'G': 0.107, 'F': 0.156, 'I': -0.052, 'H': 0.028, 'K': 0.093, 'M': 0.033, 'L': -0.05, 'N': -0.006, 'Q': -0.097, 'P': -0.112, 'S': -0.006, 'R': 0.198, 'T': -0.086, 'W': 0.01, 'V': -0.158, 'Y': -0.043}, 6: {'A': -0.035, 'C': -0.038, 'E': -0.133, 'D': 0.136, 'G': 0.398, 'F': -0.355, 'I': 0.159, 'H': -0.155, 'K': 0.318, 'M': -0.141, 'L': -0.228, 'N': 0.143, 'Q': 0.07, 'P': -0.036, 'S': 0.118, 'R': 0.468, 'T': -0.1, 'W': -0.379, 'V': 0.019, 'Y': -0.229}, 7: {'A': 0.238, 'C': 0.185, 'E': -0.139, 'D': 0.319, 'G': 0.16, 'F': -0.019, 'I': -0.011, 'H': -0.179, 'K': 0.026, 'M': 0.171, 'L': -0.224, 'N': 0.094, 
'Q': -0.144, 'P': -0.017, 'S': -0.036, 'R': -0.012, 'T': -0.135, 'W': -0.006, 'V': -0.096, 'Y': -0.174}, 8: {'A': -0.038, 'C': -0.218, 'E': 1.337, 'D': 0.086, 'G': 0.589, 'F': -1.168, 'I': -0.986, 'H': -0.087, 'K': 0.619, 'M': -0.737, 'L': -0.462, 'N': 0.835, 'Q': 0.228, 'P': 1.279, 'S': 0.737, 'R': 0.671, 'T': 0.584, 'W': -2.412, 'V': 0.024, 'Y': -0.883}, -1: {'con': 5.0683}} | 2,314 | 2,314 | 0.393258 | 557 | 2,314 | 1.628366 | 0.333932 | 0.019846 | 0.011025 | 0.01323 | 0.015436 | 0 | 0 | 0 | 0 | 0 | 0 | 0.372488 | 0.161193 | 2,314 | 1 | 2,314 | 2,314 | 0.094797 | 0 | 0 | 0 | 0 | 0 | 0.07905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c3e8ec3cddaf4de557a39bd778d3be9c1d769d6c | 8,280 | py | Python | test/test_contrib.py | tina300399/torchgeometry | 48d8026f0a5f3d4ac5567b7b2738390892b3cc8d | [
"Apache-2.0"
] | null | null | null | test/test_contrib.py | tina300399/torchgeometry | 48d8026f0a5f3d4ac5567b7b2738390892b3cc8d | [
"Apache-2.0"
] | null | null | null | test/test_contrib.py | tina300399/torchgeometry | 48d8026f0a5f3d4ac5567b7b2738390892b3cc8d | [
"Apache-2.0"
] | 1 | 2019-10-04T05:05:28.000Z | 2019-10-04T05:05:28.000Z | import pytest
import torch
import torchgeometry as tgm
from torch.autograd import gradcheck
import utils
class TestExtractTensorPatches:
def test_smoke(self):
input = torch.arange(16.).view(1, 1, 4, 4)
m = tgm.contrib.ExtractTensorPatches(3)
assert m(input).shape == (1, 4, 1, 3, 3)
def test_b1_ch1_h4w4_ws3(self):
input = torch.arange(16.).view(1, 1, 4, 4)
m = tgm.contrib.ExtractTensorPatches(3)
patches = m(input)
assert patches.shape == (1, 4, 1, 3, 3)
assert utils.check_equal_torch(input[0, :, :3, :3], patches[0, 0])
assert utils.check_equal_torch(input[0, :, :3, 1:], patches[0, 1])
assert utils.check_equal_torch(input[0, :, 1:, :3], patches[0, 2])
assert utils.check_equal_torch(input[0, :, 1:, 1:], patches[0, 3])
def test_b1_ch2_h4w4_ws3(self):
input = torch.arange(16.).view(1, 1, 4, 4)
input = input.expand(-1, 2, -1, -1) # copy all channels
m = tgm.contrib.ExtractTensorPatches(3)
patches = m(input)
assert patches.shape == (1, 4, 2, 3, 3)
assert utils.check_equal_torch(input[0, :, :3, :3], patches[0, 0])
assert utils.check_equal_torch(input[0, :, :3, 1:], patches[0, 1])
assert utils.check_equal_torch(input[0, :, 1:, :3], patches[0, 2])
assert utils.check_equal_torch(input[0, :, 1:, 1:], patches[0, 3])
def test_b1_ch1_h4w4_ws2(self):
input = torch.arange(16.).view(1, 1, 4, 4)
m = tgm.contrib.ExtractTensorPatches(2)
patches = m(input)
assert patches.shape == (1, 9, 1, 2, 2)
assert utils.check_equal_torch(input[0, :, 0:2, 1:3], patches[0, 1])
assert utils.check_equal_torch(input[0, :, 0:2, 2:4], patches[0, 2])
assert utils.check_equal_torch(input[0, :, 1:3, 1:3], patches[0, 4])
assert utils.check_equal_torch(input[0, :, 2:4, 1:3], patches[0, 7])
def test_b1_ch1_h4w4_ws2_stride2(self):
input = torch.arange(16.).view(1, 1, 4, 4)
m = tgm.contrib.ExtractTensorPatches(2, stride=2)
patches = m(input)
assert patches.shape == (1, 4, 1, 2, 2)
assert utils.check_equal_torch(input[0, :, 0:2, 0:2], patches[0, 0])
assert utils.check_equal_torch(input[0, :, 0:2, 2:4], patches[0, 1])
assert utils.check_equal_torch(input[0, :, 2:4, 0:2], patches[0, 2])
assert utils.check_equal_torch(input[0, :, 2:4, 2:4], patches[0, 3])
def test_b1_ch1_h4w4_ws2_stride21(self):
input = torch.arange(16.).view(1, 1, 4, 4)
m = tgm.contrib.ExtractTensorPatches(2, stride=(2, 1))
patches = m(input)
assert patches.shape == (1, 6, 1, 2, 2)
assert utils.check_equal_torch(input[0, :, 0:2, 1:3], patches[0, 1])
assert utils.check_equal_torch(input[0, :, 0:2, 2:4], patches[0, 2])
assert utils.check_equal_torch(input[0, :, 2:4, 0:2], patches[0, 3])
assert utils.check_equal_torch(input[0, :, 2:4, 2:4], patches[0, 5])
def test_b1_ch1_h3w3_ws2_stride1_padding1(self):
input = torch.arange(9.).view(1, 1, 3, 3)
m = tgm.contrib.ExtractTensorPatches(2, stride=1, padding=1)
patches = m(input)
assert patches.shape == (1, 16, 1, 2, 2)
assert utils.check_equal_torch(input[0, :, 0:2, 0:2], patches[0, 5])
assert utils.check_equal_torch(input[0, :, 0:2, 1:3], patches[0, 6])
assert utils.check_equal_torch(input[0, :, 1:3, 0:2], patches[0, 9])
assert utils.check_equal_torch(input[0, :, 1:3, 1:3], patches[0, 10])
def test_b2_ch1_h3w3_ws2_stride1_padding1(self):
batch_size = 2
input = torch.arange(9.).view(1, 1, 3, 3)
input = input.expand(batch_size, -1, -1, -1)
m = tgm.contrib.ExtractTensorPatches(2, stride=1, padding=1)
patches = m(input)
assert patches.shape == (batch_size, 16, 1, 2, 2)
for i in range(batch_size):
assert utils.check_equal_torch(
input[i, :, 0:2, 0:2], patches[i, 5])
assert utils.check_equal_torch(
input[i, :, 0:2, 1:3], patches[i, 6])
assert utils.check_equal_torch(
input[i, :, 1:3, 0:2], patches[i, 9])
assert utils.check_equal_torch(
input[i, :, 1:3, 1:3], patches[i, 10])
def test_b1_ch1_h3w3_ws23(self):
input = torch.arange(9.).view(1, 1, 3, 3)
m = tgm.contrib.ExtractTensorPatches((2, 3))
patches = m(input)
assert patches.shape == (1, 2, 1, 2, 3)
assert utils.check_equal_torch(input[0, :, 0:2, 0:3], patches[0, 0])
assert utils.check_equal_torch(input[0, :, 1:3, 0:3], patches[0, 1])
def test_b1_ch1_h3w4_ws23(self):
input = torch.arange(12.).view(1, 1, 3, 4)
m = tgm.contrib.ExtractTensorPatches((2, 3))
patches = m(input)
assert patches.shape == (1, 4, 1, 2, 3)
assert utils.check_equal_torch(input[0, :, 0:2, 0:3], patches[0, 0])
assert utils.check_equal_torch(input[0, :, 0:2, 1:4], patches[0, 1])
assert utils.check_equal_torch(input[0, :, 1:3, 0:3], patches[0, 2])
assert utils.check_equal_torch(input[0, :, 1:3, 1:4], patches[0, 3])
# TODO: implement me
def test_jit(self):
pass
def test_gradcheck(self):
input = torch.rand(2, 3, 4, 4)
input = utils.tensor_to_gradcheck_var(input) # to var
assert gradcheck(tgm.contrib.extract_tensor_patches,
(input, 3,), raise_exception=True)


class TestSoftArgmax2d:
    def _test_smoke(self):
        input = torch.zeros(1, 1, 2, 3)
        m = tgm.contrib.SpatialSoftArgmax2d()
        assert m(input).shape == (1, 1, 2)

    def _test_top_left(self):
        input = torch.zeros(1, 1, 2, 3)
        input[..., 0, 0] = 10.
        coord = tgm.contrib.spatial_soft_argmax2d(input, True)
        assert pytest.approx(coord[..., 0].item(), -1.0)
        assert pytest.approx(coord[..., 1].item(), -1.0)

    def _test_top_left_normalized(self):
        input = torch.zeros(1, 1, 2, 3)
        input[..., 0, 0] = 10.
        coord = tgm.contrib.spatial_soft_argmax2d(input, False)
        assert pytest.approx(coord[..., 0].item(), 0.0)
        assert pytest.approx(coord[..., 1].item(), 0.0)

    def _test_bottom_right(self):
        input = torch.zeros(1, 1, 2, 3)
        input[..., -1, 1] = 10.
        coord = tgm.contrib.spatial_soft_argmax2d(input, True)
        assert pytest.approx(coord[..., 0].item(), 1.0)
        assert pytest.approx(coord[..., 1].item(), 1.0)

    def _test_bottom_right_normalized(self):
        input = torch.zeros(1, 1, 2, 3)
        input[..., -1, 1] = 10.
        coord = tgm.contrib.spatial_soft_argmax2d(input, False)
        assert pytest.approx(coord[..., 0].item(), 2.0)
        assert pytest.approx(coord[..., 1].item(), 1.0)

    def _test_batch2_n2(self):
        input = torch.zeros(2, 2, 2, 3)
        input[0, 0, 0, 0] = 10.  # top-left
        input[0, 1, 0, -1] = 10.  # top-right
        input[1, 0, -1, 0] = 10.  # bottom-left
        input[1, 1, -1, -1] = 10.  # bottom-right
        coord = tgm.contrib.spatial_soft_argmax2d(input)
        assert pytest.approx(coord[0, 0, 0].item(), -1.0)  # top-left
        assert pytest.approx(coord[0, 0, 1].item(), -1.0)
        assert pytest.approx(coord[0, 1, 0].item(), 1.0)  # top-right
        assert pytest.approx(coord[0, 1, 1].item(), -1.0)
        assert pytest.approx(coord[1, 0, 0].item(), -1.0)  # bottom-left
        assert pytest.approx(coord[1, 0, 1].item(), 1.0)
        assert pytest.approx(coord[1, 1, 0].item(), 1.0)  # bottom-right
        assert pytest.approx(coord[1, 1, 1].item(), 1.0)

    # TODO: implement me
    def _test_jit(self):
        pass

    def _test_gradcheck(self):
        input = torch.rand(2, 3, 3, 2)
        input = utils.tensor_to_gradcheck_var(input)  # to var
        assert gradcheck(tgm.contrib.spatial_soft_argmax2d,
                         (input), raise_exception=True)

    def test_run_all(self):
        self._test_smoke()
        self._test_top_left()
        self._test_top_left_normalized()
        self._test_bottom_right()
        self._test_bottom_right_normalized()
        self._test_batch2_n2()
        self._test_gradcheck()
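`spatial_soft_argmax2d`, exercised by the tests above, returns a softmax-weighted average of pixel coordinates rather than a hard index. A stdlib-only 1-D sketch of that idea (ours, not the torchgeometry implementation):

```python
from math import exp


def soft_argmax_1d(scores):
    # Numerically stable softmax over the scores, then the expected index
    # under that distribution; a sharp peak makes this approach hard argmax.
    mx = max(scores)
    w = [exp(s - mx) for s in scores]
    z = sum(w)
    return sum(i * wi for i, wi in enumerate(w)) / z
```

This mirrors how the tests place a single `10.` activation in an otherwise-zero map and expect the returned coordinates to land near that cell.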
| 42.244898 | 77 | 0.586111 | 1,278 | 8,280 | 3.652582 | 0.075117 | 0.08012 | 0.116538 | 0.152956 | 0.860326 | 0.796487 | 0.738432 | 0.722579 | 0.703299 | 0.634961 | 0 | 0.089723 | 0.250242 | 8,280 | 195 | 78 | 42.461538 | 0.66221 | 0.018961 | 0 | 0.380368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005128 | 0.386503 | 1 | 0.128834 | false | 0.01227 | 0.030675 | 0 | 0.171779 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c3fa19f3b5ec49f439c866981afbf7445a078fb0 | 48,640 | py | Python | nbdev_sphinx/_modidx.py | fastai/nbdev-stdlib | 8a956e40ee31c32170ab96f832fc8e0c9510c83e | [
"Apache-2.0"
] | 2 | 2020-10-15T14:59:56.000Z | 2020-10-15T17:29:18.000Z | nbdev_sphinx/_modidx.py | fastai/nbdev-stdlib | 8a956e40ee31c32170ab96f832fc8e0c9510c83e | [
"Apache-2.0"
] | 3 | 2020-10-17T05:05:21.000Z | 2020-10-19T21:19:01.000Z | nbdev_sphinx/_modidx.py | fastai/nbdev-stdlib | 8a956e40ee31c32170ab96f832fc8e0c9510c83e | [
"Apache-2.0"
] | null | null | null | # Autogenerated by get_module_idx.py
d = { 'syms': { 'docutils.parsers': {'docutils.parsers.rst': 'https://www.sphinx-doc.org/en/stable/extdev/markupapi.html#module-docutils.parsers.rst'},
'sphinx': { 'sphinx.addnodes': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#module-sphinx.addnodes',
'sphinx.application': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#module-sphinx.application',
'sphinx.builders': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders',
'sphinx.directives': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#module-sphinx.directives',
'sphinx.domains': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#module-sphinx.domains',
'sphinx.environment': 'https://www.sphinx-doc.org/en/stable/extdev/envapi.html#module-sphinx.environment',
'sphinx.errors': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#module-sphinx.errors',
'sphinx.parsers': 'https://www.sphinx-doc.org/en/stable/extdev/parserapi.html#module-sphinx.parsers'},
'sphinx.builders': { 'sphinx.builders.changes': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.changes',
'sphinx.builders.dirhtml': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.dirhtml',
'sphinx.builders.dummy': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.dummy',
'sphinx.builders.epub3': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.epub3',
'sphinx.builders.gettext': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.gettext',
'sphinx.builders.html': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.html',
'sphinx.builders.latex': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.latex',
'sphinx.builders.linkcheck': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.linkcheck',
'sphinx.builders.manpage': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.manpage',
'sphinx.builders.singlehtml': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.singlehtml',
'sphinx.builders.texinfo': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.texinfo',
'sphinx.builders.text': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.text',
'sphinx.builders.xml': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinx.builders.xml',
'sphinx.builders.Builder': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder',
'sphinx.builders.Builder.build': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.build',
'sphinx.builders.Builder.build_all': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.build_all',
'sphinx.builders.Builder.build_specific': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.build_specific',
'sphinx.builders.Builder.build_update': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.build_update',
'sphinx.builders.Builder.finish': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.finish',
'sphinx.builders.Builder.get_outdated_docs': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.get_outdated_docs',
'sphinx.builders.Builder.get_relative_uri': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.get_relative_uri',
'sphinx.builders.Builder.get_target_uri': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.get_target_uri',
'sphinx.builders.Builder.init': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.init',
'sphinx.builders.Builder.prepare_writing': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.prepare_writing',
'sphinx.builders.Builder.write_doc': 'https://www.sphinx-doc.org/en/stable/extdev/builderapi.html#sphinx.builders.Builder.write_doc'},
'sphinx.domains': { 'sphinx.domains.python': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#module-sphinx.domains.python',
'sphinx.domains.Domain': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain',
'sphinx.domains.Index': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Index',
'sphinx.domains.ObjType': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.ObjType',
'sphinx.domains.Domain.add_object_type': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.add_object_type',
'sphinx.domains.Domain.check_consistency': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.check_consistency',
'sphinx.domains.Domain.clear_doc': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.clear_doc',
'sphinx.domains.Domain.directive': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.directive',
'sphinx.domains.Domain.get_enumerable_node_type': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.get_enumerable_node_type',
'sphinx.domains.Domain.get_full_qualified_name': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.get_full_qualified_name',
'sphinx.domains.Domain.get_objects': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.get_objects',
'sphinx.domains.Domain.get_type_name': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.get_type_name',
'sphinx.domains.Domain.merge_domaindata': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.merge_domaindata',
'sphinx.domains.Domain.process_doc': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.process_doc',
'sphinx.domains.Domain.process_field_xref': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.process_field_xref',
'sphinx.domains.Domain.resolve_any_xref': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.resolve_any_xref',
'sphinx.domains.Domain.resolve_xref': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.resolve_xref',
'sphinx.domains.Domain.role': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.role',
'sphinx.domains.Domain.setup': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Domain.setup',
'sphinx.domains.Index.generate': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.Index.generate'},
'sphinx.environment': { 'sphinx.environment.collectors': 'https://www.sphinx-doc.org/en/stable/extdev/collectorapi.html#module-sphinx.environment.collectors',
'sphinx.environment.BuildEnvironment': 'https://www.sphinx-doc.org/en/stable/extdev/envapi.html#sphinx.environment.BuildEnvironment',
'sphinx.environment.BuildEnvironment.doc2path': 'https://www.sphinx-doc.org/en/stable/extdev/envapi.html#sphinx.environment.BuildEnvironment.doc2path',
'sphinx.environment.BuildEnvironment.new_serialno': 'https://www.sphinx-doc.org/en/stable/extdev/envapi.html#sphinx.environment.BuildEnvironment.new_serialno',
'sphinx.environment.BuildEnvironment.note_dependency': 'https://www.sphinx-doc.org/en/stable/extdev/envapi.html#sphinx.environment.BuildEnvironment.note_dependency',
'sphinx.environment.BuildEnvironment.note_reread': 'https://www.sphinx-doc.org/en/stable/extdev/envapi.html#sphinx.environment.BuildEnvironment.note_reread',
'sphinx.environment.BuildEnvironment.relfn2path': 'https://www.sphinx-doc.org/en/stable/extdev/envapi.html#sphinx.environment.BuildEnvironment.relfn2path'},
'sphinx.ext': { 'sphinx.ext.autodoc': 'https://www.sphinx-doc.org/en/stable/usage/extensions/autodoc.html#module-sphinx.ext.autodoc',
'sphinx.ext.autosectionlabel': 'https://www.sphinx-doc.org/en/stable/usage/extensions/autosectionlabel.html#module-sphinx.ext.autosectionlabel',
'sphinx.ext.autosummary': 'https://www.sphinx-doc.org/en/stable/usage/extensions/autosummary.html#module-sphinx.ext.autosummary',
'sphinx.ext.coverage': 'https://www.sphinx-doc.org/en/stable/usage/extensions/coverage.html#module-sphinx.ext.coverage',
'sphinx.ext.doctest': 'https://www.sphinx-doc.org/en/stable/usage/extensions/doctest.html#module-sphinx.ext.doctest',
'sphinx.ext.duration': 'https://www.sphinx-doc.org/en/stable/usage/extensions/duration.html#module-sphinx.ext.duration',
'sphinx.ext.extlinks': 'https://www.sphinx-doc.org/en/stable/usage/extensions/extlinks.html#module-sphinx.ext.extlinks',
'sphinx.ext.githubpages': 'https://www.sphinx-doc.org/en/stable/usage/extensions/githubpages.html#module-sphinx.ext.githubpages',
'sphinx.ext.graphviz': 'https://www.sphinx-doc.org/en/stable/usage/extensions/graphviz.html#module-sphinx.ext.graphviz',
'sphinx.ext.ifconfig': 'https://www.sphinx-doc.org/en/stable/usage/extensions/ifconfig.html#module-sphinx.ext.ifconfig',
'sphinx.ext.imgconverter': 'https://www.sphinx-doc.org/en/stable/usage/extensions/imgconverter.html#module-sphinx.ext.imgconverter',
'sphinx.ext.imgmath': 'https://www.sphinx-doc.org/en/stable/usage/extensions/math.html#module-sphinx.ext.imgmath',
'sphinx.ext.inheritance_diagram': 'https://www.sphinx-doc.org/en/stable/usage/extensions/inheritance.html#module-sphinx.ext.inheritance_diagram',
'sphinx.ext.intersphinx': 'https://www.sphinx-doc.org/en/stable/usage/extensions/intersphinx.html#module-sphinx.ext.intersphinx',
'sphinx.ext.jsmath': 'https://www.sphinx-doc.org/en/stable/usage/extensions/math.html#module-sphinx.ext.jsmath',
'sphinx.ext.linkcode': 'https://www.sphinx-doc.org/en/stable/usage/extensions/linkcode.html#module-sphinx.ext.linkcode',
'sphinx.ext.mathbase': 'https://www.sphinx-doc.org/en/stable/usage/extensions/math.html#module-sphinx.ext.mathbase',
'sphinx.ext.mathjax': 'https://www.sphinx-doc.org/en/stable/usage/extensions/math.html#module-sphinx.ext.mathjax',
'sphinx.ext.napoleon': 'https://www.sphinx-doc.org/en/stable/usage/extensions/napoleon.html#module-sphinx.ext.napoleon',
'sphinx.ext.todo': 'https://www.sphinx-doc.org/en/stable/usage/extensions/todo.html#module-sphinx.ext.todo',
'sphinx.ext.viewcode': 'https://www.sphinx-doc.org/en/stable/usage/extensions/viewcode.html#module-sphinx.ext.viewcode'},
'sphinxcontrib': { 'sphinxcontrib.applehelp': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinxcontrib.applehelp',
'sphinxcontrib.devhelp': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinxcontrib.devhelp',
'sphinxcontrib.htmlhelp': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinxcontrib.htmlhelp',
'sphinxcontrib.qthelp': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#module-sphinxcontrib.qthelp'},
'docutils.parsers.rst': { 'docutils.parsers.rst.Directive': 'https://www.sphinx-doc.org/en/stable/extdev/markupapi.html#docutils.parsers.rst.Directive',
'docutils.parsers.rst.Directive.run': 'https://www.sphinx-doc.org/en/stable/extdev/markupapi.html#docutils.parsers.rst.Directive.run'},
'sphinx.addnodes': { 'sphinx.addnodes.compact_paragraph': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.compact_paragraph',
'sphinx.addnodes.desc': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc',
'sphinx.addnodes.desc_addname': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_addname',
'sphinx.addnodes.desc_annotation': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_annotation',
'sphinx.addnodes.desc_content': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_content',
'sphinx.addnodes.desc_inline': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_inline',
'sphinx.addnodes.desc_name': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_name',
'sphinx.addnodes.desc_optional': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_optional',
'sphinx.addnodes.desc_parameter': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_parameter',
'sphinx.addnodes.desc_parameterlist': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_parameterlist',
'sphinx.addnodes.desc_returns': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_returns',
'sphinx.addnodes.desc_signature': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_signature',
'sphinx.addnodes.desc_signature_line': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_signature_line',
'sphinx.addnodes.desc_type': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.desc_type',
'sphinx.addnodes.download_reference': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.download_reference',
'sphinx.addnodes.glossary': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.glossary',
'sphinx.addnodes.highlightlang': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.highlightlang',
'sphinx.addnodes.index': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.index',
'sphinx.addnodes.literal_emphasis': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.literal_emphasis',
'sphinx.addnodes.meta': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.meta',
'sphinx.addnodes.only': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.only',
'sphinx.addnodes.pending_xref': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.pending_xref',
'sphinx.addnodes.pending_xref_condition': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.pending_xref_condition',
'sphinx.addnodes.production': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.production',
'sphinx.addnodes.productionlist': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.productionlist',
'sphinx.addnodes.seealso': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.seealso',
'sphinx.addnodes.start_of_file': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.start_of_file',
'sphinx.addnodes.toctree': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.toctree',
'sphinx.addnodes.versionmodified': 'https://www.sphinx-doc.org/en/stable/extdev/nodes.html#sphinx.addnodes.versionmodified'},
'sphinx.application': { 'sphinx.application.Sphinx': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx',
'sphinx.application.TemplateBridge': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.TemplateBridge',
'sphinx.application.Sphinx.add_autodoc_attrgetter': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_autodoc_attrgetter',
'sphinx.application.Sphinx.add_autodocumenter': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_autodocumenter',
'sphinx.application.Sphinx.add_builder': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_builder',
'sphinx.application.Sphinx.add_config_value': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_config_value',
'sphinx.application.Sphinx.add_crossref_type': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_crossref_type',
'sphinx.application.Sphinx.add_css_file': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_css_file',
'sphinx.application.Sphinx.add_directive': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_directive',
'sphinx.application.Sphinx.add_directive_to_domain': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_directive_to_domain',
'sphinx.application.Sphinx.add_domain': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_domain',
'sphinx.application.Sphinx.add_enumerable_node': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_enumerable_node',
'sphinx.application.Sphinx.add_env_collector': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_env_collector',
'sphinx.application.Sphinx.add_event': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_event',
'sphinx.application.Sphinx.add_generic_role': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_generic_role',
'sphinx.application.Sphinx.add_html_math_renderer': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_html_math_renderer',
'sphinx.application.Sphinx.add_html_theme': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_html_theme',
'sphinx.application.Sphinx.add_index_to_domain': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_index_to_domain',
'sphinx.application.Sphinx.add_js_file': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_js_file',
'sphinx.application.Sphinx.add_latex_package': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_latex_package',
'sphinx.application.Sphinx.add_lexer': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_lexer',
'sphinx.application.Sphinx.add_message_catalog': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_message_catalog',
'sphinx.application.Sphinx.add_node': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_node',
'sphinx.application.Sphinx.add_object_type': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_object_type',
'sphinx.application.Sphinx.add_post_transform': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_post_transform',
'sphinx.application.Sphinx.add_role': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_role',
'sphinx.application.Sphinx.add_role_to_domain': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_role_to_domain',
'sphinx.application.Sphinx.add_search_language': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_search_language',
'sphinx.application.Sphinx.add_source_parser': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_source_parser',
'sphinx.application.Sphinx.add_source_suffix': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_source_suffix',
'sphinx.application.Sphinx.add_transform': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_transform',
'sphinx.application.Sphinx.connect': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.connect',
'sphinx.application.Sphinx.disconnect': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.disconnect',
'sphinx.application.Sphinx.emit': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.emit',
'sphinx.application.Sphinx.emit_firstresult': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.emit_firstresult',
'sphinx.application.Sphinx.is_parallel_allowed': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.is_parallel_allowed',
'sphinx.application.Sphinx.require_sphinx': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.require_sphinx',
'sphinx.application.Sphinx.set_translator': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.set_translator',
'sphinx.application.Sphinx.setup_extension': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.setup_extension',
'sphinx.application.TemplateBridge.init': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.TemplateBridge.init',
'sphinx.application.TemplateBridge.newest_template_mtime': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.TemplateBridge.newest_template_mtime',
'sphinx.application.TemplateBridge.render': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.TemplateBridge.render',
'sphinx.application.TemplateBridge.render_string': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.TemplateBridge.render_string'},
'sphinx.builders.changes': { 'sphinx.builders.changes.ChangesBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.changes.ChangesBuilder'},
'sphinx.builders.dirhtml': { 'sphinx.builders.dirhtml.DirectoryHTMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.dirhtml.DirectoryHTMLBuilder'},
'sphinx.builders.dummy': { 'sphinx.builders.dummy.DummyBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.dummy.DummyBuilder'},
'sphinx.builders.epub3': { 'sphinx.builders.epub3.Epub3Builder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.epub3.Epub3Builder'},
'sphinx.builders.gettext': { 'sphinx.builders.gettext.MessageCatalogBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.gettext.MessageCatalogBuilder'},
'sphinx.builders.html': { 'sphinx.builders.html.StandaloneHTMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.html.StandaloneHTMLBuilder'},
'sphinx.builders.latex': { 'sphinx.builders.latex.LaTeXBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.latex.LaTeXBuilder'},
'sphinx.builders.linkcheck': { 'sphinx.builders.linkcheck.CheckExternalLinksBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.linkcheck.CheckExternalLinksBuilder'},
'sphinx.builders.manpage': { 'sphinx.builders.manpage.ManualPageBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.manpage.ManualPageBuilder'},
'sphinx.builders.singlehtml': { 'sphinx.builders.singlehtml.SingleFileHTMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.singlehtml.SingleFileHTMLBuilder'},
'sphinx.builders.texinfo': { 'sphinx.builders.texinfo.TexinfoBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.texinfo.TexinfoBuilder'},
'sphinx.builders.text': { 'sphinx.builders.text.TextBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.text.TextBuilder'},
'sphinx.builders.xml': { 'sphinx.builders.xml.PseudoXMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.xml.PseudoXMLBuilder',
'sphinx.builders.xml.XMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinx.builders.xml.XMLBuilder'},
'sphinx.config': {'sphinx.config.Config': 'https://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.config.Config'},
'sphinx.directives': { 'sphinx.directives.ObjectDescription': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription',
'sphinx.directives.ObjectDescription.add_target_and_index': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription.add_target_and_index',
'sphinx.directives.ObjectDescription.after_content': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription.after_content',
'sphinx.directives.ObjectDescription.before_content': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription.before_content',
'sphinx.directives.ObjectDescription.get_signatures': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription.get_signatures',
'sphinx.directives.ObjectDescription.handle_signature': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription.handle_signature',
'sphinx.directives.ObjectDescription.run': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription.run',
'sphinx.directives.ObjectDescription.transform_content': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.directives.ObjectDescription.transform_content'},
'sphinx.domains.python': { 'sphinx.domains.python.PythonDomain': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.python.PythonDomain',
'sphinx.domains.python.PythonDomain.note_module': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.python.PythonDomain.note_module',
'sphinx.domains.python.PythonDomain.note_object': 'https://www.sphinx-doc.org/en/stable/extdev/domainapi.html#sphinx.domains.python.PythonDomain.note_object'},
'sphinx.environment.collectors': { 'sphinx.environment.collectors.EnvironmentCollector': 'https://www.sphinx-doc.org/en/stable/extdev/collectorapi.html#sphinx.environment.collectors.EnvironmentCollector',
'sphinx.environment.collectors.EnvironmentCollector.clear_doc': 'https://www.sphinx-doc.org/en/stable/extdev/collectorapi.html#sphinx.environment.collectors.EnvironmentCollector.clear_doc',
'sphinx.environment.collectors.EnvironmentCollector.get_outdated_docs': 'https://www.sphinx-doc.org/en/stable/extdev/collectorapi.html#sphinx.environment.collectors.EnvironmentCollector.get_outdated_docs',
'sphinx.environment.collectors.EnvironmentCollector.get_updated_docs': 'https://www.sphinx-doc.org/en/stable/extdev/collectorapi.html#sphinx.environment.collectors.EnvironmentCollector.get_updated_docs',
'sphinx.environment.collectors.EnvironmentCollector.merge_other': 'https://www.sphinx-doc.org/en/stable/extdev/collectorapi.html#sphinx.environment.collectors.EnvironmentCollector.merge_other',
'sphinx.environment.collectors.EnvironmentCollector.process_doc': 'https://www.sphinx-doc.org/en/stable/extdev/collectorapi.html#sphinx.environment.collectors.EnvironmentCollector.process_doc'},
'sphinx.events': { 'sphinx.events.EventManager': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.events.EventManager',
'sphinx.events.EventManager.add': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.events.EventManager.add',
'sphinx.events.EventManager.connect': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.events.EventManager.connect',
'sphinx.events.EventManager.disconnect': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.events.EventManager.disconnect',
'sphinx.events.EventManager.emit': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.events.EventManager.emit',
'sphinx.events.EventManager.emit_firstresult': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.events.EventManager.emit_firstresult'},
'sphinx.ext.coverage': { 'sphinx.ext.coverage.CoverageBuilder': 'https://www.sphinx-doc.org/en/stable/usage/extensions/coverage.html#sphinx.ext.coverage.CoverageBuilder'},
'sphinx.parsers': { 'sphinx.parsers.Parser': 'https://www.sphinx-doc.org/en/stable/extdev/parserapi.html#sphinx.parsers.Parser',
'sphinx.parsers.Parser.set_application': 'https://www.sphinx-doc.org/en/stable/extdev/parserapi.html#sphinx.parsers.Parser.set_application'},
'sphinx.project': { 'sphinx.project.Project': 'https://www.sphinx-doc.org/en/stable/extdev/projectapi.html#sphinx.project.Project',
'sphinx.project.Project.discover': 'https://www.sphinx-doc.org/en/stable/extdev/projectapi.html#sphinx.project.Project.discover',
'sphinx.project.Project.doc2path': 'https://www.sphinx-doc.org/en/stable/extdev/projectapi.html#sphinx.project.Project.doc2path',
'sphinx.project.Project.path2doc': 'https://www.sphinx-doc.org/en/stable/extdev/projectapi.html#sphinx.project.Project.path2doc',
'sphinx.project.Project.restore': 'https://www.sphinx-doc.org/en/stable/extdev/projectapi.html#sphinx.project.Project.restore'},
'sphinx.transforms': { 'sphinx.transforms.SphinxTransform': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.SphinxTransform'},
'sphinx.transforms.post_transforms': { 'sphinx.transforms.post_transforms.SphinxPostTransform': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.post_transforms.SphinxPostTransform',
'sphinx.transforms.post_transforms.SphinxPostTransform.apply': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.post_transforms.SphinxPostTransform.apply',
'sphinx.transforms.post_transforms.SphinxPostTransform.is_supported': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.post_transforms.SphinxPostTransform.is_supported',
'sphinx.transforms.post_transforms.SphinxPostTransform.run': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.post_transforms.SphinxPostTransform.run'},
'sphinx.transforms.post_transforms.images': { 'sphinx.transforms.post_transforms.images.ImageConverter': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.post_transforms.images.ImageConverter',
'sphinx.transforms.post_transforms.images.ImageConverter.convert': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.post_transforms.images.ImageConverter.convert',
'sphinx.transforms.post_transforms.images.ImageConverter.is_available': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.transforms.post_transforms.images.ImageConverter.is_available'},
'sphinx.util.docutils': { 'sphinx.util.docutils.ReferenceRole': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.util.docutils.ReferenceRole',
'sphinx.util.docutils.SphinxDirective': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.util.docutils.SphinxDirective',
'sphinx.util.docutils.SphinxRole': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.util.docutils.SphinxRole',
'sphinx.util.docutils.SphinxDirective.get_location': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.util.docutils.SphinxDirective.get_location',
'sphinx.util.docutils.SphinxDirective.get_source_info': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.util.docutils.SphinxDirective.get_source_info',
'sphinx.util.docutils.SphinxDirective.set_source_info': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.util.docutils.SphinxDirective.set_source_info',
'sphinx.util.docutils.SphinxRole.get_location': 'https://www.sphinx-doc.org/en/stable/extdev/utils.html#sphinx.util.docutils.SphinxRole.get_location'},
'sphinx.util.logging': { 'sphinx.util.logging.SphinxLoggerAdapter': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter',
'sphinx.util.logging.SphinxLoggerAdapter.critical': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter.critical',
'sphinx.util.logging.SphinxLoggerAdapter.debug': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter.debug',
'sphinx.util.logging.SphinxLoggerAdapter.error': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter.error',
'sphinx.util.logging.SphinxLoggerAdapter.info': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter.info',
'sphinx.util.logging.SphinxLoggerAdapter.log': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter.log',
'sphinx.util.logging.SphinxLoggerAdapter.verbose': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter.verbose',
'sphinx.util.logging.SphinxLoggerAdapter.warning': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.SphinxLoggerAdapter.warning',
'sphinx.util.logging.getLogger': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.getLogger',
'sphinx.util.logging.pending_logging': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.pending_logging',
'sphinx.util.logging.pending_warnings': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.pending_warnings',
'sphinx.util.logging.prefixed_warnings': 'https://www.sphinx-doc.org/en/stable/extdev/logging.html#sphinx.util.logging.prefixed_warnings'},
'sphinxcontrib.applehelp': { 'sphinxcontrib.applehelp.AppleHelpBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinxcontrib.applehelp.AppleHelpBuilder'},
'sphinxcontrib.devhelp': { 'sphinxcontrib.devhelp.DevhelpBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinxcontrib.devhelp.DevhelpBuilder'},
'sphinxcontrib.htmlhelp': { 'sphinxcontrib.htmlhelp.HTMLHelpBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinxcontrib.htmlhelp.HTMLHelpBuilder'},
'sphinxcontrib.qthelp': { 'sphinxcontrib.qthelp.QtHelpBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinxcontrib.qthelp.QtHelpBuilder'},
'sphinxcontrib.serializinghtml': { 'sphinxcontrib.serializinghtml.JSONHTMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinxcontrib.serializinghtml.JSONHTMLBuilder',
'sphinxcontrib.serializinghtml.PickleHTMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinxcontrib.serializinghtml.PickleHTMLBuilder',
'sphinxcontrib.serializinghtml.SerializingHTMLBuilder': 'https://www.sphinx-doc.org/en/stable/usage/builders/index.html#sphinxcontrib.serializinghtml.SerializingHTMLBuilder'},
'sphinxcontrib.websupport': { 'sphinxcontrib.websupport.WebSupport': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/api.html#sphinxcontrib.websupport.WebSupport',
'sphinxcontrib.websupport.WebSupport.add_comment': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/api.html#sphinxcontrib.websupport.WebSupport.add_comment',
'sphinxcontrib.websupport.WebSupport.build': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/api.html#sphinxcontrib.websupport.WebSupport.build',
'sphinxcontrib.websupport.WebSupport.get_data': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/api.html#sphinxcontrib.websupport.WebSupport.get_data',
'sphinxcontrib.websupport.WebSupport.get_document': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/api.html#sphinxcontrib.websupport.WebSupport.get_document',
'sphinxcontrib.websupport.WebSupport.get_search_results': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/api.html#sphinxcontrib.websupport.WebSupport.get_search_results',
'sphinxcontrib.websupport.WebSupport.process_vote': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/api.html#sphinxcontrib.websupport.WebSupport.process_vote'},
'sphinxcontrib.websupport.search': { 'sphinxcontrib.websupport.search.BaseSearch': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch',
'sphinxcontrib.websupport.search.BaseSearch.add_document': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch.add_document',
'sphinxcontrib.websupport.search.BaseSearch.extract_context': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch.extract_context',
'sphinxcontrib.websupport.search.BaseSearch.feed': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch.feed',
'sphinxcontrib.websupport.search.BaseSearch.finish_indexing': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch.finish_indexing',
'sphinxcontrib.websupport.search.BaseSearch.handle_query': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch.handle_query',
'sphinxcontrib.websupport.search.BaseSearch.init_indexing': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch.init_indexing',
'sphinxcontrib.websupport.search.BaseSearch.query': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/searchadapters.html#sphinxcontrib.websupport.search.BaseSearch.query'},
'sphinxcontrib.websupport.storage': { 'sphinxcontrib.websupport.storage.StorageBackend': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend',
'sphinxcontrib.websupport.storage.StorageBackend.accept_comment': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.accept_comment',
'sphinxcontrib.websupport.storage.StorageBackend.add_comment': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.add_comment',
'sphinxcontrib.websupport.storage.StorageBackend.add_node': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.add_node',
'sphinxcontrib.websupport.storage.StorageBackend.delete_comment': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.delete_comment',
'sphinxcontrib.websupport.storage.StorageBackend.get_data': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.get_data',
'sphinxcontrib.websupport.storage.StorageBackend.post_build': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.post_build',
'sphinxcontrib.websupport.storage.StorageBackend.pre_build': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.pre_build',
'sphinxcontrib.websupport.storage.StorageBackend.process_vote': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.process_vote',
'sphinxcontrib.websupport.storage.StorageBackend.update_username': 'https://www.sphinx-doc.org/en/stable/usage/advanced/websupport/storagebackends.html#sphinxcontrib.websupport.storage.StorageBackend.update_username'},
'sphinx.ext.autodoc': { 'sphinx.ext.autodoc.between': 'https://www.sphinx-doc.org/en/stable/usage/extensions/autodoc.html#sphinx.ext.autodoc.between',
'sphinx.ext.autodoc.cut_lines': 'https://www.sphinx-doc.org/en/stable/usage/extensions/autodoc.html#sphinx.ext.autodoc.cut_lines'},
'sphinx.locale': { 'sphinx.locale._': 'https://www.sphinx-doc.org/en/stable/extdev/i18n.html#sphinx.locale._',
'sphinx.locale.__': 'https://www.sphinx-doc.org/en/stable/extdev/i18n.html#sphinx.locale.__',
'sphinx.locale.get_translation': 'https://www.sphinx-doc.org/en/stable/extdev/i18n.html#sphinx.locale.get_translation',
'sphinx.locale.init': 'https://www.sphinx-doc.org/en/stable/extdev/i18n.html#sphinx.locale.init',
'sphinx.locale.init_console': 'https://www.sphinx-doc.org/en/stable/extdev/i18n.html#sphinx.locale.init_console'}},
'settings': {'lib_path': 'nbdev_sphinx'}}
# models/model_mgn/__init__.py (repo: qychen13/ClusterAlignReID, commit 9dca1a39, MIT license, 15 stars)
from .mgn import MGN
# start.py (repo: FujiMakoto/discordpy-bot-template, commit 6a3eff59, MIT license, 1 star)
from discordbot.config import config
from discordbot.discordbot import bot
bot.run(config.get('Discord', 'token'))
# test_code/turtle_test.py (repo: AJ-Gonzalez/Axolotl_Studio, commit 8975b6bf, MIT license)
import turtle
import tkinter as tk
def do_stuff():
for color in ["red", "yellow", "green"]:
my_lovely_turtle.color(color)
my_lovely_turtle.right(120)
def press():
do_stuff()
if __name__ == "__main__":
screen = turtle.Screen()
screen.bgcolor("cyan")
canvas = screen.getcanvas()
button = tk.Button(canvas.master, text="Press me", command=press)
canvas.create_window(-200, -200, window=button)
my_lovely_turtle = turtle.Turtle(shape="turtle")
turtle.done()
import turtle
import tkinter as tk
def do_stuff():
my_lovely_turtle.setpos(100,200)
for color in ["red", "yellow", "green"]:
my_lovely_turtle.color(color)
my_lovely_turtle.right(120)
def press():
do_stuff()
if __name__ == "__main__":
screen = turtle.Screen()
screen.bgcolor("cyan")
canvas = screen.getcanvas()
button = tk.Button(canvas.master, text="Press me", command=press)
canvas.create_window(-200, -200, window=button)
my_lovely_turtle = turtle.Turtle(shape="turtle")
    turtle.done()
# FusionIIIT/applications/programme_curriculum/views.py (repo: Draco-D/Fusion, commit 065f5f99, bzip2-1.0.6 license, 1 star)
from django.shortcuts import render
from django.http import HttpResponse, HttpResponseRedirect
import itertools
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.contrib.auth.models import User
from .models import Programme, Discipline, Curriculum, Semester, Course, Batch, CourseSlot
from .forms import ProgrammeForm, DisciplineForm, CurriculumForm, SemesterForm, CourseForm, BatchForm, CourseSlotForm
# from applications.academic_information.models import Student
from applications.globals.models import (DepartmentInfo, Designation,
ExtraInfo, Faculty, HoldsDesignation)
# ------------module-functions---------------#
@login_required(login_url='/accounts/login')
def programme_curriculum(request):
"""
    This function is used to differentiate between acadadmin and all other users.
@param:
request - contains metadata about the requested page
@variables:
user_details - Gets the information about the logged in user.
des - Gets the designation about the looged in user.
"""
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
return HttpResponseRedirect('/programme_curriculum/admin_mainpage')
# ------------all-user-functions---------------#
def main_page(request):
"""
This function is used to display the main page of programme_curriculum
@param:
request - contains metadata about the requested page
"""
return render(request, 'programme_curriculum/mainpage.html')
def view_all_programmes(request):
"""
This function is used to display all the programmes offered by the institute.
@variables:
ug - UG programmes
pg - PG programmes
phd - PHD programmes
"""
ug = Programme.objects.filter(category='UG')
pg = Programme.objects.filter(category='PG')
phd = Programme.objects.filter(category='PHD')
return render(request, 'programme_curriculum/view_all_programmes.html', {'ug': ug, 'pg': pg, 'phd': phd})
def view_curriculums_of_a_programme(request, programme_id):
"""
This function is used to Display Curriculum of a specific Programmes.
@param:
programme_id - Id of a specific programme
@variables:
curriculums - Curriculums of a specific programmes
batches - List of batches for curriculums
        working_curriculums - Curriculums that are currently in effect
        past_curriculums - Curriculums that are obsolete
"""
program = Programme.objects.get(id=programme_id)
curriculums = Programme.get_curriculums_objects(program)
batches = []
for curriculum in curriculums:
batches.append([Curriculum.get_batches(curriculum)])
working_curriculums = curriculums.filter(working_curriculum=1)
past_curriculums = curriculums.filter(working_curriculum=0)
return render(request,'programme_curriculum/view_curriculums_of_a_programme.html', {'program': program, 'past_curriculums': past_curriculums, 'working_curriculums': working_curriculums})
def view_all_working_curriculums(request):
""" views all the working curriculums offered by the institute """
curriculums = Curriculum.objects.filter(working_curriculum=1)
return render(request,'programme_curriculum/view_all_working_curriculums.html',{'curriculums':curriculums})
def view_semesters_of_a_curriculum(request, curriculum_id):
"""
This function is used to Display all Semester of a Curriculum.
@param:
curriculum_id - Id of a specific curriculum
@variables:
        transpose_semester_slots - transpose of the semester_slots 2D list, for display in an HTML <table>.
semester_credits - Total Credits for each semester.
"""
curriculum = Curriculum.objects.get(id=curriculum_id)
semesters = Curriculum.get_semesters_objects(curriculum)
semester_slots = []
for sem in semesters:
a = list(Semester.get_courseslots_objects(sem))
semester_slots.append(a)
max_length = 0
for course_slots in semester_slots:
max_length = max(max_length, len(course_slots))
for course_slots in semester_slots:
course_slots += [""] * (max_length - len(course_slots))
semester_credits = []
for semester in semesters:
credits_sum = 0
for course_slot in semester.courseslots:
max_credit = 0
courses = course_slot.courses.all()
for course in courses:
max_credit = max(max_credit, course.credit)
credits_sum = credits_sum + max_credit
semester_credits.append(credits_sum)
transpose_semester_slots = list(zip(*semester_slots))
return render(request, 'programme_curriculum/view_semesters_of_a_curriculum.html', {'curriculum': curriculum, 'semesters': semesters, 'semester_slots': transpose_semester_slots, 'semester_credits': semester_credits})
def view_a_semester_of_a_curriculum(request, semester_id):
""" views a specfic semester of a specfic curriculum """
semester = Semester.objects.get(id=semester_id)
course_slots = Semester.get_courseslots_objects(semester)
return render(request, 'programme_curriculum/view_a_semester_of_a_curriculum.html', {'semester': semester, 'course_slots': course_slots})
def view_a_courseslot(request, courseslot_id):
""" view a course slot """
course_slot = CourseSlot.objects.get(id=courseslot_id)
return render(request, 'programme_curriculum/view_a_courseslot.html', {'course_slot': course_slot})
def view_all_courses(request):
""" views all the course slots of a specfic semester """
courses = Course.objects.all()
return render(request, 'programme_curriculum/view_all_courses.html', {'courses': courses})
def view_a_course(request, course_id):
""" views the details of a Course """
course = Course.objects.get(id=course_id)
return render(request, 'programme_curriculum/view_a_course.html', {'course': course})
def view_all_discplines(request):
""" views the details of a Course """
disciplines = Discipline.objects.all()
return render(request, 'programme_curriculum/view_all_disciplines.html', {'disciplines': disciplines})
def view_all_batches(request):
""" views the details of a Course """
batches = Batch.objects.all()
return render(request, 'programme_curriculum/view_all_batches.html', {'batches': batches})
# ------------Acad-Admin-functions---------------#
@login_required(login_url='/accounts/login')
def admin_main_page(request):
"""
This function is used to display the main page of programme_curriculum for acadadmin
@param:
request - contains metadata about the requested page
"""
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
return render(request, 'programme_curriculum/acad_admin/admin_mainpage.html')
@login_required(login_url='/accounts/login')
def admin_view_all_programmes(request):
"""
This function is used to display all the programmes offered by the institute.
@variables:
ug - UG programmes
pg - PG programmes
phd - PHD programmes
"""
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
ug = Programme.objects.filter(category='UG')
pg = Programme.objects.filter(category='PG')
phd = Programme.objects.filter(category='PHD')
return render(request, 'programme_curriculum/acad_admin/admin_view_all_programmes.html', {'ug': ug, 'pg': pg, "phd": phd})
@login_required(login_url='/accounts/login')
def admin_view_curriculums_of_a_programme(request, programme_id):
"""
This function is used to Display Curriculum of a specific Programmes.
@param:
programme_id - Id of a specific programme
@variables:
curriculums - Curriculums of a specific programmes
batches - List of batches for curriculums
        working_curriculums - Curriculums that are currently in effect
        past_curriculums - Curriculums that are obsolete
"""
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
program = Programme.objects.get(id=programme_id)
curriculums = Programme.get_curriculums_objects(program)
working_curriculums = curriculums.filter(working_curriculum=1)
past_curriculums = curriculums.filter(working_curriculum=0)
return render(request,'programme_curriculum/acad_admin/admin_view_curriculums_of_a_programme.html', {'program': program, 'past_curriculums': past_curriculums, 'working_curriculums': working_curriculums})
@login_required(login_url='/accounts/login')
def admin_view_all_working_curriculums(request):
""" views all the working curriculums offered by the institute """
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
curriculums = Curriculum.objects.filter(working_curriculum=1)
return render(request,'programme_curriculum/acad_admin/admin_view_all_working_curriculums.html',{'curriculums':curriculums})
@login_required(login_url='/accounts/login')
def admin_view_semesters_of_a_curriculum(request, curriculum_id):
""" gets all the semesters of a specfic curriculum """
curriculum = Curriculum.objects.get(id=curriculum_id)
semesters = Curriculum.get_semesters_objects(curriculum)
semester_slots = []
for sem in semesters:
a = list(Semester.get_courseslots_objects(sem))
semester_slots.append(a)
max_length = 0
for course_slots in semester_slots:
max_length = max(max_length, len(course_slots))
for course_slots in semester_slots:
course_slots += [""] * (max_length - len(course_slots))
semester_credits = []
for semester in semesters:
credits_sum = 0
for course_slot in semester.courseslots:
max_credit = 0
courses = course_slot.courses.all()
for course in courses:
max_credit = max(max_credit, course.credit)
credits_sum = credits_sum + max_credit
semester_credits.append(credits_sum)
transpose_semester_slots = list(zip(*semester_slots))
return render(request, 'programme_curriculum/acad_admin/admin_view_semesters_of_a_curriculum.html', {'curriculum': curriculum, 'semesters': semesters, 'semester_slots': transpose_semester_slots, 'semester_credits': semester_credits})
@login_required(login_url='/accounts/login')
def admin_view_a_semester_of_a_curriculum(request, semester_id):
"""
    This function is used to display a single semester of a curriculum.
    @param:
        semester_id - Id of a specific semester
    @variables:
        semester - the requested Semester object
        course_slots - course slots belonging to that semester
"""
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
semester = Semester.objects.get(id=semester_id)
course_slots = Semester.get_courseslots_objects(semester)
return render(request, 'programme_curriculum/acad_admin/admin_view_a_semester_of_a_curriculum.html', {'semester': semester, 'course_slots': course_slots})
@login_required(login_url='/accounts/login')
def admin_view_a_courseslot(request, courseslot_id):
""" view a course slot """
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
course_slot = CourseSlot.objects.get(id=courseslot_id)
return render(request, 'programme_curriculum/acad_admin/admin_view_a_courseslot.html', {'course_slot': course_slot})
@login_required(login_url='/accounts/login')
def admin_view_all_courses(request):
""" views all the course slots of a specfic semester """
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
courses = Course.objects.all()
return render(request, 'programme_curriculum/acad_admin/admin_view_all_courses.html', {'courses': courses})
@login_required(login_url='/accounts/login')
def admin_view_a_course(request, course_id):
""" views the details of a Course """
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
course = Course.objects.get(id=course_id)
return render(request, 'programme_curriculum/acad_admin/admin_view_a_course.html', {'course': course})
@login_required(login_url='/accounts/login')
def admin_view_all_discplines(request):
""" views the details of a Course """
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
disciplines = Discipline.objects.all()
return render(request, 'programme_curriculum/acad_admin/admin_view_all_disciplines.html', {'disciplines': disciplines})
@login_required(login_url='/accounts/login')
def admin_view_all_batches(request):
""" views the details of a Course """
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
batches = Batch.objects.all()
return render(request, 'programme_curriculum/acad_admin/admin_view_all_batches.html', {'batches': batches})
@login_required(login_url='/accounts/login')
def add_discipline_form(request):
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
form = DisciplineForm()
submitbutton= request.POST.get('Submit')
if submitbutton:
if request.method == 'POST':
form = DisciplineForm(request.POST)
if form.is_valid():
form.save()
messages.success(request, "Added Discipline successful")
return HttpResponseRedirect('/programme_curriculum/mainpage/')
return render(request, 'programme_curriculum/acad_admin/add_discipline_form.html',{'form':form})
@login_required(login_url='/accounts/login')
def edit_discipline_form(request, discipline_id):
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
discipline = Discipline.objects.get(id=discipline_id)
form = DisciplineForm(instance=discipline)
submitbutton= request.POST.get('Submit')
if submitbutton:
if request.method == 'POST':
form = DisciplineForm(request.POST, instance=discipline)
if form.is_valid():
form.save()
messages.success(request, "Updated "+ discipline.name +" successful")
return HttpResponseRedirect("/programme_curriculum/admin_disciplines/")
return render(request, 'programme_curriculum/acad_admin/add_discipline_form.html',{'form':form})
@login_required(login_url='/accounts/login')
def add_programme_form(request):
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
form = ProgrammeForm()
submitbutton= request.POST.get('Submit')
if submitbutton:
if request.method == 'POST':
form = ProgrammeForm(request.POST)
if form.is_valid():
form.save()
messages.success(request, "Added successful")
return HttpResponseRedirect('/programme_curriculum/admin_mainpage')
return render(request,'programme_curriculum/acad_admin/add_programme_form.html',{'form':form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def edit_programme_form(request, programme_id):
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
programme = Programme.objects.get(id=programme_id)
form = ProgrammeForm(instance=programme)
submitbutton= request.POST.get('Submit')
if submitbutton:
if request.method == 'POST':
form = ProgrammeForm(request.POST, instance=programme)
if form.is_valid():
form.save()
messages.success(request, "Updated "+ programme.name +" successful")
return HttpResponseRedirect("/programme_curriculum/admin_programmes/")
return render(request, 'programme_curriculum/acad_admin/add_programme_form.html',{'form':form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def add_curriculum_form(request):
"""
This function is used to add Curriculum and Semester into Curriculum and Semester table.
@variables:
no_of_semester - Get number of Semesters from form.
NewSemester - For initializing a new semester.
"""
user_details = ExtraInfo.objects.get(user = request.user)
des = HoldsDesignation.objects.all().filter(user = request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
return HttpResponseRedirect('/programme_curriculum/mainpage/')
elif str(request.user) == "acadadmin" :
pass
form = CurriculumForm()
submitbutton= request.POST.get('Submit')
if submitbutton:
if request.method == 'POST':
form = CurriculumForm(request.POST)
if form.is_valid():
form.save()
no_of_semester = int(form.cleaned_data['no_of_semester'])
curriculum = Curriculum.objects.all().last()
for semester_no in range(1, no_of_semester+1):
NewSemester = Semester(curriculum=curriculum,semester_no=semester_no)
NewSemester.save()
messages.success(request, "Added successful")
return HttpResponseRedirect('/programme_curriculum/admin_mainpage')
return render(request, 'programme_curriculum/acad_admin/add_curriculum_form.html',{'form':form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def edit_curriculum_form(request, curriculum_id):
    """
    This function is used to edit a Curriculum and its Semesters in the Curriculum and Semester tables.

    @variables:
        no_of_semester - number of semesters read from the form.
        OldSemester - dropped semester to be removed.
        NewSemester - newly initialized semester.
    """
    user_details = ExtraInfo.objects.get(user=request.user)
    des = HoldsDesignation.objects.all().filter(user=request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
        return HttpResponseRedirect('/programme_curriculum/mainpage/')
    elif str(request.user) == "acadadmin":
        pass
    curriculum = Curriculum.objects.get(id=curriculum_id)
    form = CurriculumForm(instance=curriculum)
    submitbutton = request.POST.get('Submit')
    if submitbutton:
        if request.method == 'POST':
            form = CurriculumForm(request.POST, instance=curriculum)
            if form.is_valid():
                form.save()
                no_of_semester = int(form.cleaned_data['no_of_semester'])
                old_no_of_semester = Semester.objects.filter(curriculum=curriculum).count()
                if old_no_of_semester != no_of_semester:
                    if old_no_of_semester > no_of_semester:
                        # Remove the semesters that were dropped in the edit.
                        for semester_no in range(no_of_semester + 1, old_no_of_semester + 1):
                            try:
                                OldSemester = Semester.objects.filter(curriculum=curriculum).filter(semester_no=semester_no)
                                OldSemester.delete()
                            except Exception:
                                print("Failed to remove old semester")
                    elif old_no_of_semester < no_of_semester:
                        # Create only the newly added semesters (starting after
                        # the highest existing semester_no avoids duplicates).
                        for semester_no in range(old_no_of_semester + 1, no_of_semester + 1):
                            try:
                                NewSemester = Semester(curriculum=curriculum, semester_no=semester_no)
                                NewSemester.save()
                            except Exception:
                                print("Failed to add new semester")
                print("Old No of Semesters - " + str(old_no_of_semester))
                print("Entered No of Semesters - " + str(no_of_semester))
                print("Current No of Semesters (after operation) - " + str(Semester.objects.filter(curriculum=curriculum).count()))
                messages.success(request, "Updated " + curriculum.name + " successfully")
                return HttpResponseRedirect('/programme_curriculum/admin_mainpage')
    return render(request, 'programme_curriculum/acad_admin/add_curriculum_form.html', {'form': form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def add_course_form(request):
    user_details = ExtraInfo.objects.get(user=request.user)
    des = HoldsDesignation.objects.all().filter(user=request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
        return HttpResponseRedirect('/programme_curriculum/mainpage/')
    elif str(request.user) == "acadadmin":
        pass
    form = CourseForm()
    submitbutton = request.POST.get('Submit')
    if submitbutton:
        if request.method == 'POST':
            form = CourseForm(request.POST)
            if form.is_valid():
                form.save()
                messages.success(request, "Added successfully")
                return HttpResponseRedirect("/programme_curriculum/admin_course/")
    return render(request, 'programme_curriculum/acad_admin/add_course_form.html', {'form': form})
@login_required(login_url='/accounts/login')
def update_course_form(request, course_id):
    user_details = ExtraInfo.objects.get(user=request.user)
    des = HoldsDesignation.objects.all().filter(user=request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
        return HttpResponseRedirect('/programme_curriculum/mainpage/')
    elif str(request.user) == "acadadmin":
        pass
    course = Course.objects.get(id=course_id)
    form = CourseForm(instance=course)
    submitbutton = request.POST.get('Submit')
    if submitbutton:
        if request.method == 'POST':
            form = CourseForm(request.POST, instance=course)
            if form.is_valid():
                form.save()
                messages.success(request, "Updated " + course.name + " successfully")
                return HttpResponseRedirect("/programme_curriculum/admin_course/" + str(course_id) + "/")
    return render(request, 'programme_curriculum/acad_admin/add_course_form.html', {'course': course, 'form': form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def add_courseslot_form(request):
    user_details = ExtraInfo.objects.get(user=request.user)
    des = HoldsDesignation.objects.all().filter(user=request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
        return HttpResponseRedirect('/programme_curriculum/mainpage/')
    elif str(request.user) == "acadadmin":
        pass
    form = CourseSlotForm()
    submitbutton = request.POST.get('Submit')
    if submitbutton:
        if request.method == 'POST':
            form = CourseSlotForm(request.POST)
            if form.is_valid():
                form.save()
                messages.success(request, "Added Course Slot successfully")
                return HttpResponseRedirect('/programme_curriculum/admin_mainpage/')
    return render(request, 'programme_curriculum/acad_admin/add_courseslot_form.html', {'form': form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def edit_courseslot_form(request, courseslot_id):
    user_details = ExtraInfo.objects.get(user=request.user)
    des = HoldsDesignation.objects.all().filter(user=request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
        return HttpResponseRedirect('/programme_curriculum/mainpage/')
    elif str(request.user) == "acadadmin":
        pass
    courseslot = CourseSlot.objects.get(id=courseslot_id)
    form = CourseSlotForm(instance=courseslot)
    submitbutton = request.POST.get('Submit')
    if submitbutton:
        if request.method == 'POST':
            form = CourseSlotForm(request.POST, instance=courseslot)
            if form.is_valid():
                form.save()
                messages.success(request, "Updated " + courseslot.name + " successfully")
                return HttpResponseRedirect("/programme_curriculum/admin_courseslot/" + str(courseslot.id) + "/")
    return render(request, 'programme_curriculum/acad_admin/add_courseslot_form.html', {'courseslot': courseslot, 'form': form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def add_batch_form(request):
    user_details = ExtraInfo.objects.get(user=request.user)
    des = HoldsDesignation.objects.all().filter(user=request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
        return HttpResponseRedirect('/programme_curriculum/mainpage/')
    elif str(request.user) == "acadadmin":
        pass
    form = BatchForm()
    submitbutton = request.POST.get('Submit')
    if submitbutton:
        if request.method == 'POST':
            form = BatchForm(request.POST)
            if form.is_valid():
                form.save()
                messages.success(request, "Added Batch successfully")
                return HttpResponseRedirect('/programme_curriculum/admin_batches/')
    return render(request, 'programme_curriculum/acad_admin/add_batch_form.html', {'form': form, 'submitbutton': submitbutton})
@login_required(login_url='/accounts/login')
def edit_batch_form(request, batch_id):
    user_details = ExtraInfo.objects.get(user=request.user)
    des = HoldsDesignation.objects.all().filter(user=request.user).first()
    if str(des.designation) in ("student", "Associate Professor", "Professor", "Assistant Professor"):
        return HttpResponseRedirect('/programme_curriculum/mainpage/')
    elif str(request.user) == "acadadmin":
        pass
    batch = Batch.objects.get(id=batch_id)
    form = BatchForm(instance=batch)
    submitbutton = request.POST.get('Submit')
    if submitbutton:
        if request.method == 'POST':
            form = BatchForm(request.POST, instance=batch)
            if form.is_valid():
                form.save()
                messages.success(request, "Updated " + batch.name + " successfully")
                return HttpResponseRedirect("/programme_curriculum/admin_batches/")
    return render(request, 'programme_curriculum/acad_admin/add_batch_form.html', {'batch': batch, 'form': form, 'submitbutton': submitbutton})
# === simbench/test/__init__.py — repo: navin3011/Seminar-Energy-economy (BSD-3-Clause) ===

from simbench.test.run_tests import *
# === functions/aou/tests/unit_tests.py — repo: broadinstitute/wfl (BSD-3-Clause) ===

import mock
from google.cloud import storage, exceptions
from aou import main
bucket_name = "test_bucket"
file_name = "dev/chip_name/chipwell_barcode/analysis_version/arrays/metadata/file.txt"
event_data = {'bucket': bucket_name, 'name': file_name}
def test_get_manifest_path_from_uploaded_file_with_environment_prefix():
    uploaded_file = "dev/chip_name/chipwell_barcode/analysis_version/arrays/metadata/file.txt"
    manifest_file = "dev/chip_name/chipwell_barcode/analysis_version/ptc.json"
    result = main.get_manifest_path(uploaded_file)
    assert result == manifest_file


def test_get_manifest_path_from_uploaded_file():
    uploaded_file = "chip_name/chipwell_barcode/analysis_version/arrays/metadata/file.txt"
    manifest_file = "chip_name/chipwell_barcode/analysis_version/ptc.json"
    result = main.get_manifest_path(uploaded_file)
    assert result == manifest_file


@mock.patch("aou.main.update_workload", return_value=["workflow_uuid"])
@mock.patch("aou.main.get_or_create_workload", return_value="workload_uuid")
@mock.patch.object(storage.Blob, 'download_as_string')
@mock.patch("aou.main.get_auth_headers")
def test_manifest_file_not_uploaded(mock_headers, mock_download, mock_get_workload, mock_update_workload):
    client = mock.create_autospec(storage.Client())
    mock_download.side_effect = exceptions.NotFound('Error')
    main.submit_aou_workload(event_data, None)
    assert not mock_get_workload.called
    assert not mock_update_workload.called


@mock.patch("aou.main.update_workload", return_value=["workflow_uuid"])
@mock.patch("aou.main.get_or_create_workload", return_value="workload_uuid")
@mock.patch.object(storage.Bucket, 'get_blob')
@mock.patch.object(storage.Blob, 'download_as_string')
@mock.patch("aou.main.get_auth_headers")
def test_input_file_not_uploaded(mock_headers, mock_download, mock_get_blob, mock_get_workload, mock_update_workload):
    client = mock.create_autospec(storage.Client())
    mock_download.return_value = '{"notifications": [{"file": "gs://test_bucket/file.txt", "environment": "dev"}]}'
    mock_get_blob.return_value = None
    main.submit_aou_workload(event_data, None)
    assert not mock_get_workload.called
    assert not mock_update_workload.called


@mock.patch("aou.main.update_workload", return_value=["workflow_uuid"])
@mock.patch("aou.main.get_or_create_workload", return_value="workload_uuid")
@mock.patch.object(storage.Bucket, 'get_blob')
@mock.patch.object(storage.Blob, 'download_as_string')
@mock.patch("aou.main.get_auth_headers")
def test_wfl_called_when_sample_upload_completes(mock_headers, mock_download, mock_get_blob, mock_get_workload, mock_update_workload):
    client = mock.create_autospec(storage.Client())
    mock_download.return_value = '{"executor": "http://cromwell.broadinstitute.org", ' \
                                 '"sample_alias": "test_sample", ' \
                                 '"notifications": [{"file": "gs://test_bucket/file.txt", "environment": "dev"}]}'
    mock_get_blob.return_value = "blob"
    main.submit_aou_workload(event_data, None)
    assert mock_get_workload.called
    assert mock_update_workload.called
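# The stacked @mock.patch decorators above hand mocks to the test bottom-up:
# the decorator closest to the function supplies the first mock argument.
# A self-contained illustration with the standard-library unittest.mock
# (the fetch/parse names are invented for this demo, not part of aou):

```python
from unittest import mock


def fetch():
    return "real fetch"


def parse():
    return "real parse"


@mock.patch(f"{__name__}.parse", return_value="fake parse")
@mock.patch(f"{__name__}.fetch", return_value="fake fetch")
def demo(mock_fetch, mock_parse):
    # The innermost decorator (fetch) maps to the first parameter.
    return fetch(), parse()


print(demo())  # ('fake fetch', 'fake parse')
```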
# === bunsen/py/typing.py — repo: castorini/bunsen (MIT) ===

from typing import Callable
__all__ = ['check_input_types']
def check_input_types(fn: Callable):
    pass
# === model_and_testcode/model_4.py — repo: YunqiuXu/DecentralizedAlgorithmTradingPlatform (MIT) ===

class Model():
    def __init__(self):
        self.result = 9500

    def show_result(self):
        return self.result
# === seg-part/dataloaders/datasets/chaos.py — repo: QIU023/LifeLong-Segmentation (MIT) ===

from __future__ import print_function, division
import os
import random
from ipdb import set_trace
from PIL import Image
import numpy as np
from torch.utils.data import Dataset
from mypath import Path
from dataloaders.utils import encode_segmap
from torchvision import transforms
import dataloaders.custom_transforms3 as tr
import pydicom
import imageio
def get_file_name(current_dir):
    return [f.split('.')[0] for f in os.listdir(current_dir) if f.endswith(".jpg")]


def get_dir0(base_dir):
    return [f for f in os.listdir(base_dir) if os.path.isdir(os.path.join(base_dir, f)) and f.startswith("Patient")]


def get_dir(base_dir):
    return [f for f in os.listdir(base_dir) if os.path.isdir(os.path.join(base_dir, f))]
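# The directory-walking helpers above can be exercised on a throwaway tree;
# a small self-contained check (the directory and file names are synthetic):

```python
import os
import tempfile


def get_file_name(current_dir):
    # Base names (extension stripped at the first dot) of all .jpg files.
    return [f.split('.')[0] for f in os.listdir(current_dir) if f.endswith(".jpg")]


def get_dir0(base_dir):
    # Immediate subdirectories whose name starts with "Patient".
    return [f for f in os.listdir(base_dir)
            if os.path.isdir(os.path.join(base_dir, f)) and f.startswith("Patient")]


with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "Patient-1"))
    os.makedirs(os.path.join(root, "notes"))
    open(os.path.join(root, "Patient-1", "i0000,0000b.jpg"), "w").close()
    patients = get_dir0(root)                               # ['Patient-1']
    names = get_file_name(os.path.join(root, "Patient-1"))  # ['i0000,0000b']

print(patients, names)
```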
class Chaos1Test(Dataset):
    NUM_CLASSES = 2

    def __init__(self, args):
        super().__init__()
        self._base_dir = args.tdpath
        self.args = args
        self.images = []
        self.file_names = []
        base_dir = '/data/weishizheng/QiuYiqiao/CHAOS_Test_Sets/Test_Sets/CT'
        self.pseudo_labels = []
        dir1s = get_dir0(base_dir)
        for dir1 in dir1s:
            dir1 = os.path.join(base_dir, dir1)
            assert os.path.isdir(dir1)
            dir2s = get_dir(dir1)
            for dir2 in dir2s:
                dir2 = os.path.join(base_dir, dir1, dir2)
                assert os.path.isdir(dir2)
                dir3s = get_dir(dir2)
                for dir3 in dir3s:
                    dir3 = os.path.join(base_dir, dir1, dir2, dir3)
                    assert os.path.isdir(dir3)
                    file_name = get_file_name(dir3)
                    for name in file_name:
                        assert os.path.isfile(os.path.join(base_dir, dir1, dir2, dir3, name + ".jpg"))
                        self.images.append(os.path.join(base_dir, dir1, dir2, dir3, name + ".jpg"))
                        self.file_names.append(name)
                        self.pseudo_labels.append(os.path.join('/data/weishizheng/QiuYiqiao/Segmentation-codes/result_chaos1_addval/', name + "_segmentation.jpg"))
        print("num of test images:{}".format(len(self.images)))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        image = Image.open(self.images[index])
        # print(image.size)
        return self.transform_test(image), self.file_names[index], image.size

    def transform_test(self, images):
        # These transforms come from custom_transforms3 (imported as tr);
        # the bare names were undefined in the original.
        composed_transforms = transforms.Compose([
            tr.FixedResize(size=self.args.crop_size),
            tr.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            tr.ToTensor()
        ])
        return composed_transforms(images)
class Chaos2Test(Dataset):
    '''
    Chaos Dataset
    '''
    NUM_CLASSES = 5

    def __init__(self,
                 args,
                 base_dir=Path.db_root_dir('chaos2'),
                 split='test',
                 ):
        super().__init__()
        self.args = args
        self.split = split
        self.images = []
        self.confidence_map = []
        self.pseudo_labels = []
        self.file_names = []
        # The hard-coded test-set location overrides the default base_dir;
        # the original walked an undefined local `base_dir` here.
        base_dir = '/data/weishizheng/QiuYiqiao/CHAOS_Test_Sets/Test_Sets/MR'
        self.base_dir = base_dir
        dir1s = get_dir0(base_dir)
        for dir1 in dir1s:
            dir1 = os.path.join(base_dir, dir1)
            assert os.path.isdir(dir1)
            dir2s = get_dir(dir1)
            for dir2 in dir2s:
                dir2 = os.path.join(base_dir, dir1, dir2)
                assert os.path.isdir(dir2)
                dir3s = get_dir(dir2)
                for dir3 in dir3s:
                    dir3 = os.path.join(base_dir, dir1, dir2, dir3)
                    assert os.path.isdir(dir3)
                    file_name = get_file_name(dir3)
                    for name in file_name:
                        assert os.path.isfile(os.path.join(base_dir, dir1, dir2, dir3, name + ".jpg"))
                        self.images.append(os.path.join(base_dir, dir1, dir2, dir3, name + ".jpg"))
                        self.file_names.append(name)
                        self.pseudo_labels.append(os.path.join('/data/weishizheng/QiuYiqiao/Segmentation-codes/result_chaos2_test_label/', name + "_segmentation.jpg"))
                        self.confidence_map.append(os.path.join('/data/weishizheng/QiuYiqiao/Segmentation-codes/result_chaos2_test_confidence/', name + "_segmentation.jpg"))
        # assert (len(self.images) == len(self.labels))
        print('Number of images in {}: {:d}'.format(split, len(self.images)))

    def __len__(self):
        return len(self.images)

    def transform_test(self, image):
        composed_transforms = transforms.Compose([
            tr.FixedResize(size=self.args.crop_size),
            tr.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            tr.ToTensor()])
        return composed_transforms(image)

    def __getitem__(self, item):
        _img = Image.open(self.images[item])
        return self.transform_test(_img), self.file_names[item], _img.size

    def __str__(self):
        return 'Chaos2_2019(split=' + str(self.split) + ')'
class ChaosSegmentation1(Dataset):
    '''
    Chaos Dataset
    '''
    NUM_CLASSES = 2

    def __init__(self,
                 args,
                 base_dir=Path.db_root_dir('chaos1'),
                 split='train',
                 ):
        super().__init__()
        self._base_dir = base_dir
        self.args = args
        self.split = split
        self.train_percent = self.args.train_percent
        self.images = []
        self.labels = []
        self.id_dir = [1, 2, 5, 6, 8,
                       10, 14, 16, 18, 19]
        self.ids_dir2 = [1, 5, 6, 7, 8, 9,
                         10, 11, 12, 2]
        for i in self.id_dir:
            _image_dir = os.path.join(self._base_dir, "Patient-CHAOS CT_SET_" + str(i), "Study_" + str(i) + "_CT[]", str(i))
            _label_dir = os.path.join(self._base_dir, str(i), "Ground")
            _num_image = len([lists for lists in os.listdir(_image_dir)
                              if os.path.isfile(os.path.join(_image_dir, lists))])
            for j in range(_num_image):
                if j < 10:
                    _image = os.path.join(_image_dir, "i000" + str(j) + ",0000b.jpg")
                    _label = os.path.join(_label_dir, "liver_GT_00" + str(j) + ".png")
                elif j < 100:
                    _image = os.path.join(_image_dir, "i00" + str(j) + ",0000b.jpg")
                    _label = os.path.join(_label_dir, "liver_GT_0" + str(j) + ".png")
                else:
                    _image = os.path.join(_image_dir, "i0" + str(j) + ",0000b.jpg")
                    _label = os.path.join(_label_dir, "liver_GT_" + str(j) + ".png")
                assert os.path.isfile(_image)
                assert os.path.isfile(_label)
                self.images.append(_image)
                self.labels.append(_label)
        for i in range(21, 31):
            _image_dir = os.path.join(self._base_dir, "Patient-CHAOS CT_SET_" + str(i), "Study_" + str(i) + "_CT[]", str(i))
            _label_dir = os.path.join(self._base_dir, str(i), "Ground")
            _num_image = len([lists for lists in os.listdir(_image_dir)
                              if os.path.isfile(os.path.join(_image_dir, lists))])
            for j in range(_num_image):
                _file = "IMG-00"
                if self.ids_dir2[i - 21] < 10:
                    _file += "0"
                _file += str(self.ids_dir2[i - 21])
                _file += "-00"
                if j < 9:
                    _file += "00"
                elif j < 99:
                    _file += "0"
                _file += str(j + 1) + ".jpg"
                _image = os.path.join(_image_dir, _file)
                if j < 10:
                    _label = os.path.join(_label_dir, "liver_GT_00" + str(j) + ".png")
                elif j < 100:
                    _label = os.path.join(_label_dir, "liver_GT_0" + str(j) + ".png")
                else:
                    _label = os.path.join(_label_dir, "liver_GT_" + str(j) + ".png")
                assert os.path.isfile(_image)
                assert os.path.isfile(_label)
                self.images.append(_image)
                self.labels.append(_label)
        self.train_num = int(len(self.images) * self.train_percent)
        self.val_num = int(len(self.images) * (1 - self.train_percent))
        # Shuffle images and labels together; shuffling the temporary
        # zip list alone (as the original did) leaves the lists untouched.
        paired = list(zip(self.images, self.labels))
        random.shuffle(paired)
        self.images, self.labels = map(list, zip(*paired))
        self.train_images = self.images[0:self.train_num]
        self.train_labels = self.labels[0:self.train_num]
        self.val_images = self.images[self.train_num + 1:]
        self.val_labels = self.labels[self.train_num + 1:]
        pseudo_set = Chaos1Test(self.args)
        self.test_images = pseudo_set.images
        self.pseudo_labels = pseudo_set.pseudo_labels
        print('Number of images in {}:'.format(split))
        if self.split == 'train':
            print('{:d}'.format(self.train_num + len(self.pseudo_labels)))
        else:
            print('{:d}'.format(self.val_num))

    def __len__(self):
        if self.split == 'train':
            return self.train_num + len(self.pseudo_labels)
        elif self.split == 'val':
            return self.val_num

    def _make_img_gt_point_pair(self, item):
        _pseudo = False
        if self.split == 'train':
            if item >= self.train_num:
                _pseudo = True
                _img = Image.open(self.test_images[item - self.train_num]).convert("RGB")
                _target = Image.open(self.pseudo_labels[item - self.train_num])
            else:
                _img = Image.open(self.train_images[item]).convert("RGB")
                _target = Image.open(self.train_labels[item])
        else:
            _img = Image.open(self.val_images[item]).convert("RGB")
            _target = Image.open(self.val_labels[item])
        return _img, _target, _pseudo

    def transform_tr(self, sample):
        composed_transforms = transforms.Compose([
            tr.RandomHorizontalFlip(),
            tr.RandomScaleCrop(base_size=self.args.base_size, crop_size=self.args.crop_size),
            tr.RandomGaussianBlur(),
            tr.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            tr.ToTensor()])
        return composed_transforms(sample)

    def transform_val(self, sample):
        composed_transforms = transforms.Compose([
            tr.FixScaleCrop(crop_size=self.args.crop_size),
            tr.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            tr.ToTensor()])
        return composed_transforms(sample)

    def __getitem__(self, item):
        _img, _target, _pseudo = self._make_img_gt_point_pair(item)
        sample = {'image': _img, 'label': _target, 'pseudo': _pseudo}
        if self.split == 'train':
            return self.transform_tr(sample)
        elif self.split == 'val':
            return self.transform_val(sample)

    def __str__(self):
        return 'Chaos2019(split=' + str(self.split) + ')'
class ChaosSegmentation2(Dataset):
    '''
    Chaos Dataset
    '''
    NUM_CLASSES = 5

    def __init__(self,
                 args,
                 base_dir=Path.db_root_dir('chaos2'),
                 split='train',
                 ):
        super().__init__()
        self._base_dir = base_dir
        self.args = args
        self.split = split
        self.train_percent = self.args.train_percent
        self.images = []
        self.labels = []
        self.id_dir = [1, 2, 3, 5, 8,
                       10, 13, 15, 19, 20,
                       21, 22, 31, 32, 33,
                       34, 36, 37, 38, 39]
        self.ids_dir2 = [4, 10, 4, 16, 34,
                         46, 64, 75, 22, 27,
                         4, 5, 29, 31, 37,
                         5, 13, 17, 23, 27]  # T1DUAL label
        self.ids_dir3 = [2, 7, 2, 14, 31,
                         43, 61, 73, 24, 29,
                         1, 8, 26, 30, 34,
                         8, 16, 20, 22, 26]  # T2SPIR label
        p = 0
        for i in self.id_dir:
            _image_dir = os.path.join(self._base_dir, "Patient-CHAOS MR_SET_" + str(i), "Study_" + str(i) + "_MR[]", str(i) + "1")
            _label_dir = os.path.join(self._base_dir, str(i), "T2SPIR", "Ground")
            _num_image = len([lists for lists in os.listdir(_image_dir)
                              if os.path.isfile(os.path.join(_image_dir, lists))])
            for j in range(_num_image):
                _file = "IMG-"
                if self.ids_dir3[p] < 10:
                    _file += "000"
                else:
                    _file += "00"
                _file += str(self.ids_dir3[p])
                _file += "-00"
                if j < 9:
                    _file += "00"
                elif j < 99:
                    _file += "0"
                _file += str(j + 1)
                _image = os.path.join(_image_dir, _file + ".jpg")
                _label = os.path.join(_label_dir, _file + ".png")
                assert os.path.isfile(_image)
                assert os.path.isfile(_label)
                self.images.append(_image)
                self.labels.append(_label)
            p += 1
        p = 0
        for i in self.id_dir:
            _image_dir = os.path.join(self._base_dir, "Patient-CHAOS MR_SET_" + str(i), "Study_" + str(i) + "_MR[]", str(i) + "2")
            _label_dir = os.path.join(self._base_dir, str(i), "T1DUAL", "Ground")
            _num_label = len([lists for lists in os.listdir(_label_dir)
                              if os.path.isfile(os.path.join(_label_dir, lists))])
            for j in range(_num_label):
                _file = "IMG-"
                if self.ids_dir2[p] < 10:
                    _file += "000"
                else:
                    _file += "00"
                _file += str(self.ids_dir2[p])
                _file += "-00"
                _file2 = _file
                if 2 * j < 8:
                    _file += "00"
                elif 2 * j < 98:
                    _file += "0"
                _file += str(2 * j + 2)
                _image = os.path.join(_image_dir, _file + ".jpg")
                _label = os.path.join(_label_dir, _file + ".png")
                assert os.path.isfile(_image)
                assert os.path.isfile(_label)
                self.images.append(_image)
                self.labels.append(_label)
            p += 1
        self.num_of_train = int(len(self.images) * self.train_percent)
        self.num_of_val = int(len(self.images) * (1 - self.train_percent))
        # Shuffle images and labels together; shuffling the temporary
        # zip list alone (as the original did) leaves the lists untouched.
        paired = list(zip(self.images, self.labels))
        random.shuffle(paired)
        self.images, self.labels = map(list, zip(*paired))
        self.train_images = self.images[0:self.num_of_train]
        self.train_labels = self.labels[0:self.num_of_train]
        self.val_images = self.images[self.num_of_train + 1:]
        self.val_labels = self.labels[self.num_of_train + 1:]
        pseudo_set = Chaos2Test(self.args)
        self.test_images = pseudo_set.images
        self.pseudo_labels = pseudo_set.pseudo_labels
        # The confidence maps also come from the test set; without this
        # the pseudo branch of _make_img_gt_point_pair would fail.
        self.confidence_map = pseudo_set.confidence_map
        print('Number of images in {}:'.format(split))
        if self.split == 'train':
            print('{:d}'.format(self.num_of_train))
        else:
            print('{:d}'.format(self.num_of_val))

    def __len__(self):
        if self.split == 'train':
            return self.num_of_train + len(self.test_images)
        elif self.split == 'val':
            return self.num_of_val

    def _make_img_gt_point_pair(self, item):
        # set_trace()
        _pseudo = False
        _confidence = None
        if self.split == 'train':
            if item >= self.num_of_train:
                _pseudo = True
                _img = Image.open(self.test_images[item - self.num_of_train]).convert("RGB")
                _confidence = Image.open(self.confidence_map[item - self.num_of_train])
                _target = Image.open(self.pseudo_labels[item - self.num_of_train])
            else:
                _img = Image.open(self.train_images[item]).convert("RGB")
                _target = Image.open(self.train_labels[item])
        else:
            _img = Image.open(self.val_images[item]).convert("RGB")
            _target = Image.open(self.val_labels[item])
        _target = np.asarray(_target)
        _label = Image.fromarray(encode_segmap(_target, 'chaos2').astype('uint8'))
        return _img, _label, _confidence, _pseudo

    def transform_tr(self, sample):
        # The original was missing the comma before tr.Labsize(), which is
        # a syntax error inside the Compose list.
        composed_transforms = transforms.Compose([
            tr.RandomHorizontalFlip(),
            tr.RandomScaleCrop(base_size=self.args.base_size, crop_size=self.args.crop_size),
            tr.RandomGaussianBlur(),
            tr.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            tr.ToTensor(),
            tr.Labsize()])
        return composed_transforms(sample)

    def transform_val(self, sample):
        composed_transforms = transforms.Compose([
            tr.FixScaleCrop(crop_size=self.args.crop_size),
            tr.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            tr.ToTensor()])
        return composed_transforms(sample)

    def __getitem__(self, item):
        _img, _target, _confidence, _pseudo = self._make_img_gt_point_pair(item)
        sample = {'image': _img, 'label': _target, 'confidence': _confidence, 'pseudo': _pseudo}
        if self.split == 'train':
            return self.transform_tr(sample)
        elif self.split == 'val':
            return self.transform_val(sample)

    def __str__(self):
        return 'Chaos2_2019(split=' + str(self.split) + ')'
def main():
    # The original called an undefined Chaos2Segmentation and defined
    # __main__() while invoking main(); names are reconciled here.
    dataset1 = ChaosSegmentation2('train')


if __name__ == '__main__':
    main()
# === code/deploy/score.py — repo: lyh01/hlazmlworkspace1 (MIT) ===

import logging
def init():
    logging.info("pseudo init")


def run(data):
    logging.info("pseudo score")
    # The original had a mismatched closing paren: `return { ... ])`.
    return {"predict": ["None"]}
# === tests/test_challenge25_pytest.py — repo: stevenliu216/challenges (MIT) ===

from challenges.challenge25 import max_profit
circle = unitCircle()
quiz = circle.generateQuiz(10, "degradtocoor")
# quiz = circle.generateQuiz(10, "degtorad")
print(quiz[0]) # quiz
print("\n\n\n")
print(quiz[1]) # key | 23.111111 | 46 | 0.716346 | 28 | 208 | 5.321429 | 0.5 | 0.134228 | 0.295302 | 0.322148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032609 | 0.115385 | 208 | 9 | 47 | 23.111111 | 0.777174 | 0.245192 | 0 | 0 | 1 | 0 | 0.116883 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
14421f94d5313f9f9de384160a37f97b21d72667 | 455 | py | Python | tests/test_challenge25_pytest.py | stevenliu216/challenges | a8991fc3cc2309f8ef0ba6d189be001377153583 | [
"MIT"
] | null | null | null | tests/test_challenge25_pytest.py | stevenliu216/challenges | a8991fc3cc2309f8ef0ba6d189be001377153583 | [
"MIT"
] | 14 | 2018-09-18T02:00:28.000Z | 2019-07-08T15:59:56.000Z | tests/test_challenge25_pytest.py | stevenliu216/challenges | a8991fc3cc2309f8ef0ba6d189be001377153583 | [
"MIT"
] | 7 | 2018-09-17T14:52:24.000Z | 2020-10-02T21:55:20.000Z | from challenges.challenge25 import max_profit
def test_max_profit():
assert max_profit([7, 1, 5, 3, 6, 4]) == 5
assert max_profit([2, 1, 4, 5, 2, 9, 7]) == 8
def test_no_profit():
assert max_profit([7, 6, 4, 3, 1]) == 0
def test_buy_at_end():
assert max_profit([2, 1, 2, 1, 0, 1, 2]) == 2
def test_min_at_right():
assert max_profit([2, 4, 1]) == 2
def test_profit_not_at_min():
assert max_profit([3, 2, 6, 5, 0, 3]) == 4
| 19.782609 | 49 | 0.604396 | 88 | 455 | 2.886364 | 0.295455 | 0.283465 | 0.354331 | 0.188976 | 0.307087 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11831 | 0.21978 | 455 | 22 | 50 | 20.681818 | 0.597183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.416667 | true | 0 | 0.083333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
147e9e9c78980dc80e21c003f2b9c430aaa38491 | 159 | py | Python | pctiler/tests/conftest.py | gadomski/planetary-computer-apis | 53a04c0b24b9ccc06812bfb8ac2961bbfe58a108 | [
"MIT"
] | null | null | null | pctiler/tests/conftest.py | gadomski/planetary-computer-apis | 53a04c0b24b9ccc06812bfb8ac2961bbfe58a108 | [
"MIT"
] | null | null | null | pctiler/tests/conftest.py | gadomski/planetary-computer-apis | 53a04c0b24b9ccc06812bfb8ac2961bbfe58a108 | [
"MIT"
] | null | null | null | import pytest
from fastapi.testclient import TestClient
from pctiler.main import app
@pytest.fixture
def client() -> TestClient:
return TestClient(app)
| 15.9 | 41 | 0.779874 | 20 | 159 | 6.2 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 159 | 9 | 42 | 17.666667 | 0.918519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.5 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
1ae8880be88869ae9ad57c3ba94ba4e4fbffe062 | 71 | py | Python | PyBASC/tests/test_smoke.py | AkiNikolaidis/BASC | 07ac80c1a22df84db8bdd30b09b881cecc8caf1d | [
"MIT"
] | 24 | 2017-09-22T07:47:27.000Z | 2021-09-10T07:04:59.000Z | PyBASC/tests/test_smoke.py | AkiNikolaidis/BASC | 07ac80c1a22df84db8bdd30b09b881cecc8caf1d | [
"MIT"
] | 19 | 2017-10-24T17:52:32.000Z | 2019-10-02T17:51:04.000Z | PyBASC/tests/test_smoke.py | AkiNikolaidis/BASC | 07ac80c1a22df84db8bdd30b09b881cecc8caf1d | [
"MIT"
] | 4 | 2017-11-17T00:47:32.000Z | 2020-11-02T17:56:14.000Z |
def test_smoke():
assert "Testing TravisCI" == "Testing TravisCI" | 17.75 | 51 | 0.690141 | 8 | 71 | 6 | 0.75 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183099 | 71 | 4 | 51 | 17.75 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0.450704 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0d3d5d0a550bc839c6836e0ef2e515425f8fb6e2 | 46 | py | Python | mlcollection/lib/bayesian/__init__.py | posborne/mlcollection | 65e1d0902ad0a3e5a53d98fb68432ce98ff970a3 | [
"MIT"
] | 2 | 2015-07-24T23:53:18.000Z | 2015-08-18T10:35:16.000Z | mlcollection/lib/bayesian/__init__.py | posborne/mlcollection | 65e1d0902ad0a3e5a53d98fb68432ce98ff970a3 | [
"MIT"
] | null | null | null | mlcollection/lib/bayesian/__init__.py | posborne/mlcollection | 65e1d0902ad0a3e5a53d98fb68432ce98ff970a3 | [
"MIT"
] | null | null | null | # TODO: implement set of bayesian classifiers
| 23 | 45 | 0.804348 | 6 | 46 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 46 | 1 | 46 | 46 | 0.948718 | 0.934783 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b4a400d1f331cd10333aa9bf143efe8f0bc8b204 | 34 | py | Python | easytorch/data/__init__.py | sraashis/quenn | 4bc6b7aca7ed13bed3502d0d5f2ea1a4c839bb41 | [
"MIT"
] | 20 | 2020-07-30T16:46:57.000Z | 2021-12-11T21:24:19.000Z | easytorch/data/__init__.py | sraashis/quenn | 4bc6b7aca7ed13bed3502d0d5f2ea1a4c839bb41 | [
"MIT"
] | 4 | 2020-12-13T15:03:28.000Z | 2022-03-12T00:59:06.000Z | easytorch/data/__init__.py | sraashis/quenn | 4bc6b7aca7ed13bed3502d0d5f2ea1a4c839bb41 | [
"MIT"
] | 3 | 2021-06-06T00:23:01.000Z | 2021-11-08T14:21:13.000Z | from easytorch.data.data import *
| 17 | 33 | 0.794118 | 5 | 34 | 5.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b4b0e61430e34fb953486f0d967e182a7e83c709 | 47 | py | Python | casbin_sqlalchemy_adapter/__init__.py | yyellowsun/sqlalchemy-adapter | d1ce6302dd4bdf483e0e10ca1b304ad25add8191 | [
"Apache-2.0"
] | 54 | 2019-03-04T11:19:16.000Z | 2022-03-30T12:48:20.000Z | casbin_sqlalchemy_adapter/__init__.py | yyellowsun/sqlalchemy-adapter | d1ce6302dd4bdf483e0e10ca1b304ad25add8191 | [
"Apache-2.0"
] | 49 | 2019-04-26T20:00:44.000Z | 2022-03-10T04:06:00.000Z | casbin_sqlalchemy_adapter/__init__.py | yyellowsun/sqlalchemy-adapter | d1ce6302dd4bdf483e0e10ca1b304ad25add8191 | [
"Apache-2.0"
] | 35 | 2019-04-19T21:50:58.000Z | 2022-02-03T12:40:50.000Z | from .adapter import CasbinRule, Adapter, Base
| 23.5 | 46 | 0.808511 | 6 | 47 | 6.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 47 | 1 | 47 | 47 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b4b59363e3b438bc469b98834a56ae90e72a6e58 | 43 | py | Python | optimal_fantasy/models/__init__.py | Sphunt2005/HIOGJFSDIOJFGLK | bd8ca4d7fce53abc7d0ab61f79406f5d8c06f669 | [
"MIT"
] | null | null | null | optimal_fantasy/models/__init__.py | Sphunt2005/HIOGJFSDIOJFGLK | bd8ca4d7fce53abc7d0ab61f79406f5d8c06f669 | [
"MIT"
] | null | null | null | optimal_fantasy/models/__init__.py | Sphunt2005/HIOGJFSDIOJFGLK | bd8ca4d7fce53abc7d0ab61f79406f5d8c06f669 | [
"MIT"
] | null | null | null | import optimal_fantasy.models.mip_complete
| 21.5 | 42 | 0.906977 | 6 | 43 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 43 | 1 | 43 | 43 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b4b8308fe774ee5cca9f1854763c68eef6e040b8 | 265 | py | Python | 01-logica-de-programacao-e-algoritmos/Aula 02/exercicio 04 aula-02.py | rafaelbarretomg/Uninter | 1f84b0103263177122663e991db3a8aeb106a959 | [
"MIT"
] | null | null | null | 01-logica-de-programacao-e-algoritmos/Aula 02/exercicio 04 aula-02.py | rafaelbarretomg/Uninter | 1f84b0103263177122663e991db3a8aeb106a959 | [
"MIT"
] | null | null | null | 01-logica-de-programacao-e-algoritmos/Aula 02/exercicio 04 aula-02.py | rafaelbarretomg/Uninter | 1f84b0103263177122663e991db3a8aeb106a959 | [
"MIT"
] | null | null | null | C = float(input('Digite a temperatura em Celsius: '))
F = ((9 * C)/5) + 32
# Maneira Classica
print('A temperatura de %.2f graus Celsius em Fahrenheit é de %.2f' %(C, F))
# Maneira Moderna
print('A temperatura {} graus Celsius em Fahrenheit é de {}' .format(C, F)) | 44.166667 | 76 | 0.667925 | 43 | 265 | 4.116279 | 0.511628 | 0.20339 | 0.19209 | 0.271186 | 0.305085 | 0.305085 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 0.173585 | 265 | 6 | 77 | 44.166667 | 0.780822 | 0.124528 | 0 | 0 | 0 | 0 | 0.626087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b4e61123f7514a964f719aa0414103aeb7726e63 | 31 | py | Python | tests/operators/__init__.py | VoiSmart/airflow | c8b67f2ade5a165d46677b59620f782b7e4d1983 | [
"Apache-2.0"
] | null | null | null | tests/operators/__init__.py | VoiSmart/airflow | c8b67f2ade5a165d46677b59620f782b7e4d1983 | [
"Apache-2.0"
] | null | null | null | tests/operators/__init__.py | VoiSmart/airflow | c8b67f2ade5a165d46677b59620f782b7e4d1983 | [
"Apache-2.0"
] | null | null | null | from .docker_operator import *
| 15.5 | 30 | 0.806452 | 4 | 31 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
37070678228c06d099555f024b5b33d479cfcb1d | 163 | py | Python | tests/test_propertypro.py | Ifyokoh/End-to-End-Machine-Learning | f23f8034aa3fc02c6dd834de50a25d5603adc8d2 | [
"MIT"
] | null | null | null | tests/test_propertypro.py | Ifyokoh/End-to-End-Machine-Learning | f23f8034aa3fc02c6dd834de50a25d5603adc8d2 | [
"MIT"
] | null | null | null | tests/test_propertypro.py | Ifyokoh/End-to-End-Machine-Learning | f23f8034aa3fc02c6dd834de50a25d5603adc8d2 | [
"MIT"
] | null | null | null | import pytest
from propertypro.propertypro import Propertypro
def test_scrape_data() -> None:
assert len(Propertypro().scrape_data(100, ["enugu"])) == 105
| 20.375 | 64 | 0.736196 | 20 | 163 | 5.85 | 0.7 | 0.17094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 0.141104 | 163 | 7 | 65 | 23.285714 | 0.792857 | 0 | 0 | 0 | 0 | 0 | 0.030675 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3718c4d3cfc72a0f332f2d88d403f1a8bca91e30 | 38 | py | Python | pulsectl_asyncio/__init__.py | mbrea-c/pulsectl-asyncio | 9755fd30b8fe3538192cf00e92a2b54586893c6b | [
"MIT"
] | 7 | 2021-02-26T08:21:38.000Z | 2021-11-18T11:51:54.000Z | pulsectl_asyncio/__init__.py | mbrea-c/pulsectl-asyncio | 9755fd30b8fe3538192cf00e92a2b54586893c6b | [
"MIT"
] | 6 | 2021-02-26T16:01:44.000Z | 2022-03-22T21:36:24.000Z | pulsectl_asyncio/__init__.py | mbrea-c/pulsectl-asyncio | 9755fd30b8fe3538192cf00e92a2b54586893c6b | [
"MIT"
] | 3 | 2021-09-17T13:28:22.000Z | 2022-03-17T03:53:25.000Z | from .pulsectl_async import PulseAsync | 38 | 38 | 0.894737 | 5 | 38 | 6.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2ec35fd5524593053fb0dad5e0c4cf96216b387d | 107 | py | Python | cpf_cnpj/__init__.py | elcidon/cpf_cnpj | 71db003669c5e75f62fd6347c5d50d8ae15a1c1a | [
"MIT"
] | null | null | null | cpf_cnpj/__init__.py | elcidon/cpf_cnpj | 71db003669c5e75f62fd6347c5d50d8ae15a1c1a | [
"MIT"
] | null | null | null | cpf_cnpj/__init__.py | elcidon/cpf_cnpj | 71db003669c5e75f62fd6347c5d50d8ae15a1c1a | [
"MIT"
] | null | null | null | from .cpf_cnpj import Cpf, Cnpj, CpfCnpj, BaseCpfCnpj
__all__ = ["Cpf", "Cnpj", "CpfCnpj", "BaseCpfCnpj"]
| 26.75 | 53 | 0.700935 | 13 | 107 | 5.384615 | 0.538462 | 0.3 | 0.4 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130841 | 107 | 3 | 54 | 35.666667 | 0.752688 | 0 | 0 | 0 | 0 | 0 | 0.233645 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2eee2de1b6a1981df4667d4594c79b6f02a71ad0 | 235 | py | Python | ariadne/tracknet_v2_1/__init__.py | t3hseus/ariadne | b4471a37741000e22281c4d6ff647d65ab9e1914 | [
"MIT"
] | 6 | 2020-08-28T22:44:07.000Z | 2022-01-24T20:53:00.000Z | ariadne/tracknet_v2_1/__init__.py | t3hseus/ariadne | b4471a37741000e22281c4d6ff647d65ab9e1914 | [
"MIT"
] | 1 | 2021-02-20T09:38:46.000Z | 2021-02-20T09:38:46.000Z | ariadne/tracknet_v2_1/__init__.py | t3hseus/ariadne | b4471a37741000e22281c4d6ff647d65ab9e1914 | [
"MIT"
] | 2 | 2021-10-04T09:25:06.000Z | 2022-02-09T09:09:09.000Z | from . import model
from . import model_small
from . import model_big
from . import dataset
from . import processor
from . import processor_with_model
from . import processor_for_validating
from . import loss
from . import data_loader
| 23.5 | 38 | 0.808511 | 34 | 235 | 5.382353 | 0.382353 | 0.491803 | 0.245902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153191 | 235 | 9 | 39 | 26.111111 | 0.919598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c162048e46ac6ab33261c0c1de3721d3319b43e | 187 | py | Python | References/Geovana Neves/TCC_Geovana_Neves_GitHub/SUAVE_modifications/SUAVE-feature-constant_throttle_EAS/trunk/SUAVE/Attributes/Atmospheres/Earth/__init__.py | Vinicius-Tanigawa/Undergraduate-Research-Project | e92372f07882484b127d7affe305eeec2238b8a9 | [
"MIT"
] | null | null | null | References/Geovana Neves/TCC_Geovana_Neves_GitHub/SUAVE_modifications/SUAVE-feature-constant_throttle_EAS/trunk/SUAVE/Attributes/Atmospheres/Earth/__init__.py | Vinicius-Tanigawa/Undergraduate-Research-Project | e92372f07882484b127d7affe305eeec2238b8a9 | [
"MIT"
] | null | null | null | References/Geovana Neves/TCC_Geovana_Neves_GitHub/SUAVE_modifications/SUAVE-feature-constant_throttle_EAS/trunk/SUAVE/Attributes/Atmospheres/Earth/__init__.py | Vinicius-Tanigawa/Undergraduate-Research-Project | e92372f07882484b127d7affe305eeec2238b8a9 | [
"MIT"
] | null | null | null |
# classes
from Constant_Temperature import Constant_Temperature
from US_Standard_1976 import US_Standard_1976
from International_Standard import International_Standard
# packages
# ...
| 20.777778 | 57 | 0.860963 | 22 | 187 | 6.954545 | 0.454545 | 0.248366 | 0.183007 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047904 | 0.106952 | 187 | 8 | 58 | 23.375 | 0.868263 | 0.106952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
25997f4e8860292816eb547663bd2610eeed4142 | 47 | py | Python | torchvision/models/quantization/__init__.py | fsavard-eai/vision | 7d509c5daccb436e53c52a477b1ab214f34df4ac | [
"BSD-3-Clause"
] | null | null | null | torchvision/models/quantization/__init__.py | fsavard-eai/vision | 7d509c5daccb436e53c52a477b1ab214f34df4ac | [
"BSD-3-Clause"
] | null | null | null | torchvision/models/quantization/__init__.py | fsavard-eai/vision | 7d509c5daccb436e53c52a477b1ab214f34df4ac | [
"BSD-3-Clause"
] | 1 | 2020-02-11T02:03:07.000Z | 2020-02-11T02:03:07.000Z | from .mobilenet import *
from .resnet import *
| 15.666667 | 24 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 25 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
25fc5fd13dc3277363ca898cf7727837fe54cdc9 | 345 | py | Python | stores/apps/shops/admin_urls.py | diassor/CollectorCity-Market-Place | 892ad220b8cf1c0fc7433f625213fe61729522b2 | [
"Apache-2.0"
] | 135 | 2015-03-19T13:28:18.000Z | 2022-03-27T06:41:42.000Z | stores/apps/shops/admin_urls.py | dfcoding/CollectorCity-Market-Place | e59acec3d600c049323397b17cae14fdcaaaec07 | [
"Apache-2.0"
] | null | null | null | stores/apps/shops/admin_urls.py | dfcoding/CollectorCity-Market-Place | e59acec3d600c049323397b17cae14fdcaaaec07 | [
"Apache-2.0"
] | 83 | 2015-01-30T01:00:15.000Z | 2022-03-08T17:25:10.000Z | from django.conf.urls.defaults import *
# TODO: ask martin and delete this file
#
#urlpatterns = patterns('',
# url(r'^customers/$', 'lots.views.home_admin', name='customers_admin'),
# url(r'^inventary/$', 'lots.views.home_admin', name='inventory_admin'),
# url(r'^account/$', 'lots.views.home_admin', name='shop_account_admin'),
#) | 34.5 | 76 | 0.686957 | 47 | 345 | 4.893617 | 0.595745 | 0.052174 | 0.169565 | 0.234783 | 0.286957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113043 | 345 | 10 | 77 | 34.5 | 0.751634 | 0.84058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d305fb0f3a7e16058edcbe615936e47dd9059004 | 6,434 | py | Python | tests/test_plotting.py | feilong/brainplotlib | 9b65394a42ae816c7a9282adf5d79ae698fdd74e | [
"BSD-3-Clause"
] | 12 | 2022-02-01T16:33:09.000Z | 2022-02-10T10:56:50.000Z | tests/test_plotting.py | feilong/brainplotlib | 9b65394a42ae816c7a9282adf5d79ae698fdd74e | [
"BSD-3-Clause"
] | null | null | null | tests/test_plotting.py | feilong/brainplotlib | 9b65394a42ae816c7a9282adf5d79ae698fdd74e | [
"BSD-3-Clause"
] | null | null | null | import os
import numpy as np
import importlib.util
from brainplotlib import brain_plot
class TestPlotting:
def test_icoorder5_masked(self, tmp_path):
values = np.arange(9372), np.arange(9370)
img = brain_plot(*values, vmax=18741, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder5_masked.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_icoorder5_masked_random(self, tmp_path):
rng = np.random.default_rng()
values = rng.random((9372, )), rng.random((9370, ))
img = brain_plot(*values, vmax=1, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder5_masked_random.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_icoorder5_nonmasked(self, tmp_path):
values = np.arange(10242), np.arange(10242)
img = brain_plot(*values, vmax=20483, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder5_nonmasked.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_icoorder5_nonmasked_random(self, tmp_path):
rng = np.random.default_rng()
values = rng.random((10242, )), rng.random((10242, ))
img = brain_plot(*values, vmax=1, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder5_nonmasked_random.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_icoorder3_masked(self, tmp_path):
values = np.arange(588), np.arange(587)
img = brain_plot(*values, vmax=1174, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder3_masked.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_icoorder3_masked_random(self, tmp_path):
rng = np.random.default_rng()
values = rng.random((588, )), rng.random((587, ))
img = brain_plot(*values, vmax=1, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder3_masked_random.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_icoorder3_nonmasked(self, tmp_path):
values = np.arange(642), np.arange(642)
img = brain_plot(*values, vmax=1283, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder3_nonmasked.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_icoorder3_nonmasked_random(self, tmp_path):
rng = np.random.default_rng()
values = rng.random((642, )), rng.random((642, ))
img = brain_plot(*values, vmax=1, vmin=0, cmap=None)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_icoorder3_nonmasked_random.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
class TestColormaps:
def test_bwr_cmap(self, tmp_path):
rng = np.random.default_rng()
values = rng.random((588, )), rng.random((587, ))
img = brain_plot(*values, vmax=1, vmin=0, cmap='bwr')
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_bwr_cmap.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
def test_jet_cmap(self, tmp_path):
rng = np.random.default_rng()
values = rng.random((588, )), rng.random((587, ))
img = brain_plot(*values, vmax=1, vmin=0, cmap='jet')
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_jet_cmap.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
class TestScale:
def test_color_scale(self, tmp_path):
rng = np.random.default_rng()
values = rng.random((588, )), rng.random((587, ))
img, scale = brain_plot(*values, vmax=1, vmin=0, cmap='viridis', return_scale=True)
from matplotlib import cm
assert isinstance(scale, cm.ScalarMappable)
assert img.shape in [(1560, 1728, 4), (1560, 1728, 3)]
assert img.dtype == np.float64
assert np.all(img <= 1)
assert np.all(img >= 0)
if importlib.util.find_spec('cv2'):
import cv2
cv2.imwrite(os.path.join(tmp_path, 'test_colorscale.png'), np.round(img * 255).astype(np.uint8)[:, :, [2, 1, 0, 3]])
| 45.631206 | 144 | 0.58968 | 943 | 6,434 | 3.914104 | 0.084836 | 0.041723 | 0.065565 | 0.083446 | 0.899756 | 0.899756 | 0.868599 | 0.842861 | 0.835004 | 0.835004 | 0 | 0.101986 | 0.248679 | 6,434 | 140 | 145 | 45.957143 | 0.661564 | 0 | 0 | 0.65873 | 0 | 0 | 0.052689 | 0.037302 | 0 | 0 | 0 | 0 | 0.357143 | 1 | 0.087302 | false | 0 | 0.214286 | 0 | 0.325397 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d3127ab34dcec0d795089ba25587e35faf757639 | 249 | py | Python | lib/__init__.py | acordonez/ENSO_metrics | 6b5a3c33ec89a5c6de147bf6dc03872a89b9da90 | [
"BSD-3-Clause"
] | 11 | 2020-07-09T01:03:59.000Z | 2022-03-18T11:39:06.000Z | lib/__init__.py | acordonez/ENSO_metrics | 6b5a3c33ec89a5c6de147bf6dc03872a89b9da90 | [
"BSD-3-Clause"
] | 17 | 2020-07-09T00:42:14.000Z | 2022-03-21T23:14:33.000Z | lib/__init__.py | acordonez/ENSO_metrics | 6b5a3c33ec89a5c6de147bf6dc03872a89b9da90 | [
"BSD-3-Clause"
] | 4 | 2020-09-24T05:42:15.000Z | 2022-03-15T16:17:53.000Z | from .EnsoCollectionsLib import *
from .EnsoComputeMetricsLib import *
from .EnsoErrorsWarnings import *
from .EnsoMetricsLib import *
from .EnsoToolsLib import *
from .EnsoUvcdatToolsLib import *
from .EnsoPlotLib import *
from .KeyArgLib import *
| 27.666667 | 36 | 0.807229 | 24 | 249 | 8.375 | 0.416667 | 0.348259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128514 | 249 | 8 | 37 | 31.125 | 0.926267 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d37faf6da04807045bd3c9e11c76826693d48906 | 50 | py | Python | webdriver/tests/support/__init__.py | shs96c/web-platform-tests | 61acad6dd9bb99d32340eb41f5146de64f542359 | [
"BSD-3-Clause"
] | 4 | 2020-09-09T15:28:01.000Z | 2021-12-01T00:59:56.000Z | webdriver/tests/support/__init__.py | shs96c/web-platform-tests | 61acad6dd9bb99d32340eb41f5146de64f542359 | [
"BSD-3-Clause"
] | 1 | 2021-03-31T20:23:55.000Z | 2021-03-31T20:23:55.000Z | webdriver/tests/support/__init__.py | shs96c/web-platform-tests | 61acad6dd9bb99d32340eb41f5146de64f542359 | [
"BSD-3-Clause"
] | 1 | 2021-04-06T20:06:58.000Z | 2021-04-06T20:06:58.000Z | from merge_dictionaries import merge_dictionaries
| 25 | 49 | 0.92 | 6 | 50 | 7.333333 | 0.666667 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 50 | 1 | 50 | 50 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d39f5fecd3d8efddec7769b22b9bd3abc7e6204b | 79 | py | Python | sonata/loggers/__init__.py | sergevkim/sonata | 2250b60174628ee76fb7d54bf50e4b8b07b505d5 | [
"MIT"
] | 1 | 2021-03-15T19:01:43.000Z | 2021-03-15T19:01:43.000Z | sonata/loggers/__init__.py | sergevkim/sonata | 2250b60174628ee76fb7d54bf50e4b8b07b505d5 | [
"MIT"
] | null | null | null | sonata/loggers/__init__.py | sergevkim/sonata | 2250b60174628ee76fb7d54bf50e4b8b07b505d5 | [
"MIT"
] | null | null | null | from .base_logger import BaseLogger
from .neptune_logger import NeptuneLogger
| 19.75 | 41 | 0.860759 | 10 | 79 | 6.6 | 0.7 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113924 | 79 | 3 | 42 | 26.333333 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3a264cd68c46cc5f70b27ecb402c72485be8f75 | 71 | py | Python | crepes/__init__.py | translational-informatics/crepes | 5a9a088ccfe61fbd48118ca7ffc90ac524834df4 | [
"MIT"
] | null | null | null | crepes/__init__.py | translational-informatics/crepes | 5a9a088ccfe61fbd48118ca7ffc90ac524834df4 | [
"MIT"
] | null | null | null | crepes/__init__.py | translational-informatics/crepes | 5a9a088ccfe61fbd48118ca7ffc90ac524834df4 | [
"MIT"
] | null | null | null | from .omopReader import read_omop_data
from .generate import generate
| 17.75 | 38 | 0.84507 | 10 | 71 | 5.8 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126761 | 71 | 3 | 39 | 23.666667 | 0.935484 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6c8f44c564d3ef0690096b26c795c6bc22b7f12a | 35 | py | Python | textattack/datasets/translation/__init__.py | fighting41love/TextAttack | 24e48f0022dc3a7bdcd5cbb3430f1c72cfcb522d | [
"MIT"
] | 2 | 2020-07-08T08:55:37.000Z | 2020-09-03T00:57:38.000Z | textattack/datasets/translation/__init__.py | SatoshiRobatoFujimoto/TextAttack | a809a9bddddff9f41750949e26edde26c8af6cfa | [
"MIT"
] | null | null | null | textattack/datasets/translation/__init__.py | SatoshiRobatoFujimoto/TextAttack | a809a9bddddff9f41750949e26edde26c8af6cfa | [
"MIT"
] | null | null | null | from .translation_datasets import * | 35 | 35 | 0.857143 | 4 | 35 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6ca6473211e545ce9b160c9567941350880957cf | 111 | py | Python | app/osm_observer/model/__init__.py | grischard/osm-observer | 9e833e98696abc4a2aab942c8899aaf039166fc1 | [
"MIT"
] | 4 | 2018-04-24T17:55:08.000Z | 2021-02-18T00:52:04.000Z | app/osm_observer/model/__init__.py | grischard/osm-observer | 9e833e98696abc4a2aab942c8899aaf039166fc1 | [
"MIT"
] | 1 | 2021-02-08T20:30:42.000Z | 2021-02-08T20:30:42.000Z | app/osm_observer/model/__init__.py | grischard/osm-observer | 9e833e98696abc4a2aab942c8899aaf039166fc1 | [
"MIT"
] | 2 | 2019-09-27T23:57:11.000Z | 2020-09-19T19:01:37.000Z | from .user import *
from .coverage import *
from .changes import *
from .review import *
from .filter import *
| 18.5 | 23 | 0.72973 | 15 | 111 | 5.4 | 0.466667 | 0.493827 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18018 | 111 | 5 | 24 | 22.2 | 0.89011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6cead22c7055416edbdf49b241c9f2804dcbc565 | 2,539 | py | Python | epytope/Data/pssms/smm/mat/B_35_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smm/mat/B_35_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smm/mat/B_35_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_35_01_10 = {0: {'A': 0.075, 'C': 0.34, 'E': 0.274, 'D': 0.45, 'G': 0.33, 'F': -0.917, 'I': -0.282, 'H': -0.319, 'K': 0.455, 'M': -0.841, 'L': -0.281, 'N': 0.176, 'Q': -0.069, 'P': 1.066, 'S': 0.103, 'R': 0.444, 'T': 0.036, 'W': -0.03, 'V': 0.064, 'Y': -1.074}, 1: {'A': -0.427, 'C': 0.0, 'E': 0.45, 'D': 0.0, 'G': -0.241, 'F': 0.35, 'I': -0.059, 'H': -0.011, 'K': 0.518, 'M': -0.052, 'L': 0.014, 'N': 0.488, 'Q': 0.268, 'P': -1.098, 'S': 0.197, 'R': 0.105, 'T': 0.08, 'W': -0.136, 'V': -0.441, 'Y': -0.004}, 2: {'A': -0.078, 'C': 0.02, 'E': 0.008, 'D': 0.041, 'G': 0.221, 'F': -0.233, 'I': -0.261, 'H': -0.055, 'K': 0.265, 'M': -0.246, 'L': -0.045, 'N': 0.053, 'Q': -0.056, 'P': 0.219, 'S': 0.047, 'R': 0.209, 'T': 0.026, 'W': 0.044, 'V': -0.155, 'Y': -0.024}, 3: {'A': 0.022, 'C': -0.165, 'E': -0.003, 'D': -0.044, 'G': -0.071, 'F': 0.029, 'I': 0.002, 'H': 0.07, 'K': 0.144, 'M': 0.051, 'L': -0.023, 'N': 0.062, 'Q': -0.016, 'P': -0.046, 'S': -0.031, 'R': -0.015, 'T': 0.029, 'W': 0.026, 'V': 0.026, 'Y': -0.047}, 4: {'A': 0.017, 'C': 0.144, 'E': 0.037, 'D': -0.186, 'G': 0.057, 'F': -0.048, 'I': -0.148, 'H': 0.042, 'K': 0.129, 'M': -0.255, 'L': 0.047, 'N': 0.043, 'Q': -0.062, 'P': 0.067, 'S': 0.06, 'R': 0.21, 'T': -0.044, 'W': 0.02, 'V': -0.004, 'Y': -0.127}, 5: {'A': 0.001, 'C': 0.01, 'E': 0.001, 'D': 0.015, 'G': -0.028, 'F': -0.007, 'I': 0.026, 'H': 0.006, 'K': 0.03, 'M': -0.012, 'L': -0.031, 'N': -0.003, 'Q': 0.009, 'P': -0.016, 'S': 0.009, 'R': 0.02, 'T': -0.015, 'W': -0.0, 'V': -0.008, 'Y': -0.004}, 6: {'A': 0.058, 'C': -0.012, 'E': 0.096, 'D': -0.18, 'G': -0.112, 'F': -0.026, 'I': 0.066, 'H': 0.1, 'K': 0.108, 'M': 0.011, 'L': -0.029, 'N': 0.162, 'Q': -0.224, 'P': -0.109, 'S': 0.031, 'R': 0.254, 'T': -0.217, 'W': -0.022, 'V': 0.096, 'Y': -0.052}, 7: {'A': -0.172, 'C': -0.19, 'E': 0.06, 'D': 0.023, 'G': 0.195, 'F': 0.026, 'I': -0.104, 'H': 0.039, 'K': 0.265, 'M': -0.053, 'L': -0.016, 'N': 0.068, 'Q': 0.035, 'P': 
-0.001, 'S': 0.021, 'R': 0.137, 'T': -0.062, 'W': -0.1, 'V': -0.031, 'Y': -0.14}, 8: {'A': -0.196, 'C': 0.188, 'E': -0.149, 'D': 0.07, 'G': 0.188, 'F': 0.021, 'I': -0.258, 'H': 0.138, 'K': 0.518, 'M': -0.127, 'L': 0.089, 'N': -0.058, 'Q': 0.082, 'P': -0.221, 'S': -0.038, 'R': 0.015, 'T': -0.099, 'W': 0.199, 'V': -0.237, 'Y': -0.125}, 9: {'A': 0.029, 'C': 0.007, 'E': 0.344, 'D': 0.23, 'G': -0.067, 'F': -0.889, 'I': 0.344, 'H': 0.088, 'K': 0.335, 'M': -0.518, 'L': -0.105, 'N': 0.09, 'Q': 0.4, 'P': -0.281, 'S': -0.061, 'R': 0.161, 'T': 0.273, 'W': 0.1, 'V': 0.469, 'Y': -0.947}, -1: {'con': 4.6849}} | 2,539 | 2,539 | 0.391099 | 618 | 2,539 | 1.601942 | 0.275081 | 0.020202 | 0.010101 | 0.012121 | 0.078788 | 0 | 0 | 0 | 0 | 0 | 0 | 0.369882 | 0.163056 | 2,539 | 1 | 2,539 | 2,539 | 0.096 | 0 | 0 | 0 | 0 | 0 | 0.079921 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f270531128adb484cae5ac12daf3fc9ff79fdd5 | 292 | py | Python | The Last Comit.py | aash-gates/aash-python-babysteps | cb88b02b0d33ac74acb183d4f11f6baad0ad3db9 | [
"Unlicense"
] | 7 | 2020-11-16T18:23:21.000Z | 2021-12-18T14:08:54.000Z | The Last Comit.py | aash-gates/aash-python-babysteps | cb88b02b0d33ac74acb183d4f11f6baad0ad3db9 | [
"Unlicense"
] | null | null | null | The Last Comit.py | aash-gates/aash-python-babysteps | cb88b02b0d33ac74acb183d4f11f6baad0ad3db9 | [
"Unlicense"
] | 1 | 2020-12-21T15:59:44.000Z | 2020-12-21T15:59:44.000Z | #A Last Commit on this Year 31/12/2021
print("Good Bye 2021 it was a Great Year had Lots of fun, and this is the last Commit for the year")
print(".")
print(".")
print(".")
print(".")
print(".")
print(".")
print(".")
print(".")
print("This is the Last Commit for 2021")
#end of the program
| 18.25 | 100 | 0.64726 | 51 | 292 | 3.705882 | 0.509804 | 0.42328 | 0.555556 | 0.634921 | 0.238095 | 0.238095 | 0.238095 | 0.238095 | 0 | 0 | 0 | 0.066116 | 0.171233 | 292 | 15 | 101 | 19.466667 | 0.714876 | 0.188356 | 0 | 0.8 | 0 | 0.1 | 0.553191 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9f6b86ace42a9ccdca95c654beb7d1cce5d865af | 31 | py | Python | training/__init__.py | awagot/CNN-POD | ebee234831ff58609563a925b7a47e0f4c30a16e | [
"CC0-1.0"
] | 2 | 2021-04-08T10:30:58.000Z | 2021-08-18T11:23:05.000Z | training/__init__.py | awagot/CNN-POD | ebee234831ff58609563a925b7a47e0f4c30a16e | [
"CC0-1.0"
] | 1 | 2021-04-07T21:28:59.000Z | 2021-04-07T21:28:59.000Z | training/__init__.py | awagot/CNN-POD | ebee234831ff58609563a925b7a47e0f4c30a16e | [
"CC0-1.0"
] | 2 | 2021-04-09T09:41:32.000Z | 2021-04-16T13:09:43.000Z | from training.training import * | 31 | 31 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9f8b12673847116ad8a0d39897b52a94b5ff3752 | 21 | py | Python | pycascades/utils/__init__.py | vishalbelsare/pycascades | 61ac44f7db3093451f9778d2f6dee5f1390ce38a | [
"BSD-3-Clause"
] | 11 | 2020-11-05T09:50:07.000Z | 2022-03-30T18:34:07.000Z | pycascades/utils/__init__.py | vishalbelsare/pycascades | 61ac44f7db3093451f9778d2f6dee5f1390ce38a | [
"BSD-3-Clause"
] | 1 | 2021-11-08T15:10:26.000Z | 2021-11-08T15:11:36.000Z | pycascades/utils/__init__.py | vitusbenson/pycascades | 961f3e3cca43fcf75983ddf72821533f183e3a09 | [
"BSD-3-Clause"
] | 3 | 2021-09-11T09:03:30.000Z | 2021-11-05T06:37:09.000Z | from . import plotter | 21 | 21 | 0.809524 | 3 | 21 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9fad6af7615588e71b44c953892540525cedfef7 | 93 | py | Python | test/main.py | sveneberth/pypi-test | d05e628476cfcd1c223e1ff547daeb2fd2e1c973 | [
"MIT"
] | null | null | null | test/main.py | sveneberth/pypi-test | d05e628476cfcd1c223e1ff547daeb2fd2e1c973 | [
"MIT"
] | null | null | null | test/main.py | sveneberth/pypi-test | d05e628476cfcd1c223e1ff547daeb2fd2e1c973 | [
"MIT"
] | null | null | null | from first_package_xyz567 import bar
print(bar.make_random_args())
print(bar.get_version())
| 18.6 | 36 | 0.817204 | 15 | 93 | 4.733333 | 0.8 | 0.225352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034884 | 0.075269 | 93 | 4 | 37 | 23.25 | 0.790698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
9fb2128ab8087e7d6575043c119defc3235841fc | 46 | py | Python | __init__.py | rhuard/PyTextDecorator | 3ace385ba49bdcd1acb8f8033b9f25ed779eea97 | [
"MIT"
] | null | null | null | __init__.py | rhuard/PyTextDecorator | 3ace385ba49bdcd1acb8f8033b9f25ed779eea97 | [
"MIT"
] | 5 | 2016-08-16T00:38:54.000Z | 2016-08-17T05:49:48.000Z | __init__.py | rhuard/PyTextDecorator | 3ace385ba49bdcd1acb8f8033b9f25ed779eea97 | [
"MIT"
] | null | null | null | from PyTextDecorator.pytextdecorator import *
| 23 | 45 | 0.869565 | 4 | 46 | 10 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4cd93f07708f652db8e94848852b5be4f44560ac | 13,100 | py | Python | refinery/bnpy/bnpy-dev/tests/suffstats/TestParamBag.py | csa0001/Refinery | 0d5de8fc3d680a2c79bd0e9384b506229787c74f | [
"MIT"
] | 103 | 2015-01-13T00:48:14.000Z | 2021-11-08T10:53:22.000Z | refinery/bnpy/bnpy-dev/tests/suffstats/TestParamBag.py | csa0001/Refinery | 0d5de8fc3d680a2c79bd0e9384b506229787c74f | [
"MIT"
] | 7 | 2015-02-21T04:03:40.000Z | 2021-08-23T20:24:54.000Z | refinery/bnpy/bnpy-dev/tests/suffstats/TestParamBag.py | csa0001/Refinery | 0d5de8fc3d680a2c79bd0e9384b506229787c74f | [
"MIT"
] | 27 | 2015-01-23T00:54:31.000Z | 2020-12-30T14:30:50.000Z | '''
Unit-tests for ParamBag
'''
from bnpy.suffstats.ParamBag import ParamBag
import numpy as np
import unittest
class TestParamBag(unittest.TestCase):
def shortDescription(self):
return None
def test_setAllFieldsToZero_K1_D1(self, K=1, D=1):
A = ParamBag(K=K, D=D)
s = 123
N = np.ones(K)
x = np.ones((K,D))
xxT = np.ones((K,D,D))
W = np.ones((K,K))
A.setField('s', s)
A.setField('N', N, dims='K')
A.setField('x', x, dims=('K','D'))
A.setField('xxT', xxT, dims=('K','D','D'))
A.setField('W', W, dims=('K','K'))
A.setAllFieldsToZero()
assert np.allclose(A.s, 0.0)
assert np.allclose(A.N, np.zeros(K))
assert np.allclose(A.x, np.zeros(K))
assert np.allclose(A.xxT, np.zeros(K))
        assert np.allclose(A.W, np.zeros((K,K)))
######################################################### insertEmptyComps
def test_insertEmptyComps_K1_D1(self, K=1, D=1):
A = ParamBag(K=K, D=D)
s = 123
N = np.zeros(K)
x = np.zeros((K,D))
xxT = np.zeros((K,D,D))
W = np.zeros((K,K))
A.setField('s', s)
A.setField('N', N, dims='K')
A.setField('x', x, dims=('K','D'))
A.setField('xxT', xxT, dims=('K','D','D'))
A.setField('W', W, dims=('K','K'))
A.insertEmptyComps(2)
assert np.allclose(A.s, 123)
assert np.allclose(A.N, np.zeros(K+2))
assert np.allclose(A.x, np.zeros(K+2))
assert np.allclose(A.xxT, np.zeros(K+2))
assert np.allclose(A.W, np.zeros((K+2,K+2)))
    def test_insertEmptyComps_K1_D2(self, K=1, D=2):
A = ParamBag(K=K, D=D)
s = 123
N = np.zeros(K)
x = np.zeros((K,D))
xxT = np.zeros((K,D,D))
W = np.zeros((K,K))
A.setField('s', s)
A.setField('N', N, dims='K')
A.setField('x', x, dims=('K','D'))
A.setField('xxT', xxT, dims=('K','D','D'))
A.setField('W', W, dims=('K','K'))
A.insertEmptyComps(2)
assert np.allclose(A.s, 123)
assert np.allclose(A.N, np.zeros(K+2))
assert np.allclose(A.x, np.zeros((K+2,D)))
assert np.allclose(A.xxT, np.zeros((K+2,D,D)))
assert np.allclose(A.W, np.zeros((K+2,K+2)))
def test_insertEmptyComps_K3_D1(self, K=3, D=1):
A = ParamBag(K=K, D=D)
s = 123
N = np.zeros(K)
x = np.zeros((K,D))
xxT = np.zeros((K,D,D))
W = np.zeros((K,K))
A.setField('s', s)
A.setField('N', N, dims='K')
A.setField('x', x, dims=('K','D'))
A.setField('xxT', xxT, dims=('K','D','D'))
A.setField('W', W, dims=('K','K'))
A.insertEmptyComps(2)
assert np.allclose(A.s, 123)
assert np.allclose(A.N, np.zeros(K+2))
assert np.allclose(A.x, np.zeros(K+2))
assert np.allclose(A.xxT, np.zeros(K+2))
assert np.allclose(A.W, np.zeros((K+2,K+2)))
def test_insertEmptyComps_K3_D3(self, K=3, D=3):
A = ParamBag(K=K, D=D)
s = 123
N = np.zeros(K)
x = np.zeros((K,D))
xxT = np.zeros((K,D,D))
W = np.zeros((K,K))
A.setField('s', s)
A.setField('N', N, dims='K')
A.setField('x', x, dims=('K','D'))
A.setField('xxT', xxT, dims=('K','D','D'))
A.setField('W', W, dims=('K','K'))
A.insertEmptyComps(2)
assert np.allclose(A.s, 123)
assert np.allclose(A.N, np.zeros(K+2))
assert np.allclose(A.x, np.zeros((K+2,D)))
assert np.allclose(A.xxT, np.zeros((K+2,D,D)))
assert np.allclose(A.W, np.zeros((K+2,K+2)))
######################################################### Verify insert
def test_insertComps_K1_D1(self):
A = ParamBag(K=1,D=1)
s = 123.456
A.setField('scalar', s, dims=None)
A.setField('N', [1], dims='K')
A.setField('x', [[1]], dims=('K','D'))
A.setField('xxT', [[[1]]], dims=('K','D','D'))
Abig = A.copy()
Abig.insertComps(A)
assert Abig.K == 2
assert np.allclose(Abig.N, np.hstack([A.N, A.N]))
assert Abig.scalar == 2*s
Abig.insertComps(A)
assert Abig.K == 3
assert np.allclose(Abig.N, np.hstack([A.N, A.N, A.N]))
assert Abig.scalar == 3*s
A.insertComps(Abig)
assert A.K == 4
assert A.scalar == 4*s
assert np.allclose(A.N, np.hstack([1,1,1,1]))
def test_insertComps_K1_D3(self, K=1, D=3):
A = ParamBag(K=K,D=D)
s = 123.456
A.setField('scalar', s, dims=None)
A.setField('N', [1.0], dims='K')
A.setField('x', np.random.rand(K,D), dims=('K','D'))
A.setField('xxT', np.random.rand(K,D,D), dims=('K','D','D'))
Abig = A.copy()
Abig.insertComps(A)
assert Abig.K == 2
assert np.allclose(Abig.N, np.hstack([A.N, A.N]))
assert Abig.scalar == 2*s
assert Abig.xxT.shape == (2,3,3)
assert np.allclose(Abig.xxT[0], A.xxT)
assert np.allclose(Abig.xxT[1], A.xxT)
Abig.insertComps(A)
assert Abig.K == 3
assert np.allclose(Abig.N, np.hstack([A.N, A.N, A.N]))
assert Abig.scalar == 3*s
assert Abig.xxT.shape == (3,3,3)
assert np.allclose(Abig.xxT[0], A.xxT)
assert np.allclose(Abig.xxT[1], A.xxT)
A.insertComps(Abig)
assert A.K == 4
assert A.scalar == 4*s
assert np.allclose(A.N, np.hstack([1,1,1,1]))
######################################################### Verify remove
def test_removeComp_K1_D1(self):
A = ParamBag(K=1,D=1)
A.setField('N', [1], dims='K')
A.setField('x', [[1]], dims=('K','D'))
with self.assertRaises(ValueError):
A.removeComp(0)
def test_removeComp_K3_D1(self):
A = ParamBag(K=3,D=1)
A.setField('N', [1,2,3], dims='K')
A.setField('x', [[4],[5],[6]], dims=('K','D'))
A.setField('W', np.ones((3,3)), dims=('K','K'))
Aorig = A.copy()
A.removeComp(1)
assert Aorig.K == A.K + 1
assert A.N[0] == Aorig.N[0]
assert A.N[1] == Aorig.N[2]
assert np.allclose( A.x, [[4],[6]])
assert np.allclose(A.W, np.ones((2,2)))
def test_remove_K3_D2(self, K=3, D=2):
A = ParamBag(K=K, D=D)
s = 123
N = np.random.rand(K)
x = np.random.rand(K,D)
xxT = np.random.randn(K,D,D)
A.setField('s', s)
A.setField('N', N, dims='K')
A.setField('x', x, dims=('K','D'))
A.setField('xxT', xxT, dims=('K','D','D'))
Abig = A.copy()
# First remove a few fields
for k in range(K-1):
A.removeComp(0)
assert A.K == K - k - 1
assert A.s == s
assert np.allclose(A.getComp(0).x, x[k+1])
assert np.allclose(A.getComp(0).xxT, xxT[k+1])
######################################################### Verify get
def test_getComp_K1_D1(self):
A = ParamBag(K=1,D=1)
A.setField('scalar', 1, dims=None)
A.setField('N', [1], dims='K')
A.setField('x', [[1]], dims=('K','D'))
c = A.getComp(0)
assert c.K == 1
assert c.N == A.N
assert c.x == A.x
assert id(c.scalar) != id(A.scalar)
assert id(c.N) != id(A.N)
assert id(c.x) != id(A.x)
def test_getComp_K3_D1(self):
A = ParamBag(K=3,D=1)
A.setField('N', [1,2,3], dims='K')
A.setField('x', [[4],[5],[6]], dims=('K','D'))
c = A.getComp(0)
assert c.K == 1
assert c.N == A.N[0]
assert c.x == A.x[0]
assert id(c.N) != id(A.N)
assert id(c.x) != id(A.x)
######################################################### Verify add/subtract
def test_add_K1_D1(self):
A = ParamBag(K=1,D=1)
B = ParamBag(K=1,D=1)
C = A + B
assert C.K == A.K and C.D == A.D
A.setField('N', [1], dims='K')
B.setField('N', [10], dims='K')
C = A + B
assert C.N[0] == 11.0
def test_add_K3_D2(self, K=3, D=2):
A = ParamBag(K=K,D=D)
A.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
B = ParamBag(K=K,D=D)
B.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
C = A + B
assert np.allclose(C.xxT, A.xxT + B.xxT)
def test_sub_K3_D2(self, K=3, D=2):
A = ParamBag(K=K,D=D)
A.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
B = ParamBag(K=K,D=D)
B.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
C = A - B
assert np.allclose(C.xxT, A.xxT - B.xxT)
def test_iadd_K3_D2(self, K=3, D=2):
A = ParamBag(K=K,D=D)
A.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
A.setField('x', np.random.randn(K,D), dims=('K','D'))
B = ParamBag(K=K,D=D)
B.setField('x', np.random.randn(K,D), dims=('K','D'))
B.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
origID = hex(id(A))
A += B
newID = hex(id(A))
assert origID == newID
A = A + B
newnewID = hex(id(A))
assert newnewID != origID
def test_isub_K3_D2(self, K=3, D=2):
A = ParamBag(K=K,D=D)
A.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
A.setField('x', np.random.randn(K,D), dims=('K','D'))
B = ParamBag(K=K,D=D)
B.setField('x', np.random.randn(K,D), dims=('K','D'))
B.setField('xxT', np.random.randn(K,D,D), dims=('K','D','D'))
origID = hex(id(A))
A -= B
newID = hex(id(A))
assert origID == newID
A = A - B
newnewID = hex(id(A))
assert newnewID != origID
######################################################### Dim 0 parsing
def test_parseArr_dim0_passes(self):
PB1 = ParamBag(K=1, D=1)
x = PB1.parseArr(1.23, dims=None)
assert x.ndim == 0 and x.size == 1
x = PB1.parseArr([1.23], dims=('K'))
assert x.ndim == 1 and x.size == 1
PB2 = ParamBag(K=2, D=1)
x = PB2.parseArr(1.23, dims=None)
assert x.ndim == 0 and x.size == 1
PB5 = ParamBag(K=5, D=40)
x = PB5.parseArr(1.23, dims=None)
assert x.ndim == 0 and x.size == 1
def test_parseArr_dim0_fails(self):
''' Verify fails for 0-dim input when K > 1
'''
PB2 = ParamBag(K=2, D=1)
with self.assertRaises(ValueError):
x = PB2.parseArr(1.23, dims=('K'))
with self.assertRaises(ValueError):
x = PB2.parseArr(1.23, dims='K')
######################################################### Dim 1 parsing
def test_parseArr_dim1_passes(self):
# K = 1, D = 1
PB1 = ParamBag(K=1, D=1)
x = PB1.parseArr([1.23], dims='K')
assert x.ndim == 1 and x.size == 1
x = PB1.parseArr([[1.23]], dims=('K','D'))
assert x.ndim == 2 and x.size == 1
# K = *, D = 1
PB2 = ParamBag(K=2, D=1)
x = PB2.parseArr([1.,2.], dims='K')
assert x.ndim == 1 and x.size == 2
x = PB2.parseArr([[1.],[2.]], dims=('K','D'))
assert x.ndim == 2 and x.size == 2
# K = 1, D = *
PB3 = ParamBag(K=1, D=3)
x = PB3.parseArr([[1., 2., 3.]], dims=('K','D'))
assert x.ndim == 2 and x.size == 3
# K = *, D = *
PB2 = ParamBag(K=4, D=1)
x = PB2.parseArr([[1.],[2.],[3.],[4.]], dims=('K','D'))
assert x.ndim == 2 and x.size == 4
N = PB2.parseArr([1.,2.,3.,4.], dims='K')
assert N.ndim == 1 and N.size == 4
def test_parseArr_dim1_fails(self):
PB1 = ParamBag(K=1, D=1)
with self.assertRaises(ValueError):
x = PB1.parseArr([1.23], dims=('K','D'))
PB2 = ParamBag(K=2, D=1)
with self.assertRaises(ValueError):
x = PB2.parseArr([1.23], dims=('K'))
with self.assertRaises(ValueError):
x = PB2.parseArr([1.23], dims=('K','D'))
PB3 = ParamBag(K=1, D=3)
with self.assertRaises(ValueError):
x = PB3.parseArr([1.,2.], dims=('K','D'))
PB3 = ParamBag(K=2, D=3)
with self.assertRaises(ValueError):
x = PB3.parseArr([1.,2.,3.,4.,5.,6.], dims=('K','D'))
######################################################### Dim 2 parsing
def test_parseArr_dim2_passes(self):
PB2 = ParamBag(K=2, D=2)
x = PB2.parseArr(np.eye(2), dims=('K','D'))
assert x.ndim == 2 and x.size == 4
PB31 = ParamBag(K=3, D=1)
x = PB31.parseArr([[10],[11],[12]], dims=('K','D'))
assert x.ndim == 2 and x.size == 3
def test_parseArr_dim2_fails(self):
PB2 = ParamBag(K=2, D=2)
with self.assertRaises(ValueError):
x = PB2.parseArr([[1.,2]], dims=('K'))
with self.assertRaises(ValueError):
x = PB2.parseArr([[1.,2]], dims=('K','D'))
with self.assertRaises(ValueError):
x = PB2.parseArr(np.eye(3), dims=('K','D'))
PB1 = ParamBag(K=1, D=2)
with self.assertRaises(ValueError):
# should be 1x2x2, not 2x2
x = PB1.parseArr(np.eye(2), dims=('K','D','D'))
######################################################### Dim 3 parsing
def test_parseArr_dim3_passes(self):
K=2
D=2
PB = ParamBag(K=K, D=D)
x = PB.parseArr(np.random.randn(K,D,D), dims=('K','D','D'))
assert x.ndim == 3 and x.size == K*D*D
K=1
D=2
PB = ParamBag(K=K, D=D)
x = PB.parseArr(np.random.rand(K,D,D), dims=('K','D', 'D'))
assert x.ndim == 3 and x.size == K*D*D
K=3
D=1
PB = ParamBag(K=K, D=D)
x = PB.parseArr(np.random.rand(K,D,D), dims=('K','D', 'D'))
assert x.ndim == 3 and x.size == K*D*D
def test_parseArr_dim3_fails(self):
PB = ParamBag(K=2, D=2)
with self.assertRaises(ValueError):
x = PB.parseArr([[[1.,2]]], dims=('K'))
with self.assertRaises(ValueError):
x = PB.parseArr([[[1.,2]]], dims=('K','D'))
with self.assertRaises(ValueError):
x = PB.parseArr(np.random.randn(3,3,3), dims=('K','D'))
with self.assertRaises(ValueError):
x = PB.parseArr(np.random.randn(3,3,3), dims=('K','D','D'))
| 30.679157 | 79 | 0.524733 | 2,311 | 13,100 | 2.942016 | 0.05106 | 0.030299 | 0.026474 | 0.077511 | 0.835123 | 0.806589 | 0.77129 | 0.739521 | 0.7229 | 0.708192 | 0 | 0.042304 | 0.21145 | 13,100 | 426 | 80 | 30.751174 | 0.615876 | 0.023206 | 0 | 0.644509 | 0 | 0 | 0.021691 | 0 | 0 | 0 | 0 | 0 | 0.315029 | 1 | 0.075145 | false | 0.011561 | 0.008671 | 0.00289 | 0.089595 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
981b33f85196fab8723b0323f91f04b14ff682b7 | 75 | py | Python | test_data/examples/class_def_in_multi_files/level_two/k2.py | mosckital/python-mro-language-server | de5b7dd8d94ab2c94543c55a7c5e5691e664b4a4 | [
"MIT"
] | null | null | null | test_data/examples/class_def_in_multi_files/level_two/k2.py | mosckital/python-mro-language-server | de5b7dd8d94ab2c94543c55a7c5e5691e664b4a4 | [
"MIT"
] | 7 | 2020-09-18T22:55:50.000Z | 2020-10-07T22:50:40.000Z | tests/examples/class_def_in_multi_files/level_two/k2.py | mosckital/vscode_python_mro | 7a61a4a4dc1e4bc73fcdc9d5242a5fee500d6b1d | [
"MIT"
] | null | null | null | from ..level_one.intermediate_defs import B, D, E
class K2(D, B, E): pass | 18.75 | 49 | 0.706667 | 15 | 75 | 3.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015873 | 0.16 | 75 | 4 | 50 | 18.75 | 0.793651 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e25bfce412c82a4abdf012bde3d673ecd26b03b0 | 28 | py | Python | src/funchain/__init__.py | eugenma/funchain | 53c9c71a10eaa587d8d7a7e9adc3e13046f2bf14 | [
"MIT"
] | null | null | null | src/funchain/__init__.py | eugenma/funchain | 53c9c71a10eaa587d8d7a7e9adc3e13046f2bf14 | [
"MIT"
] | null | null | null | src/funchain/__init__.py | eugenma/funchain | 53c9c71a10eaa587d8d7a7e9adc3e13046f2bf14 | [
"MIT"
] | null | null | null | from .funchain import Chain
| 14 | 27 | 0.821429 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e26d821cc27fa0591735af46c74542a630b4182e | 184 | py | Python | lunzi/serialization.py | roosephu/boots | 2f4f500f54feb95cf36abd863f3de4510d6f4950 | [
"MIT"
] | 13 | 2019-10-15T10:43:39.000Z | 2021-03-20T06:27:15.000Z | lunzi/serialization.py | roosephu/boots | 2f4f500f54feb95cf36abd863f3de4510d6f4950 | [
"MIT"
] | null | null | null | lunzi/serialization.py | roosephu/boots | 2f4f500f54feb95cf36abd863f3de4510d6f4950 | [
"MIT"
] | 6 | 2020-01-21T06:51:18.000Z | 2021-05-27T20:25:35.000Z | from typing import Union, IO, Any
import numpy as np
def save(obj: Any, file: Union[str, IO]):
np.save(file, obj)
def load(file: Union[str, IO]):
return np.load(file)[()]
| 15.333333 | 41 | 0.641304 | 32 | 184 | 3.6875 | 0.5 | 0.152542 | 0.20339 | 0.237288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201087 | 184 | 11 | 42 | 16.727273 | 0.802721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
e2a613d42012d3909e347ab788f02d69784ab4cf | 19 | py | Python | lang/Python/literals-string-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | 5 | 2021-01-29T20:08:05.000Z | 2022-03-22T06:16:05.000Z | lang/Python/literals-string-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/literals-string-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | 1 | 2021-04-13T04:19:31.000Z | 2021-04-13T04:19:31.000Z | r'\x20' == '\\x20'
| 9.5 | 18 | 0.368421 | 3 | 19 | 2.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0.157895 | 19 | 1 | 19 | 19 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e2f4b0f8cd59e2e4d77757356af0290bd9d99ca8 | 284 | py | Python | number_alt.py | takanoriyanagitani/row2columnar | ce6474227a0ace55762a9d3a92b76581e8dd3bd9 | [
"MIT"
] | null | null | null | number_alt.py | takanoriyanagitani/row2columnar | ce6474227a0ace55762a9d3a92b76581e8dd3bd9 | [
"MIT"
] | null | null | null | number_alt.py | takanoriyanagitani/row2columnar | ce6474227a0ace55762a9d3a92b76581e8dd3bd9 | [
"MIT"
] | null | null | null | import math
def number_alt_float_none(f=0.0, alt=0.0): return alt if None == f else f
def number_alt_float_nan(f=0.0, alt=0.0): return alt if math.isnan(f) else f
def number_alt_float_nn(f=0.0, alt=0.0):
return number_alt_float_nan(
number_alt_float_none(f, alt),
alt
)
| 28.4 | 77 | 0.71831 | 62 | 284 | 3.048387 | 0.241935 | 0.063492 | 0.37037 | 0.269841 | 0.714286 | 0.518519 | 0.518519 | 0.201058 | 0.201058 | 0 | 0 | 0.050209 | 0.158451 | 284 | 9 | 78 | 31.555556 | 0.740586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.125 | 0.375 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e2fae4c67334343112ceb3c28d8e06f09aed5cf1 | 68 | py | Python | sources/simulators/multiprocessing_simulator/__init__.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/multiprocessing_simulator/__init__.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/multiprocessing_simulator/__init__.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | from .multiprocessing_simulator import MultiprocessingBasedSimulator | 68 | 68 | 0.941176 | 5 | 68 | 12.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044118 | 68 | 1 | 68 | 68 | 0.969231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
392914d96b0c1972535e00cbfbc60c5dd08d022c | 44 | py | Python | models/__init__.py | SJTU-lqiu/QA4IE | 5b58612ad7a423279e0c6852750fcec3b3fde321 | [
"MIT"
] | 28 | 2018-05-02T01:37:41.000Z | 2021-06-13T04:21:15.000Z | models/__init__.py | SJTU-lqiu/QA4IE | 5b58612ad7a423279e0c6852750fcec3b3fde321 | [
"MIT"
] | 1 | 2019-08-21T09:55:38.000Z | 2019-08-26T01:15:37.000Z | models/__init__.py | SJTU-lqiu/QA4IE | 5b58612ad7a423279e0c6852750fcec3b3fde321 | [
"MIT"
] | 6 | 2018-05-06T13:58:04.000Z | 2021-08-24T05:25:18.000Z | from .qa4ie import QA4IESS, QA4IEQA, QA4IEAT | 44 | 44 | 0.818182 | 6 | 44 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0.113636 | 44 | 1 | 44 | 44 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a54a282a4e02fee15b483d99f74eea570df3f33 | 137 | py | Python | Python/8 kyu/Century From Year/solution.py | Hsins/CodeWars | 7e7b912fdd0647c0af381d8b566408e383ea5df8 | [
"MIT"
] | 1 | 2020-01-09T21:47:56.000Z | 2020-01-09T21:47:56.000Z | Python/8 kyu/Century From Year/solution.py | Hsins/CodeWars | 7e7b912fdd0647c0af381d8b566408e383ea5df8 | [
"MIT"
] | 1 | 2020-01-20T12:39:03.000Z | 2020-01-20T12:39:03.000Z | Python/8 kyu/Century From Year/solution.py | Hsins/CodeWars | 7e7b912fdd0647c0af381d8b566408e383ea5df8 | [
"MIT"
] | null | null | null | # [8 kyu] Century From Year
#
# Author: Hsins
# Date: 2019/12/21
import math
def century(year):
return math.ceil(year / 100)
| 13.7 | 32 | 0.635036 | 21 | 137 | 4.142857 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 0.240876 | 137 | 9 | 33 | 15.222222 | 0.721154 | 0.452555 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
203d53cc91d739330ed826054803efababa33538 | 57 | py | Python | app/repository/events.py | maestro-server/data-app | cde6479cc84fe410220b34742772d5017571e3d3 | [
"Apache-2.0"
] | null | null | null | app/repository/events.py | maestro-server/data-app | cde6479cc84fe410220b34742772d5017571e3d3 | [
"Apache-2.0"
] | 1 | 2019-11-21T17:06:31.000Z | 2019-11-21T17:06:31.000Z | app/repository/events.py | maestro-server/data-app | cde6479cc84fe410220b34742772d5017571e3d3 | [
"Apache-2.0"
] | null | null | null | from .model import Model
class Events(Model):
pass
| 9.5 | 24 | 0.701754 | 8 | 57 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22807 | 57 | 5 | 25 | 11.4 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
204d72987c9530a046e99b4624bc1892ca9faf39 | 69 | py | Python | Streamlit/pages/page1.py | jhockx/server-configuration | 106bc6c0a57eaa582486701c80aac4f968ef0ba0 | [
"MIT"
] | 1 | 2021-04-28T06:15:14.000Z | 2021-04-28T06:15:14.000Z | Streamlit/pages/page1.py | jhockx/server-configuration | 106bc6c0a57eaa582486701c80aac4f968ef0ba0 | [
"MIT"
] | null | null | null | Streamlit/pages/page1.py | jhockx/server-configuration | 106bc6c0a57eaa582486701c80aac4f968ef0ba0 | [
"MIT"
] | null | null | null | import streamlit as st
def main():
st.title('Page 1 -- TITLE')
| 11.5 | 31 | 0.623188 | 11 | 69 | 3.909091 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018868 | 0.231884 | 69 | 5 | 32 | 13.8 | 0.792453 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
645d2d79a46aad626c48e854ad7e5dcefb97dc2a | 100 | py | Python | testsuite/E71.py | dpursehouse/pep8 | 8d658692345e6866741719595f14a144337b3b9f | [
"MIT"
] | 1 | 2015-08-04T11:47:25.000Z | 2015-08-04T11:47:25.000Z | testsuite/E71.py | dpursehouse/pep8 | 8d658692345e6866741719595f14a144337b3b9f | [
"MIT"
] | null | null | null | testsuite/E71.py | dpursehouse/pep8 | 8d658692345e6866741719595f14a144337b3b9f | [
"MIT"
] | null | null | null | #: E712
if res == True:
pass
#: E712
if res != False:
pass
#: E711
if res == None:
pass
| 10 | 16 | 0.52 | 15 | 100 | 3.466667 | 0.533333 | 0.288462 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134328 | 0.33 | 100 | 9 | 17 | 11.111111 | 0.641791 | 0.18 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
646cfc1949f60de4e3efd6f01431e2bc12c6ce95 | 174 | py | Python | Django-Projects/djangoprojects/rootapp/views.py | Pratyush-Avi/Django-Project | e1173d87bab57dd165814589f0f64af20fc41049 | [
"MIT"
] | null | null | null | Django-Projects/djangoprojects/rootapp/views.py | Pratyush-Avi/Django-Project | e1173d87bab57dd165814589f0f64af20fc41049 | [
"MIT"
] | null | null | null | Django-Projects/djangoprojects/rootapp/views.py | Pratyush-Avi/Django-Project | e1173d87bab57dd165814589f0f64af20fc41049 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def root(request):
return HttpResponse("The server has started")
| 21.75 | 49 | 0.764368 | 23 | 174 | 5.782609 | 0.826087 | 0.150376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 174 | 7 | 50 | 24.857143 | 0.923611 | 0.132184 | 0 | 0 | 0 | 0 | 0.147651 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
648f1ca9e67a6b1b4b7dc5735a2d858453c25c4e | 150 | py | Python | pattern/8.py | itspuneet/itspuneet | d44f78afcff275aa56f03bba738ac3e4f2c30843 | [
"bzip2-1.0.6"
] | null | null | null | pattern/8.py | itspuneet/itspuneet | d44f78afcff275aa56f03bba738ac3e4f2c30843 | [
"bzip2-1.0.6"
] | null | null | null | pattern/8.py | itspuneet/itspuneet | d44f78afcff275aa56f03bba738ac3e4f2c30843 | [
"bzip2-1.0.6"
] | null | null | null | for i in range(5):
for j in range(i):
print(' ',end='')
for j in range((2*5-2*i)-1):
print('*',end='')
print()
| 18.75 | 33 | 0.406667 | 24 | 150 | 2.541667 | 0.416667 | 0.344262 | 0.196721 | 0.360656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.366667 | 150 | 7 | 34 | 21.428571 | 0.589474 | 0 | 0 | 0 | 0 | 0 | 0.013986 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
3754ad274835a492f5cbab68c4857816cfc43a1c | 67 | py | Python | testchild.py | stephaniecorwin/CTDSwk2 | 5f6cabfab98314c72a6ecb9f0470f68ca712fcb0 | [
"Unlicense"
] | null | null | null | testchild.py | stephaniecorwin/CTDSwk2 | 5f6cabfab98314c72a6ecb9f0470f68ca712fcb0 | [
"Unlicense"
] | null | null | null | testchild.py | stephaniecorwin/CTDSwk2 | 5f6cabfab98314c72a6ecb9f0470f68ca712fcb0 | [
"Unlicense"
] | null | null | null | ## Adding a new file in child branch
print ("Inside child branch")
| 22.333333 | 36 | 0.731343 | 11 | 67 | 4.454545 | 0.818182 | 0.44898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179104 | 67 | 2 | 37 | 33.5 | 0.890909 | 0.492537 | 0 | 0 | 0 | 0 | 0.612903 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
377d96c7df5de2e4bb3457c3cfe98f19c6bd3c8f | 27 | py | Python | admin/__init__.py | drnasmith/flask-ispyb-admin | eebf7ee9489e22265aa7cd23263a3bb74efa9a86 | [
"Apache-2.0"
] | null | null | null | admin/__init__.py | drnasmith/flask-ispyb-admin | eebf7ee9489e22265aa7cd23263a3bb74efa9a86 | [
"Apache-2.0"
] | null | null | null | admin/__init__.py | drnasmith/flask-ispyb-admin | eebf7ee9489e22265aa7cd23263a3bb74efa9a86 | [
"Apache-2.0"
] | null | null | null | from admin import init_app
| 13.5 | 26 | 0.851852 | 5 | 27 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
37893131ee38c3cc761c697d0fecc850e4358835 | 120 | py | Python | Ascending-binary-sorting.py | chandrikadeb7/LinkedIn-SDE-CodingSolutions | 8c9ef219a08e030c99f53f52db57d327550a0367 | [
"MIT"
] | 3 | 2021-03-12T08:14:23.000Z | 2021-07-16T06:47:40.000Z | Ascending-binary-sorting.py | chandrikadeb7/LinkedIn-SDE-CodingSolutions | 8c9ef219a08e030c99f53f52db57d327550a0367 | [
"MIT"
] | null | null | null | Ascending-binary-sorting.py | chandrikadeb7/LinkedIn-SDE-CodingSolutions | 8c9ef219a08e030c99f53f52db57d327550a0367 | [
"MIT"
] | 1 | 2021-04-17T18:06:24.000Z | 2021-04-17T18:06:24.000Z | def rearrange(elements):
# Write your code here
return sorted(elements, key=lambda x:(str(bin(x)).count('1'),x)) | 40 | 68 | 0.675 | 19 | 120 | 4.263158 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009804 | 0.15 | 120 | 3 | 68 | 40 | 0.784314 | 0.166667 | 0 | 0 | 0 | 0 | 0.010101 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
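The `rearrange` function above orders numbers by the count of 1-bits in their binary representation, breaking ties by numeric value. A minimal sketch of the same key function (with the redundant `str()` wrapper dropped, since `bin()` already returns a string):

```python
# Same sort key as rearrange in Ascending-binary-sorting.py:
# primary key = number of '1' bits, secondary key = the value itself.
def rearrange(elements):
    return sorted(elements, key=lambda x: (bin(x).count('1'), x))

# 1, 2 and 4 each have one set bit, 3 has two, 7 has three
print(rearrange([3, 7, 1, 4, 2]))  # [1, 2, 4, 3, 7]
```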
37998a05ca0d09a8a0b7a0f248a0053e58cc97c0 | 89 | py | Python | application/views/doctor/__init__.py | mrpoor/heart_telehealth | 74f6ea9400e0691207d42e9987cb60b3a4b1681c | [
"MIT"
] | null | null | null | application/views/doctor/__init__.py | mrpoor/heart_telehealth | 74f6ea9400e0691207d42e9987cb60b3a4b1681c | [
"MIT"
] | null | null | null | application/views/doctor/__init__.py | mrpoor/heart_telehealth | 74f6ea9400e0691207d42e9987cb60b3a4b1681c | [
"MIT"
] | null | null | null | from flask import Blueprint
doctor = Blueprint('doctor', __name__)
from . import views
| 14.833333 | 38 | 0.764045 | 11 | 89 | 5.818182 | 0.636364 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157303 | 89 | 5 | 39 | 17.8 | 0.853333 | 0 | 0 | 0 | 0 | 0 | 0.067416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
80895058b0f82ecfe56051254c8769ab8fdf2339 | 30,272 | py | Python | guidance_plugin_validator/src/guidance_plugin_validator/guidance_plugin_validator.py | usdot-fhwa-stol/carma-platform | d45a1afbf1efdb0b8cd62fcec5a3033b7306df33 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | 112 | 2020-04-27T17:06:46.000Z | 2022-03-31T15:27:14.000Z | guidance_plugin_validator/src/guidance_plugin_validator/guidance_plugin_validator.py | usdot-fhwa-stol/carma-platform | d45a1afbf1efdb0b8cd62fcec5a3033b7306df33 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | 982 | 2020-04-17T11:28:04.000Z | 2022-03-31T21:12:19.000Z | guidance_plugin_validator/src/guidance_plugin_validator/guidance_plugin_validator.py | usdot-fhwa-stol/carma-platform | d45a1afbf1efdb0b8cd62fcec5a3033b7306df33 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | 57 | 2020-05-07T15:48:11.000Z | 2022-03-09T23:31:45.000Z | #!/usr/bin/env python
"""
* Copyright (C) 2021 LEIDOS.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not
* use this file except in compliance with the License. You may obtain a copy of
* the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
"""
import rospy
import rosnode
import guidance_plugin_components
from cav_msgs.msg import Plugin
from cav_msgs.msg import SystemAlert
class GuidancePluginValidator:
"""
Primary class for the guidance_plugin_validator node. Conducts the validation of each of the
Guidance Plugins (of type 'Strategic', 'Tactical', or 'Control-Wrapper') as provided by this node's
configuration parameters.
"""
def __init__(self):
"""Default constructor for GuidancePluginValidator"""
# Create plugin_discovery subscriber
self.plugin_discovery_sub = rospy.Subscriber("plugin_discovery", Plugin, self.plugin_discovery_cb)
self.system_alert_sub = rospy.Subscriber("system_alert", SystemAlert, self.system_alert_cb)
# Read in config params
self.validation_duration = rospy.get_param('~validation_duration', 300) # Maximum time (sec) that node will spend conducting validation before results are considered final
self.strategic_plugin_names = rospy.get_param('~strategic_plugins_to_validate', [])
self.tactical_plugin_names = rospy.get_param('~tactical_plugins_to_validate', [])
self.control_plugin_names = rospy.get_param('~control_plugins_to_validate', [])
# Write config params to log file
rospy.loginfo("Config params for guidance_plugin_validator:")
rospy.loginfo("Validation Duration: " + str(self.validation_duration) + " seconds")
rospy.loginfo("Strategic Guidance Plugins: " + str(self.strategic_plugin_names))
rospy.loginfo("Tactical Guidance Plugins: " + str(self.tactical_plugin_names))
rospy.loginfo("Control Guidance Plugins: " + str(self.control_plugin_names))
# Boolean flag to indicate whether drivers are ready (this indicates that plugin node validation checks can begin)
self.has_startup_completed = False
# Boolean flag to indicate whether each guidance plugin's node has been validated
self.has_node_validation_completed = False
# Boolean flag to indicate whether final results have been written to log file
self.has_logged_final_results = False
# Set spin rate
self.spin_rate = rospy.Rate(10) # 10 Hz
# Initialize empty dicts that will be populated with a <plugin-type>PluginResults object for each Guidance Plugin
self.strategic_plugin_validation_results = {} # Key is plugin's name; Value is plugin's StrategicPluginResults object
self.tactical_plugin_validation_results = {} # Key is plugin's name; Value is plugin's TacticalPluginResults object
self.control_plugin_validation_results = {} # Key is plugin's name; Value is plugin's ControlPluginResults object
# Call member function to populate the 'validation results' dicts
self.populate_results_dicts(self.strategic_plugin_names, self.tactical_plugin_names, self.control_plugin_names)
def populate_results_dicts(self, strategic_plugin_names, tactical_plugin_names, control_plugin_names):
"""Initialize the the 'validation results' lists and 'index by name' dicts for this object"""
# Populate validation results dict for Strategic Plugins
for plugin_name in strategic_plugin_names:
self.strategic_plugin_validation_results[plugin_name] = guidance_plugin_components.StrategicPluginResults(plugin_name)
# Populate validation results dict for Tactical Plugins
for plugin_name in tactical_plugin_names:
self.tactical_plugin_validation_results[plugin_name] = guidance_plugin_components.TacticalPluginResults(plugin_name)
# Populate validation results dict for Control Plugins
for plugin_name in control_plugin_names:
self.control_plugin_validation_results[plugin_name] = guidance_plugin_components.ControlPluginResults(plugin_name)
return
def spin(self):
"""
Function to ensure node spins at configured spin rate.
"""
while not rospy.is_shutdown():
if self.has_startup_completed:
# Conduct node validation if it has not yet occurred
if not self.has_node_validation_completed:
self.conduct_node_validation()
self.has_node_validation_completed = True
# If time has surpassed the configured validation duration, the current results are considered final. Write to log file.
seconds_since_startup_completed = rospy.get_time() - self.start_time_seconds
if (seconds_since_startup_completed >= self.validation_duration):
if not self.has_logged_final_results:
self.log_final_results_for_each_plugin()
self.has_logged_final_results = True
self.spin_rate.sleep()
return
def log_final_results_for_each_plugin(self):
"""
Calls appropriate function for each plugin's 'results' object in order to write
all final validation results to the log file for this node.
"""
rospy.loginfo("**********************************************************")
rospy.loginfo("******Final Validation Results for Strategic Plugins******")
rospy.loginfo("**********************************************************")
# Write final validation results to log file for Guidance Strategic Plugins
for plugin_name, plugin_results_object in self.strategic_plugin_validation_results.items():
plugin_results_object.write_strategic_final_results_to_logs()
rospy.loginfo("**********************************************************")
rospy.loginfo("******Final Validation Results for Tactical Plugins*******")
rospy.loginfo("**********************************************************")
# Write final validation results to log file for Guidance Tactical Plugins
for plugin_name, plugin_results_object in self.tactical_plugin_validation_results.items():
plugin_results_object.write_tactical_final_results_to_logs()
rospy.loginfo("**********************************************************")
rospy.loginfo("*******Final Validation Results for Control Plugins*******")
rospy.loginfo("**********************************************************")
# Write final validation results to log file for Guidance Control Plugins
for plugin_name, plugin_results_object in self.control_plugin_validation_results.items():
plugin_results_object.write_control_final_results_to_logs()
rospy.loginfo("**********************************************************")
rospy.loginfo("*******End of Final Validation Results for Plugins********")
rospy.loginfo("**********************************************************")
return
def system_alert_cb(self, msg):
"""
Callback function for the system_alert topic. The Guidance Plugin Validator Node will begin conducting
validation checks on each plugin's node after a 'DRIVERS_READY' alert has been received.
"""
# Startup has completed when drivers are ready
if msg.type == SystemAlert.DRIVERS_READY:
rospy.loginfo("DRIVERS_READY message received. Beginning node validation.")
self.start_time_seconds = rospy.get_time()
self.has_startup_completed = True
return
def plugin_discovery_cb(self, msg):
"""
Callback function for the plugin_discovery topic. Processes the first received message for each guidance
plugin (as specified by this node's configuration parameters), and updates the plugin's 'results'
object accordingly.
"""
# Get the name of this plugin based on the message
plugin_name = msg.name
# Validate the plugin_discovery message based on the plugin's type (Strategic, Tactical, or Control)
if plugin_name in self.strategic_plugin_names:
# Do not process message if this plugin has already had its plugin_discovery message validated
# Note: Assumption is that once one message is received and processed for a plugin, the rest will be identical
if self.strategic_plugin_validation_results[plugin_name].has_had_plugin_discovery_message_validated:
return
# Process the message and log appropriate messages
rospy.loginfo("Processing plugin_discovery message for " + str(plugin_name) + " (Strategic Plugin)")
self.strategic_plugin_validation_results[plugin_name].has_had_plugin_discovery_message_validated = True
expected_capability = self.strategic_plugin_validation_results[plugin_name].requirement_results.correct_plugin_discovery_capability
if msg.capability == expected_capability:
self.strategic_plugin_validation_results[plugin_name].requirement_results.has_correct_plugin_discovery_capability = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery capability " + str(expected_capability))
else:
rospy.logerr("ERROR: " + str(plugin_name) + " plugin_discovery capability == " + str(msg.capability) + " (expected + " + str(expected_capability) + ")")
expected_type = self.strategic_plugin_validation_results[plugin_name].requirement_results.correct_plugin_discovery_type
if msg.type == expected_type:
self.strategic_plugin_validation_results[plugin_name].requirement_results.has_correct_plugin_discovery_type = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery type " + str(expected_type))
else:
rospy.logerr("ERROR: " + str(plugin_name) + " plugin_discovery type == " + str(msg.type) + " (expected + " + str(expected_type) + ")")
if msg.available == True:
self.strategic_plugin_validation_results[plugin_name].optional_results.has_correct_plugin_discovery_available = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery availabile == True")
else:
rospy.logwarn("WARNING: " + str(plugin_name) + " plugin_discovery available == " + str(msg.available) + " (expected True)")
if msg.activated == True:
self.strategic_plugin_validation_results[plugin_name].optional_results.has_correct_plugin_discovery_activated = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery activated == True")
else:
rospy.logwarn("WARNING: " + str(plugin_name) + " plugin_discovery activated == " + str(msg.activated) + " (expected True)")
elif plugin_name in self.tactical_plugin_names:
# Do not process message if this plugin has already had its plugin_discovery message validated
# Note: Assumption is that once one message is received and processed for a plugin, the rest will be identical
if self.tactical_plugin_validation_results[plugin_name].has_had_plugin_discovery_message_validated:
return
# Process the message and log appropriate messages
rospy.loginfo("Processing plugin_discovery message for " + str(plugin_name) + " (Tactical Plugin)")
self.tactical_plugin_validation_results[plugin_name].has_had_plugin_discovery_message_validated = True
expected_capability = self.tactical_plugin_validation_results[plugin_name].requirement_results.correct_plugin_discovery_capability
if msg.capability == expected_capability:
self.tactical_plugin_validation_results[plugin_name].requirement_results.has_correct_plugin_discovery_capability = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery capability " + str(expected_capability))
else:
rospy.logerr("ERROR: " + str(plugin_name) + " plugin_discovery capability == " + str(msg.capability) + " (expected + " + str(expected_capability) + ")")
expected_type = self.tactical_plugin_validation_results[plugin_name].requirement_results.correct_plugin_discovery_type
if msg.type == expected_type:
self.tactical_plugin_validation_results[plugin_name].requirement_results.has_correct_plugin_discovery_type = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery type " + str(expected_type))
else:
rospy.logerr("ERROR: " + str(plugin_name) + " plugin_discovery type == " + str(msg.type) + " (expected + " + str(expected_type) + ")")
if msg.available == True:
self.tactical_plugin_validation_results[plugin_name].optional_results.has_correct_plugin_discovery_available = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery availabile == True")
else:
rospy.logwarn("WARNING: " + str(plugin_name) + " plugin_discovery availability == " + str(msg.available) + " (expected True)")
if msg.activated == True:
self.tactical_plugin_validation_results[plugin_name].optional_results.has_correct_plugin_discovery_activated = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery activated == True")
else:
rospy.logwarn("WARNING: " + str(plugin_name) + " plugin_discovery activated == " + str(msg.activated) + " (expected True)")
elif plugin_name in self.control_plugin_names:
# Do not process message if this plugin has already had its plugin_discovery message validated
# Note: Assumption is that once one message is received and processed for a plugin, the rest will be identical
if self.control_plugin_validation_results[plugin_name].has_had_plugin_discovery_message_validated:
return
# Process the message and log appropriate messages
rospy.loginfo("Processing plugin_discovery message for " + str(plugin_name) + " (Control Plugin)")
self.control_plugin_validation_results[plugin_name].has_had_plugin_discovery_message_validated = True
expected_capability = self.control_plugin_validation_results[plugin_name].requirement_results.correct_plugin_discovery_capability
if msg.capability == expected_capability:
self.control_plugin_validation_results[plugin_name].requirement_results.has_correct_plugin_discovery_capability = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery capability " + str(expected_capability))
else:
rospy.logerr("ERROR: " + str(plugin_name) + " plugin_discovery capability == " + str(msg.capability) + " (expected + " + str(expected_capability) + ")")
expected_type = self.control_plugin_validation_results[plugin_name].requirement_results.correct_plugin_discovery_type
if msg.type == expected_type:
self.control_plugin_validation_results[plugin_name].requirement_results.has_correct_plugin_discovery_type = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery type " + str(expected_type))
else:
rospy.logerr("ERROR: " + str(plugin_name) + " plugin_discovery type == " + str(msg.type) + " (expected + " + str(expected_type) + ")")
if msg.available == True:
self.control_plugin_validation_results[plugin_name].optional_results.has_correct_plugin_discovery_available = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery availabile == True")
else:
rospy.logwarn("WARNING: " + str(plugin_name) + " plugin_discovery availability == " + str(msg.available) + " (expected True)")
if msg.activated == True:
self.control_plugin_validation_results[plugin_name].optional_results.has_correct_plugin_discovery_activated = True
rospy.loginfo("Success: " + str(plugin_name) + " has plugin_discovery activated == True")
else:
rospy.logwarn("WARNING: " + str(plugin_name) + " plugin_discovery activated == " + str(msg.activated) + " (expected True)")
return
def conduct_node_validation(self):
"""
Call appropriate member functions to conduct validation of node communication interfaces.
"""
rospy.loginfo("Beginning validation checks for node subscriptions, publications, and advertised services")
self.validate_strategic_plugins()
self.validate_tactical_plugins()
self.validate_control_plugins()
rospy.loginfo("Completed validation checks for node subscriptions, publications, and advertised services")
return
def validate_strategic_plugins(self):
"""
Conduct validation checks for each strategic plugin's node (as specified by this node's
configuration parameters) for proper publications, subscriptions, and advertised services. Based on the
results, this function updates each strategic plugin's StrategicPluginResults object accordingly.
"""
for plugin_name, plugin_results_object in self.strategic_plugin_validation_results.items():
plugin_node_name = plugin_results_object.node_name
rospy.loginfo("Processing publishers, subscribers, and services for " + str(plugin_name) + " (Strategic Plugin)")
# Check whether the node has been created
if rosnode.rosnode_ping(plugin_node_name, max_count = 5):
plugin_results_object.requirement_results.has_node = True
rospy.loginfo("Success: Node " + str(plugin_node_name) + " exists.")
else:
rospy.logerr("ERROR: No node response for " + str(plugin_node_name) + ". Node does not exist.")
# Obtain string that includes information regarding a node's publications, subscriptions, and services
rosnode_info_string = (rosnode.get_node_info_description(plugin_node_name))
# Get substring from rosnode info that contains 'Subscriptions' information
sub_index_start = rosnode_info_string.index("Subscriptions:")
sub_index_end = rosnode_info_string.index("Services:")
subscriptions_string = rosnode_info_string[sub_index_start:sub_index_end]
# Check for required and optional subscriptions
if plugin_results_object.optional_results.current_pose_topic in subscriptions_string:
plugin_results_object.optional_results.has_current_pose_sub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " subscribes to " + str(plugin_results_object.optional_results.current_pose_topic))
else:
rospy.logwarn("WARNING: " + str(plugin_node_name) + " does not subscribe to " + str(plugin_results_object.optional_results.current_pose_topic))
if plugin_results_object.optional_results.current_speed_topic in subscriptions_string:
plugin_results_object.optional_results.has_current_speed_sub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " subscribes to " + str(plugin_results_object.optional_results.current_speed_topic))
else:
rospy.logwarn("WARNING: " + str(plugin_node_name) + " does not subscribe to " + str(plugin_results_object.optional_results.current_speed_topic))
# Get substring from rosnode info that contains 'Publications' information
pub_index_start = rosnode_info_string.index("Publications:")
pub_index_end = rosnode_info_string.index("Subscriptions:")
publications_string = rosnode_info_string[pub_index_start:pub_index_end]
# Check for required and optional publications
if plugin_results_object.requirement_results.plugin_discovery_topic in publications_string:
plugin_results_object.requirement_results.has_plugin_discovery_pub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " publishes to " + str(plugin_results_object.requirement_results.plugin_discovery_topic))
else:
rospy.logerr("ERROR: " + str(plugin_node_name) + " does not publish to " + str(plugin_results_object.requirement_results.plugin_discovery_topic))
# Get substring from rosnode info that contains 'Services' information
serv_index_start = rosnode_info_string.index("Services:")
services_string = rosnode_info_string[serv_index_start:]
# Check for required and optional servers
if plugin_results_object.requirement_results.plan_maneuvers_service in services_string:
plugin_results_object.requirement_results.has_plan_maneuvers_service = True
rospy.loginfo("Success: " + str(plugin_node_name) + " advertises service " + str(plugin_results_object.requirement_results.plan_maneuvers_service))
else:
rospy.logerr("ERROR: " + str(plugin_node_name) + " does not advertise service " + str(plugin_results_object.requirement_results.plan_maneuvers_service))
return
def validate_tactical_plugins(self):
"""
Conduct validation checks for each tactical plugin's node (as specified by this node's
configuration parameters) for proper publications, subscriptions, and advertised services. Based on the
results, this function updates each tactical plugin's TacticalPluginResults object accordingly.
"""
for plugin_name, plugin_results_object in self.tactical_plugin_validation_results.items():
plugin_node_name = plugin_results_object.node_name
rospy.loginfo("Processing publishers, subscribers, and services for " + str(plugin_name) + " (Tactical Plugin)")
# Check whether the node has been created
if rosnode.rosnode_ping(plugin_node_name, max_count = 5):
plugin_results_object.requirement_results.has_node = True
rospy.loginfo("Success: Node " + str(plugin_node_name) + " exists.")
else:
rospy.logerr("ERROR: No node response for " + str(plugin_node_name) + ". Node does not exist.")
# Obtain string that includes information regarding a node's publications, subscriptions, and services
rosnode_info_string = (rosnode.get_node_info_description(plugin_node_name))
# Get substring from rosnode info that contains 'Subscriptions' information
sub_index_start = rosnode_info_string.index("Subscriptions:")
sub_index_end = rosnode_info_string.index("Services:")
subscriptions_string = rosnode_info_string[sub_index_start:sub_index_end]
# Check for required and optional subscriptions
if plugin_results_object.optional_results.current_pose_topic in subscriptions_string:
plugin_results_object.optional_results.has_current_pose_sub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " subscribes to " + str(plugin_results_object.optional_results.current_pose_topic))
else:
rospy.logwarn("WARNING: " + str(plugin_node_name) + " does not subscribe to " + str(plugin_results_object.optional_results.current_pose_topic))
if plugin_results_object.optional_results.current_speed_topic in subscriptions_string:
plugin_results_object.optional_results.has_current_speed_sub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " subscribes to " + str(plugin_results_object.optional_results.current_speed_topic))
else:
rospy.logwarn("WARNING: " + str(plugin_node_name) + " does not subscribe to " + str(plugin_results_object.optional_results.current_speed_topic))
# Get substring from rosnode info that contains 'Publications' information
pub_index_start = rosnode_info_string.index("Publications:")
pub_index_end = rosnode_info_string.index("Subscriptions:")
publications_string = rosnode_info_string[pub_index_start:pub_index_end]
# Check for required and optional publications
if plugin_results_object.requirement_results.plugin_discovery_topic in publications_string:
plugin_results_object.requirement_results.has_plugin_discovery_pub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " publishes to " + str(plugin_results_object.requirement_results.plugin_discovery_topic))
else:
rospy.logerr("ERROR: " + str(plugin_node_name) + " does not publish to " + str(plugin_results_object.requirement_results.plugin_discovery_topic))
# Get substring from rosnode info that contains 'Services' information
serv_index_start = rosnode_info_string.index("Services:")
services_string = rosnode_info_string[serv_index_start:]
# Check for required and optional servers
if plugin_results_object.requirement_results.plan_trajectory_service in services_string:
plugin_results_object.requirement_results.has_plan_trajectory_service = True
rospy.loginfo("Success: " + str(plugin_node_name) + " advertises service " + str(plugin_results_object.requirement_results.plan_trajectory_service))
else:
rospy.logerr("ERROR: " + str(plugin_node_name) + " does not advertise service " + str(plugin_results_object.requirement_results.plan_trajectory_service))
return
def validate_control_plugins(self):
"""
Conduct validation checks for each control plugin's node (as specified by this node's
configuration parameters) for proper publications, subscriptions, and advertised services. Based on the
results, this function updates each control plugin's ControlPluginResults object accordingly.
"""
for plugin_name, plugin_results_object in self.control_plugin_validation_results.items():
plugin_node_name = plugin_results_object.node_name
rospy.loginfo("Processing publishers, subscribers, and services for " + str(plugin_name) + " (Control Plugin)")
# Check whether the node has been created
if rosnode.rosnode_ping(plugin_node_name, max_count = 5):
plugin_results_object.requirement_results.has_node = True
rospy.loginfo("Success: Node " + str(plugin_node_name) + " exists.")
else:
rospy.logerr("ERROR: No node response for " + str(plugin_node_name) + ". Node does not exist.")
# Obtain string that includes information regarding a node's publications, subscriptions, and services
rosnode_info_string = (rosnode.get_node_info_description(plugin_node_name))
# Get substring from rosnode info that contains 'Subscriptions' information
sub_index_start = rosnode_info_string.index("Subscriptions:")
sub_index_end = rosnode_info_string.index("Services:")
subscriptions_string = rosnode_info_string[sub_index_start:sub_index_end]
# Check for required and optional subscriptions
if plugin_results_object.requirement_results.plan_trajectory_topic in subscriptions_string:
plugin_results_object.requirement_results.has_plan_trajectory_sub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " subscribes to " + str(plugin_results_object.requirement_results.plan_trajectory_topic))
else:
rospy.logerr("ERROR: " + str(plugin_node_name) + " does not subscribe to " + str(plugin_results_object.requirement_results.plan_trajectory_topic))
# Get substring from rosnode info that contains 'Publications' information
pub_index_start = rosnode_info_string.index("Publications:")
pub_index_end = rosnode_info_string.index("Subscriptions:")
publications_string = rosnode_info_string[pub_index_start:pub_index_end]
# Check for required and optional publications
if plugin_results_object.requirement_results.plugin_discovery_topic in publications_string:
plugin_results_object.requirement_results.has_plugin_discovery_pub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " publishes to " + str(plugin_results_object.requirement_results.plugin_discovery_topic))
else:
rospy.logerr("ERROR: " + str(plugin_node_name) + " does not publish to " + str(plugin_results_object.requirement_results.plugin_discovery_topic))
if plugin_results_object.requirement_results.final_waypoints_topic in publications_string:
plugin_results_object.requirement_results.has_final_waypoints_pub = True
rospy.loginfo("Success: " + str(plugin_node_name) + " publishes to " + str(plugin_results_object.requirement_results.final_waypoints_topic))
else:
rospy.logerr("ERROR: " + str(plugin_node_name) + " does not publish to " + str(plugin_results_object.requirement_results.final_waypoints_topic))
return | 63.730526 | 179 | 0.687632 | 3,423 | 30,272 | 5.786445 | 0.084721 | 0.036351 | 0.056596 | 0.046953 | 0.810572 | 0.773969 | 0.76786 | 0.739789 | 0.716161 | 0.704599 | 0 | 0.000765 | 0.22311 | 30,272 | 475 | 180 | 63.730526 | 0.841441 | 0.205603 | 0 | 0.565693 | 0 | 0 | 0.159197 | 0.024372 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036496 | false | 0 | 0.018248 | 0 | 0.10219 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
809cb1571b61cb14de4143b75d6cc3c36478794b | 34 | py | Python | utils/models/pykao/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | 3 | 2022-01-18T19:25:46.000Z | 2022-02-05T18:53:24.000Z | utils/models/pykao/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | null | null | null | utils/models/pykao/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | null | null | null | from .model import Modified3DUNet
| 17 | 33 | 0.852941 | 4 | 34 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
809e2ab2119e26a91e93865ccb20a6e0b3f8c2c0 | 30 | py | Python | CADMium/inverter/__init__.py | VHchavez/CADMium | 39f3bd63ca69502a80c677855da72f9e691b57e2 | [
"BSD-3-Clause"
] | null | null | null | CADMium/inverter/__init__.py | VHchavez/CADMium | 39f3bd63ca69502a80c677855da72f9e691b57e2 | [
"BSD-3-Clause"
] | 1 | 2021-04-23T20:38:38.000Z | 2021-04-23T20:38:38.000Z | CADMium/inverter/__init__.py | VHchavez/CADMium | 39f3bd63ca69502a80c677855da72f9e691b57e2 | [
"BSD-3-Clause"
] | 2 | 2020-10-07T20:48:56.000Z | 2021-04-22T19:06:18.000Z | from .inverter import Inverter | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |