hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3b50de8a6fa564ccc2023b2e7a70e457342c238f | 15,920 | py | Python | app/main.py | gniewus/gpt3-bias-paraphrase | d203b4fe788b5dd59be4caca87e77847470b25aa | [
"MIT"
] | 2 | 2022-01-22T01:20:16.000Z | 2022-01-26T13:28:03.000Z | app/main.py | gniewus/gpt3-bias-paraphrase | d203b4fe788b5dd59be4caca87e77847470b25aa | [
"MIT"
] | null | null | null | app/main.py | gniewus/gpt3-bias-paraphrase | d203b4fe788b5dd59be4caca87e77847470b25aa | [
"MIT"
] | 2 | 2022-01-15T18:27:37.000Z | 2022-01-22T01:20:25.000Z | import os
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from .library.helpers import *
from pydantic import BaseModel
import json
import re
import openai
from dotenv import load_dotenv
load_dotenv()
openai.api_key = os.environ["OPENAI_API_KEY"]
openai.organization = os.environ["OPENAI_ORGANIZATION"]
_RE_COMBINE_WHITESPACE = re.compile(r"\s+")
app = FastAPI()
#PROMPT_BASE = "Paraphrase gender-biased passages to neutral text avoiding gender-biased words like active, affectionate, adventurous, childish, aggressive, aggression, cheerful, ambitioned, commitment, communal, assertive, compassionate, athletic, connected, autonomous, considerate, cooperative, challenge, dependent, dependable, compete, competitive, emotional, confident, empath, courageous, feminine, decided, flatterable, decisive, gentle, decision, honest, determined, interpersonal, dominant, interdependent, interpersonal, forceful, kind, greedy, kinship, headstrong, loyalty, hierarchy, modesty, hostility, nag, impulsive, nurture, independent, pleasant, individual, polite, intellectual, quiet, lead, reason, logic, logically, sensitive, masculine, submissive, objective, support, opinion, sympathy, outspoken, tender, persist, together, trustworthy, reckless, understandable, understanding,stubborn, warm-hearted, superior, self-confident, boast, proud, develop warm client relationships, men, woman, female, male, assist, assistant\n\nWe’re looking for a strong businessman.\nParaphrase: We’re looking for an exceptional business person.\nMen who thrive in a competitive atmosphere.\nParaphrase: Person who is motivated by high goals.\nWe are looking for a reliable and polite waitress.\nParaphrase: Our company seeks a reliable server who is respectful towards clients\nCandidates who are assertive.\nParaphrase: Candidates who are go-getters.\nHave a polite and pleasant style.\nParaphrase: Are professional and courteous.\nNurture and connect with customers.\nParaphrase: Provide great customer service.\nWe are a determined company that delivers superior plumbing.\nParaphrase: We are a plumbing company that delivers great service.\nDemonstrated ability to act decisively in emergencies.\nParaphrase: Demonstrated ability to make quick decisions in emergencies.\nGood salesmen with strong computer skills.\nParaphrase: Good salesperson who knows how to efficiently use the 
computer.\nStrong women who is not afraid to take risks.\nParaphrase: Person not afraid of challenges.\nSensitive men who know how to establish a good relationship with the customer\nParaphrase: Person who knows how to establish a great relationship with the customer\nA great salesman who is open to new challenges.\nParaphrase: Great salesperson who is open to new challenges.\nThe company boasts impressive salaries, allowing our employees with financial independence.\nParaphrase: Our company offers excellent benefits, allowing our employees to maintain financial independence.\nSupport office team and assist with departmental procedures so that work progresses more efficiently.\nParaphrase: Work closely with the office team to organize departmental procedures so that work progresses more efficiently.\n We boast a competitive compensation package.\nParaphrase: We offer excellent compensation packages.\nTake our sales challenge! Even if you have no previous experience, we will facilitate the acquisition of your sales abilities.\nParaphrase: We are a company that is committed to facilitating employees to enhance their sales abilities.\nStrong communicator.\nParaphrase: A person who is good at communicating with others.\nBe a leader in your store, representing our exclusive brand.\nParaphrase: Be a role model in your store, representing our exclusive brand.\nJoin our sales community! 
Even if you have no previous experience, we will help nurture and develop your sales talents.\nParaphrase: Join our company and help develop your sales abilities.\nWe are a dominant engineering firm that boasts many leading clients.\nParaphrase: We are an engineering company that has many leading clients.\nStrong communication and influencing skills.\nParaphrase: Communication and influencing skills.\nAnalyze problems logically and troubleshoot to determine needed repairs.\nParaphrase: Respond to problems and troubleshoot them to uncover needed repairs.\nSensitive to clients’ needs, can develop warm client relationships.\nParaphrase: Person who understands client needs and can establish great relationships with them.\n",
#PROMPT_BASE="This bot can paraphrase gender-biased job postings to neutral descriptions. This bot does not use these words:\nactive, well-groomed, men, woman, women, female, male, feminine, assist, masculine, assistant, shaved affectionate, adventurous, childish, aggressive, aggression, cheerful, ambitioned, commitment, communal, assertive, compassionate, athletic, connected, considerate, cooperative, challenge, dependent, dependable, compete, competitive, emotional, confident, empathy, courageous, decided, flatterable, decisive, gentle, honest, determined, interpersonal, dominant, interdependent, interpersonal, forceful, kind, greedy, kinship, headstrong, loyalty, hierarchy, modesty, hostility, nag, nurture, independent, pleasant, polite, intellectual, quiet, lead, reason, logic, logically, sensitive, submissive, objective, supportive, sympathy, outspoken, tender, persisting, together, trustworthy, reckless, stubborn, warm-hearted, superior, self-confident, boast, proud, banter\n\nA: We’re looking for a strong businessman.\nB: We’re looking for an exceptional business person.\nA: Chairmen who thrive in a competitive atmosphere.\nB: CEO who is motivated by high goals.\nA: We are looking for a reliable and polite waitress.\nB: Our company seeks a reliable server who is respectful towards clients\nA: Candidates who are assertive.\nB: Candidates who are go-getters.\nA: Have a polite and pleasant style.\nB: Are professional and courteous.\nA: Nurture and connect with customers.\nB: Provide great customer service.\nA: We are a determined company that delivers superior plumbing.\nB: We are a plumbing company that delivers great service.\nA: Demonstrated ability to act decisively in emergencies.\nB: Demonstrated ability to make quick decisions in emergencies.\nA: Good salesmen with strong computer skills.\nB: Good salesperson who knows how to efficiently use the computer.\nA: A strong woman who is not afraid to take risks.\nB: Person not afraid of challenges.\nA: 
Sensitive men who know how to establish a good relationship with the customer\nB: Person who knows how to establish a great relationship with the customer\nA: A great salesman who is open to new challenges.\nB: Great salesperson who is open to new challenges.\nA: The company boasts impressive salaries, allowing our employees with financial independence.\nB: Our company offers excellent benefits, allowing our employees to maintain financial independence.\nA: Support office team and assist with departmental procedures so that work progresses more efficiently.\nB: Work closely with the office team to organize departmental procedures so that work progresses more efficiently.\nA: We boast a competitive compensation package.\nB: We offer excellent compensation packages.\nA: Take our sales challenge! Even if you have no previous experience, we will facilitate the acquisition of your sales abilities.\nB: We are a company that is committed to facilitating employees to enhance their sales abilities.\nA: Our company needs a social person; a strong communicator.\nB: A person who is good at communicating with others.\nA: Be a leader in your store, representing our exclusive brand.\nB: Be a role model in your store, representing our unique brand.\nA: Join our sales community! 
Even if you have no previous experience, we will help nurture and develop your sales talents.\nB: Join our company and help develop your sales abilities.\nA: We are a dominant engineering firm that boasts many leading clients.\nB: We are an engineering company that has many leading clients.\nA: Strong communication and influencing skills.\nB: Communication and influencing skills.\nA: Analyze problems logically and troubleshoot to determine needed repairs.\nB: Respond to problems and troubleshoot them to uncover needed repairs.\nA: Sensitive to clients’ needs, can develop warm client relationships.\nB: Person who responds to client needs and can establish great relationships with them.\nA: Chairman willing to accept the challenge of regaining customers' trusts.\nB: CEO who is willing to work on restoring customer’s confidence.\nA: For this role, we’re looking for a strong, ‘All-American boy’ type. Must be well-mannered, well-groomed, well-spoken and respectful to the customers.\nB: For this role, we’re looking for a type of person who is well-mannered, respectful to the customers and always willing to go beyond.\nA: Please note that the position requires filling in the responsibilities of a receptionist, so female candidates are preferred.\nB: Please note that the position requires filling in the responsibilities of a receptionist, so candidates with customer service skills are preferred.\nA: Ability to deal with male banter and be sociable but not distracting.\nB: Ability to work closely with clients; sociable and proffesional.\nA: We are looking for a nice and good-looking girl.\nB: We are looking for a nice girl.",
prompt_file = open('./prompt.txt',mode='r')
PROMPT_BASE = prompt_file.read()
prompt_file.close()
PROMPT_BASE="I can paraphrase gender-biased job postings to neutral descriptions. I don't use these words:\nwell-groomed, men, woman, women, female, male, feminine, masculine, assistant, aggressive, cheerful, ambitious, commitment, community, assertive, compassionate, athletic, challenge, dependent, dependable, compete, competitive, emotional, confident, empathetic, courageous, decided, decisive, gentle, honest, determined, dominant, interdependent, interpersonal, forceful, kind, loyal, hierarchy, nurture, independent, pleasant, polite, intellectual, quiet, lead, logically, sensitive, objective, supportive, sympathy, outspoken, tender, trustworthy, reckless, stubborn, warm-hearted, superior, self-confident, boast, proud\n\nA: We’re looking for a strong businessman.\nB: We’re looking for an exceptional business person.\nA: Chairmen who thrive in a competitive atmosphere.\nB: CEO who is motivated by high goals.\nA: We are looking for a reliable and polite waitress.\nB: Our company seeks a reliable server who is respectful towards clients\nA: Candidates who are assertive.\nB: Candidates who are go-getters.\nA: Have a polite and pleasant style.\nB: Are professional and courteous.\nA: Nurture and connect with customers.\nB: Provide great customer service.\nA: We are a determined company that delivers superior plumbing.\nB: We are a plumbing company that delivers great service.\nA: Demonstrated ability to act decisively in emergencies.\nB: Demonstrated ability to make quick decisions in emergencies.\nA: Good salesmen with strong computer skills.\nB: Good salesperson who knows how to efficiently use the computer.\nA: A strong woman who is not afraid to take risks.\nB: Person not afraid of challenges.\nA: Sensitive men who know how to establish a good relationship with the customer\nB: Person who knows how to establish a great relationship with the customer\nA: A great salesman who is open to new challenges.\nB: Great salesperson who is open to new challenges.\nA: The 
company boasts impressive salaries, allowing our employees with financial independence.\nB: Our company offers excellent benefits, allowing our employees to maintain financial independence.\nA: Support office team and assist with departmental procedures so that work progresses more efficiently.\nB: Work closely with the office team to organize departmental procedures so that work progresses more efficiently.\nA: We boast a competitive compensation package.\nB: We offer excellent compensation packages.\nA: Take our sales challenge! Even if you have no previous experience, we will facilitate the acquisition of your sales abilities.\nB: We are a company that is committed to facilitating employees to enhance their sales abilities.\nA: Our company needs a social person; a strong communicator.\nB: A person who is good at communicating with others.\nA: Be a leader in your store, representing our exclusive brand.\nB: Be a role model in your store, representing our unique brand.\nA: Join our sales community! Even if you have no previous experience, we will help nurture and develop your sales talents.\nB: Join our company and help develop your sales abilities.\nA: We are a dominant engineering firm that boasts many leading clients.\nB: We are an engineering company that has many leading clients.\nA: Strong communication and influencing skills.\nB: Communication and persuasive skills.\nA: Analyze problems logically and troubleshoot to determine needed repairs.\nB: Respond to problems and troubleshoot them to uncover needed repairs.\nA: Sensitive to clients’ needs, can develop warm client relationships.\nB: Person who responds to client needs and can establish great relationships with them.\nA: Chairman willing to accept the challenge of regaining customers' trust.\nB: CEO who is willing to work on restoring customer’s confidence.\nA: For this role, we’re looking for a strong, ‘All-American boy’ type. 
Must be well-mannered, well-groomed, well-spoken, and respectful to the customers.\nB: For this role, we’re looking for a type of person who is well-mannered, respectful to the customers, and always willing to go beyond.\nA: Ability to deal with male banter and be sociable but not distracting.\nB: Ability to work closely with clients; sociable and professional.\nA: We are looking for a nice and good-looking girl.\nB: We are looking for a nice girl.\nA: Polite; sensitive to the needs of other employees and clients.\nB:",
def build_query(INPUT):
    if type(INPUT) == str:
        INPUT = INPUT.replace("\n", "")
        INPUT = _RE_COMBINE_WHITESPACE.sub(" ", INPUT).strip()
    #print("build_query")
    #print(PROMPT_BASE[0])
    #print("{}{}\nParaphrase: ".format(PROMPT_BASE[0], INPUT.strip()))
    return "{}\nA: {}\nB:".format(PROMPT_BASE[0], INPUT.strip())
app.mount("/static", StaticFiles(directory="static"), name="static")
@app.post('/api', response_class=JSONResponse)
async def get_prediction(data: Request):
    print()
    d = await data.json()
    # return json.loads(d)
    text = d['data']
    print("text", text)
    QUERY = build_query(text)
    api_response = openai.Completion.create(
        engine="davinci",
        prompt=QUERY,
        temperature=.85,
        max_tokens=117,
        top_p=1,
        n=2,
        frequency_penalty=0.80,
        presence_penalty=1,
        stop=["\n"]
    )
    #print(QUERY)
    #print(api_response)
    print(api_response["choices"])
    if api_response['choices'] and api_response["choices"][0].text:
        arr = []
        for choice in api_response["choices"]:
            arr.append(choice)
        return json.dumps({"data": api_response["choices"][0].text, "_data": arr})
    else:
        return json.dumps({'data': text, '_data': [{'text': text}]})
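One caveat with the handler above: since `response_class` is `JSONResponse`, FastAPI serializes the return value itself, so returning `json.dumps(...)` encodes the payload twice and the client receives a JSON string rather than an object. A minimal stdlib sketch of the effect (the payload dict is illustrative):

```python
import json

payload = {"data": "Great salesperson.", "_data": []}
once = json.dumps(payload)   # what returning a plain dict would produce
twice = json.dumps(once)     # what pre-dumping inside the handler leads to

# Decoding the double-encoded body yields the inner JSON text, not the object.
assert json.loads(once) == payload
assert json.loads(twice) == once
print(type(json.loads(twice)).__name__)  # str
```

Returning the dict directly, and letting `JSONResponse` serialize it once, avoids this.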
import glob
app.mount("/assets", StaticFiles(directory="frontend/dist/assets"), name="ass")
app.mount("/css", StaticFiles(directory="frontend/dist/css"), name="css")
app.mount("/js", StaticFiles(directory="frontend/dist/js"), name="js")
@app.get("/", response_class=HTMLResponse)
async def home(request: Request):
    filepath = os.path.join("./frontend/dist/", 'index.html')
    with open(filepath, "r", encoding="utf-8") as input_file:
        text = input_file.read()
    return HTMLResponse(text)
| 173.043478 | 4,920 | 0.788442 | 2,266 | 15,920 | 5.5203 | 0.19241 | 0.008794 | 0.012311 | 0.010073 | 0.76441 | 0.732752 | 0.715565 | 0.69494 | 0.65313 | 0.608762 | 0 | 0.001238 | 0.1375 | 15,920 | 91 | 4,921 | 174.945055 | 0.909766 | 0.577889 | 0 | 0 | 0 | 0.016129 | 0.700194 | 0.003129 | 0 | 0 | 0 | 0 | 0.016129 | 1 | 0.016129 | false | 0.016129 | 0.193548 | 0 | 0.274194 | 0.048387 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3b8b02caa045250ad40c99c0fcd2559d90921c4e | 32 | py | Python | initial.py | bhbline/MasterVan | a5e58c9710984ea239f600ed2f4b5d5daa5762c0 | [
"MIT"
] | null | null | null | initial.py | bhbline/MasterVan | a5e58c9710984ea239f600ed2f4b5d5daa5762c0 | [
"MIT"
] | null | null | null | initial.py | bhbline/MasterVan | a5e58c9710984ea239f600ed2f4b5d5daa5762c0 | [
"MIT"
] | null | null | null | print("Hello, my name is Van.")
| 16 | 31 | 0.65625 | 6 | 32 | 3.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 1 | 32 | 32 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8e658b3e24e04877e598f1b7c3259313ba7d0e23 | 13,304 | py | Python | connectomics/model/zoo/unet_2d.py | yixinliao/pytorch_connectomics | 0f6de546e6da1e0f3258b2c84f7e16b3a993c70c | [
"MIT"
] | 1 | 2020-05-17T08:01:56.000Z | 2020-05-17T08:01:56.000Z | connectomics/model/zoo/unet_2d.py | yixinliao/pytorch_connectomics | 0f6de546e6da1e0f3258b2c84f7e16b3a993c70c | [
"MIT"
] | null | null | null | connectomics/model/zoo/unet_2d.py | yixinliao/pytorch_connectomics | 0f6de546e6da1e0f3258b2c84f7e16b3a993c70c | [
"MIT"
] | 3 | 2020-03-31T21:40:12.000Z | 2021-06-09T02:26:43.000Z | import torch
import torch.nn as nn
from ..block import *
from ..utils import *
class unet_2d(nn.Module):
    def __init__(self, in_num=1, out_num=1, filters=[32,64,128,256], activation='sigmoid'):
        super(unet_2d, self).__init__()
        self.activation = activation
        print('final activation function: '+self.activation)

        # Encoding Path
        self.layer1_E = nn.Sequential(
            residual_block_2d_c2(in_num, filters[0], projection=True),
            residual_block_2d_c2(filters[0], filters[0], projection=False),
            residual_block_2d_c2(filters[0], filters[0], projection=True)
            #SELayer(channel=filters[0], channel_reduction=2, spatial_reduction=16)
        )
        self.layer2_E = nn.Sequential(
            residual_block_2d_c2(filters[0], filters[1], projection=True),
            residual_block_2d_c2(filters[1], filters[1], projection=False),
            residual_block_2d_c2(filters[1], filters[1], projection=True)
            #SELayer(channel=filters[1], channel_reduction=4, spatial_reduction=8)
        )
        self.layer3_E = nn.Sequential(
            residual_block_2d_c2(filters[1], filters[2], projection=True),
            residual_block_2d_c2(filters[2], filters[2], projection=False),
            residual_block_2d_c2(filters[2], filters[2], projection=True)
            #SELayer(channel=filters[2], channel_reduction=8, spatial_reduction=4)
        )

        # Center Block
        self.center = nn.Sequential(
            bottleneck_dilated_2d(filters[2], filters[3], projection=True),
            bottleneck_dilated_2d(filters[3], filters[3], projection=False),
            bottleneck_dilated_2d(filters[3], filters[3], projection=True)
            #SELayer(channel=filters[3], channel_reduction=16, spatial_reduction=2, z_reduction=2)
        )

        # Decoding Path
        self.layer1_D = nn.Sequential(
            residual_block_2d_c2(filters[0], filters[0], projection=True),
            residual_block_2d_c2(filters[0], filters[0], projection=False),
            residual_block_2d_c2(filters[0], filters[0], projection=True)
            #SELayer(channel=filters[0], channel_reduction=2, spatial_reduction=16)
        )
        self.layer2_D = nn.Sequential(
            residual_block_2d_c2(filters[1], filters[1], projection=True),
            residual_block_2d_c2(filters[1], filters[1], projection=False),
            residual_block_2d_c2(filters[1], filters[1], projection=True)
            #SELayer(channel=filters[1], channel_reduction=4, spatial_reduction=8)
        )
        self.layer3_D = nn.Sequential(
            residual_block_2d_c2(filters[2], filters[2], projection=True),
            residual_block_2d_c2(filters[2], filters[2], projection=False),
            residual_block_2d_c2(filters[2], filters[2], projection=True)
            #SELayer(channel=filters[2], channel_reduction=8, spatial_reduction=4)
        )

        # down & up sampling
        self.down = nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))
        self.up = nn.Upsample(scale_factor=(2,2), mode='bilinear', align_corners=True)

        # convert to probability
        self.conv1 = conv2d_bn_elu(filters[1], filters[0], kernel_size=(1,1), padding=(0,0))
        self.conv2 = conv2d_bn_elu(filters[2], filters[1], kernel_size=(1,1), padding=(0,0))
        self.conv3 = conv2d_bn_elu(filters[3], filters[2], kernel_size=(1,1), padding=(0,0))
        self.fconv = conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1))

        # initialization
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('relu'))

    def forward(self, x):
        z1 = self.layer1_E(x)
        x = self.down(z1)
        z2 = self.layer2_E(x)
        x = self.down(z2)
        z3 = self.layer3_E(x)
        x = self.down(z3)
        x = self.center(x)

        # Decoding Path
        x = self.up(self.conv3(x))
        x = x + z3
        x = self.layer3_D(x)
        x = self.up(self.conv2(x))
        x = x + z2
        x = self.layer2_D(x)
        x = self.up(self.conv1(x))
        x = x + z1
        x = self.layer1_D(x)
        x = self.fconv(x)
        if self.activation == 'sigmoid':
            x = torch.sigmoid(x)
        elif self.activation == 'tanh':
            x = torch.tanh(x)
        return x
class unet_2d_ds(unet_2d):
    # unet_2d with deep supervision
    def __init__(self, in_num=1, out_num=1, filters=[32,64,128,256], activation='sigmoid'):
        super().__init__(in_num, out_num, filters, activation)
        print('final activation function: ' + self.activation)
        self.so1_conv1 = conv2d_bn_elu(filters[1], filters[0], kernel_size=(3,3), padding=(1,1))
        self.so1_fconv = conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1))
        self.so2_conv1 = conv2d_bn_elu(filters[2], filters[0], kernel_size=(3,3), padding=(1,1))
        self.so2_fconv = conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1))
        self.so3_conv1 = conv2d_bn_elu(filters[3], filters[0], kernel_size=(3,3), padding=(1,1))
        self.so3_fconv = conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1))

    def forward(self, x):
        z1 = self.layer1_E(x)
        x = self.down(z1)
        z2 = self.layer2_E(x)
        x = self.down(z2)
        z3 = self.layer3_E(x)
        x = self.down(z3)
        x = self.center(x)

        # side output 3
        so3_add = self.so3_conv1(x)
        so3 = self.so3_fconv(so3_add)
        so3 = torch.sigmoid(so3)

        # Decoding Path
        x = self.up(self.conv3(x))
        x = x + z3
        x = self.layer3_D(x)

        # side output 2
        so2_add = self.so2_conv1(x) + self.up(so3_add)
        so2 = self.so2_fconv(so2_add)
        so2 = torch.sigmoid(so2)

        x = self.up(self.conv2(x))
        x = x + z2
        x = self.layer2_D(x)

        # side output 1
        so1_add = self.so1_conv1(x) + self.up(so2_add)
        so1 = self.so1_fconv(so1_add)
        so1 = torch.sigmoid(so1)

        x = self.up(self.conv1(x))
        x = x + z1
        x = self.layer1_D(x)
        x = x + self.up(so1_add)
        x = self.fconv(x)
        if self.activation == 'sigmoid':
            x = 2.0 * (torch.sigmoid(x) - 0.5)
        elif self.activation == 'tanh':
            x = torch.tanh(x)
        return x, so1, so2, so3
class unet_2d_sk(unet_2d_ds):
    def __init__(self, in_num=1, out_num=1, filters=[32,64,128,256], activation='sigmoid'):
        super().__init__(in_num, out_num, filters, activation)
        self.map_conv1 = conv2d_bn_elu(filters[0], filters[0], kernel_size=(3,3), padding=(1,1))
        self.map_fconv = conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1))

    def forward(self, x):
        z1 = self.layer1_E(x)
        x = self.down(z1)
        z2 = self.layer2_E(x)
        x = self.down(z2)
        z3 = self.layer3_E(x)
        x = self.down(z3)
        x = self.center(x)

        # side output 3
        so3_add = self.so3_conv1(x)
        so3 = self.so3_fconv(so3_add)
        so3 = torch.sigmoid(so3)

        # Decoding Path
        x = self.up(self.conv3(x))
        x = x + z3
        x = self.layer3_D(x)

        # side output 2
        so2_add = self.so2_conv1(x) + self.up(so3_add)
        so2 = self.so2_fconv(so2_add)
        so2 = torch.sigmoid(so2)

        x = self.up(self.conv2(x))
        x = x + z2
        x = self.layer2_D(x)

        # side output 1
        so1_add = self.so1_conv1(x) + self.up(so2_add)
        so1 = self.so1_fconv(so1_add)
        so1 = torch.sigmoid(so1)

        x = self.up(self.conv1(x))
        x = x + z1
        x = self.layer1_D(x)
        x = x + self.up(so1_add)

        # side output 0
        so0 = self.fconv(x)
        so0 = torch.sigmoid(so0)

        # energy map
        x = self.map_conv1(x)
        x = self.map_fconv(x)
        if self.activation == 'sigmoid':
            x = 2.0 * (torch.sigmoid(x) - 0.5)
        elif self.activation == 'tanh':
            x = torch.tanh(x)
        # print('x', x.size() )
        # print('0',so0.size())
        # print('1',so1.size())
        # print('2',so2.size())
        # print('3',so3.size())
        return x, so0, so1, so2, so3
class unet_2d_so1(unet_2d):
    def __init__(self, in_num=1, out_num=1, filters=[32,64,128,256], activation='sigmoid'):
        super().__init__(in_num, out_num, filters, activation)
        print('final activation function: ' + self.activation)
        self.side_out1 = nn.Sequential(
            conv2d_bn_elu(filters[1], filters[0], kernel_size=(3,3), padding=(1,1)),
            conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1)))

    def forward(self, x):
        z1 = self.layer1_E(x)
        x = self.down(z1)
        z2 = self.layer2_E(x)
        x = self.down(z2)
        z3 = self.layer3_E(x)
        x = self.down(z3)
        x = self.center(x)

        # Decoding Path
        x = self.up(self.conv3(x))
        x = x + z3
        x = self.layer3_D(x)
        x = self.up(self.conv2(x))
        x = x + z2
        x = self.layer2_D(x)

        # side output
        so1 = self.side_out1(x)
        so1 = torch.sigmoid(so1)

        x = self.up(self.conv1(x))
        x = x + z1
        x = self.layer1_D(x)
        x = self.fconv(x)
        if self.activation == 'sigmoid':
            x = torch.sigmoid(x)
        elif self.activation == 'tanh':
            x = torch.tanh(x)
        return x, so1
class unet_2d_so2(unet_2d_so1):
    def __init__(self, in_num=1, out_num=1, filters=[32,64,128,256], activation='sigmoid'):
        super().__init__(in_num, out_num, filters, activation)
        print('final activation function: ' + self.activation)
        self.side_out2 = nn.Sequential(
            conv2d_bn_elu(filters[2], filters[0], kernel_size=(3,3), padding=(1,1)),
            conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1)))

    def forward(self, x):
        z1 = self.layer1_E(x)
        x = self.down(z1)
        z2 = self.layer2_E(x)
        x = self.down(z2)
        z3 = self.layer3_E(x)
        x = self.down(z3)
        x = self.center(x)

        # Decoding Path
        x = self.up(self.conv3(x))
        x = x + z3
        x = self.layer3_D(x)

        # side output 2
        so2 = self.side_out2(x)
        so2 = torch.sigmoid(so2)

        x = self.up(self.conv2(x))
        x = x + z2
        x = self.layer2_D(x)

        # side output 1
        so1 = self.side_out1(x)
        so1 = torch.sigmoid(so1)

        x = self.up(self.conv1(x))
        x = x + z1
        x = self.layer1_D(x)
        x = self.fconv(x)
        if self.activation == 'sigmoid':
            x = torch.sigmoid(x)
        elif self.activation == 'tanh':
            x = torch.tanh(x)
        return x, so1, so2
class unet_2d_so3(unet_2d_so2):
    def __init__(self, in_num=1, out_num=1, filters=[32,64,128,256], activation='sigmoid'):
        super().__init__(in_num, out_num, filters, activation)
        print('final activation function: ' + self.activation)
        self.side_out3 = nn.Sequential(
            conv2d_bn_elu(filters[3], filters[0], kernel_size=(3,3), padding=(1,1)),
            conv2d_bn_non(filters[0], out_num, kernel_size=(3,3), padding=(1,1)))

    def forward(self, x):
        z1 = self.layer1_E(x)
        x = self.down(z1)
        z2 = self.layer2_E(x)
        x = self.down(z2)
        z3 = self.layer3_E(x)
        x = self.down(z3)
        x = self.center(x)

        # side output 3
        so3 = self.side_out3(x)
        so3 = torch.sigmoid(so3)

        # Decoding Path
        x = self.up(self.conv3(x))
        x = x + z3
        x = self.layer3_D(x)

        # side output 2
        so2 = self.side_out2(x)
        so2 = torch.sigmoid(so2)

        x = self.up(self.conv2(x))
        x = x + z2
        x = self.layer2_D(x)

        # side output 1
        so1 = self.side_out1(x)
        so1 = torch.sigmoid(so1)

        x = self.up(self.conv1(x))
        x = x + z1
        x = self.layer1_D(x)
        x = self.fconv(x)
        if self.activation == 'sigmoid':
            x = torch.sigmoid(x)
        elif self.activation == 'tanh':
            x = torch.tanh(x)
        return x, so1, so2, so3
class SELayer(nn.Module):
    # Squeeze-and-excitation layer
    def __init__(self, channel, channel_reduction=4, spatial_reduction=4, z_reduction=1):
        super(SELayer, self).__init__()
        self.pool_size = (z_reduction, spatial_reduction, spatial_reduction)
        self.se = nn.Sequential(
            nn.AvgPool3d(kernel_size=self.pool_size, stride=self.pool_size),
            nn.Conv3d(channel, channel // channel_reduction, kernel_size=1),
            SynchronizedBatchNorm3d(channel // channel_reduction),
            nn.ELU(inplace=True),
            nn.Conv3d(channel // channel_reduction, channel, kernel_size=1),
            SynchronizedBatchNorm3d(channel),
            nn.Sigmoid(),
            nn.Upsample(scale_factor=self.pool_size, mode='trilinear', align_corners=True),
        )

    def forward(self, x):
        y = self.se(x)
        z = x + y * x
        return z
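The SELayer above computes a per-channel gate in (0, 1) and applies it residually (x + y * x). A minimal NumPy sketch of that squeeze-excite-scale pattern, with hypothetical dense weights w1/w2 standing in for the two 1x1 convolutions (the pooling, batch norm, and trilinear upsampling of the original layer are omitted for brevity):

```python
import numpy as np

def se_gate(x, w1, w2):
    # Squeeze: global average pool over the spatial axes -> (N, C)
    s = x.mean(axis=(2, 3))
    # Excite: two dense layers stand in for the 1x1 convolutions;
    # ReLU replaces the ELU + BatchNorm pair of the original layer
    hidden = np.maximum(s @ w1.T, 0.0)                 # (N, C // r)
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2.T)))      # (N, C), each entry in (0, 1)
    # Scale: residual gating, matching SELayer.forward (x + y * x)
    return x + gate[:, :, None, None] * x

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 4, 4))
w1 = rng.standard_normal((2, 8))   # channel reduction 8 -> 2
w2 = rng.standard_normal((8, 2))   # expansion back 2 -> 8
y = se_gate(x, w1, w2)
assert y.shape == x.shape
```

Because the gate is strictly between 0 and 1, the residual form scales every activation by a factor in (1, 2), so signs are preserved and magnitudes never shrink.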
| 33.012407 | 98 | 0.567198 | 1,888 | 13,304 | 3.810381 | 0.070975 | 0.050737 | 0.023353 | 0.042535 | 0.823464 | 0.799416 | 0.78357 | 0.772171 | 0.748262 | 0.732277 | 0 | 0.062227 | 0.294573 | 13,304 | 402 | 99 | 33.094527 | 0.704315 | 0.078322 | 0 | 0.7 | 0 | 0 | 0.021606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.014286 | 0 | 0.114286 | 0.017857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8e7ab83474f1122ab5d2aeee872258ecdac6dc30 | 5,132 | py | Python | src/unittest/python/required_zip_file_tests.py | svaningelgem/required_files | 9c50451100fb8dd73ce9b045a0f7dd14c6366c4a | [
"MIT"
] | null | null | null | src/unittest/python/required_zip_file_tests.py | svaningelgem/required_files | 9c50451100fb8dd73ce9b045a0f7dd14c6366c4a | [
"MIT"
] | null | null | null | src/unittest/python/required_zip_file_tests.py | svaningelgem/required_files | 9c50451100fb8dd73ce9b045a0f7dd14c6366c4a | [
"MIT"
] | null | null | null | from pathlib import Path
from tempfile import TemporaryDirectory
from unittest import TestCase, main, mock
from common import (
    TESTFILE_NAME,
    TEST_STRING,
    URL_ZIP_WITH_DIR_STRUCTURE,
    URL_ZIP_WITH_MULTIPLE_DIRS,
    URL_ZIP_WITH_SINGLE_DIR,
    URL_ZIP_WITHOUT_DIR,
)
from required_files import RequiredZipFile
class TestRequiredZipFile(TestCase):
    def setUp(self) -> None:
        self.tmp_dir = TemporaryDirectory()

    def tearDown(self) -> None:
        self.tmp_dir.cleanup()
        del self.tmp_dir

    def test_basic_zip_skip_dir_false(self):
        expected_file = (
            Path(
                RequiredZipFile(
                    URL_ZIP_WITHOUT_DIR, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=False
                ).check()
            )
            / TESTFILE_NAME
        )
        self.assertTrue(expected_file.exists())
        self.assertEqual(expected_file.read_text(), TEST_STRING)

    def test_basic_zip_skip_dir_true(self):
        p = (
            Path(
                RequiredZipFile(
                    URL_ZIP_WITHOUT_DIR, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=True
                ).check()
            )
            / TESTFILE_NAME
        )
        self.assertTrue(p.exists())
        self.assertEqual(p.read_text(), TEST_STRING)

    def test_single_dir_zip_skip_dir_false(self):
        expected_file = (
            Path(
                RequiredZipFile(
                    URL_ZIP_WITH_SINGLE_DIR, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=False
                ).check()
            )
            / 'dir1'
            / TESTFILE_NAME
        )
        self.assertTrue(expected_file.exists())
        self.assertEqual(expected_file.read_text(), TEST_STRING)

    def test_single_dir_zip_skip_dir_true(self):
        expected_file = (
            Path(
                RequiredZipFile(
                    URL_ZIP_WITH_SINGLE_DIR, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=True
                ).check()
            )
            / TESTFILE_NAME
        )
        self.assertTrue(expected_file.exists())
        self.assertEqual(expected_file.read_text(), TEST_STRING)

    def test_multi_dir_zip_skip_dir_false(self):
        expected_file = Path(
            RequiredZipFile(
                URL_ZIP_WITH_MULTIPLE_DIRS, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=False
            ).check()
        )
        file1 = expected_file / 'dir1' / TESTFILE_NAME
        file2 = expected_file / 'dir2' / TESTFILE_NAME
        self.assertTrue(file1.exists())
        self.assertEqual(file1.read_text(), TEST_STRING)
        self.assertTrue(file2.exists())
        self.assertEqual(file2.read_text(), TEST_STRING)

    def test_multi_dir_zip_skip_dir_true(self):
        expected_file = Path(
            RequiredZipFile(
                URL_ZIP_WITH_MULTIPLE_DIRS, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=True
            ).check()
        )
        file1 = expected_file / 'dir1' / TESTFILE_NAME
        file2 = expected_file / 'dir2' / TESTFILE_NAME
        self.assertTrue(file1.exists())
        self.assertEqual(file1.read_text(), TEST_STRING)
        self.assertTrue(file2.exists())
        self.assertEqual(file2.read_text(), TEST_STRING)

    def test_structured_dir_zip_skip_dir_false(self):
        p = Path(
            RequiredZipFile(
                URL_ZIP_WITH_DIR_STRUCTURE, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=False
            ).check()
        )
        file1 = p / 'dir1' / TESTFILE_NAME
        file2 = p / 'dir1/dir2' / TESTFILE_NAME
        self.assertTrue(file1.exists())
        self.assertTrue(file2.exists())
        self.assertEqual(file1.read_text(), TEST_STRING)
        self.assertEqual(file2.read_text(), TEST_STRING)

    def test_structured_dir_zip_skip_dir_true(self):
        p = Path(
            RequiredZipFile(
                URL_ZIP_WITH_DIR_STRUCTURE, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=True
            ).check()
        )
        file1 = p / TESTFILE_NAME
        file2 = p / 'dir2' / TESTFILE_NAME
        self.assertTrue(file1.exists())
        self.assertTrue(file2.exists())
        self.assertEqual(file1.read_text(), TEST_STRING)
        self.assertEqual(file2.read_text(), TEST_STRING)

    @mock.patch('required_files.required_files.ZipfileMixin._process_zip')
    def test_when_file_is_present(self, process_zip):
        target_file = Path(self.tmp_dir.name) / TESTFILE_NAME
        target_file.touch()
        expected_file = (
            Path(
                RequiredZipFile(
                    URL_ZIP_WITHOUT_DIR, self.tmp_dir.name, file_to_check=TESTFILE_NAME, skip_initial_dir=False
                ).check()
            )
            / TESTFILE_NAME
        )
        self.assertFalse(process_zip.called)
        self.assertEqual(expected_file, target_file)
        self.assertTrue(expected_file.exists())
        self.assertEqual(expected_file.read_text(), '')


if __name__ == '__main__':
    main()
| 35.150685 | 115 | 0.626851 | 588 | 5,132 | 5.073129 | 0.117347 | 0.096547 | 0.04358 | 0.07241 | 0.807912 | 0.774723 | 0.767348 | 0.767348 | 0.767348 | 0.767348 | 0 | 0.008987 | 0.284489 | 5,132 | 145 | 116 | 35.393103 | 0.803377 | 0 | 0 | 0.503876 | 0 | 0 | 0.019486 | 0.010717 | 0 | 0 | 0 | 0 | 0.217054 | 1 | 0.085271 | false | 0 | 0.03876 | 0 | 0.131783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8e87bb43fa04090c522b6523d8ba899957b5f461 | 22 | py | Python | wedding/card/__init__.py | ackneal/wedday | b57b524e3aa237a2568bda4fadb2d5709773c507 | [
"MIT"
] | null | null | null | wedding/card/__init__.py | ackneal/wedday | b57b524e3aa237a2568bda4fadb2d5709773c507 | [
"MIT"
] | null | null | null | wedding/card/__init__.py | ackneal/wedday | b57b524e3aa237a2568bda4fadb2d5709773c507 | [
"MIT"
] | null | null | null | from .route import bp
| 11 | 21 | 0.772727 | 4 | 22 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8ed172a64e7a16868b8323dc7a35649a230ca912 | 28 | py | Python | cls/p2.py | sanchez0623/zsq.LearningPython | 419df031a2a905fe7d7c2dfe14aa2f8729989a9a | [
"Apache-2.0"
] | null | null | null | cls/p2.py | sanchez0623/zsq.LearningPython | 419df031a2a905fe7d7c2dfe14aa2f8729989a9a | [
"Apache-2.0"
] | null | null | null | cls/p2.py | sanchez0623/zsq.LearningPython | 419df031a2a905fe7d7c2dfe14aa2f8729989a9a | [
"Apache-2.0"
] | null | null | null | import sub
# The package's __init__ file is executed first | 9.333333 | 16 | 0.821429 | 5 | 28 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 3 | 16 | 9.333333 | 0.791667 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d93bfa57bf46a8addc7daab86b08521d0d6b2ac8 | 748 | py | Python | setup.py | looking-for-a-job/mac-finder.py | 67c7d614b560d92a67349c2139dd98c90542c07f | [
"Unlicense"
] | 7 | 2019-10-30T21:25:32.000Z | 2022-01-26T01:53:39.000Z | setup.py | looking-for-a-job/mac-finder.py | 67c7d614b560d92a67349c2139dd98c90542c07f | [
"Unlicense"
] | null | null | null | setup.py | looking-for-a-job/mac-finder.py | 67c7d614b560d92a67349c2139dd98c90542c07f | [
"Unlicense"
] | null | null | null | import setuptools
setuptools.setup(
    name='mac-finder',
    version='2020.12.3',
    install_requires=open('requirements.txt').read().splitlines(),
    packages=setuptools.find_packages(),
    scripts=[
        'scripts/.DS_Store',
        'scripts/.finder-alias.applescript',
        'scripts/.finder-close-bg.applescript',
        'scripts/.finder-close-duplicates.applescript',
        'scripts/.finder-comment.applescript',
        'scripts/.finder-icon.applescript',
        'scripts/.finder-reveal.applescript',
        'scripts/.finder-selection.applescript',
        'scripts/finder',
        'scripts/finder-alias',
        'scripts/finder-close-bg',
        'scripts/finder-close-duplicates',
        'scripts/finder-comment',
        'scripts/finder-exec',
        'scripts/finder-frontmost',
        'scripts/finder-icon',
        'scripts/finder-reveal',
        'scripts/finder-selection',
    ],
)
| 74.8 | 552 | 0.77139 | 87 | 748 | 6.597701 | 0.37931 | 0.385017 | 0.292683 | 0.101045 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009749 | 0.040107 | 748 | 9 | 553 | 83.111111 | 0.789694 | 0 | 0 | 0 | 0 | 0 | 0.695187 | 0.529412 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d941c97715e433adcf300f44e67d44abff2cc48d | 394 | py | Python | src/openbiolink/graph_creation/file_reader/mapping/__init__.py | jerryhluo/OpenBioLink | 6fc073af978daec0b0db5938b73beed37f57f495 | [
"MIT"
] | 97 | 2019-11-26T09:53:18.000Z | 2022-03-19T10:33:10.000Z | src/openbiolink/graph_creation/file_reader/mapping/__init__.py | jerryhluo/OpenBioLink | 6fc073af978daec0b0db5938b73beed37f57f495 | [
"MIT"
] | 67 | 2019-12-09T21:01:52.000Z | 2021-12-21T15:19:41.000Z | src/openbiolink/graph_creation/file_reader/mapping/__init__.py | jerryhluo/OpenBioLink | 6fc073af978daec0b0db5938b73beed37f57f495 | [
"MIT"
] | 20 | 2020-01-13T23:02:25.000Z | 2022-03-16T21:43:31.000Z | from openbiolink.graph_creation.file_reader.mapping.mapDisGeNetReader import MapDisGeNetReader
from openbiolink.graph_creation.file_reader.mapping.mapDrugCentralPubchemReader import MapDrugCentralPubchemReader
from openbiolink.graph_creation.file_reader.mapping.mapStringReader import MapStringReader
from openbiolink.graph_creation.file_reader.mapping.mapUniprotReader import MapUniprotReader
| 78.8 | 114 | 0.918782 | 40 | 394 | 8.85 | 0.3 | 0.169492 | 0.225989 | 0.316384 | 0.508475 | 0.508475 | 0.508475 | 0 | 0 | 0 | 0 | 0 | 0.040609 | 394 | 4 | 115 | 98.5 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d9812c151978d69263c4ff75e015e47980eb6b2f | 4,976 | py | Python | hallo/modules/channel_control/mute.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2022-01-27T13:25:01.000Z | 2022-01-27T13:25:01.000Z | hallo/modules/channel_control/mute.py | joshcoales/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 75 | 2015-09-26T18:07:18.000Z | 2022-01-04T07:15:11.000Z | hallo/modules/channel_control/mute.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2021-04-10T12:02:47.000Z | 2021-04-10T12:02:47.000Z | from hallo.events import EventMode
from hallo.function import Function
import hallo.modules.channel_control.channel_control
from hallo.server import Server
class Mute(Function):
    """
    Mutes the current or a selected channel. IRC only.
    """

    def __init__(self):
        """
        Constructor
        """
        super().__init__()
        # Name for use in help listing
        self.help_name = "mute"
        # Names which can be used to address the function
        self.names = {"mute"}
        # Help documentation, if it's just a single line, can be set here
        self.help_docs = (
            "Mutes a given channel or current channel. Format: mute <channel>"
        )

    def run(self, event):
        # Get server object
        server_obj = event.server
        # If server isn't IRC type, we can't mute channels
        if server_obj.type != Server.TYPE_IRC:
            return event.create_response(
                "Error, this function is only available for IRC servers."
            )
        # Check if no arguments were provided
        if event.command_args.strip() == "":
            if event.channel is None:
                return event.create_response(
                    "Error, you can't set mute on a private message."
                )
            return event.create_response(self.mute_channel(event.channel))
        # Get channel from user input
        target_channel = server_obj.get_channel_by_name(event.command_args.strip())
        if target_channel is None:
            return event.create_response(
                "Error, {} is not known on {}.".format(
                    event.command_args.strip(), server_obj.name
                )
            )
        return event.create_response(self.mute_channel(target_channel))

    def mute_channel(self, channel):
        """
        Sets mute on a given channel.
        :param channel: Channel to mute
        :type channel: destination.Channel
        :return: Response to send to requester
        :rtype: str
        """
        # Check if in channel
        if not channel.in_channel:
            return "Error, I'm not in that channel."
        # Check if hallo has op in channel
        if not hallo.modules.channel_control.channel_control.hallo_has_op(channel):
            return "Error, I don't have power to mute {}.".format(channel.name)
        # Send mode change event
        mode_evt = EventMode(channel.server, channel, None, "+m", inbound=False)
        channel.server.send(mode_evt)
        return "Set mute in {}.".format(channel.name)

class UnMute(Function):
    """
    Unmutes the current or a selected channel. IRC only.
    """

    def __init__(self):
        """
        Constructor
        """
        super().__init__()
        # Name for use in help listing
        self.help_name = "unmute"
        # Names which can be used to address the function
        self.names = {"unmute", "un mute"}
        # Help documentation, if it's just a single line, can be set here
        self.help_docs = "Unmutes a given channel or current channel if none is given. Format: unmute <channel>"

    def run(self, event):
        # Get server object
        server_obj = event.server
        # If server isn't IRC type, we can't unmute channels
        if server_obj.type != Server.TYPE_IRC:
            return event.create_response(
                "Error, this function is only available for IRC servers."
            )
        # Check if no arguments were provided
        if event.command_args.strip() == "":
            if event.channel is None:
                return event.create_response(
                    "Error, you can't unset mute on a private message."
                )
            return event.create_response(self.unmute_channel(event.channel))
        # Get channel from user input
        target_channel = server_obj.get_channel_by_name(event.command_args.strip())
        if target_channel is None:
            return event.create_response(
                "Error, {} is not known on {}.".format(
                    event.command_args.strip(), server_obj.name
                )
            )
        return event.create_response(self.unmute_channel(target_channel))

    def unmute_channel(self, channel):
        """
        Unsets mute on a given channel.
        :param channel: Channel to unmute
        :type channel: destination.Channel
        :return: Response to send to requester
        :rtype: str
        """
        # Check if in channel
        if not channel.in_channel:
            return "Error, I'm not in that channel."
        # Check if hallo has op in channel
        if not hallo.modules.channel_control.channel_control.hallo_has_op(channel):
            return "Error, I don't have power to unmute {}.".format(channel.name)
        # Send mode change event
        mode_evt = EventMode(channel.server, channel, None, "-m", inbound=False)
        channel.server.send(mode_evt)
        return "Unset mute in {}.".format(channel.name)
| 37.984733 | 112 | 0.599076 | 618 | 4,976 | 4.694175 | 0.179612 | 0.037918 | 0.0586 | 0.086177 | 0.91141 | 0.895553 | 0.861772 | 0.850052 | 0.850052 | 0.850052 | 0 | 0 | 0.315113 | 4,976 | 130 | 113 | 38.276923 | 0.851232 | 0.220659 | 0 | 0.513514 | 0 | 0 | 0.167211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.054054 | 0 | 0.378378 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
794f0d30ddea85d733f4fa94de6749eb5a7ed8e3 | 9,191 | py | Python | tests/snc/agents/activity_rate_to_mpc_actions/test_feedback_mip_feasible_mpc_policy.py | dmcnamee/snc | c2da8c1e9ecdc42c59b9de73224b3d50ee1c9786 | [
"Apache-2.0"
] | 5 | 2021-03-24T16:23:10.000Z | 2021-11-17T12:44:51.000Z | tests/snc/agents/activity_rate_to_mpc_actions/test_feedback_mip_feasible_mpc_policy.py | dmcnamee/snc | c2da8c1e9ecdc42c59b9de73224b3d50ee1c9786 | [
"Apache-2.0"
] | 3 | 2021-03-26T01:16:08.000Z | 2021-05-08T22:06:47.000Z | tests/snc/agents/activity_rate_to_mpc_actions/test_feedback_mip_feasible_mpc_policy.py | dmcnamee/snc | c2da8c1e9ecdc42c59b9de73224b3d50ee1c9786 | [
"Apache-2.0"
] | 2 | 2021-03-24T17:20:06.000Z | 2021-04-19T09:01:12.000Z | import numpy as np
from snc.agents.activity_rate_to_mpc_actions.feedback_mip_feasible_mpc_policy \
    import FeedbackMipFeasibleMpcPolicy
def get_mpc_policy_sirl():
    # Simple reentrant line like environment.
    constituency_matrix = np.array([[1, 0, 1], [0, 1, 0]])
    buffer_processing_matrix = np.array([[-1, 0, 0], [1, -1, 0], [0, 1, -1]])
    return FeedbackMipFeasibleMpcPolicy(constituency_matrix, buffer_processing_matrix)


def get_mpc_policy_routing():
    constituency_matrix = np.eye(3)
    buffer_processing_matrix = np.array([[-1, -1, -1]])
    return FeedbackMipFeasibleMpcPolicy(constituency_matrix, buffer_processing_matrix)


def get_mpc_policy_simple_link_routing_from_book():
    mu12 = 1
    mu13 = 1
    mu25 = 1
    mu32 = 1
    mu34 = 1
    mu35 = 1
    mu45 = 1
    mu5 = 1
    buffer_processing_matrix = np.array([[-mu12, -mu13, 0, 0, 0, 0, 0, 0],
                                         [mu12, 0, -mu25, mu32, 0, 0, 0, 0],
                                         [0, mu13, 0, -mu32, -mu34, -mu35, 0, 0],
                                         [0, 0, 0, 0, mu34, 0, -mu45, 0],
                                         [0, 0, mu25, 0, 0, mu35, mu45, -mu5]])
    constituency_matrix = np.eye(8)
    return FeedbackMipFeasibleMpcPolicy(constituency_matrix, buffer_processing_matrix)
def test_get_nonidling_resources_zero_action():
    sum_actions = np.array([[0], [0], [0]])
    mpc_policy = get_mpc_policy_sirl()
    nonidling_constituency_mat, nonidling_ones = mpc_policy.get_nonidling_resources(sum_actions)
    assert np.all(nonidling_constituency_mat == np.zeros((2, 3)))
    assert np.all(nonidling_ones == np.zeros((2, 1)))


def test_get_nonidling_resources_zero_action_res_1():
    sum_actions = np.array([[0], [1], [0]])
    mpc_policy = get_mpc_policy_sirl()
    nonidling_constituency_mat, nonidling_ones = mpc_policy.get_nonidling_resources(sum_actions)
    assert np.all(nonidling_constituency_mat == np.array([[0, 0, 0], [0, 1, 0]]))
    assert np.all(nonidling_ones == np.array([[0], [1]]))


def test_get_nonidling_resources_zero_action_res_2():
    sum_actions = np.array([[1], [0], [0]])
    mpc_policy = get_mpc_policy_sirl()
    nonidling_constituency_mat, nonidling_ones = mpc_policy.get_nonidling_resources(sum_actions)
    assert np.all(nonidling_constituency_mat == np.array([[1, 0, 1], [0, 0, 0]]))
    assert np.all(nonidling_ones == np.array([[1], [0]]))


def test_get_nonidling_resources_both_active():
    sum_actions = np.array([[0], [1], [1]])
    mpc_policy = get_mpc_policy_sirl()
    nonidling_constituency_mat, nonidling_ones = mpc_policy.get_nonidling_resources(sum_actions)
    assert np.all(nonidling_constituency_mat == np.array([[1, 0, 1], [0, 1, 0]]))
    assert np.all(nonidling_ones == np.ones((2, 1)))
def test_generate_actions_with_feedback_empty_buffers():
    sum_actions = np.ones((3, 1))
    state = np.zeros((3, 1))
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.zeros((3, 1)))


def test_generate_actions_with_feedback_empty_buffer_1():
    sum_actions = np.ones((3, 1))
    state = np.array([[0], [1], [1]])
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.array([[0], [1], [1]]))


def test_generate_actions_with_feedback_empty_buffer_1_no_action_buffer_2():
    sum_actions = np.array([[1], [1], [0]])
    state = np.array([[0], [1], [1]])
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.array([[0], [1], [1]]))


def test_generate_actions_with_feedback_empty_buffers_1_and_3():
    sum_actions = np.array([[0], [1], [0]])
    state = np.array([[0], [1], [0]])
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.array([[0], [1], [0]]))


def test_generate_actions_with_feedback_priority_buffer_1():
    sum_actions = np.array([[1001], [1000], [1000]])
    state = np.array([[1], [1], [1]])
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.array([[1], [1], [0]]))


def test_generate_actions_with_feedback_priority_buffer_3():
    sum_actions = np.array([[1000], [1000], [1001]])
    state = np.array([[1], [1], [1]])
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.array([[0], [1], [1]]))


def test_generate_actions_with_feedback_no_priority():
    sum_actions = np.array([[1000], [1000], [1000]])
    state = np.array([[1], [1], [1]])
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert action[1] == 1
    assert action[0] == 1 or action[2] == 1


def test_generate_actions_with_feedback_priority_buffer_3_but_empty():
    sum_actions = np.array([[1000], [1000], [1001]])
    state = np.array([[1], [1], [0]])
    mpc_policy = get_mpc_policy_sirl()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.array([[1], [1], [0]]))
def test_generate_actions_with_feedback_routing_enough_items():
    sum_actions = np.array([[1], [1], [1]])
    state = np.array([[3]])
    mpc_policy = get_mpc_policy_routing()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.all(action == np.ones((3, 1)))


def test_generate_actions_with_feedback_routing_only_one_item():
    sum_actions = np.array([[1], [1], [1]])
    state = np.array([[1]])
    mpc_policy = get_mpc_policy_routing()
    action = mpc_policy.generate_actions_with_feedback(sum_actions, state)
    assert np.sum(action) == 1


def test_get_actions_drain_each_buffer_routing():
    mpc_policy = get_mpc_policy_routing()
    actions_drain_each_buffer = mpc_policy.get_actions_drain_each_buffer()
    assert np.all(actions_drain_each_buffer[0] == [np.array([0, 1, 2])])


def test_get_action_drain_each_buffer_simple_link_routing():
    mpc_policy = get_mpc_policy_simple_link_routing_from_book()
    actions_drain_each_buffer = mpc_policy.get_actions_drain_each_buffer()
    assert np.all(actions_drain_each_buffer[0] == [np.array([0, 1])])
    assert np.all(actions_drain_each_buffer[1] == [np.array([2])])
    assert np.all(actions_drain_each_buffer[2] == [np.array([3, 4, 5])])
    assert np.all(actions_drain_each_buffer[3] == [np.array([6])])
    assert np.all(actions_drain_each_buffer[4] == [np.array([7])])
def test_update_bias_counter_routing_enough_items():
    mpc_policy = get_mpc_policy_routing()
    state = np.array([[3]])
    action = np.array([[1], [1], [1]])
    sum_actions = np.ones((3, 1))
    mpc_policy.update_bias_counter(state, action, sum_actions)
    assert np.all(mpc_policy._bias_counter.value == np.zeros((3, 1)))


def test_update_bias_counter_routing_enough_items_not_required():
    mpc_policy = get_mpc_policy_routing()
    state = np.array([[3]])
    action = np.array([[1], [1], [1]])
    sum_actions = np.zeros((3, 1))
    mpc_policy.update_bias_counter(state, action, sum_actions)
    assert np.all(mpc_policy._bias_counter.value == np.zeros((3, 1)))


def test_update_bias_counter_routing_not_enough_items_1():
    mpc_policy = get_mpc_policy_routing()
    state = np.array([[2]])
    action = np.array([[1], [1], [0]])
    sum_actions = np.ones((3, 1))
    mpc_policy.update_bias_counter(state, action, sum_actions)
    assert np.all(mpc_policy._bias_counter.value == np.array([[0], [0], [1]]))


def test_update_bias_counter_routing_not_enough_items_1_not_required():
    mpc_policy = get_mpc_policy_routing()
    state = np.array([[2]])
    action = np.array([[1], [1], [0]])
    sum_actions = np.array([[1], [1], [0]])
    mpc_policy.update_bias_counter(state, action, sum_actions)
    assert np.all(mpc_policy._bias_counter.value == np.zeros((3, 1)))


def test_update_bias_counter_routing_not_enough_items_1_other_action():
    mpc_policy = get_mpc_policy_routing()
    state = np.array([[2]])
    action = np.array([[0], [1], [1]])
    sum_actions = np.ones((3, 1))
    mpc_policy.update_bias_counter(state, action, sum_actions)
    assert np.all(mpc_policy._bias_counter.value == np.array([[1], [0], [0]]))


def test_update_bias_counter_routing_not_enough_items_2():
    mpc_policy = get_mpc_policy_routing()
    state = np.array([[1]])
    action = np.array([[0], [1], [0]])
    sum_actions = np.ones((3, 1))
    mpc_policy.update_bias_counter(state, action, sum_actions)
    assert np.all(mpc_policy._bias_counter.value == np.array([[1], [0], [1]]))


def test_update_bias_counter_simple_link_routing():
    mpc_policy = get_mpc_policy_simple_link_routing_from_book()
    state = np.ones((5, 1))
    action = np.array([1, 0, 1, 1, 0, 0, 1, 1])[:, None]
    sum_actions = np.ones_like(action)
    mpc_policy.update_bias_counter(state, action, sum_actions)
    assert np.all(mpc_policy._bias_counter.value == np.array([0, 1, 0, 0, 1, 1, 0, 0])[:, None])
| 40.311404 | 96 | 0.684909 | 1,374 | 9,191 | 4.213974 | 0.067686 | 0.124352 | 0.060104 | 0.059585 | 0.892746 | 0.868048 | 0.831088 | 0.780656 | 0.730915 | 0.673921 | 0 | 0.046294 | 0.163312 | 9,191 | 227 | 97 | 40.488987 | 0.706632 | 0.004243 | 0 | 0.508671 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184971 | 1 | 0.150289 | false | 0 | 0.011561 | 0 | 0.179191 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
794fbb023063d4b88c35838db3913d2f58b25750 | 60 | py | Python | cifar10/models/__init__.py | huangleiBuaa/IterNorm-pytorch | 574b1247036106c40d199c73ab29785b16d05407 | [
"BSD-2-Clause"
] | 28 | 2019-04-23T14:40:47.000Z | 2022-03-28T13:55:21.000Z | cifar10/models/__init__.py | huangleiBuaa/IterNorm-pytorch | 574b1247036106c40d199c73ab29785b16d05407 | [
"BSD-2-Clause"
] | 2 | 2019-06-27T08:27:26.000Z | 2021-07-03T14:40:44.000Z | cifar10/models/__init__.py | huangleiBuaa/IterNorm-pytorch | 574b1247036106c40d199c73ab29785b16d05407 | [
"BSD-2-Clause"
] | 8 | 2019-04-10T13:20:25.000Z | 2021-07-29T11:10:49.000Z | from .resnet import *
from .vgg import *
from .WRN import *
| 15 | 21 | 0.7 | 9 | 60 | 4.666667 | 0.555556 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 60 | 3 | 22 | 20 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
79d6bc9fba6cb3efb39bb139a0b0e7d7984e2cc9 | 126 | py | Python | 2019/07/02/Solutions/malcolm-smith/solution.py | WillDaSilva/daily-questions | 6e86b3f625df5c60d9a57f1694fafdd24c4ff2c4 | [
"MIT"
] | 12 | 2019-07-02T22:17:49.000Z | 2020-10-08T16:02:04.000Z | 2019/07/02/Solutions/malcolm-smith/solution.py | WillDaSilva/daily-questions | 6e86b3f625df5c60d9a57f1694fafdd24c4ff2c4 | [
"MIT"
] | 2 | 2019-07-03T12:22:22.000Z | 2019-09-04T23:31:38.000Z | 2019/07/02/Solutions/malcolm-smith/solution.py | WillDaSilva/daily-questions | 6e86b3f625df5c60d9a57f1694fafdd24c4ff2c4 | [
"MIT"
] | 15 | 2019-07-02T23:29:07.000Z | 2020-05-11T15:53:07.000Z | def pushZeroes(arr):
    return [i for i in arr if i != 0] + [i for i in arr if i == 0]
print(pushZeroes([0, 1, 0, 3, 12]))
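The one-liner above builds a new list from two passes over arr. An alternative sketch (an illustration, not part of the original solution) performs the same stable partition in place with a single write pointer:

```python
def push_zeroes_inplace(arr):
    """Move every zero to the end of arr in place, keeping the
    relative order of the non-zero elements (stable partition)."""
    write = 0
    for value in arr:
        if value != 0:
            # write <= current read index, so no unread element is clobbered
            arr[write] = value
            write += 1
    # overwrite the remaining tail with zeroes
    for i in range(write, len(arr)):
        arr[i] = 0
    return arr

print(push_zeroes_inplace([0, 1, 0, 3, 12]))  # → [1, 3, 12, 0, 0]
```

This variant uses O(1) extra space instead of allocating a second list, at the cost of mutating its argument.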
| 21 | 66 | 0.563492 | 27 | 126 | 2.62963 | 0.481481 | 0.112676 | 0.140845 | 0.197183 | 0.394366 | 0.394366 | 0.394366 | 0.394366 | 0 | 0 | 0 | 0.086022 | 0.261905 | 126 | 5 | 67 | 25.2 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
8dd35d4ae5a71f02afd852a3100afe994e103cde | 28 | py | Python | majora2/submodels/__init__.py | CLIMB-COVID/majora2 | 46ea1809a61e4a768f8cbacaf54cba5c4d82e1f2 | [
"MIT"
] | 29 | 2019-04-04T18:03:43.000Z | 2022-02-09T12:47:30.000Z | majora2/submodels/__init__.py | CLIMB-COVID/majora2 | 46ea1809a61e4a768f8cbacaf54cba5c4d82e1f2 | [
"MIT"
] | 66 | 2019-04-02T16:18:40.000Z | 2022-01-25T16:15:42.000Z | majora2/submodels/__init__.py | CLIMB-COVID/majora2 | 46ea1809a61e4a768f8cbacaf54cba5c4d82e1f2 | [
"MIT"
] | 6 | 2020-04-10T14:15:32.000Z | 2022-01-18T13:08:35.000Z | from .supplemental import *
| 14 | 27 | 0.785714 | 3 | 28 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8de98294dc0ea2c3f70705044f738a627e18bf64 | 1,699 | py | Python | simulation_pomegranate.py | malwash/Simulation_Calibration | fd0ebd54e78694aa0d256d3837fa67642a35c54b | [
"Apache-2.0"
] | null | null | null | simulation_pomegranate.py | malwash/Simulation_Calibration | fd0ebd54e78694aa0d256d3837fa67642a35c54b | [
"Apache-2.0"
] | null | null | null | simulation_pomegranate.py | malwash/Simulation_Calibration | fd0ebd54e78694aa0d256d3837fa67642a35c54b | [
"Apache-2.0"
] | null | null | null | import notears
import numpy as np
from pomegranate import BayesianNetwork
from notears.nonlinear import notears_nonlinear, NotearsMLP
from statsmodels.tools.eval_measures import bic
def pomegranate_setup(train_data, training_n):
    model = BayesianNetwork.from_samples(train_data, state_names=train_data.columns.values, algorithm='exact')
    # print(model.structure)
    # model.plot()
    nt_sampling_train = model.sample(1000)
    nt_sampling_test = model.sample(1000)
    # print(nt_sampling_train)
    np.savetxt('X_est_train.csv', nt_sampling_train, delimiter=',')
    # np.savetxt('W_est_test.csv', nt_sampling_test, delimiter=',')
    return nt_sampling_train, nt_sampling_test


def pomegranate_setup_b(train_data, training_n):
    model = BayesianNetwork.from_samples(train_data, state_names=train_data.columns.values, algorithm='greedy')
    # print(model.structure)
    # model.plot()
    nt_sampling_train = model.sample(1000)
    nt_sampling_test = model.sample(1000)
    # print(nt_sampling_train)
    np.savetxt('X_est_train.csv', nt_sampling_train, delimiter=',')
    # np.savetxt('W_est_test.csv', nt_sampling_test, delimiter=',')
    return nt_sampling_train, nt_sampling_test


def pomegranate_setup_c(train_data, training_n):
    model = BayesianNetwork.from_samples(train_data, state_names=train_data.columns.values, algorithm='chow-liu')
    # print(model.structure)
    # model.plot()
    nt_sampling_train = model.sample(1000)
    nt_sampling_test = model.sample(1000)
    # print(nt_sampling_train)
    np.savetxt('X_est_train.csv', nt_sampling_train, delimiter=',')
    # np.savetxt('W_est_test.csv', nt_sampling_test, delimiter=',')
    return nt_sampling_train, nt_sampling_test
| 41.439024 | 114 | 0.765156 | 234 | 1,699 | 5.217949 | 0.209402 | 0.17199 | 0.14742 | 0.044226 | 0.839476 | 0.839476 | 0.839476 | 0.839476 | 0.839476 | 0.839476 | 0 | 0.016118 | 0.123602 | 1,699 | 40 | 115 | 42.475 | 0.803895 | 0.217187 | 0 | 0.521739 | 0 | 0 | 0.051593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.217391 | 0 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8df36a1face9d6c6b4a0e92f1d065d5bdef9cb4b | 21 | py | Python | __init__.py | tekromancr/pypurdypixels | 2a357bb5e423303d08c03dac871dbd0f98b63035 | [
"MIT"
] | null | null | null | __init__.py | tekromancr/pypurdypixels | 2a357bb5e423303d08c03dac871dbd0f98b63035 | [
"MIT"
] | null | null | null | __init__.py | tekromancr/pypurdypixels | 2a357bb5e423303d08c03dac871dbd0f98b63035 | [
"MIT"
] | null | null | null | from strand import *
| 10.5 | 20 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5c2890399e7412165e09afb334e6d9e942573ac3 | 157 | py | Python | python_mbills/exceptions.py | boris-savic/python-mbills | a591be0a3e07ee825503c2e0dd8a54d5c991f56f | [
"MIT"
] | null | null | null | python_mbills/exceptions.py | boris-savic/python-mbills | a591be0a3e07ee825503c2e0dd8a54d5c991f56f | [
"MIT"
] | null | null | null | python_mbills/exceptions.py | boris-savic/python-mbills | a591be0a3e07ee825503c2e0dd8a54d5c991f56f | [
"MIT"
] | null | null | null |
class SignatureValidationException(Exception):
pass
class TransactionDoesNotExist(Exception):
pass
class InsufficientFunds(Exception):
pass
| 13.083333 | 46 | 0.77707 | 12 | 157 | 10.166667 | 0.5 | 0.319672 | 0.295082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165605 | 157 | 11 | 47 | 14.272727 | 0.931298 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
5c37d5938bc6e4ff26abc6a0454650dbb466b92e | 46 | py | Python | datasets/__init__.py | rioyokotalab/ecl-isvr | ae274b1b81b1d1c10db008140c477f5893a0c1c3 | [
"BSD-4-Clause-UC"
] | null | null | null | datasets/__init__.py | rioyokotalab/ecl-isvr | ae274b1b81b1d1c10db008140c477f5893a0c1c3 | [
"BSD-4-Clause-UC"
] | null | null | null | datasets/__init__.py | rioyokotalab/ecl-isvr | ae274b1b81b1d1c10db008140c477f5893a0c1c3 | [
"BSD-4-Clause-UC"
] | 2 | 2021-09-30T02:13:40.000Z | 2021-12-14T07:33:28.000Z | # -*- coding: utf-8 -*-
from .datasets import *
| 15.333333 | 24 | 0.608696 | 6 | 46 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.195652 | 46 | 2 | 25 | 23 | 0.72973 | 0.391304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb76ec1152ded3758f71170ddfc00815659ac7b8 | 187 | py | Python | profiles_api/admin.py | jaesungreemei/profiles-rest-api | 385d6b8ba1891397771a4f4d36f603358cf33517 | [
"MIT"
] | null | null | null | profiles_api/admin.py | jaesungreemei/profiles-rest-api | 385d6b8ba1891397771a4f4d36f603358cf33517 | [
"MIT"
] | null | null | null | profiles_api/admin.py | jaesungreemei/profiles-rest-api | 385d6b8ba1891397771a4f4d36f603358cf33517 | [
"MIT"
] | null | null | null | from django.contrib import admin
# Enable the model through imports
from profiles_api import models
admin.site.register(models.UserProfile)
admin.site.register(models.ProfileFeedItem)
| 20.777778 | 43 | 0.834225 | 25 | 187 | 6.2 | 0.68 | 0.116129 | 0.219355 | 0.296774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101604 | 187 | 8 | 44 | 23.375 | 0.922619 | 0.171123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ebe2de359eabe8105405f751046c2ddce10b0f70 | 16,070 | py | Python | datadog_checks_base/tests/openmetrics/test_config.py | vbarbaresi/integrations-core | ab26ab1cd6c28a97c1ad1177093a93659658c7aa | [
"BSD-3-Clause"
] | 1 | 2021-06-06T23:49:17.000Z | 2021-06-06T23:49:17.000Z | datadog_checks_base/tests/openmetrics/test_config.py | vbarbaresi/integrations-core | ab26ab1cd6c28a97c1ad1177093a93659658c7aa | [
"BSD-3-Clause"
] | null | null | null | datadog_checks_base/tests/openmetrics/test_config.py | vbarbaresi/integrations-core | ab26ab1cd6c28a97c1ad1177093a93659658c7aa | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2020-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
import pytest
from datadog_checks.dev.testing import requires_py3
from .utils import get_check
pytestmark = [requires_py3, pytest.mark.openmetrics, pytest.mark.openmetrics_config]
class TestPrometheusEndpoint:
def test_not_string(self, dd_run_check):
check = get_check({'openmetrics_endpoint': 9000})
with pytest.raises(Exception, match='^The setting `openmetrics_endpoint` must be a string$'):
dd_run_check(check, extract_message=True)
def test_missing(self, dd_run_check):
check = get_check({'openmetrics_endpoint': ''})
with pytest.raises(Exception, match='^The setting `openmetrics_endpoint` is required$'):
dd_run_check(check, extract_message=True)
class TestNamespace:
def test_not_string(self, dd_run_check):
check = get_check({'namespace': 9000})
check.__NAMESPACE__ = ''
with pytest.raises(Exception, match='^Setting `namespace` must be a string$'):
dd_run_check(check, extract_message=True)
def test_not_string_override(self, dd_run_check):
check = get_check({'namespace': 'foo'})
check.__NAMESPACE__ = 9000
with pytest.raises(Exception, match='^Setting `namespace` must be a string$'):
dd_run_check(check, extract_message=True)
def test_missing(self, dd_run_check):
check = get_check({'openmetrics_endpoint': 'test'})
check.__NAMESPACE__ = ''
with pytest.raises(Exception, match='^Setting `namespace` is required$'):
dd_run_check(check, extract_message=True)
class TestRawMetricPrefix:
def test_not_string(self, dd_run_check):
check = get_check({'raw_metric_prefix': 9000})
with pytest.raises(Exception, match='^Setting `raw_metric_prefix` must be a string$'):
dd_run_check(check, extract_message=True)
class TestHostnameLabel:
def test_not_string(self, dd_run_check):
check = get_check({'hostname_label': 9000})
with pytest.raises(Exception, match='^Setting `hostname_label` must be a string$'):
dd_run_check(check, extract_message=True)
class TestHostnameFormat:
def test_not_string(self, dd_run_check):
check = get_check({'hostname_format': 9000})
with pytest.raises(Exception, match='^Setting `hostname_format` must be a string$'):
dd_run_check(check, extract_message=True)
def test_no_placeholder(self, dd_run_check):
check = get_check({'hostname_label': 'foo', 'hostname_format': 'bar'})
with pytest.raises(
Exception, match='^Setting `hostname_format` does not contain the placeholder `<HOSTNAME>`$'
):
dd_run_check(check, extract_message=True)
class TestExcludeLabels:
def test_not_array(self, dd_run_check):
check = get_check({'exclude_labels': 9000})
with pytest.raises(Exception, match='^Setting `exclude_labels` must be an array$'):
dd_run_check(check, extract_message=True)
def test_entry_invalid_type(self, dd_run_check):
check = get_check({'exclude_labels': [9000]})
with pytest.raises(Exception, match='^Entry #1 of setting `exclude_labels` must be a string$'):
dd_run_check(check, extract_message=True)
class TestRenameLabels:
def test_not_mapping(self, dd_run_check):
check = get_check({'rename_labels': 9000})
with pytest.raises(Exception, match='^Setting `rename_labels` must be a mapping$'):
dd_run_check(check, extract_message=True)
def test_value_not_string(self, dd_run_check):
check = get_check({'rename_labels': {'foo': 9000}})
with pytest.raises(Exception, match='^Value for label `foo` of setting `rename_labels` must be a string$'):
dd_run_check(check, extract_message=True)
class TestExcludeMetrics:
def test_not_array(self, dd_run_check):
check = get_check({'exclude_metrics': 9000})
with pytest.raises(Exception, match='^Setting `exclude_metrics` must be an array$'):
dd_run_check(check, extract_message=True)
def test_entry_invalid_type(self, dd_run_check):
check = get_check({'exclude_metrics': [9000]})
with pytest.raises(Exception, match='^Entry #1 of setting `exclude_metrics` must be a string$'):
dd_run_check(check, extract_message=True)
class TestExcludeMetricsByLabels:
def test_not_mapping(self, dd_run_check):
check = get_check({'exclude_metrics_by_labels': 9000})
with pytest.raises(Exception, match='^Setting `exclude_metrics_by_labels` must be a mapping$'):
dd_run_check(check, extract_message=True)
def test_value_not_string(self, dd_run_check):
check = get_check({'exclude_metrics_by_labels': {'foo': [9000]}})
with pytest.raises(
Exception, match='^Value #1 for label `foo` of setting `exclude_metrics_by_labels` must be a string$'
):
dd_run_check(check, extract_message=True)
def test_invalid_type(self, dd_run_check):
check = get_check({'exclude_metrics_by_labels': {'foo': 9000}})
with pytest.raises(
Exception, match='^Label `foo` of setting `exclude_metrics_by_labels` must be an array or set to `true`$'
):
dd_run_check(check, extract_message=True)
class TestTags:
def test_not_array(self, dd_run_check):
check = get_check({'tags': 9000})
with pytest.raises(Exception, match='^Setting `tags` must be an array$'):
dd_run_check(check, extract_message=True)
def test_entry_invalid_type(self, dd_run_check):
check = get_check({'tags': [9000]})
with pytest.raises(Exception, match='^Entry #1 of setting `tags` must be a string$'):
dd_run_check(check, extract_message=True)
class TestRawLineFilters:
def test_not_array(self, dd_run_check):
check = get_check({'raw_line_filters': 9000})
with pytest.raises(Exception, match='^Setting `raw_line_filters` must be an array$'):
dd_run_check(check, extract_message=True)
def test_entry_invalid_type(self, dd_run_check):
check = get_check({'raw_line_filters': [9000]})
with pytest.raises(Exception, match='^Entry #1 of setting `raw_line_filters` must be a string$'):
dd_run_check(check, extract_message=True)
def test_invalid_pattern(self, dd_run_check):
check = get_check({'raw_line_filters': ['\\1']})
with pytest.raises(Exception, match='^invalid group reference'):
dd_run_check(check, extract_message=True)
class TestMetrics:
def test_not_array(self, dd_run_check):
check = get_check({'metrics': 9000})
with pytest.raises(Exception, match='^Setting `metrics` must be an array$'):
dd_run_check(check, extract_message=True)
def test_entry_invalid_type(self, dd_run_check):
check = get_check({'metrics': [9000]})
with pytest.raises(Exception, match='^Entry #1 of setting `metrics` must be a string or a mapping$'):
dd_run_check(check, extract_message=True)
def test_mapped_value_not_string(self, dd_run_check):
check = get_check({'metrics': [{'foo': 9000}]})
with pytest.raises(
Exception, match='^Value of entry `foo` of setting `metrics` must be a string or a mapping$'
):
dd_run_check(check, extract_message=True)
def test_config_name_not_string(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'name': 9000}}]})
with pytest.raises(
Exception, match='^Error compiling transformer for metric `foo`: field `name` must be a string$'
):
dd_run_check(check, extract_message=True)
def test_config_type_not_string(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 9000}}]})
with pytest.raises(
Exception, match='^Error compiling transformer for metric `foo`: field `type` must be a string$'
):
dd_run_check(check, extract_message=True)
def test_config_type_unknown(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'bar'}}]})
with pytest.raises(Exception, match='^Error compiling transformer for metric `foo`: unknown type `bar`$'):
dd_run_check(check, extract_message=True)
class TestExtraMetrics:
def test_not_array(self, dd_run_check):
check = get_check({'extra_metrics': 9000})
with pytest.raises(Exception, match='^Setting `extra_metrics` must be an array$'):
dd_run_check(check, extract_message=True)
def test_entry_invalid_type(self, dd_run_check):
check = get_check({'extra_metrics': [9000]})
with pytest.raises(Exception, match='^Entry #1 of setting `extra_metrics` must be a string or a mapping$'):
dd_run_check(check, extract_message=True)
def test_mapped_value_not_string(self, dd_run_check):
check = get_check({'extra_metrics': [{'foo': 9000}]})
with pytest.raises(
Exception, match='^Value of entry `foo` of setting `extra_metrics` must be a string or a mapping$'
):
dd_run_check(check, extract_message=True)
class TestTransformerCompilation:
def test_temporal_percent_no_scale(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'temporal_percent'}}]})
with pytest.raises(
Exception, match='^Error compiling transformer for metric `foo`: the `scale` parameter is required$'
):
dd_run_check(check, extract_message=True)
def test_temporal_percent_unknown_scale(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'temporal_percent', 'scale': 'bar'}}]})
with pytest.raises(
Exception, match='^Error compiling transformer for metric `foo`: the `scale` parameter must be one of: '
):
dd_run_check(check, extract_message=True)
def test_temporal_percent_scale_not_int(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'temporal_percent', 'scale': 1.23}}]})
with pytest.raises(
Exception,
match=(
'^Error compiling transformer for metric `foo`: '
'the `scale` parameter must be an integer representing parts of a second e.g. 1000 for millisecond$'
),
):
dd_run_check(check, extract_message=True)
def test_service_check_no_status_map(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'service_check'}}]})
with pytest.raises(
Exception, match='^Error compiling transformer for metric `foo`: the `status_map` parameter is required$'
):
dd_run_check(check, extract_message=True)
def test_service_check_status_map_not_dict(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'service_check', 'status_map': 5}}]})
with pytest.raises(
Exception,
match='^Error compiling transformer for metric `foo`: the `status_map` parameter must be a mapping$',
):
dd_run_check(check, extract_message=True)
def test_service_check_status_map_empty(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'service_check', 'status_map': {}}}]})
with pytest.raises(
Exception,
match='^Error compiling transformer for metric `foo`: the `status_map` parameter must not be empty$',
):
dd_run_check(check, extract_message=True)
def test_service_check_status_map_value_not_number(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'service_check', 'status_map': {True: 'OK'}}}]})
with pytest.raises(
Exception,
match=(
'^Error compiling transformer for metric `foo`: '
'value `True` of parameter `status_map` does not represent an integer$'
),
):
dd_run_check(check, extract_message=True)
def test_service_check_status_map_status_not_string(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'service_check', 'status_map': {'9000': 0}}}]})
with pytest.raises(
Exception,
match=(
'^Error compiling transformer for metric `foo`: '
'status `0` for value `9000` of parameter `status_map` is not a string$'
),
):
dd_run_check(check, extract_message=True)
def test_service_check_status_map_status_invalid(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'service_check', 'status_map': {'9000': '0k'}}}]})
with pytest.raises(
Exception,
match=(
'^Error compiling transformer for metric `foo`: '
'invalid status `0k` for value `9000` of parameter `status_map`$'
),
):
dd_run_check(check, extract_message=True)
def test_metadata_label_not_string(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'metadata', 'label': 9000}}]})
with pytest.raises(
Exception, match='^Error compiling transformer for metric `foo`: the `label` parameter must be a string$'
):
dd_run_check(check, extract_message=True)
def test_metadata_no_label(self, dd_run_check):
check = get_check({'metrics': [{'foo': {'type': 'metadata'}}]})
with pytest.raises(
Exception, match='^Error compiling transformer for metric `foo`: the `label` parameter is required$'
):
dd_run_check(check, extract_message=True)
class TestShareLabels:
def test_not_mapping(self, dd_run_check):
check = get_check({'share_labels': 9000})
with pytest.raises(Exception, match='^Setting `share_labels` must be a mapping$'):
dd_run_check(check, extract_message=True)
def test_invalid_type(self, dd_run_check):
check = get_check({'share_labels': {'foo': 9000}})
with pytest.raises(
Exception, match='^Metric `foo` of setting `share_labels` must be a mapping or set to `true`$'
):
dd_run_check(check, extract_message=True)
def test_values_not_array(self, dd_run_check):
check = get_check({'share_labels': {'foo': {'values': 9000}}})
with pytest.raises(
Exception, match='^Option `values` for metric `foo` of setting `share_labels` must be an array$'
):
dd_run_check(check, extract_message=True)
def test_values_entry_not_integer(self, dd_run_check):
check = get_check({'share_labels': {'foo': {'values': [1.0]}}})
with pytest.raises(
Exception,
match=(
'^Entry #1 of option `values` for metric `foo` of setting `share_labels` must represent an integer$'
),
):
dd_run_check(check, extract_message=True)
@pytest.mark.parametrize('option', ['labels', 'match'])
def test_option_not_array(self, dd_run_check, option):
check = get_check({'share_labels': {'foo': {option: 9000}}})
with pytest.raises(
Exception, match='^Option `{}` for metric `foo` of setting `share_labels` must be an array$'.format(option)
):
dd_run_check(check, extract_message=True)
@pytest.mark.parametrize('option', ['labels', 'match'])
def test_option_entry_not_string(self, dd_run_check, option):
check = get_check({'share_labels': {'foo': {option: [9000]}}})
with pytest.raises(
Exception,
match=(
'^Entry #1 of option `{}` for metric `foo` of setting `share_labels` must be a string$'.format(option)
),
):
dd_run_check(check, extract_message=True)
| 39.195122 | 119 | 0.646422 | 2,026 | 16,070 | 4.851431 | 0.07305 | 0.049852 | 0.099705 | 0.146505 | 0.900295 | 0.893072 | 0.88595 | 0.873131 | 0.820938 | 0.762539 | 0 | 0.014519 | 0.232794 | 16,070 | 409 | 120 | 39.290954 | 0.782707 | 0.006721 | 0 | 0.527586 | 0 | 0.010345 | 0.268392 | 0.012533 | 0 | 0 | 0 | 0 | 0 | 1 | 0.168966 | false | 0 | 0.010345 | 0 | 0.231034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ccd9854e324566ae92a87c2fd96e8e156af84ee6 | 27 | py | Python | gen2-mask-detection/depthai_utils/__init__.py | ibaiGorordo/depthai-experiments | cde67e277120ddac815cbad6360695759cca900f | [
"MIT"
] | 381 | 2020-05-31T22:36:51.000Z | 2022-03-31T15:39:36.000Z | gen2-mask-detection/depthai_utils/__init__.py | ibaiGorordo/depthai-experiments | cde67e277120ddac815cbad6360695759cca900f | [
"MIT"
] | 211 | 2020-09-12T20:49:18.000Z | 2022-03-31T17:22:52.000Z | gen2-mask-detection/depthai_utils/__init__.py | ibaiGorordo/depthai-experiments | cde67e277120ddac815cbad6360695759cca900f | [
"MIT"
] | 189 | 2020-06-01T19:09:51.000Z | 2022-03-31T15:39:28.000Z | from .gen2_depthai import * | 27 | 27 | 0.814815 | 4 | 27 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 0.111111 | 27 | 1 | 27 | 27 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ccdb03336fa4e60335500ce7924e6f29c4ccdec3 | 25 | py | Python | src/tfmars/modules/__init__.py | Shakshi3104/tfmars | f34d3fff49d9eabf130cd7c5fb8f7c6d0f12b5e8 | [
"MIT"
] | 1 | 2022-03-19T11:14:04.000Z | 2022-03-19T11:14:04.000Z | src/tfmars/modules/__init__.py | Shakshi3104/tfmars | f34d3fff49d9eabf130cd7c5fb8f7c6d0f12b5e8 | [
"MIT"
] | null | null | null | src/tfmars/modules/__init__.py | Shakshi3104/tfmars | f34d3fff49d9eabf130cd7c5fb8f7c6d0f12b5e8 | [
"MIT"
] | null | null | null | from .attention import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
692e2ec14110764034d7a064279febd7219b4110 | 53 | py | Python | VersionDetermination/LastVersionDetector/__init__.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | VersionDetermination/LastVersionDetector/__init__.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | VersionDetermination/LastVersionDetector/__init__.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | from .LastVersionDetector import LastVersionDetector
| 26.5 | 52 | 0.90566 | 4 | 53 | 12 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 53 | 1 | 53 | 53 | 0.979592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
694698474062c3e2bc67478fdd66ae5beeefa20c | 882 | py | Python | python/ray/util/collective/__init__.py | Crissman/ray | 2092b097eab41b118a117fdfadd0fe664db41f63 | [
"Apache-2.0"
] | 3 | 2021-06-22T19:57:41.000Z | 2021-06-23T07:16:44.000Z | python/ray/util/collective/__init__.py | h453693821/ray | 9eb79727aa6ad94b01f8b660b83e1182555a89f6 | [
"Apache-2.0"
] | 72 | 2021-02-06T08:07:16.000Z | 2022-03-26T07:17:49.000Z | python/ray/util/collective/__init__.py | h453693821/ray | 9eb79727aa6ad94b01f8b660b83e1182555a89f6 | [
"Apache-2.0"
] | 2 | 2021-05-05T21:05:16.000Z | 2021-06-22T21:16:03.000Z | from ray.util.collective.collective import nccl_available, gloo_available, \
is_group_initialized, init_collective_group, destroy_collective_group, \
declare_collective_group, get_rank, get_world_size, allreduce, \
allreduce_multigpu, barrier, reduce, reduce_multigpu, broadcast, \
broadcast_multigpu, allgather, allgather_multigpu, reducescatter, \
reducescatter_multigpu, send, send_multigpu, recv, recv_multigpu
__all__ = [
"nccl_available", "gloo_available", "is_group_initialized",
"init_collective_group", "destroy_collective_group",
"declare_collective_group", "get_rank", "get_world_size", "allreduce",
"allreduce_multigpu", "barrier", "reduce", "reduce_multigpu", "broadcast",
"broadcast_multigpu", "allgather", "allgather_multigpu", "reducescatter",
"reducescatter_multigpu", "send", "send_multigpu", "recv", "recv_multigpu"
]
| 55.125 | 78 | 0.768707 | 93 | 882 | 6.817204 | 0.301075 | 0.141956 | 0.053628 | 0.082019 | 0.936909 | 0.936909 | 0.936909 | 0.936909 | 0.936909 | 0.936909 | 0 | 0 | 0.11678 | 882 | 15 | 79 | 58.8 | 0.813864 | 0 | 0 | 0 | 0 | 0 | 0.35941 | 0.103175 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c6573bc6895ef6da2022e09bd658f96f01d948ac | 87,833 | py | Python | teospy/seaice4.py | jarethholt/teospy | 3bb23e67bbb765c0842aa8d4a73c1d55ea395d2f | [
"MIT"
] | null | null | null | teospy/seaice4.py | jarethholt/teospy | 3bb23e67bbb765c0842aa8d4a73c1d55ea395d2f | [
"MIT"
] | null | null | null | teospy/seaice4.py | jarethholt/teospy | 3bb23e67bbb765c0842aa8d4a73c1d55ea395d2f | [
"MIT"
] | null | null | null | """Seawater-ice equilibrium functions.
This module provides thermodynamic functions for seawater in equilibrium
with ice (sea-ice), e.g. the enthalpy of melting. It also provides a
Gibbs free energy function for sea-ice parcels, with primary variables
being the total salinity (mass of salt per mass of salt, liquid, and
ice), temperature, and pressure.
:Examples:
>>> temperature(salt=0.035,pres=1e5)
271.240373585159
>>> enthalpymelt(salt=0.035,pres=1e5)
329942.976285
>>> volumemelt(salt=0.035,pres=1e5)
-9.10140854473e-5
>>> pressure(salt=0.035,temp=270.)
16132047.4385
>>> enthalpymelt(salt=0.035,temp=270.)
326829.393605
>>> volumemelt(salt=0.035,temp=270.)
-9.67135426848e-5
>>> salinity(temp=270.,pres=1e5)
0.05602641503
>>> enthalpymelt(temp=270.,pres=1e5)
328249.119579
>>> volumemelt(temp=270.,pres=1e5)
-9.18186917900e-5
>>> brinesalinity(270.,1e5)
0.05602641503
>>> meltingpressure(0.035,270.)
16132047.4385
>>> freezingtemperature(0.035,1e5)
271.240373585
>>> dtfdp(0.035,1e5)
7.48210942879e-8
>>> dtfds(0.035,1e5)
-56.8751336296
>>> seaice_g(0,0,0,0.035,270.,1e5)
-414.0175745
>>> seaice_g(0,1,0,0.035,270.,1e5)
500.445444181
>>> seaice_g(0,1,1,0.035,270.,1e5)
-1.658664467e-05
>>> brinefraction(0.035,270.,1e5)
0.6247053284
>>> cp(0.035,270.,1e5)
62868.90151
>>> density(0.035,270.,1e5)
993.156434117
>>> enthalpy(0.035,270.,1e5)
-135534.287504
>>> entropy(0.035,270.,1e5)
-500.445444181
>>> expansion(0.035,270.,1e5)
-1.647313287e-02
>>> kappa_t(0.035,270.,1e5)
1.56513441348e-9
:Functions:
* :func:`eq_stp`: Calculate primary variables for sea-ice at any two of
the seawater salinity, temperature, and pressure.
* :func:`densityice`: Sea-ice ice density.
* :func:`densitysea`: Sea-ice seawater density.
* :func:`enthalpyice`: Sea-ice ice enthalpy.
* :func:`enthalpysea`: Sea-ice seawater enthalpy.
* :func:`entropyice`: Sea-ice ice entropy.
* :func:`entropysea`: Sea-ice seawater entropy.
* :func:`pressure`: Sea-ice pressure.
* :func:`temperature`: Sea-ice temperature.
* :func:`salinity`: Sea-ice salinity.
* :func:`enthalpymelt`: Enthalpy of melting.
* :func:`volumemelt`: Specific volume of melting.
* :func:`brinesalinity`: Salinity of seawater in equilibrium with ice.
* :func:`meltingpressure`: Pressure of seawater in equilibrium with ice.
* :func:`freezingtemperature`: Temperature of seawater in equilibrium
with ice.
* :func:`dtfdp`: Freezing point depression of seawater due to pressure.
* :func:`dtfds`: Freezing point depression of seawater due to salinity.
* :func:`eq_seaice`: Calculate primary variables for a sea-ice parcel at
the given total salinity, temperature, and pressure.
* :func:`seaice_g`: Sea-ice Gibbs free energy with derivatives.
* :func:`brinefraction`: Sea-ice seawater mass fraction.
* :func:`cp`: Sea-ice isobaric heat capacity.
* :func:`density`: Sea-ice total density.
* :func:`enthalpy`: Sea-ice specific enthalpy.
* :func:`entropy`: Sea-ice specific entropy.
* :func:`expansion`: Sea-ice thermal expansion coefficient.
* :func:`kappa_t`: Sea-ice isothermal compressibility.
"""
__all__ = ['eq_stp','densityice','densitysea','enthalpyice','enthalpysea',
'entropyice','entropysea','pressure','temperature','salinity',
'enthalpymelt','volumemelt',
'brinesalinity','meltingpressure','freezingtemperature','dtfdp','dtfds',
'eq_seaice','seaice_g','brinefraction','cp','density','enthalpy','entropy',
'expansion','kappa_t']
import warnings
import numpy
from teospy import constants0
from teospy import flu1
from teospy import ice1
from teospy import flu2
from teospy import ice2
from teospy import sal2
from teospy import maths3
from teospy import flu3a
from teospy import sea3a
from teospy import maths4
_CHKTOL = constants0.CHKTOL
_MSAL = constants0.MSAL
_RUNIV = constants0.RUNIV
_RWAT = constants0.RWAT
_TTP = constants0.TTP
_PTPE = constants0.PTPE
_DLTP = constants0.DLTP
_DITP = constants0.DITP
_LILTP = constants0.LILTP
_CLIQ = constants0.CLIQ
_CICE = constants0.CICE
_SAL0 = constants0.SAL0
_RSAL = _RUNIV / _MSAL
_VLTP = _DLTP**(-1)
_VITP = _DITP**(-1)
_C_SP = 0.396166676603
_E = numpy.exp(1)
_chkflubnds = constants0.chkflubnds
_chkicebnds = constants0.chkicebnds
_chksalbnds = constants0.chksalbnds
_flu_f = flu1.flu_f
_ice_g = ice1.ice_g
_eq_pressure = flu2.eq_pressure
_eq_chempot = flu2.eq_chempot
_sal_g = sal2.sal_g
_eq_liqpot = sal2.eq_liqpot
_newton = maths3.newton
_dliq_default = flu3a._dliq_default
## Equilibrium functions
def _approx_st(salt,temp):
"""Approximate PDl at ST.
Approximate the pressure and liquid water density of sea-ice with
the given salinity and temperature.
:arg float salt: Salinity in kg/kg.
:arg float temp: Temperature in K.
:returns: Pressure and liquid water density (both in SI units).
"""
dmu = ((_CLIQ-_CICE)*(temp - _TTP - temp*numpy.log(temp/_TTP))
+ -_LILTP/_TTP*(temp - _TTP) - _RSAL*temp*salt)
pres = _PTPE + dmu/(_VITP-_VLTP)
dliq = _dliq_default(temp,pres)
return pres, dliq
def _approx_sp(salt,pres):
"""Approximate TDl at SP.
Approximate the temperature and liquid water density of sea-ice with
the given salinity and pressure.
:arg float salt: Salinity in kg/kg.
:arg float pres: Pressure in Pa.
:returns: Temperature and liquid water density (both in SI units).
"""
CDIF = _CLIQ-_CICE
R0 = _LILTP/_TTP / CDIF
r1 = (pres-_PTPE) * (_VITP-_VLTP)/_TTP / CDIF
r2 = _RSAL*salt / CDIF
w = -(1 - R0 + r1) * numpy.exp(-(1 - R0 - r2))
negz = 1 - (1 + _E*w)**_C_SP
temp = (1 - R0 + r1)*_TTP/negz
dliq = _dliq_default(temp,pres)
return temp, dliq
def _approx_sp2(salt,pres):
"""Approximate TDl at SP.
Approximate the temperature and liquid water density of sea-ice with
the given salinity and pressure.
:arg float salt: Salinity in kg/kg.
:arg float pres: Pressure in Pa.
:returns: Temperature and liquid water density (both in SI units).
"""
x = (_RSAL*_TTP*salt + (pres-_PTPE)*(_VITP-_VLTP)) / _LILTP
temp = _TTP * (1-x)
dliq = _dliq_default(temp,pres)
return temp, dliq
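# Illustration (not part of the original module): the linearized
# freezing-point formula above, T = TTP*(1 - x) with
# x = (RSAL*TTP*S + (p - PTPE)*(VITP - VLTP))/LILTP, can be sketched
# with stand-alone constants. The numeric values below are assumed
# approximate triple-point properties, not the exact constants0 values.
def _demo_approx_sp2(salt, pres):
    """Sketch of the linearized freezing temperature (assumed constants)."""
    TTP = 273.16                      # triple-point temperature, K
    PTPE = 611.657                    # approx. triple-point pressure, Pa
    LILTP = 333430.0                  # approx. melting enthalpy, J/kg
    RSAL = 8.314472 / 0.0314038218    # gas constant / sea-salt molar mass
    VLTP, VITP = 1/999.8, 1/916.7     # approx. specific volumes, m3/kg
    x = (RSAL*TTP*salt + (pres - PTPE)*(VITP - VLTP)) / LILTP
    return TTP * (1.0 - x)
# For salt=0.035 kg/kg and pres=1e5 Pa this gives roughly 271 K, close
# to the full iterative solution quoted in the module docstring.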
def _approx_tp(temp,pres,dliq):
"""Approximate S at TP.
Approximate the salinity of sea-ice with the given temperature and
pressure.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg float dliq: Liquid water density in kg/m3 (unused).
:returns: Salinity in kg/kg.
"""
dmu = ((_CLIQ-_CICE) * (temp-_TTP-temp*numpy.log(temp/_TTP))
+ -_LILTP/_TTP*(temp-_TTP) - (pres-_PTPE)*(_VITP-_VLTP))
salt = dmu / (_RSAL*temp)
return salt
def _diff_st(p,dl,salt,temp,useext=False):
"""Calculate sea-ice disequilibrium at ST.
Calculate both sides of the equations
given pressure = pressure of liquid water
chemical potential of ice = potential of liquid water
and their Jacobians with respect to pressure and liquid water
density. Solving these equations gives equilibrium values at the
given salinity and temperature.
:arg float p: Pressure in Pa.
:arg float dl: Liquid water density in kg/m3.
:arg float salt: Salinity in kg/kg.
:arg float temp: Temperature in K.
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:returns: Left-hand side of the equation, right-hand side,
Jacobian of LHS, and Jacobian of RHS.
:rtype: tuple(array(float))
"""
pl = _eq_pressure(0,0,temp,dl)
gi = _ice_g(0,0,temp,p)
gl = _eq_chempot(0,0,temp,dl)
gl += _eq_liqpot(0,0,0,salt,temp,p,useext=useext)
lhs = numpy.array([p, gi])
rhs = numpy.array([pl, gl])
pl_d = _eq_pressure(0,1,temp,dl)
gi_p = _ice_g(0,1,temp,p)
gl_d = _eq_chempot(0,1,temp,dl)
gl_p = _eq_liqpot(0,0,1,salt,temp,p,useext=useext)
dlhs = numpy.array([[1.,0.], [gi_p,0.]])
drhs = numpy.array([[0.,pl_d], [gl_p,gl_d]])
return lhs, rhs, dlhs, drhs
def _diff_sp(t,dl,salt,pres,useext=False):
"""Calculate sea-ice disequilibrium at SP.
Calculate both sides of the equations
given pressure = pressure of liquid water
chemical potential of ice = potential of liquid water
and their Jacobians with respect to temperature and liquid water
density. Solving these equations gives equilibrium values at the
given salinity and pressure.
:arg float t: Temperature in K.
:arg float dl: Liquid water density in kg/m3.
:arg float salt: Salinity in kg/kg.
:arg float pres: Pressure in Pa.
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:returns: Left-hand side of the equation, right-hand side,
Jacobian of LHS, and Jacobian of RHS.
:rtype: tuple(array(float))
"""
pl = _eq_pressure(0,0,t,dl)
gi = _ice_g(0,0,t,pres)
gl = _eq_chempot(0,0,t,dl)
gl += _eq_liqpot(0,0,0,salt,t,pres,useext=useext)
lhs = numpy.array([pres, gi])
rhs = numpy.array([pl, gl])
pl_t = _eq_pressure(1,0,t,dl)
pl_d = _eq_pressure(0,1,t,dl)
gi_t = _ice_g(1,0,t,pres)
gl_t = _eq_chempot(1,0,t,dl)
gl_t += _eq_liqpot(0,1,0,salt,t,pres,useext=useext)
gl_d = _eq_chempot(0,1,t,dl)
dlhs = numpy.array([[0.,0.], [gi_t,0.]])
drhs = numpy.array([[pl_t,pl_d], [gl_t,gl_d]])
return lhs, rhs, dlhs, drhs
def _diff_tp(s,temp,pres,dliq,useext=False):
"""Calculate sea-ice disequilibrium at TP.
Calculate both sides of the equation
chemical potential of ice = potential of liquid water
and their derivatives with respect to salinity. Solving this
equation gives the equilibrium salinity at the given temperature
and pressure.
:arg float s: Salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg float dliq: Liquid water density in kg/m3.
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:returns: Left-hand side of the equation, right-hand side,
derivative of LHS, and derivative of RHS.
:rtype: tuple(float)
"""
gi = _ice_g(0,0,temp,pres)
gl = _eq_chempot(0,0,temp,dliq)
gl += _eq_liqpot(0,0,0,s,temp,pres,useext=useext)
lhs = gi
rhs = gl
gl_s = _eq_liqpot(1,0,0,s,temp,pres,useext=useext)
dlhs = 0.
drhs = gl_s
return lhs, rhs, dlhs, drhs
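Each `_diff_*` function returns the two sides of the equilibrium system together with their Jacobians, which is exactly the shape a Newton iteration needs: with residual f = lhs - rhs and Jacobian J = dlhs - drhs, the update is the solution of J*step = rhs - lhs. The module's `_newton` solver is not shown here, but the iteration it presumably performs can be sketched on a toy two-variable system (the toy equations are an illustration, not the sea-ice equations):

```python
import numpy

def diff_toy(x, y):
    """Toy analogue of _diff_*: both sides of the system plus Jacobians.
    Equations: x**2 + y**2 = 4 and x*y = 1."""
    lhs = numpy.array([x**2 + y**2, x*y])
    rhs = numpy.array([4.0, 1.0])
    dlhs = numpy.array([[2*x, 2*y], [y, x]])
    drhs = numpy.zeros((2, 2))
    return lhs, rhs, dlhs, drhs

z = numpy.array([2.0, 0.5])   # initial guess (the role of _approx_*)
for _ in range(20):
    lhs, rhs, dlhs, drhs = diff_toy(*z)
    # Newton step: solve (dlhs - drhs) @ step = rhs - lhs
    step = numpy.linalg.solve(dlhs - drhs, rhs - lhs)
    z = z + step
    if numpy.max(numpy.abs(step)) < 1e-12:
        break
```

Returning both sides rather than their difference also lets the caller form the relative disequilibrium check used when `chkvals` is True.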
def eq_stp(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,chkbnd=False,
useext=False,mathargs=None):
"""Get primary sea-ice variables at STP.
Get the values of all primary variables for sea-ice in equilibrium.
At least two of the salinity, temperature, and pressure must be
provided.
If the calculation has already been done, the results can be passed
to avoid unnecessary repeat calculations. If enough values are
passed, they will be checked for consistency if chkvals is True.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Salinity, temperature, pressure, and seawater liquid
density (all in SI units).
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
"""
if sum(val is None for val in (salt,temp,pres)) > 1:
errmsg = 'Must provide at least two of (salt,temp,pres)'
raise ValueError(errmsg)
if mathargs is None:
mathargs = dict()
fkwargs = {'useext': useext}
if salt is None:
dliq = flu3a.eq_tp_liq(temp,pres,dliq=dliq,dliq0=dliq0,
mathargs=mathargs)
fargs = (temp,pres,dliq)
salt = _newton(_diff_tp,salt0,_approx_tp,fargs=fargs,fkwargs=fkwargs,
**mathargs)
elif temp is None:
x0 = (temp0,dliq0)
fargs = (salt,pres)
x1 = _newton(_diff_sp,x0,_approx_sp,fargs=fargs,fkwargs=fkwargs,
**mathargs)
temp, dliq = x1
elif pres is None:
x0 = (pres0,dliq0)
fargs = (salt,temp)
x1 = _newton(_diff_st,x0,_approx_st,fargs=fargs,fkwargs=fkwargs,
**mathargs)
pres, dliq = x1
elif dliq is None:
dliq = flu3a.eq_tp_liq(temp,pres,dliq0=dliq0,mathargs=mathargs)
_chkflubnds(temp,dliq,chkbnd=chkbnd)
_chkicebnds(temp,pres,chkbnd=chkbnd)
_chksalbnds(salt,temp,pres,chkbnd=chkbnd)
if not chkvals:
return salt, temp, pres, dliq
lhs, rhs, __, __ = _diff_st(pres,dliq,salt,temp,useext=useext)
errs = list()
for (l,r) in zip(lhs,rhs):
if abs(r) >= chktol:
errs.append(abs(l/r-1))
else:
errs.append(abs(l-r))
if max(errs) > chktol:
warnmsg = ('Given values {0} and solutions {1} disagree to more than '
'the tolerance {2}').format(lhs,rhs,chktol)
warnings.warn(warnmsg,RuntimeWarning)
return salt, temp, pres, dliq
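The `chkvals` branch above mixes two error measures: a relative error when the reference value is large compared to the tolerance, and an absolute error when it is near zero (where a relative error would be meaningless). Extracted as a standalone helper (the name `max_disequilibrium` is mine, not the library's):

```python
def max_disequilibrium(lhs, rhs, chktol=1e-8):
    """Mixed relative/absolute mismatch, as in the chkvals check."""
    errs = []
    for l, r in zip(lhs, rhs):
        if abs(r) >= chktol:
            errs.append(abs(l/r - 1))   # relative error for sizable values
        else:
            errs.append(abs(l - r))     # absolute error near zero
    return max(errs)

# A pair that agrees to machine-level precision, and one that does not
ok = max_disequilibrium([1.0e5, -3.0e3], [1.0e5*(1 + 1e-12), -3.0e3])
bad = max_disequilibrium([1.0e5, -3.0e3], [1.1e5, -3.0e3])
```

This is why passing a fully consistent (salt, temp, pres, dliq) tuple with `chkvals=True` is silent, while an inconsistent one raises the RuntimeWarning.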
## Thermodynamic functions
def densityice(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice ice density.
Calculate the density of ice in sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Density in kg/m3.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> densityice(salt=0.035,pres=1e5)
917.000739687
>>> densityice(salt=0.035,temp=270.)
918.898527655
>>> densityice(temp=270.,pres=1e5)
917.181167192
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
dice = ice2.density(temp,pres)
return dice
def densitysea(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice seawater density.
Calculate the density of seawater in sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Density in kg/m3.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> densitysea(salt=0.035,pres=1e5)
1028.05199645
>>> densitysea(salt=0.035,temp=270.)
1035.73670169
>>> densitysea(temp=270.,pres=1e5)
1045.16805918
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
dsea = sea3a.density(salt,temp,pres,dliq=dliq,useext=useext)
return dsea
def enthalpyice(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice ice enthalpy.
Calculate the specific enthalpy of ice in sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Enthalpy in J/kg.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> enthalpyice(salt=0.035,pres=1e5)
-337351.999358
>>> enthalpyice(salt=0.035,temp=270.)
-323205.968289
>>> enthalpyice(temp=270.,pres=1e5)
-339929.555499
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
hice = ice2.enthalpy(temp,pres)
return hice
def enthalpysea(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice seawater enthalpy.
Calculate the specific enthalpy of seawater in sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Enthalpy in J/kg.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> enthalpysea(salt=0.035,pres=1e5)
-7613.193379
>>> enthalpysea(salt=0.035,temp=270.)
2832.949104
>>> enthalpysea(temp=270.,pres=1e5)
-12742.86649
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
hsea = sea3a.enthalpy(salt,temp,pres,dliq=dliq,useext=useext)
return hsea
def entropyice(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice ice entropy.
Calculate the specific entropy of ice in sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Entropy in J/kg/K.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> entropyice(salt=0.035,pres=1e5)
-1235.44872812
>>> entropyice(salt=0.035,temp=270.)
-1247.71314646
>>> entropyice(temp=270.,pres=1e5)
-1244.97335506
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
sice = ice2.entropy(temp,pres)
return sice
def entropysea(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice seawater entropy.
Calculate the specific entropy of seawater in sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Entropy in J/kg/K.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> entropysea(salt=0.035,pres=1e5)
-27.9264598103
>>> entropysea(salt=0.035,temp=270.)
-46.7361169560
>>> entropysea(temp=270.,pres=1e5)
-53.1667911144
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
ssea = sea3a.entropy(salt,temp,pres,dliq=dliq,useext=useext)
return ssea
def pressure(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice pressure.
Calculate the pressure of sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Pressure in Pa.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> pressure(salt=0.035,temp=270.)
16132047.4385
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
return pres
def temperature(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice temperature.
Calculate the temperature of sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Temperature in K.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> temperature(salt=0.035,pres=1e5)
271.240373585159
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
return temp
def salinity(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice salinity.
Calculate the salinity of sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Salinity in kg/kg.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> salinity(temp=270.,pres=1e5)
0.05602641503
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
return salt
def enthalpymelt(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate the enthalpy of melting.
Calculate the specific enthalpy of melting of sea-ice.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Enthalpy in J/kg.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If chkvals is True, all values are given,
and the relative disequilibrium is more than chktol.
:Examples:
>>> enthalpymelt(salt=0.035,pres=1e5)
329942.976285
>>> enthalpymelt(salt=0.035,temp=270.)
326829.393605
>>> enthalpymelt(temp=270.,pres=1e5)
328249.119579
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
fl_t = _flu_f(1,0,temp,dliq)
gs_t = _sal_g(0,1,0,salt,temp,pres,useext=useext)
gs_st = _sal_g(1,1,0,salt,temp,pres,useext=useext)
gi_t = _ice_g(1,0,temp,pres)
hmelt = temp * (gi_t - (fl_t + gs_t - salt*gs_st))
return hmelt

def volumemelt(salt=None,temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate the volume of melting.

Calculate the specific volume of melting of sea-ice.

:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Specific volume in m3/kg.
:raises ValueError: If fewer than two of salt, temp, and pres are
provided.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:Examples:
>>> volumemelt(salt=0.035,pres=1e5)
-9.10140854473e-5
>>> volumemelt(salt=0.035,temp=270.)
-9.67135426848e-5
>>> volumemelt(temp=270.,pres=1e5)
-9.18186917900e-5
"""
salt, temp, pres, dliq = eq_stp(salt=salt,temp=temp,pres=pres,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,temp0=temp0,pres0=pres0,
dliq0=dliq0,chkbnd=chkbnd,useext=useext,mathargs=mathargs)
gs_p = _sal_g(0,0,1,salt,temp,pres,useext=useext)
gs_sp = _sal_g(1,0,1,salt,temp,pres,useext=useext)
gi_p = _ice_g(0,1,temp,pres)
vmelt = dliq**(-1) + gs_p - salt*gs_sp - gi_p
return vmelt

## Thermodynamic functions of two variables
def brinesalinity(temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,temp0=None,pres0=None,dliq0=None,
chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice brine salinity.

Calculate the salinity of seawater (brine) in equilibrium with ice
of the given temperature and pressure.

:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Salinity in kg/kg. If unknown, pass None (default) and it
will be calculated.
:type salt: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the salinity in kg/kg. If None
(default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Salinity in kg/kg.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:Examples:
>>> brinesalinity(270.,1e5)
0.05602641503
"""
salt, __, __, dliq = eq_stp(temp=temp,pres=pres,salt=salt,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,
useext=useext,mathargs=mathargs)
return salt

def meltingpressure(salt,temp,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,pres0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice melting pressure.

Calculate the pressure required to melt ice into seawater at the
given salinity and temperature.

:arg float salt: Salinity in kg/kg.
:arg float temp: Temperature in K.
:arg pres: Pressure in Pa. If unknown, pass None (default) and it
will be calculated.
:type pres: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_st` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Pressure in Pa.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:Examples:
>>> meltingpressure(0.035,270.)
16132047.4385
"""
__, __, pres, dliq = eq_stp(temp=temp,pres=pres,salt=salt,dliq=dliq,
chkvals=chkvals,chktol=chktol,pres0=pres0,dliq0=dliq0,chkbnd=chkbnd,
useext=useext,mathargs=mathargs)
return pres

def freezingtemperature(salt,pres,temp=None,dliq=None,chkvals=False,
chktol=_CHKTOL,temp0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice freezing temperature.

Calculate the temperature required to freeze seawater at the given
salinity and pressure.

:arg float salt: Salinity in kg/kg.
:arg float pres: Pressure in Pa.
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Temperature in K.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:Examples:
>>> freezingtemperature(0.035,1e5)
271.240373585
"""
__, temp, __, dliq = eq_stp(temp=temp,pres=pres,salt=salt,dliq=dliq,
chkvals=chkvals,chktol=chktol,temp0=temp0,dliq0=dliq0,chkbnd=chkbnd,
useext=useext,mathargs=mathargs)
return temp

def dtfdp(salt,pres,temp=None,dliq=None,chkvals=False,chktol=_CHKTOL,
temp0=None,dliq0=None,chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice freezing point pressure lowering.

Calculate the effect of pressure on the freezing point of sea-ice,
i.e. the derivative of the freezing temperature with respect to
pressure.

:arg float salt: Salinity in kg/kg.
:arg float pres: Pressure in Pa.
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Freezing point lowering in K/Pa.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:Examples:
>>> dtfdp(0.035,1e5)
7.48210942879e-8
"""
__, temp, __, dliq = eq_stp(temp=temp,pres=pres,salt=salt,dliq=dliq,
chkvals=chkvals,chktol=chktol,temp0=temp0,dliq0=dliq0,chkbnd=chkbnd,
useext=useext,mathargs=mathargs)
fl_t = _flu_f(1,0,temp,dliq)
gs_t = _sal_g(0,1,0,salt,temp,pres,useext=useext)
gs_p = _sal_g(0,0,1,salt,temp,pres,useext=useext)
gs_st = _sal_g(1,1,0,salt,temp,pres,useext=useext)
gs_sp = _sal_g(1,0,1,salt,temp,pres,useext=useext)
gi_t = _ice_g(1,0,temp,pres)
gi_p = _ice_g(0,1,temp,pres)
dent = fl_t + gs_t - salt*gs_st - gi_t
dvol = dliq**(-1) + gs_p - salt*gs_sp - gi_p
dtfdp = dvol/dent
return dtfdp

def dtfds(salt,pres,temp=None,dliq=None,chkvals=False,chktol=_CHKTOL,
temp0=None,dliq0=None,chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice freezing point salinity lowering.

Calculate the effect of salinity on the freezing point of sea-ice,
i.e. the derivative of the freezing temperature with respect to
salinity.

:arg float salt: Salinity in kg/kg.
:arg float pres: Pressure in Pa.
:arg temp: Temperature in K. If unknown, pass None (default) and it
will be calculated.
:type temp: float or None
:arg dliq: Density of liquid water in seawater in kg/m3. If
unknown, pass None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_sp` is used.
:type temp0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Freezing point lowering in K/(kg/kg).
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:Examples:
>>> dtfds(0.035,1e5)
-56.8751336296
"""
__, temp, __, dliq = eq_stp(temp=temp,pres=pres,salt=salt,dliq=dliq,
chkvals=chkvals,chktol=chktol,temp0=temp0,dliq0=dliq0,chkbnd=chkbnd,
useext=useext,mathargs=mathargs)
fl_t = _flu_f(1,0,temp,dliq)
gs_t = _sal_g(0,1,0,salt,temp,pres,useext=useext)
gs_ss = _sal_g(2,0,0,salt,temp,pres,useext=useext)
gs_st = _sal_g(1,1,0,salt,temp,pres,useext=useext)
gi_t = _ice_g(1,0,temp,pres)
dent = fl_t + gs_t - salt*gs_st - gi_t
dtfds = salt*gs_ss / dent
return dtfds

## Seawater-ice combined system
def eq_seaice(sisal,temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Get primary sea-ice variables at SsiTP.

Get the values of all primary variables for a seawater-ice parcel at
the given total salinity, temperature, and pressure. Total salinity
here is the ratio of the mass of salt to the total parcel mass (salt
+ liquid water + ice).

If the calculation has already been done, the results can be passed
to avoid unnecessary repeat calculations. If enough values are
passed, they will be checked for consistency if chkvals is True.

:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Seawater salinity and liquid water density (both in SI
units).
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
"""
if salt is None or dliq is None:
salt, __, __, dliq = eq_stp(temp=temp,pres=pres,salt=salt,dliq=dliq,
chkvals=chkvals,chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,
useext=useext,mathargs=mathargs)
if salt < sisal:
warnmsg = ('Equilibrium salinity {0} is lower than the total parcel '
'salinity {1}').format(salt,sisal)
warnings.warn(warnmsg,RuntimeWarning)
salt = sisal
return salt, dliq

def seaice_g(drvs,drvt,drvp,sisal,temp,pres,salt=None,dliq=None,
chkvals=False,chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,
useext=False,mathargs=None):
"""Calculate sea-ice Gibbs free energy with derivatives.

Calculate the specific Gibbs free energy of a sea-ice parcel or its
derivatives with respect to total salinity, temperature, and
pressure.

:arg int drvs: Number of total salinity derivatives.
:arg int drvt: Number of temperature derivatives.
:arg int drvp: Number of pressure derivatives.
:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Gibbs free energy in units of
(J/kg) / (kg/kg)^drvs / K^drvt / Pa^drvp.
:raises ValueError: If any of (drvs,drvt,drvp) are negative or if
(drvs+drvt+drvp) > 2.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> seaice_g(0,0,0,0.035,270.,1e5)
-414.0175745
>>> seaice_g(1,0,0,0.035,270.,1e5)
96363.77305
>>> seaice_g(0,1,0,0.035,270.,1e5)
500.445444181
>>> seaice_g(0,0,1,0.035,270.,1e5)
1.00689072300e-3
>>> seaice_g(2,0,0,0.035,270.,1e5)
0.0
>>> seaice_g(1,1,0,0.035,270.,1e5)
-21272.2260252
>>> seaice_g(1,0,1,0.035,270.,1e5)
-2.383040378e-03
>>> seaice_g(0,2,0,0.035,270.,1e5)
-232.847783380
>>> seaice_g(0,1,1,0.035,270.,1e5)
-1.658664467e-05
>>> seaice_g(0,0,2,0.035,270.,1e5)
-1.57591932118e-12
"""
drvtup = (drvs,drvt,drvp)
if any(drv < 0 for drv in drvtup) or sum(drvtup) > 2:
errmsg = 'Derivatives {0} not recognized'.format(drvtup)
raise ValueError(errmsg)
salt, dliq = eq_seaice(sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
seaf = sisal/salt
# Straightforward derivatives
if (drvs,drvt,drvp) == (0,0,0):
gl = _eq_chempot(0,0,temp,dliq)
gs = _sal_g(0,0,0,salt,temp,pres,useext=useext)
gi = _ice_g(0,0,temp,pres)
g = seaf*(gl + gs) + (1-seaf)*gi
return g
elif (drvs,drvt,drvp) == (1,0,0):
gs_s = _sal_g(1,0,0,salt,temp,pres,useext=useext)
g_s = gs_s
return g_s
elif (drvs,drvt,drvp) == (0,1,0):
fl_t = _flu_f(1,0,temp,dliq)
gs_t = _sal_g(0,1,0,salt,temp,pres,useext=useext)
gi_t = _ice_g(1,0,temp,pres)
g_t = seaf*(fl_t + gs_t) + (1-seaf)*gi_t
return g_t
elif (drvs,drvt,drvp) == (0,0,1):
gs_p = _sal_g(0,0,1,salt,temp,pres,useext=useext)
gi_p = _ice_g(0,1,temp,pres)
g_p = seaf*(dliq**(-1) + gs_p) + (1-seaf)*gi_p
return g_p
elif (drvs,drvt,drvp) == (2,0,0):
g_ss = 0.0
return g_ss
elif (drvs,drvt,drvp) == (1,1,0):
fl_t = _flu_f(1,0,temp,dliq)
gs_t = _sal_g(0,1,0,salt,temp,pres,useext=useext)
gi_t = _ice_g(1,0,temp,pres)
g_st = (fl_t + gs_t - gi_t)/salt
return g_st
elif (drvs,drvt,drvp) == (1,0,1):
gs_p = _sal_g(0,0,1,salt,temp,pres,useext=useext)
gi_p = _ice_g(0,1,temp,pres)
g_sp = (dliq**(-1) + gs_p - gi_p)/salt
return g_sp
# Other derivatives require inversion
cl = _eq_pressure(0,1,temp,dliq)
gs_ss = _sal_g(2,0,0,salt,temp,pres,useext=useext)
if drvt > 0:
fl_t = _flu_f(1,0,temp,dliq)
gs_t = _sal_g(0,1,0,salt,temp,pres,useext=useext)
gs_st = _sal_g(1,1,0,salt,temp,pres,useext=useext)
gi_t = _ice_g(1,0,temp,pres)
dentr = fl_t + gs_t - salt*gs_st - gi_t
if drvp > 0:
gs_p = _sal_g(0,0,1,salt,temp,pres,useext=useext)
gs_sp = _sal_g(1,0,1,salt,temp,pres,useext=useext)
gi_p = _ice_g(0,1,temp,pres)
dvol = dliq**(-1) + gs_p - salt*gs_sp - gi_p
s_p = dvol / (salt*gs_ss)
dl_p = cl**(-1)
if (drvs,drvt,drvp) == (0,2,0):
fl_tt = _flu_f(2,0,temp,dliq)
fl_td = _flu_f(1,1,temp,dliq)
gs_tt = _sal_g(0,2,0,salt,temp,pres,useext=useext)
gi_tt = _ice_g(2,0,temp,pres)
s_t = dentr / (salt*gs_ss)
dl_t = -dliq**2*fl_td/cl
gb_tt = fl_tt + fl_td*dl_t + gs_tt
g_tt = -seaf/salt*dentr*s_t + seaf*gb_tt + (1-seaf)*gi_tt
return g_tt
elif (drvs,drvt,drvp) == (0,1,1):
fl_td = _flu_f(1,1,temp,dliq)
gs_tp = _sal_g(0,1,1,salt,temp,pres,useext=useext)
gi_tp = _ice_g(1,1,temp,pres)
gb_tp = fl_td*dl_p + gs_tp
g_tp = -seaf/salt*dentr*s_p + seaf*gb_tp + (1-seaf)*gi_tp
return g_tp
elif (drvs,drvt,drvp) == (0,0,2):
gs_pp = _sal_g(0,0,2,salt,temp,pres,useext=useext)
gi_pp = _ice_g(0,2,temp,pres)
gb_pp = -dl_p/dliq**2 + gs_pp
g_pp = -seaf/salt*dvol*s_p + seaf*gb_pp + (1-seaf)*gi_pp
return g_pp
# Should not have made it this far!
errmsg = 'Derivatives {0} not recognized'.format((drvs,drvt,drvp))
raise ValueError(errmsg)

def brinefraction(sisal,temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice brine fraction.

Calculate the mass fraction of seawater (brine) in a sea-ice parcel,
i.e. the ratio of the mass of seawater (salt + liquid water) to the
total mass (salt + liquid water + ice).

:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Brine fraction in kg/kg.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> brinefraction(0.035,270.,1e5)
0.6247053284
"""
salt, dliq = eq_seaice(sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
seaf = sisal/salt
return seaf

def cp(sisal,temp,pres,salt=None,dliq=None,chkvals=False,chktol=_CHKTOL,
salt0=None,dliq0=None,chkbnd=False,useext=False,mathargs=None):
"""Calculate sea-ice isobaric heat capacity.

Calculate the isobaric heat capacity of sea-ice.

:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Heat capacity in J/kg/K.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> cp(0.035,270.,1e5)
62868.90151
"""
g_tt = seaice_g(0,2,0,sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
cp = -temp * g_tt
return cp

def density(sisal,temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice total density.

Calculate the total density of a sea-ice parcel.

:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Density in kg/m3.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> density(0.035,270.,1e5)
993.156434117
"""
g_p = seaice_g(0,0,1,sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
rho = g_p**(-1)
return rho

def enthalpy(sisal,temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice enthalpy.

Calculate the specific enthalpy of a sea-ice parcel.

:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Enthalpy in J/kg.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> enthalpy(0.035,270.,1e5)
-135534.287504
"""
salt, dliq = eq_seaice(sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
g = seaice_g(0,0,0,sisal,temp,pres,salt=salt,dliq=dliq,useext=useext)
g_t = seaice_g(0,1,0,sisal,temp,pres,salt=salt,dliq=dliq,useext=useext)
h = g - temp*g_t
return h

def entropy(sisal,temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice entropy.

Calculate the specific entropy of a sea-ice parcel.

:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Entropy in J/kg/K.
    :raises RuntimeWarning: If the relative disequilibrium is more than
        chktol when chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> entropy(0.035,270.,1e5)
-500.445444181
"""
g_t = seaice_g(0,1,0,sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
s = -g_t
    return s


def expansion(sisal,temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice thermal expansion coefficient.
Calculate the thermal expansion coefficient of a sea-ice parcel.
:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Expansion coefficient in 1/K.
    :raises RuntimeWarning: If the relative disequilibrium is more than
        chktol when chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> expansion(0.035,270.,1e5)
-1.647313287e-02
"""
salt, dliq = eq_seaice(sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
g_p = seaice_g(0,0,1,sisal,temp,pres,salt=salt,dliq=dliq,useext=useext)
g_tp = seaice_g(0,1,1,sisal,temp,pres,salt=salt,dliq=dliq,useext=useext)
alpha = g_tp / g_p
    return alpha


def kappa_t(sisal,temp,pres,salt=None,dliq=None,chkvals=False,
chktol=_CHKTOL,salt0=None,dliq0=None,chkbnd=False,useext=False,
mathargs=None):
"""Calculate sea-ice isothermal compressibility.
Calculate the isothermal compressibility of a sea-ice parcel.
:arg float sisal: Total sea-ice salinity in kg/kg.
:arg float temp: Temperature in K.
:arg float pres: Pressure in Pa.
:arg salt: Seawater salinity in kg/kg. If unknown, pass None
(default) and it will be calculated.
:type salt: float or None
:arg dliq: Seawater liquid water density in kg/m3. If unknown, pass
None (default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg salt0: Initial guess for the seawater salinity in kg/kg. If
None (default) then `_approx_tp` is used.
:type salt0: float or None
:arg dliq0: Initial guess for the seawater liquid water density in
kg/m3. If None (default) then `flu3a._dliq_default` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg bool useext: If False (default) then the salt contribution is
calculated from _GSCOEFFS; if True, from _GSCOEFFS_EXT.
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Compressibility in 1/Pa.
    :raises RuntimeWarning: If the relative disequilibrium is more than
        chktol when chkvals is True and all values are given.
:raises RuntimeWarning: If the equilibrium seawater salinity is
lower than the total parcel salinity.
:Examples:
>>> kappa_t(0.035,270.,1e5)
1.56513441348e-9
"""
salt, dliq = eq_seaice(sisal,temp,pres,salt=salt,dliq=dliq,chkvals=chkvals,
chktol=chktol,salt0=salt0,dliq0=dliq0,chkbnd=chkbnd,useext=useext,
mathargs=mathargs)
g_p = seaice_g(0,0,1,sisal,temp,pres,salt=salt,dliq=dliq,useext=useext)
g_pp = seaice_g(0,0,2,sisal,temp,pres,salt=salt,dliq=dliq,useext=useext)
kappa = -g_pp / g_p
return kappa
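The four property functions above all follow one pattern: each quantity is a combination of temperature and pressure derivatives of the sea-ice Gibbs function (h = g - T*g_T, s = -g_T, alpha = g_Tp/g_p, kappa_T = -g_pp/g_p). The following is a self-contained sketch of that pattern, using a hypothetical quadratic `toy_g` in place of `seaice_g`; the coefficients are invented for illustration, not real sea-ice physics.

```python
# Sketch of the Gibbs-derivative pattern used by enthalpy, entropy, expansion
# and kappa_t above. `toy_g` is a hypothetical stand-in for seaice_g: a
# quadratic g(T, p) with invented coefficients, not real sea-ice physics.

def toy_g(drvt, drvp, temp, pres):
    """Return the (drvt, drvp) temperature/pressure derivative of a toy g."""
    a, b, c = 1e-4, -2.0, -5e-10  # invented coefficients
    derivs = {
        (0, 0): a*temp*pres + b*temp**2 + c*pres**2,
        (1, 0): a*pres + 2*b*temp,      # g_T
        (0, 1): a*temp + 2*c*pres,      # g_p (specific volume)
        (1, 1): a,                      # g_Tp
        (0, 2): 2*c,                    # g_pp
    }
    return derivs[(drvt, drvp)]


def properties(temp, pres):
    """Combine Gibbs derivatives exactly as the functions above do."""
    g = toy_g(0, 0, temp, pres)
    g_t = toy_g(1, 0, temp, pres)
    g_p = toy_g(0, 1, temp, pres)
    return {
        "enthalpy": g - temp*g_t,                     # h = g - T*g_T
        "entropy": -g_t,                              # s = -g_T
        "expansion": toy_g(1, 1, temp, pres) / g_p,   # alpha = g_Tp / g_p
        "kappa_t": -toy_g(0, 2, temp, pres) / g_p,    # kappa_T = -g_pp / g_p
    }
```

The identity h = g + T*s holds by construction, which gives a quick consistency check on any such implementation.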
# src/nexgen/command_line/nexus_generator.py (DominicOram/nexgen, BSD-3-Clause)
"""
Command line tool to generate NeXus files.
"""
import sys
import glob
import h5py
import time
import logging
import argparse
import freephil
import numpy as np
from pathlib import Path
from datetime import datetime
from . import (
version_parser,
detectormode_parser,
nexus_parser,
demo_parser,
add_tristan_spec,
)
from .. import (
get_nexus_filename,
get_filename_template,
get_iso_timestamp,
)
from ..nxs_write.NexusWriter import write_nexus, write_nexus_demo
from ..nxs_write.NXclassWriters import write_NXnote
# Define a logger object and a formatter
logger = logging.getLogger("NeXusGenerator")
logger.setLevel(logging.DEBUG)
# formatter = logging.Formatter("%(levelname)s %(message)s")
formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
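Each CLI below repeats the same handler setup: take the module-level logger, attach a per-run `FileHandler`, and reuse this formatter. A standalone sketch of that pattern follows; the logger name and log file path are illustrative only.

```python
# Standalone sketch of the logging setup used by the CLI functions below:
# a module-level logger plus a per-run FileHandler that shares a single
# formatter. The logger name and log file path are illustrative only.
import logging
from pathlib import Path


def attach_file_handler(log: logging.Logger, logfile: Path) -> logging.FileHandler:
    """Attach an appending DEBUG-level file handler, as the CLIs do."""
    fh = logging.FileHandler(logfile, mode="a")
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
    )
    log.addHandler(fh)
    return fh


demo_logger = logging.getLogger("DemoLogger")
demo_logger.setLevel(logging.DEBUG)
```

Attaching the handler inside each CLI (rather than at import time) keeps one log file per invocation directory, which is the design used throughout this module.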
# Phil scopes
master_phil = freephil.parse(
"""
input {
datafile = None
.multiple = True
.type = path
.help = "HDF5 file. For now, assumes pattern filename_%0{6}d.h5"
coordinate_frame = *mcstas imgcif
.type = choice
.help = "Which coordinate system is being used to provide input vectors."
vds_writer = *None dataset file
.type = choice
.help = "If not None, write vds along with external link to data in NeXus file, or create _vds.h5 file."
}
include scope nexgen.command_line.nxs_phil.goniometer_scope
include scope nexgen.command_line.nxs_phil.beamline_scope
include scope nexgen.command_line.nxs_phil.detector_scope
include scope nexgen.command_line.nxs_phil.module_scope
include scope nexgen.command_line.nxs_phil.timestamp_scope
""",
process_includes=True,
)
demo_phil = freephil.parse(
"""
output {
master_filename = nexus_master.h5
.type = path
.help = "Filename for master file"
}
input {
coordinate_frame = *mcstas imgcif
.type = choice
.help = "Which coordinate system is being used to provide input vectors"
vds_writer = *None dataset file
.type = choice
.help = "If not None, either write a vds in the nexus file or create also a _vds.h5 file."
}
include scope nexgen.command_line.nxs_phil.goniometer_scope
include scope nexgen.command_line.nxs_phil.beamline_scope
include scope nexgen.command_line.nxs_phil.detector_scope
include scope nexgen.command_line.nxs_phil.module_scope
""",
process_includes=True,
)
meta_phil = freephil.parse(
"""
input {
metafile = None
.type = path
.help = "Path to _meta.h5 file for collection."
datafile = None
.multiple = True
.type = path
.help = "HDF5 file. For now, assumes pattern filename_%0{6}d.h5"
coordinate_frame = *mcstas imgcif
.type = choice
.help = "Which coordinate system is being used to provide input vectors."
vds_writer = *None dataset file
.type = choice
.help = "If not None, write vds along with external link to data in NeXus file, or create _vds.h5 file."
}
include scope nexgen.command_line.nxs_phil.goniometer_scope
include scope nexgen.command_line.nxs_phil.beamline_scope
include scope nexgen.command_line.nxs_phil.detector_scope
include scope nexgen.command_line.nxs_phil.module_scope
include scope nexgen.command_line.nxs_phil.timestamp_scope
# include scope nexgen.command_line.nexus_generator.master_phil
""",
process_includes=True,
)
# Parse command line arguments
parser = argparse.ArgumentParser(
description="Generate a NeXus file for data collection.",
parents=[version_parser],
)
parser.add_argument("--debug", action="store_const", const=True)
parser.add_argument(
"-c",
"--show-config",
action="store_true",
default=False,
dest="show_config",
help="Show the configuration parameters.",
)
# CLIs
def write_NXmx_cli(args):
cl = master_phil.command_line_argument_interpreter()
working_phil = master_phil.fetch(cl.process_and_fetch(args.phil_args))
params = working_phil.extract()
# Path to data file
datafiles = [Path(d).expanduser().resolve() for d in params.input.datafile]
# Get NeXus file name
master_file = get_nexus_filename(datafiles[0])
# Start logger
logfile = datafiles[0].parent / "generate_nexus.log"
# Define a file handler for logging
FH = logging.FileHandler(logfile, mode="a")
FH.setLevel(logging.DEBUG)
FH.setFormatter(formatter)
# Add handlers to logger
logger.addHandler(FH)
# Add some information to logger
logger.info("Create a NeXus file for %s" % datafiles[0])
logger.info(
"Number of experiment data files in directory, linked to the Nexus file: %d"
% len(datafiles)
)
logger.info("NeXus file will be saved as %s" % master_file)
# Load technical info from phil parser
cf = params.input.coordinate_frame
goniometer = params.goniometer
detector = params.detector
module = params.detector_module
source = params.source
beam = params.beam
attenuator = params.attenuator
timestamps = (
get_iso_timestamp(params.start_time),
get_iso_timestamp(params.end_time),
)
# If dealing with a tristan detector, add its specifications to detector scope.
if "TRISTAN" in detector.description.upper():
add_tristan_spec(detector, params.tristanSpec)
# Log information
logger.info("Source information")
logger.info(f"Facility: {source.name} - {source.type}.")
logger.info(f"Beamline: {source.beamline_name}")
if timestamps[0] is not None:
logger.info(f"Collection start time: {timestamps[0]}")
else:
logger.warning("No collection start time recorded.")
if timestamps[1] is not None:
logger.info(f"Collection end time: {timestamps[1]}")
else:
logger.warning("No collection end time recorded.")
logger.info("Coordinate system: %s" % cf)
if cf == "imgcif":
logger.warning(
"Input coordinate frame is imgcif. They will be converted to mcstas."
)
logger.info("Goniometer information")
axes = goniometer.axes
axis_vectors = goniometer.vectors
for tu in zip(goniometer.types, goniometer.units):
assert tu in (
("translation", "mm"),
("rotation", "deg"),
), "Appropriate axis units should be: mm for translations, det for rotations"
assert len(axis_vectors) == 3 * len(
axes
), "Number of vectors does not match number of goniometer axes."
for j in reversed(range(len(axes))):
vector = axis_vectors[3 * j : 3 * j + 3]
logger.info(
f"Goniometer axis: {axes[j]} => {vector} ({goniometer.types[j]}) on {goniometer.depends[j]}. {goniometer.starts[j]} {goniometer.ends[j]} {goniometer.increments[j]}"
)
logger.info("")
logger.info(
f"Detector information:\n {detector.description}, {detector.detector_type}"
)
logger.info(
f"Sensor made of {detector.sensor_material} x {detector.sensor_thickness}"
)
logger.info(f"Trusted pixels > {detector.underload} and < {detector.overload}")
logger.info(
f"Image is a {detector.image_size} array of {detector.pixel_size} pixels"
)
logger.info("Detector axes:")
axes = detector.axes
axis_vectors = detector.vectors
for tu in zip(detector.types, detector.units):
assert tu in (
("translation", "mm"),
("rotation", "deg"),
), "Appropriate axis units should be: mm for translations, det for rotations"
assert len(axis_vectors) == 3 * len(
axes
), "Number of vectors does not match number of detector axes."
for j in range(len(axes)):
vector = axis_vectors[3 * j : 3 * j + 3]
logger.info(
f"Detector axis: {axes[j]} => {vector} ({detector.types[j]}) on {detector.depends[j]}. {detector.starts[j]}"
)
if detector.flatfield is None:
logger.info("No flatfield applied")
else:
logger.info(f"Flatfield correction data lives here {detector.flatfield}")
if detector.pixel_mask is None:
logger.info("No bad pixel mask for this detector")
else:
logger.info(f"Bad pixel mask lives here {detector.pixel_mask}")
logger.info("Module information")
logger.info(f"Number of modules: {module.num_modules}")
logger.info(f"Fast axis at datum position: {module.fast_axis}")
logger.info(f"Slow_axis at datum position: {module.slow_axis}")
if module.module_offset == "0":
logger.warning(f"module_offset field will not be written.")
logger.info("")
logger.info("Start writing NeXus file ...")
try:
with h5py.File(master_file, "x") as nxsfile:
write_nexus(
nxsfile,
datafiles,
goniometer,
detector,
module,
source,
beam,
attenuator,
timestamps,
cf,
params.input.vds_writer,
)
# Check and save pump status
if params.pump_probe.pump_status is True:
logger.info(
"Pump probe status is True, write relative metadata as NXnote."
)
pump_info = {
"pump_exposure_time": params.pump_probe.pump_exp,
"pump_delay": params.pump_probe.pump_delay,
}
write_NXnote(nxsfile, "/entry/source/notes", pump_info)
logger.info(f"{master_file} correctly written.")
except Exception as err:
logger.info(
f"An error occurred and {master_file} couldn't be written correctly."
)
logger.exception(err)
logger.info("EOF")
def write_demo_cli(args):
cl = demo_phil.command_line_argument_interpreter()
working_phil = demo_phil.fetch(cl.process_and_fetch(args.phil_args))
params = working_phil.extract()
# Path to file
master_file = Path(params.output.master_filename).expanduser().resolve()
# Just in case ...
if master_file.suffix == ".h5" and "master" not in master_file.stem:
master_file = Path(master_file.as_posix().replace(".h5", "_master.h5"))
# Start logger
logfile = master_file.parent / "generate_demo.log"
# Define a file handler for logging
FH = logging.FileHandler(logfile, mode="w")
FH.setLevel(logging.DEBUG)
FH.setFormatter(formatter)
# Add handlers to logger
logger.addHandler(FH)
# Images or events ?
if args.events is True:
num_events = args.force if args.force else 1
data_type = ("events", num_events)
else:
num_images = args.force if args.force else None
data_type = ("images", num_images)
# Get data file name template
data_file_template = get_filename_template(master_file)
# Add some information to logger
logger.info("NeXus file will be saved as %s" % params.output.master_filename)
logger.info("Data file(s) template: %s" % data_file_template)
# Next: go through technical info (goniometer, detector, beamline etc ...)
cf = params.input.coordinate_frame
goniometer = params.goniometer
detector = params.detector
module = params.detector_module
source = params.source
beam = params.beam
attenuator = params.attenuator
# If dealing with a tristan detector, add its specifications to detector scope.
if "TRISTAN" in detector.description.upper():
add_tristan_spec(detector, params.tristanSpec)
# Log information
logger.info("Data type: %s" % data_type[0])
logger.info("Source information")
logger.info(f"Facility: {source.name} - {source.type}.")
logger.info(f"Beamline: {source.beamline_name}")
logger.info("Coordinate system: %s" % cf)
if cf == "imgcif":
logger.warning(
"Input coordinate frame is imgcif. They will be converted to mcstas."
)
logger.info("Goniometer information")
axes = goniometer.axes
axis_vectors = goniometer.vectors
for tu in zip(goniometer.types, goniometer.units):
assert tu in (("translation", "mm"), ("rotation", "deg"))
assert len(axis_vectors) == 3 * len(
axes
), "Number of vectors does not match number of axes."
for j in reversed(range(len(axes))):
vector = axis_vectors[3 * j : 3 * j + 3]
logger.info(
f"Goniometer axis: {axes[j]} => {vector} ({goniometer.types[j]}) on {goniometer.depends[j]}. {goniometer.starts[j]} {goniometer.ends[j]} {goniometer.increments[j]}"
)
logger.info("")
logger.info("Detector information:\n%s" % detector.description)
logger.info(
f"Sensor made of {detector.sensor_material} x {detector.sensor_thickness}"
)
if data_type[0] == "images":
logger.info(f"Trusted pixels > {detector.underload} and < {detector.overload}")
logger.info(
f"Image is a {detector.image_size} array of {detector.pixel_size} pixels"
)
logger.info("Detector axes:")
axes = detector.axes
axis_vectors = detector.vectors
for tu in zip(detector.types, detector.units):
assert tu in (("translation", "mm"), ("rotation", "deg"))
assert len(axis_vectors) == 3 * len(axes)
for j in range(len(axes)):
vector = axis_vectors[3 * j : 3 * j + 3]
logger.info(
f"Detector axis: {axes[j]} => {vector} ({detector.types[j]}) on {detector.depends[j]}. {detector.starts[j]}"
)
if detector.flatfield is None:
logger.info("No flatfield applied")
else:
logger.info(f"Flatfield correction data lives here {detector.flatfield}")
if detector.pixel_mask is None:
logger.info("No bad pixel mask for this detector")
else:
logger.info(f"Bad pixel mask lives here {detector.pixel_mask}")
logger.info("Module information")
logger.warning(f"module_offset field setting: {module.module_offset}")
logger.info(f"Number of modules: {module.num_modules}")
logger.info(f"Fast axis at datum position: {module.fast_axis}")
logger.info(f"Slow_axis at datum position: {module.slow_axis}")
logger.info("")
# Record string with start_time
start_time = datetime.fromtimestamp(time.time()).strftime("%A, %d. %B %Y %I:%M%p")
logger.info("Start writing NeXus and data files ...")
try:
with h5py.File(master_file, "x") as nxsfile:
write_nexus_demo(
nxsfile,
data_file_template,
data_type,
cf,
goniometer,
detector,
module,
source,
beam,
attenuator,
params.input.vds_writer,
)
# Check and save pump status
if params.pump_probe.pump_status is True:
logger.info(
"Pump probe status is True, write relative metadata as NXnote."
)
pump_info = {
"pump_exposure_time": params.pump_probe.pump_exp,
"pump_delay": params.pump_probe.pump_delay,
}
write_NXnote(nxsfile, "/entry/source/notes", pump_info)
# Record string with end_time
end_time = datetime.fromtimestamp(time.time()).strftime(
"%A, %d. %B %Y %I:%M%p"
)
# Write /entry/start_time and /entry/end_time
nxsfile.create_dataset("/entry/start_time", data=np.string_(start_time))
nxsfile.create_dataset("/entry/end_time", data=np.string_(end_time))
logger.info(f"{master_file} correctly written.")
except Exception as err:
logger.info(
f"An error occurred and {master_file} couldn't be written correctly."
)
logger.exception(err)
logger.info("EOF")
def write_with_meta_cli(args):
cl = meta_phil.command_line_argument_interpreter()
working_phil = meta_phil.fetch(cl.process_and_fetch(args.phil_args))
params = working_phil.extract()
# Path to meta file
if params.input.metafile:
metafile = Path(params.input.metafile).expanduser().resolve()
else:
sys.exit(
"Please pass a _meta.h5 file. If not available use 'nexus' option instead."
)
# Get NeXus filename
master_file = get_nexus_filename(metafile)
# If no datafile has been passed, look for them in the directory
if params.input.datafile:
datafiles = [Path(d).expanduser().resolve() for d in params.input.datafile]
else:
datafile_pattern = (
metafile.parent / f"{master_file.stem}_{6*'[0-9]'}.h5"
).as_posix()
datafiles = [
Path(d).expanduser().resolve() for d in sorted(glob.glob(datafile_pattern))
]
# Start logger
logfile = metafile.parent / "generate_nexus_from_meta.log"
# Define a file handler for logging
FH = logging.FileHandler(logfile, mode="a")
FH.setLevel(logging.DEBUG)
FH.setFormatter(formatter)
# Add handlers to logger
logger.addHandler(FH)
# Add some information to logger
logger.info("Create a NeXus file for %s" % datafiles[0])
logger.info(
"Number of experiment data files in directory, linked to the Nexus file: %d"
% len(datafiles)
)
logger.info("Meta file for the collection: %s" % metafile)
logger.info("NeXus file will be saved as %s" % master_file)
# Load technical info from phil parser
cf = params.input.coordinate_frame
goniometer = params.goniometer
detector = params.detector
module = params.detector_module
source = params.source
beam = params.beam
attenuator = params.attenuator
timestamps = (
get_iso_timestamp(params.start_time),
get_iso_timestamp(params.end_time),
)
# If dealing with a tristan detector, add its specifications to detector scope.
if "TRISTAN" in detector.description.upper():
add_tristan_spec(detector, params.tristanSpec)
# Log information
logger.info("Source information")
logger.info(f"Facility: {source.name} - {source.type}.")
logger.info(f"Beamline: {source.beamline_name}")
if timestamps[0] is not None:
logger.info(f"Collection start time: {timestamps[0]}")
else:
logger.warning("No collection start time recorded.")
if timestamps[1] is not None:
logger.info(f"Collection end time: {timestamps[1]}")
else:
logger.warning("No collection end time recorded.")
logger.info("Coordinate system: %s" % cf)
if cf == "imgcif":
logger.warning(
"Input coordinate frame is imgcif. They will be converted to mcstas."
)
logger.info("Goniometer information")
axes = goniometer.axes
axis_vectors = goniometer.vectors
for tu in zip(goniometer.types, goniometer.units):
assert tu in (
("translation", "mm"),
("rotation", "deg"),
), "Appropriate axis units should be: mm for translations, det for rotations"
assert len(axis_vectors) == 3 * len(
axes
), "Number of vectors does not match number of goniometer axes."
for j in reversed(range(len(axes))):
vector = axis_vectors[3 * j : 3 * j + 3]
logger.info(
f"Goniometer axis: {axes[j]} => {vector} ({goniometer.types[j]}) on {goniometer.depends[j]}. {goniometer.starts[j]} {goniometer.ends[j]} {goniometer.increments[j]}"
)
logger.info("")
if detector.description is None:
logger.warning("No detector description provided, exit.")
sys.exit("Please provide a detector description for identification.")
logger.info(
f"Detector information:\n {detector.description}, {detector.detector_type}"
)
logger.info(
f"Sensor made of {detector.sensor_material} x {detector.sensor_thickness}"
)
logger.info(f"Trusted pixels > {detector.underload} and < {detector.overload}")
logger.info(
f"Image is a {detector.image_size} array of {detector.pixel_size} pixels"
)
logger.info("Detector axes:")
axes = detector.axes
axis_vectors = detector.vectors
for tu in zip(detector.types, detector.units):
assert tu in (
("translation", "mm"),
("rotation", "deg"),
), "Appropriate axis units should be: mm for translations, det for rotations"
assert len(axis_vectors) == 3 * len(
axes
), "Number of vectors does not match number of detector axes."
for j in range(len(axes)):
vector = axis_vectors[3 * j : 3 * j + 3]
logger.info(
f"Detector axis: {axes[j]} => {vector} ({detector.types[j]}) on {detector.depends[j]}. {detector.starts[j]}"
)
if detector.flatfield is None:
logger.info("No flatfield applied")
else:
logger.info(f"Flatfield correction data lives here {detector.flatfield}")
if detector.pixel_mask is None:
logger.info("No bad pixel mask for this detector")
else:
logger.info(f"Bad pixel mask lives here {detector.pixel_mask}")
logger.info("Module information")
logger.info(f"Number of modules: {module.num_modules}")
logger.info(f"Fast axis at datum position: {module.fast_axis}")
logger.info(f"Slow_axis at datum position: {module.slow_axis}")
if module.module_offset == "0":
logger.warning(f"module_offset field will not be written.")
logger.info("")
if args.no_ow:
logger.warning(f"The following datasets will not be overwritten: {args.no_ow}")
metainfo = (metafile, args.no_ow)
else:
metainfo = (metafile, None)
logger.info("Start writing NeXus file ...")
try:
with h5py.File(master_file, "x") as nxsfile:
write_nexus(
nxsfile,
datafiles,
goniometer,
detector,
module,
source,
beam,
attenuator,
timestamps,
cf,
params.input.vds_writer,
metainfo,
)
# Check and save pump status
if params.pump_probe.pump_status is True:
                logger.info(
                    "Pump probe status is True, writing related metadata as NXnote."
                )
pump_info = {
"pump_exposure_time": params.pump_probe.pump_exp,
"pump_delay": params.pump_probe.pump_delay,
}
write_NXnote(nxsfile, "/entry/source/notes", pump_info)
logger.info(f"{master_file} correctly written.")
except Exception as err:
logger.info(
f"An error occurred and {master_file} couldn't be written correctly."
)
logger.exception(err)
logger.info("EOF")
# Define subparsers
subparsers = parser.add_subparsers(
help="Choose whether to write a NXmx NeXus file for a collection or a demo. \
Run generate_nexus <command> --help to see the parameters for each sub-command.",
required=True,
dest="sub-command",
)
parser_NXmx = subparsers.add_parser(
"1",
aliases=["nexus"],
description=("Trigger NeXus file writing pointing to existing data."),
parents=[nexus_parser],
)
parser_NXmx.set_defaults(func=write_NXmx_cli)
parser_NXmx_demo = subparsers.add_parser(
"2",
aliases=["demo"],
description=("Trigger NeXus and blank data file writing."),
parents=[demo_parser, detectormode_parser],
)
parser_NXmx_demo.set_defaults(func=write_demo_cli)
parser_NXmx_meta = subparsers.add_parser(
"3",
aliases=["meta"],
description=(
"Trigger NeXus file writing pointing to an existing collection with a meta file."
),
parents=[nexus_parser],
)
parser_NXmx_meta.add_argument(
"-no",
"--no_ow",
nargs="+",
help="List of datasets that should not be overwritten even if present in meta file",
type=str,
)
parser_NXmx_meta.set_defaults(func=write_with_meta_cli)
def main():
# Define a stream handler
CH = logging.StreamHandler(sys.stdout)
CH.setLevel(logging.DEBUG)
CH.setFormatter(formatter)
logger.addHandler(CH)
args = parser.parse_args()
args.func(args)
# main()

# src/pages/views.py (Neville-Loh/webdev_commerce)
from django.shortcuts import render
def home_view(request):
return render(request, "home.html", {})
def about_view(request):
return render(request, "about.html", {})
def contact_view(request):
return render(request, "contact.html", {})
# For debugging purposes
# -----------------------------------------------------------------
def self_note_view(request):
return render(request, "self_note.html", {})
def self_note_view_1(request):
return render(request, "self_note_1.html", {})

#!/usr/bin/python3
# tests/potato/test_potato_transferFrom.py (MostafaMhmod/potato-farm)
import brownie
def test_potato_sender_balance_decreases(accounts, potato):
sender_balance = potato.balanceOf(accounts[0])
amount = sender_balance // 4
potato.approve(accounts[1], amount, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], amount, {'from': accounts[1]})
assert potato.balanceOf(accounts[0]) == sender_balance - amount
def test_potato_receiver_balance_increases(accounts, potato):
receiver_balance = potato.balanceOf(accounts[2])
amount = potato.balanceOf(accounts[0]) // 4
potato.approve(accounts[1], amount, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], amount, {'from': accounts[1]})
assert potato.balanceOf(accounts[2]) == receiver_balance + amount
def test_potato_caller_balance_not_affected(accounts, potato):
caller_balance = potato.balanceOf(accounts[1])
amount = potato.balanceOf(accounts[0])
potato.approve(accounts[1], amount, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], amount, {'from': accounts[1]})
assert potato.balanceOf(accounts[1]) == caller_balance
def test_potato_caller_approval_affected(accounts, potato):
approval_amount = potato.balanceOf(accounts[0])
transfer_amount = approval_amount // 4
potato.approve(accounts[1], approval_amount, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], transfer_amount, {'from': accounts[1]})
assert potato.allowance(accounts[0], accounts[1]) == approval_amount - transfer_amount
def test_potato_receiver_approval_not_affected(accounts, potato):
approval_amount = potato.balanceOf(accounts[0])
transfer_amount = approval_amount // 4
potato.approve(accounts[1], approval_amount, {'from': accounts[0]})
potato.approve(accounts[2], approval_amount, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], transfer_amount, {'from': accounts[1]})
assert potato.allowance(accounts[0], accounts[2]) == approval_amount
def test_potato_total_supply_not_affected(accounts, potato):
total_supply = potato.totalSupply()
amount = potato.balanceOf(accounts[0])
potato.approve(accounts[1], amount, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], amount, {'from': accounts[1]})
assert potato.totalSupply() == total_supply
def test_potato_returns_true(accounts, potato):
amount = potato.balanceOf(accounts[0])
potato.approve(accounts[1], amount, {'from': accounts[0]})
tx = potato.transferFrom(accounts[0], accounts[2], amount, {'from': accounts[1]})
assert tx.return_value is True
def test_potato_transfer_full_balance(accounts, potato):
amount = potato.balanceOf(accounts[0])
receiver_balance = potato.balanceOf(accounts[2])
potato.approve(accounts[1], amount, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], amount, {'from': accounts[1]})
assert potato.balanceOf(accounts[0]) == 0
assert potato.balanceOf(accounts[2]) == receiver_balance + amount
def test_potato_transfer_zero_potatos(accounts, potato):
sender_balance = potato.balanceOf(accounts[0])
receiver_balance = potato.balanceOf(accounts[2])
potato.approve(accounts[1], sender_balance, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[2], 0, {'from': accounts[1]})
assert potato.balanceOf(accounts[0]) == sender_balance
assert potato.balanceOf(accounts[2]) == receiver_balance
def test_potato_transfer_zero_potatos_without_approval(accounts, potato):
sender_balance = potato.balanceOf(accounts[0])
receiver_balance = potato.balanceOf(accounts[2])
potato.transferFrom(accounts[0], accounts[2], 0, {'from': accounts[1]})
assert potato.balanceOf(accounts[0]) == sender_balance
assert potato.balanceOf(accounts[2]) == receiver_balance
def test_potato_insufficient_balance(accounts, potato):
balance = potato.balanceOf(accounts[0])
potato.approve(accounts[1], balance + 1, {'from': accounts[0]})
with brownie.reverts():
potato.transferFrom(accounts[0], accounts[2], balance + 1, {'from': accounts[1]})
def test_potato_insufficient_approval(accounts, potato):
balance = potato.balanceOf(accounts[0])
potato.approve(accounts[1], balance - 1, {'from': accounts[0]})
with brownie.reverts():
potato.transferFrom(accounts[0], accounts[2], balance, {'from': accounts[1]})
def test_potato_no_approval(accounts, potato):
balance = potato.balanceOf(accounts[0])
with brownie.reverts():
potato.transferFrom(accounts[0], accounts[2], balance, {'from': accounts[1]})
def test_potato_revoked_approval(accounts, potato):
balance = potato.balanceOf(accounts[0])
potato.approve(accounts[1], balance, {'from': accounts[0]})
potato.approve(accounts[1], 0, {'from': accounts[0]})
with brownie.reverts():
potato.transferFrom(accounts[0], accounts[2], balance, {'from': accounts[1]})
def test_potato_transfer_to_self(accounts, potato):
sender_balance = potato.balanceOf(accounts[0])
amount = sender_balance // 4
potato.approve(accounts[0], sender_balance, {'from': accounts[0]})
potato.transferFrom(accounts[0], accounts[0], amount, {'from': accounts[0]})
assert potato.balanceOf(accounts[0]) == sender_balance
assert potato.allowance(accounts[0], accounts[0]) == sender_balance - amount
def test_potato_transfer_to_self_no_approval(accounts, potato):
amount = potato.balanceOf(accounts[0])
with brownie.reverts():
potato.transferFrom(accounts[0], accounts[0], amount, {'from': accounts[0]})
def test_potato_transfer_event_fires(accounts, potato):
amount = potato.balanceOf(accounts[0])
potato.approve(accounts[1], amount, {'from': accounts[0]})
tx = potato.transferFrom(accounts[0], accounts[2], amount, {'from': accounts[1]})
assert len(tx.events) == 1
assert tx.events["Transfer"].values() == [accounts[0], accounts[2], amount]
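
Taken together, these cases pin down the standard ERC-20 `transferFrom` bookkeeping: balances move, the spender's allowance is debited, and a zero-amount transfer needs no approval. A plain-Python reference model (illustrative only; `Token` and its method names are not part of the contract under test):

```python
class Token:
    """Toy ERC20-style ledger mirroring the transferFrom rules asserted above."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}  # (owner, spender) -> approved amount

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender, owner, receiver, amount):
        allowed = self.allowances.get((owner, spender), 0)
        # Revert on insufficient balance or allowance; note amount == 0 always passes.
        if amount > self.balances.get(owner, 0) or amount > allowed:
            raise ValueError("revert")  # analogue of brownie.reverts()
        self.balances[owner] = self.balances.get(owner, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.allowances[(owner, spender)] = allowed - amount
        return True


token = Token({"alice": 100})
token.approve("alice", "bob", 40)
token.transfer_from("bob", "alice", "carol", 25)
print(token.balances)  # {'alice': 75, 'carol': 25}
```

Because owner and receiver are plain dict keys, a transfer to self leaves the balance unchanged while still debiting the allowance, exactly as `test_potato_transfer_to_self` expects.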

# src/sort_strategy.py (Amatsukan/TechnicalAssessment)
from ordered_set import *
class Sort_strategy:
    def sort(self, books):
        pass

    def setComposition(self, composition):
        self.composition = composition
        return self

    def cleanComposition(self):
        self.composition = None
        return self

    def desc_order(self, recursive=True):
        self.reverse = True
        if self.composition is not None and recursive:
            self.composition.desc_order(recursive)

    def asc_order(self, recursive=True):
        self.reverse = False
        if self.composition is not None and recursive:
            self.composition.asc_order(recursive)
class Title_sort(Sort_strategy):
    def __init__(self, composition=None, reverse=False):
        self.composition = composition
        self.reverse = reverse
        self.id = "TITLE SORT"

    def sort(self, books):
        if self.composition is None:
            return OrderedSet(sorted(books, key=lambda x: x.title, reverse=self.reverse))
        else:
            return OrderedSet(sorted(self.composition.sort(books), key=lambda x: x.title, reverse=self.reverse))
class Author_sort(Sort_strategy):
    def __init__(self, composition=None, reverse=False):
        self.composition = composition
        self.reverse = reverse
        self.id = "AUTHOR SORT"

    def sort(self, books):
        if self.composition is None:
            return OrderedSet(sorted(books, key=lambda x: x.author, reverse=self.reverse))
        else:
            return OrderedSet(sorted(self.composition.sort(books), key=lambda x: x.author, reverse=self.reverse))
class Year_sort(Sort_strategy):
    def __init__(self, composition=None, reverse=False):
        self.composition = composition
        self.reverse = reverse
        self.id = "YEAR SORT"

    def sort(self, books):
        if self.composition is None:
            return OrderedSet(sorted(books, key=lambda x: x.ed_year, reverse=self.reverse))
        else:
            return OrderedSet(sorted(self.composition.sort(books), key=lambda x: x.ed_year, reverse=self.reverse))
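
A composed strategy runs its inner strategy first, so the outer key becomes the primary sort and, because Python's sort is stable, the inner key survives as a tie-breaker. A self-contained sketch of what `Title_sort(composition=Year_sort()).sort(books)` produces, using a hypothetical `Book` tuple and plain lists in place of `OrderedSet`:

```python
from collections import namedtuple

Book = namedtuple("Book", ["title", "author", "ed_year"])
books = [Book("B", "X", 2001), Book("A", "Y", 2001), Book("A", "X", 1999)]

# Inner strategy (Year_sort) runs first ...
by_year = sorted(books, key=lambda b: b.ed_year)
# ... then the outer strategy (Title_sort) sorts its output; stable sort
# keeps the year ordering among equal titles.
by_title_then_year = sorted(by_year, key=lambda b: b.title)

print([(b.title, b.ed_year) for b in by_title_then_year])
# [('A', 1999), ('A', 2001), ('B', 2001)]
```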

# istock_app/resources.py (Jonas1015/ShoppySm)
# from import_export import resources
from .models import sales
# class salesResource(resources.ModelResource):
# class Meta:
# model = sales
# fields = ['sales_name', 'quantity', 'date_of_sale']

# aula#22/desafio115/teste.py (daramariabs/exercicios-python)
arquivo = open("/home/daramariabs/Documentos/GITHUB/exercicios-python/aula#22/desafio115/contatos.txt", "a")

# flaskeddit/user/__init__.py (aqche/flaskedd)
from flask import Blueprint
user_blueprint = Blueprint("user", __name__)
from flaskeddit.user import routes

# wynncraft/__init__.py (martinkovacs/wynncraft-python)
from wynncraft.version import __version__
import wynncraft.utils
import wynncraft.cache
from wynncraft.wynncraft import Guild
from wynncraft.wynncraft import Ingredient
from wynncraft.wynncraft import Item
from wynncraft.wynncraft import Leaderboard
from wynncraft.wynncraft import Network
from wynncraft.wynncraft import Player
from wynncraft.wynncraft import Recipe
from wynncraft.wynncraft import Search
from wynncraft.wynncraft import Territory

# tests/dags/test_sub_provider_update_workflow.py (lyu4321/openverse-catalog)
import os
from airflow.models import DagBag
FILE_DIR = os.path.abspath(os.path.dirname(__file__))
def test_flickr_dag_loads_with_no_errors(tmpdir):
tmp_directory = str(tmpdir)
dag_bag = DagBag(dag_folder=tmp_directory, include_examples=False)
dag_bag.process_file(
os.path.join(
FILE_DIR,
"../../dags/flickr_sub_provider_update_workflow.py",
)
)
assert len(dag_bag.import_errors) == 0
assert len(dag_bag.dags) == 1
def test_europeana_dag_loads_with_no_errors(tmpdir):
tmp_directory = str(tmpdir)
dag_bag = DagBag(dag_folder=tmp_directory, include_examples=False)
dag_bag.process_file(
os.path.join(
FILE_DIR,
"../../dags/europeana_sub_provider_update_workflow.py",
)
)
assert len(dag_bag.import_errors) == 0
assert len(dag_bag.dags) == 1
def test_smithsonian_dag_loads_with_no_errors(tmpdir):
tmp_directory = str(tmpdir)
dag_bag = DagBag(dag_folder=tmp_directory, include_examples=False)
dag_bag.process_file(
os.path.join(
FILE_DIR,
"../../dags/smithsonian_sub_provider_update_workflow.py",
)
)
assert len(dag_bag.import_errors) == 0
assert len(dag_bag.dags) == 1

# Gamify/status/admin.py (Londa-LG/Gamify)
from django.contrib import admin
from .models import Challenge_Body, Challenge_Mind, Challenge_Skill
admin.site.register(Challenge_Body)
admin.site.register(Challenge_Mind)
admin.site.register(Challenge_Skill)

# test/test_upgrade_notification.py (dcompane/controlm_py)
# coding: utf-8
"""
Control-M Services
Provides access to BMC Control-M Services # noqa: E501
OpenAPI spec version: 9.20.220
Contact: customer_support@bmc.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import controlm_py
from controlm_py.models.upgrade_notification import UpgradeNotification # noqa: E501
from controlm_py.rest import ApiException
class TestUpgradeNotification(unittest.TestCase):
"""UpgradeNotification unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testUpgradeNotification(self):
"""Test UpgradeNotification"""
# FIXME: construct object with mandatory attributes with example values
# model = controlm_py.models.upgrade_notification.UpgradeNotification() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()

# ksteta3pi/PotentialBackgrounds/MC_12_11164011_MagUp.py.py (Williams224/davinci-scripts)
#-- GAUDI jobOptions generated on Fri Jul 24 17:00:56 2015
#-- Contains event types :
#-- 11164011 - 17 files - 264994 events - 57.72 GBytes
#-- Extra information about the data processing phases:
#-- Processing Pass Step-124834
#-- StepId : 124834
#-- StepName : Reco14a for MC
#-- ApplicationName : Brunel
#-- ApplicationVersion : v43r2p7
#-- OptionFiles : $APPCONFIGOPTS/Brunel/DataType-2012.py;$APPCONFIGOPTS/Brunel/MC-WithTruth.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
#-- Processing Pass Step-124620
#-- StepId : 124620
#-- StepName : Digi13 with G4 dE/dx
#-- ApplicationName : Boole
#-- ApplicationVersion : v26r3
#-- OptionFiles : $APPCONFIGOPTS/Boole/Default.py;$APPCONFIGOPTS/Boole/DataType-2012.py;$APPCONFIGOPTS/Boole/Boole-SiG4EnergyDeposit.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
#-- Processing Pass Step-124632
#-- StepId : 124632
#-- StepName : TCK-0x409f0045 Flagged for Sim08 2012
#-- ApplicationName : Moore
#-- ApplicationVersion : v14r8p1
#-- OptionFiles : $APPCONFIGOPTS/Moore/MooreSimProductionWithL0Emulation.py;$APPCONFIGOPTS/Conditions/TCK-0x409f0045.py;$APPCONFIGOPTS/Moore/DataType-2012.py;$APPCONFIGOPTS/L0/L0TCK-0x0045.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
#-- Processing Pass Step-124630
#-- StepId : 124630
#-- StepName : Stripping20-NoPrescalingFlagged for Sim08
#-- ApplicationName : DaVinci
#-- ApplicationVersion : v32r2p1
#-- OptionFiles : $APPCONFIGOPTS/DaVinci/DV-Stripping20-Stripping-MC-NoPrescaling.py;$APPCONFIGOPTS/DaVinci/DataType-2012.py;$APPCONFIGOPTS/DaVinci/InputType-DST.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
#-- Processing Pass Step-126435
#-- StepId : 126435
#-- StepName : Sim08e - 2012 - MU - Pythia8
#-- ApplicationName : Gauss
#-- ApplicationVersion : v45r7
#-- OptionFiles : $APPCONFIGOPTS/Gauss/Sim08-Beam4000GeV-mu100-2012-nu2.5.py;$DECFILESROOT/options/@{eventType}.py;$LBPYTHIA8ROOT/options/Pythia8.py;$APPCONFIGOPTS/Gauss/G4PL_FTFP_BERT_EmNoCuts.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : dddb-20130929-1
#-- CONDDB : sim-20130522-1-vc-mu100
#-- ExtraPackages : AppConfig.v3r193;DecFiles.v27r22
#-- Visible : Y
from Gaudi.Configuration import *
from GaudiConf import IOHelper
IOHelper('ROOT').inputFiles(['LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000001_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000002_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000003_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000004_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000005_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000006_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000007_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000008_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000009_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000010_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000011_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000012_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000013_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000014_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000015_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000016_1.allstreams.dst',
'LFN:/lhcb/MC/2012/ALLSTREAMS.DST/00035796/0000/00035796_00000017_1.allstreams.dst'
], clear=True)
| 45.322581 | 247 | 0.759431 | 507 | 4,215 | 6.240631 | 0.287968 | 0.139697 | 0.048357 | 0.069848 | 0.496839 | 0.496839 | 0.496839 | 0.496839 | 0.481669 | 0.46713 | 0 | 0.21054 | 0.095136 | 4,215 | 92 | 248 | 45.815217 | 0.619035 | 0.601186 | 0 | 0 | 1 | 0.85 | 0.849323 | 0.846863 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d1d6852d9425380edd3198c3036ae6e94053de0c | 116 | py | Python | src/utils/torch_utils.py | YetAnotherPolicy/RMIX | dbc8f0a6560aa3f274765fa78aaaca22351ab8ad | [
"Apache-2.0"
] | 6 | 2021-11-18T16:21:43.000Z | 2021-12-31T02:22:23.000Z | src/utils/torch_utils.py | YetAnotherPolicy/RMIX | dbc8f0a6560aa3f274765fa78aaaca22351ab8ad | [
"Apache-2.0"
] | null | null | null | src/utils/torch_utils.py | YetAnotherPolicy/RMIX | dbc8f0a6560aa3f274765fa78aaaca22351ab8ad | [
"Apache-2.0"
] | 1 | 2022-02-22T02:37:30.000Z | 2022-02-22T02:37:30.000Z | import torch as th
def huber(x, k=1.0):
return th.where(x.abs() < k, 0.5 * x.pow(2), k * (x.abs() - 0.5 * k))
| 19.333333 | 73 | 0.517241 | 27 | 116 | 2.222222 | 0.592593 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078652 | 0.232759 | 116 | 5 | 74 | 23.2 | 0.595506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
ae393a5beae32d41f4772ac1a0542ef95d370d06 | 23 | py | Python | river_model_io/__init__.py | joelrahman/river-model-io | f4ff3fabde40906ab798ae2a668afb68d4bcfae3 | [
"0BSD"
] | null | null | null | river_model_io/__init__.py | joelrahman/river-model-io | f4ff3fabde40906ab798ae2a668afb68d4bcfae3 | [
"0BSD"
] | null | null | null | river_model_io/__init__.py | joelrahman/river-model-io | f4ff3fabde40906ab798ae2a668afb68d4bcfae3 | [
"0BSD"
] | null | null | null | from .bigmod import *

# jotdx/__init__.py (jojoquant/jotdx)
from jotdx import config
from jotdx.consts import EX_HOSTS
from jotdx.consts import GP_HOSTS
from jotdx.consts import HQ_HOSTS
from jotdx.server import server
from jotdx.utils import get_config_path
__version__ = '0.1.11'
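Hard-coded version strings like the `__version__ = '0.1.11'` above are commonly compared by parsing them into integer tuples; a minimal sketch (the `parse_version` helper is hypothetical, not part of jotdx — real projects typically use `packaging.version` instead):

```python
def parse_version(v):
    # '0.1.11' -> (0, 1, 11); tuple comparison then orders versions numerically,
    # avoiding the string-comparison pitfall where '0.1.9' > '0.1.11'
    return tuple(int(part) for part in v.split('.'))

print(parse_version('0.1.11'))  # (0, 1, 11)
```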
| 24.777778 | 39 | 0.829596 | 38 | 223 | 4.631579 | 0.447368 | 0.306818 | 0.255682 | 0.357955 | 0.295455 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020513 | 0.125561 | 223 | 8 | 40 | 27.875 | 0.882051 | 0 | 0 | 0 | 0 | 0 | 0.026906 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
884340d1890b3b844cae63e031d0cdc6ac2fda77 | 84,078 | py | Python | duke-cs671-fall21-coupon-recommendation/outputs/rules/ID3/10_features/maxdepth_5/0/rules.py | apcarrik/kaggle | 6e2d4db58017323e7ba5510bcc2598e01a4ee7bf | [
"MIT"
] | null | null | null | duke-cs671-fall21-coupon-recommendation/outputs/rules/ID3/10_features/maxdepth_5/0/rules.py | apcarrik/kaggle | 6e2d4db58017323e7ba5510bcc2598e01a4ee7bf | [
"MIT"
] | null | null | null | duke-cs671-fall21-coupon-recommendation/outputs/rules/ID3/10_features/maxdepth_5/0/rules.py | apcarrik/kaggle | 6e2d4db58017323e7ba5510bcc2598e01a4ee7bf | [
"MIT"
] | null | null | null | def findDecision(obj): #obj[0]: Passanger, obj[1]: Time, obj[2]: Coupon, obj[3]: Education, obj[4]: Occupation, obj[5]: Bar, obj[6]: Coffeehouse, obj[7]: Restaurant20to50, obj[8]: Direction_same, obj[9]: Distance
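A generated rules file like this exposes a single `findDecision(obj)` entry point: each `obj` index is one feature column, and the nested if/elif chain is a hard-coded decision tree. A minimal, hypothetical two-feature sketch of the same pattern (not the ten-feature tree below):

```python
def find_decision_sketch(obj):
    # obj[0]: Coupon (int-coded), obj[1]: Distance (int-coded)
    # Each split node compares one feature against a learned threshold;
    # leaves return the class label as a string, as in the generated file.
    if obj[0] > 1:
        if obj[1] <= 2:
            return 'True'
        else:
            return 'False'
    else:
        return 'False'

print(find_decision_sketch([2, 1]))  # 'True'
```

Callers pass one row of int-coded features per prediction; no model object or library is needed at inference time, which is the point of exporting the tree as plain code.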
# {"feature": "Coupon", "instances": 8147, "metric_value": 0.4744, "depth": 1}
if obj[2]>1:
# {"feature": "Coffeehouse", "instances": 5889, "metric_value": 0.4587, "depth": 2}
if obj[6]>0.0:
# {"feature": "Distance", "instances": 4432, "metric_value": 0.4358, "depth": 3}
if obj[9]<=2:
# {"feature": "Passanger", "instances": 4015, "metric_value": 0.4266, "depth": 4}
if obj[0]<=2:
# {"feature": "Time", "instances": 2626, "metric_value": 0.4479, "depth": 5}
if obj[1]<=3:
# {"feature": "Occupation", "instances": 2123, "metric_value": 0.4553, "depth": 6}
if obj[4]>0:
# {"feature": "Direction_same", "instances": 2109, "metric_value": 0.4565, "depth": 7}
if obj[8]<=0:
# {"feature": "Restaurant20to50", "instances": 1161, "metric_value": 0.4673, "depth": 8}
if obj[7]<=3.0:
# {"feature": "Education", "instances": 1126, "metric_value": 0.4704, "depth": 9}
if obj[3]<=3:
# {"feature": "Bar", "instances": 1058, "metric_value": 0.4722, "depth": 10}
if obj[5]>0.0:
return 'True'
elif obj[5]<=0.0:
return 'True'
else: return 'True'
elif obj[3]>3:
# {"feature": "Bar", "instances": 68, "metric_value": 0.4281, "depth": 10}
if obj[5]<=2.0:
return 'True'
elif obj[5]>2.0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[7]>3.0:
# {"feature": "Bar", "instances": 35, "metric_value": 0.3255, "depth": 9}
if obj[5]>2.0:
# {"feature": "Education", "instances": 22, "metric_value": 0.4329, "depth": 10}
if obj[3]>1:
return 'True'
elif obj[3]<=1:
return 'True'
else: return 'True'
elif obj[5]<=2.0:
# {"feature": "Education", "instances": 13, "metric_value": 0.1346, "depth": 10}
if obj[3]>0:
return 'True'
elif obj[3]<=0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[8]>0:
# {"feature": "Bar", "instances": 948, "metric_value": 0.4386, "depth": 8}
if obj[5]>-1.0:
# {"feature": "Restaurant20to50", "instances": 940, "metric_value": 0.4379, "depth": 9}
if obj[7]<=2.0:
# {"feature": "Education", "instances": 841, "metric_value": 0.4443, "depth": 10}
if obj[3]<=2:
return 'True'
elif obj[3]>2:
return 'True'
else: return 'True'
elif obj[7]>2.0:
# {"feature": "Education", "instances": 99, "metric_value": 0.3718, "depth": 10}
if obj[3]<=2:
return 'True'
elif obj[3]>2:
return 'True'
else: return 'True'
else: return 'True'
elif obj[5]<=-1.0:
# {"feature": "Education", "instances": 8, "metric_value": 0.375, "depth": 9}
if obj[3]<=1:
# {"feature": "Restaurant20to50", "instances": 4, "metric_value": 0.375, "depth": 10}
if obj[7]<=1.0:
return 'False'
else: return 'False'
elif obj[3]>1:
# {"feature": "Restaurant20to50", "instances": 4, "metric_value": 0.375, "depth": 10}
if obj[7]<=2.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'True'
elif obj[4]<=0:
# {"feature": "Education", "instances": 14, "metric_value": 0.1143, "depth": 7}
if obj[3]<=0:
return 'True'
elif obj[3]>0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.2667, "depth": 8}
if obj[8]<=0:
# {"feature": "Bar", "instances": 3, "metric_value": 0.4444, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Restaurant20to50", "instances": 3, "metric_value": 0.4444, "depth": 10}
if obj[7]<=1.0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[8]>0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[1]>3:
# {"feature": "Bar", "instances": 503, "metric_value": 0.4078, "depth": 6}
if obj[5]<=1.0:
# {"feature": "Occupation", "instances": 337, "metric_value": 0.3791, "depth": 7}
if obj[4]<=13.168241765136806:
# {"feature": "Education", "instances": 281, "metric_value": 0.3606, "depth": 8}
if obj[3]<=3:
# {"feature": "Restaurant20to50", "instances": 268, "metric_value": 0.3706, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Direction_same", "instances": 182, "metric_value": 0.3831, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
# {"feature": "Direction_same", "instances": 86, "metric_value": 0.3442, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[3]>3:
# {"feature": "Restaurant20to50", "instances": 13, "metric_value": 0.1399, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Direction_same", "instances": 11, "metric_value": 0.1653, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[4]>13.168241765136806:
# {"feature": "Education", "instances": 56, "metric_value": 0.4456, "depth": 8}
if obj[3]<=2:
# {"feature": "Restaurant20to50", "instances": 42, "metric_value": 0.4179, "depth": 9}
if obj[7]<=2.0:
# {"feature": "Direction_same", "instances": 40, "metric_value": 0.4387, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]>2.0:
return 'True'
else: return 'True'
elif obj[3]>2:
# {"feature": "Restaurant20to50", "instances": 14, "metric_value": 0.45, "depth": 9}
if obj[7]>0.0:
# {"feature": "Direction_same", "instances": 10, "metric_value": 0.48, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[7]<=0.0:
# {"feature": "Direction_same", "instances": 4, "metric_value": 0.375, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[5]>1.0:
# {"feature": "Education", "instances": 166, "metric_value": 0.4524, "depth": 7}
if obj[3]>0:
# {"feature": "Occupation", "instances": 115, "metric_value": 0.4565, "depth": 8}
if obj[4]<=19:
# {"feature": "Restaurant20to50", "instances": 112, "metric_value": 0.4647, "depth": 9}
if obj[7]>0.0:
# {"feature": "Direction_same", "instances": 103, "metric_value": 0.4751, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]<=0.0:
# {"feature": "Direction_same", "instances": 9, "metric_value": 0.3457, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[4]>19:
return 'False'
else: return 'False'
elif obj[3]<=0:
# {"feature": "Occupation", "instances": 51, "metric_value": 0.3037, "depth": 8}
if obj[4]>4:
# {"feature": "Restaurant20to50", "instances": 39, "metric_value": 0.2501, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Direction_same", "instances": 22, "metric_value": 0.1653, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
# {"feature": "Direction_same", "instances": 17, "metric_value": 0.3599, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[4]<=4:
# {"feature": "Restaurant20to50", "instances": 12, "metric_value": 0.4242, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Direction_same", "instances": 11, "metric_value": 0.4628, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[7]>1.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[0]>2:
# {"feature": "Occupation", "instances": 1389, "metric_value": 0.382, "depth": 5}
if obj[4]<=18.52567473260329:
# {"feature": "Bar", "instances": 1292, "metric_value": 0.3906, "depth": 6}
if obj[5]<=3.0:
# {"feature": "Restaurant20to50", "instances": 1225, "metric_value": 0.3841, "depth": 7}
if obj[7]<=1.0:
# {"feature": "Education", "instances": 789, "metric_value": 0.4048, "depth": 8}
if obj[3]<=3:
# {"feature": "Time", "instances": 745, "metric_value": 0.4128, "depth": 9}
if obj[1]>0:
# {"feature": "Direction_same", "instances": 580, "metric_value": 0.4101, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Direction_same", "instances": 165, "metric_value": 0.4224, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[3]>3:
# {"feature": "Time", "instances": 44, "metric_value": 0.2645, "depth": 9}
if obj[1]>0:
# {"feature": "Direction_same", "instances": 33, "metric_value": 0.2975, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Direction_same", "instances": 11, "metric_value": 0.1653, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[7]>1.0:
# {"feature": "Time", "instances": 436, "metric_value": 0.3377, "depth": 8}
if obj[1]>0:
# {"feature": "Education", "instances": 337, "metric_value": 0.373, "depth": 9}
if obj[3]<=3:
# {"feature": "Direction_same", "instances": 317, "metric_value": 0.3805, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]>3:
# {"feature": "Direction_same", "instances": 20, "metric_value": 0.255, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Education", "instances": 99, "metric_value": 0.2115, "depth": 9}
if obj[3]<=3:
# {"feature": "Direction_same", "instances": 94, "metric_value": 0.2227, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]>3:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[5]>3.0:
# {"feature": "Time", "instances": 67, "metric_value": 0.4397, "depth": 7}
if obj[1]>0:
# {"feature": "Restaurant20to50", "instances": 51, "metric_value": 0.4057, "depth": 8}
if obj[7]>2.0:
# {"feature": "Education", "instances": 26, "metric_value": 0.4253, "depth": 9}
if obj[3]>3:
# {"feature": "Direction_same", "instances": 17, "metric_value": 0.4152, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]<=3:
# {"feature": "Direction_same", "instances": 9, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]<=2.0:
# {"feature": "Education", "instances": 25, "metric_value": 0.3, "depth": 9}
if obj[3]<=0:
# {"feature": "Direction_same", "instances": 16, "metric_value": 0.2188, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]>0:
# {"feature": "Direction_same", "instances": 9, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Education", "instances": 16, "metric_value": 0.4563, "depth": 8}
if obj[3]>0:
# {"feature": "Restaurant20to50", "instances": 9, "metric_value": 0.4167, "depth": 9}
if obj[7]>0.0:
# {"feature": "Direction_same", "instances": 8, "metric_value": 0.4688, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[7]<=0.0:
return 'True'
else: return 'True'
elif obj[3]<=0:
# {"feature": "Restaurant20to50", "instances": 7, "metric_value": 0.3429, "depth": 9}
if obj[7]<=2.0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.48, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[7]>2.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'True'
elif obj[4]>18.52567473260329:
# {"feature": "Bar", "instances": 97, "metric_value": 0.2374, "depth": 6}
if obj[5]<=1.0:
# {"feature": "Restaurant20to50", "instances": 69, "metric_value": 0.2928, "depth": 7}
if obj[7]<=1.0:
# {"feature": "Time", "instances": 62, "metric_value": 0.2567, "depth": 8}
if obj[1]>0:
# {"feature": "Education", "instances": 49, "metric_value": 0.3174, "depth": 9}
if obj[3]<=2:
# {"feature": "Direction_same", "instances": 38, "metric_value": 0.3615, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]>2:
# {"feature": "Direction_same", "instances": 11, "metric_value": 0.1653, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[1]<=0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
# {"feature": "Education", "instances": 7, "metric_value": 0.2143, "depth": 8}
if obj[3]>0:
# {"feature": "Time", "instances": 4, "metric_value": 0.3333, "depth": 9}
if obj[1]>0:
# {"feature": "Direction_same", "instances": 3, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[1]<=0:
return 'False'
else: return 'False'
elif obj[3]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[5]>1.0:
# {"feature": "Education", "instances": 28, "metric_value": 0.0663, "depth": 7}
if obj[3]>0:
# {"feature": "Time", "instances": 14, "metric_value": 0.1224, "depth": 8}
if obj[1]<=2:
return 'True'
elif obj[1]>2:
# {"feature": "Restaurant20to50", "instances": 7, "metric_value": 0.2286, "depth": 9}
if obj[7]>1.0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.32, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]<=1.0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[3]<=0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[9]>2:
# {"feature": "Passanger", "instances": 417, "metric_value": 0.4857, "depth": 4}
if obj[0]>0:
# {"feature": "Time", "instances": 404, "metric_value": 0.4863, "depth": 5}
if obj[1]>0:
# {"feature": "Bar", "instances": 347, "metric_value": 0.4936, "depth": 6}
if obj[5]>-1.0:
# {"feature": "Education", "instances": 344, "metric_value": 0.4944, "depth": 7}
if obj[3]<=3:
# {"feature": "Restaurant20to50", "instances": 318, "metric_value": 0.4935, "depth": 8}
if obj[7]>-1.0:
# {"feature": "Occupation", "instances": 316, "metric_value": 0.495, "depth": 9}
if obj[4]<=7.639240506329114:
# {"feature": "Direction_same", "instances": 202, "metric_value": 0.4992, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[4]>7.639240506329114:
# {"feature": "Direction_same", "instances": 114, "metric_value": 0.4875, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]<=-1.0:
return 'False'
else: return 'False'
elif obj[3]>3:
# {"feature": "Occupation", "instances": 26, "metric_value": 0.337, "depth": 8}
if obj[4]>1:
# {"feature": "Restaurant20to50", "instances": 14, "metric_value": 0.2286, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Direction_same", "instances": 10, "metric_value": 0.32, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
return 'True'
else: return 'True'
elif obj[4]<=1:
# {"feature": "Restaurant20to50", "instances": 12, "metric_value": 0.3704, "depth": 9}
if obj[7]<=3.0:
# {"feature": "Direction_same", "instances": 9, "metric_value": 0.4938, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[7]>3.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'True'
elif obj[5]<=-1.0:
return 'False'
else: return 'False'
elif obj[1]<=0:
# {"feature": "Education", "instances": 57, "metric_value": 0.3851, "depth": 6}
if obj[3]<=2:
# {"feature": "Bar", "instances": 43, "metric_value": 0.4289, "depth": 7}
if obj[5]>0.0:
# {"feature": "Occupation", "instances": 29, "metric_value": 0.3557, "depth": 8}
if obj[4]>2:
# {"feature": "Restaurant20to50", "instances": 24, "metric_value": 0.3021, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Direction_same", "instances": 16, "metric_value": 0.2188, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[7]>1.0:
# {"feature": "Direction_same", "instances": 8, "metric_value": 0.4688, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[4]<=2:
# {"feature": "Restaurant20to50", "instances": 5, "metric_value": 0.4, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Direction_same", "instances": 4, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[5]<=0.0:
# {"feature": "Restaurant20to50", "instances": 14, "metric_value": 0.3365, "depth": 8}
if obj[7]>0.0:
# {"feature": "Occupation", "instances": 9, "metric_value": 0.0, "depth": 9}
if obj[4]<=19:
return 'True'
elif obj[4]>19:
return 'False'
else: return 'False'
elif obj[7]<=0.0:
# {"feature": "Occupation", "instances": 5, "metric_value": 0.0, "depth": 9}
if obj[4]>1:
return 'False'
elif obj[4]<=1:
return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
elif obj[3]>2:
# {"feature": "Occupation", "instances": 14, "metric_value": 0.0, "depth": 7}
if obj[4]<=16:
return 'False'
elif obj[4]>16:
return 'True'
else: return 'True'
else: return 'False'
else: return 'False'
elif obj[0]<=0:
# {"feature": "Restaurant20to50", "instances": 13, "metric_value": 0.2051, "depth": 5}
if obj[7]<=1.0:
return 'True'
elif obj[7]>1.0:
# {"feature": "Bar", "instances": 6, "metric_value": 0.0, "depth": 6}
if obj[5]>1.0:
return 'True'
elif obj[5]<=1.0:
return 'False'
else: return 'False'
else: return 'True'
else: return 'True'
else: return 'False'
elif obj[6]<=0.0:
# {"feature": "Passanger", "instances": 1457, "metric_value": 0.495, "depth": 3}
if obj[0]<=1:
# {"feature": "Distance", "instances": 933, "metric_value": 0.4888, "depth": 4}
if obj[9]<=1:
# {"feature": "Time", "instances": 499, "metric_value": 0.4911, "depth": 5}
if obj[1]>0:
# {"feature": "Direction_same", "instances": 325, "metric_value": 0.478, "depth": 6}
if obj[8]>0:
# {"feature": "Education", "instances": 170, "metric_value": 0.4853, "depth": 7}
if obj[3]>1:
# {"feature": "Restaurant20to50", "instances": 103, "metric_value": 0.4845, "depth": 8}
if obj[7]<=2.0:
# {"feature": "Occupation", "instances": 101, "metric_value": 0.4875, "depth": 9}
if obj[4]>5:
# {"feature": "Bar", "instances": 61, "metric_value": 0.4748, "depth": 10}
if obj[5]>0.0:
return 'True'
elif obj[5]<=0.0:
return 'False'
else: return 'False'
elif obj[4]<=5:
# {"feature": "Bar", "instances": 40, "metric_value": 0.432, "depth": 10}
if obj[5]<=0.0:
return 'False'
elif obj[5]>0.0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]>2.0:
return 'False'
else: return 'False'
elif obj[3]<=1:
# {"feature": "Bar", "instances": 67, "metric_value": 0.4411, "depth": 8}
if obj[5]<=1.0:
# {"feature": "Restaurant20to50", "instances": 55, "metric_value": 0.3984, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Occupation", "instances": 46, "metric_value": 0.439, "depth": 10}
if obj[4]<=12:
return 'True'
elif obj[4]>12:
return 'True'
else: return 'True'
elif obj[7]>1.0:
return 'True'
else: return 'True'
elif obj[5]>1.0:
# {"feature": "Restaurant20to50", "instances": 12, "metric_value": 0.3429, "depth": 9}
if obj[7]<=1.0:
# {"feature": "Occupation", "instances": 7, "metric_value": 0.2286, "depth": 10}
if obj[4]>1:
return 'False'
elif obj[4]<=1:
return 'False'
else: return 'False'
elif obj[7]>1.0:
# {"feature": "Occupation", "instances": 5, "metric_value": 0.4667, "depth": 10}
if obj[4]>10:
return 'True'
elif obj[4]<=10:
return 'False'
else: return 'False'
else: return 'True'
else: return 'False'
else: return 'True'
elif obj[8]<=0:
# {"feature": "Bar", "instances": 155, "metric_value": 0.4437, "depth": 7}
if obj[5]<=0.0:
# {"feature": "Occupation", "instances": 82, "metric_value": 0.3864, "depth": 8}
if obj[4]<=6:
# {"feature": "Education", "instances": 51, "metric_value": 0.4422, "depth": 9}
if obj[3]>0:
# {"feature": "Restaurant20to50", "instances": 26, "metric_value": 0.359, "depth": 10}
if obj[7]<=1.0:
return 'True'
elif obj[7]>1.0:
return 'True'
else: return 'True'
elif obj[3]<=0:
# {"feature": "Restaurant20to50", "instances": 25, "metric_value": 0.4562, "depth": 10}
if obj[7]>0.0:
return 'True'
elif obj[7]<=0.0:
return 'False'
else: return 'False'
else: return 'True'
elif obj[4]>6:
# {"feature": "Education", "instances": 31, "metric_value": 0.2497, "depth": 9}
if obj[3]>0:
# {"feature": "Restaurant20to50", "instances": 16, "metric_value": 0.1042, "depth": 10}
if obj[7]<=0.0:
return 'True'
elif obj[7]>0.0:
return 'True'
else: return 'True'
elif obj[3]<=0:
# {"feature": "Restaurant20to50", "instances": 15, "metric_value": 0.3778, "depth": 10}
if obj[7]>0.0:
return 'True'
elif obj[7]<=0.0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[5]>0.0:
# {"feature": "Restaurant20to50", "instances": 73, "metric_value": 0.447, "depth": 8}
if obj[7]>0.0:
# {"feature": "Occupation", "instances": 61, "metric_value": 0.4307, "depth": 9}
if obj[4]<=6:
# {"feature": "Education", "instances": 34, "metric_value": 0.4847, "depth": 10}
if obj[3]>0:
return 'False'
elif obj[3]<=0:
return 'True'
else: return 'True'
elif obj[4]>6:
# {"feature": "Education", "instances": 27, "metric_value": 0.3281, "depth": 10}
if obj[3]>1:
return 'True'
elif obj[3]<=1:
return 'True'
else: return 'True'
else: return 'True'
elif obj[7]<=0.0:
# {"feature": "Occupation", "instances": 12, "metric_value": 0.25, "depth": 9}
if obj[4]>8:
return 'False'
elif obj[4]<=8:
# {"feature": "Education", "instances": 6, "metric_value": 0.4, "depth": 10}
if obj[3]<=2:
return 'False'
elif obj[3]>2:
return 'True'
else: return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Occupation", "instances": 174, "metric_value": 0.4654, "depth": 6}
if obj[4]<=6:
# {"feature": "Education", "instances": 94, "metric_value": 0.4397, "depth": 7}
if obj[3]<=4:
# {"feature": "Bar", "instances": 93, "metric_value": 0.4348, "depth": 8}
if obj[5]>-1.0:
# {"feature": "Restaurant20to50", "instances": 92, "metric_value": 0.4372, "depth": 9}
if obj[7]>-1.0:
# {"feature": "Direction_same", "instances": 91, "metric_value": 0.4405, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[7]<=-1.0:
return 'False'
else: return 'False'
elif obj[5]<=-1.0:
return 'True'
else: return 'True'
elif obj[3]>4:
return 'True'
else: return 'True'
elif obj[4]>6:
# {"feature": "Direction_same", "instances": 80, "metric_value": 0.4492, "depth": 7}
if obj[8]>0:
# {"feature": "Restaurant20to50", "instances": 42, "metric_value": 0.4, "depth": 8}
if obj[7]>-1.0:
# {"feature": "Bar", "instances": 40, "metric_value": 0.4154, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Education", "instances": 39, "metric_value": 0.4251, "depth": 10}
if obj[3]>0:
return 'True'
elif obj[3]<=0:
return 'True'
else: return 'True'
elif obj[5]>2.0:
return 'True'
else: return 'True'
elif obj[7]<=-1.0:
return 'True'
else: return 'True'
elif obj[8]<=0:
# {"feature": "Education", "instances": 38, "metric_value": 0.4605, "depth": 8}
if obj[3]<=3:
# {"feature": "Bar", "instances": 36, "metric_value": 0.4611, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Restaurant20to50", "instances": 30, "metric_value": 0.491, "depth": 10}
if obj[7]<=1.0:
return 'False'
elif obj[7]>1.0:
return 'True'
else: return 'True'
elif obj[5]>2.0:
# {"feature": "Restaurant20to50", "instances": 6, "metric_value": 0.25, "depth": 10}
if obj[7]>1.0:
return 'False'
elif obj[7]<=1.0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[3]>3:
return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
else: return 'False'
elif obj[9]>1:
# {"feature": "Education", "instances": 434, "metric_value": 0.4739, "depth": 5}
if obj[3]>1:
# {"feature": "Bar", "instances": 258, "metric_value": 0.4526, "depth": 6}
if obj[5]<=1.0:
# {"feature": "Occupation", "instances": 202, "metric_value": 0.4655, "depth": 7}
if obj[4]<=9:
# {"feature": "Time", "instances": 134, "metric_value": 0.4758, "depth": 8}
if obj[1]>0:
# {"feature": "Direction_same", "instances": 107, "metric_value": 0.4829, "depth": 9}
if obj[8]<=0:
# {"feature": "Restaurant20to50", "instances": 89, "metric_value": 0.4818, "depth": 10}
if obj[7]>0.0:
return 'False'
elif obj[7]<=0.0:
return 'True'
else: return 'True'
elif obj[8]>0:
# {"feature": "Restaurant20to50", "instances": 18, "metric_value": 0.3723, "depth": 10}
if obj[7]<=1.0:
return 'False'
elif obj[7]>1.0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[1]<=0:
# {"feature": "Direction_same", "instances": 27, "metric_value": 0.3608, "depth": 9}
if obj[8]<=0:
# {"feature": "Restaurant20to50", "instances": 17, "metric_value": 0.2647, "depth": 10}
if obj[7]>0.0:
return 'False'
elif obj[7]<=0.0:
return 'False'
else: return 'False'
elif obj[8]>0:
# {"feature": "Restaurant20to50", "instances": 10, "metric_value": 0.4762, "depth": 10}
if obj[7]>0.0:
return 'False'
elif obj[7]<=0.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[4]>9:
# {"feature": "Direction_same", "instances": 68, "metric_value": 0.4059, "depth": 8}
if obj[8]<=0:
# {"feature": "Time", "instances": 54, "metric_value": 0.3641, "depth": 9}
if obj[1]>0:
# {"feature": "Restaurant20to50", "instances": 47, "metric_value": 0.4144, "depth": 10}
if obj[7]<=2.0:
return 'False'
elif obj[7]>2.0:
return 'False'
else: return 'False'
elif obj[1]<=0:
return 'False'
else: return 'False'
elif obj[8]>0:
# {"feature": "Time", "instances": 14, "metric_value": 0.381, "depth": 9}
if obj[1]>0:
# {"feature": "Restaurant20to50", "instances": 12, "metric_value": 0.419, "depth": 10}
if obj[7]>0.0:
return 'False'
elif obj[7]<=0.0:
return 'False'
else: return 'False'
elif obj[1]<=0:
return 'True'
else: return 'True'
else: return 'False'
else: return 'False'
elif obj[5]>1.0:
# {"feature": "Occupation", "instances": 56, "metric_value": 0.3569, "depth": 7}
if obj[4]>3:
# {"feature": "Restaurant20to50", "instances": 43, "metric_value": 0.4027, "depth": 8}
if obj[7]<=1.0:
# {"feature": "Time", "instances": 30, "metric_value": 0.4593, "depth": 9}
if obj[1]<=1:
# {"feature": "Direction_same", "instances": 21, "metric_value": 0.4417, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[1]>1:
# {"feature": "Direction_same", "instances": 9, "metric_value": 0.4938, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]>1.0:
# {"feature": "Direction_same", "instances": 13, "metric_value": 0.141, "depth": 9}
if obj[8]<=0:
# {"feature": "Time", "instances": 12, "metric_value": 0.1111, "depth": 10}
if obj[1]<=1:
return 'False'
elif obj[1]>1:
return 'False'
else: return 'False'
elif obj[8]>0:
return 'True'
else: return 'True'
else: return 'False'
elif obj[4]<=3:
# {"feature": "Time", "instances": 13, "metric_value": 0.1282, "depth": 8}
if obj[1]<=1:
return 'False'
elif obj[1]>1:
# {"feature": "Restaurant20to50", "instances": 6, "metric_value": 0.25, "depth": 9}
if obj[7]>1.0:
# {"feature": "Direction_same", "instances": 4, "metric_value": 0.375, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[7]<=1.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[3]<=1:
# {"feature": "Occupation", "instances": 176, "metric_value": 0.483, "depth": 6}
if obj[4]>3:
# {"feature": "Restaurant20to50", "instances": 115, "metric_value": 0.4881, "depth": 7}
if obj[7]<=1.0:
# {"feature": "Time", "instances": 89, "metric_value": 0.492, "depth": 8}
if obj[1]>0:
# {"feature": "Bar", "instances": 72, "metric_value": 0.4884, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Direction_same", "instances": 65, "metric_value": 0.4941, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[5]>2.0:
# {"feature": "Direction_same", "instances": 7, "metric_value": 0.3714, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'True'
else: return 'True'
else: return 'False'
elif obj[1]<=0:
# {"feature": "Direction_same", "instances": 17, "metric_value": 0.4189, "depth": 9}
if obj[8]<=0:
# {"feature": "Bar", "instances": 11, "metric_value": 0.4545, "depth": 10}
if obj[5]<=2.0:
return 'True'
elif obj[5]>2.0:
return 'False'
else: return 'False'
elif obj[8]>0:
# {"feature": "Bar", "instances": 6, "metric_value": 0.25, "depth": 10}
if obj[5]<=0.0:
return 'True'
elif obj[5]>0.0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[7]>1.0:
# {"feature": "Time", "instances": 26, "metric_value": 0.4259, "depth": 8}
if obj[1]>0:
# {"feature": "Bar", "instances": 23, "metric_value": 0.3844, "depth": 9}
if obj[5]>0.0:
# {"feature": "Direction_same", "instances": 19, "metric_value": 0.4613, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[5]<=0.0:
return 'True'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Bar", "instances": 3, "metric_value": 0.3333, "depth": 9}
if obj[5]>0.0:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.0, "depth": 10}
if obj[8]>0:
return 'True'
elif obj[8]<=0:
return 'False'
else: return 'False'
elif obj[5]<=0.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'True'
elif obj[4]<=3:
# {"feature": "Restaurant20to50", "instances": 61, "metric_value": 0.4312, "depth": 7}
if obj[7]>0.0:
# {"feature": "Direction_same", "instances": 41, "metric_value": 0.4253, "depth": 8}
if obj[8]<=0:
# {"feature": "Time", "instances": 35, "metric_value": 0.4114, "depth": 9}
if obj[1]>0:
# {"feature": "Bar", "instances": 30, "metric_value": 0.4528, "depth": 10}
if obj[5]<=1.0:
return 'False'
elif obj[5]>1.0:
return 'False'
else: return 'False'
elif obj[1]<=0:
return 'False'
else: return 'False'
elif obj[8]>0:
# {"feature": "Time", "instances": 6, "metric_value": 0.2222, "depth": 9}
if obj[1]>0:
# {"feature": "Bar", "instances": 3, "metric_value": 0.4444, "depth": 10}
if obj[5]<=0.0:
return 'True'
else: return 'True'
elif obj[1]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[7]<=0.0:
# {"feature": "Time", "instances": 20, "metric_value": 0.3, "depth": 8}
if obj[1]<=3:
# {"feature": "Direction_same", "instances": 16, "metric_value": 0.3333, "depth": 9}
if obj[8]<=0:
# {"feature": "Bar", "instances": 12, "metric_value": 0.4444, "depth": 10}
if obj[5]<=0.0:
return 'False'
else: return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[1]>3:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[0]>1:
# {"feature": "Distance", "instances": 524, "metric_value": 0.4635, "depth": 4}
if obj[9]>1:
# {"feature": "Occupation", "instances": 349, "metric_value": 0.4481, "depth": 5}
if obj[4]>1.5048799563075503:
# {"feature": "Time", "instances": 290, "metric_value": 0.4613, "depth": 6}
if obj[1]<=2:
# {"feature": "Education", "instances": 169, "metric_value": 0.4664, "depth": 7}
if obj[3]<=2:
# {"feature": "Restaurant20to50", "instances": 140, "metric_value": 0.4914, "depth": 8}
if obj[7]<=1.0:
# {"feature": "Bar", "instances": 114, "metric_value": 0.4767, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Direction_same", "instances": 110, "metric_value": 0.494, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[5]>2.0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
# {"feature": "Bar", "instances": 26, "metric_value": 0.4796, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Direction_same", "instances": 17, "metric_value": 0.4983, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[5]>2.0:
# {"feature": "Direction_same", "instances": 9, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[3]>2:
# {"feature": "Restaurant20to50", "instances": 29, "metric_value": 0.3155, "depth": 8}
if obj[7]<=2.0:
# {"feature": "Bar", "instances": 27, "metric_value": 0.2963, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Direction_same", "instances": 24, "metric_value": 0.2778, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[5]>2.0:
# {"feature": "Direction_same", "instances": 3, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[7]>2.0:
# {"feature": "Bar", "instances": 2, "metric_value": 0.5, "depth": 9}
if obj[5]<=0.0:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'True'
elif obj[1]>2:
# {"feature": "Restaurant20to50", "instances": 121, "metric_value": 0.4185, "depth": 7}
if obj[7]<=1.0:
# {"feature": "Education", "instances": 96, "metric_value": 0.4508, "depth": 8}
if obj[3]<=3:
# {"feature": "Bar", "instances": 91, "metric_value": 0.444, "depth": 9}
if obj[5]>-1.0:
# {"feature": "Direction_same", "instances": 89, "metric_value": 0.454, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[5]<=-1.0:
return 'True'
else: return 'True'
elif obj[3]>3:
# {"feature": "Bar", "instances": 5, "metric_value": 0.2667, "depth": 9}
if obj[5]<=0.0:
# {"feature": "Direction_same", "instances": 3, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[5]>0.0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]>1.0:
# {"feature": "Education", "instances": 25, "metric_value": 0.24, "depth": 8}
if obj[3]<=2:
# {"feature": "Bar", "instances": 20, "metric_value": 0.1444, "depth": 9}
if obj[5]<=3.0:
# {"feature": "Direction_same", "instances": 18, "metric_value": 0.1049, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[5]>3.0:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[3]>2:
# {"feature": "Bar", "instances": 5, "metric_value": 0.48, "depth": 9}
if obj[5]<=0.0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.48, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[4]<=1.5048799563075503:
# {"feature": "Time", "instances": 59, "metric_value": 0.3448, "depth": 6}
if obj[1]>0:
# {"feature": "Restaurant20to50", "instances": 45, "metric_value": 0.399, "depth": 7}
if obj[7]>0.0:
# {"feature": "Education", "instances": 33, "metric_value": 0.3463, "depth": 8}
if obj[3]<=2:
# {"feature": "Bar", "instances": 28, "metric_value": 0.4011, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Direction_same", "instances": 26, "metric_value": 0.3935, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[5]>2.0:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[3]>2:
return 'True'
else: return 'True'
elif obj[7]<=0.0:
# {"feature": "Education", "instances": 12, "metric_value": 0.375, "depth": 8}
if obj[3]<=2:
# {"feature": "Bar", "instances": 8, "metric_value": 0.375, "depth": 9}
if obj[5]<=0.0:
# {"feature": "Direction_same", "instances": 8, "metric_value": 0.375, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[3]>2:
# {"feature": "Bar", "instances": 4, "metric_value": 0.0, "depth": 9}
if obj[5]>-1.0:
return 'False'
elif obj[5]<=-1.0:
return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Education", "instances": 14, "metric_value": 0.125, "depth": 7}
if obj[3]<=0:
# {"feature": "Restaurant20to50", "instances": 8, "metric_value": 0.2, "depth": 8}
if obj[7]<=1.0:
# {"feature": "Bar", "instances": 5, "metric_value": 0.32, "depth": 9}
if obj[5]<=0.0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.32, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[7]>1.0:
return 'True'
else: return 'True'
elif obj[3]>0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[9]<=1:
# {"feature": "Occupation", "instances": 175, "metric_value": 0.4681, "depth": 5}
if obj[4]>0:
# {"feature": "Education", "instances": 171, "metric_value": 0.4675, "depth": 6}
if obj[3]<=2:
# {"feature": "Restaurant20to50", "instances": 138, "metric_value": 0.4827, "depth": 7}
if obj[7]>0.0:
# {"feature": "Time", "instances": 97, "metric_value": 0.4603, "depth": 8}
if obj[1]<=3:
# {"feature": "Bar", "instances": 69, "metric_value": 0.4857, "depth": 9}
if obj[5]<=1.0:
# {"feature": "Direction_same", "instances": 45, "metric_value": 0.48, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[5]>1.0:
# {"feature": "Direction_same", "instances": 24, "metric_value": 0.4965, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[1]>3:
# {"feature": "Bar", "instances": 28, "metric_value": 0.3652, "depth": 9}
if obj[5]<=2.0:
# {"feature": "Direction_same", "instances": 23, "metric_value": 0.3403, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[5]>2.0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.48, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[7]<=0.0:
# {"feature": "Bar", "instances": 41, "metric_value": 0.4671, "depth": 8}
if obj[5]<=1.0:
# {"feature": "Time", "instances": 35, "metric_value": 0.4959, "depth": 9}
if obj[1]<=3:
# {"feature": "Direction_same", "instances": 28, "metric_value": 0.4974, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[1]>3:
# {"feature": "Direction_same", "instances": 7, "metric_value": 0.4898, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[5]>1.0:
# {"feature": "Time", "instances": 6, "metric_value": 0.2222, "depth": 9}
if obj[1]>3:
# {"feature": "Direction_same", "instances": 3, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[1]<=3:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[3]>2:
# {"feature": "Time", "instances": 33, "metric_value": 0.3561, "depth": 7}
if obj[1]<=2:
# {"feature": "Restaurant20to50", "instances": 25, "metric_value": 0.3043, "depth": 8}
if obj[7]>-1.0:
# {"feature": "Bar", "instances": 23, "metric_value": 0.2849, "depth": 9}
if obj[5]<=1.0:
# {"feature": "Direction_same", "instances": 19, "metric_value": 0.2659, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[5]>1.0:
# {"feature": "Direction_same", "instances": 4, "metric_value": 0.375, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]<=-1.0:
# {"feature": "Bar", "instances": 2, "metric_value": 0.0, "depth": 9}
if obj[5]<=-1.0:
return 'False'
elif obj[5]>-1.0:
return 'True'
else: return 'True'
else: return 'False'
elif obj[1]>2:
# {"feature": "Restaurant20to50", "instances": 8, "metric_value": 0.4286, "depth": 8}
if obj[7]<=1.0:
# {"feature": "Bar", "instances": 7, "metric_value": 0.381, "depth": 9}
if obj[5]<=1.0:
# {"feature": "Direction_same", "instances": 6, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[5]>1.0:
return 'True'
else: return 'True'
elif obj[7]>1.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[4]<=0:
return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
else: return 'True'
elif obj[2]<=1:
# {"feature": "Bar", "instances": 2258, "metric_value": 0.4643, "depth": 2}
if obj[5]>0.0:
# {"feature": "Time", "instances": 1296, "metric_value": 0.4911, "depth": 3}
if obj[1]>0:
# {"feature": "Passanger", "instances": 933, "metric_value": 0.4879, "depth": 4}
if obj[0]<=2:
# {"feature": "Coffeehouse", "instances": 743, "metric_value": 0.4889, "depth": 5}
if obj[6]>1.0:
# {"feature": "Restaurant20to50", "instances": 409, "metric_value": 0.4944, "depth": 6}
if obj[7]<=2.0:
# {"feature": "Education", "instances": 342, "metric_value": 0.4925, "depth": 7}
if obj[3]<=2:
# {"feature": "Occupation", "instances": 258, "metric_value": 0.4826, "depth": 8}
if obj[4]>5:
# {"feature": "Distance", "instances": 167, "metric_value": 0.4785, "depth": 9}
if obj[9]<=2:
# {"feature": "Direction_same", "instances": 123, "metric_value": 0.4944, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'True'
else: return 'True'
elif obj[9]>2:
# {"feature": "Direction_same", "instances": 44, "metric_value": 0.4339, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[4]<=5:
# {"feature": "Distance", "instances": 91, "metric_value": 0.4667, "depth": 9}
if obj[9]>1:
# {"feature": "Direction_same", "instances": 60, "metric_value": 0.4856, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'True'
else: return 'True'
elif obj[9]<=1:
# {"feature": "Direction_same", "instances": 31, "metric_value": 0.3871, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[3]>2:
# {"feature": "Occupation", "instances": 84, "metric_value": 0.4264, "depth": 8}
if obj[4]>5:
# {"feature": "Distance", "instances": 64, "metric_value": 0.416, "depth": 9}
if obj[9]<=2:
# {"feature": "Direction_same", "instances": 48, "metric_value": 0.4296, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[9]>2:
# {"feature": "Direction_same", "instances": 16, "metric_value": 0.375, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[4]<=5:
# {"feature": "Direction_same", "instances": 20, "metric_value": 0.4118, "depth": 9}
if obj[8]<=0:
# {"feature": "Distance", "instances": 17, "metric_value": 0.4759, "depth": 10}
if obj[9]>1:
return 'True'
elif obj[9]<=1:
return 'False'
else: return 'False'
elif obj[8]>0:
return 'True'
else: return 'True'
else: return 'True'
else: return 'False'
elif obj[7]>2.0:
# {"feature": "Occupation", "instances": 67, "metric_value": 0.4501, "depth": 7}
if obj[4]<=17:
# {"feature": "Education", "instances": 63, "metric_value": 0.4717, "depth": 8}
if obj[3]<=3:
# {"feature": "Distance", "instances": 49, "metric_value": 0.4783, "depth": 9}
if obj[9]>1:
# {"feature": "Direction_same", "instances": 26, "metric_value": 0.4583, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'True'
else: return 'True'
elif obj[9]<=1:
# {"feature": "Direction_same", "instances": 23, "metric_value": 0.4522, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[3]>3:
# {"feature": "Distance", "instances": 14, "metric_value": 0.3393, "depth": 9}
if obj[9]>1:
# {"feature": "Direction_same", "instances": 8, "metric_value": 0.2083, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'True'
else: return 'True'
elif obj[9]<=1:
# {"feature": "Direction_same", "instances": 6, "metric_value": 0.25, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'False'
else: return 'False'
else: return 'True'
else: return 'True'
elif obj[4]>17:
return 'True'
else: return 'True'
else: return 'True'
elif obj[6]<=1.0:
# {"feature": "Education", "instances": 334, "metric_value": 0.4654, "depth": 6}
if obj[3]>0:
# {"feature": "Restaurant20to50", "instances": 219, "metric_value": 0.4311, "depth": 7}
if obj[7]<=2.0:
# {"feature": "Direction_same", "instances": 215, "metric_value": 0.4335, "depth": 8}
if obj[8]<=0:
# {"feature": "Occupation", "instances": 188, "metric_value": 0.4427, "depth": 9}
if obj[4]>2.8018865567551456:
# {"feature": "Distance", "instances": 151, "metric_value": 0.425, "depth": 10}
if obj[9]>1:
return 'False'
elif obj[9]<=1:
return 'False'
else: return 'False'
elif obj[4]<=2.8018865567551456:
# {"feature": "Distance", "instances": 37, "metric_value": 0.4994, "depth": 10}
if obj[9]>1:
return 'False'
elif obj[9]<=1:
return 'True'
else: return 'True'
else: return 'False'
elif obj[8]>0:
# {"feature": "Occupation", "instances": 27, "metric_value": 0.2729, "depth": 9}
if obj[4]>4:
# {"feature": "Distance", "instances": 19, "metric_value": 0.3792, "depth": 10}
if obj[9]>1:
return 'False'
elif obj[9]<=1:
return 'False'
else: return 'False'
elif obj[4]<=4:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]>2.0:
return 'True'
else: return 'True'
elif obj[3]<=0:
# {"feature": "Occupation", "instances": 115, "metric_value": 0.479, "depth": 7}
if obj[4]>5:
# {"feature": "Restaurant20to50", "instances": 77, "metric_value": 0.4753, "depth": 8}
if obj[7]<=2.0:
# {"feature": "Distance", "instances": 72, "metric_value": 0.4825, "depth": 9}
if obj[9]>1:
# {"feature": "Direction_same", "instances": 49, "metric_value": 0.4548, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'True'
else: return 'True'
elif obj[9]<=1:
# {"feature": "Direction_same", "instances": 23, "metric_value": 0.4174, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'False'
else: return 'False'
else: return 'True'
elif obj[7]>2.0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.2, "depth": 9}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
# {"feature": "Distance", "instances": 2, "metric_value": 0.0, "depth": 10}
if obj[9]<=1:
return 'False'
elif obj[9]>1:
return 'True'
else: return 'True'
else: return 'False'
else: return 'False'
elif obj[4]<=5:
# {"feature": "Restaurant20to50", "instances": 38, "metric_value": 0.417, "depth": 8}
if obj[7]>0.0:
# {"feature": "Direction_same", "instances": 29, "metric_value": 0.484, "depth": 9}
if obj[8]<=0:
# {"feature": "Distance", "instances": 27, "metric_value": 0.4825, "depth": 10}
if obj[9]<=2:
return 'False'
elif obj[9]>2:
return 'False'
else: return 'False'
elif obj[8]>0:
# {"feature": "Distance", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[9]<=2:
return 'False'
else: return 'False'
else: return 'False'
elif obj[7]<=0.0:
# {"feature": "Distance", "instances": 9, "metric_value": 0.1481, "depth": 9}
if obj[9]<=2:
return 'False'
elif obj[9]>2:
# {"feature": "Direction_same", "instances": 3, "metric_value": 0.4444, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[0]>2:
# {"feature": "Occupation", "instances": 190, "metric_value": 0.4413, "depth": 5}
if obj[4]<=19.70715618872958:
# {"feature": "Coffeehouse", "instances": 179, "metric_value": 0.4572, "depth": 6}
if obj[6]>1.0:
# {"feature": "Restaurant20to50", "instances": 98, "metric_value": 0.4155, "depth": 7}
if obj[7]<=3.0:
# {"feature": "Distance", "instances": 93, "metric_value": 0.4014, "depth": 8}
if obj[9]>1:
# {"feature": "Education", "instances": 82, "metric_value": 0.4254, "depth": 9}
if obj[3]<=3:
# {"feature": "Direction_same", "instances": 79, "metric_value": 0.4416, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]>3:
return 'True'
else: return 'True'
elif obj[9]<=1:
# {"feature": "Education", "instances": 11, "metric_value": 0.1558, "depth": 9}
if obj[3]>0:
# {"feature": "Direction_same", "instances": 7, "metric_value": 0.2449, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[7]>3.0:
# {"feature": "Education", "instances": 5, "metric_value": 0.0, "depth": 8}
if obj[3]>2:
return 'False'
elif obj[3]<=2:
return 'True'
else: return 'True'
else: return 'False'
elif obj[6]<=1.0:
# {"feature": "Restaurant20to50", "instances": 81, "metric_value": 0.4778, "depth": 7}
if obj[7]<=1.0:
# {"feature": "Distance", "instances": 58, "metric_value": 0.4855, "depth": 8}
if obj[9]>1:
# {"feature": "Education", "instances": 48, "metric_value": 0.4896, "depth": 9}
if obj[3]<=2:
# {"feature": "Direction_same", "instances": 44, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[3]>2:
# {"feature": "Direction_same", "instances": 4, "metric_value": 0.375, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[9]<=1:
# {"feature": "Education", "instances": 10, "metric_value": 0.4, "depth": 9}
if obj[3]<=0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.32, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[3]>0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.48, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[7]>1.0:
# {"feature": "Distance", "instances": 23, "metric_value": 0.3581, "depth": 8}
if obj[9]>1:
# {"feature": "Education", "instances": 17, "metric_value": 0.4471, "depth": 9}
if obj[3]>0:
# {"feature": "Direction_same", "instances": 12, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[3]<=0:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.32, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[9]<=1:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[4]>19.70715618872958:
return 'True'
else: return 'True'
else: return 'True'
elif obj[1]<=0:
# {"feature": "Passanger", "instances": 363, "metric_value": 0.4495, "depth": 4}
if obj[0]<=1:
# {"feature": "Distance", "instances": 333, "metric_value": 0.4386, "depth": 5}
if obj[9]<=1:
# {"feature": "Restaurant20to50", "instances": 170, "metric_value": 0.3751, "depth": 6}
if obj[7]<=1.0:
# {"feature": "Coffeehouse", "instances": 102, "metric_value": 0.4339, "depth": 7}
if obj[6]<=2.0:
# {"feature": "Occupation", "instances": 81, "metric_value": 0.4508, "depth": 8}
if obj[4]>5:
# {"feature": "Education", "instances": 50, "metric_value": 0.4767, "depth": 9}
if obj[3]<=3:
# {"feature": "Direction_same", "instances": 48, "metric_value": 0.4965, "depth": 10}
if obj[8]<=1:
return 'True'
else: return 'True'
elif obj[3]>3:
return 'True'
else: return 'True'
elif obj[4]<=5:
# {"feature": "Education", "instances": 31, "metric_value": 0.3653, "depth": 9}
if obj[3]>0:
# {"feature": "Direction_same", "instances": 22, "metric_value": 0.4339, "depth": 10}
if obj[8]<=1:
return 'True'
else: return 'True'
elif obj[3]<=0:
# {"feature": "Direction_same", "instances": 9, "metric_value": 0.1975, "depth": 10}
if obj[8]<=1:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[6]>2.0:
# {"feature": "Education", "instances": 21, "metric_value": 0.2721, "depth": 8}
if obj[3]>1:
# {"feature": "Occupation", "instances": 14, "metric_value": 0.3429, "depth": 9}
if obj[4]<=7:
# {"feature": "Direction_same", "instances": 10, "metric_value": 0.48, "depth": 10}
if obj[8]<=1:
return 'True'
else: return 'True'
elif obj[4]>7:
return 'True'
else: return 'True'
elif obj[3]<=1:
return 'True'
else: return 'True'
else: return 'True'
elif obj[7]>1.0:
# {"feature": "Education", "instances": 68, "metric_value": 0.2603, "depth": 7}
if obj[3]>1:
# {"feature": "Occupation", "instances": 40, "metric_value": 0.1636, "depth": 8}
if obj[4]>8:
# {"feature": "Coffeehouse", "instances": 22, "metric_value": 0.2773, "depth": 9}
if obj[6]<=3.0:
# {"feature": "Direction_same", "instances": 20, "metric_value": 0.255, "depth": 10}
if obj[8]<=1:
return 'True'
else: return 'True'
elif obj[6]>3.0:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[8]<=1:
return 'False'
else: return 'False'
else: return 'False'
elif obj[4]<=8:
return 'True'
else: return 'True'
elif obj[3]<=1:
# {"feature": "Occupation", "instances": 28, "metric_value": 0.2313, "depth": 8}
if obj[4]>5:
# {"feature": "Coffeehouse", "instances": 21, "metric_value": 0.1633, "depth": 9}
if obj[6]<=2.0:
# {"feature": "Direction_same", "instances": 14, "metric_value": 0.2449, "depth": 10}
if obj[8]<=1:
return 'True'
else: return 'True'
elif obj[6]>2.0:
return 'True'
else: return 'True'
elif obj[4]<=5:
# {"feature": "Coffeehouse", "instances": 7, "metric_value": 0.2381, "depth": 9}
if obj[6]<=2.0:
# {"feature": "Direction_same", "instances": 6, "metric_value": 0.2778, "depth": 10}
if obj[8]<=1:
return 'False'
else: return 'False'
elif obj[6]>2.0:
return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
else: return 'True'
elif obj[9]>1:
# {"feature": "Restaurant20to50", "instances": 163, "metric_value": 0.4737, "depth": 6}
if obj[7]>0.0:
# {"feature": "Education", "instances": 140, "metric_value": 0.4638, "depth": 7}
if obj[3]<=3:
# {"feature": "Direction_same", "instances": 130, "metric_value": 0.475, "depth": 8}
if obj[8]<=0:
# {"feature": "Occupation", "instances": 128, "metric_value": 0.4767, "depth": 9}
if obj[4]<=13.844008971972023:
# {"feature": "Coffeehouse", "instances": 107, "metric_value": 0.4879, "depth": 10}
if obj[6]>1.0:
return 'True'
elif obj[6]<=1.0:
return 'True'
else: return 'True'
elif obj[4]>13.844008971972023:
# {"feature": "Coffeehouse", "instances": 21, "metric_value": 0.3878, "depth": 10}
if obj[6]>0.0:
return 'True'
elif obj[6]<=0.0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[3]>3:
# {"feature": "Coffeehouse", "instances": 10, "metric_value": 0.1333, "depth": 8}
if obj[6]>0.0:
return 'True'
elif obj[6]<=0.0:
# {"feature": "Occupation", "instances": 3, "metric_value": 0.3333, "depth": 9}
if obj[4]<=2:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[4]>2:
return 'True'
else: return 'True'
else: return 'True'
else: return 'True'
elif obj[7]<=0.0:
# {"feature": "Coffeehouse", "instances": 23, "metric_value": 0.4099, "depth": 7}
if obj[6]<=2.0:
# {"feature": "Education", "instances": 16, "metric_value": 0.3254, "depth": 8}
if obj[3]<=2:
# {"feature": "Occupation", "instances": 9, "metric_value": 0.1111, "depth": 9}
if obj[4]>3:
return 'False'
elif obj[4]<=3:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[3]>2:
# {"feature": "Occupation", "instances": 7, "metric_value": 0.3429, "depth": 9}
if obj[4]<=12:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.48, "depth": 10}
if obj[8]<=0:
return 'True'
else: return 'True'
elif obj[4]>12:
return 'False'
else: return 'False'
else: return 'False'
elif obj[6]>2.0:
# {"feature": "Occupation", "instances": 7, "metric_value": 0.2143, "depth": 8}
if obj[4]<=6:
# {"feature": "Education", "instances": 4, "metric_value": 0.0, "depth": 9}
if obj[3]>0:
return 'False'
elif obj[3]<=0:
return 'True'
else: return 'True'
elif obj[4]>6:
return 'True'
else: return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
elif obj[0]>1:
# {"feature": "Restaurant20to50", "instances": 30, "metric_value": 0.35, "depth": 5}
if obj[7]>0.0:
# {"feature": "Occupation", "instances": 28, "metric_value": 0.3333, "depth": 6}
if obj[4]<=14:
# {"feature": "Education", "instances": 21, "metric_value": 0.375, "depth": 7}
if obj[3]<=2:
# {"feature": "Coffeehouse", "instances": 16, "metric_value": 0.4219, "depth": 8}
if obj[6]>1.0:
# {"feature": "Distance", "instances": 8, "metric_value": 0.375, "depth": 9}
if obj[9]>1:
# {"feature": "Direction_same", "instances": 6, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[9]<=1:
return 'True'
else: return 'True'
elif obj[6]<=1.0:
# {"feature": "Distance", "instances": 8, "metric_value": 0.3, "depth": 9}
if obj[9]>1:
# {"feature": "Direction_same", "instances": 5, "metric_value": 0.4, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[9]<=1:
return 'False'
else: return 'False'
else: return 'False'
elif obj[3]>2:
return 'False'
else: return 'False'
elif obj[4]>14:
return 'False'
else: return 'False'
elif obj[7]<=0.0:
return 'True'
else: return 'True'
else: return 'False'
else: return 'True'
elif obj[5]<=0.0:
# {"feature": "Restaurant20to50", "instances": 962, "metric_value": 0.4124, "depth": 3}
if obj[7]<=2.0:
# {"feature": "Distance", "instances": 914, "metric_value": 0.4034, "depth": 4}
if obj[9]<=2:
# {"feature": "Coffeehouse", "instances": 746, "metric_value": 0.423, "depth": 5}
if obj[6]<=3.0:
# {"feature": "Occupation", "instances": 703, "metric_value": 0.4307, "depth": 6}
if obj[4]>1.302144119170582:
# {"feature": "Education", "instances": 535, "metric_value": 0.4453, "depth": 7}
if obj[3]>0:
# {"feature": "Time", "instances": 366, "metric_value": 0.4225, "depth": 8}
if obj[1]>0:
# {"feature": "Direction_same", "instances": 243, "metric_value": 0.3874, "depth": 9}
if obj[8]<=0:
# {"feature": "Passanger", "instances": 219, "metric_value": 0.4188, "depth": 10}
if obj[0]<=2:
return 'False'
elif obj[0]>2:
return 'False'
else: return 'False'
elif obj[8]>0:
# {"feature": "Passanger", "instances": 24, "metric_value": 0.0417, "depth": 10}
if obj[0]<=1:
return 'False'
elif obj[0]>1:
return 'False'
else: return 'False'
else: return 'False'
elif obj[1]<=0:
# {"feature": "Passanger", "instances": 123, "metric_value": 0.4629, "depth": 9}
if obj[0]>0:
# {"feature": "Direction_same", "instances": 113, "metric_value": 0.4733, "depth": 10}
if obj[8]>0:
return 'False'
elif obj[8]<=0:
return 'False'
else: return 'False'
elif obj[0]<=0:
# {"feature": "Direction_same", "instances": 10, "metric_value": 0.32, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[3]<=0:
# {"feature": "Passanger", "instances": 169, "metric_value": 0.4772, "depth": 8}
if obj[0]<=1:
# {"feature": "Time", "instances": 129, "metric_value": 0.4896, "depth": 9}
if obj[1]>0:
# {"feature": "Direction_same", "instances": 87, "metric_value": 0.4833, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[1]<=0:
# {"feature": "Direction_same", "instances": 42, "metric_value": 0.4818, "depth": 10}
if obj[8]<=0:
return 'True'
elif obj[8]>0:
return 'False'
else: return 'False'
else: return 'True'
elif obj[0]>1:
# {"feature": "Time", "instances": 40, "metric_value": 0.408, "depth": 9}
if obj[1]<=3:
# {"feature": "Direction_same", "instances": 25, "metric_value": 0.4608, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[1]>3:
# {"feature": "Direction_same", "instances": 15, "metric_value": 0.32, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[4]<=1.302144119170582:
# {"feature": "Education", "instances": 168, "metric_value": 0.3565, "depth": 7}
if obj[3]<=4:
# {"feature": "Time", "instances": 164, "metric_value": 0.3511, "depth": 8}
if obj[1]<=2:
# {"feature": "Passanger", "instances": 86, "metric_value": 0.3908, "depth": 9}
if obj[0]<=1:
# {"feature": "Direction_same", "instances": 64, "metric_value": 0.3584, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
elif obj[0]>1:
# {"feature": "Direction_same", "instances": 22, "metric_value": 0.4502, "depth": 10}
if obj[8]<=0:
return 'False'
elif obj[8]>0:
return 'True'
else: return 'True'
else: return 'False'
elif obj[1]>2:
# {"feature": "Direction_same", "instances": 78, "metric_value": 0.2774, "depth": 9}
if obj[8]<=0:
# {"feature": "Passanger", "instances": 72, "metric_value": 0.2519, "depth": 10}
if obj[0]>0:
return 'False'
elif obj[0]<=0:
return 'False'
else: return 'False'
elif obj[8]>0:
# {"feature": "Passanger", "instances": 6, "metric_value": 0.0, "depth": 10}
if obj[0]<=1:
return 'False'
elif obj[0]>1:
return 'True'
else: return 'True'
else: return 'False'
else: return 'False'
elif obj[3]>4:
# {"feature": "Passanger", "instances": 4, "metric_value": 0.25, "depth": 8}
if obj[0]<=1:
return 'True'
elif obj[0]>1:
# {"feature": "Time", "instances": 2, "metric_value": 0.0, "depth": 9}
if obj[1]<=0:
return 'True'
elif obj[1]>0:
return 'False'
else: return 'False'
else: return 'True'
else: return 'True'
else: return 'False'
elif obj[6]>3.0:
# {"feature": "Education", "instances": 43, "metric_value": 0.2063, "depth": 6}
if obj[3]<=0:
# {"feature": "Occupation", "instances": 23, "metric_value": 0.3327, "depth": 7}
if obj[4]<=7:
# {"feature": "Direction_same", "instances": 12, "metric_value": 0.35, "depth": 8}
if obj[8]<=0:
# {"feature": "Time", "instances": 10, "metric_value": 0.3429, "depth": 9}
if obj[1]<=3:
# {"feature": "Passanger", "instances": 7, "metric_value": 0.4762, "depth": 10}
if obj[0]>1:
return 'False'
elif obj[0]<=1:
return 'False'
else: return 'False'
elif obj[1]>3:
return 'False'
else: return 'False'
elif obj[8]>0:
return 'True'
else: return 'True'
elif obj[4]>7:
# {"feature": "Passanger", "instances": 11, "metric_value": 0.0909, "depth": 8}
if obj[0]>0:
return 'False'
elif obj[0]<=0:
# {"feature": "Time", "instances": 2, "metric_value": 0.0, "depth": 9}
if obj[1]>0:
return 'True'
elif obj[1]<=0:
return 'False'
else: return 'False'
else: return 'True'
else: return 'False'
elif obj[3]>0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[9]>2:
# {"feature": "Passanger", "instances": 168, "metric_value": 0.2854, "depth": 5}
if obj[0]<=1:
# {"feature": "Occupation", "instances": 164, "metric_value": 0.2716, "depth": 6}
if obj[4]>0:
# {"feature": "Education", "instances": 159, "metric_value": 0.256, "depth": 7}
if obj[3]<=4:
# {"feature": "Coffeehouse", "instances": 158, "metric_value": 0.2545, "depth": 8}
if obj[6]>0.0:
# {"feature": "Time", "instances": 111, "metric_value": 0.2183, "depth": 9}
if obj[1]<=1:
# {"feature": "Direction_same", "instances": 104, "metric_value": 0.233, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[1]>1:
return 'False'
else: return 'False'
elif obj[6]<=0.0:
# {"feature": "Time", "instances": 47, "metric_value": 0.3288, "depth": 9}
if obj[1]<=1:
# {"feature": "Direction_same", "instances": 44, "metric_value": 0.3512, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
elif obj[1]>1:
return 'False'
else: return 'False'
else: return 'False'
elif obj[3]>4:
return 'True'
else: return 'True'
elif obj[4]<=0:
# {"feature": "Coffeehouse", "instances": 5, "metric_value": 0.4, "depth": 7}
if obj[6]<=2.0:
# {"feature": "Time", "instances": 4, "metric_value": 0.5, "depth": 8}
if obj[1]<=1:
# {"feature": "Education", "instances": 4, "metric_value": 0.5, "depth": 9}
if obj[3]<=0:
# {"feature": "Direction_same", "instances": 4, "metric_value": 0.5, "depth": 10}
if obj[8]<=0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'False'
elif obj[6]>2.0:
return 'True'
else: return 'True'
else: return 'True'
elif obj[0]>1:
# {"feature": "Coffeehouse", "instances": 4, "metric_value": 0.0, "depth": 6}
if obj[6]>-1.0:
return 'True'
elif obj[6]<=-1.0:
return 'False'
else: return 'False'
else: return 'True'
else: return 'False'
elif obj[7]>2.0:
# {"feature": "Education", "instances": 48, "metric_value": 0.3929, "depth": 4}
if obj[3]<=0:
# {"feature": "Occupation", "instances": 25, "metric_value": 0.2743, "depth": 5}
if obj[4]<=4:
# {"feature": "Passanger", "instances": 14, "metric_value": 0.4286, "depth": 6}
if obj[0]<=2:
# {"feature": "Time", "instances": 12, "metric_value": 0.4375, "depth": 7}
if obj[1]<=2:
# {"feature": "Direction_same", "instances": 8, "metric_value": 0.4375, "depth": 8}
if obj[8]>0:
# {"feature": "Coffeehouse", "instances": 4, "metric_value": 0.25, "depth": 9}
if obj[6]>2.0:
return 'True'
elif obj[6]<=2.0:
# {"feature": "Distance", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[9]<=2:
return 'False'
else: return 'False'
else: return 'False'
elif obj[8]<=0:
# {"feature": "Coffeehouse", "instances": 4, "metric_value": 0.3333, "depth": 9}
if obj[6]<=2.0:
# {"feature": "Distance", "instances": 3, "metric_value": 0.3333, "depth": 10}
if obj[9]>1:
return 'True'
elif obj[9]<=1:
return 'True'
else: return 'True'
elif obj[6]>2.0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[1]>2:
# {"feature": "Coffeehouse", "instances": 4, "metric_value": 0.25, "depth": 8}
if obj[6]<=2.0:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 9}
if obj[8]<=0:
# {"feature": "Distance", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[9]<=2:
return 'False'
else: return 'False'
else: return 'False'
elif obj[6]>2.0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[0]>2:
return 'True'
else: return 'True'
elif obj[4]>4:
return 'True'
else: return 'True'
elif obj[3]>0:
# {"feature": "Coffeehouse", "instances": 23, "metric_value": 0.3581, "depth": 5}
if obj[6]<=2.0:
# {"feature": "Time", "instances": 17, "metric_value": 0.4118, "depth": 6}
if obj[1]<=3:
# {"feature": "Passanger", "instances": 14, "metric_value": 0.4848, "depth": 7}
if obj[0]<=2:
# {"feature": "Occupation", "instances": 11, "metric_value": 0.4848, "depth": 8}
if obj[4]>5:
# {"feature": "Direction_same", "instances": 8, "metric_value": 0.5, "depth": 9}
if obj[8]<=0:
# {"feature": "Distance", "instances": 6, "metric_value": 0.5, "depth": 10}
if obj[9]>1:
return 'True'
elif obj[9]<=1:
return 'True'
else: return 'True'
elif obj[8]>0:
# {"feature": "Distance", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[9]<=2:
return 'False'
else: return 'False'
else: return 'False'
elif obj[4]<=5:
# {"feature": "Direction_same", "instances": 3, "metric_value": 0.3333, "depth": 9}
if obj[8]<=0:
# {"feature": "Distance", "instances": 2, "metric_value": 0.0, "depth": 10}
if obj[9]<=1:
return 'True'
elif obj[9]>1:
return 'False'
else: return 'False'
elif obj[8]>0:
return 'False'
else: return 'False'
else: return 'False'
elif obj[0]>2:
# {"feature": "Occupation", "instances": 3, "metric_value": 0.3333, "depth": 8}
if obj[4]>5:
# {"feature": "Direction_same", "instances": 2, "metric_value": 0.5, "depth": 9}
if obj[8]<=0:
# {"feature": "Distance", "instances": 2, "metric_value": 0.5, "depth": 10}
if obj[9]<=2:
return 'True'
else: return 'True'
else: return 'True'
elif obj[4]<=5:
return 'True'
else: return 'True'
else: return 'True'
elif obj[1]>3:
return 'False'
else: return 'False'
elif obj[6]>2.0:
return 'False'
else: return 'False'
else: return 'False'
else: return 'True'
else: return 'False'
else: return 'False'
| 39.753191 | 212 | 0.494446 | 10,505 | 84,078 | 3.899857 | 0.049595 | 0.125122 | 0.136497 | 0.122535 | 0.922696 | 0.842438 | 0.819737 | 0.724639 | 0.667912 | 0.61221 | 0 | 0.105832 | 0.315362 | 84,078 | 2,114 | 213 | 39.771996 | 0.605875 | 0.44222 | 0 | 0.962985 | 0 | 0 | 0.079259 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000607 | false | 0 | 0 | 0 | 0.21784 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
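The rule block above is machine-generated decision-tree code: each JSON comment records the split's feature, instance count, metric value, and depth, and `obj` is a flat, label-encoded feature vector addressed positionally (the comments suggest `obj[1]` ~ "Time", `obj[9]` ~ "Distance", and so on). A minimal self-contained sketch of that calling convention, with an invented two-split tree — the indices, encodings, and numbers below are illustrative, not the original dataset's:

```python
def find_decision(obj):
    # obj: label-encoded feature vector addressed by fixed positions,
    # e.g. obj[1] ~ "Time", obj[9] ~ "Distance" as in the generated rules above
    # {"feature": "Time", "instances": 100, "metric_value": 0.47, "depth": 1}
    if obj[1] <= 2:
        # {"feature": "Distance", "instances": 60, "metric_value": 0.3, "depth": 2}
        if obj[9] <= 1:
            return 'True'
        return 'False'
    return 'False'

row = [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]  # one encoded survey row
print(find_decision(row))  # 'True'
```

The generator emits a `return` on every branch, including redundant `else` arms, which is why the block above repeats `return 'False'` so often — every leaf must be covered.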
8872eed8fe157740b389e61010fddf9443825d2b | 1,134 | py | Python | nnunet/training/network_training/nnUNetTrainerV2_TransferLess.py | ZXLam/nnUNet | 0cf7c8a857c248d6be171e4945427b405f6ac258 | [
"Apache-2.0"
] | null | null | null | nnunet/training/network_training/nnUNetTrainerV2_TransferLess.py | ZXLam/nnUNet | 0cf7c8a857c248d6be171e4945427b405f6ac258 | [
"Apache-2.0"
] | null | null | null | nnunet/training/network_training/nnUNetTrainerV2_TransferLess.py | ZXLam/nnUNet | 0cf7c8a857c248d6be171e4945427b405f6ac258 | [
"Apache-2.0"
] | 1 | 2022-03-15T03:15:02.000Z | 2022-03-15T03:15:02.000Z | from nnunet.training.network_training.nnUNetTrainer import nnUNetTrainer
from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2
class nnUNetTrainerV2TransferLess(nnUNetTrainerV2):
def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None,
unpack_data=True, deterministic=True, fp16=False):
super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data,
deterministic, fp16)
self.max_num_epochs = 50
self.initial_lr = 1e-4
self.num_batches_per_epoch = 50
class nnUNetTrainerTransferLess(nnUNetTrainer):
def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None,
unpack_data=True, deterministic=True, fp16=False):
super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data,
deterministic, fp16)
self.max_num_epochs = 50
self.initial_lr = 1e-4
self.num_batches_per_epoch = 50 | 54 | 113 | 0.719577 | 134 | 1,134 | 5.716418 | 0.313433 | 0.046997 | 0.067885 | 0.099217 | 0.793734 | 0.707572 | 0.707572 | 0.707572 | 0.707572 | 0.707572 | 0 | 0.02649 | 0.201058 | 1,134 | 21 | 114 | 54 | 0.818985 | 0 | 0 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
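Both trainers in `nnUNetTrainerV2_TransferLess.py` above shorten fine-tuning the same way: call `super().__init__` with the unchanged arguments, then override three schedule hyperparameters. A stdlib-only sketch of that subclass-override pattern — `BaseTrainer` and its defaults are stand-ins for illustration, not nnU-Net's actual base class:

```python
class BaseTrainer:
    """Stand-in for nnUNetTrainer/nnUNetTrainerV2 with a long default schedule."""
    def __init__(self):
        self.max_num_epochs = 1000
        self.initial_lr = 1e-2
        self.num_batches_per_epoch = 250

class TransferLessTrainer(BaseTrainer):
    """Override the schedule for short fine-tuning, as the subclasses above do."""
    def __init__(self):
        super().__init__()
        self.max_num_epochs = 50         # far fewer epochs than a full run
        self.initial_lr = 1e-4           # smaller LR to preserve pretrained weights
        self.num_batches_per_epoch = 50  # lighter epochs

trainer = TransferLessTrainer()
print(trainer.max_num_epochs, trainer.initial_lr)  # 50 0.0001
```

Overriding after `super().__init__()` keeps the rest of the parent's setup intact, which is why the file needs no other code.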
88857f939de5dcf31149a4f73b41342f6634dd1b | 48 | py | Python | lib/west_tools/trajtree/__init__.py | ajoshpratt/westpa | 545a42a5ae4cfa77de0e125a38a5b1ec2b9ab218 | [
"MIT"
] | 1 | 2019-12-21T09:11:54.000Z | 2019-12-21T09:11:54.000Z | lib/west_tools/trajtree/__init__.py | ajoshpratt/westpa | 545a42a5ae4cfa77de0e125a38a5b1ec2b9ab218 | [
"MIT"
] | 1 | 2019-04-25T14:10:33.000Z | 2019-04-25T14:10:33.000Z | lib/west_tools/trajtree/__init__.py | ajoshpratt/westpa | 545a42a5ae4cfa77de0e125a38a5b1ec2b9ab218 | [
"MIT"
] | 1 | 2020-04-14T20:42:11.000Z | 2020-04-14T20:42:11.000Z | import trajtree
from trajtree import TrajTreeSet | 24 | 32 | 0.895833 | 6 | 48 | 7.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 48 | 2 | 32 | 24 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8897efa17268f45911b9e3024b5f78e0c4d3801e | 79 | py | Python | rmsectkf/core/modules/local/local_module.py | MartinR2295/rm-sec-toolkit | c3906dc97a8b778a29421efa982c3ab9d72873ff | [
"Apache-2.0"
] | null | null | null | rmsectkf/core/modules/local/local_module.py | MartinR2295/rm-sec-toolkit | c3906dc97a8b778a29421efa982c3ab9d72873ff | [
"Apache-2.0"
] | 15 | 2021-05-07T07:44:48.000Z | 2021-05-09T08:59:19.000Z | rmsectkf/core/modules/local/local_module.py | MartinR2295/rm-sec-toolkit | c3906dc97a8b778a29421efa982c3ab9d72873ff | [
"Apache-2.0"
] | null | null | null | from ..base_module import BaseModule
class LocalModule(BaseModule):
pass
| 13.166667 | 36 | 0.772152 | 9 | 79 | 6.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 79 | 5 | 37 | 15.8 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
88ab2f9e4b65b20b973f678eed6a4f7087f7b862 | 183 | py | Python | test.py | pangbo13/BiliSpider | 7f2b14b620f5abafed462b3edbe2437d59d0e17b | [
"MIT"
] | 1 | 2019-07-30T14:17:17.000Z | 2019-07-30T14:17:17.000Z | test.py | pangbo13/BiliSpider | 7f2b14b620f5abafed462b3edbe2437d59d0e17b | [
"MIT"
] | null | null | null | test.py | pangbo13/BiliSpider | 7f2b14b620f5abafed462b3edbe2437d59d0e17b | [
"MIT"
] | null | null | null | # from bilispider.bilispider import *
# s = spider(54,{'tid':(54,),'debug':True,'output':2})
# s.auto_run()
from bilispider.gui import gui_config
print(gui_config({'tid':30}).get())
| 26.142857 | 54 | 0.68306 | 28 | 183 | 4.357143 | 0.642857 | 0.229508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042169 | 0.092896 | 183 | 6 | 55 | 30.5 | 0.692771 | 0.551913 | 0 | 0 | 0 | 0 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
ee42990447a820842892b6708b0af588d3800d82 | 46 | py | Python | scripts/mell/layers/gpt2.py | zhenhua32/EasyTransfer | 07940087b80b7dc001fb688c81d9420a2055a2bd | [
"Apache-2.0"
] | 806 | 2020-09-02T03:05:24.000Z | 2022-03-26T03:45:23.000Z | scripts/mell/layers/gpt2.py | zhenhua32/EasyTransfer | 07940087b80b7dc001fb688c81d9420a2055a2bd | [
"Apache-2.0"
] | 48 | 2020-09-16T12:53:32.000Z | 2022-03-09T09:34:44.000Z | scripts/mell/layers/gpt2.py | zhenhua32/EasyTransfer | 07940087b80b7dc001fb688c81d9420a2055a2bd | [
"Apache-2.0"
] | 151 | 2020-09-16T12:31:06.000Z | 2022-03-24T08:51:47.000Z | from transformers import GPT2Model, GPT2Config | 46 | 46 | 0.891304 | 5 | 46 | 8.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.086957 | 46 | 1 | 46 | 46 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ee4f7a577586f8c5f04a68206b967f7e49c3ffe3 | 83 | py | Python | src/advance/expression_evaluator/compiler/node_traversal/__init__.py | chuanhao01/Python_Expression_Evaluator | 016f7c0a593f47a2139df5949b26d7f93c23fffe | [
"MIT"
] | null | null | null | src/advance/expression_evaluator/compiler/node_traversal/__init__.py | chuanhao01/Python_Expression_Evaluator | 016f7c0a593f47a2139df5949b26d7f93c23fffe | [
"MIT"
] | null | null | null | src/advance/expression_evaluator/compiler/node_traversal/__init__.py | chuanhao01/Python_Expression_Evaluator | 016f7c0a593f47a2139df5949b26d7f93c23fffe | [
"MIT"
] | null | null | null | from .pre_order import PreOrderTraversal
from .post_order import PostOrderTraversal | 41.5 | 42 | 0.891566 | 10 | 83 | 7.2 | 0.7 | 0.305556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084337 | 83 | 2 | 42 | 41.5 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
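This `__init__.py` re-exports pre- and post-order traversals for an expression evaluator. A minimal self-contained sketch of the two orders over an expression tree — the `Node` shape and function names here are assumptions for illustration, not the package's actual API:

```python
class Node:
    """A tiny expression-tree node: an operator or literal plus children."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def pre_order(node):
    # visit the node itself first, then each child subtree left to right
    yield node.value
    for child in node.children:
        yield from pre_order(child)

def post_order(node):
    # visit every child subtree first and the node last -- the order an
    # evaluator needs, since operands must exist before their operator runs
    for child in node.children:
        yield from post_order(child)
    yield node.value

# (1 + 2) * 3 as an expression tree
tree = Node('*', [Node('+', [Node(1), Node(2)]), Node(3)])
print(list(pre_order(tree)))   # ['*', '+', 1, 2, 3]
print(list(post_order(tree)))  # [1, 2, '+', 3, '*']
```

Post-order is the natural choice for evaluation (it mirrors reverse Polish notation), while pre-order is handy for printing or serializing the tree.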
c9cbc3d5ffc4a66cf02c11146fed4de8cd985e1e | 47 | py | Python | reader.py | JKLako/welltopreader-1 | f78e0f76363cef38d8bf4b4c793e5a824119c0ec | [
"MIT"
] | null | null | null | reader.py | JKLako/welltopreader-1 | f78e0f76363cef38d8bf4b4c793e5a824119c0ec | [
"MIT"
] | null | null | null | reader.py | JKLako/welltopreader-1 | f78e0f76363cef38d8bf4b4c793e5a824119c0ec | [
"MIT"
] | null | null | null | import numpy as np
##Testing to commit changes | 15.666667 | 27 | 0.787234 | 8 | 47 | 4.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 3 | 27 | 15.666667 | 0.948718 | 0.531915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4e72be3bff6b9b73bdbe26671d49a040142593ed | 377 | py | Python | iCount/mapping/__init__.py | genialis/iCount | 80dba0f7292a364c62843d71e76c1b22e6268e14 | [
"MIT"
] | null | null | null | iCount/mapping/__init__.py | genialis/iCount | 80dba0f7292a364c62843d71e76c1b22e6268e14 | [
"MIT"
] | 1 | 2021-09-30T12:55:37.000Z | 2021-09-30T12:55:37.000Z | iCount/mapping/__init__.py | ulelab/iCount | b9dc1b21b80e4dae77b3ac33734514091fbe3151 | [
"MIT"
] | 4 | 2021-03-23T12:38:55.000Z | 2021-05-14T10:10:00.000Z | """.. Line to protect from pydocstyle D205, D400.
Mapping
=======
.. automodule:: iCount.mapping.indexstar
:members:
.. automodule:: iCount.mapping.mapstar
:members:
.. automodule:: iCount.mapping.filters
:members:
.. automodule:: iCount.mapping.xlsites
:members:
"""
from . import filters
from . import mapstar
from . import indexstar
from . import xlsites
| 15.708333 | 49 | 0.69496 | 40 | 377 | 6.55 | 0.4 | 0.244275 | 0.351145 | 0.343511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019048 | 0.164456 | 377 | 23 | 50 | 16.391304 | 0.812698 | 0.734748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ea4f2ba46af99b658edf74a1e9557ee0d84297f | 89,165 | py | Python | cottonformation/res/iotwireless.py | gitter-badger/cottonformation-project | 354f1dce7ea106e209af2d5d818b6033a27c193c | [
"BSD-2-Clause"
] | null | null | null | cottonformation/res/iotwireless.py | gitter-badger/cottonformation-project | 354f1dce7ea106e209af2d5d818b6033a27c193c | [
"BSD-2-Clause"
] | null | null | null | cottonformation/res/iotwireless.py | gitter-badger/cottonformation-project | 354f1dce7ea106e209af2d5d818b6033a27c193c | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
This module declares ``AWS::IoTWireless`` resources and property types for use in CloudFormation templates.
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
#--- Property declaration ---
@attr.s
class WirelessDeviceSessionKeysAbpV11(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessDevice.SessionKeysAbpV11"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html
Property Document:
- ``rp_AppSKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-appskey
- ``rp_FNwkSIntKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-fnwksintkey
- ``rp_NwkSEncKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-nwksenckey
- ``rp_SNwkSIntKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-snwksintkey
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice.SessionKeysAbpV11"
rp_AppSKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AppSKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-appskey"""
rp_FNwkSIntKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "FNwkSIntKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-fnwksintkey"""
rp_NwkSEncKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "NwkSEncKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-nwksenckey"""
rp_SNwkSIntKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "SNwkSIntKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv11.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv11-snwksintkey"""
@attr.s
class DeviceProfileLoRaWANDeviceProfile(Property):
"""
AWS Object Type = "AWS::IoTWireless::DeviceProfile.LoRaWANDeviceProfile"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html
Property Document:
- ``p_ClassBTimeout``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-classbtimeout
- ``p_ClassCTimeout``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-classctimeout
- ``p_MacVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-macversion
- ``p_MaxDutyCycle``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-maxdutycycle
- ``p_MaxEirp``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-maxeirp
- ``p_PingSlotDr``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-pingslotdr
- ``p_PingSlotFreq``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-pingslotfreq
- ``p_PingSlotPeriod``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-pingslotperiod
- ``p_RegParamsRevision``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-regparamsrevision
- ``p_RfRegion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-rfregion
- ``p_Supports32BitFCnt``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supports32bitfcnt
- ``p_SupportsClassB``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supportsclassb
- ``p_SupportsClassC``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supportsclassc
- ``p_SupportsJoin``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supportsjoin
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::DeviceProfile.LoRaWANDeviceProfile"
p_ClassBTimeout: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "ClassBTimeout"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-classbtimeout"""
p_ClassCTimeout: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "ClassCTimeout"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-classctimeout"""
p_MacVersion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "MacVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-macversion"""
p_MaxDutyCycle: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "MaxDutyCycle"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-maxdutycycle"""
p_MaxEirp: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "MaxEirp"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-maxeirp"""
p_PingSlotDr: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "PingSlotDr"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-pingslotdr"""
p_PingSlotFreq: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "PingSlotFreq"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-pingslotfreq"""
p_PingSlotPeriod: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "PingSlotPeriod"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-pingslotperiod"""
p_RegParamsRevision: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RegParamsRevision"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-regparamsrevision"""
p_RfRegion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RfRegion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-rfregion"""
p_Supports32BitFCnt: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "Supports32BitFCnt"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supports32bitfcnt"""
p_SupportsClassB: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "SupportsClassB"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supportsclassb"""
p_SupportsClassC: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "SupportsClassC"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supportsclassc"""
p_SupportsJoin: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "SupportsJoin"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-deviceprofile-lorawandeviceprofile.html#cfn-iotwireless-deviceprofile-lorawandeviceprofile-supportsjoin"""
@attr.s
class TaskDefinitionLoRaWANGatewayVersion(Property):
"""
AWS Object Type = "AWS::IoTWireless::TaskDefinition.LoRaWANGatewayVersion"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawangatewayversion.html
Property Document:
- ``p_Model``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawangatewayversion.html#cfn-iotwireless-taskdefinition-lorawangatewayversion-model
- ``p_PackageVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawangatewayversion.html#cfn-iotwireless-taskdefinition-lorawangatewayversion-packageversion
- ``p_Station``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawangatewayversion.html#cfn-iotwireless-taskdefinition-lorawangatewayversion-station
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::TaskDefinition.LoRaWANGatewayVersion"
p_Model: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Model"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawangatewayversion.html#cfn-iotwireless-taskdefinition-lorawangatewayversion-model"""
p_PackageVersion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PackageVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawangatewayversion.html#cfn-iotwireless-taskdefinition-lorawangatewayversion-packageversion"""
p_Station: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Station"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawangatewayversion.html#cfn-iotwireless-taskdefinition-lorawangatewayversion-station"""
@attr.s
class PartnerAccountSidewalkAccountInfo(Property):
"""
AWS Object Type = "AWS::IoTWireless::PartnerAccount.SidewalkAccountInfo"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-partneraccount-sidewalkaccountinfo.html
Property Document:
- ``rp_AppServerPrivateKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-partneraccount-sidewalkaccountinfo.html#cfn-iotwireless-partneraccount-sidewalkaccountinfo-appserverprivatekey
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::PartnerAccount.SidewalkAccountInfo"
rp_AppServerPrivateKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AppServerPrivateKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-partneraccount-sidewalkaccountinfo.html#cfn-iotwireless-partneraccount-sidewalkaccountinfo-appserverprivatekey"""
@attr.s
class WirelessGatewayLoRaWANGateway(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessGateway.LoRaWANGateway"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessgateway-lorawangateway.html
Property Document:
- ``rp_GatewayEui``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessgateway-lorawangateway.html#cfn-iotwireless-wirelessgateway-lorawangateway-gatewayeui
- ``rp_RfRegion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessgateway-lorawangateway.html#cfn-iotwireless-wirelessgateway-lorawangateway-rfregion
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessGateway.LoRaWANGateway"
rp_GatewayEui: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "GatewayEui"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessgateway-lorawangateway.html#cfn-iotwireless-wirelessgateway-lorawangateway-gatewayeui"""
rp_RfRegion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "RfRegion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessgateway-lorawangateway.html#cfn-iotwireless-wirelessgateway-lorawangateway-rfregion"""
@attr.s
class TaskDefinitionLoRaWANUpdateGatewayTaskCreate(Property):
"""
AWS Object Type = "AWS::IoTWireless::TaskDefinition.LoRaWANUpdateGatewayTaskCreate"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html
Property Document:
- ``p_CurrentVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-currentversion
- ``p_SigKeyCrc``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-sigkeycrc
- ``p_UpdateSignature``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-updatesignature
- ``p_UpdateVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-updateversion
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::TaskDefinition.LoRaWANUpdateGatewayTaskCreate"
p_CurrentVersion: typing.Union['TaskDefinitionLoRaWANGatewayVersion', dict] = attr.ib(
default=None,
converter=TaskDefinitionLoRaWANGatewayVersion.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(TaskDefinitionLoRaWANGatewayVersion)),
metadata={AttrMeta.PROPERTY_NAME: "CurrentVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-currentversion"""
p_SigKeyCrc: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "SigKeyCrc"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-sigkeycrc"""
p_UpdateSignature: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "UpdateSignature"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-updatesignature"""
p_UpdateVersion: typing.Union['TaskDefinitionLoRaWANGatewayVersion', dict] = attr.ib(
default=None,
converter=TaskDefinitionLoRaWANGatewayVersion.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(TaskDefinitionLoRaWANGatewayVersion)),
metadata={AttrMeta.PROPERTY_NAME: "UpdateVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskcreate-updateversion"""
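The classes in this module share one attrs idiom: a nested property field accepts either an instance or a plain `dict`, a `from_dict` converter normalizes the dict before an `instance_of` validator runs, and wrapping the validator in `optional(...)` lets the field stay `None`. A stdlib-only sketch of that idiom, using `dataclasses` in place of attrs (the class and field names here are simplified stand-ins, not the library's API):

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class GatewayVersion:
    """Stand-in for a nested property type such as TaskDefinitionLoRaWANGatewayVersion."""
    PackageVersion: Optional[str] = None

    @classmethod
    def from_dict(cls, value: Union["GatewayVersion", dict, None]):
        # Converter: pass through None and instances, build an instance from a dict.
        if value is None or isinstance(value, cls):
            return value
        if isinstance(value, dict):
            return cls(**value)
        raise TypeError(f"cannot convert {type(value).__name__}")

@dataclass
class UpdateTaskCreate:
    """Stand-in for a class like TaskDefinitionLoRaWANUpdateGatewayTaskCreate."""
    CurrentVersion: Optional[GatewayVersion] = None

    def __post_init__(self):
        # Emulates converter= followed by an optional instance_of validator.
        self.CurrentVersion = GatewayVersion.from_dict(self.CurrentVersion)
        if self.CurrentVersion is not None and not isinstance(self.CurrentVersion, GatewayVersion):
            raise TypeError("CurrentVersion must be a GatewayVersion")

# A caller may pass a raw dict; the converter normalizes it to an instance.
task = UpdateTaskCreate(CurrentVersion={"PackageVersion": "1.0.0"})
```

This is why `p_CurrentVersion` above is typed `typing.Union['TaskDefinitionLoRaWANGatewayVersion', dict]`: the dict form is accepted at the boundary and converted before validation.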
@attr.s
class WirelessDeviceOtaaV11(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessDevice.OtaaV11"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav11.html
Property Document:
- ``rp_AppKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav11.html#cfn-iotwireless-wirelessdevice-otaav11-appkey
- ``rp_JoinEui``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav11.html#cfn-iotwireless-wirelessdevice-otaav11-joineui
- ``rp_NwkKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav11.html#cfn-iotwireless-wirelessdevice-otaav11-nwkkey
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice.OtaaV11"
rp_AppKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AppKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav11.html#cfn-iotwireless-wirelessdevice-otaav11-appkey"""
rp_JoinEui: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "JoinEui"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav11.html#cfn-iotwireless-wirelessdevice-otaav11-joineui"""
rp_NwkKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "NwkKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav11.html#cfn-iotwireless-wirelessdevice-otaav11-nwkkey"""
@attr.s
class WirelessDeviceSessionKeysAbpV10x(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessDevice.SessionKeysAbpV10x"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv10x.html
Property Document:
- ``rp_AppSKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv10x.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv10x-appskey
- ``rp_NwkSKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv10x.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv10x-nwkskey
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice.SessionKeysAbpV10x"
rp_AppSKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AppSKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv10x.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv10x-appskey"""
rp_NwkSKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "NwkSKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-sessionkeysabpv10x.html#cfn-iotwireless-wirelessdevice-sessionkeysabpv10x-nwkskey"""
@attr.s
class ServiceProfileLoRaWANServiceProfile(Property):
"""
AWS Object Type = "AWS::IoTWireless::ServiceProfile.LoRaWANServiceProfile"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html
Property Document:
- ``p_AddGwMetadata``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-addgwmetadata
- ``p_ChannelMask``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-channelmask
- ``p_DevStatusReqFreq``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-devstatusreqfreq
- ``p_DlBucketSize``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-dlbucketsize
- ``p_DlRate``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-dlrate
- ``p_DlRatePolicy``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-dlratepolicy
- ``p_DrMax``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-drmax
- ``p_DrMin``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-drmin
- ``p_HrAllowed``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-hrallowed
- ``p_MinGwDiversity``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-mingwdiversity
- ``p_NwkGeoLoc``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-nwkgeoloc
- ``p_PrAllowed``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-prallowed
- ``p_RaAllowed``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-raallowed
- ``p_ReportDevStatusBattery``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-reportdevstatusbattery
- ``p_ReportDevStatusMargin``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-reportdevstatusmargin
- ``p_TargetPer``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-targetper
- ``p_UlBucketSize``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-ulbucketsize
- ``p_UlRate``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-ulrate
- ``p_UlRatePolicy``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-ulratepolicy
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::ServiceProfile.LoRaWANServiceProfile"
p_AddGwMetadata: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "AddGwMetadata"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-addgwmetadata"""
p_ChannelMask: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ChannelMask"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-channelmask"""
p_DevStatusReqFreq: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "DevStatusReqFreq"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-devstatusreqfreq"""
p_DlBucketSize: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "DlBucketSize"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-dlbucketsize"""
p_DlRate: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "DlRate"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-dlrate"""
p_DlRatePolicy: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DlRatePolicy"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-dlratepolicy"""
p_DrMax: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "DrMax"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-drmax"""
p_DrMin: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "DrMin"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-drmin"""
p_HrAllowed: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "HrAllowed"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-hrallowed"""
p_MinGwDiversity: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "MinGwDiversity"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-mingwdiversity"""
p_NwkGeoLoc: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "NwkGeoLoc"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-nwkgeoloc"""
p_PrAllowed: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "PrAllowed"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-prallowed"""
p_RaAllowed: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "RaAllowed"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-raallowed"""
p_ReportDevStatusBattery: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "ReportDevStatusBattery"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-reportdevstatusbattery"""
p_ReportDevStatusMargin: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "ReportDevStatusMargin"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-reportdevstatusmargin"""
p_TargetPer: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "TargetPer"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-targetper"""
p_UlBucketSize: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "UlBucketSize"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-ulbucketsize"""
p_UlRate: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "UlRate"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-ulrate"""
p_UlRatePolicy: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "UlRatePolicy"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-serviceprofile-lorawanserviceprofile.html#cfn-iotwireless-serviceprofile-lorawanserviceprofile-ulratepolicy"""
@attr.s
class WirelessDeviceOtaaV10x(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessDevice.OtaaV10x"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav10x.html
Property Document:
- ``rp_AppEui``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav10x.html#cfn-iotwireless-wirelessdevice-otaav10x-appeui
- ``rp_AppKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav10x.html#cfn-iotwireless-wirelessdevice-otaav10x-appkey
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice.OtaaV10x"
rp_AppEui: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AppEui"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav10x.html#cfn-iotwireless-wirelessdevice-otaav10x-appeui"""
rp_AppKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "AppKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-otaav10x.html#cfn-iotwireless-wirelessdevice-otaav10x-appkey"""
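The attribute prefixes encode requiredness: `rp_` fields (like `rp_AppEui` and `rp_AppKey` above) use a bare `instance_of` validator, while `p_` fields wrap it in `attr.validators.optional`, so `None` passes. A minimal sketch of that difference using plain callables (the helper names are illustrative, not the attrs API):

```python
def instance_of(tp):
    """Return a validator that rejects values not of type tp."""
    def check(value):
        if not isinstance(value, tp):
            raise TypeError(f"expected {tp.__name__}, got {type(value).__name__}")
        return value
    return check

def optional(validator):
    """Wrap a validator so that None is accepted, mirroring attr.validators.optional."""
    def check(value):
        return None if value is None else validator(value)
    return check

required_appeui = instance_of(str)            # behaves like an rp_ field's validator
optional_deveui = optional(instance_of(str))  # behaves like a p_ field's validator
```

So leaving an `rp_` field unset fails validation, while an unset `p_` field is simply omitted from the rendered template.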
@attr.s
class PartnerAccountSidewalkUpdateAccount(Property):
"""
AWS Object Type = "AWS::IoTWireless::PartnerAccount.SidewalkUpdateAccount"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-partneraccount-sidewalkupdateaccount.html
Property Document:
- ``p_AppServerPrivateKey``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-partneraccount-sidewalkupdateaccount.html#cfn-iotwireless-partneraccount-sidewalkupdateaccount-appserverprivatekey
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::PartnerAccount.SidewalkUpdateAccount"
p_AppServerPrivateKey: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "AppServerPrivateKey"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-partneraccount-sidewalkupdateaccount.html#cfn-iotwireless-partneraccount-sidewalkupdateaccount-appserverprivatekey"""
@attr.s
class WirelessDeviceAbpV11(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessDevice.AbpV11"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv11.html
Property Document:
- ``rp_DevAddr``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv11.html#cfn-iotwireless-wirelessdevice-abpv11-devaddr
- ``rp_SessionKeys``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv11.html#cfn-iotwireless-wirelessdevice-abpv11-sessionkeys
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice.AbpV11"
rp_DevAddr: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DevAddr"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv11.html#cfn-iotwireless-wirelessdevice-abpv11-devaddr"""
rp_SessionKeys: typing.Union['WirelessDeviceSessionKeysAbpV11', dict] = attr.ib(
default=None,
converter=WirelessDeviceSessionKeysAbpV11.from_dict,
validator=attr.validators.instance_of(WirelessDeviceSessionKeysAbpV11),
metadata={AttrMeta.PROPERTY_NAME: "SessionKeys"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv11.html#cfn-iotwireless-wirelessdevice-abpv11-sessionkeys"""
@attr.s
class TaskDefinitionUpdateWirelessGatewayTaskCreate(Property):
"""
AWS Object Type = "AWS::IoTWireless::TaskDefinition.UpdateWirelessGatewayTaskCreate"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate.html
Property Document:
- ``p_LoRaWAN``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate.html#cfn-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate-lorawan
- ``p_UpdateDataRole``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate.html#cfn-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate-updatedatarole
- ``p_UpdateDataSource``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate.html#cfn-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate-updatedatasource
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::TaskDefinition.UpdateWirelessGatewayTaskCreate"
p_LoRaWAN: typing.Union['TaskDefinitionLoRaWANUpdateGatewayTaskCreate', dict] = attr.ib(
default=None,
converter=TaskDefinitionLoRaWANUpdateGatewayTaskCreate.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(TaskDefinitionLoRaWANUpdateGatewayTaskCreate)),
metadata={AttrMeta.PROPERTY_NAME: "LoRaWAN"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate.html#cfn-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate-lorawan"""
p_UpdateDataRole: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "UpdateDataRole"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate.html#cfn-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate-updatedatarole"""
p_UpdateDataSource: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "UpdateDataSource"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate.html#cfn-iotwireless-taskdefinition-updatewirelessgatewaytaskcreate-updatedatasource"""
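Rendered to a template, `TaskDefinitionUpdateWirelessGatewayTaskCreate` nests the `TaskDefinitionLoRaWANUpdateGatewayTaskCreate` properties defined earlier in this module under its `LoRaWAN` key. A sketch of the resulting JSON fragment; the property names come from the `AttrMeta.PROPERTY_NAME` metadata in this module, while the ARN, S3 URI, version strings, and CRC are placeholder values, and the keys inside `CurrentVersion`/`UpdateVersion` assume the gateway-version type carries a `PackageVersion` field:

```python
import json

task_create = {
    "UpdateDataSource": "s3://example-bucket/fw/station-2.0.5.bin",  # placeholder URI
    "UpdateDataRole": "arn:aws:iam::111122223333:role/ExampleRole",  # placeholder ARN
    "LoRaWAN": {
        "CurrentVersion": {"PackageVersion": "1.0.0"},  # assumed gateway-version shape
        "UpdateVersion": {"PackageVersion": "2.0.5"},
        "SigKeyCrc": 123456,
        "UpdateSignature": "example-signature",
    },
}
fragment = json.dumps(task_create, indent=2)
```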
@attr.s
class TaskDefinitionLoRaWANUpdateGatewayTaskEntry(Property):
"""
AWS Object Type = "AWS::IoTWireless::TaskDefinition.LoRaWANUpdateGatewayTaskEntry"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskentry.html
Property Document:
- ``p_CurrentVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskentry.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskentry-currentversion
- ``p_UpdateVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskentry.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskentry-updateversion
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::TaskDefinition.LoRaWANUpdateGatewayTaskEntry"
p_CurrentVersion: typing.Union['TaskDefinitionLoRaWANGatewayVersion', dict] = attr.ib(
default=None,
converter=TaskDefinitionLoRaWANGatewayVersion.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(TaskDefinitionLoRaWANGatewayVersion)),
metadata={AttrMeta.PROPERTY_NAME: "CurrentVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskentry.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskentry-currentversion"""
p_UpdateVersion: typing.Union['TaskDefinitionLoRaWANGatewayVersion', dict] = attr.ib(
default=None,
converter=TaskDefinitionLoRaWANGatewayVersion.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(TaskDefinitionLoRaWANGatewayVersion)),
metadata={AttrMeta.PROPERTY_NAME: "UpdateVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-taskdefinition-lorawanupdategatewaytaskentry.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskentry-updateversion"""
@attr.s
class WirelessDeviceAbpV10x(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessDevice.AbpV10x"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv10x.html
Property Document:
- ``rp_DevAddr``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv10x.html#cfn-iotwireless-wirelessdevice-abpv10x-devaddr
- ``rp_SessionKeys``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv10x.html#cfn-iotwireless-wirelessdevice-abpv10x-sessionkeys
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice.AbpV10x"
rp_DevAddr: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DevAddr"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv10x.html#cfn-iotwireless-wirelessdevice-abpv10x-devaddr"""
rp_SessionKeys: typing.Union['WirelessDeviceSessionKeysAbpV10x', dict] = attr.ib(
default=None,
converter=WirelessDeviceSessionKeysAbpV10x.from_dict,
validator=attr.validators.instance_of(WirelessDeviceSessionKeysAbpV10x),
metadata={AttrMeta.PROPERTY_NAME: "SessionKeys"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-abpv10x.html#cfn-iotwireless-wirelessdevice-abpv10x-sessionkeys"""
@attr.s
class WirelessDeviceLoRaWANDevice(Property):
"""
AWS Object Type = "AWS::IoTWireless::WirelessDevice.LoRaWANDevice"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html
Property Document:
- ``p_AbpV10x``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-abpv10x
- ``p_AbpV11``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-abpv11
- ``p_DevEui``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-deveui
- ``p_DeviceProfileId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-deviceprofileid
- ``p_OtaaV10x``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-otaav10x
- ``p_OtaaV11``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-otaav11
- ``p_ServiceProfileId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-serviceprofileid
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice.LoRaWANDevice"
p_AbpV10x: typing.Union['WirelessDeviceAbpV10x', dict] = attr.ib(
default=None,
converter=WirelessDeviceAbpV10x.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(WirelessDeviceAbpV10x)),
metadata={AttrMeta.PROPERTY_NAME: "AbpV10x"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-abpv10x"""
p_AbpV11: typing.Union['WirelessDeviceAbpV11', dict] = attr.ib(
default=None,
converter=WirelessDeviceAbpV11.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(WirelessDeviceAbpV11)),
metadata={AttrMeta.PROPERTY_NAME: "AbpV11"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-abpv11"""
p_DevEui: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DevEui"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-deveui"""
p_DeviceProfileId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DeviceProfileId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-deviceprofileid"""
p_OtaaV10x: typing.Union['WirelessDeviceOtaaV10x', dict] = attr.ib(
default=None,
converter=WirelessDeviceOtaaV10x.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(WirelessDeviceOtaaV10x)),
metadata={AttrMeta.PROPERTY_NAME: "OtaaV10x"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-otaav10x"""
p_OtaaV11: typing.Union['WirelessDeviceOtaaV11', dict] = attr.ib(
default=None,
converter=WirelessDeviceOtaaV11.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(WirelessDeviceOtaaV11)),
metadata={AttrMeta.PROPERTY_NAME: "OtaaV11"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-otaav11"""
p_ServiceProfileId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ServiceProfileId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iotwireless-wirelessdevice-lorawandevice.html#cfn-iotwireless-wirelessdevice-lorawandevice-serviceprofileid"""
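Each `attr.ib` above carries its CloudFormation-facing name in `metadata={AttrMeta.PROPERTY_NAME: ...}`, which lets a serializer map Python attribute names (`p_DevEui`) to template keys (`DevEui`) and drop unset optionals. A stdlib-only sketch of that metadata-driven serialization for `WirelessDeviceLoRaWANDevice` (the sample EUI and key values are placeholders, not real credentials):

```python
import json

# Each entry maps an attrs attribute name to its CloudFormation property name
# (taken from the AttrMeta.PROPERTY_NAME metadata above) and a sample value.
fields = {
    "p_DevEui": ("DevEui", "ACDE48-00001122"),
    "p_DeviceProfileId": ("DeviceProfileId", None),  # unset optional property
    "p_OtaaV10x": ("OtaaV10x", {"AppEui": "0000000000000001",
                                "AppKey": "000102030405060708090A0B0C0D0E0F"}),
}

def to_cfn(field_map):
    # Emit CloudFormation-facing names, dropping None-valued optionals.
    return {name: value for name, value in field_map.values() if value is not None}

doc = json.dumps(to_cfn(fields), indent=2)
```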
# --- Resource declaration ---
@attr.s
class ServiceProfile(Resource):
"""
AWS Object Type = "AWS::IoTWireless::ServiceProfile"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html
Property Document:
- ``p_LoRaWAN``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#cfn-iotwireless-serviceprofile-lorawan
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#cfn-iotwireless-serviceprofile-name
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#cfn-iotwireless-serviceprofile-tags
"""
AWS_OBJECT_TYPE = "AWS::IoTWireless::ServiceProfile"
p_LoRaWAN: typing.Union['ServiceProfileLoRaWANServiceProfile', dict] = attr.ib(
default=None,
converter=ServiceProfileLoRaWANServiceProfile.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(ServiceProfileLoRaWANServiceProfile)),
metadata={AttrMeta.PROPERTY_NAME: "LoRaWAN"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#cfn-iotwireless-serviceprofile-lorawan"""
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#cfn-iotwireless-serviceprofile-name"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#cfn-iotwireless-serviceprofile-tags"""
@property
def rv_LoRaWANUlRate(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.UlRate")
@property
def rv_LoRaWANUlBucketSize(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.UlBucketSize")
@property
def rv_LoRaWANUlRatePolicy(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.UlRatePolicy")
@property
def rv_LoRaWANDlRate(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.DlRate")
@property
def rv_LoRaWANDlBucketSize(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.DlBucketSize")
@property
def rv_LoRaWANDlRatePolicy(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.DlRatePolicy")
@property
def rv_LoRaWANDevStatusReqFreq(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.DevStatusReqFreq")
@property
def rv_LoRaWANReportDevStatusBattery(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.ReportDevStatusBattery")
@property
def rv_LoRaWANReportDevStatusMargin(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.ReportDevStatusMargin")
@property
def rv_LoRaWANDrMin(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.DrMin")
@property
def rv_LoRaWANDrMax(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.DrMax")
@property
def rv_LoRaWANChannelMask(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.ChannelMask")
@property
def rv_LoRaWANPrAllowed(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.PrAllowed")
@property
def rv_LoRaWANHrAllowed(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.HrAllowed")
@property
def rv_LoRaWANRaAllowed(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.RaAllowed")
@property
def rv_LoRaWANNwkGeoLoc(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.NwkGeoLoc")
@property
def rv_LoRaWANTargetPer(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.TargetPer")
@property
def rv_LoRaWANMinGwDiversity(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="LoRaWAN.MinGwDiversity")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@property
def rv_Id(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-serviceprofile.html#aws-resource-iotwireless-serviceprofile-return-values"""
return GetAtt(resource=self, attr_name="Id")

@attr.s
class WirelessDevice(Resource):
    """
    AWS Object Type = "AWS::IoTWireless::WirelessDevice"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html

    Property Document:

    - ``rp_DestinationName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-destinationname
    - ``rp_Type``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-type
    - ``p_Description``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-description
    - ``p_LastUplinkReceivedAt``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-lastuplinkreceivedat
    - ``p_LoRaWAN``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-lorawan
    - ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-name
    - ``p_ThingArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-thingarn
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-tags
    """
    AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessDevice"

    rp_DestinationName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "DestinationName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-destinationname"""
    rp_Type: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Type"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-type"""
    p_Description: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Description"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-description"""
    p_LastUplinkReceivedAt: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "LastUplinkReceivedAt"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-lastuplinkreceivedat"""
    p_LoRaWAN: typing.Union['WirelessDeviceLoRaWANDevice', dict] = attr.ib(
        default=None,
        converter=WirelessDeviceLoRaWANDevice.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(WirelessDeviceLoRaWANDevice)),
        metadata={AttrMeta.PROPERTY_NAME: "LoRaWAN"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-lorawan"""
    p_Name: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Name"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-name"""
    p_ThingArn: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "ThingArn"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-thingarn"""
    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#cfn-iotwireless-wirelessdevice-tags"""

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#aws-resource-iotwireless-wirelessdevice-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

    @property
    def rv_Id(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#aws-resource-iotwireless-wirelessdevice-return-values"""
        return GetAtt(resource=self, attr_name="Id")

    @property
    def rv_ThingName(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessdevice.html#aws-resource-iotwireless-wirelessdevice-return-values"""
        return GetAtt(resource=self, attr_name="ThingName")

@attr.s
class WirelessGateway(Resource):
    """
    AWS Object Type = "AWS::IoTWireless::WirelessGateway"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html

    Property Document:

    - ``rp_LoRaWAN``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-lorawan
    - ``p_Description``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-description
    - ``p_LastUplinkReceivedAt``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-lastuplinkreceivedat
    - ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-name
    - ``p_ThingArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-thingarn
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-tags
    """
    AWS_OBJECT_TYPE = "AWS::IoTWireless::WirelessGateway"

    rp_LoRaWAN: typing.Union['WirelessGatewayLoRaWANGateway', dict] = attr.ib(
        default=None,
        converter=WirelessGatewayLoRaWANGateway.from_dict,
        validator=attr.validators.instance_of(WirelessGatewayLoRaWANGateway),
        metadata={AttrMeta.PROPERTY_NAME: "LoRaWAN"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-lorawan"""
    p_Description: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Description"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-description"""
    p_LastUplinkReceivedAt: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "LastUplinkReceivedAt"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-lastuplinkreceivedat"""
    p_Name: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Name"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-name"""
    p_ThingArn: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "ThingArn"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-thingarn"""
    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#cfn-iotwireless-wirelessgateway-tags"""

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#aws-resource-iotwireless-wirelessgateway-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

    @property
    def rv_Id(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#aws-resource-iotwireless-wirelessgateway-return-values"""
        return GetAtt(resource=self, attr_name="Id")

    @property
    def rv_ThingName(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-wirelessgateway.html#aws-resource-iotwireless-wirelessgateway-return-values"""
        return GetAtt(resource=self, attr_name="ThingName")

@attr.s
class Destination(Resource):
    """
    AWS Object Type = "AWS::IoTWireless::Destination"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html

    Property Document:

    - ``rp_Expression``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-expression
    - ``rp_ExpressionType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-expressiontype
    - ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-name
    - ``rp_RoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-rolearn
    - ``p_Description``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-description
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-tags
    """
    AWS_OBJECT_TYPE = "AWS::IoTWireless::Destination"

    rp_Expression: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Expression"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-expression"""
    rp_ExpressionType: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "ExpressionType"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-expressiontype"""
    rp_Name: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "Name"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-name"""
    rp_RoleArn: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "RoleArn"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-rolearn"""
    p_Description: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Description"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-description"""
    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#cfn-iotwireless-destination-tags"""

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-destination.html#aws-resource-iotwireless-destination-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

@attr.s
class DeviceProfile(Resource):
    """
    AWS Object Type = "AWS::IoTWireless::DeviceProfile"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html

    Property Document:

    - ``p_LoRaWAN``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#cfn-iotwireless-deviceprofile-lorawan
    - ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#cfn-iotwireless-deviceprofile-name
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#cfn-iotwireless-deviceprofile-tags
    """
    AWS_OBJECT_TYPE = "AWS::IoTWireless::DeviceProfile"

    p_LoRaWAN: typing.Union['DeviceProfileLoRaWANDeviceProfile', dict] = attr.ib(
        default=None,
        converter=DeviceProfileLoRaWANDeviceProfile.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(DeviceProfileLoRaWANDeviceProfile)),
        metadata={AttrMeta.PROPERTY_NAME: "LoRaWAN"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#cfn-iotwireless-deviceprofile-lorawan"""
    p_Name: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Name"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#cfn-iotwireless-deviceprofile-name"""
    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#cfn-iotwireless-deviceprofile-tags"""

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#aws-resource-iotwireless-deviceprofile-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

    @property
    def rv_Id(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-deviceprofile.html#aws-resource-iotwireless-deviceprofile-return-values"""
        return GetAtt(resource=self, attr_name="Id")

@attr.s
class PartnerAccount(Resource):
    """
    AWS Object Type = "AWS::IoTWireless::PartnerAccount"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html

    Property Document:

    - ``p_AccountLinked``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-accountlinked
    - ``p_Fingerprint``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-fingerprint
    - ``p_PartnerAccountId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-partneraccountid
    - ``p_PartnerType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-partnertype
    - ``p_Sidewalk``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-sidewalk
    - ``p_SidewalkUpdate``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-sidewalkupdate
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-tags
    """
    AWS_OBJECT_TYPE = "AWS::IoTWireless::PartnerAccount"

    p_AccountLinked: bool = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(bool)),
        metadata={AttrMeta.PROPERTY_NAME: "AccountLinked"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-accountlinked"""
    p_Fingerprint: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Fingerprint"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-fingerprint"""
    p_PartnerAccountId: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "PartnerAccountId"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-partneraccountid"""
    p_PartnerType: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "PartnerType"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-partnertype"""
    p_Sidewalk: typing.Union['PartnerAccountSidewalkAccountInfo', dict] = attr.ib(
        default=None,
        converter=PartnerAccountSidewalkAccountInfo.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PartnerAccountSidewalkAccountInfo)),
        metadata={AttrMeta.PROPERTY_NAME: "Sidewalk"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-sidewalk"""
    p_SidewalkUpdate: typing.Union['PartnerAccountSidewalkUpdateAccount', dict] = attr.ib(
        default=None,
        converter=PartnerAccountSidewalkUpdateAccount.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(PartnerAccountSidewalkUpdateAccount)),
        metadata={AttrMeta.PROPERTY_NAME: "SidewalkUpdate"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-sidewalkupdate"""
    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#cfn-iotwireless-partneraccount-tags"""

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-partneraccount.html#aws-resource-iotwireless-partneraccount-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

@attr.s
class TaskDefinition(Resource):
    """
    AWS Object Type = "AWS::IoTWireless::TaskDefinition"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html

    Property Document:

    - ``rp_AutoCreateTasks``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-autocreatetasks
    - ``p_LoRaWANUpdateGatewayTaskEntry``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskentry
    - ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-name
    - ``p_TaskDefinitionType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-taskdefinitiontype
    - ``p_Update``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-update
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-tags
    """
    AWS_OBJECT_TYPE = "AWS::IoTWireless::TaskDefinition"

    rp_AutoCreateTasks: bool = attr.ib(
        default=None,
        validator=attr.validators.instance_of(bool),
        metadata={AttrMeta.PROPERTY_NAME: "AutoCreateTasks"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-autocreatetasks"""
    p_LoRaWANUpdateGatewayTaskEntry: typing.Union['TaskDefinitionLoRaWANUpdateGatewayTaskEntry', dict] = attr.ib(
        default=None,
        converter=TaskDefinitionLoRaWANUpdateGatewayTaskEntry.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(TaskDefinitionLoRaWANUpdateGatewayTaskEntry)),
        metadata={AttrMeta.PROPERTY_NAME: "LoRaWANUpdateGatewayTaskEntry"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-lorawanupdategatewaytaskentry"""
    p_Name: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Name"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-name"""
    p_TaskDefinitionType: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "TaskDefinitionType"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-taskdefinitiontype"""
    p_Update: typing.Union['TaskDefinitionUpdateWirelessGatewayTaskCreate', dict] = attr.ib(
        default=None,
        converter=TaskDefinitionUpdateWirelessGatewayTaskCreate.from_dict,
        validator=attr.validators.optional(attr.validators.instance_of(TaskDefinitionUpdateWirelessGatewayTaskCreate)),
        metadata={AttrMeta.PROPERTY_NAME: "Update"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-update"""
    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#cfn-iotwireless-taskdefinition-tags"""

    @property
    def rv_Id(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#aws-resource-iotwireless-taskdefinition-return-values"""
        return GetAtt(resource=self, attr_name="Id")

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotwireless-taskdefinition.html#aws-resource-iotwireless-taskdefinition-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

# fcos_core/modeling/discriminator/__init__.py  (CityU-AIM-Group/SIGMA, MIT)
from .fcos_head_discriminator import FCOSDiscriminator
from .fcos_head_discriminator_CA import FCOSDiscriminator_CA
from .fcos_head_discriminator_out import FCOSDiscriminator_out
from .fcos_head_discriminator_con import FCOSDiscriminator_con

# Models/__init__.py  (educauchy/kaggle-tab-playground-apr21, MIT)
from .MetaClassifier import *

# sirena_client/base/connection/__init__.py  (utair-digital/sirena-client, Apache-2.0)
from .async_connection import AsyncConnection  # noqa
from .async_pool import AsyncConnectionPool # noqa

# python/smqtk/algorithms/nn_index/lsh/functors/_plugins.py  (joshanderson-kw/SMQTK, BSD-3-Clause)
from .itq import ItqFunctor  # noqa: F401
from .simple_rp import SimpleRPFunctor # noqa: F401

# helpers_calc.py  (ahammadshawki8/Proggraming-Terms, MIT)
print("Imported helpers_calc!")
def add(x, y):
    return x + y

def sub(x, y):
    return x - y

def multiply(x, y):
    return x * y

def divide(x, y):
    return x / y
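The four arithmetic helpers above lend themselves to a dispatch table; a small self-contained sketch (the helpers are redefined here so the snippet runs on its own):

```python
def add(x, y):
    return x + y

def sub(x, y):
    return x - y

def multiply(x, y):
    return x * y

# Map operator symbols to the helper functions.
OPS = {"+": add, "-": sub, "*": multiply}

def calc(x, op, y):
    return OPS[op](x, y)

assert calc(2, "+", 3) == 5
assert calc(7, "-", 4) == 3
assert calc(6, "*", 7) == 42
```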

# viperdriver/src/__init__.py  (Tesqos/viperdriver, Apache-2.0)
import logging
# logger = logging.getLogger(__name__).addHandler(logging.NullHandler()) - remove upon testing
logger = logging.getLogger(__name__)
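The commented-out one-liner above would bind `logger` to `None`, since `addHandler` returns `None`; the conventional library pattern keeps the two steps separate. `mylib` below is a placeholder logger name:

```python
import logging

logger = logging.getLogger("mylib")       # placeholder name for a library logger
logger.addHandler(logging.NullHandler())  # keep the library silent unless the app configures logging

assert any(isinstance(h, logging.NullHandler) for h in logger.handlers)
```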

# app/blog/__init__.py  (ralphdc/blog, Apache-2.0)
#!/usr/bin/env python3
from flask import Blueprint
blog = Blueprint('blog', __name__)
from . import msgboard
from . import index

# wrapclib/re2/__init__.py  (sdpython/wrapclib, MIT)
# -*- coding: utf-8 -*-
"""
@file
@brief Shortcut to *re2*.
"""
from .re2 import compile # pylint: disable=W0622
from .re2 import findall, search, match, fullmatch, Set, UNANCHORED, ANCHOR_START, ANCHOR_BOTH
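The shim above re-exports a `re`-style API; the call shapes it mirrors can be shown with the stdlib `re` module (used here only as a stand-in for the re2 bindings):

```python
import re

pat = re.compile(r"\d+")
assert pat.findall("a1 b22 c333") == ["1", "22", "333"]   # all digit runs
assert re.search(r"b(\d+)", "a1 b22").group(1) == "22"    # first match with a group
assert re.fullmatch(r"[a-z]+", "abc") is not None         # whole-string match
assert re.match(r"a+", "aab") is not None                 # anchored at the start
```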

# timm/loss/__init__.py  (david-yd-hao/timm-biased-loss, Apache-2.0)
from .cross_entropy import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy, BiasedLossCrossEntropy
from .jsd import JsdCrossEntropy
from .asymmetric_loss import AsymmetricLossMultiLabel, AsymmetricLossSingleLabel
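`LabelSmoothingCrossEntropy` blends the one-hot target with a uniform distribution over the classes. A minimal pure-Python sketch of that idea for a single sample (not timm's tensor implementation):

```python
import math

def label_smoothing_ce(log_probs, target, eps=0.1):
    """Cross entropy against a target smoothed toward uniform.

    log_probs: list of log-probabilities for one sample.
    target: index of the true class.
    """
    k = len(log_probs)
    nll = -log_probs[target]        # standard negative log-likelihood term
    smooth = -sum(log_probs) / k    # uniform-distribution term
    return (1.0 - eps) * nll + eps * smooth

# With eps=0 this reduces to plain cross entropy.
lp = [math.log(0.7), math.log(0.2), math.log(0.1)]
assert abs(label_smoothing_ce(lp, 0, eps=0.0) - (-math.log(0.7))) < 1e-9
assert label_smoothing_ce(lp, 0, eps=0.1) > label_smoothing_ce(lp, 0, eps=0.0)
```

Smoothing penalizes over-confident predictions: the loss on a confidently correct sample rises slightly, which regularizes the model.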

# lang/py/cookbook/v2/source/cb2_1_23_sol_1.py  (ch1huizong/learning, MIT)
def encode_for_xml(unicode_data, encoding='ascii'):
return unicode_data.encode(encoding, 'xmlcharrefreplace')
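This recipe passes non-encodable characters through as XML numeric character references while plain ASCII is untouched; note it does not escape markup characters such as `&` or `<`, so it should be applied to text that is already XML-escaped. Redefined here so the example is self-contained:

```python
def encode_for_xml(unicode_data, encoding='ascii'):
    return unicode_data.encode(encoding, 'xmlcharrefreplace')

assert encode_for_xml("a\u00f1o") == b"a&#241;o"            # n-with-tilde -> &#241;
assert encode_for_xml("plain & ascii") == b"plain & ascii"  # markup chars are NOT escaped
```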

# flowbee/workers/__init__.py  (blitzagency/flowbee, MIT)
from .base import Worker

# cougar/structures/__init__.py  (Swall0w/cougar, MIT)
from cougar.structures import image_list

# tests/test_lookup.py  (mkorman9/yt-archiver, MIT)
import logging
from datetime import datetime
from unittest import TestCase
from mock import MagicMock, create_autospec, call
from ytarchiver.api import YoutubeAPI, YoutubeChannel
from ytarchiver.common import Context, RecordersController, StorageManager, ContentItem, EventBus, Event, PluginsManager
from ytarchiver.lookup import lookup
CHANNEL_1 = YoutubeChannel('id1', 'uploads_playlistid1')
VIDEO_1 = ContentItem(
video_id='video1',
channel_id='channel_id',
timestamp=datetime.utcnow(),
title='video #1',
channel_name='some channel'
)
VIDEO_2 = ContentItem(
video_id='video2',
channel_id='channel_id',
timestamp=datetime.utcnow(),
title='video #2',
channel_name='some channel'
)
LIVESTREAM_1 = ContentItem(
video_id='livestream',
channel_id='channel_id',
timestamp=datetime.utcnow(),
title='livestream #live',
channel_name='some channel'
)
class LookupTest(TestCase):
def test_should_search_for_videos_and_save_results(self):
# given
context, storage = _create_context_and_storage()
context.config.archive_all = False
context.config.monitor_livestreams = False
storage.video_exist.return_value = False
context.video_recorders.is_recording_active.return_value = False
context.livestream_recorders.is_recording_active.return_value = False
context.api.find_channels.return_value = [CHANNEL_1]
context.api.find_channel_uploaded_videos.return_value = [VIDEO_1, VIDEO_2]
# when
lookup(context, is_first_run=True)
# then
storage.add_video.assert_has_calls([
call(context.api.find_channel_uploaded_videos.return_value[0]),
call(context.api.find_channel_uploaded_videos.return_value[1])
], any_order=True)
storage.add_livestream.assert_not_called()
context.livestream_recorders.start_recording.assert_not_called()
context.video_recorders.start_recording.assert_not_called()
context.bus.add_event.assert_has_calls([
call(Event(type=Event.NEW_VIDEO, content=context.api.find_channel_uploaded_videos.return_value[0])),
call(Event(type=Event.NEW_VIDEO, content=context.api.find_channel_uploaded_videos.return_value[1]))
], any_order=True)
context.bus.retrieve_events.assert_called_once()
storage.commit.assert_called()
def test_should_search_for_all_content_and_save_results_and_start_recording_livestream(self):
# given
context, storage = _create_context_and_storage()
context.config.archive_all = False
context.config.monitor_livestreams = True
storage.video_exist.return_value = False
context.video_recorders.is_recording_active.return_value = False
context.livestream_recorders.is_recording_active.return_value = False
context.api.find_channels.return_value = [CHANNEL_1]
context.api.find_channel_uploaded_videos.return_value = [VIDEO_1, VIDEO_2]
context.api.fetch_channel_livestream.return_value = LIVESTREAM_1
# when
lookup(context, is_first_run=True)
# then
storage.add_video.assert_has_calls([
call(context.api.find_channel_uploaded_videos.return_value[0]),
call(context.api.find_channel_uploaded_videos.return_value[1])
], any_order=True)
storage.add_livestream.assert_has_calls([
call(context.api.fetch_channel_livestream.return_value)
], any_order=True)
context.livestream_recorders.start_recording.assert_has_calls([
call(context, context.api.fetch_channel_livestream.return_value)
], any_order=True)
context.video_recorders.start_recording.assert_not_called()
context.bus.add_event.assert_has_calls([
call(Event(type=Event.LIVESTREAM_STARTED, content=context.api.fetch_channel_livestream.return_value))
], any_order=True)
context.bus.retrieve_events.assert_called_once()
storage.commit.assert_called()
def test_should_search_for_videos_and_archive_all(self):
# given
context, storage = _create_context_and_storage()
context.config.archive_all = True
context.config.monitor_livestreams = False
storage.video_exist.return_value = False
context.video_recorders.is_recording_active.return_value = False
context.livestream_recorders.is_recording_active.return_value = False
context.api.find_channels.return_value = [CHANNEL_1]
context.api.find_channel_uploaded_videos.return_value = [VIDEO_1, VIDEO_2]
# when
lookup(context, is_first_run=True)
# then
storage.add_video.assert_has_calls([
call(context.api.find_channel_uploaded_videos.return_value[0]),
call(context.api.find_channel_uploaded_videos.return_value[1])
], any_order=True)
storage.add_livestream.assert_not_called()
context.video_recorders.start_recording.assert_has_calls([
call(context, context.api.find_channel_uploaded_videos.return_value[0]),
call(context, context.api.find_channel_uploaded_videos.return_value[1])
], any_order=True)
context.livestream_recorders.start_recording.assert_not_called()
context.bus.add_event.assert_has_calls([
call(Event(type=Event.NEW_VIDEO, content=context.api.find_channel_uploaded_videos.return_value[0])),
call(Event(type=Event.NEW_VIDEO, content=context.api.find_channel_uploaded_videos.return_value[1]))
], any_order=True)
context.bus.retrieve_events.assert_called_once()
storage.commit.assert_called()
def test_should_download_newly_fetched_videos(self):
# given
context, storage = _create_context_and_storage()
context.config.archive_all = False
context.config.monitor_livestreams = False
storage.video_exist.return_value = False
context.video_recorders.is_recording_active.return_value = False
context.livestream_recorders.is_recording_active.return_value = False
context.api.find_channels.return_value = [CHANNEL_1]
context.api.find_channel_uploaded_videos.return_value = [VIDEO_1, VIDEO_2]
# when
lookup(context, is_first_run=False)
# then
storage.add_video.assert_has_calls([
call(context.api.find_channel_uploaded_videos.return_value[0]),
call(context.api.find_channel_uploaded_videos.return_value[1])
], any_order=True)
storage.add_livestream.assert_not_called()
context.video_recorders.start_recording.assert_has_calls([
call(context, context.api.find_channel_uploaded_videos.return_value[0]),
call(context, context.api.find_channel_uploaded_videos.return_value[1])
], any_order=True)
context.livestream_recorders.start_recording.assert_not_called()
context.bus.add_event.assert_has_calls([
call(Event(type=Event.NEW_VIDEO, content=context.api.find_channel_uploaded_videos.return_value[0])),
call(Event(type=Event.NEW_VIDEO, content=context.api.find_channel_uploaded_videos.return_value[1]))
], any_order=True)
context.bus.retrieve_events.assert_called_once()
storage.commit.assert_called()
def test_should_not_download_if_video_already_exist(self):
# given
context, storage = _create_context_and_storage()
context.config.archive_all = False
context.config.monitor_livestreams = False
storage.video_exist.return_value = True
context.video_recorders.is_recording_active.return_value = False
context.livestream_recorders.is_recording_active.return_value = False
context.api.find_channels.return_value = [CHANNEL_1]
context.api.find_channel_uploaded_videos.return_value = [VIDEO_1, VIDEO_2]
# when
lookup(context, is_first_run=False)
# then
storage.add_video.assert_not_called()
storage.add_livestream.assert_not_called()
context.video_recorders.start_recording.assert_not_called()
context.livestream_recorders.start_recording.assert_not_called()
context.bus.add_event.assert_not_called()
context.bus.retrieve_events.assert_called_once()
storage.commit.assert_called()
def _create_context_and_storage():
    config = MagicMock()
    config.output_dir = 'fake_output_directory'
    logger = create_autospec(logging.Logger, spec_set=True)
    api = create_autospec(YoutubeAPI, spec_set=True)
    video_recorders_controller = create_autospec(RecordersController, spec_set=True)
    livestream_recorders_controller = create_autospec(RecordersController, spec_set=True)
    event_bus = create_autospec(EventBus, spec_set=True)
    plugins_manager = create_autospec(PluginsManager, spec_set=True)
    storage_manager = create_autospec(StorageManager, spec_set=True)
    storage_manager.open.return_value.__enter__.return_value = MagicMock()
    storage = storage_manager.open.return_value.__enter__.return_value
    context = Context(
        config,
        logger,
        api=api,
        video_recorders_controller=video_recorders_controller,
        livestream_recorders_controller=livestream_recorders_controller,
        storage_manager=storage_manager,
        bus=event_bus,
        plugins=plugins_manager
    )
    return context, storage
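`create_autospec`, as used in the fixture above, builds mocks that enforce the real object's attribute set and call signatures. A minimal stdlib-only illustration (`Storage` here is a toy class, not ytarchiver's):

```python
from unittest.mock import create_autospec

class Storage:
    def add_video(self, item):
        pass

mock_storage = create_autospec(Storage, spec_set=True, instance=True)
mock_storage.add_video("clip")
mock_storage.add_video.assert_called_once_with("clip")

# Autospec rejects calls that do not match the real signature.
try:
    mock_storage.add_video()  # missing the 'item' argument
    signature_checked = False
except TypeError:
    signature_checked = True
assert signature_checked
```

`spec_set=True` additionally forbids setting attributes the real class does not define, so typos in test code fail loudly instead of silently passing.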

# rgc/__init__.py  (sievetech/rgc, BSD-3-Clause)
# coding: utf-8
from rgc import collect

# skgaip/flood/gaip/__init__.py  (danielsuo/toy_flood, MIT)
# from ealstm.gaip.flood_lstm import FloodLSTM
from ealstm.gaip.flood_data import FloodData
# from ealstm.gaip.ar_stateless import ARStateless
__all__ = ["FloodData"]

# post_optimization_studies/mad_analyses/ma100MeV_L1pt8-2pt4TeV_deta2pt6/Output/Histos/MadAnalysis5job_0/selection_6.py  (sheride/axion_pheno, MIT)
def selection_6():
# Library import
import numpy
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
# Library version
matplotlib_version = matplotlib.__version__
numpy_version = numpy.__version__
# Histo binning
xBinning = numpy.linspace(0.0,15.0,76,endpoint=True)
# Creating data sequence: middle of each bin
xData = numpy.array([0.1,0.3,0.5,0.7,0.9,1.1,1.3,1.5,1.7,1.9,2.1,2.3,2.5,2.7,2.9,3.1,3.3,3.5,3.7,3.9,4.1,4.3,4.5,4.7,4.9,5.1,5.3,5.5,5.7,5.9,6.1,6.3,6.5,6.7,6.9,7.1,7.3,7.5,7.7,7.9,8.1,8.3,8.5,8.7,8.9,9.1,9.3,9.5,9.7,9.9,10.1,10.3,10.5,10.7,10.9,11.1,11.3,11.5,11.7,11.9,12.1,12.3,12.5,12.7,12.9,13.1,13.3,13.5,13.7,13.9,14.1,14.3,14.5,14.7,14.9])
# Creating weights for histo: y7_DELTAR_0
y7_DELTAR_0_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.746530323921,1.61866183504,2.49079294616,3.26916606684,4.07761158445,4.6030155309,5.04173148619,5.36015945374,4.96389549413,4.56586353469,4.16960357507,3.88125000446,3.57697683547,3.08518688559,2.8976697047,2.63408453156,2.36165375932,1.95477740079,1.81679301485,1.52844104424,1.24362747326,1.14456188336,0.925202305712,0.792525119233,0.633312335459,0.49886634916,0.399800599256,0.31311816809,0.291889810253,0.201669339448,0.180440981611,0.132677186479,0.093758550445,0.0406876758535,0.024766409476,0.0212283498366,0.0106141749183,0.00353805843943,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_1
y7_DELTAR_1_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.403963891944,0.955309720248,1.45438208153,1.852842767,2.35299866323,2.78260608244,3.01447137079,3.09573928985,3.04752378311,2.82748385389,2.7121063665,2.54953535647,2.28876474493,2.18952557338,1.85396227323,1.60391869768,1.56337666476,1.36465051949,1.17755748683,1.01085897202,0.851687647823,0.756533215185,0.612290372702,0.51609916778,0.388952596341,0.350492181545,0.278914006598,0.220104997278,0.176283942195,0.146392166552,0.104721674548,0.0673212135287,0.0619848738415,0.0363259628544,0.0277816482816,0.0128212360556,0.00641566200093,0.00106772357368,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_2
y7_DELTAR_2_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.22431137196,0.567375731428,0.86530020044,1.23753188667,1.49656634667,1.73754480249,1.87782643498,2.06047007729,1.94866165139,1.95699485332,1.78893521439,1.70490519493,1.55142915938,1.46392673911,1.30836710307,1.18822507524,1.05141624355,0.939607817653,0.820160189984,0.65696135218,0.568070131589,0.522235720972,0.421538497646,0.353481361881,0.289590827082,0.246534137108,0.227783652764,0.163198677804,0.118753067508,0.0854188597867,0.0763908576954,0.052084692065,0.0479178910998,0.0194449485043,0.0180560241825,0.00555569728694,0.00138892472173,0.000694462560867,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_3
y7_DELTAR_3_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.16690277194,0.352771789782,0.593168584593,0.796581447895,1.00189071179,1.16405196225,1.29586720327,1.43858804769,1.43100164533,1.32052321095,1.28306519929,1.19250117111,1.07917833584,0.971544702344,0.916542685227,0.818392654683,0.739208630041,0.634420197431,0.562348575002,0.476526548295,0.427214132949,0.361306592438,0.310097776502,0.233758712745,0.201990302859,0.16690277194,0.146039925447,0.115693956004,0.0848738664127,0.0621143793299,0.0592694584446,0.0440964537228,0.0260785601156,0.0199145341974,0.0109055793938,0.00379324478045,0.0033190894329,0.000474155747557,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_4
y7_DELTAR_4_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_5
y7_DELTAR_5_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0529581672,0.0,0.0,0.0,1.0521138287,0.0,1.05462838872,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_6
y7_DELTAR_6_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.4605753905,1.15126882793,2.30383643573,1.15235088832,2.30326773922,3.6875657254,2.0723035641,4.14416446714,2.53284898276,0.690760297698,1.61153220386,1.38315290808,0.690937438976,0.6907560709,0.230455998676,0.0,0.46143957864,0.0,0.230350866673,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_7
y7_DELTAR_7_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.359930673019,1.02467163757,1.52287197843,2.16014969574,2.21554021106,2.79640758282,3.23978560499,3.82157971045,3.54384966239,2.46399524558,1.6893643612,1.24662301162,1.10786224032,0.858736679932,0.553717036599,0.304495109752,0.13843947284,0.193792287905,0.0830513811122,0.055299842367,0.0277194204404,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_8
y7_DELTAR_8_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.151084201488,0.48423614004,0.504377516496,0.675359702004,1.13907599672,1.41120160532,1.5830356947,1.69408174933,1.36118388312,0.887213066367,0.71563869268,0.413348077695,0.312530816245,0.201576026187,0.141078169117,0.0302685953236,0.0604550221859,0.0504230667906,0.0301680161753,0.0201517590157,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_9
y7_DELTAR_9_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0961516802609,0.18110331181,0.24614779823,0.373490429927,0.53758102364,0.676174258702,0.905334192949,1.01844234252,0.69036572705,0.59690386983,0.339506699398,0.209366171586,0.104671408976,0.0905198764959,0.0197879608031,0.01697604866,0.01131334269,0.0113283436484,0.00282343621867,0.00565237189102,0.0,0.0,0.00283193740298,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_10
y7_DELTAR_10_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0136848571629,0.0152216980958,0.0365169671737,0.039632448391,0.0594495704809,0.0914595798965,0.1552297714,0.187437010429,0.118771828975,0.0517749827387,0.0304819702114,0.013714783041,0.00916503115889,0.00458334140609,0.00152202212957,0.00151823561399,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_11
y7_DELTAR_11_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00216659567611,0.00198641060761,0.00378777066194,0.00541646608365,0.00704192465245,0.0142617587561,0.0287097800078,0.0370157756304,0.0186005318984,0.00794641419169,0.00270888195285,0.00126479882879,0.000180814223357,0.000361067379374,0.0,0.00018006256488,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_12
y7_DELTAR_12_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0243067199767,0.0,0.0,0.0242945760233,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_13
y7_DELTAR_13_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0301235473445,0.0301201301501,0.0602001795444,0.07029003463,0.170647867166,0.0702986705867,0.110519502419,0.070259953403,0.080279563865,0.0903759888888,0.110412193426,0.0903980539647,0.0100457331979,0.0602205091649,0.0702763989089,0.0100369732801,0.0100340932506,0.0201103993878,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_14
y7_DELTAR_14_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00549610751918,0.0220212053763,0.082501273114,0.109972893147,0.236463551029,0.236452744324,0.36850044573,0.467458665392,0.58839179223,0.594141284488,0.544418658146,0.506137732657,0.594260320754,0.407120685516,0.390558678292,0.31348980595,0.285914344042,0.253089667205,0.275022769494,0.120959006053,0.104542239323,0.0605015023909,0.0660052030425,0.0384962917511,0.00551434485012,0.00550890493341,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_15
y7_DELTAR_15_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0345292436295,0.107563798304,0.166786103988,0.242751902195,0.364121319786,0.436184490394,0.53092754537,0.599046762949,0.595042485216,0.610876818306,0.528911377502,0.52204839018,0.443090767205,0.411544754578,0.287165194352,0.235871799492,0.155924731113,0.146079418662,0.101652137586,0.0651205787258,0.0384885764514,0.0286232305864,0.0138173916229,0.00986914171184,0.00197291004404,0.00197407244699,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_16
y7_DELTAR_16_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0191538962283,0.0531793492939,0.089488529985,0.142421133628,0.176961603723,0.242768152703,0.289898212478,0.340056317918,0.327185653453,0.297188232115,0.246267803847,0.227384019486,0.179238929645,0.131319939962,0.0965389658765,0.0620093386667,0.0491618803762,0.0274785991608,0.0166351060603,0.0103367982856,0.00352906736996,0.00201567710607,0.000756651720394,0.000756508082177,0.0,0.000504346990318,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_17
y7_DELTAR_17_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.01487905372,0.0260706394856,0.0466383758843,0.0853450869514,0.101317762802,0.146563121676,0.196712213959,0.219053654567,0.20100598004,0.152575913859,0.124260872544,0.0916329093138,0.0658572356715,0.0434977489888,0.0294825773694,0.0163354469461,0.0108738948652,0.00515803109849,0.000859068447659,0.000858927878233,0.00142991998416,0.000285189269689,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_18
y7_DELTAR_18_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00229000058071,0.00472975209873,0.00732024700378,0.0110771301609,0.0173614532999,0.0256745733096,0.0359121027543,0.0412880452271,0.0333957775386,0.021764430436,0.0148130830232,0.00958823766524,0.00509825353625,0.00267870432109,0.00151258304132,0.000970540792045,0.000302361286533,0.000172846893268,0.000129587066735,6.47756164652e-05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y7_DELTAR_19
y7_DELTAR_19_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000424383966984,0.000994375917234,0.00102261162147,0.00181470109613,0.00285953548127,0.00484562833142,0.00870869645677,0.0104268159111,0.00671779866065,0.0032323279934,0.00164597397055,0.000623735902071,0.000511575986831,0.000311105472207,0.00011365054761,2.84378684386e-05,8.50595298734e-05,0.0,2.84575189595e-05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating a new Canvas
fig = plt.figure(figsize=(12,6),dpi=80)
frame = gridspec.GridSpec(1,1,right=0.7)
pad = fig.add_subplot(frame[0])
# Creating a new Stack
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights+y7_DELTAR_15_weights+y7_DELTAR_16_weights+y7_DELTAR_17_weights+y7_DELTAR_18_weights+y7_DELTAR_19_weights,\
label="$bg\_vbf\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ccc6aa", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights+y7_DELTAR_15_weights+y7_DELTAR_16_weights+y7_DELTAR_17_weights+y7_DELTAR_18_weights,\
label="$bg\_vbf\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#c1bfa8", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights+y7_DELTAR_15_weights+y7_DELTAR_16_weights+y7_DELTAR_17_weights,\
label="$bg\_vbf\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#bab5a3", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights+y7_DELTAR_15_weights+y7_DELTAR_16_weights,\
label="$bg\_vbf\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b2a596", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights+y7_DELTAR_15_weights,\
label="$bg\_vbf\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b7a39b", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights,\
label="$bg\_vbf\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ad998c", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights,\
label="$bg\_vbf\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#9b8e82", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights,\
label="$bg\_vbf\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#876656", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights,\
label="$bg\_dip\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#afcec6", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights,\
label="$bg\_dip\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#84c1a3", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights,\
label="$bg\_dip\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#89a8a0", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights,\
label="$bg\_dip\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#829e8c", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights,\
label="$bg\_dip\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#adbcc6", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights,\
label="$bg\_dip\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#7a8e99", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights,\
label="$bg\_dip\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#758991", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights,\
label="$bg\_dip\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#688296", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights,\
label="$signal\_2pt4TeVL$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#6d7a84", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights,\
label="$signal\_2pt2TeVL$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#7c99d1", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights+y7_DELTAR_1_weights,\
label="$signal\_2TeVL$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#7f7f9b", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y7_DELTAR_0_weights,\
label="$signal\_1pt8TeVL$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#aaa5bf", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, normed=False, align="mid", orientation="vertical")
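The twenty `pad.hist` calls above build the stack manually: each call passes the running sum of one more process's weight array, and the script plots those sums from largest (all processes) down to a single process so earlier curves sit behind later ones. A minimal sketch of that pattern, where `all_weights` is a hypothetical list standing in for the `y7_DELTAR_*_weights` arrays:

```python
import numpy as np

# Hypothetical stand-ins for the per-process weight arrays above.
all_weights = [np.array([1.0, 2.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]

# Row i of `stacked` equals the weights passed to one pad.hist call:
# the running sum over processes 0..i.
stacked = np.cumsum(np.vstack(all_weights), axis=0)

print(stacked[-1])  # per-bin total over all processes: [1.5 3.5]
```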
# Axis
plt.rc('text',usetex=False)
plt.xlabel(r"$\Delta R [ j_{1} , j_{2} ]$ ",\
fontsize=16,color="black")
plt.ylabel(r"$\mathrm{Events}$ $(\mathcal{L}_{\mathrm{int}} = 40.0\ \mathrm{fb}^{-1})$ ",\
fontsize=16,color="black")
# Boundary of y-axis
ymax=(y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights+y7_DELTAR_15_weights+y7_DELTAR_16_weights+y7_DELTAR_17_weights+y7_DELTAR_18_weights+y7_DELTAR_19_weights).max()*1.1
ymin=0 # linear scale
#ymin=min([x for x in (y7_DELTAR_0_weights+y7_DELTAR_1_weights+y7_DELTAR_2_weights+y7_DELTAR_3_weights+y7_DELTAR_4_weights+y7_DELTAR_5_weights+y7_DELTAR_6_weights+y7_DELTAR_7_weights+y7_DELTAR_8_weights+y7_DELTAR_9_weights+y7_DELTAR_10_weights+y7_DELTAR_11_weights+y7_DELTAR_12_weights+y7_DELTAR_13_weights+y7_DELTAR_14_weights+y7_DELTAR_15_weights+y7_DELTAR_16_weights+y7_DELTAR_17_weights+y7_DELTAR_18_weights+y7_DELTAR_19_weights) if x])/100. # log scale
plt.gca().set_ylim(ymin,ymax)
# Log/Linear scale for X-axis
plt.gca().set_xscale("linear")
#plt.gca().set_xscale("log",nonposx="clip")
# Log/Linear scale for Y-axis
plt.gca().set_yscale("linear")
#plt.gca().set_yscale("log",nonposy="clip")
# Legend
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
# Saving the image
plt.savefig('../../HTML/MadAnalysis5job_0/selection_6.png')
plt.savefig('../../PDF/MadAnalysis5job_0/selection_6.pdf')
plt.savefig('../../DVI/MadAnalysis5job_0/selection_6.eps')
# Running!
if __name__ == '__main__':
    selection_6()

# ---- src/lit_tracking/converter/__init__.py (Actis92/lit-tracking, MIT) ----
| 18.5 | 36 | 0.864865 | 6 | 37 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0.108108 | 37 | 1 | 37 | 37 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |

# ---- demo/DeepForest/callbacks.py (bw4sz/SpeciesClassification, MIT) ----
Callback for evaluation. Modified in part from keras-retinanet by FizyR.
'''
import keras
from .evaluation import neonRecall
from .evalmAP import evaluate
class Evaluate(keras.callbacks.Callback):
""" Evaluation callback for arbitrary datasets.
"""
def __init__(self, generator, iou_threshold=0.5, score_threshold=0.05, max_detections=300, suppression_threshold=0.2,save_path=None, weighted_average=False, verbose=1,experiment=None,DeepForest_config=None):
""" Evaluate a given dataset using a given model at the end of every epoch during training.
# Arguments
generator : The generator that represents the dataset to evaluate.
iou_threshold : The threshold used to consider when a detection is positive or negative.
score_threshold : The score confidence threshold to use for detections.
max_detections : The maximum number of detections to use per image.
suppression_threshold: Percent overlap allowed among boxes
save_path : The path to save images with visualized detections to.
verbose : Set the verbosity level, by default this is set to 1.
Experiment : Comet ml experiment for online logging
"""
self.generator = generator
self.iou_threshold = iou_threshold
self.score_threshold = score_threshold
self.max_detections = max_detections
        self.suppression_threshold = suppression_threshold
self.save_path = save_path
self.weighted_average = weighted_average
self.verbose = verbose
self.experiment = experiment
self.DeepForest_config = DeepForest_config
super(Evaluate, self).__init__()
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
# run evaluation
average_precisions = evaluate(
self.generator,
self.model,
iou_threshold=self.iou_threshold,
score_threshold=self.score_threshold,
max_detections=self.max_detections,
save_path=self.save_path,
experiment=self.experiment
)
# compute per class average precision
total_instances = []
precisions = []
for label, (average_precision, num_annotations ) in average_precisions.items():
if self.verbose == 1:
print('{:.0f} instances of class'.format(num_annotations),
self.generator.label_to_name(label), 'with average precision: {:.3f}'.format(average_precision))
total_instances.append(num_annotations)
precisions.append(average_precision)
if self.weighted_average:
self.mean_ap = sum([a * b for a, b in zip(total_instances, precisions)]) / sum(total_instances)
else:
self.mean_ap = sum(precisions) / sum(x > 0 for x in total_instances)
logs['mAP'] = self.mean_ap
if self.verbose == 1:
print('mAP: {:.3f}'.format(self.mean_ap))
self.experiment.log_metric("mAP", self.mean_ap)
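The mAP reduction in `on_epoch_end` above supports two modes, selected by `weighted_average`. A standalone sketch of both reductions with made-up per-class results (the numbers are illustrative only):

```python
# Per-class (average_precision, num_annotations), as returned by evaluate().
average_precisions = {0: (0.80, 100), 1: (0.40, 25), 2: (0.00, 0)}

total_instances = [n for _, n in average_precisions.values()]
precisions = [ap for ap, _ in average_precisions.values()]

# Weighted: each class contributes in proportion to its annotation count.
weighted_map = sum(a * b for a, b in zip(total_instances, precisions)) / sum(total_instances)

# Unweighted: plain mean over classes that actually have annotations.
unweighted_map = sum(precisions) / sum(x > 0 for x in total_instances)

print(round(weighted_map, 4), round(unweighted_map, 4))  # 0.72 0.6
```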
# Neon Recall
class recallCallback(keras.callbacks.Callback):
""" Evaluation callback for NEON stem maps
"""
def __init__(self, generator=None, score_threshold=0.05, max_detections=300, suppression_threshold=0.2,save_path=None, weighted_average=False, verbose=1,experiment=None, sites=None):
""" Evaluate a given dataset using a given model at the end of every epoch during training.
# Arguments
generator : The generator that represents the dataset to evaluate.
            sites            : The NEON sites whose stem maps are used to compute recall.
score_threshold : The score confidence threshold to use for detections.
max_detections : The maximum number of detections to use per image.
suppression_threshold: Percent overlap allowed among boxes
save_path : The path to save images with visualized detections to.
verbose : Set the verbosity level, by default this is set to 1.
Experiment : Comet ml experiment for online logging
"""
self.generator = generator
self.score_threshold = score_threshold
self.max_detections = max_detections
        self.suppression_threshold = suppression_threshold
self.save_path = save_path
self.weighted_average = weighted_average
self.verbose = verbose
self.experiment = experiment
self.sites = sites
super(recallCallback, self).__init__()
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
        recall = neonRecall(
self.sites,
self.generator,
self.model,
score_threshold=self.score_threshold,
save_path=self.save_path,
max_detections=self.max_detections,
experiment=self.experiment,
)
print("Recall is {}".format(recall))
self.experiment.log_metric("Recall", recall)
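`neonRecall` itself is defined in `.evaluation`; conceptually, the metric logged here reduces to the fraction of ground-truth stems matched by a prediction. A hypothetical sketch of that reduction (not the actual implementation):

```python
def stem_recall(n_matched, n_total):
    # Fraction of ground-truth stems covered by at least one predicted box.
    return n_matched / n_total if n_total else 0.0

print(stem_recall(45, 60))  # -> 0.75
```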
#Hand annotated mAP
class NEONmAP(keras.callbacks.Callback):
""" Evaluation callback for arbitrary datasets.
"""
def __init__(self, generator, iou_threshold=0.5, score_threshold=0.05, max_detections=300, save_path=None, weighted_average=False, verbose=1, experiment=None, DeepForest_config=None):
""" Evaluate a given dataset using a given model at the end of every epoch during training.
# Arguments
generator : The generator that represents the dataset to evaluate.
iou_threshold : The threshold used to consider when a detection is positive or negative.
score_threshold : The score confidence threshold to use for detections.
max_detections : The maximum number of detections to use per image.
save_path : The path to save images with visualized detections to.
verbose : Set the verbosity level, by default this is set to 1.
Experiment : Comet ml experiment for online logging
"""
self.generator = generator
self.iou_threshold = iou_threshold
self.score_threshold = score_threshold
self.max_detections = max_detections
self.save_path = save_path
self.weighted_average = weighted_average
self.verbose = verbose
self.experiment = experiment
self.DeepForest_config = DeepForest_config
super(NEONmAP, self).__init__()
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
print("computing NEON mAP scores")
# run evaluation
average_precisions = evaluate(
self.generator,
self.model,
iou_threshold=self.iou_threshold,
score_threshold=self.score_threshold,
max_detections=self.max_detections,
save_path=self.save_path,
experiment=self.experiment
)
# print evaluation
# compute per class average precision
total_instances = []
precisions = []
for label, (average_precision, num_annotations ) in average_precisions.items():
if self.verbose == 1:
print('{:.0f} instances of class'.format(num_annotations),
self.generator.label_to_name(label), 'with average precision: {:.3f}'.format(average_precision))
total_instances.append(num_annotations)
precisions.append(average_precision)
if self.weighted_average:
self.NEON_map = sum([a * b for a, b in zip(total_instances, precisions)]) / sum(total_instances)
else:
self.NEON_map = sum(precisions) / sum(x > 0 for x in total_instances)
logs['NEON_mAP'] = self.NEON_map
print('Neon mAP: {:.3f}'.format(self.NEON_map))
self.experiment.log_metric("Neon mAP", self.NEON_map)
class shuffle_inputs(keras.callbacks.Callback):
"""Randomize order of tiles and windows
"""
def __init__(self, generator):
"""
# Arguments
generator : The generator that represents the dataset to evaluate.
"""
self.generator = generator
super(shuffle_inputs, self).__init__()
    # Before epoch, randomize tile order
    def on_epoch_begin(self, epoch, logs=None):
        self.generator.image_data, self.generator.image_names = self.generator.define_groups(self.generator.windowdf, shuffle=True)
self.generator.group_images()

# ---- canteen/admin.py (vasundhara7/College-EWallet, Apache-2.0) ----
from .models import Payment, Order, card_pay
admin.site.register(Payment)
admin.site.register(Order)
admin.site.register(card_pay)
| 23.571429 | 44 | 0.818182 | 25 | 165 | 5.32 | 0.48 | 0.203008 | 0.383459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084848 | 165 | 6 | 45 | 27.5 | 0.880795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |

# ---- grid_utils/tiler/__init__.py (claydodo/grid_utils, Unlicense) ----
from .xy_tiler import *

# ---- 03.Inheritance/Lab/multiple_inheritance/project/teacher.py (nmoskova/Python-OOP, MIT) ----
from inheritance.Lab.multiple_inheritance.project.person import Person
class Teacher(Person, Employee):
def teach(self):
        return "teaching..."
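`Teacher` mixes in two bases imported from sibling project modules; a self-contained sketch with stand-in `Person` and `Employee` classes shows the method resolution order Python derives from `class Teacher(Person, Employee)`:

```python
class Person:
    def greet(self):
        return "hello"

class Employee:
    def work(self):
        return "working..."

class Teacher(Person, Employee):
    def teach(self):
        return "teaching..."

t = Teacher()
# Methods from both bases are available on the subclass.
print(t.teach(), t.greet(), t.work())

# Bases are linearized left to right after the class itself.
print([c.__name__ for c in Teacher.__mro__])  # ['Teacher', 'Person', 'Employee', 'object']
```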

# ---- tensorboardX/__init__.py (rococostudio/Deep-Exemplar-based-Video-Colorization, MIT) ----
"""
from .writer import FileWriter, SummaryWriter
from .record_writer import RecordWriter

# ---- thualign/optimizers/__init__.py (bryant1410/Mask-Align, BSD-3-Clause) ----
from thualign.optimizers.optimizers import AdadeltaOptimizer
from thualign.optimizers.optimizers import SGDOptimizer
from thualign.optimizers.optimizers import MultiStepOptimizer
from thualign.optimizers.optimizers import LossScalingOptimizer
from thualign.optimizers.schedules import LinearWarmupRsqrtDecay
from thualign.optimizers.schedules import PiecewiseConstantDecay
from thualign.optimizers.schedules import LinearExponentialDecay
from thualign.optimizers.clipping import (
adaptive_clipper, global_norm_clipper, value_clipper)
3a8016ae7719491d9c333b3d295c359ed2e23cff | 3,167 | py | Python | common/user_query.py | cedadev/download-stats | 3d18b08ce239e82e53c5a9bd4dd77b35a1f040bc | [
"BSD-3-Clause"
] | null | null | null | common/user_query.py | cedadev/download-stats | 3d18b08ce239e82e53c5a9bd4dd77b35a1f040bc | [
"BSD-3-Clause"
] | 6 | 2019-08-29T10:35:09.000Z | 2021-04-07T12:24:37.000Z | common/user_query.py | cedadev/access-stats | 3d18b08ce239e82e53c5a9bd4dd77b35a1f040bc | [
"BSD-3-Clause"
] | 1 | 2018-11-01T16:31:16.000Z | 2018-11-01T16:31:16.000Z | from common.query_builder import QueryBuilder
class UserQuery(QueryBuilder):
def get_size(self):
return 0
def update_aggs(self):
self.group_by()
    def _add_user_cardinality_agg(self, name, field):
        """Add a terms aggregation over `field` with a nested `users`
        cardinality sub-aggregation counting distinct users."""
        self.generated_aggs[name] = {
            "terms": {"field": field},
            "aggs": {
                "users": {
                    "cardinality": {"field": "user.keyword.terms.value"}
                }
            },
        }

    def group_by_main(self):
        self._add_user_cardinality_agg(
            "group_by_field", "user_data.field.keyword.terms.value")
        self._add_user_cardinality_agg(
            "group_by_country", "user_data.isocode.keyword.terms.value")
        self._add_user_cardinality_agg(
            "group_by_institute_type", "user_data.type.keyword.terms.value")
        if "user" in self.filters:
            self._add_user_cardinality_agg(
                "group_by_oda_type", "user_data.oda_country.keyword.terms.value")
            self._add_user_cardinality_agg(
                "group_by_area", "user_data.area.keyword.terms.value")
def base_aggs(self):
return {}
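Every group added by `group_by_main` shares one shape: a `terms` bucket over a keyword field with a `users` cardinality sub-aggregation. A standalone sketch of that Elasticsearch aggregation body (field names copied from the class above; the helper name here is illustrative):

```python
def user_cardinality_agg(field):
    # Terms bucket over `field`; each bucket counts distinct users
    # via a nested cardinality aggregation.
    return {
        "terms": {"field": field},
        "aggs": {
            "users": {
                "cardinality": {"field": "user.keyword.terms.value"}
            }
        },
    }

agg = user_cardinality_agg("user_data.isocode.keyword.terms.value")
print(agg["aggs"]["users"]["cardinality"]["field"])  # user.keyword.terms.value
```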
| 58.648148 | 125 | 0.638459 | 377 | 3,167 | 5.013263 | 0.100796 | 0.137037 | 0.314815 | 0.407407 | 0.871958 | 0.858201 | 0.842857 | 0.708466 | 0.5 | 0.250265 | 0 | 0.000381 | 0.172087 | 3,167 | 53 | 126 | 59.754717 | 0.720442 | 0 | 0 | 0 | 0 | 0 | 0.401734 | 0.148362 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0 | 0.022222 | 0.044444 | 0.177778 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3a80ac0a1f58619078f9f3f3f6aec6b69e08303d | 62 | py | Python | copper/core/engine/__init__.py | cinepost/Copperfield_FX | 1900b506d0a407a3fb5774ab129b984a547ee0b5 | [
"Unlicense"
] | 6 | 2016-07-28T13:59:34.000Z | 2021-12-28T05:44:15.000Z | copper/core/engine/__init__.py | cinepost/Copperfield_FX | 1900b506d0a407a3fb5774ab129b984a547ee0b5 | [
"Unlicense"
] | 5 | 2016-06-30T10:19:25.000Z | 2022-03-11T23:19:01.000Z | copper/core/engine/__init__.py | cinepost/Copperfield_FX | 1900b506d0a407a3fb5774ab129b984a547ee0b5 | [
"Unlicense"
] | 3 | 2019-03-18T05:17:10.000Z | 2020-02-14T06:56:40.000Z | from .engine_signals import signals
from .engine import Engine | 31 | 35 | 0.854839 | 9 | 62 | 5.777778 | 0.444444 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 62 | 2 | 36 | 31 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aaf45d6dbb32cea95347efbbbf9946f0d33619d0 | 110 | py | Python | info/modules/user/__init__.py | Signss/The-news-program | 602c1c4970e4f96ec6772349f7628d179c3a00fa | [
"MIT"
] | null | null | null | info/modules/user/__init__.py | Signss/The-news-program | 602c1c4970e4f96ec6772349f7628d179c3a00fa | [
"MIT"
] | null | null | null | info/modules/user/__init__.py | Signss/The-news-program | 602c1c4970e4f96ec6772349f7628d179c3a00fa | [
"MIT"
] | null | null | null | from flask import Blueprint
user_blue = Blueprint('user', __name__, url_prefix='/user')
from . import views | 18.333333 | 59 | 0.754545 | 15 | 110 | 5.133333 | 0.666667 | 0.337662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 110 | 6 | 60 | 18.333333 | 0.810526 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
c945025354277c15cdb2f546fd5709b3a9bf6987 | 124 | py | Python | wooey/models/__init__.py | fridmundklaus/wooey | 4a2e31c282bfe86edf77b0ff8f58f4177eeab9dd | [
"BSD-3-Clause"
] | 1,572 | 2015-06-19T21:31:41.000Z | 2022-03-30T23:37:13.000Z | wooey/models/__init__.py | fridmundklaus/wooey | 4a2e31c282bfe86edf77b0ff8f58f4177eeab9dd | [
"BSD-3-Clause"
] | 309 | 2015-07-08T02:33:08.000Z | 2022-02-08T00:37:11.000Z | wooey/models/__init__.py | fridmundklaus/wooey | 4a2e31c282bfe86edf77b0ff8f58f4177eeab9dd | [
"BSD-3-Clause"
] | 220 | 2015-07-01T10:30:27.000Z | 2022-02-05T04:10:54.000Z | from __future__ import absolute_import, unicode_literals
from .core import *
from .favorite import *
from .widgets import *
| 24.8 | 56 | 0.806452 | 16 | 124 | 5.875 | 0.5625 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137097 | 124 | 4 | 57 | 31 | 0.878505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c9839fa4ef1ed8b86f362d58198ab5f1c3c48662 | 108 | py | Python | historical_weather/__init__.py | Shom770/xmacis | 5e9694d4f2ba37ab93fc83e03b88c707026e886a | [
"MIT"
] | null | null | null | historical_weather/__init__.py | Shom770/xmacis | 5e9694d4f2ba37ab93fc83e03b88c707026e886a | [
"MIT"
] | null | null | null | historical_weather/__init__.py | Shom770/xmacis | 5e9694d4f2ba37ab93fc83e03b88c707026e886a | [
"MIT"
] | null | null | null | from ._wrapped_endpoints import *
from .elements import *
from .oni import *
from .teleconnections import *
| 21.6 | 33 | 0.777778 | 13 | 108 | 6.307692 | 0.538462 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 108 | 4 | 34 | 27 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a33e0d9a2b5191b4ff7ae42bd5192ccd28e74918 | 21,704 | py | Python | Src/models/new_vat.py | SivanKe/SyntheticDataHandwrittenCharacterRecognition | c2b299009ddc24eab1bc2074787e82e566f79abc | [
"MIT"
] | 7 | 2019-07-14T06:49:15.000Z | 2021-11-03T12:13:37.000Z | Src/models/new_vat.py | SivanKe/SyntheticDataHandwrittenCharacterRecognition | c2b299009ddc24eab1bc2074787e82e566f79abc | [
"MIT"
] | 2 | 2019-07-16T07:44:37.000Z | 2019-07-18T10:57:23.000Z | Src/models/new_vat.py | SivanKe/SyntheticDataHandwrittenCharacterRecognition | c2b299009ddc24eab1bc2074787e82e566f79abc | [
"MIT"
] | 1 | 2020-09-03T14:51:53.000Z | 2020-09-03T14:51:53.000Z | import contextlib
import torch
import torch.nn as nn
import torch.nn.functional as F
from warpctc_pytorch import CTCLoss
from torch.autograd import Variable
import numpy as np
@contextlib.contextmanager
def _disable_tracking_bn_stats(model):
def switch_attr(m):
if hasattr(m, 'track_running_stats'):
m.track_running_stats ^= True
    model.apply(switch_attr)
    try:
        yield
    finally:
        # restore the original track_running_stats flags even if the body raises
        model.apply(switch_attr)
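The context manager above flips `track_running_stats` on every batch-norm layer for the duration of the block, so adversarial forward passes do not pollute the running statistics. A stdlib-only sketch of the same flip-and-restore pattern on plain objects (the `BN` class and `disable_tracking` name are illustrative, not part of this repo):

```python
import contextlib

class BN:
    track_running_stats = True

@contextlib.contextmanager
def disable_tracking(layers):
    # Flip the flag on entry and restore it on exit, even on exceptions.
    for layer in layers:
        layer.track_running_stats = not layer.track_running_stats
    try:
        yield
    finally:
        for layer in layers:
            layer.track_running_stats = not layer.track_running_stats

layers = [BN(), BN()]
with disable_tracking(layers):
    print([l.track_running_stats for l in layers])  # [False, False]
print([l.track_running_stats for l in layers])      # [True, True]
```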
def _l2_normalize(d):
d_reshaped = d.view(d.shape[0], -1, *(1 for _ in range(d.dim() - 2)))
d /= torch.norm(d_reshaped, dim=1, keepdim=True) + 1e-8
return d
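`_l2_normalize` rescales each sample's perturbation to unit L2 norm, with a small epsilon in the denominator for numerical stability. The arithmetic, sketched in pure Python for 2-D input:

```python
import math

def l2_normalize(rows, eps=1e-8):
    # Divide each row by its L2 norm, as _l2_normalize does per sample.
    out = []
    for row in rows:
        norm = math.sqrt(sum(v * v for v in row)) + eps
        out.append([v / norm for v in row])
    return out

print(l2_normalize([[3.0, 4.0]]))  # ~[[0.6, 0.8]]
```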
def _entropy(logits, mask):
p = F.softmax(logits, dim=1)
return -torch.mean(torch.sum((p * F.log_softmax(logits, dim=1)), dim=1) * mask)
def _kl_div(log_probs, probs, mask=None):
# pytorch KLDLoss is averaged over all dim if size_average=True
if mask is not None:
kld = F.kl_div(log_probs, probs, size_average=False, reduce=False)
kld = mask.view(-1) * kld.sum(1)
kld = kld.sum() / mask.sum()
else:
kld = F.kl_div(log_probs, probs, size_average=False)
kld = kld / log_probs.shape[0]
return kld
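`_kl_div` averages pointwise KL terms: `F.kl_div(log_q, p)` computes `p * (log p - log q)` elementwise, and the unmasked branch divides the summed result by the batch size. The same computation in pure Python (a sketch, assuming each row is a valid probability distribution):

```python
import math

def kl_div(log_q_rows, p_rows):
    # sum_i p_i * (log p_i - log q_i), summed over all rows, then
    # averaged over the number of rows (the unmasked branch above).
    total = 0.0
    for log_q, p in zip(log_q_rows, p_rows):
        total += sum(pi * (math.log(pi) - lqi)
                     for lqi, pi in zip(log_q, p) if pi > 0)
    return total / len(log_q_rows)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_div([[math.log(v) for v in q]], [p]))  # positive, since p != q
```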
class LabeledATLoss(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(LabeledATLoss, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, labels_flatten, img_seq_lens, label_lens, batch_size):
with _disable_tracking_bn_stats(model):
# calc adversarial direction
# prepare random unit tensor
d = torch.rand(x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d = _l2_normalize(d)
            loss_function = CTCLoss()
            for _ in range(self.ip):
                d.requires_grad_()
preds = model.forward(x + self.xi * d, img_seq_lens)
adv_loss_ctc = loss_function(preds, labels_flatten,
Variable(torch.IntTensor(np.array(img_seq_lens))), label_lens) / batch_size
adv_loss_ctc.backward()
d = d.grad
model.zero_grad()
# calc LDS
r_adv = torch.sign(d) * self.eps
pred_hat = model.forward(x + r_adv, img_seq_lens)
lds = loss_function(pred_hat, labels_flatten,
Variable(torch.IntTensor(np.array(img_seq_lens))), label_lens) / batch_size
return lds
class LabeledAtAndUnlabeledTestVatLoss(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1, unlabeled_ratio=10.):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(LabeledAtAndUnlabeledTestVatLoss, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
self.unlabeled_ratio = unlabeled_ratio
def forward(self, model, train_x, train_labels_flatten, train_img_seq_lens, train_label_lens, batch_size,
test_x, test_seq_len, test_mask
):
with _disable_tracking_bn_stats(model):
# TRAIN
# calc adversarial direction
# prepare random unit tensor
train_d = torch.rand(train_x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
train_d = _l2_normalize(train_d)
            train_loss_function = CTCLoss()
            for _ in range(self.ip):
                train_d.requires_grad_()
train_preds = model.forward(train_x + self.xi * train_d, train_img_seq_lens)
train_adv_loss_ctc = train_loss_function(train_preds, train_labels_flatten,
Variable(torch.IntTensor(np.array(train_img_seq_lens))), train_label_lens) / batch_size
train_adv_loss_ctc.backward()
train_d = train_d.grad
model.zero_grad()
#TEST
with torch.no_grad():
test_pred = model.vat_forward(test_x, test_seq_len)
test_pred = test_pred * test_mask
test_pred = F.softmax(test_pred, dim=2).view(-1, test_pred.size()[-1])
# prepare random unit tensor
test_d = torch.rand(test_x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
test_d = _l2_normalize(test_d)
with _disable_tracking_bn_stats(model):
# calc adversarial direction
for _ in range(self.ip):
test_d.requires_grad_()
test_pred_hat = model.vat_forward(test_x + self.xi * test_d, test_seq_len)
test_pred_hat = test_pred_hat * test_mask
test_pred_hat = F.log_softmax(test_pred_hat, dim=2).view(-1, test_pred_hat.size()[-1])
# pred_hat = model(x + self.xi * d)
# adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
test_adv_distance = _kl_div(test_pred_hat, test_pred)
test_adv_distance.backward()
test_d = _l2_normalize(test_d.grad)
model.zero_grad()
#TRAIN
# calc LDS
train_r_adv = torch.sign(train_d) * self.eps
train_pred_hat = model.forward(train_x + train_r_adv, train_img_seq_lens)
train_lds = train_loss_function(train_pred_hat, train_labels_flatten,
Variable(torch.IntTensor(np.array(train_img_seq_lens))), train_label_lens) / batch_size
#TEST
# calc LDS
test_d = torch.sign(test_d)
test_r_adv = test_d * self.eps
test_pred_hat = model.vat_forward(test_x + test_r_adv, test_seq_len)
test_pred_hat = test_pred_hat * test_mask
test_pred_hat = F.log_softmax(test_pred_hat, dim=2).view(-1, test_pred_hat.size()[-1])
#pred_hat = model(x + r_adv)
#lds = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
test_lds = _kl_div(test_pred_hat, test_pred)
return train_lds, test_lds
class VATLoss(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(VATLoss, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, seq_len, mask):
with torch.no_grad():
pred = model.vat_forward(x, seq_len)
pred = pred * mask
pred = F.softmax(pred, dim=2).view(-1, pred.size()[-1])
# prepare random unit tensor
d = torch.rand(x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d = _l2_normalize(d)
with _disable_tracking_bn_stats(model):
# calc adversarial direction
for _ in range(self.ip):
d.requires_grad_()
pred_hat = model.vat_forward(x + self.xi * d, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + self.xi * d)
#adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
adv_distance = _kl_div(pred_hat, pred)
adv_distance.backward()
d = _l2_normalize(d.grad)
model.zero_grad()
# calc LDS
r_adv = d * self.eps
pred_hat = model.vat_forward(x + r_adv, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + r_adv)
#lds = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
lds = _kl_div(pred_hat, pred)
return lds
class VATonRnnSign(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(VATonRnnSign, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, seq_len, mask):
with torch.no_grad():
x_features = model.vat_forward_cnn(x)
pred = model.vat_forward_rnn(x_features, seq_len)
pred = pred * mask
pred = F.softmax(pred, dim=2).view(-1, pred.size()[-1])
# prepare random unit tensor
d = torch.rand(x_features.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d = _l2_normalize(d)
with _disable_tracking_bn_stats(model):
# calc adversarial direction
for _ in range(self.ip):
d.requires_grad_()
pred_hat = model.vat_forward_rnn(x_features + self.xi * d, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + self.xi * d)
#adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
adv_distance = _kl_div(pred_hat, pred)
adv_distance.backward()
d = _l2_normalize(d.grad)
model.zero_grad()
# calc LDS
d = torch.sign(d)
r_adv = d * self.eps
pred_hat = model.vat_forward_rnn(x_features + r_adv, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + r_adv)
#lds = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
lds = _kl_div(pred_hat, pred)
return lds
class VATonCnnSign(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(VATonCnnSign, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, seq_len, mask):
with torch.no_grad():
            x_features = model.vat_forward_cnn(x)
            pred = model.vat_forward_rnn(x_features, seq_len)
pred = pred * mask
pred = F.softmax(pred, dim=2).view(-1, pred.size()[-1])
# prepare random unit tensor
d = torch.rand(x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d = _l2_normalize(d)
with _disable_tracking_bn_stats(model):
# calc adversarial direction
for _ in range(self.ip):
d.requires_grad_()
pred_hat = model.vat_forward(x + self.xi * d, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + self.xi * d)
#adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
adv_distance = _kl_div(pred_hat, pred)
adv_distance.backward()
d = _l2_normalize(d.grad)
model.zero_grad()
# calc LDS
d = torch.sign(d)
r_adv = d * self.eps
        pred_features_hat = model.vat_forward_cnn(x + r_adv)  # vat_forward_cnn takes only the input elsewhere in this file
pred_features_hat = pred_features_hat * mask
l2_loss = torch.nn.MSELoss()
lds = l2_loss(pred_features_hat, x_features)
return lds
class VATonRnnCnnSign(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(VATonRnnCnnSign, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, seq_len, mask):
with torch.no_grad():
x_features = model.vat_forward_cnn(x)
x_pred = model.vat_forward_rnn(x.size(0), x_features, seq_len)
x_pred = x_pred * mask
x_pred = F.softmax(x_pred, dim=2).view(-1, x_pred.size()[-1])
# prepare random unit tensor
d_rnn = torch.rand(x_features.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d_rnn = _l2_normalize(d_rnn)
d_cnn = torch.rand(x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d_cnn = _l2_normalize(d_cnn)
with _disable_tracking_bn_stats(model):
### Calc rnn d
for _ in range(self.ip):
d_rnn.requires_grad_()
pred_hat = model.vat_forward_rnn(x.size(0), x_features + self.xi * d_rnn, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + self.xi * d)
#adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
adv_distance = _kl_div(pred_hat, x_pred)
adv_distance.backward()
d_rnn = _l2_normalize(d_rnn.grad)
model.zero_grad()
# calc LDS
d_rnn = torch.sign(d_rnn)
r_adv_rnn = d_rnn * self.eps
### Calc Cnn d
for _ in range(self.ip):
d_cnn.requires_grad_()
pred_hat = model.vat_forward(x + self.xi * d_cnn, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + self.xi * d)
#adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
adv_distance = _kl_div(pred_hat, x_pred)
adv_distance.backward()
d_cnn = _l2_normalize(d_cnn.grad)
model.zero_grad()
d_cnn = torch.sign(d_cnn)
r_adv_cnn = d_cnn * self.eps
#calc rnn lds
pred_hat = model.vat_forward_rnn(x.size(0), x_features + r_adv_rnn, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
rnn_lds = _kl_div(pred_hat, x_pred)
# calc cnn lds
pred_features_hat = model.vat_forward_cnn(x + r_adv_cnn)
pred_features_hat = pred_features_hat * mask
        l1_loss = torch.nn.L1Loss()
        cnn_lds = l1_loss(pred_features_hat, x_features)
lds = cnn_lds + rnn_lds
return lds, cnn_lds, rnn_lds
class VATLossSign(nn.Module):
def __init__(self, do_test_entropy, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(VATLossSign, self).__init__()
self.do_test_entropy = do_test_entropy
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, seq_len, mask):
with torch.no_grad():
pred = model.vat_forward(x, seq_len)
pred = pred * mask
pred = F.softmax(pred, dim=2).view(-1, pred.size()[-1])
# prepare random unit tensor
d = torch.rand(x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d = _l2_normalize(d)
with _disable_tracking_bn_stats(model):
# calc adversarial direction
for _ in range(self.ip):
d.requires_grad_()
pred_hat = model.vat_forward(x + self.xi * d, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + self.xi * d)
#adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
adv_distance = _kl_div(pred_hat, pred, mask)
adv_distance.backward()
d = _l2_normalize(d.grad)
model.zero_grad()
# calc LDS
d = torch.sign(d)
r_adv = d * self.eps
pred_hat = model.vat_forward(x + r_adv, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + r_adv)
#lds = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
lds = _kl_div(pred_hat, pred, mask)
if self.do_test_entropy:
lds += _entropy(pred_hat, mask)
return lds
class VATLossSignOld(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
        super(VATLossSignOld, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, seq_len, mask):
with torch.no_grad():
pred = model.vat_forward(x, seq_len)
pred = pred * mask
pred = F.softmax(pred, dim=2).view(-1, pred.size()[-1])
# prepare random unit tensor
d = torch.rand(x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
d = _l2_normalize(d)
with _disable_tracking_bn_stats(model):
# calc adversarial direction
for _ in range(self.ip):
d.requires_grad_()
pred_hat = model.vat_forward(x + self.xi * d, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + self.xi * d)
#adv_distance = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
adv_distance = _kl_div(pred_hat, pred)
adv_distance.backward()
d = _l2_normalize(d.grad)
model.zero_grad()
# calc LDS
d = torch.sign(d)
r_adv = d * self.eps
pred_hat = model.vat_forward(x + r_adv, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + r_adv)
#lds = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
lds = _kl_div(pred_hat, pred)
return lds
class RandomLoss(nn.Module):
def __init__(self, xi=10.0, eps=1.0, ip=1):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(RandomLoss, self).__init__()
self.xi = xi
self.eps = eps
self.ip = ip
def forward(self, model, x, seq_len, mask):
with torch.no_grad():
pred = model.vat_forward(x, seq_len)
pred = pred * mask
pred = F.softmax(pred, dim=2).view(-1, pred.size()[-1])
# prepare random unit tensor
d = torch.rand(x.shape).to(
torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
# calc LDS
d = torch.sign(d)
r_adv = d * self.eps
pred_hat = model.vat_forward(x + r_adv, seq_len)
pred_hat = pred_hat * mask
pred_hat = F.log_softmax(pred_hat, dim=2).view(-1, pred_hat.size()[-1])
#pred_hat = model(x + r_adv)
#lds = _kl_div(F.log_softmax(pred_hat, dim=1), pred)
lds = _kl_div(pred_hat, pred)
return lds
class PseudoLabel(nn.Module):
def __init__(self, confidence_thresh):
"""VAT loss
:param xi: hyperparameter of VAT (default: 10.0)
:param eps: hyperparameter of VAT (default: 1.0)
:param ip: iteration times of computing adv noise (default: 1)
"""
super(PseudoLabel, self).__init__()
self.confidence_thresh = confidence_thresh
def forward(self, model, x, seq_len, mask):
pred = model.vat_forward(x, seq_len)
pred = pred * mask
pred = F.softmax(pred, dim=2).view(-1, pred.size()[-1])
np_preds = pred.cpu().data.numpy()
indices, classes = np.where(np_preds > self.confidence_thresh)
if len(indices) > 0:
indices = Variable(torch.from_numpy(indices).cuda())
labels = Variable(torch.from_numpy(classes).cuda())
strong_preds = pred[indices]
nll_loss = torch.nn.NLLLoss()
loss = nll_loss(strong_preds, labels)
return loss.cpu()
else:
return 0
| 38.757143 | 128 | 0.565011 | 2,975 | 21,704 | 3.844034 | 0.05479 | 0.08447 | 0.032529 | 0.035414 | 0.80063 | 0.777894 | 0.752886 | 0.726478 | 0.705491 | 0.693861 | 0 | 0.016645 | 0.324595 | 21,704 | 559 | 129 | 38.826476 | 0.76349 | 0.166697 | 0 | 0.615169 | 0 | 0 | 0.005443 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070225 | false | 0 | 0.019663 | 0 | 0.157303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a3719461c944d994c377c4499b9b5bafd7b358e2 | 26 | py | Python | Python/Subtitle_Downloader/__init__.py | CharvyJain/Rotten-Scripts | c9b8f7dde378620e4a82eae7aacec53f1eeea3c5 | [
"MIT"
] | 7 | 2020-07-18T15:29:20.000Z | 2021-03-23T15:09:51.000Z | Python/Subtitle_Downloader/__init__.py | SKAUL05/Rotten-Scripts | c44e69754bbecb8a547fe2cc3a29be5acf97c46a | [
"MIT"
] | 3 | 2022-01-15T07:33:28.000Z | 2022-03-24T04:23:03.000Z | Python/Subtitle_Downloader/__init__.py | SKAUL05/Rotten-Scripts | c44e69754bbecb8a547fe2cc3a29be5acf97c46a | [
"MIT"
] | 1 | 2021-01-28T07:58:26.000Z | 2021-01-28T07:58:26.000Z | from .subtitle import main | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6e650710e02e5ae13ec04c6abf239a0f873465ce | 100 | py | Python | blendSupports/Meshs/__init__.py | sbaker-dev/blendSupports | 42f9913f409c9e0d6bc39bde11ba6431b1a2ff30 | [
"MIT"
] | null | null | null | blendSupports/Meshs/__init__.py | sbaker-dev/blendSupports | 42f9913f409c9e0d6bc39bde11ba6431b1a2ff30 | [
"MIT"
] | null | null | null | blendSupports/Meshs/__init__.py | sbaker-dev/blendSupports | 42f9913f409c9e0d6bc39bde11ba6431b1a2ff30 | [
"MIT"
] | null | null | null | from .mesh_ref import make_mesh
from .text import make_text
from .graph_axis import make_graph_axis
| 25 | 39 | 0.85 | 18 | 100 | 4.388889 | 0.444444 | 0.379747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 100 | 3 | 40 | 33.333333 | 0.897727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
42df6350281ced46b5692092157f68400226d105 | 19 | py | Python | multimodal/clip/__init__.py | sithu31296/multimodal | 78f57956cc84273579eb9e2e2be2a58fa1f38814 | [
"MIT"
] | 385 | 2020-10-26T13:12:11.000Z | 2021-10-07T15:14:48.000Z | multimodal/clip/__init__.py | sithu31296/multimodal | 78f57956cc84273579eb9e2e2be2a58fa1f38814 | [
"MIT"
] | 24 | 2020-10-29T13:16:31.000Z | 2021-08-31T06:47:33.000Z | multimodal/clip/__init__.py | sithu31296/multimodal | 78f57956cc84273579eb9e2e2be2a58fa1f38814 | [
"MIT"
] | 45 | 2020-10-29T15:25:19.000Z | 2021-09-05T21:50:57.000Z | from .clip import * | 19 | 19 | 0.736842 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6e0bea132e4b61f3ac97b8c32a145b7faf76df4f | 217 | py | Python | tests/test_carveme.py | acbweiss/carveme | 08f44e3e180730a881fbd73f8af03fa8e1f5895a | [
"Apache-2.0"
] | 84 | 2018-01-13T15:38:26.000Z | 2022-02-12T14:31:05.000Z | tests/test_carveme.py | acbweiss/carveme | 08f44e3e180730a881fbd73f8af03fa8e1f5895a | [
"Apache-2.0"
] | 133 | 2017-09-19T14:58:23.000Z | 2022-03-16T12:04:15.000Z | tests/test_carveme.py | acbweiss/carveme | 08f44e3e180730a881fbd73f8af03fa8e1f5895a | [
"Apache-2.0"
] | 48 | 2017-09-19T16:00:47.000Z | 2022-03-24T13:47:54.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Tests for `carveme` package."""
import unittest
from carveme import carveme
class TestCarveme(unittest.TestCase):
"""Tests for `carveme` package."""
pass
| 14.466667 | 38 | 0.658986 | 26 | 217 | 5.5 | 0.692308 | 0.111888 | 0.20979 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005587 | 0.175115 | 217 | 14 | 39 | 15.5 | 0.793296 | 0.460829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
6e3ddb397d1bf64d7c08134c0c6c33c64b211aee | 1,051 | py | Python | bukber/models.py | ppabcd/django-bukber | 8a5d272e988a63082977deb5ba026876d4c70ee4 | [
"BSD-3-Clause"
] | null | null | null | bukber/models.py | ppabcd/django-bukber | 8a5d272e988a63082977deb5ba026876d4c70ee4 | [
"BSD-3-Clause"
] | null | null | null | bukber/models.py | ppabcd/django-bukber | 8a5d272e988a63082977deb5ba026876d4c70ee4 | [
"BSD-3-Clause"
] | null | null | null | from django.db import models
from django.utils import timezone
# Create your models here.
class Kelas(models.Model):
nama = models.CharField(max_length=200)
created_at = models.DateTimeField(editable=False)
updated_at = models.DateTimeField()
def save(self):
if self.id:
self.updated_at = timezone.now()
else:
self.updated_at = timezone.now()
self.created_at = timezone.now()
super().save()
def __str__(self):
return self.nama
class Peserta(models.Model):
nama = models.CharField(max_length=200)
kelas = models.ForeignKey('Kelas', on_delete=models.CASCADE)
nominal = models.IntegerField()
created_at = models.DateTimeField(editable=False)
updated_at = models.DateTimeField()
def save(self):
if self.id:
self.updated_at = timezone.now()
else:
self.updated_at = timezone.now()
self.created_at = timezone.now()
super().save()
def __str__(self):
return self.nama
| 26.275 | 64 | 0.633682 | 125 | 1,051 | 5.16 | 0.328 | 0.083721 | 0.12093 | 0.130233 | 0.741085 | 0.741085 | 0.741085 | 0.741085 | 0.610853 | 0.610853 | 0 | 0.007702 | 0.258801 | 1,051 | 39 | 65 | 26.948718 | 0.820282 | 0.022835 | 0 | 0.8 | 0 | 0 | 0.004878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.066667 | 0.066667 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
280e5c9629fe9185c4c9dae7b7d2676ce16a85a7 | 38 | py | Python | moda/models/stl/__init__.py | Patte1808/moda | 312c9594754ae0f6d17cbfafaa2c4c790c58efe5 | [
"MIT"
] | null | null | null | moda/models/stl/__init__.py | Patte1808/moda | 312c9594754ae0f6d17cbfafaa2c4c790c58efe5 | [
"MIT"
] | null | null | null | moda/models/stl/__init__.py | Patte1808/moda | 312c9594754ae0f6d17cbfafaa2c4c790c58efe5 | [
"MIT"
] | null | null | null | from moda.models.stl import stl_model
| 19 | 37 | 0.842105 | 7 | 38 | 4.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2832a65a5061020c837ca4e4d03271dc5d234dc2 | 151 | py | Python | continual_learning/backbone_networks/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | continual_learning/backbone_networks/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | continual_learning/backbone_networks/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | from .resnet.resnet import *
from .resnet.torch_resnet import *
from .alexnet import AlexNet
from .lenet import LeNet, LeNet_300_100
from .vgg import * | 30.2 | 39 | 0.794702 | 23 | 151 | 5.086957 | 0.391304 | 0.17094 | 0.273504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045802 | 0.13245 | 151 | 5 | 40 | 30.2 | 0.847328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28397f5d1262c53eaf574f2535bad6003f1068cc | 21 | py | Python | models/__init__.py | hhj1897/fan_training | 5882f9edf2f1a07c80a6d1f3341a7cf1d348e217 | [
"MIT"
] | 1 | 2021-12-11T21:31:57.000Z | 2021-12-11T21:31:57.000Z | models/__init__.py | hhj1897/fan_training | 5882f9edf2f1a07c80a6d1f3341a7cf1d348e217 | [
"MIT"
] | null | null | null | models/__init__.py | hhj1897/fan_training | 5882f9edf2f1a07c80a6d1f3341a7cf1d348e217 | [
"MIT"
] | 1 | 2021-12-11T21:31:49.000Z | 2021-12-11T21:31:49.000Z | from .fan import FAN
| 10.5 | 20 | 0.761905 | 4 | 21 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
284092cc0cdfbe3cc230105f4b35fef6cb577976 | 5,783 | py | Python | suspect/processing/phase.py | hjiang1/suspect | f8b320b16bbd73a95d58eea1660921d6cad16f36 | [
"MIT"
] | 16 | 2016-08-31T21:05:06.000Z | 2022-02-06T12:48:33.000Z | suspect/processing/phase.py | hjiang1/suspect | f8b320b16bbd73a95d58eea1660921d6cad16f36 | [
"MIT"
] | 141 | 2016-07-28T21:34:17.000Z | 2022-03-30T09:00:36.000Z | suspect/processing/phase.py | hjiang1/suspect | f8b320b16bbd73a95d58eea1660921d6cad16f36 | [
"MIT"
] | 21 | 2016-08-04T14:54:19.000Z | 2022-03-29T16:04:08.000Z | import lmfit
import numpy as np


def mag_real(data, *args, range_hz=None, range_ppm=None):
    """
    Estimates the zero and first order phase parameters which minimise the
    difference between the real part of the spectrum and the magnitude. Note
    that these are the phase correction terms, designed to be used directly
    in the adjust_phase() function without negation.

    Parameters
    ----------
    data : MRSBase
        The data to be phased
    range_hz : tuple (low, high)
        The frequency range in Hertz over which to compare the spectra
    range_ppm : tuple (low, high)
        The frequency range in PPM over which to compare the spectra. range_hz
        and range_ppm cannot both be defined.

    Returns
    -------
    phi0 : float
        The estimated zero order phase correction
    phi1 : float
        The estimated first order phase correction
    """
    if range_hz is not None and range_ppm is not None:
        raise KeyError("Cannot specify both range_hz and range_ppm")
    if range_hz is not None:
        frequency_slice = data.slice_hz(*range_hz)
    elif range_ppm is not None:
        frequency_slice = data.slice_ppm(*range_ppm)
    else:
        frequency_slice = slice(0, data.np)

    def single_spectrum_version(spectrum):
        def residual(pars):
            par_vals = pars.valuesdict()
            phased_data = spectrum.adjust_phase(par_vals['phi0'],
                                                par_vals['phi1'])
            diff = np.real(phased_data) - np.abs(spectrum)
            return diff[frequency_slice]

        params = lmfit.Parameters()
        params.add('phi0', value=0, min=-np.pi, max=np.pi)
        params.add('phi1', value=0.0, min=-0.01, max=0.25)
        result = lmfit.minimize(residual, params)
        return result.params['phi0'].value, result.params['phi1'].value

    return np.apply_along_axis(single_spectrum_version,
                               axis=-1,
                               arr=data.spectrum())


def ernst(data):
    """
    Estimates the zero and first order phase by minimising the integral of
    the imaginary part of the spectrum. Note that these are the phase
    correction terms, designed to be used directly in the adjust_phase()
    function without negation.

    Parameters
    ----------
    data : MRSBase
        The data to be phased

    Returns
    -------
    phi0 : float
        The estimated zero order phase correction
    phi1 : float
        The estimated first order phase correction
    """
    def residual(pars):
        par_vals = pars.valuesdict()
        phased_data = data.adjust_phase(par_vals['phi0'],
                                        par_vals['phi1'])
        return np.sum(phased_data.spectrum().imag)

    params = lmfit.Parameters()
    params.add('phi0', value=0, min=-np.pi, max=np.pi)
    params.add('phi1', value=0.0, min=-0.005, max=0.1)
    result = lmfit.minimize(residual, params, method='simplex')
    return result.params['phi0'].value, result.params['phi1'].value


def acme(data, *args, range_hz=None, range_ppm=None, gamma=100):
    """
    Estimates the zero and first order phase using the ACME algorithm, which
    minimises the entropy of the real part of the spectrum. Note that these
    are the phase correction terms, designed to be used directly in the
    adjust_phase() function without negation.

    Parameters
    ----------
    data : MRSBase
        The data to be phased
    range_hz : tuple (low, high)
        The frequency range in Hertz over which to compare the spectra
    range_ppm : tuple (low, high)
        The frequency range in PPM over which to compare the spectra. range_hz
        and range_ppm cannot both be defined.
    gamma : float
        Weighting factor for penalty function.

    Returns
    -------
    phi0 : float
        The estimated zero order phase correction
    phi1 : float
        The estimated first order phase correction
    """
    if range_hz is not None and range_ppm is not None:
        raise KeyError("Cannot specify both range_hz and range_ppm")
    if range_hz is not None:
        frequency_slice = data.slice_hz(*range_hz)
    elif range_ppm is not None:
        frequency_slice = data.slice_ppm(*range_ppm)
    else:
        frequency_slice = slice(0, data.np)

    def single_spectrum_version(spectrum):
        def residual(pars):
            par_vals = pars.valuesdict()
            phased_data = spectrum.adjust_phase(par_vals['phi0'],
                                                par_vals['phi1'])
            r = phased_data.real[frequency_slice]
            r = r / np.sum(r)
            derivative = np.abs(r[1:] - r[:-1])
            derivative_norm = derivative / np.sum(derivative)
            # make sure the entropy doesn't blow up by removing 0 values
            derivative_norm[derivative_norm == 0] = 1
            entropy = -np.sum(derivative_norm * np.log(derivative_norm))
            # penalty function
            p = np.sum(r[r < 0] ** 2)
            return entropy + gamma * p

        params = lmfit.Parameters()
        params.add('phi0', value=0.0, min=-np.pi, max=np.pi)
        params.add('phi1', value=0.001, min=-0.005, max=0.25)
        result = lmfit.minimize(residual, params, method='simplex')
        return result.params['phi0'].value, result.params['phi1'].value

    return np.apply_along_axis(single_spectrum_version,
                               -1,
                               data.spectrum())
| 35.478528 | 78 | 0.621304 | 769 | 5,783 | 4.564369 | 0.174252 | 0.035897 | 0.020513 | 0.025641 | 0.832194 | 0.828205 | 0.819373 | 0.809687 | 0.758405 | 0.745299 | 0 | 0.016788 | 0.289296 | 5,783 | 162 | 79 | 35.697531 | 0.837226 | 0.386305 | 0 | 0.61194 | 0 | 0 | 0.051988 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119403 | false | 0 | 0.029851 | 0 | 0.268657 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
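The entropy-plus-penalty residual minimised inside `acme()` above can be restated without numpy or lmfit. A stdlib-only sketch of the objective for a real-valued spectrum, for illustration (the real code works on numpy arrays inside an lmfit minimisation; `acme_objective` is a hypothetical name):

```python
import math


def acme_objective(real_spectrum, gamma=100):
    """Entropy of the normalised spectral derivative plus a penalty on
    negative intensities, mirroring the residual() inside acme()."""
    total = sum(real_spectrum)
    r = [v / total for v in real_spectrum]
    # absolute first difference, then normalise to a distribution
    deriv = [abs(r[i + 1] - r[i]) for i in range(len(r) - 1)]
    d_sum = sum(deriv)
    d_norm = [d / d_sum for d in deriv]
    # 0 * log(0) is taken as 0, matching the code's trick of mapping 0 -> 1
    entropy = -sum(d * math.log(d) for d in d_norm if d > 0)
    # well-phased absorption spectra have no negative lobes, so penalise them
    penalty = gamma * sum(v ** 2 for v in r if v < 0)
    return entropy + penalty


well_phased = [0.0, 1.0, 4.0, 1.0, 0.0]     # symmetric positive peak
badly_phased = [0.0, 1.0, 4.0, -1.0, 0.0]   # dispersive negative lobe
assert acme_objective(badly_phased) > acme_objective(well_phased)
```

The `gamma`-weighted penalty is what steers the optimiser away from dispersive line shapes whose derivative entropy alone might look acceptable.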
2843ba3aa64e127ac4da4877930f346f9d4cad44 | 7,761 | py | Python | tests/test_response_handler.py | Alex-Weatherhead/riot_api | 2d589f57cd46e0f7c54de29245078c730acd710f | [
"MIT"
] | null | null | null | tests/test_response_handler.py | Alex-Weatherhead/riot_api | 2d589f57cd46e0f7c54de29245078c730acd710f | [
"MIT"
] | null | null | null | tests/test_response_handler.py | Alex-Weatherhead/riot_api | 2d589f57cd46e0f7c54de29245078c730acd710f | [
"MIT"
] | null | null | null | import unittest
from riot_api.api import _response_handler
class ResponseMock:
def __init__ (self, status_code, headers=None):
self.status_code = status_code
self.headers = headers
class TestResponseHandler (unittest.TestCase):
def test_handle_yields_successful_equals_true_when_status_code_is_200 (self):
STATUS_CODE = 200
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertTrue(result["successful"])
def test_handle_yields_response_object_when_status_code_is_200 (self):
STATUS_CODE = 200
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertEqual(result["response"], RESPONSE)
def test_handle_yields_successful_equals_false_when_status_code_is_400 (self):
STATUS_CODE = 400
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_false_when_status_code_is_400 (self):
STATUS_CODE = 400
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["retry"])
def test_handle_yields_successful_equals_false_when_status_code_is_401 (self):
STATUS_CODE = 401
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_false_when_status_code_is_401 (self):
STATUS_CODE = 401
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["retry"])
def test_handle_yields_successful_equals_false_when_status_code_is_403 (self):
STATUS_CODE = 403
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_false_when_status_code_is_403 (self):
STATUS_CODE = 403
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["retry"])
def test_handle_yields_successful_equals_false_when_status_code_is_404 (self):
STATUS_CODE = 404
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_true_when_status_code_is_404 (self):
STATUS_CODE = 404
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertTrue(result["retry"])
def test_handle_yields_default_delay_when_status_code_is_404 (self):
STATUS_CODE = 404
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertEqual(result["delay"], _response_handler._DEFAULT_DELAY_FOR_STATUS_CODE_404)
def test_handle_yields_successful_equals_false_when_status_code_is_415 (self):
STATUS_CODE = 415
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_false_when_status_code_is_415 (self):
STATUS_CODE = 415
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["retry"])
def test_handle_yields_successful_equals_false_when_status_code_is_429 (self):
STATUS_CODE = 429
HEADERS = {
"Retry-After": 0
}
RESPONSE = ResponseMock(STATUS_CODE, headers=HEADERS)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_true_when_status_code_is_429 (self):
STATUS_CODE = 429
HEADERS = {
"Retry-After": 0
}
RESPONSE = ResponseMock(STATUS_CODE, headers=HEADERS)
result = _response_handler.handle(RESPONSE)
self.assertTrue(result["retry"])
def test_handle_yields_retry_after_header_as_delay_when_status_code_is_429 (self):
STATUS_CODE = 429
HEADERS = {
"Retry-After": 10
}
RESPONSE = ResponseMock(STATUS_CODE, headers=HEADERS)
result = _response_handler.handle(RESPONSE)
self.assertEqual(result["delay"], HEADERS["Retry-After"])
def test_handle_yields_successful_equals_false_when_status_code_is_500 (self):
STATUS_CODE = 500
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_true_when_status_code_is_500 (self):
STATUS_CODE = 500
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertTrue(result["retry"])
def test_handle_yields_default_delay_when_status_code_is_500 (self):
STATUS_CODE = 500
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertEqual(result["delay"], _response_handler._DEFAULT_DELAY_FOR_STATUS_CODE_500)
def test_handle_yields_successful_equals_false_when_status_code_is_502 (self):
STATUS_CODE = 502
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_true_when_status_code_is_502 (self):
STATUS_CODE = 502
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertTrue(result["retry"])
def test_handle_yields_default_delay_when_status_code_is_502 (self):
STATUS_CODE = 502
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertEqual(result["delay"], _response_handler._DEFAULT_DELAY_FOR_STATUS_CODE_502)
def test_handle_yields_successful_equals_false_when_status_code_is_503 (self):
STATUS_CODE = 503
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_true_when_status_code_is_503 (self):
STATUS_CODE = 503
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertTrue(result["retry"])
def test_handle_yields_default_delay_when_status_code_is_503 (self):
STATUS_CODE = 503
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertEqual(result["delay"], _response_handler._DEFAULT_DELAY_FOR_STATUS_CODE_503)
def test_handle_yields_successful_equals_false_when_status_code_is_504 (self):
STATUS_CODE = 504
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertFalse(result["successful"])
def test_handle_yields_retry_equals_true_when_status_code_is_504 (self):
STATUS_CODE = 504
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertTrue(result["retry"])
def test_handle_yields_default_delay_when_status_code_is_504 (self):
STATUS_CODE = 504
RESPONSE = ResponseMock(STATUS_CODE)
result = _response_handler.handle(RESPONSE)
self.assertEqual(result["delay"], _response_handler._DEFAULT_DELAY_FOR_STATUS_CODE_504) | 31.677551 | 95 | 0.718335 | 900 | 7,761 | 5.707778 | 0.057778 | 0.179093 | 0.08176 | 0.103562 | 0.949971 | 0.948024 | 0.934787 | 0.934787 | 0.934787 | 0.930504 | 0 | 0.030461 | 0.208994 | 7,761 | 245 | 96 | 31.677551 | 0.80632 | 0 | 0 | 0.698718 | 0 | 0 | 0.031178 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 1 | 0.185897 | false | 0 | 0.012821 | 0 | 0.211538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
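The per-status-code expectations exercised by the tests above are naturally table-driven. A condensed, self-contained sketch follows; the `handle()` below is a hypothetical stand-in reconstructed from the assertions (the real `_response_handler` module is not shown here, and its default delay values are assumed):

```python
# Hypothetical stand-in for _response_handler.handle(), for illustration only.
def handle(status_code, headers=None):
    if status_code == 200:
        return {"successful": True}
    result = {"successful": False, "retry": False}
    if status_code == 429:
        # rate-limited: retry after the interval the server asked for
        result.update(retry=True, delay=headers["Retry-After"])
    elif status_code in (404, 500, 502, 503, 504):
        # transient or possibly-transient failures: retry after a default delay
        result.update(retry=True, delay=5)  # default delay value is assumed
    return result


# One table replaces a test method per status code.
CASES = [
    (400, False), (401, False), (403, False), (415, False),
    (404, True), (500, True), (502, True), (503, True), (504, True),
]
for code, expect_retry in CASES:
    assert handle(code)["retry"] is expect_retry
assert handle(429, headers={"Retry-After": 10})["delay"] == 10
```

In `unittest`, the same condensation can be done with `self.subTest(code=code)` inside a loop, which keeps per-case failure reporting without the near-identical method bodies.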
285470d5580301217271de4b90c77c082d47b943 | 201 | py | Python | papyruslib/__init__.py | hrmorley34/papyrusplusplus | bdda49f586b9780aef41002bc519b273187fb6f7 | [
"MIT"
] | null | null | null | papyruslib/__init__.py | hrmorley34/papyrusplusplus | bdda49f586b9780aef41002bc519b273187fb6f7 | [
"MIT"
] | null | null | null | papyruslib/__init__.py | hrmorley34/papyrusplusplus | bdda49f586b9780aef41002bc519b273187fb6f7 | [
"MIT"
] | null | null | null | from .bases import Definition, PlayerMarker, Spreadsheet, Remote, Webhook
from . import helpers
__all__ = ["Definition", "PlayerMarker", "Spreadsheet", "Remote", "Webhook", "Definition", "helpers"]
| 28.714286 | 101 | 0.736318 | 19 | 201 | 7.578947 | 0.526316 | 0.305556 | 0.458333 | 0.541667 | 0.638889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119403 | 201 | 6 | 102 | 33.5 | 0.813559 | 0 | 0 | 0 | 0 | 0 | 0.313433 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
95556148fadeaee463dfe202a1320951679e9473 | 26 | py | Python | src/dwriteshapepy/__init__.py | microsoft/DWriteShapePy | ced3341bf2c76a09135491660311c09d725a8b35 | [
"MIT"
] | 9 | 2021-03-26T08:20:24.000Z | 2022-03-29T12:35:12.000Z | src/dwriteshapepy/__init__.py | paullinnerud/DWriteShapePy | f351de0de47818a7cff00ae9ba16d58086a76878 | [
"MIT"
] | 2 | 2021-07-14T13:39:59.000Z | 2021-12-22T11:48:24.000Z | src/dwriteshapepy/__init__.py | paullinnerud/DWriteShapePy | f351de0de47818a7cff00ae9ba16d58086a76878 | [
"MIT"
] | 2 | 2021-03-30T06:00:08.000Z | 2021-07-14T13:33:22.000Z | from .dwriteshape import * | 26 | 26 | 0.807692 | 3 | 26 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
95fe5b95a0c6e28964ab48ef953dc6c1ce110af9 | 4,021 | py | Python | test/unit/test_should_investigate.py | nicolaslazo/chess-puzzle-maker | 17ca484ab40f1e82d53b7ae93239b2abd37e4cdd | [
"MIT"
] | 34 | 2019-08-28T16:49:45.000Z | 2022-03-04T18:05:20.000Z | test/unit/test_should_investigate.py | nicolaslazo/chess-puzzle-maker | 17ca484ab40f1e82d53b7ae93239b2abd37e4cdd | [
"MIT"
] | 10 | 2019-11-27T14:15:25.000Z | 2021-03-30T07:46:24.000Z | test/unit/test_should_investigate.py | nicolaslazo/chess-puzzle-maker | 17ca484ab40f1e82d53b7ae93239b2abd37e4cdd | [
"MIT"
] | 14 | 2019-12-06T12:41:28.000Z | 2021-12-27T01:40:27.000Z | import unittest

from chess import Board
from chess.engine import Cp, Mate

from puzzlemaker.puzzle_finder import should_investigate

board = Board()


class TestShouldInvestigate(unittest.TestCase):

    def test_investigating_moderate_score_changes(self):
        score_changes = [
            [0, 200],
            [50, 200],
            [-50, 200],
        ]
        for a, b in score_changes:
            a = Cp(a)
            b = Cp(b)
            self.assertTrue(should_investigate(a, b, board))

    def test_investigating_major_score_changes(self):
        score_changes = [
            [0, 500],
            [100, 500],
            [100, -100],
        ]
        for a, b in score_changes:
            a = Cp(a)
            b = Cp(b)
            self.assertTrue(should_investigate(a, b, board))

    def test_investigating_even_position_to_mate(self):
        a = Cp(0)
        b = Mate(5)
        self.assertTrue(should_investigate(a, b, board))
        a = Cp(0)
        b = Mate(-5)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_minor_advantage_to_mate(self):
        a = Cp(100)
        b = Mate(5)
        self.assertTrue(should_investigate(a, b, board))
        a = Cp(-100)
        b = Mate(-5)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_major_advantage_to_getting_mated(self):
        a = Cp(700)
        b = Mate(-5)
        self.assertTrue(should_investigate(a, b, board))
        a = Cp(-700)
        b = Mate(5)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_major_advantage_to_major_disadvantage(self):
        a = Cp(700)
        b = Cp(-700)
        self.assertTrue(should_investigate(a, b, board))
        a = Cp(-700)
        b = Cp(700)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_major_advantage_to_even_position(self):
        a = Cp(700)
        b = Cp(0)
        self.assertTrue(should_investigate(a, b, board))
        a = Cp(-700)
        b = Cp(0)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_mate_threat_to_major_disadvantage(self):
        a = Mate(5)
        b = Cp(-700)
        self.assertTrue(should_investigate(a, b, board))
        a = Mate(-5)
        b = Cp(700)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_mate_threat_to_even_position(self):
        a = Mate(5)
        b = Cp(0)
        self.assertTrue(should_investigate(a, b, board))
        a = Mate(-5)
        b = Cp(0)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_mate_threat_to_getting_mated(self):
        a = Mate(1)
        b = Mate(-1)
        self.assertTrue(should_investigate(a, b, board))
        a = Mate(-1)
        b = Mate(1)
        self.assertTrue(should_investigate(a, b, board))

    def test_investigating_mate_threat_to_checkmate(self):
        a = Mate(1)
        b = Mate(0)
        self.assertFalse(should_investigate(a, b, board))

    def test_not_investigating_insignificant_score_changes(self):
        score_changes = [
            [0, 0],
            [-50, 50],
            [50, -50],
            [-70, -70],
            [70, 70],
        ]
        for a, b in score_changes:
            a = Cp(a)
            b = Cp(b)
            self.assertFalse(should_investigate(a, b, board))

    def test_not_investigating_major_advantage_to_mate_threat(self):
        a = Cp(900)
        b = Mate(5)
        self.assertFalse(should_investigate(a, b, board))
        a = Cp(-900)
        b = Mate(-5)
        self.assertFalse(should_investigate(a, b, board))

    def test_not_investigating_even_position(self):
        board = Board("4k3/8/3n4/3N4/8/8/4K3/8 w - - 0 1")
        a = Cp(0)
        b = Cp(0)
        self.assertFalse(should_investigate(a, b, board))
        a = Cp(9)
        b = Cp(9)
        self.assertFalse(should_investigate(a, b, board))


if __name__ == '__main__':
    unittest.main()
| 27.353741 | 71 | 0.575479 | 518 | 4,021 | 4.245174 | 0.11583 | 0.027285 | 0.196453 | 0.207367 | 0.821282 | 0.773079 | 0.717599 | 0.699864 | 0.698954 | 0.679854 | 0 | 0.047482 | 0.30863 | 4,021 | 146 | 72 | 27.541096 | 0.743525 | 0 | 0 | 0.591304 | 0 | 0 | 0.010196 | 0.00572 | 0 | 0 | 0 | 0 | 0.208696 | 1 | 0.121739 | false | 0 | 0.034783 | 0 | 0.165217 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
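The centipawn-only cases in the tests above imply a simple threshold on the score swing: changes of 150 cp or more are investigated, changes of 100 cp or less are not. A hypothetical sketch of that core (the real `should_investigate` also handles `Mate` scores and the board position; the exact threshold is an assumption consistent with the test cases, not taken from the implementation):

```python
def should_investigate_cp(a, b, threshold=150):
    """Investigate when the centipawn evaluation swings by >= threshold.

    Hypothetical reconstruction of the Cp-vs-Cp branch implied by the tests;
    a and b are plain centipawn integers here, not chess.engine.Cp objects.
    """
    return abs(b - a) >= threshold


# The "investigating" score-change cases from the tests all cross the threshold...
for a, b in [(0, 200), (50, 200), (-50, 200), (0, 500), (100, 500), (100, -100)]:
    assert should_investigate_cp(a, b)

# ...while the "insignificant" cases all stay under it.
for a, b in [(0, 0), (-50, 50), (50, -50), (-70, -70), (70, 70)]:
    assert not should_investigate_cp(a, b)
```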
254c52bea569929462a780cd2d3f6c21abc6b2b5 | 148 | py | Python | tests/test_package_import.py | rolando/parsel-cli | c211fe5ddfc53ac965acaeffde00082ebc57081d | [
"MIT"
] | 5 | 2016-07-26T18:26:29.000Z | 2017-04-28T14:47:05.000Z | tests/test_package_import.py | rmax/parsel-cli | c211fe5ddfc53ac965acaeffde00082ebc57081d | [
"MIT"
] | 2 | 2016-07-28T17:27:27.000Z | 2016-08-15T20:35:21.000Z | tests/test_package_import.py | rmax/parsel-cli | c211fe5ddfc53ac965acaeffde00082ebc57081d | [
"MIT"
] | 3 | 2016-07-26T19:50:45.000Z | 2017-03-17T17:36:44.000Z | import parsel_cli
def test_package_metadata():
    assert parsel_cli.__author__
    assert parsel_cli.__email__
    assert parsel_cli.__version__
| 18.5 | 33 | 0.797297 | 19 | 148 | 5.263158 | 0.578947 | 0.36 | 0.45 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 148 | 7 | 34 | 21.142857 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.6 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
254ee93de9badfce23215ecadcfc63e4dc555223 | 17,656 | py | Python | tests/test_app_routers_files_GET.py | BoostryJP/ibet-Prime | 924e7f8da4f8feea0a572e8b5532e09bcdf2dc99 | [
"Apache-2.0"
] | 2 | 2021-08-19T12:35:25.000Z | 2022-02-16T04:13:38.000Z | tests/test_app_routers_files_GET.py | BoostryJP/ibet-Prime | 924e7f8da4f8feea0a572e8b5532e09bcdf2dc99 | [
"Apache-2.0"
] | 46 | 2021-09-02T03:22:05.000Z | 2022-03-31T09:20:00.000Z | tests/test_app_routers_files_GET.py | BoostryJP/ibet-Prime | 924e7f8da4f8feea0a572e8b5532e09bcdf2dc99 | [
"Apache-2.0"
] | 1 | 2021-11-17T23:18:27.000Z | 2021-11-17T23:18:27.000Z | """
Copyright BOOSTRY Co., Ltd.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
SPDX-License-Identifier: Apache-2.0
"""
from datetime import datetime
from app.model.db import UploadFile
class TestAppRoutersFilesGET:
# target API endpoint
base_url = "/files"
issuer_address = "0x1234567890123456789012345678900000000001"
token_address = "0x1234567890123456789012345678900000000011"
file_content = """test data
12345 67890
あいうえお かきくけこ
😃😃😃😃
abc def"""
###########################################################################
# Normal Case
###########################################################################
# <Normal_1>
# 0 record
def test_normal_1(self, client, db):
# request target api
resp = client.get(self.base_url)
# assertion
assert resp.status_code == 200
assert resp.json() == {
"result_set": {
"count": 0,
"offset": None,
"limit": None,
"total": 0
},
"files": []
}
# <Normal_2>
# 1 record
def test_normal_2(self, client, db):
file_content_1_bin = self.file_content.encode()
# prepare data
_upload_file = UploadFile()
_upload_file.file_id = "file_id_1"
_upload_file.issuer_address = self.issuer_address
_upload_file.relation = self.token_address
_upload_file.file_name = "file_name_1"
_upload_file.content = file_content_1_bin
_upload_file.content_size = len(file_content_1_bin)
_upload_file.description = "description_1"
_upload_file.created = datetime.strptime("2022/01/01 15:20:30.000001", '%Y/%m/%d %H:%M:%S.%f') # JST 2022/01/02
db.add(_upload_file)
# request target api
resp = client.get(
self.base_url,
)
# assertion
assert resp.status_code == 200
assert resp.json() == {
"result_set": {
"count": 1,
"offset": None,
"limit": None,
"total": 1
},
"files": [
{
"file_id": "file_id_1",
"issuer_address": self.issuer_address,
"relation": self.token_address,
"file_name": "file_name_1",
"content_size": len(file_content_1_bin),
"description": "description_1",
"created": "2022-01-02T00:20:30.000001+09:00",
},
]
}
# <Normal_3>
# 2 record
def test_normal_3(self, client, db):
file_content_bin = self.file_content.encode()
# prepare data
_upload_file = UploadFile()
_upload_file.file_id = "file_id_1"
_upload_file.issuer_address = self.issuer_address
_upload_file.relation = self.token_address
_upload_file.file_name = "file_name_1"
_upload_file.content = file_content_bin
_upload_file.content_size = len(file_content_bin)
_upload_file.description = "description_1"
_upload_file.created = datetime.strptime("2022/01/01 15:20:30.000001", '%Y/%m/%d %H:%M:%S.%f') # JST 2022/01/02
db.add(_upload_file)
_upload_file = UploadFile()
_upload_file.file_id = "file_id_2"
_upload_file.issuer_address = "0x1234567890123456789012345678900000000001"
_upload_file.relation = self.token_address
_upload_file.file_name = "file_name_2"
_upload_file.content = file_content_bin
_upload_file.content_size = len(file_content_bin)
_upload_file.description = "description_2"
_upload_file.created = datetime.strptime("2022/01/02 00:20:30.000001", '%Y/%m/%d %H:%M:%S.%f') # JST 2022/01/02
db.add(_upload_file)
# request target api
resp = client.get(
self.base_url,
)
# assertion
assert resp.status_code == 200
assert resp.json() == {
"result_set": {
"count": 2,
"offset": None,
"limit": None,
"total": 2
},
"files": [
{
"file_id": "file_id_2",
"issuer_address": self.issuer_address,
"relation": self.token_address,
"file_name": "file_name_2",
"content_size": len(file_content_bin),
"description": "description_2",
"created": "2022-01-02T09:20:30.000001+09:00",
},
{
"file_id": "file_id_1",
"issuer_address": self.issuer_address,
"relation": self.token_address,
"file_name": "file_name_1",
"content_size": len(file_content_bin),
"description": "description_1",
"created": "2022-01-02T00:20:30.000001+09:00",
},
]
}
# <Normal_4_1>
# Search Filter
# issuer_address
def test_normal_4_1(self, client, db):
file_content_bin = self.file_content.encode()
# prepare data
_upload_file = UploadFile()
_upload_file.file_id = "file_id_1"
_upload_file.issuer_address = self.issuer_address
_upload_file.relation = self.token_address
_upload_file.file_name = "file_name_1"
_upload_file.content = file_content_bin
_upload_file.content_size = len(file_content_bin)
_upload_file.description = "description_1"
_upload_file.created = datetime.strptime("2022/01/01 15:20:30.000001", '%Y/%m/%d %H:%M:%S.%f') # JST 2022/01/02
db.add(_upload_file)
_upload_file = UploadFile()
_upload_file.file_id = "file_id_2"
_upload_file.issuer_address = "0x1234567890123456789012345678900000000002" # not target
_upload_file.relation = self.token_address
_upload_file.file_name = "file_name_2"
_upload_file.content = file_content_bin
_upload_file.content_size = len(file_content_bin)
_upload_file.description = "description_2"
_upload_file.created = datetime.strptime("2022/01/02 00:20:30.000001", '%Y/%m/%d %H:%M:%S.%f') # JST 2022/01/02
db.add(_upload_file)
# request target api
resp = client.get(
self.base_url,
headers={
"issuer-address": self.issuer_address,
},
)
# assertion
assert resp.status_code == 200
assert resp.json() == {
"result_set": {
"count": 1,
"offset": None,
"limit": None,
"total": 2
},
"files": [
{
"file_id": "file_id_1",
"issuer_address": self.issuer_address,
"relation": self.token_address,
"file_name": "file_name_1",
"content_size": len(file_content_bin),
"description": "description_1",
"created": "2022-01-02T00:20:30.000001+09:00",
},
]
}
# <Normal_4_2>
# Search Filter
# relation
def test_normal_4_2(self, client, db):
file_content_bin = self.file_content.encode()
# prepare data
_upload_file = UploadFile()
_upload_file.file_id = "file_id_1"
_upload_file.issuer_address = self.issuer_address
_upload_file.relation = self.token_address
_upload_file.file_name = "file_name_1"
_upload_file.content = file_content_bin
_upload_file.content_size = len(file_content_bin)
_upload_file.description = "description_1"
_upload_file.created = datetime.strptime("2022/01/01 15:20:30.000001", '%Y/%m/%d %H:%M:%S.%f') # JST 2022/01/02
db.add(_upload_file)
_upload_file = UploadFile()
_upload_file.file_id = "file_id_2"
_upload_file.issuer_address = self.issuer_address
_upload_file.relation = "uuid_test_foo_bar" # not target
_upload_file.file_name = "file_name_2"
_upload_file.content = file_content_bin
_upload_file.content_size = len(file_content_bin)
_upload_file.description = "description_2"
_upload_file.created = datetime.strptime("2022/01/02 00:20:30.000001", '%Y/%m/%d %H:%M:%S.%f') # JST 2022/01/02
db.add(_upload_file)
# request target api
resp = client.get(
self.base_url,
params={
"relation": self.token_address,
},
)
# assertion
assert resp.status_code == 200
assert resp.json() == {
"result_set": {
"count": 1,
"offset": None,
"limit": None,
"total": 2
},
"files": [
{
"file_id": "file_id_1",
"issuer_address": self.issuer_address,
"relation": self.token_address,
"file_name": "file_name_1",
"content_size": len(file_content_bin),
"description": "description_1",
"created": "2022-01-02T00:20:30.000001+09:00",
},
]
}
    # <Normal_4_3>
    # Search Filter
    # file_name
    def test_normal_4_3(self, client, db):
        file_content_bin = self.file_content.encode()

        # prepare data
        _upload_file = UploadFile()
        _upload_file.file_id = "file_id_1"
        _upload_file.issuer_address = self.issuer_address
        _upload_file.relation = self.token_address
        _upload_file.file_name = "file_name_1"
        _upload_file.content = file_content_bin
        _upload_file.content_size = len(file_content_bin)
        _upload_file.description = "description_1"
        _upload_file.created = datetime.strptime("2022/01/01 15:20:30.000001", '%Y/%m/%d %H:%M:%S.%f')  # JST 2022/01/02
        db.add(_upload_file)

        _upload_file = UploadFile()
        _upload_file.file_id = "file_id_2"
        _upload_file.issuer_address = self.issuer_address
        _upload_file.relation = self.token_address
        _upload_file.file_name = "test_foo_bar_1"  # not target
        _upload_file.content = file_content_bin
        _upload_file.content_size = len(file_content_bin)
        _upload_file.description = "description_2"
        _upload_file.created = datetime.strptime("2022/01/02 00:20:30.000001", '%Y/%m/%d %H:%M:%S.%f')  # JST 2022/01/02
        db.add(_upload_file)

        # request target api
        resp = client.get(
            self.base_url,
            params={
                "file_name": "name",
            },
        )

        # assertion
        assert resp.status_code == 200
        assert resp.json() == {
            "result_set": {
                "count": 1,
                "offset": None,
                "limit": None,
                "total": 2
            },
            "files": [
                {
                    "file_id": "file_id_1",
                    "issuer_address": self.issuer_address,
                    "relation": self.token_address,
                    "file_name": "file_name_1",
                    "content_size": len(file_content_bin),
                    "description": "description_1",
                    "created": "2022-01-02T00:20:30.000001+09:00",
                },
            ]
        }
    # <Normal_5>
    # Pagination
    def test_normal_5(self, client, db):
        file_content_bin = self.file_content.encode()

        # prepare data
        _upload_file = UploadFile()
        _upload_file.file_id = "file_id_1"
        _upload_file.issuer_address = self.issuer_address
        _upload_file.relation = self.token_address
        _upload_file.file_name = "file_name_1"
        _upload_file.content = file_content_bin
        _upload_file.content_size = len(file_content_bin)
        _upload_file.description = "description_1"
        _upload_file.created = datetime.strptime("2022/01/01 15:20:30.000001", '%Y/%m/%d %H:%M:%S.%f')  # JST 2022/01/02
        db.add(_upload_file)

        _upload_file = UploadFile()
        _upload_file.file_id = "file_id_2"
        _upload_file.issuer_address = "0x1234567890123456789012345678900000000001"
        _upload_file.relation = self.token_address
        _upload_file.file_name = "file_name_2"
        _upload_file.content = file_content_bin
        _upload_file.content_size = len(file_content_bin)
        _upload_file.description = "description_2"
        _upload_file.created = datetime.strptime("2022/01/02 00:20:30.000001", '%Y/%m/%d %H:%M:%S.%f')  # JST 2022/01/02
        db.add(_upload_file)

        _upload_file = UploadFile()
        _upload_file.file_id = "file_id_3"
        _upload_file.issuer_address = "0x1234567890123456789012345678900000000001"
        _upload_file.relation = self.token_address
        _upload_file.file_name = "file_name_3"
        _upload_file.content = file_content_bin
        _upload_file.content_size = len(file_content_bin)
        _upload_file.description = "description_3"
        _upload_file.created = datetime.strptime("2022/01/02 15:20:30.000001", '%Y/%m/%d %H:%M:%S.%f')  # JST 2022/01/03
        db.add(_upload_file)

        _upload_file = UploadFile()
        _upload_file.file_id = "file_id_4"
        _upload_file.issuer_address = "0x1234567890123456789012345678900000000001"
        _upload_file.relation = self.token_address
        _upload_file.file_name = "file_name_4"
        _upload_file.content = file_content_bin
        _upload_file.content_size = len(file_content_bin)
        _upload_file.description = "description_4"
        _upload_file.created = datetime.strptime("2022/01/03 00:20:30.000001", '%Y/%m/%d %H:%M:%S.%f')  # JST 2022/01/03
        db.add(_upload_file)

        # request target api
        resp = client.get(
            self.base_url,
            params={
                "offset": 1,
                "limit": 2
            },
        )

        # assertion
        assert resp.status_code == 200
        assert resp.json() == {
            "result_set": {
                "count": 4,
                "offset": 1,
                "limit": 2,
                "total": 4
            },
            "files": [
                {
                    "file_id": "file_id_3",
                    "issuer_address": "0x1234567890123456789012345678900000000001",
                    "relation": self.token_address,
                    "file_name": "file_name_3",
                    "content_size": len(file_content_bin),
                    "description": "description_3",
                    "created": "2022-01-03T00:20:30.000001+09:00",
                },
                {
                    "file_id": "file_id_2",
                    "issuer_address": "0x1234567890123456789012345678900000000001",
                    "relation": self.token_address,
                    "file_name": "file_name_2",
                    "content_size": len(file_content_bin),
                    "description": "description_2",
                    "created": "2022-01-02T09:20:30.000001+09:00",
                },
            ]
        }
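The pagination assertion above implies a newest-first ordering by `created`, with `offset`/`limit` applied after sorting (four files, offset 1 and limit 2 yield `file_id_3` then `file_id_2`). A plain-Python sketch of that slicing logic, using a hypothetical `paginate` helper rather than the actual query implementation:

```python
def paginate(items, offset=None, limit=None):
    # Sort newest-first by created time, then slice [offset : offset + limit].
    ordered = sorted(items, key=lambda f: f["created"], reverse=True)
    start = offset or 0
    end = None if limit is None else start + limit
    return ordered[start:end]

# Four files with monotonically increasing created times, as in the fixture.
files = [{"file_id": f"file_id_{i}", "created": i} for i in range(1, 5)]
page = paginate(files, offset=1, limit=2)
print([f["file_id"] for f in page])  # ['file_id_3', 'file_id_2']
```

`result_set.count` and `result_set.total` both stay 4 because offset/limit only window the result, they do not change the match count.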
    ###########################################################################
    # Error Case
    ###########################################################################

    # <Error_1>
    # Parameter Error
    # Query
    def test_error_1(self, client, db):
        # request target API
        resp = client.get(
            self.base_url,
            params={
                "offset": "test",
                "limit": "test"
            },
        )

        # assertion
        assert resp.status_code == 422
        assert resp.json() == {
            "meta": {
                "code": 1,
                "title": "RequestValidationError"
            },
            "detail": [
                {
                    "loc": ["query", "offset"],
                    "msg": "value is not a valid integer",
                    "type": "type_error.integer"
                },
                {
                    "loc": ["query", "limit"],
                    "msg": "value is not a valid integer",
                    "type": "type_error.integer"
                },
            ]
        }
    # <Error_2>
    # Parameter Error
    # Header
    def test_error_2(self, client, db):
        # request target API
        resp = client.get(
            self.base_url,
            headers={
                "issuer-address": "test",
            },
        )

        # assertion
        assert resp.status_code == 422
        assert resp.json() == {
            "meta": {
                "code": 1,
                "title": "RequestValidationError"
            },
            "detail": [
                {
                    "loc": ["header", "issuer-address"],
                    "msg": "issuer-address is not a valid address",
                    "type": "value_error"
                }
            ]
        }