# test/__init__.py — from repo wanidon/Ra (MIT license)
from pathlib import Path
from sys import path

# Make the repository root importable when running the tests directly.
path.append(str(Path(__file__).parent.parent))
# discordbot.py — from repo Siroino/discordpy-startup (MIT license)
# coding: UTF-8
import discord
import random
import ssl
import os
import traceback
import emoji
token = os.environ['DISCORD_BOT_TOKEN']  # bot token supplied via environment variable
client = discord.Client()  # discord.py 1.x style; 2.x would additionally require explicit intents
pile = ["攻撃1 落とし物 "+"\n"+" 念運空 "+"\n"+" 任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
"攻撃1 占術 "+"\n"+" 運空精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
"攻撃1 野犬 "+"\n"+" 念運精 "+"\n"+" 任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
"攻撃1 滑る地面 "+"\n"+" 熱念運 "+"\n"+" 防御されなければ、次のあなたのターンまで、あなたはダメージを受けない。"+"\n"+"\n",
"攻撃1 支配 "+"\n"+" 運精 "+"\n"+" 山札の一番上のカードで精神感応を試みる。"+"\n"+"\n",
"攻撃1 揺さぶり "+"\n"+" 念空精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
"攻撃1 磁場 "+"\n"+" 電念運 "+"\n"+" 防御されなければ、次のあなたのターンまで、あなたはダメージを受けない。"+"\n"+"\n",
"攻撃2 漏電 "+"\n"+" 電空 "+"\n"+" ターゲット以外のプレイヤー1人に2ダメージを与えてもよい。"+"\n"+"\n",
"攻撃2 不幸な事故 "+"\n"+" 電熱運"+"\n"+"\n",
"攻撃2 高速弾 "+"\n"+" 電熱念"+"\n"+"\n",
"攻撃2 縮地 "+"\n"+" 念空 "+"\n"+" 防御不可"+"\n"+"\n",
"攻撃2 落石 "+"\n"+" 念運 "+"\n"+" 防御不可。任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
"攻撃2 熱感 "+"\n"+" 熱精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
"攻撃2 拷問 "+"\n"+" 念精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
"攻撃2 運命変転 "+"\n"+" 運 "+"\n"+" 自分の縦向きの捨て札を1枚選ぶ。そのカードのダメージと効果をこのカードに追加する。"+"\n"+"\n",
"攻撃3 電磁砲 "+"\n"+" 電念 "+"\n"+" 自分に1ダメージ。防御されなければ、次のあなたのターンまで、あなたはダメージを受けない。"+"\n"+"\n",
"攻撃3 雹塊 "+"\n"+" 熱運"+"\n"+"\n",
"攻撃3 爆発 "+"\n"+" 熱念"+"\n"+"\n",
"攻撃3 落雷 "+"\n"+" 電運 "+"\n"+" 自分に1ダメージ。任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
"攻撃4 夢幻暴走 "+"\n"+" 念 "+"\n"+" 次のあなたのターンまで、あなたはダメージを受けず、レゾナンスリングを使用されない。"+"\n"+"\n",
"攻撃4 電熱ブレード "+"\n"+" 電熱 "+"\n"+" 自分に2ダメージ。"+"\n"+"\n",
"攻撃5 完全焼却 "+"\n"+" 熱"+"\n"+"\n",
"攻撃6 衝天轟雷 "+"\n"+" 電 "+"\n"+" 自分に3ダメージ。"+"\n"+"\n",
"防御 蜃気楼 "+"\n"+" 熱空精 "+"\n"+" 防御した攻撃のダメージを2軽減する。"+"\n"+"\n",
"防御 静電気 "+"\n"+" 電空精 "+"\n"+" 防御した攻撃のダメージを2軽減する。"+"\n"+"\n",
"防御 突風 "+"\n"+" 熱空 "+"\n"+" 防御した攻撃のダメージを2軽減し、ターゲットに1ダメージを与える。"+"\n"+"\n",
"防御 高速移動 "+"\n"+" 電熱空 "+"\n"+" 防御した攻撃のダメージを1軽減し、ターゲットに1ダメージを与える。"+"\n"+"\n",
"防御 閃光 "+"\n"+" 電熱精 "+"\n"+" 防御した攻撃のダメージを1軽減し、ターゲットに1ダメージを与える。"+"\n"+"\n",
"防御 ニューロン暴走 "+"\n"+" 電精 "+"\n"+" ターゲットに2ダメージを与える。"+"\n"+"\n",
"防御 空間識変調 "+"\n"+" 空精 "+"\n"+" 防御した攻撃のダメージを1軽減する。その後、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
"防御 落とし穴 "+"\n"+" 運空 "+"\n"+" 防御した攻撃のダメージを2軽減し、任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
"防御 空間連結 "+"\n"+" 空 "+"\n"+" 防御した攻撃を無効化する。そのカードを自分の能力を無視して直ちにあなたが使用する。その後、そのカードを元々の使用者の捨て札に縦向きで置く。"+"\n"+"\n",
"防御 精神破壊 "+"\n"+" 精 "+"\n"+" 自分に4ダメージ。防御した攻撃を無効化する。あなた以外のプレイヤー全員は能力1つを使用不能にする。(能力カードを1枚選び、表にする)"+"\n"+"\n"]
attack = []
defense = []
bottrash = []
myhand = []
abilityA = ["電", "熱", "念", "運", "空", "精"]
abilityB = ["電", "熱", "念", "運", "空", "精"]
rolelist = []
rolephase = 0
startshadow = 0
list3=["あなたはヴァンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。", "あなたは人間陣営です。hと入力してください。"]
list4=["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。"]
list5=["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。"]
list6=["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。","あなたは人間陣営です。hと入力してください。"]
list7=["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。","あなたは人間陣営です。hと入力してください。", "あなたは人間陣営です。hと入力してください。"]
list8=["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。","あなたは人間陣営です。hと入力してください。"]
listv=["HP11"+"\n"+"U"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 手番の開始時に「墓場」にいるプレイヤー一人を選び3ダメージを与える",
"HP11"+"\n"+"U"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 緑のカードを受け取った際、答えを偽ることができる。公開の必要はない。正体判明の効果を受けない。",
"HP13"+"\n"+"V"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 攻撃時に4面ダイスを振り、出た目のダメージを与える",
"HP13"+"\n"+"V"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 自身の攻撃により誰かにダメージを与えた場合、直ちに自分のダメージを2回復する",
"HP14"+"\n"+"W"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番の後で追加手番を脱落したプレイヤー一人につき一回行える",
"HP14"+"\n"+"W"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 誰かがあなたを攻撃してきた場合、そのあとでそのプレイヤーに対し攻撃することができる"]
listw=["HP10"+"\n"+"E"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: 移動する際にダイスを振らずに左または右に隣接する場所に移動することができる。",
"HP10"+"\n"+"E"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番の開始時に誰か一人のプレイヤーを選び、そのプレイヤーの特殊能力をゲーム終了時まで封印する。",
"HP12"+"\n"+"F"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番開始時に誰か一人を選び、1d6のダメージを与える",
"HP12"+"\n"+"F"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番開始時に他のプレイヤーのマーカーを血の月マスに送る",
"HP14"+"\n"+"G"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番終了時に次の自分の手番開始時までダメージを受けないことを宣言できる",
"HP14"+"\n"+"G"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番開始時に誰か一人を選び、1d4のダメージを与える"]
listh=["HP8"+"\n"+"A"+"\n"+"勝利条件: ゲーム終了時に脱落していない"+"\n"+"特殊能力: ゲーム中一回限り自分のダメージを全て回復可能",
"HP8"+"\n"+"A"+"\n"+"勝利条件: ゲーム終了時に脱落していない"+"\n"+"特殊能力: ゲーム中一回限り自分のダメージを全て回復可能",
"HP8"+"\n"+"A"+"\n"+"勝利条件: 右隣りのプレイヤーの勝利"+"\n"+"特殊能力: ゲーム中一回限り勝利条件を「左隣りのプレイヤーの勝利」に変更可能",
"HP10"+"\n"+"B"+"\n"+"勝利条件: あなたの攻撃により「受けられるダメージが13以上」のプレイヤーを脱落させる。またはゲーム終了時にストーンサークルにコマがある"+"\n"+"特殊能力: あなたの攻撃により「受けられるダメージが12以下」のプレイヤーを脱落させた場合、自身のキャラクターを強制公開",
"HP10"+"\n"+"B"+"\n"+"勝利条件: 4つ以上のアイテムを持つ"+"\n"+"特殊能力: あなたの攻撃によりプレイヤーを脱落させた場合、そのプレイヤーのアイテムをすべて奪うことができる",
"HP11"+"\n"+"C"+"\n"+"勝利条件: あなたの手番でプレイヤーを脱落させ、そのプレイヤーが3人目以上の脱落者である"+"\n"+"特殊能力: あなたの攻撃の後、直ちに自身が2ダメージを受けることによりもう一度攻撃を行える(1手番にいちどまで?)",
"HP11"+"\n"+"C"+"\n"+"勝利条件: 最初に脱落する。またはゲーム終了時にプレイヤーが自身ともう一人だけになる"+"\n"+"特殊能力: 手番開始時に自分のダメージを1回復できる",
"HP13"+"\n"+"D"+"\n"+"勝利条件: 最初に脱落する。またはすべてのバンパイアが脱落し、あなたが残っている"+"\n"+"特殊能力: プレイヤーが脱落した場合、自身のキャラクターを強制公開",
"HP13"+"\n"+"D"+"\n"+"勝利条件: 「タリスマン」「骨の槍」「守りのローブ」「リュックサック」のうち3つ以上を所持する"+"\n"+"特殊能力: ゲーム中一回限り、赤か青の捨て札からアイテムを一つ取ることができる"]
listred=["吸血蜘蛛(強制) 自分と任意のプレイヤーに2ダメージ", "吸血蜘蛛(強制) 自分と任意のプレイヤーに2ダメージ", "爆発(強制) 二つのダイスを振り、出た目の場所にいるすべてのプレイヤーは3ダメージを受ける", "落とし穴(強制) 装備アイテム1つを誰かに渡す。持ってなければ1ダメージ受ける。", "待ち伏せ(強制) 対象プレイヤーを選択。6面ダイスを振り、1~4が出れば対象に3ダメージ", "闇の儀式(任意) ヴァンパイアなら全回復する", "バンパイアの蝙蝠(強制) 任意のプレイヤーに2ダメージを与え、自分を1回復する", "バンパイアの蝙蝠(強制) 任意のプレイヤーに2ダメージを与え、自分を1回復する", "バンパイアの蝙蝠(強制) 任意のプレイヤーに2ダメージを与え、自分を1回復する", "攻撃(強制) 任意のプレイヤーからアイテムを1つ奪う", "攻撃(強制) 任意のプレイヤーからアイテムを1つ奪う", "炎の魔法(強制) 攻撃時、対象と同じエリアに居る他のプレイヤーにも同量のダメージを与える", "アーチェリー(強制) 攻撃対象を選ぶ際に自分の居るエリア外のプレイヤーを選択する", "鉄拳(強制) 攻撃成功で追加1ダメージ", "鉄拳(強制) 攻撃成功で追加1ダメージ", "鉄拳(強制) 攻撃成功で追加1ダメージ", "鉄拳(強制) 攻撃成功で追加1ダメージ", "松明(強制) 攻撃時に4面ダイスを振り、出た目のダメージを与える"]
listblue=["祝福(任意) あなたは2点回復する", "遠隔治療(強制) 他のプレイヤーを選び6面ダイスを振り、出た目だけそのプレイヤーを回復", "祝福(任意) あなたは2点回復する", "正体判明(強制) バンパイアかワーウルフなら公開。嘘つきバンパイアなら無効可", "時間移動(強制) あなたの手番の後、即座にもう一度手番を行う", "時間移動(強制) あなたの手番の後、即座にもう一度手番を行う", "回復(任意) ワーウルフなら体力全回複", "エネルギーの素(任意) キャラクターがA,E,Uならダメージを全回復可能", "聖なる怒り(強制) あなた以外の全プレイヤーに2ダメージ", "守りのオーラ(強制) 現在から次の自分の手番までプレイヤーからの攻撃から守られる(魔女森やカードは喰らう)", "血の月(強制) マーカー一つを血の月マスへ送る", "魔法のコンパス(任意) 移動時、異なるエリアの任意の場所へ移動可能", "リュックサック(強制) あなたの攻撃で誰かを脱落させたならそのプレイヤーの装備をすべて奪う", "守りのローブ(強制) 他プレイヤーからの攻撃により受けるダメージが1減少", "守りのアミュレット(強制) 魔女の森からのダメージを受けない。魔女の森に移動するとさらに1ダメージ回復可能", "タリスマン(強制) カードの吸血蜘蛛、バンパイアの蝙蝠、爆発のダメージを受けない", "守りの指輪(強制) 血の月から守られる", "骨の槍(強制) 自身ワーウルフなら攻撃成功時追加で2ダメージ"]
listgreen=["手番プレイヤーからこのカードを受け取った時、あなたがバンパイアなら1ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアなら2ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフなら1ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフなら1ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアなら以下の指示に従う。ダメージを受けていなければ1ダメージを受ける。ダメージを受けていれば1ダメージ回復する", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフなら以下の指示に従う。ダメージを受けていなければ1ダメージを受ける。ダメージを受けていれば1ダメージ回復する", "手番プレイヤーからこのカードを受け取った時、あなたが人間なら以下の指示に従う。ダメージを受けていなければ1ダメージを受ける。ダメージを受けていれば1ダメージ回復する", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアかワーウルフなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアかワーウルフなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフか人間なら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフか人間なら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたが人間かバンパイアなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたが人間かバンパイアなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたのキャラクターがABCEUのいずれかであれば1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたのキャラクターがDFGVWのいずれかであれば2ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたのカードを手番プレイヤーに見せなければならない"]
list1d6 = [1, 2, 3, 4, 5, 6]
list1d4 = [1, 2, 3, 4]
# NOTE: the next assignment shadows the built-in `list`; later handlers rely on this name.
list = ["刀","扇", "薙", "銃", "忍", "傘", "書", "毒", "絡", "騎", "古", "琵", "炎", "笛", "戦", "社","経", "絆","機", "新","爪","拒", "鎌", "塵", "旗","橇","鏡","櫂","兜", "槌","嵐", "棹", "面", "勾", "金", "恐", "剣", "衣", "友", "花", "信"]
list2 = ["大気の護符","水の護符","火の護符","土の護符","イシュターの天秤","春の杖","時のブーツ","イオの財布","聖杯","信心深きサイラス","強欲のフィグリム","女預言者ナリア","驚愕の箱","物乞いの角笛","悪意のダイス","破壊者ケアン","首長のアムサグ","魔法の秘本","ラグフィールドの兜","運命の手","灰顔のルイス","イオリスのルーン方体","力の薬","夢の薬","知識の薬","命の薬","時の砂時計","壮大の錫杖","オラフの祝福の像","ヤンの忘れられた花瓶","精霊のアミュレット","光の木","アルカノ蛭","水晶球","暴食の大鍋","吸血の王冠","竜の頭蓋骨","アルゴスの悪魔","深き眼差しのタイタス","大気の精霊","泥棒フェアリー","アルスの呪われた書","使い魔の偶像","壊死のクリス","クシディットのランプ","ウルムの封印された箱","季節の鏡","ラグノールのペンダント","夜影のシド","オニスの忌まわしき魂", "大気の護符","水の護符","火の護符","水の護符","イシュターの天秤","春の杖","時のブーツ","イオの財布","聖杯","信心深きサイラス","強欲のフィグリム","女預言者ナリア","驚愕の箱","物乞いの角笛","悪意のダイス","破壊者ケアン","首長のアムサグ","魔法の秘本","ラグフィールドの兜","運命の手","灰顔のルイス","イオリスのルーン方体","力の薬","夢の薬","知識の薬","命の薬","時の砂時計","壮大の錫杖","オラフの祝福の像","ヤンの忘れられた花瓶","精霊のアミュレット","光の木","アルカノ蛭","水晶球","暴食の大鍋","吸血の王冠","竜の頭蓋骨","アルゴスの悪魔","深き眼差しのタイタス","大気の精霊","泥棒フェアリー","アルスの呪われた書","使い魔の偶像","壊死のクリス","クシディットのランプ","ウルムの封印された箱","季節の鏡","ラグノールのペンダント","夜影のシド","オニスの忌まわしき魂"]
list0 = ["大気の護符","水の護符","火の護符","土の護符","イシュターの天秤","春の杖","時のブーツ","イオの財布","聖杯","信心深きサイラス","強欲のフィグリム","女預言者ナリア","驚愕の箱","物乞いの角笛","悪意のダイス","破壊者ケアン","首長のアムサグ","魔法の秘本","ラグフィールドの兜","運命の手","灰顔のルイス","イオリスのルーン方体","力の薬","夢の薬","知識の薬","命の薬","時の砂時計","壮大の錫杖","オラフの祝福の像","ヤンの忘れられた花瓶","精霊のアミュレット","光の木","アルカノ蛭","水晶球","暴食の大鍋","吸血の王冠","竜の頭蓋骨","アルゴスの悪魔","深き眼差しのタイタス","大気の精霊","泥棒フェアリー","アルスの呪われた書","使い魔の偶像","壊死のクリス","クシディットのランプ","ウルムの封印された箱","季節の鏡","ラグノールのペンダント","夜影のシド","オニスの忌まわしき魂", "大気の護符","水の護符","火の護符","水の護符","イシュターの天秤","春の杖","時のブーツ","イオの財布","聖杯","信心深きサイラス","強欲のフィグリム","女預言者ナリア","驚愕の箱","物乞いの角笛","悪意のダイス","破壊者ケアン","首長のアムサグ","魔法の秘本","ラグフィールドの兜","運命の手","灰顔のルイス","イオリスのルーン方体","力の薬","夢の薬","知識の薬","命の薬","時の砂時計","壮大の錫杖","オラフの祝福の像","ヤンの忘れられた花瓶","精霊のアミュレット","光の木","アルカノ蛭","水晶球","暴食の大鍋","吸血の王冠","竜の頭蓋骨","アルゴスの悪魔","深き眼差しのタイタス","大気の精霊","泥棒フェアリー","アルスの呪われた書","使い魔の偶像","壊死のクリス","クシディットのランプ","ウルムの封印された箱","季節の鏡","ラグノールのペンダント","夜影のシド","オニスの忌まわしき魂", "アルゴスの心臓",
"豊穣の角笛",
"妖精の石碑",
"セレニアの古写本",
"イシュターの巻物",
"メソディーのランタン",
"イオリスの像",
"使い魔捕え",
"イオの変転器",
"再生の玉座",
"復活の薬",
"古代の宝石",
"ジラの盾",
"信念のダイス",
"時のアミュレット",
"不思議な望遠鏡",
"アルゴスの鷹",
"強奪者のカラス",
"アルゴスの監視者",
"ハシリドコロのネズミ",
"竜の魂",
"溶岩の核",
"運命の悪戯",
"古代の薬",
"エシールの泉",
"コロフの目盛盤",
"永遠の杯",
"冬の杖",
"墓場の護符",
"イオリスの複製装置",
"エストリアの竪琴",
"時の指輪",
"アルスのミミック",
"ヒトクイカズラ",
"ウルムの魂の牢獄",
"ラグフィールドの従者",
"アルゴスの絡みつく雑草",
"イオの手先",
"神託者オトゥス",
"狡猾なハシリドコロ",
"消し去るものイグラマル",
"逃走するスピードウォール",
"ラグフィールドのオーブ",
"水晶のタイタン"]
listdraw = ["吸血の王冠", "精霊のアミュレット", "水晶球"]
listN = [1, 2, 3, 4, 5, 6, 7, 8, 9]
listyear = [1, 2, 3]
listmonth = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
listdice6 = [1, 2, 3, 4, 5, 6]
listhato=[" a-基本 2 寄付 " ,
" a-基本 2 願いの泉 " ,
" a-基本 2 斥候 " ,
" a-基本 2 早馬 " ,
" a-基本 3 交易船 " ,
" a-基本 3 御用商人 " ,
" a-基本 3 召集令状 " ,
" a-基本 3 焼き畑農業 " ,
" a-基本 4 図書館 " ,
" a-基本 4 追い立てられた魔獣 " ,
" a-基本 4 都市開発 " ,
" a-基本 4 金貸し " ,
" a-基本 4 補給部隊 " ,
" a-基本 5 冒険者 " ,
" a-基本 5 呪詛の魔女 " ,
" a-基本 5 銀行 " ,
" a-基本 5 皇室領 " ,
" a-基本 5 錬金術師 " ,
" a-基本 6 噂好きの公爵夫人 " ,
" b-極東 2 お金好きの妖精 " ,
" b-極東 3 課税 " ,
" b-極東 3 貿易商人 " ,
" b-極東 3 伝書鳩 " ,
" b-極東 4 見習い魔女 " ,
" b-極東 4 鉱山都市 " ,
" b-極東 4 港町 " ,
" b-極東 5 結盟 " ,
" c-北限 2 ケットシー " ,
" c-北限 2 幸運の銀貨 " ,
" c-北限 3 洗礼 " ,
" c-北限 3 名馬 " ,
" c-北限 3 呪いの人形 " ,
" c-北限 4 ドワーフの宝石職人 " ,
" c-北限 4 宮廷闘争 " ,
" c-北限 4 エルフの狙撃手 " ,
" c-北限 5 地方役人 " ,
" c-北限 5 豪商 " ,
" c-北限 5 貴族の一人娘 " ,
" c-北限 6 独占 " ,
" c-北限 6 工業都市 " ,
" d-FG 2 家守の精霊 " ,
" d-FG 2 春風の妖精 " ,
" d-FG 2 伝令 " ,
" d-FG 2 密偵 " ,
" d-FG 2 巡礼 " ,
" d-FG 3 リーフフェアリー " ,
" d-FG 3 司書 " ,
" d-FG 3 旅芸人 " ,
" d-FG 3 祝福 " ,
" d-FG 3 ギルドマスター " ,
" d-FG 3 星巫女の託宣 " ,
" d-FG 4 ブラウニー " ,
" d-FG 4 氷雪の精霊 " ,
" d-FG 4 石弓隊 " ,
" d-FG 4 行商人 " ,
" d-FG 4 辻占い師 " ,
" d-FG 4 ニンフ " ,
" d-FG 4 大農園 " ,
" d-FG 4 御料地 " ,
" d-FG 4 検地役人 " ,
" d-FG 5 商船団 " ,
" d-FG 5 執事 " ,
" d-FG 5 徴税人 " ,
" d-FG 5 聖堂騎士 " ,
" d-FG 5 鬼族の戦士 " ,
" d-FG 5 交易都市 " ,
" d-FG 5 収穫祭 " ,
" d-FG 5 合併 " ,
" d-FG 5 メイド長 " ,
" d-FG 6 裁判官 " ,
" e-六都 2 漁村 " ,
" e-六都 3 いたずら妖精(不運) " ,
" e-六都 3 へそくり " ,
" e-六都 3 女学院 " ,
" e-六都 4 まじない師(不運) " ,
" e-六都 4 開発命令 " ,
" e-六都 4 魔法のランプ(不運) " ,
" e-六都 5 傭兵団 " ,
" e-六都 5 免罪符 " ,
" e-六都 5 十字軍 " ,
" e-六都 5 砲兵部隊 " ,
" e-六都 5 学術都市 " ,
" e-六都 5 独立都市 " ,
" e-六都 5 転売屋 " ,
" e-六都 5 ニンジャマスター " ,
" e-六都 12 大公爵 " ,
" f-星天 3 灯台 " ,
" f-星天 4 先行投資 " ,
" f-星天 4 義賊 " ,
" f-星天 4 カンフーマスター " ,
" f-星天 4 家庭教師 " ,
" f-星天 5 キョンシー " ,
" f-星天 5 ウイッチドクター " ,
" f-星天 5 キャラバン " ,
" f-星天 5 離れ小島 " ,
" f-星天 5 富豪の愛娘 "
]
cards = ["A:spades: ","2:spades: ", "3:spades: ","4:spades: ", "5:spades: ","6:spades: ", "7:spades: ","8:spades: ", "9:spades: ","10:spades: ", "J:spades: ","Q:spades: ","K:spades: ","A:hearts: ", "2:hearts: ", "3:hearts: ", "4:hearts: ", "5:hearts: ", "6:hearts: ","7:hearts: ", "8:hearts: ", "9:hearts: ", "10:hearts: ", "J:hearts: ", "Q:hearts: ", "K:hearts: ", "A:clubs: ", "2:clubs: ", "3:clubs: ", "4:clubs: ", "5:clubs: ", "6:clubs: ","7:clubs: ", "8:clubs: ", "9:clubs: ", "10:clubs: ", "J:clubs: ", "Q:clubs: ", "K:clubs: ","A:diamonds: ","2:diamonds: ", "3:diamonds: ","4:diamonds: ", "5:diamonds: ", "6:diamonds: ","7:diamonds: ", "8:diamonds: ", "9:diamonds: ", "10:diamonds: ", "J:diamonds: ", "Q:diamonds: ", "K:diamonds: "]
rcards = ["2:spades: ", "3:spades: ","4:spades: ", "5:spades: ","6:spades: ", "7:spades: ","8:spades: ", "9:spades: ","10:spades: ", "J:spades: ","Q:spades: ","K:spades: ", "2:hearts: ", "3:hearts: ", "4:hearts: ", "5:hearts: ", "6:hearts: ","7:hearts: ", "8:hearts: ", "9:hearts: ", "10:hearts: ", "J:hearts: ", "Q:hearts: ", "K:hearts: ", "A:clubs: ", "2:clubs: ", "3:clubs: ", "4:clubs: ", "5:clubs: ", "6:clubs: ","7:clubs: ", "8:clubs: ", "9:clubs: ", "10:clubs: ", "J:clubs: ", "Q:clubs: ", "K:clubs: ","A:diamonds: ","2:diamonds: ", "3:diamonds: ","4:diamonds: ", "5:diamonds: ", "6:diamonds: ","7:diamonds: ", "8:diamonds: ", "9:diamonds: ", "10:diamonds: ", "J:diamonds: ", "Q:diamonds: ", "K:diamonds: "]
word = [" あ " ,
" い " ,
" う " ,
" え " ,
" お " ,
" か " ,
" き " ,
" く " ,
" け " ,
" こ " ,
" さ " ,
" し " ,
" す " ,
" せ " ,
" そ " ,
" た " ,
" ち " ,
" つ " ,
" て " ,
" と " ,
" な " ,
" に " ,
" ぬ " ,
" ね " ,
" の " ,
" は " ,
" ひ " ,
" ふ " ,
" へ " ,
" ほ " ,
" ま " ,
" み " ,
" む " ,
" め " ,
" も " ,
" や " ,
" ゆ " ,
" よ " ,
" わ " ,
" あ行 " ,
" か行 " ,
" さ行 " ,
" た行 " ,
" な行 " ,
" は行 " ,
" ま行 " ,
" や行 " ,
" ら行 " ,
" ら " ,
" り " ,
" る " ,
" れ " ,
" ろ " ,
" 5 " ,
" 6 " ,
" 7+ " ]
wordstart = [" あ " ,
" い " ,
" う " ,
" え " ,
" お " ,
" か " ,
" き " ,
" く " ,
" け " ,
" こ " ,
" さ " ,
" し " ,
" す " ,
" せ " ,
" そ " ,
" た " ,
" ち " ,
" つ " ,
" て " ,
" と " ,
" な " ,
" に " ,
" ぬ " ,
" ね " ,
" の " ,
" は " ,
" ひ " ,
" ふ " ,
" へ " ,
" ほ " ,
" ま " ,
" み " ,
" む " ,
" め " ,
" も " ,
" や " ,
" ゆ " ,
" よ " ,
" わ " ]
start = 0
Hard = 0
@client.event
async def on_ready():
    print('Logged in as')
    print(client.user.name)
    print(client.user.id)
    print('------')
@client.event
async def on_message(message):
    # Check whether the message starts with "megami3".
    if message.content.startswith("megami3"):
        # Ignore messages sent by the bot itself.
        if client.user != message.author:
            random.shuffle(list)
            m = str(list[0]+list[1]+list[2]) + "とか良いんじゃないですか?"
            # Reply in the channel the message was posted to.
            await message.channel.send(m)
    if message.content.startswith("megami2"):
        if client.user != message.author:
            random.shuffle(list)
            m = str(list[0]+list[1]) + "とか良いんじゃないですか?"
            await message.channel.send(m)
    if message.content.startswith("daima9"):
        if client.user != message.author:
            random.shuffle(list2)
            random.shuffle(listN)
            m = message.author.name + "さんの束は" + list2[0]+"・"+list2[1]+"・"+list2[2]+"・"+list2[3]+"・"+list2[4]+"・"+list2[5]+"・"+list2[6]+"・"+list2[7]+"・"+list2[8] + "ですね。左から" + str(listN[0]) + "番目をピックするのがオススメです!"
            await message.channel.send(m)
    if message.content.startswith("all9"):
        if client.user != message.author:
            random.shuffle(list0)
            random.shuffle(listN)
            m = message.author.name + "さんの束は" + list0[0]+"・"+list0[1]+"・"+list0[2]+"・"+list0[3]+"・"+list0[4]+"・"+list0[5]+"・"+list0[6]+"・"+list0[7]+"・"+list0[8] + "ですね。左から" + str(listN[0]) + "番目をピックするのがオススメです!"
            await message.channel.send(m)
    if message.content == "uranai":
        if client.user != message.author:
            random.shuffle(list2)
            random.shuffle(listdraw)
            m = message.author.name + "さんの今日の運勢は" + listdraw[0] + "で" + list2[0] + "を引くくらいの運勢です。"
            await message.channel.send(m)
    if message.content == "u":
        if client.user != message.author:
            random.shuffle(list0)
            random.shuffle(listdraw)
            random.shuffle(listyear)
            random.shuffle(listmonth)
            m = message.author.name + "さんの今日の運勢は" + str(listyear[0]) + "年目" + str(listmonth[0]) + "月に" + listdraw[0] + "で" + list0[0] + "を引くくらいの運勢です。"
            await message.channel.send(m)
    if message.content == "d":
        if client.user != message.author:
            random.shuffle(listdice6)
            m = "出目は" + str(listdice6[0]) + "です。"
            await message.channel.send(m)
    if message.content == "d1":
        if client.user != message.author:
            random.shuffle(list2)
            m = list2[0] + "を引きました。"
            await message.channel.send(m)
    if message.content == "d2":
        if client.user != message.author:
            random.shuffle(list2)
            m = list2[0] +"・"+ list2[1] + "を引きました。"
            await message.channel.send(m)
    if message.content == "a1":
        if client.user != message.author:
            random.shuffle(list0)
            m = list0[0] + "を引きました。"
            await message.channel.send(m)
    if message.content == "a2":
        if client.user != message.author:
            random.shuffle(list0)
            m = list0[0] +"・"+ list0[1] + "を引きました。"
            await message.channel.send(m)
    if message.content == "d4":
        if client.user != message.author:
            random.shuffle(list2)
            m = list2[0] +"・"+ list2[1] +"・"+ list2[2] +"・"+ list2[3] + "を引きました。"
            await message.channel.send(m)
    if message.content == "a4":
        if client.user != message.author:
            random.shuffle(list0)
            m = list0[0] +"・"+ list0[1] +"・"+ list0[2] +"・"+ list0[3] + "を引きました。"
            await message.channel.send(m)
    if message.content.startswith("hato"):
        if client.user != message.author:
            random.shuffle(listhato)
            newlisthato = [listhato[0], listhato[1], listhato[2], listhato[3], listhato[4], listhato[5], listhato[6], listhato[7], listhato[8], listhato[9]]
            newlisthato.sort()
            m = "__ランダム10種__"+"\n"+"\n"+str(newlisthato[0])+"\n"+str(newlisthato[1])+"\n"+str(newlisthato[2])+"\n"+str(newlisthato[3])+"\n"+str(newlisthato[4])+"\n"+str(newlisthato[5])+"\n"+str(newlisthato[6])+"\n"+str(newlisthato[7])+"\n"+str(newlisthato[8])+"\n"+str(newlisthato[9])
            await message.channel.send(m)
    if message.content.startswith("ping"):
        if client.user != message.author:
            m = "pong"
            await message.channel.send(m)
    if message.content.startswith("name"):
        if client.user != message.author:
            m = "torisan"
            await message.channel.send(m)
    if message.content == "p":
        if client.user != message.author:
            random.shuffle(cards)
            m = message.author.name +":"+" "+emoji.emojize(cards[0]+cards[1], use_aliases=True)+" "+"相手" +":"+" "+ emoji.emojize(cards[2]+cards[3], use_aliases=True)+"\n"+"\n"+" "+emoji.emojize(cards[4]+cards[5]+cards[6]+cards[7]+cards[8], use_aliases=True)
            await message.channel.send(m)
    if message.content == "p3":
        if client.user != message.author:
            random.shuffle(cards)
            m = message.author.name +":"+" "+emoji.emojize(cards[0]+cards[1], use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize(cards[2]+cards[3], use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize(cards[9]+cards[10], use_aliases=True)+"\n"+"\n"+" "+emoji.emojize(cards[4]+cards[5]+cards[6]+cards[7]+cards[8], use_aliases=True)
            await message.channel.send(m)
    if message.content == "p4":
        if client.user != message.author:
            random.shuffle(cards)
            m = message.author.name +":"+" "+emoji.emojize(cards[0]+cards[1], use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize(cards[2]+cards[3], use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize(cards[9]+cards[10], use_aliases=True)+" "+"相手c" +":"+" "+ emoji.emojize(cards[11]+cards[12], use_aliases=True)+"\n"+"\n"+" "+emoji.emojize(cards[4]+cards[5]+cards[6]+cards[7]+cards[8], use_aliases=True)
            await message.channel.send(m)
    if message.content == "pr":
        if client.user != message.author:
            random.shuffle(rcards)
            m = message.author.name +":"+" "+emoji.emojize("A:spades: " + "A:hearts: ", use_aliases=True)+" "+"相手" +":"+" "+ emoji.emojize(rcards[2]+rcards[3], use_aliases=True)+"\n"+"\n"+" "+emoji.emojize(rcards[4]+rcards[5]+rcards[6]+rcards[7]+rcards[8], use_aliases=True)
            await message.channel.send(m)
    if message.content == "p3r":
        if client.user != message.author:
            random.shuffle(rcards)
            m = message.author.name +":"+" "+emoji.emojize("A:spades: " + "A:hearts: ", use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize(rcards[2]+rcards[3], use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize(rcards[9]+rcards[10], use_aliases=True)+"\n"+"\n"+" "+emoji.emojize(rcards[4]+rcards[5]+rcards[6]+rcards[7]+rcards[8], use_aliases=True)
            await message.channel.send(m)
    if message.content == "p4r":
        if client.user != message.author:
            random.shuffle(rcards)
            m = message.author.name +":"+" "+emoji.emojize("A:spades: " + "A:hearts: ", use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize(rcards[2]+rcards[3], use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize(rcards[9]+rcards[10], use_aliases=True)+" "+"相手c" +":"+" "+ emoji.emojize(rcards[11]+rcards[12], use_aliases=True)+"\n"+"\n"+" "+emoji.emojize(rcards[4]+rcards[5]+rcards[6]+rcards[7]+rcards[8], use_aliases=True)
            await message.channel.send(m)
    if message.content == "pc":
        if client.user != message.author:
            random.shuffle(cards)
            m = message.author.name +":"+" "+emoji.emojize(cards[0]+cards[1], use_aliases=True)+" "+"相手" +":"+" "+ emoji.emojize("||"+cards[2]+cards[3]+"||", use_aliases=True)+"\n"+"\n"+" "+emoji.emojize("||"+cards[4]+cards[5]+cards[6]+"||"+"||"+cards[7]+"||"+"||"+cards[8]+"||", use_aliases=True)
            await message.channel.send(m)
    if message.content == "p3c":
        if client.user != message.author:
            random.shuffle(cards)
            m = message.author.name +":"+" "+emoji.emojize(cards[0]+cards[1], use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize("||"+cards[2]+cards[3]+"||", use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize("||"+cards[9]+cards[10]+"||", use_aliases=True)+"\n"+"\n"+" "+emoji.emojize("||"+cards[4]+cards[5]+cards[6]+"||"+"||"+cards[7]+"||"+"||"+cards[8]+"||", use_aliases=True)
            await message.channel.send(m)
    if message.content == "p4c":
        if client.user != message.author:
            random.shuffle(cards)
            m = message.author.name +":"+" "+emoji.emojize(cards[0]+cards[1], use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize("||"+cards[2]+cards[3]+"||", use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize("||"+cards[9]+cards[10]+"||", use_aliases=True)+" "+"相手c" +":"+" "+ emoji.emojize("||"+cards[11]+cards[12]+"||", use_aliases=True)+"\n"+"\n"+" "+emoji.emojize("||"+cards[4]+cards[5]+cards[6]+"||"+"||"+cards[7]+"||"+"||"+cards[8]+"||", use_aliases=True)
            await message.channel.send(m)
    if message.content == "prc":
        if client.user != message.author:
            random.shuffle(rcards)
            m = message.author.name +":"+" "+emoji.emojize("A:spades: " + "A:hearts: ", use_aliases=True)+" "+"相手" +":"+" "+ emoji.emojize("||"+rcards[2]+rcards[3]+"||", use_aliases=True)+"\n"+"\n"+" "+emoji.emojize("||"+rcards[4]+rcards[5]+rcards[6]+"||"+"||"+rcards[7]+"||"+"||"+rcards[8]+"||", use_aliases=True)
            await message.channel.send(m)
    if message.content == "p3rc":
        if client.user != message.author:
            random.shuffle(rcards)
            m = message.author.name +":"+" "+emoji.emojize("A:spades: " + "A:hearts: ", use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize("||"+rcards[2]+rcards[3]+"||", use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize("||"+rcards[9]+rcards[10]+"||", use_aliases=True)+"\n"+"\n"+" "+emoji.emojize("||"+rcards[4]+rcards[5]+rcards[6]+"||"+"||"+rcards[7]+"||"+"||"+rcards[8]+"||", use_aliases=True)
            await message.channel.send(m)
    if message.content == "p4rc":
        if client.user != message.author:
            random.shuffle(rcards)
            m = message.author.name +":"+" "+emoji.emojize("A:spades: " + "A:hearts: ", use_aliases=True)+" "+"相手a" +":"+" "+ emoji.emojize("||"+rcards[2]+rcards[3]+"||", use_aliases=True)+" "+"相手b" +":"+" "+ emoji.emojize("||"+rcards[9]+rcards[10]+"||", use_aliases=True)+" "+"相手c" +":"+" "+ emoji.emojize("||"+rcards[11]+rcards[12]+"||", use_aliases=True)+"\n"+"\n"+" "+emoji.emojize("||"+rcards[4]+rcards[5]+rcards[6]+"||"+"||"+rcards[7]+"||"+"||"+rcards[8]+"||", use_aliases=True)
            await message.channel.send(m)
    if message.content == "word":
        if client.user != message.author:
            random.shuffle(word)
            random.shuffle(wordstart)
            m = "捨て札:"+ wordstart[0] +"\n"+"\n"+ message.author.name + "さんの手札:"+ word[0]+word[1]+word[2]+word[3]+word[4]
            await message.channel.send(m)
    if message.content == "cube":
        if client.user != message.author:
            result = [random.randint(1, 6) for i in range(84)]
            await message.channel.send(str(result.count(1))+"枚、"+str(result.count(2))+"枚、"+str(result.count(3))+"枚、"+str(result.count(4))+"枚、"+str(result.count(5))+"枚、"+str(result.count(6))+"枚")
    global rolelist, rolephase
    if message.content == "roleset" and rolephase == 0:
        rolephase = 1
        await message.channel.send("以下のリスト例をコピーして役職リストを作成してください。"+"\n"+"※各役職名はダブルクォーテーションマークでくくって下さい。また、役職数はカンマ区切りでいくらでも増やせます。"+"\n"+"[役職A, 役職B, 役職C, 役職D]←このリストの各役職をダブルクォーテーションマークでくくる")
    if message.content.startswith("[") and rolephase == 1:
        if client.user != message.author:
            # eval は任意コード実行につながるため、リテラルのみ評価する ast.literal_eval を使用
            import ast
            rolelist = ast.literal_eval(message.content)
            rolephase = 2
            await message.channel.send("役職リストを読み込みました。ダイレクトメッセージで role と入力すると役職が割り当てられます。reset と入力すると役職リストがリセットされます。")
    if message.content == "role" and rolephase == 2:
        m = ''.join(random.sample(rolelist, 1))
        rolelist.remove(m)
        await message.channel.send(m)
    if message.content == "reset" and rolephase in (1, 2):
        rolelist = []
        rolephase = 0
        await message.channel.send("役職リストをリセットしました。")
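The roleset flow above reads a user-typed list literal straight from chat. A hedged sketch of validating such input with `ast.literal_eval` instead of `eval` (the `parse_role_list` helper name is an assumption, not part of the original bot):

```python
import ast

def parse_role_list(text):
    """Safely parse a user-supplied list literal like '["役職A", "役職B"]'.

    Returns the list of role strings, or None if the input is not a
    plain list of strings. literal_eval only accepts Python literals,
    so function calls and attribute access are rejected.
    """
    try:
        roles = ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None
    if isinstance(roles, list) and all(isinstance(r, str) for r in roles):
        return roles
    return None
```

Unlike `eval`, this rejects payloads such as `__import__('os').system(...)` outright.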
    global startshadow, listv, listw, listh, listred, listblue, listgreen, list3, list4, list5, list6, list7, list8
    if message.content == "シャドハン" and startshadow == 0:
        startshadow = 1
        m = "シャドハンを開始しました。"+"\n"+"\n"+"**m**: move。移動時に使用。"+"\n"+"**a**: attack。ターン終了時に使用。"+"\n"+"**r, g, b**: red, green, blue。山札引き時に使用。"+"\n"+"**1d4, 1d6**: 4面・6面ダイスを振るときに使用。"
        await message.channel.send(m)
    if message.content == "role3" and startshadow == 1:
        m = ''.join(random.sample(list3, 1))
        list3.remove(m)
        await message.channel.send(m)
    if message.content == "role4" and startshadow == 1:
        m = ''.join(random.sample(list4, 1))
        list4.remove(m)
        await message.channel.send(m)
    if message.content == "role5" and startshadow == 1:
        m = ''.join(random.sample(list5, 1))
        list5.remove(m)
        await message.channel.send(m)
    if message.content == "role6" and startshadow == 1:
        m = ''.join(random.sample(list6, 1))
        list6.remove(m)
        await message.channel.send(m)
    if message.content == "role7" and startshadow == 1:
        m = ''.join(random.sample(list7, 1))
        list7.remove(m)
        await message.channel.send(m)
    if message.content == "role8" and startshadow == 1:
        m = ''.join(random.sample(list8, 1))
        list8.remove(m)
        await message.channel.send(m)
    if message.content == "v" and startshadow == 1:
        m = ''.join(random.sample(listv, 1))
        listv.remove(m)
        await message.channel.send(m)
    if message.content == "w" and startshadow == 1:
        m = ''.join(random.sample(listw, 1))
        listw.remove(m)
        await message.channel.send(m)
    if message.content == "h" and startshadow == 1:
        m = ''.join(random.sample(listh, 1))
        listh.remove(m)
        await message.channel.send(m)
    if message.content == "r" and startshadow == 1:
        m = ''.join(random.sample(listred, 1))
        listred.remove(m)
        await message.channel.send(m)
    if message.content == "b" and startshadow == 1:
        m = ''.join(random.sample(listblue, 1))
        listblue.remove(m)
        await message.channel.send(m)
    if message.content == "g" and startshadow == 1:
        m = ''.join(random.sample(listgreen, 1))
        listgreen.remove(m)
        await message.channel.send(m)
    if message.content == "m" and startshadow == 1:
        m = str(random.choice(list1d6)+random.choice(list1d4))+"へ移動"
        await message.channel.send(m)
    if message.content == "a" and startshadow == 1:
        m = message.author.name+"さんの攻撃!"+str(abs(random.choice(list1d6)-random.choice(list1d4)))+"ダメージを与えた!"
        await message.channel.send(m)
    if message.content == "1d6" and startshadow == 1:
        m = random.choice(list1d6)
        await message.channel.send(m)
    if message.content == "1d4" and startshadow == 1:
        m = random.choice(list1d4)
        await message.channel.send(m)
    if message.content == "ends":
        startshadow = 0
        list3 = ["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。", "あなたは人間陣営です。hと入力してください。"]
        list4 = ["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。"]
        list5 = ["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。"]
        list6 = ["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。","あなたは人間陣営です。hと入力してください。"]
        list7 = ["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。","あなたは人間陣営です。hと入力してください。", "あなたは人間陣営です。hと入力してください。"]
        list8 = ["あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたはバンパイア陣営です。vと入力してください。", "あなたはワーウルフ陣営です。wと入力してください。","あなたは人間陣営です。hと入力してください。","あなたは人間陣営です。hと入力してください。"]
        listv = ["HP11"+"\n"+"U"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 手番の開始時に「墓場」にいるプレイヤー一人を選び3ダメージを与える",
            "HP11"+"\n"+"U"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 緑のカードを受け取った際、答えを偽ることができる。公開の必要はない。正体判明の効果を受けない。",
            "HP13"+"\n"+"V"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 攻撃時に4面ダイスを振り、出た目のダメージを与える",
            "HP13"+"\n"+"V"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 自身の攻撃により誰かにダメージを与えた場合、直ちに自分のダメージを2回復する",
            "HP14"+"\n"+"W"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番の後で追加手番を脱落したプレイヤー一人につき一回行える",
            "HP14"+"\n"+"W"+"\n"+"勝利条件: 全てのワーウルフを倒す"+"\n"+"特殊能力: 誰かがあなたを攻撃してきた場合、そのあとでそのプレイヤーに対し攻撃することができる"]
        listw = ["HP10"+"\n"+"E"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: 移動する際にダイスを振らずに左または右に隣接する場所に移動することができる。",
            "HP10"+"\n"+"E"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番の開始時に誰か一人のプレイヤーを選び、そのプレイヤーの特殊能力をゲーム終了時まで封印する。",
            "HP12"+"\n"+"F"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番開始時に誰か一人を選び、1d6のダメージを与える",
            "HP12"+"\n"+"F"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番開始時に他のプレイヤーのマーカーを血の月マスに送る",
            "HP14"+"\n"+"G"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番終了時に次の自分の手番開始時までダメージを受けないことを宣言できる",
            "HP14"+"\n"+"G"+"\n"+"勝利条件: 全てのバンパイアを倒す"+"\n"+"特殊能力: ゲーム中一回限り、自分の手番開始時に誰か一人を選び、1d4のダメージを与える"]
        listh = ["HP8"+"\n"+"A"+"\n"+"勝利条件: ゲーム終了時に脱落していない"+"\n"+"特殊能力: ゲーム中一回限り自分のダメージを全て回復可能",
            "HP8"+"\n"+"A"+"\n"+"勝利条件: ゲーム終了時に脱落していない"+"\n"+"特殊能力: ゲーム中一回限り自分のダメージを全て回復可能",
            "HP8"+"\n"+"A"+"\n"+"勝利条件: 右隣りのプレイヤーの勝利"+"\n"+"特殊能力: ゲーム中一回限り勝利条件を「左隣りのプレイヤーの勝利」に変更可能",
            "HP10"+"\n"+"B"+"\n"+"勝利条件: あなたの攻撃により「受けられるダメージが13以上」のプレイヤーを脱落させる。またはゲーム終了時にストーンサークルにコマがある"+"\n"+"特殊能力: あなたの攻撃により「受けられるダメージが12以下」のプレイヤーを脱落させた場合、自身のキャラクターを強制公開",
            "HP10"+"\n"+"B"+"\n"+"勝利条件: 4つ以上のアイテムを持つ"+"\n"+"特殊能力: あなたの攻撃によりプレイヤーを脱落させた場合、そのプレイヤーのアイテムをすべて奪うことができる",
            "HP11"+"\n"+"C"+"\n"+"勝利条件: あなたの手番でプレイヤーを脱落させ、そのプレイヤーが3人目以上の脱落者である"+"\n"+"特殊能力: あなたの攻撃の後、直ちに自身が2ダメージを受けることによりもう一度攻撃を行える(1手番にいちどまで?)",
            "HP11"+"\n"+"C"+"\n"+"勝利条件: 最初に脱落する。またはゲーム終了時にプレイヤーが自身ともう一人だけになる"+"\n"+"特殊能力: 手番開始時に自分のダメージを1回復できる",
            "HP13"+"\n"+"D"+"\n"+"勝利条件: 最初に脱落する。またはすべてのバンパイアが脱落し、あなたが残っている"+"\n"+"特殊能力: プレイヤーが脱落した場合、自身のキャラクターを強制公開",
            "HP13"+"\n"+"D"+"\n"+"勝利条件: 「タリスマン」「骨の槍」「守りのローブ」「リュックサック」のうち3つ以上を所持する"+"\n"+"特殊能力: ゲーム中一回限り、赤か青の捨て札からアイテムを一つ取ることができる"]
        listred = ["吸血蜘蛛(強制) 自分と任意のプレイヤーに2ダメージ", "吸血蜘蛛(強制) 自分と任意のプレイヤーに2ダメージ", "爆発(強制) 二つのダイスを振り、出た目の場所にいるすべてのプレイヤーは3ダメージを受ける", "落とし穴(強制) 装備アイテム1つを誰かに渡す。持ってなければ1ダメージ受ける。", "待ち伏せ(強制) 対象プレイヤーを選択。6面ダイスを振り、1~4が出れば対象に3ダメージ", "闇の儀式(任意) ヴァンパイアなら全回復する", "バンパイアの蝙蝠(強制) 任意のプレイヤーに2ダメージを与え、自分を1回復する", "バンパイアの蝙蝠(強制) 任意のプレイヤーに2ダメージを与え、自分を1回復する", "バンパイアの蝙蝠(強制) 任意のプレイヤーに2ダメージを与え、自分を1回復する", "攻撃(強制) 任意のプレイヤーからアイテムを1つ奪う", "攻撃(強制) 任意のプレイヤーからアイテムを1つ奪う", "炎の魔法(強制) 攻撃時、対象と同じエリアに居る他のプレイヤーにも同量のダメージを与える", "アーチェリー(強制) 攻撃対象を選ぶ際に自分の居るエリア外のプレイヤーを選択する", "鉄拳(強制) 攻撃成功で追加1ダメージ", "鉄拳(強制) 攻撃成功で追加1ダメージ", "鉄拳(強制) 攻撃成功で追加1ダメージ", "鉄拳(強制) 攻撃成功で追加1ダメージ", "松明(強制) 攻撃時に4面ダイスを振り、出た目のダメージを与える"]
        listblue = ["祝福(任意) あなたは2点回復する", "遠隔治療(強制) 他のプレイヤーを選び6面ダイスを振り、出た目だけそのプレイヤーを回復", "祝福(任意) あなたは2点回復する", "正体判明(強制) バンパイアかワーウルフなら公開。嘘つきバンパイアなら無効可", "時間移動(強制) あなたの手番の後、即座にもう一度手番を行う", "時間移動(強制) あなたの手番の後、即座にもう一度手番を行う", "回復(任意) ワーウルフなら体力全回複", "エネルギーの素(任意) キャラクターがA,E,Uならダメージを全回復可能", "聖なる怒り(強制) あなた以外の全プレイヤーに2ダメージ", "守りのオーラ(強制) 現在から次の自分の手番までプレイヤーからの攻撃から守られる(魔女森やカードは喰らう)", "血の月(強制) マーカー一つを血の月マスへ送る", "魔法のコンパス(任意) 移動時、異なるエリアの任意の場所へ移動可能", "リュックサック(強制) あなたの攻撃で誰かを脱落させたならそのプレイヤーの装備をすべて奪う", "守りのローブ(強制) 他プレイヤーからの攻撃により受けるダメージが1減少", "守りのアミュレット(強制) 魔女の森からのダメージを受けない。魔女の森に移動するとさらに1ダメージ回復可能", "タリスマン(強制) カードの吸血蜘蛛、バンパイアの蝙蝠、爆発のダメージを受けない", "守りの指輪(強制) 血の月から守られる", "骨の槍(強制) 自身ワーウルフなら攻撃成功時追加で2ダメージ"]
        listgreen = ["手番プレイヤーからこのカードを受け取った時、あなたがバンパイアなら1ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアなら2ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフなら1ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフなら1ダメージを受ける", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアなら以下の指示に従う。ダメージを受けていなければ1ダメージを受ける。ダメージを受けていれば1ダメージ回復する", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフなら以下の指示に従う。ダメージを受けていなければ1ダメージを受ける。ダメージを受けていれば1ダメージ回復する", "手番プレイヤーからこのカードを受け取った時、あなたが人間なら以下の指示に従う。ダメージを受けていなければ1ダメージを受ける。ダメージを受けていれば1ダメージ回復する", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアかワーウルフなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたがバンパイアかワーウルフなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフか人間なら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたがワーウルフか人間なら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたが人間かバンパイアなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたが人間かバンパイアなら以下の指示に従う。手番プレイヤーにアイテムを一つ渡す。持っていなければ1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたのキャラクターがABCEUのいずれかであれば1ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたのキャラクターがDFGVWのいずれかであれば2ダメージ受ける", "手番プレイヤーからこのカードを受け取った時、あなたのカードを手番プレイヤーに見せなければならない"]
        await message.channel.send("シャドハンを終了しました")
    global start, pile, attack, defense, bottrash, myhand, abilityA, abilityB, ability1, ability2, ability3, bothand, player, botactive, Hard
    if message.content == "hard":
        Hard = 1
        await message.channel.send("【Hard mode】に変更しました。"+"\n"+"レゾナンスリング成功条件: 能力3つ当て")
    if message.content == "normal":
        Hard = 0
        await message.channel.send("【normal mode】に変更しました。"+"\n"+"レゾナンスリング成功条件: 能力3つ中2つ当て")
    if message.content == "サイレントファントム" and start == 0:
        start = 1
        #bot側能力決定
        botability = random.sample(abilityA, k=3)
        player = random.sample(abilityB, k=3)
        ability1 = botability[0]
        ability2 = botability[1]
        ability3 = botability[2]
        #bot初期札1枚目
        m1 = ''.join(random.sample(pile, k=1))
        pile.remove(m1)
        if m1.startswith("攻撃"):
            attack.append(m1)
        if m1.startswith("防御"):
            defense.append(m1)
        #bot初期札2枚目
        m2 = ''.join(random.sample(pile, k=1))
        pile.remove(m2)
        if m2.startswith("攻撃"):
            attack.append(m2)
        if m2.startswith("防御"):
            defense.append(m2)
        bothand = attack+defense
        #プレイヤー側初期札2枚
        m3 = ''.join(random.sample(pile, k=1))
        pile.remove(m3)
        myhand.append(m3)
        m4 = ''.join(random.sample(pile, k=1))
        pile.remove(m4)
        myhand.append(m4)
        m = "サイレントファントムを開始しました。各コマンドは『help』と入力すれば見れます。"+"\n"+"\n"+"あなたの能力は"+"**"+player[0]+player[1]+player[2]+"**"+"です。"+"\n"+"\n"+"あなたの手札は"+"\n"+"\n"+m3+m4+"です。"
        await message.channel.send(m)
    #bot側ターンを"you"コマンドで開始
    if message.content == "you" and start in (1, 2, 6, 8, 15, 16):
        m = ''.join(random.sample(pile, k=1))
        pile.remove(m)
        if m.startswith("攻撃"):
            attack.append(m)
        if m.startswith("防御"):
            defense.append(m)
        bothand = attack+defense
        start = 3
    if start == 3:
        botactivehand = [s for s in attack if (ability1 in s) or (ability2 in s) or (ability3 in s)]
    # start のチェックを先頭に置き、botactivehand 未定義時の NameError を短絡評価で回避
    if start == 3 and len(botactivehand) >= 1:
        me = botactivehand[0]
        attack.remove(me)
        bottrash.append(me)
        start = 4
        bothand = attack+defense
        await message.channel.send(me+"を使います。")
    elif start == 3 and len(botactivehand) == 0:
        start = 4
        bothand = attack+defense
        await message.channel.send("カードを使わずターン終了します。")
    if message.content == "yourhand" and start == 4:
        m = ''.join(bothand)
        await message.channel.send(m)
    if message.content == "myhand" and start in (4, 9):
        m = ''.join(myhand)
        await message.channel.send(m)
    if message.content.startswith("防御") and start == 4:
        if client.user != message.author:
            bothand = attack+defense
            start = 9
            for name in myhand:
                if message.content in name:
                    myhand.remove(name)
                    await message.channel.send("防御札を確認しました。")
                    break
    if message.content == "yourunmei" and start == 9:
        if len(bothand) >= 1:
            ma = bothand[0]
            for x in attack:
                if ma in x:
                    attack.remove(x)
                    break
            for x in defense:
                if ma in x:
                    defense.remove(x)
                    break
            m = ''.join(random.sample(pile, k=1))
            pile.remove(m)
            start = 11
            await message.channel.send("あなたは"+ma+"を見て山札底に置きました。"+"\n"+"私はカードを1枚引きました。")
            if m.startswith("攻撃") and start == 11:
                start = 12
                attack.append(m)
            elif m.startswith("防御") and start == 11:
                start = 12
                defense.append(m)
            bothand = attack+defense
        if len(bothand) == 0 and start == 9:
            start = 12
            await message.channel.send("私はカードを持っていないので、運命干渉は起こりませんでした。")
    if message.content == "myturn" and start in (4, 8, 9, 12, 15, 16):
        start = 2
        m = ''.join(random.sample(pile, k=1))
        pile.remove(m)
        myhand.append(m)
        await message.channel.send("あなたは"+"\n"+"\n"+m+"を引きました。")
    if message.content == "yourhand" and start == 2:
        m = ''.join(bothand)
        await message.channel.send(m)
    if message.content == "myhand" and start == 2:
        m = ''.join(myhand)
        await message.channel.send(m)
    if message.content == "reso" and start == 2 and Hard == 0:
        start = 5
        await message.channel.send("レゾナンスリング宣言を確認しました。私の能力名を電熱念運空精の中から2つ書いてください。"+"\n"+"例: 熱運")
        return
    if start == 5 and message.content in (ability1+ability2, ability2+ability3, ability1+ability3, ability2+ability1, ability3+ability2, ability3+ability1):
        if client.user != message.author:
            start = 1
            await message.channel.send("レゾナンスリング成功!あなたの勝利です!"+"\n"+"能力:"+ability1+ability2+ability3)
    if message.content == "reso" and start == 2 and Hard == 1:
        start = 20
        await message.channel.send("【hard mode】"+"\n"+"レゾナンスリング宣言を確認しました。私の能力名を電熱念運空精の中から3つ書いてください。"+"\n"+"例: 電熱運")
        return
    if start == 20 and message.content in (ability1+ability2+ability3, ability2+ability3+ability1, ability1+ability3+ability2, ability2+ability1+ability3, ability3+ability2+ability1, ability3+ability1+ability2):
        if client.user != message.author:
            start = 1
            await message.channel.send("レゾナンスリング大成功!あなたの完全勝利です!"+"\n"+"能力:"+ability1+ability2+ability3)
    # 元のコードは != を or で連結していたため条件が常に真になっていた。not in に修正
    if start == 5 and message.content not in (ability1+ability2, ability2+ability3, ability1+ability3, ability2+ability1, ability3+ability2, ability3+ability1):
        if client.user != message.author:
            start = 1
            await message.channel.send("レゾナンスリング失敗、5ダメージを受けてください。")
    if start == 20 and message.content not in (ability1+ability2+ability3, ability2+ability3+ability1, ability1+ability3+ability2, ability2+ability1+ability3, ability3+ability2+ability1, ability3+ability1+ability2):
        if client.user != message.author:
            start = 1
            await message.channel.send("レゾナンスリング失敗、5ダメージを受けてください。")
    if message.content.startswith("攻撃") and start == 2:
        if client.user != message.author:
            botactivehand = [s for s in defense if (ability1 in s) or (ability2 in s) or (ability3 in s)]
            start = 1
            for name in myhand:
                if message.content in name:
                    myhand.remove(name)
                    if "防御不可" in name:
                        start = 6
                        await message.channel.send("防御不可により手札は使いません。")
                    break
            if len(botactivehand) >= 1 and start == 1:
                me = botactivehand[0]
                defense.remove(me)
                bottrash.append(me)
                bothand = attack+defense
                start = 6
                await message.channel.send(me+"を使います。")
            elif len(botactivehand) == 0 and start == 1:
                start = 6
                await message.channel.send("防御は使いません。")
    if message.content == "yourhand" and start == 6:
        m = ''.join(bothand)
        await message.channel.send(m)
    if message.content == "myhand" and start == 6:
        m = ''.join(myhand)
        await message.channel.send(m)
    if message.content == "yourunmei" and start == 6:
        if len(bothand) >= 1:
            ma = bothand[0]
            for x in attack:
                if ma in x:
                    attack.remove(x)
                    break
            for x in defense:
                if ma in x:
                    defense.remove(x)
                    break
            m = ''.join(random.sample(pile, k=1))
            pile.remove(m)
            start = 7
            await message.channel.send("あなたは"+ma+"を見て山札底に置きました。"+"\n"+"私はカードを1枚引きました。")
            if m.startswith("攻撃") and start == 7:
                start = 8
                attack.append(m)
            elif m.startswith("防御") and start == 7:
                start = 8
                defense.append(m)
            bothand = attack+defense
        if len(bothand) == 0 and start == 6:
            start = 8
            await message.channel.send("私はカードを持っていないので、運命変転は起こりませんでした。")
    #ランダムでmyhandからリムーブ、pileからランダムドロー、pileからリムーブ、何が抜かれて何を引いたかメッセージ
    if message.content == "myunmei" and start in (4, 6, 9, 12):
        g = ''.join(random.sample(myhand, k=1))
        myhand.remove(g)
        z = ''.join(random.sample(pile, k=1))
        pile.remove(z)
        myhand.append(z)
        start = 16
        await message.channel.send(g+"を確認して山札底に置きました。"+"\n"+"あなたは"+"\n"+"\n"+z+"を引きました。")
    #ハンドからなら、yourunmeiと同じ要領でattackやdefenseからリムーブし、mytrashへ。山札ならpileからランダム→pileからリムーブ→mytrashへ(botresoなしなら省略?)
    if message.content == "yourseisin" and start in (6, 9):
        start = 15
        await message.channel.send("精神感応に使うカードを宣言してください。")
    if message.content.startswith(("攻撃", "防御")) and start == 15:
        start = 16
        for name in myhand:
            if message.content in name:
                myhand.remove(name)
                if ability1 in name or ability2 in name or ability3 in name:
                    await message.channel.send("そのカードは使えます。")
                else:
                    await message.channel.send("そのカードは使えません。")
                break
    if message.content.startswith("山札") and start == 15:
        y = ''.join(random.sample(pile, k=1))
        pile.remove(y)
        start = 16
        if ability1 in y or ability2 in y or ability3 in y:
            bottrash.append(y)
            await message.channel.send("山札トップのカードは"+"\n"+"\n"+y+"でした。そのカードは使えます。")
        else:
            await message.channel.send("山札トップのカードは"+"\n"+"\n"+y+"でした。そのカードは使えません。")
    #myunmeiとpile
    #if message.content == "myseisin":
    #''joinでok
    if message.content == "yourtrash":
        m = ''.join(bottrash)
        await message.channel.send(m)
    #同じだが、botresoがないなら省略?
    #if message.content == "mytrash":
    if message.content == "y":
        bothand = attack+defense
        m = len(bothand)
        f = ''.join(bottrash)
        await message.channel.send("手札:"+str(m)+"枚"+"\n"+"\n"+"捨て札:"+"\n"+f)
    if message.content == "m":
        m = ''.join(player)
        f = ''.join(myhand)
        await message.channel.send("能力:"+m+"\n"+"\n"+"手札:"+"\n"+f)
    if message.content == "checkstart":
        await message.channel.send("start ="+str(start))
    if message.content == "help":
        await message.channel.send("コマンド一覧"+"\n"+"\n"+"**you**: ターンを渡す。"+"\n"+"**myturn**: ターンをもらう。"+"\n"+"**y**: 相手の状態を表示する。(手札枚数、捨て札)"+"\n"+"**m**: 自分の状態を確認する。(能力、手札)"+"\n"+"**reso**: レソナンスリング使用。"+"\n"+"**yourseisin**: 相手に対して精神感応を行う。手札の場合はカードを記入(例: 攻撃3 発火)。山札トップの場合は『山札』と記入。**yourunmei**: 相手に対して運命干渉を行う。"+"\n"+"**myunmei**: 自分に運命干渉がなされる。"+"\n"+"**hakai**: 自分が『防御 精神破壊』を使用した後に打つ。相手の能力が1つ開示される。(使用不能にはならない)"+"\n"+"**win**: 勝利宣言。HPを削り切った時に打つ。"+"\n"+"**lose**: 敗北宣言。HPが削り切られた時に打つ。"+"\n"+"**end**: ゲームを終了するときに打つ。(必須)"+"\n"+"\n"+"__攻撃・防御の処理__"+"\n"+"攻撃札・防御を使用するときは、『攻撃3 発火』や『防御 精神破壊』のように記入。(対応能力名やカード説明は記入しない)")
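The help text above specifies card inputs like 『攻撃3 発火』: the kind comes first, then the ability name, separated by a space. A minimal parsing sketch of that convention (the `parse_card` helper name is an assumption, not part of the bot):

```python
def parse_card(text):
    """Split a card input such as '攻撃3 発火' into its kind and ability name."""
    # partition splits on the first space only, so multi-word names survive
    kind, _, name = text.partition(" ")
    return kind, name
```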
    if message.content == "hakai":
        await message.channel.send("精神破壊により"+ability1+"を開示しました。")
    if message.content == "botactivehand":
        botactive = [s for s in defense if (ability1 in s) or (ability2 in s) or (ability3 in s)]+[d for d in attack if (ability1 in d) or (ability2 in d) or (ability3 in d)]
        await message.channel.send(''.join(botactive))
    if message.content == "yourhand":
        bothand = attack + defense
        await message.channel.send(''.join(bothand))
    if message.content == "win":
        await message.channel.send("おめでとうございます!あなたの勝利です!"+"\n"+"能力:"+ability1+ability2+ability3)
    if message.content == "lose":
        await message.channel.send("私の勝利です!"+"\n"+"能力:"+ability1+ability2+ability3)
    if message.content == "end":
        start = 0
        pile = ["攻撃1 落とし物 "+"\n"+" 念運空 "+"\n"+" 任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
            "攻撃1 占術 "+"\n"+" 運空精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
            "攻撃1 野犬 "+"\n"+" 念運精 "+"\n"+" 任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
            "攻撃1 滑る地面 "+"\n"+" 熱念運 "+"\n"+" 防御されなければ、次のあなたのターンまで、あなたはダメージを受けない。"+"\n"+"\n",
            "攻撃1 支配 "+"\n"+" 運精 "+"\n"+" 山札の一番上のカードで精神感応を試みる。"+"\n"+"\n",
            "攻撃1 揺さぶり "+"\n"+" 念空精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
            "攻撃1 磁場 "+"\n"+" 電念運 "+"\n"+" 防御されなければ、次のあなたのターンまで、あなたはダメージを受けない。"+"\n"+"\n",
            "攻撃2 漏電 "+"\n"+" 電空 "+"\n"+" ターゲット以外のプレイヤー1人に2ダメージを与えてもよい。"+"\n"+"\n",
            "攻撃2 不幸な事故 "+"\n"+" 電熱運"+"\n"+"\n",
            "攻撃2 高速弾 "+"\n"+" 電熱念"+"\n"+"\n",
            "攻撃2 縮地 "+"\n"+" 念空 "+"\n"+" 防御不可"+"\n"+"\n",
            "攻撃2 落石 "+"\n"+" 念運 "+"\n"+" 防御不可。任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
            "攻撃2 熱感 "+"\n"+" 熱精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
            "攻撃2 拷問 "+"\n"+" 念精 "+"\n"+" 相手にダメージを与えたら、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
            "攻撃2 運命変転 "+"\n"+" 運 "+"\n"+" 自分の縦向きの捨て札を1枚選ぶ。そのカードのダメージと効果をこのカードに追加する。"+"\n"+"\n",
            "攻撃3 電磁砲 "+"\n"+" 電念 "+"\n"+" 自分に1ダメージ。防御されなければ、次のあなたのターンまで、あなたはダメージを受けない。"+"\n"+"\n",
            "攻撃3 雹塊 "+"\n"+" 熱運"+"\n"+"\n",
            "攻撃3 爆発 "+"\n"+" 熱念"+"\n"+"\n",
            "攻撃3 落雷 "+"\n"+" 電運 "+"\n"+" 自分に1ダメージ。任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
            "攻撃4 夢幻暴走 "+"\n"+" 念 "+"\n"+" 次のあなたのターンまで、あなたはダメージを受けず、レゾナンスリングを使用されない。"+"\n"+"\n",
            "攻撃4 電熱ブレード "+"\n"+" 電熱 "+"\n"+" 自分に2ダメージ。"+"\n"+"\n",
            "攻撃5 完全焼却 "+"\n"+" 熱"+"\n"+"\n",
            "攻撃6 衝天轟雷 "+"\n"+" 電 "+"\n"+" 自分に3ダメージ。"+"\n"+"\n",
            "防御 蜃気楼 "+"\n"+" 熱空精 "+"\n"+" 防御した攻撃のダメージを2軽減する。"+"\n"+"\n",
            "防御 静電気 "+"\n"+" 電空精 "+"\n"+" 防御した攻撃のダメージを2軽減する。"+"\n"+"\n",
            "防御 突風 "+"\n"+" 熱空 "+"\n"+" 防御した攻撃のダメージを2軽減し、ターゲットに1ダメージを与える。"+"\n"+"\n",
            "防御 高速移動 "+"\n"+" 電熱空 "+"\n"+" 防御した攻撃のダメージを1軽減し、ターゲットに1ダメージを与える。"+"\n"+"\n",
            "防御 閃光 "+"\n"+" 電熱精 "+"\n"+" 防御した攻撃のダメージを1軽減し、ターゲットに1ダメージを与える。"+"\n"+"\n",
            "防御 ニューロン暴走 "+"\n"+" 電精 "+"\n"+" ターゲットに2ダメージを与える。"+"\n"+"\n",
            "防御 空間識変調 "+"\n"+" 空精 "+"\n"+" 防御した攻撃のダメージを1軽減する。その後、あなたの手札のカード1枚で精神感応を試みてもよい。"+"\n"+"\n",
            "防御 落とし穴 "+"\n"+" 運空 "+"\n"+" 防御した攻撃のダメージを2軽減し、任意のプレイヤーに運命干渉を行う。"+"\n"+"\n",
            "防御 空間連結 "+"\n"+" 空 "+"\n"+" 防御した攻撃を無効化する。そのカードを自分の能力を無視して直ちにあなたが使用する。その後、そのカードを元々の使用者の捨て札に縦向きで置く。"+"\n"+"\n",
            "防御 精神破壊 "+"\n"+" 精 "+"\n"+" 自分に4ダメージ。防御した攻撃を無効化する。あなた以外のプレイヤー全員は能力1つを使用不能にする。(能力カードを1枚選び、表にする)"+"\n"+"\n"]
        attack = []
        defense = []
        bottrash = []
        myhand = []
        abilityA = ["電", "熱", "念", "運", "空", "精"]
        abilityB = ["電", "熱", "念", "運", "空", "精"]
        Hard = 0
        await message.channel.send("サイレントファントム シュウリョウ シタ。オカタヅケ..(((ノ〇▲)ノ")

client.run(token)
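Throughout the bot above, the same pattern recurs dozens of times: draw one random element with `random.sample(deck, 1)`, then `remove` it from the deck. A minimal helper sketch that captures this draw-and-discard pattern (the `draw_and_remove` name is assumed, not from the original):

```python
import random

def draw_and_remove(deck):
    """Draw one random card and remove it from the deck, like the bot's
    repeated ''.join(random.sample(deck, 1)) + deck.remove(...) idiom."""
    card = random.choice(deck)
    deck.remove(card)
    return card
```

Each `role3`/`v`/`r`-style handler body could then shrink to `await message.channel.send(draw_and_remove(list3))`.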
# ---- BeeVeeH/__init__.py (repo: wghou/BeeVeeH, license: MIT) ----
import sys
sys.path = ['lib'] + sys.path
from BeeVeeH.frame_app import start
# ---- tcp_connectors/base/__init__.py (repo: evocount/connectors, license: MIT) ----
from .base_connector import BaseConnector
# ---- meerschaum/api/routes/__init__.py (repo: bmeares/Meerschaum, license: Apache-2.0) ----
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
"""
Import all routes from other modules in package
"""
### Although import_children works well, it's fairly ambiguous and does not
### freeze well. It will be depreciated in a future release.
# from meerschaum.utils.packages import import_children
# import_children()
from meerschaum.api.routes._login import *
from meerschaum.api.routes._actions import *
from meerschaum.api.routes._connectors import *
from meerschaum.api.routes._index import *
from meerschaum.api.routes._misc import *
from meerschaum.api.routes._pipes import *
from meerschaum.api.routes._plugins import *
from meerschaum.api.routes._users import *
from meerschaum.api.routes._version import *
| 30.583333 | 75 | 0.775204 | 104 | 734 | 5.355769 | 0.461538 | 0.251347 | 0.274686 | 0.371634 | 0.416517 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003101 | 0.121253 | 734 | 23 | 76 | 31.913043 | 0.860465 | 0.418256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# ---- software/tests/test_utilities.py (repo: adellanno/MetaXcan, license: MIT) ----
import unittest
import sys
import re

if "DEBUG" in sys.argv:
    sys.path.insert(0, "..")
    sys.path.insert(0, "../../")
    sys.path.insert(0, ".")
    sys.argv.remove("DEBUG")

import metax.Utilities as Utilities
import metax.Exceptions as Exceptions

class TestUtilities(unittest.TestCase):
    def testHapName(self):
        hap_name = Utilities.hapName("a")
        self.assertEqual(hap_name, "a.hap.gz")

    def testLegendName(self):
        legend_name = Utilities.legendName("a")
        self.assertEqual(legend_name, "a.legend.gz")

    def testDosageName(self):
        dosage_name = Utilities.dosageName("a")
        self.assertEqual(dosage_name, "a.dosage.gz")

    def testDosageNamesFromFolder(self):
        names = Utilities.dosageNamesFromFolder("tests/_td/dosage_set_1")
        self.assertEqual(names, [])

    def testLegendNamesFromFolder(self):
        names = Utilities.legendNamesFromFolder("tests/_td/dosage_set_1")
        self.assertEqual(names, ["set_chr1"])

    def testHapNamesFromFolder(self):
        names = Utilities.hapNamesFromFolder("tests/_td/dosage_set_1")
        self.assertEqual(names, ["set_chr1"])

    def testNamesWithPatternFromFolders(self):
        names = Utilities.namesWithPatternFromFolder("tests/_td/dosage_set_1/", ".sample")
        self.assertEqual(names, ["set"])

    def testContentsWithPatternsFromFolders(self):
        contents = Utilities.contentsWithPatternsFromFolder("tests/_td/dosage_set_1", ["sample", "Fail"])
        contents = {c for c in contents}
        self.assertEqual(contents, set([]))

        contents = Utilities.contentsWithPatternsFromFolder("tests/_td/dosage_set_1", ["set", "sample"])
        contents = {c for c in contents}
        self.assertEqual(contents, {"set.sample"})

    def testContentsWithRegexpFromFolder(self):
        contents = Utilities.contentsWithRegexpFromFolder("tests/_td/dosage_set_1", re.compile(".*sample"))
        self.assertEqual(contents, ["set.sample"])

    def testSamplesInputPath(self):
        path = Utilities.samplesInputPath("tests/_td/dosage_set_1")
        self.assertEqual(path, "tests/_td/dosage_set_1/set.sample")

    def testCheckSubdirectorySanity(self):
        b = Utilities.checkSubdirectorySanity("tests", "tests")
        self.assertFalse(b)

        b = Utilities.checkSubdirectorySanity("tests", "tests/_td")
        self.assertTrue(b)

        b = Utilities.checkSubdirectorySanity("tests/_td", "tests")
        self.assertFalse(b)
    def testFileIterator(self):
        class DummyCallback():
            def __init__(self):
                self.lines = []

            def __call__(self, i, line):
                self.lines.append((i, line.strip()))

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set.sample", header="a")
        with self.assertRaises(Exceptions.MalformedInputFile):
            f.iterate(c)

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set.sample")
        f.iterate(c)
        self.assertEqual(c.lines,
            [(0, "ID POP GROUP SEX"),
             (1, "ID1 K HERO male"),
             (2, "ID2 K HERO female"),
             (3, "DI5 K HERO male"),
             (4, "ID3 K HERO female"),
             (5, "B1 L T female")])

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set.sample", "")
        f.iterate(c)
        self.assertEqual(c.lines,
            [(0, "ID1 K HERO male"),
             (1, "ID2 K HERO female"),
             (2, "DI5 K HERO male"),
             (3, "ID3 K HERO female"),
             (4, "B1 L T female")])

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set.sample", "ID POP GROUP SEX")
        f.iterate(c)
        self.assertEqual(c.lines,
            [(0, "ID1 K HERO male"),
             (1, "ID2 K HERO female"),
             (2, "DI5 K HERO male"),
             (3, "ID3 K HERO female"),
             (4, "B1 L T female")])

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set.sample", "DI5 K", ignore_until_header=True)
        f.iterate(c)
        self.assertEqual(c.lines,
            [(0, "ID3 K HERO female"),
             (1, "B1 L T female")])

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set_chr1.legend.gz", header="a", compressed=True)
        with self.assertRaises(Exceptions.MalformedInputFile):
            f.iterate(c)

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set_chr1.legend.gz", compressed=True)
        f.iterate(c)
        self.assertEqual(c.lines,
            [(0, "id position a0 a1 TYPE AFR AMR EAS EUR SAS ALL"),
             (1, "1:10177:A:AC 10177 A AC Biallelic_INDEL 0.490922844175492 0.360230547550432 0.336309523809524 0.405566600397614 0.494887525562372 0.425319488817891"),
             (2, "rs1:1:A:T 10505 A T Biallelic_SNP 0 0 0 0 0 0"),
             (3, "1:12:C:G 10506 C G Biallelic_SNP 0 0 0 0 0 0"),
             (4, "rs2:2:G:A 10511 G A Biallelic_SNP 0 0 0 0 0 0"),
             (5, "rs3:3:C:T 10511 G A Biallelic_SNP 0 0 0 0 0 0"),
             (6, "rs4:4:C:T 10511 G A Biallelic_SNP 0 0 0 0 0 0")]
        )

        c = DummyCallback()
        f = Utilities.FileIterator("tests/_td/dosage_set_1/set_chr1.legend.gz", header="id position a0 a1 TYPE AFR AMR EAS EUR SAS ALL", compressed=True)
        f.iterate(c)
        self.assertEqual(c.lines,
            [(0, "1:10177:A:AC 10177 A AC Biallelic_INDEL 0.490922844175492 0.360230547550432 0.336309523809524 0.405566600397614 0.494887525562372 0.425319488817891"),
             (1, "rs1:1:A:T 10505 A T Biallelic_SNP 0 0 0 0 0 0"),
             (2, "1:12:C:G 10506 C G Biallelic_SNP 0 0 0 0 0 0"),
             (3, "rs2:2:G:A 10511 G A Biallelic_SNP 0 0 0 0 0 0"),
             (4, "rs3:3:C:T 10511 G A Biallelic_SNP 0 0 0 0 0 0"),
             (5, "rs4:4:C:T 10511 G A Biallelic_SNP 0 0 0 0 0 0")]
        )
    def testCSVFileIterator(self):
        class DummyCallback():
            def __init__(self):
                self.lines = []

            def __call__(self, i, row):
                self.lines.append((i, row))

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set.sample", header="a")
        with self.assertRaises(Exceptions.MalformedInputFile):
            f.iterate(c)

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set.sample")
        f.iterate(c)
        self.assertEqual(c.lines,
                         [(0, ["ID", "POP", "GROUP", "SEX"]),
                          (1, ["ID1", "K", "HERO", "male"]),
                          (2, ["ID2", "K", "HERO", "female"]),
                          (3, ["DI5", "K", "HERO", "male"]),
                          (4, ["ID3", "K", "HERO", "female"]),
                          (5, ["B1", "L", "T", "female"])])

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set.sample", "")
        f.iterate(c)
        self.assertEqual(c.lines,
                         [(0, ["ID1", "K", "HERO", "male"]),
                          (1, ["ID2", "K", "HERO", "female"]),
                          (2, ["DI5", "K", "HERO", "male"]),
                          (3, ["ID3", "K", "HERO", "female"]),
                          (4, ["B1", "L", "T", "female"])])

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set.sample", "DI5 K", ignore_until_header=True)
        f.iterate(c)
        self.assertEqual(c.lines,
                         [(0, ["ID3", "K", "HERO", "female"]),
                          (1, ["B1", "L", "T", "female"])])

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set.sample", header="ID POP GROUP SEX")
        f.iterate(c)
        self.assertEqual(c.lines,
                         [(0, ["ID1", "K", "HERO", "male"]),
                          (1, ["ID2", "K", "HERO", "female"]),
                          (2, ["DI5", "K", "HERO", "male"]),
                          (3, ["ID3", "K", "HERO", "female"]),
                          (4, ["B1", "L", "T", "female"])])

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set_chr1.legend.gz", header="a", compressed=True)
        with self.assertRaises(Exceptions.MalformedInputFile):
            f.iterate(c)

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set_chr1.legend.gz", compressed=True)
        f.iterate(c)
        self.assertEqual(c.lines,
                         [(0, ["id", "position", "a0", "a1", "TYPE", "AFR", "AMR", "EAS", "EUR", "SAS", "ALL"]),
                          (1, ["1:10177:A:AC", "10177", "A", "AC", "Biallelic_INDEL", "0.490922844175492", "0.360230547550432", "0.336309523809524", "0.405566600397614", "0.494887525562372", "0.425319488817891"]),
                          (2, ["rs1:1:A:T", "10505", "A", "T", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (3, ["1:12:C:G", "10506", "C", "G", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (4, ["rs2:2:G:A", "10511", "G", "A", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (5, ["rs3:3:C:T", "10511", "G", "A", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (6, ["rs4:4:C:T", "10511", "G", "A", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"])])

        c = DummyCallback()
        f = Utilities.CSVFileIterator("tests/_td/dosage_set_1/set_chr1.legend.gz", header="id position a0 a1 TYPE AFR AMR EAS EUR SAS ALL", compressed=True)
        f.iterate(c)
        self.assertEqual(c.lines,
                         [(0, ["1:10177:A:AC", "10177", "A", "AC", "Biallelic_INDEL", "0.490922844175492", "0.360230547550432", "0.336309523809524", "0.405566600397614", "0.494887525562372", "0.425319488817891"]),
                          (1, ["rs1:1:A:T", "10505", "A", "T", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (2, ["1:12:C:G", "10506", "C", "G", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (3, ["rs2:2:G:A", "10511", "G", "A", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (4, ["rs3:3:C:T", "10511", "G", "A", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"]),
                          (5, ["rs4:4:C:T", "10511", "G", "A", "Biallelic_SNP", "0", "0", "0", "0", "0", "0"])])
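# The tests above exercise a callback-style iterator protocol: iterate() calls a
# callable with (index, line_or_row) for each data line, optionally validating or
# skipping a header first. A minimal standalone sketch of that protocol (class
# names are illustrative, not the real Utilities module):

```python
class MiniFileIterator:
    """Toy iterator mirroring the (index, line) callback protocol used above."""
    def __init__(self, lines, header=None):
        self.lines = lines
        self.header = header

    def iterate(self, callback):
        rows = self.lines
        if self.header is not None:
            if rows[0] != self.header:
                # Mirrors Exceptions.MalformedInputFile in the real code
                raise ValueError("Malformed input: header mismatch")
            rows = rows[1:]  # skip the validated header row
        for i, line in enumerate(rows):
            callback(i, line)


class Collector:
    """Callable object collecting (index, line) pairs, like DummyCallback."""
    def __init__(self):
        self.seen = []

    def __call__(self, i, line):
        self.seen.append((i, line))


c = Collector()
MiniFileIterator(["h", "a", "b"], header="h").iterate(c)
print(c.seen)  # [(0, 'a'), (1, 'b')]
```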

# --- File: result_linker/api/__init__.py (repo: akshayAithal/result_linker, license: Apache-2.0) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Blueprints for the application.
"""
from result_linker.api.home import home_blueprint
from result_linker.api.user import user_blueprint
from result_linker.api.svn import svn_blueprint
from result_linker.api.download import download_blueprint
from result_linker.api.share import share_blueprint
from result_linker.api.link import link_blueprint
from result_linker.api.write import write_blueprint

# --- File: data_interrogator/exceptions.py (repo: s-i-l-k-e/django-data-interrogator, license: MIT) ---
class ModelNotAllowedException(Exception):
    pass


class InvalidAnnotationError(Exception):
    pass

# --- File: la_stopwatch/__init__.py (repo: thiagola92/la-stopwatch, license: MIT) ---
from la_stopwatch.stopwatch import Stopwatch
from la_stopwatch.stopwatch_ns import StopwatchNS

# --- File: backend/family_app/models.py (repo: berserg2010/family_and_history_backend, license: Apache-2.0) ---
from .family.models import Family
from .events.marriage.models import Marriage

# --- File: src/algorithms/kalman_filter.py (repo: Brechard/Robot-Simulator, license: MIT) ---
import numpy as np
default_Q_t = np.identity(3) * np.random.rand(3, 1) * 0.1
default_R = np.identity(3) * np.random.rand(3, 1) * 0.1
class kalman_filter():
    def __init__(self, Q_t=default_Q_t, R=default_R):
        """
        :param Q_t: Covariance matrix defining the noise of the measurement model (delta_x)
        :param R: Covariance matrix defining the noise of the motion model (epsilon)
        """
        self.Q_t = Q_t
        self.R = R

    def run_filter(self, state, covariance, control, observation):
        """
        :param state: Previous belief state
        :param covariance: Covariance matrix
        :param control: Kinematics values
        :param observation: Observed pose
        :return: Predicted state, corrected state and corrected covariance
        """
        # Initializing distributions
        A = np.identity(3)
        B = np.array([[np.cos(state[2]), 0],
                      [np.sin(state[2]), 0],
                      [0, 1]])
        C = np.identity(3)

        # Prediction
        state = np.matmul(A, state) + np.matmul(B, control)  # mu_t
        covariance = np.matmul(np.matmul(A, covariance), A.T) + self.R  # sum_t

        # Correction: the Kalman gain must use matrix products; the original
        # elementwise "*" silently zeroed the off-diagonal covariance terms.
        K_t = np.matmul(np.matmul(covariance, C.T),
                        np.linalg.inv(np.matmul(np.matmul(C, covariance), C.T) + self.Q_t.T))  # Kalman gain
        new_state = state + np.matmul(K_t, (observation - np.matmul(C, state)))
        new_covariance = np.matmul((np.identity(3) - np.matmul(K_t, C)), covariance)

        return state, new_state, new_covariance
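# A quick standalone sketch of one predict/correct cycle of the filter above,
# assuming a 3-vector pose state [x, y, theta] and a 2-vector control [v, omega].
# The helper name kf_step and the noise values are illustrative, not part of the repo:

```python
import numpy as np


def kf_step(state, covariance, control, observation, Q, R):
    """One Kalman predict/correct cycle for a unicycle motion model."""
    A = np.identity(3)
    B = np.array([[np.cos(state[2]), 0.0],
                  [np.sin(state[2]), 0.0],
                  [0.0, 1.0]])
    C = np.identity(3)

    # Predict: propagate the mean and grow the covariance by motion noise R
    pred_state = A @ state + B @ control
    pred_cov = A @ covariance @ A.T + R

    # Correct: blend the prediction with the observation via the Kalman gain
    K = pred_cov @ C.T @ np.linalg.inv(C @ pred_cov @ C.T + Q)
    new_state = pred_state + K @ (observation - C @ pred_state)
    new_cov = (np.identity(3) - K @ C) @ pred_cov
    return new_state, new_cov


state = np.zeros(3)
cov = np.identity(3) * 0.5
Q = np.identity(3) * 0.1   # illustrative measurement noise
R = np.identity(3) * 0.1   # illustrative motion noise
state, cov = kf_step(state, cov, np.array([1.0, 0.0]), np.array([1.0, 0.0, 0.0]), Q, R)
print(state)  # [1. 0. 0.]
```

# Note how the corrected covariance shrinks below the prior: the observation
# agrees exactly with the prediction, so uncertainty only decreases.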

# --- File: tests/test_layers/test_pooling.py (repo: code4bw/deep-np, license: MIT) ---
# -*- coding: utf-8 -*-
import pytest
import numpy as np
class PreLayer:
    def __init__(self, out_shape):
        self.out_shape = out_shape


def test_MeanPooling():
    from npdl.layers import MeanPooling

    pool = MeanPooling((2, 2))
    pool.connect_to(PreLayer((10, 1, 20, 30)))
    assert pool.out_shape == (10, 1, 10, 15)

    with pytest.raises(ValueError):
        pool.forward(np.random.rand(10, 10))
    with pytest.raises(ValueError):
        pool.backward(np.random.rand(10, 20))

    assert np.ndim(pool.forward(np.random.rand(10, 20, 30))) == 3
    assert np.ndim(pool.backward(np.random.rand(10, 20, 30))) == 3
    assert np.ndim(pool.forward(np.random.rand(10, 1, 20, 30))) == 4
    assert np.ndim(pool.backward(np.random.rand(10, 1, 20, 30))) == 4


def test_MaxPooling():
    from npdl.layers import MaxPooling

    pool = MaxPooling((2, 2))
    pool.connect_to(PreLayer((10, 1, 20, 30)))
    assert pool.out_shape == (10, 1, 10, 15)

    with pytest.raises(ValueError):
        pool.forward(np.random.rand(10, 10))
    with pytest.raises(ValueError):
        pool.backward(np.random.rand(10, 20))

    assert np.ndim(pool.forward(np.random.rand(10, 20, 30))) == 3
    assert np.ndim(pool.backward(np.random.rand(10, 20, 30))) == 3
    assert np.ndim(pool.forward(np.random.rand(10, 1, 20, 30))) == 4
    assert np.ndim(pool.backward(np.random.rand(10, 1, 20, 30))) == 4
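# The pooling layers tested above halve each spatial dimension with a (2, 2)
# window: (10, 1, 20, 30) -> (10, 1, 10, 15). A minimal numpy-only sketch of
# non-overlapping 2x2 mean pooling (independent of the npdl implementation):

```python
import numpy as np


def mean_pool_2x2(x):
    """Non-overlapping 2x2 mean pooling over the last two axes of (N, C, H, W)."""
    n, c, h, w = x.shape
    assert h % 2 == 0 and w % 2 == 0, "spatial dims must be divisible by the pool size"
    # Split H and W into (blocks, 2) and average over the two window axes
    return x.reshape(n, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5))


x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
out = mean_pool_2x2(x)
print(out.shape)  # (1, 1, 2, 2)
```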

# --- File: "The Core/03 - candies.py" (repo: lucasalme1da/codesignal, license: MIT) ---
def candies(n, m):
from taxstats.utils import (parse_docs, create_labels) | 45.5 | 54 | 0.835165 | 13 | 91 | 5.692308 | 0.692308 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087912 | 91 | 2 | 54 | 45.5 | 0.891566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |

# --- File: iter8_analytics/api/analytics/endpoints/examples.py (repo: huang195/iter8-analytics, license: Apache-2.0) ---
import copy
eip_example = {
'start_time': "2020-04-03T12:55:50.568Z",
'iteration_number': 1,
'service_name': "reviews",
"metric_specs": {
"counter_metrics": [
{
"id": "iter8_request_count",
"query_template": "sum(increase(istio_requests_total{reporter='source'}[$interval])) by ($version_labels)"
},
{
"id": "iter8_total_latency",
"query_template": "sum(increase(istio_request_duration_seconds_sum{reporter='source'}[$interval])) by ($version_labels)"
},
{
"id": "iter8_error_count",
"query_template": "sum(increase(istio_requests_total{response_code=~'5..',reporter='source'}[$interval])) by ($version_labels)",
"preferred_direction": "lower"
},
{
"id": "conversion_count",
"query_template": "sum(increase(newsletter_signups[$interval])) by ($version_labels)"
},
],
"ratio_metrics": [
{
"id": "iter8_mean_latency",
"numerator": "iter8_total_latency",
"denominator": "iter8_request_count",
"preferred_direction": "lower",
"zero_to_one": False
},
{
"id": "iter8_error_rate",
"numerator": "iter8_error_count",
"denominator": "iter8_request_count",
"preferred_direction": "lower",
"zero_to_one": True
},
{
"id": "conversion_rate",
"numerator": "conversion_count",
"denominator": "iter8_request_count",
"preferred_direction": "higher",
"zero_to_one": True
}
]},
"criteria": [
{
"id": "0",
"metric_id": "iter8_mean_latency",
"is_reward": False,
"threshold": {
"type": "absolute",
"value": 25
}
}
],
"baseline": {
"id": "reviews_base",
"version_labels": {
'destination_service_namespace': "default",
'destination_workload': "reviews-v1"
}
},
"candidates": [
{
"id": "reviews_candidate",
"version_labels": {
'destination_service_namespace': "default",
'destination_workload': "reviews-v2"
}
}
],
"advanced_traffic_control_parameters": {
"exploration_traffic_percentage": 5.0,
"check_and_increment_parameters": {
"step_size": 1
}
},
"advanced_assessment_parameters": {
"posterior_probability_for_credible_intervals": 95.0,
"min_posterior_probability_for_winner": 99.0
}
}
ar_example = {
'timestamp': "2020-04-03T12:59:50.568Z",
'baseline_assessment': {
"id": "reviews_base",
"request_count": 500,
"win_probability": 0.1,
"criterion_assessments": [
{
"id": "0",
"metric_id": "iter8_mean_latency",
"statistics": {
"value": 0.005,
"ratio_statistics": {
"improvement_over_baseline": {
'lower': 2.3,
'upper': 5.0
},
"probability_of_beating_baseline": .82,
"probability_of_being_best_version": 0.1,
"credible_interval": {
'lower': 22,
'upper': 28
}
}
},
"threshold_assessment": {
"threshold_breached": False,
"probability_of_satisfying_threshold": 0.8
}
}
]
},
'candidate_assessments': [
{
"id": "reviews_candidate",
"request_count": 1500,
"win_probability": 0.11,
"criterion_assessments": [
{
"id": "0",
"metric_id": "iter8_mean_latency",
"statistics": {
"value": 0.1005,
"ratio_statistics": {
"sample_size": 1500,
"improvement_over_baseline": {
'lower': 12.3,
'upper': 15.0
},
"probability_of_beating_baseline": .182,
"probability_of_being_best_version": 0.1,
"credible_interval": {
'lower': 122,
'upper': 128
}
}
},
"threshold_assessment": {
"threshold_breached": True,
"probability_of_satisfying_threshold": 0.180
}
}
]
}
],
'traffic_split_recommendation': {
'unif': {
'reviews_base': 50.0,
'reviews_candidate': 50.0
}
},
'winner_assessment': {
'winning_version_found': False
},
'status': ["all_ok"]
}
reviews_example = {
"start_time": "2020-05-17T12:55:50.568Z",
"service_name": "reviews",
"metric_specs": {
"counter_metrics": [
{
"id": "iter8_request_count",
"query_template": "sum(increase(istio_requests_total{reporter='source'}[$interval])) by ($version_labels)"
},
{
"id": "iter8_total_latency",
"query_template": "sum(increase(istio_request_duration_seconds_sum{reporter='source'}[$interval])) by ($version_labels)"
},
{
"id": "iter8_error_count",
"query_template": "sum(increase(istio_requests_total{response_code=~'5..',reporter='source'}[$interval])) by ($version_labels)",
"preferred_direction": "lower"
}
],
"ratio_metrics": [
{
"id": "iter8_mean_latency",
"numerator": "iter8_total_latency",
"denominator": "iter8_request_count",
"preferred_direction": "lower"
}
]
},
"criteria": [
{
"id": "0",
"metric_id": "iter8_error_count",
"is_reward": False,
"threshold": {
"type": "absolute",
"value": 25
}
},
{
"id": "1",
"metric_id": "iter8_mean_latency",
"is_reward": False,
"threshold": {
"type": "absolute",
"value": 25
}
}
],
"baseline": {
"id": "reviews_base",
"version_labels": {
"destination_service_namespace": "bookinfo-iter8",
"destination_workload": "reviews-v2"
}
},
"candidates": [
{
"id": "reviews_candidate",
"version_labels": {
"destination_service_namespace": "bookinfo-iter8",
"destination_workload": "reviews-v3"
}
}
],
"advanced_traffic_control_parameters": {
"exploration_traffic_percentage": 5,
"check_and_increment_parameters": {
"step_size": 1
}
},
"advanced_assessment_parameters": {
"posterior_probability_for_credible_intervals": 95,
"min_posterior_probability_for_winner": 99
}
}
last_state = {
"aggregated_counter_metrics": {
"reviews_candidate": {
"iter8_request_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
},
"iter8_error_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
},
"iter8_total_latency": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
}
},
"reviews_base": {
"iter8_request_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
},
"iter8_error_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
},
"iter8_total_latency": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
}
}
},
"aggregated_ratio_metrics": {
"reviews_candidate": {
"iter8_mean_latency": {
"value": None,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
}
},
"reviews_base": {
"iter8_mean_latency": {
"value": None,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
}
}
},
"ratio_max_mins": {
"iter8_mean_latency": {
"minimum": None,
"maximum": None
}
}
}
partial_last_state = {
"aggregated_counter_metrics": {
"reviews_candidate": {
"iter8_request_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
},
"iter8_error_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
}
},
"reviews_base": {
"iter8_request_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
},
"iter8_error_count": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
},
"iter8_total_latency": {
"value": 0,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
}
}
},
"aggregated_ratio_metrics": {
"reviews_candidate": {
"iter8_mean_latency": {
"value": None,
"timestamp": "2020-05-19T11:41:51.474487+00:00",
"status": "no versions in prometheus response"
}
}
},
"ratio_max_mins": {
"iter8_mean_latency": {
"minimum": None,
"maximum": None
}
}
}
last_state_with_ratio_max_mins = copy.deepcopy(last_state)
last_state_with_ratio_max_mins["ratio_max_mins"] = {
"iter8_mean_latency": {
"minimum": 1.5,
"maximum": 20
}
}
reviews_example_with_last_state = {
"start_time": "2020-05-17T12:55:50.568Z",
"service_name": "reviews",
"metric_specs": {
"counter_metrics": [
{
"id": "iter8_request_count",
"query_template": "sum(increase(istio_requests_total{reporter='source'}[$interval])) by ($version_labels)"
},
{
"id": "iter8_total_latency",
"query_template": "sum(increase(istio_request_duration_seconds_sum{reporter='source'}[$interval])) by ($version_labels)"
},
{
"id": "iter8_error_count",
"query_template": "sum(increase(istio_requests_total{response_code=~'5..',reporter='source'}[$interval])) by ($version_labels)",
"preferred_direction": "lower"
}
],
"ratio_metrics": [
{
"id": "iter8_mean_latency",
"numerator": "iter8_total_latency",
"denominator": "iter8_request_count",
"preferred_direction": "lower"
}
]
},
"criteria": [
{
"id": "0",
"metric_id": "iter8_error_count",
"is_reward": False,
"threshold": {
"type": "absolute",
"value": 25
}
},
{
"id": "1",
"metric_id": "iter8_mean_latency",
"is_reward": False,
"threshold": {
"type": "absolute",
"value": 25
}
}
],
"baseline": {
"id": "reviews_base",
"version_labels": {
"destination_service_namespace": "bookinfo-iter8",
"destination_workload": "reviews-v2"
}
},
"candidates": [
{
"id": "reviews_candidate",
"version_labels": {
"destination_service_namespace": "bookinfo-iter8",
"destination_workload": "reviews-v3"
}
}
],
"advanced_traffic_control_parameters": {
"exploration_traffic_percentage": 5,
"check_and_increment_parameters": {
"step_size": 1
}
},
"advanced_assessment_parameters": {
"posterior_probability_for_credible_intervals": 95,
"min_posterior_probability_for_winner": 99
},
"last_state": copy.deepcopy(last_state)
}
reviews_example_with_partial_last_state = copy.deepcopy(reviews_example_with_last_state)
reviews_example_with_partial_last_state["last_state"] = copy.deepcopy(partial_last_state)
reviews_example_with_ratio_max_mins = copy.deepcopy(reviews_example_with_last_state)
reviews_example_with_ratio_max_mins["last_state"] = copy.deepcopy(last_state_with_ratio_max_mins)
eip_with_invalid_ratio = copy.deepcopy(reviews_example_with_ratio_max_mins)
eip_with_invalid_ratio["metric_specs"]["ratio_metrics"].append({
"id": "iter8_invalid_latency",
"numerator": "iter8_total_invalid_latency",
"denominator": "iter8_request_count",
"preferred_direction": "lower"
})
eip_with_invalid_ratio["criteria"].append({
"id": "2",
"metric_id": "iter8_invalid_latency",
"is_reward": False,
"threshold": {
"type": "absolute",
"value": 25
}
})
eip_with_unknown_metric_in_criterion = copy.deepcopy(reviews_example_with_ratio_max_mins)
eip_with_unknown_metric_in_criterion["criteria"].append({
"id": "2",
"metric_id": "iter8_invalid_latency",
"is_reward": False,
"threshold": {
"type": "absolute",
"value": 25
}
})
reviews_example_without_request_count = copy.deepcopy(reviews_example)
del reviews_example_without_request_count["criteria"][1]
del reviews_example_without_request_count["metric_specs"]["counter_metrics"][0]
del reviews_example_without_request_count["metric_specs"]["ratio_metrics"][0]
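# Every variant payload above is built with copy.deepcopy before mutation. That
# matters because these dicts nest lists and dicts several levels deep: a shallow
# copy would alias the inner objects, so editing one variant would corrupt the
# base. A minimal sketch of the difference (toy data, not the payloads above):

```python
import copy

base = {"criteria": [{"id": "0", "threshold": {"value": 25}}]}

shallow = dict(base)        # top-level copy only; nested objects are shared
deep = copy.deepcopy(base)  # fully independent tree

shallow["criteria"][0]["threshold"]["value"] = 99
print(base["criteria"][0]["threshold"]["value"])  # 99 (base was mutated through the alias)

deep["criteria"][0]["threshold"]["value"] = 7
print(base["criteria"][0]["threshold"]["value"])  # still 99 (the deep copy is isolated)
```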

# --- File: app/main/search.py (repo: senderle/doppio, license: MIT) ---
from app.main import bp
from flask import render_template
@bp.route('/search')
def render_search_page():
    return render_template('search.html')

# --- File: "The Core/03 - candies.py" (repo: lucasalme1da/codesignal, license: MIT) ---
def candies(n, m):
    return m - m % n

# --- File: mkt/__init__.py (repo: muffinresearch/zamboni, license: BSD-3-Clause) ---
from mkt.constants import (categories, comm, platforms, iarc_mappings,
                           ratingsbodies)
from mkt.constants.submit import *

# --- File: tests/test_observable/test_contains.py (repo: AlexMost/RxPY, licenses: ECL-2.0, Apache-2.0) ---
import unittest
from rx import Observable
from rx.testing import TestScheduler, ReactiveTest, is_prime, MockDisposable
from rx.disposables import Disposable, SerialDisposable
on_next = ReactiveTest.on_next
on_completed = ReactiveTest.on_completed
on_error = ReactiveTest.on_error
subscribe = ReactiveTest.subscribe
subscribed = ReactiveTest.subscribed
disposed = ReactiveTest.disposed
created = ReactiveTest.created
class TestContains(unittest.TestCase):
    def test_contains_empty(self):
        scheduler = TestScheduler()
        msgs = [on_next(150, 1), on_completed(250)]
        xs = scheduler.create_hot_observable(msgs)

        def create():
            return xs.contains(42)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_next(250, False), on_completed(250))

    def test_contains_return_positive(self):
        scheduler = TestScheduler()
        msgs = [on_next(150, 1), on_next(210, 2), on_completed(250)]
        xs = scheduler.create_hot_observable(msgs)

        def create():
            return xs.contains(2)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_next(210, True), on_completed(210))

    def test_contains_return_negative(self):
        scheduler = TestScheduler()
        msgs = [on_next(150, 1), on_next(210, 2), on_completed(250)]
        xs = scheduler.create_hot_observable(msgs)

        def create():
            return xs.contains(-2)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_next(250, False), on_completed(250))

    def test_contains_some_positive(self):
        scheduler = TestScheduler()
        msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_completed(250)]
        xs = scheduler.create_hot_observable(msgs)

        def create():
            return xs.contains(3)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_next(220, True), on_completed(220))

    def test_contains_some_negative(self):
        scheduler = TestScheduler()
        msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_completed(250)]
        xs = scheduler.create_hot_observable(msgs)

        def create():
            return xs.contains(-3)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_next(250, False), on_completed(250))

    def test_contains_throw(self):
        ex = 'ex'
        scheduler = TestScheduler()
        xs = scheduler.create_hot_observable(on_next(150, 1), on_error(210, ex))

        def create():
            return xs.contains(42)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_error(210, ex))

    def test_contains_never(self):
        scheduler = TestScheduler()
        msgs = [on_next(150, 1)]
        xs = scheduler.create_hot_observable(msgs)

        def create():
            return xs.contains(42)

        res = scheduler.start(create=create).messages
        res.assert_equal()

    def test_contains_comparer_throws(self):
        ex = 'ex'
        scheduler = TestScheduler()
        xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2))

        def create():
            def comparer(a, b):
                raise Exception(ex)
            return xs.contains(42, comparer)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_error(210, ex))

    def test_contains_comparer_contains_value(self):
        scheduler = TestScheduler()
        xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 3), on_next(220, 4), on_next(230, 8), on_completed(250))

        def create():
            return xs.contains(42, lambda a, b: a % 2 == b % 2)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_next(220, True), on_completed(220))

    def test_contains_comparer_does_not_contain_value(self):
        scheduler = TestScheduler()
        xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 4), on_next(230, 8), on_completed(250))

        def create():
            return xs.contains(21, lambda a, b: a % 2 == b % 2)

        res = scheduler.start(create=create).messages
        res.assert_equal(on_next(250, False), on_completed(250))
# c3825519a903de0343beac5007414c7881cd4cda | 3775 bytes | tests/project/test_core.py | daobook/hatch @ 1cf39ad1a11ce90bc77fb7fdc4b9202433509179 | license: MIT

import pytest
from hatch.project.core import Project
class TestFindProjectRoot:
    def test_no_project(self, temp_dir):
        project = Project(temp_dir)

        assert project.find_project_root() is None

    @pytest.mark.parametrize('file_name', ['pyproject.toml', 'setup.py'])
    def test_direct(self, temp_dir, file_name):
        project = Project(temp_dir)
        project_file = temp_dir / file_name
        project_file.touch()

        assert project.find_project_root() == temp_dir

    @pytest.mark.parametrize('file_name', ['pyproject.toml', 'setup.py'])
    def test_recurse(self, temp_dir, file_name):
        project = Project(temp_dir)
        project_file = temp_dir / file_name
        project_file.touch()
        path = temp_dir / 'test'
        path.mkdir()

        assert project.find_project_root() == temp_dir

    @pytest.mark.parametrize('file_name', ['pyproject.toml', 'setup.py'])
    def test_no_path(self, temp_dir, file_name):
        project_file = temp_dir / file_name
        project_file.touch()
        path = temp_dir / 'test'
        project = Project(path)

        assert project.find_project_root() == temp_dir


class TestLoadProjectFromConfig:
    def test_no_project_no_project_dirs(self, config_file):
        assert Project.from_config(config_file.model, 'foo') is None

    def test_project_empty_string(self, config_file, temp_dir):
        config_file.model.projects[''] = str(temp_dir)

        assert Project.from_config(config_file.model, '') is None

    def test_project_basic_string(self, config_file, temp_dir):
        config_file.model.projects = {'foo': str(temp_dir)}

        project = Project.from_config(config_file.model, 'foo')
        assert project.chosen_name == 'foo'
        assert project.location == temp_dir

    def test_project_complex(self, config_file, temp_dir):
        config_file.model.projects = {'foo': {'location': str(temp_dir)}}

        project = Project.from_config(config_file.model, 'foo')
        assert project.chosen_name == 'foo'
        assert project.location == temp_dir

    def test_project_complex_null_location(self, config_file):
        config_file.model.projects = {'foo': {'location': ''}}

        assert Project.from_config(config_file.model, 'foo') is None

    def test_project_dirs(self, config_file, temp_dir):
        path = temp_dir / 'foo'
        path.mkdir()
        config_file.model.dirs.project = [str(temp_dir)]

        project = Project.from_config(config_file.model, 'foo')
        assert project.chosen_name == 'foo'
        assert project.location == path

    def test_project_dirs_null_dir(self, config_file):
        config_file.model.dirs.project = ['']

        assert Project.from_config(config_file.model, 'foo') is None

    def test_project_dirs_not_directory(self, config_file, temp_dir):
        path = temp_dir / 'foo'
        path.touch()
        config_file.model.dirs.project = [str(temp_dir)]

        assert Project.from_config(config_file.model, 'foo') is None


class TestChosenName:
    def test_selected(self, temp_dir):
        project = Project(temp_dir, name='foo')

        assert project.chosen_name == 'foo'

    def test_cwd(self, temp_dir):
        project = Project(temp_dir)

        assert project.chosen_name is None


class TestLocation:
    def test_no_project(self, temp_dir):
        project = Project(temp_dir)

        assert project.location == temp_dir
        assert project.root is None

    @pytest.mark.parametrize('file_name', ['pyproject.toml', 'setup.py'])
    def test_project(self, temp_dir, file_name):
        project_file = temp_dir / file_name
        project_file.touch()

        project = Project(temp_dir)
        assert project.location == temp_dir
        assert project.root == temp_dir
# 6f24ce27da2e75f0f63eeef7bacb7cf291cbc996 | 19 bytes | hfo/__init__.py | RubenvanHeusden/HFO-Robotkeeper @ 03bbe1170d703b7f264ef245b99a0ced2759ed39 | license: MIT

from .hfo import *
# 6f464b74388424c95350c89cd87e2dc3ef26c913 | 14716 bytes | usaspending_api/search/tests/test_spending_by_award_type.py | gaybro8777/usaspending-api @ fe9d730acd632401bbbefa168e3d86d59560314b | license: CC0-1.0

import json
import pytest

from django.db import connection
from model_mommy import mommy
from rest_framework import status

from usaspending_api.search.tests.test_mock_data_search import non_legacy_filters


@pytest.fixture
@pytest.mark.django_db
def test_data():
    mommy.make("references.LegalEntity", legal_entity_id=1)
    mommy.make(
        "awards.Award", id=1, type="A", recipient_id=1, latest_transaction_id=1, generated_unique_award_id="CONT_AWD_1"
    )
    mommy.make("awards.TransactionNormalized", id=1, action_date="2010-10-01", award_id=1, is_fpds=True)
    mommy.make(
        "awards.TransactionFPDS",
        transaction_id=1,
        legal_entity_country_code="USA",
        legal_entity_country_name="UNITED STATES",
        legal_entity_zip5="00501",
        place_of_perform_country_c="USA",
        place_of_perform_country_n="UNITED STATES",
        place_of_performance_zip5="00001",
    )
    mommy.make(
        "awards.Award", id=2, type="A", recipient_id=1, latest_transaction_id=2, generated_unique_award_id="CONT_AWD_2"
    )
    mommy.make("awards.TransactionNormalized", id=2, action_date="2010-10-01", award_id=2, is_fpds=True)
    mommy.make(
        "awards.TransactionFPDS",
        transaction_id=2,
        legal_entity_country_code="USA",
        legal_entity_country_name="UNITED STATES",
        legal_entity_zip5="00502",
        place_of_perform_country_c="USA",
        place_of_perform_country_n="UNITED STATES",
        place_of_performance_zip5="00002",
    )
    mommy.make(
        "awards.Award", id=3, type="A", recipient_id=1, latest_transaction_id=3, generated_unique_award_id="CONT_AWD_3"
    )
    mommy.make("awards.TransactionNormalized", id=3, action_date="2010-10-01", award_id=3, is_fpds=True)
    mommy.make(
        "awards.TransactionFPDS",
        transaction_id=3,
        legal_entity_country_code="USA",
        legal_entity_country_name="UNITED STATES",
        legal_entity_zip5="00503",
        place_of_perform_country_c="USA",
        place_of_perform_country_n="UNITED STATES",
        place_of_performance_zip5="00003",
    )
    mommy.make(
        "awards.Award", id=4, type="A", recipient_id=1, latest_transaction_id=4, generated_unique_award_id="CONT_AWD_4"
    )
    mommy.make("awards.TransactionNormalized", id=4, action_date="2010-10-01", award_id=4, is_fpds=True)
    mommy.make(
        "awards.TransactionFPDS",
        transaction_id=4,
        legal_entity_country_code="GIB",
        legal_entity_country_name="GIBRALTAR",
        legal_entity_zip5="00504",
        place_of_perform_country_c="GIB",
        place_of_perform_country_n="GIBRALTAR",
        place_of_performance_zip5="00004",
    )

    with connection.cursor() as cursor:
        cursor.execute("refresh materialized view concurrently mv_contract_award_search")


@pytest.mark.django_db
def test_spending_by_award_type_success(client, refresh_matviews):
    # test small request
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps({"fields": ["Award ID", "Recipient Name"], "filters": {"award_type_codes": ["A", "B", "C"]}}),
    )
    assert resp.status_code == status.HTTP_200_OK

    # test IDV award types
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Award ID", "Recipient Name"],
                "filters": {
                    "award_type_codes": ["IDV_A", "IDV_B", "IDV_B_A", "IDV_B_B", "IDV_B_C", "IDV_C", "IDV_D", "IDV_E"]
                },
            }
        ),
    )
    assert resp.status_code == status.HTTP_200_OK

    # test all features
    resp = client.post(
        "/api/v2/search/spending_by_award",
        content_type="application/json",
        data=json.dumps({"fields": ["Award ID", "Recipient Name"], "filters": non_legacy_filters()}),
    )
    assert resp.status_code == status.HTTP_200_OK

    # test subawards
    resp = client.post(
        "/api/v2/search/spending_by_award",
        content_type="application/json",
        data=json.dumps({"fields": ["Sub-Award ID"], "filters": non_legacy_filters(), "subawards": True}),
    )
    assert resp.status_code == status.HTTP_200_OK


@pytest.mark.django_db
def test_spending_by_award_type_failure(client, refresh_matviews):
    # test incomplete IDV award types
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Award ID", "Recipient Name"],
                "filters": {"award_type_codes": ["IDV_A", "IDV_B_A", "IDV_C", "IDV_D", "IDV_A_A"]},
            }
        ),
    )
    assert resp.status_code == status.HTTP_400_BAD_REQUEST

    # test bad autocomplete request for budget function
    resp = client.post(
        "/api/v2/search/spending_by_award/", content_type="application/json", data=json.dumps({"filters": {}})
    )
    assert resp.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY


@pytest.mark.django_db
def test_spending_by_award_pop_zip_filter(client, test_data):
    """ Test that filtering by pop zips works"""
    # test simple, single zip
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "place_of_performance_locations": [{"country": "USA", "zip": "00001"}],
                },
            }
        ),
    )
    assert len(resp.data["results"]) == 1
    assert resp.data["results"][0] == {
        "internal_id": 1,
        "generated_internal_id": "CONT_AWD_1",
        "Place of Performance Zip5": "00001",
    }

    # test that adding a zip that has no results doesn't remove the results from the first zip
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "place_of_performance_locations": [
                        {"country": "USA", "zip": "00001"},
                        {"country": "USA", "zip": "10000"},
                    ],
                },
            }
        ),
    )
    assert len(resp.data["results"]) == 1
    assert resp.data["results"][0] == {
        "internal_id": 1,
        "generated_internal_id": "CONT_AWD_1",
        "Place of Performance Zip5": "00001",
    }

    # test that we get 2 results with 2 valid zips
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "place_of_performance_locations": [
                        {"country": "USA", "zip": "00001"},
                        {"country": "USA", "zip": "00002"},
                    ],
                },
            }
        ),
    )
    possible_results = (
        {"internal_id": 1, "Place of Performance Zip5": "00001", "generated_internal_id": "CONT_AWD_1"},
        {"internal_id": 2, "Place of Performance Zip5": "00002", "generated_internal_id": "CONT_AWD_2"},
    )
    assert len(resp.data["results"]) == 2
    assert resp.data["results"][0] in possible_results
    assert resp.data["results"][1] in possible_results
    # Just to make sure it isn't returning the same thing twice somehow
    assert resp.data["results"][0] != resp.data["results"][1]


@pytest.mark.django_db
def test_spending_by_award_recipient_zip_filter(client, test_data):
    """ Test that filtering by recipient zips works"""
    # test simple, single zip
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "recipient_locations": [{"country": "USA", "zip": "00501"}],
                },
            }
        ),
    )
    assert len(resp.data["results"]) == 1
    assert resp.data["results"][0] == {
        "internal_id": 1,
        "Place of Performance Zip5": "00001",
        "generated_internal_id": "CONT_AWD_1",
    }

    # test that adding a zip that has no results doesn't remove the results from the first zip
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "recipient_locations": [{"country": "USA", "zip": "00501"}, {"country": "USA", "zip": "10000"}],
                },
            }
        ),
    )
    assert len(resp.data["results"]) == 1
    assert resp.data["results"][0] == {
        "internal_id": 1,
        "Place of Performance Zip5": "00001",
        "generated_internal_id": "CONT_AWD_1",
    }

    # test that we get 2 results with 2 valid zips
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "recipient_locations": [{"country": "USA", "zip": "00501"}, {"country": "USA", "zip": "00502"}],
                },
            }
        ),
    )
    possible_results = (
        {"internal_id": 1, "Place of Performance Zip5": "00001", "generated_internal_id": "CONT_AWD_1"},
        {"internal_id": 2, "Place of Performance Zip5": "00002", "generated_internal_id": "CONT_AWD_2"},
    )
    assert len(resp.data["results"]) == 2
    assert resp.data["results"][0] in possible_results
    assert resp.data["results"][1] in possible_results
    # Just to make sure it isn't returning the same thing twice somehow
    assert resp.data["results"][0] != resp.data["results"][1]


@pytest.mark.django_db
def test_spending_by_award_both_zip_filter(client, test_data):
    """ Test that filtering by both kinds of zips works"""
    # test simple, single pair of zips that both match
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "recipient_locations": [{"country": "USA", "zip": "00501"}],
                    "place_of_performance_locations": [{"country": "USA", "zip": "00001"}],
                },
            }
        ),
    )
    assert len(resp.data["results"]) == 1
    assert resp.data["results"][0] == {
        "internal_id": 1,
        "Place of Performance Zip5": "00001",
        "generated_internal_id": "CONT_AWD_1",
    }

    # test simple, single pair of zips that don't match
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "recipient_locations": [{"country": "USA", "zip": "00501"}],
                    "place_of_performance_locations": [{"country": "USA", "zip": "00002"}],
                },
            }
        ),
    )
    assert len(resp.data["results"]) == 0

    # test 2 pairs (only one pair can be made from this)
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Place of Performance Zip5"],
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    "recipient_locations": [{"country": "USA", "zip": "00501"}, {"country": "USA", "zip": "00502"}],
                    "place_of_performance_locations": [
                        {"country": "USA", "zip": "00001"},
                        {"country": "USA", "zip": "00003"},
                    ],
                },
            }
        ),
    )
    assert len(resp.data["results"]) == 1
    assert resp.data["results"][0] == {
        "internal_id": 1,
        "Place of Performance Zip5": "00001",
        "generated_internal_id": "CONT_AWD_1",
    }


@pytest.mark.django_db
def test_spending_by_award_foreign_filter(client, test_data):
    """ Verify that foreign country filter is returning the correct results """
    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "filters": {
                    "award_type_codes": ["A", "B", "C", "D"],
                    # "recipient_locations": [{"country": "USA"}]
                    "recipient_scope": "domestic",
                },
                "fields": ["Award ID"],
            }
        ),
    )
    # Three results are returned when searching for "USA"-based recipients
    # e.g. "USA"; "UNITED STATES"; "USA" and "UNITED STATES";
    assert len(resp.data["results"]) == 3

    resp = client.post(
        "/api/v2/search/spending_by_award/",
        content_type="application/json",
        data=json.dumps(
            {
                "filters": {"award_type_codes": ["A", "B", "C", "D"], "recipient_scope": "foreign"},
                "fields": ["Award ID"],
            }
        ),
    )
    # One result is returned when searching for "Foreign" recipients
    assert len(resp.data["results"]) == 1


# test subaward types
@pytest.mark.django_db
def test_spending_by_subaward_type_success(client, refresh_matviews):
    resp = client.post(
        "/api/v2/search/spending_by_award",
        content_type="application/json",
        data=json.dumps(
            {
                "fields": ["Sub-Award ID"],
                "filters": {"award_type_codes": ["10", "06", "07", "08", "09", "11"]},
                "subawards": True,
            }
        ),
    )
    assert resp.status_code == status.HTTP_200_OK
# 6f5aa3de1e209c544dc80e2b4d7d979e0e1afa68 | 13911 bytes | tests/test_generic_consumer.py | gegenschall/djangochannelsrestframework @ bb611a6c251517d0e014b028c4b808e6db1785f3 | license: MIT

import pytest
from channels.db import database_sync_to_async
from channels.testing import WebsocketCommunicator
from django.contrib.auth import get_user_model
from rest_framework import serializers

from djangochannelsrestframework.decorators import action
from djangochannelsrestframework.generics import GenericAsyncAPIConsumer
from djangochannelsrestframework.mixins import (
    CreateModelMixin,
    ListModelMixin,
    RetrieveModelMixin,
    UpdateModelMixin,
    PatchModelMixin,
    DeleteModelMixin,
)


@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_generic_consumer():
    class UserSerializer(serializers.ModelSerializer):
        class Meta:
            model = get_user_model()
            fields = (
                "id",
                "username",
                "email",
            )

    class AConsumer(GenericAsyncAPIConsumer):
        queryset = get_user_model().objects.all()
        serializer_class = UserSerializer

        @action()
        def test_sync_action(self, pk=None, **kwargs):
            user = self.get_object(pk=pk)
            s = self.get_serializer(action_kwargs={"pk": pk}, instance=user)
            return s.data, 200

    # Test a normal connection
    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to(
        {"action": "test_sync_action", "pk": 2, "request_id": 1}
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "test_sync_action",
        "errors": ["Not found"],
        "response_status": 404,
        "request_id": 1,
        "data": None,
    }

    user = await database_sync_to_async(get_user_model().objects.create)(
        username="test1", email="test@example.com"
    )
    pk = user.id
    assert await database_sync_to_async(get_user_model().objects.filter(pk=pk).exists)()

    await communicator.disconnect()

    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to(
        {"action": "test_sync_action", "pk": pk, "request_id": 2}
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "test_sync_action",
        "errors": [],
        "response_status": 200,
        "request_id": 2,
        "data": {"email": "test@example.com", "id": 1, "username": "test1"},
    }

    await communicator.disconnect()


@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_create_mixin_consumer():
    class UserSerializer(serializers.ModelSerializer):
        class Meta:
            model = get_user_model()
            fields = (
                "id",
                "username",
                "email",
            )

    class AConsumer(CreateModelMixin, GenericAsyncAPIConsumer):
        queryset = get_user_model().objects.all()
        serializer_class = UserSerializer

    assert not await database_sync_to_async(get_user_model().objects.all().exists)()

    # Test a normal connection
    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to(
        {
            "action": "create",
            "data": {"username": "test101", "email": "42@example.com"},
            "request_id": 1,
        }
    )
    response = await communicator.receive_json_from()

    user = await database_sync_to_async(get_user_model().objects.all().first)()
    assert user
    pk = user.id

    assert response == {
        "action": "create",
        "errors": [],
        "response_status": 201,
        "request_id": 1,
        "data": {"email": "42@example.com", "id": pk, "username": "test101"},
    }


@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_list_mixin_consumer():
    class UserSerializer(serializers.ModelSerializer):
        class Meta:
            model = get_user_model()
            fields = (
                "id",
                "username",
                "email",
            )

    class AConsumer(ListModelMixin, GenericAsyncAPIConsumer):
        queryset = get_user_model().objects.all()
        serializer_class = UserSerializer

    assert not await database_sync_to_async(get_user_model().objects.all().exists)()

    # Test a normal connection
    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to({"action": "list", "request_id": 1})
    response = await communicator.receive_json_from()
    assert response == {
        "action": "list",
        "errors": [],
        "response_status": 200,
        "request_id": 1,
        "data": [],
    }

    u1 = await database_sync_to_async(get_user_model().objects.create)(
        username="test1", email="42@example.com"
    )
    u2 = await database_sync_to_async(get_user_model().objects.create)(
        username="test2", email="45@example.com"
    )

    await communicator.send_json_to({"action": "list", "request_id": 1})
    response = await communicator.receive_json_from()
    assert response == {
        "action": "list",
        "errors": [],
        "response_status": 200,
        "request_id": 1,
        "data": [
            {"email": "42@example.com", "id": u1.id, "username": "test1"},
            {"email": "45@example.com", "id": u2.id, "username": "test2"},
        ],
    }


@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_retrieve_mixin_consumer():
    class UserSerializer(serializers.ModelSerializer):
        class Meta:
            model = get_user_model()
            fields = (
                "id",
                "username",
                "email",
            )

    class AConsumer(RetrieveModelMixin, GenericAsyncAPIConsumer):
        queryset = get_user_model().objects.all()
        serializer_class = UserSerializer

    assert not await database_sync_to_async(get_user_model().objects.all().exists)()

    # Test a normal connection
    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to({"action": "retrieve", "pk": 100, "request_id": 1})
    response = await communicator.receive_json_from()
    assert response == {
        "action": "retrieve",
        "errors": ["Not found"],
        "response_status": 404,
        "request_id": 1,
        "data": None,
    }

    u1 = await database_sync_to_async(get_user_model().objects.create)(
        username="test1", email="42@example.com"
    )
    u2 = await database_sync_to_async(get_user_model().objects.create)(
        username="test2", email="45@example.com"
    )

    # lookup a pk that is not there
    await communicator.send_json_to(
        {"action": "retrieve", "pk": u1.id - 1, "request_id": 1}
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "retrieve",
        "errors": ["Not found"],
        "response_status": 404,
        "request_id": 1,
        "data": None,
    }

    # lookup up u1
    await communicator.send_json_to(
        {"action": "retrieve", "pk": u1.id, "request_id": 1}
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "retrieve",
        "errors": [],
        "response_status": 200,
        "request_id": 1,
        "data": {"email": "42@example.com", "id": u1.id, "username": "test1"},
    }


@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_update_mixin_consumer():
    class UserSerializer(serializers.ModelSerializer):
        class Meta:
            model = get_user_model()
            fields = (
                "id",
                "username",
                "email",
            )

    class AConsumer(UpdateModelMixin, GenericAsyncAPIConsumer):
        queryset = get_user_model().objects.all()
        serializer_class = UserSerializer

    assert not await database_sync_to_async(get_user_model().objects.all().exists)()

    # Test a normal connection
    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to(
        {
            "action": "update",
            "pk": 100,
            "data": {"username": "test101", "email": "42@example.com"},
            "request_id": 1,
        }
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "update",
        "errors": ["Not found"],
        "response_status": 404,
        "request_id": 1,
        "data": None,
    }

    u1 = await database_sync_to_async(get_user_model().objects.create)(
        username="test1", email="42@example.com"
    )
    await database_sync_to_async(get_user_model().objects.create)(
        username="test2", email="45@example.com"
    )

    await communicator.send_json_to(
        {
            "action": "update",
            "pk": u1.id,
            "data": {
                "username": "test101",
            },
            "request_id": 2,
        }
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "update",
        "errors": [],
        "response_status": 200,
        "request_id": 2,
        "data": {"email": "42@example.com", "id": u1.id, "username": "test101"},
    }

    u1 = await database_sync_to_async(get_user_model().objects.get)(id=u1.id)
    assert u1.username == "test101"
    assert u1.email == "42@example.com"


@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_patch_mixin_consumer():
    class UserSerializer(serializers.ModelSerializer):
        class Meta:
            model = get_user_model()
            fields = (
                "id",
                "username",
                "email",
            )

    class AConsumer(PatchModelMixin, GenericAsyncAPIConsumer):
        queryset = get_user_model().objects.all()
        serializer_class = UserSerializer

    assert not await database_sync_to_async(get_user_model().objects.all().exists)()

    # Test a normal connection
    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to(
        {
            "action": "patch",
            "pk": 100,
            "data": {"username": "test101", "email": "42@example.com"},
            "request_id": 1,
        }
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "patch",
        "errors": ["Not found"],
        "response_status": 404,
        "request_id": 1,
        "data": None,
    }

    u1 = await database_sync_to_async(get_user_model().objects.create)(
        username="test1", email="42@example.com"
    )
    await database_sync_to_async(get_user_model().objects.create)(
        username="test2", email="45@example.com"
    )

    await communicator.send_json_to(
        {
            "action": "patch",
            "pk": u1.id,
            "data": {
                "email": "00@example.com",
            },
            "request_id": 2,
        }
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "patch",
        "errors": [],
        "response_status": 200,
        "request_id": 2,
        "data": {"email": "00@example.com", "id": u1.id, "username": "test1"},
    }

    u1 = await database_sync_to_async(get_user_model().objects.get)(id=u1.id)
    assert u1.username == "test1"
    assert u1.email == "00@example.com"


@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_delete_mixin_consumer():
    class UserSerializer(serializers.ModelSerializer):
        class Meta:
            model = get_user_model()
            fields = (
                "id",
                "username",
                "email",
            )

    class AConsumer(DeleteModelMixin, GenericAsyncAPIConsumer):
        queryset = get_user_model().objects.all()
        serializer_class = UserSerializer

    assert not await database_sync_to_async(get_user_model().objects.all().exists)()

    # Test a normal connection
    communicator = WebsocketCommunicator(AConsumer(), "/testws/")
    connected, _ = await communicator.connect()
    assert connected

    await communicator.send_json_to({"action": "delete", "pk": 100, "request_id": 1})
    response = await communicator.receive_json_from()
    assert response == {
        "action": "delete",
        "errors": ["Not found"],
        "response_status": 404,
        "request_id": 1,
        "data": None,
    }

    u1 = await database_sync_to_async(get_user_model().objects.create)(
        username="test1", email="42@example.com"
    )
    await database_sync_to_async(get_user_model().objects.create)(
        username="test2", email="45@example.com"
    )

    await communicator.send_json_to(
        {"action": "delete", "pk": u1.id - 1, "request_id": 1}
    )
    response = await communicator.receive_json_from()
    assert response == {
        "action": "delete",
        "errors": ["Not found"],
        "response_status": 404,
        "request_id": 1,
        "data": None,
    }

    await communicator.send_json_to({"action": "delete", "pk": u1.id, "request_id": 1})
    response = await communicator.receive_json_from()
    assert response == {
        "action": "delete",
        "errors": [],
        "response_status": 204,
        "request_id": 1,
        "data": None,
    }

    assert not await database_sync_to_async(
        get_user_model().objects.filter(id=u1.id).exists
    )()
# 48bb73f65ac0cdcec55c8cc8c0e66f227e278ba1 | 79 bytes | elements/tests/views/test_tag_group.py | philsupertramp/wik @ 0650ae181926a5ccad8af70b8ae9a572a423e6f6 | license: MIT (forks: philsupertramp/wiki @ b30ee58d63e55588ced06af4f6588c8dd6baba7e)

from django.test import TestCase
class TagGroupTestCase(TestCase):
pass
| 11.285714 | 33 | 0.772152 | 9 | 79 | 6.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177215 | 79 | 6 | 34 | 13.166667 | 0.938462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5b0555a6a682143f1eb46433b80fa4fee249e56d | 138 | py | Python | ravens/__init__.py | EricCousineau-TRI/deformable-ravens | 6ff2443ba7f6673ba4696484e052441262cc14d7 | [
"Apache-2.0"
] | 98 | 2020-12-23T02:32:01.000Z | 2022-03-30T07:09:59.000Z | ravens/__init__.py | EricCousineau-TRI/deformable-ravens | 6ff2443ba7f6673ba4696484e052441262cc14d7 | [
"Apache-2.0"
] | 8 | 2020-12-22T16:17:24.000Z | 2021-10-13T23:44:48.000Z | ravens/__init__.py | EricCousineau-TRI/deformable-ravens | 6ff2443ba7f6673ba4696484e052441262cc14d7 | [
"Apache-2.0"
] | 26 | 2020-12-22T16:14:11.000Z | 2022-03-03T10:27:29.000Z | import ravens.tasks as tasks
import ravens.agents as agents
from ravens.dataset import Dataset
from ravens.environment import Environment
| 27.6 | 42 | 0.855072 | 20 | 138 | 5.9 | 0.4 | 0.20339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 138 | 4 | 43 | 34.5 | 0.967213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b0ae32b34d62ab55cda83ef27c85abaf5128942 | 31,807 | py | Python | py/pruning.py | mattliston/postgraduate_dissertation | 3f03be7c294863e9aaa0d247a78d18f9e78b6e89 | [
"MIT"
] | 1 | 2022-02-11T06:18:06.000Z | 2022-02-11T06:18:06.000Z | py/pruning.py | mattliston/postgraduate_dissertation | 3f03be7c294863e9aaa0d247a78d18f9e78b6e89 | [
"MIT"
] | null | null | null | py/pruning.py | mattliston/postgraduate_dissertation | 3f03be7c294863e9aaa0d247a78d18f9e78b6e89 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# # Imports
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot
import matplotlib.pyplot as plt
import json
import tempfile
import itertools
#from google.colab import drive
from mat4py import loadmat
print(tf.__version__)
# # Data pre-processing
def downscale(data, resolution):
# Downsample by averaging every `resolution` consecutive 1-minute samples:
# (N, 3, 1440) -> (N, 3, 1440 // resolution), e.g. (N, 3, 144) at 10-minute resolution.
# Use ~12 timesteps of history (~2 hours) to predict the next 20-50 minutes.
return np.mean(data.reshape(data.shape[0], data.shape[1], int(data.shape[2]/resolution), resolution), axis=3)
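# Illustrative sanity check (not part of the original notebook): at
# resolution=2, each pair of adjacent timesteps collapses to its mean,
# so a (1, 1, 4) trajectory becomes a (1, 1, 2) one.
_demo = np.array([[[1.0, 2.0, 3.0, 4.0]]])
_down = np.mean(_demo.reshape(1, 1, 2, 2), axis=3)
assert _down.shape == (1, 1, 2)
assert _down.tolist() == [[[1.5, 3.5]]]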
def process_data(aligned_data, time_horizon, ph):
# At 10-minute resolution, a sliding window breaks each (3, 144) trajectory into
# (144 - ph - time_horizon) samples of shape (3, time_horizon), each paired with a ph-step label.
data = np.zeros((aligned_data.shape[0] * (aligned_data.shape[2]-ph-time_horizon), aligned_data.shape[1], time_horizon))
label = np.zeros((aligned_data.shape[0] * (aligned_data.shape[2]-ph-time_horizon), ph))
count = 0
for i in range(aligned_data.shape[0]): # for each sample
for j in range(aligned_data.shape[2]-ph-time_horizon): # TH length sliding window across trajectory
data[count] = aligned_data[i,:,j:j+time_horizon]
label[count] = aligned_data[i,0,j+time_horizon:j+time_horizon+ph]
count+=1
return data, label
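# Illustrative shape check for the sliding-window split above: a single
# (1, 3, 20) trajectory with time_horizon=12 and ph=6 yields
# 20 - 6 - 12 = 2 (data, label) pairs of shapes (3, 12) and (6,).
_traj = np.arange(60, dtype=float).reshape(1, 3, 20)
_windows = [_traj[0, :, j:j + 12] for j in range(20 - 6 - 12)]
_labels = [_traj[0, 0, j + 12:j + 18] for j in range(20 - 6 - 12)]
assert len(_windows) == 2 and _windows[0].shape == (3, 12)
assert _labels[1].shape == (6,)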
def load_mpc(time_horizon, ph, resolution, batch): # int, int, int, bool
# Load train data
g = np.loadtxt('CGM_prediction_data/glucose_readings_train.csv', delimiter=',')
c = np.loadtxt('CGM_prediction_data/meals_carbs_train.csv', delimiter=',')
it = np.loadtxt('CGM_prediction_data/insulin_therapy_train.csv', delimiter=',')
# Load test data
g_ = np.loadtxt('CGM_prediction_data/glucose_readings_test.csv', delimiter=',')
c_ = np.loadtxt('CGM_prediction_data/meals_carbs_test.csv', delimiter=',')
it_ = np.loadtxt('CGM_prediction_data/insulin_therapy_test.csv', delimiter=',')
# Time align train & test data
aligned_train_data = downscale(np.array([(g[i,:], c[i,:], it[i,:]) for i in range(g.shape[0])]), resolution)
aligned_test_data = downscale(np.array([(g_[i,:], c_[i,:], it_[i,:]) for i in range(g_.shape[0])]), resolution)
print(aligned_train_data.shape)
# Break time aligned data into train & test samples
if batch:
train_data, train_label = process_data(aligned_train_data, time_horizon, ph)
test_data, test_label = process_data(aligned_test_data, time_horizon, ph)
return np.swapaxes(train_data,1,2), train_label, np.swapaxes(test_data,1,2), test_label
else:
return aligned_train_data, aligned_test_data
def load_uva(time_horizon, ph, resolution, batch):
data = loadmat('uva-padova-data/sim_results.mat')
train_data = np.zeros((231,3,1440))
test_data = np.zeros((99,3,1440))
# Separate train and test sets.. last 3 records of each patient will be used for testing
count_train = 0
count_test = 0
for i in range(33):
for j in range(10):
if j>=7:
test_data[count_test,0,:] = np.asarray(data['data']['results']['sensor'][count_test+count_train]['signals']['values']).flatten()[:1440]
test_data[count_test,1,:] = np.asarray(data['data']['results']['CHO'][count_test+count_train]['signals']['values']).flatten()[:1440]
test_data[count_test,2,:] = np.asarray(data['data']['results']['BOLUS'][count_test+count_train]['signals']['values']).flatten()[:1440] + np.asarray(data['data']['results']['BASAL'][i]['signals']['values']).flatten()[:1440]
count_test+=1
else:
train_data[count_train,0,:] = np.asarray(data['data']['results']['sensor'][count_test+count_train]['signals']['values']).flatten()[:1440]
train_data[count_train,1,:] = np.asarray(data['data']['results']['CHO'][count_test+count_train]['signals']['values']).flatten()[:1440]
train_data[count_train,2,:] = np.asarray(data['data']['results']['BOLUS'][count_test+count_train]['signals']['values']).flatten()[:1440] + np.asarray(data['data']['results']['BASAL'][i]['signals']['values']).flatten()[:1440]
count_train+=1
train_data = downscale(train_data, resolution)
test_data = downscale(test_data, resolution)
if batch:
train_data, train_label = process_data(train_data, time_horizon, ph)
test_data, test_label = process_data(test_data, time_horizon, ph)
return np.swapaxes(train_data,1,2)*0.0555, train_label*0.0555, np.swapaxes(test_data,1,2)*0.0555, test_label*0.0555 # convert to mmol/L
else:
return train_data, test_data
# # Make bidirectional LSTM prunable & define custom metrics
class PruneBidirectional(tf.keras.layers.Bidirectional, tfmot.sparsity.keras.PrunableLayer):
def get_prunable_weights(self):
# print(self.forward_layer._trainable_weights)
# print(self.backward_layer._trainable_weights)
# print(len(self.get_trainable_weights()))
# print(self.get_weights()[0], self.get_weights()[0].shape)
# return self.get_weights()
return self.trainable_weights
# Per-step MSE metrics: loss_metricN tracks the N-th prediction step
# (i.e. 10*N minutes ahead at 10-minute resolution).
def loss_metric1(y_true, y_pred):
loss = tf.keras.losses.MeanSquaredError()
return loss(y_true[:,0], y_pred[:,0])
def loss_metric2(y_true, y_pred):
loss = tf.keras.losses.MeanSquaredError()
return loss(y_true[:,1], y_pred[:,1])
def loss_metric3(y_true, y_pred):
loss = tf.keras.losses.MeanSquaredError()
return loss(y_true[:,2], y_pred[:,2])
def loss_metric4(y_true, y_pred):
loss = tf.keras.losses.MeanSquaredError()
return loss(y_true[:,3], y_pred[:,3])
def loss_metric5(y_true, y_pred):
loss = tf.keras.losses.MeanSquaredError()
return loss(y_true[:,4], y_pred[:,4])
def loss_metric6(y_true, y_pred):
loss = tf.keras.losses.MeanSquaredError()
return loss(y_true[:,5], y_pred[:,5])
def bilstm(ph, training):
inp = tf.keras.Input(shape=(train_data.shape[1], train_data.shape[2]))
model = PruneBidirectional(tf.keras.layers.LSTM(200, return_sequences=True))(inp)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = PruneBidirectional(tf.keras.layers.LSTM(200, return_sequences=True))(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Flatten()(model)
model = tf.keras.layers.Dense(ph, activation=None)(model)
x = tf.keras.Model(inputs=inp, outputs=model)
x.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
return x
def crnn(ph, training):
inp = tf.keras.Input(shape=(train_data.shape[1], train_data.shape[2]))
model = tf.keras.layers.Conv1D(256, 4, activation='relu', padding='same')(inp)
model = tf.keras.layers.MaxPool1D(pool_size=2, strides=1, padding='same')(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Conv1D(512, 4, activation='relu', padding='same')(model)
model = tf.keras.layers.MaxPool1D(pool_size=2, strides=1, padding='same')(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.LSTM(200, return_sequences=True)(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Flatten()(model)
model = tf.keras.layers.Dense(ph, activation=None)(model)
x = tf.keras.Model(inputs=inp, outputs=model)
x.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
return x
# # Custom callback to save pruning results
# Custom sparsity callback
import re
import csv
class SparsityCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
non_trainable = [i.name for i in self.model.non_trainable_weights]
masks = []
for i in range(len(non_trainable)):
if re.match('(.*)mask(.*)', non_trainable[i]):
masks.append(self.model.non_trainable_weights[i].numpy())
masks = [i.flatten() for i in masks]
masks = np.concatenate(masks).ravel()
print('\n', np.count_nonzero(masks), 1-(np.count_nonzero(masks)/float(masks.shape[0])))
# with open('saved_models/uva_bilstm_sparsity.csv','ab') as f: #uva_crnn_sparsity.csv, uva_lstm_sparsity.csv, uva_bilstm_sparsity.csv
# np.savetxt(f,np.asarray([1-(np.count_nonzero(masks)/float(masks.shape[0]))]))
# csv_writer = csv.writer(f)
# csv_writer.writerow(str(1-(np.count_nonzero(masks)/float(masks.shape[0]))))#,delimiter=',')
# f.close()
# print(np.concatenate(masks).ravel(), np.concatenate(masks).ravel().shape)
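# Standalone illustration of the sparsity formula used in the callback above
# (toy masks, not real pruning masks): sparsity is the fraction of zeroed
# entries across all concatenated masks.
_masks = [np.array([1.0, 0.0, 0.0, 1.0]), np.array([0.0, 0.0])]
_flat = np.concatenate([m.flatten() for m in _masks]).ravel()
_sparsity = 1 - (np.count_nonzero(_flat) / float(_flat.shape[0]))
assert abs(_sparsity - 4.0 / 6.0) < 1e-12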
# # Prune MPC generated models
# ## CRNN
# pruning crnn
#get_ipython().run_line_magic('load_ext', 'tensorboard')
PH = 6
TIME_HORIZON = 12
RESOLUTION = 10
BATCH_SIZE = 32
EPOCHS = 50
BATCH = True # indicates whether to convert data into batches
training = True
train_data, train_label, test_data, test_label = load_mpc(TIME_HORIZON, PH, RESOLUTION, BATCH)
model = tf.keras.models.load_model('../saved/postgraduate_dissertation/saved_models/mpc_guided_crnn.h5',custom_objects={'loss_metric1':loss_metric1, 'loss_metric2':loss_metric2, 'loss_metric3':loss_metric3, 'loss_metric4':loss_metric4,'loss_metric5':loss_metric5,'loss_metric6':loss_metric6})
#model = bilstm(PH, training)
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0,
final_sparsity=0.98,
begin_step=2997,
end_step=2997*EPOCHS,
frequency=2997)
}
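# Hedged aside: tfmot's PolynomialDecay ramps sparsity with a cubic schedule
# (power=3 by default, per the TFMOT docs); a pure-Python sketch of that curve:
def _poly_sparsity(step, begin, end, s0=0.0, s1=0.98, power=3):
    frac = min(max((step - begin) / float(end - begin), 0.0), 1.0)
    return s1 + (s0 - s1) * (1.0 - frac) ** power

assert _poly_sparsity(0, 2997, 2997 * 50) == 0.0  # before begin_step: still dense
assert abs(_poly_sparsity(2997 * 50, 2997, 2997 * 50) - 0.98) < 1e-12  # at end_step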
print(model.summary())
logdir = tempfile.mkdtemp()
print(logdir)
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
SparsityCallback()
]
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
model_for_pruning.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
print(model_for_pruning.summary())
pruned_crnn = model_for_pruning.fit(x=train_data,
y=train_label,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(test_data, test_label),
callbacks=callbacks)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
#tf.keras.models.save_model(model_for_export, '../saved/history/pruned_mpc_guided_crnn.h5', include_optimizer=False)
#get_ipython().system('ls saved_models')
#print(pruned_model.summary())
#%tensorboard --logdir={logdir}
#model.save('../saved/history/pruned_mpc_guided_bilstm.h5')
#json.dump(pruned_crnn.history, open('../saved/history/pruned_mpc_guided_crnn_history', 'w'))
#!ls saved_models
#%tensorboard --logdir={log_dir}
# ## LSTM
# In[ ]:
PH = 6
TIME_HORIZON = 12
RESOLUTION = 10
BATCH_SIZE = 32
EPOCHS = 50
BATCH = True # indicates whether to convert data into batches
training = True
train_data, train_label, test_data, test_label = load_mpc(TIME_HORIZON, PH, RESOLUTION, BATCH)
model = tf.keras.models.load_model('../saved/postgraduate_dissertation/saved_models/mpc_guided_lstm.h5',custom_objects={'loss_metric1':loss_metric1, 'loss_metric2':loss_metric2, 'loss_metric3':loss_metric3, 'loss_metric4':loss_metric4,'loss_metric5':loss_metric5,'loss_metric6':loss_metric6})
#model = bilstm(PH, training)
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0,
final_sparsity=0.98,
begin_step=2997,
end_step=2997*EPOCHS,
frequency=2997)
}
print(model.summary())
logdir = tempfile.mkdtemp()
print(logdir)
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
SparsityCallback()
]
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
model_for_pruning.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
print(model_for_pruning.summary())
pruned_lstm = model_for_pruning.fit(x=train_data,
y=train_label,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(test_data, test_label),
callbacks=callbacks)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
#tf.keras.models.save_model(model_for_export, '../saved/history/pruned_mpc_guided_lstm.h5', include_optimizer=False)
#json.dump(pruned_lstm.history, open('../saved/history/pruned_mpc_guided_lstm_history', 'w'))
#get_ipython().system('ls saved_models')
# ## Bidirectional LSTM
PH = 6
TIME_HORIZON = 12
RESOLUTION = 10
BATCH_SIZE = 32
EPOCHS = 150
BATCH = True # indicates whether to convert data into batches
training = True
train_data, train_label, test_data, test_label = load_mpc(TIME_HORIZON, PH, RESOLUTION, BATCH)
#model = tf.keras.models.load_model('saved_models/mpc_guided_lstm.h5',custom_objects={'loss_metric1':loss_metric1, 'loss_metric2':loss_metric2, 'loss_metric3':loss_metric3, 'loss_metric4':loss_metric4,'loss_metric5':loss_metric5,'loss_metric6':loss_metric6})
model = bilstm(PH, training)
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0,
final_sparsity=0.98,
begin_step=2997*100,
end_step=2997*EPOCHS,
frequency=2997)
}
print(model.summary())
logdir = tempfile.mkdtemp()
print(logdir)
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
SparsityCallback()
]
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
model_for_pruning.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
print(model_for_pruning.summary())
pruned_lstm = model_for_pruning.fit(x=train_data,
y=train_label,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(test_data, test_label),
callbacks=callbacks)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
#tf.keras.models.save_model(model_for_export, '../saved/history/pruned_mpc_guided_bilstm.h5', include_optimizer=False)
#json.dump(pruned_lstm.history, open('../saved/history/pruned_mpc_guided_bilstm_history', 'w'))
#get_ipython().system('ls saved_models')
# # MPC Pruning results
# In[ ]:
lstm_val_loss_10 = json.load(open('../saved/history/pruned_mpc_guided_lstm_history'))['val_loss_metric1']
lstm_val_loss_20 = json.load(open('../saved/history/pruned_mpc_guided_lstm_history'))['val_loss_metric2']
lstm_val_loss_30 = json.load(open('../saved/history/pruned_mpc_guided_lstm_history'))['val_loss_metric3']
lstm_val_loss_40 = json.load(open('../saved/history/pruned_mpc_guided_lstm_history'))['val_loss_metric4']
lstm_val_loss_50 = json.load(open('../saved/history/pruned_mpc_guided_lstm_history'))['val_loss_metric5']
lstm_val_loss_60 = json.load(open('../saved/history/pruned_mpc_guided_lstm_history'))['val_loss_metric6']
crnn_val_loss_10 = json.load(open('../saved/history/pruned_mpc_guided_crnn_history'))['val_loss_metric1']
crnn_val_loss_20 = json.load(open('../saved/history/pruned_mpc_guided_crnn_history'))['val_loss_metric2']
crnn_val_loss_30 = json.load(open('../saved/history/pruned_mpc_guided_crnn_history'))['val_loss_metric3']
crnn_val_loss_40 = json.load(open('../saved/history/pruned_mpc_guided_crnn_history'))['val_loss_metric4']
crnn_val_loss_50 = json.load(open('../saved/history/pruned_mpc_guided_crnn_history'))['val_loss_metric5']
crnn_val_loss_60 = json.load(open('../saved/history/pruned_mpc_guided_crnn_history'))['val_loss_metric6']
bilstm_val_loss_10 = json.load(open('../saved/history/pruned_mpc_guided_bilstm_history'))['val_loss_metric1'][100:]
bilstm_val_loss_20 = json.load(open('../saved/history/pruned_mpc_guided_bilstm_history'))['val_loss_metric2'][100:]
bilstm_val_loss_30 = json.load(open('../saved/history/pruned_mpc_guided_bilstm_history'))['val_loss_metric3'][100:]
bilstm_val_loss_40 = json.load(open('../saved/history/pruned_mpc_guided_bilstm_history'))['val_loss_metric4'][100:]
bilstm_val_loss_50 = json.load(open('../saved/history/pruned_mpc_guided_bilstm_history'))['val_loss_metric5'][100:]
bilstm_val_loss_60 = json.load(open('../saved/history/pruned_mpc_guided_bilstm_history'))['val_loss_metric6'][100:]
x_crnn = np.genfromtxt('../saved/history/crnn_sparsity.csv')
x_lstm = np.genfromtxt('../saved/history/lstm_sparsity.csv')
x_bilstm = np.genfromtxt('../saved/history/bilstm_sparsity.csv')[100:]
fig, axes = plt.subplots(2,3)
plt.rcParams["figure.figsize"] = (20,10)
axes[0,0].plot(x_lstm, np.sqrt(lstm_val_loss_10), label='LSTM')
axes[0,1].plot(x_lstm, np.sqrt(lstm_val_loss_20), label='LSTM')
axes[0,2].plot(x_lstm, np.sqrt(lstm_val_loss_30), label='LSTM')
axes[1,0].plot(x_lstm, np.sqrt(lstm_val_loss_40), label='LSTM')
axes[1,1].plot(x_lstm, np.sqrt(lstm_val_loss_50), label='LSTM')
axes[1,2].plot(x_lstm, np.sqrt(lstm_val_loss_60), label='LSTM')
axes[0,0].plot(x_crnn, np.sqrt(crnn_val_loss_10), label='CRNN')
axes[0,1].plot(x_crnn, np.sqrt(crnn_val_loss_20), label='CRNN')
axes[0,2].plot(x_crnn, np.sqrt(crnn_val_loss_30), label='CRNN')
axes[1,0].plot(x_crnn, np.sqrt(crnn_val_loss_40), label='CRNN')
axes[1,1].plot(x_crnn, np.sqrt(crnn_val_loss_50), label='CRNN')
axes[1,2].plot(x_crnn, np.sqrt(crnn_val_loss_60), label='CRNN')
axes[0,0].plot(x_bilstm, np.sqrt(bilstm_val_loss_10), label='Bidirectional LSTM')
axes[0,1].plot(x_bilstm, np.sqrt(bilstm_val_loss_20), label='Bidirectional LSTM')
axes[0,2].plot(x_bilstm, np.sqrt(bilstm_val_loss_30), label='Bidirectional LSTM')
axes[1,0].plot(x_bilstm, np.sqrt(bilstm_val_loss_40), label='Bidirectional LSTM')
axes[1,1].plot(x_bilstm, np.sqrt(bilstm_val_loss_50), label='Bidirectional LSTM')
axes[1,2].plot(x_bilstm, np.sqrt(bilstm_val_loss_60), label='Bidirectional LSTM')
axes[0,0].title.set_text('10 minute prediction validation loss')
axes[0,1].title.set_text('20 minute prediction validation loss')
axes[0,2].title.set_text('30 minute prediction validation loss')
axes[1,0].title.set_text('40 minute prediction validation loss')
axes[1,1].title.set_text('50 minute prediction validation loss')
axes[1,2].title.set_text('60 minute prediction validation loss')
axes[0,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_xlabel('Sparsity (%)')
axes[1,1].set_xlabel('Sparsity (%)')
axes[1,2].set_xlabel('Sparsity (%)')
axes[0,0].legend()
axes[0,1].legend()
axes[0,2].legend()
axes[1,0].legend()
axes[1,1].legend()
axes[1,2].legend()
#plt.rcParams["figure.figsize"] = (20,10)
custom_ylim = (0,2)
plt.setp(axes, ylim=custom_ylim)
plt.show()
# # Prune UVA Padova models
# ## CRNN
# In[ ]:
# pruning crnn
#get_ipython().run_line_magic('load_ext', 'tensorboard')
PH = 6
TIME_HORIZON = 12
RESOLUTION = 10
BATCH_SIZE = 32
EPOCHS = 50
BATCH = True # indicates whether to convert data into batches
training = True
train_data, train_label, test_data, test_label = load_uva(TIME_HORIZON, PH, RESOLUTION, BATCH)
model = tf.keras.models.load_model('../saved/postgraduate_dissertation/saved_models/uva_padova_crnn.h5',custom_objects={'loss_metric1':loss_metric1, 'loss_metric2':loss_metric2, 'loss_metric3':loss_metric3, 'loss_metric4':loss_metric4,'loss_metric5':loss_metric5,'loss_metric6':loss_metric6})
#model = bilstm(PH, training)
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0,
final_sparsity=0.98,
begin_step=910,
end_step=910*EPOCHS,
frequency=910)
}
print(model.summary())
logdir = tempfile.mkdtemp()
print(logdir)
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
SparsityCallback()
]
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
model_for_pruning.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
print(model_for_pruning.summary())
pruned_crnn = model_for_pruning.fit(x=train_data,
y=train_label,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(test_data, test_label),
callbacks=callbacks)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
#tf.keras.models.save_model(model_for_export, '../saved/history/pruned_uva_padova_crnn.h5', include_optimizer=False)
#
#get_ipython().system('ls saved_models')
#print(pruned_model.summary())
#%tensorboard --logdir={logdir}
#model.save('../saved/history/pruned_mpc_guided_bilstm.h5')
#json.dump(pruned_crnn.history, open('../saved/history/pruned_uva_padova_crnn_history', 'w'))
# ## LSTM
# In[ ]:
PH = 6
TIME_HORIZON = 12
RESOLUTION = 10
BATCH_SIZE = 32
EPOCHS = 50
BATCH = True # indicates whether to convert data into batches
training = True
train_data, train_label, test_data, test_label = load_uva(TIME_HORIZON, PH, RESOLUTION, BATCH)
model = tf.keras.models.load_model('../saved/postgraduate_dissertation/saved_models/uva_padova_lstm.h5',custom_objects={'loss_metric1':loss_metric1, 'loss_metric2':loss_metric2, 'loss_metric3':loss_metric3, 'loss_metric4':loss_metric4,'loss_metric5':loss_metric5,'loss_metric6':loss_metric6})
#model = bilstm(PH, training)
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0,
final_sparsity=0.98,
begin_step=910,
end_step=910*EPOCHS,
frequency=910)
}
print(model.summary())
logdir = tempfile.mkdtemp()
print(logdir)
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
SparsityCallback()
]
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
model_for_pruning.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
print(model_for_pruning.summary())
pruned_lstm = model_for_pruning.fit(x=train_data,
y=train_label,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(test_data, test_label),
callbacks=callbacks)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
#tf.keras.models.save_model(model_for_export, '../saved/history/pruned_uva_padova_lstm.h5', include_optimizer=False)
#json.dump(pruned_lstm.history, open('../saved/history/pruned_uva_padova_lstm_history', 'w'))
#get_ipython().system('ls saved_models')
# ## BiLSTM
# In[ ]:
PH = 6
TIME_HORIZON = 12
RESOLUTION = 10
BATCH_SIZE = 32
EPOCHS = 150
BATCH = True # indicates whether to convert data into batches
training = True
train_data, train_label, test_data, test_label = load_uva(TIME_HORIZON, PH, RESOLUTION, BATCH)
#model = tf.keras.models.load_model('saved_models/mpc_guided_lstm.h5',custom_objects={'loss_metric1':loss_metric1, 'loss_metric2':loss_metric2, 'loss_metric3':loss_metric3, 'loss_metric4':loss_metric4,'loss_metric5':loss_metric5,'loss_metric6':loss_metric6})
model = bilstm(PH, training)
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0,
final_sparsity=0.98,
begin_step=910*100,
end_step=910*EPOCHS,
frequency=910)
}
print(model.summary())
logdir = tempfile.mkdtemp()
print(logdir)
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
SparsityCallback()
]
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
model_for_pruning.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
print(model_for_pruning.summary())
pruned_lstm = model_for_pruning.fit(x=train_data,
y=train_label,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(test_data, test_label),
callbacks=callbacks)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
#tf.keras.models.save_model(model_for_export, '../saved/history/pruned_uva_padova_bilstm.h5', include_optimizer=False)
#json.dump(pruned_lstm.history, open('../saved/history/pruned_uva_padova_bilstm_history', 'w'))
#get_ipython().system('ls saved_models')
# # UVA Padova Pruning results
# In[ ]:
lstm_val_loss_10 = json.load(open('../saved/history/pruned_uva_padova_lstm_history'))['val_loss_metric1']
lstm_val_loss_20 = json.load(open('../saved/history/pruned_uva_padova_lstm_history'))['val_loss_metric2']
lstm_val_loss_30 = json.load(open('../saved/history/pruned_uva_padova_lstm_history'))['val_loss_metric3']
lstm_val_loss_40 = json.load(open('../saved/history/pruned_uva_padova_lstm_history'))['val_loss_metric4']
lstm_val_loss_50 = json.load(open('../saved/history/pruned_uva_padova_lstm_history'))['val_loss_metric5']
lstm_val_loss_60 = json.load(open('../saved/history/pruned_uva_padova_lstm_history'))['val_loss_metric6']
crnn_val_loss_10 = json.load(open('../saved/history/pruned_uva_padova_crnn_history'))['val_loss_metric1']
crnn_val_loss_20 = json.load(open('../saved/history/pruned_uva_padova_crnn_history'))['val_loss_metric2']
crnn_val_loss_30 = json.load(open('../saved/history/pruned_uva_padova_crnn_history'))['val_loss_metric3']
crnn_val_loss_40 = json.load(open('../saved/history/pruned_uva_padova_crnn_history'))['val_loss_metric4']
crnn_val_loss_50 = json.load(open('../saved/history/pruned_uva_padova_crnn_history'))['val_loss_metric5']
crnn_val_loss_60 = json.load(open('../saved/history/pruned_uva_padova_crnn_history'))['val_loss_metric6']
bilstm_val_loss_10 = json.load(open('../saved/history/pruned_uva_padova_bilstm_history'))['val_loss_metric1'][100:]
bilstm_val_loss_20 = json.load(open('../saved/history/pruned_uva_padova_bilstm_history'))['val_loss_metric2'][100:]
bilstm_val_loss_30 = json.load(open('../saved/history/pruned_uva_padova_bilstm_history'))['val_loss_metric3'][100:]
bilstm_val_loss_40 = json.load(open('../saved/history/pruned_uva_padova_bilstm_history'))['val_loss_metric4'][100:]
bilstm_val_loss_50 = json.load(open('../saved/history/pruned_uva_padova_bilstm_history'))['val_loss_metric5'][100:]
bilstm_val_loss_60 = json.load(open('../saved/history/pruned_uva_padova_bilstm_history'))['val_loss_metric6'][100:]
x_crnn = np.genfromtxt('../saved/history/uva_crnn_sparsity.csv')#[6:]
x_lstm = np.genfromtxt('../saved/history/uva_lstm_sparsity.csv')
x_bilstm = np.genfromtxt('../saved/history/uva_bilstm_sparsity.csv')[100:]
print(x_crnn.shape, x_lstm.shape, x_bilstm.shape)
fig, axes = plt.subplots(2,3)
plt.rcParams["figure.figsize"] = (20,10)
axes[0,0].plot(x_lstm, np.sqrt(lstm_val_loss_10), label='LSTM')
axes[0,1].plot(x_lstm, np.sqrt(lstm_val_loss_20), label='LSTM')
axes[0,2].plot(x_lstm, np.sqrt(lstm_val_loss_30), label='LSTM')
axes[1,0].plot(x_lstm, np.sqrt(lstm_val_loss_40), label='LSTM')
axes[1,1].plot(x_lstm, np.sqrt(lstm_val_loss_50), label='LSTM')
axes[1,2].plot(x_lstm, np.sqrt(lstm_val_loss_60), label='LSTM')
axes[0,0].plot(x_crnn, np.sqrt(crnn_val_loss_10), label='CRNN')
axes[0,1].plot(x_crnn, np.sqrt(crnn_val_loss_20), label='CRNN')
axes[0,2].plot(x_crnn, np.sqrt(crnn_val_loss_30), label='CRNN')
axes[1,0].plot(x_crnn, np.sqrt(crnn_val_loss_40), label='CRNN')
axes[1,1].plot(x_crnn, np.sqrt(crnn_val_loss_50), label='CRNN')
axes[1,2].plot(x_crnn, np.sqrt(crnn_val_loss_60), label='CRNN')
axes[0,0].plot(x_bilstm, np.sqrt(bilstm_val_loss_10), label='Bidirectional LSTM')
axes[0,1].plot(x_bilstm, np.sqrt(bilstm_val_loss_20), label='Bidirectional LSTM')
axes[0,2].plot(x_bilstm, np.sqrt(bilstm_val_loss_30), label='Bidirectional LSTM')
axes[1,0].plot(x_bilstm, np.sqrt(bilstm_val_loss_40), label='Bidirectional LSTM')
axes[1,1].plot(x_bilstm, np.sqrt(bilstm_val_loss_50), label='Bidirectional LSTM')
axes[1,2].plot(x_bilstm, np.sqrt(bilstm_val_loss_60), label='Bidirectional LSTM')
axes[0,0].title.set_text('10 minute prediction validation loss')
axes[0,1].title.set_text('20 minute prediction validation loss')
axes[0,2].title.set_text('30 minute prediction validation loss')
axes[1,0].title.set_text('40 minute prediction validation loss')
axes[1,1].title.set_text('50 minute prediction validation loss')
axes[1,2].title.set_text('60 minute prediction validation loss')
axes[0,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_xlabel('Sparsity (%)')
axes[1,1].set_xlabel('Sparsity (%)')
axes[1,2].set_xlabel('Sparsity (%)')
axes[0,0].legend()
axes[0,1].legend()
axes[0,2].legend()
axes[1,0].legend()
axes[1,1].legend()
axes[1,2].legend()
#plt.rcParams["figure.figsize"] = (20,10)
custom_ylim = (0,2)
plt.setp(axes, ylim=custom_ylim)
plt.show()
| 42.128477 | 292 | 0.687616 | 4,393 | 31,807 | 4.697701 | 0.073298 | 0.036633 | 0.043611 | 0.044774 | 0.864564 | 0.855163 | 0.850414 | 0.844793 | 0.839269 | 0.816979 | 0 | 0.034565 | 0.171377 | 31,807 | 754 | 293 | 42.18435 | 0.748444 | 0.15745 | 0 | 0.654628 | 0 | 0 | 0.179869 | 0.093421 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031603 | false | 0 | 0.022573 | 0.004515 | 0.092551 | 0.049661 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5b1b0e6043f032cf6ff2777d7ab419487030a9f7 | 410 | py | Python | pyctm/memory/memory.py | CST-Group/PyCTM | c42b6141fb0488a7ec16d7e7563184c4859f02a3 | [
"MIT"
] | null | null | null | pyctm/memory/memory.py | CST-Group/PyCTM | c42b6141fb0488a7ec16d7e7563184c4859f02a3 | [
"MIT"
] | null | null | null | pyctm/memory/memory.py | CST-Group/PyCTM | c42b6141fb0488a7ec16d7e7563184c4859f02a3 | [
"MIT"
] | null | null | null | class Memory:
def get_i(self) -> object:
pass
def get_name(self) -> str:
pass
def get_evaluation(self) -> float:
pass
def get_id(self) -> str:
pass
def set_i(self, i) -> None:
pass
def set_name(self, name) -> None:
pass
def set_evaluation(self, evaluation) -> None:
pass
def set_id(self, id) -> None:
pass
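# Hedged sketch (the name below is illustrative, not from PyCTM itself): a plain
# attribute-backed class satisfying this interface.
class SimpleMemory:
    def __init__(self, id, name, i=None, evaluation=0.0):
        self._id = id
        self._name = name
        self._i = i
        self._evaluation = evaluation

    def get_i(self) -> object:
        return self._i

    def get_name(self) -> str:
        return self._name

    def get_evaluation(self) -> float:
        return self._evaluation

    def get_id(self) -> str:
        return self._id

    def set_i(self, i) -> None:
        self._i = i

    def set_name(self, name) -> None:
        self._name = name

    def set_evaluation(self, evaluation) -> None:
        self._evaluation = evaluation

    def set_id(self, id) -> None:
        self._id = id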
| 15.769231 | 49 | 0.52439 | 54 | 410 | 3.833333 | 0.277778 | 0.236715 | 0.193237 | 0.202899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.365854 | 410 | 25 | 50 | 16.4 | 0.796154 | 0 | 0 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.470588 | false | 0.470588 | 0 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
# File: logs/.gitkeep.py | repo: wucaihua520/info | license: MIT
from flask import current_app
# These calls require an active Flask application context.
current_app.logger.debug('debug')
current_app.logger.error('error')
# File: system_b/site_up.py | repo: objarni/gothpy_fun | license: MIT
def get_is_up_for(url):
    # Fake implementation!
    return '<html>fake HTML, get something real from http://is.up/!</html>'


def update_lamp_status(check_up=get_is_up_for):
    pass
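The `check_up` default parameter is a dependency-injection seam: tests can pass a fake checker instead of hitting the network. A hypothetical sketch of the pattern (the upstream `update_lamp_status` body is still a stub, so this stand-in invents a tiny bit of logic purely for illustration):

```python
def update_lamp_status(check_up):
    # Stand-in logic (hypothetical): classify the site from the checker's HTML.
    html = check_up('http://is.up/')
    return 'up' if '<html>' in html else 'down'


def fake_up(url):
    return '<html>fake HTML</html>'


def fake_down(url):
    return '503 Service Unavailable'


# Injecting fakes keeps the test fast and deterministic.
assert update_lamp_status(check_up=fake_up) == 'up'
assert update_lamp_status(check_up=fake_down) == 'down'
```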
# File: amocrm_api_client/repositories/core/__init__.py | repo: iqtek/amocrm_api_client | license: MIT
from .IPaginable import IPaginable
# File: klusta_process_manager/server/__init__.py | repo: tymoreau/app_launcher | license: BSD-3-Clause
from .serverTCP import ServerTCP
# File: timpani/__init__.py | repo: ollien/Timpani | license: MIT
from . import database
from . import blog
from . import auth
from . import configmanager
from . import wsgi
from . import themes
from .timpani import run
# File: utils/__init__.py | repo: azimuth-san/pdaf-tracking | license: MIT
from .ellipse import EllipseData
from .ellipse import create_ellipse
from .gaussian import GaussianNoise
from .funcs import maharanobis_distance
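`maharanobis_distance` (the upstream spelling of Mahalanobis) suggests the helper computes a Mahalanobis distance, the covariance-weighted distance used for gating measurements in PDAF tracking. A standalone sketch, assuming nothing about the package's actual signature:

```python
import numpy as np


def mahalanobis_distance(x, mean, cov):
    """sqrt((x - mean)^T  cov^-1  (x - mean)) for a single sample."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    # Solve cov @ y = diff instead of forming the explicit inverse.
    return float(np.sqrt(diff @ np.linalg.solve(np.asarray(cov, dtype=float), diff)))


# With an identity covariance it reduces to the Euclidean distance: here 5.0.
d = mahalanobis_distance([3.0, 4.0], [0.0, 0.0], np.eye(2))
```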
# File: lib/django-1.5/django/contrib/gis/forms/__init__.py | repo: MiCHiLU/google_appengine_sdk | license: Apache-2.0
from django.forms import *
from django.contrib.gis.forms.fields import GeometryField
# File: test/test_modals.py | repo: volfpeter/markyp-bootstrap4 | license: MIT
from markyp_bootstrap4.buttons import ButtonContext
from markyp_bootstrap4.modals import *


def test_title():
assert title.h1("Value").markup == '<h1 class="modal-title">Value</h1>'
assert title.h1("Value", class_="my-title", attr=42).markup == '<h1 attr="42" class="modal-title my-title">Value</h1>'
assert title.h2("Value").markup == '<h2 class="modal-title">Value</h2>'
assert title.h2("Value", class_="my-title", attr=42).markup == '<h2 attr="42" class="modal-title my-title">Value</h2>'
assert title.h3("Value").markup == '<h3 class="modal-title">Value</h3>'
assert title.h3("Value", class_="my-title", attr=42).markup == '<h3 attr="42" class="modal-title my-title">Value</h3>'
assert title.h4("Value").markup == '<h4 class="modal-title">Value</h4>'
assert title.h4("Value", class_="my-title", attr=42).markup == '<h4 attr="42" class="modal-title my-title">Value</h4>'
assert title.h5("Value").markup == '<h5 class="modal-title">Value</h5>'
assert title.h5("Value", class_="my-title", attr=42).markup == '<h5 attr="42" class="modal-title my-title">Value</h5>'
assert title.h6("Value").markup == '<h6 class="modal-title">Value</h6>'
assert title.h6("Value", class_="my-title", attr=42).markup == '<h6 attr="42" class="modal-title my-title">Value</h6>'
assert title.p("Value").markup == '<p class="modal-title">Value</p>'
assert title.p("Value", class_="my-title", attr=42).markup == '<p attr="42" class="modal-title my-title">Value</p>'


def test_CloseButtonFactory():
contexts = (
ButtonContext.PRIMARY, ButtonContext.SECONDARY, ButtonContext.SUCCESS,
ButtonContext.DANGER, ButtonContext.WARNING, ButtonContext.INFO,
ButtonContext.LIGHT, ButtonContext.DARK, ButtonContext.LINK
)
factory = CloseButtonFactory()
assert factory.create_element().markup == '<button ></button>'
for context in contexts:
assert factory.create_element(class_=factory.get_css_class(context), **factory.update_attributes({})).markup ==\
f'<button type="button" data-dismiss="modal" class="btn btn-{context}"></button>'
assert factory.create_element("Value", class_=factory.get_css_class(context), **factory.update_attributes({})).markup ==\
f'<button type="button" data-dismiss="modal" class="btn btn-{context}">Value</button>'


def test_ModalToggleButtonFactory():
contexts = (
ButtonContext.PRIMARY, ButtonContext.SECONDARY, ButtonContext.SUCCESS,
ButtonContext.DANGER, ButtonContext.WARNING, ButtonContext.INFO,
ButtonContext.LIGHT, ButtonContext.DARK, ButtonContext.LINK
)
factory = ModalToggleButtonFactory()
assert factory.create_element().markup == '<button ></button>'
for context in contexts:
assert factory.create_element(class_=factory.get_css_class(context), **factory.update_attributes({"modal_id": "modal-id"})).markup ==\
f'<button type="button" data-toggle="modal" data-target="#modal-id" class="btn btn-{context}"></button>'
assert factory.create_element("Value", class_=factory.get_css_class(context), **factory.update_attributes({"modal_id": "modal-id"})).markup ==\
f'<button type="button" data-toggle="modal" data-target="#modal-id" class="btn btn-{context}">Value</button>'


def test_close_button():
contexts = (
ButtonContext.PRIMARY, ButtonContext.SECONDARY, ButtonContext.SUCCESS,
ButtonContext.DANGER, ButtonContext.WARNING, ButtonContext.INFO,
ButtonContext.LIGHT, ButtonContext.DARK, ButtonContext.LINK
)
factory = close_button
assert factory.create_element().markup == '<button ></button>'
for context in contexts:
assert factory.create_element(class_=factory.get_css_class(context), **factory.update_attributes({})).markup ==\
f'<button type="button" data-dismiss="modal" class="btn btn-{context}"></button>'
assert factory.create_element("Value", class_=factory.get_css_class(context), **factory.update_attributes({})).markup ==\
f'<button type="button" data-dismiss="modal" class="btn btn-{context}">Value</button>'


def test_toggle_button():
contexts = (
ButtonContext.PRIMARY, ButtonContext.SECONDARY, ButtonContext.SUCCESS,
ButtonContext.DANGER, ButtonContext.WARNING, ButtonContext.INFO,
ButtonContext.LIGHT, ButtonContext.DARK, ButtonContext.LINK
)
factory = toggle_button
assert factory.create_element().markup == '<button ></button>'
for context in contexts:
assert factory.create_element(class_=factory.get_css_class(context), **factory.update_attributes({"modal_id": "modal-id"})).markup ==\
f'<button type="button" data-toggle="modal" data-target="#modal-id" class="btn btn-{context}"></button>'
assert factory.create_element("Value", class_=factory.get_css_class(context), **factory.update_attributes({"modal_id": "modal-id"})).markup ==\
f'<button type="button" data-toggle="modal" data-target="#modal-id" class="btn btn-{context}">Value</button>'


def test_modal():
assert modal(id="modal-1").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", title="Example").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header">',
'Example',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", add_close_button=False).markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", centered=True).markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog modal-dialog-centered">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", fade=False).markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", class_="my-class").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade my-class">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", dialog_class="my-class").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog my-class">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", content_class="my-class").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content my-class">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", header_class="my-class").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header my-class">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", body_class="my-class").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body my-class"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", footer_class="my-class").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'</div>',
'</div>',
'</div>'
))
assert modal(id="modal-1", footer="Footer", footer_class="my-class").markup == "\n".join((
'<div role="dialog" tabindex="-1" id="modal-1" class="modal fade">',
'<div role="document" class="modal-dialog">',
'<div class="modal-content">',
'<div class="modal-header">',
'<button type="button" data-dismiss="modal" aria-label="Close" class="close"><span aria-hidden="true">×</span></button>',
'</div>',
'<div class="modal-body"></div>',
'<div class="modal-footer my-class">\nFooter\n</div>',
'</div>',
'</div>',
'</div>'
))


def test_modal_element():
assert modal_element().markup ==\
'<div role="dialog" tabindex="-1" class="modal fade"></div>'
assert modal_element("First", "Second", class_="my-modal", attr=42).markup ==\
'<div role="dialog" tabindex="-1" attr="42" class="modal fade my-modal">\nFirst\nSecond\n</div>'
assert modal_element("First", "Second", class_="my-modal", fade=False, attr=42).markup ==\
'<div role="dialog" tabindex="-1" attr="42" class="modal my-modal">\nFirst\nSecond\n</div>'


def test_modal_dialog_base():
assert modal_dialog_base().markup ==\
'<div role="document" class="modal-dialog"></div>'
assert modal_dialog_base("First", "Second", class_="my-dialog", attr=42).markup ==\
'<div role="document" attr="42" class="modal-dialog my-dialog">\nFirst\nSecond\n</div>'
assert modal_dialog_base("First", "Second", class_="my-dialog", centered=True, attr=42).markup ==\
'<div role="document" attr="42" class="modal-dialog modal-dialog-centered my-dialog">\nFirst\nSecond\n</div>'


def test_modal_content():
assert modal_content().markup ==\
'<div class="modal-content"></div>'
assert modal_content("First", "Second", class_="my-content", attr=42).markup ==\
'<div attr="42" class="modal-content my-content">\nFirst\nSecond\n</div>'


def test_modal_header():
assert modal_header().markup ==\
'<div class="modal-header"></div>'
assert modal_header("First", "Second", class_="my-header", attr=42).markup ==\
'<div attr="42" class="modal-header my-header">\nFirst\nSecond\n</div>'


def test_modal_body():
assert modal_body().markup ==\
'<div class="modal-body"></div>'
assert modal_body("First", "Second", class_="my-body", attr=42).markup ==\
'<div attr="42" class="modal-body my-body">\nFirst\nSecond\n</div>'


def test_modal_footer():
assert modal_footer().markup ==\
'<div class="modal-footer"></div>'
assert modal_footer("First", "Second", class_="my-footer", attr=42).markup ==\
'<div attr="42" class="modal-footer my-footer">\nFirst\nSecond\n</div>'
# File: easytrader/dc/__init__.py | repo: gitbillzone/easytrader | license: MIT
from .data_center import DataCenter
# File: cranlogs/__init__.py | repo: sophiamyang/cranlogs | license: BSD-3-Clause
from .core import cran_downloads
# File: credentials.py | repo: kmpoltorak/twilioapi-send-sms | license: MIT
# Constants
# Twilio API account unique SID
ACCOUNT_SID = 'PASTE_YOUR_TWILIO_ACCOUNT_SID_HERE'
# Twilio API account unique authentication token
AUTH_TOKEN = 'PASTE_YOUR_TWILIO_ACCOUNT_AUTH_TOKEN_HERE'
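A common hardening step (an assumption, not part of this repo) is to read the credentials from environment variables and fall back to the placeholders, so real secrets never land in version control:

```python
import os

# Hypothetical environment variable names; any consistent naming works.
ACCOUNT_SID = os.environ.get('TWILIO_ACCOUNT_SID', 'PASTE_YOUR_TWILIO_ACCOUNT_SID_HERE')
AUTH_TOKEN = os.environ.get('TWILIO_AUTH_TOKEN', 'PASTE_YOUR_TWILIO_ACCOUNT_AUTH_TOKEN_HERE')
```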
# File: app/selenium_ui/jira_ui.py | repo: dsplugins/dc-app-performance-toolkit | license: Apache-2.0
from selenium_ui.jira import modules
from extension.jira import extension_ui # noqa F401
# this action should be the first one
def test_0_selenium_a_login(webdriver, jira_datasets, jira_screen_shots):
    modules.login(webdriver, jira_datasets)


def test_1_selenium_browse_projects_list(webdriver, jira_datasets, jira_screen_shots):
    modules.browse_projects_list(webdriver, jira_datasets)


def test_1_selenium_browse_boards_list(webdriver, jira_datasets, jira_screen_shots):
    modules.browse_boards_list(webdriver, jira_datasets)


def test_1_selenium_create_issue(webdriver, jira_datasets, jira_screen_shots):
    modules.create_issue(webdriver, jira_datasets)


def test_1_selenium_edit_issue(webdriver, jira_datasets, jira_screen_shots):
    modules.edit_issue(webdriver, jira_datasets)


def test_1_selenium_save_comment(webdriver, jira_datasets, jira_screen_shots):
    modules.save_comment(webdriver, jira_datasets)


def test_1_selenium_search_jql(webdriver, jira_datasets, jira_screen_shots):
    modules.search_jql(webdriver, jira_datasets)


def test_1_selenium_view_backlog_for_scrum_board(webdriver, jira_datasets, jira_screen_shots):
    modules.view_backlog_for_scrum_board(webdriver, jira_datasets)


def test_1_selenium_view_scrum_board(webdriver, jira_datasets, jira_screen_shots):
    modules.view_scrum_board(webdriver, jira_datasets)


def test_1_selenium_view_kanban_board(webdriver, jira_datasets, jira_screen_shots):
    modules.view_kanban_board(webdriver, jira_datasets)


def test_1_selenium_view_dashboard(webdriver, jira_datasets, jira_screen_shots):
    modules.view_dashboard(webdriver, jira_datasets)


def test_1_selenium_view_issue(webdriver, jira_datasets, jira_screen_shots):
    modules.view_issue(webdriver, jira_datasets)


def test_1_selenium_view_project_summary(webdriver, jira_datasets, jira_screen_shots):
    modules.view_project_summary(webdriver, jira_datasets)


"""
Add custom actions anywhere between login and log out action. Move this to a different line as needed.
Write your custom selenium scripts in `app/extension/jira/extension_ui.py`.
Refer to `app/selenium_ui/jira/modules.py` for examples.
"""
# def test_1_selenium_custom_action(webdriver, jira_datasets, jira_screen_shots):
#     extension_ui.app_specific_action(webdriver, jira_datasets)


# this action should be the last one
def test_2_selenium_z_log_out(webdriver, jira_datasets, jira_screen_shots):
    modules.log_out(webdriver, jira_datasets)
# File: venv/lib/python3.8/site-packages/pip/_internal/cli/status_codes.py | repo: Retraces/UkraineBot | license: MIT
# The only content of this entry is a pip cache pool path (deduplicated file body):
/home/runner/.cache/pip/pool/b0/41/47/51a5096eabfc880acbdc702d733b5666618e157d358537ac4b2b43121d
# File: notifications/migrations/0001_initial.py | repo: ezkat/linkedevents | license: MIT
# Generated by Django 2.2.9 on 2020-01-14 08:44
from django.db import migrations, models


class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='NotificationTemplate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('type', models.CharField(choices=[('unpublished_event_deleted', 'Unpublished event deleted'), ('event_published', 'Event published'), ('draft_posted', 'Draft posted')], db_index=True, max_length=100, unique=True, verbose_name='Type')),
('subject', models.CharField(help_text='Subject for email notifications', max_length=200, verbose_name='Subject')),
('subject_fi', models.CharField(help_text='Subject for email notifications', max_length=200, null=True, verbose_name='Subject')),
('subject_sv', models.CharField(help_text='Subject for email notifications', max_length=200, null=True, verbose_name='Subject')),
('subject_en', models.CharField(help_text='Subject for email notifications', max_length=200, null=True, verbose_name='Subject')),
('subject_zh_hans', models.CharField(help_text='Subject for email notifications', max_length=200, null=True, verbose_name='Subject')),
('subject_ru', models.CharField(help_text='Subject for email notifications', max_length=200, null=True, verbose_name='Subject')),
('subject_ar', models.CharField(help_text='Subject for email notifications', max_length=200, null=True, verbose_name='Subject')),
('body', models.TextField(blank=True, help_text='Text body for email notifications', verbose_name='Body')),
('body_fi', models.TextField(blank=True, help_text='Text body for email notifications', null=True, verbose_name='Body')),
('body_sv', models.TextField(blank=True, help_text='Text body for email notifications', null=True, verbose_name='Body')),
('body_en', models.TextField(blank=True, help_text='Text body for email notifications', null=True, verbose_name='Body')),
('body_zh_hans', models.TextField(blank=True, help_text='Text body for email notifications', null=True, verbose_name='Body')),
('body_ru', models.TextField(blank=True, help_text='Text body for email notifications', null=True, verbose_name='Body')),
('body_ar', models.TextField(blank=True, help_text='Text body for email notifications', null=True, verbose_name='Body')),
('html_body', models.TextField(blank=True, help_text='HTML body for email notifications', verbose_name='HTML Body')),
('html_body_fi', models.TextField(blank=True, help_text='HTML body for email notifications', null=True, verbose_name='HTML Body')),
('html_body_sv', models.TextField(blank=True, help_text='HTML body for email notifications', null=True, verbose_name='HTML Body')),
('html_body_en', models.TextField(blank=True, help_text='HTML body for email notifications', null=True, verbose_name='HTML Body')),
('html_body_zh_hans', models.TextField(blank=True, help_text='HTML body for email notifications', null=True, verbose_name='HTML Body')),
('html_body_ru', models.TextField(blank=True, help_text='HTML body for email notifications', null=True, verbose_name='HTML Body')),
('html_body_ar', models.TextField(blank=True, help_text='HTML body for email notifications', null=True, verbose_name='HTML Body')),
],
options={
'verbose_name': 'Notification template',
'verbose_name_plural': 'Notification templates',
},
),
]
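The per-language columns (`subject_fi`, `subject_sv`, and so on, a naming scheme that matches django-modeltranslation's generated fields; that attribution is an inference from the field names) are usually read with a fallback to the untranslated base field. A standalone sketch of that lookup, independent of Django:

```python
def pick_translation(row, field, lang):
    """Return the language-specific value if present and non-empty, else the base value."""
    value = row.get(f'{field}_{lang}')
    return value if value else row[field]


row = {
    'subject': 'Event published',
    'subject_fi': 'Tapahtuma julkaistu',  # Finnish translation present
    'subject_sv': None,                   # Swedish translation missing
}
assert pick_translation(row, 'subject', 'fi') == 'Tapahtuma julkaistu'
assert pick_translation(row, 'subject', 'sv') == 'Event published'
```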
# File: sdk/python/pulumi_google_native/bigqueryreservation/v1beta1/_enums.py | repo: AaronFriel/pulumi-google-native | license: Apache-2.0
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
from enum import Enum
__all__ = [
'CapacityCommitmentPlan',
'CapacityCommitmentRenewalPlan',
]
class CapacityCommitmentPlan(str, Enum):
"""
    Capacity commitment plan.
"""
COMMITMENT_PLAN_UNSPECIFIED = "COMMITMENT_PLAN_UNSPECIFIED"
"""
Invalid plan value. Requests with this value will be rejected with error code `google.rpc.Code.INVALID_ARGUMENT`.
"""
FLEX = "FLEX"
"""
    Flex commitments have a committed period of 1 minute after becoming ACTIVE. After that, they are not in a committed period anymore and can be removed any time.
"""
TRIAL = "TRIAL"
"""
Trial commitments have a committed period of 182 days after becoming ACTIVE. After that, they are converted to a new commitment based on the `renewal_plan`. Default `renewal_plan` for Trial commitment is Flex so that it can be deleted right after committed period ends.
"""
MONTHLY = "MONTHLY"
"""
Monthly commitments have a committed period of 30 days after becoming ACTIVE. After that, they are not in a committed period anymore and can be removed any time.
"""
ANNUAL = "ANNUAL"
"""
Annual commitments have a committed period of 365 days after becoming ACTIVE. After that they are converted to a new commitment based on the renewal_plan.
"""
class CapacityCommitmentRenewalPlan(str, Enum):
"""
The plan this capacity commitment is converted to after commitment_end_time passes. Once the plan is changed, committed period is extended according to commitment plan. Only applicable for ANNUAL commitments.
"""
COMMITMENT_PLAN_UNSPECIFIED = "COMMITMENT_PLAN_UNSPECIFIED"
"""
Invalid plan value. Requests with this value will be rejected with error code `google.rpc.Code.INVALID_ARGUMENT`.
"""
FLEX = "FLEX"
"""
    Flex commitments have a committed period of 1 minute after becoming ACTIVE. After that, they are not in a committed period anymore and can be removed any time.
"""
TRIAL = "TRIAL"
"""
Trial commitments have a committed period of 182 days after becoming ACTIVE. After that, they are converted to a new commitment based on the `renewal_plan`. Default `renewal_plan` for Trial commitment is Flex so that it can be deleted right after committed period ends.
"""
MONTHLY = "MONTHLY"
"""
Monthly commitments have a committed period of 30 days after becoming ACTIVE. After that, they are not in a committed period anymore and can be removed any time.
"""
ANNUAL = "ANNUAL"
"""
Annual commitments have a committed period of 365 days after becoming ACTIVE. After that they are converted to a new commitment based on the renewal_plan.
"""
| 45.492063 | 273 | 0.721563 | 394 | 2,866 | 5.192893 | 0.251269 | 0.109971 | 0.078201 | 0.093842 | 0.771261 | 0.771261 | 0.771261 | 0.771261 | 0.771261 | 0.771261 | 0 | 0.008411 | 0.211793 | 2,866 | 62 | 274 | 46.225806 | 0.8973 | 0.142359 | 0 | 0.588235 | 1 | 0 | 0.267504 | 0.18851 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
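The generated classes above subclass both `str` and `Enum`, so each member compares equal to its raw string value and can be passed anywhere the underlying API expects a plain string. A small standalone reproduction of the pattern (independent of the Pulumi SDK):

```python
from enum import Enum


class CapacityCommitmentPlan(str, Enum):
    """Minimal (str, Enum) demo mirroring the generated SDK pattern."""
    COMMITMENT_PLAN_UNSPECIFIED = "COMMITMENT_PLAN_UNSPECIFIED"
    FLEX = "FLEX"
    MONTHLY = "MONTHLY"
    ANNUAL = "ANNUAL"


plan = CapacityCommitmentPlan.ANNUAL
# Members are real strings, so equality and serialization need no .value calls.
print(plan == "ANNUAL")
print(isinstance(plan, str))
```

Because the members are strings, they JSON-serialize and string-format transparently, which is why SDK generators favor this over a plain `Enum`.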
c4a2d6eaac8229b68c75ef91faa30fa6a9d44bf2 | 229 | py | Python | enb/__init__.py | G-AshwinKumar/experiment-notebook | aae1c5fb9ef8f84dce5d75989ed8975797282f37 | [
"MIT"
] | null | null | null | enb/__init__.py | G-AshwinKumar/experiment-notebook | aae1c5fb9ef8f84dce5d75989ed8975797282f37 | [
"MIT"
] | null | null | null | enb/__init__.py | G-AshwinKumar/experiment-notebook | aae1c5fb9ef8f84dce5d75989ed8975797282f37 | [
"MIT"
] | null | null | null | from . import singleton_cli
from . import atable
from . import aanalysis
from . import sets
from . import isets
from . import icompression
from . import pgm
from . import plotdata
from . import ray_cluster | 22.9 | 27 | 0.786026 | 32 | 229 | 5.5625 | 0.40625 | 0.561798 | 0.213483 | 0.258427 | 0.325843 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170306 | 229 | 10 | 28 | 22.9 | 0.936842 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c4c959075ee8da7dffcadfbacf6b54ef194eab0b | 19,991 | py | Python | rl_bakery/contrib/test_py_prioritized_replay_buffer.py | pzhongp/rl-bakery | cf0887be7ca424ed81b48e5f9a304d9c6b201fe2 | [
"Apache-2.0"
] | 1 | 2022-01-07T21:13:21.000Z | 2022-01-07T21:13:21.000Z | rl_bakery/contrib/test_py_prioritized_replay_buffer.py | pzhongp/rl-bakery | cf0887be7ca424ed81b48e5f9a304d9c6b201fe2 | [
"Apache-2.0"
] | 142 | 2021-06-17T22:03:18.000Z | 2021-12-20T01:17:49.000Z | rl_bakery/contrib/test_py_prioritized_replay_buffer.py | jtimberlake/rl-bakery | d91469bc8923d4e9b3605580b1f374632d029ad0 | [
"Apache-2.0"
] | null | null | null | #
# Based on work found at:
# https://github.com/tensorflow/agents/blob/master/tf_agents/replay_buffers/py_replay_buffers_test.py
#
"""Unit tests for PyPrioritizedReplayBuffer."""
from __future__ import division
from __future__ import unicode_literals
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from rl_bakery.contrib.py_prioritized_replay_buffer import PyPrioritizedReplayBuffer
from tf_agents.specs import array_spec
from tf_agents.trajectories import policy_step
from tf_agents.trajectories import time_step as ts
from tf_agents.trajectories import trajectory
from tf_agents.utils import nest_utils
assert tf.executing_eagerly() is True, "Error: eager mode was not activated successfully"
class PyPrioritizedReplayBufferTest(parameterized.TestCase, tf.test.TestCase):
def _create_replay_buffer(self, capacity=32):
self._stack_count = 2
self._single_shape = (1,)
shape = (1, self._stack_count)
observation_spec = array_spec.ArraySpec(shape, np.int32, 'obs')
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = policy_step.PolicyStep(array_spec.BoundedArraySpec(
shape=(), dtype=np.int32, minimum=0, maximum=1, name='action'))
self._trajectory_spec = trajectory.from_transition(
time_step_spec, action_spec, time_step_spec)
self._capacity = capacity
self._alpha = 0.6
self._replay_buffer = PyPrioritizedReplayBuffer(data_spec=self._trajectory_spec, capacity=self._capacity,
alpha=self._alpha)
def _fill_replay_buffer(self, n_transition=50):
# Generate N observations.
single_obs_list = []
obs_count = 100
for k in range(obs_count):
single_obs_list.append(np.full(self._single_shape, k, dtype=np.int32))
# Add stack of observations to the replay buffer.
time_steps = []
for k in range(len(single_obs_list) - self._stack_count + 1):
stacked_observation = np.concatenate(single_obs_list[k:k + self._stack_count], axis=-1)
time_steps.append(ts.transition(stacked_observation, reward=0.0))
self._experience_count = n_transition
dummy_action = policy_step.PolicyStep(np.int32(0))
for k in range(self._experience_count):
self._replay_buffer.add_batch(nest_utils.batch_nested_array(trajectory.from_transition(time_steps[k],
dummy_action,
time_steps[k + 1])))
def _generate_replay_buffer(self):
self._create_replay_buffer()
self._fill_replay_buffer()
def testEmptyBuffer(self):
self._create_replay_buffer()
ds = self._replay_buffer.as_dataset(prioritized_buffer_beta=0.4, sample_batch_size=1)
if tf.executing_eagerly():
itr = iter(ds)
mini_batch, indices, weights = next(itr)
else:
get_next = tf.compat.v1.data.make_one_shot_iterator(ds).get_next()
res, indices, weights = self.evaluate(get_next)
        # make sure the priorities are all set to 0 since the buffer is empty
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
def testEmptyBufferBatchSize(self):
self._create_replay_buffer()
ds = self._replay_buffer.as_dataset(sample_batch_size=2)
if tf.executing_eagerly():
next(iter(ds))
else:
get_next = tf.compat.v1.data.make_one_shot_iterator(ds).get_next()
self.evaluate(get_next)
        # make sure the priorities are all set to 0 since the buffer is empty
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
def validate_data(self, mini_batch, indices, weights=None):
for idx, observation in enumerate(mini_batch.observation):
observation_0 = observation[0][0]
obs_index = indices[idx]
self.assertAllEqual(observation_0, obs_index)
if weights is not None:
obs_weight = weights[idx]
self.assertAllEqual(obs_weight, 1.0)
def testReplayBufferFullDataset(self):
np.random.seed(12345)
buffer_size = 10
self._create_replay_buffer(buffer_size)
num_experiences = 10
self._fill_replay_buffer(num_experiences)
        # entries holding experiences should have priority 1.0; the rest remain 0
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
for i in range(num_experiences):
if i >= self._capacity:
break
expected_priority[i] = 1.0
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
sample_batch_size = 1
ds = self._replay_buffer.as_dataset(prioritized_buffer_beta=0.4, sample_batch_size=sample_batch_size)
itr = iter(ds)
mini_batch, indices, weights = next(itr)
self.validate_data(mini_batch, indices, weights)
sample_frequency = [0 for _ in range(10)]
for i in range(10000):
mini_batch, indices, weights = next(itr)
for idx in indices:
sample_frequency[idx] += 1
if i % 100 == 0:
self.validate_data(mini_batch, indices, weights)
for i in range(10):
self.assertAlmostEqual(10000 / 10, sample_frequency[i], delta=100)
def testReplayBufferFullDatasetPrefetch(self):
np.random.seed(12345)
buffer_size = 10
self._create_replay_buffer(buffer_size)
num_experiences = 10
self._fill_replay_buffer(num_experiences)
        # entries holding experiences should have priority 1.0; the rest remain 0
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
for i in range(num_experiences):
if i >= self._capacity:
break
expected_priority[i] = 1.0
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
sample_batch_size = 1
prefetch_size = 10
ds = self._replay_buffer.as_dataset(prioritized_buffer_beta=0.4, sample_batch_size=sample_batch_size).\
prefetch(prefetch_size)
itr = iter(ds)
mini_batch, indices, weights = next(itr)
self.validate_data(mini_batch, indices, weights)
sample_frequency = [0 for _ in range(10)]
for i in range(10000):
mini_batch, indices, weights = next(itr)
for idx in indices:
sample_frequency[idx] += 1
if i % 100 == 0:
self.validate_data(mini_batch, indices, weights)
for i in range(10):
self.assertAlmostEqual(10000 / 10, sample_frequency[i], delta=100)
def testReplayBufferFull(self):
np.random.seed(12345)
buffer_size = 10
self._create_replay_buffer(buffer_size)
num_experiences = 10
self._fill_replay_buffer(num_experiences)
        # entries holding experiences should have priority 1.0; the rest remain 0
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
for i in range(num_experiences):
if i >= self._capacity:
break
expected_priority[i] = 1.0
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
sample_batch_size = 1
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
self.validate_data(mini_batch, indices, weights)
sample_frequency = [0 for _ in range(10)]
for i in range(10000):
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
if i % 100 == 0:
self.validate_data(mini_batch, indices, weights)
for idx in indices:
sample_frequency[idx] += 1
for i in range(10):
self.assertAlmostEqual(10000 / 10, sample_frequency[i], delta=100)
def testReplayBufferNotFull(self):
np.random.seed(12345)
buffer_size = 20
self._create_replay_buffer(buffer_size)
num_experiences = 10
self._fill_replay_buffer(num_experiences)
        # entries holding experiences should have priority 1.0; the rest remain 0
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
for i in range(num_experiences):
if i >= self._capacity:
break
expected_priority[i] = 1.0
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
sample_batch_size = 1
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
self.validate_data(mini_batch, indices, weights)
sample_frequency = [0 for _ in range(10)]
for i in range(10000):
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
if i % 100 == 0:
self.validate_data(mini_batch, indices, weights)
for idx in indices:
sample_frequency[idx] += 1
for i in range(10):
self.assertAlmostEqual(10000 / 10, sample_frequency[i], delta=100)
def testReplayBufferBatchSize(self):
np.random.seed(12345)
buffer_size = 20
self._create_replay_buffer(buffer_size)
num_experiences = 10
self._fill_replay_buffer(num_experiences)
        # entries holding experiences should have priority 1.0; the rest remain 0
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
for i in range(num_experiences):
if i >= self._capacity:
break
expected_priority[i] = 1.0
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
sample_batch_size = 10
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
self.validate_data(mini_batch, indices, weights)
sample_frequency = [0 for _ in range(10)]
for i in range(1000):
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
if i % 100 == 0:
self.validate_data(mini_batch, indices, weights)
for idx in indices:
sample_frequency[idx] += 1
for i in range(10):
self.assertAlmostEqual(10000 / 10, sample_frequency[i], delta=100)
def testPrioritizedReplayBuffer(self):
np.random.seed(12345)
self._create_replay_buffer()
        # fill the replay buffer with 10 experiences whose observations range from 0 to 9
num_experiences = 10
self._fill_replay_buffer(num_experiences)
        # entries holding experiences should have priority 1.0; the rest remain 0
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
for i in range(num_experiences):
if i >= self._capacity:
break
expected_priority[i] = 1.0
        # give indices larger than 5 a priority equal to their index
        # and indices smaller than or equal to 5 a priority close to 0
indices = [i for i in range(10)]
priorities = [i if i > 5 else i / 10 for i in range(10)]
self._replay_buffer.update_prioritized_buffer_priorities(indices, priorities)
sample_frequency = [0 for _ in range(10)]
for i in range(1000):
sample_batch_size = 10
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
if i % 100 == 0:
self.validate_data(mini_batch, indices)
for idx in indices:
sample_frequency[idx] += 1
for i in range(10):
if i <= 5:
                # numbers smaller than or equal to 5 should each be picked less than 5% of the time
self.assertLessEqual(sample_frequency[i], 10000 * 5 / 100)
else:
# all numbers larger than 5 should be picked between 15% and 25% of the time
self.assertGreaterEqual(sample_frequency[i], 10000 * 15 / 100)
self.assertLessEqual(sample_frequency[i], 10000 * 25 / 100)
                # each number larger than 5 should be selected more often than the number that
                # precedes it and less often than the number that follows it
self.assertGreaterEqual(sample_frequency[i], sample_frequency[i-1])
if i < 9:
self.assertLessEqual(sample_frequency[i], sample_frequency[i+1])
        # give indices larger than or equal to 5 a priority close to 0
        # and indices smaller than 5 a priority of their index + 5
indices = [i for i in range(10)]
priorities = [i/10 if i >= 5 else i + 5 for i in range(10)]
self._replay_buffer.update_prioritized_buffer_priorities(indices, priorities)
sample_frequency = [0 for _ in range(10)]
for i in range(1000):
sample_batch_size = 10
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
if i % 100 == 0:
self.validate_data(mini_batch, indices)
for idx in indices:
sample_frequency[idx] += 1
for i in range(10):
if i >= 5:
                # numbers larger than or equal to 5 should each be picked less than 5% of the time
self.assertLessEqual(sample_frequency[i], 10000 * 5 / 100)
else:
                # all numbers smaller than 5 should be picked between 12% and 20% of the time
self.assertGreaterEqual(sample_frequency[i], 10000 * 12 / 100)
self.assertLessEqual(sample_frequency[i], 10000 * 20 / 100)
                # each number smaller than 5 should be selected more often than the number that
                # precedes it and less often than the number that follows it
self.assertGreaterEqual(sample_frequency[i], sample_frequency[i - 1])
if i < 4:
self.assertLessEqual(sample_frequency[i], sample_frequency[i + 1])
def testPrioritizedReplayBufferFull(self):
np.random.seed(12345)
capacity = 10
self._create_replay_buffer(capacity)
        # fill the replay buffer with 20 experiences whose observations range from 0 to 19; only
        # values 10 to 19 will remain in the buffer because its capacity is 10
num_experiences = 20
self._fill_replay_buffer(num_experiences)
        # make sure the priorities of the stored experiences are set to 1.0
expected_priority = np.zeros((self._capacity,), dtype=np.float32)
for i in range(num_experiences):
if i >= self._capacity:
break
expected_priority[i] = 1.0
buffer_priority = self._replay_buffer._prioritized_buffer_priorities
self.assertAllEqual(expected_priority, buffer_priority)
        # give buffer indices larger than 5 (observations above 15) a priority equal to their index
        # and indices smaller than or equal to 5 a priority close to 0
indices = [i for i in range(10)]
priorities = [i if i > 5 else i / 10 for i in range(10)]
self._replay_buffer.update_prioritized_buffer_priorities(indices, priorities)
sample_frequency = [0 for _ in range(10)]
for i in range(1000):
sample_batch_size = 10
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
if i % 100 == 0:
self.validate_data(mini_batch, indices+10)
for idx in indices:
sample_frequency[idx] += 1
for i in range(10):
if i <= 5:
                # numbers smaller than or equal to 5 should each be picked less than 5% of the time
self.assertLessEqual(sample_frequency[i], 10000 * 5 / 100)
else:
# all numbers larger than 5 should be picked between 15% and 25% of the time
self.assertGreaterEqual(sample_frequency[i], 10000 * 15 / 100)
self.assertLessEqual(sample_frequency[i], 10000 * 25 / 100)
                # each number larger than 5 should be selected more often than the number that
                # precedes it and less often than the number that follows it
self.assertGreaterEqual(sample_frequency[i], sample_frequency[i-1])
if i < 9:
self.assertLessEqual(sample_frequency[i], sample_frequency[i+1])
        # give indices larger than or equal to 5 a priority close to 0
        # and indices smaller than 5 a priority of their index + 5
indices = [i for i in range(10)]
priorities = [i/10 if i >= 5 else i + 5 for i in range(10)]
self._replay_buffer.update_prioritized_buffer_priorities(indices, priorities)
sample_frequency = [0 for _ in range(10)]
for i in range(1000):
sample_batch_size = 10
mini_batch, indices, weights = self._replay_buffer.get_next(prioritized_buffer_beta=0.4,
sample_batch_size=sample_batch_size)
if i % 100 == 0:
self.validate_data(mini_batch, indices + 10)
for idx in indices:
sample_frequency[idx] += 1
for i in range(10):
if i >= 5:
                # numbers larger than or equal to 5 should each be picked less than 5% of the time
self.assertLessEqual(sample_frequency[i], 10000 * 5 / 100)
else:
                # all numbers smaller than 5 should be picked between 12% and 20% of the time
self.assertGreaterEqual(sample_frequency[i], 10000 * 12 / 100)
self.assertLessEqual(sample_frequency[i], 10000 * 20 / 100)
                # each number smaller than 5 should be selected more often than the number that
                # precedes it and less often than the number that follows it
self.assertGreaterEqual(sample_frequency[i], sample_frequency[i - 1])
if i < 4:
self.assertLessEqual(sample_frequency[i], sample_frequency[i + 1])
| 43.177106 | 119 | 0.617428 | 2,470 | 19,991 | 4.764372 | 0.090283 | 0.054045 | 0.044613 | 0.030846 | 0.794528 | 0.779062 | 0.774728 | 0.76793 | 0.76793 | 0.764616 | 0 | 0.04287 | 0.31039 | 19,991 | 462 | 120 | 43.270563 | 0.81075 | 0.13396 | 0 | 0.771875 | 0 | 0 | 0.003243 | 0 | 0 | 0 | 0 | 0 | 0.1125 | 1 | 0.040625 | false | 0 | 0.034375 | 0 | 0.078125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
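The tests above repeatedly assert that high-priority transitions are sampled proportionally more often and that importance weights come back bounded. A hypothetical standalone sketch of that proportional prioritized sampling rule — not the rl_bakery implementation — where P(i) = p_i**alpha / sum_k p_k**alpha and the importance-sampling weights w_i = (N * P(i))**-beta are normalized by their maximum:

```python
import numpy as np


def sample_prioritized(priorities, batch_size, alpha=0.6, beta=0.4, rng=None):
    """Draw indices proportionally to priority**alpha; return normalized IS weights."""
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(priorities, dtype=np.float64) ** alpha
    probs = scaled / scaled.sum()
    # Sample with replacement according to the scaled priority distribution.
    indices = rng.choice(len(probs), size=batch_size, p=probs)
    # Importance-sampling correction, normalized so the largest weight is 1.
    weights = (len(probs) * probs[indices]) ** -beta
    return indices, weights / weights.max()


idx, w = sample_prioritized([0.1, 0.1, 5.0, 9.0], batch_size=1000)
```

With a spread of priorities like the one above, the index with priority 9.0 dominates the sample counts, mirroring the frequency bounds the test suite checks.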
c4e7b4f664b37b252684009608b98999f59f2d90 | 91 | py | Python | src/manifestoo/__main__.py | article714/manifestoo | 8a99bd1a11e2356b2383557c659045cee4aedf2c | [
"MIT"
] | 7 | 2021-04-30T22:34:33.000Z | 2022-03-21T23:11:55.000Z | src/manifestoo/__main__.py | article714/manifestoo | 8a99bd1a11e2356b2383557c659045cee4aedf2c | [
"MIT"
] | 19 | 2021-05-24T12:13:28.000Z | 2022-03-29T14:30:07.000Z | src/manifestoo/__main__.py | article714/manifestoo | 8a99bd1a11e2356b2383557c659045cee4aedf2c | [
"MIT"
] | 6 | 2021-05-26T08:38:41.000Z | 2022-02-24T16:43:20.000Z | from .main import app # pragma: no cover
app(prog_name="manifestoo") # pragma: no cover
| 22.75 | 47 | 0.714286 | 14 | 91 | 4.571429 | 0.714286 | 0.25 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175824 | 91 | 3 | 48 | 30.333333 | 0.853333 | 0.362637 | 0 | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f20e102611f6c51b0a3ec23446ee5c41d4700763 | 150 | py | Python | guillotina_cms/blocks/types.py | alteroo/guillotina_cms | a8ea0efd2ad4f4ab9fab484fe55f41abd37cdac8 | [
"BSD-2-Clause"
] | 5 | 2018-11-11T07:19:06.000Z | 2020-01-18T11:04:15.000Z | guillotina_cms/blocks/types.py | alteroo/guillotina_cms | a8ea0efd2ad4f4ab9fab484fe55f41abd37cdac8 | [
"BSD-2-Clause"
] | 4 | 2018-09-20T14:44:17.000Z | 2018-10-23T12:16:45.000Z | guillotina_cms/blocks/types.py | alteroo/guillotina_cms | a8ea0efd2ad4f4ab9fab484fe55f41abd37cdac8 | [
"BSD-2-Clause"
] | 2 | 2019-06-14T10:42:22.000Z | 2020-05-09T13:09:09.000Z |
from guillotina_cms.interfaces import IBlockType
from zope.interface import implementer
@implementer(IBlockType)
class BlockType(object):
pass
| 16.666667 | 48 | 0.82 | 17 | 150 | 7.176471 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126667 | 150 | 8 | 49 | 18.75 | 0.931298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
48027404ca600e4fae6f49a394675c345cf2234c | 124 | py | Python | command/test/integration/fake_repository/commit_016/a.py | skylerberg/pyre-check | e7967e5ee65dd09608f162cdb36a5b0919aeb5e3 | [
"MIT"
] | 5 | 2019-02-14T19:46:47.000Z | 2020-01-16T05:48:45.000Z | command/test/integration/fake_repository/commit_016/a.py | skylerberg/pyre-check | e7967e5ee65dd09608f162cdb36a5b0919aeb5e3 | [
"MIT"
] | 4 | 2022-02-15T02:42:33.000Z | 2022-02-28T01:30:07.000Z | command/test/integration/fake_repository/commit_016/a.py | skylerberg/pyre-check | e7967e5ee65dd09608f162cdb36a5b0919aeb5e3 | [
"MIT"
] | 2 | 2019-02-14T19:46:23.000Z | 2020-07-13T03:53:04.000Z | # Test: Add pyre-strict (change mode)
#!/usr/bin/env python3
from typing import Any
def foo(x: Any) -> str:
return x
| 13.777778 | 37 | 0.66129 | 21 | 124 | 3.904762 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.209677 | 124 | 8 | 38 | 15.5 | 0.826531 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
483315466a54a07b3077e549bffd5ee28bb51d0e | 92 | py | Python | application/tasks/main.py | ActiveChooN/flask-app-template | b0134cdd95bb2e4c074b47db3e2f9ba5184e2ab8 | [
"MIT"
] | null | null | null | application/tasks/main.py | ActiveChooN/flask-app-template | b0134cdd95bb2e4c074b47db3e2f9ba5184e2ab8 | [
"MIT"
] | null | null | null | application/tasks/main.py | ActiveChooN/flask-app-template | b0134cdd95bb2e4c074b47db3e2f9ba5184e2ab8 | [
"MIT"
] | null | null | null | from application import celery
@celery.task
def simple_task():
return "Hello, world!"
| 13.142857 | 30 | 0.728261 | 12 | 92 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 92 | 6 | 31 | 15.333333 | 0.868421 | 0 | 0 | 0 | 0 | 0 | 0.141304 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
485766fcf268636d91ea6235e2969a9cbe1677b0 | 1,268 | py | Python | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/atompub/resource.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | 1 | 2017-03-28T06:41:51.000Z | 2017-03-28T06:41:51.000Z | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/atompub/resource.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | null | null | null | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/atompub/resource.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | 1 | 2016-12-13T21:08:58.000Z | 2016-12-13T21:08:58.000Z | # -*- coding: utf-8 -*-
import time
from amplee.atompub.member import MemberResource
from amplee.error import ResourceOperationException
from amplee.utils import extract_url_trail
__all__ = ['ResourceWrapper', 'ProfileResource']
class ResourceWrapper(MemberResource):
def generate_resource_id(self, entry=None, slug=None, info=None):
if slug:
return slug.replace(' ','_').decode('utf-8')
else:
# if not then we use the last segment of the
# link as the id of the resource in the storage
links = entry.xml_xpath('/atom:entry/atom:link[@rel="edit"]')
if links:
return extract_url_trail(links[0].href)
# fallback
return str(time.time())
class ProfileResource(MemberResource):
def generate_resource_id(self, entry=None, slug=None, info=None):
if slug:
return slug.replace(' ','_').decode('utf-8')
else:
# if not then we use the last segment of the
# link as the id of the resource in the storage
links = entry.xml_xpath('/atom:entry/atom:link[@rel="edit"]')
if links:
return extract_url_trail(links[0].href)
# fallback
return str(time.time())
| 35.222222 | 73 | 0.620662 | 160 | 1,268 | 4.80625 | 0.35 | 0.026008 | 0.058518 | 0.085826 | 0.715215 | 0.715215 | 0.715215 | 0.715215 | 0.715215 | 0.715215 | 0 | 0.005411 | 0.271293 | 1,268 | 35 | 74 | 36.228571 | 0.82684 | 0.171136 | 0 | 0.695652 | 0 | 0 | 0.107383 | 0.065197 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.173913 | 0 | 0.608696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
6f85060a0199b9f3b52f24de20cbb60d714c28e2 | 4,330 | py | Python | tests/roles/werewolf/wolf_test.py | TylerYep/wolfbot | 8d4786ce9542bab344b227e0571bb24bc354298d | [
"MIT"
] | 3 | 2018-06-16T00:03:30.000Z | 2021-12-26T20:48:45.000Z | tests/roles/werewolf/wolf_test.py | TylerYep/wolfbot | 8d4786ce9542bab344b227e0571bb24bc354298d | [
"MIT"
] | null | null | null | tests/roles/werewolf/wolf_test.py | TylerYep/wolfbot | 8d4786ce9542bab344b227e0571bb24bc354298d | [
"MIT"
] | 2 | 2021-03-03T09:31:35.000Z | 2021-03-03T10:02:55.000Z | from tests.conftest import set_roles
from wolfbot import const
from wolfbot.enums import Role
from wolfbot.roles import Wolf
from wolfbot.statements import KnowledgeBase
class TestWolf:
"""Tests for the Wolf player class."""
@staticmethod
def test_awake_init_medium(medium_game_roles: tuple[Role, ...]) -> None:
"""
Should initialize a Wolf. Note that the player_index of the Wolf is
not necessarily the index where the true Wolf is located.
"""
set_roles(Role.WOLF, *medium_game_roles[1:])
player_index = 2
wolf = Wolf.awake_init(player_index, list(const.ROLES))
assert wolf.wolf_indices == (0, 2)
assert wolf.center_index is None
assert wolf.center_role is None
@staticmethod
def test_awake_init_large(large_game_roles: tuple[Role, ...]) -> None:
"""
Should initialize a Wolf. Note that the player_index of the Wolf is
not necessarily the index where the true Wolf is located.
"""
player_index = 7
wolf = Wolf.awake_init(player_index, list(const.ROLES))
assert wolf.wolf_indices == (0, 7)
assert wolf.center_index is None
assert wolf.center_role is None
@staticmethod
def test_awake_init_center(large_game_roles: tuple[Role, ...]) -> None:
"""
Should initialize a Center Wolf. Note that the player_index of the Wolf is
not necessarily the index where the true Wolf is located.
"""
set_roles(Role.VILLAGER, *large_game_roles[1:])
player_index = 7
wolf = Wolf.awake_init(player_index, list(const.ROLES))
assert wolf.wolf_indices == (7,)
assert wolf.center_index == 13
assert wolf.center_role is Role.INSOMNIAC
@staticmethod
def test_get_random_statement_medium(
medium_game_roles: tuple[Role, ...], medium_knowledge_base: KnowledgeBase
) -> None:
"""Execute initialization actions and return the possible statements."""
player_index = 4
wolf = Wolf(player_index, (1, player_index))
wolf.analyze(medium_knowledge_base)
_ = wolf.get_statement(medium_knowledge_base)
assert len(wolf.statements) == 61
@staticmethod
def test_get_reg_wolf_statement_medium(
medium_game_roles: tuple[Role, ...], medium_knowledge_base: KnowledgeBase
) -> None:
"""Execute initialization actions and return the possible statements."""
const.USE_REG_WOLF = True
player_index = 4
wolf = Wolf(player_index, (1, player_index))
wolf.analyze(medium_knowledge_base)
_ = wolf.get_statement(medium_knowledge_base)
assert len(wolf.statements) == 7
@staticmethod
def test_get_center_statement_medium(
medium_game_roles: tuple[Role, ...], medium_knowledge_base: KnowledgeBase
) -> None:
"""Execute initialization actions and return the possible statements."""
const.USE_REG_WOLF = True
player_index = 2
wolf = Wolf(player_index, (1, player_index), 5, Role.ROBBER)
wolf.analyze(medium_knowledge_base)
_ = wolf.get_statement(medium_knowledge_base)
assert len(wolf.statements) == 4
@staticmethod
def test_get_random_statement_large(
large_game_roles: tuple[Role, ...], large_knowledge_base: KnowledgeBase
) -> None:
"""Execute initialization actions and return the possible statements."""
player_index = 4
wolf = Wolf(player_index, (1, player_index))
wolf.analyze(large_knowledge_base)
_ = wolf.get_statement(large_knowledge_base)
assert len(wolf.statements) == 615
@staticmethod
def test_get_reg_wolf_statement_large(
large_game_roles: tuple[Role, ...], large_knowledge_base: KnowledgeBase
) -> None:
"""Execute initialization actions and return the possible statements."""
const.USE_REG_WOLF = True
player_index = 4
wolf = Wolf(player_index, (1, player_index))
wolf.analyze(large_knowledge_base)
_ = wolf.get_statement(large_knowledge_base)
assert len(wolf.statements) == 74
# @staticmethod
# def test_eval_fn() -> None:
# """Should return the value from the chosen statement action."""
# pass
| 34.365079 | 82 | 0.666282 | 535 | 4,330 | 5.134579 | 0.147664 | 0.096105 | 0.06225 | 0.052421 | 0.867492 | 0.828904 | 0.80233 | 0.769931 | 0.769931 | 0.752093 | 0 | 0.009798 | 0.245727 | 4,330 | 125 | 83 | 34.64 | 0.831292 | 0.201386 | 0 | 0.649351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.103896 | false | 0 | 0.064935 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f939e07bc1c7c941c70d593f519002a60483ca5 | 30 | py | Python | tests/test_sample.py | cosmicprop/EDGE | 07cc6bf051297c9239824a05552b5a1765ad4030 | [
"BSD-3-Clause"
] | null | null | null | tests/test_sample.py | cosmicprop/EDGE | 07cc6bf051297c9239824a05552b5a1765ad4030 | [
"BSD-3-Clause"
] | null | null | null | tests/test_sample.py | cosmicprop/EDGE | 07cc6bf051297c9239824a05552b5a1765ad4030 | [
"BSD-3-Clause"
] | null | null | null | import edge
# Run EDGE tests
| 7.5 | 16 | 0.733333 | 5 | 30 | 4.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 30 | 3 | 17 | 10 | 0.956522 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d201fa221c48a45ad03721c230717b02474085c4 | 188 | py | Python | pizzeriaproj/pizzeria/admin.py | generocha/pizzeria | 9076d45e3ffc01ba93a7f6854db39b1005de090e | [
"MIT"
] | null | null | null | pizzeriaproj/pizzeria/admin.py | generocha/pizzeria | 9076d45e3ffc01ba93a7f6854db39b1005de090e | [
"MIT"
] | null | null | null | pizzeriaproj/pizzeria/admin.py | generocha/pizzeria | 9076d45e3ffc01ba93a7f6854db39b1005de090e | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import PizzaType, Pizza
# Register your models here.
admin.site.register(PizzaType)
admin.site.register(Pizza)
| 20.888889 | 36 | 0.797872 | 26 | 188 | 5.769231 | 0.461538 | 0.173333 | 0.226667 | 0.306667 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12234 | 188 | 8 | 37 | 23.5 | 0.909091 | 0.281915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d2105a34186ff143dbca67bd8680b8c8ba8ed4e5 | 95 | py | Python | backend/helper/__init__.py | Integration-Continue-TP/societe-pieces-auto | 80b7efa472531553026c6393c422e899e17c857c | [
"MIT"
] | null | null | null | backend/helper/__init__.py | Integration-Continue-TP/societe-pieces-auto | 80b7efa472531553026c6393c422e899e17c857c | [
"MIT"
] | null | null | null | backend/helper/__init__.py | Integration-Continue-TP/societe-pieces-auto | 80b7efa472531553026c6393c422e899e17c857c | [
"MIT"
] | null | null | null | from flask import Blueprint
helper = Blueprint('helper', __name__)
from .DateHelper import *
| 15.833333 | 38 | 0.768421 | 11 | 95 | 6.272727 | 0.636364 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147368 | 95 | 5 | 39 | 19 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0.063158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
d2595899e553e504f4437aa174066afdb09fe2ca | 5,816 | py | Python | projects/PointsColletion/pointscollection/loss.py | li-haoran/detectron2 | 84aebaaed19b07cce9dfd579f98b09ad4ed22e90 | [
"Apache-2.0"
] | null | null | null | projects/PointsColletion/pointscollection/loss.py | li-haoran/detectron2 | 84aebaaed19b07cce9dfd579f98b09ad4ed22e90 | [
"Apache-2.0"
] | null | null | null | projects/PointsColletion/pointscollection/loss.py | li-haoran/detectron2 | 84aebaaed19b07cce9dfd579f98b09ad4ed22e90 | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from pointscollection.layers.emd import emd_function
def chamfer_loss(pred_points,gt_points):
p2pdistance=torch.sum(torch.abs((pred_points-gt_points)),dim=2)
dist1,_=torch.min(p2pdistance,dim=1)
dist2,_=torch.min(p2pdistance,dim=2)
dist1=dist1.mean(-1)
dist2=dist2.mean(-1)
dist=(dist1+dist2)/2.0
return torch.mean(dist)
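# A quick NumPy sanity check of the layout chamfer_loss relies on
# (hypothetical shapes, illustration only: pred_points and gt_points
# broadcast to a (B, N, M) distance grid before the two minima):

```python
import numpy as np

def chamfer_l1(pred, gt):
    """Symmetric L1 chamfer distance, mirroring chamfer_loss above.

    pred: (B, N, 2, 1) predicted points, gt: (B, 1, 2, M) ground truth.
    Sum |diff| over the coordinate axis, take the min over each point
    set, then average the two directions.
    """
    p2p = np.abs(pred - gt).sum(axis=2)    # (B, N, M) pairwise L1 distances
    dist1 = p2p.min(axis=1).mean(axis=-1)  # nearest predicted point per gt
    dist2 = p2p.min(axis=2).mean(axis=-1)  # nearest gt point per prediction
    return float(((dist1 + dist2) / 2.0).mean())

pred = np.zeros((1, 3, 2, 1))     # three predicted points at the origin
gt = np.zeros((1, 1, 2, 2))
gt[0, 0, :, 1] = [1.0, 1.0]       # second gt point at (1, 1)
print(chamfer_l1(pred, gt))       # ((0 + 2) / 2 + 0) / 2 = 0.5
```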
def normlize_chamfer_loss(pred_points,gt_points,max_side=32):
eps=10e-5
with torch.no_grad():
pred_points_copy=pred_points.detach()
gt_points_copy=gt_points.detach()
p_mean=torch.mean(pred_points_copy,dim=1,keepdim=True)
g_mean=torch.mean(gt_points_copy,dim=3,keepdim=True)
p_align=pred_points_copy-p_mean
g_align=gt_points_copy-g_mean
## this is max side alignment
p_norm=torch.abs(p_align)
p_norm,_=torch.max(p_norm,dim=1,keepdim=True)
g_norm=torch.abs(g_align)
g_norm,_=torch.max(g_norm,dim=3,keepdim=True)
p_norm=torch.clamp(p_norm, min=eps,max=max_side)
g_norm=torch.clamp(g_norm,min=eps,max=max_side)
p_align_new=p_align*g_norm/p_norm
distance=torch.sum((p_align_new-g_align)**2,dim=2,keepdim=True)
_,min_index_gt=torch.min(distance,dim=3,keepdim=True)
_,min_index_pt=torch.min(distance,dim=1,keepdim=True)
rep_min_index_gt=min_index_gt.repeat(1,1,2,1)
rep_min_index_pt=min_index_pt.repeat(1,1,2,1)
tran_gt_points=torch.transpose(gt_points,1,3)
gather_gt_points=torch.gather(tran_gt_points,1,rep_min_index_gt)
tran_pt_points=torch.transpose(pred_points,1,3)
gather_pt_points=torch.gather(tran_pt_points,3,rep_min_index_pt)
# dist1=torch.sum((pred_points-gather_gt_points)**2,dim=2).squeeze()
# dist1=F._smooth_l1_loss(pred_points,gather_gt_points).squeeze(3)
# dist2=torch.sum((gather_pt_points-gt_points)**2,dim=2).squeeze()
# dist2=F._smooth_l1_loss(gather_pt_points,gt_points).squeeze(1)
dist1=torch.abs(pred_points-gather_gt_points).squeeze()
dist2=torch.abs(gather_pt_points-gt_points).squeeze()
dist1=dist1.mean(-2)
dist2=dist2.mean(-1)
dist=(dist1+dist2)/2.0
return torch.mean(dist)
def outlier_loss(pred_points,gt_points,contour_size=81):
npoints=gt_points.size(3)
inner_size=npoints-contour_size
contour,inner=torch.split(gt_points,[contour_size,inner_size],dim=3)
dist1=torch.sum((pred_points-contour)**2,dim=2)
    mindist1,_=torch.min(dist1,dim=2)
    dist2=torch.sum((pred_points-inner)**2,dim=2)
    mindist2,_=torch.min(dist2,dim=2)
    # penalize points whose nearest ground-truth neighbor is a contour point:
    # compare the per-point minima (dist1 and dist2 have different last dims,
    # so comparing them directly would fail to broadcast) and use a tensor
    # for the zero branch of torch.where
    penalty=torch.where(mindist1<mindist2,mindist1,torch.zeros_like(mindist1))
    return torch.mean(penalty)
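# A minimal NumPy sketch of the outlier idea: compare each predicted
# point's nearest-contour distance with its nearest-interior distance and
# penalize points that land closer to the contour (hypothetical unbatched
# shapes, illustration only):

```python
import numpy as np

def outlier_penalty_np(pred, contour, inner):
    """Mean squared contour distance over predicted points whose nearest
    ground-truth neighbor is a contour point (a proxy for being outside
    the shape). pred: (N, 2), contour: (C, 2), inner: (I, 2)."""
    d_c = ((pred[:, None] - contour[None]) ** 2).sum(-1).min(1)  # (N,)
    d_i = ((pred[:, None] - inner[None]) ** 2).sum(-1).min(1)    # (N,)
    return float(np.where(d_c < d_i, d_c, 0.0).mean())

contour = np.array([[0.0, 0.0], [2.0, 0.0]])
inner = np.array([[1.0, 0.0]])
pred = np.array([[0.0, 1.0],   # closest to contour (0, 0): penalized, d = 1
                 [1.0, 0.1]])  # closest to inner (1, 0): no penalty
print(outlier_penalty_np(pred, contour, inner))  # (1.0 + 0.0) / 2 = 0.5
```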
def normlize_chamfer_loss_with_outlier_penalty(pred_points,gt_points,contour_size=81,max_side=32):
eps=10e-5
with torch.no_grad():
pred_points_copy=pred_points.detach()
gt_points_copy=gt_points.detach()
p_mean=torch.mean(pred_points_copy,dim=1,keepdim=True)
g_mean=torch.mean(gt_points_copy,dim=3,keepdim=True)
p_align=pred_points_copy-p_mean
g_align=gt_points_copy-g_mean
## this is max side alignment
p_norm=torch.abs(p_align)
p_norm,_=torch.max(p_norm,dim=1,keepdim=True)
g_norm=torch.abs(g_align)
g_norm,_=torch.max(g_norm,dim=3,keepdim=True)
p_norm=torch.clamp(p_norm, min=eps,max=max_side)
g_norm=torch.clamp(g_norm,min=eps,max=max_side)
p_align_new=p_align*g_norm/p_norm
distance=torch.sum((p_align_new-g_align)**2,dim=2,keepdim=True)
_,min_index_gt=torch.min(distance,dim=3,keepdim=True)
_,min_index_pt=torch.min(distance,dim=1,keepdim=True)
rep_min_index_gt=min_index_gt.repeat(1,1,2,1)
rep_min_index_pt=min_index_pt.repeat(1,1,2,1)
outlier_index=(min_index_gt<contour_size).squeeze()
tran_gt_points=torch.transpose(gt_points,1,3)
gather_gt_points=torch.gather(tran_gt_points,1,rep_min_index_gt)
tran_pt_points=torch.transpose(pred_points,1,3)
gather_pt_points=torch.gather(tran_pt_points,3,rep_min_index_pt)
dist1=torch.sum((pred_points-gather_gt_points)**2,dim=2).squeeze()
outlier_penalty=dist1[outlier_index]
dist2=torch.sum((gather_pt_points-gt_points)**2,dim=2).squeeze()
dist1=dist1.mean(-1)
dist2=dist2.mean(-1)
dist=(dist1+dist2)/2.0
return torch.mean(dist),torch.mean(outlier_penalty)
def emd_loss(pred_points,gt_points,eps=0.005,iters=50):
pred_points=pred_points.squeeze(3)
gt_points=gt_points.squeeze(1)
gt_points=gt_points.transpose(1,2)
dist,_=emd_function(pred_points,gt_points,eps,iters)
return torch.mean(dist)
def emd_l1_loss(pred_points,gt_points,eps=0.005,iters=50):
pred_points=pred_points.squeeze(3)
gt_points=gt_points.squeeze(1)
gt_points=gt_points.transpose(1,2)
_,assignment=emd_function(pred_points,gt_points,eps,iters)
assignment=assignment.unsqueeze(2)
assignment=assignment.repeat(1,1,2).long()
gt_points=torch.gather(gt_points,1,assignment)
dist=torch.abs(pred_points-gt_points)
dist=dist.mean(-1)
return torch.mean(dist)
def emd_l1_loss2(pred_points,gt_points,eps=0.005,iters=50):
pred_points=pred_points.squeeze(3)
gt_points=gt_points.squeeze(1)
gt_points=gt_points.transpose(1,2)
_,assignment=emd_function(pred_points,gt_points,eps,iters)
assignment=assignment.unsqueeze(2)
assignment=assignment.repeat(1,1,2).long()
gt_points=torch.gather(gt_points,1,assignment)
dist=torch.abs(pred_points-gt_points)
return dist
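# The two EMD-based L1 losses above reduce to "reorder gt by the returned
# assignment, then take |pred - gt|"; a NumPy sketch of that step
# (hypothetical unbatched shapes; emd_function itself is not reproduced):

```python
import numpy as np

def l1_after_assignment(pred, gt, assignment):
    """Mean L1 distance after matching each predicted point to the
    ground-truth point chosen by `assignment`, as the torch.gather calls
    above do. pred, gt: (N, 2); assignment: (N,) integer indices."""
    matched = gt[assignment]          # reorder gt by the matching
    return float(np.abs(pred - matched).mean())

pred = np.array([[0.0, 0.0], [1.0, 1.0]])
gt = np.array([[1.0, 1.0], [0.0, 0.0]])
assignment = np.array([1, 0])         # swap: each point matches its twin
print(l1_after_assignment(pred, gt, assignment))  # 0.0
```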
| 32.311111 | 98 | 0.721458 | 973 | 5,816 | 4.014388 | 0.085303 | 0.116743 | 0.086022 | 0.059908 | 0.818484 | 0.811316 | 0.759089 | 0.726831 | 0.717358 | 0.717358 | 0 | 0.037755 | 0.148384 | 5,816 | 179 | 99 | 32.49162 | 0.750858 | 0.076513 | 0 | 0.699115 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061947 | false | 0 | 0.035398 | 0 | 0.159292 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d25f6c568e332378f81f43fcf9e93a49a97c289e | 147 | py | Python | bjcpy/all_styles.py | blackelbow/bjcpy | 98d6d93c9160f2c52b0b4cbc15bf78fef4ebee96 | [
"MIT"
] | null | null | null | bjcpy/all_styles.py | blackelbow/bjcpy | 98d6d93c9160f2c52b0b4cbc15bf78fef4ebee96 | [
"MIT"
] | null | null | null | bjcpy/all_styles.py | blackelbow/bjcpy | 98d6d93c9160f2c52b0b4cbc15bf78fef4ebee96 | [
"MIT"
] | 1 | 2020-06-22T23:43:08.000Z | 2020-06-22T23:43:08.000Z | from .style_data import dict_of_styles
def all_styles():
"""Return a list of every BJCP style"""
return list(dict_of_styles.keys())
| 18.375 | 43 | 0.693878 | 23 | 147 | 4.173913 | 0.652174 | 0.125 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204082 | 147 | 7 | 44 | 21 | 0.820513 | 0.22449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
9673f524afe7485b005b5fedcc68e5f7860cd122 | 3,983 | py | Python | my_topo.py | fno2010/pwospf-p4-kerim | 7c8709990d415f326e8c514e17d37cff243db5cc | [
"Apache-2.0"
] | null | null | null | my_topo.py | fno2010/pwospf-p4-kerim | 7c8709990d415f326e8c514e17d37cff243db5cc | [
"Apache-2.0"
] | null | null | null | my_topo.py | fno2010/pwospf-p4-kerim | 7c8709990d415f326e8c514e17d37cff243db5cc | [
"Apache-2.0"
] | null | null | null | from mininet.topo import Topo
class SingleSwitchTopo(Topo):
def __init__(self, n, **opts):
Topo.__init__(self, **opts)
switch = self.addSwitch('s1')
        for i in range(1, n + 1):
host = self.addHost('h%d' % i,
ip="10.0.0.%d" % i,
mac='00:00:00:00:00:%02x' % i)
self.addLink(host, switch, port2=i)
# Takes the number of switches (n) and an optional number of hosts per switch,
# excluding the control-plane host (m, default 1; m is currently accepted but unused)
class LinearTopo(Topo):
def __init__(self, n, m=None, **opts):
Topo.__init__(self, **opts)
if not m:
m = 1
switches = []
        for i in range(1, n + 1):
switch = self.addSwitch('s%d' % i)
host = self.addHost('c%d' % i,
ip="10.0.%d.%d/24" % (i, 1),
# mask='255.255.255.0',
mac='00:00:00:%02x:00:%02x' % (i, 1))
self.addLink(host, switch, port2=1)
host = self.addHost('h%d' % i,
ip="10.0.%d.%d/24" % (i, 2),
mac='00:00:00:%02x:00:%02x' % (i, 2))
self.addLink(host, switch, port2=2)
switches.append(switch)
iface_ports = [3] * n
        for i in range(n - 1):
s1 = switches[i]
s2 = switches[i + 1]
self.addLink(s1, s2, port1=iface_ports[i], port2=iface_ports[i + 1])
iface_ports[i] += 1
iface_ports[i + 1] += 1
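# Pure-Python sketch of the inter-switch port bookkeeping above: ports 1
# and 2 go to the per-switch hosts, so inter-switch links start at port 3
# and each new link bumps the counter on both endpoints:

```python
def linear_ports(n):
    """Return (switch_a, port_a, switch_b, port_b) tuples for a line of
    n switches, mirroring the iface_ports logic in LinearTopo."""
    iface_ports = [3] * n
    links = []
    for i in range(n - 1):
        links.append((i, iface_ports[i], i + 1, iface_ports[i + 1]))
        iface_ports[i] += 1
        iface_ports[i + 1] += 1
    return links

print(linear_ports(3))  # [(0, 3, 1, 3), (1, 4, 2, 3)]
```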
class RingLinearTopo(Topo):
def __init__(self, n, m=None, **opts):
Topo.__init__(self, **opts)
if not m:
m = 1
switches = []
        for i in range(1, n + 1):
switch = self.addSwitch('s%d' % i)
host = self.addHost('c%d' % i,
ip="10.0.%d.%d/24" % (i, 1),
# mask='255.255.255.0',
mac='00:00:00:%02x:00:%02x' % (i, 1))
self.addLink(host, switch, port2=1)
host = self.addHost('h%d' % i,
ip="10.0.%d.%d/24" % (i, 2),
mac='00:00:00:%02x:00:%02x' % (i, 2))
self.addLink(host, switch, port2=2)
switches.append(switch)
iface_ports = [3] * n
        for i in range(n - 1):
s1 = switches[i]
s2 = switches[i + 1]
self.addLink(s1, s2, port1=iface_ports[i], port2=iface_ports[i + 1])
iface_ports[i] += 1
iface_ports[i + 1] += 1
self.addLink(switches[0], switches[-1], port1=4, port2=4)
class RingTopo(Topo):
def __init__(self, n, **opts):
Topo.__init__(self, **opts)
switches = []
        for i in range(1, n + 1):
switch = self.addSwitch('s%d' % i)
host = self.addHost('c%d' % i,
ip="10.0.%d.%d/24" % (i, 1),
# mask='255.255.255.0',
mac='00:00:00:%02x:00:%02x' % (i, 1))
self.addLink(host, switch, port2=1)
host = self.addHost('h%d' % i,
ip="10.0.%d.%d/24" % (i, 2),
mac='00:00:00:%02x:00:%02x' % (i, 2))
self.addLink(host, switch, port2=2)
switches.append(switch)
        for i in range(n):
# self.addLink(switches[(i - 1) % n], switches[i], port2=3)
j = (i + 1) % n
s1 = switches[i]
s2 = switches[j]
# s1_ipv4 = '192.168.%d.%d' % (i, (j << 1) + 1)
# s2_ipv4 = '192.168.%d.%d' % (i, (j << 1) + 2)
self.addLink(s1, s2, port1=3, port2=4)
# self.get('s%d' % i).cmd('ifconfig s%d-eth%d %s netmask 255.255.255.254' % (i, 3, s1_ipv4))
# s2.cmd('ifconfig s%d-eth%d %s netmask 255.255.255.254' % (j, 4, s2_ipv4))
| 37.933333 | 104 | 0.427567 | 540 | 3,983 | 3.068519 | 0.133333 | 0.038624 | 0.032589 | 0.050694 | 0.798431 | 0.750151 | 0.750151 | 0.741702 | 0.723597 | 0.723597 | 0 | 0.118094 | 0.40472 | 3,983 | 104 | 105 | 38.298077 | 0.580768 | 0.12001 | 0 | 0.822785 | 0 | 0 | 0.075536 | 0.036052 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050633 | false | 0 | 0.012658 | 0 | 0.113924 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
96a0b30cb73000c6609979d284417ab871a0e4b9 | 21,267 | py | Python | tests/rest/auth/test_cli.py | jdries/openeo-python-client | 63e70bdb27749ba51553bb3fa46135125d8bc9d9 | [
"Apache-2.0"
] | 1 | 2017-10-13T09:27:46.000Z | 2017-10-13T09:27:46.000Z | tests/rest/auth/test_cli.py | jdries/openeo-python-client | 63e70bdb27749ba51553bb3fa46135125d8bc9d9 | [
"Apache-2.0"
] | null | null | null | tests/rest/auth/test_cli.py | jdries/openeo-python-client | 63e70bdb27749ba51553bb3fa46135125d8bc9d9 | [
"Apache-2.0"
] | null | null | null | import logging
from unittest import mock
import pytest
from openeo.rest.auth import cli
from openeo.rest.auth.cli import CliToolException
from openeo.rest.auth.config import AuthConfig, RefreshTokenStore
from .test_oidc import OidcMock, assert_device_code_poll_sleep
def mock_input(*args: str):
"""Mock user input (one or more responses)"""
return mock.patch("builtins.input", side_effect=list(args))
def mock_secret_input(secret: str):
"""Mocking of user input of password/secret through `getpass`"""
return mock.patch.object(cli, "getpass", side_effect=[secret])
@pytest.fixture(autouse=True)
def auth_config(tmp_openeo_config_home) -> AuthConfig:
"""Make sure we start with emtpy AuthConfig."""
config = AuthConfig(tmp_openeo_config_home)
assert not config.path.exists()
return config
@pytest.fixture(autouse=True)
def refresh_token_store(tmp_openeo_config_home) -> RefreshTokenStore:
store = RefreshTokenStore(tmp_openeo_config_home)
assert not store.path.exists()
return store
def test_paths(capsys):
cli.main(["paths"])
out = capsys.readouterr().out
assert "/auth-config.json" in out
assert "/refresh-tokens.json" in out
def test_config_dump(capsys, auth_config):
auth_config.set_basic_auth("https://oeo.test", "john17", "j0hn123")
cli.main(["config-dump"])
out = capsys.readouterr().out
assert "john17" in out
assert "j0hn123" not in out
assert "<redacted>" in out
def test_config_dump_show_secrets(capsys, auth_config):
auth_config.set_basic_auth("https://oeo.test", "john17", "j0hn123")
cli.main(["config-dump", "--show-secrets"])
out = capsys.readouterr().out
assert "john17" in out
assert "j0hn123" in out
assert "<redacted>" not in out
def test_token_clear_no_file(capsys, refresh_token_store):
assert not refresh_token_store.path.exists()
cli.main(["token-clear"])
out = capsys.readouterr().out
assert "No refresh token file at" in out
def test_token_clear_no(capsys, refresh_token_store):
refresh_token_store.set_refresh_token(issuer="i", client_id="c", refresh_token="r")
assert refresh_token_store.path.exists()
with mock_input("no"):
cli.main(["token-clear"])
out = capsys.readouterr().out
assert "Keeping refresh token file" in out
assert refresh_token_store.path.exists()
def test_token_clear_yes(capsys, refresh_token_store):
refresh_token_store.set_refresh_token(issuer="i", client_id="c", refresh_token="r")
assert refresh_token_store.path.exists()
with mock_input("yes"):
cli.main(["token-clear"])
out = capsys.readouterr().out
assert "Removed refresh token file" in out
assert not refresh_token_store.path.exists()
def test_token_clear_force(capsys, refresh_token_store):
refresh_token_store.set_refresh_token(issuer="i", client_id="c", refresh_token="r")
assert refresh_token_store.path.exists()
cli.main(["token-clear", "--force"])
out = capsys.readouterr().out
assert "Removed refresh token file" in out
assert not refresh_token_store.path.exists()
def test_add_basic_auth(auth_config):
with mock_secret_input("p455w0r6"):
cli.main(["add-basic", "https://oeo.test", "--username", "user49", "--no-try"])
assert auth_config.get_basic_auth("https://oeo.test") == ("user49", "p455w0r6")
def test_add_basic_auth_input_username(auth_config):
with mock_input("user55") as input_mock, mock_secret_input("p455w0r6"):
cli.main(["add-basic", "https://oeo.test", "--no-try"])
assert input_mock.call_count == 1
assert "Enter username" in input_mock.call_args[0][0]
assert auth_config.get_basic_auth("https://oeo.test") == ("user55", "p455w0r6")
def test_add_oidc_simple(auth_config, requests_mock):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={
"providers": [{"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"]}]
})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
client_id, client_secret = "z3-cl13nt", "z3-z3cr3t-y6y6"
with mock_secret_input(client_secret):
cli.main(["add-oidc", "https://oeo.test", "--client-id", client_id])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (client_id, client_secret)
def test_add_oidc_no_secret(auth_config, requests_mock):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={
"providers": [{"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"]}]
})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
client_id = "z3-cl13nt"
cli.main(["add-oidc", "https://oeo.test", "--client-id", client_id, "--no-client-secret"])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (client_id, None)
def test_add_oidc_use_default_client(auth_config, requests_mock, caplog):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={
"providers": [{
"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"],
"default_clients": [{
"id": "d3f6ul7cl13n7",
"grant_types": ["urn:ietf:params:oauth:grant-type:device_code+pkce", "refresh_token"],
}]
}]
})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
cli.main(["add-oidc", "https://oeo.test", "--use-default-client"])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (None, None)
warnings = [r[2] for r in caplog.record_tuples if r[1] == logging.WARN]
assert warnings == []
def test_add_oidc_use_default_client_no_default(auth_config, requests_mock, caplog):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={
"providers": [{
"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"],
}]
})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
cli.main(["add-oidc", "https://oeo.test", "--use-default-client"])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (None, None)
warnings = [r[2] for r in caplog.record_tuples if r[1] == logging.WARN]
assert warnings == ["No default clients declared for provider 'authit'"]
def test_add_oidc_default_client_interactive(auth_config, requests_mock, capsys):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={
"providers": [{
"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"],
"default_clients": [{
"id": "d3f6ul7cl13n7",
"grant_types": ["urn:ietf:params:oauth:grant-type:device_code+pkce", "refresh_token"]
}]
}]
})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
with mock_input("") as input:
cli.main(["add-oidc", "https://oeo.test"])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (None, None)
input.assert_called_with("Enter client_id or leave empty to use default client, and press enter: ")
stdout = capsys.readouterr().out
assert "Using client ID None" in stdout
def test_add_oidc_use_default_client_overwrite(auth_config, requests_mock, caplog):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={
"providers": [{
"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"],
"default_clients": [{
"id": "d3f6ul7cl13n7",
"grant_types": ["urn:ietf:params:oauth:grant-type:device_code+pkce", "refresh_token"]
}]
}]
})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
client_id, client_secret = "z3-cl13nt", "z3-z3cr3t-y6y6"
with mock_secret_input(client_secret):
cli.main(["add-oidc", "https://oeo.test", "--client-id", client_id])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (client_id, client_secret)
cli.main(["add-oidc", "https://oeo.test", "--use-default-client"])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (None, None)
warnings = [r[2] for r in caplog.record_tuples if r[1] == logging.WARN]
assert warnings == []
def test_add_oidc_04(auth_config, requests_mock):
requests_mock.get("https://oeo.test/", json={"api_version": "0.4.0"})
with pytest.raises(CliToolException, match="Backend API version is too low"):
cli.main(["add-oidc", "https://oeo.test"])
def test_add_oidc_multiple_providers(auth_config, requests_mock, capsys):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={"providers": [
{"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"]},
{"id": "youauth", "issuer": "https://youauth.test", "title": "YouAuth", "scopes": ["openid"]}
]})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
requests_mock.get("https://youauth.test/.well-known/openid-configuration", json={"issuer": "https://youauth.test"})
client_id, client_secret = "z3-cl13nt", "z3-z3cr3t-y6y6"
with mock_secret_input(client_secret):
cli.main(["add-oidc", "https://oeo.test", "--provider-id", "youauth", "--client-id", client_id])
assert "youauth" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "youauth") == (client_id, client_secret)
out = capsys.readouterr().out
expected = ["Using provider ID 'youauth'", "Using client ID 'z3-cl13nt'"]
for e in expected:
assert e in out
def test_add_oidc_no_providers(auth_config, requests_mock, capsys):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={"providers": []})
with pytest.raises(CliToolException, match="No OpenID Connect providers listed by backend"):
cli.main(["add-oidc", "https://oeo.test"])
with pytest.raises(CliToolException, match="No OpenID Connect providers listed by backend"):
cli.main(["add-oidc", "https://oeo.test", "--provider-id", "youauth"])
def test_add_oidc_interactive(auth_config, requests_mock, capsys):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={"providers": [
{"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"]},
{"id": "youauth", "issuer": "https://youauth.test", "title": "YouAuth", "scopes": ["openid"]}
]})
requests_mock.get("https://authit.test/.well-known/openid-configuration", json={"issuer": "https://authit.test"})
requests_mock.get("https://youauth.test/.well-known/openid-configuration", json={"issuer": "https://youauth.test"})
client_id, client_secret = "z3-cl13nt", "z3-z3cr3t-y6y6"
with mock_input("1", client_id), mock_secret_input(client_secret):
cli.main(["add-oidc", "https://oeo.test"])
assert "authit" in auth_config.get_oidc_provider_configs("https://oeo.test")
assert auth_config.get_oidc_client_configs("https://oeo.test", "authit") == (client_id, client_secret)
out = capsys.readouterr().out
expected = [
"Backend 'https://oeo.test' has multiple OpenID Connect providers.",
"[1] Auth It", "[2] YouAuth",
"Using provider ID 'authit'",
"Using client ID 'z3-cl13nt'"
]
for e in expected:
assert e in out
def test_oidc_auth_device_flow(auth_config, refresh_token_store, requests_mock, capsys):
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={"providers": [
{"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"]},
{"id": "youauth", "issuer": "https://youauth.test", "title": "YouAuth", "scopes": ["openid"]}
]})
client_id, client_secret = "z3-cl13nt", "z3-z3cr3t-y6y6"
auth_config.set_oidc_client_config("https://oeo.test", "authit", client_id, client_secret)
oidc_mock = OidcMock(
requests_mock=requests_mock,
expected_grant_type="urn:ietf:params:oauth:grant-type:device_code",
expected_client_id=client_id,
provider_root_url="https://authit.test",
oidc_discovery_url="https://authit.test/.well-known/openid-configuration",
expected_fields={"scope": "openid", "client_secret": client_secret},
state={"device_code_callback_timeline": ["great success"]},
scopes_supported=["openid"]
)
with assert_device_code_poll_sleep():
cli.main(["oidc-auth", "https://oeo.test", "--flow", "device"])
assert refresh_token_store.get_refresh_token("https://authit.test", client_id) == oidc_mock.state["refresh_token"]
out = capsys.readouterr().out
expected = [
"Using provider ID 'authit'",
"Using client ID 'z3-cl13nt'",
"To authenticate: visit https://authit.test/dc",
"enter the user code {c!r}".format(c=oidc_mock.state["user_code"]),
"Authorized successfully.",
"The OpenID Connect device flow was successful.",
"Stored refresh token in {p!r}".format(p=str(refresh_token_store.path)),
]
for e in expected:
assert e in out
def test_oidc_auth_device_flow_default_client(auth_config, refresh_token_store, requests_mock, capsys):
"""Test device flow with default client (which uses PKCE instead of secret)."""
default_client_id = "d3f6u17cl13n7"
requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
requests_mock.get("https://oeo.test/credentials/oidc", json={"providers": [
{
"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"],
"default_clients": [{
"id": default_client_id,
"grant_types": ["urn:ietf:params:oauth:grant-type:device_code+pkce", "refresh_token"],
}]
},
{"id": "youauth", "issuer": "https://youauth.test", "title": "YouAuth", "scopes": ["openid"]}
]})
auth_config.set_oidc_client_config("https://oeo.test", "authit", client_id=None, client_secret=None)
oidc_mock = OidcMock(
requests_mock=requests_mock,
expected_grant_type="urn:ietf:params:oauth:grant-type:device_code",
expected_client_id=default_client_id,
provider_root_url="https://authit.test",
oidc_discovery_url="https://authit.test/.well-known/openid-configuration",
expected_fields={"scope": "openid", "code_verifier": True, "code_challenge": True},
state={"device_code_callback_timeline": ["great success"]},
scopes_supported=["openid"]
)
with assert_device_code_poll_sleep():
cli.main(["oidc-auth", "https://oeo.test", "--flow", "device"])
stored_refresh_token = refresh_token_store.get_refresh_token("https://authit.test", default_client_id)
assert stored_refresh_token == oidc_mock.state["refresh_token"]
out = capsys.readouterr().out
expected = [
"Using provider ID 'authit'",
"Will try to use default client.",
"To authenticate: visit https://authit.test/dc",
"enter the user code {c!r}".format(c=oidc_mock.state["user_code"]),
"Authorized successfully.",
"The OpenID Connect device flow was successful.",
"Stored refresh token in {p!r}".format(p=str(refresh_token_store.path)),
]
for e in expected:
assert e in out
def test_oidc_auth_device_flow_no_config_all_defaults(auth_config, refresh_token_store, requests_mock, capsys):
    """Test device flow with default client (which uses PKCE instead of secret)."""
    default_client_id = "d3f6u17cl13n7"
    requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
    requests_mock.get("https://oeo.test/credentials/oidc", json={"providers": [
        {
            "id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"],
            "default_clients": [{
                "id": default_client_id,
                "grant_types": ["urn:ietf:params:oauth:grant-type:device_code+pkce", "refresh_token"],
            }]
        },
        {"id": "youauth", "issuer": "https://youauth.test", "title": "YouAuth", "scopes": ["openid"]}
    ]})
    oidc_mock = OidcMock(
        requests_mock=requests_mock,
        expected_grant_type="urn:ietf:params:oauth:grant-type:device_code",
        expected_client_id=default_client_id,
        provider_root_url="https://authit.test",
        oidc_discovery_url="https://authit.test/.well-known/openid-configuration",
        expected_fields={"scope": "openid", "code_verifier": True, "code_challenge": True},
        state={"device_code_callback_timeline": ["great success"]},
        scopes_supported=["openid"]
    )

    with assert_device_code_poll_sleep():
        cli.main(["oidc-auth", "https://oeo.test", "--flow", "device"])

    stored_refresh_token = refresh_token_store.get_refresh_token("https://authit.test", default_client_id)
    assert stored_refresh_token == oidc_mock.state["refresh_token"]
    out = capsys.readouterr().out
    expected = [
        "Will try to use default provider_id.",
        "Using provider ID None",
        "Will try to use default client.",
        "To authenticate: visit https://authit.test/dc",
        "enter the user code {c!r}".format(c=oidc_mock.state["user_code"]),
        "Authorized successfully.",
        "The OpenID Connect device flow was successful.",
        "Stored refresh token in {p!r}".format(p=str(refresh_token_store.path)),
    ]
    for e in expected:
        assert e in out
    assert auth_config.load() == {}
@pytest.mark.slow
def test_oidc_auth_auth_code_flow(auth_config, refresh_token_store, requests_mock, capsys):
    requests_mock.get("https://oeo.test/", json={"api_version": "1.0.0"})
    requests_mock.get("https://oeo.test/credentials/oidc", json={"providers": [
        {"id": "authit", "issuer": "https://authit.test", "title": "Auth It", "scopes": ["openid"]},
        {"id": "youauth", "issuer": "https://youauth.test", "title": "YouAuth", "scopes": ["openid"]}
    ]})
    client_id, client_secret = "z3-cl13nt", "z3-z3cr3t-y6y6"
    auth_config.set_oidc_client_config("https://oeo.test", "authit", client_id, client_secret)
    auth_config.set_oidc_client_config("https://oeo.test", "youauth", client_id + '-tw00', client_secret + '-tw00')
    oidc_mock = OidcMock(
        requests_mock=requests_mock,
        expected_grant_type="authorization_code",
        expected_client_id=client_id,
        expected_fields={"scope": "openid"},
        provider_root_url="https://authit.test",
        oidc_discovery_url="https://authit.test/.well-known/openid-configuration",
        scopes_supported=["openid"]
    )

    with mock_input("1"), mock.patch.object(cli, "_webbrowser_open", new=oidc_mock.webbrowser_open):
        cli.main(["oidc-auth", "https://oeo.test", "--flow", "auth-code", "--timeout", "10"])

    assert refresh_token_store.get_refresh_token("https://authit.test", client_id) == oidc_mock.state["refresh_token"]
    out = capsys.readouterr().out
    expected = [
        "Using provider ID 'authit'",
        "Using client ID 'z3-cl13nt'",
        "a browser window should open allowing you to log in",
        "and grant access to the client 'z3-cl13nt' (timeout: 10s).",
        "The OpenID Connect authorization code flow was successful.",
        "Stored refresh token in {p!r}".format(p=str(refresh_token_store.path)),
    ]
    for e in expected:
        assert e in out
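The device flow these tests drive is, underneath, a poll loop against the provider's token endpoint: keep asking until the user finishes in the browser. A minimal sketch of that loop in plain Python (the `request_token` callable and its response dicts are hypothetical stand-ins, not the actual client API under test):

```python
import time

def poll_for_token(request_token, interval=0.0, max_attempts=10):
    """Poll a token endpoint until it stops answering 'authorization_pending'."""
    for _ in range(max_attempts):
        response = request_token()
        if response.get("error") == "authorization_pending":
            time.sleep(interval)  # honor the poll interval the provider advertised
            continue
        if "access_token" in response:
            return response
        raise RuntimeError("device flow failed: {!r}".format(response))
    raise TimeoutError("user never completed the device flow")

# Simulated endpoint: pending twice, then success.
answers = iter([
    {"error": "authorization_pending"},
    {"error": "authorization_pending"},
    {"access_token": "at", "refresh_token": "rt"},
])
result = poll_for_token(lambda: next(answers))
```

The `assert_device_code_poll_sleep` helper in the tests above plays the role of the `time.sleep` call here: it checks that the loop really waits between polls.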
96b98e477df50a549fbcc8d02d02a9d8d6604624 | 7,964 | py | Python | ensemble/forest.py | adityajn105/MLfromScratch | ea0758d4039051268d7f3af8799e2b005dbc2ebe | ["MIT"]
"""
Authors : Aditya Jain
Contact : https://adityajain.me
"""
import numpy as np
from ..tree import DecisionTreeClassifier
from ..tree import DecisionTreeRegressor
class RandomForestClassifier():
    """
    A random forest fits a number of decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
    The sub-sample size is always the same as the original input sample size, but the samples are drawn with replacement if bootstrap=True.

    Parameters
    ----------
    n_estimators : integer (Default 50), number of trees in the forest
    max_depth : integer (Default 'inf'), maximum depth allowed for each tree
    min_samples_split : integer (Default 2), the minimum number of samples required to split an internal node
    max_features : ( 'auto', 'sqrt', 'log2', 'max_features' ) ( Default 'auto' )
        The number of features to consider when looking for the best split:
        'auto' is the same as 'sqrt'
        'sqrt' is sqrt(number of features)
        'log2' is log2(number of features)
        'max_features' is all features
    bootstrap : if False, the whole dataset is used to build each tree
    random_state : random seed
    """

    def __init__(self, n_estimators=50, max_depth=None, min_samples_split=2, max_features="auto",
                 bootstrap=True, random_state=None):
        np.random.seed(random_state if random_state is not None else np.random.randint(100))
        self.__n_estimators = n_estimators
        self.__max_depth = float('inf') if max_depth is None else max_depth
        self.__min_samples_split = min_samples_split
        self.__max_features = {
            'auto': lambda x: int(np.sqrt(x)), 'sqrt': lambda x: int(np.sqrt(x)),
            'log2': lambda x: int(np.log2(x)), 'max_features': lambda x: x}[max_features]
        self.__bootstrap = bootstrap
        self.__n_samples = None
        self.__n_features = None
        self.__n_classes = None
        self.__trees = []
    def __bootstrapX(self, X):
        indexes = np.random.choice(np.arange(0, len(X), 1), size=self.__n_samples, replace=self.__bootstrap)
        return X[indexes, :]

    def __get_feature_index(self):
        return np.random.choice(np.arange(0, self.__n_features, 1),
                                size=self.__max_features(self.__n_features), replace=False)

    def fit(self, X, y):
        """
        Fit the X and y to estimators

        Parameters
        ----------
        X : numpy array, independent variables
        y : numpy array, target variable
        """
        self.__n_samples, self.__n_features = X.shape
        self.__n_classes = len(np.unique(y))
        X_y = np.c_[X, y]
        for _ in range(self.__n_estimators):
            dt = DecisionTreeClassifier(max_depth=self.__max_depth,
                                        min_samples_split=self.__min_samples_split,
                                        n_classes=self.__n_classes)
            data = self.__bootstrapX(X_y)
            features = self.__get_feature_index()
            dt.fit(data[:, features], data[:, -1])
            self.__trees.append((dt.tree_, features))
    def predict(self, X):
        """
        Predict labels using all estimators

        Parameters
        ----------
        X : numpy array, independent variables

        Returns
        -------
        predicted labels
        """
        return np.argmax(self.predict_proba(X), axis=1)

    def predict_proba(self, X):
        """
        Predict the probability of each class using all estimators

        Parameters
        ----------
        X : numpy array, independent variables

        Returns
        -------
        probability of each class [ n_samples, n_classes ]
        """
        probs = np.zeros((len(X), self.__n_classes))
        for root, features in self.__trees:
            probs += np.array([self.__predict_row(row, root)[1] for row in X[:, features]])
        return probs / self.__n_estimators

    def __predict_row(self, row, node):
        if row[node['index']] < node['value']:
            if isinstance(node['left'], dict): return self.__predict_row(row, node['left'])
            else: return node['left']
        else:
            if isinstance(node['right'], dict): return self.__predict_row(row, node['right'])
            else: return node['right']

    def score(self, X, y):
        """
        Calculate accuracy from independent variables

        Parameters
        ----------
        X : numpy array, independent variables
        y : numpy array, dependent variable

        Returns
        -------
        accuracy score
        """
        return (y == self.predict(X)).sum() / len(y)
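The classifier above combines two independent ideas, bootstrap resampling and vote aggregation across trees; both can be sketched without any tree code (pure Python, `random` only — the stub `trees` below are hypothetical predict functions, not the dict-based trees used in this module):

```python
import random
from collections import Counter

def bootstrap_sample(rows, rng):
    # Draw len(rows) rows with replacement, mirroring what __bootstrapX does.
    return [rows[rng.randrange(len(rows))] for _ in rows]

def majority_vote(trees, row):
    # Each tree casts one vote; the forest predicts the most common label.
    votes = Counter(tree(row) for tree in trees)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
sample = bootstrap_sample([[0], [1], [2], [3]], rng)
trees = [lambda row: 1, lambda row: 0, lambda row: 1]
prediction = majority_vote(trees, [42])  # 1: two of the three trees vote for class 1
```

Because the sample is drawn with replacement, some rows appear more than once and others not at all, which is what decorrelates the individual trees.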
class RandomForestRegressor():
    """
    A random forest fits a number of decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
    The sub-sample size is always the same as the original input sample size, but the samples are drawn with replacement if bootstrap=True.

    Parameters
    ----------
    n_estimators : integer (Default 10), number of trees in the forest
    criterion : ('mse', 'mae', 'std') ( Default 'mse' )
        The function to measure the quality of a split.
        'mse' is mean squared error
        'mae' is mean absolute error
        'std' is standard deviation
    max_depth : integer (Default 'inf'), maximum depth allowed for each tree
    min_samples_split : integer (Default 2), the minimum number of samples required to split an internal node
    max_features : ( 'auto', 'sqrt', 'log2', 'max_features' ) ( Default 'auto' )
        The number of features to consider when looking for the best split:
        'auto' is the same as 'sqrt'
        'sqrt' is sqrt(number of features)
        'log2' is log2(number of features)
        'max_features' is all features
    bootstrap : if False, the whole dataset is used to build each tree
    random_state : random seed
    """

    def __init__(self, n_estimators=10, criterion='mse', max_depth=None, min_samples_split=2, max_features="auto",
                 bootstrap=True, random_state=None):
        np.random.seed(random_state if random_state is not None else np.random.randint(100))
        self.__n_estimators = n_estimators
        self.__criterion = criterion
        self.__max_depth = float('inf') if max_depth is None else max_depth
        self.__min_samples_split = min_samples_split
        self.__max_features = {
            'auto': lambda x: int(np.sqrt(x)) + 1, 'sqrt': lambda x: int(np.sqrt(x)) + 1,
            'log2': lambda x: int(np.log2(x)) + 1, 'max_features': lambda x: x}[max_features]
        self.__bootstrap = bootstrap
        self.__n_samples = None
        self.__n_features = None
        self.__trees = []
    def __bootstrapX(self, X):
        indexes = np.random.choice(np.arange(0, len(X), 1), size=self.__n_samples, replace=self.__bootstrap)
        return X[indexes, :]

    def __get_feature_index(self):
        return np.random.choice(np.arange(0, self.__n_features, 1),
                                size=self.__max_features(self.__n_features), replace=False)

    def fit(self, X, y):
        """
        Fit the X and y to estimators

        Parameters
        ----------
        X : numpy array, independent variables
        y : numpy array, target variable
        """
        self.__n_samples, self.__n_features = X.shape
        X_y = np.c_[X, y]
        for _ in range(self.__n_estimators):
            dt = DecisionTreeRegressor(criterion=self.__criterion, max_depth=self.__max_depth,
                                       min_samples_split=self.__min_samples_split)
            data = self.__bootstrapX(X_y)
            features = self.__get_feature_index()
            dt.fit(data[:, features], data[:, -1])
            self.__trees.append((dt.tree_, features))

    def predict(self, X):
        """
        Predict values using all estimators

        Parameters
        ----------
        X : numpy array, independent variables

        Returns
        -------
        predicted values
        """
        predictions = np.zeros((len(X)))
        for root, features in self.__trees:
            predictions += np.array([self.__predict_row(row, root) for row in X[:, features]])
        return predictions / self.__n_estimators

    def __predict_row(self, row, node):
        if row[node['index']] < node['value']:
            if isinstance(node['left'], dict): return self.__predict_row(row, node['left'])
            else: return node['left']
        else:
            if isinstance(node['right'], dict): return self.__predict_row(row, node['right'])
            else: return node['right']
    def score(self, X, y):
        """
        Compute the coefficient of determination (r-squared)

        Parameters
        ----------
        X : 2D numpy array, independent variables
        y : numpy array, dependent variable

        Returns
        -------
        r2 value
        """
        y_pred = self.predict(X)
        return 1 - (np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2))
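The regressor's `score` above is the ordinary coefficient of determination; the same computation written out in plain Python (illustrative only):

```python
def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

perfect = r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # 1.0 for a perfect fit
baseline = r_squared([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])  # 0.0 for always predicting the mean
```

A model that always predicts the mean scores 0, and scores can go negative for models worse than that.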
738d82db9d93ede603f083ea3611e75dacabf1ff | 61 | py | Python | calamari_ocr/ocr/dataset/datareader/generated_line_dataset/__init__.py | jacektl/calamari | 980477aefe4e56f7fc373119c1b38649798d8686 | ["Apache-2.0"]
from .params import LineGeneratorParams, TextGeneratorParams
7392ee2a31fc1ab38599d63f5fbfc05bfce7583d | 54 | py | Python | src/clasterization/tasks/__init__.py | mstrechen/advanced-news-scraper | dc54a057eb7c14d0e390b82f6b308f5a924cb966 | ["MIT"]
# flake8: noqa
from .apply_tags import apply_tags_task
73aaf8a43ab7a63495196bac69e3f479fe49cf1c | 12,336 | py | Python | tests/test_run.py | datamaterials/spyns | 68e8412ba003e2d882373db93f322497be7bff93 | ["MIT"]
# -*- coding: utf-8 -*-
from typing import Tuple, Union
import numpy as np
import pymatgen as pmg
import pytest
from spyns.data import StructureParameters, SimulationParameters, SimulationData
from spyns.lattice import Lattice
import spyns
@pytest.fixture()
def two_dimensional_square_lattice() -> pmg.Structure:
    structure_parameters: StructureParameters = StructureParameters(
        abc=(2.0, 2.0, 20.0),
        ang=3 * (90,),
        spacegroup=1,
        species=4 * ["Fe"],
        coordinates=[
            [0.00, 0.00, 0.00],
            [0.50, 0.00, 0.00],
            [0.00, 0.50, 0.00],
            [0.50, 0.50, 0.00],
        ],
    )
    structure: pmg.Structure = spyns.lattice.generate.from_parameters(
        structure_parameters=structure_parameters
    )
    structure = spyns.lattice.generate.label_subspecies(
        structure=structure, subspecies_labels={0: "1", 1: "2", 2: "2", 3: "1"}
    )
    structure = spyns.lattice.generate.make_supercell(
        cell_structure=structure, scaling_factors=(5, 5, 1)
    )

    return structure
@pytest.fixture()
def cubic_lattice() -> pmg.Structure:
    structure_parameters: StructureParameters = StructureParameters(
        abc=(2.0, 2.0, 2.0),
        ang=3 * (90,),
        spacegroup=1,
        species=8 * ["Fe"],
        coordinates=[
            [0.00, 0.00, 0.00],
            [0.50, 0.00, 0.00],
            [0.00, 0.50, 0.00],
            [0.50, 0.50, 0.00],
            [0.00, 0.00, 0.50],
            [0.50, 0.00, 0.50],
            [0.00, 0.50, 0.50],
            [0.50, 0.50, 0.50],
        ],
    )
    structure: pmg.Structure = spyns.lattice.generate.from_parameters(
        structure_parameters=structure_parameters
    )
    structure = spyns.lattice.generate.label_subspecies(
        structure=structure,
        subspecies_labels={
            0: "1",
            1: "2",
            2: "2",
            3: "1",
            4: "2",
            5: "1",
            6: "1",
            7: "2",
        },
    )
    structure = spyns.lattice.generate.make_supercell(
        cell_structure=structure, scaling_factors=(5, 5, 5)
    )

    return structure
@pytest.fixture()
def bcc_lattice() -> pmg.Structure:
    structure_parameters: StructureParameters = StructureParameters(
        abc=(2.0, 2.0, 1.0),
        ang=3 * (90,),
        spacegroup=1,
        species=8 * ["Fe"],
        coordinates=[
            [0.00, 0.00, 0.00],
            [0.50, 0.00, 0.00],
            [0.00, 0.50, 0.00],
            [0.50, 0.50, 0.00],
            [0.25, 0.25, 0.50],
            [0.75, 0.25, 0.50],
            [0.25, 0.75, 0.50],
            [0.75, 0.75, 0.50],
        ],
    )
    structure: pmg.Structure = spyns.lattice.generate.from_parameters(
        structure_parameters=structure_parameters
    )
    structure = spyns.lattice.generate.label_subspecies(
        structure=structure,
        subspecies_labels={
            0: "1",
            1: "2",
            2: "2",
            3: "1",
            4: "3",
            5: "4",
            6: "4",
            7: "3",
        },
    )
    structure = spyns.lattice.generate.make_supercell(
        cell_structure=structure, scaling_factors=(5, 5, 10)
    )

    return structure
@pytest.fixture()
def simulation_parameters_heisenberg_cython() -> SimulationParameters:
    return SimulationParameters(
        seed=np.random.randint(100000),
        mode="heisenberg_cython",
        trace_filepath=None,
        snapshot_filepath=None,
        sweeps=200,
        equilibration_sweeps=100,
        sample_interval=1,
        temperature=1,
    )
# @pytest.mark.parametrize(
#     "r, max_abs_energy, interaction_ij",
#     [
#         (1.2, 2.0, (-1.0, -1.0)),
#         (1.9, 4.0, (-1.0, -1.0, -1.0, -1.0)),
#     ],
# )
# def test_2d_square_ising_ferromagnet_simulation(
#     r: float,
#     max_abs_energy: float,
#     interaction_ij: Union[Tuple[float, float], Tuple[float, float, float, float]],
#     two_dimensional_square_lattice: pmg.Structure,
#     simulation_parameters: SimulationParameters,
# ) -> None:
#     lattice: Lattice = Lattice(structure=two_dimensional_square_lattice, r=r)
#
#     lattice.set_sublattice_pair_interactions(
#         interaction_df=lattice.sublattice_pairs_data_frame.assign(J_ij=interaction_ij)
#     )
#
#     data: SimulationData = spyns.run.simulation(
#         lattice=lattice,
#         parameters=simulation_parameters,
#     )
#
#     energy: float = \
#         data.data_frame["<E**1>"].values[-1] / data.lookup_tables.number_sites
#     magnetization: float = \
#         data.data_frame["<M**1>"].values[-1] / data.lookup_tables.number_sites
#     susceptibility: float = data.data_frame["X"].values[-1]
#     heat_capacity: float = data.data_frame["C"].values[-1]
#     binder_m: float = data.data_frame["Binder_M"].values[-1]
#
#     print(f"Average susceptibility = {susceptibility}")
#     print(f"Average heat capacity = {heat_capacity}")
#     print(f"Binder parameter for M = {binder_m}")
#
#     assert energy >= -max_abs_energy and energy <= max_abs_energy
#     assert magnetization >= 0 and magnetization <= 1.0
# @pytest.mark.parametrize(
#     "r, max_abs_energy, interaction_ij",
#     [
#         (1.2, 2.0, (1.0, 1.0)),
#         (1.9, 4.0, (-1.0, 1.0, 1.0, -1.0)),
#     ],
# )
# def test_2d_square_ising_antiferromagnet_simulation(
#     r: float,
#     max_abs_energy: float,
#     interaction_ij: Union[Tuple[float, float], Tuple[float, float, float, float]],
#     two_dimensional_square_lattice: pmg.Structure,
#     simulation_parameters: SimulationParameters,
# ) -> None:
#     lattice: Lattice = Lattice(structure=two_dimensional_square_lattice, r=r)
#
#     lattice.set_sublattice_pair_interactions(
#         interaction_df=lattice.sublattice_pairs_data_frame.assign(J_ij=interaction_ij)
#     )
#
#     data: SimulationData = spyns.run.simulation(
#         lattice=lattice,
#         parameters=simulation_parameters,
#     )
#
#     spyns.statistics.compute_ising_afm_order_parameter(
#         trace_df=data.data_frame,
#         order_parameter_name="AFM",
#         sublattices1=[0],
#         sublattices2=[1],
#         number_sites=data.lookup_tables.number_sites,
#     )
#
#     energy: float = \
#         data.data_frame["<E**1>"].values[-1] / data.lookup_tables.number_sites
#     magnetization: float = \
#         data.data_frame["<M**1>"].values[-1] / data.lookup_tables.number_sites
#     susceptibility: float = data.data_frame["X"].values[-1]
#     heat_capacity: float = data.data_frame["C"].values[-1]
#     binder_m: float = data.data_frame["Binder_M"].values[-1]
#     antiferromagnetization: float = data.data_frame["AFM"].mean()
#
#     print(f"Average susceptibility = {susceptibility}")
#     print(f"Average heat capacity = {heat_capacity}")
#     print(f"Binder parameter for M = {binder_m}")
#     print(f"Average antiferromagnetization = {antiferromagnetization}")
#
#     assert energy >= -max_abs_energy and energy <= max_abs_energy
#     assert antiferromagnetization <= 1.0
#     assert antiferromagnetization - magnetization > 0.1
# @pytest.mark.parametrize(
#     "r, max_abs_energy, interaction_ij",
#     [
#         (1.2, 3.0, (-1.0, -1.0)),
#         (1.5, 9.0, (-1.0, -1.0, -1.0, -1.0)),
#     ],
# )
# def test_sc_heisenberg_ferromagnet_simulation(
#     r: float,
#     max_abs_energy: float,
#     interaction_ij: Union[Tuple[float, float], Tuple[float, float, float, float]],
#     cubic_lattice: pmg.Structure,
#     simulation_parameters_heisenberg: SimulationParameters,
# ) -> None:
#     lattice: Lattice = Lattice(structure=cubic_lattice, r=r)
#
#     lattice.set_sublattice_pair_interactions(
#         interaction_df=lattice.sublattice_pairs_data_frame.assign(J_ij=interaction_ij)
#     )
#
#     data: SimulationData = spyns.run.simulation(
#         lattice=lattice,
#         parameters=simulation_parameters_heisenberg,
#     )
#
#     energy: float = \
#         data.data_frame["<E**1>"].values[-1] / data.lookup_tables.number_sites
#     magnetization: float = \
#         data.data_frame["<M**1>"].values[-1] / data.lookup_tables.number_sites
#     susceptibility: float = data.data_frame["X"].values[-1]
#     heat_capacity: float = data.data_frame["C"].values[-1]
#     binder_m: float = data.data_frame["Binder_M"].values[-1]
#
#     print(f"Average susceptibility = {susceptibility}")
#     print(f"Average heat capacity = {heat_capacity}")
#     print(f"Binder parameter for M = {binder_m}")
#
#     assert energy >= -max_abs_energy and energy <= max_abs_energy
#     assert magnetization >= -1.0 and magnetization <= 1.0
# @pytest.mark.parametrize(
#     "r, max_abs_energy, interaction_ij",
#     [
#         (0.9, 4.0, (-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0)),
#     ],
# )
# def test_bcc_heisenberg_ferromagnet_simulation(
#     r: float,
#     max_abs_energy: float,
#     interaction_ij: Union[Tuple[float, float], Tuple[float, float, float, float]],
#     bcc_lattice: pmg.Structure,
#     simulation_parameters_heisenberg: SimulationParameters,
# ) -> None:
#     lattice: Lattice = Lattice(structure=bcc_lattice, r=r)
#
#     lattice.set_sublattice_pair_interactions(
#         interaction_df=lattice.sublattice_pairs_data_frame.assign(J_ij=interaction_ij)
#     )
#
#     data: SimulationData = spyns.run.simulation(
#         lattice=lattice,
#         parameters=simulation_parameters_heisenberg,
#     )
#
#     energy: float = \
#         data.data_frame["<E**1>"].values[-1] / data.lookup_tables.number_sites
#     magnetization: float = \
#         data.data_frame["<M**1>"].values[-1] / data.lookup_tables.number_sites
#     susceptibility: float = data.data_frame["X"].values[-1]
#     heat_capacity: float = data.data_frame["C"].values[-1]
#     binder_m: float = data.data_frame["Binder_M"].values[-1]
#
#     print(f"Average susceptibility = {susceptibility}")
#     print(f"Average heat capacity = {heat_capacity}")
#     print(f"Binder parameter for M = {binder_m}")
#
#     assert energy >= -max_abs_energy and energy <= max_abs_energy
#     assert magnetization >= -1.0 and magnetization <= 1.0
# @pytest.mark.parametrize(
#     "r",
#     [
#         1.2,
#         1.9,
#     ],
# )
# def test_2d_square_voter_model_simulation(
#     r: float,
#     two_dimensional_square_lattice: pmg.Structure,
#     simulation_parameters_voter: SimulationParameters,
# ) -> None:
#     lattice: Lattice = Lattice(structure=two_dimensional_square_lattice, r=r)
#
#     data: SimulationData = spyns.run.simulation(
#         lattice=lattice,
#         parameters=simulation_parameters_voter,
#     )
#
#     magnetization: float = \
#         data.data_frame["<M**1>"].values[-1] / data.lookup_tables.number_sites
#
#     assert magnetization >= 0 and magnetization <= 1.0
@pytest.mark.parametrize(
    "r, max_abs_energy, interaction_ij",
    [(1.2, 3.0, (-1.0, -1.0)), (1.5, 9.0, (-1.0, -1.0, -1.0, -1.0))],
)
def test_sc_heisenberg_cython_ferromagnet_simulation(
    r: float,
    max_abs_energy: float,
    interaction_ij: Union[Tuple[float, float], Tuple[float, float, float, float]],
    cubic_lattice: pmg.Structure,
    simulation_parameters_heisenberg_cython: SimulationParameters,
) -> None:
    lattice: Lattice = Lattice(structure=cubic_lattice, r=r)

    lattice.set_sublattice_pair_interactions(
        interaction_df=lattice.sublattice_pairs_data_frame.assign(J_ij=interaction_ij)
    )

    data: SimulationData = spyns.run.simulation(
        lattice=lattice, parameters=simulation_parameters_heisenberg_cython
    )

    energy: float = data.container.data_frame["<E**1>"].values[
        -1
    ] / data.container.lookup_tables.number_sites
    magnetization: float = data.container.data_frame["<M**1>"].values[
        -1
    ] / data.container.lookup_tables.number_sites
    susceptibility: float = data.container.data_frame["X"].values[-1]
    heat_capacity: float = data.container.data_frame["C"].values[-1]
    binder_m: float = data.container.data_frame["Binder_M"].values[-1]

    print(f"Average susceptibility = {susceptibility}")
    print(f"Average heat capacity = {heat_capacity}")
    print(f"Binder parameter for M = {binder_m}")

    assert energy >= -max_abs_energy and energy <= max_abs_energy
    assert magnetization >= -1.0 and magnetization <= 1.0
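The trace columns read above ("X", "C", "Binder_M") are standard fluctuation estimators over the sampled magnetization trace; a sketch of two of them in plain Python (assumed textbook definitions — spyns's own normalization may differ):

```python
def susceptibility(m_samples, temperature):
    # X = (<M^2> - <M>^2) / T, the fluctuation estimator.
    n = len(m_samples)
    m1 = sum(m_samples) / n
    m2 = sum(m * m for m in m_samples) / n
    return (m2 - m1 * m1) / temperature

def binder_cumulant(m_samples):
    # Binder_M = 1 - <M^4> / (3 <M^2>^2); 2/3 for a perfectly ordered state.
    n = len(m_samples)
    m2 = sum(m ** 2 for m in m_samples) / n
    m4 = sum(m ** 4 for m in m_samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

ordered = binder_cumulant([1.0, 1.0, 1.0, 1.0])  # 2/3: no fluctuations at all
```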
73b8ba3c255a6ebf9764ad50cad9e068b8654661 | 197 | py | Python | src/training/Core2/Chapter6Sequences/exercise_6_14.py | MagicForest/Python | 8af56e9384061504f05b229467c922ba71a433cb | ["Apache-2.0"]
def rock_paper_scissors():
    return result


def test_rock_paper_scissors():
    print('------test_rock_paper_scissors is passed. ^__^-----')


if __name__ == '__main__':
    test_rock_paper_scissors()
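The exercise stub above never defines `result`; for reference, one way the missing game logic could look (an illustrative completion, not the book's solution):

```python
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play(left, right):
    # Return which side wins, or "tie" on identical throws.
    if left == right:
        return "tie"
    return "left" if BEATS[left] == right else "right"

outcome = play("rock", "scissors")  # "left": rock beats scissors
```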
73be8a8baa25c60454d3d74c9dacf5230e9859a4 | 90 | py | Python | mobie/migration/migrate_v2/__init__.py | platybrowser/mobie-python | 43341cd92742016a3a0d602325bb93b94c3b4c36 | ["MIT"]
from .migrate_project import migrate_project
from .migrate_dataset import migrate_dataset
73d55c605f99a47ec0c66d3e3711c529e6b7aabb | 214 | py | Python | call_tracking/admin.py | ababen/call-tracking-django | d6bee30f12eeccf9516867d24507b0ce1e15c386 | ["MIT"]
from django.contrib import admin
from .models import LeadSource, Lead
# Register our models with the basic ModelAdmin
admin.site.register(LeadSource, admin.ModelAdmin)
admin.site.register(Lead, admin.ModelAdmin)
73d6f2e1ed31e8379e78cd187264c6c289341b8a | 91 | py | Python | old/science/experiments/kmeans/kmeans.py | connorwalsh/connorwalsh.github.io | 99531abd99320768d8595695aaccb56347d15dfe | ["MIT"]
#!/usr/bin/python
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
73e13fbafd00501351f9ca956d1b581e8fd48903 | 5,439 | py | Python | tests/cli/test_patch_tool.py | mr-mixas/nddiff.py | d8b613f31cc2390c370cfa3342c42def484751fe | ["Apache-2.0"]
import io
import json
import pytest
from unittest import mock
from shutil import copyfile
import nested_diff.patch_tool
def test_default_patch(capsys, content, fullname, tmp_path):
    result_file_name = '{}.got.json'.format(tmp_path)
    copyfile(
        fullname('lists.a.json', shared=True),
        result_file_name,
    )

    exit_code = nested_diff.patch_tool.App(args=(
        result_file_name,
        fullname('lists.patch.yaml', shared=True),
    )).run()

    captured = capsys.readouterr()
    assert '' == captured.out
    assert '' == captured.err
    assert exit_code == 0

    assert json.loads(content(fullname('lists.b.json', shared=True))) == \
        json.loads(content(result_file_name))


def test_json_ofmt_opts(capsys, content, expected, fullname, tmp_path):
    result_file_name = '{}.got.json'.format(tmp_path)
    copyfile(
        fullname('lists.a.json', shared=True),
        result_file_name,
    )

    exit_code = nested_diff.patch_tool.App(args=(
        result_file_name,
        fullname('lists.patch.json', shared=True),
        '--ofmt', 'json',
        '--ofmt-opts', '{"indent": null}',
    )).run()

    captured = capsys.readouterr()
    assert '' == captured.out
    assert '' == captured.err
    assert exit_code == 0

    assert json.loads(expected) == json.loads(content(result_file_name))


def test_auto_fmts(capsys, content, expected, fullname, tmp_path):
    result_file_name = '{}.got.yaml'.format(tmp_path)
    copyfile(
        fullname('lists.a.yaml', shared=True),
        result_file_name,
    )

    exit_code = nested_diff.patch_tool.App(args=(
        result_file_name,
        fullname('lists.patch.json', shared=True),
    )).run()

    captured = capsys.readouterr()
    assert '' == captured.out
    assert '' == captured.err
    assert exit_code == 0

    assert expected == content(result_file_name)


def test_yaml_ifmt(capsys, content, fullname, tmp_path):
    result_file_name = '{}.got'.format(tmp_path)
    copyfile(
        fullname('lists.a.yaml', shared=True),
        result_file_name,
    )

    exit_code = nested_diff.patch_tool.App(args=(
        result_file_name,
        fullname('lists.patch.yaml', shared=True),
        '--ifmt', 'yaml',
        '--ofmt', 'json',
    )).run()

    captured = capsys.readouterr()
    assert '' == captured.out
    assert '' == captured.err
    assert exit_code == 0

    # output is json by default
    assert json.loads(content(fullname('lists.b.json', shared=True))) == \
        json.loads(content(result_file_name))


def test_yaml_ofmt(capsys, content, expected, fullname, tmp_path):
    result_file_name = '{}.got.json'.format(tmp_path)
    copyfile(
        fullname('lists.a.json', shared=True),
        result_file_name,
    )

    exit_code = nested_diff.patch_tool.App(args=(
        result_file_name,
        fullname('lists.patch.json', shared=True),
        '--ofmt', 'yaml',
    )).run()

    captured = capsys.readouterr()
    assert '' == captured.out
    assert '' == captured.err
    assert exit_code == 0

    assert expected == content(result_file_name)


def test_ini_ofmt(capsys, content, fullname, tmp_path):
    result_file_name = '{}.got.ini'.format(tmp_path)
    copyfile(
        fullname('a.ini', shared=True),
        result_file_name,
    )

    exit_code = nested_diff.patch_tool.App(args=(
        result_file_name,
        fullname('ini.patch.json', shared=True),
        '--ofmt', 'ini',
    )).run()

    captured = capsys.readouterr()
    assert '' == captured.out
    assert '' == captured.err
    assert exit_code == 0

    expected = content(fullname('b.ini', shared=True))
    assert expected == content(result_file_name)


def test_toml_fmt(capsys, content, fullname, tmp_path):
    result_file_name = '{}.got.toml'.format(tmp_path)
    copyfile(
        fullname('dict.a.toml', shared=True),
        result_file_name,
    )

    exit_code = nested_diff.patch_tool.App(args=(
        result_file_name,
        fullname('dict.patch.toml', shared=True),
    )).run()

    captured = capsys.readouterr()
    assert '' == captured.out
    assert '' == captured.err
assert exit_code == 0
expected = content(fullname('dict.b.toml', shared=True))
assert expected == content(result_file_name)
def test_entry_point(capsys):
with mock.patch('sys.argv', ['nested_patch', '-h']):
with pytest.raises(SystemExit) as e:
nested_diff.patch_tool.cli()
assert e.value.code == 0
captured = capsys.readouterr()
assert captured.out.startswith('usage: nested_patch')
assert '' == captured.err
def test_stdin_patch(capsys, content, fullname, tmp_path):
result_file_name = '{}.got.json'.format(tmp_path)
copyfile(
fullname('lists.a.json', shared=True),
result_file_name,
)
patch = io.StringIO(content(fullname('lists.patch.json', shared=True)))
with mock.patch('sys.stdin', patch):
exit_code = nested_diff.patch_tool.App(
args=(result_file_name, '--ifmt', 'json')).run()
captured = capsys.readouterr()
assert '' == captured.out
assert '' == captured.err
assert exit_code == 0
assert json.loads(content(fullname('lists.b.json', shared=True))) == \
json.loads(content(result_file_name))
def test_arg_files_absent():
with pytest.raises(SystemExit) as e:
        nested_diff.patch_tool.App(args=('/file/not/exists',)).run()
assert e.value.code == 2
| 28.036082 | 75 | 0.640927 | 681 | 5,439 | 4.910426 | 0.118943 | 0.095694 | 0.133971 | 0.0625 | 0.84988 | 0.825658 | 0.801734 | 0.799342 | 0.788278 | 0.747907 | 0 | 0.002344 | 0.215665 | 5,439 | 193 | 76 | 28.181347 | 0.781528 | 0.004596 | 0 | 0.671141 | 0 | 0 | 0.092203 | 0 | 0 | 0 | 0 | 0 | 0.241611 | 1 | 0.067114 | false | 0 | 0.040268 | 0 | 0.107383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fb4a1a0e2c89bbe2d315c8b5c1e9101f66ea27a5 | 1,255 | py | Python | lightly/openapi_generated/swagger_client/api/__init__.py | umami-ware/lightly | 5d70b34df7f784af249f9e9a6bfd6256756a877f | [
"MIT"
] | null | null | null | lightly/openapi_generated/swagger_client/api/__init__.py | umami-ware/lightly | 5d70b34df7f784af249f9e9a6bfd6256756a877f | [
"MIT"
] | null | null | null | lightly/openapi_generated/swagger_client/api/__init__.py | umami-ware/lightly | 5d70b34df7f784af249f9e9a6bfd6256756a877f | [
"MIT"
] | null | null | null | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from lightly.openapi_generated.swagger_client.api.datasets_api import DatasetsApi
from lightly.openapi_generated.swagger_client.api.datasources_api import DatasourcesApi
from lightly.openapi_generated.swagger_client.api.embeddings_api import EmbeddingsApi
from lightly.openapi_generated.swagger_client.api.embeddings2d_api import Embeddings2dApi
from lightly.openapi_generated.swagger_client.api.jobs_api import JobsApi
from lightly.openapi_generated.swagger_client.api.mappings_api import MappingsApi
from lightly.openapi_generated.swagger_client.api.meta_data_configurations_api import MetaDataConfigurationsApi
from lightly.openapi_generated.swagger_client.api.other_api import OtherApi
from lightly.openapi_generated.swagger_client.api.quota_api import QuotaApi
from lightly.openapi_generated.swagger_client.api.samples_api import SamplesApi
from lightly.openapi_generated.swagger_client.api.samplings_api import SamplingsApi
from lightly.openapi_generated.swagger_client.api.scores_api import ScoresApi
from lightly.openapi_generated.swagger_client.api.tags_api import TagsApi
from lightly.openapi_generated.swagger_client.api.versioning_api import VersioningApi
| 62.75 | 111 | 0.896414 | 168 | 1,255 | 6.404762 | 0.267857 | 0.143123 | 0.234201 | 0.351301 | 0.55948 | 0.55948 | 0.55948 | 0 | 0 | 0 | 0 | 0.002534 | 0.056574 | 1,255 | 19 | 112 | 66.052632 | 0.90625 | 0.032669 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fbdd5a79d587268e8ecbea98006718820ddfbb2e | 43 | py | Python | tests/test_example.py | Irio/AnkiSyncDuolingo | e1aa5ce38866397a98c4fe7bcb80f2c586fe46d3 | [
"MIT"
] | null | null | null | tests/test_example.py | Irio/AnkiSyncDuolingo | e1aa5ce38866397a98c4fe7bcb80f2c586fe46d3 | [
"MIT"
] | null | null | null | tests/test_example.py | Irio/AnkiSyncDuolingo | e1aa5ce38866397a98c4fe7bcb80f2c586fe46d3 | [
"MIT"
] | null | null | null | def test_it_works():
assert 2 + 2 == 4
| 14.333333 | 21 | 0.581395 | 8 | 43 | 2.875 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 0.27907 | 43 | 2 | 22 | 21.5 | 0.645161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
836fdf0e0dc3455f49d032ccf04398ba5a3c20b6 | 234 | py | Python | range_dictionary/mutliple_value_exception.py | Christian-B/my_spinnaker | b19f4025878bc4fbd6d81d78cec8c284929e148b | [
"MIT"
] | null | null | null | range_dictionary/mutliple_value_exception.py | Christian-B/my_spinnaker | b19f4025878bc4fbd6d81d78cec8c284929e148b | [
"MIT"
] | null | null | null | range_dictionary/mutliple_value_exception.py | Christian-B/my_spinnaker | b19f4025878bc4fbd6d81d78cec8c284929e148b | [
"MIT"
] | null | null | null | class MutlipleValueException(Exception):
def __init__(self, method):
self._method = method
def __str__(self):
return "The method {} would return more than one value." \
"".format(self._method)
| 26 | 66 | 0.628205 | 25 | 234 | 5.48 | 0.64 | 0.218978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269231 | 234 | 8 | 67 | 29.25 | 0.80117 | 0 | 0 | 0 | 0 | 0 | 0.200855 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
837728b72a1088b5498a60e413101dddcbf30dba | 20 | py | Python | Process/EDM/__init__.py | jwbrooks0/johnspythonlibrary2 | 10ca519276d8c32da0fbd41a597f75c0c98a8736 | [
"MIT"
] | null | null | null | Process/EDM/__init__.py | jwbrooks0/johnspythonlibrary2 | 10ca519276d8c32da0fbd41a597f75c0c98a8736 | [
"MIT"
] | null | null | null | Process/EDM/__init__.py | jwbrooks0/johnspythonlibrary2 | 10ca519276d8c32da0fbd41a597f75c0c98a8736 | [
"MIT"
] | null | null | null | from ._edm import *
| 10 | 19 | 0.7 | 3 | 20 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83b657c691c0165415f0c90a0d38a27329c08c67 | 39 | py | Python | rooms-unified/gym_rooms/envs/__init__.py | root-master/subgoal-dicovery | 3f0851f02cb7ebfbb66edde817c5b4e542baf58d | [
"MIT"
] | 34 | 2018-10-25T22:14:17.000Z | 2022-03-29T01:22:13.000Z | rooms-unified/gym_rooms/envs/__init__.py | root-master/subgoal-dicovery | 3f0851f02cb7ebfbb66edde817c5b4e542baf58d | [
"MIT"
] | null | null | null | rooms-unified/gym_rooms/envs/__init__.py | root-master/subgoal-dicovery | 3f0851f02cb7ebfbb66edde817c5b4e542baf58d | [
"MIT"
] | 10 | 2018-11-05T23:37:04.000Z | 2022-03-15T03:43:27.000Z | from gym_rooms.envs.rooms_env import *
| 19.5 | 38 | 0.820513 | 7 | 39 | 4.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83d271c4d711011c8053e080a52e9635fce0a6ae | 3,777 | py | Python | tests/conftest.py | GE-Atomic6/ghg | feea44a4a4bd1f6674a9c8be807f2c9f59f5da08 | [
"BSD-3-Clause"
] | 3 | 2022-03-30T00:06:06.000Z | 2022-03-30T15:56:45.000Z | tests/conftest.py | GE-Atomic6/ghg | feea44a4a4bd1f6674a9c8be807f2c9f59f5da08 | [
"BSD-3-Clause"
] | null | null | null | tests/conftest.py | GE-Atomic6/ghg | feea44a4a4bd1f6674a9c8be807f2c9f59f5da08 | [
"BSD-3-Clause"
] | null | null | null | # pylint: disable=all
"""Configuration for testing"""
import json
import pkgutil
from jsonschema import Draft7Validator
import pytest
@pytest.fixture
def stationary_combustion_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "stationary_combustion.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def mobile_sources_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "mobile_sources.json")
schema = json.loads(schema_file_contents)
verified = Draft7Validator(schema=schema)
return verified
@pytest.fixture
def waste_gases_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "waste_gases.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def electricity_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "electricity.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def steam_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "steam.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def refrigeration_and_ac_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "refrigeration_and_ac.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def fire_suppression_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "fire_suppression.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def purchased_offsets_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "purchased_offsets.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def purchased_gases_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "purchased_gases.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def business_travel_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "business_travel.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def commuting_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "commuting.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def product_transport_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "product_transport.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
@pytest.fixture
def waste_schema():
"""Provides schema validation to tests"""
schema_file_contents = pkgutil.get_data("atomic6ghg.schemas", "waste.json")
schema = json.loads(schema_file_contents)
v = Draft7Validator(schema=schema)
return v
| 30.707317 | 95 | 0.745301 | 452 | 3,777 | 6.011062 | 0.110619 | 0.095694 | 0.172249 | 0.143541 | 0.832168 | 0.832168 | 0.832168 | 0.81855 | 0.81855 | 0.81855 | 0 | 0.008396 | 0.148531 | 3,777 | 122 | 96 | 30.959016 | 0.836443 | 0.136087 | 0 | 0.597561 | 0 | 0 | 0.14881 | 0.036341 | 0 | 0 | 0 | 0 | 0 | 1 | 0.158537 | false | 0 | 0.04878 | 0 | 0.365854 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83e4682112675943a338978214ba2c505133dc09 | 3,414 | py | Python | app/update_result_test.py | jdeanwallace/tinypilot | 5427fccb0fe6fe66460f5b243b485c22c9d29aed | [
"MIT"
] | null | null | null | app/update_result_test.py | jdeanwallace/tinypilot | 5427fccb0fe6fe66460f5b243b485c22c9d29aed | [
"MIT"
] | null | null | null | app/update_result_test.py | jdeanwallace/tinypilot | 5427fccb0fe6fe66460f5b243b485c22c9d29aed | [
"MIT"
] | null | null | null | import datetime
import io
import unittest
import update_result
class UpdateResultTest(unittest.TestCase):
def test_reads_correct_values_for_successful_result(self):
self.assertEqual(
update_result.Result(
success=True,
error='',
timestamp=datetime.datetime(2021,
2,
10,
8,
57,
35,
tzinfo=datetime.timezone.utc),
),
update_result.read(
io.StringIO("""
{
"success": true,
"error": "",
"timestamp": "2021-02-10T085735Z"
}
""")))
def test_reads_correct_values_for_failed_result(self):
self.assertEqual(
update_result.Result(
success=False,
error='dummy update error',
timestamp=datetime.datetime(2021,
2,
10,
8,
57,
35,
tzinfo=datetime.timezone.utc),
),
update_result.read(
io.StringIO("""
{
"success": false,
"error": "dummy update error",
"timestamp": "2021-02-10T085735Z"
}
""")))
def test_reads_default_values_for_empty_dict(self):
self.assertEqual(
update_result.Result(
success=False,
error='',
timestamp=datetime.datetime.utcfromtimestamp(0),
), update_result.read(io.StringIO('{}')))
def test_writes_success_result_accurately(self):
mock_file = io.StringIO()
update_result.write(
update_result.Result(
success=True,
error='',
timestamp=datetime.datetime(2021,
2,
10,
8,
57,
35,
tzinfo=datetime.timezone.utc),
), mock_file)
self.assertEqual(('{"success": true, "error": "", '
'"timestamp": "2021-02-10T085735Z"}'),
mock_file.getvalue())
def test_writes_error_result_accurately(self):
mock_file = io.StringIO()
update_result.write(
update_result.Result(
success=False,
error='dummy update error',
timestamp=datetime.datetime(2021,
2,
10,
8,
57,
35,
tzinfo=datetime.timezone.utc),
), mock_file)
self.assertEqual(('{"success": false, "error": "dummy update error", '
'"timestamp": "2021-02-10T085735Z"}'),
mock_file.getvalue())
| 35.195876 | 78 | 0.383421 | 230 | 3,414 | 5.504348 | 0.213043 | 0.104265 | 0.07109 | 0.098736 | 0.832543 | 0.812006 | 0.777251 | 0.770932 | 0.654818 | 0.597156 | 0 | 0.065666 | 0.531634 | 3,414 | 96 | 79 | 35.5625 | 0.726079 | 0 | 0 | 0.75 | 0 | 0 | 0.104277 | 0.012302 | 0 | 0 | 0 | 0 | 0.056818 | 1 | 0.056818 | false | 0 | 0.045455 | 0 | 0.113636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f7c993c9246179fb2191b5b05afea7e67bba6953 | 8,361 | py | Python | tests/test_ws_response.py | gridsmartercities/pywsitest | 7d438477bfc61b6e3adeab6530f52a24359249d8 | [
"MIT"
] | 19 | 2019-07-31T14:51:25.000Z | 2021-12-10T08:43:46.000Z | tests/test_ws_response.py | gridsmartercities/pywsitest | 7d438477bfc61b6e3adeab6530f52a24359249d8 | [
"MIT"
] | 10 | 2019-07-30T12:07:24.000Z | 2020-12-27T18:33:07.000Z | tests/test_ws_response.py | gridsmartercities/pywsitest | 7d438477bfc61b6e3adeab6530f52a24359249d8 | [
"MIT"
] | 1 | 2021-03-29T09:33:45.000Z | 2021-03-29T09:33:45.000Z | import unittest
from pywsitest import WSResponse, WSMessage
class WSResponseTests(unittest.TestCase):  # pylint: disable=too-many-public-methods
def test_create_ws_response(self):
ws_response = WSResponse()
self.assertDictEqual({}, ws_response.attributes)
def test_with_attribute(self):
ws_response = WSResponse().with_attribute("test")
self.assertIn("test", ws_response.attributes)
self.assertEqual(1, len(ws_response.attributes))
def test_with_attribute_with_value(self):
ws_response = WSResponse().with_attribute("test", 123)
self.assertEqual(123, ws_response.attributes["test"])
self.assertEqual(1, len(ws_response.attributes))
def test_all_attributes_is_match(self):
ws_response = (
WSResponse()
.with_attribute("type", "new_request")
.with_attribute("body")
)
test_data = {
"type": "new_request",
"body": {}
}
self.assertTrue(ws_response.is_match(test_data))
def test_attribute_is_not_match(self):
ws_response = (
WSResponse()
.with_attribute("type", "new_request")
.with_attribute("body")
)
test_data = {
"type": "new_request",
"not_body": {}
}
self.assertFalse(ws_response.is_match(test_data))
def test_attribute_value_is_not_match(self):
ws_response = (
WSResponse()
.with_attribute("type", "new_request")
.with_attribute("body")
)
test_data = {
"type": "not_new_request",
"body": {}
}
self.assertFalse(ws_response.is_match(test_data))
def test_no_attributes_match(self):
ws_response = (
WSResponse()
.with_attribute("type", "new_request")
.with_attribute("body")
)
test_data = {
"not_type": "new_request",
"not_body": {}
}
self.assertFalse(ws_response.is_match(test_data))
def test_with_trigger(self):
message = WSMessage().with_attribute("test", 123)
ws_response = WSResponse().with_trigger(message)
self.assertEqual(1, len(ws_response.triggers))
self.assertEqual(message, ws_response.triggers[0])
def test_stringify(self):
response = WSResponse().with_attribute("test", 123)
self.assertEqual("{\"test\": 123}", str(response))
def test_resolved_attribute_match(self):
ws_response = (
WSResponse()
.with_attribute("type", "new_request")
.with_attribute("body/attribute", "value")
)
test_data = {
"type": "new_request",
"body": {
"attribute": "value"
}
}
self.assertTrue(ws_response.is_match(test_data))
def test_no_resolved_attribute_match(self):
ws_response = (
WSResponse()
.with_attribute("type", "new_request")
.with_attribute("body/attribute", "value")
)
test_data = {
"type": "new_request",
"body": {
"not_attribute": "not_value"
}
}
self.assertFalse(ws_response.is_match(test_data))
def test_resolved_attribute_no_match(self):
ws_response = (
WSResponse()
.with_attribute("type", "new_request")
.with_attribute("body/attribute", "value")
)
test_data = {
"type": "new_request",
"body": {
"attribute": "not_value"
}
}
self.assertFalse(ws_response.is_match(test_data))
def test_resolved_recursive_attribute_match(self):
ws_response = WSResponse().with_attribute("body/first/second/third", "value")
test_data = {
"type": "new_request",
"body": {
"first": {
"second": {
"third": "value",
"fourth": "not_value"
}
}
}
}
self.assertTrue(ws_response.is_match(test_data))
def test_resolved_attribute_by_list_index(self):
ws_response = WSResponse().with_attribute("body/0/colour", "red")
test_data = {
"body": [
{"colour": "red"},
{"colour": "green"},
{"colour": "blue"}
]
}
self.assertTrue(ws_response.is_match(test_data))
def test_resolved_attribute_by_list_index_without_value(self):
ws_response = WSResponse().with_attribute("body/0/colour")
test_data = {
"body": [
{"colour": "red"},
{"colour": "green"},
{"colour": "blue"}
]
}
self.assertTrue(ws_response.is_match(test_data))
def test_resolved_attribute_by_list_without_index(self):
ws_response = WSResponse().with_attribute("body//colour", "green")
test_data = {
"body": [
{"colour": "red"},
{"colour": "green"},
{"colour": "blue"}
]
}
self.assertTrue(ws_response.is_match(test_data))
def test_resolved_attribute_by_list_index_no_match(self):
ws_response = WSResponse().with_attribute("body/1/colour", "yellow")
test_data = {
"body": [
{"colour": "red"},
{"colour": "green"},
{"colour": "blue"}
]
}
self.assertFalse(ws_response.is_match(test_data))
def test_resolved_attribute_by_list_index_not_enough_elements(self):
ws_response = WSResponse().with_attribute("body/0/colour", "red")
test_data = {
"body": []
}
self.assertFalse(ws_response.is_match(test_data))
def test_resolved_attribute_by_list_without_index_no_match(self):
ws_response = WSResponse().with_attribute("body//colour", "yellow")
test_data = {
"body": [
{"colour": "red"},
{"colour": "green"},
{"colour": "blue"}
]
}
self.assertFalse(ws_response.is_match(test_data))
def test_resolved_attribute_by_just_list_index(self):
ws_response = WSResponse().with_attribute("body/0/", "red")
test_data = {
"body": [
"red",
"green",
"blue"
]
}
self.assertTrue(ws_response.is_match(test_data))
def test_resolve_by_index_when_dict_fails(self):
ws_response = WSResponse().with_attribute("body/0/colour", "red")
test_data = {
"body": {
"colour": "red"
}
}
self.assertFalse(ws_response.is_match(test_data))
def test_resolve_by_key_when_list_fails(self):
ws_response = WSResponse().with_attribute("body/colour", "red")
test_data = {
"body": [
"red",
"green",
"blue"
]
}
self.assertFalse(ws_response.is_match(test_data))
def test_resolve_top_level_list_by_index(self):
ws_response = WSResponse().with_attribute("/0/colour", "red")
test_data = [
{"colour": "red"},
{"colour": "green"},
{"colour": "blue"}
]
self.assertTrue(ws_response.is_match(test_data))
def test_resolve_top_level_list_without_index(self):
ws_response = WSResponse().with_attribute("//colour", "blue")
test_data = [
{"colour": "red"},
{"colour": "green"},
{"colour": "blue"}
]
self.assertTrue(ws_response.is_match(test_data))
def test_resolve_double_top_level_list_without_indexes(self):
ws_response = WSResponse().with_attribute("///colour", "blue")
test_data = [
[
{"colour": "red"},
{"colour": "green"}
],
[
{"colour": "yellow"},
{"colour": "blue"}
]
]
self.assertTrue(ws_response.is_match(test_data))
| 27.413115 | 85 | 0.533549 | 809 | 8,361 | 5.159456 | 0.093943 | 0.124581 | 0.114998 | 0.132247 | 0.856493 | 0.845232 | 0.837326 | 0.793483 | 0.72736 | 0.676809 | 0 | 0.004703 | 0.338835 | 8,361 | 304 | 86 | 27.503289 | 0.750362 | 0.004545 | 0 | 0.534483 | 0 | 0 | 0.119337 | 0.002764 | 0 | 0 | 0 | 0 | 0.12069 | 1 | 0.107759 | false | 0 | 0.008621 | 0 | 0.12069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f7eef0e577b16d644d3bee728275638502bde74d | 4,071 | py | Python | tests/test_alpha_tuning.py | DSCI-310/Group-10-Project | cfc50ebcbbf160e0a72a1144e6f7ae8c345db4aa | [
"MIT"
] | null | null | null | tests/test_alpha_tuning.py | DSCI-310/Group-10-Project | cfc50ebcbbf160e0a72a1144e6f7ae8c345db4aa | [
"MIT"
] | 34 | 2022-02-13T23:15:57.000Z | 2022-03-31T07:15:03.000Z | tests/test_alpha_tuning.py | DSCI-310/Group-10-Project | cfc50ebcbbf160e0a72a1144e6f7ae8c345db4aa | [
"MIT"
] | null | null | null | from pandas import DataFrame
from sklearn.model_selection import train_test_split
import pytest
from src.analysis.alpha_tuning import ridge_alpha_tuning
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeCV
@pytest.fixture
def toy_dataset():
return DataFrame({'x1': [1, 2, 3, 4, 6, 7, 8, 9, 0],
'x2': [1, 2, 3, 4, 5, 6, 7, 8, 10],
'y': [2, 3, 4, 5, 6, 7, 7, 8, 9]
})
@pytest.fixture
def toy_dataset_2():
return DataFrame({
        'x1': [1, 2, 3, 4, 6, 7, 8, 9, 0, 1, 2, 3,
               4, 6, 7, 8, 9, 0, 1, 2, 3, 4, 6,
               7, 8, 9, 0, 1, 2, 3, 4, 6, 7, 8, 9, 0],
        'x2': [1, 2, 3, 4, 5, 6, 7, 8, 10, 1,
               2, 3, 4, 6, 7, 8, 9, 0, 1, 2, 3,
               4, 6, 7, 8, 9, 0, 1, 2, 3, 4, 6,
               7, 8, 9, 0],
        'y': [2, 3, 4, 5, 6, 7, 7, 8, 9,
              2, 3, 4, 5, 6, 7, 7, 8, 9,
              2, 3, 4, 5, 6, 7, 7, 8, 9,
              2, 3, 4, 5, 6, 7, 7, 8, 9]
    })
def test_ridgealphatuning_fullfunc(toy_dataset):
alpha = [1, 5, 12]
train, test = train_test_split(toy_dataset, test_size=.4, random_state=123)
trainx, trainy = train.drop(columns='y'), train['y']
cv_pipe = make_pipeline(StandardScaler(), RidgeCV(alphas=alpha, cv=2))
cv_pipe.fit(trainx, trainy)
best_a = cv_pipe.named_steps['ridgecv'].alpha_
print(best_a)
assert ridge_alpha_tuning(alpha, StandardScaler(), trainx, trainy, cv=2) == best_a
def test_ridgealphatuning_alpha(toy_dataset):
alpha = 1
train, test = train_test_split(toy_dataset, test_size=.4, random_state=123)
trainx, trainy = train.drop(columns='y'), train['y']
with pytest.raises(TypeError) as e_info:
ridge_alpha_tuning(alpha, StandardScaler(), trainx, trainy, cv=2)
assert "alpha is not a list" in str(e_info.value)
def test_ridgealphatuning_trainx(toy_dataset):
alpha = [1, 10, 100]
train, test = train_test_split(toy_dataset, test_size=.4, random_state=123)
trainx, trainy = train.drop(columns='y'), train['y']
trainx = 1
with pytest.raises(TypeError) as e_info:
ridge_alpha_tuning(alpha, StandardScaler(), trainx, trainy, cv=2)
assert "train_x should be data frame" in str(e_info.value)
def test_ridgealphatuning_trainy(toy_dataset):
alpha = [1, 10, 100]
train, test = train_test_split(toy_dataset, test_size=.4, random_state=123)
trainx, trainy = train.drop(columns='y'), train['y']
trainy = 1213
with pytest.raises(TypeError) as e_info:
ridge_alpha_tuning(alpha, StandardScaler(), trainx, trainy, cv=2)
assert "train_y should be data frame" in str(e_info.value)
def test_ridgealphatuning_cv(toy_dataset):
alpha = [1, 10, 100]
train, test = train_test_split(toy_dataset, test_size=.4, random_state=123)
trainx, trainy = train.drop(columns='y'), train['y']
with pytest.raises(TypeError) as e_info:
ridge_alpha_tuning(alpha, StandardScaler(), trainx, trainy, cv="two")
assert "cv should be an integer" in str(e_info.value)
def test_ridgealphatuning_smallalpha(toy_dataset):
alpha = [1]
train, test = train_test_split(toy_dataset, test_size=.4, random_state=123)
trainx, trainy = train.drop(columns='y'), train['y']
cv_pipe = make_pipeline(StandardScaler(), RidgeCV(alphas=alpha, cv=2))
cv_pipe.fit(trainx, trainy)
best_a = cv_pipe.named_steps['ridgecv'].alpha_
print(best_a)
assert ridge_alpha_tuning(alpha, StandardScaler(), trainx, trainy, cv=2) == best_a
def test_ridgealphatuning_largedat(toy_dataset_2):
alpha = [1, 10, 100]
train, test = train_test_split(toy_dataset_2, test_size=.4, random_state=123)
trainx, trainy = train.drop(columns='y'), train['y']
cv_pipe = make_pipeline(StandardScaler(), RidgeCV(alphas=alpha, cv=2))
cv_pipe.fit(trainx, trainy)
best_a = cv_pipe.named_steps['ridgecv'].alpha_
print(best_a)
assert ridge_alpha_tuning(alpha, StandardScaler(), trainx, trainy, cv=2) == best_a | 42.40625 | 86 | 0.645787 | 640 | 4,071 | 3.920313 | 0.134375 | 0.081307 | 0.017935 | 0.015943 | 0.834197 | 0.813472 | 0.813472 | 0.813472 | 0.783181 | 0.783181 | 0 | 0.067458 | 0.213461 | 4,071 | 96 | 87 | 42.40625 | 0.716115 | 0 | 0 | 0.552941 | 0 | 0 | 0.035855 | 0 | 0 | 0 | 0 | 0 | 0.082353 | 1 | 0.105882 | false | 0 | 0.082353 | 0.023529 | 0.211765 | 0.035294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e392dff6c73468edd05f5acf6c703689b6de3f77 | 127 | py | Python | robocup_env/envs/__init__.py | kylestach/learn-you-a-soccer | e4163602ffb73b51d1b9ab35e75f2033f7179fff | [
"MIT"
] | 1 | 2021-07-27T12:47:57.000Z | 2021-07-27T12:47:57.000Z | robocup_env/envs/__init__.py | kylestach/learn-you-a-soccer | e4163602ffb73b51d1b9ab35e75f2033f7179fff | [
"MIT"
] | null | null | null | robocup_env/envs/__init__.py | kylestach/learn-you-a-soccer | e4163602ffb73b51d1b9ab35e75f2033f7179fff | [
"MIT"
] | null | null | null | from .collect import RoboCupCollect
from .score import RoboCupScore
from .passing import RoboCupPass
from .versioning import *
| 25.4 | 35 | 0.834646 | 15 | 127 | 7.066667 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125984 | 127 | 4 | 36 | 31.75 | 0.954955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e3be62cb5381693aeccfae2ee115f09edc4e0517 | 804 | py | Python | test/test_body5.py | ike709/tgs4-api-pyclient | 97918cfe614cc4ef06ef2485efff163417a8cd44 | [
"MIT"
] | null | null | null | test/test_body5.py | ike709/tgs4-api-pyclient | 97918cfe614cc4ef06ef2485efff163417a8cd44 | [
"MIT"
] | null | null | null | test/test_body5.py | ike709/tgs4-api-pyclient | 97918cfe614cc4ef06ef2485efff163417a8cd44 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
TGS API
A production scale tool for BYOND server management # noqa: E501
OpenAPI spec version: 9.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import swagger_client
from swagger_client.models.body5 import Body5 # noqa: E501
from swagger_client.rest import ApiException
class TestBody5(unittest.TestCase):
"""Body5 unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testBody5(self):
"""Test Body5"""
# FIXME: construct object with mandatory attributes with example values
# model = swagger_client.models.body5.Body5() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 20.1 | 79 | 0.675373 | 100 | 804 | 5.26 | 0.61 | 0.098859 | 0.064639 | 0.091255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034091 | 0.233831 | 804 | 39 | 80 | 20.615385 | 0.819805 | 0.441542 | 0 | 0.214286 | 1 | 0 | 0.019656 | 0 | 0 | 0 | 0 | 0.025641 | 0 | 1 | 0.214286 | false | 0.214286 | 0.357143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e3dc362771c8fee12ee7a53912b8cd1180e1b6f8 | 108 | py | Python | Introduction_to_programming/december_2021/package/multiplication.py | Ivanazzz/Technical-University-of-Sofia-Python | bd38f2468375c6619a2f8956b4ddc70aec523ccc | [
"MIT"
] | 1 | 2022-01-31T13:25:01.000Z | 2022-01-31T13:25:01.000Z | Introduction_to_programming/december_2021/package/multiplication.py | Ivanazzz/Technical-University-of-Sofia-Python | bd38f2468375c6619a2f8956b4ddc70aec523ccc | [
"MIT"
] | null | null | null | Introduction_to_programming/december_2021/package/multiplication.py | Ivanazzz/Technical-University-of-Sofia-Python | bd38f2468375c6619a2f8956b4ddc70aec523ccc | [
"MIT"
] | null | null | null | def multiplication(first_number, second_number):
    result = first_number * second_number
    return result | 36 | 48 | 0.787037 | 13 | 108 | 6.230769 | 0.538462 | 0.271605 | 0.419753 | 0.567901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157407 | 108 | 3 | 49 | 36 | 0.89011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
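The `multiplication` helper above simply wraps the `*` operator. A quick usage sketch, showing it works unchanged for integers, negatives, and floats:

```python
def multiplication(first_number, second_number):
    # Multiply two numbers and return the product.
    result = first_number * second_number
    return result


product = multiplication(6, 7)    # integer product
neg = multiplication(-3, 5)       # sign follows ordinary arithmetic
frac = multiplication(2.5, 4)     # floats work too
```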
540e5d6148dde44922677d9975f890af3afaa4ba | 67 | py | Python | src/funsql/prettyprint/__init__.py | ananis25/funsql-python | 158c66528fc6df2f1a84bcf49daddc543a31c4a9 | [
"MIT"
] | 1 | 2022-03-30T19:48:01.000Z | 2022-03-30T19:48:01.000Z | src/funsql/prettyprint/__init__.py | ananis25/funsql-python | 158c66528fc6df2f1a84bcf49daddc543a31c4a9 | [
"MIT"
] | null | null | null | src/funsql/prettyprint/__init__.py | ananis25/funsql-python | 158c66528fc6df2f1a84bcf49daddc543a31c4a9 | [
"MIT"
] | null | null | null | from .printer import Printer, Begin, End, Break, Token, GroupBreak
| 33.5 | 66 | 0.776119 | 9 | 67 | 5.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134328 | 67 | 1 | 67 | 67 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
542aeaaf417fe755b68a1fae7dd121eb966843c9 | 26 | py | Python | timebudget/__init__.py | Kahsius/timebudget | e58b7121aa5846db784fb80ab6b8dfffdcc8fae5 | [
"Apache-2.0"
] | 145 | 2019-10-22T21:45:53.000Z | 2022-03-12T02:15:55.000Z | timebudget/__init__.py | Kahsius/timebudget | e58b7121aa5846db784fb80ab6b8dfffdcc8fae5 | [
"Apache-2.0"
] | 13 | 2019-10-23T15:15:20.000Z | 2021-02-10T00:12:36.000Z | timebudget/__init__.py | Kahsius/timebudget | e58b7121aa5846db784fb80ab6b8dfffdcc8fae5 | [
"Apache-2.0"
] | 9 | 2019-10-25T00:44:25.000Z | 2020-09-23T11:54:17.000Z | from .timebudget import *
| 13 | 25 | 0.769231 | 3 | 26 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
581df4075f03fa7740b84f21a0708f4914a1e6be | 46 | py | Python | ChimuApi/__init__.py | lenforiee/python-chimu-api | 464310741bc58aa1702c9810a50d061e40f63ec2 | [
"MIT"
] | null | null | null | ChimuApi/__init__.py | lenforiee/python-chimu-api | 464310741bc58aa1702c9810a50d061e40f63ec2 | [
"MIT"
] | null | null | null | ChimuApi/__init__.py | lenforiee/python-chimu-api | 464310741bc58aa1702c9810a50d061e40f63ec2 | [
"MIT"
] | null | null | null | from .chimu_api import ChimuAPI, AsyncChimuAPI | 46 | 46 | 0.869565 | 6 | 46 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
585770c9b9366640b672aa291a06b82c3fc30905 | 16,861 | py | Python | cpa/tests/testscoring.py | DavidStirling/CellProfiler-Analyst | 7a0bfcb5cc7db067844595bdbb90f3132f9a8ea9 | [
"MIT"
] | 98 | 2015-02-05T18:22:04.000Z | 2022-03-29T12:06:48.000Z | cpa/tests/testscoring.py | DavidStirling/CellProfiler-Analyst | 7a0bfcb5cc7db067844595bdbb90f3132f9a8ea9 | [
"MIT"
] | 268 | 2015-01-14T15:43:24.000Z | 2022-02-13T22:04:37.000Z | cpa/tests/testscoring.py | DavidStirling/CellProfiler-Analyst | 7a0bfcb5cc7db067844595bdbb90f3132f9a8ea9 | [
"MIT"
] | 64 | 2015-06-30T22:26:03.000Z | 2022-03-11T01:06:13.000Z | '''
Checks the per-image counts calculated by multiclasssql.
'''
import numpy
import logging
from cpa.dbconnect import *
from cpa.properties import Properties
from cpa.datamodel import DataModel
from cpa.scoreall import score
import base64
import zlib
import os
if __name__ == "__main__":
    from cpa.trainingset import TrainingSet
    import wx
    app = wx.App()
    os.chdir('/Users/afraser/')
    p = Properties()
    db = DBConnect()
    dm = DataModel()
    testdata = [
# Test 2 classes filtered by MAPs
{'props' : 'CPAnalyst_test_data/nirht_test.properties',
'ts' : 'CPAnalyst_test_data/nirht_2class_test.txt',
'nRules' : 5,
'filter' : 'MAPs',
'group' : None,
'vals' : 'eJylmOFxI7kOhP+/KCaBcxEEQYCxXG3+abyvwRkHcOuVvCVLJMFGo9HQv/+OJ55MPcYzfir2Gcv2Ll+8tOXjbF9eu3iZmT5i7hx2/vzvYe1+PPXQ2lxuw7dFTX14xq6z95h1nJdr2zzmNc/cd20+NfToc/dcI2LGXEfnesaJYRaKIk+ajfCIuivr0RsW/pjWHp97rFMxeu3Y6af0Fy0+c+6YJ84Y77nmXDn62FjbeScJWyGv8L1s8JN6OWrkjsNF1neyrb9YG8C179pY5xDksNQN1gqiG8eJVGuJdk4wj9hfzPu/nzvt2fns9SgNMfNExKlpqdU5x1qLX0NgjVXbVqwsj/muns8B8aXIf0ifhZ1tObUZ/9eqEcdqPv+Q8lEcG1mZ611NiuzwnE8fN2NMkkvooWBynWSJUq315gYNjg8Y965XmhPYeY4OgLen61d1AIdsbygKfxWAjTrDIdr4NjjPJt8l8H7cuWr5XDaF8wbgtH3CYahWn6pjmWe+yDnAT55ApevmOPyzkaua4Hp1YBpgdmj7rEkO096z3R4rnhkN3gbXUdyP3OvzC1qaaCpkZgKEcfqpFzqfD3jynL06KQ94HhRm9OFrbeGo3dksz0piicp3df53snlx3dm15bFGTV8Z/WGyS4iIwzjViG3QmKQbRXjXHiXKblUTDVRbq3jqZLK7xvYzTTeifiU2MHXEXby6SOzyHEVASqJFpW8YxR/2etUFXM6Yvv0FG1RsATawNbrgtKGbjXOZutGP5Iowqpm6p7uXjfnCvXbn2VCH1adHuqf0r7fLNVlbs2MBrymi2dpf5MkGPJeLsqorValtbjq76lSlxD+8caPmbCi0sniVdAE6d3CPy/QkgKmkr77AJIBExtJXV/2caAALbL7AI4YcDvw6nqhZrfcVvFCm7A+f7TJjCwcJtHa9l0dN/0IigpCTQt+7izyQIFsjM6xDNeoJju2WK3IE7yHNOS/bAuDZGTz6dOISUL5Or3bdkdVQzJr2RTjUXPx2IHvqPHX1xdm+4ISg0l7Ug23JjXXW3aEMCsANXtaQna7wIY3qzFaniSBONzzkLmgwVY1c6G1+jDS9G5BwPSVQrZBgvRcS4J146gze6cRoibHjy5LyQ6TeDbj6jlY53Tfor6ggJK/mEWW9qSK4pvsNFKdKWp/02XeD8yQSUZ14RB8ADX6uTgVAkm1EIhsAEnNShYPmvqlP6duF37wTtOktkxz6bENAudJ/EeVW2CSN9PStdv5uQN4oPB94AUkRbX6Sf3LaylQphQBDakAbcDNE3JHd+UoV0ZNhTj+IXSsSugjDRohVdHDiQQcQkupLHPZCiGCnf3fIB/hEpGY7bKPSh7VQOFVK0qmH6Jc7aJE+gPHFj7aph3eHWEPCsaiM6lI4Eg6gZlGnb6PY+JDc+dIvRRzSB3vWbUggBDinZYCbIjNsIPzVngia9kw5vpFjf+Z+QEaRAzUWhiJldTsmegMyQ8Nvod7SWc5HAd/F0b1tXuqC0kbmNwRqHwSK5GQ4sO3nH/sxY/2ma/y6qN2hn3hhd8jK2QPCaz+aqsgG8yhE1ksAkVDaxwtcJe2dPRBIb3K6Phro9S2eQyYKOtUR+QxqJ6Quev4XQF1zMPzWDuR23SKS9HTi2YJbIMJTG1A05FyfuusRxiYOuW+q0kqodLpcd3csIkogslT3T1qcJ/bTXsHEHbqJt3YFF7FI2sEpMb2lBOcK86orzyUBQMsN38wLt658v/ih0LQHUmvXK9N70NygoXfl83c/KUg//A7YoV70tKYerpo2zAKYqwOpui0ve6XMm7W0nffu8Ay9qNtmMTH0K5zguiar6FzITncKo/XQfCjDsm+pXMXzmhrWqc9Am8g28SHx5Ng3bHVM6h3f8llR8r2HLFV3tnMmsBSLmrNIE5KFRFwvThtU50PMPz822KH0aMyXBKrv3MUZki8qTIqo0FIjxFI9favjWn8IFtdS4YlUr7fWYUDCUW
j2eio0wFVO+a2n2FC7+akdZyOXAHBabqk45HKmKrbrvcgZmeNCn5mGHDIBmODuOAiRGoahKK9f6oaxrfWDuWRqMOFnf+vp81AHQyFjLsUm71OKNHcrNnln4IGE5yo221M5OIv9OXIiJ2Owz9rqbJKFRiy11mbqnBoKIq4TcOqDJOVvBLo/3X5KL1t2SmorC9iQ0EG5BH2zrMsI6wCXj/j1baDwuYbIe7sM3R43h5dr/uEd1CfjdhDmkdQfjqzctwOUD0O66ulBAmsHMRAvpr4OQSFjnpgWOgS24IcK+e4AdKhHvaKN17Ki75/2aiG6owHEd83WUf9Hl+hDHwsha5LA9dw5wGVTj7VsufiKPSQP8xpt2h8Bns8ssXO7rXhufS2prsaxFqEMiS4ycr0WyaMYsbjj69dTM83QPBN9OrkTBcbLWVIBF0DgYinNn2xg/nv1o3tn3oGMmQBzhup0AVEpOG5kA+W9/hzRXproP4vN7NTzXFzkQouR/TvCEAlAQD6o0dzTkFBDvvvjnmYaWYa4Pv2nYU4yD+XbMG2N+PiObPlQF0dVqYD6CghSMLW1W+weOahQKCyt6y5Ah0O/JIUYAqk+1GQX3vzdIf/KMbI1AjLacPT3B8yr+tpCraoTalPFhBSJDiUjkXL69nv+eZV/3hJmqIWhcHK1bNHg8HA9z10JlaKX3PfXuKdOx7VqOmzTt2/54RWFwELCtnzY0u7MzlN2enxOH4rINZKjDlc5X6Lc5Z8YgZEg4mYAEwSMYUQv/zSQSUfz8FVgrOzhskO6pd24rrwastHQalpt9H8rR0ZXCfD9gke/cLmpeYsBLUUb/WK3pINLVvKjH1FRdXkzJ7lfsgfX6KkZMLDpe5+eD+mHZHbrgHc1ZadueedaEdP1hcBoDWF41lCDfreq0RA5FVwwg9/qJZu834s70THaUvR9GARAOEW25jGMn/Qi+cBvtQbM3axp0RvodKFJ9zsqiow0TU1aCmbgFOkCU8T8rv4OOijc6dbgQ1+2lXWWYeDRsMlQ2p1nLQwLfY3348+f/wMAfdG3'
},
# Test 3 classes filtered by Maps
{'props' : 'CPAnalyst_test_data/nirht_test.properties',
'ts' : 'CPAnalyst_test_data/nirht_3class_test.txt',
'nRules' : 5,
'filter' : 'MAPs',
'group' : None,
'vals' : 'eJy9mluOZjcOg9+zirOBNGzL17UE2f825iNlV5BknjNIF6b6v8iWKJLS6T/+KN/41vpK/ii/WosVbfc+VufX3U6rvbS22uDXKHW3OeqJdn77vt/Lr8GLu/Y519m8Ye7J+3vbY+vVmL2VcUbUEX/+9hFtfqFAkdEi+ulllxhj8uvi/5S6xinT0fYYbbe159KXtbP2aa2PmJPgvL5G61HmHGv4DaVx7jnrahlsfZso+YOX64o1a402dNQd49TgOtGXXj1zteh11l7v1QYv1cbFIg+3yj6jlRPhu62+Vo9Yddxw+6ujf5Wf4YDkii8tZ83dFPBMkjXHKKsqYOmln3kqt8qAs5/NZfoqRSc6+uo+yinbATluHaXUVlvPgDWoXvGf8quvqLtz+tF1vRHB8aOMHfq177pqm5XS3miForU9x+aEuh/vHLynj9ra8RuCo7UJIjjjDdj/64ADxBT/4SvnLmWUPYsSLOjVNtY5sbuPsw2dPcq8AangmKcsPqMjlXlA1AYDlMnx1gGhEXzFhWed//EFW/1ANz/GJ1CEDgze+aSuOHlzFybAkU6g4pfTorWLUX8T31daMaYaAO6FCGrnX6XwdmAeNGG9mGntO/Tft8fX/JXcfS+aqh+nuPdCV9Uo+pV0jqoAc4ZbrlS+m549lb/3lSZkcPj+XapOXMc+3B+cjhsvvlrPp5/dETufP2RtVZ14LDqu8NoIH0c3OWdvDpU3rFHb6XH2qcopFW+TQs5ZXNLRRqdJZ2ln34C04CCl9GHPpEIhXa0z+VolaXcO2zdpbuYg2rQfDrTzjqBMTVtPn65a6Dw9+lwzkodqzH3W6OeihuryEs14HI/DROPSBzT4wptDtxVniNVIXCkR1CS649XN4QchyzDPccVROqQRrTvl9D1na9zgEk0A1EZYLrkSNtyA1EFuU79y9tYhkkll9CodcojWYaJMatBEAb0towYGhmjngojM2pXagMAJms8NSDI3IP34e7cGl6hkHXgZmKSsby50XETeiiYEsC/zouZEWWuRgHZ8XF0H6LbeTHxDzTuAwJw3YPsaBdZPJVbfyfFJbHM8JOFs2nP2bdSC4SLkAPtHpQPM0PnhXh10FP/bA+gYtSQqeEtV996I679lGs5FT6gjlD+ECWnjSimztDikL8IZZpqtPlxAXC2AAvIKDdfGbZHDYWmkcm40yg98hrio3mggUypYUwo7V6Ch+GCYVybv5WhwR70kgOjTyhH3frosCSCnoYTTr4jYoMJ7PBZosQk554VMN3nfH+56jMBSa7gJ6LixqWY1a3Q6khYXgN6dgPQwdx2dsM5pbUSi5z2Q5BCpj3YTqmbn+CQV8Ezbg1o65gNQhN2EgpEkRFukIERt6DFOv1xKYwNJ8jxm2omKF9rcdQrVc1A9TgS0blL7dAOqDeNLKhHuwPmyw5gwS2tl6T9zWy9QB3ze241IFrretFxn9SwmB0q78kFFzANw3Y24CLq+8/XeJcZ8ZoqbSEI1tw2VEKboINh9Ugug4p71VhIeU+8e3qhCYt4GTbxEByZ8uB9xwVmtK1BIHx9BEYHc51NyGwESp6dfF28lOBJgV8OFSCKCtX8cIuwHNCbF787CPmJs0WeCtZj5Oqx3JYOe6+qNXlIwugl4cww3u/gQevUX2JYdvjBg51vHIvoqtKqkw1jqdU3JCpgzECBG3gO+XsB+RTgV6h8iTN0ABXneZjeagwrCzUj7/xVhG15Qh9Vr7leoYuH1Nl944wFVMAl85jR/U3YBC0toNqX1jqiqu9kwDadNcoDyJp2OWniZNk2BkYPcMNVc4f4P7HbS/X4XJEyxNS3NAYENH6GJlyOCEJLLHXDPbla6FFggW/UHNvBx4FbdC1AvB7DSz+xWCk62yM2VKHQIap7fNsX9AtSbVuVzxRgSX6BRoN2YgK+ESihm+YYUe8u7QCe+kSAgRirboIe8IWRag6LccM2KSBHxLXrLEgPKnp/lgGgDZa9nuk
2W4DeV59EyIAlZks1dT441WH1+RZIiXx4QqkD87kfH86fKb2z3hejxUCVg2fMjHYoxNPUqYgHoikxAqn7IcsCzUSNVH3wcWrGmhmKxFoMAovIiUj9ogitSTQORz9LY1M0RuB2TCJwXWUOcHWxGjUvamqYpjhFkPDM63RShqadl31gQcX6A4cY8H9RFd+yUqVmnhHptJxWOIWmbv/MB4FbqD0cbYDAdL5Dx8tqSb2fUgh9vPByVkM3wNq5mrGKXOA3TaV8hnwbwSKzrDgfjwma4sTYHAXXkoJwcCHEBmu5gCAVYxRqitBiDfJHN1HhtwUmZ8L62cFRAlfeASaLh0Y1LOE6IVT5MpxhGFBpEZV9UpSyo0tR5f+dXNLAXOTsLjuwNug4Orqshl1Uekfans4fj4enBVq1W3qKKB9AkfToxlzkM4ADjZDxOvzWPcGTiaa5Q1adn766j8AegvnyuT05Y7e9pm4vIMdG4J5PLZYDd9K+0CkM8qOnPJcIrGmXkEnZmdwthI10iIkAkdL3/pHPrv/4pdyZKDls8PVxlWLLW3R0YDLJkjXt7UhHTAFz53jdMyXNI4nGhUXOZAFtRcCX1BiSXdViAk9Y0r1NheKNam7BFDOG4IxeE6Zv7D+n4k3wgVKCRauLGpNkjQ2PJaswmaC9WK+4NN+0uxm7XBSPvg/QD7ZQ2/AkpLemqqwl45rRICmWScXnIcbI47lnQ0c7hyhQcN8Vx69YPczU14g/P+b+WLHrTOLLsD/j6BeVMjXlqB7LFN6IyPW239jTMqwwiYZem5oejAFfRcWGHo8+X8gq4p/NJC56Z8MTXggrub5sGtI8A0dWHiqhyWayG4clUI6GGtI/giVUJk4ozQEmAgwB2OXtjnSAOoMwJ7Ux/HWzy6eppT9yQPN4C3WdeGG54xi79VbMxrTQk0G+a16YCYktoJeQxu2drZIPR90vn9iiqybBkRrEiAcgpvK2F5gVtrwBcN2OgkaCccS+StNEpegzDg392wyvn2LRuh4lt05wlyr2cfSIbPidSSzXWY+uOtSWDUSMd1GNNaCY7OFZgc+cYAIMlo3eadZpShPuxZg8eOUaNGu22BNoD4WnNBiV52j1ai4EtuThhaOOXyBrgMeThD83HjNF3NJQTOHJT9rD05UbhYIWaO7Z6lk6Bqtys4qmshAgiSbPrVBMyw+Y00dSRFAgo+ApYDJqGxO3MqTy/OHBjK1N6u/CGhypuWrwPFYVqniM98k02+uNNvww6C6Nb3IYy0Dg9orRMKi0AatGRd8Uqp9Y1vOR4RcmQMulYDseYaP3GC3crxAwiBUwV7MgOSmo3n5Z5izjHdsK6by9y+/GjwAXXD7cd83SB0SmwXWNOT9qWUC9GqPECMvt6wM8b4kUlPYxQdbum0hk5ztwSMSniMSFTL010wa1Rl1uNZE46hGBRcx/QtA1B9+fp88XDqyneyCtqRakd8N0KYTEg4qNKRWaUiRhpP/3N90CEr6TGdgKkm2zH9sbDGYXsutbA845OcPKnq+QPkWXB7JKUklQh0KFHcKGFjfEkZND2HZyqiFZrkeZNKfUZx8vinl6K48t3IYZvC4XtfMvZchcKf1vOLvIB6IoU0maODhjaSJzXiX9fzmpipZexyV7SQFK4vaDUD6UNSmowqQrZbEkF/IBAZk2DpioAipqrPT6JZ8dS300bLzJ84ud2Nh6YJTSi2K0ugJdLc0344gVc3pZUV9NX5PBUCv5wwG3XjXzGymWi7BEVJst5Rd5Ac2tSKTZfUF2Vp9jWb4KjAhQFg4VpeEEZD4/GmY9Z2ZxKK/IZCmqs4MDl4QhiRidtqDc6j69OVwr5DiJp2vI1kVOBa5Vc+nPhqjW/0DbfyhSqWVrVNNOAnSQHRzOG9zN8dpI2dCdSiGVBGDAYc6/sLxwWZYhsp9A6R86m99ThUf0OQPsiMhhOLYi0GvAMOfQuMIsAGrAgVQit5iwtQzE6WiHesZtBhlkL+di5JT24Xs
y51sE2+0XVFyH9RGTghqFRxeCSRhyMSxwpR44bWrqCGPTAIQECifqrRdAiTgmm9zAWpnZENLFrW7XbHtpy9p+I3Ay6oJJa0pkYqQwQoZPshpEanRutmLns0C9Nlnu8/Td6DksAmGPpQf4Zb1CHMq5jRPu6JPDBB+OyNT7t9G9mYaoGZXcXpi1vJHL32/WwKdSTPS0UvIpOISw7ea5JwFVZmConGhoGCwPZz5/1d/uWLNzqqR2SY2+Eh0dv7ZY580nMe1G8cG7M4G9lA3TJI8bc4oaDKJrxyo2IymoVCoL3ozomSBs49NezJ3WaNn5GJ7A5cr6MGCdNy1Efkuu8ow4P12m9Zpco9e5KdPUjM+mINrXI6VtGawEOF4RH/itYWh7Af3ZREB5o5aI1H3zhOaAlJpn9l2ChgERU6b7cJ9AMfMW2L6Y1xvIKsf/U8XxbC42V+wwxx5libA/fmEbkBLyEJ2PsdTdyTEPyN6raiCfPTMWAtWlavMtFcIzgwdiP66LoKUaXQ643SVRJG/v9/espBvYJo4ytm+du3JeW/UMl2P9+iJErJBpS6odBfiHVG37mpszbcJN2PXukWt7zgHxujZX0uKmFPQMXbrnldLrV5tiAqi2yjA78yzjbQIrT6pVS097jmTnkAjL3DlWaYcutHQI2mBHYjpX5ongmCA9x3HPJCZxzY+K3iw3I8cTY4Jwtg+7+QuK12LCsvYjrv94UgV8mjqKQGioUsqidjrrduoXX4g/faupq2otsT2V3wagHpViQmdy6J2D2c9XS8jGuNjJ0KfbgRTxpWXGu/Y5W1GDq4XEY8ZgVvmIml/9jcTO0wVSNe64xgDJD4mJYNEv+e3HTdLktStWO1rSih+xqsLBD1sMC3qCndl7p4qukn4D6DeLyGRQBmfKiheZE345cbw7GJOcA6PWestNR2hcNjf92N7h5asYE7gBN60Y8XuRCc5CQrSXCrndu1ExLyUQlduR6nqiAaegFp6p/IbB+HrnNdI7XPtJo8CmKs7w9W3rYcLQGcP+Hho7QNBw/S2k9PIBdMh8DN360Fl535tD6s2oNGj/3A6aCKj/nxQweosiyrOwDRn7t9Ma46z84eUgs1ttK445QGaQlO8H/UkFPNmdeGT8CR9DfrxfxL9aMlRtNDB9zEpqa1o6KQaPbe/zppy/abK18kIBVPU2PN6Jdb1dFSGPkYoXMoEGg+Gc0bhIMkcy4T0/0SAcDyPlzD7X0tK3eZ6Kh/fbQfu/nidCS16siVY9AzTNNiOZSiP2kARS/gaONu+S7m75mielkxUq+NQ/IXJTxNtLHO9q7rOna4JMzjbdOAEgnuXqEehfSMbXdxP69eFozTG0zb9OjvBreRbk5d5IsPVP06zQ9fqbKyO5rG8/RhLbyn53AT6OLJ8hqDpFihCPy+eHSuwOPDyZMGZp6WgMn5/ZSCN9a4nqWX5oA1DE/g7HUgbBby7p0G1AGWNY+wv4MqtV4zst//vk/8UXPMw=='
},
# Test 2 classes filtered by MAPs with area output
{'props' : 'CPAnalyst_test_data/nirht_area_test.properties',
'ts' : 'CPAnalyst_test_data/nirht_2class_test.txt',
'nRules' : 5,
'filter' : 'MAPs',
'group' : None,
'vals' : 'eJxVmGGSbKcOg/9nFWcDuQUYjL2WVPa/jXwy0K9eavrU7cm0G9uSLPPPP+1b3976aV/v6X6f7Wt/cufyltn7Tt620aZl62MPs6//8Whrd18t+79/fUTyz7Z++Ndo4zxOnDHNolmMXXH2cu9J4DaJs+aeYQRPP3H2F00/nMiWvWdFiuZ75Oy5hiL1tLEbX9T2ItJuy1bM2f1Giq8vvmJx3G/Mtsc3LDa/zcnHdzNvrcfofLr9GemWGbz3/OvjF7bXnNF6i2UnXjfKxd9+M0a96lB9rB78N0NhWpBO7326u9LjCzjk0pfdIPMXZNfrBknqtTzXDdJ27klqvhVktNWJMXPfIItyq00rs14VhL9udG/2XkF20i+zHN7UsObT2o5s46Xj9yQWs14VpNkYVNTjBEnfe2c6zYpqJiUjCEndKKN/nJFc7eP78iP//IBFldRyb+q5fFXBvY2xrc2xvr8pMAgiJeurjxtrfEnbJqlR9rm/TAJFCjeq8nA6MoWxP6rytkVZaCbBJsCgV73lyhvMBGZeA0jxZ9ap/WxdgKjTdZrS5lxEVcAAAZQfOKr/hFxrjzFHN5pxQwpNBONknXetLeJzDn4d1f0qXYBC866YKl2nIb7ixpzTG7SCOnFjUi1wFXQiwDfIh157m6LZTsg2M5sYuIHDdkrbKIhitbW8rwA8txdiyxCTISLtpYuc12kKjScA/4RMg4otxQMO4aTb0rwQz2fp3jLfdmuomgWvLbyN1WDAUDb0SGiYC0yFihZ6u6YtAWWufrKFl+HLB5+wqxIGCVt8pRDkjqZ8IygxjXdF5DiUZ65WFajDWi7EwW9IgpP/juivK3z4kiHO6zCKLH1170d1wNo06DZAm8iwNnXcMX9BgpINhMIm5wIz3YQKag3p1gZ4Y+iA9GI1UAH04vt7iN+82+DyxklBo5d89R7nUQeaDWq1Nbb0lQNAKhS2jyl9+iNNGzY3X3cCzWK5SEg0SFaPCoQwUZYhbSOOD9UsUDaQRBxQEgbDfN44BJl8nMKnoD+H3gu4KJNA5iggbdzrqKJBb9qIZl8a0FQYyDfmO5kXwjpgmCoTOoJ8QQJqpBb6oF4NNUtxYNmmWsn/vw2kKYgSQLG8NYNOEw2ZwgLqaIIZoHfhdupQZts7+uqrhN+dbFenn91OUNTTQ2IXeaUEfBqpmy01wzYf/gB2U1vWQVqIQAm+ZyGtz5wq5HzgtW0+3MLD7kHpFGekxZwSGeXUZGyhNGssZs4NIVVGtRl9oBdZReR8Hc5P9PDGmj/d5Ht4dJNmWR2N+REggslXR0NcejS+zetg8BBArrjQZWQghfDdJXQpwCup9aVG1B+Cc5ourkqinKT2CNhg6xZvjvDNPKfGN6L0l94ilhLRrW/QTOO4yhVF2vB6kVC1g1oAA6Z4exE9R6VO/a456Mj4FyXFBJR+qjVRiGEmbAk5cmtVO9AHMynhjQeAdOSB6rx44yhdE44L0CEQTj17qbt6qbkJyxRTncbRoP83JJo6GT0o002aHgy9JO5ghX8LfwRF+E5IxzxI3EYBG6bAQaTvJ+4AkgIx7eMNNFkAXzU0ZDtC8yjAMgcvmNgC+PCtMVFVSbRgwEY0ZtxKQjvIzEgfl32wAiPjAQapDw/8CQ3LqiMDjEzxHpXz0KjlwHYmZHc0D2rup8VbZDhthnGd/Dhk2zJTDIozEFDegVJ0ZRzoFawIWaZzvHDqNJE1nOCNKQXlhZrTX0YnEDHAy3wjmGLyvxciL1+lmFSPcTQj86YcDXcB5xCyG3J8pVf0Qy3HfWgMkyz9krhyTKeCW0JYpzQXCRZtuhEh5qA1sO8WEQDSEsEcvwKGylRAiyKfkaDC1+DAHYBX3AWz+M4fWtwGH/NXxqgfmcNedoQaakSO6kk4Eg3aW6ktLEe/A20bt4RYJDyD7E57JRSmi3HSV+SHY+IG4U4pFWNbhMNQFvtoveiYsPCOSMGye0wNgmuEURbNb9JFZBFHPg
3Cy1bRzqYxvvbhHp5JY/x6tIIzYzzp1o21yq8M8S4cx4Yl1zELwBqrgJxSWzEb7E6ab4h1pdtxXWj/Jt19q6fsyDZXNZdgINGwz8Bn1+TG4jSKZNL7Gn8BJpvIOE9I5tjSAHB7R6SXRlgZH5wQnJ+YjMaBkWQrV8vEY97in0a5Wnk2pHnyOjWU45V/iL3fMhHHTspLqyFlrjgSoPSlgzIPmS8cA1clGIobHFv++8JQngQBGdZvX9IOsvk89UT3WgXGHp0510FOopalNwYl4NGQQJRdQ9yYpaxDeMobj2nSRb5eQw8+8MZGP3xRDBGB2qMCAqdrc4IrzIKbN182kUR++yygGlOyaGqQIUj8I03jdJQTHy15j5k/w2tngxoseKwI1/NuuQeR6rY8qZw0cGkB0QKmAQYmGKqShMlHU/siqBTAG04WTGFdryPBW8it01Zyu6MZcUdio9xS7vM4bgmPRV8mg7zQ1LRtJgnMWbYL8cB69v7CxBEZuUEsF8dDqEe5EObgwNGr3uoOToGBNVascuMgF++JAWAsvlhYeznST9swgbH92iHKOcsSOc6S9ihHGSbDV1KGaoNZ7bUY33uwIecSX5lJBsQ+j8oQetBArX21p8MOFmzGp4w7+tO10nHy9QKtsxaDXFnDlXo2Vb8Eme0a46EtqNYucqJ2GE5SuxAhtGvxb8+qsnd+Atk4g4S6iShZFZ53uW7ogOMXyvwiuciEIaIPd+zlmHqR0l9MOCGupVwNhpzYyG5qUSgVpGeNOXt0B1DCLjSFffWeUnqINtKSt6jCK8CAJEwtrYwgvcE461rArWzmlHFGeLLYgcXZk8Liv57NDKwwbVq/JVP5bslrx17TBSyOlfPlKL3cMDKtu47jhckTfuue4UIZb82IoEfQ6IXkfAzwofEEf2kRqeO+U90qR4dkp64U2jknjWfQF5xu8iktc4/1bh3EMeTj014KuCcdcFYDUlGlx5EGbKU2rdoDwCP2cRa2ftIQfAWDqz1M6gaBw2HMtG0jRhwZ2IYkqPqOZjVYQmeLfcCGNUc2z27+QCe15M83qlA3SW1omsJSlhatBdiDupExlRMHU+3CXKqgNHCWwWFZdz7BgP5fMIYm+CFJ5hYtCP3pKLmeukzCaY8oJ7Z0mUQiGJmf/2fxCU1DcPDiWe0ADHe0SoOU4YIja+VrUk5LszjK10BePihrrGjIoml1RbJfrDIKcoISf1jntZLp0qpqBT1APTIYtQJBntBFDD7kqL9pNaGSGMgXMVW3rXWYsT1l3V3LZcvjiLWqoxKxjiPWqq4M536WmD1c2yx/9a5QWl3I6JoI39bKXZ91r3w780l4bsVBbOwW1IHyE37NFsayYPsCCn7abGu1xeiDR36ZhZgq465LAYjUqoxDWkEMNr4T1HUxluQwfrcyoAQG1AIkPGM/1GRiQ/JjELXw5jx3cgTFcqKtgCcvtkmbVBgn/ou5/28Fct3xdBT+07pLrabuBHR7JWBiuQfTCcV52AELsmQIxrsYGOqslnrdEEiBVUetdlSz+p2ycGjPWZuNzsNFtMfsXEdqwIAxOvA4LUNco3lI0EQDXamACdzOOBdIjFIajicZdYHEKKXhNt9tFLKvW105/HfDpTFTt1qfSI3LlHbUJcI9l+wmv5L106llRlmL3m0BsRcGCKj+VIIPU3akAZPMIPlcW3OUHabbWtwxRacxKIVRclpT7pOdWbcTJqv2gnndlWkMggQ/j3PNbJJrFLzmQ9NdIQYP8ERdM1MDZs4av35oz1OX5RRlw2Q8NcFrJ5JTx8BJbrKGvy6pSm6IUf3QfTI+yWnXL9GQ1GgD+oJVHyNfrjvOSo++IP1467PSoz6MBYz2u9fSjdaU6LaXK9yVRVJ+ONQ8j5Mr+xGTdc/qK84LomvDqUYhjBCE5Pt8tNNlQ+hH9mhpK63niRWSkLbP/RZ6BdlAkWu3Iha9QKDXzz6IuboBAnyay+TABJlawuvq7Y+WdDYGVLN0cL
OLzBQb8yEEY7ExjPaodm8bmMaywqwL4okweCaoAuo+pI114+k6BBGIXzw6BYw55L///geLI7+9'
},
# Test 3 classes filtered by MAPs with area output
{'props' : 'CPAnalyst_test_data/nirht_area_test.properties',
'ts' : 'CPAnalyst_test_data/nirht_3class_test.txt',
'nRules' : 5,
'filter' : 'MAPs',
'group' : None,
'vals' : 'eJxFz8txAzEMA9B7qmABsYYfgBRr8Wz/bYRa2/FRo3kA+HyqUKpkrfUrD1vl4KamGsRXEIVwdFAeuto7O9rUvK8fGZsSX8uibiNQPdag1s2sjmM3Y29VNjJetmTrt5eOYHbbsdHYg81hdy+iJi7pdext7ggnJPd/SHaGBoN+MkZM93a+MmbY3Mb2eG93Ugz50rrS5hdmOuUHBE03m1rvEb0rwh3qH59iio9Xp2s2ER13wOTj3GQ5b12FRs2YuTCv6w/NUkka'
},
# Test 3 classes
{'props' : 'CPAnalyst_test_data/nirht_area_test.properties',
'ts' : 'CPAnalyst_test_data/nirht_3class_test.txt',
'nRules' : 5,
'filter' : None,
'group' : None,
'vals' : 'eJw1kMttBDEMQ++pQgVkDErUt5bFHrf/FqLJYH0yLD9S5OsFURmXc86v4Iz30GPcdeTCqbHY60xq7djDFOS0Rb9/ZGGTqi9c3V3V2mMtlx50wfeFVNGjGdGxZ/xBKckvSgCVAYzy9lUj1JGGnB1bhbpVuvss/A89Gm2i+TzcjgXdDbW919HTifC191szdaJ7PPhdnk3BA/Ns5Fz3Svrn0hA7lcXM6tX7XNg+uP2ojRbL7f70lfHt4dkBZysKNoq98bYUgKC3Iv6TEZtjXXSl3u8/HxNLBw=='
},
# Test 2 classes grouped by Well+Gene
{'props' : 'CPAnalyst_test_data/nirht_test.properties',
'ts' : 'CPAnalyst_test_data/nirht_2class_test.txt',
'nRules' : 5,
'filter' : None,
'group' : 'Well+Gene',
'vals' : 'eJxNkDtOBDEQRHNO4QOA1d3V33CFEMFoJQJEMppgOQL3D2gPwRCW7Hp+5X2PGvevbeiYcz4PmlVGVURRaR2JtENVulDxkAkKS5FK4Hgau8u43z5k84tgUQYuC7LqqELqSQrJToyCKvoUfvZ9fL69XuWQ6LsqCdaOEtIkC85+fLGCAUdKdPmsNMPG++P754GLorQMi6Eiy6giiCndhMdLSwQXjNMi9JTgPwJfBLCjnXtHG9L0hJe5J5MvAqBh4dmaC8DnL2z8X0FL2lt76algFL3KmpkLQJlqyKDG6nH8AuLPT3I='
},
]
    for i, test in enumerate(testdata):
        props_file = test['props']
        ts_file = test['ts']
        nRules = test['nRules']
        filter_name = test['filter']
        group = test['group']
        vals = numpy.array(test['vals'])
        logging.info('Loading properties file...')
        p.load_file(props_file)
        logging.info('Loading training set...')
        ts = TrainingSet(p)
        ts.Load(ts_file)
        data = score(p, ts, nRules, filter_name, group)
        nClasses = len(ts.labels)
        nKeyCols = len(image_key_columns())
        if base64.b64encode(zlib.compress(str(list(data)))) != vals:
            logging.error('Test %d failed' % (i))
    app.MainLoop()
| 166.940594 | 5,522 | 0.870174 | 698 | 16,861 | 20.931232 | 0.661891 | 0.010678 | 0.013963 | 0.01807 | 0.053114 | 0.053114 | 0.051745 | 0.050513 | 0.050513 | 0.050513 | 0 | 0.143449 | 0.083803 | 16,861 | 100 | 5,523 | 168.61 | 0.802305 | 0.016013 | 0 | 0.371795 | 0 | 0.038462 | 0.860779 | 0.842261 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.128205 | 0 | 0.128205 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
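The scoring test above stores its expected values as zlib-compressed, base64-encoded strings and compares `base64.b64encode(zlib.compress(str(list(data))))` against them. A minimal sketch of that round-trip follows; note the explicit `encode`/`decode` steps required on Python 3, where the original Python 2-era code passed a `str` straight to `zlib.compress`.

```python
import base64
import zlib


def encode_expected(values):
    # Serialize to text, compress, then base64-encode. zlib.compress takes
    # bytes on Python 3, hence the explicit UTF-8 encode.
    return base64.b64encode(zlib.compress(str(values).encode("utf-8")))


def decode_expected(blob):
    # Reverse the transform back to the original string form.
    return zlib.decompress(base64.b64decode(blob)).decode("utf-8")


blob = encode_expected([1.0, 2.0, 3.0])
roundtrip = decode_expected(blob)
```

The giant string literals in `testdata` above are exactly such blobs: decoding one yields the stringified list of per-image counts the test expects.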
585b6f4b9884bf8f1736765e8198c21111a5022b | 96 | py | Python | venv/lib/python3.8/site-packages/cryptography/utils.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 1 | 2021-11-07T22:40:27.000Z | 2021-11-07T22:40:27.000Z | venv/lib/python3.8/site-packages/cryptography/utils.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/cryptography/utils.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/36/cf/a1/b3470066738b13f6c4794ac382aad58b985d3d9efc197c0b0723374f24 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
543788a6d072233ea57b5563008174dead0b1187 | 99 | py | Python | super_stream_tools/stream_library/__init__.py | KorigamiK/media-tools | ff4e7490ab32a8a08491836ced8f0b3302218c1d | [
"MIT"
] | null | null | null | super_stream_tools/stream_library/__init__.py | KorigamiK/media-tools | ff4e7490ab32a8a08491836ced8f0b3302218c1d | [
"MIT"
] | null | null | null | super_stream_tools/stream_library/__init__.py | KorigamiK/media-tools | ff4e7490ab32a8a08491836ced8f0b3302218c1d | [
"MIT"
] | null | null | null | from .async_subprocess import async_subprocess
from .async_downloads import save_file, delete_file
| 33 | 51 | 0.878788 | 14 | 99 | 5.857143 | 0.571429 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 99 | 2 | 52 | 49.5 | 0.911111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
544632304ea7a0739ba8ec42a3b9c15b3c387c0e | 56,797 | py | Python | grafeas/grafeas/grafeas_v1/gapic/grafeas_client.py | hugovk/google-cloud-python | b387134827dbc3be0e1b431201e0875798002fda | [
"Apache-2.0"
] | 1 | 2019-03-26T21:44:51.000Z | 2019-03-26T21:44:51.000Z | grafeas/grafeas/grafeas_v1/gapic/grafeas_client.py | hugovk/google-cloud-python | b387134827dbc3be0e1b431201e0875798002fda | [
"Apache-2.0"
] | 3 | 2019-06-20T05:20:15.000Z | 2019-06-27T05:01:16.000Z | grafeas/grafeas/grafeas_v1/gapic/grafeas_client.py | hugovk/google-cloud-python | b387134827dbc3be0e1b431201e0875798002fda | [
"Apache-2.0"
] | 1 | 2019-03-29T18:26:16.000Z | 2019-03-29T18:26:16.000Z | # -*- coding: utf-8 -*-
#
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Accesses the grafeas.v1 Grafeas API."""
import functools
import pkg_resources
import warnings
from google.oauth2 import service_account
import google.api_core.gapic_v1.client_info
import google.api_core.gapic_v1.config
import google.api_core.gapic_v1.method
import google.api_core.gapic_v1.routing_header
import google.api_core.grpc_helpers
import google.api_core.page_iterator
import google.api_core.path_template
import grpc
from google.protobuf import empty_pb2
from google.protobuf import field_mask_pb2
from grafeas.grafeas_v1.gapic import enums
from grafeas.grafeas_v1.gapic import grafeas_client_config
from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
from grafeas.grafeas_v1.proto import grafeas_pb2
from grafeas.grafeas_v1.proto import grafeas_pb2_grpc
_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution("grafeas",).version
class GrafeasClient(object):
    """
    `Grafeas <https://grafeas.io>`__ API.

    Retrieves analysis results of Cloud components such as Docker container
    images.

    Analysis results are stored as a series of occurrences. An
    ``Occurrence`` contains information about a specific analysis instance
    on a resource. An occurrence refers to a ``Note``. A note contains
    details describing the analysis and is generally stored in a separate
    project, called a ``Provider``. Multiple occurrences can refer to the
    same note.

    For example, an SSL vulnerability could affect multiple images. In this
    case, there would be one note for the vulnerability and an occurrence
    for each image with the vulnerability referring to that note.
    """

    # The name of the interface for this client. This is the key used to
    # find the method configuration in the client_config dictionary.
    _INTERFACE_NAME = "grafeas.v1.Grafeas"
    @classmethod
    def note_path(cls, project, note):
        """Return a fully-qualified note string."""
        return google.api_core.path_template.expand(
            "projects/{project}/notes/{note}", project=project, note=note,
        )

    @classmethod
    def occurrence_path(cls, project, occurrence):
        """Return a fully-qualified occurrence string."""
        return google.api_core.path_template.expand(
            "projects/{project}/occurrences/{occurrence}",
            project=project,
            occurrence=occurrence,
        )

    @classmethod
    def project_path(cls, project):
        """Return a fully-qualified project string."""
        return google.api_core.path_template.expand(
            "projects/{project}", project=project,
        )
    def __init__(self, transport, client_config=None, client_info=None):
        """Constructor.

        Args:
            transport (~.GrafeasGrpcTransport): A transport
                instance, responsible for actually making the API calls.
                The default transport uses the gRPC protocol.
                This argument may also be a callable which returns a
                transport instance. Callables will be sent the credentials
                as the first argument and the default transport class as
                the second argument.
            client_config (dict): DEPRECATED. A dictionary of call options for
                each method. If not specified, the default configuration is used.
            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
                The client info used to send a user-agent string along with
                API requests. If ``None``, then default info will be used.
                Generally, you only need to set this if you're developing
                your own client library.
        """
        # Raise deprecation warnings for things we want to go away.
        if client_config is not None:
            warnings.warn(
                "The `client_config` argument is deprecated.",
                PendingDeprecationWarning,
                stacklevel=2,
            )
        else:
            client_config = grafeas_client_config.config

        # Instantiate the transport.
        # The transport is responsible for handling serialization and
        # deserialization and actually sending data to the service.
        self.transport = transport

        if client_info is None:
            client_info = google.api_core.gapic_v1.client_info.ClientInfo(
                gapic_version=_GAPIC_LIBRARY_VERSION,
            )
        else:
            client_info.gapic_version = _GAPIC_LIBRARY_VERSION
        self._client_info = client_info

        # Parse out the default settings for retry and timeout for each RPC
        # from the client configuration.
        # (Ordinarily, these are the defaults specified in the `*_config.py`
        # file next to this one.)
        self._method_configs = google.api_core.gapic_v1.config.parse_method_configs(
            client_config["interfaces"][self._INTERFACE_NAME],
        )

        # Save a dictionary of cached API call functions.
        # These are the actual callables which invoke the proper
        # transport methods, wrapped with `wrap_method` to add retry,
        # timeout, and the like.
        self._inner_api_calls = {}

    # Service calls
    def get_occurrence(
        self,
        name,
        retry=google.api_core.gapic_v1.method.DEFAULT,
        timeout=google.api_core.gapic_v1.method.DEFAULT,
        metadata=None,
    ):
        """
        Gets the specified occurrence.

        Example:
            >>> from grafeas import grafeas_v1
            >>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
            >>>
            >>> address = "[SERVICE_ADDRESS]"
            >>> scopes = ("[SCOPE]")
            >>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
            >>> client = grafeas_v1.GrafeasClient(transport)
            >>>
            >>> name = client.occurrence_path('[PROJECT]', '[OCCURRENCE]')
            >>>
            >>> response = client.get_occurrence(name)

        Args:
            name (str): The name of the occurrence in the form of
                ``projects/[PROJECT_ID]/occurrences/[OCCURRENCE_ID]``.
            retry (Optional[google.api_core.retry.Retry]): A retry object used
                to retry requests. If ``None`` is specified, requests will
                be retried using a default configuration.
            timeout (Optional[float]): The amount of time, in seconds, to wait
                for the request to complete. Note that if ``retry`` is
                specified, the timeout applies to each individual attempt.
            metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
                that is provided to the method.

        Returns:
            A :class:`~grafeas.grafeas_v1.types.Occurrence` instance.

        Raises:
            google.api_core.exceptions.GoogleAPICallError: If the request
                failed for any reason.
            google.api_core.exceptions.RetryError: If the request failed due
                to a retryable error and retry attempts failed.
            ValueError: If the parameters are invalid.
        """
        # Wrap the transport method to add retry and timeout logic.
        if "get_occurrence" not in self._inner_api_calls:
            self._inner_api_calls[
                "get_occurrence"
            ] = google.api_core.gapic_v1.method.wrap_method(
                self.transport.get_occurrence,
                default_retry=self._method_configs["GetOccurrence"].retry,
                default_timeout=self._method_configs["GetOccurrence"].timeout,
                client_info=self._client_info,
            )

        request = grafeas_pb2.GetOccurrenceRequest(name=name,)
        if metadata is None:
            metadata = []
        metadata = list(metadata)
        try:
            routing_header = [("name", name)]
        except AttributeError:
            pass
        else:
            routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
                routing_header
            )
            metadata.append(routing_metadata)

        return self._inner_api_calls["get_occurrence"](
            request, retry=retry, timeout=timeout, metadata=metadata
        )
def list_occurrences(
self,
parent,
filter_=None,
page_size=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Lists occurrences for the specified project.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]")
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # Iterate over all results
>>> for element in client.list_occurrences(parent):
... # process element
... pass
>>>
>>>
>>> # Alternatively:
>>>
>>> # Iterate over results one page at a time
>>> for page in client.list_occurrences(parent).pages:
... for element in page:
... # process element
... pass
Args:
parent (str): The name of the project to list occurrences for in the form of
``projects/[PROJECT_ID]``.
filter_ (str): The filter expression.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.api_core.page_iterator.PageIterator` instance.
An iterable of :class:`~grafeas.grafeas_v1.types.Occurrence` instances.
You can also iterate over the pages of the response
using its ``pages`` property.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_occurrences" not in self._inner_api_calls:
self._inner_api_calls[
"list_occurrences"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_occurrences,
default_retry=self._method_configs["ListOccurrences"].retry,
default_timeout=self._method_configs["ListOccurrences"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.ListOccurrencesRequest(
parent=parent, filter=filter_, page_size=page_size,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
iterator = google.api_core.page_iterator.GRPCIterator(
client=None,
method=functools.partial(
self._inner_api_calls["list_occurrences"],
retry=retry,
timeout=timeout,
metadata=metadata,
),
request=request,
items_field="occurrences",
request_token_field="page_token",
response_token_field="next_page_token",
)
return iterator

def delete_occurrence(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Deletes the specified occurrence. For example, use this method to delete an
occurrence when the occurrence is no longer applicable for the given
resource.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> name = client.occurrence_path('[PROJECT]', '[OCCURRENCE]')
>>>
>>> client.delete_occurrence(name)
Args:
name (str): The name of the occurrence in the form of
``projects/[PROJECT_ID]/occurrences/[OCCURRENCE_ID]``.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "delete_occurrence" not in self._inner_api_calls:
self._inner_api_calls[
"delete_occurrence"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.delete_occurrence,
default_retry=self._method_configs["DeleteOccurrence"].retry,
default_timeout=self._method_configs["DeleteOccurrence"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.DeleteOccurrenceRequest(name=name,)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
self._inner_api_calls["delete_occurrence"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def create_occurrence(
self,
parent,
occurrence,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a new occurrence.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `occurrence`:
>>> occurrence = {}
>>>
>>> response = client.create_occurrence(parent, occurrence)
Args:
parent (str): The name of the project in the form of ``projects/[PROJECT_ID]``, under
which the occurrence is to be created.
occurrence (Union[dict, ~grafeas.grafeas_v1.types.Occurrence]): The occurrence to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.Occurrence`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.Occurrence` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "create_occurrence" not in self._inner_api_calls:
self._inner_api_calls[
"create_occurrence"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_occurrence,
default_retry=self._method_configs["CreateOccurrence"].retry,
default_timeout=self._method_configs["CreateOccurrence"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.CreateOccurrenceRequest(
parent=parent, occurrence=occurrence,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_occurrence"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def batch_create_occurrences(
self,
parent,
occurrences,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates new occurrences in batch.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `occurrences`:
>>> occurrences = []
>>>
>>> response = client.batch_create_occurrences(parent, occurrences)
Args:
parent (str): The name of the project in the form of ``projects/[PROJECT_ID]``, under
which the occurrences are to be created.
occurrences (list[Union[dict, ~grafeas.grafeas_v1.types.Occurrence]]): The occurrences to create. Max allowed length is 1000.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.Occurrence`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.BatchCreateOccurrencesResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "batch_create_occurrences" not in self._inner_api_calls:
self._inner_api_calls[
"batch_create_occurrences"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.batch_create_occurrences,
default_retry=self._method_configs["BatchCreateOccurrences"].retry,
default_timeout=self._method_configs["BatchCreateOccurrences"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.BatchCreateOccurrencesRequest(
parent=parent, occurrences=occurrences,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["batch_create_occurrences"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def update_occurrence(
self,
name,
occurrence,
update_mask=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Updates the specified occurrence.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> name = client.occurrence_path('[PROJECT]', '[OCCURRENCE]')
>>>
>>> # TODO: Initialize `occurrence`:
>>> occurrence = {}
>>>
>>> response = client.update_occurrence(name, occurrence)
Args:
name (str): The name of the occurrence in the form of
``projects/[PROJECT_ID]/occurrences/[OCCURRENCE_ID]``.
occurrence (Union[dict, ~grafeas.grafeas_v1.types.Occurrence]): The updated occurrence.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.Occurrence`
update_mask (Union[dict, ~grafeas.grafeas_v1.types.FieldMask]): The fields to update.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.FieldMask`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.Occurrence` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "update_occurrence" not in self._inner_api_calls:
self._inner_api_calls[
"update_occurrence"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.update_occurrence,
default_retry=self._method_configs["UpdateOccurrence"].retry,
default_timeout=self._method_configs["UpdateOccurrence"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.UpdateOccurrenceRequest(
name=name, occurrence=occurrence, update_mask=update_mask,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["update_occurrence"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def get_occurrence_note(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Gets the note attached to the specified occurrence. Consumer projects can
use this method to get a note that belongs to a provider project.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> name = client.occurrence_path('[PROJECT]', '[OCCURRENCE]')
>>>
>>> response = client.get_occurrence_note(name)
Args:
name (str): The name of the occurrence in the form of
``projects/[PROJECT_ID]/occurrences/[OCCURRENCE_ID]``.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.Note` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "get_occurrence_note" not in self._inner_api_calls:
self._inner_api_calls[
"get_occurrence_note"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.get_occurrence_note,
default_retry=self._method_configs["GetOccurrenceNote"].retry,
default_timeout=self._method_configs["GetOccurrenceNote"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.GetOccurrenceNoteRequest(name=name,)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["get_occurrence_note"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def get_note(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Gets the specified note.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> name = client.note_path('[PROJECT]', '[NOTE]')
>>>
>>> response = client.get_note(name)
Args:
name (str): The name of the note in the form of
``projects/[PROVIDER_ID]/notes/[NOTE_ID]``.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.Note` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "get_note" not in self._inner_api_calls:
self._inner_api_calls[
"get_note"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.get_note,
default_retry=self._method_configs["GetNote"].retry,
default_timeout=self._method_configs["GetNote"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.GetNoteRequest(name=name,)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["get_note"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def list_notes(
self,
parent,
filter_=None,
page_size=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Lists notes for the specified project.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # Iterate over all results
>>> for element in client.list_notes(parent):
... # process element
... pass
>>>
>>>
>>> # Alternatively:
>>>
>>> # Iterate over results one page at a time
>>> for page in client.list_notes(parent).pages:
... for element in page:
... # process element
... pass
Args:
parent (str): The name of the project to list notes for in the form of
``projects/[PROJECT_ID]``.
filter_ (str): The filter expression.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.api_core.page_iterator.PageIterator` instance.
An iterable of :class:`~grafeas.grafeas_v1.types.Note` instances.
You can also iterate over the pages of the response
using its ``pages`` property.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_notes" not in self._inner_api_calls:
self._inner_api_calls[
"list_notes"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_notes,
default_retry=self._method_configs["ListNotes"].retry,
default_timeout=self._method_configs["ListNotes"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.ListNotesRequest(
parent=parent, filter=filter_, page_size=page_size,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
iterator = google.api_core.page_iterator.GRPCIterator(
client=None,
method=functools.partial(
self._inner_api_calls["list_notes"],
retry=retry,
timeout=timeout,
metadata=metadata,
),
request=request,
items_field="notes",
request_token_field="page_token",
response_token_field="next_page_token",
)
return iterator

def delete_note(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Deletes the specified note.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> name = client.note_path('[PROJECT]', '[NOTE]')
>>>
>>> client.delete_note(name)
Args:
name (str): The name of the note in the form of
``projects/[PROVIDER_ID]/notes/[NOTE_ID]``.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "delete_note" not in self._inner_api_calls:
self._inner_api_calls[
"delete_note"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.delete_note,
default_retry=self._method_configs["DeleteNote"].retry,
default_timeout=self._method_configs["DeleteNote"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.DeleteNoteRequest(name=name,)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
self._inner_api_calls["delete_note"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def create_note(
self,
parent,
note_id,
note,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a new note.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `note_id`:
>>> note_id = ''
>>>
>>> # TODO: Initialize `note`:
>>> note = {}
>>>
>>> response = client.create_note(parent, note_id, note)
Args:
parent (str): The name of the project in the form of ``projects/[PROJECT_ID]``, under
which the note is to be created.
note_id (str): The ID to use for this note.
note (Union[dict, ~grafeas.grafeas_v1.types.Note]): The note to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.Note`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.Note` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "create_note" not in self._inner_api_calls:
self._inner_api_calls[
"create_note"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_note,
default_retry=self._method_configs["CreateNote"].retry,
default_timeout=self._method_configs["CreateNote"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.CreateNoteRequest(
parent=parent, note_id=note_id, note=note,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_note"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def batch_create_notes(
self,
parent,
notes,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates new notes in batch.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `notes`:
>>> notes = {}
>>>
>>> response = client.batch_create_notes(parent, notes)
Args:
parent (str): The name of the project in the form of ``projects/[PROJECT_ID]``, under
which the notes are to be created.
notes (dict[str, Union[dict, ~grafeas.grafeas_v1.types.Note]]): The notes to create, keyed by note ID. Max allowed length is 1000.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.Note`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.BatchCreateNotesResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "batch_create_notes" not in self._inner_api_calls:
self._inner_api_calls[
"batch_create_notes"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.batch_create_notes,
default_retry=self._method_configs["BatchCreateNotes"].retry,
default_timeout=self._method_configs["BatchCreateNotes"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.BatchCreateNotesRequest(parent=parent, notes=notes,)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["batch_create_notes"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def update_note(
self,
name,
note,
update_mask=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Updates the specified note.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> name = client.note_path('[PROJECT]', '[NOTE]')
>>>
>>> # TODO: Initialize `note`:
>>> note = {}
>>>
>>> response = client.update_note(name, note)
Args:
name (str): The name of the note in the form of
``projects/[PROVIDER_ID]/notes/[NOTE_ID]``.
note (Union[dict, ~grafeas.grafeas_v1.types.Note]): The updated note.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.Note`
update_mask (Union[dict, ~grafeas.grafeas_v1.types.FieldMask]): The fields to update.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~grafeas.grafeas_v1.types.FieldMask`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~grafeas.grafeas_v1.types.Note` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "update_note" not in self._inner_api_calls:
self._inner_api_calls[
"update_note"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.update_note,
default_retry=self._method_configs["UpdateNote"].retry,
default_timeout=self._method_configs["UpdateNote"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.UpdateNoteRequest(
name=name, note=note, update_mask=update_mask,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["update_note"](
request, retry=retry, timeout=timeout, metadata=metadata
)

def list_note_occurrences(
self,
name,
filter_=None,
page_size=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Lists occurrences referencing the specified note. Provider projects can use
this method to get all occurrences across consumer projects referencing the
specified note.
Example:
>>> from grafeas import grafeas_v1
>>> from grafeas.grafeas_v1.gapic.transports import grafeas_grpc_transport
>>>
>>> address = "[SERVICE_ADDRESS]"
>>> scopes = ("[SCOPE]",)
>>> transport = grafeas_grpc_transport.GrafeasGrpcTransport(address, scopes)
>>> client = grafeas_v1.GrafeasClient(transport)
>>>
>>> name = client.note_path('[PROJECT]', '[NOTE]')
>>>
>>> # Iterate over all results
>>> for element in client.list_note_occurrences(name):
... # process element
... pass
>>>
>>>
>>> # Alternatively:
>>>
>>> # Iterate over results one page at a time
>>> for page in client.list_note_occurrences(name).pages:
... for element in page:
... # process element
... pass
Args:
name (str): The name of the note to list occurrences for in the form of
``projects/[PROVIDER_ID]/notes/[NOTE_ID]``.
filter_ (str): The filter expression.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.api_core.page_iterator.PageIterator` instance.
An iterable of :class:`~grafeas.grafeas_v1.types.Occurrence` instances.
You can also iterate over the pages of the response
using its `pages` property.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_note_occurrences" not in self._inner_api_calls:
self._inner_api_calls[
"list_note_occurrences"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_note_occurrences,
default_retry=self._method_configs["ListNoteOccurrences"].retry,
default_timeout=self._method_configs["ListNoteOccurrences"].timeout,
client_info=self._client_info,
)
request = grafeas_pb2.ListNoteOccurrencesRequest(
name=name, filter=filter_, page_size=page_size,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
iterator = google.api_core.page_iterator.GRPCIterator(
client=None,
method=functools.partial(
self._inner_api_calls["list_note_occurrences"],
retry=retry,
timeout=timeout,
metadata=metadata,
),
request=request,
items_field="occurrences",
request_token_field="page_token",
response_token_field="next_page_token",
)
return iterator
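The `GRPCIterator` assembled above encapsulates one simple contract: invoke the wrapped call, yield the items in the `occurrences` field, then re-issue the request with the response's `next_page_token` until that token comes back empty. A minimal stand-alone sketch of that contract (the `fake_api`, its dict-shaped requests and responses, and `iterate_pages` are illustrative stand-ins, not the real transport or the `google.api_core` API):

```python
import functools


def iterate_pages(method, request, items_field, request_token_field, response_token_field):
    """Yield items across pages, following the response's continuation token."""
    while True:
        response = method(request)
        for item in response[items_field]:
            yield item
        token = response.get(response_token_field, "")
        if not token:
            return
        # Re-issue the same request with the continuation token set.
        request = dict(request, **{request_token_field: token})


# A fake two-page API keyed by page token (hypothetical data).
_PAGES = {
    "": {"occurrences": ["occ-1", "occ-2"], "next_page_token": "p2"},
    "p2": {"occurrences": ["occ-3"], "next_page_token": ""},
}


def fake_api(request, metadata=None):
    return _PAGES[request["page_token"]]


# Mirrors the functools.partial wiring above: bake per-call options into the method.
method = functools.partial(fake_api, metadata=[("name", "projects/p/notes/n")])
items = list(iterate_pages(method, {"page_token": ""}, "occurrences",
                           "page_token", "next_page_token"))
```

Because the per-call `retry`, `timeout`, and `metadata` are bound with `functools.partial`, every page fetch the iterator makes carries the same options as the first.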
# serial_constants.py (ThomasGerstenberg/sublime3-serial-monitor)

# Constants
SYNTAX_FILE = "Packages/serial_monitor/syntax/serial_monitor.tmLanguage"
DEFAULT_SETTINGS = "serial_monitor.sublime-settings"
LAST_USED_SETTINGS = "serial_monitor_last_used.sublime-settings"
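The two settings filenames above form an overlay: values the user last used take precedence over the shipped defaults. In the plugin this resolution would go through Sublime's `sublime.load_settings`; the sketch below models only the lookup order with plain dicts (the `get_setting` helper and the sample keys are hypothetical, not part of the plugin):

```python
DEFAULT_SETTINGS = "serial_monitor.sublime-settings"
LAST_USED_SETTINGS = "serial_monitor_last_used.sublime-settings"


def get_setting(key, last_used, defaults, fallback=None):
    """Resolve a setting: the last-used value wins, then the defaults, then the fallback."""
    if key in last_used:
        return last_used[key]
    return defaults.get(key, fallback)


# Dict stand-ins for the contents of the two settings files.
defaults = {"baud_rate": 9600}
last_used = {"baud_rate": 115200}

rate = get_setting("baud_rate", last_used, defaults)     # last-used overrides the default
port = get_setting("port", last_used, defaults, "COM1")  # neither file has it: fallback
```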
# src/schnetpack/utils/script_utils/__init__.py (schnetpack)

from .model import *
from .parsing import *
from .evaluation import *
from .training import *
from .setup import *
from .data import *
from .script_error import *
from .settings import *
# app/apps/api/tests.py (escriptorium)

"""
The goal here is not to test drf internals
but only our own layer on top of it.
So no need to test the content unless there is some magic in the serializer.
"""
import unittest
import os

from django.core.files.uploadedfile import SimpleUploadedFile
from django.test import override_settings
from django.urls import reverse

from core.models import Block, Line, Transcription, LineTranscription, OcrModel
from core.tests.factory import CoreFactoryTestCase

class UserViewSetTestCase(CoreFactoryTestCase):
    def setUp(self):
        super().setUp()

    def test_onboarding(self):
        user = self.factory.make_user()
        self.client.force_login(user)
        uri = reverse('api:user-onboarding')
        resp = self.client.put(uri, {
            'onboarding': 'False',
        }, content_type='application/json')
        user.refresh_from_db()
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(user.onboarding, False)

class DocumentViewSetTestCase(CoreFactoryTestCase):
    def setUp(self):
        super().setUp()
        self.doc = self.factory.make_document()
        self.doc2 = self.factory.make_document(owner=self.doc.owner)
        self.part = self.factory.make_part(document=self.doc)
        self.part2 = self.factory.make_part(document=self.doc)
        self.line = Line.objects.create(
            baseline=[[10, 25], [50, 25]],
            mask=[[10, 10], [50, 10], [50, 50], [10, 50]],
            document_part=self.part)
        self.line2 = Line.objects.create(
            baseline=[[10, 80], [50, 80]],
            mask=[[10, 60], [50, 60], [50, 100], [10, 100]],
            document_part=self.part)
        self.transcription = Transcription.objects.create(
            document=self.part.document,
            name='test')
        self.transcription2 = Transcription.objects.create(
            document=self.part.document,
            name='tr2')
        self.lt = LineTranscription.objects.create(
            transcription=self.transcription,
            line=self.line,
            content='test')
        self.lt2 = LineTranscription.objects.create(
            transcription=self.transcription2,
            line=self.line2,
            content='test2')

    def test_list(self):
        self.client.force_login(self.doc.owner)
        uri = reverse('api:document-list')
        with self.assertNumQueries(14):
            resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 200)

    def test_detail(self):
        self.client.force_login(self.doc.owner)
        uri = reverse('api:document-detail',
                      kwargs={'pk': self.doc.pk})
        with self.assertNumQueries(9):
            resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 200)

    def test_perm(self):
        user = self.factory.make_user()
        self.client.force_login(user)
        uri = reverse('api:document-detail',
                      kwargs={'pk': self.doc.pk})
        resp = self.client.get(uri)
        # Note: raises a 404 instead of 403 but it's fine
        self.assertEqual(resp.status_code, 404)

    def test_segtrain_less_two_parts(self):
        self.client.force_login(self.doc.owner)
        model = self.factory.make_model(self.doc, job=OcrModel.MODEL_JOB_SEGMENT)
        uri = reverse('api:document-segtrain', kwargs={'pk': self.doc.pk})
        resp = self.client.post(uri, data={
            'parts': [self.part.pk],
            'model': model.pk
        })
        self.assertEqual(resp.status_code, 400)
        self.assertEqual(resp.json()['error'], {'parts': [
            'Segmentation training requires at least 2 images.']})

    @unittest.skip
    def test_segtrain_new_model(self):
        # This test breaks CI as it consumes too many resources
        self.client.force_login(self.doc.owner)
        uri = reverse('api:document-segtrain', kwargs={'pk': self.doc.pk})
        resp = self.client.post(uri, data={
            'parts': [self.part.pk, self.part2.pk],
            'model_name': 'new model'
        })
        self.assertEqual(resp.status_code, 200, resp.content)
        self.assertEqual(OcrModel.objects.count(), 1)
        self.assertEqual(OcrModel.objects.first().name, "new model")

    @unittest.expectedFailure
    def test_segtrain_existing_model_rename(self):
        self.client.force_login(self.doc.owner)
        model = self.factory.make_model(self.doc, job=OcrModel.MODEL_JOB_SEGMENT)
        uri = reverse('api:document-segtrain', kwargs={'pk': self.doc.pk})
        resp = self.client.post(uri, data={
            'parts': [self.part.pk, self.part2.pk],
            'model': model.pk,
            'model_name': 'test new model'
        })
        self.assertEqual(resp.status_code, 200, resp.content)
        self.assertEqual(OcrModel.objects.count(), 2)

    @unittest.expectedFailure
    def test_segment(self):
        uri = reverse('api:document-segment', kwargs={'pk': self.doc.pk})
        self.client.force_login(self.doc.owner)
        model = self.factory.make_model(self.doc, job=OcrModel.MODEL_JOB_SEGMENT)
        resp = self.client.post(uri, data={
            'parts': [self.part.pk, self.part2.pk],
            'seg_steps': 'both',
            'model': model.pk,
        })
        self.assertEqual(resp.status_code, 200)

    @unittest.expectedFailure
    def test_train_new_model(self):
        self.client.force_login(self.doc.owner)
        uri = reverse('api:document-train', kwargs={'pk': self.doc.pk})
        resp = self.client.post(uri, data={
            'parts': [self.part.pk, self.part2.pk],
            'model_name': 'testing new model',
            'transcription': self.transcription.pk
        })
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(
            self.doc.ocr_models.filter(job=OcrModel.MODEL_JOB_RECOGNIZE).count(), 1)

    @unittest.expectedFailure
    def test_transcribe(self):
        trans = Transcription.objects.create(document=self.part.document)
        self.client.force_login(self.doc.owner)
        model = self.factory.make_model(self.doc, job=OcrModel.MODEL_JOB_RECOGNIZE)
        uri = reverse('api:document-transcribe', kwargs={'pk': self.doc.pk})
        resp = self.client.post(uri, data={
            'parts': [self.part.pk, self.part2.pk],
            'model': model.pk,
            'transcription': trans.pk
        })
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(resp.content, b'{"status":"ok"}')
        # won't work with dummy model and image
        # self.assertEqual(LineTranscription.objects.filter(transcription=trans).count(), 2)

    def test_list_document_with_tasks(self):
        # Creating a new Document that self.doc.owner shouldn't see
        other_doc = self.factory.make_document(project=self.factory.make_project(name="Test API"))
        report = other_doc.reports.create(user=other_doc.owner, label="Fake report")
        report.start(None, None)

        self.client.force_login(self.doc.owner)
        with self.assertNumQueries(6):
            resp = self.client.get(reverse('api:document-tasks'))
        self.assertEqual(resp.status_code, 200)
        json = resp.json()
        self.assertEqual(json['count'], 1)
        self.assertEqual(json['results'], [{
            'pk': self.doc.pk,
            'name': self.doc.name,
            'tasks_stats': {'Queued': 0, 'Running': 0, 'Crashed': 0, 'Finished': 6},
            'last_started_task': self.doc.reports.latest('started_at').started_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
        }])

    def test_list_document_with_tasks_staff_user(self):
        self.doc.owner.is_staff = True
        self.doc.owner.save()

        # Creating a new Document that self.doc.owner should also see since he is a staff member
        other_doc = self.factory.make_document(project=self.factory.make_project(name="Test API"))
        report = other_doc.reports.create(user=other_doc.owner, label="Fake report")
        report.start(None, None)

        self.client.force_login(self.doc.owner)
        with self.assertNumQueries(8):
            resp = self.client.get(reverse('api:document-tasks'))
        self.assertEqual(resp.status_code, 200)
        json = resp.json()
        self.assertEqual(json['count'], 2)
        self.assertEqual(json['results'], [
            {
                'pk': other_doc.pk,
                'name': other_doc.name,
                'tasks_stats': {'Queued': 0, 'Running': 1, 'Crashed': 0, 'Finished': 0},
                'last_started_task': other_doc.reports.latest('started_at').started_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
            },
            {
                'pk': self.doc.pk,
                'name': self.doc.name,
                'tasks_stats': {'Queued': 0, 'Running': 0, 'Crashed': 0, 'Finished': 6},
                'last_started_task': self.doc.reports.latest('started_at').started_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
            },
        ])

    def test_list_document_with_tasks_filter_wrong_user_id(self):
        self.doc.owner.is_staff = True
        self.doc.owner.save()
        self.client.force_login(self.doc.owner)
        resp = self.client.get(reverse('api:document-tasks') + '?user_id=blablabla')
        self.assertEqual(resp.status_code, 400)
        self.assertEqual(resp.json(), {'error': 'Invalid user_id, it should be an int.'})

    def test_list_document_with_tasks_filter_user_id_disabled_for_normal_user(self):
        # Creating a new Document that self.doc.owner shouldn't see
        other_doc = self.factory.make_document(project=self.factory.make_project(name="Test API"))
        report = other_doc.reports.create(user=other_doc.owner, label="Fake report")
        report.start(None, None)

        self.client.force_login(self.doc.owner)
        with self.assertNumQueries(6):
            # Filtering by user_id but the user is not part of the staff so the filter will be ignored
            resp = self.client.get(reverse('api:document-tasks') + f"?user_id={other_doc.owner.id}")
        self.assertEqual(resp.status_code, 200)
        json = resp.json()
        self.assertEqual(json['count'], 1)
        self.assertEqual(json['results'], [{
            'pk': self.doc.pk,
            'name': self.doc.name,
            'tasks_stats': {'Queued': 0, 'Running': 0, 'Crashed': 0, 'Finished': 6},
            'last_started_task': self.doc.reports.latest('started_at').started_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
        }])

    def test_list_document_with_tasks_filter_user_id(self):
        self.doc.owner.is_staff = True
        self.doc.owner.save()

        other_doc = self.factory.make_document(project=self.factory.make_project(name="Test API"))
        report = other_doc.reports.create(user=other_doc.owner, label="Fake report")
        report.start(None, None)

        self.client.force_login(self.doc.owner)
        with self.assertNumQueries(6):
            resp = self.client.get(reverse('api:document-tasks') + f"?user_id={other_doc.owner.id}")
        self.assertEqual(resp.status_code, 200)
        json = resp.json()
        self.assertEqual(json['count'], 1)
        self.assertEqual(json['results'], [
            {
                'pk': other_doc.pk,
                'name': other_doc.name,
                'tasks_stats': {'Queued': 0, 'Running': 1, 'Crashed': 0, 'Finished': 0},
                'last_started_task': other_doc.reports.latest('started_at').started_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
            }
        ])

    def test_list_document_with_tasks_filter_name(self):
        self.doc.owner.is_staff = True
        self.doc.owner.save()

        other_doc = self.factory.make_document(name="other doc", project=self.factory.make_project(name="Test API"))
        report = other_doc.reports.create(user=other_doc.owner, label="Fake report")
        report.start(None, None)

        self.client.force_login(self.doc.owner)
        with self.assertNumQueries(6):
            resp = self.client.get(reverse('api:document-tasks') + "?name=other")
        self.assertEqual(resp.status_code, 200)
        json = resp.json()
        self.assertEqual(json['count'], 1)
        self.assertEqual(json['results'], [
            {
                'pk': other_doc.pk,
                'name': other_doc.name,
                'tasks_stats': {'Queued': 0, 'Running': 1, 'Crashed': 0, 'Finished': 0},
                'last_started_task': other_doc.reports.latest('started_at').started_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
            }
        ])

    def test_list_document_with_tasks_filter_wrong_task_state(self):
        self.client.force_login(self.doc.owner)
        resp = self.client.get(reverse('api:document-tasks') + '?task_state=wrongstate')
        self.assertEqual(resp.status_code, 400)
        self.assertEqual(resp.json(), {'error': 'Invalid task_state, it should match a valid workflow_state.'})

    def test_list_document_with_tasks_filter_task_state(self):
        self.doc.owner.is_staff = True
        self.doc.owner.save()

        other_doc = self.factory.make_document(project=self.factory.make_project(name="Test API"))
        report = other_doc.reports.create(user=other_doc.owner, label="Fake report")
        report.start(None, None)

        self.client.force_login(self.doc.owner)
        with self.assertNumQueries(6):
            resp = self.client.get(reverse('api:document-tasks') + "?task_state=Running")
        self.assertEqual(resp.status_code, 200)
        json = resp.json()
        self.assertEqual(json['count'], 1)
        self.assertEqual(json['results'], [
            {
                'pk': other_doc.pk,
                'name': other_doc.name,
                'tasks_stats': {'Queued': 0, 'Running': 1, 'Crashed': 0, 'Finished': 0},
                'last_started_task': other_doc.reports.latest('started_at').started_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
            },
        ])

class PartViewSetTestCase(CoreFactoryTestCase):
    def setUp(self):
        super().setUp()
        self.part = self.factory.make_part()
        self.part2 = self.factory.make_part(document=self.part.document)  # scaling test
        self.user = self.part.document.owner  # shortcut

    @override_settings(THUMBNAIL_ENABLE=False)
    def test_list(self):
        self.client.force_login(self.user)
        uri = reverse('api:part-list',
                      kwargs={'document_pk': self.part.document.pk})
        with self.assertNumQueries(5):
            resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 200)

    def test_list_perm(self):
        user = self.factory.make_user()
        self.client.force_login(user)
        uri = reverse('api:part-list',
                      kwargs={'document_pk': self.part.document.pk})
        resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 403)

    @override_settings(THUMBNAIL_ENABLE=False)
    def test_detail(self):
        self.client.force_login(self.user)
        uri = reverse('api:part-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'pk': self.part.pk})
        with self.assertNumQueries(8):
            resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 200)

    def test_detail_perm(self):
        user = self.factory.make_user()
        self.client.force_login(user)
        uri = reverse('api:part-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'pk': self.part.pk})
        resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 403)

    @override_settings(THUMBNAIL_ENABLE=False)
    def test_create(self):
        self.client.force_login(self.user)
        uri = reverse('api:part-list',
                      kwargs={'document_pk': self.part.document.pk})
        with self.assertNumQueries(42):
            img = self.factory.make_image_file()
            resp = self.client.post(uri, {
                'image': SimpleUploadedFile(
                    'test.png', img.read())})
        self.assertEqual(resp.status_code, 201)

    @override_settings(THUMBNAIL_ENABLE=False)
    def test_update(self):
        self.client.force_login(self.user)
        uri = reverse('api:part-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'pk': self.part.pk})
        with self.assertNumQueries(6):
            resp = self.client.patch(
                uri, {'transcription_progress': 50},
                content_type='application/json')
        self.assertEqual(resp.status_code, 200, resp.content)

    def test_move(self):
        self.client.force_login(self.user)
        uri = reverse('api:part-move',
                      kwargs={'document_pk': self.part2.document.pk,
                              'pk': self.part2.pk})
        with self.assertNumQueries(7):
            resp = self.client.post(uri, {'index': 0})
        self.assertEqual(resp.status_code, 200)
        self.part2.refresh_from_db()
        self.assertEqual(self.part2.order, 0)

class BlockViewSetTestCase(CoreFactoryTestCase):
    def setUp(self):
        super().setUp()
        self.part = self.factory.make_part()
        self.user = self.part.document.owner
        for i in range(2):
            b = Block.objects.create(
                box=[10 + 50 * i, 10, 50 + 50 * i, 50],
                document_part=self.part)
        self.block = b

    def test_detail(self):
        self.client.force_login(self.user)
        uri = reverse('api:block-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk,
                              'pk': self.block.pk})
        with self.assertNumQueries(4):
            resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 200)

    def test_list(self):
        self.client.force_login(self.user)
        uri = reverse('api:block-list',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk})
        with self.assertNumQueries(5):
            resp = self.client.get(uri)
        self.assertEqual(resp.status_code, 200)

    def test_create(self):
        self.client.force_login(self.user)
        uri = reverse('api:block-list',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk})
        with self.assertNumQueries(5):
            # 1-2: auth
            # 3 select document_part
            # 4 select max block order
            # 5 insert
            resp = self.client.post(uri, {
                'document_part': self.part.pk,
                'box': '[[10,10], [20,20], [50,50]]'
            })
        self.assertEqual(resp.status_code, 201, resp.content)

    def test_update(self):
        self.client.force_login(self.user)
        uri = reverse('api:block-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk,
                              'pk': self.block.pk})
        with self.assertNumQueries(5):
            resp = self.client.patch(uri, {
                'box': '[[100,100], [150,150]]'
            }, content_type='application/json')
        self.assertEqual(resp.status_code, 200, resp.content)

class LineViewSetTestCase(CoreFactoryTestCase):
    def setUp(self):
        super().setUp()
        self.part = self.factory.make_part()
        self.user = self.part.document.owner
        self.block = Block.objects.create(
            box=[10, 10, 200, 200],
            document_part=self.part)
        self.line = Line.objects.create(
            mask=[60, 10, 100, 50],
            document_part=self.part,
            block=self.block)
        self.line2 = Line.objects.create(
            mask=[90, 10, 70, 50],
            document_part=self.part,
            block=self.block)
        self.orphan = Line.objects.create(
            mask=[0, 0, 10, 10],
            document_part=self.part,
            block=None)

    # not used
    # def test_detail(self):
    # def test_list(self):

    def test_create(self):
        self.client.force_login(self.user)
        uri = reverse('api:line-list',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk})
        with self.assertNumQueries(5):
            resp = self.client.post(uri, {
                'document_part': self.part.pk,
                'baseline': '[[10, 10], [50, 50]]'
            })
        self.assertEqual(resp.status_code, 201, resp.content)
        self.assertEqual(self.part.lines.count(), 4)  # 3 + 1 new

    def test_update(self):
        self.client.force_login(self.user)
        uri = reverse('api:line-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk,
                              'pk': self.line.pk})
        with self.assertNumQueries(5):
            resp = self.client.patch(uri, {
                'baseline': '[[100,100], [150,150]]'
            }, content_type='application/json')
        self.assertEqual(resp.status_code, 200)
        self.line.refresh_from_db()
        self.assertEqual(self.line.baseline, '[[100,100], [150,150]]')

    def test_bulk_delete(self):
        self.client.force_login(self.user)
        uri = reverse('api:line-bulk-delete',
                      kwargs={'document_pk': self.part.document.pk, 'part_pk': self.part.pk})
        with self.assertNumQueries(5):
            resp = self.client.post(uri, {'lines': [self.line.pk]},
                                    content_type='application/json')
        self.assertEqual(Line.objects.count(), 2)
        self.assertEqual(resp.status_code, 204)

    def test_bulk_update(self):
        self.client.force_login(self.user)
        uri = reverse('api:line-bulk-update',
                      kwargs={'document_pk': self.part.document.pk, 'part_pk': self.part.pk})
        with self.assertNumQueries(7):
            resp = self.client.put(uri, {'lines': [
                {'pk': self.line.pk,
                 'mask': '[[60, 40], [60, 50], [90, 50], [90, 40]]',
                 'region': None},
                {'pk': self.line2.pk,
                 'mask': '[[50, 40], [50, 30], [70, 30], [70, 40]]',
                 'region': self.block.pk}
            ]}, content_type='application/json')
        self.assertEqual(resp.status_code, 200, resp.content)
        self.line.refresh_from_db()
        self.line2.refresh_from_db()
        self.assertEqual(self.line.mask, '[[60, 40], [60, 50], [90, 50], [90, 40]]')
        self.assertEqual(self.line2.mask, '[[50, 40], [50, 30], [70, 30], [70, 40]]')

class LineTranscriptionViewSetTestCase(CoreFactoryTestCase):
    def setUp(self):
        super().setUp()
        self.part = self.factory.make_part()
        self.user = self.part.document.owner
        self.line = Line.objects.create(
            mask=[10, 10, 50, 50],
            document_part=self.part)
        self.line2 = Line.objects.create(
            mask=[10, 60, 50, 100],
            document_part=self.part)
        self.transcription = Transcription.objects.create(
            document=self.part.document,
            name='test')
        self.transcription2 = Transcription.objects.create(
            document=self.part.document,
            name='tr2')
        self.lt = LineTranscription.objects.create(
            transcription=self.transcription,
            line=self.line,
            content='test')
        self.lt2 = LineTranscription.objects.create(
            transcription=self.transcription2,
            line=self.line2,
            content='test2')

    def test_update(self):
        self.client.force_login(self.user)
        uri = reverse('api:linetranscription-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk,
                              'pk': self.lt.pk})
        with self.assertNumQueries(6):
            resp = self.client.patch(uri, {
                'content': 'update'
            }, content_type='application/json')
        self.assertEqual(resp.status_code, 200)

    def test_create(self):
        self.client.force_login(self.user)
        uri = reverse('api:linetranscription-list',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk})
        with self.assertNumQueries(12):
            resp = self.client.post(uri, {
                'line': self.line2.pk,
                'transcription': self.transcription.pk,
                'content': 'new'
            }, content_type='application/json')
        self.assertEqual(resp.status_code, 201)

    def test_new_version(self):
        self.client.force_login(self.user)
        uri = reverse('api:linetranscription-detail',
                      kwargs={'document_pk': self.part.document.pk,
                              'part_pk': self.part.pk,
                              'pk': self.lt.pk})
        with self.assertNumQueries(8):
            resp = self.client.put(uri, {'content': 'test',
                                         'transcription': self.lt.transcription.pk,
                                         'line': self.lt.line.pk},
                                   content_type='application/json')
        self.assertEqual(resp.status_code, 200, resp.data)
        self.lt.refresh_from_db()
        self.assertEqual(len(self.lt.versions), 1)

    def test_bulk_create(self):
        self.client.force_login(self.user)
        uri = reverse('api:linetranscription-bulk-create',
                      kwargs={'document_pk': self.part.document.pk, 'part_pk': self.part.pk})
        ll = Line.objects.create(
            mask=[10, 10, 50, 50],
            document_part=self.part)
        with self.assertNumQueries(10):
            resp = self.client.post(
                uri,
                {'lines': [
                    {'line': ll.pk,
                     'transcription': self.transcription.pk,
                     'content': 'new transcription'},
                    {'line': ll.pk,
                     'transcription': self.transcription2.pk,
                     'content': 'new transcription 2'},
                ]}, content_type='application/json')
        self.assertEqual(resp.status_code, 200)

    def test_bulk_update(self):
        self.client.force_login(self.user)
        uri = reverse('api:linetranscription-bulk-update',
                      kwargs={'document_pk': self.part.document.pk, 'part_pk': self.part.pk})
        with self.assertNumQueries(15):
            resp = self.client.put(uri, {'lines': [
                {'pk': self.lt.pk,
                 'content': 'test1 new',
                 'transcription': self.transcription.pk,
                 'line': self.line.pk},
                {'pk': self.lt2.pk,
                 'content': 'test2 new',
                 'transcription': self.transcription.pk,
                 'line': self.line2.pk},
            ]}, content_type='application/json')
        self.lt.refresh_from_db()
        self.lt2.refresh_from_db()
        self.assertEqual(self.lt.content, "test1 new")
        self.assertEqual(self.lt2.content, "test2 new")
        self.assertEqual(self.lt2.transcription, self.transcription)
        self.assertEqual(resp.status_code, 200)

    def test_bulk_delete(self):
        self.client.force_login(self.user)
        uri = reverse('api:linetranscription-bulk-delete',
                      kwargs={'document_pk': self.part.document.pk, 'part_pk': self.part.pk})
        with self.assertNumQueries(5):
            resp = self.client.post(uri, {'lines': [self.lt.pk, self.lt2.pk]},
                                    content_type='application/json')
        lines = LineTranscription.objects.all()
        self.assertEqual(lines[0].content, "")
        self.assertEqual(lines[1].content, "")
        self.assertEqual(resp.status_code, 204)
# tests/mobly/controllers/android_device_lib/services/snippet_management_service_test.py (mobly)

# Copyright 2018 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from unittest import mock

from mobly.controllers.android_device_lib.services import snippet_management_service

MOCK_PACKAGE = 'com.mock.package'

SNIPPET_CLIENT_CLASS_PATH = 'mobly.controllers.android_device_lib.snippet_client.SnippetClient'
SNIPPET_CLIENT_V2_CLASS_PATH = 'mobly.controllers.android_device_lib.snippet_client_v2.SnippetClientV2'

class SnippetManagementServiceTest(unittest.TestCase):
"""Tests for the snippet management service."""
def test_empty_manager_start_stop(self):
manager = snippet_management_service.SnippetManagementService(
mock.MagicMock())
manager.start()
# When no client is registered, manager is never alive.
self.assertFalse(manager.is_alive)
manager.stop()
self.assertFalse(manager.is_alive)
@mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_get_snippet_client(self, mock_class):
        mock_client = mock_class.return_value
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        self.assertEqual(manager.get_snippet_client('foo'), mock_client)

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_get_snippet_client_fail(self, _):
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        self.assertIsNone(manager.get_snippet_client('foo'))

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_stop_with_live_client(self, mock_class):
        mock_client = mock_class.return_value
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.start_app_and_connect.assert_called_once_with()
        manager.stop()
        mock_client.stop_app.assert_called_once_with()
        mock_client.stop_app.reset_mock()
        mock_client.is_alive = False
        self.assertFalse(manager.is_alive)
        manager.stop()
        mock_client.stop_app.assert_not_called()

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_add_snippet_client_dup_name(self, _):
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        msg = ('.* Name "foo" is already registered with package ".*", it '
               'cannot be used again.')
        with self.assertRaisesRegex(snippet_management_service.Error, msg):
            manager.add_snippet_client('foo', MOCK_PACKAGE + 'ha')

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_add_snippet_client_dup_package(self, mock_class):
        mock_client = mock_class.return_value
        mock_client.package = MOCK_PACKAGE
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        msg = ('Snippet package "com.mock.package" has already been loaded '
               'under name "foo".')
        with self.assertRaisesRegex(snippet_management_service.Error, msg):
            manager.add_snippet_client('bar', MOCK_PACKAGE)

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_remove_snippet_client(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        manager.remove_snippet_client('foo')
        msg = 'No snippet client is registered with name "foo".'
        with self.assertRaisesRegex(snippet_management_service.Error, msg):
            manager.foo.do_something()
    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_remove_snippet_client_fail(self, mock_class):
        # renamed from a duplicate `test_remove_snippet_client`, which would
        # have shadowed the test above and silently skipped it
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        with self.assertRaisesRegex(
                snippet_management_service.Error,
                'No snippet client is registered with name "foo".'):
            manager.remove_snippet_client('foo')
    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_start_with_live_service(self, mock_class):
        mock_client = mock_class.return_value
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.start_app_and_connect.reset_mock()
        mock_client.is_alive = True
        manager.start()
        mock_client.start_app_and_connect.assert_not_called()
        self.assertTrue(manager.is_alive)
        mock_client.is_alive = False
        manager.start()
        mock_client.start_app_and_connect.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_pause(self, mock_class):
        mock_client = mock_class.return_value
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        manager.pause()
        mock_client.disconnect.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_resume_positive_case(self, mock_class):
        mock_client = mock_class.return_value
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.is_alive = False
        manager.resume()
        mock_client.restore_app_connection.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_resume_negative_case(self, mock_class):
        mock_client = mock_class.return_value
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.is_alive = True
        manager.resume()
        mock_client.restore_app_connection.assert_not_called()

    @mock.patch(SNIPPET_CLIENT_CLASS_PATH)
    def test_attribute_access(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        manager = snippet_management_service.SnippetManagementService(
            mock.MagicMock())
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        manager.foo.ha('param')
        mock_client.ha.assert_called_once_with('param')

    def test_client_v2_flag_default_value(self):
        mock_device = mock.MagicMock()
        mock_device.dimensions = {}
        manager = snippet_management_service.SnippetManagementService(mock_device)
        self.assertFalse(manager._is_using_client_v2())

    def test_client_v2_flag_false(self):
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'false'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        self.assertFalse(manager._is_using_client_v2())

    def test_client_v2_flag_true(self):
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'true'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        self.assertTrue(manager._is_using_client_v2())

    @mock.patch(SNIPPET_CLIENT_V2_CLASS_PATH)
    def test_client_v2_add_snippet_client(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'true'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        self.assertIs(manager.get_snippet_client('foo'), mock_client)
        mock_client.initialize.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_V2_CLASS_PATH)
    def test_client_v2_remove_snippet_client(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'true'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        manager.remove_snippet_client('foo')
        mock_client.stop.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_V2_CLASS_PATH)
    def test_client_v2_start(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'true'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.initialize.reset_mock()
        mock_client.is_alive = False
        manager.start()
        mock_client.initialize.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_V2_CLASS_PATH)
    def test_client_v2_stop(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'true'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.stop.reset_mock()
        mock_client.is_alive = True
        manager.stop()
        mock_client.stop.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_V2_CLASS_PATH)
    def test_client_v2_pause(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'true'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.close_connection.reset_mock()
        manager.pause()
        mock_client.close_connection.assert_called_once_with()

    @mock.patch(SNIPPET_CLIENT_V2_CLASS_PATH)
    def test_client_v2_resume(self, mock_class):
        mock_client = mock.MagicMock()
        mock_class.return_value = mock_client
        mock_device = mock.MagicMock(
            dimensions={'use_mobly_snippet_client_v2': 'true'})
        manager = snippet_management_service.SnippetManagementService(mock_device)
        manager.add_snippet_client('foo', MOCK_PACKAGE)
        mock_client.restore_server_connection.reset_mock()
        mock_client.is_alive = False
        manager.resume()
        mock_client.restore_server_connection.assert_called_once_with()


if __name__ == '__main__':
    unittest.main()
# tests/inventory/test_inventory.py (CiscoDevNet/bcs-oi-api-sdk, MIT)
from src.bcs_oi_api.models import Asset, Device
from tests.utils import check_model_creation
asset_1 = {
    "chassisName": "10.201.23.147",
    "deviceId": 24948009,
    "deviceName": "10.201.23.147",
    "hardwareRevision": "",
    "installedFlash": None,
    "installedMemory": None,
    "printedCircuitBoardName": "",
    "printedCircuitBoardRevision": "",
    "physicalAssetId": 477944695,
    "physicalAssetSubtype": "",
    "physicalAssetType": "Fan",
    "productFamily": "Catalyst 2K/3K Series Fans",
    "productId": "FAN-T1=",
    "productType": "Fans",
    "serialNumber": "",
    "serialNumberStatus": "N/A",
    "slot": "FAN1",
    "softwareVersion": "",
    "topAssemblyNumber": "",
    "topAssemblyNumberRevision": "",
}

asset_2 = {
    "chassisName": "10.201.23.147",
    "deviceId": 24948009,
    "deviceName": "10.201.23.147",
    "hardwareRevision": "",
    "installedFlash": 2048,
    "installedMemory": 1024,
    "printedCircuitBoardName": "",
    "printedCircuitBoardRevision": "",
    "physicalAssetId": 477944695,
    "physicalAssetSubtype": "",
    "physicalAssetType": "Fan",
    "productFamily": "Catalyst 2K/3K Series Fans",
    "productId": "FAN-T1=",
    "productType": "Fans",
    "serialNumber": "",
    "serialNumberStatus": "N/A",
    "slot": "FAN1",
    "softwareVersion": "",
    "topAssemblyNumber": "",
    "topAssemblyNumberRevision": "",
}


def test_asset_model():
    for asset_dict in [asset_1, asset_2]:
        asset = Asset(**asset_dict)
        check_model_creation(input_dict=asset_dict, model_instance=asset)


device_1 = {
    "collectorName": "mycollector",
    "configRegister": "2102",
    "configStatus": "Completed",
    "configTimestamp": "2022-02-02T15:33:37",
    "createdTimestamp": "2022-02-02T15:33:37",
    "deviceId": 22345640,
    "deviceIp": "172.21.1.1",
    "deviceName": "switch",
    "deviceStatus": "ACTIVE",
    "deviceType": "Unmanaged Chassis",
    "featureSetDescription": "",
    "imageName": "",
    "inventoryStatus": "Completed",
    "inventoryTimestamp": "2022-02-02T15:33:37",
    "ipAddress": "172.16.1.1",
    "isInSeedFile": True,
    "lastResetTimestamp": "2022-02-02T15:33:37",
    "productFamily": "Cisco Catalyst 3560-E Series Switches",
    "productId": "WS-C3560X-24P-E",
    "productType": "Metro Ethernet Switches",
    "resetReason": "",
    "snmpSysContact": "",
    "snmpSysDescription": "",
    "snmpSysLocation": "",
    "snmpSysName": "",
    "snmpSysObjectId": "",
    "softwareType": "IOS",
    "softwareVersion": "15.1(4)M4",
    "userField1": "",
    "userField2": "",
    "userField3": "",
    "userField4": "",
}

device_2 = {
    "collectorName": "mycollector",
    "configRegister": "2102",
    "configStatus": "Completed",
    "configTimestamp": None,
    "createdTimestamp": None,
    "deviceId": 22345640,
    "deviceIp": "172.21.1.1",
    "deviceName": "switch",
    "deviceStatus": "ACTIVE",
    "deviceType": "Unmanaged Chassis",
    "featureSetDescription": "",
    "imageName": "",
    "inventoryStatus": "Completed",
    "inventoryTimestamp": None,
    "ipAddress": "172.16.1.1",
    "isInSeedFile": True,
    "lastResetTimestamp": None,
    "productFamily": "Cisco Catalyst 3560-E Series Switches",
    "productId": "WS-C3560X-24P-E",
    "productType": "Metro Ethernet Switches",
    "resetReason": "",
    "snmpSysContact": "",
    "snmpSysDescription": "",
    "snmpSysLocation": "",
    "snmpSysName": "",
    "snmpSysObjectId": "",
    "softwareType": "IOS",
    "softwareVersion": "15.1(4)M4",
    "userField1": "",
    "userField2": "",
    "userField3": "",
    "userField4": "",
}


def test_device_model():
    for device_dict in [device_1, device_2]:
        device = Device(**device_dict)
        check_model_creation(input_dict=device_dict, model_instance=device)
# cifar/step1/cifar10/models/__init__.py (chatzikon/DNN-COMPRESSION, MIT)
from __future__ import absolute_import
from .resnet import *
from .mobilenetv2 import *
from .mobilenet import *
#!/usr/bin/env python3
# shapr/__init__.py (marrlab/SHAPR_torch, BSD-3-Clause)
from shapr.utils import *
from shapr.main import run_train, run_evaluation
# tests/file_endpoint.py (CancerDataAggregator/cda-python, Apache-2.0)
from cdapython import Q
q1 = Q('ResearchSubject.identifier = "GDC"')
q2 = Q('ResearchSubject.Specimen.source_material_type = "Primary Tumor"')
q3 = Q('ResearchSubject.Specimen.source_material_type = "Blood Derived Normal"')
q = q1.And(q2.Or(q3))
r = q.run()
print()
# homeworks/yan_romanovich/hw05/test_lvl05.py (tgrx/Z22, Apache-2.0)
from homeworks.yan_romanovich.hw05 import level05
assert level05.unique("123")
assert not level05.unique((1, 1))
# venv/Lib/site-packages/torch/cpu/amp/__init__.py (Westlanderz/AI-Plat1, MIT)
from .autocast_mode import autocast
# robo_rugby/gym_env/__init__.py (harman097/RoboRugby, MIT)
from __future__ import absolute_import, division, print_function
import robo_rugby.gym_env.RR_Constants
import robo_rugby.gym_env.RR_EnvBase
import robo_rugby.gym_env.RR_ScoreKeepers
import robo_rugby.gym_env.RR_Robot
import robo_rugby.gym_env.RR_Goal
import robo_rugby.gym_env.RR_Ball
from robo_rugby.gym_env.RR_EnvBase import GameEnv, GameEnv_Simple
from robo_rugby.gym_env.RR_Ball import Ball
from robo_rugby.gym_env.RR_Goal import Goal
from robo_rugby.gym_env.RR_Robot import Robot
from . import RR_TrashyPhysics as TrashyPhysics
# ITGlue/health_check.py (securecyberdefense/connector-itglue, MIT)
def check(config):
    # TODO: implement health check for the connector here
    return True
# tests/metrics/__init__.py (nrupatunga/pytorch-lightning, Apache-2.0)
import os
from tests.metrics.utils import NUM_BATCHES, NUM_PROCESSES, BATCH_SIZE, MetricTester
from tests.metrics.test_metric import Dummy
# app/blueprints/__init__.py (johnlcd/AppServer, MIT)
# -*- coding: utf-8 -*-
# Created by apple on 2017/1/30.
from .upload import upload_blueprint
from .apps import apps_blueprint
from .app_versions import app_versions_blueprint
from .exception import exception_blueprint
from .static import static_blueprint
from .short_chain import short_chain_blueprint
# venv/lib/python3.8/site-packages/requests_toolbelt/__init__.py (Retraces/UkraineBot, MIT)
# (the stored content of this file is a pip cache pool reference, not source code)
/home/runner/.cache/pip/pool/b1/e9/25/3c6e77fcc6575b571e562ac6f301d070406150f5f668bb1d0dc4ce835e
# important_code/shuffle_test.py (BrancoLab/escape-analysis, MIT)
import numpy as np
import random
from scipy.stats import percentileofscore, ttest_ind, mannwhitneyu, pearsonr
def flatten(iterable):
    ''' flatten a nested list '''
    it = iter(iterable)
    for e in it:
        if isinstance(e, (list, tuple)):
            for f in flatten(e):
                yield f
        else:
            yield e
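As an illustrative aside (not part of the original script), a self-contained copy of the generator above shows how nested trial lists come out flat, in left-to-right order:

```python
def flatten(iterable):
    """Yield elements of an arbitrarily nested list/tuple, depth-first."""
    for e in iterable:
        if isinstance(e, (list, tuple)):
            # recurse into nested containers
            for f in flatten(e):
                yield f
        else:
            yield e


# Nested structure is emitted in left-to-right order.
print(list(flatten([1, [2, 3], [[4], 5]])))  # [1, 2, 3, 4, 5]
```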
def permutation_test(group_A, group_B, iterations=1000, two_tailed=True):
    ''' POOL -> SHUFFLE MOUSE IDENTITY '''
    # pool trials for the test statistic
    group_A_pooled = np.array(list(flatten(group_A)))
    group_B_pooled = np.array(list(flatten(group_B)))
    # create identity array
    group_A_mouse_ID = np.zeros(len(group_A))
    group_B_mouse_ID = np.ones(len(group_B))
    # create ragged data array (one entry per mouse)
    data = np.array(group_A + group_B, dtype=object)
    # create a copy to be shuffled
    shuffled_labels = np.concatenate((group_A_mouse_ID, group_B_mouse_ID))
    # initialize test statistic
    null_distribution = np.ones(iterations) * np.nan
    # iterate over shuffles
    for i in range(iterations):
        # shuffle the labels
        random.shuffle(shuffled_labels)
        # get back the data
        group_A_mean = np.mean(list(flatten(data[shuffled_labels == 0])))
        group_B_mean = np.mean(list(flatten(data[shuffled_labels == 1])))
        # get the test statistic for the null distribution
        if two_tailed:
            null_distribution[i] = abs(group_A_mean - group_B_mean)
        else:
            null_distribution[i] = group_B_mean - group_A_mean
    # get the test statistic from the actual data
    if two_tailed:
        test_statistic = abs(np.mean(group_A_pooled) - np.mean(group_B_pooled))
    else:
        test_statistic = np.mean(group_B_pooled) - np.mean(group_A_pooled)
    # get the p value
    p = 1 - percentileofscore(null_distribution, test_statistic, kind='mean') / 100
    if p > 0.0001:
        p = np.round(p, 4)
    print('\nPooled stats, shuffled by mouse: p = ' + str(p))
    if p < 0.05:
        print('SIGNIFICANT.')
    else:
        print('not significant')
#
# group_A = [[28,25,19,1,4,24], [27,22,14], [4,21,5,0,19,17,1], [10,2,5,27,31,2,5,0,77,30,4,9,0], [1,4,0,3,0,3,97,3,5,0,10,0], [0,22,23,0,28,12,0,36,19,5]]
# group_B = [[27,46,26,9,2,28,24,51,23,175,28,11,39,12],[70, 49, 0,11,12,9,0,2,29,9,1,64,9,66],[4,22,121,8,88,106,158,152,7,6,],[14,0,7,112,3,9,1,43,31,175,34,25,18],[21,9,0,6],\
# [82,0,154,7,115,88,99,82,82,20,23,169],[32,9,23,4,42,25],[32,8,7,23,4,29,15],[60,27,88,33,10,35,13,49,18,14,28,18,37,],[24,0,26,2,14,0,128,169,6,19,18],[4]]
#
# permutation_test(group_A, group_B, iterations = 10000, two_tailed = True)
def permutation_correlation(data_x, data_y, iterations=1000, two_tailed=False, pool_all=True):
    # get number of mice
    num_mice = len(data_x)
    # initialize test statistics
    null_distribution = np.ones(iterations) * np.nan
    data_distribution = np.ones(iterations) * np.nan
    # iterate over shuffles
    for i in range(iterations):
        if pool_all:
            # pool trials
            data_x_pooled = np.array(list(flatten(data_x)))
            data_y_pooled = np.array(list(flatten(data_y)))
        else:
            # initialize pooled list
            data_x_pooled = []
            data_y_pooled = []
            # pick one datum at random from each mouse
            for m in range(num_mice):
                # if the mouse did any trials
                trials = len(data_x[m])
                if trials:
                    # pick a trial at random
                    t = np.random.randint(0, trials)
                    # add that datum to the pooled list
                    data_x_pooled.append(data_x[m][t])
                    data_y_pooled.append(data_y[m][t])
        # get the corr coeff of the actual data
        r, _ = pearsonr(data_x_pooled, data_y_pooled)
        data_distribution[i] = r
        # shuffle the data
        random.shuffle(data_x_pooled)
        # get the corr coeff of the shuffled data
        r, _ = pearsonr(data_x_pooled, data_y_pooled)
        null_distribution[i] = r
    # get the test statistic
    if two_tailed:
        test_statistic = np.mean(abs(data_distribution))
    else:
        test_statistic = np.mean(data_distribution)
    # get the p value
    if two_tailed:
        p = np.round(1 - percentileofscore(abs(null_distribution), test_statistic) / 100, 5)
    else:
        p = np.round(1 - percentileofscore(null_distribution, test_statistic) / 100, 5)
    print('\ncorrelation 1 trial per mouse: r = ' + str(np.round(test_statistic, 2)) + ' p = ' + str(p))
    if p < 0.05:
        print('SIGNIFICANT.')
    else:
        print('not significant')
# group_A = [[],[28,25,19,1,4,24], [],[],[27,22,14], [],[],[4,21,5,0,19,17,1], [10,2,5,27,31,2,5,0,77,30,4,9,0], [1,4,0,3,0,3,97,3,5,0,10,0], [0,22,23,0,28,12,0,36,19,5]]
# group_B = [[27,46,26,9,2,28,24,51,23,175,28,11,39,12],[70, 49, 0,11,12,9,0,2,29,9,1,64,9,66],[4,22,121,8,88,106,158,152,7,6,],[14,0,7,112,3,9,1,43,31,175,34,25,18],[21,9,0,6],\
# [82,0,154,7,115,88,99,82,82,20,23,169],[32,9,23,4,42,25],[32,8,7,23,4,29,15],[60,27,88,33,10,35,13,49,18,14,28,18,37,],[24,0,26,2,14,0,128,169,6,19,18],[4]]
#
# group_A = [[28,25,19,1,4,24], [27,22,14], [4,21,5,0,19,17,1], [10,2,5,27,31,2,5,0,77,30,4,9,0], [1,4,0,3,0,3,97,3,5,0,10,0], [0,22,23,0,28,12,0,36,19,5]]
# group_B = [[70, 49, 0,11,12,9,0,2,29,9,1,64,9,66],[21,9,0,6],[32,8,7,23,4,29,15],[60,27,88,33,10,35,13,49,18,14,28,18,37,],[24,0,26,2,14,0,128,169,6,19,18],[4]]
# permutation_test_paired(group_A, group_B, iterations = 10000, two_tailed = True)
def permutation_test_paired(group_A, group_B, iterations=1000, two_tailed=True):
    ''' PAIRED TEST: RANDOMLY FLIP GROUP LABELS WITHIN EACH MOUSE '''
    # take the mean of each session
    group_A_means = [np.mean(session) for session in group_A]
    group_B_means = [np.mean(session) for session in group_B]
    # initialize test statistic
    null_distribution = np.ones(iterations) * np.nan
    # iterate over shuffles
    for i in range(iterations):
        # randomly flip the pair of labels within each mouse
        idx_0 = (np.random.random(len(group_A_means)) > .5).astype(int)
        idx_1 = 1 - idx_0
        # create new groups
        group_A_mean = group_A_means * idx_0 + group_B_means * idx_1
        group_B_mean = group_B_means * idx_0 + group_A_means * idx_1
        # get the test statistic for the null distribution
        if two_tailed:
            null_distribution[i] = abs(np.mean([a - b for a, b in zip(group_A_mean, group_B_mean)]))
        else:
            null_distribution[i] = np.mean([a - b for a, b in zip(group_A_mean, group_B_mean)])
    # get the test statistic from the actual data
    if two_tailed:
        test_statistic = abs(np.mean([a - b for a, b in zip(group_A_means, group_B_means)]))
    else:
        test_statistic = np.mean([a - b for a, b in zip(group_A_means, group_B_means)])
    # get the p value
    p = np.round(1 - percentileofscore(null_distribution, test_statistic, kind='mean') / 100, 4)
    print('\nPaired stats, labels flipped within mouse: p = ' + str(p))
    if p < 0.05:
        print('SIGNIFICANT.')
    else:
        print('not significant')
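The three routines above follow one recipe: compute a statistic on the real grouping, rebuild it many times under shuffled labels, and read the p value off the null distribution's percentile. A minimal, self-contained sketch of that recipe (the function name, seeding, and data are illustrative additions, not from the original module):

```python
import random

import numpy as np
from scipy.stats import percentileofscore


def permutation_test_pooled(group_a, group_b, iterations=1000, seed=0):
    """Two-tailed permutation test on two pooled samples of trial values."""
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    # observed test statistic: absolute difference of the group means
    observed = abs(np.mean(group_a) - np.mean(group_b))
    null = np.empty(iterations)
    for i in range(iterations):
        # shuffling the pooled sample and re-splitting it destroys the labels
        rng.shuffle(pooled)
        null[i] = abs(np.mean(pooled[:n_a]) - np.mean(pooled[n_a:]))
    # p value = fraction of shuffles at least as extreme as the observed value
    return 1 - percentileofscore(null, observed, kind='mean') / 100
```

Well-separated groups give a p value near 0, while testing a group against itself gives a p value near 1, mirroring the behaviour of `permutation_test` above on pooled trials.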
# def permutation_test_paired(group_A, group_B, iterations = 1000, two_tailed = True):
# ''' POOL -> SHUFFLE MOUSE IDENTITY '''
# # pool trials
# group_A_means = [np.mean(session) for session in group_A]
# group_B_means = [np.mean(session) for session in group_B]
# # create identity array
# group_A_mouse_ID = np.zeros(len(group_A))
# group_B_mouse_ID = np.ones(len(group_B))
# # create 2-D data / ID array
# data = np.array(group_A + group_B)
# # create a copy to be shuffled
# shuffle_data = np.concatenate((group_A_mouse_ID, group_B_mouse_ID))
# # initialize test statistic
# null_distribution = np.ones(iterations) * np.nan
# # iterate over shuffles
# for i in range(iterations):
# # shuffle the labels
# random.shuffle(shuffle_data)
# # get back the data
# group_A_mean = np.mean(list(flatten(data[shuffle_data == 0])))
# group_B_mean = np.mean(list(flatten(data[shuffle_data == 1])))
# # get the test statistic for the null distribution
# if two_tailed:
# null_distribution[i] = abs(np.mean([a - b for a, b in zip(group_A_mean, group_B_mean)] )) #abs(group_A_mean - group_B_mean)
# else:
# null_distribution[i] = group_B_mean - group_A_mean
# # get the test statistic from the actual data
# if two_tailed:
# test_statistic = abs(np.mean([a - b for a, b in zip(group_A_means, group_B_means)] ))
# else:
# test_statistic = np.mean([a - b for a, b in zip(group_A_means, group_B_means)])
# # get the p value
# p = np.round(1 - percentileofscore(null_distribution, test_statistic, kind='mean') / 100, 4)
# print('\nPooled stats, shuffled by mouse: p = ' + str(p))
# if p < 0.05:
# print('SIGNIFICANT.')
# else:
# print('not significant')
#
#
# def permutation_correlation(data_x, data_y, iterations = 1000, two_tailed = False, pool_all = True):
#     # get the number of mice
#     num_mice = len(data_x)
#     # initialize the test-statistic distributions
#     null_distribution = np.ones(iterations) * np.nan
#     data_distribution = np.ones(iterations) * np.nan
#     # iterate over shuffles
#     for i in range(iterations):
#         if pool_all:
#             # pool trials
#             data_x_pooled = np.array(list(flatten(data_x)))
#             data_y_pooled = np.array(list(flatten(data_y)))
#         else:
#             # initialize the pooled lists
#             data_x_pooled = []
#             data_y_pooled = []
#             # pick one datum at random from each mouse
#             for m in range(num_mice):
#                 # if the mouse did any trials
#                 trials = len(data_x[m])
#                 if trials:
#                     # pick a trial at random
#                     t = np.random.randint(0, trials)
#                     # add that datum to the pooled lists
#                     data_x_pooled.append(data_x[m][t])
#                     data_y_pooled.append(data_y[m][t])
#         # get the corr coeff of the actual data
#         r, _ = pearsonr(data_x_pooled, data_y_pooled)
#         data_distribution[i] = r
#         # shuffle the data
#         random.shuffle(data_x_pooled)
#         # get the corr coeff of the shuffled data
#         r, _ = pearsonr(data_x_pooled, data_y_pooled)
#         null_distribution[i] = r
#     # get the test statistic
#     if two_tailed:
#         test_statistic = np.mean(abs(data_distribution))
#     else:
#         test_statistic = np.mean(data_distribution)
#     # get the p value
#     if two_tailed:
#         p = np.round(1 - percentileofscore(abs(null_distribution), test_statistic) / 100, 5)
#     else:
#         p = np.round(1 - percentileofscore(null_distribution, test_statistic) / 100, 5)
#     print('\ncorrelation 1 trial per mouse: r = ' + str(np.round(test_statistic, 2)) + ' p = ' + str(p))
#     if p < 0.05:
#         print('SIGNIFICANT.')
#     else:
#         print('not significant')
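# The shuffle-one-variable idea behind permutation_correlation can be shown as a
# self-contained, stdlib-only sketch (hypothetical helper names; Pearson r is
# computed by hand here rather than with scipy.stats.pearsonr):

```python
import random
import statistics

def pearson_r(xs, ys):
    # plain-Python Pearson correlation coefficient
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_corr_p(xs, ys, iterations=1000, seed=0):
    # one-tailed permutation p-value: the fraction of shuffles of xs
    # whose correlation with ys meets or exceeds the observed one
    rng = random.Random(seed)
    observed = pearson_r(xs, ys)
    shuffled = list(xs)
    hits = 0
    for _ in range(iterations):
        rng.shuffle(shuffled)
        if pearson_r(shuffled, ys) >= observed:
            hits += 1
    return observed, hits / iterations

# strongly correlated toy data: the permutation p-value should be small
r, p = permutation_corr_p([1, 2, 3, 4, 5, 6], [2, 4, 5, 8, 9, 12])
```

# Counting exceedances directly replaces the percentileofscore call in the
# original; the two formulations agree up to tie handling.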
# iterations = 1000
# # choose one test scenario (later assignments override earlier ones,
# # so the last uncommented line selects the scenario)
# situation = 'lots of trials few mice'
# situation = 'number of trials correlated with effect'
# situation = 'few trials few mice'
# situation = 'few trials lots of mice'
# situation = 'more trials makes more reliable'
# situation = 'discrete'
#
# print(situation + '\n')
#
#
# if situation == 'lots of trials few mice':
#     group_A = [[50, 50], [75, 75, 75, 75, 75, 75]]
#
#     group_B = [[75, 75], [50, 50, 50, 50, 50, 50]]
#
# elif situation == 'number of trials correlated with effect':
#     group_A = [ [75,75], [75,75], [75,75], [75,75], [75,75], [75,75], [75,75], [75,75], [75,75], [75,75] ]
#
#     group_B = [ [100], [100], [100], [100], [100],
#                 [50, 50, 50], [50, 50, 50], [50, 50, 50], [50, 50, 50], [50, 50, 50] ]
#
# elif situation == 'few trials few mice':
#     group_A = [ [75,75], [75,75] ]
#
#     group_B = [ [50,50], [50,50] ]
#
# elif situation == 'few trials lots of mice':
#     group_A = [ [50,75], [50,75], [50,75], [50,75], [50,75], [50,75] ]
#
#     group_B = [ [50], [50], [50], [50], [50], [50] ]
#
# elif situation == 'more trials makes more reliable':
#     group_A = [ [75,75], [75,75], [75,75], [75,75], [75,75], [75,75] ]
#
#     group_B = [ [100], [100], [100], [0,0,50, 50], [100,0,50, 50], [50,100,0,50,0] ]
#
# elif situation == 'discrete':
#     group_A = [ [0,1], [0,0], [0,0], [0,0], [0,0], [0,0] ]
#
#     group_B = [ [1], [0], [0], [1,1,1], [0,1,1], [0,1] ]
#
#
#
# '''
# SHUFFLE ALL TRIALS
# '''
# # pool trials
# group_A_pooled = np.array(list(flatten(group_A)))
# group_B_pooled = np.array(list(flatten(group_B)))
# # do a Mann-Whitney U test on the pooled trials
# s, p = mannwhitneyu(group_A_pooled, group_B_pooled)
# print('Mann-Whitney pooled, p = ' + str(p))
#
#
# # create identity arrays (one label per trial)
# group_A_trial_ID = np.zeros_like(group_A_pooled)
# group_B_trial_ID = np.ones_like(group_B_pooled)
# # create a 2-D label / data array
# data = np.stack((np.concatenate((group_A_trial_ID, group_B_trial_ID)),
#                  np.concatenate((group_A_pooled, group_B_pooled))), 0)
# # create a copy to be shuffled
# shuffle_data = data.copy()
# # initialize the null distribution
# null_distribution = np.ones(iterations) * np.nan
# # iterate over shuffles
# for i in range(iterations):
#     # shuffle the labels
#     random.shuffle(shuffle_data[0, :])
#     # get back the data
#     group_A_mean = np.mean(data[1, shuffle_data[0, :] == 0])
#     group_B_mean = np.mean(data[1, shuffle_data[0, :] == 1])
#     # get the test statistic for the null distribution
#     null_distribution[i] = abs(group_A_mean - group_B_mean)
# # get the test statistic from the actual data
# test_statistic = abs(np.mean(group_A_pooled) - np.mean(group_B_pooled))
# # get the p value
# p = np.round(1 - percentileofscore(null_distribution, test_statistic) / 100, 2)
# print('\nPooled data: p = ' + str(p))
# if p < 0.05: print('SIGNIFICANT.')
# else: print('not significant')
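# The pooled-trial label shuffle above can be expressed as a minimal stdlib-only
# sketch (hypothetical function name; exceedances are counted directly instead
# of using percentileofscore):

```python
import random
import statistics

def pooled_permutation_p(a_trials, b_trials, iterations=2000, seed=0):
    # two-tailed permutation test on pooled trials: shuffle the group
    # labels across all trials and compare |difference of group means|
    rng = random.Random(seed)
    pooled = list(a_trials) + list(b_trials)
    n_a = len(a_trials)
    observed = abs(statistics.fmean(a_trials) - statistics.fmean(b_trials))
    hits = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        diff = abs(statistics.fmean(pooled[:n_a]) - statistics.fmean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / iterations

# clearly separated groups: p should be small
p_small = pooled_permutation_p([75] * 8, [50] * 8)
# identical groups: every shuffle matches the observed difference of 0
p_large = pooled_permutation_p([60] * 8, [60] * 8)
```

# Note that this treats every trial as independent, which inflates significance
# when trials are nested within mice; the later sections address that.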
#
#
#
# '''
# AVERAGE -> SHUFFLE MOUSE IDENTITY
# '''
# # get each mouse's mean value
# group_A_averaged = np.array([np.mean(x) for x in group_A])
# group_B_averaged = np.array([np.mean(x) for x in group_B])
# # create identity arrays (one label per mouse)
# group_A_mouse_ID = np.zeros_like(group_A_averaged)
# group_B_mouse_ID = np.ones_like(group_B_averaged)
# # create a 2-D label / data array
# data = np.stack((np.concatenate((group_A_mouse_ID, group_B_mouse_ID)),
#                  np.concatenate((group_A_averaged, group_B_averaged))), 0)
# # create a copy to be shuffled
# shuffle_data = data.copy()
# # initialize the null distribution
# null_distribution = np.ones(iterations) * np.nan
# # iterate over shuffles
# for i in range(iterations):
#     # shuffle the labels
#     random.shuffle(shuffle_data[0, :])
#     # get back the data
#     group_A_mean = np.mean(data[1, shuffle_data[0, :] == 0])
#     group_B_mean = np.mean(data[1, shuffle_data[0, :] == 1])
#     # get the test statistic for the null distribution
#     null_distribution[i] = abs(group_A_mean - group_B_mean)
# # get the test statistic from the actual data
# test_statistic = abs(np.mean(group_A_averaged) - np.mean(group_B_averaged))
# # get the p value
# p = np.round(1 - percentileofscore(null_distribution, test_statistic) / 100, 2)
# print('\nAveraged data: p = ' + str(p))
# if p < 0.05: print('SIGNIFICANT.')
# else: print('not significant')
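# The average-then-shuffle-mouse-labels approach reduces each mouse to a single
# value before permuting. A stdlib-only sketch (hypothetical names):

```python
import random
import statistics

def mouse_average_permutation_p(group_a, group_b, iterations=2000, seed=0):
    # average within each mouse first, then shuffle the mouse labels;
    # each mouse contributes one value regardless of its trial count
    rng = random.Random(seed)
    means = ([statistics.fmean(m) for m in group_a]
             + [statistics.fmean(m) for m in group_b])
    n_a = len(group_a)
    observed = abs(statistics.fmean(means[:n_a]) - statistics.fmean(means[n_a:]))
    hits = 0
    for _ in range(iterations):
        rng.shuffle(means)
        diff = abs(statistics.fmean(means[:n_a]) - statistics.fmean(means[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / iterations

# six mice per group, consistent per-mouse difference: p should be small
group_a = [[75, 75], [80, 70], [75, 75], [70, 80], [75, 75], [75, 75]]
group_b = [[50, 50], [55, 45], [50, 50], [45, 55], [50, 50], [50, 50]]
p = mouse_average_permutation_p(group_a, group_b)
```

# Because the mouse (not the trial) is the unit of exchange here, this variant
# respects the nesting of trials within mice.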
#
# #
#
#
# '''
# POOL -> SHUFFLE MOUSE IDENTITY
# '''
# # pool trials
# group_A_pooled = np.array(list(flatten(group_A)))
# group_B_pooled = np.array(list(flatten(group_B)))
# # create identity arrays (one label per mouse)
# group_A_mouse_ID = np.zeros(len(group_A))
# group_B_mouse_ID = np.ones(len(group_B))
# # create the data array (ragged per-mouse lists -> object array)
# data = np.array(group_A + group_B, dtype=object)
# # create the label array to be shuffled
# shuffle_data = np.concatenate((group_A_mouse_ID, group_B_mouse_ID))
# # initialize the null distribution
# null_distribution = np.ones(iterations) * np.nan
# # iterate over shuffles
# for i in range(iterations):
#     # shuffle the labels
#     random.shuffle(shuffle_data)
#     # get back the data
#     group_A_mean = np.mean(list(flatten(data[shuffle_data == 0])))
#     group_B_mean = np.mean(list(flatten(data[shuffle_data == 1])))
#     # get the test statistic for the null distribution
#     null_distribution[i] = abs(group_A_mean - group_B_mean)
# # get the test statistic from the actual data
# test_statistic = abs(np.mean(group_A_pooled) - np.mean(group_B_pooled))
# # get the p value
# p = np.round(1 - percentileofscore(null_distribution, test_statistic) / 100, 2)
# print('\nPooled stats, shuffled by mouse: p = ' + str(p))
# if p < 0.05: print('SIGNIFICANT.')
# else: print('not significant')
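# In the pool-then-shuffle-mouse-identity variant, whole mice (with all their
# trials) move between groups, so mice with more trials weigh more in the
# pooled mean. A stdlib-only sketch (hypothetical names):

```python
import random
import statistics

def mouse_shuffle_pooled_p(group_a, group_b, iterations=2000, seed=0):
    # shuffle entire mice between groups, then pool each group's trials;
    # the test statistic is the |difference of pooled-trial means|
    rng = random.Random(seed)
    mice = list(group_a) + list(group_b)
    n_a = len(group_a)

    def pooled_mean(ms):
        # flatten the per-mouse trial lists and average all trials
        return statistics.fmean([t for m in ms for t in m])

    observed = abs(pooled_mean(group_a) - pooled_mean(group_b))
    hits = 0
    for _ in range(iterations):
        rng.shuffle(mice)
        diff = abs(pooled_mean(mice[:n_a]) - pooled_mean(mice[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / iterations

# mixed trial counts per mouse, consistent group difference
p = mouse_shuffle_pooled_p([[75, 75], [75], [75, 75], [75]],
                           [[50], [50, 50], [50], [50, 50]])
```

# Unlike the average-first variant, a high-trial-count mouse here can dominate
# its group's pooled mean.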
#
# print('')
#
# '''
# SHUFFLE TRIAL IDENTITY WITHIN GROUP -> AVERAGE
# '''
# # pool trials
# group_A_pooled = np.array(list(flatten(group_A)))
# group_B_pooled = np.array(list(flatten(group_B)))
# # initialize the per-iteration average lists
# group_A_averaged = []
# group_B_averaged = []
# # create identity arrays (one label per trial)
# group_A_trial_ID = np.zeros_like(group_A_pooled)
# group_B_trial_ID = np.ones_like(group_B_pooled)
# # get the number of trials per mouse
# group_A_trials = [len(x) for x in group_A]
# group_B_trials = [len(x) for x in group_B]
# # initialize the null distribution
# null_distribution = np.ones(iterations) * np.nan
# # iterate over shuffles
# for i in range(iterations):
#     # shuffle the trials within each group
#     random.shuffle(group_A_pooled)
#     random.shuffle(group_B_pooled)
#     # create a 2-D label / data array
#     data = np.stack((np.concatenate((group_A_trial_ID, group_B_trial_ID)),
#                      np.concatenate((group_A_pooled, group_B_pooled))), 0)
#     # initialize the per-mouse averages
#     trials_added = 0
#     group_A_data = []
#     group_B_data = []
#     # fill in each mouse with shuffled data
#     for mouse in range(len(group_A)):
#         group_A_data.append(np.mean(data[1, trials_added: trials_added + group_A_trials[mouse]]))
#         trials_added += group_A_trials[mouse]
#     for mouse in range(len(group_B)):
#         group_B_data.append(np.mean(data[1, trials_added: trials_added + group_B_trials[mouse]]))
#         trials_added += group_B_trials[mouse]
#
#     # create identity arrays (one label per mouse)
#     group_A_mouse_ID = np.zeros_like(group_A_data)
#     group_B_mouse_ID = np.ones_like(group_B_data)
#     # create a 2-D label / data array
#     data = np.stack((np.concatenate((group_A_mouse_ID, group_B_mouse_ID)),
#                      np.concatenate((group_A_data, group_B_data))), 0)
#     # get back the data
#     group_A_averaged.append(np.mean(data[1, data[0, :] == 0]))
#     group_B_averaged.append(np.mean(data[1, data[0, :] == 1]))
#     # create a copy to be shuffled
#     shuffle_data = data.copy()
#     # shuffle the labels
#     random.shuffle(shuffle_data[0, :])
#     # get back the data
#     group_A_mean = np.mean(data[1, shuffle_data[0, :] == 0])
#     group_B_mean = np.mean(data[1, shuffle_data[0, :] == 1])
#     # get the test statistic for the null distribution
#     null_distribution[i] = abs(group_A_mean - group_B_mean)
# # get the test statistic from the actual data
# test_statistic = abs(np.mean(group_A_averaged) - np.mean(group_B_averaged))
# # get the p value
# p = np.round(1 - percentileofscore(null_distribution, test_statistic) / 100, 2)
# print('\nShuffled within group then averaged data: p = ' + str(p))
# if p < 0.05:
#     print('SIGNIFICANT.')
# else:
#     print('not significant')
#
#
# print('')
#
#
# '''
# SHUFFLE TRIAL IDENTITY -> AVERAGE
# '''
# # pool trials
# group_A_pooled = np.array(list(flatten(group_A)))
# group_B_pooled = np.array(list(flatten(group_B)))
# # create identity arrays (one label per trial)
# group_A_trial_ID = np.zeros_like(group_A_pooled)
# group_B_trial_ID = np.ones_like(group_B_pooled)
# # get the number of trials per mouse
# group_A_trials = [len(x) for x in group_A]
# group_B_trials = [len(x) for x in group_B]
# # create a 2-D label / data array
# data = np.stack((np.concatenate((group_A_trial_ID, group_B_trial_ID)),
#                  np.concatenate((group_A_pooled, group_B_pooled))), 0)
# # create a copy to be shuffled
# shuffle_data = data.copy()
# # initialize the null distribution
# null_distribution = np.ones(iterations) * np.nan
# # iterate over shuffles
# for i in range(iterations):
#     # shuffle the data across both groups
#     random.shuffle(shuffle_data[1, :])
#     # initialize the per-mouse averages
#     trials_added = 0
#     group_A_data = []
#     group_B_data = []
#     # fill in each mouse with shuffled data
#     for mouse in range(len(group_A)):
#         group_A_data.append(np.mean(shuffle_data[1, trials_added: trials_added + group_A_trials[mouse]]))
#         trials_added += group_A_trials[mouse]
#     for mouse in range(len(group_B)):
#         group_B_data.append(np.mean(shuffle_data[1, trials_added: trials_added + group_B_trials[mouse]]))
#         trials_added += group_B_trials[mouse]
#     # get back the data
#     group_A_mean = np.mean(group_A_data)
#     group_B_mean = np.mean(group_B_data)
#     # get the test statistic for the null distribution
#     null_distribution[i] = abs(group_A_mean - group_B_mean)
# # get the test statistic from the actual data
# test_statistic = abs(np.mean(group_A_pooled) - np.mean(group_B_pooled))
# # get the p value
# p = np.round(1 - percentileofscore(null_distribution, test_statistic) / 100, 2)
# print('\nPooled then averaged data: p = ' + str(p))
# if p < 0.05: print('SIGNIFICANT.')
# else: print('not significant')
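# The shuffle-trials-then-average scheme above can be sketched with the stdlib
# alone (hypothetical names): trials are shuffled across both groups, dealt
# back into the original per-mouse trial counts, averaged per mouse, and the
# groups compared by their mean-of-mouse-means.

```python
import random
import statistics

def shuffle_trials_then_average_p(group_a, group_b, iterations=2000, seed=0):
    # shuffle individual trials across both groups, reassemble them into
    # the original per-mouse trial counts, then compare the |difference
    # of mean-of-mouse-means|
    rng = random.Random(seed)
    sizes_a = [len(m) for m in group_a]
    sizes_b = [len(m) for m in group_b]
    pooled = [t for m in group_a + group_b for t in m]
    observed = abs(
        statistics.fmean([statistics.fmean(m) for m in group_a])
        - statistics.fmean([statistics.fmean(m) for m in group_b]))
    hits = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        it = iter(pooled)
        means_a = [statistics.fmean([next(it) for _ in range(n)]) for n in sizes_a]
        means_b = [statistics.fmean([next(it) for _ in range(n)]) for n in sizes_b]
        diff = abs(statistics.fmean(means_a) - statistics.fmean(means_b))
        if diff >= observed:
            hits += 1
    return hits / iterations

# unequal trial counts per mouse, consistent group difference
p = shuffle_trials_then_average_p([[75, 75], [75, 75], [75, 75]],
                                  [[50], [50, 50], [50]])
```

# Preserving the per-mouse trial counts during reassembly keeps the null
# distribution comparable to the nested structure of the real data.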
#