hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
41df3e0d617a0c44fda1dbf09fdfe0802a9802fc | 12,716 | py | Python | CMUTweetTagger.py | renhaocui/activityExtractor | f5abaff28bb84af3af3a7268729541af6fa25f36 | [
"MIT"
] | 1 | 2019-09-18T16:39:00.000Z | 2019-09-18T16:39:00.000Z | CMUTweetTagger.py | renhaocui/activityExtractor | f5abaff28bb84af3af3a7268729541af6fa25f36 | [
"MIT"
] | 1 | 2019-09-18T19:40:35.000Z | 2019-09-18T19:40:35.000Z | CMUTweetTagger.py | renhaocui/activityExtractor | f5abaff28bb84af3af3a7268729541af6fa25f36 | [
"MIT"
] | null | null | null | import subprocess, shlex, json, re
from wordsegment import load, segment
from difflib import SequenceMatcher
load()
RUN_TAGGER_CMD = "java -XX:ParallelGCThreads=2 -Xmx1000m -jar utilities/ark-tweet-nlp-0.3.2.jar"
def removeLinks(input):
urls = re.findall("(?P<url>https?://[^\s]+)", input)
if len(urls) != 0:
for url in urls:
input = input.replace(url, '')
return input
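A self-contained check of the same URL-stripping idea, restated with `re.sub` (the sample tweet text is made up):

```python
import re

def remove_links(text):
    # Equivalent one-liner for the removeLinks pattern above:
    # delete anything matching http(s)://... up to the next whitespace.
    return re.sub(r"https?://\S+", "", text)

print(remove_links("great food at https://t.co/abc123 #yum"))
```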
def _split_results(rows):
"""Parse the tab-delimited returned lines, modified from: https://github.com/brendano/ark-tweet-nlp/blob/master/scripts/show.py"""
for line in rows:
line = line.strip() # remove '\n'
if len(line) > 0:
if line.count('\t') == 2:
parts = line.split('\t')
tokens = parts[0]
tags = parts[1]
confidence = float(parts[2])
yield tokens, tags, confidence
def _call_runtagger(tweets, run_tagger_cmd=RUN_TAGGER_CMD):
"""Call runTagger.sh using a named input file"""
# remove carriage returns as they are tweet separators for the stdin
# interface
tweets_cleaned = [tw.replace('\n', ' ') for tw in tweets]
message = "\n".join(tweets_cleaned)
# force UTF-8 encoding (from internal unicode type) to avoid .communicate encoding error as per:
# http://stackoverflow.com/questions/3040101/python-encoding-for-pipe-communicate
message = message.encode('utf-8')
# build a list of args
args = shlex.split(run_tagger_cmd)
args.append('--output-format')
args.append('conll')
po = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# old call - made a direct call to runTagger.sh (not Windows friendly)
#po = subprocess.Popen([run_tagger_cmd, '--output-format', 'conll'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result = po.communicate(message)
# expect a tuple of 2 items like:
# ('hello\t!\t0.9858\nthere\tR\t0.4168\n\n',
# 'Listening on stdin for input. (-h for help)\nDetected text input format\nTokenized and tagged 1 tweets (2 tokens) in 7.5 seconds: 0.1 tweets/sec, 0.3 tokens/sec\n')
    pos_result = result[0].decode('utf-8').strip('\n') # decode the tagger's stdout, drop trailing blank lines
pos_result = pos_result.split('\n\n') # split messages by double carriage returns
pos_results = [pr.split('\n') for pr in pos_result] # split parts of message by each carriage return
return pos_results
def runtagger_parse(tweets, run_tagger_cmd=RUN_TAGGER_CMD):
"""Call runTagger.sh on a list of tweets, parse the result, return lists of tuples of (term, type, confidence)"""
pos_raw_results = _call_runtagger(tweets, run_tagger_cmd)
pos_result = []
for pos_raw_result in pos_raw_results:
pos_result.append([x for x in _split_results(pos_raw_result)])
return pos_result
def combineContents(inputLists):
contentOutput = []
for inputList in inputLists:
content = ''
for item in inputList:
content += item[0]
contentOutput.append(content)
return contentOutput
def sortTweetbyID(inputList, num=6):
sorted_list = sorted(inputList, key=lambda k: k['id'], reverse=True)
return sorted_list[:num]
def processYelpData(dataFilename, outputFilename):
contents = []
review_ids = []
dataFile = open(dataFilename, 'r')
for line in dataFile:
item = json.loads(line.strip())
for review in item['reviews']:
review_ids.append(review['review_id'])
contents.append(review['text'])
dataFile.close()
    print(len(contents))
    print(len(review_ids))
contentFile = open('data/yelpData.content', 'w')
for content in contents:
contentFile.write(content+'\n')
contentFile.close()
idFile = open('data/yelpData.id', 'w')
for id in review_ids:
idFile.write(str(id)+'\n')
idFile.close()
output = runtagger_parse(contents)
outputFile = open('data/yelpData.pos', 'w')
for item in output:
outputFile.write(json.dumps(item)+'\n')
outputFile.close()
    print(len(output))
if len(contents) != len(output):
print('output length different from input length')
else:
outFile = open(outputFilename, 'w')
for index, out in enumerate(output):
outFile.write(json.dumps({'review_id': review_ids[index], 'tag': out}) + '\n')
outFile.close()
def processTweetTag(hashtag=True):
placeList = []
placeListFile = open('lists/google_place_long.category', 'r')
for line in placeListFile:
if not line.startswith('#'):
placeList.append(line.strip())
placeListFile.close()
for index, place in enumerate(placeList):
tweetFile = open('data/POIplace/' + place + '.json', 'r')
contents = []
ids = []
for line in tweetFile:
data = json.loads(line.strip())
content = removeLinks(data['text']).replace('\n', ' ').replace('\r', ' ').replace('#', ' #')
#encode('unicode-escape').replace('\u', ' \u').replace('\U', ' \U')
if hashtag:
outContent = content
else:
outContent = ''
for temp in content.split(' '):
if temp != '':
if temp.startswith('#'):
segTemp = segment(temp[1:])
for seg in segTemp:
outContent += seg + ' '
else:
outContent += temp + ' '
contents.append(outContent.strip())
ids.append(data['id'])
tweetFile.close()
print(place + ' size: '+str(len(ids)))
output = runtagger_parse(contents)
if len(contents) != len(output):
print('ERROR')
else:
outFile = open('data/POSnew/'+place+'.pos', 'w')
for index, out in enumerate(output):
outFile.write(json.dumps({'id': ids[index], 'tag': out})+'\n')
outFile.close()
def processHistTag(hashtag=True, maxHistNum=10):
placeList = []
placeListFile = open('lists/google_place_long.category', 'r')
for line in placeListFile:
if not line.startswith('#'):
placeList.append(line.strip())
placeListFile.close()
for index, place in enumerate(placeList):
print(place)
histFile = open('data/POIHistClean/' + place + '.json', 'r')
contents = []
ids = []
for line in histFile:
histData = json.loads(line.strip())
if len(histData['statuses']) > 1:
#tweetID = histData['max_id']
for i in range(min(maxHistNum, len(histData['statuses'])-1)):
tweet = histData['statuses'][i + 1]
content = removeLinks(tweet['text']).replace('\n', ' ').replace('\r', ' ').replace('#', ' #')
histID = tweet['id']
ids.append(histID)
if hashtag:
outContent = content
else:
outContent = ''
for temp in content.split(' '):
if temp != '':
if temp.startswith('#'):
segTemp = segment(temp[1:])
for seg in segTemp:
outContent += seg + ' '
else:
outContent += temp + ' '
contents.append(outContent.strip())
outputs = runtagger_parse(contents)
outFile = open('data/POShistCleanMax_100_0.7/' + place + '.pos', 'w')
print(str(len(outputs))+'/'+str(len(contents)))
if len(contents) != len(outputs):
predictions = combineContents(outputs)
idIndex = 0
predIndex = 0
count = 0
outputDict = {}
while True:
#print idIndex
#print predIndex
                contentTemp = contents[idIndex].replace(' ', '').replace('&lt;', '<').replace('&gt;', '>').replace('&amp;', '&').encode('unicode-escape')  # unescape HTML entities in the raw tweet text
predTemp = predictions[predIndex].encode('unicode-escape')
score = SequenceMatcher(None, contentTemp, predTemp).ratio()
#print contentTemp
#print predTemp
#print score
#print '-------'
if score > 0.7:
outputDict[ids[idIndex]] = outputs[predIndex]
count += 1
predIndex += 1
idIndex += 1
else:
idIndex += 1
if idIndex >= len(contents) or predIndex >= len(predictions):
break
if count != len(outputs):
print('ERROR')
else:
for key, value in outputDict.items():
outFile.write(json.dumps({int(key): value})+'\n')
else:
for index, out in enumerate(outputs):
outFile.write(json.dumps({int(ids[index]): out})+'\n')
outFile.close()
def processUserTweetTag(fileName, hashtag=True):
brandList = []
listFile = open(fileName, 'r')
for line in listFile:
if not line.startswith('#'):
brandList.append(line.strip())
listFile.close()
for brand in brandList:
print(brand)
userIDList = []
tweetIDList = []
contents = []
inputFile = open('data/userTweets2/clean2/' + brand + '.json', 'r')
outFile = open('data/userTweets2/clean2/' + brand + '.pos', 'w')
for line in inputFile:
userData = json.loads(line.strip())
if len(userData['statuses']) > 19:
user_id = userData['user_id']
#tweets = sortTweetbyID(userData['statuses'], num=6)
for tweet in userData['statuses']:
content = removeLinks(tweet['text']).replace('\n', ' ').replace('\r', ' ').replace('#', ' #')
tweetID = tweet['id']
tweetIDList.append(tweetID)
userIDList.append(user_id)
if hashtag:
outContent = content
else:
outContent = ''
for temp in content.split(' '):
if temp != '':
if temp.startswith('#'):
segTemp = segment(temp[1:])
for seg in segTemp:
outContent += seg + ' '
else:
outContent += temp + ' '
contents.append(outContent.strip())
print ('Running CMU Tagger...')
outputs = runtagger_parse(contents)
print ('Aligning tagged outputs...')
if len(contents) != len(outputs):
predictions = combineContents(outputs)
idIndex = 0
predIndex = 0
count = 0
outputDict = {}
while True:
                contentTemp = contents[idIndex].replace(' ', '').replace('&lt;', '<').replace('&gt;', '>').replace('&amp;', '&').encode('unicode-escape')  # unescape HTML entities in the raw tweet text
predTemp = predictions[predIndex].encode('unicode-escape')
score = SequenceMatcher(None, contentTemp, predTemp).ratio()
if score > 0.7:
outputDict[tweetIDList[idIndex]] = outputs[predIndex]
count += 1
predIndex += 1
idIndex += 1
else:
idIndex += 1
if idIndex >= len(contents) or predIndex >= len(predictions):
break
if count != len(outputs):
print('ERROR')
else:
for key, value in outputDict.items():
outFile.write(json.dumps({int(key): value})+'\n')
else:
for index, out in enumerate(outputs):
outFile.write(json.dumps({int(tweetIDList[index]): out})+'\n')
outFile.close()
if __name__ == "__main__":
#processTweetTag(hashtag=True)
processYelpData('data/yelpUserReviewData.json', 'data/yelpUserReview.pos.json')
#processHistTag(hashtag=False, maxHistNum=50)
#processUserTweetTag('lists/popularAccount4.list', hashtag=False)
| 39.368421 | 172 | 0.533973 | 1,299 | 12,716 | 5.169361 | 0.224788 | 0.010722 | 0.014296 | 0.018764 | 0.416232 | 0.392554 | 0.367089 | 0.367089 | 0.349218 | 0.316754 | 0 | 0.010301 | 0.335797 | 12,716 | 322 | 173 | 39.490683 | 0.78475 | 0.097594 | 0 | 0.492248 | 0 | 0.003876 | 0.076902 | 0.02671 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.015504 | null | null | 0.050388 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
41ea908d2cb1c1e2858cedb1b00c6722a8ee4eee | 867 | py | Python | src/mem/HybridController/eDRAM_Cache_Side/eDRAM.py | MattDAngeli/Hybrid-eDRAM-PCM-Full-System-Simulator | 4ae14eb6d3c13ab8287e792054dadf8c44f064c3 | [
"BSD-3-Clause"
] | null | null | null | src/mem/HybridController/eDRAM_Cache_Side/eDRAM.py | MattDAngeli/Hybrid-eDRAM-PCM-Full-System-Simulator | 4ae14eb6d3c13ab8287e792054dadf8c44f064c3 | [
"BSD-3-Clause"
] | null | null | null | src/mem/HybridController/eDRAM_Cache_Side/eDRAM.py | MattDAngeli/Hybrid-eDRAM-PCM-Full-System-Simulator | 4ae14eb6d3c13ab8287e792054dadf8c44f064c3 | [
"BSD-3-Clause"
] | null | null | null | from m5.params import *
from m5.proxy import *
from MemObject import *
from eDRAMCacheTags import *
class eDRAMCache(MemObject):
type = 'eDRAMCache'
cxx_header = "mem/HybridController/eDRAM_Cache_Side/eDRAM_cache.hh"
block_size = Param.Int(Parent.block_size, "same as parent")
size = Param.MemorySize(Parent.eDRAM_cache_size, "same")
write_only = Param.Bool(Parent.eDRAM_cache_write_only_mode, "same")
read_part = Param.MemorySize(Parent.eDRAM_cache_read_partition, "same")
write_part = Param.MemorySize(Parent.eDRAM_cache_write_partition, "same")
tag_latency = Param.Cycles(Parent.eDRAM_cache_tag_latency, "same")
mshr_entries = Param.Int(Parent.eDRAM_cache_mshr_entries, "same")
wb_entries = Param.Int(Parent.eDRAM_cache_wb_entries, "same")
tags = Param.eDRAMCacheTagsWithFABlk(eDRAMCacheFATags(), "tag store")
| 37.695652 | 77 | 0.758939 | 114 | 867 | 5.482456 | 0.377193 | 0.144 | 0.1792 | 0.1248 | 0.2608 | 0.2112 | 0 | 0 | 0 | 0 | 0 | 0.002667 | 0.134948 | 867 | 22 | 78 | 39.409091 | 0.830667 | 0 | 0 | 0 | 0 | 0 | 0.130334 | 0.059977 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
41ec0228ce778ad3bf08a92d4dc659af6c15f8f7 | 483 | py | Python | interview/leet/859_Buddy_Strings.py | eroicaleo/LearningPython | 297d46eddce6e43ce0c160d2660dff5f5d616800 | [
"MIT"
] | 1 | 2020-10-12T13:33:29.000Z | 2020-10-12T13:33:29.000Z | interview/leet/859_Buddy_Strings.py | eroicaleo/LearningPython | 297d46eddce6e43ce0c160d2660dff5f5d616800 | [
"MIT"
] | null | null | null | interview/leet/859_Buddy_Strings.py | eroicaleo/LearningPython | 297d46eddce6e43ce0c160d2660dff5f5d616800 | [
"MIT"
] | 1 | 2016-11-09T07:28:45.000Z | 2016-11-09T07:28:45.000Z | #!/usr/bin/env python3
class Solution:
    def buddyStrings(self, A, B):
        la, lb = len(A), len(B)
        if la != lb:
            return False
        diff = [i for i in range(la) if A[i] != B[i]]
        if len(diff) == 0:
            # identical strings: a swap only works if some character repeats
            return len(set(A)) < la
        if len(diff) != 2:
            return False
        i, j = diff
        return A[i] == B[j] and A[j] == B[i]
| 25.421053 | 53 | 0.436853 | 73 | 483 | 2.890411 | 0.438356 | 0.208531 | 0.037915 | 0.047393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014388 | 0.424431 | 483 | 18 | 54 | 26.833333 | 0.744604 | 0.043478 | 0 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
41ed42ba155f4f8c3b0e56171e05cd1321ae8090 | 502 | py | Python | classes/magic_method/method1.py | zhangyage/Python-oldboy | a95c1b465929e2be641e425fcb5e15b366800831 | [
"Apache-2.0"
] | 1 | 2020-06-04T08:44:09.000Z | 2020-06-04T08:44:09.000Z | classes/magic_method/method1.py | zhangyage/Python-oldboy | a95c1b465929e2be641e425fcb5e15b366800831 | [
"Apache-2.0"
] | null | null | null | classes/magic_method/method1.py | zhangyage/Python-oldboy | a95c1b465929e2be641e425fcb5e15b366800831 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding:utf-8 -*-
'''
__new__ __init__
'''
class Programer(object):
    def __new__(cls, *args, **kwargs):
        print('call __new__ method!')
        print(args)
        # object.__new__ takes no extra arguments when __init__ is
        # overridden; the name/age arguments are consumed by __init__
        return super(Programer, cls).__new__(cls)

    def __init__(self, name, age):
        print('call __init__ method!')
        self.name = name
        self.age = age


if __name__ == '__main__':
    programer = Programer('zhang', 25)
    print(programer.__dict__)
41f2f05dc184e0a4fdb150d39e607ae1b449e7c0 | 2,533 | py | Python | Projects/events/views.py | jjfleet/Capstone | f81e21f0641ed0b75e06161198fca52805acb2e4 | [
"Apache-2.0"
] | 2 | 2018-07-23T05:44:50.000Z | 2018-09-10T09:12:36.000Z | Projects/events/views.py | jjmassey/Capstone | f81e21f0641ed0b75e06161198fca52805acb2e4 | [
"Apache-2.0"
] | 14 | 2018-09-10T10:42:39.000Z | 2018-10-24T00:04:36.000Z | Projects/events/views.py | jjfleet/Capstone | f81e21f0641ed0b75e06161198fca52805acb2e4 | [
"Apache-2.0"
] | 2 | 2018-09-10T06:34:31.000Z | 2018-09-17T06:05:23.000Z | from django.shortcuts import render, get_object_or_404, redirect
from django.views.generic import TemplateView, ListView, DetailView, CreateView, UpdateView, DeleteView
from .models import EventListing
from django.contrib.auth.models import User
from django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin
from django.contrib.auth.decorators import login_required
from events.forms import EventCreateView, EventUpdateForm
from django.contrib import messages
from django.urls import reverse
@login_required
def eventCreate(request):
if request.method == 'POST':
form = EventCreateView(request.POST, request.FILES)
if form.is_valid():
form.instance.author = request.user
form = form.save()
messages.success(request, "Your event has been created!")
return redirect(reverse('event-detail', kwargs={'pk': form.pk}))
else:
form = EventCreateView()
return render(request, 'events/eventlisting_form.html', {'form': form})
def eventUpdateView(request, pk):
instance = get_object_or_404(EventListing, id=pk)
form = EventUpdateForm(request.POST or None, instance=instance)
if form.is_valid():
form.save()
messages.success(request, "Your event has been updated!")
return redirect(reverse('event-detail', kwargs={'pk': pk}))
else:
        e_form = EventUpdateForm(instance=instance)  # reuse the instance fetched above
return render(request, 'events/eventupdate_form.html', {'e_form': e_form})
class EventPageView(ListView):
model = EventListing
template_name = 'events/events.html'
context_object_name = 'data'
ordering = ['-date_posted']
class UserEventPageView(ListView):
model = EventListing
template_name = 'events/user_event.html'
context_object_name = 'data'
paginate_by = 3
def get_queryset(self):
user = get_object_or_404(User, username=self.kwargs.get('username'))
return EventListing.objects.filter(author=user).order_by('-date_posted')
class EventUpdateView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
model = EventListing
fields = [] #models go here
def form_valid(self, form):
form.instance.author = self.request.user
return super().form_valid(form)
def test_func(self):
event = self.get_object()
        return self.request.user == event.author  # only the event's author may edit it
class EventDeleteView(LoginRequiredMixin, UserPassesTestMixin, DeleteView):
model = EventListing
success_url = '/'
def test_func(self):
event = self.get_object()
return self.request.user == event.author
class EventDetailView(DetailView):
model = EventListing
| 34.22973 | 103 | 0.771812 | 320 | 2,533 | 5.99375 | 0.321875 | 0.036496 | 0.035454 | 0.021898 | 0.244004 | 0.202294 | 0.157456 | 0.115746 | 0.115746 | 0.067779 | 0 | 0.004474 | 0.117647 | 2,533 | 73 | 104 | 34.69863 | 0.853691 | 0.022503 | 0 | 0.278689 | 0 | 0 | 0.095392 | 0.031932 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098361 | false | 0.04918 | 0.147541 | 0 | 0.672131 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
41f5feecfe50cf78c71aaf4a80ea65c9c1794f05 | 1,594 | py | Python | todo/migrations/0004_rename_list_tasklist.py | Sowmya-1998/https-github.com-shacker-django-todo | 79a272d222ef262f4f6caee91d9accfb8569ccea | [
"BSD-3-Clause"
] | 567 | 2015-01-02T00:34:10.000Z | 2022-03-30T07:52:08.000Z | todo/migrations/0004_rename_list_tasklist.py | wu1f72514/django-todo | 2d86a51177a6f16cf4239fa3c034f6844c4bc048 | [
"BSD-3-Clause"
] | 94 | 2015-06-07T09:26:31.000Z | 2022-03-05T23:53:22.000Z | todo/migrations/0004_rename_list_tasklist.py | yeastbaron/tst | 2d86a51177a6f16cf4239fa3c034f6844c4bc048 | [
"BSD-3-Clause"
] | 272 | 2015-01-03T08:16:51.000Z | 2022-03-29T09:37:17.000Z | # Generated by Django 2.0.2 on 2018-02-09 23:15
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("auth", "0009_alter_user_last_name_max_length"),
("todo", "0003_assignee_optional"),
]
operations = [
migrations.CreateModel(
name="TaskList",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("name", models.CharField(max_length=60)),
("slug", models.SlugField(default="")),
(
"group",
models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to="auth.Group"),
),
],
options={"verbose_name_plural": "Lists", "ordering": ["name"]},
),
migrations.AlterUniqueTogether(name="list", unique_together=set()),
migrations.RemoveField(model_name="list", name="group"),
migrations.RemoveField(model_name="item", name="list"),
migrations.DeleteModel(name="List"),
migrations.AddField(
model_name="item",
name="task_list",
field=models.ForeignKey(
null=True, on_delete=django.db.models.deletion.CASCADE, to="todo.TaskList"
),
),
migrations.AlterUniqueTogether(name="tasklist", unique_together={("group", "slug")}),
]
| 34.652174 | 100 | 0.542033 | 146 | 1,594 | 5.767123 | 0.5 | 0.038005 | 0.049881 | 0.078385 | 0.092637 | 0.092637 | 0.092637 | 0.092637 | 0 | 0 | 0 | 0.023191 | 0.323714 | 1,594 | 45 | 101 | 35.422222 | 0.757885 | 0.028231 | 0 | 0.153846 | 1 | 0 | 0.132515 | 0.037492 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.051282 | 0 | 0.128205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
41f9ca29c3f35036a1832f13ceb7d1aad3a2fd84 | 2,753 | py | Python | src/dx/share/property_maker.py | lmmx/dx | 063e8f8cfc24dfdf09a12001b58b4017a75ea3e8 | [
"MIT"
] | null | null | null | src/dx/share/property_maker.py | lmmx/dx | 063e8f8cfc24dfdf09a12001b58b4017a75ea3e8 | [
"MIT"
] | 2 | 2021-01-03T16:22:11.000Z | 2021-02-07T08:41:57.000Z | src/dx/share/property_maker.py | lmmx/dx | 063e8f8cfc24dfdf09a12001b58b4017a75ea3e8 | [
"MIT"
] | null | null | null | from inspect import currentframe
__all__ = ["add_props_to_ns", "add_classprops_to_ns", "classproperty", "props_as_dict"]
def prop_getsetdel(property_name, prefix="_", read_only=False, deletable=False):
internal_attr = prefix + property_name
def prop_getter(internal_attr):
def getter_func(self):
return getattr(self, internal_attr)
return getter_func
def prop_setter(internal_attr):
def setter_func(self, val):
setattr(self, internal_attr, val)
return setter_func
def prop_deleter(internal_attr):
def deleter_func(self):
delattr(self, internal_attr)
return deleter_func
pget = prop_getter(internal_attr)
pset = prop_setter(internal_attr)
pdel = prop_deleter(internal_attr)
if read_only:
if deletable:
return pget, None, pdel # Leave pset `None`
else:
return tuple([pget])
else:
if deletable:
return pget, pset, pdel # Full house !
else:
return pget, pset
def property_maker(property_name, prefix="_", read_only=False, deletable=False):
pgsd = prop_getsetdel(property_name, prefix, read_only, deletable)
return property(*pgsd)
def classproperty_maker(property_name, prefix="_", read_only=False, deletable=False):
pgsd = prop_getsetdel(property_name, prefix, read_only, deletable)
return classproperty(*pgsd)
def props_as_dict(prop_names, prefix="_", read_only=False, deletable=False):
l = [(p, property_maker(p, prefix, read_only, deletable)) for p in prop_names]
return dict(l)
def classprops_as_dict(prop_names, prefix="_", read_only=False, deletable=False):
l = [(p, classproperty_maker(p, prefix, read_only, deletable)) for p in prop_names]
return dict(l)
def add_props_to_ns(property_list, prefix="_", read_only=False, deletable=False):
try:
frame = currentframe()
callers_ns = frame.f_back.f_locals
d = props_as_dict(property_list, prefix, read_only, deletable)
callers_ns.update(d)
finally:
del frame
return
def add_classprops_to_ns(property_list, prefix="_", read_only=False, deletable=False):
try:
frame = currentframe()
callers_ns = frame.f_back.f_locals
d = classprops_as_dict(property_list, prefix, read_only, deletable)
callers_ns.update(d)
finally:
del frame
return
# use within a class definition as:
# add_props_to_ns(["attr1", "attr2"])
# Decorate a class method to get a static method @property,
# if used to access a __private attribute it makes it immutable
class classproperty(property):
def __get__(self, cls, owner):
return classmethod(self.fget).__get__(None, owner)()
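As a sketch of how the frame-mutation trick above is meant to be used, here is a condensed, self-contained restatement of `add_props_to_ns` (the helper name `make_props` and the `Point` class are illustrative):

```python
from inspect import currentframe

def make_props(names, prefix="_"):
    # Condensed form of add_props_to_ns: inject one property per name
    # into the caller's namespace. A class body's f_locals is a real
    # dict in CPython, so updating it from a helper call persists.
    ns = currentframe().f_back.f_locals
    for name in names:
        attr = prefix + name  # backing attribute, e.g. "_x"
        ns[name] = property(
            lambda self, a=attr: getattr(self, a),
            lambda self, value, a=attr: setattr(self, a, value),
        )

class Point:
    make_props(["x", "y"])

    def __init__(self, x, y):
        self._x = x
        self._y = y

p = Point(1, 2)
p.x = 10  # property setter writes through to p._x
```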
| 34.4125 | 87 | 0.687977 | 362 | 2,753 | 4.925414 | 0.234807 | 0.062815 | 0.102075 | 0.074593 | 0.469994 | 0.469994 | 0.469994 | 0.462703 | 0.437465 | 0.437465 | 0 | 0.000928 | 0.216854 | 2,753 | 79 | 88 | 34.848101 | 0.826067 | 0.079913 | 0 | 0.370968 | 0 | 0 | 0.02692 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225806 | false | 0 | 0.016129 | 0.032258 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
41fe10662f063c61998d2ea5f5fcc421ad3d452f | 581 | py | Python | mulm/residualizer/__init__.py | neurospin/pylearn-mulm | cfbcda8312a495a8e82657e04b67dc95db2a3306 | [
"BSD-3-Clause"
] | 6 | 2015-02-27T13:29:36.000Z | 2021-05-04T11:45:42.000Z | mulm/residualizer/__init__.py | neurospin/pylearn-mulm | cfbcda8312a495a8e82657e04b67dc95db2a3306 | [
"BSD-3-Clause"
] | null | null | null | mulm/residualizer/__init__.py | neurospin/pylearn-mulm | cfbcda8312a495a8e82657e04b67dc95db2a3306 | [
"BSD-3-Clause"
] | 1 | 2020-05-20T15:42:07.000Z | 2020-05-20T15:42:07.000Z | # -*- coding: utf-8 -*-
##########################################################################
# Created on Tue Jun 25 13:25:41 2013
# Copyright (c) 2013-2021, CEA/DRF/Joliot/NeuroSpin. All rights reserved.
# @author: Edouard Duchesnay
# @email: edouard.duchesnay@cea.fr
# @license: BSD 3-clause.
##########################################################################
"""
Module that contains the residualizers.
"""
from .residualizer import Residualizer
from .residualizer import ResidualizerEstimator
__all__ = ['Residualizer',
'ResidualizerEstimator']
| 29.05 | 74 | 0.531842 | 51 | 581 | 5.980392 | 0.764706 | 0.104918 | 0.144262 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043137 | 0.122203 | 581 | 19 | 75 | 30.578947 | 0.554902 | 0.442341 | 0 | 0 | 0 | 0 | 0.202454 | 0.128834 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
510189956b36fd250d873884e457135e0f5d9814 | 352 | py | Python | pset_pandas_ext/101problems/p37.py | mottaquikarim/pydev-psets | 9749e0d216ee0a5c586d0d3013ef481cc21dee27 | [
"MIT"
] | 5 | 2019-04-08T20:05:37.000Z | 2019-12-04T20:48:45.000Z | pset_pandas_ext/101problems/p37.py | mottaquikarim/pydev-psets | 9749e0d216ee0a5c586d0d3013ef481cc21dee27 | [
"MIT"
] | 8 | 2019-04-15T15:16:05.000Z | 2022-02-12T10:33:32.000Z | pset_pandas_ext/101problems/p37.py | mottaquikarim/pydev-psets | 9749e0d216ee0a5c586d0d3013ef481cc21dee27 | [
"MIT"
] | 2 | 2019-04-10T00:14:42.000Z | 2020-02-26T20:35:21.000Z | """
37. How to get the nrows, ncolumns, datatype, summary stats of each column of a dataframe? Also get the array and list equivalent.
"""
"""
Difficulty Level: L2
"""
"""
Get the number of rows, columns, datatype and summary statistics of each column of the Cars93 dataset. Also get the numpy array and list equivalent of the dataframe.
"""
"""
"""
| 27.076923 | 165 | 0.71875 | 54 | 352 | 4.685185 | 0.555556 | 0.094862 | 0.094862 | 0.110672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017361 | 0.181818 | 352 | 12 | 166 | 29.333333 | 0.861111 | 0.369318 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5102d3fc83a6a50bd2856d794fbbf9fc67809890 | 3,423 | py | Python | bookstore/news/views.py | M0673N/bookstore | ec9477550ba46f9ffde3817cf676e97b0239263d | [
"MIT"
] | null | null | null | bookstore/news/views.py | M0673N/bookstore | ec9477550ba46f9ffde3817cf676e97b0239263d | [
"MIT"
] | null | null | null | bookstore/news/views.py | M0673N/bookstore | ec9477550ba46f9ffde3817cf676e97b0239263d | [
"MIT"
] | null | null | null | from django.contrib.auth.mixins import LoginRequiredMixin
from django.shortcuts import redirect, render
from django.urls import reverse_lazy
from django.views import View
from django.views.generic import ListView, CreateView, DetailView, UpdateView, DeleteView
from bookstore.news.forms import ArticleForm, ArticleCommentForm
from .models import ArticleComment
from .signals import *
from django.db.models import signals
class ListArticlesView(ListView):
    template_name = 'articles/news.html'
    context_object_name = 'articles'
    model = Article
    paginate_by = 12

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['articles'] = Article.objects.order_by('-date_posted')
        return context


class AddArticleView(LoginRequiredMixin, CreateView):
    model = Article
    form_class = ArticleForm
    success_url = reverse_lazy('news')
    template_name = 'articles/add_article.html'

    def post(self, request, *args, **kwargs):
        form = ArticleForm(request.POST, request.FILES)
        if form.is_valid():
            signals.pre_save.disconnect(receiver=delete_old_image_on_article_change, sender=Article)
            article = form.save(commit=False)
            article.user = self.request.user
            article.save()
            signals.pre_save.connect(receiver=delete_old_image_on_article_change, sender=Article)
            return redirect('news')
        else:
            return render(request, 'articles/add_article.html', {'form': form})


class ArticleDetailsView(DetailView):
    model = Article
    template_name = 'articles/article_details.html'
    context_object_name = 'article'

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        article = context['article']
        is_owner = article.user == self.request.user
        context['form'] = ArticleCommentForm()
        context['comments'] = article.articlecomment_set.all().order_by('date_posted')
        context['comments_count'] = article.articlecomment_set.count()
        context['is_owner'] = is_owner
        return context


class EditArticleView(LoginRequiredMixin, UpdateView):
    model = Article
    form_class = ArticleForm
    success_url = reverse_lazy('news')
    template_name = 'articles/edit_article.html'


class DeleteArticleView(LoginRequiredMixin, DeleteView):
    def get(self, request, *args, **kwargs):
        article = Article.objects.get(pk=self.kwargs['pk'])
        article.delete()
        return redirect('news')


class CommentArticleView(LoginRequiredMixin, View):
    form_class = ArticleCommentForm

    def post(self, request, *args, **kwargs):
        form = self.form_class(request.POST)
        if form.is_valid():
            article = Article.objects.get(pk=self.kwargs['pk'])
            comment = ArticleComment(
                text=form.cleaned_data['text'],
                article=article,
                user=self.request.user,
            )
            comment.save()
            return redirect('article details', article.id)
        return redirect('article details', self.kwargs['pk'])


class DeleteArticleCommentView(LoginRequiredMixin, DeleteView):
    def get(self, request, *args, **kwargs):
        comment = ArticleComment.objects.get(pk=self.kwargs['cpk'])
        comment.delete()
        return redirect('article details', self.kwargs['apk'])
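AddArticleView disconnects a pre_save receiver and reconnects it afterwards; if form.save() raises in between, the receiver stays disconnected. A hedged sketch of the same pattern wrapped in a try/finally context manager — the Signal class below is a minimal stand-in of my own, not Django's:

```python
from contextlib import contextmanager


class Signal:
    """Minimal stand-in for a Django-style signal."""
    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        self.receivers.append(receiver)

    def disconnect(self, receiver):
        self.receivers.remove(receiver)

    def send(self, payload):
        for receiver in self.receivers:
            receiver(payload)


@contextmanager
def muted(signal, receiver):
    """Disconnect a receiver for the duration of the block, reconnecting even on error."""
    signal.disconnect(receiver)
    try:
        yield
    finally:
        signal.connect(receiver)


calls = []
pre_save = Signal()
pre_save.connect(calls.append)

with muted(pre_save, calls.append):
    pre_save.send("inside")   # receiver is muted here
pre_save.send("outside")      # receiver is active again

print(calls)  # → ['outside']
```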
| 32.913462 | 100 | 0.682734 | 373 | 3,423 | 6.117962 | 0.268097 | 0.030675 | 0.035057 | 0.03681 | 0.349693 | 0.305872 | 0.272568 | 0.244522 | 0.163015 | 0.119194 | 0 | 0.00074 | 0.210926 | 3,423 | 103 | 101 | 33.23301 | 0.844132 | 0 | 0 | 0.315789 | 0 | 0 | 0.085013 | 0.030675 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.118421 | 0 | 0.605263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
511056a39b1c224d0c6ccad1ee52150327e9b87b | 4,074 | py | Python | functions.py | QijingMa/COGS18-Final-Project | 71d2761706f2614bcee2e5d5d2b42a98af542d21 | [
"MIT"
] | null | null | null | functions.py | QijingMa/COGS18-Final-Project | 71d2761706f2614bcee2e5d5d2b42a98af542d21 | [
"MIT"
] | null | null | null | functions.py | QijingMa/COGS18-Final-Project | 71d2761706f2614bcee2e5d5d2b42a98af542d21 | [
"MIT"
] | null | null | null | """A collection of function for doing my project."""
import time
import random
#Background Introduction
def Intro():
    """Print the background introduction of the game."""
    print("Hello, how are you doing after the long journey?")
    time.sleep(2)
    print("You are now at Pallet Town, Kanto.")
    time.sleep(2)
    print("A place that abounds with amazing creatures we call Pokemon.")
    time.sleep(2)
    print("Here Pokemon and people work and live together peacefully.")
    time.sleep(2)
    print("I am Professor Oak, a Pokemon researcher.")
    time.sleep(2.5)
    print("Now it is your time to start your own adventure.")
    time.sleep(2)
    print("Before you go, I have some gifts for you.")
    time.sleep(2)
    print("I have three adorable Pokemon for you. You can pick one of them as your friend!")
    time.sleep(3)
    print()


def choosePokemon():  #let the user choose a pokemon and store its value
    """Prompt for a starter Pokemon until a valid one is entered."""
    Pokemon = ""
    while Pokemon != "charmander" and Pokemon != "squirtle" and Pokemon != "bulbasaur":  #ask again if the input is not one of the three
        Pokemon = input("Which Pokemon would you like to choose (Charmander or Squirtle or Bulbasaur): ")
        Pokemon = Pokemon.lower()  #accept lower-case input
    return Pokemon


def chooseSkill1():  #choose a skill and store the value
    """Prompt for one of Squirtle's skills until a valid one is entered."""
    Skill1 = ""
    while Skill1 != "bubble" and Skill1 != "aqua tail":  #choose again if input is not one of the two skills
        Skill1 = input("Which Skill will you use against? (Bubble or Aqua Tail): ")
        Skill1 = Skill1.lower()  #accept lower-case input
    return Skill1


def checkPath1(chosenSkill1):  #show the result of the choice of skill
    """Show the result of the skill choice for Squirtle."""
    print("OK....")
    time.sleep(2)
    print("if this is your final decision")
    time.sleep(2)
    print("Let's see what will happen")
    print()
    time.sleep(2)
    List_1 = ["bubble", "aqua tail"]
    correctchoice = random.choice(List_1)  #random choice so that the result differs every time
    if chosenSkill1 == correctchoice:
        print("Oh, that skill seems super effective.")
        print("The enemy is down, you win your first fight!")
        print("Exp+100, Gold+50")
    else:
        print("What a pity")
        print("You missed!")
        print("The enemy fights back and you lose=_=")


def chooseSkill2():
    """Prompt for one of Charmander's skills until a valid one is entered."""
    Skill2 = ""
    while Skill2 != "Ember" and Skill2 != "Flame Charge":
        Skill2 = input("Which Skill will you use against? (Ember or Flame Charge): ")
    return Skill2


def checkPath2(chosenSkill2):
    """Show the result of the skill choice for Charmander."""
    print("OK....")
    time.sleep(2)
    print("if this is your final decision")
    time.sleep(2)
    print("Let's see what will happen")
    print()
    time.sleep(2)
    List_1 = ["Ember", "Flame Charge"]
    correctchoice = random.choice(List_1)
    if chosenSkill2 == correctchoice:
        print("Oh, that skill seems super effective.")
        print("The enemy is down, you win your first fight!")
        print("Exp+100, Gold+50")
    else:
        print("What a pity")
        print("You missed!")
        print("The enemy fights back and you lose=_=")


def chooseSkill3():
    """Prompt for one of Bulbasaur's skills until a valid one is entered."""
    Skill3 = ""
    while Skill3 != "Seed Bomb" and Skill3 != "Vine Whip":
        Skill3 = input("Which Skill will you use against? (Seed Bomb or Vine Whip): ")
    return Skill3


def checkPath3(chosenSkill3):
    """Show the result of the skill choice for Bulbasaur."""
    print("OK....")
    time.sleep(2)
    print("if this is your final decision")
    time.sleep(2)
    print("Let's see what will happen")
    print()
    time.sleep(2)
    List_1 = ["Vine Whip", "Seed Bomb"]
    correctchoice = random.choice(List_1)
    if chosenSkill3 == correctchoice:
        print("Oh, that skill seems super effective.")
        print("The enemy is down, you win your first fight!")
        print("Exp+100, Gold+50")
    else:
        print("What a pity")
        print("You missed!")
        print("The enemy fights back and you lose=_=")
| 29.309353 | 148 | 0.61512 | 553 | 4,074 | 4.515371 | 0.298373 | 0.061274 | 0.064077 | 0.072087 | 0.428915 | 0.39207 | 0.36644 | 0.327994 | 0.327994 | 0.327994 | 0 | 0.022989 | 0.273932 | 4,074 | 139 | 149 | 29.309353 | 0.821163 | 0.108738 | 0 | 0.541667 | 0 | 0 | 0.403163 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.020833 | 0 | 0.145833 | 0.40625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
511348d0f7e727316d9decb80f7903c72de66bea | 486 | py | Python | changelogger/urls.py | zhelyabuzhsky/changelogger | 8dbbfe978a86fa89a7eae23525c60c5f281fea6c | [
"MIT"
] | 1 | 2020-04-14T06:10:23.000Z | 2020-04-14T06:10:23.000Z | changelogger/urls.py | zhelyabuzhsky/changelogger | 8dbbfe978a86fa89a7eae23525c60c5f281fea6c | [
"MIT"
] | 43 | 2019-08-23T06:23:57.000Z | 2022-03-18T06:30:34.000Z | changelogger/urls.py | zhelyabuzhsky/changelogger | 8dbbfe978a86fa89a7eae23525c60c5f281fea6c | [
"MIT"
] | null | null | null | from django.conf.urls import include
from django.contrib import admin
from django.contrib.auth import views as auth_views
from django.urls import path
urlpatterns = [
    path("", include("changelogs.urls")),
    path("admin/", admin.site.urls),
    path("accounts/", include("django.contrib.auth.urls")),
    path("login/", auth_views.LoginView.as_view()),
    path("logout/", auth_views.LogoutView.as_view()),
    path("oauth/", include("social_django.urls", namespace="social")),
]
| 34.714286 | 70 | 0.709877 | 64 | 486 | 5.296875 | 0.375 | 0.117994 | 0.100295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123457 | 486 | 13 | 71 | 37.384615 | 0.795775 | 0 | 0 | 0 | 0 | 0 | 0.199588 | 0.049383 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5116c1aaabb06943907a648c428e69e237f18f57 | 264 | py | Python | sfa.py | ISeeBugsEverywhere/SnapFlatApp | 5526311297a018312175b90623e1ff9613bc4d30 | [
"Unlicense"
] | null | null | null | sfa.py | ISeeBugsEverywhere/SnapFlatApp | 5526311297a018312175b90623e1ff9613bc4d30 | [
"Unlicense"
] | null | null | null | sfa.py | ISeeBugsEverywhere/SnapFlatApp | 5526311297a018312175b90623e1ff9613bc4d30 | [
"Unlicense"
] | null | null | null | #!/usr/bin/python3
#-*- coding: utf-8 -*-
from PyQt5.QtWidgets import QApplication
from Main.SFA_main import SFA_window
import sys
if __name__ == "__main__":
app = QApplication(sys.argv)
window = SFA_window()
window.show()
sys.exit(app.exec_())
| 18.857143 | 40 | 0.685606 | 36 | 264 | 4.694444 | 0.611111 | 0.106509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013761 | 0.174242 | 264 | 13 | 41 | 20.307692 | 0.761468 | 0.143939 | 0 | 0 | 0 | 0 | 0.035874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
511b45e66c5245a99c2902dd04256ca1c0aa4500 | 756 | py | Python | preader/reader/forms.py | jobscry/preader | 20bfcc88da723f1393f43a296ed12c03320d232e | [
"MIT"
] | null | null | null | preader/reader/forms.py | jobscry/preader | 20bfcc88da723f1393f43a296ed12c03320d232e | [
"MIT"
] | null | null | null | preader/reader/forms.py | jobscry/preader | 20bfcc88da723f1393f43a296ed12c03320d232e | [
"MIT"
] | null | null | null | from django import forms
from .models import Feed
class URLForm(forms.Form):
    url = forms.URLField(label='URL', max_length=255)


class NewSubscriptionForm(forms.Form):
    feeds = forms.MultipleChoiceField(widget=forms.CheckboxSelectMultiple, label='URLs')

    def __init__(self, *args, **kwargs):
        feed_id_list = kwargs.pop('feed_id_list')
        super(NewSubscriptionForm, self).__init__(*args, **kwargs)
        self.fields['feeds'] = forms.MultipleChoiceField(
            choices=Feed.objects.filter(id__in=feed_id_list).values_list(
                'id', 'feed_url'), widget=forms.CheckboxSelectMultiple, label='URLs'
        )


class NewFeedForm(forms.ModelForm):
    class Meta:
        model = Feed
        fields = ('feed_url', )
| 30.24 | 88 | 0.678571 | 86 | 756 | 5.732558 | 0.465116 | 0.036511 | 0.060852 | 0.154158 | 0.170385 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004942 | 0.19709 | 756 | 24 | 89 | 31.5 | 0.807249 | 0 | 0 | 0 | 0 | 0 | 0.060847 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
511ba19bf0b9fb7d2426745e79893e633d788d0d | 1,344 | py | Python | dkr-py310/docker-student-portal-310/course_files/begin_advanced/py_screener.py | pbarton666/virtual_classroom | a9d0dc2eb16ebc4d2fd451c3a3e6f96e37c87675 | [
"MIT"
] | null | null | null | dkr-py310/docker-student-portal-310/course_files/begin_advanced/py_screener.py | pbarton666/virtual_classroom | a9d0dc2eb16ebc4d2fd451c3a3e6f96e37c87675 | [
"MIT"
] | null | null | null | dkr-py310/docker-student-portal-310/course_files/begin_advanced/py_screener.py | pbarton666/virtual_classroom | a9d0dc2eb16ebc4d2fd451c3a3e6f96e37c87675 | [
"MIT"
] | null | null | null | #py_screener.py
def screener(user_inp=None):
"""A function to square only floating points.
Returns custom exceptions if an int or complex is encountered."""
#make sure something was input
if not user_inp:
print("Ummm...did you type in ANYTHING?")
return
#If it *might* be a float (has a ".") try to type-cast it and return
if "." in user_inp:
try:
inp_as_float=float(user_inp)
if isinstance(inp_as_float, float):
square = inp_as_float**2
print( "You gave me {}. Its square is: {}".format(user_inp, square))
except:
return
try: #see if we need to return the ComplexException
if "(" in user_inp: #it might be complex if it has a (
inp_as_complex=complex(user_inp)
if isinstance(inp_as_complex, complex):
raise ComplexException(inp_as_complex)
except: #it's not complex
pass
try:
#we already tried to type-cast to float, let's try casting to int
inp_as_integer=int(user_inp)
if isinstance(inp_as_integer, int):
raise IntException(inp_as_integer)
except:
pass
#if we're here, the function hasn't returned anything or raised an exception
print("Done processing {} ".format(user_inp)) | 34.461538 | 85 | 0.613839 | 190 | 1,344 | 4.194737 | 0.4 | 0.079046 | 0.037641 | 0.071518 | 0.090339 | 0.090339 | 0 | 0 | 0 | 0 | 0 | 0.001068 | 0.303571 | 1,344 | 39 | 86 | 34.461538 | 0.850427 | 0.335565 | 0 | 0.384615 | 0 | 0 | 0.100687 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0.076923 | 0 | 0 | 0.115385 | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
5126df55db07533e80af8cb011974663ad79b0a2 | 2,442 | py | Python | labellab-flask/api/models/Issue.py | M-A-D-A-R-A/LabelLab | 4e86ff52f6bd78ede853e001b5e9a38ebb234b83 | [
"Apache-2.0"
] | null | null | null | labellab-flask/api/models/Issue.py | M-A-D-A-R-A/LabelLab | 4e86ff52f6bd78ede853e001b5e9a38ebb234b83 | [
"Apache-2.0"
] | null | null | null | labellab-flask/api/models/Issue.py | M-A-D-A-R-A/LabelLab | 4e86ff52f6bd78ede853e001b5e9a38ebb234b83 | [
"Apache-2.0"
] | null | null | null | from datetime import datetime
from email.policy import default
from flask import current_app
from api.extensions import db, Base
class Issue(db.Model):
    """
    This model holds information about an Issue
    """
    __tablename__ = "issue"

    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(128), nullable=False)
    description = db.Column(db.String(256))
    project_id = db.Column(db.Integer,
                           db.ForeignKey('project.id', ondelete="cascade", onupdate="cascade"),
                           nullable=False)
    created_by = db.Column(db.Integer,
                           db.ForeignKey('user.id', ondelete="cascade", onupdate="cascade"),
                           nullable=False)
    assignee_id = db.Column(db.Integer,
                            db.ForeignKey('user.id', ondelete="cascade", onupdate="cascade"),
                            nullable=True)
    team_id = db.Column(db.Integer,
                        db.ForeignKey('team.id', ondelete="cascade", onupdate="cascade"),
                        nullable=True)
    category = db.Column(db.String(20), nullable=False)
    priority = db.Column(db.String(20), nullable=False, default="Low")
    status = db.Column(db.String(20), nullable=False, default="Open")
    entity_type = db.Column(db.String(10))
    entity_id = db.Column(db.Integer)
    due_date = db.Column(db.DateTime)
    created_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

    def __init__(self, title, description, project_id, created_by, category, status="Open",
                 priority="Low", team_id=None, assignee_id=None, entity_type=None,
                 entity_id=None, due_date=None):
        self.title = title
        self.description = description
        self.project_id = project_id
        self.created_by = created_by
        self.assignee_id = assignee_id
        self.team_id = team_id
        self.category = category
        self.status = status
        self.priority = priority
        self.entity_type = entity_type
        self.entity_id = entity_id
        self.due_date = due_date

    def __repr__(self):
        """
        Returns the object representation
        """
        return "<Issue(issue_id='%s', issue_title='%s', issue_description='%s')>" % (self.id, self.title, self.description)
return "<Issue(issue_id='%s', issue_title='%s', issue_description='%s')>" % (self.id, self.title, self.description) | 44.4 | 190 | 0.618346 | 292 | 2,442 | 5.010274 | 0.239726 | 0.082023 | 0.102529 | 0.06972 | 0.336979 | 0.283664 | 0.283664 | 0.151743 | 0.099795 | 0.099795 | 0 | 0.007791 | 0.264128 | 2,442 | 55 | 191 | 44.4 | 0.806344 | 0.031532 | 0 | 0.136364 | 0 | 0 | 0.074791 | 0.019798 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
512889aa7d8884a615563804875a0c84c20d6faf | 6,310 | py | Python | src/dumpBuggyPatches/dumpLogs.py | dSar-UVA/repoMiner | 8f75074e388ff13419a0a37b4337c0cdcb459f74 | [
"BSD-3-Clause"
] | null | null | null | src/dumpBuggyPatches/dumpLogs.py | dSar-UVA/repoMiner | 8f75074e388ff13419a0a37b4337c0cdcb459f74 | [
"BSD-3-Clause"
] | null | null | null | src/dumpBuggyPatches/dumpLogs.py | dSar-UVA/repoMiner | 8f75074e388ff13419a0a37b4337c0cdcb459f74 | [
"BSD-3-Clause"
] | null | null | null | import sys, os
#import psycopg2
import logging
import codecs
sys.path.append("../util")
from DatabaseCon import DatabaseCon
#from Config import Config
import Util
class dumpLogs:

    def __init__(self, password, c_info):
        self.config_info = c_info
        self.db_config = self.config_info.config_db
        self.dbPass = password
        self.connectDb()
        #self.cleanDb()

    @staticmethod
    def getFullTitleString(keywordDictionary):
        '''
        Create a string specifying not only the database column names
        but also their types. This is used when automatically creating
        the database table.
        '''
        dictStr = "(project character varying(500), sha text, language character varying(500)," + \
                  " file_name text, is_test boolean, method_name text"
        for key, value in keywordDictionary.items():
            dictStr = dictStr + ", " + str(key).replace(" ", "_").lower() + " integer"
        dictStr += ", total_adds integer, total_dels integer, warning_alert boolean)"
        return dictStr

    def connectDb(self):
        #self.db_config = self.cfg.ConfigSectionMap("Database")
        logging.debug("Database configuration = %r\n", self.db_config)
        self.dbCon = DatabaseCon(self.db_config['database'], self.db_config['user'],
                                 self.db_config['host'], self.db_config['port'],
                                 self.dbPass)

    def cleanDb(self):
        response = 'y'  # input("Deleting database %s ?" % (self.db_config['schema']))
        schema = self.db_config['schema']
        tables = []
        tables.append(schema + "." + self.db_config['table_method_detail'])
        tables.append(schema + "." + self.db_config['table_change_summary'])
        if response.lower().startswith('y'):
            for table in tables:
                print("Deleting table %r \n" % table)
                sql_command = "DELETE FROM " + table
                self.dbCon.insert(sql_command)
            self.dbCon.commit()

    def close(self):
        self.dbCon.commit()
        self.dbCon.close()

    #TODO: Improve security here for possible injections?
    def createSummaryTable(self):
        schema = self.db_config['schema']
        table = schema + "." + self.db_config['table_change_summary']
        user = self.db_config['user']
        sql_command = "CREATE TABLE IF NOT EXISTS " + table + " (project character varying(500) NOT NULL," + \
                      " sha text NOT NULL, author character varying(500), commit_date date, is_bug boolean," + \
                      " CONSTRAINT change_summary_pkey PRIMARY KEY (project, sha)) WITH (OIDS=FALSE);"
        self.dbCon.create(sql_command)
        #self.dbCon.create("ALTER TABLE " + table + " OWNER TO " + user + ";")
        #self.dbCon.create("GRANT ALL ON TABLE " + table + " TO " + user + ";")

    def createFileChangesTable(self):
        schema = self.db_config['schema']
        table = schema + "." + self.db_config['table_file_detail']
        user = self.db_config['user']
        sql_command = "CREATE TABLE IF NOT EXISTS " + table + \
                      " (project character varying(500) NOT NULL," + \
                      " sha text NOT NULL," + \
                      " language character varying(500)," + \
                      " file_name text," + \
                      " is_test boolean," + \
                      " committer character varying(500), commit_date date," + \
                      " author character varying(500), author_date date," + \
                      " is_bug boolean," + \
                      " total_adds integer, total_dels integer," + \
                      " CONSTRAINT " + self.db_config['table_file_detail'] + "_pkey PRIMARY KEY (project, sha, file_name)) WITH (OIDS=FALSE);"
        if self.config_info.DEBUG:
            print(sql_command)
        self.dbCon.create(sql_command)
        self.dbCon.commit()
        #self.dbCon.create("ALTER TABLE " + table + " OWNER TO " + user + ";")
        #self.dbCon.create("GRANT ALL ON TABLE " + table + " TO " + user + ";")

    def dumpFileChanges(self, summaryStr):
        schema = self.db_config['schema']
        table = schema + "." + self.db_config['table_file_detail']
        sql_command = " INSERT INTO " + table + \
                      "(project, sha, language, file_name, is_test, committer, commit_date," + \
                      " author, author_date, is_bug, total_adds, total_dels)" + \
                      " VALUES (" + summaryStr + ");"
        if self.config_info.DEBUG:
            print(sql_command)
        self.dbCon.insert(sql_command)
        #self.dbCon.commit()

    def createMethodChangesTable(self, titleString):
        schema = self.db_config['schema']
        table = schema + "." + self.db_config['table_method_detail']
        user = self.db_config['user']
        sql_command = "CREATE TABLE IF NOT EXISTS " + table + titleString + " WITH (OIDS=FALSE);"
        self.dbCon.create(sql_command)
        #self.dbCon.create("ALTER TABLE " + table + " OWNER TO " + user + ";")
        #self.dbCon.create("GRANT ALL ON TABLE " + table + " TO " + user + ";")

    def dumpSummary(self, summaryStr):
        schema = self.db_config['schema']
        table = schema + "." + self.db_config['table_change_summary']
        sql_command = "INSERT INTO " + table + \
                      "(project, sha, author, commit_date, is_bug)" + \
                      " VALUES (" + summaryStr + ")"
        #print(sql_command)
        self.dbCon.insert(sql_command)
        #self.dbCon.commit()

    def dumpMethodChanges(self, methodChange, titleString):
        schema = self.db_config['schema']
        table = schema + "." + self.db_config['table_method_detail']
        #sql_command = "INSERT INTO " + table + \
        #              "(project, sha, language, file_name, is_test, method_name, assertion_add, " + \
        #              "assertion_del, total_add, total_del)" + \
        #              "VALUES (" + methodChange + ")"
        sql_command = "INSERT INTO " + table + titleString + " VALUES (" + methodChange + ")"
        if self.config_info.DEBUG:
            print(sql_command)
        self.dbCon.insert(sql_command)
        #self.dbCon.commit()
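The TODO in dumpLogs flags the injection risk: every statement is built by string concatenation. A hedged sketch of the parameterized alternative, shown here with stdlib sqlite3 for self-containment (psycopg2, which this project targets, uses %s placeholders instead of ?):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE change_summary (project TEXT, sha TEXT, author TEXT)")

# Values travel separately from the SQL text, so quoting and injection are
# handled by the driver rather than by string concatenation.
row = ("pylearn-mulm", "cfbcda8", "author'); DROP TABLE change_summary;--")
con.execute("INSERT INTO change_summary (project, sha, author) VALUES (?, ?, ?)", row)

print(con.execute("SELECT COUNT(*) FROM change_summary").fetchone()[0])  # → 1
```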
| 37.337278 | 132 | 0.581775 | 689 | 6,310 | 5.162554 | 0.214804 | 0.047231 | 0.094462 | 0.080967 | 0.577453 | 0.532752 | 0.480742 | 0.437166 | 0.437166 | 0.425077 | 0 | 0.00559 | 0.291284 | 6,310 | 168 | 133 | 37.559524 | 0.789803 | 0.155626 | 0 | 0.346939 | 0 | 0 | 0.27793 | 0 | 0 | 0 | 0 | 0.005952 | 0 | 0 | null | null | 0.030612 | 0.05102 | null | null | 0.040816 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5129fb2575eab078132f342aff13007a4b5635a4 | 687 | py | Python | app/apiv2/internal/tasking/mobius_task.py | Joey-Wondersign/Staffjoy-suite-Joey | b6d0d87b8e60e6b866810ebeed631fb02fadad48 | [
"MIT"
] | 890 | 2017-02-25T07:11:09.000Z | 2022-03-08T05:49:20.000Z | app/apiv2/internal/tasking/mobius_task.py | Joey-Wondersign/Staffjoy-suite-Joey | b6d0d87b8e60e6b866810ebeed631fb02fadad48 | [
"MIT"
] | 11 | 2017-02-25T18:07:11.000Z | 2020-10-19T13:09:41.000Z | app/apiv2/internal/tasking/mobius_task.py | nfriedly/suite | c58c772d98d1476cad0531b8a296f27ad2ab945c | [
"MIT"
] | 276 | 2017-02-25T09:01:23.000Z | 2022-03-19T02:24:02.000Z | from flask_restful import marshal, abort, Resource
from app.models import Schedule2
from app.apiv2.decorators import permission_sudo
from app.apiv2.marshal import tasking_schedule_fields
class MobiusTaskApi(Resource):
    method_decorators = [permission_sudo]

    def get(self, schedule_id):
        """ Peek at a schedule """
        s = Schedule2.query.get_or_404(schedule_id)
        return marshal(s, tasking_schedule_fields)

    def delete(self, schedule_id):
        """ Mark a task as done """
        s = Schedule2.query.get_or_404(schedule_id)
        if s.state != "mobius-processing":
            abort(400)
        s.transition_to_published()
        return "{}", 204
| 27.48 | 53 | 0.682678 | 88 | 687 | 5.125 | 0.522727 | 0.088692 | 0.053215 | 0.079823 | 0.146341 | 0.146341 | 0.146341 | 0.146341 | 0 | 0 | 0 | 0.031955 | 0.225619 | 687 | 24 | 54 | 28.625 | 0.815789 | 0.056769 | 0 | 0.133333 | 0 | 0 | 0.029968 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.266667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5130695836f69d3767c643d9186227a5de4c2625 | 1,388 | py | Python | Scripts/simulation/sims/university/university_enums.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/sims/university/university_enums.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/sims/university/university_enums.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | # uncompyle6 version 3.7.4
# Python bytecode 3.7 (3394)
# Decompiled from: Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 18:58:18) [MSC v.1900 64 bit (AMD64)]
# Embedded file name: T:\InGame\Gameplay\Scripts\Server\sims\university\university_enums.py
# Compiled at: 2019-08-26 20:01:13
# Size of source mod 2**32: 4312 bytes
from sims4.tuning.dynamic_enum import DynamicEnumLocked
import enum
class Grade(DynamicEnumLocked):
    UNKNOWN = 0


class FinalCourseRequirement(enum.Int):
    NONE = 0
    EXAM = 1
    PAPER = 2
    PRESENTATION = 3


class EnrollmentStatus(enum.Int):
    NONE = 0
    ENROLLED = 1
    NOT_ENROLLED = 2
    PROBATION = 3
    SUSPENDED = 4
    DROPOUT = 5
    GRADUATED = 6


class UniversityHousingKickOutReason(enum.Int):
    NONE = 0
    GRADUATED = 1
    SUSPENDED = 2
    DROPOUT = 3
    MOVED = 4
    NOT_ENROLLED = 5
    PREGNANT = 6
    BABY = 7


class UniversityHousingRoommateRequirementCriteria(enum.Int):
    NONE = 0
    UNIVERSITY = 1
    GENDER = 2
    ORGANIZATION = 3
    CLUB = 4


class UniversityInfoType(enum.Int):
    INVALID = 0
    PRESTIGE_DEGREES = 1
    NON_PRESTIGE_DEGREES = 2
    ORGANIZATIONS = 3


class HomeworkCheatingStatus(enum.Int, export=False):
    NONE = 0
    CHEATING_FAIL = 1
    CHEATING_SUCCESS = 2


class UniversityMajorStatus(enum.Int, export=False):
    NOT_ACCEPTED = 0
    ACCEPTED = 1
    GRADUATED = 2
GRADUATED = 2 | 21.030303 | 107 | 0.680115 | 184 | 1,388 | 5.076087 | 0.548913 | 0.052463 | 0.047109 | 0.051392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095014 | 0.23415 | 1,388 | 66 | 108 | 21.030303 | 0.783631 | 0.228386 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.044444 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5132a18e10327c76ac7e1f41f76399315fe8da2f | 2,995 | py | Python | customize_notifications/customize-notifications.py | renjunok/customize-notifications-python-sdk | 759d9bee278595e6f800451648301564a250e3ab | [
"MIT"
] | null | null | null | customize_notifications/customize-notifications.py | renjunok/customize-notifications-python-sdk | 759d9bee278595e6f800451648301564a250e3ab | [
"MIT"
] | null | null | null | customize_notifications/customize-notifications.py | renjunok/customize-notifications-python-sdk | 759d9bee278595e6f800451648301564a250e3ab | [
"MIT"
] | null | null | null | import random
import time
import json
import hashlib
import requests
def generateNonceStr():
    return "".join(random.sample('0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ', 16))


# def change_type(byte):
#     if isinstance(byte, bytes):
#         return str(byte, encoding="utf-8")
#     return json.JSONEncoder.default(byte)


def SubmitMessageRequest(sm):
    json_str = json.dumps(sm, sort_keys=True, indent=4,
                          separators=(',', ':')).replace("\n", "").replace(" ", "")
    headers = {'content-type': 'application/json'}
    try:
        resp = requests.post("https://api.msg.launch.im/message", data=json_str, headers=headers)
    except requests.exceptions.RequestException as err:
        # handle err; bail out, since there is no response to examine
        print(err)
        return
    # examine response
    data = json.loads(resp.content)
    print(data)


class Message:

    def __init__(self, title, msg_type, content, group=""):
        # instance dicts: class-level dicts would be shared across instances
        self.msg_dict = {}
        self.p = {}
        self.sm = {}
        self.msg_dict['title'] = title
        self.msg_dict['msg_type'] = msg_type
        self.msg_dict['content'] = content
        if len(group) != 0:
            self.msg_dict['group'] = group

    def check(self):
        if len(self.msg_dict["title"]) > 100 or len(self.msg_dict["title"]) < 1:
            raise Exception("title count error")
        elif len(self.msg_dict["content"]) > 4000 or len(self.msg_dict["content"]) < 1:
            raise Exception("content count error")
        elif "group" in self.msg_dict and len(self.msg_dict["group"]) > 20:
            raise Exception("group count error")
        elif self.msg_dict["msg_type"] < 0 or self.msg_dict["msg_type"] > 4:
            raise Exception("msg type error")

    def sign(self, push_secret):
        signStrTmp = ""
        for k in sorted(self.p.keys()):
            if len(self.p[k]) == 0:
                continue
            else:
                signStrTmp += k + "=" + self.p[k] + "&"
        signStrTmp += "secret=" + push_secret
        return hashlib.sha256(signStrTmp.encode("utf-8")).hexdigest()

    def send_message(self, push_id, push_secret):
        timestamp = int(time.time())
        nonceStr = generateNonceStr()
        self.check()
        msgJson = json.dumps(self.msg_dict, indent=4, separators=(',', ':'),
                             ensure_ascii=False).replace("\n", "").replace(" ", "")
        self.p = {"push_id": push_id, "nonce": nonceStr, "timestamp": str(timestamp),
                  "message": msgJson}
        signStr = self.sign(push_secret)
        self.sm = {"push_id": push_id, "nonce": nonceStr, "timestamp": timestamp,
                   "message": self.msg_dict, "sign": signStr}
        SubmitMessageRequest(self.sm)


if __name__ == '__main__':
    try:
        m = Message(title="test title", msg_type=0, content="test content", group="test group")
        m.send_message("your_id", "your_secret")
    except Exception as e:
        print(e)
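The sign method concatenates the request parameters in sorted key order before hashing, so the signature is independent of insertion order. A small self-contained sketch of that property — a standalone re-implementation of the same scheme for illustration, not the SDK itself:

```python
import hashlib


def sign(params, secret):
    """sha256 over 'k=v&' pairs in sorted key order, plus the shared secret."""
    sign_str = ""
    for k in sorted(params.keys()):
        if len(params[k]) == 0:
            continue  # empty values are skipped, matching the SDK's sign()
        sign_str += k + "=" + params[k] + "&"
    sign_str += "secret=" + secret
    return hashlib.sha256(sign_str.encode("utf-8")).hexdigest()


a = {"push_id": "id1", "nonce": "abc", "timestamp": "1700000000"}
b = {"timestamp": "1700000000", "push_id": "id1", "nonce": "abc"}  # same pairs, other order

print(sign(a, "secret") == sign(b, "secret"))  # → True
```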
| 32.554348 | 117 | 0.573289 | 346 | 2,995 | 4.815029 | 0.315029 | 0.063025 | 0.092437 | 0.042017 | 0.123649 | 0.040816 | 0.040816 | 0 | 0 | 0 | 0 | 0.016256 | 0.281135 | 2,995 | 91 | 118 | 32.912088 | 0.757548 | 0.05576 | 0 | 0.0625 | 1 | 0 | 0.144275 | 0.021978 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.078125 | 0.015625 | 0.265625 | 0.046875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51337874245c3ee667e619871bcb80cf5b620260 | 8,865 | py | Python | dashboard_viewer/uploader/models.py | EHDEN/NetworkDashboards | 8f67debe0e62606c1067eea8f27f2935242c62b4 | [
"MIT"
] | 4 | 2021-05-24T15:12:44.000Z | 2022-03-13T11:51:36.000Z | dashboard_viewer/uploader/models.py | EHDEN/NetworkDashboards | 8f67debe0e62606c1067eea8f27f2935242c62b4 | [
"MIT"
] | 111 | 2020-07-27T14:48:31.000Z | 2022-01-06T17:40:11.000Z | dashboard_viewer/uploader/models.py | EHDEN/NetworkDashboards | 8f67debe0e62606c1067eea8f27f2935242c62b4 | [
"MIT"
] | null | null | null | import datetime
import json
import os
import pathlib
import uuid
from django.conf import settings
from django.db import models
from django_celery_results.models import TaskResult
class Country(models.Model):
    class Meta:
        db_table = "country"
        ordering = ("country",)

    country = models.CharField(max_length=100, unique=True, help_text="Country name.")
    alpha2 = models.CharField(
        max_length=3, unique=True, help_text="ISO 3166-1 Alpha-2 Code"
    )
    continent = models.CharField(max_length=50, help_text="Continent associated.")

    def __str__(self):
        return f"{self.country}"

    def __repr__(self):
        return self.__str__()


class DatabaseType(models.Model):
    class Meta:
        db_table = "database_type"

    type = models.CharField(
        max_length=100, unique=True, help_text="Defines the database type."
    )

    def __str__(self):
        return self.type

    def __repr__(self):
        return self.__str__()


def hash_generator():
    return uuid.uuid4().hex
# Not following the relational rules in the database_type field, but it will simplify the SQL queries in the SQL Lab
class DataSource(models.Model):
    class Meta:
        db_table = "data_source"

    name = models.CharField(
        max_length=100, unique=True, help_text="Name of the data source."
    )
    acronym = models.CharField(
        max_length=50,
        unique=True,
        help_text="Short label for the data source, containing only letters, numbers, underscores or hyphens.",
    )
    hash = models.CharField(
        blank=True,
        default=hash_generator,
        max_length=255,
        null=False,
        unique=True,
    )
    release_date = models.CharField(
        max_length=50,
        help_text="Date at which DB is available for research for current release.",
        null=True,
        blank=True,
    )
    database_type = models.CharField(
        max_length=100, help_text="Type of the data source. You can create a new type."
    )
    country = models.ForeignKey(
        Country,
        on_delete=models.SET_NULL,
        null=True,
        help_text="Country where the data source is located.",
    )
    latitude = models.FloatField()
    longitude = models.FloatField()
    link = models.URLField(help_text="Link to home page of the data source", blank=True)
    draft = models.BooleanField(default=True)

    def save(
        self, force_insert=False, force_update=False, using=None, update_fields=None
    ):
        if DatabaseType.objects.filter(type=self.database_type).count() == 0:
            db_type = DatabaseType(type=self.database_type)
            db_type.save()
        super().save(force_insert, force_update, using, update_fields)

    def __str__(self):
        return self.name

    def __repr__(self):
        return self.__str__()


def failure_data_source_directory(instance, filename):
    file_path = os.path.join(
        settings.ACHILLES_RESULTS_STORAGE_PATH,
        instance.data_source.hash,
        "failure",
        "%Y%m%d%H%M%S%f" + "".join(pathlib.Path(filename).suffixes),
    )
    return datetime.datetime.now().strftime(file_path)
class PendingUpload(models.Model):
    STATE_PENDING = 1
    STATE_STARTED = 2
    STATE_CANCELED = 3
    STATE_FAILED = 4

    STATES = (
        (STATE_PENDING, "Pending"),
        (STATE_STARTED, "Started"),
        (STATE_CANCELED, "Canceled"),  # currently not being used
        (STATE_FAILED, "Failed"),
    )

    class Meta:
        ordering = ("-upload_date",)

    data_source = models.ForeignKey(DataSource, on_delete=models.CASCADE)
    upload_date = models.DateTimeField(auto_now_add=True)
    status = models.IntegerField(choices=STATES, default=STATE_PENDING)
    uploaded_file = models.FileField(upload_to=failure_data_source_directory)
    task_id = models.CharField(max_length=255, null=True)

    def get_status(self):
        for status_id, name in self.STATES:
            if self.status == status_id:
                return name
        return None  # should never happen

    def failure_message(self):
        if self.status != self.STATE_FAILED:
            return None
        try:
            task = TaskResult.objects.get(
                task_id=self.task_id, task_name="uploader.tasks.upload_results_file"
            )
        except TaskResult.DoesNotExist:
            return (
                "The information about this failure was deleted, probably because this upload history "
                "record is an old one. If not, please contact the system administrator for more details."
            )
        result = json.loads(task.result)
        if result["exc_module"] == "uploader.file_handler.checks":
            return result["exc_message"][0]
        return (
            "An unexpected error occurred while processing your file. Please contact the "
            "system administrator for more details."
        )


def success_data_source_directory(instance, filename):
    file_path = os.path.join(
        settings.ACHILLES_RESULTS_STORAGE_PATH,
        instance.data_source.hash,
        "success",
        "%Y%m%d%H%M%S%f" + "".join(pathlib.Path(filename).suffixes),
    )
    return datetime.datetime.now().strftime(file_path)
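Both `*_data_source_directory` helpers rely on the same trick: the `strftime` placeholders stay literal inside the joined path until `strftime` expands them, so every upload lands in a unique, timestamped file name. A standalone sketch of that pattern (`upload_path` and its arguments are hypothetical names, not part of the project):

```python
import datetime
import os
import pathlib

def upload_path(base, source_hash, outcome, filename):
    # "%Y%m%d%H%M%S%f" is kept literal in the template and only expanded
    # by strftime at the end, producing a unique timestamped file name
    # while preserving the original file's full suffix chain (.csv.gz etc.).
    template = os.path.join(
        base,
        source_hash,
        outcome,
        "%Y%m%d%H%M%S%f" + "".join(pathlib.Path(filename).suffixes),
    )
    return datetime.datetime.now().strftime(template)
```

Note that `pathlib.Path.suffixes` returns every extension, so a `results.csv.gz` upload keeps both `.csv` and `.gz`.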
class UploadHistory(models.Model):
    """
    Successful uploads only
    """

    class Meta:
        get_latest_by = "upload_date"
        ordering = ("-upload_date",)
        db_table = "upload_history"

    data_source = models.ForeignKey(DataSource, on_delete=models.CASCADE)
    upload_date = models.DateTimeField(auto_now_add=True)
    r_package_version = models.CharField(max_length=50, null=True)
    generation_date = models.CharField(max_length=50, null=True)
    cdm_release_date = models.CharField(max_length=50, null=True)
    cdm_version = models.CharField(max_length=50, null=True)
    vocabulary_version = models.CharField(max_length=50, null=True)
    uploaded_file = models.FileField(
        null=True, upload_to=success_data_source_directory
    )  # For backwards compatibility it's easier to make this null=True
    pending_upload_id = models.IntegerField(
        null=True,
        help_text="The id of the PendingUpload record that originated this successful upload.",
        # aspedrosa: A foreign key is not used here since a PendingUpload record is erased once it
        # succeeds. This field is then only used to get the result data of a pending upload through
        # the get_upload_task_status view.
    )

    def __repr__(self):
        return self.__str__()

    def __str__(self):
        return f"{self.data_source.name} - {self.upload_date}"

    def get_status(self):
        return "Done"
class AchillesResults(models.Model):
    class Meta:
        db_table = "achilles_results"
        indexes = [
            models.Index(fields=("data_source",)),
            models.Index(fields=("analysis_id",)),
        ]

    data_source = models.ForeignKey(DataSource, on_delete=models.CASCADE)
    analysis_id = models.BigIntegerField()
    stratum_1 = models.TextField(null=True)
    stratum_2 = models.TextField(null=True)
    stratum_3 = models.TextField(null=True)
    stratum_4 = models.TextField(null=True)
    stratum_5 = models.TextField(null=True)
    count_value = models.BigIntegerField()
    min_value = models.BigIntegerField(null=True)
    max_value = models.BigIntegerField(null=True)
    avg_value = models.FloatField(null=True)
    stdev_value = models.FloatField(null=True)
    median_value = models.BigIntegerField(null=True)
    p10_value = models.BigIntegerField(null=True)
    p25_value = models.BigIntegerField(null=True)
    p75_value = models.BigIntegerField(null=True)
    p90_value = models.BigIntegerField(null=True)


class AchillesResultsArchive(models.Model):
    class Meta:
        db_table = "achilles_results_archive"
        indexes = [
            models.Index(fields=("data_source",)),
            models.Index(fields=("analysis_id",)),
        ]

    upload_info = models.ForeignKey(UploadHistory, on_delete=models.CASCADE)
    data_source = models.ForeignKey(DataSource, on_delete=models.CASCADE)
    analysis_id = models.BigIntegerField()
    stratum_1 = models.TextField(null=True)
    stratum_2 = models.TextField(null=True)
    stratum_3 = models.TextField(null=True)
    stratum_4 = models.TextField(null=True)
    stratum_5 = models.TextField(null=True)
    count_value = models.BigIntegerField()
    min_value = models.BigIntegerField(null=True)
    max_value = models.BigIntegerField(null=True)
    avg_value = models.FloatField(null=True)
    stdev_value = models.FloatField(null=True)
    median_value = models.BigIntegerField(null=True)
    p10_value = models.BigIntegerField(null=True)
    p25_value = models.BigIntegerField(null=True)
    p75_value = models.BigIntegerField(null=True)
    p90_value = models.BigIntegerField(null=True)
| 32.95539 | 118 | 0.676593 | 1,087 | 8,865 | 5.308188 | 0.24011 | 0.054073 | 0.072097 | 0.058232 | 0.493241 | 0.481802 | 0.45078 | 0.421837 | 0.368977 | 0.331369 | 0 | 0.010935 | 0.226283 | 8,865 | 268 | 119 | 33.078358 | 0.830296 | 0.052002 | 0 | 0.422535 | 0 | 0 | 0.135274 | 0.013003 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070423 | false | 0 | 0.037559 | 0.046948 | 0.577465 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
51341e2322007046ce7610e475fc608433c5d8ed | 439 | py | Python | write_new_table.py | Cassiel60/python | 3f451e398a8705a5859d347d5fcdcfd9a5671e1c | [
"MIT"
] | null | null | null | write_new_table.py | Cassiel60/python | 3f451e398a8705a5859d347d5fcdcfd9a5671e1c | [
"MIT"
] | null | null | null | write_new_table.py | Cassiel60/python | 3f451e398a8705a5859d347d5fcdcfd9a5671e1c | [
"MIT"
] | 1 | 2019-12-19T00:34:02.000Z | 2019-12-19T00:34:02.000Z | import sys
import pandas as pd
varFil = sys.argv[1]
df = pd.read_csv(varFil, header=0, sep='\t')
print(len(df))
df_new = df[df['Allele Call'].isin(['Heterozygous', 'Homozygous'])]
print(len(df_new))
df_res = df_new[df_new['Allele Cov'] >= 30]
print(len(df_res))
df_res.to_csv('output.xls', index=False, header=True, sep='\t')
writer = pd.ExcelWriter('outputExcel' + '.xlsx')
df_res.to_excel(writer, 'Sheet1', index=False)
writer.save() | 21.95 | 67 | 0.697039 | 75 | 439 | 3.933333 | 0.52 | 0.067797 | 0.101695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012853 | 0.113895 | 439 | 20 | 68 | 21.95 | 0.745501 | 0 | 0 | 0 | 0 | 0 | 0.179545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.153846 | null | null | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
513560c2982f0246b4627dd184d69ec56e1a83d4 | 1,583 | py | Python | challenges/repeated_word/test_repeated_word.py | nastinsk/python-data-structures-and-algorithms | 505b26a70fb846f6e9d0681bbe4f77e3797acf2d | [
"MIT"
] | null | null | null | challenges/repeated_word/test_repeated_word.py | nastinsk/python-data-structures-and-algorithms | 505b26a70fb846f6e9d0681bbe4f77e3797acf2d | [
"MIT"
] | null | null | null | challenges/repeated_word/test_repeated_word.py | nastinsk/python-data-structures-and-algorithms | 505b26a70fb846f6e9d0681bbe4f77e3797acf2d | [
"MIT"
] | 3 | 2020-05-31T03:25:49.000Z | 2020-12-05T21:03:13.000Z | from repeated_word import find_first_repeat
import pytest
def test_regular_str():
    text = "Once upon a time, there was a brave princess who..."
    assert find_first_repeat(text) == "a"


def test_bigger_different_case():
    text = "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only..."
    assert find_first_repeat(text) == "it"


def test_no_repeat():
    text = "a b c, d."
    assert find_first_repeat(text) is None


def test_punctuation_case():
    text = "It was a queer, sultry summer, the summer they electrocuted the Rosenbergs, and I didn’t know what I was doing in New York..."
    assert find_first_repeat(text) == 'summer'


def test_numbers_input():
    text = 135
    with pytest.raises(TypeError):
        result = find_first_repeat(text)


def test_repeat_in_the_end():
    text = "red blue yellow green green blue blue"
    assert find_first_repeat(text) == "green"


def test_list_input():
    text = ['a', 'f', 'a', 'vf']
    with pytest.raises(TypeError):
        result = find_first_repeat(text)
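The `repeated_word` module itself is not part of this chunk; a minimal implementation consistent with the tests above might look like this (the tokenization regex is an assumption, not the challenge's actual code):

```python
import re

def find_first_repeat(text):
    # The tests expect a TypeError for non-string input (ints, lists).
    if not isinstance(text, str):
        raise TypeError("find_first_repeat expects a string")
    seen = set()
    # Lowercase and split on letter runs so punctuation and case
    # do not hide a repeat ("summer," vs "summer", "It" vs "it").
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in seen:
            return word
        seen.add(word)
    return None
```

Tracking already-seen words in a set keeps the scan linear in the number of tokens.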
| 45.228571 | 628 | 0.721415 | 265 | 1,583 | 4.184906 | 0.456604 | 0.049594 | 0.072137 | 0.119928 | 0.338142 | 0.090171 | 0.090171 | 0.090171 | 0.090171 | 0 | 0 | 0.002364 | 0.198358 | 1,583 | 34 | 629 | 46.558824 | 0.870764 | 0 | 0 | 0.16 | 0 | 0.08 | 0.541429 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.28 | false | 0 | 0.08 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
513799080335e27a87d9d53d12c63255df8de4ad | 1,369 | py | Python | study/day01_spider/03_urllib2_useragentlist.py | Youngfellows/HPython2-Spider | 24e51e81f926b5c5dbc1c9f1cc87050e733d376c | [
"Apache-2.0"
] | null | null | null | study/day01_spider/03_urllib2_useragentlist.py | Youngfellows/HPython2-Spider | 24e51e81f926b5c5dbc1c9f1cc87050e733d376c | [
"Apache-2.0"
] | null | null | null | study/day01_spider/03_urllib2_useragentlist.py | Youngfellows/HPython2-Spider | 24e51e81f926b5c5dbc1c9f1cc87050e733d376c | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
import urllib2
import random
url = "http://www.baidu.com/"
# This can be a list of User-Agents or, equally, a list of proxies
ua_list = [
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv2.0.1) Gecko/20100101 Firefox/4.0.1",
"Mozilla/5.0 (Windows NT 6.1; rv2.0.1) Gecko/20100101 Firefox/4.0.1",
"Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
"Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
]
# Randomly pick one User-Agent from the list
user_agent = random.choice(ua_list)
# Build a request object
request = urllib2.Request(url)
# add_header() adds or overwrites a single HTTP header
request.add_header("User-Agent", user_agent)
# get_header() reads back an existing HTTP header; note that only the first
# letter of the key is capitalized, the rest must be lowercase
print("User-Agent: %s" % request.get_header("User-agent"))
# Send the request to the given URL and return the server's response as a
# file-like object
response = urllib2.urlopen(request)
# The response object supports the usual Python file-object methods;
# read() returns the entire body as a string
html = response.read()
# The HTTP status code: 200 on success, 4xx for client errors, 5xx for server problems
code = response.getcode()
# The actual URL of the returned data, which guards against redirects
response_url = response.geturl()
# The HTTP headers of the server's response
response_head = response.info()
print("code = %d" % code)
print("response_url = %s" % response_url)
print("response_head =\n %s" % response_head)
print(html)
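For reference, the same request construction under Python 3's `urllib.request` (the `urlopen` call is left out so the sketch stays offline; `build_request` is an illustrative name):

```python
import random
import urllib.request

ua_list = [
    "Mozilla/5.0 (Windows NT 6.1; rv2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
]

def build_request(url):
    # Attach a randomly chosen User-Agent, as in the urllib2 script above.
    request = urllib.request.Request(url)
    request.add_header("User-Agent", random.choice(ua_list))
    return request
```

`Request.add_header` stores the key with only its first letter capitalized, which is why the lookup key is written as `"User-agent"` in the original script.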
| 28.520833 | 125 | 0.699781 | 207 | 1,369 | 4.550725 | 0.468599 | 0.047771 | 0.028662 | 0.06051 | 0.203822 | 0.203822 | 0.203822 | 0.178344 | 0.178344 | 0 | 0 | 0.08643 | 0.154858 | 1,369 | 47 | 126 | 29.12766 | 0.727744 | 0.230825 | 0 | 0 | 0 | 0.217391 | 0.514602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0.217391 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
513879b46eed9121763c814ec735a00ff6dc4b80 | 60,220 | py | Python | skills.py | chombatant/python-kik-chatbot | 9f5d966378df683efed6f3d74eafded150b2579b | [
"MIT"
] | 1 | 2021-11-19T02:20:50.000Z | 2021-11-19T02:20:50.000Z | skills.py | chombatant/python-kik-chatbot | 9f5d966378df683efed6f3d74eafded150b2579b | [
"MIT"
] | null | null | null | skills.py | chombatant/python-kik-chatbot | 9f5d966378df683efed6f3d74eafded150b2579b | [
"MIT"
] | 1 | 2018-04-13T14:11:14.000Z | 2018-04-13T14:11:14.000Z | import re
import regex as re2
from regex import sub as sub2
from collections import defaultdict
from datetime import datetime
from ngrams.ngrams import corrections, Pw
import nltk
# ===========================================================================================
# #
## # #### ##### # # ## # # ###### ## ##### # #### # #
# # # # # # # ## ## # # # # # # # # # # # ## #
# # # # # # # # ## # # # # # # # # # # # # # # #
# # # # # ##### # # ###### # # # ###### # # # # # # #
# ## # # # # # # # # # # # # # # # # # # ##
# # #### # # # # # # ###### # ###### # # # # #### # #
def preprocess_message(statement):
    sentences = nltk.sent_tokenize(statement)
    return [
        cleanup_sentence(
            remove_fluff(
                corrections(
                    expand_contractions(
                        sentence.lower()
                    ))))
        for sentence in sentences
    ]


def stemmer(word, pos):
    if pos == "NOUN" and word[-1] == "s":
        return word[:-1]
    elif pos == "VERB" and word[-1] == "s":
        return word[:-1]
    elif pos == "VERB" and word[-2:] == "ed" and word[-3] != "e":
        return word[:-2] + "ing"
    else:
        return word
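A quick standalone check of the branch logic in `stemmer` (the function is reproduced here so the snippet runs on its own):

```python
def stemmer(word, pos):
    # Plural nouns and 3rd-person verbs drop the trailing "s";
    # regular past tenses become gerunds ("walked" -> "walking"),
    # except when the stem itself ends in "e" ("freed" stays "freed").
    if pos == "NOUN" and word[-1] == "s":
        return word[:-1]
    elif pos == "VERB" and word[-1] == "s":
        return word[:-1]
    elif pos == "VERB" and word[-2:] == "ed" and word[-3] != "e":
        return word[:-2] + "ing"
    else:
        return word

print(stemmer("cats", "NOUN"), stemmer("walked", "VERB"))  # → cat walking
```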
def capitalize_sentence(sentence):
    sentence = sentence[:1].upper() + sentence[1:]
    sentence = re.sub(r"(^|\W)i($|\W)", r"\1I\2", sentence)
    names = extract_named_entities(sentence.title())
    for name in names:
        sentence = re.sub(name.lower(), name, sentence)
    for token in nltk.tokenize.word_tokenize(sentence)[1:]:
        if re.match(r"[A-Z]\w*", token):
            if Pw(token.lower()) > 1e-06 and token not in firstnames.words():
                sentence = re.sub(token, token.lower(), sentence)
    return sentence


def capitalize_fragment(sentence):
    sentence = re.sub(r"(^|\W)i($|\W)", r"\1I\2", sentence)
    names = extract_named_entities(sentence.title())
    for name in names:
        sentence = re.sub(name.lower(), name, sentence)
    for token in nltk.tokenize.word_tokenize(sentence):
        if re.match(r"[A-Z]\w*", token):
            if Pw(token.lower()) > 1e-06 and token not in firstnames.words():
                sentence = re.sub(token, token.lower(), sentence)
    return sentence
##### # ######
# # #### #### ##### # # # ## #####
# # # # # # # # # # # # # #
# #### # # # # # # # ###### # # # #
# # # # # # # # # # # ###### # #
# # # # # # # # # # # # # # #
##### #### #### ##### # ###### # # #####
intensifiers = "|".join([
r"(?:pretty(?: much)?",
r"quite",
r"so",
r"very",
r"absolutely",
r"total?ly",
r"real?ly",
r"somewhat",
r"kind of",
r"perfectly",
r"incredibly",
r"positively",
r"definitely",
r"completely",
r"propably",
r"just",
r"rather",
r"almost",
r"entirely",
r"fully",
r"highly",
r"a bit)"
])
positives = "|".join([
r"(good",
r"better",
r"best",
r"finer?",
r"nicer?",
r"lovel(y|ier)",
r"great(er)?",
r"amazing",
r"super",
r"smashing",
r"fantastic",
r"stunning",
r"groovy",
r"wonderful?l",
r"superb",
r"marvel?lous",
r"neat",
r"terrific",
r"swell",
r"dandy",
r"tremendous",
r"excellent",
r"dope",
r"well",
r"elat(ed|ing)",
r"enthusiastic",
r"looking forward to",
r"engag(ed|ing)",
r"thrill(ed|ing)",
r"excit(ed|ing)",
r"happ(y|ier)",
r"joyful",
r"joyous",
r"delight(ed|ing)",
r"curious",
r"eager",
r"ok",
r"alright)"
])
negatives = "|".join([
r"(bad",
r"terrible",
r"awful",
r"mad",
r"horrible",
r"horrid",
r"sad",
r"blue",
r"down",
r"unhappy",
r"unwell",
r"miserable",
r"dissatisfied",
r"unsatisfied",
r"sick",
r"ill",
r"tired",
r"jealous",
r"envious",
r"afraid",
r"scared",
r"converned",
r"worried",
r"uneasy",
r"so-so",
r"medium",
r"negative",
r"troubled)"
])
def is_positive(sentence):
    if (
        (
            re.search(positives, sentence)
            and not has_negation(sentence)
        ) or (
            re.search(negatives, sentence)
            and has_negation(sentence)
        )
    ):
        return True
    else:
        return False


def is_negative(sentence):
    if (
        (
            re.search(negatives, sentence)
            and not has_negation(sentence)
        ) or (
            re.search(positives, sentence)
            and has_negation(sentence)
        )
    ):
        return True
    else:
        return False
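Both predicates implement the same negation-flip rule: a sentence counts as positive if a positive word appears un-negated, or a negative word appears negated ("not bad"). A reduced sketch of that rule, with trimmed stand-in word lists rather than the full patterns above:

```python
import re

POSITIVES = r"\b(good|great|happy|fine)\b"
NEGATIVES = r"\b(bad|sad|terrible|awful)\b"
NEGATIONS = r"\b(no|not|never)\b"

def is_positive(sentence):
    negated = re.search(NEGATIONS, sentence) is not None
    if re.search(POSITIVES, sentence) and not negated:
        return True
    if re.search(NEGATIVES, sentence) and negated:
        return True  # e.g. "not bad" reads as positive
    return False
```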
#####
# # ##### #### ##### # #
# # # # # # # #
##### # # # # # #
# # # # ##### #
# # # # # # # #
##### # #### # # #
def has_story(sentence):
    if (
        re.search(r"(^|\W)(i|me|mine|myself|my|we|our|oursel(f|v)(es)?|us)(\W|$)", sentence)
        and not re.search(r"\?$", sentence)
    ):
        return True
    else:
        return False


story_negatives = r"|".join([
    r"(too late",
    r"(lost|missed|broke|hurt|killed|failed|misplaced|forgot) (.* )?(my|ours?|mine|hers?|his|theirs?)",
    r"failed (\w+ )?at)"
])


def has_story_negative(sentence):
    if (
        re.search(r"(^|\W)(i|we|my|our|me) ", sentence)
        and not re.search(r"\?$", sentence)
        and re.search(story_negatives, sentence)
    ):
        return True
    else:
        return False
#####
# # # # ## # # ##### # ##### # ##### # #
# # # # # # ## # # # # # # # #
# # # # # # # # # # # # # # #
# # # # # ###### # # # # # # # # #
# # # # # # # ## # # # # # #
#### # #### # # # # # # # # # #
quantifier_much = "|".join([
r"(a [^\.\;]*lot",
r"lots",
r"enough",
r"(?:^|\s)sufficient",
r"great [^\.\;]*deal of",
r"some",
r"extensively",
r"several",
r"a few",
r"a [^\.\;]*couple of",
r"a [^\.\;]*bit of",
r"several",
r"multiple",
r"various",
r"fold",
r"numerous",
r"plent[iy]",
r"copious",
r"abundant",
r"ample",
r"any",
r"many",
r"much)"
])
quantifier_insufficient = "|".join([
r"(insufficient",
r"lack of",
r"lacked",
r"defici",
r"(?<!a\s)few", # match only if not preceded by "a "
r"(?<!a\s)little",
r"scant",
r"miss)"
])
def has_quantifier_much(sentence):
    if re.search(r"not[^\.\;]+" + quantifier_much, sentence):
        return False
    if re.search(quantifier_much, sentence):
        return True
    elif re.search(r"no[^\.\;]+(complain|lack|miss|defici|insufficient)", sentence):
        return True
    else:
        return False


def has_quantifier_insufficient(sentence):
    if re.search(r"no[^\.\;]+" + quantifier_insufficient, sentence):
        return False
    if re.search(quantifier_insufficient, sentence):
        return True
    elif re.search(r"not[^\.\;]+" + quantifier_much, sentence):
        return True
    else:
        return False


def has_quantifier_excessive(sentence):
    if re.search(r"(too much|overmuch)", sentence):
        return True
    else:
        return False
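`has_quantifier_much` checks for a negated abundance phrase ("not ... enough") before checking for the phrase itself, so negation always wins. The same ordering in miniature (trimmed pattern, names are stand-ins):

```python
import re

MUCH = r"\b(a lot|lots|enough|plenty|many|much)\b"

def has_quantifier_much(sentence):
    # A "not ... <quantifier>" span flips abundance into deficiency,
    # so it must be tested first.
    if re.search(r"not[^.;]+" + MUCH, sentence):
        return False
    return re.search(MUCH, sentence) is not None
```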
# # # # #
# # ###### #### # ## # ####
# # # # # # # # # #
# ##### #### # # # # # #
# # # # # # # # #
# # # # # # ## # #
# ###### #### # # # ####
# Maybe intensifiers can be viewed as a subset of affirmations?
affirmations = "|".join([
r"(yes",
r"yeah",
r"aye",
r"absolutely",
r"total?ly",
r"certainly",
r"probably",
r"definitely",
r"maybe",
r"right",
r"correct",
r"true",
r"possible",
r"possibly",
r"sure",
r"almost",
r"entirely",
r"fully",
r"highly",
r"ok",
r"okay",
r"agree",
r"alright)"
])
negations_short = "|".join([
r"(no",
r"not",
r"nay",
r"nope)"
])
negations_pronoun = "|".join([
r"(never",
r"no?one",
r"nobody",
r"nowhere",
r"nothing)"
])
negations_adjective = "|".join([
r"(impossible",
r"wrong",
r"false",
r"bullshit",
r"incorrect)"
])
negations = r"(("+negations_short+r"(\W|$))|" + negations_pronoun + "|" + negations_adjective + ")"
def has_negation(sentence):
    if re.search(negations_short + r"[^\.\,\;(is)]+" + negations_adjective, sentence):
        return False
    elif re.search(negations, sentence):
        return True
    else:
        return False


def has_affirmation(sentence):
    if (
        re.search(affirmations + r"(\W|$)", sentence)
        and not has_negation(sentence)
    ):
        return True
    elif (
        re.search(r"why not(\?|\!)", sentence)
        or re.search(intensifiers + r" so(\W|$)", sentence)
        or (
            re.search(r"(\W|^)i (.* )?(think|say|hope) so(\W|$)", sentence)
            and not has_negation(sentence)
        )
        or re.search(r"(\W|^)(sounds|feels) (" + intensifiers + " )?" + positives, sentence)
    ):
        return True
    else:
        return False


def has_elaboration(sentences):
    text = "".join(sentences)
    for pattern in [positives, negatives, intensifiers, affirmations, negations]:
        text = re.sub(pattern, "", text)
    if len(text) > 20:
        return True
    else:
        return False
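`has_elaboration` strips the sentiment and agreement vocabulary and treats whatever substantial text survives as elaboration beyond a bare yes/no answer. The same idea with a trimmed filler pattern and an explicit threshold parameter (both are illustrative):

```python
import re

FILLERS = r"\b(yes|no|not|very|really|good|bad|great|ok)\b"

def has_elaboration(sentences, min_chars=20):
    text = "".join(sentences)
    # Whatever survives the filler filter counts as actual content.
    text = re.sub(FILLERS, "", text)
    return len(text) > min_chars
```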
#######
# # ##### ##### # #### # # ####
# # # # # # # # ## # #
# # # # # # # # # # # ####
# # ##### # # # # # # # #
# # # # # # # # ## # #
####### # # # #### # # ####
action_verbs = r"|".join([
r"(ask(ing)?",
r"go(ing)?",
r"demand(ing)?",
r"driv(e|ing)",
r"chang(e|ing)",
r"behav(e|ing)",
r"perform(ing)?",
r"work(ing)?",
r"meet(ing)?",
r"prepar(e|ing)",
r"smil(e|ing)",
r"help(ing)?",
r"support(ing)?",
r"aid(ing)?",
r"consult(ing)?",
r"coach(ing)?",
r"car(e|ing)",
r"bring(ing)?",
r"tak(e|ing)",
r"get(ting)?",
r"carry(ing)?",
r"solv(e|ing)",
r"creat(e|ing)",
r"initiat(e|ing)?",
r"engag(e|ing)",
r"set(ting)?",
r"motivat(e|ing)",
r"inspir(e|ing)",
r"eat(ing)?",
r"drink(ing)?",
r"consum(e|ing)",
r"sleep(ing)?",
r"see(ing)?",
r"invent(ing)?",
r"rehears(e|ing)",
r"dress(ing)?",
r"break(ing)?",
r"fill(ing)?",
r"fulfill(ing)?",
r"develop(ing)?",
r"rest(ing)?",
r"stop(ing)?",
r"increas(e|ing)",
r"decreas(e|ing)",
r"listen(ing)?",
r"meditat(e|ing)",
r"us(e|ing)",
r"spen.(ing)?",
r"wast(e|ing)",
r"organiz(e|ing)",
r"plan(ing)?",
r"invest(ing)?",
r"learn(ing)?",
r"join(ing)?",
r"practi.(e|ing)",
r"play(ing)?",
r"hik(e|ing)",
r"climb(ing)?",
r"walk(ing)?",
r"bik(e|ing)",
r"sail(ing)?",
r"jump(ing)?",
r"laugh(ing)?",
r"surf(ing)?",
r"swim(ing)?",
r"fly(ing)?",
r"writ(e|ing)",
r"reply(ing)?",
r"send(ing)?",
r"fight(ing)?",
r"buy(ing)?",
r"repair(ing)?",
r"continu(e|ing)",
r"lower(ing)?",
r"rais(e|ing)",
r"improv(e|ing)",
r"read(ing)?",
r"explor(ing)?",
r"travel(ing)?",
r"exchang(e|ing)",
r"invest(ing)?",
r"transfer(ing)?",
r"balanc(ing)?",
r"danc(e|ing)",
r"wear(ing)?",
r"mak(e|ing)",
r"keep(ing)?",
r"writ(e|ing)",
r"jump(ing)?",
r"stand(ing)?",
r"pos(e|ing)",
r"fake(e|ing)?",
r"pretend(ing)?",
r"tell(ing)?",
r"nap(ping)?",
r"research(ing)?",
r"find(ing)?",
r"discuss(ing)?",
r"argue(ing)?",
r"provoc(e|ing)",
r"suggest(ing)?",
r"start(ing)?",
r"apply(ing)?",
r"connect(ing)?",
r"(out|crowd)?sourc(e|ing)",
r"fun(ing)?",
r"found(ing)",
r"shar(e|ing)",
r"tap(ping)?",
r"invit(e|ing)",
r"investigat(e|ing)",
r"giv(e|ing)",
r"donat(e|ing)",
r"lov(e|ing)?)",
r"ignor(e|ing)",
r"deal(ing)?",
r"mind(ing)?",
r"do(ing)"
])
def has_option(sentence):
    if (
        re2.search(action_verbs, sentence)
        and (
            not has_negation(sentence)
            or re.search(r"no matter (what)?", sentence))
        and not has_quantifier_excessive(sentence)
        and not has_quantifier_insufficient(sentence)
        and not re.search(r"(too late)", sentence)
    ):
        return True
    else:
        return False
numbers = r"|".join([
r"(first",
r"second",
r"third",
r"two",
r"three",
r"four(th)?",
r"six(th)?",
r"seven(th)?",
r"eighth?",
r"nine?(th)?",
r"ten(th)?",
r"(number|option|item) (one|a|b|c)(\W|$)",
r"last",
r"end",
r"beginning",
r"start)"
])
def has_choice_of_enumerated_item(sentence):
    sentence_fragments = sentence.split("but")
    if (
        re.search(numbers, sentence_fragments[-1])
        and not has_negation(sentence_fragments[-1])
    ):
        return True
    elif (
        re.search(numbers, sentence)
        and not has_negation(sentence)
    ):
        return True
    else:
        return False
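The split on "but" gives the trailing clause precedence, so "not the first, but the second" still counts as a choice. A condensed version of that two-stage check (short stand-in word lists):

```python
import re

NUMBERS = r"\b(first|second|third|one|two|three|last)\b"
NEGATIONS = r"\b(no|not|never|nothing)\b"

def has_choice(sentence):
    # Whatever follows "but" overrides the earlier clause.
    last_clause = sentence.split("but")[-1]
    if re.search(NUMBERS, last_clause) and not re.search(NEGATIONS, last_clause):
        return True
    if re.search(NUMBERS, sentence) and not re.search(NEGATIONS, sentence):
        return True
    return False
```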
######
# # ##### #### ##### # ###### # #
# # # # # # # # # # ## ##
###### # # # # ##### # ##### # ## #
# ##### # # # # # # # #
# # # # # # # # # # #
# # # #### ##### ###### ###### # #
problem_grammar = nltk.CFG.fromstring(r"""
problem -> subject VP | subject modifier VP
VP -> verb_group
VP -> verb_group object_clause
VP -> verb_group quantifier_j object_clause
VP -> verb_group quantifier_r
VP -> verb_group modifier_r quantifier_r
PP -> P object
verb_group -> verb_simple
verb_group -> moderator verb_simple
verb_group -> verb_aux verb_gerund
verb_group -> verb_aux modifier verb_gerund
verb_group -> verb_aux moderator verb_gerund
verb_group -> verb_aux moderator modifier verb_gerund
verb_group -> verb_aux verb_simple
verb_group -> verb_aux modifier verb_simple
verb_group -> verb_aux moderator verb_simple
verb_group -> verb_aux moderator modifier verb_simple
verb_group -> verb_aux "to" verb_simple
verb_group -> "do" modifier verb_aux "to" verb_simple
verb_group -> verb_aux
verb_simple -> "drink"
verb_gerund -> "drinking"
verb_aux -> "have" verb_pastprog
verb_aux -> "have" modifier verb_pastprog
verb_aux -> "am" | "do" | "have"
verb_aux -> "use" | "fail" | "can" | "want" | "get"
verb_pastprog -> "been" | "done"
subject -> "i"
object_simple -> "milk"
object_np -> determiner object_simple
object_clause -> object_simple | object_np
object_clause -> comparison_base object_simple "than" comparison
object_clause -> comparison_base preposition object_np "than" comparison
object_clause -> quantifier_j object_simple
object_clause -> modifier_r quantifier_j object_simple
object_clause -> quantifier_j preposition object_np
object_clause -> modifier_r quantifier_j preposition object_np
determiner -> "a" | "an" | "the" | "this" | "these" | "those" | "that"
determiner -> "my" | "our" | "his" | "her" | "its" | "our" | "their"
preposition -> "of" | "from"
modifier -> negation | temporal | negation temporal
modifier_r -> "much" | "way" | "far" | "by" "far" | "a" "lot" | "a" "bit"
negation -> "no" | "not" | "never"
temporal -> "always" | "sometimes" | "often"
quantifier_j -> "a" "lot" "of" | "a" "lot"| "enough"
quantifier_j -> "too" "much" | "too" "many"
quantifier_j -> "too" "few" | "too" "little"
quantifier_r -> "too" adverb | "too" "little" | "too" "much" |
quantifier_r -> comparison_base adverb "than" comparison
quantifier_r -> comparison_base "than" comparison
adverb -> "seldom" | "often"
comparison -> "i" "should"
comparison_base -> "more" | "less" | modifier_r "more" | modifier_r "less"
moderator -> "kind" "of" | "somehow"
""")
problem_parser = nltk.RecursiveDescentParser(problem_grammar)


def matches_problem_grammar(sentence):
    # work in progress
    sentence = re.sub(r"(\.|\,|\!|\?|\;|\:)", "", sentence)
    return False


def extract_defined_problem(sentence):
    # work in progress
    return sentence
problems = "|".join([
"(problems?",
"issues?",
"topics?",
"somethings?",
"itch(es)?",
"irritations?",
"troubles?",
"challenges?",
"topics?",
"fears?",
"pains?)"
])
problem_keywords = r"|".join([
r"(too much",
r"too many",
r"too often",
r"too little",
r"too few",
r"too seldom",
r"not get",
r"being",
r"lack of",
r"need",
r"satisf",
r"unhealthy",
r"stress",
r"pressure",
r"struggle",
r"barely"
r"hardly",
r"pain",
r"alone",
r"awkward",
r"deserve",
r"more (\w+ )than",
r"less (\w+ )than",
r"survive",
r"enough)"
])
problem_quantifiers = r"|".join([
r"(no",
r"lack of",
r"too much",
r"too many",
r"too few",
r"not enough",
r"more than enough",
r"lack of",
r"overmuch",
r"(want(ing)?|need(ing)?|hav(e|ing)) (more|less)",
r"(not|never) (hav(e|ing)|get(ting)?)( my( (\w+ )?share of)| the| enough( of))?",
r"too little)"
])
problem_ressources = r"|".join([
r"(money",
r"(\w+ )?time",
r"(\w+ )?opportinit(y|ies)",
r"respect",
r"recognition",
r"support",
r"help",
r"backup",
r"ressources?",
r"sources?",
r"sex",
r"love",
r"thrill",
r"process",
r"food",
r"fun",
r"contact",
r"connections?",
r"energy",
r"willpower",
r"endurance",
r"fitness",
r"strenght",
r"power)"
])
problem_patterns = [
r".*(my|the) (\w|\s|\-|\,)?" \
+ problems \
+ r" (is|are|would be|might be)"\
+ "(?! not)( that( i))? ([^\.\!\?$]+)",
r".* (is|are|would be|might be)(?! not) "\
+ "(my|the|an?|one of my) (\w|\s|\-|\,)?" \
+ problems \
+ r"( for me)?(\,|\.|\-|\!|\?|$)",
r"(it (would|might|could|will)"\
+ "(( \w+)?( allow| enable| empower) me to)?"\
+ "( (solve|reduce|improve|increase)) )?"\
+ "(my|the|an?|one of my)( \w+)? "\
+ problems\
+ "\,? (of|that|the|in|not)"
]
problem_antipatterns = [
r".*(my|the) (\w|\s|\-|\,)*" \
+ problems \
+ r" (is|are|would be|might be)( that( i))? (not|never) ([^\.\!\?$]+)",
r".* (is|are|would be|might be) (not|never) (my|the|an?|one of my) (\w|\s|\-|\,)*" \
+ problems \
+ r"( for me)?(\,|\.|\-|\!|\?|$)"
]
def has_problem_statement(sentence):
    if any(re.search(antipattern, sentence) for antipattern in problem_antipatterns):
        return False
    elif (
        (
            any(re.search(pattern, sentence) for pattern in problem_patterns)
            or re.search(problem_keywords, sentence)
            or re.search(problem_quantifiers + r" " + problem_ressources, sentence)
        )
        and not re.search(r"you", sentence)
    ):
        return True
    else:
        return False
##### #######
# # ##### ###### #### # ###### # #### # # # # ######
# # # # # # # # # # # # # ## ## #
##### # # ##### # # ##### # # # # # ## # #####
# ##### # # # # # # # # # # #
# # # # # # # # # # # # # # # #
##### # ###### #### # # # #### # # # # ######
timepoints = r"|".join([
r"(monday",
r"tuesday",
r"wednesday",
r"thursday",
r"friday",
r"saturday",
r"sunday",
r"week",
r"tonight",
r"today",
r"tomorrow",
r"afternoon",
r"morning",
r"evening",
r"night",
r"break",
r"session",
r"meeting",
r"training",
r"presentation",
r"test",
r"lunch",
r"breakfast",
r"supper",
r"dinner",
r"coffee",
r"party",
r"gathering",
r"event",
r"trip",
r"barbecue",
r"picnic)"
])
def has_specific_time(sentence):
    if (
        re.search(timepoints, sentence)
        or re.search(r"(at|on) the ", sentence)
    ):
        return True
    else:
        return False
# #
# # ###### #### # ##### ## ##### # #### # #
# # # # # # # # # # # # ## #
####### ##### #### # # # # # # # # # # #
# # # # # # ###### # # # # # # #
# # # # # # # # # # # # # # ##
# # ###### #### # # # # # # #### # #
def has_hesitation( sentence):
if(
re.search( r"^(\W)*(e+r*m*|h*u*m+|so+|well)?(\W)*"
r"(actually|"
r"((" + intensifiers + r" )?" + positives + r") question|"
r"let me think|"
r"(\w+ )?hard to say|"
r"i do ?n(o|\')t( \w+)? know)?"
r"(\W)*$", sentence)
):
return True
else:
return False
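# Illustrative inputs (hedged: exact behaviour also depends on the
# "intensifiers" and "positives" patterns defined earlier in this file):
#   has_hesitation("hmm, let me think...")   -> expected True
#   has_hesitation("erm, hard to say.")      -> expected True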
#######
# # # ## # # # # ####
# # # # # ## # # # #
# ###### # # # # # #### ####
# # # ###### # # # # # #
# # # # # # ## # # # #
# # # # # # # # # ####
def has_thanks( sentence):
if re.search( r"thank", sentence):
return True
else:
return False
#######
# ###### ###### # # # # #### ####
# # # # # ## # # # #
##### ##### ##### # # # # # # ####
# # # # # # # # # ### #
# # # # # # ## # # # #
# ###### ###### ###### # # # #### ####
feelings_negative = r"|".join([
r"(a?lone(?:ly|some)?",
r"hungry",
r"thirsty",
r"tired",
r"starv(?:ed|ing)",
    r"famished",
r"angry",
r"furious",
r"raging",
r"mad",
r"sad",
r"unhappy",
r"blue",
r"sorrowful",
r"mournful",
r"miserable",
r"depressed",
r"frustrated",
r"glum",
r"weak",
r"down",
r"disappointed",
    r"desperate",
r"despairing",
r"exhausted",
r"worn.out",
r"drained",
r"displeased",
r"dissatisfied",
r"discontent",
r"unlucky",
r"unwell",
r"uneasy",
r"annoyed",
r"irritated",
r"infuriated",
r"bothered",
r"nervous",
r"shocked",
r"perplexed",
r"confused",
r"troubled",
r"outraged",
r"shaken",
r"scandalized",
r"disgusted",
r"deprived",
r"hurt",
r"injured",
r"insulted",
r"degraded",
r"humiliated",
r"shamed",
r"ashame",
r"pathetic",
r"rotten",
r"disgusting",
r"foul",
r"defeated",
r"powerless",
r"inferior",
r"inappropriate",
r"awkward",
r"clumsy",
r"ugly",
r"embarrass(?:ed|ing)",
r"inept",
r"helpless",
r"upset",
r"unwelcome)"
])
def has_feeling_negative( sentence):
return_value = False
if re.search( feelings_negative, sentence):
feeling_match = re.match( r"(.*?)(" + intensifiers + r".*)?" + feelings_negative + r"(.*?$)", sentence)
if(
not has_negation( feeling_match.group(1))
and
(re.search( r"(^|\W)(i|we)\W", feeling_match.group(1))
or re.search( r"^\W*$", feeling_match.group(1)))
):
return_value = True
return return_value
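# Illustrative inputs (hedged: rely on has_negation and the "intensifiers"
# pattern defined earlier in this file):
#   has_feeling_negative("i am so sad today")  -> expected True
#   has_feeling_negative("you look sad")       -> expected False (not first person)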
#######
# ###### ## #####
# # # # # #
##### ##### # # # #
# # ###### #####
# # # # # #
# ###### # # # #
fear_pattern = r"|".join([
r"(fear",
r"afraid",
r"scared of",
r"scare(?:s|\W|ing)?",
r"terrified",
r"terrif(?:ies|ying)",
r"frightened",
r"concerned",
r"concern(?:s|\W|ing)?",
r"frighten(?:s|\W|ing)?)"
])
def has_fear( sentence):
return_value = False
if re.search( fear_pattern, sentence):
fear_match = re.match( r"(.*?)" + fear_pattern + r"(.*?$)", sentence)
if(
(re.search( r"(fear|afraid|scared|terrified|concerned|frightened)" ,fear_match.group(2))
and re.search( r"(^|\W)(i|we)\W", fear_match.group(1))
and not has_negation( fear_match.group(1)))
or
(re.search( r"(scare( |s|ing)|terrif(y|ies|ying)|concern( |s|ing)|frighten( |s|ing))" ,fear_match.group(2))
and re.search( r"(^|\W)(me|us|i|we)(\W|$)", fear_match.group(3))
and not has_negation( fear_match.group(1)))
or
(re.search( r"(ing|fear)" ,fear_match.group(2))
and re.search( r"(^|\W)(me|us|i|we|my)\W", fear_match.group(0))
and not has_negation( fear_match.group(1)))
):
return_value = True
return return_value
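# Illustrative inputs (hedged: rely on has_negation defined earlier):
#   has_fear("i am afraid of spiders")        -> expected True
#   has_fear("we are not afraid of spiders")  -> expected False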
#
# # # # # # # ####
# # # # # ## # # #
# # #### # # # # #
# # # # # # # # # ###
# # # # # # ## # #
####### # # # # # # ####
dislikes = r"|".join([
r"(i (?:\w+ )?hate",
r"i (?:\w+ )?dislike",
r"i (?:\w+ )?can not stand",
r"i (?:\w+ )?detest",
r"i (?:\w+ )?loathe",
r"(freak|creep)(?:s|ing)? me out",
r"get(?:s|ting)? on my nerves",
    r"i(?: have)?(?: \w+)? had enough of",
    r"i(?: \w+)? can not see any more of)"
])
def has_dislike( sentence):
if(
re.search( dislikes, sentence)
and not has_negation( re.sub( dislikes, " good ", sentence))
):
return True
else:
return False
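# Illustrative input (assumes the sentence is already lower-cased and
# contraction-expanded, as the other helpers in this file expect):
#   has_dislike("i really hate mondays")  -> expected True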
######
# # ###### #### # ##### ######
# # # # # # # #
# # ##### #### # # # #####
# # # # # ##### #
# # # # # # # # #
###### ###### #### # # # ######
desires = "|".join([
r"(i (\w+ )?wish",
r"if only",
    r"my (\w+ )?goal is (that|for|to|when|\w+ing)",
r"i (\w+ )?hope(?! for)",
r"it would (\w+ )?be (\w+ )?" + positives + r" (if|when))",
])
def has_desire(sentence):
    desire_match = re.search( desires + r"(\s|\.|\,)(?!you)", sentence)
    if desire_match and not has_negation( desire_match.group(0)):
        return True
    else:
        return False
###### #####
# # ## # # #### ###### ##### ##### #### # # ###### # ######
# # # # ## # # # # # # # # # # # # #
# # # # # # # # ##### # # # # # ##### ##### # #####
# # ###### # # # # ### # ##### # # # # # # #
# # # # # ## # # # # # # # # # # # # #
###### # # # # #### ###### # # # #### ##### ###### ###### #
hurts = "|".join([
r"(kill",
r"hang",
r"cut",
r"harm",
r"electrocute",
r"burn",
r"to death",
r"hurt",
r"drown)",
])
intentions_self = "|".join([
r"(i am (\w+ )?going to",
r"i (\w+ )?will",
    r"i (\w|\s)*plan(ning)? to",
r"i (\w|\s)*inten(d|t)(ing)? to",
r"i (\w|\s)*prepar(e|ing) to",
r"i (\w+ )?want to",
r"i (\w|\s)*think(ing)? about",
    r"i am (\w+ )?about to)",
])
def has_danger_to_self(sentence):
intention_match = re.search( intentions_self+r"(.*)", sentence)
desire_match = re.search( desires+r"(.*)", sentence)
if intention_match:
if(
not has_negation( intention_match.group(1))
and re.search( hurts, intention_match.group(len(intention_match.groups())))
and re.search( r"\W(me|myself)(\W|$)", intention_match.group(len(intention_match.groups())))
):
return True
else:
return False
elif desire_match:
if(
not has_negation( desire_match.group(1))
and (
(
re.search( r"\W(dead|die)(\W|$)", desire_match.group(len(desire_match.groups())))
and re.search( r"\W(i)\W", desire_match.group(len(desire_match.groups())))
)
or(
re.search( hurts, desire_match.group( len( desire_match.groups())))
and re.search( r"\W(me|myself)(\W|$)", desire_match.group( len( desire_match.groups())))
)
)
):
return True
else:
return False
else:
return False
#####
# # #### # # ###### # # #### #####
# # # ## # # # # # # #
# # # # # # ##### # # # #
# # # # # # # # # # #
# # # # # ## # # # # # #
##### #### # # # ###### # #### #
conflicts = "|".join([
r"(trouble",
r"problem",
r"conflict",
r"fight",
r"disagreement",
r"struggle",
r"dispute",
r"argument",
r"battle",
r"quarrel",
    r"controversy",
r"clash",
r"collision",
r"(?:^|\s)issue)"
])
def has_conflict(sentence):
if re.search(conflicts,sentence) and not has_negation(sentence):
return True
else:
return False
######
# # ## ##### # #### # # ## # ######
# # # # # # # # ## # # # # #
###### # # # # # # # # # # # # #####
# # ###### # # # # # # # ###### # #
# # # # # # # # # ## # # # #
# # # # # # #### # # # # ###### ######
rationale_pattern = re.compile(r"(?:.*)because\W([^\.\,\;\!\?]+)")
def has_rationale(sentence):
if rationale_pattern.search(sentence):
return True
else:
return False
def reflect_rationale(sentence):
reason = rationale_pattern.search(sentence).group(1)
return capitalize_fragment(
perform_pronoun_reflection(
reason))
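# Illustrative input (hedged: reflect_rationale additionally relies on
# capitalize_fragment and perform_pronoun_reflection, defined in an
# earlier part of this file):
#   has_rationale("i left early because it was late")  -> True
#   reflect_rationale(...) returns the clause captured after "because".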
######
# # ##### #### ##### ###### #### #####
# # # # # # # # # #
###### # # # # # ##### #### #
# ##### # # # # # #
# # # # # # # # # #
# # # #### # ###### #### #
def has_protest_to_question(sentence):
if(
        re.search( r"no(?:(?!is)[^\.\,\;])+you[^\.\,\;]+(busines|concern)",sentence)
        or re.search( r"mind(?:(?!is)[^\.\,\;])+own[^\.\,\;]+busines",sentence)
        or re.search( r"(never|no)(?:(?!is)[^\.\,\;])+mind",sentence)
or re.search( r"no[^\.\,\;]+((talk[^\.\,\;]+about)|discuss)",sentence)
or re.search( r"(fuck|screw|stop) this", sentence)
or re.search( r"(annoying|stupid|idiotic|absurd|meaningless|fucking) (questions?|conversation)", sentence)
):
return True
else:
return False
#####
# # # # ###### #### ##### # #### # # ####
# # # # # # # # # # ## # #
# # # # ##### #### # # # # # # # ####
# # # # # # # # # # # # # # #
# # # # # # # # # # # # ## # #
#### # #### ###### #### # # #### # # ####
def has_request_to_explain( sentence):
if(
( # why are you asking this? / why would you want to know this?
re.search(r"(why|(what.*(for|reason|purpose)))", sentence)
and
re.search(r"(ask|know|question|curious|nosy|inquisitive)", sentence)
)
or( # in how far is that relevant?
re.search(r"(why|(how(\w|\s)+(is|be)))", sentence)
and
re.search(r"(important|relevant|interesting|fascinating)", sentence)
)
or( # what do you mean / i do not get your point?
re.search(r"(what|((^|\W)i\W).*(not.*(get|understand|follow)))", sentence)
and(
re.search(r"(talk|question|this)(\w|\s)+about", sentence)
or
re.search(r"you.*(point|mean|ask|question)", sentence)
)
)
or(
re.search(r"(question|ask)", sentence)
and
re.search(r"(you|this).*(has|make)", sentence)
and
re.search(r"no(\w|\s)+(sense)", sentence)
)
or(
re.search(r"(why|sorry|what|wtf)\?", sentence)
)
):
return True
else:
return False
######
# # ###### ###### # ###### #### ##### # #### # #
# # # # # # # # # # # # ## #
###### ##### ##### # ##### # # # # # # # #
# # # # # # # # # # # # # #
# # # # # # # # # # # # # ##
# # ###### # ###### ###### #### # # #### # #
temporal = "|".join([
r"(today",
r"right now",
r"currently",
r"now",
r"recently",
r"previously",
r"lately",
r"these days",
r"this \w+",
r"sometimes",
r"every now and then)"
])
#####
# # ##### ###### ###### ##### # # # ####
# # # # # # # ## # # #
# #### # # ##### ##### # # # # # #
# # ##### # # # # # # # # ###
# # # # # # # # # ## # #
##### # # ###### ###### # # # # ####
def current_greeting(current_hour):
if not isinstance(current_hour, int):
return "Hello"
elif current_hour < 11:
return "Good morning"
elif current_hour >= 18:
return "Good evening"
elif current_hour >= 14 and current_hour < 18:
return "Good afternoon"
else:
return "Hello"
def current_daytime(current_hour):
if not isinstance(current_hour, int):
return "day"
elif current_hour < 11:
return "morning"
elif current_hour >= 18:
return "evening"
elif current_hour >= 14 and current_hour < 18:
return "afternoon"
else:
return "day"
def previous_daytime(current_hour):
if not isinstance(current_hour, int):
return "day"
elif current_hour < 10:
return "night"
elif current_hour >= 10 and current_hour < 18:
return "day so far"
else:
return "day"
def next_daytime(current_hour):
if not isinstance(current_hour, int):
return "day"
elif current_hour < 5:
return "night"
    elif current_hour >= 5 and current_hour < 10:
return "start into the day"
elif current_hour >= 15 and current_hour < 20:
return "evening"
elif current_hour >= 20:
return "night"
else:
return "day"
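# Illustrative mapping, following the conditionals above:
#   current_greeting(9)   -> "Good morning"
#   current_greeting(15)  -> "Good afternoon"
#   current_greeting(20)  -> "Good evening"
#   current_greeting(12)  -> "Hello"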
greeting = r"^(" + "|".join([r"^(oh\,? ",
r"hey [\w\s-]+\,?",
r"hey\,? ",
r"why\, ",
r"\w+ \w+\, ",
r"oh [\w\s-]+\,)"
]) + "?" + "|".join([
r"(good morning",
r"good afternoon",
r"good evening",
r"h[ea]llo",
r"hi",
r"howdy",
r"salut",
r"servus",
r"ahoi)"
]) + r"($|\,| |\.|\!|\;))|(hey there!)"
def has_greeting(sentence):
if re.search( greeting, sentence):
return True
else:
return False
temporal_general = "|".join([
r"(today",
r"yesterday",
r"right now",
r"currently",
r"now",
r"recently",
r"previously",
r"lately",
r"sometimes",
r"every now and then)"
])
temporal_units = "|".join([
r"(days?",
r"weeks?",
r"weekend",
r"morning",
r"evening",
r"night",
r"time",
r"\w+day",
r"hours?",
r"january",
r"february",
r"march",
r"april",
    r"may",
r"june",
r"july",
r"august",
r"september",
    r"october",
r"november",
r"december",
r"spring",
r"winter",
r"summer",
r"fall",
r"autumn",
r"years?)",
])
how_are_you = r"(^|([\,\;\.\!]\s))" + "|".join([
r"(how are you( doing| feeling)?",
r"how is it going",
r"how do you (do|feel)",
r"what is up)"
]) + r"(\s(this|these|those|lately|recently|today|now|again|right now)\s?)?" + temporal_units + r"?(\,[\s\w]+)?\?"
how_was_your_time = r"(^|([\,\;\.\!]\s))" + "|".join([
r"(how (is|was|were) your",
r"how (has|have) your)",
]) + r"\s(last|current|recent|previous)?\s?" + temporal_units + r"\s?(been)?\s?(so far|lately|recently)?(\,[\s\w]+)?\?"
you_had_good_time = r"(?<!why )(?<!how )(?<!who )(?<!what )(?<!when )" + "|".join([r"(did you have",
"have you had)"]) + r"\s" + "|".join([
r"(a",
r"some",
r"a few)" ]) + "\s" + intensifiers + r"?\s?" + positives + r"\s" + temporal_units + r"((\s|\,)[\,\s\w]+)?\?"
def has_question_how_are_you(sentence):
if re.search( how_are_you, sentence):
return True
else:
return False
def has_question_how_was_your_time(sentence):
if re.search( how_was_your_time, sentence):
return True
else:
return False
def has_question_you_had_good_time(sentence):
    if re.search( you_had_good_time, sentence):
        return True
    else:
        return False
#######
# # # # ###### ###### ##### ## # # ######
# # ## ## # # # # # # ## ## #
# # # ## # ##### ##### # # # # # ## # #####
# # # # # # ##### ###### # # #
# # # # # # # # # # # # #
# # # # ###### # # # # # # # ######
timeframe_shorts = r"|".join([
r"(short",
r"first",
r"quick",
r"distance",
r"a",
r"1)"
])
timeframe_longs = r"|".join([
r"(long",
r"mid",
r"second",
r"sustainable",
r"difference",
r"b",
r"2)"
])
def prefers_timeframe_short( sentence):
sentence = sentence.split("but")[-1]
if(
re.search( r"(\W|^)" + timeframe_shorts + r"(\W|$)", sentence)
and not has_negation( sentence)
):
return True
else:
return False
def prefers_timeframe_long( sentence):
sentence = sentence.split("but")[-1]
if( re.search( r"(\W|^)" + timeframe_longs + r"(\W|$)", sentence)
and not has_negation( sentence)
):
return True
else:
return False
#######
# # # # ###### ######
# # # # # #
##### # # # ##### #####
# # # # # #
# # # # # #
# ###### #### # #
fluffs = [
#re.compile(r"^[\s\.\,\;\-\!\?]"),
re.compile(r"(?:^|\W)(?::|;|=|B|8)(?:-|\^)?(?:\)|\(|D|P|\||\[|\]|>|\$|3)+(?:$|\W)"),
re.compile(r"^well\W"),
re.compile(r"^so\W"),
#re.compile(r"^alright\W"),
re.compile(r"^anyways?\W"),
re.compile(r"^lol\w?\W"),
re.compile(r"^wo+w\W"),
re.compile(r"^cool[\.\,\!]+"),
re.compile(r"^sorry[\.\,\!]+"),
#re.compile(r"^great[\.\,\!]+"),
re.compile(r"final?ly"),
re.compile(r"honestly"),
re.compile(r"actually"),
re.compile(r"quite"),
re.compile(r"really"),
re.compile(r"literal?ly"),
re.compile(r"certainly"),
re.compile(r"in fact"),
re.compile(r"somehow"),
re.compile(r"basical?ly")
#re.compile(r"just")
]
def contains_fluff(text, fluffs=fluffs):
    "Check whether the text contains filler words that don't contribute to the meaning of a statement."
    if any(fluff.search(text) for fluff in fluffs):
        return True
    else:
        return False
def remove_fluff(text, fluffs=fluffs):
"Remove words that don't contribute to the meaning of a statement."
while any(fluff.search(text) for fluff in fluffs):
for fluff in fluffs:
text = fluff.sub('', text)
return text
def cleanup_sentence(text):
corrections = [
(r"^\W", r""),
(r"\s$", r""),
(r"\;" , ","),
(r"\s{2,}" , " "),
(r"\.{2,}" , " "),
(r"[\!\?]+\?[\!\?]*" , "?"),
(r"[\!\?]*\?[\!\?]+" , "?")
]
while any(re.search(before,text) for (before,after) in corrections):
for (before, after) in corrections:
text = re.sub(before, after, text)
return text
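# Illustrative pipeline (hedged example):
#   cleanup_sentence(remove_fluff("well, i really agree!!?"))
#   strips the fluff words and normalises the trailing "!!?" to "?".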
#####
# # #### # # ##### ##### ## #### ##### # #### # # ####
# # # ## # # # # # # # # # # # # ## # #
# # # # # # # # # # # # # # # # # # # ####
# # # # # # # ##### ###### # # # # # # # # #
# # # # # ## # # # # # # # # # # # # ## # #
##### #### # # # # # # # #### # # #### # # ####
def expand_contractions(text):
"Replace contractions of pronoun and auxiliary verb with their expanded versions."
contractions = [
(r"1", "one"),
(r"ain't", "is not"),
(r"aren't", "are not"),
(r"can't", "can not"),
(r"cant", "can not"),
(r"can't've", "can not have"),
(r"'cause", "because"),
(r"could've", "could have"),
(r"couldn't", "could not"),
(r"couldn't've", "could not have"),
(r"didn't", "did not"),
(r"didnt", "did not"),
(r"doesn't", "does not"),
(r"doesnt", "does not"),
(r"don't", "do not"),
(r"dont", "do not"),
(r"gonna", "going to"),
(r"hadn't", "had not"),
(r"hadn't've", "had not have"),
(r"hasn't", "has not"),
(r"hasnt", "has not"),
(r"haven't", "have not"),
(r"havent", "have not"),
(r"he'd", "he would"),
(r"he'd've", "he would have"),
(r"he'll", "he will"),
(r"he'll've", "he will have"),
(r"he's", "he is"),
(r"hes", "he is"),
(r"here's", "here is"),
(r"heres", "here is"),
(r"how'd", "how did"),
(r"how'd'y", "how do you"),
(r"how'll", "how will"),
(r"how's", "how is"),
(r"hows", "how is"),
(r"how're", "how are"),
(r"howre", "how are"),
(r"i'd", "i would"),
(r"i'd've", "i would have"),
(r"i'll", "i will"),
(r"i'll've", "i will have"),
(r"i'm", "i am"),
(r"im", "i am"),
(r"ima", "i am going to"),
(r"i've", "i have"),
(r"ive", "i have"),
(r"isn't", "is not"),
(r"isnt", "is not"),
(r"it'd", "it would"),
(r"it'd've", "it would have"),
(r"it'll", "it will"),
(r"it'll've", "it will have"),
(r"it's", "it is"),
(r"let's", "let us"),
(r"lets", "let us"),
(r"ma'am", "madam"),
(r"mayn't", "may not"),
(r"might've", "might have"),
(r"mightn't", "might not"),
(r"mightn't've", "might not have"),
(r"must've", "must have"),
(r"mustn't", "must not"),
(r"mustn't've", "must not have"),
(r"needn't", "need not"),
(r"needn't've", "need not have"),
(r"o'clock", "of the clock"),
(r"oughtn't", "ought not"),
(r"oughtn't've", "ought not have"),
(r"shan't", "shall not"),
(r"sha'n't", "shall not"),
(r"shan't've", "shall not have"),
(r"she'd", "she would"),
(r"she'd've", "she would have"),
(r"she'll", "she will"),
(r"she'll've", "she will have"),
(r"she's", "she is"),
(r"shes", "she is"),
(r"should've", "should have"),
(r"shouldn't", "should not"),
(r"shouldnt", "should not"),
(r"shouldn't've", "should not have"),
(r"so've", "so have"),
(r"so's", "so is"),
(r"that'd", "that would"),
(r"that'd've", "that would have"),
(r"that's", "that is"),
(r"thats", "that is"),
(r"there'd", "there would"),
(r"there'd've", "there would have"),
(r"there's", "there is"),
(r"theres", "there is"),
(r"they'd", "they would"),
(r"they'd've", "they would have"),
(r"they'll", "they will"),
(r"they'll've", "they will have"),
(r"they're", "they are"),
(r"theyre", "they are"),
(r"they've", "they have"),
(r"theyve", "they have"),
(r"to've", "to have"),
(r"wanna", "want to"),
(r"wasn't", "was not"),
(r"wasnt", "was not"),
(r"we'd", "we would"),
(r"we'd've", "we would have"),
(r"we'll", "we will"),
(r"we'll've", "we will have"),
(r"we're", "we are"),
(r"we've", "we have"),
(r"weve", "we have"),
(r"weren't", "were not"),
(r"what'll", "what will"),
(r"what'll've", "what will have"),
(r"what're", "what are"),
(r"what's", "what is"),
(r"whats", "what is"),
(r"what've", "what have"),
(r"when's", "when is"),
(r"when've", "when have"),
(r"where'd", "where did"),
(r"where's", "where is"),
(r"wheres", "where is"),
(r"where've", "where have"),
(r"who'll", "who will"),
(r"who'll've", "who will have"),
(r"who's", "who is"),
(r"whos", "who is"),
(r"who've", "who have"),
(r"why's", "why is"),
(r"why've", "why have"),
(r"will've", "will have"),
(r"won't", "will not"),
(r"wont", "will not"),
(r"won't've", "will not have"),
(r"would've", "would have"),
(r"wouldn't", "would not"),
(r"wouldn't've", "would not have"),
(r"ya", "you"),
(r"y'all", "you all"),
(r"y'all'd", "you all would"),
(r"y'all'd've", "you all would have"),
(r"y'all're", "you all are"),
(r"y'all're", "you all are"),
(r"y'know", "you know"),
(r"you'd", "you would"),
(r"youd", "you would"),
(r"you'd've", "you would have"),
(r"you'll", "you will"),
(r"youll", "you will"),
(r"you'll've", "you will have"),
(r"you're", "you are"),
(r"youre", "you are"),
(r"you've", "you have"),
(r"youve", "you have")
]
    # Apply each replacement in turn; the lookahead leaves the trailing
    # separator unconsumed, so adjacent contractions are both expanded.
    for (before, after) in contractions:
        text = re.sub(r"(^|\W)" + before + r"(?=$|\W)", r"\1" + after, text)
return text
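# Illustrative usage (inputs assumed lower-cased beforehand, as elsewhere
# in this file):
#   expand_contractions("i can't do it")  -> "i can not do it"
#   expand_contractions("you're right")   -> "you are right"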
hypothesis_map = defaultdict( lambda: lambda x: False)
# If an unknown hypothesis is queried, hypothesis_map will
# use an anonymous lambda function to evaluate it with False
hypothesis_map.update({
"has_affirmation" : has_affirmation,
"has_negation" : has_negation,
"has_greeting" : has_greeting,
"has_request_to_explain" : has_request_to_explain,
"has_protest_to_question" : has_protest_to_question,
"has_question_how_are_you" : has_question_how_are_you,
"has_question_how_was_your_time" : has_question_how_was_your_time,
"has_question_you_had_good_time" : has_question_you_had_good_time,
"has_danger_to_self" : has_danger_to_self,
"has_hesitation" : has_hesitation,
"has_story" : has_story,
"has_story_negative" : has_story_negative,
"has_problem_statement" : has_problem_statement,
"has_desire" : has_desire,
"has_fear" : has_fear,
"has_feeling_negative" : has_feeling_negative,
"has_dislike" : has_dislike,
"is_positive" : is_positive,
"is_negative" : is_negative,
"prefers_timeframe_long" : prefers_timeframe_long,
"prefers_timeframe_short" : prefers_timeframe_short,
"has_option" : has_option,
"has_choice_of_enumerated_item" : has_choice_of_enumerated_item,
"has_specific_time" : has_specific_time
})
def check_if_statement( statement, hypothesis, verbose=True):
"""
Evaluates if a hypothesis about a statement is true.
Arguments:
statement -- A string for which the hypothesis should be tested,
e.g. "Hello world!"
hypothesis -- A string that contains the hypothesis to be tested,
e.g. "has_greeting". The hypothesis is tested by a
function with the same name as the hypothesis. If no
such function exists, it will issue a warning and
evaluate as False.
    verbose    -- (default: True) 'False' silences error and warning
                  messages. This is useful mainly for de-cluttering
                  unit tests.
"""
if not(
isinstance( statement, str)
or isinstance( statement, unicode)
):
if verbose: print "Argument 'statement' must be a (unicode) string."
if verbose: print "Instead, statement argument of type '" + type(statement).__name__ + "' was given."
raise TypeError
if not(
isinstance( hypothesis, str)
or isinstance( hypothesis, unicode)
):
if verbose: print "Argument 'hypothesis' must be a (unicode) string."
if verbose: print "Instead, hypothesis argument of type '" + type(hypothesis).__name__ + "' was given."
raise TypeError
    if hypothesis not in hypothesis_map:
        if verbose:
            warnings.warn( "'hypothesis' argument '" + hypothesis + "' is not a known skill / function.", Warning)
return hypothesis_map[hypothesis]( statement)
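# Illustrative usage:
#   check_if_statement("hello there, how are you?", "has_greeting")
#   -> expected True; an unknown hypothesis such as "has_sarcasm" (a
#   hypothetical name) is answered by the defaultdict fallback -> False.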
# # #######
## # ## # # ###### ##### # # # ##### # ##### # ###### ####
# # # # # ## ## # # # # ## # # # # # # #
# # # # # # ## # ##### # # ##### # # # # # # # ##### ####
# # # ###### # # # # # # # # # # # # # # #
# ## # # # # # # # # # ## # # # # # # #
# # # # # # ###### ##### ####### # # # # # # ###### ####
# Works well, but currently not required
# def extract_persons(text):
# target = re.compile(r"(PERSON")#|ORGANIZATION|FACILITY)")
# tokens = nltk.tokenize.word_tokenize(text)
# pos = nltk.pos_tag(tokens)
# sentt = nltk.ne_chunk(pos, binary = False)
# persons = []
# for subtree in sentt.subtrees(filter=lambda t: target.match(t.label())):
# name = ' '.join([leaf[0] for leaf in subtree.leaves()])
# for i in range(len((persons))):
# if re.search(persons[i],name):
# persons[i] = re.sub(persons[i],name,persons[i])
# if not any(re.search(name,person) for person in persons):
# persons.append(name)
# return (persons)
# def extract_named_entities(text):
# target = re.compile(r"(PERSON|ORGANIZATION|GPE|LOCATION)")
# tokens = nltk.tokenize.word_tokenize(text)
# pos = nltk.pos_tag(tokens)
# sentt = nltk.ne_chunk(pos, binary = False)
# entities = []
# for subtree in sentt.subtrees(filter=lambda t: target.match(t.label())):
# name = ' '.join([leaf[0] for leaf in subtree.leaves()])
# if name not in entities:
# entities.append(name)
# return (entities)
#
# # # ##### #### ###### # # ###### # # #####
# # # # # # # # ## ## # ## # #
# # # # # # ##### # ## # ##### # # # #
# # # # # # # ### # # # # # # # #
# # # # # # # # # # # # # ## #
##### #### ##### #### ###### # # ###### # # #
# Suspended because of bad performance
#from nltk.sentiment.vader import SentimentIntensityAnalyzer
#vader = SentimentIntensityAnalyzer()
# judgement_grammar = """
# S_and_V: {((((<PRP\$>|<DT>)? (<R.*>|<J.*>)*)? <NN>? (<NNS>|<NN>))|<PRP>) (<VBZ>|<VBP>|<VBD>) <VBN>?}
# Object : {<DT>* (<R.*>|<J.*>|<VBG> )* (<NNS>|<NN>)* (<CC> <DT>* (<R.*>|<J.*>|<VBG> )* (<NNS>|<NN>)*)? <.>}
# Judgement : {<.*>* <S_and_V> <Object>}
# """
# judgement_chunker = nltk.RegexpParser(judgement_grammar)
# def is_judgement_positive(sentence):
# return_value = False
# equivalence = False
# sentence = re.sub(r"(\,)",r"",sentence)
# sentence_pos = nltk.pos_tag(nltk.word_tokenize(sentence))
# tree = judgement_chunker.parse(sentence_pos)
# if unicode("Judgement") in [subtree.label() for subtree in tree.subtrees()]:
# for subtree in tree.subtrees():
# if subtree.label() == unicode("Object"):
# vader_score = vader.polarity_scores("You are " + " ".join([word for (word,tag) in subtree.leaves()]))
# if subtree.label() == unicode("S_and_V"):
# if re.search(r"(is|are|was|were|has been|have been)",sentence):
# equivalence = True
# if (
# vader_score["pos"] >= max(vader_score["neg"],vader_score["neu"])
# and equivalence
# ):
# return_value = True
# return return_value
# def is_judgement_negative(sentence):
# return_value = False
# equivalence = False
# sentence = re.sub( r"(\,)",r"", sentence)
# sentence = re.sub( r"(\.|\!|\?|\)|\(|\:|\-)*$", r"!", sentence)
# sentence = re.sub( r"!+", r"!", sentence)
# sentence_pos = nltk.pos_tag(nltk.word_tokenize(sentence))
# tree = judgement_chunker.parse(sentence_pos)
# if unicode("Judgement") in [subtree.label() for subtree in tree.subtrees()]:
# for subtree in tree.subtrees():
# if subtree.label() == unicode("Object"):
# vader_score = vader.polarity_scores("You are " + " ".join([word for (word,tag) in subtree.leaves()]))
# if subtree.label() == unicode("S_and_V"):
# if re.search(r"(is|are|was|were|has been|have been)",sentence):
# equivalence = True
# if (
# vader_score["neg"] >= max(vader_score["neu"],vader_score["pos"])
# and equivalence
# ):
# return_value = True
# if re.search(r"(suck|full of .*shit|nothing but .*shit)",sentence):
# return_value = True
# return return_value
# # # ##### ##### #### ##### # # #### ##### # #### # #
# ## # # # # # # # # # # # # # # # # ## #
# # # # # # # # # # # # # # # # # # # # #
# # # # # ##### # # # # # # # # # # # # # #
# # ## # # # # # # # # # # # # # # # # ##
# # # # # # #### ##### #### #### # # #### # #
# Draft
# simple_sentence_grammar = """
# noun_phrase: {((((<PRP\$>|<DT>)? (<R.*>|<J.*>)*)? <NN>? (<NNS>|<NN>))|<PRP>)}
# subject_and_verb : {<noun_phrase> (<R.*> )*(<VBZ>|<VBP>|<VBD>)}
# """
# simple_sentence_chunker = nltk.RegexpParser(simple_sentence_grammar)
# introductions = "|".join([
# "(say",
# "guess",
# "think)"
# ]) + r"(\,? that)?"
# def has_introduction( sentence):
# return_value = False
# intro_match = re.search( r"(.*) " + introductions + r" (.*)", sentence)
# if intro_match:
# if(
# re.search( r"i ", intro_match.group(1))
# and not re.search( r"not ", intro_match.group(1))
# ):
# return_value = True
# return return_value
#!/usr/bin/python
# -*- coding: utf-8 -*-
# For better print formatting
from __future__ import print_function
# Imports
from abc import abstractmethod
from enum import Enum
class Version:
def __init__(self, data, version_id, timestamp):
self._data = data
self._id = version_id
self._producer = None
self._usages = []
        self._history = [[timestamp, "Registered Version " + self._id]]
def get_id(self):
return self._id
def get_data(self):
return self._data
def get_rename(self):
return "d" + self._data.get_id()+"v"+self._id
def register_consumer(self, access, timestamp):
self._usages.append(access)
consumer = ""
cause = access.get_cause()
if cause is not None and cause.task is not None:
consumer = "task " +cause.task.task_id
else:
consumer = "Main code"
self._history.append([timestamp, "Registered Read access to " + self.get_rename() + " by " + consumer])
def register_producer(self, access, timestamp):
self._producer = access
producer = ""
cause = access.get_cause()
if cause is not None and cause.task is not None:
producer = "task " + cause.task.task_id
else:
producer = "Main code"
self._history.append([timestamp, "Registered Write access to " + self.get_rename() + " by " + producer])
def get_producer(self):
if self._producer is not None:
return self._producer.get_cause()
else:
return None
    def commit_read(self, access, timestamp):
        cause = access.get_cause()
        if cause is not None and cause.task is not None:
            consumer = "task " + cause.task.task_id
        else:
            consumer = "Main code"
        self._history.append([timestamp, "Completed read access to " + self.get_rename() + " by " + consumer])
def commit_write(self, access, timestamp):
cause = access.get_cause()
if cause is not None and cause.task is not None:
producer = "task " + cause.task.task_id
else:
producer = "Main code"
self._history.append([timestamp, "New value (" + self.get_rename() + ") generated by " + producer])
def main_access_progress(self, state, timestamp):
self._history.append([timestamp, "Main access " + state])
def get_history(self):
history = []
history = history + self._history
return sorted(history, key=lambda t: int(t[0]))
def __str__(self):
return self.get_rename()
class Data:
def __init__(self, data_id, timestamp):
self._id = data_id
self._versions = []
self.add_version(timestamp)
self._history = []
self._history.append([timestamp, "First data access"])
def get_id(self):
return self._id
def get_current_version(self):
return self._versions[-1]
def get_all_versions(self):
return self._versions
def add_version(self, timestamp):
version = Version(self, str(len(self._versions)+1), timestamp)
self._versions.append(version)
return version
def get_last_writer(self):
version = self._versions[-1]
if version is not None:
access = version.get_producer()
return access
return None
def register_consumer(self, access, timestamp):
if len(self._versions) == 0:
v = self.add_version(timestamp)
else:
v = self._versions[-1]
v.register_consumer(access, timestamp)
return v
def register_update(self, access, timestamp):
v = self.add_version(timestamp)
v.register_producer(access, timestamp)
return v
def get_history(self):
history = []
history = history + self._history
for v in self.get_all_versions():
history = history + v.get_history()
return sorted(history, key=lambda t: int(t[0]))
class Access(object):
def __init__(self, timestamp):
self._cause = None
@staticmethod
def create_access(direction, timestamp):
if direction.upper() == "IN" or direction.upper() == "R":
return RAccess(timestamp)
elif direction.upper() == "INOUT" or direction.upper() == "RW":
return RWAccess(timestamp)
else: # direction.upper() == "OUT" or direction.upper() == "W":
return WAccess(timestamp)
def get_cause(self):
return self._cause
def set_cause(self, cause):
self._cause = cause
@abstractmethod
def get_data(self):
pass
@abstractmethod
def get_direction(self):
pass
@abstractmethod
def get_dependence(self):
pass
@abstractmethod
def committed(self, timestamp):
pass
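# Illustrative factory usage (WAccess is expected to be defined in another
# part of this file):
#   Access.create_access("IN", t)     -> RAccess instance
#   Access.create_access("INOUT", t)  -> RWAccess instance
#   Access.create_access("OUT", t)    -> WAccess instance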
class RAccess(Access):
def __init__(self, timestamp):
super(RAccess, self).__init__(timestamp)
self._read_version = None
self._history = []
def get_read_version(self, timestamp):
return self._read_version
def get_data(self):
if self._read_version is not None:
return self._read_version.get_data()
else:
return None
def get_direction(self):
return "IN"
def get_dependence(self):
if self._read_version is not None:
producer = self._read_version.get_producer()
if producer is not None:
return producer.get_reason()
return None
def register_read(self, data, timestamp):
self._read_version = data.register_consumer(self, timestamp)
def committed(self, timestamp):
if self._read_version is not None:
self._read_version.commit_read(self, timestamp)
def __str__(self):
read_version = "?"
if self._read_version is not None:
read_version = self._read_version.get_rename()
return read_version
class RWAccess(Access):
def __init__(self, timestamp):
super(RWAccess, self).__init__(timestamp)
self._read_version = None
self._written_version = None
def get_read_version(self, timestamp):
return self._read_version
def get_written_version(self, timestamp):
return self._written_version
def set_written_version(self, version, timestamp):
self._written_version = version
def get_data(self):
if self._read_version is not None:
return self._read_version.get_data()
elif self._written_version is not None:
return self._written_version.get_data()
else:
return None
def get_direction(self):
return "INOUT"
def get_dependence(self):
if self._read_version is not None:
producer = self._read_version.get_producer()
if producer is not None:
return producer.get_reason()
return None
def register_read(self, data, timestamp):
self._read_version = data.register_consumer(self, timestamp)
def register_write(self, data, timestamp):
self._written_version = data.register_update(self, timestamp)
def committed(self, timestamp):
if self._read_version is not None:
self._read_version.commit_read(self, timestamp)
if self._written_version is not None:
self._written_version.commit_write(self, timestamp)
def __str__(self):
read_version = "?"
if self._read_version is not None:
read_version = self._read_version.get_rename()
written_version = "?"
if self._written_version is not None:
written_version = self._written_version.get_rename()
return read_version + " -> " + written_version
class WAccess(Access):
def __init__(self, timestamp):
super(WAccess, self).__init__(timestamp)
self._written_version = None
def get_written_version(self, timestamp):
return self._written_version
def set_written_version(self, version, timestamp):
self._written_version = version
def get_data(self):
if self._written_version is not None:
return self._written_version.get_data()
else:
            return None
def get_direction(self):
return "OUT"
def get_dependence(self):
return None
def register_write(self, data, timestamp):
self._written_version = data.register_update(self, timestamp)
def committed(self, timestamp):
if self._written_version is not None:
self._written_version.commit_write(self, timestamp)
def __str__(self):
written_version = "->?"
if self._written_version is not None:
written_version = "->"+self._written_version.get_rename()
return written_version
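The factory dispatch in `Access.create_access` maps direction strings onto the three access subclasses; a standalone sketch of just that mapping (returning class names as plain strings instead of constructing the real access objects):

```python
# Mirrors the branch order in Access.create_access: "IN"/"R" selects a read
# access, "INOUT"/"RW" a read-write access, and everything else (e.g. "OUT"
# or "W") falls through to a write access.
def access_kind(direction):
    d = direction.upper()
    if d in ("IN", "R"):
        return "RAccess"
    elif d in ("INOUT", "RW"):
        return "RWAccess"
    return "WAccess"

print([access_kind(x) for x in ("in", "rw", "OUT", "w")])
# ['RAccess', 'RWAccess', 'WAccess', 'WAccess']
```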
class DataRegister:
def __init__(self):
        self.data = {}  # data_id -> Data instance
self.last_registered_access = None
def register_access(self, direction, timestamp):
self.last_registered_access = Access.create_access(direction, timestamp)
def register_data(self, data_id, timestamp):
data = self.data.get(data_id)
if data is None:
data = Data(data_id, timestamp)
self.data[data_id] = data
return data
def get_datum(self, data_id):
return self.data.get(data_id)
def get_data(self):
return self.data.values()
def __str__(self):
string = ""
for key, value in self.data.items():
string = string + "\n * " + (str(key) + " -> "+str(value))
return string
class MainDataAccessStatus(Enum):
REQUESTED = 0
EXISTENCE_AWARE = 1
OBTAINED = 2
class MainDataAccess:
def __init__(self, access, data, timestamp):
self.access = access
self.access.register_read(data, timestamp)
self.state = MainDataAccessStatus.REQUESTED
self.access.get_read_version(timestamp).main_access_progress("requested", timestamp)
def exists(self, timestamp):
self.state = MainDataAccessStatus.EXISTENCE_AWARE
self.access.get_read_version(timestamp).main_access_progress("is aware of existence", timestamp)
def obtained(self, timestamp):
self.state = MainDataAccessStatus.OBTAINED
self.access.get_read_version(timestamp).main_access_progress("has the value on the node", timestamp)
def __str__(self):
return str(self.access) + " in state " + str(self.state)
class MainDataAccessRegister:
def __init__(self):
        self.ongoing_accesses = {}  # data_id -> MainDataAccess
self.accesses_count = 0
self.completed_accesses_count = 0
def register_access(self, access, datum, timestamp):
main_access_description = MainDataAccess(access, datum, timestamp)
self.ongoing_accesses[datum.get_id()] = main_access_description
self.accesses_count = self.accesses_count + 1
    def data_exists(self, data_id, timestamp):
        current_data_access = self.ongoing_accesses.get(data_id)
        if current_data_access is None:
            print("Data exists notification for unregistered access for data " + data_id)
        else:
            current_data_access.exists(timestamp)
    def data_obtained(self, data_id, timestamp):
        current_data_access = self.ongoing_accesses.get(data_id)
        if current_data_access is None:
            print("Available data value for unregistered access for data " + data_id)
        else:
            current_data_access.obtained(timestamp)
            del self.ongoing_accesses[data_id]
            self.completed_accesses_count = self.completed_accesses_count + 1
    def get_all_accesses_count(self):
        return self.accesses_count
    def get_completed_accesses_count(self):
        return self.completed_accesses_count
def get_pending_accesses(self):
return self.ongoing_accesses.values()
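The copy-on-write flow implemented above (reads attach to the newest version, writes append a fresh one via `register_consumer`/`register_update`) can be sketched in isolation. `MiniVersion`/`MiniData` below are simplified, hypothetical stand-ins for the `Version` and `Data` classes, not the real implementation:

```python
# Simplified stand-ins illustrating the register_consumer / register_update
# pattern: reads see the latest version, writes create a new version.
class MiniVersion:
    def __init__(self, number, timestamp):
        self.number = number
        self.timestamp = timestamp
        self.producer = None   # the access that wrote this version
        self.consumers = []    # accesses that have read this version

class MiniData:
    def __init__(self, timestamp):
        self._versions = [MiniVersion(1, timestamp)]

    def register_consumer(self, access, timestamp):
        v = self._versions[-1]   # reads always attach to the newest version
        v.consumers.append(access)
        return v

    def register_update(self, access, timestamp):
        v = MiniVersion(len(self._versions) + 1, timestamp)  # writes append a new one
        v.producer = access
        self._versions.append(v)
        return v

d = MiniData(0)
r = d.register_consumer("reader", 1)
w = d.register_update("writer", 2)
print(r.number, w.number, w.producer)  # 1 2 writer
```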
| 30.296203 | 112 | 0.637587 | 1,423 | 11,967 | 5.085032 | 0.090654 | 0.050166 | 0.032338 | 0.033168 | 0.576423 | 0.518519 | 0.478579 | 0.466833 | 0.439193 | 0.397181 | 0 | 0.001718 | 0.270577 | 11,967 | 394 | 113 | 30.373096 | 0.827243 | 0.013955 | 0 | 0.513514 | 0 | 0 | 0.036886 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.013514 | 0.010135 | 0.074324 | 0.469595 | 0.010135 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5143a930ff33b9a0eb7adad59c1687c5c52a46b5 | 833 | py | Python | conduit_ui_tests/sign_in/signin_steps.py | dduleba/tw2019-ui-tests | 5f149c6c2bdb9f2d69a02c038248374f6b0b5903 | [
"MIT"
] | 1 | 2019-09-27T23:12:07.000Z | 2019-09-27T23:12:07.000Z | conduit_ui_tests/sign_in/signin_steps.py | dduleba/conduit-tests | 5f149c6c2bdb9f2d69a02c038248374f6b0b5903 | [
"MIT"
] | null | null | null | conduit_ui_tests/sign_in/signin_steps.py | dduleba/conduit-tests | 5f149c6c2bdb9f2d69a02c038248374f6b0b5903 | [
"MIT"
] | null | null | null | from radish import steps, after
from radish_selenium.radish.selenium_base_steps import attach_screenshot_on_failure, \
attach_page_source_on_failure, attach_console_log_on_failure, close_web_browser
# from realworld_ui.sdk.page_objects.general import LoggedPageObject
from conduit_rest.radish.conduit_rest_steps import ConduitRestBaseSteps
from conduit_ui.radish.ui_steps import ConduitBaseSteps, SignInBaseSteps
@after.each_step
def on_failure(step):
attach_screenshot_on_failure(step)
attach_page_source_on_failure(step)
attach_console_log_on_failure(step)
@after.each_scenario
def test_cleanup(scenario):
close_web_browser(scenario)
@steps
class ConduitSteps(ConduitBaseSteps):
pass
@steps
class ConduitRestSteps(ConduitRestBaseSteps):
pass
@steps
class SignInSteps(SignInBaseSteps):
pass
| 25.242424 | 86 | 0.836735 | 107 | 833 | 6.140187 | 0.373832 | 0.09589 | 0.079148 | 0.086758 | 0.152207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108043 | 833 | 32 | 87 | 26.03125 | 0.884253 | 0.079232 | 0 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.136364 | 0.181818 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
514938d6e5f155ccafb92cff10d67e8a7f542275 | 4,528 | py | Python | torchvision/prototype/datasets/_builtin/cifar.py | yassineAlouini/vision-1 | ee26e9c260a255e2afb5e691e713349529170c8b | [
"BSD-3-Clause"
] | 1 | 2022-02-14T09:16:02.000Z | 2022-02-14T09:16:02.000Z | torchvision/prototype/datasets/_builtin/cifar.py | yassineAlouini/vision-1 | ee26e9c260a255e2afb5e691e713349529170c8b | [
"BSD-3-Clause"
] | null | null | null | torchvision/prototype/datasets/_builtin/cifar.py | yassineAlouini/vision-1 | ee26e9c260a255e2afb5e691e713349529170c8b | [
"BSD-3-Clause"
] | null | null | null | import abc
import io
import pathlib
import pickle
from typing import Any, Dict, List, Optional, Tuple, Iterator, cast, BinaryIO, Union
import numpy as np
from torchdata.datapipes.iter import (
IterDataPipe,
Filter,
Mapper,
)
from torchvision.prototype.datasets.utils import Dataset, HttpResource, OnlineResource
from torchvision.prototype.datasets.utils._internal import (
hint_shuffling,
path_comparator,
hint_sharding,
read_categories_file,
)
from torchvision.prototype.features import Label, Image
from .._api import register_dataset, register_info
class CifarFileReader(IterDataPipe[Tuple[np.ndarray, int]]):
def __init__(self, datapipe: IterDataPipe[Dict[str, Any]], *, labels_key: str) -> None:
self.datapipe = datapipe
self.labels_key = labels_key
def __iter__(self) -> Iterator[Tuple[np.ndarray, int]]:
for mapping in self.datapipe:
image_arrays = mapping["data"].reshape((-1, 3, 32, 32))
category_idcs = mapping[self.labels_key]
yield from iter(zip(image_arrays, category_idcs))
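Each row of a pickled CIFAR batch is a flat 3072-value vector (1024 red, then green, then blue values); the `reshape((-1, 3, 32, 32))` above turns the whole batch into per-image CHW arrays. A quick sketch with dummy data, assuming NumPy is available (as it is for this module):

```python
import numpy as np

# Two fake "images" stored as flat 3072-long rows, like a CIFAR batch.
batch = np.zeros((2, 3 * 32 * 32), dtype=np.uint8)
images = batch.reshape((-1, 3, 32, 32))  # -> (num_images, channels, H, W)
print(images.shape)  # (2, 3, 32, 32)
```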
class _CifarBase(Dataset):
_FILE_NAME: str
_SHA256: str
_LABELS_KEY: str
_META_FILE_NAME: str
_CATEGORIES_KEY: str
_categories: List[str]
def __init__(
self,
root: Union[str, pathlib.Path],
*,
split: str = "train",
skip_integrity_check: bool = False,
) -> None:
self._split = self._verify_str_arg(split, "split", ("train", "test"))
super().__init__(root, skip_integrity_check=skip_integrity_check)
@abc.abstractmethod
def _is_data_file(self, data: Tuple[str, BinaryIO]) -> Optional[int]:
pass
def _resources(self) -> List[OnlineResource]:
return [
HttpResource(
f"https://www.cs.toronto.edu/~kriz/{self._FILE_NAME}",
sha256=self._SHA256,
)
]
def _unpickle(self, data: Tuple[str, io.BytesIO]) -> Dict[str, Any]:
_, file = data
return cast(Dict[str, Any], pickle.load(file, encoding="latin1"))
def _prepare_sample(self, data: Tuple[np.ndarray, int]) -> Dict[str, Any]:
image_array, category_idx = data
return dict(
image=Image(image_array),
label=Label(category_idx, categories=self._categories),
)
def _datapipe(self, resource_dps: List[IterDataPipe]) -> IterDataPipe[Dict[str, Any]]:
dp = resource_dps[0]
dp = Filter(dp, self._is_data_file)
dp = Mapper(dp, self._unpickle)
dp = CifarFileReader(dp, labels_key=self._LABELS_KEY)
dp = hint_shuffling(dp)
dp = hint_sharding(dp)
return Mapper(dp, self._prepare_sample)
def __len__(self) -> int:
return 50_000 if self._split == "train" else 10_000
def _generate_categories(self) -> List[str]:
resources = self._resources()
dp = resources[0].load(self._root)
dp = Filter(dp, path_comparator("name", self._META_FILE_NAME))
dp = Mapper(dp, self._unpickle)
return cast(List[str], next(iter(dp))[self._CATEGORIES_KEY])
@register_info("cifar10")
def _cifar10_info() -> Dict[str, Any]:
return dict(categories=read_categories_file("cifar10"))
@register_dataset("cifar10")
class Cifar10(_CifarBase):
"""
- **homepage**: https://www.cs.toronto.edu/~kriz/cifar.html
"""
_FILE_NAME = "cifar-10-python.tar.gz"
_SHA256 = "6d958be074577803d12ecdefd02955f39262c83c16fe9348329d7fe0b5c001ce"
_LABELS_KEY = "labels"
_META_FILE_NAME = "batches.meta"
_CATEGORIES_KEY = "label_names"
_categories = _cifar10_info()["categories"]
def _is_data_file(self, data: Tuple[str, Any]) -> bool:
path = pathlib.Path(data[0])
return path.name.startswith("data" if self._split == "train" else "test")
@register_info("cifar100")
def _cifar100_info() -> Dict[str, Any]:
return dict(categories=read_categories_file("cifar100"))
@register_dataset("cifar100")
class Cifar100(_CifarBase):
"""
- **homepage**: https://www.cs.toronto.edu/~kriz/cifar.html
"""
_FILE_NAME = "cifar-100-python.tar.gz"
_SHA256 = "85cd44d02ba6437773c5bbd22e183051d648de2e7d6b014e1ef29b855ba677a7"
_LABELS_KEY = "fine_labels"
_META_FILE_NAME = "meta"
_CATEGORIES_KEY = "fine_label_names"
_categories = _cifar100_info()["categories"]
def _is_data_file(self, data: Tuple[str, Any]) -> bool:
path = pathlib.Path(data[0])
return path.name == self._split
| 31.444444 | 91 | 0.662323 | 540 | 4,528 | 5.266667 | 0.251852 | 0.018987 | 0.024613 | 0.022504 | 0.214135 | 0.158579 | 0.150141 | 0.150141 | 0.139944 | 0.139944 | 0 | 0.043295 | 0.214443 | 4,528 | 143 | 92 | 31.664336 | 0.756255 | 0.026281 | 0 | 0.055556 | 0 | 0 | 0.091138 | 0.039516 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12963 | false | 0.009259 | 0.101852 | 0.037037 | 0.527778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
515e6016ab981fef14aba771dd1e8745df2000d2 | 24,147 | py | Python | libspn/tests/test_partition.py | pronobis/libspn | b98141ea5a609a02706433220758e58f46bd3f5e | [
"MIT"
] | 22 | 2019-03-01T15:58:20.000Z | 2022-02-18T10:32:04.000Z | libspn/tests/test_partition.py | pronobis/libspn | b98141ea5a609a02706433220758e58f46bd3f5e | [
"MIT"
] | 10 | 2019-03-03T18:15:24.000Z | 2021-05-04T09:02:55.000Z | libspn/tests/test_partition.py | pronobis/libspn | b98141ea5a609a02706433220758e58f46bd3f5e | [
"MIT"
] | 8 | 2019-03-22T20:45:20.000Z | 2021-05-03T13:22:09.000Z | #!/usr/bin/env python3
from context import libspn as spn
from test import TestCase
import numpy as np
import random
import tensorflow as tf
def assert_list_elements_equal(list1, list2):
"""Check if lists have the same elements."""
for l1 in list1:
if l1 not in list2:
raise AssertionError("List elements differ: %s != %s" % (list1, list2))
class TestPartition(TestCase):
@classmethod
def setUpClass(cls):
super(TestPartition, cls).setUpClass()
cls.test_set = [1, 2, 3, 4, 5]
# Partitions for num_subsets=[1; 4]
cls.possible_partitions = [None] * len(cls.test_set)
cls.possible_partitions[0] = [[{1, 2, 3, 4, 5}]]
cls.possible_partitions[1] = [[{1}, {2, 3, 4, 5}],
[{1, 5}, {2, 3, 4}],
[{1, 2, 3, 4}, {5}],
[{1, 3, 5}, {2, 4}],
[{1, 4}, {2, 3, 5}],
[{1, 2, 3}, {4, 5}],
[{1, 3, 4}, {2, 5}],
[{1, 2, 4}, {3, 5}],
[{1, 4, 5}, {2, 3}],
[{1, 2, 4, 5}, {3}],
[{1, 3}, {2, 4, 5}],
[{1, 3, 4, 5}, {2}],
[{1, 2}, {3, 4, 5}],
[{1, 2, 3, 5}, {4}],
[{1, 2, 5}, {3, 4}]]
cls.possible_partitions[2] = [[{1}, {2, 4}, {3, 5}],
[{1, 4}, {2, 5}, {3}],
[{1, 2}, {3, 4}, {5}],
[{1, 3}, {2}, {4, 5}],
[{1, 2, 5}, {3}, {4}],
[{1, 2, 4}, {3}, {5}],
[{1, 4}, {2}, {3, 5}],
[{1, 2, 3}, {4}, {5}],
[{1, 2}, {3, 5}, {4}],
[{1, 5}, {2, 4}, {3}],
[{1, 3, 4}, {2}, {5}],
[{1}, {2, 5}, {3, 4}],
[{1}, {2}, {3, 4, 5}],
[{1, 4}, {2, 3}, {5}],
[{1, 3}, {2, 4}, {5}],
[{1, 5}, {2, 3}, {4}],
[{1}, {2, 4, 5}, {3}],
[{1, 4, 5}, {2}, {3}],
[{1}, {2, 3}, {4, 5}],
[{1}, {2, 3, 5}, {4}],
[{1, 3, 5}, {2}, {4}],
[{1, 3}, {2, 5}, {4}],
[{1}, {2, 3, 4}, {5}],
[{1, 2}, {3}, {4, 5}],
[{1, 5}, {2}, {3, 4}]]
cls.possible_partitions[3] = [[{1, 2}, {3}, {4}, {5}],
[{1}, {2}, {3, 5}, {4}],
[{1, 5}, {2}, {3}, {4}],
[{1}, {2, 4}, {3}, {5}],
[{1}, {2}, {3}, {4, 5}],
[{1}, {2}, {3, 4}, {5}],
[{1}, {2, 3}, {4}, {5}],
[{1, 4}, {2}, {3}, {5}],
[{1, 3}, {2}, {4}, {5}],
[{1}, {2, 5}, {3}, {4}]]
cls.possible_partitions[4] = [[{1}, {2}, {3}, {4}, {5}]]
# Balanced partitions for num_subsets=[1; 4]
cls.possible_balanced_partitions = [None] * len(cls.test_set)
cls.possible_balanced_partitions[0] = [[{1, 2, 3, 4, 5}]]
cls.possible_balanced_partitions[1] = [[{1, 5}, {2, 3, 4}],
[{1, 3, 5}, {2, 4}],
[{1, 4}, {2, 3, 5}],
[{1, 2, 3}, {4, 5}],
[{1, 3, 4}, {2, 5}],
[{1, 2, 4}, {3, 5}],
[{1, 4, 5}, {2, 3}],
[{1, 3}, {2, 4, 5}],
[{1, 2}, {3, 4, 5}],
[{1, 2, 5}, {3, 4}]]
cls.possible_balanced_partitions[2] = [[{1}, {2, 4}, {3, 5}],
[{1, 4}, {2, 5}, {3}],
[{1, 2}, {3, 4}, {5}],
[{1, 3}, {2}, {4, 5}],
[{1, 4}, {2}, {3, 5}],
[{1, 2}, {3, 5}, {4}],
[{1, 5}, {2, 4}, {3}],
[{1}, {2, 5}, {3, 4}],
[{1, 4}, {2, 3}, {5}],
[{1, 3}, {2, 4}, {5}],
[{1, 5}, {2, 3}, {4}],
[{1}, {2, 3}, {4, 5}],
[{1, 3}, {2, 5}, {4}],
[{1, 2}, {3}, {4, 5}],
[{1, 5}, {2}, {3, 4}]]
cls.possible_balanced_partitions[3] = [[{1, 2}, {3}, {4}, {5}],
[{1}, {2}, {3, 5}, {4}],
[{1, 5}, {2}, {3}, {4}],
[{1}, {2, 4}, {3}, {5}],
[{1}, {2}, {3}, {4, 5}],
[{1}, {2}, {3, 4}, {5}],
[{1}, {2, 3}, {4}, {5}],
[{1, 4}, {2}, {3}, {5}],
[{1, 3}, {2}, {4}, {5}],
[{1}, {2, 5}, {3}, {4}]]
cls.possible_balanced_partitions[4] = [[{1}, {2}, {3}, {4}, {5}]]
def test_stirling_number_args(self):
"""Argument verification of StirlingRatio."""
s = spn.utils.StirlingNumber()
with self.assertRaises(ValueError):
s[0, 0]
with self.assertRaises(ValueError):
s[2, 0]
with self.assertRaises(ValueError):
s[1, 2]
with self.assertRaises(IndexError):
s[1]
with self.assertRaises(IndexError):
s[1, 1, 1]
def test_stirling_number_allocation(self):
"""Test memory allocation of StirlingNumber cache."""
s = spn.utils.StirlingNumber()
self.assertListEqual(list(s._StirlingNumber__numbers.shape), [100, 100])
s[100, 100]
self.assertListEqual(list(s._StirlingNumber__numbers.shape), [100, 100])
s[101, 100]
self.assertListEqual(list(s._StirlingNumber__numbers.shape), [200, 100])
s[101, 101]
self.assertListEqual(list(s._StirlingNumber__numbers.shape), [200, 200])
s[601, 51]
self.assertListEqual(list(s._StirlingNumber__numbers.shape), [601, 200])
s[901, 401]
self.assertListEqual(list(s._StirlingNumber__numbers.shape), [1202, 401])
def test_stirling_number(self):
"""Calculation of Stirling number."""
def test(s, n, k, true_array):
s[n, k]
s_array = np.tril(s._StirlingNumber__numbers[:n, :k])
np.testing.assert_array_equal(s_array, np.array(true_array))
# Initialize
s = spn.utils.StirlingNumber()
# Run tests
test(s, 1, 1, [[1]])
test(s, 5, 3, [[1, 0, 0],
[1, 1, 0],
[1, 3, 1],
[1, 7, 6],
[1, 15, 25]])
test(s, 5, 5, [[1, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[1, 3, 1, 0, 0],
[1, 7, 6, 1, 0],
[1, 15, 25, 10, 1]])
test(s, 10, 8, [[1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0],
[1, 3, 1, 0, 0, 0, 0, 0],
[1, 7, 6, 1, 0, 0, 0, 0],
[1, 15, 25, 10, 1, 0, 0, 0],
[1, 31, 90, 65, 15, 1, 0, 0],
[1, 63, 301, 350, 140, 21, 1, 0],
[1, 127, 966, 1701, 1050, 266, 28, 1],
[1, 255, 3025, 7770, 6951, 2646, 462, 36],
[1, 511, 9330, 34105, 42525, 22827, 5880, 750]])
test(s, 10, 10, [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 3, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 7, 6, 1, 0, 0, 0, 0, 0, 0],
[1, 15, 25, 10, 1, 0, 0, 0, 0, 0],
[1, 31, 90, 65, 15, 1, 0, 0, 0, 0],
[1, 63, 301, 350, 140, 21, 1, 0, 0, 0],
[1, 127, 966, 1701, 1050, 266, 28, 1, 0, 0],
[1, 255, 3025, 7770, 6951, 2646, 462, 36, 1, 0],
[1, 511, 9330, 34105, 42525, 22827, 5880, 750, 45, 1]])
# Test if indexing works as it should
self.assertEqual(s[9, 4], 7770)
self.assertEqual(s[26, 8], 5749622251945664950)
# Test overflow detection
# Max 64 bit int is: 9223372036854775807
# s[26, 9] is larger than that
self.assertEqual(s[26, 9], -1)
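The cached values tested above are Stirling numbers of the second kind, which satisfy the recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1). A direct, unmemoized sketch (not the library's cached implementation) reproduces the table entries:

```python
# Stirling numbers of the second kind via the standard recurrence.
def stirling2(n, k):
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(5, 3), stirling2(9, 4))  # 25 7770
```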
def test_stirling_ratio_args(self):
"""Argument verification of StirlingRatio."""
r = spn.utils.StirlingRatio()
with self.assertRaises(ValueError):
r[0, 0]
with self.assertRaises(ValueError):
r[2, 0]
with self.assertRaises(ValueError):
r[1, 2]
with self.assertRaises(IndexError):
r[1]
with self.assertRaises(IndexError):
r[1, 1, 1]
def test_stirling_ratio_allocation(self):
"""Test memory allocation of StirlingRatio cache."""
r = spn.utils.StirlingRatio()
self.assertListEqual(list(r._StirlingRatio__ratios.shape), [100, 100])
r[100, 100]
self.assertListEqual(list(r._StirlingRatio__ratios.shape), [100, 100])
r[101, 100]
self.assertListEqual(list(r._StirlingRatio__ratios.shape), [200, 100])
r[101, 101]
self.assertListEqual(list(r._StirlingRatio__ratios.shape), [200, 200])
r[601, 51]
self.assertListEqual(list(r._StirlingRatio__ratios.shape), [601, 200])
r[901, 401]
self.assertListEqual(list(r._StirlingRatio__ratios.shape), [1202, 401])
def test_stirling_ratio(self):
"""Calculation of Stirling ratio."""
def test(r, s, n, k, size_n, size_k):
"""Compare ratio calculated explicitly with StirlingNumber
with the ratio from StirlingRatio."""
self.assertAlmostEqual(s[n + 1, k] / s[n, k], r[n, k])
r1 = (np.tril(s._StirlingNumber__numbers[1:size_n + 1, :size_k]) /
(np.tril(s._StirlingNumber__numbers[:size_n, :size_k]) +
(1 - np.tri(size_n, size_k))))
r2 = np.tril(r._StirlingRatio__ratios[:size_n, :size_k])
np.testing.assert_array_almost_equal(r1, r2)
# Initialize
r = spn.utils.StirlingRatio()
s = spn.utils.StirlingNumber()
# Test for growing values of n, k
test(r, s, 1, 1, 1, 1)
test(r, s, 4, 1, 4, 1)
test(r, s, 4, 3, 4, 3)
test(r, s, 4, 4, 4, 4)
test(r, s, 10, 3, 10, 4)
test(r, s, 10, 6, 10, 6)
test(r, s, 10, 10, 10, 10)
test(r, s, 14, 5, 14, 10)
test(r, s, 12, 12, 14, 12)
test(r, s, 14, 14, 14, 14)
# Test if indexing works as it should
self.assertAlmostEqual(r[9, 4], 4.38931788932)
def test_random_partition_args(self):
"""Argument verification of random_partition."""
# input_set
with self.assertRaises(TypeError):
spn.utils.random_partition(1, 1)
with self.assertRaises(ValueError):
spn.utils.random_partition([], 1)
# num_subsets
with self.assertRaises(ValueError):
spn.utils.random_partition([1], 0)
with self.assertRaises(ValueError):
spn.utils.random_partition([1], 2)
# stirling
with self.assertRaises(TypeError):
spn.utils.random_partition([1], 1, stirling=list())
# rnd
with self.assertRaises(TypeError):
spn.utils.random_partition([1], 1, rnd=list())
def test_random_partition(self):
"""Test sampling random partitions."""
stirling = spn.utils.Stirling()
for num_subsets in range(1, len(TestPartition.test_set) + 1):
# Run test for various num_subsets
with self.subTest(num_subsets=num_subsets):
possible_partitions = TestPartition.possible_partitions[num_subsets - 1]
counts = [0 for p in possible_partitions]
# Sample many times
num_tests = 10000
for _ in range(num_tests):
out = spn.utils.random_partition(TestPartition.test_set,
num_subsets, stirling)
i = possible_partitions.index(out)
counts[i] += 1
# Check if counts are uniform
expected = num_tests / len(possible_partitions)
for c in counts:
self.assertGreater(c, 0.8 * expected)
self.assertLess(c, 1.2 * expected)
def test_random_partition_customrnd(self):
"""Test sampling random partitions."""
stirling = spn.utils.Stirling()
for num_subsets in range(1, len(TestPartition.test_set) + 1):
# Run test for various num_subsets
with self.subTest(num_subsets=num_subsets):
possible_partitions = TestPartition.possible_partitions[num_subsets - 1]
# TEST 1 - Initialize seed with 100
rnd = random.Random(100)
counts1 = [0 for p in possible_partitions]
# Sample many times
num_tests = 10000
for _ in range(num_tests):
out = spn.utils.random_partition(TestPartition.test_set,
num_subsets, stirling,
rnd)
i = possible_partitions.index(out)
counts1[i] += 1
# Check if counts are uniform
expected = num_tests / len(possible_partitions)
for c in counts1:
self.assertGreater(c, 0.8 * expected)
self.assertLess(c, 1.2 * expected)
# TEST 2 - Initialize seed with 100
rnd = random.Random(100) # Use seed 100
counts2 = [0 for p in possible_partitions]
# Sample many times
num_tests = 10000
for _ in range(num_tests):
out = spn.utils.random_partition(TestPartition.test_set,
num_subsets, stirling,
rnd)
i = possible_partitions.index(out)
counts2[i] += 1
# Check if counts are uniform
expected = num_tests / len(possible_partitions)
for c in counts2:
self.assertGreater(c, 0.8 * expected)
self.assertLess(c, 1.2 * expected)
# COMPARE IF COUNTS ARE IDENTICAL
self.assertListEqual(counts1, counts2)
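The two runs above compare equal because `random.Random(seed)` is a self-contained generator instance: equal seeds yield identical streams, independent of the global `random` state. In isolation:

```python
import random

r1, r2 = random.Random(100), random.Random(100)
s1 = [r1.randint(0, 9) for _ in range(5)]
s2 = [r2.randint(0, 9) for _ in range(5)]
print(s1 == s2)  # True
```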
def test_all_partitions_args(self):
"""Argument verification of all_partitions."""
# input_set
with self.assertRaises(TypeError):
spn.utils.all_partitions(1, 1)
with self.assertRaises(ValueError):
spn.utils.all_partitions([], 1)
# num_subsets
with self.assertRaises(ValueError):
spn.utils.all_partitions([1], 0)
with self.assertRaises(ValueError):
spn.utils.all_partitions([1], 2)
def test_all_partitions(self):
"""Test generation of all partitions of a set."""
for num_subsets in range(1, len(TestPartition.test_set) + 1):
# Run test for various num_subsets
with self.subTest(num_subsets=num_subsets):
possible_partitions = TestPartition.possible_partitions[num_subsets - 1]
out = spn.utils.all_partitions(TestPartition.test_set,
num_subsets)
# Note, we cannot test the below by converting them to sets
# since the elements are not hashable.
assert_list_elements_equal(possible_partitions, out)
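`all_partitions` enumerates every way to split the input set into exactly `num_subsets` non-empty blocks, so the count equals the Stirling number S(n, k). A minimal generator sketch (not the library's implementation) realizing the same enumeration:

```python
# Place the first element into each block of a smaller partition, or give it
# a block of its own; together this yields every partition into k blocks.
def partitions(elems, k):
    if not elems:
        if k == 0:
            yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest, k):        # first joins an existing block
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
    for p in partitions(rest, k - 1):    # first gets its own block
        yield [[first]] + p

print(sum(1 for _ in partitions([1, 2, 3, 4, 5], 2)))  # 15
```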
def run_test_random_partitions(self, fun, balanced):
"""Generic test for sampling a subset of random partitions."""
def sample_many(rnd=None):
"""Sample many times."""
counts = [0 for p in possible_partitions]
num_tests = 2000
            import inspect
            # Sample partitions many times
            for _ in range(num_tests):
# Since we request `max_num_partitions` which is less than
# all possible partitions in some cases, and more in others,
# we test all possibilities
if len(inspect.signature(fun).parameters) > 5:
out = fun(TestPartition.test_set, num_subsets,
max_num_partitions, balanced=balanced,
stirling=stirling, rnd=rnd)
else:
out = fun(TestPartition.test_set, num_subsets,
max_num_partitions, balanced=balanced,
rnd=rnd)
# Verify the sample
self.assertEqual(len(out), num_partitions)
# Count partitions
for p in out:
i = possible_partitions.index(p)
counts[i] += 1
# Check if counts are uniform
expected = (num_tests * num_partitions) / len(possible_partitions)
for c in counts:
self.assertGreater(c, 0.8 * expected)
self.assertLess(c, 1.2 * expected)
return counts
max_num_partitions = 3
stirling = spn.utils.Stirling()
for num_subsets in range(1, len(TestPartition.test_set) + 1):
# Run test for various num_subsets
with self.subTest(num_subsets=num_subsets):
num_partitions = min(stirling.number[len(TestPartition.test_set),
num_subsets],
max_num_partitions)
if balanced:
possible_partitions = TestPartition.possible_balanced_partitions[
num_subsets - 1]
else:
possible_partitions = TestPartition.possible_partitions[num_subsets - 1]
# Test with rnd = None
sample_many(None)
# Test with custom rnd
c1 = sample_many(random.Random(100))
c2 = sample_many(random.Random(100))
self.assertListEqual(c1, c2)
def test_random_partitions_by_sampling_args(self):
"""Argument verification of random_partitions_by_sampling."""
# input_set
with self.assertRaises(TypeError):
spn.utils.random_partitions_by_sampling(1, 1, 1)
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_sampling([], 1, 1)
# stirling
with self.assertRaises(TypeError):
spn.utils.random_partitions_by_sampling([1], 1, 1, True, stirling=list())
# rnd
with self.assertRaises(TypeError):
spn.utils.random_partitions_by_sampling([1], 1, 1, True, rnd=list())
# num_partitions
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_sampling([1], 1, 0)
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_sampling([1], 1, np.iinfo(int).max + 1)
# num_subsets
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_sampling([1], 0, 1)
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_sampling([1], 2, 1)
def test_random_partitions_by_sampling(self):
"""Test sampling a subset of random partitions by repeated sampling."""
self.run_test_random_partitions(spn.utils.random_partitions_by_sampling,
balanced=False)
self.run_test_random_partitions(spn.utils.random_partitions_by_sampling,
balanced=True)
def test_random_partitions_by_enumeration_args(self):
"""Argument verification of random_partitions_by_enumeration."""
# input_set
with self.assertRaises(TypeError):
spn.utils.random_partitions_by_enumeration(1, 1, 1)
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_enumeration([], 1, 1)
# num_partitions
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_enumeration([1], 1, 0)
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_enumeration([1], 1,
np.iinfo(int).max + 1)
# num_subsets
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_enumeration([1], 0, 1)
with self.assertRaises(ValueError):
spn.utils.random_partitions_by_enumeration([1], 2, 1)
# rnd
with self.assertRaises(TypeError):
                spn.utils.random_partitions_by_enumeration([1], 1, 1, True, list())
def test_random_partitions_by_enumeration(self):
"""Test sampling a subset of random partitions by enumeration."""
self.run_test_random_partitions(spn.utils.random_partitions_by_enumeration,
balanced=False)
self.run_test_random_partitions(spn.utils.random_partitions_by_enumeration,
balanced=True)
def test_random_partitions_args(self):
"""Argument verification of random_partitions."""
# input_set
with self.assertRaises(TypeError):
spn.utils.random_partitions(1, 1, 1)
with self.assertRaises(ValueError):
spn.utils.random_partitions([], 1, 1)
# stirling
with self.assertRaises(TypeError):
spn.utils.random_partitions([1], 1, 1, True, stirling=list())
# rnd
with self.assertRaises(TypeError):
spn.utils.random_partitions([1], 1, 1, True, rnd=list())
# num_partitions
with self.assertRaises(ValueError):
spn.utils.random_partitions([1], 1, 0)
with self.assertRaises(ValueError):
spn.utils.random_partitions([1], 1, np.iinfo(int).max + 1)
# num_subsets
with self.assertRaises(ValueError):
spn.utils.random_partitions([1], 0, 1)
with self.assertRaises(ValueError):
spn.utils.random_partitions([1], 2, 1)
if __name__ == '__main__':
tf.test.main()
| 48.487952 | 92 | 0.44755 | 2,623 | 24,147 | 3.981319 | 0.091498 | 0.01264 | 0.013215 | 0.012257 | 0.766638 | 0.715312 | 0.638131 | 0.60835 | 0.534042 | 0.468735 | 0 | 0.094331 | 0.423572 | 24,147 | 497 | 93 | 48.585513 | 0.655938 | 0.091399 | 0 | 0.464467 | 0 | 0 | 0.001746 | 0 | 0 | 0 | 0 | 0 | 0.192893 | 1 | 0.055838 | false | 0 | 0.015228 | 0 | 0.076142 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
515ff9d70a03bd916a813d03ff79f40b0a4d32a2 | 358 | py | Python | dist/snippets/maps_http_places_textsearch_incomplete_address/maps_http_places_textsearch_incomplete_address.py | Mike-Tran/openapi-specification | 3cbe1afa3539143943538b51a49fa621a3a3ab66 | [
"Apache-2.0"
] | 37 | 2021-05-01T20:55:14.000Z | 2022-03-28T12:35:20.000Z | dist/snippets/maps_http_places_textsearch_incomplete_address/maps_http_places_textsearch_incomplete_address.py | Mike-Tran/openapi-specification | 3cbe1afa3539143943538b51a49fa621a3a3ab66 | [
"Apache-2.0"
] | 244 | 2021-03-16T00:07:42.000Z | 2022-03-29T17:21:42.000Z | dist/snippets/maps_http_places_textsearch_incomplete_address/maps_http_places_textsearch_incomplete_address.py | Mike-Tran/openapi-specification | 3cbe1afa3539143943538b51a49fa621a3a3ab66 | [
"Apache-2.0"
] | 28 | 2021-05-03T02:41:49.000Z | 2022-03-31T12:07:52.000Z | # [START maps_http_places_textsearch_incomplete_address]
import requests
url = "https://maps.googleapis.com/maps/api/place/textsearch/json?query=123%20main%20street&key=YOUR_API_KEY"
payload={}
headers = {}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
# [END maps_http_places_textsearch_incomplete_address] | 27.538462 | 109 | 0.801676 | 48 | 358 | 5.729167 | 0.645833 | 0.058182 | 0.101818 | 0.174545 | 0.298182 | 0.298182 | 0 | 0 | 0 | 0 | 0 | 0.021084 | 0.072626 | 358 | 13 | 110 | 27.538462 | 0.807229 | 0.298883 | 0 | 0 | 0 | 0.166667 | 0.417671 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
516363767adb5e46529c4c8a6d02a33469f596dd | 2,681 | py | Python | main.py | cmungall/ontology-term-usage | 8fcb42cb95d45db4bb6b1589293bcea019113326 | [
"CC0-1.0"
] | 2 | 2021-11-11T08:32:42.000Z | 2021-12-07T22:06:20.000Z | main.py | cmungall/ontology-term-usage | 8fcb42cb95d45db4bb6b1589293bcea019113326 | [
"CC0-1.0"
] | 1 | 2021-11-11T02:58:37.000Z | 2021-11-11T02:58:37.000Z | main.py | cmungall/ontology-term-usage | 8fcb42cb95d45db4bb6b1589293bcea019113326 | [
"CC0-1.0"
] | null | null | null | # TODO: figure out how to put this in the app/ folder and still use serverless
# This line: `handler: main.handler`
# How do we specify a path here, as per uvicorn?
import os
from enum import Enum
from typing import Optional
from pydantic import BaseModel
from fastapi import FastAPI, Query
# for lambda; see https://adem.sh/blog/tutorial-fastapi-aws-lambda-serverless
from mangum import Mangum
from ontology_term_usage.term_usage import OntologyClient, ResultSet, TermUsage, TERM, ServiceMetadataCollection
# necessary for serverless/lambda
stage = os.environ.get('STAGE', None)
openapi_prefix = f"/{stage}" if stage else "/"
client = OntologyClient()
description = """
Wraps multiple endpoints to query for all usages of a term, including
* Terms used in logical definitions in external ontologies
* Terms used in annotation of entities like genes and proteins
* Terms used in specialized annotation such as GO-CAMs
"""
app = FastAPI(title='Ontology Usage API',
description=description,
              contact={
"name": "Chris Mungall",
"url": "https://github.com/cmungall/ontology-term-usage",
"email": "cjmungall AT lbl DOT gov",
},
openapi_prefix=openapi_prefix)
tags_metadata = [
{
"name": "usages",
"description": "Operations on term usages",
"externalDocs": {
"description": "External docs",
"url": "https://github.com/cmungall/ontology-term-usage",
},
},
{
"name": "metadata",
"description": "Operations to discover more information about system configuration.",
"externalDocs": {
"description": "External docs",
"url": "https://github.com/cmungall/ontology-term-usage",
},
},
]
@app.get("/")
async def root():
return {"message": "Hello World"}
@app.get("/usage/{term}", response_model=ResultSet, summary='Find usages of a term', tags=["usages"])
async def usage(term: TERM, limit: Optional[int] = None) -> ResultSet:
"""
Find all usages of an ontology term across multiple services.
To obtain metadata on all services called, use the services endpoint
Example terms: GO:0006915 (apoptotic process), RO:0000057 (has participant)
\f
:param term: URI or CURIE of a term.
:param limit: maximum number of usages
:return: usages broken down by service
"""
rs = client.term_usage(term, limit=limit)
return rs
@app.get("/metadata", response_model=ServiceMetadataCollection, tags=["metadata"])
async def metadata() -> ServiceMetadataCollection:
return client.get_services()
handler = Mangum(app) | 31.541176 | 112 | 0.666915 | 323 | 2,681 | 5.501548 | 0.47678 | 0.030388 | 0.038267 | 0.0287 | 0.110298 | 0.110298 | 0.110298 | 0.110298 | 0.086663 | 0.086663 | 0 | 0.006724 | 0.223424 | 2,681 | 85 | 113 | 31.541176 | 0.846782 | 0.099217 | 0 | 0.132075 | 0 | 0 | 0.372491 | 0 | 0 | 0 | 0 | 0.011765 | 0 | 1 | 0 | false | 0 | 0.132075 | 0 | 0.188679 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
516366f3ce6321c828167d27328552506084f971 | 881 | py | Python | migrations/versions/0035.py | NewAcropolis/api | 61ffe14cb64407ffe1f58d0e970703bf07d60ea3 | [
"MIT"
] | 1 | 2018-10-12T15:04:31.000Z | 2018-10-12T15:04:31.000Z | migrations/versions/0035.py | NewAcropolis/api | 61ffe14cb64407ffe1f58d0e970703bf07d60ea3 | [
"MIT"
] | 169 | 2017-11-07T00:45:25.000Z | 2022-03-12T00:08:59.000Z | migrations/versions/0035.py | NewAcropolis/api | 61ffe14cb64407ffe1f58d0e970703bf07d60ea3 | [
"MIT"
] | 1 | 2019-08-15T14:51:31.000Z | 2019-08-15T14:51:31.000Z | """empty message
Revision ID: 0035 add basic email template
Revises: 0034 add send_after to emails
Create Date: 2019-10-30 00:01:13.441215
"""
# revision identifiers, used by Alembic.
revision = '0035 add basic email template'
down_revision = '0034 add send_after to emails'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from app.models import EmailType, EMAIL_TYPES
def upgrade():
conn = op.get_bind()
res = conn.execute("SELECT email_type FROM email_types")
email_types = res.fetchall()
    # migration 0025 already inserts all EMAIL_TYPES, so only add the missing email types
for email_type in EMAIL_TYPES:
if email_type not in [e[0] for e in email_types]:
op.execute(
"INSERT INTO email_types (email_type) VALUES ('{}')".format(email_type)
)
def downgrade():
pass
| 25.171429 | 87 | 0.702611 | 132 | 881 | 4.568182 | 0.530303 | 0.13267 | 0.039801 | 0.056385 | 0.162521 | 0.079602 | 0 | 0 | 0 | 0 | 0 | 0.059593 | 0.219069 | 881 | 34 | 88 | 25.911765 | 0.81686 | 0.284904 | 0 | 0 | 0 | 0 | 0.228663 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.058824 | 0.235294 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
51693d983dbdb9e8a3c0e6a2b046c58c72c14680 | 2,263 | py | Python | setup.py | cowanml/samplemangler | cd2b772beb74cf5d2106cd67e74e95ebafc74735 | [
"BSD-3-Clause"
] | null | null | null | setup.py | cowanml/samplemangler | cd2b772beb74cf5d2106cd67e74e95ebafc74735 | [
"BSD-3-Clause"
] | null | null | null | setup.py | cowanml/samplemangler | cd2b772beb74cf5d2106cd67e74e95ebafc74735 | [
"BSD-3-Clause"
] | null | null | null | # -*- encoding: utf-8 -*-
import glob
import io
import re
from os.path import basename
from os.path import dirname
from os.path import join
from os.path import splitext
from setuptools import find_packages
from setuptools import setup
def read(*names, **kwargs):
return io.open(
join(dirname(__file__), *names),
encoding=kwargs.get("encoding", "utf8")
).read()
setup(
name="sampleMangler",
version="0.1.1",
license="BSD",
description="Adapter layer between sampleManager and legacy api.",
long_description="%s\n%s" % (read("README.rst"), re.sub(":obj:`~?(.*?)`", r"``\1``", read("CHANGELOG.rst"))),
author="Matt Cowan",
author_email="cowan@bnl.gov",
url="https://github.com/cowanml/sampleMangler",
packages=find_packages("src"),
package_dir={"": "src"},
py_modules=[splitext(basename(i))[0] for i in glob.glob("src/*.py")],
include_package_data=True,
## zip_safe=False,
# zip_safe=True,
classifiers=[
# complete classifier list: http://pypi.python.org/pypi?%3Aaction=list_classifiers
        "Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Operating System :: Unix",
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 2.6",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Utilities",
],
keywords=[
# eg: "keyword1", "keyword2", "keyword3",
],
install_requires=[
# eg: "aspectlib==1.1.1", "six>=1.7",
"pymongo",
"sampleManager"
],
extras_require={
# eg: 'rst': ["docutils>=0.11"],
},
entry_points={
"console_scripts": [
"sampleMangler = sampleMangler.__main__:main"
]
},
# don't do this...? just abstract dependencies here, concrete in requirements.txt...
# dependency_links = [
# "git+https://github.com/NSLS-II/sampleManager.git"
# ],
)
| 31.430556 | 113 | 0.604507 | 250 | 2,263 | 5.376 | 0.568 | 0.113095 | 0.14881 | 0.047619 | 0.040179 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016657 | 0.230667 | 2,263 | 71 | 114 | 31.873239 | 0.755313 | 0.185594 | 0 | 0.053571 | 0 | 0 | 0.407104 | 0.014754 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017857 | true | 0 | 0.160714 | 0.017857 | 0.196429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
516adb054f9d8040e8884b18e04338d797104ab6 | 291 | py | Python | Mundo 1/ex030.py | judigunkel/judi-exercicios-python | c61bb75b1ae6141defcf42214194e141a70af15d | [
"MIT"
] | null | null | null | Mundo 1/ex030.py | judigunkel/judi-exercicios-python | c61bb75b1ae6141defcf42214194e141a70af15d | [
"MIT"
] | null | null | null | Mundo 1/ex030.py | judigunkel/judi-exercicios-python | c61bb75b1ae6141defcf42214194e141a70af15d | [
"MIT"
] | 1 | 2021-03-06T02:41:36.000Z | 2021-03-06T02:41:36.000Z | """
30 - Crie um programa que leia um número inteiro qualquer e mostre na tela se
ele é par ou ímpar
"""
num = int(input('\033[35mDigite um número qualquer: \033[m'))
if num % 2 == 0:
print(f'O número {num} é \033[34mPAR\033[m')
else:
print(f'O número {num} é \033[34mÍMPAR\033[m.')
| 29.1 | 77 | 0.656357 | 55 | 291 | 3.472727 | 0.618182 | 0.062827 | 0.073298 | 0.136126 | 0.209424 | 0.209424 | 0.209424 | 0 | 0 | 0 | 0 | 0.118644 | 0.189003 | 291 | 9 | 78 | 32.333333 | 0.690678 | 0.329897 | 0 | 0 | 0 | 0 | 0.59893 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a8c45b620d79f635c96d387b1c36d837c1ad948 | 978 | py | Python | scripts/pastry.py | arthurwpessoa/machine_learning | a63e4d0f1effc5ab07eedeb5664d845708edfb05 | [
"MIT"
] | 2 | 2021-03-16T16:58:29.000Z | 2021-11-08T13:05:45.000Z | scripts/pastry.py | arthurwpessoa/machine_learning | a63e4d0f1effc5ab07eedeb5664d845708edfb05 | [
"MIT"
] | null | null | null | scripts/pastry.py | arthurwpessoa/machine_learning | a63e4d0f1effc5ab07eedeb5664d845708edfb05 | [
"MIT"
] | null | null | null | # Required libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import datetime, re
#Reading pastry dataset
df = pd.read_csv('../datasets/coffee_shop/pastry.csv')
# Drops null values
df.dropna(inplace = True)
# Replaces column to remove special character
df = df.rename(columns={'% waste': 'pct_waste'})
# Converts percentage strings to equivalent numbers
df["pct_waste"] = df["pct_waste"].replace({'%':''}, regex=True).astype(int) / 100
# Extracts the day of the week (0 = Monday) from the transaction date
df["transaction_weekday"] = pd.to_datetime(df['transaction_date'], format = '%m/%d/%Y').dt.dayofweek
# Pairwise relationships across numeric columns
sns.pairplot(df)
# save heatmap as .png file
# dpi - sets the resolution of the saved image in dots/inches
plt.figure(figsize=(16, 6))
heatmap = sns.heatmap(df.corr(), annot = True)
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':18}, pad=12);
# plt.savefig('heatmap.png', dpi=300, bbox_inches='tight')
#plt.show()
| 30.5625 | 100 | 0.736196 | 145 | 978 | 4.896552 | 0.641379 | 0.033803 | 0.070423 | 0.076056 | 0.129577 | 0.129577 | 0.129577 | 0 | 0 | 0 | 0 | 0.015081 | 0.118609 | 978 | 31 | 101 | 31.548387 | 0.808585 | 0.362986 | 0 | 0 | 0 | 0 | 0.227124 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.357143 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5a8ce4bdcee556442797bf38719e0cd5e80d0426 | 789 | py | Python | pyexcel_io/writers/__init__.py | vinraspa/pyexcel-io | 1b4fde5b79c42c57ebed54ed94272d700c6f9317 | [
"BSD-3-Clause"
] | 52 | 2016-06-15T17:11:23.000Z | 2022-02-07T12:44:07.000Z | pyexcel_io/writers/__init__.py | vinraspa/pyexcel-io | 1b4fde5b79c42c57ebed54ed94272d700c6f9317 | [
"BSD-3-Clause"
] | 100 | 2015-12-28T17:58:50.000Z | 2022-01-29T19:48:39.000Z | pyexcel_io/writers/__init__.py | vinraspa/pyexcel-io | 1b4fde5b79c42c57ebed54ed94272d700c6f9317 | [
"BSD-3-Clause"
] | 20 | 2016-05-09T16:44:36.000Z | 2021-09-27T11:54:00.000Z | """
pyexcel_io.writers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
file writers
:copyright: (c) 2014-2020 by Onni Software Ltd.
:license: New BSD License, see LICENSE for more details
"""
from pyexcel_io.plugins import IOPluginInfoChainV2
IOPluginInfoChainV2(__name__).add_a_writer(
relative_plugin_class_path="csv_in_file.CsvFileWriter",
locations=["file", "content"],
file_types=["csv", "tsv"],
stream_type="text",
).add_a_writer(
relative_plugin_class_path="csv_in_memory.CsvMemoryWriter",
locations=["memory"],
file_types=["csv", "tsv"],
stream_type="text",
).add_a_writer(
relative_plugin_class_path="csvz_writer.CsvZipWriter",
locations=["memory", "file", "content"],
file_types=["csvz", "tsvz"],
stream_type="binary",
)
| 28.178571 | 63 | 0.667934 | 92 | 789 | 5.380435 | 0.51087 | 0.024242 | 0.060606 | 0.109091 | 0.337374 | 0.337374 | 0.337374 | 0.337374 | 0.337374 | 0.250505 | 0 | 0.014925 | 0.150824 | 789 | 27 | 64 | 29.222222 | 0.723881 | 0.217997 | 0 | 0.352941 | 0 | 0 | 0.247878 | 0.132428 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a8cf7bc0cb4442c58ea3979c4ae7da99ab95e65 | 561 | py | Python | Interview-Preparation/Facebook/Parenthesis-valid-parenthesis.py | shoaibur/SWE | 1e114a2750f2df5d6c50b48c8e439224894d65da | [
"MIT"
] | 1 | 2020-11-14T18:28:13.000Z | 2020-11-14T18:28:13.000Z | Interview-Preparation/Facebook/Parenthesis-valid-parenthesis.py | shoaibur/SWE | 1e114a2750f2df5d6c50b48c8e439224894d65da | [
"MIT"
] | null | null | null | Interview-Preparation/Facebook/Parenthesis-valid-parenthesis.py | shoaibur/SWE | 1e114a2750f2df5d6c50b48c8e439224894d65da | [
"MIT"
] | null | null | null | class Solution:
def isValid(self, s: str) -> bool:
if not s: return True
if len(s) % 2: return False
if s[0] in ']})': return False
maps = {'(':')', '{':'}', '[':']'}
stack = []
for char in s:
if char in '({[':
stack.append(char)
else:
if not stack: return False
else:
temp = stack.pop()
if maps[temp] != char:
return False
return len(stack) == 0
| 28.05 | 42 | 0.372549 | 57 | 561 | 3.666667 | 0.438596 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.491979 | 561 | 19 | 43 | 29.526316 | 0.722807 | 0 | 0 | 0.117647 | 0 | 0 | 0.02139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a966f94b9c16f0705907add44bb45e9dcfe5e10 | 2,777 | py | Python | copy_release_bin.py | AlexanderYunker1983/YBuild | ad34bb01983d2bac83cecc772efffaeba8717750 | [
"MIT"
] | null | null | null | copy_release_bin.py | AlexanderYunker1983/YBuild | ad34bb01983d2bac83cecc772efffaeba8717750 | [
"MIT"
] | null | null | null | copy_release_bin.py | AlexanderYunker1983/YBuild | ad34bb01983d2bac83cecc772efffaeba8717750 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#-*- coding: utf-8 -*-
import threading
import traceback
import sys
import os
import shutil
import zipfile
from subprocess import Popen, PIPE
from re import search
from datetime import datetime
from call_helper import CallHelper
from os.path import basename
class OtherException(Exception):
    def __init__(self, value):
self.value = value
def __str__(self):
return self.value
#use valid login and password
def hgcmd(cmd,*args):
return [
'hg',
cmd,
'--config','auth.spread.username=user',
'--config','auth.spread.password=password',
'--config','auth.spread.schemes=http https',
'--config','auth.spread.prefix=*',
'--noninteractive',
] + list(args)
def GetVersionFromHg(branchName):
(o,e) = CallHelper.call_helper(hgcmd('log', '--template', 'tag: {tags}&&&&\n', '-l', '5', '-b', branchName))
logs = o.split("&&&&")
version = ""
for log in logs:
line = log.split(None,1)
if(line.__len__() > 1):
if line[0] == 'tag:':
if line[1] != 'tip':
version = line[1]
break
return version
def ZipPdb(path, zip_handle):
for root, dirs, files in os.walk(path):
for file in files:
if file.endswith(".pdb"):
file_path = os.path.join(root,file)
zip_handle.write(file_path, basename(file_path))
def main():
try:
if len(sys.argv) < 5:
raise OtherException('Enter source dir, Project name, prefix for dest. dir and branch name')
version = GetVersionFromHg(sys.argv[4])
prefix_dst = sys.argv[3]
project_name = sys.argv[2]
src_dir = sys.argv[1]
zipfile_name = version + '.zip'
zip_file_path = src_dir + '\\' + zipfile_name
zipf = zipfile.ZipFile(zip_file_path, 'w', zipfile.ZIP_DEFLATED)
ZipPdb(src_dir, zipf)
if len(version) > 0:
print 'Copy symbol ' + version
#use symbols server path
dst_dir = '\\\\server\\ReleaseSymbol\\' + project_name + '\\' + version[:version.rfind('.')] + '\\' + prefix_dst + '\\'
try:
os.makedirs(dst_dir)
except OSError as e:
pass
shutil.copyfile(zip_file_path, dst_dir + zipfile_name)
else:
print "No Change"
except OtherException as e:
print "\r\n ERROR:"
print e.__str__()
except:
print "\r\nSYS ERROR:"
traceback.print_exc(file=sys.stdout)
if __name__ == "__main__":
main() | 29.542553 | 131 | 0.540151 | 317 | 2,777 | 4.567823 | 0.416404 | 0.033149 | 0.044199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007012 | 0.332373 | 2,777 | 94 | 132 | 29.542553 | 0.774002 | 0.033129 | 0 | 0.027027 | 0 | 0 | 0.136838 | 0.03915 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.027027 | 0.148649 | null | null | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5aa83b96362baef92bbabc498a99e7ddfae72bea | 421 | py | Python | wall.py | Kimeg/Raycasting-Visualization-in-3D | 77f6cc06f0d03c454fc5d8e0b98820fe6aa0f407 | [
"MIT"
] | null | null | null | wall.py | Kimeg/Raycasting-Visualization-in-3D | 77f6cc06f0d03c454fc5d8e0b98820fe6aa0f407 | [
"MIT"
] | null | null | null | wall.py | Kimeg/Raycasting-Visualization-in-3D | 77f6cc06f0d03c454fc5d8e0b98820fe6aa0f407 | [
"MIT"
] | null | null | null | from static import *
from point import Point
class Wall:
def __init__(self, x1, y1, x2, y2, color, pg, screen):
self.p1 = Point(x1, y1)
self.p2 = Point(x2, y2)
self.color = color
self.pg = pg
self.screen = screen
return
def draw(self):
self.pg.draw.line(self.screen, self.color, (self.p1.x, self.p1.y), (self.p2.x, self.p2.y), 3)
return
| 24.764706 | 101 | 0.555819 | 64 | 421 | 3.59375 | 0.375 | 0.078261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.311164 | 421 | 16 | 102 | 26.3125 | 0.741379 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5aa86d20c6cc95191053b38a5b1e5cc77507fe57 | 1,919 | py | Python | dll_test.py | heicj/data-structures | 6a55abf7108d0a3157e372e9122b716158a22d94 | [
"MIT"
] | null | null | null | dll_test.py | heicj/data-structures | 6a55abf7108d0a3157e372e9122b716158a22d94 | [
"MIT"
] | null | null | null | dll_test.py | heicj/data-structures | 6a55abf7108d0a3157e372e9122b716158a22d94 | [
"MIT"
] | null | null | null | import unittest
from dll import Node, DoubleLinkedList
class TestIt(unittest.TestCase):
def test_1(self):
"""make a node"""
n1 = Node('A')
self.assertEqual(n1._value, 'A')
def test_2(self):
"""make a test head is set when add first node"""
n1 = Node('A')
dl = DoubleLinkedList()
dl.append(n1)
self.assertEqual(dl.head._value, 'A')
def test_3(self):
"""add two nodes. test that 2nd append makes changes tail to new node"""
n1 = Node('A')
n2 = Node('B')
dl = DoubleLinkedList()
dl.append(n1)
dl.append(n2)
self.assertEqual(dl.tail._value, 'B')
def test_4(self):
"""check that head is still the first node added to list"""
n1 = Node('A')
n2 = Node('B')
dl = DoubleLinkedList()
dl.append(n1)
dl.append(n2)
self.assertEqual(dl.head._value, 'A')
def test_5(self):
"""check that push adds to front"""
n1 = Node('A')
n2 = Node('B')
dl = DoubleLinkedList()
dl.append(n1)
dl.append(n2)
n3 = Node('C') #will push C to front(head)
dl.push(n3)
self.assertEqual(dl.head._value, 'C')
def test_6(self):
"""check that pop removes head"""
n1 = Node('A')
n2 = Node('B')
dl = DoubleLinkedList()
dl.append(n1) #head and tail at this point
dl.append(n2) # A is head and B is now tail
dl.pop() # removes A so head value should be B
self.assertEqual(dl.head._value, 'B')
def test_7(self):
"""check that shift removes last node"""
n1 = Node('A')
n2 = Node('B')
n3 = Node('C')
dl = DoubleLinkedList()
dl.append(n1)
dl.append(n2)
dl.append(n3)
dl.shift()
self.assertEqual(dl.tail._value, 'B')
def test_8(self):
"""test to remove tail by using remove method"""
n1 = Node('A')
n2 = Node('B')
n3 = Node('C')
dl = DoubleLinkedList()
dl.append(n1)
dl.append(n2)
dl.append(n3)
dl.remove('C') #this removes C so tail should become BaseException
self.assertEqual(dl.tail._value, 'B')
| 22.313953 | 74 | 0.627931 | 308 | 1,919 | 3.86039 | 0.237013 | 0.100925 | 0.047098 | 0.15307 | 0.497056 | 0.429773 | 0.407065 | 0.400336 | 0.31455 | 0.31455 | 0 | 0.028327 | 0.208963 | 1,919 | 86 | 75 | 22.313953 | 0.754941 | 0.25013 | 0 | 0.709677 | 0 | 0 | 0.018638 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 1 | 0.129032 | false | 0 | 0.032258 | 0 | 0.177419 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5aa96091626ad42c5d306a62910ed2be6278004e | 2,513 | py | Python | comprobo_final_project/scripts/models/robot.py | comprobo-final-project/genetic_racer | edf5d3a6f2a99aa8138962dc880341285f73530e | [
"MIT"
] | null | null | null | comprobo_final_project/scripts/models/robot.py | comprobo-final-project/genetic_racer | edf5d3a6f2a99aa8138962dc880341285f73530e | [
"MIT"
] | null | null | null | comprobo_final_project/scripts/models/robot.py | comprobo-final-project/genetic_racer | edf5d3a6f2a99aa8138962dc880341285f73530e | [
"MIT"
] | null | null | null | #!usr/bin/env python
"""
The real world robot class which has been made modular to mirror the simulation
robot class. It connects to april tags in the real world (or Gazebo) and can run
the most fit organism in the same way as done in simulation.
"""
import rospy
import tf
import numpy as np
from geometry_msgs.msg import PoseStamped, Twist
from ..helpers import sleeper
from ..providers.gazebo_position_provider import GazeboPoseProvider
from ..providers.april_pose_provider import AprilPoseProvider
class Robot:
"""
Neato connection.
set_twist(twist : Twist) : Void - Sets the Twist of the robot
"""
def __init__(self, resolution=10, real=False, name=""):
self.MAX_SPEED = 0.3 # m/s
self.MAX_TURN_RATE = 0.8 * np.pi # rad/s
rospy.init_node('robot_controller', anonymous=True)
self.pose_stamped = PoseStamped()
self.twist = Twist()
self.resolution = resolution
self.name = name
        # Subscribe to the pose of the Neato robot; can switch between the
        # real world and Gazebo
self.pose_provider = AprilPoseProvider(rospy, self.name) if real \
else GazeboPoseProvider(rospy)
self.pose_provider.subscribe(self._pose_listener)
# Create publisher for current detected ball characteristics
self.twist_publisher = rospy.Publisher(self.name+'/cmd_vel', Twist, queue_size=10)
def set_twist(self, forward_rate, turn_rate):
self.twist.linear.x = np.clip(forward_rate, 0, self.MAX_SPEED)
self.twist.angular.z = np.clip(turn_rate, -self.MAX_TURN_RATE, self.MAX_TURN_RATE)
self.twist_publisher.publish(self.twist)
sleeper.sleep(1.0 / self.resolution)
def get_position(self):
return self.pose_stamped.pose.position
def get_direction(self):
return tf.transformations.euler_from_quaternion((
self.pose_stamped.pose.orientation.x,
self.pose_stamped.pose.orientation.y,
self.pose_stamped.pose.orientation.z,
self.pose_stamped.pose.orientation.w))[2]
def set_random_position(self, r=1):
# TODO: Set a random position in gazebo. Ignore if real.
pass
def set_random_direction(self):
# TODO: Set a random direction in gazebo. Ignore if real.
pass
def _pose_listener(self, pose):
"""
Callback function for organism position.
"""
self.pose_stamped.pose = pose
| 28.235955 | 90 | 0.67688 | 337 | 2,513 | 4.89911 | 0.379822 | 0.053301 | 0.063598 | 0.069049 | 0.13083 | 0.058147 | 0.058147 | 0 | 0 | 0 | 0 | 0.006781 | 0.237167 | 2,513 | 88 | 91 | 28.556818 | 0.85446 | 0.246319 | 0 | 0.1 | 0 | 0 | 0.013065 | 0 | 0 | 0 | 0 | 0.011364 | 0 | 1 | 0.175 | false | 0.05 | 0.2 | 0.05 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5ab302ab4df0af198345e6c5a8acf2ac20fe3b00 | 10,899 | py | Python | crawler/chengdulib.py | zixinzeng-jennifer/public-culture-activity | abe908cb10407e7e4753ef541476e4267d24a1b8 | [
"CC0-1.0"
] | null | null | null | crawler/chengdulib.py | zixinzeng-jennifer/public-culture-activity | abe908cb10407e7e4753ef541476e4267d24a1b8 | [
"CC0-1.0"
] | null | null | null | crawler/chengdulib.py | zixinzeng-jennifer/public-culture-activity | abe908cb10407e7e4753ef541476e4267d24a1b8 | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
import scrapy
from bs4 import BeautifulSoup
from cultureBigdata.items import CultureNewsItem, CultureBasicItem, CultureEventItem
from selenium import webdriver
import re
import time
class ChengdulibSpider(scrapy.Spider):
name = 'chengdulib'
    # Parameters needed to crawl institution news
news_url = ''
news_base_url = ''
news_count = 1
news_page_end = 221
    # Parameters needed to crawl the institution introduction
intro_url = 'http://www.jnlib.net.cn/gyjt/201311/t20131105_2257.html'
    # Parameters needed to crawl institution events
def start_requests(self):
        # Request the institution introduction info
#yield scrapy.Request(self.intro_url, callback=self.intro_parse)
        # Request the institution news info
for i in range(1,360):
page = (i-1)
url = 'https://act.cdclib.org/action/web/publish.do?actionCmd=listSearch&offset=' + str(page)
yield scrapy.Request(url, callback=self.event_parse)
        # Request the institution event info
#yield scrapy.Request(self.event_url, callback=self.event_parse)
'''
def news_parse(self, response):
origin_url = 'http://www.zslib.com.cn/TempletPage/List.aspx?dbid=2&page=1'
data = response.body
soup = BeautifulSoup(data, 'html.parser')
article_lists = soup.find('div', {"class": "gl_list"})
for article in article_lists.find_all("li"):
item = CultureNewsItem()
try:
item['pav_name'] = '广东省立中山图书馆'
item['title'] = article.a.string
item['url'] = origin_url + article.a.attrs['href'][2:]
item['time'] = re.findall(r'(\d{4}-\d{1,2}-\d{1,2})', article.span.string)[0]
yield scrapy.Request(item['url'], meta={'item': item}, callback=self.news_text_parse)
except Exception as err:
print(err)
if self.news_count < self.news_page_end:
self.news_count = self.news_count + 1
yield scrapy.Request(self.news_base_url + str(self.news_count) + '.html', callback=self.news_parse)
else:
return None
def news_text_parse(self, response):
item = response.meta['item']
data = response.body
soup = BeautifulSoup(data, "html.parser")
content = soup.find("div", {"class": "xl_show"})
item['content'] = str(content.text).replace('\u3000', '').replace('\xa0', '').replace('\n', '')
return item
'''
def event_parse(self, response):
data = response.body
soup = BeautifulSoup(data, 'html.parser')
event_all = soup.find_all('div', {'class': 'hdzx_search'})[0].find_all('div',{'class':'item'})
for event in event_all:
item = CultureEventItem()
try:
item['pav_name'] = '成都市图书馆'
item['activity_name'] = event.find_all('div',{'class':'name'})[0].text.strip()
print(item['activity_name'])
item['activity_time']=event.find_all('div',{'class':'p'})[1].text.strip()[5:]
item['place'] = event.find_all('div',{'class':'p'})[3].text.strip()[5:]
print(item['place'])
item['url'] = 'https://act.cdclib.org/action/web/' + event.find_all('div',{'class':'name'})[0].a.attrs['href']
print(item['url'])
#item['remark'] = event.find('div',{'class':'hdzx_layer_2'}).text.replace(' ', '').replace('\n', '').replace('\r', '').replace('\xa0', '')
item['organizer'] = '成都市图书馆'
yield scrapy.Request(item['url'], meta={'item': item}, callback=self.event_text_parse)
break
except Exception as err:
print('event_parse')
print(err)
def event_text_parse(self, response):
item = response.meta['item']
data = response.body
soup = BeautifulSoup(data, 'html.parser')
#print(soup.find_all('td',{'valign':'top'})[1])
info = soup.find_all('td',{'valign':'top'})[1].find_all('tr')
print(len(info))
item['activity_time'] = info[6].text[6:17].strip()
print(item['activity_time'])
content = soup.find('div',{'class':'grid_item'}).find_all('table')[-1]
full_text = str(content.text).replace('\u3000', '').replace('\xa0', '')
p_tags = content.find_all('p')
p_content = []
for p in p_tags:
p_content.append(str(p.text).replace('\u3000', '').replace('\xa0', ''))
# print(p_content)
########################################################################################
item['remark'] = full_text.replace('\n', '')
########################################################################################
########################################################################################
item['activity_type'] = ''
try:
if '展览' in full_text:
item['activity_type'] = '展览'
elif '讲座' in full_text:
item['activity_type'] = '讲座'
elif '培训' in full_text:
item['activity_type'] = '培训'
elif '阅读' in full_text:
item['activity_type'] = '阅读'
except:
pass
########################################################################################
item['presenter'] = ''
for i in range(len(p_content)):
if '一、活动主讲人:' in p_content[i]:
item['presenter'] = p_content[i + 1]
break
elif '主 讲 人:' in p_content[i]:
item['presenter'] = p_content[i].split(':')[1]
break
elif '主讲人:' in p_content[i]:
item['presenter'] = p_content[i].split(':')[1]
break
try:
if re.findall(r'(...)老师', content.text)[0] and item['presenter'] == '':
item['presenter'] = re.findall(r'(...)老师', content.text)[0]
except:
pass
try:
if re.findall(r'(...)先生', content.text)[0] and item['presenter'] == '':
item['presenter'] = re.findall(r'(...)先生', content.text)[0]
except:
pass
try:
if re.findall(r'(...)姐姐', content.text)[0] and item['presenter'] == '':
item['presenter'] = re.findall(r'(...)姐姐', content.text)[0]
except:
pass
########################################################################################
item['organizer'] = ''
for i in range(len(p_content)):
if '主办单位:' in p_content[i]:
item['organizer'] = p_content[i].split(':')[1]
break
elif '举办单位:' == p_content[i] or '主办单位' == p_content[i]:
item['organizer'] = p_content[i + 1]
break
elif '举办单位:' in p_content[i]:
item['organizer'] = p_content[i].split(':')[1]
break
elif '主 办:' in p_content[i]:
item['organizer'] = p_content[i].split(':')[1]
break
elif '举办:' in p_content[i]:
item['organizer'] = p_content[i].split(':')[1]
break
        # also matches the bare '举办' ("hosted by") keyword
########################################################################################
item['age_limit'] = ''
try:
if re.findall(r'不限年龄', content.text)[0] and item['age_limit'] == '':
item['age_limit'] = re.findall(r'不限年龄', content.text)[0]
except:
pass
try:
if re.findall(r'([1‐9]?\d~[1‐9]?\d岁)', content.text)[0] and item['age_limit'] == '':
item['age_limit'] = re.findall(r'([1‐9]?\d~[1‐9]?\d岁)', content.text)[0]
except:
pass
try:
if re.findall(r'([1‐9]?\d岁-[1‐9]?\d岁)', content.text)[0] and item['age_limit'] == '':
item['age_limit'] = re.findall(r'([1‐9]?\d岁-[1‐9]?\d岁)', content.text)[0]
except:
pass
try:
if re.findall(r'([1‐9]?\d-[1‐9]?\d岁)', content.text)[0] and item['age_limit'] == '':
item['age_limit'] = re.findall(r'([1‐9]?\d-[1‐9]?\d岁)', content.text)[0]
except:
pass
########################################################################################
item['presenter_introduction'] = ''
for i in range(len(p_content)):
if '作者简介:' == p_content[i] or '主讲人简介:' == p_content[i]:
item['presenter_introduction'] = p_content[i + 1]
break
elif '讲师简介:' in p_content[i]:
item['presenter_introduction'] = p_content[i].split(":")[1]
break
########################################################################################
item['contact'] = ''
for i in range(len(p_content)):
if '预约电话:' in p_content[i]:
item['contact'] = p_content[i].split(':')[1]
break
        try:
            if re.findall(r'\d{4}—\d{8}', content.text)[0] and item['contact'] == '':
                item['contact'] = re.findall(r'\d{4}—\d{8}', content.text)[0]
        except IndexError:
            pass
        try:
            if re.findall(r'\d{8}', content.text)[0] and item['contact'] == '':
                item['contact'] = re.findall(r'\d{8}', content.text)[0]
        except IndexError:
            pass
########################################################################################
item['participation_number'] = ''
########################################################################################
item['click_number'] = ''
########################################################################################
item['source'] = ''
########################################################################################
item['activity_introduction'] = ''
########################################################################################
return item
def intro_parse(self, response):
item = CultureBasicItem()
data = response.body
soup = BeautifulSoup(data, 'html.parser')
intro = str(soup.find('div', {"class": 'TRS_Editor'}).text).strip()
item['pav_name'] = '海南省图书馆'
item['pav_introduction'] = intro.replace('\u3000\u3000', '')
item['region'] = '海南'
item['area_number'] = '2.5万平方米'
item['collection_number'] = '164万余册'
item['branch_number'] = ''
item['librarian_number'] = ''
item['client_number'] = '17万'
item['activity_number'] = ''
yield item
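The age-limit and contact blocks in `parse` repeat the same `re.findall(...)[0]` inside `try/except` pattern. A small helper keeps that logic in one place; this is a sketch, and `first_match`/`extract_age_limit` are hypothetical names, not part of the spider:

```python
import re

def first_match(pattern, text, default=''):
    """Return the first regex match in text, or default when nothing matches."""
    found = re.findall(pattern, text)
    return found[0] if found else default

def extract_age_limit(text):
    """Same cascade as the spider: stop at the first pattern that hits."""
    patterns = (r'不限年龄', r'([1-9]?\d~[1-9]?\d岁)',
                r'([1-9]?\d岁-[1-9]?\d岁)', r'([1-9]?\d-[1-9]?\d岁)')
    for pattern in patterns:
        value = first_match(pattern, text)
        if value:
            return value
    return ''

print(extract_age_limit('本活动适合6~12岁儿童参加'))  # 6~12岁
```

With the helper, each field becomes one readable line instead of a five-line try block.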
dev/add_images_to_db.py | johncoleman83/stamp | Python | MIT
#!/usr/bin/python3
"""
generates DB from Getty API
"""
import json
import models
User = models.User
Image = models.Image
storage = models.storage
def load_from_json_file(filename):
"""creates json object from file"""
    with open(filename, mode='r', encoding='utf-8') as f_io:
        my_dict = json.loads(f_io.read())
    # no explicit close needed: the with-block closes the file
    return my_dict
def store_to_db():
""" stores JSON to db """
files = [
'./images_json/lizards.json',
'./images_json/dogs.json',
'./images_json/nature.json',
'./images_json/stained_glass.json',
'./images_json/faces.json',
'./images_json/business.json',
'./images_json/goats.json',
'./images_json/religion.json'
]
num = 1
for filename in files:
        data = load_from_json_file(filename)  # don't shadow the json module
        last_name = filename.split('/')[2].split('.')[0]
        u_kwargs = {
            'email': '{}@notreal.com'.format(num),
            'password': 'testpass',
            'first_name': 'not_real_{}'.format(num),
            'last_name': '{} lover'.format(last_name)
        }
        new_u = User(**u_kwargs)
        new_u.save()
        for i in data:
i_kwargs = {
"url": i.get('display_sizes')[0].get('uri'),
"title": i.get('title'),
"family": i.get('asset_family'),
"collection": i.get('collection_name')
}
new_i = Image(**i_kwargs)
new_i.save()
new_i.users.append(new_u)
new_u.images.append(new_i)
new_i.save()
new_u.save()
num += 1
if __name__ == "__main__":
    # MAIN App
    store_to_db()
    storage.save()
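`load_from_json_file` can be exercised on its own. A minimal round-trip sketch (the payload below is made up for illustration, not Getty data):

```python
import json
import tempfile

def load_from_json_file(filename):
    """Creates a dict from a JSON file (mirrors the helper above)."""
    with open(filename, mode='r', encoding='utf-8') as f_io:
        return json.loads(f_io.read())

# Write a small payload to a temporary file, then read it back.
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as tmp:
    json.dump([{'title': 'goat on a hill', 'asset_family': 'creative'}], tmp)
    path = tmp.name

records = load_from_json_file(path)
print(records[0]['title'])  # goat on a hill
```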
src/tfNetwork/cost.py | RaymondLZhou/deep-neural-networks | Python | MIT
import tensorflow as tf
def compute_cost(Z3, Y):
    """Mean softmax cross-entropy cost.

    Z3 (logits) and Y (one-hot labels) arrive shaped (classes, examples);
    the transposes give TF the (examples, classes) layout it expects.
    """
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    return cost
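For reference, the same quantity in plain NumPy; this is an illustrative sketch, not part of the module, using the `(classes, examples)` shape convention above:

```python
import numpy as np

def softmax_cross_entropy_mean(Z3, Y):
    """NumPy sketch of compute_cost; Z3, Y shaped (classes, examples)."""
    logits = Z3.T                                   # -> (examples, classes)
    labels = Y.T
    z = logits - logits.max(axis=1, keepdims=True)  # stabilise the exp
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-(labels * log_softmax).sum(axis=1).mean())

Z3 = np.array([[2.0, 0.0], [0.0, 2.0], [0.0, 0.0]])  # 3 classes, 2 examples
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # one-hot labels
print(round(softmax_cross_entropy_mean(Z3, Y), 4))   # ~0.2395
```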
wallee/models/subscription_product_version_retirement_create.py | bluedynamics/wallee-python-sdk | Python | Apache-2.0
# coding: utf-8
import pprint
import six
from enum import Enum
class SubscriptionProductVersionRetirementCreate:
swagger_types = {
'product_version': 'int',
'respect_terminiation_periods_enabled': 'bool',
'target_product': 'int',
}
    attribute_map = {
        'product_version': 'productVersion',
        'respect_terminiation_periods_enabled': 'respectTerminiationPeriodsEnabled',
        'target_product': 'targetProduct',
    }
_product_version = None
_respect_terminiation_periods_enabled = None
_target_product = None
def __init__(self, **kwargs):
self.discriminator = None
self.product_version = kwargs.get('product_version')
self.respect_terminiation_periods_enabled = kwargs.get('respect_terminiation_periods_enabled', None)
self.target_product = kwargs.get('target_product', None)
@property
def product_version(self):
"""Gets the product_version of this SubscriptionProductVersionRetirementCreate.
:return: The product_version of this SubscriptionProductVersionRetirementCreate.
:rtype: int
"""
return self._product_version
@product_version.setter
def product_version(self, product_version):
"""Sets the product_version of this SubscriptionProductVersionRetirementCreate.
:param product_version: The product_version of this SubscriptionProductVersionRetirementCreate.
:type: int
"""
if product_version is None:
raise ValueError("Invalid value for `product_version`, must not be `None`")
self._product_version = product_version
@property
def respect_terminiation_periods_enabled(self):
"""Gets the respect_terminiation_periods_enabled of this SubscriptionProductVersionRetirementCreate.
:return: The respect_terminiation_periods_enabled of this SubscriptionProductVersionRetirementCreate.
:rtype: bool
"""
return self._respect_terminiation_periods_enabled
@respect_terminiation_periods_enabled.setter
def respect_terminiation_periods_enabled(self, respect_terminiation_periods_enabled):
"""Sets the respect_terminiation_periods_enabled of this SubscriptionProductVersionRetirementCreate.
:param respect_terminiation_periods_enabled: The respect_terminiation_periods_enabled of this SubscriptionProductVersionRetirementCreate.
:type: bool
"""
self._respect_terminiation_periods_enabled = respect_terminiation_periods_enabled
@property
def target_product(self):
"""Gets the target_product of this SubscriptionProductVersionRetirementCreate.
:return: The target_product of this SubscriptionProductVersionRetirementCreate.
:rtype: int
"""
return self._target_product
@target_product.setter
def target_product(self, target_product):
"""Sets the target_product of this SubscriptionProductVersionRetirementCreate.
:param target_product: The target_product of this SubscriptionProductVersionRetirementCreate.
:type: int
"""
self._target_product = target_product
def to_dict(self):
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
elif isinstance(value, Enum):
result[attr] = value.value
else:
result[attr] = value
if issubclass(SubscriptionProductVersionRetirementCreate, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
return pprint.pformat(self.to_dict())
def __repr__(self):
return self.to_str()
def __eq__(self, other):
if not isinstance(other, SubscriptionProductVersionRetirementCreate):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
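The generated model's `attribute_map` is what translates snake_case attributes into the camelCase names on the wire. A standalone sketch of that mapping (the `to_wire` helper is hypothetical, not part of the SDK):

```python
# Copied from the class above; note the API's own 'terminiation' spelling.
attribute_map = {
    'product_version': 'productVersion',
    'respect_terminiation_periods_enabled': 'respectTerminiationPeriodsEnabled',
    'target_product': 'targetProduct',
}

def to_wire(model_dict):
    """Rename dict keys to their API (camelCase) form."""
    return {attribute_map[key]: value for key, value in model_dict.items()}

payload = to_wire({'product_version': 42, 'target_product': None})
print(payload)  # {'productVersion': 42, 'targetProduct': None}
```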
materials_io/csv.py | jat255/MaterialsIO | Python | Apache-2.0
from materials_io.base import BaseSingleFileParser
from tableschema.exceptions import CastError
from tableschema import Table
from typing import List
import logging
logger = logging.getLogger(__name__)
class CSVParser(BaseSingleFileParser):
"""Reads comma-separated value (CSV) files
The context dictionary for the CSV parser includes several fields:
- ``schema``: Dictionary defining the schema for this dataset, following that of
FrictionlessIO
- ``na_values``: Any values that should be interpreted as missing
"""
def __init__(self, return_records=True, **kwargs):
"""
Args:
return_records (bool): Whether to return each row in the CSV file
Keyword:
All kwargs as passed to
`TableSchema's infer <https://github.com/frictionlessdata/tableschema-py#infer>`_
method
"""
self.return_records = return_records
self.infer_kwargs = kwargs
def _parse_file(self, path: str, context=None):
# Set the default value
if context is None:
context = dict()
# Load in the table
table = Table(path, schema=context.get('schema', None))
# Infer the table's schema
table.infer(**self.infer_kwargs)
# Add missing values
if 'na_values' in context:
if not isinstance(context['na_values'], list):
raise ValueError('context["na_values"] must be a list')
table.schema.descriptor['missingValues'] = sorted(set([''] + context['na_values']))
table.schema.commit()
# Store the schema
output = {'schema': table.schema.descriptor}
# If desired, store the data
if self.return_records:
headers = table.schema.headers
records = []
failed_records = 0
for row in table.iter(keyed=False, cast=False):
try:
row = table.schema.cast_row(row)
except CastError:
failed_records += 1
# TODO (wardlt): Use json output from tableschema once it's supported
# https://github.com/frictionlessdata/tableschema-py/issues/213
records.append(eval(repr(dict(zip(headers, row)))))
if failed_records > 0:
logger.warning(f'{failed_records} records failed casting with schema')
output['records'] = records
return output
def implementors(self) -> List[str]:
return ['Logan Ward']
def citations(self) -> List[str]:
return ["https://github.com/frictionlessdata/tableschema-py"]
def version(self) -> str:
return '0.0.1'
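`_parse_file` always treats the empty string as missing and merges in any caller-supplied `na_values`. That merge in isolation (the helper name is hypothetical):

```python
def merge_missing_values(na_values):
    """'' is always a missing value; add the caller's extras, deduplicated."""
    if not isinstance(na_values, list):
        raise ValueError('context["na_values"] must be a list')
    return sorted(set([''] + na_values))

print(merge_missing_values(['N/A', 'null', 'N/A']))  # ['', 'N/A', 'null']
```

Sorting after the set union keeps the descriptor deterministic across runs.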
renderer/plane.py | darsovit/pyRayTracerChallenge | Python | MIT
#! python
#
#
from renderer.shape import Shape
from renderer.bolts import Vector, EPSILON
class Plane(Shape):
def LocalNormal( self, localPoint ):
return Vector( 0, 1, 0 )
    def LocalIntersect(self, localRay):
        # A ray parallel to the plane (direction.y within EPSILON of 0) misses.
        if abs(localRay.Direction()[1]) < EPSILON:
            return []
        # The plane is y = 0, so solve origin.y + t * direction.y = 0 for t.
        timeToIntersect = (0 - localRay.Origin()[1]) / localRay.Direction()[1]
        return [{'time': timeToIntersect, 'object': self}]
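The intersection math above, stripped to its core: the plane is y = 0 in local space, so a ray `origin + t * direction` crosses it at `t = -origin_y / direction_y`. A self-contained sketch:

```python
EPSILON = 1e-5

def plane_intersect_t(origin_y, direction_y):
    """t at which a ray crosses the xz-plane, or None for a parallel ray."""
    if abs(direction_y) < EPSILON:
        return None
    return -origin_y / direction_y

print(plane_intersect_t(1.0, -1.0))  # 1.0: falling from y=1 hits after one step
print(plane_intersect_t(1.0, 0.0))   # None: a parallel ray never intersects
```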
utils/rsa.py | vsgobbi/nfe-library | Python | Python-2.0, OLDAP-2.7
from base64 import b64encode
from hashlib import sha1
from Crypto.Hash import SHA
from Crypto.Signature import PKCS1_v1_5
from Crypto.PublicKey import RSA
class Rsa:
    @classmethod
    def sign(cls, text, privateKeyContent):
        """Base64-encoded RSA-SHA1 (PKCS#1 v1.5) signature of text."""
        digest = SHA.new(text)
        rsaKey = RSA.importKey(privateKeyContent)
        signer = PKCS1_v1_5.new(rsaKey)
        signature = signer.sign(digest)
        return b64encode(signature)

    @classmethod
    def digest(cls, text):
        """Base64-encoded SHA-1 digest of text."""
        hasher = sha1()
        hasher.update(text)
        digest = hasher.digest()
        return b64encode(digest)
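`Rsa.digest` is just a base64-wrapped SHA-1 and can be checked with the standard library alone. SHA-1 is what the NF-e signing flow expects here; it is not collision-resistant and should not be reused elsewhere:

```python
from base64 import b64encode
from hashlib import sha1

def b64_sha1_digest(data: bytes) -> bytes:
    """Standalone equivalent of Rsa.digest."""
    return b64encode(sha1(data).digest())

print(b64_sha1_digest(b'abc'))  # b'qZk+NkcGgWq6PiVxeFDCbJzQ2J0='
```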
equipment_assigments/models.py | amado-developer/ReadHub-RestfulAPI | Python | MIT
from django.db import models
'''
Equipment_Assigment
id_user (FK)
id_equipment (FK)
'''
class Equipment_Assigment(models.Model):
'''
id_user = models.ForeignKey(
'Users.User',
on_delete = models.CASCADE,
null = False,
blank = False
)
'''
    id_equipment = models.ForeignKey(
        'equipments.Equipment',
        on_delete=models.CASCADE,
        null=False,
        blank=False
    )
django_basics/logging.py | pinehq/django_basics | Python | MIT
from logging import StreamHandler
from ipware import get_client_ip
class EnhancedStreamHandler(StreamHandler, object):
    """StreamHandler that annotates records with client IP and user email.

    Assumes the Django request object is passed as the first log argument,
    e.g. logger.info('message', request).
    """

    def emit(self, record):
        record.ip = ''
        record.email = ''
        try:
            request = record.args[0]
            record.ip, _ = get_client_ip(request)
            record.args = None
            record.email = request.user.email
        except Exception:  # noqa - records logged without a request keep defaults
            pass
        super(EnhancedStreamHandler, self).emit(record)
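A framework-free illustration of the same enrichment pattern: pull a context object out of `record.args`, expose its fields to the formatter, and clear `args` so it stays out of the message. This is a sketch with made-up names, not the Django/ipware handler above:

```python
import logging
from io import StringIO

class ContextStreamHandler(logging.StreamHandler):
    def emit(self, record):
        record.ip = ''
        try:
            context = record.args[0]
            record.ip = context.ip
            record.args = None          # consumed: keep it out of the message
        except Exception:               # no context passed: keep the default
            pass
        super(ContextStreamHandler, self).emit(record)

class Context:
    ip = '10.0.0.1'

buf = StringIO()
handler = ContextStreamHandler(buf)
handler.setFormatter(logging.Formatter('%(ip)s %(message)s'))
logger = logging.getLogger('enriched-demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False
logger.info('user logged in', Context)
print(buf.getvalue().strip())  # 10.0.0.1 user logged in
```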
invoicer/_login/__init__.py | mtik00/invoicer | Python | MIT
from flask import (
Blueprint, render_template, request, flash, redirect, url_for, session)
from flask_login import login_user, logout_user, login_required, current_user
from ..common import is_safe_url
from ..password import verify_password, hash_password
from ..models import User
from ..logger import AUTH_LOG
from ..database import db
from .forms import LoginForm, TwoFAEnableForm
login_page = Blueprint('login_page', __name__, template_folder='templates')
def get_redirect_target():
for target in request.values.get('next'), request.referrer:
if not target:
continue
if is_safe_url(target):
return target
def redirect_back(endpoint, **values):
target = get_redirect_target()
if not target or not is_safe_url(target):
target = url_for(endpoint, **values)
return redirect(target)
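`get_redirect_target` and `redirect_back` rely on `is_safe_url` from `..common` to block open redirects. A simplified standalone sketch of that kind of check (this is an assumption about its behaviour, not the real helper):

```python
from urllib.parse import urlparse

def is_safe_url_sketch(target, host='example.com'):
    """Allow relative paths and http(s) URLs that stay on our own host."""
    parsed = urlparse(target)
    return parsed.scheme in ('', 'http', 'https') and parsed.netloc in ('', host)

print(is_safe_url_sketch('/dashboard'))               # True
print(is_safe_url_sketch('https://evil.test/phish'))  # False
```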
@login_page.route('/login', methods=['GET', 'POST'])
def login():
if current_user and current_user.is_authenticated:
return redirect(url_for('index_page.dashboard'))
next_url = get_redirect_target()
form = LoginForm(request.form)
error = False
hashed_password = ''
if form.validate_on_submit():
user = User.query.filter(User.username == form.username.data[:1024]).first()
hashed_password = user.hashed_password if user else ''
try:
verify_password(hashed_password, form.password.data[:1024])
except Exception:
error = True
if error:
# NOTE: You must not change this format without changing the
# fail2ban filter.
AUTH_LOG.error(
"Invalid login for username [%s] from [%s]",
form.username.data[:1024],
request.remote_addr,
)
flash('Invalid username and/or password', 'error')
return render_template('login/login.html', form=form, next_url=next_url), 401
if not user.is_active:
flash('You account is not active', 'error')
return redirect(url_for('.login'), code=401)
if getattr(user, 'rehash_password', False):
user.hashed_password = hash_password(form.password.data[:1024])
user.rehash_password = False
db.session.add(user)
db.session.commit()
if user.totp_enabled:
session['user_id'] = user.id
return redirect(url_for('.two_fa', next=next_url))
return complete_login(user)
elif form.errors:
flash(', '.join(form.errors), 'error')
return render_template('login/login.html', form=form, next_url=next_url)
@login_page.route('/logout')
@login_required
def logout():
current_user.is_authenticated = False
session.pop('logged_in', None)
session.pop('user_id', None)
logout_user()
flash('You were logged out', 'success')
return redirect(url_for('.login'))
@login_page.route('/2fa', methods=['GET', 'POST'])
def two_fa(next=None):
next_url = next or get_redirect_target()
user = User.query.get(session['user_id'])
form = TwoFAEnableForm()
if form.validate_on_submit():
if user.verify_totp(form.token.data):
return complete_login(user)
else:
flash('Invalid 2FA token', 'error')
form.token.errors = ['Invalid 2FA token']
return render_template('login/2fa.html', form=form, next_url=next_url)
def complete_login(user):
login_user(user)
current_user.is_authenticated = True
session['logged_in'] = True
session['user_debug'] = user.application_settings.debug_mode
session['site_theme'] = user.profile.site_theme.name if user.profile.site_theme else 'black'
session['site_theme_top'] = user.profile.site_theme.top if user.profile.site_theme else '#777777'
session['site_theme_bottom'] = user.profile.site_theme.bottom if user.profile.site_theme else '#777777'
flash('You were logged in', 'success')
return redirect_back('index_page.dashboard')
tests/api/mocks.py | gmos2104/poke-query | Python | MIT
class MockRequests:
def __init__(self, ok=True, json_data=None):
self.ok = ok
self.json_data = json_data
self.get_method_called = False
    def __call__(self, *args, **kwargs):
        self.get_method_called = True
        self.response = MockResponse(ok=self.ok, json_data=self.json_data)
        return self.response
class MockResponse:
def __init__(self, ok=True, json_data=None):
self.ok = ok
self.json_method_called = False
self.json_data = json_data
def json(self):
self.json_method_called = True
return self.json_data
class MockRedis:
def __init__(self, get_data=None):
self.get_data = get_data
self.get_method_called = False
self.set_method_called = False
def get(self, *args, **kwargs):
self.get_method_called = True
return self.get_data
def set(self, *args, **kwargs):
self.set_method_called = True
def __call__(self, *args, **kwargs):
return self
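Typical use of these doubles: inject the mock in place of `requests.get` and assert on both the call and the parsed body. A self-contained sketch (`fetch_name` is a made-up function under test, not part of this repo):

```python
class MockResponse:
    def __init__(self, ok=True, json_data=None):
        self.ok = ok
        self.json_data = json_data

    def json(self):
        return self.json_data

def fetch_name(get):
    """Function under test: takes the HTTP 'get' callable as a dependency."""
    response = get('https://pokeapi.test/pokemon/1')
    return response.json()['name'] if response.ok else None

mock_get = lambda *args, **kwargs: MockResponse(json_data={'name': 'bulbasaur'})
print(fetch_name(mock_get))  # bulbasaur
```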
safenotes/displayer.py | Guilherme-Vasconcelos/safenotes | Python | MIT
from safenotes.helpers import display_colored_text
from safenotes.colors import red, blue
from typing import Callable, Dict
from os import system
from sys import exit
import safenotes.files_accessor as files_accessor
import questionary
class Displayer:
"""
Class for displaying all notes, handling user action, etc.
"""
def __init__(self, password: str) -> None:
self.password = password
def display_initial_menu(self) -> None:
""" Used to display all notes and allowing user to create new ones, edit existing, etc. """
system('clear')
special_choices = ['New note', 'Refresh encryptions', 'Quit']
choice = questionary.select(
'Available options',
choices=special_choices + files_accessor.get_saved_notes_filenames(),
qmark=''
).ask()
self.handle_choice_initial_menu(choice)
def display_menu_for_note(self, note_name: str) -> None:
""" Displays a menu for a given note. Allows actions such as editing and deleting the note """
system('clear')
display_colored_text(f'Note: {note_name}\n', blue)
display_colored_text('What would you like to do?', blue)
choice = questionary.select(
note_name,
choices=['Read/Edit', 'Delete', 'Go back to initial menu'],
qmark=''
).ask()
self.handle_choice_for_note(note_name, choice)
def create_new_note(self) -> None:
""" Creates a new note and encrypts it """
display_colored_text('Please decide on a name for the file ', blue)
display_colored_text('(this name WILL be publicly accessible): ', red)
file_name = input()
note_path = files_accessor.note_full_path(file_name)
files_accessor.edit_file_and_encrypt(note_path, self.password)
self.display_initial_menu()
def edit_note(self, note_path: str) -> None:
""" Unencrypts a note, allows user to edit it, then encrypts it again """
files_accessor.decrypt_file(note_path, self.password)
note_path = note_path.replace('.gpg', '')
files_accessor.edit_file_and_encrypt(note_path, self.password)
self.display_initial_menu()
def delete_note(self, note_path: str) -> None:
""" Deletes an encrypted note. If note is not encrypted raises error. """
files_accessor.delete_file(note_path)
self.display_initial_menu()
def refresh_encryptions(self) -> None:
files_to_encrypt = [
f for f in files_accessor.get_saved_notes_filenames() if not files_accessor.is_file_encrypted(f)
]
for file in files_to_encrypt:
file_path = files_accessor.note_full_path(file)
files_accessor.encrypt_file(file_path, self.password)
self.display_initial_menu()
def handle_choice_initial_menu(self, choice: str) -> None:
""" Call the correct method based on user's input at initial menu """
available_choices: Dict[str, Callable] = {
'New note': self.create_new_note,
'Refresh encryptions': self.refresh_encryptions,
'Quit': exit
}
if choice in available_choices:
available_choices[choice]()
else: # Else is assumed to be a note which was picked
self.display_menu_for_note(choice)
def handle_choice_for_note(self, note_name: str, choice: str) -> None:
""" Call the correct method based on user's input at note menu """
# Since many of the functions here require arguments I don't think
# it's possible to use the dict like before
note_path = files_accessor.note_full_path(note_name)
if choice == 'Read/Edit':
self.edit_note(note_path)
elif choice == 'Delete':
self.delete_note(note_path)
else:
self.display_initial_menu()
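`handle_choice_initial_menu` is a dict dispatch: named menu entries map to callables, and anything else falls through and is treated as a note name. The pattern in isolation (all names here are hypothetical):

```python
def dispatch(choice, on_note):
    """Menu entries map to actions; unknown choices are note names."""
    actions = {
        'New note': lambda: 'created',
        'Quit': lambda: 'quit',
    }
    if choice in actions:
        return actions[choice]()
    return on_note(choice)

print(dispatch('Quit', on_note=str.upper))       # quit
print(dispatch('groceries', on_note=str.upper))  # GROCERIES
```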
template/src/handlers/common.py | zhkzyth/storm_maker | Python | MIT
#!/usr/bin/env python
# encoding: utf-8
from base import BaseHandler
class Better404Handler(BaseHandler):
    """Catch-all handler that answers unknown routes with an API-style 404."""

def _write_404(self):
self.send_response(
None,
error_code=-1,
error_msg='The api does not exist.'
)
def get(self):
self._write_404()
def post(self):
self._write_404()
pepper/urls.py | GeekyShacklebolt/pepper | Python | MIT
# Third party imports
from django.conf.urls import url, include
from rest_framework import routers
from django.contrib import admin
# Pepper imports
from pepper.facebook.api import UserViewSet, GroupViewSet
# Relative imports
from . import api_urls
# Default user and group routers
router = routers.DefaultRouter()
router.register(r'users', UserViewSet)
router.register(r'groups', GroupViewSet)
# Wire up our API using automatic URL routing.
# Additionally, we include login URLs for the browsable API.
urlpatterns = [
# Default API, subjected to change in case of custom 'users' app
url('', include(router.urls)),
# Admin
url('admin/', admin.site.urls),
# Rest API
url(r'^api/', include(api_urls)),
url('api-auth/', include('rest_framework.urls', namespace='facebook')),
]
5af85a8ed082740f9c81032bd87835377b4b926e | 4,137 | py | Python | src/main/app-resources/data_download_publish/hist_skip_no_zero.py | geohazards-tep/dcs-rss-fullres-mm-data-browser | daaaa1b9e458b2deedfa9383173a5b724e77a6b7 | [
"Apache-2.0"
] | null | null | null | src/main/app-resources/data_download_publish/hist_skip_no_zero.py | geohazards-tep/dcs-rss-fullres-mm-data-browser | daaaa1b9e458b2deedfa9383173a5b724e77a6b7 | [
"Apache-2.0"
] | null | null | null | src/main/app-resources/data_download_publish/hist_skip_no_zero.py | geohazards-tep/dcs-rss-fullres-mm-data-browser | daaaa1b9e458b2deedfa9383173a5b724e77a6b7 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import gdal, gdalconst
import os
# See http://www.gdal.org/classVRTRasterBand.html#a155a54960c422941d3d5c2853c6c7cef
def hist_skip(inFname, bandIndex, percentileMin, percentileMax, outFname, nbuckets=1000):
"""
Given a filename, finds approximate percentile values and provides the
gdal_translate invocation required to create an 8-bit PNG.
Works by evaluating a histogram of the original raster with a large number of
buckets between the raster minimum and maximum, then estimating the
probability mass and distribution functions before reporting the percentiles
requested.
N.B. This technique is very approximate and hasn't been checked for asymptotic
convergence. Heck, it uses GDAL's `GetHistogram` function in approximate mode,
so you're getting approximate percentiles using an approximated histogram.
    Arguments:
    - `percentileMin`, `percentileMax`: percentiles, between 0 and 100 (inclusive).
    - `nbuckets`: the more buckets, the better the percentile approximations.
    """
src = gdal.Open(inFname)
band = src.GetRasterBand(int(bandIndex))
percentiles = [ float(percentileMin), float(percentileMax) ]
# Use GDAL to find the min and max
(lo, hi, avg, std) = band.GetStatistics(True, True)
# Use GDAL to calculate a big histogram
rawhist = band.GetHistogram(min=lo, max=hi, buckets=nbuckets)
binEdges = np.linspace(lo, hi, nbuckets+1)
# Probability mass function. Trapezoidal-integration of this should yield 1.0.
pmf = rawhist / (np.sum(rawhist) * np.diff(binEdges[:2]))
# Cumulative probability distribution. Starts at 0, ends at 1.0.
distribution = np.cumsum(pmf) * np.diff(binEdges[:2])
# Which histogram buckets are close to the percentiles requested?
idxs = [np.sum(distribution < p / 100.0) for p in percentiles]
    # Map those bucket indices back to raster values:
vals = [binEdges[i] for i in idxs]
# Append 0 and 100% percentiles (min & max)
percentiles = [0] + percentiles + [100]
vals = [lo] + vals + [hi]
# Print the percentile table
print "percentile (out of 100%),value at percentile"
for (p, v) in zip(percentiles, vals):
print "%f,%f" % (p, v)
if vals[1] == 0:
print "percentile "+str(percentileMin)+" is equal to 0"
print "Percentile recomputation as pNoZero+"+str(percentileMin)+", where pNoZero is the first percentile with no zero value"
pNoZero=0
for p in range(int(percentileMin),100):
idx = np.sum(distribution < float(p) / 100.0)
val = binEdges[idx]
if val > 0:
pNoZero=p+int(percentileMin)
break
percentiles = [ float(pNoZero), float(percentileMax) ]
# Which histogram buckets are close to the percentiles requested?
idxs = [np.sum(distribution < p / 100.0) for p in percentiles]
        # Map those bucket indices back to raster values:
vals = [binEdges[i] for i in idxs]
# Append 0 and 100% percentiles (min & max)
percentiles = [0] + percentiles + [100]
vals = [lo] + vals + [hi]
# Print the percentile table
print "percentile (out of 100%),value at percentile"
for (p, v) in zip(percentiles, vals):
print "%f,%f" % (p, v)
    # Build, print and run the gdal_calc command
gdalCalcCommand="gdal_calc.py -A "+inFname+" --A_band="+bandIndex+" --calc="+'"'+str(vals[1])+"*logical_and(A>0, A<="+str(vals[1])+")+A*(A>"+str(vals[1])+")"+'"'+" --outfile=gdal_calc_result.tif --NoDataValue=0"
print "running "+gdalCalcCommand
os.system(gdalCalcCommand)
    # Build, print and run the gdal_translate command (what we came here for anyway)
gdalTranslateCommand="gdal_translate -b 1 -co TILED=YES -co BLOCKXSIZE=512 -co BLOCKYSIZE=512 -co ALPHA=YES -ot Byte -a_nodata 0 -scale "+str(vals[1])+" "+str(vals[2])+" 1 255 gdal_calc_result.tif "+outFname
print "running "+gdalTranslateCommand
os.system(gdalTranslateCommand)
# remove temp file
os.system("rm gdal_calc_result.tif")
return (vals, percentiles)
# Invoke as: `python hist_skip.py INPUT-RASTER BAND-INDEX PERCENTILE-MIN PERCENTILE-MAX OUTPUT-RASTER`.
if __name__ == '__main__':
import sys
if len(sys.argv) == 6:
hist_skip(sys.argv[1],sys.argv[2],sys.argv[3],sys.argv[4],sys.argv[5])
else:
print "python hist_skip.py INPUT-RASTER BAND-INDEX PERCENTILE-MIN PERCENTILE-MAX OUTPUT-RASTER"
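The docstring above describes the estimation pipeline: approximate histogram, then probability mass function, then cumulative distribution, then bucket lookup per percentile. A minimal NumPy-only stand-in for that pipeline (this is a sketch, not the GDAL path used in the script; the random array stands in for a raster band):

```python
import numpy as np

def approx_percentiles(data, percentiles, nbuckets=1000):
    """Approximate percentiles the same way hist_skip does: histogram -> PMF -> CDF."""
    lo, hi = float(data.min()), float(data.max())
    rawhist, bin_edges = np.histogram(data, bins=nbuckets, range=(lo, hi))
    # Probability mass function; trapezoidal integration of this should yield ~1.0.
    pmf = rawhist / (rawhist.sum() * np.diff(bin_edges[:2]))
    # Cumulative distribution: starts near 0, ends at 1.0.
    cdf = np.cumsum(pmf) * np.diff(bin_edges[:2])
    # For each requested percentile, count the buckets whose cumulative mass is below it.
    idxs = [int(np.sum(cdf < p / 100.0)) for p in percentiles]
    return [float(bin_edges[i]) for i in idxs]

rng = np.random.default_rng(0)
sample = rng.normal(loc=100.0, scale=10.0, size=100_000)
p_lo, p_hi = approx_percentiles(sample, [2, 98])
print(p_lo, p_hi)
```

As the docstring warns, both the histogram and the lookup are approximate; for a normal sample centred at 100 the 2nd/98th results land near 79.5 and 120.5.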
| 41.787879 | 213 | 0.701233 | 586 | 4,137 | 4.906143 | 0.372014 | 0.014609 | 0.01113 | 0.007304 | 0.201739 | 0.201739 | 0.201739 | 0.201739 | 0.201739 | 0.201739 | 0 | 0.031222 | 0.179357 | 4,137 | 98 | 214 | 42.214286 | 0.815611 | 0.175973 | 0 | 0.27451 | 0 | 0.019608 | 0.234477 | 0.01157 | 0.019608 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.078431 | null | null | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5aff1baa18ae5ed95e1d9b0372e653977030aa2f | 285 | py | Python | test/test_skunit.py | Hsuxu/vnet_attention | 6958932f3974d268e93bd6443369a3f43c497ed3 | [
"MIT"
] | 45 | 2018-11-05T10:59:19.000Z | 2021-12-30T15:58:10.000Z | test/test_skunit.py | Hsuxu/vnet_attention | 6958932f3974d268e93bd6443369a3f43c497ed3 | [
"MIT"
] | 2 | 2019-06-21T02:55:59.000Z | 2020-07-14T08:29:40.000Z | test/test_skunit.py | Hsuxu/vnet_attention | 6958932f3974d268e93bd6443369a3f43c497ed3 | [
"MIT"
] | 9 | 2019-01-17T11:36:53.000Z | 2021-07-06T05:01:06.000Z | import torch
import torch.nn as nn
from magic_vnet.blocks.skunit.skunit import SKConv3d, SK_Block
if __name__ == '__main__':
down = torch.rand((1, 64, 32, 32, 32))
# up = torch.rand((1, 16, 64, 64, 64))
model = SK_Block(64, 64)
out = model(down)
print(out.shape)
| 23.75 | 62 | 0.645614 | 47 | 285 | 3.680851 | 0.553191 | 0.069364 | 0.115607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102222 | 0.210526 | 285 | 11 | 63 | 25.909091 | 0.666667 | 0.126316 | 0 | 0 | 0 | 0 | 0.032389 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5aff4af7e6263480bbafe9f470c613941cd21707 | 5,590 | py | Python | apps/scraper/bing_api.py | suenklerhaw/seoeffekt | 0a31fdfa1a7246da37e37bf53c03d94c5f13f095 | [
"MIT"
] | 1 | 2022-02-15T14:03:10.000Z | 2022-02-15T14:03:10.000Z | apps/scraper/bing_api.py | suenklerhaw/seoeffekt | 0a31fdfa1a7246da37e37bf53c03d94c5f13f095 | [
"MIT"
] | null | null | null | apps/scraper/bing_api.py | suenklerhaw/seoeffekt | 0a31fdfa1a7246da37e37bf53c03d94c5f13f095 | [
"MIT"
] | null | null | null | #script to scraper bing api
#include libs
import sys
sys.path.insert(0, '..')
from include import *
def generate_scraping_job(query, scraper):
query_string = query[1]
query_id = query[4]
study_id = query[0]
search_engine = scraper
result_pages = 20
number_multi = 50
check_jobs = Scrapers.getScrapingJobs(query_id, study_id, search_engine)
if not check_jobs:
for r in range(result_pages):
start = r * number_multi
print(start)
try:
Scrapers.insertScrapingJobs(query_id, study_id, query_string, search_engine, start, date.today())
print('Scraper Job: '+query_string+' SE:'+search_engine+' start:'+str(start)+' created')
except:
break;
def scrape_query(query, scraper):
today = date.today()
jobs = Scrapers.getScrapingJobsByQueryProgressSE(query, 0, scraper)
subscription_key = "b175056d732742038339a83743658448"
assert subscription_key
search_url = "https://api.bing.microsoft.com/v7.0/search"
for job in jobs:
search_engine = job[3]
search_query = job[2]
start = job[4]
query_id = job[0]
study_id = job[1]
job_id = job[7]
progress = 2
Scrapers.updateScrapingJobQuerySearchEngine(query_id, search_engine, progress)
sleeper = random.randint(3,10)
time.sleep(sleeper)
#headers = {"Ocp-Apim-Subscription-Key": subscription_key, "X-Search-ClientIP":"217.111.88.182"}
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"q": search_query, "textDecorations": True, "textFormat": "HTML", "count": 50, "offset": start, "responseFilter": "Webpages"}
try:
response = requests.get(search_url, headers=headers, params=params)
response.raise_for_status()
search_results = response.json()
web_results = search_results['webPages']
except:
Helpers.saveLog("../../logs/"+str(study_id)+"_"+search_query+".log", 'Error Scraping Job', 1)
Scrapers.updateScrapingJobQuerySearchEngine(query_id, search_engine, -1)
Results.deleteResultsNoScrapers(query_id, search_engine)
exit()
results = []
for w in web_results['value']:
results.append(w['url'])
if results:
results_check = results[-1]
check_url = Results.getURL(query_id, study_id, results_check, search_engine)
if check_url:
Scrapers.updateScrapingJobQuerySearchEngine(query_id, search_engine, 1)
Helpers.saveLog("../../logs/"+str(study_id)+"_"+search_query+".log", 'Max Results', 1)
exit()
else:
Scrapers.updateScrapingJob(job_id, 1)
Helpers.saveLog("../../logs/"+str(study_id)+"_"+search_query+".log", 'Start Scraping Results', 1)
results_position = 1
for result in results:
url = result
check_url = Results.getURL(query_id, study_id, url, search_engine)
if (not check_url):
url_meta = Results.getResultMeta(url, str(study_id), search_engine, str(query_id))
hash = url_meta[0]
ip = url_meta[1]
main = url_meta[2]
main_hash = Helpers.computeMD5hash(main+str(study_id)+search_engine+str(query_id))
contact_url = "0"
Helpers.saveLog("../../logs/"+str(study_id)+"_"+search_query+".log", url, 1)
contact_hash = "0"
contact_url = "0"
last_position = Results.getLastPosition(query_id, study_id, search_engine, today)
if last_position:
results_position = last_position[0][0] + 1
if Results.getPosition(query_id, study_id, search_engine, results_position):
results_position = results_position + 1
Results.insertResult(query_id, study_id, job_id, 0, ip, hash, main_hash, contact_hash, search_engine, url, main, contact_url, today, datetime.now(), 1, results_position)
check_sources = Results.getSource(hash)
if not check_sources:
Results.insertSource(hash, None, None, None, today, 0)
Helpers.saveLog("../../logs/"+str(study_id)+"_"+search_query+".log", 'Insert Result', 1)
studies = Studies.getStudiesScraper()
for s in studies:
if "Bing_API" in s[-1]:
scraper = "Bing_API"
studies_id = s[-3]
queries = Queries.getQueriesStudy(studies_id)
for q in queries:
query_id = q[-2]
job = 0
check_jobs = Scrapers.getScrapingJobsBySE(query_id, scraper)
count_jobs = check_jobs[0][0]
if count_jobs == 0:
job = 1
if job == 1:
generate_scraping_job(q, scraper)
open_queries = Queries.getOpenQueriesStudybySE(studies_id, scraper)
if open_queries:
random.shuffle(open_queries)
o = open_queries[0]
if o:
check_progress = Scrapers.getScrapingJobsByQueryProgressSE(o, 2, scraper)
if not check_progress:
print(o)
scrape_query(o, scraper)
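The `generate_scraping_job` loop above derives Bing's `offset` parameter from the page index: 20 result pages of 50 results each. A standalone sketch of that paging arithmetic (pure Python, no database or API calls):

```python
# Same constants as generate_scraping_job: pages requested and results per request.
result_pages = 20
number_multi = 50

# One scraping job per page, each starting number_multi results further in.
offsets = [r * number_multi for r in range(result_pages)]
print(offsets[:3], offsets[-1])  # [0, 50, 100] 950
```

Each offset becomes the `"offset"` value in the Bing request params, paired with `"count": 50`, so the 20 jobs together cover the first 1000 results.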
| 31.942857 | 193 | 0.569946 | 602 | 5,590 | 5.069767 | 0.237542 | 0.038991 | 0.042595 | 0.03211 | 0.239187 | 0.228702 | 0.183159 | 0.114024 | 0.070118 | 0.056356 | 0 | 0.02596 | 0.324687 | 5,590 | 174 | 194 | 32.126437 | 0.782517 | 0.023792 | 0 | 0.072727 | 1 | 0 | 0.068757 | 0.010451 | 0 | 0 | 0 | 0 | 0.009091 | 1 | 0.018182 | false | 0 | 0.018182 | 0 | 0.036364 | 0.027273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
850520991795398bea7333efd1e1838193a82e7a | 802 | py | Python | python/problem17b.py | jreese/euler | 0e2a809620cb02367120c0fbfbf9b419edd42c6e | [
"MIT"
] | 1 | 2015-12-19T09:59:39.000Z | 2015-12-19T09:59:39.000Z | python/problem17b.py | jreese/euler | 0e2a809620cb02367120c0fbfbf9b419edd42c6e | [
"MIT"
] | null | null | null | python/problem17b.py | jreese/euler | 0e2a809620cb02367120c0fbfbf9b419edd42c6e | [
"MIT"
] | null | null | null |
onetonine = len("onetwothreefourfivesixseveneightnine")
onetoten = onetonine + len("ten")
eleventotwenty = len("eleventwelvethirteenfourteenfifteensixteenseventeeneighteennineteen")
twenties = len("twenty")*10 + onetonine
thirties = len("thirty")*10 + onetonine
forties = len("forty")*10 + onetonine
fifties = len("fifty")*10 + onetonine
sixties = len("sixty")*10 + onetonine
seventies = len("seventy")*10 + onetonine
eighties = len("eighty")*10 + onetonine
nineties = len("ninety")*10 + onetonine
hundred = onetoten + eleventotwenty + twenties + thirties + forties + fifties + sixties + seventies + eighties + nineties
hundredonly = len("hundred")
hundredand = len("hundredand")
thousands = hundred * 10 + onetonine * 100 + hundredonly * 9 + hundredand * 891 + len("onethousand")
print thousands
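The tallies above follow Project Euler problem 17's counting rule: spaces and hyphens are ignored, only letters count. The worked examples from the problem statement can be spot-checked directly:

```python
# Letters only, per the problem statement: 342 -> "three hundred and forty-two",
# 115 -> "one hundred and fifteen".
examples = {342: "threehundredandfortytwo", 115: "onehundredandfifteen"}
for n, letters in examples.items():
    print(n, len(letters))  # 342 -> 23 letters, 115 -> 20 letters
```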
| 36.454545 | 121 | 0.739401 | 80 | 802 | 7.4125 | 0.4125 | 0.166948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035817 | 0.129676 | 802 | 21 | 122 | 38.190476 | 0.813754 | 0 | 0 | 0 | 0 | 0 | 0.225 | 0.12875 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
850664af9ae52cf04b067165f2ed6e6601361122 | 725 | py | Python | web_applications/social_network/blog/signals.py | Had96dad/Python | 9b5cdf0a18fdb195abb8ed9861ff471d6dacde1b | [
"MIT"
] | null | null | null | web_applications/social_network/blog/signals.py | Had96dad/Python | 9b5cdf0a18fdb195abb8ed9861ff471d6dacde1b | [
"MIT"
] | null | null | null | web_applications/social_network/blog/signals.py | Had96dad/Python | 9b5cdf0a18fdb195abb8ed9861ff471d6dacde1b | [
"MIT"
] | null | null | null | from django.dispatch import receiver
from django.utils.text import slugify
from django.db.models.signals import post_delete, pre_save
from blog.models import *
def category_create(sender, instance, **kwargs):
instance.name = instance.name.lower()
pre_save.connect(category_create, sender=Category)
# Create a slug for each new Post before it is saved to the DB.
def slug_create(sender, instance, *args, **kwargs):
if not instance.slug:
instance.slug = slugify(instance.author.first_name + "-" + instance.author.last_name + "-" + instance.title)
pre_save.connect(slug_create, sender=Post)
@receiver(post_delete, sender=Post)
def submission_delete(sender, instance, **kwargs):
instance.image.delete(False)
| 26.851852 | 116 | 0.755862 | 100 | 725 | 5.36 | 0.41 | 0.089552 | 0.074627 | 0.104478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133793 | 725 | 26 | 117 | 27.884615 | 0.853503 | 0.070345 | 0 | 0 | 0 | 0 | 0.002976 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.285714 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
85145f85900cfbb884381ee42e38fa0f4eb6fe68 | 3,323 | py | Python | urllib2demo/test17_HTTPError_URLError.py | liang1024/CrawlerDemo | f9b93d062477ff96c31decc600f35e2dc84d27e4 | [
"Apache-2.0"
] | null | null | null | urllib2demo/test17_HTTPError_URLError.py | liang1024/CrawlerDemo | f9b93d062477ff96c31decc600f35e2dc84d27e4 | [
"Apache-2.0"
] | null | null | null | urllib2demo/test17_HTTPError_URLError.py | liang1024/CrawlerDemo | f9b93d062477ff96c31decc600f35e2dc84d27e4 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
'''
When we issue a request with urlopen or opener.open,
an error is raised if urlopen or opener.open cannot handle the response.
The main exceptions are URLError and HTTPError, and this covers handling them.
Common causes of URLError:
no network connection
failure to connect to the server
the specified server cannot be found
'''
import urllib2
request = urllib2.Request('http://blog.baidu.com/blog')
try:
    urllib2.urlopen(request)
except urllib2.HTTPError, err:
print err.code
except urllib2.URLError, err:
print err
else:
print "Good Job"
'''
HTTPError
HTTPError is a subclass of URLError. When we issue a request, the server returns a corresponding
response object, which contains a numeric "status code".
Status code reference:
1xx: Informational
100 Continue
The server has received part of the request; as long as the server has not rejected it, the client should continue sending the rest.
101 Switching Protocols
Protocol switch: the server will switch to another protocol as requested by the client.
2xx: Success
200 OK
The request succeeded (followed by the response document for GET and POST requests).
201 Created
The request was fulfilled and a new resource was created.
202 Accepted
The request has been accepted for processing, but processing is not complete.
203 Non-authoritative Information
The document was returned normally, but some response headers may be incorrect because a copy of the document was used.
204 No Content
No new document; the browser should keep displaying the current one. Useful when the user refreshes the page periodically and the servlet can determine that the user's document is up to date.
205 Reset Content
No new document, but the browser should reset what it displays. Used to force the browser to clear form input.
206 Partial Content
The client sent a GET request with a Range header and the server fulfilled it.
3xx: Redirection
300 Multiple Choices
Multiple choices: a list of links. The user can pick a link to reach the destination. At most five addresses are allowed.
301 Moved Permanently
The requested page has been permanently moved to a new url.
302 Moved Temporarily
The requested page has been temporarily moved to a new url.
303 See Other
The requested page can be found under a different url.
304 Not Modified
The document was not modified as expected. The client has a cached copy and issued a conditional request (usually with an If-Modified-Since header, meaning it only wants documents newer than the given date). The server tells the client that the cached document is still usable.
305 Use Proxy
The requested document should be fetched through the proxy server given in the Location header.
306 Unused
Used in a previous version of the spec; no longer used, but the code is still reserved.
307 Temporary Redirect
The requested page has been temporarily moved to a new url.
4xx: Client error
400 Bad Request
The server could not understand the request.
401 Unauthorized
The requested page requires a username and password.
401.1
Logon failed.
401.2
Logon failed due to server configuration.
401.3
Unauthorized due to an ACL on the resource.
401.4
Authorization failed by a filter.
401.5
Authorization failed by an ISAPI/CGI application.
401.7
Access denied by a URL authorization policy on the Web server. This error code is specific to IIS 6.0.
402 Payment Required
This code is not yet available for use.
403 Forbidden
Access to the requested page is forbidden.
403.1
Execute access forbidden.
403.2
Read access forbidden.
403.3
Write access forbidden.
403.4
SSL required.
403.5
SSL 128 required.
403.6
IP address rejected.
403.7
Client certificate required.
403.8
Site access denied.
403.9
Too many users.
403.10
Invalid configuration.
403.11
Password change.
403.12
Mapper denied access.
403.13
Client certificate revoked.
403.14
Directory listing denied.
403.15
Client access licenses exceeded.
403.16
Client certificate is untrusted or invalid.
403.17
Client certificate has expired or is not yet valid.
403.18
The requested URL cannot be executed in the current application pool. This error code is specific to IIS 6.0.
403.19
CGI cannot be executed for the client in this application pool. This error code is specific to IIS 6.0.
403.20
Passport logon failed. This error code is specific to IIS 6.0.
404 Not Found
The server cannot find the requested page.
404.0
File or directory not found.
404.1
The Web site is not accessible on the requested port.
404.2
The Web service extension lockdown policy blocks this request.
404.3
The MIME map policy blocks this request.
405 Method Not Allowed
The method specified in the request is not allowed.
406 Not Acceptable
The response generated by the server is not acceptable to the client.
407 Proxy Authentication Required
The user must first authenticate with the proxy server before the request can be served.
408 Request Timeout
The request took longer than the server was willing to wait.
409 Conflict
The request could not be completed because of a conflict.
410 Gone
The requested page is no longer available.
411 Length Required
"Content-Length" is not defined; the server will not accept the request without it.
412 Precondition Failed
A precondition in the request was evaluated as false by the server.
413 Request Entity Too Large
The server will not accept the request because the request entity is too large.
414 Request-url Too Long
The server will not accept the request because the url is too long. This happens when a post request is converted into a get request with very long query information.
415 Unsupported Media Type
The server will not accept the request because the media type is not supported.
416 Requested Range Not Satisfiable
The server cannot satisfy the Range header the client specified in the request.
417 Expectation Failed
Expectation failed.
423
Locked error.
5xx: Server error
500 Internal Server Error
The request was not completed; the server encountered an unexpected condition.
500.12
The application is busy restarting on the Web server.
500.13
The Web server is too busy.
500.15
Direct requests for Global.asa are not allowed.
500.16
UNC authorization credentials are incorrect. This error code is specific to IIS 6.0.
500.18
The URL authorization store cannot be opened. This error code is specific to IIS 6.0.
500.100
Internal ASP error.
501 Not Implemented
The request was not completed; the server does not support the requested functionality.
502 Bad Gateway
The request was not completed; the server received an invalid response from the upstream server.
502.1
CGI application timeout.
502.2
CGI application error.
503 Service Unavailable
The request was not completed; the server is temporarily overloaded or down.
504 Gateway Timeout
Gateway timeout.
505 HTTP Version Not Supported
The server does not support the HTTP protocol version specified in the request.
'''
| 15.104545 | 95 | 0.814023 | 449 | 3,323 | 6.026726 | 0.683742 | 0.022173 | 0.02439 | 0.026608 | 0.037694 | 0.026608 | 0 | 0 | 0 | 0 | 0 | 0.109412 | 0.114355 | 3,323 | 219 | 96 | 15.173516 | 0.809718 | 0.003611 | 0 | 0 | 0 | 0 | 0.144681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8515513f03922fe11079924ffdb45f303026df94 | 4,745 | py | Python | lisn/test_match.py | Algy/tempy | 3fe3da6380628a6ab4af56e5ad738efbcb9eaedb | [
"Apache-2.0"
] | null | null | null | lisn/test_match.py | Algy/tempy | 3fe3da6380628a6ab4af56e5ad738efbcb9eaedb | [
"Apache-2.0"
] | null | null | null | lisn/test_match.py | Algy/tempy | 3fe3da6380628a6ab4af56e5ad738efbcb9eaedb | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env ipython
import unittest
from clisn import loads
from match import LISNPattern
from pprint import pprint
class PatternTest(unittest.TestCase):
def test_very_basic_pattern(self):
@LISNPattern
def pat_vb(case, default):
@case
def A(res):
'''
FooBar
'''
return True
@case
def B(res):
'''
NAME$a
'''
return res['a']
@case
def C(res):
'''
$a
'''
return res['a']
@LISNPattern
def pat_node(case, default):
@case
def D(res):
'''
c(NAME$x)
'''
return res['x']
foobar_lisn = loads("FooBar")["exprs"][0]["param"]
b_lisn = loads("b")["exprs"][0]["param"]
c_lisn = loads("c(a)")["exprs"][0]["param"]
'''
self.assertTrue(pat_vb(foobar_lisn))
self.assertEqual(pat_vb(b_lisn), 'b')
self.assertIs(pat_vb(c_lisn), c_lisn)
self.assertEqual(pat_node(c_lisn), 'a')
'''
def test_basic_node(self):
thunk_lisn = loads('thunk: a + 2')["exprs"][0]["param"]
defvar_lisn = loads('defvar x: foo(1 + 2)')["exprs"][0]["param"]
f_lisn = loads('f(parg_first, parg_second, \
label=karg, \
**dstar, &, &&damp)')["exprs"][0]["param"]
@LISNPattern
def pat_v(case, default):
@case
def a(res):
'''
thunk: $node
'''
return res['node']
@case
def b(res):
'''
defvar NAME$var_name:
$val_node
'''
return (res["var_name"], res["val_node"])
@LISNPattern
def pat_arg(case, default):
@case
def a(res):
'''
f>
parg_first
parg_second
keyword -> dict:
label -> karg
*__optional__:
star
**dstar
&
&&damp
'''
return True
'''
self.assertEqual(pat_v(thunk_lisn)["type"], "binop")
var_name, val_node = pat_v(defvar_lisn)
self.assertEqual(var_name, "x")
self.assertEqual(val_node["head_expr"]["name"], "foo")
self.assertTrue(pat_arg(f_lisn))
'''
def test_fun(self):
fun_lisn = loads('''
def go(a, b, c, d=2, d=7, e=3, *f, **g, &h, &&i)
''')["exprs"][0]["param"]
@LISNPattern
def pat_fun(case, default):
@case
def case1(res):
'''
def NAME$funname>
__kleene_star__(pargs): NAME$argname
keyword -> dict(kargs)
*__optional__(star): NAME$argname
**__optional__(dstar): NAME$argname
&__optional__(amp): NAME$argname
&&__optional__(damp): NAME$argname
--
__kleene_star__(body): $expr
'''
return res
pprint(pat_fun(fun_lisn))
def test_lets(self):
lets_lisn = loads('''
lets>
a -> 1
a -> a + 1
a -> a + 2
--
a
''')["exprs"][0]["param"]
@LISNPattern
def pat_lets(case, default):
@case
def case1(res):
'''
lets>
keyword -> seq:
__kleene_star__(definition):
NAME$key -> $value
--
__kleene_plus__(body): $expr
'''
return res
'''
pprint(pat_lets(lets_lisn))
'''
def test_or(self):
yy_lisn = loads('''
yinyang:
yin
yang
yang
yin
''')["exprs"][0]["param"]
@LISNPattern
def pat_yinyang(case, default):
@case
def case1(res):
'''
yinyang:
__kleene_star__(list):
__or__:
__group__(yin):
yin
__group__(yang):
yang
'''
return res
pprint(pat_yinyang(yy_lisn))
if __name__ == "__main__":
unittest.main()
| 26.21547 | 72 | 0.393256 | 429 | 4,745 | 4.062937 | 0.230769 | 0.040161 | 0.056799 | 0.072289 | 0.199656 | 0.176707 | 0 | 0 | 0 | 0 | 0 | 0.008575 | 0.483878 | 4,745 | 180 | 73 | 26.361111 | 0.703144 | 0.167334 | 0 | 0.428571 | 0 | 0.011905 | 0.12233 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.261905 | false | 0 | 0.047619 | 0 | 0.440476 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8522e5b958846afd1515181c782873e1918b22c4 | 696 | py | Python | datahub/interaction/migrations/0075_add_trade_agreement_fields.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 6 | 2019-12-02T16:11:24.000Z | 2022-03-18T10:02:02.000Z | datahub/interaction/migrations/0075_add_trade_agreement_fields.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 1,696 | 2019-10-31T14:08:37.000Z | 2022-03-29T12:35:57.000Z | datahub/interaction/migrations/0075_add_trade_agreement_fields.py | Staberinde/data-hub-api | 3d0467dbceaf62a47158eea412a3dba827073300 | [
"MIT"
] | 9 | 2019-11-22T12:42:03.000Z | 2021-09-03T14:25:05.000Z | # Generated by Django 3.1.7 on 2021-04-12 12:52
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('metadata', '0009_tradeagreement'),
('interaction', '0074_update_service_questions_and_answers'),
]
operations = [
migrations.AddField(
model_name='interaction',
name='has_related_trade_agreements',
field=models.BooleanField(blank=True, null=True),
),
migrations.AddField(
model_name='interaction',
name='related_trade_agreements',
field=models.ManyToManyField(blank=True, to='metadata.TradeAgreement'),
),
]
| 27.84 | 83 | 0.630747 | 68 | 696 | 6.264706 | 0.647059 | 0.084507 | 0.107981 | 0.126761 | 0.352113 | 0.197183 | 0 | 0 | 0 | 0 | 0 | 0.044574 | 0.258621 | 696 | 24 | 84 | 29 | 0.781008 | 0.064655 | 0 | 0.333333 | 1 | 0 | 0.271186 | 0.178737 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
85283b88ff00316970c14b2face4e2f43d0be40a | 5,822 | py | Python | Tools/resourceCompiler/mayaExporter/workers/skinclusterExporter.py | giordi91/SirEngineThe3rd | 551328144f513e7e7ea9af03327672096baae610 | [
"MIT"
] | 114 | 2020-12-03T10:25:21.000Z | 2022-03-16T20:06:15.000Z | Tools/resourceCompiler/mayaExporter/workers/skinclusterExporter.py | giordi91/SirEngineThe3rd | 551328144f513e7e7ea9af03327672096baae610 | [
"MIT"
] | null | null | null | Tools/resourceCompiler/mayaExporter/workers/skinclusterExporter.py | giordi91/SirEngineThe3rd | 551328144f513e7e7ea9af03327672096baae610 | [
"MIT"
] | 3 | 2021-01-11T16:22:26.000Z | 2022-01-29T16:41:09.000Z | import sys
sys.path.append( "E:\\WORK_IN_PROGRESS\\C\\platfoorm\\engine\\misc\\exporters")
from maya import cmds
from maya import OpenMaya
from maya import OpenMayaAnim
import skeletonExporter
reload(skeletonExporter)
import json
MAX_INFLUENCE = 6
def map_shadow_to_skeleton(root):
data,joints = skeletonExporter.get_skeleton_data(root)
shadow_to_skele = {}
skele_to_shadow={}
    # for each joint, follow the constraint to the driver joint and
    # build a map from that data
for j in joints:
const = cmds.listConnections(j + '.tx', d=0,s=1)[0]
driver = cmds.listConnections(const + '.target[0].targetTranslate',s=1,d=0)
shadow_to_skele[j] = driver[0]
skele_to_shadow[driver[0]] = j
return shadow_to_skele, skele_to_shadow
def getWeightsData (mesh,skinNode, skele_to_shadow, joints):
'''
This procedure let you create a dictionary holding all the needed information to rebuild
a skinCluster map
'''
sknN = skinNode
cmds.undoInfo(openChunk = 1)
infls = cmds.skinCluster(skinNode, q=True, inf=True)
weightMap = []
# get the dag path of the shape node
sel = OpenMaya.MSelectionList()
cmds.select(skinNode)
OpenMaya.MGlobal.getActiveSelectionList(sel)
skinClusterObject = OpenMaya.MObject()
sel.getDependNode(0,skinClusterObject )
skinClusterFn = OpenMayaAnim.MFnSkinCluster(skinClusterObject)
cmds.select(mesh)
sel = OpenMaya.MSelectionList()
OpenMaya.MGlobal.getActiveSelectionList(sel)
shapeDag = OpenMaya.MDagPath()
sel.getDagPath(0, shapeDag)
# create the geometry iterator
geoIter = OpenMaya.MItGeometry(shapeDag)
# create a pointer object for the influence count of the MFnSkinCluster
infCount = OpenMaya.MScriptUtil()
infCountPtr = infCount.asUintPtr()
OpenMaya.MScriptUtil.setUint(infCountPtr, 0)
value = OpenMaya.MDoubleArray()
weightMap = []
infls= OpenMaya.MDagPathArray()
skinClusterFn.influenceObjects(infls)
while geoIter.isDone() == False:
skinClusterFn.getWeights(shapeDag, geoIter.currentItem(), value, infCountPtr)
vtx_data ={"idx": geoIter.index(),
"j":[],
"w":[]}
for j in range(0, infls.length()):
if value[j] > 0:
if skele_to_shadow:
jnt_idx = joints.index(skele_to_shadow[infls[j]])
else:
#node = cmds.listConnections(skinN + ".matrix[" + str(j) + "]",s=1,d=0)[0]
#jnt_idx = joints.index(node)
node = infls[j].fullPathName().rsplit("|",1)[1]
#print node
jnt_idx = joints.index(node)
#jnt_idx = j
weight= value[j]
vtx_data["j"].append(int(jnt_idx))
vtx_data["w"].append(float(weight))
currL = len(vtx_data["j"])
if currL>MAX_INFLUENCE:
print "vertex",vtx_data["idx"], "joints got more than "+str(MAX_INFLUENCE) + " infs"
return;
if currL!= MAX_INFLUENCE:
#lets format the data to have always 4 elemets
deltaSize = MAX_INFLUENCE - currL
vtx_data['j'].extend([int(0)]*deltaSize)
vtx_data['w'].extend([0.0]*deltaSize)
if len(vtx_data["j"]) != MAX_INFLUENCE:
print "vertex",vtx_data["idx"], "wrong formatting after correction"
if len(vtx_data["w"]) != MAX_INFLUENCE:
print "vertex",vtx_data["idx"], "wrong formatting after correction"
weightMap.append(vtx_data)
geoIter.next()
cmds.undoInfo(closeChunk = 1)
print "------> WeightMap has been saved!"
return weightMap
def export_skin(root, skin_name, path, mesh , tootle_path=None, is_shadow=True):
data,joints = skeletonExporter.get_skeleton_data(root)
#print joints.index("L_EyeAim0")
if is_shadow:
print "----> Remapping to shadow skeleton"
shadow_to_skele, skele_to_shadow = map_shadow_to_skeleton(root)
data = getWeightsData(mesh,skin_name,skele_to_shadow, joints)
else :
data = getWeightsData(mesh,skin_name,None, joints)
full = {"type":"skinCluster",
"data":data,
"skeleton": "dogSkeleton"
}
if tootle_path != None:
#read in the tootle
print "---> remapping skin using tootle data"
t = open(tootle_path, 'r')
tootle_map = json.load(t)
newData = [0]*len(full["data"])
for i,d in enumerate(full["data"]):
new = tootle_map[str(i)]
newData[new] = d
full["data"] = newData
else:
print "skippping tootle"
to_save = json.dumps(full)
f = open( path, 'w')
f.write(to_save)
f.close()
print "saved to", path
if __name__ == "__main__" or __name__ == "__builtin__":
print "exporting skin"
root = "root"
skin = "skinCluster1"
path = r"E:\WORK_IN_PROGRESS\C\platfoorm\engine\misc\exporters\temp_data\mannequin_skin.json"
mesh = "mannequin"
tootle_path = r"E:\WORK_IN_PROGRESS\C\platfoorm\engine\misc\exporters\temp_data\mannequin.tootle"
tootle_path=None
export_skin(root, skin, path, mesh, tootle_path, False)
"""
data,joints = skeleton_exporter.get_skeleton_data(root)
shadow_to_skele, skele_to_shadow = map_shadow_to_skeleton(root)
data = getWeightsData(mesh,skin,skele_to_shadow, joints)
full = {"type":"skinCluster",
"data":data,
"skeleton": "dogSkeleton"
}
to_save = json.dumps(full)
f = open( path, 'w')
f.write(to_save)
f.close()
print "saved to", path
"""
| 31.47027 | 101 | 0.614394 | 691 | 5,822 | 5.01013 | 0.286541 | 0.024263 | 0.037551 | 0.020797 | 0.281629 | 0.259676 | 0.245523 | 0.215482 | 0.185442 | 0.172733 | 0 | 0.006587 | 0.269839 | 5,822 | 184 | 102 | 31.641304 | 0.80781 | 0.07695 | 0 | 0.116071 | 0 | 0 | 0.129294 | 0.051634 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.053571 | null | null | 0.080357 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
516d85a5a28f621b638975ffe3f7c2a1bbb06e93 | 11,468 | py | Python | venv/lib/python3.8/site-packages/openapi_client/models/first_last_name_us_race_ethnicity_out.py | akshitgoyal/csc398nlp | 6adf80cb7fa3737f88faf73a6e818da495b95ab4 | [
"MIT"
] | 1 | 2020-09-28T10:09:25.000Z | 2020-09-28T10:09:25.000Z | venv/lib/python3.8/site-packages/openapi_client/models/first_last_name_us_race_ethnicity_out.py | akshitgoyal/NLP-Research-Project | 6adf80cb7fa3737f88faf73a6e818da495b95ab4 | [
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/openapi_client/models/first_last_name_us_race_ethnicity_out.py | akshitgoyal/NLP-Research-Project | 6adf80cb7fa3737f88faf73a6e818da495b95ab4 | [
"MIT"
] | 1 | 2020-07-01T18:46:20.000Z | 2020-07-01T18:46:20.000Z | # coding: utf-8
"""
NamSor API v2
NamSor API v2 : enpoints to process personal names (gender, cultural origin or ethnicity) in all alphabets or languages. Use GET methods for small tests, but prefer POST methods for higher throughput (batch processing of up to 100 names at a time). Need something you can't find here? We have many more features coming soon. Let us know, we'll do our best to add it! # noqa: E501
OpenAPI spec version: 2.0.10
Contact: contact@namsor.com
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
class FirstLastNameUSRaceEthnicityOut(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'id': 'str',
'first_name': 'str',
'last_name': 'str',
'race_ethnicity_alt': 'str',
'race_ethnicity': 'str',
'score': 'float',
'race_ethnicities_top': 'list[str]',
'probability_calibrated': 'float',
'probability_alt_calibrated': 'float'
}
attribute_map = {
'id': 'id',
'first_name': 'firstName',
'last_name': 'lastName',
'race_ethnicity_alt': 'raceEthnicityAlt',
'race_ethnicity': 'raceEthnicity',
'score': 'score',
'race_ethnicities_top': 'raceEthnicitiesTop',
'probability_calibrated': 'probabilityCalibrated',
'probability_alt_calibrated': 'probabilityAltCalibrated'
}
def __init__(self, id=None, first_name=None, last_name=None, race_ethnicity_alt=None, race_ethnicity=None, score=None, race_ethnicities_top=None, probability_calibrated=None, probability_alt_calibrated=None): # noqa: E501
"""FirstLastNameUSRaceEthnicityOut - a model defined in OpenAPI""" # noqa: E501
self._id = None
self._first_name = None
self._last_name = None
self._race_ethnicity_alt = None
self._race_ethnicity = None
self._score = None
self._race_ethnicities_top = None
self._probability_calibrated = None
self._probability_alt_calibrated = None
self.discriminator = None
if id is not None:
self.id = id
if first_name is not None:
self.first_name = first_name
if last_name is not None:
self.last_name = last_name
if race_ethnicity_alt is not None:
self.race_ethnicity_alt = race_ethnicity_alt
if race_ethnicity is not None:
self.race_ethnicity = race_ethnicity
if score is not None:
self.score = score
if race_ethnicities_top is not None:
self.race_ethnicities_top = race_ethnicities_top
if probability_calibrated is not None:
self.probability_calibrated = probability_calibrated
if probability_alt_calibrated is not None:
self.probability_alt_calibrated = probability_alt_calibrated
@property
def id(self):
"""Gets the id of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:return: The id of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this FirstLastNameUSRaceEthnicityOut.
:param id: The id of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: str
"""
self._id = id
@property
def first_name(self):
"""Gets the first_name of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:return: The first_name of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: str
"""
return self._first_name
@first_name.setter
def first_name(self, first_name):
"""Sets the first_name of this FirstLastNameUSRaceEthnicityOut.
:param first_name: The first_name of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: str
"""
self._first_name = first_name
@property
def last_name(self):
"""Gets the last_name of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:return: The last_name of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: str
"""
return self._last_name
@last_name.setter
def last_name(self, last_name):
"""Sets the last_name of this FirstLastNameUSRaceEthnicityOut.
:param last_name: The last_name of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: str
"""
self._last_name = last_name
@property
def race_ethnicity_alt(self):
"""Gets the race_ethnicity_alt of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
Second most likely US 'race'/ethnicity # noqa: E501
:return: The race_ethnicity_alt of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: str
"""
return self._race_ethnicity_alt
@race_ethnicity_alt.setter
def race_ethnicity_alt(self, race_ethnicity_alt):
"""Sets the race_ethnicity_alt of this FirstLastNameUSRaceEthnicityOut.
Second most likely US 'race'/ethnicity # noqa: E501
:param race_ethnicity_alt: The race_ethnicity_alt of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: str
"""
allowed_values = ["W_NL", "HL", "A", "B_NL"] # noqa: E501
if race_ethnicity_alt not in allowed_values:
raise ValueError(
"Invalid value for `race_ethnicity_alt` ({0}), must be one of {1}" # noqa: E501
.format(race_ethnicity_alt, allowed_values)
)
self._race_ethnicity_alt = race_ethnicity_alt
@property
def race_ethnicity(self):
"""Gets the race_ethnicity of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
Most likely US 'race'/ethnicity # noqa: E501
:return: The race_ethnicity of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: str
"""
return self._race_ethnicity
@race_ethnicity.setter
def race_ethnicity(self, race_ethnicity):
"""Sets the race_ethnicity of this FirstLastNameUSRaceEthnicityOut.
Most likely US 'race'/ethnicity # noqa: E501
:param race_ethnicity: The race_ethnicity of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: str
"""
allowed_values = ["W_NL", "HL", "A", "B_NL"] # noqa: E501
if race_ethnicity not in allowed_values:
raise ValueError(
"Invalid value for `race_ethnicity` ({0}), must be one of {1}" # noqa: E501
.format(race_ethnicity, allowed_values)
)
self._race_ethnicity = race_ethnicity
@property
def score(self):
"""Gets the score of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
Compatibility to NamSor_v1 Origin score value # noqa: E501
:return: The score of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: float
"""
return self._score
@score.setter
def score(self, score):
"""Sets the score of this FirstLastNameUSRaceEthnicityOut.
Compatibility to NamSor_v1 Origin score value # noqa: E501
:param score: The score of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: float
"""
self._score = score
@property
def race_ethnicities_top(self):
"""Gets the race_ethnicities_top of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
List 'race'/ethnicities # noqa: E501
:return: The race_ethnicities_top of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: list[str]
"""
return self._race_ethnicities_top
@race_ethnicities_top.setter
def race_ethnicities_top(self, race_ethnicities_top):
"""Sets the race_ethnicities_top of this FirstLastNameUSRaceEthnicityOut.
List 'race'/ethnicities # noqa: E501
:param race_ethnicities_top: The race_ethnicities_top of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: list[str]
"""
self._race_ethnicities_top = race_ethnicities_top
@property
def probability_calibrated(self):
"""Gets the probability_calibrated of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:return: The probability_calibrated of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: float
"""
return self._probability_calibrated
@probability_calibrated.setter
def probability_calibrated(self, probability_calibrated):
"""Sets the probability_calibrated of this FirstLastNameUSRaceEthnicityOut.
:param probability_calibrated: The probability_calibrated of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: float
"""
self._probability_calibrated = probability_calibrated
@property
def probability_alt_calibrated(self):
"""Gets the probability_alt_calibrated of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:return: The probability_alt_calibrated of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:rtype: float
"""
return self._probability_alt_calibrated
@probability_alt_calibrated.setter
def probability_alt_calibrated(self, probability_alt_calibrated):
"""Sets the probability_alt_calibrated of this FirstLastNameUSRaceEthnicityOut.
:param probability_alt_calibrated: The probability_alt_calibrated of this FirstLastNameUSRaceEthnicityOut. # noqa: E501
:type: float
"""
self._probability_alt_calibrated = probability_alt_calibrated
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, FirstLastNameUSRaceEthnicityOut):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 33.532164 | 385 | 0.643966 | 1,266 | 11,468 | 5.613744 | 0.149289 | 0.087801 | 0.187421 | 0.155762 | 0.606726 | 0.501055 | 0.453215 | 0.361756 | 0.297313 | 0.182918 | 0 | 0.017829 | 0.27616 | 11,468 | 341 | 386 | 33.630499 | 0.838333 | 0.396582 | 0 | 0.099338 | 0 | 0 | 0.09618 | 0.023626 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15894 | false | 0 | 0.019868 | 0 | 0.298013 | 0.013245 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
517daa5290ced1affd3a8d3d48d8b9924bd823b5 | 6,746 | py | Python | scripts/threshold_algorithm_randomized.py | NSSAC/active_queries_threshold_gds_published_code | ee1e1035c62d4b66fcf467e0b1c9ae7031fb595d | [
"Apache-2.0"
] | null | null | null | scripts/threshold_algorithm_randomized.py | NSSAC/active_queries_threshold_gds_published_code | ee1e1035c62d4b66fcf467e0b1c9ae7031fb595d | [
"Apache-2.0"
] | null | null | null | scripts/threshold_algorithm_randomized.py | NSSAC/active_queries_threshold_gds_published_code | ee1e1035c62d4b66fcf467e0b1c9ae7031fb595d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# tags: code python thresholdAlgorithm threshold class
#
import argparse
import networkx as nx
from random import randint
from random import random
from random import seed
import os
import sys
import pdb
import logging
import time
DESC="""This code implements the algorithm for discovering thresholds. It
also implements various experiments in order to evaluate the algorithm.
"""
WAIT_PERIOD=2
class NodeAttrib:
def __init__(self, degree, tHigh, tLow, chargeRange):
self.degree = degree
self.thresholdDiscovered = False  # not yet discovered; lets update() initialise the charge attributes
self.update(tHigh,tLow,chargeRange)
def update(self, tHigh, tLow,chargeRange):
self.tHigh = tHigh
self.tLow = min(tHigh,tLow)
if not self.thresholdDiscovered:
avgHammWt=(self.tHigh+self.tLow)/2.0/(self.degree+1)
# no charge contribution once the interval collapses (threshold found)
# or the average Hamming weight falls outside the allowed charge range
if tHigh==tLow or (avgHammWt<chargeRange[0] or avgHammWt>chargeRange[1]):
self.avgHammWt = 0 # no contribution to charge
self.influence = 0
else:
self.avgHammWt = avgHammWt
self.influence = 1
self.accumulatedCharge=self.avgHammWt
self.neighborsToResolve=self.influence + .0
def disp(self):
return "tHigh=%d,tLow=%d,degree=%d,avgHammWt=%g,accumulated charge=%g" \
%(self.tHigh,self.tLow,self.degree,self.avgHammWt,self.accumulatedCharge)
def stateOneNeighbors(v,neighbors,q):
numOnes=q[v]
for u in neighbors:
numOnes+=q[u]
return numOnes
def successorStandardThreshold(G,T,q):
q_={}
for v in G.nodes():
numOnes=stateOneNeighbors(v,G.neighbors(v),q)
if numOnes>=T[v]:
q_[v]=1
else:
q_[v]=0
return q_
def thresholdAlgorithm(G,T,successor,cRange=[0,1],waitPeriod=WAIT_PERIOD):
# set state vector and node attributes required by the algorithm
logging.debug("Initializing state and node attributes vectors ...")
q={} # state vector
nodeProp={} # node attributes for the algorithm
for v in G.nodes():
# set initial state uniformly at random (shouldn't be zero vector)
allZeroInd=True
while allZeroInd:
if random()>=.5:
q[v]=1
allZeroInd=False
else:
q[v]=0
nodeProp[v]=NodeAttrib(G.degree(v),G.degree(v)+1,0,cRange)
# start iterations
logging.debug("Algorithm begins ...")
numQueries=1
noProgressCount=0
while True:
progressInd=False # true if tHigh or tLow changes for some node
s=successor(G,T,q) # find the successor of q
if logging.getLogger().getEffectiveLevel()==10: # if set to DEBUG
logging.debug("*****Query %d*****" %numQueries)
for v in G.nodes():
logging.debug("%d,%d,%s" %(v,q[v],nodeProp[v].disp()))
logging.debug("*****")
# set threshold
for v in G.nodes():
# determine threshold interval
if nodeProp[v].tHigh==nodeProp[v].tLow:
nodeProp[v].update(nodeProp[v].tHigh,nodeProp[v].tLow,cRange)
continue
numOnes=stateOneNeighbors(v,G.neighbors(v),q)
if s[v] and nodeProp[v].tHigh>numOnes:
progressInd=True
nodeProp[v].update(numOnes,nodeProp[v].tLow,cRange)
elif not s[v] and nodeProp[v].tLow<=numOnes:
progressInd=True
nodeProp[v].update(nodeProp[v].tHigh,numOnes+1,cRange)
else:
nodeProp[v].update(nodeProp[v].tHigh,nodeProp[v].tLow,cRange)
# terminate loop if no progress and wait period exceeded
if not progressInd:
if noProgressCount>waitPeriod:
break
else:
noProgressCount+=1
# update accumulated avgHammWt vector based on closed neighborhood
for e in G.edges():
u=e[0]
v=e[1]
nodeProp[v].accumulatedCharge+=nodeProp[u].avgHammWt
nodeProp[u].accumulatedCharge+=nodeProp[v].avgHammWt
nodeProp[v].neighborsToResolve+=nodeProp[u].influence
nodeProp[u].neighborsToResolve+=nodeProp[v].influence
# set new state vector
numQueries+=1
for v in G.nodes():
if nodeProp[v].neighborsToResolve:
avgCharge=nodeProp[v].accumulatedCharge/nodeProp[v].neighborsToResolve
else:
avgCharge=0.5 # this node has no use now; can be set to any state
if avgCharge>=random():
q[v]=1
else:
q[v]=0
# while ends here
# check if converged, and if yes, then, extract threshold
logging.debug("Comparing discovered thresholds with actual thresholds.")
for v in G.nodes():
if nodeProp[v].tLow!=T[v]:
logging.error("Incorrect threshold estimate for '%d': actual t=%d; est t=%d" \
%(v,T[v],nodeProp[v].tLow))
return False,numQueries
return True,numQueries
def cliqueOneThreshold():
# This is a test case as well as example for thresholdAlgorithm.
# All thresholds will be the same.
seedVal=12345
seed(seedVal)
# generate graph
numNodes=100
G=nx.complete_graph(numNodes)
for threshold in range(numNodes+1):
T={}
for v in G.nodes():
T[v]=threshold
# run threshold algorithm
[state,numQueries]=thresholdAlgorithm(G,T,successorStandardThreshold)
if state==False:
logging.error("Not all thresholds discovered.")
else:
logging.debug("All thresholds discovered")
print("experiment=clique_fixed_threshold; clique size=%d; threshold=%d; seed=%d; queries: %d;" \
%(G.number_of_nodes(),threshold,seedVal,numQueries))
return
def cliqueRandomThreshold():
# This is a test case as well as example for thresholdAlgorithm
seedVal=12345
seed(seedVal)
# generate graph
numNodes=100
G=nx.complete_graph(numNodes)
# set random thresholds
T={};
for node in G.nodes():
T[node]=randint(0,numNodes+1)
# run threshold algorithm
[state,numQueries]=thresholdAlgorithm(G,T,successorStandardThreshold)
if state==False:
logging.error("Not all thresholds discovered.")
else:
logging.debug("All thresholds discovered")
print("experiment=clique_random_threshold; clique size=%d; seed=%d; queries: %d" \
%(G.number_of_nodes(),seedVal,numQueries))
return
if __name__=='__main__':
# parser
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true")
args = parser.parse_args()
# set logger
if args.verbose:
logging.basicConfig(level=logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
# run example 1
# cliqueOneThreshold()
# run example 2
cliqueRandomThreshold()
| 30.251121 | 102 | 0.64542 | 845 | 6,746 | 5.113609 | 0.23787 | 0.052071 | 0.014811 | 0.01134 | 0.265448 | 0.245082 | 0.218468 | 0.213839 | 0.185142 | 0.172182 | 0 | 0.010378 | 0.242959 | 6,746 | 222 | 103 | 30.387387 | 0.835716 | 0.149274 | 0 | 0.295597 | 0 | 0.006289 | 0.124781 | 0.02103 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.069182 | null | null | 0.012579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
518b046e2ab1fc7c0553b8c1b697fb51787ff7f0 | 3,180 | py | Python | app/controllers/settings.py | williamflynt/MailgunMailer | c25212ddd57b0603dd10f651acc0e381c10b8cdd | [
"MIT"
] | null | null | null | app/controllers/settings.py | williamflynt/MailgunMailer | c25212ddd57b0603dd10f651acc0e381c10b8cdd | [
"MIT"
] | null | null | null | app/controllers/settings.py | williamflynt/MailgunMailer | c25212ddd57b0603dd10f651acc0e381c10b8cdd | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from flask import render_template, session, request
from app import app
from app.log import get_logger
from app.models.Login import login_required
from app.models.SQL_DB import User
logger = get_logger(__name__)
@app.route("/settings/profile", methods=["GET", "POST"])
@login_required
def settings_profile():
from app.models.Mailgun_Internal import mailgun_get_campaigns
username = session["username"]
campaigns = mailgun_get_campaigns()
user_data = User.query.filter_by(username=username).first()
return render_template(
"settings/index.html", user_data=user_data, campaigns=campaigns
)
@app.route("/settings/profile/profile_change", methods=["POST"])
@login_required
def settings_profile_change():
from app.models.Settings import edit_profile
try:
name_1 = request.form["name_1"]
name_2 = request.form["name_2"]
company = request.form["company"]
address = request.form["address"]
edit_profile(name_1, name_2, company, address)
return "Success. Profile Changed."
except Exception as e:
logger.exception(e)
return "Problem occurred. Contact system administrator"
@app.route("/settings/profile/pw_change", methods=["POST"])
@login_required
def settings_profile_pw_change():
from app.models.Settings import edit_password
from flask_bcrypt import generate_password_hash
try:
password = request.form["password"]
pw_hash = generate_password_hash(password).decode("utf8")
edit_password(pw_hash)
return "Success. Password Changed."
except Exception as e:
logger.exception(e)
return "Problem occurred. Contact system administrator"
@app.route("/settings/profile/mg_change", methods=["POST"])
@login_required
def settings_profile_mg_change():
from app.models.Settings import edit_mg_settings
try:
mg_domain = request.form["mg_domain"]
mg_api_private = request.form["mg_api_private"]
mg_sender = request.form["mg_sender"]
edit_mg_settings(mg_domain, mg_api_private, mg_sender)
return "Success. Settings Changed."
except Exception as e:
logger.exception(e)
return "Problem occurred. Contact system administrator"
@app.route("/settings/campaigns/add", methods=["POST"])
@login_required
def settings_campaigns_add():
from app.models.Mailgun_Internal import mailgun_add_campaigns
try:
campaign_name = request.form["campaign_name"]
response = mailgun_add_campaigns(campaign_name)
return response["message"]
except Exception as e:
logger.exception(e)
return "Problem occurred. Contact system administrator"
@app.route("/settings/campaigns/delete", methods=["POST"])
@login_required
def settings_campaigns_delete():
from app.models.Mailgun_Internal import mailgun_delete_campaigns
try:
campaign_name = request.form["campaign_name"]
mailgun_delete_campaigns(campaign_name)
return "Success. Campaign Deleted."
except Exception as e:
logger.exception(e)
return "Problem occurred. Contact system administrator"
| 31.176471 | 71 | 0.713522 | 390 | 3,180 | 5.587179 | 0.194872 | 0.032125 | 0.047728 | 0.055071 | 0.543827 | 0.51262 | 0.496558 | 0.348784 | 0.239559 | 0.239559 | 0 | 0.003092 | 0.186478 | 3,180 | 101 | 72 | 31.485149 | 0.839196 | 0.006604 | 0 | 0.358974 | 0 | 0 | 0.203358 | 0.042762 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.076923 | 0.153846 | 0 | 0.371795 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
518f3024a30d130d5b61ad1a31e1cc80b846e52d | 901 | py | Python | Scraping/Scraping_Instagram/insta_crawler/Image.py | ghassen1302/Interview_Code_Demonstration | fd9e2b313d3203e79e4f40bd52f82365508126d2 | [
"Apache-2.0"
] | null | null | null | Scraping/Scraping_Instagram/insta_crawler/Image.py | ghassen1302/Interview_Code_Demonstration | fd9e2b313d3203e79e4f40bd52f82365508126d2 | [
"Apache-2.0"
] | null | null | null | Scraping/Scraping_Instagram/insta_crawler/Image.py | ghassen1302/Interview_Code_Demonstration | fd9e2b313d3203e79e4f40bd52f82365508126d2 | [
"Apache-2.0"
] | null | null | null |
import os
import json
import time
from selenium import webdriver
# from .crawler.media import login
from .crawler.media import getimages
# from .crawler.media import getcomments
# from .crawler.crawler.spiders.profil import launch
# from webdriver_manager.chrome import ChromeDriverManager
username = 'jawherbouhouch75'
password = '123456789pilote@'
class Image():
def __init__(self):
pass
def get_images(self, user='', starttime='', keys=[], download_img=False, **kwargs):
for options in kwargs.values():  # avoid shadowing the built-in `dict`
for key in options.keys():
if key == 'post_scroll_down':
post_scroll_down = int(options[key])
gt = getimages.Getimages(user, post_scroll_down, starttime, keys, download_img)
links, images_url = gt.get_links()
posts = gt.get_images(links, images_url)
return posts
| 30.033333 | 87 | 0.657048 | 106 | 901 | 5.415094 | 0.490566 | 0.076655 | 0.083624 | 0.114983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016224 | 0.247503 | 901 | 29 | 88 | 31.068966 | 0.830383 | 0.198668 | 0 | 0 | 0 | 0 | 0.067039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.105263 | 0.263158 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
5193249282cd300d38e16d581f1b6ec59e9d1f3a | 3,605 | py | Python | app.py | vblazhnov/stats | dfe103521543af40b6e6c941d28cdd831b765a92 | [
"Apache-2.0"
] | null | null | null | app.py | vblazhnov/stats | dfe103521543af40b6e6c941d28cdd831b765a92 | [
"Apache-2.0"
] | null | null | null | app.py | vblazhnov/stats | dfe103521543af40b6e6c941d28cdd831b765a92 | [
"Apache-2.0"
] | null | null | null | #!flask/bin/python
# Copyright 2015 vblazhnov
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__author__ = 'vblazhnov'
from db import DataBase
from functools import wraps
from flask import Flask, jsonify, abort, make_response, request
app = Flask(__name__)
@app.errorhandler(404)
def not_found(error):
return make_response(jsonify({'error': 'Not found'}), 404)
@app.errorhandler(400)
def incorrect_data(error):
return make_response(jsonify({'error': 'Your data is incorrect'}), 400)
@app.errorhandler(409)
def conflict_data(error):
return make_response(jsonify({'error': 'Your data is conflict. Try another.'}), 409)
@app.errorhandler(403)
def forbidden(error):
return unauthorized()
def requires_auth(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization
if not auth or not check_auth(auth.username, auth.password):
return unauthorized()
kwargs['authUserName'] = auth.username
return f(*args, **kwargs)
return decorated
def check_auth(login, password):
return DataBase.is_valid_pass(login, password)
def unauthorized():
return make_response(jsonify({'error': 'Unauthorized access'}), 403)
@app.route('/stats/api/register', methods=['POST'])
def sign_up():
"""
Register a new user.
"""
if not request.json or not 'login' in request.json or not 'password' in request.json:
abort(400)
user = DataBase.add_user(request.json['login'], request.json['password'])
if user is None:
abort(409)
return jsonify({'login': user[1], 'apiKey': user[3]}), 201
@app.route('/stats/api/me', methods=['GET'])
@requires_auth
def get_user_info(**args):
"""
Return information about the authenticated user.
"""
user = DataBase.get_user_info(args['authUserName'])
if user is None:
abort(403)
return jsonify({'login': user[1], 'apiKey': user[3]})
@app.route('/stats/api/events', methods=['POST'])
def add_event():
"""
Record a new event.
"""
if not request.json or not 'apiKey' in request.json or not 'event' in request.json:
abort(400)
result = DataBase.add_event(request.json['apiKey'], request.json['event'], request.remote_addr)
if result is None:
abort(400)
return jsonify({'event': result[2], 'date': result[3], 'ip': result[4]}), 201
@app.route('/stats/api/events', methods=['GET'])
@requires_auth
def get_events(**args):
"""
List the authenticated user's events.
"""
user = DataBase.get_user_info(args['authUserName'])
if user is None:
abort(403)
result = DataBase.get_users_events(user[0])
return jsonify({'events': result})
@app.route('/stats/api/events/<string:name>', methods=['GET'])
@requires_auth
def get_event(name, **args):
"""
Return detailed information about a single event.
"""
user = DataBase.get_user_info(args['authUserName'])
if user is None:
abort(403)
result = DataBase.get_users_event(user[0], name)
return jsonify({'event': name, 'events': result})
if __name__ == '__main__':
app.run(debug=True)
| 26.123188 | 99 | 0.678225 | 483 | 3,605 | 4.954451 | 0.327122 | 0.045967 | 0.027163 | 0.033431 | 0.336398 | 0.250313 | 0.161722 | 0.161722 | 0.133305 | 0.133305 | 0 | 0.023129 | 0.184466 | 3,605 | 137 | 100 | 26.313869 | 0.790816 | 0.197226 | 0 | 0.28169 | 0 | 0 | 0.136283 | 0.01106 | 0 | 0 | 0 | 0 | 0 | 1 | 0.183099 | false | 0.070423 | 0.042254 | 0.084507 | 0.422535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
51952efcf7c6d5d53b78efef09aae93be736969f | 1,773 | py | Python | paraview_scripts/export_slice_to_csv.py | ric-95/azimuthal-average | a955ff4c6904c8f15d062d08cd83b59655f64db0 | [
"MIT"
] | null | null | null | paraview_scripts/export_slice_to_csv.py | ric-95/azimuthal-average | a955ff4c6904c8f15d062d08cd83b59655f64db0 | [
"MIT"
] | null | null | null | paraview_scripts/export_slice_to_csv.py | ric-95/azimuthal-average | a955ff4c6904c8f15d062d08cd83b59655f64db0 | [
"MIT"
] | null | null | null | # trace generated using paraview version 5.9.0-RC2
#### import the simple module from the paraview
from paraview.simple import *
#### disable automatic camera reset on 'Show'
def export_slice_to_csv(render_view, output_file="slice.csv"):
paraview.simple._DisableFirstRenderCameraReset()
# get active view
renderView1 = render_view
# destroy renderView1
Delete(renderView1)
del renderView1
# Create a new 'SpreadSheet View'
spreadSheetView1 = CreateView('SpreadSheetView')
spreadSheetView1.ColumnToSort = ''
spreadSheetView1.BlockSize = 1024
animationScene1 = GetAnimationScene()
animationScene1.GoToLast()
# get active source.
resampleWithDataset1 = GetActiveSource()
# show data in view
resampleWithDataset1Display = Show(resampleWithDataset1, spreadSheetView1, 'SpreadSheetRepresentation')
# get layout
layout1 = GetLayoutByName("Layout #1")
# assign view to a particular cell in the layout
AssignViewToLayout(view=spreadSheetView1, layout=layout1, hint=0)
# export view
ExportView(output_file, view=spreadSheetView1)
#================================================================
# addendum: following script captures some of the application
# state to faithfully reproduce the visualization during playback
#================================================================
#--------------------------------
# saving layout sizes for layouts
# layout/tab size in pixels
layout1.SetSize(400, 400)
#--------------------------------------------
# uncomment the following to render all views
# RenderAllViews()
# alternatively, if you want to write images, you can use SaveScreenshot(...).
return spreadSheetView1
| 30.568966 | 107 | 0.640722 | 163 | 1,773 | 6.920245 | 0.625767 | 0.053191 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024088 | 0.180485 | 1,773 | 57 | 108 | 31.105263 | 0.752237 | 0.468697 | 0 | 0 | 1 | 0 | 0.063527 | 0.027382 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
519703519b1822b675914d8d9bbde49d58e9c9f1 | 2,098 | py | Python | NewDeclarationInQueue/preprocess/document_location.py | it-pebune/ani-research-data-extraction | e8b0ffecb0835020ce7942223cf566dc45ccee35 | [
"MIT"
] | null | null | null | NewDeclarationInQueue/preprocess/document_location.py | it-pebune/ani-research-data-extraction | e8b0ffecb0835020ce7942223cf566dc45ccee35 | [
"MIT"
] | 7 | 2022-01-29T22:19:55.000Z | 2022-03-28T18:18:19.000Z | NewDeclarationInQueue/preprocess/document_location.py | it-pebune/ani-research-data-extraction | e8b0ffecb0835020ce7942223cf566dc45ccee35 | [
"MIT"
] | null | null | null | from NewDeclarationInQueue.preprocess.models import DocumentType
class DocumentLocation:
"""Class for all the parameters necessary for processing a file
"""
type = DocumentType.DOC_WEALTH
storage = 'azure'
path = ''
filename = ''
out_path = ''
page_image_filename = ''
ocr_json_filename = ''
ocr_table_json_filename = ''
ocr_custom_json_filename = ''
formular_type = 0
def __init__(self, type, storage, path, filename, outpath, pageimage, jsonfilename, tablefilename, customfilename, formulartype):
"""Constructor containg all the parameters for processing a file
Args:
type ([int]): Type of declaration (welth of interest)
storage ([type]): Type of storage: azure or something else
path ([type]): Relative path to the file to process
filename ([type]): Name of the file to be processed
outpath ([type]): Relative path where the output files should be saved
pageimage ([type]): Relative path where the page images should be saved
jsonfilename ([type]): Relative path where the JSON file obtained from OCR service should be saved
tablefilename ([type]): Relative path where the JSON file obtained from
processing the file obtained from OCR services should be saved
customfilename ([str]): Relative path where the custom JSON file will be saved
(obtained after processing the data from OCR service)
formulartype ([int]): Type of the formular (structure)
"""
self.type = type
self.storage = storage
self.path = path
self.filename = filename
self.out_path = outpath
self.page_image_filename = pageimage
self.ocr_json_filename = jsonfilename
self.ocr_table_json_filename = tablefilename
self.ocr_custom_json_filename = customfilename
self.formular_type = formulartype
def __str__(self):
return self.storage
| 39.584906 | 133 | 0.639657 | 235 | 2,098 | 5.570213 | 0.310638 | 0.055004 | 0.061115 | 0.076394 | 0.103896 | 0.067227 | 0.067227 | 0.067227 | 0.067227 | 0 | 0 | 0.000676 | 0.295043 | 2,098 | 53 | 134 | 39.584906 | 0.884381 | 0.487131 | 0 | 0 | 0 | 0 | 0.005247 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.04 | 0.04 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5198c026cb3a9f64c17f8f6d7e821a1da7113439 | 1,850 | py | Python | examples/simple_rnn_comparison/without_treeano.py | diogo149/treeano | 9b3fd6bb5eb2f6738c9e5c357e70bef95dcae7b7 | [
"Apache-2.0"
] | 45 | 2015-04-26T04:45:51.000Z | 2022-01-24T15:03:55.000Z | examples/simple_rnn_comparison/without_treeano.py | diogo149/treeano | 9b3fd6bb5eb2f6738c9e5c357e70bef95dcae7b7 | [
"Apache-2.0"
] | 5 | 2015-07-24T00:51:19.000Z | 2016-02-03T07:32:49.000Z | examples/simple_rnn_comparison/without_treeano.py | diogo149/treeano | 9b3fd6bb5eb2f6738c9e5c357e70bef95dcae7b7 | [
"Apache-2.0"
] | 13 | 2015-07-24T00:46:28.000Z | 2022-01-18T16:55:47.000Z | import numpy as np
import theano
import theano.tensor as T
fX = theano.config.floatX
LAG = 20
LENGTH = 50
N_TRAIN = 5000
HIDDEN_STATE_SIZE = 10
def binary_toy_data(lag=1, length=20):
inputs = np.random.randint(0, 2, length).astype(fX)
outputs = np.array(lag * [0] + list(inputs), dtype=fX)[:length]
return inputs, outputs
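The toy task above is a delayed-copy problem: the target sequence is the input shifted right by `lag` steps and zero-padded at the front. A self-contained NumPy sketch (mirroring `binary_toy_data`, with an added seed for reproducibility) makes the relationship explicit:

```python
import numpy as np

def binary_toy_data_np(lag=1, length=20, seed=0):
    # Same construction as binary_toy_data above, seeded for reproducibility.
    rng = np.random.RandomState(seed)
    inputs = rng.randint(0, 2, length).astype("float32")
    outputs = np.array(lag * [0] + list(inputs), dtype="float32")[:length]
    return inputs, outputs

inputs, outputs = binary_toy_data_np(lag=3, length=10)
# The output is the input delayed by `lag`, with zeros at the front.
assert np.array_equal(outputs[:3], np.zeros(3))
assert np.array_equal(outputs[3:], inputs[:7])
```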
W_x = theano.shared(
(0.1 * np.random.randn(1, HIDDEN_STATE_SIZE)).astype(fX))
W_h = theano.shared(
(0.1 * np.random.randn(HIDDEN_STATE_SIZE,
HIDDEN_STATE_SIZE)).astype(fX))
W_y = theano.shared(
(0.1 * np.random.randn(HIDDEN_STATE_SIZE, 1)).astype(fX))
b_h = theano.shared(np.zeros((HIDDEN_STATE_SIZE,), dtype=fX))
b_y = theano.shared(np.zeros((1,), dtype=fX))
X = T.matrix("X")
Y = T.matrix("Y")
def step(x, h):
new_h = T.tanh(T.dot(x, W_x) + T.dot(h, W_h) + b_h)
new_y = T.nnet.sigmoid(T.dot(new_h, W_y) + b_y)
return new_h, new_y
results, updates = theano.scan(
fn=step,
sequences=[X],
outputs_info=[T.patternbroadcast(T.zeros((HIDDEN_STATE_SIZE)),
(False,)), None],
)
ys = results[1]
loss = T.mean((ys - Y) ** 2)
params = [W_x, W_h, W_y, b_h, b_y]
grads = T.grad(loss, params)
updates = []
for param, grad in zip(params, grads):
updates.append((param, param - grad * 0.1))
train_fn = theano.function([X, Y], loss, updates=updates)
valid_fn = theano.function([X], ys)
import time
st = time.time()
for i in range(N_TRAIN):
inputs, outputs = binary_toy_data(lag=LAG, length=LENGTH)
loss = train_fn(inputs.reshape(-1, 1), outputs.reshape(-1, 1))
if (i % (N_TRAIN // 100)) == 0:
print(loss)
print("total_time: %s" % (time.time() - st))
inputs, outputs = binary_toy_data(lag=LAG, length=LENGTH)
pred = valid_fn(inputs.reshape(-1, 1)).flatten()
print(np.round(pred) == outputs)
| 25.694444 | 67 | 0.636216 | 311 | 1,850 | 3.62701 | 0.279743 | 0.068262 | 0.093085 | 0.042553 | 0.249113 | 0.218972 | 0.176418 | 0.152482 | 0.152482 | 0.074468 | 0 | 0.026139 | 0.193514 | 1,850 | 71 | 68 | 26.056338 | 0.729893 | 0 | 0 | 0.037736 | 0 | 0 | 0.008649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.075472 | null | null | 0.056604 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51a0bd2bff320ee855b3e9dd424da651c680ad50 | 282 | py | Python | src/helpers/config.py | daavofficial/Aural | b4ef4170dcc27d87b2a08a9ae59383ea8099e024 | [
"MIT"
] | null | null | null | src/helpers/config.py | daavofficial/Aural | b4ef4170dcc27d87b2a08a9ae59383ea8099e024 | [
"MIT"
] | null | null | null | src/helpers/config.py | daavofficial/Aural | b4ef4170dcc27d87b2a08a9ae59383ea8099e024 | [
"MIT"
] | null | null | null |
VERSION = "0.0.2"
VERSION_GUI = "0.0.2"
VERSION_CUI = "0.0.0"
VERSION_AUDIOCABLE = "0.0.2"
VERSION_ROUTE = "0.0.2"
VERSION_SETINGS = "0.0.1"
CALLBACK_AUDIOCABLE_SELECTED = None
CALLBACK_ROUTE_SELECTED = None
SETTINGS = None
CONFIGURATION = None
PATH_ROOT = ""
PATH_SETTINGS = "" | 17.625 | 35 | 0.719858 | 45 | 282 | 4.266667 | 0.355556 | 0.072917 | 0.0625 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0.138298 | 282 | 16 | 36 | 17.625 | 0.716049 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51a6cf65d39a2bf9aac0a6337d87e8b618aefc57 | 410 | py | Python | gochan/browser.py | morin4ga/gochan | 46285caa394c6b37e1e418ebd5c8be2b296b564f | [
"MIT"
] | 1 | 2020-04-22T09:25:53.000Z | 2020-04-22T09:25:53.000Z | gochan/browser.py | anon774/gochan | 46285caa394c6b37e1e418ebd5c8be2b296b564f | [
"MIT"
] | 4 | 2021-04-23T03:20:39.000Z | 2022-03-12T00:29:19.000Z | gochan/browser.py | anon774/gochan | 46285caa394c6b37e1e418ebd5c8be2b296b564f | [
"MIT"
] | null | null | null | import webbrowser
from subprocess import Popen
from typing import List
from gochan.config import BROWSER_PATH
def open_link(url: str):
if BROWSER_PATH is None:
webbrowser.open(url)
else:
Popen([BROWSER_PATH, url])
def open_links(urls: List[str]):
if BROWSER_PATH is None:
for url in urls:
webbrowser.open(url)
else:
Popen([BROWSER_PATH, *urls])
| 19.52381 | 38 | 0.663415 | 57 | 410 | 4.649123 | 0.421053 | 0.207547 | 0.090566 | 0.120755 | 0.445283 | 0.445283 | 0.279245 | 0 | 0 | 0 | 0 | 0 | 0.256098 | 410 | 20 | 39 | 20.5 | 0.868852 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.266667 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51acd6a8ef3173804b57f91d34eb632d4e8c1afe | 574 | py | Python | vstutils/api/meta.py | FloThinksPi-Forks/vstutils | eeb4d7a4d280cb8b844d9c9ab212e88f7bbe5d38 | [
"Apache-2.0"
] | 36 | 2018-05-29T22:55:45.000Z | 2021-11-18T22:59:29.000Z | vstutils/api/meta.py | FloThinksPi-Forks/vstutils | eeb4d7a4d280cb8b844d9c9ab212e88f7bbe5d38 | [
"Apache-2.0"
] | 19 | 2020-03-05T01:31:52.000Z | 2022-01-21T08:22:19.000Z | vstutils/api/meta.py | FloThinksPi-Forks/vstutils | eeb4d7a4d280cb8b844d9c9ab212e88f7bbe5d38 | [
"Apache-2.0"
] | 10 | 2018-07-30T10:14:30.000Z | 2022-01-08T12:07:20.000Z | from rest_framework.metadata import SimpleMetadata
from . import fields, serializers
class VSTMetadata(SimpleMetadata):
label_lookup = SimpleMetadata.label_lookup
origin_mapping = SimpleMetadata.label_lookup.mapping
mapping_fields = {
fields.FileInStringField: 'file',
fields.SecretFileInString: 'secretfile',
fields.AutoCompletionField: 'autocomplete',
serializers.JsonObjectSerializer: 'json',
fields.DependEnumField: 'dynamic',
fields.HtmlField: 'html',
}
label_lookup.mapping.update(mapping_fields)
| 31.888889 | 56 | 0.733449 | 50 | 574 | 8.26 | 0.54 | 0.106538 | 0.181598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182927 | 574 | 17 | 57 | 33.764706 | 0.880597 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51b46c0a7235d08384e87af73a0f41cd3c65733d | 2,411 | py | Python | opexebo/tests/test_general/test_upsample.py | simon-ball/opexebo | 8e44a4890efa60a6ed8c2e9e0df7cc9ab2d80d31 | [
"MIT"
] | 4 | 2019-06-12T07:50:42.000Z | 2021-11-19T12:55:47.000Z | opexebo/tests/test_general/test_upsample.py | simon-ball/opexebo | 8e44a4890efa60a6ed8c2e9e0df7cc9ab2d80d31 | [
"MIT"
] | 12 | 2019-06-12T07:26:40.000Z | 2021-08-11T15:10:47.000Z | opexebo/tests/test_general/test_upsample.py | simon-ball/opexebo | 8e44a4890efa60a6ed8c2e9e0df7cc9ab2d80d31 | [
"MIT"
] | 4 | 2019-11-21T10:44:37.000Z | 2022-01-07T14:21:07.000Z | """ Tests for upsampling"""
import numpy as np
import pytest
from opexebo.general import upsample as func
print("=== tests_upsampling ===")
###############################################################################
################ MAIN TESTS
###############################################################################
def test_invalid_inputs():
# Invalid array format
with pytest.raises(NotImplementedError):
upscale = 2
func(3, upscale)
func("abc", upscale)
func({2: 2}, upscale)
# Invalid integer upscaling
# Force integer upscaling with masked array inputs
with pytest.raises(NotImplementedError):
masked_array = np.ma.ones((4, 6))
high_dim_ma = np.ma.ones((4, 6, 7))
fractional_upscale = 2.5
shrinking_upscale = 0
valid_upscale = 2
func(masked_array, fractional_upscale)
func(masked_array, shrinking_upscale)
func(high_dim_ma, valid_upscale)
# Invalid fractional upscaling
# force fractional upscaling with floating point upscale
with pytest.raises(ValueError):
ndarray = np.ones((4, 6))
zero_upscale = float(0)
neg_upscale = -0.5
func(ndarray, zero_upscale)
func(ndarray, neg_upscale)
print("test_invalid_inputs passed")
return True
def test_integer_scaling_2d():
upscale = int(2)
ndarray = np.random.rand(50, 50)
maarray = ndarray.copy()
maarray[25:28, 10:16] = np.nan
maarray = np.ma.masked_invalid(maarray)
# Test behavior on masked array
new_array = func(maarray, upscale)
assert np.array_equal(np.array(maarray.shape) * upscale, np.array(new_array.shape))
assert np.sum(maarray.mask) * (upscale ** 2) == np.sum(new_array.mask)
# Test behavior on ndarray
new_array = func(ndarray, upscale)
assert np.array_equal(np.array(ndarray.shape) * upscale, np.array(new_array.shape))
print("test_integer_scaling_2d passed")
return True
def test_fractional_scaling_2d():
upscale = 1.45
ndarray = np.random.rand(50, 75)
new_array = func(ndarray, upscale)
assert np.array_equal(np.round(np.array(ndarray.shape) * upscale), new_array.shape)
print("test_fractional_scaling_2d passed")
return True
# if __name__ == '__main__':
# test_invalid_inputs()
# test_integer_scaling_2d()
# test_fractional_scaling_2d()
| 30.1375 | 87 | 0.622148 | 294 | 2,411 | 4.894558 | 0.282313 | 0.038916 | 0.035441 | 0.041696 | 0.277276 | 0.134121 | 0.134121 | 0.063933 | 0.063933 | 0.063933 | 0 | 0.023573 | 0.208212 | 2,411 | 79 | 88 | 30.518987 | 0.730225 | 0.164247 | 0 | 0.148936 | 0 | 0 | 0.063596 | 0.026864 | 0 | 0 | 0 | 0 | 0.085106 | 1 | 0.06383 | false | 0.06383 | 0.06383 | 0 | 0.191489 | 0.085106 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
51c4801c1f9f8af4b3019cc7abe4aa467e67d141 | 4,641 | py | Python | awards/models.py | shamanu4/test_project | 8ec52b5ab88c7bae4e469dc04fe64630e2f081fa | [
"MIT"
] | 1 | 2019-07-26T09:56:38.000Z | 2019-07-26T09:56:38.000Z | awards/models.py | shamanu4/test_project | 8ec52b5ab88c7bae4e469dc04fe64630e2f081fa | [
"MIT"
] | 6 | 2020-06-05T19:00:20.000Z | 2022-03-11T23:29:35.000Z | awards/models.py | vintkor/cryptotrade | cd27b5d58e4149cf9ad5e035983fcec566369833 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils.translation import ugettext as _
from ckeditor_uploader.fields import RichTextUploadingField
class RangAward(models.Model):
title = models.CharField(verbose_name=_('Заголовок'), max_length=60)
volume = models.DecimalField(verbose_name=_('Объём'), max_digits=10, decimal_places=2)
max_lines = models.PositiveSmallIntegerField(verbose_name=_('Количество линий'))
lines_volume = models.DecimalField(verbose_name=_('Объём в линиях'), max_digits=10, decimal_places=2)
include_rang = models.ForeignKey('self', on_delete=models.CASCADE, verbose_name=_('Ранг партнёров'), null=True, blank=True)
include_rang_count = models.PositiveSmallIntegerField(verbose_name=_('Количество партнёров с указаным рангом'))
bonus = models.DecimalField(verbose_name=_('Бонус'), max_digits=10, decimal_places=2)
is_final = models.BooleanField(default=False)
is_start = models.BooleanField(default=False)
quick_days = models.PositiveSmallIntegerField(blank=True, null=True, verbose_name=_('К-во дней с момента регистрации'))
quick_bonus = models.DecimalField(blank=True, null=True, verbose_name=_('Бонус быстрого старта'), decimal_places=2, max_digits=10)
weight = models.PositiveSmallIntegerField(default=10, verbose_name=_('Вес'))
class Meta:
verbose_name = _('Ранговый бонус')
verbose_name_plural = _('Ранговые бонусы')
ordering = ('volume',)
def __str__(self):
return self.title
class RangAwardHistory(models.Model):
user = models.ForeignKey('user_profile.User', on_delete=models.CASCADE, verbose_name=_('Пользователь'))
text = RichTextUploadingField()
created = models.DateTimeField(auto_now_add=True, auto_now=False, verbose_name=_('Дата создания'))
class Meta:
verbose_name = _('История начисления рангового бонуса')
verbose_name_plural = _('Истории начисления рангового бонуса')
def __str__(self):
return self.user.unique_number
class MultiLevelBonus(models.Model):
rang = models.ForeignKey(RangAward, on_delete=models.CASCADE, verbose_name=_('Ранг'))
bonus_for_line_1 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 1'), default=0)
bonus_for_line_2 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 2'), default=0)
bonus_for_line_3 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 3'), default=0)
bonus_for_line_4 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 4'), default=0)
bonus_for_line_5 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 5'), default=0)
bonus_for_line_6 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 6'), default=0)
bonus_for_line_7 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 7'), default=0)
bonus_for_line_8 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 8'), default=0)
bonus_for_line_9 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 9'), default=0)
bonus_for_line_10 = models.DecimalField(
max_digits=3, decimal_places=1, verbose_name=_('Бонус за линию 10'), default=0)
class Meta:
verbose_name = _('Многоуровневый бонус')
verbose_name_plural = _('Многоуровневые бонусы')
def __str__(self):
return self.rang.title
class MultiLevelBonusHistory(models.Model):
package_history = models.ForeignKey('packages.PackageHistory', on_delete=models.CASCADE, verbose_name=_('Запись в истории'))
text = RichTextUploadingField()
created = models.DateTimeField(auto_now_add=True, auto_now=False, verbose_name=_('Дата создания'))
class Meta:
verbose_name = _('Отчёт по многоуровневому бонусу')
verbose_name_plural = _('Отчёты по многоуровневому бонусу')
def __str__(self):
return '{} > {}'.format(self.package_history.user.unique_number, self.package_history.package.title)
class PointAward(models.Model):
rang = models.ForeignKey(RangAward, on_delete=models.CASCADE, verbose_name=_('Ранг'))
bonus = models.DecimalField(verbose_name=_('Бонус'), max_digits=10, decimal_places=2)
max_money = 10000
class Meta:
verbose_name = _('Схема конвертации баллов в FBC')
verbose_name_plural = _('Схемы конвертации баллов в FBC')
def __str__(self):
return self.rang.title
| 47.357143 | 134 | 0.731954 | 590 | 4,641 | 5.437288 | 0.232203 | 0.12687 | 0.064838 | 0.084165 | 0.576995 | 0.475998 | 0.40586 | 0.366895 | 0.366895 | 0.366895 | 0 | 0.019417 | 0.156647 | 4,641 | 97 | 135 | 47.845361 | 0.800204 | 0 | 0 | 0.263158 | 0 | 0 | 0.151691 | 0.004956 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065789 | false | 0 | 0.039474 | 0.065789 | 0.723684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
51cbb58d8357f9795453f4cf8aa2840be325e66d | 3,255 | py | Python | tests/images/test_image_utils.py | danjampro/panoptes-utils | ff51019cdd0e188cf5e8d8d70fc3579776a31716 | [
"MIT"
] | 3 | 2019-03-26T03:35:46.000Z | 2021-08-16T16:59:12.000Z | tests/images/test_image_utils.py | danjampro/panoptes-utils | ff51019cdd0e188cf5e8d8d70fc3579776a31716 | [
"MIT"
] | 254 | 2019-03-29T05:42:54.000Z | 2022-02-18T05:03:09.000Z | tests/images/test_image_utils.py | danjampro/panoptes-utils | ff51019cdd0e188cf5e8d8d70fc3579776a31716 | [
"MIT"
] | 9 | 2019-03-25T09:55:35.000Z | 2021-05-18T02:45:17.000Z | import os
import tempfile
import numpy as np
import pytest
from astropy.nddata import Cutout2D
from panoptes.utils import images as img_utils
from panoptes.utils import error
def test_crop_data():
ones = np.ones((201, 201))
assert ones.sum() == 40401.
cropped01 = img_utils.crop_data(ones) # False to exercise coverage.
assert cropped01.sum() == 40000.
cropped02 = img_utils.crop_data(ones, box_width=10)
assert cropped02.sum() == 100.
cropped03 = img_utils.crop_data(ones, box_width=6, center=(50, 50))
assert cropped03.sum() == 36.
# Test the Cutout2D object
cropped04 = img_utils.crop_data(ones,
box_width=20,
center=(50, 50),
data_only=False)
assert isinstance(cropped04, Cutout2D)
assert cropped04.position_original == (50, 50)
# Box is 20 pixels wide so center is at 10,10
assert cropped04.position_cutout == (10, 10)
def test_make_pretty_image(solved_fits_file, tiny_fits_file, save_environ):
# Make a dir and put test image files in it.
with tempfile.TemporaryDirectory() as tmpdir:
# TODO Add a small CR2 file to our sample image files.
# Can't operate on a non-existent files.
with pytest.warns(UserWarning, match="File doesn't exist"):
assert not img_utils.make_pretty_image('Foobar')
# Can handle the fits file, and creating the images dir for linking
# the latest image.
imgdir = os.path.join(tmpdir, 'images')
assert not os.path.isdir(imgdir)
os.makedirs(imgdir, exist_ok=True)
link_path = os.path.join(tmpdir, 'latest.jpg')
pretty = img_utils.make_pretty_image(solved_fits_file, link_path=link_path)
assert pretty
assert os.path.isfile(pretty)
assert os.path.isdir(imgdir)
assert link_path == pretty
os.remove(link_path)
os.rmdir(imgdir)
# Try again, but without link_path.
pretty = img_utils.make_pretty_image(tiny_fits_file, title='some text')
assert pretty
assert os.path.isfile(pretty)
assert not os.path.isdir(imgdir)
@pytest.mark.skipif(
"TRAVIS" in os.environ and os.environ["TRAVIS"] == "true",
reason="Skipping this test on Travis CI.")
def test_make_pretty_image_cr2_fail():
with tempfile.TemporaryDirectory() as tmpdir:
tmpfile = os.path.join(tmpdir, 'bad.cr2')
with open(tmpfile, 'w') as f:
f.write('not an image file')
with pytest.raises(error.InvalidCommand):
img_utils.make_pretty_image(tmpfile,
title='some text')
with pytest.raises(error.InvalidCommand):
img_utils.make_pretty_image(tmpfile)
def test_make_pretty_image_cr2(cr2_file, tmpdir):
link_path = str(tmpdir.mkdir('images').join('latest.jpg'))
pretty_path = img_utils.make_pretty_image(cr2_file,
title='CR2 Test',
image_type='cr2',
link_path=link_path)
assert os.path.exists(pretty_path)
assert pretty_path == link_path
| 35.769231 | 83 | 0.626421 | 425 | 3,255 | 4.625882 | 0.324706 | 0.044761 | 0.068667 | 0.054934 | 0.346897 | 0.25178 | 0.151577 | 0.10885 | 0.066124 | 0.066124 | 0 | 0.033775 | 0.281413 | 3,255 | 90 | 84 | 36.166667 | 0.806755 | 0.10722 | 0 | 0.15873 | 0 | 0 | 0.054558 | 0 | 0 | 0 | 0 | 0.011111 | 0.285714 | 1 | 0.063492 | false | 0 | 0.111111 | 0 | 0.174603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51d2854307f8ceec0db67f9e86abcc56e1aa5109 | 5,261 | py | Python | polyadicqml/circuitML.py | JoaquinKeller/polyadicQML | 9534ca59285baf9b4060ed546827f2977a8d8657 | [
"Apache-2.0"
] | 19 | 2020-07-24T01:28:44.000Z | 2022-03-29T07:52:19.000Z | polyadicqml/circuitML.py | JoaquinKeller/polyadicQML | 9534ca59285baf9b4060ed546827f2977a8d8657 | [
"Apache-2.0"
] | null | null | null | polyadicqml/circuitML.py | JoaquinKeller/polyadicQML | 9534ca59285baf9b4060ed546827f2977a8d8657 | [
"Apache-2.0"
] | 8 | 2020-07-26T17:27:06.000Z | 2021-11-03T16:17:54.000Z | """Implementation of circuit for ML
"""
from numpy import pi, random, zeros_like, zeros, log2
class circuitML():
"""Abstract Quantum ML circuit interface.
Provides a unified interface to run multiple parametric circuits with
different input and model parameters, agnostic of the backend, implemented
in the subclasses.
Parameters
----------
make_circuit : callable of signature self.make_circuit
Function to generate the circuit corresponding to input `x` and
`params`.
nbqbits : int
Number of qubits.
nbparams : int
Number of parameters.
cbuilder : circuitBuilder
Circuit builder class to be used. It must correspond to the subclass
implementation.
Attributes
----------
nbqbits : int
Number of qubits.
nbparams : int
Number of parameters.
"""
def __init__(self, make_circuit, nbqbits, nbparams, cbuilder):
self.nbqbits = nbqbits
self.nbparams = nbparams
self.__set_builder__(cbuilder)
self.make_circuit = make_circuit
def __set_builder__(self, cbuilder):
self.__verify_builder__(cbuilder)
self._circuitBuilder = cbuilder
def __verify_builder__(self, cbuilder):
raise NotImplementedError
def run(self, X, params, nbshots=None, job_size=None):
"""Run the circuit with input `X` and parameters `params`.
Parameters
----------
X : array-like
Input matrix of shape *(nb_samples, nb_features)*.
params : vector-like
Parameter vector.
nbshots : int, optional
Number of shots for the circuit run, by default ``None``. If
``None``, uses the backend default.
job_size : int, optional
Maximum job size, to split the circuit runs, by default ``None``.
If ``None``, put all *nb_samples* in the same job.
Returns
-------
array
Bitstring counts as an array of shape *(nb_samples, 2**nbqbits)*
"""
raise NotImplementedError
def random_params(self, seed=None):
"""Generate a valid vector of random parameters.
Parameters
----------
seed : int, optional
random seed, by default ``None``
Returns
-------
vector
Vector of random parameters.
"""
if seed: random.seed(seed)
return random.randn(self.nbparams)
def make_circuit(self, bdr, x, params):
"""Generate the circuit corresponding to input `x` and `params`.
NOTE: This function is to be provided by the user, with the present
signature.
Parameters
----------
bdr : circuitBuilder
A circuit builder.
x : vector-like
Input sample
params : vector-like
Parameter vector.
Returns
-------
circuitBuilder
Instructed builder
"""
raise NotImplementedError
def __eq__(self, other):
return self.make_circuit is other.make_circuit
def __repr__(self):
return "<circuitML>"
def __str__(self):
return self.__repr__()
def grad(self, X, params, v=None, eps=None, nbshots=None, job_size=None):
"""Compute the gradient of the circuit w.r.t. parameters *params* on
input *X*.
Uses finite differences of the circuit runs.
Parameters
----------
X : array-like
Input matrix of shape *(nb_samples, nb_features)*.
params : vector-like
Parameter vector of length *nb_params*.
v : array-like
Vector or matrix to right multiply the Jacobian with.
eps : float, optional
Epsilon for finite differences. By default uses ``1e-8`` if
`nbshots` is not provided, else uses :math:`\\pi /
\\sqrt{\\text{nbshots}}`
nbshots : int, optional
Number of shots for the circuit run, by default ``None``. If
``None``, uses the backend default.
job_size : int, optional
Maximum job size, to split the circuit runs, by default ``None``.
If ``None``, put all *nb_samples* in the same job.
Returns
-------
array
Jacobian matix as an array of shape *(nb_params, 2**nbqbits)* if
`v` is None, else Jacobian-vector product: ``J(circuit) @ v``
"""
dim_out = 2**self.nbqbits
if v is not None:
if len(v.shape) > 1:
dim_out = v.shape[0]
else:
dim_out = 1
if eps is None:
if nbshots is None:
eps = 1e-8
else:
eps = max(log2(self.nbqbits)*2*pi/3 * min(.5, 1/nbshots**.25), 1e-8)
num = eps if nbshots is None else eps * nbshots
out = zeros((self.nbparams, dim_out))
run_out = self.run(X, params, nbshots, job_size) / num
for i in range(len(params)):
d = zeros_like(params)
d[i] = eps
pd = self.run(X, params + d, nbshots, job_size) / num - run_out
out[i] = pd if v is None else pd @ v
return out
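The finite-difference loop in `grad` perturbs one parameter at a time and divides the resulting run difference by `eps`. The same pattern, stripped of any quantum backend, looks like this (a generic forward-difference sketch, not part of the library):

```python
import numpy as np

def fd_jacobian(f, params, eps=1e-6):
    """Forward-difference Jacobian of f: R^n -> R^m at `params`."""
    base = f(params)
    jac = np.zeros((len(params), len(base)))
    for i in range(len(params)):
        d = np.zeros_like(params)
        d[i] = eps
        jac[i] = (f(params + d) - base) / eps
    return jac

f = lambda p: np.array([p[0] ** 2, p[0] * p[1]])
jac = fd_jacobian(f, np.array([2.0, 3.0]))
# Analytic rows: d/dp0 = [2*p0, p1] = [4, 3]; d/dp1 = [0, p0] = [0, 2]
assert np.allclose(jac, [[4.0, 3.0], [0.0, 2.0]], atol=1e-4)
```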
| 30.766082 | 78 | 0.569093 | 622 | 5,261 | 4.696141 | 0.254019 | 0.030811 | 0.022253 | 0.020541 | 0.311879 | 0.277302 | 0.264978 | 0.264978 | 0.264978 | 0.232112 | 0 | 0.005719 | 0.335297 | 5,261 | 170 | 79 | 30.947059 | 0.829568 | 0.519863 | 0 | 0.108696 | 0 | 0 | 0.005908 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0.021739 | 0.065217 | 0.369565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51d6936665b7826005993a52b90d85027da66187 | 2,210 | py | Python | src/permedcoe/utils/artifact.py | PerMedCoE/permedcoe | 528415135d16963212cd81b0ea64fa655364e64a | [
"Apache-2.0"
] | null | null | null | src/permedcoe/utils/artifact.py | PerMedCoE/permedcoe | 528415135d16963212cd81b0ea64fa655364e64a | [
"Apache-2.0"
] | null | null | null | src/permedcoe/utils/artifact.py | PerMedCoE/permedcoe | 528415135d16963212cd81b0ea64fa655364e64a | [
"Apache-2.0"
] | null | null | null | import os
from permedcoe.core.constants import SEPARATOR
DO_NOT_PARSE = (".pyc", ".def", ".sif")
PARSING_KEY = "NEW_NAME"
PARSING_PATH = "/PATH/TO/"
def adapt_name(name, path):
""" Recursively replace the parsing keyword with the given name (and the
path placeholder with the given path) in every file under the given path.
Args:
name (str): Name to personalize the files.
path (str): Path to find the files.
"""
template_path = path + "/"
for directory_name, dirs, files in os.walk(path):
for file_name in files:
if not file_name.endswith(DO_NOT_PARSE):
file_path = os.path.join(directory_name, file_name)
with open(file_path) as f:
s = f.read()
s = s.replace(PARSING_KEY, name)
s = s.replace(PARSING_PATH, template_path)
with open(file_path, "w") as f:
f.write(s)
def rename_folder(name, path):
""" Adapt the building block folder name.
Args:
name (str): Name to personalize the folder name.
path (str): Path to find the files.
"""
source = os.path.join(path, "src", "bb")
destination = os.path.join(path, "src", name)
os.rename(source, destination)
def show_todo(path):
""" Show on the screen all to do messages.
Args:
path (str): Artifact path.
"""
print(SEPARATOR)
print("To be completed:")
print()
for directory_name, dirs, files in os.walk(path):
for file_name in files:
if not file_name.endswith(DO_NOT_PARSE):
file_path = os.path.join(directory_name, file_name)
__show_work__(file_path)
print(SEPARATOR)
def __show_work__(file_path):
""" Show the TODO messages found in the given file.
Args:
file_path (str): File to be analyzed.
"""
with open(file_path) as f:
lines = f.readlines()
position = 0
for line in lines:
if "TODO" in line:
_, message = line.split("#")
print("- %s:(%s):\t%s" % (str(os.path.basename(file_path)),
str(position),
str(message).strip()))
position += 1
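The TODO-scanning logic in `__show_work__` can be exercised against a temporary file (a sketch with hypothetical content; like the original, it assumes at most one `#` on each TODO line):

```python
import os
import tempfile

def find_todos(file_path):
    # Mirrors __show_work__: collect (line_number, message) for TODO comments.
    todos = []
    with open(file_path) as f:
        for position, line in enumerate(f):
            if "TODO" in line:
                _, message = line.split("#")
                todos.append((position, message.strip()))
    return todos

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
    tmp.write("x = 1\n# TODO: implement validation\ny = 2\n")
    path = tmp.name
todos = find_todos(path)
os.remove(path)
assert todos == [(1, "TODO: implement validation")]
```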
| 29.466667 | 71 | 0.566516 | 291 | 2,210 | 4.14433 | 0.292096 | 0.059701 | 0.033168 | 0.039801 | 0.359867 | 0.331675 | 0.300166 | 0.207297 | 0.207297 | 0.207297 | 0 | 0.001331 | 0.31991 | 2,210 | 74 | 72 | 29.864865 | 0.801065 | 0.223077 | 0 | 0.285714 | 0 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0.013514 | 0 | 1 | 0.095238 | false | 0 | 0.047619 | 0 | 0.142857 | 0.119048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51dc710f797c5dd8ee717e037b8c2c3dc22aed7d | 422 | py | Python | app/migrations/0004_auto_20210528_2005.py | ThebiggunSeeoil/VIS-MASTER | a54a5f321cfe8b258bacc25458490c5b154edf19 | [
"MIT"
] | null | null | null | app/migrations/0004_auto_20210528_2005.py | ThebiggunSeeoil/VIS-MASTER | a54a5f321cfe8b258bacc25458490c5b154edf19 | [
"MIT"
] | null | null | null | app/migrations/0004_auto_20210528_2005.py | ThebiggunSeeoil/VIS-MASTER | a54a5f321cfe8b258bacc25458490c5b154edf19 | [
"MIT"
] | null | null | null | # Generated by Django 3.0 on 2021-05-28 20:05
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('app', '0003_auto_20210528_1759'),
]
operations = [
migrations.AlterField(
model_name='site',
name='station_active',
field=models.CharField(default='False', max_length=255),
),
]
| 22.210526 | 69 | 0.580569 | 44 | 422 | 5.431818 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113014 | 0.308057 | 422 | 18 | 70 | 23.444444 | 0.705479 | 0.101896 | 0 | 0 | 1 | 0 | 0.13649 | 0.064067 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
51df1caba9434b15b6d1ad06110006e4c3bd4231 | 10,885 | py | Python | pytouch/trainingmachine.py | aboutNisblee/PyTouch | 0098aa07ac78ec1868e0a155c92342512d07613a | [
"MIT"
] | 3 | 2016-08-19T16:17:53.000Z | 2017-07-19T18:55:37.000Z | pytouch/trainingmachine.py | mNisblee/PyTouch | 0098aa07ac78ec1868e0a155c92342512d07613a | [
"MIT"
] | null | null | null | pytouch/trainingmachine.py | mNisblee/PyTouch | 0098aa07ac78ec1868e0a155c92342512d07613a | [
"MIT"
] | null | null | null | import copy
import logging
from datetime import datetime, timedelta
from collections import namedtuple
from blinker import Signal
__all__ = [
'Event',
'TrainingMachineObserver',
'TrainingMachine',
]
logger = logging.getLogger(__name__)
class Event(dict):
""" Events that are expected by the process_event function.
Use the factory methods to create appropriate events.
"""
def __init__(self, type, **kwargs):
super().__init__(type=type, **kwargs)
@property
def type(self):
return self['type']
@property
def index(self):
return self.get('index')
@property
def char(self):
return self.get('char')
@classmethod
def input_event(cls, index, char):
return cls(type='input', index=index, char=char)
@classmethod
def undo_event(cls, index):
""" Create an undo event.
:param index: The index right of the char that should be reverted.
"""
return cls(type='undo', index=index)
@classmethod
def pause_event(cls):
return cls(type='pause')
@classmethod
def unpause_event(cls):
return cls(type='unpause')
@classmethod
def restart_event(cls):
return cls(type='restart')
class TrainingMachineObserver(object):
""" TrainingMachine observer interface.
A client should implement this interface to get feedback from the machine.
"""
def on_pause(self, sender):
raise NotImplementedError
def on_unpause(self, sender):
raise NotImplementedError
def on_hit(self, sender, index, typed):
raise NotImplementedError
def on_miss(self, sender, index, typed, expected):
raise NotImplementedError
def on_undo(self, sender, index, expect):
""" Called after a successful undo event.
:param sender: The sending machine.
:param index: The index of the character to be replaced by the expect argument.
:param expect: The expected character.
"""
raise NotImplementedError
def on_end(self, sender):
raise NotImplementedError
def on_restart(self, sender):
raise NotImplementedError
class Char(object):
KeyStroke = namedtuple('KeyStroke', ['char', 'time'])
def __init__(self, idx, char, undo_typo):
""" Internal representation of a character in the text of a lesson.
An additional list of all key strokes at this index is maintained.
:param idx: The absolute index in the text starting at 0.
:param char: The utf-8 character in the text.
:param undo_typo: Whether undos (<UNDO>) count as typos.
"""
self._idx = idx
self._char = char
self._keystrokes = list()
self._undo_typo = undo_typo
@property
def index(self):
return self._idx
@property
def char(self):
return self._char
@property
def hit(self):
""" Is the last recorded key stroke a hit?
:return: True on hit, else False.
"""
return self._keystrokes[-1].char == self._char if self._keystrokes else False
@property
def miss(self):
""" Is the last recorded key stroke a miss?
:return: True on miss, else False.
"""
return not self.hit
@property
def keystrokes(self):
return self._keystrokes
@property
def typos(self):
return [ks for ks in self._keystrokes if (ks.char != '<UNDO>' and ks.char != self._char) or (ks.char == '<UNDO>' and self._undo_typo)]
def append(self, char, elapsed):
self._keystrokes.append(Char.KeyStroke(char, elapsed))
def __getitem__(self, item):
return self._keystrokes[item].char
def __iter__(self):
for ks in self._keystrokes:
yield ks
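The typo accounting in `Char.typos` can be reproduced in isolation; the sketch below (with hypothetical keystrokes) shows how a wrong character always counts while an `<UNDO>` marker counts only when `undo_typo` is enabled:

```python
from collections import namedtuple

KeyStroke = namedtuple("KeyStroke", ["char", "time"])

def count_typos(expected, keystrokes, undo_typo=False):
    # Mirrors the filter in Char.typos: wrong characters are typos;
    # '<UNDO>' markers are typos only when undo_typo is enabled.
    return len([ks for ks in keystrokes
                if (ks.char != "<UNDO>" and ks.char != expected)
                or (ks.char == "<UNDO>" and undo_typo)])

strokes = [KeyStroke("x", 0.1), KeyStroke("<UNDO>", 0.2), KeyStroke("a", 0.3)]
assert count_typos("a", strokes) == 1                   # only the wrong 'x'
assert count_typos("a", strokes, undo_typo=True) == 2   # the undo counts too
```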
class TrainingMachine(object):
PauseEntry = namedtuple('PauseEntry', ['action', 'time'])
def __init__(self, text, auto_unpause=False, undo_typo=False, **kwargs):
""" Training machine.
A client should never manipulate internal attributes on its instance.
Additional kwargs are added to the instance dict and can later be accessed as attributes.
Note that the logic is initialized in the paused state. In case auto_unpause is False
the logic must first be unpaused by passing an unpause event to start the state machine.
If auto_unpause is True, the machine automatically switches state to input on first input event.
In either case an on_unpause callback is made that the gui can use to detect the start of the training
session.
:param text: The lesson text.
:param undo_typo: If enabled wrong undos count as typos.
:param auto_unpause: True to enable the auto transition from pause to input on input event.
"""
# Ensure the text ends with NL
if not text.endswith('\n'):
text += '\n'
self._state_fn = self._state_pause
self._text = [Char(i, c, undo_typo) for i, c in enumerate(text)]
self._pause_history = list()
self._observers = list()
self.auto_unpause = auto_unpause
self.undo_typo = undo_typo
self.__dict__.update(kwargs)
@classmethod
def from_lesson(cls, lesson, **kwargs):
""" Create a :class:`TrainingMachine` from the given :class:`Lesson`.
Additional arguments are passed to the context. The lesson is appended to the context.
:param lesson: A :class:`Lesson`.
:return: An instance of :class:`TrainingMachine`.
"""
return cls(lesson.text, lesson=lesson, **kwargs)
def add_observer(self, observer):
""" Add an observer to the given machine.
:param observer: An object implementing the :class:`TrainingMachineObserver` interface.
"""
if observer not in self._observers:
self._observers.append(observer)
def remove_observer(self, observer):
""" Remove an observer from the given machine.
:param observer: An object implementing the :class:`TrainingMachineObserver` interface.
"""
self._observers.remove(observer)
def process_event(self, event):
""" Process external event.
:param event: An event.
"""
logger.debug('processing event: {}'.format(event))
self._state_fn(event)
@property
def paused(self):
return self._state_fn is self._state_pause
@property
def running(self):
return not self.paused and self._state_fn is not self._state_end
def _keystrokes(self):
for char in self._text:
for ks in char:
yield ks
@property
def keystrokes(self):
return len([ks for ks in self._keystrokes() if ks.char != '<UNDO>'])
@property
def hits(self):
return len([char for char in self._text if char.hit])
@property
def progress(self):
rv = self.hits / len(self._text)
return rv
def elapsed(self):
""" Get the overall runtime.
:return: The runtime as :class:`datetime.timedelta`
"""
if not self._pause_history:
return timedelta(0)
# Sort all inputs by input time
# keystrokes = sorted(self._keystrokes(), key=lambda ks: ks.time)
overall = datetime.utcnow() - self._pause_history[0].time
pause_time = timedelta(0)
# make a deep copy of the pause history
history = copy.deepcopy(self._pause_history)
# pop last event if we are still running or just started
if history[-1].action in ['start', 'unpause']:
history.pop()
def pairs(iterable):
it = iter(iterable)
return zip(it, it)
for start, stop in pairs(history):
pause_time += (stop.time - start.time)
return overall - pause_time
    def _notify(self, method, *args, **kwargs):
        for observer in self._observers:
            getattr(observer, method)(self, *args, **kwargs)

    def _reset(self):
        self._state_fn = self._state_pause
        for char in self._text:
            char.keystrokes.clear()

    def _state_input(self, event):
        if event.type == 'pause':
            self._state_fn = self._state_pause
            self._pause_history.append(TrainingMachine.PauseEntry('pause', datetime.utcnow()))
            self._notify('on_pause')
        elif event.type == 'undo':
            if event.index > 0:
                self._text[event.index - 1].append('<UNDO>', self.elapsed())
                # report wrong undos if desired
                if self.undo_typo:
                    self._notify('on_miss', event.index - 1, '<UNDO>', self._text[event.index - 1].char)
                self._notify('on_undo', event.index - 1, self._text[event.index - 1].char)
        elif event.type == 'input':
            # Note that this may produce an IndexError. Let it happen! It's a bug in the caller.
            if self._text[event.index].char == event.char:  # hit
                self._text[event.index].append(event.char, self.elapsed())
                self._notify('on_hit', event.index, event.char)
                if event.index == self._text[-1].index:
                    self._state_fn = self._state_end
                    self._pause_history.append(TrainingMachine.PauseEntry('stop', datetime.utcnow()))
                    self._notify('on_end')
            else:  # miss
                if self._text[event.index].char == '\n':  # misses at line ending
                    return  # TODO: Make misses on line ending configurable
                if event.char == '\n':  # 'Return' hits in line
                    # TODO: Make misses on wrong returns configurable
                    return
                self._text[event.index].append(event.char, self.elapsed())
                self._notify('on_miss', event.index, event.char, self._text[event.index].char)

    def _state_pause(self, event):
        if event.type == 'unpause' or (event.type == 'input' and self.auto_unpause):
            self._state_fn = self._state_input
            if self._pause_history:
                # Only append an unpause entry if we've already had a pause event.
                # The start of the session is detected via the first keystroke.
                self._pause_history.append(TrainingMachine.PauseEntry('unpause', datetime.utcnow()))
            else:
                self._pause_history.append(TrainingMachine.PauseEntry('start', datetime.utcnow()))
            self._notify('on_unpause')
            if event.type == 'input' and self.auto_unpause:
                # Auto transition to input state
                self._state_input(event)

    def _state_end(self, event):
        if event.type == 'restart':
            self._reset()
            self._notify('on_restart')
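# The _state_fn pattern above (the current state is a stored method, and a
# transition is just an attribute assignment) can be reduced to a tiny
# self-contained sketch; Blinker is a hypothetical toy, not part of this module:

```python
class Blinker(object):
    """Toy version of the TrainingMachine pattern: the current state is a
    bound method kept in an attribute, and process_event dispatches to it."""

    def __init__(self):
        self._state_fn = self._state_off

    def process_event(self, event):
        # dispatch to whatever the current state function is
        self._state_fn(event)

    def _state_off(self, event):
        if event == 'toggle':
            self._state_fn = self._state_on

    def _state_on(self, event):
        if event == 'toggle':
            self._state_fn = self._state_off

    @property
    def on(self):
        # compare bound methods with ==, not `is` (see `paused` above)
        return self._state_fn == self._state_on
```

Adding a state is just adding a method plus the transitions that point at it; no
dispatch table or enum is needed.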

# --- parsec/commands/workflows/extract_workflow_from_history.py
# --- repo: erasche/parsec (Apache-2.0)
import click
from parsec.cli import pass_context, json_loads
from parsec.decorators import custom_exception, json_output
@click.command('extract_workflow_from_history')
@click.argument("history_id", type=str)
@click.argument("workflow_name", type=str)
@click.option(
    "--job_ids",
    help="Optional list of job IDs to filter the jobs to extract from the history",
    type=str,
    multiple=True
)
@click.option(
    "--dataset_hids",
    help="Optional list of dataset hids corresponding to workflow inputs when extracting a workflow from history",
    type=str,
    multiple=True
)
@click.option(
    "--dataset_collection_hids",
    help="Optional list of dataset collection hids corresponding to workflow inputs when extracting a workflow from history",
    type=str,
    multiple=True
)
@pass_context
@custom_exception
@json_output
def cli(ctx, history_id, workflow_name, job_ids="", dataset_hids="", dataset_collection_hids=""):
    """Extract a workflow from a history.

    Output:

        A description of the created workflow
    """
    return ctx.gi.workflows.extract_workflow_from_history(history_id, workflow_name, job_ids=job_ids, dataset_hids=dataset_hids, dataset_collection_hids=dataset_collection_hids)

# --- main_menu.py
# --- repo: Steven-Wilson/pyweek25 (MIT)
import model
import pyxelen
from view import *
from sounds import *
from utils import *
def set_scene(state, **kwargs):
    return state.set(scene=state.scene.set(**kwargs))


def selection(state):
    return state.scene.selection


def select_next(state):
    state = set_scene(state, selection=selection(state).next())
    return state.play_effect(FX_BLIP)


def select_prev(state):
    state = set_scene(state, selection=selection(state).prev())
    return state.play_effect(FX_BLIP)


def select(state):
    if selection(state) == model.MainMenuSelection.PLAY:
        return state.set(scene=model.ACT1).play_effect(FX_SELECT)
    elif selection(state) == model.MainMenuSelection.OPTIONS:
        return state.set(
            scene=model.Settings(
                selection=model.SettingsSelection.MUSIC_VOLUME
            )
        ).play_effect(FX_SELECT)
    elif selection(state) == model.MainMenuSelection.CREDITS:
        return state.set(scene=model.CREDITS)
    else:
        return state


def on_key_down(key, state):
    if key == pyxelen.Key.DOWN:
        return select_next(state)
    elif key == pyxelen.Key.UP:
        return select_prev(state)
    elif key == pyxelen.Key.RETURN:
        return select(state)
    else:
        return state


def on_update(state):
    return state.set_music(MUSIC_MENU)


def view(renderer, state):
    renderer.draw_sprite(MENU_BACKGROUND, FULLSCREEN)
    for i, s in enumerate(model.MainMenuSelection):
        renderer.draw_text(MAIN_FONT, s.value, 210, 144 + i * 20, False)
        if selection(state) == s:
            renderer.draw_sprite(RIGHT_ARROW, Box(190, 140 + i * 20, 16, 16))
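# Every handler above returns state.set(...) rather than mutating. A minimal
# hypothetical stand-in for that immutable-style state object (the real one
# lives in the model module) behaves like this:

```python
class State(object):
    """Immutable-style record: set() returns a modified copy, so handlers
    can return new states without disturbing the one they were given."""

    def __init__(self, **fields):
        self.__dict__.update(fields)

    def set(self, **changes):
        # copy the current fields, overlay the changes, build a new State
        fields = dict(self.__dict__)
        fields.update(changes)
        return State(**fields)


s1 = State(scene='menu', volume=5)
s2 = s1.set(volume=7)
# s1 is untouched; s2 carries the change plus everything else
```

This is why the handlers can be plain functions from state to state: nothing is
shared, so a handler can be re-run or discarded safely.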

# --- gui_rest_client/menu_handlers.py
# --- repo: devdo-eu/macau (MIT)
import gui_rest_client.common as common
import pyglet
def on_key_release_factory(window):
    """
    Function used to create specific on_key_release method for window property of MenuWindow object.
    :param window: MenuWindow object
    :return: functor with prepared on_key_release method
    """
    def functor(symbol, _modifiers):
        if symbol == pyglet.window.key.BACKSPACE and type(window.active_edit) is pyglet.text.Label:
            window.active_edit.text = window.active_edit.text[:-1]
        for obj in window.draw_objects:
            if type(obj) is pyglet.shapes.Rectangle and obj.color != [255, 255, 255]:
                return
        window.switch_to_game(symbol)
    return functor


def on_draw_factory(window):
    """
    Function used to create specific on_draw method for window property of MenuWindow object.
    :param window: MenuWindow object
    :return: functor with prepared on_draw method
    """
    def functor():
        pyglet.gl.glClearColor(65 / 256.0, 65 / 256.0, 70 / 256.0, 1)
        window.window.clear()
        addition = 0.25
        for obj in window.draw_objects:
            if type(obj) is pyglet.sprite.Sprite:
                obj.rotation += addition
                addition += 0.25
            obj.draw()
    return functor


def on_mouse_motion_factory(window):
    """
    Function used to create specific on_mouse_motion method for window property of MenuWindow object.
    :param window: MenuWindow object
    :return: functor with prepared on_mouse_motion method
    """
    def functor(x, y, _dx, _dy):
        for obj in window.draw_objects:
            if type(obj) is pyglet.shapes.Rectangle and common.check_if_inside(x, y, obj):
                distance = round(100 * abs(x - obj.x) + abs(y - obj.y))
                print(distance)
    return functor


def on_mouse_release_factory(window):
    """
    Function used to create specific on_mouse_release method for window property of MenuWindow object.
    :param window: MenuWindow object
    :return: functor with prepared on_mouse_release method
    """
    def functor(x, y, button, _modifiers):
        window.active_edit = None
        if button == pyglet.window.mouse.LEFT:
            candidates = window.find_pointed_edits(x, y)
            if len(candidates) > 0:
                window.active_edit = candidates[min(candidates.keys())]
                window.active_edit.color = (129, 178, 154)
                for obj in window.draw_objects:
                    if type(obj) is pyglet.text.Label and common.check_if_inside(obj.x, obj.y, window.active_edit):
                        window.active_edit = obj
                        break
        function = empty_on_text_factory()
        if window.active_edit is not None:
            function = on_text_factory(window.active_edit)

        @window.window.event
        def on_text(text):
            function(text)
    return functor
def register_menu_events(window):
    """
    Function used to register all prepared methods inside window property of MenuWindow object.
    :param window: MenuWindow object
    """
    @window.window.event
    def on_key_release(symbol, modifiers):
        on_key_release_factory(window)(symbol, modifiers)

    @window.window.event
    def on_draw():
        on_draw_factory(window)()

    @window.window.event
    def on_mouse_motion(x, y, dx, dy):
        on_mouse_motion_factory(window)(x, y, dx, dy)

    @window.window.event
    def on_mouse_release(x, y, button, modifiers):
        on_mouse_release_factory(window)(x, y, button, modifiers)

    @window.window.event
    def on_close():
        window.window.has_exit = True


def empty_on_text_factory():
    """
    Function used to un-register on_text method of window property of MenuWindow object.
    :return: functor with empty on_text method
    """
    def functor(_text):
        pass
    return functor


def on_text_factory(active_edit):
    def functor(text):
        if active_edit is not None:
            active_edit.text += text
    return functor
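# The closure factories above can be exercised without a pyglet window. This
# sketch restates the on_text factory (renamed make_on_text to avoid clashing
# with the real one) against a hypothetical fake edit widget:

```python
def make_on_text(active_edit):
    # same shape as on_text_factory above: the edit widget is captured in
    # the closure, so the returned handler needs no globals or window state
    def functor(text):
        if active_edit is not None:
            active_edit.text += text
    return functor


class FakeEdit(object):
    """Hypothetical stand-in for a pyglet.text.Label: only a .text field."""
    def __init__(self):
        self.text = ""


edit = FakeEdit()
handler = make_on_text(edit)
handler("a")
handler("b")
# edit.text accumulates the typed characters: "ab"
```

A handler built over None is a safe no-op, which is exactly how the release
handler above "un-registers" text input.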

# --- var/spack/repos/builtin/packages/py-iocapture/package.py
# --- repo: player1537-forks/spack (ECL-2.0 / Apache-2.0 / MIT-0 / MIT)
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PyIocapture(PythonPackage):
    """Capture stdout, stderr easily."""

    homepage = "https://github.com/oinume/iocapture"
    pypi = "iocapture/iocapture-0.1.2.tar.gz"

    maintainers = ['dorton21']

    version('0.1.2', sha256='86670e1808bcdcd4f70112f43da72ae766f04cd8311d1071ce6e0e0a72e37ee8')

    depends_on('python@2.4:', type=('build', 'run'))
    depends_on('py-setuptools', type='build')

# --- tobias/tobiasfyi/settings/base.py
# --- repo: tobias-fyi/tobiasfyi (MIT)
"""
tobias.fyi :: Base Django settings
"""
import os
import dj_database_url
PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
BASE_DIR = os.path.dirname(PROJECT_DIR)
# === Application definition === #
INSTALLED_APPS = [
    "home",
    "search",
    "blog",
    "navigator",
    "wagtail.contrib.styleguide",
    "wagtail.contrib.forms",
    "wagtail.contrib.redirects",
    "wagtail.embeds",
    "wagtail.sites",
    "wagtail.users",
    "wagtail.snippets",
    "wagtail.documents",
    "wagtail.images",
    "wagtail.search",
    "wagtail.admin",
    "wagtail.core",
    "modelcluster",
    "taggit",
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "wagtail.contrib.table_block",
    "health_check",
    "health_check.db",
    "storages",
]

MIDDLEWARE = [
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    "wagtail.contrib.redirects.middleware.RedirectMiddleware",
]

ROOT_URLCONF = "tobiasfyi.urls"

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [os.path.join(PROJECT_DIR, "templates")],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]

WSGI_APPLICATION = "tobiasfyi.wsgi.application"
# === Database === #
if "RDS_HOSTNAME" in os.environ:
    DATABASES = {
        "default": {
            "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
            "NAME": os.environ.get("RDS_DB_NAME", os.path.join(BASE_DIR, "db.sqlite3")),
            "USER": os.environ.get("RDS_USERNAME", "postgres"),
            "PASSWORD": os.environ.get("RDS_PASSWORD", "postgres"),
            "HOST": os.environ.get("RDS_HOSTNAME", "localhost"),
            "PORT": os.environ.get("RDS_PORT", "5432"),
        }
    }
else:
    DATABASES = {
        "default": {
            "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
            "NAME": os.environ.get("SQL_DATABASE", os.path.join(BASE_DIR, "db.sqlite3")),
            "USER": os.environ.get("SQL_USER", "postgres"),
            "PASSWORD": os.environ.get("SQL_PASSWORD", "postgres"),
            "HOST": os.environ.get("SQL_HOST", "localhost"),
            "PORT": os.environ.get("SQL_PORT", "5432"),
        }
    }

DATABASE_URL = os.environ.get("DATABASE_URL")
dj_db = dj_database_url.config(default=DATABASE_URL, conn_max_age=500, ssl_require=True)
DATABASES["default"].update(dj_db)
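# For reference, dj_database_url.config parses a 12-factor style DATABASE_URL
# into the same dict shape as above. A rough stdlib-only sketch of that parsing
# (a postgres URL is assumed; the helper name is hypothetical):

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2


def parse_database_url(url):
    """Rough sketch of what dj_database_url does for a postgres URL:
    split the URL into the keys Django's DATABASES dict expects."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port,
    }


cfg = parse_database_url("postgres://user:secret@db.example.com:5432/appdb")
# cfg["NAME"] == "appdb", cfg["HOST"] == "db.example.com", cfg["PORT"] == 5432
```

The real library also handles conn_max_age, ssl_require, and other engines;
this sketch only shows why a single URL can override the whole default block.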
# === Password validation === #
AUTH_PASSWORD_VALIDATORS = [
    {"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"},
    {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
    {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
    {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
]
# === Internationalization === #
LANGUAGE_CODE = "en-us"
TIME_ZONE = "America/Denver"
USE_I18N = True
USE_L10N = True
USE_TZ = True
# === Static files (CSS, JavaScript, Images) === #
STATICFILES_FINDERS = [
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
]
USE_S3 = os.getenv("USE_S3") == "True"
if USE_S3:  # AWS settings
    AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
    AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
    AWS_STORAGE_BUCKET_NAME = os.getenv("AWS_STORAGE_BUCKET_NAME")
    AWS_DEFAULT_ACL = "public-read"
    AWS_S3_CUSTOM_DOMAIN = f"{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com"
    AWS_S3_OBJECT_PARAMETERS = {"CacheControl": "max-age=86400"}
    # S3 static settings
    STATIC_LOCATION = "static"
    STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/"
    STATICFILES_STORAGE = "tobiasfyi.storage_backends.StaticStorage"
    # S3 public media settings
    PUBLIC_MEDIA_LOCATION = "media"
    MEDIA_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/{PUBLIC_MEDIA_LOCATION}/"
    DEFAULT_FILE_STORAGE = "tobiasfyi.storage_backends.PublicMediaStorage"
else:
    STATIC_URL = "/static/"
    STATIC_ROOT = os.path.join(BASE_DIR, "static")
    STATICFILES_STORAGE = "whitenoise.storage.CompressedStaticFilesStorage"
    # STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
    # STATICFILES_STORAGE = "django.contrib.staticfiles.storage.StaticFilesStorage"
    MEDIA_URL = "/media/"
    MEDIA_ROOT = os.path.join(BASE_DIR, "media")

STATICFILES_DIRS = [os.path.join(PROJECT_DIR, "static")]
# === Wagtail settings === #
WAGTAIL_SITE_NAME = "tobiasfyi"
# Base URL to use when referring to full URLs within the Wagtail admin backend
BASE_URL = os.environ.get("WAGTAIL_BASE_URL", "http://tobias.fyi")

# --- try1.py
# --- repo: joseph-reynolds/minecraft-pidoodles (MIT)
#!/usr/bin/python
"""Ideas for minecraft
Ideas:
- determine the size of the world (x..x, y..y, z..z)
- given a house foundation, build the entire house
- clear a large area of blocks
- dig a shaft or steps down
- jump to the top of a nearby tree
- create a miniature world: reduce (256,256) to (16,16)
- determine ground level (not Minecraft.getHeight)
- copy/paste a structure
First, we need a faster way to read block data.
Idea to trigger building a house:
- hit a wall near a torch on top of a gold block
Function to build an entire house (a hollow box) given its dimensions.
To use this function,
start with a flat field
build two low walls that meet at a right angle
to indicate the area of your house,
and a post (a stack of blocks) on the corner to indicate the height.
Place a torch at ground level exactly at the inside corner of your house.
Stand on or immediately next to the torch.
Then run this function ("Algorithm to build a house" below).
--> The function locates the nearby torch, and the corner of your house.
Then it scans the foundation to learn how big to make it.
Finally, it puts the walls up.
It doesn't matter what direction the corner is facing, as long as
there is a torch on the inside corner. For example, if "x" is a block
and " "(space) is air, "t" is a torch, and "X"(capital x) is a stack of
blocks, the following top-view shows the required layout:
    Xxxxxxxx            x
    xt                  x
    x         or        x
    x                  tx
    x               xxxxX
Idea: memoize results from mc.getBlock
"""
# mcpi is found in /usr/lib/python2.7/dist-packages
import mcpi.minecraft as minecraft
import mcpi.block as block
from mcpi.vec3 import Vec3
from mcpi.connection import Connection
import time
import timeit
import Queue
import threading
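# One idea from the module docstring -- memoizing mc.getBlock results -- can be
# sketched as a small wrapper. This helper is hypothetical and not used by the
# rest of this script; note the cache goes stale whenever blocks change:

```python
_block_cache = {}


def get_block_cached(mc, x, y, z):
    """Memoized mc.getBlock: world reads go over a socket and are slow,
    so cache by position. Call _block_cache.clear() after any
    setBlock/setBlocks, or the cache will return stale block ids."""
    key = (x, y, z)
    if key not in _block_cache:
        _block_cache[key] = mc.getBlock(x, y, z)
    return _block_cache[key]
```

Scans like get_corner_data that revisit the same neighborhood would then hit the
network only once per position.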
def vec3_get_sign(self):
    # Warning: hack! Assumes unit vector
    return self.x + self.y + self.z
Vec3.get_sign = vec3_get_sign
class TorchFindError(Exception):
    """Torch not found nearby"""
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return repr(self.value)


class CornerFindError(Exception):
    """Corner not found"""
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return repr(self.value)
def different_block(b):
    if b == block.STONE.id: b = block.SANDSTONE.id
    elif b == block.SANDSTONE.id: b = block.DIRT.id
    elif b == block.DIRT.id: b = block.WOOD.id
    else: b = block.STONE.id
    return b


def get_nearby_torchpos(p):
    """Return the position of a nearby torch"""
    torchpos = None
    search_size = 1
    for x in range(int(p.x - search_size), int(p.x + search_size + 1)):
        for z in range(int(p.z - search_size), int(p.z + search_size + 1)):
            b = mc.getBlock(x, p.y, z)
            if b == block.TORCH.id:
                if not torchpos is None:
                    raise TorchFindError("Too many torches")
                torchpos = Vec3(x, p.y, z)
    if torchpos is None:
        raise TorchFindError("Torch not found nearby")
    return torchpos
def get_corner_data(pos):
    """Returns data about a corner next to the input position.
    A "corner" is two walls meeting at right angles with a post.
    The walls and post define the corner.
    The input position should be the inside corner at ground level
    such as returned by get_nearby_torchpos.
    The return value is a 2-tuple:
    - The corner position
    - A unit vector pointing "inside" the corner
    """
    # Read blocks around the input pos
    blocks = []  # Usage: blocks[x][z]
    mask = 0     # Bit mask: abcdefghi, like:
                 #   adg
                 #   beh
                 #   cfi
    for x in range(int(pos.x - 1), int(pos.x + 2)):
        col = []
        for z in range(int(pos.z - 1), int(pos.z + 2)):
            b = mc.getBlockWithData(x, pos.y, z)  # Test
            b = mc.getBlock(x, pos.y, z)
            col.append(b)
            mask = (mask << 1) + (0 if b == block.AIR.id else 1)
        blocks.append(col)
    mask &= 0b111101111  # Mask off center block
    # print "Mask", format(mask, "#011b")
    nw_corner_mask = 0b111100100
    ne_corner_mask = 0b100100111
    sw_corner_mask = 0b111001001
    se_corner_mask = 0b001001111
    if mask == nw_corner_mask:
        corner = Vec3(pos.x - 1, pos.y, pos.z - 1)
        vector = Vec3(1, 1, 1)
    elif mask == ne_corner_mask:
        corner = Vec3(pos.x + 1, pos.y, pos.z - 1)
        vector = Vec3(-1, 1, 1)
    elif mask == sw_corner_mask:
        corner = Vec3(pos.x - 1, pos.y, pos.z + 1)
        vector = Vec3(1, 1, -1)
    elif mask == se_corner_mask:
        corner = Vec3(pos.x + 1, pos.y, pos.z + 1)
        vector = Vec3(-1, 1, -1)
    else:
        raise CornerFindError("Corner not found")
    return corner, vector
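# The 9-bit mask logic above can be tested in isolation, without a server, by
# feeding it a 3x3 grid of booleans scanned in the same column-major order:

```python
def neighborhood_mask(grid):
    """grid is a 3x3 list of columns of booleans (True = solid block),
    scanned exactly like get_corner_data scans x then z."""
    mask = 0
    for col in grid:
        for solid in col:
            mask = (mask << 1) + (1 if solid else 0)
    return mask & 0b111101111  # the center block (bit 4) is ignored


# A north-west corner: the low-x column and low-z row are solid walls
nw = [[True, True, True],
      [True, False, False],
      [True, False, False]]
# neighborhood_mask(nw) == 0b111100100, i.e. nw_corner_mask above
```

Because bit 4 is masked off, the player (or torch) standing on the center block
never disturbs the match.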
def get_length_of_block_run(startpos, vector):
    """determine the length of a run of blocks
    parameters: startpos is the starting block
                vector is a unit vector in the direction to count
    """
    ans = 0
    pos = startpos
    while mc.getBlock(pos) != block.AIR.id:
        ans += 1
        pos = pos + vector
    ans -= 1
    return ans * vector.get_sign()
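# The same signed run-counting logic, applied to a plain list as a hypothetical
# stand-in for the world, looks like this:

```python
AIR = 0


def run_length(blocks, start, step):
    """Count consecutive non-air entries from blocks[start] moving by step,
    excluding the starting block itself (the `ans -= 1` above), with the
    result signed by the walk direction -- matching get_length_of_block_run."""
    ans = 0
    i = start
    while 0 <= i < len(blocks) and blocks[i] != AIR:
        ans += 1
        i += step
    ans -= 1
    return ans * (1 if step > 0 else -1)


# a wall of 4 solid blocks followed by air: 3 blocks beyond the corner
# run_length([1, 1, 1, 1, 0], 0, 1) == 3
```

The sign matters because get_house_data feeds these lengths straight into a
dimension vector that can point in either direction along each axis.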
def get_house_data(pos):
    corner, vector = get_corner_data(pos)
    sizex = get_length_of_block_run(corner, Vec3(vector.x, 0, 0))
    sizey = get_length_of_block_run(corner, Vec3(0, vector.y, 0))
    sizez = get_length_of_block_run(corner, Vec3(0, 0, vector.z))
    return corner, Vec3(sizex, sizey, sizez)
def do_house(corner, dim):
    newblockid = different_block(mc.getBlock(corner))
    # Unit test: just do a chimney
    # mc.setBlocks(corner.x, corner.y, corner.z,
    #              corner.x, corner.y + dim.y, corner.z,
    #              newblockid)
    # Near wall along x direction
    mc.setBlocks(corner.x, corner.y, corner.z,
                 corner.x + dim.x, corner.y + dim.y, corner.z,
                 newblockid)
    # Near wall along z direction
    mc.setBlocks(corner.x, corner.y, corner.z,
                 corner.x, corner.y + dim.y, corner.z + dim.z,
                 newblockid)
mc = minecraft.Minecraft.create()
#p = mc.player.getTilePos()
#mc.x_connect_multiple(p.x, p.y+2, p.z, block.GLASS.id)
#print 'bye!'
#exit(0)
# Algorithm to build a house
for i in range(0, 0):
    time.sleep(1)
    p = mc.player.getTilePos()
    b = mc.getBlock(p.x, p.y - 1, p.z)
    info = ""
    try:
        tp = get_nearby_torchpos(p)
        info = "torch found"
        try:
            c, v = get_house_data(tp)
            info = "corner found"
            do_house(c, v)
        except CornerFindError as e:
            pass
    except TorchFindError as e:
        pass  # print "TorchFindError:", e.value
    print b, info
connections = []


def get_blocks_in_parallel(c1, c2, degree=35):
    """get a cuboid of block data
    params:
        c1, c2: the corners of the cuboid
        degree: the degree of parallelism (number of sockets)
    returns:
        map from (x, y, z) tuples to (blockid, blockdata) tuples
    """
    # Set up the work queue
    c1.x, c2.x = sorted((c1.x, c2.x))
    c1.y, c2.y = sorted((c1.y, c2.y))
    c1.z, c2.z = sorted((c1.z, c2.z))
    workq = Queue.Queue()
    for x in range(c1.x, c2.x + 1):
        for y in range(c1.y, c2.y + 1):
            for z in range(c1.z, c2.z + 1):
                workq.put((x, y, z))
    print "Getting data for %d blocks" % workq.qsize()
    # Create socket connections, if needed
    # TO DO: Bad! Assumes degree is a constant
    # To do: close the socket
    global connections
    if not connections:
        connections = [Connection("localhost", 4711) for i in range(0, degree)]

    # Create worker threads
    def worker_fn(connection, workq, outq):
        try:
            while True:
                pos = workq.get(False)
                # print "working", pos[0], pos[1], pos[2]
                connection.send("world.getBlockWithData", pos[0], pos[1], pos[2])
                ans = connection.receive()
                blockid, blockdata = map(int, ans.split(","))
                outq.put((pos, (blockid, blockdata)))
        except Queue.Empty:
            pass

    outq = Queue.Queue()
    workers = []
    for w in range(degree):
        t = threading.Thread(target=worker_fn,
                             args=(connections[w], workq, outq))
        t.start()
        workers.append(t)
    # Wait for workers to finish
    for w in workers:
        # print "waiting for", w.name
        w.join()
    # Collect results
    answer = {}
    while not outq.empty():
        pos, blk = outq.get()
        answer[pos] = blk
    return answer
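# The worker/queue shape used above, reduced to a self-contained sketch with no
# Minecraft connection (the hypothetical workers just square numbers):

```python
import threading
try:
    import Queue          # Python 2, as used in this script
except ImportError:
    import queue as Queue  # Python 3 fallback


def parallel_map_squares(values, degree=4):
    """Fan work out over `degree` threads via a Queue, collect via another.
    Same structure as get_blocks_in_parallel: non-blocking get() drains
    the work queue, and Queue.Empty ends each worker."""
    workq = Queue.Queue()
    for v in values:
        workq.put(v)
    outq = Queue.Queue()

    def worker():
        try:
            while True:
                v = workq.get(False)   # raises Queue.Empty when drained
                outq.put((v, v * v))   # stand-in for the socket round trip
        except Queue.Empty:
            pass

    workers = [threading.Thread(target=worker) for _ in range(degree)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    results = {}
    while not outq.empty():
        v, sq = outq.get()
        results[v] = sq
    return results
```

The pattern only pays off when each unit of work blocks on I/O, which is exactly
the case the performance notes below measure.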
while False:
    # mc.getHeight works
    ppos = mc.player.getPos()
    h = mc.getHeight(ppos.x, ppos.z)
    print h
    time.sleep(1)

if False:
    """
    degree = 200
    corner1 = Vec3(-50, 8, -50)
    corner2 = Vec3( 50, 8, 50)
    starttime = timeit.default_timer()
    blks = get_blocks_in_parallel(corner1, corner2, degree)
    endtime = timeit.default_timer()
    print endtime-starttime, 'get_blocks_in_parallel'
    blks = get_blocks_in_parallel(corner1, corner2, degree)
    endtime2 = timeit.default_timer()
    print endtime2-endtime, 'get_blocks_in_parallel again'
    for z in range(corner1.z, corner2.z):
        s = ""
        for x in range(corner1.x, corner2.x):
            c = " " if blks[(x, 8, z)][0] == block.AIR.id else "x"
            s = s + c
        print s
    """
# Performance experiments
"""Results:
Hardware: Raspbery Pi 3 Model B V1.2 with heat sink
Linux commands show:
$ uname -a
Linux raspberrypi 4.4.34-v7+ #930 SMP Wed Nov 23 15:20:41 GMT 2016 armv7l GNU/Linux
$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Model name: ARMv7 Processor rev 4 (v7l)
CPU max MHz: 1200.0000
CPU min MHz: 600.0000
GPU memory is 128 (unit is Mb, I think)
Test getting 10201 blocks, and stack_size=128Kb
varying the number of threads:
threads time(sec) blocks/sec
------- --------- ----------
10 39.87
25 19.46
50 10.68
75 7.29
100 5.57
115 5.01
120 4.86
125 4.75
130 4.58
150 4.47
175 4.55
200 4.24
250 4.41
400 4.60
Observations:
- Each thread process 15 to 25 blocks/sec
- Some blocks take much longer to fetch, about 0.3 sec
- performance peaks with 200 threads, at 2400 blocks/sec
- creating threads is not free
- can create 50 threads in 1 sec, 100 threads in 2.5 sec
- memory consumption increases (not measured)
- the tests were repeated while the game was being
played interactively, specifically, flying at altitude
and looking down so that new blocks were being fetched
as quickly as possible. This did not affect performance:
+ no graphical slowdowns or glitches were observed
+ the performance of fetching blocks was not affected
Note:
The expected case is to create the required threads once
and keep them around for the lifetime of the program.
The experimental code was designed to do just that.
Some data was captured that suggests how expensive
starting up hundreds of threads is. Although it was
not one of the objectives of the original study, it is
given as an interesting observation.
Conclusions:
Eyeballing the data suggests that 200 threads is optimal.
However, if the ~6 seconds it takes to create the threads
is not acceptable, consider using 100 threads, which is
about 30% slower but only takes about 2.5 seconds to
create the threads.
"""
threading.stack_size(128*1024)
for degree in [100, 150, 200]:
connections = []
corner1 = Vec3(-50, 8, -50)
corner2 = Vec3( 50, 8, 50)
starttime = timeit.default_timer()
blks = get_blocks_in_parallel(corner1, corner2, degree)
endtime = timeit.default_timer()
blks = get_blocks_in_parallel(corner1, corner2, degree)
endtime2 = timeit.default_timer()
print "entries=10201 degree=%s time1=%s time2=%s" % (
str(degree),
str(endtime-starttime),
str(endtime2-endtime))
# Idea: class for get_blocks_in_parallel()
class ParallelGetter:
"""Get block data from the Mincraft Pi API -- NOT FINISHED"""
def __init__(self, address = "localhost", port = 4711, parallelism=200):
self.address = address
self.port = port
self.parallelism = parallelism
self.connections = [Connection(address, port) for i in range(parallelism)]
# To do: close the socket connection
@staticmethod
def normalize_corners(c1, c2):
"""ensure c1.x <= c2.x, etc., without changing the cuboid"""
c1.x, c2.x = sorted((c1.x, c2.x))
c1.y, c2.y = sorted((c1.y, c2.y))
c1.z, c2.z = sorted((c1.z, c2.z))
return c1, c2
@staticmethod
def generate_work_items_xyz(c1, c2):
        c1, c2 = ParallelGetter.normalize_corners(c1, c2)
workq = Queue.Queue()
for x in range(c1.x, c2.x+1):
for y in range(c1.y, c2.y+1):
for z in range(c1.z, c2.z+1):
workq.put((x,y,z))
return workq
    @staticmethod
    def _unpack_int(response):
        return int(response)
    @staticmethod
    def _unpack_int_int(response):
        i1, i2 = map(int, response.split(","))
        return i1, i2
    def get_blocks(self, c1, c2):
        workq = self.generate_work_items_xyz(c1, c2)
        return self._do_work(workq, "world.getBlock", self._unpack_int)
    def get_blocks_with_data(self, c1, c2):
        workq = self.generate_work_items_xyz(c1, c2)
        return self._do_work(workq, "world.getBlockWithData", self._unpack_int_int)
def _do_work(self, workq, api_name, unpack_fn):
"""Perform the parallel portion of the work.
        params:
            workq - such as from generate_work_items_xyz
        Specifically, start a worker thread for each connection.
Each worker feeds work from the workq to the API, formats
the results, and enqueues the results.
When there is no more work, the workers quit, and the
return value is computed.
"""
def worker_fn(connection, workq, outq, unpack_fn):
try:
while True:
pos = workq.get(False)
connection.send(api_name, pos)
outq.put((pos, unpack_fn(connection.receive())))
except Queue.Empty:
pass
# Create worker threads
outq = Queue.Queue()
workers = []
        for w in range(self.parallelism):
            t = threading.Thread(
                target = worker_fn,
                args = (self.connections[w], workq, outq, unpack_fn))
t.start()
workers.append(t)
# Wait for workers to finish, then collect their data
for w in workers:
w.join()
answer = {}
while not outq.empty():
pos, data = outq.get()
answer[pos] = data
return answer
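Since ParallelGetter needs a live Minecraft connection, here is a rough standalone sketch of the same fan-out pattern with a stand-in connection object, so it can run anywhere. It uses the Python 3 `queue` module name, and the coordinate-sum "API" in `FakeConnection` is invented purely so the sketch is testable:

```python
import queue
import threading

class FakeConnection(object):
    """Stand-in for the socket Connection; answers with the coordinate sum."""
    def send(self, api_name, pos):
        self._last = pos
    def receive(self):
        return str(sum(self._last))

def do_work(connections, workq, api_name, unpack_fn):
    outq = queue.Queue()
    def worker_fn(connection):
        try:
            while True:
                pos = workq.get(False)       # non-blocking; raises Empty when done
                connection.send(api_name, pos)
                outq.put((pos, unpack_fn(connection.receive())))
        except queue.Empty:
            pass
    workers = [threading.Thread(target=worker_fn, args=(c,)) for c in connections]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    # All workers are done, so outq can be drained without locking concerns.
    answer = {}
    while not outq.empty():
        pos, data = outq.get()
        answer[pos] = data
    return answer

workq = queue.Queue()
for x in range(3):
    for z in range(3):
        workq.put((x, 0, z))
result = do_work([FakeConnection() for _ in range(4)], workq,
                 "world.getBlock", int)
```

Each thread owns one connection, matching the design above where parallelism is bounded by the number of open connections.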
"""Idea: Tree jumper
You can jump from tree to tree.
If you are
(a) on a tree (LEAVES = Block(18)),
(b) moving forward, and
(c) jumping,
you will jump/fly to the nearest tree in your path.
Algorithm:
while True:
player_velocity = "track player position to determine velocity"
if "player is moving and on a leaf and jumps":
destination = "find nearest tree(player_pos, player_vel)"
if destination:
parabola = compute(player_pos, destination)
"move player smoothly along the parabola"
"if player hits a block: break"
where
def nearest_tree(player_pos, player_vel):
search_areas = [
player_pos + 30 * player_vel with radius=15,
player_pos + 15 * player_vel with radius=10,
player_pos + 40 * player_vel with radius=15]
search_areas *= [player_pos.y, player_pos.y - 7, player_pos.y + 7]
for area in search_areas:
fetch a plane of block data centered at (area)
tree = find tree, preferring center of the area
if tree: return tree
return None
def compute_parabola():
gravity = 0.3 # blocks/time**2
xz_distance = sqrt(xd**2 + zd**2)
xz_speed = 1
total_time = xz_distance / xz_speed
x_vel = xd / total_time
z_vel = zd / total_time
y_vel = ((yd / total_time) +
(0.5 * gravity * total_time))
"""
# Let's try a leap/jump
if False:
# mc.player.setPos(1, 4, 3) # if the jump goes badly wrong
ppos = mc.player.getPos()
x = ppos.x
y = ppos.y
z = ppos.z
xv = 0.005
yv = 0.1
zv = 0.02
while yv > 0:
mc.player.setPos(x, y, z)
x += xv
y += yv
z += zv
yv -= 0.0001
time.sleep(0.001)
# Try stacking up multiple getBlocks:
if True:
"""This code is weird. Delete it!"""
connection = Connection("localhost", 4711)
for x in range(-20, 20):
for z in range(-20, 20):
connection.send("world.getBlockWithData", x, 2, z)
print connection.receive()
print connection.receive()
# How big is the world?
if False:
corner1 = Vec3(-200, 0, 0)
corner2 = Vec3(200, 0, 0)
degree = 150
xaxis = get_blocks_in_parallel(corner1, corner2, degree)
xmin = xmax = 0
for x in range(200):
if xaxis[(x, 0, 0)][0] == block.BEDROCK_INVISIBLE.id:
xmax = x - 1
break
for x in range(0, -200, -1):
if xaxis[(x, 0, 0)][0] == block.BEDROCK_INVISIBLE.id:
xmin = x + 1
break
#print "X-axis: %d to %d" % (xmin, xmax)
corner1 = Vec3(0, 0, 200)
corner2 = Vec3(0, 0, -200)
degree = 150
zaxis = get_blocks_in_parallel(corner1, corner2, degree)
zmin = zmax = 0
for z in range(200):
if zaxis[(0, 0, z)][0] == block.BEDROCK_INVISIBLE.id:
zmax = z - 1
break
for z in range(0, -200, -1):
if zaxis[(0, 0, z)][0] == block.BEDROCK_INVISIBLE.id:
zmin = z + 1
break
#print "Z-axis: %d to %d" % (zmin, zmax)
print "The world is: [%d..%d][y][%d..%d]" % (
xmin, xmax, zmin, zmax)
###
### Try stuff with sockets
###
'''
I have not finished coding this part.
def gen_answers(connection, requests, format, parse_fn):
"""generate answers for each request, like (req, answer)"""
request_buf = io.BufferedWriter(connection.socket.makefile('w'))
response_buf = io.BufferedReader(connection.socket.makefile('r'))
request_queue = []
while True:
# Write requests into request_buffer
# ...to do...
# Perform socket I/O
Communicate:
r,w,e = select.select([response_buf], [request_buf], [], 1)
if r:
response_data = response_buf.peek()
if "response_data has a newline at position n":
response_text = response_buf.read(n)
response_buf.read(???)
if w:
request_buf.write(???)
# Read answers
while resp_buf.hasSomeData:
request = request_queue[0] # Er, use a queue?
request_queue = request_queue[1:]
response = parse_fn(response_buf.readline())
yield (request, response)
Hmmm, my sockets are rusty, and my Python io buffer classes weak,
but this seems more correct:
# We write requests (like b"world.getBlock(0,0,0)") into the
# request_buffer and then into the request_file (socket).
request_buffer = bytearray() # bytes, bytearray, or memoryview
request_file = io.FileIO(connection.socket.fileno(), "w", closefd=False)
"...append data to request_buffer..."
if request_buffer: can select
if selected:
# Write exactly once
bytes_written = request_file.write(request_buffer)
request_buffer = request_buffer[bytes_written:]
if bytes_written == 0: "something is wrong"
# We read responses (like b"2") from the response_file (socket)
# into the response_buffer.
response_file = io.FileIO(connection.socket.fileno(), "r", closefd=False)
response_buffer = bytes()
if selected:
# Read exactly once
response_buffer += response_file.read()
"...remove data from response_buffer..."
# Try gen_answers:
if True:
connection = Connection("localhost", 4711)
def some_rectangle():
for x in range(-2,2):
for z in range(-2,2):
yield (x, 0, z)
for pos, blk in gen_answers(connection,
some_rectangle(),
"world.getBlock(%d,%d,%d)",
int):
print "Got", pos, blk
my_blocks = {}
for pos, blk in gen_answers(connection,
some_rectangle(),
"world.getBlock(%d,%d,%d)",
int):
my_blocks[pos] = blk
'''
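The pipelining idea sketched above can be exercised end to end without a Minecraft server. Below is a rough, self-contained version of `gen_answers`: all requests are written first, then responses are read back in order. This simple form assumes the OS socket buffers can hold the whole request batch; the `select()`-based loop above is what you would need for unbounded batches. The `fake_server`, wire format, and coordinate-sum responses are stand-ins, not the real API:

```python
import socket
import threading

def fake_server(conn):
    # Stand-in for the Minecraft API: answer each "world.getBlock(x,y,z)\n"
    # request line with "x+y+z\n".
    buf = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            args = line[line.index(b"(") + 1:line.index(b")")]
            x, y, z = map(int, args.split(b","))
            conn.sendall(b"%d\n" % (x + y + z))
    conn.close()

def gen_answers(sock, requests, fmt, parse_fn):
    """Pipeline every request, then yield (request, parsed_response) in order."""
    requests = list(requests)
    wfile = sock.makefile("wb")
    rfile = sock.makefile("rb")
    for req in requests:
        wfile.write((fmt % req).encode() + b"\n")
    wfile.flush()
    # The server answers in request order, so responses line up with requests.
    for req in requests:
        yield req, parse_fn(rfile.readline().decode().strip())

client, server = socket.socketpair()
threading.Thread(target=fake_server, args=(server,), daemon=True).start()
reqs = [(x, 0, z) for x in range(-2, 2) for z in range(-2, 2)]
my_blocks = {}
for pos, blk in gen_answers(client, reqs, "world.getBlock(%d,%d,%d)", int):
    my_blocks[pos] = blk
client.close()
```

Batching all writes before any reads is what gives pipelining its win: one round trip per batch instead of one per block.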
| 33.781395 | 93 | 0.579788 | 3,025 | 21,789 | 4.093884 | 0.219174 | 0.014131 | 0.008882 | 0.015342 | 0.220688 | 0.192103 | 0.169008 | 0.155846 | 0.145833 | 0.145833 | 0 | 0.041551 | 0.319611 | 21,789 | 644 | 94 | 33.833851 | 0.793794 | 0.059984 | 0 | 0.320144 | 0 | 0 | 0.029273 | 0.006755 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.014388 | 0.028777 | null | null | 0.02518 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a4070d5261493c37bcf2333af0d0da98724223b5 | 266 | py | Python | setup.py | philipwfowler/gromarks | d30c1d08a403b5044db5af2cb69d48f49f2d21fd | [
"MIT"
] | null | null | null | setup.py | philipwfowler/gromarks | d30c1d08a403b5044db5af2cb69d48f49f2d21fd | [
"MIT"
] | null | null | null | setup.py | philipwfowler/gromarks | d30c1d08a403b5044db5af2cb69d48f49f2d21fd | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
name='gromarks',
version='0.1',
author='Philip Fowler',
packages=['gromarks'],
scripts=["bin/gromarks-analyse.py","bin/gromarks-create.py"],
license='MIT',
long_description=open('README.md').read(),
)
| 22.166667 | 65 | 0.650376 | 32 | 266 | 5.375 | 0.8125 | 0.127907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008969 | 0.161654 | 266 | 11 | 66 | 24.181818 | 0.762332 | 0 | 0 | 0 | 0 | 0 | 0.334586 | 0.169173 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a40c134772d49ab93a95b17b92d661e9e1dc23d1 | 533 | py | Python | modules/example_name_get/models/custom_model.py | ngi-odoo/examples | d1d4dc109862d90f8fc4f34af169ff09f0c8ea2f | [
"MIT"
] | null | null | null | modules/example_name_get/models/custom_model.py | ngi-odoo/examples | d1d4dc109862d90f8fc4f34af169ff09f0c8ea2f | [
"MIT"
] | null | null | null | modules/example_name_get/models/custom_model.py | ngi-odoo/examples | d1d4dc109862d90f8fc4f34af169ff09f0c8ea2f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from odoo import api, fields, models
class CustomModel(models.Model):
_name = 'custom.model'
_description = 'Custom Model'
_rec_name = 'custom_name'
name = fields.Char('Name', default='Name', required=True)
custom_int = fields.Integer('Custom Int', required=True)
custom_name = fields.Char('Custom Name', compute='_compute_custom_name')
@api.depends('name', 'custom_int')
    def _compute_custom_name(self):
        for record in self:
            record.custom_name = record.name + ' - ' + str(record.custom_int)
| 29.611111 | 76 | 0.673546 | 68 | 533 | 5.044118 | 0.397059 | 0.174927 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002288 | 0.180113 | 533 | 17 | 77 | 31.352941 | 0.782609 | 0.0394 | 0 | 0 | 0 | 0 | 0.198039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a40c8b53c82db18dc8dc952f046c2a7a09fe4bf4 | 1,103 | py | Python | spike2_processor/tests/params.py | Pennsieve/timeseries-processor | 85766afa76182503fd66cec8382c22e757743f01 | [
"Apache-2.0"
] | null | null | null | spike2_processor/tests/params.py | Pennsieve/timeseries-processor | 85766afa76182503fd66cec8382c22e757743f01 | [
"Apache-2.0"
] | null | null | null | spike2_processor/tests/params.py | Pennsieve/timeseries-processor | 85766afa76182503fd66cec8382c22e757743f01 | [
"Apache-2.0"
] | null | null | null | from base_processor.timeseries.tests import TimeSeriesTest, ChannelTest
# -----------------------------
# parameters for channel tests
# -----------------------------
# ----------------------- test channels -----------------------
channels_00 = [
# continuous channels
ChannelTest(name='EKG-1000.0', nsamples=1414172, rate=1000.0, channel_type='CONTINUOUS'),
ChannelTest(name='BP-1000.0', nsamples=1414172, rate=1000.0, channel_type='CONTINUOUS'),
ChannelTest(name='RAE-1000.0', nsamples=1414172, rate=1000.0, channel_type='CONTINUOUS'),
ChannelTest(name='LVP-1000.0', nsamples=1414172, rate=1000.0, channel_type='CONTINUOUS'),
ChannelTest(name='ANS-5000.0', nsamples=7070856, rate=5000.0, channel_type='CONTINUOUS')
]
# ----------------------- parametrize -----------------------
params_global = [
TimeSeriesTest(
name = 'file2',
nchannels = len(channels_00),
channels = channels_00,
result = 'pass',
inputs = {
'file': [
'/test-resources/file2.smr',
]
}),
]
| 34.46875 | 93 | 0.547597 | 104 | 1,103 | 5.711538 | 0.394231 | 0.06734 | 0.10101 | 0.185185 | 0.43771 | 0.43771 | 0.43771 | 0.43771 | 0.43771 | 0.43771 | 0 | 0.105085 | 0.197643 | 1,103 | 31 | 94 | 35.580645 | 0.566102 | 0.208522 | 0 | 0 | 0 | 0 | 0.158199 | 0.028868 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.05 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf8bf175075177bf940eb88972c096da0cd5c335 | 2,764 | py | Python | src/ModuleManager.py | RhysRead/RhysRoom | 7cc2643f4ff10f56263310e1d7648a90720122be | [
"MIT"
] | null | null | null | src/ModuleManager.py | RhysRead/RhysRoom | 7cc2643f4ff10f56263310e1d7648a90720122be | [
"MIT"
] | null | null | null | src/ModuleManager.py | RhysRead/RhysRoom | 7cc2643f4ff10f56263310e1d7648a90720122be | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""ModuleManager.py: This file contains the code for the module managing aspect of the RhysRoom software."""
__author__ = "Rhys Read"
__copyright__ = "Copyright 2018, Rhys Read"
import logging
import os
from importlib.machinery import SourceFileLoader
import threading
MODULE_FOLDER_LOCATION = "/../Modules/"
# Todo: Add logging.
class ModuleManager(object):
def __init__(self, main, auto_load=True, verbose=True):
self.__main = main
self.__file_path = os.path.dirname(os.path.realpath(__file__))
self.modules = []
self.__modules_threads = []
if auto_load:
count = self.load_modules_from_module_folder()
if verbose:
logging.info("{} modules loaded.".format(count))
def get_module(self, module_name: str):
"""
        Used to retrieve a module from the module manager by the module's name.
:param module_name:
:return:
"""
for module in self.modules:
if module.get_name() == module_name:
return module
return None
def start_modules(self):
"""
Used to iterate through and start all loaded modules.
:return:
"""
for module in self.modules:
if not module.run_as_thread:
module.start()
continue
thread = threading.Thread(target=module.start, daemon=True)
thread.start()
self.__modules_threads.append(thread)
for thread in self.__modules_threads:
thread.join()
def load_module(self, module_name: str):
"""
Used to load a single module with its module name.
:param module_name:
:return:
"""
real_module_path = self.__file_path + MODULE_FOLDER_LOCATION + module_name
module = SourceFileLoader("Module", real_module_path).load_module()
# Todo: Add exception handling for failure to load modules.
self.modules.append(module.Module(self.__main))
def load_modules_from_module_folder(self):
"""
Used to load modules from the module folder. Will iterate through all files in the directory and attempt to load
them.
:return: The integer value of the quantity of modules loaded.
"""
files = os.listdir(self.__file_path + MODULE_FOLDER_LOCATION)
count = 0
for file_name in files:
# Ensuring that the file is not a directory. Will trigger an error on files with no file type allocated.
if '.' not in list(file_name) or file_name == "ModuleTemplate.py":
continue
self.load_module(file_name)
count += 1
return count
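As a standalone illustration of the `SourceFileLoader` pattern that `load_module` relies on: the module name and contents below are hypothetical, written to a temp directory so the sketch is self-contained, and it uses the non-deprecated `spec_from_loader`/`exec_module` path rather than the `load_module()` call in the code above:

```python
import importlib.util
import os
import tempfile
from importlib.machinery import SourceFileLoader

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo_module.py")
    with open(path, "w") as f:
        f.write("NAME = 'demo'\n\ndef greet():\n    return 'hello from ' + NAME\n")
    # Build a loader for the file, then execute it into a fresh module object.
    loader = SourceFileLoader("demo_module", path)
    spec = importlib.util.spec_from_loader(loader.name, loader)
    demo = importlib.util.module_from_spec(spec)
    loader.exec_module(demo)
    greeting = demo.greet()
```

The loaded module object behaves like a normally imported one, which is why ModuleManager can call `module.Module(...)` on what it loads.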
| 30.711111 | 120 | 0.62301 | 334 | 2,764 | 4.928144 | 0.335329 | 0.04678 | 0.036452 | 0.025516 | 0.170109 | 0.110571 | 0.071689 | 0 | 0 | 0 | 0 | 0.003595 | 0.295586 | 2,764 | 89 | 121 | 31.05618 | 0.841808 | 0.26411 | 0 | 0.088889 | 0 | 0 | 0.046883 | 0 | 0 | 0 | 0 | 0.022472 | 0 | 1 | 0.111111 | false | 0 | 0.088889 | 0 | 0.288889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf8c77a8403d7c2b9ad92146eb9320547377a0ac | 1,441 | py | Python | dependencies/svgwrite/tests/test_rect.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 21 | 2015-01-16T05:10:02.000Z | 2021-06-11T20:48:15.000Z | dependencies/svgwrite/tests/test_rect.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 1 | 2019-09-09T12:10:27.000Z | 2020-05-22T10:12:14.000Z | dependencies/svgwrite/tests/test_rect.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 2 | 2015-05-03T04:51:08.000Z | 2018-08-24T08:28:53.000Z | #!/usr/bin/env python
#coding:utf-8
# Author: mozman --<mozman@gmx.at>
# Purpose: test rect object
# Created: 25.09.2010
# Copyright (C) 2010, Manfred Moitzi
# License: GPLv3
import sys
import unittest
from svgwrite.shapes import Rect
class TestRect(unittest.TestCase):
def test_numbers(self):
rect = Rect(insert=(0,0), size=(10,20))
self.assertEqual(rect.tostring(), '<rect height="20" width="10" x="0" y="0" />')
def test_coordinates(self):
rect = Rect(insert=('10cm','11cm'), size=('20cm', '30cm'))
self.assertEqual(rect.tostring(), '<rect height="30cm" width="20cm" x="10cm" y="11cm" />')
def test_corners_numbers(self):
rect = Rect(rx=1, ry=1)
self.assertEqual(rect.tostring(), '<rect height="1" rx="1" ry="1" width="1" x="0" y="0" />')
def test_corners_length(self):
rect = Rect(rx='1mm', ry='1mm')
self.assertEqual(rect.tostring(), '<rect height="1" rx="1mm" ry="1mm" width="1" x="0" y="0" />')
def test_errors(self):
self.assertRaises(TypeError, Rect, insert=1)
self.assertRaises(TypeError, Rect, size=1)
self.assertRaises(TypeError, Rect, insert=None)
self.assertRaises(TypeError, Rect, size=None)
self.assertRaises(TypeError, Rect, size=(None, None))
self.assertRaises(TypeError, Rect, insert=(None, None))
if __name__=='__main__':
unittest.main() | 36.025 | 105 | 0.616239 | 193 | 1,441 | 4.523316 | 0.341969 | 0.109966 | 0.171821 | 0.199313 | 0.486827 | 0.402062 | 0.219931 | 0.130584 | 0 | 0 | 0 | 0.052539 | 0.207495 | 1,441 | 40 | 106 | 36.025 | 0.711909 | 0.112422 | 0 | 0 | 0 | 0.08 | 0.194489 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.2 | false | 0 | 0.12 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf93a8fb947171db61193394450cab6da3bc92ab | 841 | py | Python | freenodejobs/urls.py | freenode/freenodejobs | 235388c88ac6f984f36cd20074542e21369bcc8b | [
"MIT"
] | 4 | 2018-06-22T22:27:33.000Z | 2021-12-04T20:48:52.000Z | freenodejobs/urls.py | freenode/freenodejobs | 235388c88ac6f984f36cd20074542e21369bcc8b | [
"MIT"
] | 31 | 2018-05-22T15:38:55.000Z | 2021-03-18T20:31:53.000Z | freenodejobs/urls.py | freenode/freenodejobs | 235388c88ac6f984f36cd20074542e21369bcc8b | [
"MIT"
] | 5 | 2018-06-02T17:28:42.000Z | 2021-12-04T20:48:41.000Z | from django.conf import settings
from django.urls import path, include
from django.views.static import serve
urlpatterns = (
path('', include('freenodejobs.account.urls',
namespace='account')),
path('', include('freenodejobs.admin.urls',
namespace='admin')),
path('', include('freenodejobs.dashboard.urls',
namespace='dashboard')),
path('', include('freenodejobs.profile.urls',
namespace='profile')),
path('', include('freenodejobs.registration.urls',
namespace='registration')),
path('', include('freenodejobs.static.urls',
namespace='static')),
path('', include('freenodejobs.jobs.urls',
namespace='jobs')),
path('storage/<path:path>', serve, {
'show_indexes': settings.DEBUG,
'document_root': settings.MEDIA_ROOT,
}),
)
| 31.148148 | 54 | 0.634958 | 81 | 841 | 6.555556 | 0.333333 | 0.165725 | 0.303202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195006 | 841 | 26 | 55 | 32.346154 | 0.784343 | 0 | 0 | 0 | 0 | 0 | 0.321046 | 0.209275 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf9f2dffcca7faa4bae484271d8e331b630da008 | 663 | py | Python | karanja_me/polls/admin.py | denisKaranja/django-dive-in | 451742ac065136cb0f9ac7b042d5913bbc2a36d0 | [
"MIT"
] | null | null | null | karanja_me/polls/admin.py | denisKaranja/django-dive-in | 451742ac065136cb0f9ac7b042d5913bbc2a36d0 | [
"MIT"
] | null | null | null | karanja_me/polls/admin.py | denisKaranja/django-dive-in | 451742ac065136cb0f9ac7b042d5913bbc2a36d0 | [
"MIT"
] | null | null | null | from django.contrib import admin
# register the Polls app in the admin interface
from .models import Choice, Question
class ChoiceInLine(admin.TabularInline):
model = Choice
extra = 3
class QuestionAdmin(admin.ModelAdmin):
# re-order which field comes first
fieldsets = [
("Question", {"fields": ["question_text"]}),
("Date Information", {"fields": ["pub_date"], "classes": ["collapse"]})
]
list_display = ("question_text", "pub_date", "was_published_recently")
list_filter = ["pub_date"] # adds a filter to the sidebar
    search_fields = ["question_text"]
inlines = [ChoiceInLine]
admin.site.register(Question, QuestionAdmin)
| 24.555556 | 75 | 0.701357 | 77 | 663 | 5.896104 | 0.649351 | 0.079295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001815 | 0.168929 | 663 | 26 | 76 | 25.5 | 0.822142 | 0.155354 | 0 | 0 | 0 | 0 | 0.245487 | 0.039711 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cfa4ca9a10bea098722aaeb8c2a7d2d1a93ebf25 | 9,065 | py | Python | tests/src/main/python/rest/tests/extract/swagger_client/models/measure_parameter_info.py | IBM/quality-measure-and-cohort-service | 8963227bf4941d6a5fdc641b37ca0f72da5a6f2b | [
"Apache-2.0"
] | 1 | 2020-10-05T15:10:03.000Z | 2020-10-05T15:10:03.000Z | tests/src/main/python/rest/tests/extract/swagger_client/models/measure_parameter_info.py | IBM/quality-measure-and-cohort-service | 8963227bf4941d6a5fdc641b37ca0f72da5a6f2b | [
"Apache-2.0"
] | null | null | null | tests/src/main/python/rest/tests/extract/swagger_client/models/measure_parameter_info.py | IBM/quality-measure-and-cohort-service | 8963227bf4941d6a5fdc641b37ca0f72da5a6f2b | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
IBM Cohort Engine
Service to evaluate cohorts and measures # noqa: E501
OpenAPI spec version: 2.1.0 2022-02-18T21:50:45Z
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
from swagger_client.configuration import Configuration
class MeasureParameterInfo(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'name': 'str',
'use': 'str',
'min': 'int',
'max': 'str',
'type': 'str',
'default_value': 'str',
'documentation': 'str'
}
attribute_map = {
'name': 'name',
'use': 'use',
'min': 'min',
'max': 'max',
'type': 'type',
'default_value': 'defaultValue',
'documentation': 'documentation'
}
def __init__(self, name=None, use=None, min=None, max=None, type=None, default_value=None, documentation=None, _configuration=None): # noqa: E501
"""MeasureParameterInfo - a model defined in Swagger""" # noqa: E501
if _configuration is None:
_configuration = Configuration()
self._configuration = _configuration
self._name = None
self._use = None
self._min = None
self._max = None
self._type = None
self._default_value = None
self._documentation = None
self.discriminator = None
if name is not None:
self.name = name
if use is not None:
self.use = use
if min is not None:
self.min = min
if max is not None:
self.max = max
if type is not None:
self.type = type
if default_value is not None:
self.default_value = default_value
if documentation is not None:
self.documentation = documentation
@property
def name(self):
"""Gets the name of this MeasureParameterInfo. # noqa: E501
Name of the parameter which is the Fhir ParameterDefinition.name field # noqa: E501
:return: The name of this MeasureParameterInfo. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this MeasureParameterInfo.
Name of the parameter which is the Fhir ParameterDefinition.name field # noqa: E501
:param name: The name of this MeasureParameterInfo. # noqa: E501
:type: str
"""
self._name = name
@property
def use(self):
"""Gets the use of this MeasureParameterInfo. # noqa: E501
A string describing if the parameter is an input or output parameter. FHIR ParameterDefinition.use field # noqa: E501
:return: The use of this MeasureParameterInfo. # noqa: E501
:rtype: str
"""
return self._use
@use.setter
def use(self, use):
"""Sets the use of this MeasureParameterInfo.
A string describing if the parameter is an input or output parameter. FHIR ParameterDefinition.use field # noqa: E501
:param use: The use of this MeasureParameterInfo. # noqa: E501
:type: str
"""
self._use = use
@property
def min(self):
"""Gets the min of this MeasureParameterInfo. # noqa: E501
The minimum number of times this parameter may be used (ie 0 means optional parameter, greater than or equal to 1 means required parameter) FHIR ParameterDefinition.min field # noqa: E501
:return: The min of this MeasureParameterInfo. # noqa: E501
:rtype: int
"""
return self._min
@min.setter
def min(self, min):
"""Sets the min of this MeasureParameterInfo.
The minimum number of times this parameter may be used (ie 0 means optional parameter, greater than or equal to 1 means required parameter) FHIR ParameterDefinition.min field # noqa: E501
:param min: The min of this MeasureParameterInfo. # noqa: E501
:type: int
"""
self._min = min
@property
def max(self):
"""Gets the max of this MeasureParameterInfo. # noqa: E501
A string representing the maximum number of times this parameter may be used. FHIR ParameterDefinition.max field # noqa: E501
:return: The max of this MeasureParameterInfo. # noqa: E501
:rtype: str
"""
return self._max
@max.setter
def max(self, max):
"""Sets the max of this MeasureParameterInfo.
A string representing the maximum number of times this parameter may be used. FHIR ParameterDefinition.max field # noqa: E501
:param max: The max of this MeasureParameterInfo. # noqa: E501
:type: str
"""
self._max = max
@property
def type(self):
"""Gets the type of this MeasureParameterInfo. # noqa: E501
The type of the parameter. FHIR ParameterDefinition.type field # noqa: E501
:return: The type of this MeasureParameterInfo. # noqa: E501
:rtype: str
"""
return self._type
@type.setter
def type(self, type):
"""Sets the type of this MeasureParameterInfo.
The type of the parameter. FHIR ParameterDefinition.type field # noqa: E501
:param type: The type of this MeasureParameterInfo. # noqa: E501
:type: str
"""
self._type = type
@property
def default_value(self):
"""Gets the default_value of this MeasureParameterInfo. # noqa: E501
The defaultValue of the parameter. FHIR ParameterDefinition.defaultValue field # noqa: E501
:return: The default_value of this MeasureParameterInfo. # noqa: E501
:rtype: str
"""
return self._default_value
@default_value.setter
def default_value(self, default_value):
"""Sets the default_value of this MeasureParameterInfo.
The defaultValue of the parameter. FHIR ParameterDefinition.defaultValue field # noqa: E501
:param default_value: The default_value of this MeasureParameterInfo. # noqa: E501
:type: str
"""
self._default_value = default_value
@property
def documentation(self):
"""Gets the documentation of this MeasureParameterInfo. # noqa: E501
A string describing any documentation associated with this parameter. FHIR FHIR ParameterDefinition.documentation field # noqa: E501
:return: The documentation of this MeasureParameterInfo. # noqa: E501
:rtype: str
"""
return self._documentation
@documentation.setter
def documentation(self, documentation):
"""Sets the documentation of this MeasureParameterInfo.
A string describing any documentation associated with this parameter. FHIR FHIR ParameterDefinition.documentation field # noqa: E501
:param documentation: The documentation of this MeasureParameterInfo. # noqa: E501
:type: str
"""
self._documentation = documentation
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(MeasureParameterInfo, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, MeasureParameterInfo):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, MeasureParameterInfo):
return True
return self.to_dict() != other.to_dict()
| 30.833333 | 196 | 0.60706 | 1,042 | 9,065 | 5.202495 | 0.146833 | 0.056078 | 0.134293 | 0.116215 | 0.570559 | 0.520753 | 0.501937 | 0.408965 | 0.31876 | 0.261206 | 0 | 0.022684 | 0.309432 | 9,065 | 293 | 197 | 30.938567 | 0.843291 | 0.453282 | 0 | 0.087302 | 1 | 0 | 0.041626 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15873 | false | 0 | 0.031746 | 0 | 0.325397 | 0.015873 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cfa7c54b2c60bb286552dff00421bc13413e5568 | 2,352 | py | Python | beartype_test/util/pyterror.py | jonathanmorley/beartype | 0d1207210220807d5c5848033d13657afa307983 | [
"MIT"
] | null | null | null | beartype_test/util/pyterror.py | jonathanmorley/beartype | 0d1207210220807d5c5848033d13657afa307983 | [
"MIT"
] | null | null | null | beartype_test/util/pyterror.py | jonathanmorley/beartype | 0d1207210220807d5c5848033d13657afa307983 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# --------------------( LICENSE )--------------------
# Copyright (c) 2014-2021 Beartype authors.
# See "LICENSE" for further details.
'''
**:mod:`pytest` exception-handling utilities.**
This submodule provides functions validating caller-defined exceptions raised
during testing.
'''
# ....................{ IMPORTS }....................
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: To raise human-readable test errors, avoid importing from
# package-specific submodules at module scope.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
from contextlib import contextmanager
from pytest import raises
from typing import Type
# ....................{ CONTEXTS }....................
@contextmanager
def raises_uncached(exception_cls: Type[Exception]) -> 'ExceptionInfo':
'''
Context manager asserting that the block it wraps raises a **cached
exception** (i.e., an exception whose message previously contained one or
more instances of the magic
:data:`beartype._util.cache.utilcacheerror.EXCEPTION_CACHED_PLACEHOLDER`
substring, since replaced by the
:func:`beartype._util.cache.utilcacheerror.reraise_exception_cached`
function) of the passed type.
Parameters
----------
exception_cls : Type[Exception]
    Type of cached exception expected to be raised by this block.
Returns
----------
:class:`pytest.nodes.ExceptionInfo`
:mod:`pytest`-specific object collecting metadata on the cached
exception of this type raised by this block.
See Also
----------
https://docs.pytest.org/en/stable/reference.html#pytest._code.ExceptionInfo
Official :class:`pytest.nodes.ExceptionInfo` documentation.
'''
# Defer heavyweight imports.
from beartype._util.cache.utilcacheerror import (
EXCEPTION_CACHED_PLACEHOLDER)
# With a "pytest"-specific context manager validating this contextual block
# to raise an exception of this type...
with raises(exception_cls) as exception_info:
yield exception_info
# Assert this exception message does *NOT* contain this magic substring.
assert EXCEPTION_CACHED_PLACEHOLDER not in str(exception_info.value)
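# A minimal, self-contained sketch of the pattern raises_uncached implements,
# with the beartype placeholder and the pytest.raises() machinery simulated
# locally (names suffixed "_sketch" are hypothetical; neither beartype nor
# pytest is required to run it):
#
# from contextlib import contextmanager
#
# # Hypothetical stand-in for beartype's magic placeholder substring.
# EXCEPTION_CACHED_PLACEHOLDER = '{its_object}'
#
# @contextmanager
# def raises_uncached_sketch(exception_cls):
#     # Minimal stand-in for pytest's ExceptionInfo object.
#     class _Info:
#         value = None
#     info = _Info()
#     try:
#         yield info
#     except exception_cls as exc:
#         # Capture the raised exception for post-block assertions.
#         info.value = exc
#     else:
#         raise AssertionError(f'{exception_cls.__name__} not raised')
#     # Assert the placeholder was substituted away before re-raising.
#     assert EXCEPTION_CACHED_PLACEHOLDER not in str(info.value)
#
# with raises_uncached_sketch(ValueError) as info:
#     raise ValueError('bad value for list item 0')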
| 37.333333 | 79 | 0.616922 | 239 | 2,352 | 5.991632 | 0.535565 | 0.041899 | 0.035615 | 0.064944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004715 | 0.18835 | 2,352 | 62 | 80 | 37.935484 | 0.745416 | 0.755527 | 0 | 0 | 0 | 0 | 0.028078 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
cfab1a84be9c0072b43334afcc4dc985928ed6db | 2,247 | py | Python | tests/utils.py | Upabjojr/matchpy | ff8b28a0e178f3cc5d6546c25ee1d891cff64425 | [
"MIT"
] | null | null | null | tests/utils.py | Upabjojr/matchpy | ff8b28a0e178f3cc5d6546c25ee1d891cff64425 | [
"MIT"
] | null | null | null | tests/utils.py | Upabjojr/matchpy | ff8b28a0e178f3cc5d6546c25ee1d891cff64425 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from matchpy.expressions.constraints import Constraint
from matchpy.expressions.substitution import Substitution
from matchpy.expressions.expressions import Pattern
class MockConstraint(Constraint):
def __init__(self, return_value, *variables, renaming=None):
self.return_value = return_value
self.called_with = []
self._variables = set(variables)
self.renaming = renaming or {}
def __call__(self, match):
self.called_with.append(Substitution(match))
return self.return_value
def __eq__(self, other):
return id(self) == id(other)
def __hash__(self):
return hash(id(self))
def __repr__(self):
if self.variables:
return 'MockConstraint({!r}, {}, renaming={!r})'.format(
self.return_value, ', '.join(map(repr, self.variables)), self.renaming)
return 'MockConstraint({!r}, renaming={!r})'.format(self.return_value, self.renaming)
def with_renamed_vars(self, renaming):
self.renaming.update(renaming)
return self
@property
def variables(self):
return set(self.renaming.get(v, v) for v in self._variables)
@property
def call_count(self):
return len(self.called_with)
def assert_called_with(self, args):
args = dict((self.renaming.get(n, n), v) for n, v in args.items())
assert args in self.called_with, "Constraint was not called with {}. List of calls: {}".format(
args, self.called_with
)
def assert_match_as_expected(match, subject, pattern, expected_matches):
pattern = Pattern(pattern)
matches = list(match(subject, pattern))
assert len(matches) == len(expected_matches), 'Unexpected number of matches'
for expected_match in expected_matches:
assert expected_match in matches, "Subject {!s} and pattern {!s} did not yield the match {!s} but were supposed to".format(
subject, pattern, expected_match
)
for match in matches:
assert match in expected_matches, "Subject {!s} and pattern {!s} yielded the unexpected match {!s}".format(
subject, pattern, match
)
| 37.45 | 132 | 0.64308 | 270 | 2,247 | 5.174074 | 0.274074 | 0.057266 | 0.053686 | 0.041518 | 0.143164 | 0.110236 | 0.073014 | 0.073014 | 0.073014 | 0 | 0 | 0.000592 | 0.248331 | 2,247 | 60 | 133 | 37.45 | 0.826525 | 0.009346 | 0 | 0.042553 | 0 | 0.021277 | 0.137581 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 1 | 0.212766 | false | 0 | 0.06383 | 0.085106 | 0.468085 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cfb21d4a9cdd85f0f62e4fd06045dd96b635904d | 11,617 | py | Python | habitat/extravehicular/migrations/0001_initial.py | matrach/habitatOS | 1ae2a3caf6f279cf6d6d20bcd81f24d50f61d7d3 | [
"MIT"
] | 1 | 2021-02-01T19:04:39.000Z | 2021-02-01T19:04:39.000Z | habitat/extravehicular/migrations/0001_initial.py | matrach/habitatOS | 1ae2a3caf6f279cf6d6d20bcd81f24d50f61d7d3 | [
"MIT"
] | null | null | null | habitat/extravehicular/migrations/0001_initial.py | matrach/habitatOS | 1ae2a3caf6f279cf6d6d20bcd81f24d50f61d7d3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2017-09-29 18:53
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import habitat.timezone.models.martian_standard_time
class Migration(migrations.Migration):
initial = True
dependencies = [
('inventory', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Activity',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, db_index=True, verbose_name='Add Datetime')),
('modified', models.DateTimeField(auto_now=True, db_index=True, verbose_name='Modified Datetime')),
('date', models.CharField(default=habitat.timezone.models.martian_standard_time.MartianStandardTime.date, help_text='example: 51099.420109', max_length=15, verbose_name='Mars Sol Date')),
('start', models.TimeField(blank=True, default=None, null=True, verbose_name='EVA Start')),
('end', models.TimeField(blank=True, default=None, null=True, verbose_name='EVA End')),
('objectives', models.TextField(verbose_name='Objectives')),
],
options={
'verbose_name': 'Extra-Vehicular Activity',
'verbose_name_plural': 'Extra-Vehicular Activities',
},
),
migrations.CreateModel(
name='Contingency',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, db_index=True, verbose_name='Add Datetime')),
('modified', models.DateTimeField(auto_now=True, db_index=True, verbose_name='Modified Datetime')),
('identifier', models.CharField(max_length=10, unique=True, verbose_name='Identifier')),
('severity', models.CharField(choices=[('emergency', 'Emergency - ABORT the simulation'), ('critical', 'Critical - ABORT the EVA'), ('warning', 'Warning'), ('info', 'Informative')], default='info', max_length=30, verbose_name='Severity')),
('name', models.CharField(max_length=100, unique=True, verbose_name='Name')),
('description', models.TextField(verbose_name='Description')),
('recovery_procedure', models.TextField(verbose_name='Recovery Procedure')),
('remarks', models.TextField(blank=True, default=None, null=True, verbose_name='Additional Remarks')),
],
options={
'verbose_name': 'Contingency',
'verbose_name_plural': 'Contingencies',
},
),
migrations.CreateModel(
name='Location',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, db_index=True, verbose_name='Add Datetime')),
('modified', models.DateTimeField(auto_now=True, db_index=True, verbose_name='Modified Datetime')),
('identifier', models.CharField(max_length=10, unique=True, verbose_name='Identifier')),
            ('direction', models.CharField(choices=[('north', 'North'), ('south', 'South'), ('east', 'East'), ('west', 'West'), ('north-east', 'North East'), ('north-west', 'North West'), ('south-east', 'South East'), ('south-west', 'South West')], max_length=30, verbose_name='Direction from Habitat')),
('name', models.CharField(max_length=100, unique=True, verbose_name='Name')),
('description', models.TextField(blank=True, default=None, null=True, verbose_name='Description')),
('longitude', models.DecimalField(blank=True, decimal_places=7, default=None, help_text='Decimal Degrees', max_digits=9, null=True, verbose_name='Longitude')),
('latitude', models.DecimalField(blank=True, decimal_places=7, default=None, help_text='Decimal Degrees', max_digits=10, null=True, verbose_name='Latitude')),
('elevation', models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Meters AGSL', max_digits=6, null=True, verbose_name='Elevation')),
('radius', models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Meters', max_digits=6, null=True, verbose_name='Radius')),
],
options={
'verbose_name': 'Location',
'verbose_name_plural': 'Locations',
},
),
migrations.CreateModel(
name='Objective',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, db_index=True, verbose_name='Add Datetime')),
('modified', models.DateTimeField(auto_now=True, db_index=True, verbose_name='Modified Datetime')),
('identifier', models.CharField(max_length=10, unique=True, verbose_name='Identifier')),
('estimated_duration', models.DurationField(blank=True, default=None, null=True, verbose_name='Estimated Duration')),
('objective', models.TextField(verbose_name='Objective')),
('remarks', models.TextField(verbose_name='Additional Remarks')),
('location', models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.CASCADE, to='extravehicular.Location', verbose_name='Location')),
],
options={
'verbose_name': 'Objective',
'verbose_name_plural': 'Objectives',
},
),
migrations.CreateModel(
name='Report',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, db_index=True, verbose_name='Add Datetime')),
('modified', models.DateTimeField(auto_now=True, db_index=True, verbose_name='Modified Datetime')),
('date', models.CharField(default=habitat.timezone.models.martian_standard_time.MartianStandardTime.date, help_text='example: 51099.420109', max_length=15, verbose_name='Mars Sol Date')),
('time', models.TimeField(default=habitat.timezone.models.martian_standard_time.MartianStandardTime.time, help_text='example: 51099.420109', verbose_name='Coordinated Mars Time')),
('start', models.TimeField(blank=True, default=None, null=True, verbose_name='Start')),
('end', models.TimeField(blank=True, default=None, null=True, verbose_name='End')),
('status', models.CharField(choices=[('success-full', 'Full Success'), ('success-primary', 'Primary Objectives Done'), ('success-partial', 'Partial Success'), ('todo', 'To Do'), ('in-progress', 'In Progress'), ('aborted', 'Aborted')], default='todo', max_length=30, verbose_name='Status')),
('type', models.CharField(choices=[('exploratory', 'Exploratory'), ('experimental', 'Experimental'), ('operational', 'Operational'), ('emergency', 'Emergency')], default='operational', max_length=30, verbose_name='Type')),
('description', models.TextField(blank=True, default=None, null=True, verbose_name='Description')),
('contingencies', models.TextField(blank=True, default=None, null=True, verbose_name='Contingencies')),
            ('remarks', models.TextField(blank=True, default=None, null=True, verbose_name='Additional Remarks')),
('location', models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.CASCADE, to='extravehicular.Location', verbose_name='Location')),
('primary_objectives', models.ManyToManyField(related_name='primary_objectives', to='extravehicular.Objective', verbose_name='Primary Objectives')),
('reporter', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Astronaut')),
('secondary_objectives', models.ManyToManyField(blank=True, default=None, related_name='secondary_objectives', to='extravehicular.Objective', verbose_name='Secondary Objectives')),
],
options={
'verbose_name': 'Report',
'verbose_name_plural': 'Reports',
},
),
migrations.CreateModel(
name='ReportAttachment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, db_index=True, verbose_name='Add Datetime')),
('modified', models.DateTimeField(auto_now=True, db_index=True, verbose_name='Modified Datetime')),
('file', models.FileField(upload_to='report/', verbose_name='Attachment')),
('report', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='extravehicular.Report', verbose_name='Report')),
],
options={
'verbose_name': 'Report Attachment',
'verbose_name_plural': 'Report Attachments',
},
),
migrations.CreateModel(
name='Spacewalker',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, db_index=True, verbose_name='Add Datetime')),
('modified', models.DateTimeField(auto_now=True, db_index=True, verbose_name='Modified Datetime')),
('designation', models.CharField(choices=[('ev-leader', 'Lead Spacewalker'), ('ev-support', 'Supporting Spacewalker'), ('habitat-support', 'Habitat Support'), ('rover-operator', 'Rover Operator')], max_length=30, verbose_name='Designation')),
('objectives', models.TextField(blank=True, default=None, null=True, verbose_name='Objectives')),
('activity', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='extravehicular.Activity', verbose_name='Activity')),
('participant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Participant')),
],
options={
'verbose_name': 'Spacewalker',
'verbose_name_plural': 'Spacewalkers',
},
),
migrations.AddField(
model_name='activity',
name='contingencies',
field=models.ManyToManyField(blank=True, default=None, to='extravehicular.Contingency', verbose_name='Contingencies'),
),
migrations.AddField(
model_name='activity',
name='location',
field=models.ForeignKey(default=None, on_delete=django.db.models.deletion.CASCADE, to='extravehicular.Location', verbose_name='Location'),
),
migrations.AddField(
model_name='activity',
name='tools',
field=models.ManyToManyField(blank=True, default=None, to='inventory.Item', verbose_name='Tools'),
),
]
| 71.269939 | 308 | 0.634845 | 1,191 | 11,617 | 6.02183 | 0.157011 | 0.122699 | 0.07111 | 0.044618 | 0.639989 | 0.612103 | 0.571668 | 0.563581 | 0.535555 | 0.535555 | 0 | 0.009768 | 0.215718 | 11,617 | 162 | 309 | 71.709877 | 0.777412 | 0.005853 | 0 | 0.493506 | 1 | 0 | 0.228651 | 0.016196 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032468 | 0 | 0.058442 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cfb23450358399f117f247e51bf8ad70f91b84ed | 1,014 | py | Python | scraper/storage_spiders/vanphongphamanhkhoacom.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | null | null | null | scraper/storage_spiders/vanphongphamanhkhoacom.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | 10 | 2020-02-11T23:34:28.000Z | 2022-03-11T23:16:12.000Z | scraper/storage_spiders/vanphongphamanhkhoacom.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | 3 | 2018-08-05T14:54:25.000Z | 2021-06-07T01:49:59.000Z | # Auto generated by generator.py. Delete this line if you make modification.
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor
XPATH = {
'name' : "//div[@class='clus']/h2[@class='nomargin title_sp']",
'price' : "//h2[@class='nomargin']/font",
'category' : "//div[@class='head_title_center']/div[@class='sp_detai']/a",
'description' : "//div[@class='clus']/div[@id='user_post_view']/p",
'images' : "//div[@class='box_nullstyle']/a[@class='lightbox']/img/@src",
'canonical' : "",
'base_url' : "",
'brand' : ""
}
name = 'vanphongphamanhkhoa.com'
allowed_domains = ['vanphongphamanhkhoa.com']
start_urls = ['http://vanphongphamanhkhoa.com/']
tracking_url = ''
sitemap_urls = ['']
sitemap_rules = [('', 'parse_item')]
sitemap_follow = []
rules = [
Rule(LinkExtractor(allow=['/view_product+-\d+/']), 'parse_item'),
Rule(LinkExtractor(allow=['/products-\d+/[a-zA-Z0-9-]+/($|\d\d?$)']), 'parse'),
#Rule(LinkExtractor(), 'parse_item_and_links'),
]
| 37.555556 | 83 | 0.642998 | 121 | 1,014 | 5.223141 | 0.603306 | 0.063291 | 0.037975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004505 | 0.12426 | 1,014 | 26 | 84 | 39 | 0.707207 | 0.118343 | 0 | 0 | 1 | 0 | 0.515152 | 0.35578 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |