hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
012db21bc927b85c8eac8aed517e187ad63c1016 | 1,889 | py | Python | doc/crypto/testrsa.py | haysengithub/ctf | c2cefed8470f40d0cb6bc4d1ae941a70936ea497 | [
"MIT"
] | null | null | null | doc/crypto/testrsa.py | haysengithub/ctf | c2cefed8470f40d0cb6bc4d1ae941a70936ea497 | [
"MIT"
] | null | null | null | doc/crypto/testrsa.py | haysengithub/ctf | c2cefed8470f40d0cb6bc4d1ae941a70936ea497 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# by https://findneo.github.io/
# ref:
# https://crypto.stackexchange.com/questions/11053/rsa-least-significant-bit-oracle-attack
# https://ctf.rip/sharif-ctf-2016-lsb-oracle-crypto-challenge/
# https://introspelliam.github.io/2018/03/27/crypto/RSA-Least-Significant-Bit-Oracle-Attack/
import libnum, gmpy2, socket, time, decimal


def oracle(c1):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    hostname = '47.96.239.28'
    port = 23333
    s.connect((hostname, port))
    s.recv(1024)
    s.send(hex(c1)[2:].strip("lL") + '\n')
    res = s.recv(1024).strip()
    s.close()
    if res == 'even':
        return 0
    if res == 'odd':
        return 1
    else:
        assert (0)


def partial(c, n):
    global c_of_2
    k = n.bit_length()
    decimal.getcontext().prec = k  # allows for 'precise enough' floats
    lower = decimal.Decimal(0)
    upper = decimal.Decimal(n)
    for i in range(k):
        possible_plaintext = (lower + upper) / 2
        # lower==0 when i<1809
        flag = oracle(c)
        if not flag:
            upper = possible_plaintext  # plaintext is in the lower half
        else:
            lower = possible_plaintext  # plaintext is in the upper half
        c = (c * c_of_2) % n  # multiply y by the encryption of 2 again
        print i, flag, int(upper - lower)
        # time.sleep(0.2)
    # By now, our plaintext is revealed!
    return int(upper)


def main():
    print "[*] Conducting Oracle attack..."
    return partial((c * c_of_2) % n, n)


if __name__ == '__main__':
    e = 0x10001
    n = 1606938044309278499168642398192229212629290234347717645487123
    c = 1206101155741464091016050901578054614292420649123909371122176
    c_of_2 = pow(2, e, n)
    m = main()
    # m = 560856645743734814774953158390773525781916094468093308691660509501812349
    print libnum.n2s(m)
    # QCTF{RSA_parity_oracle_is_fun}
| 30.467742 | 92 | 0.643727 | 251 | 1,889 | 4.741036 | 0.478088 | 0.012605 | 0.013445 | 0.036975 | 0.122689 | 0.112605 | 0 | 0 | 0 | 0 | 0 | 0.180632 | 0.229222 | 1,889 | 61 | 93 | 30.967213 | 0.636676 | 0.32504 | 0 | 0.04878 | 0 | 0 | 0.049245 | 0 | 0 | 0 | 0.00556 | 0 | 0.02439 | 0 | null | null | 0 | 0.02439 | null | null | 0.073171 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
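The record above implements an RSA least-significant-bit (parity) oracle attack, binary-searching the plaintext over a live socket. As a minimal offline sketch of the same idea — with a simulated local oracle in place of the network service, and a tiny illustrative key (p, q, e below are assumptions, not values from the record):

```python
# Offline sketch of an RSA parity-oracle (LSB) attack.
# The oracle leaks plaintext parity; multiplying the ciphertext by
# Enc(2) doubles the hidden plaintext mod n, so each query halves
# the search interval, as in the socket-based script above.
from fractions import Fraction

# Tiny toy key (NOT secure, illustrative only): n = 61 * 53
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def parity_oracle(c):
    """Simulates the remote service: decrypts and returns m mod 2."""
    return pow(c, d, n) & 1

def lsb_attack(c):
    lower, upper = Fraction(0), Fraction(n)
    c_of_2 = pow(2, e, n)
    for _ in range(n.bit_length()):
        c = (c * c_of_2) % n          # hidden plaintext is doubled mod n
        if parity_oracle(c):          # odd => it wrapped past n: upper half
            lower = (lower + upper) / 2
        else:                         # even => no wrap: lower half
            upper = (lower + upper) / 2
    return int(upper)

m = 1234
recovered = lsb_attack(pow(m, e, n))
print(recovered)  # -> 1234
```

Exact `Fraction` arithmetic plays the role of the high-precision `decimal` context in the original script: with floating point, the accumulated halving error would corrupt the low bits of the recovered plaintext.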
013dc6803632ec79cb90fb2226d95535547c3c20 | 5,771 | py | Python | bikeshed/htmlhelpers.py | shans/bikeshed | af01666677d1f9d933a489b7eb321576be2227a0 | [
"CC0-1.0"
] | null | null | null | bikeshed/htmlhelpers.py | shans/bikeshed | af01666677d1f9d933a489b7eb321576be2227a0 | [
"CC0-1.0"
] | null | null | null | bikeshed/htmlhelpers.py | shans/bikeshed | af01666677d1f9d933a489b7eb321576be2227a0 | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import division, unicode_literals
import html5lib
from html5lib import treewalkers
from html5lib.serializer import htmlserializer
from lxml import html
from lxml import etree
from lxml.cssselect import CSSSelector
from . import config
from .messages import *


def findAll(sel, context=None):
    if context is None:
        context = config.doc.document
    try:
        return CSSSelector(sel, namespaces={"svg": "http://www.w3.org/2000/svg"})(context)
    except Exception, e:
        die("The selector '{0}' returned an error:\n{1}", sel, e)
        return []


def find(sel, context=None):
    result = findAll(sel, context)
    if result:
        return result[0]
    else:
        return None


def textContent(el):
    return html.tostring(el, method='text', with_tail=False, encoding="unicode")


def innerHTML(el):
    if el is None:
        return ''
    return (el.text or '') + ''.join(html.tostring(x, encoding="unicode") for x in el)


def outerHTML(el):
    if el is None:
        return ''
    return html.tostring(el, with_tail=False, encoding="unicode")


def parseHTML(text):
    doc = html5lib.parse(text, treebuilder='lxml', namespaceHTMLElements=False)
    body = doc.getroot()[1]
    if body.text is None:
        if len(body):
            return list(body.iterchildren())
        head = doc.getroot()[0]
        if head.text is None:
            return list(head.iterchildren())
        return [head.text] + list(head.iterchildren())
    else:
        return [body.text] + list(body.iterchildren())


def parseDocument(text):
    doc = html5lib.parse(text, treebuilder='lxml', namespaceHTMLElements=False)
    return doc


def escapeHTML(text):
    # Escape HTML
    return text.replace('&', '&amp;').replace('<', '&lt;')


def escapeAttr(text):
    return text.replace('&', '&amp;').replace("'", '&apos;').replace('"', '&quot;')


def clearContents(el):
    for child in el.iterchildren():
        el.remove(child)
    el.text = ''
    return el


def appendChild(parent, child):
    # Appends either text or an element.
    try:
        parent.append(child)
    except TypeError:
        # child is a string
        if len(parent) > 0:
            parent[-1].tail = (parent[-1].tail or '') + child
        else:
            parent.text = (parent.text or '') + child


def prependChild(parent, child):
    # Prepends either text or an element to the parent.
    if isinstance(child, basestring):
        if parent.text is None:
            parent.text = child
        else:
            parent.text = child + parent.text
    else:
        parent.insert(0, child)
        if parent.text is not None:
            child.tail = (child.tail or '') + parent.text
            parent.text = None


def insertBefore(target, el):
    parent = target.getparent()
    parent.insert(parent.index(target), el)


def insertAfter(target, el):
    parent = target.getparent()
    parent.insert(parent.index(target)+1, el)


def removeNode(node):
    text = node.tail or ''
    parent = node.getparent()
    index = parent.index(node)
    if index == 0:
        parent.text = (parent.text or '') + text
    else:
        prevsibling = parent[index-1]
        prevsibling.tail = (prevsibling.tail or '') + text
    parent.remove(node)


def appendContents(el, newElements):
    if(etree.iselement(newElements) and newElements.text is not None):
        appendChild(el, newElements.text)
    for new in newElements:
        appendChild(el, new)
    return el


def replaceContents(el, newElements):
    clearContents(el)
    return appendContents(el, newElements)


def moveContents(targetEl, sourceEl):
    replaceContents(targetEl, sourceEl)
    sourceEl.text = ''


def headingLevelOfElement(el):
    for el in relevantHeadings(el, levels=[2, 3, 4, 5, 6]):
        if el.get('data-level') is not None:
            return el.get('data-level')
    return None


def relevantHeadings(startEl, levels=None):
    if levels is None:
        levels = [1, 2, 3, 4, 5, 6]
    levels = ["h"+str(level) for level in levels]
    currentHeadingLevel = float('inf')
    for el in scopingElements(startEl, *levels):
        tagLevel = int(el.tag[1])
        if tagLevel < currentHeadingLevel:
            yield el
            currentHeadingLevel = tagLevel
        if tagLevel == 2:
            return


def scopingElements(startEl, *tags):
    # Elements that could form a "scope" for the startEl
    # Ancestors, and preceding siblings of ancestors.
    # Maps to the things that can establish a counter scope.
    els = []
    tagFilter = set(tags)
    for el in startEl.itersiblings(preceding=True, *tags):
        els.append(el)
    for el in startEl.iterancestors():
        if el.tag in tagFilter:
            els.append(el)
        for el in el.itersiblings(preceding=True, *tags):
            els.append(el)
    return els


def previousElements(startEl, tag=None, *tags):
    # Elements preceding the startEl in document order.
    # Like .iter(), but in the opposite direction.
    els = []
    for el in startEl.getroottree().getroot().iter(tag=tag, *tags):
        if el == startEl:
            return reversed(els)
        els.append(el)
    return els


def treeAttr(el, attrName):
    if el.get(attrName) is not None:
        return el.get(attrName)
    for ancestor in el.iterancestors():
        if ancestor.get(attrName) is not None:
            return ancestor.get(attrName)


def addClass(el, cls):
    if el.get('class') is None:
        el.set('class', cls)
    else:
        el.set('class', "{0} {1}".format(el.get('class'), cls))


def hasClass(el, cls):
    if el.get('class') is None:
        return False
    paddedAttr = " {0} ".format(el.get('class'))
    paddedCls = " {0} ".format(cls)
    return paddedCls in paddedAttr
| 27.221698 | 89 | 0.625888 | 726 | 5,771 | 4.965565 | 0.256198 | 0.030513 | 0.01165 | 0.016644 | 0.213315 | 0.171706 | 0.116505 | 0.080999 | 0.068239 | 0.032178 | 0 | 0.009238 | 0.249697 | 5,771 | 211 | 90 | 27.350711 | 0.823326 | 0.066713 | 0 | 0.198718 | 0 | 0 | 0.038326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.057692 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
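The `appendChild`/`removeNode` helpers in the record above revolve around lxml's `.text`/`.tail` model, where text that follows a child element is stored on that child's `.tail` rather than on the parent. A small stdlib sketch of the same convention, using `xml.etree.ElementTree` (which shares the text/tail model) so lxml is not required:

```python
# Demonstrates the .text/.tail model that the appendChild helper relies on:
# text appearing after a child element hangs off that child's .tail.
import xml.etree.ElementTree as ET

def append_child(parent, child):
    """Append either an element or a string, mirroring the helper above."""
    if isinstance(child, str):
        if len(parent) > 0:
            last = parent[-1]
            last.tail = (last.tail or '') + child  # text after last child
        else:
            parent.text = (parent.text or '') + child  # no children yet
    else:
        parent.append(child)

p = ET.Element('p')
append_child(p, 'hello ')   # no children yet -> stored on p.text
b = ET.Element('b')
b.text = 'bold'
append_child(p, b)          # element child
append_child(p, ' world')   # stored on b.tail, NOT on p.text

print(ET.tostring(p, encoding='unicode'))  # -> <p>hello <b>bold</b> world</p>
```

This is why `removeNode` above must splice the removed node's `.tail` onto the previous sibling (or the parent's `.text`): otherwise deleting an element would silently drop the text that followed it.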
014397c4b95a8dfff24fd2b3339de39dd77d428a | 9,024 | py | Python | main/rest/views.py | csev/class2go | f9419ae16448d20fc882170f95cfd1c4dc3331ca | [
"Apache-2.0"
] | 2 | 2015-10-31T23:12:52.000Z | 2021-01-19T11:03:00.000Z | main/rest/views.py | sunu/class2go | 653b1edd01d390ad387dd788e0fc2d89445fbcab | [
"Apache-2.0"
] | null | null | null | main/rest/views.py | sunu/class2go | 653b1edd01d390ad387dd788e0fc2d89445fbcab | [
"Apache-2.0"
] | null | null | null | from rest_framework import permissions
from rest_framework.renderers import JSONRenderer
from rest_framework.parsers import JSONParser
from rest_framework import generics
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework.views import APIView
from rest.serializers import *
from django.http import HttpResponse, Http404
from registration.backends import get_backend
from django.template import RequestContext
from django.core.urlresolvers import reverse
from courses.common_page_data import get_common_page_data
from django.views.decorators.http import require_POST
from django.views.decorators.csrf import csrf_protect
from django.contrib.auth.forms import AuthenticationForm
from django.contrib.auth.views import login
from django.views.decorators.cache import never_cache
from django.contrib.auth import login as auth_login
from django.conf import settings
from c2g.util import upgrade_to_https_and_downgrade_upon_redirect
from django.views.decorators.debug import sensitive_post_parameters
from django.utils import simplejson
from c2g.models import *
from django.contrib import auth
import json
import settings
import os.path
import logging

logger = logging.getLogger("foo")


class JSONResponse(HttpResponse):
    """
    An HttpResponse that renders its content into JSON.
    """
    def __init__(self, data, **kwargs):
        content = JSONRenderer().render(data)
        kwargs['content_type'] = 'application/json'
        super(JSONResponse, self).__init__(content, **kwargs)


@sensitive_post_parameters()
@never_cache
@require_POST
@csrf_protect
#@upgrade_to_https_and_downgrade_upon_redirect
def rest_login(request):
    """
    Log in to c2g; if successful, return the course list the student is registered for.
    """
    login_form = AuthenticationForm(data=request.POST)
    if login_form.is_valid():
        auth_login(request, login_form.get_user())
        course_list = Course.objects.all()
        groups = request.user.groups.all()
        courses = []
        for g in groups:
            for c in course_list:
                if (g.id == c.student_group_id or g.id == c.instructor_group_id or g.id == c.tas_group_id or g.id == c.readonly_tas_group_id) and c.mode != 'draft':
                    courses.append({'course_id': c.id})
                    break
        to_json = {'userid': request.user.username, 'email': request.user.email, 'last_name': request.user.last_name,
                   'first_name': request.user.first_name, 'date_joined': request.user.date_joined.strftime('%Y-%m-%dT%H:%M:%S'),
                   'last_login': request.user.last_login.strftime('%Y-%m-%dT%H:%M:%S'), 'courses': courses}
    else:
        to_json = {"login": "error"}
    return HttpResponse(simplejson.dumps(to_json), mimetype='application/json')


"""
Gets Problemset Activities for student
"""
class ProblemActivities(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        problem_activities = ProblemActivity.objects.filter(student=request.user)
        index_list = []
        for prob in problem_activities:
            index_list.append({'id': prob.id, 'video_to_exercise': prob.video_to_exercise_id,
                               'problemset_to_exercise': prob.problemset_to_exercise_id,
                               'problem_identifier': prob.problem_identifier, 'complete': prob.complete, 'attempt_content': prob.attempt_content,
                               'count_hints': prob.count_hints, 'time_taken': prob.time_taken, 'attempt_number': prob.attempt_number,
                               'sha1': prob.sha1, 'seed': prob.seed, 'problem_type': prob.problem_type, 'review_mode': prob.review_mode,
                               'topic_mode': prob.topic_mode, 'casing': prob.casing, 'card': prob.card, 'cards_done': prob.cards_done,
                               'cards_left': prob.cards_left, 'user_selection_val': prob.user_selection_val, 'user_choices': prob.user_choices})
        return Response(index_list)


class VideoActivities(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        video_activities = VideoActivity.objects.filter(student=request.user)
        index_list = []
        for vid_activity in video_activities:
            index_list.append({'id': vid_activity.id, 'course': vid_activity.course_id,
                               'video': vid_activity.video_id,
                               'start_seconds': vid_activity.start_seconds})
        return Response(index_list)


class FilesList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        files = File.objects.all()
        serializer = FileSerializer(files)
        return Response(serializer.data)


class CourseList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        courses = Course.objects.all()
        serializer = CourseSerializer(courses)
        return Response(serializer.data)


class AnnouncementList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        announcements = Announcement.objects.all()
        serializer = AnnouncementSerializer(announcements)
        return Response(serializer.data)


class ProblemSetList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        psets = ProblemSet.objects.all()
        serializer = PSetSerializer(psets)
        return Response(serializer.data)


class ProblemSetToExerciseList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        psetexes = ProblemSetToExercise.objects.all()
        serializer = PSetExerciseSerializer(psetexes)
        return Response(serializer.data)


class ExerciseList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        exercises = Exercise.objects.all()
        serializer = ExerciseSerializer(exercises)
        return Response(serializer.data)


class ContentSectionList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        content_sections = ContentSection.objects.all()
        serializer = ContentSectionSerializer(content_sections)
        return Response(serializer.data)


class VideoList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        videos = Video.objects.all()
        serializer = VideoSerializer(videos)
        return Response(serializer.data)


class VideoToExerciseList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        vidtoexes = VideoToExercise.objects.all()
        serializer = VideoToExerciseSerializer(vidtoexes)
        return Response(serializer.data)


"""
Gets ExamScores for student
"""
class ExamRecordList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        exam_records = ExamRecord.objects.filter(student=request.user)
        serializer = ExamRecordSerializer(exam_records)
        return Response(serializer.data)


class ExamScoreList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        exam_scores = ExamScore.objects.filter(student=request.user)
        serializer = ExamScoreSerializer(exam_scores)
        return Response(serializer.data)


class ExamScoreFieldList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        exam_score_fields = ExamScoreField.objects.filter(student=request.user)
        serializer = ExamScoreFieldSerializer(exam_score_fields)
        return Response(serializer.data)


class ExamList(APIView):
    renderer_classes = (JSONRenderer,)
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request):
        course_id = request.GET.get('course')
        exams = Exam.objects.filter()
        serializer = ExamSerializer(exams)
        return Response(serializer.data)
| 31.663158 | 164 | 0.681959 | 935 | 9,024 | 6.396791 | 0.240642 | 0.037619 | 0.055175 | 0.08527 | 0.376024 | 0.29694 | 0.269855 | 0.252132 | 0.237753 | 0.237753 | 0 | 0.001147 | 0.22695 | 9,024 | 284 | 165 | 31.774648 | 0.856221 | 0.01762 | 0 | 0.352273 | 0 | 0 | 0.047184 | 0.002513 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096591 | false | 0 | 0.170455 | 0 | 0.619318 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0144c5f3fcfcb9ed82f7efa1e7777e181413f23b | 2,520 | py | Python | mlflow/utils/_spark_utils.py | adamreeve/mlflow | d0d307f7f7b49f013727191a672ae2139bf37343 | [
"Apache-2.0"
] | 1 | 2022-01-11T02:51:17.000Z | 2022-01-11T02:51:17.000Z | mlflow/utils/_spark_utils.py | adamreeve/mlflow | d0d307f7f7b49f013727191a672ae2139bf37343 | [
"Apache-2.0"
] | null | null | null | mlflow/utils/_spark_utils.py | adamreeve/mlflow | d0d307f7f7b49f013727191a672ae2139bf37343 | [
"Apache-2.0"
] | null | null | null | import tempfile
import shutil
import os


def _get_active_spark_session():
    try:
        from pyspark.sql import SparkSession
    except ImportError:
        # Return None if user doesn't have PySpark installed
        return None
    try:
        # getActiveSession() only exists in Spark 3.0 and above
        return SparkSession.getActiveSession()
    except Exception:
        # Fall back to this internal field for Spark 2.x and below.
        return SparkSession._instantiatedSession


class _SparkDirectoryDistributor:
    """Distribute spark directory from driver to executors."""

    _extracted_dir_paths = {}

    def __init__(self):
        pass

    @staticmethod
    def add_dir(spark, dir_path):
        """Given a SparkSession and a model_path which refers to a pyfunc directory locally,
        we will zip the directory up, enable it to be distributed to executors, and return
        the "archive_path", which should be used as the path in get_or_load().
        """
        _, archive_basepath = tempfile.mkstemp()
        # NB: We must archive the directory as Spark.addFile does not support non-DFS
        # directories when recursive=True.
        archive_path = shutil.make_archive(archive_basepath, "zip", dir_path)
        spark.sparkContext.addFile(archive_path)
        return archive_path

    @staticmethod
    def get_or_extract(archive_path):
        """Given a path returned by add_local_model(), this method will return a tuple of
        (loaded_model, local_model_path).
        If this Python process ever loaded the model before, we will reuse that copy.
        """
        from pyspark.files import SparkFiles
        import zipfile

        if archive_path in _SparkDirectoryDistributor._extracted_dir_paths:
            return _SparkDirectoryDistributor._extracted_dir_paths[archive_path]

        # BUG: Despite the documentation of SparkContext.addFile() and SparkFiles.get() in Scala
        # and Python, it turns out that we actually need to use the basename as the input to
        # SparkFiles.get(), as opposed to the (absolute) path.
        archive_path_basename = os.path.basename(archive_path)
        local_path = SparkFiles.get(archive_path_basename)
        temp_dir = tempfile.mkdtemp()
        zip_ref = zipfile.ZipFile(local_path, "r")
        zip_ref.extractall(temp_dir)
        zip_ref.close()
        _SparkDirectoryDistributor._extracted_dir_paths[archive_path] = temp_dir
        return _SparkDirectoryDistributor._extracted_dir_paths[archive_path]
| 38.769231 | 96 | 0.700794 | 315 | 2,520 | 5.390476 | 0.425397 | 0.077739 | 0.050059 | 0.09894 | 0.100707 | 0.100707 | 0.069494 | 0 | 0 | 0 | 0 | 0.001562 | 0.237698 | 2,520 | 64 | 97 | 39.375 | 0.882353 | 0.386905 | 0 | 0.166667 | 0 | 0 | 0.002717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.027778 | 0.194444 | 0 | 0.527778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
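The `_SparkDirectoryDistributor` record above ships a local directory to Spark executors by zipping it with `shutil.make_archive` and unpacking it once per process. The archive round-trip itself can be sketched without Spark (the `model.txt` file and its contents below are made-up stand-ins for a real model directory):

```python
# Sketch of the zip/extract round-trip used by _SparkDirectoryDistributor,
# minus the Spark distribution step (addFile/SparkFiles are not involved).
import os
import shutil
import tempfile
import zipfile

# A fake "model directory" to distribute.
src = tempfile.mkdtemp()
with open(os.path.join(src, 'model.txt'), 'w') as f:
    f.write('weights')

# Archive it: make_archive appends the ".zip" suffix itself,
# which is why add_dir uses the mkstemp path as a base name only.
fd, base = tempfile.mkstemp()
os.close(fd)
archive_path = shutil.make_archive(base, 'zip', src)

# Extract into a fresh directory, as get_or_extract does per process.
dest = tempfile.mkdtemp()
with zipfile.ZipFile(archive_path, 'r') as zf:
    zf.extractall(dest)

print(open(os.path.join(dest, 'model.txt')).read())  # -> weights
```

The class-level `_extracted_dir_paths` cache in the record then simply memoizes `archive_path -> dest` so each Python worker process extracts the archive at most once.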
0149d1163a187ccaada2d255b902904587bcfda7 | 342 | py | Python | web/blog/comments/urls.py | BumagniyPacket/django-blog | bb3511c2380a79e400302d75406c568d1c5066b1 | [
"Apache-2.0"
] | 2 | 2017-04-22T11:07:53.000Z | 2017-04-23T12:49:13.000Z | web/blog/comments/urls.py | BumagniyPacket/django-blog-example | bb3511c2380a79e400302d75406c568d1c5066b1 | [
"Apache-2.0"
] | null | null | null | web/blog/comments/urls.py | BumagniyPacket/django-blog-example | bb3511c2380a79e400302d75406c568d1c5066b1 | [
"Apache-2.0"
] | null | null | null | from django.conf.urls import url

from .views import CommentAddView, CommentApproveView, CommentDeleteView

urlpatterns = [
    url(r'^(?P<pk>\d+)/delete$', CommentDeleteView.as_view(), name='delete'),
    url(r'^(?P<pk>\d+)/approve$', CommentApproveView.as_view(), name='approve'),
    url(r'^add$', CommentAddView.as_view(), name='add'),
]
| 34.2 | 80 | 0.690058 | 43 | 342 | 5.418605 | 0.488372 | 0.051502 | 0.128755 | 0.060086 | 0.06867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108187 | 342 | 9 | 81 | 38 | 0.763934 | 0 | 0 | 0 | 0 | 0 | 0.181287 | 0.061404 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0154d88cdbc3dd2aec76ee7cfc07fec8fd62ef41 | 492 | py | Python | week12/api/serializers.py | yestemir/web | 5bdead66c26a3c466701e25ecae9720f04ad4118 | [
"Unlicense"
] | null | null | null | week12/api/serializers.py | yestemir/web | 5bdead66c26a3c466701e25ecae9720f04ad4118 | [
"Unlicense"
] | 13 | 2021-03-10T08:46:52.000Z | 2022-03-02T08:13:58.000Z | week12/api/serializers.py | yestemir/web | 5bdead66c26a3c466701e25ecae9720f04ad4118 | [
"Unlicense"
] | null | null | null | from rest_framework import serializers

from api.models import Company


class CompanySerializer(serializers.Serializer):
    id = serializers.IntegerField(read_only=True)
    name = serializers.CharField()

    def create(self, validated_data):
        category = Company.objects.create(name=validated_data.get('name'))
        return category

    def update(self, instance, validated_data):
        instance.name = validated_data.get('name')
        instance.save()
        return instance
| 28.941176 | 74 | 0.719512 | 55 | 492 | 6.327273 | 0.545455 | 0.149425 | 0.097701 | 0.114943 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191057 | 492 | 16 | 75 | 30.75 | 0.874372 | 0 | 0 | 0 | 0 | 0 | 0.01626 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
015adaedf627e18885346b27fcaa16562c4147f6 | 3,989 | py | Python | 5408Project/DBMS_Software/Serverless/vectorB00870639.py | AparnaVivekanandan18/Distributed-DBMS | a03b1bfbc9684ef8c67303a516f48ea7f53518cb | [
"Apache-2.0"
] | 1 | 2021-09-08T12:05:31.000Z | 2021-09-08T12:05:31.000Z | 5408Project/DBMS_Software/Serverless/vectorB00870639.py | AparnaVivekanandan18/Distributed-DBMS | a03b1bfbc9684ef8c67303a516f48ea7f53518cb | [
"Apache-2.0"
] | null | null | null | 5408Project/DBMS_Software/Serverless/vectorB00870639.py | AparnaVivekanandan18/Distributed-DBMS | a03b1bfbc9684ef8c67303a516f48ea7f53518cb | [
"Apache-2.0"
] | null | null | null | import json
import re
import collections
import os
import csv
from csv import writer

import boto3
import numpy as np
import pandas as pd

# boto3 S3 initialization
s3_client = boto3.client("s3")


def lambda_handler(event, context):
    bucketname = 'sourcedatab00870639'
    # event contains all information about the uploaded object
    print("Event :", event)
    # Bucket name where the file was uploaded
    sourcebucket = event['Records'][0]['s3']['bucket']['name']
    # Filename of the object (with path)
    file_key_name = event['Records'][0]['s3']['object']['key']
    input_file = os.path.join(sourcebucket, file_key_name)

    # Start processing the incoming data.
    bucket = bucketname
    key = file_key_name
    response = s3_client.get_object(Bucket=sourcebucket, Key=file_key_name)
    content = response['Body'].read().decode('utf-8')
    x = content.split()

    stopwords = ['ourselves', 'hers', 'between', 'yourself', 'but', 'again', 'there', 'about', 'once', 'during', 'out',
                 'very', 'having', 'with', 'they', 'own', 'an', 'be', 'some', 'for', 'do', 'its', 'yours', 'such',
                 'into', 'of', 'most', 'itself', 'other', 'off', 'is', 's', 'am', 'or', 'who', 'as', 'from', 'him',
                 'each', 'the', 'themselves', 'until', 'below', 'are', 'we', 'these', 'your', 'his', 'through', 'don',
                 'nor', 'me', 'were', 'her', 'more', 'himself', 'this', 'down', 'should', 'our', 'their', 'while',
                 'above', 'both', 'up', 'to', 'ours', 'had', 'she', 'all', 'no', 'when', 'at', 'any', 'before', 'them',
                 'same', 'and', 'been', 'have', 'in', 'will', 'on', 'does', 'yourselves', 'then', 'that', 'because',
                 'what', 'over', 'why', 'so', 'can', 'did', 'not', 'now', 'under', 'he', 'you', 'herself', 'has',
                 'just', 'where', 'too', 'only', 'myself', 'which', 'those', 'i', 'after', 'few', 'whom', 't', 'being',
                 'if', 'theirs', 'my', 'against', 'a', 'by', 'doing', 'it', 'how', 'further', 'was', 'here', 'than']
    stop_words = set(stopwords)
    tokens_without_sw = [w for w in x if w not in stop_words]
    current_word = []
    next_word = []
    data_list = [['Current_Word', 'Next_Word', 'Levenshtein_distance']]

    def levenshteindistance(var1, var2):
        size_x = len(var1) + 1
        size_y = len(var2) + 1
        matrix = np.zeros((size_x, size_y))
        for x in range(size_x):
            matrix[x, 0] = x
        for y in range(size_y):
            matrix[0, y] = y
        for x in range(1, size_x):
            for y in range(1, size_y):
                if var1[x - 1] == var2[y - 1]:
                    matrix[x, y] = min(matrix[x - 1, y] + 1, matrix[x - 1, y - 1], matrix[x, y - 1] + 1)
                else:
                    matrix[x, y] = min(matrix[x - 1, y] + 1, matrix[x - 1, y - 1] + 1, matrix[x, y - 1] + 1)
        return matrix[size_x - 1, size_y - 1]

    for i in range(len(tokens_without_sw) - 1):
        data_list.append([tokens_without_sw[i], tokens_without_sw[i + 1],
                          levenshteindistance(tokens_without_sw[i], tokens_without_sw[i + 1])])
    print(tokens_without_sw)

    df = pd.DataFrame(data_list)
    bytes_to_write = df.to_csv(None, header=None, index=False).encode()

    file_name = "testVector.csv"
    s3 = boto3.resource('s3')
    bucket = s3.Bucket(bucketname)
    key = file_name
    ans = []
    current_data = s3_client.get_object(Bucket=bucketname, Key=file_name)
    lines = csv.reader(current_data['Body'].read().decode('utf-8').splitlines())
    for row in lines:
        ans.append(row)
    for d in data_list:
        ans.append(d)

    file_name = "trainVector.csv"
    resfile = s3_client.get_object(Bucket=bucketname, Key=file_name)
    restext = resfile["Body"].read().decode('utf-8')
    # Append the newly computed rows (data_list holds the rows built above).
    updated_data = restext + "\n" + "\n".join(str(item).strip('[]') for item in data_list)
    s3_client.put_object(Body=updated_data, Bucket=bucketname, Key=file_name)
    print(updated_data)
| 43.835165 | 119 | 0.576586 | 541 | 3,989 | 4.127542 | 0.438078 | 0.028213 | 0.047022 | 0.016122 | 0.179579 | 0.066726 | 0.060457 | 0.057322 | 0.057322 | 0.027765 | 0 | 0.024902 | 0.234896 | 3,989 | 90 | 120 | 44.322222 | 0.70675 | 0.052895 | 0 | 0.039474 | 0 | 0 | 0.184301 | 0 | 0 | 0 | 0 | 0.011111 | 0 | 1 | 0.026316 | false | 0 | 0.144737 | 0 | 0.184211 | 0.039474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
015c6707869d8578c11c15943f88eac9d78ec845 | 1,547 | py | Python | sshsysmon/drivers/ssh.py | zix99/sshsysmon | 091a28f2d28795f05e12a158bef22a10c87de8ff | [
"MIT"
] | 46 | 2016-03-13T20:57:24.000Z | 2022-03-21T13:37:04.000Z | sshsysmon/drivers/ssh.py | zix99/sshmon | 091a28f2d28795f05e12a158bef22a10c87de8ff | [
"MIT"
] | 5 | 2016-03-15T10:00:54.000Z | 2021-04-30T01:41:02.000Z | sshsysmon/drivers/ssh.py | zix99/sshmon | 091a28f2d28795f05e12a158bef22a10c87de8ff | [
"MIT"
] | 9 | 2016-09-23T09:37:31.000Z | 2021-05-11T11:26:46.000Z | from lib.plugins import Driver
import os
from paramiko import SSHClient, RSAKey, AutoAddPolicy
from io import StringIO
class Ssh(Driver):
    DEFAULT_KEY_PATH = "~/.ssh/id_rsa"

    def __init__(self, host, username='root', password=None, key=None, port=22, path="/proc"):
        Driver.__init__(self)
        self._host = host
        self._username = username
        self._password = password
        self._port = port
        self._path = path
        self._client = None
        self._ftp = None

        if not password or key:
            self._key = RSAKey.from_private_key_file(os.path.expanduser(key or Ssh.DEFAULT_KEY_PATH))
        else:
            self._key = None

    def readProc(self, path):
        sftp = self._connectFtp()
        o = StringIO()
        for line in sftp.open(os.path.join(self._path, path)):
            o.write(line)
        return o.getvalue()

    def sh(self, cmd):
        client = self._connect()
        stdin, stdout, stderr = client.exec_command(cmd)
        return {
            "stdout": stdout.read().decode('utf-8'),
            "stderr": stderr.read().decode('utf-8'),
            "status": stdout.channel.recv_exit_status()
        }

    def _connect(self):
        if not self._client:
            client = SSHClient()
            client.set_missing_host_key_policy(AutoAddPolicy())
            client.connect(hostname=self._host, username=self._username, password=self._password, pkey=self._key, port=self._port, look_for_keys=False)
            self._client = client
        return self._client

    def _connectFtp(self):
        if not self._ftp:
            client = self._connect()
            self._ftp = client.open_sftp()
        return self._ftp

    def getHost(self):
        return self._host
def create(args):
    return Ssh(**args) | 25.360656 | 144 | 0.706529 | 221 | 1,547 | 4.705882 | 0.352941 | 0.030769 | 0.026923 | 0.026923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003094 | 0.164189 | 1,547 | 61 | 145 | 25.360656 | 0.801237 | 0 | 0 | 0.040816 | 0 | 0 | 0.0323 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.081633 | 0.081633 | 0.040816 | 0.387755 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
015d37692cec82d3d89e289491db230733103ea2 | 497 | py | Python | python/src/hackerrank/2020/military_time.py | ccampo133/coding-challenges | 618b51d2d6f6946d3756c68b0008a178d80952ae | [
"MIT"
] | null | null | null | python/src/hackerrank/2020/military_time.py | ccampo133/coding-challenges | 618b51d2d6f6946d3756c68b0008a178d80952ae | [
"MIT"
] | null | null | null | python/src/hackerrank/2020/military_time.py | ccampo133/coding-challenges | 618b51d2d6f6946d3756c68b0008a178d80952ae | [
"MIT"
] | null | null | null | def timeConversion(s):
    ampm = s[-2:]
    hr = s[:2]
    if ampm == 'AM':
        return s[:-2] if hr != '12' else '00' + s[2:-2]
    return s[:-2] if hr == '12' else str(int(s[:2]) + 12) + s[2:-2]


if __name__ == '__main__':
    s1 = '07:05:45PM'
    assert timeConversion(s1) == '19:05:45'
    s2 = '07:05:45AM'
    assert timeConversion(s2) == '07:05:45'
    s3 = '12:06:21PM'
    assert timeConversion(s3) == '12:06:21'
    s4 = '12:06:21AM'
    assert timeConversion(s4) == '00:06:21'
| 23.666667 | 67 | 0.523139 | 81 | 497 | 3.111111 | 0.382716 | 0.055556 | 0.047619 | 0.079365 | 0.142857 | 0.142857 | 0.142857 | 0 | 0 | 0 | 0 | 0.197297 | 0.255533 | 497 | 20 | 68 | 24.85 | 0.483784 | 0 | 0 | 0 | 0 | 0 | 0.177062 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0163df4b94688e25551d1e8cbc3582f32d6b4f39 | 21,716 | py | Python | Packs/Ansible_Powered_Integrations/Integrations/Linux/Linux.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 799 | 2016-08-02T06:43:14.000Z | 2022-03-31T11:10:11.000Z | Packs/Ansible_Powered_Integrations/Integrations/Linux/Linux.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 9,317 | 2016-08-07T19:00:51.000Z | 2022-03-31T21:56:04.000Z | Packs/Ansible_Powered_Integrations/Integrations/Linux/Linux.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 1,297 | 2016-08-04T13:59:00.000Z | 2022-03-31T23:43:06.000Z | import json
import traceback
from typing import Dict, cast
import ansible_runner
import demistomock as demisto # noqa: F401
import ssh_agent_setup
from CommonServerPython import * # noqa: F401
# Dict to Markdown Converter adapted from https://github.com/PolBaladas/torsimany/
def dict2md(json_block, depth=0):
    markdown = ""
    if isinstance(json_block, dict):
        markdown = parseDict(json_block, depth)
    if isinstance(json_block, list):
        markdown = parseList(json_block, depth)
    return markdown


def parseDict(d, depth):
    markdown = ""
    for k in d:
        if isinstance(d[k], (dict, list)):
            markdown += addHeader(k, depth)
            markdown += dict2md(d[k], depth + 1)
        else:
            markdown += buildValueChain(k, d[k], depth)
    return markdown


def parseList(rawlist, depth):
    markdown = ""
    for value in rawlist:
        if not isinstance(value, (dict, list)):
            index = rawlist.index(value)
            markdown += buildValueChain(index, value, depth)
        else:
            markdown += parseDict(value, depth)
    return markdown


def buildHeaderChain(depth):
    list_tag = '* '
    htag = '#'

    chain = list_tag * (bool(depth)) + htag * (depth + 1) + \
        ' value ' + (htag * (depth + 1) + '\n')
    return chain


def buildValueChain(key, value, depth):
    tab = "  "
    list_tag = '* '

    chain = tab * (bool(depth - 1)) + list_tag + \
        str(key) + ": " + str(value) + "\n"
    return chain


def addHeader(value, depth):
    chain = buildHeaderChain(depth)
    chain = chain.replace('value', value.title())
    return chain
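The converter's effect is easiest to see on a small nested dict. The sketch below condenses the helpers above into one illustrative function (same header/bullet conventions, not the original code) so it runs standalone:

```python
def dict2md(block, depth=0):
    # Render nested dicts/lists as markdown: scalar entries become bullets,
    # nested containers get a title-cased '#'-style header per depth level.
    md = ""
    if isinstance(block, dict):
        items = block.items()
    elif isinstance(block, list):
        items = enumerate(block)
    else:
        return md
    for key, value in items:
        if isinstance(value, (dict, list)):
            md += "* " * bool(depth) + "#" * (depth + 1) \
                + f" {str(key).title()} " + "#" * (depth + 1) + "\n"
            md += dict2md(value, depth + 1)
        else:
            md += "  " * bool(depth - 1) + f"* {key}: {value}\n"
    return md


print(dict2md({"host": "web1", "facts": {"os": "Debian"}}))
```

A top-level scalar becomes `* host: web1`, while the nested `facts` dict gets a `# Facts #` header with its own bullets beneath it.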
# Remove ansible branding from results
def rec_ansible_key_strip(obj):
    if isinstance(obj, dict):
        return {key.replace('ansible_', ''): rec_ansible_key_strip(val) for key, val in obj.items()}
    return obj
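The key-stripping helper recurses through dict values but leaves everything else untouched, so a nested facts payload loses its `ansible_` prefixes at every dict level. A self-contained copy (input data here is made up for illustration):

```python
def rec_ansible_key_strip(obj):
    # Drop the 'ansible_' prefix from every dict key, recursively.
    if isinstance(obj, dict):
        return {key.replace('ansible_', ''): rec_ansible_key_strip(val)
                for key, val in obj.items()}
    return obj


facts = {'ansible_host': '10.0.0.5',
         'ansible_facts': {'ansible_os_family': 'Debian'}}
print(rec_ansible_key_strip(facts))
```

One design consequence worth noting: because only dicts are recursed into, dicts nested inside *lists* keep their `ansible_` prefixes with this implementation.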
# COMMAND FUNCTIONS
def generic_ansible(integration_name, command, args: Dict[str, Any]) -> CommandResults:
    readable_output = ""
    sshkey = ""
    fork_count = 1  # default to executing against 1 host at a time

    if args.get('concurrency'):
        fork_count = cast(int, args.get('concurrency'))

    inventory: Dict[str, dict] = {}
    inventory['all'] = {}
    inventory['all']['hosts'] = {}

    if type(args['host']) is list:
        # host arg can be an array of multiple hosts
        hosts = args['host']
    else:
        # host arg could also be csv
        hosts = [host.strip() for host in args['host'].split(',')]

    for host in hosts:
        new_host = {}
        new_host['ansible_host'] = host

        if ":" in host:
            address = host.split(':')
            new_host['ansible_port'] = address[1]
            new_host['ansible_host'] = address[0]
        else:
            new_host['ansible_host'] = host

        if demisto.params().get('port'):
            new_host['ansible_port'] = demisto.params().get('port')

        # Linux
        # Different credential options
        # SSH Key saved in credential manager selection
        if demisto.params().get('creds', {}).get('credentials').get('sshkey'):
            username = demisto.params().get('creds', {}).get('credentials').get('user')
            sshkey = demisto.params().get('creds', {}).get('credentials').get('sshkey')

            new_host['ansible_user'] = username

        # Password saved in credential manager selection
        elif demisto.params().get('creds', {}).get('credentials').get('password'):
            username = demisto.params().get('creds', {}).get('credentials').get('user')
            password = demisto.params().get('creds', {}).get('credentials').get('password')

            new_host['ansible_user'] = username
            new_host['ansible_password'] = password

        # username/password individually entered
        else:
            username = demisto.params().get('creds', {}).get('identifier')
            password = demisto.params().get('creds', {}).get('password')

            new_host['ansible_user'] = username
            new_host['ansible_password'] = password

        inventory['all']['hosts'][host] = new_host

    module_args = ""
    # build module args list
    for arg_key, arg_value in args.items():
        # skip hardcoded host arg, as it doesn't relate to the module
        if arg_key == 'host':
            continue

        module_args += "%s=\"%s\" " % (arg_key, arg_value)

    r = ansible_runner.run(inventory=inventory, host_pattern='all', module=command, quiet=True,
                           omit_event_data=True, ssh_key=sshkey, module_args=module_args, forks=fork_count)

    results = []
    for each_host_event in r.events:
        # Troubleshooting
        # demisto.log("%s: %s\n" % (each_host_event['event'], each_host_event))
        if each_host_event['event'] in ["runner_on_ok", "runner_on_unreachable", "runner_on_failed"]:
            # parse results
            result = json.loads('{' + each_host_event['stdout'].split('{', 1)[1])
            host = each_host_event['stdout'].split('|', 1)[0].strip()
            status = each_host_event['stdout'].replace('=>', '|').split('|', 3)[1]

            # if successful build outputs
            if each_host_event['event'] == "runner_on_ok":
                if 'fact' in command:
                    result = result['ansible_facts']
                else:
                    if result.get(command) is not None:
                        result = result[command]
                    else:
                        result.pop("ansible_facts", None)

                result = rec_ansible_key_strip(result)

                if host != "localhost":
                    readable_output += "# %s - %s\n" % (host, status)
                else:
                    # This integration is not host based
                    readable_output += "# %s\n" % status

                readable_output += dict2md(result)

                # add host and status to result
                result['host'] = host
                result['status'] = status

                results.append(result)
            if each_host_event['event'] == "runner_on_unreachable":
                msg = "Host %s unreachable\nError Details: %s" % (host, result)
                return_error(msg)

            if each_host_event['event'] == "runner_on_failed":
                msg = "Host %s failed running command\nError Details: %s" % (host, result)
                return_error(msg)

    return CommandResults(
        readable_output=readable_output,
        outputs_prefix=integration_name + '.' + command,
        outputs_key_field='',
        outputs=results
    )
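The inventory-building step above accepts either a host list or a comma-separated string, with an optional `:port` suffix per host. A condensed sketch of just that parsing (function name and the credential-free entries are illustrative):

```python
def parse_hosts(host_arg):
    # Accept a list of hosts or a comma-separated string; a ':port'
    # suffix is split into a separate ansible_port entry, mirroring
    # the inventory construction above.
    hosts = host_arg if isinstance(host_arg, list) \
        else [h.strip() for h in host_arg.split(',')]
    inventory = {}
    for host in hosts:
        entry = {'ansible_host': host}
        if ':' in host:
            addr, port = host.split(':')
            entry = {'ansible_host': addr, 'ansible_port': port}
        inventory[host] = entry
    return inventory


print(parse_hosts('10.0.0.1, 10.0.0.2:2222'))
```

Note the original keeps the full `host:port` string as the inventory key while pointing `ansible_host` at the bare address, which this sketch reproduces.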
# MAIN FUNCTION
def main() -> None:
    """main function, parses params and runs command functions

    :return:
    :rtype:
    """
    # SSH Key integration requires ssh_agent to be running in the background
    ssh_agent_setup.setup()

    try:
        if demisto.command() == 'test-module':
            # This is the call made when pressing the integration Test button.
            return_results('ok')
        elif demisto.command() == 'linux-alternatives':
            return_results(generic_ansible('linux', 'alternatives', demisto.args()))
        elif demisto.command() == 'linux-at':
            return_results(generic_ansible('linux', 'at', demisto.args()))
        elif demisto.command() == 'linux-authorized-key':
            return_results(generic_ansible('linux', 'authorized_key', demisto.args()))
        elif demisto.command() == 'linux-capabilities':
            return_results(generic_ansible('linux', 'capabilities', demisto.args()))
        elif demisto.command() == 'linux-cron':
            return_results(generic_ansible('linux', 'cron', demisto.args()))
        elif demisto.command() == 'linux-cronvar':
            return_results(generic_ansible('linux', 'cronvar', demisto.args()))
        elif demisto.command() == 'linux-dconf':
            return_results(generic_ansible('linux', 'dconf', demisto.args()))
        elif demisto.command() == 'linux-debconf':
            return_results(generic_ansible('linux', 'debconf', demisto.args()))
        elif demisto.command() == 'linux-filesystem':
            return_results(generic_ansible('linux', 'filesystem', demisto.args()))
        elif demisto.command() == 'linux-firewalld':
            return_results(generic_ansible('linux', 'firewalld', demisto.args()))
        elif demisto.command() == 'linux-gather-facts':
            return_results(generic_ansible('linux', 'gather_facts', demisto.args()))
        elif demisto.command() == 'linux-gconftool2':
            return_results(generic_ansible('linux', 'gconftool2', demisto.args()))
        elif demisto.command() == 'linux-getent':
            return_results(generic_ansible('linux', 'getent', demisto.args()))
        elif demisto.command() == 'linux-group':
            return_results(generic_ansible('linux', 'group', demisto.args()))
        elif demisto.command() == 'linux-hostname':
            return_results(generic_ansible('linux', 'hostname', demisto.args()))
        elif demisto.command() == 'linux-interfaces-file':
            return_results(generic_ansible('linux', 'interfaces_file', demisto.args()))
        elif demisto.command() == 'linux-iptables':
            return_results(generic_ansible('linux', 'iptables', demisto.args()))
        elif demisto.command() == 'linux-java-cert':
            return_results(generic_ansible('linux', 'java_cert', demisto.args()))
        elif demisto.command() == 'linux-java-keystore':
            return_results(generic_ansible('linux', 'java_keystore', demisto.args()))
        elif demisto.command() == 'linux-kernel-blacklist':
            return_results(generic_ansible('linux', 'kernel_blacklist', demisto.args()))
        elif demisto.command() == 'linux-known-hosts':
            return_results(generic_ansible('linux', 'known_hosts', demisto.args()))
        elif demisto.command() == 'linux-listen-ports-facts':
            return_results(generic_ansible('linux', 'listen_ports_facts', demisto.args()))
        elif demisto.command() == 'linux-locale-gen':
            return_results(generic_ansible('linux', 'locale_gen', demisto.args()))
        elif demisto.command() == 'linux-modprobe':
            return_results(generic_ansible('linux', 'modprobe', demisto.args()))
        elif demisto.command() == 'linux-mount':
            return_results(generic_ansible('linux', 'mount', demisto.args()))
        elif demisto.command() == 'linux-open-iscsi':
            return_results(generic_ansible('linux', 'open_iscsi', demisto.args()))
        elif demisto.command() == 'linux-pam-limits':
            return_results(generic_ansible('linux', 'pam_limits', demisto.args()))
        elif demisto.command() == 'linux-pamd':
            return_results(generic_ansible('linux', 'pamd', demisto.args()))
        elif demisto.command() == 'linux-parted':
            return_results(generic_ansible('linux', 'parted', demisto.args()))
        elif demisto.command() == 'linux-pids':
            return_results(generic_ansible('linux', 'pids', demisto.args()))
        elif demisto.command() == 'linux-ping':
            return_results(generic_ansible('linux', 'ping', demisto.args()))
        elif demisto.command() == 'linux-python-requirements-info':
            return_results(generic_ansible('linux', 'python_requirements_info', demisto.args()))
        elif demisto.command() == 'linux-reboot':
            return_results(generic_ansible('linux', 'reboot', demisto.args()))
        elif demisto.command() == 'linux-seboolean':
            return_results(generic_ansible('linux', 'seboolean', demisto.args()))
        elif demisto.command() == 'linux-sefcontext':
            return_results(generic_ansible('linux', 'sefcontext', demisto.args()))
        elif demisto.command() == 'linux-selinux':
            return_results(generic_ansible('linux', 'selinux', demisto.args()))
        elif demisto.command() == 'linux-selinux-permissive':
            return_results(generic_ansible('linux', 'selinux_permissive', demisto.args()))
        elif demisto.command() == 'linux-selogin':
            return_results(generic_ansible('linux', 'selogin', demisto.args()))
        elif demisto.command() == 'linux-seport':
            return_results(generic_ansible('linux', 'seport', demisto.args()))
        elif demisto.command() == 'linux-service':
            return_results(generic_ansible('linux', 'service', demisto.args()))
        elif demisto.command() == 'linux-service-facts':
            return_results(generic_ansible('linux', 'service_facts', demisto.args()))
        elif demisto.command() == 'linux-setup':
            return_results(generic_ansible('linux', 'setup', demisto.args()))
        elif demisto.command() == 'linux-sysctl':
            return_results(generic_ansible('linux', 'sysctl', demisto.args()))
        elif demisto.command() == 'linux-systemd':
            return_results(generic_ansible('linux', 'systemd', demisto.args()))
        elif demisto.command() == 'linux-sysvinit':
            return_results(generic_ansible('linux', 'sysvinit', demisto.args()))
        elif demisto.command() == 'linux-timezone':
            return_results(generic_ansible('linux', 'timezone', demisto.args()))
        elif demisto.command() == 'linux-ufw':
            return_results(generic_ansible('linux', 'ufw', demisto.args()))
        elif demisto.command() == 'linux-user':
            return_results(generic_ansible('linux', 'user', demisto.args()))
        elif demisto.command() == 'linux-xfs-quota':
            return_results(generic_ansible('linux', 'xfs_quota', demisto.args()))
        elif demisto.command() == 'linux-htpasswd':
            return_results(generic_ansible('linux', 'htpasswd', demisto.args()))
        elif demisto.command() == 'linux-supervisorctl':
            return_results(generic_ansible('linux', 'supervisorctl', demisto.args()))
        elif demisto.command() == 'linux-openssh-cert':
            return_results(generic_ansible('linux', 'openssh_cert', demisto.args()))
        elif demisto.command() == 'linux-openssh-keypair':
            return_results(generic_ansible('linux', 'openssh_keypair', demisto.args()))
        elif demisto.command() == 'linux-acl':
            return_results(generic_ansible('linux', 'acl', demisto.args()))
        elif demisto.command() == 'linux-archive':
            return_results(generic_ansible('linux', 'archive', demisto.args()))
        elif demisto.command() == 'linux-assemble':
            return_results(generic_ansible('linux', 'assemble', demisto.args()))
        elif demisto.command() == 'linux-blockinfile':
            return_results(generic_ansible('linux', 'blockinfile', demisto.args()))
        elif demisto.command() == 'linux-file':
            return_results(generic_ansible('linux', 'file', demisto.args()))
        elif demisto.command() == 'linux-find':
            return_results(generic_ansible('linux', 'find', demisto.args()))
        elif demisto.command() == 'linux-ini-file':
            return_results(generic_ansible('linux', 'ini_file', demisto.args()))
        elif demisto.command() == 'linux-iso-extract':
            return_results(generic_ansible('linux', 'iso_extract', demisto.args()))
        elif demisto.command() == 'linux-lineinfile':
            return_results(generic_ansible('linux', 'lineinfile', demisto.args()))
        elif demisto.command() == 'linux-replace':
            return_results(generic_ansible('linux', 'replace', demisto.args()))
        elif demisto.command() == 'linux-stat':
            return_results(generic_ansible('linux', 'stat', demisto.args()))
        elif demisto.command() == 'linux-synchronize':
            return_results(generic_ansible('linux', 'synchronize', demisto.args()))
        elif demisto.command() == 'linux-tempfile':
            return_results(generic_ansible('linux', 'tempfile', demisto.args()))
        elif demisto.command() == 'linux-unarchive':
            return_results(generic_ansible('linux', 'unarchive', demisto.args()))
        elif demisto.command() == 'linux-xml':
            return_results(generic_ansible('linux', 'xml', demisto.args()))
        elif demisto.command() == 'linux-expect':
            return_results(generic_ansible('linux', 'expect', demisto.args()))
        elif demisto.command() == 'linux-bower':
            return_results(generic_ansible('linux', 'bower', demisto.args()))
        elif demisto.command() == 'linux-bundler':
            return_results(generic_ansible('linux', 'bundler', demisto.args()))
        elif demisto.command() == 'linux-composer':
            return_results(generic_ansible('linux', 'composer', demisto.args()))
        elif demisto.command() == 'linux-cpanm':
            return_results(generic_ansible('linux', 'cpanm', demisto.args()))
        elif demisto.command() == 'linux-gem':
            return_results(generic_ansible('linux', 'gem', demisto.args()))
        elif demisto.command() == 'linux-maven-artifact':
            return_results(generic_ansible('linux', 'maven_artifact', demisto.args()))
        elif demisto.command() == 'linux-npm':
            return_results(generic_ansible('linux', 'npm', demisto.args()))
        elif demisto.command() == 'linux-pear':
            return_results(generic_ansible('linux', 'pear', demisto.args()))
        elif demisto.command() == 'linux-pip':
            return_results(generic_ansible('linux', 'pip', demisto.args()))
        elif demisto.command() == 'linux-pip-package-info':
            return_results(generic_ansible('linux', 'pip_package_info', demisto.args()))
        elif demisto.command() == 'linux-yarn':
            return_results(generic_ansible('linux', 'yarn', demisto.args()))
        elif demisto.command() == 'linux-apk':
            return_results(generic_ansible('linux', 'apk', demisto.args()))
        elif demisto.command() == 'linux-apt':
            return_results(generic_ansible('linux', 'apt', demisto.args()))
        elif demisto.command() == 'linux-apt-key':
            return_results(generic_ansible('linux', 'apt_key', demisto.args()))
        elif demisto.command() == 'linux-apt-repo':
            return_results(generic_ansible('linux', 'apt_repo', demisto.args()))
        elif demisto.command() == 'linux-apt-repository':
            return_results(generic_ansible('linux', 'apt_repository', demisto.args()))
        elif demisto.command() == 'linux-apt-rpm':
            return_results(generic_ansible('linux', 'apt_rpm', demisto.args()))
        elif demisto.command() == 'linux-dpkg-selections':
            return_results(generic_ansible('linux', 'dpkg_selections', demisto.args()))
        elif demisto.command() == 'linux-flatpak':
            return_results(generic_ansible('linux', 'flatpak', demisto.args()))
        elif demisto.command() == 'linux-flatpak-remote':
            return_results(generic_ansible('linux', 'flatpak_remote', demisto.args()))
        elif demisto.command() == 'linux-homebrew':
            return_results(generic_ansible('linux', 'homebrew', demisto.args()))
        elif demisto.command() == 'linux-homebrew-cask':
            return_results(generic_ansible('linux', 'homebrew_cask', demisto.args()))
        elif demisto.command() == 'linux-homebrew-tap':
            return_results(generic_ansible('linux', 'homebrew_tap', demisto.args()))
        elif demisto.command() == 'linux-layman':
            return_results(generic_ansible('linux', 'layman', demisto.args()))
        elif demisto.command() == 'linux-package':
            return_results(generic_ansible('linux', 'package', demisto.args()))
        elif demisto.command() == 'linux-package-facts':
            return_results(generic_ansible('linux', 'package_facts', demisto.args()))
        elif demisto.command() == 'linux-yum':
            return_results(generic_ansible('linux', 'yum', demisto.args()))
        elif demisto.command() == 'linux-yum-repository':
            return_results(generic_ansible('linux', 'yum_repository', demisto.args()))
        elif demisto.command() == 'linux-zypper':
            return_results(generic_ansible('linux', 'zypper', demisto.args()))
        elif demisto.command() == 'linux-zypper-repository':
            return_results(generic_ansible('linux', 'zypper_repository', demisto.args()))
        elif demisto.command() == 'linux-snap':
            return_results(generic_ansible('linux', 'snap', demisto.args()))
        elif demisto.command() == 'linux-redhat-subscription':
            return_results(generic_ansible('linux', 'redhat_subscription', demisto.args()))
        elif demisto.command() == 'linux-rhn-channel':
            return_results(generic_ansible('linux', 'rhn_channel', demisto.args()))
        elif demisto.command() == 'linux-rhn-register':
            return_results(generic_ansible('linux', 'rhn_register', demisto.args()))
        elif demisto.command() == 'linux-rhsm-release':
            return_results(generic_ansible('linux', 'rhsm_release', demisto.args()))
        elif demisto.command() == 'linux-rhsm-repository':
            return_results(generic_ansible('linux', 'rhsm_repository', demisto.args()))
        elif demisto.command() == 'linux-rpm-key':
            return_results(generic_ansible('linux', 'rpm_key', demisto.args()))
        elif demisto.command() == 'linux-get-url':
            return_results(generic_ansible('linux', 'get_url', demisto.args()))
    # Log exceptions and return errors
    except Exception as e:
        demisto.error(traceback.format_exc())  # print the traceback
        return_error(f'Failed to execute {demisto.command()} command.\nError:\n{str(e)}')
# ENTRY POINT
if __name__ in ('__main__', '__builtin__', 'builtins'):
main()
| 49.020316 | 107 | 0.615399 | 2,312 | 21,716 | 5.614619 | 0.137543 | 0.117556 | 0.148371 | 0.189585 | 0.645097 | 0.466374 | 0.172791 | 0.0369 | 0.018026 | 0.010323 | 0 | 0.001555 | 0.230107 | 21,716 | 442 | 108 | 49.131222 | 0.774867 | 0.046279 | 0 | 0.084034 | 0 | 0 | 0.185474 | 0.016742 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02521 | false | 0.019608 | 0.019608 | 0 | 0.070028 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
016c6398fbb64a982ddb4440ab6f451537982ecf | 37,509 | py | Python | mmfewshot/classification/datasets/tiered_imagenet.py | BIGWangYuDong/mmfewshot | dac097afc92df176bc2de76b7c90968584865197 | [
"Apache-2.0"
] | 376 | 2021-11-23T13:29:57.000Z | 2022-03-30T07:22:14.000Z | mmfewshot/classification/datasets/tiered_imagenet.py | BIGWangYuDong/mmfewshot | dac097afc92df176bc2de76b7c90968584865197 | [
"Apache-2.0"
] | 51 | 2021-11-23T14:45:08.000Z | 2022-03-30T03:37:15.000Z | mmfewshot/classification/datasets/tiered_imagenet.py | BIGWangYuDong/mmfewshot | dac097afc92df176bc2de76b7c90968584865197 | [
"Apache-2.0"
] | 56 | 2021-11-23T14:02:27.000Z | 2022-03-31T09:01:50.000Z | # Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp
import pickle
import warnings
from typing import Dict, List, Optional, Sequence, Union
import mmcv
import numpy as np
from mmcls.datasets.builder import DATASETS
from typing_extensions import Literal
from .base import BaseFewShotDataset
TRAIN_CLASSES = [
('Yorkshire terrier', 'terrier'), ('space shuttle', 'craft'),
('drake', 'aquatic bird'),
("plane, carpenter's plane, woodworking plane", 'tool'),
('mosquito net', 'protective covering, protective cover, protect'),
('sax, saxophone', 'musical instrument, instrument'),
('container ship, containership, container vessel', 'craft'),
('patas, hussar monkey, Erythrocebus patas', 'primate'),
('cheetah, chetah, Acinonyx jubatus', 'feline, felid'),
('submarine, pigboat, sub, U-boat', 'craft'),
('prison, prison house', 'establishment'),
('can opener, tin opener', 'tool'), ('syringe', 'instrument'),
('odometer, hodometer, mileometer, milometer', 'instrument'),
('bassoon', 'musical instrument, instrument'),
('Kerry blue terrier', 'terrier'),
('scale, weighing machine', 'instrument'), ('baseball', 'game equipment'),
('cassette player', 'electronic equipment'),
('shield, buckler', 'protective covering, protective cover, protect'),
('goldfinch, Carduelis carduelis', 'passerine, passeriform bird'),
('cornet, horn, trumpet, trump', 'musical instrument, instrument'),
('flute, transverse flute', 'musical instrument, instrument'),
('stopwatch, stop watch', 'instrument'), ('basketball', 'game equipment'),
('brassiere, bra, bandeau', 'garment'),
('bulbul', 'passerine, passeriform bird'),
('steel drum', 'musical instrument, instrument'),
('bolo tie, bolo, bola tie, bola', 'garment'),
('planetarium', 'building, edifice'), ('stethoscope', 'instrument'),
('proboscis monkey, Nasalis larvatus', 'primate'),
('guillotine', 'instrument'),
('Scottish deerhound, deerhound', 'hound, hound dog'),
('ocarina, sweet potato', 'musical instrument, instrument'),
('Border terrier', 'terrier'),
('capuchin, ringtail, Cebus capucinus', 'primate'),
('magnetic compass', 'instrument'), ('alligator lizard', 'saurian'),
('baboon', 'primate'), ('sundial', 'instrument'),
('gibbon, Hylobates lar', 'primate'),
('grand piano, grand', 'musical instrument, instrument'),
('Arabian camel, dromedary, Camelus dromedarius',
'ungulate, hoofed mammal'), ('basset, basset hound', 'hound, hound dog'),
('corkscrew, bottle screw', 'tool'), ('miniskirt, mini', 'garment'),
('missile', 'instrument'), ('hatchet', 'tool'),
('acoustic guitar', 'musical instrument, instrument'),
('impala, Aepyceros melampus', 'ungulate, hoofed mammal'),
('parking meter', 'instrument'),
('greenhouse, nursery, glasshouse', 'building, edifice'),
('home theater, home theatre', 'building, edifice'),
('hartebeest', 'ungulate, hoofed mammal'),
('hippopotamus, hippo, river horse, Hippopotamus amphibius',
'ungulate, hoofed mammal'), ('warplane, military plane', 'craft'),
('albatross, mollymawk', 'aquatic bird'),
('umbrella', 'protective covering, protective cover, protect'),
('shoe shop, shoe-shop, shoe store', 'establishment'),
('suit, suit of clothes', 'garment'),
('pickelhaube', 'protective covering, protective cover, protect'),
('soccer ball', 'game equipment'), ('yawl', 'craft'),
('screwdriver', 'tool'),
('Madagascar cat, ring-tailed lemur, Lemur catta', 'primate'),
('garter snake, grass snake', 'snake, serpent, ophidian'),
('bustard', 'aquatic bird'), ('tabby, tabby cat', 'feline, felid'),
('airliner', 'craft'),
('tobacco shop, tobacconist shop, tobacconist', 'establishment'),
('Italian greyhound', 'hound, hound dog'), ('projector', 'instrument'),
('bittern', 'aquatic bird'), ('rifle', 'instrument'),
('pay-phone, pay-station', 'electronic equipment'),
('house finch, linnet, Carpodacus mexicanus',
'passerine, passeriform bird'), ('monastery', 'building, edifice'),
('lens cap, lens cover', 'protective covering, protective cover, protect'),
('maillot, tank suit', 'garment'), ('canoe', 'craft'),
('letter opener, paper knife, paperknife', 'tool'),
('nail', 'restraint, constraint'), ('guenon, guenon monkey', 'primate'),
('CD player', 'electronic equipment'),
('safety pin', 'restraint, constraint'),
('harp', 'musical instrument, instrument'),
('disk brake, disc brake', 'restraint, constraint'),
('otterhound, otter hound', 'hound, hound dog'),
('green mamba', 'snake, serpent, ophidian'),
('violin, fiddle', 'musical instrument, instrument'),
('American coot, marsh hen, mud hen, water hen, Fulica americana',
'aquatic bird'), ('ram, tup', 'ungulate, hoofed mammal'),
('jay', 'passerine, passeriform bird'), ('trench coat', 'garment'),
('Indian cobra, Naja naja', 'snake, serpent, ophidian'),
('projectile, missile', 'instrument'), ('schooner', 'craft'),
('magpie', 'passerine, passeriform bird'), ('Norwich terrier', 'terrier'),
('cairn, cairn terrier', 'terrier'),
('crossword puzzle, crossword', 'game equipment'),
('snow leopard, ounce, Panthera uncia', 'feline, felid'),
('gong, tam-tam', 'musical instrument, instrument'),
('library', 'building, edifice'),
('swimming trunks, bathing trunks', 'garment'),
('Staffordshire bullterrier, Staffordshire bull terrier', 'terrier'),
('Lakeland terrier', 'terrier'),
('black stork, Ciconia nigra', 'aquatic bird'),
('king penguin, Aptenodytes patagonica', 'aquatic bird'),
('water ouzel, dipper', 'passerine, passeriform bird'),
('macaque', 'primate'), ('lynx, catamount', 'feline, felid'),
('ping-pong ball', 'game equipment'), ('standard schnauzer', 'terrier'),
('Australian terrier', 'terrier'), ('stupa, tope', 'building, edifice'),
('white stork, Ciconia ciconia', 'aquatic bird'),
('king snake, kingsnake', 'snake, serpent, ophidian'),
('Airedale, Airedale terrier', 'terrier'),
('banjo', 'musical instrument, instrument'), ('Windsor tie', 'garment'),
('abaya', 'garment'), ('stole', 'garment'),
('vine snake', 'snake, serpent, ophidian'),
('Bedlington terrier', 'terrier'), ('langur', 'primate'),
('catamaran', 'craft'), ('sarong', 'garment'),
('spoonbill', 'aquatic bird'),
('boa constrictor, Constrictor constrictor', 'snake, serpent, ophidian'),
('ruddy turnstone, Arenaria interpres', 'aquatic bird'),
('hognose snake, puff adder, sand viper', 'snake, serpent, ophidian'),
('American chameleon, anole, Anolis carolinensis', 'saurian'),
('rugby ball', 'game equipment'),
('black swan, Cygnus atratus', 'aquatic bird'),
('frilled lizard, Chlamydosaurus kingi', 'saurian'),
('oscilloscope, scope, cathode-ray oscilloscope, CRO',
'electronic equipment'),
('ski mask', 'protective covering, protective cover, protect'),
('marmoset', 'primate'),
('Komodo dragon, Komodo lizard, dragon lizard, giant lizard, '
'Varanus komodoensis', 'saurian'),
('accordion, piano accordion, squeeze box',
'musical instrument, instrument'),
('horned viper, cerastes, sand viper, horned asp, Cerastes cornutus',
'snake, serpent, ophidian'),
('bookshop, bookstore, bookstall', 'establishment'),
('Boston bull, Boston terrier', 'terrier'), ('crane', 'aquatic bird'),
('junco, snowbird', 'passerine, passeriform bird'),
('silky terrier, Sydney silky', 'terrier'),
('Egyptian cat', 'feline, felid'), ('Irish terrier', 'terrier'),
('leopard, Panthera pardus', 'feline, felid'),
('sea snake', 'snake, serpent, ophidian'),
('hog, pig, grunter, squealer, Sus scrofa', 'ungulate, hoofed mammal'),
('colobus, colobus monkey', 'primate'),
('chickadee', 'passerine, passeriform bird'),
('Scotch terrier, Scottish terrier, Scottie', 'terrier'),
('digital watch', 'instrument'), ('analog clock', 'instrument'),
('zebra', 'ungulate, hoofed mammal'),
('American Staffordshire terrier, Staffordshire terrier, '
'American pit bull terrier, pit bull terrier', 'terrier'),
('European gallinule, Porphyrio porphyrio', 'aquatic bird'),
('lampshade, lamp shade',
'protective covering, protective cover, protect'),
('holster', 'protective covering, protective cover, protect'),
('jaguar, panther, Panthera onca, Felis onca', 'feline, felid'),
('cleaver, meat cleaver, chopper', 'tool'),
('brambling, Fringilla montifringilla', 'passerine, passeriform bird'),
('orangutan, orang, orangutang, Pongo pygmaeus', 'primate'),
('combination lock', 'restraint, constraint'),
('tile roof', 'protective covering, protective cover, protect'),
('borzoi, Russian wolfhound', 'hound, hound dog'),
('water snake', 'snake, serpent, ophidian'),
('knot', 'restraint, constraint'),
('window shade', 'protective covering, protective cover, protect'),
('mosque', 'building, edifice'),
('Walker hound, Walker foxhound', 'hound, hound dog'),
('cardigan', 'garment'), ('warthog', 'ungulate, hoofed mammal'),
('whiptail, whiptail lizard', 'saurian'), ('plow, plough', 'tool'),
('bluetick', 'hound, hound dog'), ('poncho', 'garment'),
('shovel', 'tool'),
('sidewinder, horned rattlesnake, Crotalus cerastes',
'snake, serpent, ophidian'), ('croquet ball', 'game equipment'),
('sorrel', 'ungulate, hoofed mammal'), ('airship, dirigible', 'craft'),
('goose', 'aquatic bird'), ('church, church building',
'building, edifice'),
('titi, titi monkey', 'primate'),
('butcher shop, meat market', 'establishment'),
('diamondback, diamondback rattlesnake, Crotalus adamanteus',
'snake, serpent, ophidian'),
('common iguana, iguana, Iguana iguana', 'saurian'),
('Saluki, gazelle hound', 'hound, hound dog'),
('monitor', 'electronic equipment'),
('sunglasses, dark glasses, shades', 'instrument'),
('flamingo', 'aquatic bird'),
('seat belt, seatbelt', 'restraint, constraint'),
('Persian cat', 'feline, felid'), ('gorilla, Gorilla gorilla', 'primate'),
('banded gecko', 'saurian'),
('thatch, thatched roof',
'protective covering, protective cover, protect'),
('beagle', 'hound, hound dog'), ('limpkin, Aramus pictus', 'aquatic bird'),
('jigsaw puzzle', 'game equipment'), ('rule, ruler', 'instrument'),
('hammer', 'tool'), ('cello, violoncello',
'musical instrument, instrument'),
('lab coat, laboratory coat', 'garment'),
('indri, indris, Indri indri, Indri brevicaudatus', 'primate'),
('vault', 'protective covering, protective cover, protect'),
('cellular telephone, cellular phone, cellphone, cell, mobile phone',
'electronic equipment'), ('whippet', 'hound, hound dog'),
('siamang, Hylobates syndactylus, Symphalangus syndactylus', 'primate'),
("loupe, jeweler's loupe", 'instrument'), ('modem',
'electronic equipment'),
('lifeboat', 'craft'),
('dial telephone, dial phone', 'electronic equipment'),
('cougar, puma, catamount, mountain lion, painter, panther, '
'Felis concolor', 'feline, felid'),
('thimble', 'protective covering, protective cover, protect'),
('ibex, Capra ibex', 'ungulate, hoofed mammal'),
('lawn mower, mower', 'tool'),
('bell cote, bell cot', 'protective covering, protective cover, protect'),
('chain mail, ring mail, mail, chain armor, chain armour, ring armor, '
'ring armour', 'protective covering, protective cover, protect'),
('hair slide', 'restraint, constraint'),
('apiary, bee house', 'building, edifice'),
('harmonica, mouth organ, harp, mouth harp',
'musical instrument, instrument'),
('green snake, grass snake', 'snake, serpent, ophidian'),
('howler monkey, howler', 'primate'), ('digital clock', 'instrument'),
('restaurant, eating house, eating place, eatery', 'building, edifice'),
('miniature schnauzer', 'terrier'),
('panpipe, pandean pipe, syrinx', 'musical instrument, instrument'),
('pirate, pirate ship', 'craft'),
('window screen', 'protective covering, protective cover, protect'),
('binoculars, field glasses, opera glasses', 'instrument'),
('Afghan hound, Afghan', 'hound, hound dog'),
('cinema, movie theater, movie theatre, movie house, picture palace',
'building, edifice'), ('liner, ocean liner', 'craft'),
('ringneck snake, ring-necked snake, ring snake',
'snake, serpent, ophidian'), ('redshank, Tringa totanus', 'aquatic bird'),
('Siamese cat, Siamese', 'feline, felid'),
('thunder snake, worm snake, Carphophis amoenus',
'snake, serpent, ophidian'), ('boathouse', 'building, edifice'),
('jersey, T-shirt, tee shirt', 'garment'),
('soft-coated wheaten terrier', 'terrier'),
('scabbard', 'protective covering, protective cover, protect'),
('muzzle', 'restraint, constraint'),
('Ibizan hound, Ibizan Podenco', 'hound, hound dog'),
('tennis ball', 'game equipment'), ('padlock', 'restraint, constraint'),
('kimono', 'garment'), ('redbone', 'hound, hound dog'),
('wild boar, boar, Sus scrofa', 'ungulate, hoofed mammal'),
('dowitcher', 'aquatic bird'),
('oboe, hautboy, hautbois', 'musical instrument, instrument'),
('electric guitar', 'musical instrument, instrument'), ('trimaran',
'craft'),
('barometer', 'instrument'), ('llama', 'ungulate, hoofed mammal'),
('robin, American robin, Turdus migratorius',
'passerine, passeriform bird'),
('maraca', 'musical instrument, instrument'),
('feather boa, boa', 'garment'),
('Dandie Dinmont, Dandie Dinmont terrier', 'terrier'),
('Lhasa, Lhasa apso', 'terrier'), ('bow', 'instrument'),
('punching bag, punch bag, punching ball, punchball', 'game equipment'),
('volleyball', 'game equipment'), ('Norfolk terrier', 'terrier'),
('Gila monster, Heloderma suspectum', 'saurian'),
('fire screen, fireguard',
'protective covering, protective cover, protect'),
('hourglass', 'instrument'),
('chimpanzee, chimp, Pan troglodytes', 'primate'),
('birdhouse', 'protective covering, protective cover, protect'),
('Sealyham terrier, Sealyham', 'terrier'),
('Tibetan terrier, chrysanthemum dog', 'terrier'),
('palace', 'building, edifice'), ('wreck', 'craft'),
('overskirt', 'garment'), ('pelican', 'aquatic bird'),
('French horn, horn', 'musical instrument, instrument'),
('tiger cat', 'feline, felid'), ('barbershop', 'establishment'),
('revolver, six-gun, six-shooter', 'instrument'),
('Irish wolfhound', 'hound, hound dog'),
('lion, king of beasts, Panthera leo', 'feline, felid'),
('fur coat', 'garment'), ('ox', 'ungulate, hoofed mammal'),
('cuirass', 'protective covering, protective cover, protect'),
('grocery store, grocery, food market, market', 'establishment'),
('hoopskirt, crinoline', 'garment'),
('spider monkey, Ateles geoffroyi', 'primate'),
('tiger, Panthera tigris', 'feline, felid'),
('bloodhound, sleuthhound', 'hound, hound dog'),
('red-backed sandpiper, dunlin, Erolia alpina', 'aquatic bird'),
('drum, membranophone, tympan', 'musical instrument, instrument'),
('radio telescope, radio reflector', 'instrument'),
('West Highland white terrier', 'terrier'),
('bow tie, bow-tie, bowtie', 'garment'), ('golf ball', 'game equipment'),
('barn', 'building, edifice'),
('binder, ring-binder', 'protective covering, protective cover, protect'),
('English foxhound', 'hound, hound dog'),
('bison', 'ungulate, hoofed mammal'), ('screw', 'restraint, constraint'),
('assault rifle, assault gun', 'instrument'),
('diaper, nappy, napkin', 'garment'),
('bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, '
'Rocky Mountain sheep, Ovis canadensis', 'ungulate, hoofed mammal'),
('Weimaraner', 'hound, hound dog'),
('computer keyboard, keypad', 'electronic equipment'),
('black-and-tan coonhound', 'hound, hound dog'),
('little blue heron, Egretta caerulea', 'aquatic bird'),
('breastplate, aegis, egis',
'protective covering, protective cover, protect'),
('gasmask, respirator, gas helmet',
'protective covering, protective cover, protect'),
('aircraft carrier, carrier, flattop, attack aircraft carrier', 'craft'),
('iPod', 'electronic equipment'),
('organ, pipe organ', 'musical instrument, instrument'),
('wall clock', 'instrument'),
('rock python, rock snake, Python sebae', 'snake, serpent, ophidian'),
('squirrel monkey, Saimiri sciureus', 'primate'),
('bikini, two-piece', 'garment'),
('water buffalo, water ox, Asiatic buffalo, Bubalus bubalis',
'ungulate, hoofed mammal'),
('upright, upright piano', 'musical instrument, instrument'),
('chime, bell, gong', 'musical instrument, instrument'),
('confectionery, confectionary, candy store', 'establishment'),
('indigo bunting, indigo finch, indigo bird, Passerina cyanea',
'passerine, passeriform bird'),
('green lizard, Lacerta viridis', 'saurian'),
('Norwegian elkhound, elkhound', 'hound, hound dog'),
('dome', 'protective covering, protective cover, protect'),
('buckle', 'restraint, constraint'), ('giant schnauzer', 'terrier'),
('jean, blue jean, denim', 'garment'),
('wire-haired fox terrier', 'terrier'),
('African chameleon, Chamaeleo chamaeleon', 'saurian'),
('trombone', 'musical instrument, instrument'),
('oystercatcher, oyster catcher', 'aquatic bird'), ('sweatshirt',
'garment'),
('American egret, great white heron, Egretta albus', 'aquatic bird'),
('marimba, xylophone', 'musical instrument, instrument'),
('gazelle', 'ungulate, hoofed mammal'),
('red-breasted merganser, Mergus serrator', 'aquatic bird'),
('tape player', 'electronic equipment'), ('speedboat', 'craft'),
('gondola', 'craft'),
('night snake, Hypsiglena torquata', 'snake, serpent, ophidian'),
('cannon', 'instrument'), ("plunger, plumber's helper", 'tool'),
('balloon', 'craft'), ('toyshop', 'establishment'), ('agama', 'saurian'),
('fireboat', 'craft'), ('bakery, bakeshop, bakehouse', 'establishment')
]
VAL_CLASSES = [
('cab, hack, taxi, taxicab', 'motor vehicle, automotive vehicle'),
('jeep, landrover', 'motor vehicle, automotive vehicle'),
('English setter', 'sporting dog, gun dog'),
('flat-coated retriever', 'sporting dog, gun dog'),
('bassinet', 'furnishing'),
('sports car, sport car', 'motor vehicle, automotive vehicle'),
('golfcart, golf cart', 'motor vehicle, automotive vehicle'),
('clumber, clumber spaniel', 'sporting dog, gun dog'),
('puck, hockey puck', 'mechanism'), ('reel', 'mechanism'),
('Welsh springer spaniel', 'sporting dog, gun dog'),
('car wheel', 'mechanism'), ('wardrobe, closet, press', 'furnishing'),
('go-kart', 'motor vehicle, automotive vehicle'),
('switch, electric switch, electrical switch', 'mechanism'),
('crib, cot', 'furnishing'), ('laptop, laptop computer', 'machine'),
('thresher, thrasher, threshing machine', 'machine'),
('web site, website, internet site, site', 'machine'),
('English springer, English springer spaniel', 'sporting dog, gun dog'),
('iron, smoothing iron', 'durables, durable goods, consumer durables'),
('Gordon setter', 'sporting dog, gun dog'),
('Labrador retriever', 'sporting dog, gun dog'),
('Irish water spaniel', 'sporting dog, gun dog'),
('amphibian, amphibious vehicle', 'motor vehicle, automotive vehicle'),
('file, file cabinet, filing cabinet', 'furnishing'),
('harvester, reaper', 'machine'),
('convertible', 'motor vehicle, automotive vehicle'),
('paddlewheel, paddle wheel', 'mechanism'),
('microwave, microwave oven',
'durables, durable goods, consumer durables'), ('swing', 'mechanism'),
('chiffonier, commode', 'furnishing'), ('desktop computer', 'machine'),
('gas pump, gasoline pump, petrol pump, island dispenser', 'mechanism'),
('beach wagon, station wagon, wagon, estate car, beach waggon, station '
'waggon, waggon', 'motor vehicle, automotive vehicle'),
('carousel, carrousel, merry-go-round, roundabout, whirligig',
'mechanism'), ("potter's wheel", 'mechanism'),
('folding chair', 'furnishing'),
('fire engine, fire truck', 'motor vehicle, automotive vehicle'),
('slide rule, slipstick', 'machine'),
('vizsla, Hungarian pointer', 'sporting dog, gun dog'),
('waffle iron', 'durables, durable goods, consumer durables'),
('trailer truck, tractor trailer, trucking rig, rig, articulated lorry, '
'semi', 'motor vehicle, automotive vehicle'),
('toilet seat', 'furnishing'),
('medicine chest, medicine cabinet', 'furnishing'),
('Brittany spaniel', 'sporting dog, gun dog'),
('Chesapeake Bay retriever', 'sporting dog, gun dog'),
('cash machine, cash dispenser, automated teller machine, automatic '
'teller machine, automated teller, automatic teller, ATM', 'machine'),
('moped', 'motor vehicle, automotive vehicle'),
('Model T', 'motor vehicle, automotive vehicle'),
('bookcase', 'furnishing'),
('ambulance', 'motor vehicle, automotive vehicle'),
('German short-haired pointer', 'sporting dog, gun dog'),
('dining table, board', 'furnishing'),
('minivan', 'motor vehicle, automotive vehicle'),
('police van, police wagon, paddy wagon, patrol wagon, wagon, '
'black Maria', 'motor vehicle, automotive vehicle'),
('entertainment center', 'furnishing'), ('throne', 'furnishing'),
('desk', 'furnishing'), ('notebook, notebook computer', 'machine'),
('snowplow, snowplough', 'motor vehicle, automotive vehicle'),
('cradle', 'furnishing'), ('abacus', 'machine'),
('hand-held computer, hand-held microcomputer', 'machine'),
('Dutch oven', 'durables, durable goods, consumer durables'),
('toaster', 'durables, durable goods, consumer durables'),
('barber chair', 'furnishing'), ('vending machine', 'machine'),
('four-poster', 'furnishing'),
('rotisserie', 'durables, durable goods, consumer durables'),
('hook, claw', 'mechanism'),
('vacuum, vacuum cleaner', 'durables, durable goods, consumer durables'),
('pickup, pickup truck', 'motor vehicle, automotive vehicle'),
('table lamp', 'furnishing'), ('rocking chair, rocker', 'furnishing'),
('prayer rug, prayer mat', 'furnishing'),
('moving van', 'motor vehicle, automotive vehicle'),
('studio couch, day bed', 'furnishing'),
('racer, race car, racing car', 'motor vehicle, automotive vehicle'),
('park bench', 'furnishing'),
('Irish setter, red setter', 'sporting dog, gun dog'),
('refrigerator, icebox', 'durables, durable goods, consumer durables'),
('china cabinet, china closet', 'furnishing'),
('cocker spaniel, English cocker spaniel, cocker',
'sporting dog, gun dog'), ('radiator', 'mechanism'),
('Sussex spaniel', 'sporting dog, gun dog'),
('hand blower, blow dryer, blow drier, hair dryer, hair drier',
'durables, durable goods, consumer durables'),
('slot, one-armed bandit', 'machine'),
('golden retriever', 'sporting dog, gun dog'),
('curly-coated retriever', 'sporting dog, gun dog'),
('limousine, limo', 'motor vehicle, automotive vehicle'),
('washer, automatic washer, washing machine',
'durables, durable goods, consumer durables'),
('garbage truck, dustcart', 'motor vehicle, automotive vehicle'),
('dishwasher, dish washer, dishwashing machine',
'durables, durable goods, consumer durables'), ('pinwheel', 'mechanism'),
('espresso maker', 'durables, durable goods, consumer durables'),
('tow truck, tow car, wrecker', 'motor vehicle, automotive vehicle')
]
TEST_CLASSES = [
('Siberian husky', 'working dog'), ('dung beetle', 'insect'),
('jackfruit, jak, jack', 'solid'), ('miniature pinscher', 'working dog'),
('tiger shark, Galeocerdo cuvieri', 'aquatic vertebrate'),
('weevil', 'insect'),
('goldfish, Carassius auratus', 'aquatic vertebrate'),
('schipperke', 'working dog'), ('Tibetan mastiff', 'working dog'),
('orange', 'solid'), ('whiskey jug', 'vessel'),
('hammerhead, hammerhead shark', 'aquatic vertebrate'),
('bull mastiff', 'working dog'), ('eggnog', 'substance'),
('bee', 'insect'), ('tench, Tinca tinca', 'aquatic vertebrate'),
('chocolate sauce, chocolate syrup', 'substance'),
("dragonfly, darning needle, devil's darning needle, sewing needle, "
'snake feeder, snake doctor, mosquito hawk, skeeter hawk', 'insect'),
('zucchini, courgette', 'solid'), ('kelpie', 'working dog'),
('stone wall', 'obstruction, obstructor, obstructer, impedimen'),
('butternut squash', 'solid'), ('mushroom', 'solid'),
('Old English sheepdog, bobtail', 'working dog'),
('dam, dike, dyke', 'obstruction, obstructor, obstructer, impedimen'),
('picket fence, paling', 'obstruction, obstructor, obstructer, impedimen'),
('espresso', 'substance'), ('beer bottle', 'vessel'),
('plate', 'substance'), ('dough', 'substance'),
('sandbar, sand bar', 'geological formation, formation'),
('boxer', 'working dog'), ('bathtub, bathing tub, bath, tub', 'vessel'),
('beaker', 'vessel'), ('bucket, pail', 'vessel'),
('Border collie', 'working dog'), ('sturgeon', 'aquatic vertebrate'),
('worm fence, snake fence, snake-rail fence, Virginia fence',
'obstruction, obstructor, obstructer, impedimen'),
('seashore, coast, seacoast, sea-coast',
'geological formation, formation'),
('long-horned beetle, longicorn, longicorn beetle', 'insect'),
('turnstile', 'obstruction, obstructor, obstructer, impedimen'),
('groenendael', 'working dog'), ('vase', 'vessel'), ('teapot', 'vessel'),
('water tower', 'vessel'), ('strawberry', 'solid'), ('burrito',
'substance'),
('cauliflower', 'solid'), ('volcano', 'geological formation, formation'),
('valley, vale', 'geological formation, formation'),
('head cabbage', 'solid'), ('tub, vat', 'vessel'),
('lacewing, lacewing fly', 'insect'),
('coral reef', 'geological formation, formation'),
('hot pot, hotpot', 'substance'), ('custard apple', 'solid'),
('monarch, monarch butterfly, milkweed butterfly, Danaus plexippus',
'insect'), ('cricket', 'insect'), ('pill bottle', 'vessel'),
('walking stick, walkingstick, stick insect', 'insect'),
('promontory, headland, head, foreland',
'geological formation, formation'), ('malinois', 'working dog'),
('pizza, pizza pie', 'substance'),
('malamute, malemute, Alaskan malamute', 'working dog'),
('kuvasz', 'working dog'), ('trifle', 'substance'), ('fig', 'solid'),
('komondor', 'working dog'), ('ant, emmet, pismire', 'insect'),
('electric ray, crampfish, numbfish, torpedo', 'aquatic vertebrate'),
('Granny Smith', 'solid'), ('cockroach, roach', 'insect'),
('stingray', 'aquatic vertebrate'), ('red wine', 'substance'),
('Saint Bernard, St Bernard', 'working dog'),
('ice lolly, lolly, lollipop, popsicle', 'substance'),
('bell pepper', 'solid'), ('cup', 'substance'), ('pomegranate', 'solid'),
('Appenzeller', 'working dog'), ('hay', 'substance'),
('EntleBucher', 'working dog'),
('sulphur butterfly, sulfur butterfly', 'insect'),
('mantis, mantid', 'insect'), ('Bernese mountain dog', 'working dog'),
('banana', 'solid'), ('water jug', 'vessel'), ('cicada, cicala', 'insect'),
('barracouta, snoek', 'aquatic vertebrate'),
('washbasin, handbasin, washbowl, lavabo, wash-hand basin', 'vessel'),
('wine bottle', 'vessel'), ('Rottweiler', 'working dog'),
('briard', 'working dog'),
('puffer, pufferfish, blowfish, globefish', 'aquatic vertebrate'),
('ground beetle, carabid beetle', 'insect'),
('Bouvier des Flandres, Bouviers des Flandres', 'working dog'),
('chainlink fence', 'obstruction, obstructor, obstructer, impedimen'),
('damselfly', 'insect'), ('grasshopper, hopper', 'insect'),
('carbonara', 'substance'),
('German shepherd, German shepherd dog, German police dog, alsatian',
'working dog'), ('guacamole', 'substance'),
('leaf beetle, chrysomelid', 'insect'), ('caldron, cauldron', 'vessel'),
('fly', 'insect'),
('bannister, banister, balustrade, balusters, handrail',
'obstruction, obstructor, obstructer, impedimen'),
('spaghetti squash', 'solid'), ('coffee mug', 'vessel'),
('gar, garfish, garpike, billfish, Lepisosteus osseus',
'aquatic vertebrate'), ('barrel, cask', 'vessel'),
('eel', 'aquatic vertebrate'), ('rain barrel', 'vessel'),
('coho, cohoe, coho salmon, blue jack, silver salmon, '
'Oncorhynchus kisutch', 'aquatic vertebrate'), ('water bottle', 'vessel'),
('menu', 'substance'), ('tiger beetle', 'insect'),
('Great Dane', 'working dog'),
('rock beauty, Holocanthus tricolor', 'aquatic vertebrate'),
('anemone fish', 'aquatic vertebrate'), ('mortar', 'vessel'),
('Eskimo dog, husky', 'working dog'),
('affenpinscher, monkey pinscher, monkey dog', 'working dog'),
('breakwater, groin, groyne, mole, bulwark, seawall, jetty',
'obstruction, obstructor, obstructer, impedimen'),
('artichoke, globe artichoke', 'solid'), ('broccoli', 'solid'),
('French bulldog', 'working dog'), ('coffeepot', 'vessel'),
('cliff, drop, drop-off', 'geological formation, formation'),
('ladle', 'vessel'),
('sliding door', 'obstruction, obstructor, obstructer, impedimen'),
('leafhopper', 'insect'), ('collie', 'working dog'),
('Doberman, Doberman pinscher', 'working dog'), ('pitcher, ewer',
'vessel'),
('admiral', 'insect'), ('cabbage butterfly', 'insect'),
('geyser', 'geological formation, formation'), ('cheeseburger',
'substance'),
('grille, radiator grille',
'obstruction, obstructor, obstructer, impedimen'),
('ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle', 'insect'),
('great white shark, white shark, man-eater, man-eating shark, '
'Carcharodon carcharias', 'aquatic vertebrate'),
('pineapple, ananas', 'solid'), ('cardoon', 'solid'),
('pop bottle, soda bottle', 'vessel'), ('lionfish', 'aquatic vertebrate'),
('cucumber, cuke', 'solid'), ('face powder', 'substance'),
('Shetland sheepdog, Shetland sheep dog, Shetland', 'working dog'),
('ringlet, ringlet butterfly', 'insect'),
('Greater Swiss Mountain dog', 'working dog'),
('alp', 'geological formation, formation'), ('consomme', 'substance'),
('potpie', 'substance'), ('acorn squash', 'solid'),
('ice cream, icecream', 'substance'),
('lakeside, lakeshore', 'geological formation, formation'),
('hotdog, hot dog, red hot', 'substance'), ('rhinoceros beetle', 'insect'),
('lycaenid, lycaenid butterfly', 'insect'), ('lemon', 'solid')
]
@DATASETS.register_module()
class TieredImageNetDataset(BaseFewShotDataset):
    """TieredImageNet dataset for few shot classification.

    Args:
        subset (str | list[str]): The classes of the whole dataset are
            split into three disjoint subsets: train, val and test. If
            ``subset`` is a string, only that subset will be loaded. If
            ``subset`` is a list of strings, the data of all subsets in
            the list will be loaded.
            Options: ['train', 'val', 'test']. Default: 'train'.
    """
resource = 'https://github.com/renmengye/few-shot-ssl-public'
TRAIN_CLASSES = TRAIN_CLASSES
VAL_CLASSES = VAL_CLASSES
TEST_CLASSES = TEST_CLASSES
def __init__(self,
subset: Literal['train', 'test', 'val'] = 'train',
*args,
**kwargs):
if isinstance(subset, str):
subset = [subset]
for subset_ in subset:
assert subset_ in ['train', 'test', 'val']
self.subset = subset
self.GENERAL_CLASSES = self.get_general_classes()
super().__init__(*args, **kwargs)
def get_classes(
self,
classes: Optional[Union[Sequence[str],
str]] = None) -> Sequence[str]:
"""Get class names of current dataset.
Args:
classes (Sequence[str] | str | None): Three types of input
will correspond to different processing logics:
- If `classes` is a tuple or list, it will override the
CLASSES predefined in the dataset.
                - If `classes` is None, the pre-defined CLASSES of the
                  dataset will be used.
- If `classes` is a string, it is the path of a classes file
that contains the name of all classes. Each line of the file
contains a single class name.
Returns:
tuple[str] or list[str]: Names of categories of the dataset.
"""
if classes is None:
class_names = []
for subset_ in self.subset:
if subset_ == 'train':
class_names += [i[0] for i in self.TRAIN_CLASSES]
elif subset_ == 'val':
class_names += [i[0] for i in self.VAL_CLASSES]
elif subset_ == 'test':
class_names += [i[0] for i in self.TEST_CLASSES]
                else:
                    raise ValueError(f'invalid subset {subset_}; only '
                                     f'train, val or test are supported.')
elif isinstance(classes, str):
# take it as a file path
class_names = mmcv.list_from_file(classes)
elif isinstance(classes, (tuple, list)):
class_names = classes
else:
raise ValueError(f'Unsupported type {type(classes)} of classes.')
return class_names
def get_general_classes(self) -> List[str]:
        """Get the general class of each class."""
general_classes = []
for subset_ in self.subset:
if subset_ == 'train':
general_classes += [i[1] for i in self.TRAIN_CLASSES]
elif subset_ == 'val':
general_classes += [i[1] for i in self.VAL_CLASSES]
elif subset_ == 'test':
general_classes += [i[1] for i in self.TEST_CLASSES]
            else:
                raise ValueError(f'invalid subset {subset_}; only '
                                 f'train, val or test are supported.')
return general_classes
def load_annotations(self) -> List[Dict]:
        """Load annotations according to the subset of classes."""
data_infos = []
for subset_ in self.subset:
labels_file = osp.join(self.data_prefix, f'{subset_}_labels.pkl')
img_bytes_file = osp.join(self.data_prefix,
f'{subset_}_images_png.pkl')
assert osp.exists(img_bytes_file) and osp.exists(labels_file), \
f'Please download ann_file through {self.resource}.'
with open(labels_file, 'rb') as labels, \
open(img_bytes_file, 'rb') as img_bytes:
labels = pickle.load(labels)
img_bytes = pickle.load(img_bytes)
label_specific = labels['label_specific']
label_general = labels['label_general']
class_specific = labels['label_specific_str']
class_general = labels['label_general_str']
unzip_file_path = osp.join(self.data_prefix, subset_)
is_unzip_file = osp.exists(unzip_file_path)
                if not is_unzip_file:
                    msg = ('Please use the provided script '
                           'tools/classification/data/unzip_tiered_imagenet.py '
                           'to unzip the pickle file. Otherwise, loading the '
                           'whole pickle file may cause heavy memory usage '
                           'when the model is trained with distributed '
                           'parallel.')
                    warnings.warn(msg)
for i in range(len(img_bytes)):
class_specific_name = class_specific[label_specific[i]]
class_general_name = class_general[label_general[i]]
gt_label = self.class_to_idx[class_specific_name]
assert class_general_name == self.GENERAL_CLASSES[gt_label]
filename = osp.join(subset_, f'{subset_}_image_{i}.byte')
info = {
'img_prefix': self.data_prefix,
'img_info': {
'filename': filename
},
'gt_label': np.array(gt_label, dtype=np.int64),
}
                    # if the whole pickle file isn't unzipped, the raw
                    # image bytes will be put into data_info
if not is_unzip_file:
info['img_bytes'] = img_bytes[i]
data_infos.append(info)
return data_infos
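As a standalone illustration of the subset handling implemented by `get_classes()` above (not part of the dataset file; the tiny class tables below are hypothetical stand-ins for the real `TRAIN_CLASSES`/`VAL_CLASSES`/`TEST_CLASSES` tuples):

```python
# Minimal sketch of the subset -> class-name lookup: each subset maps to a
# list of (specific_class, general_class) pairs, and requesting several
# subsets concatenates their specific class names in order.
SUBSET_TABLES = {
    'train': [('macaque', 'primate'), ('banjo', 'musical instrument, instrument')],
    'val': [('bookcase', 'furnishing')],
    'test': [('weevil', 'insect')],
}


def class_names_for(subsets):
    names = []
    for subset in subsets:
        if subset not in SUBSET_TABLES:
            raise ValueError(f'invalid subset {subset}; only '
                             f'train, val or test are supported.')
        names += [specific for specific, _general in SUBSET_TABLES[subset]]
    return names


print(class_names_for(['train', 'val']))  # -> ['macaque', 'banjo', 'bookcase']
```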
| 54.998534 | 79 | 0.622731 | 3,732 | 37,509 | 6.222133 | 0.376474 | 0.012919 | 0.030231 | 0.034107 | 0.125188 | 0.051591 | 0.022652 | 0.022652 | 0.01335 | 0.007407 | 0 | 0.00027 | 0.209416 | 37,509 | 681 | 80 | 55.079295 | 0.782795 | 0.033912 | 0 | 0.065728 | 0 | 0 | 0.638737 | 0.002716 | 0 | 0 | 0 | 0 | 0.004695 | 1 | 0.00626 | false | 0.020344 | 0.014085 | 0 | 0.032864 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
016d0e0a8899431a78249311fa5b45cb1e49cec2 | 642 | py | Python | archive/management/commands/backload_is.py | pastpages/savemy.news | 39ff49ffd2f63308a847243dccc95b82b69cb06c | [
"MIT"
] | 19 | 2017-11-06T17:06:44.000Z | 2020-10-15T16:59:12.000Z | archive/management/commands/backload_is.py | pastpages/savemy.news | 39ff49ffd2f63308a847243dccc95b82b69cb06c | [
"MIT"
] | 25 | 2017-11-06T17:45:02.000Z | 2021-09-22T17:54:35.000Z | archive/management/commands/backload_is.py | palewire/savemy.news | 39ff49ffd2f63308a847243dccc95b82b69cb06c | [
"MIT"
] | 1 | 2019-03-16T17:43:59.000Z | 2019-03-16T17:43:59.000Z | from django.core.management.base import BaseCommand
import time
import logging
from archive import tasks
from archive.models import Clip
logger = logging.getLogger(__name__)
class Command(BaseCommand):
def handle(self, *args, **options):
clip_list = Clip.objects.all().order_by("?")
no_is = [c for c in clip_list if not c.mementos.filter(archive="archive.is").count()]
logger.debug("{} clips lack an archive.is memento".format(len(no_is)))
for c in no_is[:8]:
logger.debug("Backloading archive.is URL for {}".format(c.url))
tasks.is_memento.delay(c.id)
time.sleep(5)
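The selection logic in `handle()` above can be sketched in pure Python, with plain dicts standing in for the Django `Clip`/memento model instances (hypothetical data, no ORM required):

```python
# Pick the clips that lack an archive.is memento, then process at most 8
# per run -- the same throttle as the management command above.
clips = [
    {'url': 'https://a.example', 'archives': ['archive.is']},
    {'url': 'https://b.example', 'archives': []},
    {'url': 'https://c.example', 'archives': ['archive.org']},
]
no_is = [c for c in clips if 'archive.is' not in c['archives']]
for c in no_is[:8]:
    print(f"Backloading archive.is URL for {c['url']}")
```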
| 33.789474 | 93 | 0.67134 | 93 | 642 | 4.516129 | 0.55914 | 0.028571 | 0.028571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003906 | 0.202492 | 642 | 18 | 94 | 35.666667 | 0.816406 | 0 | 0 | 0 | 0 | 0 | 0.123053 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
016f7bcbcf6e5076ed0a78d853f3274ea0689b95 | 3,414 | py | Python | app/ko.py | embian-inc/Appium-Python-Console | bb15ac441d713a0be4a8e6fee61ca54cf6fc7614 | [
"MIT"
] | 8 | 2017-09-25T05:46:53.000Z | 2022-02-04T14:42:57.000Z | app/ko.py | embian-inc/Appium-Python-Console | bb15ac441d713a0be4a8e6fee61ca54cf6fc7614 | [
"MIT"
] | null | null | null | app/ko.py | embian-inc/Appium-Python-Console | bb15ac441d713a0be4a8e6fee61ca54cf6fc7614 | [
"MIT"
] | 2 | 2018-11-28T19:16:20.000Z | 2022-02-04T14:39:15.000Z | #-*- coding: utf-8 -*-
# Code Interact Banner
APC_BANNER = u"""
************************************************************************************************************
*                                                                                                          *
* A console program for users who write test scripts with Appium's Python Client.                          *
* This console supports Android only; iOS-related features are not supported.                              *
* It lets you try out the various methods of the Python Client interactively.                              *
*                                                                                                          *
* For information on the console's built-in command methods, enter "help()".                               *
* To see the Python Client methods that are available, enter "methods()".                                  *
* To quit the console, press ctrl+d or enter "exit()".                                                     *
*                                                                                                          *
************************************************************************************************************\n
Welcome Appium Python Console(APC) !\n
"""
# APC Mode Help
HELP_MSG = u"""
 ** HELP ***
  help()                : Help. Prints the list of APC command methods
  clear()               : Clears the console (like the terminal's clear)
  exit()                : Quit APC
  page()                : Prints the elements on the current page that have a Resource-id,
                          Content-desc, Text or Action (Clickable, Scrollable) value
  action_table()        : Shows, as a table, the elements on the current page that can perform an action
     Usage :
        action_table()    - prints Class, Resource-id, Content-desc, Text, Bounds, Action Type, Context
        action_table('d') - additionally prints the Xpath of each element
  manual_test(mode='h') : Interactive mode for running simple tests without writing a separate test script
     mode='n' - extracts the actions available through UIAutomator [Default]
     mode='h' - extracts the actions available through UIAutomator and Chromedriver
  methods()             : Prints the list of WebDriver methods usable through the Python Client
  methods(num)          : Prints the details of the method with the given number in the methods() list
     Usage :
        methods(42)
  driver                : WebDriver Object.
     Usage :
        driver.<Appium WebDriver Methods>
        driver.contexts
        driver.find_element_by_id('RESOURCE_ID')
"""
command = {
    'HELP'        : { 'cmd': ['help', 'h'],                   'desc': u'Show help' },
    'PAGE'        : { 'cmd': ['page_source', 'p'],            'desc': u'appium page_source' },
    'DETAIL'      : { 'cmd': ['detail', 'd'],                 'desc': u'Show action table details' },
    'BACK'        : { 'cmd': ['back', 'b'],                   'desc': u'Go back' },
    'REFRESH'     : { 'cmd': ['refresh', 'r'],                'desc': u'Reload the action list' },
    'SCROLL_UP'   : { 'cmd': ['sup', 'scrollup', 'up'],       'desc': u'Scroll up' },
    'SCROLL_DOWN' : { 'cmd': ['sdown', 'scrolldown', 'down'], 'desc': u'Scroll down'},
    'SAVE_ALL'    : { 'cmd': ['save_all'],                    'desc': u"Save the current page's XML, HTML and screenshot image to files" },
    'XML'         : { 'cmd': ['xml_save'],                    'desc': u"Save the current page's XML to a file" },
    'HTML'        : { 'cmd': ['html_save'],                   'desc': u"Save the current page's HTML to a file" },
    'SCREENSHOT'  : { 'cmd': ['screenshot_save', 'ss_save'],  'desc': u"Save a screenshot image of the current page to a file" },
    'EXIT'        : { 'cmd': ['exit', 'quit', 'q'],           'desc': u'Quit the test' }
}
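A console built on a table like the one above typically resolves a user's typed token against the `cmd` alias lists. A minimal sketch of that lookup (hypothetical helper, shown with a tiny stand-in table rather than the full `command` dict):

```python
# Resolve a typed token to its command key by scanning the alias lists.
TABLE = {
    'SCROLL_UP': {'cmd': ['sup', 'scrollup', 'up'], 'desc': u'Scroll up'},
    'EXIT': {'cmd': ['exit', 'quit', 'q'], 'desc': u'Quit the test'},
}


def resolve(token, table):
    for key, entry in table.items():
        if token in entry['cmd']:
            return key
    return None  # unknown command


print(resolve('sup', TABLE))  # -> SCROLL_UP
```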
| 51.727273 | 118 | 0.426772 | 363 | 3,414 | 3.958678 | 0.501377 | 0.041754 | 0.019485 | 0.030619 | 0.118998 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001432 | 0.38635 | 3,414 | 65 | 119 | 52.523077 | 0.684487 | 0.016403 | 0 | 0.156863 | 0 | 0.058824 | 0.800596 | 0.083159 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0173cd621b9fa8b7478d65288de8598d1f74a94f | 4363 | py | isic/core/models/collection.py | ImageMarkup/isic @ 607b2b103d0d2a67adb61f8ea88f1461c85ec8f3 | Apache-2.0 | 18 stars
from typing import Optional
from django.contrib.auth.models import User
from django.db import models
from django.db.models.aggregates import Count
from django.db.models.query import QuerySet
from django.db.models.query_utils import Q
from django.urls import reverse
from django_extensions.db.models import TimeStampedModel
from .doi import Doi
from .image import Image
class Collection(TimeStampedModel):
creator = models.ForeignKey(User, on_delete=models.PROTECT)
images = models.ManyToManyField(Image, related_name='collections')
# TODO: probably make it unique per user, or unique for official collections
name = models.CharField(max_length=200, unique=True)
description = models.TextField(blank=True)
public = models.BooleanField(default=False)
official = models.BooleanField(default=False)
doi = models.OneToOneField(Doi, on_delete=models.PROTECT, null=True, blank=True)
def __str__(self):
return self.name
def get_absolute_url(self):
return reverse('core/collection-detail', args=[self.pk])
def _get_datacite_creators(self) -> list[str]:
"""
Return a list of datacite creators for this collection.
        Creators are ordered by the number of images contributed (to this collection); ties are
        broken alphabetically, except for Anonymous contributions, which always come last.
"""
creators = (
self.images.alias(num_images=Count('accession__image'))
.values_list('accession__cohort__attribution', flat=True)
.order_by('-num_images', 'accession__cohort__attribution')
.distinct()
)
# Push an Anonymous attribution to the end
creators = sorted(creators, key=lambda x: 1 if x == 'Anonymous' else 0)
return creators
def as_datacite_doi(self, contributor: User, doi_id: str) -> dict:
return {
'data': {
'type': 'dois',
'attributes': {
'identifiers': [{'identifierType': 'DOI', 'identifier': doi_id}],
'event': 'publish',
'doi': doi_id,
'creators': [{'name': creator} for creator in self._get_datacite_creators()],
'contributor': f'{self.creator.first_name} {self.creator.last_name}',
'titles': [{'title': self.name}],
'publisher': 'ISIC Archive',
'publicationYear': self.images.order_by('created').latest().created.year,
# resourceType?
'types': {'resourceTypeGeneral': 'Dataset'},
# TODO: api.?
'url': f'https://api.isic-archive.com/collections/{self.pk}/',
'schemaVersion': 'http://datacite.org/schema/kernel-4',
'description': self.description,
'descriptionType': 'Other',
},
}
}
class CollectionPermissions:
model = Collection
perms = ['view_collection', 'create_doi']
filters = {'view_collection': 'view_collection_list', 'create_doi': 'create_doi_list'}
@staticmethod
def view_collection_list(
user_obj: User, qs: Optional[QuerySet[Collection]] = None
) -> QuerySet[Collection]:
qs = qs if qs is not None else Collection._default_manager.all()
if user_obj.is_active and user_obj.is_staff:
return qs
elif user_obj.is_authenticated:
return qs.filter(Q(public=True) | Q(creator=user_obj))
else:
return qs.filter(public=True)
@staticmethod
def view_collection(user_obj, obj):
# TODO: use .contains in django 4
return CollectionPermissions.view_collection_list(user_obj).filter(pk=obj.pk).exists()
@staticmethod
def create_doi_list(
user_obj: User, qs: Optional[QuerySet[Collection]] = None
) -> QuerySet[Collection]:
qs = qs if qs is not None else Collection._default_manager.all()
if user_obj.is_active and user_obj.is_staff:
return qs
else:
return qs.none()
@staticmethod
def create_doi(user_obj: User, obj: Collection) -> bool:
return CollectionPermissions.create_doi_list(user_obj).filter(pk=obj.pk).exists()
Collection.perms_class = CollectionPermissions
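The Anonymous-last ordering in `_get_datacite_creators` relies on Python's stable sort: keying on a boolean (or the equivalent `1 if ... else 0`) moves only the `'Anonymous'` entry to the end while every other name keeps the count/alphabetical order the queryset produced. A standalone sketch (the attribution names here are made up for illustration):

```python
# Stand-alone sketch of the "push Anonymous to the end" trick used by
# Collection._get_datacite_creators. sorted() is stable and False < True,
# so only the 'Anonymous' entry moves; all other names keep their order.
creators = ["Clinic B", "Anonymous", "Clinic A"]  # hypothetical attributions
ordered = sorted(creators, key=lambda name: name == "Anonymous")
print(ordered)  # ['Clinic B', 'Clinic A', 'Anonymous']
```

The model code's `key=lambda x: 1 if x == 'Anonymous' else 0` is equivalent to the boolean key above.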
017d2cb59aedfb7cb560e06f30dfaf2068dcf95b | 3664 | py | pagapp/admin_panel/forms.py | eugeneandrienko/PyArtistsGallery @ b75114955859d45d9dfb5c901213f25a6e09f488 | MIT
"""Set of forms used in the admin panel.
List of forms:
ChangePasswordForm -- providing form for password change.
AddAlbumForm -- providing form for adding new album.
UploadForm -- providing form for uploading new pictures.
CommonSettingsForm -- providing form for changing common settings.
"""
import re
from flask import current_app
from flask_login import current_user
from flask_wtf import Form
from flask_wtf.form import ValidationError
from wtforms import PasswordField, SubmitField, StringField, FileField, \
SelectField, TextAreaField
from wtforms.validators import DataRequired, EqualTo
class ChangePasswordForm(Form):
"""Form for changing password for current user."""
old_password = PasswordField(
"Current password:",
validators=[DataRequired()],
description="Current password")
new_password = PasswordField(
"New password:",
validators=[DataRequired()],
description="New password")
new_password2 = PasswordField(
"Retype new password:",
validators=[DataRequired(),
EqualTo('new_password',
message="Passwords must match")],
description="New password")
submit_button = SubmitField('Change password')
@staticmethod
def validate_old_password(form, field):
"""Check, is given current password is not wrong."""
del form
if current_user.check_password(field.data) is False:
current_app.logger.error(
"User provided wrong password, when (s)he tried to change it.")
raise ValidationError("Given password is wrong")
class AddAlbumForm(Form):
"""Form for adding new album."""
album_name = StringField(
"Album name",
validators=[DataRequired()],
description="Album name")
album_description = StringField(
"Album description",
validators=[DataRequired()],
description="Short description")
submit_button = SubmitField("Create")
class UploadForm(Form):
"""Form for uploading new pictures to the given album."""
file_name = FileField(
"Select file:",
id='inputFile',
description="Select file")
album = SelectField(
"Select album:",
choices=[],
id='selectAlbum',
description="Select album")
name = StringField(
"Picture name:",
validators=[DataRequired()],
description="Picture name")
description = TextAreaField(
"Picture description:",
description="Picture description")
submit_button = SubmitField("Upload")
@staticmethod
def validate_file_name(form, field):
"""file_name field validator.
        It checks whether the given filename has an allowed extension.
"""
allowed_extensions = current_app.config['ALLOWED_EXTENSIONS']
regexp = '|'.join(map(
lambda x: '(.*\\.{}$)'.format(x), allowed_extensions))
regexp = re.compile(regexp, re.IGNORECASE)
if regexp.search(field.data.filename) is None:
            raise ValidationError('{} does not have an allowed extension!'.format(
field.data.filename))
del form
class CommonSettingsForm(Form):
"""Form for changing common settings."""
gallery_title = StringField(
"Gallery title:",
validators=[DataRequired()],
description="Gallery title name")
gallery_description = TextAreaField(
"Gallery description:",
validators=[DataRequired()],
description="Gallery description")
submit_button = SubmitField(
"Save settings",
description="Save settings")
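The alternation regex built in `UploadForm.validate_file_name` above can be exercised outside Flask. A minimal sketch, assuming a hypothetical `ALLOWED_EXTENSIONS` value of jpg/png/gif:

```python
import re

# Minimal reproduction of the extension-matching regex built in
# UploadForm.validate_file_name; the allowed_extensions value is an assumption.
allowed_extensions = ["jpg", "png", "gif"]
pattern = "|".join("(.*\\.{}$)".format(ext) for ext in allowed_extensions)
regexp = re.compile(pattern, re.IGNORECASE)

print(regexp.search("photo.PNG") is not None)  # True - IGNORECASE matches .PNG
print(regexp.search("notes.txt") is not None)  # False - extension not allowed
```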
017f75656ba2e5a5bfc0221b28e5e28c7b830463 | 4008 | py | tests/form/test_form_validation.py | warownia1/Slivca @ 5491afec63c8cd41d6f1389a5dd0ba9877b888a1 | Apache-2.0 | 5 stars
from nose.tools import assert_is_none, assert_equal, assert_list_equal, \
assert_true, assert_false
from werkzeug.datastructures import MultiDict
from slivka.server.forms.fields import *
from slivka.server.forms.form import BaseForm
class TestCleanedValues:
class DummyForm(BaseForm):
field1 = IntegerField('int', min=0, max=10, default=1)
field1a = IntegerField('int2', required=False, default=2)
field1b = IntegerField('int3', required=False)
field2 = IntegerArrayField('m_int', required=False)
field3 = IntegerArrayField('m_int2', required=False)
field4 = IntegerArrayField('m_int3', required=False)
field5 = DecimalField('float', min=0.0, max=10.0, required=False)
field6 = TextField('text', required=False)
field7 = BooleanField('bool')
field8 = BooleanField('bool2', required=False)
CHOICES = [('a', 'A'), ('b', 'B'), ('c', 'C')]
field9 = ChoiceField('choice', choices=CHOICES)
field10 = ChoiceArrayField('m_choice', choices=CHOICES)
def setup(self):
self.form = TestCleanedValues.DummyForm(MultiDict([
('int', '5'),
('m_int', '10'), ('m_int', '15'), ('m_int', '0'),
('m_int2', '2'),
('float', '5.0'),
('text', 'foo bar baz'),
('bool', 'yes'),
('bool2', 'no'),
('choice', 'a'),
('m_choice', 'a'),
('m_choice', 'c')
]))
assert self.form.is_valid()
def test_cleaned_integer_field(self):
assert_equal(self.form.cleaned_data['int'], 5)
def test_cleaned_default_value(self):
assert_equal(self.form.cleaned_data['int2'], 2)
def test_cleaned_no_value(self):
assert_is_none(self.form.cleaned_data['int3'])
def test_cleaned_multiple(self):
assert_list_equal(self.form.cleaned_data['m_int'], [10, 15, 0])
def test_cleaned_multiple_one_value(self):
assert_list_equal(self.form.cleaned_data['m_int2'], [2])
def test_cleaned_multiple_no_value(self):
assert_is_none(self.form.cleaned_data['m_int3'])
def test_cleaned_decimal_field(self):
assert_equal(self.form.cleaned_data['float'], 5.0)
def test_cleaned_text_field(self):
assert_equal(self.form.cleaned_data['text'], 'foo bar baz')
def test_cleaned_bool_true_value(self):
assert_true(self.form.cleaned_data['bool'])
def test_cleaned_bool_false_value(self):
assert_is_none(self.form.cleaned_data['bool2'])
def test_cleaned_choice_field(self):
assert_equal(self.form.cleaned_data['choice'], 'a')
def test_cleaned_multiple_choice_field(self):
assert_list_equal(self.form.cleaned_data['m_choice'], ['a', 'c'])
class TestConditionalValues:
class DummyForm(BaseForm):
field1 = IntegerField('int1')
field2 = IntegerField('int2', default=0, condition="self > int1")
field3 = IntegerField('int3', required=False, condition="self > int1")
def test_valid_empty_value(self):
form = self.create_form(int1=1)
assert_true(form.is_valid())
def test_valid_explicit_null(self):
form = self.create_form(int1=1, int3=None)
assert_true(form.is_valid())
def test_valid_default(self):
form = self.create_form(int1=-1)
assert_true(form.is_valid())
assert_equal(form.cleaned_data['int2'], 0)
def test_invalid_default(self):
form = self.create_form(int1=1)
assert_true(form.is_valid())
assert_is_none(form.cleaned_data['int2'])
def test_valid_provided(self):
form = self.create_form(int1=1, int3=2)
assert_true(form.is_valid())
assert_equal(form.cleaned_data['int3'], 2)
def test_invalid_provided(self):
form = self.create_form(int1=1, int3=0)
assert_false(form.is_valid())
@staticmethod
def create_form(**kwargs):
return TestConditionalValues.DummyForm(MultiDict(kwargs))
6d671451f6b6a886f3b7dd77e88f2a7d6a9d455c | 4454 | py | fn_cloud_foundry/fn_cloud_foundry/components/fn_cloud_foundry_create_app.py | nickpartner-goahead/resilient-community-apps @ 097c0dbefddbd221b31149d82af9809420498134 | MIT | 65 stars
# -*- coding: utf-8 -*-
# pragma pylint: disable=unused-argument, no-self-use
"""Function implementation"""
import json
import logging
from resilient_circuits import ResilientComponent, function, handler, StatusMessage, FunctionResult, FunctionError
from fn_cloud_foundry.util.cloud_foundry_api import IBMCloudFoundryAPI
from fn_cloud_foundry.util.authentication.ibm_cf_bearer import IBMCloudFoundryAuthenticator
CONFIG_DATA_SECTION = 'fn_cloud_foundry'
class FunctionComponent(ResilientComponent):
"""Component that implements Resilient function 'fn_cloud_foundry_create_app"""
def __init__(self, opts):
"""constructor provides access to the configuration options"""
super(FunctionComponent, self).__init__(opts)
self.opts = opts
self.options = opts.get(CONFIG_DATA_SECTION, {})
if self.options == {}:
raise ValueError("{} section is not set in the config file".format(CONFIG_DATA_SECTION))
self.base_url = self.options.get("cf_api_base")
self.cf_api_username = self.options.get("cf_api_username")
self.cf_api_password = self.options.get("cf_api_password")
if not self.base_url:
raise ValueError("cf_api_base is not set. You must set this value to run {}".format(CONFIG_DATA_SECTION))
else:
self.base_url = str(self.base_url).rstrip('/')
if not self.cf_api_username:
raise ValueError("cf_api_username is not set. You must set this value to run {}".format(CONFIG_DATA_SECTION))
if not self.cf_api_password:
raise ValueError("cf_api_password is not set. You must set this value to run {}".format(CONFIG_DATA_SECTION))
@handler("reload")
def _reload(self, event, opts):
"""Configuration options have changed, save new values"""
self.opts = opts
self.options = opts.get(CONFIG_DATA_SECTION, {})
@function("fn_cloud_foundry_create_app")
def _fn_cloud_foundry_create_app_function(self, event, *args, **kwargs):
"""Function: Creates and deploys a cloud foundry applications from the specified parameters/docker files."""
try:
# Get the function parameters:
application_name = kwargs.get("fn_cloud_foundry_applications", None) # text
space_guid = kwargs.get("fn_cloud_foundry_space_guid", None) # text
additional_parameters = kwargs.get("fn_cloud_foundry_additional_parameters_json", None) # text
if space_guid is None or application_name is None:
raise ValueError("Both fn_cloud_foundry_applications and fn_cloud_foundry_space_guid "
"have to be defined.")
if additional_parameters is None:
additional_parameters = {}
else:
additional_parameters = json.loads(additional_parameters)
log = logging.getLogger(__name__)
log.info("fn_cloud_foundry_applications: %s", application_name)
log.info("fn_cloud_foundry_space_guid: %s", space_guid)
log.info("fn_cloud_foundry_additional_parameters_json: %s", additional_parameters)
log.info("Params: {}".format(additional_parameters))
authenticator = IBMCloudFoundryAuthenticator(self.opts, self.options, self.base_url)
yield StatusMessage("Authenticated into Cloud Foundry")
cf_service = IBMCloudFoundryAPI(self.opts, self.options, self.base_url, authenticator)
values = {
"space_guid": space_guid,
"name": application_name,
"username" : self.cf_api_username,
"password" : self.cf_api_password
}
additional_parameters.update(values) # so values overwrite additional params, not the other way
values = additional_parameters
results = cf_service.create_app(values)
log.info("Result: %s", results)
yield StatusMessage("Done.")
self._add_keys(results)
# Produce a FunctionResult with the results
yield FunctionResult(results)
except Exception as e:
yield FunctionError(str(e))
@staticmethod
def _add_keys(result):
keys = list(result.keys())
for item in result:
if isinstance(result[item], dict):
FunctionComponent._add_keys(result[item])
result["_keys"] = keys
6d6d9bc2e8ef397124fe3ecbd4cdecac78b44386 | 4746 | py | daily_mail_and_nbc.py | V5NEXT/ScrappingWebPages @ e193aacc3d267360173fe8b862e6479500ee64f6 | MIT
# -*- coding: utf-8 -*-
"""daily_mail and NBC.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1mt9Lo5KApXqCdNN_57OCej3pT9_vVBRP
"""
#importing requests
import requests
#importing BeautifulSoup for scrapping templates
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
import re
import time
# Daily Mail
url = "https://www.dailymail.co.uk"
# Request
r1 = requests.get(url)
r1.status_code
# We'll save in coverpage the cover page content
coverpage = r1.content
# Soup creation
soup1 = BeautifulSoup(coverpage, 'html5lib')
# News identification
coverpage_news = soup1.find_all('h2', class_='linkro-darkred')
dailymailylen = len(coverpage_news)
#CNN
#tried to access CNN but it didn't have enough news to follow on
url = "https://edition.cnn.com/"
# Request
r1 = requests.get(url)
r1.status_code
# We'll save in coverpage the cover page content
coverpage = r1.content
from bs4 import BeautifulSoup as soup
import requests
from datetime import date
today = date.today()
d = today.strftime("%m-%d-%y")
cnn_url="https://edition.cnn.com/world".format(d)
html = requests.get(cnn_url)
bsobj = soup(html.content,'lxml')
for link in bsobj.findAll("h3"):
print("Headline : {}".format(link.text))
from bs4 import BeautifulSoup as soup
import requests
nbc_url='https://www.nbcnews.com/politics'
r = requests.get('https://www.nbcnews.com/politics')
b = soup(r.content,'lxml')
politics_news = []
for news in b.findAll('div', {'class': 'wide-tease-item__description'}):
politics_news.append(news.text.strip())
df_politics = pd.DataFrame(
{'Article Content': politics_news
})
nbc_url='https://www.nbcnews.com/health/coronavirus'
r = requests.get('https://www.nbcnews.com/health/coronavirus')
b = soup(r.content,'lxml')
for news in b.findAll('div', {'class': 'wide-tease-item__description'}):
politics_news.append(news.text.strip())
df_politics = pd.DataFrame(
{'Article Content': politics_news
})
nbc_url='https://www.nbcnews.com/world'
r = requests.get('https://www.nbcnews.com/world')
b = soup(r.content,'lxml')
for news in b.findAll('div', {'class': 'wide-tease-item__description'}):
politics_news.append(news.text.strip())
df_politics = pd.DataFrame(
{'Article Content': politics_news
})
nbc_url='https://www.nbcnews.com/business'
r = requests.get('https://www.nbcnews.com/business')
b = soup(r.content,'lxml')
for news in b.findAll('div', {'class': 'wide-tease-item__description'}):
politics_news.append(news.text.strip())
df_politics = pd.DataFrame(
{'Article Content': politics_news
})
nbc_url='https://www.nbcnews.com/podcasts'
r = requests.get('https://www.nbcnews.com/podcasts')
b = soup(r.content,'lxml')
for news in b.findAll('div', {'class': 'text-summary'}):
politics_news.append(news.text.strip())
df_politics = pd.DataFrame(
{'Article Content': politics_news
})
nbc_url='https://www.nbcnews.com/us-news'
r = requests.get('https://www.nbcnews.com/us-news')
b = soup(r.content,'lxml')
for news in b.findAll('div', {'class': 'wide-tease-item__description'}):
politics_news.append(news.text.strip())
df_politics = pd.DataFrame(
{'Article Content': politics_news
})
df_politics
from google.colab import files
df_politics.to_csv('nbcnews.csv')
files.download('nbcnews.csv')
number_of_articles = dailymailylen
# Empty lists for content, links and titles
news_contents = []
list_links = []
list_titles = []
for n in np.arange(0, number_of_articles):
# Getting the link of the article
link = url + coverpage_news[n].find('a')['href']
list_links.append(link)
# Getting the title
title = coverpage_news[n].find('a').get_text()
list_titles.append(title)
# Reading the content (it is divided in paragraphs)
article = requests.get(link)
article_content = article.content
soup_article = BeautifulSoup(article_content, 'html5lib')
body = soup_article.find_all('p', class_='mol-para-with-font')
# Unifying the paragraphs
list_paragraphs = []
for p in np.arange(0, len(body)):
paragraph = body[p].get_text()
list_paragraphs.append(paragraph)
final_article = " ".join(list_paragraphs)
# Removing special characters
final_article = re.sub("\\xa0", "", final_article)
news_contents.append(final_article)
# df_features
df_features = pd.DataFrame(
{'Article Content': news_contents
})
# df_show_info
df_show_info = pd.DataFrame(
{'Article Title': list_titles,
'Article Link': list_links})
df_features
df_show_info
from google.colab import files
df_features.to_csv('filename.csv')
files.download('filename.csv')
6d705fb032634cb7f7ff0dbf9ba37857ef543803 | 5545 | py | util/Instruction.py | F-Stuckmann/PATARA @ 26b5c821e356e33e949817a1475ef38e75880a03 | MIT | 2 stars
import copy
import warnings
from Constants import TARGET_REGISTER, RAND_VALUE, FOCUS_REGISTER, ADDRESS, OPERAND_TYPE, OPERANDS, ISSUE_SLOT
from util.Processor import Processor
class Instruction:
def __init__(self, assembly: str, mutable=True):
# set default enabled features
self.isRandValueRandomImmediate = False
self.globalMandatoryFeatures = {}
self.mutable = mutable
self._assembly = assembly
self.parsestring = Processor().parseInstruction(assembly)
self.mandatoryEnabledFeatures = Processor().getMandatoryFeatures(assembly)
self.inst_name = self.parsestring[0]
self.features = self.getFeatures()
self._enabledfeatures = {}
# self.reset_features()
# Operands
self._operandAttr = {} # TARGET_REGISTER, FOCUS_REGISTER, RAND_VALUE
# parse assembly for return
self._instruction = self.parsestring[0]
self._operandStrings = self.parsestring[1]
self._originalOperandString = copy.deepcopy(self.parsestring[1])
self.interleavingTargetRegister = None
def getFeatures(self):
instructionName = None
if len(self.parsestring) > 2:
instructionName = self.inst_name
try:
features = Processor().getAvailableInstructionFeatures(instructionName)
# overwrite mandatory features
for feature in self.mandatoryEnabledFeatures:
features[feature] = [Processor().getFeatureValue(feature, self.mandatoryEnabledFeatures[feature])]
return features
except KeyError:
return None
def get_operand_attr(self):
return self._operandAttr
def getEnabledFeatures(self):
return self._enabledfeatures
def setFeatures(self, features):
"""
Reset all features and set them new.
:param features: to set of the instruction.
:return:
"""
self._enabledfeatures = {}
availableFeatures = Processor().getAvailableInstructionFeatures(self.inst_name)
# self._enabledfeatures = features
for key in features:
if key in availableFeatures:
assemblyFeatureString = features[key]
if assemblyFeatureString in availableFeatures[key]:
self._enabledfeatures[key] = assemblyFeatureString
self._setInstructionAssembly()
def getOperandAssembly(self): # in processor
isPragma = Processor().isPragma(self.inst_name)
# v = Processor().getOperandAssembly(self.parsestring[1], self._operandAttr, self.interleavingTargetRegister, isPragma=isPragma)
return Processor().getOperandAssembly(self.parsestring[1], self._operandAttr, self.interleavingTargetRegister, isPragma=isPragma, isRandValueRandomImmediate=self.isRandValueRandomImmediate)
def __str__(self) -> str:
self._setInstructionAssembly()
return self._instruction + " " + self.getOperandAssembly()
def string(self) -> str:
return self.__str__()
def getRawString(self):
return self.inst_name + " " + self._originalOperandString
def getName(self) -> str:
return self.inst_name
def getAssembly(self, enabledFeatures, globalMandatoryFeatures={}):
self.setFeatures(enabledFeatures)
self.setGlobalMandatoryFeatures(globalMandatoryFeatures)
return self.string()
def setEnabledFeatures(self, key: int, value: int):
self._enabledfeatures[key] = value
#if value in self.features[key]:
# self._enabledfeatures[key] = value
#else:
# self._enabledfeatures[key] = None
self._setInstructionAssembly()
def resetFeatures(self):
if self.features is None:
return
for key in self.features:
self._enabledfeatures[key] = self.features[key][0]
self._setInstructionAssembly()
def _setInstructionAssembly(self):
if self.mutable:
self.overrideMandatoryFeatures()
self._instruction = Processor().getInstructionAssemblyString(self.parsestring[0],
self._enabledfeatures)
if OPERANDS.BRANCH_INDEX.value in self._instruction:
if OPERANDS.BRANCH_INDEX.value in self._operandAttr:
self._instruction = self._instruction.replace(OPERANDS.BRANCH_INDEX.value, str(self._operandAttr[OPERANDS.BRANCH_INDEX.value]))
def overrideMandatoryFeatures(self):
for feature in self.mandatoryEnabledFeatures:
self._enabledfeatures[feature] = Processor().getFeatureValue(feature, self.mandatoryEnabledFeatures[feature])
for key in self.globalMandatoryFeatures:
if key in self._enabledfeatures and key in self.mandatoryEnabledFeatures:
self._enabledfeatures[key] = self.globalMandatoryFeatures[key]
def setGlobalMandatoryFeatures(self, globalMandatoryFeature):
self.globalMandatoryFeatures = globalMandatoryFeature
def __eq__(self, other):
if type(self) != type(other):
return False
return self.__str__() == other.__str__()
def setOperands(self, operands):
self._operandAttr = operands
def setOverrideTargetOperand(self, overRidingTargetOperand):
self.interleavingTargetRegister = overRidingTargetOperand
def enableRandValueRandomImmediate(self):
self.isRandValueRandomImmediate = True
6d7e4c40024f3a52dc2fa5cc72c19c80ecb37fee | 4484 | py | plotting-tools/plot_samples_multiple.py | larsgeb/hmc-documentation @ e302375a870359174254cc4e6c0515ef255dea3e | BSD-3-Clause
import glob
import numpy as np
import random as random
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
import locale
import decimal
def drange(x, y, jump):
while x < y:
yield float(x)
x = x + float(decimal.Decimal(jump))
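`drange` parses the step from a string via `decimal.Decimal` so exact decimal increments like `'0.25'` survive the conversion to float. A self-contained usage sketch (the generator is restated so the snippet runs on its own):

```python
import decimal

# Same generator as above, restated so this snippet is self-contained.
def drange(x, y, jump):
    while x < y:
        yield float(x)
        x = x + float(decimal.Decimal(jump))

print(list(drange(0, 1, '0.25')))  # [0.0, 0.25, 0.5, 0.75]
```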
# ============================================================
# - Setup.
# ============================================================
# - Number of burn-in samples to be ignored.
nbi = 0
# - Dimensions of interest.
dim_1 = 0
dim_2 = 1
# - Incremental displacement for duplicate points.
epsilon_1 = 0.0003
epsilon_2 = 0.0003
params = {'legend.fontsize': 'x-large',
'figure.figsize': (16, 8),
'axes.labelsize': 16,
'axes.titlesize': 'x-large',
'xtick.labelsize': 16,
'ytick.labelsize': 16
}
pylab.rcParams.update(params)
locale.setlocale(locale.LC_ALL, 'en_US.utf8') # Used for number formatting (thousands separators)
# ============================================================
# - Read samples and plotting0
# ============================================================
columns = 4
rows = 2
f, axes = plt.subplots(rows, columns, sharex=True, sharey=True)
# Creating an underlying subplot using some magic, to label the axes
f.patch.set_facecolor('white')
ax = f.add_subplot(111)
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('none')
ax.spines['left'].set_color('none')
ax.spines['right'].set_color('none')
ax.tick_params(labelcolor='w', top='off', bottom='off', left='off', right='off')
ax.set_zorder(-1)
ax.set_xlabel('parameter 1')
ax.set_ylabel('parameter 2')
# Create dictionary of all read folders
fileNames = glob.glob('../results/2d_naive/*/samples.txt')
sampleAmounts = map(lambda fileName: float(fileName.replace("../results/2d_naive/", "").replace("/samples.txt", "")),
fileNames)
dictFiles = dict(zip(sampleAmounts, fileNames))
# Shamefully, looping through a dictionary with an increasing index requires an extra variable,
# and of course every variable kills a kitten.
iterator = 0
for key, value in iter(sorted(dictFiles.items())):  # The sorting here is very important for nice plots
    fid = open(value)
    print(value)
dummy = fid.read().strip().split()
fid.close() # Close it!
# Parsing all that data
dimension = int(dummy[0])
iterations = int(dummy[dummy.__len__() - 1]) - nbi
x = np.zeros(iterations)
y = np.zeros(iterations)
x_plot = np.zeros(iterations)
y_plot = np.zeros(iterations)
q_opt = np.zeros(dimension)
chi = 1.0e100
# Black magic happening below, please ignore
for i in range(iterations):
x[i] = float(dummy[2 + dim_1 + (i + nbi) * (dimension + 1)])
y[i] = float(dummy[2 + dim_2 + (i + nbi) * (dimension + 1)])
x_plot[i] = x[i]
y_plot[i] = y[i]
chi_test = float(dummy[2 + dimension + (i + nbi) * (dimension + 1)])
if chi_test < chi:
chi = chi_test
print 'chi_min=', chi_test
for k in range(dimension):
q_opt[k] = float(dummy[2 + k + (i + nbi) * (dimension + 1)])
if i > 0 and x[i] == x[i - 1] and x[i] > 0 and y[i] == y[i - 1]:
x_plot[i] += epsilon_1 * random.gauss(0.0, 1.0)
y_plot[i] += epsilon_2 * random.gauss(0.0, 1.0)
# =========================================== #
# - The actual interesting stuff, plotting - #
# =========================================== #
# Linearly stepping through all the subplots in an nd array requires some creativity. (I did this myself!!)
verIndex = int(np.floor(iterator / columns))
horIndex = iterator % columns
# Now choose either of these lines to plot histograms ...
# axes[verIndex][horIndex].hist(x, color='k', normed=True, bins=list(drange(-1-0.075, 3.5+0.075, '0.075')))
# axes[verIndex][horIndex].hist(x, color='k', normed=True, bins=list(drange(1.5, 4.5, '0.05')))
# ... or this block to plot 2D histograms
    h = axes[verIndex][horIndex].hist2d(x, y, bins=40, normed=True, cmap='binary')
    PCM = h[3]  # hist2d returns (counts, xedges, yedges, image); take the image directly
cHandle = plt.colorbar(PCM, ax=axes[verIndex][horIndex])
    if iterator == 0:
        cHandle.set_label('probability', labelpad=-2)
    else:
        cHandle.set_label('probability')
axes[verIndex][horIndex].set_title('%s samples' % locale.format("%d", float(key), grouping=True))
    iterator += 1  # Without this increment every data set lands in the first subplot
plt.show()
plt.close()
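The linear-index to (row, column) mapping used for the subplots above can be checked in isolation; the 2x4 grid size matches the rows/columns settings in the script:

```python
# Map a running subplot counter onto (row, column) positions in a 4-wide grid,
# exactly as verIndex/horIndex are computed in the loop above.
columns = 4
positions = [(i // columns, i % columns) for i in range(2 * columns)]
print(positions)  # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (1, 3)]
```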
# ==== guardaPaisesLineas.py | juaseso/sinf_desarrollos | Apache-2.0 ====
import shelve
shelfFile = shelve.open('mydata')
with open('allCountries.txt') as f:  # 'with' closes the file; also avoids shadowing the builtin 'file'
    print('Splitting the file into an array of lines')
    lineas = f.readlines()
    print('OK')
shelfFile['arraylineas'] = lineas
shelfFile.close()
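A minimal round-trip sketch of what the script above relies on: a list stored in a shelf can be read back after reopening it. The temporary directory and sample lines are assumptions so the example is self-contained:

```python
import os
import shelve
import tempfile

workdir = tempfile.mkdtemp()           # scratch directory, not part of the original script
path = os.path.join(workdir, 'mydata')

shelf = shelve.open(path)
shelf['arraylineas'] = ['line 1\n', 'line 2\n']
shelf.close()

shelf = shelve.open(path)              # reopen: the data was persisted to disk
lineas = shelf['arraylineas']
shelf.close()
print(len(lineas))  # 2
```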
# ==== #Ejercicio 2.py | carlospuigserver/Recursividad | Apache-2.0 ====
# Exercise 2
palabra = input("Type a word and we will check whether it is a palindrome: ")
p = len(palabra)
capicua = ""
while p > 0:
    p = p - 1
    capicua = capicua + palabra[p]  # Concatenate the letters one by one, reversed
if palabra == capicua:
    print("The chosen word is a palindrome")
else:
    print("The chosen word is not a palindrome")
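The reversed-copy loop above can be written more compactly with slice notation; a minimal sketch (the sample words are arbitrary):

```python
def es_capicua(palabra):
    # A string is a palindrome exactly when it equals its own reverse.
    return palabra == palabra[::-1]

print(es_capicua("radar"))   # True
print(es_capicua("python"))  # False
```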
# ==== sdk/python/pulumi_aws/appautoscaling/policy.py | pulumi-bot/pulumi-aws | ECL-2.0, Apache-2.0 ====
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import pulumi
import pulumi.runtime
class Policy(pulumi.CustomResource):
"""
Provides an Application AutoScaling Policy resource.
"""
def __init__(__self__, __name__, __opts__=None, adjustment_type=None, alarms=None, cooldown=None, metric_aggregation_type=None, min_adjustment_magnitude=None, name=None, policy_type=None, resource_id=None, scalable_dimension=None, service_namespace=None, step_adjustments=None, step_scaling_policy_configurations=None, target_tracking_scaling_policy_configuration=None):
"""Create a Policy resource with the given unique name, props, and options."""
if not __name__:
raise TypeError('Missing resource name argument (for URN creation)')
if not isinstance(__name__, basestring):
raise TypeError('Expected resource name to be a string')
if __opts__ and not isinstance(__opts__, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
__props__ = dict()
if adjustment_type and not isinstance(adjustment_type, basestring):
raise TypeError('Expected property adjustment_type to be a basestring')
__self__.adjustment_type = adjustment_type
"""
The scaling policy's adjustment type.
"""
__props__['adjustmentType'] = adjustment_type
if alarms and not isinstance(alarms, list):
raise TypeError('Expected property alarms to be a list')
__self__.alarms = alarms
__props__['alarms'] = alarms
if cooldown and not isinstance(cooldown, int):
raise TypeError('Expected property cooldown to be a int')
__self__.cooldown = cooldown
__props__['cooldown'] = cooldown
if metric_aggregation_type and not isinstance(metric_aggregation_type, basestring):
raise TypeError('Expected property metric_aggregation_type to be a basestring')
__self__.metric_aggregation_type = metric_aggregation_type
__props__['metricAggregationType'] = metric_aggregation_type
if min_adjustment_magnitude and not isinstance(min_adjustment_magnitude, int):
raise TypeError('Expected property min_adjustment_magnitude to be a int')
__self__.min_adjustment_magnitude = min_adjustment_magnitude
__props__['minAdjustmentMagnitude'] = min_adjustment_magnitude
if name and not isinstance(name, basestring):
raise TypeError('Expected property name to be a basestring')
__self__.name = name
"""
The name of the policy.
"""
__props__['name'] = name
if policy_type and not isinstance(policy_type, basestring):
raise TypeError('Expected property policy_type to be a basestring')
__self__.policy_type = policy_type
"""
For DynamoDB, only `TargetTrackingScaling` is supported. For Amazon ECS, Spot Fleet, and Amazon RDS, both `StepScaling` and `TargetTrackingScaling` are supported. For any other service, only `StepScaling` is supported. Defaults to `StepScaling`.
"""
__props__['policyType'] = policy_type
if not resource_id:
raise TypeError('Missing required property resource_id')
elif not isinstance(resource_id, basestring):
raise TypeError('Expected property resource_id to be a basestring')
__self__.resource_id = resource_id
"""
The resource type and unique identifier string for the resource associated with the scaling policy. Documentation can be found in the `ResourceId` parameter at: [AWS Application Auto Scaling API Reference](http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
"""
__props__['resourceId'] = resource_id
if not scalable_dimension:
raise TypeError('Missing required property scalable_dimension')
elif not isinstance(scalable_dimension, basestring):
raise TypeError('Expected property scalable_dimension to be a basestring')
__self__.scalable_dimension = scalable_dimension
"""
The scalable dimension of the scalable target. Documentation can be found in the `ScalableDimension` parameter at: [AWS Application Auto Scaling API Reference](http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
"""
__props__['scalableDimension'] = scalable_dimension
if not service_namespace:
raise TypeError('Missing required property service_namespace')
elif not isinstance(service_namespace, basestring):
raise TypeError('Expected property service_namespace to be a basestring')
__self__.service_namespace = service_namespace
"""
The AWS service namespace of the scalable target. Documentation can be found in the `ServiceNamespace` parameter at: [AWS Application Auto Scaling API Reference](http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
"""
__props__['serviceNamespace'] = service_namespace
if step_adjustments and not isinstance(step_adjustments, list):
raise TypeError('Expected property step_adjustments to be a list')
__self__.step_adjustments = step_adjustments
__props__['stepAdjustments'] = step_adjustments
if step_scaling_policy_configurations and not isinstance(step_scaling_policy_configurations, list):
raise TypeError('Expected property step_scaling_policy_configurations to be a list')
__self__.step_scaling_policy_configurations = step_scaling_policy_configurations
"""
Step scaling policy configuration, requires `policy_type = "StepScaling"` (default). See supported fields below.
"""
__props__['stepScalingPolicyConfigurations'] = step_scaling_policy_configurations
if target_tracking_scaling_policy_configuration and not isinstance(target_tracking_scaling_policy_configuration, dict):
raise TypeError('Expected property target_tracking_scaling_policy_configuration to be a dict')
__self__.target_tracking_scaling_policy_configuration = target_tracking_scaling_policy_configuration
"""
A target tracking policy, requires `policy_type = "TargetTrackingScaling"`. See supported fields below.
"""
__props__['targetTrackingScalingPolicyConfiguration'] = target_tracking_scaling_policy_configuration
__self__.arn = pulumi.runtime.UNKNOWN
"""
The ARN assigned by AWS to the scaling policy.
"""
super(Policy, __self__).__init__(
'aws:appautoscaling/policy:Policy',
__name__,
__props__,
__opts__)
def set_outputs(self, outs):
if 'adjustmentType' in outs:
self.adjustment_type = outs['adjustmentType']
if 'alarms' in outs:
self.alarms = outs['alarms']
if 'arn' in outs:
self.arn = outs['arn']
if 'cooldown' in outs:
self.cooldown = outs['cooldown']
if 'metricAggregationType' in outs:
self.metric_aggregation_type = outs['metricAggregationType']
if 'minAdjustmentMagnitude' in outs:
self.min_adjustment_magnitude = outs['minAdjustmentMagnitude']
if 'name' in outs:
self.name = outs['name']
if 'policyType' in outs:
self.policy_type = outs['policyType']
if 'resourceId' in outs:
self.resource_id = outs['resourceId']
if 'scalableDimension' in outs:
self.scalable_dimension = outs['scalableDimension']
if 'serviceNamespace' in outs:
self.service_namespace = outs['serviceNamespace']
if 'stepAdjustments' in outs:
self.step_adjustments = outs['stepAdjustments']
if 'stepScalingPolicyConfigurations' in outs:
self.step_scaling_policy_configurations = outs['stepScalingPolicyConfigurations']
if 'targetTrackingScalingPolicyConfiguration' in outs:
self.target_tracking_scaling_policy_configuration = outs['targetTrackingScalingPolicyConfiguration']
# ==== simple_downloader.py | ZubairLK/ka-static | MIT ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This script basically downloads topictree from khan academy
# Then parses it and creates a temporary folder with files.
# Kept simple on purpose to just see how parsing the topic tree json works in python
import json
import pprint
import inspect
import os
import re
import urllib
global_index = 0
#http://stackoverflow.com/questions/3663450/python-remove-substring-only-at-the-end-of-string
def rchop(thestring, ending):
if thestring.endswith(ending):
return thestring[:-len(ending)]
return thestring
#somestring = rchop(somestring, ' rec')
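A quick standalone check of `rchop`, with the function copied from above; on Python 3.9+ the built-in `str.removesuffix` behaves the same way:

```python
def rchop(thestring, ending):
    if thestring.endswith(ending):
        return thestring[:-len(ending)]
    return thestring

print(rchop("Algebra_basics_rec", "_rec"))  # Algebra_basics
print(rchop("Algebra_basics", "_rec"))      # Algebra_basics (unchanged)
```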
def make_video_file(data, dirname, title, video_index):
global global_index
# for key in data:
# print key
video_title = re.sub('[^A-Za-z0-9]+', '_', data["translated_title"])
# for key in data["download_urls"]:
# print key
if "mp4-low" in data["download_urls"]:
download_url = data["download_urls"]["mp4-low"]
elif "mp4" in data["download_urls"]:
download_url = data["download_urls"]["mp4"]
else:
        # Looks like there is a download_urls json that doesn't have a video.
        print "No mp4-low or mp4 download URL for this video"
print dirname
# print global_index
print video_title
for key in data["download_urls"]:
print key
return
if title == "New_and_noteworthy":
directory = dirname
else:
directory = rchop(dirname , title)
# print directory
# directory = re.sub('[^A-Za-z0-9]+', '_', directory)
# print directory
# print video_title
# print title
# print global_index
# print video_index
if data['translated_title'] is not None:
full_title = data['translated_title'].encode('utf-8')
else:
# Use file name
full_title = video_title
if data['translated_description'] is not None:
full_description = data['translated_description'].encode('utf-8')
else:
full_description = "No description"
    video_id = ""  # avoid shadowing the builtin id(); a string default means the write below cannot fail on None
    if data['id'] is not None:
        video_id = str(data['id'])
    if not os.path.exists(directory):
        os.makedirs(directory)
    fp = open(directory + "/" + str(global_index) + "_" + video_title, "wb")
    fp.write("Video Title : ")
    fp.write(full_title + "\n")
    fp.write("Video Description : ")
    fp.write(full_description + "\n")
    fp.write("Download URL : ")
    fp.write(download_url + "\n")
    fp.write("Video ID : ")
    fp.write(video_id + "\n")
    fp.close()
fp.close()
def list_dict_keys(data, level, dirname, title):
# print type(data)
global global_index
video_index = 0
base = (" " * level)
if type(data) is dict:
for key in data:
# print base + key
# if key == 'title':
if key == 'translated_title':
title = re.sub('[^A-Za-z0-9]+', '_', data[key])
dirname = dirname + "/" + title
# print base + dirname
# print base + data[key]
            if key == 'relative_url':
                continue
            if key == 'translated_youtube_id':
                continue
if key == 'download_urls':
# Now that we have recursively reached a node with a video
# Create a folder and file.
global_index = global_index + 1
# video_index = video_index + 1
make_video_file(data, dirname, title, video_index)
# list_dict_keys(data[key], level, dirname, title)
# if key == 'mp4':
# print base + data[key]
if key == 'children':
list_members(data[key], level + 1, dirname, title)
def list_list_keys(data):
    if type(data) is dict:
        for key in data:  # bug fix: iterating over the builtin 'dict' type made this a no-op
            print key
def list_members(data, level, dirname, title):
for index, item in enumerate(data):
if type(data[index]) is dict:
list_dict_keys(data[index], level, dirname, title)
if not os.path.exists('topictree'):
print "Downloading Topictree. This can take a while"
topictree = urllib.urlretrieve("http://www.khanacademy.org/api/v1/topictree", "topictree")
with open('topictree') as data_file:
data = json.load(data_file)
# print json.dumps(data)
    list_dict_keys(data, 0, "myfolder", "title")
quit()
# ==== jython/sixpeaks.py | theturpinator/randomized-optimization-ABAGAIL | BSD-3-Clause ====
import sys
import os
import time
import opt.example.SixPeaksEvaluationFunction as SixPeaksEvaluationFunction
import dist.DiscreteUniformDistribution as DiscreteUniformDistribution
import opt.DiscreteChangeOneNeighbor as DiscreteChangeOneNeighbor
import opt.ga.DiscreteChangeOneMutation as DiscreteChangeOneMutation
import opt.ga.SingleCrossOver as SingleCrossOver
import dist.DiscreteDependencyTree as DiscreteDependencyTree
import opt.GenericHillClimbingProblem as GenericHillClimbingProblem
import opt.ga.GeneticAlgorithmProblem as GeneticAlgorithmProblem
import opt.prob.GenericProbabilisticOptimizationProblem as GenericProbabilisticOptimizationProblem
import opt.ga.GenericGeneticAlgorithmProblem as GenericGeneticAlgorithmProblem
import opt.RandomizedHillClimbing as RandomizedHillClimbing
import shared.FixedIterationTrainer as FixedIterationTrainer
import opt.SimulatedAnnealing as SimulatedAnnealing
import opt.ga.StandardGeneticAlgorithm as StandardGeneticAlgorithm
import opt.prob.MIMIC as MIMIC
from array import array
N = 10
T = N / 5
fill = [2] * N
ranges = array('i', fill)
ef = SixPeaksEvaluationFunction(T)
odd = DiscreteUniformDistribution(ranges)
nf = DiscreteChangeOneNeighbor(ranges)
mf = DiscreteChangeOneMutation(ranges)
cf = SingleCrossOver()
df = DiscreteDependencyTree(.1, ranges)
hcp = GenericHillClimbingProblem(ef, odd, nf)
gap = GenericGeneticAlgorithmProblem(ef, odd, mf, cf)
pop = GenericProbabilisticOptimizationProblem(ef, odd, df)
rhc = RandomizedHillClimbing(hcp)
fit = FixedIterationTrainer(rhc, 5)
fit.train()
print("RHC: ", ef.value(rhc.getOptimal()))
sa = SimulatedAnnealing(1E11, .95, hcp)
fit = FixedIterationTrainer(sa, 200000)
fit.train()
print("SA: ", ef.value(sa.getOptimal()))
ga = StandardGeneticAlgorithm(200, 100, 10, gap)
fit = FixedIterationTrainer(ga, 1000)
fit.train()
print("GA: ", ef.value(ga.getOptimal()))
mimic = MIMIC(200, 20, pop)
fit = FixedIterationTrainer(mimic, 1000)
fit.train()
print("MIMIC: ", ef.value(mimic.getOptimal()))
# ==== granite/generator/md_wrapper.py | codacy-badger/granite | MIT ====
# External includes
# OS module provides functions for interacting with the operating system
import os
import markdown_generator as mg
from typing import Optional
# Project includes
from granite.generator.output_file_wrapper import OutputFileWrapper
class MarkDownFileWrapper(OutputFileWrapper):
    """
    Markdown output file wrapper built on top of the markdown_generator writer.
    """
def __init__(self, filename: str) -> None:
"""
Initialize the object instance
"""
# Call the parent class constructor
super().__init__(filename, ".MD")
self.md_writer = mg.Writer(self.document)
def __del__(self) -> None:
"""
Redefine the class destructor to call its parent's destructor
"""
super().__del__()
def write_heading(
self,
string: str,
level: int = 1
) -> None:
        """
        Write a heading to the Markdown file
        :param string: heading text
        :param level: heading level (1 is the top level)
        """
self.md_writer.write_heading(string, level)
def write(self, string: str) -> None:
        """
        Write a string to the Markdown file
        :param string: text to write
        """
self.md_writer.write(string)
def _write_hrule(self) -> None:
"""
Method for writing a horizontal ruler tag to the Markdown file
"""
self.md_writer.write_hrule()
def init_code(self, language: str = "c") -> None:
"""
Method to initialize a source code section
        :param language: source code language to get syntax highlighting
"""
self.code = mg.Code(language)
def append_code(self, string: str) -> None:
"""
Method to add the string to the code
:param dict_field: value of the structure fields
"""
self.code.append(string)
def _write_code(self) -> None:
"""
Method for writing the instance of the code object in the markdown file
"""
self.md_writer.write(self.code)
def writeline(
self,
string: Optional[str] = None
) -> None:
"""
Method for writing a line in the markdown file
:param string: optional string
"""
self.md_writer.writeline(string)
def write_header_table(
self,
dict_header: dict
) -> None:
"""
Method for writing the table header to the markdown file
"""
self.table = mg.Table()
# For each items in the dictionary
for key, value in dict_header.items():
# add the items into the table
self.table.add_column(key, value)
def write_table(self) -> None:
"""
Method of writing the table to the markdown file
"""
self.write(self.table)
def append_table(
self,
dict_field: dict
) -> None:
"""
        Method for appending the input dictionary to the object instance table
:param dict_field: value of the structure fields
"""
self.table.append(*['{}'.format(j) for j in dict_field.values()])
def write_source_code(
self,
tc_name: str,
c_structure: str
) -> None:
"""
        Method of writing the source code section in the markdown file
:param tc_name: telecommand name
        :param c_structure: C structure of the telecommand
"""
self._write_hrule()
self.init_code("c")
self.append_code("/* Autogenerated C implementation of the '{}' structure */".format(tc_name))
self.append_code(c_structure)
self._write_code()
        self._write_hrule()
# ==== tools/scripts/sdoget.py | helio-hfc/SDOSS | MIT ====
#!/usr/bin/env python
""" Routines to access a netDRMS system via HTTP and CGI and to fetch
and read data.
These can be used on any Internet connected machine, and fetch data
from a netDRMS site. You must have Python and optionally pyfits to
read the FITS files.
The default is to use the ROB netDRMS server, but note that another
server may need wrappers for data serving program, as experience shows
the DRMS might not return a correctly formed response. Server admins
should contact <boyes@sidc.be> for example wrappers.
You can choose not to use a cache, but if you do so remember that each
time you fetch a file it has to be transferred from the server. On
the ROB cluster the cache is set by default, elsewhere
"/tmp/sdo_cache" is tried but then None is set.
Author David Boyes <boyes@sidc.be>
"""
""" Notes:
urllib/cgi module : the urlencode is a bit too enthusiastic, so you
have to tell it not to escape = and &. Some CGI servers accept
encoded versions, but Python parse() and so on do not. Or else you
need to read stdin and unquote()....
Localisation information for SDO netDRMS data centre sets globals for:
netdrmsserver
fetch_url
info_url
cacheroot
by exploring the machine we are on.
"""
import httplib, urllib
import tarfile
import shutil
import os
import stat
import re
import glob
import socket
try:
import pyfits
hasfits = True
import copy
except ImportError:
hasfits = False
#
# Local parameters
#
# Default cache root (series name will be added automatically) outside
# the ROB cluster if available. If not set to None, and then the
# calling program should have something like sdoget.cacheroot =
# '/tmp/my_sdo_cache' if you want to use another cache.
_def_cache = '/tmp/sdo_cache'
# Some parameters for the site
#
# Clues for ROB intranet
_intranet_dom = 'oma.be' # if this is domain we are on intranet
_probe_add = '192.168.144.1' # FQDN this address for domain
#
# Clues for cluster cache (plus with at site)
_mkey = '/pool' # mount point
_dkey = 'sdo_exports/pfscache' # visible writeable directory
_fskey = 'lustre' # file system type
#
# The prefix the server gives the exported files in a tar archive
def _tarpath(series):
return '/tmp/jsoc/' + series + '.'
# Check if we are in the given domain vs. not in domain or not online
try:
_rob_intranet = (socket.getfqdn(_probe_add)[::-1].find(_intranet_dom[::-1]) == 0)
except:
_rob_intranet = False
# Do we have a plausible ROB type cluster cache
_have_cluster_cache = False
if os.path.ismount(_mkey) and os.access(_mkey + '/' + _dkey, os.W_OK):
_p = os.popen('df -T ' + _mkey + ' 2>&1')
for _l in _p:
if _fskey in _l:
_have_cluster_cache = True
break
_p.close()
if _rob_intranet:
netdrmsserver = '192.168.144.14'
fetch_url = '/netdrms/drms_export_cgi'
info_url = '/netdrms/jsoc_info'
else:
netdrmsserver = 'sdodata.oma.be'
fetch_url = '/sdodb/netdrms/drms_export_cgi'
info_url = '/sdodb/netdrms/jsoc_info'
if _rob_intranet and _have_cluster_cache:
cacheroot = '/pool/sdo_exports/pfscache'
elif os.access(_def_cache, os.W_OK):
cacheroot = _def_cache
else:
cacheroot = None
class LocalError(Exception):
"""Unrecoverable error not normally handled by calling program"""
    def __init__(self, value, message):
self.value = value
self.message = message
def _checkdt(s, fmt = "%04d.%02d.%02d_%02d:%02d:%09.6f"):
"""Very relaxed date/time regex based parser.
Accepts - or . date separators, _ or T date time separators, but
insists on decimal point in seconds, no suffix and no -ve years.
Returns a string based on fmt arg (6-tuple)
Raises ValueError on bad parse."""
date_re = r'(?P<year>\d{4})[-/.](?P<month>\d{2})[-/.](?P<day>\d{2})(?P<time>\S*)'
time_re = r'([T_]((?P<hour>\d{2})(:(?P<minute>\d{2})(:(?P<second>\d{2}(\.\d{1,6})?))?)?)?)?(?P<rest>\S*)'
s = ''.join(s.split()) # must remove whitespace for simple regex
s = s.rstrip('_TAIZ')
m = re.match(date_re, s)
if m:
date = (int(m.group("year")),int(m.group("month")),int(m.group("day")))
else:
raise ValueError
s = m.group("time")
if len(s) == 0:
return fmt%(date + (0,0,0))
m = re.match(time_re, s)
if m and (len(m.group("rest")) == 0):
time = []
for u in ["hour", "minute", "second"]:
g = m.group(u)
if g == None:
time.append(0)
else:
if u =="second":
time.append(float(g))
else:
time.append(int(g))
return fmt%(date + tuple(time))
else:
raise ValueError
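The relaxed parse in `_checkdt` can be exercised standalone; this sketch copies the date regex and default format string from above but fixes the time part at zero, which is enough to show the output shape:

```python
import re

# Same date regex and default format string as _checkdt above; the time part
# is fixed at zero here to keep the sketch short.
date_re = r'(?P<year>\d{4})[-/.](?P<month>\d{2})[-/.](?P<day>\d{2})(?P<time>\S*)'
fmt = "%04d.%02d.%02d_%02d:%02d:%09.6f"

def parse_date(s):
    s = ''.join(s.split()).rstrip('_TAIZ')  # drop whitespace and TAI/Z suffixes
    m = re.match(date_re, s)
    if m is None:
        raise ValueError(s)
    return fmt % (int(m.group("year")), int(m.group("month")),
                  int(m.group("day")), 0, 0, 0.0)

print(parse_date("2014-02-17"))  # 2014.02.17_00:00:00.000000
```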
def _getdata( message):
""" Use GET to fetch response to a message.
All sorts of things can go wrong, but e.g. a 404 is a correct
response. So wrong things that cause an exception really are
wrong.
"""
message = urllib.quote(urllib.unquote_plus(message),'&=') # urlencode adds not understood +'s
conn = httplib.HTTPConnection(netdrmsserver+':80')
try:
conn.request("GET", info_url+'?'+message)
response = conn.getresponse()
data = response.read()
except: # Catch very bad results
data = None # NB 400 messages and such are *not* trapped here
conn.close()
return data # $$ can data be None legit?
def _header2fn(hdr):
"""Get the first file name from a mime header block.
Helper fns so fetchfile less prolix
"""
for h in hdr.headers:
if h.startswith('Content-Disposition') and ('filename' in h):
s = h[h.find('filename'):].split('"')
if len(s) < 3:
return None
return s[1]
return None
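The quote-splitting done by `_header2fn` can be seen on a made-up header line (the filename here is hypothetical):

```python
# Hypothetical MIME header line of the kind _header2fn scans for; the file
# name is whatever sits between the first pair of double quotes.
h = 'Content-Disposition: attachment; filename="aia_test.fits"'
s = h[h.find('filename'):].split('"')
print(s[1])  # aia_test.fits
```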
class SdoGet:
""" Get data and information on individual FITS files and from the
netDRMS on the basis of instrument data series, T_OBS ranges and
wavelengths if appropriate.
"""
def __init__(self, s, d=None, wavelength=None, nrecs=1, window='10m'):
"""
Instantiation args are :
s - series (required)
d - date/time start or range
wavelength - wavelength where appropriate
nrecs - limit on number of records (defaults to 1, 0 for all)
window - time window (defaults to 10 minutes)
If a date/time range is given (two date/times separated by
" - ") this takes precedence over the time window.
The basic function is to get a list of recnums (the serial
numbers in the system) and online statuses, but to be helpful
it also gives a list of T_OBS and of suggested file names. NB
the suggested file names use the wavelength given here, not
that in the FITS header.
"""
self.series = s
self.wavelength = wavelength
self.filename = None # name stem for final file
self.files = [None] # list of files filled in by fetch
self.online = [None] # online status list
self.t_obs = [None] # T_OBS list
self.dtname = [None] # made up names
if d == None: # that's fine, we just named series
self.recnum = [None]
self.statmsg = 'No record(s) selected yet'
else:
d = d.split(' - ')
if len(d) == 1:
d.append(None)
dstart = _checkdt(d[0])
if d[1] == None:
q_filter = '[' + dstart + '/' + window + ']'
else:
q_filter = '[' + dstart + ' - ' + _checkdt(d[1]) + ']'
if self.wavelength != None:
q_filter = q_filter + '[? WAVELNTH = \'' + str(self.wavelength) + '\' ?]'
self.recnum, self.statmsg = self._firstrecs(q_filter, nmax=nrecs)
def _firstrecs(self, q_filter, timekey='T_OBS', nmax=1):
""" Return recnum list for what is hopefully the first
record(s) in a time range. Can also return (None, message)
when recnum not available or raises an exception for a bad
server error. Thus you must check for before using the
recnum.
"""
self.recnum = [None]
record_set = self.series + q_filter
message = urllib.urlencode({'ds': record_set, 'op': 'rs_list', 'n':nmax, 'key': timekey + ',*recnum*,*online*,*sunum*'})
jd = _getdata(message) # Both good JSON and error messages
try:
jd = eval(jd)
        except Exception:
            raise LocalError(type(jd), 'Badly formed status info') # FIXME: this may be wrong
if not (('status' in jd) and jd['status'] == 0 and ('keywords' in jd)):
return (None, 'No data for given query')
ols = []
rns = []
for kd in jd['keywords']:
k = kd['name']
v = kd['values'][0]
if k == '*online*':
ols = kd['values']
if k == '*recnum*':
rns = kd['values']
if k == 'T_OBS':
tobs = kd['values']
if len(ols) == 0 and len(rns) == 0:
return (None, 'No record(s) found')
if len(ols) != len(rns):
return (None, 'Inconsistent data from query')
none_ol = True
for i, ol in enumerate(ols):
rns[i] = str(rns[i])
if ol == 'Y':
none_ol = False
if none_ol:
return (None, 'No record(s) currently available') # ## try again message?
self.online = ols
self.t_obs = tobs
self.recnum = rns
        if self.wavelength is None:
format = "%04d%02d%02d_%02d%02d%09.6f"
else:
format = "%04d%02d%02d_%02d%02d%09.6f" + '_' + "%04d"%(self.wavelength,)
        self.dtname = map(lambda x: _checkdt(x, format).replace('.', ''), self.t_obs)
return (rns, 'Recnum')
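The made-up name stems above (a date/time formatted with `"%04d%02d%02d_%02d%02d%09.6f"`, an optional `_%04d` wavelength suffix, dots stripped) can be reproduced with a small standalone sketch. `name_stem` is a hypothetical helper; the real code routes the time fields through `_checkdt`, which is defined elsewhere in this module:

```python
def name_stem(y, mo, d, h, mi, sec, wavelength=None):
    # Same format string as in _firstrecs: seconds keep six decimals,
    # then the '.' is stripped so the stem is file-name friendly.
    stem = "%04d%02d%02d_%02d%02d%09.6f" % (y, mo, d, h, mi, sec)
    if wavelength is not None:
        stem += "_%04d" % wavelength
    return stem.replace('.', '')

print(name_stem(2011, 1, 2, 3, 4, 5.5, wavelength=171))
```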
def _getcache(self):
"""Get the cache dir full name."""
if not os.access(cacheroot, os.W_OK):
raise LocalError(cacheroot, 'Given cache root not found or not writable')
res = cacheroot + '/' + self.series
if not os.access(res, os.F_OK):
os.mkdir(res)
os.chmod(res, stat.S_IRWXU | stat.S_IRWXG | stat.S_IROTH| stat.S_IXOTH)
if not os.access(res, os.W_OK):
raise LocalError(res, 'Given cache directory not writable')
return res
def _placedata(self, src, namestem, destdir, newname, extn):
""" Put result in destination and possibly in cache.
src - tmpname from system
namestem - probably recnum as string
destdir - final dir
        newname - optional rename
extn - the extn in cache and to keep
"""
        if cacheroot is None: # then don't care what namestem is
shutil.move(src, destdir + '/' + newname + '.' + extn)
else:
cname = self._getcache() + '/' + namestem + '.' + extn
shutil.move(src, cname)
os.chmod(cname, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP)
shutil.copy(cname, destdir + '/' + newname + '.' + extn)
    def _fetchsinglerecnum(self, newname=None, todir='.', fetch_rn=None, extn_filt=''):
""" Fetch a single file. This will have an assigned name or
maybe names, so you might want to rename it.
Most users won't use this, but will probably want just
fetch().
Args are the recnum, optional new filename (the system adds an
extension) and optional directory. If no new filename is
given a name based on the recnum is used, which is just the
name that the server delivers.
The routine sets the status message and returns None or the
filename stem (i.e. without the extensions which distinguish
        multiple files). That is, there may be multiple files, all
with this name stem plus various extensions.
"""
# $$ the logic of some of this may be redundant
        if fetch_rn is None:
            fetch_rn = self.recnum[0]
        if fetch_rn is None:
            self.statmsg = 'No recnum given or obtained by query'
            return None
        if newname is None:
self.filename = fetch_rn
else:
self.filename = newname
# Is it possibly already in cache?
        if cacheroot is not None: # or else no cache
fnl = glob.glob(self._getcache() + '/' + fetch_rn + '.*')
if len(fnl) > 0: # They seem to be in cache
for fn in fnl:
n, extn = os.path.basename(fn).split('.', 1)
if extn_filt == '' or extn_filt == extn:
shutil.copy(fn, todir + '/' + self.filename + '.' + extn)
self.statmsg = 'File from cache'
return self.filename # i.e. either recnum or arg
# Not in cache, need to get
urllib.urlcleanup()
record_set = self.series + '[:#' + fetch_rn + ']'
message = urllib.urlencode({'rsquery': record_set,'n':'1'})
        message = urllib.quote(urllib.unquote_plus(message), '&=/') # urlencode adds '+' signs the server does not understand
full_url = 'http://' + netdrmsserver + fetch_url
try:
r = urllib.urlretrieve(full_url, data=message) # file is r[0]
        except Exception:
self.statmsg = 'No response from server' # something really odd, not e.g. 404
return None
fetchname = _header2fn(r[1]) # filename from mime header
        if fetchname is None:
self.statmsg = 'Badly formed server header'
return None
to_snip = _tarpath(self.series) # dir in tarfile plus file name
if fetchname.endswith('.tar'):
if not tarfile.is_tarfile(r[0]):
                self.statmsg = 'Tar file from server unreadable'
return None
tf = tarfile.open(r[0])
res = False
for fn in tf.getnames():
if fn.startswith(to_snip): # ignore other content
tf.extract(fn)
rn, extn = fn[len(to_snip):].split('.',1)
if rn != fetch_rn:
self.statmsg = 'Wrong data files in tar file'
return None
if extn_filt == '' or extn_filt == extn:
self._placedata(fn, fetch_rn, todir, self.filename, extn)
res = True
if not res:
self.statmsg = 'Could not find data files in tar file'
return None
else: # plain file, not a tar file
to_snip = os.path.basename(to_snip)
if not fetchname.startswith(to_snip):
self.statmsg = 'Data file name does not match query' # name looks v. wrong
return None
rn,extn = fetchname[len(to_snip):].split('.',1)
if rn != fetch_rn:
self.statmsg = 'Wrong data file returned'
return None
if extn_filt == '' or extn_filt == extn:
self._placedata(r[0], fetch_rn, todir, self.filename, extn)
self.statmsg = 'New file from server'
return self.filename
    def fetch(self, filename=None, todir='.', extn='', recnum=None):
"""Fetches one or more files on the basis of the lists of
available serial numbers ("recnum's") and online statuses
which may have been obtained when the class is instantiated.
        Optional arguments allow renaming of files (from the assigned name
based on a serial number), placement of files in a given
directory, limit to a selected file extension and direct
selection via the "recnum" serial number.
Sets the list of the files obtained and returns the
name of the first file. That includes the destination directory,
whereas the list of filenames does not include the directory.
        Just makes repeated calls, if need be, to _fetchsinglerecnum()
        with friendlier args!
Args :
filename : string or list. If string is used as base for a
generated file name. If list must match in length the list of
recnums.
todir : string for the destination directory
        extn : if given, selects the extension for the case where a
        family of files can be fetched (e.g. image and bad
        pixels). If not given or '', will fetch all family members.
recnum : string (for single recnum) or list to get recnum(s)
which need not have been found by an instantiation. Default
is to use the attribute found by instantiation.
Remember that you can always (should you like to live
dangerously) set the recnum and online attribute lists
manually before calling. If you give a recnum list as an
argument, it will be assumed that all recnums are online.
"""
        if recnum is None:
rn = self.recnum
ol = self.online
else:
rn = recnum
ol = []
            if isinstance(recnum, str): # listify a string
                rn = [rn]
            for r in rn:
                ol.append('Y') # mark as 'Y' and give it a try
        if rn is None:
self.statmsg = 'No recnum given or obtained by query'
return None
fnres = []
for i, oli in enumerate(ol): # ol in case rn given
rnn = rn[i]
if oli == 'Y':
if len(rn) == 1:
fnxt = ''
else:
fnxt = '_' + str(i)
                if filename is None:
                    fn = rnn + fnxt
                elif isinstance(filename, str):
fn = filename + fnxt
else:
fn = filename[i]
fnres.append(self._fetchsinglerecnum(fn, todir, rnn, extn_filt=extn))
else:
self.statmsg = 'Some record(s) not on server at the moment, try again later'
fnres.append(None)
self.files = fnres
return todir + '/' + fnres[0]
def readsdofits(filename, extn='', dir=''):
"""If the pyfits library is installed, will return a tuple of a
keyword dictionary plus a numpy array from an SDO format
(i.e. Rice compressed or uncompressed) FITS file.
In fact it just reads the last unit of a 1 or 2 unit file, and
pyfits can work out if the data is compressed.
    The data is read from a single file whose full name is made from
    the two arguments joined by a dot - so having two args is just a
    convenience, one alone would do.
"""
if not hasfits:
return (None, 'pyfits + numpy required')
if len(extn) > 0:
filename = filename + '.' + extn
if dir != '':
dir = dir.rstrip('/') + '/'
# SDO data is in second HDU
hd = pyfits.open(dir + filename)
if len(hd) == 2:
hdun = 1
elif len(hd) == 1:
hdun = 0
else:
return (None, 'FITS file structure odd')
kwds = {}
# You can do this, but it is still talkative
#hd[hdun].verify('silentfix')
for k in hd[hdun].header:
try:
kwds[k] = hd[hdun].header[k]
except ValueError:
continue
img = copy.copy(hd[hdun].data)
hd.close()
return (kwds, img)
def Main():
"""Display the settings for this location."""
if _rob_intranet:
print 'Will connect on ROB intranet using:'
else:
print 'Will connect via internet using:'
print 'Server: ',netdrmsserver
print 'Fetch command: ',fetch_url
print 'Query command: ',info_url
print 'Data cache: ',cacheroot
    if cacheroot is None:
print 'You might want to make a data cache at ' + _def_cache
if not hasfits:
print 'Warning, pyfits and/or numpy not available,\nyou will not be able to read the files you download'
if __name__ == "__main__":
Main()

# === username_books_score.py | repo: ru4488/Book-bot | license: MIT ===

import requests
import random
import redis
import time
import json
from fake_useragent import UserAgent
from bs4 import BeautifulSoup
from ratelimit import *
ua = UserAgent()

def get_HTML(url):
    # NOTE: get_HTML is called below but was never defined in this file;
    # this is a minimal assumed implementation that fetches a page with
    # a randomized User-Agent header.
    response = requests.get(url, headers={'User-Agent': ua.random})
    return response.text

def last_page(pages):
soup = BeautifulSoup(pages, 'html.parser')
    info = soup.find("a", title="Последняя страница")  # "Последняя страница" = "Last page"
last_page_list = info['href'].split('~')
last_page = int(last_page_list[1])
return last_page
def book_title(info_list):
    # book title
    if 'автора' in info_list:
        name_book_list = info_list[info_list.index('книгу') + 1:info_list.index('автора')]
    elif 'авторов' in info_list:
        # known page quirk: some entries lack the words "авторов"/"автора"
        name_book_list = info_list[info_list.index('книгу') + 1:info_list.index('авторов')]
    else:
        return None
    return " ".join(name_book_list)
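The marker-slicing pattern used here (take the tokens between 'книгу' and the first author marker found) can be factored into a generic helper. `slice_between` is a hypothetical name, shown with English tokens for clarity:

```python
def slice_between(tokens, start_marker, end_markers):
    # Return the words strictly between start_marker and the first end
    # marker present; None when no end marker exists (the page quirk
    # noted in the comments above).
    for end in end_markers:
        if end in tokens:
            return " ".join(tokens[tokens.index(start_marker) + 1:tokens.index(end)])
    return None

words = "reviewed book War and Peace by Tolstoy".split()
print(slice_between(words, 'book', ['by']))
```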
def book_authors(info_list):
    # book author name(s)
if 'автора' in info_list:
all_about_book_dir = {}
name_artist_list = info_list[info_list.index('автора') + 1 :]
return [" ".join(name_artist_list)]
elif 'авторов' in info_list:
name_artist_list =info_list[info_list.index('авторов') + 1 :]
name_artist = " ".join(name_artist_list)
artists = name_artist.split(', ')
return artists
def teg_book(info_list, review):
all_about_book_list = []
all_about_book_dir = {}
score_book = review.find('div' , class_="group-review-rating")
score_list = score_book.text.split()
score = score_list[2]
all_about_book_dir['score'] = score
    # book title
if 'автора' in info_list:
name_book_list = info_list[info_list.index('книгу') + 1 : info_list.index('автора')]
name_book = " ".join(name_book_list)
all_about_book_dir['title'] = name_book
    # known page quirk: some entries lack the words "авторов"/"автора"
elif 'авторов' in info_list:
name_book_list = info_list[info_list.index('книгу') + 1 : info_list.index('авторов')]
name_book = " ".join(name_book_list)
    # book author name
if 'автора' in info_list:
name_artist_list =info_list[info_list.index('автора') + 1 :]
name_artist = " ".join(name_artist_list)
all_about_book_dir['artist'] = name_artist
elif 'авторов' in info_list:
name_artist_list =info_list[info_list.index('авторов') + 1 :]
name_artist = " ".join(name_artist_list)
how_much_artists = name_artist.split(', ')
for artist in range(len(how_much_artists)):
all_about_book_dir1 = {}
all_about_book_dir1['artist'] = how_much_artists[artist]
all_about_book_dir1['title'] = name_book
all_about_book_dir1['score'] = score
all_about_book_list.append(all_about_book_dir1)
if len(all_about_book_dir) != 0:
all_about_book_list.append(all_about_book_dir)
all_about_book_dir = {}
return all_about_book_list
def all_about_books(html):
soup = BeautifulSoup(html , 'html.parser')
all_about_book_list = []
# all_about_book_dir = {}
for review in soup.find_all('div', class_="block-border card-block expert-review"):
info= review.find("div" , class_="group-login-date dont-author")
info_list = info.text.split()
all_about_book_list.extend(teg_book(info_list, review))
return all_about_book_list
if __name__ == "__main__":
url = 'https://www.livelib.ru/reader/Fari22/reviews'
pages = get_HTML('https://www.livelib.ru/reader/Fari22/reviews~1')
    n_pages = last_page(pages)
    all_page_list = []
    for web_page in range(1, n_pages + 1):
        all_page_list.append(url + '~' + str(web_page))
    print(n_pages)
new_page = []
for i in range(len(all_page_list)):
random_numb = random.randint(7 , 30)
html = get_HTML(all_page_list[i])
new_page.extend(all_about_books(html))
time.sleep(random_numb)
print(new_page)
with open("reviews_of_all_users_books.json", "w") as write_file:
        json.dump(new_page, write_file)

# === model.py | repo: kdz/pystemm | license: MIT ===

## The Model class is the main class used to build actual PySTEMM models
## It traverses concept types and instances, analyzes, and generates visualizations
## Its main public methods are addClasses, addInstances, and the showXYZ methods
import appscript
K = appscript.k
import traits.api as TR
from traits.api import Int, String, Instance, List, Tuple, Dict, Bool, Float, Property, Callable, cached_property
from template import *
from view import *
def find(iterable):
## just a more readable name
return next(iterable)
class Concept(TR.HasTraits):
pass
class Model(object):
def __init__(self, *conceptList):
## conceptList is a list of Concept classes
self.conceptList = conceptList
## instanceList is instances of those concepts
self.instanceList = []
## attrFilter dict {ConcepClass: listOfAttributes} to selectively show attributes
self._attrFilter = {}
## model has its View, to draw on OmniGraffle
self.view = View()
def addClasses(self, *cs):
self.conceptList = self.conceptList + cs
def addInstances(self, *instances):
self.instanceList += instances
def attrFilter(self, klsToAttrsDict):
self._attrFilter = klsToAttrsDict
def _attrPassesFilter(self, kls, attrName):
## check if an attribute of a Concept class kls should be drawn or not
return self._attrFilter.get(kls) is None or attrName in self._attrFilter[kls]
def _traverseConcept(self, c, visitedList):
## walks around starting from Concept Class c
## adds nodes and edges (to View) for Concept Classes and their attribute definitions
## recursively traverses other related Concept Classes, including super classes
## uses visitedList to keep track of which classes have already been done
if c in visitedList: return
visitedList.append(c)
self.view.node(c, apply_template(netClassTemplate(c), c))
for attr_name, attr_cTrait in classTraits(c).items():
if not self._attrPassesFilter(c, attr_name):
continue
if isPrimitive(attr_cTrait):
continue
else:
attr = Attr(attr_name, attr_cTrait)
self._traverseConcept(endPoint(attr.type), visitedList)
self.view.edge(c,
endPoint(attr.type),
apply_template(Rel_Template, attr),
apply_template(Rel_Label_Template, attr))
for b in c.__bases__ if hasattr(c, '__bases__') else []:
if issubclass(b, Concept) and b is not Concept:
self._traverseConcept(b, visitedList)
self.view.edge(c, b, Subclass_Props, {})
def _traverseInst(self, inst, visitedList):
## walks around starting from the Concept instance inst
## adds nodes and edges (to View) for instances and their attribute values
## recursively traverses other related instances via attribute values
## uses visitedList to keep track of which instances have already been done
if inst in visitedList: return
visitedList.append(inst)
kls = type(inst)
self.view.node(inst, apply_template(netInstanceTemplate(inst), inst))
for key in classTraits(kls):
val = getattr(inst, key)
if val is None: continue
if not self._attrPassesFilter(kls, key):
continue
trait = inst.trait(key)
label = lineLabel(key)
if isPrimitive(trait):
pass
elif isInstanceTrait(trait):
self._traverseInst(val, visitedList)
self.view.edge(inst, val, Link_Props, label)
elif isListOfInstanceTrait(trait):
for obj in val:
self._traverseInst(obj, visitedList)
self.view.edge(inst, obj, Link_Props, label)
elif isListOfTupleOfInstancePrim(trait):
for (obj, prim) in val:
self._traverseInst(obj, visitedList)
self.view.edge(inst, obj, Link_Props, {K.text: "%s(_,%s)" % (key, prim)})
elif isListOfTupleOfPrimInstance(trait):
for (prim, obj) in val:
self._traverseInst(obj, visitedList)
self.view.edge(inst, obj, Link_Props, {K.text: "%s(%s,_)" % (key, prim)})
elif isCallableTrait(trait):
self.view.leafEdgeAndNode(inst, val, Link_Props, label, apply_template(Callable_Template, val))
else:
self.view.leafEdgeAndNode(inst, val, Link_Props, label, apply_template(Val_Template, val))
def displayConcepts(self):
visitedList = []
for c in self.conceptList:
self._traverseConcept(c, visitedList)
def displayInstances(self):
visitedList = []
for i in self.instanceList:
self._traverseInst(i, visitedList)
def showMethod(self, obj, name):
method = getattr(obj, name)
self.view.leafEdgeAndNode(obj, method, Link_Props, lineLabel(name), apply_template(Callable_Template, method))
def showEval(self, obj, funcName, argsList, tmpl=None):
func = getattr(obj, funcName)
result = apply(func, argsList)
label = "%s(%s)" % (funcName, ','.join(["%s" % arg for arg in argsList]))
props = apply_template(mergeTemplates(tmpl, Val_Template), result)
self.view.leafEdgeAndNode(obj, result, {K.stroke_pattern: 2, K.head_type: 'StickArrow'}, lineLabel(label),
props)
def showGraph(self, obj, func_names, param, range, *highlights):
if isinstance(func_names, str): func_names = [func_names]
for func_name in func_names:
file = graph_file(obj, func_name, param, range, highlights)
self.view.leafEdgeAndNode(obj, file, Link_Props, {K.text: func_name}, apply_template(Graph_Template, file))
def animate(self, obj, from_to, num_pts, template_list):
## For Times between From and To, apply each template to obj & Time, draw result
## Expects a special key in each template: k.new: k.shape or k.line or ...
## (so that lines can be drawn as easily as circles, rectangles, etc.)
## Bypasses View methods, and directly draws templates onto the Canvas
c = self.view.canvas
for p in range(from_to[0], from_to[1], (from_to[1] - from_to[0]) / num_pts):
for t in template_list:
shape_class = t[K.new]
tmp = apply_template({key: v for key, v in t.items() if key != K.new}, obj, p)
c.make(new=shape_class, at=c.graphics.end, with_properties=tmp)
def display(self):
self.displayConcepts()
self.displayVerbalizedConcepts()
self.displayInstances()
self.view.draw()
def displayVerbalizedConcepts(self):
text = '\n'.join([verbalizeConcept(c) for c in self.conceptList])
self.view.node(text, apply_template(Verbalized_Template, text))
class Obj(object):
def __init__(self, **kwargs):
for key,val in kwargs.items():
setattr(self, key, val)
def view(obj, kls):
return view1(obj, kls, {})
# only supports traits that are primitive and Instance(X)
def view1(objA, concept_class, views): # , mapping)
## Answers: can <objA> be correctly viewed as a <concept_class>?
## returns instance of <concept_class> with _errs_ attribute
## _errs_ = [] means the match is valid; otherwise, _errs_ = explanation
## <views> is a dictionary of {(objA, concept_class) : mapped_instance }
## this is used to prevent infinite loops due to cycles between objects
if (objA, concept_class) in views:
return views[(objA, concept_class)]
if not issubclass(concept_class, Concept):
if isinstance(objA, concept_class):
return objA
else:
objA._errs_ = [("self", "Not instance of %s" % concept_class)]
return objA
if issubclass(concept_class, Concept):
if isinstance(objA, concept_class):
objB = objA
objB._errs_ = []
else:
objB = concept_class()
objB._errs_ = []
traits = classTraits(concept_class)
for name, trait in traits.items():
try:
valA = getattr(objA, name) if hasattr(objA, name) else None
if valA:
if isPrimitive(trait):
valB = valA
elif isInstanceTrait(trait):
v2 = dict(views)
v2.update({(objA, concept_class): objB})
valB = view1(valA, trait.trait_type.klass, v2)
if valB._errs_: objB._errs_.append((name, 'invalid %s' % concept_class))
else: raise Exception("Unsupported Trait Type")
trait.validate(objB, name, valB)
setattr(objB, name, valB)
except Exception as err:
objB._errs_.append((name, "%s" % err))
# TODO: carry over non-trait attributes from objA to objB
# e.g. for list of values of functions pos=[(t1,p1), (t2,p2)...]
# @rules checks
for m_name, meth in getRules(concept_class):
def fail():
objB._errs_.append((m_name, meth.__doc__ or "@rule failed"))
try:
ok = meth(objB)
if not ok: fail()
except Exception:
fail()
return objB
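The error-collecting style of `view1` above (accumulate `(field, message)` pairs instead of failing on the first problem) can be illustrated without the traits machinery. `check_shape` is a hypothetical dict-based analogue, not part of PySTEMM:

```python
def check_shape(obj, schema):
    # Validate a plain dict against {field: expected_type}, collecting
    # (field, message) errors like the _errs_ list on the mapped object.
    errs = []
    for name, typ in schema.items():
        if name not in obj:
            errs.append((name, 'missing'))
        elif not isinstance(obj[name], typ):
            errs.append((name, 'expected %s' % typ.__name__))
    return errs

print(check_shape({'mass': 2.5, 'name': 7}, {'mass': float, 'name': str}))
```

As with `view1`, an empty error list means the object can validly be viewed as the given shape.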
import inflect
english = inflect.engine()
def verbalizeConcept(concept):
name_and_doc = "%s: %s " % (concept.__name__, concept.__doc__ or "")
every_one_has = "Every %s" % concept.__name__ + \
(' is a %s, and' % concept.__bases__[0].__name__ \
if (concept.__bases__ != (object,) and concept.__bases__ != (Concept,)) \
else '') + \
' has attributes:\n\t'
traits = classTraits(concept)
attrs = '\n\t'.join(["%s, which is %s" % (n, english.a(verbalize(tr))) for n, tr in traits.items()])
rules = getRules(concept)
rulesTxt = ('\n and has rules:\n\t' + '\n\t'.join([rName for rName, f in rules])) if rules else ''
return name_and_doc + every_one_has + attrs + rulesTxt
def graph_file(obj, func_name, param, range, highlights):
## plots the named function on obj, taking param over its range (start, end)
## highlights are pairs of (x_value, "Text_to_show") to annotate on the graph plot
## saves the plot to a file and returns full file name
import numpy, pylab
func = getattr(obj, func_name)
f = numpy.vectorize(func)
x = numpy.linspace(*range)
y = f(x)
pylab.rc('font', family='serif', size=20)
pylab.plot(x, y)
pylab.xlabel(param)
pylab.ylabel(func_name)
hi_pts = [pt for pt, txt in highlights]
pylab.plot(hi_pts, f(hi_pts), 'rD')
for pt, txt in highlights:
pt_y = func(pt)
ann = txt + "\n(%s,%s)" % (pt, pt_y)
pylab.annotate(ann, xy=(pt, pt_y))
import os, time
file = os.path.dirname(__file__) + "/generated_graphs/%s.%s.%s.pdf" % (type(obj).__name__, func_name, time.time())
pylab.savefig(file, dpi=300)
pylab.close()
return file

# === telegram.py | repo: abi83/YaPractice | license: MIT ===

import telegram
def send_message(message, bot):
return bot.send_message(chat_id=CHAT_ID, text=message)
CHAT_ID = '32989550' # add your chat_id here
TELEGRAM_TOKEN = '1440023732:AAHl6SURCOrJQfJ4X_Xv56FG-MOUuzNa_oo' # add your token here
bot = telegram.Bot(token=TELEGRAM_TOKEN) # initialize the bot here
send_message('hello', bot)
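Hard-coding a bot token as above is risky if the file is committed anywhere; a common alternative is to read credentials from the environment. A minimal sketch (`TG_TOKEN` and `TG_CHAT_ID` are assumed variable names, not part of the original script):

```python
import os

def load_bot_config(env=os.environ):
    # Hypothetical helper: pull credentials from the environment so the
    # token never lands in source control; raises if either is missing.
    try:
        return {'token': env['TG_TOKEN'], 'chat_id': env['TG_CHAT_ID']}
    except KeyError as err:
        raise RuntimeError('missing environment variable: %s' % err)

cfg = load_bot_config({'TG_TOKEN': 'dummy', 'TG_CHAT_ID': '42'})
print(cfg['chat_id'])
```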

# === virfac/get_data.py | repo: NCBI-Hackathons/Virulence_Factor_Characterization | license: MIT ===

import os
import requests
import string
import xml.etree.ElementTree as xml
from os.path import join as j
from time import sleep
from virfac import data_dir
bad_bugs = """Acinetobacter baumannii ACICU
Aeromonas hydrophila subsp. hydrophila ATCC 7966
Aeromonas salmonicida subsp. salmonicida A449
Aeromonas hydrophila ML09-119
Aeromonas veronii B565
Aeromonas hydrophila str. AH-3
Anaplasma phagocytophilum HZ
Bacillus anthracis str. Sterne
Bacillus anthracis
Bacillus cereus ATCC 10987
Bacillus cereus ATCC 14579
Bacillus anthracis str. Ames Ancestor
Bacillus subtilis subsp. subtilis str. 168
Bartonella henselae str. Houston-1
Bartonella quintana str. Toulouse
Bordetella pertussis Tohama I
Brucella melitensis bv. 1 str. 16M
Brucella suis 1330
Burkholderia pseudomallei K96243
Campylobacter jejuni subsp. jejuni NCTC 11168
Campylobacter jejuni subsp. jejuni 81-176
Chlamydia trachomatis D/UW-3/CX
Clostridium perfringens str. 13
Clostridium perfringens ATCC 13124
Clostridium perfringens SM101
Clostridium difficile 630
Clostridium tetani E88
Clostridium perfringens B str. ATCC 3626
Clostridium perfringens str. NCTC 8533B4D
Clostridium perfringens E str. NCIB 10748
Clostridium botulinum C str. 203U28
Clostridium botulinum D str. 1873
Clostridium botulinum Hall 183 (A391)
Clostridium difficile str. CCUG 20309
Clostridium novyi str. ATCC19402
Clostridium septicum str. NH2
Corynebacterium diphtheriae NCTC 13129
Coxiella burnetii CbuK_Q154
Coxiella burnetii RSA 493
Coxiella burnetii RSA 331
Coxiella burnetii CbuG_Q212
Coxiella burnetii Dugway 5J108-111
Enterococcus faecalis V583
Enterococcus faecalis str. MMH594
Enterococcus faecium DO
Enterococcus faecium str. TX2555
Escherichia coli O127:H6 str. E2348/69
Escherichia coli B171
Escherichia coli O157:H7 str. EDL933
Escherichia coli 17-2
Escherichia coli str. 042
Escherichia coli 55989
Escherichia coli O44:H18 042
Escherichia coli CFT073
Escherichia coli O75:K5:H- str. IH11128
Escherichia coli EC7372
Escherichia coli str. A30
Escherichia coli O25b:H4 str. FV9863
Escherichia coli str. C1845
Escherichia coli
Escherichia coli E10703
Escherichia coli O114:H49 str. E29101A
Escherichia coli O159:H4 str. 350C1
Escherichia coli str. E7473
Escherichia coli str. 260-1
Escherichia coli str. ARG-3
Escherichia coli O8:H9 str. WS6788A
Escherichia coli str. H721A
Escherichia coli O18:K1:H7 str. RS218
Escherichia coli UTI89
Escherichia coli str. E-B35
Escherichia coli O111:H- str. E45035
Escherichia coli O78:H11:K80 str. H10407
Escherichia coli O157:H7 str. Sakai
Escherichia coli O55:H7 str. CB9615
Escherichia coli O26 str. C/15333
Escherichia coli ONT:H- str. FV11678
Escherichia coli str. A22
Escherichia coli str. AL 851
Escherichia coli str. 239 KH 89
Escherichia coli O25:H42 str. E11881A
Escherichia coli O114:H- str. WS0115A
Escherichia coli str. 111KH86
Escherichia coli SE11
Escherichia coli O157:H str. 493/89
Escherichia coli ONT:HND str. A16
Escherichia coli C342-62
Escherichia coli O45:K1:H7 str. S88
Haemophilus influenzae Rd KW20
Haemophilus influenzae str. 1007
Haemophilus influenzae AM30 (770235)
Haemophilus influenzae N187
Haemophilus influenzae nontypable strain 3179B
Haemophilus influenzae str. 12
Haemophilus influenzae C54
Haemophilus influenzae TN106
Helicobacter pylori 26695
Helicobacter pylori J99
Klebsiella pneumoniae subsp. pneumoniae NTUH-K2044
Klebsiella pneumoniae subsp. pneumoniae HS11286
Klebsiella pneumoniae subsp. pneumoniae 1084
Legionella pneumophila subsp. pneumophila str. Philadelphia 1
Listeria monocytogenes EGD-e
Listeria ivanovii str. ATCC 19119
Listeria innocua SLCC6294
Mycobacterium tuberculosis H37Rv
Mycoplasma pneumoniae M129
Mycoplasma hyopneumoniae 232
Neisseria meningitidis MC58
Neisseria meningitidis Z2491
Pseudomonas aeruginosa PAO1
Pseudomonas aeruginosa PA103
Rickettsia rickettsii str. Sheila Smith
Rickettsia conorii str. Malish 7
Salmonella enterica subsp. enterica serovar Typhi str. CT18
Salmonella enterica subsp. enterica serovar Typhimurium str. LT2
Salmonella enterica (serovar typhimurium)
Salmonella enterica subsp. enterica serovar Typhimurium str. 14028s
Shigella flexneri 2a str. 301
Shigella dysenteriae Sd197
Staphylococcus aureus subsp. aureus MW2
Staphylococcus aureus subsp. aureus str. Newman
Staphylococcus aureus subsp. aureus COL
Staphylococcus aureus str. Newman D2C (ATCC 25904)
Staphylococcus aureus ZM
Staphylococcus aureus
Staphylococcus aureus S6
Staphylococcus aureus RN4220
Staphylococcus aureus subsp. aureus N315
Streptococcus pyogenes MGAS315
Streptococcus pyogenes M1 GAS
Streptococcus pyogenes MGAS8232
Streptococcus agalactiae 2603V/R
Streptococcus agalactiae A909
Streptococcus pneumoniae TIGR4
Streptococcus pneumoniae R6
Streptococcus agalactiae FM027022
Streptococcus agalactiae NEM316
Streptococcus pyogenes MGAS5005
Streptococcus pneumoniae Taiwan19F-14
Vibrio cholerae O1 biovar El Tor str. N16961
Vibrio parahaemolyticus RIMD 2210633
Vibrio vulnificus CMCP6
Vibrio parahaemolyticus
Vibrio vulnificus YJ016
Yersinia pestis CO92
Yersinia enterocolitica W1024
Yersinia enterocolitica str. 84-50
Yersinia enterocolitica subsp. enterocolitica 8081
Yersinia pestis KIM 10""".split('\n')
good_bugs = """Actinomyces dentalis
Actinomyces israelii
Actinomyces oricola
Actinomyces oris
Aerococcus viridans
Aerococcus urinae
Arthrobacter agilis
Aerococcus sanguicola
Aerococcus viridans
Arcanobacterium bernardiae
Arthrobacter agilis
Bacillus algicola
Bacillus barbaricus
Bacillus firmus
Bacillus funiculus
Bacillus gibsonii
Bacillus horikoshii
Bacillus niacini
Bacillus pasteurii
Bacillus subtilis inaquosorum
Brevibacillus brevis
Brevibacterium epidermidis
Brevibacterium oxydans
Burkholderia_thailandensis_E264_uid58081
Burkholderia_cenocepacia_MC0_3_uid58769
Burkholderia_phymatum_STM815_uid58699
Burkholderia_CCGE1001_uid42975
Cellulomonas hominis
Cellulomonas turbata
Corynebacterium acnes
Corynebacterium caspium
Corynebacterium flavescens
Corynebacterium genitalium
Corynebacterium imitans
Corynebacterium striatum
Corynebacterium variabilis
Corynebacterium xerosis
Dermabacter hominis
Dermacoccus nishinomiyaensis
Exiguobacterium acetylicum
Flavobacterium arborescens
Flavobacterium maritypicum
Gordonia bronchialis
Escherichia coli str. K-12 substr. MG1655
Escherichia_coli_ED1a_uid59379
Escherichia_coli_SE11_uid59425
Escherichia_fergusonii_ATCC_35469_uid59375
Escherichia_coli_HS_uid58393
Escherichia_coli__BL21_GOLDd_DE3_pLysS_AG__uid59245
Escherichia_coli_IAI1_uid59377
Escherichia_coli_B_REL606_uid58803
Escherichia_coli_K_12_substr__DH10B_uid58979
Exiguobacterium acetylicum
Flavobacterium arborescens
Flavobacterium maritypicum
Gordonia bronchialis
Leifsonia aquatica
Leifsonia xyli
Micrococcus glutamicus
Micrococcus nishinomiyaensis
Micrococcus sedentarius
Pseudomonas_fluorescens_Pf_5_uid57937
Pseudomonas_putida_GB_1_uid58735
Pseudomonas_putida_KT2440_uid57843
Pseudomonas_fluorescens_SBW25_uid62971
Pseudomonas_putida_W619_uid58651
Pseudomonas_brassicacearum_NFM421_uid66303
Pseudomonas_stutzeri_A1501_uid58641
Rhodococcus bronchialis
Rhodococcus erythropolis
Rhodococcus terrae
Staphylococcus albus
Staphylococcus auricularis
Staphylococcus capitis
Staphylococcus epidermidis
Staphylococcus hominis
Staphylococcus nepalensis
Staphylococcus saprophyticus
Staphylococcus vitulinus
Staphylococcus warneri
Streptococcus australis
Streptococcus caprinus
Streptococcus crista
Streptococcus entericus
Streptococcus gordonii
Streptococcus infantarius
Streptococcus mitis
Streptococcus mutans ferus
Streptococcus oralis
Streptococcus salivarius
Streptococcus sanguis
Streptococcus vestibularis
Streptococcus viridans
Tsukamurella inchonensis
Tsukamurella paurometabola
Tsukamurella pulmonis
Virgibacillus pantothenticus""".split('\n')
esearch = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
efetch = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
elink = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"
api_key = '39bc94a6bd1a989fdaacde696739255d7709'
#api_key = None
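# The download loop below builds filenames by lowercasing the species name,
# stripping punctuation and replacing spaces; a standalone sketch of that
# transformation (helper name is illustrative only):
import string as _string

def _species_to_filename(name):
    cleaned = name.lower().translate(str.maketrans('', '', _string.punctuation))
    return cleaned.replace(' ', '_').strip() + '.fasta'

assert _species_to_filename("Yersinia pestis KIM 10") == 'yersinia_pestis_kim_10.fasta'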
def get_data_from_ncbi(bug_list=bad_bugs, subdir="pathogens", esearch=esearch, efetch=efetch, elink=elink, api_key=api_key):
for bug in bug_list:
#sleep(.1)
os.makedirs(j(data_dir, subdir), exist_ok=True)
filename = j(data_dir, subdir, bug.lower().translate(str.maketrans('', '', string.punctuation)).replace(' ','_').strip() + '.fasta')
res = requests.get(esearch, params=dict(tool='hackathon2019', db='assembly', term=bug, retmax=1000, api_key=api_key))
res.raise_for_status()
try:
rec = xml.fromstring(res.content)
except xml.ParseError:
print(res.content)
break
ids = [i.text for i in rec.findall('.//IdList/Id')]
print(f'found {len(ids)} genomes for {bug} ...')
for gb_id in ids:
res = requests.get(elink, params=dict(tool='hackathon2019', db='nuccore', dbfrom='assembly', Id=gb_id, retmode='xml', api_key=api_key))
try:
rec = xml.fromstring(res.content)
except xml.ParseError:
print(res.content)
break
#sleep(.1)
for fa_id in [e.text for e in rec.findall(r".//*[DbTo='nuccore']//Id")]:
res = requests.get(efetch, params=dict(tool='hackathon2019', db='nuccore', Id=fa_id, rettype='fasta', api_key=api_key))
with open(filename, 'wb') as fd:
for i, chunk in enumerate(res.iter_content(chunk_size=128)):
fd.write(chunk)
print(f"{i*128} bytes read.")
                        if i*128 > 5000:
                            # ~5 KB is enough; stop reading this record
                            break
                break  # only keep the first nuccore record for this assembly
if __name__ == '__main__':
    get_data_from_ncbi()

# ---- algorithms/python/344.py (repo: scream7/leetcode, license: Apache-2.0) ----

class Solution(object):
def reverseString(self, s):
"""
:type s: str
:rtype: str
"""
res = []
for i in range(len(s)-1,-1,-1):
res.append(s[i])
return ''.join(res)
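# A quick standalone check of the same range-based reversal, compared against
# Python's slice idiom s[::-1] (helper name is illustrative only):
def _reverse_by_index(s):
    res = []
    for i in range(len(s) - 1, -1, -1):
        res.append(s[i])
    return ''.join(res)

assert _reverse_by_index("hello") == "hello"[::-1] == "olleh"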
# ---- dlrm/nn/mlps.py (repo: henryqin1997/Adalars, license: Apache-2.0) ----

# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from typing import Sequence, List, Iterable
import apex.mlp
import torch
from torch import nn
class AbstractMlp(nn.Module):
"""
MLP interface used for configuration-agnostic checkpointing (`dlrm.utils.checkpointing`)
and easily swappable MLP implementation
"""
@property
def weights(self) -> List[torch.Tensor]:
"""
Getter for all MLP layers weights (without biases)
"""
raise NotImplementedError()
@property
def biases(self) -> List[torch.Tensor]:
"""
Getter for all MLP layers biases
"""
raise NotImplementedError()
def forward(self, mlp_input: torch.Tensor) -> torch.Tensor:
raise NotImplementedError()
def load_state(self, weights: Iterable[torch.Tensor], biases: Iterable[torch.Tensor]):
for new_weight, weight, new_bias, bias in zip(weights, self.weights, biases, self.biases):
weight.data = new_weight.data
weight.data.requires_grad_()
bias.data = new_bias.data
bias.data.requires_grad_()
class TorchMlp(AbstractMlp):
def __init__(self, input_dim: int, sizes: Sequence[int]):
super().__init__()
layers = []
for output_dims in sizes:
layers.append(nn.Linear(input_dim, output_dims))
layers.append(nn.BatchNorm1d(output_dims))
layers.append(nn.ReLU(inplace=True))
input_dim = output_dims
self.layers = nn.Sequential(*layers)
self._initialize_weights()
def _initialize_weights(self):
for module in self.modules():
if isinstance(module, nn.Linear):
nn.init.normal_(module.weight.data, 0., math.sqrt(2. / (module.in_features + module.out_features)))
nn.init.normal_(module.bias.data, 0., math.sqrt(1. / module.out_features))
@property
def weights(self):
return [layer.weight for layer in self.layers if isinstance(layer, nn.Linear)]
@property
def biases(self):
return [layer.bias for layer in self.layers if isinstance(layer, nn.Linear)]
def forward(self, mlp_input: torch.Tensor) -> torch.Tensor:
"""
Args:
mlp_input (Tensor): with shape [batch_size, num_features]
Returns:
Tensor: Mlp output in shape [batch_size, num_output_features]
"""
return self.layers(mlp_input)
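# _initialize_weights above draws weights from N(0, 2/(fan_in + fan_out)),
# a Xavier/Glorot-style scheme. A standalone sketch of the std computation
# (math only, no torch required; the layer sizes are made up for illustration):
import math as _math

def _glorot_std(fan_in, fan_out):
    return _math.sqrt(2.0 / (fan_in + fan_out))

assert abs(_glorot_std(512, 512) - 0.0441941738) < 1e-6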
class CppMlp(AbstractMlp):
def __init__(self, input_dim: int, sizes: Sequence[int]):
super().__init__()
self.mlp = apex.mlp.MLP([input_dim] + list(sizes))
@property
def weights(self):
return self.mlp.weights
@property
def biases(self):
return self.mlp.biases
def forward(self, mlp_input: torch.Tensor) -> torch.Tensor:
"""
Args:
mlp_input (Tensor): with shape [batch_size, num_features]
Returns:
Tensor: Mlp output in shape [batch_size, num_output_features]
"""
return self.mlp(mlp_input)
# ---- query_stack.py (repo: mccormickmichael/laurel, license: Unlicense) ----

# Query a CloudFormation stack.
# Returns:
# - Build Parameters - the parameters sent to the TemplateBuilder when generating the template
# - Stack Parameters - the parameters sent to CF during stack create or update time.
import argparse
import arguments
import logconfig
import session
from scaffold.cf import stack
def query_stack(stack_name, profile):
    # use the passed-in profile (region/role still come from the parsed args)
    boto3_session = session.new(profile, args.region, args.role)
return [
stack.summary(boto3_session, stack_name).build_parameters(),
stack.parameters(boto3_session, stack_name)
]
default_profile = 'default'
def get_args():
ap = argparse.ArgumentParser(description='Query a CloudFormation stack for build and template parameters',
add_help=False)
ap.add_argument('stack_name',
help='Name of the stack to query')
arguments.add_security_control_group(ap)
return ap.parse_args()
if __name__ == '__main__':
logconfig.config()
args = get_args()
build_parms, stack_parms = query_stack(args.stack_name, args.profile)
# TODO: move these to logging messages
print('Build Parameters:')
if len(build_parms) == 0:
print(' (none)')
else:
for name, value in build_parms.items():
print(' {} : {}'.format(name, value))
print('Stack Parameters:')
if len(stack_parms) == 0:
print(' (none)')
else:
for name, value in stack_parms.items():
print(' {} : {}'.format(name, value))
# ---- nacelle/core/utils/encoder.py (repo: paddycarey/nacelle, license: MIT) ----

# stdlib imports
import datetime
import json
# third-party imports
from google.appengine.api import datastore_types
from google.appengine.ext import ndb
class ModelEncoder(json.JSONEncoder):
"""
Custom JSON encoder allowing easy encoding
of appengine ndb entities/queries to JSON.
"""
def default(self, obj):
# convert models to dicts before serialising
if isinstance(obj, ndb.Model):
return obj.to_dict()
# convert keys to ids
if isinstance(obj, ndb.Key):
return obj.get().key.id()
# fetch the result of Ndb Futures
if isinstance(obj, ndb.Future):
return obj.get_result()
# listify queries
if isinstance(obj, ndb.Query):
return list(obj.iter())
# serialize geopts
if isinstance(obj, datastore_types.GeoPt):
return ','.join([str(obj.lat), str(obj.lon)])
# output dates/times in iso8601 format
if isinstance(obj, (datetime.date, datetime.datetime, datetime.time)):
return obj.isoformat()
# Let the base class default method raise the TypeError
return super(ModelEncoder, self).default(obj)
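# The same default() pattern works with pure stdlib types; a minimal
# standalone sketch handling only dates/times (no App Engine required,
# class name is illustrative only):
import json as _json
import datetime as _dt

class _DateTimeEncoder(_json.JSONEncoder):
    def default(self, obj):
        # serialize dates/times in iso8601, defer everything else
        if isinstance(obj, (_dt.date, _dt.datetime, _dt.time)):
            return obj.isoformat()
        return super(_DateTimeEncoder, self).default(obj)

assert _json.dumps({'d': _dt.date(2020, 1, 2)}, cls=_DateTimeEncoder) == '{"d": "2020-01-02"}'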
# ---- getmac.py (repo: Risca/biffboot, license: Unlicense) ----

#!/usr/bin/env python
import pod
import sys, binascii
from StringIO import StringIO
def ConvertMac(dotted):
    if dotted.find(":") == -1:
        raw = binascii.unhexlify(dotted)
    else:
        # parse each colon-separated octet as hex (avoids eval())
        raw = "".join([chr(int(i, 16)) for i in dotted.split(":")])
    if len(raw) != 6:
        raise ValueError("Not a MAC address")
    return raw
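# A standalone sanity check of the same parsing logic (bytes-oriented, so it
# behaves identically on Python 3; the helper name is illustrative only):
import binascii as _binascii

def _mac_bytes(dotted):
    if ":" in dotted:
        return bytes(int(octet, 16) for octet in dotted.split(":"))
    return _binascii.unhexlify(dotted)

assert _mac_bytes("00:11:22:aa:bb:cc") == _mac_bytes("001122aabbcc")
assert len(_mac_bytes("001122aabbcc")) == 6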
def Help():
print "Usage: getmac.py"
print "Copies mac to mac.bin"
sys.exit(-1)
if __name__ == "__main__":
p = pod.Pod("turbo")
p.GetMac()
p.Close()
# ---- python-idioms/npi-tuple.py (repo: sdlab-naist/Teddy-plus, license: 0BSD) ----

def n48():
list_from_comma_separated_value_file = ['dog', 'Fido', 10]
animal = list_from_comma_separated_value_file[0]
name = list_from_comma_separated_value_file[1]
age = list_from_comma_separated_value_file[2]
def n49():
catherine_info = ['Catherine', 1960, 'Australian', 'F', 165, 50, 'Trainee']
class Staff:
name = ''
        year_of_birth = 0
nationality = ''
gender = ''
height = 0
weight = 0
position = ''
cat = Staff()
cat.name = catherine_info[0]
    cat.year_of_birth = catherine_info[1]
cat.nationality = catherine_info[2]
cat.gender = catherine_info[3]
cat.height = catherine_info[4]
cat.weight = catherine_info[5]
cat.position = catherine_info[6]
def n50():
DOB_numbers = [11,27,1,9,1996]
h = DOB_numbers[0]
m = DOB_numbers[1]
D = DOB_numbers[2]
M = DOB_numbers[3]
Y = DOB_numbers[4]
return '%i:%i %i-%i-%i' % (h,m,D,M,Y)
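# The idiomatic alternative to the index-by-index copies above is tuple
# unpacking; a standalone sketch of n50 rewritten that way:
def _n50_unpacked():
    h, m, D, M, Y = [11, 27, 1, 9, 1996]
    return '%i:%i %i-%i-%i' % (h, m, D, M, Y)

assert _n50_unpacked() == '11:27 1-9-1996'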
def n51():
blood_group = ['A','B','O','B']
class Person:
blood = 'X'
p1 = Person()
p1.blood = blood_group[0]
p2 = Person()
p2.blood = blood_group[1]
p3 = Person()
p3.blood = blood_group[2]
p4 = Person()
    p4.blood = blood_group[3]

# ---- sr/robot/vision.py (repo: ghalliday/sr-turtle, license: MIT) ----

from collections import namedtuple
# From pyenv.git/pylib/sr/vision.py
MARKER_ARENA, MARKER_ROBOT, MARKER_PEDESTAL, MARKER_TOKEN = range(4)
marker_offsets = {
MARKER_ARENA: 0,
MARKER_ROBOT: 28,
MARKER_PEDESTAL: 32,
MARKER_TOKEN: 41
}
marker_sizes = {
MARKER_ARENA: 0.25 * (10.0/12),
MARKER_ROBOT: 0.1 * (10.0/12),
MARKER_PEDESTAL: 0.2 * (10.0/12),
MARKER_TOKEN: 0.2 * (10.0/12)
}
# MarkerInfo class
MarkerInfo = namedtuple( "MarkerInfo", "code marker_type offset size" )
def create_marker_info_by_type(marker_type, offset):
return MarkerInfo(marker_type = marker_type,
offset = offset,
size = marker_sizes[marker_type],
code = marker_offsets[marker_type] + offset)
# Points
# TODO: World Coordinates
PolarCoord = namedtuple("PolarCoord", "length rot_y")
Point = namedtuple("Point", "polar")
# Marker class
MarkerBase = namedtuple( "Marker", "info res centre timestamp" )
class Marker(MarkerBase):
def __init__(self, *a, **kwd):
# Aliases
self.dist = self.centre.polar.length
self.rot_y = self.centre.polar.rot_y
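# Setting extra alias attributes in __init__ (as Marker does above) works
# because a namedtuple subclass that does not declare __slots__ gets an
# instance __dict__; a standalone sketch with made-up names:
from collections import namedtuple as _namedtuple

_Polar = _namedtuple("Polar", "length rot_y")
_PointBase = _namedtuple("PointBase", "polar")

class _AliasedPoint(_PointBase):
    def __init__(self, *a, **kwd):
        self.dist = self.polar.length

assert _AliasedPoint(_Polar(2.0, 0.5)).dist == 2.0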
# ---- mediawiki_dump/utils.py (repo: e1mo/mediawiki-dump, license: MIT) ----

"""
Utility functions
"""
from datetime import datetime, timezone
def parse_date_string(date: str) -> datetime:
"""Converts date as string (e.g. "2004-05-25T02:19:28Z") to UNIX timestamp (uses UTC, always)
"""
# https://docs.python.org/3.6/library/datetime.html#strftime-strptime-behavior
# http://strftime.org/
parsed = datetime.strptime(date, '%Y-%m-%dT%H:%M:%SZ') # string parse time
# now apply UTC timezone
return parsed.replace(tzinfo=timezone.utc)
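# A standalone illustration of the parse (checking components rather than a
# raw epoch value):
from datetime import datetime as _datetime, timezone as _timezone

_d = _datetime.strptime("2004-05-25T02:19:28Z", '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=_timezone.utc)
assert (_d.year, _d.month, _d.day, _d.hour) == (2004, 5, 25, 2)
assert _d.tzinfo is _timezone.utc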
# ---- setup.py (repo: rako233/rako-pyads, license: MIT) ----

from setuptools import setup, find_packages
exec(open('counsyl_pyads/version.py').read())
setup(
name='counsyl-pyads',
version=__version__,
packages=find_packages(),
scripts=['bin/twincat_plc_info.py'],
include_package_data=True,
zip_safe=False,
author='Counsyl Inc.',
author_email='opensource@counsyl.com',
description='A library for directly interacting with a Twincat PLC.',
url='https://github.com/counsyl/counsyl-pyads/',
classifiers=[
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
'Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator',
'Topic :: Software Development :: Libraries :: Python Modules',
],
)
# ---- tests/test_parallel.py (repo: jimmysong/fabric, license: BSD-2-Clause) ----

from __future__ import with_statement
from fabric.api import run, parallel, env, hide
from utils import FabricTest, eq_
from server import server, RESPONSES
class TestParallel(FabricTest):
@server()
@parallel
def test_parallel(self):
"""
Want to do a simple call and respond
"""
env.pool_size = 10
cmd = "ls /simple"
with hide('everything'):
eq_(run(cmd), RESPONSES[cmd])
# ---- scripts/generate_example.py (repo: binary-is/alcoholnow, license: MIT) ----

#!/usr/bin/env python3
# Generates a FILENAME, named after today (i.e. "2021-01-05.json" with the
# prettified JSON output of the remote store data.
#
# Should be run from the project root.
import os
import requests
import json
from datetime import datetime
DEST_DIR = 'example-data'
if not os.path.exists(DEST_DIR):
print('Error: Must be run from project root, like so: scripts/generate_example.py.')
quit(1)
URL = 'https://www.vinbudin.is/addons/origo/module/ajaxwebservices/search.asmx/GetAllShops'
FILENAME = '%s/%s.json' % (DEST_DIR, datetime.now().strftime('%Y-%m-%d'))
print('Fetching data...', end='', flush=True)
response = requests.get(URL, headers={'Content-Type': 'application/json; charset=utf-8'})
print(' done')
print('Decoding JSON...', end='', flush=True)
stores = json.loads(json.loads(response.text)['d'])
pretty_json = json.dumps(stores, indent=2, ensure_ascii=False).encode('utf8').decode()
print(' done')
print('Writing file %s...' % FILENAME, end='', flush=True)
with open(FILENAME, 'w') as f:
f.write(pretty_json)
print(' done')
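# The GetAllShops endpoint wraps a JSON string inside a JSON object, hence the
# double json.loads above; a standalone sketch of that shape (payload contents
# are made up for illustration):
import json  # already imported above; repeated so this sketch is standalone

_payload = json.dumps({"d": json.dumps([{"Name": "example"}])})
assert json.loads(json.loads(_payload)["d"])[0]["Name"] == "example"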
# ---- pyslam/residuals/pose_to_pose_orientation_residual.py (repo: utiasSTARS/pyslam, license: MIT) ----

import numpy as np
class PoseToPoseOrientationResidual:
"""Binary pose-to-pose residual given relative rotation mesurement in SO3."""
def __init__(self, C_2_1_obs, stiffness):
self.C_2_1_obs = C_2_1_obs
self.stiffness = stiffness
self.obstype = type(C_2_1_obs)
def evaluate(self, params, compute_jacobians=None):
T_1_0_est = params[0]
T_2_0_est = params[1]
#TODO: Make this more general for SE(2) poses
P2 = np.zeros((3, 6))
P2[:, 3:] = np.eye(3)
P1 = np.zeros((3, 6))
P1[:, 3:] = T_2_0_est.dot(T_1_0_est.inv()).rot.as_matrix()
residual = np.dot(self.stiffness,
self.obstype.log(
T_2_0_est.dot(T_1_0_est.inv()).rot.dot(self.C_2_1_obs.inv())
)
)
if compute_jacobians:
jacobians = [None for _ in enumerate(params)]
if compute_jacobians[0]:
jacobians[0] = np.dot(self.stiffness, -P1)
if compute_jacobians[1]:
jacobians[1] = np.dot(self.stiffness, P2)
return residual, jacobians
return residual
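# The projector built in evaluate() selects the rotational block of an SE(3)
# perturbation; a standalone sketch of its structure (numpy only):
import numpy as np  # already imported above; repeated so this sketch is standalone

_P = np.zeros((3, 6))
_P[:, 3:] = np.eye(3)
assert not _P[:, :3].any()          # translational block is zero
assert (_P[:, 3:] == np.eye(3)).all()  # rotational block is identity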
# ---- pypy/interpreter/test/test_raise.py (repo: woodrow/pyoac, license: MIT) ----

import py.test
class AppTestRaise:
def test_arg_as_string(self):
def f():
raise "test"
import warnings
warnings.simplefilter('error', DeprecationWarning)
try:
raises(DeprecationWarning, f)
finally:
warnings.simplefilter('default', DeprecationWarning)
def test_control_flow(self):
try:
raise Exception
raise AssertionError, "exception failed to raise"
except:
pass
else:
raise AssertionError, "exception executing else clause!"
def test_1arg(self):
try:
raise SystemError, 1
except Exception, e:
assert e.args[0] == 1
def test_2args(self):
try:
raise SystemError, (1, 2)
except Exception, e:
assert e.args[0] == 1
assert e.args[1] == 2
def test_instancearg(self):
try:
raise SystemError, SystemError(1, 2)
except Exception, e:
assert e.args[0] == 1
assert e.args[1] == 2
def test_more_precise_instancearg(self):
try:
raise Exception, SystemError(1, 2)
except SystemError, e:
assert e.args[0] == 1
assert e.args[1] == 2
def test_stringexc(self):
a = "hello world"
try:
raise a
except a, e:
assert e == None
try:
raise a, "message"
except a, e:
assert e == "message"
def test_builtin_exc(self):
try:
[][0]
except IndexError, e:
assert isinstance(e, IndexError)
def test_raise_cls(self):
def f():
raise IndexError
raises(IndexError, f)
def test_raise_cls_catch(self):
def f(r):
try:
raise r
except LookupError:
return 1
raises(Exception, f, Exception)
assert f(IndexError) == 1
def test_raise_wrong(self):
try:
raise 1
except TypeError:
pass
else:
raise AssertionError, "shouldn't be able to raise 1"
def test_raise_three_args(self):
import sys
try:
raise ValueError
except:
exc_type,exc_val,exc_tb = sys.exc_info()
try:
raise exc_type,exc_val,exc_tb
except:
exc_type2,exc_val2,exc_tb2 = sys.exc_info()
assert exc_type ==exc_type2
assert exc_val ==exc_val2
assert exc_tb ==exc_tb2
def test_reraise(self):
# some collection of funny code
import sys
raises(ValueError, """
import sys
try:
raise ValueError
except:
try:
raise IndexError
finally:
assert sys.exc_info()[0] is ValueError
raise
""")
raises(ValueError, """
def foo():
import sys
assert sys.exc_info()[0] is ValueError
raise
try:
raise ValueError
except:
try:
raise IndexError
finally:
foo()
""")
raises(IndexError, """
def spam():
import sys
try:
raise KeyError
except KeyError:
pass
assert sys._getframe().f_exc_type is ValueError
try:
raise ValueError
except:
try:
raise IndexError
finally:
spam()
""")
try:
raise ValueError
except:
try:
raise KeyError
except:
ok = sys.exc_info()[0] is KeyError
assert ok
raises(IndexError, """
import sys
try:
raise ValueError
except:
some_traceback = sys.exc_info()[2]
try:
raise KeyError
except:
try:
raise IndexError, IndexError(), some_traceback
finally:
assert sys.exc_info()[0] is KeyError
assert sys.exc_info()[2] is not some_traceback
""")
def test_tuple_type(self):
def f():
raise ((StopIteration, 123), 456, 789)
raises(StopIteration, f)
def test_userclass(self):
# new-style classes can't be raised unless they inherit from
# BaseException
class A(object):
def __init__(self, x=None):
self.x = x
def f():
raise A
raises(TypeError, f)
def f():
raise A(42)
raises(TypeError, f)
def test_it(self):
class C:
pass
# this used to explode in the exception normalization step:
try:
{}[C]
except KeyError:
pass
def test_oldstyle_userclass(self):
class A:
def __init__(self, val=None):
self.val = val
class Sub(A):
pass
try:
raise Sub
except IndexError:
assert 0
except A, a:
assert a.__class__ is Sub
sub = Sub()
try:
raise sub
except IndexError:
assert 0
except A, a:
assert a is sub
try:
raise A, sub
except IndexError:
assert 0
except A, a:
assert a is sub
assert sub.val is None
try:
raise Sub, 42
except IndexError:
assert 0
except A, a:
assert a.__class__ is Sub
assert a.val == 42
try:
{}[5]
except A, a:
assert 0
except KeyError:
pass
# ---- KDAT/src/modules/uiConnect.py (repo: GavinGL/workspace, license: MIT) ----

# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'gui\Connectui.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_DialogConnect(object):
def setupUi(self, DialogConnect):
DialogConnect.setObjectName(_fromUtf8("DialogConnect"))
DialogConnect.setWindowModality(QtCore.Qt.ApplicationModal)
DialogConnect.resize(433, 383)
self.groupBox_Connect = QtGui.QGroupBox(DialogConnect)
self.groupBox_Connect.setGeometry(QtCore.QRect(10, 10, 411, 201))
self.groupBox_Connect.setObjectName(_fromUtf8("groupBox_Connect"))
self.pushButton_NetLink = QtGui.QPushButton(self.groupBox_Connect)
self.pushButton_NetLink.setGeometry(QtCore.QRect(40, 30, 75, 23))
self.pushButton_NetLink.setObjectName(_fromUtf8("pushButton_NetLink"))
self.pushButton_SerialLink = QtGui.QPushButton(self.groupBox_Connect)
self.pushButton_SerialLink.setGeometry(QtCore.QRect(40, 90, 75, 23))
self.pushButton_SerialLink.setObjectName(_fromUtf8("pushButton_SerialLink"))
self.lineEdit_NetName = QtGui.QLineEdit(self.groupBox_Connect)
self.lineEdit_NetName.setGeometry(QtCore.QRect(240, 20, 161, 20))
self.lineEdit_NetName.setObjectName(_fromUtf8("lineEdit_NetName"))
self.lineEdit_NetHostIP = QtGui.QLineEdit(self.groupBox_Connect)
self.lineEdit_NetHostIP.setGeometry(QtCore.QRect(240, 80, 161, 20))
self.lineEdit_NetHostIP.setObjectName(_fromUtf8("lineEdit_NetHostIP"))
self.lineEdit_NetUser = QtGui.QLineEdit(self.groupBox_Connect)
self.lineEdit_NetUser.setGeometry(QtCore.QRect(240, 140, 161, 20))
self.lineEdit_NetUser.setObjectName(_fromUtf8("lineEdit_NetUser"))
self.lineEdit_NetPasswd = QtGui.QLineEdit(self.groupBox_Connect)
self.lineEdit_NetPasswd.setGeometry(QtCore.QRect(240, 170, 161, 20))
self.lineEdit_NetPasswd.setObjectName(_fromUtf8("lineEdit_NetPasswd"))
self.label = QtGui.QLabel(self.groupBox_Connect)
self.label.setGeometry(QtCore.QRect(160, 19, 54, 20))
self.label.setObjectName(_fromUtf8("label"))
self.label_2 = QtGui.QLabel(self.groupBox_Connect)
self.label_2.setGeometry(QtCore.QRect(160, 49, 54, 20))
self.label_2.setObjectName(_fromUtf8("label_2"))
self.label_3 = QtGui.QLabel(self.groupBox_Connect)
self.label_3.setGeometry(QtCore.QRect(160, 78, 54, 20))
self.label_3.setObjectName(_fromUtf8("label_3"))
self.label_4 = QtGui.QLabel(self.groupBox_Connect)
self.label_4.setGeometry(QtCore.QRect(160, 109, 54, 20))
self.label_4.setObjectName(_fromUtf8("label_4"))
self.label_5 = QtGui.QLabel(self.groupBox_Connect)
self.label_5.setGeometry(QtCore.QRect(160, 137, 54, 21))
self.label_5.setObjectName(_fromUtf8("label_5"))
self.label_6 = QtGui.QLabel(self.groupBox_Connect)
self.label_6.setGeometry(QtCore.QRect(160, 166, 54, 20))
self.label_6.setObjectName(_fromUtf8("label_6"))
self.comboBox_NetProtocol = QtGui.QComboBox(self.groupBox_Connect)
self.comboBox_NetProtocol.setGeometry(QtCore.QRect(240, 50, 161, 22))
self.comboBox_NetProtocol.setStyleSheet(_fromUtf8(""))
self.comboBox_NetProtocol.setMaxVisibleItems(2)
self.comboBox_NetProtocol.setObjectName(_fromUtf8("comboBox_NetProtocol"))
self.comboBox_NetProtocol.addItem(_fromUtf8(""))
self.comboBox_NetProtocol.addItem(_fromUtf8(""))
self.lineEdit_NetPort = QtGui.QLineEdit(self.groupBox_Connect)
self.lineEdit_NetPort.setGeometry(QtCore.QRect(240, 110, 161, 20))
self.lineEdit_NetPort.setObjectName(_fromUtf8("lineEdit_NetPort"))
self.groupBox_Other = QtGui.QGroupBox(DialogConnect)
self.groupBox_Other.setGeometry(QtCore.QRect(0, 220, 421, 111))
self.groupBox_Other.setObjectName(_fromUtf8("groupBox_Other"))
self.label_7 = QtGui.QLabel(self.groupBox_Other)
self.label_7.setGeometry(QtCore.QRect(20, 20, 91, 16))
self.label_7.setObjectName(_fromUtf8("label_7"))
self.label_9 = QtGui.QLabel(self.groupBox_Other)
self.label_9.setGeometry(QtCore.QRect(20, 50, 91, 16))
self.label_9.setObjectName(_fromUtf8("label_9"))
self.label_10 = QtGui.QLabel(self.groupBox_Other)
self.label_10.setGeometry(QtCore.QRect(20, 80, 101, 16))
self.label_10.setObjectName(_fromUtf8("label_10"))
self.lineEdit_PlatformIP = QtGui.QLineEdit(self.groupBox_Other)
self.lineEdit_PlatformIP.setGeometry(QtCore.QRect(120, 50, 201, 20))
self.lineEdit_PlatformIP.setObjectName(_fromUtf8("lineEdit_PlatformIP"))
self.lineEdit_ReportPath = QtGui.QLineEdit(self.groupBox_Other)
self.lineEdit_ReportPath.setGeometry(QtCore.QRect(120, 80, 201, 20))
self.lineEdit_ReportPath.setObjectName(_fromUtf8("lineEdit_ReportPath"))
self.pushButton_CheckAddress = QtGui.QPushButton(self.groupBox_Other)
self.pushButton_CheckAddress.setGeometry(QtCore.QRect(330, 20, 75, 23))
self.pushButton_CheckAddress.setObjectName(_fromUtf8("pushButton_CheckAddress"))
self.pushButton_CheckPath = QtGui.QPushButton(self.groupBox_Other)
self.pushButton_CheckPath.setGeometry(QtCore.QRect(330, 80, 75, 23))
self.pushButton_CheckPath.setObjectName(_fromUtf8("pushButton_CheckPath"))
self.comboBox_LocalIP = QtGui.QComboBox(self.groupBox_Other)
self.comboBox_LocalIP.setGeometry(QtCore.QRect(120, 20, 201, 22))
self.comboBox_LocalIP.setObjectName(_fromUtf8("comboBox_LocalIP"))
self.pushButton_Save = QtGui.QPushButton(DialogConnect)
self.pushButton_Save.setGeometry(QtCore.QRect(110, 350, 75, 23))
self.pushButton_Save.setObjectName(_fromUtf8("pushButton_Save"))
self.pushButton_Connect = QtGui.QPushButton(DialogConnect)
self.pushButton_Connect.setGeometry(QtCore.QRect(260, 350, 75, 23))
self.pushButton_Connect.setObjectName(_fromUtf8("pushButton_Connect"))
self.retranslateUi(DialogConnect)
self.comboBox_NetProtocol.setCurrentIndex(0)
QtCore.QMetaObject.connectSlotsByName(DialogConnect)
DialogConnect.setTabOrder(self.pushButton_NetLink, self.pushButton_SerialLink)
DialogConnect.setTabOrder(self.pushButton_SerialLink, self.lineEdit_NetName)
DialogConnect.setTabOrder(self.lineEdit_NetName, self.comboBox_NetProtocol)
DialogConnect.setTabOrder(self.comboBox_NetProtocol, self.lineEdit_NetHostIP)
DialogConnect.setTabOrder(self.lineEdit_NetHostIP, self.lineEdit_NetPort)
DialogConnect.setTabOrder(self.lineEdit_NetPort, self.lineEdit_NetUser)
DialogConnect.setTabOrder(self.lineEdit_NetUser, self.lineEdit_NetPasswd)
DialogConnect.setTabOrder(self.lineEdit_NetPasswd, self.pushButton_CheckAddress)
DialogConnect.setTabOrder(self.pushButton_CheckAddress, self.comboBox_LocalIP)
DialogConnect.setTabOrder(self.comboBox_LocalIP, self.lineEdit_PlatformIP)
DialogConnect.setTabOrder(self.lineEdit_PlatformIP, self.pushButton_CheckPath)
DialogConnect.setTabOrder(self.pushButton_CheckPath, self.lineEdit_ReportPath)
DialogConnect.setTabOrder(self.lineEdit_ReportPath, self.pushButton_Save)
DialogConnect.setTabOrder(self.pushButton_Save, self.pushButton_Connect)
def retranslateUi(self, DialogConnect):
DialogConnect.setWindowTitle(_translate("DialogConnect", "新建会话属性", None))
self.groupBox_Connect.setTitle(_translate("DialogConnect", "连接", None))
self.pushButton_NetLink.setText(_translate("DialogConnect", "网络连接", None))
self.pushButton_SerialLink.setText(_translate("DialogConnect", "串口连接", None))
self.label.setText(_translate("DialogConnect", "名称:", None))
self.label_2.setText(_translate("DialogConnect", "协议:", None))
self.label_3.setText(_translate("DialogConnect", "主机ip:", None))
self.label_4.setText(_translate("DialogConnect", "端口号:", None))
self.label_5.setText(_translate("DialogConnect", "用户名:", None))
self.label_6.setText(_translate("DialogConnect", "密码:", None))
self.comboBox_NetProtocol.setItemText(0, _translate("DialogConnect", "SSH", None))
self.comboBox_NetProtocol.setItemText(1, _translate("DialogConnect", "TELNET", None))
self.groupBox_Other.setTitle(_translate("DialogConnect", "其它", None))
self.label_7.setText(_translate("DialogConnect", "本机IP地址:", None))
self.label_9.setText(_translate("DialogConnect", "平台IP地址:", None))
self.label_10.setText(_translate("DialogConnect", "报告存放路径:", None))
self.pushButton_CheckAddress.setText(_translate("DialogConnect", "自动检测", None))
self.pushButton_CheckPath.setText(_translate("DialogConnect", "选择路径", None))
self.pushButton_Save.setText(_translate("DialogConnect", "保存", None))
self.pushButton_Connect.setText(_translate("DialogConnect", "连接", None))
| 63.339869 | 94 | 0.725725 | 1,047 | 9,691 | 6.497612 | 0.154728 | 0.047626 | 0.084081 | 0.047332 | 0.256211 | 0.160517 | 0.148758 | 0.024107 | 0.024107 | 0.024107 | 0 | 0.042524 | 0.162831 | 9,691 | 152 | 95 | 63.756579 | 0.796006 | 0.019296 | 0 | 0.057971 | 1 | 0 | 0.077055 | 0.004709 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036232 | false | 0.036232 | 0.007246 | 0.021739 | 0.072464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a303f401a3dd27becb4b9306fbf20adeb3023171 | 191 | py | Python | 1. Python/2. Flow of Control/15.print_natural_number.py | theparitoshkumar/Data-Structures-Algorithms-using-python | 445b9dee56bca637f21267114cc1686d333ea4c4 | [
"Apache-2.0"
] | 1 | 2021-12-05T18:02:15.000Z | 2021-12-05T18:02:15.000Z | 1. Python/2. Flow of Control/15.print_natural_number.py | theparitoshkumar/Data-Structures-Algorithms-using-python | 445b9dee56bca637f21267114cc1686d333ea4c4 | [
"Apache-2.0"
] | null | null | null | 1. Python/2. Flow of Control/15.print_natural_number.py | theparitoshkumar/Data-Structures-Algorithms-using-python | 445b9dee56bca637f21267114cc1686d333ea4c4 | [
"Apache-2.0"
] | null | null | null | """
Syntax of while Loop
while test_condition:
body of while
"""
#Print first 5 natural numbers using while loop
count = 1
while count <= 5:
print(count)
count += 1
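A companion sketch (editor's addition, not part of the lab file): the same `while` pattern shown above, reused to sum the first 5 natural numbers.

```python
# Sum the first 5 natural numbers with the same while-loop shape as above.
total = 0
count = 1
while count <= 5:
    total += count
    count += 1
print(total)  # 15
```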
| 12.733333 | 47 | 0.623037 | 27 | 191 | 4.37037 | 0.555556 | 0.118644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02963 | 0.293194 | 191 | 15 | 48 | 12.733333 | 0.844444 | 0.565445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a3040663b193b6306818bf3c49f8039fdf3d79c0 | 1,570 | py | Python | emplacer.py | permamodel/ILAMB-tools | 43494a87d8f8e7994f4f9da0609b0764fc862494 | [
"MIT"
] | null | null | null | emplacer.py | permamodel/ILAMB-tools | 43494a87d8f8e7994f4f9da0609b0764fc862494 | [
"MIT"
] | null | null | null | emplacer.py | permamodel/ILAMB-tools | 43494a87d8f8e7994f4f9da0609b0764fc862494 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""Puts MsTMIP output files in a standardized location for ILAMB.
Examples
--------
Copy all the `rh` files from *transfer* to the appropriate MsTMIP
directories in *MODELS/original* with:
$ python emplacer.py
"""
import os
import shutil
import glob
import string
ILAMB_dir = '/nas/data/ILAMB'
transfer_dir = os.path.join(ILAMB_dir, 'transfer')
models_dir = os.path.join(ILAMB_dir, 'MODELS', 'original')
model_list = [
'BIOME-BGC',
'CLASS-CTEM-N',
'CLM4',
'CLM4VIC',
'DLEM',
'GTEC',
'ISAM',
'LPJ-wsl',
'ORCHIDEE-LSCE',
'SiB3',
'SiBCASA',
'TEM6',
'TRIPLEX-GHG',
'VEGAS2.1',
'VISIT',
]
mstmip_variable = 'rh'
mip_table = 'Lmon'
for model in model_list:
try:
pattern = string.join([mstmip_variable, mip_table, model, '*'], '_')
transfer_file_list = glob.glob(os.path.join(transfer_dir, pattern))
transfer_file = transfer_file_list.pop()
print '\nTransfer:', transfer_file
os.chmod(transfer_file, 0664)
except IndexError:
print '\nVariable {}, MIP {}, not matched to model {}.'.format(
mstmip_variable, mip_table, model)
else:
mstmip_variable_dir = os.path.join(models_dir, model, mstmip_variable)
print 'Destination:', mstmip_variable_dir
try:
os.mkdir(mstmip_variable_dir)
except OSError:
print 'Destination directory exists'
os.chmod(mstmip_variable_dir, 0775) # mode isn't set properly
shutil.copy(transfer_file, mstmip_variable_dir)
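A portability note (editor's addition): `string.join` above is Python-2-only. On Python 3 the same pattern string is built with the `str.join` method; here `'rh'`, `'Lmon'`, and `'CLM4'` stand in for `mstmip_variable`, `mip_table`, and one entry of `model_list`.

```python
# Python 3 equivalent of the Python-2 string.join(...) call above.
pattern = "_".join(["rh", "Lmon", "CLM4", "*"])
print(pattern)  # rh_Lmon_CLM4_*
```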
| 24.920635 | 78 | 0.643949 | 196 | 1,570 | 4.979592 | 0.479592 | 0.129098 | 0.08709 | 0.039959 | 0.098361 | 0.043033 | 0 | 0 | 0 | 0 | 0 | 0.011542 | 0.227389 | 1,570 | 62 | 79 | 25.322581 | 0.793075 | 0.028025 | 0 | 0.044444 | 0 | 0 | 0.188073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.088889 | null | null | 0.088889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a3041e23ddc1c3cb95ad6ec5003ef736f4d25b06 | 365 | py | Python | resources/endpoints.py | aliie62/paranuara_planet | e8a1c5c4eea746d5c8dfc2ee64993a91837a892a | [
"MIT"
] | null | null | null | resources/endpoints.py | aliie62/paranuara_planet | e8a1c5c4eea746d5c8dfc2ee64993a91837a892a | [
"MIT"
] | null | null | null | resources/endpoints.py | aliie62/paranuara_planet | e8a1c5c4eea746d5c8dfc2ee64993a91837a892a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from flask_restful import Api
from resources.person import PersonResource
from resources.company import CompanyResource
#Define app end points
def get_endpoints(app):
api = Api(app)
api.add_resource(PersonResource,'/people','/people/<string:username>')
api.add_resource(CompanyResource,'/company/<string:name>')
return api
| 28.076923 | 74 | 0.747945 | 46 | 365 | 5.847826 | 0.586957 | 0.096654 | 0.104089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003155 | 0.131507 | 365 | 12 | 75 | 30.416667 | 0.845426 | 0.115068 | 0 | 0 | 0 | 0 | 0.16875 | 0.146875 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0967225064580656eeb6e147b33fa45f33f1ff66 | 5,575 | py | Python | cms/models.py | opendream/asip | 20583aca6393102d425401d55ea32ac6b78be048 | [
"MIT"
] | null | null | null | cms/models.py | opendream/asip | 20583aca6393102d425401d55ea32ac6b78be048 | [
"MIT"
] | 8 | 2020-03-24T17:11:49.000Z | 2022-01-13T01:18:11.000Z | cms/models.py | opendream/asip | 20583aca6393102d425401d55ea32ac6b78be048 | [
"MIT"
] | null | null | null | import re
from datetime import date
import tagging
from ckeditor.fields import RichTextField
from django.core import validators
from django.core.urlresolvers import reverse
from django.db import models
from django.template.defaultfilters import truncatechars
from django.utils import timezone
from django.conf import settings
from django.utils.translation import ugettext_lazy as _
from common.constants import SUMMARY_MAX_LENGTH, STATUS_PENDING, STATUS_PUBLISHED
from common.functions import instance_get_thumbnail
from common.models import PriorityModel, AbstractPermalink
import files_widget
from organization.models import Organization
from party.models import Party
from tagging_autocomplete_tagit.models import TagAutocompleteTagItField
from taxonomy.models import ArticleCategory
USER_MODEL = settings.AUTH_USER_MODEL
class CommonCms(AbstractPermalink):
title = models.CharField(max_length=255)
image = files_widget.ImageField(null=True, blank=True)
summary = models.TextField(null=True, blank=True)
description = RichTextField(null=True, blank=True)
topics = models.ManyToManyField('taxonomy.Topic', null=True, blank=True, related_name='cms_topics')
party_created_by = models.ForeignKey(Party, related_name='cms_party_created_by')
created_by = models.ForeignKey(USER_MODEL, related_name='cms_created_by')
published_by = models.ForeignKey(USER_MODEL, related_name='cms_published_by', null=True, blank=True)
status = models.IntegerField(default=STATUS_PUBLISHED)
created = models.DateTimeField(auto_now_add=True, null=True, blank=True)
changed = models.DateTimeField(auto_now=True, null=True, blank=True)
published = models.DateTimeField(null=True, blank=True)
is_promoted = models.BooleanField(default=True)
facebook_url = models.URLField(max_length=255, null=True, blank=True)
twitter_url = models.URLField(max_length=255, null=True, blank=True)
homepage_url = models.URLField(max_length=1024, null=True, blank=True)
uuid = models.CharField(max_length=255, null=True, blank=True)
class Meta:
ordering = ['-id']
def get_display_name(self):
return self.title or ''
def get_thumbnail(self):
return instance_get_thumbnail(self, crop=None, size='x360')
def get_thumbnail_in_primary(self):
return instance_get_thumbnail(self, size='150x150', crop=None, upscale=False)
def get_summary(self):
return truncatechars(self.summary or self.description or '', SUMMARY_MAX_LENGTH)
def get_absolute_url(self):
if hasattr(self, 'news') and self.news:
return self.news.get_absolute_url()
elif hasattr(self, 'event') and self.event:
return self.event.get_absolute_url()
return ''
def save(self, *args, **kwargs):
super(CommonCms, self).save(*args, **kwargs)
class News(CommonCms):
# relation
#organization = models.ForeignKey(Organization, related_name='news_organization', null=True, blank=True)
# Taxonomy
ARTICLE_TYPE_CHOICES = (
('news', _('News')),
('knowledge-tools', _('Knowledge & Tools')),
)
# deprecate
article_category = models.CharField(max_length=255, choices=ARTICLE_TYPE_CHOICES, default='news')
categories = models.ManyToManyField('taxonomy.ArticleCategory', null=True, blank=True, related_name='cms_categories')
tags = TagAutocompleteTagItField(max_tags=False, null=True, blank=True, max_length=2048)
files = files_widget.XFilesField(verbose_name='File Attachment', null=True, blank=True)
def get_absolute_url(self):
first_category = None
try:
first_category = self.categories.filter(level=0).first()
except (ArticleCategory.DoesNotExist, ValueError):
try:
first_category = self._categories[0]
except (AttributeError, IndexError):
pass
if first_category:
if first_category.permalink == 'news':
return reverse('news_detail', args=[self.permalink, self.id])
else:
return reverse('article_detail', args=[first_category.permalink, self.permalink, self.id])
# deprecate
if self.article_category == 'news':
return reverse('news_detail', args=[self.permalink, self.id])
else:
return reverse('article_detail', args=[self.article_category, self.permalink, self.id])
def get_files(self):
files = []
if self.files:
files = [ settings.MEDIA_URL + path for path in self.files.split('\n')]
return files
tagging.register(News, tag_descriptor_attr='tag_set')
class Event(CommonCms):
#organization = models.ForeignKey(Organization, related_name='event_organization', null=True, blank=True)
location = models.TextField(null=True, blank=True)
start_date = models.DateField(null=True, blank=True, default=timezone.now)
end_date = models.DateField(null=True, blank=True)
time = models.CharField(max_length=255, null=True, blank=True)
phone = models.TextField(null=True, blank=True)
email = models.EmailField(
max_length=255,
null=True,
blank=True
)
tags = TagAutocompleteTagItField(max_tags=False, null=True, blank=True, max_length=2048)
def get_absolute_url(self):
return reverse('event_detail', args=[self.permalink, self.id])
def get_phones(self):
return [phone.strip() for phone in self.phone.split(',')]
tagging.register(Event, tag_descriptor_attr='tag_set')
| 35.28481 | 121 | 0.713722 | 684 | 5,575 | 5.647661 | 0.245614 | 0.049702 | 0.080766 | 0.105617 | 0.365001 | 0.274657 | 0.190267 | 0.148071 | 0.126844 | 0.104064 | 0 | 0.009655 | 0.182601 | 5,575 | 157 | 122 | 35.509554 | 0.838051 | 0.043767 | 0 | 0.103774 | 0 | 0 | 0.05278 | 0.004508 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09434 | false | 0.009434 | 0.179245 | 0.056604 | 0.716981 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
096d828147557fce848b12e0d838b46e8b14b824 | 777 | py | Python | File for Lab 7/lab7ex3.py | lifeplay2019/ECOR1051-project | fbd0ed3651adc2cfc4dca06d155a03fb9fcfb3f3 | [
"MIT"
] | null | null | null | File for Lab 7/lab7ex3.py | lifeplay2019/ECOR1051-project | fbd0ed3651adc2cfc4dca06d155a03fb9fcfb3f3 | [
"MIT"
] | null | null | null | File for Lab 7/lab7ex3.py | lifeplay2019/ECOR1051-project | fbd0ed3651adc2cfc4dca06d155a03fb9fcfb3f3 | [
"MIT"
] | null | null | null | import math
def factorial(n: int) -> int:
""" Return n! for postive values of n.
>>> factorial(1)
1
>>> factorial(2)
2
>>> factorial(3)
6
>>> factorial(4)
24
"""
fact = 1
for i in range(2, n+1):
        fact = fact * i
return fact
testpass = 0
testfail = 0
for j in range(1, 5):
expected = math.factorial(j)
    actual = factorial(j)
if expected == actual:
print ("Expected Result: ", expected, " Actual Result: ", actual, "\n Test Passed", sep='')
testpass += 1
else:
print ("Expected Result: ", expected, " Actual Result: ", actual, "\r Test Failed", sep='')
testfail += 1
print(testpass, " tests passed \r", testfail, " tests failed\r", sep='') | 25.064516 | 100 | 0.528958 | 97 | 777 | 4.237113 | 0.391753 | 0.10219 | 0.092457 | 0.131387 | 0.218978 | 0.218978 | 0.218978 | 0 | 0 | 0 | 0 | 0.034156 | 0.32175 | 777 | 31 | 101 | 25.064516 | 0.745731 | 0.144144 | 0 | 0 | 0 | 0 | 0.212585 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.222222 | 0.055556 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0972b611f1f91b71538c0a5d76908add796f3ab9 | 5,753 | py | Python | jstokens.py | zhongweiy/jsinterpreter | 08262761b7213fd0d9573319ed7e1d5837a7d038 | [
"Apache-2.0"
] | 4 | 2016-01-23T13:58:43.000Z | 2016-07-04T04:11:59.000Z | jstokens.py | zhongweiy/jsinterpreter | 08262761b7213fd0d9573319ed7e1d5837a7d038 | [
"Apache-2.0"
] | null | null | null | jstokens.py | zhongweiy/jsinterpreter | 08262761b7213fd0d9573319ed7e1d5837a7d038 | [
"Apache-2.0"
] | null | null | null | # JavaScript: Numbers & Strings
#
# handling Numbers, Identifiers and Strings in a simpler version.
#
# a JavaScript IDENTIFIER must start with an upper- or
# lower-case character. It can then contain any number of upper- or
# lower-case characters or underscores. Its token.value is the textual
# string of the identifier.
# Yes: my_age
# Yes: cRaZy
# No: _starts_with_underscore
#
# a JavaScript NUMBER is one or more digits. A NUMBER
# can start with an optional negative sign. A NUMBER can contain a decimal
# point, which can then be followed by zero or more additional digits. Do
# not worry about hexadecimal (only base 10 is allowed in this problem).
# The token.value of a NUMBER is its floating point value (NOT a string).
# Yes: 123
# Yes: -456
# Yes: 78.9
# Yes: 10.
# No: +5
# No: 1.2.3
#
# a JavaScript STRING is zero or more characters
# contained in double quotes. A STRING may contain escaped characters.
# Notably, \" does not end a string. The token.value of a STRING is
# its contents (not including the outer double quotes).
# Yes: "hello world"
# Yes: "this has \"escaped quotes\""
# No: "no"t one string"
#
# Hint: float("2.3") = 2.3
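Before the ply rules below, a quick self-check (editor's sketch, using only the standard `re` module) that anchored versions of the NUMBER and STRING patterns accept and reject exactly the examples listed in the comments above:

```python
import re

# Anchored versions of the token patterns defined further down.
NUMBER = re.compile(r'-?[0-9]+(?:\.[0-9]*)?$')
STRING = re.compile(r'"((\\.)|[^"])*"$')

# NUMBER examples from the comments above.
for ok in ('123', '-456', '78.9', '10.'):
    assert NUMBER.match(ok)
for bad in ('+5', '1.2.3'):
    assert not NUMBER.match(bad)

# STRING examples: escaped quotes stay inside, a bare quote ends the string.
assert STRING.match(r'"hello world"')
assert STRING.match(r'"this has \"escaped quotes\""')
assert not STRING.match(r'"no"t one string"')
```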
import ply.lex as lex
tokens = (
'ANDAND', # &&
'COMMA', # ,
'DIVIDE', # /
'ELSE', # else
'EQUAL', # =
'EQUALEQUAL', # ==
'FALSE', # false
'FUNCTION', # function
'GE', # >=
'GT', # >
'IDENTIFIER', # identifier
'IF', # if
'LBRACE', # {
'LE', # <=
'LPAREN', # (
'LT', # <
'MINUS', # -
'MOD', # %
'NOT', # !
'NUMBER', # number
'OROR', # ||
'PLUS', # +
'RBRACE', # }
'RETURN', # return
'RPAREN', # )
'SEMICOLON', # ;
'STRING', # string
'TIMES', # *
'TRUE', # true
'VAR', # var
'WHILE', # while
)
t_ignore = ' \t\v\r' # whitespace
def t_newline(t):
r'\n'
t.lexer.lineno += 1
states = (
('comment', 'exclusive'),
)
t_ANDAND = r'&&'
t_COMMA = r','
t_DIVIDE = r'/'
t_EQUALEQUAL = r'=='
t_EQUAL = r'='
t_GE = r'>='
t_GT = r'>'
t_LBRACE = r'{'
t_LE = r'<='
t_LPAREN = r'\('
t_LT = r'<'
t_MINUS = r'\-'
t_MOD = r'%'
t_NOT = r'!'
t_OROR = r'\|\|'
t_PLUS = r'\+'
t_RBRACE = r'}'
t_RPAREN = r'\)'
t_SEMICOLON = r';'
t_TIMES = r'\*'
t_IDENTIFIER = r'[a-zA-Z][_a-zA-Z]*'
def t_VAR(t):
r'var'
return t
def t_TRUE(t):
r'true'
return t
def t_RETURN(t):
r'return'
return t
def t_IF(t):
r'if'
return t
def t_FUNCTION(t):
r'function'
return t
def t_FALSE(t):
r'false'
return t
def t_ELSE(t):
r'else'
return t
def t_NUMBER(t):
r'-?[0-9]+(?:\.[0-9]*)?'
t.value = float(t.value)
return t
def t_STRING(t):
r'"((\\.)|[^"])*"'
t.value = t.value[1:-1]
return t
def t_WHILE(t):
r'while'
return t
def t_comment(t):
r'\/\*'
t.lexer.begin('comment')
def t_comment_end(t):
r'\*\/'
t.lexer.lineno += t.value.count('\n')
t.lexer.begin('INITIAL')
pass
t_comment_ignore = ' \t'
def t_comment_error(t):
    t.lexer.skip(1)
def t_eolcomment(t):
r'//.*'
pass
def t_error(t):
print "JavaScript Lexer: Illegal character " + t.value[0]
t.lexer.skip(1)
# This handle // end of line comments as well as /* delimited comments */.
#
# We will assume that JavaScript is case sensitive and that keywords like
# 'if' and 'true' must be written in lowercase. There are 26 possible
# tokens that you must handle. The 'tokens' variable below has been
# initialized below, listing each token's formal name (i.e., the value of
# token.type). In addition, each token has its associated textual string
# listed in a comment. For example, your lexer must convert && to a token
# with token.type 'ANDAND' (unless the && is found inside a comment).
#
# Hint 1: Use an exclusive state for /* comments */. You may want to define
# t_comment_ignore and t_comment_error as well.
if __name__ == '__main__':
lexer = lex.lex()
def test_lexer1(input_string):
lexer.input(input_string)
result = [ ]
while True:
tok = lexer.token()
if not tok: break
result = result + [tok.type,tok.value]
return result
input1 = 'some_identifier -12.34 "a \\"escape\\" b"'
output1 = ['IDENTIFIER', 'some_identifier', 'NUMBER', -12.34, 'STRING',
'a \\"escape\\" b']
print test_lexer1(input1) == output1
input2 = '-12x34'
output2 = ['NUMBER', -12.0, 'IDENTIFIER', 'x', 'NUMBER', 34.0]
print test_lexer1(input2) == output2
def test_lexer2(input_string):
lexer.input(input_string)
result = [ ]
while True:
tok = lexer.token()
if not tok: break
result = result + [tok.type]
return result
input3 = """ - ! && () * , / ; { || } + < <= = == > >= else false function
if return true var """
output3 = ['MINUS', 'NOT', 'ANDAND', 'LPAREN', 'RPAREN', 'TIMES', 'COMMA',
'DIVIDE', 'SEMICOLON', 'LBRACE', 'OROR', 'RBRACE', 'PLUS', 'LT', 'LE',
'EQUAL', 'EQUALEQUAL', 'GT', 'GE', 'ELSE', 'FALSE', 'FUNCTION', 'IF',
'RETURN', 'TRUE', 'VAR']
test_output3 = test_lexer2(input3)
print test_lexer2(input3) == output3
input4 = """
if // else mystery
=/*=*/=
true /* false
*/ return"""
output4 = ['IF', 'EQUAL', 'EQUAL', 'TRUE', 'RETURN']
print test_lexer2(input4) == output4
| 25.122271 | 85 | 0.541804 | 756 | 5,753 | 4.029101 | 0.291005 | 0.015102 | 0.03283 | 0.036113 | 0.091924 | 0.081418 | 0.081418 | 0.060407 | 0.060407 | 0.060407 | 0 | 0.019221 | 0.303668 | 5,753 | 228 | 86 | 25.232456 | 0.741138 | 0.349383 | 0 | 0.173333 | 0 | 0.006667 | 0.219645 | 0.00573 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.013333 | 0.006667 | null | null | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09733c80cfdeca2a2ff41cf7d76c224a31e7fb01 | 302 | py | Python | accelerator/__init__.py | michuschenk/accelerator | 93f7ace68f7e67ec20ba1ea9590aee66a7da88a4 | [
"MIT"
] | null | null | null | accelerator/__init__.py | michuschenk/accelerator | 93f7ace68f7e67ec20ba1ea9590aee66a7da88a4 | [
"MIT"
] | null | null | null | accelerator/__init__.py | michuschenk/accelerator | 93f7ace68f7e67ec20ba1ea9590aee66a7da88a4 | [
"MIT"
] | null | null | null | import logging
import matplotlib.pyplot as plt
from .beam import Beam
from .lattice import Lattice
from .transfer_matrix import TransferMatrix
logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
plt.rcParams["figure.facecolor"] = "white"
| 20.133333 | 43 | 0.807947 | 37 | 302 | 6.459459 | 0.621622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102649 | 302 | 14 | 44 | 21.571429 | 0.881919 | 0 | 0 | 0 | 0 | 0 | 0.069536 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.555556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
097dcbd748caa1351af1b837ef400915cf09bd02 | 990 | py | Python | cgi-bin/w3-query-res.py | TanyaAdams1/WST | fdf334741412f8b3634c444467f53279b1e4fba6 | [
"Apache-2.0"
] | null | null | null | cgi-bin/w3-query-res.py | TanyaAdams1/WST | fdf334741412f8b3634c444467f53279b1e4fba6 | [
"Apache-2.0"
] | null | null | null | cgi-bin/w3-query-res.py | TanyaAdams1/WST | fdf334741412f8b3634c444467f53279b1e4fba6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import re
import fcntl
print "Content-type: text/html\n\n"
try:
f = open("../homework/w3/query/w3-query.dat")
	fcntl.flock(f, fcntl.LOCK_SH)  # shared lock while reading
template = "".join(open("../homework/w3/src/table.html").readlines()).split("<!-- insert point-->")
data = ""
lines = f.readlines()
	fcntl.flock(f, fcntl.LOCK_UN)
f.close()
i = 0
for line in lines:
data += "<tr>\n"
fields = re.split("\t+", line)
for field in fields[0:4]:
data += "<td>" + str(field) + "</td>\n"
data += """<td>
<div class="form-check">
<input class="form-check-input" type="checkbox" name="""
data += str(i) + """ value="1">
</td>
</tr>\n
"""
i += 1
	if len(lines) == 0:
data = "<tr>\n<h3>Nothing</h3></tr>"
print template[0] + data + template[1]
except Exception as e:
print "{code:0,msg:\"Internal error\"}\n"
exit(1)
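For reference (editor's sketch, Python 3, Unix-only): the shared-lock read pattern the CGI script uses, with the standard `fcntl.LOCK_SH`/`fcntl.LOCK_UN` flags and a temporary file standing in for `w3-query.dat`.

```python
import fcntl
import os
import tempfile

# Hypothetical data file standing in for w3-query.dat.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("a\tb\tc\td\n")

with open(path) as f:
    fcntl.flock(f, fcntl.LOCK_SH)   # shared lock while reading
    lines = f.readlines()
    fcntl.flock(f, fcntl.LOCK_UN)   # release before closing

os.remove(path)
print(lines)  # ['a\tb\tc\td\n']
```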
| 28.285714 | 103 | 0.506061 | 134 | 990 | 3.69403 | 0.514925 | 0.018182 | 0.056566 | 0.064646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021186 | 0.284848 | 990 | 34 | 104 | 29.117647 | 0.677966 | 0.020202 | 0 | 0 | 0 | 0 | 0.356037 | 0.116615 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.064516 | null | null | 0.096774 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
098c54765217dc8e251e2022a8de13677025cc3f | 1,465 | py | Python | dcaeapplib/setup.py | onap/dcaegen2-utils | fc8a6edbf7d5f50096fc687ba964b464adcf2bb0 | [
"Apache-2.0",
"CC-BY-4.0"
] | 2 | 2019-09-24T11:13:00.000Z | 2020-07-14T14:21:58.000Z | dcaeapplib/setup.py | alex-sh2020/dcaegen2-utils | 1d2990b9b712a019bf553863a847eff90066f58a | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | dcaeapplib/setup.py | alex-sh2020/dcaegen2-utils | 1d2990b9b712a019bf553863a847eff90066f58a | [
"Apache-2.0",
"CC-BY-4.0"
] | 2 | 2020-07-14T19:16:15.000Z | 2021-10-15T15:50:08.000Z | # org.onap.dcae
# ============LICENSE_START====================================================
# Copyright (c) 2018-2019 AT&T Intellectual Property. All rights reserved.
# =============================================================================
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============LICENSE_END======================================================
#
# ECOMP is a trademark and service mark of AT&T Intellectual Property.
from setuptools import setup, find_packages
setup(
name="dcaeapplib",
version="1.0.0",
packages=find_packages(),
author="Andrew Gauld",
author_email="ag1282@att.com",
description=("Library for building DCAE analytics applications"),
license="Apache 2.0",
url="https://gerrit.onap.org/r/#/admin/projects/dcaegen2/utils",
zip_safe=True,
install_requires=["onap-dcae-cbs-docker-client>=2.1.0"],
entry_points={"console_scripts": ["reconfigure.sh=dcaeapplib:reconfigure"]},
)
class PlayerCharacter:
def __init__(self, name, age):
self.name = name
self.age = age
def run(self):
print("run")
return "done"
player1 = PlayerCharacter("Cindy", 50)
player2 = PlayerCharacter("Tom", 20)
print(player1.name)
print(player1.age)
print(player2.name)
print(player2.age)
print(player1.run())
from ophyd import Device, Component as Cpt
from ophyd.areadetector.base import (ADBase, ADComponent as ADCpt, ad_group,
EpicsSignalWithRBV as SignalWithRBV)
from ophyd.areadetector.plugins import PluginBase
from ophyd.areadetector.trigger_mixins import TriggerBase, ADTriggerStatus
from ophyd.device import DynamicDeviceComponent as DDC, Staged
from ophyd.signal import (Signal, EpicsSignalRO, EpicsSignal)
from ophyd.quadem import QuadEM
import time as ttime
class StatsPluginCSX(PluginBase):
"""This supports changes to time series PV names in AD 3-3
Due to https://github.com/areaDetector/ADCore/pull/333
"""
_default_suffix = 'Stats1:'
_suffix_re = r'Stats\d:'
_html_docs = ['NDPluginStats.html']
_plugin_type = 'NDPluginStats'
_default_configuration_attrs = (PluginBase._default_configuration_attrs + (
'centroid_threshold', 'compute_centroid', 'compute_histogram',
'compute_profiles', 'compute_statistics', 'bgd_width',
'hist_size', 'hist_min', 'hist_max', 'profile_size',
'profile_cursor')
)
bgd_width = ADCpt(SignalWithRBV, 'BgdWidth')
centroid_threshold = ADCpt(SignalWithRBV, 'CentroidThreshold')
centroid = DDC(ad_group(EpicsSignalRO,
(('x', 'CentroidX_RBV'),
('y', 'CentroidY_RBV'))),
doc='The centroid XY',
default_read_attrs=('x', 'y'))
compute_centroid = ADCpt(SignalWithRBV, 'ComputeCentroid', string=True)
compute_histogram = ADCpt(SignalWithRBV, 'ComputeHistogram', string=True)
compute_profiles = ADCpt(SignalWithRBV, 'ComputeProfiles', string=True)
compute_statistics = ADCpt(SignalWithRBV, 'ComputeStatistics', string=True)
cursor = DDC(ad_group(SignalWithRBV,
(('x', 'CursorX'),
('y', 'CursorY'))),
doc='The cursor XY',
default_read_attrs=('x', 'y'))
hist_entropy = ADCpt(EpicsSignalRO, 'HistEntropy_RBV')
hist_max = ADCpt(SignalWithRBV, 'HistMax')
hist_min = ADCpt(SignalWithRBV, 'HistMin')
hist_size = ADCpt(SignalWithRBV, 'HistSize')
histogram = ADCpt(EpicsSignalRO, 'Histogram_RBV')
max_size = DDC(ad_group(EpicsSignal,
(('x', 'MaxSizeX'),
('y', 'MaxSizeY'))),
doc='The maximum size in XY',
default_read_attrs=('x', 'y'))
max_value = ADCpt(EpicsSignalRO, 'MaxValue_RBV')
max_xy = DDC(ad_group(EpicsSignalRO,
(('x', 'MaxX_RBV'),
('y', 'MaxY_RBV'))),
doc='Maximum in XY',
default_read_attrs=('x', 'y'))
mean_value = ADCpt(EpicsSignalRO, 'MeanValue_RBV')
min_value = ADCpt(EpicsSignalRO, 'MinValue_RBV')
min_xy = DDC(ad_group(EpicsSignalRO,
(('x', 'MinX_RBV'),
('y', 'MinY_RBV'))),
doc='Minimum in XY',
default_read_attrs=('x', 'y'))
net = ADCpt(EpicsSignalRO, 'Net_RBV')
profile_average = DDC(ad_group(EpicsSignalRO,
(('x', 'ProfileAverageX_RBV'),
('y', 'ProfileAverageY_RBV'))),
doc='Profile average in XY',
default_read_attrs=('x', 'y'))
profile_centroid = DDC(ad_group(EpicsSignalRO,
(('x', 'ProfileCentroidX_RBV'),
('y', 'ProfileCentroidY_RBV'))),
doc='Profile centroid in XY',
default_read_attrs=('x', 'y'))
profile_cursor = DDC(ad_group(EpicsSignalRO,
(('x', 'ProfileCursorX_RBV'),
('y', 'ProfileCursorY_RBV'))),
doc='Profile cursor in XY',
default_read_attrs=('x', 'y'))
profile_size = DDC(ad_group(EpicsSignalRO,
(('x', 'ProfileSizeX_RBV'),
('y', 'ProfileSizeY_RBV'))),
doc='Profile size in XY',
default_read_attrs=('x', 'y'))
profile_threshold = DDC(ad_group(EpicsSignalRO,
(('x', 'ProfileThresholdX_RBV'),
('y', 'ProfileThresholdY_RBV'))),
doc='Profile threshold in XY',
default_read_attrs=('x', 'y'))
set_xhopr = ADCpt(EpicsSignal, 'SetXHOPR')
set_yhopr = ADCpt(EpicsSignal, 'SetYHOPR')
sigma_xy = ADCpt(EpicsSignalRO, 'SigmaXY_RBV')
sigma_x = ADCpt(EpicsSignalRO, 'SigmaX_RBV')
sigma_y = ADCpt(EpicsSignalRO, 'SigmaY_RBV')
sigma = ADCpt(EpicsSignalRO, 'Sigma_RBV')
# ts_acquiring = ADCpt(EpicsSignal, 'TS:TSAcquiring')
ts_centroid = DDC(ad_group(EpicsSignal,
(('x', 'TS:TSCentroidX'),
('y', 'TS:TSCentroidY'))),
doc='Time series centroid in XY',
default_read_attrs=('x', 'y'))
# ts_control = ADCpt(EpicsSignal, 'TS:TSControl', string=True)
# ts_current_point = ADCpt(EpicsSignal, 'TS:TSCurrentPoint')
ts_max_value = ADCpt(EpicsSignal, 'TS:TSMaxValue')
ts_max = DDC(ad_group(EpicsSignal,
(('x', 'TS:TSMaxX'),
('y', 'TS:TSMaxY'))),
doc='Time series maximum in XY',
default_read_attrs=('x', 'y'))
ts_mean_value = ADCpt(EpicsSignal, 'TS:TSMeanValue')
ts_min_value = ADCpt(EpicsSignal, 'TS:TSMinValue')
ts_min = DDC(ad_group(EpicsSignal,
(('x', 'TS:TSMinX'),
('y', 'TS:TSMinY'))),
doc='Time series minimum in XY',
default_read_attrs=('x', 'y'))
ts_net = ADCpt(EpicsSignal, 'TS:TSNet')
# ts_num_points = ADCpt(EpicsSignal, 'TS:TSNumPoints')
ts_read = ADCpt(EpicsSignal, 'TS:TSRead')
ts_sigma = ADCpt(EpicsSignal, 'TS:TSSigma')
ts_sigma_x = ADCpt(EpicsSignal, 'TS:TSSigmaX')
ts_sigma_xy = ADCpt(EpicsSignal, 'TS:TSSigmaXY')
ts_sigma_y = ADCpt(EpicsSignal, 'TS:TSSigmaY')
ts_total = ADCpt(EpicsSignal, 'TS:TSTotal')
total = ADCpt(EpicsSignalRO, 'Total_RBV')
"""
This module contains classes for representing JudgeProtocol object
For further information visit http://codeforces.com/api/help/objects#Hack
"""
from . import BaseJsonObject
__all__ = ['JudgeProtocol']
class JudgeProtocol(BaseJsonObject):
"""
This class represents JudgeProtocol object
For further information visit http://codeforces.com/api/help/objects#Hack
"""
def __init__(self, data=None):
self._manual = None
self._protocol = None
self._verdict = None
super().__init__(data)
def __repr__(self):
return '<JudgeProtocol: {}>'.format(self.verdict)
def load_required_fields_from_dict(self, values):
super().load_required_fields_from_dict(values)
self.manual = values['manual']
self.protocol = values['protocol']
self.verdict = values['verdict']
@property
def manual(self):
"""
:return: True if the test for the hack was entered manually, otherwise False
Returns None if not initialized
:rtype: bool
"""
return self._manual
@manual.setter
def manual(self, value):
"""
:param value: True if the test for the hack was entered manually, otherwise False
:type value: bool or str
"""
assert isinstance(value, (bool, str))
self._manual = bool(value)
@property
def protocol(self):
"""
Localized.
:return: Human-readable description of judge protocol or None if not initialized
:rtype: str
"""
return self._protocol
@protocol.setter
def protocol(self, value):
"""
Localized.
:param value: Human-readable description of judge protocol
:type value: str
"""
assert isinstance(value, str)
self._protocol = value
@property
def verdict(self):
"""
Localized.
:return: Human-readable hack verdict or None if not initialized
:rtype: str
"""
return self._verdict
@verdict.setter
def verdict(self, value):
"""
Localized.
:param value: Human-readable hack verdict or None if not initialized
:type value: str
"""
assert isinstance(value, str)
self._verdict = value
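Each attribute of `JudgeProtocol` pairs a private field with a type-checked property setter. A standalone sketch of that pattern, using a hypothetical `Protocol` class (not part of the codeforces package) so it runs without `BaseJsonObject`:

```python
class Protocol:
    """Standalone sketch of the validated-property pattern used by JudgeProtocol.

    Hypothetical illustration only: a private field plus a setter that
    rejects values of the wrong type via an assertion.
    """

    def __init__(self):
        self._verdict = None

    @property
    def verdict(self):
        return self._verdict

    @verdict.setter
    def verdict(self, value):
        assert isinstance(value, str)
        self._verdict = value


p = Protocol()
p.verdict = "Successful hack"
print(p.verdict)
```

Assigning a non-string (e.g. `p.verdict = 123`) trips the assertion, which is how the original class surfaces malformed JSON fields early.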
| 24.4375 | 99 | 0.609122 | 252 | 2,346 | 5.539683 | 0.277778 | 0.039398 | 0.025788 | 0.057307 | 0.542264 | 0.459885 | 0.413324 | 0.363897 | 0.30659 | 0.19914 | 0 | 0 | 0.297954 | 2,346 | 95 | 100 | 24.694737 | 0.847602 | 0.379369 | 0 | 0.138889 | 0 | 0 | 0.044057 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.25 | false | 0 | 0.027778 | 0.027778 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
099bae41339eea4897a6dc29aab9f4125f4671dc | 7,544 | py | Python | src/semantic_parsing_with_constrained_lm/configs/calflow_emnlp_camera_ready.py | microsoft/semantic_parsing_with_constrained_lm | 7e3c099500c3102e46d7a47469fe6840580c2b11 | [
"MIT"
] | 17 | 2021-09-22T13:08:37.000Z | 2022-03-27T10:39:53.000Z | src/semantic_parsing_with_constrained_lm/configs/calflow_emnlp_camera_ready.py | microsoft/semantic_parsing_with_constrained_lm | 7e3c099500c3102e46d7a47469fe6840580c2b11 | [
"MIT"
] | 1 | 2022-03-12T01:05:15.000Z | 2022-03-12T01:05:15.000Z | src/semantic_parsing_with_constrained_lm/configs/calflow_emnlp_camera_ready.py | microsoft/semantic_parsing_with_constrained_lm | 7e3c099500c3102e46d7a47469fe6840580c2b11 | [
"MIT"
] | 1 | 2021-12-16T22:26:54.000Z | 2021-12-16T22:26:54.000Z | # Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
"""This config file is for running experiments for the EMNLP camera ready.
It will generate the following experiments (depending on the value of eval_split and model):
- 200 dev examples
- GPT-3 Constrained Canonical, P = 20
- GPT-3 Constrained Meaning, P = 20
- GPT-3 Unconstrained Canonical, P = 20
- GPT-3 Unconstrained Meaning, P = 20
- GPT-3 Constrained Canonical, P = 8
- GPT-3 Constrained Meaning, P = 8
- All dev examples
- GPT-3 Constrained Canonical, P = 20
- BART Constrained Canonical
- BART Constrained Meaning
- BART Unconstrained Canonical
- BART Unconstrained Meaning
- GPT-2 Constrained Canonical
- GPT-2 Constrained Meaning
- GPT-2 Unconstrained Canonical
- GPT-2 Unconstrained Meaning
"""
from typing import Any, Callable, Dict
import torch
from typing_extensions import Literal
from semantic_parsing_with_constrained_lm.scfg.read_grammar import PreprocessedGrammar
from semantic_parsing_with_constrained_lm.scfg.scfg import SCFG
from semantic_parsing_with_constrained_lm.configs.lib.calflow import (
cached_read_calflow_jsonl,
make_semantic_parser_for_calflow,
)
from semantic_parsing_with_constrained_lm.configs.lib.common import PromptOrder
from semantic_parsing_with_constrained_lm.domains.calflow import CalflowMetrics, CalflowOutputLanguage
from semantic_parsing_with_constrained_lm.eval import TopKExactMatch
from semantic_parsing_with_constrained_lm.lm import TRAINED_MODEL_DIR, AutoregressiveModel, ClientType
from semantic_parsing_with_constrained_lm.lm_bart import Seq2SeqBart
from semantic_parsing_with_constrained_lm.lm_openai_gpt3 import IncrementalOpenAIGPT3
from semantic_parsing_with_constrained_lm.paths import DOMAINS_DIR
from semantic_parsing_with_constrained_lm.run_exp import EvalSplit, Experiment
def build_config(
log_dir, # pylint: disable=unused-argument
eval_split: EvalSplit,
model: ClientType,
**kwargs: Any, # pylint: disable=unused-argument
) -> Dict[str, Callable[[], Experiment]]:
EXAMPLES_DIR = DOMAINS_DIR / "calflow/data"
TRAIN_SIZE = 300
BEAM_SIZE = 10
use_gpt3 = model == ClientType.GPT3
preprocessed_grammar = PreprocessedGrammar.from_folder(
str(DOMAINS_DIR / "calflow/grammar")
)
scfg = SCFG(preprocessed_grammar)
def create_exp(
problem_type: Literal[
"constrained", "unconstrained-beam", "unconstrained-greedy"
],
output_type: CalflowOutputLanguage,
):
train_data = cached_read_calflow_jsonl(
EXAMPLES_DIR / "train_300_stratified.jsonl", output_type,
)[:TRAIN_SIZE]
if eval_split == EvalSplit.DevFull:
test_data = cached_read_calflow_jsonl(
EXAMPLES_DIR / "dev_all.jsonl", output_type,
)
elif eval_split == EvalSplit.DevSubset:
test_data = cached_read_calflow_jsonl(
EXAMPLES_DIR / "test_200_uniform.jsonl", output_type,
)
elif eval_split == EvalSplit.TrainSubset:
# Select a subset not already present in train
ids_train_300 = set()
with open(EXAMPLES_DIR / "ids_train_300_stratified.txt", "r") as id_file:
for _, line in enumerate(id_file):
dialogue_id, turn_index = line.strip().split(",")
ids_train_300.add((dialogue_id.strip(), int(turn_index.strip())))
train_data_1000_stratified = cached_read_calflow_jsonl(
EXAMPLES_DIR / "train_1000_stratified.jsonl", output_type,
)
test_data = [
datum
for datum in train_data_1000_stratified
if (datum.dialogue_id, datum.turn_part_index) not in ids_train_300
]
test_data = test_data[:100]
else:
raise ValueError(eval_split)
lm: AutoregressiveModel
if model == ClientType.GPT3:
lm = IncrementalOpenAIGPT3()
elif model == ClientType.BART:
lm = Seq2SeqBart(
# Part after / is set to match lm_finetune.py
f"{TRAINED_MODEL_DIR}/20000/calflow_{output_type}/",
device=torch.device("cuda:0" if torch.cuda.is_available() else "cpu"),
)
else:
raise ValueError(model)
if problem_type == "constrained":
constrained = True
beam_size = BEAM_SIZE
elif problem_type == "unconstrained-beam":
constrained = False
beam_size = BEAM_SIZE
elif problem_type == "unconstrained-greedy":
constrained = False
beam_size = 1
else:
raise ValueError(f"{problem_type} not allowed")
parser = make_semantic_parser_for_calflow(
train_data,
lm,
use_gpt3,
beam_size,
output_type,
model,
preprocessed_grammar,
constrained,
prompt_order=PromptOrder.Shuffle,
)
return Experiment(
model=parser,
metrics={
"exact_match": TopKExactMatch(beam_size),
"round_trip": CalflowMetrics(
k=beam_size,
scfg=scfg,
data_type=output_type,
require_exact_length=True,
),
},
test_data=test_data,
client=lm,
)
def add_exp_to_dict(
exps_dict: Dict[str, Callable[[], Experiment]],
problem_type: Literal[
"constrained", "unconstrained-beam", "unconstrained-greedy"
],
output_type: CalflowOutputLanguage,
num_examples_per_prompt: int,
):
exp_name = f"calflow_{model}_{eval_split}_{problem_type}_{output_type}_prompt{num_examples_per_prompt}"
exps_dict[exp_name] = lambda: create_exp(problem_type, output_type)
result: Dict[str, Callable[[], Experiment]] = {}
if eval_split == EvalSplit.DevFull:
if use_gpt3:
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Canonical, 20)
else:
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Canonical, 0)
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Lispress, 0)
add_exp_to_dict(
result, "unconstrained-greedy", CalflowOutputLanguage.Canonical, 0
)
add_exp_to_dict(
result, "unconstrained-greedy", CalflowOutputLanguage.Lispress, 0
)
elif eval_split == EvalSplit.DevSubset:
if use_gpt3:
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Canonical, 20)
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Lispress, 20)
add_exp_to_dict(
result, "unconstrained-greedy", CalflowOutputLanguage.Canonical, 20
)
add_exp_to_dict(
result, "unconstrained-greedy", CalflowOutputLanguage.Lispress, 20
)
else:
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Canonical, 0)
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Lispress, 0)
elif eval_split == EvalSplit.TrainSubset and not use_gpt3:
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Canonical, 0)
add_exp_to_dict(result, "constrained", CalflowOutputLanguage.Lispress, 0)
return result
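The config above registers each experiment as a zero-argument factory in a dict, so experiments are named up front but only constructed when invoked. A minimal standalone sketch of that pattern with a hypothetical dict payload standing in for the real `Experiment`:

```python
from typing import Callable, Dict


def build_config(eval_split: str) -> Dict[str, Callable[[], dict]]:
    def create_exp(problem_type: str) -> dict:
        # Stand-in for the real (expensive) Experiment construction.
        return {"name": f"calflow_{eval_split}_{problem_type}"}

    result: Dict[str, Callable[[], dict]] = {}
    for problem_type in ("constrained", "unconstrained-greedy"):
        # Bind the loop variable as a default argument so each factory
        # keeps its own value instead of the last loop iteration's.
        result[f"calflow_{eval_split}_{problem_type}"] = (
            lambda pt=problem_type: create_exp(pt)
        )
    return result


exps = build_config("dev-subset")
print(sorted(exps))
print(exps["calflow_dev-subset_constrained"]()["name"])
```

The original avoids the late-binding lambda pitfall differently, by closing over the parameters of `add_exp_to_dict`, which are already fixed per call.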
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'C:\Users\conta\Documents\script\Wizard\App\gui\ui_files\chat.ui'
#
# Created by: PyQt5 UI code generator 5.13.0
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(428, 544)
self.horizontalLayout_2 = QtWidgets.QHBoxLayout(Form)
self.horizontalLayout_2.setContentsMargins(0, 0, 0, 0)
self.horizontalLayout_2.setSpacing(0)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.chat_frame = QtWidgets.QFrame(Form)
self.chat_frame.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.chat_frame.setFrameShadow(QtWidgets.QFrame.Raised)
self.chat_frame.setObjectName("chat_frame")
self.verticalLayout = QtWidgets.QVBoxLayout(self.chat_frame)
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
self.verticalLayout.setSpacing(0)
self.verticalLayout.setObjectName("verticalLayout")
self.chat_top_horizontalFrame = QtWidgets.QFrame(self.chat_frame)
self.chat_top_horizontalFrame.setObjectName("chat_top_horizontalFrame")
self.horizontalLayout = QtWidgets.QHBoxLayout(self.chat_top_horizontalFrame)
self.horizontalLayout.setContentsMargins(10, 10, 10, 10)
self.horizontalLayout.setObjectName("horizontalLayout")
self.chat_project_label = QtWidgets.QLabel(self.chat_top_horizontalFrame)
self.chat_project_label.setObjectName("chat_project_label")
self.horizontalLayout.addWidget(self.chat_project_label)
spacerItem = QtWidgets.QSpacerItem(40, 20, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem)
self.verticalLayout.addWidget(self.chat_top_horizontalFrame)
self.messages_frame = QtWidgets.QFrame(self.chat_frame)
self.messages_frame.setObjectName("messages_frame")
self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.messages_frame)
self.verticalLayout_2.setContentsMargins(0, 0, 0, 0)
self.verticalLayout_2.setSpacing(0)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.chat_main_scrollArea = QtWidgets.QScrollArea(self.messages_frame)
self.chat_main_scrollArea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.chat_main_scrollArea.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.AdjustToContentsOnFirstShow)
self.chat_main_scrollArea.setWidgetResizable(True)
self.chat_main_scrollArea.setObjectName("chat_main_scrollArea")
self.chat_scrollAreaWidgetContents = QtWidgets.QWidget()
self.chat_scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 424, 397))
self.chat_scrollAreaWidgetContents.setObjectName("chat_scrollAreaWidgetContents")
self.chat_messages_layouts = QtWidgets.QVBoxLayout(self.chat_scrollAreaWidgetContents)
self.chat_messages_layouts.setContentsMargins(-1, 0, -1, 0)
self.chat_messages_layouts.setObjectName("chat_messages_layouts")
spacerItem1 = QtWidgets.QSpacerItem(20, 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.chat_messages_layouts.addItem(spacerItem1)
self.horizontalLayout_3 = QtWidgets.QHBoxLayout()
self.horizontalLayout_3.setContentsMargins(-1, 11, -1, 11)
self.horizontalLayout_3.setObjectName("horizontalLayout_3")
self.chat_show_more_pushButton = QtWidgets.QPushButton(self.chat_scrollAreaWidgetContents)
self.chat_show_more_pushButton.setMinimumSize(QtCore.QSize(100, 40))
self.chat_show_more_pushButton.setMaximumSize(QtCore.QSize(100, 40))
self.chat_show_more_pushButton.setObjectName("chat_show_more_pushButton")
self.horizontalLayout_3.addWidget(self.chat_show_more_pushButton)
self.chat_messages_layouts.addLayout(self.horizontalLayout_3)
self.chat_messages_layouts_1 = QtWidgets.QVBoxLayout()
self.chat_messages_layouts_1.setSpacing(0)
self.chat_messages_layouts_1.setObjectName("chat_messages_layouts_1")
self.chat_messages_layouts.addLayout(self.chat_messages_layouts_1)
spacerItem2 = QtWidgets.QSpacerItem(20, 11, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
self.chat_messages_layouts.addItem(spacerItem2)
self.chat_main_scrollArea.setWidget(self.chat_scrollAreaWidgetContents)
self.verticalLayout_2.addWidget(self.chat_main_scrollArea)
self.verticalLayout.addWidget(self.messages_frame)
self.line_2 = QtWidgets.QFrame(self.chat_frame)
self.line_2.setMaximumSize(QtCore.QSize(16777215, 1))
self.line_2.setFrameShape(QtWidgets.QFrame.HLine)
self.line_2.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_2.setObjectName("line_2")
self.verticalLayout.addWidget(self.line_2)
self.verticalLayout_3 = QtWidgets.QVBoxLayout()
self.verticalLayout_3.setContentsMargins(10, -1, 10, -1)
self.verticalLayout_3.setSpacing(0)
self.verticalLayout_3.setObjectName("verticalLayout_3")
self.chat_file_frame = QtWidgets.QFrame(self.chat_frame)
self.chat_file_frame.setObjectName("chat_file_frame")
self.chat_file_layout = QtWidgets.QHBoxLayout(self.chat_file_frame)
self.chat_file_layout.setObjectName("chat_file_layout")
self.added_chat_file_pushButton = QtWidgets.QPushButton(self.chat_file_frame)
self.added_chat_file_pushButton.setMinimumSize(QtCore.QSize(0, 40))
self.added_chat_file_pushButton.setText("")
self.added_chat_file_pushButton.setObjectName("added_chat_file_pushButton")
self.chat_file_layout.addWidget(self.added_chat_file_pushButton)
spacerItem3 = QtWidgets.QSpacerItem(40, 20, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.chat_file_layout.addItem(spacerItem3)
self.verticalLayout_3.addWidget(self.chat_file_frame)
self.horizontalLayout_4 = QtWidgets.QHBoxLayout()
self.horizontalLayout_4.setContentsMargins(11, -1, 0, -1)
self.horizontalLayout_4.setSpacing(3)
self.horizontalLayout_4.setObjectName("horizontalLayout_4")
self.add_file_pushButton = QtWidgets.QPushButton(self.chat_frame)
self.add_file_pushButton.setMinimumSize(QtCore.QSize(36, 36))
self.add_file_pushButton.setMaximumSize(QtCore.QSize(36, 36))
self.add_file_pushButton.setText("")
self.add_file_pushButton.setObjectName("add_file_pushButton")
self.horizontalLayout_4.addWidget(self.add_file_pushButton)
self.send_message_pushButton = QtWidgets.QPushButton(self.chat_frame)
self.send_message_pushButton.setMinimumSize(QtCore.QSize(36, 36))
self.send_message_pushButton.setMaximumSize(QtCore.QSize(36, 36))
self.send_message_pushButton.setText("")
self.send_message_pushButton.setObjectName("send_message_pushButton")
self.horizontalLayout_4.addWidget(self.send_message_pushButton)
self.verticalLayout_3.addLayout(self.horizontalLayout_4)
self.verticalLayout.addLayout(self.verticalLayout_3)
self.horizontalLayout_2.addWidget(self.chat_frame)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Form"))
self.chat_project_label.setText(_translate("Form", "TextLabel"))
self.chat_show_more_pushButton.setText(_translate("Form", "Show more"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
import numpy as np
import pandas as pd
s = pd.Series(np.random.rand(5), index=['a', 'b', 'c', 'd', 'e'])
print(s)
print(s.index)
s1 = pd.Series(np.random.randn(5))
print(s1)
d = {'b': 1, 'a': 0, 'c': 2}
s2 = pd.Series(d)
print(s2)
s3 = pd.Series(d, index=['b', 'c', 'd', 'a'])
print(s3)
s4 = pd.Series(5., index=['a', 'b', 'c', 'd', 'e'])
print(s4)
print(s[0])
print(s[:3])
print(s[s > s.median()])
print(s[[4, 3, 1]])
# Print e raised to the power of each element (e is the constant 2.71828...)
print(np.exp(s))
# Print the square root of each element in s
print(np.sqrt(s))
print(s.dtype)
print(s.array)
print(s.to_numpy())
print(s['a'])
s['e'] = 12.
print(s)
print('e' in s)
print('f' in s)
# Accessing a missing label raises a KeyError
# print(s['f'])
print(s.get('f'))
print(s.get('f', np.nan))
print(s[1:] + s[:-1])
s5 = pd.Series(np.random.randn(5), name='my_series')
print(s5)
print(s5.name)
print(id(s5))
# Rename the series
s6 = s5.rename("my_series_different")
print(s6)
print(id(s6))
# Generated by Django 4.0.1 on 2022-01-16 16:36
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('users', '0004_user_rating'),
]
operations = [
migrations.AddField(
model_name='user',
name='banned',
field=models.BooleanField(default=False),
),
]
| 19.789474 | 53 | 0.587766 | 40 | 376 | 5.45 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071698 | 0.295213 | 376 | 18 | 54 | 20.888889 | 0.750943 | 0.119681 | 0 | 0 | 1 | 0 | 0.094225 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09adee812e641848283e99fe1e1c57bd9d793e06 | 543 | py | Python | blender/arm/logicnode/animation/LN_get_tilesheet_state.py | onelsonic/armory | 55cfead0844923d419d75bf4bd677ebed714b4b5 | [
"Zlib"
] | 2,583 | 2016-07-27T08:25:47.000Z | 2022-03-31T10:42:17.000Z | blender/arm/logicnode/animation/LN_get_tilesheet_state.py | N8n5h/armory | 5b4d24f067a2354bafd3ab417bb8e30ee0c5aff8 | [
"Zlib"
] | 2,122 | 2016-07-31T14:20:04.000Z | 2022-03-31T20:44:14.000Z | blender/arm/logicnode/animation/LN_get_tilesheet_state.py | N8n5h/armory | 5b4d24f067a2354bafd3ab417bb8e30ee0c5aff8 | [
"Zlib"
] | 451 | 2016-08-12T05:52:58.000Z | 2022-03-31T01:33:07.000Z | from arm.logicnode.arm_nodes import *
class GetTilesheetStateNode(ArmLogicTreeNode):
"""Returns the information about the current tilesheet of the given object."""
bl_idname = 'LNGetTilesheetStateNode'
bl_label = 'Get Tilesheet State'
arm_version = 1
arm_section = 'tilesheet'
def arm_init(self, context):
self.add_input('ArmNodeSocketObject', 'Object')
self.add_output('ArmStringSocket', 'Name')
self.add_output('ArmIntSocket', 'Frame')
self.add_output('ArmBoolSocket', 'Is Paused')
| 33.9375 | 82 | 0.703499 | 60 | 543 | 6.2 | 0.683333 | 0.075269 | 0.104839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002268 | 0.187845 | 543 | 15 | 83 | 36.2 | 0.84127 | 0.132597 | 0 | 0 | 0 | 0 | 0.288172 | 0.049462 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
09aff6ef14fb4885c4577050d30d835709a7725b | 832 | py | Python | capstone/capdb/migrations/0101_auto_20200423_1714.py | rachelaus/capstone | 2affa02706f9b1a99d032c66f258a7421c40a35e | [
"MIT"
] | 134 | 2017-07-12T17:03:06.000Z | 2022-03-27T06:38:29.000Z | capstone/capdb/migrations/0101_auto_20200423_1714.py | rachelaus/capstone | 2affa02706f9b1a99d032c66f258a7421c40a35e | [
"MIT"
] | 1,362 | 2017-06-22T17:42:49.000Z | 2022-03-31T15:28:00.000Z | capstone/capdb/migrations/0101_auto_20200423_1714.py | rachelaus/capstone | 2affa02706f9b1a99d032c66f258a7421c40a35e | [
"MIT"
] | 38 | 2017-06-22T14:46:23.000Z | 2022-03-16T05:32:54.000Z | # Generated by Django 2.2.11 on 2020-04-23 17:14
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('capdb', '0100_auto_20200410_1755'),
]
operations = [
migrations.AddField(
model_name='historicalvolumemetadata',
name='second_part_of',
field=models.ForeignKey(blank=True, db_constraint=False, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='capdb.VolumeMetadata'),
),
migrations.AddField(
model_name='volumemetadata',
name='second_part_of',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='second_part', to='capdb.VolumeMetadata'),
),
]
| 33.28 | 173 | 0.665865 | 95 | 832 | 5.652632 | 0.484211 | 0.05959 | 0.078212 | 0.122905 | 0.353818 | 0.353818 | 0.353818 | 0.216015 | 0.216015 | 0.216015 | 0 | 0.04893 | 0.213942 | 832 | 24 | 174 | 34.666667 | 0.772171 | 0.055288 | 0 | 0.333333 | 1 | 0 | 0.186224 | 0.059949 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09b0477a8d72253dd37fede78e47b823150d1788 | 374 | py | Python | src/cwr/urls.py | kaizer88/moltrandashboad | 590b3426ecc3a80aa31a08eb07bed224d0c2562d | [
"MIT"
] | null | null | null | src/cwr/urls.py | kaizer88/moltrandashboad | 590b3426ecc3a80aa31a08eb07bed224d0c2562d | [
"MIT"
] | 4 | 2021-04-08T22:01:45.000Z | 2021-09-22T19:50:53.000Z | src/cwr/urls.py | kaizer88/moltrandashboad | 590b3426ecc3a80aa31a08eb07bed224d0c2562d | [
"MIT"
] | null | null | null | from django.urls import path
from .views import *
urlpatterns = [
# path('towns', towns, name='towns'),
# path('add_town', edit_town, name='add_town'),
# path('edit_town/<pk>', edit_town, name='edit_town'),
# path('properties', properties, name='properties'),
# path('tenants', tenants, name='tenants'),
# path('agents', agents, name='agents'),
]
| 26.714286 | 58 | 0.631016 | 46 | 374 | 5 | 0.347826 | 0.13913 | 0.104348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168449 | 374 | 13 | 59 | 28.769231 | 0.73955 | 0.71123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
09b7f89cb5eb23713528719c5cfdb45d6cbed1d3 | 1,004 | py | Python | calendar_util/migrations/0001_initial.py | christiankuhl/todolist | a10316f8b24c33da7d0b386a4fda5b4a9c128cae | [
"MIT"
] | null | null | null | calendar_util/migrations/0001_initial.py | christiankuhl/todolist | a10316f8b24c33da7d0b386a4fda5b4a9c128cae | [
"MIT"
] | 5 | 2020-06-05T17:35:16.000Z | 2021-09-07T10:20:36.000Z | calendar_util/migrations/0001_initial.py | christiankuhl/todolist | a10316f8b24c33da7d0b386a4fda5b4a9c128cae | [
"MIT"
] | null | null | null | # Generated by Django 2.0.2 on 2018-02-26 22:35
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Holiday',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date', models.DateField()),
('description', models.CharField(max_length=128)),
('created_at', models.DateTimeField(auto_now_add=True)),
('creator', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('date',),
},
),
]
| 32.387097 | 152 | 0.583665 | 102 | 1,004 | 5.598039 | 0.617647 | 0.042032 | 0.049037 | 0.077058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025352 | 0.292829 | 1,004 | 30 | 153 | 33.466667 | 0.778873 | 0.044821 | 0 | 0 | 1 | 0 | 0.06041 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09be8ce22aaeafd1f7ba14340567fecf2e3913de | 1,379 | py | Python | server_util/resource_manager.py | igushev/fase_lib | 182c626193193b196041b18b9974b5b2cbf15c67 | [
"MIT"
] | 7 | 2019-05-20T09:57:02.000Z | 2020-01-10T05:30:48.000Z | server_util/resource_manager.py | igushev/fase_lib | 182c626193193b196041b18b9974b5b2cbf15c67 | [
"MIT"
] | null | null | null | server_util/resource_manager.py | igushev/fase_lib | 182c626193193b196041b18b9974b5b2cbf15c67 | [
"MIT"
] | null | null | null | import functools
import os
from fase_lib.base_util import singleton_util
TEMPLATE_SYMBOL = '@'
PIXEL_DENSITY_STEP = 0.25
PIXEL_DENSITY_MIN = 0
PIXEL_DENSITY_MAX = 10
@singleton_util.Singleton()
class ResourceManager():
def __init__(self, resource_dir):
self.resource_dir = resource_dir
def GetResourceDir(self):
return self.resource_dir
@functools.lru_cache(maxsize=None, typed=True)
def GetResourceFilename(self, filename, pixel_density):
if TEMPLATE_SYMBOL in filename:
return self.ResolveResourceFilename(filename, pixel_density)
if os.path.isfile(os.path.join(self.resource_dir, filename)):
return filename
return None
def ResolveResourceFilename(self, filename_template, pixel_density):
assert pixel_density % PIXEL_DENSITY_STEP == 0
for direction in [-1, 1]:
current_pixel_density = pixel_density
while ((direction == -1 and current_pixel_density > PIXEL_DENSITY_MIN) or
(direction == 1 and current_pixel_density < PIXEL_DENSITY_MAX)):
current_pixel_density_str = ('%.2f' % current_pixel_density).replace('.', '_')
filename = filename_template.replace(TEMPLATE_SYMBOL, current_pixel_density_str)
if os.path.isfile(os.path.join(self.resource_dir, filename)):
return filename
current_pixel_density += PIXEL_DENSITY_STEP * direction
return None
| 33.634146 | 88 | 0.740392 | 173 | 1,379 | 5.578035 | 0.306358 | 0.236269 | 0.137824 | 0.124352 | 0.315026 | 0.217617 | 0.217617 | 0.217617 | 0.126425 | 0.126425 | 0 | 0.010536 | 0.174039 | 1,379 | 40 | 89 | 34.475 | 0.836699 | 0 | 0 | 0.1875 | 0 | 0 | 0.005076 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 1 | 0.125 | false | 0 | 0.09375 | 0.03125 | 0.4375 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09c129d6095de636eeb5ca45e25197119bf1cf98 | 4,942 | py | Python | lib/config.py | qvant/stackexchange_bot | 36f9f13ee3ba58d4277d4ecdae567ead003870f8 | [
"MIT"
] | null | null | null | lib/config.py | qvant/stackexchange_bot | 36f9f13ee3ba58d4277d4ecdae567ead003870f8 | [
"MIT"
] | null | null | null | lib/config.py | qvant/stackexchange_bot | 36f9f13ee3ba58d4277d4ecdae567ead003870f8 | [
"MIT"
] | null | null | null | import codecs
import datetime
import json
from .security import is_password_encrypted, encrypt_password, decrypt_password
from .log import get_logger
from .stats import set_startup
LOG_CONFIG = "config"
CONFIG_PARAM_LOG_LEVEL = "LOG_LEVEL"
CONFIG_PARAM_DB_PORT = "DB_PORT"
CONFIG_PARAM_DB_NAME = "DB_NAME"
CONFIG_PARAM_DB_HOST = "DB_HOST"
CONFIG_PARAM_DB_USER = "DB_USER"
CONFIG_PARAM_DB_PASSWORD = "DB_PASSWORD"
CONFIG_PARAM_NEW_PATH = "CONFIG_PATH"
CONFIG_PARAM_CONFIG_RELOAD_TIME = "CONFIG_RELOAD_TIME"
CONFIG_PARAM_SERVER_NAME = "SERVER_NAME"
CONFIG_PARAM_HALT_ON_ERRORS = "HALT_ON_ERRORS"
CONFIG_PARAM_BOT_SECRET = "BOT_SECRET"
CONFIG_PARAM_ADMIN_LIST = "ADMIN_ACCOUNTS"
MODE_CORE = "core"
MODE_BOT = "bot"
MODE_WORKER = "worker"
MODE_UPDATER = "updater"
class Config:
def __init__(self, file: str, reload: bool = False):
f = file
fp = codecs.open(f, 'r', "utf-8")
config = json.load(fp)
if not reload:
self.logger = get_logger(LOG_CONFIG, is_system=True)
set_startup()
self.logger.info("Read settings from {0}".format(file))
self.file_path = file
self.old_file_path = file
self.log_level = config.get(CONFIG_PARAM_LOG_LEVEL)
self.logger.setLevel(self.log_level)
self.server_name = config.get(CONFIG_PARAM_SERVER_NAME)
self.supress_errors = False
if not config.get(CONFIG_PARAM_HALT_ON_ERRORS):
self.supress_errors = True
self.secret = config.get(CONFIG_PARAM_BOT_SECRET)
self.db_name = config.get(CONFIG_PARAM_DB_NAME)
self.db_port = config.get(CONFIG_PARAM_DB_PORT)
self.db_host = config.get(CONFIG_PARAM_DB_HOST)
self.db_user = config.get(CONFIG_PARAM_DB_USER)
self.db_password_read = config.get(CONFIG_PARAM_DB_PASSWORD)
if config.get(CONFIG_PARAM_NEW_PATH) is not None:
self.file_path = config.get(CONFIG_PARAM_NEW_PATH)
self.reload_time = config.get(CONFIG_PARAM_CONFIG_RELOAD_TIME)
self.next_reload = datetime.datetime.now()
self.reloaded = False
self.db_credential_changed = False
if is_password_encrypted(self.db_password_read):
self.logger.info("DB password encrypted, do nothing")
self.db_password = decrypt_password(self.db_password_read, self.server_name, self.db_port)
else:
self.logger.info("DB password in plain text, start encrypt")
password = encrypt_password(self.db_password_read, self.server_name, self.db_port)
self._save_db_password(password)
self.logger.info("DB password encrypted and save back in config")
self.db_password = self.db_password_read
if is_password_encrypted(self.secret):
self.logger.info("Secret in cypher text, start decryption")
self.secret = decrypt_password(self.secret, self.server_name, self.db_port)
self.logger.info("Secret was decrypted")
else:
self.logger.info("Secret in plain text, start encryption")
new_password = encrypt_password(self.secret, self.server_name, self.db_port)
self._save_secret(new_password)
self.logger.info("Secret was encrypted and saved")
self.admin_list = config.get(CONFIG_PARAM_ADMIN_LIST)
def _save_db_password(self, password: str):
fp = codecs.open(self.file_path, 'r', "utf-8")
config = json.load(fp)
fp.close()
fp = codecs.open(self.file_path, 'w', "utf-8")
config[CONFIG_PARAM_DB_PASSWORD] = password
json.dump(config, fp, indent=2)
fp.close()
def _save_secret(self, password: str):
fp = codecs.open(self.file_path, 'r', "utf-8")
config = json.load(fp)
fp.close()
fp = codecs.open(self.file_path, 'w', "utf-8")
config[CONFIG_PARAM_BOT_SECRET] = password
json.dump(config, fp, indent=2)
fp.close()
def renew_if_needed(self):
if datetime.datetime.now() >= self.next_reload:
self.logger.debug("Time to reload settings")
old_file_path = self.old_file_path
old_db_password = self.db_password
try:
self.__init__(self.file_path, reload=True)
self.reloaded = True
if self.db_password != old_db_password:
self.logger.info("DB password changed, need to reconnect")
self.db_credential_changed = True
except BaseException as exc:
self.logger.critical("Can't reload settings from new path {0}, error {1}".format(self.file_path, exc))
self.old_file_path = old_file_path
self.file_path = old_file_path
else:
self.logger.debug("Too early to reload settings")
def mark_reload_finish(self):
self.reloaded = False
self.db_credential_changed = False
| 41.183333 | 118 | 0.667746 | 675 | 4,942 | 4.574815 | 0.171852 | 0.096179 | 0.063148 | 0.084197 | 0.424223 | 0.253562 | 0.201749 | 0.194948 | 0.161917 | 0.161917 | 0 | 0.002659 | 0.238972 | 4,942 | 119 | 119 | 41.529412 | 0.818399 | 0 | 0 | 0.186916 | 0 | 0 | 0.11898 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046729 | false | 0.205607 | 0.056075 | 0 | 0.11215 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
09c38b111eefb36e876de4885ebe9341a617cb60 | 1,501 | py | Python | claf/learn/tensorboard.py | GMDennis/claf | d1e064e593127e5d654f000f5506c5ae1caab5ce | [
"MIT"
] | 225 | 2019-02-27T06:22:09.000Z | 2022-03-05T23:19:59.000Z | claf/learn/tensorboard.py | GMDennis/claf | d1e064e593127e5d654f000f5506c5ae1caab5ce | [
"MIT"
] | 25 | 2019-03-08T05:04:24.000Z | 2021-09-02T07:19:27.000Z | claf/learn/tensorboard.py | GMDennis/claf | d1e064e593127e5d654f000f5506c5ae1caab5ce | [
"MIT"
] | 41 | 2019-02-27T16:09:30.000Z | 2022-02-09T15:56:09.000Z |
import os
from tensorboardX import SummaryWriter
from claf import nsml
class TensorBoard:
""" TensorBoard Wrapper for Pytorch """
def __init__(self, log_dir):
if not os.path.exists(log_dir):
os.makedirs(log_dir)
self.writer = SummaryWriter(log_dir=log_dir)
def scalar_summaries(self, step, summary):
if nsml.IS_ON_NSML:
if type(summary) != dict:
raise ValueError(f"summary type is dict. not {type(summary)}")
kwargs = {"summary": True, "scope": locals(), "step": step}
kwargs.update(summary)
nsml.report(**kwargs)
else:
for tag, value in summary.items():
self.scalar_summary(step, tag, value)
def scalar_summary(self, step, tag, value):
"""Log a scalar variable."""
if nsml.IS_ON_NSML:
nsml.report(**{"summary": True, "scope": locals(), "step": step, tag: value})
else:
self.writer.add_scalar(tag, value, step)
def image_summary(self, tag, images, step):
"""Log a list of images."""
raise NotImplementedError()
def embedding_summary(self, features, metadata=None, label_img=None):
raise NotImplementedError()
def histogram_summary(self, tag, values, step, bins=1000):
"""Log a histogram of the tensor of values."""
raise NotImplementedError()
def graph_summary(self, model, input_to_model=None):
raise NotImplementedError()
| 30.632653 | 89 | 0.614257 | 180 | 1,501 | 4.988889 | 0.383333 | 0.033408 | 0.040089 | 0.022272 | 0.097996 | 0.066815 | 0 | 0 | 0 | 0 | 0 | 0.00365 | 0.26982 | 1,501 | 48 | 90 | 31.270833 | 0.815693 | 0.078614 | 0 | 0.258065 | 0 | 0 | 0.053676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225806 | false | 0 | 0.096774 | 0 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09c7a77b5f23c61c1a13ddffcf3b3d38b547233b | 225 | py | Python | nadlogar/students/context_processors.py | LenartBucar/nadlogar | 2aba693254d56896419d09e066f91551492f8980 | [
"MIT"
] | null | null | null | nadlogar/students/context_processors.py | LenartBucar/nadlogar | 2aba693254d56896419d09e066f91551492f8980 | [
"MIT"
] | null | null | null | nadlogar/students/context_processors.py | LenartBucar/nadlogar | 2aba693254d56896419d09e066f91551492f8980 | [
"MIT"
] | null | null | null | from .models import StudentGroup
def my_groups(request):
if request.user.is_authenticated:
return {
"my_groups": StudentGroup.objects.filter(user=request.user),
}
else:
return {}
| 20.454545 | 72 | 0.626667 | 24 | 225 | 5.75 | 0.666667 | 0.115942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275556 | 225 | 10 | 73 | 22.5 | 0.846626 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09ca12d31da63891df1c995ff41a4a10b0df20a8 | 479 | py | Python | 202-happy-number/202-happy-number.py | jenndryden/coding-challenges | 0abe49eb335a06d273378634ef9e42f061751626 | [
"MIT"
] | null | null | null | 202-happy-number/202-happy-number.py | jenndryden/coding-challenges | 0abe49eb335a06d273378634ef9e42f061751626 | [
"MIT"
] | null | null | null | 202-happy-number/202-happy-number.py | jenndryden/coding-challenges | 0abe49eb335a06d273378634ef9e42f061751626 | [
"MIT"
] | null | null | null | class Solution:
def isHappy(self, n: int) -> bool:
seen = {}
while True:
if n in seen:
break
counter = 0
for i in range(len(str(n))):
counter += int(str(n)[i]) ** 2
seen[n] = 1
n = counter
counter = 0
if n == 1:
return True
break
else:
continue
return False
| 25.210526 | 46 | 0.350731 | 48 | 479 | 3.5 | 0.5625 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024038 | 0.565762 | 479 | 19 | 47 | 25.210526 | 0.783654 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
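The cycle detection in the happy-number solution above can also be written with a plain set and a generator expression; a standalone sketch (the function name `is_happy` is ours, not from the original file):

```python
def is_happy(n: int) -> bool:
    # Repeatedly replace n by the sum of the squares of its digits;
    # a repeated value means the sequence cycles and never reaches 1.
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(digit) ** 2 for digit in str(n))
    return n == 1
```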
09cf093e2828cee5fbb1ca9469992843916c1851 | 788 | py | Python | Scripts/MembraneValidation.py | krm15/ACME | 5b727667c21367e6c4a3631d2a68e32b3f8a7067 | [
"BSD-2-Clause-FreeBSD",
"BSD-3-Clause"
] | 4 | 2015-08-18T03:16:42.000Z | 2018-04-22T22:04:46.000Z | Scripts/MembraneValidation.py | krm15/ACME | 5b727667c21367e6c4a3631d2a68e32b3f8a7067 | [
"BSD-2-Clause-FreeBSD",
"BSD-3-Clause"
] | 2 | 2021-11-28T02:07:43.000Z | 2021-11-30T09:11:48.000Z | Scripts/MembraneValidation.py | krm15/ACME | 5b727667c21367e6c4a3631d2a68e32b3f8a7067 | [
"BSD-2-Clause-FreeBSD",
"BSD-3-Clause"
] | 5 | 2015-03-30T19:37:31.000Z | 2020-01-02T07:31:39.000Z | #!/usr/bin/python
# Filename : var.py
import sys, os
# Default arguments
current = os.getcwd()
# Define executable
Preprocess = "/home/krm15/bin/GITROOT/ACME/bin/membraneSegmentationEvaluation"
MembranePreprocess = "~/GITROOT/ACME/Data/Test/Input/Preprocess/10.mha"
Foreground = "~/GITROOT/ACME/Data/Test/Input/Foreground/10.mha"
OutputDir = "~/GITROOT/ACME/Data/Test/Temporary/"
for seg in range(30, 50, 3):
val = str(0.1+0.1*seg)
OutputFile = OutputDir + "10-3_1.0_0.7_" + val + "_8.0.mha"
# Define command
CellPreprocess = "%s %s %s %s 3 1.0 0.7 %s 8.0" % (Preprocess, MembranePreprocess,
Foreground, OutputFile, val)
  if not os.path.isfile( OutputFile ):
    print(CellPreprocess)
    os.system(CellPreprocess)
| 30.307692 | 86 | 0.658629 | 105 | 788 | 4.904762 | 0.504762 | 0.085437 | 0.087379 | 0.11068 | 0.112621 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049206 | 0.200508 | 788 | 25 | 87 | 31.52 | 0.768254 | 0.107868 | 0 | 0 | 0 | 0 | 0.348138 | 0.277937 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071429 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09d51b4730ce9ff6fd35d945fa30b454905a428c | 1,319 | py | Python | setup.py | tomprimozic/scribe-python | f59f51cfb09eab6e44f9871f3014f531b3b75280 | [
"Apache-2.0"
] | null | null | null | setup.py | tomprimozic/scribe-python | f59f51cfb09eab6e44f9871f3014f531b3b75280 | [
"Apache-2.0"
] | 4 | 2017-04-30T18:21:37.000Z | 2018-05-31T10:04:11.000Z | setup.py | tomprimozic/scribe-python | f59f51cfb09eab6e44f9871f3014f531b3b75280 | [
"Apache-2.0"
] | 3 | 2016-06-06T14:23:40.000Z | 2021-07-05T08:08:13.000Z | '''
Scribe client
=============
This is a Python client for scribe that can be installed using pip::
pip install facebook-scribe
Usage
-----
Connect to ``HOST:9999`` using *Thrift*::
from scribe import scribe
from thrift.transport import TTransport, TSocket
from thrift.protocol import TBinaryProtocol
socket = TSocket.TSocket(host="HOST", port=9999)
transport = TTransport.TFramedTransport(socket)
protocol = TBinaryProtocol.TBinaryProtocol(trans=transport, strictRead=False, strictWrite=False)
client = scribe.Client(protocol)
transport.open()
category = 'LOGS'
message = 'hello world'
log_entry = scribe.LogEntry(category, message)
result = client.Log(messages=[log_entry])
if result == 0:
print 'success'
Links
-----
* `Facebook Scribe on GitHub <https://github.com/facebook/scribe>`_
'''
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
setup(name='facebook-scribe',
version='2.0.post1',
url='http://github.com/tomprimozic/scribe-python/',
author='Tom Primozic',
author_email='tom.primozic@zemanta.com',
description='A Python client for Facebook Scribe',
long_description=__doc__,
packages=['fb303', 'scribe'],
install_requires=['thrift>=0.9.0'],
)
| 23.553571 | 100 | 0.6884 | 154 | 1,319 | 5.831169 | 0.525974 | 0.077951 | 0.028953 | 0.035635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016682 | 0.181956 | 1,319 | 55 | 101 | 23.981818 | 0.81557 | 0.650493 | 0 | 0 | 0 | 0 | 0.360619 | 0.053097 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09da82b22410d87e6cb012a3478f0bcd68c0338f | 1,353 | py | Python | AdventOfCode/Day6.py | btrzcinski/AdventOfCode | 46012e81ba8a56cde811ad481ab14b43ce73f09f | [
"MIT"
] | null | null | null | AdventOfCode/Day6.py | btrzcinski/AdventOfCode | 46012e81ba8a56cde811ad481ab14b43ce73f09f | [
"MIT"
] | null | null | null | AdventOfCode/Day6.py | btrzcinski/AdventOfCode | 46012e81ba8a56cde811ad481ab14b43ce73f09f | [
"MIT"
] | null | null | null | def turn_on_lights(lights, begin, end):
for x in range(begin[0], end[0]+1):
for y in range(begin[1], end[1]+1):
lights[x][y] = True
def turn_off_lights(lights, begin, end):
for x in range(begin[0], end[0]+1):
for y in range(begin[1], end[1]+1):
lights[x][y] = False
def toggle_lights(lights, begin, end):
for x in range(begin[0], end[0]+1):
for y in range(begin[1], end[1]+1):
if lights[x][y]: lights[x][y] = False
else: lights[x][y] = True
def coordinate_from_string(s):
return tuple([int(x) for x in s.split(",")])
def lit_lights(lights):
return sum(row.count(True) for row in lights)
def all_lights_off():
return [([False] * 1000).copy() for x in range(0,1000)]
def main():
lights = all_lights_off()
with open("Day6.txt") as f:
for command in [l.rstrip() for l in f.readlines()]:
parts = command.split(" ")
begin = coordinate_from_string(parts[-3])
end = coordinate_from_string(parts[-1])
if command.startswith("toggle"): toggle_lights(lights, begin, end)
if parts[1] == "on": turn_on_lights(lights, begin, end)
if parts[1] == "off": turn_off_lights(lights, begin, end)
print("Lit lights: %d" % (lit_lights(lights),))
if __name__ == "__main__":
main() | 34.692308 | 78 | 0.585366 | 213 | 1,353 | 3.577465 | 0.239437 | 0.125984 | 0.133858 | 0.15748 | 0.451444 | 0.406824 | 0.346457 | 0.272966 | 0.272966 | 0.272966 | 0 | 0.031496 | 0.249076 | 1,353 | 39 | 79 | 34.692308 | 0.718504 | 0 | 0 | 0.1875 | 0 | 0 | 0.031758 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21875 | false | 0 | 0 | 0.09375 | 0.3125 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09eb56fa00f33253052cd738d087602992b07a2f | 1,071 | py | Python | broker/broker.py | Max00355/ByteMail-1 | 576f765da9fb88f23b4d5c46868e0ce16db83c7c | [
"MIT"
] | 4 | 2016-02-18T15:11:58.000Z | 2020-01-16T11:07:50.000Z | broker/broker.py | Max00355/ByteMail-1 | 576f765da9fb88f23b4d5c46868e0ce16db83c7c | [
"MIT"
] | 1 | 2021-03-06T00:07:14.000Z | 2021-03-06T00:15:26.000Z | broker/broker.py | Max00355/ByteMail-1 | 576f765da9fb88f23b4d5c46868e0ce16db83c7c | [
"MIT"
] | 1 | 2017-11-08T04:33:38.000Z | 2017-11-08T04:33:38.000Z | import ssl
import socket
import json
import threading
import landerdb
class Broker:
def __init__(self):
self.port = 4321
self.db = landerdb.Connect("nodes.db")
def main(self):
        s = socket.socket()  # a plain TCP socket; ssl.socket was just socket.socket re-exported
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", self.port))
s.listen(5)
while True:
            conn, addr = s.accept()
            threading.Thread(target=self.handle, args=(conn, addr[0])).start()
def handle(self, obj, ip):
data = obj.recv(1024000)
        print(data)
if data:
data = json.loads(data)
port = data['port']
pubkey = data['publickey']
addr = data['addr'].replace("\n", '').replace(" ", '')
if len(addr) == 32:
self.db.insert("nodes", {"addr":addr, "ip":ip, "port":port, "publickey":pubkey})
with open("nodes.db", 'r') as file:
obj.sendall(file.read())
obj.close()
else:
obj.close()
Broker().main()
| 28.945946 | 96 | 0.507937 | 125 | 1,071 | 4.304 | 0.488 | 0.02974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022631 | 0.339869 | 1,071 | 36 | 97 | 29.75 | 0.738331 | 0 | 0 | 0.060606 | 0 | 0 | 0.056956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.151515 | null | null | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
09f22b407997d2470ba0e0eef9c0396d73e10de6 | 13,954 | py | Python | source/py4dlib/utils.py | andreberg/py4dlib | e7832afbbf1a70695310c1a46bf054b5988a104c | [
"Apache-2.0"
] | 18 | 2015-08-30T08:09:57.000Z | 2021-02-15T22:09:21.000Z | source/py4dlib/utils.py | andreberg/py4dlib | e7832afbbf1a70695310c1a46bf054b5988a104c | [
"Apache-2.0"
] | null | null | null | source/py4dlib/utils.py | andreberg/py4dlib | e7832afbbf1a70695310c1a46bf054b5988a104c | [
"Apache-2.0"
] | 2 | 2015-06-19T17:31:28.000Z | 2021-01-25T16:31:50.000Z | # -*- coding: utf-8 -*-
#
# utils.py
# py4dlib
#
# Created by André Berg on 2012-09-28.
# Copyright 2012 Berg Media. All rights reserved.
#
# andre.bergmedia@googlemail.com
#
# pylint: disable-msg=F0401
'''py4dlib.utils -- utility toolbelt for great convenience.'''
import os
import warnings
__version__ = (0, 6)
__date__ = '2012-09-27'
__updated__ = '2013-08-12'
DEBUG = 1 or ('DebugLevel' in os.environ and os.environ['DebugLevel'] > 0)
TESTRUN = 0 or ('TestRunLevel' in os.environ and os.environ['TestRunLevel'] > 0)
from subprocess import Popen, PIPE
from functools import wraps, partial
try:
import c4d #@UnresolvedImport
C4D_VERSION = c4d.GetC4DVersion()
except ImportError:
# TestRunLevel is defined by the Pydev PyUnit launcher to be 1
if TESTRUN == 1:
# define hard coded version here so test scripts
# can set the desired version after importing py4dlib.utils
C4D_VERSION = 12043
def ClearConsole():
version = c4d.GetC4DVersion()
if version > 12999:
cmd = 13957 # R14 (and R13?)
else:
cmd = 1024314 # R12
c4d.CallCommand(cmd)
def FuzzyCompareStrings(a, b, limit=20):
""" Fuzzy string comparison.
Two strings are deemed equal if they have
the same byte sequence for up to 'limit' chars.
Limit can be an int or a percentage string
like for example '60%' in which case 2 strings
are deemed equal if at least '60%' relative to
the longest string match.
"""
result = True
maxchars = limit
if isinstance(limit, basestring):
try:
maxp = int(limit[:-1]) # chop off '%'
maxlen = max(len(a), len(b)) # percentage relative to longest str
maxchars = int((maxp/100.0) * maxlen)
except:
raise ValueError("E: param 'limit' must be one of [str, int] " +
"where str indicates a percentage, e.g. '75%'.")
idx = 0
for char in a:
if idx >= maxchars:
break
try:
if char != b[idx]:
result = False
except IndexError:
result = False
idx += 1
return result
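A condensed, standalone sketch of the percentage-limit semantics documented above (a simplified reimplementation for illustration, not the library function itself):

```python
def fuzzy_equal(a, b, limit=20):
    # A '60%' limit means: compare a number of characters equal to
    # 60% of the longer string; an int limit is a plain char count.
    if isinstance(limit, str) and limit.endswith('%'):
        limit = int(int(limit[:-1]) / 100.0 * max(len(a), len(b)))
    for i, char in enumerate(a[:limit]):
        if i >= len(b) or b[i] != char:
            return False
    return True
```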
def EscapeUnicode(s):
ur""" CINEMA 4D R12's CPython integration stores high-order chars (``ord > 126``)
as 4-byte unicode escape sequences with upper case hex letters.
For example the character ``ä`` (LATIN SMALL LETTER A WITH DIAERESIS)
would be stored as the byte sequence ``\u00E4``. This function replaces
high-order chars with a unicode escape sequence suitable for CINEMA 4D.
If you use this function in R12 you need to balance each call with a
call to :py:func:`UnescapeUnicode` when the time comes to use or display
the string.
In R13 and R14 it returns the string untouched since in those versions
    the CPython integration handles Unicode encoded strings properly.
"""
result = ""
if C4D_VERSION <= 12999:
try:
s = s.decode('utf-8').encode('latin-1')
except UnicodeDecodeError:
pass
except UnicodeEncodeError:
pass
for b in s:
if ord(b) > 126:
result += r"\u%04X" % (ord(b),)
else:
result += b
if DEBUG:
print("result = %r" % result)
else:
return s
return result
def UnescapeUnicode(s):
ur""" CINEMA 4D R12's CPython integration stores high-order chars (``ord > 126``)
as 4-byte unicode escape sequences with upper case hex letters.
This function converts unicode escape sequences used by CINEMA 4D when passing
bytes (e.g. ``\u00FC`` -> ``\xfc``) to their corresponding high-order characters.
It should be used in R12 only and should balance out any calls made to
:py:func:`EscapeUnicode`.
In R13 and R14 the string is returned untouched since in those versions
    the CPython integration handles Unicode encoded strings properly.
"""
if C4D_VERSION <= 12999:
try:
return s.decode('unicode_escape')
except UnicodeEncodeError:
return s
else:
try:
return s.decode("utf-8")
except UnicodeDecodeError:
return s
except UnicodeEncodeError:
return s
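The escape format described in both docstrings can be reproduced standalone; a minimal Python 3 round-trip sketch (helper names are ours, not py4dlib's):

```python
def escape_high(s):
    # Replace every char with ord > 126 by a 4-digit upper-case hex
    # escape, matching the \u00XX format described above.
    return "".join(r"\u%04X" % ord(c) if ord(c) > 126 else c for c in s)

def unescape_high(s):
    # Decode the escape sequences back to the original characters.
    return s.encode("ascii").decode("unicode_escape")
```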
def VersionString(versionTuple):
""" (x,y,z .. n) -> 'x.y.z...n' """
return '.'.join(str(x) for x in versionTuple)
def PPLLString(ll):
""" Returns a pretty-printed string of a ``list<list>`` structure. """
s = " " + repr(ll)[1:-2]
lines = s.split('],')
result = '],\n'.join(lines)
return result + ']'
def System(cmd, args=None):
'''
Convenience function for firing off commands to
the System console. Used instead of `subprocess.call`_
so that shell variables will be expanded properly.
Not the same as `os.system`_ as here it captures
returns ``stdout`` and ``stderr`` in a tuple in
Python 2.5 and lower or a ``namedtuple`` in 2.6
and higher. So you can use ``result[0]`` in the
first case and ``result.out`` in the second.
:param cmd: a console command line
:type cmd: ``string``
:param args: a list of arguments that
will be expanded in cmd
starting with ``$0``
:type args: ``list``
:return: ``tuple`` or ``namedtuple``
'''
if args is None:
fullcmd = cmd
else:
args = ["'{}'".format(s.replace(r'\\', r'\\\\')
.replace("'", r"\'")) for s in args]
fullcmd = "%s %s" % (cmd, ' '.join(args))
out, err = Popen(fullcmd, stdout=PIPE, shell=True).communicate()
System.out = out
System.err = err
try:
from collections import namedtuple
StdStreams = namedtuple('StdStreams', ['out', 'err'])
return StdStreams(out=out, err=err)
except ImportError:
return (out, err)
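A trimmed sketch of the pattern ``System()`` implements — shell expansion plus captured streams in a namedtuple. This standalone variant also pipes stderr, which the original leaves on the inherited handle:

```python
from subprocess import Popen, PIPE
from collections import namedtuple

StdStreams = namedtuple("StdStreams", ["out", "err"])

def run_shell(cmd):
    # shell=True lets the shell expand variables like $HOME;
    # both streams are captured here (the original pipes stdout only).
    out, err = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True).communicate()
    return StdStreams(out=out, err=err)
```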
def benchmark(func=None, prec=3, unit='auto', name_width=0, time_width=8):
"""
A decorator that prints the time a function takes
to execute per call and cumulative total.
Accepts the following keyword arguments:
:param unit: ``str`` time unit for display. one of `[auto, us, ms, s, m]`.
:param prec: ``int`` radix point precision.
:param name_width: ``int`` width of the right-aligned function name field.
:param time_width: ``int`` width of the right-aligned time value field.
For convenience you can also set attributes on the benchmark
function itself with the same name as the keyword arguments
and the value of those will be used instead. This saves you
from having to call the decorator with the same arguments each
time you use it. Just set, for example, ``benchmark.prec = 5``
after the import and before you use it for the first time.
"""
import time
if hasattr(benchmark, 'prec'):
prec = getattr(benchmark, 'prec')
if hasattr(benchmark, 'unit'):
unit = getattr(benchmark, 'unit')
if hasattr(benchmark, 'name_width'):
name_width = getattr(benchmark, 'name_width')
if hasattr(benchmark, 'time_width'):
time_width = getattr(benchmark, 'time_width')
if func is None:
return partial(benchmark, prec=prec, unit=unit,
name_width=name_width, time_width=time_width)
@wraps(func)
def wrapper(*args, **kwargs): # IGNORE:W0613
def _get_unit_mult(val, unit):
multipliers = {'us': 1000000.0, 'ms': 1000.0, 's': 1.0, 'm': (1.0 / 60.0)}
if unit in multipliers:
mult = multipliers[unit]
else: # auto
if val >= 60.0:
unit = "m"
elif val >= 1.0:
unit = "s"
elif val <= 0.001:
unit = "us"
else:
unit = "ms"
mult = multipliers[unit]
return (unit, mult)
t = time.clock()
res = func(*args, **kwargs)
td = (time.clock() - t)
wrapper.total += td
wrapper.count += 1
tt = wrapper.total
cn = wrapper.count
tdu, tdm = _get_unit_mult(td, unit)
ttu, ttm = _get_unit_mult(tt, unit)
td *= tdm
tt *= ttm
print(" -> {0:>{8}}() @ {1:>03}: {3:>{7}.{2}f} {4:>2}, total: {5:>{7}.{2}f} {6:>2}"
.format(func.__name__, cn, prec, td, tdu, tt, ttu, time_width, name_width))
return res
wrapper.total = 0
wrapper.count = 0
return wrapper
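Stripped to its core, the decorator above is the classic closure-with-attributes pattern; a minimal variant (names `timed`/`square` are ours):

```python
import functools
import time

def timed(func):
    # Accumulate per-call duration and call count as attributes on
    # the wrapper, mirroring benchmark()'s total/count bookkeeping.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.total += time.perf_counter() - start
        wrapper.count += 1
        return result
    wrapper.total = 0.0
    wrapper.count = 0
    return wrapper

@timed
def square(x):
    return x * x
```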
def require(*args, **kwargs):
'''
Decorator that enforces types for function/method args.
Two ways to specify which types are required for each arg.
1) 2-tuples, where first member specifies arg index or arg name,
second member specifies a type or a tuple of types.
2) kwargs style, e.g. `argname`=`types` where `types` again can
be a type or a tuple of types.
None is always a valid type, to allow for optional args.
'''
_required = []
_args = args
_kwargs = kwargs
def wrapper(func):
if hasattr(func, "wrapped_args"):
wrapped_args = getattr(func, "wrapped_args")
else:
            code = func.__code__
            wrapped_args = list(code.co_varnames[:code.co_argcount])
@wraps(func)
def wrapped_fn(*args, **kwargs):
def _check_type(_funcname, _locator, _arg, _types):
if _arg is not None and not isinstance(_arg, _types):
pluralstr = "one " if isinstance(_types, (list, tuple)) else "" # IGNORE:W0311
raise TypeError("E: for %s(): param %r must be %sof %r, but is %r" % # IGNORE:W0311
(_funcname, str(_locator), pluralstr, _types, type(_arg)))
def _get_index(param_name):
codeobj_varnames = wrapped_args
try:
_idx = codeobj_varnames.index(param_name)
return _idx
except ValueError:
raise NameError(param_name)
for i in _args:
locator = i[0]
types = i[1]
idx = None
                if isinstance(locator, int):
                    idx = locator
                elif isinstance(locator, str):
                    name = locator
                    idx = _get_index(name)
                    if name in kwargs:
                        _check_type(func.__name__, name, kwargs[name], types)
                        continue
                if idx is None or idx >= len(args):
                    continue
                _check_type(func.__name__, idx, args[idx], types)
            for name, types in _kwargs.items():
                if name in kwargs:
                    _check_type(func.__name__, name, kwargs[name], types)
                else:
                    idx = _get_index(name)
                    if idx >= len(args):
                        continue
                    _check_type(func.__name__, name, args[idx], types)
return func(*args, **kwargs)
wrapped_fn.wrapped_args = wrapped_args
return wrapped_fn
return wrapper
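The per-argument rule the decorator enforces boils down to an `isinstance` test that always accepts `None`. A standalone sketch of that check (the function name `check_type` and the sample calls are illustrative, not taken from the decorator above):

```python
def check_type(funcname, locator, arg, types):
    # None is always accepted, to allow optional arguments
    if arg is not None and not isinstance(arg, types):
        plural = "one " if isinstance(types, (list, tuple)) else ""
        raise TypeError("E: for %s(): param %r must be %sof %r, but is %r"
                        % (funcname, str(locator), plural, types, type(arg)))

check_type("f", "x", 3, int)       # passes
check_type("f", "x", None, int)    # None is always valid
try:
    check_type("f", "x", "3", (int, float))
except TypeError as exc:
    error_message = str(exc)
```

Passing a tuple of types switches the message to the plural "must be one of" form, matching the decorator's behaviour.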
def deprecated(level=1, since=None, info=None):
"""This decorator can be used to mark functions as deprecated.
:param int level: severity level.
0 = warnings.warn(category=DeprecationWarning)
1 = warnings.warn_explicit(category=DeprecationWarning)
2 = raise DeprecationWarning()
:param string since: the version where deprecation was introduced.
:param string info: additional info. normally used to refer to the new
function now favored in place of the deprecated one.
"""
def __decorate(func):
if since is None:
msg = 'Method %s() is deprecated.' % func.__name__
else:
msg = 'Method %s() has been deprecated since version %s.' % (func.__name__, str(since))
if info:
msg += ' ' + info
        @wraps(func)
        def __wrapped(*args, **kwargs):  # IGNORE:C0111
            if level <= 0:
                warnings.warn(msg, category=DeprecationWarning, stacklevel=2)
            elif level == 1:
                warnings.warn_explicit(msg, category=DeprecationWarning,
                                       filename=func.__code__.co_filename,
                                       lineno=func.__code__.co_firstlineno + 1)
            else:
                raise DeprecationWarning(msg)
            return func(*args, **kwargs)
return __wrapped
return __decorate
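Typical use of the warning path can be sketched as follows. This is a self-contained, level-0-only variant of the decorator above; the `old_api`/`new_api` names are made up for illustration:

```python
import warnings
from functools import wraps

def deprecated_simple(info=None):
    # minimal level-0 variant of the decorator above
    def decorate(func):
        msg = 'Method %s() is deprecated.' % func.__name__
        if info:
            msg += ' ' + info
        @wraps(func)
        def wrapped(*args, **kwargs):
            warnings.warn(msg, category=DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapped
    return decorate

@deprecated_simple(info='Use new_api() instead.')
def old_api(x):
    return x * 2

# capture the warning instead of printing it
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)
```

`catch_warnings(record=True)` lets a test assert both that the function still runs and that the deprecation message was emitted.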
def cache(func):
"""Classic cache decorator."""
saved = {}
@wraps(func)
def wrapped_fn(*args):
if args in saved:
return saved[args]
result = func(*args)
saved[args] = result
return result
return wrapped_fn
def memoize(func):
"""Classic memoization decorator."""
mem_cache = {}
@wraps(func)
def wrapped_fn(*args, **kw):
key = (args, tuple(sorted(kw.items())))
if key not in mem_cache:
mem_cache[key] = func(*args, **kw)
return mem_cache[key]
return wrapped_fn
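A quick check of the memoization behaviour. The decorator body is reproduced from above so the snippet is self-contained; `slow_square` is just an illustration:

```python
from functools import wraps

def memoize(func):
    """Classic memoization decorator (as defined above)."""
    mem_cache = {}
    @wraps(func)
    def wrapped_fn(*args, **kw):
        key = (args, tuple(sorted(kw.items())))
        if key not in mem_cache:
            mem_cache[key] = func(*args, **kw)
        return mem_cache[key]
    return wrapped_fn

calls = []

@memoize
def slow_square(x):
    calls.append(x)       # record every real computation
    return x * x

slow_square(4)
slow_square(4)            # served from the cache, no new entry in calls
slow_square(5)
```

Note that unlike `cache`, `memoize` also folds keyword arguments into the cache key, so `f(x=1)` and `f(1)` are cached separately.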
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --- main.py (stillmatic/plaitpy, MIT License) ---
from __future__ import print_function
from src import cli
from os import environ as ENV
PROFILE=False
if PROFILE:
print("PROFILING")
import cProfile
cProfile.run("cli.main()", "restats")
import pstats
p = pstats.Stats('restats')
p.strip_dirs().sort_stats('cumulative').print_stats(50)
else:
cli.main()
# --- tacotron2/datasets/length_sort_sampler.py (DashaSerdyuk/tacotron2, BSD-3-Clause) ---
from torch.utils.data import Sampler
import numpy as np
def get_chunks(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
def flatten(l):
return [item for sublist in l for item in sublist]
class LengthSortSampler(Sampler):
def __init__(self, data_source, bs):
super().__init__(data_source)
self.data_source = data_source
self.bs = bs
try:
int(self.data_source[0])
lengths = self.data_source
except TypeError:
lengths = [len(x) for x in self.data_source]
inds = np.argsort(lengths)[::-1]
chunks = list(get_chunks(inds, bs))
chunk_inds = list(range(len(chunks) - 1))
np.random.shuffle(chunk_inds)
chunk_inds = list(chunk_inds) + [len(chunk_inds)]
self.inds = flatten([chunks[i] for i in chunk_inds])
def __len__(self):
return len(self.data_source)
def __iter__(self):
return iter(self.inds)
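The ordering strategy (sort indices by sequence length, batch them, shuffle whole batches, but keep the final and possibly short batch last) can be sketched without torch or numpy. This is an illustrative stdlib reimplementation, not the class above; `length_sort_order` is a made-up name:

```python
import random

def length_sort_order(lengths, bs, seed=0):
    # indices sorted by length, longest first
    inds = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    # consecutive chunks of bs indices, so each batch has similar lengths
    chunks = [inds[i:i + bs] for i in range(0, len(inds), bs)]
    # shuffle every chunk except the last (ragged) one, which stays last
    chunk_inds = list(range(len(chunks) - 1))
    random.Random(seed).shuffle(chunk_inds)
    chunk_inds.append(len(chunks) - 1)
    return [i for c in chunk_inds for i in chunks[c]]

order = length_sort_order([5, 1, 4, 2, 3], bs=2, seed=0)
```

Keeping the ragged chunk last matters because a short final batch in the middle of an epoch would upset learning-rate schedules that assume a constant batch size.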
# --- 0. labos/data.py (drakipovic/deep-learning, MIT License) ---
import math
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, precision_score, recall_score
class Random2DGaussian(object):
min_x = 0
max_x = 10
min_y = 0
max_y = 10
def __init__(self):
self.mean = [(self.max_x-self.min_x) * np.random.random_sample() + self.min_x,
(self.max_y-self.min_y) * np.random.random_sample() + self.min_y]
eigval_x = (np.random.random_sample() * (self.max_x - self.min_x) / 5)**2
eigval_y = (np.random.random_sample() * (self.max_y - self.min_y) / 5)**2
D = [[eigval_x, 0], [0, eigval_y]]
angle = 2 * math.pi * np.random.random_sample()
R = [[math.cos(angle), -math.sin(angle)], [math.sin(angle), math.cos(angle)]]
self.cov = np.dot(np.dot(R, D), np.transpose(R))
def get_sample(self, n):
return np.random.multivariate_normal(self.mean, self.cov, size=n)
def binlogreg_train(X, Y_, param_niter=1000, param_delta=0.001):
w, b = np.random.randn(2, 1), 0
for i in range(param_niter):
scores = np.dot(X, w) + b
probs = np.exp(scores) / (1 + np.exp(scores))
loss = -np.sum(np.log(probs[Y_==1]) + np.log(1 - probs[Y_==0]))
if i % 100 == 0:
print "Iteration {}: loss: {}".format(i, loss)
dl_dscores = probs - Y_
grad_w = np.dot(np.transpose(X), dl_dscores)
grad_b = np.sum(dl_dscores)
w += -param_delta * grad_w
b += -param_delta * grad_b
return w, b
def binlogreg_classify(X, w, b):
s = np.dot(X, w) + b
probs = np.exp(s) / (1 + np.exp(s))
return probs
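The classification step is just the logistic sigmoid applied to a linear score. In scalar form (plain Python, no numpy; the branching is a standard trick to avoid overflow in `exp` for large negative scores, which the vectorised code above does not guard against):

```python
import math

def sigmoid(score):
    # numerically stable logistic function, equivalent to exp(s) / (1 + exp(s))
    if score >= 0:
        return 1.0 / (1.0 + math.exp(-score))
    e = math.exp(score)
    return e / (1.0 + e)

p = sigmoid(0.0)   # 0.5: a score of zero means maximum uncertainty
```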
def sample_gauss_2d(C, N):
X = []
Y = []
for i in range(C):
gauss = Random2DGaussian()
x = gauss.get_sample(N)
X.extend(x)
y=[[i] for j in range(N)]
Y.extend(y)
return np.array(X), np.array(Y)
def eval_perf_binary(Y, Y_):
acc = accuracy_score(Y, Y_)
prec = precision_score(Y, Y_, average='binary')
rec = recall_score(Y, Y_, average='binary')
return acc, prec, rec
def decision_func(X):
scores = X[:,0] + X[:,1] - 5
return scores>0.5
def graph_data(X, Y, Y_):
correct_idx = (Y==Y_).T.flatten()
wrong_idx = (Y!=Y_).T.flatten()
plt.scatter(X[correct_idx, 0], X[correct_idx, 1], c=Y_[correct_idx], marker='o', label='correct')
plt.scatter(X[wrong_idx, 0], X[wrong_idx, 1], c=Y_[wrong_idx], marker='s', label='wrong')
plt.legend(loc='upper left')
plt.show()
if __name__ == '__main__':
X, Y_ = sample_gauss_2d(2, 100)
w, b = binlogreg_train(X, Y_)
probs = binlogreg_classify(X, w, b)
Y = probs>0.5
    print(eval_perf_binary(Y, Y_))
graph_data(X, Y, Y_)
# --- feedback/admin.py (alekam/django-feedback, BSD-3-Clause) ---
from django.contrib import admin
from feedback.models import Feedback
class FeedbackAdmin(admin.ModelAdmin):
list_display = ['name', 'email', 'message', 'time']
search_fields = ['email', 'message']
list_filter = ['time', ]
date_hierarchy = 'time'
raw_id_fields = ('user', )
admin.site.register(Feedback, FeedbackAdmin)
| 24.5 | 55 | 0.693878 | 39 | 343 | 5.948718 | 0.666667 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166181 | 343 | 13 | 56 | 26.384615 | 0.811189 | 0 | 0 | 0 | 0 | 0 | 0.12828 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.888889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
61eddfa4d6fe9359fa77088a788b9bcd02ce14c6 | 14,985 | py | Python | TrainingDataFromSgfValue.py | stefanpeidli/GoNet | a9340b0bd0e361612fb5585b54d137d0f2c1ff2e | [
"MIT"
] | 6 | 2017-10-26T15:10:51.000Z | 2019-10-22T18:20:57.000Z | TrainingDataFromSgfValue.py | stefanpeidli/GoNet | a9340b0bd0e361612fb5585b54d137d0f2c1ff2e | [
"MIT"
] | 15 | 2017-10-31T13:58:55.000Z | 2018-01-26T16:31:36.000Z | TrainingDataFromSgfValue.py | stefanpeidli/GoNet | a9340b0bd0e361612fb5585b54d137d0f2c1ff2e | [
"MIT"
] | 4 | 2017-10-28T17:35:33.000Z | 2018-11-11T22:29:59.000Z | # -*- coding: utf-8 -*-
"""
Created on Sat Nov 18 12:27:16 2017
@author: Stefan Peidli
This script reads sgf files from a file directory, converts them into pairs of "current board - next move" and stores
them either in a dictionary or in a sqlite3 database.
"""
import os
from collections import defaultdict
from Board import *
from Filters import *
from Hashable import Hashable
import pandas as pd
import time
import sqlite3
import io
# necessary to store numpy arrays in the database (adapted from Stack Overflow)
def adapt_array(arr):
out = io.BytesIO()
np.save(out, arr)
out.seek(0)
return sqlite3.Binary(out.read())
def convert_array(text):
out = io.BytesIO(text)
out.seek(0)
return np.load(out)
# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)
# Converts TEXT to np.array when selecting
sqlite3.register_converter("array", convert_array)
class TrainingDataSgfPass:
# standard initialize with boardsize 9
def __init__(self, folder=None, id_list=range(1000), dbNameMoves = False, dbNameDist = False, dbValue = False):
self.n = 9
self.board = Board(self.n)
self.dic = defaultdict(np.ndarray)
self.passVector = np.zeros(self.n*self.n + 1, dtype=np.int32)
self.passVector[self.n*self.n]=1
self.winner = ''
self.dbFlagMoves = False
self.dbFlagDist = False
self.dbValue = dbValue
if self.dbValue:
con = sqlite3.connect(r"DB/ValueDB's/" + self.dbValue, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (id INTEGER PRIMARY KEY, board array, move array)")
con.close()
if dbNameMoves:
self.dbFlagMoves = True
self.dbNameMoves = dbNameMoves
con = sqlite3.connect(r"DB/Move/" + self.dbNameMoves, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (id INTEGER PRIMARY KEY, board array, move array)")
con.close()
if dbNameDist:
self.dbFlagDist = True
self.dbNameDist = dbNameDist
con = sqlite3.connect(r"DB/Dist/" + self.dbNameDist, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (id INTEGER PRIMARY KEY, board array, distribution array)")
con.close()
if folder is not None:
self.importTrainingData(folder, id_list)
    # helper method: convert a one-hot move vector to the corresponding coordinate tuple
def toCoords(self, vector):
vector = vector.flatten()
if np.linalg.norm(vector) == 1:
for entry in range(len(vector)):
if vector[entry] == 1:
return entry % 9, int(entry / 9)
else:
return None
def importTrainingData(self, folder, id_list=range(1000)):
dir_path = os.path.dirname(os.path.realpath(__file__))
if type(id_list) is str:
id_list_prepped = pd.read_excel(dir_path + '/' + 'Training_Sets' + '/' + id_list + '.xlsx')[
"game_id"].values.tolist()
else:
id_list_prepped = id_list
filedir = dir_path + "/" + folder + "/"
gameIDs = [] # stores the suffix
for i in id_list_prepped:
currfile = filedir + "game_" + str(i) + ".sgf"
if os.path.exists(currfile):
gameIDs = np.append(gameIDs, i)
self.importSingleFile(currfile)
# close db after last call of importSingleFile
if self.dbFlagMoves:
con = sqlite3.connect(r"DB/Move/" + self.dbNameMoves, detect_types=sqlite3.PARSE_DECLTYPES)
con.close()
if self.dbFlagDist:
con = sqlite3.connect(r"DB/Dist/" + self.dbNameDist, detect_types=sqlite3.PARSE_DECLTYPES)
con.close()
if self.dbValue:
con = sqlite3.connect(r"DB/Dist/" + self.dbValue, detect_types=sqlite3.PARSE_DECLTYPES)
con.close()
def importSingleFile(self, currfile):
# set winner to blank (important)
self.winner = ''
with open(currfile, encoding="Latin-1") as f:
movelistWithAB = f.read().split(';')[1:]
temp = ''
for i in range(len(movelistWithAB[0])):
if temp == 'R' and movelistWithAB[0][i] == 'E':
self.winner = movelistWithAB[0][i + 2]
temp = movelistWithAB[0][i]
if self.winner != 'W' and self.winner != 'B':
print("no winner")
return
print(self.winner)
movelist = movelistWithAB[1:]
self.board.clear()
# first board of the game is always empty board
prevBoardMatrix = np.zeros((self.n, self.n), dtype=np.int32)
currBoardMatrix = np.zeros((self.n, self.n), dtype=np.int32)
# NOT always empty: add black handycap stones
for line in movelistWithAB[0].split('\n'):
if line.startswith("AB"):
handycapmoves = []
m = line.split('[')
for i in range(1, len(m) - 1):
if not m[i].startswith('P'):
handycapmoves.append('B[' + m[i][0] + m[i][1] + ']')
movelist = handycapmoves + movelist
# some corrupt files have more ';" at beginning
if movelist[0].startswith('&') or movelist[0].startswith(' ') or movelist[0].startswith('d') \
or movelist[0].startswith('(') or movelist[0].startswith('i') or movelist[0].startswith('o') \
or len(movelist[0]) > 20:
for m in movelist:
if m.startswith('&') or m.startswith(' ') or m.startswith('d') \
or m.startswith('(') or m.startswith('i') or m.startswith('o') \
or len(m) > 20:
movelist = movelist[1:]
for i in range(0, len(movelist)):
# we need to keep track of who played that move, B=-1, W=+1
stoneColor = movelist[i].split('[')[0]
if stoneColor == 'B':
stone = -1
elif stoneColor == 'W':
stone = 1
# game finished
elif stoneColor == 'C':
return
# PL tells whose turn it is to play in move setup situation ??
elif stoneColor == "PL":
return
# Comment in end
elif len(stoneColor) > 2:
return
# MoveNumber added at before move
elif stoneColor == "MN":
movelist[i] = movelist[i].split(']')[1]
stoneColor = movelist[i].split('[')[0]
if stoneColor == 'B':
stone = -1
elif stoneColor == 'W':
stone = 1
else:
return
# now we extract the next move, e.g. 'bb'
move = movelist[i].split('[')[1].split(']')[0]
# print(stoneColor, "plays", move)
# If not Pass
if len(move) > 0:
currBoardMatrix[ord(move[1]) - 97, ord(move[0]) - 97] = stone
self.addToDict(prevBoardMatrix, currBoardMatrix, stone)
# now we play the move on the board and see if we have to remove stones
coords = self.toCoords(np.absolute(currBoardMatrix - prevBoardMatrix))
if coords is not None:
self.board.play_stone(coords[1], coords[0], stone)
prevBoardMatrix = np.copy(self.board.vertices)
currBoardMatrix = np.copy(self.board.vertices)
# add pass as move to dic
else:
self.addToDict(prevBoardMatrix, currBoardMatrix, stone, passing=True)
    # helper method: add the (board, move) pair to the dictionary for every possible rotation/reflection
def addToDict(self, prevBoardMatrix, currBoardMatrix, player, passing=False):
currPrevPair = (currBoardMatrix, prevBoardMatrix)
flippedCurrPrevPair = (np.flip(currPrevPair[0], 1), np.flip(currPrevPair[1], 1))
symmats = [currPrevPair, flippedCurrPrevPair]
for i in range(3):
currPrevPair = (np.rot90(currPrevPair[0]), np.rot90(currPrevPair[1]))
flippedCurrPrevPair = (np.rot90(flippedCurrPrevPair[0]), np.rot90(flippedCurrPrevPair[1]))
symmats.append(currPrevPair)
symmats.append(flippedCurrPrevPair)
for rotatedPair in symmats:
currBoardVector = rotatedPair[0].flatten()
prevBoardVector = rotatedPair[1].flatten()
            if player == -1:  # train the network only for player Black; if White played, flip colors below (B=-1, W=+1)
self.addEntryToDic(prevBoardVector, prevBoardVector, currBoardVector, passing)
else:
invPrevBoardVector = np.zeros(9 * 9, dtype=np.int32)
for count in range(len(prevBoardVector)):
if prevBoardVector[count] != 0:
invPrevBoardVector[count] = -1 * prevBoardVector[count]
else:
invPrevBoardVector[count] = 0
self.addEntryToDic(invPrevBoardVector, prevBoardVector, currBoardVector, passing)
def addEntryToDic(self, entryBoardVector, prevBoardVector, currBoardVector, passing):
move = np.absolute(currBoardVector - prevBoardVector)
        if self.dbFlagMoves:
con = sqlite3.connect(r"DB/Move/" + self.dbNameMoves, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
if passing == False:
cur.execute("insert into test values (?, ?, ?)",
(None, entryBoardVector, np.append(np.absolute(currBoardVector - prevBoardVector), 0)))
else:
cur.execute("insert into test values (?, ?, ?)", (None, entryBoardVector, np.copy(self.passVector)))
con.commit()
        if self.dbFlagDist:
con = sqlite3.connect(r"DB/Dist/" + self.dbNameDist, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
if passing == False:
cur.execute("select count(*) from test where board = ?", (prevBoardVector,))
data = cur.fetchall()
if data[0][0] == 0:
cur.execute("insert into test values (?, ?, ?)",
(None, entryBoardVector, np.append(np.absolute(currBoardVector - prevBoardVector), 0)))
else:
cur.execute("select distribution from test where board = ?", (prevBoardVector,))
old_dist = cur.fetchall()
cur.execute("UPDATE test SET distribution = ? WHERE board = ?", (old_dist[0][0] + np.append(np.absolute(currBoardVector - prevBoardVector), 0), entryBoardVector))
else:
cur.execute("select count(*) from test where board = ?", (prevBoardVector,))
data = cur.fetchall()
if data[0][0] == 0:
cur.execute("insert into test values (?, ?, ?)",
(None, entryBoardVector, np.copy(self.passVector)))
else:
cur.execute("select distribution from test where board = ?", (prevBoardVector,))
old_dist = cur.fetchall()
cur.execute("UPDATE test SET distribution = ? WHERE board = ?",
(old_dist[0][0] + np.copy(self.passVector), prevBoardVector))
con.commit()
        if self.dbValue:
            con = sqlite3.connect(r"DB/ValueDB's/" + self.dbValue, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("select count(*) from test where board = ?", (prevBoardVector,))
data = cur.fetchall()
if data[0][0] == 0:
cur.execute("insert into test values (?, ?, ?)", (None, entryBoardVector, np.copy(self.passVector)))
con.commit()
else:
if passing==False:
if Hashable(entryBoardVector) in self.dic:
self.dic[Hashable(entryBoardVector)] += np.append(np.absolute(currBoardVector - prevBoardVector), 0)
else:
self.dic[Hashable(entryBoardVector)] = np.append(np.absolute(currBoardVector - prevBoardVector), 0)
else:
if Hashable(entryBoardVector) in self.dic:
self.dic[Hashable(entryBoardVector)] += np.copy(self.passVector)
else:
self.dic[Hashable(entryBoardVector)] = np.copy(self.passVector)
# end class TrainingDataSgfPass
def dbTest2():
dbName = 'dan_data_10'
con = sqlite3.connect(r"DB/Dist/" + dbName, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("Select * from test where id > 5730")
data = cur.fetchall()
print(data)
con.close()
#dbTest2()
def test3pass():
    start = time.perf_counter()
t = TrainingDataSgfPass("dgs")
zeroMatrix = t.dic[Hashable(np.zeros(t.n * t.n, dtype=np.int32))]
print('\n', zeroMatrix[:-1].reshape((9, 9)), '\n')
print("passing: ", zeroMatrix[81])
print("entries: ", np.sum(zeroMatrix))
# print cumulated move distributions for boards with exactly one stone
secondMoveDist = np.zeros(9 * 9+1, dtype=np.int32)
for entry in t.dic:
thisis = Hashable.unwrap(entry)
if np.sum(np.absolute(thisis)) == 1:
secondMoveDist += t.dic[entry]
print(secondMoveDist[:-1].reshape((9, 9)))
print("passing: ", secondMoveDist[81])
print("entries: ",np.sum(secondMoveDist))
print("\nTime " + str(time.clock() - start))
#test3pass()
def dbCreate():
TrainingDataSgfPass(folder="dgs", id_list = 'dan_data_10', dbNameMoves="dan_data_10")
con = sqlite3.connect(r"DB/Move/dan_data_10", detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("select * from test where id <= 100")
data = cur.fetchall()
    con.close()
print(data)
#dbCreate()
def distDbCreate():
TrainingDataSgfPass(folder="dgs", id_list='dan_data_10', dbNameDist="dan_data_10_test")
con = sqlite3.connect(r"DB/Dist/dan_data_10_test", detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("select count(*) from test where id <= 100")
data = cur.fetchall()
con.close()
print(data[0][0])
#distDbCreate()
def testWinner():
TrainingDataSgfPass("dgs", id_list=range(10000))
#testWinner()
def testGetData():
testset = TrainingDataSgfPass("dgs", id_list=range(10))
print("Type", type(testset.dic))
for entry in testset.dic:
testdata = Hashable.unwrap(entry)
targ = testset.dic[entry].reshape(9 * 9 + 1)
#print("Testdata", testdata)
print("Target", targ)
testGetData()
| 43.184438 | 183 | 0.578512 | 1,681 | 14,985 | 5.108269 | 0.198096 | 0.020962 | 0.023757 | 0.025154 | 0.399208 | 0.367882 | 0.355188 | 0.341097 | 0.295796 | 0.285781 | 0 | 0.020807 | 0.300834 | 14,985 | 346 | 184 | 43.309249 | 0.798797 | 0.097097 | 0 | 0.338182 | 0 | 0 | 0.085322 | 0.001779 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050909 | false | 0.090909 | 0.047273 | 0 | 0.134545 | 0.050909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
61ef1534c49a70727e4a833fa33c2f8123c653da | 641 | py | Python | second-analysis-steps/code/building-decays/00.start.py | mesmith75/starterkit-lessons | 8a7afc67f7fe2a805bf18b2e63e6721ec0d262ab | [
"CC-BY-4.0"
] | null | null | null | second-analysis-steps/code/building-decays/00.start.py | mesmith75/starterkit-lessons | 8a7afc67f7fe2a805bf18b2e63e6721ec0d262ab | [
"CC-BY-4.0"
] | null | null | null | second-analysis-steps/code/building-decays/00.start.py | mesmith75/starterkit-lessons | 8a7afc67f7fe2a805bf18b2e63e6721ec0d262ab | [
"CC-BY-4.0"
] | null | null | null | from Configurables import DaVinci
from GaudiConf import IOHelper
DaVinci().InputType = 'DST'
DaVinci().TupleFile = 'DVntuple.root'
DaVinci().PrintFreq = 1000
DaVinci().DataType = '2012'
DaVinci().Simulation = True
# Only ask for luminosity information when not using simulated data
DaVinci().Lumi = not DaVinci().Simulation
DaVinci().EvtMax = 1000
# Use the local input data
IOHelper().inputFiles([('root://eoslhcb.cern.ch/'
'/eos/lhcb/grid/prod/lhcb/'
'MC/2012/ALLSTREAMS.DST/00035742/0000/'
'00035742_00000001_1.allstreams.dst')],
clear=True)
| 33.736842 | 67 | 0.645866 | 71 | 641 | 5.802817 | 0.676056 | 0.082524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091278 | 0.230889 | 641 | 18 | 68 | 35.611111 | 0.744422 | 0.140406 | 0 | 0 | 0 | 0 | 0.25365 | 0.217153 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
61ef3de62f7eda03dbfd2a5832ccb97a2c97a2f1 | 2,464 | py | Python | tools/profiling/microbenchmarks/bm_diff/bm_speedup.py | eaglesunshine/grpc_learn | ecf33b83c9d49892539a3ffed9dc1553d91f59ae | [
"BSD-3-Clause"
] | 1 | 2017-12-12T20:55:14.000Z | 2017-12-12T20:55:14.000Z | tools/profiling/microbenchmarks/bm_diff/bm_speedup.py | eaglesunshine/grpc_learn | ecf33b83c9d49892539a3ffed9dc1553d91f59ae | [
"BSD-3-Clause"
] | null | null | null | tools/profiling/microbenchmarks/bm_diff/bm_speedup.py | eaglesunshine/grpc_learn | ecf33b83c9d49892539a3ffed9dc1553d91f59ae | [
"BSD-3-Clause"
] | 1 | 2020-11-04T04:12:37.000Z | 2020-11-04T04:12:37.000Z | # Copyright 2017, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from scipy import stats
import math
_DEFAULT_THRESHOLD = 1e-10
def scale(a, mul):
return [x * mul for x in a]
def cmp(a, b):
return stats.ttest_ind(a, b)
def speedup(new, old, threshold = _DEFAULT_THRESHOLD):
if (len(set(new))) == 1 and new == old: return 0
s0, p0 = cmp(new, old)
if math.isnan(p0): return 0
if s0 == 0: return 0
if p0 > threshold: return 0
if s0 < 0:
pct = 1
while pct < 100:
sp, pp = cmp(new, scale(old, 1 - pct / 100.0))
if sp > 0: break
if pp > threshold: break
pct += 1
return -(pct - 1)
else:
pct = 1
while pct < 10000:
sp, pp = cmp(new, scale(old, 1 + pct / 100.0))
if sp < 0: break
if pp > threshold: break
pct += 1
return pct - 1
if __name__ == "__main__":
new = [0.0, 0.0, 0.0, 0.0]
old = [2.96608e-06, 3.35076e-06, 3.45384e-06, 3.34407e-06]
  print(speedup(new, old, 1e-5))
  print(speedup(old, new, 1e-5))
| 34.704225 | 72 | 0.70211 | 386 | 2,464 | 4.448187 | 0.432642 | 0.008154 | 0.010483 | 0.011648 | 0.205009 | 0.163075 | 0.163075 | 0.158416 | 0.158416 | 0.158416 | 0 | 0.046512 | 0.214692 | 2,464 | 70 | 73 | 35.2 | 0.840827 | 0.596997 | 0 | 0.176471 | 0 | 0 | 0.008299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.058824 | null | null | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
61f3821391f916421b7dcc7d806b0069a8f04988 | 316 | py | Python | render_static/tests/bad_pattern.py | bckohan/django-static-templates | 91dc8f07ba8cb61694959cc41be3a40ed8678104 | [
"MIT"
] | 5 | 2021-02-25T20:23:18.000Z | 2022-03-16T04:43:14.000Z | render_static/tests/bad_pattern.py | bckohan/django-static-templates | 91dc8f07ba8cb61694959cc41be3a40ed8678104 | [
"MIT"
] | 48 | 2021-02-25T06:04:58.000Z | 2022-03-30T20:20:42.000Z | render_static/tests/bad_pattern.py | bckohan/django-static-templates | 91dc8f07ba8cb61694959cc41be3a40ed8678104 | [
"MIT"
] | null | null | null | import re
from django.urls import path
from render_static.tests.views import TestView
class Unrecognized:
regex = re.compile('Im not normal')
class NotAPattern:
pass
urlpatterns = [
path('test/simple/', TestView.as_view(), name='bad'),
NotAPattern()
]
urlpatterns[0].pattern = Unrecognized()
| 15.047619 | 57 | 0.708861 | 39 | 316 | 5.692308 | 0.74359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003831 | 0.174051 | 316 | 20 | 58 | 15.8 | 0.846743 | 0 | 0 | 0 | 0 | 0 | 0.088608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
61fb9ee25c144462b562757e58c7cf21956780c1 | 2,733 | py | Python | omdatabase/lib/kobas/kb/create_table.py | bioShaun/OMdatabase | 2a794af8e02e74527ae9400b207f17bf9b50bd24 | [
"MIT"
] | null | null | null | omdatabase/lib/kobas/kb/create_table.py | bioShaun/OMdatabase | 2a794af8e02e74527ae9400b207f17bf9b50bd24 | [
"MIT"
] | null | null | null | omdatabase/lib/kobas/kb/create_table.py | bioShaun/OMdatabase | 2a794af8e02e74527ae9400b207f17bf9b50bd24 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
def organism(con):
con.executescript(
'''
CREATE TABLE Organisms
(
abbr TEXT PRIMARY KEY,
name TEXT
);
''')
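How these schema helpers are meant to be driven can be shown with an in-memory database. The `organism` body is reproduced from above so the snippet is self-contained; the inserted values are made up:

```python
import sqlite3

def organism(con):
    # as defined above: create the Organisms lookup table
    con.executescript('''
        CREATE TABLE Organisms
        (
            abbr TEXT PRIMARY KEY,
            name TEXT
        );
    ''')

con = sqlite3.connect(":memory:")
organism(con)
con.execute("insert into Organisms values (?, ?)", ("hsa", "Homo sapiens"))
name = con.execute("select name from Organisms where abbr = 'hsa'").fetchone()[0]
con.close()
```

`executescript` is used rather than `execute` because each helper issues several statements (tables plus their indexes) in one string.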
def species(con):
con.executescript(
'''
CREATE TABLE Genes
(
gid TEXT PRIMARY KEY,
name TEXT
);
CREATE TABLE GeneEntrezGeneIds
(
gid TEXT,
entrez_gene_id TEXT,
PRIMARY KEY (gid, entrez_gene_id)
);
CREATE INDEX GeneEntrezGeneIds_idx_entrez_gene_id ON GeneEntrezGeneIds (entrez_gene_id);
CREATE TABLE GeneGis
(
gid TEXT,
gi TEXT,
PRIMARY KEY (gid, gi)
);
CREATE INDEX GeneGis_idx_gi ON GeneGis (gi);
CREATE TABLE GeneUniprotkbAcs
(
gid TEXT,
uniprotkb_ac TEXT,
PRIMARY KEY (gid, uniprotkb_ac)
);
CREATE INDEX GeneUniprotkbAcs_idx_uniprotkb_ac ON GeneUniprotkbAcs (uniprotkb_ac);
CREATE TABLE GeneEnsemblGeneIds
(
gid TEXT,
ensembl_gene_id TEXT,
PRIMARY KEY (gid, ensembl_gene_id)
);
CREATE INDEX GeneEnsemblGeneIds_idx_ensembl_gene_id ON GeneEnsemblGeneIds (ensembl_gene_id);
CREATE TABLE Orthologs
(
gid TEXT,
oid TEXT,
PRIMARY KEY (gid, oid)
);
CREATE TABLE Pathways
(
pid INTEGER PRIMARY KEY,
db TEXT,
id TEXT,
name TEXT
);
CREATE TABLE GenePathways
(
gid TEXT,
pid INTEGER,
PRIMARY KEY (gid, pid)
);
CREATE TABLE Diseases
(
did INTEGER PRIMARY KEY,
db TEXT,
id TEXT,
name TEXT
);
CREATE TABLE GeneDiseases
(
gid TEXT,
did INTEGER,
PRIMARY KEY (gid, did)
);
CREATE TABLE Gos
(
goid TEXT PRIMARY KEY,
name TEXT
);
CREATE TABLE GeneGos
(
gid TEXT,
goid TEXT,
PRIMARY KEY (gid, goid)
);
''')

def ko(con):
con.executescript(
'''
CREATE TABLE Kos
(
koid TEXT PRIMARY KEY,
name TEXT
);
CREATE TABLE KoGenes
(
koid TEXT,
gid TEXT,
PRIMARY KEY (koid, gid)
);
CREATE INDEX KoGenes_idx_gid ON KoGenes (gid);
CREATE TABLE KoEntrezGeneIds
(
koid TEXT,
entrez_gene_id TEXT,
PRIMARY KEY (koid, entrez_gene_id)
);
CREATE INDEX KoEntrezGeneIds_idx_entrez_gene_id ON KoEntrezGeneIds (entrez_gene_id);
CREATE TABLE KoGis
(
koid TEXT,
gi TEXT,
PRIMARY KEY (koid, gi)
);
CREATE INDEX KoGis_idx_gi ON KoGis (gi);
CREATE TABLE KoUniprotkbAcs
(
koid TEXT,
uniprotkb_ac TEXT,
PRIMARY KEY (koid, uniprotkb_ac)
);
CREATE INDEX KoUniprotkbAcs_idx_uniprotkb_ac ON KoUniprotkbAcs (uniprotkb_ac);
CREATE TABLE KoEnsemblGeneIds
(
koid TEXT,
ensembl_gene_id TEXT,
PRIMARY KEY (koid, ensembl_gene_id)
);
CREATE INDEX KoEnsemblGeneIds_idx_ensembl_gene_id ON KoEnsemblGeneIds (ensembl_gene_id);
CREATE TABLE Pathways
(
pid TEXT PRIMARY KEY,
name TEXT
);
CREATE TABLE KoPathways
(
koid TEXT,
pid TEXT,
PRIMARY KEY (koid, pid)
);
''')
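The helpers in this file all follow one pattern: take a DB-API connection and run a `CREATE TABLE` script through `executescript`. A minimal usage sketch (not part of the original module) against an in-memory SQLite database, with an inline copy of `organism` so the snippet stands alone:

```python
import sqlite3


def organism(con):
    # Inline copy of the organism() helper defined in this file
    con.executescript(
        '''
        CREATE TABLE Organisms
        (
            abbr TEXT PRIMARY KEY,
            name TEXT
        );
        ''')


con = sqlite3.connect(':memory:')
organism(con)
con.execute("INSERT INTO Organisms (abbr, name) VALUES (?, ?)",
            ('hsa', 'Homo sapiens'))
name = con.execute("SELECT name FROM Organisms WHERE abbr = ?",
                   ('hsa',)).fetchone()[0]
print(name)  # -> Homo sapiens
```

The same call shape works for `species(con)` and `ko(con)`; only the schema text differs.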
| 18.342282 | 92 | 0.687889 | 356 | 2,733 | 5.117978 | 0.154494 | 0.126784 | 0.130626 | 0.062569 | 0.456641 | 0.231614 | 0.192097 | 0.052689 | 0.052689 | 0.052689 | 0 | 0 | 0.234541 | 2,733 | 148 | 93 | 18.466216 | 0.870937 | 0.007318 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
11033524d9556c411058eec67a071b2bd886b6df | 5,080 | py | Python | deprecated/thermobalance.py | mdbartos/RIPS | ab654138ccdcd8cb7c4ab53092132e0156812e95 | [
"MIT"
] | 1 | 2021-04-02T03:05:55.000Z | 2021-04-02T03:05:55.000Z | deprecated/thermobalance.py | mdbartos/RIPS | ab654138ccdcd8cb7c4ab53092132e0156812e95 | [
"MIT"
] | 2 | 2015-05-13T23:35:43.000Z | 2015-05-22T00:51:23.000Z | deprecated/thermobalance.py | mdbartos/RIPS | ab654138ccdcd8cb7c4ab53092132e0156812e95 | [
"MIT"
] | 2 | 2015-05-13T23:29:03.000Z | 2015-05-21T22:50:15.000Z | import math
import numpy as np
def T_c(I, T_amb, V, D, R_list, N_cond=1, T_range=[298,323,348], a_s=0.9, e_s=0.9, I_sun=900.0, temp_factor=1, wind_factor=1, n_iter=10):
"""
%% TO BE ASSIGNED
T_line % line temperature (C)
T_surf % line surface temperature (C)
T_film % line film temperature (C)
L % line length (m)
%% INDEPENDENT VARIABLES
T_amb % ambient temperature (C)
I % electrical current (A)
D % line diameter (m)
V % wind velocity (m / s)
%% CONSTANTS AND PARAMETERS
R % line resistance (ohm)
R_0 % line resistivity at temperature T_0 (ohm)
a_T % temperature coefficient of resistance (K^-1)
T_0 % reference temperature (C)
k % air thermal conductivity (W / mK)
a_s % absorptivity of the line surface (unitless)
e_s % emissivity of the line surface (unitless)
v % dynamic viscosity of air (m^2 / s)
Pr % Prandtl number (unitless)
sigma % Stefan-Boltzmann constant
%% DEPENDENT VARIABLES
q_gen % heat generated in the line by electrical resistive losses (W)
q_cond % conductive heat transfer within the line (W)
q_conv % convective heat transfer from the line (W)
q_rad_in % radiative heat added to the line from the sun (W)
q_rad_out % radiative heat lost from the line to the surroundings (W)
I_sun % incident solar radiation (W / m^2)
A_s % line surface area (m^2)
A_c % line cross-sectional area (m^2)
Nu % Nusselt number
Re % Reynolds number
"""
# def Q_gen(I, R):
# w = I * I * R
# return w
# def Q_rad_in(I_sun, A_s, a_s):
# w = I_sun * D * a_s
# return w
# def Q_conv(htcoeff, A_s, T_lin, T_amb):
# w = htcoeff * A_s * (T_line - T_amb)
# return w
# def Q_rad_out(e_s, A_s, sigma, T_line, T_amb):
# w = e_s * D * sigma * (T_line**4 - T_amb**4)
# return w
def reynolds(V, D, v, Mair=1.103):
r = V * D / v
return r
def nusselt(Re, Pr):
a = 0.62 * ( (Re) ** (1.0/2.0) ) * ( Pr ** (1.0/3.0) )
b = (1 + (0.4/(Pr**(2.0/3.0) ) ) ) ** (1.0/4.0)
c = (Re / 282000) ** (5.0/8.0)
n = 0.3 + (a/b) * ( (1 + c) ** (4.0/5.0) )
return n
def air_prop(T_amb):
# temp v k Pr
air_prop = np.array([[200, 7.59e-6, 18.1e-3, 0.737],
[250, 11.44e-6, 22.3e-3, 0.720],
[300, 15.89e-6, 26.3e-3, 0.707],
[350, 20.92e-6, 30.0e-3, 0.700],
[400, 26.41e-6, 33.8e-3, 0.690],
[450, 32.39e-6, 37.3e-3, 0.686],
[500, 38.79e-6, 40.7e-3, 0.684],
[550, 45.57e-6, 43.9e-3, 0.683],
[600, 52.69e-6, 46.9e-3, 0.685]])
v, k, Pr = np.apply_along_axis(lambda x: np.interp(T_amb, air_prop[:,0], x),
0, air_prop[:,1:])
return v, k, Pr
def R_T(R_lo, R_mid, R_hi, T_line, N_cond, T_range=T_range):
if 273 <= T_line <= 323:
R = ((R_lo +
((R_lo - R_mid)/(T_range[0] - T_range[1]))
*(T_line - T_range[0]))/N_cond)
elif T_line > 323:
R = ((R_mid +
((R_mid - R_hi)/(T_range[1] - T_range[2]))
*(T_line - T_range[1]))/N_cond)
else:
R = R_lo
print('Out of bounds')
return R
R_lo, R_mid, R_hi = R_list[0], R_list[1], R_list[2]
    # NOTE: these reassignments override the temp_factor/wind_factor
    # arguments passed in, making them no-ops as written
    temp_factor = 1
    wind_factor = 1
sigma = 5.6703e-8 # Stefan-Boltzmann constant
T_amb = T_amb*temp_factor
V = V*wind_factor
v, k, Pr = air_prop(T_amb)
Re = reynolds(V, D, v)
htcoeff = nusselt(Re, Pr) * k / D
def T_line(T_init):
R = R_T(R_lo, R_mid, R_hi, T_init, N_cond)
        print(R)
C4 = e_s * sigma * D * math.pi
C3 = 0.0
C2 = 0.0
C1 = htcoeff * D * math.pi
C0 = - ( I ** 2 * R
+ I_sun * a_s * D
+ htcoeff * D * math.pi * T_amb
+ e_s * D * math.pi * sigma * (T_amb ** 4))
return np.roots([C4, C3, C2, C1, C0])
T_c = T_amb
for i in range(n_iter):
T_arr = T_line(T_c)
T_c = np.real(T_arr[np.where((np.real(T_arr) > 0) & ~(np.iscomplex(T_arr)))]).mean()
        print(T_c)
return T_c
R_1000ft = np.array([0.0186, 0.0205, 0.0222])
R_list = R_1000ft*(3.28084/1000)
RT = T_c(1220.0, 293.0, 1.0, 30.39e-3, R_list, 1, I_sun=1000.0, e_s = 0.5, a_s = 0.5)
| 35.034483 | 137 | 0.456299 | 787 | 5,080 | 2.770013 | 0.27446 | 0.025688 | 0.016514 | 0.012844 | 0.094495 | 0.036697 | 0.011927 | 0.011927 | 0.011927 | 0 | 0 | 0.108571 | 0.41437 | 5,080 | 144 | 138 | 35.277778 | 0.624202 | 0.073819 | 0 | 0 | 0 | 0 | 0.004434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.029412 | null | null | 0.044118 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
11067ec0c42dc7b6d2eca1efa398c1aee084a5fe | 503 | py | Python | cookschedule/urls.py | yuxuan-bill/Cook-Scheduler | 1bd9699ab1f8ba711987c0494a207961d859ec12 | [
"MIT"
] | null | null | null | cookschedule/urls.py | yuxuan-bill/Cook-Scheduler | 1bd9699ab1f8ba711987c0494a207961d859ec12 | [
"MIT"
] | null | null | null | cookschedule/urls.py | yuxuan-bill/Cook-Scheduler | 1bd9699ab1f8ba711987c0494a207961d859ec12 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
app_name = 'cookschedule'
urlpatterns = [
path('', views.index, name='index'),
path('login/', views.login, name='login'),
path('logout/', views.logout, name='logout'),
path('change_password/', views.change_password, name="change_password"),
path('user_stat/', views.user_stat, name="user_stat"),
path('public_cart/', views.public_cart, name='public_cart'),
path('personal_cart/', views.personal_cart, name='personal_cart')
] | 35.928571 | 76 | 0.691849 | 65 | 503 | 5.153846 | 0.307692 | 0.125373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129225 | 503 | 14 | 77 | 35.928571 | 0.76484 | 0 | 0 | 0 | 0 | 0 | 0.279762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
110b7c71af7d001d646d47f0dd6885b37cd40ba1 | 644 | py | Python | api/uwkgm/database/database/graph/triples/add.py | ichise-laboratory/uwkgm | 6505fa5d524336b30505804a3d241143cb1fa4bf | [
"BSD-3-Clause"
] | null | null | null | api/uwkgm/database/database/graph/triples/add.py | ichise-laboratory/uwkgm | 6505fa5d524336b30505804a3d241143cb1fa4bf | [
"BSD-3-Clause"
] | 6 | 2020-11-25T10:49:45.000Z | 2021-09-22T18:50:03.000Z | api/uwkgm/database/database/graph/triples/add.py | ichise-laboratory/uwkgm | 6505fa5d524336b30505804a3d241143cb1fa4bf | [
"BSD-3-Clause"
] | 1 | 2020-12-24T02:15:42.000Z | 2020-12-24T02:15:42.000Z | """Add triples to the graph database
The UWKGM project
:copyright: (c) 2020 Ichise Laboratory at NII & AIST
:author: Rungsiman Nararatwong
"""
from typing import Tuple
from dorest.managers.struct import generic
from dorest.managers.struct.decorators import endpoint
from database.database.graph import default_graph_uri
@endpoint(['GET'])
def single(triple: Tuple[str, str, str], graph: str = default_graph_uri) -> str:
"""Adds a triple to the graph database
:param triple: A URI triple (subject, predicate, object)
:param graph: Graph URI
:return: Adding status
"""
return generic.resolve(single)(triple, graph)
| 25.76 | 80 | 0.737578 | 88 | 644 | 5.352273 | 0.522727 | 0.050955 | 0.042463 | 0.076433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007449 | 0.166149 | 644 | 24 | 81 | 26.833333 | 0.869646 | 0.430124 | 0 | 0 | 0 | 0 | 0.008876 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.571429 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
110e787475034acf87d9e09bac8c00fb9456d45a | 250 | py | Python | example/example/urls.py | AI-App/Django-Browser-Reload | 36c8162ff9daff77d41e19750b67d104b0b0b2ec | [
"MIT"
] | null | null | null | example/example/urls.py | AI-App/Django-Browser-Reload | 36c8162ff9daff77d41e19750b67d104b0b0b2ec | [
"MIT"
] | null | null | null | example/example/urls.py | AI-App/Django-Browser-Reload | 36c8162ff9daff77d41e19750b67d104b0b0b2ec | [
"MIT"
] | null | null | null | from django.urls import include, path
from example.core import views as core_views
urlpatterns = [
path("", core_views.index_django),
path("jinja/", core_views.index_jinja),
path("__reload__/", include("django_browser_reload.urls")),
]
| 25 | 63 | 0.728 | 33 | 250 | 5.181818 | 0.454545 | 0.157895 | 0.163743 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 250 | 9 | 64 | 27.777778 | 0.795349 | 0 | 0 | 0 | 0 | 0 | 0.172 | 0.104 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
110f9c3720bf0335a844623f8596e6e66148862e | 656 | py | Python | response/core/util.py | ojno/response | 0f8d2a8378a02c1f4680a04e4a943d7e32234a22 | [
"MIT"
] | 1,408 | 2019-05-03T11:39:34.000Z | 2022-03-31T17:51:04.000Z | response/core/util.py | ojno/response | 0f8d2a8378a02c1f4680a04e4a943d7e32234a22 | [
"MIT"
] | 105 | 2019-05-04T07:59:44.000Z | 2022-03-14T04:47:02.000Z | response/core/util.py | ojno/response | 0f8d2a8378a02c1f4680a04e4a943d7e32234a22 | [
"MIT"
] | 177 | 2019-05-03T18:11:46.000Z | 2022-03-25T04:49:57.000Z | import bleach
import bleach_whitelist
from django.conf import settings
from rest_framework.pagination import PageNumberPagination
def sanitize(string):
# bleach doesn't handle None so let's not pass it
if string and getattr(settings, "RESPONSE_SANITIZE_USER_INPUT", True):
return bleach.clean(
string,
tags=bleach_whitelist.markdown_tags,
attributes=bleach_whitelist.markdown_attrs,
styles=bleach_whitelist.all_styles,
)
return string
class LargeResultsSetPagination(PageNumberPagination):
page_size = 500
max_page_size = 1000
page_size_query_param = "page_size"
| 27.333333 | 74 | 0.727134 | 77 | 656 | 5.961039 | 0.623377 | 0.130719 | 0.100218 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013645 | 0.217988 | 656 | 23 | 75 | 28.521739 | 0.881092 | 0.071646 | 0 | 0 | 0 | 0 | 0.060956 | 0.046129 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.235294 | 0 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1112a24132da3dc3ec68dbe6f3e5454b571071da | 1,426 | py | Python | generated-libraries/python/netapp/lun/fcp_down_hba_info.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | 2 | 2017-03-28T15:31:26.000Z | 2018-08-16T22:15:18.000Z | generated-libraries/python/netapp/lun/fcp_down_hba_info.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | generated-libraries/python/netapp/lun/fcp_down_hba_info.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | from netapp.netapp_object import NetAppObject
class FcpDownHbaInfo(NetAppObject):
"""
Information about a down FCP HBA
"""
_adapter = None
@property
def adapter(self):
"""
Which FC adapter.
"""
return self._adapter
@adapter.setter
def adapter(self, val):
if val != None:
self.validate('adapter', val)
self._adapter = val
_state = None
@property
def state(self):
"""
Description of HBAs state.
Possible values:
STARTUP
UNINITIALIZED
INITIALIZING FIRMWARE
LINK NOT CONNECTED
WAITING FOR LINK UP
ONLINE
LINK DISCONNECTED
RESETTING
OFFLINE
OFFLINED BY USER/SYSTEM
Unknown state
"""
return self._state
@state.setter
def state(self, val):
if val != None:
self.validate('state', val)
self._state = val
@staticmethod
def get_api_name():
return "fcp-down-hba-info"
@staticmethod
def get_desired_attrs():
return [
'adapter',
'state',
]
def describe_properties(self):
return {
'adapter': { 'class': basestring, 'is_list': False, 'required': 'required' },
'state': { 'class': basestring, 'is_list': False, 'required': 'required' },
}
| 23 | 89 | 0.532959 | 135 | 1,426 | 5.525926 | 0.474074 | 0.032172 | 0.040214 | 0.032172 | 0.187668 | 0.187668 | 0.187668 | 0 | 0 | 0 | 0 | 0 | 0.371669 | 1,426 | 61 | 90 | 23.377049 | 0.832589 | 0.180926 | 0 | 0.176471 | 0 | 0 | 0.107921 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.029412 | 0.088235 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1119d4e9e9854fe6930d82bb88bf791110d88102 | 3,403 | py | Python | pixelpuncher/player/migrations/0024_auto_20160906_0234.py | ej2/pixelpuncher | 8dd31090252c00772932c78ea21438d1e979f722 | [
"BSD-3-Clause"
] | null | null | null | pixelpuncher/player/migrations/0024_auto_20160906_0234.py | ej2/pixelpuncher | 8dd31090252c00772932c78ea21438d1e979f722 | [
"BSD-3-Clause"
] | null | null | null | pixelpuncher/player/migrations/0024_auto_20160906_0234.py | ej2/pixelpuncher | 8dd31090252c00772932c78ea21438d1e979f722 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9 on 2016-09-06 02:34
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import django_extensions.db.fields
class Migration(migrations.Migration):
dependencies = [
('item', '0018_auto_20160906_0234'),
('player', '0023_auto_20160612_1804'),
]
operations = [
migrations.CreateModel(
name='Collection',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100, unique=True)),
('reward_xp', models.IntegerField(default=0)),
('date_created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True)),
('date_updated', django_extensions.db.fields.ModificationDateTimeField(auto_now=True)),
],
options={
'ordering': ['name'],
},
),
migrations.CreateModel(
name='PlayerCollection',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date_created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True)),
('date_completed', models.DateTimeField(blank=True, null=True)),
('collection', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='player.Collection')),
('player', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='player.Player')),
],
),
migrations.AlterModelOptions(
name='achievement',
options={'ordering': ['name']},
),
migrations.AlterModelOptions(
name='skill',
options={'ordering': ['name']},
),
migrations.AlterField(
model_name='achievement',
name='name',
field=models.CharField(max_length=30, unique=True),
),
migrations.AlterField(
model_name='skill',
name='name',
field=models.CharField(max_length=25, unique=True),
),
migrations.AlterField(
model_name='skill',
name='skill_type',
field=models.CharField(choices=[(b'ATTK', b'Attack'), (b'SPCL', b'Special'), (b'HEAL', b'Heal'), (b'PASS', b'Passive')], max_length=4),
),
migrations.AddField(
model_name='collection',
name='achievement',
field=models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='collection', to='player.Achievement'),
),
migrations.AddField(
model_name='collection',
name='items',
field=models.ManyToManyField(blank=True, related_name='collections', to='item.ItemType'),
),
migrations.AddField(
model_name='collection',
name='reward_item',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to='item.ItemType'),
),
migrations.AlterUniqueTogether(
name='playercollection',
unique_together=set([('player', 'collection')]),
),
]
| 40.511905 | 159 | 0.588892 | 328 | 3,403 | 5.954268 | 0.323171 | 0.024578 | 0.035842 | 0.056324 | 0.433692 | 0.433692 | 0.370712 | 0.334869 | 0.285714 | 0.285714 | 0 | 0.022572 | 0.270937 | 3,403 | 83 | 160 | 41 | 0.764611 | 0.019101 | 0 | 0.526316 | 1 | 0 | 0.145427 | 0.013793 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.013158 | 0.052632 | 0 | 0.092105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
11351d5facf175448d31d637b157d2fbbb78c91b | 1,053 | py | Python | tests/cfngin/lookups/handlers/test_hook_data.py | cmilam87/runway | e1b2aca8e9468bf246ae5afa878ed4a36fb36ab1 | [
"Apache-2.0"
] | null | null | null | tests/cfngin/lookups/handlers/test_hook_data.py | cmilam87/runway | e1b2aca8e9468bf246ae5afa878ed4a36fb36ab1 | [
"Apache-2.0"
] | null | null | null | tests/cfngin/lookups/handlers/test_hook_data.py | cmilam87/runway | e1b2aca8e9468bf246ae5afa878ed4a36fb36ab1 | [
"Apache-2.0"
] | null | null | null | """Tests for runway.cfngin.lookups.handlers.hook_data."""
import unittest
from runway.cfngin.context import Context
from runway.cfngin.lookups.handlers.hook_data import HookDataLookup
class TestHookDataLookup(unittest.TestCase):
"""Tests for runway.cfngin.lookups.handlers.hook_data.HookDataLookup."""
def setUp(self):
"""Run before tests."""
self.ctx = Context({"namespace": "test-ns"})
self.ctx.set_hook_data("fake_hook", {"result": "good"})
def test_valid_hook_data(self):
"""Test valid hook data."""
value = HookDataLookup.handle("fake_hook::result", context=self.ctx)
self.assertEqual(value, "good")
def test_invalid_hook_data(self):
"""Test invalid hook data."""
with self.assertRaises(KeyError):
HookDataLookup.handle("fake_hook::bad_key", context=self.ctx)
def test_bad_value_hook_data(self):
"""Test bad value hook data."""
with self.assertRaises(ValueError):
HookDataLookup.handle("fake_hook", context=self.ctx)
| 35.1 | 76 | 0.680912 | 128 | 1,053 | 5.445313 | 0.3125 | 0.114778 | 0.081779 | 0.116212 | 0.317073 | 0.190818 | 0.190818 | 0.123386 | 0 | 0 | 0 | 0 | 0.184236 | 1,053 | 29 | 77 | 36.310345 | 0.811409 | 0.197531 | 0 | 0 | 0 | 0 | 0.101966 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 1 | 0.25 | false | 0 | 0.1875 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1137f2135aef8386f03d8c93ecf1122aac6d6f90 | 489 | py | Python | apps/UDPCountToLeds/receiver.py | csiro-wsn/tinyos-main-wsn-course | bbc0923f725c16dbf0ea15047a15809e4576281c | [
"BSD-3-Clause"
] | null | null | null | apps/UDPCountToLeds/receiver.py | csiro-wsn/tinyos-main-wsn-course | bbc0923f725c16dbf0ea15047a15809e4576281c | [
"BSD-3-Clause"
] | null | null | null | apps/UDPCountToLeds/receiver.py | csiro-wsn/tinyos-main-wsn-course | bbc0923f725c16dbf0ea15047a15809e4576281c | [
"BSD-3-Clause"
] | null | null | null | from socket import *
from struct import unpack
UDPSock = socket(AF_INET6,SOCK_DGRAM)
UDPSock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
UDPSock.bind(("",1234))
#UDPSock.bind(("fec0::64",1234))
while True:
data,addr = UDPSock.recvfrom(1024)
if not data:
print "Client has exited!"
break
elif len(data)<=0:
print "Empty data received"
else:
print "\nReceived message from", addr[0],":", unpack("B",data)[0]
# Close socket
UDPSock.close()
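The receiver above decodes a single unsigned byte with `unpack("B", ...)`; a minimal sender-side sketch of the matching encoding (the helper name and the target address are assumptions, not part of the original app):

```python
from struct import pack, unpack


def encode_count(count):
    # One unsigned byte -- the wire format the receiver unpacks with "B"
    return pack("B", count & 0xFF)


payload = encode_count(42)
print(unpack("B", payload)[0])  # -> 42

# Hypothetical send, assuming the receiver's bind address and port:
# from socket import socket, AF_INET6, SOCK_DGRAM
# sock = socket(AF_INET6, SOCK_DGRAM)
# sock.sendto(payload, ("fec0::64", 1234))
```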
| 21.26087 | 73 | 0.650307 | 66 | 489 | 4.757576 | 0.636364 | 0.070064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05168 | 0.208589 | 489 | 22 | 74 | 22.227273 | 0.75969 | 0.08998 | 0 | 0 | 0 | 0 | 0.140271 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.133333 | null | null | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
113893aaf04b1031ba6c6c578788300a98d2e58d | 2,703 | py | Python | hard-gists/2344345/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 21 | 2019-07-08T08:26:45.000Z | 2022-01-24T23:53:25.000Z | hard-gists/2344345/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 5 | 2019-06-15T14:47:47.000Z | 2022-02-26T05:02:56.000Z | hard-gists/2344345/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 17 | 2019-05-16T03:50:34.000Z | 2021-01-14T14:35:12.000Z | """
DrupalPasswordHasher
To use, put this in any app and add to your settings.py, something like this:
PASSWORD_HASHERS = (
'django.contrib.auth.hashers.PBKDF2PasswordHasher',
'myproject.myapp.drupal_hasher.DrupalPasswordHasher',
'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
'django.contrib.auth.hashers.BCryptPasswordHasher',
'django.contrib.auth.hashers.SHA1PasswordHasher',
'django.contrib.auth.hashers.MD5PasswordHasher',
'django.contrib.auth.hashers.CryptPasswordHasher',
)
Three notes:
-This requires Django 1.4 or higher (which introduced the flexible password
hasher framework)
-In Drupal, the number of algorithm iterations is a power of 2,
which is represented by the first character after the $. Specifically, it
performs 2^(index within _ITOA64) iterations, such that 'C' represents 2^14
and 'D' would represent 2^15.
-In Drupal the passwords are stored as:
$S$<hash>
while in Django the passwords are stored as
S$<hash>
So you must cut off the first character of each password when migrating.
"""
import hashlib
from django.contrib.auth.hashers import BasePasswordHasher
_ITOA64 = './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
class DrupalPasswordHasher(BasePasswordHasher):
algorithm = "S"
iter_code = 'C'
salt_length = 8
def encode(self, password, salt, iter_code=None):
"""The Drupal 7 method of encoding passwords"""
        if iter_code is None:
iterations = 2 ** _ITOA64.index(self.iter_code)
else:
iterations = 2 ** _ITOA64.index(iter_code)
hash = hashlib.sha512(salt + password).digest()
for i in range(iterations):
hash = hashlib.sha512(hash + password).digest()
l = len(hash)
output = ''
i = 0
while i < l:
value = ord(hash[i])
i = i + 1
output += _ITOA64[value & 0x3f]
if i < l:
value |= ord(hash[i]) << 8
output += _ITOA64[(value >> 6) & 0x3f]
if i >= l:
break
i += 1
if i < l:
value |= ord(hash[i]) << 16
output += _ITOA64[(value >> 12) & 0x3f]
if i >= l:
break
i += 1
output += _ITOA64[(value >> 18) & 0x3f]
longhashed = "%s$%s%s%s" % (self.algorithm, iter_code,
salt, output)
return longhashed[:54]
def verify(self, password, encoded):
hash = encoded.split("$")[1]
iter_code = hash[0]
salt = hash[1:1 + self.salt_length]
return encoded == self.encode(password, salt, iter_code)
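The iteration count is stored as a single character after the `$`, interpreted as an index into `_ITOA64` and used as an exponent of 2. A standalone sketch of that decoding (the helper name is mine), matching the docstring's "'C' represents 2^14" example:

```python
_ITOA64 = './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'


def iterations_for(iter_code):
    # Drupal stores log2(iterations) as the character's index in _ITOA64
    return 2 ** _ITOA64.index(iter_code)


print(iterations_for('C'))  # -> 16384 (2**14)
print(iterations_for('D'))  # -> 32768 (2**15)
```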
| 27.865979 | 83 | 0.612283 | 321 | 2,703 | 5.093458 | 0.398754 | 0.039144 | 0.072783 | 0.102752 | 0.104587 | 0.082569 | 0.073395 | 0 | 0 | 0 | 0 | 0.03967 | 0.281909 | 2,703 | 96 | 84 | 28.15625 | 0.802679 | 0.403256 | 0 | 0.186047 | 0 | 0 | 0.0475 | 0.04 | 0 | 0 | 0.01 | 0 | 0 | 1 | 0.046512 | false | 0.162791 | 0.046512 | 0 | 0.232558 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1139728abd4261a2a7c6d0ed27fc3a7907f677b3 | 492 | py | Python | hudeven/P007.py | hudeven/algorithms | 710dfe1003d75aa350a5af25cf12240ad86dc25d | [
"Apache-2.0"
] | null | null | null | hudeven/P007.py | hudeven/algorithms | 710dfe1003d75aa350a5af25cf12240ad86dc25d | [
"Apache-2.0"
] | 1 | 2016-07-15T08:07:29.000Z | 2016-07-15T08:07:29.000Z | hudeven/P007.py | hudeven/algorithms | 710dfe1003d75aa350a5af25cf12240ad86dc25d | [
"Apache-2.0"
] | 1 | 2016-07-15T08:04:49.000Z | 2016-07-15T08:04:49.000Z | class Solution(object):
def reverse(self, x):
"""
:type x: int
:rtype: int
"""
MAX_INT = (1 << 31) - 1
MIN_INT = -MAX_INT - 1
if x == MIN_INT: return 0
res = 0
p = x if x >= 0 else -x
while p != 0:
digit = p % 10
if res <= MAX_INT // 10:
res = res * 10 + digit
else:
return 0
p = p // 10
return res if x >= 0 else -res
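A standalone sketch of the same overflow-guarded digit reversal (the function name is mine), showing the 32-bit clamp on a value whose reversal would overflow:

```python
def reverse_int(x):
    # Mirrors Solution.reverse: build the reversed magnitude digit by digit,
    # bailing out to 0 before res * 10 could exceed the 32-bit maximum.
    MAX_INT = (1 << 31) - 1
    if x == -MAX_INT - 1:
        return 0
    res, p = 0, abs(x)
    while p:
        if res > MAX_INT // 10:
            return 0
        res = res * 10 + p % 10
        p //= 10
    return res if x >= 0 else -res


print(reverse_int(123))         # -> 321
print(reverse_int(-120))        # -> -21
print(reverse_int(1534236469))  # -> 0 (reversal exceeds 2**31 - 1)
```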
| 23.428571 | 38 | 0.376016 | 65 | 492 | 2.769231 | 0.353846 | 0.1 | 0.1 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079498 | 0.514228 | 492 | 20 | 39 | 24.6 | 0.67364 | 0.04878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
113ead56e28385094fc819a213199f68b10c54d7 | 5,225 | py | Python | AMAO/apps/Avaliacao/Questao/models/filtro_questao.py | arruda/amao | 83648aa2c408b1450d721b3072dc9db4b53edbb8 | [
"MIT"
] | 2 | 2017-04-26T14:08:02.000Z | 2017-09-01T13:10:17.000Z | AMAO/apps/Avaliacao/Questao/models/filtro_questao.py | arruda/amao | 83648aa2c408b1450d721b3072dc9db4b53edbb8 | [
"MIT"
] | null | null | null | AMAO/apps/Avaliacao/Questao/models/filtro_questao.py | arruda/amao | 83648aa2c408b1450d721b3072dc9db4b53edbb8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from django.db import models
from tipo_questao import TipoQuestao
from questao import Questao
#from libs.uniqifiers_benchmark import f11 as uniqifier
class FiltroQuestao(models.Model):
"""
Classe que ira gerar uma questao(QuestaoDeAvaliacao) com base em alguns criterios/filtros(TipoQuestao).
"""
    # A question filter can be used in only one evaluation template at a time,
    # which keeps the design simpler to reason about.
templateAvaliacao = models.ForeignKey('Avaliacao.TemplateAvaliacao', related_name='filtrosQuestoes')
    # the grade the student receives for answering 100% of the question correctly
notaBase = models.DecimalField(u"Nota Base",max_digits=10, decimal_places=2,default="0.00")
    # lower bound that this question's grade can reach
notaLimMinimo = models.DecimalField(u"Limite Mínimo da Nota",max_digits=10, decimal_places=2,default="0.00")
    # upper bound that this question's grade can reach
notaLimMaximo = models.DecimalField(u"Limite Máximo da Nota",max_digits=10, decimal_places=2,default="0.00")
    # question type, used for filtering
tipo = models.ManyToManyField(TipoQuestao, related_name="filtrosQuestoes")
    # set when the filter targets one specific question
questaoExata = models.ForeignKey(Questao, related_name='filtrosQuestoes', blank=True, null=True,limit_choices_to = {'verificada':True})
class Meta:
verbose_name = u'Filtro de Questão'
app_label = 'Questao'
def verifica_autor(self,autor):
"verifica se um dado autor(usuario) corresponde ao autor do templateAvaliacao desse filtro"
return self.templateAvaliacao.autor.pk == autor.pk
def _prepara_tipos_requeridos(self):
"""
Metodo usado por filtrarQuestao.
prepara os tipos requeridos, juntando n elementos de num_descendentes cada um dos tipos
retorna um vetor com um vetor de listas de todos os tipos e seus descententes
Ex:
Tipos:
* C -> Ponteiro -> Malloc
* Facil
* Estruturas de Dados -> Pilha
Isso resultaria no seguinte vetor:
[[C,Ponteiro,Malloc], [Facil,], [Estruturas de Dados, Pilha]]
"""
tiposRequeridos = []
for tipoFiltro in self.tipo.all():
listaTiposFilho_e_proprio = tipoFiltro.get_descendants(include_self=True)
tiposRequeridos.append(listaTiposFilho_e_proprio)
return tiposRequeridos
def _questoes_selecionadas(self,tiposRequeridos):
"""
Recupera todas as questoes selecionadas usando os filtros(sem serem exatas)
dos tiposRequeridos.
"""
        # fetch all verified questions
tdsQuestoes = Questao.objects.filter(verificada=True)
questoesSelecionadas = []
for questaoATestar in tdsQuestoes:
questao_valida = True
for grupoDeTiposRequeridos in tiposRequeridos:
tipo_valido = False
for tipoQuestao_da_questaoATestar in questaoATestar.tipo.all():
if tipoQuestao_da_questaoATestar in grupoDeTiposRequeridos:
tipo_valido=True
break
if not tipo_valido:
questao_valida = False
break
if questao_valida:
questoesSelecionadas.append(questaoATestar)
return questoesSelecionadas
def filtrarQuestao(self):
"""
Retorna uma questao utilizando criterios de busca baseado no campo 'tipo', ou retorna questaoExata se esta for != None
Passando como parametro uma lista de questoes previamente selecionadas, para evitar a selecao de uma destas.
se for uma questão exata e simulado=True então nao pega a propria questão mas uma qualquer que seja do mesmo tipo que esta.
"""
        ######################
        # If there is an exact question, return it (a one-element list);
        # otherwise try to pick a random question following the filter's types.
if self.questaoExata:
# print "(EXATA) " + self.questaoExata.slug
return [self.questaoExata,]
        # prepare the required types, gathering each type together with all its descendants
tiposRequeridos = self._prepara_tipos_requeridos()
# print "===================================="
# print ">>>tiposFiltro:"
# print self.tipo.all()
# print ">>>tiposRequeridos:"
# print tiposRequeridos
# print ">>>>>>>>>>>>>>>>>>>"
questoesSelecionadas = self._questoes_selecionadas(tiposRequeridos)
        if questoesSelecionadas == []:
            raise Exception("No question found for filter: %s" % str(self.pk))
        # # pick one of these questions at random to be the answer.
# import random
# rand = random.randint(0, questoesSelecionadas.__len__()-1)
# questao = questoesSelecionadas[rand]
# print str(rand) + " " + questao.slug
return questoesSelecionadas
    # TODO: decide which __unicode__ this model should use
# def __unicode__(self):
# return self.arquivo.name
| 37.862319 | 139 | 0.660478 | 582 | 5,225 | 5.840206 | 0.405498 | 0.017652 | 0.022948 | 0.015887 | 0.152104 | 0.129744 | 0.129744 | 0.112092 | 0.087379 | 0.087379 | 0 | 0.00668 | 0.25512 | 5,225 | 137 | 140 | 38.138686 | 0.86665 | 0.453397 | 0 | 0.085106 | 0 | 0 | 0.115129 | 0.009963 | 0 | 0 | 0 | 0.014599 | 0 | 1 | 0.085106 | false | 0 | 0.06383 | 0 | 0.425532 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
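The filtering above implements an AND-of-ORs: a question qualifies only if, for every group of required types, it carries at least one type from that group. The same logic can be sketched without the Django ORM; the data below is hypothetical, standing in for the `Questao` queryset and its `tipo` relation:

```python
def select_questions(questions, required_groups):
    """Keep a question only if every required group contains at least
    one of the question's types (AND over groups, OR within a group)."""
    selected = []
    for question_types, question_id in questions:
        if all(any(t in group for t in question_types)
               for group in required_groups):
            selected.append(question_id)
    return selected

# Hypothetical data: each question is (set of type tags, identifier)
questions = [({'algebra', 'easy'}, 'q1'),
             ({'geometry', 'hard'}, 'q2'),
             ({'algebra', 'hard'}, 'q3')]
groups = [{'algebra'}, {'easy', 'hard'}]
print(select_questions(questions, groups))  # ['q1', 'q3']
```

The `all(any(...))` expression collapses the three nested loops of the original into one line with the same short-circuit behavior.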
11429eb1809ea25d1156c2ff110ad1c2a073914f | 4,507 | py | Python | tools/extract_features.py | LogoRecognition/Detector | 19f3b489a002976da1b66c7827881fb6a93b5688 | [
"MIT"
] | null | null | null | tools/extract_features.py | LogoRecognition/Detector | 19f3b489a002976da1b66c7827881fb6a93b5688 | [
"MIT"
] | null | null | null | tools/extract_features.py | LogoRecognition/Detector | 19f3b489a002976da1b66c7827881fb6a93b5688 | [
"MIT"
] | null | null | null | import _init_paths
import tensorflow as tf
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect
from fast_rcnn.nms_wrapper import nms
from utils.timer import Timer
#import matplotlib
#matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
import os, sys, cv2
import argparse
from networks.factory import get_network
CLASSES = ('__background__',  # always index 0
           'Acura', 'Alpha-Romeo', 'Aston-Martin', 'Audi', 'Bentley', 'Benz',
           'BMW', 'Bugatti', 'Buick', 'nike', 'adidas', 'vans', 'converse',
           'puma', 'nb', 'anta', 'lining', 'pessi', 'yili', 'uniquo', 'coca',
           'Haier', 'Huawei', 'Apple', 'Lenovo', 'McDonalds', 'Amazon')
def vis_detections(im, class_name, dets, ax, image_name, fc7, brands, thresh=0.0):
    """Draw detected bounding boxes and save the fc7 features of each detection."""
    inds = np.where(dets[:, -1] >= thresh)[0]
    if len(inds) == 0:
        return
    print(len(inds))
    for i in inds:
        print(i)
        # NOTE: every detection writes to the same .npy path, so only the
        # last detection's fc7 feature vector survives for this image
        param = np.array(fc7[i])
        np.save('/home/CarLogo/features/' + image_name[:-4] + '.npy', param)
        bbox = dets[i, :4]
        score = dets[i, -1]
        ax.add_patch(
            plt.Rectangle((bbox[0], bbox[1]),
                          bbox[2] - bbox[0],
                          bbox[3] - bbox[1], fill=False,
                          edgecolor='red', linewidth=3.5)
        )
        ax.text(bbox[0], bbox[1] - 2,
                '{:s} {:.3f}'.format(class_name, score),
                bbox=dict(facecolor='blue', alpha=0.5),
                fontsize=14, color='white')
        brands.append(class_name)
    ax.set_title(('{} detections with '
                  'p({} | box) >= {:.1f}').format(class_name, class_name,
                                                 thresh),
                 fontsize=14)
    plt.savefig('/home/CarLogo/detect/' + class_name + '_' + image_name)
    plt.axis('off')
    plt.tight_layout()
    plt.draw()
class extractor():
    def __init__(self):
        cfg.TEST.HAS_RPN = True  # Use RPN for proposals
        # init session
        gpu_options = tf.GPUOptions(allow_growth=True)
        self.sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
        # load network
        self.net = get_network('VGGnet_test')
        # load model
        self.saver = tf.train.Saver(write_version=tf.train.SaverDef.V1)
        # model = '/home/CarLogo/Faster_RCNN_TF/output/default/car_logo_train_list_27/VGGnet_fast_rcnn_iter_70000_test.ckpt'
        model = '/home/CarLogo/Faster_RCNN_TF/output/default/car_logo_train_list_all/VGGnet_fast_rcnn_iter_70000.ckpt'
        self.saver.restore(self.sess, model)
        # sess.run(tf.initialize_all_variables())
        print '\n\nLoaded network {:s}'.format(model)
        # Warm up the network on a dummy image
        im = 128 * np.ones((300, 300, 3), dtype=np.uint8)
        for i in xrange(2):
            _, _, _ = im_detect(self.sess, self.net, im)
    def get_feature(self, image_name):
        # im_file = os.path.join(cfg.DATA_DIR, sys.argv[1], image_name)
        im_file = os.path.join(cfg.DATA_DIR, 'demo', image_name)
        im = cv2.imread(im_file)
        timer = Timer()
        timer.tic()
        scores, boxes, fc7 = im_detect(self.sess, self.net, im)
        print(fc7.shape)
        timer.toc()
        print ('Detection took {:.3f}s for '
               '{:d} object proposals').format(timer.total_time, boxes.shape[0])
        # Convert BGR (OpenCV) to RGB for matplotlib
        im = im[:, :, (2, 1, 0)]
        fig, ax = plt.subplots(figsize=(12, 12))
        ax.imshow(im, aspect='equal')
        CONF_THRESH = 0.8
        NMS_THRESH = 0.3
        brands = []
        for cls_ind, cls in enumerate(CLASSES[1:]):
            cls_ind += 1  # because we skipped background
            cls_boxes = boxes[:, 4 * cls_ind:4 * (cls_ind + 1)]
            cls_scores = scores[:, cls_ind]
            dets = np.hstack((cls_boxes,
                              cls_scores[:, np.newaxis])).astype(np.float32)
            keep = nms(dets, NMS_THRESH)
            dets = dets[keep, :]
            vis_detections(im, cls, dets, ax, image_name, fc7, brands,
                           thresh=CONF_THRESH)
        return brands
if __name__ == '__main__':
    e = extractor()
    filename = os.path.join(cfg.DATA_DIR, sys.argv[1])
    print('loading files from {}'.format(filename))
    im_names = []
    for root, dirs, files in os.walk(filename):
        print(files)
        im_names = files
    # NOTE: this hardcoded list overrides the file names gathered above
    im_names = ['0024.jpg', '0075.jpg', '0084.jpg', '0093.jpg']
    for image_name in im_names:
        brands = e.get_feature(image_name)
        print(brands)
| 38.853448 | 369 | 0.584646 | 601 | 4,507 | 4.206323 | 0.392679 | 0.032041 | 0.014241 | 0.015427 | 0.148734 | 0.130538 | 0.130538 | 0.087025 | 0.087025 | 0.072785 | 0 | 0.029715 | 0.268249 | 4,507 | 115 | 370 | 39.191304 | 0.73681 | 0.084313 | 0 | 0 | 0 | 0 | 0.130701 | 0.035311 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.114583 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
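The per-class loop in `get_feature` stacks boxes with their scores and suppresses overlapping detections with NMS. The repo's `nms_wrapper` is Cython/GPU-specific, but greedy non-maximum suppression itself fits in a few lines of pure Python; this sketch uses the same `[x1, y1, x2, y2, score]` row layout:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(dets, thresh):
    """Greedy NMS over [x1, y1, x2, y2, score] rows; returns kept indices."""
    # Visit boxes in descending score order; drop any later box that
    # overlaps an already-kept box by more than `thresh`.
    order = sorted(range(len(dets)), key=lambda i: dets[i][4], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(dets[i][:4], dets[j][:4]) <= thresh]
    return keep

dets = [[0, 0, 10, 10, 0.9],    # high-scoring box
        [1, 1, 10, 10, 0.8],    # heavy overlap with the first -> suppressed
        [20, 20, 30, 30, 0.7]]  # disjoint -> kept
print(nms(dets, 0.3))  # [0, 2]
```

Unlike the vectorized original, this version trades speed for readability; the kept-index contract is the same, so `dets = dets[keep, :]` works unchanged on a NumPy array.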
11446baf4a7470845817644e8a87860076ded6cf | 3,218 | py | Python | populate_cones.py | peytondmurray/jdh2020 | 1bccedc46925140d9277fa1d17420e7cad16ffff | [
"MIT"
] | 1 | 2021-09-17T12:47:09.000Z | 2021-09-17T12:47:09.000Z | populate_cones.py | peytondmurray/jdh2020 | 1bccedc46925140d9277fa1d17420e7cad16ffff | [
"MIT"
] | null | null | null | populate_cones.py | peytondmurray/jdh2020 | 1bccedc46925140d9277fa1d17420e7cad16ffff | [
"MIT"
] | null | null | null | """Generate the array of cones representing the magnetization at each simulation cell. Then, animate
the cones to orient in the direction of the magnetization; also animate the material color of each cone
to map the z-component of the orientation to colors using Matplotlib's RdBu_r colormap.
"""
import bpy
import mathutils
import numpy as np
import matplotlib.cm as cm
import tqdm
scale = 0.5 # Set the physical scale of the animation
step = 11 # Only show every 11th cone; this can be lowered depending on computer resources
time_dilation_factor = 10 # Set the number of animation frames between each keyframe
# Load the data and get the number of cones in each dimension
data = np.load('data.npy')
nz, ny, nx = data.shape[1:4]
# Get the current selected object. The user is expected to have generated this "master" cone with the
# desired geometry; this script only copies it to the locations given by the simulation data, and animates them.
master_cone = bpy.context.active_object
master_cone.rotation_mode = 'QUATERNION'
scene = bpy.context.scene
# Make a new cone object for each location. The name of the cones should include the indices, i.e., Cone(ix,iy,iz)
# Also generate a new material for each cone, since each cone needs to be independently colored at each keyframe
n_cones = 0
for iz in range(nz):
    for iy in tqdm.trange(ny, desc='Adding cones'):
        for ix in range(nx):
            # Downsample the number of cones by the step size
            if n_cones % step == 0:
                object = master_cone.copy()
                object.data = master_cone.data.copy()
                object.location = (ix*scale, iy*scale, iz*scale)
                object.name = f'Cone({ix},{iy},{iz})'
                object.rotation_mode = 'QUATERNION'
                new_mat = bpy.data.materials.new(name=f'Cone({ix},{iy},{iz})')
                new_mat.use_nodes = True
                object.data.materials.append(new_mat)
                scene.collection.objects.link(object)
            n_cones += 1
# Remove the original "master" cone which has been copied to each cell location from the simulation data
bpy.data.objects.remove(master_cone)
# For each data file generated by mumax, keyframe the orientation and material color of each cone.
for i in tqdm.tqdm(range(data.shape[0]), desc='Animating frame'):
    n_cones = 0
    for iz in range(nz):
        for iy in range(ny):
            for ix in range(nx):
                if n_cones % step == 0:
                    color = cm.RdBu_r(data[i, iz, iy, ix, 2])
                    name = f'Cone({ix},{iy},{iz})'
                    object = bpy.data.objects[name]
                    material = bpy.data.materials[name]
                    m = mathutils.Vector(data[i, iz, iy, ix])
                    object.rotation_quaternion = m.to_track_quat('Z', 'Y')
                    material.node_tree.nodes[1].inputs[0].default_value = [color[0], color[1], color[2], 1]
                    material.node_tree.nodes[1].inputs[0].keyframe_insert(data_path='default_value', frame=i*time_dilation_factor)
                    object.keyframe_insert(data_path='rotation_quaternion', frame=i*time_dilation_factor)
                n_cones += 1
| 47.323529 | 130 | 0.65289 | 479 | 3,218 | 4.306889 | 0.348643 | 0.033931 | 0.015511 | 0.019389 | 0.165293 | 0.082889 | 0.075618 | 0.027145 | 0.027145 | 0.027145 | 0 | 0.01086 | 0.25606 | 3,218 | 67 | 131 | 48.029851 | 0.850877 | 0.375699 | 0 | 0.222222 | 1 | 0 | 0.074799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
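The script colors each cone by feeding the z-component of the magnetization (a value in [-1, 1]) to `cm.RdBu_r`, a diverging blue-white-red colormap. The mapping can be sketched without matplotlib; this crude stand-in linearly interpolates blue to white to red rather than reproducing RdBu_r's exact control points:

```python
def rdbu_like(z):
    """Map z in [-1, 1] to an RGBA tuple, interpolating blue -> white -> red
    (a rough stand-in for matplotlib's RdBu_r diverging colormap)."""
    t = (max(-1.0, min(1.0, z)) + 1.0) / 2.0  # normalize to [0, 1]
    if t < 0.5:
        s = t / 0.5               # blue (t=0) fades toward white (t=0.5)
        return (s, s, 1.0, 1.0)
    s = (1.0 - t) / 0.5           # white (t=0.5) fades toward red (t=1)
    return (1.0, s, s, 1.0)

print(rdbu_like(-1.0))  # (0.0, 0.0, 1.0, 1.0): pure blue, magnetization down
print(rdbu_like(0.0))   # (1.0, 1.0, 1.0, 1.0): white, in-plane magnetization
print(rdbu_like(1.0))   # (1.0, 0.0, 0.0, 1.0): pure red, magnetization up
```

A diverging map is the right choice here because the sign of the z-component is physically meaningful and zero is a natural midpoint.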
1144d43163cdbf3b72ace3d4a87b7c79032f61b9 | 933 | py | Python | dvc/prompt.py | amjadsaadeh/dvc | f405168619c2bb85430c4ded2585b57ebfd01bd7 | [
"Apache-2.0"
] | null | null | null | dvc/prompt.py | amjadsaadeh/dvc | f405168619c2bb85430c4ded2585b57ebfd01bd7 | [
"Apache-2.0"
] | null | null | null | dvc/prompt.py | amjadsaadeh/dvc | f405168619c2bb85430c4ded2585b57ebfd01bd7 | [
"Apache-2.0"
] | null | null | null | import sys
from dvc.progress import progress
try:
    # NOTE: in Python3 raw_input() was renamed to input()
    input = raw_input
except NameError:
    pass


class Prompt(object):
    def __init__(self):
        self.default = None

    def prompt(self, msg, default=False):  # pragma: no cover
        if self.default is not None:
            return self.default

        if not sys.stdout.isatty():
            return default

        answer = input(msg + u' (y/n)\n').lower()
        while answer not in ['yes', 'no', 'y', 'n']:
            answer = input('Enter \'yes\' or \'no\'.\n').lower()

        return answer[0] == "y"

    def prompt_password(self, msg):  # pragma: no cover
        import getpass

        if not sys.stdout.isatty():
            return None

        msg = 'Enter password for {}:\n'.format(msg)
        if not progress.is_finished:
            msg = u'\n' + msg

        return getpass.getpass(msg)
| 22.756098 | 64 | 0.564845 | 122 | 933 | 4.254098 | 0.418033 | 0.063584 | 0.050096 | 0.05395 | 0.100193 | 0.100193 | 0 | 0 | 0 | 0 | 0 | 0.00311 | 0.310825 | 933 | 40 | 65 | 23.325 | 0.804044 | 0.091104 | 0 | 0.076923 | 0 | 0 | 0.067536 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0.192308 | 0.115385 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
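The yes/no loop in `Prompt.prompt` re-asks until it gets one of four accepted answers, then reduces the answer to its first letter. That loop is hard to exercise because `input` reads from a TTY; a small refactor that injects the reader makes the same logic testable. The function name and signature here are illustrative, not part of dvc's API:

```python
def ask_yes_no(msg, reader=input):
    """Re-prompt until the answer is yes/no/y/n (case-insensitive);
    True for the 'y' answers. `reader` is injected so the loop can be
    exercised without a TTY."""
    answer = reader(msg + ' (y/n)\n').lower()
    while answer not in ['yes', 'no', 'y', 'n']:
        answer = reader("Enter 'yes' or 'no'.\n").lower()
    return answer[0] == 'y'

# Simulate a user typing garbage first, then 'YES'
replies = iter(['maybe', 'YES'])
print(ask_yes_no('Continue?', reader=lambda _: next(replies)))  # True
```

Checking `answer[0]` works because 'yes' and 'y' both start with 'y' while 'no' and 'n' both start with 'n', so the membership test above has already ruled out every other string.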
114752545e80d4b8771517c3984c936fca591739 | 1,044 | py | Python | scratchai/attacks/attacks/semantic.py | iArunava/scratchai | 9472f64fd3a06a1daa51486b68f5d3d3c36ac2f0 | [
"MIT"
] | 101 | 2019-04-20T13:52:15.000Z | 2021-11-05T02:41:11.000Z | scratchai/attacks/attacks/semantic.py | iArunava/scratch.ai | 9472f64fd3a06a1daa51486b68f5d3d3c36ac2f0 | [
"MIT"
] | 228 | 2019-04-16T11:57:13.000Z | 2021-08-02T23:15:27.000Z | scratchai/attacks/attacks/semantic.py | iArunava/scratch.ai | 9472f64fd3a06a1daa51486b68f5d3d3c36ac2f0 | [
"MIT"
] | 16 | 2019-04-16T16:41:00.000Z | 2021-02-08T01:13:09.000Z | """
Semantic adversarial Examples
"""
__all__ = ['semantic', 'Semantic']
def semantic(x, center:bool=True, max_val:float=1.):
  """
  Semantic adversarial examples.
  https://arxiv.org/abs/1703.06857

  Note: data must either be centered (so that the negative image can be
  made by simple negation) or must be in the interval of [-1, 1]

  Arguments
  ---------
  net : nn.Module, optional
        The model on which to perform the attack.
  center : bool
        If true, assumes data has 0 mean so the negative image is just negation.
        If false, assumes data is in interval [0, max_val]
  max_val : float
        Maximum value allowed in the input data.
  """
  if center:
    return x * -1
  return max_val - x
################################################################
###### Class to initialize this attack
###### mainly for the use with torchvision.transforms
class Semantic():
  def __init__(self, net=None, **kwargs):
    self.kwargs = kwargs

  def __call__(self, x):
    return semantic(x, **self.kwargs)
| 26.1 | 83 | 0.619732 | 141 | 1,044 | 4.475177 | 0.546099 | 0.038035 | 0.085578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018337 | 0.216475 | 1,044 | 39 | 84 | 26.769231 | 0.753056 | 0.599617 | 0 | 0 | 0 | 0 | 0.053691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0 | 0.1 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
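The semantic attack is just the negative image: negation for zero-centered data, reflection about the value range otherwise. A dependency-free sketch using plain lists in place of tensors (the original operates elementwise on arrays, so the arithmetic is identical):

```python
def semantic(x, center=True, max_val=1.0):
    """Negative-image 'semantic' adversarial example: negate centered
    data, or reflect data in [0, max_val] about that range. Lists stand
    in for tensors here."""
    if center:
        return [-v for v in x]
    return [max_val - v for v in x]

centered = [0.5, -0.25, 0.1]        # zero-mean pixel values
print(semantic(centered))           # [-0.5, 0.25, -0.1]

pixels = [0.0, 0.25, 1.0]           # values in [0, 1]
print(semantic(pixels, center=False))  # [1.0, 0.75, 0.0]
```

Because the transform is its own inverse in both modes, applying `semantic` twice recovers the original input, which makes the attack easy to sanity-check.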