hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b71e4d45d0dd84308fd2a62e675360e49475c3fc | 3,140 | py | Python | readthedocs/search/parse_json.py | darrowco/readthedocs.org | fa7fc5a24306f1f6a27c7393f381c594ab29b357 | [
"MIT"
] | null | null | null | readthedocs/search/parse_json.py | darrowco/readthedocs.org | fa7fc5a24306f1f6a27c7393f381c594ab29b357 | [
"MIT"
] | null | null | null | readthedocs/search/parse_json.py | darrowco/readthedocs.org | fa7fc5a24306f1f6a27c7393f381c594ab29b357 | [
"MIT"
] | null | null | null | """Functions related to converting content into dict/JSON structures."""
import codecs
import json
import logging
from pyquery import PyQuery
log = logging.getLogger(__name__)
def generate_sections_from_pyquery(body):
"""Given a pyquery object, generate section dicts for each section."""
# Capture text inside h1 before the first h2
h1_section = body('.section > h1')
if h1_section:
div = h1_section.parent()
h1_title = h1_section.text().replace('¶', '').strip()
h1_id = div.attr('id')
h1_content = ''
next_p = body('h1').next()
while next_p:
if next_p[0].tag == 'div' and 'class' in next_p[0].attrib:
if 'section' in next_p[0].attrib['class']:
break
h1_content += parse_content(next_p.text())
next_p = next_p.next()
if h1_content:
yield {
'id': h1_id,
'title': h1_title,
'content': h1_content.replace('\n', '. '),
}
# Capture text inside h2's
section_list = body('.section > h2')
for num in range(len(section_list)):
div = section_list.eq(num).parent()
header = section_list.eq(num)
title = header.text().replace('¶', '').strip()
section_id = div.attr('id')
content = div.text()
content = parse_content(content)
yield {
'id': section_id,
'title': title,
'content': content,
}
def process_file(fjson_filename):
"""Read the fjson file from disk and parse it into a structured dict."""
try:
with codecs.open(fjson_filename, encoding='utf-8', mode='r') as f:
file_contents = f.read()
except IOError:
log.info('Unable to read file: %s', fjson_filename)
raise
data = json.loads(file_contents)
sections = []
path = ''
title = ''
if 'current_page_name' in data:
path = data['current_page_name']
else:
log.info('Unable to index file due to no name %s', fjson_filename)
if data.get('body'):
body = PyQuery(data['body'])
sections.extend(generate_sections_from_pyquery(body))
else:
log.info('Unable to index content for: %s', fjson_filename)
if 'title' in data:
title = data['title']
if title.startswith('<'):
title = PyQuery(data['title']).text()
else:
log.info('Unable to index title for: %s', fjson_filename)
return {
'path': path,
'title': title,
'sections': sections,
}
def parse_content(content):
"""
Removes the starting text and ¶.
It removes the starting text from the content
because it contains the title of that content,
which is redundant here.
"""
content = content.replace('¶', '').strip()
# removing the starting text of each
content = content.split('\n')
if len(content) > 1: # there were \n
content = content[1:]
# converting newlines to ". "
content = '. '.join([text.strip().rstrip('.') for text in content])
return content
| 28.288288 | 76 | 0.576752 | 393 | 3,140 | 4.486005 | 0.305344 | 0.022689 | 0.029495 | 0.034033 | 0.091889 | 0.040839 | 0 | 0 | 0 | 0 | 0 | 0.010835 | 0.294586 | 3,140 | 110 | 77 | 28.545455 | 0.783296 | 0.157962 | 0 | 0.092105 | 1 | 0 | 0.113208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039474 | false | 0 | 0.052632 | 0 | 0.118421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b721cd3010a8637974b6ec065b10132ac28ed47b | 1,957 | py | Python | createVideo.py | Thefalas/disksMD | 1f3a0a1814baf1fd8905da2e88d2244de90d14ec | [
"MIT"
] | null | null | null | createVideo.py | Thefalas/disksMD | 1f3a0a1814baf1fd8905da2e88d2244de90d14ec | [
"MIT"
] | null | null | null | createVideo.py | Thefalas/disksMD | 1f3a0a1814baf1fd8905da2e88d2244de90d14ec | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu May 3 18:33:28 2018
@author: malopez
"""
import pandas as pd
import matplotlib.pyplot as plt
import cv2
images_folder = "C:/Users/malopez/Desktop/disksMD/images"
data_folder = "C:/Users/malopez/Desktop/disksMD/data"
output_video = './video4.mp4'
particle_radius = 1.0
n_particles = 90 # TODO: Why 3 is the minimun number of particles?
desired_collisions_per_particle = 10
n_collisions = n_particles*desired_collisions_per_particle
size_X = 60 # System size X
size_Y = 30 # System size Y
size_X_inches = 6*(size_X/size_Y)
size_Y_inches = 6
size_figure = (size_X_inches, size_Y_inches)
# Fenomenological constant ;p
circle_size = 11875*size_X_inches*size_Y_inches / (size_X*size_Y)
# circle_size = particle_radius*427500 / (size_X*size_Y)
for i in range(n_collisions):
file_name_pos = data_folder + "/xy"+'{0:05d}'.format(i)+".dat"
pos = pd.read_table(file_name_pos, sep='\s+',
header = None, names =['x', 'y'])
img_name = images_folder+'/img'+'{0:05d}'.format(i)+".png"
fig, ax = plt.subplots(figsize=size_figure, dpi=250)
ax.set_xlim([0,size_X])
ax.set_ylim([0,size_Y])
plt.scatter(pos.x, pos.y, s=circle_size)
fig.savefig(img_name)
print('Saving img nº: '+str(i))
plt.close()
images = []
for i in range(n_collisions):
images.append(images_folder+'/img'+'{0:05d}'.format(i)+".png")
# Height and Width from first image
frame = cv2.imread(images[0])
height, width, channels = frame.shape
# Definimos el codec y creamos un objeto VideoWriter
fourcc = cv2.VideoWriter_fourcc(*'mp4v') # Be sure to use lower case
out = cv2.VideoWriter(output_video, fourcc, 30.0, (width, height))
print('Generating video, please wait')
for image in images:
frame = cv2.imread(image)
# Write out frame to video
out.write(frame)
# Release everything if job is finished
out.release()
print("The output video is {}".format(output_video)) | 29.651515 | 68 | 0.701073 | 313 | 1,957 | 4.198083 | 0.440895 | 0.034247 | 0.027397 | 0.030441 | 0.217656 | 0.161339 | 0.04414 | 0.04414 | 0 | 0 | 0 | 0.037782 | 0.161472 | 1,957 | 66 | 69 | 29.651515 | 0.762949 | 0.208993 | 0 | 0.04878 | 0 | 0 | 0.135206 | 0.049641 | 0 | 0 | 0 | 0.015152 | 0 | 1 | 0 | false | 0 | 0.073171 | 0 | 0.073171 | 0.073171 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b72306e350f2a9f34586f4bdf8fb4a7f6ec9f932 | 4,141 | py | Python | truffe2/generic/templatetags/generic_extras.py | JonathanCollaud/truffe2 | 5cbb055ac1acf7e7dc697340618fcb56c67fbd91 | [
"BSD-2-Clause"
] | 9 | 2016-09-14T02:19:19.000Z | 2020-10-18T14:52:14.000Z | truffe2/generic/templatetags/generic_extras.py | JonathanCollaud/truffe2 | 5cbb055ac1acf7e7dc697340618fcb56c67fbd91 | [
"BSD-2-Clause"
] | 19 | 2016-11-09T21:28:51.000Z | 2021-02-10T22:37:31.000Z | truffe2/generic/templatetags/generic_extras.py | JonathanCollaud/truffe2 | 5cbb055ac1acf7e7dc697340618fcb56c67fbd91 | [
"BSD-2-Clause"
] | 13 | 2016-12-31T14:22:09.000Z | 2020-12-27T19:43:19.000Z | from django import template
from django.utils.safestring import mark_safe
import bleach
from bleach.sanitizer import BleachSanitizer
from bleach.encoding import force_unicode
from bootstrap3.renderers import FieldRenderer
from bootstrap3.text import text_value
import html5lib
import re
register = template.Library()
pos = [(0, 0), (1, 0), (0, 1), (2, 3), (1, 2), (2, 1), (2, 2)]
re_spaceless = re.compile("(\n|\r)+")
@register.filter
def node_x(value):
x, _ = pos[value]
return x
@register.filter
def node_y(value):
_, y = pos[value]
return y
@register.filter
def get_attr(value, arg):
v = getattr(value, arg, None)
if hasattr(v, '__call__'):
v = v()
elif isinstance(value, dict):
v = value.get(arg)
if v is None:
return ''
return v
@register.filter
def call(obj, methodName):
method = getattr(obj, methodName)
if "__callArg" in obj.__dict__:
ret = method(*obj.__callArg)
del obj.__callArg
return ret
return method()
@register.filter
def args(obj, arg):
if "__callArg" not in obj.__dict__:
obj.__callArg = []
obj.__callArg += [arg]
return obj
@register.filter
def get_class(value):
return value.__class__.__name__
@register.filter
def is_new_for(obj, user):
return obj.is_new(user)
@register.simple_tag(takes_context=True)
def switchable(context, obj, user, id):
return 'true' if obj.may_switch_to(user, id) else 'false'
@register.assignment_tag(takes_context=True)
def get_list_quick_switch(context, obj):
if hasattr(obj.MetaState, 'list_quick_switch'):
return filter(lambda (status, __, ___): obj.may_switch_to(context['user'], status), obj.MetaState.list_quick_switch.get(obj.status, []))
@register.assignment_tag(takes_context=True)
def get_states_quick_switch(context, obj):
if hasattr(obj.MetaState, 'states_quick_switch'):
return filter(lambda (status, __): obj.may_switch_to(context['user'], status), obj.MetaState.states_quick_switch.get(obj.status, []))
@register.tag
def nocrlf(parser, token):
nodelist = parser.parse(('endnocrlf',))
parser.delete_first_token()
return CrlfNode(nodelist)
class CrlfNode(template.Node):
def __init__(self, nodelist):
self.nodelist = nodelist
def render(self, context):
rendered = self.nodelist.render(context).strip()
return re_spaceless.sub("", rendered)
@register.filter
def html_check_and_safe(value):
tags = bleach.ALLOWED_TAGS + ['div', 'br', 'font', 'p', 'table', 'tr', 'td', 'th', 'img', 'u', 'span', 'tbody', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'hr']
attrs = {
'*': ['class', 'style', 'color', 'align', 'title', 'data-toggle', 'data-placement'],
'a': ['href', 'rel'],
'img': ['src', 'alt'],
}
style = ['line-height', 'background-color', 'font-size', 'margin-top']
text = force_unicode(value)
class s(BleachSanitizer):
allowed_elements = tags
allowed_attributes = attrs
allowed_css_properties = style
strip_disallowed_elements = True
strip_html_comments = True
allowed_protocols = ['http', 'https', 'data']
parser = html5lib.HTMLParser(tokenizer=s)
return mark_safe(bleach._render(parser.parseFragment(text)))
class SimpleFieldRenderer(FieldRenderer):
def render(self):
# See if we're not excluded
if self.field.name in self.exclude.replace(' ', '').split(','):
return ''
# Hidden input requires no special treatment
if self.field.is_hidden:
return text_value(self.field)
# Render the widget
self.add_widget_attrs()
html = self.field.as_widget(attrs=self.widget.attrs)
self.restore_widget_attrs()
# Start post render
html = self.post_widget_render(html)
html = self.wrap_widget(html)
html = self.make_input_group(html)
html = self.append_to_field(html)
html = self.wrap_field(html)
return html
@register.simple_tag()
def simple_bootstrap_field(field):
return SimpleFieldRenderer(field).render()
| 26.375796 | 157 | 0.657088 | 532 | 4,141 | 4.890977 | 0.340226 | 0.043044 | 0.052267 | 0.021906 | 0.162183 | 0.146042 | 0.125288 | 0.125288 | 0.059954 | 0.059954 | 0 | 0.00731 | 0.207196 | 4,141 | 156 | 158 | 26.544872 | 0.785257 | 0.025115 | 0 | 0.110092 | 0 | 0 | 0.067708 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.082569 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b727e8a96c1fbd46e661c2a5b89a290d333e2329 | 1,805 | py | Python | pull_related_videos.py | jgawrilo/youtube | 553bfe4cf303bc06abf8173f5ed0f4deb3ede57f | [
"Apache-2.0"
] | 1 | 2017-01-13T12:57:06.000Z | 2017-01-13T12:57:06.000Z | pull_related_videos.py | jgawrilo/youtube | 553bfe4cf303bc06abf8173f5ed0f4deb3ede57f | [
"Apache-2.0"
] | null | null | null | pull_related_videos.py | jgawrilo/youtube | 553bfe4cf303bc06abf8173f5ed0f4deb3ede57f | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.tools import argparser
import json
import os
import codecs
from bs4 import BeautifulSoup
import argparse
import requests
import sys
import googleapiclient
def get_video_info(vid, youtube):
response = youtube.videos().list(
part="id,snippet,contentDetails,statistics",
id=vid,
maxResults=1
).execute()
return response
def get_video_suggestions(youtube,vid):
try:
#print "Related to:", vid
search_response = youtube.search().list(
type="video",
part="id",
relatedToVideoId=vid,
maxResults=20
).execute()
for i in search_response["items"]:
#print float(get_video_info(i["id"]["videoId"],youtube)["items"][0]["statistics"]["viewCount"])
if float(get_video_info(i["id"]["videoId"],youtube)["items"][0]["statistics"]["viewCount"]) < 100000:
print i["id"]["videoId"]
except googleapiclient.errors.HttpError:
pass
# MAIN
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Pull some youtube.')
parser.add_argument("--key", help="https://cloud.google.com/console")
args = parser.parse_args()
# Set DEVELOPER_KEY to the API key value from the APIs & auth > Registered apps
# tab of
# https://cloud.google.com/console
# Please ensure that you have enabled the YouTube Data API for your project.
DEVELOPER_KEY = args.key
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
developerKey=DEVELOPER_KEY)
for f in os.listdir("../flashpoint/videos/"):
get_video_suggestions(youtube,f) | 28.203125 | 113 | 0.67867 | 222 | 1,805 | 5.36036 | 0.481982 | 0.033613 | 0.030252 | 0.043697 | 0.189916 | 0.09916 | 0.09916 | 0.09916 | 0.09916 | 0.09916 | 0 | 0.009743 | 0.203878 | 1,805 | 64 | 114 | 28.203125 | 0.818372 | 0.187258 | 0 | 0.047619 | 0 | 0 | 0.125342 | 0.039041 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.02381 | 0.261905 | null | null | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7286793431599e58da81b53c698ba6999fd3f4e | 1,630 | py | Python | day02/ex04/ai42/logging/log.py | d-r-e/Machine-Learning-Bootcamp | 618cad97c04d15fec6e8a371c526ad8e08cae35a | [
"MIT"
] | null | null | null | day02/ex04/ai42/logging/log.py | d-r-e/Machine-Learning-Bootcamp | 618cad97c04d15fec6e8a371c526ad8e08cae35a | [
"MIT"
] | 6 | 2021-05-25T08:51:39.000Z | 2021-05-25T08:51:40.000Z | day02/ex04/ai42/logging/log.py | d-r-e/Python-Bootcamp-42AI | 618cad97c04d15fec6e8a371c526ad8e08cae35a | [
"MIT"
] | null | null | null | # **************************************************************************** #
# #
# ::: :::::::: #
# logger.py :+: :+: :+: #
# +:+ +:+ +:+ #
# By: darodrig <darodrig@42madrid.com> +#+ +:+ +#+ #
# +#+#+#+#+#+ +#+ #
# Created: 2020/04/15 22:12:50 by darodrig #+# #+# #
# Updated: 2020/04/15 22:12:50 by darodrig ### ########.fr #
# #
# **************************************************************************** #
import time
import functools
from string import capwords
import getpass
import datetime
def log(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
start = time.time()
ret = func(*args, **kwargs)
exectime = time.time() - start
if int(exectime) > 0:
units = "s"
else:
exectime = exectime * 1000
units = "ms"
f = open("machine.log", "a")
f.write("({})Running: {} [ exec-time = {} {} ]\n".format(
getpass.getuser(),
capwords(func.__name__.replace('_', ' ')),
round(exectime, 3),
units))
f.close()
return ret
return wrapper
| 41.794872 | 81 | 0.282209 | 99 | 1,630 | 4.59596 | 0.575758 | 0.065934 | 0.035165 | 0.043956 | 0.105495 | 0.105495 | 0.105495 | 0.105495 | 0 | 0 | 0 | 0.043956 | 0.497546 | 1,630 | 38 | 82 | 42.894737 | 0.5116 | 0.511043 | 0 | 0 | 0 | 0 | 0.082865 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0.08 | 0.2 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b72cde7853e529893263c2d30f32b8dad4c30116 | 7,467 | py | Python | ava/common/check.py | indeedsecurity/ava-ce | 4483b301034a096b716646a470a6642b3df8ce61 | [
"Apache-2.0"
] | 2 | 2019-03-26T15:37:48.000Z | 2020-01-03T03:47:30.000Z | ava/common/check.py | indeedsecurity/ava-ce | 4483b301034a096b716646a470a6642b3df8ce61 | [
"Apache-2.0"
] | 2 | 2021-03-25T21:27:09.000Z | 2021-06-01T21:20:04.000Z | ava/common/check.py | indeedsecurity/ava-ce | 4483b301034a096b716646a470a6642b3df8ce61 | [
"Apache-2.0"
] | null | null | null | from bs4 import BeautifulSoup
from difflib import SequenceMatcher
class _Check:
"""
Parent check class. All other checks should be a subclass of this class.
"""
key = "check"
name = "Check"
description = "Parent check"
class _BlindCheck(_Check):
"""
These checks identify issues in internal systems. The payloads include a host that listens for callbacks from the
internal system. These checks do not raise issues directly. The listener server maintains issue information.
"""
_payloads = [] # attribute should be populated by children's __init___ method
example = "Payload example"
def payloads(self, url, target, value):
"""
Returns the check's payloads. Children can override to provide dynamic payloads.
:param url: url value
:param target: target name
:param value: target value
:return: list of payloads
"""
# return
return self._payloads
def _check_payloads(self, payloads):
"""
Checks if the payloads are adoptable for this class and modify the payloads to adjust to check function.
InvalidFormatException is raised, if a payload is not adoptable.
Children can override.
:param payloads: list of payloads
:return: list of modified payloads
"""
return payloads
def set_payloads(self, payloads):
"""
Overwrite the check's payloads.
:param payloads: list of payloads
"""
self._payloads = self._check_payloads(payloads)
def add_payloads(self, payloads):
"""
Add payloads to the check's payloads.
:param payloads: list of payloads
"""
self._payloads += self._check_payloads(payloads)
class _PassiveCheck(_Check):
"""
These checks identify sensitive information in responses. The response from the server may be checked for social
security numbers, credit card numbers, email addresses, etc. These checks do not have payloads.
"""
def check(self, response):
"""Method should be implemented by children"""
pass
class _ActiveCheck(_Check):
"""
Parent class for active checks. Subclasses include simple, differential, and timing checks.
"""
_payloads = [] # attribute should be populated by children's __init___ method
example = "Payload example"
def payloads(self, url, target, value):
"""
Returns the check's payloads. Children can override to provide dynamic payloads.
:param url: url value
:param target: target name
:param value: target value
:return: list of payloads
"""
return self._payloads
def _check_payloads(self, payloads):
"""
Checks if the payloads are adoptable for this class and modify the payloads to adjust to check function.
InvalidFormatException is raised, if a payload is not adoptable.
Children can override.
:param payloads: list of payloads
:return: list of modified payloads
"""
return payloads
def set_payloads(self, payloads):
"""
Overwrite the check's payloads.
:param payloads: list of payloads
"""
self._payloads = self._check_payloads(payloads)
def add_payloads(self, payloads):
"""
Add payloads to the check's payloads.
:param payloads: list of payloads
"""
self._payloads += self._check_payloads(payloads)
class _ValueCheck(_ActiveCheck):
"""
These checks perform value analysis to identify issues. For instance, they may analyze text in HTML bodies or
values in HTTP headers. These checks audit using a single payload and single response.
"""
def check(self, response, payload):
"""Method should be implemented by children"""
pass
class _DifferentialCheck(_ActiveCheck):
"""
These checks perform differential analysis between true and false payloads to identify issues. If the difference
between the payloads' responses is below a threshold, then an issue is raised. These checks audit using two
payloads and two responses.
"""
_threshold = 0.90
def check(self, responses, payload):
"""
Checks for issues by looking for the difference between response bodies. HTML script and style tags are
removed from HTML responses.
:param responses: response objects from server
:param payload: payload value
:return: true if vulnerable, false otherwise
"""
# extract
true_response = responses['true']
false_response = responses['false']
# check response
if not true_response.text or not false_response.text:
return False
# check status code
if true_response.status_code != false_response.status_code:
return False
# soup
true_soup = BeautifulSoup(true_response.text, "html.parser")
false_soup = BeautifulSoup(false_response.text, "html.parser")
# remove script and style tags
excludes = ["script", "style"]
true_tags = [tag for tag in true_soup.find_all(text=True) if tag.parent.name not in excludes]
false_tags = [tag for tag in false_soup.find_all(text=True) if tag.parent.name not in excludes]
# join back
true_text = ' '.join(true_tags)
false_text = ' '.join(false_tags)
# calculate ratio
sequence = SequenceMatcher(None, true_text, false_text)
ratio = sequence.quick_ratio()
# check difference
if ratio < _DifferentialCheck._threshold:
return True
else:
return False
class _TimingCheck(_ActiveCheck):
"""
These checks perform timing analysis from delays to identify issues. If the response's elapsed time is above a
threshold, then an issue is raised. These checks audit using one payload, the payload's delay, and one response.
"""
_padding = 0.50
def check(self, responses, payload, delay):
"""
Checks for issues by measuring the elapsed time of the response. It uses a padding to prevent slow endpoints
from producing false positives.
:param responses: response objects from server
:param payload: payload value
:param delay: time as float
:return: true if vulnerable, false otherwise
"""
# extract
original_response = responses['original']
timing_response = responses['timing']
# calculate elapsed time
original_elapsed = original_response.elapsed.seconds + (original_response.elapsed.microseconds / 1000000)
timing_elapsed = timing_response.elapsed.seconds + (timing_response.elapsed.microseconds / 1000000)
# calculate padding
padding = original_elapsed * _TimingCheck._padding
# check time
if timing_elapsed > (delay + padding):
return True
else:
return False
def _check_payloads(self, payloads):
"""
Checks if the payloads are adoptable for this class and modify the payloads to adjust to check function.
InvalidFormatException is raised, if a payload is not adoptable.
Children can override.
:param payloads: list of payloads
:return: list of modified payloads
"""
return [(payload, 9) for payload in payloads]
| 34.892523 | 117 | 0.656489 | 877 | 7,467 | 5.486887 | 0.208666 | 0.042394 | 0.045719 | 0.027639 | 0.510391 | 0.4734 | 0.4734 | 0.455528 | 0.43724 | 0.43724 | 0 | 0.004067 | 0.275613 | 7,467 | 213 | 118 | 35.056338 | 0.885561 | 0.486139 | 0 | 0.442857 | 0 | 0 | 0.034744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185714 | false | 0.042857 | 0.028571 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b72ff8b09a0b3b3377f4450ff20722334a0f1163 | 260 | py | Python | Chapter02/Shuffle.py | Tanishadel/Mastering-Machine-Learning-for-Penetration-Testing | 5aecc2603faaa97dafc1fa4c86f4e5f530bb4877 | [
"MIT"
] | 241 | 2018-06-14T15:58:34.000Z | 2022-03-27T08:47:40.000Z | Chapter02/Shuffle.py | Tanishadel/Mastering-Machine-Learning-for-Penetration-Testing | 5aecc2603faaa97dafc1fa4c86f4e5f530bb4877 | [
"MIT"
] | 5 | 2018-11-05T20:40:58.000Z | 2021-09-17T12:35:42.000Z | Chapter02/Shuffle.py | Tanishadel/Mastering-Machine-Learning-for-Penetration-Testing | 5aecc2603faaa97dafc1fa4c86f4e5f530bb4877 | [
"MIT"
] | 144 | 2018-06-21T12:50:25.000Z | 2022-03-21T13:47:51.000Z | import os
import random
#initiate a list called emails_list
emails_list = []
Directory = '/home/azureuser/spam_filter/enron1/emails/'
Dir_list = os.listdir(Directory)
for file in Dir_list:
f = open(Directory + file, 'r')
emails_list.append(f.read())
f.close()
| 23.636364 | 56 | 0.753846 | 41 | 260 | 4.634146 | 0.609756 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004348 | 0.115385 | 260 | 10 | 57 | 26 | 0.821739 | 0.130769 | 0 | 0 | 0 | 0 | 0.191111 | 0.186667 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b733a705c894e170baefe76d8fe393fd92a6fb35 | 4,346 | py | Python | demo/corenlp.py | Rvlis/stanza | 58237519740b27b83c00c6c23d53ba434132dcca | [
"Apache-2.0"
] | null | null | null | demo/corenlp.py | Rvlis/stanza | 58237519740b27b83c00c6c23d53ba434132dcca | [
"Apache-2.0"
] | null | null | null | demo/corenlp.py | Rvlis/stanza | 58237519740b27b83c00c6c23d53ba434132dcca | [
"Apache-2.0"
] | null | null | null | from stanza.server import CoreNLPClient
import os
# example text
print('---')
print('input text')
print('')
# text = "Chris Manning is a nice person. Chris wrote a simple sentence. He also gives oranges to people."
text = "PyTables is built on top of the HDF5 library, using the Python language and the NumPy package."
print(text)
# set up the client
print('---')
print('starting up Java Stanford CoreNLP Server...')
# set up the client
# with CoreNLPClient(annotators=['tokenize','ssplit','pos','lemma','ner','parse','depparse','coref'], timeout=60000, memory='4G', be_quiet=True) as client:
with CoreNLPClient(annotators=['tokenize','ssplit','pos','parse','depparse'], timeout=60000, memory='4G', be_quiet=True) as client:
# submit the request to the server
ann = client.annotate(text)
# print("ann is ", ann)
# os.system("pause")
# get the first sentence
sentence = ann.sentence[0]
print("sentence is ", sentence)
os.system("pause")
# get the dependency parse of the first sentence
# print('---')
# print('dependency parse of first sentence')
# dependency_parse = sentence.basicDependencies
# print(dependency_parse)
# os.system("pause")
# HDSKG's method
print('---')
print('enhanced++ dependency parse of first sentence')
enhanced_plus_plus_dependency_parse = sentence.enhancedPlusPlusDependencies
print(enhanced_plus_plus_dependency_parse)
os.system("pause")
# get the constituency parse of the first sentence
# print('---')
# print('constituency parse of first sentence')
# constituency_parse = sentence.parseTree
# print(constituency_parse)
# os.system("pause")
# get the first subtree of the constituency parse
# print('---')
# print('first subtree of constituency parse')
# print(constituency_parse.child[0])
# os.system("pause")
# get the value of the first subtree
# print('---')
# print('value of first subtree of constituency parse')
# print(constituency_parse.child[0].value)
# os.system("pause")
# get the first token of the first sentence
print('---')
print('first token of first sentence')
token = sentence.token[0]
print(token)
os.system("pause")
# get the part-of-speech tag
print('---')
print('part of speech tag of token')
token.pos
print(token.pos)
os.system("pause")
# get the named entity tag
print('---')
print('named entity tag of token')
print(token.ner)
os.system("pause")
# get an entity mention from the first sentence
# print('---')
# print('first entity mention in sentence')
# print(sentence.mentions[0])
# os.system("pause")
# access the coref chain
# print('---')
# print('coref chains for the example')
# print(ann.corefChain)
# os.system("pause")
# Use tokensregex patterns to find who wrote a sentence.
# pattern = '([ner: PERSON]+) /wrote/ /an?/ []{0,3} /sentence|article/'
pattern = "([tag: NNP]{1,}) ([ tag:/VB.*/ ]) /an?/ ([pos:JJ]{0,3}) /sentence|article/"
matches = client.tokensregex(text, pattern)
print("tokensregex matches is ", matches)
# sentences contains a list with matches for each sentence.
assert len(matches["sentences"]) == 3
# length tells you whether or not there are any matches in this
assert matches["sentences"][1]["length"] == 1
# You can access matches like most regex groups.
# print("sentence is ",["sentences"][1]["0"]["text"])
matches["sentences"][1]["0"]["text"] == "Chris wrote a simple sentence"
matches["sentences"][1]["0"]["1"]["text"] == "Chris"
# # Use semgrex patterns to directly find who wrote what.
# pattern = '{word:wrote} >nsubj {}=subject >dobj {}=object'
# matches = client.semgrex(text, pattern)
# # print("semgrex matches is", matches)
# # sentences contains a list with matches for each sentence.
# assert len(matches["sentences"]) == 3
# # length tells you whether or not there are any matches in this
# assert matches["sentences"][1]["length"] == 1
# # You can access matches like most regex groups.
# matches["sentences"][1]["0"]["text"] == "wrote"
# matches["sentences"][1]["0"]["$subject"]["text"] == "Chris"
# matches["sentences"][1]["0"]["$object"]["text"] == "sentence"
| 35.048387 | 155 | 0.645651 | 556 | 4,346 | 5.017986 | 0.257194 | 0.043011 | 0.055914 | 0.051613 | 0.448746 | 0.335842 | 0.276703 | 0.219355 | 0.219355 | 0.191398 | 0 | 0.012076 | 0.199724 | 4,346 | 123 | 156 | 35.333333 | 0.790109 | 0.562356 | 0 | 0.275 | 0 | 0.025 | 0.296943 | 0 | 0 | 0 | 0 | 0 | 0.05 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
b739046fc9ac3dbe4de500836eb668496db9cf82 | 1,754 | py | Python | scripts/backup/eval.py | vafaei-ar/Ngene | 9735f9729bdde70624ec9af73196b418f2e1b2f2 | [
"MIT"
] | null | null | null | scripts/backup/eval.py | vafaei-ar/Ngene | 9735f9729bdde70624ec9af73196b418f2e1b2f2 | [
"MIT"
] | null | null | null | scripts/backup/eval.py | vafaei-ar/Ngene | 9735f9729bdde70624ec9af73196b418f2e1b2f2 | [
"MIT"
] | null | null | null | import matplotlib as mpl
mpl.use('agg')
import glob
import argparse
from time import time
import numpy as np
import pylab as plt
import rficnn as rfc
parser = argparse.ArgumentParser()
parser.add_argument('--arch', required=False, help='choose architecture', type=str, default='1')
parser.add_argument('--trsh', required=False, help='choose threshold', type=float, default=0.1)
args = parser.parse_args()
threshold = args.trsh
rfc.the_print('Chosen architecture is: '+args.arch+' and threshod is: '+str(threshold),bgc='green')
model_add = './models/model_'+args.arch+'_'+str(threshold)
conv = ss.ConvolutionalLayers(nx=276,ny=400,n_channel=1,restore=1,
model_add=model_add,arch_file_name='arch_'+args.arch)
sim_files = glob.glob('../data/hide_sims_test/calib_1year/*.fits'))
times = []
for fil in sim_files:
fname = fil.split('/')[-1]
print fname
data,mask = read_chunck_sdfits(fil,label_tag=RFI,threshold=0.1,verbose=0)
data = np.clip(np.fabs(data), 0, 200)
data -= data.min()
data /= data.max()
lnx,lny = data.shape
s = time()
pred = conv.conv_large_image(data.reshape(1,lnx,lny,1),pad=10,lx=276,ly=400)
e = time()
times.append(e-s)
mask = mask[10:-10,:]
pred = pred[10:-10,:]
fig, (ax1,ax2,ax3) = plt.subplots(3,1,figsize=(18,8))
ax1.imshow(data,aspect='auto')
ax2.imshow(mask,aspect='auto')
ax3.imshow(pred,aspect='auto')
np.save('../comparison/'+fname+'_mask_'+sys.argv[1],mask)
np.save('../comparison/'+fname+'_pred_'+sys.argv[1],pred)
plt.subplots_adjust(left=0.04, right=0.99, top=0.99, bottom=0.04)
plt.savefig('../comparison/'+fname+'_'+sys.argv[1]+'.jpg',dpi=30)
plt.close()
print np.mean(times)
| 29.728814 | 99 | 0.661345 | 272 | 1,754 | 4.158088 | 0.474265 | 0.02122 | 0.02122 | 0.040672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044564 | 0.155644 | 1,754 | 58 | 100 | 30.241379 | 0.719109 | 0 | 0 | 0 | 0 | 0 | 0.132269 | 0.023375 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.162791 | null | null | 0.069767 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7407fa4aa1e99e15cc05919bc7ed96eeb510a89 | 443 | py | Python | road_roughness_prediction/tools/image_utils.py | mknz/dsr-road-roughness-prediction | 5f56b6ba5da70a09f2c967b7f32c740072e20ed1 | [
"MIT"
] | 7 | 2019-04-04T06:40:29.000Z | 2020-11-12T10:53:30.000Z | road_roughness_prediction/tools/image_utils.py | mknz/dsr-road-roughness-prediction | 5f56b6ba5da70a09f2c967b7f32c740072e20ed1 | [
"MIT"
] | 1 | 2021-09-28T07:11:05.000Z | 2021-09-28T07:11:05.000Z | road_roughness_prediction/tools/image_utils.py | mknz/dsr-road-roughness-prediction | 5f56b6ba5da70a09f2c967b7f32c740072e20ed1 | [
"MIT"
] | null | null | null | '''Image utils'''
from io import BytesIO
from PIL import Image
import matplotlib.pyplot as plt
def save_and_open(save_func):
'''Save to in-memory buffer and re-open '''
buf = BytesIO()
save_func(buf)
buf.seek(0)
bytes_ = buf.read()
buf_ = BytesIO(bytes_)
return buf_
def fig_to_pil(fig: plt.Figure):
'''Convert matplot figure to PIL Image'''
buf = save_and_open(fig.savefig)
return Image.open(buf)
| 20.136364 | 47 | 0.670429 | 68 | 443 | 4.191176 | 0.470588 | 0.049123 | 0.077193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002874 | 0.214447 | 443 | 21 | 48 | 21.095238 | 0.816092 | 0.191874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.230769 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3f7a60ea385d7fdaeb685b99834b8a082b01d925 | 719 | py | Python | run-workflow-cloud-run/main.py | UriKatsirPrivate/gcp-workflows | 35fa813bdc5335d8f06aee2f2d2e12b773f43872 | [
"MIT"
] | 1 | 2021-07-07T07:43:55.000Z | 2021-07-07T07:43:55.000Z | run-workflow-cloud-run/main.py | UriKatsirPrivate/gcp-workflows | 35fa813bdc5335d8f06aee2f2d2e12b773f43872 | [
"MIT"
] | null | null | null | run-workflow-cloud-run/main.py | UriKatsirPrivate/gcp-workflows | 35fa813bdc5335d8f06aee2f2d2e12b773f43872 | [
"MIT"
] | null | null | null | import os
from flask import Flask
from googleapiclient.discovery import build
import json
import sys
from google.oauth2 import service_account
import googleapiclient.discovery
app = Flask(__name__)
workflow_service = build('workflowexecutions', 'v1')
@app.route("/")
def hello_world():
parent = "projects/uri-test/locations/us-central1/workflows/CreateMachineImagesInAllZones"
body = {"argument": '{"project": "uri-test","zone": "us-east1-b"}'}
workflow = workflow_service.projects().locations().workflows(
).executions().create(parent=parent, body=body).execute()
return workflow
if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
| 23.966667 | 94 | 0.727399 | 89 | 719 | 5.696629 | 0.595506 | 0.011834 | 0.011834 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019048 | 0.123783 | 719 | 29 | 95 | 24.793103 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.23783 | 0.109875 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.388889 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3f7b2d2414e53d9ad204b36035beffc35a3a5a76 | 9,264 | py | Python | rendering/viewer.py | MTKat/FloorplanTransformation | de10a1177e685267185de05dbce3230c25ef1d16 | [
"MIT"
] | 323 | 2017-10-24T07:45:12.000Z | 2022-03-05T15:25:17.000Z | rendering/viewer.py | MTKat/FloorplanTransformation | de10a1177e685267185de05dbce3230c25ef1d16 | [
"MIT"
] | 32 | 2017-11-20T12:44:27.000Z | 2022-03-22T01:59:23.000Z | rendering/viewer.py | MTKat/FloorplanTransformation | de10a1177e685267185de05dbce3230c25ef1d16 | [
"MIT"
] | 137 | 2017-11-13T10:06:00.000Z | 2022-03-23T10:26:30.000Z | from math import pi, sin, cos
from panda3d.core import *
from direct.showbase.ShowBase import ShowBase
from direct.task import Task
from floorplan import Floorplan
import numpy as np
import random
import copy
class Viewer(ShowBase):
def __init__(self):
ShowBase.__init__(self)
#self.scene = self.loader.loadModel("floorplan_1.txt-floor.obj")
#self.scene = base.loader.loadModel("floorplan_1.txt-floor.egg")
#self.scene = base.loader.loadModel("panda.egg")
#self.scene = base.loader.loadModel("environment")
base.setBackgroundColor(0, 0, 0)
self.angle = 0.0
lens = PerspectiveLens()
lens.setFov(60)
lens.setNear(0.01)
lens.setFar(100000)
base.cam.node().setLens(lens)
floorplan = Floorplan('test/floorplan_7')
#floorplan.setFilename('test/floorplan_2')
floorplan.read()
self.scene = floorplan.generateEggModel()
self.scene.reparentTo(self.render)
#self.scene.setScale(0.01, 0.01, 0.01)
#self.scene.setTwoSided(True)
self.scene.setTwoSided(True)
#self.scene.setPos(0, 0, 3)
#texture = loader.loadTexture("floorplan_1.png")
#self.scene.setTexture(texture)
#self.scene.setHpr(0, 0, 0)
# angleDegrees = 0
# angleRadians = angleDegrees * (pi / 180.0)
# self.camera.setPos(20 * sin(angleRadians), -20 * cos(angleRadians), 3)
# self.camera.setHpr(angleDegrees, 0, 0)
#self.camera.lookAt(0, 0, 0)
self.alight = AmbientLight('alight')
self.alight.setColor(VBase4(0.2, 0.2, 0.2, 1))
self.alnp = self.render.attachNewNode(self.alight)
self.render.setLight(self.alnp)
dlight = DirectionalLight('dlight')
dlight.setColor(VBase4(1, 1, 1, 1))
dlnp = self.render.attachNewNode(dlight)
#dlnp.setHpr(0, -90, 0)
dlnp.setPos(0.5, 0.5, 3)
dlnp.lookAt(0.5, 0.5, 2)
self.render.setLight(dlnp)
for i in xrange(10):
plight = PointLight('plight')
plight.setAttenuation((1, 0, 1))
color = random.randint(10, 15)
plight.setColor(VBase4(color, color, color, 1))
plnp = self.render.attachNewNode(plight)
if i == 0:
plnp.setPos(0.5, 0.5, 3)
else:
plnp.setPos(1 * random.random(), 1 * random.random(), 0.3)
pass
self.render.setLight(plnp)
#base.useTrackball()
#base.trackball.node().setPos(2.0, 0, 3)
#base.trackball.node().setHpr(0, 0, 3)
#base.enableMouse()
#base.useDrive()
base.disableMouse()
self.taskMgr.add(self.spinCameraTask, "SpinCameraTask")
#self.accept('arrow_up', self.moveForward)
#self.accept('arrow_up_-repeat', self.moveForward)
self.topDownCameraPos = [0.5, 0.5, 1.5]
self.topDownTarget = [0.5, 0.499, 0.5]
self.topDownH = 0
self.startCameraPos = floorplan.startCameraPos
self.startTarget = floorplan.startTarget
self.startH = 0
self.cameraPos = self.topDownCameraPos
self.target = self.topDownTarget
self.H = self.topDownH
self.accept('space', self.openDoor)
self.accept('enter', self.startChangingView)
self.viewMode = 'T'
self.viewChangingProgress = 1.02
ceiling = self.scene.find("**/ceiling")
ceiling.hide()
return
def moveForward(self):
self.cameraPos[0] -= 0.1
def openDoor(self):
minDistance = 10000
doors = self.scene.find("**/doors")
for door in doors.getChildren():
mins, maxs = door.getTightBounds()
vec_1 = (mins + maxs) / 2 - Vec3(self.target[0], self.target[1], (mins[2] + maxs[2]) / 2)
vec_2 = (mins + maxs) / 2 - Vec3(self.cameraPos[0], self.cameraPos[1], (mins[2] + maxs[2]) / 2)
if (vec_1.dot(vec_2) > 0 and vec_1.length() > vec_2.length()) or np.arccos(abs(vec_1.dot(vec_2)) / (vec_1.length() * vec_2.length())) > np.pi / 4:
continue
distance = pow(pow(self.cameraPos[0] - (mins[0] + maxs[0]) / 2, 2) + pow(self.cameraPos[1] - (mins[1] + maxs[1]) / 2, 2) + pow(self.cameraPos[2] - (mins[2] + maxs[2]) / 2, 2), 0.5)
if distance < minDistance:
minDistanceDoor = door
minDistance = distance
pass
continue
if minDistance > 1:
return
mins, maxs = minDistanceDoor.getTightBounds()
if abs(maxs[0] - mins[0]) > abs(maxs[1] - mins[1]):
minsExpected = Vec3(mins[0] - (maxs[1] - mins[1]), mins[1], mins[2])
maxsExpected = Vec3(mins[0], mins[1] + (maxs[0] - mins[0]), maxs[2])
else:
minsExpected = Vec3(mins[0] - (maxs[1] - mins[1]) + (maxs[0] - mins[0]), mins[1] - (maxs[0] - mins[0]), mins[2])
maxsExpected = Vec3(mins[0] + (maxs[0] - mins[0]), mins[1] + (maxs[0] - mins[0]) - (maxs[0] - mins[0]), maxs[2])
pass
minDistanceDoor.setH(minDistanceDoor, 90)
mins, maxs = minDistanceDoor.getTightBounds()
minDistanceDoor.setPos(minDistanceDoor, minsExpected[1] - mins[1], -minsExpected[0] + mins[0], 0)
#print(scene.findAllMatches('doors'))
return
def startChangingView(self):
self.viewChangingProgress = 0
self.prevCameraPos = copy.deepcopy(self.cameraPos)
self.prevTarget = copy.deepcopy(self.target)
self.prevH = self.camera.getR()
if self.viewMode == 'T':
self.newCameraPos = self.startCameraPos
self.newTarget = self.startTarget
self.newH = self.startH
self.viewMode = 'C'
else:
self.newCameraPos = self.topDownCameraPos
self.newTarget = self.topDownTarget
self.newH = self.topDownH
self.startCameraPos = copy.deepcopy(self.cameraPos)
self.startTarget = copy.deepcopy(self.target)
self.startH = self.camera.getR()
self.viewMode = 'T'
pass
return
def changeView(self):
self.cameraPos = []
self.target = []
for c in xrange(3):
self.cameraPos.append(self.prevCameraPos[c] + (self.newCameraPos[c] - self.prevCameraPos[c]) * self.viewChangingProgress)
self.target.append(self.prevTarget[c] + (self.newTarget[c] - self.prevTarget[c]) * self.viewChangingProgress)
continue
self.H = self.prevH + (self.newH - self.prevH) * self.viewChangingProgress
if self.viewChangingProgress + 0.02 >= 1 and self.viewMode == 'C':
ceiling = self.scene.find("**/ceiling")
ceiling.show()
pass
if self.viewChangingProgress <= 0.02 and self.viewMode == 'T':
ceiling = self.scene.find("**/ceiling")
ceiling.hide()
pass
return
def spinCameraTask(self, task):
#print(task.time)
#angleDegrees = task.time * 6.0
movementStep = 0.003
if self.viewChangingProgress <= 1.01:
self.changeView()
self.viewChangingProgress += 0.02
pass
if base.mouseWatcherNode.is_button_down('w'):
for c in xrange(2):
step = movementStep * (self.target[c] - self.cameraPos[c])
self.cameraPos[c] += step
self.target[c] += step
continue
pass
if base.mouseWatcherNode.is_button_down('s'):
for c in xrange(2):
step = movementStep * (self.target[c] - self.cameraPos[c])
self.cameraPos[c] -= step
self.target[c] -= step
continue
pass
if base.mouseWatcherNode.is_button_down('a'):
step = movementStep * (self.target[0] - self.cameraPos[0])
self.cameraPos[1] += step
self.target[1] += step
step = movementStep * (self.target[1] - self.cameraPos[1])
self.cameraPos[0] -= step
self.target[0] -= step
pass
if base.mouseWatcherNode.is_button_down('d'):
step = movementStep * (self.target[0] - self.cameraPos[0])
self.cameraPos[1] -= step
self.target[1] -= step
step = movementStep * (self.target[1] - self.cameraPos[1])
self.cameraPos[0] += step
self.target[0] += step
pass
rotationStep = 0.02
if base.mouseWatcherNode.is_button_down('arrow_left'):
angle = np.angle(complex(self.target[0] - self.cameraPos[0], self.target[1] - self.cameraPos[1]))
angle += rotationStep
self.target[0] = self.cameraPos[0] + np.cos(angle)
self.target[1] = self.cameraPos[1] + np.sin(angle)
pass
if base.mouseWatcherNode.is_button_down('arrow_right'):
angle = np.angle(complex(self.target[0] - self.cameraPos[0], self.target[1] - self.cameraPos[1]))
angle -= rotationStep
self.target[0] = self.cameraPos[0] + np.cos(angle)
self.target[1] = self.cameraPos[1] + np.sin(angle)
pass
if base.mouseWatcherNode.is_button_down('arrow_up'):
angle = np.arcsin(self.target[2] - self.cameraPos[2])
angle += rotationStep
self.target[2] = self.cameraPos[2] + np.sin(angle)
pass
if base.mouseWatcherNode.is_button_down('arrow_down'):
angle = np.arcsin(self.target[2] - self.cameraPos[2])
angle -= rotationStep
self.target[2] = self.cameraPos[2] + np.sin(angle)
pass
angleDegrees = self.angle
angleRadians = angleDegrees * (pi / 180.0)
#self.camera.setPos(2.0 * sin(angleRadians), -2.0 * cos(angleRadians), 3)
self.camera.setPos(self.cameraPos[0], self.cameraPos[1], self.cameraPos[2])
#self.camera.setHpr(angleDegrees, 0, 0)
#self.camera.lookAt(0, 0, 0)
self.camera.lookAt(self.target[0], self.target[1], self.target[2])
self.camera.setR(self.H)
#if base.mouseWatcherNode.hasMouse()
return Task.cont
app = Viewer()
app.run()
| 35.358779 | 186 | 0.636118 | 1,221 | 9,264 | 4.788698 | 0.158886 | 0.084488 | 0.028733 | 0.020523 | 0.434411 | 0.373183 | 0.294852 | 0.262186 | 0.235847 | 0.231059 | 0 | 0.041957 | 0.210168 | 9,264 | 261 | 187 | 35.494253 | 0.757141 | 0.123921 | 0 | 0.29798 | 0 | 0 | 0.017934 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0.075758 | 0.040404 | 0 | 0.106061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3f89aa8877a0c320a25f1232234def2b63591b67 | 438 | py | Python | chris_backend/plugins/migrations/0034_auto_20200420_2320.py | rudolphpienaar/ChRIS_ultron_backEnd | 5de4e255fb151ac7a6f900327704831da11dcd1f | [
"MIT"
] | 26 | 2016-05-26T14:09:35.000Z | 2022-01-28T19:12:43.000Z | chris_backend/plugins/migrations/0034_auto_20200420_2320.py | rudolphpienaar/ChRIS_ultron_backEnd | 5de4e255fb151ac7a6f900327704831da11dcd1f | [
"MIT"
] | 168 | 2016-06-24T11:07:15.000Z | 2022-03-21T12:33:43.000Z | chris_backend/plugins/migrations/0034_auto_20200420_2320.py | rudolphpienaar/ChRIS_ultron_backEnd | 5de4e255fb151ac7a6f900327704831da11dcd1f | [
"MIT"
] | 45 | 2017-08-16T16:41:40.000Z | 2022-03-31T18:12:14.000Z | # Generated by Django 2.1.4 on 2020-04-20 23:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('plugins', '0033_pluginparameter_short_flag'),
]
operations = [
migrations.AlterField(
model_name='computeresource',
name='compute_resource_identifier',
field=models.CharField(max_length=100, unique=True),
),
]
| 23.052632 | 64 | 0.639269 | 46 | 438 | 5.934783 | 0.847826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067485 | 0.255708 | 438 | 18 | 65 | 24.333333 | 0.769939 | 0.10274 | 0 | 0 | 1 | 0 | 0.204604 | 0.148338 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3f8a0dda01eeac406db001e56651f4bc81dfa178 | 4,112 | py | Python | src/StarTrac/forms.py | Stdubic/Track | 853df13178967ab9b5c1918d6d56fa7fe2831b0f | [
"MIT"
] | 1 | 2015-09-14T19:54:56.000Z | 2015-09-14T19:54:56.000Z | src/StarTrac/forms.py | Stdubic/Track | 853df13178967ab9b5c1918d6d56fa7fe2831b0f | [
"MIT"
] | null | null | null | src/StarTrac/forms.py | Stdubic/Track | 853df13178967ab9b5c1918d6d56fa7fe2831b0f | [
"MIT"
] | null | null | null | '''
Created on Dec 21, 2014
@author: Milos
'''
'''
Forma za eventualna prosirenja djangovog user-a
'''
from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
from django.core.urlresolvers import reverse
from django.shortcuts import get_object_or_404
from django.views.generic.detail import DetailView
from django.views.generic.edit import UpdateView, CreateView
from tasks.models import UserExtend
class UserExtendForm(forms.ModelForm):
class Meta:
model = UserExtend
fields = ['picture']
class RegistrationForm(UserCreationForm):
email = forms.EmailField(required=True)
first_name = forms.CharField(required = False)
last_name = forms.CharField(required = False)
class Meta:
model = UserExtend
fields = ['first_name','last_name', 'username', 'email', 'password1', 'password2','picture']
def save(self, commit=True):
user = super(RegistrationForm, self).save(commit = False)
user.email = self.cleaned_data['email']
user.first_name = self.cleaned_data['first_name']
user.last_name = self.cleaned_data['last_name']
user.picture = self.cleaned_data['picture']
if commit:
user.save()
return user
def __init__(self, *args, **kwargs):
super(RegistrationForm, self).__init__(*args, **kwargs)
self.fields["first_name"].widget.attrs['class']='form-control'
self.fields["last_name"].widget.attrs['class']='form-control'
self.fields["username"].widget.attrs['class']='form-control'
self.fields["email"].widget.attrs['class']='form-control'
self.fields["password1"].widget.attrs['class']='form-control'
self.fields["password2"].widget.attrs['class']='form-control'
self.fields["picture"].widget.attrs['class']='form-control'
class UserForm(forms.ModelForm):
class Meta:
model = UserExtend
fields = ['first_name','last_name', 'username', 'email', 'picture']
def __init__(self, *args, **kwargs):
super(UserForm, self).__init__(*args, **kwargs)
self.fields["first_name"].widget.attrs['class']='form-control'
self.fields["last_name"].widget.attrs['class']='form-control'
self.fields["username"].widget.attrs['class']='form-control'
self.fields["email"].widget.attrs['class']='form-control'
self.fields["picture"].widget.attrs['class']='form-control'
class UserCreate(CreateView):
model = UserExtend
template_name = 'tasks/register.html'
form_class = RegistrationForm
def get_success_url(self):
return reverse('home')
def get_context_data(self, **kwargs):
context = super(UserCreate, self).get_context_data(**kwargs)
#KeyError
try:
context["back"] = self.request.META["HTTP_REFERER"]
except(KeyError):
context["back"]="/"
return context
class UserUpdate(UpdateView):
model = UserExtend
template_name = 'tasks/uupdate.html'
form_class = UserForm
def get_success_url(self):
return reverse('udetail')
def get_context_data(self, **kwargs):
context = super(UserUpdate, self).get_context_data(**kwargs)
#KeyError
try:
context["back"] = self.request.META["HTTP_REFERER"]
except(KeyError):
context["back"]="/"
return context
class DetailUser(DetailView):
model = UserExtend
template_name = 'tasks/udetail.html'
context_object_name='user'
def get_object(self):
return get_object_or_404(User, pk=self.request.user.pk)
def get_context_data(self, **kwargs):
context = super(DetailUser, self).get_context_data(**kwargs)
#KeyError
try:
context["back"] = self.request.META["HTTP_REFERER"]
except(KeyError):
context["back"]="/"
#context["user"] = self.request.user
return context | 31.875969 | 100 | 0.63643 | 457 | 4,112 | 5.579869 | 0.210066 | 0.047059 | 0.075294 | 0.094118 | 0.573333 | 0.511373 | 0.49098 | 0.427843 | 0.381961 | 0.381961 | 0 | 0.005049 | 0.229329 | 4,112 | 129 | 101 | 31.875969 | 0.799621 | 0.024076 | 0 | 0.471264 | 0 | 0 | 0.143617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0.034483 | 0.091954 | 0.034483 | 0.517241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3f8a7855be2c7792001f6241d146c5b377cdb431 | 932 | py | Python | setup.py | kiranmantri/python-remote-import | 2724e320377c3f2328dafd61707aad4d5eed2f10 | [
"MIT"
] | null | null | null | setup.py | kiranmantri/python-remote-import | 2724e320377c3f2328dafd61707aad4d5eed2f10 | [
"MIT"
] | null | null | null | setup.py | kiranmantri/python-remote-import | 2724e320377c3f2328dafd61707aad4d5eed2f10 | [
"MIT"
] | null | null | null | """Create the distribution file (pypi)."""
import importlib
import setuptools
from pathlib import Path
this_package_name = 'remote_import'
version_file = Path(__file__).absolute().parent / this_package_name / "__version__.py"
setuptools.setup(
name=this_package_name,
url="https://github.com/kiranmantri/python-remote_import",
author="Kiran Mantripragada (and Lydia.ai team)",
author_email="kiran.mantri@gmail.com",
description="Enable the Python import subsystem to load libraries from remote (e.g. HTTP, S3, SSH).",
version=importlib.import_module("remote_import.version", "__version__").__version__,
long_description=open("README.rst").read(),
packages=[this_package_name],
python_requires=">=3.10",
install_requires=[
line for line in open("requirements.txt").read().split("\n") if not line.startswith("#")
],
include_package_data=True,
setup_requires=["wheel"],
)
| 35.846154 | 105 | 0.724249 | 118 | 932 | 5.415254 | 0.610169 | 0.068858 | 0.093897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004969 | 0.136266 | 932 | 25 | 106 | 37.28 | 0.78882 | 0.038627 | 0 | 0 | 0 | 0.047619 | 0.333708 | 0.048315 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3f8f5e2babed8a1212595d0eeba5f9b572718ef4 | 1,961 | py | Python | rebalanced_balanced_tests/evalr.py | zhanghuiying2319/Master | 1f4d11dd8277517b7e63d34651a9629f58dd7070 | [
"MIT"
] | null | null | null | rebalanced_balanced_tests/evalr.py | zhanghuiying2319/Master | 1f4d11dd8277517b7e63d34651a9629f58dd7070 | [
"MIT"
] | null | null | null | rebalanced_balanced_tests/evalr.py | zhanghuiying2319/Master | 1f4d11dd8277517b7e63d34651a9629f58dd7070 | [
"MIT"
] | null | null | null |
import os,sys,math,numpy as np, matplotlib.pyplot as plt
def runstuff():
#savept= './scores_v2_23042021' #0.7580982029438019 0.5812124161981046
#savept= './scores_v2_23042021_sizes' #0.7552803814411163 0.583345946110785
#savept= './scores_v2_23042021_sizes_posenc' #0.7596977412700654 0.583193539083004
#savept= './scores_v2_23042021_aspectratios' #0.7534634172916412 0.575133552961051
#savept= './scores_v2_23042021_aspectratios_posenc10' #0.7560707509517669 0.5779228328727186
#savept= './oldscores/scores_v2_23042021_innercv_sizes' #0.7794053614139558 0.6070150885730982
#savept= './scores_v4_10052021'
#savept= './scores_v4_10052021_rebal0half'
#savept= './scores_v4_10052021_rebal0half_densenet121_SGD'#0.8922465562820435 0.8617303603225284
#savept= './scores_v4_10052021_rebal0half_densenet121_AdamW_10-4'#0.9014121651649475 0.8683200238479508
savept= './scores_v4_10052021_rebal0half_densenet121_SGD_equalproba'
numcv =10
numcl = 9
cwacc= np.zeros(numcl)
globalacc=0
confusion = np.zeros((numcl,numcl))
for cvind in range(numcv):
cwacctmp=np.load(os.path.join(savept,'rotavg_cwacc_outercv{:d}.npy'.format(cvind)) )
globalacctmp=np.load(os.path.join(savept,'rotavg_globalacc_outercv{:d}.npy'.format(cvind)) )
confusion_matrixtmp=np.load(os.path.join(savept,'rotavg_confusion_matrix_outercv{:d}.npy'.format(cvind)))
globalacc+=globalacctmp / float(numcv)
cwacc+=cwacctmp / float(numcv)
conf = np.load( os.path.join(savept,'rotavg_confusion_matrix_outercv{:d}.npy'.format(cvind)))
confusion+=conf
counts = np.sum(confusion, axis=1)
nconfusion = np.array(confusion)
for r in range(numcl):
nconfusion[r,:]= nconfusion[r,:] / np.sum(confusion[r,:] )
for r in range(numcl):
print(r, 1 - nconfusion[r,r] )
print(counts)
plt.matshow(nconfusion)
plt.show()
print(globalacc, np.mean(cwacc) )
if __name__=='__main__':
runstuff()
| 35.017857 | 109 | 0.740948 | 250 | 1,961 | 5.568 | 0.348 | 0.086207 | 0.068966 | 0.079023 | 0.408764 | 0.259339 | 0.199713 | 0.093391 | 0.093391 | 0.093391 | 0 | 0.228705 | 0.125956 | 1,961 | 55 | 110 | 35.654545 | 0.583431 | 0.382968 | 0 | 0.071429 | 0 | 0 | 0.171429 | 0.164706 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.035714 | 0 | 0.071429 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fa25fe67ade7748177cb9602ea922bfde19db47 | 2,601 | py | Python | backend/server/services.py | Masyru/gisspot | 472e7d6c321c3a9db01ffdb8573e612caaf8a13b | [
"Apache-2.0"
] | null | null | null | backend/server/services.py | Masyru/gisspot | 472e7d6c321c3a9db01ffdb8573e612caaf8a13b | [
"Apache-2.0"
] | null | null | null | backend/server/services.py | Masyru/gisspot | 472e7d6c321c3a9db01ffdb8573e612caaf8a13b | [
"Apache-2.0"
] | null | null | null | from datetime import datetime
from typing import Optional, List, Dict
import sys
sys.path.append("../../")
from backend.server.pd_model import *
from backend.queue.services import add_task, stop_all_ws_task
from backend.database.main import gis_stac
__all__ = ["preview_processing", "vector_processing", "refuse_processing"]
def get_items(time_interval: List[datetime], bbox: List[float]) -> List[dict]:
# response = get(DATABASE_URL + "get_preview") # TODO: Запрос к базе данных через url
items = gis_stac.filter(time_intervals=[time_interval], bboxes=[bbox])
items = sorted(map(lambda item: item.to_dict(), items),
key=lambda item: item["datetime"])
return list(items)
def preview_interval(timestamp: int) -> List[datetime]:
return [datetime.fromtimestamp(timestamp),
datetime.fromtimestamp(timestamp + 24 * 60 * 60)]
def preview_processing(data: PreviewData) -> Dict[str, List[PreviewData]]:
items = get_items(preview_interval(data.datetime),
[data.bbox[0].lat, data.bbox[0].lon,
data.bbox[1].lat, data.bbox[1].lon])
res = []
for item in items:
for asset in item["assets"]:
if asset["type"] == "img":
res.append(PreviewData(img=asset["href"],
datetime=item["properties"]["datetime"],
bbox=item["bbox"]))
break
res.sort(key=lambda el: el.datetime)
return {"imgs": res}
def get_item_url(iid: Optional[str]) -> str:
item = gis_stac.root_catalog.get_child(iid, recursive=True)
return item.href
def vector_processing(ws_id: Optional[str],
data: Optional[VectorsRequest]) -> None:
files = (get_item_url(data.ids[0]), get_item_url(data.ids[1]))
params = [files[0], files[1], data.points, data.window_size, data.vicinity_size]
add_to_queue(ws_id=ws_id, *params)
def add_to_queue(ws_id: Optional[str],
task_type: Optional[str] = "high",
*params, **kwargs) -> None:
# request_data = {"task_type": task_type,
# "params": params,
# "kwargs": kwargs}
# if ws_id is not None:
# request_data["ws_id"] = ws_id
add_task(ws_id=ws_id, args=params,
kwargs=kwargs, task_type=task_type)
# TODO: Microservices
def delete_work_to_queue(ws_id: Optional[str]) -> None:
stop_all_ws_task(ws_id) # TODO: Microservices
def refuse_processing(ws_id: Optional[str]) -> None:
delete_work_to_queue(ws_id)
| 34.68 | 89 | 0.626682 | 337 | 2,601 | 4.623145 | 0.311573 | 0.033376 | 0.030809 | 0.038511 | 0.1181 | 0.048139 | 0 | 0 | 0 | 0 | 0 | 0.00711 | 0.242983 | 2,601 | 74 | 90 | 35.148649 | 0.784154 | 0.109958 | 0 | 0 | 0 | 0 | 0.049024 | 0 | 0 | 0 | 0 | 0.013514 | 0 | 1 | 0.170213 | false | 0 | 0.12766 | 0.021277 | 0.382979 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fa285fbe23a04fd466fb79a701560d794e0830d | 530 | py | Python | openwater/views.py | openwater/h2o-really | bb6ae678cc4f505450684a2579e3f0196236e8dc | [
"Unlicense"
] | 3 | 2015-05-25T07:41:42.000Z | 2020-05-18T05:50:40.000Z | openwater/views.py | openwater/h2o-really | bb6ae678cc4f505450684a2579e3f0196236e8dc | [
"Unlicense"
] | null | null | null | openwater/views.py | openwater/h2o-really | bb6ae678cc4f505450684a2579e3f0196236e8dc | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from django.views.generic.base import TemplateView
from diario.models import Entry
from observations.models import Measurement
class HomePageView(TemplateView):
template_name = "home.html"
def get_context_data(self, **kwargs):
context = super(HomePageView, self).get_context_data(**kwargs)
context['entry_list'] = Entry.objects.all()[:3]
context['measurement_list'] = Measurement.objects.order_by('-created_timestamp')[:5]
return context
| 27.894737 | 92 | 0.711321 | 64 | 530 | 5.75 | 0.65625 | 0.065217 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006742 | 0.160377 | 530 | 18 | 93 | 29.444444 | 0.820225 | 0.079245 | 0 | 0 | 0 | 0 | 0.109054 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3fac52a8d1c05423e274506bc3a7a1b6ae3c1a67 | 553 | py | Python | setup.py | williamgilpin/pypdb_legacy | 5c0586128415ec64804d17dd3efa0a4e52b64734 | [
"MIT"
] | null | null | null | setup.py | williamgilpin/pypdb_legacy | 5c0586128415ec64804d17dd3efa0a4e52b64734 | [
"MIT"
] | null | null | null | setup.py | williamgilpin/pypdb_legacy | 5c0586128415ec64804d17dd3efa0a4e52b64734 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
name = 'pypdb',
packages = ['pypdb'], # same as 'name'
version = '1.310',
install_requires=[
'xmltodict',
'beautifulsoup4',
'requests'
],
description = 'A Python wrapper for the RCSB Protein Data Bank (PDB) API',
author = 'William Gilpin',
author_email = 'firstname_lastname@gmail.com',
url = 'https://github.com/williamgilpin/pypdb',
download_url = 'https://github.com/williamgilpin/pypdb/tarball/0.6',
keywords = ['protein','data','RESTful','api'],
classifiers = [],
)
| 26.333333 | 76 | 0.652803 | 63 | 553 | 5.666667 | 0.761905 | 0.061625 | 0.078431 | 0.095238 | 0.196078 | 0.196078 | 0 | 0 | 0 | 0 | 0 | 0.015521 | 0.184448 | 553 | 20 | 77 | 27.65 | 0.776053 | 0.025316 | 0 | 0 | 0 | 0 | 0.472998 | 0.052142 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.055556 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3faf19d3bce07009761ea272b335839c3b750a3e | 3,485 | py | Python | webinar_part_2/2.0_nxos_existing_with_netbox/setup.py | Miradot/webinar | d6f4b54477e2dabf856406d65ad585c98066bc2f | [
"MIT"
] | 1 | 2021-12-21T21:20:48.000Z | 2021-12-21T21:20:48.000Z | webinar_part_2/2.0_nxos_existing_with_netbox/setup.py | Miradot/webinar | d6f4b54477e2dabf856406d65ad585c98066bc2f | [
"MIT"
] | null | null | null | webinar_part_2/2.0_nxos_existing_with_netbox/setup.py | Miradot/webinar | d6f4b54477e2dabf856406d65ad585c98066bc2f | [
"MIT"
] | null | null | null | import yaml
import argparse
from ansible_vault import Vault
def create_file(args):
svc = open("netbox_tools/webhook_proxy_svc.py", "r")
all_lines = svc.readlines()
svc = open("netbox_tools/webhook_proxy_svc.py", "w")
for index, content in enumerate(all_lines):
if "'token'" in content:
all_lines[index] = " 'token': (None, '{}'),\n".format(args.gitlab_token)
if "gitlab_url =" in content:
all_lines[index] = "gitlab_url = '{}'\n".format(args.gitlab_url)
if "host_vars = " in content:
all_lines[index] = "host_vars = '{}repository/files/ansible%2Fhost_vars%2F{}.yml?ref=master'\n".format(
args.gitlab_url.split("trigger")[0], args.nxos_devices[0])
svc.writelines(all_lines)
svc.close()
inv = open("netbox_tools/inventory.py", "r")
all_lines = inv.readlines()
inv = open("netbox_tools/inventory.py", "w")
for index, content in enumerate(all_lines):
if "URL =" in content:
all_lines[index] = "URL = '{}'\n".format(args.netbox_url)
if "TOKEN =" in content:
all_lines[index] = "TOKEN = '{}'\n".format(args.netbox_token)
inv.writelines(all_lines)
inv.close()
config = dict()
config["ansible_connection"] = "network_cli"
config["ansible_network_os"] = "nxos"
config["ansible_user"] = args.nxos_username
config["ansible_password"] = "{{ vault_ansible_password }}"
config["nxapi_port"] = args.nxos_port
config["netbox_url"] = args.netbox_url
with open("ansible/group_vars/all", "w") as file:
yaml.dump(config, file)
with open("ansible/hosts", "w") as file:
for device in args.nxos_devices:
file.write("\n[{}]\n".format(device))
file.write("{}\n".format(device))
data = dict({"vault_ansible_password": args.nxos_password, "netbox_token": args.netbox_token})
vault = Vault(args.ansible_vault_password)
vault.dump(data, open("ansible/group_vars/vault", "wb"))
def main():
# show arguments for the user
parser = argparse.ArgumentParser()
parser._action_groups.pop()
required = parser.add_argument_group("required arguments")
required.add_argument("-d", dest="nxos_devices",
help="NXOS Hostname or IP ; nxos1.lab.local 192.168.1.1 ...",
required=True,
nargs='+')
required.add_argument("-np", dest="nxos_port", help="NXAPI Port ; 80", required=True)
required.add_argument("-u", dest="nxos_username", help="NXOS Username", required=True)
required.add_argument("-p", dest="nxos_password", help="NXOS Password", required=True)
required.add_argument("-pv", dest="ansible_vault_password", help="Ansible Vault Password", required=True)
required.add_argument("-nu", dest="netbox_url", help="Netbox URL ; http(s)://netbox.lab.local", required=True)
required.add_argument("-nt", dest="netbox_token", help="Netbox Token ; 12345abcdef", required=True)
required.add_argument("-gu", dest="gitlab_url",
help="Gitlab Pipeline Trigger Url ; http("
"s)://gitlab.lab.local/api/v4/projects/91/trigger/pipeline",
required=True)
required.add_argument("-gt", dest="gitlab_token", help="Gitlab Pipeline Token ; 12345abcdef", required=True)
args = parser.parse_args()
create_file(args)
if __name__ == '__main__':
main()
| 37.880435 | 115 | 0.631564 | 440 | 3,485 | 4.802273 | 0.265909 | 0.041647 | 0.080928 | 0.076195 | 0.290109 | 0.187411 | 0.100331 | 0.100331 | 0.036914 | 0.036914 | 0 | 0.01023 | 0.214634 | 3,485 | 91 | 116 | 38.296703 | 0.761783 | 0.007747 | 0 | 0.030303 | 0 | 0 | 0.296586 | 0.107928 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0.075758 | 0.045455 | 0 | 0.075758 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3fb2a77029f3ade78cfb9fb154c1aca159746d1e | 1,371 | py | Python | type_page/models.py | dumel93/project- | f9ad52d9c8449953e2151fd1c13b39631113eea7 | [
"MIT"
] | null | null | null | type_page/models.py | dumel93/project- | f9ad52d9c8449953e2151fd1c13b39631113eea7 | [
"MIT"
] | null | null | null | type_page/models.py | dumel93/project- | f9ad52d9c8449953e2151fd1c13b39631113eea7 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AbstractUser
from django.db import models
from .validators import validate_bet, validate_course
class User(AbstractUser):
email = models.EmailField('email address', unique=True)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username']
class FootballType(models.Model):
first_team = models.CharField(max_length=64, null=True)
second_team = models.CharField(max_length=64, null=True)
draw = models.BooleanField(default=False)
is_ended = models.BooleanField(default=False)
date_game = models.DateTimeField()
league = models.CharField(max_length=64)
course = models.DecimalField(max_digits=5, decimal_places=2, validators=[validate_course])
comments = models.CharField(max_length=128, null=True, blank=True)
bet = models.IntegerField(validators=[validate_bet])
retired = models.BooleanField(default=False)
class Meta:
ordering = ['-date_game']
@property
def total(self):
if self.is_ended == True:
if self.retired == False:
if self.draw == True:
return self.bet * (self.course - 1)
return (self.bet) * (-1)
else:
return 0
return self.bet * (self.course - 1)
def __str__(self):
return "{} vs. {}".format(self.first_team, self.second_team)
| 33.439024 | 94 | 0.66229 | 164 | 1,371 | 5.390244 | 0.414634 | 0.067873 | 0.081448 | 0.108597 | 0.169683 | 0.140271 | 0.085973 | 0.085973 | 0 | 0 | 0 | 0.014138 | 0.226112 | 1,371 | 40 | 95 | 34.275 | 0.819039 | 0 | 0 | 0.0625 | 0 | 0 | 0.032823 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.09375 | 0.03125 | 0.8125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3fb92b9dfc88175ad72437eeb1e0e07300b64bc1 | 1,077 | py | Python | pritunl_wireguard_client/utils/token.py | SuperBo/pritunl-wireguard-cient | bed4407bf2b7811f7180d72446a2dc26d45db90d | [
"MIT"
] | 1 | 2021-02-16T07:08:46.000Z | 2021-02-16T07:08:46.000Z | pritunl_wireguard_client/utils/token.py | SuperBo/pritunl-wireguard-cient | bed4407bf2b7811f7180d72446a2dc26d45db90d | [
"MIT"
] | 1 | 2022-02-08T13:34:18.000Z | 2022-02-08T13:34:18.000Z | pritunl_wireguard_client/utils/token.py | SuperBo/pritunl-wireguard-cient | bed4407bf2b7811f7180d72446a2dc26d45db90d | [
"MIT"
] | 1 | 2021-03-18T14:34:41.000Z | 2021-03-18T14:34:41.000Z | import time
import pritunl_wireguard_client.utils.random as utils
class Tokens:
def __init__(self):
self.store = dict()
def get(self, profile_id: str, ttl: int):
"""Return token for profile_id"""
if profile_id not in self.store:
self.init(profile_id, ttl)
self.update()
return self.store[profile_id]['token']
def update(self):
"""Update out of time token"""
now = time.time()
to_update = []
for profile_id, token in self.store.items():
ttl = token['ttl']
if now - token['timestamp'] > ttl:
to_update.append(profile_id)
for profile_id in to_update:
self.init(profile_id)
def init(self, profile_id: str, ttl=None):
"""Generate new token for profile_id"""
if ttl is None:
ttl = self.store[profile_id]['ttl']
token = {
'token': utils.rand_str_complex(16),
'timestamp': time.time(),
'ttl': ttl
}
self.store[profile_id] = token
| 27.615385 | 53 | 0.557103 | 136 | 1,077 | 4.235294 | 0.308824 | 0.203125 | 0.083333 | 0.09375 | 0.253472 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002755 | 0.325905 | 1,077 | 38 | 54 | 28.342105 | 0.790634 | 0.079851 | 0 | 0 | 0 | 0 | 0.037949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fba31ce0598188d4268a711f23cf7223b613738 | 2,299 | py | Python | moabb/analysis/__init__.py | plcrodrigues/moabb | aa4274fe7905631864e854c121c92e1927061f29 | [
"BSD-3-Clause"
] | 321 | 2017-06-03T16:14:45.000Z | 2022-03-28T17:43:59.000Z | moabb/analysis/__init__.py | plcrodrigues/moabb | aa4274fe7905631864e854c121c92e1927061f29 | [
"BSD-3-Clause"
] | 223 | 2017-06-03T17:41:57.000Z | 2022-03-29T09:07:44.000Z | moabb/analysis/__init__.py | girafe-ai/moabb | 78bbb48a2a0058b0725ebeba1ba1e3203f0eacd5 | [
"BSD-3-Clause"
] | 118 | 2017-06-03T18:36:35.000Z | 2022-03-16T06:22:02.000Z | import logging
import os
import platform
from datetime import datetime
from moabb.analysis import plotting as plt
from moabb.analysis.meta_analysis import ( # noqa: E501
compute_dataset_statistics,
find_significant_differences,
)
from moabb.analysis.results import Results # noqa: F401
log = logging.getLogger(__name__)
def analyze(results, out_path, name="analysis", plot=False):
"""Analyze results.
Given a results dataframe, generates a folder with
results and a dataframe of the exact data used to generate those results,
aswell as introspection to return information on the computer
parameters
----------
out_path: location to store analysis folder
results: Dataframe generated from Results object
path: string/None
plot: whether to plot results
Either path or results is necessary
"""
# input checks #
if not isinstance(out_path, str):
raise ValueError("Given out_path argument is not string")
elif not os.path.isdir(out_path):
raise IOError("Given directory does not exist")
else:
analysis_path = os.path.join(out_path, name)
unique_ids = [plt._simplify_names(x) for x in results.pipeline.unique()]
simplify = True
print(unique_ids)
print(set(unique_ids))
if len(unique_ids) != len(set(unique_ids)):
log.warning("Pipeline names are too similar, turning off name shortening")
simplify = False
os.makedirs(analysis_path, exist_ok=True)
# TODO: no good cross-platform way of recording CPU info?
with open(os.path.join(analysis_path, "info.txt"), "a") as f:
dt = datetime.now()
f.write("Date: {:%Y-%m-%d}\n Time: {:%H:%M}\n".format(dt, dt))
f.write("System: {}\n".format(platform.system()))
f.write("CPU: {}\n".format(platform.processor()))
results.to_csv(os.path.join(analysis_path, "data.csv"))
stats = compute_dataset_statistics(results)
stats.to_csv(os.path.join(analysis_path, "stats.csv"))
P, T = find_significant_differences(stats)
if plot:
fig, color_dict = plt.score_plot(results)
fig.savefig(os.path.join(analysis_path, "scores.pdf"))
fig = plt.summary_plot(P, T, simplify=simplify)
fig.savefig(os.path.join(analysis_path, "ordering.pdf"))
| 32.380282 | 82 | 0.68769 | 320 | 2,299 | 4.81875 | 0.434375 | 0.027237 | 0.038911 | 0.058366 | 0.090791 | 0.076524 | 0.076524 | 0 | 0 | 0 | 0 | 0.003263 | 0.200087 | 2,299 | 70 | 83 | 32.842857 | 0.835237 | 0.217921 | 0 | 0 | 0 | 0 | 0.137199 | 0 | 0 | 0 | 0 | 0.014286 | 0 | 1 | 0.025 | false | 0 | 0.175 | 0 | 0.2 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fbfff6f4fb61ab0d9c6aaa91429d122a201c789 | 4,310 | py | Python | python/clockwork/ena/submit_files.py | jeff-k/clockwork | d6e9ac80bb46ec806acd7db85ed5c3430c3f2438 | [
"MIT"
] | 18 | 2018-01-18T13:02:10.000Z | 2022-03-25T05:56:02.000Z | python/clockwork/ena/submit_files.py | jeff-k/clockwork | d6e9ac80bb46ec806acd7db85ed5c3430c3f2438 | [
"MIT"
] | 54 | 2018-01-25T15:47:25.000Z | 2022-03-30T17:02:23.000Z | python/clockwork/ena/submit_files.py | jeff-k/clockwork | d6e9ac80bb46ec806acd7db85ed5c3430c3f2438 | [
"MIT"
] | 15 | 2018-01-18T11:21:33.000Z | 2022-03-30T16:55:48.000Z | import configparser
import random
import string
from clockwork import utils
class Error(Exception):
pass
def _make_dummy_success_receipt(outfile, object_type):
accession = "".join(
[random.choice(string.ascii_uppercase + string.digits) for _ in range(10)]
)
with open(outfile, "w") as f:
print(
r"""<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="receipt.xsl"?>
<RECEIPT receiptDate="2017-08-31T11:07:50.251+01:00" submissionFile="submission.xml" success="true">
<"""
+ object_type.upper()
+ r''' accession="'''
+ accession
+ r"""" alias="alias" status="PRIVATE">
<EXT_ID accession="SAMEA123456789" type="biosample"/>
</"""
+ object_type.upper()
+ r""">
<SUBMISSION accession="ERA1234567" alias="alias 42"/>
<MESSAGES>
<INFO>Submission has been committed.</INFO>
<INFO>This submission is a TEST submission and will be discarded within 24 hours</INFO>
</MESSAGES>
<ACTIONS>ADD</ACTIONS>
</RECEIPT>
""",
file=f,
)
def _make_dummy_fail_receipt(outfile):
with open(outfile, "w") as f:
print(
r"""<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="receipt.xsl"?>
<RECEIPT receiptDate="2017-09-01T14:13:19.573+01:00" submissionFile="submission.xml" success="false">
<SUBMISSION alias="Submission alias run 1"/>
<MESSAGES>
<ERROR>In submission, alias:"Submission alias run 1", accession:"". The object being added already exists in the submission account.</ERROR>
<INFO>Submission has been rolled back.</INFO>
<INFO>This submission is a TEST submission and will be discarded within 24 hours</INFO>
</MESSAGES>
<ACTIONS>ADD</ACTIONS>
</RECEIPT>
""",
file=f,
)
def parse_config_file(ini_file):
config = configparser.ConfigParser()
try:
config.read(ini_file)
except:
raise Error("Error! Parsing config file " + ini_file)
if "ena_login" not in config:
raise Error("Error! [ena_login] section not found in config file " + ini_file)
if not ("user" in config["ena_login"] and "password" in config["ena_login"]):
raise Error(
"Error! user and password not found in [ena_login] section of config file "
+ ini_file
)
return config["ena_login"]["user"], config["ena_login"]["password"]
def submit_xml_files(
ini_file,
outfile,
files=None,
use_test_server=False,
unit_test=None,
unit_test_obj_type=None,
):
username, password = parse_config_file(ini_file)
if files is None:
files_command = None
else:
files_command_list = [
'-F "' + key + "=@" + value + '"' for key, value in files.items()
]
files_command = " ".join(files_command_list)
if use_test_server:
url = "https://www-test.ebi.ac.uk/ena/submit/drop-box/submit/?auth=ENA%20"
else:
url = "https://www.ebi.ac.uk/ena/submit/drop-box/submit/?auth=ENA%20"
command_list = [
"curl -k",
files_command,
'"' + url + username + "%20" + password + '"',
">",
outfile,
]
command = " ".join([x for x in command_list if x is not None])
if unit_test is None:
utils.syscall(command)
elif unit_test == "success":
_make_dummy_success_receipt(outfile, unit_test_obj_type)
elif unit_test == "fail":
_make_dummy_fail_receipt(outfile)
else:
raise Error("unit_test must be None, success, or fail. Got: " + unit_test)
def upload_file_to_ena_ftp(ini_file, filename, uploaded_name):
# paranoid about passwords and running ps? Looks like curl is ok:
# https://unix.stackexchange.com/questions/385339/how-does-curl-protect-a-password-from-appearing-in-ps-output
# "wipe the next argument out so that the username:password isn't
# displayed in the system process list"
username, password = parse_config_file(ini_file)
cmd = " ".join(
[
"curl -T",
filename,
"ftp://webin.ebi.ac.uk/" + uploaded_name,
"--user",
username + ":" + password,
]
)
utils.syscall(cmd)
| 31.459854 | 148 | 0.615777 | 548 | 4,310 | 4.702555 | 0.34854 | 0.024447 | 0.030268 | 0.039581 | 0.356228 | 0.287932 | 0.232053 | 0.202561 | 0.202561 | 0.202561 | 0 | 0.026634 | 0.250812 | 4,310 | 136 | 149 | 31.691176 | 0.771446 | 0.063573 | 0 | 0.151515 | 0 | 0.020202 | 0.267456 | 0.028678 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050505 | false | 0.080808 | 0.040404 | 0 | 0.111111 | 0.020202 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3fc217a661811f79c1778a9b4610e13c10ae7b95 | 1,106 | py | Python | src/backend/marsha/core/migrations/0019_auto_20200609_0820.py | marin-leonard/marsha | b5d6bf98fda27acd3a08577b82dd98bcd39bfd8d | [
"MIT"
] | 64 | 2018-04-26T23:46:14.000Z | 2022-03-26T21:32:23.000Z | src/backend/marsha/core/migrations/0019_auto_20200609_0820.py | marin-leonard/marsha | b5d6bf98fda27acd3a08577b82dd98bcd39bfd8d | [
"MIT"
] | 533 | 2018-04-17T10:17:24.000Z | 2022-03-31T13:07:49.000Z | src/backend/marsha/core/migrations/0019_auto_20200609_0820.py | marin-leonard/marsha | b5d6bf98fda27acd3a08577b82dd98bcd39bfd8d | [
"MIT"
] | 16 | 2018-09-21T12:52:34.000Z | 2021-11-29T16:44:51.000Z | # Generated by Django 3.0.6 on 2020-05-19 14:32
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("core", "0018_auto_20200603_0620"),
]
operations = [
migrations.AddField(
model_name="video",
name="live_info",
field=models.JSONField(
blank=True,
help_text="Information needed to manage live streaming",
null=True,
verbose_name="Live info",
),
),
migrations.AddField(
model_name="video",
name="live_state",
field=models.CharField(
blank=True,
choices=[
("idle", "idle"),
("starting", "starting"),
("running", "running"),
("stopped", "stopped"),
],
help_text="state of the live mode.",
max_length=20,
null=True,
verbose_name="live state",
),
),
]
| 26.97561 | 72 | 0.454792 | 94 | 1,106 | 5.223404 | 0.617021 | 0.065173 | 0.093686 | 0.10998 | 0.256619 | 0.162933 | 0.162933 | 0 | 0 | 0 | 0 | 0.052716 | 0.433996 | 1,106 | 40 | 73 | 27.65 | 0.731629 | 0.040687 | 0 | 0.352941 | 1 | 0 | 0.182247 | 0.021719 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029412 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fc96ed41bb550e4bca1d28b2bc1be7197a1cd5d | 5,620 | py | Python | tests/test_identity.py | multiscale/ymmsl-python | f8a63232823ad655530a83570443d9c045ef9929 | [
"Apache-2.0"
] | 1 | 2018-12-13T18:09:09.000Z | 2018-12-13T18:09:09.000Z | tests/test_identity.py | multiscale/ymmsl-python | f8a63232823ad655530a83570443d9c045ef9929 | [
"Apache-2.0"
] | 10 | 2018-11-13T16:12:38.000Z | 2021-07-21T13:16:43.000Z | tests/test_identity.py | multiscale/ymmsl-python | f8a63232823ad655530a83570443d9c045ef9929 | [
"Apache-2.0"
] | null | null | null | from ymmsl import Identifier, Reference
import pytest
import yatiml
def test_create_identifier() -> None:
part = Identifier('testing')
assert str(part) == 'testing'
part = Identifier('CapiTaLs')
assert str(part) == 'CapiTaLs'
part = Identifier('under_score')
assert str(part) == 'under_score'
part = Identifier('_underscore')
assert str(part) == '_underscore'
part = Identifier('digits123')
assert str(part) == 'digits123'
with pytest.raises(ValueError):
Identifier('1initialdigit')
with pytest.raises(ValueError):
Identifier('test.period')
with pytest.raises(ValueError):
Identifier('test-hyphen')
with pytest.raises(ValueError):
Identifier('test space')
with pytest.raises(ValueError):
Identifier('test/slash')
def test_compare_identifier() -> None:
assert Identifier('test') == Identifier('test')
assert Identifier('test1') != Identifier('test2')
assert Identifier('test') == 'test'
assert 'test' == Identifier('test') # pylint: disable=C0122
assert Identifier('test') != 'test2'
assert 'test2' != Identifier('test') # pylint: disable=C0122
def test_identifier_dict_key() -> None:
test_dict = {Identifier('test'): 1}
assert test_dict[Identifier('test')] == 1
def test_create_reference() -> None:
test_ref = Reference('_testing')
assert str(test_ref) == '_testing'
assert len(test_ref) == 1
assert isinstance(test_ref[0], Identifier)
assert str(test_ref[0]) == '_testing'
with pytest.raises(ValueError):
Reference('1test')
test_ref = Reference('test.testing')
assert len(test_ref) == 2
assert isinstance(test_ref[0], Identifier)
assert str(test_ref[0]) == 'test'
assert isinstance(test_ref[1], Identifier)
assert str(test_ref[1]) == 'testing'
assert str(test_ref) == 'test.testing'
test_ref = Reference('test[12]')
assert len(test_ref) == 2
assert isinstance(test_ref[0], Identifier)
assert str(test_ref[0]) == 'test'
assert isinstance(test_ref[1], int)
assert test_ref[1] == 12
assert str(test_ref) == 'test[12]'
test_ref = Reference('test[12].testing.ok.index[3][5]')
assert len(test_ref) == 7
assert isinstance(test_ref[0], Identifier)
assert str(test_ref[0]) == 'test'
assert isinstance(test_ref[1], int)
assert test_ref[1] == 12
assert isinstance(test_ref[2], Identifier)
assert str(test_ref[2]) == 'testing'
assert isinstance(test_ref[3], Identifier)
assert str(test_ref[3]) == 'ok'
assert isinstance(test_ref[4], Identifier)
assert str(test_ref[4]) == 'index'
assert isinstance(test_ref[5], int)
assert test_ref[5] == 3
assert isinstance(test_ref[6], int)
assert test_ref[6] == 5
assert str(test_ref) == 'test[12].testing.ok.index[3][5]'
with pytest.raises(ValueError):
Reference([4])
with pytest.raises(ValueError):
Reference([3, Identifier('test')])
with pytest.raises(ValueError):
Reference('ua",.u8[')
with pytest.raises(ValueError):
Reference('test[4')
with pytest.raises(ValueError):
Reference('test4]')
with pytest.raises(ValueError):
Reference('test[_t]')
with pytest.raises(ValueError):
Reference('testing_{3}')
with pytest.raises(ValueError):
Reference('test.(x)')
with pytest.raises(ValueError):
Reference('[3]test')
with pytest.raises(ValueError):
Reference('[4].test')
def test_reference_slicing() -> None:
test_ref = Reference('test[12].testing.ok.index[3][5]')
assert test_ref[0] == 'test'
assert test_ref[1] == 12
assert test_ref[3] == 'ok'
assert test_ref[:3] == 'test[12].testing'
assert test_ref[2:] == 'testing.ok.index[3][5]'
with pytest.raises(RuntimeError):
test_ref[0] = 'test2'
with pytest.raises(ValueError):
test_ref[1:] # pylint: disable=pointless-statement
def test_reference_dict_key() -> None:
test_dict = {Reference('test[4]'): 1}
assert test_dict[Reference('test[4]')] == 1
def test_reference_equivalence() -> None:
assert Reference('test.test[3]') == Reference('test.test[3]')
assert Reference('test.test[3]') != Reference('test1.test[3]')
assert Reference('test.test[3]') == 'test.test[3]'
assert Reference('test.test[3]') != 'test1.test[3]'
assert 'test.test[3]' == Reference('test.test[3]') # pylint: disable=C0122
assert 'test1.test[3]' != Reference(
'test.test[3]') # pylint: disable=C0122
def test_reference_concatenation() -> None:
assert Reference('test') + Reference('test2') == 'test.test2'
assert Reference('test') + Identifier('test2') == 'test.test2'
assert Reference('test') + 5 == 'test[5]'
assert Reference('test') + [Identifier('test2'), 5] == 'test.test2[5]'
assert Reference('test[5]') + Reference('test2[3]') == 'test[5].test2[3]'
assert Reference('test[5]') + Identifier('test2') == 'test[5].test2'
assert Reference('test[5]') + 3 == 'test[5][3]'
assert (Reference('test[5]') + [3, Identifier('test2')] ==
'test[5][3].test2')
def test_reference_io() -> None:
load_reference = yatiml.load_function(Reference, Identifier)
text = 'test[12]'
doc = load_reference(text)
assert str(doc[0]) == 'test'
assert doc[1] == 12
dump_reference = yatiml.dumps_function(Identifier, Reference)
doc = Reference('test[12].testing.ok.index[3][5]')
text = dump_reference(doc)
assert text == 'test[12].testing.ok.index[3][5]\n...\n'
| 30.053476 | 79 | 0.637011 | 708 | 5,620 | 4.939266 | 0.100282 | 0.088075 | 0.082356 | 0.126394 | 0.590506 | 0.416357 | 0.22362 | 0.201029 | 0.15499 | 0.131541 | 0 | 0.03648 | 0.195196 | 5,620 | 186 | 80 | 30.215054 | 0.736679 | 0.021886 | 0 | 0.244444 | 0 | 0 | 0.16955 | 0.033509 | 0 | 0 | 0 | 0 | 0.496296 | 1 | 0.066667 | false | 0 | 0.022222 | 0 | 0.088889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fca93e41547a1ae70fbe0d1268f42c02b9128db | 4,237 | py | Python | tiledb/cloud/_results/results.py | TileDB-Inc/TileDB-Cloud-Py | e73f6e0ae3fc595218abd3be606c68f62ad5ac9b | [
"MIT"
] | 4 | 2019-12-04T23:19:35.000Z | 2021-06-21T21:42:53.000Z | tiledb/cloud/_results/results.py | TileDB-Inc/TileDB-Cloud-Py | e73f6e0ae3fc595218abd3be606c68f62ad5ac9b | [
"MIT"
] | 106 | 2019-11-07T22:40:43.000Z | 2022-03-29T22:31:18.000Z | tiledb/cloud/_results/results.py | TileDB-Inc/TileDB-Cloud-Py | e73f6e0ae3fc595218abd3be606c68f62ad5ac9b | [
"MIT"
] | 1 | 2020-10-04T18:54:37.000Z | 2020-10-04T18:54:37.000Z | """Things that help you keep track of task results and how to decode them."""
import abc
import dataclasses
import threading
import uuid
from concurrent import futures
from typing import Callable, Generic, Optional, TypeVar, Union
import urllib3
from tiledb.cloud import rest_api
from tiledb.cloud import tiledb_cloud_error as tce
from tiledb.cloud._results import decoders
from tiledb.cloud._results import stored_params
TASK_ID_HEADER = "X-TILEDB-CLOUD-TASK-ID"
_T = TypeVar("_T")
class Result(Generic[_T], metaclass=abc.ABCMeta):
@abc.abstractmethod
def get(self) -> _T:
"""Gets the value stored in this Result."""
raise NotImplementedError()
def to_stored_param(self) -> stored_params.StoredParam:
raise TypeError("This result cannot be converted to a StoredParam.")
# Not frozen to avoid generating unsafe methods like `__hash__`,
# but you should still *treat* these subclasses like they're frozen.
@dataclasses.dataclass()
class LocalResult(Result[_T], Generic[_T]):
"""A result from running a function in a Node locally."""
it: _T
def get(self) -> _T:
return self.it
@classmethod
def wrap(cls, func: Callable[..., _T]) -> Callable[..., Result[_T]]:
return lambda *args, **kwargs: cls(func(*args, **kwargs))
@dataclasses.dataclass()
class RemoteResult(Result[_T], Generic[_T]):
"""A response from running a UDF remotely."""
def get(self) -> _T:
"""Decodes the response from the server."""
try:
return self.decoder.decode(self.body)
except ValueError as ve:
inner_msg = f": {ve.args[0]}" if ve.args else ""
raise tce.TileDBCloudError(
f"Error decoding response from TileDB Cloud{inner_msg}"
) from ve
def to_stored_param(self) -> stored_params.StoredParam:
if not (self.results_stored and self.task_id):
raise ValueError("A result must be stored to create a StoredParam.")
return stored_params.StoredParam(
decoder=self.decoder,
task_id=self.task_id,
)
# The HTTP content of the body that was returned.
body: bytes
# The server-generated UUID of the task.
task_id: Optional[uuid.UUID]
# The decoder that was used to decode the results.
decoder: decoders.AbstractDecoder[_T]
# True if the results were stored, false otherwise.
results_stored: bool
class AsyncResult(Generic[_T]):
"""Asynchronous wrapper for compatibility with the old array.TaskResult."""
def __init__(self, future: "futures.Future[Result[_T]]"):
"""Creates a new AsyncResponse wrapping the given Future."""
self._future = future
self._id_lock = threading.Lock()
self._task_id: Optional[uuid.UUID] = None
self._future.add_done_callback(self._set_task_id)
def get(self, timeout: Optional[float] = None) -> _T:
"""Gets the result from this response, with Future's timeout rules."""
return self._future.result(timeout).get()
@property
def task_id(self) -> Optional[uuid.UUID]:
"""Gets the task ID, or None if not complete or failed with no ID."""
with self._id_lock:
return self._task_id
def _set_task_id(self, _):
"""Sets the task ID once the Future has completed."""
try:
res = self._future.result()
except rest_api.ApiException as exc:
with self._id_lock:
self._task_id = extract_task_id(exc)
except: # noqa: E722 We don't care about other exceptions, period.
pass
else:
with self._id_lock:
self._task_id = res.task_id
def extract_task_id(
thing: Union[rest_api.ApiException, urllib3.HTTPResponse],
) -> Optional[uuid.UUID]:
"""Pulls the task ID out of a response or an exception."""
id_hdr = thing.headers and thing.headers.get(TASK_ID_HEADER)
return _maybe_uuid(id_hdr)
def _maybe_uuid(id_str: Optional[str]) -> Optional[uuid.UUID]:
"""Parses a hex string into a UUID if present and valid."""
if not id_str:
return None
try:
return uuid.UUID(hex=id_str)
except ValueError:
return None
| 32.343511 | 80 | 0.660609 | 573 | 4,237 | 4.717277 | 0.342059 | 0.044395 | 0.022198 | 0.012209 | 0.098409 | 0.049575 | 0.049575 | 0.031817 | 0 | 0 | 0 | 0.001862 | 0.23932 | 4,237 | 130 | 81 | 32.592308 | 0.836798 | 0.241208 | 0 | 0.180723 | 0 | 0 | 0.067662 | 0.015248 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144578 | false | 0.012048 | 0.13253 | 0.024096 | 0.506024 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3fcf781a8eea0228e1eb54ee4f6768c262da5120 | 748 | py | Python | rasa/train_init.py | BarnikRay/chatbot | 4191f3064f6dae90b108ecf4e08130cda39ed370 | [
"MIT"
] | 1 | 2020-05-03T07:30:18.000Z | 2020-05-03T07:30:18.000Z | rasa/train_init.py | privykurura1/chatbot | 4191f3064f6dae90b108ecf4e08130cda39ed370 | [
"MIT"
] | 8 | 2019-12-04T23:20:56.000Z | 2022-02-10T07:47:03.000Z | rasa/train_init.py | privykurura1/chatbot | 4191f3064f6dae90b108ecf4e08130cda39ed370 | [
"MIT"
] | 1 | 2021-11-14T07:45:24.000Z | 2021-11-14T07:45:24.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals
import logging
from rasa_core.agent import Agent
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy
def run_faq(domain_file="config/faq_domain.yml",
training_data_file='data/stories.md'):
agent = Agent(domain_file,
policies=[MemoizationPolicy(max_history=2), KerasPolicy(max_history=3, epochs=100, batch_size=50)])
data = agent.load_data(training_data_file)
model_path = './models/dialogue'
agent.train(data)
agent.persist(model_path)
if __name__ == '__main__':
logging.basicConfig(level="INFO")
run_faq()
| 28.769231 | 117 | 0.758021 | 97 | 748 | 5.42268 | 0.515464 | 0.057034 | 0.091255 | 0.076046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011058 | 0.153743 | 748 | 25 | 118 | 29.92 | 0.819905 | 0 | 0 | 0 | 0 | 0 | 0.086898 | 0.028075 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.388889 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3fd2e3175b855481fd32ee5d4ebc2f50e3468d9a | 4,101 | py | Python | Tests/Methods/Mesh/test_get_field.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | null | null | null | Tests/Methods/Mesh/test_get_field.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | null | null | null | Tests/Methods/Mesh/test_get_field.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import pytest
import numpy as np
from unittest import TestCase
from SciDataTool import DataTime, Data1D, DataLinspace, VectorField
from pyleecan.Classes.SolutionData import SolutionData
from pyleecan.Classes.SolutionMat import SolutionMat
from pyleecan.Classes.SolutionVector import SolutionVector
@pytest.mark.MeshSol
@pytest.mark.METHODS
class Test_get_field(TestCase):
""" Tests for get_field method from Solution classes"""
def test_SolutionMat(self):
DELTA = 1e-10
solution = SolutionMat()
solution.field = np.array([[1, 2, 3], [2, 3, 4]])
solution.axis_name = ["time", "indice"]
solution.axis_size = [2, 3]
field = solution.get_field()
correction = np.array([[1, 2, 3], [2, 3, 4]])
result = np.sum(np.abs(correction - field))
msg = "Wrong result: returned " + str(field) + ", expected: " + str(correction)
self.assertAlmostEqual(result, 0, msg=msg, delta=DELTA)
field = solution.get_field("time[0]", "indice[1,2]")
correction = np.array([[2, 3]])
result = np.sum(np.abs(correction - field))
msg = "Wrong result: returned " + str(field) + ", expected: " + str(correction)
self.assertAlmostEqual(result, 0, msg=msg, delta=DELTA)
def test_SolutionVector(self):
DELTA = 1e-10
Indices_Cell = Data1D(name="indice", values=[0, 1, 2, 4], is_components=True)
Time = DataLinspace(
name="time",
unit="s",
initial=0,
final=1,
number=10,
)
H = np.ones((10, 4, 2))
# Store the results for H
componentsH = {}
Hx_data = DataTime(
name="Magnetic Field Hx",
unit="A/m",
symbol="Hx",
axes=[Time, Indices_Cell],
values=H[:, :, 0],
)
componentsH["comp_x"] = Hx_data
Hy_data = DataTime(
name="Magnetic Field Hy",
unit="A/m",
symbol="Hy",
axes=[Time, Indices_Cell],
values=H[:, :, 1],
)
componentsH["comp_y"] = Hy_data
vecH = VectorField(name="Magnetic Field", symbol="H", components=componentsH)
solution = SolutionVector(field=vecH, type_cell="triangle", label="H")
field = solution.get_field()
correction = np.ones((10, 4, 2))
result = np.sum(np.abs(correction - field))
msg = "Wrong result: returned " + str(field) + ", expected: " + str(correction)
self.assertAlmostEqual(result, 0, msg=msg, delta=DELTA)
field = solution.get_field("time[0]", "indice[1,2]")
correction = np.ones((2, 2))
result = np.sum(np.abs(correction - field))
msg = "Wrong result: returned " + str(field) + ", expected: " + str(correction)
self.assertAlmostEqual(result, 0, msg=msg, delta=DELTA)
def test_SolutionData(self):
DELTA = 1e-10
Indices_Cell = Data1D(name="indice", values=[0, 1, 2, 4], is_components=True)
Time = DataLinspace(
name="time",
unit="s",
initial=0,
final=1,
number=10,
)
# Store the results for H
H = DataTime(
name="Magnetic Field Hx",
unit="A/m",
symbol="Hx",
axes=[Time, Indices_Cell],
values=np.ones((10, 4)),
)
solution = SolutionData(field=H, type_cell="triangle", label="H")
field = solution.get_field()
correction = np.ones((10, 4))
result = np.sum(np.abs(correction - field))
msg = "Wrong result: returned " + str(field) + ", expected: " + str(correction)
self.assertAlmostEqual(result, 0, msg=msg, delta=DELTA)
field = solution.get_field("time[0]", "indice[1,2]")
correction = correction[0, 1:3]
result = np.sum(np.abs(correction - field))
msg = "Wrong result: returned " + str(field) + ", expected: " + str(correction)
self.assertAlmostEqual(result, 0, msg=msg, delta=DELTA)
| 32.039063 | 87 | 0.566935 | 480 | 4,101 | 4.783333 | 0.1875 | 0.027875 | 0.041812 | 0.054878 | 0.68162 | 0.646341 | 0.62108 | 0.62108 | 0.610192 | 0.610192 | 0 | 0.027864 | 0.291149 | 4,101 | 127 | 88 | 32.291339 | 0.761954 | 0.029017 | 0 | 0.542553 | 0 | 0 | 0.102441 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 1 | 0.031915 | false | 0 | 0.074468 | 0 | 0.117021 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fd8c6ef2dca4f5f0372db69829883a2a443d40b | 4,536 | py | Python | tests/ref_test.py | lykme516/pykka | d66b0c49658fc0e7c4e1ae46a0f9c50c7e964ca5 | [
"Apache-2.0"
] | 1 | 2021-01-03T09:25:23.000Z | 2021-01-03T09:25:23.000Z | tests/ref_test.py | hujunxianligong/pykka | d66b0c49658fc0e7c4e1ae46a0f9c50c7e964ca5 | [
"Apache-2.0"
] | null | null | null | tests/ref_test.py | hujunxianligong/pykka | d66b0c49658fc0e7c4e1ae46a0f9c50c7e964ca5 | [
"Apache-2.0"
] | null | null | null | import time
import unittest
from pykka import ActorDeadError, ThreadingActor, ThreadingFuture, Timeout
class AnActor(object):
def __init__(self, received_message):
super(AnActor, self).__init__()
self.received_message = received_message
def on_receive(self, message):
if message.get('command') == 'ping':
self.sleep(0.01)
return 'pong'
else:
self.received_message.set(message)
class RefTest(object):
def setUp(self):
self.received_message = self.future_class()
self.ref = self.AnActor.start(self.received_message)
def tearDown(self):
self.ref.stop()
def test_repr_is_wrapped_in_lt_and_gt(self):
result = repr(self.ref)
self.assertTrue(result.startswith('<'))
self.assertTrue(result.endswith('>'))
def test_repr_reveals_that_this_is_a_ref(self):
self.assertTrue('ActorRef' in repr(self.ref))
def test_repr_contains_actor_class_name(self):
self.assertTrue('AnActor' in repr(self.ref))
def test_repr_contains_actor_urn(self):
self.assertTrue(self.ref.actor_urn in repr(self.ref))
def test_str_contains_actor_class_name(self):
self.assertTrue('AnActor' in str(self.ref))
def test_str_contains_actor_urn(self):
self.assertTrue(self.ref.actor_urn in str(self.ref))
def test_is_alive_returns_true_for_running_actor(self):
self.assertTrue(self.ref.is_alive())
def test_is_alive_returns_false_for_dead_actor(self):
self.ref.stop()
self.assertFalse(self.ref.is_alive())
def test_stop_returns_true_if_actor_is_stopped(self):
self.assertTrue(self.ref.stop())
def test_stop_does_not_stop_already_dead_actor(self):
self.ref.stop()
try:
self.assertFalse(self.ref.stop())
except ActorDeadError:
self.fail('Should never raise ActorDeadError')
def test_tell_delivers_message_to_actors_custom_on_receive(self):
self.ref.tell({'command': 'a custom message'})
self.assertEqual(
{'command': 'a custom message'}, self.received_message.get())
def test_tell_fails_if_actor_is_stopped(self):
self.ref.stop()
try:
self.ref.tell({'command': 'a custom message'})
self.fail('Should raise ActorDeadError')
except ActorDeadError as exception:
self.assertEqual('%s not found' % self.ref, str(exception))
def test_ask_blocks_until_response_arrives(self):
result = self.ref.ask({'command': 'ping'})
self.assertEqual('pong', result)
def test_ask_can_timeout_if_blocked_too_long(self):
try:
self.ref.ask({'command': 'ping'}, timeout=0)
self.fail('Should raise Timeout exception')
except Timeout:
pass
def test_ask_can_return_future_instead_of_blocking(self):
future = self.ref.ask({'command': 'ping'}, block=False)
self.assertEqual('pong', future.get())
def test_ask_fails_if_actor_is_stopped(self):
self.ref.stop()
try:
self.ref.ask({'command': 'ping'})
self.fail('Should raise ActorDeadError')
except ActorDeadError as exception:
self.assertEqual('%s not found' % self.ref, str(exception))
def test_ask_nonblocking_fails_future_if_actor_is_stopped(self):
self.ref.stop()
future = self.ref.ask({'command': 'ping'}, block=False)
try:
future.get()
self.fail('Should raise ActorDeadError')
except ActorDeadError as exception:
self.assertEqual('%s not found' % self.ref, str(exception))
def ConcreteRefTest(actor_class, future_class, sleep_function):
class C(RefTest, unittest.TestCase):
class AnActor(AnActor, actor_class):
def sleep(self, seconds):
sleep_function(seconds)
C.__name__ = '%sRefTest' % (actor_class.__name__,)
C.future_class = future_class
return C
ThreadingActorRefTest = ConcreteRefTest(
ThreadingActor, ThreadingFuture, time.sleep)
try:
import gevent
from pykka.gevent import GeventActor, GeventFuture
GeventActorRefTest = ConcreteRefTest(
GeventActor, GeventFuture, gevent.sleep)
except ImportError:
pass
try:
import eventlet
from pykka.eventlet import EventletActor, EventletFuture
EventletActorRefTest = ConcreteRefTest(
EventletActor, EventletFuture, eventlet.sleep)
except ImportError:
pass
| 31.068493 | 74 | 0.668651 | 549 | 4,536 | 5.262295 | 0.218579 | 0.070267 | 0.03046 | 0.031153 | 0.417792 | 0.377293 | 0.305296 | 0.293527 | 0.236068 | 0.186224 | 0 | 0.001141 | 0.226852 | 4,536 | 145 | 75 | 31.282759 | 0.82264 | 0 | 0 | 0.287037 | 0 | 0 | 0.079365 | 0 | 0 | 0 | 0 | 0 | 0.157407 | 1 | 0.212963 | false | 0.027778 | 0.083333 | 0 | 0.351852 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fe6078d322f58b763a2e00d815b964e8911f9bf | 885 | py | Python | PyTrinamic/modules/TMC_EvalShield.py | trinamic-AA/PyTrinamic | b054f4baae8eb6d3f5d2574cf69c232f66abb4ee | [
"MIT"
] | 37 | 2019-01-13T11:08:45.000Z | 2022-03-25T07:18:15.000Z | PyTrinamic/modules/TMC_EvalShield.py | AprDec/PyTrinamic | a9db10071f8fbeebafecb55c619e5893757dd0ce | [
"MIT"
] | 56 | 2019-02-25T02:48:27.000Z | 2022-03-31T08:45:34.000Z | PyTrinamic/modules/TMC_EvalShield.py | AprDec/PyTrinamic | a9db10071f8fbeebafecb55c619e5893757dd0ce | [
"MIT"
] | 26 | 2019-01-14T05:20:16.000Z | 2022-03-08T13:27:35.000Z | '''
Created on 18.03.2020
@author: LK
'''
class TMC_EvalShield(object):
"""
Arguments:
connection:
Type: connection interface
The connection interface used for this module.
shield:
Type: class
The EvalShield class used for every axis on this module.
For every axis connected, an instance of this class will be created,
which can be used later.
"""
def __init__(self, connection, shield, moduleID=1):
self.GPs = _GPs
self.shields = []
while(not(connection.globalParameter(self.GPs.attachedAxes, 0, moduleID))):
pass
attachedAxes = connection.globalParameter(self.GPs.attachedAxes, 0, moduleID)
for i in range(attachedAxes):
self.shields.append(shield(connection, i, moduleID))
class _GPs():
attachedAxes = 6
| 26.818182 | 85 | 0.615819 | 99 | 885 | 5.434343 | 0.505051 | 0.039033 | 0.04461 | 0.118959 | 0.197026 | 0.197026 | 0.197026 | 0 | 0 | 0 | 0 | 0.019293 | 0.297175 | 885 | 32 | 86 | 27.65625 | 0.845659 | 0.40452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.090909 | 0 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3fe82d5a85daeba3d97651074742e05e1165543c | 1,697 | py | Python | test/vizier/test_nodes.py | robotarium/vizier | 6ce2be4fc0edcdaf5ba246094c2e79bff32e219d | [
"MIT"
] | 11 | 2016-08-18T20:37:06.000Z | 2019-11-24T17:34:27.000Z | test/vizier/test_nodes.py | robotarium/vizier | 6ce2be4fc0edcdaf5ba246094c2e79bff32e219d | [
"MIT"
] | 6 | 2018-10-07T17:01:40.000Z | 2019-11-24T17:41:16.000Z | test/vizier/test_nodes.py | robotarium/vizier | 6ce2be4fc0edcdaf5ba246094c2e79bff32e219d | [
"MIT"
] | 3 | 2016-08-22T13:58:24.000Z | 2018-06-07T21:06:35.000Z | import json
import vizier.node as node
import unittest
class TestVizierNodes(unittest.TestCase):
def setUp(self):
path_a = '../config/node_desc_a.json'
path_b = '../config/node_desc_b.json'
try:
f = open(path_a, 'r')
node_descriptor_a = json.load(f)
f.close()
except Exception as e:
print(repr(e))
print('Could not open given node file {}'.format(path_a))
return -1
try:
f = open(path_b, 'r')
node_descriptor_b = json.load(f)
f.close()
except Exception as e:
print(repr(e))
print('Could not open given node file {}'.format(path_b))
return -1
self.node_a = node.Node('localhost', 1883, node_descriptor_a)
self.node_a.start()
self.node_b = node.Node('localhost', 1883, node_descriptor_b)
self.node_b.start()
def test_publishable_links(self):
self.assertEqual(self.node_a.publishable_links, {'a/a_sub'})
self.assertEqual(self.node_b.publishable_links, set())
def test_subscribable_links(self):
self.assertEqual(self.node_a.subscribable_links, set())
self.assertEqual(self.node_b.subscribable_links, {'a/a_sub'})
def test_gettable_links(self):
self.assertEqual(self.node_a.gettable_links, set())
self.assertEqual(self.node_b.gettable_links, {'a/a_sub2'})
def test_puttable_links(self):
self.assertEqual(self.node_a.puttable_links, {'a/a_sub2'})
self.assertEqual(self.node_b.puttable_links, {'b/b_sub'})
def tearDown(self):
self.node_a.stop()
self.node_b.stop()
| 30.854545 | 69 | 0.614614 | 229 | 1,697 | 4.323144 | 0.222707 | 0.113131 | 0.153535 | 0.185859 | 0.484848 | 0.436364 | 0.365657 | 0.167677 | 0.167677 | 0.167677 | 0 | 0.009577 | 0.261638 | 1,697 | 54 | 70 | 31.425926 | 0.780527 | 0 | 0 | 0.238095 | 0 | 0 | 0.103123 | 0.030642 | 0 | 0 | 0 | 0 | 0.190476 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.285714 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fe8ccedd5919a259d55f873b8eeacc8ac42d24a | 5,417 | py | Python | cuda/rrnn_semiring.py | Noahs-ARK/rational-recurrences | 3b7ef54520bcaa2b24551cf42a125c9251124229 | [
"MIT"
] | 27 | 2018-09-28T02:17:07.000Z | 2020-10-15T14:57:16.000Z | cuda/rrnn_semiring.py | Noahs-ARK/rational-recurrences | 3b7ef54520bcaa2b24551cf42a125c9251124229 | [
"MIT"
] | 1 | 2021-03-25T22:08:35.000Z | 2021-03-25T22:08:35.000Z | cuda/rrnn_semiring.py | Noahs-ARK/rational-recurrences | 3b7ef54520bcaa2b24551cf42a125c9251124229 | [
"MIT"
] | 5 | 2018-11-06T05:49:51.000Z | 2019-10-26T03:36:43.000Z | RRNN_SEMIRING = """
extern "C" {
__global__ void rrnn_semiring_fwd(
const float * __restrict__ u,
const float * __restrict__ eps,
const float * __restrict__ c1_init,
const float * __restrict__ c2_init,
const int len,
const int batch,
const int dim,
const int k,
float * __restrict__ c1,
float * __restrict__ c2,
int semiring_type) {
assert (k == K);
int ncols = batch*dim;
int col = blockIdx.x * blockDim.x + threadIdx.x;
if (col >= ncols) return;
int ncols_u = ncols*k;
const float *up = u + (col*k);
float *c1p = c1 + col;
float *c2p = c2 + col;
float cur_c1 = *(c1_init + col);
float cur_c2 = *(c2_init + col);
const float eps_val = *(eps + (col%dim));
for (int row = 0; row < len; ++row) {
float u1 = *(up);
float u2 = *(up+1);
float forget1 = *(up+2);
float forget2 = *(up+3);
float prev_c1 = cur_c1;
float op1 = times_forward(semiring_type, cur_c1, forget1);
cur_c1 = plus_forward(semiring_type, op1, u1);
float op2 = times_forward(semiring_type, cur_c2, forget2);
float op3_ = plus_forward(semiring_type, eps_val, prev_c1);
float op3 = times_forward(semiring_type, op3_, u2);
cur_c2 = plus_forward(semiring_type, op2, op3);
*c1p = cur_c1;
*c2p = cur_c2;
up += ncols_u;
c1p += ncols;
c2p += ncols;
}
}
__global__ void rrnn_semiring_bwd(
const float * __restrict__ u,
const float * __restrict__ eps,
const float * __restrict__ c1_init,
const float * __restrict__ c2_init,
const float * __restrict__ c1,
const float * __restrict__ c2,
const float * __restrict__ grad_c1,
const float * __restrict__ grad_c2,
const float * __restrict__ grad_last_c1,
const float * __restrict__ grad_last_c2,
const int len,
const int batch,
const int dim,
const int k,
float * __restrict__ grad_u,
float * __restrict__ grad_eps,
float * __restrict__ grad_c1_init,
float * __restrict__ grad_c2_init,
int semiring_type) {
assert (k == K);
int ncols = batch*dim;
int col = blockIdx.x * blockDim.x + threadIdx.x;
if (col >= ncols) return;
int ncols_u = ncols*k;
float cur_c1 = *(grad_last_c1 + col);
float cur_c2 = *(grad_last_c2 + col);
const float eps_val = *(eps + (col%dim));
const float *up = u + (col*k) + (len-1)*ncols_u;
const float *c1p = c1 + col + (len-1)*ncols;
const float *c2p = c2 + col + (len-1)*ncols;
const float *gc1p = grad_c1 + col + (len-1)*ncols;
const float *gc2p = grad_c2 + col + (len-1)*ncols;
float *gup = grad_u + (col*k) + (len-1)*ncols_u;
float geps = 0.f;
for (int row = len-1; row >= 0; --row) {
float u1 = *(up);
float u2 = *(up+1);
float forget1 = *(up+2);
float forget2 = *(up+3);
const float c1_val = *c1p;
const float c2_val = *c2p;
const float prev_c1 = (row>0) ? (*(c1p-ncols)) : (*(c1_init+col));
const float prev_c2 = (row>0) ? (*(c2p-ncols)) : (*(c2_init+col));
const float gc1 = *(gc1p) + cur_c1;
const float gc2 = *(gc2p) + cur_c2;
cur_c1 = cur_c2 = 0.f;
float op1 = times_forward(semiring_type, prev_c1, forget1);
float gop1 = 0.f, gu1 = 0.f;
plus_backward(semiring_type, op1, u1, gc1, gop1, gu1);
float gprev_c1 = 0.f, gprev_c2 = 0.f, gforget1=0.f;
times_backward(semiring_type, prev_c1, forget1, gop1, gprev_c1, gforget1);
*(gup) = gu1;
*(gup+2) = gforget1;
cur_c1 += gprev_c1;
float op2 = times_forward(semiring_type, prev_c2, forget2);
float op3_ = plus_forward(semiring_type, eps_val, prev_c1);
float op3 = times_forward(semiring_type, op3_, u2);
float gop2 = 0.f, gop3 = 0.f;
plus_backward(semiring_type, op2, op3, gc2, gop2, gop3);
float gop3_ = 0.f, gu2 = 0.f, gforget2 = 0.f, cur_geps=0.f;
times_backward(semiring_type, prev_c2, forget2, gop2, gprev_c2, gforget2);
times_backward(semiring_type, op3_, u2, gop3, gop3_, gu2);
plus_backward(semiring_type, eps_val, prev_c1, gop3_, cur_geps, gprev_c1);
*(gup+1) = gu2;
*(gup+3) = gforget2;
geps += cur_geps;
cur_c1 += gprev_c1;
cur_c2 += gprev_c2;
up -= ncols_u;
c1p -= ncols;
c2p -= ncols;
gup -= ncols_u;
gc1p -= ncols;
gc2p -= ncols;
}
*(grad_c1_init + col) = cur_c1;
*(grad_c2_init + col) = cur_c2;
*(grad_eps + col%dim) = geps;
}
}
"""
| 36.601351 | 86 | 0.502677 | 654 | 5,417 | 3.824159 | 0.108563 | 0.111955 | 0.10076 | 0.057577 | 0.589364 | 0.514994 | 0.413035 | 0.357057 | 0.313874 | 0.313874 | 0 | 0.055824 | 0.388222 | 5,417 | 147 | 87 | 36.85034 | 0.698853 | 0 | 0 | 0.349206 | 0 | 0.02381 | 0.995754 | 0.082518 | 0 | 0 | 0 | 0 | 0.015873 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3fe8e4411ff091a355fe9346309f0659c9b08983 | 1,841 | py | Python | tests.py | c-okelly/movie_script_analytics | 6fee40c0378921199ab14ca0b4db447b9f4e7bcf | [
"MIT"
] | 1 | 2017-11-09T13:24:47.000Z | 2017-11-09T13:24:47.000Z | tests.py | c-okelly/movie_script_analytics | 6fee40c0378921199ab14ca0b4db447b9f4e7bcf | [
"MIT"
] | null | null | null | tests.py | c-okelly/movie_script_analytics | 6fee40c0378921199ab14ca0b4db447b9f4e7bcf | [
"MIT"
] | null | null | null | import re
import text_objects
import numpy as np
import pickle
# f = open("Data/scripts_text/17-Again.txt", 'r')
# text = f.read()
# text = text[900:1500]
# print(text)
# count = len(re.findall("\W+",text))
# print(count)
#
# lines = text.split('\n')
# lines_on_empty = re.split("\n\s+\n", text)
# print(len(lines))
# print(len(lines_on_empty))
#
# # Find empty lines
# count = 0
# for item in lines:
# if re.search("\A\s+\Z", item):
# print(count)
# count += 1
#
# # Search for character names in list
# for item in lines:
# if re.search("\A\s*Name_character\s*(\(.*\))?\s*\Z", item):
# print(item)
# # Generate list of characters from the script
# characters = dict()
#
#
# for line in lines:
# #Strip whitespace and check if whole line is in capital letters
# line = line.strip()
# if (line.isupper()):
#
# # Exclude lines with EXT / INT in them
# s1 = re.search('EXT\.', line)
# s2 = re.search('INT\.', line)
#
# # Select correct lines and strip out and elements within parathenses. Normally continued
# if (not(s1 or s2)):
# line = re.sub("\s*\(.*\)","",line)
# # If character no in dict add them. If a already in increase count by 1
# if line in characters:
# characters[line] = characters[line] + 1
# else:
# characters[line] = 1
#
# print(characters)
# Get description lines
if __name__ == '__main__':
#
# string = " -EARLY APRIL, 1841"
# print(re.match("^\s+-(\w+\s{0,3},?/?){0,4}(\s\d{0,5})-\s+",string))
# for i in np.arange(0,1,0.1):
# print(i,"to",i+0.1)
# array= [1,3,5,6,1]
#
# count = 0
var = pickle.load(open("Data/Pickled_objects/400.dat","rb"))
object_1 = var[0]
print(object_1.info_dict)
| 23.303797 | 98 | 0.558935 | 268 | 1,841 | 3.768657 | 0.410448 | 0.031683 | 0.023762 | 0.027723 | 0.051485 | 0.051485 | 0.051485 | 0.051485 | 0.051485 | 0 | 0 | 0.033898 | 0.262901 | 1,841 | 78 | 99 | 23.602564 | 0.710391 | 0.797936 | 0 | 0 | 0 | 0 | 0.124183 | 0.091503 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3feef5a3e0cc27bf16fbab36a842bb9bb4ecc2cd | 643 | py | Python | machina/templatetags/forum_tracking_tags.py | jujinesy/initdjango-machina | 93c24877f546521867b3ef77fa278237af932d42 | [
"BSD-3-Clause"
] | 1 | 2021-10-08T03:31:24.000Z | 2021-10-08T03:31:24.000Z | machina/templatetags/forum_tracking_tags.py | jujinesy/initdjango-machina | 93c24877f546521867b3ef77fa278237af932d42 | [
"BSD-3-Clause"
] | 7 | 2020-02-12T01:11:13.000Z | 2022-03-11T23:26:32.000Z | machina/templatetags/forum_tracking_tags.py | jujinesy/initdjango-machina | 93c24877f546521867b3ef77fa278237af932d42 | [
"BSD-3-Clause"
] | 1 | 2019-04-20T05:26:27.000Z | 2019-04-20T05:26:27.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django import template
from machina.core.loading import get_class
TrackingHandler = get_class('forum_tracking.handler', 'TrackingHandler')
register = template.Library()
@register.simple_tag(takes_context=True)
def get_unread_topics(context, topics, user):
"""
This will return a list of unread topics for the given user from a given set of topics.
Usage::
{% get_unread_topics topics request.user as unread_topics %}
"""
request = context.get('request', None)
return TrackingHandler(request=request).get_unread_topics(topics, user)
| 24.730769 | 91 | 0.738725 | 84 | 643 | 5.452381 | 0.52381 | 0.131004 | 0.098253 | 0.091703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001862 | 0.164852 | 643 | 25 | 92 | 25.72 | 0.851024 | 0.287714 | 0 | 0 | 0 | 0 | 0.101382 | 0.050691 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3ff244c8c0c0b1265e61249a530b3e42331c5fc4 | 13,794 | py | Python | qiskit/pulse/timeslots.py | lerongil/qiskit-terra | a25af2a2378bc3d4f5ec73b948d048d1b707454c | [
"Apache-2.0"
] | 3 | 2019-11-20T08:15:28.000Z | 2020-11-01T15:32:57.000Z | qiskit/pulse/timeslots.py | lerongil/qiskit-terra | a25af2a2378bc3d4f5ec73b948d048d1b707454c | [
"Apache-2.0"
] | null | null | null | qiskit/pulse/timeslots.py | lerongil/qiskit-terra | a25af2a2378bc3d4f5ec73b948d048d1b707454c | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# This code is part of Qiskit.
#
# (C) Copyright IBM 2017, 2019.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Timeslots for channels.
"""
from collections import defaultdict
import itertools
from typing import Tuple, Union, Optional
from .channels import Channel
from .exceptions import PulseError
# pylint: disable=missing-return-doc
class Interval:
"""Time interval."""
def __init__(self, start: int, stop: int):
"""Create an interval, (start, stop).
Args:
start: Starting value of interval
stop: Stopping value of interval
Raises:
PulseError: when invalid time or duration is specified
"""
if start < 0:
raise PulseError("Cannot create Interval with negative starting value")
if stop < 0:
raise PulseError("Cannot create Interval with negative stopping value")
if start > stop:
raise PulseError("Cannot create Interval with value start after stop")
self._start = start
self._stop = stop
@property
def start(self):
"""Start of interval."""
return self._start
@property
def stop(self):
"""Stop of interval."""
return self._stop
@property
def duration(self):
"""Duration of this interval."""
return self.stop - self.start
def has_overlap(self, interval: 'Interval') -> bool:
"""Check if self has overlap with `interval`.
Args:
interval: interval to be examined
Returns:
bool: True if self has overlap with `interval` otherwise False
"""
return self.start < interval.stop and interval.start < self.stop
def shift(self, time: int) -> 'Interval':
"""Return a new interval shifted by `time` from self
Args:
time: time to be shifted
Returns:
Interval: interval shifted by `time`
"""
return Interval(self.start + time, self.stop + time)
def __eq__(self, other):
"""Two intervals are the same if they have the same starting and stopping values.
Args:
other (Interval): other Interval
Returns:
bool: are self and other equal.
"""
return self.start == other.start and self.stop == other.stop
def stops_before(self, other):
"""Whether intervals stops at value less than or equal to the
other interval's starting time.
Args:
other (Interval): other Interval
Returns:
bool: are self and other equal.
"""
return self.stop <= other.start
def starts_after(self, other):
"""Whether intervals starts at value greater than or equal to the
other interval's stopping time.
Args:
other (Interval): other Interval
Returns:
bool: are self and other equal.
"""
return self.start >= other.stop
def __repr__(self):
"""Return a readable representation of Interval Object"""
return "{}({}, {})".format(self.__class__.__name__, self.start, self.stop)
class Timeslot:
"""Named tuple of (Interval, Channel)."""
def __init__(self, interval: Interval, channel: Channel):
self._interval = interval
self._channel = channel
@property
def interval(self):
"""Interval of this time slot."""
return self._interval
@property
def channel(self):
"""Channel of this time slot."""
return self._channel
@property
def start(self):
"""Start of timeslot."""
return self.interval.start
@property
def stop(self):
"""Stop of timeslot."""
return self.interval.stop
@property
def duration(self):
"""Duration of this timeslot."""
return self.interval.duration
def shift(self, time: int) -> 'Timeslot':
"""Return a new Timeslot shifted by `time`.
Args:
time: time to be shifted
"""
return Timeslot(self.interval.shift(time), self.channel)
def has_overlap(self, other: 'Timeslot') -> bool:
"""Check if self has overlap with `interval`.
Args:
other: Other Timeslot to check for overlap with
Returns:
bool: True if intervals overlap and are on the same channel
"""
return self.interval.has_overlap(other) and self.channel == other.channel
def __eq__(self, other) -> bool:
"""Two time-slots are the same if they have the same interval and channel.
Args:
other (Timeslot): other Timeslot
"""
return self.interval == other.interval and self.channel == other.channel
def __repr__(self):
"""Return a readable representation of Timeslot Object"""
return "{}({}, {})".format(self.__class__.__name__,
self.channel,
(self.interval.start, self.interval.stop))
class TimeslotCollection:
"""Collection of `Timeslot`s."""
def __init__(self, *timeslots: Union[Timeslot, 'TimeslotCollection']):
"""Create a new time-slot collection.
Args:
*timeslots: list of time slots
Raises:
PulseError: when overlapped time slots are specified
"""
self._table = defaultdict(list)
for timeslot in timeslots:
if isinstance(timeslot, TimeslotCollection):
self._merge_timeslot_collection(timeslot)
else:
self._merge_timeslot(timeslot)
@property
def timeslots(self) -> Tuple[Timeslot]:
"""Sorted tuple of `Timeslot`s in collection."""
return tuple(itertools.chain.from_iterable(self._table.values()))
@property
def channels(self) -> Tuple[Timeslot]:
"""Channels within the timeslot collection."""
return tuple(k for k, v in self._table.items() if v)
@property
def start_time(self) -> int:
"""Return earliest start time in this collection."""
return self.ch_start_time(*self.channels)
@property
def stop_time(self) -> int:
"""Return maximum time of timeslots over all channels."""
return self.ch_stop_time(*self.channels)
@property
def duration(self) -> int:
"""Return maximum duration of timeslots over all channels."""
return self.stop_time
def _merge_timeslot_collection(self, other: 'TimeslotCollection'):
"""Mutably merge timeslot collections into this TimeslotCollection.
Args:
other: TimeSlotCollection to merge
"""
for channel, other_ch_timeslots in other._table.items():
if channel not in self._table:
self._table[channel] += other_ch_timeslots # extend to copy items
else:
# if channel is in self there might be an overlap
for idx, other_ch_timeslot in enumerate(other_ch_timeslots):
insert_idx = self._merge_timeslot(other_ch_timeslot)
if insert_idx == len(self._table[channel]) - 1:
# Timeslot was inserted at end of list. The rest can be appended.
self._table[channel] += other_ch_timeslots[idx + 1:]
break
def _merge_timeslot(self, timeslot: Timeslot) -> int:
"""Mutably merge timeslots into this TimeslotCollection.
Note timeslots are sorted internally on their respective channel
Args:
timeslot: Timeslot to merge
Returns:
int: Return the index in which timeslot was inserted
Raises:
PulseError: If timeslots overlap
"""
interval = timeslot.interval
ch_timeslots = self._table[timeslot.channel]
insert_idx = len(ch_timeslots)
# merge timeslots by insertion sort.
# Worst case O(n_channels), O(1) for append
# could be improved by implementing an interval tree
for ch_timeslot in reversed(ch_timeslots):
ch_interval = ch_timeslot.interval
if interval.start >= ch_interval.stop:
break
elif interval.has_overlap(ch_interval):
raise PulseError("Timeslot: {0} overlaps with existing"
"Timeslot: {1}".format(timeslot, ch_timeslot))
insert_idx -= 1
ch_timeslots.insert(insert_idx, timeslot)
return insert_idx
def ch_timeslots(self, channel: Channel) -> Tuple[Timeslot]:
"""Sorted tuple of `Timeslot`s for channel in this TimeslotCollection."""
if channel in self._table:
return tuple(self._table[channel])
return tuple()
def ch_start_time(self, *channels: Channel) -> int:
"""Return earliest start time in this collection.
Args:
*channels: Channels over which to obtain start_time.
"""
timeslots = list(itertools.chain(*(self._table[chan] for chan in channels
if chan in self._table)))
if timeslots:
return min(timeslot.start for timeslot in timeslots)
return 0
def ch_stop_time(self, *channels: Channel) -> int:
"""Return maximum time of timeslots over all channels.
Args:
*channels: Channels over which to obtain stop time.
"""
timeslots = list(itertools.chain(*(self._table[chan] for chan in channels
if chan in self._table)))
if timeslots:
return max(timeslot.stop for timeslot in timeslots)
return 0
def ch_duration(self, *channels: Channel) -> int:
"""Return maximum duration of timeslots over all channels.
Args:
*channels: Channels over which to obtain the duration.
"""
return self.ch_stop_time(*channels)
def is_mergeable_with(self, other: 'TimeslotCollection') -> bool:
"""Return if self is mergeable with `timeslots`.
Args:
other: TimeslotCollection to be checked for mergeability
"""
common_channels = set(self.channels) & set(other.channels)
for channel in common_channels:
ch_timeslots = self.ch_timeslots(channel)
other_ch_timeslots = other.ch_timeslots(channel)
if ch_timeslots[-1].stop < other_ch_timeslots[0].start:
continue # We are appending along this channel
i = 0 # iterate through this
j = 0 # iterate through other
while i < len(ch_timeslots) and j < len(other_ch_timeslots):
if ch_timeslots[i].interval.has_overlap(other_ch_timeslots[j].interval):
return False
if ch_timeslots[i].stop <= other_ch_timeslots[j].start:
i += 1
else:
j += 1
return True
def merge(self, timeslots: 'TimeslotCollection') -> 'TimeslotCollection':
"""Return a new TimeslotCollection with `timeslots` merged into it.
Args:
timeslots: TimeslotCollection to be merged
"""
return TimeslotCollection(self, timeslots)
def shift(self, time: int) -> 'TimeslotCollection':
"""Return a new TimeslotCollection shifted by `time`.
Args:
time: time to be shifted by
"""
slots = [slot.shift(time) for slot in self.timeslots]
return TimeslotCollection(*slots)
def complement(self, stop_time: Optional[int] = None) -> 'TimeslotCollection':
"""Return a complement TimeSlotCollection containing all unoccupied Timeslots
within this TimeSlotCollection.
Args:
stop_time: Final time too which complement Timeslot's will be returned.
If not set, defaults to last time in this TimeSlotCollection
"""
timeslots = []
stop_time = stop_time or self.stop_time
for channel in self.channels:
curr_time = 0
for timeslot in self.ch_timeslots(channel):
next_time = timeslot.interval.start
if next_time-curr_time > 0:
timeslots.append(Timeslot(Interval(curr_time, next_time), channel))
curr_time = timeslot.interval.stop
# pad out channel to stop_time
if stop_time-curr_time > 0:
timeslots.append(Timeslot(Interval(curr_time, stop_time), channel))
return TimeslotCollection(*timeslots)
def __eq__(self, other) -> bool:
"""Two time-slot collections are the same if they have the same time-slots.
Args:
other (TimeslotCollection): other TimeslotCollection
"""
if set(self.channels) != set(other.channels):
return False
for channel in self.channels:
if self.ch_timeslots(channel) != self.ch_timeslots(channel):
return False
return True
def __repr__(self):
"""Return a readable representation of TimeslotCollection Object"""
rep = dict()
for key, val in self._table.items():
rep[key] = [(timeslot.start, timeslot.stop) for timeslot in val]
return self.__class__.__name__ + str(rep)
| 32.456471 | 89 | 0.603741 | 1,575 | 13,794 | 5.166984 | 0.154921 | 0.03244 | 0.019661 | 0.01278 | 0.329073 | 0.281887 | 0.228435 | 0.19071 | 0.117351 | 0.081224 | 0 | 0.003359 | 0.309337 | 13,794 | 424 | 90 | 32.533019 | 0.850845 | 0.335581 | 0 | 0.248588 | 0 | 0 | 0.046068 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214689 | false | 0 | 0.028249 | 0 | 0.485876 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ff3b22779c14ce17a4d6563f15286360782e0ac | 3,237 | py | Python | qvdfile/tests/test_qvdfile.py | cosmocracy/qvdfile | c1f92ec153c07f607fd57c6f6679e3c7269d643e | [
"Apache-2.0"
] | 17 | 2019-07-18T12:50:33.000Z | 2021-05-25T06:26:45.000Z | qvdfile/tests/test_qvdfile.py | cosmocracy/qvdfile | c1f92ec153c07f607fd57c6f6679e3c7269d643e | [
"Apache-2.0"
] | 2 | 2021-05-15T03:53:08.000Z | 2021-07-22T14:31:15.000Z | qvdfile/tests/test_qvdfile.py | cosmocracy/qvdfile | c1f92ec153c07f607fd57c6f6679e3c7269d643e | [
"Apache-2.0"
] | 5 | 2019-07-18T12:55:31.000Z | 2021-12-21T15:09:37.000Z | import pytest
import errno
import os
import glob
import shutil
import xml.etree.ElementTree as ET
from qvdfile.qvdfile import QvdFile, BadFormat
@pytest.fixture(scope="function")
def qvd():
""" standard setup for most of the tests """
yield QvdFile("data/tab1.qvd")
@pytest.fixture(scope="function")
def bigqvd():
""" standard setup for tests with bigger qvd"""
yield QvdFile("data/tab2.qvd")
# READING QVD ==================================================================
# init
def test_init_smoke(qvd):
# metadata is in attribs
assert "TableName" in qvd.attribs.keys()
assert qvd.attribs["TableName"] == "tab1"
# fields info is in fields
assert len(qvd.fields) == 3
assert "ID" in [ f["FieldName"] for f in qvd.fields ]
def test_init_no_file():
with pytest.raises(FileNotFoundError):
qvd = QvdFile("data/no_such_file.qvd")
def test_init_not_qvd_or_bad_file():
with pytest.raises(BadFormat):
qvd = QvdFile(__file__)
# getFieldVal
def test_get_field_val_smoke(qvd):
assert qvd.getFieldVal("ID",0) == "123.12"
assert qvd.getFieldVal("NAME",2) == "Vaysa"
assert qvd.getFieldVal("ONEVAL",0) == "0"
def test_get_field_val_bad_name(qvd):
with pytest.raises(KeyError):
qvd.getFieldVal("NOFILED",0)
def test_get_field_val_bad_index(qvd):
with pytest.raises(IndexError):
qvd.getFieldVal("ID",10)
# fieldsInRow
def test_fields_in_row_smoke(qvd):
rowf = qvd.fieldsInRow()
assert next(rowf)["FieldName"] == "NAME"
assert next(rowf)["FieldName"] == "ID"
with pytest.raises(StopIteration):
next(rowf)
def test_fields_in_row_bigger(bigqvd):
rowf = bigqvd.fieldsInRow()
assert next(rowf)["FieldName"] == "NAME"
assert next(rowf)["FieldName"] == "PHONE"
assert next(rowf)["FieldName"] == "VAL"
assert next(rowf)["FieldName"] == "ID"
with pytest.raises(StopIteration):
next(rowf)
# createMask
def test_create_mask_smoke(qvd):
assert qvd.createMask() == "uint:5,uint:3"
def test_create_mask_bigger(bigqvd):
assert bigqvd.createMask() == "uint:6,uint:5,uint:5,uint:8"
# getRow
def test_get_row_smoke(qvd):
row = qvd.getRow(0)
assert row["ID"] == "123.12"
assert row["NAME"] == "Pete"
assert row["ONEVAL"] == "0"
def test_get_row_bad_index(qvd):
with pytest.raises(IndexError):
qvd.getRow(10)
def test_get_row_null(qvd):
row = qvd.getRow(4)
assert row["ID"] == qvd.NoneValueStr
def test_get_row_bigger(bigqvd):
row = bigqvd.getRow(0)
assert row["ID"] == "1"
assert row["VAL"] == "100001"
assert row["NAME"] == "Pete1"
assert row["PHONE"] == "1234567890"
assert row["SINGLE"] == "single value"
def test_get_row_bigger_nulls(bigqvd):
row = bigqvd.getRow(17)
assert row["ID"] == "-18"
assert row["VAL"] == bigqvd.NoneValueStr
assert row["NAME"] == "Pete17"
assert row["PHONE"] == bigqvd.NoneValueStr
assert row["SINGLE"] == "single value"
# WRITING QVD ===================================================================
# code and tests will follow....
| 22.957447 | 81 | 0.611369 | 410 | 3,237 | 4.67561 | 0.263415 | 0.054773 | 0.041732 | 0.071987 | 0.303599 | 0.179447 | 0.179447 | 0.158059 | 0.116328 | 0.116328 | 0 | 0.022309 | 0.210689 | 3,237 | 140 | 82 | 23.121429 | 0.727984 | 0.11245 | 0 | 0.181818 | 0 | 0 | 0.127324 | 0.016836 | 0 | 0 | 0 | 0 | 0.376623 | 1 | 0.220779 | false | 0 | 0.090909 | 0 | 0.311688 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ff6e816cd8b898e3be215d0d77841e6ad25c848 | 543 | py | Python | patients/migrations/0008_alter_patient_age.py | Curewell-Homeo-Clinic/admin-system | c8ce56a2bdbccfe1e6bec09068932f1943498b9f | [
"MIT"
] | 1 | 2021-11-29T15:24:41.000Z | 2021-11-29T15:24:41.000Z | patients/migrations/0008_alter_patient_age.py | Curewell-Homeo-Clinic/admin-system | c8ce56a2bdbccfe1e6bec09068932f1943498b9f | [
"MIT"
] | 46 | 2021-11-29T16:05:55.000Z | 2022-03-01T13:04:45.000Z | patients/migrations/0008_alter_patient_age.py | Curewell-Homeo-Clinic/admin-system | c8ce56a2bdbccfe1e6bec09068932f1943498b9f | [
"MIT"
] | null | null | null | # Generated by Django 3.2.9 on 2021-11-20 16:13
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('patients', '0007_alter_patient_age'),
]
operations = [
migrations.AlterField(
model_name='patient',
name='age',
field=models.PositiveSmallIntegerField(blank=True, null=True, validators=[django.core.validators.MaxValueValidator(100), django.core.validators.MinValueValidator(1)]),
),
]
| 27.15 | 179 | 0.67035 | 58 | 543 | 6.206897 | 0.689655 | 0.083333 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053864 | 0.213628 | 543 | 19 | 180 | 28.578947 | 0.789227 | 0.082873 | 0 | 0 | 1 | 0 | 0.080645 | 0.044355 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ff9c9e147dda16eeaf022e601e081b35faea86c | 15,400 | py | Python | minemeld/ft/condition/BoolExprParser.py | zul126/minemeld-core | 2eb9b9bfd7654aee57aabd5fb280d4e89a438daf | [
"Apache-2.0"
] | 1 | 2021-01-02T07:25:04.000Z | 2021-01-02T07:25:04.000Z | minemeld/ft/condition/BoolExprParser.py | zul126/minemeld-core | 2eb9b9bfd7654aee57aabd5fb280d4e89a438daf | [
"Apache-2.0"
] | null | null | null | minemeld/ft/condition/BoolExprParser.py | zul126/minemeld-core | 2eb9b9bfd7654aee57aabd5fb280d4e89a438daf | [
"Apache-2.0"
] | 1 | 2019-03-14T06:52:52.000Z | 2019-03-14T06:52:52.000Z | # Generated from BoolExpr.g4 by ANTLR 4.5.1
# encoding: utf-8
from __future__ import print_function
from antlr4 import *
from io import StringIO
# flake8: noqa
def serializedATN():
with StringIO() as buf:
buf.write(u"\3\u0430\ud6d1\u8206\uad2d\u4417\uaef1\u8d80\uaadd\3")
buf.write(u"\22\63\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7\t")
buf.write(u"\7\4\b\t\b\3\2\3\2\3\2\3\2\3\3\3\3\5\3\27\n\3\3\4\3\4")
buf.write(u"\3\4\5\4\34\n\4\3\5\3\5\3\5\3\6\3\6\3\6\3\6\3\6\5\6&")
buf.write(u"\n\6\7\6(\n\6\f\6\16\6+\13\6\3\6\3\6\3\7\3\7\3\b\3\b")
buf.write(u"\3\b\2\2\t\2\4\6\b\n\f\16\2\4\3\2\6\13\4\2\f\16\20\21")
buf.write(u"/\2\20\3\2\2\2\4\26\3\2\2\2\6\30\3\2\2\2\b\35\3\2\2\2")
buf.write(u"\n \3\2\2\2\f.\3\2\2\2\16\60\3\2\2\2\20\21\5\4\3\2\21")
buf.write(u"\22\5\f\7\2\22\23\5\16\b\2\23\3\3\2\2\2\24\27\7\17\2")
buf.write(u"\2\25\27\5\6\4\2\26\24\3\2\2\2\26\25\3\2\2\2\27\5\3\2")
buf.write(u"\2\2\30\33\7\17\2\2\31\34\5\b\5\2\32\34\5\n\6\2\33\31")
buf.write(u"\3\2\2\2\33\32\3\2\2\2\34\7\3\2\2\2\35\36\7\3\2\2\36")
buf.write(u"\37\7\4\2\2\37\t\3\2\2\2 !\7\3\2\2!)\5\4\3\2\"%\7\5\2")
buf.write(u"\2#&\5\4\3\2$&\5\16\b\2%#\3\2\2\2%$\3\2\2\2&(\3\2\2\2")
buf.write(u"\'\"\3\2\2\2(+\3\2\2\2)\'\3\2\2\2)*\3\2\2\2*,\3\2\2\2")
buf.write(u"+)\3\2\2\2,-\7\4\2\2-\13\3\2\2\2./\t\2\2\2/\r\3\2\2\2")
buf.write(u"\60\61\t\3\2\2\61\17\3\2\2\2\6\26\33%)")
return buf.getvalue()
class BoolExprParser ( Parser ):
grammarFileName = "BoolExpr.g4"
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
sharedContextCache = PredictionContextCache()
literalNames = [ u"<INVALID>", u"'('", u"')'", u"','", u"'<'", u"'<='",
u"'=='", u"'>='", u"'>'", u"'!='", u"'true'", u"'false'",
u"'null'" ]
symbolicNames = [ u"<INVALID>", u"<INVALID>", u"<INVALID>", u"<INVALID>",
u"<INVALID>", u"<INVALID>", u"<INVALID>", u"<INVALID>",
u"<INVALID>", u"<INVALID>", u"<INVALID>", u"<INVALID>",
u"<INVALID>", u"JAVASCRIPTIDENTIFIER", u"STRING",
u"NUMBER", u"WS" ]
RULE_booleanExpression = 0
RULE_expression = 1
RULE_functionExpression = 2
RULE_noArgs = 3
RULE_oneOrMoreArgs = 4
RULE_comparator = 5
RULE_value = 6
ruleNames = [ u"booleanExpression", u"expression", u"functionExpression",
u"noArgs", u"oneOrMoreArgs", u"comparator", u"value" ]
EOF = Token.EOF
T__0=1
T__1=2
T__2=3
T__3=4
T__4=5
T__5=6
T__6=7
T__7=8
T__8=9
T__9=10
T__10=11
T__11=12
JAVASCRIPTIDENTIFIER=13
STRING=14
NUMBER=15
WS=16
def __init__(self, input):
super(BoolExprParser, self).__init__(input)
self.checkVersion("4.5.1")
self._interp = ParserATNSimulator(self, self.atn, self.decisionsToDFA, self.sharedContextCache)
self._predicates = None
class BooleanExpressionContext(ParserRuleContext):
def __init__(self, parser, parent=None, invokingState=-1):
super(BoolExprParser.BooleanExpressionContext, self).__init__(parent, invokingState)
self.parser = parser
def expression(self):
return self.getTypedRuleContext(BoolExprParser.ExpressionContext,0)
def comparator(self):
return self.getTypedRuleContext(BoolExprParser.ComparatorContext,0)
def value(self):
return self.getTypedRuleContext(BoolExprParser.ValueContext,0)
def getRuleIndex(self):
return BoolExprParser.RULE_booleanExpression
def enterRule(self, listener):
if hasattr(listener, "enterBooleanExpression"):
listener.enterBooleanExpression(self)
def exitRule(self, listener):
if hasattr(listener, "exitBooleanExpression"):
listener.exitBooleanExpression(self)
def booleanExpression(self):
localctx = BoolExprParser.BooleanExpressionContext(self, self._ctx, self.state)
self.enterRule(localctx, 0, self.RULE_booleanExpression)
try:
self.enterOuterAlt(localctx, 1)
self.state = 14
self.expression()
self.state = 15
self.comparator()
self.state = 16
self.value()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ExpressionContext(ParserRuleContext):
def __init__(self, parser, parent=None, invokingState=-1):
super(BoolExprParser.ExpressionContext, self).__init__(parent, invokingState)
self.parser = parser
def JAVASCRIPTIDENTIFIER(self):
return self.getToken(BoolExprParser.JAVASCRIPTIDENTIFIER, 0)
def functionExpression(self):
return self.getTypedRuleContext(BoolExprParser.FunctionExpressionContext,0)
def getRuleIndex(self):
return BoolExprParser.RULE_expression
def enterRule(self, listener):
if hasattr(listener, "enterExpression"):
listener.enterExpression(self)
def exitRule(self, listener):
if hasattr(listener, "exitExpression"):
listener.exitExpression(self)
def expression(self):
localctx = BoolExprParser.ExpressionContext(self, self._ctx, self.state)
self.enterRule(localctx, 2, self.RULE_expression)
try:
self.state = 20
la_ = self._interp.adaptivePredict(self._input,0,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 18
self.match(BoolExprParser.JAVASCRIPTIDENTIFIER)
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 19
self.functionExpression()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class FunctionExpressionContext(ParserRuleContext):
def __init__(self, parser, parent=None, invokingState=-1):
super(BoolExprParser.FunctionExpressionContext, self).__init__(parent, invokingState)
self.parser = parser
def JAVASCRIPTIDENTIFIER(self):
return self.getToken(BoolExprParser.JAVASCRIPTIDENTIFIER, 0)
def noArgs(self):
return self.getTypedRuleContext(BoolExprParser.NoArgsContext,0)
def oneOrMoreArgs(self):
return self.getTypedRuleContext(BoolExprParser.OneOrMoreArgsContext,0)
def getRuleIndex(self):
return BoolExprParser.RULE_functionExpression
def enterRule(self, listener):
if hasattr(listener, "enterFunctionExpression"):
listener.enterFunctionExpression(self)
def exitRule(self, listener):
if hasattr(listener, "exitFunctionExpression"):
listener.exitFunctionExpression(self)
def functionExpression(self):
localctx = BoolExprParser.FunctionExpressionContext(self, self._ctx, self.state)
self.enterRule(localctx, 4, self.RULE_functionExpression)
try:
self.enterOuterAlt(localctx, 1)
self.state = 22
self.match(BoolExprParser.JAVASCRIPTIDENTIFIER)
self.state = 25
la_ = self._interp.adaptivePredict(self._input,1,self._ctx)
if la_ == 1:
self.state = 23
self.noArgs()
pass
elif la_ == 2:
self.state = 24
self.oneOrMoreArgs()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class NoArgsContext(ParserRuleContext):
def __init__(self, parser, parent=None, invokingState=-1):
super(BoolExprParser.NoArgsContext, self).__init__(parent, invokingState)
self.parser = parser
def getRuleIndex(self):
return BoolExprParser.RULE_noArgs
def enterRule(self, listener):
if hasattr(listener, "enterNoArgs"):
listener.enterNoArgs(self)
def exitRule(self, listener):
if hasattr(listener, "exitNoArgs"):
listener.exitNoArgs(self)
def noArgs(self):
localctx = BoolExprParser.NoArgsContext(self, self._ctx, self.state)
self.enterRule(localctx, 6, self.RULE_noArgs)
try:
self.enterOuterAlt(localctx, 1)
self.state = 27
self.match(BoolExprParser.T__0)
self.state = 28
self.match(BoolExprParser.T__1)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class OneOrMoreArgsContext(ParserRuleContext):
def __init__(self, parser, parent=None, invokingState=-1):
super(BoolExprParser.OneOrMoreArgsContext, self).__init__(parent, invokingState)
self.parser = parser
def expression(self, i=None):
if i is None:
return self.getTypedRuleContexts(BoolExprParser.ExpressionContext)
else:
return self.getTypedRuleContext(BoolExprParser.ExpressionContext,i)
def value(self, i=None):
if i is None:
return self.getTypedRuleContexts(BoolExprParser.ValueContext)
else:
return self.getTypedRuleContext(BoolExprParser.ValueContext,i)
def getRuleIndex(self):
return BoolExprParser.RULE_oneOrMoreArgs
def enterRule(self, listener):
if hasattr(listener, "enterOneOrMoreArgs"):
listener.enterOneOrMoreArgs(self)
def exitRule(self, listener):
if hasattr(listener, "exitOneOrMoreArgs"):
listener.exitOneOrMoreArgs(self)
def oneOrMoreArgs(self):
localctx = BoolExprParser.OneOrMoreArgsContext(self, self._ctx, self.state)
self.enterRule(localctx, 8, self.RULE_oneOrMoreArgs)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 30
self.match(BoolExprParser.T__0)
self.state = 31
self.expression()
self.state = 39
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==BoolExprParser.T__2:
self.state = 32
self.match(BoolExprParser.T__2)
self.state = 35
token = self._input.LA(1)
if token in [BoolExprParser.JAVASCRIPTIDENTIFIER]:
self.state = 33
self.expression()
elif token in [BoolExprParser.T__9, BoolExprParser.T__10, BoolExprParser.T__11, BoolExprParser.STRING, BoolExprParser.NUMBER]:
self.state = 34
self.value()
else:
raise NoViableAltException(self)
self.state = 41
self._errHandler.sync(self)
_la = self._input.LA(1)
self.state = 42
self.match(BoolExprParser.T__1)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ComparatorContext(ParserRuleContext):
def __init__(self, parser, parent=None, invokingState=-1):
super(BoolExprParser.ComparatorContext, self).__init__(parent, invokingState)
self.parser = parser
def getRuleIndex(self):
return BoolExprParser.RULE_comparator
def enterRule(self, listener):
if hasattr(listener, "enterComparator"):
listener.enterComparator(self)
def exitRule(self, listener):
if hasattr(listener, "exitComparator"):
listener.exitComparator(self)
def comparator(self):
localctx = BoolExprParser.ComparatorContext(self, self._ctx, self.state)
self.enterRule(localctx, 10, self.RULE_comparator)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 44
_la = self._input.LA(1)
if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << BoolExprParser.T__3) | (1 << BoolExprParser.T__4) | (1 << BoolExprParser.T__5) | (1 << BoolExprParser.T__6) | (1 << BoolExprParser.T__7) | (1 << BoolExprParser.T__8))) != 0)):
self._errHandler.recoverInline(self)
else:
self.consume()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ValueContext(ParserRuleContext):
def __init__(self, parser, parent=None, invokingState=-1):
super(BoolExprParser.ValueContext, self).__init__(parent, invokingState)
self.parser = parser
def STRING(self):
return self.getToken(BoolExprParser.STRING, 0)
def NUMBER(self):
return self.getToken(BoolExprParser.NUMBER, 0)
def getRuleIndex(self):
return BoolExprParser.RULE_value
def enterRule(self, listener):
if hasattr(listener, "enterValue"):
listener.enterValue(self)
def exitRule(self, listener):
if hasattr(listener, "exitValue"):
listener.exitValue(self)
def value(self):
localctx = BoolExprParser.ValueContext(self, self._ctx, self.state)
self.enterRule(localctx, 12, self.RULE_value)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 46
_la = self._input.LA(1)
if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << BoolExprParser.T__9) | (1 << BoolExprParser.T__10) | (1 << BoolExprParser.T__11) | (1 << BoolExprParser.STRING) | (1 << BoolExprParser.NUMBER))) != 0)):
self._errHandler.recoverInline(self)
else:
self.consume()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
| 33.04721 | 241 | 0.586104 | 1,750 | 15,400 | 5.04 | 0.110857 | 0.014059 | 0.009864 | 0.011791 | 0.572902 | 0.498073 | 0.477211 | 0.398639 | 0.32483 | 0.315533 | 0 | 0.055117 | 0.293117 | 15,400 | 465 | 242 | 33.11828 | 0.755098 | 0.006688 | 0 | 0.45858 | 1 | 0.044379 | 0.088191 | 0.057769 | 0 | 0 | 0.000523 | 0 | 0 | 1 | 0.14497 | false | 0.011834 | 0.008876 | 0.050296 | 0.35503 | 0.002959 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ffb7c0442cbda7e7c873ec775ef33cdb0c000d2 | 398 | py | Python | nodes/networkedSingleStepper/temporaryURLNode.py | imoyer/pygestalt | d332df64264cce4a2bec8a73d698c386f1eaca7b | [
"MIT"
] | 1 | 2017-07-03T08:34:39.000Z | 2017-07-03T08:34:39.000Z | nodes/networkedSingleStepper/temporaryURLNode.py | imoyer/pygestalt | d332df64264cce4a2bec8a73d698c386f1eaca7b | [
"MIT"
] | 3 | 2015-12-04T23:14:50.000Z | 2016-11-08T16:24:32.000Z | nodes/networkedSingleStepper/temporaryURLNode.py | imnp/pygestalt | d332df64264cce4a2bec8a73d698c386f1eaca7b | [
"MIT"
] | 1 | 2017-09-13T00:17:39.000Z | 2017-09-13T00:17:39.000Z | <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /vn/testNode.py was not found on this server.</p>
<p>Additionally, a 404 Not Found
error was encountered while trying to use an ErrorDocument to handle the request.</p>
<hr>
<address>Apache Server at www.pygestalt.org Port 80</address>
</body></html>
| 33.166667 | 85 | 0.718593 | 69 | 398 | 4.144928 | 0.681159 | 0.111888 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034384 | 0.123116 | 398 | 11 | 86 | 36.181818 | 0.7851 | 0 | 0 | 0 | 0 | 0 | 0.062814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ffbaac7ded264cd662d18071c85e8138b2662eb | 4,761 | py | Python | cppwg/writers/header_collection_writer.py | josephsnyder/cppwg | 265117455ed57eb250643a28ea6029c2bccf3ab3 | [
"MIT"
] | 21 | 2017-10-03T14:29:36.000Z | 2021-12-07T08:54:43.000Z | cppwg/writers/header_collection_writer.py | josephsnyder/cppwg | 265117455ed57eb250643a28ea6029c2bccf3ab3 | [
"MIT"
] | 2 | 2017-12-29T19:17:44.000Z | 2020-03-27T14:59:27.000Z | cppwg/writers/header_collection_writer.py | josephsnyder/cppwg | 265117455ed57eb250643a28ea6029c2bccf3ab3 | [
"MIT"
] | 6 | 2019-03-21T11:55:52.000Z | 2021-07-13T20:49:50.000Z | #!/usr/bin/env python
"""
Generate the file classes_to_be_wrapped.hpp, which contains includes,
instantiation and naming typedefs for all classes that are to be
automatically wrapped.
"""
import os
import ntpath
class CppHeaderCollectionWriter():
"""
This class manages generation of the header collection file for
parsing by CastXML
"""
def __init__(self, package_info, wrapper_root):
self.wrapper_root = wrapper_root
self.package_info = package_info
self.header_file_name = "wrapper_header_collection.hpp"
self.hpp_string = ""
self.class_dict = {}
self.free_func_dict = {}
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
self.class_dict[eachClassInfo.name] = eachClassInfo
for eachFuncInfo in eachModule.free_function_info:
self.free_func_dict[eachFuncInfo.name] = eachFuncInfo
def add_custom_header_code(self):
"""
Any custom header code goes here
"""
pass
def write_file(self):
"""
The actual write
"""
if not os.path.exists(self.wrapper_root + "/"):
os.makedirs(self.wrapper_root + "/")
file_path = self.wrapper_root + "/" + self.header_file_name
hpp_file = open(file_path, 'w')
hpp_file.write(self.hpp_string)
hpp_file.close()
def should_include_all(self):
"""
Return whether all source files in the module source locs should be included
"""
for eachModule in self.package_info.module_info:
if eachModule.use_all_classes or eachModule.use_all_free_functions:
return True
return False
def write(self):
"""
Main method for generating the header file output string
"""
hpp_header_dict = {'package_name': self.package_info.name}
hpp_header_template = """\
#ifndef {package_name}_HEADERS_HPP_
#define {package_name}_HEADERS_HPP_
// Includes
"""
self.hpp_string = hpp_header_template.format(**hpp_header_dict)
# Now our own includes
if self.should_include_all():
for eachFile in self.package_info.source_hpp_files:
include_name = ntpath.basename(eachFile)
self.hpp_string += '#include "' + include_name + '"\n'
else:
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
if eachClassInfo.source_file is not None:
self.hpp_string += '#include "' + eachClassInfo.source_file + '"\n'
elif eachClassInfo.source_file_full_path is not None:
include_name = ntpath.basename(eachClassInfo.source_file_full_path)
self.hpp_string += '#include "' + include_name + '"\n'
for eachFuncInfo in eachModule.free_function_info:
if eachFuncInfo.source_file_full_path is not None:
include_name = ntpath.basename(eachFuncInfo.source_file_full_path)
self.hpp_string += '#include "' + include_name + '"\n'
# Add the template instantiations
self.hpp_string += "\n// Instantiate Template Classes \n"
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
full_names = eachClassInfo.get_full_names()
if len(full_names) == 1:
continue
prefix = "template class "
for eachTemplateName in full_names:
self.hpp_string += prefix + eachTemplateName.replace(" ","") + ";\n"
# Add typdefs for nice naming
self.hpp_string += "\n// Typedef for nicer naming\n"
self.hpp_string += "namespace cppwg{ \n"
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
full_names = eachClassInfo.get_full_names()
if len(full_names) == 1:
continue
short_names = eachClassInfo.get_short_names()
for idx, eachTemplateName in enumerate(full_names):
short_name = short_names[idx]
typdef_prefix = "typedef " + eachTemplateName.replace(" ","") + " "
self.hpp_string += typdef_prefix + short_name + ";\n"
self.hpp_string += "}\n"
self.add_custom_header_code()
self.hpp_string += "\n#endif // {}_HEADERS_HPP_\n".format(self.package_info.name)
self.write_file()
| 36.068182 | 91 | 0.603235 | 531 | 4,761 | 5.135593 | 0.239171 | 0.035937 | 0.06674 | 0.037404 | 0.323432 | 0.288229 | 0.288229 | 0.244958 | 0.23029 | 0.23029 | 0 | 0.00061 | 0.310859 | 4,761 | 131 | 92 | 36.343511 | 0.830539 | 0.110481 | 0 | 0.25641 | 1 | 0 | 0.081206 | 0.02018 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064103 | false | 0.012821 | 0.025641 | 0 | 0.128205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b209d756a7a9dd9b0a6aa608dc616fb5501e9ff4 | 219 | py | Python | 01 - Expressions, variables and assignments/exercises/perimeter-of-rectangle.py | PableraShow/python-exercises | e1648fd42f3009ec6fb1e2096852b6d399e91d5b | [
"MIT"
] | 8 | 2018-10-01T17:35:57.000Z | 2022-02-01T08:12:12.000Z | 01 - Expressions, variables and assignments/exercises/perimeter-of-rectangle.py | PableraShow/python-exercises | e1648fd42f3009ec6fb1e2096852b6d399e91d5b | [
"MIT"
] | null | null | null | 01 - Expressions, variables and assignments/exercises/perimeter-of-rectangle.py | PableraShow/python-exercises | e1648fd42f3009ec6fb1e2096852b6d399e91d5b | [
"MIT"
] | 6 | 2018-07-22T19:15:21.000Z | 2022-02-05T07:54:58.000Z | """
Prints the length in inches of the perimeter of a rectangle
with sides of length 4 and 7 inches.
"""
# Rectangle perimeter formula
length = 4
inches = 7
perimeter = 2 * length + 2 * inches
# Output
print perimeter | 18.25 | 59 | 0.726027 | 34 | 219 | 4.676471 | 0.529412 | 0.08805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.205479 | 219 | 12 | 60 | 18.25 | 0.87931 | 0.155251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b20c24ef9d6d64b2c1eb48b70a055569f3cf0291 | 690 | py | Python | 2018/21/reverse_engineered.py | lvaughn/advent | ff3f727b8db1fd9b2a04aad5dcda9a6c8d1c271e | [
"CC0-1.0"
] | null | null | null | 2018/21/reverse_engineered.py | lvaughn/advent | ff3f727b8db1fd9b2a04aad5dcda9a6c8d1c271e | [
"CC0-1.0"
] | null | null | null | 2018/21/reverse_engineered.py | lvaughn/advent | ff3f727b8db1fd9b2a04aad5dcda9a6c8d1c271e | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
def simulate(reg_0, find_non_loop):
seen = set()
c = 0
last_unique_c = -1
while True:
a = c | 65536
c = reg_0
while True:
c = (((c + (a & 255)) & 16777215) * 65899) & 16777215
if a < 256:
if find_non_loop:
return c
else:
if c not in seen:
seen.add(c)
last_unique_c = c
break
else:
return last_unique_c
else:
a //= 256
print(simulate(7041048, True))
print(simulate(7041048, False))
| 22.258065 | 65 | 0.401449 | 75 | 690 | 3.533333 | 0.466667 | 0.113208 | 0.124528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161194 | 0.514493 | 690 | 30 | 66 | 23 | 0.629851 | 0.030435 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0 | 0 | 0.130435 | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b20eabd7816b307c80c7a57deaf784b914a0c831 | 2,619 | py | Python | model/State.py | BrandonTheBuilder/thermawesome | b2f2cb95e1181f05a112193be11baa18e10d39b1 | [
"MIT"
] | null | null | null | model/State.py | BrandonTheBuilder/thermawesome | b2f2cb95e1181f05a112193be11baa18e10d39b1 | [
"MIT"
] | null | null | null | model/State.py | BrandonTheBuilder/thermawesome | b2f2cb95e1181f05a112193be11baa18e10d39b1 | [
"MIT"
] | null | null | null | from CoolProp import CoolProp as CP
class State(object):
"""
The state of a fluid is defined with two unique intensive properties
this class keeps track of the state of the fluid and solves for all other
intensive properties.
"""
def __init__(self, fluid, **kwargs):
"""
fluid: The type of fluid this is.
flowRate: The mass flow rate of the stream, kg/s
kwargs: intensive properties to define the state.
The Properties that we are interested in are:
h, specific Enthalpy
u, specific Internal Energy
v, specific Volume
s, specific Entropy
m, specific mass flow rate
"""
self.defined = False
self.fluid = fluid
self.properties = dict()
if kwargs is not None:
self.properties.update(kwargs)
def define(self, **kwargs):
"""
Define the fluid state based off of the inputed properties
"""
#Make a list of defined properties
inputProp = []
if kwargs is not None:
self.properties.update(kwargs)
for key in self.properties.keys():
inputProp.extend([key.capitalize(), self.properties[key]])
inputProp.append(self.fluid)
try:
self.properties.update(
T = CP.PropsSI('T', *inputProp),
P = CP.PropsSI('P', *inputProp),
h = CP.PropsSI('H', *inputProp),
s = CP.PropsSI('S',*inputProp),
u = CP.PropsSI('U', *inputProp),
v = 1/CP.PropsSI('D', *inputProp))
self.defined = True
except Exception as ex:
self.defined = False
print ex
return self.defined
def exergy_f(self, t0, p0):
deadState = State('Water', T=t0, P=p0)
if deadState.define():
self._exergy_f = ((self.properties['h'] - deadState.properties['h'])
-t0*(self.properties['s']-deadState.properties['s']))
return self._exergy_f
else:
return False
def exergy(self, t0, p0):
deadState = State('Water', T=t0, P=p0)
if deadState.define():
self._exergy = ((self.properties['u'] - deadState.properties['u'])
+p0*(self.properties['v'] - deadState.properties['v'])
-t0*(self.properties['s']-deadState.properties['s']))
return self._exergy
else:
return False
def __add__(self, other):
pass
def isDefined(self):
pass
| 33.576923 | 82 | 0.544101 | 299 | 2,619 | 4.715719 | 0.317726 | 0.10922 | 0.042553 | 0.01844 | 0.221277 | 0.221277 | 0.221277 | 0.221277 | 0.221277 | 0.160284 | 0 | 0.00703 | 0.348225 | 2,619 | 78 | 83 | 33.576923 | 0.818981 | 0.0126 | 0 | 0.367347 | 0 | 0 | 0.013542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.040816 | 0.020408 | null | null | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b748129a257264ee78fbb33c2f52b2552698dcea | 2,418 | py | Python | CalibTracker/SiStripCommon/python/theBigNtuple_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | CalibTracker/SiStripCommon/python/theBigNtuple_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | CalibTracker/SiStripCommon/python/theBigNtuple_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from CalibTracker.SiStripCommon.ShallowEventDataProducer_cfi import *
from CalibTracker.SiStripCommon.ShallowDigisProducer_cfi import *
from CalibTracker.SiStripCommon.ShallowClustersProducer_cfi import *
from CalibTracker.SiStripCommon.ShallowTrackClustersProducer_cfi import *
from CalibTracker.SiStripCommon.ShallowRechitClustersProducer_cfi import *
from CalibTracker.SiStripCommon.ShallowTracksProducer_cfi import *
from RecoVertex.BeamSpotProducer.BeamSpot_cff import *
from RecoTracker.TrackProducer.TrackRefitters_cff import *
bigNtupleTrackCollectionTag = cms.InputTag("bigNtupleTracksRefit")
bigNtupleClusterCollectionTag = cms.InputTag("siStripClusters")
bigNtupleTracksRefit = RecoTracker.TrackProducer.TrackRefitter_cfi.TrackRefitter.clone(src = "generalTracks")
bigNtupleEventRun = shallowEventRun.clone()
bigNtupleDigis = shallowDigis.clone()
bigNtupleClusters = shallowClusters.clone(Clusters=bigNtupleClusterCollectionTag)
bigNtupleRecHits = shallowRechitClusters.clone(Clusters=bigNtupleClusterCollectionTag)
bigNtupleTrackClusters = shallowTrackClusters.clone(Tracks = bigNtupleTrackCollectionTag,Clusters=bigNtupleClusterCollectionTag)
bigNtupleTracks = shallowTracks.clone(Tracks = bigNtupleTrackCollectionTag)
bigShallowTree = cms.EDAnalyzer("ShallowTree",
outputCommands = cms.untracked.vstring(
'drop *',
'keep *_bigNtupleEventRun_*_*',
'keep *_bigNtupleDigis_*_*',
'keep *_bigNtupleClusters_*_*' ,
'keep *_bigNtupleRechits_*_*',
'keep *_bigNtupleTracks_*_*',
'keep *_bigNtupleTrackClusters_*_*'
)
)
from Configuration.StandardSequences.RawToDigi_Data_cff import *
from Configuration.StandardSequences.Reconstruction_cff import *
theBigNtuple = cms.Sequence( ( siPixelRecHits+siStripMatchedRecHits +
offlineBeamSpot +
bigNtupleTracksRefit)
* (bigNtupleEventRun +
bigNtupleClusters +
bigNtupleRecHits +
bigNtupleTracks +
bigNtupleTrackClusters
)
)
theBigNtupleDigi = cms.Sequence( siStripDigis + bigNtupleDigis )
| 43.178571 | 130 | 0.700165 | 150 | 2,418 | 11.086667 | 0.44 | 0.048106 | 0.10463 | 0.075165 | 0.114251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235318 | 2,418 | 55 | 131 | 43.963636 | 0.899405 | 0 | 0 | 0 | 0 | 0 | 0.096026 | 0.048427 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.261905 | 0 | 0.261905 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b74a946738ed6712ecf1be81551ad79c1bd928a1 | 1,401 | py | Python | tests/test_protocol.py | gimbas/openinput | 9cbb4b22aebe46dfc33ae9c56b164baa6c1fe693 | [
"MIT"
] | 38 | 2020-05-11T10:54:15.000Z | 2022-03-30T13:19:09.000Z | tests/test_protocol.py | gimbas/openinput | 9cbb4b22aebe46dfc33ae9c56b164baa6c1fe693 | [
"MIT"
] | 45 | 2020-04-21T23:52:22.000Z | 2022-02-19T20:29:27.000Z | tests/test_protocol.py | gimbas/openinput | 9cbb4b22aebe46dfc33ae9c56b164baa6c1fe693 | [
"MIT"
] | 5 | 2020-08-29T02:10:42.000Z | 2021-08-31T03:12:15.000Z | # SPDX-License-Identifier: MIT
# SPDX-FileCopyrightText: 2021 Filipe Laíns <lains@riseup.net>
def test_dispatch(basic_device):
basic_device.protocol_dispatch([0x03, 0x02, 0x01]) # unrelated
basic_device.protocol_dispatch([0x20]) # invalid length short
basic_device.protocol_dispatch([0x21]) # invalid length long
basic_device.hid_send.assert_not_called()
def test_fw_info_vendor(basic_device):
basic_device.protocol_dispatch([0x20, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00])
basic_device.hid_send.assert_called_with(
[0x21, 0x00, 0x01] + list(b'openinput-git') + [0x00] * 16
)
def test_fw_info_version(basic_device, fw_version):
basic_device.protocol_dispatch([0x20, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00])
basic_device.hid_send.assert_called_with(
[0x21, 0x00, 0x01] + list(fw_version.encode('ascii')) + [0x00] * (29 - len(fw_version))
)
def test_fw_info_device_name(basic_device):
basic_device.protocol_dispatch([0x20, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00, 0x00])
basic_device.hid_send.assert_called_with(
[0x21, 0x00, 0x01] + list(b'basic test device') + [0x00] * 12
)
def test_fw_info_unsupported(basic_device):
basic_device.protocol_dispatch([0x20, 0x00, 0x01, 0xFF, 0x00, 0x00, 0x00, 0x00])
basic_device.hid_send.assert_called_with(
[0x21, 0xFF, 0x01, 0x00, 0x01] + [0x00] * 27
)
| 32.581395 | 95 | 0.712348 | 194 | 1,401 | 4.85567 | 0.268041 | 0.198514 | 0.11465 | 0.200637 | 0.577495 | 0.510616 | 0.470276 | 0.428875 | 0.428875 | 0.269639 | 0 | 0.147737 | 0.164168 | 1,401 | 42 | 96 | 33.357143 | 0.656704 | 0.099929 | 0 | 0.16 | 0 | 0 | 0.027888 | 0 | 0 | 0 | 0.175299 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b74c264ab951da49d482e8b5b2b953e6b1285a3b | 792 | py | Python | tests/explainers/test_explainer.py | zduey/shap | 1bb8203f2d43f7552396a5f26167a258cbdc505c | [
"MIT"
] | 1 | 2021-03-03T11:00:32.000Z | 2021-03-03T11:00:32.000Z | tests/explainers/test_explainer.py | zduey/shap | 1bb8203f2d43f7552396a5f26167a258cbdc505c | [
"MIT"
] | null | null | null | tests/explainers/test_explainer.py | zduey/shap | 1bb8203f2d43f7552396a5f26167a258cbdc505c | [
"MIT"
] | null | null | null | """ Tests for Explainer class.
"""
import pytest
import shap
def test_wrapping_for_text_to_text_teacher_forcing_logits_model():
""" This tests using the Explainer class to auto choose a text to text setup.
"""
transformers = pytest.importorskip("transformers")
def f(x): # pylint: disable=unused-argument
pass
tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")
model = transformers.AutoModelForCausalLM.from_pretrained("gpt2")
wrapped_model = shap.models.TeacherForcingLogits(f, similarity_model=model, similarity_tokenizer=tokenizer)
masker = shap.maskers.Text(tokenizer, mask_token="...")
explainer = shap.Explainer(wrapped_model, masker)
assert shap.utils.safe_isinstance(explainer.masker, "shap.maskers.FixedComposite")
| 31.68 | 111 | 0.753788 | 92 | 792 | 6.304348 | 0.554348 | 0.048276 | 0.034483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002954 | 0.145202 | 792 | 24 | 112 | 33 | 0.853767 | 0.174242 | 0 | 0 | 0 | 0 | 0.078125 | 0.042188 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.166667 | false | 0.083333 | 0.25 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b74eef5240ddb793f5798e460265805a101c2233 | 486 | py | Python | examples/simpleform/app/forms.py | ezeev/Flask-AppBuilder | d95f0ed934272629ee44ad3241646fa7ba09cdf8 | [
"BSD-3-Clause"
] | 71 | 2016-11-02T06:45:42.000Z | 2021-11-15T12:33:48.000Z | examples/simpleform/app/forms.py | ezeev/Flask-AppBuilder | d95f0ed934272629ee44ad3241646fa7ba09cdf8 | [
"BSD-3-Clause"
] | 3 | 2021-06-08T23:39:54.000Z | 2022-03-12T00:50:13.000Z | examples/simpleform/app/forms.py | ezeev/Flask-AppBuilder | d95f0ed934272629ee44ad3241646fa7ba09cdf8 | [
"BSD-3-Clause"
] | 23 | 2016-11-02T06:45:44.000Z | 2022-02-08T14:55:13.000Z | from wtforms import Form, StringField
from wtforms.validators import DataRequired
from flask_appbuilder.fieldwidgets import BS3TextFieldWidget
from flask_appbuilder.forms import DynamicForm
class MyForm(DynamicForm):
field1 = StringField(('Field1'),
description=('Your field number one!'),
validators = [DataRequired()], widget=BS3TextFieldWidget())
field2 = StringField(('Field2'),
description=('Your field number two!'), widget=BS3TextFieldWidget())
| 37.384615 | 76 | 0.751029 | 47 | 486 | 7.723404 | 0.510638 | 0.060606 | 0.104683 | 0.143251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016908 | 0.148148 | 486 | 12 | 77 | 40.5 | 0.859903 | 0 | 0 | 0 | 0 | 0 | 0.115226 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b74ef20d1f5294557f6193fe99adc3a01e0224ec | 403 | py | Python | comms.py | kajusz/ufscreenadsclient | 0151edec0117161c522a87643eef2f7be214210c | [
"MIT"
] | null | null | null | comms.py | kajusz/ufscreenadsclient | 0151edec0117161c522a87643eef2f7be214210c | [
"MIT"
] | null | null | null | comms.py | kajusz/ufscreenadsclient | 0151edec0117161c522a87643eef2f7be214210c | [
"MIT"
] | null | null | null | import zmq
context = zmq.Context()
socket = context.socket(zmq.PAIR)
address = "tcp://127.0.0.1:5000"
def client(address):
socket.connect(address)
def server(address):
socket.bind(address)
def send(data):
socket.send_string(data)
def recv():
try:
return socket.recv(flags=zmq.NOBLOCK)
except zmq.Again as e:
return None
# print("No message received yet")
| 18.318182 | 45 | 0.66005 | 57 | 403 | 4.649123 | 0.596491 | 0.075472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031348 | 0.208437 | 403 | 21 | 46 | 19.190476 | 0.799373 | 0.079404 | 0 | 0 | 0 | 0 | 0.055249 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.066667 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7509767f47f312767bff162702df8fc8da90b4c | 2,821 | py | Python | applications/admin/controllers/gae.py | otaviocarvalho/forca-inf | 93b61f1d6988d4fb00a1736633d85b4f99a2f259 | [
"BSD-3-Clause"
] | 1 | 2017-03-28T21:31:51.000Z | 2017-03-28T21:31:51.000Z | applications/admin/controllers/gae.py | murray3/augmi-a | 9f8cff457fa3966d67d3752ccd86876b08bb19b1 | [
"BSD-3-Clause"
] | null | null | null | applications/admin/controllers/gae.py | murray3/augmi-a | 9f8cff457fa3966d67d3752ccd86876b08bb19b1 | [
"BSD-3-Clause"
] | 1 | 2022-03-10T19:53:44.000Z | 2022-03-10T19:53:44.000Z | ### this works on linux only
try:
import fcntl
import subprocess
import signal
import os
except:
session.flash='sorry, only on Unix systems'
redirect(URL(request.application,'default','site'))
forever=10**8
def kill():
p = cache.ram('gae_upload',lambda:None,forever)
if not p or p.poll()!=None:
return 'oops'
os.kill(p.pid, signal.SIGKILL)
cache.ram('gae_upload',lambda:None,-1)
def deploy():
if not os.path.exists(GAE_APPCFG):
redirect(URL(request.application,'default','site'))
regex = re.compile('^\w+$')
apps = sorted([(file.upper(), file) for file in \
os.listdir(apath(r=request)) if regex.match(file)])
options = [OPTION(item[1]) for item in apps]
form = FORM(TABLE(TR('Applications to deploy',
SELECT(_name='applications',_multiple='multiple',
_id='applications',*options)),
TR('GAE Email:',
INPUT(_name='email',requires=IS_EMAIL())),
TR('GAE Password:',
INPUT(_name='password',_type='password',
requires=IS_NOT_EMPTY())),
TR('',INPUT(_type='submit',value='deploy'))))
cmd = output = errors= ""
if form.accepts(request.vars,session):
try:
kill()
except:
pass
ignore_apps = [item[1] for item in apps \
if not item[1] in request.vars.applications]
regex = re.compile('\(applications/\(.*')
yaml = apath('../app.yaml', r=request)
data=open(yaml,'r').read()
data = regex.sub('(applications/(%s)/.*)|' % '|'.join(ignore_apps),data)
open(yaml,'w').write(data)
path = request.env.web2py_path
cmd = '%s --email=%s --passin update %s' % \
(GAE_APPCFG, form.vars.email, path)
p = cache.ram('gae_upload',
lambda s=subprocess,c=cmd:s.Popen(c, shell=True,
stdin=s.PIPE,
stdout=s.PIPE,
stderr=s.PIPE, close_fds=True),-1)
p.stdin.write(form.vars.password)
fcntl.fcntl(p.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
fcntl.fcntl(p.stderr.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
return dict(form=form,command=cmd)
def callback():
p = cache.ram('gae_upload',lambda:None,forever)
if not p or p.poll()!=None:
return '<done/>'
try:
output = p.stdout.read()
except:
output=''
try:
errors = p.stderr.read()
except:
errors=''
return (output+errors).replace('\n','<br/>')
| 36.636364 | 90 | 0.515066 | 326 | 2,821 | 4.383436 | 0.383436 | 0.022393 | 0.030791 | 0.047586 | 0.237229 | 0.237229 | 0.120364 | 0.081176 | 0.081176 | 0.081176 | 0 | 0.004795 | 0.334633 | 2,821 | 76 | 91 | 37.118421 | 0.756526 | 0.008508 | 0 | 0.202899 | 0 | 0 | 0.110992 | 0.008235 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0.072464 | 0.057971 | 0 | 0.15942 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b757a454248faaffeb488872e86cf07d801bf71c | 1,355 | py | Python | resources/lib/IMDbPY/bin/get_first_movie.py | bopopescu/ServerStatus | a883598248ad6f5273eb3be498e3b04a1fab6510 | [
"MIT"
] | 1 | 2017-11-02T06:06:39.000Z | 2017-11-02T06:06:39.000Z | resources/lib/IMDbPY/bin/get_first_movie.py | bopopescu/ServerStatus | a883598248ad6f5273eb3be498e3b04a1fab6510 | [
"MIT"
] | 1 | 2015-04-21T22:05:02.000Z | 2015-04-22T22:27:15.000Z | resources/lib/IMDbPY/bin/get_first_movie.py | GetSomeBlocks/Score_Soccer | a883598248ad6f5273eb3be498e3b04a1fab6510 | [
"MIT"
] | 4 | 2017-11-01T19:24:31.000Z | 2018-09-13T00:05:41.000Z | #!/usr/bin/env python
"""
get_first_movie.py
Usage: get_first_movie "movie title"
Search for the given title and print the best matching result.
"""
import sys
# Import the IMDbPY package.
try:
import imdb
except ImportError:
print 'You bad boy! You need to install the IMDbPY package!'
sys.exit(1)
if len(sys.argv) != 2:
print 'Only one argument is required:'
print ' %s "movie title"' % sys.argv[0]
sys.exit(2)
title = sys.argv[1]
i = imdb.IMDb()
in_encoding = sys.stdin.encoding or sys.getdefaultencoding()
out_encoding = sys.stdout.encoding or sys.getdefaultencoding()
title = unicode(title, in_encoding, 'replace')
try:
# Do the search, and get the results (a list of Movie objects).
results = i.search_movie(title)
except imdb.IMDbError, e:
print "Probably you're not connected to Internet. Complete error report:"
print e
sys.exit(3)
if not results:
print 'No matches for "%s", sorry.' % title.encode(out_encoding, 'replace')
sys.exit(0)
# Print only the first result.
print ' Best match for "%s"' % title.encode(out_encoding, 'replace')
# This is a Movie instance.
movie = results[0]
# So far the Movie object only contains basic information like the
# title and the year; retrieve main information:
i.update(movie)
print movie.summary().encode(out_encoding, 'replace')
| 22.583333 | 79 | 0.702583 | 206 | 1,355 | 4.567961 | 0.446602 | 0.029756 | 0.054198 | 0.076514 | 0.061637 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007266 | 0.187454 | 1,355 | 59 | 80 | 22.966102 | 0.847411 | 0.20369 | 0 | 0.071429 | 0 | 0 | 0.260361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.107143 | null | null | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7592e3ec4b70120c5e12cf12590570b289d59a3 | 14,079 | py | Python | ID3.py | idiomatic/id3.py | 574b2a6bd52897e07c220198d451e5971577fc02 | [
"MIT"
] | null | null | null | ID3.py | idiomatic/id3.py | 574b2a6bd52897e07c220198d451e5971577fc02 | [
"MIT"
] | null | null | null | ID3.py | idiomatic/id3.py | 574b2a6bd52897e07c220198d451e5971577fc02 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- mode: python -*-
import re
import struct
import types
def items_in_order(dict, order=[]):
"""return all items of dict, but starting in the specified order."""
done = { }
items = [ ]
for key in order + dict.keys():
if not done.has_key(key) and dict.has_key(key):
done[key] = None
items.append((key, dict[key]))
return items
class UnsupportedID3:
pass
class InvalidMP3:
pass
genres = [
'Blues', 'Classic Rock', 'Country', 'Dance', 'Disco', 'Funk',
'Grunge', 'Hip-Hop', 'Jazz', 'Metal', 'New Age', 'Oldies', 'Other',
'Pop', 'R&B', 'Rap', 'Reggae', 'Rock', 'Techno', 'Industrial',
'Alternative', 'Ska', 'Death Metal', 'Pranks', 'Soundtrack',
'Euro-Techno', 'Ambient', 'Trip-Hop', 'Vocal', 'Jazz+Funk', 'Fusion',
'Trance', 'Classical', 'Instrumental', 'Acid', 'House', 'Game',
'Sound Clip', 'Gospel', 'Noise', 'Alt. Rock', 'Bass', 'Soul',
'Punk', 'Space', 'Meditative', 'Instrum. Pop', 'Instrum. Rock',
'Ethnic', 'Gothic', 'Darkwave', 'Techno-Indust.', 'Electronic',
'Pop-Folk', 'Eurodance', 'Dream', 'Southern Rock', 'Comedy',
'Cult', 'Gangsta', 'Top 40', 'Christian Rap', 'Pop/Funk', 'Jungle',
'Native American', 'Cabaret', 'New Wave', 'Psychadelic', 'Rave',
'Showtunes', 'Trailer', 'Lo-Fi', 'Tribal', 'Acid Punk', 'Acid Jazz',
'Polka', 'Retro', 'Musical', 'Rock & Roll', 'Hard Rock', 'Folk',
'Folk/Rock', 'National Folk', 'Swing', 'Fusion', 'Bebob', 'Latin',
'Revival', 'Celtic', 'Bluegrass', 'Avantgarde', 'Gothic Rock',
'Progress. Rock', 'Psychadel. Rock', 'Symphonic Rock', 'Slow Rock',
'Big Band', 'Chorus', 'Easy Listening', 'Acoustic', 'Humour',
'Speech', 'Chanson', 'Opera', 'Chamber Music', 'Sonata', 'Symphony',
'Booty Bass', 'Primus', 'Porn Groove', 'Satire', 'Slow Jam',
'Club', 'Tango', 'Samba', 'Folklore', 'Ballad', 'Power Ballad',
'Rhythmic Soul', 'Freestyle', 'Duet', 'Punk Rock', 'Drum Solo',
'A Capella', 'Euro-House', 'Dance Hall', 'Goa', 'Drum & Bass',
'Club-House', 'Hardcore', 'Terror', 'Indie', 'BritPop', 'Negerpunk',
'Polsk Punk', 'Beat', 'Christian Gangsta Rap', 'Heavy Metal',
'Black Metal', 'Crossover', 'Contemporary Christian', 'Christian Rock',
'Merengue', 'Salsa', 'Thrash Metal', 'Anime', 'Jpop', 'Synthpop',
]
frame_id_names = {
'BUF' : 'Recommended buffer size',
'CNT' : 'Play counter',
'COM' : 'Comments',
'CRA' : 'Audio encryption',
'CRM' : 'Encrypted meta frame',
'ETC' : 'Event timing codes',
'EQU' : 'Equalization',
'GEO' : 'General encapsulated object',
'IPL' : 'Involved people list',
'LNK' : 'Linked information',
'MCI' : 'Music CD Identifier',
'MLL' : 'MPEG location lookup table',
'PIC' : 'Attached picture',
'POP' : 'Popularimeter',
'REV' : 'Reverb',
'RVA' : 'Relative volume adjustment',
'SLT' : 'Synchronized lyric/text',
'STC' : 'Synced tempo codes',
'TAL' : 'Title',
'TBP' : 'Beats per minute',
'TCM' : 'Composer',
'TCO' : 'Content type',
'TCR' : 'Copyright message',
'TDA' : 'Date',
'TDY' : 'Playlist delay',
'TEN' : 'Encoded by',
'TFT' : 'File type',
'TIM' : 'Time',
'TKE' : 'Initial key',
'TLA' : 'Language(s)',
'TLE' : 'Length',
'TMT' : 'Media type',
'TOA' : 'Original artist(s)/performer(s)',
'TOF' : 'Original filename',
'TOL' : 'Original Lyricist(s)/text writer(s)',
'TOR' : 'Original release year',
'TOT' : 'Original album/Movie/Show title',
'TP1' : 'Lead artist(s)/Lead performer(s)/Soloist(s)/Performing group',
'TP2' : 'Band/Orchestra/Accompaniment',
'TP3' : 'Conductor/Performer refinement',
'TP4' : 'Interpreted, remixed, or otherwise modified by',
'TPA' : 'Part of a set',
'TPB' : 'Publisher',
'TRC' : 'ISRC (International Standard Recording Code)',
'TRD' : 'Recording dates',
'TRK' : 'Track number/Position in set',
'TSI' : 'Size',
'TSS' : 'Software/hardware and settings used for encoding',
'TT1' : 'Content group description',
'TT2' : 'Title/Songname/Content description',
'TT3' : 'Subtitle/Description refinement',
'TXT' : 'Lyricist/text writer',
'TXX' : 'User defined text information frame',
'TYE' : 'Year',
'UFI' : 'Unique file identifier',
'ULT' : 'Unsychronized lyric/text transcription',
'WAF' : 'Official audio file webpage',
'WAR' : 'Official artist/performer webpage',
'WAS' : 'Official audio source webpage',
'WCM' : 'Commercial information',
'WCP' : 'Copyright/Legal information',
'WPB' : 'Publishers official webpage',
'WXX' : 'User defined URL link frame',
}
text_frame_ids = ( 'TT1', 'TT2', 'TT3', 'TP1', 'TP2', 'TP3', 'TP4',
'TCM', 'TXT', 'TLA', 'TCO', 'TAL', 'TPA', 'TRK',
'TRC', 'TYE', 'TDA', 'TIM', 'TRD', 'TMT', 'TFT',
'TBP', 'TCR', 'TPB', 'TEN', 'TSS', 'TOF', 'TLE',
'TSI', 'TDY', 'TKE', 'TOT', 'TOA', 'TOL', 'TOR',
'IPL' )
_genre_number_re = re.compile("^\((\d+)\)$")
_track_re = re.compile("^(\d+)/(\d+)$")
def _nts(s):
null = s.find('\0')
if null:
return s[:null]
return s
def _unpack_non_negative_octet_28_bit_int(n):
return (((n & 0x7f000000) >> 3)
+ ((n & 0x7f0000) >> 2)
+ ((n & 0x7f00) >> 1)
+ (n & 0x7f))
def _pack_non_negative_ocket_28_bit_int(n):
return (((n & 0xfe00000) << 3)
+ ((n & 0x1fc000) << 2)
+ ((n & 0x3f80) << 1)
+ (n & 0x7f))
def _unpack_genre(s):
m = _genre_number_re.match(s)
if m:
return genres[int(m.group(1))]
return s
def _pack_genre(s):
s = s.lower()
for i in range(len(genres)):
if s == genres[i].lower():
return "(%d)" % (i,)
return s
def _unpack_track(s):
m = _track_re.match(s)
if m:
return (int(m.group(1)), int(m.group(2)))
return (int(track), 0)
def _pack_track(track, tracks):
return "%d/%d" % (track, tracks)
def _unpack_str(s):
if s[0] == "\0":
return s[1:]
raise UnsupportedID3
def _pack_str(s):
if type(s) is types.StringType:
return "\0" + s
raise UnsupportedID3
class id3_file:
def __init__(self, f):
self._f = f
self._order = [ ]
self._version = None
self._read_length = 0
self._write_length = 0
def _read(self, length):
self._read_length += length
return self._f.read(length)
def _read_unpack(self, format, exception=InvalidMP3):
data = self._read(struct.calcsize(format))
if not data:
raise exception
data = struct.unpack(format, data)
if len(data) == 1:
return data[0]
return data
def _read_id3_v1(self):
id3 = self._read_unpack('30s30s30s4s30sB')
title, artist, album, year, comment, genre = id3
track = ord(comment[-1])
if track and ord(comment[-2]) == 0:
comment = comment[:-2]
else:
track = None
self.title = _nts(title)
self.artist = _nts(artist)
self.album = _nts(album)
self.year = _nts(year)
self.genre = genres[genre]
self.comment = _nts(comment)
if track is None:
self.version = self._version = "1.0"
else:
self.version = self._version = "1.1"
self.track = track
def _read_id3_v2(self):
major_version, minor_version, flags = self._read_unpack('>BBB')
self._version = "2.%d.%d" % (major_version, minor_version)
self.version = self._version
if flags & 0x20:
# extended header
raise UnsupportedID3
if major_version == 2:
return self._read_id3_v2_2()
elif major_version == 3:
return self._read_id3_v2_3()
else:
raise UnsupportedID3
def _read_id3_v2_3(self):
self.raw = raw = { }
self._order = order = [ ]
size = self._read_unpack('>L')
size = _unpack_non_negative_octet_28_bit_int(size)
#print "size", size
while size > 0:
id, frame_size, flags = self._read_unpack('>4sLH')
size -= 10
if flags:
raise UnsupportedID3
if frame_size:
size -= frame_size
data = self._read(frame_size)
if id == "\0\0\0\0":
continue
if raw.has_key(id):
# duplicates?
raise UnsupportedID3
raw[id] = data
order.append(id)
self.title = _unpack_str(raw.get('TIT2', "\0"))
self.artist = _unpack_str(raw.get('TPE1', "\0"))
self.album = _unpack_str(raw.get('TALB', "\0"))
self.year = _unpack_str(raw.get('TYER', "\0"))
self.comment = _unpack_str(raw.get('COMM', "\0"))
self.genre = _unpack_genre(_unpack_str(raw.get('TCON', "\0")))
track = _unpack_str(raw.get('TRCK', "\0000"))
self.track, self.tracks = _unpack_track(track)
def _read_id3_v2_2(self):
self.raw = raw = { }
size = self._read_unpack('>L')
size = _unpack_non_negative_octet_28_bit_int(size)
# frames
while size > 0:
id, frame_size = self._read_unpack('>3s3s')
frame_size = (struct.unpack('>L', '\0' + frame_size))[0]
size -= 6
if frame_size:
size -= frame_size
data = self._read(frame_size)
if raw.has_key(id):
# duplicates?
raise UnsupportedID3
raw[id] = data
if raw.has_key('UFI'):
raw['UFI'] = (raw['UFI'][0], raw['UFI'][1:])
for id in text_frame_ids:
if raw.has_key(id):
raw[id] = _unpack_str(raw[id])
if raw.has_key('TXX'):
raw['TXX'] = (_unpack_str(raw['TXX']).split('\0', 1))
if raw.has_key('COM'):
raw['COM'] = (_unpack_str(raw['COM']).split('\0'))
self.title = raw.get('TT2', '')
self.artist = raw.get('TP1', '')
self.album = raw.get('TAL', '')
self.year = raw.get('TYE', '')
self.comment = raw.get('COM', '')
self.genre = _unpack_genre(raw.get('TCO', ''))
track = raw.get('TRK', '0')
self.track, self.tracks = _unpack_track(track)
def read(self):
magic = self._read_unpack('>3s')
if magic == 'ID3':
self._read_id3_v2()
return self
self._f.seek(128, 2) # last 128 bytes
tag = self._read_unpack('3s')
if tag == 'TAG':
self._read_id3_v1()
return self
raise InvalidMP3
def _write(self, data):
self._write_length += len(data)
self._f.write(data)
def _pad(self):
self._write("\0" * (self._read_length - self._write_length))
self._write_length = 0
def _write_pack(self, format, *values):
data = apply(struct.pack, [ format ] + list(values))
return self._write(data)
def _write_id3_v2_3(self):
self._f.seek(0)
raw = self.raw.copy()
if self.title:
raw['TIT2'] = _pack_str(self.title)
if self.artist:
raw['TPE1'] = _pack_str(self.artist)
if self.album:
raw['TALB'] = _pack_str(self.album)
if self.year:
raw['TYER'] = _pack_str(self.year)
if self.comment:
raw['COMM'] = _pack_str(self.comment)
if self.genre:
raw['TCON'] = _pack_str(_pack_genre(self.genre))
if self.track:
raw['TRCK'] = _pack_str(_pack_track(self.track, self.tracks))
# order of:
# 1) original key order
# 2) title, artist, album, year, comment, genre, tracks
# 3) anything else added since read()
front_keys = [ 'TIT2', 'TPE1', 'TALB', 'TYER', 'COMM', 'TCON', 'TRCK' ]
data = [ ]
for k,v in items_in_order(raw, self._order + front_keys):
data.append(struct.pack('>4sLH', k, len(v), 0x0) + v)
data = "".join(data)
if len(data) > self._read_length:
raise InvalidMP3
length = max(len(data), self._read_length - 10)
self._write('ID3')
self._write_pack('>BBB', 3, 0, 0x0)
self._write_pack('>L', _pack_non_negative_ocket_28_bit_int(length))
self._write(data)
self._pad()
def _write_id3_v2_2(self):
raise UnsupportedID3
self._f.seek(0)
pass
def _write_id3_v1(self):
raise UnsupportedID3
self._f.seek(128, 2)
pass
def write(self):
version = (getattr(self, '_version', None)
or getattr(self, 'version', None))
if version == "2.3.0":
self._write_id3_v2_3()
elif version == "2.2.0":
self._write_id3_v2_2()
elif version == "1.1":
self._write_id3_v1()
elif version == "1.0":
self._write_id3_v1()
#def info(filename):
# f = open(filename, 'rb')
# try:
# return id3().read(f).attributes()
# finally:
# f.close()
# composer
# disc
# part_of_a_compilation
# volume_adjustment
# equalizer_preset
# my_rating
# start_time
# stop_time
def test(filename="2_3.mp3"):
import StringIO
f = open(filename)
i = id3_file(f)
i.read()
i._f = StringIO.StringIO()
i.write()
v = i._f.getvalue()
f.seek(0)
v2 = f.read(len(v))
f.close()
return v == v2
def scan():
import os
def walkfn(arg, dir, files):
for filename in files:
if filename[-4:] == '.mp3':
filename = os.path.join(dir, filename)
if not test(filename):
print filename
os.path.walk('.', walkfn, 0)
if __name__ == '__main__':
scan()
| 32.291284 | 79 | 0.539882 | 1,698 | 14,079 | 4.304476 | 0.295053 | 0.021891 | 0.016418 | 0.014366 | 0.138596 | 0.099193 | 0.065399 | 0.053633 | 0.053633 | 0.04214 | 0 | 0.025176 | 0.294694 | 14,079 | 435 | 80 | 32.365517 | 0.710876 | 0.033383 | 0 | 0.156863 | 0 | 0 | 0.230263 | 0.007843 | 0 | 0 | 0.004809 | 0 | 0 | 0 | null | null | 0.011204 | 0.014006 | null | null | 0.002801 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b75a00768c2cceed8ca46774029ad378bc7cc2e6 | 1,180 | py | Python | workflow/pnmlpy/pmnl_model.py | SODALITE-EU/verification | 584e3c61bc20e65944e34b875eb5ed0ec02d6fa9 | [
"Apache-2.0"
] | null | null | null | workflow/pnmlpy/pmnl_model.py | SODALITE-EU/verification | 584e3c61bc20e65944e34b875eb5ed0ec02d6fa9 | [
"Apache-2.0"
] | 2 | 2020-03-30T12:02:32.000Z | 2021-04-20T19:09:25.000Z | workflow/pnmlpy/pmnl_model.py | SODALITE-EU/verification | 584e3c61bc20e65944e34b875eb5ed0ec02d6fa9 | [
"Apache-2.0"
] | null | null | null | from xml.dom import minidom
from xml.etree import ElementTree
from xml.etree.cElementTree import Element, SubElement, ElementTree, tostring
class PNMLModelGenerator:
def generate(self, tasks):
top = Element('pnml')
child = SubElement(top, 'net',
attrib={'id': "net1", 'type': "http://www.pnml.org/version-2009/grammar/pnmlcoremodel"})
page = SubElement(child, 'page',
attrib={'id': "n0"})
index = 0
for task in tasks:
place = SubElement(page, 'place',
attrib={'id': "p" + str(index)})
name = SubElement(place, 'name')
text = SubElement(name, 'text')
text.text = task.name
index += 1
finalmarkings = SubElement(child, 'finalmarkings')
markings = SubElement(finalmarkings, 'markings')
tree = ElementTree(top)
return prettify(top)
def prettify(elem):
"""Return a pretty-printed XML string for the Element.
"""
rough_string = tostring(elem, 'utf-8')
reparsed = minidom.parseString(rough_string)
return reparsed.toprettyxml(indent=" ")
| 31.891892 | 115 | 0.582203 | 121 | 1,180 | 5.661157 | 0.520661 | 0.030657 | 0.035037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01083 | 0.295763 | 1,180 | 36 | 116 | 32.777778 | 0.813478 | 0.04322 | 0 | 0 | 1 | 0 | 0.110018 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.115385 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b75eb4207857101d04d38eb0f52b4294fd616690 | 1,413 | py | Python | wavefront_reader/reading/readobjfile.py | SimLeek/wavefront_reader | 4504f5b6185a03fcdd1722dbea660f7af35b8b8c | [
"MIT"
] | null | null | null | wavefront_reader/reading/readobjfile.py | SimLeek/wavefront_reader | 4504f5b6185a03fcdd1722dbea660f7af35b8b8c | [
"MIT"
] | null | null | null | wavefront_reader/reading/readobjfile.py | SimLeek/wavefront_reader | 4504f5b6185a03fcdd1722dbea660f7af35b8b8c | [
"MIT"
] | null | null | null | from wavefront_reader.wavefront_classes.objfile import ObjFile
from .readface import read_face
def read_objfile(fname):
"""Takes .obj filename and return an ObjFile class."""
obj_file = ObjFile()
with open(fname) as f:
lines = f.read().splitlines()
if 'OBJ' not in lines[0]:
raise ValueError("File not .obj-formatted.")
# todo: assumes one object per .obj file, which is wrong
# todo: doesn't properly ignore comments
for line in lines:
if line:
prefix, value = line.split(' ', 1)
if prefix == 'o':
obj_file.add_prop(value)
if obj_file.has_prop():
if prefix == 'v':
obj_file.last_obj_prop.vertices.append([float(val) for val in value.split(' ')])
elif prefix == 'vn':
obj_file.last_obj_prop.vertex_normals.append([float(val) for val in value.split(' ')])
elif prefix == 'vt':
obj_file.last_obj_prop.vertex_textures.append([float(val) for val in value.split(' ')])
elif prefix == 'usemtl':
obj_file.last_obj_prop.material_name = value
elif prefix == 'f':
obj_file.last_obj_prop.faces.append(read_face(value, obj_file.last_obj_prop))
else:
obj_file.misc[prefix] = value
return obj_file
| 39.25 | 107 | 0.573248 | 180 | 1,413 | 4.316667 | 0.405556 | 0.108108 | 0.084942 | 0.108108 | 0.316602 | 0.223938 | 0.162162 | 0.162162 | 0.162162 | 0.162162 | 0 | 0.002079 | 0.319179 | 1,413 | 35 | 108 | 40.371429 | 0.805613 | 0.101203 | 0 | 0 | 0 | 0 | 0.034838 | 0 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.037037 | false | 0 | 0.074074 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b760116d8d8fe2d046e6af340b2d6bd9cb6fc8e2 | 157 | py | Python | constants.py | Guedelho/snake-ai | 176db202aaec76ff5c7cac6cc9d7a7bc46ff2b16 | [
"MIT"
] | null | null | null | constants.py | Guedelho/snake-ai | 176db202aaec76ff5c7cac6cc9d7a7bc46ff2b16 | [
"MIT"
] | null | null | null | constants.py | Guedelho/snake-ai | 176db202aaec76ff5c7cac6cc9d7a7bc46ff2b16 | [
"MIT"
] | null | null | null | # Directions
UP = 'UP'
DOWN = 'DOWN'
LEFT = 'LEFT'
RIGHT = 'RIGHT'
# Colors
RED = (255, 0, 0)
BLACK = (0, 0, 0)
GREEN = (0, 255, 0)
WHITE = (255, 255, 255)
| 13.083333 | 23 | 0.547771 | 26 | 157 | 3.307692 | 0.461538 | 0.069767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183333 | 0.235669 | 157 | 11 | 24 | 14.272727 | 0.533333 | 0.10828 | 0 | 0 | 0 | 0 | 0.109489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b761fb951040af2347c2dd2aa478c82dca9ff08e | 10,460 | py | Python | src/ebay_rest/api/buy_browse/models/refinement.py | matecsaj/ebay_rest | dd23236f39e05636eff222f99df1e3699ce47d4a | [
"MIT"
] | 3 | 2021-12-12T04:28:03.000Z | 2022-03-10T03:29:18.000Z | src/ebay_rest/api/buy_browse/models/refinement.py | jdavv/ebay_rest | 20fc88c6aefdae9ab90f9c1330e79abddcd750cd | [
"MIT"
] | 33 | 2021-06-16T20:44:36.000Z | 2022-03-30T14:55:06.000Z | src/ebay_rest/api/buy_browse/models/refinement.py | jdavv/ebay_rest | 20fc88c6aefdae9ab90f9c1330e79abddcd750cd | [
"MIT"
] | 7 | 2021-06-03T09:30:23.000Z | 2022-03-08T19:51:33.000Z | # coding: utf-8
"""
Browse API
<p>The Browse API has the following resources:</p> <ul> <li><b> item_summary: </b> Lets shoppers search for specific items by keyword, GTIN, category, charity, product, or item aspects and refine the results by using filters, such as aspects, compatibility, and fields values.</li> <li><b> search_by_image: </b><a href=\"https://developer.ebay.com/api-docs/static/versioning.html#experimental\" target=\"_blank\"><img src=\"/cms/img/docs/experimental-icon.svg\" class=\"legend-icon experimental-icon\" alt=\"Experimental Release\" title=\"Experimental Release\" /> (Experimental)</a> Lets shoppers search for specific items by image. You can refine the results by using URI parameters and filters.</li> <li><b> item: </b> <ul><li>Lets you retrieve the details of a specific item or all the items in an item group, which is an item with variations such as color and size and check if a product is compatible with the specified item, such as if a specific car is compatible with a specific part.</li> <li>Provides a bridge between the eBay legacy APIs, such as <b> Finding</b>, and the RESTful APIs, which use different formats for the item IDs.</li> </ul> </li> <li> <b> shopping_cart: </b> <a href=\"https://developer.ebay.com/api-docs/static/versioning.html#experimental\" target=\"_blank\"><img src=\"/cms/img/docs/experimental-icon.svg\" class=\"legend-icon experimental-icon\" alt=\"Experimental Release\" title=\"Experimental Release\" /> (Experimental)</a> <a href=\"https://developer.ebay.com/api-docs/static/versioning.html#limited\" target=\"_blank\"> <img src=\"/cms/img/docs/partners-api.svg\" class=\"legend-icon partners-icon\" title=\"Limited Release\" alt=\"Limited Release\" />(Limited Release)</a> Provides the ability for eBay members to see the contents of their eBay cart, and add, remove, and change the quantity of items in their eBay cart. <b> Note: </b> This resource is not available in the eBay API Explorer.</li></ul> <p>The <b> item_summary</b>, <b> search_by_image</b>, and <b> item</b> resource calls require an <a href=\"/api-docs/static/oauth-client-credentials-grant.html\">Application access token</a>. The <b> shopping_cart</b> resource calls require a <a href=\"/api-docs/static/oauth-authorization-code-grant.html\">User access token</a>.</p> # noqa: E501
OpenAPI spec version: v1.11.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class Refinement(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'aspect_distributions': 'list[AspectDistribution]',
'buying_option_distributions': 'list[BuyingOptionDistribution]',
'category_distributions': 'list[CategoryDistribution]',
'condition_distributions': 'list[ConditionDistribution]',
'dominant_category_id': 'str'
}
attribute_map = {
'aspect_distributions': 'aspectDistributions',
'buying_option_distributions': 'buyingOptionDistributions',
'category_distributions': 'categoryDistributions',
'condition_distributions': 'conditionDistributions',
'dominant_category_id': 'dominantCategoryId'
}
def __init__(self, aspect_distributions=None, buying_option_distributions=None, category_distributions=None, condition_distributions=None, dominant_category_id=None): # noqa: E501
"""Refinement - a model defined in Swagger""" # noqa: E501
self._aspect_distributions = None
self._buying_option_distributions = None
self._category_distributions = None
self._condition_distributions = None
self._dominant_category_id = None
self.discriminator = None
if aspect_distributions is not None:
self.aspect_distributions = aspect_distributions
if buying_option_distributions is not None:
self.buying_option_distributions = buying_option_distributions
if category_distributions is not None:
self.category_distributions = category_distributions
if condition_distributions is not None:
self.condition_distributions = condition_distributions
if dominant_category_id is not None:
self.dominant_category_id = dominant_category_id
@property
def aspect_distributions(self):
"""Gets the aspect_distributions of this Refinement. # noqa: E501
An array of containers for the all the aspect refinements. # noqa: E501
:return: The aspect_distributions of this Refinement. # noqa: E501
:rtype: list[AspectDistribution]
"""
return self._aspect_distributions
@aspect_distributions.setter
def aspect_distributions(self, aspect_distributions):
"""Sets the aspect_distributions of this Refinement.
An array of containers for the all the aspect refinements. # noqa: E501
:param aspect_distributions: The aspect_distributions of this Refinement. # noqa: E501
:type: list[AspectDistribution]
"""
self._aspect_distributions = aspect_distributions
@property
def buying_option_distributions(self):
"""Gets the buying_option_distributions of this Refinement. # noqa: E501
An array of containers for the all the buying option refinements. # noqa: E501
:return: The buying_option_distributions of this Refinement. # noqa: E501
:rtype: list[BuyingOptionDistribution]
"""
return self._buying_option_distributions
@buying_option_distributions.setter
def buying_option_distributions(self, buying_option_distributions):
"""Sets the buying_option_distributions of this Refinement.
An array of containers for the all the buying option refinements. # noqa: E501
:param buying_option_distributions: The buying_option_distributions of this Refinement. # noqa: E501
:type: list[BuyingOptionDistribution]
"""
self._buying_option_distributions = buying_option_distributions
@property
def category_distributions(self):
"""Gets the category_distributions of this Refinement. # noqa: E501
An array of containers for the all the category refinements. # noqa: E501
:return: The category_distributions of this Refinement. # noqa: E501
:rtype: list[CategoryDistribution]
"""
return self._category_distributions
@category_distributions.setter
def category_distributions(self, category_distributions):
"""Sets the category_distributions of this Refinement.
An array of containers for the all the category refinements. # noqa: E501
:param category_distributions: The category_distributions of this Refinement. # noqa: E501
:type: list[CategoryDistribution]
"""
self._category_distributions = category_distributions
@property
def condition_distributions(self):
"""Gets the condition_distributions of this Refinement. # noqa: E501
An array of containers for the all the condition refinements. # noqa: E501
:return: The condition_distributions of this Refinement. # noqa: E501
:rtype: list[ConditionDistribution]
"""
return self._condition_distributions
@condition_distributions.setter
def condition_distributions(self, condition_distributions):
"""Sets the condition_distributions of this Refinement.
An array of containers for the all the condition refinements. # noqa: E501
:param condition_distributions: The condition_distributions of this Refinement. # noqa: E501
:type: list[ConditionDistribution]
"""
self._condition_distributions = condition_distributions
@property
def dominant_category_id(self):
"""Gets the dominant_category_id of this Refinement. # noqa: E501
The identifier of the category that most of the items are part of. # noqa: E501
:return: The dominant_category_id of this Refinement. # noqa: E501
:rtype: str
"""
return self._dominant_category_id
@dominant_category_id.setter
def dominant_category_id(self, dominant_category_id):
"""Sets the dominant_category_id of this Refinement.
The identifier of the category that most of the items are part of. # noqa: E501
:param dominant_category_id: The dominant_category_id of this Refinement. # noqa: E501
:type: str
"""
self._dominant_category_id = dominant_category_id
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(Refinement, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, Refinement):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 46.488889 | 2,332 | 0.67782 | 1,255 | 10,460 | 5.490837 | 0.185657 | 0.032506 | 0.046437 | 0.067334 | 0.493542 | 0.381512 | 0.368887 | 0.306051 | 0.239588 | 0.202728 | 0 | 0.011807 | 0.230784 | 10,460 | 224 | 2,333 | 46.696429 | 0.844643 | 0.505927 | 0 | 0.071429 | 0 | 0 | 0.104356 | 0.072368 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163265 | false | 0 | 0.030612 | 0 | 0.336735 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b76732831186e479ef5311c8854a1d2a5b3efda3 | 18,186 | py | Python | bioshareX/api/views.py | amschaal/bioshare | 7ac5da6874f70605ded8757c46fd9629d2d3d18b | [
"MIT"
] | 7 | 2015-12-10T23:23:53.000Z | 2018-08-22T11:00:28.000Z | bioshareX/api/views.py | amschaal/bioshare | 7ac5da6874f70605ded8757c46fd9629d2d3d18b | [
"MIT"
] | 61 | 2015-12-11T01:26:26.000Z | 2021-10-05T00:25:20.000Z | bioshareX/api/views.py | amschaal/bioshare | 7ac5da6874f70605ded8757c46fd9629d2d3d18b | [
"MIT"
] | 2 | 2019-05-31T16:33:37.000Z | 2021-05-03T20:51:25.000Z | # Create your views here.
from django.core.urlresolvers import reverse
from django.http.response import JsonResponse, HttpResponse
from settings.settings import AUTHORIZED_KEYS_FILE, SITE_URL
from bioshareX.models import Share, SSHKey, MetaData, Tag
from bioshareX.forms import MetaDataForm, json_form_validate
from guardian.shortcuts import get_perms, get_users_with_perms, remove_perm, assign_perm
from bioshareX.utils import JSONDecorator, json_response, json_error, share_access_decorator, safe_path_decorator, validate_email, fetchall,\
test_path, du
from django.contrib.auth.models import User, Group
from django.db.models import Q
import os
from rest_framework.decorators import api_view, detail_route, throttle_classes,\
action
from bioshareX.forms import ShareForm
from guardian.decorators import permission_required
from bioshareX.utils import ajax_login_required, email_users
from rest_framework import generics, viewsets, status
from bioshareX.models import ShareLog, Message
from bioshareX.api.serializers import ShareLogSerializer, ShareSerializer,\
GroupSerializer, UserSerializer, MessageSerializer
from rest_framework.permissions import DjangoModelPermissions, IsAuthenticated
from bioshareX.permissions import ManageGroupPermission
from rest_framework.response import Response
from guardian.models import UserObjectPermission
from django.contrib.contenttypes.models import ContentType
import datetime
from bioshareX.api.filters import UserShareFilter, ShareTagFilter,\
GroupShareFilter, ActiveMessageFilter
from rest_framework.throttling import UserRateThrottle
from django.utils import timezone
import csv
@ajax_login_required
def get_user(request):
query = request.GET.get('query')
try:
user = User.objects.get(Q(username=query)|Q(email=query))
return JsonResponse({'user':UserSerializer(user).data})
except Exception, e:
return JsonResponse({'status':'error','query':query,'errors':[e.message]},status=status.HTTP_404_NOT_FOUND)
@ajax_login_required
def get_address_book(request):
try:
emails = User.objects.filter(shareuserobjectpermission__content_object__in=Share.objects.filter(owner=request.user).values_list('id')).values_list('email').distinct().order_by('email')
groups = Group.objects.all().order_by('name')
return json_response({'emails':[email[0] for email in emails], 'groups':[g.name for g in groups]})
except Exception, e:
return json_error([e.message])
@ajax_login_required
def get_tags(request):
try:
tags = Tag.objects.filter(name__icontains=request.GET.get('tag'))
return json_response({'tags':[tag.name for tag in tags]})
except Exception, e:
return json_error([e.message])
@share_access_decorator(['admin'])
def share_with(request,share):
query = request.POST.get('query',request.GET.get('query'))
exists = []
new_users = []
groups = []
invalid = []
try:
emails = [email.strip().lower() for email in query.split(',')]
for email in emails:
if email == '':
continue
if email.startswith('group:'):
name = email.split('group:')[1].lower()
try:
group = Group.objects.get(name__iexact=name)
groups.append({'group':{'id':group.id,'name':group.name}})
except:
invalid.append(name)
elif validate_email(email):
try:
user = User.objects.get(email=email)
exists.append({'user':{'username':email}})
except:
new_users.append({'user':{'username':email}})
else:
invalid.append(email)
return json_response({'exists':exists, 'groups':groups,'new_users':new_users,'invalid':invalid})
except Exception, e:
return json_error([e.message])
@ajax_login_required
def share_autocomplete(request):
terms = [term.strip() for term in request.GET.get('query').split()]
query = reduce(lambda q,value: q&Q(name__icontains=value), terms , Q())
try:
share_objs = Share.user_queryset(request.user).filter(query).order_by('-created')[:10]
shares = [{'id':s.id,'url':reverse('list_directory',kwargs={'share':s.id}),'name':s.name,'notes':s.notes} for s in share_objs]
return json_response({'status':'success','shares':shares})
except Exception, e:
return json_error([e.message])
def get_group(request):
query = request.GET.get('query')
try:
group = Group.objects.get(name=query)
return json_response({'group':{'name':group.name}})
except Exception, e:
return json_error([e.message])
@api_view(['GET'])
@share_access_decorator(['admin'])
def get_permissions(request,share):
data = share.get_permissions(user_specific=True)
return json_response(data)
@share_access_decorator(['admin'])
@JSONDecorator
def update_share(request,share,json=None):
share.secure = json['secure']
share.save()
return json_response({'status':'okay'})
@api_view(['POST'])
@share_access_decorator(['admin'])
@JSONDecorator
def set_permissions(request,share,json=None):
from smtplib import SMTPException
emailed=[]
created=[]
failed=[]
# if not request.user.has_perm('admin',share):
# return json_response({'status':'error','error':'You do not have permission to write to this share.'})
if json.has_key('groups'):
for group, permissions in json['groups'].iteritems():
g = Group.objects.get(id__iexact=group)
current_perms = get_perms(g,share)
removed_perms = list(set(current_perms) - set(permissions))
added_perms = list(set(permissions) - set(current_perms))
for u in g.user_set.all():
if len(share.get_user_permissions(u,user_specific=True)) == 0 and len(added_perms) > 0 and json['email']:
email_users([u],'share/share_subject.txt','share/share_email_body.txt',{'user':u,'share':share,'sharer':request.user,'site_url':SITE_URL})
emailed.append(u.username)
for perm in removed_perms:
remove_perm(perm,g,share)
for perm in added_perms:
assign_perm(perm,g,share)
if json.has_key('users'):
for username, permissions in json['users'].iteritems():
username = username.lower()
try:
u = User.objects.get(username__iexact=username)
if len(share.get_user_permissions(u,user_specific=True)) == 0 and json['email']:
try:
email_users([u],'share/share_subject.txt','share/share_email_body.txt',{'user':u,'share':share,'sharer':request.user,'site_url':SITE_URL})
emailed.append(username)
except:
failed.append(username)
except:
if len(permissions) > 0:
password = User.objects.make_random_password()
u = User(username=username,email=username)
u.set_password(password)
u.save()
try:
email_users([u],'share/share_subject.txt','share/share_new_email_body.txt',{'user':u,'password':password,'share':share,'sharer':request.user,'site_url':SITE_URL})
created.append(username)
except:
failed.append(username)
u.delete()
current_perms = share.get_user_permissions(u,user_specific=True)
print 'CURRENT'
print current_perms
print 'PERMISSIONS'
print permissions
removed_perms = list(set(current_perms) - set(permissions))
added_perms = list(set(permissions) - set(current_perms))
print 'ADDING: '
print added_perms
print 'REMOVING: '
print removed_perms
for perm in removed_perms:
if u.username not in failed:
remove_perm(perm,u,share)
for perm in added_perms:
if u.username not in failed:
assign_perm(perm,u,share)
data = share.get_permissions(user_specific=True)
data['messages']=[]
if len(emailed) > 0:
data['messages'].append({'type':'info','content':'%s has/have been emailed'%', '.join(emailed)})
if len(created) > 0:
data['messages'].append({'type':'info','content':'Accounts has/have been created and emails have been sent to the following email addresses: %s'%', '.join(created)})
if len(failed) > 0:
data['messages'].append({'type':'info','content':'Delivery has failed to the following addresses: %s'%', '.join(failed)})
data['json']=json
return json_response(data)
@share_access_decorator(['view_share_files'])
def search_share(request,share,subdir=None):
from bioshareX.utils import find
query = request.GET.get('query',False)
response={}
if query:
response['results'] = find(share,"*%s*"%query,subdir)
else:
response = {'status':'error'}
return json_response(response)
@safe_path_decorator()
@share_access_decorator(['write_to_share'])
def edit_metadata(request, share, subpath):
try:
if share.get_path_type(subpath) is None:
raise Exception('The specified file or folder does not exist in this share.')
metadata = MetaData.objects.get_or_create(share=share, subpath=subpath)[0]
form = MetaDataForm(request.POST if request.method == 'POST' else request.GET)
data = json_form_validate(form)
if not form.is_valid():
return json_response(data)#return json_error(form.errors)
tags = []
for tag in form.cleaned_data['tags'].split(','):
tag = tag.strip()
if len(tag) >2 :
tags.append(Tag.objects.get_or_create(name=tag)[0])
metadata.tags = tags
metadata.notes = form.cleaned_data['notes']
metadata.save()
name = os.path.basename(os.path.normpath(subpath))
return json_response({'name':name,'notes':metadata.notes,'tags':[tag.name for tag in tags]})
except Exception, e:
return json_error([str(e)])
@ajax_login_required
def delete_ssh_key(request):
try:
id = request.POST.get('id')
key = SSHKey.objects.get(user=request.user,id=id)
# subprocess.call(['/bin/chmod','600',AUTHORIZED_KEYS_FILE])
keystring = key.get_key()
# remove_me = keystring.replace('/','\\/')#re.escape(key.extract_key())
# command = ['/bin/sed','-i','/%s/d'%remove_me,AUTHORIZED_KEYS_FILE]
# subprocess.check_call(command)
f = open(AUTHORIZED_KEYS_FILE,"r")
lines = f.readlines()
f.close()
f = open(AUTHORIZED_KEYS_FILE,"w")
for line in lines:
if line.find(keystring) ==-1:
f.write(line)
f.close()
# subprocess.call(['/bin/chmod','400',AUTHORIZED_KEYS_FILE])
key.delete()
SSHKey.objects.filter(key__contains=keystring).delete()
response = {'status':'success','deleted':id}
except Exception, e:
response = {'status':'error','message':'Unable to delete ssh key'+str(e)}
return json_response(response)
"""
Requires: "name", "notes", "filesystem" arguments.
Optional: "link_to_path", "read_only"
"""
@api_view(['POST'])
@permission_required('bioshareX.add_share', return_403=True)
def create_share(request):
form = ShareForm(request.user,request.data)
if form.is_valid():
share = form.save(commit=False)
share.owner=request.user
link_to_path = request.data.get('link_to_path',None)
if link_to_path:
if not request.user.has_perm('bioshareX.link_to_path'):
return JsonResponse({'error':"You do not have permission to link to a specific path."},status=400)
try:
share.save()
except Exception, e:
share.delete()
return JsonResponse({'error':e.message},status=400)
return JsonResponse({'url':"%s%s"%(SITE_URL,reverse('list_directory',kwargs={'share':share.id})),'id':share.id})
else:
return JsonResponse({'errors':form.errors},status=400)
@ajax_login_required
@share_access_decorator(['view_share_files'])
def email_participants(request,share,subdir=None):
try:
subject = request.POST.get('subject')
emails = request.POST.getlist('emails',[])
users = [u for u in get_users_with_perms(share, attach_perms=False, with_superusers=False, with_group_users=True)]
if len(emails) > 0:
users = [u for u in User.objects.filter(id__in=[u.id for u in users]).filter(email__in=emails)]
body = request.POST.get('body')
users.append(share.owner)
email_users(users, ctx_dict={}, subject=subject, body=body,from_email=request.user.email,content_subtype='plain')
response = {'status':'success','sent_to':[u.email for u in users]}
return json_response(response)
except Exception, e:
return JsonResponse({'errors':[str(e)]},status=400)
class ShareLogList(generics.ListAPIView):
serializer_class = ShareLogSerializer
permission_classes = (IsAuthenticated,)
filter_fields = {'action':['icontains'],'user__username':['icontains'],'text':['icontains'],'paths':['icontains'],'share':['exact']}
def get_queryset(self):
shares = Share.user_queryset(self.request.user,include_stats=False)
return ShareLog.objects.filter(share__in=shares)
class ShareViewset(viewsets.ReadOnlyModelViewSet):
serializer_class = ShareSerializer
permission_classes = (IsAuthenticated,)
filter_backends = generics.ListAPIView.filter_backends + [UserShareFilter,ShareTagFilter,GroupShareFilter]
filter_fields = {'name':['icontains'],'notes':['icontains'],'owner__username':['icontains'],'path_exists':['exact']}
ordering_fields = ('name','owner__username','created','updated','stats__num_files','stats__bytes')
def get_queryset(self):
return Share.user_queryset(self.request.user,include_stats=False).select_related('owner','stats').prefetch_related('tags','user_permissions__user','group_permissions__group')
@detail_route(['GET'])
@throttle_classes([UserRateThrottle])
def directory_size(self, request, *args, **kwargs):
share = self.get_object()
subdir = request.query_params.get('subdir','')
test_path(subdir,share=share)
size = du(os.path.join(share.get_path(),subdir))
return Response({'share':share.id,'subdir':subdir,'size':size})
@action(detail=False, methods=['GET'], permission_classes=[IsAuthenticated])
def export(self, request):
queryset = self.get_queryset()
serializer = self.get_serializer(queryset, many=True)
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="shares_{}.csv"'.format(str(timezone.now())[:19].replace(' ','_'))
writer = csv.writer(response, delimiter='\t')
writer.writerow(['id','name','url','users','groups','bytes','tags','owner','slug','created','updated','secure','read_only','notes','path_exists'])
for r in serializer.data:
row = [r['id'],r['name'],r['url'],', '.join(r['users']),', '.join(r['groups']),r['stats'].get('bytes') if r['stats'] else '',', '.join([t['name'] for t in r['tags']]),r['owner'].get('username'),r['slug'],r['created'],r['updated'],r['secure'],r['read_only'],r['notes'],r['path_exists'] ]
writer.writerow([c.encode('ascii', 'replace') if hasattr(c,'decode') else c for c in row])
return response
class GroupViewSet(viewsets.ReadOnlyModelViewSet):
serializer_class = GroupSerializer
permission_classes = (IsAuthenticated,DjangoModelPermissions,)
filter_fields = {'name':['icontains']}
model = Group
def get_queryset(self):
if self.request.user.is_superuser or self.request.user.is_staff:
return Group.objects.all()
else:
return self.request.user.groups.all()
@detail_route(['POST'],permission_classes=[ManageGroupPermission])
def update_users(self, request, *args, **kwargs):
users = request.data.get('users')
group = self.get_object()
# old_users = GroupSerializer(group).data['users']
# old_user_ids = [u['id'] for u in old_users]
# remove_users = set(old_user_ids) - set(user_ids)
# add_users = set(user_ids) - set(old_user_ids)
group.user_set = [u['id'] for u in users]
#clear permissions
ct = ContentType.objects.get_for_model(Group)
UserObjectPermission.objects.filter(content_type=ct,object_pk=group.id).delete()
#assign permissions
for user in users:
if 'manage_group' in user['permissions']:
user = User.objects.get(id=user['id'])
assign_perm('manage_group', user, group)
return self.retrieve(request,*args,**kwargs)#Response({'status':'success'})
# @detail_route(['POST'])
# def remove_user(self,request,*args,**kwargs):
# # user = request.query_params.get('user')
# # self.get_object().user_set.remove(user)
# return Response({'status':'success'})
class MessageViewSet(viewsets.ReadOnlyModelViewSet):
serializer_class = MessageSerializer
permission_classes = (IsAuthenticated,)
filter_backends = (ActiveMessageFilter,)
model = Message
def get_queryset(self):
return Message.objects.all().order_by('-created')
# return Message.objects.filter(active=True).filter(Q(expires__gte=datetime.datetime.today())|Q(expires=None)).exclude(viewed_by__id=self.request.user.id)
@detail_route(['POST','GET'],permission_classes=[IsAuthenticated])
def dismiss(self, request, pk=None):
message = self.get_object()
message.viewed_by.add(request.user)
message.save()
return Response({'status':'Message dismissed'})
| 47.11399 | 298 | 0.651875 | 2,207 | 18,186 | 5.206615 | 0.160399 | 0.018275 | 0.02193 | 0.015316 | 0.222348 | 0.166565 | 0.134192 | 0.097816 | 0.079192 | 0.067096 | 0 | 0.002918 | 0.208567 | 18,186 | 385 | 299 | 47.236364 | 0.795456 | 0.06615 | 0 | 0.263158 | 0 | 0 | 0.11382 | 0.014413 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.008772 | 0.084795 | null | null | 0.023392 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b77490b49e8e303cfb2f69ab8e30192b9d37bd8f | 2,077 | py | Python | zairachem/reports/report.py | ersilia-os/ersilia-automl-chem | fabb1f05d17cff11ec0e084495eed4c0152f2f63 | [
"MIT"
] | null | null | null | zairachem/reports/report.py | ersilia-os/ersilia-automl-chem | fabb1f05d17cff11ec0e084495eed4c0152f2f63 | [
"MIT"
] | null | null | null | zairachem/reports/report.py | ersilia-os/ersilia-automl-chem | fabb1f05d17cff11ec0e084495eed4c0152f2f63 | [
"MIT"
] | null | null | null | from .plots import (
ActivesInactivesPlot,
ConfusionPlot,
RocCurvePlot,
ProjectionPlot,
RegressionPlotRaw,
HistogramPlotRaw,
RegressionPlotTransf,
HistogramPlotTransf,
Transformation,
IndividualEstimatorsAurocPlot,
InidvidualEstimatorsR2Plot,
)
from .. import ZairaBase
from ..vars import REPORT_SUBFOLDER
class Reporter(ZairaBase):
def __init__(self, path):
ZairaBase.__init__(self)
if path is None:
self.path = self.get_output_dir()
else:
self.path = path
def _actives_inactives_plot(self):
ActivesInactivesPlot(ax=None, path=self.path).save()
def _confusion_matrix_plot(self):
ConfusionPlot(ax=None, path=self.path).save()
def _roc_curve_plot(self):
RocCurvePlot(ax=None, path=self.path).save()
def _projection_plot(self):
ProjectionPlot(ax=None, path=self.path).save()
def _regression_plot_raw(self):
RegressionPlotRaw(ax=None, path=self.path).save()
def _histogram_plot_raw(self):
HistogramPlotRaw(ax=None, path=self.path).save()
def _regression_plot_transf(self):
RegressionPlotTransf(ax=None, path=self.path).save()
def _histogram_plot_transf(self):
HistogramPlotTransf(ax=None, path=self.path).save()
def _transformation_plot(self):
Transformation(ax=None, path=self.path).save()
def _individual_estimators_auroc_plot(self):
IndividualEstimatorsAurocPlot(ax=None, path=self.path).save()
def _individual_estimators_r2_plot(self):
InidvidualEstimatorsR2Plot(ax=None, path=self.path).save()
def run(self):
self._actives_inactives_plot()
self._confusion_matrix_plot()
self._roc_curve_plot()
self._projection_plot()
self._regression_plot_transf()
self._histogram_plot_transf()
self._regression_plot_raw()
self._histogram_plot_raw()
self._transformation_plot()
self._individual_estimators_auroc_plot()
self._individual_estimators_r2_plot()
| 28.847222 | 69 | 0.692826 | 225 | 2,077 | 6.053333 | 0.222222 | 0.082232 | 0.080764 | 0.113069 | 0.304699 | 0.270925 | 0.270925 | 0.179148 | 0.179148 | 0 | 0 | 0.002433 | 0.208474 | 2,077 | 71 | 70 | 29.253521 | 0.826034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.232143 | false | 0 | 0.053571 | 0 | 0.303571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b77c8918ea2f71cb1258eb1a156531c01f3b83b2 | 269 | py | Python | Chapter 07/hmac-md5.py | Prakshal2607/Effective-Python-Penetration-Testing | f49fedc172a1add45edb21f66f74746dfa9c944a | [
"MIT"
] | 346 | 2016-06-21T11:39:39.000Z | 2022-01-26T03:19:29.000Z | Chapter 07/hmac-md5.py | liceaga/Effective-Python-Penetration-Testing | 0b043885231662efd63402eec3c9cb413b9693e2 | [
"MIT"
] | 1 | 2016-06-21T11:44:42.000Z | 2016-11-17T05:10:08.000Z | Chapter 07/hmac-md5.py | liceaga/Effective-Python-Penetration-Testing | 0b043885231662efd63402eec3c9cb413b9693e2 | [
"MIT"
] | 210 | 2016-06-22T12:08:47.000Z | 2022-03-16T15:54:30.000Z | import hmac
hmac_md5 = hmac.new('secret-key')
f = open('sample-file.txt', 'rb')
try:
while True:
block = f.read(1024)
if not block:
break
hmac_md5.update(block)
finally:
f.close()
digest = hmac_md5.hexdigest()
print digest | 16.8125 | 33 | 0.598513 | 38 | 269 | 4.157895 | 0.710526 | 0.132911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.271375 | 269 | 16 | 34 | 16.8125 | 0.770408 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b77cac8f40a35a229bb7b41f7b04619f55a2baf4 | 31,686 | py | Python | dingtalk/python/alibabacloud_dingtalk/alitrip_1_0/models.py | yndu13/dingtalk-sdk | 700fb7bb49c4d3167f84afc5fcb5e7aa5a09735f | [
"Apache-2.0"
] | null | null | null | dingtalk/python/alibabacloud_dingtalk/alitrip_1_0/models.py | yndu13/dingtalk-sdk | 700fb7bb49c4d3167f84afc5fcb5e7aa5a09735f | [
"Apache-2.0"
] | null | null | null | dingtalk/python/alibabacloud_dingtalk/alitrip_1_0/models.py | yndu13/dingtalk-sdk | 700fb7bb49c4d3167f84afc5fcb5e7aa5a09735f | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# This file is auto-generated, don't edit it. Thanks.
from Tea.model import TeaModel
from typing import Dict, List
class AddCityCarApplyHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class AddCityCarApplyRequest(TeaModel):
def __init__(
self,
cause: str = None,
city: str = None,
corp_id: str = None,
date: str = None,
project_code: str = None,
project_name: str = None,
status: int = None,
third_part_apply_id: str = None,
third_part_cost_center_id: str = None,
third_part_invoice_id: str = None,
times_total: int = None,
times_type: int = None,
times_used: int = None,
title: str = None,
user_id: str = None,
ding_suite_key: str = None,
ding_corp_id: str = None,
ding_token_grant_type: int = None,
finished_date: str = None,
):
# 出差事由
self.cause = cause
# 用车城市
self.city = city
# 第三方企业ID
self.corp_id = corp_id
# 用车时间,按天管控,比如传值2021-03-18 20:26:56表示2021-03-18当天可用车,跨天情况配合finishedDate参数使用
self.date = date
# 审批单关联的项目code
self.project_code = project_code
# 审批单关联的项目名
self.project_name = project_name
# 审批单状态:0-申请,1-同意,2-拒绝
self.status = status
# 三方审批单ID
self.third_part_apply_id = third_part_apply_id
# 审批单关联的三方成本中心ID
self.third_part_cost_center_id = third_part_cost_center_id
# 审批单关联的三方发票抬头ID
self.third_part_invoice_id = third_part_invoice_id
# 审批单可用总次数
self.times_total = times_total
# 审批单可用次数类型:1-次数不限制,2-用户可指定次数,3-管理员限制次数;如果企业没有限制审批单使用次数的需求,这个参数传1(次数不限制),同时times_total和times_used都传0即可
self.times_type = times_type
# 审批单已用次数
self.times_used = times_used
# 审批单标题
self.title = title
# 发起审批的第三方员工ID
self.user_id = user_id
# suiteKey
self.ding_suite_key = ding_suite_key
# account
self.ding_corp_id = ding_corp_id
# tokenGrantType
self.ding_token_grant_type = ding_token_grant_type
# 用车截止时间,按天管控,比如date传值2021-03-18 20:26:56、finished_date传值2021-03-30 20:26:56表示2021-03-18(含)到2021-03-30(含)之间可用车,该参数不传值情况使用date作为用车截止时间;
self.finished_date = finished_date
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.cause is not None:
result['cause'] = self.cause
if self.city is not None:
result['city'] = self.city
if self.corp_id is not None:
result['corpId'] = self.corp_id
if self.date is not None:
result['date'] = self.date
if self.project_code is not None:
result['projectCode'] = self.project_code
if self.project_name is not None:
result['projectName'] = self.project_name
if self.status is not None:
result['status'] = self.status
if self.third_part_apply_id is not None:
result['thirdPartApplyId'] = self.third_part_apply_id
if self.third_part_cost_center_id is not None:
result['thirdPartCostCenterId'] = self.third_part_cost_center_id
if self.third_part_invoice_id is not None:
result['thirdPartInvoiceId'] = self.third_part_invoice_id
if self.times_total is not None:
result['timesTotal'] = self.times_total
if self.times_type is not None:
result['timesType'] = self.times_type
if self.times_used is not None:
result['timesUsed'] = self.times_used
if self.title is not None:
result['title'] = self.title
if self.user_id is not None:
result['userId'] = self.user_id
if self.ding_suite_key is not None:
result['dingSuiteKey'] = self.ding_suite_key
if self.ding_corp_id is not None:
result['dingCorpId'] = self.ding_corp_id
if self.ding_token_grant_type is not None:
result['dingTokenGrantType'] = self.ding_token_grant_type
if self.finished_date is not None:
result['finishedDate'] = self.finished_date
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('cause') is not None:
self.cause = m.get('cause')
if m.get('city') is not None:
self.city = m.get('city')
if m.get('corpId') is not None:
self.corp_id = m.get('corpId')
if m.get('date') is not None:
self.date = m.get('date')
if m.get('projectCode') is not None:
self.project_code = m.get('projectCode')
if m.get('projectName') is not None:
self.project_name = m.get('projectName')
if m.get('status') is not None:
self.status = m.get('status')
if m.get('thirdPartApplyId') is not None:
self.third_part_apply_id = m.get('thirdPartApplyId')
if m.get('thirdPartCostCenterId') is not None:
self.third_part_cost_center_id = m.get('thirdPartCostCenterId')
if m.get('thirdPartInvoiceId') is not None:
self.third_part_invoice_id = m.get('thirdPartInvoiceId')
if m.get('timesTotal') is not None:
self.times_total = m.get('timesTotal')
if m.get('timesType') is not None:
self.times_type = m.get('timesType')
if m.get('timesUsed') is not None:
self.times_used = m.get('timesUsed')
if m.get('title') is not None:
self.title = m.get('title')
if m.get('userId') is not None:
self.user_id = m.get('userId')
if m.get('dingSuiteKey') is not None:
self.ding_suite_key = m.get('dingSuiteKey')
if m.get('dingCorpId') is not None:
self.ding_corp_id = m.get('dingCorpId')
if m.get('dingTokenGrantType') is not None:
self.ding_token_grant_type = m.get('dingTokenGrantType')
if m.get('finishedDate') is not None:
self.finished_date = m.get('finishedDate')
return self
class AddCityCarApplyResponseBody(TeaModel):
def __init__(
self,
apply_id: int = None,
):
# 商旅内部审批单ID
self.apply_id = apply_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.apply_id is not None:
result['applyId'] = self.apply_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('applyId') is not None:
self.apply_id = m.get('applyId')
return self
class AddCityCarApplyResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: AddCityCarApplyResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = AddCityCarApplyResponseBody()
self.body = temp_model.from_map(m['body'])
return self
class ApproveCityCarApplyHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class ApproveCityCarApplyRequest(TeaModel):
def __init__(
self,
corp_id: str = None,
operate_time: str = None,
remark: str = None,
status: int = None,
third_part_apply_id: str = None,
user_id: str = None,
ding_suite_key: str = None,
ding_corp_id: str = None,
ding_token_grant_type: int = None,
):
# 第三方企业ID
self.corp_id = corp_id
# 审批时间
self.operate_time = operate_time
# 审批备注
self.remark = remark
# 审批结果:1-同意,2-拒绝
self.status = status
# 第三方审批单ID
self.third_part_apply_id = third_part_apply_id
# 审批的第三方员工ID
self.user_id = user_id
# suiteKey
self.ding_suite_key = ding_suite_key
# account
self.ding_corp_id = ding_corp_id
# tokenGrantType
self.ding_token_grant_type = ding_token_grant_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.corp_id is not None:
result['corpId'] = self.corp_id
if self.operate_time is not None:
result['operateTime'] = self.operate_time
if self.remark is not None:
result['remark'] = self.remark
if self.status is not None:
result['status'] = self.status
if self.third_part_apply_id is not None:
result['thirdPartApplyId'] = self.third_part_apply_id
if self.user_id is not None:
result['userId'] = self.user_id
if self.ding_suite_key is not None:
result['dingSuiteKey'] = self.ding_suite_key
if self.ding_corp_id is not None:
result['dingCorpId'] = self.ding_corp_id
if self.ding_token_grant_type is not None:
result['dingTokenGrantType'] = self.ding_token_grant_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('corpId') is not None:
self.corp_id = m.get('corpId')
if m.get('operateTime') is not None:
self.operate_time = m.get('operateTime')
if m.get('remark') is not None:
self.remark = m.get('remark')
if m.get('status') is not None:
self.status = m.get('status')
if m.get('thirdPartApplyId') is not None:
self.third_part_apply_id = m.get('thirdPartApplyId')
if m.get('userId') is not None:
self.user_id = m.get('userId')
if m.get('dingSuiteKey') is not None:
self.ding_suite_key = m.get('dingSuiteKey')
if m.get('dingCorpId') is not None:
self.ding_corp_id = m.get('dingCorpId')
if m.get('dingTokenGrantType') is not None:
self.ding_token_grant_type = m.get('dingTokenGrantType')
return self
class ApproveCityCarApplyResponseBody(TeaModel):
def __init__(
self,
approve_result: bool = None,
):
# 审批结果
self.approve_result = approve_result
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.approve_result is not None:
result['approveResult'] = self.approve_result
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('approveResult') is not None:
self.approve_result = m.get('approveResult')
return self
class ApproveCityCarApplyResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: ApproveCityCarApplyResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = ApproveCityCarApplyResponseBody()
self.body = temp_model.from_map(m['body'])
return self
class QueryCityCarApplyHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class QueryCityCarApplyRequest(TeaModel):
def __init__(
self,
corp_id: str = None,
created_end_at: str = None,
created_start_at: str = None,
page_number: int = None,
page_size: int = None,
third_part_apply_id: str = None,
user_id: str = None,
):
# 第三方企业ID
self.corp_id = corp_id
# 审批单创建时间小于值
self.created_end_at = created_end_at
# 审批单创建时间大于等于值
self.created_start_at = created_start_at
# 页码,要求大于等于1,默认1
self.page_number = page_number
# 每页数据量,要求大于等于1,默认20
self.page_size = page_size
# 三方审批单ID
self.third_part_apply_id = third_part_apply_id
# 第三方员工ID
self.user_id = user_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.corp_id is not None:
result['corpId'] = self.corp_id
if self.created_end_at is not None:
result['createdEndAt'] = self.created_end_at
if self.created_start_at is not None:
result['createdStartAt'] = self.created_start_at
if self.page_number is not None:
result['pageNumber'] = self.page_number
if self.page_size is not None:
result['pageSize'] = self.page_size
if self.third_part_apply_id is not None:
result['thirdPartApplyId'] = self.third_part_apply_id
if self.user_id is not None:
result['userId'] = self.user_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('corpId') is not None:
self.corp_id = m.get('corpId')
if m.get('createdEndAt') is not None:
self.created_end_at = m.get('createdEndAt')
if m.get('createdStartAt') is not None:
self.created_start_at = m.get('createdStartAt')
if m.get('pageNumber') is not None:
self.page_number = m.get('pageNumber')
if m.get('pageSize') is not None:
self.page_size = m.get('pageSize')
if m.get('thirdPartApplyId') is not None:
self.third_part_apply_id = m.get('thirdPartApplyId')
if m.get('userId') is not None:
self.user_id = m.get('userId')
return self
class QueryCityCarApplyResponseBodyApplyListApproverList(TeaModel):
def __init__(
self,
note: str = None,
operate_time: str = None,
order: int = None,
status: int = None,
status_desc: str = None,
user_id: str = None,
user_name: str = None,
):
# 审批备注
self.note = note
# 审批时间
self.operate_time = operate_time
# 审批人排序值
self.order = order
# 审批状态枚举:审批状态:0-审批中,1-已同意,2-已拒绝
self.status = status
# 审批状态描述
self.status_desc = status_desc
# 审批员工ID
self.user_id = user_id
# 审批员工名
self.user_name = user_name
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.note is not None:
result['note'] = self.note
if self.operate_time is not None:
result['operateTime'] = self.operate_time
if self.order is not None:
result['order'] = self.order
if self.status is not None:
result['status'] = self.status
if self.status_desc is not None:
result['statusDesc'] = self.status_desc
if self.user_id is not None:
result['userId'] = self.user_id
if self.user_name is not None:
result['userName'] = self.user_name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('note') is not None:
self.note = m.get('note')
if m.get('operateTime') is not None:
self.operate_time = m.get('operateTime')
if m.get('order') is not None:
self.order = m.get('order')
if m.get('status') is not None:
self.status = m.get('status')
if m.get('statusDesc') is not None:
self.status_desc = m.get('statusDesc')
if m.get('userId') is not None:
self.user_id = m.get('userId')
if m.get('userName') is not None:
self.user_name = m.get('userName')
return self
class QueryCityCarApplyResponseBodyApplyListItineraryList(TeaModel):
def __init__(
self,
arr_city: str = None,
arr_city_code: str = None,
arr_date: str = None,
cost_center_id: int = None,
cost_center_name: str = None,
dep_city: str = None,
dep_city_code: str = None,
dep_date: str = None,
invoice_id: int = None,
invoice_name: str = None,
itinerary_id: str = None,
project_code: str = None,
project_title: str = None,
traffic_type: int = None,
):
# 目的地城市
self.arr_city = arr_city
# 目的地城市三字码
self.arr_city_code = arr_city_code
# 到达目的地城市时间
self.arr_date = arr_date
# 商旅内部成本中心ID
self.cost_center_id = cost_center_id
# 成本中心名称
self.cost_center_name = cost_center_name
# 出发城市
self.dep_city = dep_city
# 出发城市三字码
self.dep_city_code = dep_city_code
# 出发时间
self.dep_date = dep_date
# 商旅内部发票抬头ID
self.invoice_id = invoice_id
# 发票抬头名称
self.invoice_name = invoice_name
# 商旅内部行程单ID
self.itinerary_id = itinerary_id
# 项目code
self.project_code = project_code
# 项目名称
self.project_title = project_title
# 交通方式:4-市内交通
self.traffic_type = traffic_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.arr_city is not None:
result['arrCity'] = self.arr_city
if self.arr_city_code is not None:
result['arrCityCode'] = self.arr_city_code
if self.arr_date is not None:
result['arrDate'] = self.arr_date
if self.cost_center_id is not None:
result['costCenterId'] = self.cost_center_id
if self.cost_center_name is not None:
result['costCenterName'] = self.cost_center_name
if self.dep_city is not None:
result['depCity'] = self.dep_city
if self.dep_city_code is not None:
result['depCityCode'] = self.dep_city_code
if self.dep_date is not None:
result['depDate'] = self.dep_date
if self.invoice_id is not None:
result['invoiceId'] = self.invoice_id
if self.invoice_name is not None:
result['invoiceName'] = self.invoice_name
if self.itinerary_id is not None:
result['itineraryId'] = self.itinerary_id
if self.project_code is not None:
result['projectCode'] = self.project_code
if self.project_title is not None:
result['projectTitle'] = self.project_title
if self.traffic_type is not None:
result['trafficType'] = self.traffic_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('arrCity') is not None:
self.arr_city = m.get('arrCity')
if m.get('arrCityCode') is not None:
self.arr_city_code = m.get('arrCityCode')
if m.get('arrDate') is not None:
self.arr_date = m.get('arrDate')
if m.get('costCenterId') is not None:
self.cost_center_id = m.get('costCenterId')
if m.get('costCenterName') is not None:
self.cost_center_name = m.get('costCenterName')
if m.get('depCity') is not None:
self.dep_city = m.get('depCity')
if m.get('depCityCode') is not None:
self.dep_city_code = m.get('depCityCode')
if m.get('depDate') is not None:
self.dep_date = m.get('depDate')
if m.get('invoiceId') is not None:
self.invoice_id = m.get('invoiceId')
if m.get('invoiceName') is not None:
self.invoice_name = m.get('invoiceName')
if m.get('itineraryId') is not None:
self.itinerary_id = m.get('itineraryId')
if m.get('projectCode') is not None:
self.project_code = m.get('projectCode')
if m.get('projectTitle') is not None:
self.project_title = m.get('projectTitle')
if m.get('trafficType') is not None:
self.traffic_type = m.get('trafficType')
return self
class QueryCityCarApplyResponseBodyApplyList(TeaModel):
def __init__(
self,
approver_list: List[QueryCityCarApplyResponseBodyApplyListApproverList] = None,
depart_id: str = None,
depart_name: str = None,
gmt_create: str = None,
gmt_modified: str = None,
itinerary_list: List[QueryCityCarApplyResponseBodyApplyListItineraryList] = None,
status: int = None,
status_desc: str = None,
third_part_apply_id: str = None,
trip_cause: str = None,
trip_title: str = None,
user_id: str = None,
user_name: str = None,
):
# 审批单列表
self.approver_list = approver_list
# 员工所在部门ID
self.depart_id = depart_id
# 员工所在部门名
self.depart_name = depart_name
# 创建时间
self.gmt_create = gmt_create
# 最近修改时间
self.gmt_modified = gmt_modified
# 审批单关联的行程
self.itinerary_list = itinerary_list
# 审批单状态:0-申请,1-同意,2-拒绝
self.status = status
# 审批单状态:0-申请,1-同意,2-拒绝
self.status_desc = status_desc
# 三方审批单ID
self.third_part_apply_id = third_part_apply_id
# 申请事由
self.trip_cause = trip_cause
# 审批单标题
self.trip_title = trip_title
# 发起审批员工ID
self.user_id = user_id
# 发起审批员工名
self.user_name = user_name
def validate(self):
if self.approver_list:
for k in self.approver_list:
if k:
k.validate()
if self.itinerary_list:
for k in self.itinerary_list:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
result['approverList'] = []
if self.approver_list is not None:
for k in self.approver_list:
result['approverList'].append(k.to_map() if k else None)
if self.depart_id is not None:
result['departId'] = self.depart_id
if self.depart_name is not None:
result['departName'] = self.depart_name
if self.gmt_create is not None:
result['gmtCreate'] = self.gmt_create
if self.gmt_modified is not None:
result['gmtModified'] = self.gmt_modified
result['itineraryList'] = []
if self.itinerary_list is not None:
for k in self.itinerary_list:
result['itineraryList'].append(k.to_map() if k else None)
if self.status is not None:
result['status'] = self.status
if self.status_desc is not None:
result['statusDesc'] = self.status_desc
if self.third_part_apply_id is not None:
result['thirdPartApplyId'] = self.third_part_apply_id
if self.trip_cause is not None:
result['tripCause'] = self.trip_cause
if self.trip_title is not None:
result['tripTitle'] = self.trip_title
if self.user_id is not None:
result['userId'] = self.user_id
if self.user_name is not None:
result['userName'] = self.user_name
return result
def from_map(self, m: dict = None):
m = m or dict()
self.approver_list = []
if m.get('approverList') is not None:
for k in m.get('approverList'):
temp_model = QueryCityCarApplyResponseBodyApplyListApproverList()
self.approver_list.append(temp_model.from_map(k))
if m.get('departId') is not None:
self.depart_id = m.get('departId')
if m.get('departName') is not None:
self.depart_name = m.get('departName')
if m.get('gmtCreate') is not None:
self.gmt_create = m.get('gmtCreate')
if m.get('gmtModified') is not None:
self.gmt_modified = m.get('gmtModified')
self.itinerary_list = []
if m.get('itineraryList') is not None:
for k in m.get('itineraryList'):
temp_model = QueryCityCarApplyResponseBodyApplyListItineraryList()
self.itinerary_list.append(temp_model.from_map(k))
if m.get('status') is not None:
self.status = m.get('status')
if m.get('statusDesc') is not None:
self.status_desc = m.get('statusDesc')
if m.get('thirdPartApplyId') is not None:
self.third_part_apply_id = m.get('thirdPartApplyId')
if m.get('tripCause') is not None:
self.trip_cause = m.get('tripCause')
if m.get('tripTitle') is not None:
self.trip_title = m.get('tripTitle')
if m.get('userId') is not None:
self.user_id = m.get('userId')
if m.get('userName') is not None:
self.user_name = m.get('userName')
return self
class QueryCityCarApplyResponseBody(TeaModel):
def __init__(
self,
apply_list: List[QueryCityCarApplyResponseBodyApplyList] = None,
total: int = None,
):
# 审批单列表
self.apply_list = apply_list
# 总数
self.total = total
def validate(self):
if self.apply_list:
for k in self.apply_list:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
result['applyList'] = []
if self.apply_list is not None:
for k in self.apply_list:
result['applyList'].append(k.to_map() if k else None)
if self.total is not None:
result['total'] = self.total
return result
def from_map(self, m: dict = None):
m = m or dict()
self.apply_list = []
if m.get('applyList') is not None:
for k in m.get('applyList'):
temp_model = QueryCityCarApplyResponseBodyApplyList()
self.apply_list.append(temp_model.from_map(k))
if m.get('total') is not None:
self.total = m.get('total')
return self
class QueryCityCarApplyResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: QueryCityCarApplyResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = QueryCityCarApplyResponseBody()
self.body = temp_model.from_map(m['body'])
return self
| 33.636943 | 142 | 0.581424 | 4,054 | 31,686 | 4.344598 | 0.065861 | 0.052518 | 0.094532 | 0.069835 | 0.633453 | 0.566002 | 0.53347 | 0.521036 | 0.503662 | 0.500255 | 0 | 0.004274 | 0.320678 | 31,686 | 941 | 143 | 33.672689 | 0.813984 | 0.032885 | 0 | 0.617571 | 1 | 0 | 0.086698 | 0.010011 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077519 | false | 0.01292 | 0.002584 | 0 | 0.157623 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b78064bab0e579a79f199c581d29dfd0023d9a67 | 1,337 | py | Python | src/kong/db.py | paulgessinger/kong | b1e2ec0c18f432fa2419b2b0dc95ee1e391cf7a5 | [
"MIT"
] | 3 | 2020-02-14T09:23:56.000Z | 2020-08-24T16:19:00.000Z | src/kong/db.py | paulgessinger/kong | b1e2ec0c18f432fa2419b2b0dc95ee1e391cf7a5 | [
"MIT"
] | 159 | 2019-09-16T19:17:16.000Z | 2022-03-29T19:12:37.000Z | src/kong/db.py | paulgessinger/kong | b1e2ec0c18f432fa2419b2b0dc95ee1e391cf7a5 | [
"MIT"
] | null | null | null | """
Singleton database instance
"""
from typing import TYPE_CHECKING, Any, List, ContextManager, Tuple, Iterable
if not TYPE_CHECKING:
from playhouse.sqlite_ext import SqliteExtDatabase, AutoIncrementField
else: # pragma: no cover
class SqliteExtDatabase:
"""
Mypy stub for the not type-hinted SqliteExtDatabase class
"""
def __init__(self, *args: Any) -> None:
"""
Type stub
:param args:
"""
...
def init(self, *args: Any) -> None:
"""
Type stub
:param args:
:return:
"""
...
def connect(self) -> None:
"""
Type stub
:return:
"""
...
def create_tables(self, tables: List[Any]) -> None:
"""
Type stub
:param tables:
:return:
"""
...
def atomic(self) -> ContextManager[None]:
"""
Type stub
:return:
"""
...
def execute_sql(self, query: str, params: Tuple[Any]) -> Iterable[Tuple[Any]]:
...
class AutoIncrementField:
"""
Type stub
"""
...
database = SqliteExtDatabase(None)
| 20.890625 | 86 | 0.449514 | 107 | 1,337 | 5.53271 | 0.429907 | 0.081081 | 0.101351 | 0.076014 | 0.236486 | 0.131757 | 0.131757 | 0.131757 | 0.131757 | 0.131757 | 0 | 0 | 0.436799 | 1,337 | 63 | 87 | 21.222222 | 0.786189 | 0.179506 | 0 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7886dbd5b5bd5591584039afc3ce54cfdba530a | 6,877 | py | Python | For_Simulator/learning/plot_gmm_nd.py | a-taniguchi/CSL-BGM | 64bd803289e55b76a219c02ea040325a8a5b949e | [
"MIT"
] | 1 | 2018-09-27T12:19:05.000Z | 2018-09-27T12:19:05.000Z | learning/plot_gmm_nd.py | neuronalX/CSL-BGM | 2fc66611928783c9b65c675ec84d9c06e0b6cd8a | [
"MIT"
] | null | null | null | learning/plot_gmm_nd.py | neuronalX/CSL-BGM | 2fc66611928783c9b65c675ec84d9c06e0b6cd8a | [
"MIT"
] | 1 | 2018-09-27T12:19:21.000Z | 2018-09-27T12:19:21.000Z | #coding:utf-8
#gaussian plot (position category)
#Akira Taniguchi 2016/06/16
import itertools
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
from __init__ import *
from numpy.random import multinomial,uniform,dirichlet
from scipy.stats import multivariate_normal,invwishart,rv_discrete
trialname = "testss"#raw_input("trialname?(folder) >")
start = "1"#raw_input("start number?>")
end = "40"#raw_input("end number?>")
filename = raw_input("learning trial name?>")#"001"#
sn = int(start)
en = int(end)
Data = int(en) - int(sn) +1
foldername = datafolder + trialname+"("+str(sn).zfill(3)+"-"+str(en).zfill(3)+")"
Mu_p = [ np.array([0 for i in xrange(dim_p)]) for k in xrange(Kp) ]
Sig_p = [ np.eye(dim_p)*sig_p_init for k in xrange(Kp) ]
#p_dm = [[[-0.3945, 0.0165]], [[-0.3555, -0.006], [-0.336, 0.18]], [[-0.438, -0.0315], [-0.315, 0.0225], [-0.2355, 0.18]], [[-0.453, -0.018], [-0.3, -0.1005], [-0.258, -0.0255]], [[-0.438, 0.036], [-0.318, 0.1875], [-0.3, 0.0795]], [[-0.5535, 0.0675], [-0.336, -0.0465]], [[-0.3885, 0.0555], [-0.3465, -0.126]], [[-0.3555, -0.1425], [-0.324, -0.039], [-0.273, 0.0825]], [[-0.3885, 0.135]], [[-0.285, -0.0135]], [[-0.5265, 0.045], [-0.33, 0.18], [-0.2685, 0.0165]], [[-0.453, 0.015], [-0.3795, 0.231]], [[-0.3825, -0.231]], [[-0.327, -0.18], [-0.309, -0.0075]], [[-0.3735, -0.1455]], [[-0.2685, -0.0135]], [[-0.438, 0.033], [-0.36, 0.204], [-0.2955, 0.0855]], [[-0.45, 0.048]], [[-0.447, -0.006], [-0.3735, 0.1785]], [[-0.4005, 0.1755], [-0.2655, -0.0705]]]
p_temp = []
#for d in xrange(D):
# p_temp = p_temp + p_dm[d]
#[[-0.319936213, 0.117489433],[-0.345566772, -0.00810185],[-0.362990185, -0.042447971],[-0.277759177, 0.083363745]]
#Sig_p = [[] , [], [] ,[]]
#Sig_p[0] = [[0.010389635, 0.001709343],[0.001709343, 0.018386732]]
#[[0.005423979, 0.000652657],[0.000652657, 0.001134736]]
#Sig_p[1] = [[0.001920786, -0.001210214],[-0.001210214, 0.002644612]]
#Sig_p[2] = [[0.003648299, -0.000312398],[-0.000312398, 0.001518234]]
#Sig_p[3] = [[0.001851727, -0.000656013],[-0.000656013, 0.004825636]]
k=0
for line in open(foldername +'/' + filename + '/' + trialname + '_'+ filename +'_Mu_p.csv', 'r'):
itemList = line[:-1].split(',')
#for i in xrange(len(itemList)):
Mu_p[k] = [float(itemList[0]),float(itemList[1])]
k = k + 1
k=0
i=0
for line in open(foldername +'/' + filename + '/' + trialname + '_'+ filename +'_Sig_p.csv', 'r'):
itemList = line[:-1].split(',')
if k < Kp:
if (i == 0):
#for i in xrange(len(itemList)):
print itemList
Sig_p[k][0][0] = float(itemList[0])
Sig_p[k][0][1] = float(itemList[1])
i = i + 1
elif (i == 1):
#for i in xrange(len(itemList)):
print itemList
Sig_p[k][1][0] = float(itemList[0])
Sig_p[k][1][1] = float(itemList[1])
i = i + 1
elif (i == 2):
i = 0
k = k + 1
zp = []
pi_p = [0.0 for k in range(Kp)] #[0.017826621173443864,0.28554229470170217,0.041570976925928926,0.1265347852145472,0.52852532198437785]
dm = 0
for line in open(foldername +'/' + filename + '/' + trialname + '_'+ filename +'_zp.csv', 'r'):
itemList = line[:-1].split(',')
for i in range(len(itemList)):
if itemList[i] != '':
#print dm,itemList[i]
zp = zp + [int(itemList[i])]
dm = dm + 1
for line in open(foldername +'/' + filename + '/' + trialname + '_'+ filename +'_pi_p.csv', 'r'):
itemList = line[:-1].split(',')
for i in range(len(pi_p)):
pi_p[i] = float(itemList[i])
colors = ['b', 'g', 'm', 'r', 'c', 'y', 'k', 'orange', 'purple', 'brown']
color_iter = itertools.cycle(colors)
splot = plt.subplot(1, 1,1)
for k,(mean,covar,color) in enumerate(zip(Mu_p,Sig_p,color_iter)):
v, w = linalg.eigh(covar)
u = w[0] / linalg.norm(w[0])
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
ell = mpl.patches.Ellipse([mean[1],mean[0]], v[0], v[1], 180 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
#splot.add_artist(ell)
#ガウス分布から大量にサンプリングしてプロットする場合
for i in range(int(5000*2*pi_p[k])):#)):#
X = multivariate_normal.rvs(mean=mean, cov=covar)
plt.scatter(X[1],X[0], s=5, marker='.', color=color, alpha=0.2)
#データをクラスごとに色分けしてプロットする場合
#for i in range(len(p_temp)):
# plt.scatter(p_temp[i][1],p_temp[i][0], marker='x', c=colors[zp[i]])
"""
# Number of samples per component
n_samples = 500
# Generate random sample, two components
np.random.seed(0)
C = np.array([[0., -0.1], [1.7, .4]])
X = np.r_[np.dot(np.random.randn(n_samples, 2), C),
.7 * np.random.randn(n_samples, 2) + np.array([-6, 3])]
# Fit a mixture of Gaussians with EM using five components
#gmm = mixture.GMM(n_components=5, covariance_type='full')
#gmm.fit(X)
# Fit a Dirichlet process mixture of Gaussians using five components
dpgmm = mixture.DPGMM(n_components=5, covariance_type='full')
dpgmm.fit(X)
#for i, (clf, title) in enumerate([#(gmm, 'GMM'),
# (dpgmm, 'Dirichlet Process GMM')]):
"""
#clf=dpgmm
title = 'Position category'#data'
#Y_ = clf.predict(X)
#print Y_
"""
for i, (mean, covar, color) in enumerate(zip(
clf.means_, clf._get_covars(), color_iter)):
v, w = linalg.eigh(covar)
print covar
u = w[0] / linalg.norm(w[0])
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
#if not np.any(Y_ == i):
# continue
#plt.scatter(X[Y_ == i, 0], X[Y_ == i, 1], .8, color=color)
# Plot an ellipse to show the Gaussian component
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
ell = mpl.patches.Ellipse(mean, v[0], v[1], 180 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
"""
plt.ylim(-0.2, -0.8)
plt.xlim(-0.3, 0.3)
#plt.xticks([-0.8+0.1*i for i in range(7)])
#plt.yticks([-0.3+0.1*i for i in range(7)])
plt.title(title)
#w, h = plt.get_figwidth(), plt.get_figheight()
#ax = plt.add_axes((0.5 - 0.5 * 0.8 * h / w, 0.1, 0.8 * h / w, 0.8))
#aspect = (ax.get_xlim()[1] - ax.get_xlim()[0]) / (ax.get_ylim()[1] - ax.get_ylim()[0])
#ax.set_aspect(aspect)
plt.gca().set_aspect('equal', adjustable='box')
plt.savefig(foldername +'/' + filename + '/' + trialname + '_'+ filename +'_position_data_plot_p1nd.eps', dpi=150)
plt.savefig(foldername +'/' + filename + '/' + trialname + '_'+ filename +'_position_data_plot_p1nd.png', dpi=150)
plt.show()
| 38.418994 | 757 | 0.574378 | 1,102 | 6,877 | 3.495463 | 0.274955 | 0.013499 | 0.015576 | 0.054517 | 0.332295 | 0.315161 | 0.269211 | 0.240654 | 0.217809 | 0.158359 | 0 | 0.160242 | 0.207794 | 6,877 | 178 | 758 | 38.634831 | 0.546806 | 0.319325 | 0 | 0.179487 | 0 | 0 | 0.066934 | 0.018742 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.115385 | null | null | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b78fac6287214282885abd8ffbebb076e0bdd37a | 3,672 | py | Python | paddlescience/pde/pde_navier_stokes.py | juneweng/PaddleScience | f30ce908b6fbec2403936007d12d9701f74fd00e | [
"Apache-2.0"
] | null | null | null | paddlescience/pde/pde_navier_stokes.py | juneweng/PaddleScience | f30ce908b6fbec2403936007d12d9701f74fd00e | [
"Apache-2.0"
] | null | null | null | paddlescience/pde/pde_navier_stokes.py | juneweng/PaddleScience | f30ce908b6fbec2403936007d12d9701f74fd00e | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .pde_base import PDE
class NavierStokes(PDE):
"""
Two dimentional time-independent Navier-Stokes equation
.. math::
:nowrap:
\\begin{eqnarray*}
\\frac{\\partial u}{\\partial x} + \\frac{\\partial u}{\\partial y} & = & 0, \\\\
u \\frac{\\partial u}{\\partial x} + v \\frac{\partial u}{\\partial y} - \\frac{\\nu}{\\rho} \\frac{\\partial^2 u}{\\partial x^2} - \\frac{\\nu}{\\rho} \\frac{\\partial^2 u}{\\partial y^2} + dp/dx & = & 0,\\\\
u \\frac{\\partial v}{\\partial x} + v \\frac{\partial v}{\\partial y} - \\frac{\\nu}{\\rho} \\frac{\\partial^2 v}{\\partial x^2} - \\frac{\\nu}{\\rho} \\frac{\\partial^2 v}{\\partial y^2} + dp/dy & = & 0.
\\end{eqnarray*}
Parameters
----------
nu : float
Kinematic viscosity
rho : float
Density
Example:
>>> import paddlescience as psci
>>> pde = psci.pde.NavierStokes(0.01, 1.0)
"""
def __init__(self, nu=0.01, rho=1.0):
dim = 2
super(NavierStokes, self).__init__(dim + 1)
if dim == 2:
# continuty
self.add_item(0, 1.0, "du/dx")
self.add_item(0, 1.0, "dv/dy")
# momentum x
self.add_item(1, 1.0, "u", "du/dx")
self.add_item(1, 1.0, "v", "du/dy")
self.add_item(1, -nu / rho, "d2u/dx2")
self.add_item(1, -nu / rho, "d2u/dy2")
self.add_item(1, 1.0 / rho, "dw/dx")
# momentum y
self.add_item(2, 1.0, "u", "dv/dx")
self.add_item(2, 1.0, "v", "dv/dy")
self.add_item(2, -nu / rho, "d2v/dx2")
self.add_item(2, -nu / rho, "d2v/dy2")
self.add_item(2, 1.0 / rho, "dw/dy")
elif dim == 3:
# continuty
self.add_item(0, 1.0, "du/dx")
self.add_item(0, 1.0, "dv/dy")
self.add_item(0, 1.0, "dw/dz")
# momentum x
self.add_item(1, 1.0, "u", "du/dx")
self.add_item(1, 1.0, "v", "du/dy")
self.add_item(1, 1.0, "w", "du/dz")
self.add_item(1, -nu / rho, "d2u/dx2")
self.add_item(1, -nu / rho, "d2u/dy2")
self.add_item(1, -nu / rho, "d2u/dz2")
self.add_item(1, 1.0 / rho, "dp/dx")
# momentum y
self.add_item(2, 1.0, "u", "dv/dx")
self.add_item(2, 1.0, "v", "dv/dy")
self.add_item(2, 1.0, "w", "dv/dz")
self.add_item(2, -nu / rho, "d2v/dx2")
self.add_item(2, -nu / rho, "d2v/dy2")
self.add_item(2, -nu / rho, "d2v/dz2")
self.add_item(2, 1.0 / rho, "dp/dy")
# momentum z
self.add_item(3, 1.0, "u", "dw/dx")
self.add_item(3, 1.0, "v", "dw/dy")
self.add_item(3, 1.0, "w", "dw/dz")
self.add_item(3, -nu / rho, "d2w/dx2")
self.add_item(3, -nu / rho, "d2w/dy2")
self.add_item(3, -nu / rho, "d2w/dz2")
self.add_item(3, 1.0 / rho, "dp/dz")
| 40.351648 | 223 | 0.510349 | 558 | 3,672 | 3.277778 | 0.232975 | 0.13778 | 0.216512 | 0.078732 | 0.525424 | 0.474576 | 0.434117 | 0.368507 | 0.326955 | 0.29415 | 0 | 0.055121 | 0.303377 | 3,672 | 90 | 224 | 40.8 | 0.659891 | 0.398148 | 0 | 0.465116 | 0 | 0 | 0.106053 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.023256 | 0 | 0.069767 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7907344916d1e840e9f663cfdac58234f01e739 | 327 | py | Python | tests/strict/it_mod_double_fun.py | Euromance/pycopy | 540cfcc52d17667a5f6b2a176427cc031029b78f | [
"MIT"
] | 663 | 2018-12-30T00:17:59.000Z | 2022-03-14T05:03:41.000Z | tests/strict/it_mod_double_fun.py | Euromance/pycopy | 540cfcc52d17667a5f6b2a176427cc031029b78f | [
"MIT"
] | 41 | 2019-06-06T08:31:19.000Z | 2022-02-13T16:53:41.000Z | tests/strict/it_mod_double_fun.py | Euromance/pycopy | 540cfcc52d17667a5f6b2a176427cc031029b78f | [
"MIT"
] | 60 | 2019-06-01T04:25:00.000Z | 2022-02-25T01:47:31.000Z | import mod
def foo():
return 1
try:
mod.foo = foo
except RuntimeError:
print("RuntimeError1")
print(mod.foo())
try:
mod.foo = 1
except RuntimeError:
print("RuntimeError2")
print(mod.foo)
try:
mod.foo = 2
except RuntimeError:
print("RuntimeError3")
print(mod.foo)
def __main__():
pass
| 10.21875 | 26 | 0.642202 | 42 | 327 | 4.904762 | 0.380952 | 0.174757 | 0.131068 | 0.135922 | 0.194175 | 0.194175 | 0 | 0 | 0 | 0 | 0 | 0.023904 | 0.232416 | 327 | 31 | 27 | 10.548387 | 0.796813 | 0 | 0 | 0.4 | 0 | 0 | 0.119266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | true | 0.05 | 0.05 | 0.05 | 0.2 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b791e622824560b49e99db6ea638309e033920f8 | 3,357 | py | Python | scripts/merge.py | jgonzalezdemendibil/movie_publisher | 0e58cb616e6e6c2c5cac7cb5016e0874c0409b42 | [
"BSD-3-Clause"
] | null | null | null | scripts/merge.py | jgonzalezdemendibil/movie_publisher | 0e58cb616e6e6c2c5cac7cb5016e0874c0409b42 | [
"BSD-3-Clause"
] | null | null | null | scripts/merge.py | jgonzalezdemendibil/movie_publisher | 0e58cb616e6e6c2c5cac7cb5016e0874c0409b42 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python
# Copied from https://raw.githubusercontent.com/srv/srv_tools/kinetic/bag_tools/scripts/merge.py since this script is
# not released for indigo.
"""
Copyright (c) 2015,
Enrique Fernandez Perdomo
Clearpath Robotics, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of Systems, Robotics and Vision Group, University of
the Balearican Islands nor the names of its contributors may be used to
endorse or promote products derived from this software without specific
prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
from __future__ import print_function
import rosbag
import argparse
import os
import sys
def merge(inbags, outbag='output.bag', topics=None, exclude_topics=[], raw=True):
# Open output bag file:
try:
out = rosbag.Bag(outbag, 'a' if os.path.exists(outbag) else 'w')
except IOError as e:
print('Failed to open output bag file %s!: %s' % (outbag, e.message), file=sys.stderr)
return 127
# Write the messages from the input bag files into the output one:
for inbag in inbags:
try:
print(' Processing input bagfile: %s' % inbag)
for topic, msg, t in rosbag.Bag(inbag, 'r').read_messages(topics=topics, raw=raw):
if topic not in args.exclude_topics:
out.write(topic, msg, t, raw=raw)
except IOError as e:
print('Failed to open input bag file %s!: %s' % (inbag, e.message), file=sys.stderr)
return 127
out.close()
return 0
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description='Merge multiple bag files into a single one.')
parser.add_argument('inbag', help='input bagfile(s)', nargs='+')
parser.add_argument('--output', help='output bag file', default='output.bag')
parser.add_argument('--topics', help='topics to merge from the input bag files', nargs='+', default=None)
parser.add_argument('--exclude_topics', help='topics not to merge from the input bag files', nargs='+', default=[])
args = parser.parse_args()
try:
sys.exit(merge(args.inbag, args.output, args.topics, args.exclude_topics))
except Exception, e:
import traceback
traceback.print_exc() | 42.493671 | 117 | 0.739053 | 489 | 3,357 | 5.02045 | 0.443763 | 0.01833 | 0.027699 | 0.01833 | 0.166191 | 0.138493 | 0.138493 | 0.114053 | 0.087169 | 0.055397 | 0 | 0.00398 | 0.176646 | 3,357 | 79 | 118 | 42.493671 | 0.884226 | 0.072684 | 0 | 0.2 | 0 | 0 | 0.222591 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.171429 | null | null | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7925483ba95f2bb530529066114343ba1164af4 | 1,567 | py | Python | src/pyforest/auto_import.py | tnwei/pyforest | 73b36298e8cbce9a861c13c01509e34f0e3397fe | [
"MIT"
] | 1,002 | 2019-08-13T15:00:39.000Z | 2022-03-29T19:14:41.000Z | src/pyforest/auto_import.py | tnwei/pyforest | 73b36298e8cbce9a861c13c01509e34f0e3397fe | [
"MIT"
] | 40 | 2019-08-13T19:17:49.000Z | 2022-02-14T08:46:09.000Z | src/pyforest/auto_import.py | tnwei/pyforest | 73b36298e8cbce9a861c13c01509e34f0e3397fe | [
"MIT"
] | 202 | 2019-08-13T19:37:25.000Z | 2022-03-21T20:05:27.000Z | from pathlib import Path
IPYTHON_STARTUP_FOLDER = Path.home() / ".ipython" / "profile_default" / "startup"
STARTUP_FILE = IPYTHON_STARTUP_FOLDER / "pyforest_autoimport.py"
def _create_or_reset_startup_file():
if STARTUP_FILE.exists():
STARTUP_FILE.unlink() # deletes the old file
# this is important if someone messed around with the file
# if he calls our method, he expects that we repair everything
# therefore, we delete the old file and write a new, valid version
STARTUP_FILE.touch() # create a new file
def _write_into_startup_file():
with STARTUP_FILE.open("w") as file:
file.write(
f"""
# HOW TO DEACTIVATE AUTO-IMPORT:
# if you dont want to auto-import pyforest, you have two options:
# 0) if you only want to disable the auto-import temporarily and activate it later,
# you can uncomment the import statement below
# 1) if you never want to auto-import pyforest again, you can delete this file
try:
import pyforest # uncomment this line if you temporarily dont want to auto-import pyforest
pass
except:
pass
"""
)
def setup():
if not IPYTHON_STARTUP_FOLDER.exists():
print(
f"Error: Could not find the default IPython startup folder at {IPYTHON_STARTUP_FOLDER}"
)
return False
_create_or_reset_startup_file()
_write_into_startup_file()
print(
"Success: pyforest is now available in Jupyter Notebook, Jupyter Lab and IPython because it was added to the IPython auto import"
)
return True
| 31.34 | 137 | 0.699426 | 223 | 1,567 | 4.766816 | 0.457399 | 0.093133 | 0.094073 | 0.045155 | 0.120414 | 0.052681 | 0 | 0 | 0 | 0 | 0 | 0.001669 | 0.235482 | 1,567 | 49 | 138 | 31.979592 | 0.885643 | 0.141034 | 0 | 0.114286 | 0 | 0.028571 | 0.526119 | 0.034328 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0.057143 | 0.257143 | 0 | 0.4 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b79836d2b12f5a44dd3688e75af8cc0da3616913 | 3,982 | py | Python | egg/zoo/basic_games/data_readers.py | renata-nerenata/EGG | b8532efc3569defabeba6851986cecb0c6640984 | [
"MIT"
] | 1 | 2021-05-26T14:23:25.000Z | 2021-05-26T14:23:25.000Z | egg/zoo/basic_games/data_readers.py | renata-nerenata/EGG | b8532efc3569defabeba6851986cecb0c6640984 | [
"MIT"
] | 1 | 2019-10-31T16:21:01.000Z | 2019-10-31T16:21:01.000Z | egg/zoo/basic_games/data_readers.py | renata-nerenata/EGG | b8532efc3569defabeba6851986cecb0c6640984 | [
"MIT"
] | null | null | null | # Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import torch
from torch.utils.data import Dataset
import numpy as np
# These input-data-processing classes take input data from a text file and convert them to the format
# appropriate for the recognition and discrimination games, so that they can be read by
# the standard pytorch DataLoader. The latter requires the data reading classes to support
# a __len__(self) method, returning the size of the dataset, and a __getitem__(self, idx)
# method, returning the idx-th item in the dataset. We also provide a get_n_features(self) method,
# returning the dimensionality of the Sender input vector after it is transformed to one-hot format.
# The AttValRecoDataset class is used in the reconstruction game. It takes an input file with a
# space-delimited attribute-value vector per line and creates a data-frame with the two mandatory
# fields expected in EGG games, namely sender_input and labels.
# In this case, the two fields contain the same information, namely the input attribute-value vectors,
# represented as one-hot in sender_input, and in the original integer-based format in
# labels.
class AttValRecoDataset(Dataset):
def __init__(self, path,n_attributes,n_values):
frame = np.loadtxt(path, dtype='S10')
self.frame = []
for row in frame:
if (n_attributes==1):
row = row.split()
config = list(map(int, row))
z=torch.zeros((n_attributes,n_values))
for i in range(n_attributes):
z[i,config[i]]=1
label = torch.tensor(list(map(int, row)))
self.frame.append((z.view(-1),label))
def get_n_features(self):
return self.frame[0][0].size(0)
def __len__(self):
return len(self.frame)
def __getitem__(self, idx):
return self.frame[idx]
# The AttValDiscriDataset class, used in the discrimination game takes an input file with a variable
# number of period-delimited fields, where all fields but the last represent attribute-value vectors
# (with space-delimited attributes). The last field contains the index (counting from 0) of the target
# vector.
# Here, we create a data-frame containing 3 fields: sender_input, labels and receiver_input (these are
# expected by EGG, the first two mandatorily so).
# The sender_input corresponds to the target vector (in one-hot format), labels are the indices of the
# target vector location and receiver_input is a matrix with a row for each input vector (in input order).
class AttValDiscriDataset(Dataset):
def __init__(self, path,n_values):
frame = open(path,'r')
self.frame = []
for row in frame:
raw_info = row.split('.')
index_vectors = list([list(map(int,x.split())) for x in raw_info[:-1]])
target_index = int(raw_info[-1])
target_one_hot = []
for index in index_vectors[target_index]:
current=np.zeros(n_values)
current[index]=1
target_one_hot=np.concatenate((target_one_hot,current))
target_one_hot_tensor = torch.FloatTensor(target_one_hot)
one_hot = []
for index_vector in index_vectors:
for index in index_vector:
current=np.zeros(n_values)
current[index]=1
one_hot=np.concatenate((one_hot,current))
one_hot_sequence = torch.FloatTensor(one_hot).view(len(index_vectors),-1)
label= torch.tensor(target_index)
self.frame.append((target_one_hot_tensor,label,one_hot_sequence))
frame.close()
def get_n_features(self):
return self.frame[0][0].size(0)
def __len__(self):
return len(self.frame)
def __getitem__(self, idx):
return self.frame[idx]
| 45.770115 | 106 | 0.678805 | 576 | 3,982 | 4.541667 | 0.314236 | 0.034404 | 0.027523 | 0.018349 | 0.155963 | 0.155963 | 0.105505 | 0.105505 | 0.079511 | 0.079511 | 0 | 0.005937 | 0.238574 | 3,982 | 86 | 107 | 46.302326 | 0.85686 | 0.462079 | 0 | 0.384615 | 0 | 0 | 0.002361 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.057692 | 0.115385 | 0.365385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
b799cded976768ac0fb3f24b3d043843412ec29f | 4,056 | py | Python | Round 3/fence_design.py | e-ntro-py/GoogleCodeJam-2021 | c42283480fa20a853c6d31d5faf0e83c6ad0f5f7 | [
"MIT"
] | 30 | 2021-03-27T20:18:15.000Z | 2022-03-19T06:18:58.000Z | Round 3/fence_design.py | e-ntro-py/GoogleCodeJam-2021 | c42283480fa20a853c6d31d5faf0e83c6ad0f5f7 | [
"MIT"
] | 1 | 2021-05-24T19:14:29.000Z | 2021-05-25T04:14:10.000Z | Round 3/fence_design.py | e-ntro-py/GoogleCodeJam-2021 | c42283480fa20a853c6d31d5faf0e83c6ad0f5f7 | [
"MIT"
] | 7 | 2021-03-28T12:38:55.000Z | 2021-09-19T15:30:39.000Z | # Copyright (c) 2021 kamyu. All rights reserved.
#
# Google Code Jam 2021 Round 3 - Problem C. Fence Design
# https://codingcompetitions.withgoogle.com/codejam/round/0000000000436142/0000000000813bc7
#
# Time: O(NlogN) on average, pass in PyPy2 but Python2
# Space: O(N)
#
from random import seed, randint
# Compute the cross product of vectors AB and AC
CW, COLLINEAR, CCW = range(-1, 2)
def ccw(A, B, C):
area = (B[0]-A[0])*(C[1]-A[1]) - (B[1]-A[1])*(C[0]-A[0])
return CCW if area > 0 else CW if area < 0 else COLLINEAR
def same_side(A, B, C, D):
return ccw(A,C,D) == 0 or ccw(B,C,D) == 0 or ccw(A,C,D) == ccw(B,C,D)
def rotate(hull, split):
for i in xrange(len(hull)):
if hull[i] in split and hull[(i-1)%len(hull)] in split:
return hull[i:]+hull[:i]
return hull[:]
def add_result(result, x):
result.add(tuple(sorted(x)))
def add_triangle(P, left_ccw, right_cw, result, lookup):
p, q = 0, 1
while True:
p1 = (p+1)%len(left_ccw)
if ccw(P[left_ccw[p1]], P[left_ccw[p]], P[right_cw[q]]) == CCW:
add_result(result, [left_ccw[p1], right_cw[q]])
lookup.add(left_ccw[p]) # inside the convex hull
p = p1
continue
q1 = (q+1)%len(right_cw)
if ccw(P[left_ccw[p]], P[right_cw[q]], P[right_cw[q1]]) == CCW:
add_result(result, [right_cw[q1], left_ccw[p]])
lookup.add(right_cw[q]) # inside the convex hull
q = q1
continue
break
def conquer(P, left, right, split, result): # Time: O(N)
if len(left) == 2:
return right
if len(right) == 2:
return left
lookup = set()
left_ccw, right_cw = rotate(left, split), rotate(right[::-1], split)
add_triangle(P, left_ccw, right_cw, result, lookup)
right_ccw, left_cw = rotate(right, split), rotate(left[::-1], split)
add_triangle(P, right_ccw, left_cw, result, lookup)
return [x for x in left_ccw if x not in lookup] + \
[x for x in right_ccw[1:-1] if x not in lookup]
def divide(P, f, curr, split, result): # depth at most O(logN) on average => Time: O(NlogN)
if len(curr) == 2:
return curr
if len(curr) == 3: # terminal case
p = next(p for p in curr if p not in split)
for x in split:
add_result(result, [p, x])
return [p, split[0], split[1]] if ccw(P[p], P[split[0]], P[split[1]]) == CCW else [p, split[1], split[0]]
if f: # prefer to use pre-placed fence
new_split = f.pop()
else:
while True:
idx = randint(0, len(curr)-1)
p = curr[idx]
curr[idx], curr[-1] = curr[-1], curr[idx]
q = curr[randint(0, len(curr)-2)]
if p > q:
p, q = q, p
if (p, q) not in result:
break
new_split = (p, q)
add_result(result, new_split)
left = [x for x in curr if ccw(P[new_split[0]], P[new_split[1]], P[x]) != CCW]
right = [x for x in curr if ccw(P[new_split[0]], P[new_split[1]], P[x]) != CW]
return conquer(P,
divide(P, f if f and f[-1][0] in left and f[-1][1] in left else [], left, new_split, result),
divide(P, f if f and f[-1][0] in right and f[-1][1] in right else [], right, new_split, result),
new_split, result)
def fence_design():
N = input()
P = [map(int, raw_input().strip().split()) for _ in xrange(N)]
f = [map(lambda x: int(x)-1, raw_input().strip().split()) for _ in xrange(2)]
f = [tuple(sorted(x)) for x in f]
if not same_side(P[f[0][0]], P[f[0][1]], P[f[1][0]], P[f[1][1]]):
# make sure f[0] will be on the same side of f[1]
f[0], f[1] = f[1], f[0]
result = set()
hull = divide(P, f[:], range(len(P)), [], result)
assert(len(result) == 3*N-3-len(hull))
return "%s\n"%(len(result)-2)+"\n".join("%s %s"%(x[0]+1, x[1]+1) for x in [x for x in result if x not in f])
seed(0)
for case in xrange(input()):
print 'Case #%d: %s' % (case+1, fence_design())
| 38.264151 | 115 | 0.554487 | 715 | 4,056 | 3.072727 | 0.177622 | 0.035048 | 0.021848 | 0.019117 | 0.175694 | 0.129267 | 0.129267 | 0.102868 | 0.086482 | 0.051889 | 0 | 0.042373 | 0.272682 | 4,056 | 105 | 116 | 38.628571 | 0.702373 | 0.124507 | 0 | 0.071429 | 0 | 0 | 0.006508 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 0 | null | null | 0 | 0.011905 | null | null | 0.011905 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b79f4a64c362393fd37c99b135489a7797ae3252 | 1,421 | py | Python | device_e2e/sync/test_sync_c2d.py | dt-boringtao/azure-iot-sdk-python | 35a09679bdf4d7a727391b265a8f1fbb99a30c45 | [
"MIT"
] | null | null | null | device_e2e/sync/test_sync_c2d.py | dt-boringtao/azure-iot-sdk-python | 35a09679bdf4d7a727391b265a8f1fbb99a30c45 | [
"MIT"
] | null | null | null | device_e2e/sync/test_sync_c2d.py | dt-boringtao/azure-iot-sdk-python | 35a09679bdf4d7a727391b265a8f1fbb99a30c45 | [
"MIT"
] | null | null | null | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
import pytest
import logging
import json
import threading
from utils import get_random_dict
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
# TODO: add tests for various application properties
# TODO: is there a way to call send_c2d so it arrives as an object rather than a JSON string?
@pytest.mark.describe("Client C2d")
class TestReceiveC2d(object):
@pytest.mark.it("Can receive C2D")
@pytest.mark.quicktest_suite
def test_sync_receive_c2d(self, client, service_helper, leak_tracker):
leak_tracker.set_initial_object_list()
message = json.dumps(get_random_dict())
received_message = None
received = threading.Event()
def handle_on_message_received(message):
nonlocal received_message, received
logger.info("received {}".format(message))
received_message = message
received.set()
client.on_message_received = handle_on_message_received
service_helper.send_c2d(message, {})
received.wait(timeout=60)
assert received.is_set()
assert received_message.data.decode("utf-8") == message
received_message = None # so this isn't tagged as a leak
leak_tracker.check_for_leaks()
| 30.891304 | 93 | 0.713582 | 184 | 1,421 | 5.304348 | 0.548913 | 0.122951 | 0.052254 | 0.047131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007993 | 0.2076 | 1,421 | 45 | 94 | 31.577778 | 0.858792 | 0.228008 | 0 | 0.071429 | 0 | 0 | 0.037615 | 0 | 0 | 0 | 0 | 0.022222 | 0.071429 | 1 | 0.071429 | false | 0 | 0.178571 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b79f9124d587b0b999491249d4952350ec3b140e | 6,588 | py | Python | addons/mixer/blender_data/tests/test_bpy_blend_diff.py | trisadmeslek/V-Sekai-Blender-tools | 0d8747387c58584b50c69c61ba50a881319114f8 | [
"MIT"
] | null | null | null | addons/mixer/blender_data/tests/test_bpy_blend_diff.py | trisadmeslek/V-Sekai-Blender-tools | 0d8747387c58584b50c69c61ba50a881319114f8 | [
"MIT"
] | null | null | null | addons/mixer/blender_data/tests/test_bpy_blend_diff.py | trisadmeslek/V-Sekai-Blender-tools | 0d8747387c58584b50c69c61ba50a881319114f8 | [
"MIT"
] | null | null | null | # GPLv3 License
#
# Copyright (C) 2020 Ubisoft
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import unittest
from bpy import data as D # noqa
from bpy import types as T # noqa
from mixer.blender_data.bpy_data_proxy import BpyDataProxy
from mixer.blender_data.diff import BpyBlendDiff
from mixer.blender_data.filter import test_properties
def sort_renamed_item(x):
return x[1]
class TestDiff(unittest.TestCase):
def setUp(self):
for w in D.worlds:
D.worlds.remove(w)
self.proxy = BpyDataProxy()
def test_create(self):
# test_diff.TestDiff.test_create
self.proxy.load(test_properties)
new_worlds = ["W0", "W1"]
new_worlds.sort()
for w in new_worlds:
D.worlds.new(w)
diff = BpyBlendDiff()
diff.diff(self.proxy, test_properties)
for collection_name, delta in diff.collection_deltas:
self.assertEqual(0, len(delta.items_removed), f"removed count mismatch for {collection_name}")
self.assertEqual(0, len(delta.items_renamed), f"renamed count mismatch for {collection_name}")
if collection_name == "worlds":
self.assertEqual(len(new_worlds), len(delta.items_added), f"added count mismatch for {collection_name}")
found = [datablock.name for datablock, _ in delta.items_added]
found.sort()
self.assertEqual(new_worlds, found, f"added count mismatch for {collection_name}")
else:
self.assertEqual(0, len(delta.items_added), f"added count mismatch for {collection_name}")
def test_remove(self):
# test_diff.TestDiff.test_create
new_worlds = ["W0", "W1", "W2"]
new_worlds.sort()
for w in new_worlds:
D.worlds.new(w)
self.proxy.load(test_properties)
removed = ["W0", "W1"]
removed.sort()
for w in removed:
D.worlds.remove(D.worlds[w])
diff = BpyBlendDiff()
diff.diff(self.proxy, test_properties)
for name, delta in diff.collection_deltas:
self.assertEqual(0, len(delta.items_added), f"added count mismatch for {name}")
self.assertEqual(0, len(delta.items_renamed), f"renamed count mismatch for {name}")
if name == "worlds":
self.assertEqual(len(removed), len(delta.items_removed), f"removed count mismatch for {name}")
items_removed = [proxy.data("name") for proxy in delta.items_removed]
items_removed.sort()
self.assertEqual(removed, items_removed, f"removed count mismatch for {name}")
else:
self.assertEqual(0, len(delta.items_added), f"added count mismatch for {name}")
def test_rename(self):
# test_diff.TestDiff.test_create
new_worlds = ["W0", "W1", "W2"]
new_worlds.sort()
for w in new_worlds:
D.worlds.new(w)
self.proxy.load(test_properties)
renamed = [("W0", "W00"), ("W2", "W22")]
renamed.sort(key=sort_renamed_item)
for old_name, new_name in renamed:
D.worlds[old_name].name = new_name
diff = BpyBlendDiff()
diff.diff(self.proxy, test_properties)
for name, delta in diff.collection_deltas:
self.assertEqual(0, len(delta.items_added), f"added count mismatch for {name}")
self.assertEqual(0, len(delta.items_removed), f"removed count mismatch for {name}")
if name == "worlds":
self.assertEqual(len(renamed), len(delta.items_renamed), f"renamed count mismatch for {name}")
items_renamed = list(delta.items_renamed)
items_renamed.sort(key=sort_renamed_item)
items_renamed = [(proxy.data("name"), new_name) for proxy, new_name in items_renamed]
self.assertEqual(renamed, items_renamed, f"removed count mismatch for {name}")
else:
self.assertEqual(0, len(delta.items_added), f"added count mismatch for {name}")
def test_create_delete_rename(self):
# test_diff.TestDiff.test_create
new_worlds = ["W0", "W1", "W2", "W4"]
new_worlds.sort()
for w in new_worlds:
D.worlds.new(w)
self.proxy.load(test_properties)
renamed = [("W0", "W00"), ("W2", "W22"), ("W4", "W44")]
renamed.sort(key=sort_renamed_item)
for old_name, new_name in renamed:
D.worlds[old_name].name = new_name
added = ["W0", "W5"]
added.sort()
for w in added:
D.worlds.new(w)
removed = ["W1", "W00"]
removed.sort()
for w in removed:
D.worlds.remove(D.worlds[w])
diff = BpyBlendDiff()
diff.diff(self.proxy, test_properties)
for name, delta in diff.collection_deltas:
if name == "worlds":
items_added = [datablock.name for datablock, _ in delta.items_added]
items_added.sort()
self.assertEqual(items_added, ["W0", "W5"], f"added count mismatch for {name}")
items_renamed = delta.items_renamed
items_renamed.sort(key=sort_renamed_item)
items_renamed = [(proxy.data("name"), new_name) for proxy, new_name in items_renamed]
self.assertEqual(items_renamed, [("W2", "W22"), ("W4", "W44")], f"renamed count mismatch for {name}")
items_removed = [proxy.data("name") for proxy in delta.items_removed]
items_removed.sort()
self.assertEqual(items_removed, ["W0", "W1"], f"removed count mismatch for {name}")
else:
self.assertEqual(0, len(delta.items_renamed), f"renamed count mismatch for {name}")
self.assertEqual(0, len(delta.items_removed), f"removed count mismatch for {name}")
self.assertEqual(0, len(delta.items_added), f"added count mismatch for {name}")
| 41.961783 | 120 | 0.622495 | 856 | 6,588 | 4.654206 | 0.15771 | 0.079066 | 0.084337 | 0.080321 | 0.724649 | 0.698293 | 0.665412 | 0.654116 | 0.63002 | 0.63002 | 0 | 0.013052 | 0.267304 | 6,588 | 156 | 121 | 42.230769 | 0.812306 | 0.119156 | 0 | 0.589286 | 0 | 0 | 0.147059 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 1 | 0.053571 | false | 0 | 0.053571 | 0.008929 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7a7160ec0048a1f0be91a335c4abd54fb69fa8b | 10,165 | py | Python | guiapp/meditech_nls_parser/old/meditech_nls_to_xml.py | gcampuzano14/PathISTabs | ae29a0b71647ecb32fc40e234b5c7276ab5333d9 | [
"MIT"
] | 1 | 2017-07-28T14:01:32.000Z | 2017-07-28T14:01:32.000Z | guiapp/meditech_nls_parser/old/meditech_nls_to_xml.py | gcampuzano14/PathISTabs | ae29a0b71647ecb32fc40e234b5c7276ab5333d9 | [
"MIT"
] | null | null | null | guiapp/meditech_nls_parser/old/meditech_nls_to_xml.py | gcampuzano14/PathISTabs | ae29a0b71647ecb32fc40e234b5c7276ab5333d9 | [
"MIT"
] | 1 | 2019-02-14T06:07:24.000Z | 2019-02-14T06:07:24.000Z | #!/usr/bin/env python
import os
import re
import csv
def fil():
filetext = os.path.join(os.path.dirname(__file__),'meditech_data','RAW')
#filetext = "C:/Users/gcampuzanozuluaga/Dropbox/Programming/Python/APPS/meditech_data/MT/RAW"
outtext_temp = os.path.join(os.path.dirname(__file__),'meditech_data','CLEANEDTEMP')
outtext = os.path.join(os.path.dirname(__file__),'meditech_data','CLEANED')
#outtext_temp = "C:/Users/gcampuzanozuluaga/Dropbox/Programming/Python/APPS/meditech_data/CLEANEDTEMP"
#outtext = "C:/Users/gcampuzanozuluaga/Dropbox/Programming/Python/APPS/meditech_data/CLEANED"
outxml = os.path.join(os.path.dirname(__file__),'meditech_data','MEDITECH_NLS.xml')
#outxml = "C:/Users/gcampuzanozuluaga/Dropbox/Programming/Python/APPS/meditech_data/MEDITECH_NLS.xml"
outtabdelim = os.path.join(os.path.dirname(__file__),'meditech_data','TAB_DELIM_MEDITECH_NLS.txt')
#outjson = "C:/Users/gcampuzanozuluaga/Dropbox/Programming/Python/APPS/meditech_data/MT/XML/MEDITECH_NLS.xml"
#outtabdelim = "C:/Users/gcampuzanozuluaga/Dropbox/Programming/Python/APPS/meditech_data/TAB_DELIM_MEDITECH_NLS.txt"
os.open(outxml, os.O_RDWR | os.O_CREAT)
with open(outxml, "w+") as out:
out.write("<UMH_PATHOLOGY>\n")
for f in os.listdir(filetext):
path = os.path.join(filetext, f)
clean_name = f[:-4] + "_CLEAN.TXT"
clean_path_temp = os.path.join(outtext_temp, clean_name)
clean_path = os.path.join(outtext, clean_name)
cleaner(path, clean_path_temp, clean_path)
mapper(f, clean_path, outxml, outtabdelim)
with open(outxml, "a+") as out:
out.write("</UMH_PATHOLOGY>\n")
# with open(outxml, "r") as out:
# t = out.read()
# with open(outxml, "w") as out:
# u = t.encode('latin1','xmlcharrefreplace').decode('utf8','xmlcharrefreplace')
# u = u.decode("utf-8").replace(u"\u2022", "*").u.encode("utf-8")
# out.write(u)
def mapper(filename, text, outxml, outtabdelim):
# text = "C:/Users/germancz/Dropbox/MEDITECH_NLS_2005.TXT_PROCESSED.txt"
logger = os.path.join(os.path.dirname(__file__),'meditech_data','meditech_case_log.txt')
#logger = "C:/Users/gcampuzanozuluaga/Dropbox/Programming/Python/APPS/meditech_data/MT/XML/meditech_case_log.txt"
os.open(text, os.O_RDWR)
with open(text, "r+") as log:
t = log.read()
print "mapping"
ptnameb = re.findall(r"PATIENT:\s*([\w\s]*\w),?(\D*?\w*)?\s+ACCT\s*#:\s*(\S+)?\s*LOC:?.*?U\s*#:\s*(\S+)?\s*\n*"
"AGE\/SEX:\s+(\d+)\/(\w{1})\s+DOB:(\d*\/*\d*\/*\d*)\s*?.*?\n*.*?\n*"
"Path\s+#:\s+(\d+:\w+:\w+)\s+\w+\s+Received:\s+(\d{2}\/\d{2}\/\d{2})\s*-\s*\d{4}\s+"
"Collected:\s*(\S+)\s*\n*(.*?)(\d+\/\d+\/\d+)\s*\d{4}\s*?\n*?.*?\n?(?=PATIENT:)", t, re.S)
newlistt = []
count = 0
print "creating list"
arr = (ptnameb, newlistt)
for e in arr[0]:
#print e
li = list(e)
arr[1].append(li)
count += 1
print count
os.open(logger, os.O_RDWR | os.O_CREAT)
with open(logger, "a+") as log:
t = str(filename) + ": \n" + "COUNT: " + str(count) + "\n"
log.write(t)
reducer (newlistt, outxml, outtabdelim)
def reducer(newlistt, outxml, outtabdelim):
# outxml = "C:/Users/gcampuzanozuluaga/Dropbox/MT/XML/MEDITECH_NLS.xml"
tabinit = " "*2 + "<MEDITECH_PATHOLOGY>\n" + " "*4
tabend = " "*2 + "</MEDITECH_PATHOLOGY>\n"
with open(outxml, "a+") as out:
# out.write("<UMH_PATHOLOGY>\n")
for e in newlistt:
t = e[7].rfind(":") + 1
nice_accession = e[7][t:]
xml = (tabinit
+ "<FIRST_NAME>" + e[0] + "</FIRST_NAME>\n"
+ " "*4 + "<LAST_NAME>" + e[1] + "</LAST_NAME>\n"
+ " "*4 + "<AGE>" + e[4] + "</AGE>\n"
+ " "*4 + "<SEX>" + e[5] + "</SEX>\n"
+ " "*4 + "<DOB>" + e[6] + "</DOB>\n"
+ " "*4 + "<ACCOUNT_NUM>" + e[2] + "</ACCOUNT_NUM>\n"
+ " "*4 + "<U_NUMBER>" + e[3] + "</U_NUMBER>\n"
+ " "*4 + "<ACCESSION_NUMBER_RAW>" + e[7] + "</ACCESSION_NUMBER_RAW>\n"
+ " "*4 + "<ACCESSION_NUMBER>" + nice_accession + "</ACCESSION_NUMBER>\n"
+ " "*4 + "<RECEIVED>" + e[8] + "</RECEIVED>\n"
+ " "*4 + "<COLLECTED>" + e[9] + "</COLLECTED>\n"
+ " "*4 + "<SIGNOUT_DATE>" + e[11] + "</SIGNOUT_DATE>\n"
+ " "*4 + "<TEXT>" + "<![CDATA[\n" + e[10] + "\n]]>" + "</TEXT>\n"
+ tabend)
#print xml
out.write(xml)
os.open(outtabdelim, os.O_RDWR | os.O_CREAT)
with open(outtabdelim, 'wb') as csvfile:
result_writer = csv.writer(csvfile, delimiter = "\t")
#with open(outtabdelim, "w+") as outtab:
result_writer.writerow(["FIRST_NAME", " SECOND_NAME", " U_NUMBER", " DOB", " AGE", " SEX", " ACCESSION_NUMBER", " RECEIVED", " SIGNOUT_DATE", " DX"])
for e in newlistt:
dxtext = str(e[10]).replace("\n"," ")
dxtext = dxtext.replace("\f"," ")
dxtext = re.sub(r"\s{2,}"," ", dxtext, re.S)
dxtext = re.sub(r'^[-+=*\/]{1,}','', dxtext)
dx = dxtext.lower()
if "malignant" in dx or "malignancy" in dx or "carcinoma" in dx or "cancer" in dx or "neoplasm" in dx or "sarcoma" in dx or "lymphoma" in dx or "blastoma" in dx:
t = e[7].rfind(":") + 1
nice_accession = e[7][t:]
t = e[7].rfind(":") + 1
nice_accession = e[7][t:]
patientstr = [str(e[0]), str(e[1]), str(e[3]), str(e[6]), str(e[4]), str(e[5]), str(e[1]), nice_accession, str(e[8]), str(e[11]), dxtext ]
result_writer.writerow(patientstr)
#outtab.write(patientstr)
# out.write("</UMH_PATHOLOGY>\n")
# print count
def cleaner(filetext, outtext_temp, outtext):
continuedline = re.compile(r"\n.*\(Continued\)\s*\n", re.MULTILINE)
disclaimer = re.compile(r"\s*This\sreport\sis\sprivileged,\sconfidential\sand\sexempt\sfrom\sdisclosure\sunder\sapplicable\slaw\.\s*\n"
"\s+If\syou\sreceive\sthis\sreport\sinadvertently,\splease\scall\s\(305\)\s325-5587\sand\s*\n"
"\s+return\sthe\sreport\sto\sus\sby\smail\.\s*\n", re.MULTILINE)
disclaimer_two = re.compile(r"\s*This\sreport\sis\sprivileged,\sconfidential\sand\sexempt\sfrom\sdisclosure\sunder\sapplicable\slaw\.\s*\n"
"\s+If\syou\sreceive\sthis\sreport\sinadvertently,\splease\scall\s\(305\)\s325-5587\sand", re.MULTILINE)
headings = re.compile(r"\s*?RUN\s+DATE:\s+\d{2}\/\d{2}\/\d{2}\s+ADVANCED\s+PATHOLOGY\s+ASSOCIATES\s+PAGE\s+\d+\s*\n"
"\s*RUN\s+TIME:\s+\d{4}\s+\*\*\*\sFINAL\sREPORT\s\*\*\*\s*\n+"
"\s*SURGICAL\s+PATHOLOGY\s+REPORT\s*\n\s+1400\s+NW\s+12th\s+Avenue,\s+Miami,\s+FL\s+33136\s*\n"
"\s+Telephone:\s+305-325-5587\s+FAX:\s+305-325-5899\s*\n", re.MULTILINE)
lines = re.compile(r"-{5,}", re.MULTILINE)
doub_space = re.compile(r"\n\n", re.MULTILINE)
illegalxml = re.compile(u'[\x00-\x08\x0b\x0c\x0e-\x1F\uD800-\uDFFF\uFFFE\uFFFF\u0192]')
# illegalxml = re.compile(u"[\x00-\x08\x0b\x0c\x0e-\x1F\u0000-\uD800-\uDFFF\u000B\u000C\u000E-\u001F\u007F-\u0084\u0086-\u009F\uD800-\uDFFF"
# "\uFDD0-\uFDFEF\uFFFE\uFFFF\u0192]")
os.open(filetext, os.O_RDWR)
with open(filetext, "r+") as log:
titler = log.read()
t = continuedline.sub("", titler)
t = disclaimer.sub("", t)
t = disclaimer_two.sub("", t)
t = headings.sub("", t)
# t = middle_case.sub(,t)
t = lines.sub("\n", t)
t = illegalxml.sub("?", t)
nline = doub_space.findall(t)
while len(nline) > 0:
t = doub_space.sub("\n", t)
nline = doub_space.findall(t)
print "newlines"
os.open(outtext_temp, os.O_RDWR | os.O_CREAT)
with open(outtext_temp, "w+") as out:
out.write(t)
with open(outtext_temp, "a") as endline:
endline.write("\nPATIENT: XXXX,XXXXX\n")
inbetween(outtext_temp, outtext)
def inbetween(outtext_temp, outtext):
# text = "C:/Users/gcampuzanozuluaga/Dropbox/MT/CLEANED/MEDITECH_NLS_2011_CLEAN.TXT"
# outtext = "C:/Users/gcampuzanozuluaga/Dropbox/MT/CLEANED/TESTOUT.TXT"
os.open(outtext_temp, os.O_RDWR)
with open(outtext_temp, "r+") as log:
t = log.read()
prevline = 0
count = 0
splitstuff = t.splitlines()
linecount = 0
for line in splitstuff:
# print line
if re.match(r"PATIENT:\s*(?:[\w\s]*\w),?(?:\D*?\w*)?\s+ACCT\s*#:\s*(?:\S+)?\s*LOC:?.*?U\s*#:\s*(?:\S+)?\s*", line):
# print "pte"
patient_head = line + "\n" + splitstuff[linecount + 1] + "\n" + splitstuff[linecount + 2]
prevline = linecount + 5
indicatepte = 1
if linecount > prevline:
if len(line) > 0:
indicatepte = 0
else:
pass
if re.match(r"Path\s+#:\s+(\d+:\w+:\w+)\s+\w+\s+Received:\s+(\d{2}\/\d{2}\/\d{2})\s*-\s*\d{4}\s+Collected:\s*(\S+)\s*", line):
count += 1
if indicatepte == 0:
line = patient_head + "\n" + line
splitstuff[linecount] = line
#print line
linecount += 1
os.open(outtext, os.O_RDWR | os.O_CREAT)
with open(outtext, "w+") as out:
for e in splitstuff:
t = e + "\n"
out.write(t)
print count
fil()
| 50.073892 | 174 | 0.52789 | 1,329 | 10,165 | 3.93529 | 0.190369 | 0.008031 | 0.043977 | 0.057361 | 0.404398 | 0.357744 | 0.323518 | 0.307266 | 0.286424 | 0.196941 | 0 | 0.029404 | 0.277324 | 10,165 | 202 | 175 | 50.321782 | 0.682548 | 0.163896 | 0 | 0.146667 | 0 | 0.1 | 0.269054 | 0.180256 | 0.006667 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006667 | 0.02 | null | null | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7af3d0ba516d34beefe1df3aa1a2a39558521ee | 318 | py | Python | python/cython_build.py | n1tk/batch_jaro_winkler | 421c7e3a5bedce89e8c00216b90d32d1629073a2 | [
"MIT"
] | 22 | 2020-04-30T17:56:29.000Z | 2022-01-19T21:05:15.000Z | python/cython_build.py | n1tk/batch_jaro_winkler | 421c7e3a5bedce89e8c00216b90d32d1629073a2 | [
"MIT"
] | 2 | 2021-01-19T14:07:22.000Z | 2021-11-24T16:32:46.000Z | python/cython_build.py | n1tk/batch_jaro_winkler | 421c7e3a5bedce89e8c00216b90d32d1629073a2 | [
"MIT"
] | 3 | 2020-10-28T20:56:29.000Z | 2022-02-25T23:29:05.000Z |
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import sys
python_version = sys.version_info[0]
setup(
name='batch_jaro_winkler',
ext_modules=cythonize([Extension('batch_jaro_winkler', ['cbatch_jaro_winkler.pyx'])], language_level=python_version)
) | 24.461538 | 118 | 0.811321 | 43 | 318 | 5.744186 | 0.55814 | 0.133603 | 0.129555 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003472 | 0.09434 | 318 | 13 | 119 | 24.461538 | 0.854167 | 0 | 0 | 0 | 0 | 0 | 0.18612 | 0.072555 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b7afa43603c9767401a62b1a4fe6fc631887605a | 6,647 | py | Python | venv/lib/python3.7/site-packages/google/type/postal_address_pb2.py | nicholasadamou/StockBird | 257396479667863d4ee122ea46adb86087c9aa78 | [
"Apache-2.0"
] | 15 | 2020-06-29T08:33:39.000Z | 2022-02-12T00:28:51.000Z | venv/lib/python3.7/site-packages/google/type/postal_address_pb2.py | nicholasadamou/StockBird | 257396479667863d4ee122ea46adb86087c9aa78 | [
"Apache-2.0"
] | 21 | 2020-03-01T18:21:09.000Z | 2020-05-26T14:49:08.000Z | venv/lib/python3.7/site-packages/google/type/postal_address_pb2.py | nicholasadamou/StockBird | 257396479667863d4ee122ea46adb86087c9aa78 | [
"Apache-2.0"
] | 11 | 2020-06-29T08:40:24.000Z | 2022-02-24T17:39:16.000Z | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/type/postal_address.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='google/type/postal_address.proto',
package='google.type',
syntax='proto3',
serialized_options=_b('\n\017com.google.typeB\022PostalAddressProtoP\001ZFgoogle.golang.org/genproto/googleapis/type/postaladdress;postaladdress\242\002\003GTP'),
serialized_pb=_b('\n google/type/postal_address.proto\x12\x0bgoogle.type\"\xfd\x01\n\rPostalAddress\x12\x10\n\x08revision\x18\x01 \x01(\x05\x12\x13\n\x0bregion_code\x18\x02 \x01(\t\x12\x15\n\rlanguage_code\x18\x03 \x01(\t\x12\x13\n\x0bpostal_code\x18\x04 \x01(\t\x12\x14\n\x0csorting_code\x18\x05 \x01(\t\x12\x1b\n\x13\x61\x64ministrative_area\x18\x06 \x01(\t\x12\x10\n\x08locality\x18\x07 \x01(\t\x12\x13\n\x0bsublocality\x18\x08 \x01(\t\x12\x15\n\raddress_lines\x18\t \x03(\t\x12\x12\n\nrecipients\x18\n \x03(\t\x12\x14\n\x0corganization\x18\x0b \x01(\tBu\n\x0f\x63om.google.typeB\x12PostalAddressProtoP\x01ZFgoogle.golang.org/genproto/googleapis/type/postaladdress;postaladdress\xa2\x02\x03GTPb\x06proto3')
)
_POSTALADDRESS = _descriptor.Descriptor(
name='PostalAddress',
full_name='google.type.PostalAddress',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='revision', full_name='google.type.PostalAddress.revision', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='region_code', full_name='google.type.PostalAddress.region_code', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='language_code', full_name='google.type.PostalAddress.language_code', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='postal_code', full_name='google.type.PostalAddress.postal_code', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='sorting_code', full_name='google.type.PostalAddress.sorting_code', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='administrative_area', full_name='google.type.PostalAddress.administrative_area', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='locality', full_name='google.type.PostalAddress.locality', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='sublocality', full_name='google.type.PostalAddress.sublocality', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='address_lines', full_name='google.type.PostalAddress.address_lines', index=8,
number=9, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='recipients', full_name='google.type.PostalAddress.recipients', index=9,
number=10, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='organization', full_name='google.type.PostalAddress.organization', index=10,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=50,
serialized_end=303,
)
DESCRIPTOR.message_types_by_name['PostalAddress'] = _POSTALADDRESS
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
PostalAddress = _reflection.GeneratedProtocolMessageType('PostalAddress', (_message.Message,), dict(
DESCRIPTOR = _POSTALADDRESS,
__module__ = 'google.type.postal_address_pb2'
# @@protoc_insertion_point(class_scope:google.type.PostalAddress)
))
_sym_db.RegisterMessage(PostalAddress)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| 47.141844 | 711 | 0.746352 | 883 | 6,647 | 5.374858 | 0.184598 | 0.057311 | 0.038348 | 0.045512 | 0.592078 | 0.512853 | 0.483354 | 0.459334 | 0.451327 | 0.451327 | 0 | 0.040673 | 0.123364 | 6,647 | 140 | 712 | 47.478571 | 0.773812 | 0.035204 | 0 | 0.495868 | 1 | 0.016529 | 0.243796 | 0.204776 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041322 | 0 | 0.041322 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7b19fe3229937c45439013a0185385ffb6134b0 | 2,007 | py | Python | BAMF_Detect/modules/dendroid.py | bwall/bamfdetect | 3b0b96a8b2285a1a4b3e3cf5ed1b5ad422b91ed1 | [
"MIT"
] | 152 | 2015-02-04T16:34:53.000Z | 2021-07-27T19:00:40.000Z | BAMF_Detect/modules/dendroid.py | bwall/bamfdetect | 3b0b96a8b2285a1a4b3e3cf5ed1b5ad422b91ed1 | [
"MIT"
] | null | null | null | BAMF_Detect/modules/dendroid.py | bwall/bamfdetect | 3b0b96a8b2285a1a4b3e3cf5ed1b5ad422b91ed1 | [
"MIT"
] | 30 | 2015-03-31T10:20:32.000Z | 2022-02-09T16:17:04.000Z | from common import Modules, data_strings, load_yara_rules, AndroidParseModule, ModuleMetadata
from base64 import b64decode
from string import printable
class dendroid(AndroidParseModule):
def __init__(self):
md = ModuleMetadata(
module_name="dendroid",
bot_name="Dendroid",
description="Android RAT",
authors=["Brian Wallace (@botnet_hunter)"],
version="1.0.0",
date="August 18, 2014",
references=[]
)
AndroidParseModule.__init__(self, md)
self.yara_rules = None
pass
def _generate_yara_rules(self):
if self.yara_rules is None:
self.yara_rules = load_yara_rules("dendroid.yara")
return self.yara_rules
def get_bot_information(self, file_data):
results = {}
uri = None
password = None
for s in data_strings(file_data, charset="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwx yz0123456789+/="):
try:
line = b64decode(s)
if len(line) == 0:
continue
valid = True
for c in line:
if c not in printable:
valid = False
if not valid:
continue
if line.lower().startswith("https://") or line.lower().startswith("http://"):
uri = line
continue
if uri is not None:
password = line
break
except TypeError:
continue
if uri is not None:
results["c2_uri"] = uri
if password is not None:
try:
password.decode("utf8")
results["password"] = password
except UnicodeDecodeError:
results["password"] = "h" + password.encode("hex")
return results
Modules.list.append(dendroid()) | 33.45 | 119 | 0.521176 | 189 | 2,007 | 5.380952 | 0.444444 | 0.061947 | 0.051131 | 0.029499 | 0.043265 | 0.043265 | 0 | 0 | 0 | 0 | 0 | 0.023121 | 0.396612 | 2,007 | 60 | 120 | 33.45 | 0.81668 | 0 | 0 | 0.148148 | 0 | 0 | 0.1001 | 0.0249 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.12963 | 0.055556 | 0 | 0.166667 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b7cb10c335526f698fe7f642c39ab4db21115697 | 246 | py | Python | logxs/__version__.py | minlaxz/logxs | e225e7a3c69b01595e1f2c11552b70e4b1540d47 | [
"MIT"
] | null | null | null | logxs/__version__.py | minlaxz/logxs | e225e7a3c69b01595e1f2c11552b70e4b1540d47 | [
"MIT"
] | null | null | null | logxs/__version__.py | minlaxz/logxs | e225e7a3c69b01595e1f2c11552b70e4b1540d47 | [
"MIT"
] | null | null | null | __title__ = 'logxs'
__description__ = 'Replacing with build-in `print` with nice formatting.'
__url__ = 'https://github.com/minlaxz/logxs'
__version__ = '0.3.2'
__author__ = 'Min Latt'
__author_email__ = 'minminlaxz@gmail.com'
__license__ = 'MIT' | 35.142857 | 73 | 0.747967 | 31 | 246 | 5 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013699 | 0.109756 | 246 | 7 | 74 | 35.142857 | 0.694064 | 0 | 0 | 0 | 0 | 0 | 0.510121 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7d37af2b6bf8f16d281543414e0b3b8888f7e5c | 1,121 | py | Python | src/spring-cloud/azext_spring_cloud/_validators_enterprise.py | SanyaKochhar/azure-cli-extensions | ff845c73e3110d9f4025c122c1938dd24a43cca0 | [
"MIT"
] | 2 | 2021-03-23T02:34:41.000Z | 2021-06-03T05:53:34.000Z | src/spring-cloud/azext_spring_cloud/_validators_enterprise.py | SanyaKochhar/azure-cli-extensions | ff845c73e3110d9f4025c122c1938dd24a43cca0 | [
"MIT"
] | 21 | 2021-03-16T23:04:40.000Z | 2022-03-24T01:45:54.000Z | src/spring-cloud/azext_spring_cloud/_validators_enterprise.py | SanyaKochhar/azure-cli-extensions | ff845c73e3110d9f4025c122c1938dd24a43cca0 | [
"MIT"
] | 9 | 2021-03-11T02:59:39.000Z | 2022-02-24T21:46:34.000Z | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# pylint: disable=too-few-public-methods, unused-argument, redefined-builtin
from azure.cli.core.azclierror import ClientRequestError
from ._util_enterprise import is_enterprise_tier
def only_support_enterprise(cmd, namespace):
if namespace.resource_group and namespace.service and not is_enterprise_tier(cmd, namespace.resource_group, namespace.service):
raise ClientRequestError("'{}' only supports for Enterprise tier Spring instance.".format(namespace.command))
def not_support_enterprise(cmd, namespace):
if namespace.resource_group and namespace.service and is_enterprise_tier(cmd, namespace.resource_group, namespace.service):
raise ClientRequestError("'{}' doesn't support for Enterprise tier Spring instance.".format(namespace.command))
| 56.05 | 131 | 0.667261 | 118 | 1,121 | 6.20339 | 0.5 | 0.095628 | 0.120219 | 0.079235 | 0.568306 | 0.568306 | 0.568306 | 0.568306 | 0.423497 | 0.423497 | 0 | 0 | 0.099019 | 1,121 | 19 | 132 | 59 | 0.724752 | 0.366637 | 0 | 0 | 0 | 0 | 0.159091 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7d4dda1b3752a19244c734487e74c4425e170d8 | 8,796 | py | Python | fluentql/function.py | RaduG/fluentql | 653a77bb95b40724eb58744f5f8dbed9c88eaebd | [
"MIT"
] | 4 | 2020-04-15T10:50:03.000Z | 2021-07-22T12:23:50.000Z | fluentql/function.py | RaduG/fluentql | 653a77bb95b40724eb58744f5f8dbed9c88eaebd | [
"MIT"
] | 2 | 2020-05-24T08:54:56.000Z | 2020-05-24T09:04:31.000Z | fluentql/function.py | RaduG/fluentql | 653a77bb95b40724eb58744f5f8dbed9c88eaebd | [
"MIT"
] | null | null | null | from typing import Any, TypeVar, Union
from types import MethodType, FunctionType
from .base_types import BooleanType, Constant, StringType, Collection, Referenceable
from .type_checking import TypeChecker
AnyArgs = TypeVar("AnyArgs")
NoArgs = TypeVar("NoArgs")
VarArgs = TypeVar("VarArgs")
T = TypeVar("T")
class WithOperatorSupport:
"""
Implements operator support.
"""
def __gt__(self, other):
return GreaterThan(self, other)
def __ge__(self, other):
return GreaterThanOrEqual(self, other)
def __lt__(self, other):
return LessThan(self, other)
def __le__(self, other):
return LessThanOrEqual(self, other)
def __eq__(self, other):
return Equals(self, other)
def __ne__(self, other):
return NotEqual(self, other)
def __add__(self, other):
return Add(self, other)
def __radd__(self, other):
return Add(other, self)
def __sub__(self, other):
return Subtract(self, other)
def __rsub__(self, other):
return Subtract(other, self)
def __mul__(self, other):
return Multiply(self, other)
def __rmul__(self, other):
return Multiply(other, self)
def __truediv__(self, other):
return Divide(self, other)
def __rtruediv__(self, other):
return Divide(other, self)
def __mod__(self, other):
return Modulo(self, other)
def __rmod__(self, other):
return Modulo(other, self)
def __and__(self, other):
return BitwiseAnd(self, other)
def __rand__(self, other):
return BitwiseAnd(other, self)
def __or__(self, other):
return BitwiseOr(self, other)
def __ror__(self, other):
return BitwiseOr(other, self)
def __xor__(self, other):
return BitwiseXor(self, other)
def __rxor__(self, other):
return BitwiseXor(other, self)
def __invert__(self):
return Not(self)
class F(Referenceable):
def __init_subclass__(cls, **kwargs):
"""
Use init_subclass to map the arguments / return value based on type
annotations, instead of going hard at it with a metaclass.
Args:
cls (type):
**kwargs (dict):
"""
cls._process_annotations()
@classmethod
def _process_annotations(cls):
"""
Set __args__ and __returns__ attributes to cls. Those will be set to
the user annotations, if any, or will default to:
AnyArgs - for __args__
Any - for __returns__
Args:
cls (object):
"""
try:
annotations = {**cls.__annotations__}
except AttributeError:
annotations = {}
# Check for "returns"
if "returns" in annotations:
cls.__returns__ = annotations.pop("returns")
elif hasattr(cls, "returns"):
cls.__returns__ = cls.returns
else:
cls.__returns__ = Any
if len(annotations) == 0:
cls.__args__ = AnyArgs
elif len(annotations) == 1 and list(annotations.values())[0] is NoArgs:
cls.__args__ = NoArgs
else:
cls.__args__ = tuple(annotations.values())
def __init__(self, *args):
self._validate_args(args)
self.__values__ = args
self.__returns__ = self._get_return_type()
def _get_return_type(self):
# If __returns__ is a function, the result of it called
# on args is the actual return type
if isinstance(self.__returns__, (FunctionType, MethodType)):
# Replace F arg types with their return values
return self.__returns__(
tuple(self.__type_checker__._matched_types),
self.__type_checker__._type_var_mapping,
)
return self.__returns__
@property
def values(self):
return self.__values__
@classmethod
def new(cls, name):
"""
Returns a new subclass of cls, with the given name.
Args:
name (str):
Returns:
type
"""
return type(name, (cls,), {})
def _validate_args(self, args):
if self.__args__ is AnyArgs:
if len(args) == 0:
raise TypeError(f"{type(self).__name__} takes at least one argument")
# All expected args are Any
arg_types = [Any] * len(args)
elif self.__args__ is NoArgs:
if len(args) > 0:
raise TypeError(f"{type(self).__name__} takes no arguments")
return
elif len(self.__args__) != len(args):
raise TypeError(
f"{type(self).__name__} takes {len(self.__args__)} arguments, {len(args)} given"
)
else:
# Replace F arg types with their return values
arg_types = [
arg.__returns__ if issubclass(type(arg), F) else type(arg)
for arg in args
]
self.__type_checker__ = TypeChecker(arg_types, self.__args__)
self.__type_checker__.validate()
class ArithmeticF(WithOperatorSupport, F):
@classmethod
def returns(cls, matched_types, type_var_mapping):
"""
If both args are Constant, the return value is Constant. Otherwise, the
return value is Collection.
Args:
args (list(type)): Argument types, in order
Returns:
type
"""
constant_type = type_var_mapping[Constant][1]
if any(Collection in t.__mro__ for t in matched_types if hasattr(t, "__mro__")):
return Collection[constant_type]
return constant_type
class BooleanF(F):
@classmethod
def returns(cls, matched_types, type_var_mapping):
"""
If both args are BooleanType, the return value is BooleanType.
Otherwise, the return value is collection.
Args:
args (list(type)): Argument types, in order
Returns:
type
"""
if any(Collection in t.__mro__ for t in matched_types if hasattr(t, "__mro__")):
return Collection[BooleanType]
return Collection[BooleanType]
class AggregateF(WithOperatorSupport, F):
@classmethod
def returns(cls, matched_types, type_var_mapping):
try:
return type_var_mapping[Constant][1]
except KeyError:
return Any
class ComparisonF(F):
pass
class OrderF(F):
pass
class Add(ArithmeticF):
a: Union[Constant, Collection[Constant]]
b: Union[Constant, Collection[Constant]]
class Subtract(ArithmeticF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class Multiply(ArithmeticF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class Divide(ArithmeticF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class Modulo(ArithmeticF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class BitwiseOr(BooleanF):
a: Union[Collection[BooleanType], BooleanType]
b: Union[Collection[BooleanType], BooleanType]
class BitwiseAnd(BooleanF):
a: Union[Collection[BooleanType], BooleanType]
b: Union[Collection[BooleanType], BooleanType]
class BitwiseXor(BooleanF):
a: Union[Collection[BooleanType], BooleanType]
b: Union[Collection[BooleanType], BooleanType]
class Equals(BooleanF):
a: Union[Constant, Collection[Constant]]
b: Union[Constant, Collection[Constant]]
class LessThan(BooleanF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class LessThanOrEqual(BooleanF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class GreaterThan(BooleanF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class GreaterThanOrEqual(BooleanF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class NotEqual(BooleanF):
a: Union[Constant, Collection[Any]]
b: Union[Constant, Collection[Any]]
class Not(BooleanF):
a: Union[BooleanType, Collection[BooleanType]]
class As(F):
a: T
b: str
returns: T
class TableStar(F):
a: Referenceable
returns: Any
class Star(F):
a: NoArgs
returns: Any
class Like(BooleanF):
a: Collection[StringType]
b: str
class In(BooleanF):
a: Collection[Any]
b: Any
class Max(AggregateF):
a: Collection[Constant]
class Min(AggregateF):
a: Collection[Constant]
class Sum(AggregateF):
a: Collection[Constant]
class Asc(OrderF):
a: Collection[Any]
returns: Collection[Any]
class Desc(OrderF):
a: Collection[Any]
returns: Collection[Any]
| 23.393617 | 96 | 0.624034 | 992 | 8,796 | 5.242944 | 0.174395 | 0.062296 | 0.063449 | 0.089983 | 0.379542 | 0.347433 | 0.347433 | 0.325899 | 0.311671 | 0.311671 | 0 | 0.001101 | 0.277285 | 8,796 | 375 | 97 | 23.456 | 0.817052 | 0.120737 | 0 | 0.282297 | 0 | 0.004785 | 0.029943 | 0.008497 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0.009569 | 0.019139 | 0.114833 | 0.717703 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
b7d854946bf40e07210624df5e0576dbd5f15fb1 | 945 | py | Python | coregent/net/core.py | landoffire/coregent | 908aaacbb7b2b9d8ea044d47b9518e8914dad08b | [
"Apache-2.0"
] | 1 | 2021-04-25T07:26:07.000Z | 2021-04-25T07:26:07.000Z | coregent/net/core.py | neurite-interactive/coregent | 908aaacbb7b2b9d8ea044d47b9518e8914dad08b | [
"Apache-2.0"
] | null | null | null | coregent/net/core.py | neurite-interactive/coregent | 908aaacbb7b2b9d8ea044d47b9518e8914dad08b | [
"Apache-2.0"
] | 2 | 2021-06-12T23:00:12.000Z | 2021-06-12T23:01:57.000Z | import abc
import collections.abc
import socket
__all__ = ['get_socket_type', 'get_server_socket', 'get_client_socket',
'SocketReader', 'SocketWriter', 'JSONReader', 'JSONWriter']
def get_socket_type(host=None, ip_type=None):
if ip_type is not None:
return ip_type
if host and ':' in host:
return socket.AF_INET6
return socket.AF_INET
def get_server_socket(host, port, ip_type=None):
sock = socket.socket(get_socket_type(host, ip_type))
sock.bind((host, port))
return sock
def get_client_socket(host, port, ip_type=None):
sock = socket.socket(get_socket_type(host, ip_type))
sock.connect((host, port))
return sock
class SocketReader(collections.abc.Iterator):
@abc.abstractmethod
def close(self):
...
class SocketWriter(abc.ABC):
@abc.abstractmethod
def send(self, message):
...
@abc.abstractmethod
def close(self):
...
| 21 | 71 | 0.671958 | 125 | 945 | 4.848 | 0.296 | 0.069307 | 0.085809 | 0.084158 | 0.316832 | 0.221122 | 0.221122 | 0.221122 | 0.221122 | 0.221122 | 0 | 0.001344 | 0.212698 | 945 | 44 | 72 | 21.477273 | 0.813172 | 0 | 0 | 0.4 | 0 | 0 | 0.099471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b7d90dcc48241b77ca82bd93f406aefe69d173b9 | 360 | py | Python | hackdayproject/urls.py | alstn2468/Naver_Campus_Hackday_Project | e8c3b638638182ccb8b4631c03cf5cb153c7278a | [
"MIT"
] | 1 | 2019-11-15T05:03:54.000Z | 2019-11-15T05:03:54.000Z | hackdayproject/urls.py | alstn2468/Naver_Campus_Hackday_Project | e8c3b638638182ccb8b4631c03cf5cb153c7278a | [
"MIT"
] | null | null | null | hackdayproject/urls.py | alstn2468/Naver_Campus_Hackday_Project | e8c3b638638182ccb8b4631c03cf5cb153c7278a | [
"MIT"
] | null | null | null | from django.urls import path, include
from django.contrib import admin
import hackdayproject.main.urls as main_urls
import hackdayproject.repo.urls as repo_urls
urlpatterns = [
path('admin/', admin.site.urls),
path('oauth/', include('social_django.urls', namespace='social')),
path('', include(main_urls)),
path('repo/', include(repo_urls))
]
| 30 | 70 | 0.727778 | 48 | 360 | 5.354167 | 0.354167 | 0.093385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130556 | 360 | 11 | 71 | 32.727273 | 0.821086 | 0 | 0 | 0 | 0 | 0 | 0.113889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b7daad942b4ee13674b01a3bc7990323f036b3a5 | 1,176 | py | Python | Financely/basic_app/models.py | Frostday/Financely | 23226aca0ad21971cb61d13509e16651b304d207 | [
"MIT"
] | 8 | 2021-05-28T16:09:36.000Z | 2022-02-27T23:12:48.000Z | Financely/basic_app/models.py | Frostday/Financely | 23226aca0ad21971cb61d13509e16651b304d207 | [
"MIT"
] | null | null | null | Financely/basic_app/models.py | Frostday/Financely | 23226aca0ad21971cb61d13509e16651b304d207 | [
"MIT"
] | 8 | 2021-05-28T16:01:48.000Z | 2022-02-27T23:12:50.000Z | from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Client(models.Model):
user = models.OneToOneField(User,null=True,blank= True,on_delete=models.CASCADE)
name = models.CharField(max_length=100, null=True)
# def __str__(self):
# return self.name
class Portfolio(models.Model):
client = models.OneToOneField(Client,on_delete=models.CASCADE,blank=True,null=True)
# def __str__(self):
# return self.client.name + "'s Portfolio"
class Stock(models.Model):
id = models.BigAutoField(primary_key=True)
parent_portfolio = models.ForeignKey(Portfolio,related_name="stocks",on_delete=models.CASCADE,null=True,blank=True)
stock_symbol = models.CharField(max_length=100,null=True)
stock_price = models.CharField(max_length=100,null=True,blank=True)
stock_sector_performance = models.CharField(max_length=100,null=True,blank=True)
stock_name = models.CharField(max_length=100,null=True)
quantity = models.IntegerField(default=0,null=True,blank=True)
date_added = models.DateTimeField(auto_now_add=True)
# def __str__(self):
# return self.stock_name
| 40.551724 | 119 | 0.747449 | 162 | 1,176 | 5.228395 | 0.339506 | 0.085006 | 0.076741 | 0.100354 | 0.360094 | 0.33412 | 0.305785 | 0.207792 | 0.115702 | 0.115702 | 0 | 0.015764 | 0.136905 | 1,176 | 28 | 120 | 42 | 0.818719 | 0.147959 | 0 | 0 | 0 | 0 | 0.006036 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b7e289ea7bf92691efc481deeec6261bf7909c3b | 850 | py | Python | get_tweet.py | Na27i/tweet_generator | 92a5156e041982dd12d9850445f15a599fb6ec5e | [
"MIT"
] | null | null | null | get_tweet.py | Na27i/tweet_generator | 92a5156e041982dd12d9850445f15a599fb6ec5e | [
"MIT"
] | null | null | null | get_tweet.py | Na27i/tweet_generator | 92a5156e041982dd12d9850445f15a599fb6ec5e | [
"MIT"
] | null | null | null | import json
import sys
import pandas
args = sys.argv
if len(args) == 1 :
import main as settings
else :
import sub as settings
from requests_oauthlib import OAuth1Session
CK = settings.CONSUMER_KEY
CS = settings.CONSUMER_SECRET
AT = settings.ACCESS_TOKEN
ATS = settings.ACCESS_TOKEN_SECRET
twitter = OAuth1Session(CK, CS, AT, ATS)
tweetlist = []
url = "https://api.twitter.com/1.1/statuses/user_timeline.json"
params = {"count" : 200}
for i range(5):
res = twitter.get(url, params = params)
if res.status_code == 200:
timelines = json.loads(res.text)
for tweet in timelines:
tweetlist.append(tweet["text"])
else:
print("取得失敗(%d)" % res.status_code)
datafile = pandas.DataFrame(tweetlist)
datafile.to_csv("tweetlist.csv", encoding='utf_8_sig') | 22.972973 | 64 | 0.661176 | 113 | 850 | 4.867257 | 0.566372 | 0.036364 | 0.069091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019817 | 0.228235 | 850 | 37 | 65 | 22.972973 | 0.818598 | 0 | 0 | 0.074074 | 0 | 0 | 0.115337 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7e4658365995b8bd790113c73797283daaf0910 | 907 | py | Python | 3.7.1/solution.py | luxnlex/stepic-python | 92a4b25391f76935c3c2a70fb8552e7f93928d9b | [
"MIT"
] | 1 | 2021-05-07T18:20:51.000Z | 2021-05-07T18:20:51.000Z | 3.7.1/solution.py | luxnlex/stepic-python | 92a4b25391f76935c3c2a70fb8552e7f93928d9b | [
"MIT"
] | null | null | null | 3.7.1/solution.py | luxnlex/stepic-python | 92a4b25391f76935c3c2a70fb8552e7f93928d9b | [
"MIT"
] | 2 | 2017-12-27T07:51:57.000Z | 2020-08-03T22:10:55.000Z | s=str(input())
a=[]
for i in range(len(s)):
si=s[i]
a.append(si)
b=[]
n=str(input())
for j in range(len(n)):
sj=n[j]
b.append(sj)
p={}
for pi in range(len(s)):
key=s[pi]
p[key]=0
j1=0
for i in range(0,len(a)):
key=a[i]
while j1<len(b):
bj=b[0]
if key in p:
p[key]=bj
b.remove(bj)
break
c=[]
si=str(input())
for si1 in range(0,len(si)):
ci=si[si1]
c.append(ci)
co=[]
for ci in range(0,len(c)):
if c[ci] in p:
cco=c[ci]
pco=p[cco]
co.append(pco)
d=[]
di=str(input())
for sj1 in range(0,len(di)):
dj=di[sj1]
d.append(dj)
do=[]
for di in range(0,len(d)):
for key in p:
pkey=key
if p.get(key) == d[di]:
ddo=pkey
do.append(ddo)
for i in range (0,len(co)):
print(co[i],end='')
print()
for j in range (0,len(do)):
print(do[j],end='') | 14.868852 | 31 | 0.485116 | 179 | 907 | 2.458101 | 0.22905 | 0.159091 | 0.127273 | 0.175 | 0.068182 | 0.068182 | 0 | 0 | 0 | 0 | 0 | 0.025397 | 0.305402 | 907 | 61 | 32 | 14.868852 | 0.673016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7e805c3fdc6130f33ad7d70c4f57afa4833b9f9 | 3,630 | py | Python | ecosante/users/schemas/__init__.py | betagouv/recosante-api | 4560b2cf2ff4dc19597792fe15a3805f6259201d | [
"MIT"
] | 3 | 2021-09-24T14:07:51.000Z | 2021-12-14T13:48:34.000Z | ecosante/users/schemas/__init__.py | betagouv/recosante-api | 4560b2cf2ff4dc19597792fe15a3805f6259201d | [
"MIT"
] | 187 | 2021-03-25T16:43:49.000Z | 2022-03-23T14:40:31.000Z | ecosante/users/schemas/__init__.py | betagouv/recosante-api | 4560b2cf2ff4dc19597792fe15a3805f6259201d | [
"MIT"
] | null | null | null | from dataclasses import field
from marshmallow import Schema, ValidationError, post_load, schema
from marshmallow.validate import OneOf, Length
from marshmallow.fields import Bool, Str, List, Nested, Email
from flask_rebar import ResponseSchema, RequestSchema, errors
from ecosante.inscription.models import Inscription
from ecosante.utils.custom_fields import TempList
from ecosante.api.schemas.commune import CommuneSchema
from ecosante.extensions import celery
from indice_pollution.history.models import Commune as CommuneModel
from flask import request
def list_str(choices, max_length=None, temp=False, **kwargs):
t = TempList if temp else List
return t(
Str(validate=OneOf(choices=choices)),
required=False,
allow_none=True,
validate=Length(min=0, max=max_length) if max_length else None,
**kwargs
)
class User(Schema):
commune = Nested(CommuneSchema, required=False, allow_none=True)
uid = Str(dump_only=True)
mail = Email(required=True)
deplacement = list_str(["velo", "tec", "voiture", "aucun"])
activites = list_str(["jardinage", "bricolage", "menage", "sport", "aucun"])
enfants = list_str(["oui", "non", "aucun"], temp=True)
chauffage = list_str(["bois", "chaudiere", "appoint", "aucun"])
animaux_domestiques = list_str(["chat", "chien", "aucun"])
connaissance_produit = list_str(["medecin", "association", "reseaux_sociaux", "publicite", "ami", "autrement"])
population = list_str(["pathologie_respiratoire", "allergie_pollens", "aucun"])
indicateurs = list_str(["indice_atmo", "raep", "indice_uv", "vigilance_meteorologique"])
indicateurs_frequence = list_str(["quotidien", "hebdomadaire", "alerte"], 1)
indicateurs_media = list_str(["mail", "notifications_web"])
recommandations = list_str(["oui", "non"], 1, attribute='recommandations_actives')
recommandations_frequence = list_str(["quotidien", "hebdomadaire", "pollution"], 1)
recommandations_media = list_str(["mail", "notifications_web"])
webpush_subscriptions_info = Str(required=False, allow_none=True, load_only=True)
class Response(User, ResponseSchema):
is_active = Bool(attribute='is_active')
class RequestPOST(User, RequestSchema):
@post_load
def make_inscription(self, data, **kwargs):
inscription = Inscription.query.filter(Inscription.mail.ilike(data['mail'])).first()
if inscription:
raise ValidationError('mail already used', field_name='mail')
inscription = Inscription(**data)
return inscription
class RequestPOSTID(User, RequestSchema):
def __init__(self, **kwargs):
super_kwargs = dict(kwargs)
partial_arg = super_kwargs.pop('partial', ['mail'])
super(RequestPOSTID, self).__init__(partial=partial_arg, **super_kwargs)
@post_load
def make_inscription(self, data, **kwargs):
uid = request.view_args.get('uid')
if not uid:
raise ValidationError('uid is required')
inscription = Inscription.query.filter_by(uid=uid).first()
if not inscription:
raise errors.NotFound('uid unknown')
if 'mail' in data:
inscription_same_mail = Inscription.query.filter(
Inscription.uid != uid,
Inscription.mail == data['mail']
).first()
if inscription_same_mail:
raise errors.Conflict('user with this mail already exists')
for k, v in data.items():
setattr(inscription, k, v)
return inscription
class RequestUpdateProfile(Schema):
mail = Email(required=True) | 43.214286 | 115 | 0.689532 | 413 | 3,630 | 5.893462 | 0.384988 | 0.040263 | 0.022186 | 0.027116 | 0.142975 | 0.059162 | 0.032868 | 0.032868 | 0 | 0 | 0 | 0.00136 | 0.189532 | 3,630 | 84 | 116 | 43.214286 | 0.825969 | 0 | 0 | 0.109589 | 0 | 0 | 0.135775 | 0.019278 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054795 | false | 0 | 0.150685 | 0 | 0.575342 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b7f128c1c030f4883afe9da12b85ac98f1c9b3dd | 9,603 | py | Python | openfl/component/ca/ca.py | saransh09/openfl-1 | beba571929a56771f2fc1671154a3dbe60b38785 | [
"Apache-2.0"
] | null | null | null | openfl/component/ca/ca.py | saransh09/openfl-1 | beba571929a56771f2fc1671154a3dbe60b38785 | [
"Apache-2.0"
] | 1 | 2022-03-02T18:07:11.000Z | 2022-03-10T02:43:12.000Z | openfl/component/ca/ca.py | saransh09/openfl-1 | beba571929a56771f2fc1671154a3dbe60b38785 | [
"Apache-2.0"
] | 1 | 2022-03-03T00:50:15.000Z | 2022-03-03T00:50:15.000Z | # Copyright (C) 2020-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""Aggregator module."""
import base64
import json
import os
import platform
import shutil
import signal
import subprocess
import time
import urllib.request
from logging import getLogger
from pathlib import Path
from subprocess import call
import requests
from click import confirm
logger = getLogger(__name__)
TOKEN_DELIMITER = '.'
CA_STEP_CONFIG_DIR = Path('step_config')
CA_PKI_DIR = Path('cert')
CA_PASSWORD_FILE = Path('pass_file')
CA_CONFIG_JSON = Path('config/ca.json')
def get_system_and_architecture():
"""Get system and architecture of machine."""
uname_res = platform.uname()
system = uname_res.system.lower()
architecture_aliases = {
'x86_64': 'amd64',
'armv6l': 'armv6',
'armv7l': 'armv7',
'aarch64': 'arm64'
}
architecture = uname_res.machine.lower()
for alias in architecture_aliases:
if architecture == alias:
architecture = architecture_aliases[alias]
break
return system, architecture
def download_step_bin(url, grep_name, architecture, prefix='.', confirmation=True):
"""
Donwload step binaries from github.
Args:
url: address of latest release
grep_name: name to grep over github assets
architecture: architecture type to grep
prefix: folder path to download
confirmation: request user confirmation or not
"""
if confirmation:
confirm('CA binaries from github will be downloaded now', default=True, abort=True)
result = requests.get(url)
if result.status_code != 200:
logger.warning('Can\'t download binaries from github. Please try lately.')
return
assets = result.json().get('assets', [])
archive_urls = [
a['browser_download_url']
for a in assets
if (grep_name in a['name'] and architecture in a['name']
and 'application/gzip' in a['content_type'])
]
if len(archive_urls) == 0:
raise Exception('Applicable CA binaries from github were not found '
f'(name: {grep_name}, architecture: {architecture})')
archive_url = archive_urls[-1]
archive_url = archive_url.replace('https', 'http')
name = archive_url.split('/')[-1]
logger.info(f'Downloading {name}')
urllib.request.urlretrieve(archive_url, f'{prefix}/{name}')
shutil.unpack_archive(f'{prefix}/{name}', f'{prefix}/step')
def get_token(name, ca_url, ca_path='.'):
"""
Create authentication token.
Args:
name: common name for following certificate
(aggregator fqdn or collaborator name)
ca_url: full url of CA server
ca_path: path to ca folder
"""
ca_path = Path(ca_path)
step_config_dir = ca_path / CA_STEP_CONFIG_DIR
pki_dir = ca_path / CA_PKI_DIR
step_path, _ = get_ca_bin_paths(ca_path)
if not step_path:
raise Exception('Step-CA is not installed!\nRun `fx pki install` first')
priv_json = step_config_dir / 'secrets' / 'priv.json'
pass_file = pki_dir / CA_PASSWORD_FILE
root_crt = step_config_dir / 'certs' / 'root_ca.crt'
try:
token = subprocess.check_output(
f'{step_path} ca token {name} '
f'--key {priv_json} --root {root_crt} '
f'--password-file {pass_file} 'f'--ca-url {ca_url}', shell=True)
except subprocess.CalledProcessError as exc:
logger.error(f'Error code {exc.returncode}: {exc.output}')
return
token = token.strip()
token_b64 = base64.b64encode(token)
with open(root_crt, mode='rb') as file:
root_certificate_b = file.read()
root_ca_b64 = base64.b64encode(root_certificate_b)
return TOKEN_DELIMITER.join([
token_b64.decode('utf-8'),
root_ca_b64.decode('utf-8'),
])
def get_ca_bin_paths(ca_path):
"""Get paths of step binaries."""
ca_path = Path(ca_path)
step = None
step_ca = None
if (ca_path / 'step').exists():
dirs = os.listdir(ca_path / 'step')
for dir_ in dirs:
if 'step_' in dir_:
step = ca_path / 'step' / dir_ / 'bin' / 'step'
if 'step-ca' in dir_:
step_ca = ca_path / 'step' / dir_ / 'bin' / 'step-ca'
return step, step_ca
def certify(name, cert_path: Path, token_with_cert, ca_path: Path):
"""Create an envoy workspace."""
os.makedirs(cert_path, exist_ok=True)
token, root_certificate = token_with_cert.split(TOKEN_DELIMITER)
token = base64.b64decode(token).decode('utf-8')
root_certificate = base64.b64decode(root_certificate)
step_path, _ = get_ca_bin_paths(ca_path)
if not step_path:
url = 'http://api.github.com/repos/smallstep/cli/releases/latest'
system, arch = get_system_and_architecture()
download_step_bin(url, f'step_{system}', arch, prefix=ca_path)
step_path, _ = get_ca_bin_paths(ca_path)
if not step_path:
raise Exception('Step-CA is not installed!\nRun `fx pki install` first')
with open(f'{cert_path}/root_ca.crt', mode='wb') as file:
file.write(root_certificate)
call(f'{step_path} ca certificate {name} {cert_path}/{name}.crt '
f'{cert_path}/{name}.key --kty EC --curve P-384 -f --token {token}', shell=True)
def remove_ca(ca_path):
"""Kill step-ca process and rm ca directory."""
_check_kill_process('step-ca')
shutil.rmtree(ca_path, ignore_errors=True)
def install(ca_path, ca_url, password):
"""
Create certificate authority for federation.
Args:
ca_path: path to ca directory
ca_url: url for ca server like: 'host:port'
password: Simple password for encrypting root private keys
"""
logger.info('Creating CA')
ca_path = Path(ca_path)
ca_path.mkdir(parents=True, exist_ok=True)
step_config_dir = ca_path / CA_STEP_CONFIG_DIR
os.environ['STEPPATH'] = str(step_config_dir)
step_path, step_ca_path = get_ca_bin_paths(ca_path)
if not (step_path and step_ca_path and step_path.exists() and step_ca_path.exists()):
confirm('CA binaries from github will be downloaded now', default=True, abort=True)
system, arch = get_system_and_architecture()
url = 'http://api.github.com/repos/smallstep/certificates/releases/latest'
download_step_bin(url, f'step-ca_{system}', arch, prefix=ca_path, confirmation=False)
url = 'http://api.github.com/repos/smallstep/cli/releases/latest'
download_step_bin(url, f'step_{system}', arch, prefix=ca_path, confirmation=False)
step_config_dir = ca_path / CA_STEP_CONFIG_DIR
if (not step_config_dir.exists()
or confirm('CA exists, do you want to recreate it?', default=True)):
_create_ca(ca_path, ca_url, password)
_configure(step_config_dir)
def run_ca(step_ca, pass_file, ca_json):
"""Run CA server."""
if _check_kill_process('step-ca', confirmation=True):
logger.info('Up CA server')
call(f'{step_ca} --password-file {pass_file} {ca_json}', shell=True)
def _check_kill_process(pstring, confirmation=False):
"""Kill process by name."""
pids = []
proc = subprocess.Popen(f'ps ax | grep {pstring} | grep -v grep',
shell=True, stdout=subprocess.PIPE)
text = proc.communicate()[0].decode('utf-8')
for line in text.splitlines():
fields = line.split()
pids.append(fields[0])
if len(pids):
if confirmation and not confirm('CA server is already running. Stop him?', default=True):
return False
for pid in pids:
os.kill(int(pid), signal.SIGKILL)
time.sleep(2)
return True
def _create_ca(ca_path: Path, ca_url: str, password: str):
"""Create a ca workspace."""
import os
pki_dir = ca_path / CA_PKI_DIR
step_config_dir = ca_path / CA_STEP_CONFIG_DIR
pki_dir.mkdir(parents=True, exist_ok=True)
step_config_dir.mkdir(parents=True, exist_ok=True)
with open(f'{pki_dir}/pass_file', 'w') as f:
f.write(password)
os.chmod(f'{pki_dir}/pass_file', 0o600)
step_path, step_ca_path = get_ca_bin_paths(ca_path)
assert (step_path and step_ca_path and step_path.exists() and step_ca_path.exists())
logger.info('Create CA Config')
os.environ['STEPPATH'] = str(step_config_dir)
shutil.rmtree(step_config_dir, ignore_errors=True)
name = ca_url.split(':')[0]
call(f'{step_path} ca init --name name --dns {name} '
f'--address {ca_url} --provisioner prov '
f'--password-file {pki_dir}/pass_file', shell=True)
call(f'{step_path} ca provisioner remove prov --all', shell=True)
call(f'{step_path} crypto jwk create {step_config_dir}/certs/pub.json '
f'{step_config_dir}/secrets/priv.json --password-file={pki_dir}/pass_file', shell=True)
call(
f'{step_path} ca provisioner add provisioner {step_config_dir}/certs/pub.json',
shell=True
)
def _configure(step_config_dir):
conf_file = step_config_dir / CA_CONFIG_JSON
with open(conf_file, 'r+') as f:
data = json.load(f)
data.setdefault('authority', {}).setdefault('claims', {})
data['authority']['claims']['maxTLSCertDuration'] = f'{365 * 24}h'
data['authority']['claims']['defaultTLSCertDuration'] = f'{365 * 24}h'
data['authority']['claims']['maxUserSSHCertDuration'] = '24h'
data['authority']['claims']['defaultUserSSHCertDuration'] = '24h'
f.seek(0)
json.dump(data, f, indent=4)
f.truncate()
| 34.793478 | 97 | 0.656357 | 1,331 | 9,603 | 4.512397 | 0.220135 | 0.040959 | 0.047619 | 0.010989 | 0.31302 | 0.279387 | 0.230936 | 0.191975 | 0.180153 | 0.157676 | 0 | 0.011615 | 0.220035 | 9,603 | 275 | 98 | 34.92 | 0.790254 | 0.099032 | 0 | 0.161458 | 0 | 0 | 0.231891 | 0.031867 | 0 | 0 | 0 | 0 | 0.005208 | 1 | 0.057292 | false | 0.067708 | 0.078125 | 0 | 0.171875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b7f17afa5fddb406481a5085256bccee3d1bcc8c | 574 | py | Python | bin/optimization/cosmo_optimizer_hod_only.py | mclaughlin6464/pearce | 746f2bf4bf45e904d66996e003043661a01423ba | [
"MIT"
] | null | null | null | bin/optimization/cosmo_optimizer_hod_only.py | mclaughlin6464/pearce | 746f2bf4bf45e904d66996e003043661a01423ba | [
"MIT"
] | 16 | 2016-11-04T22:24:32.000Z | 2018-05-01T22:53:39.000Z | bin/optimization/cosmo_optimizer_hod_only.py | mclaughlin6464/pearce | 746f2bf4bf45e904d66996e003043661a01423ba | [
"MIT"
] | 3 | 2016-10-04T08:07:52.000Z | 2019-05-03T23:50:01.000Z | from pearce.emulator import OriginalRecipe, ExtraCrispy
import numpy as np
training_file = '/home/users/swmclau2/scratch/PearceRedMagicWpCosmo.hdf5'
em_method = 'gp'
split_method = 'random'
a = 1.0
z = 1.0/a - 1.0
fixed_params = {'z':z, 'cosmo': 1}#, 'r':0.18477483}
n_leaves, n_overlap = 5, 2
emu = ExtraCrispy(training_file,n_leaves, n_overlap, split_method, method = em_method, fixed_params=fixed_params,\
custom_mean_function = None)
results = emu.train_metric()
print results
print
print dict(zip(emu.get_param_names(), np.exp(results.x)))
| 23.916667 | 115 | 0.721254 | 87 | 574 | 4.551724 | 0.597701 | 0.015152 | 0.015152 | 0.075758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041068 | 0.151568 | 574 | 23 | 116 | 24.956522 | 0.772074 | 0.029617 | 0 | 0 | 0 | 0 | 0.124101 | 0.098921 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.133333 | null | null | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7f84a7d5201859ed1a739cf1602952494964553 | 7,702 | py | Python | channels/italiaserie.py | sodicarus/channels | d77402f4f460ea6daa66959aa5384aaffbff70b5 | [
"MIT"
] | null | null | null | channels/italiaserie.py | sodicarus/channels | d77402f4f460ea6daa66959aa5384aaffbff70b5 | [
"MIT"
] | null | null | null | channels/italiaserie.py | sodicarus/channels | d77402f4f460ea6daa66959aa5384aaffbff70b5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# ------------------------------------------------------------
# streamondemand-pureita.- XBMC Plugin
# Canale italiaserie
# http://www.mimediacenter.info/foro/viewtopic.php?f=36&t=7808
# ------------------------------------------------------------
import re
from core import httptools
from core import logger
from core import config
from core import servertools
from core import scrapertools
from core.item import Item
from core.tmdb import infoSod
__channel__ = "italiaserie"
host = "https://italiaserie.org"
headers = [['Referer', host]]
def isGeneric():
return True
def mainlist(item):
logger.info("streamondemand-pureita -[italiaserie mainlist]")
itemlist = [Item(channel=__channel__,
action="peliculas",
title="[COLOR azure]Serie TV - [COLOR orange]Ultime Aggiunte[/COLOR]",
url="%s/category/serie-tv/" % host,
thumbnail="https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/popcorn_serie_P.png"),
Item(channel=__channel__,
action="peliculas",
title="[COLOR azure]Serie TV - [COLOR orange]Aggiornamenti[/COLOR]",
url="%s/ultimi-episodi/" % host,
thumbnail="https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/tv_series_P.png"),
Item(channel=__channel__,
action="categorie",
title="[COLOR azure]Serie TV - [COLOR orange]Categorie[/COLOR]",
url=host,
thumbnail="https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/genres_P.png"),
Item(channel=__channel__,
action="peliculas",
title="[COLOR azure]Serie TV - [COLOR orange]Animazione[/COLOR]",
url="%s/category/serie-tv/animazione-e-bambini/" % host,
thumbnail="https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/animation2_P.png"),
Item(channel=__channel__,
action="peliculas",
title="[COLOR azure]Serie TV - [COLOR orange]TV Show[/COLOR]",
url="%s/category/serie-tv/tv-show/" % host,
thumbnail="https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/new_tvshows_P.png"),
Item(channel=__channel__,
action="search",
title="[COLOR orange]Search ...[/COLOR]",
url=host,
thumbnail="https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/search_P.png")]
return itemlist
# ==================================================================================================================================================
def search(item, texto):
logger.info("streamondemand-pureita - [italiaserie search]")
item.url = host + "/?s=" + texto
try:
return peliculas(item)
# Se captura la excepción, para no interrumpir al buscador global si un canal falla
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
# ==================================================================================================================================================
def categorie(item):
logger.info("streamondemand-pureita -[italiaserie categorie]")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
blocco = scrapertools.get_match(data, r'<h3 class="title">Categorie</h3>(.*?)</ul>')
patron = r'<li class=".*?"><a href="([^"]+)" >([^<]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(blocco)
for scrapedurl, scrapedtitle in matches:
if "Serie TV" in scrapedtitle or "Tv Show" in scrapedtitle or "Animazione e Bambini" in scrapedtitle:
continue
itemlist.append(
Item(channel=__channel__,
action="peliculas",
title=scrapedtitle,
url=scrapedurl,
thumbnail='https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/genre_P.png',
folder=True))
return itemlist
# ==================================================================================================================================================
def peliculas(item):
logger.info("streamondemand-pureita -[serietvonline_co peliculas]")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
patron = r'<a href="([^"]+)"\s*title="([^"]+)">\s*<img src="([^<]+)"\s*alt[^>]+>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle, scrapedthumbnail in matches:
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle).strip()
scrapedplot=""
itemlist.append(infoSod(
Item(channel=__channel__,
action="episodes",
title=scrapedtitle,
fulltitle=scrapedtitle,
url=scrapedurl,
thumbnail=scrapedthumbnail,
plot=scrapedplot,
show=scrapedtitle,
folder=True), tipo="tv"))
next_page = scrapertools.find_single_match(data, '<a class="next page-numbers" href="([^"]+)">Next »</a>')
if next_page != "":
itemlist.append(
Item(channel=__channel__,
action="peliculas",
title="[COLOR orange]Successivi >>[/COLOR]",
url=next_page,
thumbnail="https://raw.githubusercontent.com/orione7/Pelis_images/master/channels_icon_pureita/next_1.png"))
return itemlist
# ==================================================================================================================================================
def episodes(item):
logger.info("streamondemand-pureita -[italiaserie episodes]")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
patron = '<a rel="nofollow"\s*target="_blank" act=".*?"\s*href="([^"]+)"\s*class="green-link">\s*<strong>([^<]+)</strong>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
itemlist.append(
Item(channel=__channel__,
action="findvideos",
title=scrapedtitle,
fulltitle=item.fulltitle + " - " + scrapedtitle,
show=item.show + " - " + scrapedtitle,
url=scrapedurl,
plot="[COLOR orange]" + item.title + "[/COLOR]" + item.plot,
thumbnail=item.thumbnail,
folder=True))
return itemlist
# ==================================================================================================================================================
def findvideos(item):
logger.info()
data = httptools.downloadpage(item.url).data
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
servername = re.sub(r'[-\[\]\s]+', '', videoitem.title)
videoitem.title = "".join(['[COLOR azure][[COLOR orange]' + servername.capitalize() + '[/COLOR]] - ', item.title])
videoitem.fulltitle = item.fulltitle
videoitem.show = item.show
videoitem.thumbnail = item.thumbnail
videoitem.plot = item.plot
videoitem.channel = __channel__
return itemlist
| 43.514124 | 149 | 0.531291 | 697 | 7,702 | 5.736011 | 0.242468 | 0.038519 | 0.045023 | 0.06003 | 0.490996 | 0.43947 | 0.363432 | 0.344672 | 0.318659 | 0.304152 | 0 | 0.003247 | 0.240197 | 7,702 | 176 | 150 | 43.761364 | 0.679938 | 0.139834 | 0 | 0.323308 | 0 | 0.007519 | 0.311299 | 0.073968 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.067669 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7f8e6d0c8a700576343e9ec9966950fe6696251 | 629 | py | Python | setup.py | jeffleary00/greenery | cb5b5d037b6fd297463633d2d3315c722851161f | [
"MIT"
] | null | null | null | setup.py | jeffleary00/greenery | cb5b5d037b6fd297463633d2d3315c722851161f | [
"MIT"
] | null | null | null | setup.py | jeffleary00/greenery | cb5b5d037b6fd297463633d2d3315c722851161f | [
"MIT"
] | 1 | 2018-02-25T17:29:37.000Z | 2018-02-25T17:29:37.000Z | from setuptools import setup
setup(
name='potnanny-api',
version='0.2.6',
packages=['potnanny_api'],
include_package_data=True,
description='Part of the Potnanny greenhouse controller application. Contains Flask REST API and basic web interface.',
author='Jeff Leary',
author_email='potnanny@gmail.com',
url='https://github.com/jeffleary00/potnanny-api',
install_requires=[
'requests',
'passlib',
'sqlalchemy',
'marshmallow',
'flask',
'flask-restful',
'flask-jwt-extended',
'flask-wtf',
'potnanny-core==0.2.9',
],
)
| 26.208333 | 123 | 0.616852 | 69 | 629 | 5.550725 | 0.753623 | 0.086162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016807 | 0.243243 | 629 | 23 | 124 | 27.347826 | 0.787815 | 0 | 0 | 0 | 0 | 0 | 0.484897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.045455 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7fbdc11c64c416322347545771908c98a2d730b | 158 | py | Python | abc/abc205/abc205b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | abc/abc205/abc205b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | abc/abc205/abc205b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | N, *A = map(int, open(0).read().split())
A.sort()
for i in range(N):
if i == A[i] - 1:
continue
print('No')
break
else:
print('Yes')
| 14.363636 | 40 | 0.487342 | 27 | 158 | 2.851852 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 0.291139 | 158 | 10 | 41 | 15.8 | 0.669643 | 0 | 0 | 0 | 0 | 0 | 0.031646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b7fc5371e78fe759e9cfc9ac2a197cc1a24c7ba9 | 1,114 | py | Python | CPAC/cwas/tests/features/steps/base_cwas.py | Lawreros/C-PAC | ce26ba9a38cbd401cd405150eeed23b805007724 | [
"BSD-3-Clause"
] | 1 | 2021-08-02T23:23:39.000Z | 2021-08-02T23:23:39.000Z | CPAC/cwas/tests/features/steps/base_cwas.py | Lawreros/C-PAC | ce26ba9a38cbd401cd405150eeed23b805007724 | [
"BSD-3-Clause"
] | null | null | null | CPAC/cwas/tests/features/steps/base_cwas.py | Lawreros/C-PAC | ce26ba9a38cbd401cd405150eeed23b805007724 | [
"BSD-3-Clause"
] | 2 | 2021-08-02T23:23:40.000Z | 2022-02-26T12:39:30.000Z | from behave import *
from hamcrest import assert_that, is_not, greater_than
import numpy as np
import nibabel as nib
import rpy2.robjects as robjects
from rpy2.robjects.numpy2ri import numpy2ri
from rpy2.robjects.packages import importr
robjects.conversion.py2ri = numpy2ri
from os import path as op
import sys
curfile = op.abspath(__file__)
testpath = op.dirname(op.dirname(op.dirname(curfile)))
rpath = op.join(testpath, "R")
pypath = op.dirname(testpath)
sys.path.append(pypath)
from cwas import *
from utils import *
def custom_corrcoef(X, Y=None):
"""Each of the columns in X will be correlated with each of the columns in
Y. Each column represents a variable, with the rows containing the observations."""
if Y is None:
Y = X
if X.shape[0] != Y.shape[0]:
raise Exception("X and Y must have the same number of rows.")
X = X.astype(float)
Y = Y.astype(float)
X -= X.mean(axis=0)[np.newaxis,...]
Y -= Y.mean(axis=0)
xx = np.sum(X**2, axis=0)
yy = np.sum(Y**2, axis=0)
r = np.dot(X.T, Y)/np.sqrt(np.multiply.outer(xx,yy))
return r
| 25.906977 | 87 | 0.684022 | 187 | 1,114 | 4.032086 | 0.454545 | 0.047745 | 0.04244 | 0.047745 | 0.047745 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016816 | 0.199282 | 1,114 | 42 | 88 | 26.52381 | 0.828475 | 0.137343 | 0 | 0 | 0 | 0 | 0.045216 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 1 | 0.033333 | false | 0 | 0.366667 | 0 | 0.433333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4d0003163267427736e0367162b90a4c31a4952a | 18,450 | py | Python | Scripts/plot_ObservationsPrediction_RawHiatus_OHClevels-lag-EDA_v2.py | zmlabe/predictGMSTrate | 2bde4a106de1988d772f15a52d283d23bb7128f4 | [
"MIT"
] | 2 | 2022-01-20T20:20:04.000Z | 2022-02-21T12:33:37.000Z | Dark_Scripts/plot_ObservationsPrediction_RawHiatus_OHClevels-lag-EDA_v2.py | zmlabe/predictGMSTrate | 2bde4a106de1988d772f15a52d283d23bb7128f4 | [
"MIT"
] | null | null | null | Dark_Scripts/plot_ObservationsPrediction_RawHiatus_OHClevels-lag-EDA_v2.py | zmlabe/predictGMSTrate | 2bde4a106de1988d772f15a52d283d23bb7128f4 | [
"MIT"
] | 3 | 2022-01-19T16:25:37.000Z | 2022-03-22T13:25:00.000Z | """
Explore raw composites based on indices from predicted testing data and
showing all the difference OHC levels for OBSERVATIONS
Author : Zachary M. Labe
Date : 21 September 2021
Version : 2 (mostly for testing)
"""
### Import packages
import sys
import matplotlib.pyplot as plt
import numpy as np
import calc_Utilities as UT
from mpl_toolkits.basemap import Basemap, addcyclic, shiftgrid
import palettable.cubehelix as cm
import cmocean as cmocean
import calc_dataFunctions as df
import calc_Stats as dSS
from netCDF4 import Dataset
### Plotting defaults
plt.rc('text',usetex=True)
plt.rc('font',**{'family':'sans-serif','sans-serif':['Avant Garde']})
###############################################################################
###############################################################################
###############################################################################
### Data preliminaries
modelGCMs = ['CESM2le']
dataset_obs = 'ERA5'
allDataLabels = modelGCMs
monthlychoiceq = ['annual']
variables = ['T2M']
vari_predict = ['SST','OHC100','OHC300','OHC700']
reg_name = 'SMILEGlobe'
level = 'surface'
###############################################################################
###############################################################################
randomalso = False
timeper = 'hiatus'
shuffletype = 'GAUSS'
###############################################################################
###############################################################################
land_only = False
ocean_only = False
###############################################################################
###############################################################################
baseline = np.arange(1951,1980+1,1)
###############################################################################
###############################################################################
window = 0
if window == 0:
rm_standard_dev = False
ravel_modelens = False
ravelmodeltime = False
else:
rm_standard_dev = True
ravelmodeltime = False
ravel_modelens = True
yearsall = np.arange(1979+window,2099+1,1)
yearsobs = np.arange(1979+window,2020+1,1)
###############################################################################
###############################################################################
numOfEns = 40
lentime = len(yearsall)
###############################################################################
###############################################################################
lat_bounds,lon_bounds = UT.regions(reg_name)
###############################################################################
###############################################################################
ravelyearsbinary = False
ravelbinary = False
lensalso = True
###############################################################################
###############################################################################
### Remove ensemble mean
rm_ensemble_mean = True
###############################################################################
###############################################################################
### Accuracy for composites
accurate = True
if accurate == True:
typemodel = 'correcthiatus_obs'
elif accurate == False:
typemodel = 'extrahiatus_obs'
elif accurate == 'WRONG':
typemodel = 'wronghiatus_obs'
elif accurate == 'HIATUS':
typemodel = 'allhiatus_obs'
###############################################################################
###############################################################################
### Call functions
trendlength = 10
AGWstart = 1990
years_newmodel = np.arange(AGWstart,yearsall[-1]-8,1)
years_newobs = np.arange(AGWstart,yearsobs[-1]-8,1)
vv = 0
mo = 0
variq = variables[vv]
monthlychoice = monthlychoiceq[mo]
directoryfigure = '/Users/zlabe/Desktop/GmstTrendPrediction/ANN_v2/Obs/'
saveData = monthlychoice + '_' + variq + '_' + reg_name + '_' + dataset_obs
print('*Filename == < %s >' % saveData)
###############################################################################
###############################################################################
### Function to read in predictor variables (SST/OHC)
def read_primary_dataset(variq,dataset,monthlychoice,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,timeper,lat_bounds=lat_bounds,lon_bounds=lon_bounds):
data,lats,lons = df.readFiles(variq,dataset,monthlychoice,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,timeper)
datar,lats,lons = df.getRegion(data,lats,lons,lat_bounds,lon_bounds)
print('\nOur dataset: ',dataset,' is shaped',data.shape)
return datar,lats,lons
def read_obs_dataset(variq,dataset_obs,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,lat_bounds=lat_bounds,lon_bounds=lon_bounds):
data_obs,lats_obs,lons_obs = df.readFiles(variq,dataset_obs,monthlychoice,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,timeper)
data_obs,lats_obs,lons_obs = df.getRegion(data_obs,lats_obs,lons_obs,lat_bounds,lon_bounds)
print('our OBS dataset: ',dataset_obs,' is shaped',data_obs.shape)
return data_obs,lats_obs,lons_obs
###############################################################################
###############################################################################
### Loop through to read all the variables
ohcHIATUS = np.empty((len(vari_predict),92,144))
for vvv in range(len(vari_predict)):
### Function to read in predictor variables (SST/OHC)
models_var = []
for i in range(len(modelGCMs)):
if vari_predict[vvv][:3] == 'OHC':
obs_predict = 'OHC'
else:
obs_predict = 'ERA5'
obsq_var,lats,lons = read_obs_dataset(vari_predict[vvv],obs_predict,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,lat_bounds=lat_bounds,lon_bounds=lon_bounds)
### Save predictor
models_var.append(obsq_var)
models_var = np.asarray(models_var).squeeze()
### Remove ensemble mean
if rm_ensemble_mean == True:
models_var = dSS.remove_trend_obs(models_var,'surface')
print('\n*Removed observational linear trend*')
### Standardize
models_varravel = models_var.squeeze().reshape(yearsobs.shape[0],lats.shape[0]*lons.shape[0])
meanvar = np.nanmean(models_varravel,axis=0)
stdvar = np.nanstd(models_varravel,axis=0)
modelsstd_varravel = (models_varravel-meanvar)/stdvar
models_var = modelsstd_varravel.reshape(yearsobs.shape[0],lats.shape[0],lons.shape[0])
### Slice for number of years
yearsq_m = np.where((yearsobs >= AGWstart))[0]
models_slice = models_var[yearsq_m,:,:]
if rm_ensemble_mean == False:
variq = 'T2M'
fac = 0.7
random_segment_seed = int(np.genfromtxt('/Users/zlabe/Documents/Research/GmstTrendPrediction/Data/SelectedSegmentSeed.txt',unpack=True))
random_network_seed = 87750
hidden = [20,20]
n_epochs = 500
batch_size = 128
lr_here = 0.001
ridgePenalty = 0.05
actFun = 'relu'
fractWeight = 0.5
elif rm_ensemble_mean == True:
variq = 'T2M'
fac = 0.7
random_segment_seed = int(np.genfromtxt('/Users/zlabe/Documents/Research/GmstTrendPrediction/Data/SelectedSegmentSeed.txt',unpack=True))
random_network_seed = 87750
hidden = [30,30]
n_epochs = 500
batch_size = 128
lr_here = 0.001
ridgePenalty = 0.5
actFun = 'relu'
fractWeight = 0.5
else:
print(ValueError('SOMETHING IS WRONG WITH DATA PROCESSING!'))
sys.exit()
### Naming conventions for files
directorymodel = '/Users/zlabe/Documents/Research/GmstTrendPrediction/SavedModels/'
savename = 'ANNv2_'+'OHC100'+'_hiatus_' + actFun + '_L2_'+ str(ridgePenalty)+ '_LR_' + str(lr_here)+ '_Batch'+ str(batch_size)+ '_Iters' + str(n_epochs) + '_' + str(len(hidden)) + 'x' + str(hidden[0]) + '_SegSeed' + str(random_segment_seed) + '_NetSeed'+ str(random_network_seed)
if(rm_ensemble_mean==True):
savename = savename + '_EnsembleMeanRemoved'
### Directories to save files
directorydata = '/Users/zlabe/Documents/Research/GmstTrendPrediction/Data/'
###############################################################################
###############################################################################
###############################################################################
### Read in data for testing predictions and actual hiatuses
actual_test = np.genfromtxt(directorydata + 'obsActualLabels_' + savename + '.txt')
predict_test = np.genfromtxt(directorydata + 'obsLabels_' + savename+ '.txt')
### Reshape arrays for [ensemble,year]
act_re = actual_test
pre_re = predict_test
### Slice ensembles for testing data
ohcready = models_slice[:,:,:].squeeze()
### Pick all hiatuses
if accurate == True: ### correct predictions
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if (pre_re[yr]) == 1 and (act_re[yr] == 1):
ohc_allenscomp.append(ohcready[yr,:,:])
elif accurate == False: ### picks all hiatus predictions
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if pre_re[yr] == 1:
ohc_allenscomp.append(ohcready[yr,:,:])
elif accurate == 'WRONG': ### picks hiatus but is wrong
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if (pre_re[yr]) == 1 and (act_re[yr] == 0):
ohc_allenscomp.append(ohcready[yr,:,:])
elif accurate == 'HIATUS': ### accurate climate change
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if (act_re[yr] == 1):
ohc_allenscomp.append(ohcready[yr,:,:])
else:
print(ValueError('SOMETHING IS WRONG WITH ACCURACY COMPOSITES!'))
sys.exit()
### Composite across all years to get hiatuses
ohcHIATUS[vvv,:,:] = np.nanmean(np.asarray(ohc_allenscomp),axis=0)
###############################################################################
###############################################################################
### Loop through to read all the variables
lag1 = 3
lag2 = 7
lag = lag2-lag1
ohcHIATUSlag = np.empty((len(vari_predict),92,144))
for vvv in range(len(vari_predict)):
### Function to read in predictor variables (SST/OHC)
models_var = []
for i in range(len(modelGCMs)):
if vari_predict[vvv][:3] == 'OHC':
obs_predict = 'OHC'
else:
obs_predict = 'ERA5'
obsq_var,lats,lons = read_obs_dataset(vari_predict[vvv],obs_predict,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,lat_bounds=lat_bounds,lon_bounds=lon_bounds)
### Save predictor
models_var.append(obsq_var)
models_var = np.asarray(models_var).squeeze()
### Remove ensemble mean
if rm_ensemble_mean == True:
models_var = dSS.remove_trend_obs(models_var,'surface')
print('\n*Removed observational linear trend*')
### Standardize
models_varravel = models_var.squeeze().reshape(yearsobs.shape[0],lats.shape[0]*lons.shape[0])
meanvar = np.nanmean(models_varravel,axis=0)
stdvar = np.nanstd(models_varravel,axis=0)
modelsstd_varravel = (models_varravel-meanvar)/stdvar
models_var = modelsstd_varravel.reshape(yearsobs.shape[0],lats.shape[0],lons.shape[0])
### Slice for number of years
yearsq_m = np.where((yearsobs >= AGWstart))[0]
models_slice = models_var[yearsq_m,:,:]
if rm_ensemble_mean == False:
variq = 'T2M'
fac = 0.7
random_segment_seed = int(np.genfromtxt('/Users/zlabe/Documents/Research/GmstTrendPrediction/Data/SelectedSegmentSeed.txt',unpack=True))
random_network_seed = 87750
hidden = [20,20]
n_epochs = 500
batch_size = 128
lr_here = 0.001
ridgePenalty = 0.05
actFun = 'relu'
fractWeight = 0.5
elif rm_ensemble_mean == True:
variq = 'T2M'
fac = 0.7
random_segment_seed = int(np.genfromtxt('/Users/zlabe/Documents/Research/GmstTrendPrediction/Data/SelectedSegmentSeed.txt',unpack=True))
random_network_seed = 87750
hidden = [30,30]
n_epochs = 500
batch_size = 128
lr_here = 0.001
ridgePenalty = 0.5
actFun = 'relu'
fractWeight = 0.5
else:
print(ValueError('SOMETHING IS WRONG WITH DATA PROCESSING!'))
sys.exit()
### Naming conventions for files
directorymodel = '/Users/zlabe/Documents/Research/GmstTrendPrediction/SavedModels/'
savename = 'ANNv2_'+'OHC100'+'_hiatus_' + actFun + '_L2_'+ str(ridgePenalty)+ '_LR_' + str(lr_here)+ '_Batch'+ str(batch_size)+ '_Iters' + str(n_epochs) + '_' + str(len(hidden)) + 'x' + str(hidden[0]) + '_SegSeed' + str(random_segment_seed) + '_NetSeed'+ str(random_network_seed)
if(rm_ensemble_mean==True):
savename = savename + '_EnsembleMeanRemoved'
### Directories to save files
directorydata = '/Users/zlabe/Documents/Research/GmstTrendPrediction/Data/'
###############################################################################
###############################################################################
###############################################################################
### Read in data for testing predictions and actual hiatuses
actual_test = np.genfromtxt(directorydata + 'obsActualLabels_' + savename + '.txt')
predict_test = np.genfromtxt(directorydata + 'obsLabels_' + savename+ '.txt')
### Reshape arrays for [ensemble,year]
act_re = actual_test
pre_re = predict_test
### Slice ensembles for testing data
ohcready = models_slice[:,:,:].squeeze()
### Pick all hiatuses
if accurate == True: ### correct predictions
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if (pre_re[yr]) == 1 and (act_re[yr] == 1):
ohc_allenscomp.append(np.nanmean(ohcready[yr+lag1:yr+lag2,:,:],axis=0))
elif accurate == False: ### picks all hiatus predictions
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if pre_re[yr] == 1:
ohc_allenscomp.append(np.nanmean(ohcready[yr+lag1:yr+lag2,:,:],axis=0))
elif accurate == 'WRONG': ### picks hiatus but is wrong
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if (pre_re[yr]) == 1 and (act_re[yr] == 0):
ohc_allenscomp.append(np.nanmean(ohcready[yr+lag1:yr+lag2,:,:],axis=0))
elif accurate == 'HIATUS': ### accurate climate change
ohc_allenscomp = []
for yr in range(ohcready.shape[0]):
if (act_re[yr] == 1):
ohc_allenscomp.append(np.nanmean(ohcready[yr+lag1:yr+lag2,:,:],axis=0))
else:
print(ValueError('SOMETHING IS WRONG WITH ACCURACY COMPOSITES!'))
sys.exit()
### Composite across all years to get hiatuses
ohcHIATUSlag[vvv,:,:] = np.nanmean(np.asarray(ohc_allenscomp),axis=0)
### Composite all for plotting
ohc_allcomp = np.append(ohcHIATUS,ohcHIATUSlag,axis=0)
###############################################################################
###############################################################################
### Plot subplot of obser+++++++++++++++vations
letters = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n"]
plotloc = [1,3,5,7,2,4,6,8]
if rm_ensemble_mean == False:
limit = np.arange(-1.5,1.51,0.02)
barlim = np.round(np.arange(-1.5,1.6,0.5),2)
elif rm_ensemble_mean == True:
limit = np.arange(-1.5,1.6,0.02)
barlim = np.round(np.arange(-1.5,1.6,0.5),2)
cmap = cmocean.cm.balance
label = r'\textbf{[ HIATUS COMPOSITE ]}'
fig = plt.figure(figsize=(8,10))
###############################################################################
for ppp in range(ohc_allcomp.shape[0]):
ax1 = plt.subplot(ohc_allcomp.shape[0]//2,2,plotloc[ppp])
m = Basemap(projection='robin',lon_0=-180,resolution='l',area_thresh=10000)
m.drawcoastlines(color='darkgrey',linewidth=0.27)
### Variable
varn = ohc_allcomp[ppp]
if ppp == 0:
lons = np.where(lons >180,lons-360,lons)
x, y = np.meshgrid(lons,lats)
circle = m.drawmapboundary(fill_color='dimgrey',color='dimgray',
linewidth=0.7)
circle.set_clip_on(False)
cs1 = m.contourf(x,y,varn,limit,extend='both',latlon=True)
cs1.set_cmap(cmap)
m.fillcontinents(color='dimgrey',lake_color='dimgrey')
ax1.annotate(r'\textbf{[%s]}' % letters[ppp],xy=(0,0),xytext=(0.95,0.93),
textcoords='axes fraction',color='k',fontsize=10,
rotation=0,ha='center',va='center')
if ppp < 4:
ax1.annotate(r'\textbf{%s}' % vari_predict[ppp],xy=(0,0),xytext=(-0.08,0.5),
textcoords='axes fraction',color='dimgrey',fontsize=20,
rotation=90,ha='center',va='center')
if ppp == 0:
plt.title(r'\textbf{Onset}',fontsize=15,color='k')
if ppp == 4:
plt.title(r'\textbf{%s-Year Composite}' % lag,fontsize=15,color='k')
###############################################################################
cbar_ax1 = fig.add_axes([0.38,0.05,0.3,0.02])
cbar1 = fig.colorbar(cs1,cax=cbar_ax1,orientation='horizontal',
extend='both',extendfrac=0.07,drawedges=False)
cbar1.set_label(label,fontsize=6,color='dimgrey',labelpad=1.4)
cbar1.set_ticks(barlim)
cbar1.set_ticklabels(list(map(str,barlim)))
cbar1.ax.tick_params(axis='x', size=.01,labelsize=4)
cbar1.outline.set_edgecolor('dimgrey')
plt.tight_layout()
plt.subplots_adjust(bottom=0.08,wspace=0.01)
if rm_ensemble_mean == True:
plt.savefig(directoryfigure + 'RawCompositesHiatus_OBSERVATIONS_OHClevels-lag%s_v2_AccH-%s_AccR-%s_rmENSEMBLEmean.png' % (lag,accurate,accurate),dpi=300)
else:
plt.savefig(directoryfigure + 'RawCompositesHiatus_OBSERVATIONS_OHClevels-lag%s_v2_AccH-%s_AccR-%s.png' % (lag,accurate,accurate),dpi=300) | 44.244604 | 284 | 0.547805 | 1,989 | 18,450 | 4.928105 | 0.21267 | 0.013467 | 0.017139 | 0.016527 | 0.684146 | 0.669149 | 0.649867 | 0.636809 | 0.624158 | 0.610692 | 0 | 0.026305 | 0.171707 | 18,450 | 417 | 285 | 44.244604 | 0.615103 | 0.08 | 0 | 0.599315 | 0 | 0 | 0.132606 | 0.056052 | 0.006849 | 0 | 0 | 0 | 0 | 1 | 0.006849 | false | 0 | 0.034247 | 0 | 0.047945 | 0.030822 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4d034751cf7a5ae250a1f9a85e64ff78986aa837 | 4,201 | py | Python | storage/__init__.py | daqbroker/daqbrokerServer | e8d2b72b4e3ab12c26dfa7b52e9d77097ede3f33 | [
"MIT"
] | null | null | null | storage/__init__.py | daqbroker/daqbrokerServer | e8d2b72b4e3ab12c26dfa7b52e9d77097ede3f33 | [
"MIT"
] | null | null | null | storage/__init__.py | daqbroker/daqbrokerServer | e8d2b72b4e3ab12c26dfa7b52e9d77097ede3f33 | [
"MIT"
] | null | null | null | import base64
import os
import threading
from pathlib import Path
#from sqlitedict import SqliteDict
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from daqbrokerServer.web.utils import hash_password
from daqbrokerServer.storage.server_schema import ServerBase, User, Connection
from daqbrokerServer.storage.contextual_session import session_open
# ###### THIS CREATES THE LOCAL STRUCTURE NECESSARY TO HOLD LOCAL DATABASES #######
# if not os.path.isdir(db_folder):
# os.mkdir(db_folder)
# # Initialise the local settings database
# local_url = "sqlite+pysqlite:///" + str(db_folder / "settings.sqlite")
# local_engine = create_engine(local_url)
# #################################################################################
# # This should create the mappings necessary on the local database
# Base.metadata.reflect(local_engine, extend_existing= True, autoload_replace= False)
# Base.metadata.create_all(local_engine, checkfirst= True)
# #This starts a session - probably not ideal, should consider using scoped session
# #LocalSession = scoped_session(sessionmaker(bind=local_engine))
# Session = sessionmaker(bind=local_engine)
# session = Session()
# Experimenting a class that will handle the folder definition of the session for the server class
class LocalSession:
def __init__(self, db_folder= None, empty_connections= False):
self.db_folder = None if db_folder == None else Path(db_folder)
self.url = "sqlite+pysqlite:///" + str( ( self.db_folder if self.db_folder else Path(__file__).parent / "databases" ) / "settings.sqlite")
self.engine = create_engine(self.url)
self.session = scoped_session(sessionmaker(bind=self.engine))
ServerBase.metadata.reflect(self.engine, extend_existing= True, autoload_replace= False)
ServerBase.metadata.create_all(self.engine, checkfirst= True)
Connection.set_db_folder(self.db_folder)
self.setup(empty_connections)
def setup(self, empty_connections= False):
test_session = self.session()
######## THIS IS VERY DANGEROUS - IT SHOULD BE A PROMPT CREATED WHEN INSTALLING THE LIBRARY
query = test_session.query(User).filter(User.id == 0)
if not query.count() > 0:
pwd = "admin"
password = hash_password(pwd)
user = User(id= 0, type= 3, email= "mail", username= "admin", password= password)
if not query.count() > 0:
test_session.add(user)
##########################################################################################
if not empty_connections:
##### THIS SHOULD LOOK FOR RECORDS OF LOCAL DATABASE, CREATES IF IT DOES NOT EXIST #######
query2 = test_session.query(Connection).filter(Connection.id == 0)
if not query2.count() > 0:
connection = Connection(id= 0, type= "sqlite+pysqlite", hostname= "local", username= "admin", password= base64.b64encode(b"admin"), port=0)
if not query2.count() > 0:
test_session.add(connection)
##########################################################################################
#Actually adding the object(s)
test_session.commit()
def teardown(self):
self.engine.dispose()
# ######## THIS IS VERY DANGEROUS - IT SHOULD BE A PROMPT CREATED WHEN INSTALLING THE LIBRARY
# query = session.query(User).filter(User.id == 0)
# if not query.count() > 0:
# pwd = "admin"
# password = hash_password(pwd)
# user = User(id= 0, type= 3, email= "mail", username= "admin", password= password)
# ##########################################################################################
# ##### THIS SHOULD LOOK FOR RECORDS OF LOCAL DATABASE, CREATES IF IT DOES NOT EXIST #######
# query2 = session.query(Connection).filter(Connection.id == 0)
# if not query2.count() > 0:
# connection = Connection(id= 0, type= "sqlite+pysqlite", hostname= "local", username= "admin", password= base64.b64encode(b"admin"), port=0)
# ##########################################################################################
# #Actually adding the objects - if one does not exist the other will most likely not exist too
# if (not query.count() > 0) or (not query2.count() > 0):
# connection.users.append(user)
# session.add(user)
# session.add(connection)
# session.commit()
| 40.009524 | 143 | 0.653416 | 515 | 4,201 | 5.223301 | 0.271845 | 0.032714 | 0.022305 | 0.011896 | 0.417472 | 0.385874 | 0.351673 | 0.318959 | 0.318959 | 0.318959 | 0 | 0.009877 | 0.132349 | 4,201 | 104 | 144 | 40.394231 | 0.728121 | 0.439895 | 0 | 0.108108 | 0 | 0 | 0.047855 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0.108108 | 0.243243 | 0 | 0.351351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4d04229e05bd8f6f6995b6ba536b1ed9096df15a | 478 | py | Python | checkin/tests.py | MAKENTNU/web | 7a5b512bf4c087d1561cdb623d7df4b3d04811a2 | [
"MIT"
] | 10 | 2017-11-25T01:47:20.000Z | 2020-03-24T18:28:24.000Z | checkin/tests.py | MAKENTNU/web | 7a5b512bf4c087d1561cdb623d7df4b3d04811a2 | [
"MIT"
] | 319 | 2017-11-16T09:56:03.000Z | 2022-03-28T00:24:37.000Z | checkin/tests.py | MAKENTNU/web | 7a5b512bf4c087d1561cdb623d7df4b3d04811a2 | [
"MIT"
] | 6 | 2017-11-12T14:04:08.000Z | 2021-03-10T09:41:18.000Z | from django.test import TestCase
from django_hosts import reverse
from util.test_utils import Get, assert_requesting_paths_succeeds
class UrlTests(TestCase):
def test_all_get_request_paths_succeed(self):
path_predicates = [
Get(reverse('skills_present_list'), public=True),
Get(reverse('profile'), public=False),
Get(reverse('suggest'), public=False),
]
assert_requesting_paths_succeeds(self, path_predicates)
| 29.875 | 65 | 0.709205 | 57 | 478 | 5.649123 | 0.54386 | 0.093168 | 0.130435 | 0.180124 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202929 | 478 | 15 | 66 | 31.866667 | 0.845144 | 0 | 0 | 0 | 0 | 0 | 0.069038 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4d0e5f2a06efaa32ab6853b48bd163c479f22bbd | 467 | py | Python | Visualization/ConstrainedOpt.py | zhijieW94/SAGNet | 017b58853cb51d50851a5a3728b3205d235ff889 | [
"MIT"
] | 25 | 2019-09-15T09:10:17.000Z | 2021-04-08T07:44:16.000Z | Visualization/ConstrainedOpt.py | zhijieW-94/SAGNet | 017b58853cb51d50851a5a3728b3205d235ff889 | [
"MIT"
] | 9 | 2019-11-16T07:06:08.000Z | 2021-03-07T09:14:32.000Z | Visualization/ConstrainedOpt.py | zhijieW94/SAGNet | 017b58853cb51d50851a5a3728b3205d235ff889 | [
"MIT"
] | 7 | 2019-09-25T18:07:54.000Z | 2021-12-21T08:41:47.000Z | from PyQt5.QtCore import *
class ConstrainedOpt(QThread):
signal_update_voxels = pyqtSignal(str)
def __init__(self, model,index):
QThread.__init__(self)
self.model = model['model']
# self.model = model
self.name = model['name']
self.index = index
def run(self):
# while True:
self.update_voxel_model()
def update_voxel_model(self):
self.signal_update_voxels.emit('update_voxels') | 24.578947 | 55 | 0.631692 | 55 | 467 | 5.054545 | 0.418182 | 0.129496 | 0.129496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002899 | 0.261242 | 467 | 19 | 55 | 24.578947 | 0.802899 | 0.079229 | 0 | 0 | 0 | 0 | 0.051402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4d149a21b75c6283b2a1da8727432ed396c798a7 | 1,537 | py | Python | redbull/doc_html.py | theSage21/redbull | 2f3cf0c8a8a910c18151ca493984d0a364581710 | [
"MIT"
] | 10 | 2018-09-11T09:11:13.000Z | 2019-09-10T09:47:35.000Z | redbull/doc_html.py | theSage21/redbull | 2f3cf0c8a8a910c18151ca493984d0a364581710 | [
"MIT"
] | null | null | null | redbull/doc_html.py | theSage21/redbull | 2f3cf0c8a8a910c18151ca493984d0a364581710 | [
"MIT"
] | null | null | null | def gen_doc_html(version, api_list):
doc_html = f'''
<!DOCTYPE html>
<html>
<head><meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style type="text/css">
body{{margin:40px auto;
max-width:650px; line-height:1.6;
font-size:18px; color:#444;
padding:0 10px}}
h1,h2,h3{{line-height:1.2}}
pre{{background-color: beige;
border-radius: 5px;
padding: 15px;
border: 1px solid black;
}}
</style></head>
<body>
<h1>API Docs V{version}</h1>
<p>The documentation is live and autogenerated.</p>
<hr>
<div id='docs'>
</div>
<script>
var api_list = {str(api_list)};'''
doc_html += '''
for(var i=0; i < api_list.length; ++i){
var url = api_list[i];
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("OPTIONS", url, false);
xmlhttp.onreadystatechange = function(){
if (xmlhttp.readyState == 4 && xmlhttp.status == 200){
var doc = document.createElement('pre');
doc.innerHTML = xmlhttp.responseText;
document.getElementById('docs').appendChild(doc);
}
}
xmlhttp.send();
}
</script>
</body>
</html>
'''
return doc_html
| 33.413043 | 78 | 0.471698 | 157 | 1,537 | 4.55414 | 0.617834 | 0.048951 | 0.027972 | 0.039161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035256 | 0.391021 | 1,537 | 45 | 79 | 34.155556 | 0.728632 | 0 | 0 | 0 | 0 | 0 | 0.932986 | 0.14704 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0 | 0 | 0.044444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4d20e11d53db6d88edbeea07f1facb38a4748d8a | 2,131 | py | Python | mouse_burrows/scripts/show_info.py | david-zwicker/cv-mouse-burrows | 906476f49ff9711cd672feca5f70efedaab82b01 | [
"BSD-3-Clause"
] | 1 | 2016-03-06T05:16:38.000Z | 2016-03-06T05:16:38.000Z | mouse_burrows/scripts/show_info.py | david-zwicker/cv-mouse-burrows | 906476f49ff9711cd672feca5f70efedaab82b01 | [
"BSD-3-Clause"
] | null | null | null | mouse_burrows/scripts/show_info.py | david-zwicker/cv-mouse-burrows | 906476f49ff9711cd672feca5f70efedaab82b01 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python2
'''
Created on Sep 21, 2016
@author: David Zwicker <dzwicker@seas.harvard.edu>
'''
from __future__ import division
import argparse
import sys
import os
# add the root of the video-analysis project to the path
script_path = os.path.split(os.path.realpath(__file__))[0]
package_path = os.path.abspath(os.path.join(script_path, '..', '..'))
sys.path.append(package_path)
video_analysis_path_guess = os.path.join(package_path, '..', 'video-analysis')
sys.path.append(os.path.abspath(video_analysis_path_guess))
from mouse_burrows.simple import load_result_file
def get_info(result_file, parameters=False):
""" show information about an analyzed antfarm video
`result_file` is the file where the results from the video analysis are
stored. This is usually a *.yaml file
`parameters` is a flag that indicates whether the parameters of the result
file are shown
"""
# load the respective result file
analyzer = load_result_file(result_file)
info = {}
if parameters:
info['Parameters'] = analyzer.params.to_dict()
return info
def main():
""" main routine of the script """
# setup the argument parsing
parser = argparse.ArgumentParser(
description='Program that outputs information about the analysis of '
'antfarm processing.',
formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
parser.add_argument('-r', '--result_file', metavar='FILE',
type=str, required=True,
help='filename of video analysis result')
parser.add_argument('-p', '--parameters', action='store_true',
help='show all parameters')
# fetch the arguments and build the parameter list
args = parser.parse_args()
# obtain information from data
info = get_info(result_file=args.result_file, parameters=args.parameters)
# TODO: add other output methods, like json, yaml, python dict
from pprint import pprint
pprint(info)
if __name__ == '__main__':
main()
| 27.675325 | 78 | 0.6687 | 268 | 2,131 | 5.145522 | 0.458955 | 0.072516 | 0.023205 | 0.034808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004905 | 0.234632 | 2,131 | 76 | 79 | 28.039474 | 0.840589 | 0.298921 | 0 | 0 | 0 | 0 | 0.143451 | 0 | 0 | 0 | 0 | 0.013158 | 0 | 1 | 0.060606 | false | 0 | 0.181818 | 0 | 0.272727 | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4d237d356a17f205a24800037a5d0a053ed6c426 | 563 | py | Python | todo/urls.py | fidele000/Ftodo-RestAPI-Django | 8c695503e04a3957920910acb9f1bb823ece4287 | [
"MIT"
] | null | null | null | todo/urls.py | fidele000/Ftodo-RestAPI-Django | 8c695503e04a3957920910acb9f1bb823ece4287 | [
"MIT"
] | null | null | null | todo/urls.py | fidele000/Ftodo-RestAPI-Django | 8c695503e04a3957920910acb9f1bb823ece4287 | [
"MIT"
] | null | null | null | from django.urls import path
from rest_framework import viewsets
from rest_framework import routers
from . import views
from django.urls import include
from rest_framework.routers import DefaultRouter
router=DefaultRouter()
router.register('hello-viewset',views.HelloViewSet,basename='hello-viewset')
router.register('profile',views.UserProfileViewSet)
router.register('login',views.LoginViewSet,basename='login')
router.register('task',views.TaskViewset)
urlpatterns = [
path('helloview/',views.HelloAPIView.as_view()),
path('',include(router.urls)),
]
| 33.117647 | 76 | 0.801066 | 68 | 563 | 6.573529 | 0.426471 | 0.12528 | 0.114094 | 0.089485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078153 | 563 | 16 | 77 | 35.1875 | 0.861272 | 0 | 0 | 0 | 0 | 0 | 0.101243 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4d242ba823cf6de6e20e2768b1f065a06d916125 | 302 | py | Python | setup.py | samuel-spak/thermostate | 906d1e0b79289cd51cde510c797f007674b8bdcd | [
"BSD-3-Clause"
] | 6 | 2020-03-31T14:25:23.000Z | 2022-03-10T14:56:29.000Z | setup.py | samuel-spak/thermostate | 906d1e0b79289cd51cde510c797f007674b8bdcd | [
"BSD-3-Clause"
] | 35 | 2017-01-26T15:31:19.000Z | 2022-03-14T16:32:00.000Z | setup.py | samuel-spak/thermostate | 906d1e0b79289cd51cde510c797f007674b8bdcd | [
"BSD-3-Clause"
] | 15 | 2017-02-08T20:07:38.000Z | 2022-03-14T09:15:35.000Z | from setuptools import setup
from pathlib import Path
from typing import Dict
HERE = Path(__file__).parent
version: Dict[str, str] = {}
version_file = HERE / "src" / "thermostate" / "_version.py"
exec(version_file.read_text(), version)
setup(version=version["__version__"], package_dir={"": "src"})
| 25.166667 | 62 | 0.731788 | 40 | 302 | 5.2 | 0.525 | 0.105769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122517 | 302 | 11 | 63 | 27.454545 | 0.784906 | 0 | 0 | 0 | 0 | 0 | 0.129139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4d242e01b427dcb6b1bf2d2cc3562c29ca378947 | 78,627 | py | Python | fps2c__ - Copy.py | GeorgeLoo/FPS | 775bf173437d2feb09bc91b7f842226a8c752980 | [
"MIT"
] | 1 | 2022-02-21T12:07:42.000Z | 2022-02-21T12:07:42.000Z | fps2c__ - Copy.py | GeorgeLoo/FPS | 775bf173437d2feb09bc91b7f842226a8c752980 | [
"MIT"
] | null | null | null | fps2c__ - Copy.py | GeorgeLoo/FPS | 775bf173437d2feb09bc91b7f842226a8c752980 | [
"MIT"
] | null | null | null |
'''
_
_._ _..._ .-', _.._(`))
'-. ` ' /-._.-' ',/
) \ '.
/ _ _ | \
| a a / |
\ .-. ;
'-('' ).-' ,' ;
'-; | .'
\ \ /
| 7 .__ _.-\ \
| | | ``/ /` /
/,_| | /,_/ /
/,_/ '`-'
-----------------------------------------
injured face like Duke Nukem
/moving hostages panic
children as terrorists! with RPGs
/taking cover
/Claymore 700 ball bearings
night shootouts
mp7 rifle with silencer
/2 images for hostages
/Barrett Browning M82 CQ
/go through walls, kill tango commandos with one shot
/see through walls with scope
?tablet version
Fellow shooters with Ai
/medical kits
shoot and shatter glass
guards and
/trigger waypoints
long distance shooting! sniper rifle!
show dead bodies in 3 positions: left, right, upside down!
assassinations
deeper missions
scenario announcement
scenario chooser
improve tango generation
background music
show next mission
give a summary of the performance
/fight against special forces who take 5 shots before dying
/weapons restriction!
/pause that pauses the bullets - important!
/campaign mode
/reset stats F1 key 13.3.2015
/prevent hero from going off screen 14.3.2015 pi day
tango random shooters
tango intelligent fire
tango intelligent movement, flanking!
game suicide bombers
/game hostages
hostage with timer shootings
/different rates of auto fire
small message window
bullets that can shoot through the walls!
blow cover away with bombs
/Hero cursor wasd movement
/Tangos will target Hero
RPG countdown to explosion
tangos
hostages
cover
sniper rifle and distances
blood?
/headshots
/leg shots
shield for storming rooms
weapon accuracy
range
rate of auto fire
Pistol
Name
Graphic
Aiming cursor
Sound firing
Sound hit
Safe, Semi
Number of rounds
Reload()
Choose()
Draw()
DrawRounds()
Fire()
Selector()
Knife - sheathed, stab/slash/throw
Silenced Pistol
Glock automatic pistol
Samurai sword
Parachute avoid 'obstacles'
Maze grid when covert missions
Rifle
Safe Semi Auto
Sniper Rifle
Safe Semi
Mini-gun
50 round a second!
4400 round box
SAW
safe Auto
Shotgun
spread shot
Stun granade
safe armed
Grenade
Safe Armed
Rocket
Safe Semi
Artillery
Coords
Confirm
auto fire
---------------
explosion objects
Grenades as well.
Grenade multiple launcher
use proper consts
better structure for
changing weapons
use hash to get to weapons instead of if-then-else
'''
import pyglet
from pyglet.window import key
from pyglet import clock
import random
#import Tkinter, tkMessageBox
class Const():
foo = 15589888
folder = '.\\draw\\'
backgroundFolder = '.\\background\\'
knifeKbar = 'KBar knife'
pistol1911 = 'M1911 pistol'
knifeSafe = 'safe'
knifeArmed = 'armed'
knifeThrow = 'throw'
knifeModes = (knifeSafe,knifeArmed,knifeThrow)
pistolSafe = 'safe'
pistolSemi = 'semi'
pistolModes = (pistolSafe,pistolSemi)
M4assaultRifle = 'M4 Carbine'
assaultRifleSafe = 'safe'
assaultRifleSemi = 'semi'
assaultRifleAuto = 'auto'
assaultRifleModes = (assaultRifleSafe,assaultRifleSemi,assaultRifleAuto)
machineGunModes = (assaultRifleSafe,assaultRifleAuto)
SAR21Rifle = 'SAR21'
Ultimax100SAW = 'Ultimax 100 SAW'
M134MiniGun = 'M134 Mini-Gun'
M32GRENADElauncher = 'M32 Multi-Grenade Launcher'
STCPW = 'ST KINETICS CPW'
GLOCK = 'GLOCK 18'
M249SAW = 'M249 SAW'
M107sniper = 'M107 Sniper Rifle'
ClaymoreMine = 'M18 Claymore'
MP5 = 'MP5 PDW'
suicideBomber = not False #whether can get close to tangos
HandToolConst = 'Hand Tool'
class Sounds:
soundpath='.\\sound\\'
MuteSound = 0
def __init__(self):
try:
self.minigunSound = self.Load('minigunsound.wav')
self.sar21 = self.Load('sar21.wav')
self.reLoad = self.Load('reload.wav')
self.m1911 = self.Load('M1911pistolsound.wav')
self.m4carbine = self.Load('M4carbineSound.wav')
self.pain = self.Load('Pain-SoundBible.com-1883168362.wav')
self.heropain = self.Load('Pain-Hero.wav')
self.m32MGL = self.Load('grenadeLauncher.wav') #grenadeLauncher.wav
self.hostageHitsound = self.Load('HostageFemaleScream1.wav')
self.hostageRescuedSound = self.Load('ThankYouBritishWoman.wav')
self.stcpwsnd = self.Load('CPWsound.wav')
self.glocksnd = self.Load('Glocksound.wav')
self.m249sawSound = self.Load('M249_SAW.wav')
self.m107Sound = self.Load('M107.wav')
self.m18Sound = self.Load('Claymore.wav')
self.reliefSound = self.Load('Ahhh.wav')
self.curedSound = self.Load('Ahhhh.wav')
self.mp5sound = self.Load('mp5sound.wav')
#self.stcpwsnd = self.Load('')
#self.wallhit = self.Load('wallhit.mp3')
self.player = pyglet.media.Player()
except:
print 'sound file fucked'
# self. = self.Load('')
#self.player.queue(self.gunsound)
def Load(self,f):
#print f
s = pyglet.media.StaticSource(pyglet.media.load(self.soundpath+f, streaming=False))
#print s.duration
#s.play()
return s
def Player(self,s):
self.player.queue(s)
self.player.play()
def Play(self,s):
if self.MuteSound != 1:
#print 'sound play'
s.play()
def Stop(self):
self.player.pause()
pass
def On(self):
#print 'sound on'
self.MuteSound = 0
def Off(self):
self.MuteSound = 1
gSound = Sounds()
def SpriteLoad(name, centre=False):
image = pyglet.image.load(name)
if centre:
image.anchor_x = image.width / 2
image.anchor_y = image.height / 2
return pyglet.sprite.Sprite(image)
class MessageWin(pyglet.window.Window):
def __init__(self,title):
super(MessageWin, self).__init__(resizable = False)
self.set_size(800, 600)
self.set_location(300,50)
self.maximize()
self.set_caption(title)
self.lines = []
i = 0
g = self.height - 50
while i < 4:
self.label = pyglet.text.Label('line '+str(i),
font_name='Times New Roman',
font_size=16,
x=50, y=g)
#anchor_x='center', anchor_y='center')
self.lines.append(self.label)
g -= 25
i+=1
def on_mouse_release(self,x, y, button, modifiers):
#self.close()
#self.set_visible(visible=False)
#self.minimize()
#self.set_visible(visible=False)
pass
def on_draw(self):
self.clear()
#self.switch_to()
for i in self.lines:
i.draw()
def hide(self):
self.set_fullscreen(False, screen=None)
self.set_visible(visible=False)
def show(self):
self.set_visible(visible=True)
self.set_fullscreen(True, screen=None)
def on_close(self):
#self.set_visible(visible=False)
pass
def setText(self,text,line):
self.lines[line].text = text
def on_key_press(self, symbol, modifiers):
if symbol == key.I:
self.hide()
gmw = MessageWin('Battle Report')
class BattleReport():
def __init__(self):
self.init()
def init(self):
print 'Battle report init'
self.numberheadshot = 0
self.numberbodyhit = 0
self.numberblasted = 0
self.numLegHits = 0
self.herohit = 0
self.heroblasted = 0
self.hostagesHit = 0
self.hostagesRescued = 0
self.herokilled = 0
#alternative way to gather data
#but delay in reporting data
def add(self,v):
self.report()
return v + 1
def report(self):
if self.herohit > 4:
self.herokilled += self.herohit / 5
self.herohit = 0
s = 'hero hit '+str(self.herohit)+ \
' blasted '+ str(self.heroblasted) +\
' killed ' + str(self.herokilled)
gmw.setText(s,0)
s = 'tango headhit ' + str(self.numberheadshot) + \
' tango bodyhits ' + str(self.numberbodyhit) + \
' tango leg hits ' + str(self.numLegHits)
gmw.setText(s,1)
s = 'tango blown up ' + str(self.numberblasted)
gmw.setText(s,2)
totalTangoHits = self.numberheadshot + \
self.numberbodyhit + self.numLegHits + \
self.numberblasted + self.heroblasted
s = 'Total Tango casualties ' + str(totalTangoHits) +\
' Hostages hit ' + str(self.hostagesHit) + \
' Rescued ' + str(self.hostagesRescued)
gmw.setText(s,3)
pass
gBattleRep = BattleReport()
#gBattleRep.numberbodyhit += 1
#BattleReport.bullshit = 37
#print gBattleRep.numberbodyhit, BattleReport.bullshit
#gBattleRep.report()
#tkMessageBox.showinfo('Battle Report','Stuff')
#gmw.switch_to()
#gmw.show()
#gmw.setText('fuck'+' pap',0)
'''
Bystanders
Movement all over the place
stats
explosions
bullets
'''
'''
Hostages
Moving / stationary
crying sounds
statistics tracked
different looks
statistics
drawing
'''
class Hostages():
def __init__(self):
self.hostageList = []
self.deadhostagelist = []
self.hostageSprite = SpriteLoad(Const.folder+'hostage1.jpg',centre=False)
self.hostageSprite.scale = 0.25
self.deadhostageSprite = SpriteLoad(Const.folder+'hostage1.jpg',centre=False)
self.deadhostageSprite.scale = 0.25
self.deadhostageSprite.rotation = -90.0
h2s = 0.1
self.hostage2Sprite = SpriteLoad(Const.folder+'hostage2.jpg',centre=False)
self.hostage2Sprite.scale = h2s
self.deadhostage2Sprite = SpriteLoad(Const.folder+'hostage2.jpg',centre=False)
self.deadhostage2Sprite.scale = h2s
self.deadhostage2Sprite.rotation = 90.0
self.panic = False
self.sound = gSound
self.moveDir = 'up'
self.winw = 0
self.winh = 0
clock.schedule_interval(self.autocall, 1.0)
def setPanic(self,v):
self.panic = v
def autocall(self,dt):
if self.panic:
self.panicMove()
else:
self.move()
pass
def panicMove(self):
gw = self.hostageSprite.width / 1
gh = self.hostageSprite.height / 1
for i in self.hostageList:
tx = i[0]
ty = i[1]
d = random.randrange(1,9)
if d == 1:
self.moveDir = 'up'
elif d == 2:
self.moveDir = 'down'
elif d == 3:
self.moveDir = 'left'
elif d == 4:
self.moveDir = 'right'
if self.moveDir == 'up':
if ty - gh > 0:
ty -= gh
elif self.moveDir == 'down':
if ty + gh < self.winh:
ty += gh
elif self.moveDir == 'left':
if tx - gw > 0:
tx -= gw
elif self.moveDir == 'right':
if tx + gw < self.winw:
tx += gw
i[0] = tx
i[1] = ty
pass
def move(self):
gw = self.hostageSprite.width / 8
gh = self.hostageSprite.height / 8
for i in self.hostageList:
tx = i[0]
ty = i[1]
if self.moveDir == 'up':
ty -= gh
elif self.moveDir == 'down':
ty += gh
elif self.moveDir == 'left':
tx -= gw
elif self.moveDir == 'right':
tx += gw
i[0] = tx
i[1] = ty
if self.moveDir == 'up':
self.moveDir = 'down'
elif self.moveDir == 'down':
self.moveDir = 'left'
elif self.moveDir == 'right':
self.moveDir = 'up'
elif self.moveDir == 'left':
self.moveDir = 'right'
pass
def create(self,num,winWidth,winHeight):
self.winw = winWidth
self.winh = winHeight
i = 0
self.deadhostagelist = []
while i < num:
x = random.randrange(1,winWidth-100)
y = random.randrange(1,winHeight-100)
t = random.randrange(1,3)
self.hostageList.append([x,y,t])
i += 1
def Rescue(self,bx,by):
for i in self.hostageList:
tx = i[0]
ty = i[1]
l = tx
t = ty
r = tx + self.hostageSprite.width
b = ty + self.hostageSprite.height
rect = [l, t, r, b]
if withinrect( bx, by, rect):
#print 'hostage saved'
self.sound.Play(self.sound.hostageRescuedSound)
gBattleRep.hostagesRescued += 1
gBattleRep.report()
self.hostageList.remove(i)
def hit(self,bx,by):
for i in self.hostageList:
tx = i[0]
ty = i[1]
ht = i[2]
l = tx
t = ty
if t == 1:
r = tx + self.hostageSprite.width
b = ty + self.hostageSprite.height
else:
r = tx + self.hostage2Sprite.width
b = ty + self.hostage2Sprite.height
rect = [l, t, r, b]
if withinrect( bx, by, rect):
#print 'hostage hit'
self.sound.Play(self.sound.hostageHitsound)
gBattleRep.hostagesHit += 1
gBattleRep.report()
a = [tx,ty,ht]
self.deadhostagelist.append(a)
self.hostageList.remove(i)
pass
def HitGrenade(self,grenaderect):
for i in self.hostageList:
tx = i[0]
ty = i[1]
ht = i[2]
if withinrect( tx, ty, grenaderect):
self.sound.Play(self.sound.hostageHitsound)
gBattleRep.hostagesHit += 1
gBattleRep.report()
self.hostageList.remove(i)
a = [tx,ty,ht]
self.deadhostagelist.append(a)
pass
def draw(self):
for i in self.hostageList:
x = i[0]
y = i[1]
t = i[2]
if t == 1:
self.hostageSprite.set_position(x,y)
self.hostageSprite.draw()
else:
self.hostage2Sprite.set_position(x,y)
self.hostage2Sprite.draw()
for i in self.deadhostagelist:
x = i[0]
y = i[1]
t = i[2]
if t == 1:
self.deadhostageSprite.set_position(x,y)
self.deadhostageSprite.draw()
else:
self.deadhostage2Sprite.set_position(x,y)
self.deadhostage2Sprite.draw()
class BulletHoles():
def __init__(self):
self.maxholes = 40
self.bulletHoles = []
self.holeSprite = SpriteLoad(Const.folder+'bulleth.png',centre=True)
self.holeSprite.scale = 0.5
def record(self,x,y):
self.bulletHoles.append((x,y))
def draw(self):
for i in self.bulletHoles:
self.holeSprite.x = i[0]
self.holeSprite.y = i[1]
self.holeSprite.draw()
if len(self.bulletHoles) > self.maxholes:
self.bulletHoles = []
'''
animate explosions
sound handled elsewhere
'''
class Explosion():
def __init__(self):
self.exploList = []
self.imageFrames = []
self.maxframe = 4
self.ex0 = SpriteLoad(Const.folder+'explo0.png',centre=True)
self.ex1 = SpriteLoad(Const.folder+'explo1.png',centre=True)
self.ex2 = SpriteLoad(Const.folder+'explo2.png',centre=True)
self.ex3 = SpriteLoad(Const.folder+'explo3.png',centre=True)
self.imageFrames.append(self.ex0)
self.imageFrames.append(self.ex1)
self.imageFrames.append(self.ex2)
self.imageFrames.append(self.ex3)
clock.schedule_interval(self.autocall, 0.05)
def autocall(self,dt):
for i in self.exploList:
f = i[2] #which frame
f += 1
if f < self.maxframe:
i[2] = f
else:
self.exploList.remove(i)
pass
def add(self,x,y):
a = [x,y,0]
self.exploList.append(a)
pass
def draw(self):
for i in self.exploList:
f = i[2]
self.imageFrames[f].set_position(i[0],i[1])
self.imageFrames[f].draw()
pass
class Hero():
def __init__(self):
self.heroSprite = SpriteLoad(Const.folder+'hero1.jpg',centre=True)
self.heroSprite.scale = 0.25
self.heroSprite.set_position(100,100)
self.factor = 2
self.heroHITSprite = SpriteLoad(Const.folder+'hero1hit.jpg',centre=True)
self.heroHITSprite.scale = 0.25
self.heroHITSprite.set_position(100,100)
self.move = 'stop'
clock.schedule_interval(self.autocall, 0.125)
self.heroHittimer = 0
self.status = pyglet.text.Label('Hero Health',
font_name='Times New Roman',
font_size=24,
x=1000, y = 20)
def autocall(self,dt):
self.doMovement()
if self.heroHittimer > 0:
self.heroHittimer -= 1
pass
def setscreen(self,w,h):
self.winWidth = w
self.winHeight = h
print 'setscreen'
def doMovement(self):
self.saveherox = self.heroSprite.x
self.saveheroy = self.heroSprite.y
if self.move == 'up':
self.heroSprite.y += self.heroSprite.height / self.factor
elif self.move == 'left':
self.heroSprite.x -= self.heroSprite.width / self.factor
elif self.move == 'back':
self.heroSprite.y -= self.heroSprite.height / self.factor
elif self.move == 'right':
self.heroSprite.x += self.heroSprite.width / self.factor
if self.heroSprite.x > self.winWidth or \
self.heroSprite.x < 0:
self.heroSprite.x = self.saveherox
if self.heroSprite.y > self.winHeight or \
self.heroSprite.y < 0:
self.heroSprite.y = self.saveheroy
pass
def resetPos(self):
self.heroSprite.set_position(100,100)
def draw(self):
s = 'Hits '+ str(gBattleRep.herohit) \
+ ' Killed ' + str(gBattleRep.herokilled)
self.status.text = s
self.status.draw()
if self.heroHittimer == 0:
self.heroSprite.draw()
else:
x = self.heroSprite.x
y = self.heroSprite.y
self.heroHITSprite.set_position(x,y)
self.heroHITSprite.draw()
def goUp(self):
self.move = 'up'
#print 'up'
#self.heroSprite.y += self.heroSprite.height / self.factor
pass
def goLeft(self):
self.move = 'left'
#self.heroSprite.x -= self.heroSprite.width / self.factor
pass
def goBack(self):
self.move = 'back'#self.heroSprite.y -= self.heroSprite.height / self.factor
pass
def goRight(self):
self.move = 'right'#self.heroSprite.x += self.heroSprite.width / self.factor
def stopMoving(self):
self.move = 'stop'
pass
def hit(self):
#print 'hero hit check'
#print 'xxx hero hit'
self.heroHittimer = 20 #how long to show the 'hit' drawing
def HandleModeSelect(modes,currMode):
#print Const.foo
i = 0
while i < len(modes):
if currMode == modes[i] and i < len(modes)-1:
return modes[i+1]
i += 1
return modes[0]
def withinrect( x,y,r):
x1,y1=r[0],r[1]
x2,y2=r[2],r[3]
if x>x1 and x<x2 and y>y1 and y<y2:
return True
return False
'''
man figure must go past to trigger
red to green
1 2 3
'''
class Waypoints():
def __init__(self):
self.redSpr = SpriteLoad(Const.folder+'wayRed.png',centre=False)
self.greenSpr = SpriteLoad(Const.folder+'wayGreen.png',centre=False)
self.orangeSpr = SpriteLoad(Const.folder+'wayOrange.png',centre=False)
#self.coverSpr.scale = 0.2
self.stateOff = 'Orange'
self.stateOn = 'Green'
self.stateWrong = 'Red'
self.reset()
clock.schedule_interval(self.autocall, 2.0)
pass
def autocall(self,dt):
for i in self.waylist:
if i[2] == self.stateWrong:
i[2] = self.stateOff
def draw(self):
for i in self.waylist:
x = i[0]
y = i[1]
state = i[2]
if state == self.stateOff:
self.orangeSpr.set_position(x,y)
self.orangeSpr.draw()
elif state == self.stateOn:
self.greenSpr.set_position(x,y)
self.greenSpr.draw()
elif state == self.stateWrong:
self.redSpr.set_position(x,y)
self.redSpr.draw()
pass
def checkhit(self,x,y):
for i in self.waylist:
wx = i[0]
wy = i[1]
wx1 = wx + self.orangeSpr.width
wy1 = wy + self.orangeSpr.height
r = [wx,wy,wx1,wy1]
if withinrect(x, y, r):
if i[2] != self.stateOn \
and self.checkNum(i[3]):
i[2] = self.stateOn
return True
elif i[2] == self.stateOff:
i[2] = self.stateWrong
return False
def checkNum(self,n):
if n == self.expected:
self.expected += 1
return True
return False
def complete(self):
if self.waylist == []:
return False
ret = 0
for i in self.waylist:
if i[2] == self.stateOn:
ret += 1
return (ret == len(self.waylist))
def add(self,x,y):
state = self.stateOff
a = [x,y,state,self.number]
self.waylist.append(a)
self.number += 1
pass
def reset(self):
self.waylist = []
self.number = 1
self.expected = 1
'''
goes in front of the tangos to provide cover for bullets
typeofcover
'''
class CoverLayer():
def __init__(self):
self.coverSpr = SpriteLoad(Const.folder+'brick_texture___9_by_agf81-d3a20h2.jpg')
self.coverSpr.scale = 0.2
#self.coverSpr.set_position(x,y)
def Hit(self,where,x,y):
#tx = self.coverSpr.x
#ty = self.coverSpr.y
tx = where[0]
ty = where[1]
l = tx
t = ty
r = tx + self.coverSpr.width
b = ty + self.coverSpr.height
rect = [l, t, r, b]
if withinrect( x, y, rect):
#print 'cover hit'
return True
pass
def Draw(self,where):
x = where[0]
y = where[1]
self.coverSpr.set_position(x,y)
self.coverSpr.draw()
def GetDirection():
d = random.randrange(1,9)
return d
def MoveXY(d,x,y,tw,th,ww,wh):
base = 50
mx = x
my = y
if d == 1:
if my + th < wh:
my += th
elif d == 5:
if my - th > base:
my -= th
elif d == 3 or d == 2:
if mx + tw < ww:
mx += tw
elif d == 7 or d == 4:
if mx - tw > 0:
mx -= tw
return mx,my
'''
ninja stealth
five bullets to kill one
cannot be killed by grenade
dead bodies list
'''
class TangoCommando():
def __init__(self):
self.target = SpriteLoad(Const.folder+'tango1image.png')
self.target.scale = 0.2
self.deadtarget = SpriteLoad(Const.folder+'tango1image.png')
self.deadtarget.scale = 0.2
self.deadtarget.rotation = 90 #degrees
self.deadlist = []
self.sound = gSound
self.boardlist = []
print 'init Target'
print self.target.width, self.target.height
def create(self,number,winW,winH):
i = 0
self.boardlist = []
self.deadlist = []
while i < number:
x = random.randrange(1,winW)
y = random.randrange(500,winH)
a = [x,y,0]
self.boardlist.append(a)
i+=1
pass
def getList(self):
return self.boardlist
def tangoDead(self):
if not self.boardlist == []:
return False
return True
def move(self,w,h):
i = 0
while i < len(self.boardlist):
d = GetDirection()
tw = self.target.width / 2
th = self.target.height / 2
x = self.boardlist[i][0]
y = self.boardlist[i][1]
numhit = self.boardlist[i][2]
#x = self.target.x
#y = self.target.y
#if not numhit > 4:
rx,ry = MoveXY(d, x,y,tw, th, w, h)
self.boardlist[i][0] = rx
self.boardlist[i][1] = ry
i+=1
#return rx,ry
def TangoShotcheck(self,x,y):
for where in self.boardlist:
if self.Hit(x, y, where):
self.sound.Play(self.sound.pain)
self.boardlist.remove(where)
pass
def Hit(self,x,y,where): #commmando
tx = where[0]
ty = where[1]
numhit = where[2]
l = tx
t = ty + self.target.height/4*3
r = tx + self.target.width
b = ty + self.target.height
recthead = [l, t, r, b]
l = tx
t = ty + self.target.height/4
r = tx + self.target.width
b = ty + self.target.height/4*3
rectbody = [l, t, r, b]
l = tx
t = ty
r = tx + self.target.width
b = ty + self.target.height/4
rectlegs = [l, t, r, b]
if withinrect( x, y, recthead):
#print 'head hit'
gBattleRep.numberheadshot += 1
gBattleRep.report()
numhit += 1
#return True
elif withinrect( x, y, rectbody):
#print 'body hit'
gBattleRep.numberbodyhit += 1
gBattleRep.report()
#self.sound.Play(self.sound.pain)
numhit += 1
#return True
elif withinrect( x, y, rectlegs):
#print 'leg hit'
#gBattleRep.numLegHits = gBattleRep.add( gBattleRep.numLegHits)
gBattleRep.numLegHits += 1
gBattleRep.report()
#self.sound.Play(self.sound.pain)
numhit += 1
#return True
else:
#print 'miss'
return False
where[2] = numhit
if numhit > 4:
a = [where[0],where[1]]
self.deadlist.append(a)
return True #to have sound and register as dead
else:
return False
def Draw(self):
for i in self.boardlist:
self.target.set_position(i[0],i[1])
self.target.draw()
for i in self.deadlist:
self.deadtarget.set_position(i[0],i[1])
self.deadtarget.draw()
class TargetBoard():
def __init__(self):
#self.target = SpriteLoad(Const.folder+'terrorist.png')
#self.target.scale = 0.3
#self.sound = gSound
#self.deadtarget = SpriteLoad(Const.folder+'terrorist.png')
#self.deadtarget.scale = 0.3
#self.deadtarget.rotation = 90
#self.target.set_position(x,y)
#self.target.rotation = -90.0
#self.hitlist = []
self.boardlist = []
self.deadlist = []
def create(self,number,winW,winH,Tangotype):
print 'init Target'
if Tangotype == 'real':
name = 'terrorist.png'
sz = 0.3
self.target = SpriteLoad(Const.folder+name)
self.target.scale = sz
self.sound = gSound
self.deadtarget = SpriteLoad(Const.folder+name)
self.deadtarget.scale = sz
self.deadtarget.rotation = 90
print self.target.width, self.target.height
else:
name = 'target0.jpg'
sz = 0.2
self.target = SpriteLoad(Const.folder + name)
self.target.scale = sz
self.sound = gSound
self.deadtarget = SpriteLoad(Const.folder + name)
self.deadtarget.scale = sz
self.deadtarget.rotation = 90
print self.target.width, self.target.height
i = 0
self.boardlist = []
self.deadlist = []
while i < number:
x = random.randrange(1,winW)
y = random.randrange(500,winH)
a = [x,y]
self.boardlist.append(a)
i+=1
pass
def getList(self):
return self.boardlist
def getDeadlist(self):
return self.deadlist
def tangoDead(self):
if self.boardlist != []:
return False
return True
def move(self,w,h):
#print 'move',self.target.x,w,h
i = 0
while i < len(self.boardlist):
d = GetDirection()
tw = self.target.width / 2
th = self.target.height / 2
x = self.boardlist[i][0]
y = self.boardlist[i][1]
#x = self.target.x
#y = self.target.y
rx,ry = MoveXY(d, x,y,tw, th, w, h)
self.boardlist[i][0] = rx
self.boardlist[i][1] = ry
i+=1
#return rx,ry
def TangoShotcheck(self,x,y):
for where in self.boardlist:
if self.Hit(x, y, where):
self.sound.Play(self.sound.pain)
a = [x,y]
self.deadlist.append(a)
self.boardlist.remove(where)
pass
def Hit(self,x,y,where):
tx = where[0]
ty = where[1]
l = tx
t = ty + self.target.height/4*3
r = tx + self.target.width
b = ty + self.target.height
recthead = [l, t, r, b]
l = tx
t = ty + self.target.height/4
r = tx + self.target.width
b = ty + self.target.height/4*3
rectbody = [l, t, r, b]
l = tx
t = ty
r = tx + self.target.width
b = ty + self.target.height/4
rectlegs = [l, t, r, b]
if withinrect( x, y, recthead):
#print 'head hit'
gBattleRep.numberheadshot += 1
gBattleRep.report()
return True
elif withinrect( x, y, rectbody):
#print 'body hit'
gBattleRep.numberbodyhit += 1
gBattleRep.report()
#self.sound.Play(self.sound.pain)
return True
elif withinrect( x, y, rectlegs):
#print 'leg hit'
#gBattleRep.numLegHits = gBattleRep.add( gBattleRep.numLegHits)
gBattleRep.numLegHits += 1
gBattleRep.report()
#self.sound.Play(self.sound.pain)
return True
else:
#print 'miss'
return False
def Draw(self):
for i in self.boardlist:
self.target.set_position(i[0],i[1])
self.target.draw()
for d in self.deadlist:
self.deadtarget.set_position(d[0],d[1])
self.deadtarget.draw()
#class TargetBoard0():
#def __init__(self,x,y):
#self.target = SpriteLoad(Const.folder+'target.jpg')
#self.target.scale = 0.25
#self.target.set_position(x,y)
##self.target.rotation = -90.0
##self.hitlist = []
#print 'init Target'
#print self.target.width, self.target.height
#def move(self,w,h):
##print 'move',self.target.x,w,h
#d = GetDirection()
#tw = self.target.width
#th = self.target.height
#x = self.target.x
#y = self.target.y
#self.target.x,self.target.y = MoveXY(d, x,y,tw, th, w, h)
#pass
#def Hit(self,x,y):
#tx = self.target.x
#ty = self.target.y
#l = tx
#t = ty + self.target.height/4*3
#r = tx + self.target.width
#b = ty + self.target.height
#recthead = [l, t, r, b]
#l = tx
#t = ty + self.target.height/4
#r = tx + self.target.width
#b = ty + self.target.height/4*3
#rectbody = [l, t, r, b]
#l = tx
#t = ty
#r = tx + self.target.width
#b = ty + self.target.height/4
#rectlegs = [l, t, r, b]
#if withinrect( x, y, recthead):
#print 'head hit'
#return True
#elif withinrect( x, y, rectbody):
#print 'body hit'
##self.sound.Play(self.sound.pain)
#return True
#elif withinrect( x, y, rectlegs):
#print 'leg hit'
##self.sound.Play(self.sound.pain)
#return True
#else:
##print 'miss'
#return False
#def Draw(self):
#self.target.draw()
'''
appear near hero
dot moves randomly
dot moves toward hero
tries to hit hero
number
skill
speed
location of hero
add attacks
timed attacks, then end each
check hit
RPG
sound of hero hit
/graphic
'''
class AttackHero():
def __init__(self):
self.attackL = []
clock.schedule_interval(self.autocall, 0.05)
self.attackSpr = SpriteLoad(Const.folder+'attackDot.png',centre=True)
self.attackSpr.scale = 0.5
self.hero = None
self.sound = gSound
self.badguys = []
self.pauseBool = False
pass
def autocall(self,dt):
#after some time, remove the bullet
#for i in self.attackL:
#t = i[2]
#t -= 1
#if t < 1:
#self.attackL.remove(i)
#else:
#i[2] = t
if self.pauseBool: return
self.move()
pass
def addHero(self, hero):
self.hero = hero
def addBadGuys(self, badguys):
self.badguys = badguys
pass
def draw(self):
for i in self.attackL:
x = i[0]
y = i[1]
self.attackSpr.set_position(x, y)
self.attackSpr.draw()
def add(self,hero):
'''position,time'''
random.shuffle(self.badguys)
maxb = 2 # means 3 bullets
if len(self.attackL) > maxb: return #not too many bullets!
i = 0
while i < len(self.badguys):
#h = random.randrange(100,200)
#w = random.randrange(-500,500)
#for i in self.badguys:
bp = self.badguys[i]
hx = hero.heroSprite.x
hy = hero.heroSprite.y
x = bp[0]
y = bp[1]
a = [x,y,hx,hy]
self.attackL.append(a)
i += 1
if i > maxb: break
pass
def move(self):
s = self.attackSpr.height * 2
for i in self.attackL:
x = i[0]
y = i[1]
hx = i[2]
hy = i[3]
if hx < x:
x -= s
elif hx > x:
x += s
if hy < y:
y -= s
elif hy > y:
y += s
if abs(x-hx)<s and abs(y-hy)<s:
self.attackL.remove(i)
#y -= self.attackSpr.height
i[0] = x
i[1] = y
if self.Hit(x, y): break
pass
def Hit(self,x,y):
#tx = self.coverSpr.x
#ty = self.coverSpr.y
tx = self.hero.heroSprite.x
ty = self.hero.heroSprite.y
tx -= self.hero.heroSprite.width / 2 # back from the centre
ty -= self.hero.heroSprite.height / 2
l = tx
t = ty
r = tx + self.hero.heroSprite.width
b = ty + self.hero.heroSprite.height
rect = [l, t, r, b]
if withinrect( x, y, rect):
self.sound.Play(self.sound.heropain)
self.hero.hit()
gBattleRep.herohit += 1
gBattleRep.report()
return True
return False
def pause(self):
self.pauseBool = not self.pauseBool
class ScreenTime():
def __init__(self,countup=True,inittime=0):
self.seconds = inittime
self.countup = countup
self.status = pyglet.text.Label('Time Reading',
font_name='Times New Roman',
font_size=24,
x=800, y = 20)
clock.schedule_interval(self.autocall, 1.0)
self.mode = 'stop'
def autocall(self,dt):
if self.mode == 'start':
if self.countup:
self.seconds += 1
elif not self.countup:
self.seconds -= 1
pass
def start(self):
self.mode = 'start'
pass
def stop(self):
self.mode = 'stop'
pass
def reset(self):
self.seconds = 0
def draw(self):
self.status.text = str(self.seconds)
self.status.draw()
pass
'''
to allow the number keys to be programmed with different weapons
as to the mission at hand.
'''
class EquipmentKeys():
def __init__(self):
self.keydata = []
i = 0
for i in range(10):
self.keydata.append(Const.HandToolConst)
pass
def changekey(self,key,equipment):
assert(key > -1 and key < 10)
self.keydata[key] = equipment
def get(self,key):
assert(key > -1 and key < 10)
return self.keydata[key]
def reset(self):
self.__init__()
'''
Place where stuff that can be shot at are placed.
Tango
Hostages
Cover
can be distant and nearby
distant for sniper
How to account for the shot?
Scoring?
'''
class ShootingGallery():
def __init__(self):
# prevent the tangos get faster and faster
self.runauto = 0 # ensure autocall run once only?
self.gamestage = 0 # to allow staging of different scenarios
self.stageDepth = 0 # deeper missions
self.attHero = AttackHero()
self.CBattHero = AttackHero()
self.CBattHero2 = AttackHero()
self.pauseBool = False
self.timers = ScreenTime(countup=True,inittime=0)
self.wayp = Waypoints()
def initAttack(self,hero):
self.herotarget = hero
self.herotarget.setscreen(self.winWidth,self.winHeight)
self.attHero.addHero(hero)
self.CBattHero.addHero(hero)
self.CBattHero2.addHero(hero)
self.explodeObj = Explosion()
pass
def setWinSize(self,w,h):
self.winHeight = h
self.winWidth = w
def key9(self,which):
return self.equipment.get(which)
#pass
def depthChange(self):
if self.gamestage == 1 and self.stageDepth == 1:
self.background = \
pyglet.image.load(Const.backgroundFolder+\
'aircraftcabin.jpg')
self.sound.Play(self.sound.m32MGL) #breach sound
self.equipment.reset()
self.equipment.changekey(1, Const.knifeKbar)
self.equipment.changekey(2, Const.pistol1911)
self.equipment.changekey(3, Const.STCPW)
self.equipment.changekey(4, Const.GLOCK)
self.equipment.changekey(5, Const.M107sniper)
self.equipment.changekey(6, Const.MP5)
self.timers.reset()
self.timers.start()
i = 9
self.TargetObj.create(i, self.winWidth, self.winHeight,'real')
self.CommandoBaddies.create(1, self.winWidth, self.winHeight)
self.attHero.addBadGuys(self.TargetObj.getList())
self.CBattHero.addBadGuys(self.CommandoBaddies.getList())
self.hostages = Hostages()
self.hostages.create(30,self.winWidth,self.winHeight)
self.hostages.setPanic(False)
self.stageDepth = 0
pass
def init(self):
self.sound = gSound
#self.boardlist = []
self.TargetObj = TargetBoard()
self.CommandoBaddies = TangoCommando()
self.coverlist = []
self.equipment = EquipmentKeys()
if self.gamestage == 0:
i = 0
self.equipment.reset()
self.equipment.changekey(1, Const.M32GRENADElauncher)
self.equipment.changekey(2, Const.pistol1911)
self.equipment.changekey(3, Const.STCPW)
self.equipment.changekey(4, Const.GLOCK)
self.equipment.changekey(5, Const.MP5)
self.equipment.changekey(6, Const.M107sniper)
self.equipment.changekey(7, Const.M134MiniGun)
self.equipment.changekey(8, Const.ClaymoreMine)
self.hostages = Hostages()
self.hostages.create(0,self.winWidth,self.winHeight)
self.background = \
pyglet.image.load(Const.backgroundFolder+\
'rifle_range.jpg')
self.TargetObj.create(10, self.winWidth, self.winHeight,'dummy')
elif self.gamestage == 1:
self.equipment.reset()
self.equipment.changekey(1, Const.knifeKbar)
self.equipment.changekey(2, Const.pistol1911)
self.equipment.changekey(3, Const.STCPW)
self.equipment.changekey(4, Const.GLOCK)
self.equipment.changekey(5, Const.MP5)
#self.equipment.changekey(5, Const.M249SAW)
self.wayp.add(400, 100)
self.wayp.add(500, 100)
self.wayp.add(600, 100)
self.background = \
pyglet.image.load(Const.backgroundFolder+\
'tarmac.jpg')
#'aircraftcabin.jpg')
#while i > 0: #tangos
#x = random.randrange(1,self.winWidth)
#y = random.randrange(500,self.winHeight)
#a = [x,y]
#self.boardlist.append(a)
#i-=1
elif self.gamestage == 2:
self.equipment.reset()
self.equipment.changekey(1, Const.knifeKbar)
self.equipment.changekey(2, Const.STCPW)
self.equipment.changekey(3, Const.M4assaultRifle)
self.equipment.changekey(4, Const.SAR21Rifle)
self.equipment.changekey(5, Const.Ultimax100SAW)
self.equipment.changekey(6, Const.M32GRENADElauncher)
self.equipment.changekey(7, Const.M249SAW)
self.equipment.changekey(8, Const.ClaymoreMine)
self.background = \
pyglet.image.load(Const.backgroundFolder+\
'Afghan_village_destroyed_by_the_Soviets.jpg')
self.hostages = Hostages()
self.hostages.create(0,self.winWidth,self.winHeight)
self.hostages.setPanic(False)
i = 100
self.TargetObj.create(i, self.winWidth, self.winHeight,'real')
#while i > 0: #tangos
#x = random.randrange(1,self.winWidth)
#y = random.randrange(500,self.winHeight)
#a = [x,y]
#self.boardlist.append(a)
#i-=1
self.attHero.addBadGuys(self.TargetObj.getList())
elif self.gamestage == 3:
self.timers.reset()
self.timers.start()
self.equipment.reset()
self.equipment.changekey(1, Const.knifeKbar)
self.equipment.changekey(2, Const.STCPW)
self.equipment.changekey(3, Const.M4assaultRifle)
self.equipment.changekey(4, Const.SAR21Rifle)
self.equipment.changekey(5, Const.Ultimax100SAW)
self.equipment.changekey(6, Const.M32GRENADElauncher)
self.equipment.changekey(7, Const.M134MiniGun)
self.equipment.changekey(8, Const.M107sniper)
self.hostages = Hostages()
self.hostages.create(30,self.winWidth,self.winHeight)
self.hostages.setPanic(True)
i = 10
self.TargetObj.create(i, self.winWidth, self.winHeight,'real')
#while i > 0: #tangos
#x = random.randrange(1,self.winWidth)
#y = random.randrange(500,self.winHeight)
#a = [x,y]
#self.boardlist.append(a)
#i-=1
self.attHero.addBadGuys(self.TargetObj.getList())
i = 20
self.coverlist = []
self.coverObj = CoverLayer()
while i > 0: #cover
x = random.randrange(1,self.winWidth)
y = random.randrange(1,self.winHeight+300)
#y = 200
cov = (x,y)
self.coverlist.append(cov)
i-=1
elif self.gamestage == 4:
self.sound.Play(self.sound.hostageHitsound)
self.equipment.reset()
self.equipment.changekey(1, Const.knifeKbar)
self.equipment.changekey(2, Const.GLOCK)
self.equipment.changekey(3, Const.STCPW)
self.equipment.changekey(4, Const.MP5)
self.equipment.changekey(5, Const.M107sniper)
self.timers.reset()
self.timers.start()
self.background = \
pyglet.image.load(Const.backgroundFolder+\
'Kuala-Lumpur-Federal-Hotel-Street-Front.jpg')
i = 5
self.CommandoBaddies.create(i, self.winWidth, self.winHeight)
#while i > 0: #tangos
#x = random.randrange(1,self.winWidth)
#y = random.randrange(500,self.winHeight)
#a = [x,y]
#self.boardlist.append(a)
#i-=1
self.CBattHero.addBadGuys(self.CommandoBaddies.getList())
self.hostages = Hostages()
self.hostages.create(20,self.winWidth,self.winHeight)
self.hostages.setPanic(True)
elif self.gamestage == 5:
self.equipment.reset()
self.equipment.changekey(1, Const.knifeKbar)
self.equipment.changekey(2, Const.GLOCK)
self.equipment.changekey(3, Const.STCPW)
self.equipment.changekey(4, Const.M4assaultRifle)
self.equipment.changekey(5, Const.M249SAW)
self.equipment.changekey(6, Const.M134MiniGun)
self.equipment.changekey(7, Const.M107sniper)
self.equipment.changekey(8, Const.ClaymoreMine)
self.timers.reset()
self.timers.start()
self.background = \
pyglet.image.load(Const.backgroundFolder+\
'Nuclear.power.plant.Dukovany.jpg')
i = 50
self.CommandoBaddies.create(i, self.winWidth, self.winHeight)
self.CBattHero.addBadGuys(self.CommandoBaddies.getList())
self.CBattHero2.addBadGuys(self.CommandoBaddies.getList())
self.hostages = Hostages()
self.hostages.create(0,self.winWidth,self.winHeight)
elif self.gamestage == 6:
self.equipment.reset()
self.equipment.changekey(1, Const.M4assaultRifle)
self.hostages = Hostages()
self.hostages.create(0,self.winWidth,self.winHeight)
self.background = \
pyglet.image.load(Const.backgroundFolder+\
'Arlington-National-Cemetery-during-Spring.jpg')
pass
elif self.gamestage == 7:
self.hostages = Hostages()
self.hostages.create(0,self.winWidth,self.winHeight)
self.gamestage = -1
self.timers.stop()
self.timers.reset()
gBattleRep.init() #reset stats
gBattleRep.report()
self.herotarget.resetPos()
#gfwindow = pyglet.window.Window(style=pyglet.window.Window.WINDOW_STYLE_DIALOG)
#i = 0
#self.coverlist = []
#self.coverObj = CoverLayer()
#while i > 0:
#x = random.randrange(1,self.winWidth)
#y = random.randrange(1,self.winHeight+300)
##y = 200
#cov = (x,y)
#self.coverlist.append(cov)
#i-=1
if self.runauto == 0: #run once
clock.schedule_interval(self.autocall, 0.25)
#self.background = pyglet.image.load(Const.backgroundFolder+'Afghan_village_destroyed_by_the_Soviets.jpg')
self.runauto = 1
def autocall(self,dt):
if self.pauseBool: return
i = 0
#m = len(self.boardlist)-1
if not self.TargetObj.tangoDead():
self.attHero.add(self.herotarget) #keep attacking hero
if not self.CommandoBaddies.tangoDead():
self.CBattHero.add(self.herotarget)
self.CBattHero2.add(self.herotarget)
self.TargetObj.move(self.winWidth,self.winHeight)
self.CommandoBaddies.move(self.winWidth,self.winHeight)
self.wayp.checkhit(self.herotarget.heroSprite.x,self.herotarget.heroSprite.y)
if self.gamestage == 1 and self.wayp.complete():
print 'complete'
self.wayp.reset()
self.stageDepth += 1
self.depthChange()
#if m >= 0:
#self.attHero.add(self.herotarget)
#pass
#while m > -1:
#a = self.boardlist[m][0]
#b = self.boardlist[m][1]
#x,y = self.TargetObj.move(self.winWidth,self.winHeight,a,b)
#if Const.suicideBomber and self.SuicideBomberHit(x,y):
#print 'xxx bomber'
#gBattleRep.heroblasted += 1
#gBattleRep.report()
#self.boardlist.remove(self.boardlist[m])
#else:
#self.boardlist[m][0] = x
#self.boardlist[m][1] = y
#m-=1
#pass
def Rescue(self,x,y):
self.hostages.Rescue(x, y)
def Hit(self,x,y,name):
retz = False
self.hostages.hit(x,y)
if name != Const.M107sniper and \
name != Const.M134MiniGun: #can go through walls if powerful weapon
for c in self.coverlist:
if self.coverObj.Hit(c,x,y):
retz= True
break
if retz:
return
self.TargetObj.TangoShotcheck(x,y)
self.CommandoBaddies.TangoShotcheck(x,y)
if name == Const.M107sniper or \
name == Const.M134MiniGun: #one shot equals 4 shots
for i in xrange(0,4):
self.CommandoBaddies.TangoShotcheck(x,y)
if self.TargetObj.tangoDead() and \
self.CommandoBaddies.tangoDead():
self.timers.stop()
#for i in self.boardlist:
#if self.TargetObj.Hit(x,y,i):
##print 'hit hit'
#self.sound.Play(self.sound.pain)
#self.boardlist.remove(i)
#break
#if self.boardlist == []: #tangos dead
#self.timers.stop()
def SuicideBomberHit(self,x,y):
br = 100
gl = x - br
gt = y - br
gr = x + br
gb = y + br
grect = [gl,gt,gr,gb]
hx = self.herotarget.heroSprite.x
hy = self.herotarget.heroSprite.y
if withinrect(hx,hy,grect):
self.explodeObj.add(x, y)
self.herotarget.hit()
self.sound.Play(self.sound.m32MGL)
return True
else:
return False
def HitGrenade(self,x,y):
br = 200 #blast radius
self.explodeObj.add(x, y)
gl = x - br
gt = y - br
gr = x + br
gb = y + br
grect = [gl,gt,gr,gb]
tangos = self.TargetObj.getList()
dead = self.TargetObj.getDeadlist()
for i in tangos:
if withinrect(i[0],i[1],grect):
self.sound.Play(self.sound.pain)
tangos.remove(i)
dead.append([i[0],i[1]])
gBattleRep.numberblasted += 1
gBattleRep.report()
self.hostages.HitGrenade(grect)
if self.TargetObj.tangoDead(): #tangos dead
self.timers.stop()
def Claymore(self,x,y):
#print 'claymore'
self.explodeObj.add(x, y)
w = 250
h = 300
g = 20
sx = x - w
sy = y - h
ex = x + w
ey = y + h
bx = sx
by = sy
p = 0
while True:
p += 1
#bx = random.randrange(1,self.winWidth)
#by = random.randrange(1,self.winHeight)
self.Hit(bx, by, Const.ClaymoreMine)
bx += g
if bx > ex:
bx = sx
by += g
if by > ey:
break
print 'pellets', p
def Draw(self):
# the 60 gives a status bar for free
self.background.blit(0,60,width=self.winWidth,height=self.winHeight)
#self.background.draw() #cannot do this way - cannot set width/height
self.explodeObj.draw()
#for i in self.boardlist:
#self.TargetObj.Draw(i)
self.TargetObj.Draw()
self.CommandoBaddies.Draw()
for i in self.coverlist:
self.coverObj.Draw(i)
self.hostages.draw()
self.attHero.draw()
self.CBattHero.draw()
self.timers.draw()
self.wayp.draw()
#self.hero.draw()
def pause(self):
self.pauseBool = not self.pauseBool
self.attHero.pause()
self.CBattHero.pause()
#print 'pause',self.pauseBool
#class ShootingGallery():
#gTargetBoard = TargetBoard()
gShootGallery = ShootingGallery()
gBulletHoles = BulletHoles()
class Knife():
def __init__(self,name):
self.name = name
print 'knife init'
self.mode = Const.knifeSafe
self.data = Const.folder
weapondata = self.Database(name)
self.drawing = SpriteLoad(self.data+weapondata[0])
self.drawing.scale = weapondata[1]
self.sound = gSound
#self.bulleth = bulletholes
self.mousex = 0
self.mousey = 0
self.magazine = weapondata[2]
self.ammo = self.magazine
self.reloadweapon = False
self.status = pyglet.text.Label('Hello, world',
font_name='Times New Roman',
font_size=24,
x=220, y = 20)
self.SetText()
self.reticle = weapondata[3]
def SetText(self):
self.report = self.name + ' ' + self.mode + ' ' + str(self.ammo)
self.status.text = self.report
def Database(self,name):
#filename,scale,magazine capacity,
if name == Const.knifeKbar:
return 'kbar knife side 1217_h_lg.png', \
0.25,1,'kbar knife side 1217_h_lgup.png'
else:
raise Exception("knife Weapon not exist!")
def mouse(self,x,y):
print self.name,x,y
pass
def mouseup(self,x,y):
pass
def mousedrag(self,x,y):
#knife has drag
pass
def mousepos(self,x,y): #knife
#print 'mouse',x,y
if self.mode == Const.knifeArmed:
gShootGallery.Hit(x, y,Const.knifeKbar)
def select(self):
self.mode = HandleModeSelect(Const.knifeModes, self.mode)
self.SetText()
def draw(self):
self.drawing.draw()
self.status.draw()
pass
def Reload(self):
pass
def SetSights(self,win): #knife
image = pyglet.image.load(Const.folder+self.reticle)
x = image.height / 2
y = image.width / 2
cursor = pyglet.window.ImageMouseCursor(image, x, y)
win.set_mouse_cursor( cursor)
pass
class HandTool():
def __init__(self,name):
self.name = name
self.data = Const.folder
print 'hand tool init'
self.reticle = 'Cursor hand white.png'
self.handName = 'Cursor hand whiteB.png'
self.drawing = SpriteLoad(self.data+self.handName)
self.drawing.set_position(20, 0)
self.status = pyglet.text.Label('Hello, world',
font_name='Times New Roman',
font_size=24,
x=220, y = 20)
self.SetText()
def SetText(self):
self.report = 'This hand tool can rescue hostages'
self.status.text = self.report
def Database(self,name):
#filename,scale,magazine capacity,
if name == Const.knifeKbar:
return 'kbar knife side 1217_h_lg.png', \
0.25,1,'kbar knife side 1217_h_lgup.png'
else:
raise Exception("hand wanker not exist!")
def mouse(self,x,y):
print self.name,x,y
pass
def mouseup(self,x,y):
#gShootGallery.Rescue(x, y)
pass
def mousedrag(self,x,y):
#hand has drag
pass
def mousepos(self,x,y): #knife
#print 'mouse',x,y
gShootGallery.Rescue(x, y)
pass
def select(self):
pass
def draw(self):
self.drawing.draw()
self.status.draw()
pass
def Reload(self):
pass
def SetSights(self,win):
image = pyglet.image.load(Const.folder+self.reticle)
cursor = pyglet.window.ImageMouseCursor(image, 25, 25)
win.set_mouse_cursor( cursor)
pass
#class Pistol():
#def __init__(self,
#name,
##sound,
##bulletholes,
#):
#self.name = name
#print 'pistol init'
#self.mode = Const.pistolSafe
#self.data = Const.folder
#weapondata = self.Database(name)
#self.drawing = SpriteLoad(self.data+weapondata[0])
#self.drawing.scale = weapondata[1]
#self.sound = gSound
#self.bulleth = gBulletHoles
#self.mousex = 0
#self.mousey = 0
#self.magazine = weapondata[2]
#self.ammo = self.magazine
#self.reloadweapon = False
#self.status = pyglet.text.Label('Hello, world',
#font_name='Times New Roman',
#font_size=24,
#x=220, y = 20)
#self.SetText()
#self.reticle = weapondata[3]
#pass
#def Database(self,name):
##filename,scale,magazine capacity,
#if name == Const.pistol1911:
#return 'm1911pistol.jpg',0.25,15,'reticlePistol1911.png'
#else:
#raise Exception("pistol Weapon not exist!")
#def reloadCall(self,dt):
#if self.reloadweapon:
#self.reloadtime -= 1
#if self.reloadtime < 1:
#self.ammo = self.magazine
#self.SetText()
#clock.unschedule(self.reloadCall)
#self.reloadweapon = False
#def mouse(self,x,y):
#if self.mode != Const.pistolSafe:
#self.trigger = True
#self.mousex = x
#self.mousey = y
#self.Fire()
#def mouseup(self,x,y):
#self.trigger = False
#def mousedrag(self,x,y):
##pistol got no drag
#pass
#def Fire(self):
#if self.ammo > 0:
##self.sound.Play(self.sound.sar21)
#self.sound.Play(self.sound.m1911)
#x = self.mousex
#y = self.mousey
#self.bulleth.record(x,y)
#self.ammo -= 1
#self.SetText()
##gTargetBoard.Hit(x, y)
#gShootGallery.Hit(x, y)
#def SetText(self):
#self.report = self.name + ' ' + self.mode + ' ' + str(self.ammo)
#self.status.text = self.report
#def select(self):
##print 'pistol mode'
#self.mode = HandleModeSelect(Const.pistolModes, self.mode)
##print self.mode
#self.SetText()
##print self.mode
#def draw(self):
#self.drawing.draw()
#self.bulleth.draw()
#self.status.draw()
#pass
#def Reload(self):
#self.sound.Player(self.sound.reLoad)
#self.reloadweapon = True
#self.reloadtime = 3
#clock.schedule_interval(self.reloadCall, 1.0)
#def SetSights(self,win):
#image = pyglet.image.load(Const.folder+self.reticle)
#cursor = pyglet.window.ImageMouseCursor(image, 25, 25)
#win.set_mouse_cursor( cursor)
#pass
class AssaultRifle():
def __init__(self,
name,
numberMagazines,
#bulletholes,
):
self.name = name
print 'AssaultRifle init'
self.mode = Const.assaultRifleSafe
self.rateFire = 1.0
self.trigger = False
self.auto = False
self.data = Const.folder
self.sound = gSound
self.weaponsound = None
self.availableModes = None
weapondata = self.Database(name)
self.drawing = SpriteLoad(self.data+weapondata[0])
self.drawing.scale = weapondata[1]
self.bulleth = gBulletHoles
self.mousex = 0
self.mousey = 0
self.magazine = weapondata[2]
self.ammo = self.magazine
self.reloadweapon = False
self.magazines = numberMagazines
self.status = pyglet.text.Label('Hello, world',
font_name='Times New Roman',
font_size=24,
x=220, y = 20)
self.SetText()
self.reticle = weapondata[3]
self.availableModes = weapondata[4]
self.rateFire = weapondata[5]
pass
def Database(self,name):
#filename,scale,magazine capacity,reticle,modes,rateOfFire
if name == Const.M4assaultRifle:
self.weaponsound = self.sound.m4carbine
return 'm4_1.jpg',0.25,30,'reticleM4.png',\
Const.assaultRifleModes,0.1
elif name == Const.SAR21Rifle:
self.weaponsound = self.sound.sar21
return 'sar21_1.jpg',0.3,30,'reticle.png',\
Const.assaultRifleModes,0.1
elif name == Const.Ultimax100SAW:
self.weaponsound = self.sound.sar21
return 'ultimax_mk3_3.jpg',0.2,100,'reticle.png',\
Const.machineGunModes,0.1
elif name == Const.M134MiniGun:
self.weaponsound = self.sound.minigunSound #4400
return 'minigun800px-DAM134DT.png',0.2,500,\
'reticleM4.png',Const.machineGunModes,0.01
elif name == Const.pistol1911:
self.weaponsound = self.sound.m1911
return 'm1911pistol.jpg',0.25,7,\
'reticlePistol1911.png',Const.pistolModes,1.0
elif name == Const.M32GRENADElauncher:
self.weaponsound = self.sound.m32MGL
return 'M32MGL.png',0.3,12,\
'M32_Iron_Sights_MW3DS.png',Const.pistolModes,1.0
elif name == Const.STCPW:
self.weaponsound = self.sound.stcpwsnd
return 'ST_Kinetics_CPW_Submachine_Gun_(SMG)_1.jpg',0.15,30,\
'reticle.png',Const.assaultRifleModes,0.1
elif name == Const.GLOCK:
self.weaponsound = self.sound.glocksnd
return 'glockhqdefault.jpg',0.25,18,\
'reticlePistol1911.png',Const.assaultRifleModes,0.1
elif name == Const.M249SAW:
self.weaponsound = self.sound.m249sawSound
return '800px-PEO_M249_Para_ACOG.jpg',0.25,200,\
'reticleM4.png',Const.machineGunModes,0.1
elif name == Const.M107sniper:
self.weaponsound = self.sound.m107Sound
return 'M107Cq.jpg',0.5,10,\
'M107largeSights.png',Const.pistolModes,1.0
elif name == Const.ClaymoreMine:
self.weaponsound = self.sound.m18Sound
return 'Claymore2.jpg',0.3,5,\
'Claymore2aimer.jpg',Const.pistolModes,1.0
elif name == Const.MP5:
self.weaponsound = self.sound.mp5sound
return 'mp5a3.jpg',0.3,30,\
'mp5sights.png',Const.assaultRifleModes,0.15
else:
raise Exception("Weapon not exist!")
def SetText(self):
self.report = self.name + ' ' + self.mode + ' ' + str(self.ammo) + ' ' + str(self.magazines)
self.status.text = self.report
def Fire(self):
if self.ammo > 0:
self.sound.Play(self.weaponsound)
x = self.mousex
y = self.mousey
self.bulleth.record(x,y)
self.ammo -= 1
self.SetText()
if self.name == Const.M32GRENADElauncher:
gShootGallery.HitGrenade(x, y)
elif self.name == Const.ClaymoreMine:
gShootGallery.Claymore(x, y)
elif self.name != Const.M32GRENADElauncher:
gShootGallery.Hit(x, y,self.name)
def draw(self):
self.drawing.draw()
self.bulleth.draw()
self.status.draw()
def autocall(self,dt):
if self.trigger:
#print 'autofire'
self.Fire()
def reloadCall(self,dt):
if self.reloadweapon:
self.reloadtime -= 1
if self.reloadtime < 1 and self.magazines > 0:
self.magazines -= 1
self.ammo = self.magazine
self.SetText()
clock.unschedule(self.reloadCall)
self.reloadweapon = False
def mouse(self,x,y):
#print self.name,x,y
if self.mode != Const.assaultRifleSafe:
self.trigger = True
self.mousex = x
self.mousey = y
self.Fire()
pass
def mousepos(self,x,y):
pass
def mouseup(self,x,y):
self.trigger = False
def mousedrag(self,x,y):
if self.mode == Const.assaultRifleAuto:
#self.mouse(x, y)
self.mousex = x
self.mousey = y
pass
def Reload(self):
self.sound.Player(self.sound.reLoad)
self.reloadweapon = True
self.reloadtime = 2
clock.schedule_interval(self.reloadCall, 1.0)
def select(self):
#print 'pistol mode'
#self.mode = HandleModeSelect(Const.assaultRifleModes, self.mode)
self.mode = HandleModeSelect(self.availableModes, self.mode)
self.SetText()
print self.mode
if self.mode == Const.assaultRifleAuto:
clock.schedule_interval(self.autocall, self.rateFire)
self.auto = True
elif self.auto:
clock.unschedule(self.autocall)
self.auto = False
def SetSights(self,win):
image = pyglet.image.load(Const.folder+self.reticle)
x = image.height / 2
y = image.width / 2
cursor = pyglet.window.ImageMouseCursor(image, x, y)
win.set_mouse_cursor( cursor)
pass
def AddMagazines(self,numMags):
self.magazines += numMags
class CurrentGame():
def __init__(self):
self.currentWeapon = 'nothing'
self.weaponDict = {}
#self.sound = Sounds()
self.sound = gSound
self.bulletholes = BulletHoles()
self.knife = Knife(Const.knifeKbar)
self.weaponDict[Const.knifeKbar] = self.knife
self.hand = HandTool(Const.HandToolConst)
self.weaponDict[Const.HandToolConst] = self.hand
#self.pistol = Pistol(Const.pistol1911,
##self.sound,
##self.bulletholes
#)
#self.weaponDict[Const.pistol1911] = self.pistol
#self.M4assaultRifle = AssaultRifle(Const.M4assaultRifle,
#self.sound,
#self.bulletholes)
#self.weaponDict[Const.M4assaultRifle] = self.M4assaultRifle
#self.Sar21assaultRifle = AssaultRifle(Const.SAR21Rifle,
#self.sound,
#self.bulletholes)
#self.weaponDict[Const.SAR21Rifle] = self.Sar21assaultRifle
self.AddRifleWeapon(Const.M4assaultRifle,10)
self.AddRifleWeapon(Const.SAR21Rifle,10)
self.AddRifleWeapon(Const.Ultimax100SAW,3)
self.AddRifleWeapon(Const.M134MiniGun,1)
self.AddRifleWeapon(Const.pistol1911,5)
self.AddRifleWeapon(Const.M32GRENADElauncher,6)
self.AddRifleWeapon(Const.STCPW,5)
self.AddRifleWeapon(Const.GLOCK,5)
self.AddRifleWeapon(Const.M249SAW,2)
self.AddRifleWeapon(Const.M107sniper,4)
self.AddRifleWeapon(Const.ClaymoreMine, 2)
self.AddRifleWeapon(Const.MP5, 5)
self.choose(Const.HandToolConst) #default to hand
self.hero = Hero()
gShootGallery.initAttack(self.hero)
#self.tb = TargetBoard()
def AddRifleWeapon(self,name,magazinesCarried):
self.weaponDict[name] = AssaultRifle(name,
magazinesCarried
#self.sound,
#self.bulletholes
)
pass
def choose(self,weapon):
self.currentWeapon = weapon
#print 'current weapon', self.currentWeapon
try:
self.cw = self.weaponDict[self.currentWeapon]
#print 'namee', self.cw.name
except:
print '{ERROR} choose weapon'
def mousedown(self,x,y):
#print 'name m', self.cw.name
self.cw.mouse(x,y)
#self.tb.Hit(x, y)
def mousepos(self,x,y):
self.cw.mousepos(x,y)
def mouseup(self,x,y):
self.cw.mouseup(x,y)
def mousedrag(self,x,y):
self.cw.mousedrag(x,y)
def reloadWeapom(self):
self.cw.Reload()
def select(self):
self.cw.select()
def draw(self):
#gTargetBoard.Draw()
gShootGallery.Draw()
self.cw.draw()
self.hero.draw()
def SetSight(self,win):
self.cw.SetSights(win)
#image = pyglet.image.load(Const.folder+'reticle.png')
#cursor = pyglet.window.ImageMouseCursor(image, 25, 25)
#return cursor
def AddMagazines(self):
self.cw.AddMagazines(5)
def key9(self,whichkey):
self.choose(gShootGallery.key9(whichkey))
pass
#class CurrentGame():
class FPSWin(pyglet.window.Window):
def __init__(self):
super(FPSWin, self).__init__(resizable = True)
self.maximize()
#self.set_visible(visible=False)
self.ikey = False # allow i key to toggle
self.set_caption('SHOOTER the game by George Loo')
#self.set_fullscreen(True, screen=None)
#self.set_exclusive_mouse()
#self.set_size(1600, 900)
#print 'winsize',Const.winHeight, Const.winWidth
#self.set_location(300,50)
self.clear()
gShootGallery.setWinSize(self.width, self.height)
gShootGallery.init()
self.game = CurrentGame()
self.game.SetSight(self) #pass in window self
#gmw.set_visible(visible=True)
self.set_visible(visible=True)
def on_mouse_press(self,x, y, button, modifiers):
if button==pyglet.window.mouse.LEFT:
#print 'mouse left'
self.game.mousedown(x,y)
pass
#self.set_fullscreen(False, screen=None)
if button==pyglet.window.mouse.RIGHT:
#print 'mouse right'
pass
def on_mouse_release(self,x, y, button, modifiers):
if button==pyglet.window.mouse.LEFT:
#print 'mouse left up'
self.game.mouseup(x,y)
pass
if button==pyglet.window.mouse.RIGHT:
#print 'mouse right up'
pass
def on_mouse_motion(self, x, y, dx, dy):
##print 'motion'
self.game.mousepos(x,y)
#pass
#def on_resize(self, width, height):
#print 'resize',width, height
#self.set_size(width, height)
def on_mouse_drag(self, x, y, dx, dy, buttons, modifiers):
if buttons==pyglet.window.mouse.LEFT:
#print 'drag'
self.game.mousedrag(x,y)
pass
def on_key_press(self, symbol, modifiers):
if symbol == key.F12:
#print 'F12'
pass
elif symbol == key._1:
#print '1',Const.knifeKbar
#self.game.choose(Const.knifeKbar)
self.game.key9(1)
self.game.SetSight(self)
elif symbol == key._2:
#print '2',Const.pistol1911
self.game.key9(2)
#self.game.choose(Const.pistol1911)
self.game.SetSight(self)
elif symbol == key._3:
#print '3'
#self.game.choose(Const.M4assaultRifle)
self.game.key9(3)
self.game.SetSight(self)
elif symbol == key._4:
#self.game.choose(Const.SAR21Rifle)
self.game.key9(4)
#print '4'
self.game.SetSight(self)
#self.set_mouse_cursor( self.game.GetSight())
elif symbol == key._5:
#self.game.choose(Const.Ultimax100SAW)
self.game.key9(5)
self.game.SetSight(self)
#print '5'
elif symbol == key._6:
self.game.key9(6)
#self.game.choose(Const.M134MiniGun)
self.game.SetSight(self)
#print '6'
elif symbol == key._7:
self.game.key9(7)
#self.game.choose(Const.M32GRENADElauncher)
self.game.SetSight(self)
#print '7'
elif symbol == key._8:
self.game.key9(8)
self.game.SetSight(self)
pass #print '8'
#gmw.hide()
elif symbol == key._9:
self.game.key9(9)
self.game.SetSight(self)
pass
#print '9'
elif symbol == key._0:
#print '0'
self.game.key9(0)
#self.game.choose(Const.HandToolConst)
self.game.SetSight(self)
pass
elif symbol == key.I:
if not self.ikey:
self.set_fullscreen(False, screen=None)
self.ikey = True
gmw.show()
elif self.ikey:
#gmw.hide()
self.set_fullscreen(True, screen=None)
self.ikey = False
#self.set_visible(visible=False)
#
elif symbol == key.Z:
#print 'Z'
self.game.reloadWeapom()
elif symbol == key.B:
#print 'B - selector'
self.game.select()
elif symbol == key.W:
self.game.hero.goUp()
elif symbol == key.A:
self.game.hero.goLeft()
elif symbol == key.S:
self.game.hero.goBack()
elif symbol == key.D:
self.game.hero.goRight()
def on_key_release(self, symbol, modifiers):
if symbol == key.W:
self.game.hero.stopMoving()
elif symbol == key.A:
self.game.hero.stopMoving()
elif symbol == key.S:
self.game.hero.stopMoving()
elif symbol == key.D:
self.game.hero.stopMoving()
elif symbol == key.F1:
gBattleRep.init() #reset stats
gBattleRep.report()
elif symbol == key.F12 and modifiers & key.MOD_SHIFT:
print 'shift F12'
gShootGallery.init()
elif symbol == key.F12:
gShootGallery.gamestage += 1
gShootGallery.init()
elif symbol == key.F: #first aid = F
if gBattleRep.herohit > 0:
gSound.Play(gSound.reliefSound)
gBattleRep.herohit -= 1 #first aid
gBattleRep.report()
if gBattleRep.herohit == 0:
gSound.Play(gSound.curedSound)
elif symbol == key.F10:
self.game.AddMagazines()
elif symbol == key.PAUSE:
gShootGallery.pause()
def on_draw(self):
self.clear()
self.game.draw()
def on_close(self):
gmw.close()
self.close()
if __name__ == "__main__":
#gmw = MessageWin('Messages')
#gmw2 = MessageWin('Main')
m = FPSWin()
pyglet.app.run() | 31.114761 | 118 | 0.525176 | 8,682 | 78,627 | 4.72322 | 0.105736 | 0.006389 | 0.026825 | 0.006097 | 0.499305 | 0.436584 | 0.388739 | 0.341722 | 0.305119 | 0.295047 | 0 | 0.026807 | 0.365193 | 78,627 | 2,527 | 119 | 31.114761 | 0.794763 | 0.128098 | 0 | 0.478685 | 0 | 0 | 0.035852 | 0.006904 | 0 | 0 | 0 | 0 | 0.001218 | 0 | null | null | 0.042022 | 0.002436 | null | null | 0.010962 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4d246a042f4d01726d7da3a16c2ca45068a1a3cb | 2,462 | py | Python | exercises/networking_v2/roles/ansible-network.network-engine/lib/network_engine/plugins/template/__init__.py | rcalvaga/linklight | bb6364272c167c017cb2ee0790015143df29fa19 | [
"MIT"
] | 1 | 2020-03-29T17:35:59.000Z | 2020-03-29T17:35:59.000Z | exercises/networking_v2/roles/ansible-network.network-engine/lib/network_engine/plugins/template/__init__.py | rcalvaga/linklight | bb6364272c167c017cb2ee0790015143df29fa19 | [
"MIT"
] | null | null | null | exercises/networking_v2/roles/ansible-network.network-engine/lib/network_engine/plugins/template/__init__.py | rcalvaga/linklight | bb6364272c167c017cb2ee0790015143df29fa19 | [
"MIT"
] | 1 | 2020-03-30T11:00:47.000Z | 2020-03-30T11:00:47.000Z | # (c) 2018, Ansible by Red Hat, inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
from ansible.module_utils.six import iteritems, string_types
from ansible.errors import AnsibleUndefinedVariable
class TemplateBase(object):
def __init__(self, templar):
self._templar = templar
def __call__(self, data, variables, convert_bare=False):
return self.template(data, variables, convert_bare)
def run(self, template, variables):
pass
def template(self, data, variables, convert_bare=False):
if isinstance(data, collections.Mapping):
templated_data = {}
for key, value in iteritems(data):
templated_key = self.template(key, variables, convert_bare=convert_bare)
templated_value = self.template(value, variables, convert_bare=convert_bare)
templated_data[templated_key] = templated_value
return templated_data
elif isinstance(data, collections.Iterable) and not isinstance(data, string_types):
return [self.template(i, variables, convert_bare=convert_bare) for i in data]
else:
data = data or {}
tmp_avail_vars = self._templar._available_variables
self._templar.set_available_variables(variables)
try:
resp = self._templar.template(data, convert_bare=convert_bare)
resp = self._coerce_to_native(resp)
except AnsibleUndefinedVariable:
resp = None
pass
finally:
self._templar.set_available_variables(tmp_avail_vars)
return resp
def _coerce_to_native(self, value):
if not isinstance(value, bool):
try:
value = int(value)
except Exception:
if value is None or len(value) == 0:
return None
pass
return value
def _update(self, d, u):
for k, v in iteritems(u):
if isinstance(v, collections.Mapping):
d[k] = self._update(d.get(k, {}), v)
else:
d[k] = v
return d
| 34.676056 | 92 | 0.620634 | 288 | 2,462 | 5.090278 | 0.371528 | 0.082538 | 0.081855 | 0.060027 | 0.164393 | 0.099591 | 0 | 0 | 0 | 0 | 0 | 0.005202 | 0.297319 | 2,462 | 70 | 93 | 35.171429 | 0.842197 | 0.103574 | 0 | 0.137255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.058824 | 0.078431 | 0.019608 | 0.352941 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4d272df6572584be280304452391a2a0947eefaa | 4,220 | py | Python | frontend/Two_Dim_System.py | Pugavkomm/NS-analyst | 698af0e94f57b431fd77c17c49d4a23f11d21d3f | [
"MIT"
] | null | null | null | frontend/Two_Dim_System.py | Pugavkomm/NS-analyst | 698af0e94f57b431fd77c17c49d4a23f11d21d3f | [
"MIT"
] | null | null | null | frontend/Two_Dim_System.py | Pugavkomm/NS-analyst | 698af0e94f57b431fd77c17c49d4a23f11d21d3f | [
"MIT"
] | null | null | null | """AI is creating summary for
"""
from frontend import main_window
from PyQt5 import QtWidgets
from frontend import input_system
from PyQt5.QtWidgets import QInputDialog, qApp
from qt_material import apply_stylesheet
style_sheets = ['dark_amber.xml',
'dark_blue.xml',
'dark_cyan.xml',
'dark_lightgreen.xml',
'dark_pink.xml',
'dark_purple.xml',
'dark_red.xml',
'dark_teal.xml',
'dark_yellow.xml',
'light_amber.xml',
'light_blue.xml',
'light_cyan.xml',
'light_cyan_500.xml',
'light_lightgreen.xml',
'light_pink.xml',
'light_purple.xml',
'light_red.xml',
'light_teal.xml',
'light_yellow.xml']
class Two_Dim_system(QtWidgets.QMainWindow, main_window.Ui_MainWindow, input_system.Ui_input_system):
"""AI is creating summary for App
Args:
QtWidgets ([type]): [description]
main_window ([type]): [description]
"""
def __init__(self):
"""AI is creating summary for __init__
"""
super().__init__()
self.ui = main_window.Ui_MainWindow()
self.ui.setupUi(self)
self.InitUI()
def InitUI(self):
"""AI is creating summary for setupUi
"""
self.setupUi(self)
# self.statusBar = QStatusBar()
# self.setStatusBar(self.statusBar)
# self.menuFile.setStatusTip()
# self.menuFile.setStatusTip("test")
self.actionExit.triggered.connect(qApp.quit)
self.darkamber.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_amber.xml')))
self.lightamber.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_amber.xml')))
self.darkblue.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_blue.xml')))
self.lightblue.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_blue.xml')))
self.darkcyan.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_cyan.xml')))
self.lightcyan.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_cyan.xml')))
self.darklightgreen.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_lightgreen.xml')))
self.lightlightgreen.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_lightgreen.xml')))
self.darkpink.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_pink.xml')))
self.lightping.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_pink.xml')))
self.darkpurple.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_purple.xml')))
self.lightpurple.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_purple.xml')))
self.darkred.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_red.xml')))
self.lightred.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_red.xml')))
self.darkteal.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_teal.xml')))
self.lightteal.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_teal.xml')))
self.darkyellow.triggered.connect(lambda: self.__change_theme(style_sheets.index('dark_yellow.xml')))
self.lightyellow.triggered.connect(lambda: self.__change_theme(style_sheets.index('light_yellow.xml')))
self.actionInput_system.triggered.connect(self.__input_system)
def __input_system(self):
self.window = QtWidgets.QMainWindow()
self.ui = input_system.Ui_input_system()
self.ui.setupUi(self.window)
self.window.show()
def __change_theme(self, number: int):
"""AI is creating summary for change_theme
Args:
number (int): [description]
"""
with open('config_theme', 'w') as file:
file.write(str(number))
apply_stylesheet(self, theme=style_sheets[number])
print('TEST')
| 45.869565 | 119 | 0.667536 | 495 | 4,220 | 5.379798 | 0.193939 | 0.082614 | 0.114157 | 0.175742 | 0.450995 | 0.408186 | 0.388659 | 0.388659 | 0.388659 | 0.388659 | 0 | 0.001499 | 0.209479 | 4,220 | 91 | 120 | 46.373626 | 0.796763 | 0.104265 | 0 | 0 | 0 | 0 | 0.151295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.080645 | 0 | 0.16129 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.