hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d40426202b9b155286b4717780f009edeeea81e4 | 634 | py | Python | tgadmin/santabot/migrations/0004_alter_event_last_register_date_and_more.py | c-Door-in/secret-santa-bot | 1b1e71f76f5673a0b07d55307fa9c3dae84f24c5 | [
"MIT"
] | null | null | null | tgadmin/santabot/migrations/0004_alter_event_last_register_date_and_more.py | c-Door-in/secret-santa-bot | 1b1e71f76f5673a0b07d55307fa9c3dae84f24c5 | [
"MIT"
] | null | null | null | tgadmin/santabot/migrations/0004_alter_event_last_register_date_and_more.py | c-Door-in/secret-santa-bot | 1b1e71f76f5673a0b07d55307fa9c3dae84f24c5 | [
"MIT"
] | 1 | 2021-12-22T13:19:52.000Z | 2021-12-22T13:19:52.000Z | # Generated by Django 4.0 on 2021-12-24 20:48
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('santabot', '0003_alter_event_last_register_date_and_more'),
]
operations = [
migrations.AlterField(
model_name='event',
name='last_register_date',
field=models.DateTimeField(verbose_name='Последний день регистрации'),
),
migrations.AlterField(
model_name='event',
name='sending_date',
field=models.DateTimeField(verbose_name='Дата отправки подарка'),
),
]
| 26.416667 | 82 | 0.626183 | 65 | 634 | 5.892308 | 0.646154 | 0.062663 | 0.083551 | 0.151436 | 0.402089 | 0.402089 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 0.271293 | 634 | 23 | 83 | 27.565217 | 0.790043 | 0.067823 | 0 | 0.352941 | 1 | 0 | 0.235993 | 0.074703 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d408e83d104994c7b56a60993f81203af1d2e4b2 | 2,491 | py | Python | envs/babyai/oracle/xy_corrections.py | AliengirlLiv/babyai | 51421ee11538bf110c5b2d0c84a15f783d854e7d | [
"MIT"
] | 2 | 2022-02-24T08:47:48.000Z | 2022-03-23T09:44:22.000Z | envs/babyai/oracle/xy_corrections.py | AliengirlLiv/babyai | 51421ee11538bf110c5b2d0c84a15f783d854e7d | [
"MIT"
] | null | null | null | envs/babyai/oracle/xy_corrections.py | AliengirlLiv/babyai | 51421ee11538bf110c5b2d0c84a15f783d854e7d | [
"MIT"
] | 1 | 2021-12-27T19:03:38.000Z | 2021-12-27T19:03:38.000Z | import numpy as np
import pickle as pkl
from envs.babyai.oracle.teacher import Teacher
class XYCorrections(Teacher):
def __init__(self, *args, **kwargs):
super(XYCorrections, self).__init__(*args, **kwargs)
self.next_state_coords = self.empty_feedback()
def empty_feedback(self):
"""
Return a tensor corresponding to no feedback.
"""
return np.zeros(8) - 1
def random_feedback(self):
"""
        Return a tensor corresponding to random feedback.
"""
return np.random.uniform(0, 1, size=8)
def compute_feedback(self, oracle, last_action=-1):
"""
        Return the (normalized) coordinates of the state reached by stepping the oracle ahead.
"""
# Copy so we don't mess up the state of the real oracle
oracle_copy = pkl.loads(pkl.dumps(oracle))
self.step_ahead(oracle_copy, last_action=last_action)
return np.concatenate([self.next_state_coords])
    # TODO: THIS IS NOT IMPLEMENTED FOR THIS TEACHER! IF WE END UP USING THIS METRIC, WE SHOULD MAKE IT CORRECT!
def success_check(self, state, action, oracle):
return True
def step_ahead(self, oracle, last_action=-1):
env = oracle.mission
# Remove teacher so we don't end up with a recursion error
env.teacher = None
try:
curr_coords = np.concatenate([env.agent_pos, [env.agent_dir, int(env.carrying is not None)]]).astype(
np.float32)
self.next_state, next_state_coords, _, _ = self.step_away_state(oracle, self.cartesian_steps,
last_action=last_action)
# Coords are quite large, so normalize them to between [-1, 1]
self.next_state_coords = next_state_coords.astype(np.float32)
self.next_state_coords[:2] = (self.next_state_coords[:2].astype(np.float32) - 12) / 12
curr_coords[:2] = (curr_coords[:2] - 12) / 6
self.next_state_coords = np.concatenate([self.next_state_coords, curr_coords])
# Also normalize direction
self.next_state_coords[2] = self.next_state_coords[2] - 2
self.next_state_coords[6] = self.next_state_coords[6] - 2
except Exception as e:
print("STEP AWAY FAILED XY!", e)
self.next_state = self.next_state * 0
self.next_state_coords = self.empty_feedback()
self.last_step_error = True
return oracle
| 42.220339 | 113 | 0.620233 | 328 | 2,491 | 4.503049 | 0.338415 | 0.103588 | 0.132024 | 0.154367 | 0.323629 | 0.253893 | 0.181449 | 0.132701 | 0.132701 | 0.132701 | 0 | 0.01855 | 0.285829 | 2,491 | 58 | 114 | 42.948276 | 0.811692 | 0.179847 | 0 | 0.054054 | 0 | 0 | 0.010157 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.162162 | false | 0 | 0.081081 | 0.027027 | 0.405405 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d409828d34c91d7940cba40bf708e67a3916a479 | 3,785 | py | Python | python_tools/utils.py | ultimatezen/felix | 5a7ad298ca4dcd5f1def05c60ae3c84519ec54c4 | [
"MIT"
] | null | null | null | python_tools/utils.py | ultimatezen/felix | 5a7ad298ca4dcd5f1def05c60ae3c84519ec54c4 | [
"MIT"
] | null | null | null | python_tools/utils.py | ultimatezen/felix | 5a7ad298ca4dcd5f1def05c60ae3c84519ec54c4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from loc import (we_are_frozen,
module_path)
import os
import datetime
import time
import loc
import traceback
import sys
from win32com.client import Dispatch
logfile = sys.stdout
def determine_redirect(filename):
"""
Determine where to redirect stdout
"""
global logfile
try:
logfile = open(get_log_filename(filename), "w")
except:
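        # Fallback: if the log file cannot be opened, silently swallow all logging calls.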
class DevNull(object):
def __getattr__(self, attr):
return lambda *kwds, **kwargs : None
logfile = DevNull()
def get_now():
return datetime.datetime(*time.localtime()[:6]).strftime("%Y-%m-%d %H:%M:%S")
def log(severity, msg):
if isinstance(msg, unicode):
msg = msg.encode("utf-8")
logfile.write("%s\t%s\t%s\n" % (severity, get_now(), msg))
logfile.flush()
def debug(msg):
log("INFO", msg)
def warn(msg):
log("WARN", msg)
def error(msg):
log("ERROR", msg)
def log_err(msg=None):
"""
Log an error from a traceback.
We have to do a repr, because we don't know what
the encoding of the traceback will be.
(Is that true?)
"""
if msg:
error(msg)
lines = []
for line in traceback.format_exc().splitlines():
try:
lines.append(line.decode(sys.getfilesystemencoding()).encode("utf-8"))
except:
lines.append(repr(line))
error("\n".join(lines))
def serialized_source(workbook):
"""
Get the filename for a history file
"""
return workbook.FullName + u".fhist"
def get_local_app_data_folder():
"""
This is the folder where we will save the log
"""
appdata_folder = loc.get_local_appdata()
return os.path.join(appdata_folder, u"Felix")
def get_log_filename(filename):
"""
Get the filename for logging.
"""
if we_are_frozen():
basepath = get_local_app_data_folder()
else:
basepath = module_path()
basepath = os.path.join(basepath, u'logs')
if not os.path.isdir(basepath):
os.makedirs(basepath)
return os.path.join(basepath, filename)
class FelixObject(object):
def __init__(self, felix=None):
self._felix = None
self.App2 = None
if felix:
self._felix = felix
else:
self.init_felix()
def init_felix(self):
self._felix = Dispatch("Felix.App")
self.App2 = self._felix.App2
def ensure_felix(self):
try:
self._felix.Visible = True
except:
self.init_felix()
self._felix.Visible = True
def get_record(self):
self.ensure_felix()
return self._felix.App2.CurrentMatch.Record
def ReviewTranslation(self, recid, source, trans):
self.ensure_felix()
return self._felix.App2.ReviewTranslation(recid, source, trans)
def ReflectChanges(self, recid, source, trans):
self.ensure_felix()
return self._felix.App2.ReflectChanges(recid, source, trans)
def LookupTrans(self, trans):
self.ensure_felix()
return self._felix.LookupTrans(trans)
def CorrectTrans(self, trans):
self.ensure_felix()
return self._felix.CorrectTrans(trans)
class TransUnit(object):
"""
A translation unit
"""
def __init__(self, recid, source, trans):
self.recid = recid
self.source = source
self.trans = trans
def __repr__(self):
return u"<TransUnit (%d): %s - %s>" % (self.recid,
self.source,
self.trans)
| 23.805031 | 83 | 0.572523 | 444 | 3,785 | 4.734234 | 0.313063 | 0.05138 | 0.03568 | 0.049952 | 0.136061 | 0.104662 | 0.104662 | 0.088487 | 0.05138 | 0.05138 | 0 | 0.004249 | 0.315984 | 3,785 | 158 | 84 | 23.955696 | 0.807648 | 0.084808 | 0 | 0.175258 | 0 | 0 | 0.03247 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216495 | false | 0 | 0.082474 | 0.030928 | 0.443299 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d40b54816ee88b394aeebc54a80db5a3a1d2930a | 2,349 | py | Python | tsai/models/RNNPlus.py | MOREDataset/tsai | 54987a579365ca7722475fff2fc4a24dc054e82c | [
"Apache-2.0"
] | null | null | null | tsai/models/RNNPlus.py | MOREDataset/tsai | 54987a579365ca7722475fff2fc4a24dc054e82c | [
"Apache-2.0"
] | null | null | null | tsai/models/RNNPlus.py | MOREDataset/tsai | 54987a579365ca7722475fff2fc4a24dc054e82c | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: nbs/105_models.RNNPlus.ipynb (unless otherwise specified).
__all__ = ['RNNPlus', 'LSTMPlus', 'GRUPlus']
# Cell
from ..imports import *
from ..utils import *
from ..data.core import *
from .layers import *
# Cell
class _RNNPlus_Base(Module):
def __init__(self, c_in, c_out, seq_len=None, hidden_size=100, n_layers=1, bias=True, rnn_dropout=0, bidirectional=False, fc_dropout=0.,
last_step=True, flatten=False, custom_head=None, y_range=None, **kwargs):
if flatten: assert seq_len, 'you need to enter a seq_len to use flatten=True'
self.ls, self.fl = last_step, flatten
self.rnn = self._cell(c_in, hidden_size, num_layers=n_layers, bias=bias, batch_first=True, dropout=rnn_dropout, bidirectional=bidirectional)
if flatten: self.flatten = Reshape(-1)
# Head
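        # When flattening the full sequence, the head sees seq_len * hidden_size features;
        # otherwise it only sees the (bidirectional) final hidden state.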
self.head_nf = seq_len * hidden_size * (1 + bidirectional) if flatten and not last_step else hidden_size * (1 + bidirectional)
self.c_out = c_out
if custom_head: self.head = custom_head(self.head_nf, c_out) # custom head must have all required kwargs
else: self.head = self.create_head(self.head_nf, c_out, fc_dropout=fc_dropout, y_range=y_range)
def forward(self, x):
x = x.transpose(2,1) # [batch_size x n_vars x seq_len] --> [batch_size x seq_len x n_vars]
output, _ = self.rnn(x) # [batch_size x seq_len x hidden_size * (1 + bidirectional)]
if self.ls: output = output[:, -1] # [batch_size x hidden_size * (1 + bidirectional)]
if self.fl: output = self.flatten(output) # [batch_size x seq_len * hidden_size * (1 + bidirectional)]
if not self.ls and not self.fl: output = output.transpose(2,1)
return self.head(output)
def create_head(self, nf, c_out, fc_dropout=0., y_range=None):
layers = [nn.Dropout(fc_dropout)] if fc_dropout else []
layers += [nn.Linear(self.head_nf, c_out)]
if y_range is not None: layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
class RNNPlus(_RNNPlus_Base):
_cell = nn.RNN
class LSTMPlus(_RNNPlus_Base):
_cell = nn.LSTM
class GRUPlus(_RNNPlus_Base):
_cell = nn.GRU | 48.9375 | 148 | 0.640272 | 341 | 2,349 | 4.167155 | 0.269795 | 0.033779 | 0.038705 | 0.084448 | 0.171006 | 0.137227 | 0.08867 | 0 | 0 | 0 | 0 | 0.011918 | 0.249894 | 2,349 | 48 | 149 | 48.9375 | 0.794552 | 0.16688 | 0 | 0 | 1 | 0 | 0.035421 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 1 | 0.088235 | false | 0 | 0.117647 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d40ed53a9263c70e7ce5de9501d2ca2c5b50b7dc | 4,332 | py | Python | spidermon/contrib/scrapy/runners.py | heylouiz/spidermon | 3ae2c46d1cf5b46efb578798b881264be3e68394 | [
"BSD-3-Clause"
] | 2 | 2019-10-03T16:47:11.000Z | 2022-02-22T11:56:02.000Z | spidermon/contrib/scrapy/runners.py | heylouiz/spidermon | 3ae2c46d1cf5b46efb578798b881264be3e68394 | [
"BSD-3-Clause"
] | 23 | 2019-05-30T20:27:38.000Z | 2019-08-20T07:23:09.000Z | spidermon/contrib/scrapy/runners.py | heylouiz/spidermon | 3ae2c46d1cf5b46efb578798b881264be3e68394 | [
"BSD-3-Clause"
] | 1 | 2022-03-24T03:01:19.000Z | 2022-03-24T03:01:19.000Z | from __future__ import absolute_import
import logging
from spidermon.results.monitor import (
MonitorResult,
actions_step_required,
monitors_step_required,
)
from spidermon.runners import MonitorRunner
from spidermon.utils.text import Message, line, line_title
LOG_MESSAGE_HEADER = "Spidermon"
class SpiderMonitorResult(MonitorResult):
def __init__(self, spider):
super(SpiderMonitorResult, self).__init__()
self.spider = spider
def next_step(self):
super(SpiderMonitorResult, self).next_step()
self.write_title()
def finish_step(self):
super(SpiderMonitorResult, self).finish_step()
self.log_info(line())
if not self.step.successful:
self.write_errors()
self.write_run_footer()
self.write_step_summary()
@monitors_step_required
def addSuccess(self, test):
super(SpiderMonitorResult, self).addSuccess(test)
self.write_item_result(test)
@monitors_step_required
def addError(self, test, error):
super(SpiderMonitorResult, self).addError(test, error)
self.write_item_result(test)
@monitors_step_required
def addFailure(self, test, error):
super(SpiderMonitorResult, self).addFailure(test, error)
self.write_item_result(test)
@monitors_step_required
def addSkip(self, test, reason):
super(SpiderMonitorResult, self).addSkip(test, reason)
self.write_item_result(test, reason)
@monitors_step_required
def addExpectedFailure(self, test, error):
super(SpiderMonitorResult, self).addExpectedFailure(test, error)
self.write_item_result(test)
@monitors_step_required
def addUnexpectedSuccess(self, test):
super(SpiderMonitorResult, self).addUnexpectedSuccess(test)
self.write_item_result(test)
@actions_step_required
def add_action_success(self, action):
super(SpiderMonitorResult, self).add_action_success(action)
self.write_item_result(action)
@actions_step_required
def add_action_skip(self, action, reason):
super(SpiderMonitorResult, self).add_action_skip(action, reason)
self.write_item_result(action, reason)
@actions_step_required
def add_action_error(self, action, error):
super(SpiderMonitorResult, self).add_action_error(action, error)
self.write_item_result(action)
def write_title(self):
self.log_info(line_title(self.step.name))
def write_item_result(self, item, extra=None):
self.log_info(
"%s... %s%s"
% (item.name, self.step[item].status, " (%s)" % extra if extra else "")
)
def write_run_footer(self):
self.log_info(
"{count:d} {item_name}{plural_suffix} in {time:.3f}s".format(
count=self.step.number_of_items,
item_name=self.step.item_result_class.name,
plural_suffix="" if self.step.number_of_items == 1 else "s",
time=self.step.time_taken,
)
)
def write_step_summary(self):
summary = "OK" if self.step.successful else "FAILED"
infos = self.step.get_infos()
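        # Append any non-zero counters, e.g. "FAILED (failures=1, errors=2)".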
if infos and sum(infos.values()):
summary += " (%s)" % ", ".join(
["%s=%s" % (k, v) for k, v in infos.items() if v]
)
self.log_info(summary)
def write_errors(self):
for status in self.step.error_statuses:
for item in self.step.items_for_status(status):
msg = Message()
msg.write_line()
msg.write_bold_separator()
msg.write_line("%s: %s" % (item.status, item.item.name))
msg.write_light_separator()
msg.write(item.error)
self.log_error(msg)
def log_error(self, msg):
self.log(msg, level=logging.ERROR)
def log_info(self, msg):
self.log(msg, level=logging.INFO)
def log(self, msg, level=logging.DEBUG):
self.spider.log("[%s] %s" % (LOG_MESSAGE_HEADER, msg), level=level)
class SpiderMonitorRunner(MonitorRunner):
def __init__(self, spider):
super(SpiderMonitorRunner, self).__init__()
self.spider = spider
def create_result(self):
return SpiderMonitorResult(self.spider)
| 32.328358 | 83 | 0.649354 | 513 | 4,332 | 5.235867 | 0.185185 | 0.111318 | 0.125093 | 0.063663 | 0.396873 | 0.212211 | 0.100149 | 0.078555 | 0.078555 | 0.06143 | 0 | 0.000611 | 0.24446 | 4,332 | 133 | 84 | 32.571429 | 0.820043 | 0 | 0 | 0.207547 | 0 | 0 | 0.025162 | 0.006002 | 0 | 0 | 0 | 0 | 0 | 1 | 0.207547 | false | 0 | 0.04717 | 0.009434 | 0.283019 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d410e56dbc9df1300a47006e894b37f0f670424b | 711 | py | Python | uti.py | shaoxiang-zheng/Branch-and-price-for-one-dimensional-bin-packing | ab919dd12318721aebf02298edeca917400b56b9 | [
"Unlicense"
] | 1 | 2020-10-01T11:59:46.000Z | 2020-10-01T11:59:46.000Z | uti.py | shaoxiang-zheng/Branch-and-price-for-one-dimensional-bin-packing | ab919dd12318721aebf02298edeca917400b56b9 | [
"Unlicense"
] | 2 | 2021-05-06T05:51:12.000Z | 2021-06-15T07:12:55.000Z | uti.py | shaoxiang-zheng/Branch-and-price-for-one-dimensional-bin-packing | ab919dd12318721aebf02298edeca917400b56b9 | [
"Unlicense"
] | 2 | 2020-10-22T14:34:45.000Z | 2021-12-22T00:51:28.000Z | #!/usr/bin/env python
# -*- coding:utf-8 -*-
# @Time: 2020/9/26 9:41
# Author: Zheng Shaoxiang
# @Email: zhengsx95@163.com
# Description:
from enum import Enum
ReducedEpsilon = 1e-5
IntegerEpsilon = 1e-6
ComparisonEpsilon = 1e-5
def is_integer(num):
if abs(round(num) - num) <= IntegerEpsilon:
return True
return False
class Status(Enum):
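    # These values appear to mirror Gurobi's optimization status codes
    # (4 = INF_OR_UNBD is absent).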
LOADED = 1
OPTIMAL = 2
INFEASIBLE = 3
UNBOUNDED = 5
CUTOFF = 6
ITERATION_LIMIT = 7
NODE_LIMIT = 8
TIME_LIMIT = 9
SOLUTION_LIMIT = 10
INTERRUPTED = 11
NUMERIC = 12
SUBOPTIMAL = 13
INPROGRESS = 14
USER_OBJ_LIMIT = 15
if __name__ == '__main__':
pass
| 17.341463 | 48 | 0.59775 | 89 | 711 | 4.606742 | 0.775281 | 0.02439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085193 | 0.30661 | 711 | 40 | 49 | 17.775 | 0.74645 | 0.177215 | 0 | 0 | 0 | 0 | 0.014842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.04 | 0.04 | 0 | 0.76 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d4254f76dfd5ea2d8a20ec85e3bc7b40b5572cba | 244 | py | Python | cupyx/scipy/special/_statistics.py | Pandinosaurus/cupy | c98064928c8242d0c6a07e2c714e6c811f684a4e | [
"MIT"
] | 1 | 2021-05-16T11:52:30.000Z | 2021-05-16T11:52:30.000Z | cupyx/scipy/special/_statistics.py | Pandinosaurus/cupy | c98064928c8242d0c6a07e2c714e6c811f684a4e | [
"MIT"
] | 8 | 2019-02-11T17:20:01.000Z | 2021-09-08T01:14:51.000Z | cupyx/scipy/special/_statistics.py | Pandinosaurus/cupy | c98064928c8242d0c6a07e2c714e6c811f684a4e | [
"MIT"
] | 1 | 2021-01-08T14:16:53.000Z | 2021-01-08T14:16:53.000Z | from cupy import _core
ndtr = _core.create_ufunc(
'cupyx_scipy_ndtr', ('f->f', 'd->d'),
'out0 = normcdf(in0)',
doc='''Cumulative distribution function of normal distribution.
.. seealso:: :meth:`scipy.special.ndtr`
''')
| 20.333333 | 67 | 0.631148 | 30 | 244 | 4.966667 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010152 | 0.192623 | 244 | 11 | 68 | 22.181818 | 0.746193 | 0 | 0 | 0 | 0 | 0 | 0.614754 | 0.106557 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d425877abba4e1d9027f49ed030c5510ffa8f6f2 | 827 | py | Python | zshoes/articles/models.py | andresgz/zshoes | a9c2a11a0059558719cb5694b168022c9a18d24e | [
"BSD-3-Clause"
] | null | null | null | zshoes/articles/models.py | andresgz/zshoes | a9c2a11a0059558719cb5694b168022c9a18d24e | [
"BSD-3-Clause"
] | null | null | null | zshoes/articles/models.py | andresgz/zshoes | a9c2a11a0059558719cb5694b168022c9a18d24e | [
"BSD-3-Clause"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
from zshoes.stores.models import Store
@python_2_unicode_compatible
class Article(models.Model):
"""
Entity that represents the articles of the store
"""
#: Name of the Article
name = models.CharField(max_length=45)
#: Description of the Article
description = models.TextField(null=True, blank=True)
#: Price of the Article
price = models.FloatField()
#: Available articles in shelf
total_in_shelf = models.PositiveIntegerField(default=0)
#: Available articles in vault
total_in_vault = models.PositiveIntegerField(default=0)
#: Store of the article
store = models.ForeignKey(Store)
def __str__(self):
return self.name
| 28.517241 | 61 | 0.732769 | 105 | 827 | 5.580952 | 0.47619 | 0.042662 | 0.081911 | 0.081911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008982 | 0.192261 | 827 | 28 | 62 | 29.535714 | 0.868263 | 0.241838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.285714 | 0.071429 | 0.928571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d425a87d278fdb1d8bc123af1626c4050e1112e2 | 1,465 | py | Python | Cousins_in_Binary_Tree.py | raniyer/Learning-competitive-coding | e79ab7f1cd74bd635ccc1d817db65d8101289c9c | [
"MIT"
] | 2 | 2020-05-16T03:38:32.000Z | 2020-06-04T13:41:28.000Z | Cousins_in_Binary_Tree.py | raniyer/30daysofcode-May-leetcode | e79ab7f1cd74bd635ccc1d817db65d8101289c9c | [
"MIT"
] | null | null | null | Cousins_in_Binary_Tree.py | raniyer/30daysofcode-May-leetcode | e79ab7f1cd74bd635ccc1d817db65d8101289c9c | [
"MIT"
] | null | null | null | """
In a binary tree, the root node is at depth 0, and children of each depth k node are at depth k+1.
Two nodes of a binary tree are cousins if they have the same depth, but have different parents.
We are given the root of a binary tree with unique values, and the values x and y of two different nodes in the tree.
Return true if and only if the nodes corresponding to the values x and y are cousins.
Example 1:
Input: root = [1,2,3,4], x = 4, y = 3
Output: false
Example 2:
Input: root = [1,2,3,null,4,null,5], x = 5, y = 4
Output: true
Example 3:
Input: root = [1,2,3,null,4], x = 2, y = 3
Output: false
Note:
The number of nodes in the tree will be between 2 and 100.
Each node has a unique integer value from 1 to 100.
"""
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, val=0, left=None, right=None):
# self.val = val
# self.left = left
# self.right = right
class Solution:
def isCousins(self, root: TreeNode, x: int, y: int) -> bool:
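        # Helper DFS: returns [parent_node, depth] of the node holding value x
        # (x is guaranteed to exist, so a hit is always found).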
def check(node, mom, x, level):
if not node:
return
if node.val == x:
return [mom, level]
return check(node.left, node, x, level+1) or check(node.right, node, x, level+1)
i = check(root, None, x, 0)
j = check(root, None, y, 0)
        if i[0] != j[0] and i[1] == j[1]:
return True
return False
| 31.170213 | 117 | 0.6 | 254 | 1,465 | 3.444882 | 0.322835 | 0.032 | 0.050286 | 0.037714 | 0.084571 | 0.038857 | 0.038857 | 0 | 0 | 0 | 0 | 0.039767 | 0.296246 | 1,465 | 46 | 118 | 31.847826 | 0.808923 | 0.624573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.571429 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d425fc5115d843da5d542a9a1da650d07eb3e7e7 | 5,901 | py | Python | main.py | Smaniac1/02-Text-adeventure | e6b408cc346c6eb6075392234dc417cb0b2e914a | [
"MIT"
] | null | null | null | main.py | Smaniac1/02-Text-adeventure | e6b408cc346c6eb6075392234dc417cb0b2e914a | [
"MIT"
] | null | null | null | main.py | Smaniac1/02-Text-adeventure | e6b408cc346c6eb6075392234dc417cb0b2e914a | [
"MIT"
] | 2 | 2020-02-04T16:33:35.000Z | 2020-02-07T04:02:38.000Z | #!/usr/bin/env python3
import sys, os, json
import random
# Check to make sure we are running the correct version of Python
assert sys.version_info >= (3,7), "This script requires at least Python 3.7"
# The game and item description files (in the same folder as this script)
game_file = 'game.json'
# Load the contents of the files into the game and items dictionaries. You can largely ignore this
# Sorry it's messy, I'm trying to account for any potential craziness with the file location
def load_files():
try:
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
with open(os.path.join(__location__, game_file)) as json_file: game = json.load(json_file)
return game
except:
print("There was a problem reading either the game or item file.")
os._exit(1)
score = {
"Happiness": 50,
"Unrest": 50,
"Economy": 50,
"Corruption": 50
}
def render(game,current):
c = game[current]
print("\n\nHappiness:", score["Happiness"])
print("Unrest:", score["Unrest"])
print("Economy:", score["Economy"])
print("Corruption:",score["Corruption"])
print(c["name"])
print(c["desc"])
if len(c["exits"]):
print("\nChoose: ")
for p in range(len(c["exits"])):
print("{}. {}".format(p+1, c["exits"][p]["exit"]))
def get_input():
response = input("\nMake a choice: ")
response = response.upper().strip()
return response
def update(game,current,response):
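    # Translate a numeric menu choice into score changes and the next scene key.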
c = game[current]
if response.isdigit():
try:
p = int(response) - 1
score["Happiness"] += c["exits"][p]["happiness"]
score["Unrest"] += c["exits"][p]["unrest"]
score["Economy"] += c["exits"][p]["economy"]
score["Corruption"] += c["exits"][p]["corruption"]
return c["exits"][p]["target"]
except:
return current
return current
# The main function for the game
def main():
current = "INTRO" # The starting location
end_game = ['END'] # Any of the end-game locations
game = load_files()
while True:
if score["Happiness"] <= 0:
print("Your people are unhappy. They won't rise up but instead leave peacefully in search of a new happier life.")
print("Your final scores were:")
print("Happiness:", score["Happiness"])
print("Unrest:", score["Unrest"])
print("Economy:", score["Economy"])
print("Corruption:",score["Corruption"])
break
elif score["Unrest"] >= 100:
print("Your people have had enough. You get thrown out of power and the nation goes back into chaos.")
print("Your final scores were:")
print("Happiness:", score["Happiness"])
print("Unrest:", score["Unrest"])
print("Economy:", score["Economy"])
print("Corruption:",score["Corruption"])
break
elif score["Economy"] <= 0:
print("Your people are to poor. While they might like living in your country, they leave in mass in search of a place were they can make a living and not starve.")
print("Your final scores were:")
print("Happiness:", score["Happiness"])
print("Unrest:", score["Unrest"])
print("Economy:", score["Economy"])
print("Corruption:",score["Corruption"])
break
elif score["Corruption"] >= 100:
print("You let corruption grow right under your nose, with so much corruption you are thrown out of power and a new, worse goverment takes power.")
print("Your final scores were:")
print("Happiness:", score["Happiness"])
print("Unrest:", score["Unrest"])
print("Economy:", score["Economy"])
print("Corruption:",score["Corruption"])
break
else:
render(game,current)
if current in end_game:
print("You've made your decisions!")
print("Your stats ended up looking like this:")
print("Happiness:", score["Happiness"])
print("Unrest:", score["Unrest"])
print("Economy:", score["Economy"])
print("Corruption:",score["Corruption"])
total_score = score["Happiness"] + score["Unrest"] + score["Economy"] + score["Corruption"]
if total_score > 350:
print("Through your efforts you have made a near perfect nation and your people have high hopes for the future")
print("AMAZING VICTORY")
elif total_score > 300:
print("Through your efforts you have made a great nation and your people believe things will only get better from here")
print("GREAT VICTORY")
elif total_score > 250:
print("Through your efforts you have made an good nation but, your people don't fully believe that the nation will become the best")
print("GOOD VICTORY")
elif total_score > 200:
print("Through your efforts you have barely squeezed out a nation and your people don't believe you will make it better, but it's better than what was happening before.")
print("MINOR VICTORY")
else:
print("Despite your efforts your people aren't happy and eventually your are removed from power.")
print("YOU LOSE")
break #break out of the while loop
response = get_input()
if response == "QUIT" or response == "Q":
break #break out of the while loop
current = update(game,current,response)
print("\nThanks for playing!")
# run the main function
if __name__ == '__main__':
main() | 42.76087 | 190 | 0.583799 | 724 | 5,901 | 4.707182 | 0.316298 | 0.04108 | 0.033451 | 0.044014 | 0.298415 | 0.276115 | 0.267312 | 0.241491 | 0.220951 | 0.220951 | 0 | 0.008629 | 0.293001 | 5,901 | 138 | 191 | 42.76087 | 0.808245 | 0.085409 | 0 | 0.367521 | 0 | 0.042735 | 0.390941 | 0 | 0 | 0 | 0 | 0 | 0.008547 | 1 | 0.042735 | false | 0 | 0.017094 | 0 | 0.102564 | 0.42735 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
d42a0af3f17b3dbc0f8ef130a5ff8a3db39f80c0 | 1,860 | py | Python | pydisco/disco/modutil.py | jseppanen/disco | 23ef8badfc7c539672e8834875d9908974b646dc | [
"BSD-3-Clause"
] | 2 | 2016-05-09T17:03:08.000Z | 2016-07-19T11:27:54.000Z | pydisco/disco/modutil.py | jseppanen/disco | 23ef8badfc7c539672e8834875d9908974b646dc | [
"BSD-3-Clause"
] | null | null | null | pydisco/disco/modutil.py | jseppanen/disco | 23ef8badfc7c539672e8834875d9908974b646dc | [
"BSD-3-Clause"
] | null | null | null | import re, struct, sys, os, imp, modulefinder
import functools
from os.path import abspath, dirname
from opcode import opname
from disco.error import ModUtilImportError
def user_paths():
return set(os.getenv('PYTHONPATH', '').split(':') + [''])
def parse_function(function):
if isinstance(function, functools.partial):
return parse_function(function.func)
code = function.func_code
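    # Scan the compiled bytecode for LOAD_GLOBAL immediately followed by
    # LOAD_ATTR; the captured two bytes are the little-endian index into
    # co_names (Python 2 bytecode layout), i.e. globals accessed as module.attr.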
mod = re.compile(r'\x%.2x(..)\x%.2x' % (opname.index('LOAD_GLOBAL'),
opname.index('LOAD_ATTR')), re.DOTALL)
return [code.co_names[struct.unpack('<H', x)[0]] for x in mod.findall(code.co_code)]
def recurse_module(module, path):
finder = modulefinder.ModuleFinder(path=list(user_paths()))
finder.run_script(path)
return dict((name, module.__file__) for name, module in finder.modules.iteritems()
if name != '__main__' and module.__file__)
def locate_modules(modules, recurse=True, include_sys=False):
LOCALDIRS = user_paths()
found = {}
for module in modules:
file, path, x = imp.find_module(module)
if dirname(path) in LOCALDIRS:
found[module] = path
if recurse:
found.update(recurse_module(module, path))
elif include_sys:
found[module] = None
return found.items()
def find_modules(functions, send_modules=True, recurse=True, exclude=['Task']):
modules = set()
for function in functions:
fmod = [m for m in parse_function(function) if m not in exclude]
if send_modules:
try:
m = locate_modules(fmod, recurse, include_sys=True)
except ImportError, e:
raise ModUtilImportError(e, function)
modules.update((k, v) if v else k for k, v in m)
else:
modules.update(fmod)
return list(modules)
| 35.09434 | 88 | 0.63871 | 238 | 1,860 | 4.844538 | 0.37395 | 0.023417 | 0.05464 | 0.039896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002138 | 0.245699 | 1,860 | 52 | 89 | 35.769231 | 0.819672 | 0 | 0 | 0 | 0 | 0 | 0.032796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.159091 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d42e217f0e3b4b173a405214d76e0e8bf0d90bff | 875 | py | Python | djangoq_demo/order_reminder/migrations/0001_initial.py | forance/django-q | 2b42fb173ab374760260692eda5b5445c7121e90 | [
"MIT"
] | null | null | null | djangoq_demo/order_reminder/migrations/0001_initial.py | forance/django-q | 2b42fb173ab374760260692eda5b5445c7121e90 | [
"MIT"
] | null | null | null | djangoq_demo/order_reminder/migrations/0001_initial.py | forance/django-q | 2b42fb173ab374760260692eda5b5445c7121e90 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2016-03-17 12:32
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='orders',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('order_id', models.PositiveIntegerField()),
('order_amount', models.DecimalField(decimal_places=2, max_digits=8)),
('customer', models.CharField(max_length=200, null=True, unique=True, verbose_name=b'Company Name')),
('ship_date', models.DateField(help_text=b'Please use the following format: YYYY/MM/DD.', null=True)),
],
),
]
| 32.407407 | 118 | 0.614857 | 99 | 875 | 5.272727 | 0.727273 | 0.030651 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032159 | 0.253714 | 875 | 26 | 119 | 33.653846 | 0.767228 | 0.076571 | 0 | 0 | 1 | 0 | 0.12795 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d439037ffb5d79c61b4fbc99b0bc04b3e553d76d | 1,493 | py | Python | samples/vis_amass_and_h36m.py | zaverichintan/mocap | 43db09af5be092b73ce5d1df1d1834a865d62441 | [
"MIT"
] | 22 | 2019-04-17T13:42:47.000Z | 2022-03-03T23:48:16.000Z | samples/vis_amass_and_h36m.py | zaverichintan/mocap | 43db09af5be092b73ce5d1df1d1834a865d62441 | [
"MIT"
] | 5 | 2019-04-17T13:57:25.000Z | 2021-04-16T12:31:38.000Z | samples/vis_amass_and_h36m.py | zaverichintan/mocap | 43db09af5be092b73ce5d1df1d1834a865d62441 | [
"MIT"
] | 3 | 2020-08-27T19:32:40.000Z | 2021-02-24T08:52:23.000Z | import sys
sys.path.insert(0, '../')
from mocap.settings import get_amass_validation_files, get_amass_test_files
from mocap.math.amass_fk import rotmat2euclidean, exp2euclidean
from mocap.visualization.sequence import SequenceVisualizer
from mocap.math.mirror_smpl import mirror_p3d
from mocap.datasets.dataset import Limb
from mocap.datasets.combined import Combined
from mocap.datasets.framerate import AdaptFramerate
import mocap.datasets.h36m as H36M
import numpy as np
import numpy.linalg as la
from mocap.datasets.amass import AMASS_SMPL3d, AMASS_QUAT, AMASS_EXP
data_loc = '/mnt/Data/datasets/amass'
val = get_amass_validation_files()
test = get_amass_test_files()
ds = AMASS_SMPL3d(val, data_loc=data_loc)
print(ds.get_joints_for_limb(Limb.LEFT_LEG))
ds = AdaptFramerate(Combined(ds), target_framerate=50)
print(ds.get_joints_for_limb(Limb.LEFT_LEG))
ds_h36m = Combined(H36M.H36M_FixedSkeleton(actors=['S5'], actions=['walking'], remove_global_Rt=True))
seq3d = ds[0]
seq3d_h36m = ds_h36m[0]
seq3d = seq3d[0:200].reshape((200, 14, 3))
seq3d_h36m = seq3d_h36m[0:200].reshape((200, 14, 3))
a = np.array([[[0.4, 0, 0]]])
b = np.array([[[-0.4, 0, 0]]])
seq3d += a
seq3d_h36m += b
vis_dir = '../output/'
vis = SequenceVisualizer(vis_dir, 'vis_amass_vs_h36m',
to_file=True,
mark_origin=False)
vis.plot(seq1=seq3d, seq2=seq3d_h36m, parallel=True,
create_video=True,
noaxis=False,
plot_jid=False,
) | 27.145455 | 102 | 0.734092 | 226 | 1,493 | 4.628319 | 0.376106 | 0.068834 | 0.06501 | 0.043977 | 0.122371 | 0.122371 | 0.068834 | 0.068834 | 0.068834 | 0.068834 | 0 | 0.058824 | 0.146015 | 1,493 | 55 | 103 | 27.145455 | 0.761569 | 0 | 0 | 0.052632 | 0 | 0 | 0.042169 | 0.016064 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.315789 | 0 | 0.315789 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d43e75fb89304bc9b1ab29d6bf1e74be0ea2345f | 1,222 | py | Python | review/tests/test_review_model.py | hossainchisty/Multi-Vendor-eCommerce | 42c5f62b8b098255cc9ea57858d3cc7de94bd76a | [
"MIT"
] | 16 | 2021-09-22T19:08:28.000Z | 2022-03-18T18:57:02.000Z | review/tests/test_review_model.py | hossainchisty/Multi-Vendor-eCommerce | 42c5f62b8b098255cc9ea57858d3cc7de94bd76a | [
"MIT"
] | 6 | 2021-09-30T12:36:02.000Z | 2022-03-18T22:18:00.000Z | review/tests/test_review_model.py | hossainchisty/Multi-Vendor-eCommerce | 42c5f62b8b098255cc9ea57858d3cc7de94bd76a | [
"MIT"
] | 6 | 2021-12-06T02:04:51.000Z | 2022-03-13T14:38:14.000Z | from django.test import TestCase
from review.models import Review
class TestReviewModel(TestCase):
'''
    Test suite for the review model.
'''
def setUp(self):
'''
Set up test data for the review model.
'''
Review.objects.create(
feedback='Test review',
riderReview='Test review content',
)
def tearDown(self):
'''
Clean up test data for the review model.
'''
Review.objects.all().delete()
def test_review_feedback(self):
'''
Test review model for feedback.
'''
review = Review.objects.get(feedback='Test review')
self.assertEqual(review.feedback, 'Test review')
def test_review_rider_review(self):
'''
Test review model for rider review.
'''
review = Review.objects.get(riderReview='Test review content')
self.assertEqual(review.riderReview, 'Test review content')
def test_review_verbose_name_plural(self):
'''
Test review model for verbose name plural.
'''
self.assertEqual(str(Review._meta.verbose_name_plural), 'Customer feedback')
| 27.772727 | 85 | 0.58347 | 128 | 1,222 | 5.476563 | 0.296875 | 0.171184 | 0.077033 | 0.119829 | 0.313837 | 0.219686 | 0.114123 | 0.114123 | 0.114123 | 0 | 0 | 0 | 0.317512 | 1,222 | 43 | 86 | 28.418605 | 0.840528 | 0.180851 | 0 | 0 | 0 | 0 | 0.129383 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.277778 | false | 0 | 0.111111 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d43edfd68011c8d5534ee2b154094bc55dd24e36 | 146 | py | Python | app.py | BodhisattwaMandal/wembedder | dd384db9a021cb7e3682f4153226d5fe53d05c9f | [
"Apache-2.0",
"MIT"
] | 42 | 2017-08-15T20:51:40.000Z | 2021-07-01T15:33:58.000Z | app.py | BodhisattwaMandal/wembedder | dd384db9a021cb7e3682f4153226d5fe53d05c9f | [
"Apache-2.0",
"MIT"
] | 17 | 2017-07-07T17:46:34.000Z | 2021-07-31T19:54:03.000Z | app.py | BodhisattwaMandal/wembedder | dd384db9a021cb7e3682f4153226d5fe53d05c9f | [
"Apache-2.0",
"MIT"
] | 11 | 2017-09-10T09:02:14.000Z | 2021-07-02T08:50:46.000Z | """Script to start webserving."""
from wembedder.app import create_app
app = create_app()
if __name__ == '__main__':
app.run(debug=True)
| 13.272727 | 36 | 0.691781 | 20 | 146 | 4.55 | 0.75 | 0.197802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171233 | 146 | 10 | 37 | 14.6 | 0.752066 | 0.184932 | 0 | 0 | 0 | 0 | 0.070796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d442c238e8821d0cdc0c3f3b94495eca939a45ec | 2,987 | py | Python | app/routes.py | mattkantor/basic-flask-app | ec893ca44b1c9c4772c24b81394b58644fefd29a | [
"MIT"
] | null | null | null | app/routes.py | mattkantor/basic-flask-app | ec893ca44b1c9c4772c24b81394b58644fefd29a | [
"MIT"
] | null | null | null | app/routes.py | mattkantor/basic-flask-app | ec893ca44b1c9c4772c24b81394b58644fefd29a | [
"MIT"
] | null | null | null | from app.api.feed import FeedController
from app.api.follows import FollowController
from .api import apiv1, app_routes
from .api.news import *
from .api.user import *
from .api.group import *
from .api.auth import get_auth_token, register
class Route():
def __init__(self):
        '''No instance state; routes are registered via the static build() method.'''
@staticmethod
def build(apiv1):
        apiv1.add_url_rule('/news', 'index', NewsController.index, methods=['GET'])
apiv1.add_url_rule('/news', 'create', NewsController.create, methods=['POST'])
        apiv1.add_url_rule('/news_feed', 'full_user_news_feed', NewsController.full_news_feed,
                   methods=['GET'])
app_routes.add_url_rule('/index.rss', 'rss_Feed', FeedController.rss,
methods=['GET'])
apiv1.add_url_rule('/public_feed', 'public_feed', NewsController.public_feed,
methods=['GET'])
apiv1.add_url_rule('/me', 'me', UserController.me, methods=['GET'])
apiv1.add_url_rule('/users', 'put', UserController.update, methods=['POST'])
apiv1.add_url_rule('/users/<string:username>', 'show', UserController.show, methods=['GET'])
apiv1.add_url_rule('/users/search', 'search', UserController.search, methods=['GET'])
apiv1.add_url_rule('/users/<string:uuid>/feed', 'user_news_feed', NewsController.user_news_feed, methods=['GET'])
apiv1.add_url_rule('/follow/<string:username>', 'follow', FollowController.follow, methods=['GET'])
apiv1.add_url_rule('/unfollow/<string:username>', 'unfollow', FollowController.unfollow, methods=['GET'])
apiv1.add_url_rule('/followers', 'followers', FollowController.followers, methods=['GET'])
apiv1.add_url_rule('/following', 'following', FollowController.following, methods=['GET'])
apiv1.add_url_rule('/groups', 'groups', GroupController.index, methods=['GET'])
apiv1.add_url_rule('/groups', 'create_group', GroupController.create, methods=['POST'])
apiv1.add_url_rule('/groups', 'update_group', GroupController.update, methods=['PUT'])
apiv1.add_url_rule('/groups', 'delete_group', GroupController.index, methods=['DELETE'])
apiv1.add_url_rule('/groups/<string:uuid>', 'show_group', GroupController.show, methods=['GET'])
apiv1.add_url_rule('/groups/<string:uuid>/add_user/<string:user_uuid>', 'add_user_to_group', GroupController.add_user, methods=['GET'])
apiv1.add_url_rule('/groups/<string:uuid>/del_user/<string:user_uuid>', 'del_user_from_group',
GroupController.remove_user, methods=['GET'])
        apiv1.add_url_rule('/get_auth_token', 'get_auth_token', get_auth_token, methods=['POST'])
        apiv1.add_url_rule('/register', 'register', register, methods=['POST'])
apiv1.add_url_rule('/feed', 'feed', FeedController.index, methods=['GET'])
apiv1.add_url_rule('/feed', 'search_feed', FeedController.index, methods=['POST'])
return apiv1 | 50.627119 | 143 | 0.667559 | 358 | 2,987 | 5.304469 | 0.156425 | 0.078989 | 0.131648 | 0.189573 | 0.413902 | 0.381253 | 0.25961 | 0.043181 | 0.043181 | 0 | 0 | 0.010835 | 0.165718 | 2,987 | 59 | 144 | 50.627119 | 0.751204 | 0 | 0 | 0.073171 | 0 | 0 | 0.229712 | 0.073776 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0 | 0.170732 | 0 | 0.268293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d448253fd36b0f18b3b060f1981126a0797ec2c8 | 1,598 | py | Python | main.py | LN-24111/RPC_Sim | 976dcc7f2acd533fe788347be3ed875ae418d766 | [
"MIT"
] | null | null | null | main.py | LN-24111/RPC_Sim | 976dcc7f2acd533fe788347be3ed875ae418d766 | [
"MIT"
] | null | null | null | main.py | LN-24111/RPC_Sim | 976dcc7f2acd533fe788347be3ed875ae418d766 | [
"MIT"
] | null | null | null | from tournament import *
from strategies import *
resultSetPoints = {}
resultSetWins = {}
observer = Documenter()
for i in range(100):
if i % 100 == 0:
        print(i // 100)
participants = []
# participants.append(WaPlayer1())
# participants.append(Adam())
# participants.append(Rock())
# participants.append(Paper())
# participants.append(BaseStrategy())
# participants.append(Rand())
participants.append(Player2())
participants.append(Player3())
participants.append(Player4())
# participants.append(Player5())
participants.append(Player6())
participants.append(Player1())
participants.append(Player7())
participants.append(Player8())
# participants.append(Player9())
participants.append(Player10())
    tournament = Tournament(*participants, observers=[observer], logging=False)
result = tournament.executeGame()
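    # 2520 is the least common multiple of 1..10, presumably so the points
    # divide evenly among any plausible number of tied players.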
points = 2520 // len(result)
for p in result:
if p in resultSetWins:
resultSetPoints[p] += points
resultSetWins[p] += 1
else:
resultSetPoints[p] = points
resultSetWins[p] = 1
def resultSetToString(r, format):
retVal = ''
retVal += f"Cumulative {format}:\n"
for player, performance in r.items():
retVal += f"{player}: {performance} {format}\n"
retVal += '\n'
return retVal
log = open('cumulative.txt', "w", encoding="utf-8")
log.write(resultSetToString(resultSetWins, 'wins'))
log.write(resultSetToString(resultSetPoints, 'points'))
log.write(observer.toString())
log.close()
print(resultSetToString(resultSetWins, 'wins'))
print(resultSetToString(resultSetPoints, 'points'))
print(observer.toString())
input("Enter any key to quit.") | 27.084746 | 80 | 0.721527 | 173 | 1,598 | 6.66474 | 0.427746 | 0.249783 | 0.038161 | 0.060711 | 0.06418 | 0.06418 | 0 | 0 | 0 | 0 | 0 | 0.020685 | 0.122653 | 1,598 | 59 | 81 | 27.084746 | 0.801712 | 0.152065 | 0 | 0 | 0 | 0 | 0.089087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.046512 | 0 | 0.093023 | 0.093023 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4484ad6bf63fa89fa298eca8eea4369a0042fe5 | 7,795 | py | Python | npc/gui/uis/new_character.py | Arent128/npc | c8a1e227a1d4d7c540c4f4427b611ffc290535ee | [
"MIT"
] | null | null | null | npc/gui/uis/new_character.py | Arent128/npc | c8a1e227a1d4d7c540c4f4427b611ffc290535ee | [
"MIT"
] | null | null | null | npc/gui/uis/new_character.py | Arent128/npc | c8a1e227a1d4d7c540c4f4427b611ffc290535ee | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'npc/gui/uis/new_character.ui'
#
# Created by: PyQt5 UI code generator 5.7.1
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_NewCharacterDialog(object):
def setupUi(self, NewCharacterDialog):
NewCharacterDialog.setObjectName("NewCharacterDialog")
NewCharacterDialog.resize(450, 432)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.MinimumExpanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(NewCharacterDialog.sizePolicy().hasHeightForWidth())
NewCharacterDialog.setSizePolicy(sizePolicy)
NewCharacterDialog.setMinimumSize(QtCore.QSize(450, 382))
NewCharacterDialog.setModal(True)
self.verticalLayout = QtWidgets.QVBoxLayout(NewCharacterDialog)
self.verticalLayout.setObjectName("verticalLayout")
self.infoForm = QtWidgets.QFormLayout()
self.infoForm.setSizeConstraint(QtWidgets.QLayout.SetNoConstraint)
self.infoForm.setObjectName("infoForm")
self.typeLabel = QtWidgets.QLabel(NewCharacterDialog)
self.typeLabel.setObjectName("typeLabel")
self.infoForm.setWidget(0, QtWidgets.QFormLayout.LabelRole, self.typeLabel)
self.typeSelect = QtWidgets.QComboBox(NewCharacterDialog)
self.typeSelect.setObjectName("typeSelect")
self.infoForm.setWidget(0, QtWidgets.QFormLayout.FieldRole, self.typeSelect)
self.nameLine = QtWidgets.QLabel(NewCharacterDialog)
self.nameLine.setObjectName("nameLine")
self.infoForm.setWidget(1, QtWidgets.QFormLayout.LabelRole, self.nameLine)
self.characterName = QtWidgets.QLineEdit(NewCharacterDialog)
self.characterName.setObjectName("characterName")
self.infoForm.setWidget(1, QtWidgets.QFormLayout.FieldRole, self.characterName)
self.groupLabel = QtWidgets.QLabel(NewCharacterDialog)
self.groupLabel.setObjectName("groupLabel")
self.infoForm.setWidget(2, QtWidgets.QFormLayout.LabelRole, self.groupLabel)
self.groupName = QtWidgets.QLineEdit(NewCharacterDialog)
self.groupName.setObjectName("groupName")
self.infoForm.setWidget(2, QtWidgets.QFormLayout.FieldRole, self.groupName)
self.locLabel = QtWidgets.QLabel(NewCharacterDialog)
self.locLabel.setObjectName("locLabel")
self.infoForm.setWidget(3, QtWidgets.QFormLayout.LabelRole, self.locLabel)
self.locName = QtWidgets.QLineEdit(NewCharacterDialog)
self.locName.setObjectName("locName")
self.infoForm.setWidget(3, QtWidgets.QFormLayout.FieldRole, self.locName)
self.verticalLayout.addLayout(self.infoForm)
self.foreignBox = QtWidgets.QGroupBox(NewCharacterDialog)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.foreignBox.sizePolicy().hasHeightForWidth())
self.foreignBox.setSizePolicy(sizePolicy)
self.foreignBox.setMinimumSize(QtCore.QSize(0, 71))
self.foreignBox.setCheckable(True)
self.foreignBox.setChecked(False)
self.foreignBox.setObjectName("foreignBox")
self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.foreignBox)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.foreignText = QtWidgets.QLineEdit(self.foreignBox)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.foreignText.sizePolicy().hasHeightForWidth())
self.foreignText.setSizePolicy(sizePolicy)
self.foreignText.setClearButtonEnabled(True)
self.foreignText.setObjectName("foreignText")
self.verticalLayout_2.addWidget(self.foreignText)
self.verticalLayout.addWidget(self.foreignBox)
self.deceasedBox = QtWidgets.QGroupBox(NewCharacterDialog)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.deceasedBox.sizePolicy().hasHeightForWidth())
self.deceasedBox.setSizePolicy(sizePolicy)
self.deceasedBox.setMinimumSize(QtCore.QSize(0, 116))
self.deceasedBox.setCheckable(True)
self.deceasedBox.setChecked(False)
self.deceasedBox.setObjectName("deceasedBox")
self.verticalLayout_3 = QtWidgets.QVBoxLayout(self.deceasedBox)
self.verticalLayout_3.setObjectName("verticalLayout_3")
self.deceasedText = QtWidgets.QPlainTextEdit(self.deceasedBox)
self.deceasedText.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.deceasedText.setObjectName("deceasedText")
self.verticalLayout_3.addWidget(self.deceasedText)
self.verticalLayout.addWidget(self.deceasedBox)
self.buttonBox = QtWidgets.QDialogButtonBox(NewCharacterDialog)
self.buttonBox.setStandardButtons(QtWidgets.QDialogButtonBox.Cancel|QtWidgets.QDialogButtonBox.Ok)
self.buttonBox.setObjectName("buttonBox")
self.verticalLayout.addWidget(self.buttonBox)
self.typeLabel.setBuddy(self.typeSelect)
self.nameLine.setBuddy(self.characterName)
self.groupLabel.setBuddy(self.groupName)
self.retranslateUi(NewCharacterDialog)
self.buttonBox.accepted.connect(NewCharacterDialog.accept)
self.buttonBox.rejected.connect(NewCharacterDialog.reject)
QtCore.QMetaObject.connectSlotsByName(NewCharacterDialog)
NewCharacterDialog.setTabOrder(self.typeSelect, self.characterName)
NewCharacterDialog.setTabOrder(self.characterName, self.groupName)
NewCharacterDialog.setTabOrder(self.groupName, self.locName)
NewCharacterDialog.setTabOrder(self.locName, self.foreignBox)
NewCharacterDialog.setTabOrder(self.foreignBox, self.foreignText)
NewCharacterDialog.setTabOrder(self.foreignText, self.deceasedBox)
NewCharacterDialog.setTabOrder(self.deceasedBox, self.deceasedText)
def retranslateUi(self, NewCharacterDialog):
_translate = QtCore.QCoreApplication.translate
NewCharacterDialog.setWindowTitle(_translate("NewCharacterDialog", "New Character"))
self.typeLabel.setText(_translate("NewCharacterDialog", "T&ype"))
self.typeSelect.setToolTip(_translate("NewCharacterDialog", "Type of character. Determines which fields are available."))
self.nameLine.setText(_translate("NewCharacterDialog", "&Name"))
self.characterName.setToolTip(_translate("NewCharacterDialog", "The character\'s name. Use \' - \' to add a brief note."))
self.groupLabel.setText(_translate("NewCharacterDialog", "&Group"))
self.groupName.setToolTip(_translate("NewCharacterDialog", "Main group that the character belongs to"))
self.locLabel.setText(_translate("NewCharacterDialog", "Location"))
self.locName.setToolTip(_translate("NewCharacterDialog", "Place where the character lives within the main setting"))
self.foreignBox.setTitle(_translate("NewCharacterDialog", "Fore&ign"))
self.foreignText.setPlaceholderText(_translate("NewCharacterDialog", "Where do they live?"))
self.deceasedBox.setTitle(_translate("NewCharacterDialog", "&Deceased"))
self.deceasedText.setPlaceholderText(_translate("NewCharacterDialog", "How did they die?"))
| 61.377953 | 130 | 0.751123 | 684 | 7,795 | 8.524854 | 0.236842 | 0.064826 | 0.028812 | 0.026754 | 0.178357 | 0.178357 | 0.120734 | 0.120734 | 0.046819 | 0 | 0 | 0.007407 | 0.151379 | 7,795 | 126 | 131 | 61.865079 | 0.874074 | 0.025401 | 0 | 0.070175 | 1 | 0 | 0.096192 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.008772 | 0 | 0.035088 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d44927253825e1c5d306c639c38985c68db197c9 | 2,111 | py | Python | tests/integration/test_dataframe_logging.py | mbseid/rubicon | d48d34e4b6580ef29bb8dbe6a8ed473b954af087 | [
"Apache-2.0"
] | null | null | null | tests/integration/test_dataframe_logging.py | mbseid/rubicon | d48d34e4b6580ef29bb8dbe6a8ed473b954af087 | [
"Apache-2.0"
] | null | null | null | tests/integration/test_dataframe_logging.py | mbseid/rubicon | d48d34e4b6580ef29bb8dbe6a8ed473b954af087 | [
"Apache-2.0"
] | null | null | null | import pandas as pd
import pytest
from dask import dataframe as dd
from rubicon.exceptions import RubiconException
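# Integration tests for rubicon's dataframe logging: round-trip a pandas and a
# dask dataframe through project.log_dataframe(), and check the error raised
# when a logged dask dataframe is read back without df_type="dask".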
def test_pandas_df(rubicon_local_filesystem_client):
rubicon = rubicon_local_filesystem_client
project = rubicon.create_project("Dataframe Test Project")
multi_index_df = pd.DataFrame(
[[0, 1, "a"], [1, 1, "b"], [2, 2, "c"], [3, 2, "d"]], columns=["a", "b", "c"]
)
multi_index_df = multi_index_df.set_index(["b", "a"])
written_dataframe = project.log_dataframe(multi_index_df)
read_dataframes = project.dataframes()
read_dataframe = read_dataframes[0]
assert len(read_dataframes) == 1
assert read_dataframe.id == written_dataframe.id
assert read_dataframe.get_data().equals(multi_index_df)
def test_dask_df(rubicon_local_filesystem_client):
rubicon = rubicon_local_filesystem_client
project = rubicon.create_project("Dataframe Test Project")
ddf = dd.from_pandas(pd.DataFrame([0, 1], columns=["a"]), npartitions=1)
written_dataframe = project.log_dataframe(ddf)
read_dataframes = project.dataframes()
read_dataframe = read_dataframes[0]
assert len(read_dataframes) == 1
assert read_dataframe.id == written_dataframe.id
assert read_dataframe.get_data(df_type="dask").compute().equals(ddf.compute())
def test_df_read_error(rubicon_local_filesystem_client):
rubicon = rubicon_local_filesystem_client
project = rubicon.create_project("Dataframe Test Project")
ddf = dd.from_pandas(pd.DataFrame([0, 1], columns=["a"]), npartitions=1)
written_dataframe = project.log_dataframe(ddf)
read_dataframes = project.dataframes()
read_dataframe = read_dataframes[0]
assert len(read_dataframes) == 1
assert read_dataframe.id == written_dataframe.id
# simulate user forgetting to set `df_type` to `dask` when reading a logged dask df
with pytest.raises(RubiconException) as e:
read_dataframe.get_data()
assert (
"This might have happened if you forgot to set `df_type='dask'` when trying to read a `dask` dataframe."
in str(e)
)
| 31.044118 | 112 | 0.720985 | 285 | 2,111 | 5.080702 | 0.235088 | 0.087017 | 0.09116 | 0.116022 | 0.662293 | 0.638122 | 0.638122 | 0.638122 | 0.638122 | 0.638122 | 0 | 0.011435 | 0.171483 | 2,111 | 67 | 113 | 31.507463 | 0.816467 | 0.03837 | 0 | 0.52381 | 0 | 0.02381 | 0.090237 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.071429 | false | 0 | 0.095238 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d449f83e2eb0a2d2d6d61b2e3a88d0cfd7006157 | 39,855 | py | Python | 820.py | tsbxmw/leetcode | e751311b8b5f2769874351717a22c35c19b48a36 | [
"MIT"
] | null | null | null | 820.py | tsbxmw/leetcode | e751311b8b5f2769874351717a22c35c19b48a36 | [
"MIT"
] | null | null | null | 820.py | tsbxmw/leetcode | e751311b8b5f2769874351717a22c35c19b48a36 | [
"MIT"
] | null | null | null | # Given a list of words, we encode it as an index string S plus an index list A.
# For example, if the list is ["time", "me", "bell"], we can represent it as
# S = "time#bell#" and indexes = [0, 2, 5].
# For each index, we can recover a word by reading characters from that position
# in S until we reach a "#".
# What is the length of the shortest index string S that encodes the given word list?
#
# Example:
# Input: words = ["time", "me", "bell"]
# Output: 10
# Explanation: S = "time#bell#", indexes = [0, 2, 5].
#
# Constraints:
# 1 <= words.length <= 2000
# 1 <= words[i].length <= 7
# Every word consists of lowercase letters only.
# Source: LeetCode (力扣)
# Link: https://leetcode-cn.com/problems/short-encoding-of-words
# Copyright belongs to LeetCode. Commercial reprints require official
# authorization; non-commercial reprints must cite the source.
# Approach: deduplicate first,
# then loop and discard every word that is a suffix of another word.
class Solution:
def minimumLengthEncoding(self, words):
good = set(words)
for word in words:
for k in range(1, len(word)):
good.discard(word[k:])
return sum(len(word) + 1 for word in good)
# Alternative: repeatedly keep the longest remaining word and drop every
# remaining word that is a suffix of it.
class Solution1:
    def minimumLengthEncoding(self, words) -> int:
        words = list(set(words))
        if not words:
            return 0
        words.sort(key=len)  # ascending, so pop() yields the longest remaining word
        temp = [words.pop()]
        now = 0
        while words:
            # iterate over a copy: removing from a list while iterating it skips items
            for word in list(words):
                if temp[now].endswith(word):
                    words.remove(word)
            if not words:
                break
            temp.append(words.pop())
            now += 1
        return sum(len(word) + 1 for word in temp)
if __name__ == "__main__":
s = Solution()
s.minimumLengthEncoding(["time", "me", "bell", "ime", "me"])
temp = [
"mokgggq",
"pjdislx",
"bfrbsfs",
"hgwqzz",
"bnwxc",
"pzhmyo",
"wbfton",
"evdro",
"uwxuzmn",
"mdwfn",
"rinmw",
"cwvvrea",
"aqyxlev",
"ipypqev",
"cbdhb",
"ynqok",
"lieciy",
"sqhmdh",
"pcotcq",
"vyeqmey",
"gvpbu",
"kvhaag",
"qkaqq",
"mwtmzzs",
"gtywt",
"cnowp",
"ibfdgvp",
"jybmx",
"gseqh",
"yaohcp",
"jgarzaz",
"lgxogb",
"cjjiev",
"tjfbf",
"qwtlx",
"hehmv",
"oergh",
"ovehsf",
"zifrfb",
"tbykq",
"oasqrsw",
"hjmzil",
"fuylmzc",
"zokxci",
"wbyspc",
"cqwsb",
"oftqr",
"wvgtmrq",
"ymfyjm",
"odrnphc",
"mnoms",
"frhelt",
"gokypg",
"yoafppu",
"mmquko",
"klnvy",
"atcfwzv",
"yjmluf",
"hckdblw",
"wreortt",
"osuidhr",
"vmvopqa",
"snilp",
"lpygwbe",
"esqpirj",
"lacnfr",
"dnyehuz",
"qfvuo",
"jvnlky",
"gdnzemt",
"isewa",
"hvmfts",
"nuxsog",
"cckcw",
"bmxtsb",
"ozlilc",
"wmhku",
"uhoni",
"ckkbb",
"uwrakdx",
"kciqov",
"xrpjq",
"lqvbs",
"fyrglkp",
"xfbgq",
"vrojsdw",
"wwivh",
"frgontv",
"fgghrms",
"psxdbxb",
"ezapa",
"lvihja",
"oydcdih",
"ztefj",
"khpoypx",
"llwgyuq",
"heepqf",
"lneold",
"lxcyjrt",
"yrnzmvm",
"kwcluhu",
"qoqbzzu",
"cuwmp",
"qiejx",
"fnqceo",
"myizd",
"thggnqx",
"ixwbbve",
"gjwruu",
"alpglnk",
"zrhmh",
"evkojps",
"gvwol",
"pystdn",
"yhcjrd",
"qtyhucx",
"cwmbh",
"vrlmw",
"bwkntib",
"isyyx",
"bptejfp",
"gctufb",
"lewtr",
"llkwsi",
"rokvhw",
"jwagu",
"axchu",
"llshkne",
"lnrwco",
"ylnkjsu",
"ukdaxm",
"byfnel",
"deecwis",
"xwjjf",
"xwsyfi",
"bvnen",
"supbi",
"dzara",
"qtnyslh",
"zflzqu",
"rfbsz",
"yiwbok",
"kpvpmey",
"aosdked",
"gjogz",
"pwaww",
"qpqhoz",
"avlxwv",
"aakku",
"ykpjq",
"biejhfz",
"ngnmk",
"gucufvo",
"zonyhu",
"pwbnko",
"dianhi",
"svdulhs",
"seaqz",
"tupyev",
"rfsde",
"qgvwnz",
"ijjpsx",
"vwwizu",
"cegwsql",
"snsrb",
"kzarfp",
"xsvwq",
"zdend",
"hnnib",
"ghtfd",
"pgdlfx",
"iyrfnl",
"xhnpif",
"axinxpx",
"cttjnl",
"gmkji",
"ewbecn",
"fqpvci",
"iazmng",
"ehfmc",
"wsmjtm",
"vsxto",
"cguyk",
"mncgl",
"brafj",
"jvpivd",
"ljburu",
"pgxbvd",
"ewxxx",
"trmshp",
"spfdgn",
"oczwdxj",
"wvnared",
"ktzmiu",
"kdqww",
"saeuudb",
"mwzel",
"sbihma",
"jemgzpm",
"oogsc",
"lvhtgm",
"thuis",
"ljkdo",
"ewgvbu",
"emuxgt",
"kgxyfdi",
"tzwmlof",
"canbgua",
"nwqeqwg",
"ikmorv",
"uzanjn",
"npmjwyl",
"hwkdfld",
"bbmil",
"kgfro",
"qamev",
"nuvhoh",
"pklbp",
"yzfplx",
"bcifen",
"gimid",
"xiiqj",
"pvvcocj",
"skshvmq",
"nlhsqt",
"zqttm",
"xuskbm",
"jejdbjq",
"xecjgs",
"udenjcp",
"tsrekhp",
"iisxmwb",
"gmtovot",
"kqcfsdo",
"efpsvqi",
"ylhxyv",
"tvamwgp",
"kidlop",
"amwsk",
"xeplp",
"tkuhek",
"qaweb",
"orrzhb",
"ogiylt",
"muvbpg",
"ooiebj",
"gtkqk",
"uurhse",
"cmwmgh",
"yiaogkj",
"famrlgt",
"nslern",
"hsdfss",
"asujt",
"hbdmg",
"qokzr",
"razeq",
"vwfrnu",
"hbgkrf",
"betedj",
"wctub",
"dpfrrv",
"zengra",
"elphsd",
"lkvhwx",
"xutmp",
"huqpltl",
"qaatefx",
"zarfa",
"dliudeh",
"ggniy",
"cvarq",
"rjjkqs",
"xkzztf",
"vjmoxa",
"cigku",
"cvmlpfj",
"vxmmb",
"kqxfn",
"nuwohcs",
"sezwzhd",
"xpglr",
"ypmuob",
"flqmt",
"ergssgv",
"ourdw",
"sexon",
"kwhdu",
"vdhdes",
"inellc",
"urjhgcj",
"vipnsis",
"rwtfs",
"nrahj",
"jnnxk",
"emesdw",
"iyiqq",
"luuadax",
"ueyurq",
"vzbcshx",
"flywfhd",
"kphagyo",
"tygzn",
"alauzs",
"oupnjbr",
"rpqsl",
"xpqbqkg",
"tusht",
"llxhgx",
"bdmhnbz",
"kwzxtaa",
"mhtoir",
"heyanop",
"bvjrov",
"udznmet",
"kxrdmr",
"vmldb",
"qtriy",
"qfmbt",
"ppxgclr",
"jywhzz",
"rdntkwp",
"hlejhf",
"pvqjag",
"zcnudmz",
"wcyuaqz",
"tudmp",
"kluqos",
"slygy",
"zkixjqg",
"socvj",
"igvxwq",
"oyugfh",
"jscjg",
"qmigl",
"yazwe",
"shzapa",
"zqgmc",
"rmfrfz",
"tdgquiz",
"ducbvxz",
"uiddcnw",
"aapdunz",
"bagnif",
"dbjyal",
"qgbram",
"bivve",
"vxrtcs",
"szwigrl",
"zmuteys",
"zagudtp",
"lrjmobu",
"ozbgh",
"hvoxaw",
"xmjna",
"lqlkqqq",
"oclufg",
"ovbokv",
"oezekn",
"hcfgcen",
"aablpvt",
"ejvzn",
"tzncr",
"lhedw",
"wiqgdb",
"gctgu",
"zczgt",
"vufnci",
"jlwsm",
"dcrqe",
"ghuev",
"exqdhff",
"kvecua",
"relofjm",
"jjznnjd",
"imbklr",
"sjusb",
"tqhvlej",
"jcczqnz",
"vyfzx",
"audyo",
"ckysmmi",
"oanjmb",
"fhtnb",
"rzqagl",
"wxlxmj",
"nyfel",
"wruftl",
"uuhbsje",
"oobjkmy",
"litiwj",
"dxscotz",
"znvixpd",
"nsxso",
"ieati",
"iaodgc",
"dgyvzr",
"rvfccm",
"cchxt",
"raiqi",
"owzwnr",
"rkosimq",
"dyrscbo",
"ppjtey",
"ebnuom",
"bifrbq",
"teclf",
"ztbbyr",
"omrehv",
"wtrvc",
"dllgjc",
"guxeicj",
"udmxbe",
"zrvravk",
"dnric",
"sbnxmxv",
"ckzah",
"qzbrlq",
"gtmjvoq",
"xarml",
"jeqco",
"hnodvno",
"fomdpk",
"coqoudc",
"eochkh",
"hdghbb",
"jiiyrvp",
"vsdydi",
"owctgdz",
"hqafd",
"zhjktay",
"yqcfv",
"ajququ",
"gftptr",
"amllxns",
"sisfbef",
"tdjwhz",
"wrvkr",
"hqxuo",
"bhrjk",
"igjldkr",
"ujqwaj",
"ufksq",
"kmfai",
"zsjiot",
"civroa",
"eqpoiu",
"hjpnw",
"snxdde",
"vxkro",
"lvyddk",
"rfskh",
"zcnjtk",
"hsszo",
"uxnnj",
"yghbnl",
"cunqr",
"pkbwwy",
"ozjxbzt",
"sxqxmz",
"arkjqwp",
"buwcygp",
"eqthat",
"lqrofq",
"wwfmh",
"xjddhyu",
"nluzkk",
"sofgqaw",
"yrwhfg",
"pkhiq",
"kpupg",
"eonip",
"ptkfqlu",
"togolji",
"exrnx",
"keofq",
"jltdju",
"jcnjdjo",
"qbrzz",
"znymyrb",
"cjjma",
"dqyhp",
"tcmmlpj",
"qictc",
"jgswqyq",
"jcrib",
"myajs",
"dsqicw",
"llszo",
"kxjsos",
"xxstde",
"vavnq",
"rjelsx",
"lermaxm",
"gqaaeu",
"neovumn",
"azaeg",
"lztlxhi",
"pqbaj",
"wjujyo",
"hldxxb",
"ocklr",
"lgvnou",
"egjrf",
"scigsem",
"orrjki",
"ncugkks",
"dfpui",
"zbmwda",
"gqlxi",
"xawccav",
"rtels",
"bewrq",
"xvhkn",
"pyzdzju",
"voizx",
"cxuxll",
"kyyyg",
"qdngpff",
"dvtivnb",
"lsurzr",
"xyxcev",
"ojbhjhu",
"qxenhl",
"lgzcsib",
"xwomf",
"yevuml",
"anfpjp",
"hsqxxna",
"wknpn",
"hpeffdv",
"yjqvvyz",
"eoctiv",
"wannxu",
"slxnf",
"ijadma",
"uaxypn",
"xafpljk",
"nysxmyv",
"nfhvjfd",
"nykvxrp",
"nvpvf",
"dxupo",
"pvyqjdx",
"wyzudq",
"vgwpkyj",
"qlhbvp",
"hvhpmh",
"djwukq",
"rdnmazf",
"gnnryn",
"mllysxq",
"iapvpe",
"zcpayi",
"amuakk",
"slwgc",
"pbqffyw",
"emlurpa",
"kecswbq",
"pfbzg",
"opjexe",
"savxot",
"finntqj",
"tnzroga",
"jktlrsn",
"beyir",
"txsvlt",
"qjkbxr",
"qomtajo",
"ytvkqbz",
"dxantfq",
"zsstb",
"sonmv",
"rxplgr",
"ltmnnl",
"najvi",
"ucdrotu",
"smeccv",
"iqyssw",
"ytkgn",
"ccrzv",
"iepvgw",
"zbnyyh",
"mupzq",
"ztghi",
"ztyaen",
"efnzco",
"ugsyq",
"puokxgl",
"ceqotz",
"yytwzik",
"gxsbdne",
"vgcjgyu",
"sfidsno",
"cqkvw",
"rscfgmv",
"zpoblc",
"oslpwhv",
"tsvsa",
"xdrty",
"dcbejb",
"twxul",
"fozhwha",
"hkehs",
"mclhfm",
"kfxfpx",
"oplmpq",
"cmfzuu",
"rmbyhmf",
"rcivkmz",
"mgabniw",
"hghapx",
"wmoxvuk",
"kupmpud",
"snspozp",
"zveqzy",
"omxxlfq",
"gaill",
"lpazav",
"kxywrju",
"lcgtepu",
"zbmlcl",
"lpzhv",
"pszeto",
"vzmbso",
"kbokpl",
"uoqmat",
"fdkwjg",
"kytvxew",
"gzzreyo",
"wxdyynx",
"twchm",
"lsxbxma",
"tzbva",
"fnazkc",
"qmvpa",
"mxoiz",
"nmxzmn",
"ufimada",
"dvrow",
"tqxxsd",
"xucms",
"loraesj",
"mbqdp",
"mkcnovs",
"pmvip",
"uksrfu",
"ngxcbel",
"acbgch",
"jynevd",
"pewhh",
"rtuymc",
"jxqvb",
"ylrdr",
"nfbnsxo",
"blcyz",
"twndeik",
"dnfku",
"yrckw",
"ozzqt",
"bxftit",
"ooidimg",
"mpvgc",
"otobnvo",
"fbczc",
"paybdj",
"yedrbz",
"qwlijd",
"uyamzc",
"cehtizz",
"xejofd",
"hlvqt",
"iracei",
"ppjlp",
"jymqay",
"vbdtxw",
"svhdn",
"srylnl",
"arpbta",
"yiasrg",
"chmlmof",
"oaoagf",
"ntiwo",
"heuvqrv",
"ygudcn",
"ujoxgw",
"vcxysn",
"xxvbcz",
"gubka",
"lcteegl",
"mfjqu",
"jmrll",
"xmpefxb",
"fxhlx",
"qgtcw",
"itldt",
"xbhhno",
"wjlkr",
"mkoumfo",
"clccuxo",
"ksflxgu",
"cviwbab",
"ggxcbmm",
"aosxdgi",
"ucaqtvm",
"tzkquj",
"dcdjfl",
"vykusrg",
"ayxfjy",
"vejuy",
"bnqxwd",
"fnrbwd",
"uvkjmu",
"rgneg",
"wcqrldl",
"lokksi",
"evoqgp",
"xzpvts",
"xbjudib",
"zdpttvp",
"tbvbi",
"pzvfn",
"giicqi",
"cyjsrd",
"vvyvdn",
"trvxk",
"xkwzirg",
"smzaoc",
"jvpncjy",
"carcxzy",
"azmnrz",
"tkffh",
"kalra",
"emoowz",
"qfjcz",
"tbcpi",
"unmas",
"fxdhi",
"wegea",
"vjnbmu",
"hbxxa",
"updrj",
"kanisyn",
"qzqfa",
"rbyfleo",
"gvpud",
"vvrnda",
"ntgcz",
"niiqd",
"okmqocr",
"hlmuoir",
"cllmy",
"pvgcyui",
"klubnzd",
"henjf",
"ucmilyk",
"bdzvhvy",
"zifmo",
"cidvxii",
"unfcw",
"uzsgfv",
"rvimhmz",
"rrneanz",
"tmtptt",
"wwdzgb",
"kxlofp",
"muvdf",
"ojlkit",
"xjioe",
"hdmrl",
"uotxsd",
"wblmhvw",
"kyatg",
"lueyjj",
"edcqlhh",
"yigtdu",
"mkqvux",
"ognxpmw",
"obdpmbo",
"doguzpy",
"pmeuq",
"egkwgm",
"zmjps",
"blxurlk",
"tcdsvz",
"cpttk",
"oones",
"zulotp",
"bbjmmft",
"qkchbbb",
"ddoyf",
"qwykri",
"rtdhc",
"xzeopey",
"dwwzu",
"absoj",
"pyhxrz",
"xkzppy",
"hukvxis",
"mrlzdcn",
"rowai",
"jehovhy",
"ctrho",
"icfhdp",
"mjgmdju",
"hxujna",
"bzmfac",
"hpnpfvb",
"zhnnuzl",
"exfpqk",
"uusye",
"abklkm",
"hpwybsm",
"pttzcdz",
"mmkao",
"uxkqle",
"mllnhh",
"gqggto",
"mgntzzx",
"xtdef",
"xdhgjq",
"bkvcqzj",
"grvdv",
"agjof",
"nxxonak",
"ssdwci",
"wjkcwl",
"bvgwiaw",
"aehhhox",
"miyxnt",
"ztjbho",
"npuynrk",
"qnjch",
"urwuyw",
"hjclv",
"qhbvvt",
"enyzud",
"pfeyzwd",
"fozvoz",
"zisyj",
"hzdbri",
"wyylyi",
"fjvwf",
"svfmn",
"edhcu",
"eprzr",
"vbhsezf",
"isudgte",
"qszip",
"ilqnce",
"rmrjlw",
"opaweid",
"juzdv",
"ehtiv",
"yzcosn",
"rimplj",
"yhdre",
"onklj",
"xyrzj",
"fpkebll",
"hvjyjkb",
"koczlof",
"iovlifd",
"uqvvw",
"xyfueng",
"twynd",
"ktmkxzq",
"qcvwd",
"uxnjdh",
"exzkjjx",
"vefasx",
"ufgtg",
"cmllk",
"whqpy",
"uiqka",
"lxnwzw",
"jblxgd",
"dxvuvf",
"llxqaz",
"qzxqeq",
"smfiri",
"qwalddg",
"qqkianf",
"oshgkag",
"qalgka",
"sedaqv",
"logbadp",
"hmklzj",
"qdqcqj",
"cdwcudk",
"sqgvhsh",
"llloqjx",
"binnlcs",
"fjaow",
"uirnxyz",
"zffqat",
"akdzyhn",
"pmcwu",
"elnge",
"uyelt",
"uzsod",
"qwfarib",
"lowtshb",
"wgzaiwm",
"xzcppu",
"azfoudu",
"zvargi",
"mhospi",
"bfqemy",
"krhnt",
"dolixge",
"ofpqew",
"xjslrou",
"djjueg",
"nbjtp",
"zahjb",
"vdbxeg",
"vooqz",
"hflenpk",
"xpyxqmq",
"wkkdtd",
"cfcgj",
"unrprg",
"keszevs",
"cequp",
"hsjio",
"ayprq",
"alpzyk",
"erikrmm",
"ftxfgde",
"lopqyg",
"oxlqbi",
"tiiht",
"itanzk",
"vpduf",
"uxbgkqt",
"vztwrdf",
"bcqleo",
"zbrteyu",
"mzhvxw",
"esnlm",
"uctdwz",
"dmhnw",
"dhoqk",
"ulsokg",
"ecceh",
"lfkyscl",
"turgt",
"jctmib",
"psrlmq",
"gdwpniq",
"hahedwt",
"ajbigo",
"qtfmiv",
"dtzij",
"oceesy",
"hothyg",
"vfadpdf",
"hppiu",
"pjgiee",
"isxxbpp",
"dwnoh",
"cgnill",
"zufyk",
"mpoeo",
"pqyxc",
"ehpyix",
"aulgyr",
"dtdvk",
"snlmd",
"uswfmev",
"jhypcm",
"nreeygx",
"sjbuqp",
"uajrx",
"bvgci",
"cktys",
"mioopek",
"tojhyqu",
"hihlal",
"lhviecc",
"kczwrkd",
"belivo",
"gcxlt",
"vtqorps",
"fdvnwku",
"kllox",
"xgnyfqr",
"xwvfqa",
"hshmm",
"lugfp",
"ugcxki",
"yiyylah",
"imjcspq",
"lpsts",
"ifvkz",
"veheym",
"yxbrib",
"wyhpjhn",
"izwaqvn",
"kjpdj",
"chcnl",
"tbxld",
"rskqzjm",
"fisxhi",
"dwepxp",
"cfxjbl",
"mjqpe",
"renbc",
"lnqdq",
"psyhms",
"unqmse",
"mlivyuw",
"ajzrmjt",
"fmmsrx",
"ggvwrg",
"evnhtbx",
"ednufjc",
"goows",
"olofgl",
"wpfrr",
"wuvype",
"fwjto",
"vnazt",
"ujrhkec",
"noxme",
"pbkaz",
"larkdu",
"ebuxauc",
"vkjqer",
"ldelp",
"sojxqfg",
"wbvbut",
"qgsooz",
"ixtgd",
"qioslp",
"pqgua",
"mfnvz",
"fkinlrh",
"qtenoc",
"syenus",
"fkplm",
"qvlies",
"unolnf",
"bftvtl",
"ifzbp",
"zckbjz",
"rwokr",
"iqipnz",
"nyrsnty",
"klneq",
"rtizom",
"nzxvaf",
"kwsknn",
"delmgu",
"lvrtoi",
"xgpcn",
"zovahx",
"kqykd",
"zllzd",
"ciskyi",
"qmkow",
"zwbwgni",
"qixaqyp",
"bbpdxz",
"gpyddcr",
"emwcbkv",
"qymwpuk",
"gzfwr",
"bqtsjm",
"zzjuarn",
"tuczqbf",
"focyyew",
"aajkpt",
"doxpudb",
"iihqftz",
"iplmetd",
"awzllw",
"mjafnh",
"gzqmt",
"bcbqbek",
"wkpda",
"hdhnced",
"vescmqn",
"tvsof",
"ontvba",
"ywmawy",
"yzucwi",
"fqziiur",
"pmydwpt",
"dliolrc",
"jnsou",
"tvswwhb",
"yogoial",
"vayef",
"fwvhh",
"wwlck",
"mrzrjcx",
"xzqxcif",
"irynybr",
"gzyqisa",
"lnflfzy",
"xmepwne",
"unyjrem",
"zgblsf",
"rtmomdy",
"eshld",
"xmwikj",
"fnupbcw",
"fcipz",
"uciehpe",
"kmtnut",
"ectqzea",
"swrit",
"frrchku",
"swcgsu",
"shkxvt",
"zjjcx",
"gtcez",
"xblhk",
"gubhe",
"pnaoos",
"yypewih",
"cbzbk",
"jjxbq",
"nzqycdl",
"mrjjfyk",
"itkzfhc",
"uambd",
"gictm",
"atwntt",
"cenrao",
"hzmlgfv",
"cyamfon",
"pldrrnv",
"ebtzqx",
"jonga",
"ktgmiy",
"qiqseiz",
"npitnk",
"fzwuen",
"mvxhb",
"obidnqw",
"plola",
"pijaf",
"jistxtn",
"dcxxk",
"ruxbphm",
"qzaneb",
"ioyqmy",
"wayuno",
"wbvmck",
"pmcxo",
"qtada",
"kbxnj",
"knhmjtl",
"kiqxiro",
"jcpsi",
"cyhvmo",
"hsomp",
"tkxxf",
"mneqtp",
"ntrcat",
"wgvgmr",
"varaytv",
"pbida",
"yqolnz",
"chxwvp",
"vchgf",
"hohypb",
"zohgdc",
"xspsy",
"hxaefb",
"zaomwg",
"ghpniya",
"vvsmcwk",
"ycnxjh",
"hyrkc",
"zmlyxmy",
"nxwrij",
"vgnda",
"scpzuwu",
"ibnbzhk",
"tmavs",
"bvdhfbl",
"cjudij",
"udgqjbs",
"svyrq",
"kmhthi",
"prapa",
"xlibves",
"dqddqmx",
"tqcipdw",
"uqgrhl",
"fczoo",
"pptncy",
"vvaylkt",
"xjznf",
"zdjori",
"atuzhd",
"qmttkmb",
"rsfkvw",
"rqxscu",
"rxrwc",
"zvptpuc",
"ahdvce",
"ftaidk",
"apahhfj",
"iskrwxa",
"ellsp",
"lwjvg",
"rcbsw",
"dtbmi",
"ejwti",
"hdyzyf",
"gbmdm",
"gmrzr",
"jbgje",
"hnuapiv",
"yogxasw",
"kjuxrxi",
"ejjzwq",
"qthpshr",
"ufqqa",
"drswm",
"sqcdrm",
"zharf",
"duefy",
"pfrsnfs",
"ywygkk",
"debqn",
"ttsbv",
"jqoqn",
"dtwopta",
"psgxiz",
"gpuiy",
"rtghkgu",
"qcmhksu",
"lcoseb",
"vzewq",
"gxiux",
"ryqht",
"nrljfdm",
"dztatif",
"lkehf",
"rmrox",
"rnntci",
"zhliree",
"rlfpo",
"dpdup",
"qhjspn",
"hpsqhi",
"wbnub",
"pwgkle",
"bldsutn",
"nhugm",
"llxvj",
"nkulvoy",
"aihuf",
"lqflwp",
"lekamz",
"fdnfln",
"fjtplf",
"zinbih",
"jvqovhr",
"ehmlp",
"qaprv",
"mdnfd",
"xjgon",
"nqsdbj",
"odrjtab",
"qzsjq",
"ripgmer",
"ljgsxt",
"sciqi",
"yaqykph",
"rfunhjy",
"abygu",
"ibldxl",
"fhsgodn",
"lnneny",
"clcemc",
"pviqaqg",
"ywpchy",
"baksyet",
"tnfmw",
"dkpvx",
"bykxod",
"qwzrn",
"kmrfrv",
"asxodt",
"yuismpd",
"cxfcrc",
"kbkioi",
"ivspipn",
"vmjcb",
"kpnotnf",
"jncttso",
"mvoexeu",
"gxgkb",
"ihpszdp",
"ihuzlsb",
"ztyzdp",
"gsvmymx",
"ldhfbb",
"wmjymr",
"gbcjub",
"ltxge",
"picika",
"qhjywi",
"ctxwfma",
"awnzi",
"cgdwc",
"gyfpuzr",
"taqohj",
"bdmeo",
"zwrsref",
"fhixq",
"drvryni",
"lmgsd",
"rihhz",
"twwhyy",
"bhzob",
"mwypg",
"nrmyzv",
"pmfvmst",
"mizvjy",
"tdsfg",
"weoma",
"ckrzl",
"zvcgqz",
"pjnuw",
"nqrde",
"qcnem",
"grhis",
"nqozqd",
"gefinct",
"ipmzvrp",
"rgiqruj",
"eoqdeva",
"dimxz",
"ixrhlpt",
"bfwkm",
"ufwjp",
"aoszp",
"ahpyp",
"hghcyv",
"rqlti",
"pcpnpo",
"efxyxdm",
"atgcrj",
"okadwcw",
"igavnan",
"bfxqc",
"tdvdr",
"zretcfp",
"siymap",
"tugzn",
"wulwhre",
"lmfqz",
"ixjsxwc",
"gsozyoj",
"bdolsf",
"korwx",
"fvlpk",
"kuebj",
"ublpu",
"ciglmvs",
"siwqcdx",
"xclnxlf",
"vdycdl",
"utsoyxq",
"ugjnsxj",
"hppqtce",
"ciijifs",
"mxbyw",
"ptwill",
"rbahig",
"twafrt",
"qgppawc",
"terobw",
"qcjpv",
"aauvybv",
"wjfbvx",
"hrmfd",
"ibtwu",
"bnrgqm",
"lrloxuk",
"rzippvx",
"cbjekyh",
"cggdym",
"czynzdj",
"qurxnfa",
"mclrra",
"byxfrrp",
"vcryit",
"umkva",
"zulxwp",
"sfvjsyl",
"lvosyl",
"mfjfprv",
"pudrmc",
"liineqn",
"jqrfff",
"apgrfu",
"xusxh",
"vbbla",
"unvsvm",
"zhaax",
"ztcnucd",
"iuhnod",
"meeglt",
"lyvaoq",
"pqjhuq",
"afsjig",
"mrnffa",
"vngwa",
"fveunc",
"vmvnx",
"wxdxosn",
"hfwybx",
"fmwna",
"qnbxae",
"rrmyoax",
"rnjhywy",
"vstnd",
"ewnllhr",
"wsvxenk",
"cbivijf",
"nysamc",
"zoyfna",
"uotmde",
"zkkrpmp",
"ttficz",
"rdnds",
"uveompq",
"emtbwb",
"drhwsf",
"bmkafp",
"yedpcow",
"untvymx",
"yyejqta",
"kcjakg",
"tdwmu",
"gecjncx",
"cxezgec",
"xonnszm",
"mecgvq",
"kdagva",
"ucewe",
"chsmeb",
"kscci",
"gzoia",
"ovdojr",
"mwgbxe",
"gibxxlt",
"mfgpo",
"jkhob",
"hwquiz",
"wvhfaia",
"sxhikny",
"dghcawc",
"phayky",
"shchy",
"mklvgh",
"yabxat",
"rkmrfs",
"pfhgrw",
"wtlxebg",
"mevefcq",
"uvhvgo",
"nldxkdz",
"dwybxh",
"ycmlybh",
"aqvod",
"tsvfhw",
"uhvuc",
"wcsxe",
"afyzus",
"jhfyk",
"vghpfv",
"nprfou",
"vsxmc",
"hiiiewc",
"uehpmz",
"jzffnr",
"twbuhn",
"ahrbzqv",
"rvmffb",
"vrmynfc",
"upnuka",
"jghpuse",
"dwrbkhv",
"nveuii",
"nefmnf",
"aowjzzo",
"yfcprb",
"ojihgh",
"jfnit",
"ovkpf",
"bhyqx",
"enyrhm",
"ljqxp",
"pzpfjr",
"qligbi",
"udoqp",
"naxqyjp",
"jriibb",
"iccscme",
"rhnwh",
"xfajbc",
"gopeq",
"kurqqru",
"qyzpd",
"twfaem",
"nopsy",
"yqcpwa",
"xzhoc",
"rwval",
"zqhyid",
"mnmaobk",
"bzsxfa",
"kmgqo",
"quxchux",
"mimqbx",
"djuok",
"injzi",
"nekayg",
"oiyytj",
"vgwdob",
"epmbtws",
"whwkeph",
"ddfwxo",
"nlobf",
"adrqb",
"lzzownl",
"iuhka",
"upfjos",
"kjiua",
"xjgud",
"qqqnwqc",
"bgvooqf",
"qjurybc",
"ufsnhxp",
"fjpkb",
"pztffxh",
"qeqcgg",
"tfills",
"rkmbus",
"wpsmuk",
"moqeh",
"nyiayg",
"bejhle",
"gszvfjw",
"fnskvxi",
"nhxyzxi",
"trwseu",
"jdnptzx",
"fiotq",
"xspgs",
"ddnyc",
"yhxjxus",
"hkwrzd",
"rmvsyi",
"eqbjf",
"gymahyo",
"vuxso",
"ekagz",
"vozvpu",
"euzcdla",
"qvernpp",
"seejev",
"tetez",
"eosct",
"fxuicyl",
"mwyzg",
"qeujko",
"gpnizxr",
"azxslf",
"faepd",
"nsvcr",
"rxcty",
"kmtnoe",
"tuwoxf",
"xewnebm",
"qlegtb",
"qxlust",
"qnlje",
"ptdlpvq",
"tjmwt",
"nddiu",
"qanqplx",
"kxckhbq",
"lvtyy",
"cqwdax",
"irvigyr",
"mpdqgvy",
"qbvysre",
"ezluyj",
"qshkht",
"fjxyezs",
"lquxor",
"rxtgdy",
"ezlzb",
"addqjj",
"fucytk",
"mmbjy",
"gtkjcnz",
"fourguz",
"ffhtah",
"yhyxwcm",
"svmofbn",
"gvzve",
"cizjcea",
"vkdtt",
"hdivkwt",
"utnjaf",
"svvrkeh",
"qyxpd",
"qlinqj",
"xvesol",
"bykuhwp",
"mjodd",
"trurbet",
"ahzxm",
"hkxhvo",
"bhccyxf",
"elobjqo",
"igdxmj",
"twkdf",
"gmogg",
"lzmljtj",
"jhgrq",
"ndiye",
"sgaaavr",
"mxxvrkm",
"vyvvi",
"pcafw",
"cverpds",
"bvpjmw",
"pcqzlg",
"fmwhf",
"ctviwh",
"tgmjzsd",
"uvtwwy",
"pbxhcmg",
"tbiwyru",
"efzimj",
"dcujj",
"lxbndb",
"ysbhy",
"lqwnjdz",
"ontmb",
"dfsxzto",
"ubwbyv",
"htjmvu",
"ahzxszy",
"ivttau",
"cfimiy",
"fkjfmw",
"gscep",
"bwdjojj",
"knwosp",
"gznvty",
"izgfoyl",
"zayof",
"jqjpk",
"vosohad",
"xjqtry",
"zdgvx",
"cbgvmn",
"iskhag",
"qdzxb",
"lfivyh",
"ltpzk",
"wexodoi",
"zheod",
"wtamnc",
"lnjhy",
"bwozgnh",
"dvdpsy",
"puayd",
"sogsxu",
"fzylgp",
"kotukif",
"pwfjx",
"vnecbvd",
"zgojjum",
"byuzv",
"lxwfio",
"enqpgs",
"lguax",
"ztfnyqt",
"bbbbrq",
"bfqcd",
"poalx",
"amyfb",
"rmuyan",
"anqopg",
"rovev",
"pafiqmd",
"uxjiaaz",
"kyskun",
"kdyzd",
"dnvyel",
"ljwmn",
"nosgpxo",
"wplvwil",
"orcwe",
"xhyuj",
"ogueh",
"taovv",
"zodzsc",
"rdiut",
"fiyny",
"qmwccp",
"oqgpqv",
"ipsmwz",
"rvnanf",
"vhjcem",
"hevsn",
"sxdsmg",
"zxerju",
"qqmvrn",
"jpqzy",
"yenlp",
"nmitc",
"bakwo",
"ixmrhxx",
"faypb",
"bbzsmgw",
"opulvn",
"qnugsr",
"kpidsbl",
"dukzjpq",
"bbybu",
"wjausnl",
"jmzkjv",
"uygdm",
"sejdzga",
"fxkyhn",
"xwgvw",
"oxxzvlr",
"kowjho",
"ipwkmjc",
"fjrxk",
"rrzkdgs",
"bxghaq",
"gbhoqa",
"xnaprd",
"vrjus",
"prpqp",
"zayukll",
"ieaarp",
"xfcozp",
"yofdlo",
"vquhofn",
"prlictl",
"akseu",
"fqlybv",
"crpvuzw",
"bsvzr",
"mwdcfdr",
"dhcmcu",
"hiocm",
"xivqrr",
"yvcmo",
"svwsfr",
"uwopkxy",
"ougre",
"yfpmzlw",
"ycsbch",
"wlrdnre",
"jrdhn",
"ssjkca",
"tndje",
"nzebm",
"ozyobeo",
"puerg",
"aaeqauo",
"gswil",
"iwcxgji",
"tauimn",
"kbpdwlk",
"vltzl",
"watqld",
"ghqrm",
"pkravau",
"mjfbxv",
"bzifdx",
"ufszjkr",
"xodqa",
"vopisyg",
"ppytrz",
"ioqlech",
"ixvtpg",
"sgpeoa",
"avsvj",
"iwobycm",
"ycnvobh",
"lnexix",
"fgogr",
"atdwdil",
"vcsbk",
"iopjwyx",
"moxoyua",
"vncee",
"cfqiwxh",
"ttbkbh",
"xearpw",
"jzfsl",
"shpxr",
"wyrrbm",
"imrjybd",
"adufra",
"msedvi",
"hgfyd",
"yofpdh",
"zjwycb",
"dcleww",
"ruacjb",
"yjwelwi",
"dagoiud",
"vavunu",
"xlxbcmc",
"urqksfd",
"tngbww",
"kwjhnl",
"gekdht",
"jlkzfgv",
"lexqhx",
"cnmynkc",
"ebenz",
"rwdopf",
"wnetkj",
"mcfbo",
"gtevzv",
"odvil",
"shkifu",
"aovbq",
"vibnyno",
"tcmlmkz",
"rfpgk",
"gohtjwc",
"mwmfeq",
"wzxmz",
"jyufim",
"bniivjc",
"mozrlzt",
"rcwje",
"nykfvh",
"ezglkh",
"nqkpvj",
"tyqwypw",
"udzlzyz",
"iixxey",
"dlyaq",
"ugksuyk",
"sxaco",
"tkpokn",
"ykglu",
"uwzorpp",
"fhuxz",
"dqfyv",
"xnlgoe",
"bpohjte",
"smlty",
"vhght",
"nmreqxa",
"reouy",
"abqju",
"ramtsu",
"ektbvhz",
"ercmpc",
"opchcx",
"ajhrj",
"hkvalb",
"ucngyjf",
"zoltae",
"ryjhfiv",
"lgjscrc",
"mnbkms",
"odbjs",
"ywbvys",
"jjcvh",
"vzkojje",
"ttohufl",
"gvnoaj",
"jkyhavl",
"czsbrxu",
"lhhrdn",
"nhmuatv",
"eityul",
"aabelb",
"limct",
"oooxwis",
"tmvxpv",
"xbeiqh",
"fcwcc",
"qjhdcq",
"wbyplq",
"zftnk",
"epcdy",
"kptee",
"qipzud",
"viytsl",
"bzhwvj",
"pmkpud",
"aqpunv",
"jsxetb",
"gxeljex",
"iaebpo",
"dihzj",
"zftby",
"vkzra",
"hejaidb",
"djvtqt",
"vazqo",
"iugtsp",
"lxvtoin",
"kwyxpwj",
"ehpnrp",
"iivjvkn",
"vdhwfj",
"afyavpl",
"yoiht",
"colenpr",
"iohrx",
"khuljuj",
"iwtjh",
"gnqncp",
"vdhwm",
"yhxfw",
"rsrig",
"qpgym",
"gbalr",
"gqhdmz",
"cxsimhf",
"muonsb",
"swfwyyi",
"ihnnk",
"hrzoc",
"uixhtym",
"rjjtpn",
"efzgwq",
"rubgndx",
"rffpmk",
"rllab",
"cyrfk",
"ssvoz",
"ttzhop",
"zhywy",
"utzix",
"oklvooj",
"kdslj",
"qjohyod",
"ulnqss",
"dppso",
"xhyjlff",
"elazc",
"qdimsq",
"ozzaprn",
"pusmfw",
"vqopa",
"fguvxwd",
"luerv",
"ylgvs",
"qixlgz",
"btwyq",
"exxthjr",
"gmcmk",
"vdovgma",
"uxaqwjn",
"rzdlo",
"yjknn",
"yrxygac",
"vocejbl",
"wnfki",
"aabtp",
"aohxnt",
"evgftbl",
"ppsraw",
"xwjin",
"bryhke",
"mhwlj",
"rnnfh",
"vfmsxq",
"znxzwm",
"yilmhgj",
"gqdvp",
"lnuln",
"ltjtpt",
"fhrhkcw",
"dvsalfh",
"soytv",
"kexst",
"sjblwo",
"wiblqa",
"hzikex",
"cqjlf"
]
# print(s.minimumLengthEncoding(temp))
a = s.minimumLengthEncoding(temp)
b = Solution1().minimumLengthEncoding(temp)
print(a, b)
# print(sum(len(x) + 1 for x in a))
# print(sum(len(x) + 1 for x in b))
# for x in a:
# if x not in b:
# print(x)
| 19.1795 | 87 | 0.325254 | 2,230 | 39,855 | 5.809417 | 0.939013 | 0.002161 | 0.002779 | 0.001544 | 0.00988 | 0.006947 | 0.006947 | 0.006947 | 0 | 0 | 0 | 0.001404 | 0.517451 | 39,855 | 2,077 | 88 | 19.188734 | 0.672213 | 0.016685 | 0 | 0.000983 | 0 | 0 | 0.323257 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000983 | false | 0 | 0 | 0 | 0.003441 | 0.000492 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d44c23f4260429a2ed754abb9b2f421c367d9fa9 | 2,753 | py | Python | app/core/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | app/core/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | app/core/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | # Generated by Django 3.2.7 on 2021-09-19 02:59
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.db.models.manager
import markdownx.models
import components.models
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Page',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dt_created', models.DateTimeField(auto_now_add=True, verbose_name='Created')),
('dt_modified', models.DateTimeField(auto_now=True, verbose_name='Modified')),
('label', models.CharField(db_index=True, max_length=64, verbose_name='Label')),
('content', markdownx.models.MarkdownxField(verbose_name='Content')),
('created_by', models.ForeignKey(blank=True, editable=False, null=True, on_delete=models.SET(
components.models.get_sentinel_user), related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dt_created', models.DateTimeField(auto_now_add=True, verbose_name='Created')),
('dt_modified', models.DateTimeField(auto_now=True, verbose_name='Modified')),
('name', models.CharField(blank=True, max_length=64, null=True, verbose_name='Name')),
('email', models.EmailField(blank=True, max_length=64, null=True, verbose_name='Email')),
('topic', models.CharField(max_length=255, verbose_name='Topic')),
('text', models.TextField(verbose_name='Message')),
('created_by', models.ForeignKey(blank=True, editable=False, null=True, on_delete=models.SET(
components.models.get_sentinel_user), related_name='+', to=settings.AUTH_USER_MODEL)),
('to', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
'base_manager_name': '_related',
'default_manager_name': 'objects',
},
managers=[
('objects', django.db.models.manager.Manager()),
('_related', django.db.models.manager.Manager()),
],
),
]
| 46.661017 | 158 | 0.603342 | 289 | 2,753 | 5.543253 | 0.287197 | 0.082397 | 0.05618 | 0.052434 | 0.548065 | 0.513109 | 0.513109 | 0.513109 | 0.513109 | 0.464419 | 0 | 0.011782 | 0.26008 | 2,753 | 58 | 159 | 47.465517 | 0.774669 | 0.016346 | 0 | 0.45098 | 1 | 0 | 0.096822 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.098039 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d44e45e9b90fc8e34b0ae5fc97646b84ba004c81 | 670 | py | Python | tests/utils/environment_vars.py | alanverresen/django-keys | bd99a9f059af8b84b141ab8bf9a5bc5730a6ba38 | [
"MIT"
] | null | null | null | tests/utils/environment_vars.py | alanverresen/django-keys | bd99a9f059af8b84b141ab8bf9a5bc5730a6ba38 | [
"MIT"
] | null | null | null | tests/utils/environment_vars.py | alanverresen/django-keys | bd99a9f059af8b84b141ab8bf9a5bc5730a6ba38 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Contains a context manager for temporarily introducing an environment var.
import os
import contextlib
@contextlib.contextmanager
def use_environment_variable(key, value):
""" Used to temporarily introduce a new environment variable as if it
was set by the execution environment.
:param str key: key of environment variable
:param str value: value of environment variable
"""
    assert isinstance(value, str)
    assert key not in os.environ
    os.environ[key] = value
    try:
        assert key in os.environ
        yield
    finally:
        # always restore the environment, even if the managed block raises
        os.environ.pop(key, None)
    assert key not in os.environ
| 25.769231 | 76 | 0.707463 | 95 | 670 | 4.968421 | 0.505263 | 0.114407 | 0.09322 | 0.059322 | 0.220339 | 0.097458 | 0 | 0 | 0 | 0 | 0 | 0.003824 | 0.219403 | 670 | 25 | 77 | 26.8 | 0.898662 | 0.476119 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d44e934fa1525818148b189132b9513d98218d03 | 25,355 | py | Python | data/analyzer.py | morelab/teseo2014 | 568ff66cf25bf7017dd371cf503f6b99df462fff | [
"Apache-2.0"
] | null | null | null | data/analyzer.py | morelab/teseo2014 | 568ff66cf25bf7017dd371cf503f6b99df462fff | [
"Apache-2.0"
] | null | null | null | data/analyzer.py | morelab/teseo2014 | 568ff66cf25bf7017dd371cf503f6b99df462fff | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon Sep 22 08:55:14 2014
@author: aitor
"""
import mysql.connector
import networkx as nx
from networkx.generators.random_graphs import barabasi_albert_graph
import json
import os.path
import numpy as np
import pandas as pd
from pandas import Series
from pandas import DataFrame
import matplotlib.pyplot as plt
config = {
'user': 'aitor',
'password': 'pelicano',
'host': 'thor.deusto.es',
'database': 'teseo_clean',
}
persons_university = {}
persons_id = {}
first_level_topic_list = {
11: 'Logic',
12: 'Mathematics',
21: 'Astronomy, Astrophysics',
22: 'Physics',
23: 'Chemistry',
24: 'Life Sciences',
25: 'Earth and space science',
31: 'Agricultural Sciences',
32: 'Medical Sciences',
33: 'Technological Sciences',
51: 'Anthropology',
52: 'Demography',
53: 'Economic Sciences',
54: 'Geography',
55: 'History',
56: 'Juridical Science and Law',
57: 'Linguistics',
58: 'Pedagogy',
59: 'Political Science',
61: 'Psychology',
62: 'Sciences of Arts and Letters',
63: 'Sociology',
71: 'Ethics',
72: 'Philosophy',
}
# Execute it once
def get_persons_university():
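    """Cache author_id -> {university, author name} for every thesis and dump
    it to ./cache/persons_university.json."""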
p_u = {}
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
query = "SELECT thesis.author_id, thesis.university_id, university.name, university.location, person.name FROM thesis, university, person WHERE thesis.university_id = university.id AND thesis.author_id = person.id"
cursor.execute(query)
for thesis in cursor:
p_u[thesis[0]] = {
"university" : {"id" : thesis[1], "name" : thesis[2], "location" : thesis[3]},
"author" : {"name" : thesis[4]}
}
cursor.close()
cnx.close()
json.dump(p_u, open("./cache/persons_university.json", "w"), indent=2)
def load_persons_university():
print "Loading the persons_university cache..."
if not os.path.isfile("./cache/persons_university.json"):
print " - Building the persons_university cache..."
get_persons_university()
p_u = json.load(open("./cache/persons_university.json", "r"))
print "done"
return p_u
def get_persons_id():
p_i = {}
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
query = "SELECT person.id, person.name FROM person"
cursor.execute(query)
for person in cursor:
p_i[person[0]] = person[1]
cursor.close()
cnx.close()
json.dump(p_i, open("./cache/persons_id.json", "w"), indent = 2)
def load_persons_id():
print "Loading the persons_id cache..."
if not os.path.isfile("./cache/persons_id.json"):
print " - Building the persons_id cache..."
        get_persons_id()
    p_i = json.load(open("./cache/persons_id.json", "r"))
    print "done"
    return p_i
persons_university = load_persons_university()
persons_id = load_persons_id()
def build_thesis_genealogy():
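    """Build the directed advisor -> student genealogy graph, annotate nodes
    with name/university/location, and write ./networks/genealogy.gexf."""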
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
query = "SELECT thesis.author_id, advisor.person_id FROM thesis, advisor WHERE thesis.id = advisor.thesis_id"
cursor.execute(query)
G = nx.DiGraph()
for thesis in cursor:
G.add_edge(thesis[1], thesis[0])
i = 0
for n in G.nodes():
try:
node = str(n)
G.node[n]["name"] = persons_id[node]
try:
G.node[n]["university"] = persons_university[node]["university"]["name"]
G.node[n]["location"] = persons_university[node]["university"]["location"]
i += 1
except:
G.node[n]["university"] = "none"
G.node[n]["location"] = "none"
except:
print n
print "Total persons with a location:", i
cursor.close()
cnx.close()
nx.write_gexf(G, "./networks/genealogy.gexf")
return G
def build_panel_network(with_weigh = True):
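    """Build the panel co-membership graph: two persons are linked when they
    sat on the same thesis panel; if with_weigh is True, edge weights count
    the number of shared panels. Writes ./networks/panels.gexf."""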
cnx = mysql.connector.connect(**config)
print "Recovering thesis ids"
cursor = cnx.cursor()
query = "SELECT id FROM thesis"
cursor.execute(query)
thesis_ids = []
for thesis in cursor:
thesis_ids.append(thesis[0])
cursor.close()
print "Creating panel network"
cursor = cnx.cursor()
G = nx.Graph()
for c, thesis_id in enumerate(thesis_ids):
if c % 1000 == 0:
print c, "of", len(thesis_ids)
cursor.execute("SELECT person_id FROM panel_member WHERE thesis_id = " + str(thesis_id))
members = []
for member in cursor:
members.append(member[0])
for i, m1 in enumerate(members):
for m2 in members[i+1:]:
if with_weigh:
if not G.has_edge(m1, m2):
G.add_edge(m1,m2, weight = 1)
else:
G.edge[m1][m2]['weight'] += 1
else:
G.add_edge(m1,m2)
cursor.close()
cnx.close()
nx.write_gexf(G, "./networks/panels.gexf")
return G
def get_first_level_descriptors():
cnx = mysql.connector.connect(**config)
print "Recovering first level descriptors"
cursor = cnx.cursor()
query = "select id, text, code from descriptor where parent_code IS NULL"
cursor.execute(query)
descriptors = {}
for d in cursor:
descriptors[d[2]] = {"id" : d[0], "text" : d[1]}
cursor.close()
cnx.close()
return descriptors
def build_panel_network_by_descriptor(unesco_code):
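    """Same as build_panel_network, restricted to theses whose descriptor's
    first-level UNESCO code matches unesco_code."""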
cnx = mysql.connector.connect(**config)
print "Recovering thesis ids"
cursor = cnx.cursor()
query = """SELECT thesis_id
FROM association_thesis_description, descriptor
WHERE association_thesis_description.descriptor_id = descriptor.id
AND descriptor.code DIV 10000 = """ + str(unesco_code)
cursor.execute(query)
thesis_ids = []
for thesis in cursor:
thesis_ids.append(thesis[0])
cursor.close()
print "Creating panel network"
cursor = cnx.cursor()
G = nx.Graph()
for c, thesis_id in enumerate(thesis_ids):
if c % 1000 == 0:
print c, "of", len(thesis_ids)
cursor.execute("SELECT person_id FROM panel_member WHERE thesis_id = " + str(thesis_id))
members = []
for member in cursor:
members.append(member[0])
for i, m1 in enumerate(members):
for m2 in members[i+1:]:
if not G.has_edge(m1, m2):
G.add_edge(m1,m2, weight = 1)
else:
G.edge[m1][m2]['weight'] += 1
cursor.close()
cnx.close()
nx.write_gexf(G, "./networks/panels-" + str(unesco_code) + ".gexf")
return G
def generate_random_graph(n, m):
print "Building random graph"
G = barabasi_albert_graph(n, m, 10)
return G
def analize_cliques(G):
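    """Enumerate the maximal cliques of G and print size statistics; cliques
    larger than 5 (the panel size in Spain) are counted separately."""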
print "Calculating cliques..."
cliques = nx.find_cliques(G)
print "Analysing the results..."
tot_cliques = 0
tot_size = 0
max_size = 0
min_size = 10000
high_5 = 0
hist_clic = {}
for c in cliques:
tot_cliques += 1
tot_size += len(c)
if len(c) > 5: #5 is the panel size in Spain
high_5 += 1
if len(c) > max_size :
max_size = len(c)
if len(c) < min_size:
min_size = len(c)
if hist_clic.has_key(len(c)):
hist_clic[len(c)] += 1
else:
hist_clic[len(c)] = 1
print "CLIQUES:"
print " - Total cliques:", tot_cliques
print " - Avg cliques size:", tot_size * 1.0 / tot_cliques
print " - Max clique:", max_size
print " - Min clique:", min_size
print " - Cliques with a size higher than 5:", high_5
print " - histogram:", hist_clic
results = {}
results['clique_tot'] = tot_cliques
results['clique_avg'] = tot_size * 1.0 / tot_cliques
results['clique_max'] = max_size
results['clique_min'] = min_size
results['clique_greater_5'] = high_5
results['clique_greater_5_norm'] = high_5 * 1.0 / tot_cliques
#results['clique_histogram'] = hist_clic
return results
def analize_degrees(G):
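    """Print and return max/min/average node degree statistics of G."""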
print "Calculating degrees..."
degrees = nx.degree(G)
hist = nx.degree_histogram(G)
print "DEGREES:"
print " - Max degree:", max(degrees.values())
print " - Min degree:", min(degrees.values())
print " - Avg. degree:", sum(degrees.values()) * 1.0 / len(degrees)
print " - histogram:", hist
results = {}
results['degree_avg'] = sum(degrees.values()) * 1.0 / len(degrees)
results['degree_max'] = max(degrees.values())
results['degree_min'] = min(degrees.values())
#results['degree_histogram'] = hist
return results
def analize_edges(G):
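    """Print and return max/min/average edge weight statistics of G."""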
print "Analizing edges..."
min_weight = 10000
max_weight = 0
acum_weight = 0
hist_weight = {}
for e in G.edges(data=True):
acum_weight += e[2]['weight']
if max_weight < e[2]['weight']:
max_weight = e[2]['weight']
if min_weight > e[2]['weight']:
min_weight = e[2]['weight']
if hist_weight.has_key(e[2]['weight']):
hist_weight[e[2]['weight']] += 1
else:
hist_weight[e[2]['weight']] = 1
print "EDGES:"
print " - Max weight:", max_weight
print " - Min weight:", min_weight
print " - Avg weight:", acum_weight * 1.0 / len(G.edges())
print " - histogram:", hist_weight
results = {}
results['weight_avg'] = acum_weight * 1.0 / len(G.edges())
results['weight_max'] = max_weight
results['weight_min'] = min_weight
#results['weight_histogram'] = hist_weight
return results
def analyze_rdn_graph():
    G = generate_random_graph(188979, 7)  # number of nodes, edges attached per new node
nx.write_gexf(G, "./networks/barabasi_panel.gexf")
print "Nodes:", G.number_of_nodes()
print "Edges:", G.number_of_edges()
analize_cliques(G)
analize_degrees(G)
def analyze_first_level_panels():
results = {}
for d in first_level_topic_list:
print "\n*********DESCRIPTOR: " + first_level_topic_list[d] + "(" + str(d) + ")"
G = build_panel_network_by_descriptor(d)
print "\nDESCRIPTOR: " + first_level_topic_list[d] + "(" + str(d) + ")"
print "Nodes:", G.number_of_nodes()
print "Edges:", G.number_of_edges()
res_clique = analize_cliques(G)
res_degree = analize_degrees(G)
res_weight = analize_edges(G)
d_final = dict(res_clique)
d_final.update(res_degree)
d_final.update(res_weight)
d_final['id'] = d
d_final['avg_clustering'] = nx.average_clustering(G)
results[first_level_topic_list[d]] = d_final
print "Writing json..."
json.dump(results, open('./networks/first_level_panels_analysis.json','w'), indent = 2)
print "Writing csvs..."
df = DataFrame(results)
df.to_csv('./networks/first_level_panels_analysis.csv')
dfinv = df.transpose()
dfinv.to_csv('./networks/first_level_panels_analysis_inv.csv')
def from_json_to_dataframe():
    results = json.load(open('./networks/first_level_panels_analysis.json','r'))
df = DataFrame(results)
df.to_csv("panels.csv")
dft = df.transpose()
dft.to_csv("panels_trans.csv")
return df
#df = DataFrame(['id', 'name', 'clique_tot', 'clique_avg', 'clique_max', 'clique_min', 'clique_greater_5', 'degree_max', 'degree_min', 'degree_avg', 'weight_max', 'weight_min', 'weight_avg']);
def panel_repetition_per_advisor():
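    """For each advisor, count how often the same person reappears on the
    panels of that advisor's theses. Note the counting convention: the first
    co-occurrence is stored as 0, so each value is the number of repetitions
    beyond the first encounter."""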
cnx = mysql.connector.connect(**config)
print "Recovering thesis ids for each advisor..."
cursor = cnx.cursor()
query = "SELECT person_id, thesis_id FROM advisor"
cursor.execute(query)
thesis_advisor = {}
for thesis in cursor:
adv_id = thesis[0]
thesis_id = thesis[1]
if thesis_advisor.has_key(adv_id):
thesis_advisor[adv_id].append(thesis_id)
else:
thesis_advisor[adv_id] = [thesis_id]
cursor.close()
print "Counting repetitions..."
cursor = cnx.cursor()
results = {}
for c, adv in enumerate(thesis_advisor):
if c % 500 == 0:
print c, "of", len(thesis_advisor)
thesis_ids = thesis_advisor[adv]
adv_id = adv
for thesis_id in thesis_ids:
cursor.execute("SELECT person_id FROM panel_member WHERE thesis_id = " + str(thesis_id))
for member in cursor:
if results.has_key(adv_id):
if results[adv_id].has_key(member[0]):
results[adv_id][member[0]] += 1
else:
results[adv_id][member[0]] = 0
else:
results[adv_id] = {member[0] : 0}
cursor.close()
cnx.close()
json.dump(results, open('./networks/repetitions_per_advisor.json', 'w'), indent=2)
print "Procesing total repetitons"
repetitions_per_advisor = {}
for adv in results:
total_rep = 0
for rep in results[adv]:
total_rep += results[adv][rep]
repetitions_per_advisor[adv] = total_rep
return repetitions_per_advisor
def thesis_per_year():
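    """Count defended theses per year from 1977 to 2014."""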
results = {}
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
for year in range(1977,2015):
query = "SELECT count(defense_date) FROM thesis WHERE year(defense_date)=year('" + str(year) + "-01-01')"
cursor.execute(query)
for r in cursor:
results[year] = r[0]
cursor.close()
cnx.close()
return results
def thesis_per_location():
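    """Count theses per university location."""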
results = {}
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
cursor.execute("select distinct(location) from university")
locations = []
for l in cursor:
locations.append(l[0])
results = {}
for location in locations:
query = "SELECT count(thesis.id) FROM thesis, university WHERE university.location = '" + location + "'"
cursor.execute(query)
for r in cursor:
results[location] = r[0]
cursor.close()
cnx.close()
return results
def advisor_genders_by_topic():
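    """Count advisor genders (male/female/unknown) for each first-level UNESCO
    topic; dumps the result to advisor_gender_by_topic.json/.csv."""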
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
results = {}
for topic in first_level_topic_list:
print "Topic:", topic
print 'Getting thesis ids for topic...'
thesis_ids = []
cursor.execute("SELECT thesis_id FROM association_thesis_description, descriptor WHERE descriptor.id = association_thesis_description.descriptor_id AND descriptor.code DIV 10000 = " + str(topic))
for t_id in cursor:
thesis_ids.append(t_id)
print 'Number of thesis:', len(thesis_ids)
print 'Counting genders...'
male = 0
female = 0
unknown = 0
for thesis in thesis_ids:
query = "SELECT COUNT(advisor.person_id) FROM advisor, person, thesis WHERE thesis.id = advisor.thesis_id AND person.id = advisor.person_id AND person.gender = 'male' AND thesis.id = " + str(thesis[0])
cursor.execute(query)
for r in cursor:
male += r[0]
query = "SELECT COUNT(advisor.person_id) FROM advisor, person, thesis WHERE thesis.id = advisor.thesis_id AND person.id = advisor.person_id AND person.gender = 'female' AND thesis.id = " + str(thesis[0])
cursor.execute(query)
for r in cursor:
female += r[0]
query = "SELECT COUNT(advisor.person_id) FROM advisor, person, thesis WHERE thesis.id = advisor.thesis_id AND person.id = advisor.person_id AND person.gender = 'none' AND thesis.id = " + str(thesis[0])
cursor.execute(query)
for r in cursor:
unknown += r[0]
if len(thesis_ids) > 0:
results[first_level_topic_list[topic]] = {'male' : male, 'female' : female, 'unknown' : unknown}
cursor.close()
cnx.close()
print "Saving json"
json.dump(results, open('advisor_gender_by_topic.json','w'))
print "Saving csv"
df = DataFrame(results)
df.to_csv("advisor_gender_by_topic.csv")
return results
def analyze_advisor_student_genders():
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
print "Recovering advisor-student pairs..."
cursor.execute("SELECT thesis.author_id, advisor.person_id FROM thesis, advisor WHERE thesis.id = advisor.thesis_id")
adv_stu = []
for advisor in cursor:
adv_stu.append([advisor[1], advisor[0]])
print "Recovering genders..."
genders = {}
cursor.execute("SELECT person.id, person.gender FROM person")
for person in cursor:
genders[person[0]] = person[1]
cursor.close()
cnx.close()
print "Counting..."
results = {}
results["MM"] = 0
results["FF"] = 0
results["FM"] = 0
results["MF"] = 0
for pair in adv_stu:
try:
adv_gender = genders[pair[0]]
stu_gender = genders[pair[1]]
except:
adv_gender = 'none'
stu_gender = 'none'
if adv_gender == 'male':
if stu_gender == 'male':
results['MM'] += 1
elif stu_gender == 'female':
results['MF'] += 1
elif adv_gender == 'female':
if stu_gender == 'male':
results['FM'] += 1
elif stu_gender == 'female':
results['FF'] += 1
return results
def analyze_advisor_student_genders_by_topic():
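    """Count advisor/student gender pairs (MM, MF, FM, FF) per first-level
    topic, normalized by the advisor gender counts produced by
    advisor_genders_by_topic()."""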
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
print "Recovering genders..."
genders = {}
cursor.execute("SELECT person.id, person.gender FROM person")
for person in cursor:
genders[person[0]] = person[1]
topic_genders = json.load(open('advisor_gender_by_topic.json','r'))
topic_gender_pairs = {}
for topic in first_level_topic_list:
print "Topic:", topic
print "Recovering advisor-student pairs..."
query = """ SELECT thesis.author_id, advisor.person_id
FROM thesis, advisor, descriptor, association_thesis_description
WHERE descriptor.id = association_thesis_description.descriptor_id
AND thesis.id = advisor.thesis_id
AND thesis.id = association_thesis_description.thesis_id
AND descriptor.code DIV 10000 = """ + str(topic)
cursor.execute(query)
adv_stu = []
for advisor in cursor:
adv_stu.append([advisor[1], advisor[0]])
if len(adv_stu) > 0:
print "Counting..."
results = {}
results["MM"] = 0
results["FF"] = 0
results["FM"] = 0
results["MF"] = 0
for pair in adv_stu:
try:
adv_gender = genders[pair[0]]
stu_gender = genders[pair[1]]
except:
adv_gender = 'none'
stu_gender = 'none'
if adv_gender == 'male':
if stu_gender == 'male':
results['MM'] += 1
elif stu_gender == 'female':
results['MF'] += 1
elif adv_gender == 'female':
if stu_gender == 'male':
results['FM'] += 1
elif stu_gender == 'female':
results['FF'] += 1
results["MM_norm"] = results["MM"] * 1.0 / topic_genders[str(topic)]['male']
results["FF_norm"] = results["FF"] * 1.0 / topic_genders[str(topic)]['female']
results["FM_norm"] = results["FM"] * 1.0 / topic_genders[str(topic)]['female']
results["MF_norm"] = results["MF"] * 1.0 / topic_genders[str(topic)]['male']
topic_gender_pairs[first_level_topic_list[topic]] = results
cursor.close()
cnx.close()
print "Saving json"
json.dump(topic_gender_pairs, open('gender_pairs_by_topic.json','w'))
print "Saving csv"
df = DataFrame(topic_gender_pairs)
df.to_csv("gender_pairs_by_topic.csv")
return topic_gender_pairs
def count_persons_with_multiple_thesis():
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
persons_id = []
cursor.execute("SELECT person.id FROM person")
for person in cursor:
persons_id.append(person[0])
results = {}
histogram = {}
for i, p_id in enumerate(persons_id):
if i % 2000 == 0:
print i, 'of', len(persons_id)
cursor.execute("SELECT COUNT(thesis.id) FROM thesis WHERE thesis.author_id = " + str(p_id))
for r in cursor:
if r[0] > 1:
results[p_id] = r[0]
if histogram.has_key(r[0]):
histogram[r[0]] += 1
else:
histogram[r[0]] = 1
cursor.close()
cnx.close()
print "Writing json..."
json.dump(results, open('multiple_thesis.json','w'))
json.dump(histogram, open('multiple_thesis_hist.json','w'))
return results, histogram
def count_panel_members():
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
print "Getting thesis ids..."
cursor.execute("SELECT id FROM thesis")
thesis_ids = []
for r in cursor:
thesis_ids.append(r[0])
results = {}
print "Counting panel members"
for i, t_id in enumerate(thesis_ids):
if i % 2000 == 0:
print i, 'of', len(thesis_ids)
cursor.execute("SELECT count(panel_member.person_id) FROM panel_member WHERE panel_member.thesis_id = " + str(t_id))
for r in cursor:
if results.has_key(r[0]):
results[r[0]] += 1
else:
results[r[0]] = 1
cursor.close()
cnx.close()
return results
def create_gender_pie():
male = 221579.0
female = 80363.0
none = 21428.0
total = male + female + none
labels = ['Male', 'Female', 'Unknown']
sizes = [male/total*100, female/total*100, none/total*100]
colors = ['lightblue', 'pink', 'gold']
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%')
plt.axis('equal')
plt.show()
def create_advisor_gender_pie():
male = 165506.0
female = 37012.0
none = 11229.0
total = male + female + none
labels = ['Male', 'Female', 'Unknown']
sizes = [male/total*100, female/total*100, none/total*100]
colors = ['lightblue', 'pink', 'gold']
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%')
plt.axis('equal')
plt.show()
def create_student_gender_pie():
male = 115423.0
female = 52184.0
none = 9742.0
total = male + female + none
labels = ['Male', 'Female', 'Unknown']
sizes = [male/total*100, female/total*100, none/total*100]
colors = ['lightblue', 'pink', 'gold']
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%')
plt.axis('equal')
plt.show()
def create_panel_gender_pie():
male = 674748.0
female = 139170.0
none = 44765.0
total = male + female + none
labels = ['Male', 'Female', 'Unknown']
sizes = [male/total*100, female/total*100, none/total*100]
colors = ['lightblue', 'pink', 'gold']
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%')
plt.axis('equal')
plt.show()
def create_number_of_thesis_bar():
values = [1552, 126, 33, 7, 2]
fig, ax = plt.subplots()
index = np.arange(len(values))
width = 0.30
plt.bar(index, values)
plt.xlabel('Number of thesis')
plt.ylabel('Total persons')
plt.title('Number of thesis by person (> 2)')
plt.xticks(index + width, ('2', '3', '4', '5', '6'))
plt.legend()
plt.tight_layout()
plt.show()
if __name__=='__main__':
print "starting"
print create_number_of_thesis_bar()
print "fin" | 32.885863 | 221 | 0.559771 | 3,041 | 25,355 | 4.50148 | 0.115751 | 0.020454 | 0.018628 | 0.024545 | 0.551099 | 0.491563 | 0.445613 | 0.388341 | 0.355322 | 0.303455 | 0 | 0.025173 | 0.315322 | 25,355 | 771 | 222 | 32.885863 | 0.763364 | 0.015421 | 0 | 0.450949 | 0 | 0.007911 | 0.22324 | 0.043379 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.001582 | 0.015823 | null | null | 0.113924 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d454419e3e6cdb058ce6e7d5edb4c86210ee2dd2 | 7,751 | py | Python | primal_dual_models.py | louisenaud/pytorch_primal_dual | 05e212729299174e918b6f53f380f78986bcc135 | [
"MIT"
] | 2 | 2019-04-01T03:39:24.000Z | 2022-03-12T01:13:38.000Z | primal_dual_models.py | louisenaud/pytorch_primal_dual | 05e212729299174e918b6f53f380f78986bcc135 | [
"MIT"
] | null | null | null | primal_dual_models.py | louisenaud/pytorch_primal_dual | 05e212729299174e918b6f53f380f78986bcc135 | [
"MIT"
] | null | null | null | """
Project: pytorch_primal_dual
File: primal_dual_models.py
Created by: louise
On: 29/11/17
At: 4:00 PM
"""
import numpy as np
from numpy import random
from torch.autograd import Variable
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
from primal_dual_updates import PrimalWeightedUpdate, PrimalRegularization, DualWeightedUpdate
from proximal_operators import ProximalLinfBall
from linear_operators import GeneralLinearOperator, GeneralLinearAdjointOperator
class LinearOperator(nn.Module):
def __init__(self):
"""
Constructor of the learnable weight parameter CNN.
"""
super(LinearOperator, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=3, stride=1, padding=1).cuda()
self.conv2 = nn.Conv2d(10, 10, kernel_size=3, stride=1, padding=1).cuda()
self.conv3 = nn.Conv2d(10, 2, kernel_size=3, stride=1, padding=1).cuda()
def forward(self, x):
"""
Function to learn the Linear Operator L with a small CNN.
:param x: PyTorch Variable [1xMxN], primal variable.
:return: PyTorch Variable [2xMxN], output of learned linear operator
"""
z = Variable(x.data.unsqueeze(0)).cuda()
z = F.relu(self.conv1(z))
z = F.relu(self.conv2(z))
z = F.relu(self.conv3(z))
y = Variable(z.data.squeeze(0).cuda())
return y
class GaussianNoiseGenerator(nn.Module):
def __init__(self):
super(GaussianNoiseGenerator, self).__init__()
def forward(self, img, std, mean=0.0, dtype=torch.cuda.FloatTensor):
"""
Function to add gaussian noise with zero mean and given std to img.
:param img: PyTorch Variable [1xMxN], image to noise.
:param std: PyTorch tensor [1]
:param mean: float
:param dtype: Pytorch Tensor type, def=torch.cuda.FloatTensor
:return: Pytorch variable [1xMxN], noised img.
"""
noise = torch.zeros(img.size()).type(dtype)
noise.normal_(mean, std=std)
img_n = img + noise
return img_n
class PoissonNoiseGenerator(nn.Module):
def __init__(self):
super(PoissonNoiseGenerator, self).__init__()
def forward(self, img, param=500., dtype=torch.cuda.FloatTensor):
"""
        Function to create random Poisson noise on an image.
        :param img: PyTorch Variable [1xMxN], image to noise.
        :param param: float, rate (lambda) of the Poisson distribution.
        :param dtype: PyTorch tensor type, def=torch.cuda.FloatTensor
        :return: PyTorch Variable, noised image.
        """
        img_np = np.array(transforms.ToPILImage()(img.data.cpu()))
        poissonNoise = random.poisson(param, img_np.shape).astype(float)
        # add the noise in numpy space (adding the torch Variable itself to a
        # numpy array was a bug)
        noisy_img = img_np + poissonNoise
noisy_img_pytorch = Variable(transforms.ToTensor()(noisy_img).type(dtype))
return noisy_img_pytorch
class Net(nn.Module):
def __init__(self, w1, w2, w, max_it, lambda_rof, sigma, tau, theta, dtype=torch.cuda.FloatTensor):
"""
Constructor of the Primal Dual Net.
:param w1: Pytorch variable [2xMxN]
:param w2: Pytorch variable [2xMxN]
:param w: Pytorch variable [2xMxN]
:param max_it: int
:param lambda_rof: float
:param sigma: float
:param tau: float
:param theta: float
:param dtype: Pytorch Tensor type, torch.cuda.FloatTensor by default.
"""
super(Net, self).__init__()
self.linear_op = LinearOperator()
self.max_it = max_it
self.dual_update = DualWeightedUpdate(sigma)
self.prox_l_inf = ProximalLinfBall()
self.primal_update = PrimalWeightedUpdate(lambda_rof, tau)
self.primal_reg = PrimalRegularization(theta)
self.pe = 0.0
self.de = 0.0
self.w1 = nn.Parameter(w1)
self.w2 = nn.Parameter(w2)
self.w = w
self.clambda = nn.Parameter(lambda_rof.data)
self.sigma = nn.Parameter(sigma.data)
self.tau = nn.Parameter(tau.data)
self.theta = nn.Parameter(theta.data)
self.type = dtype
def forward(self, x, img_obs):
"""
Forward function for the Net model.
:param x: Pytorch variable [1xMxN]
:param img_obs: Pytorch variable [1xMxN]
:return: Pytorch variable [1xMxN]
"""
x = Variable(img_obs.data.clone()).cuda()
x_tilde = Variable(img_obs.data.clone()).cuda()
img_size = img_obs.size()
y = Variable(torch.ones((img_size[0] + 1, img_size[1], img_size[2]))).cuda()
# Forward pass
y = self.linear_op(x)
w_term = Variable(torch.exp(-torch.abs(y.data.expand_as(y))))
self.w = self.w1.expand_as(y) + self.w2.expand_as(y) * w_term
self.w.type(self.type)
self.theta.data.clamp_(0, 5)
for it in range(self.max_it):
# Dual update
y = self.dual_update.forward(x_tilde, y, self.w)
y.data.clamp_(0, 1)
y = self.prox_l_inf.forward(y, 1.0)
# Primal update
x_old = x
x = self.primal_update.forward(x, y, img_obs, self.w)
x.data.clamp_(0, 1)
# Smoothing
x_tilde = self.primal_reg.forward(x, x_tilde, x_old)
x_tilde.data.clamp_(0, 1)
return x
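# A minimal usage sketch for Net (an assumption, not part of the original file;
# shapes and hyper-parameter values are illustrative only):
#
#   M, N = 256, 256
#   img_obs = Variable(torch.rand(1, M, N).cuda())   # observed noisy image
#   w1 = 0.5 * torch.ones(2, M, N).cuda()
#   w2 = 0.4 * torch.ones(2, M, N).cuda()
#   w = Variable(w1 + w2)
#   lambda_rof = Variable(torch.Tensor([7.0]).cuda())
#   sigma = Variable(torch.Tensor([0.5]).cuda())
#   tau = Variable(torch.Tensor([0.25]).cuda())
#   theta = Variable(torch.Tensor([1.0]).cuda())
#   net = Net(w1, w2, w, 20, lambda_rof, sigma, tau, theta)
#   x_denoised = net(img_obs, img_obs)   # forward() ignores its first argument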
class GeneralNet(nn.Module):
def __init__(self, w1, w2, w, max_it, lambda_rof, sigma, tau, theta, dtype=torch.cuda.FloatTensor):
"""
Constructor of the Primal Dual Net.
:param w1: Pytorch variable [2xMxN]
:param w2: Pytorch variable [2xMxN]
:param w: Pytorch variable [2xMxN]
:param max_it: int
:param lambda_rof: float
:param sigma: float
:param tau: float
:param theta: float
:param dtype: Pytorch Tensor type, torch.cuda.FloatTensor by default.
"""
        # Fixed: the original called super(Net, ...), which breaks when
        # GeneralNet is instantiated on its own.
        super(GeneralNet, self).__init__()
self.linear_op = GeneralLinearOperator()
self.linear_op_adj = GeneralLinearAdjointOperator()
self.max_it = max_it
self.dual_update = DualWeightedUpdate(sigma)
self.prox_l_inf = ProximalLinfBall()
self.primal_update = PrimalWeightedUpdate(lambda_rof, tau)
self.primal_reg = PrimalRegularization(theta)
self.pe = 0.0
self.de = 0.0
self.w1 = nn.Parameter(w1)
self.w2 = nn.Parameter(w2)
self.w = w
self.clambda = nn.Parameter(lambda_rof.data)
self.sigma = nn.Parameter(sigma.data)
self.tau = nn.Parameter(tau.data)
self.theta = nn.Parameter(theta.data)
self.type = dtype
def forward(self, x, img_obs):
"""
Forward function for the Net model.
:param x: Pytorch variable [1xMxN]
:param img_obs: Pytorch variable [1xMxN]
:return: Pytorch variable [1xMxN]
"""
        # NOTE: as in Net.forward, the incoming x is immediately overwritten.
        x = Variable(img_obs.data.clone()).cuda()
        x_tilde = Variable(img_obs.data.clone()).cuda()
img_size = img_obs.size()
y = Variable(torch.ones((img_size[0] + 1, img_size[1], img_size[2]))).cuda()
# Forward pass
y = self.linear_op(x)
w_term = Variable(torch.exp(-torch.abs(y.data.expand_as(y))))
self.w = self.w1.expand_as(y) + self.w2.expand_as(y) * w_term
        # Tensor.type() returns a copy; assign the result or the cast is lost.
        self.w = self.w.type(self.type)
self.theta.data.clamp_(0, 5)
for it in range(self.max_it):
# Dual update
y = self.dual_update.forward(x_tilde, y, self.w)
y.data.clamp_(0, 1)
y = self.prox_l_inf.forward(y, 1.0)
# Primal update
x_old = x
x = self.primal_update.forward(x, y, img_obs, self.w)
x.data.clamp_(0, 1)
# Smoothing
x_tilde = self.primal_reg.forward(x, x_tilde, x_old)
x_tilde.data.clamp_(0, 1)
return x
| 34.602679 | 103 | 0.608051 | 1,024 | 7,751 | 4.448242 | 0.161133 | 0.055982 | 0.039517 | 0.032931 | 0.668935 | 0.654226 | 0.610318 | 0.610318 | 0.603732 | 0.603732 | 0 | 0.020908 | 0.278029 | 7,751 | 223 | 104 | 34.757848 | 0.793066 | 0.227067 | 0 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084034 | false | 0 | 0.084034 | 0 | 0.252101 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d455611f97cde8ae042a94f5b98d6334789fdf55 | 485 | py | Python | .venv/lib/python3.6/site-packages/pyglet/media/sources/__init__.py | FedericoFontana/ray | 5a7feae15f8c74d5d196fea6697c1827008018f3 | [
"Apache-2.0"
] | 3 | 2019-04-01T11:03:04.000Z | 2019-12-31T02:17:15.000Z | .venv/lib/python3.6/site-packages/pyglet/media/sources/__init__.py | FedericoFontana/ray | 5a7feae15f8c74d5d196fea6697c1827008018f3 | [
"Apache-2.0"
] | 1 | 2021-04-15T18:46:45.000Z | 2021-04-15T18:46:45.000Z | .venv/lib/python3.6/site-packages/pyglet/media/sources/__init__.py | FedericoFontana/ray | 5a7feae15f8c74d5d196fea6697c1827008018f3 | [
"Apache-2.0"
] | 1 | 2020-11-06T18:46:35.000Z | 2020-11-06T18:46:35.000Z | """Sources for media playback."""
# Collect public interface
from .loader import load, have_avbin
from .base import AudioFormat, VideoFormat, AudioData, SourceInfo
from .base import Source, StreamingSource, StaticSource, SourceGroup
# help the docs figure out where these are supposed to live (they live here)
__all__ = [
'load',
'have_avbin',
'AudioFormat',
'VideoFormat',
'AudioData',
'SourceInfo',
'Source',
'StreamingSource',
'StaticSource',
'SourceGroup',
]
| 23.095238 | 76 | 0.727835 | 53 | 485 | 6.54717 | 0.679245 | 0.04611 | 0.074928 | 0.236311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160825 | 485 | 20 | 77 | 24.25 | 0.85258 | 0.263918 | 0 | 0 | 0 | 0 | 0.282857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d457ee3cfd1e0db55001718ecea3c52991bf2068 | 410 | py | Python | tests/plot/test_layouts.py | akrherz/pyIEM | ec0acdc4c6b507b0d558ce216d4bbdbcb9b2f364 | [
"MIT"
] | 29 | 2015-09-02T15:53:48.000Z | 2022-02-04T19:47:49.000Z | tests/plot/test_layouts.py | akrherz/pyIEM | ec0acdc4c6b507b0d558ce216d4bbdbcb9b2f364 | [
"MIT"
] | 531 | 2015-01-13T20:58:33.000Z | 2022-03-30T13:59:14.000Z | tests/plot/test_layouts.py | akrherz/pyIEM | ec0acdc4c6b507b0d558ce216d4bbdbcb9b2f364 | [
"MIT"
] | 7 | 2015-02-28T22:34:32.000Z | 2020-12-06T05:16:13.000Z | """Test pyiem.plot.layouts."""
# third party
import pytest
# local
from pyiem.plot.layouts import figure_axes
@pytest.mark.mpl_image_compare(tolerance=0.1)
def test_crawl_before_walk():
"""Test that we can do basic things."""
fig, ax = figure_axes(
title="This is my Fancy Pants Title.",
subtitle="This is my Fancy Pants SubTitle.",
)
ax.plot([0, 1], [0, 1])
return fig
| 21.578947 | 52 | 0.658537 | 62 | 410 | 4.241935 | 0.629032 | 0.022814 | 0.121673 | 0.098859 | 0.136882 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.209756 | 410 | 18 | 53 | 22.777778 | 0.79321 | 0.187805 | 0 | 0 | 0 | 0 | 0.190031 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | true | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d457f75e99a83dac762342d37904956309e359c4 | 235 | py | Python | python_sets/set_elements_sum.py | antonarnaudov/python-tigers-2021-02 | 8af6c121e4d373b7d98bf76dc1777753587262ef | [
"MIT"
] | null | null | null | python_sets/set_elements_sum.py | antonarnaudov/python-tigers-2021-02 | 8af6c121e4d373b7d98bf76dc1777753587262ef | [
"MIT"
] | null | null | null | python_sets/set_elements_sum.py | antonarnaudov/python-tigers-2021-02 | 8af6c121e4d373b7d98bf76dc1777753587262ef | [
"MIT"
] | null | null | null | def set_elements_sum(a, b):
c = []
for i in range(len(a)):
result = a[i] + b[i]
c.append(result)
return c
ll = [1, 2, 3, 4, 5]
ll2 = [3, 4, 5, 6, 7]
print(set_elements_sum(ll, ll2))
# [4, 6, 8, 10, 12]
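# An equivalent one-liner using zip (editor's note; same output as above):
print([x + y for x, y in zip(ll, ll2)])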
| 14.6875 | 32 | 0.489362 | 46 | 235 | 2.413043 | 0.608696 | 0.198198 | 0.252252 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117284 | 0.310638 | 235 | 15 | 33 | 15.666667 | 0.567901 | 0.07234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.222222 | 0.111111 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d45e91dbb62c867d411f2c3492ce392f573d7768 | 3,450 | py | Python | mlops/parallelm/mlops/config_info.py | lisapm/mlpiper | 74ad5ae343d364682cc2f8aaa007f2e8a1d84929 | [
"Apache-2.0"
] | 7 | 2019-04-08T02:31:55.000Z | 2021-11-15T14:40:49.000Z | mlops/parallelm/mlops/config_info.py | lisapm/mlpiper | 74ad5ae343d364682cc2f8aaa007f2e8a1d84929 | [
"Apache-2.0"
] | 31 | 2019-02-22T22:23:26.000Z | 2021-08-02T17:17:06.000Z | mlops/parallelm/mlops/config_info.py | lisapm/mlpiper | 74ad5ae343d364682cc2f8aaa007f2e8a1d84929 | [
"Apache-2.0"
] | 8 | 2019-03-15T23:46:08.000Z | 2020-02-06T09:16:02.000Z |
import os
from parallelm.mlops.mlops_env_constants import MLOpsEnvConstants
from parallelm.mlops.constants import Constants
class ConfigInfo:
def __init__(self):
self.mlops_mode = None
self.output_channel_type = None
self.zk_host = None
self.token = None
self.ion_id = None
self.ion_node_id = None
self.mlops_server = None
self.mlops_port = None
self.model_id = None
self.pipeline_id = None
def __str__(self):
s = ""
s += "mode: {}\n".format(self.mlops_mode)
s += "output: {}\n".format(self.output_channel_type)
s += "zk: {}\n".format(self.zk_host)
s += "token: {}\n".format(self.token)
s += "{}: {}\n".format(Constants.ION_LITERAL, self.ion_id)
s += "node: {}\n".format(self.ion_node_id)
s += "server {}\n".format(self.mlops_server)
s += "port {}\n".format(self.mlops_port)
s += "model_id {}\n".format(self.model_id)
s += "pipeline: {}\n".format(self.pipeline_id)
return s
def _update_val_from_env(self, env_name, value):
if value is None:
return os.environ.get(env_name, value)
else:
return value
def read_from_env(self):
"""
Read configuration from environment variables.
"""
self.mlops_mode = self._update_val_from_env(MLOpsEnvConstants.MLOPS_MODE, self.mlops_mode)
self.output_channel_type = self._update_val_from_env(MLOpsEnvConstants.MLOPS_OUTPUT_CHANNEL,
self.output_channel_type)
self.zk_host = self._update_val_from_env(MLOpsEnvConstants.MLOPS_ZK_HOST, self.zk_host)
self.token = self._update_val_from_env(MLOpsEnvConstants.MLOPS_TOKEN, self.token)
self.ion_id = self._update_val_from_env(MLOpsEnvConstants.MLOPS_ION_ID, self.ion_id)
self.ion_node_id = self._update_val_from_env(MLOpsEnvConstants.MLOPS_ION_NODE_ID, self.ion_node_id)
self.mlops_server = self._update_val_from_env(MLOpsEnvConstants.MLOPS_DATA_REST_SERVER, self.mlops_server)
self.mlops_port = self._update_val_from_env(MLOpsEnvConstants.MLOPS_DATA_REST_PORT, self.mlops_port)
self.model_id = self._update_val_from_env(MLOpsEnvConstants.MLOPS_MODEL_ID, self.model_id)
self.pipeline_id = self._update_val_from_env(MLOpsEnvConstants.MLOPS_PIPELINE_ID, self.pipeline_id)
return self
def set_env(self):
"""
Set configuration into environment variables.
"""
os.environ[MLOpsEnvConstants.MLOPS_MODE] = self.mlops_mode
os.environ[MLOpsEnvConstants.MLOPS_OUTPUT_CHANNEL] = self.output_channel_type
        # ZK might not be used (in attach mode, for example)
if self.zk_host:
os.environ[MLOpsEnvConstants.MLOPS_ZK_HOST] = self.zk_host
os.environ[MLOpsEnvConstants.MLOPS_TOKEN] = self.token
os.environ[MLOpsEnvConstants.MLOPS_ION_ID] = self.ion_id
os.environ[MLOpsEnvConstants.MLOPS_ION_NODE_ID] = self.ion_node_id
os.environ[MLOpsEnvConstants.MLOPS_DATA_REST_SERVER] = self.mlops_server
os.environ[MLOpsEnvConstants.MLOPS_DATA_REST_PORT] = self.mlops_port
os.environ[MLOpsEnvConstants.MLOPS_PIPELINE_ID] = self.pipeline_id
if self.model_id:
os.environ[MLOpsEnvConstants.MLOPS_MODEL_ID] = self.model_id | 44.805195 | 114 | 0.671304 | 449 | 3,450 | 4.801782 | 0.131403 | 0.204082 | 0.066327 | 0.081633 | 0.551484 | 0.513451 | 0.472171 | 0.275046 | 0.119202 | 0 | 0 | 0 | 0.231884 | 3,450 | 77 | 115 | 44.805195 | 0.813585 | 0.041739 | 0 | 0 | 0 | 0 | 0.042958 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084746 | false | 0 | 0.050847 | 0 | 0.220339 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d45f1586c5630bca9421e3233108ec6725f2dae3 | 27,344 | py | Python | pyupdater/vendor/PyInstaller/hooks/hookutils.py | rsumner31/PyUpdater1 | d9658000472e57453267ee8fa174ae914dd8d33c | [
"BSD-2-Clause"
] | null | null | null | pyupdater/vendor/PyInstaller/hooks/hookutils.py | rsumner31/PyUpdater1 | d9658000472e57453267ee8fa174ae914dd8d33c | [
"BSD-2-Clause"
] | null | null | null | pyupdater/vendor/PyInstaller/hooks/hookutils.py | rsumner31/PyUpdater1 | d9658000472e57453267ee8fa174ae914dd8d33c | [
"BSD-2-Clause"
] | null | null | null | #-----------------------------------------------------------------------------
# Copyright (c) 2013, PyInstaller Development Team.
#
# Distributed under the terms of the GNU General Public License with exception
# for distributing bootloader.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
import glob
import os
import os.path
import sys
import PyInstaller
import PyInstaller.compat as compat
from PyInstaller.compat import is_darwin, is_win
from PyInstaller.utils import misc
import PyInstaller.log as logging
logger = logging.getLogger(__name__)
# All these extension represent Python modules or extension modules
PY_EXECUTABLE_SUFFIXES = set(['.py', '.pyc', '.pyd', '.pyo', '.so'])
# these suffixes represent python extension modules
try:
from importlib.machinery import EXTENSION_SUFFIXES as PY_EXTENSION_SUFFIXES
except ImportError:
import imp
PY_EXTENSION_SUFFIXES = set([f[0] for f in imp.get_suffixes()
if f[2] == imp.C_EXTENSION])
# These extensions represent Python executables and should therefore be
# ignored when collecting data files.
PY_IGNORE_EXTENSIONS = set(['.py', '.pyc', '.pyd', '.pyo', '.so', 'dylib'])
# Some hooks need to save some values. This is the dict that can be used for
# that.
#
# When running tests this variable should be reset before every test.
#
# For example the 'wx' module needs variable 'wxpubsub'. This tells PyInstaller
# which protocol of the wx module should be bundled.
hook_variables = {}
def __exec_python_cmd(cmd):
"""
Executes an externally spawned Python interpreter and returns
anything that was emitted in the standard output as a single
string.
"""
# Prepend PYTHONPATH with pathex
pp = os.pathsep.join(PyInstaller.__pathex__)
old_pp = compat.getenv('PYTHONPATH')
if old_pp:
pp = os.pathsep.join([old_pp, pp])
compat.setenv("PYTHONPATH", pp)
try:
try:
txt = compat.exec_python(*cmd)
        except OSError as e:
raise SystemExit("Execution failed: %s" % e)
finally:
if old_pp is not None:
compat.setenv("PYTHONPATH", old_pp)
else:
compat.unsetenv("PYTHONPATH")
return txt.strip()
def exec_statement(statement):
"""Executes a Python statement in an externally spawned interpreter, and
returns anything that was emitted in the standard output as a single string.
"""
cmd = ['-c', statement]
return __exec_python_cmd(cmd)
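# Example (editor's illustration): ask the spawned interpreter for its
# version string. Any single-line Python program works as the statement.
def _example_exec_statement():
    return exec_statement('import sys; print(sys.version)')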
def exec_script(script_filename, *args):
"""
Executes a Python script in an externally spawned interpreter, and
returns anything that was emitted in the standard output as a
single string.
    To prevent misuse, the script passed to hookutils.exec_script
    must be located in the `hooks/utils` directory.
"""
script_filename = os.path.join('utils', os.path.basename(script_filename))
script_filename = os.path.join(os.path.dirname(__file__), script_filename)
if not os.path.exists(script_filename):
        raise SystemError("To prevent misuse, the script passed to "
                          "hookutils.exec_script must be located in "
                          "the `hooks/utils` directory.")
# Scripts might be importing some modules. Add PyInstaller code to pathex.
pyinstaller_root_dir = os.path.dirname(os.path.abspath(PyInstaller.__path__[0]))
PyInstaller.__pathex__.append(pyinstaller_root_dir)
cmd = [script_filename]
cmd.extend(args)
return __exec_python_cmd(cmd)
def eval_statement(statement):
txt = exec_statement(statement).strip()
if not txt:
# return an empty string which is "not true" but iterable
return ''
return eval(txt)
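# Editor's illustration: eval_statement parses the child interpreter's stdout
# with eval(), so printing a Python literal round-trips it as a value.
def _example_eval_statement():
    return eval_statement('print([1, 2, 3])')  # -> the list [1, 2, 3]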
def eval_script(scriptfilename, *args):
txt = exec_script(scriptfilename, *args).strip()
if not txt:
# return an empty string which is "not true" but iterable
return ''
return eval(txt)
def get_pyextension_imports(modname):
"""
Return list of modules required by binary (C/C++) Python extension.
Python extension files ends with .so (Unix) or .pyd (Windows).
It's almost impossible to analyze binary extension and its dependencies.
Module cannot be imported directly.
Let's at least try import it in a subprocess and get the difference
in module list from sys.modules.
This function could be used for 'hiddenimports' in PyInstaller hooks files.
"""
statement = """
import sys
# Importing distutils filters common modules, especially in virtualenv.
import distutils
original_modlist = sys.modules.keys()
# When importing this module - sys.modules gets updated.
import %(modname)s
all_modlist = sys.modules.keys()
diff = set(all_modlist) - set(original_modlist)
# The module list contains the original modname. We do not need it there.
diff.discard('%(modname)s')
# Print module list to stdout.
print(list(diff))
""" % {'modname': modname}
module_imports = eval_statement(statement)
if not module_imports:
logger.error('Cannot find imports for module %s' % modname)
return [] # Means no imports found or looking for imports failed.
#module_imports = filter(lambda x: not x.startswith('distutils'), module_imports)
return module_imports
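# Hypothetical hook usage (editor's sketch): the module name is an example
# only; in a real hook file the result would feed `hiddenimports`.
def _example_hiddenimports_for_extension():
    return get_pyextension_imports('lxml.etree')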
def qt4_plugins_dir():
qt4_plugin_dirs = eval_statement(
"from PyQt4.QtCore import QCoreApplication;"
"app=QCoreApplication([]);"
"print(map(unicode,app.libraryPaths()))")
if not qt4_plugin_dirs:
logger.error("Cannot find PyQt4 plugin directories")
return ""
for d in qt4_plugin_dirs:
if os.path.isdir(d):
return str(d) # must be 8-bit chars for one-file builds
logger.error("Cannot find existing PyQt4 plugin directory")
return ""
def qt4_phonon_plugins_dir():
qt4_plugin_dirs = eval_statement(
"from PyQt4.QtGui import QApplication;"
"app=QApplication([]); app.setApplicationName('pyinstaller');"
"from PyQt4.phonon import Phonon;"
"v=Phonon.VideoPlayer(Phonon.VideoCategory);"
"print(map(unicode,app.libraryPaths()))")
if not qt4_plugin_dirs:
logger.error("Cannot find PyQt4 phonon plugin directories")
return ""
for d in qt4_plugin_dirs:
if os.path.isdir(d):
return str(d) # must be 8-bit chars for one-file builds
logger.error("Cannot find existing PyQt4 phonon plugin directory")
return ""
def qt4_plugins_binaries(plugin_type):
"""Return list of dynamic libraries formatted for mod.pyinstaller_binaries."""
binaries = []
pdir = qt4_plugins_dir()
files = misc.dlls_in_dir(os.path.join(pdir, plugin_type))
# Windows:
#
# dlls_in_dir() grabs all files ending with *.dll, *.so and *.dylib in a certain
# directory. On Windows this would grab debug copies of Qt 4 plugins, which then
# causes PyInstaller to add a dependency on the Debug CRT __in addition__ to the
# release CRT.
#
# Since debug copies of Qt4 plugins end with "d4.dll" we filter them out of the
# list.
#
if is_win:
files = [f for f in files if not f.endswith("d4.dll")]
for f in files:
binaries.append((
# TODO fix this hook to use hook-name.py attribute 'binaries'.
os.path.join('qt4_plugins', plugin_type, os.path.basename(f)),
f, 'BINARY'))
return binaries
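# Typical hook usage (editor's sketch); 'imageformats' is one of the standard
# Qt plugin subdirectories, used here purely as an example.
def _example_qt4_imageformat_binaries():
    return qt4_plugins_binaries('imageformats')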
def qt4_menu_nib_dir():
"""Return path to Qt resource dir qt_menu.nib. OSX only"""
menu_dir = ''
# Detect MacPorts prefix (usually /opt/local).
# Suppose that PyInstaller is using python from macports.
macports_prefix = sys.executable.split('/Library')[0]
# list of directories where to look for qt_menu.nib
dirs = []
# Look into user-specified directory, just in case Qt4 is not installed
# in a standard location
if 'QTDIR' in os.environ:
dirs += [
os.path.join(os.environ['QTDIR'], "QtGui.framework/Versions/4/Resources"),
os.path.join(os.environ['QTDIR'], "lib", "QtGui.framework/Versions/4/Resources"),
]
# If PyQt4 is built against Qt5 look for the qt_menu.nib in a user
# specified location, if it exists.
if 'QT5DIR' in os.environ:
dirs.append(os.path.join(os.environ['QT5DIR'],
"src", "plugins", "platforms", "cocoa"))
dirs += [
# Qt4 from MacPorts not compiled as framework.
os.path.join(macports_prefix, 'lib', 'Resources'),
# Qt4 from MacPorts compiled as framework.
os.path.join(macports_prefix, 'libexec', 'qt4-mac', 'lib',
'QtGui.framework', 'Versions', '4', 'Resources'),
# Qt4 installed into default location.
'/Library/Frameworks/QtGui.framework/Resources',
'/Library/Frameworks/QtGui.framework/Versions/4/Resources',
'/Library/Frameworks/QtGui.Framework/Versions/Current/Resources',
]
# Qt from Homebrew
homebrewqtpath = get_homebrew_path('qt')
if homebrewqtpath:
dirs.append( os.path.join(homebrewqtpath,'lib','QtGui.framework','Versions','4','Resources') )
# Check directory existence
for d in dirs:
d = os.path.join(d, 'qt_menu.nib')
if os.path.exists(d):
menu_dir = d
break
if not menu_dir:
logger.error('Cannot find qt_menu.nib directory')
return menu_dir
def qt5_plugins_dir():
if 'QT_PLUGIN_PATH' in os.environ and os.path.isdir(os.environ['QT_PLUGIN_PATH']):
return str(os.environ['QT_PLUGIN_PATH'])
qt5_plugin_dirs = eval_statement(
"from PyQt5.QtCore import QCoreApplication;"
"app=QCoreApplication([]);"
"print(map(unicode,app.libraryPaths()))")
if not qt5_plugin_dirs:
logger.error("Cannot find PyQt5 plugin directories")
return ""
for d in qt5_plugin_dirs:
if os.path.isdir(d):
return str(d) # must be 8-bit chars for one-file builds
logger.error("Cannot find existing PyQt5 plugin directory")
return ""
def qt5_phonon_plugins_dir():
qt5_plugin_dirs = eval_statement(
"from PyQt5.QtGui import QApplication;"
"app=QApplication([]); app.setApplicationName('pyinstaller');"
"from PyQt5.phonon import Phonon;"
"v=Phonon.VideoPlayer(Phonon.VideoCategory);"
"print(map(unicode,app.libraryPaths()))")
if not qt5_plugin_dirs:
logger.error("Cannot find PyQt5 phonon plugin directories")
return ""
for d in qt5_plugin_dirs:
if os.path.isdir(d):
return str(d) # must be 8-bit chars for one-file builds
logger.error("Cannot find existing PyQt5 phonon plugin directory")
return ""
def qt5_plugins_binaries(plugin_type):
"""Return list of dynamic libraries formatted for mod.pyinstaller_binaries."""
binaries = []
pdir = qt5_plugins_dir()
files = misc.dlls_in_dir(os.path.join(pdir, plugin_type))
for f in files:
binaries.append((
os.path.join('qt5_plugins', plugin_type, os.path.basename(f)),
f, 'BINARY'))
return binaries
def qt5_menu_nib_dir():
"""Return path to Qt resource dir qt_menu.nib. OSX only"""
menu_dir = ''
# If the QT5DIR env var is set then look there first. It should be set to the
# qtbase dir in the Qt5 distribution.
dirs = []
if 'QT5DIR' in os.environ:
dirs.append(os.path.join(os.environ['QT5DIR'],
"src", "plugins", "platforms", "cocoa"))
dirs.append(os.path.join(os.environ['QT5DIR'],
"src", "qtbase", "src", "plugins", "platforms", "cocoa"))
# As of the time of writing macports doesn't yet support Qt5. So this is
# just modified from the Qt4 version.
# FIXME: update this when MacPorts supports Qt5
# Detect MacPorts prefix (usually /opt/local).
# Suppose that PyInstaller is using python from macports.
macports_prefix = sys.executable.split('/Library')[0]
# list of directories where to look for qt_menu.nib
dirs.extend( [
# Qt5 from MacPorts not compiled as framework.
os.path.join(macports_prefix, 'lib', 'Resources'),
# Qt5 from MacPorts compiled as framework.
os.path.join(macports_prefix, 'libexec', 'qt5-mac', 'lib',
'QtGui.framework', 'Versions', '5', 'Resources'),
# Qt5 installed into default location.
'/Library/Frameworks/QtGui.framework/Resources',
'/Library/Frameworks/QtGui.framework/Versions/5/Resources',
'/Library/Frameworks/QtGui.Framework/Versions/Current/Resources',
])
# Qt5 from Homebrew
homebrewqtpath = get_homebrew_path('qt5')
if homebrewqtpath:
dirs.append( os.path.join(homebrewqtpath,'src','qtbase','src','plugins','platforms','cocoa') )
# Check directory existence
for d in dirs:
d = os.path.join(d, 'qt_menu.nib')
if os.path.exists(d):
menu_dir = d
break
if not menu_dir:
logger.error('Cannot find qt_menu.nib directory')
return menu_dir
def get_homebrew_path(formula = ''):
'''Return the homebrew path to the requested formula, or the global prefix when
called with no argument. Returns the path as a string or None if not found.'''
import subprocess
brewcmd = ['brew','--prefix']
path = None
if formula:
brewcmd.append(formula)
dbgstr = 'homebrew formula "%s"' %formula
else:
dbgstr = 'homebrew prefix'
try:
path = subprocess.check_output(brewcmd).strip()
logger.debug('Found %s at "%s"' % (dbgstr, path))
except OSError:
logger.debug('Detected homebrew not installed')
except subprocess.CalledProcessError:
logger.debug('homebrew formula "%s" not installed' % formula)
return path
def get_qmake_path(version = ''):
'''
Try to find the path to qmake with version given by the argument
as a string.
'''
import subprocess
# Use QT[45]DIR if specified in the environment
if 'QT5DIR' in os.environ and version[0] == '5':
logger.debug('Using $QT5DIR/bin as qmake path')
return os.path.join(os.environ['QT5DIR'],'bin','qmake')
if 'QT4DIR' in os.environ and version[0] == '4':
logger.debug('Using $QT4DIR/bin as qmake path')
return os.path.join(os.environ['QT4DIR'],'bin','qmake')
# try the default $PATH
dirs = ['']
# try homebrew paths
for formula in ('qt','qt5'):
homebrewqtpath = get_homebrew_path(formula)
if homebrewqtpath:
dirs.append(homebrewqtpath)
for dir in dirs:
try:
qmake = os.path.join(dir, 'qmake')
versionstring = subprocess.check_output([qmake, '-query', \
'QT_VERSION']).strip()
if versionstring.find(version) == 0:
logger.debug('Found qmake version "%s" at "%s".' \
% (versionstring, qmake))
return qmake
except (OSError, subprocess.CalledProcessError):
pass
logger.debug('Could not find qmake matching version "%s".' % version)
return None
def qt5_qml_dir():
import subprocess
qmake = get_qmake_path('5')
if qmake is None:
logger.error('Could not find qmake version 5.x, make sure PATH is ' \
+ 'set correctly or try setting QT5DIR.')
qmldir = subprocess.check_output([qmake, "-query",
"QT_INSTALL_QML"]).strip()
if len(qmldir) == 0:
logger.error('Cannot find QT_INSTALL_QML directory, "qmake -query '
+ 'QT_INSTALL_QML" returned nothing')
if not os.path.exists(qmldir):
logger.error("Directory QT_INSTALL_QML: %s doesn't exist" % qmldir)
# On Windows 'qmake -query' uses / as the path separator
# so change it to \\.
if is_win:
import string
qmldir = string.replace(qmldir, '/', '\\')
return qmldir
def qt5_qml_data(dir):
"""Return Qml library dir formatted for data"""
qmldir = qt5_qml_dir()
return (os.path.join(qmldir, dir), 'qml')
def qt5_qml_plugins_binaries(dir):
"""Return list of dynamic libraries formatted for mod.pyinstaller_binaries."""
import string
binaries = []
qmldir = qt5_qml_dir()
dir = string.rstrip(dir, os.sep)
files = misc.dlls_in_subdirs(os.path.join(qmldir, dir))
if files is not None:
for f in files:
relpath = os.path.relpath(f, qmldir)
instdir, file = os.path.split(relpath)
instdir = os.path.join("qml", instdir)
logger.debug("qt5_qml_plugins_binaries installing %s in %s"
% (f, instdir) )
binaries.append((
os.path.join(instdir, os.path.basename(f)),
f, 'BINARY'))
return binaries
def django_dottedstring_imports(django_root_dir):
"""
Get all the necessary Django modules specified in settings.py.
In the settings.py the modules are specified in several variables
as strings.
"""
package_name = os.path.basename(django_root_dir)
compat.setenv('DJANGO_SETTINGS_MODULE', '%s.settings' % package_name)
# Extend PYTHONPATH with parent dir of django_root_dir.
PyInstaller.__pathex__.append(misc.get_path_to_toplevel_modules(django_root_dir))
# Extend PYTHONPATH with django_root_dir.
# Many times Django users do not specify absolute imports in the settings module.
PyInstaller.__pathex__.append(django_root_dir)
ret = eval_script('django-import-finder.py')
if not isinstance(ret, list):
        # If the script fails, `ret` is not a list. Handle this here to
        # avoid crashes later. See GitHub issues #667, #1067 and #1252.
logger.error('script django-import-finder.py failed')
assert (not ret), ret # ensure it is an empty value
ret = []
# Unset environment variables again.
compat.unsetenv('DJANGO_SETTINGS_MODULE')
return ret
def django_find_root_dir():
"""
    Return path to the directory (top-level Python package) that contains the main
    Django files. Return None if no directory was detected.
    The main Django project directory contains files like '__init__.py', 'settings.py'
    and 'urls.py'.
In Django 1.4+ the script 'manage.py' is not in the directory with 'settings.py'
but usually one level up. We need to detect this special case too.
"""
# Get the directory with manage.py. Manage.py is supplied to PyInstaller as the
# first main executable script.
manage_py = sys._PYI_SETTINGS['scripts'][0]
manage_dir = os.path.dirname(os.path.abspath(manage_py))
# Get the Django root directory. The directory that contains settings.py and url.py.
    # It could be the directory containing manage.py or any of its subdirectories.
settings_dir = None
files = set(os.listdir(manage_dir))
if 'settings.py' in files and 'urls.py' in files:
settings_dir = manage_dir
else:
for f in files:
if os.path.isdir(os.path.join(manage_dir, f)):
subfiles = os.listdir(os.path.join(manage_dir, f))
# Subdirectory contains critical files.
if 'settings.py' in subfiles and 'urls.py' in subfiles:
settings_dir = os.path.join(manage_dir, f)
break # Find the first directory.
return settings_dir
def opengl_arrays_modules():
"""
Return list of array modules for OpenGL module.
e.g. 'OpenGL.arrays.vbo'
"""
statement = 'import OpenGL; print(OpenGL.__path__[0])'
opengl_mod_path = PyInstaller.hooks.hookutils.exec_statement(statement)
arrays_mod_path = os.path.join(opengl_mod_path, 'arrays')
files = glob.glob(arrays_mod_path + '/*.py')
modules = []
for f in files:
mod = os.path.splitext(os.path.basename(f))[0]
# Skip __init__ module.
if mod == '__init__':
continue
modules.append('OpenGL.arrays.' + mod)
return modules
def remove_prefix(string, prefix):
"""
This function removes the given prefix from a string, if the string does
indeed begin with the prefix; otherwise, it returns the string
unmodified.
"""
if string.startswith(prefix):
return string[len(prefix):]
else:
return string
def remove_suffix(string, suffix):
"""
This function removes the given suffix from a string, if the string
    does indeed end with the suffix; otherwise, it returns the string
unmodified.
"""
# Special case: if suffix is empty, string[:0] returns ''. So, test
# for a non-empty suffix.
if suffix and string.endswith(suffix):
return string[:-len(suffix)]
else:
return string
def remove_file_extension(filename):
"""
This function returns filename without its extension.
"""
return os.path.splitext(filename)[0]
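# Quick self-checks for the three string helpers above (editor's addition);
# each assert restates the documented behaviour.
def _example_string_helpers():
    assert remove_prefix('PyQt4.QtCore', 'PyQt4.') == 'QtCore'
    assert remove_suffix('hook-sphinx.py', '.py') == 'hook-sphinx'
    assert remove_file_extension('hook-sphinx.py') == 'hook-sphinx'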
def get_module_file_attribute(package):
"""
Given a package name, return the value of __file__ attribute.
    Within the PyInstaller process we cannot directly import the modules being analyzed.
"""
# Statement to return __file__ attribute of a package.
__file__statement = """
import %s as p
print(p.__file__)
"""
return exec_statement(__file__statement % package)
def get_package_paths(package):
"""
Given a package, return the path to packages stored on this machine
and also returns the path to this particular package. For example,
if pkg.subpkg lives in /abs/path/to/python/libs, then this function
returns (/abs/path/to/python/libs,
/abs/path/to/python/libs/pkg/subpkg).
"""
# A package must have a path -- check for this, in case the package
# parameter is actually a module.
is_pkg_statement = 'import %s as p; print(hasattr(p, "__path__"))'
is_package = eval_statement(is_pkg_statement % package)
assert is_package, 'Package %s does not have __path__ attribute' % package
file_attr = get_module_file_attribute(package)
# package.__file__ = /abs/path/to/package/subpackage/__init__.py.
# Search for Python files in /abs/path/to/package/subpackage; pkg_dir
# stores this path.
pkg_dir = os.path.dirname(file_attr)
# When found, remove /abs/path/to/ from the filename; pkg_base stores
# this path to be removed.
pkg_base = remove_suffix(pkg_dir, package.replace('.', os.sep))
return pkg_base, pkg_dir
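# Editor's illustration of the return shape; 'distutils' is just a package
# that is guaranteed to exist in any CPython install.
def _example_package_paths():
    pkg_base, pkg_dir = get_package_paths('distutils')
    # pkg_base is the library directory; pkg_dir is pkg_base plus 'distutils'.
    return pkg_base, pkg_dir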
def collect_submodules(package, subdir=None):
"""
The following two functions were originally written by Ryan Welsh
(welchr AT umich.edu).
This produces a list of strings which specify all the modules in
package. Its results can be directly assigned to ``hiddenimports``
in a hook script; see, for example, hook-sphinx.py. The
package parameter must be a string which names the package. The
    optional subdir gives a subdirectory relative to package to search,
which is helpful when submodules are imported at run-time from a
directory lacking __init__.py. See hook-astroid.py for an example.
This function does not work on zipped Python eggs.
This function is used only for hook scripts, but not by the body of
PyInstaller.
"""
pkg_base, pkg_dir = get_package_paths(package)
if subdir:
pkg_dir = os.path.join(pkg_dir, subdir)
# Walk through all file in the given package, looking for submodules.
mods = set()
for dirpath, dirnames, filenames in os.walk(pkg_dir):
# Change from OS separators to a dotted Python module path,
# removing the path up to the package's name. For example,
# '/abs/path/to/desired_package/sub_package' becomes
# 'desired_package.sub_package'
mod_path = remove_prefix(dirpath, pkg_base).replace(os.sep, ".")
# If this subdirectory is a package, add it and all other .py
# files in this subdirectory to the list of modules.
if '__init__.py' in filenames:
mods.add(mod_path)
for f in filenames:
extension = os.path.splitext(f)[1]
if ((remove_file_extension(f) != '__init__') and
extension in PY_EXECUTABLE_SUFFIXES):
mods.add(mod_path + "." + remove_file_extension(f))
else:
# If not, nothing here is part of the package; don't visit any of
# these subdirs.
del dirnames[:]
return list(mods)
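# Typical hook-script usage (editor's sketch), mirroring the hook-sphinx.py
# example mentioned in the docstring above:
def _example_collect_submodules():
    hiddenimports = collect_submodules('sphinx')
    return hiddenimports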
def collect_data_files(package, include_py_files=False, subdir=None):
"""
This routine produces a list of (source, dest) non-Python (i.e. data)
files which reside in package. Its results can be directly assigned to
``datas`` in a hook script; see, for example, hook-sphinx.py. The
package parameter must be a string which names the package.
By default, all Python executable files (those ending in .py, .pyc,
and so on) will NOT be collected; setting the include_py_files
argument to True collects these files as well. This is typically used
with Python routines (such as those in pkgutil) that search a given
directory for Python executable files then load them as extensions or
plugins. See collect_submodules for a description of the subdir parameter.
This function does not work on zipped Python eggs.
This function is used only for hook scripts, but not by the body of
PyInstaller.
"""
pkg_base, pkg_dir = get_package_paths(package)
if subdir:
pkg_dir = os.path.join(pkg_dir, subdir)
# Walk through all file in the given package, looking for data files.
datas = []
for dirpath, dirnames, files in os.walk(pkg_dir):
for f in files:
extension = os.path.splitext(f)[1]
if include_py_files or (not extension in PY_IGNORE_EXTENSIONS):
# Produce the tuple
# (/abs/path/to/source/mod/submod/file.dat,
# mod/submod/file.dat)
source = os.path.join(dirpath, f)
dest = remove_prefix(dirpath,
os.path.dirname(pkg_base) + os.sep)
datas.append((source, dest))
return datas
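# Matching hook-script usage for data files (editor's sketch); the package
# name is illustrative.
def _example_collect_data_files():
    datas = collect_data_files('sphinx')
    return datas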
# The following is refactored out of hook-sysconfig and hook-distutils,
# both of which need to generate "datas" tuples for pyconfig.h and
# Makefile, under the same conditions.
# In virtualenv, _CONFIG_H and _MAKEFILE may have same or different
# prefixes, depending on the version of virtualenv.
# Try to find the correct one, which is assumed to be the longest one.
def _find_prefix(filename):
if not compat.is_venv:
return sys.prefix
prefixes = [os.path.abspath(sys.prefix), compat.base_prefix]
possible_prefixes = []
for prefix in prefixes:
common = os.path.commonprefix([prefix, filename])
if common == prefix:
possible_prefixes.append(prefix)
possible_prefixes.sort(key=lambda p: len(p), reverse=True)
return possible_prefixes[0]
def relpath_to_config_or_make(filename):
# Relative path in the dist directory.
prefix = _find_prefix(filename)
return os.path.relpath(os.path.dirname(filename), prefix)
| 36.851752 | 102 | 0.656378 | 3,641 | 27,344 | 4.808569 | 0.165614 | 0.022961 | 0.018849 | 0.014393 | 0.382625 | 0.333105 | 0.295751 | 0.28467 | 0.267992 | 0.239719 | 0 | 0.006265 | 0.241186 | 27,344 | 741 | 103 | 36.901484 | 0.837534 | 0.191962 | 0 | 0.340686 | 0 | 0 | 0.209551 | 0.059154 | 0 | 0 | 0 | 0.00135 | 0.004902 | 0 | null | null | 0.004902 | 0.095588 | null | null | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d462cfa4f4831e5253a0564fc15f691201c2c792 | 2,837 | py | Python | people/migrations/0001_initial.py | David5627/instagram-clone | f9c4db320d59e757303f247ebc2ff5666b715d0d | [
"MIT"
] | null | null | null | people/migrations/0001_initial.py | David5627/instagram-clone | f9c4db320d59e757303f247ebc2ff5666b715d0d | [
"MIT"
] | null | null | null | people/migrations/0001_initial.py | David5627/instagram-clone | f9c4db320d59e757303f247ebc2ff5666b715d0d | [
"MIT"
] | null | null | null | # Generated by Django 3.1.5 on 2021-01-17 15:24
import cloudinary.models
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Comment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('comment', models.CharField(max_length=300)),
],
),
migrations.CreateModel(
name='Following',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='InstaPhotos',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=20)),
('image', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='image')),
],
),
migrations.CreateModel(
name='Profile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bio', models.TextField(blank=True, null=True)),
('dp', cloudinary.models.CloudinaryField(blank=True, max_length=255, null=True, verbose_name='image')),
('following', models.ManyToManyField(to='people.Following')),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Image',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('image', cloudinary.models.CloudinaryField(max_length=255, null=True, verbose_name='image')),
('name', models.CharField(max_length=30)),
('caption', models.TextField()),
('likes', models.IntegerField(blank=True, null=True)),
('pub_date', models.DateTimeField(auto_now_add=True, null=True)),
('comments', models.ManyToManyField(to='people.Comment')),
('profile', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.DO_NOTHING, to='people.profile')),
],
),
]
| 44.328125 | 141 | 0.59711 | 290 | 2,837 | 5.710345 | 0.275862 | 0.05314 | 0.075483 | 0.069444 | 0.546498 | 0.512681 | 0.512681 | 0.512681 | 0.490942 | 0.490942 | 0 | 0.014833 | 0.263306 | 2,837 | 63 | 142 | 45.031746 | 0.777512 | 0.015862 | 0 | 0.446429 | 1 | 0 | 0.071685 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d463d9c0f6d6f7152fed820bc41a2196a8aa9a63 | 2,506 | py | Python | jobs/models.py | soheltarir/django-es-test | 2cdce24fb0288f16f2526b38d139359dbe8472f5 | [
"MIT"
] | 5 | 2019-12-01T13:06:55.000Z | 2020-01-22T04:21:54.000Z | jobs/models.py | soheltarir/django-es-test | 2cdce24fb0288f16f2526b38d139359dbe8472f5 | [
"MIT"
] | 2 | 2020-06-06T00:15:11.000Z | 2022-02-10T11:24:31.000Z | jobs/models.py | soheltarir/django-elastic-postgres | 2cdce24fb0288f16f2526b38d139359dbe8472f5 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.db import models, connection
from elasticsearch import Elasticsearch
from jobs.managers import BaseManager
class ElasticModelMixin(models.Model):
class Meta:
abstract = True
app_label = 'jobs'
@classmethod
def elastic_index(cls):
"""
:return: Elasticsearch index name / alias of the model
"""
return settings.ELASTIC_CONSTANTS[cls.__name__]['index']
@classmethod
def elastic_type(cls):
"""
:return: Elasticsearch document type of the model
"""
return settings.ELASTIC_CONSTANTS[cls.__name__]['doc_type']
@property
def pg_data(self):
"""
        Retrieves the instance data by executing the configured Postgres function.
:return: Object
"""
_pg_func = settings.ELASTIC_CONSTANTS[self.__class__.__name__]['pg_function']
query = "SELECT {}({})".format(_pg_func, self.id)
cursor = connection.cursor()
cursor.execute(query)
res = cursor.fetchone()
return res[0]
def index_to_elastic(self):
"""
Indexes the instance data in elasticsearch
:raises TransportError if the indexing fails
"""
es_client = Elasticsearch(settings.ELASTIC_HOST)
es_client.index(index=self.elastic_index(), doc_type=self.elastic_type(), id=str(self.id), body=self.pg_data)
def from_elastic(self):
"""
Fetches the instance document from Elasticsearch
:return: Instance data in Dict
:raises ElasticNotFound if the document is not found in elasticsearch
"""
es_client = Elasticsearch(settings.ELASTIC_HOST)
es_res = es_client.get(index=self.elastic_index(), doc_type=self.elastic_type(), id=str(self.id))
return es_res['_source']
class Company(models.Model):
name = models.CharField(max_length=255, null=False, blank=False)
website = models.URLField(max_length=255, null=True)
class Meta:
app_label = 'jobs'
db_table = 'companies'
class Job(ElasticModelMixin):
title = models.CharField(max_length=255, null=False, blank=False)
company = models.ForeignKey('jobs.Company', null=False, on_delete=models.CASCADE)
vacancies = models.PositiveSmallIntegerField(null=False)
salary = models.PositiveIntegerField(null=False)
objects = BaseManager()
class Meta:
app_label = "jobs"
db_table = "jobs"
def __str__(self):
return self.title
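# Editor's sketch of how the mixin is meant to be used; assumes a configured
# Django settings module, a reachable Elasticsearch host, and existing rows.
def _demo_index_job(job_id):
    job = Job.objects.get(pk=job_id)
    job.index_to_elastic()       # push the pg_data document into ES
    return job.from_elastic()    # read the same document back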
| 30.560976 | 117 | 0.660016 | 290 | 2,506 | 5.496552 | 0.344828 | 0.047051 | 0.022585 | 0.030113 | 0.272271 | 0.272271 | 0.272271 | 0.184442 | 0.184442 | 0.067754 | 0 | 0.005252 | 0.240223 | 2,506 | 81 | 118 | 30.938272 | 0.831933 | 0.163208 | 0 | 0.195652 | 0 | 0 | 0.04156 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.086957 | 0.021739 | 0.608696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d46f9cdaa97cb7c58424bb09eee06de910375f45 | 288 | py | Python | unmasked/m3/read_serial.py | saber-collection/saber-collection | b3cbf340c168c8c79a8b2371649934fa90c8be30 | [
"CC0-1.0"
] | 3 | 2021-07-27T09:01:04.000Z | 2022-01-14T03:21:54.000Z | unmasked/m3/read_serial.py | saber-collection/saber-collection | b3cbf340c168c8c79a8b2371649934fa90c8be30 | [
"CC0-1.0"
] | 2 | 2021-12-10T08:59:07.000Z | 2022-01-20T15:04:33.000Z | unmasked/m3/read_serial.py | saber-collection/saber-collection | b3cbf340c168c8c79a8b2371649934fa90c8be30 | [
"CC0-1.0"
] | 1 | 2022-01-07T07:39:45.000Z | 2022-01-07T07:39:45.000Z | #!/usr/bin/env python3
import platform
import serial
import sys
from config import Settings
dev = serial.Serial(Settings.SERIAL_DEVICE, Settings.BAUD_RATE)
print("> Returned data:", file=sys.stderr)
while True:
x = dev.read()
sys.stdout.buffer.write(x)
sys.stdout.flush()
| 18 | 63 | 0.729167 | 42 | 288 | 4.952381 | 0.666667 | 0.086538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004082 | 0.149306 | 288 | 15 | 64 | 19.2 | 0.844898 | 0.072917 | 0 | 0 | 0 | 0 | 0.060377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d472bfcdd59367d1ed14df00006f9db40ba1e325 | 924 | py | Python | planner/migrations/0008_lawnproduct.py | pmontgo33/lawn-care-planner | 0b79a07a301eeb9a15ff1f4f461bdd364151f86a | [
"MIT"
] | 7 | 2016-12-07T18:34:42.000Z | 2021-08-03T15:22:35.000Z | planner/migrations/0008_lawnproduct.py | pmontgo33/lawn-care-planner | 0b79a07a301eeb9a15ff1f4f461bdd364151f86a | [
"MIT"
] | 35 | 2016-11-23T15:40:46.000Z | 2017-05-18T22:58:17.000Z | planner/migrations/0008_lawnproduct.py | pmontgo33/lawn-care-planner | 0b79a07a301eeb9a15ff1f4f461bdd364151f86a | [
"MIT"
] | 1 | 2017-04-13T15:13:01.000Z | 2017-04-13T15:13:01.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.2 on 2016-12-06 03:59
from __future__ import unicode_literals
from django.db import migrations, models
import jsonfield.fields
class Migration(migrations.Migration):
dependencies = [
('planner', '0007_auto_20161123_0805'),
]
operations = [
migrations.CreateModel(
name='LawnProduct',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
('type', models.CharField(choices=[('Grass Seed', 'Grass Seed'), ('Fertilizer', 'Fertilizer'), ('Weed Control', 'Weed Control'), ('Insect Control', 'Insect Control')], max_length=140)),
('links', jsonfield.fields.JSONField()),
('specs', jsonfield.fields.JSONField()),
],
),
]
| 34.222222 | 201 | 0.599567 | 94 | 924 | 5.755319 | 0.648936 | 0.083179 | 0.073937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056115 | 0.247836 | 924 | 26 | 202 | 35.538462 | 0.722302 | 0.073593 | 0 | 0 | 1 | 0 | 0.181712 | 0.026964 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d47599305c1d1b7b8d20de48c506ff172cde37a3 | 5,173 | py | Python | pyaz/search/service/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/search/service/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/search/service/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | 1 | 2022-02-03T09:12:01.000Z | 2022-02-03T09:12:01.000Z | '''
Manage Azure Search services.
'''
from ... pyaz_utils import _call_az
def list(resource_group):
'''
Required Parameters:
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
'''
return _call_az("az search service list", locals())
def show(name, resource_group):
'''
Required Parameters:
- name -- The name of the search service.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
'''
return _call_az("az search service show", locals())
def delete(name, resource_group, yes=None):
'''
Required Parameters:
- name -- The name of the search service.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
Optional Parameters:
- yes -- Do not prompt for confirmation.
'''
return _call_az("az search service delete", locals())
def update(name, resource_group, add=None, force_string=None, identity_type=None, ip_rules=None, no_wait=None, partition_count=None, public_network_access=None, remove=None, replica_count=None, set=None):
'''
Update partition and replica of the given search service.
Required Parameters:
- name -- The name of the search service.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
Optional Parameters:
- add -- Add an object to a list of objects by specifying a path and key value pairs. Example: --add property.listProperty <key=value, string or JSON string>
- force_string -- When using 'set' or 'add', preserve string literals instead of attempting to convert to JSON.
- identity_type -- The identity type; possible values include: "None", "SystemAssigned".
- ip_rules -- Public IP(v4) addresses or CIDR ranges to the search service, seperated by comma(',') or semicolon(';'); If spaces (' '), ',' or ';' is provided, any existing IP rule will be nullified and no public IP rule is applied. These IP rules are applicable only when public_network_access is "enabled".
- no_wait -- Do not wait for the long-running operation to finish.
- partition_count -- Number of partitions in the search service.
- public_network_access -- Public accessibility to the search service; allowed values are "enabled" or "disabled".
- remove -- Remove a property or an element from a list. Example: --remove property.list <indexToRemove> OR --remove propertyToRemove
- replica_count -- Number of replicas in the search service.
- set -- Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>
'''
return _call_az("az search service update", locals())
def create(name, resource_group, sku, identity_type=None, ip_rules=None, location=None, no_wait=None, partition_count=None, public_network_access=None, replica_count=None):
'''
Creates a Search service in the given resource group.
Required Parameters:
- name -- The name of the search service.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
- sku -- Search Service SKU
Optional Parameters:
- identity_type -- The identity type; possible values include: "None", "SystemAssigned".
- ip_rules -- Public IP(v4) addresses or CIDR ranges to the search service, seperated by comma or semicolon; these IP rules are applicable only when --public-network-access is "enabled".
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- no_wait -- Do not wait for the long-running operation to finish.
- partition_count -- Number of partitions in the search service.
- public_network_access -- Public accessibility to the search service; allowed values are "enabled" or "disabled".
- replica_count -- Number of replicas in the search service.
'''
return _call_az("az search service create", locals())
def wait(name, resource_group, created=None, custom=None, deleted=None, exists=None, interval=None, timeout=None, updated=None):
'''
Wait for async service operations.
Required Parameters:
- name -- The name of the search service.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
Optional Parameters:
- created -- wait until created with 'provisioningState' at 'Succeeded'
- custom -- Wait until the condition satisfies a custom JMESPath query. E.g. provisioningState!='InProgress', instanceView.statuses[?code=='PowerState/running']
- deleted -- wait until deleted
- exists -- wait until the resource exists
- interval -- polling interval in seconds
- timeout -- maximum wait in seconds
- updated -- wait until updated with provisioningState at 'Succeeded'
'''
return _call_az("az search service wait", locals())
| 49.740385 | 312 | 0.715832 | 693 | 5,173 | 5.258297 | 0.226551 | 0.078485 | 0.05708 | 0.034577 | 0.581778 | 0.574918 | 0.531559 | 0.531559 | 0.531559 | 0.506312 | 0 | 0.000955 | 0.190218 | 5,173 | 103 | 313 | 50.223301 | 0.868942 | 0.744442 | 0 | 0 | 0 | 0 | 0.134111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.461538 | false | 0 | 0.076923 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d47c7410ad2e1d295e58a4e424616eb38492c3d3 | 3,502 | py | Python | care_batch/train.py | amedyukhina/care_batch | 7670eb7bbd9339dcc580cf8686c79900253392eb | [
"Apache-2.0"
] | null | null | null | care_batch/train.py | amedyukhina/care_batch | 7670eb7bbd9339dcc580cf8686c79900253392eb | [
"Apache-2.0"
] | null | null | null | care_batch/train.py | amedyukhina/care_batch | 7670eb7bbd9339dcc580cf8686c79900253392eb | [
"Apache-2.0"
] | null | null | null | from __future__ import print_function, unicode_literals, absolute_import, division
import os
import matplotlib.pyplot as plt
from csbdeep.io import load_training_data
from csbdeep.models import Config, CARE
from csbdeep.utils import axes_dict, plot_history
from csbdeep.utils.tf import limit_gpu_memory
def train(data_file, model_name, model_basedir, validation_split=0.2, limit_gpu=0.5, save_history=True, **kwargs):
"""
Parameters
----------
data_file : str
File name for training data in ``.npz`` format
validation_split : float
Fraction of images to use as validation set during training.
model_name : str
Model name.
model_basedir : str
Path to model folder (which stores configuration, weights, etc.)
limit_gpu : float
Fraction of the GPU memory to use.
Default: 0.5
save_history : bool
If True, save the training history.
Default is True.
kwargs : key value
Configuration attributes (see below).
Attributes
----------
probabilistic : bool
Probabilistic prediction of per-pixel Laplace distributions or
typical regression of per-pixel scalar values.
n_dim : int
Dimensionality of input images (2 or 3).
unet_residual : bool
Parameter `residual` of :func:`csbdeep.nets.common_unet`. Default: ``n_channel_in == n_channel_out``
unet_n_depth : int
Parameter `n_depth` of :func:`csbdeep.nets.common_unet`. Default: ``2``
unet_kern_size : int
Parameter `kern_size` of :func:`csbdeep.nets.common_unet`. Default: ``5 if n_dim==2 else 3``
unet_n_first : int
Parameter `n_first` of :func:`csbdeep.nets.common_unet`. Default: ``32``
unet_last_activation : str
Parameter `last_activation` of :func:`csbdeep.nets.common_unet`. Default: ``linear``
train_loss : str
Name of training loss. Default: ``'laplace' if probabilistic else 'mae'``
train_epochs : int
Number of training epochs. Default: ``100``
train_steps_per_epoch : int
Number of parameter update steps per epoch. Default: ``400``
train_learning_rate : float
Learning rate for training. Default: ``0.0004``
train_batch_size : int
Batch size for training. Default: ``16``
train_tensorboard : bool
Enable TensorBoard for monitoring training progress. Default: ``True``
train_checkpoint : str
Name of checkpoint file for model weights (only best are saved); set to ``None`` to disable.
Default: ``weights_best.h5``
train_reduce_lr : dict
Parameter :class:`dict` of ReduceLROnPlateau_ callback; set to ``None`` to disable.
Default: ``{'factor': 0.5, 'patience': 10, 'min_delta': 0}``
.. _ReduceLROnPlateau: https://keras.io/callbacks/#reducelronplateau
"""
(X, Y), (X_val, Y_val), axes = load_training_data(data_file, validation_split=validation_split, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
limit_gpu_memory(fraction=limit_gpu)
config = Config(axes, n_channel_in, n_channel_out, **kwargs)
model = CARE(config, model_name, basedir=model_basedir)
history = model.train(X, Y, validation_data=(X_val, Y_val))
if save_history:
plt.figure(figsize=(16, 5))
plot_history(history, ['loss', 'val_loss'], ['mse', 'val_mse', 'mae', 'val_mae'])
plt.savefig(os.path.join(model_basedir, model_name, 'history.png'))
| 41.690476 | 114 | 0.679326 | 474 | 3,502 | 4.814346 | 0.339662 | 0.021034 | 0.028484 | 0.037248 | 0.124014 | 0.124014 | 0.074496 | 0 | 0 | 0 | 0 | 0.013086 | 0.214449 | 3,502 | 83 | 115 | 42.192771 | 0.81643 | 0.625928 | 0 | 0 | 0 | 0 | 0.040703 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.368421 | 0 | 0.421053 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d48253284557462c50068a4ee14dab1e8cdaa744 | 623 | py | Python | weather_bot/generate_audio_files.py | tinalimbudev/weather-bot | 6e59420cac3a6b31d52983287c42ef9f4feb3422 | [
"MIT"
] | null | null | null | weather_bot/generate_audio_files.py | tinalimbudev/weather-bot | 6e59420cac3a6b31d52983287c42ef9f4feb3422 | [
"MIT"
] | null | null | null | weather_bot/generate_audio_files.py | tinalimbudev/weather-bot | 6e59420cac3a6b31d52983287c42ef9f4feb3422 | [
"MIT"
] | null | null | null | import os
from gtts import gTTS
from pathlib import Path
def generate_audio_file_from_text(
text, file_name, file_type="mp3", language="en", slow=False
):
audio = gTTS(text=text, lang=language, slow=slow)
file_path = os.path.join(
Path().absolute(),
"media",
"common_responses",
f"{file_name}.{file_type}",
)
audio.save(file_path)
return file_path
if __name__ == "__main__":
# Replace with list of tuples.
# E.g. [("Please could you repeat that?", "pardon")]
texts_and_file_names = []
for text, file_name in texts_and_file_names:
generate_audio_file_from_text(text, file_name)
| 22.25 | 61 | 0.701445 | 93 | 623 | 4.354839 | 0.505376 | 0.079012 | 0.088889 | 0.103704 | 0.182716 | 0.182716 | 0.182716 | 0.182716 | 0 | 0 | 0 | 0.001942 | 0.173355 | 623 | 27 | 62 | 23.074074 | 0.784466 | 0.126806 | 0 | 0 | 1 | 0 | 0.10536 | 0.042514 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.157895 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4831cb1233c25b64c8427e8a7e41ee32b1bfca1 | 2,917 | py | Python | app.py | sohje/__flask_psgr | 4d8b201b93b72f55965cbaa030fcfb062b818803 | [
"MIT"
] | null | null | null | app.py | sohje/__flask_psgr | 4d8b201b93b72f55965cbaa030fcfb062b818803 | [
"MIT"
] | null | null | null | app.py | sohje/__flask_psgr | 4d8b201b93b72f55965cbaa030fcfb062b818803 | [
"MIT"
] | null | null | null | import os
from sqlalchemy import (create_engine, MetaData,
Table, Column, Integer, Text, String, DateTime)
from flask import Flask, request, jsonify, g
from mock_session import session_info_retriever
app = Flask(__name__)
# config.DevelopmentConfig -> sqlite://testing.db
# config.ProductionConfig -> postgresql://localhost/testing
app.config.from_object(os.environ.get('APP_SETTINGS', 'config.DevelopmentConfig'))
engine = create_engine(app.config['DATABASE_URI'], convert_unicode=True)
metadata = MetaData(bind=engine)
users = Table('users', metadata,
Column('user_id', Integer, primary_key=True),
Column('name', Text),
Column('sex', Text),
Column('birthday', DateTime),
Column('city', Text),
Column('country', Text),
Column('ethnicity', Text)
)
error_sess_obj = {'status': 'error', 'Message': 'Invalid session'}
error_data_obj = {'status': 'error', 'Message': 'Invalid data'}
def init_db():
from mock_users import users_list
db = get_db()
db.execute('DROP TABLE IF EXISTS users;')
metadata.create_all()
db.execute(users.insert(), users_list)
def get_db():
db = getattr(g, '_database', None)
if db is None:
db = g._database = engine.connect()
return db
@app.before_request
def before_request():
g.db = get_db()
@app.teardown_appcontext
def teardown_db(exception):
db = getattr(g, '_database', None)
if db is not None:
db.close()
# initialize db
@app.before_first_request
def init_me():
init_db()
# retrieve user profile
@app.route('/api/v1/profiles/<user_id>', methods=['GET'])
def return_profile(user_id):
# validate user session
session = request.cookies.get('session') or request.args.get('session')
session_info = session_info_retriever(session)
if session_info['data']['session_exists'] == False: # check session
return jsonify(error_sess_obj)
st = users.select().where(users.c.user_id == user_id)
result = g.db.execute(st).fetchone()
    return jsonify(dict(result)) if result is not None else jsonify({})
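# Example request for this endpoint (placeholder host and session token):
#   curl 'http://127.0.0.1:5000/api/v1/profiles/1?session=abc123'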
# update profile
@app.route('/api/v1/profiles/self', methods=['PUT', 'PATCH'])
def change_profile():
session = request.cookies.get('session') or request.args.get('session')
session_info = session_info_retriever(session)
data = request.get_json()
if session_info['data']['session_exists'] == False: # check session
return jsonify(error_sess_obj)
elif not data:
return jsonify(error_data_obj)
user_id = session_info['data']['session_data']['user_id']
# todo: validate json body before exec
# todo: patch/put (update/modify entries)
st = users.update().where(users.c.user_id == user_id).values(data)
try:
g.db.execute(st)
except Exception as e:
return jsonify({'status': 'error', 'message': str(e), 'data': data})
return jsonify({'status': 'OK'})
if __name__ == '__main__':
app.run()
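# --- Usage sketch (not part of the original file) ---
# Hedged example requests against the two endpoints above, assuming the app
# runs on the Flask default http://127.0.0.1:5000 and that the mock session
# backend accepts the placeholder token 'abc123':
#
#   curl 'http://127.0.0.1:5000/api/v1/profiles/1?session=abc123'
#   curl -X PUT 'http://127.0.0.1:5000/api/v1/profiles/self?session=abc123' \
#        -H 'Content-Type: application/json' -d '{"city": "Prague"}'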
| 31.031915 | 82 | 0.685979 | 390 | 2,917 | 4.938462 | 0.312821 | 0.028037 | 0.031153 | 0.034268 | 0.281412 | 0.252336 | 0.223261 | 0.199377 | 0.170301 | 0.170301 | 0 | 0.000825 | 0.168666 | 2,917 | 93 | 83 | 31.365591 | 0.793402 | 0.097017 | 0 | 0.144928 | 0 | 0 | 0.14716 | 0.027068 | 0 | 0 | 0 | 0.010753 | 0 | 1 | 0.101449 | false | 0 | 0.072464 | 0 | 0.275362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d489decc18d8071a490883065a8c77ea014b486d | 1,221 | py | Python | draw.py | Adamv27/Pygame-drawing | 7f4f6e9eb1599fbfc8b96fe790ecfa0d0f821c58 | [
"MIT"
] | null | null | null | draw.py | Adamv27/Pygame-drawing | 7f4f6e9eb1599fbfc8b96fe790ecfa0d0f821c58 | [
"MIT"
] | null | null | null | draw.py | Adamv27/Pygame-drawing | 7f4f6e9eb1599fbfc8b96fe790ecfa0d0f821c58 | [
"MIT"
] | null | null | null | import sys
import pygame
def draw_canvas(screen, colors):
screen.fill(colors[0])
pygame.draw.rect(screen, colors[5], (20, 20, 500, 500))
index = 1 # skip light grey
for row in range(2):
for column in range(4):
pygame.draw.rect(screen, colors[index], ((60 * column) + 20, (60 * row) + 530, 60, 60))
index += 1
# Draw clear button
pygame.draw.rect(screen, colors[5], (280, 530, 120, 55))
font = pygame.font.SysFont(None, 48)
text_surface = font.render('Clear', True, colors[0])
screen.blit(text_surface, (295, 540))
# Draw sizing buttons
offset = 0
sizes = ['1', '2', '3']
for i in range(3):
if i > 0:
offset = 5 * i
pygame.draw.rect(screen, colors[5], ((36 * i) + 280 + offset, 595, 36, 36))
text_surface = font.render(sizes[i], True, colors[0])
screen.blit(text_surface, ((36 * i) + 288 + offset, 598))
def fill_square(screen, color, column, row):
pygame.draw.rect(screen, color, (20 + (5 * column), 20 + (5 * row), 5, 5))
pygame.display.update()
def clear_canvas(screen):
pygame.draw.rect(screen, (255, 255, 255), (20, 20, 500, 500)) | 32.131579 | 100 | 0.564292 | 176 | 1,221 | 3.875 | 0.340909 | 0.087977 | 0.123167 | 0.175953 | 0.250733 | 0.21261 | 0.093842 | 0 | 0 | 0 | 0 | 0.119639 | 0.274365 | 1,221 | 38 | 101 | 32.131579 | 0.650113 | 0.043407 | 0 | 0 | 0 | 0 | 0.007092 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.074074 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
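# --- Usage sketch (not part of the original file) ---
# A minimal driver for the drawing helpers above. The window size (540x640)
# is inferred from the coordinates used in draw_canvas, and the palette is a
# placeholder; the code requires at least 9 colors, with index 0 used as the
# background and index 5 as the UI grey:
#
# import pygame
# pygame.init()
# screen = pygame.display.set_mode((540, 640))
# colors = [(230, 230, 230), (255, 0, 0), (0, 255, 0), (0, 0, 255),
#           (255, 255, 0), (120, 120, 120), (0, 0, 0), (255, 0, 255),
#           (0, 255, 255)]
# draw_canvas(screen, colors)
# pygame.display.update()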
d48a44f09de17b16e44c7192aa9ce93c25f50f90 | 1,027 | py | Python | fixture/soap.py | AlekseyVR/Python_mantis | ee07710d2fb5578bfe2d1906344ee7366c5878d4 | [
"Apache-2.0"
] | null | null | null | fixture/soap.py | AlekseyVR/Python_mantis | ee07710d2fb5578bfe2d1906344ee7366c5878d4 | [
"Apache-2.0"
] | null | null | null | fixture/soap.py | AlekseyVR/Python_mantis | ee07710d2fb5578bfe2d1906344ee7366c5878d4 | [
"Apache-2.0"
] | null | null | null | from suds.client import Client
from suds import WebFault
from model.project import Project
class SoapHelper:
def __init__(self, app):
self.app = app
def can_login(self, username, password):
client = Client(self.app.base_url + "api/soap/mantisconnect.php?wsdl")
try:
client.service.mc_login(username, password)
return True
except WebFault:
return False
def get_project_list_user(self, username, password):
self.can_login(username, password)
client = Client(self.app.base_url + "api/soap/mantisconnect.php?wsdl")
try:
response = client.service.mc_projects_get_user_accessible(username, password)
project_list = []
for element in response:
project = Project(id=element.id, name_project=element.name, description_project=element.description)
project_list.append(project)
return project_list
except WebFault:
return False
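# --- Usage sketch (not part of the original file) ---
# Hedged example, assuming `app` is the test fixture this helper is normally
# attached to (only a `base_url` attribute is needed here), and assuming the
# Project model exposes its constructor kwargs as attributes. The URL and
# credentials are placeholders:
#
# class FakeApp:
#     base_url = "http://localhost/mantisbt/"
#
# soap = SoapHelper(FakeApp())
# if soap.can_login("administrator", "root"):
#     for project in soap.get_project_list_user("administrator", "root"):
#         print(project.name_project)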
| 32.09375 | 116 | 0.646543 | 118 | 1,027 | 5.449153 | 0.372881 | 0.124417 | 0.062208 | 0.087092 | 0.22395 | 0.22395 | 0.22395 | 0.22395 | 0.22395 | 0.22395 | 0 | 0 | 0.27556 | 1,027 | 31 | 117 | 33.129032 | 0.864247 | 0 | 0 | 0.32 | 0 | 0 | 0.060429 | 0.060429 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0.2 | 0.12 | 0 | 0.44 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d48ad6f933339a03da03cb3447d55414174ab54a | 303 | py | Python | samples/sample.py | andy1xx8/spacy-train-tools | 8b3416c2505f43fcd06d40578d1938f284f8535b | [
"MIT"
] | null | null | null | samples/sample.py | andy1xx8/spacy-train-tools | 8b3416c2505f43fcd06d40578d1938f284f8535b | [
"MIT"
] | null | null | null | samples/sample.py | andy1xx8/spacy-train-tools | 8b3416c2505f43fcd06d40578d1938f284f8535b | [
"MIT"
] | null | null | null | from src.spacy_train_tools.train import train_spacy_model
if __name__ == "__main__":
train_spacy_model(
config_file='./en_config.cfg',
vector_file='en_core_web_lg',
train_file='./data/train.jsonl',
dev_file='./data/dev.jsonl',
output_folder='./models'
)
| 27.545455 | 57 | 0.653465 | 40 | 303 | 4.375 | 0.6 | 0.114286 | 0.171429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211221 | 303 | 10 | 58 | 30.3 | 0.732218 | 0 | 0 | 0 | 0 | 0 | 0.260726 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
00f755c8104812d495b3d40142b84b1df9d1f2b7 | 7,970 | py | Python | cpc.py | vkramskikh/cgminer-pool-chooser | 507ebe84ae125e1b3c3a96afe8ab04bf7cdf3325 | [
"MIT"
] | 1 | 2018-02-12T12:52:59.000Z | 2018-02-12T12:52:59.000Z | cpc.py | vkramskikh/cgminer-pool-chooser | 507ebe84ae125e1b3c3a96afe8ab04bf7cdf3325 | [
"MIT"
] | null | null | null | cpc.py | vkramskikh/cgminer-pool-chooser | 507ebe84ae125e1b3c3a96afe8ab04bf7cdf3325 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import sys
import time
import json
import yaml
import socket
import argparse
import traceback
from pycgminer import CgminerAPI
from data_providers import CoinwarzAPI, CryptsyAPI
from rating_calculator import RatingCalculator
import logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
)
logger = logging.getLogger('cpc')
class CPC(object):
def __init__(self, config):
self.config = config
self.cgminer = CgminerAPI(config['cgminer']['host'], config['cgminer']['port'])
self.coinwarz = CoinwarzAPI(config['coinwarz'])
self.cryptsy = CryptsyAPI(config['cryptsy'])
self.hashrate = self.config['hashrate']
def restart_cgminer(self):
logger.info('Restarting CGMiner...')
try:
self.cgminer.restart()
except ValueError:
pass
while True:
try:
self.cgminer.version()
except socket.error:
time.sleep(1)
else:
break
logger.info('CGMiner restarted')
def cgminer_pools(self):
pools = []
for pool in self.cgminer.pools()['POOLS']:
currency = self.config['pool_currency'].get(pool['URL'])
if pool['Status'] != 'Alive':
logger.warning('Pool %s status is %s', pool['URL'], pool['Status'])
if not currency:
logger.error('Unknown currency for pool %s', pool['URL'])
continue
pool['Currency'] = currency
pools.append(pool)
return pools
def get_currencies(self):
currencies = {}
btc_price = None
price_data = self.cryptsy.get_data()['return']['markets']
difficulty_data = self.coinwarz.get_data()['Data']
for label, currency_price_data in price_data.items():
if currency_price_data['secondarycode'] != 'BTC':
continue
currency_data = currencies[currency_price_data['primarycode']] = {}
currency_data['id'] = currency_price_data['primarycode']
currency_data['name'] = currency_price_data['primaryname']
currency_data['price'] = float(currency_price_data['lasttradeprice'])
currency_data['exchange_volume'] = float(currency_price_data['volume'])
for currency_difficulty_data in difficulty_data:
currency = currency_difficulty_data['CoinTag']
if currency == 'BTC':
btc_price = currency_difficulty_data['ExchangeRate']
continue
if currency not in currencies:
continue
currency_data = currencies[currency]
currency_data['profit_growth'] = currency_difficulty_data['ProfitRatio'] / currency_difficulty_data['AvgProfitRatio']
currency_data['difficulty'] = currency_difficulty_data['Difficulty']
currency_data['block_reward'] = currency_difficulty_data['BlockReward']
currency_data['coins_per_day'] = 86400 * self.hashrate * currency_data['block_reward'] / (currency_data['difficulty'] * 2 ** 32)
currencies = {k: v for k, v in currencies.iteritems() if 'coins_per_day' in v}
for currency_data in currencies.values():
currency_data['usd_per_day'] = currency_data['coins_per_day'] * currency_data['price'] * btc_price
currency_data['rating'] = RatingCalculator.rate_currency(currency_data)
return currencies
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='CGMiner Pool Chooser')
parser.add_argument(
'--config', dest='config', type=argparse.FileType('r'), default='cpc.yaml'
)
parser.add_argument(
'--data-only', dest='data_only', action='store_true'
)
parser.add_argument(
'--no-priority-change', dest='no_priority_change', action='store_true'
)
parser.add_argument(
'--no-loop', dest='no_loop', action='store_true'
)
args = parser.parse_args()
cpc = CPC(yaml.load(args.config))
while True:
try:
try:
cgminer_version = cpc.cgminer.version()['VERSION'][0]
logger.debug('Connected to CGMiner v{CGMiner} API v{API}'.format(**cgminer_version))
cgminer_summary = cpc.cgminer.summary()['SUMMARY'][0]
cpc.hashrate = cgminer_summary['MHS av'] * 1000000
except Exception:
logger.error('Unable to get CGMiner info: %s', traceback.format_exc())
logger.info('Using hashrate from config: %d Kh/s', cpc.hashrate)
currencies = cpc.get_currencies()
prioritized_currencies = list(reversed(sorted(currencies.values(), key=lambda c: c['rating'])))
if args.data_only:
print json.dumps(prioritized_currencies, indent=2)
exit(0)
pools = cpc.cgminer_pools()
active_pools = filter(lambda p: p['Stratum Active'], pools)
active_currency = None
if len(active_pools):
active_currency = currencies[active_pools[0]['Currency']]
active_currency_info = cpc.cgminer.coin()['COIN'][0]
logger.info('Currently mining %s ($%.2f/d, diff %f, %d Kh/s) on %s',
active_currency['name'],
active_currency['usd_per_day'],
active_currency_info['Network Difficulty'],
cpc.hashrate / 1000,
active_pools[0]['URL'])
else:
logger.error('No active pools found')
prioritized_currencies = [c for c in prioritized_currencies if c['id'] in (p['Currency'] for p in pools)]
logger.info('Currency priority: %s', ', '.join('%s(%.2f,$%.2f/d)' % (c['name'], c['rating'], c['usd_per_day']) for c in prioritized_currencies))
prioritized_pools = []
for currency in prioritized_currencies:
prioritized_pools += [p for p in pools if p['Currency'] == currency['id']]
pool_priority = ','.join(str(p['POOL']) for p in prioritized_pools)
logger.debug('Pool priority: %s', pool_priority)
change_priority = True
proposed_currency = prioritized_currencies[0]
if active_currency:
rating_ratio = proposed_currency['rating'] / active_currency['rating']
currency_switch_threshold = cpc.config['currency_switch_threshold']
if rating_ratio < currency_switch_threshold:
change_priority = False
logger.info('Rating ratio %f < %f, leaving pool priority as it is', rating_ratio, currency_switch_threshold)
else:
logger.info('Rating ratio %f >= %f, applying new pool priority', rating_ratio, currency_switch_threshold)
if not args.no_priority_change and change_priority:
if cpc.config['cgminer']['restart_on_pool_change']:
cpc.restart_cgminer()
response = cpc.cgminer.poolpriority(pool_priority)
priority_changed = response['STATUS'][0]['STATUS'] == 'S'
getattr(logger, priority_changed and 'info' or 'error')(response['STATUS'][0]['Msg'])
if not priority_changed:
raise ValueError('Unable to change pool priority')
if args.no_loop:
break
except Exception:
logger.error('Error occured during main loop: %s', traceback.format_exc())
logger.info('Retrying after %ds', cpc.config['retry_interval'])
time.sleep(cpc.config['retry_interval'])
else:
logger.info('Retrying after %ds', cpc.config['pool_choose_interval'])
time.sleep(cpc.config['pool_choose_interval'])
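# --- Example configuration (not part of the original file) ---
# A hypothetical cpc.yaml sketching the keys this script reads; every value is
# a placeholder, the pool URL/currency pair is invented, and the shape of the
# coinwarz/cryptsy sub-configs is an assumption:
#
#   cgminer:
#     host: 127.0.0.1
#     port: 4028
#     restart_on_pool_change: false
#   coinwarz: {apikey: YOUR_KEY}
#   cryptsy: {}
#   hashrate: 500000                  # fallback hashrate if the API is down
#   currency_switch_threshold: 1.1
#   retry_interval: 60                # seconds
#   pool_choose_interval: 600         # seconds
#   pool_currency:
#     stratum+tcp://example-pool:3333: DOGE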
| 43.791209 | 156 | 0.599749 | 864 | 7,970 | 5.332176 | 0.239583 | 0.046885 | 0.02583 | 0.01628 | 0.168656 | 0.067289 | 0.02952 | 0 | 0 | 0 | 0 | 0.005771 | 0.28256 | 7,970 | 181 | 157 | 44.033149 | 0.79993 | 0.002509 | 0 | 0.139241 | 0 | 0.006329 | 0.179645 | 0.005913 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006329 | 0.06962 | null | null | 0.006329 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
00f8e507c83b1e8a355cf2a4def195c5fc9c92f9 | 997 | py | Python | tournaments/migrations/0010_auto_20210413_1103.py | lejbron/arkenstone | d5341c27ba81eaf116e5ee5983b4fa422437d294 | [
"MIT"
] | null | null | null | tournaments/migrations/0010_auto_20210413_1103.py | lejbron/arkenstone | d5341c27ba81eaf116e5ee5983b4fa422437d294 | [
"MIT"
] | 4 | 2021-03-17T19:46:35.000Z | 2021-04-09T11:37:53.000Z | tournaments/migrations/0010_auto_20210413_1103.py | lejbron/arkenstone | d5341c27ba81eaf116e5ee5983b4fa422437d294 | [
"MIT"
] | 1 | 2021-04-11T07:50:56.000Z | 2021-04-11T07:50:56.000Z | # Generated by Django 3.1.4 on 2021-04-13 11:03
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('tournaments', '0009_match_table'),
]
operations = [
migrations.AlterField(
model_name='tour',
name='tournament',
field=models.ForeignKey(limit_choices_to={'tt_status__in': ['ann', 'reg', 'act']}, on_delete=django.db.models.deletion.CASCADE, related_name='tours', to='tournaments.tournament'),
),
migrations.AlterField(
model_name='tournament',
name='superviser',
            field=models.ForeignKey(help_text='Specify the tournament organizer', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='owned_tournaments', to=settings.AUTH_USER_MODEL, verbose_name='Organizer'),
),
]
| 35.607143 | 226 | 0.68004 | 114 | 997 | 5.754386 | 0.561404 | 0.04878 | 0.064024 | 0.10061 | 0.091463 | 0.091463 | 0 | 0 | 0 | 0 | 0 | 0.02375 | 0.197593 | 997 | 27 | 227 | 36.925926 | 0.79625 | 0.045135 | 0 | 0.2 | 1 | 0 | 0.174737 | 0.023158 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e021c651d7a900ad2c435ab111127800dcc4a9d | 358 | py | Python | Exe19_centena_dezena_unidade.py | lucaslk122/Exercicios_Python_estutura_decisao | 51a9699c5d85aa6cfb163d891c56e804a7255634 | [
"MIT"
] | null | null | null | Exe19_centena_dezena_unidade.py | lucaslk122/Exercicios_Python_estutura_decisao | 51a9699c5d85aa6cfb163d891c56e804a7255634 | [
"MIT"
] | null | null | null | Exe19_centena_dezena_unidade.py | lucaslk122/Exercicios_Python_estutura_decisao | 51a9699c5d85aa6cfb163d891c56e804a7255634 | [
"MIT"
] | null | null | null | numero = int(input("Entre com um numero inteiro e menor que 1000: "))
if numero > 1000:
print("Numero invalido")
else:
unidade = numero % 10
numero = (numero - unidade) / 10
dezena = int(numero % 10)
numero = (numero - dezena) / 10
centena = int(numero)
print(f"{centena} centena(s) , {dezena} dezena(s) e {unidade} unidade(s)")
| 29.833333 | 78 | 0.625698 | 49 | 358 | 4.571429 | 0.428571 | 0.071429 | 0.125 | 0.178571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058182 | 0.231844 | 358 | 11 | 79 | 32.545455 | 0.756364 | 0 | 0 | 0 | 0 | 0.1 | 0.35014 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e0377ac4e43c9d229393037f3e74eee30abc01b | 2,363 | py | Python | tests/report_test.py | thepatrik/corunner | a68ed8e4f9f659b71a03c37833710544532c302d | [
"MIT"
] | null | null | null | tests/report_test.py | thepatrik/corunner | a68ed8e4f9f659b71a03c37833710544532c302d | [
"MIT"
] | null | null | null | tests/report_test.py | thepatrik/corunner | a68ed8e4f9f659b71a03c37833710544532c302d | [
"MIT"
] | null | null | null | import time
from corunner.report import Execution, Report
def test_fastest():
report = _get_report()
assert report.fastest().latency == 100.0
assert report.fastest(include_errored=True).latency == 1.0
def test_slowest():
report = _get_report()
assert report.slowest().latency == 100.0
assert report.slowest(include_errored=True).latency == 100.0
def test_count():
report = _get_report()
assert report.count() == 1
assert report.count(include_errored=True) == 2
def test_average():
report = _get_report()
assert report.average() == 100.0
assert report.average(include_errored=True) == 50.5
def test_execution_ids():
report = _get_report()
ids = report.execution_ids()
ids_with_err = report.execution_ids(include_errored=True)
assert len(ids) == 1
assert 'test1' in ids
assert len(ids_with_err) == 2
assert 'test1' in ids_with_err
assert 'test2' in ids_with_err
def test_latencies():
report = _get_report()
assert report.latencies() == [100.0]
assert report.latencies(include_errored=True) == [1.0, 100.0]
def test_latency_sum():
report = _get_report()
assert report.latency_sum() == 100.0
assert report.latency_sum(include_errored=True) == 101.0
def test_execution_time():
report = _get_report()
assert report.execution_time() == 100.0
assert report.execution_time(include_errored=True) == 101.0
def test_has_execution():
report = _get_report()
assert report.has_execution('test1')
assert not report.has_execution('test2')
assert report.has_execution('test2', include_errored=True)
assert not report.has_execution('kalle', include_errored=False)
def test_child_report():
report = _get_report()
caught_err = False
try:
report.child_report('test3')
except BaseException:
caught_err = True
assert report.child_report('test1')
assert caught_err
def test_success_rate():
assert _get_report().success_rate() == 0.5
def test_errors():
assert len(_get_report().errors()) == 1
def _get_report():
latency = 100.0
ts = time.time() * 1000
e1 = Execution('test1', ts, ts + latency, latency, None)
ts = ts + latency
latency = 1.0
e2 = Execution('test2', ts, ts + latency, latency, ValueError('Nooo'))
return Report([e1, e2])
| 21.87963 | 74 | 0.679221 | 317 | 2,363 | 4.817035 | 0.173502 | 0.133595 | 0.098232 | 0.11002 | 0.244925 | 0.037983 | 0.037983 | 0 | 0 | 0 | 0 | 0.041204 | 0.1989 | 2,363 | 107 | 75 | 22.084112 | 0.765452 | 0 | 0 | 0.151515 | 0 | 0 | 0.024968 | 0 | 0 | 0 | 0 | 0 | 0.409091 | 1 | 0.19697 | false | 0 | 0.030303 | 0 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e06145b8cef85327fb9fa4107bd574429bd28d9 | 3,891 | py | Python | lib/synthetic_data.py | ppmdatix/rtdl | a01ecd9ae6b673f4e82e51f804ffd7031c7350a0 | [
"Apache-2.0"
] | 298 | 2021-06-22T15:41:18.000Z | 2022-03-09T07:52:30.000Z | lib/synthetic_data.py | ppmdatix/rtdl | a01ecd9ae6b673f4e82e51f804ffd7031c7350a0 | [
"Apache-2.0"
] | 15 | 2022-03-15T15:28:27.000Z | 2022-03-30T12:15:01.000Z | lib/synthetic_data.py | ppmdatix/rtdl | a01ecd9ae6b673f4e82e51f804ffd7031c7350a0 | [
"Apache-2.0"
] | 37 | 2021-06-25T03:56:37.000Z | 2022-03-10T11:07:51.000Z | "Code used to generate data for experiments with synthetic data"
import math
import typing as ty
import numba
import numpy as np
import torch
import torch.nn as nn
from numba.experimental import jitclass
from tqdm.auto import tqdm
class MLP(nn.Module):
def __init__(
self,
*,
d_in: int,
d_layers: ty.List[int],
d_out: int,
bias: bool = True,
) -> None:
super().__init__()
self.layers = nn.ModuleList(
[
nn.Linear(d_layers[i - 1] if i else d_in, x, bias=bias)
for i, x in enumerate(d_layers)
]
)
self.head = nn.Linear(d_layers[-1] if d_layers else d_in, d_out)
def init_weights(m):
if isinstance(m, nn.Linear):
torch.nn.init.kaiming_normal_(m.weight, mode='fan_in')
if m.bias is not None:
fan_in, _ = torch.nn.init._calculate_fan_in_and_fan_out(m.weight)
bound = 1 / math.sqrt(fan_in)
torch.nn.init.uniform_(m.bias, -bound, bound)
self.apply(init_weights)
def forward(self, x: torch.Tensor) -> torch.Tensor:
for layer in self.layers:
x = layer(x)
x = torch.relu(x)
x = self.head(x)
x = x.squeeze(-1)
return x
@jitclass(
spec=[
('left_children', numba.int64[:]),
('right_children', numba.int64[:]),
('feature', numba.int64[:]),
('threshold', numba.float32[:]),
('value', numba.float32[:]),
('is_leaf', numba.int64[:]),
]
)
class Tree:
"Randomly initialized decision tree"
def __init__(self, n_features, n_nodes, max_depth):
        assert (2 ** np.arange(max_depth + 1)).sum() >= n_nodes, "Too many nodes for the given max_depth"
self.left_children = np.ones(n_nodes, dtype=np.int64) * -1
self.right_children = np.ones(n_nodes, dtype=np.int64) * -1
self.feature = np.random.randint(0, n_features, (n_nodes,))
self.threshold = np.random.randn(n_nodes).astype(np.float32)
self.value = np.random.randn(n_nodes).astype(np.float32)
depth = np.zeros(n_nodes, dtype=np.int64)
# Root is 0
self.is_leaf = np.zeros(n_nodes, dtype=np.int64)
self.is_leaf[0] = 1
# Keep adding nodes while we can (new node must have 2 children)
while True:
idx = np.flatnonzero(self.is_leaf)[np.random.choice(self.is_leaf.sum())]
if depth[idx] < max_depth:
unused = np.flatnonzero(
(self.left_children == -1)
& (self.right_children == -1)
& ~self.is_leaf
)
if len(unused) < 2:
break
lr_child = unused[np.random.permutation(unused.shape[0])[:2]]
                self.is_leaf[lr_child] = 1
depth[lr_child] = depth[idx] + 1
self.left_children[idx] = lr_child[0]
self.right_children[idx] = lr_child[1]
self.is_leaf[idx] = 0
def apply(self, x):
y = np.zeros(x.shape[0])
for i in range(x.shape[0]):
idx = 0
while not self.is_leaf[idx]:
if x[i, self.feature[idx]] < self.threshold[idx]:
idx = self.left_children[idx]
else:
idx = self.right_children[idx]
y[i] = self.value[idx]
return y
class TreeEnsemble:
"Combine multiple trees"
def __init__(self, *, n_trees, n_features, n_nodes, max_depth):
self.trees = [
Tree(n_features=n_features, n_nodes=n_nodes, max_depth=max_depth)
for _ in range(n_trees)
]
def apply(self, x):
return np.mean([t.apply(x) for t in tqdm(self.trees)], axis=0)
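# --- Usage sketch (not part of the original file) ---
# Generating regression targets from random features with the tree ensemble;
# all sizes are placeholders (n_nodes must satisfy the assert in Tree):
#
# if __name__ == '__main__':
#     features = np.random.randn(1000, 8).astype(np.float32)
#     ensemble = TreeEnsemble(n_trees=10, n_features=8, n_nodes=15, max_depth=5)
#     y = ensemble.apply(features)
#     print(y.shape)  # (1000,)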
| 31.128 | 85 | 0.542534 | 522 | 3,891 | 3.8659 | 0.256705 | 0.035679 | 0.044599 | 0.029732 | 0.160555 | 0.144698 | 0.095144 | 0.070367 | 0.03667 | 0.03667 | 0 | 0.020101 | 0.335132 | 3,891 | 124 | 86 | 31.379032 | 0.759954 | 0.049859 | 0 | 0.04 | 1 | 0 | 0.050577 | 0 | 0 | 0 | 0 | 0 | 0.01 | 1 | 0.07 | false | 0 | 0.08 | 0.01 | 0.21 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e11b018571f8da8d5cc1bd1ec0a76cb6a0343dc | 792 | py | Python | importer.py | CSwigg/stellarmass_pca | 6d7f3e8e4d3d637432d1bac6ed17a837c0ca9c75 | [
"MIT"
] | null | null | null | importer.py | CSwigg/stellarmass_pca | 6d7f3e8e4d3d637432d1bac6ed17a837c0ca9c75 | [
"MIT"
] | 1 | 2019-08-14T15:37:41.000Z | 2019-08-29T18:28:40.000Z | importer.py | CSwigg/stellarmass_pca | 6d7f3e8e4d3d637432d1bac6ed17a837c0ca9c75 | [
"MIT"
] | null | null | null | import os, sys, matplotlib
import faulthandler; faulthandler.enable()
mpl_v = 'MPL-8'
daptype = 'SPX-MILESHC-MILESHC'
os.environ['STELLARMASS_PCA_RESULTSDIR'] = '/Users/admin/sas/mangawork/manga/mangapca/zachpace/CSPs_CKC14_MaNGA_20190215-1/v2_5_3/2.3.0/results'
manga_results_basedir = os.environ['STELLARMASS_PCA_RESULTSDIR']
os.environ['STELLARMASS_PCA_CSPBASE'] = '/Users/admin/sas/mangawork/manga/mangapca/zachpace/CSPs_CKC14_MaNGA_20190215-1'
csp_basedir = os.environ['STELLARMASS_PCA_CSPBASE']
mocks_results_basedir = os.path.join(
os.environ['STELLARMASS_PCA_RESULTSDIR'], 'mocks')
from astropy.cosmology import WMAP9
cosmo = WMAP9
matplotlib.rcParams['font.family'] = 'serif'
matplotlib.rcParams['text.usetex'] = True
if 'DISPLAY' not in os.environ:
matplotlib.use('agg')
| 39.6 | 144 | 0.790404 | 110 | 792 | 5.472727 | 0.518182 | 0.089701 | 0.166113 | 0.19103 | 0.506645 | 0.219269 | 0.219269 | 0.219269 | 0.219269 | 0.219269 | 0 | 0.04235 | 0.075758 | 792 | 19 | 145 | 41.684211 | 0.780055 | 0 | 0 | 0 | 0 | 0.0625 | 0.463384 | 0.380051 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1875 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e128579da620ba7e1fb47af33782a7c01aadb8f | 502 | py | Python | src/google-cloud-speech/python/client.py | d-iii-s/teaching-introduction-middleware | 6d1129571c33059ca0c6ace8df18d3865e6205a0 | [
"Apache-2.0"
] | 2 | 2020-10-14T19:01:17.000Z | 2021-09-13T12:08:31.000Z | src/google-cloud-speech/python/client.py | d-iii-s/teaching-introduction-middleware | 6d1129571c33059ca0c6ace8df18d3865e6205a0 | [
"Apache-2.0"
] | 1 | 2021-01-07T08:32:05.000Z | 2021-01-07T08:32:05.000Z | src/google-cloud-speech/python/client.py | D-iii-S/teaching-introduction-middleware | 46abce8b4b6994a6fbc7c3c2abfb5962ed503a43 | [
"Apache-2.0"
] | 4 | 2020-10-15T13:26:43.000Z | 2021-10-07T11:07:30.000Z | #!/usr/bin/env python3
import sys
from google.cloud import speech as google_cloud_speech
# Create the client object for the Speech API.
client = google_cloud_speech.SpeechClient ()
content = open (sys.argv [1], 'rb').read ()
audio = google_cloud_speech.RecognitionAudio (content = content)
config = google_cloud_speech.RecognitionConfig (language_code = 'en-US')
# Call the service to perform speech recognition.
result = client.recognize (config = config, audio = audio)
print (result)
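# --- Usage note (not part of the original file) ---
# Run with the path of an audio file as the first argument, assuming Google
# application default credentials are configured, e.g.:
#   GOOGLE_APPLICATION_CREDENTIALS=key.json python3 client.py sample.flac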
| 27.888889 | 72 | 0.768924 | 68 | 502 | 5.544118 | 0.588235 | 0.145889 | 0.180371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004608 | 0.135458 | 502 | 17 | 73 | 29.529412 | 0.864055 | 0.24502 | 0 | 0 | 0 | 0 | 0.018617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e19e9cc056c63106926ce23bba19316b4c83198 | 477 | py | Python | service_catalog/utils.py | a-belhadj/squest | 8714fefc332ab1ab349508488455f4a1f2ab8a82 | [
"Apache-2.0"
] | null | null | null | service_catalog/utils.py | a-belhadj/squest | 8714fefc332ab1ab349508488455f4a1f2ab8a82 | [
"Apache-2.0"
] | null | null | null | service_catalog/utils.py | a-belhadj/squest | 8714fefc332ab1ab349508488455f4a1f2ab8a82 | [
"Apache-2.0"
] | null | null | null | def str_to_bool(s):
if isinstance(s, bool): # do not convert if already a boolean
return s
else:
if s == 'True' \
or s == 'true' \
or s == '1' \
or s == 1 \
or s == True:
return True
elif s == 'False' \
or s == 'false' \
or s == '0' \
or s == 0 \
or s == False:
return False
return False
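# --- Usage examples (not part of the original file) ---
# assert str_to_bool(True) is True       # booleans pass through unchanged
# assert str_to_bool('true') is True
# assert str_to_bool(1) is True
# assert str_to_bool('no') is False      # unrecognized values become False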
| 26.5 | 66 | 0.350105 | 55 | 477 | 3 | 0.381818 | 0.145455 | 0.084848 | 0.09697 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018349 | 0.542977 | 477 | 17 | 67 | 28.058824 | 0.738532 | 0.073375 | 0 | 0.117647 | 0 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e2cd6de05e6f854d4a44bdc3069fb849e91ed70 | 608 | py | Python | tests/test_conns.py | wgmueller1/unicorn | 68a265982635873cf77ac3226f4da52c3f58344c | [
"MIT"
] | null | null | null | tests/test_conns.py | wgmueller1/unicorn | 68a265982635873cf77ac3226f4da52c3f58344c | [
"MIT"
] | null | null | null | tests/test_conns.py | wgmueller1/unicorn | 68a265982635873cf77ac3226f4da52c3f58344c | [
"MIT"
] | null | null | null | import sys
import os
import unittest
from mock import MagicMock, patch
import json
sys.path.append(os.path.dirname(os.path.dirname(
os.path.abspath(__file__))))
import app
from app.config import db_conn_str
import base64
import sqlalchemy
class TestConnections(unittest.TestCase):
def setUp(self):
self.app = app.app.test_client()
def test_conn(self):
''' Only passes if the connection can be made '''
engine = sqlalchemy.create_engine(db_conn_str)
connection = engine.connect()
connection.close()
if __name__ == '__main__':
unittest.main()
| 24.32 | 57 | 0.700658 | 81 | 608 | 5.024691 | 0.530864 | 0.044226 | 0.063882 | 0.07371 | 0.078624 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004124 | 0.202303 | 608 | 24 | 58 | 25.333333 | 0.835052 | 0.067434 | 0 | 0 | 0 | 0 | 0.014311 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.45 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2e373c6d12f335af7154c7c8478202f3eb936444 | 666 | py | Python | docs/core/examples/stdin.py | Khymeira/twisted | a8aae6853b729a742ed4a99b95e8fe0c5f1dad97 | [
"Unlicense",
"MIT"
] | 4,612 | 2015-01-01T12:57:23.000Z | 2022-03-30T01:08:23.000Z | docs/core/examples/stdin.py | Khymeira/twisted | a8aae6853b729a742ed4a99b95e8fe0c5f1dad97 | [
"Unlicense",
"MIT"
] | 1,243 | 2015-01-23T17:23:59.000Z | 2022-03-28T13:46:17.000Z | docs/core/examples/stdin.py | Khymeira/twisted | a8aae6853b729a742ed4a99b95e8fe0c5f1dad97 | [
"Unlicense",
"MIT"
] | 1,236 | 2015-01-13T14:41:26.000Z | 2022-03-17T07:12:36.000Z | # Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
An example of reading a line at a time from standard input
without blocking the reactor.
"""
from os import linesep
from twisted.internet import stdio
from twisted.protocols import basic
class Echo(basic.LineReceiver):
delimiter = linesep.encode("ascii")
def connectionMade(self):
self.transport.write(b">>> ")
def lineReceived(self, line):
self.sendLine(b"Echo: " + line)
self.transport.write(b">>> ")
def main():
stdio.StandardIO(Echo())
from twisted.internet import reactor
reactor.run()
if __name__ == "__main__":
main()
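# --- Usage note (not part of the original file) ---
# Run from an interactive terminal, e.g. `python stdin.py`; every line typed
# at the ">>> " prompt is echoed back. Stop the reactor with Ctrl-C.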
| 18.5 | 58 | 0.68018 | 83 | 666 | 5.361446 | 0.60241 | 0.074157 | 0.085393 | 0.11236 | 0.098876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205706 | 666 | 35 | 59 | 19.028571 | 0.84121 | 0.235736 | 0 | 0.125 | 0 | 0 | 0.054 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.25 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2e416e106618e66359cbd6f5a9c858ba2efcc685 | 2,854 | py | Python | goodsManage/admin.py | z-Wind/warehouse | 3a1ebee4d37bc4e3b6783fee78062eb7aa8da152 | [
"MIT"
] | null | null | null | goodsManage/admin.py | z-Wind/warehouse | 3a1ebee4d37bc4e3b6783fee78062eb7aa8da152 | [
"MIT"
] | null | null | null | goodsManage/admin.py | z-Wind/warehouse | 3a1ebee4d37bc4e3b6783fee78062eb7aa8da152 | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from goodsManage.models import *
class GoodInventoryInline(admin.TabularInline):
model = GoodInventory
extra = 1
@admin.register(GoodKind)
class GoodKindAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodKind._meta.fields if f.name != 'id']
@admin.register(Good)
class GoodAdmin(admin.ModelAdmin):
list_display = [f.name for f in Good._meta.fields if f.name != 'id']
list_filter = ('kind',)
search_fields = ('partNumber', 'partNumber_once', 'partNumber_old', 'type' )
inlines = (GoodInventoryInline,)
@admin.register(Department)
class DepartmentAdmin(admin.ModelAdmin):
list_display = [f.name for f in Department._meta.fields if f.name != 'id']
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
list_display = [f.name for f in Person._meta.fields if f.name != 'id']
list_filter = ('department',)
search_fields = ('name',)
@admin.register(GoodInventory)
class GoodInventoryAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodInventory._meta.fields if f.name != 'id']
list_filter = ('department', 'good__kind',)
search_fields = ('good__type',)
@admin.register(GoodRequisition)
class GoodRequisitionAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodRequisition._meta.fields if f.name != 'id']
list_filter = ('datetime', 'person__department', 'good__kind',)
search_fields = ('good__type',)
@admin.register(GoodBack)
class GoodBackAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodBack._meta.fields if f.name != 'id']
list_filter = ('datetime', 'person__department', 'good__kind',)
#search_fields = ('person',)
@admin.register(GoodBuy)
class GoodBuyAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodBuy._meta.fields if f.name != 'id']
list_filter = ('date', 'person__department', 'good__kind',)
#search_fields = ('pr','po')
@admin.register(GoodAllocate)
class GoodAllocateAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodAllocate._meta.fields if f.name != 'id']
list_filter = ('datetime', 'person__department', 'toDepartment', 'good__kind',)
#search_fields = ('person',)
@admin.register(WastageStatus)
class WastageStatusAdmin(admin.ModelAdmin):
list_display = [f.name for f in WastageStatus._meta.fields if f.name != 'id']
@admin.register(GoodWastage)
class GoodWastageAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodWastage._meta.fields if f.name != 'id']
list_filter = ('datetime', 'person__department', 'good__kind',)
search_fields = ('good__type',)
@admin.register(GoodRepair)
class GoodRepairAdmin(admin.ModelAdmin):
list_display = [f.name for f in GoodRepair._meta.fields if f.name != 'id']
list_filter = ('date', 'person__department') | 38.053333 | 83 | 0.711983 | 360 | 2,854 | 5.441667 | 0.177778 | 0.061256 | 0.116386 | 0.159265 | 0.601327 | 0.601327 | 0.591118 | 0.561511 | 0.497703 | 0.210311 | 0 | 0.000412 | 0.149264 | 2,854 | 75 | 84 | 38.053333 | 0.806425 | 0.037491 | 0 | 0.107143 | 0 | 0 | 0.125775 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.785714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2e44d40deef066981e5898570685011fbd57c186 | 1,068 | py | Python | 2_CS_Medium/Leetcode/Interview_Easy/DLC_7_Design.py | andremichalowski/CSN1 | 97eaa66b324ef1850237dd6dcd6d8f71a1a2b64b | [
"MIT"
] | null | null | null | 2_CS_Medium/Leetcode/Interview_Easy/DLC_7_Design.py | andremichalowski/CSN1 | 97eaa66b324ef1850237dd6dcd6d8f71a1a2b64b | [
"MIT"
] | null | null | null | 2_CS_Medium/Leetcode/Interview_Easy/DLC_7_Design.py | andremichalowski/CSN1 | 97eaa66b324ef1850237dd6dcd6d8f71a1a2b64b | [
"MIT"
] | null | null | null | 1. SHUFFLE AN ARRAY:
class Solution:
def __init__(self, nums):
self.array = nums
self.original = list(nums)
def reset(self):
self.array = self.original
self.original = list(self.original)
return self.array
def shuffle(self):
aux = list(self.array)
for idx in range(len(self.array)):
remove_idx = random.randrange(len(aux))
self.array[idx] = aux.pop(remove_idx)
return self.array
# 2. Min Stack
#https://mail.google.com/mail/u/0/#inbox?projector=1
class MinStack:
def __init__(self):
self.my_stack = []
def push(self, x):
if self.my_stack == []:
self.my_stack.append((x,x))
else:
minimum = self.my_stack[-1][1]
self.my_stack.append((x, min(x, minimum)))
def pop(self):
return self.my_stack.pop()[0]
def top(self):
return self.my_stack[-1][0]
def getMin(self):
return self.my_stack[-1][1] | 24.837209 | 56 | 0.532772 | 139 | 1,068 | 3.964029 | 0.323741 | 0.087114 | 0.15971 | 0.065336 | 0.208711 | 0.079855 | 0 | 0 | 0 | 0 | 0 | 0.015581 | 0.338951 | 1,068 | 43 | 57 | 24.837209 | 0.764873 | 0.046816 | 0 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
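# --- Usage sketch (not part of the original file) ---
# Each stack entry stores (value, running_minimum), so getMin is O(1):
#
# s = MinStack()
# s.push(5); s.push(2); s.push(7)
# print(s.getMin())  # 2
# s.pop()            # removes 7
# print(s.top())     # 2
# print(s.getMin())  # 2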
2e4813e101148d2aa8ef55663105ba34b2d5e596 | 13,170 | py | Python | lib/python2.7/site-packages/appionlib/apTilt/autotilt.py | leschzinerlab/myami-3.2-freeHand | 974b8a48245222de0d9cfb0f433533487ecce60d | [
"MIT"
] | null | null | null | lib/python2.7/site-packages/appionlib/apTilt/autotilt.py | leschzinerlab/myami-3.2-freeHand | 974b8a48245222de0d9cfb0f433533487ecce60d | [
"MIT"
] | null | null | null | lib/python2.7/site-packages/appionlib/apTilt/autotilt.py | leschzinerlab/myami-3.2-freeHand | 974b8a48245222de0d9cfb0f433533487ecce60d | [
"MIT"
] | 1 | 2019-09-05T20:58:37.000Z | 2019-09-05T20:58:37.000Z | #!/usr/bin/env python
#python
import os
import sys
import time
import numpy
import threading
from PIL import Image
from pyami import quietscipy
from scipy import ndimage, optimize
#appion
try:
import radermacher
except:
print "using slow tilt angle calculator"
import slowmacher as radermacher
from appionlib import apDisplay
from appionlib import apPeaks
from appionlib import apImage
from appionlib import apParam
from appionlib.apTilt import apTiltTransform, apTiltShift, tiltfile
class autoTilt(object):
#---------------------------------------
#---------------------------------------
def __init__(self):
self.data = {}
return
#---------------------------------------
#---------------------------------------
def importPicks(self, picks1, picks2, tight=False, msg=True):
t0 = time.time()
#print picks1
#print self.currentpicks1
curpicks1 = numpy.asarray(self.currentpicks1)
curpicks2 = numpy.asarray(self.currentpicks2)
#print curpicks1
# get picks
apTiltTransform.setPointsFromArrays(curpicks1, curpicks2, self.data)
pixdiam = self.data['pixdiam']
if tight is True:
pixdiam /= 4.0
#print self.data, pixdiam
list1, list2 = apTiltTransform.alignPicks2(picks1, picks2, self.data, limit=pixdiam, msg=msg)
if list1.shape[0] == 0 or list2.shape[0] == 0:
apDisplay.printWarning("No new picks were found")
# merge picks
newpicks1, newpicks2 = apTiltTransform.betterMergePicks(curpicks1, list1, curpicks2, list2, msg=msg)
newparts = newpicks1.shape[0] - curpicks1.shape[0]
# copy over picks
self.currentpicks1 = newpicks1
self.currentpicks2 = newpicks2
if msg is True:
apDisplay.printMsg("Inserted "+str(newparts)+" new particles in "+apDisplay.timeString(time.time()-t0))
return True
#---------------------------------------
#---------------------------------------
def optimizeAngles(self, msg=True):
t0 = time.time()
### run find theta
na1 = numpy.array(self.currentpicks1, dtype=numpy.int32)
na2 = numpy.array(self.currentpicks2, dtype=numpy.int32)
# minimum area for a triangle to be valid
arealim = 100.0
fittheta = radermacher.tiltang(na1, na2, arealim)
if not fittheta or not 'wtheta' in fittheta:
return
theta = fittheta['wtheta']
thetadev = fittheta['wthetadev']
if msg is True:
thetastr = ("%3.3f +/- %2.2f" % (theta, thetadev))
tristr = apDisplay.orderOfMag(fittheta['numtri'])+" of "+apDisplay.orderOfMag(fittheta['tottri'])
tristr = (" (%3.1f " % (100.0 * fittheta['numtri'] / float(fittheta['tottri'])))+"%) "
apDisplay.printMsg("Tilt angle "+thetastr+tristr)
self.data['theta'] = fittheta['wtheta']
### run optimize angles
lastiter = [80,80,80]
count = 0
totaliter = 0
while max(lastiter) > 75 and count < 30:
count += 1
lsfit = self.runLeastSquares()
lastiter[2] = lastiter[1]
lastiter[1] = lastiter[0]
lastiter[0] = lsfit['iter']
totaliter += lsfit['iter']
if msg is True:
apDisplay.printMsg("Least Squares: "+str(count)+" rounds, "+str(totaliter)
+" iters, rmsd of "+str(round(lsfit['rmsd'],4))+" pixels in "+apDisplay.timeString(time.time()-t0))
return
#---------------------------------------
#---------------------------------------
def runLeastSquares(self):
#SET XSCALE
xscale = numpy.array((1,1,1,0,1,1), dtype=numpy.float32)
#GET TARGETS
a1 = numpy.asarray(self.currentpicks1, dtype=numpy.float32)
a2 = numpy.asarray(self.currentpicks2, dtype=numpy.float32)
if len(a1) > len(a2):
apDisplay.printWarning("shorten a1")
a1 = a1[0:len(a2),:]
elif len(a2) > len(a1):
apDisplay.printWarning("shorten a2")
a2 = a2[0:len(a1),:]
lsfit = apTiltTransform.willsq(a1, a2, self.data['theta'], self.data['gamma'],
self.data['phi'], 1.0, self.data['shiftx'], self.data['shifty'], xscale)
if lsfit['rmsd'] < 30:
self.data['theta'] = lsfit['theta']
self.data['gamma'] = lsfit['gamma']
self.data['phi'] = lsfit['phi']
self.data['shiftx'] = lsfit['shiftx']
self.data['shifty'] = lsfit['shifty']
return lsfit
#---------------------------------------
#---------------------------------------
def getRmsdArray(self):
targets1 = self.currentpicks1
aligned1 = self.getAlignedArray2()
if len(targets1) != len(aligned1):
targets1 = numpy.vstack((targets1, aligned1[len(targets1):]))
aligned1 = numpy.vstack((aligned1, targets1[len(aligned1):]))
diffmat1 = (targets1 - aligned1)
sqsum1 = diffmat1[:,0]**2 + diffmat1[:,1]**2
rmsd1 = numpy.sqrt(sqsum1)
return rmsd1
#---------------------------------------
#---------------------------------------
def getAlignedArray2(self):
apTiltTransform.setPointsFromArrays(self.currentpicks1, self.currentpicks2, self.data)
a2b = apTiltTransform.a2Toa1Data(self.currentpicks2, self.data)
a2c = numpy.asarray(a2b, dtype=numpy.float32)
return a2c
#---------------------------------------
#---------------------------------------
def getAlignedArray1(self):
apTiltTransform.setPointsFromArrays(self.currentpicks1, self.currentpicks2, self.data)
a1b = apTiltTransform.a1Toa2Data(self.currentpicks1, self.data)
return a1b
#---------------------------------------
#---------------------------------------
def getCutoffCriteria(self, errorArray):
#do a small minimum filter to get rid of outliers
size = int(len(errorArray)**0.3)+1
errorArray2 = ndimage.minimum_filter(errorArray, size=size, mode='wrap')
mean = ndimage.mean(errorArray2)
stdev = ndimage.standard_deviation(errorArray2)
### this is so arbitrary
cut = mean + 5.0 * stdev + 2.0
### anything bigger than 20 pixels is too big
if cut > self.data['pixdiam']:
cut = self.data['pixdiam']
return cut
#---------------------------------------
#---------------------------------------
def getGoodPicks(self, msg):
a1 = numpy.asarray(self.currentpicks1, dtype=numpy.float32)
a2 = numpy.asarray(self.currentpicks2, dtype=numpy.float32)
numpoints = max(a1.shape[0], a2.shape[0])
good = numpy.zeros((numpoints), dtype=numpy.bool)
if len(a1) != len(a2):
good[len(a1):] = True
good[len(a2):] = True
err = self.getRmsdArray()
cut = self.getCutoffCriteria(err)
minworsterr = 1.0
worstindex = []
worsterr = []
### always set 3% as bad if cutoff > max rmsd
numbad = int(len(a1)*0.03 + 1.0)
for i,e in enumerate(err):
if e > minworsterr:
### find the worst overall picks
if len(worstindex) >= numbad:
j = numpy.argmin(numpy.asarray(worsterr))
### take previous worst pick and make it good
k = worstindex[j]
good[k] = True
good[i] = False
worstindex[j] = i
worsterr[j] = e
### increase the min worst err
minworsterr = numpy.asarray(worsterr).min()
else:
### add the worst pick
good[i] = False
worstindex.append(i)
worsterr.append(e)
elif e < cut and (i == 0 or e > 0):
### this is a good pick
good[i] = True
if good.sum() == 0:
good[0] = True
#print good
if msg is True:
sumstr = ("%d of %d good (%d bad) particles; min worst error=%.3f"
%(good.sum(),numpoints,numpoints-good.sum(),minworsterr))
apDisplay.printMsg(sumstr)
return good
#---------------------------------------
#---------------------------------------
def clearBadPicks(self, msg=True):
good = self.getGoodPicks(msg)
a1 = numpy.asarray(self.currentpicks1, dtype=numpy.float32)
a2 = numpy.asarray(self.currentpicks2, dtype=numpy.float32)
numpoints = max(a1.shape[0], a2.shape[0])
if good.sum() < 2:
return
b1 = []
b2 = []
for i,v in enumerate(good):
if bool(v) is True:
b1.append(a1[i])
b2.append(a2[i])
self.currentpicks1 = numpy.asarray(b1, dtype=numpy.float32)
self.currentpicks2 = numpy.asarray(b2, dtype=numpy.float32)
return
#---------------------------------------
#---------------------------------------
def deleteFirstPick(self):
a1 = self.currentpicks1
a2 = self.currentpicks2
a1b = a1[1:]
a2b = a2[1:]
self.currentpicks1 = a1b
self.currentpicks2 = a2b
#---------------------------------------
#---------------------------------------
def getOverlap(self, image1, image2, msg=True):
t0 = time.time()
bestOverlap, tiltOverlap = apTiltTransform.getOverlapPercent(image1, image2, self.data)
overlapStr = str(round(100*bestOverlap,2))+"% and "+str(round(100*tiltOverlap,2))+"%"
if msg is True:
apDisplay.printMsg("Found overlaps of "+overlapStr+" in "+apDisplay.timeString(time.time()-t0))
self.data['overlap'] = bestOverlap
#---------------------------------------
#---------------------------------------
def saveData(self, imgfile1, imgfile2, outfile):
savedata = {}
savedata['theta'] = self.data['theta']
savedata['gamma'] = self.data['gamma']
savedata['phi'] = self.data['phi']
savedata['picks1'] = self.currentpicks1
savedata['picks2'] = self.currentpicks2
savedata['align1'] = self.getAlignedArray1()
savedata['align2'] = self.getAlignedArray2()
savedata['rmsd'] = self.getRmsdArray()
savedata['image1name'] = imgfile1
savedata['image2name'] = imgfile2
#savedata['filetype'] = tiltfile.filetypes[self.data['filetypeindex']]
tiltfile.saveData(savedata, outfile)
#---------------------------------------
#---------------------------------------
def openImageFile(self, filename):
self.filename = filename
if filename[-4:] == '.spi':
array = apImage.spiderToArray(filename, msg=False)
return array
elif filename[-4:] == '.mrc':
array = apImage.mrcToArray(filename, msg=False)
return array
else:
image = Image.open(filename)
array = apImage.imageToArray(image, msg=False)
array = array.astype(numpy.float32)
return array
return None
#---------------------------------------
#---------------------------------------
def printData(self, msg):
if msg is False:
return
mystr = ( "theta=%.3f, gamma=%.3f, phi=%.3f, rmsd=%.4f, shifts=%.1f,%.1f, numpoints=%d,%d"
%(self.data['theta'],self.data['gamma'],self.data['phi'],self.getRmsdArray().mean(),
self.data['shiftx'],self.data['shifty'],len(self.currentpicks1),len(self.currentpicks2),
))
apDisplay.printColor(mystr, "green")
#---------------------------------------
#---------------------------------------
def processTiltPair(self, imgfile1, imgfile2, picks1, picks2, tiltangle, outfile, pixdiam=20.0, tiltaxis=-7.0, msg=True):
"""
Inputs:
imgfile1
imgfile2
picks1, 2xN numpy array
picks2, 2xN numpy array
tiltangle
outfile
Modifies:
outfile
Output:
None, failed
True, success
"""
### pre-load particle picks
if len(picks1) < 10 or len(picks2) < 10:
if msg is True:
apDisplay.printWarning("Not enough particles ot run program on image pair")
return None
### setup tilt data
self.data['theta'] = tiltangle
self.data['shiftx'] = 0.0
self.data['shifty'] = 0.0
self.data['gamma'] = tiltaxis
self.data['phi'] = tiltaxis
self.data['scale'] = 1.0
self.data['pixdiam'] = pixdiam
### open image file 1
img1 = self.openImageFile(imgfile1)
if img1 is None:
apDisplay.printWarning("Could not read image: "+imgfile1)
return None
### open tilt file 2
img2 = self.openImageFile(imgfile2)
if img1 is None:
apDisplay.printWarning("Could not read image: "+imgfile1)
return None
### guess the shift
t0 = time.time()
if msg is True:
apDisplay.printMsg("Refining tilt axis angles")
origin, newpart, snr, bestang = apTiltShift.getTiltedCoordinates(img1, img2, tiltangle, picks1, True, tiltaxis, msg=msg)
self.data['gamma'] = float(bestang)
self.data['phi'] = float(bestang)
if snr < 2.0:
if msg is True:
apDisplay.printWarning("Low confidence in initial shift")
return None
self.currentpicks1 = [origin]
self.currentpicks2 = [newpart]
### search for the correct particles
self.importPicks(picks1, picks2, tight=False, msg=msg)
if len(self.currentpicks1) < 4:
apDisplay.printWarning("Failed to find any particle matches")
return None
self.deleteFirstPick()
self.printData(msg)
for i in range(4):
self.clearBadPicks(msg)
if len(self.currentpicks1) < 5 or len(self.currentpicks2) < 5:
if msg is True:
apDisplay.printWarning("Not enough particles to optimize angles")
return None
self.optimizeAngles(msg)
self.printData(msg)
self.clearBadPicks(msg)
self.clearBadPicks(msg)
if len(self.currentpicks1) < 5 or len(self.currentpicks2) < 5:
if msg is True:
apDisplay.printWarning("Not enough particles to optimize angles")
return None
self.optimizeAngles(msg)
self.printData(msg)
self.clearBadPicks(msg)
self.importPicks(picks1, picks2, tight=False, msg=msg)
self.clearBadPicks(msg)
self.printData(msg)
if len(self.currentpicks1) < 5 or len(self.currentpicks2) < 5:
if msg is True:
apDisplay.printWarning("Not enough particles to optimize angles")
return None
self.optimizeAngles(msg)
self.printData(msg)
self.getOverlap(img1,img2,msg)
if msg is True:
apDisplay.printMsg("Completed alignment of "+str(len(self.currentpicks1))
+" particle pairs in "+apDisplay.timeString(time.time()-t0))
self.saveData(imgfile1, imgfile2, outfile)
self.printData(msg)
return True
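#---------------------------------------
#---------------------------------------
# Usage sketch (not part of the original file). All file names and the tilt
# angle below are placeholders; picks1 and picks2 are Nx2 arrays of particle
# coordinates picked from the untilted and tilted images:
#
# at = autoTilt()
# ok = at.processTiltPair('untilted.mrc', 'tilted.mrc', picks1, picks2,
#     tiltangle=15.0, outfile='pair.tlt', pixdiam=20.0, tiltaxis=-7.0)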
| 32.679901 | 122 | 0.620881 | 1,554 | 13,170 | 5.258044 | 0.212355 | 0.041121 | 0.011137 | 0.016155 | 0.243422 | 0.219312 | 0.191409 | 0.173785 | 0.16375 | 0.128993 | 0 | 0.032511 | 0.156872 | 13,170 | 402 | 123 | 32.761194 | 0.70335 | 0.150569 | 0 | 0.262069 | 0 | 0.003448 | 0.094854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.062069 | null | null | 0.093103 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e48140d20f0f36746242230c58ba92c4fce4b01 | 676 | py | Python | bin/utils/topo.py | akrishna1995/emuedge | d33845107be3c9bbfcaf030df0a989e9d4972743 | [
"MIT"
] | 8 | 2018-06-21T03:20:26.000Z | 2021-10-15T03:53:49.000Z | bin/utils/topo.py | akrishna1995/emuedge | d33845107be3c9bbfcaf030df0a989e9d4972743 | [
"MIT"
] | 12 | 2018-05-21T17:26:59.000Z | 2018-06-14T02:48:21.000Z | bin/utils/topo.py | akrishna1995/emuedge | d33845107be3c9bbfcaf030df0a989e9d4972743 | [
"MIT"
] | 3 | 2018-08-30T22:37:20.000Z | 2019-03-31T18:29:52.000Z | import json
# use adjacency list for representing network would be intuitive for human
class topo:
def __init__(self):
self.graph=[]
@staticmethod
def read_from_json(filename):
raw=open(filename).read()
jdata=json.loads(raw)
nodes=[]
# init graph based on how many nodes we have
node_count=len(jdata)
for i in range(0, node_count):
nodes.append(jdata[i])
return nodes
	def set_graph(self, graph):
		self.graph=graph
# only should be called after nodes are inited
#def generate_graph(self):
# for node in self.nodes:
# nid=node['id']
def test():
	nodes=topo.read_from_json('two_subnet.topo')
	print nodes[0]['name']
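# --- Example topology file (not part of the original file) ---
# A hypothetical 'two_subnet.topo' consistent with read_from_json, which
# expects a JSON list of node objects; field names beyond 'id' and 'name'
# (the only keys referenced in this module) are invented:
#
# [
#   {"id": 0, "name": "router0", "neighbors": [1]},
#   {"id": 1, "name": "host1", "neighbors": [0]}
# ]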
| 21.806452 | 74 | 0.723373 | 107 | 676 | 4.429907 | 0.53271 | 0.056962 | 0.050633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003534 | 0.162722 | 676 | 30 | 75 | 22.533333 | 0.833922 | 0.33284 | 0 | 0.105263 | 0 | 0 | 0.042793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052632 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e483ff8bc55f318b8c66a3d2be4b382f2c8b3ca | 527 | py | Python | __getYardAndMiles.py | simdevex/01.Basics | cf4f372384e66f4b26e4887d2f5d815a1f8e929c | [
"MIT"
] | null | null | null | __getYardAndMiles.py | simdevex/01.Basics | cf4f372384e66f4b26e4887d2f5d815a1f8e929c | [
"MIT"
] | null | null | null | __getYardAndMiles.py | simdevex/01.Basics | cf4f372384e66f4b26e4887d2f5d815a1f8e929c | [
"MIT"
] | null | null | null | '''
Python program to convert a distance (in feet) to inches, yards, and miles
'''
def distanceInInches (d_ft):
print("The distance in inches is %i inches." %(d_ft * 12))
def distanceInYard (d_ft):
print("The distance in yards is %.2f yards." %(d_ft / 3.0))
def distanceInMiles (d_ft):
print("The distance in miles is %.2f miles." %(d_ft / 5280.0) )
def main ():
d_ft = int(input("Input distance in feet: "))
distanceInInches (d_ft)
distanceInYard (d_ft)
distanceInMiles (d_ft)
main() | 25.095238 | 75 | 0.650854 | 80 | 527 | 4.1625 | 0.3625 | 0.09009 | 0.156156 | 0.099099 | 0.189189 | 0.189189 | 0 | 0 | 0 | 0 | 0 | 0.026506 | 0.212524 | 527 | 21 | 76 | 25.095238 | 0.775904 | 0.142315 | 0 | 0 | 0 | 0 | 0.296629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.333333 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e650c253e8a6cac26c779a3546cf34701f3d990 | 1,085 | py | Python | securityheaders/checkers/hsts/test_maxagezero.py | th3cyb3rc0p/securityheaders | 941264be581dc01afe28f6416f2d7bed79aecfb3 | [
"Apache-2.0"
] | 151 | 2018-07-29T22:34:43.000Z | 2022-03-22T05:08:27.000Z | securityheaders/checkers/hsts/test_maxagezero.py | th3cyb3rc0p/securityheaders | 941264be581dc01afe28f6416f2d7bed79aecfb3 | [
"Apache-2.0"
] | 5 | 2019-04-24T07:31:36.000Z | 2021-04-15T14:31:23.000Z | securityheaders/checkers/hsts/test_maxagezero.py | th3cyb3rc0p/securityheaders | 941264be581dc01afe28f6416f2d7bed79aecfb3 | [
"Apache-2.0"
] | 42 | 2018-07-31T08:18:59.000Z | 2022-03-28T08:18:32.000Z | import unittest
from securityheaders.checkers.hsts import HSTSMaxAgeZeroChecker
class HSTSMaxAgeZeroCheckerTest(unittest.TestCase):
def setUp(self):
self.x = HSTSMaxAgeZeroChecker()
def test_checkNoHSTS(self):
nox = dict()
nox['test'] = 'value'
self.assertEqual(self.x.check(nox), [])
def test_checkNone(self):
nonex = None
self.assertEqual(self.x.check(nonex), [])
def test_checkNoneHSTS(self):
hasx = dict()
hasx['strict-transport-security'] = None
self.assertEqual(self.x.check(hasx), [])
def test_ValidHSTS(self):
hasx4 = dict()
hasx4['strict-transport-security'] = "max-age=31536000; includeSubDomains"
        result = self.x.check(hasx4)
        self.assertEqual(result, [])
def test_ZeroMaxAge(self):
hasx5 = dict()
hasx5['strict-transport-security'] = "max-age=0; includeSubDomains"
result = self.x.check(hasx5)
self.assertIsNotNone(result)
self.assertEqual(len(result), 1)
if __name__ == '__main__':
unittest.main()
| 28.552632 | 81 | 0.647926 | 119 | 1,085 | 5.798319 | 0.378151 | 0.050725 | 0.086957 | 0.115942 | 0.336232 | 0.084058 | 0 | 0 | 0 | 0 | 0 | 0.020024 | 0.217512 | 1,085 | 37 | 82 | 29.324324 | 0.792697 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.069124 | 0 | 0 | 0 | 0 | 0.206897 | 1 | 0.206897 | false | 0 | 0.068966 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e66a4f133f98300f3f7bd13dd28274a3ef5f09d | 6,534 | py | Python | GUI.py | wtarimo/CFCScorePredictor | 2823dffdd4c13e29ebc5e77d85e000fa9977ef90 | [
"Artistic-2.0"
] | null | null | null | GUI.py | wtarimo/CFCScorePredictor | 2823dffdd4c13e29ebc5e77d85e000fa9977ef90 | [
"Artistic-2.0"
] | null | null | null | GUI.py | wtarimo/CFCScorePredictor | 2823dffdd4c13e29ebc5e77d85e000fa9977ef90 | [
"Artistic-2.0"
] | null | null | null | """
William Tarimo
COSI 157 - Final Project: CFC Score Predictor
This module provides the graphical user interface
11/11/2012
"""
from Game import *
def userInputBox(win):
"""Creates and diplays graphical fields for user signing and registration"""
Text(Point(55,50), "Full Name:").draw(win)
rName = Entry(Point(245,50),30); rName.draw(win)
Text(Point(55,75), "Username:").draw(win)
rUsername = Entry(Point(200,75),20); rUsername.draw(win)
Text(Point(56,100), "Password:").draw(win)
rPassword1 = Entry(Point(200,100),20); rPassword1.draw(win)
Text(Point(56,125), "Password:").draw(win)
rPassword2 = Entry(Point(200,125),20); rPassword2.draw(win)
Text(Point(55,225), "Username:").draw(win)
lUsername = Entry(Point(200,225),20); lUsername.draw(win)
Text(Point(56,250), "Password:").draw(win)
lPassword = Entry(Point(200,250),20); lPassword.draw(win)
rButton = Buttons(win,Point(230,160),120,25,"REGISTER"); rButton.Activate()
lButton = Buttons(win,Point(230,285),120,25,"LOGIN"); lButton.Activate()
def fixtureInputBox(win):
"""Creates and diplays graphical fields for fixture input"""
Text(Point(113,50), "Game #:").draw(win)
fgame = Entry(Point(240,50),20); fgame.draw(win)
Text(Point(103,75), "Opponent:").draw(win)
fopponent = Entry(Point(240,75),20); fopponent.draw(win)
Text(Point(118,100), "Time:").draw(win)
ftime = Entry(Point(240,100),20); ftime.draw(win)
Text(Point(116,125), "Venue:").draw(win)
fvenue = Entry(Point(240,125),20); fvenue.draw(win)
Text(Point(116,150), "Result:").draw(win)
fresult = Entry(Point(240,150),20); fresult.draw(win)
Text(Point(100,175), "cfcScorers:").draw(win)
fcfcScorers = Entry(Point(240,175),20); fcfcScorers.draw(win)
Text(Point(75,200), "OppositionScorers:").draw(win)
foppositionScorers = Entry(Point(240,200),20); foppositionScorers.draw(win)
submitFButton = Buttons(win,Point(230,235),150,25,"SUBMIT FIXTURE"); submitFButton.Activate()
Text(Point(113,300), "Game #:").draw(win)
ugame = Entry(Point(240,300),20); ugame.draw(win)
Text(Point(116,325), "Result:").draw(win)
uresult = Entry(Point(240,325),20); uresult.draw(win)
Text(Point(100,350), "cfcScorers:").draw(win)
ucfcScorers = Entry(Point(240,350),20); ucfcScorers.draw(win)
Text(Point(75,375), "OppositionScorers:").draw(win)
uoppositionScorers = Entry(Point(240,375),20); uoppositionScorers.draw(win)
updateFButton = Buttons(win,Point(230,415),150,25,"UPDATE RESULTS"); updateFButton.Activate()
def predictionInputBox(win):
"""Creates and diplays graphical fields for prediction input"""
Text(Point(111,50), "Game #:").draw(win)
game = Entry(Point(240,50),20); game.draw(win)
Text(Point(116,80), "Result:").draw(win)
result = Entry(Point(240,80),20); result.draw(win)
Text(Point(107,110), "Scoreline:").draw(win)
scoreline = Entry(Point(240,110),20); scoreline.draw(win)
Text(Point(100,140), "cfcScorers:").draw(win)
cfcScorers = Entry(Point(240,140),20); cfcScorers.draw(win)
Text(Point(75,170), "OppositionScorers:").draw(win)
oppositionScorers = Entry(Point(240,170),20); oppositionScorers.draw(win)
submitRButton = Buttons(win,Point(230,235),170,25,"SUBMIT RESULT"); submitRButton.Activate()
def displayStandings(win,standings):
"""Displays the game standings on user points"""
text = Text(Point(725,40),"GAME STANDINGS"); text.setFill('grey'); text.setStyle('bold');text.setSize(20); text.draw(win)
#standings = registry.getStandings()
if len(standings)>10: standings=standings[:10]
Text(Point(700,70), " _________Name:__________ _Points:_ _Accuracy:_").draw(win)
(x,y) = (600,95)
for i in range(3):
y = 95
x = [600,780,850][i]
for record in standings:
Text(Point(x,y),record[i]).draw(win)
y+=20
def displaySchedule(win,fixtures,game):
"""Displays the schedule for the 5 recent fixtures by game #"""
text = Text(Point(730,325),"FIXTURE SCHEDULE"); text.setFill('grey'); text.setStyle('bold');text.setSize(20); text.draw(win)
#fixtures = schedule.fixtures
if len(fixtures)>5: fixtures = fixtures[:5]
fixtures = [[str(f.game),f.opponent.split()[0],f.time,str(f.result),str(f.cfcScorers),str(f.oppositionScorers)] for f in fixtures]
Text(Point(710,350), "Game#: __Opponent:__ ____Time:____ _Result_ _cfcScorers_ _oppScorers_").draw(win)
(x,y) = (470,375)
for i in range(6):
y = 375
x = [465,545,665,755,830,940][i]
for record in fixtures:
text = Text(Point(x,y),record[i])
if i==0 and int(record[0])>game: text = Text(Point(x,y),record[0]+" Future")
elif i==0 and int(record[0])==game: text = Text(Point(x,y),record[0]+" Next")
elif i==0 and int(record[0])<game: text = Text(Point(x,y),record[0]+" Played")
text.setFill('blue') if int(record[0])>game else text.setFill('red')
if int(record[0])==game: text.setFill('green4')
text.setSize(10); text.draw(win)
y+=20
def displayPredictions(win,pDB,username,game):
"""Diplays predictions sumbmitted by the user"""
text = Text(Point(730,505),"YOUR PREDICTIONS"); text.setFill('grey'); text.setStyle('bold');text.setSize(20); text.draw(win)
predictions = pDB.getPredictions(username)
if len(predictions)>5: predictions = predictions[:5]
predictions = [[str(p.game),str(p.result),str(p.scoreline),str(p.cfcScorers),str(p.oppositionScorers),str(p.points)] for p in predictions]
Text(Point(700,530), "_Game#:_ _Result:_ _Scoreline:_ _cfcScorers_ _oppScorers_ _Points:_").draw(win)
for i in range(6):
y = 555
x = [475,550,640,740,830,920][i]
for record in predictions:
text = Text(Point(x,y),record[i])
text.setFill('blue') if int(record[0])>game else text.setFill('red')
if int(record[0])==game: text.setFill('green4')
text.setSize(11); text.draw(win)
y+=20
def displayOutput(win,text):
"""Displays info to the user at the bottom left windows"""
text = text.split(",")
x,y = 215,485
for record in text:
text = Text(Point(x,y),record); text.setFill('grey2')
text.setSize(11); text.draw(win)
y+=25
| 46.340426 | 143 | 0.632537 | 904 | 6,534 | 4.513274 | 0.225664 | 0.092647 | 0.048529 | 0.070588 | 0.277451 | 0.187745 | 0.166422 | 0.118137 | 0.118137 | 0.118137 | 0 | 0.096554 | 0.191613 | 6,534 | 140 | 144 | 46.671429 | 0.67588 | 0.08494 | 0 | 0.125 | 0 | 0 | 0.099931 | 0.004142 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067308 | false | 0.057692 | 0.009615 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
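# The module above builds its forms from Zelle-style graphics primitives
# (Point, Text, Entry) pulled in via "from Game import *"; the Buttons class
# is project-specific and not shown here. A minimal sketch of the same idiom,
# assuming Zelle's graphics.py is available on the path:
from graphics import GraphWin, Point, Text, Entry

win = GraphWin("Demo form", 400, 200)
Text(Point(70, 50), "Username:").draw(win)
name_field = Entry(Point(220, 50), 20)
name_field.draw(win)
win.getMouse()                  # wait for a click before reading the field
print(name_field.getText())
win.close()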
2e6a1028d97ccf09e8a08b43ffd57fa04c969dd7 | 11,300 | py | Python | doctor_jsonschema_md.py | rdpickard/doctor_jsonschema_md | e710ed89e877aba9e72abb9867bceaad6dddd87f | [
"BSD-3-Clause"
] | 6 | 2016-01-12T18:31:12.000Z | 2021-09-29T12:32:14.000Z | doctor_jsonschema_md.py | rdpickard/doctor_jsonschema_md | e710ed89e877aba9e72abb9867bceaad6dddd87f | [
"BSD-3-Clause"
] | null | null | null | doctor_jsonschema_md.py | rdpickard/doctor_jsonschema_md | e710ed89e877aba9e72abb9867bceaad6dddd87f | [
"BSD-3-Clause"
] | null | null | null | import json
import os
import logging
import time
import argparse
def _mds(s, iscode=False):
"""
Convert a string to a 'markdown' string by escaping control and syntax highlighting characters
:param s: The string to escape
:param iscode: The string appears in a (tick)(tick)code(tick)(tick) block
:return: The escaped string
"""
    # Is this JSON?
if iscode and type(s) == dict:
try:
js = json.dumps(s, sort_keys=True, indent=4, separators=(',', ': '))
return js
except:
return s
pass
elif iscode:
return s
else:
ms = s.replace("_", "\_")
return ms
def _json2markdown(jsonelement, elementname, jsonparent, parentpath, indenttabs=0):
"""
    Generate Markdown text from a JSON Schema element. If the schema element is an object, the object's
    properties will be included in the Markdown text
:param jsonelement: The schema element as a JSON dict
:param elementname: The name of the element, "" or None if it is the 'root' element
    :param jsonparent: The element's parent element, if any
    :param parentpath: A dotted string representing the lineage of the element, e.g. root.grand_parent.parent
    :param indenttabs: Number of tabs to indent the Markdown outline, equal to the recursive depth of the calls
    :return: A string of Markdown text representing the provided element and its descendant properties
"""
md = ""
elementtype = jsonelement.get("type", "")
if elementtype == "" and "$ref" in jsonelement.keys():
elementtype = "[{}](#{})".format(jsonelement.get("$ref"), jsonelement.get("$ref").split("/")[-1])
indent = "".join(["\t"] * indenttabs)
if elementname is not None and elementname != "":
if parentpath is None:
md += "{}+ <a id=\"{}\"></a> **{}**\n".format(indent, elementname.lower(), _mds(elementname))
else:
md += "{}+ <a id=\"{}.{}\"></a> **{}**\n".format(indent, parentpath.lower(), elementname.lower(),
_mds(elementname))
if elementtype != "" and (type(elementtype) == str or type(elementtype) == unicode):
md += "{}\t+ _Type:_ {}\n".format(indent, _mds(elementtype))
elif elementtype != "" and type(elementtype) == list:
md += "{}\t+ _Types:_ {}\n".format(indent, ",".join(map(lambda t: _mds(t), elementtype)))
elif jsonelement.get("oneOf", None) is not None:
md += "{}\t+ _Type one of:_ {}\n".format(indent, _mds(elementtype))
for et in jsonelement["oneOf"]:
if type(et) == str:
md += "{}\t\t+ {}\n".format(indent, et)
elif type(et) == dict and "$ref" in et.keys():
md += "{}\t\t+ [{}](#{})\n".format(indent, et["$ref"], et["$ref"].split("/")[-1])
if jsonparent is not None and "required" in jsonparent.keys() and elementname in jsonparent["required"]:
md += "{}\t+ _Required:_ True\n".format(indent)
else:
md += "{}\t+ _Required:_ False\n".format(indent)
md += "{}\t+ _Description:_ {}\n".format(indent, jsonelement.get("description", "None"))
if "enum" in jsonelement.keys():
md += "{}\t+ _Allowed values:_ {}\n".format(indent,
",".join(map(lambda o: "```" + o + "```", jsonelement["enum"])))
else:
md += "{}\t+ _Allowed values:_ Any\n".format(indent)
if "default" in jsonelement.keys():
d = _mds(jsonelement["default"], True)
md += "{}\t+ _Default:_ ```{}```\n".format(indent, d)
if elementtype == "object":
if elementname is not None and elementname != "":
md += "{}\t+ _Children_:\n\n".format(indent)
else:
md += "{}+ \n\n".format(indent)
if parentpath is not None and parentpath != "":
path = parentpath + "." + elementname
elif elementname is not None:
path = elementname
else:
path = None
for prop in jsonelement.get("properties", {}).keys():
md += _json2markdown(jsonelement["properties"][prop], prop, jsonelement, path, indenttabs + 1)
md += "\n"
elif elementtype in ["string", "boolean", "number"]:
pass
elif elementtype == "array":
md += "{}\t+ _Unique Items:_ {}\n".format(indent, jsonelement.get("uniqueItems", "False"))
md += "{}\t+ _Minimum Items:_ {}\n".format(indent, jsonelement.get("minItems", "NA"))
md += "{}\t+ _Maximum Items:_ {}\n".format(indent, jsonelement.get("maxItems", "NA"))
if "items" in jsonelement.keys():
md += _json2markdown(jsonelement["items"], "items", jsonelement, parentpath, indenttabs + 1)
else:
# raise ValueError("Unknown JSON Schema type <%s>" % jsonelement["type"])
pass
md += "\n"
return md
def _json_index_markdown(jsonelement, elementname):
"""
    Creates a Markdown text 'index' of all of the properties of the specified JSON element. The text is a list of
    each element name and its descendants. Each item in the list includes a link to a local hypertext anchor on
    the same page to the details of the item.
:param jsonelement: The element to generate an index for
:param elementname: The name of the element, "" or None if the root
:return: Markdown text as string
"""
md = ""
if elementname is not None and elementname != "":
md += "* [{}](#{})\n".format(_mds(elementname), elementname.lower())
if type(jsonelement) != dict:
pass
elif "type" in jsonelement.keys() and jsonelement["type"] == "object" and "properties" in jsonelement.keys():
for prop in jsonelement["properties"]:
if elementname is not None and elementname != "":
md += _json_index_markdown(jsonelement["properties"][prop], elementname + "." + prop)
else:
md += _json_index_markdown(jsonelement["properties"][prop], prop)
elif False in map(lambda ek: type(jsonelement[ek]) == dict, jsonelement.keys()):
pass
elif type(jsonelement) == dict and "type" not in jsonelement.keys():
for k in jsonelement.keys():
md += _json_index_markdown(jsonelement[k], k)
return md
def jsonschema_to_markdown(schema_filepath,
markdown_outputfile=None,
overwrite_outputfile=False,
logger=logging.getLogger()):
"""
Creates a Markdown representation of a JSON schema.
:param schema_filepath: Path to the schema to generate MD from
    :param markdown_outputfile: (optional) A file to write the generated MD to
:param overwrite_outputfile: Whether or not to overwrite an existing Markdown file
:param logger: (optional) where to log messages to. defaults to basic logger
:return: generated markdown as a string
"""
if not os.path.isfile(schema_filepath):
logger.error("File [{}] does not seem to exist".format(schema_filepath))
raise ValueError("No such file %s" % schema_filepath)
schema_file = open(schema_filepath, "r")
try:
schema = json.load(schema_file)
except ValueError as ve:
msg = "File [{}] does not seem to be JSON".format(schema_filepath)
logger.error(msg)
raise ve
if schema.get("type", "derp").lower() != 'object':
raise ValueError("File [{}] does not seem to be JSON a json schema".format(schema_filepath))
if schema.get("$schema", "derp") != "http://json-schema.org/draft-04/schema#":
raise ValueError("File [{}] is not a supported schema version <{}>".format(schema_filepath,
schema.get("$schema", "(no schema)")
)
)
mdfile = None
if markdown_outputfile is not None:
if markdown_outputfile == schema_filepath:
raise ValueError("Schema path and Markdown path are pointed to the same file!")
if os.path.isfile(markdown_outputfile) and not overwrite_outputfile:
logging.error("Markdown file [%s] exists. Remove or rerun script with --overwrite")
return None
elif os.path.isfile(markdown_outputfile):
os.remove(markdown_outputfile)
elif not os.path.isdir(os.sep.join(markdown_outputfile.split(os.sep)[:-1])):
os.makedirs(os.sep.join(markdown_outputfile.split(os.sep)[:-1]))
mdfile = open(markdown_outputfile, "w+")
mddict = dict()
mddict["title"] = schema.get("title", "No Title")
mddict["elements"] = dict()
mddict["references"] = dict()
emd = _json2markdown(jsonelement=schema,
elementname="",
jsonparent=None,
parentpath=None,
indenttabs=0)
rmd = ""
stds = ["type", "id", "description", "title", "$schema", "properties", "required"]
for skey in filter(lambda k: k not in stds and type(schema[k]) == dict, schema.keys()):
for rkey in schema[skey].keys():
rmd += _json2markdown(jsonelement=schema[skey][rkey],
elementname=rkey,
jsonparent=None,
parentpath=None,
indenttabs=0)
md = """
#*{}* schema documentation
#####Generated by [doctor\_jsonschema\_md](https://github.com/rdpickard/doctor_jsonschema_md)
---
#####Source file: ```{}```
#####Documentations generation date: {}
---
####Title: {}
####Description: {}
####Schema: {}
####ID: {}
####Properties Index:
{}
####References Index:
{}
####Properties Detail:
{}
####Object References
{}
""".format(_mds(schema_filepath.split("/")[-1]),
schema_filepath,
time.strftime("%Y-%m-%d %H:%M"),
schema.get("title", "None"),
schema.get("description", "_None_"),
schema.get("$schema", "_None_"),
schema.get("id", "_None_"),
_json_index_markdown(schema, ""),
_json_index_markdown(schema['definitions'], ""),
emd,
rmd)
if mdfile is not None:
mdfile.write(md)
mdfile.close()
return md
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Generate Markdown documentation from a JSON schema file')
parser.add_argument('--schemafile', type=str, required=True,
help='Path to schema file to use')
parser.add_argument('--outfile', type=str, required=False, default=None,
                        help='Path to the output Markdown file')
parser.add_argument('--overwrite', action='store_true', dest='overwrite', required=False,
default=False,
help='Overwrite markdown file if exists')
args = parser.parse_args()
mdc = jsonschema_to_markdown(args.schemafile, args.outfile, args.overwrite)
if args.outfile is None:
print mdc
| 39.649123 | 120 | 0.57354 | 1,277 | 11,300 | 4.984338 | 0.203602 | 0.020896 | 0.036764 | 0.011312 | 0.15271 | 0.117203 | 0.081697 | 0.043362 | 0.025452 | 0.013511 | 0 | 0.002231 | 0.285929 | 11,300 | 284 | 121 | 39.788732 | 0.786591 | 0.007699 | 0 | 0.204082 | 0 | 0 | 0.192455 | 0.002451 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.02551 | 0.02551 | null | null | 0.005102 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
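# Typical usage of the module above, either programmatically or via its CLI.
# The paths are placeholders; the input must be a draft-04 JSON schema of
# type "object", as enforced inside jsonschema_to_markdown.
from doctor_jsonschema_md import jsonschema_to_markdown

md = jsonschema_to_markdown("my_schema.json",
                            markdown_outputfile="docs/my_schema.md",
                            overwrite_outputfile=True)

# Equivalent command line:
#   python doctor_jsonschema_md.py --schemafile my_schema.json \
#       --outfile docs/my_schema.md --overwrite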
2e79e519ed71cec0148d3ba0c2654172c86976af | 4,737 | py | Python | zdiscord/service/integration/chat/discord/DiscordCommandMiddleware.py | xxdunedainxx/zdiscord | e79039621969fd7a2987ccac4e8d6fcff11ee754 | [
"MIT"
] | null | null | null | zdiscord/service/integration/chat/discord/DiscordCommandMiddleware.py | xxdunedainxx/zdiscord | e79039621969fd7a2987ccac4e8d6fcff11ee754 | [
"MIT"
] | 57 | 2020-06-05T18:33:17.000Z | 2020-08-17T18:28:37.000Z | zdiscord/service/integration/chat/discord/DiscordCommandMiddleware.py | xxdunedainxx/zdiscord | e79039621969fd7a2987ccac4e8d6fcff11ee754 | [
"MIT"
] | null | null | null | # contains connectors between discord api logic && command logic
from zdiscord.service.messaging.CommandFactory import CommandFactory
from zdiscord.service.ServiceFactory import ServiceFactory
from zdiscord.service.integration.chat.discord.DiscordEvents import DiscordEvent
from zdiscord.service.messaging.Events import EventConfig
from zdiscord.service.integration.chat.discord.macros.MacroFactory import MacroFactory
import importlib
import json
from typing import Any
class Command:
def __init__(self, command: str, responseMsg: str,type, logic: Any = None, fallBack = None, arg: str = '', syncMsg: str = None,description: str = None, example: str = None, retries: int = 3):
self.command = command
self.response = responseMsg
self.command_logic = logic
self.fallback_logic: Any = fallBack
        self.type = type
self.arg = arg
self.sync_msg: str = syncMsg
self.description = description
self.example = example
self.retries: int = retries
# execute command logic
def run(self, event: DiscordEvent):
        # no-op, must be overridden
raise Exception('THIS METHOD MUST BE OVERRIDDEN!!')
class SimpleStringCommand(Command):
def __init__(self, conf: {}):
super().__init__(
command=conf['command'] if 'command' in conf.keys() else None,
responseMsg=conf['responseMsg'] if 'responseMsg' in conf.keys() else None,
type=conf['type'] if 'type' in conf.keys() else None,
logic=conf['logic'] if 'logic' in conf.keys() else None,
fallBack=conf['fallBack'] if 'fallBack' in conf.keys() else None,
arg=conf['arg'] if 'arg' in conf.keys() else None,
syncMsg=conf['syncMsg'] if 'syncMsg' in conf.keys() else None,
description=conf['description'] if 'description' in conf.keys() else None,
example=conf['example'] if 'example' in conf.keys() else None,
retries=conf['retries'] if 'retries' in conf.keys() else 3,
)
# execute command logic
async def run(self, event: DiscordEvent):
await event.context['message_object'].channel.send(self.response)
class LambdaMessageCommand(Command):
def __init__(self, conf: {}):
super().__init__(
command=conf['command'] if 'command' in conf.keys() else None,
responseMsg=conf['responseMsg'] if 'responseMsg' in conf.keys() else None,
type=conf['type'] if 'type' in conf.keys() else None,
logic=conf['logic'] if 'logic' in conf.keys() else None,
fallBack=conf['fallBack'] if 'fallBack' in conf.keys() else None,
arg=conf['arg'] if 'arg' in conf.keys() else None,
syncMsg=conf['syncMsg'] if 'syncMsg' in conf.keys() else None,
description=conf['description'] if 'description' in conf.keys() else None,
example=conf['example'] if 'example' in conf.keys() else None,
retries=conf['retries'] if 'retries' in conf.keys() else 3,
)
# execute command logic
async def run(self, event: DiscordEvent):
services: {} = ServiceFactory.SERVICES
if self.sync_msg is not None:
await event.context['message_object'].channel.send(self.sync_msg)
await eval(self.command_logic)(event.parsed_message, event)
class StaticCommand(Command):
def __init__(self, conf: {}):
super().__init__(
command=conf['command'] if 'command' in conf.keys() else None,
responseMsg=conf['responseMsg'] if 'responseMsg' in conf.keys() else None,
type=conf['type'] if 'type' in conf.keys() else None,
logic=conf['logic'] if 'logic' in conf.keys() else None,
fallBack=conf['fallBack'] if 'fallBack' in conf.keys() else None,
arg=conf['arg'] if 'arg' in conf.keys() else None,
syncMsg=conf['syncMsg'] if 'syncMsg' in conf.keys() else None,
description=conf['description'] if 'description' in conf.keys() else None,
example=conf['example'] if 'example' in conf.keys() else None,
retries=conf['retries'] if 'retries' in conf.keys() else 3,
)
# execute command logic
async def run(self, event: DiscordEvent):
services: {} = ServiceFactory.SERVICES
if self.sync_msg is not None:
await event.context['message_object'].channel.send(self.sync_msg)
await eval(self.command_logic)
class DiscordCommandMiddleware(CommandFactory):
def __init__(self, conf: {}):
super().__init__(conf=conf)
async def execute_cmd(self, event: DiscordEvent, eventConfig: EventConfig):
await self._COMMAND_CONFIGS[eventConfig.lookup].run(event) | 49.863158 | 195 | 0.649778 | 585 | 4,737 | 5.17265 | 0.14188 | 0.059484 | 0.099141 | 0.138797 | 0.668539 | 0.659617 | 0.624587 | 0.624587 | 0.609716 | 0.609716 | 0 | 0.001097 | 0.230315 | 4,737 | 95 | 196 | 49.863158 | 0.828854 | 0.037365 | 0 | 0.567901 | 0 | 0 | 0.108476 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.098765 | 0 | 0.234568 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
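# Each command class above is constructed from a plain configuration dict and
# reads only the keys shown in its __init__. A sketch of a config for
# SimpleStringCommand; how "type" is consumed by the surrounding factory is
# not shown in this file, so the value here is illustrative.
ping_conf = {
    "command": "ping",
    "responseMsg": "pong",
    "type": "simple",
    "description": "Replies with pong",
    "example": "!ping",
    "retries": 3,
}
ping = SimpleStringCommand(ping_conf)
# Inside an async handler, given a DiscordEvent named event:
#   await ping.run(event)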
2e7bd3d7ad84ae06f1e5357bb603f820856ea85c | 866 | py | Python | 0656 Gene Mutation Groups.py | ansabgillani/binarysearchcomproblems | 12fe8632f8cbb5058c91a55bae53afa813a3247e | [
"MIT"
] | 1 | 2020-12-29T21:17:26.000Z | 2020-12-29T21:17:26.000Z | 0656 Gene Mutation Groups.py | ansabgillani/binarysearchcomproblems | 12fe8632f8cbb5058c91a55bae53afa813a3247e | [
"MIT"
] | null | null | null | 0656 Gene Mutation Groups.py | ansabgillani/binarysearchcomproblems | 12fe8632f8cbb5058c91a55bae53afa813a3247e | [
"MIT"
] | 4 | 2021-09-09T17:42:43.000Z | 2022-03-18T04:54:03.000Z | class Solution:
def solve(self, genes):
ans = 0
seen = set()
genes = set(genes)
for gene in genes:
if gene in seen:
continue
ans += 1
dfs = [gene]
seen.add(gene)
while dfs:
cur = dfs.pop()
cur_list = list(cur)
for i in range(len(cur)):
for char in ["A","C","G","T"]:
if char == cur[i]: continue
cur_list[i] = char
new_gene = "".join(cur_list)
if new_gene in genes and new_gene not in seen:
seen.add(new_gene)
dfs.append(new_gene)
cur_list[i] = cur[i]
return ans
| 26.242424 | 70 | 0.362587 | 91 | 866 | 3.351648 | 0.395604 | 0.114754 | 0.072131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005141 | 0.550808 | 866 | 32 | 71 | 27.0625 | 0.77892 | 0 | 0 | 0 | 0 | 0 | 0.004619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
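# Usage of the solution above: solve() counts connected components where two
# genes are adjacent if they differ in exactly one position over the A/C/G/T
# alphabet. "ACGT" and "ACGA" differ in one position, so they share a group,
# while "TTTT" is more than one mutation away from both.
s = Solution()
assert s.solve(["ACGT", "ACGA", "TTTT"]) == 2
assert s.solve([]) == 0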
2e826e5485a0fdc41f7a7c7fdbeb5bb0e08fa5d3 | 286 | py | Python | Server/utils/blueprints/Logging.py | thearyadev/Security-System | f9fa48196eef4dc83a9059e10e3c97e2f0842b8d | [
"MIT"
] | 1 | 2022-02-26T21:43:19.000Z | 2022-02-26T21:43:19.000Z | Server/utils/blueprints/Logging.py | thearyadev/Security-System | f9fa48196eef4dc83a9059e10e3c97e2f0842b8d | [
"MIT"
] | null | null | null | Server/utils/blueprints/Logging.py | thearyadev/Security-System | f9fa48196eef4dc83a9059e10e3c97e2f0842b8d | [
"MIT"
] | null | null | null | from _testcapi import instancemethod
from .ParentView import View
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from ..Server import Server
class Logging(View):
def __init__(self, server: 'Server'):
super().__init__(name=self.__class__.__name__, server=server)
| 22 | 69 | 0.755245 | 36 | 286 | 5.472222 | 0.5 | 0.121827 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160839 | 286 | 12 | 70 | 23.833333 | 0.820833 | 0 | 0 | 0 | 0 | 0 | 0.020979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.5 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2e87c0bdf562862b65d0f64835fcb315496fe1df | 9,494 | py | Python | evalution/composes/composition/composition_model.py | esantus/evalution2 | 622a9faf729b7c704ad45047911b9a03cf7c8dae | [
"MIT"
] | 1 | 2017-12-06T21:46:26.000Z | 2017-12-06T21:46:26.000Z | evalution/composes/composition/composition_model.py | esantus/EVALution-2.0 | 622a9faf729b7c704ad45047911b9a03cf7c8dae | [
"MIT"
] | 5 | 2020-03-24T15:27:40.000Z | 2021-06-01T21:47:18.000Z | evalution/composes/composition/composition_model.py | esantus/EVALution-2.0 | 622a9faf729b7c704ad45047911b9a03cf7c8dae | [
"MIT"
] | 1 | 2018-02-15T17:13:02.000Z | 2018-02-15T17:13:02.000Z | '''
Created on Oct 5, 2012
@author: Georgiana Dinu, Pham The Nghia
'''
import time
import math
from warnings import warn
from composes.semantic_space.space import Space
from composes.matrix.dense_matrix import DenseMatrix
from composes.utils.gen_utils import assert_is_instance
from composes.utils.matrix_utils import resolve_type_conflict
from composes.utils.io_utils import create_parent_directories
import logging
from composes.utils import log_utils as log
logger = logging.getLogger(__name__)
class CompositionModel(object):
"""
Parent class of the composition models.
"""
_name = "no name"
MAX_MEM_OVERHEAD = 0.2
"""
double, in interval [0,1]
maximum overhead allowed: MAX_MEM_OVERHEAD ratio of argument space memory
when composing
"""
composed_id2column = None
"""
    List of strings, the column strings of the resulting composed space.
"""
def __init__(self):
"""
Constructor
"""
def train(self, train_data, arg_space, phrase_space):
"""
Trains a composition model and sets its learned parameters.
Args:
train_data: list of string tuples. Each tuple contains 3
string elements: (arg1, arg2, phrase).
arg_space: argument space(s). Space object or a tuple of two
Space objects (e.g. my_space, or (my_space1, my_space2)).
If two spaces are provided, arg1 elements of train data are
interpreted in space1, and arg2 in space2.
phrase space: phrase space, of type Space.
Calls the specific training routine of the current composition
model. Training tuples which contain strings not found in their
respective spaces are ignored.
        The id2column attribute of the resulting composed space is set to
        be equal to that of the phrase space given as input.
"""
start = time.time()
arg1_space, arg2_space = self.extract_arg_spaces(arg_space)
arg1_list, arg2_list, phrase_list = self.valid_data_to_lists(train_data,
(arg1_space.row2id,
arg2_space.row2id,
phrase_space.row2id)
)
self._train(arg1_space, arg2_space, phrase_space,
arg1_list, arg2_list, phrase_list)
self.composed_id2column = phrase_space.id2column
log.print_composition_model_info(logger, self, 1, "\nTrained composition model:")
log.print_info(logger, 2, "With total data points:%s" % len(arg1_list))
log.print_matrix_info(logger, arg1_space.cooccurrence_matrix, 3,
"Semantic space of argument 1:")
log.print_matrix_info(logger, arg2_space.cooccurrence_matrix, 3,
"Semantic space of argument 2:")
log.print_matrix_info(logger, phrase_space.cooccurrence_matrix, 3,
"Semantic space of phrases:")
log.print_time_info(logger, time.time(), start, 2)
def _train(self, arg1_space, arg2_space, phrase_space, arg1_list, arg2_list, phrase_list):
arg1_mat = arg1_space.get_rows(arg1_list)
arg2_mat = arg2_space.get_rows(arg2_list)
phrase_mat = phrase_space.get_rows(phrase_list)
[arg1_mat, arg2_mat, phrase_mat] = resolve_type_conflict([arg1_mat,
arg2_mat,
phrase_mat],
DenseMatrix)
self._solve(arg1_mat, arg2_mat, phrase_mat)
def compose(self, data, arg_space):
"""
Uses a composition model to compose elements.
Args:
data: data to be composed. List of tuples, each containing 3
strings: (arg1, arg2, composed_phrase). arg1 and arg2 are the
elements to be composed and composed_phrase is the string associated
to their composition.
arg_space: argument space(s). Space object or a tuple of two
Space objects (e.g. my_space, or (my_space1, my_space2)).
If two spaces are provided, arg1 elements of data are
interpreted in space1, and arg2 in space2.
Returns:
composed space: a new object of type Space, containing the
phrases obtained through composition.
"""
start = time.time()
arg1_space, arg2_space = self.extract_arg_spaces(arg_space)
arg1_list, arg2_list, phrase_list = self.valid_data_to_lists(data,
(arg1_space.row2id,
arg2_space.row2id,
None))
# we try to achieve at most MAX_MEM_OVERHEAD*phrase_space memory overhead
# the /3.0 is needed
# because the composing data needs 3 * len(train_data) memory (arg1 vector, arg2 vector, phrase vector)
chunk_size = int(max(arg1_space.cooccurrence_matrix.shape[0],arg2_space.cooccurrence_matrix.shape[0],len(phrase_list))
* self.MAX_MEM_OVERHEAD / 3.0) + 1
composed_mats = []
for i in range(int(math.ceil(len(arg1_list) / float(chunk_size)))):
beg, end = i*chunk_size, min((i+1)*chunk_size, len(arg1_list))
arg1_mat = arg1_space.get_rows(arg1_list[beg:end])
arg2_mat = arg2_space.get_rows(arg2_list[beg:end])
[arg1_mat, arg2_mat] = resolve_type_conflict([arg1_mat, arg2_mat],
DenseMatrix)
composed_mat = self._compose(arg1_mat, arg2_mat)
composed_mats.append(composed_mat)
composed_phrase_mat = composed_mat.nary_vstack(composed_mats)
if self.composed_id2column is None:
self.composed_id2column = self._build_id2column(arg1_space, arg2_space)
log.print_name(logger, self, 1, "\nComposed with composition model:")
log.print_info(logger, 3, "Composed total data points:%s" % arg1_mat.shape[0])
log.print_matrix_info(logger, composed_phrase_mat, 4,
"Resulted (composed) semantic space::")
log.print_time_info(logger, time.time(), start, 2)
return Space(composed_phrase_mat, phrase_list, self.composed_id2column)
@classmethod
def extract_arg_spaces(cls, arg_space):
"""
TO BE MOVED TO A UTILS MODULE!
"""
if not isinstance(arg_space, tuple):
arg1_space = arg_space
arg2_space = arg_space
else:
if len(arg_space) != 2:
raise ValueError("expected two spaces, received %d-ary tuple "
% len(arg_space))
arg1_space, arg2_space = arg_space
assert_is_instance(arg1_space, Space)
assert_is_instance(arg2_space, Space)
cls._assert_space_match(arg1_space, arg2_space)
return arg1_space, arg2_space
@classmethod
def _assert_space_match(cls, arg1_space, arg2_space, phrase_space=None):
if arg1_space.id2column != arg2_space.id2column:
raise ValueError("Argument spaces do not have identical columns!")
if not phrase_space is None:
if arg1_space.id2column != phrase_space.id2column:
raise ValueError("Argument and phrase space do not have identical columns!")
def _build_id2column(self, arg1_space, arg2_space):
return arg1_space.id2column
def valid_data_to_lists(self, data, (row2id1, row2id2, row2id3)):
"""
TO BE MOVED TO A UTILS MODULE!
"""
list1 = []
list2 = []
list3 = []
j = 0
for i in xrange(len(data)):
sample = data[i]
cond = True
if not row2id1 is None:
cond = cond and sample[0] in row2id1
if not row2id2 is None:
cond = cond and sample[1] in row2id2
if not row2id3 is None:
cond = cond and sample[2] in row2id3
if cond:
list1.append(sample[0])
list2.append(sample[1])
list3.append(sample[2])
j += 1
if i + 1 != j:
warn("%d (out of %d) lines are ignored because one of the elements is not found in its semantic space"
% ((i + 1) - j, (i + 1)))
if not list1:
raise ValueError("No valid data found for training/composition!")
return list1, list2, list3
def export(self, filename):
"""
Prints the parameters of the composition model to file.
Args:
filename: output filename, string
Prints the parameters of the compositional model in an appropriate
format, specific to each model.
"""
create_parent_directories(filename)
self._export(filename)
def get_name(self):
return self._name
name = property(get_name)
"""
String, name of the composition model.
"""
| 36.515385 | 126 | 0.58321 | 1,130 | 9,494 | 4.686726 | 0.200885 | 0.035687 | 0.029079 | 0.033988 | 0.373301 | 0.27398 | 0.232817 | 0.191465 | 0.147659 | 0.106118 | 0 | 0.029582 | 0.344849 | 9,494 | 259 | 127 | 36.656371 | 0.821865 | 0.020223 | 0 | 0.111111 | 0 | 0.007937 | 0.076767 | 0.003053 | 0 | 0 | 0 | 0 | 0.039683 | 0 | null | null | 0 | 0.079365 | null | null | 0.079365 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
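# CompositionModel is abstract: concrete models supply _solve (training),
# _compose (application), and optionally _export. An illustrative subclass
# that composes phrases by vector addition; this is a sketch written against
# the excerpt above, not necessarily one of the library's shipped models.
class SimpleAdditive(CompositionModel):

    _name = "simple_additive"

    def _solve(self, arg1_mat, arg2_mat, phrase_mat):
        pass  # plain addition has no parameters to learn

    def _compose(self, arg1_mat, arg2_mat):
        return arg1_mat + arg2_mat

    def _export(self, filename):
        with open(filename, "w") as f:
            f.write("simple_additive: no parameters\n")

# Hypothetical usage, given a Space object named my_space:
#   composed_space = SimpleAdditive().compose([("red", "car", "red_car")], my_space)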
2e8c0b9ec91bd9f40ae51c082d96946532cbb9b5 | 1,139 | py | Python | test/plugin_support_test.py | spbrogan/rvc2mqtt | fca47bd302c895ae89af21d08a652fa595ea482b | [
"Apache-2.0"
] | 6 | 2022-01-16T18:36:03.000Z | 2022-03-10T02:01:24.000Z | test/plugin_support_test.py | ccirrinc/rvc2mqtt | 108328323357e55be6242887b964ca936ea7b3fc | [
"Apache-2.0"
] | 38 | 2022-01-09T22:20:36.000Z | 2022-03-21T06:28:46.000Z | test/plugin_support_test.py | ccirrinc/rvc2mqtt | 108328323357e55be6242887b964ca936ea7b3fc | [
"Apache-2.0"
] | 1 | 2022-01-30T00:20:40.000Z | 2022-01-30T00:20:40.000Z | """
Unit tests for the plugin_support module
This is just a hack to invoke it, not a real unit test
Copyright 2022 Sean Brogan
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import unittest
import context # add rvc2mqtt package to the python path using local reference
from rvc2mqtt.plugin_support import PluginSupport
p_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'rvc2mqtt', "entity"))
if __name__ == '__main__':
ps = PluginSupport( p_path, {})
fm = []
ps.register_with_factory_the_entity_plugins(fm) # will be list of tuples (dict of match parameters, class)
print(fm) | 33.5 | 111 | 0.765584 | 179 | 1,139 | 4.75419 | 0.620112 | 0.070505 | 0.030552 | 0.037603 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01357 | 0.158911 | 1,139 | 34 | 112 | 33.5 | 0.874739 | 0.698859 | 0 | 0 | 0 | 0 | 0.071856 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
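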
5cf3021973cd6275c26935e79845f01ddda593d2 | 449 | py | Python | core/urls.py | kevincornish/Genesis | 6bc424fe97be954776dec2bdc4c7d214992cc3e2 | [
"MIT"
] | null | null | null | core/urls.py | kevincornish/Genesis | 6bc424fe97be954776dec2bdc4c7d214992cc3e2 | [
"MIT"
] | null | null | null | core/urls.py | kevincornish/Genesis | 6bc424fe97be954776dec2bdc4c7d214992cc3e2 | [
"MIT"
] | null | null | null | from django.conf.urls import url, include
from django.contrib import admin
from core import views
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^profile/$', views.profile, name='profile'),
url(r'^login/$', views.doLogin, name='login'),
    url(r'^logout/$', views.doLogout, name='logout'),
url(r'^signup/$', views.doSignup, name='signup'),
] | 29.933333 | 54 | 0.683742 | 63 | 449 | 4.873016 | 0.380952 | 0.065147 | 0.136808 | 0.117264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138085 | 449 | 15 | 55 | 29.933333 | 0.793282 | 0 | 0 | 0 | 0 | 0 | 0.146667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.416667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
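# A hedged sketch of what one of the referenced views might look like; the
# real implementations live in core/views.py and are not shown here, so the
# template name is a placeholder.
from django.shortcuts import render

def home(request):
    return render(request, 'home.html')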
cf0562de443d67001cb57718f67ebe5a0d8e1e3c | 4,559 | py | Python | awacs/redshift.py | calebmarcus/awacs | ce7d32cea496c714f501bede4e4052db6f1ee8a2 | [
"BSD-2-Clause"
] | null | null | null | awacs/redshift.py | calebmarcus/awacs | ce7d32cea496c714f501bede4e4052db6f1ee8a2 | [
"BSD-2-Clause"
] | null | null | null | awacs/redshift.py | calebmarcus/awacs | ce7d32cea496c714f501bede4e4052db6f1ee8a2 | [
"BSD-2-Clause"
] | null | null | null | # Copyright (c) 2012-2013, Mark Peek <mark@peek.org>
# All rights reserved.
#
# See LICENSE file for full license.
from aws import Action as BaseAction
from aws import BaseARN
service_name = 'Amazon Redshift'
prefix = 'redshift'
class Action(BaseAction):
def __init__(self, action=None):
sup = super(Action, self)
sup.__init__(prefix, action)
class ARN(BaseARN):
def __init__(self, resource='', region='', account=''):
sup = super(ARN, self)
sup.__init__(service=prefix, resource=resource, region=region,
account=account)
AuthorizeClusterSecurityGroupIngress = \
Action('AuthorizeClusterSecurityGroupIngress')
AuthorizeSnapshotAccess = Action('AuthorizeSnapshotAccess')
CancelQuerySession = Action('CancelQuerySession')
CopyClusterSnapshot = Action('CopyClusterSnapshot')
CreateCluster = Action('CreateCluster')
CreateClusterParameterGroup = Action('CreateClusterParameterGroup')
CreateClusterSecurityGroup = Action('CreateClusterSecurityGroup')
CreateClusterSnapshot = Action('CreateClusterSnapshot')
CreateClusterSubnetGroup = Action('CreateClusterSubnetGroup')
CreateClusterUser = Action('CreateClusterUser')
CreateEventSubscription = Action('CreateEventSubscription')
CreateHsmClientCertificate = Action('CreateHsmClientCertificate')
CreateHsmConfiguration = Action('CreateHsmConfiguration')
CreateSnapshotCopyGrant = Action('CreateSnapshotCopyGrant')
CreateTags = Action('CreateTags')
DeleteCluster = Action('DeleteCluster')
DeleteClusterParameterGroup = Action('DeleteClusterParameterGroup')
DeleteClusterSecurityGroup = Action('DeleteClusterSecurityGroup')
DeleteClusterSnapshot = Action('DeleteClusterSnapshot')
DeleteClusterSubnetGroup = Action('DeleteClusterSubnetGroup')
DeleteEventSubscription = Action('DeleteEventSubscription')
DeleteHsmClientCertificate = Action('DeleteHsmClientCertificate')
DeleteHsmConfiguration = Action('DeleteHsmConfiguration')
DeleteSnapshotCopyGrant = Action('DeleteSnapshotCopyGrant')
DeleteTags = Action('DeleteTags')
DescribeClusterParameterGroups = Action('DescribeClusterParameterGroups')
DescribeClusterParameters = Action('DescribeClusterParameters')
DescribeClusterSecurityGroups = Action('DescribeClusterSecurityGroups')
DescribeClusterSnapshots = Action('DescribeClusterSnapshots')
DescribeClusterSubnetGroups = Action('DescribeClusterSubnetGroups')
DescribeClusterVersions = Action('DescribeClusterVersions')
DescribeClusters = Action('DescribeClusters')
DescribeDefaultClusterParameters = \
Action('DescribeDefaultClusterParameters')
DescribeEventCategories = Action('DescribeEventCategories')
DescribeEventSubscriptions = Action('DescribeEventSubscriptions')
DescribeEvents = Action('DescribeEvents')
DescribeHsmClientCertificates = Action('DescribeHsmClientCertificates')
DescribeHsmConfigurations = Action('DescribeHsmConfigurations')
DescribeLoggingStatus = Action('DescribeLoggingStatus')
DescribeOrderableClusterOptions = \
Action('DescribeOrderableClusterOptions')
DescribeReservedNodeOfferings = Action('DescribeReservedNodeOfferings')
DescribeReservedNodes = Action('DescribeReservedNodes')
DescribeResize = Action('DescribeResize')
DescribeSnapshotCopyGrants = Action('DescribeSnapshotCopyGrants')
DescribeTableRestoreStatus = Action('DescribeTableRestoreStatus')
DescribeTags = Action('DescribeTags')
DisableLogging = Action('DisableLogging')
DisableSnapshotCopy = Action('DisableSnapshotCopy')
EnableLogging = Action('EnableLogging')
EnableSnapshotCopy = Action('EnableSnapshotCopy')
GetClusterCredentials = Action('GetClusterCredentials')
JoinGroup = Action('JoinGroup')
ModifyCluster = Action('ModifyCluster')
ModifyClusterIamRoles = Action('ModifyClusterIamRoles')
ModifyClusterParameterGroup = Action('ModifyClusterParameterGroup')
ModifyClusterSubnetGroup = Action('ModifyClusterSubnetGroup')
ModifyEventSubscription = Action('ModifyEventSubscription')
ModifySnapshotCopyRetentionPeriod = \
Action('ModifySnapshotCopyRetentionPeriod')
PurchaseReservedNodeOffering = Action('PurchaseReservedNodeOffering')
RebootCluster = Action('RebootCluster')
ResetClusterParameterGroup = Action('ResetClusterParameterGroup')
RestoreFromClusterSnapshot = Action('RestoreFromClusterSnapshot')
RestoreTableFromClusterSnapshot = \
Action('RestoreTableFromClusterSnapshot')
RevokeClusterSecurityGroupIngress = \
Action('RevokeClusterSecurityGroupIngress')
RevokeSnapshotAccess = Action('RevokeSnapshotAccess')
RotateEncryptionKey = Action('RotateEncryptionKey')
ViewQueriesInConsole = Action('ViewQueriesInConsole')
| 46.050505 | 73 | 0.830884 | 274 | 4,559 | 13.762774 | 0.383212 | 0.004243 | 0.006895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001906 | 0.079403 | 4,559 | 98 | 74 | 46.520408 | 0.896593 | 0.023251 | 0 | 0 | 0 | 0 | 0.341727 | 0.265962 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.023256 | 0 | 0.069767 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
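# These Action constants are normally combined with the policy helpers in
# awacs.aws. A sketch of a policy allowing two Redshift actions; the helper
# names (Allow, Statement, Policy) are assumed from awacs' usual API.
from awacs.aws import Allow, Policy, Statement

policy = Policy(
    Statement=[
        Statement(
            Effect=Allow,
            Action=[DescribeClusters, RebootCluster],
            Resource=["*"],
        )
    ]
)
print(policy.to_json())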
cf10ed3e2eb2a9811b1265ceb9d30e65333e6e5f | 1,019 | py | Python | daiquiri/metadata/management/commands/update_access_level.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 14 | 2018-12-23T18:35:02.000Z | 2021-12-15T04:55:12.000Z | daiquiri/metadata/management/commands/update_access_level.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 40 | 2018-12-20T12:44:05.000Z | 2022-03-21T11:35:20.000Z | daiquiri/metadata/management/commands/update_access_level.py | agy-why/daiquiri | 4d3e2ce51e202d5a8f1df404a0094a4e018dcb4d | [
"Apache-2.0"
] | 5 | 2019-05-16T08:03:35.000Z | 2021-08-23T20:03:11.000Z | from django.core.management.base import BaseCommand,CommandError
from django.utils.translation import ugettext_lazy as _
from daiquiri.core.constants import ACCESS_LEVEL_CHOICES
from daiquiri.metadata.models import Schema
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('schema', help='the schema to be updated')
parser.add_argument('access_level', help='new access_level and metadata_access_level')
def handle(self, *args, **options):
if options['access_level'] not in dict(ACCESS_LEVEL_CHOICES):
raise CommandError(_('Unknown access_level.'))
schema = Schema.objects.get(name=options['schema'])
schema.access_level = options['access_level']
schema.metadata_access_level = options['access_level']
schema.save()
for table in schema.tables.all():
table.access_level = options['access_level']
table.metadata_access_level = options['access_level']
table.save()
| 36.392857 | 94 | 0.708538 | 124 | 1,019 | 5.612903 | 0.427419 | 0.237069 | 0.12931 | 0.137931 | 0.221264 | 0.221264 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191364 | 1,019 | 27 | 95 | 37.740741 | 0.84466 | 0 | 0 | 0 | 0 | 0 | 0.167812 | 0.020608 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.210526 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf2211cf683c450a9ee0fff5f91b9dcf3295ad09 | 1,314 | py | Python | 04 - Class vs Static Methods/helper.py | ThiagoPiovesan/OOP-Python | 02e2ae6efad87524c1a9ce50d7dac6ebfd392d4e | [
"MIT"
] | null | null | null | 04 - Class vs Static Methods/helper.py | ThiagoPiovesan/OOP-Python | 02e2ae6efad87524c1a9ce50d7dac6ebfd392d4e | [
"MIT"
] | null | null | null | 04 - Class vs Static Methods/helper.py | ThiagoPiovesan/OOP-Python | 02e2ae6efad87524c1a9ce50d7dac6ebfd392d4e | [
"MIT"
] | null | null | null | #--------------------------------------------------------------------#
# Helper program.
# Created by: Jim - https://www.youtube.com/watch?v=XCgWYx-lGl8
# Changed by: Thiago Piovesan
#--------------------------------------------------------------------#
# When to use class methods and when to use static methods ?
#--------------------------------------------------------------------#
class Item:
@staticmethod
def is_integer():
'''
This should do something that has a relationship
with the class, but not something that must be unique
per instance!
'''
@classmethod
def instantiate_from_something(cls):
'''
This should also do something that has a relationship
with the class, but usually, those are used to
manipulate different structures of data to instantiate
objects, like we have done with CSV.
'''
# THE ONLY DIFFERENCE BETWEEN THESE:
# Static methods do not receive the object reference as an implicit first argument in the background!
#--------------------------------------------------------------------#
# NOTE: However, these can also be called from instances.
item1 = Item()
item1.is_integer()
item1.instantiate_from_something()
#--------------------------------------------------------------------# | 37.542857 | 94 | 0.503805 | 130 | 1,314 | 5.046154 | 0.630769 | 0.059451 | 0.027439 | 0.054878 | 0.140244 | 0.140244 | 0.140244 | 0.140244 | 0.140244 | 0.140244 | 0 | 0.003724 | 0.182648 | 1,314 | 35 | 95 | 37.542857 | 0.607076 | 0.760274 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
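# A concrete version of the distinction described above, using hypothetical
# names: the classmethod receives cls implicitly and works as an alternate
# constructor, while the staticmethod receives no implicit first argument.
class Phone:
    def __init__(self, name, price):
        self.name = name
        self.price = price

    @classmethod
    def from_dict(cls, data):
        # cls is passed in the background, so subclasses get subclass instances
        return cls(data["name"], data["price"])

    @staticmethod
    def is_integer(num):
        # nothing is passed in the background; just a utility grouped here
        return float(num).is_integer()

phone = Phone.from_dict({"name": "Phone 10", "price": 500})
assert Phone.is_integer(phone.price)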
cf225d26cd1440c60b616f74880dc9c6c8368650 | 12,146 | py | Python | tests/test_dataset_and_algorithm_match.py | HPI-Information-Systems/TimeEval | 9b2717b89decd57dd09e04ad94c120f13132d7b8 | [
"MIT"
] | 2 | 2022-01-29T03:46:31.000Z | 2022-02-14T14:06:35.000Z | tests/test_dataset_and_algorithm_match.py | HPI-Information-Systems/TimeEval | 9b2717b89decd57dd09e04ad94c120f13132d7b8 | [
"MIT"
] | null | null | null | tests/test_dataset_and_algorithm_match.py | HPI-Information-Systems/TimeEval | 9b2717b89decd57dd09e04ad94c120f13132d7b8 | [
"MIT"
] | null | null | null | import tempfile
import unittest
from pathlib import Path
from typing import Iterable
import numpy as np
from tests.fixtures.algorithms import SupervisedDeviatingFromMean
from timeeval import (
TimeEval,
Algorithm,
Datasets,
TrainingType,
InputDimensionality,
Status,
Metric,
ResourceConstraints,
DatasetManager
)
from timeeval.datasets import Dataset, DatasetRecord
from timeeval.experiments import Experiment, Experiments
from timeeval.utils.hash_dict import hash_dict
class TestDatasetAndAlgorithmMatch(unittest.TestCase):
def setUp(self) -> None:
self.dmgr = DatasetManager("./tests/example_data")
self.algorithms = [
Algorithm(
name="supervised_deviating_from_mean",
main=SupervisedDeviatingFromMean(),
training_type=TrainingType.SUPERVISED,
input_dimensionality=InputDimensionality.UNIVARIATE,
data_as_file=False
)
]
def _prepare_dmgr(self, path: Path,
training_type: Iterable[str] = ("unsupervised",),
dimensionality: Iterable[str] = ("univariate",)) -> Datasets:
dmgr = DatasetManager(path / "data")
for t, d in zip(training_type, dimensionality):
dmgr.add_dataset(DatasetRecord(
collection_name="test",
dataset_name=f"dataset-{t}-{d}",
train_path="train.csv",
test_path="test.csv",
dataset_type="synthetic",
datetime_index=False,
split_at=-1,
train_type=t,
train_is_normal=True if t == "semi-supervised" else False,
input_type=d,
length=10000,
dimensions=5 if d == "multivariate" else 1,
contamination=0.1,
num_anomalies=1,
min_anomaly_length=100,
median_anomaly_length=100,
max_anomaly_length=100,
mean=0.0,
stddev=1.0,
trend="no-trend",
stationarity="stationary",
period_size=50
))
return dmgr
def test_supervised_algorithm(self):
with tempfile.TemporaryDirectory() as tmp_path:
timeeval = TimeEval(self.dmgr, [("test", "dataset-datetime")], self.algorithms,
repetitions=1,
results_path=Path(tmp_path))
timeeval.run()
results = timeeval.get_results(aggregated=False)
np.testing.assert_array_almost_equal(results["ROC_AUC"].values, [0.810225])
def test_mismatched_training_type(self):
algo = Algorithm(
name="supervised_deviating_from_mean",
main=SupervisedDeviatingFromMean(),
training_type=TrainingType.SEMI_SUPERVISED,
data_as_file=False
)
with tempfile.TemporaryDirectory() as tmp_path:
timeeval = TimeEval(self.dmgr, [("test", "dataset-datetime")], [algo],
repetitions=1,
results_path=Path(tmp_path),
skip_invalid_combinations=False)
timeeval.run()
results = timeeval.get_results(aggregated=False)
self.assertEqual(results.loc[0, "status"], Status.ERROR)
self.assertIn("training type", results.loc[0, "error_message"])
self.assertIn("incompatible", results.loc[0, "error_message"])
def test_mismatched_input_dimensionality(self):
algo = Algorithm(
name="supervised_deviating_from_mean",
main=SupervisedDeviatingFromMean(),
input_dimensionality=InputDimensionality.UNIVARIATE,
data_as_file=False
)
with tempfile.TemporaryDirectory() as tmp_path:
tmp_path = Path(tmp_path)
dmgr = self._prepare_dmgr(tmp_path, training_type=["supervised"], dimensionality=["multivariate"])
timeeval = TimeEval(dmgr, [("test", "dataset-supervised-multivariate")], [algo],
repetitions=1,
results_path=tmp_path,
skip_invalid_combinations=False)
timeeval.run()
results = timeeval.get_results(aggregated=False)
self.assertEqual(results.loc[0, "status"], Status.ERROR)
self.assertIn("input dimensionality", results.loc[0, "error_message"])
self.assertIn("incompatible", results.loc[0, "error_message"])
def test_missing_training_dataset_timeeval(self):
with tempfile.TemporaryDirectory() as tmp_path:
timeeval = TimeEval(self.dmgr, [("test", "dataset-int")], self.algorithms,
repetitions=1,
results_path=Path(tmp_path),
skip_invalid_combinations=False)
timeeval.run()
results = timeeval.get_results(aggregated=False)
self.assertEqual(results.loc[0, "status"], Status.ERROR)
self.assertIn("training dataset", results.loc[0, "error_message"])
self.assertIn("not found", results.loc[0, "error_message"])
def test_missing_training_dataset_experiment(self):
exp = Experiment(
dataset=Dataset(
datasetId=("test", "dataset-datetime"),
dataset_type="synthetic",
training_type=TrainingType.SUPERVISED,
num_anomalies=1,
dimensions=1,
length=3000,
contamination=0.0002777777777777778,
min_anomaly_length=1,
median_anomaly_length=1,
max_anomaly_length=1,
period_size=None
),
algorithm=self.algorithms[0],
params={},
params_id=hash_dict({}),
repetition=0,
base_results_dir=Path("tmp_path"),
resource_constraints=ResourceConstraints(),
metrics=Metric.default_list(),
resolved_test_dataset_path=self.dmgr.get_dataset_path(("test", "dataset-datetime")),
resolved_train_dataset_path=None
)
with self.assertRaises(ValueError) as e:
exp._perform_training()
self.assertIn("No training dataset", str(e.exception))
def test_dont_skip_invalid_combinations(self):
datasets = [self.dmgr.get(d) for d in self.dmgr.select()]
exps = Experiments(
dmgr=self.dmgr,
datasets=datasets,
algorithms=self.algorithms,
metrics=Metric.default_list(),
base_result_path=Path("tmp_path"),
skip_invalid_combinations=False
)
self.assertEqual(len(exps), len(datasets) * len(self.algorithms))
def test_skip_invalid_combinations(self):
datasets = [self.dmgr.get(d) for d in self.dmgr.select()]
exps = Experiments(
dmgr=self.dmgr,
datasets=datasets,
algorithms=self.algorithms,
metrics=Metric.default_list(),
base_result_path=Path("tmp_path"),
skip_invalid_combinations=True
)
self.assertEqual(len(exps), 1)
exp = list(exps)[0]
self.assertEqual(exp.dataset.training_type, exp.algorithm.training_type)
self.assertEqual(exp.dataset.input_dimensionality, exp.algorithm.input_dimensionality)
def test_force_training_type_match(self):
algo = Algorithm(
name="supervised_deviating_from_mean2",
main=SupervisedDeviatingFromMean(),
training_type=TrainingType.SUPERVISED,
input_dimensionality=InputDimensionality.MULTIVARIATE,
data_as_file=False
)
with tempfile.TemporaryDirectory() as tmp_path:
tmp_path = Path(tmp_path)
dmgr = self._prepare_dmgr(tmp_path,
training_type=["unsupervised", "semi-supervised", "supervised", "supervised"],
dimensionality=["univariate", "univariate", "univariate", "multivariate"])
datasets = [dmgr.get(d) for d in dmgr.select()]
exps = Experiments(
dmgr=dmgr,
datasets=datasets,
algorithms=self.algorithms + [algo],
metrics=Metric.default_list(),
base_result_path=tmp_path,
force_training_type_match=True
)
self.assertEqual(len(exps), 3)
exps = list(exps)
# algo1 and dataset 3
exp = exps[0]
self.assertEqual(exp.algorithm.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.dataset.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.algorithm.input_dimensionality, InputDimensionality.UNIVARIATE)
self.assertEqual(exp.dataset.input_dimensionality, InputDimensionality.UNIVARIATE)
# algo2 and dataset 4
exp = exps[1]
self.assertEqual(exp.algorithm.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.dataset.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.algorithm.input_dimensionality, InputDimensionality.MULTIVARIATE)
self.assertEqual(exp.dataset.input_dimensionality, InputDimensionality.MULTIVARIATE)
            # algo2 and dataset 3
exp = exps[2]
self.assertEqual(exp.algorithm.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.dataset.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.algorithm.input_dimensionality, InputDimensionality.MULTIVARIATE)
self.assertEqual(exp.dataset.input_dimensionality, InputDimensionality.UNIVARIATE)
def test_force_dimensionality_match(self):
algo = Algorithm(
name="supervised_deviating_from_mean2",
main=SupervisedDeviatingFromMean(),
training_type=TrainingType.UNSUPERVISED,
input_dimensionality=InputDimensionality.MULTIVARIATE,
data_as_file=False
)
with tempfile.TemporaryDirectory() as tmp_path:
tmp_path = Path(tmp_path)
dmgr = self._prepare_dmgr(tmp_path,
training_type=["unsupervised", "supervised", "supervised", "unsupervised"],
dimensionality=["univariate", "multivariate", "univariate", "multivariate"])
datasets = [dmgr.get(d) for d in dmgr.select()]
exps = Experiments(
dmgr=dmgr,
datasets=datasets,
algorithms=self.algorithms + [algo],
metrics=Metric.default_list(),
base_result_path=tmp_path,
force_dimensionality_match=True
)
self.assertEqual(len(exps), 3)
exps = list(exps)
            # algo1 and dataset 3
exp = exps[0]
self.assertEqual(exp.algorithm.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.dataset.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.algorithm.input_dimensionality, InputDimensionality.UNIVARIATE)
self.assertEqual(exp.dataset.input_dimensionality, InputDimensionality.UNIVARIATE)
# algo2 and dataset 2
exp = exps[1]
self.assertEqual(exp.algorithm.training_type, TrainingType.UNSUPERVISED)
self.assertEqual(exp.dataset.training_type, TrainingType.SUPERVISED)
self.assertEqual(exp.algorithm.input_dimensionality, InputDimensionality.MULTIVARIATE)
self.assertEqual(exp.dataset.input_dimensionality, InputDimensionality.MULTIVARIATE)
# algo2 and dataset 4
exp = exps[2]
self.assertEqual(exp.algorithm.training_type, TrainingType.UNSUPERVISED)
self.assertEqual(exp.dataset.training_type, TrainingType.UNSUPERVISED)
self.assertEqual(exp.algorithm.input_dimensionality, InputDimensionality.MULTIVARIATE)
self.assertEqual(exp.dataset.input_dimensionality, InputDimensionality.MULTIVARIATE)
| 44.328467 | 116 | 0.619299 | 1,152 | 12,146 | 6.335938 | 0.145833 | 0.067818 | 0.064118 | 0.047952 | 0.681189 | 0.672969 | 0.657213 | 0.650774 | 0.63173 | 0.61529 | 0 | 0.01169 | 0.288655 | 12,146 | 273 | 117 | 44.490842 | 0.833102 | 0.009797 | 0 | 0.504032 | 0 | 0 | 0.071048 | 0.015225 | 0 | 0 | 0 | 0 | 0.169355 | 1 | 0.044355 | false | 0 | 0.040323 | 0 | 0.092742 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf329e5f66247bbca95e4bf363db6bddf61b921c | 1,046 | py | Python | DataAggregator/FIPS_Reference.py | Trigition/Village | bf22077f54e87be2dda1bf8984d0a62914e8e70f | [
"Apache-2.0"
] | 1 | 2017-05-17T15:28:52.000Z | 2017-05-17T15:28:52.000Z | DataAggregator/FIPS_Reference.py | Trigition/Village | bf22077f54e87be2dda1bf8984d0a62914e8e70f | [
"Apache-2.0"
] | null | null | null | DataAggregator/FIPS_Reference.py | Trigition/Village | bf22077f54e87be2dda1bf8984d0a62914e8e70f | [
"Apache-2.0"
] | null | null | null | FIPS_Reference = {
"AL":"01",
"AK":"02",
"AZ":"04",
"AR":"05",
"AS":"60",
"CA":"06",
"CO":"08",
"CT":"09",
"DE":"10",
"FL":"12",
"GA":"13",
"GU":"66",
"HI":"15",
"ID":"16",
"IL":"17",
"IN":"18",
"IA":"19",
"KS":"20",
"KY":"21",
"LA":"22",
"ME":"23",
"MD":"24",
"MA":"25",
"MI":"26",
"MN":"27",
"MS":"28",
"MO":"29",
"MT":"30",
"NE":"32",
"NV":"32",
"NH":"33",
"NJ":"34",
"NM":"35",
"NY":"36",
"NC":"37",
"ND":"38",
"OH":"39",
"OK":"40",
"OR":"41",
"PA":"42",
"RI":"44",
"PR":"72",
"SC":"45",
"SD":"46",
"TN":"47",
"TX":"48",
"UT":"49",
"VT":"50",
"VI":"78",
"VA":"51",
"WA":"53",
"WV":"54",
"WI":"55",
"WY":"56"
}
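# A minimal usage sketch (not part of the original module): a state lookup
# plus an inverse FIPS-to-state map derived from the dictionary above.
# >>> FIPS_Reference["CA"]
# '06'
# >>> state_by_fips = {v: k for k, v in FIPS_Reference.items()}
# >>> state_by_fips["06"]
# 'CA'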
| 18.350877 | 18 | 0.219885 | 110 | 1,046 | 2.081818 | 0.990909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194245 | 0.468451 | 1,046 | 56 | 19 | 18.678571 | 0.217626 | 0 | 0 | 0 | 0 | 0 | 0.206501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf3597f5b36dd3406bf59bd803fe6c7acdc72251 | 8,920 | py | Python | yoapi/yos/queries.py | YoApp/yo-api | a162e51804ab91724cc7ad3e7608410329da6789 | [
"MIT"
] | 1 | 2021-12-17T03:25:34.000Z | 2021-12-17T03:25:34.000Z | yoapi/yos/queries.py | YoApp/yo-api | a162e51804ab91724cc7ad3e7608410329da6789 | [
"MIT"
] | null | null | null | yoapi/yos/queries.py | YoApp/yo-api | a162e51804ab91724cc7ad3e7608410329da6789 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Yo querying package."""
from itertools import takewhile
from mongoengine import Q, DoesNotExist
from ..core import cache
from ..async import async_job
from ..errors import YoTokenInvalidError
from ..helpers import get_usec_timestamp
from ..models import Yo, YoToken, User
from ..permissions import assert_account_permission
from ..services import low_rq
from yoapi.constants.yos import UNREAD_YOS_FETCH_LIMIT
@cache.memoize()
def _get_broadcasts(user_id):
"""Gets the number of Yo's broadcasted by the user.
This has an arbitrary limit of 100 set so that we don't
cache too much"""
query = Q(sender=user_id,
broadcast=True) & (Q(link__exists=True) | Q(photo__exists=True))
yos = Yo.objects(query).order_by('-created') \
.only('id').limit(100)
return list(yos)
@cache.memoize()
def _get_favorite_yos(user_id):
"""Gets the Yo' favorited by the user.
This has an arbitrary limit of 100 set so that we don't
cache too much"""
yos = Yo.objects(recipient=user_id,
is_favorite=True).order_by('-created') \
.only('id').limit(100)
return list(yos)
@cache.memoize()
def _get_unread_yos(user_id, limit, app_id=None):
"""Gets the Yo' favorited by the user.
This has an arbitrary limit of 100 set so that we don't
cache too much"""
if app_id:
yos = Yo.objects(recipient=user_id,
status__in=['sent', 'received'],
app_id=app_id,
is_push_only__in=[None, False])\
.order_by('-created').only('id').limit(limit)
else:
yos = Yo.objects(recipient=user_id,
status__in=['sent', 'received'],
app_id__in=['co.justyo.yoapp', None],
is_push_only__in=[None, False])\
.order_by('-created').only('id').limit(limit)
return list(yos)
def clear_get_favorite_yos_cache(user_id):
"""Clears the _get_favories cache"""
cache.delete_memoized(_get_favorite_yos, user_id)
def clear_get_unread_yos_cache(user_id, limit, app_id=None):
"""Clears the get_unread_yos results cache"""
cache.delete_memoized(_get_unread_yos, user_id, limit, app_id)
def clear_get_yo_cache(yo_id):
"""Clears the get_yo_by_id result cache"""
cache.delete_memoized(get_yo_by_id, yo_id)
def clear_get_yo_count_cache(user):
"""clears the get_yo_count_cache"""
cache.delete_memoized(get_yo_count, user)
def clear_get_yo_token_cache(token):
"""Clears the get_yo_token_cache"""
cache.delete_memoized(get_yo_token, token)
def clear_get_yos_received_cache(user):
"""Clears the get_yos_received results cache"""
cache.delete_memoized(_get_yos_received, user.user_id)
def clear_get_yos_sent_cache(user):
"""Clears the _get_broadcasts and get_yos_sent results cache"""
cache.delete_memoized(_get_broadcasts, user.user_id)
cache.delete_memoized(get_yos_sent, user)
def clear_all_yos_caches(user):
"""Clears all the yo caches for this user"""
clear_get_yos_sent_cache(user)
clear_get_yos_received_cache(user)
clear_get_yo_count_cache(user)
clear_get_unread_yos_cache(user.user_id, UNREAD_YOS_FETCH_LIMIT)
clear_get_favorite_yos_cache(user.user_id)
@async_job(rq=low_rq)
def delete_user_yos(user_id):
"""Deletes all the yos this user has sent"""
yos_sent_query = Yo.objects(sender=user_id)
yos_sent = yos_sent_query.select_related()
recipients = set()
user = None
for yo in yos_sent:
if not user and isinstance(yo.sender, User):
user = yo.sender
if yo.has_children():
child_yos_query = Yo.objects(parent=yo.yo_id)
child_yos = child_yos_query.select_related()
for child_yo in child_yos:
if not child_yo.has_dbrefs():
recipients.add(child_yo.recipient)
child_yos_query.delete()
elif yo.recipient and not yo.has_dbrefs():
recipients.add(yo.recipient)
yos_sent_query.delete()
for recipient in recipients:
clear_get_unread_yos_cache(recipient.user_id, UNREAD_YOS_FETCH_LIMIT)
if user:
clear_all_yos_caches(user)
def get_broadcasts(user, limit=20, ignore_permission=False):
"""Gets the number of Yo's broadcasted by the user"""
if not ignore_permission:
assert_account_permission(user, 'No permission to see Yo\'s')
yos = _get_broadcasts(user.user_id)
yos = [get_yo_by_id(yo.yo_id) for i, yo in enumerate(yos) if i < limit]
return yos
def get_child_yos(parent_yo_id):
"""Returns a list of child yos"""
yos = Yo.objects(parent=parent_yo_id).all().select_related()
return yos
def get_favorite_yos(user, limit=20, ignore_permission=False):
"""Gets the Yo's favorited by the user"""
if not ignore_permission:
assert_account_permission(user, 'No permission to see Yo\'s')
yos = _get_favorite_yos(user.user_id)
yos = [get_yo_by_id(yo.yo_id) for i, yo in enumerate(yos) if i < limit]
return yos
def get_last_broadcast(user, ignore_permission=False):
"""Get the last broadcast sent"""
yos = get_broadcasts(user, limit=1, ignore_permission=ignore_permission)
if yos:
return yos[0]
return None
def get_unread_yos(user, limit=20, age_limit=None, app_id=None, ignore_permission=False):
"""Gets Yo's not yet read by the user"""
if not ignore_permission:
assert_account_permission(user, 'No permission to see Yo\'s')
yos = _get_unread_yos(user.user_id, limit, app_id)
fetched = []
for yo in yos:
try:
reloaded = get_yo_by_id(yo.yo_id)
fetched.append(reloaded)
except DoesNotExist:
continue
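# Note: _get_unread_yos sorts by '-created' (newest first), so the
# takewhile below keeps only yos newer than the cutoff and stops at
# the first yo that is too old.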
if age_limit:
cutoff_usec = get_usec_timestamp(age_limit)
cmp_func = lambda yo: yo.created and yo.created - cutoff_usec >= 0
fetched = takewhile(cmp_func, fetched)
yos = [yo for yo in fetched if not yo.has_dbrefs() and not yo.is_poll]
return yos[:limit]
def get_unread_polls(user, limit=20, age_limit=None, ignore_permission=False):
"""Gets Yo's not yet read by the user"""
if not ignore_permission:
assert_account_permission(user, 'No permission to see Yo\'s')
yos = _get_unread_yos(user.user_id, limit, app_id='co.justyo.yopolls')
yos = [get_yo_by_id(yo.yo_id) for yo in yos]
if age_limit:
cutoff_usec = get_usec_timestamp(age_limit)
cmp_func = lambda yo: yo.created and yo.created - cutoff_usec >= 0
yos = takewhile(cmp_func, yos)
yos = [yo for yo in yos if not yo.has_dbrefs()]
return yos[:limit]
def get_public_dict_for_yo_id(yo_id):
yo = get_yo_by_id(yo_id)
dic = {
'yo_id': yo.yo_id
}
if yo.thumbnail_url:
dic.update({'thumbnail': yo.thumbnail_url})
if yo.text:
dic.update({'text': yo.text})
if yo.left_replies_count:
dic.update({'left_replies_count': yo.left_replies_count})
if yo.right_replies_count:
dic.update({'right_replies_count': yo.right_replies_count})
if yo.left_reply:
dic.update({'left_reply': yo.left_reply})
if yo.right_reply:
dic.update({'right_reply': yo.right_reply})
dic.update({'sender_object': {
'id': yo.sender.user_id,
'username': yo.sender.username
}})
return dic
@cache.memoize()
def get_yo_by_id(yo_id):
"""Returns a Yo model from the database"""
return Yo.objects(id=yo_id).get()
@cache.memoize()
def get_yo_count(user):
"""Gets the number of Yo's received by the user"""
user.reload()
return user.count_in or 0
@cache.memoize()
def get_yo_token(token):
"""Gets a yo token from the database"""
try:
return YoToken.objects(auth_token__token=token).get()
except DoesNotExist:
raise YoTokenInvalidError
def get_yos_received(user, limit=20, ignore_permission=False):
"""Gets the number of Yo's received by the user"""
if not ignore_permission:
assert_account_permission(user, 'No permission to see Yo\'s')
yos = _get_yos_received(user.user_id)
return yos[:limit]
@cache.memoize()
def _get_yos_received(user_id):
yos = Yo.objects(recipient=user_id).order_by('-created').limit(100)
# Turn the generator into a list so redis can cache it.
return list(yos)
@cache.memoize()
def get_yos_sent(user):
"""Gets the number of Yo's sent by the user
This is limited by 20 in order to minimize the ammount of
data that needs to be stored in redis. At some point this
should be a paginated list"""
assert_account_permission(user, 'No permission to see Yo\'s')
yos = Yo.objects(sender=user).order_by('-created').limit(20)
# Turn the generator into a list so redis can cache it.
return list(yos)
| 31.743772 | 89 | 0.67287 | 1,351 | 8,920 | 4.172465 | 0.135455 | 0.027674 | 0.015966 | 0.025546 | 0.576903 | 0.469044 | 0.351783 | 0.335817 | 0.319496 | 0.315771 | 0 | 0.005477 | 0.222197 | 8,920 | 280 | 90 | 31.857143 | 0.807005 | 0.014462 | 0 | 0.284916 | 0 | 0.005587 | 0.048136 | 0 | 0 | 0 | 0 | 0 | 0.039106 | 0 | null | null | 0 | 0.055866 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf36eb9a3d6a5a3acd94c393ea381b2d178ecae2 | 720 | py | Python | exam_system/exams/models.py | munirhaque/recruitment-system | 1405efdf9ec88d41c595e346231ae883836106ff | [
"bzip2-1.0.6"
] | null | null | null | exam_system/exams/models.py | munirhaque/recruitment-system | 1405efdf9ec88d41c595e346231ae883836106ff | [
"bzip2-1.0.6"
] | null | null | null | exam_system/exams/models.py | munirhaque/recruitment-system | 1405efdf9ec88d41c595e346231ae883836106ff | [
"bzip2-1.0.6"
] | null | null | null | from django.db import models
from questions.models import Question
from topics.models import Topic
class Exam(models.Model):
id = models.AutoField(primary_key = True)
name = models.TextField()
start_date = models.DateField()
end_date = models.DateField()
number_of_question = models.IntegerField()
time_duration = models.IntegerField()
class ExamQuestionTopic(models.Model):
id = models.AutoField(primary_key = True)
# exam_id = models.IntegerField()
# question_id = models.IntegerField()
# topic_id = models.IntegerField()
exam = models.ForeignKey(Exam, on_delete=models.CASCADE)
question = models.ForeignKey(Question, on_delete=models.CASCADE)
topic = models.ForeignKey(Topic, on_delete=models.CASCADE) | 36 | 65 | 0.781944 | 91 | 720 | 6.043956 | 0.362637 | 0.072727 | 0.109091 | 0.114545 | 0.152727 | 0.152727 | 0.152727 | 0.152727 | 0 | 0 | 0 | 0 | 0.111111 | 720 | 20 | 66 | 36 | 0.859375 | 0.138889 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cf37e22c1c71e2bafbb71d46e0f31afa46a234ff | 1,002 | py | Python | meleagris/__init__.py | kmayerb/Meleagris | b45ca01bd1bf46242272330236b74fc7b572b7b1 | [
"MIT"
] | null | null | null | meleagris/__init__.py | kmayerb/Meleagris | b45ca01bd1bf46242272330236b74fc7b572b7b1 | [
"MIT"
] | null | null | null | meleagris/__init__.py | kmayerb/Meleagris | b45ca01bd1bf46242272330236b74fc7b572b7b1 | [
"MIT"
] | null | null | null | from __future__ import absolute_import, division, print_function
from .version import __version__ # noqa
from meleagris import carve
from meleagris import roast
__all__ = [
'roast',
'carve'
]
# For a review of the basics of the __init__.py file:
# __init__.py is what is invoked when the package is imported.
# We assume that scripts will invoke turkey with one of the following:
# import turkey as tk
# from turkey import *
# from turkey import module
# For example, turkey contains roast and carve modules.
# If a script imports turkey as tk, the module roast will only be available if we
# include "from turkey import roast" in the __init__.py. Doing so
# will make it available, and a script with import turkey as tk can invoke
# tk.roast.roast_temp(); but tk.carve.carve() will not be available.
# from turkey import roast
# __all__ = [] specifies which modules are
# imported into the global namespace on "from turkey import *"
#__all__ = [
# 'roast',
# 'carve'
# ]
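# Illustrative sketch of the effect described above (hypothetical session):
#
# import meleagris as ml
# ml.roast   # reachable because of "from meleagris import roast" above
# ml.carve   # reachable because of "from meleagris import carve" above
#
# Without those imports in this file, the submodules would only be reachable
# after an explicit "from meleagris import roast" in the calling script.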
| 23.857143 | 81 | 0.727545 | 151 | 1,002 | 4.596026 | 0.463576 | 0.086455 | 0.092219 | 0.04611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204591 | 1,002 | 41 | 82 | 24.439024 | 0.870765 | 0.748503 | 0 | 0 | 0 | 0 | 0.043103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
cf386355b5e43d0303322e0ce9830069d230850a | 2,187 | py | Python | anadama2/taskcontainer.py | biobakery/anadama2_test | 46d06f7efc24ae067a1b6cc2841eda0c2a328daf | [
"MIT"
] | 4 | 2020-06-08T22:10:48.000Z | 2021-07-27T13:57:43.000Z | anadama2/taskcontainer.py | biobakery/anadama2_test | 46d06f7efc24ae067a1b6cc2841eda0c2a328daf | [
"MIT"
] | null | null | null | anadama2/taskcontainer.py | biobakery/anadama2_test | 46d06f7efc24ae067a1b6cc2841eda0c2a328daf | [
"MIT"
] | 1 | 2020-09-10T08:29:22.000Z | 2020-09-10T08:29:22.000Z | # -*- coding: utf-8 -*-
import re
import fnmatch
import itertools
import six
from .util import matcher
class TaskContainer(list):
"""Contains tasks. Tasks can be accessed by task_no or by name"""
def __init__(self, *args, **kwargs):
self.by_name = dict()
return super(TaskContainer, self).__init__(*args, **kwargs)
def _update(self, task):
self.by_name[task.name] = task
def _get_or_search(self, key):
if '*' in key:
hits = list(self.search(fnmatch.translate(key)))
if not hits:
raise KeyError
return hits
return self.by_name[key]
def search(self, q):
return iter(val for val in self if re.search(q, val.name))
def append(self, task):
self._update(task)
return super(TaskContainer, self).append(task)
def extend(self, iterable):
a, b = itertools.tee(iterable)
for task in a:
self._update(task)
return super(TaskContainer, self).extend(b)
def __setitem__(self, key, task):
self._update(task)
return super(TaskContainer, self).__setitem__(key, task)
def __getitem__(self, key):
try:
if isinstance(key, six.string_types):
return self._get_or_search(key)
return super(TaskContainer, self).__getitem__(key)
except KeyError:
msg = "Unable to find task with `{}'. Perhaps you meant `{}'?"
m = matcher.closest(key, iter(t.name for t in self))[0][1]
raise KeyError(msg.format(key, m))
except IndexError:
msg = "No task with number {}. There are only {} tasks."
raise IndexError(msg.format(key, len(self)))
def __contains__(self, item):
if isinstance(item, six.string_types):
if '*' in item:
try:
next(self.search(fnmatch.translate(item)))
return True
except StopIteration:
return False
else:
return item in self.by_name
return super(TaskContainer, self).__contains__(item)
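# A minimal usage sketch (assumes a task object exposing a `name` attribute,
# which is all the container relies on):
#
#   tasks = TaskContainer()
#   tasks.append(task)       # task.name == "align_reads"
#   tasks[0]                 # lookup by task number
#   tasks["align_reads"]     # lookup by exact name
#   tasks["align_*"]         # glob pattern -> list of matching tasks
#   "align_*" in tasks       # membership also accepts glob patterns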
| 27.683544 | 74 | 0.569273 | 261 | 2,187 | 4.582375 | 0.329502 | 0.055184 | 0.120401 | 0.140468 | 0.11204 | 0.11204 | 0.11204 | 0.076923 | 0 | 0 | 0 | 0.002034 | 0.32556 | 2,187 | 78 | 75 | 28.038462 | 0.808814 | 0.037494 | 0 | 0.092593 | 0 | 0 | 0.049547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.092593 | 0.018519 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cf4608779dce90116381a48c64482664f32fa28c | 477 | py | Python | marketplace/migrations/0003_auto_20171106_1454.py | 18F/cloud-marketplace-prototype | 4098d7e5391274e5317be2525d9deb5bc40eb2ad | [
"CC0-1.0"
] | null | null | null | marketplace/migrations/0003_auto_20171106_1454.py | 18F/cloud-marketplace-prototype | 4098d7e5391274e5317be2525d9deb5bc40eb2ad | [
"CC0-1.0"
] | 7 | 2017-11-06T20:35:53.000Z | 2017-11-08T00:25:23.000Z | marketplace/migrations/0003_auto_20171106_1454.py | 18F/cloud-marketplace-prototype | 4098d7e5391274e5317be2525d9deb5bc40eb2ad | [
"CC0-1.0"
] | 2 | 2020-04-03T19:39:32.000Z | 2021-02-14T11:06:53.000Z | # Generated by Django 2.0b1 on 2017-11-06 14:54
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('marketplace', '0002_auto_20171106_1452'),
]
operations = [
migrations.AlterField(
model_name='purchase',
name='team',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='marketplace.Team'),
),
]
| 23.85 | 104 | 0.645702 | 53 | 477 | 5.716981 | 0.679245 | 0.079208 | 0.092409 | 0.145215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085399 | 0.238994 | 477 | 19 | 105 | 25.105263 | 0.749311 | 0.09434 | 0 | 0 | 1 | 0 | 0.144186 | 0.053488 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf51151a10753ed1d358548f36337ff54fb5fd8f | 559 | py | Python | aioface/dispatcher/utils.py | kirillkuzin/aioface | c19f89f3f0f6ccb95832030444f2ece8fda7de62 | [
"MIT"
] | 1 | 2020-09-12T21:10:54.000Z | 2020-09-12T21:10:54.000Z | aioface/dispatcher/utils.py | kirillkuzin/aioface | c19f89f3f0f6ccb95832030444f2ece8fda7de62 | [
"MIT"
] | null | null | null | aioface/dispatcher/utils.py | kirillkuzin/aioface | c19f89f3f0f6ccb95832030444f2ece8fda7de62 | [
"MIT"
] | null | null | null | def check_full_text(fb_full_text, filter_full_text) -> bool:
if filter_full_text is None or fb_full_text == filter_full_text:
return True
return False
def check_contains(fb_contains, filter_contains) -> bool:
if filter_contains is None:
return True
intersection = list(fb_contains & filter_contains)
if len(intersection) == 0:
return False
return True
def check_payload(fb_payload, filter_payload) -> bool:
if filter_payload is None or fb_payload == filter_payload:
return True
return False
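# A rough usage sketch (hypothetical values): each checker returns True when
# its filter is unset or satisfied, so a handler should run only if every
# check passes.
#
#   check_full_text("hello", None)           # True: no filter configured
#   check_full_text("hello", "hi")           # False: texts differ
#   check_contains({"hi", "yo"}, {"yo"})     # True: non-empty intersection
#   check_payload("BUY", "BUY")              # True: payloads match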
| 27.95 | 68 | 0.713775 | 79 | 559 | 4.746835 | 0.253165 | 0.128 | 0.112 | 0.085333 | 0.128 | 0.128 | 0 | 0 | 0 | 0 | 0 | 0.002309 | 0.225403 | 559 | 19 | 69 | 29.421053 | 0.863741 | 0 | 0 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cf570cd02e631b3b5db311ff2825b9555c99f6c9 | 5,469 | py | Python | testscripts/RDKB/component/HAL_Platform/TS_platform_stub_hal_SNMPOnboardReboot_InvalidInput.py | rdkcmf/rdkb-tools-tdkb | 9f9c3600cd701d5fc90ac86a6394ebd28d49267e | [
"Apache-2.0"
] | null | null | null | testscripts/RDKB/component/HAL_Platform/TS_platform_stub_hal_SNMPOnboardReboot_InvalidInput.py | rdkcmf/rdkb-tools-tdkb | 9f9c3600cd701d5fc90ac86a6394ebd28d49267e | [
"Apache-2.0"
] | null | null | null | testscripts/RDKB/component/HAL_Platform/TS_platform_stub_hal_SNMPOnboardReboot_InvalidInput.py | rdkcmf/rdkb-tools-tdkb | 9f9c3600cd701d5fc90ac86a6394ebd28d49267e | [
"Apache-2.0"
] | null | null | null | ##########################################################################
# If not stated otherwise in this file or this component's Licenses.txt
# file the following copyright and licenses apply:
#
# Copyright 2020 RDK Management
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################################
'''
<?xml version='1.0' encoding='utf-8'?>
<xml>
<id></id>
<!-- Do not edit id. This will be auto filled while exporting. If you are adding a new script keep the id empty -->
<version>4</version>
<!-- Do not edit version. This will be auto incremented while updating. If you are adding a new script you can keep the version as 1 -->
<name>TS_platform_stub_hal_SNMPOnboardReboot_InvalidInput</name>
<!-- If you are adding a new script you can specify the script name. The script name should be unique and the same as this file name, without the .py extension -->
<primitive_test_id> </primitive_test_id>
<!-- Do not change primitive_test_id if you are editing an existing script. -->
<primitive_test_name>platform_stub_hal_SetSNMPOnboardRebootEnable</primitive_test_name>
<!-- -->
<primitive_test_version>3</primitive_test_version>
<!-- -->
<status>FREE</status>
<!-- -->
<synopsis>To set the SNMPOnboardRebootEnable with an invalid argument and check its behaviour</synopsis>
<!-- -->
<groups_id />
<!-- -->
<execution_time>10</execution_time>
<!-- -->
<long_duration>false</long_duration>
<!-- -->
<advanced_script>false</advanced_script>
<!-- execution_time is the time out time for test execution -->
<remarks></remarks>
<!-- Reason for skipping the tests if marked to skip -->
<skip>false</skip>
<!-- -->
<box_types>
<box_type>Broadband</box_type>
<!-- -->
</box_types>
<rdk_versions>
<rdk_version>RDKB</rdk_version>
<!-- -->
</rdk_versions>
<test_cases>
<test_case_id>TC_HAL_PLATFORM_62</test_case_id>
<test_objective>This test case is to set the SNMPOnboardRebootEnable with an invalid argument and check its behaviour</test_objective>
<test_type>Negative</test_type>
<test_setup>Broadband</test_setup>
<pre_requisite>1. Ccsp components should be in a running state on the DUT
2. TDK Agent should be in a running state, or invoke it through the StartTdk.sh script</pre_requisite>
<api_or_interface_used>platform_stub_hal_SetSNMPOnboardRebootEnable</api_or_interface_used>
<input_parameters>SNMPonboard</input_parameters>
<automation_approch>1. Load the module
2. Call the platform_hal_SetSNMPOnboardRebootEnable API with an invalid input parameter
3. The API is expected to fail and the result is displayed accordingly.
4. Unload the module</automation_approch>
<expected_output>platform_hal_SetSNMPOnboardRebootEnable API should fail with an invalid input parameter</expected_output>
<priority>High</priority>
<test_stub_interface>HAL_PLATFORM</test_stub_interface>
<test_script>TS_platform_stub_hal_SNMPOnboardReboot_InvalidInput</test_script>
<skipped>No</skipped>
<release_version>M79</release_version>
<remarks>None</remarks>
</test_cases>
<script_tags />
</xml>
'''
# use tdklib library,which provides a wrapper for tdk testcase script
import tdklib;
#Test component to be tested
obj = tdklib.TDKScriptingLibrary("halplatform","1");
# IP and port of the box. No need to change:
# these will be replaced with the corresponding DUT IP and port while executing the script
ip = <ipaddress>
port = <port>
obj.configureTestCase(ip,port,'TS_platform_stub_hal_SNMPOnboardReboot_InvalidInput');
#Get the result of connection with test component and DUT
result =obj.getLoadModuleResult();
if "SUCCESS" in result.upper():
obj.setLoadModuleStatus("SUCCESS");
# Primitive test case associated with this script
tdkTestObj = obj.createTestStep('platform_stub_hal_SetSNMPOnboardRebootEnable');
expectedresult ="FAILURE"
setValue ="Invalid"
tdkTestObj.addParameter("SNMPonboard",setValue)
#Execute the test case in DUT
tdkTestObj.executeTestCase("expectedresult");
#Get the result of execution
actualresult = tdkTestObj.getResult();
details = tdkTestObj.getResultDetails();
if expectedresult in actualresult:
print" TEST STEP 1: Set the SetSNMPOnboardRebootEnable with Invalid Value";
print" EXPECTED RESULT 1: Should not set the SetSNMPOnboardRebootEnable";
print" ACTUAL RESULT 1: %s" %details
print "[TEST EXECUTION RESULT] : SUCCESS";
tdkTestObj.setResultStatus("SUCCESS");
else:
print" TEST STEP 1: Set the SetSNMPOnboardRebootEnable with Invalid Value";
print" EXPECTED RESULT 1: Should not set the SetSNMPOnboardRebootEnable";
print" ACTUAL RESULT 1: %s" %details
print "[TEST EXECUTION RESULT] : FAILURE";
tdkTestObj.setResultStatus("FAILURE");
obj.unloadModule("halplatform");
else:
print "Failed to load the module";
obj.setLoadModuleStatus("FAILURE");
print "Module loading failed";
| 43.752 | 149 | 0.722436 | 704 | 5,469 | 5.474432 | 0.34517 | 0.023612 | 0.023352 | 0.010898 | 0.183705 | 0.183705 | 0.147898 | 0.141671 | 0.141671 | 0.126103 | 0 | 0.00714 | 0.154873 | 5,469 | 124 | 150 | 44.104839 | 0.826698 | 0.18998 | 0 | 0.25 | 0 | 0 | 0.417808 | 0.136301 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.03125 | null | null | 0.3125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf5f5b44e44e3ba3b7ed0e8ffb3e498e72891f7c | 8,316 | py | Python | ReanalysisRetreival_orig/UnimakPass/UP_Winds_Trans_vs_SST.py | shaunwbell/FOCI_Analysis | dde4a5f0badd76fe5719575d5c138813ab156b70 | [
"MIT"
] | null | null | null | ReanalysisRetreival_orig/UnimakPass/UP_Winds_Trans_vs_SST.py | shaunwbell/FOCI_Analysis | dde4a5f0badd76fe5719575d5c138813ab156b70 | [
"MIT"
] | null | null | null | ReanalysisRetreival_orig/UnimakPass/UP_Winds_Trans_vs_SST.py | shaunwbell/FOCI_Analysis | dde4a5f0badd76fe5719575d5c138813ab156b70 | [
"MIT"
] | null | null | null | #!/usr/bin/env
"""
UP_Winds_Trans_vs_SST.py
Using U,V (6hr from NARR) to calculate a transport index
Using SST (daily) from HR
NARR U/V winds (triangle filtered and subsampled to 6 hours)
----
NCEP Reanalysis data provided by the NOAA/OAR/ESRL PSD, Boulder,
Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/
SST -
---
DataSource: ftp://ftp.cdc.noaa.gov/Datasets/noaa.oisst.v2.highres/
NOAA High Resolution SST data provided by the NOAA/OAR/ESRL PSD,
Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/
"""
#System Stack
import datetime
import sys
#Science Stack
import numpy as np
from netCDF4 import Dataset, num2date
import pandas as pd
# Visual Stack
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, shiftgrid
__author__ = 'Shaun Bell'
__email__ = 'shaun.bell@noaa.gov'
__created__ = datetime.datetime(2014, 01, 13)
__modified__ = datetime.datetime(2014, 01, 13)
__version__ = "0.1.0"
__status__ = "Development"
__keywords__ = 'NARR','Unimak', 'Shumagin','3hr filtered', 'U,V','Winds', 'Gulf of Alaska'
"""------------------------General Modules-------------------------------------------"""
"""--------------------------------time Routines---------------------------------------"""
def date2pydate(file_time, file_time2=None, time_since_str=None, file_flag='EPIC'):
if file_flag == 'EPIC':
ref_time_py = datetime.datetime.toordinal(datetime.datetime(1968, 5, 23))
ref_time_epic = 2440000
offset = ref_time_epic - ref_time_py
try: #if input is an array
python_time = [None] * len(file_time)
for i, val in enumerate(file_time):
pyday = file_time[i] - offset
pyfrac = file_time2[i] / (1000. * 60. * 60.* 24.) #milliseconds in a day
python_time[i] = (pyday + pyfrac)
except:
pyday = file_time - offset
pyfrac = file_time2 / (1000. * 60. * 60.* 24.) #milliseconds in a day
python_time = (pyday + pyfrac)
elif file_flag == 'NARR':
""" Hours since 1800-1-1"""
base_date=datetime.datetime.strptime('1800-01-01','%Y-%m-%d').toordinal()
python_time = file_time / 24. + base_date
elif file_flag == 'NCEP_days':
""" days since 1800-1-1"""
base_date=datetime.datetime.strptime('1800-01-01','%Y-%m-%d').toordinal()
python_time = file_time + base_date
elif file_flag == 'NCEP':
""" Hours since 1800-1-1"""
base_date=datetime.datetime.strptime('1800-01-01','%Y-%m-%d').toordinal()
python_time = file_time / 24. + base_date
elif file_flag == 'netCDF4':
""" Use time_since_str"""
python_time = num2date(file_time, time_since_str, 'standard')
else:
print "time flag not recognized"
sys.exit()
return np.array(python_time)
"""--------------------------------netcdf Routines---------------------------------------"""
def get_global_atts(nchandle):
g_atts = {}
att_names = nchandle.ncattrs()
for name in att_names:
g_atts[name] = nchandle.getncattr(name)
return g_atts
def get_vars(nchandle):
return nchandle.variables
def ncreadfile_dic(nchandle, params):
data = {}
for j, v in enumerate(params):
if v in nchandle.variables.keys(): #check for nc variable
data[v] = nchandle.variables[v][:]
else: #if parameter doesn't exist fill the array with zeros
data[v] = None
return (data)
"---"
def rotate_coord(angle_rot, mag, dir):
""" converts math coords to along/cross shelf.
+ onshore / along coast with land to right (right handed)
- offshore / along coast with land to left
Todo: convert to met standard for winds (left handed coordinate system)
"""
dir = dir - angle_rot
along = mag * np.sin(np.deg2rad(dir))
cross = mag * np.cos(np.deg2rad(dir))
return (along, cross)
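# Quick worked example (illustrative only): with angle_rot=45, a unit vector
# at dir=90 deg (math convention) becomes dir=45 deg, giving
# along = sin(45 deg) ~ 0.707 and cross = cos(45 deg) ~ 0.707, i.e. the wind
# splits evenly between the along- and cross-shelf axes.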
"""------------------------- Main Modules -------------------------------------------"""
### list of files
NARR_dir = '/Users/bell/in_and_outbox/2016/stabeno/feb/unimakwinds_narr/shumigan_downstream/'
HROISST_dir = '/Users/bell/in_and_outbox/2016/stabeno/feb/unimakwinds_narr/shumigan_downstream_sst/'
#loop over every year from 2016 to 2018 (range(2016, 2019) below).
# calculate desired average (based on time stamps)
data_flag = 'winds'
for year in range(2016,2019):
#print "Working on year {0}".format(year)
sstfile = HROISST_dir + 'NOAA_OI_SST_V2_stn1_' + str(year) + '.nc'
uvfile = NARR_dir + 'NARR_stn1_' + str(year) + '.nc'
if data_flag == 'sst':
#open sst file
###nc readin
nchandle = Dataset(sstfile,'a')
global_atts = get_global_atts(nchandle)
vars_dic = get_vars(nchandle)
sstdata = ncreadfile_dic(nchandle, vars_dic.keys())
nchandle.close()
## Create and save monthly mean data of SST
tmptime = date2pydate(sstdata['time'],sstdata['time2'])
ssttime_month = [datetime.datetime.fromordinal(int(x)).month for x in tmptime]
for month in range(1,13):
tind = np.where(np.array(ssttime_month) == month)
sstmean = np.mean(sstdata['T_25'][tind,0,0,0])
print "{0}-{1}-01, {2}".format(year,month,sstmean)
elif data_flag == 'winds':
#open uv file
###nc readin
nchandle = Dataset(uvfile,'a')
global_atts = get_global_atts(nchandle)
vars_dic = get_vars(nchandle)
uvdata = ncreadfile_dic(nchandle, vars_dic.keys())
nchandle.close()
## Create and save monthly mean data of UV winds/transport
tmptime = date2pydate(uvdata['time'],uvdata['time2'])
uvtime_month = [datetime.datetime.fromordinal(int(x)).month for x in tmptime]
for month in range(1,13):
tind = np.where(np.array(uvtime_month) == month)
umean = np.mean(uvdata['WU_422'][tind,0,0,0])
vmean = np.mean(uvdata['WV_423'][tind,0,0,0])
print "{0}-{1}-01, {2}, {3}".format(year,month,umean,vmean)
else:
print "skipping data read"
plot_flag = False
#After all data has been averaged plot
#plot as timeseries but colorcode the values as follows:
# Break sst data into quintiles (first hack, find max and min and evenly divide by 5)
# Colorcode transport as a function of sst quintiles where warm 1/5 is bright red, cold 1/5 is bright blue
# 2/5's are much lighter and median/mean is grey
if plot_flag:
#watch for nan's with extra spaces
data = pd.read_csv('UP_transport.csv')
maxd = data.max()['SST']
mind = data.min()['SST']
bounds = np.arange(mind,maxd,(maxd-mind)/5)
bounds = np.hstack([bounds,maxd])
mag =np.sqrt(data['U']**2 + data['V']**2)
ang = np.rad2deg(np.arctan2(data['V'],data['U']))
transport_rough = rotate_coord(45, mag, ang)
print transport_rough[0]
transport_rough = mag*np.sqrt(2)/2
for ind,val in enumerate(bounds):
if ind == 0:
pind = np.where((data['SST'] >= val) & (data['SST'] <= bounds[ind+1]))
plt.plot(pd.to_datetime(data['Date'][pind[0]]),transport_rough[pind[0]],'.',color='#001AFF',markersize=20)
if ind == 1:
pind = np.where((data['SST'] >= val) & (data['SST'] <= bounds[ind+1]))
plt.plot(pd.to_datetime(data['Date'][pind[0]]),transport_rough[pind[0]],'.',color='#7D8AFF',markersize=20)
if ind == 2:
pind = np.where((data['SST'] >= val) & (data['SST'] <= bounds[ind+1]))
plt.plot(pd.to_datetime(data['Date'][pind[0]]),transport_rough[pind[0]],'.',color='#B2B2B2',markersize=20)
if ind == 3:
pind = np.where((data['SST'] >= val) & (data['SST'] <= bounds[ind+1]))
plt.plot(pd.to_datetime(data['Date'][pind[0]]),transport_rough[pind[0]],'.',color='#FF9B9B',markersize=20)
if ind == 4:
pind = np.where((data['SST'] >= val) & (data['SST'] <= bounds[ind+1]))
plt.plot(pd.to_datetime(data['Date'][pind[0]]),transport_rough[pind[0]],'.',color='#FF0000',markersize=20)
if ind == 5:
break | 37.125 | 118 | 0.592232 | 1,120 | 8,316 | 4.259821 | 0.299107 | 0.014672 | 0.018864 | 0.01572 | 0.377489 | 0.347726 | 0.341857 | 0.341857 | 0.341857 | 0.334731 | 0 | 0.037754 | 0.23557 | 8,316 | 224 | 119 | 37.125 | 0.712758 | 0.100048 | 0 | 0.162791 | 0 | 0 | 0.105314 | 0.026572 | 0 | 0 | 0 | 0.004464 | 0 | 0 | null | null | 0 | 0.054264 | null | null | 0.03876 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf6149728d0a4d14dac89fc7b43a49eb3065e261 | 576 | py | Python | src_nlp/tensorflow/toward_control/mains/pretrain.py | ashishpatel26/finch | bf2958c0f268575e5d51ad08fbc08b151cbea962 | [
"MIT"
] | 1 | 2019-02-12T09:22:00.000Z | 2019-02-12T09:22:00.000Z | src_nlp/tensorflow/toward_control/mains/pretrain.py | loopzxl/finch | bf2958c0f268575e5d51ad08fbc08b151cbea962 | [
"MIT"
] | null | null | null | src_nlp/tensorflow/toward_control/mains/pretrain.py | loopzxl/finch | bf2958c0f268575e5d51ad08fbc08b151cbea962 | [
"MIT"
] | 1 | 2020-10-15T21:34:17.000Z | 2020-10-15T21:34:17.000Z | import tensorflow as tf
import pprint
import os, sys
sys.path.append(os.path.dirname(os.getcwd()))
from model import VAE
from data.imdb import VAEDataLoader
from vocab.imdb import IMDBVocab
from trainers import VAETrainer
from log import create_logging
def main():
create_logging()
sess = tf.Session()
vocab = IMDBVocab()
dl = VAEDataLoader(sess, vocab)
model = VAE(dl, vocab)
tf.logging.info('\n'+pprint.pformat(tf.trainable_variables()))
trainer = VAETrainer(sess, model, dl, vocab)
trainer.train()
if __name__ == '__main__':
main() | 22.153846 | 66 | 0.713542 | 78 | 576 | 5.128205 | 0.487179 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175347 | 576 | 26 | 67 | 22.153846 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0.017331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.4 | 0 | 0.45 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
cf62df099326bf29d570a34b64f7dc86c916f3eb | 3,767 | py | Python | mpopt/population/base.py | CavalloneChen/mpopt | 9d39f613cc840c25416ec060ac503fb0599bde1b | [
"Xnet",
"X11",
"RSA-MD"
] | 3 | 2020-11-05T05:55:55.000Z | 2021-04-19T08:24:06.000Z | mpopt/population/base.py | CavalloneChen/mpopt | 9d39f613cc840c25416ec060ac503fb0599bde1b | [
"Xnet",
"X11",
"RSA-MD"
] | null | null | null | mpopt/population/base.py | CavalloneChen/mpopt | 9d39f613cc840c25416ec060ac503fb0599bde1b | [
"Xnet",
"X11",
"RSA-MD"
] | 2 | 2020-11-23T02:11:48.000Z | 2020-12-01T01:56:03.000Z | import numpy as np
from ..operator import operator as opt
class BasePop(object):
""" Base class for population """
def __init__(self, pop, fit, lb=-float('inf'), ub=float('inf')):
# init pop
self.pop = pop
self.fit = fit
self.gen_pop = None
self.gen_fit = None
self.new_pop = None
self.new_fit = None
# params
self.size = self.pop.shape[0]
self.dim = self.pop.shape[1]
self.lb = lb
self.ub = ub
# states
# no states here
def remap(self, samples):
""" Always apply random_map on out-bounded samples """
return opt.random_map(samples, self.lb, self.ub)
def eval(self, e):
""" Evaluate un-evaluated individuals here """
raise NotImplementedError
def select(self):
""" Select 'new_pop' and 'new_fit' """
raise NotImplementedError
def generate(self):
""" Generate offsprings """
raise NotImplementedError
def adapt(self):
""" Adapt new states """
raise NotImplementedError
def update(self):
""" Update pop and states """
raise NotImplementedError
def evolve(self):
""" Define the evolve process in an iteration """
raise NotImplementedError
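# A minimal subclassing sketch (illustrative only; the random-walk update
# is a placeholder, not part of the framework):
#
#   class RandomWalkPop(BasePop):
#       def generate(self):
#           step = np.random.normal(0.0, 0.1, self.pop.shape)
#           self.gen_pop = self.remap(self.pop + step)
#       def eval(self, e):
#           self.gen_fit = e(self.gen_pop)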
class BaseEDAPop(object):
""" Base class for EDA (Estimation of Distribution Algorithms) population """
def __init__(self, dist, dim=None, lb=-float('inf'), ub=float('inf')):
# init pop
self.pop = None
self.fit = None
self.dist = dist
self.new_dist = None
# params
self.dim = dim if dim is not None else dist.dim
self.lb = lb
self.ub = ub
# states
# no states here
def remap(self, samples):
""" Always apply random_map on out-bounded samples """
return opt.random_map(samples, self.lb, self.ub)
def eval(self, e):
self.fit = e(self.pop)
def sample(self, num_sample):
""" Sample population from the distribution """
raise NotImplementedError
def adapt(self):
""" Adapt the distribution """
raise NotImplementedError
def update(self):
""" Update the distribution with adapted one """
raise NotImplementedError
def evolve(self):
""" Define the evolve process in an iteration """
raise NotImplementedError
class BaseFirework(object):
""" Base Class for Fireworks """
def __init__(self, idv, val, lb=-float('inf'), ub=float('inf')):
# init pop
self.idv = idv
self.val = val
self.spk_pop = None
self.spk_fit = None
self.new_idv = None
self.new_val = None
# params
self.dim = self.idv.shape[0]
self.lb = lb
self.ub = ub
# states
# No states here
def eval(self, e):
""" Eval un-evaluated individuals here """
raise NotImplementedError
def remap(self, samples):
""" Always apply random_map on out-bounded samples """
return opt.random_map(samples, self.lb, self.ub)
def select(self):
""" Select 'new_pop' and 'new_fit' """
raise NotImplementedError
def explode(self):
""" Generate explosion sparks """
raise NotImplementedError
def mutate(self):
""" Generate mutation sparks """
raise NotImplementedError
def adapt(self):
""" Adapt new states """
raise NotImplementedError
def update(self):
""" Update pop and states """
raise NotImplementedError
def evolve(self):
""" Define the evolve process in an iteration """
raise NotImplementedError | 26.342657 | 81 | 0.576586 | 435 | 3,767 | 4.91954 | 0.209195 | 0.190654 | 0.176636 | 0.061682 | 0.634112 | 0.620093 | 0.580841 | 0.534112 | 0.534112 | 0.519626 | 0 | 0.00117 | 0.319352 | 3,767 | 143 | 82 | 26.342657 | 0.833463 | 0.242633 | 0 | 0.565789 | 0 | 0 | 0.006657 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.315789 | false | 0 | 0.026316 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf661f6c62d7243ea412ab586f9de155fce581e4 | 568 | py | Python | service/surf/vendor/surfconext/migrations/0004_users_update.py | surfedushare/search-portal | 708a0d05eee13c696ca9abd7e84ab620d3900fbe | [
"MIT"
] | 2 | 2021-08-19T09:40:59.000Z | 2021-12-14T11:08:20.000Z | service/surf/vendor/surfconext/migrations/0004_users_update.py | surfedushare/search-portal | 708a0d05eee13c696ca9abd7e84ab620d3900fbe | [
"MIT"
] | 159 | 2020-05-14T14:17:34.000Z | 2022-03-23T10:28:13.000Z | service/surf/vendor/surfconext/migrations/0004_users_update.py | surfedushare/search-portal | 708a0d05eee13c696ca9abd7e84ab620d3900fbe | [
"MIT"
] | 1 | 2021-11-11T13:37:22.000Z | 2021-11-11T13:37:22.000Z | # Generated by Django 3.2.8 on 2021-12-28 14:50
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('surfconext', '0003_boolean_field'),
]
operations = [
migrations.AlterField(
model_name='datagoal',
name='users',
field=models.ManyToManyField(through='surfconext.DataGoalPermission', to=settings.AUTH_USER_MODEL, verbose_name='users'),
),
]
| 27.047619 | 133 | 0.672535 | 61 | 568 | 6.114754 | 0.672131 | 0.053619 | 0.085791 | 0.112601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042986 | 0.221831 | 568 | 20 | 134 | 28.4 | 0.800905 | 0.079225 | 0 | 0 | 1 | 0 | 0.143954 | 0.055662 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf6b4412d0c395f437f036a7b2a75ce1aea65192 | 1,583 | py | Python | core/tests/test_models.py | UlmBlois/website | 01e652dd0c9c5b7026efe2a923ea3c4668d5a5b4 | [
"MIT"
] | null | null | null | core/tests/test_models.py | UlmBlois/website | 01e652dd0c9c5b7026efe2a923ea3c4668d5a5b4 | [
"MIT"
] | 2 | 2019-06-03T06:17:29.000Z | 2019-06-17T05:26:02.000Z | core/tests/test_models.py | UlmBlois/website | 01e652dd0c9c5b7026efe2a923ea3c4668d5a5b4 | [
"MIT"
] | null | null | null | from django.test import TestCase
# from django.db.utils import IntegrityError
from core.models import User
class CaseInsensitiveUserNameManagerTest(TestCase):
@classmethod
def setUpTestData(cls):
cls.user1 = User.objects.create_user(username="user1",
password="azerty",
email="user1@test.fr")
def test_get_by_natural_key(self):
user = User.objects.get_by_natural_key('user1')
self.assertEqual(user.username, 'user1')
user = User.objects.get_by_natural_key('uSEr1')
self.assertEqual(user.username, 'user1')
def test_create_user_username(self):
with self.assertRaises(ValueError):
User.objects.create_user(username="user1",
password="azerty",
email="user12@test.fr")
with self.assertRaises(ValueError):
User.objects.create_user(username="usER1",
password="azerty",
email="user13@test.fr")
def test_create_user_email(self):
with self.assertRaises(ValueError):
User.objects.create_user(username="user2",
password="azerty",
email="")
with self.assertRaises(ValueError):
User.objects.create_user(username="user2",
password="azerty",
email="user1@test.fr")
| 39.575 | 67 | 0.53127 | 143 | 1,583 | 5.734266 | 0.258741 | 0.117073 | 0.131707 | 0.128049 | 0.669512 | 0.669512 | 0.642683 | 0.642683 | 0.642683 | 0.578049 | 0 | 0.016145 | 0.373973 | 1,583 | 39 | 68 | 40.589744 | 0.811302 | 0.026532 | 0 | 0.483871 | 0 | 0 | 0.083821 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 1 | 0.129032 | false | 0.16129 | 0.064516 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
cf6cd344f584cd701a4076d93e4150b1102f79aa | 1,448 | py | Python | crop_video.py | ckjellson/tt_tracker | 99800b586e517ea8f048cf2add85cb4b3b091f73 | [
"MIT"
] | 15 | 2020-12-22T08:50:52.000Z | 2022-03-24T09:48:51.000Z | crop_video.py | ckjellson/tt_tracker | 99800b586e517ea8f048cf2add85cb4b3b091f73 | [
"MIT"
] | 2 | 2021-07-04T02:23:00.000Z | 2021-07-05T01:05:47.000Z | crop_video.py | ckjellson/tt_tracker | 99800b586e517ea8f048cf2add85cb4b3b091f73 | [
"MIT"
] | 3 | 2021-01-17T14:39:51.000Z | 2022-01-01T23:52:18.000Z | import cv2
import numpy as np
'''
Loads two videos and generates an interface to crop these to equal length
and being synced in time.
Specify:
path1: path to first video
path2: path to second video
vidname: name of the instance to be created
'''
path1 = "videos_original\out_a_full.mp4"
path2 = "videos_original\out_b_full.mp4"
vidname = 'outside'
cap1 = cv2.VideoCapture(path1)
nbr_frames1 = int(cap1.get(cv2.CAP_PROP_FRAME_COUNT))-1
cap2 = cv2.VideoCapture(path2)
nbr_frames2 = int(cap2.get(cv2.CAP_PROP_FRAME_COUNT))-1
_, f1 = cap1.read()
_, f2 = cap2.read()
height,width,channels = f1.shape
# Find a starting point
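# Interactive sync sketch: press '1' (keycode 49) to advance the first video
# one frame, '2' (keycode 50) to advance the second, and any other key to
# accept the current alignment. Each skipped frame shrinks that clip's
# remaining frame budget so both output clips end up equally long.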
while True:
f = np.hstack((f1, f2))
f = cv2.resize(f, (0, 0), fx=0.5, fy=0.5)
cv2.imshow('',f)
k = cv2.waitKey(0) & 0xFF
if k==49: # 1 is pressed
_, f1 = cap1.read()
nbr_frames1 -= 1
elif k==50: # 2 is pressed
_, f2 = cap2.read()
nbr_frames2 -= 1
else:
break
# Create and save two equally long clips
clip1 = cv2.VideoWriter('videos/' + vidname + '1.mp4',cv2.VideoWriter_fourcc(*'mp4v'), 30.0, (width,height))
clip2 = cv2.VideoWriter('videos/' + vidname + '2.mp4',cv2.VideoWriter_fourcc(*'mp4v'), 30.0, (width,height))
for i in range(min([nbr_frames1, nbr_frames2])):
_, f1 = cap1.read()
_, f2 = cap2.read()
clip1.write(f1)
clip2.write(f2)
clip1.release()
clip2.release()
cap1.release()
cap2.release()
cv2.destroyAllWindows()
| 25.857143 | 108 | 0.662983 | 225 | 1,448 | 4.151111 | 0.466667 | 0.059957 | 0.03212 | 0.027837 | 0.182013 | 0.182013 | 0.139186 | 0.087794 | 0.087794 | 0 | 0 | 0.073567 | 0.19268 | 1,448 | 55 | 109 | 26.327273 | 0.725406 | 0.059392 | 0 | 0.162162 | 1 | 0 | 0.087688 | 0.053144 | 0 | 0 | 0.003543 | 0 | 0 | 1 | 0 | false | 0 | 0.054054 | 0 | 0.054054 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf706ce509e2571be6ee751f269f1f260863436a | 859 | py | Python | Ecommerce/migrations/0010_auto_20200203_0112.py | aryanshridhar/Ecommerce-Website | c582659e9b530555b9715ede7bb774c39f101c7e | [
"MIT"
] | 1 | 2020-06-01T16:41:33.000Z | 2020-06-01T16:41:33.000Z | Ecommerce/migrations/0010_auto_20200203_0112.py | aryanshridhar/Ecommerce-Website | c582659e9b530555b9715ede7bb774c39f101c7e | [
"MIT"
] | 4 | 2020-03-17T03:37:23.000Z | 2021-09-22T18:36:18.000Z | Ecommerce/migrations/0010_auto_20200203_0112.py | aryanshridhar/Ecommerce-Website | c582659e9b530555b9715ede7bb774c39f101c7e | [
"MIT"
] | null | null | null | # Generated by Django 2.2.7 on 2020-02-02 19:42
import datetime
from django.db import migrations, models
import django.db.models.deletion
from django.utils.timezone import utc
class Migration(migrations.Migration):
dependencies = [
('Ecommerce', '0009_review_date'),
]
operations = [
migrations.AlterField(
model_name='review',
name='Date',
field=models.DateTimeField(default=datetime.datetime(2020, 2, 2, 19, 42, 47, 841789, tzinfo=utc)),
),
migrations.CreateModel(
name='Cart',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('Item', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Ecommerce.Product')),
],
),
]
| 29.62069 | 114 | 0.613504 | 95 | 859 | 5.473684 | 0.589474 | 0.046154 | 0.053846 | 0.084615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058176 | 0.259604 | 859 | 28 | 115 | 30.678571 | 0.759434 | 0.052387 | 0 | 0.090909 | 1 | 0 | 0.078818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf71abfca7c5e19ff69fd1894a754a844173e256 | 30,747 | py | Python | marmot/plottingmodules/reserves.py | equinor/Marmot | 99ab6336f920f460df335d4a1b3e4f36d847ae46 | [
"BSD-3-Clause"
] | 2 | 2021-08-05T19:52:02.000Z | 2021-12-10T23:47:52.000Z | marmot/plottingmodules/reserves.py | equinor/Marmot | 99ab6336f920f460df335d4a1b3e4f36d847ae46 | [
"BSD-3-Clause"
] | 21 | 2021-09-01T21:56:42.000Z | 2022-03-31T18:01:48.000Z | marmot/plottingmodules/reserves.py | equinor/Marmot | 99ab6336f920f460df335d4a1b3e4f36d847ae46 | [
"BSD-3-Clause"
] | 3 | 2021-12-14T18:12:33.000Z | 2022-03-25T18:27:26.000Z | # -*- coding: utf-8 -*-
"""Generator reserve plots.
This module creates plots of reserve provision and shortage at the generation
and region level.
@author: Daniel Levie
"""
import logging
import numpy as np
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import marmot.config.mconfig as mconfig
import marmot.plottingmodules.plotutils.plot_library as plotlib
from marmot.plottingmodules.plotutils.plot_data_helper import PlotDataHelper
from marmot.plottingmodules.plotutils.plot_exceptions import (MissingInputData, MissingZoneData)
class MPlot(PlotDataHelper):
"""reserves MPlot class.
All the plotting modules use this same class name.
This class contains plotting methods that are grouped based on the
current module name.
The reserves.py module contains methods that are
related to reserve provision and shortage.
MPlot inherits from the PlotDataHelper class to assist in creating figures.
"""
def __init__(self, argument_dict: dict):
"""
Args:
argument_dict (dict): Dictionary containing all
arguments passed from MarmotPlot.
"""
# iterate over items in argument_dict and set as properties of class
# see key_list in Marmot_plot_main for list of properties
for prop in argument_dict:
self.__setattr__(prop, argument_dict[prop])
# Instantiation of MPlotHelperFunctions
super().__init__(self.Marmot_Solutions_folder, self.AGG_BY, self.ordered_gen,
self.PLEXOS_color_dict, self.Scenarios, self.ylabels,
self.xlabels, self.gen_names_dict, Region_Mapping=self.Region_Mapping)
self.logger = logging.getLogger('marmot_plot.'+__name__)
self.y_axes_decimalpt = mconfig.parser("axes_options","y_axes_decimalpt")
def reserve_gen_timeseries(self, figure_name: str = None, prop: str = None,
start: float = None, end: float= None,
timezone: str = "", start_date_range: str = None,
end_date_range: str = None, **_):
"""Creates a generation timeseries stackplot of total cumulative reserve provision by tech type.
The code will create either a facet plot or a single plot depending on
whether the Facet argument is active.
If a facet plot is created, each scenario is plotted on a separate facet,
otherwise all scenarios are plotted on a single plot.
To make a facet plot, ensure the word 'Facet' is found in the figure_name.
Generation order is determined by the ordered_gen_categories.csv.
Args:
figure_name (str, optional): User defined figure output name. Used here
to determine if a Facet plot should be created.
Defaults to None.
prop (str, optional): Special argument used to adjust specific
plot settings. Controlled through the plot_select.csv.
Options available are:
- Peak Demand
- Date Range
Defaults to None.
start (float, optional): Used in conjunction with the prop argument.
Will define the number of days to plot before a certain event in
a timeseries plot, e.g Peak Demand.
Defaults to None.
end (float, optional): Used in conjunction with the prop argument.
Will define the number of days to plot after a certain event in
a timeseries plot, e.g Peak Demand.
Defaults to None.
timezone (str, optional): The timezone to display on the x-axes.
Defaults to "".
start_date_range (str, optional): Defines a start date at which to represent data from.
Defaults to None.
end_date_range (str, optional): Defines a end date at which to represent data to.
Defaults to None.
Returns:
dict: Dictionary containing the created plot and its data table.
"""
# If not facet plot, only plot first scenario
facet=False
if 'Facet' in figure_name:
facet = True
if not facet:
Scenarios = [self.Scenarios[0]]
else:
Scenarios = self.Scenarios
outputs = {}
# List of properties needed by the plot, properties are a set of tuples and contain 3 parts:
# required True/False, property name and scenarios required, scenarios must be a list.
properties = [(True,"reserves_generators_Provision",self.Scenarios)]
# Runs get_formatted_data within PlotDataHelper to populate PlotDataHelper dictionary
# with all required properties, returns a 1 if required data is missing
check_input_data = self.get_formatted_data(properties)
# Checks if all data required by plot is available, if 1 in list required data is missing
if 1 in check_input_data:
return MissingInputData()
for region in self.Zones:
self.logger.info(f"Zone = {region}")
xdimension, ydimension = self.setup_facet_xy_dimensions(facet,multi_scenario=Scenarios)
grid_size = xdimension*ydimension
excess_axs = grid_size - len(Scenarios)
fig1, axs = plotlib.setup_plot(xdimension,ydimension)
plt.subplots_adjust(wspace=0.05, hspace=0.2)
data_tables = []
unique_tech_names = []
for n, scenario in enumerate(Scenarios):
self.logger.info(f"Scenario = {scenario}")
reserve_provision_timeseries = self["reserves_generators_Provision"].get(scenario)
#Check if zone has reserves, if not skips
try:
reserve_provision_timeseries = reserve_provision_timeseries.xs(region,level=self.AGG_BY)
except KeyError:
self.logger.info(f"No reserves deployed in: {scenario}")
continue
reserve_provision_timeseries = self.df_process_gen_inputs(reserve_provision_timeseries)
if reserve_provision_timeseries.empty is True:
self.logger.info(f"No reserves deployed in: {scenario}")
continue
# unitconversion based off peak generation hour, only checked once
if n == 0:
unitconversion = PlotDataHelper.capacity_energy_unitconversion(max(reserve_provision_timeseries.sum(axis=1)))
if prop == "Peak Demand":
self.logger.info("Plotting Peak Demand period")
total_reserve = reserve_provision_timeseries.sum(axis=1)/unitconversion['divisor']
peak_reserve_t = total_reserve.idxmax()
start_date = peak_reserve_t - dt.timedelta(days=start)
end_date = peak_reserve_t + dt.timedelta(days=end)
reserve_provision_timeseries = reserve_provision_timeseries[start_date : end_date]
Peak_Reserve = total_reserve[peak_reserve_t]
elif prop == 'Date Range':
self.logger.info(f"Plotting specific date range: \
{str(start_date_range)} to {str(end_date_range)}")
reserve_provision_timeseries = reserve_provision_timeseries[start_date_range : end_date_range]
else:
self.logger.info("Plotting graph for entire timeperiod")
reserve_provision_timeseries = reserve_provision_timeseries/unitconversion['divisor']
scenario_names = pd.Series([scenario] * len(reserve_provision_timeseries),name = 'Scenario')
data_table = reserve_provision_timeseries.add_suffix(f" ({unitconversion['units']})")
data_table = data_table.set_index([scenario_names],append = True)
data_tables.append(data_table)
plotlib.create_stackplot(axs, reserve_provision_timeseries, self.PLEXOS_color_dict, labels=reserve_provision_timeseries.columns,n=n)
PlotDataHelper.set_plot_timeseries_format(axs,n=n,minticks=4, maxticks=8)
if prop == "Peak Demand":
axs[n].annotate('Peak Reserve: \n' + str(format(int(Peak_Reserve), '.2f')) + ' {}'.format(unitconversion['units']),
xy=(peak_reserve_t, Peak_Reserve),
xytext=((peak_reserve_t + dt.timedelta(days=0.25)), (Peak_Reserve + Peak_Reserve*0.05)),
fontsize=13, arrowprops=dict(facecolor='black', width=3, shrink=0.1))
# create list of gen technologies
l1 = reserve_provision_timeseries.columns.tolist()
unique_tech_names.extend(l1)
if not data_tables:
self.logger.warning(f'No reserves in {region}')
out = MissingZoneData()
outputs[region] = out
continue
# create handles list of unique tech names then order
handles = np.unique(np.array(unique_tech_names)).tolist()
handles.sort(key = lambda i:self.ordered_gen.index(i))
handles = reversed(handles)
# create custom gen_tech legend
gen_tech_legend = []
for tech in handles:
legend_handles = [Patch(facecolor=self.PLEXOS_color_dict[tech],
alpha=1.0,
label=tech)]
gen_tech_legend.extend(legend_handles)
# Add legend
axs[grid_size-1].legend(handles=gen_tech_legend, loc='lower left',bbox_to_anchor=(1,0),
facecolor='inherit', frameon=True)
#Remove extra axes
if excess_axs != 0:
PlotDataHelper.remove_excess_axs(axs,excess_axs,grid_size)
# add facet labels
self.add_facet_labels(fig1)
fig1.add_subplot(111, frameon=False)
plt.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
if mconfig.parser("plot_title_as_region"):
plt.title(region)
plt.ylabel(f"Reserve Provision ({unitconversion['units']})", color='black', rotation='vertical', labelpad=40)
data_table_out = pd.concat(data_tables)
outputs[region] = {'fig': fig1, 'data_table': data_table_out}
return outputs
def total_reserves_by_gen(self, start_date_range: str = None,
end_date_range: str = None, **_):
"""Creates a generation stacked barplot of total reserve provision by generator tech type.
A separate bar is created for each scenario.
Args:
start_date_range (str, optional): Defines a start date at which to represent data from.
Defaults to None.
end_date_range (str, optional): Defines a end date at which to represent data to.
Defaults to None.
Returns:
dict: Dictionary containing the created plot and its data table.
"""
outputs = {}
# List of properties needed by the plot, properties are a set of tuples and contain 3 parts:
# required True/False, property name and scenarios required, scenarios must be a list.
properties = [(True,"reserves_generators_Provision",self.Scenarios)]
# Runs get_formatted_data within PlotDataHelper to populate PlotDataHelper dictionary
# with all required properties, returns a 1 if required data is missing
check_input_data = self.get_formatted_data(properties)
# Checks if all data required by plot is available, if 1 in list required data is missing
if 1 in check_input_data:
return MissingInputData()
for region in self.Zones:
self.logger.info(f"Zone = {region}")
Total_Reserves_Out = pd.DataFrame()
unique_tech_names = []
for scenario in self.Scenarios:
self.logger.info(f"Scenario = {scenario}")
reserve_provision_timeseries = self["reserves_generators_Provision"].get(scenario)
# Check if zone has reserves; if not, skip the scenario
try:
reserve_provision_timeseries = reserve_provision_timeseries.xs(region,level=self.AGG_BY)
except KeyError:
self.logger.info(f"No reserves deployed in {scenario}")
continue
reserve_provision_timeseries = self.df_process_gen_inputs(reserve_provision_timeseries)
if reserve_provision_timeseries.empty:
self.logger.info(f"No reserves deployed in: {scenario}")
continue
# Calculates interval step to correct for MWh of generation
interval_count = PlotDataHelper.get_sub_hour_interval_count(reserve_provision_timeseries)
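# e.g. assuming get_sub_hour_interval_count() returns intervals per hour, 5-minute
# resolution data gives interval_count = 12, and dividing the MW provision values
# by 12 before summing yields energy totals in MWh.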
# sum totals by fuel types
reserve_provision_timeseries = reserve_provision_timeseries/interval_count
reserve_provision = reserve_provision_timeseries.sum(axis=0)
reserve_provision.rename(scenario, inplace=True)
Total_Reserves_Out = pd.concat([Total_Reserves_Out, reserve_provision], axis=1, sort=False).fillna(0)
Total_Reserves_Out = self.create_categorical_tech_index(Total_Reserves_Out)
Total_Reserves_Out = Total_Reserves_Out.T
Total_Reserves_Out = Total_Reserves_Out.loc[:, (Total_Reserves_Out != 0).any(axis=0)]
if Total_Reserves_Out.empty:
out = MissingZoneData()
outputs[region] = out
continue
Total_Reserves_Out.index = Total_Reserves_Out.index.str.replace('_',' ')
Total_Reserves_Out.index = Total_Reserves_Out.index.str.wrap(5, break_long_words=False)
# Convert units
unitconversion = PlotDataHelper.capacity_energy_unitconversion(max(Total_Reserves_Out.sum()))
Total_Reserves_Out = Total_Reserves_Out/unitconversion['divisor']
data_table_out = Total_Reserves_Out.add_suffix(f" ({unitconversion['units']}h)")
# create figure
fig1, axs = plotlib.create_stacked_bar_plot(Total_Reserves_Out, self.PLEXOS_color_dict,
custom_tick_labels=self.custom_xticklabels)
# additional figure formatting
#fig1.set_ylabel(f"Total Reserve Provision ({unitconversion['units']}h)", color='black', rotation='vertical')
axs.set_ylabel(f"Total Reserve Provision ({unitconversion['units']}h)", color='black', rotation='vertical')
# create list of gen technologies
l1 = Total_Reserves_Out.columns.tolist()
unique_tech_names.extend(l1)
# create handles list of unique tech names then order
handles = np.unique(np.array(unique_tech_names)).tolist()
handles.sort(key = lambda i:self.ordered_gen.index(i))
handles = reversed(handles)
# create custom gen_tech legend
gen_tech_legend = []
for tech in handles:
legend_handles = [Patch(facecolor=self.PLEXOS_color_dict[tech],
alpha=1.0,label=tech)]
gen_tech_legend.extend(legend_handles)
# Add legend
axs.legend(handles=gen_tech_legend, loc='lower left',bbox_to_anchor=(1,0),
facecolor='inherit', frameon=True)
if mconfig.parser("plot_title_as_region"):
axs.set_title(region)
outputs[region] = {'fig': fig1, 'data_table': data_table_out}
return outputs
def reg_reserve_shortage(self, **kwargs):
"""Creates a bar plot of reserve shortage for each region in MWh.
Bars are grouped by reserve type, each scenario is plotted as a different color.
The 'Shortage' argument is passed to the _reserve_bar_plots() method to
create this plot.
Returns:
dict: Dictionary containing the created plot and its data table.
"""
outputs = self._reserve_bar_plots("Shortage", **kwargs)
return outputs
def reg_reserve_provision(self, **kwargs):
"""Creates a bar plot of reserve provision for each region in MWh.
Bars are grouped by reserve type, each scenario is plotted as a different color.
The 'Provision' argument is passed to the _reserve_bar_plots() method to
create this plot.
Returns:
dict: Dictionary containing the created plot and its data table.
"""
outputs = self._reserve_bar_plots("Provision", **kwargs)
return outputs
def reg_reserve_shortage_hrs(self, **kwargs):
"""creates a bar plot of reserve shortage for each region in hrs.
Bars are grouped by reserve type, each scenario is plotted as a differnet color.
The 'Shortage' argument and count_hours=True is passed to the _reserve_bar_plots() method to
create this plot.
Returns:
dict: Dictionary containing the created plot and its data table.
"""
outputs = self._reserve_bar_plots("Shortage", count_hours=True)
return outputs
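# Sketch of how the three thin wrappers above are used (kwargs are illustrative):
#
#     shortage = self.reg_reserve_shortage(start_date_range='2011-01-01')
#     provision = self.reg_reserve_provision()
#     shortage_hrs = self.reg_reserve_shortage_hrs()
#
# Each delegates to the shared _reserve_bar_plots() helper defined below.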
def _reserve_bar_plots(self, data_set: str, count_hours: bool = False,
start_date_range: str = None,
end_date_range: str = None, **_):
"""internal _reserve_bar_plots method, creates 'Shortage', 'Provision' and 'Shortage' bar
plots
Bars are grouped by reserve type, each scenario is plotted as a differnet color.
Args:
data_set (str): Identifies the reserve data set to use and pull
from the formatted h5 file.
count_hours (bool, optional): if True creates a 'Shortage' hours plot.
Defaults to False.
start_date_range (str, optional): Defines a start date from which to represent data.
Defaults to None.
end_date_range (str, optional): Defines an end date up to which to represent data.
Defaults to None.
Returns:
dict: Dictionary containing the created plot and its data table.
"""
outputs = {}
# List of properties needed by the plot, properties are a set of tuples and contain 3 parts:
# required True/False, property name and scenarios required, scenarios must be a list.
properties = [(True, f"reserve_{data_set}", self.Scenarios)]
# Runs get_formatted_data within PlotDataHelper to populate PlotDataHelper dictionary
# with all required properties, returns a 1 if required data is missing
check_input_data = self.get_formatted_data(properties)
# Checks if all data required by plot is available, if 1 in list required data is missing
if 1 in check_input_data:
return MissingInputData()
for region in self.Zones:
self.logger.info(f"Zone = {region}")
Data_Table_Out=pd.DataFrame()
reserve_total_chunk = []
for scenario in self.Scenarios:
self.logger.info(f'Scenario = {scenario}')
reserve_timeseries = self[f"reserve_{data_set}"].get(scenario)
# Check if zone has reserves; if not, skip the scenario
try:
reserve_timeseries = reserve_timeseries.xs(region,level=self.AGG_BY)
except KeyError:
self.logger.info(f"No reserves deployed in {scenario}")
continue
interval_count = PlotDataHelper.get_sub_hour_interval_count(reserve_timeseries)
reserve_timeseries = reserve_timeseries.reset_index(["timestamp","Type","parent"],drop=False)
# Drop duplicates to remove double counting
reserve_timeseries.drop_duplicates(inplace=True)
# Set Type equal to parent value if Type equals '-'
reserve_timeseries['Type'] = reserve_timeseries['Type'].mask(reserve_timeseries['Type'] == '-', reserve_timeseries['parent'])
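# e.g. a reserve row typed '-' with parent 'Regulation' (an assumed reserve name)
# is re-labelled 'Regulation', so the groupby below aggregates it with its parent.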
reserve_timeseries.set_index(["timestamp","Type","parent"],append=True,inplace=True)
# Groupby Type
if not count_hours:
reserve_total = reserve_timeseries.groupby(["Type"]).sum()/interval_count
else:
reserve_total = reserve_timeseries[reserve_timeseries[0]>0] #Filter for non zero values
reserve_total = reserve_total.groupby("Type").count()/interval_count
reserve_total.rename(columns={0:scenario},inplace=True)
reserve_total_chunk.append(reserve_total)
if reserve_total_chunk:
reserve_out = pd.concat(reserve_total_chunk, axis=1, sort=False)
reserve_out.columns = reserve_out.columns.str.replace('_',' ')
else:
reserve_out=pd.DataFrame()
# If no reserves return nothing
if reserve_out.empty:
out = MissingZoneData()
outputs[region] = out
continue
if not count_hours:
# Convert units
unitconversion = PlotDataHelper.capacity_energy_unitconversion(max(reserve_out.sum()))
reserve_out = reserve_out/unitconversion['divisor']
Data_Table_Out = reserve_out.add_suffix(f" ({unitconversion['units']}h)")
else:
Data_Table_Out = reserve_out.add_suffix(" (hrs)")
# create color dictionary
color_dict = dict(zip(reserve_out.columns,self.color_list))
fig2,axs = plotlib.create_grouped_bar_plot(reserve_out, color_dict)
if not count_hours:
axs.yaxis.set_major_formatter(mpl.ticker.FuncFormatter(lambda x, p: format(x, f',.{self.y_axes_decimalpt}f')))
axs.set_ylabel(f"Reserve {data_set} [{unitconversion['units']}h]", color='black', rotation='vertical')
else:
axs.set_ylabel(f"Reserve {data_set} Hours", color='black', rotation='vertical')
handles, labels = axs.get_legend_handles_labels()
axs.legend(handles,labels, loc='lower left',bbox_to_anchor=(1,0),
facecolor='inherit', frameon=True)
if mconfig.parser("plot_title_as_region"):
axs.set_title(region)
outputs[region] = {'fig': fig2,'data_table': Data_Table_Out}
return outputs
def reg_reserve_shortage_timeseries(self, figure_name: str = None,
timezone: str = "", start_date_range: str = None,
end_date_range: str = None, **_):
"""Creates a timeseries line plot of reserve shortage.
A line is plotted for each reserve type shortage.
The code will create either a facet plot or a single plot depending on
if the Facet argument is active.
If a facet plot is created, each scenario is plotted on a separate facet,
otherwise all scenarios are plotted on a single plot.
To make a facet plot, ensure the word 'Facet' is found in the figure_name.
Args:
figure_name (str, optional): User defined figure output name. Used here
to determine if a Facet plot should be created.
Defaults to None.
timezone (str, optional): The timezone to display on the x-axes.
Defaults to "".
start_date_range (str, optional): Defines a start date from which to represent data.
Defaults to None.
end_date_range (str, optional): Defines an end date up to which to represent data.
Defaults to None.
Returns:
dict: Dictionary containing the created plot and its data table.
"""
facet = bool(figure_name and 'Facet' in figure_name)
# If not facet plot, only plot first scenario
if not facet:
Scenarios = [self.Scenarios[0]]
else:
Scenarios = self.Scenarios
outputs = {}
# List of properties needed by the plot, properties are a set of tuples and contain 3 parts:
# required True/False, property name and scenarios required, scenarios must be a list.
properties = [(True, "reserve_Shortage", Scenarios)]
# Runs get_formatted_data within PlotDataHelper to populate PlotDataHelper dictionary
# with all required properties, returns a 1 if required data is missing
check_input_data = self.get_formatted_data(properties)
# Checks if all data required by plot is available, if 1 in list required data is missing
if 1 in check_input_data:
return MissingInputData()
for region in self.Zones:
self.logger.info(f"Zone = {region}")
xdimension, ydimension = self.setup_facet_xy_dimensions(facet,multi_scenario = Scenarios)
grid_size = xdimension*ydimension
excess_axs = grid_size - len(Scenarios)
fig3, axs = plotlib.setup_plot(xdimension,ydimension)
plt.subplots_adjust(wspace=0.05, hspace=0.2)
data_tables = []
unique_reserve_types = []
for n, scenario in enumerate(Scenarios):
self.logger.info(f'Scenario = {scenario}')
reserve_timeseries = self["reserve_Shortage"].get(scenario)
# Check if zone has reserves; if not, skip the scenario
try:
reserve_timeseries = reserve_timeseries.xs(region,level=self.AGG_BY)
except KeyError:
self.logger.info(f"No reserves deployed in {scenario}")
continue
reserve_timeseries.reset_index(["timestamp","Type","parent"],drop=False,inplace=True)
reserve_timeseries = reserve_timeseries.drop_duplicates()
# Set Type equal to parent value if Type equals '-'
reserve_timeseries['Type'] = reserve_timeseries['Type'].mask(reserve_timeseries['Type'] == '-',
reserve_timeseries['parent'])
reserve_timeseries = reserve_timeseries.pivot(index='timestamp', columns='Type', values=0)
if pd.notna(start_date_range):
self.logger.info(f"Plotting specific date range: \
{str(start_date_range)} to {str(end_date_range)}")
reserve_timeseries = reserve_timeseries[start_date_range : end_date_range]
else:
self.logger.info("Plotting graph for entire timeperiod")
# create color dictionary
color_dict = dict(zip(reserve_timeseries.columns,self.color_list))
scenario_names = pd.Series([scenario] * len(reserve_timeseries),name = 'Scenario')
data_table = reserve_timeseries.add_suffix(" (MW)")
data_table = data_table.set_index([scenario_names],append = True)
data_tables.append(data_table)
for column in reserve_timeseries:
plotlib.create_line_plot(axs,reserve_timeseries,column,color_dict=color_dict,label=column, n=n)
axs[n].yaxis.set_major_formatter(mpl.ticker.FuncFormatter(lambda x, p: format(x, f',.{self.y_axes_decimalpt}f')))
axs[n].margins(x=0.01)
PlotDataHelper.set_plot_timeseries_format(axs,n=n,minticks=6, maxticks=12)
# create list of gen technologies
l1 = reserve_timeseries.columns.tolist()
unique_reserve_types.extend(l1)
if not data_tables:
out = MissingZoneData()
outputs[region] = out
continue
# create handles list of unique reserve names
handles = np.unique(np.array(unique_reserve_types)).tolist()
# create color dictionary
color_dict = dict(zip(handles,self.color_list))
# create custom gen_tech legend
reserve_legend = []
for Type in handles:
legend_handles = [Line2D([0], [0], color=color_dict[Type], lw=2, label=Type)]
reserve_legend.extend(legend_handles)
axs[grid_size-1].legend(handles=reserve_legend, loc='lower left',
bbox_to_anchor=(1,0), facecolor='inherit',
frameon=True)
#Remove extra axes
if excess_axs != 0:
PlotDataHelper.remove_excess_axs(axs,excess_axs,grid_size)
# add facet labels
self.add_facet_labels(fig3)
fig3.add_subplot(111, frameon=False)
plt.tick_params(labelcolor='none', top=False, bottom=False,
left=False, right=False)
# plt.xlabel(timezone, color='black', rotation='horizontal',labelpad = 30)
plt.ylabel('Reserve Shortage [MW]', color='black',
rotation='vertical',labelpad = 40)
if mconfig.parser("plot_title_as_region"):
plt.title(region)
data_table_out = pd.concat(data_tables)
outputs[region] = {'fig': fig3, 'data_table': data_table_out}
return outputs | 47.230415 | 148 | 0.606921 | 3,505 | 30,747 | 5.138374 | 0.12525 | 0.036424 | 0.041866 | 0.013326 | 0.711938 | 0.701555 | 0.658412 | 0.624986 | 0.603998 | 0.572959 | 0 | 0.005525 | 0.317104 | 30,747 | 651 | 149 | 47.230415 | 0.852217 | 0.287117 | 0 | 0.532308 | 0 | 0 | 0.075273 | 0.015953 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024615 | false | 0 | 0.036923 | 0 | 0.098462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf7b65fc64ee28b65b720c458202445033d75918 | 142,649 | py | Python | dcase_framework/datasets.py | thisisjl/DCASE2017-modified | 4755e712e3b53277120c142cc6c14f279cc396d4 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | dcase_framework/datasets.py | thisisjl/DCASE2017-modified | 4755e712e3b53277120c142cc6c14f279cc396d4 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | dcase_framework/datasets.py | thisisjl/DCASE2017-modified | 4755e712e3b53277120c142cc6c14f279cc396d4 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Datasets
==================
Classes for dataset handling
Dataset - Base class
^^^^^^^^^^^^^^^^^^^^
This is the base class, and all the specialized datasets are inherited from it. One should never use base class itself.
Usage examples:
.. code-block:: python
:linenos:
# Create class
dataset = TUTAcousticScenes_2017_DevelopmentSet(data_path='data')
# Initialize dataset, this will make sure dataset is downloaded, packages are extracted, and needed meta files are created
dataset.initialize()
# Show meta data
dataset.meta.show()
# Get all evaluation setup folds
folds = dataset.folds()
# Get all evaluation setup folds
train_data_fold1 = dataset.train(fold=folds[0])
test_data_fold1 = dataset.test(fold=folds[0])
.. autosummary::
:toctree: generated/
Dataset
Dataset.initialize
Dataset.show_info
Dataset.audio_files
Dataset.audio_file_count
Dataset.meta
Dataset.meta_count
Dataset.error_meta
Dataset.error_meta_count
Dataset.fold_count
Dataset.scene_labels
Dataset.scene_label_count
Dataset.event_labels
Dataset.event_label_count
Dataset.audio_tags
Dataset.audio_tag_count
Dataset.download_packages
Dataset.extract
Dataset.train
Dataset.test
Dataset.eval
Dataset.folds
Dataset.file_meta
Dataset.file_error_meta
Dataset.file_error_meta
Dataset.relative_to_absolute_path
Dataset.absolute_to_relative
AcousticSceneDataset
^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
AcousticSceneDataset
Specialized classes inherited AcousticSceneDataset:
.. autosummary::
:toctree: generated/
TUTAcousticScenes_2017_DevelopmentSet
TUTAcousticScenes_2016_DevelopmentSet
TUTAcousticScenes_2016_EvaluationSet
SoundEventDataset
^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
SoundEventDataset
SoundEventDataset.event_label_count
SoundEventDataset.event_labels
SoundEventDataset.train
SoundEventDataset.test
Specialized classes inherited SoundEventDataset:
.. autosummary::
:toctree: generated/
TUTRareSoundEvents_2017_DevelopmentSet
TUTSoundEvents_2016_DevelopmentSet
TUTSoundEvents_2016_EvaluationSet
AudioTaggingDataset
^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
AudioTaggingDataset
"""
from __future__ import print_function, absolute_import
import sys
import os
import logging
import socket
import zipfile
import tarfile
import collections
import csv
import numpy
import hashlib
import yaml
from tqdm import tqdm
from six import iteritems
from .utils import get_parameter_hash, get_class_inheritors
from .decorators import before_and_after_function_wrapper
from .files import TextFile, ParameterFile, ParameterListFile, AudioFile
from .containers import DottedDict
from .metadata import MetaDataContainer, MetaDataItem
def dataset_list(data_path, group=None):
"""List of datasets available
Parameters
----------
data_path : str
Base path for the datasets
group : str
Group label for the datasets, currently supported ['acoustic scene', 'sound event', 'audio tagging']
Returns
-------
str
Multi line string containing dataset table
"""
output = ''
output += ' Dataset list\n'
output += ' {class_name:<45s} | {group:20s} | {valid:5s} | {files:10s} |\n'.format(
class_name='Class Name',
group='Group',
valid='Valid',
files='Files'
)
output += ' {class_name:<45s} + {group:20s} + {valid:5s} + {files:10s} +\n'.format(
class_name='-' * 45,
group='-' * 20,
valid='-'*5,
files='-'*10
)
def get_empty_row():
return ' {class_name:<45s} | {group:20s} | {valid:5s} | {files:10s} |\n'.format(
class_name='',
group='',
valid='',
files=''
)
def get_row(d):
file_count = 0
if d.meta_container.exists():
file_count = len(d.meta)
return ' {class_name:<45s} | {group:20s} | {valid:5s} | {files:10s} |\n'.format(
class_name=d.__class__.__name__,
group=d.dataset_group,
valid='Yes' if d.check_filelist() else 'No',
files=str(file_count) if file_count else ''
)
if not group or group == 'acoustic scene':
for dataset_class in get_class_inheritors(AcousticSceneDataset):
d = dataset_class(data_path=data_path)
output += get_row(d)
if not group or group == 'sound event':
for dataset_class in get_class_inheritors(SoundEventDataset):
d = dataset_class(data_path=data_path)
output += get_row(d)
if not group or group == 'audio tagging':
for dataset_class in get_class_inheritors(AudioTaggingDataset):
d = dataset_class(data_path=data_path)
output += get_row(d)
return output
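# A minimal usage sketch (the data path is an illustrative assumption):
#
#     print(dataset_list(data_path='data', group='acoustic scene'))
#
# This prints one table row per registered dataset class, with the 'Valid' column
# showing whether the local copy currently passes the file-list hash check.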
def dataset_factory(*args, **kwargs):
"""Factory to get correct dataset class based on name
Parameters
----------
dataset_class_name : str
Class name
Default value "None"
Raises
------
NameError
Class does not exist
Returns
-------
Dataset class
"""
dataset_class_name = kwargs.get('dataset_class_name', None)
try:
return eval(dataset_class_name)(*args, **kwargs)
except NameError:
message = '{name}: No valid dataset given [{dataset_class_name}]'.format(
name='dataset_factory',
dataset_class_name=dataset_class_name
)
logging.getLogger('dataset_factory').exception(message)
raise NameError(message)
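# Example (argument values are illustrative): resolve a dataset class from its name,
#
#     dataset = dataset_factory(
#         dataset_class_name='TUTAcousticScenes_2017_DevelopmentSet',
#         data_path='data'
#     )
#
# Note that the lookup uses eval(), so only trusted, module-level class names
# should ever be passed in.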
class Dataset(object):
"""Dataset base class
The specific dataset classes are inherited from this class, and only needed methods are reimplemented.
"""
def __init__(self, *args, **kwargs):
"""Constructor
Parameters
----------
name : str
storage_name : str
data_path : str
Base path where the dataset is stored.
(Default value='data')
logger : logger
Instance of logging
Default value "none"
show_progress_in_console : bool
Show progress in console.
Default value "True"
log_system_progress : bool
Show progress in log.
Default value "False"
use_ascii_progress_bar : bool
Show progress bar using ASCII characters. Use this if your console does not support UTF-8 characters.
Default value "False"
"""
self.logger = kwargs.get('logger') or logging.getLogger(__name__)
self.disable_progress_bar = not kwargs.get('show_progress_in_console', True)
self.log_system_progress = kwargs.get('log_system_progress', False)
self.use_ascii_progress_bar = kwargs.get('use_ascii_progress_bar', True)
# Dataset name
self.name = kwargs.get('name', 'dataset')
# Folder name for dataset
self.storage_name = kwargs.get('storage_name', 'dataset')
# Path to the dataset
self.local_path = os.path.join(kwargs.get('data_path', 'data'), self.storage_name)
# Evaluation setup folder
self.evaluation_setup_folder = kwargs.get('evaluation_setup_folder', 'evaluation_setup')
# Path to the folder containing evaluation setup files
self.evaluation_setup_path = os.path.join(self.local_path, self.evaluation_setup_folder)
# Meta data file, csv-format
self.meta_filename = kwargs.get('meta_filename', 'meta.txt')
# Path to meta data file
self.meta_container = MetaDataContainer(filename=os.path.join(self.local_path, self.meta_filename))
if self.meta_container.exists():
self.meta_container.load()
# Error meta data file, csv-format
self.error_meta_filename = kwargs.get('error_meta_filename', 'error.txt')
# Path to error meta data file
self.error_meta_file = os.path.join(self.local_path, self.error_meta_filename)
# Hash file to detect removed or added files
self.filelisthash_filename = kwargs.get('filelisthash_filename', 'filelist.python.hash')
# Dirs to be excluded when calculating filelist hash
self.filelisthash_exclude_dirs = kwargs.get('filelisthash_exclude_dirs', [])
# Number of evaluation folds
self.crossvalidation_folds = 1
# List containing dataset package items
# Define this in the inherited class.
# Format:
# {
# 'remote_package': download_url,
# 'local_package': os.path.join(self.local_path, 'name_of_downloaded_package'),
# 'local_audio_path': os.path.join(self.local_path, 'name_of_folder_containing_audio_files'),
# }
self.package_list = []
# List of audio files
self.files = None
# List of audio error meta data dict
self.error_meta_data = None
# Training meta data for folds
self.crossvalidation_data_train = {}
# Testing meta data for folds
self.crossvalidation_data_test = {}
# Evaluation meta data for folds
self.crossvalidation_data_eval = {}
# Recognized audio extensions
self.audio_extensions = {'wav', 'flac'}
self.default_audio_extension = 'wav'
# Reference data presence flag; by default a dataset should have reference data present.
# However, some evaluation datasets might not.
# Info fields for dataset
self.authors = ''
self.name_remote = ''
self.url = ''
self.audio_source = ''
self.audio_type = ''
self.recording_device_model = ''
self.microphone_model = ''
def initialize(self):
# Create the dataset path if it does not exist
if not os.path.isdir(self.local_path):
os.makedirs(self.local_path)
if not self.check_filelist():
self.download_packages()
self.extract()
self._save_filelist_hash()
return self
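# initialize() returns self, so construction and preparation can be chained, e.g.
# (sketch): dataset = TUTAcousticScenes_2017_DevelopmentSet(data_path='data').initialize()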
def show_info(self):
DottedDict(self.dataset_meta).show()
@property
def audio_files(self):
"""Get all audio files in the dataset
Parameters
----------
Returns
-------
filelist : list
File list with absolute paths
"""
if self.files is None:
self.files = []
for item in self.package_list:
path = item['local_audio_path']
if path:
for f in os.listdir(path):
file_name, file_extension = os.path.splitext(f)
if file_extension[1:] in self.audio_extensions:
if os.path.abspath(os.path.join(path, f)) not in self.files:
self.files.append(os.path.abspath(os.path.join(path, f)))
self.files.sort()
return self.files
@property
def audio_file_count(self):
"""Get number of audio files in dataset
Parameters
----------
Returns
-------
filecount : int
Number of audio files
"""
return len(self.audio_files)
@property
def meta(self):
"""Get meta data for dataset. If not already read from disk, data is read and returned.
Parameters
----------
Returns
-------
meta_container : list
List containing meta data as dict.
Raises
-------
IOError
meta file not found.
"""
if self.meta_container.empty():
if self.meta_container.exists():
self.meta_container.load()
else:
message = '{name}: Meta file not found [{filename}]'.format(
name=self.__class__.__name__,
filename=self.meta_container.filename
)
self.logger.exception(message)
raise IOError(message)
return self.meta_container
@property
def meta_count(self):
"""Number of meta data items.
Parameters
----------
Returns
-------
meta_item_count : int
Meta data item count
"""
return len(self.meta_container)
@property
def error_meta(self):
"""Get audio error meta data for dataset. If not already read from disk, data is read and returned.
Parameters
----------
Raises
-------
IOError:
audio error meta file not found.
Returns
-------
error_meta_data : list
List containing audio error meta data as dict.
"""
if self.error_meta_data is None:
self.error_meta_data = MetaDataContainer(filename=self.error_meta_file)
if self.error_meta_data.exists():
self.error_meta_data.load()
else:
message = '{name}: Error meta file not found [{filename}]'.format(name=self.__class__.__name__,
filename=self.error_meta_file)
self.logger.exception(message)
raise IOError(message)
return self.error_meta_data
def error_meta_count(self):
"""Number of error meta data items.
Parameters
----------
Returns
-------
meta_item_count : int
Meta data item count
"""
return len(self.error_meta)
@property
def fold_count(self):
"""Number of fold in the evaluation setup.
Parameters
----------
Returns
-------
fold_count : int
Number of folds
"""
return self.crossvalidation_folds
@property
def scene_labels(self):
"""List of unique scene labels in the meta data.
Parameters
----------
Returns
-------
labels : list
List of scene labels in alphabetical order.
"""
return self.meta_container.unique_scene_labels
@property
def scene_label_count(self):
"""Number of unique scene labels in the meta data.
Parameters
----------
Returns
-------
scene_label_count : int
Number of unique scene labels.
"""
return self.meta_container.scene_label_count
def event_labels(self):
"""List of unique event labels in the meta data.
Parameters
----------
Returns
-------
labels : list
List of event labels in alphabetical order.
"""
return self.meta_container.unique_event_labels
@property
def event_label_count(self):
"""Number of unique event labels in the meta data.
Parameters
----------
Returns
-------
event_label_count : int
Number of unique event labels
"""
return self.meta_container.event_label_count
@property
def audio_tags(self):
"""List of unique audio tags in the meta data.
Parameters
----------
Returns
-------
labels : list
List of audio tags in alphabetical order.
"""
tags = []
for item in self.meta:
if 'tags' in item:
for tag in item['tags']:
if tag and tag not in tags:
tags.append(tag)
tags.sort()
return tags
@property
def audio_tag_count(self):
"""Number of unique audio tags in the meta data.
Parameters
----------
Returns
-------
audio_tag_count : int
Number of unique audio tags
"""
return len(self.audio_tags)
def __getitem__(self, i):
"""Getting meta data item
Parameters
----------
i : int
item id
Returns
-------
meta_data : dict
Meta data item
"""
if i < len(self.meta_container):
return self.meta_container[i]
else:
return None
def __iter__(self):
"""Iterator for meta data items
Parameters
----------
Nothing
Returns
-------
Nothing
"""
i = 0
meta = self[i]
# yield meta data items while they are valid
while meta is not None:
yield meta
# get next item
i += 1
meta = self[i]
def download_packages(self):
"""Download dataset packages over the internet to the local path
Parameters
----------
Returns
-------
Nothing
Raises
-------
IOError
Download failed.
"""
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Set socket timeout
socket.setdefaulttimeout(120)
item_progress = tqdm(self.package_list,
desc="{0: <25s}".format('Download package list'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for item in item_progress:
try:
if item['remote_package'] and not os.path.isfile(item['local_package']):
def progress_hook(t):
"""
Wraps tqdm instance. Don't forget to close() or __exit__()
the tqdm instance once you're done with it (easiest using `with` syntax).
"""
last_b = [0]
def inner(b=1, bsize=1, tsize=None):
"""
b : int, optional
Number of blocks just transferred [default: 1].
bsize : int, optional
Size of each block (in tqdm units) [default: 1].
tsize : int, optional
Total size (in tqdm units). If [default: None] remains unchanged.
"""
if tsize is not None:
t.total = tsize
t.update((b - last_b[0]) * bsize)
last_b[0] = b
return inner
remote_file = item['remote_package']
tmp_file = os.path.join(self.local_path, 'tmp_file')
with tqdm(desc="{0: >25s}".format(os.path.splitext(remote_file.split('/')[-1])[0]),
file=sys.stdout,
unit='B',
unit_scale=True,
miniters=1,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar) as t:
local_filename, headers = urlretrieve(
remote_file,
filename=tmp_file,
reporthook=progress_hook(t),
data=None
)
os.rename(tmp_file, item['local_package'])
except Exception as e:
message = '{name}: Download failed [{filename}] [{errno}: {strerror}]'.format(
name=self.__class__.__name__,
filename=item['remote_package'],
errno=e.errno if hasattr(e, 'errno') else '',
strerror=e.strerror if hasattr(e, 'strerror') else '',
)
self.logger.exception(message)
raise
@before_and_after_function_wrapper
def extract(self):
"""Extract the dataset packages
Parameters
----------
Returns
-------
Nothing
"""
item_progress = tqdm(self.package_list,
desc="{0: <25s}".format('Extract packages'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for item_id, item in enumerate(item_progress):
if self.log_system_progress:
self.logger.info(' {title:<15s} [{item_id:d}/{total:d}] {package:<30s}'.format(
title='Extract packages ',
item_id=item_id,
total=len(item_progress),
package=item['local_package'])
)
if item['local_package'] and os.path.isfile(item['local_package']):
if item['local_package'].endswith('.zip'):
with zipfile.ZipFile(item['local_package'], "r") as z:
# Trick to omit first level folder
parts = []
for name in z.namelist():
if not name.endswith('/'):
parts.append(name.split('/')[:-1])
prefix = os.path.commonprefix(parts) or ''
if prefix:
prefix = prefix[:1]
prefix = '/'.join(prefix) + '/'
offset = len(prefix)
# Start extraction
members = z.infolist()
file_count = 1
progress = tqdm(members,
desc="{0: <25s}".format('Extract'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for i, member in enumerate(progress):
if self.log_system_progress:
self.logger.info(' {title:<15s} [{item_id:d}/{total:d}] {file:<30s}'.format(
title='Extract ',
item_id=i,
total=len(progress),
file=member.filename)
)
if len(member.filename) > offset:
member.filename = member.filename[offset:]
progress.set_description("{0: >35s}".format(member.filename.split('/')[-1]))
progress.update()
if not os.path.isfile(os.path.join(self.local_path, member.filename)):
try:
z.extract(member, self.local_path)
except KeyboardInterrupt:
# Delete latest file, since most likely it was not extracted fully
os.remove(os.path.join(self.local_path, member.filename))
# Quit
sys.exit()
file_count += 1
elif item['local_package'].endswith('.tar.gz'):
tar = tarfile.open(item['local_package'], "r:gz")
progress = tqdm(tar,
desc="{0: <25s}".format('Extract'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for i, tar_info in enumerate(progress):
if self.log_system_progress:
self.logger.info(' {title:<15s} [{item_id:d}/{total:d}] {file:<30s}'.format(
title='Extract ',
item_id=i,
total=len(progress),
file=tar_info.name)
)
if not os.path.isfile(os.path.join(self.local_path, tar_info.name)):
tar.extract(tar_info, self.local_path)
tar.members = []
tar.close()
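# Extraction is effectively resumable: members that already exist locally are
# skipped, and for zip packages a KeyboardInterrupt removes the partially written
# file before exiting, so a re-run continues where the previous one stopped.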
def _get_filelist(self, exclude_dirs=None):
"""List of files under local_path
Parameters
----------
exclude_dirs : list of str
List of directories to be excluded
Default value "[]"
Returns
-------
filelist: list
File list
"""
if exclude_dirs is None:
exclude_dirs = []
filelist = []
for path, subdirs, files in os.walk(self.local_path):
for name in files:
if os.path.splitext(name)[1] != os.path.splitext(self.filelisthash_filename)[1] and os.path.split(path)[1] not in exclude_dirs:
filelist.append(os.path.join(path, name))
return sorted(filelist)
def check_filelist(self):
"""Generates hash from file list and check does it matches with one saved in filelist.hash.
If some files have been deleted or added, checking will result False.
Parameters
----------
Returns
-------
result: bool
Result
"""
if os.path.isfile(os.path.join(self.local_path, self.filelisthash_filename)):
old_hash_value = TextFile(filename=os.path.join(self.local_path, self.filelisthash_filename)).load()[0]
file_list = self._get_filelist(exclude_dirs=self.filelisthash_exclude_dirs)
new_hash_value = get_parameter_hash(file_list)
return old_hash_value == new_hash_value
else:
return False
def _save_filelist_hash(self):
"""Generates file list hash, and saves it as filelist.hash under local_path.
Parameters
----------
Nothing
Returns
-------
Nothing
"""
filelist = self._get_filelist()
hash_value = get_parameter_hash(filelist)
TextFile([hash_value], filename=os.path.join(self.local_path, self.filelisthash_filename)).save()
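# Together, check_filelist() and _save_filelist_hash() form a cheap change detector:
# the sorted file listing is hashed once after download/extract, and initialize()
# re-downloads and re-extracts only when a later listing hashes to a different
# value, i.e. when files have been added or removed.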
def train(self, fold=0):
"""List of training items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.crossvalidation_data_train:
self.crossvalidation_data_train[fold] = []
if fold > 0:
self.crossvalidation_data_train[fold] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(setup_part='train', fold=fold)).load()
else:
self.crossvalidation_data_train[0] = self.meta_container
for item in self.crossvalidation_data_train[fold]:
item['file'] = self.relative_to_absolute_path(item['file'])
return self.crossvalidation_data_train[fold]
def test(self, fold=0):
"""List of testing items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.crossvalidation_data_test:
self.crossvalidation_data_test[fold] = []
if fold > 0:
self.crossvalidation_data_test[fold] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(setup_part='test', fold=fold)).load()
for item in self.crossvalidation_data_test[fold]:
item['file'] = self.relative_to_absolute_path(item['file'])
else:
self.crossvalidation_data_test[fold] = self.meta_container
for item in self.crossvalidation_data_test[fold]:
item['file'] = self.relative_to_absolute_path(item['file'])
return self.crossvalidation_data_test[fold]
def eval(self, fold=0):
"""List of evaluation items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.crossvalidation_data_eval:
self.crossvalidation_data_eval[fold] = []
if fold > 0:
self.crossvalidation_data_eval[fold] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(setup_part='evaluate', fold=fold)).load()
else:
self.crossvalidation_data_eval[fold] = self.meta_container
for item in self.crossvalidation_data_eval[fold]:
item['file'] = self.relative_to_absolute_path(item['file'])
return self.crossvalidation_data_eval[fold]
def folds(self, mode='folds'):
"""List of fold ids
Parameters
----------
mode : str {'folds','full'}
Fold setup type, possible values are 'folds' and 'full'. In 'full' mode the fold number is set to 0 and all data is used for training.
(Default value=folds)
Returns
-------
list : list of integers
Fold ids
"""
if mode == 'folds':
return range(1, self.crossvalidation_folds + 1)
elif mode == 'full':
return [0]
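# Sketch of driving a cross-validation loop with the accessors above
# (variable names are illustrative):
#
#     for fold in dataset.folds(mode='folds'):   # e.g. [1, 2, 3, 4]
#         train_items = dataset.train(fold=fold)
#         test_items = dataset.test(fold=fold)
#
# mode='full' yields [0], a single pseudo-fold in which all meta data is returned.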
def file_meta(self, filename):
"""Meta data for given file
Parameters
----------
filename : str
File name
Returns
-------
list : list of dicts
List containing all meta data related to given file.
"""
return self.meta_container.filter(filename=self.absolute_to_relative(filename))
def file_error_meta(self, filename):
"""Error meta data for given file
Parameters
----------
filename : str
File name
Returns
-------
list : list of dicts
List containing all error meta data related to given file.
"""
return self.error_meta.filter(file=self.absolute_to_relative(filename))
def relative_to_absolute_path(self, path):
"""Converts relative path into absolute path.
Parameters
----------
path : str
Relative path
Returns
-------
path : str
Absolute path
"""
return os.path.abspath(os.path.expanduser(os.path.join(self.local_path, path)))
def absolute_to_relative(self, path):
"""Converts absolute path into relative path.
Parameters
----------
path : str
Absolute path
Returns
-------
path : str
Relative path
"""
if path.startswith(os.path.abspath(self.local_path)):
return os.path.relpath(path, self.local_path)
else:
return path
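# e.g. assuming local_path='data/TUT-acoustic-scenes-2017-development':
#     relative_to_absolute_path('audio/a001.wav')
#         -> '/abs/cwd/data/TUT-acoustic-scenes-2017-development/audio/a001.wav'
#     absolute_to_relative('/abs/cwd/data/TUT-acoustic-scenes-2017-development/audio/a001.wav')
#         -> 'audio/a001.wav'
# (the '/abs/cwd' part stands in for the current working directory)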
def _get_evaluation_setup_filename(self, setup_part='train', fold=None, scene_label=None, file_extension='txt'):
parts = []
if scene_label:
parts.append(scene_label)
if fold:
parts.append('fold' + str(fold))
if setup_part == 'train':
parts.append('train')
elif setup_part == 'test':
parts.append('test')
elif setup_part == 'evaluate':
parts.append('evaluate')
return os.path.join(self.evaluation_setup_path, '_'.join(parts) + '.' + file_extension)
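# For example, setup_part='train', fold=2, scene_label='home' (an assumed label)
# resolves to '<evaluation_setup_path>/home_fold2_train.txt'.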
class AcousticSceneDataset(Dataset):
def __init__(self, *args, **kwargs):
super(AcousticSceneDataset, self).__init__(*args, **kwargs)
self.dataset_group = 'base class'
class SoundEventDataset(Dataset):
def __init__(self, *args, **kwargs):
super(SoundEventDataset, self).__init__(*args, **kwargs)
self.dataset_group = 'base class'
def event_label_count(self, scene_label=None):
"""Number of unique scene labels in the meta data.
Parameters
----------
scene_label : str
Scene label
Default value "None"
Returns
-------
scene_label_count : int
Number of unique scene labels.
"""
return len(self.event_labels(scene_label=scene_label))
def event_labels(self, scene_label=None):
"""List of unique event labels in the meta data.
Parameters
----------
scene_label : str
Scene label
Default value "None"
Returns
-------
labels : list
List of event labels in alphabetical order.
"""
if scene_label is not None:
labels = self.meta_container.filter(scene_label=scene_label).unique_event_labels
else:
labels = self.meta_container.unique_event_labels
labels.sort()
return labels
def train(self, fold=0, scene_label=None):
"""List of training items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
scene_label : str
Scene label
Default value "None"
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.crossvalidation_data_train:
self.crossvalidation_data_train[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.crossvalidation_data_train[fold]:
self.crossvalidation_data_train[fold][scene_label_] = MetaDataContainer()
if fold > 0:
self.crossvalidation_data_train[fold][scene_label_] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(setup_part='train', fold=fold, scene_label=scene_label_)).load()
else:
self.crossvalidation_data_train[0][scene_label_] = self.meta_container.filter(
scene_label=scene_label_
)
for item in self.crossvalidation_data_train[fold][scene_label_]:
item['file'] = self.relative_to_absolute_path(item['file'])
if scene_label:
return self.crossvalidation_data_train[fold][scene_label]
else:
data = MetaDataContainer()
for scene_label_ in self.scene_labels:
data += self.crossvalidation_data_train[fold][scene_label_]
return data
def test(self, fold=0, scene_label=None):
"""List of testing items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
scene_label : str
Scene label
Default value "None"
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.crossvalidation_data_test:
self.crossvalidation_data_test[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.crossvalidation_data_test[fold]:
self.crossvalidation_data_test[fold][scene_label_] = MetaDataContainer()
if fold > 0:
self.crossvalidation_data_test[fold][scene_label_] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(
setup_part='test', fold=fold, scene_label=scene_label_)
).load()
else:
self.crossvalidation_data_test[0][scene_label_] = self.meta_container.filter(
scene_label=scene_label_
)
for item in self.crossvalidation_data_test[fold][scene_label_]:
item['file'] = self.relative_to_absolute_path(item['file'])
if scene_label:
return self.crossvalidation_data_test[fold][scene_label]
else:
data = MetaDataContainer()
for scene_label_ in self.scene_labels:
data += self.crossvalidation_data_test[fold][scene_label_]
return data
class SyntheticSoundEventDataset(SoundEventDataset):
def __init__(self, *args, **kwargs):
super(SyntheticSoundEventDataset, self).__init__(*args, **kwargs)
self.dataset_group = 'base class'
def initialize(self):
# Create the dataset path if it does not exist
if not os.path.isdir(self.local_path):
os.makedirs(self.local_path)
if not self.check_filelist():
self.download_packages()
self.extract()
self._save_filelist_hash()
self.synthesize()
return self
@before_and_after_function_wrapper
def synthesize(self):
pass
class AudioTaggingDataset(Dataset):
def __init__(self, *args, **kwargs):
super(AudioTaggingDataset, self).__init__(*args, **kwargs)
self.dataset_group = 'base class'
# =====================================================
# DCASE 2017
# =====================================================
class TUTAcousticScenes_2017_DevelopmentSet(AcousticSceneDataset):
"""TUT Acoustic scenes 2017 development dataset
This dataset is used in DCASE2017 - Task 1, Acoustic scene classification
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-acoustic-scenes-2017-development')
super(TUTAcousticScenes_2017_DevelopmentSet, self).__init__(*args, **kwargs)
self.dataset_group = 'acoustic scene'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Acoustic Scenes 2017, development dataset',
'url': None,
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.error.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.error.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.1.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.1.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.2.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.2.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.3.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.3.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.4.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.4.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.5.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.5.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.6.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.6.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.7.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.7.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.8.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.8.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.9.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.9.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400515/files/TUT-acoustic-scenes-2017-development.audio.10.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2017-development.audio.10.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists():
meta_data = collections.OrderedDict()
for fold in range(1, self.crossvalidation_folds + 1):
# Read train files in
fold_data = MetaDataContainer(
filename=os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')).load()
fold_data += MetaDataContainer(
filename=os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt')).load()
for item in fold_data:
if item['file'] not in meta_data:
raw_path, raw_filename = os.path.split(item['file'])
relative_path = self.absolute_to_relative(raw_path)
location_id = raw_filename.split('_')[0]
item['file'] = os.path.join(relative_path, raw_filename)
item['identifier'] = location_id
meta_data[item['file']] = item
self.meta_container.update(meta_data.values())
self.meta_container.save()
else:
self.meta_container.load()
def train(self, fold=0):
"""List of training items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.crossvalidation_data_train:
self.crossvalidation_data_train[fold] = []
if fold > 0:
self.crossvalidation_data_train[fold] = MetaDataContainer(
filename=os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')).load()
for item in self.crossvalidation_data_train[fold]:
item['file'] = self.relative_to_absolute_path(item['file'])
raw_path, raw_filename = os.path.split(item['file'])
location_id = raw_filename.split('_')[0]
item['identifier'] = location_id
else:
self.crossvalidation_data_train[0] = self.meta_container
return self.crossvalidation_data_train[fold]
class TUTAcousticScenes_2017_EvaluationSet(AcousticSceneDataset):
"""TUT Acoustic scenes 2017 evaluation dataset
This dataset is used in DCASE2017 - Task 1, Acoustic scene classification
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-acoustic-scenes-2017-evaluation')
super(TUTAcousticScenes_2017_EvaluationSet, self).__init__(*args, **kwargs)
self.reference_data_present = False
self.dataset_group = 'acoustic scene'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Acoustic Scenes 2017, evaluation dataset',
'url': None,
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists():
meta_data = collections.OrderedDict()
for fold in range(1, self.crossvalidation_folds + 1):
# Read train files in
fold_data = MetaDataContainer(
filename=os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')).load()
for item in fold_data:
if item['file'] not in meta_data:
raw_path, raw_filename = os.path.split(item['file'])
relative_path = self.absolute_to_relative(raw_path)
location_id = raw_filename.split('_')[0]
item['file'] = os.path.join(relative_path, raw_filename)
meta_data[item['file']] = item
self.meta_container.update(meta_data.values())
self.meta_container.save()
else:
self.meta_container.load()
def train(self, fold=0):
return []
def test(self, fold=0):
return []
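# The evaluation set ships without reference annotations (reference_data_present is
# set to False above), so the train/test accessors intentionally return empty lists.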
class TUTRareSoundEvents_2017_DevelopmentSet(SyntheticSoundEventDataset):
"""TUT Acoustic scenes 2017 development dataset
This dataset is used in DCASE2017 - Task 1, Acoustic scene classification
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-rare-sound-events-2017-development')
kwargs['filelisthash_exclude_dirs'] = kwargs.get('filelisthash_exclude_dirs', ['generated_data'])
self.synth_parameters = DottedDict({
'train': {
'seed': 42,
'mixture': {
'fs': 44100,
'bitdepth': 24,
'length_seconds': 30.0,
'anticlipping_factor': 0.2,
},
'event_presence_prob': 0.5,
'mixtures_per_class': 500,
'ebr_list': [-6, 0, 6],
},
'test': {
'seed': 42,
'mixture': {
'fs': 44100,
'bitdepth': 24,
'length_seconds': 30.0,
'anticlipping_factor': 0.2,
},
'event_presence_prob': 0.5,
'mixtures_per_class': 500,
'ebr_list': [-6, 0, 6],
}
})
# Override synth parameters
if kwargs.get('synth_parameters'):
self.synth_parameters.merge(kwargs.get('synth_parameters'))
# Meta filename depends on synth parameters
meta_filename = 'meta_'+self.synth_parameters.get_hash_for_path()+'.txt'
kwargs['meta_filename'] = kwargs.get('meta_filename', os.path.join('generated_data', meta_filename))
# Initialize baseclass
super(TUTRareSoundEvents_2017_DevelopmentSet, self).__init__(*args, **kwargs)
self.dataset_group = 'sound event'
self.dataset_meta = {
'authors': 'Aleksandr Diment, Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Rare Sound Events 2017, development dataset',
'url': None,
'audio_source': 'Synthetic',
'audio_type': 'Natural',
'recording_device_model': 'Unknown',
'microphone_model': 'Unknown',
}
self.crossvalidation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.code.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.code.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.1.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.1.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.2.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.2.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.3.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.3.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.4.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.4.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.5.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.5.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.6.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.6.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.7.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.7.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.8.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_bgs_and_cvsetup.8.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2017/data/TUT-rare-sound-events-2017-development/TUT-rare-sound-events-2017-development.source_data_events.zip',
'local_package': os.path.join(self.local_path, 'TUT-rare-sound-events-2017-development.source_data_events.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
@property
def event_labels(self):
"""List of unique event labels in the meta data.
Returns
-------
labels : list
List of event labels in alphabetical order.
"""
labels = ['babycry', 'glassbreak', 'gunshot']
labels.sort()
return labels
def train(self, fold=0, event_label=None):
"""List of training items.
Parameters
----------
fold : int >= 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
event_label : str
Event label
Default value "None"
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.crossvalidation_data_train:
self.crossvalidation_data_train[fold] = {}
for event_label_ in self.event_labels:
if event_label_ not in self.crossvalidation_data_train[fold]:
self.crossvalidation_data_train[fold][event_label_] = MetaDataContainer()
if fold == 1:
params_hash = self.synth_parameters.get_hash_for_path('train')
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_devtrain_' + params_hash,
'meta'
)
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_devtrain_' + event_label_ + '.csv'
)
self.crossvalidation_data_train[fold][event_label_] = MetaDataContainer(
filename=event_list_filename).load()
elif fold == 0:
params_hash = self.synth_parameters.get_hash_for_path('train')
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_devtrain_' + params_hash,
'meta'
)
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_devtrain_' + event_label_ + '.csv'
)
# Load train files
self.crossvalidation_data_train[0][event_label_] = MetaDataContainer(
filename=event_list_filename).load()
params_hash = self.synth_parameters.get_hash_for_path('test')
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_devtest_' + params_hash,
'meta'
)
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_devtest_' + event_label_ + '.csv'
)
# Load test files
self.crossvalidation_data_train[0][event_label_] += MetaDataContainer(
filename=event_list_filename).load()
for item in self.crossvalidation_data_train[fold][event_label_]:
item['file'] = self.relative_to_absolute_path(item['file'])
if event_label:
return self.crossvalidation_data_train[fold][event_label]
else:
data = MetaDataContainer()
for event_label_ in self.event_labels:
data += self.crossvalidation_data_train[fold][event_label_]
return data
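# Usage sketch for the accessors above (hypothetical; assumes the dataset has
# already been downloaded and the mixtures synthesized, and that the class is
# constructed with a 'data_path' argument as elsewhere in the framework):
#
#     db = TUTRareSoundEvents_2017_DevelopmentSet(data_path='data')
#     babycry_train = db.train(fold=1, event_label='babycry')
#     all_train = db.train(fold=1)  # concatenated over all event labels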
def test(self, fold=0, event_label=None):
"""List of testing items.
Parameters
----------
fold : int >= 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
event_label : str
Event label
Default value "None"
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.crossvalidation_data_test:
self.crossvalidation_data_test[fold] = {}
for event_label_ in self.event_labels:
if event_label_ not in self.crossvalidation_data_test[fold]:
self.crossvalidation_data_test[fold][event_label_] = MetaDataContainer()
if fold == 1:
params_hash = self.synth_parameters.get_hash_for_path('test')
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_devtest_' + params_hash,
'meta'
)
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_devtest_' + event_label_ + '.csv'
)
self.crossvalidation_data_test[fold][event_label_] = MetaDataContainer(
filename=event_list_filename
).load()
elif fold == 0:
params_hash = self.synth_parameters.get_hash_for_path('train')
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_devtrain_' + params_hash,
'meta'
)
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_devtrain_' + event_label_ + '.csv'
)
# Load train files
self.crossvalidation_data_test[0][event_label_] = MetaDataContainer(
filename=event_list_filename
).load()
params_hash = self.synth_parameters.get_hash_for_path('test')
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_devtest_' + params_hash,
'meta'
)
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_devtest_' + event_label_ + '.csv'
)
# Load test files
self.crossvalidation_data_test[0][event_label_] += MetaDataContainer(
filename=event_list_filename
).load()
for item in self.crossvalidation_data_test[fold][event_label_]:
item['file'] = self.relative_to_absolute_path(item['file'])
if event_label:
return self.crossvalidation_data_test[fold][event_label]
else:
data = MetaDataContainer()
for event_label_ in self.event_labels:
data += self.crossvalidation_data_test[fold][event_label_]
return data
@before_and_after_function_wrapper
def synthesize(self):
subset_map = {'train': 'devtrain',
'test': 'devtest'}
background_audio_path = os.path.join(self.local_path, 'data', 'source_data', 'bgs')
event_audio_path = os.path.join(self.local_path, 'data', 'source_data', 'events')
cv_setup_path = os.path.join(self.local_path, 'data', 'source_data', 'cv_setup')
set_progress = tqdm(['train', 'test'],
desc="{0: <25s}".format('Set'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for subset_label in set_progress:
if self.log_system_progress:
self.logger.info(' {title:<15s} [{subset_label:<30s}]'.format(
title='Set ',
subset_label=subset_label)
)
subset_name_on_disk = subset_map[subset_label]
background_meta = ParameterListFile().load(
filename=os.path.join(cv_setup_path, 'bgs_' + subset_name_on_disk + '.yaml')
)
event_meta = ParameterFile().load(
filename=os.path.join(cv_setup_path, 'events_' + subset_name_on_disk + '.yaml')
)
params = self.synth_parameters.get_path(subset_label)
params_hash = self.synth_parameters.get_hash_for_path(subset_label)
r = numpy.random.RandomState(params.get('seed', 42))
mixture_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_' + subset_name_on_disk + '_' + params_hash
)
mixture_audio_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_' + subset_name_on_disk + '_' + params_hash,
'audio'
)
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_' + subset_name_on_disk + '_' + params_hash,
'meta'
)
# Make sure output folders exist
if not os.path.isdir(mixture_path):
os.makedirs(mixture_path)
if not os.path.isdir(mixture_audio_path):
os.makedirs(mixture_audio_path)
if not os.path.isdir(mixture_meta_path):
os.makedirs(mixture_meta_path)
class_progress = tqdm(self.event_labels,
desc="{0: <25s}".format('Class'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for class_label in class_progress:
if self.log_system_progress:
self.logger.info(' {title:<15s} [{class_label:<30s}]'.format(
title='Class ',
class_label=class_label)
)
mixture_recipes_filename = os.path.join(
mixture_meta_path,
'mixture_recipes_' + subset_name_on_disk + '_' + class_label + '.yaml'
)
# Generate recipes if they do not exist yet
if not os.path.isfile(mixture_recipes_filename):
self._generate_mixture_recipes(
params=params,
class_label=class_label,
subset=subset_name_on_disk,
mixture_recipes_filename=mixture_recipes_filename,
background_meta=background_meta,
event_meta=event_meta[class_label],
background_audio_path=background_audio_path,
event_audio_path=event_audio_path,
r=r
)
mixture_meta = ParameterListFile().load(filename=mixture_recipes_filename)
# Generate mixture signals
item_progress = tqdm(mixture_meta,
desc="{0: <25s}".format('Generate mixture'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for item_id, item in enumerate(item_progress):
if self.log_system_progress:
self.logger.info(' {title:<15s} [{item_id:d}/{total:d}] {file:<30s}'.format(
title='Generate mixture ',
item_id=item_id,
total=len(item_progress),
file=item['mixture_audio_filename'])
)
mixture_file = os.path.join(mixture_audio_path, item['mixture_audio_filename'])
if not os.path.isfile(mixture_file):
mixture = self._synthesize_mixture(
mixture_recipe=item,
params=params,
background_audio_path=background_audio_path,
event_audio_path=event_audio_path
)
audio_container = AudioFile(
data=mixture,
fs=params['mixture']['fs']
)
audio_container.save(
filename=mixture_file,
bitdepth=params['mixture']['bitdepth']
)
# Generate event lists
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_' + subset_name_on_disk + '_' + class_label + '.csv'
)
event_list = MetaDataContainer(filename=event_list_filename)
if not event_list.exists():
item_progress = tqdm(mixture_meta,
desc="{0: <25s}".format('Event list'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
for item_id, item in enumerate(item_progress):
if self.log_system_progress:
self.logger.info(' {title:<15s} [{item_id:d}/{total:d}] {file:<30s}'.format(
title='Event list ',
item_id=item_id,
total=len(item_progress),
file=item['mixture_audio_filename'])
)
event_list_item = {
'file': os.path.join(
'generated_data',
'mixtures_' + subset_name_on_disk + '_' + params_hash,
'audio',
item['mixture_audio_filename']
),
}
if item['event_present']:
event_list_item['event_label'] = item['event_class']
event_list_item['event_onset'] = float(item['event_start_in_mixture_seconds'])
event_list_item['event_offset'] = float(item['event_start_in_mixture_seconds'] + item['event_length_seconds'])
event_list.append(MetaDataItem(event_list_item))
event_list.save()
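# Each row of the saved event list describes one mixture: the file path and,
# when an event is present, its label plus onset/offset in seconds (e.g.,
# hypothetically, onset 2.51 and offset 3.12 for a 'babycry' event); rows for
# event-free mixtures carry only the file path.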
mixture_parameters = os.path.join(mixture_path, 'parameters.yaml')
# Save parameters
if not os.path.isfile(mixture_parameters):
ParameterFile(params).save(filename=mixture_parameters)
if not self.meta_container.exists():
# Collect meta data
meta_data = MetaDataContainer()
for class_label in self.event_labels:
for subset_label, subset_name_on_disk in iteritems(subset_map):
params_hash = self.synth_parameters.get_hash_for_path(subset_label)
mixture_meta_path = os.path.join(
self.local_path,
'generated_data',
'mixtures_' + subset_name_on_disk + '_' + params_hash,
'meta'
)
event_list_filename = os.path.join(
mixture_meta_path,
'event_list_' + subset_name_on_disk + '_' + class_label + '.csv'
)
meta_data += MetaDataContainer(filename=event_list_filename).load()
self.meta_container.update(meta_data)
self.meta_container.save()
def _generate_mixture_recipes(self, params, subset, class_label, mixture_recipes_filename, background_meta,
event_meta, background_audio_path, event_audio_path, r):
try:
from itertools import izip as zip
except ImportError:  # Python 3: the built-in zip is already an iterator
pass
def get_event_amplitude_scaling_factor(signal, noise, target_snr_db):
"""Get amplitude scaling factor
Different lengths for signal and noise allowed: longer noise assumed to be stationary enough,
and rmse is calculated over the whole signal
Parameters
----------
signal : numpy.ndarray
noise : numpy.ndarray
target_snr_db : float
Returns
-------
float > 0.0
"""
def rmse(y):
"""RMSE"""
return numpy.sqrt(numpy.mean(numpy.abs(y) ** 2, axis=0, keepdims=False))
original_sn_rmse_ratio = rmse(signal) / rmse(noise)
target_sn_rmse_ratio = 10 ** (target_snr_db / float(20))
signal_scaling_factor = target_sn_rmse_ratio / original_sn_rmse_ratio
return signal_scaling_factor
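# Worked example of the dB-to-amplitude conversion above: for
# target_snr_db = 6 the target RMSE ratio is 10 ** (6 / 20.) ~= 1.995, so if
# signal and noise currently have equal RMSE, the event is scaled by roughly 2.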
# Internal variables
fs = float(params.get('mixture').get('fs', 44100))
current_class_events = []
# Inject fields into the meta data
for event in event_meta:
event['classname'] = class_label
event['audio_filepath'] = os.path.join(class_label, event['audio_filename'])
event['length_seconds'] = numpy.diff(event['segment'])[0]
current_class_events.append(event)
# Randomly sample the events and backgrounds used in the mixtures
events = r.choice(current_class_events,
int(round(params.get('mixtures_per_class') * params.get('event_presence_prob'))))
bgs = r.choice(background_meta, params.get('mixtures_per_class'))
# Event presence flags
event_presence_flags = (numpy.hstack((numpy.ones(len(events)), numpy.zeros(len(bgs) - len(events))))).astype(bool)
event_presence_flags = r.permutation(event_presence_flags)
# Event instance IDs; NaN marks "no event", actual event ids are filled in below where needed
event_instance_ids = numpy.nan * numpy.ones(len(bgs))
event_instance_ids[event_presence_flags] = numpy.arange(len(events))
# Randomize event position inside background
for event in events:
event['offset_seconds'] = (params.get('mixture').get('length_seconds') - event['length_seconds']) * r.rand()
# Get offsets for all mixtures; if no event is present, use NaN
event_offsets_seconds = numpy.nan * numpy.ones(len(bgs))
event_offsets_seconds[event_presence_flags] = [event['offset_seconds'] for event in events]
# Double-check that we didn't shuffle things wrongly: check that the offset never exceeds bg_len-event_len
checker = [offset + events[int(event_instance_id)]['length_seconds'] for offset, event_instance_id in
zip(event_offsets_seconds[event_presence_flags], event_instance_ids[event_presence_flags])]
assert numpy.max(numpy.array(checker)) < params.get('mixture').get('length_seconds')
# Target EBRs
target_ebrs = -numpy.inf * numpy.ones(len(bgs))
target_ebrs[event_presence_flags] = r.choice(params.get('ebr_list'), size=numpy.sum(event_presence_flags))
# For the recipes we provide amplitude scaling factors instead of SNRs, since the latter are
# more ambiguous: go through the files, measure levels, and calculate the scaling factors
mixture_recipes = ParameterListFile()
for mixture_id, (bg, event_presence_flag, event_start_in_mixture_seconds, ebr, event_instance_id) in tqdm(
enumerate(zip(bgs, event_presence_flags, event_offsets_seconds, target_ebrs, event_instance_ids)),
desc="{0: <25s}".format('Generate recipe'),
file=sys.stdout,
leave=False,
total=len(bgs),
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar):
# Read the bgs and events, measure their energies, find amplitude scaling factors
mixture_recipe = {
'bg_path': bg['filepath'],
'bg_classname': bg['classname'],
'event_present': bool(event_presence_flag),
'ebr': float(ebr)
}
if event_presence_flag:
# We have an event assigned
assert not numpy.isnan(event_instance_id)
# Load background and event audio in
bg_audio, fs_bg = AudioFile(fs=params.get('mixture').get('fs')).load(
filename=os.path.join(background_audio_path, bg['filepath'])
)
event_audio, fs_event = AudioFile(fs=params.get('mixture').get('fs')).load(
filename=os.path.join(event_audio_path, events[int(event_instance_id)]['audio_filepath'])
)
assert fs_bg == fs_event, 'Fs mismatch! Resampling expected to have taken place already'
# Segment onset and offset in samples
segment_start_samples = int(events[int(event_instance_id)]['segment'][0] * fs)
segment_end_samples = int(events[int(event_instance_id)]['segment'][1] * fs)
# Cut event audio
event_audio = event_audio[segment_start_samples:segment_end_samples]
# Calculate the background level only over the part where the event will be placed
event_start_samples = int(event_start_in_mixture_seconds * fs)
eventful_part_of_bg = bg_audio[event_start_samples:event_start_samples + len(event_audio)]
if eventful_part_of_bg.shape[0] == 0:
message = '{name}: Background segment having an event has zero length.'.format(
name=self.__class__.__name__
)
self.logger.exception(message)
raise ValueError(message)
scaling_factor = get_event_amplitude_scaling_factor(event_audio, eventful_part_of_bg, target_snr_db=ebr)
# Store information
mixture_recipe['event_path'] = events[int(event_instance_id)]['audio_filepath']
mixture_recipe['event_class'] = events[int(event_instance_id)]['classname']
mixture_recipe['event_start_in_mixture_seconds'] = float(event_start_in_mixture_seconds)
mixture_recipe['event_length_seconds'] = float(events[int(event_instance_id)]['length_seconds'])
mixture_recipe['scaling_factor'] = float(scaling_factor)
mixture_recipe['segment_start_seconds'] = events[int(event_instance_id)]['segment'][0]
mixture_recipe['segment_end_seconds'] = events[int(event_instance_id)]['segment'][1]
# Generate mixture filename
mixing_param_hash = hashlib.md5(yaml.dump(mixture_recipe).encode('utf-8')).hexdigest()
mixture_recipe['mixture_audio_filename'] = 'mixture_{subset}_{label}_{id:03d}_{hash}.{ext}'.format(
subset=subset,
label=class_label,
id=mixture_id,
hash=mixing_param_hash,
ext=self.default_audio_extension
)
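# Yields names such as 'mixture_devtrain_babycry_042_<md5hash>.<ext>'
# (subset/label/id shown are illustrative; the hash is the recipe digest computed above)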
# Generate mixture annotation
if event_presence_flag:
mixture_recipe['annotation_string'] = \
mixture_recipe['mixture_audio_filename'] + '\t' + \
"{0:.14f}".format(mixture_recipe['event_start_in_mixture_seconds']) + '\t' + \
"{0:.14f}".format(mixture_recipe['event_start_in_mixture_seconds'] + mixture_recipe['event_length_seconds']) + '\t' + \
mixture_recipe['event_class']
else:
mixture_recipe['annotation_string'] = mixture_recipe['mixture_audio_filename'] + '\t' + 'None' + '\t0\t30'
# Store mixture recipe
mixture_recipes.append(mixture_recipe)
# Save mixture recipe
mixture_recipes.save(filename=mixture_recipes_filename)
def _synthesize_mixture(self, mixture_recipe, params, background_audio_path, event_audio_path):
background_audiofile = os.path.join(background_audio_path, mixture_recipe['bg_path'])
# Load background audio
bg_audio_data, fs_bg = AudioFile().load(filename=background_audiofile,
fs=params['mixture']['fs'],
mono=True)
if mixture_recipe['event_present']:
event_audiofile = os.path.join(event_audio_path, mixture_recipe['event_path'])
# Load event audio
event_audio_data, fs_event = AudioFile().load(filename=event_audiofile,
fs=params['mixture']['fs'],
mono=True)
if fs_bg != fs_event:
message = '{name}: Sampling frequency mismatch. Material should be resampled.'.format(
name=self.__class__.__name__
)
self.logger.exception(message)
raise ValueError(message)
# Slice event audio
segment_start_samples = int(mixture_recipe['segment_start_seconds'] * params['mixture']['fs'])
segment_end_samples = int(mixture_recipe['segment_end_seconds'] * params['mixture']['fs'])
event_audio_data = event_audio_data[segment_start_samples:segment_end_samples]
event_start_in_mixture_samples = int(mixture_recipe['event_start_in_mixture_seconds'] * params['mixture']['fs'])
scaling_factor = mixture_recipe['scaling_factor']
# Mix event into background audio
mixture = self._mix(bg_audio_data=bg_audio_data,
event_audio_data=event_audio_data,
event_start_in_mixture_samples=event_start_in_mixture_samples,
scaling_factor=scaling_factor,
magic_anticlipping_factor=params['mixture']['anticlipping_factor'])
else:
mixture = params['mixture']['anticlipping_factor'] * bg_audio_data
return mixture
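# Note: the anticlipping factor is applied in both branches above, so event-free
# mixtures end up at the same background level as eventful ones; its value comes
# from params['mixture']['anticlipping_factor'].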
def _mix(self, bg_audio_data, event_audio_data, event_start_in_mixture_samples, scaling_factor, magic_anticlipping_factor):
"""Mix numpy arrays of background and event audio (mono, non-matching lengths supported, sampling frequency
better be the same, no operation in terms of seconds is performed though)
Parameters
----------
bg_audio_data : numpy.array
event_audio_data : numpy.array
event_start_in_mixture_samples : float
scaling_factor : float
magic_anticlipping_factor : float
Returns
-------
numpy.array
"""
# Store current event audio max value
event_audio_original_max = numpy.max(numpy.abs(event_audio_data))
# Scale event signal to reach the target EBR
event_audio_data *= scaling_factor
# Check that the offset is not too long
longest_possible_offset = len(bg_audio_data) - len(event_audio_data)
if event_start_in_mixture_samples > longest_possible_offset:
message = '{name}: Wrongly generated event offset: event tries to go outside the boundaries of the bg.'.format(name=self.__class__.__name__)
self.logger.exception(message)
raise AssertionError(message)
# Measure how much to pad from the right
tail_length = len(bg_audio_data) - len(event_audio_data) - event_start_in_mixture_samples
# Pad zeros at the beginning of event signal
padded_event = numpy.pad(event_audio_data,
pad_width=(event_start_in_mixture_samples, tail_length),
mode='constant',
constant_values=0)
if not len(padded_event) == len(bg_audio_data):
message = '{name}: Mixing yielded a signal of different length than bg! Should not happen.'.format(
name=self.__class__.__name__
)
self.logger.exception(message)
raise AssertionError(message)
mixture = magic_anticlipping_factor * (padded_event + bg_audio_data)
# Make sure the mixing did not introduce clipping; normalise if it did
if numpy.max(numpy.abs(mixture)) >= 1:
normalisation_factor = 1 / float(numpy.max(numpy.abs(mixture)))
print('Attention! Had to normalise the mixture by [{factor}]'.format(factor=normalisation_factor))
print('I.e. bg max: {bg_max:2.4f}, event max: {event_max:2.4f}, sum max: {sum_max:2.4f}'.format(
bg_max=numpy.max(numpy.abs(bg_audio_data)),
event_max=numpy.max(numpy.abs(padded_event)),
sum_max=numpy.max(numpy.abs(mixture)))
)
print('The scaling factor for the event was [{factor}]'.format(factor=scaling_factor))
print('The event before scaling was max [{max}]'.format(max=event_audio_original_max))
mixture /= numpy.max(numpy.abs(mixture))
return mixture
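# Minimal numpy sketch of the padding logic used in _mix above (hypothetical
# lengths, illustration only):
#
#     bg = numpy.zeros(10)
#     ev = 0.5 * numpy.ones(3)
#     start = 4
#     tail = len(bg) - len(ev) - start          # -> 3 samples of right padding
#     padded = numpy.pad(ev, (start, tail), mode='constant', constant_values=0)
#     assert len(padded) == len(bg)             # now aligned for summation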
class TUTRareSoundEvents_2017_EvaluationSet(SyntheticSoundEventDataset):
"""TUT Acoustic scenes 2017 evaluation dataset
This dataset is used in DCASE2017 - Task 1, Acoustic scene classification
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-rare-sound-events-2017-evaluation')
kwargs['filelisthash_exclude_dirs'] = kwargs.get('filelisthash_exclude_dirs', ['generated_data'])
# Initialize baseclass
super(TUTRareSoundEvents_2017_EvaluationSet, self).__init__(*args, **kwargs)
self.reference_data_present = True
self.dataset_group = 'sound event'
self.dataset_meta = {
'authors': 'Aleksandr Diment, Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Rare Sound Events 2017, evaluation dataset',
'url': None,
'audio_source': 'Synthetic',
'audio_type': 'Natural',
'recording_device_model': 'Unknown',
'microphone_model': 'Unknown',
}
self.crossvalidation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
@property
def event_labels(self):
"""List of unique event labels in the meta data.
Returns
-------
labels : list
List of event labels in alphabetical order.
"""
labels = ['babycry', 'glassbreak', 'gunshot']
labels.sort()
return labels
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists():
meta_data = MetaDataContainer()
for event_label_ in self.event_labels:
event_list_filename = os.path.join(
self.local_path,
'meta',
'event_list_evaltest_' + event_label_ + '.csv'
)
if os.path.isfile(event_list_filename):
# Load test event list
current_meta = MetaDataContainer(filename=event_list_filename).load()
# Fix path
for item in current_meta:
item['file'] = os.path.join('audio', item['file'])
meta_data += current_meta
else:
# Recover item list from audio filenames
current_meta = MetaDataContainer()
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
base_filename, file_extension = os.path.splitext(raw_filename)
if event_label_ in base_filename:
current_meta.append(MetaDataItem({'file': os.path.join(relative_path, raw_filename)}))
meta_data += current_meta
self.meta_container.update(meta_data)
self.meta_container.save()
def train(self, fold=0, event_label=None):
return []
def test(self, fold=0, event_label=None):
"""List of testing items.
Parameters
----------
fold : int >= 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
event_label : str
Event label
Default value "None"
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.crossvalidation_data_test:
self.crossvalidation_data_test[fold] = {}
for event_label_ in self.event_labels:
if event_label_ not in self.crossvalidation_data_test[fold]:
self.crossvalidation_data_test[fold][event_label_] = MetaDataContainer()
if fold == 0:
event_list_filename = os.path.join(
self.local_path,
'meta',
'event_list_evaltest_' + event_label_ + '.csv'
)
if os.path.isfile(event_list_filename):
# Load test files
self.crossvalidation_data_test[0][event_label_] = MetaDataContainer(
filename=event_list_filename).load()
# Fix file paths
for item in self.crossvalidation_data_test[fold][event_label_]:
item['file'] = os.path.join('audio', item['file'])
else:
# Recover item list from the audio filenames
meta = MetaDataContainer()
for item in self.meta:
if event_label_ in item.file:
meta.append(item)
self.crossvalidation_data_test[0][event_label_] = meta
# Change file paths to absolute
for item in self.crossvalidation_data_test[fold][event_label_]:
item['file'] = self.relative_to_absolute_path(item['file'])
if event_label:
return self.crossvalidation_data_test[fold][event_label]
else:
data = MetaDataContainer()
for event_label_ in self.event_labels:
data += self.crossvalidation_data_test[fold][event_label_]
return data
class TUTSoundEvents_2017_DevelopmentSet(SoundEventDataset):
"""TUT Sound events 2017 development dataset
This dataset is used in DCASE2017 - Task 3, Sound event detection in real life audio
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-sound-events-2017-development')
super(TUTSoundEvents_2017_DevelopmentSet, self).__init__(*args, **kwargs)
self.dataset_group = 'sound event'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Sound Events 2017, development dataset',
'url': 'https://zenodo.org/record/400516',
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'street'),
},
{
'remote_package': 'https://zenodo.org/record/400516/files/TUT-sound-events-2017-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2017-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400516/files/TUT-sound-events-2017-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2017-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400516/files/TUT-sound-events-2017-development.audio.1.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2017-development.audio.1.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/400516/files/TUT-sound-events-2017-development.audio.2.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2017-development.audio.2.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists():
meta_data = MetaDataContainer()
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(
self.local_path,
relative_path.replace('audio', 'meta'),
base_filename + '.ann'
)
data = MetaDataContainer(filename=annotation_filename).load()
for item in data:
item['file'] = os.path.join(relative_path, raw_filename)
item['scene_label'] = scene_label
item['identifier'] = os.path.splitext(raw_filename)[0]
item['source_label'] = 'mixture'
meta_data += data
self.meta_container.update(meta_data)
self.meta_container.save()
else:
self.meta_container.load()
def train(self, fold=0, scene_label=None):
"""List of training items.
Parameters
----------
fold : int >= 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
scene_label : str
Scene label
Default value "None"
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.crossvalidation_data_train:
self.crossvalidation_data_train[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.crossvalidation_data_train[fold]:
self.crossvalidation_data_train[fold][scene_label_] = MetaDataContainer()
if fold > 0:
self.crossvalidation_data_train[fold][scene_label_] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(
setup_part='train',
fold=fold, scene_label=scene_label_)).load()
else:
self.crossvalidation_data_train[0][scene_label_] = self.meta_container.filter(
scene_label=scene_label_
)
for item in self.crossvalidation_data_train[fold][scene_label_]:
item['file'] = self.relative_to_absolute_path(item['file'])
raw_path, raw_filename = os.path.split(item['file'])
item['identifier'] = os.path.splitext(raw_filename)[0]
item['source_label'] = 'mixture'
if scene_label:
return self.crossvalidation_data_train[fold][scene_label]
else:
data = MetaDataContainer()
for scene_label_ in self.scene_labels:
data += self.crossvalidation_data_train[fold][scene_label_]
return data
class TUTSoundEvents_2017_EvaluationSet(SoundEventDataset):
"""TUT Sound events 2017 evaluation dataset
This dataset is used in DCASE2017 - Task 3, Sound event detection in real life audio
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-sound-events-2017-evaluation')
super(TUTSoundEvents_2017_EvaluationSet, self).__init__(*args, **kwargs)
self.reference_data_present = True
self.dataset_group = 'sound event'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Sound Events 2017, evaluation dataset',
'url': None,
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'street'),
},
]
@property
def scene_labels(self):
labels = ['street']
labels.sort()
return labels
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists():
meta_data = MetaDataContainer()
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(self.local_path, relative_path.replace('audio', 'meta'),
base_filename + '.ann')
data = MetaDataContainer(filename=annotation_filename).load()
for item in data:
item['file'] = os.path.join(relative_path, raw_filename)
item['scene_label'] = scene_label
item['identifier'] = os.path.splitext(raw_filename)[0]
item['source_label'] = 'mixture'
meta_data += data
meta_data.save(filename=self.meta_container.filename)
else:
self.meta_container.load()
def train(self, fold=0, scene_label=None):
return []
def test(self, fold=0, scene_label=None):
if fold not in self.crossvalidation_data_test:
self.crossvalidation_data_test[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.crossvalidation_data_test[fold]:
self.crossvalidation_data_test[fold][scene_label_] = MetaDataContainer()
# The same evaluation setup file resolves both the folded and unfolded cases
self.crossvalidation_data_test[fold][scene_label_] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(
setup_part='test', fold=fold, scene_label=scene_label_)
).load()
if scene_label:
return self.crossvalidation_data_test[fold][scene_label]
else:
data = MetaDataContainer()
for scene_label_ in self.scene_labels:
data += self.crossvalidation_data_test[fold][scene_label_]
return data
class DCASE2017_Task4tagging_DevelopmentSet(SoundEventDataset):
"""DCASE 2017 Large-scale weakly supervised sound event detection for smart cars
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'DCASE2017-task4-development')
super(DCASE2017_Task4tagging_DevelopmentSet, self).__init__(*args, **kwargs)
self.dataset_group = 'audio tagging'
self.dataset_meta = {
'authors': 'Benjamin Elizalde, Emmanuel Vincent, Bhiksha Raj',
'name_remote': 'Task 4 Large-scale weakly supervised sound event detection for smart cars',
'url': 'https://github.com/ankitshah009/Task-4-Large-scale-weakly-supervised-sound-event-detection-for-smart-cars',
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': None,
'microphone_model': None,
}
self.crossvalidation_folds = 1
self.default_audio_extension = 'flac'
github_url = 'https://raw.githubusercontent.com/ankitshah009/Task-4-Large-scale-weakly-supervised-sound-event-detection-for-smart-cars/master/'
self.package_list = [
{
'remote_package': github_url + 'training_set.csv',
'local_package': os.path.join(self.local_path, 'training_set.csv'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': github_url + 'testing_set.csv',
'local_package': os.path.join(self.local_path, 'testing_set.csv'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': github_url + 'groundtruth_weak_label_training_set.csv',
'local_package': os.path.join(self.local_path, 'groundtruth_weak_label_training_set.csv'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': github_url + 'groundtruth_weak_label_testing_set.csv',
'local_package': os.path.join(self.local_path, 'groundtruth_weak_label_testing_set.csv'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': github_url + 'APACHE_LICENSE.txt',
'local_package': os.path.join(self.local_path, 'APACHE_LICENSE.txt'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': github_url + 'README.txt',
'local_package': os.path.join(self.local_path, 'README.txt'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': github_url + 'sound_event_list_17_classes.txt',
'local_package': os.path.join(self.local_path, 'sound_event_list_17_classes.txt'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': github_url + 'groundtruth_strong_label_testing_set.csv',
'local_package': os.path.join(self.local_path, 'groundtruth_strong_label_testing_set.csv'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
@property
def scene_labels(self):
labels = ['youtube']
labels.sort()
return labels
def _after_extract(self, to_return=None):
import csv
try:
from httplib import BadStatusLine
except ImportError:
# Python 3
from http.client import BadStatusLine
from dcase_framework.files import AudioFile
def progress_hook(t):
"""
Wraps tqdm instance. Don't forget to close() or __exit__()
the tqdm instance once you're done with it (easiest using `with` syntax).
"""
def inner(total, recvd, ratio, rate, eta):
t.total = int(total / 1024.0)
t.update(int(recvd / 1024.0))
return inner
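# Hypothetical wiring of the hook, matching pafy's callback signature
# (total, recvd, ratio, rate, eta) consumed by inner() above:
#
#     with tqdm(unit='B', unit_scale=True) as t:
#         youtube_audio.download(filepath=tmp_file, callback=progress_hook(t))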
# Collect file ids
files = []
with open(os.path.join(self.local_path, 'testing_set.csv'), 'rb') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
for row in csv_reader:
files.append({
'query_id': row[0],
'segment_start': row[1],
'segment_end': row[2]}
)
with open(os.path.join(self.local_path, 'training_set.csv'), 'rb') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
for row in csv_reader:
files.append({
'query_id': row[0],
'segment_start': row[1],
'segment_end': row[2]}
)
# Make sure audio directory exists
if not os.path.isdir(os.path.join(self.local_path, 'audio')):
os.makedirs(os.path.join(self.local_path, 'audio'))
file_progress = tqdm(files,
desc="{0: <25s}".format('Files'),
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)
non_existing_videos = []
# Check that audio files exist
for file_data in file_progress:
audio_filename = os.path.join(self.local_path,
'audio',
'Y{query_id}_{segment_start}_{segment_end}.{extension}'.format(
query_id=file_data['query_id'],
segment_start=file_data['segment_start'],
segment_end=file_data['segment_end'],
extension=self.default_audio_extension
)
)
# Download segment if it does not exist
if not os.path.isfile(audio_filename):
import pafy
try:
# Access youtube video and get best quality audio stream
youtube_audio = pafy.new(
url='https://www.youtube.com/watch?v={query_id}'.format(query_id=file_data['query_id']),
basic=False,
gdata=False,
size=False
).getbestaudio()
# Get temp file
tmp_file = os.path.join(self.local_path, 'tmp_file.{extension}'.format(
extension=youtube_audio.extension)
)
# Create download progress bar
download_progress_bar = tqdm(
desc="{0: <25s}".format('Download youtube item '),
file=sys.stdout,
unit='B',
unit_scale=True,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar
)
# Download audio
youtube_audio.download(
filepath=tmp_file,
quiet=True,
callback=progress_hook(download_progress_bar)
)
# Close progress bar
download_progress_bar.close()
# Create audio processing progress bar
audio_processing_progress_bar = tqdm(
desc="{0: <25s}".format('Processing '),
initial=0,
total=4,
file=sys.stdout,
leave=False,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar
)
# Load audio
audio_file = AudioFile()
audio_file.load(
filename=tmp_file,
mono=True,
fs=44100,
res_type='kaiser_best',
start=float(file_data['segment_start']),
stop=float(file_data['segment_end'])
)
audio_processing_progress_bar.update(1)
# Save the segment
audio_file.save(
filename=audio_filename,
bitdepth=16
)
audio_processing_progress_bar.update(3)
# Remove temporary file
os.remove(tmp_file)
audio_processing_progress_bar.close()
except (IOError, BadStatusLine) as e:
# Store files with errors
file_data['error'] = str(e)
non_existing_videos.append(file_data)
except (KeyboardInterrupt, SystemExit):
# Remove temporary file and current audio file.
os.remove(tmp_file)
os.remove(audio_filename)
raise
log_filename = os.path.join(self.local_path, 'item_access_error.log')
with open(log_filename, 'wb') as csv_file:
csv_writer = csv.writer(csv_file, delimiter=',')
for item in non_existing_videos:
csv_writer.writerow(
(item['query_id'], item['error'].replace('\n', ' '))
)
# Make sure evaluation_setup directory exists
if not os.path.isdir(os.path.join(self.local_path, self.evaluation_setup_folder)):
os.makedirs(os.path.join(self.local_path, self.evaluation_setup_folder))
# Check that evaluation setup exists
evaluation_setup_exists = True
train_filename = self._get_evaluation_setup_filename(
setup_part='train',
fold=1,
scene_label='youtube',
file_extension='txt'
)
test_filename = self._get_evaluation_setup_filename(
setup_part='test',
fold=1,
scene_label='youtube',
file_extension='txt'
)
evaluate_filename = self._get_evaluation_setup_filename(
setup_part='evaluate',
fold=1,
scene_label='youtube',
file_extension='txt'
)
if not os.path.isfile(train_filename) or not os.path.isfile(test_filename) or not os.path.isfile(
evaluate_filename):
evaluation_setup_exists = False
# Evaluation setup was not found, generate it
if not evaluation_setup_exists:
fold = 1
train_meta = MetaDataContainer()
for item in MetaDataContainer().load(
os.path.join(self.local_path, 'groundtruth_weak_label_training_set.csv')):
if not item['file'].endswith('flac'):
item['file'] = os.path.join(
'audio',
'Y' + os.path.splitext(item['file'])[0] + '.' + self.default_audio_extension
)
# Set scene label
item['scene_label'] = 'youtube'
# Translate event onset and offset into weak labels (keep duration, anchor at zero)
item['event_offset'] -= item['event_onset']
item['event_onset'] -= item['event_onset']
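# E.g. a strong annotation (onset=2.5, offset=4.0) becomes (onset=0.0,
# offset=1.5): only the event duration survives, anchored at zero, since a
# clip-level (weak) label needs no position within the clip.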
# Only collect items which exist
if os.path.isfile(os.path.join(self.local_path, item['file'])):
train_meta.append(item)
train_meta.save(filename=self._get_evaluation_setup_filename(
setup_part='train',
fold=fold,
scene_label='youtube',
file_extension='txt')
)
evaluate_meta = MetaDataContainer()
for item in MetaDataContainer().load(
os.path.join(self.local_path, 'groundtruth_strong_label_testing_set.csv')):
if not item['file'].endswith('flac'):
item['file'] = os.path.join(
'audio',
'Y' + os.path.splitext(item['file'])[0] + '.' + self.default_audio_extension
)
# Set scene label
item['scene_label'] = 'youtube'
# Only collect items which exist
if os.path.isfile(os.path.join(self.local_path, item['file'])):
evaluate_meta.append(item)
evaluate_meta.save(filename=self._get_evaluation_setup_filename(
setup_part='evaluate',
fold=fold,
scene_label='youtube',
file_extension='txt')
)
test_meta = MetaDataContainer()
for item in evaluate_meta:
test_meta.append(MetaDataItem({'file': item['file']}))
test_meta.save(filename=self._get_evaluation_setup_filename(
setup_part='test',
fold=fold,
scene_label='youtube',
file_extension='txt')
)
if not self.meta_container.exists():
fold = 1
meta_data = MetaDataContainer()
meta_data += MetaDataContainer().load(self._get_evaluation_setup_filename(
setup_part='train',
fold=fold,
scene_label='youtube',
file_extension='txt')
)
meta_data += MetaDataContainer().load(self._get_evaluation_setup_filename(
setup_part='evaluate',
fold=fold,
scene_label='youtube',
file_extension='txt')
)
self.meta_container.update(meta_data)
self.meta_container.save()
else:
self.meta_container.load()
# =====================================================
# DCASE 2016
# =====================================================
class TUTAcousticScenes_2016_DevelopmentSet(AcousticSceneDataset):
"""TUT Acoustic scenes 2016 development dataset
This dataset is used in DCASE2016 - Task 1, Acoustic scene classification
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-acoustic-scenes-2016-development')
super(TUTAcousticScenes_2016_DevelopmentSet, self).__init__(*args, **kwargs)
self.dataset_group = 'acoustic scene'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Acoustic Scenes 2016, development dataset',
'url': 'https://zenodo.org/record/45739',
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.error.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.error.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.1.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.1.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.2.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.2.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.3.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.3.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.4.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.4.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.5.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.5.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.6.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.6.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.7.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.7.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.8.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.8.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists():
meta_data = {}
for fold in range(1, self.crossvalidation_folds + 1):
# Read train files in
fold_data = MetaDataContainer(
filename=os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')).load()
fold_data += MetaDataContainer(
filename=os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt')).load()
for item in fold_data:
if item['file'] not in meta_data:
raw_path, raw_filename = os.path.split(item['file'])
relative_path = self.absolute_to_relative(raw_path)
location_id = raw_filename.split('_')[0]
item['file'] = os.path.join(relative_path, raw_filename)
item['identifier'] = location_id
meta_data[item['file']] = item
self.meta_container.update(meta_data.values())
self.meta_container.save()
def train(self, fold=0):
"""List of training items.
Parameters
----------
fold : int >= 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.crossvalidation_data_train:
self.crossvalidation_data_train[fold] = []
if fold > 0:
self.crossvalidation_data_train[fold] = MetaDataContainer(
filename=os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')).load()
for item in self.crossvalidation_data_train[fold]:
item['file'] = self.relative_to_absolute_path(item['file'])
raw_path, raw_filename = os.path.split(item['file'])
location_id = raw_filename.split('_')[0]
item['identifier'] = location_id
else:
self.crossvalidation_data_train[0] = self.meta_container
return self.crossvalidation_data_train[fold]
class TUTAcousticScenes_2016_EvaluationSet(AcousticSceneDataset):
"""TUT Acoustic scenes 2016 evaluation dataset
This dataset is used in DCASE2016 - Task 1, Acoustic scene classification
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-acoustic-scenes-2016-evaluation')
super(TUTAcousticScenes_2016_EvaluationSet, self).__init__(*args, **kwargs)
self.dataset_group = 'acoustic scene'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Acoustic Scenes 2016, evaluation dataset',
'url': 'https://zenodo.org/record/165995',
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/165995/files/TUT-acoustic-scenes-2016-evaluation.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-evaluation.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/165995/files/TUT-acoustic-scenes-2016-evaluation.audio.1.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-evaluation.audio.1.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/165995/files/TUT-acoustic-scenes-2016-evaluation.audio.2.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-evaluation.audio.2.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/165995/files/TUT-acoustic-scenes-2016-evaluation.audio.3.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-evaluation.audio.3.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/165995/files/TUT-acoustic-scenes-2016-evaluation.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-evaluation.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
eval_file = MetaDataContainer(filename=os.path.join(self.evaluation_setup_path, 'evaluate.txt'))
if not self.meta_container.exists() and eval_file.exists():
eval_data = eval_file.load()
meta_data = {}
for item in eval_data:
if item['file'] not in meta_data:
raw_path, raw_filename = os.path.split(item['file'])
relative_path = self.absolute_to_relative(raw_path)
item['file'] = os.path.join(relative_path, raw_filename)
meta_data[item['file']] = item
self.meta_container.update(meta_data.values())
self.meta_container.save()
def train(self, fold=0):
return []
def test(self, fold=0):
"""List of testing items.
Parameters
----------
fold : int >= 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.crossvalidation_data_test:
self.crossvalidation_data_test[fold] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.crossvalidation_data_test[fold].append({'file': self.relative_to_absolute_path(row[0])})
else:
data = []
files = []
for item in self.audio_files:
if self.relative_to_absolute_path(item) not in files:
data.append({'file': self.relative_to_absolute_path(item)})
files.append(self.relative_to_absolute_path(item))
self.crossvalidation_data_test[fold] = data
return self.crossvalidation_data_test[fold]
class TUTSoundEvents_2016_DevelopmentSet(SoundEventDataset):
"""TUT Sound events 2016 development dataset
This dataset is used in DCASE2016 - Task 3, Sound event detection in real life audio
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-sound-events-2016-development')
super(TUTSoundEvents_2016_DevelopmentSet, self).__init__(*args, **kwargs)
self.dataset_group = 'sound event'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Sound Events 2016, development dataset',
'url': 'https://zenodo.org/record/45759',
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'residential_area'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'home'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.audio.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.audio.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists():
meta_data = MetaDataContainer()
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(
self.local_path,
relative_path.replace('audio', 'meta'),
base_filename + '.ann'
)
data = MetaDataContainer(filename=annotation_filename).load()
for item in data:
item['file'] = os.path.join(relative_path, raw_filename)
item['scene_label'] = scene_label
item['identifier'] = os.path.splitext(raw_filename)[0]
item['source_label'] = 'mixture'
meta_data += data
meta_data.save(filename=self.meta_container.filename)
def train(self, fold=0, scene_label=None):
"""List of training items.
Parameters
----------
fold : int >= 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
scene_label : str
Scene label
Default value "None"
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.crossvalidation_data_train:
self.crossvalidation_data_train[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.crossvalidation_data_train[fold]:
self.crossvalidation_data_train[fold][scene_label_] = MetaDataContainer()
if fold > 0:
self.crossvalidation_data_train[fold][scene_label_] = MetaDataContainer(
filename=self._get_evaluation_setup_filename(
setup_part='train', fold=fold, scene_label=scene_label_)).load()
else:
self.crossvalidation_data_train[0][scene_label_] = self.meta_container.filter(
scene_label=scene_label_
)
for item in self.crossvalidation_data_train[fold][scene_label_]:
item['file'] = self.relative_to_absolute_path(item['file'])
raw_path, raw_filename = os.path.split(item['file'])
item['identifier'] = os.path.splitext(raw_filename)[0]
item['source_label'] = 'mixture'
if scene_label:
return self.crossvalidation_data_train[fold][scene_label]
else:
data = MetaDataContainer()
for scene_label_ in self.scene_labels:
data += self.crossvalidation_data_train[fold][scene_label_]
return data
class TUTSoundEvents_2016_EvaluationSet(SoundEventDataset):
"""TUT Sound events 2016 evaluation dataset
This dataset is used in DCASE2016 - Task 3, Sound event detection in real life audio
"""
def __init__(self, *args, **kwargs):
kwargs['storage_name'] = kwargs.get('storage_name', 'TUT-sound-events-2016-evaluation')
super(TUTSoundEvents_2016_EvaluationSet, self).__init__(*args, **kwargs)
self.dataset_group = 'sound event'
self.dataset_meta = {
'authors': 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen',
'name_remote': 'TUT Sound Events 2016, evaluation dataset',
'url': 'http://www.cs.tut.fi/sgn/arg/dcase2016/download/',
'audio_source': 'Field recording',
'audio_type': 'Natural',
'recording_device_model': 'Roland Edirol R-09',
'microphone_model': 'Soundman OKM II Klassik/studio A3 electret microphone',
}
self.crossvalidation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'home'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'residential_area'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2016/evaluation_data/TUT-sound-events-2016-evaluation.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-evaluation.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2016/evaluation_data/TUT-sound-events-2016-evaluation.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-evaluation.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'http://www.cs.tut.fi/sgn/arg/dcase2016/evaluation_data/TUT-sound-events-2016-evaluation.audio.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-evaluation.audio.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
@property
def scene_labels(self):
labels = ['home', 'residential_area']
labels.sort()
return labels
def _after_extract(self, to_return=None):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not self.meta_container.exists() and os.path.isdir(os.path.join(self.local_path, 'meta')):
meta_file_handle = open(self.meta_container.filename, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(
self.local_path,
relative_path.replace('audio', 'meta'),
base_filename + '.ann'
)
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename),
scene_label,
float(annotation_file_row[0].replace(',', '.')),
float(annotation_file_row[1].replace(',', '.')),
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
def train(self, fold=0, scene_label=None):
return []
def test(self, fold=0, scene_label=None):
if fold not in self.crossvalidation_data_test:
self.crossvalidation_data_test[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.crossvalidation_data_test[fold]:
self.crossvalidation_data_test[fold][scene_label_] = []
if fold > 0:
with open(
os.path.join(self.evaluation_setup_path, scene_label_ + '_fold' + str(fold) + '_test.txt'),
'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.crossvalidation_data_test[fold][scene_label_].append(
{'file': self.relative_to_absolute_path(row[0])}
)
else:
with open(os.path.join(self.evaluation_setup_path, scene_label_ + '_test.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.crossvalidation_data_test[fold][scene_label_].append(
{'file': self.relative_to_absolute_path(row[0])}
)
if scene_label:
return self.crossvalidation_data_test[fold][scene_label]
else:
data = []
for scene_label_ in self.scene_labels:
for item in self.crossvalidation_data_test[fold][scene_label_]:
data.append(item)
return data
| 38.890131 | 192 | 0.55223 | 14,930 | 142,649 | 5.037642 | 0.053382 | 0.023055 | 0.030181 | 0.035367 | 0.729179 | 0.691565 | 0.667633 | 0.644605 | 0.624767 | 0.611113 | 0 | 0.016129 | 0.3424 | 142,649 | 3,667 | 193 | 38.900736 | 0.785653 | 0.136916 | 0 | 0.498825 | 0 | 0.026798 | 0.18594 | 0.036683 | 0.00047 | 0 | 0 | 0 | 0.002351 | 1 | 0.046074 | false | 0.00094 | 0.013164 | 0.003291 | 0.102492 | 0.002351 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cf896e2bac34dddb9638290d95518117931f9f94 | 522 | py | Python | api/v1/exceptions.py | SVArago/alexia | 96ae6dfabb893388bd4610ea971574a993b8029d | [
"BSD-3-Clause"
] | 3 | 2015-12-22T00:50:43.000Z | 2017-01-07T18:09:36.000Z | api/v1/exceptions.py | SVArago/alexia | 96ae6dfabb893388bd4610ea971574a993b8029d | [
"BSD-3-Clause"
] | 24 | 2015-11-02T15:38:40.000Z | 2017-01-07T21:18:42.000Z | api/v1/exceptions.py | SVArago/alexia | 96ae6dfabb893388bd4610ea971574a993b8029d | [
"BSD-3-Clause"
] | null | null | null | from jsonrpc.exceptions import Error
class ForbiddenError(Error):
""" The token was not recognized. """
code = 403
status = 200
message = 'Forbidden.'
class NotFoundError(Error):
""" The token was not recognized. """
code = 404
status = 200
message = 'Not Found.'
class InvalidParametersError(Error):
""" Invalid method parameters.
Copy of jsonrpc.exceptions.InvalidParamsError, returned with HTTP status 200 like the other errors in this module.
"""
code = -32602
status = 200
message = 'Invalid params.'
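# Usage sketch (illustrative; not part of the original module): raising one of
# these inside a JSON-RPC method makes the dispatcher serialize an error object
# into the response body while the HTTP status stays 200. The ``valid_tokens``
# container below is hypothetical.
def _example_guarded_method(token, valid_tokens):
    if token not in valid_tokens:
        raise ForbiddenError()
    return 'ok'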
| 20.076923 | 71 | 0.653257 | 56 | 522 | 6.089286 | 0.571429 | 0.079179 | 0.140762 | 0.093842 | 0.193548 | 0.193548 | 0.193548 | 0 | 0 | 0 | 0 | 0.058376 | 0.245211 | 522 | 25 | 72 | 20.88 | 0.807107 | 0.298851 | 0 | 0.230769 | 0 | 0 | 0.103858 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d83d65f8e83449bb166797c179bfdf461170d4c2 | 546 | py | Python | tests/urls.py | klowe0100/wagtail-transfer | 90245b1cff1f542a3698273e745d858857de8722 | [
"BSD-3-Clause"
] | null | null | null | tests/urls.py | klowe0100/wagtail-transfer | 90245b1cff1f542a3698273e745d858857de8722 | [
"BSD-3-Clause"
] | null | null | null | tests/urls.py | klowe0100/wagtail-transfer | 90245b1cff1f542a3698273e745d858857de8722 | [
"BSD-3-Clause"
] | 1 | 2022-02-23T11:45:04.000Z | 2022-02-23T11:45:04.000Z | from __future__ import absolute_import, unicode_literals
from django.urls import include, re_path
from wagtail.admin import urls as wagtailadmin_urls
from wagtail.core import urls as wagtail_urls
from wagtail_transfer import urls as wagtailtransfer_urls
urlpatterns = [
re_path(r'^admin/', include(wagtailadmin_urls)),
re_path(r'^wagtail-transfer/', include(wagtailtransfer_urls)),
# For anything not caught by a more specific rule above, hand over to
# Wagtail's serving mechanism
re_path(r'', include(wagtail_urls)),
]
| 30.333333 | 73 | 0.776557 | 77 | 546 | 5.285714 | 0.467532 | 0.058968 | 0.088452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148352 | 546 | 17 | 74 | 32.117647 | 0.875269 | 0.173993 | 0 | 0 | 0 | 0 | 0.055804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d8400315a945f18534e51ba7e9f07635b5c88b42 | 1,381 | py | Python | examples/rl/train/play.py | ONLYA/RoboGrammar | 4b9725739b24dc9df4049866c177db788b1e458f | [
"MIT"
] | 156 | 2020-10-02T14:33:22.000Z | 2022-03-17T22:30:30.000Z | examples/rl/train/play.py | ONLYA/RoboGrammar | 4b9725739b24dc9df4049866c177db788b1e458f | [
"MIT"
] | 10 | 2020-12-14T01:24:03.000Z | 2022-02-16T10:01:16.000Z | examples/rl/train/play.py | ONLYA/RoboGrammar | 4b9725739b24dc9df4049866c177db788b1e458f | [
"MIT"
] | 43 | 2020-10-02T00:01:17.000Z | 2022-03-06T17:02:38.000Z | import sys
import os
base_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../')
sys.path.append(base_dir)
sys.path.append(os.path.join(base_dir, 'rl'))
import numpy as np
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import gym
gym.logger.set_level(40)
import environments
from rl.train.evaluation import render, render_full
from rl.train.arguments import get_parser
from a2c_ppo_acktr import algo, utils
from a2c_ppo_acktr.envs import make_vec_envs, make_env
from a2c_ppo_acktr.model import Policy
parser = get_parser()
parser.add_argument('--model-path', type = str, required = True)
args = parser.parse_args()
if not os.path.isfile(args.model_path):
    print('Error: model file does not exist')
    sys.exit(1)
torch.manual_seed(0)
torch.set_num_threads(1)
device = torch.device('cpu')
render_env = gym.make(args.env_name, args = args)
render_env.seed(0)
envs = make_vec_envs(args.env_name, 0, 4, 0.995, None, device, False, args = args)
actor_critic = Policy(
envs.observation_space.shape,
envs.action_space,
base_kwargs={'recurrent': False})
actor_critic.to(device)
ob_rms = utils.get_vec_normalize(envs).ob_rms
actor_critic, ob_rms = torch.load(args.model_path)
actor_critic.eval()
envs.close()
render_full(render_env, actor_critic, ob_rms, deterministic = True, repeat = True)
| 23.810345 | 82 | 0.766112 | 229 | 1,381 | 4.401747 | 0.39738 | 0.029762 | 0.029762 | 0.044643 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.11803 | 1,381 | 57 | 83 | 24.22807 | 0.816092 | 0 | 0 | 0 | 0 | 0 | 0.041334 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d84582c0c8bdca35e7b408f03d12bcdbc039c3f0 | 252 | py | Python | setup.py | KieberLab/indCAPS | 6b0a75d99f39a3273ed31fc6cf45ba1b002ca313 | [
"BSD-3-Clause"
] | 1 | 2018-04-13T18:02:27.000Z | 2018-04-13T18:02:27.000Z | setup.py | KieberLab/indCAPS | 6b0a75d99f39a3273ed31fc6cf45ba1b002ca313 | [
"BSD-3-Clause"
] | null | null | null | setup.py | KieberLab/indCAPS | 6b0a75d99f39a3273ed31fc6cf45ba1b002ca313 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python2
from setuptools import setup
setup(name='indCAPS',
version='0.1',
description='OpenShift App',
author='Charles Hodgens',
author_email='hodgens@email.unc.edu',
# install_requires=['Flask==0.10.1'],
) | 22.909091 | 43 | 0.650794 | 32 | 252 | 5.0625 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033981 | 0.18254 | 252 | 11 | 44 | 22.909091 | 0.752427 | 0.246032 | 0 | 0 | 0 | 0 | 0.312169 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d845c8b1f227bb85466c78b3c158732c8901248a | 9,474 | py | Python | sparse_decomposition/decomposition/decomposition.py | bdpedigo/sparse_matrix_analysis | dbdff69b8ec56f60ba96b723a616f442755eacda | [
"MIT"
] | 2 | 2021-03-18T14:51:52.000Z | 2021-03-18T16:05:55.000Z | sparse_decomposition/decomposition/decomposition.py | bdpedigo/sparse_matrix_analysis | dbdff69b8ec56f60ba96b723a616f442755eacda | [
"MIT"
] | 1 | 2021-03-18T05:08:25.000Z | 2021-03-18T16:17:05.000Z | sparse_decomposition/decomposition/decomposition.py | bdpedigo/sparse_matrix_analysis | dbdff69b8ec56f60ba96b723a616f442755eacda | [
"MIT"
] | null | null | null | # Some of the implementation inspired by:
# REF: https://github.com/fchen365/epca
import time
from abc import abstractmethod
import numpy as np
from factor_analyzer import Rotator
from sklearn.base import BaseEstimator
from sklearn.preprocessing import StandardScaler
from sklearn.utils import check_array
from graspologic.embed import selectSVD
from ..utils import calculate_explained_variance_ratio, soft_threshold
from scipy.linalg import orthogonal_procrustes
def _varimax(X):
return Rotator(normalize=False).fit_transform(X)
def _polar(X):
# REF: https://en.wikipedia.org/wiki/Polar_decomposition#Relation_to_the_SVD
U, D, Vt = selectSVD(X, n_components=X.shape[1], algorithm="full")
return U @ Vt
def _polar_rotate_shrink(X, gamma=0.1):
# Algorithm 1 from the paper
U, _, _ = selectSVD(X, n_components=X.shape[1], algorithm="full")
# U = _polar(X)
# R, _ = orthogonal_procrustes(U_old, U)
# print(np.linalg.norm(U_old @ R - U))
U_rot = _varimax(U)
U_thresh = soft_threshold(U_rot, gamma)
return U_thresh
def _reorder_components(X, Z_hat, Y_hat):
score_norms = np.linalg.norm(X @ Y_hat, axis=0)
sort_inds = np.argsort(-score_norms)
return Z_hat[:, sort_inds], Y_hat[:, sort_inds]
# import abc
# class SuperclassMeta(type):
# def __new__(mcls, classname, bases, cls_dict):
# cls = super().__new__(mcls, classname, bases, cls_dict)
# for name, member in cls_dict.items():
# if not getattr(member, "__doc__"):
# member.__doc__ = getattr(bases[-1], name).__doc__
# return cls
class BaseSparseDecomposition(BaseEstimator):
def __init__(
self,
n_components=2,
gamma=None,
max_iter=10,
scale=False,
center=False,
tol=1e-4,
verbose=0,
):
"""Sparse matrix decomposition model.
Parameters
----------
n_components : int, optional (default=2)
Number of components or embedding dimensions.
gamma : float, int or None, optional (default=None)
Sparsity parameter, must be nonnegative. Lower values lead to more sparsity
in the estimated components. If ``None``, will be set to
``sqrt(n_components * X.shape[1])`` where ``X`` is the matrix passed to
``fit``.
max_iter : int, optional (default=10)
Maximum number of iterations allowed, must be nonnegative.
scale : bool, optional (default=False)
    If True, scale each feature of X to unit variance before fitting.
center : bool, optional (default=False)
    If True, subtract each feature's mean from X before fitting.
tol : float or int, optional (default=1e-4)
Tolerance for stopping iterative optimization. If the relative difference in
score is less than this amount the algorithm will terminate.
verbose : int, optional (default=0)
Verbosity level. Higher values will result in more messages.
"""
self.n_components = n_components
self.gamma = gamma
self.max_iter = max_iter
self.scale = scale
self.center = center
self.tol = tol
self.verbose = verbose
# TODO add random state
def _initialize(self, X):
"""[summary]
Parameters
----------
X : [type]
[description]
Returns
-------
[type]
[description]
"""
U, D, Vt = selectSVD(X, n_components=self.n_components)
score = np.linalg.norm(D)
return U, Vt.T, score
def _validate_parameters(self, X):
"""[summary]
Parameters
----------
X : [type]
[description]
"""
if not self.gamma:
gamma = np.sqrt(self.n_components * X.shape[1])
else:
gamma = self.gamma
self.gamma_ = gamma
def _preprocess_data(self, X):
"""[summary]
Parameters
----------
X : [type]
[description]
Returns
-------
[type]
[description]
"""
if self.scale or self.center:
X = StandardScaler(
with_mean=self.center, with_std=self.scale
).fit_transform(X)
return X
# def _compute_matrix_difference(X, metric='max'):
# TODO better convergence criteria
def fit_transform(self, X, y=None):
"""[summary]
Parameters
----------
X : [type]
[description]
y : [type], optional
[description], by default None
Returns
-------
[type]
[description]
"""
self._validate_parameters(X)
self._validate_data(X, copy=True, ensure_2d=True) # from sklearn BaseEstimator
Z_hat, Y_hat, score = self._initialize(X)
if self.gamma_ == np.inf:
max_iter = 0
else:
max_iter = self.max_iter
# for keeping track of progress over iteration
Z_diff = np.inf
Y_diff = np.inf
norm_score_diff = np.inf
last_score = 0
# main loop
i = 0
while (i < max_iter) and (norm_score_diff > self.tol):
if self.verbose > 0:
print(f"Iteration: {i}")
iter_time = time.time()
Z_hat_new, Y_hat_new = self._update_estimates(X, Z_hat, Y_hat)
# Z_hat_new, Y_hat_new = _reorder_components(X, Z_hat_new, Y_hat_new)
Z_diff = np.linalg.norm(Z_hat_new - Z_hat)
Y_diff = np.linalg.norm(Y_hat_new - Y_hat)
norm_Z_diff = Z_diff / np.linalg.norm(Z_hat_new)
norm_Y_diff = Y_diff / np.linalg.norm(Y_hat_new)
Z_hat = Z_hat_new
Y_hat = Y_hat_new
B_hat = Z_hat.T @ X @ Y_hat
score = np.linalg.norm(B_hat)
norm_score_diff = np.abs(score - last_score) / score
last_score = score
if self.verbose > 1:
print(f"{time.time() - iter_time:.3f} seconds elapsed for iteration.")
if self.verbose > 0:
print(f"Difference in Z_hat: {Z_diff}")
print(f"Difference in Y_hat: {Z_diff}")
print(f"Normalized difference in Z_hat: {norm_Z_diff}")
print(f"Normalized difference in Y_hat: {norm_Y_diff}")
print(f"Total score: {score}")
print(f"Normalized difference in score: {norm_score_diff}")
print()
i += 1
Z_hat, Y_hat = _reorder_components(X, Z_hat, Y_hat)
# save attributes
self.n_iter_ = i
self.components_ = Y_hat.T
# TODO this should not be cumulative by the sklearn definition
self.explained_variance_ratio_ = calculate_explained_variance_ratio(X, Y_hat)
self.score_ = score
return Z_hat
def fit(self, X):
"""[summary]
Parameters
----------
X : [type]
[description]
Returns
-------
[type]
[description]
"""
self.fit_transform(X)
return self
def transform(self, X):
"""[summary]
Parameters
----------
X : [type]
[description]
Returns
-------
[type]
[description]
"""
# TODO input checking
return X @ self.components_.T
@abstractmethod
def _update_estimates(self, X, Z_hat, Y_hat):
"""[summary]
Parameters
----------
X : [type]
[description]
Z_hat : [type]
[description]
Y_hat : [type]
[description]
"""
pass
class SparseComponentAnalysis(BaseSparseDecomposition):
def _update_estimates(self, X, Z_hat, Y_hat):
"""[summary]
Parameters
----------
X : [type]
[description]
Z_hat : [type]
[description]
Y_hat : [type]
[description]
Returns
-------
[type]
[description]
"""
# use the validated sparsity parameter rather than the helper's 0.1 default
Y_hat = _polar_rotate_shrink(X.T @ Z_hat, gamma=self.gamma_)
Z_hat = _polar(X @ Y_hat)
return Z_hat, Y_hat
def _save_attributes(self, X, Z_hat, Y_hat):
"""[summary]
Parameters
----------
X : [type]
[description]
Z_hat : [type]
[description]
Y_hat : [type]
[description]
"""
pass
class SparseMatrixApproximation(BaseSparseDecomposition):
def _update_estimates(self, X, Z_hat, Y_hat):
"""[summary]
Parameters
----------
X : [type]
[description]
Z_hat : [type]
[description]
Y_hat : [type]
[description]
Returns
-------
[type]
[description]
"""
# pass the validated sparsity parameter instead of the helper's 0.1 default
Z_hat = _polar_rotate_shrink(X @ Y_hat, gamma=self.gamma_)
Y_hat = _polar_rotate_shrink(X.T @ Z_hat, gamma=self.gamma_)
return Z_hat, Y_hat
def _save_attributes(self, X, Z_hat, Y_hat):
"""[summary]
Parameters
----------
X : [type]
[description]
Z_hat : [type]
[description]
Y_hat : [type]
[description]
"""
B = Z_hat.T @ X @ Y_hat
self.score_ = B
self.right_latent_ = Y_hat
self.left_latent_ = Z_hat
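# Minimal usage sketch (added for illustration; not in the original module).
# Assumes a dense observations-by-features matrix; the estimator follows the
# scikit-learn fit/fit_transform/transform convention implemented above, and
# the sizes and iteration count here are arbitrary.
if __name__ == "__main__":
    rng = np.random.RandomState(0)
    X_demo = rng.normal(size=(200, 50))
    sca = SparseComponentAnalysis(n_components=5, max_iter=5)
    scores = sca.fit_transform(X_demo)  # (200, 5) estimated scores Z_hat
    loadings = sca.components_          # (5, 50) sparse loadings Y_hat.T
    print(scores.shape, loadings.shape, sca.score_)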
| 26.389972 | 88 | 0.541799 | 1,085 | 9,474 | 4.490323 | 0.214747 | 0.031199 | 0.020115 | 0.019704 | 0.359811 | 0.31876 | 0.267652 | 0.22968 | 0.209975 | 0.182677 | 0 | 0.005342 | 0.348005 | 9,474 | 358 | 89 | 26.463687 | 0.78339 | 0.343994 | 0 | 0.101563 | 0 | 0 | 0.058524 | 0 | 0 | 0 | 0 | 0.011173 | 0 | 1 | 0.125 | false | 0.015625 | 0.078125 | 0.007813 | 0.3125 | 0.070313 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d8460399c532214a703651f1b44dd2db303623f8 | 829 | py | Python | guess_a_number.py | peterhogan/python | bc6764f7794a862ff0d138bad80f1d6313984dcd | [
"MIT"
] | null | null | null | guess_a_number.py | peterhogan/python | bc6764f7794a862ff0d138bad80f1d6313984dcd | [
"MIT"
] | null | null | null | guess_a_number.py | peterhogan/python | bc6764f7794a862ff0d138bad80f1d6313984dcd | [
"MIT"
] | null | null | null | import maths
import random
print "Let's guess a number."
bottom = input("Pick a range; bottom number: ")
top = input("Pick a top number? ")
guess_range = range(bottom, top+1)
ans = random.randint(bottom, top)
games = 0
average_guesses = []
again = 'y'
while again == 'y':
ans = random.randint(bottom, top)
games += 1
print "Game %d: Number picked!..." % games
guesses = 0
guess = ''
while guess != ans:
print "Guess #%d:" % (guesses+1)
guess = input("> ")
guesses += 1
if guess > ans:
print "Too high,"
elif guess < ans:
print "Too low,"
else:
pass
if guess == ans:
average_guesses.append(guesses)
again = raw_input("Yes! Play again? (y/n)")
avg_guess = sum(average_guesses) / float(len(average_guesses))
print average_guesses
print "End of game, %d games played with an average of %f guesses." % (games, avg_guess)
| 23.027778 | 88 | 0.656212 | 125 | 829 | 4.288 | 0.376 | 0.104478 | 0.072761 | 0.08209 | 0.11194 | 0.11194 | 0 | 0 | 0 | 0 | 0 | 0.009036 | 0.199035 | 829 | 35 | 89 | 23.685714 | 0.798193 | 0 | 0 | 0.0625 | 0 | 0 | 0.249698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.03125 | 0.0625 | null | null | 0.21875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d84e1367a97b3daad73881d637cea224686cd672 | 331 | py | Python | test_lambda.py | unbiased-coder/python-aws-lambda-guide | 36c3112a0a49eca1ddcaef967ca80fac573f50ac | [
"Unlicense"
] | null | null | null | test_lambda.py | unbiased-coder/python-aws-lambda-guide | 36c3112a0a49eca1ddcaef967ca80fac573f50ac | [
"Unlicense"
] | null | null | null | test_lambda.py | unbiased-coder/python-aws-lambda-guide | 36c3112a0a49eca1ddcaef967ca80fac573f50ac | [
"Unlicense"
] | null | null | null | import os
import json
def lambda_handler(event, context):
first_name = event['first_name']
last_name = event['last_name']
country = os.environ['COUNTRY']
return {
'statusCode': 200,
'body': json.dumps('Hello I am %s %s and I am from %s'%(first_name, last_name, country))
}
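# Local invocation sketch (illustrative; not part of the deployed handler).
# The event keys and the COUNTRY environment variable mirror what the handler
# above reads; the values themselves are made up for testing.
if __name__ == '__main__':
    os.environ['COUNTRY'] = 'Greece'
    event = {'first_name': 'Ada', 'last_name': 'Lovelace'}
    print(lambda_handler(event, None))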
| 25.461538 | 97 | 0.595166 | 44 | 331 | 4.318182 | 0.545455 | 0.142105 | 0.136842 | 0.178947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012552 | 0.277946 | 331 | 12 | 98 | 27.583333 | 0.782427 | 0 | 0 | 0 | 0 | 0 | 0.22884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d851a89f9002b0dda4ccf4ff65e9d8b1cb4dfa03 | 592 | py | Python | tensorflow-example/tensor_placeholder.py | dinkar1708/machine-learning-examples | 40d7d1fa77de5fd8414697f27da889c3e08a3eff | [
"Apache-2.0"
] | null | null | null | tensorflow-example/tensor_placeholder.py | dinkar1708/machine-learning-examples | 40d7d1fa77de5fd8414697f27da889c3e08a3eff | [
"Apache-2.0"
] | null | null | null | tensorflow-example/tensor_placeholder.py | dinkar1708/machine-learning-examples | 40d7d1fa77de5fd8414697f27da889c3e08a3eff | [
"Apache-2.0"
] | null | null | null | import numpy as np
import tensorflow as tf
# placeholder - Inserts a placeholder for a tensor that will be always fed.
# Example1-
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b
sess = tf.Session()
print(sess.run(adder_node, {a: [1, 2], b: [2, 3]}))
sess.close()
# Example2
x = tf.placeholder(tf.float32, shape=(1024, 1024))
y = tf.matmul(x, x)
with tf.Session() as sess:
# print(sess.run(y)) # ERROR: will fail because x was not fed.
rand_array = np.random.rand(1024, 1024)
print(sess.run(y, feed_dict={x: rand_array})) # Will succeed.
| 25.73913 | 75 | 0.677365 | 100 | 592 | 3.96 | 0.47 | 0.131313 | 0.113636 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05726 | 0.173986 | 592 | 22 | 76 | 26.909091 | 0.752556 | 0.285473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d8542a8bab379d24ecb097c9415b72c7698225bb | 1,812 | py | Python | validator/checks/md.py | KeepSafe/content-validator | 30e59100f3251aee20b3165d42fceba15a3f5ede | [
"Apache-2.0"
] | 1 | 2018-04-25T19:42:47.000Z | 2018-04-25T19:42:47.000Z | validator/checks/md.py | KeepSafe/content-validator | 30e59100f3251aee20b3165d42fceba15a3f5ede | [
"Apache-2.0"
] | 12 | 2015-07-21T11:01:53.000Z | 2021-03-31T18:53:35.000Z | validator/checks/md.py | KeepSafe/content-validator | 30e59100f3251aee20b3165d42fceba15a3f5ede | [
"Apache-2.0"
] | 2 | 2016-11-05T04:25:35.000Z | 2018-04-25T19:42:49.000Z | import re
from typing import Type
from sdiff import diff, renderer, MdParser
from markdown import markdown
from ..errors import MdDiff, ContentData
LINK_RE = r'\]\(([^\)]+)\)'
def save_file(content, filename):
with open(filename, 'w') as fp:
fp.write(content)
class MarkdownComparator(object):
def __init__(self, md_parser_cls: Type[MdParser] = MdParser):
self._md_parser_cls = md_parser_cls
def check(self, data, parser, reader):
if not data:
return []
# TODO use yield instead of array
errors = []
for row in data:
base = row.pop(0)
base_parsed = parser.parse(reader.read(base))
base_html = markdown(base_parsed)
for other in row:
other_parsed = parser.parse(reader.read(other))
other_html = markdown(other_parsed)
other_diff, base_diff, error = diff(other_parsed, base_parsed,
renderer=renderer.HtmlRenderer(),
parser_cls=self._md_parser_cls)
if error:
    error_msgs = [e.message for e in error]
base_data = ContentData(base, base_parsed, base_diff, base_html)
other_data = ContentData(other, other_parsed, other_diff, other_html)
errors.append(MdDiff(base_data, other_data, error_msgs))
return errors
def get_broken_links(self, base, other):
base_links = re.findall(LINK_RE, base)
other_links = re.findall(LINK_RE, other.replace('\u200e', ''))
broken_links = set(other_links) - set(base_links)
return broken_links
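# Usage sketch (illustrative; not part of the original module): links that
# appear in the translated text but not in the base text are reported.
def _example_broken_links():
    comparator = MarkdownComparator()
    base = 'see the [docs](https://example.com/en)'
    translated = 'voir la [doc](https://example.com/fr)'
    return comparator.get_broken_links(base, translated)  # {'https://example.com/fr'}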
| 35.529412 | 89 | 0.573951 | 208 | 1,812 | 4.759615 | 0.346154 | 0.045455 | 0.044444 | 0.045455 | 0.094949 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003331 | 0.337196 | 1,812 | 50 | 90 | 36.24 | 0.820983 | 0.017108 | 0 | 0.051282 | 0 | 0 | 0.011804 | 0 | 0 | 0 | 0 | 0.02 | 0 | 1 | 0.102564 | false | 0 | 0.128205 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d8569c4866fa822712890e85b3227b7cef16ca10 | 3,305 | py | Python | src/conversations/migrations/0001_initial.py | earth-emoji/august | 065d4b449a138ead1557293bffcb20cd2db90a41 | [
"BSD-2-Clause"
] | null | null | null | src/conversations/migrations/0001_initial.py | earth-emoji/august | 065d4b449a138ead1557293bffcb20cd2db90a41 | [
"BSD-2-Clause"
] | 10 | 2021-03-19T10:47:13.000Z | 2022-03-12T00:28:30.000Z | src/conversations/migrations/0001_initial.py | earth-emoji/august | 065d4b449a138ead1557293bffcb20cd2db90a41 | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 2.2.12 on 2020-05-21 03:10
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
('accounts', '0002_auto_20200501_0524'),
('classifications', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='RoomRequest',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('slug', models.SlugField(blank=True, default=uuid.uuid1, unique=True)),
('status', models.BooleanField(default=False)),
('type', models.CharField(blank=True, choices=[('Invite', 'Invite'), ('Inquiry', 'Inquiry')], max_length=9)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('receiver', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='room_requests_received', to='accounts.Professional')),
('sender', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='room_requests_sent', to='accounts.Professional')),
],
),
migrations.CreateModel(
name='Room',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('slug', models.SlugField(blank=True, max_length=80, unique=True)),
('name', models.CharField(blank=True, max_length=60, unique=True)),
('access', models.CharField(blank=True, choices=[('Public', 'Public'), ('Private', 'Private')], max_length=9)),
('is_active', models.BooleanField(default=True)),
('black_list', models.ManyToManyField(blank=True, related_name='rooms_forbidden', to='accounts.Professional')),
('category', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='rooms', to='classifications.Category')),
('host', models.ForeignKey(blank=True, on_delete=django.db.models.deletion.CASCADE, related_name='rooms', to='accounts.Professional')),
('members', models.ManyToManyField(blank=True, related_name='room_memberships', to='accounts.Professional')),
('tags', models.ManyToManyField(blank=True, related_name='rooms', to='classifications.Tag')),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('slug', models.SlugField(blank=True, default=uuid.uuid1, unique=True)),
('message', models.TextField()),
('timestamp', models.DateTimeField(auto_now_add=True)),
('room', models.ForeignKey(blank=True, on_delete=django.db.models.deletion.CASCADE, related_name='messages', to='conversations.Room')),
('sender', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='messages', to='accounts.Professional')),
],
),
]
| 56.982759 | 170 | 0.625416 | 345 | 3,305 | 5.852174 | 0.292754 | 0.053492 | 0.048539 | 0.076275 | 0.565627 | 0.495295 | 0.442298 | 0.379891 | 0.379891 | 0.379891 | 0 | 0.017074 | 0.220272 | 3,305 | 57 | 171 | 57.982456 | 0.766395 | 0.013918 | 0 | 0.34 | 1 | 0 | 0.174087 | 0.059871 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.06 | 0 | 0.14 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d85a4d460505324fde409b6130001dc85cb93b73 | 2,099 | py | Python | output/dfg_mix/mix_rotate_chains.py | tmcclintock/fit_mass_functions | 3edc05004d734c48e8138de098ade8b77a0cfd61 | [
"BSD-2-Clause"
] | null | null | null | output/dfg_mix/mix_rotate_chains.py | tmcclintock/fit_mass_functions | 3edc05004d734c48e8138de098ade8b77a0cfd61 | [
"BSD-2-Clause"
] | null | null | null | output/dfg_mix/mix_rotate_chains.py | tmcclintock/fit_mass_functions | 3edc05004d734c48e8138de098ade8b77a0cfd61 | [
"BSD-2-Clause"
] | null | null | null | """
Instead of rotating the chains in the entire parameter space,
just rotate all the intercepts together and then all the slopes together.
"""
import numpy as np
import corner, sys
import matplotlib.pyplot as plt
old_labels = [r"$d0$",r"$d1$",r"$f0$",r"$f1$",r"$g0$",r"$g1$"]
N_z = 10
N_boxes = 39
N_p = 6
mean_models = np.zeros((N_boxes,N_p))
var_models = np.zeros((N_boxes,N_p))
#Just use Box000 to find the rotations
index = 0
inbase = "../6params/chains/Box%03d_chain.txt"
outbase = "./mixed_chains/Mixed_Box%03d_chain.txt"
make_Rs = False
rotate = True
#GOT UP TO HERE AND STOPPED
if make_Rs:
#First find all the rotation matrices
for i in range(0,N_boxes):
data = np.loadtxt(inbase%i)
labs = ["int","slope"]
for g in range(0,2): #First slopes, then intercepts
D = data[:,g::2]
C = np.cov(D,rowvar=False)
w,R = np.linalg.eig(C)
np.savetxt("./mixed_chains/R%s%d_matrix.txt"%(labs[g],i),R)
#As it turns out, cosmo 34 is the middle-most box,
#so use it for the rotation matrix.
if i == 34: np.savetxt("./mixed_chains/R%s_matrix.txt"%labs[g],R)
if i == 34: np.savetxt("./R%s_matrix.txt"%labs[g],R)
if i == 34: np.savetxt("../R%s_matrix.txt"%labs[g],R)
print "Created R%s%d"%(labs[g],i)
if rotate:
#First get the Rotation matrix
R = []
R.append(np.loadtxt("./mixed_chains/Rint_matrix.txt"))
R.append(np.loadtxt("./mixed_chains/Rslope_matrix.txt"))
#Now rotate some chains
for i in range(0,N_boxes):
data = np.loadtxt(inbase%i)
rD = np.zeros_like(data)
for g in range(0,2):
rD[:,g::2] = np.dot(data[:,g::2],R[g])
np.savetxt(outbase%i,rD)
mean_models[i] = np.mean(rD,0)
var_models[i] = np.var(rD,0)
print "Saved box%03d"%i
fig = corner.corner(data,labels=old_labels)
fig = corner.corner(rD)
plt.show()
sys.exit()
#np.savetxt("./mixed_dfg_means.txt",mean_models)
#np.savetxt("./mixed_dfg_vars.txt",var_models)
| 32.292308 | 77 | 0.604097 | 353 | 2,099 | 3.484419 | 0.339943 | 0.05122 | 0.026016 | 0.045528 | 0.263415 | 0.25935 | 0.160163 | 0.126016 | 0.126016 | 0.126016 | 0 | 0.025577 | 0.236303 | 2,099 | 64 | 78 | 32.796875 | 0.741734 | 0.168652 | 0 | 0.136364 | 0 | 0 | 0.179648 | 0.122487 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.068182 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d85e74f3d68877675e7e2b4e7d3a786bcaedf85c | 720 | py | Python | Room/api/migrations/0002_auto_20210129_1554.py | zarif007/Club-Room | 57e15aecbc8e88e6ca46616c001e4c25ae22068c | [
"MIT"
] | 1 | 2021-03-15T22:28:28.000Z | 2021-03-15T22:28:28.000Z | Room/api/migrations/0002_auto_20210129_1554.py | zarif007/Club-Room | 57e15aecbc8e88e6ca46616c001e4c25ae22068c | [
"MIT"
] | null | null | null | Room/api/migrations/0002_auto_20210129_1554.py | zarif007/Club-Room | 57e15aecbc8e88e6ca46616c001e4c25ae22068c | [
"MIT"
] | null | null | null | # Generated by Django 3.1.5 on 2021-01-29 09:54
import api.models
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='User',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_code', models.CharField(max_length=8)),
],
),
migrations.AlterField(
model_name='room',
name='code',
field=models.CharField(default=api.models.generate_unique_code, max_length=8, unique=True),
),
]
| 26.666667 | 114 | 0.580556 | 77 | 720 | 5.298701 | 0.649351 | 0.044118 | 0.04902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041257 | 0.293056 | 720 | 26 | 115 | 27.692308 | 0.760314 | 0.0625 | 0 | 0.1 | 1 | 0 | 0.059435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d86022ebba92fcc28f93038a020bb13a52cd248a | 12,280 | py | Python | scalyr_agent/util.py | GitSullied/scalyr-agent-2 | 1982ce2b15fceca87b0a4c5c07cdda0258a032ac | [
"Apache-2.0"
] | null | null | null | scalyr_agent/util.py | GitSullied/scalyr-agent-2 | 1982ce2b15fceca87b0a4c5c07cdda0258a032ac | [
"Apache-2.0"
] | null | null | null | scalyr_agent/util.py | GitSullied/scalyr-agent-2 | 1982ce2b15fceca87b0a4c5c07cdda0258a032ac | [
"Apache-2.0"
] | null | null | null | # Copyright 2014 Scalyr Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------
#
# author: Steven Czerwinski <czerwin@scalyr.com>
__author__ = 'czerwin@scalyr.com'
import base64
import os
import random
import threading
import time
from json_lib import parse, JsonParseException
# Use sha1 from hashlib (Python 2.5 or greater) otherwise fallback to the old sha module.
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
# Try to use the UUID library if it is available (Python 2.5 or greater).
try:
import uuid
except ImportError:
uuid = None
def read_file_as_json(file_path):
"""Reads the entire file as a JSON value and return it.
@param file_path: the path to the file to read
@type file_path: str
@return: The JSON value contained in the file. This is typically a JsonObject, but could be primitive
values such as int or str if that is all the file contains.
@raise JsonReadFileException: If there is an error reading the file.
"""
f = None
try:
try:
if not os.path.isfile(file_path):
raise JsonReadFileException(file_path, 'The file does not exist.')
if not os.access(file_path, os.R_OK):
raise JsonReadFileException(file_path, 'The file is not readable.')
f = open(file_path, 'r')
data = f.read()
return parse(data)
except IOError, e:
raise JsonReadFileException(file_path, 'Read error occurred: ' + str(e))
except JsonParseException, e:
raise JsonReadFileException(file_path, "JSON parsing error occurred: %s (line %i, byte position %i)" % (
e.raw_message, e.line_number, e.position))
finally:
if f is not None:
f.close()
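# Usage sketch (illustrative; not part of the original module). The path is
# whatever the caller supplies; any read or parse failure surfaces as a
# JsonReadFileException, which this helper converts to a None result.
def _example_read_config(path):
    try:
        return read_file_as_json(path)
    except JsonReadFileException:
        return None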
def create_unique_id():
"""
@return: A value that will be unique for all values generated by all machines. The value
is also encoded so that is safe to be used in a web URL.
@rtype: str
"""
if uuid is not None:
# Here the uuid should be based on the mac of the machine.
base_value = uuid.uuid1().bytes
else:
# Otherwise, get as good of a 16 byte random number as we can and prefix it with
# the current time.
try:
base_value = os.urandom(16)
except NotImplementedError:
base_value = ''
for i in range(16):
base_value += chr(random.randrange(256))
base_value = str(time.time()) + base_value
return base64.urlsafe_b64encode(sha1(base_value).digest())
def remove_newlines_and_truncate(input_string, char_limit):
"""Returns the input string but with all newlines removed and truncated.
The newlines are replaced with spaces. This is done both for carriage return and newline.
Note, this does not add ellipses for the truncated text.
@param input_string: The string to transform
@param char_limit: The maximum number of characters the resulting string should be
@type input_string: str
@type char_limit: int
@return: The string with all newlines replaced with spaces and truncated.
@rtype: str
"""
return input_string.replace('\n', ' ').replace('\r', ' ')[0:char_limit]
def format_time(time_value):
"""Returns the time converted to a string in the common format used throughout the agent and in UTC.
This should be used to make how we report times to the user consistent.
If the time_value is None, then the returned value is 'Never'. A time value of None usually indicates
whatever is being timestamped has not occurred yet.
@param time_value: The time in seconds past epoch (fractional is ok) or None
@type time_value: float or None
@return: The time converted to a string, or 'Never' if time_value was None.
@rtype: str
"""
if time_value is None:
return 'Never'
else:
return '%s UTC' % (time.asctime(time.gmtime(time_value)))
class JsonReadFileException(Exception):
"""Raised when a failure occurs when reading a file as a JSON object."""
def __init__(self, file_path, message):
self.file_path = file_path
self.raw_message = message
Exception.__init__(self, "Failed while reading file '%s': %s" % (file_path, message))
class RunState(object):
"""Keeps track of whether or not some process, such as the agent or a monitor, should be running.
This abstraction can be used by multiple threads to efficiently monitor whether or not the process should
still be running. The expectation is that multiple threads will use this to attempt to quickly finish when
the run state changes to false.
"""
def __init__(self):
"""Creates a new instance of RunState which always is marked as running."""
self.__condition = threading.Condition()
self.__is_running = True
# A list of functions to invoke when this instance becomes stopped.
self.__on_stop_callbacks = []
def is_running(self):
"""Returns True if the state is still set to running."""
self.__condition.acquire()
result = self.__is_running
self.__condition.release()
return result
def sleep_but_awaken_if_stopped(self, timeout):
"""Sleeps for the specified amount of time, unless the run state changes to False, in which case the sleep is
terminated as soon as possible.
@param timeout: The number of seconds to sleep.
@return: True if the run state has been set to stopped.
"""
self.__condition.acquire()
if not self.__is_running:
    self.__condition.release()
    return True
self._wait_on_condition(timeout)
result = not self.__is_running
self.__condition.release()
return result
def stop(self):
"""Sets the run state to stopped.
This also ensures that any threads currently sleeping in 'sleep_but_awaken_if_stopped' will be awoken.
"""
callbacks_to_invoke = None
self.__condition.acquire()
if self.__is_running:
callbacks_to_invoke = self.__on_stop_callbacks
self.__on_stop_callbacks = []
self.__is_running = False
self.__condition.notifyAll()
self.__condition.release()
# Invoke the stopped callbacks.
if callbacks_to_invoke is not None:
for callback in callbacks_to_invoke:
callback()
def register_on_stop_callback(self, callback):
"""Adds a callback that will be invoked when this instance becomes stopped.
The callback will be invoked as soon as possible after the instance has been stopped, but they are
not guaranteed to be invoked before 'is_running' return False for another thread.
@param callback: A function that does not take any arguments.
"""
is_already_stopped = False
self.__condition.acquire()
if self.__is_running:
self.__on_stop_callbacks.append(callback)
else:
is_already_stopped = True
self.__condition.release()
# Invoke the callback if we are already stopped.
if is_already_stopped:
callback()
def _wait_on_condition(self, timeout):
"""Blocks for the condition to be signaled for the specified timeout.
This is only broken out for testing purposes.
@param timeout: The maximum number of seconds to block on the condition.
"""
self.__condition.wait(timeout)
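# Usage sketch (illustrative; not from the original module): a worker loop
# that polls a shared RunState and wakes early when another thread calls
# stop(). The ``do_one_unit_of_work`` callable is hypothetical.
def _example_worker_loop(run_state, do_one_unit_of_work, poll_interval=30.0):
    while run_state.is_running():
        do_one_unit_of_work()
        if run_state.sleep_but_awaken_if_stopped(poll_interval):
            break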
class FakeRunState(RunState):
"""A RunState subclass that does not actually sleep when sleep_but_awaken_if_stopped that can be used for tests.
"""
def __init__(self):
# The number of times this instance would have slept.
self.__total_times_slept = 0
RunState.__init__(self)
def _wait_on_condition(self, timeout):
self.__total_times_slept += 1
return
@property
def total_times_slept(self):
return self.__total_times_slept
class StoppableThread(threading.Thread):
"""A slight extension of a thread that uses a RunState instance to track if it should still be running.
This class must be extended to actually perform work. It is expected that the derived run method
invokes '_run_state.is_running' to determine whether the thread has been stopped.
"""
def __init__(self, name=None, target=None):
threading.Thread.__init__(self, name=name, target=target)
# Tracks whether or not the thread should still be running.
self._run_state = RunState()
def stop(self, wait_on_join=True, join_timeout=5):
"""Stops the thread from running.
By default, this will also block until the thread has completed (by performing a join).
@param wait_on_join: If True, will block on a join of this thread.
@param join_timeout: The maximum number of seconds to block for the join.
"""
self._run_state.stop()
if wait_on_join:
self.join(join_timeout)
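# Subclassing sketch (illustrative; not part of the original module): the
# derived run() keeps checking self._run_state so stop() can interrupt both
# the loop and the sleep. ``poll_fn`` is a hypothetical callable.
class _ExamplePollingThread(StoppableThread):
    def __init__(self, poll_fn, interval=5.0):
        StoppableThread.__init__(self, name='example-poller')
        self.__poll_fn = poll_fn
        self.__interval = interval

    def run(self):
        while self._run_state.is_running():
            self.__poll_fn()
            self._run_state.sleep_but_awaken_if_stopped(self.__interval)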
class RateLimiter(object):
"""An abstraction that can be used to enforce some sort of rate limit, expressed as a maximum number
of bytes to be consumed over a period of time.
It uses a leaky-bucket implementation. In this approach, the rate limit is modeled as a bucket
with a hole in it. The bucket has a maximum size (expressed in bytes) and a fill rate (expressed in bytes
per second). Whenever there is an operation that would consume bytes, this abstraction checks to see if
there are at least X number bytes available in the bucket. If so, X is deducted from the bucket's contents.
Otherwise, the operation is rejected. The bucket is gradually refilled at the fill rate, but the contents
of the bucket will never exceeded the maximum bucket size.
"""
def __init__(self, bucket_size, bucket_fill_rate, current_time=None):
"""Creates a new bucket.
@param bucket_size: The bucket size, which should be the maximum number of bytes that can be consumed
in a burst.
@param bucket_fill_rate: The fill rate, expressed as bytes per second. This should correspond to the
maximum desired steady state rate limit.
@param current_time: If not none, the value to use as the current time, expressed in seconds past epoch.
This is used in testing.
"""
self.__bucket_contents = bucket_size
self.__bucket_size = bucket_size
self.__bucket_fill_rate = bucket_fill_rate
if current_time is None:
current_time = time.time()
self.__last_bucket_fill_time = current_time
def charge_if_available(self, num_bytes, current_time=None):
"""Returns true and updates the rate limit count if there are enough bytes available for an operation
costing num_bytes.
@param num_bytes: The number of bytes to consume from the rate limit.
@param current_time: If not none, the value to use as the current time, expressed in seconds past epoch. This
is used in testing.
@return: True if there are enough room in the rate limit to allow the operation.
"""
if current_time is None:
current_time = time.time()
fill_amount = (current_time - self.__last_bucket_fill_time) * self.__bucket_fill_rate
self.__bucket_contents = min(self.__bucket_size, self.__bucket_contents + fill_amount)
self.__last_bucket_fill_time = current_time
if num_bytes <= self.__bucket_contents:
self.__bucket_contents -= num_bytes
return True
return False | 37.901235 | 117 | 0.674267 | 1,726 | 12,280 | 4.6292 | 0.238702 | 0.014018 | 0.011389 | 0.017021 | 0.150563 | 0.108886 | 0.075594 | 0.058573 | 0.048811 | 0.027284 | 0 | 0.003945 | 0.256922 | 12,280 | 324 | 118 | 37.901235 | 0.871671 | 0.100407 | 0 | 0.279412 | 0 | 0 | 0.037329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.080882 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
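# Usage sketch (illustrative; not part of the original module): gate writes on
# the leaky bucket. Construct the limiter once, e.g.
# ``limiter = RateLimiter(bucket_size=10000, bucket_fill_rate=1000)``, where
# bucket_size bounds burst size and bucket_fill_rate bounds steady-state
# throughput in bytes per second; the numbers are arbitrary.
def _example_rate_limited_send(limiter, payload, send_fn):
    if limiter.charge_if_available(len(payload)):
        send_fn(payload)
        return True
    return False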
d8676de497ac9bb9955ea09500c27f424ac41cbf | 6,647 | py | Python | scripts/download_grid_images.py | edwardoughton/taddle | f76ca6067e6fca6b699675ab038c31c9444e0a79 | [
"MIT"
] | 9 | 2020-08-18T04:25:00.000Z | 2022-03-18T16:42:33.000Z | scripts/download_grid_images.py | edwardoughton/arpu_predictor | f76ca6067e6fca6b699675ab038c31c9444e0a79 | [
"MIT"
] | null | null | null | scripts/download_grid_images.py | edwardoughton/arpu_predictor | f76ca6067e6fca6b699675ab038c31c9444e0a79 | [
"MIT"
] | 4 | 2020-01-27T01:48:30.000Z | 2021-12-01T16:48:17.000Z | """
Generate download locations within a country and download them.
Written by Jatin Mathur.
5/2020
"""
import os
import configparser
import math
import pandas as pd
import numpy as np
import random
import geopandas as gpd
from shapely.geometry import Point
import requests
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
from tqdm import tqdm
import logging
import time
BASE_DIR = '.'
# repo imports
import sys
sys.path.append(BASE_DIR)
from utils import PlanetDownloader
from config import VIS_CONFIG, RANDOM_SEED
COUNTRY_ABBRV = VIS_CONFIG['COUNTRY_ABBRV']
COUNTRIES_DIR = os.path.join(BASE_DIR, 'data', 'countries')
GRID_DIR = os.path.join(COUNTRIES_DIR, COUNTRY_ABBRV, 'grid')
IMAGE_DIR = os.path.join(COUNTRIES_DIR, COUNTRY_ABBRV, 'images')
ACCESS_TOKEN_DIR = os.path.join(BASE_DIR, 'planet_api_key.txt')
ACCESS_TOKEN = None
with open(ACCESS_TOKEN_DIR, 'r') as f:
ACCESS_TOKEN = f.readlines()[0]
assert ACCESS_TOKEN is not None, "Access token is not valid"
def create_folders():
"""
Function to create new folders.
"""
os.makedirs(IMAGE_DIR, exist_ok=True)
def get_polygon_download_locations(polygon, number, seed=7):
"""
Samples NUMBER points evenly but randomly from a polygon. Seed is set for
reproducibility.
At first tries to create sub-grid of size n x n where n = sqrt(number)
It checks these coordinates and if they are in the polygon it uses them
If the number of points found is still less than the desired number,
samples are taken randomly from the polygon until the required number
is achieved.
"""
random.seed(seed)
min_x, min_y, max_x, max_y = polygon.bounds
edge_num = math.floor(math.sqrt(number))
lats = np.linspace(min_y, max_y, edge_num)
lons = np.linspace(min_x, max_x, edge_num)
# performs cartesian product
evenly_spaced_points = np.transpose(
[np.tile(lats, len(lons)), np.repeat(lons, len(lats))])
assert len(evenly_spaced_points) <= number
# tries using evenly spaced points
points = []
for proposed_lat, proposed_lon in evenly_spaced_points:
point = Point(proposed_lon, proposed_lat)
if polygon.contains(point):
points.append([proposed_lat, proposed_lon])
# fills the remainder with random points
while len(points) < number:
point = Point(random.uniform(min_x, max_x),
random.uniform(min_y, max_y))
if polygon.contains(point):
points.append([point.y, point.x])
return points # returns list of lat/lon pairs
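# Quick self-check sketch (illustrative; not in the original script): sample
# four lat/lon pairs from a unit square and confirm they land inside it.
def _example_sample_unit_square():
    from shapely.geometry import box  # shapely is already a dependency here
    square = box(0.0, 0.0, 1.0, 1.0)
    points = get_polygon_download_locations(square, number=4)
    assert all(square.contains(Point(lon, lat)) for lat, lon in points)
    return points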
def generate_country_download_locations(min_population=100, num_per_grid=4):
"""
Generates a defined number of download locations (num_per_grid) for each
grid cell with at least the minimum number of residents (min_population).
"""
grid = gpd.read_file(os.path.join(GRID_DIR, 'grid.shp'))
grid = grid[grid['population'] >= min_population]
lat_lon_pairs = grid['geometry'].apply(
lambda polygon: get_polygon_download_locations(
polygon, number=num_per_grid))
centroids = grid['geometry'].centroid
columns = [
'centroid_lat', 'centroid_lon', 'image_lat', 'image_lon', 'image_name'
]
with open(os.path.join(GRID_DIR, 'image_download_locs.csv'), 'w') as f:
f.write(','.join(columns) + '\n')
for lat_lons, centroid in zip(lat_lon_pairs, centroids):
for lat, lon in lat_lons:
name = str(lat) + '_' + str(lon) + '.png'
to_write = [
str(centroid.y), str(centroid.x), str(lat), str(lon), name]
f.write(','.join(to_write) + '\n')
print('Generated image download locations and saved at {}'.format(
os.path.join(GRID_DIR, 'image_download_locs.csv')))
def download_images(df):
"""
Download images using a pandas DataFrame that has "image_lat", "image_lon",
"image_name" as columns.
"""
imd = PlanetDownloader(ACCESS_TOKEN)
zoom = 16
NUM_RETRIES = 20
WAIT_TIME = 0.1 # seconds
# drops what is already downloaded
already_downloaded = os.listdir(IMAGE_DIR)
print('Already downloaded ' + str(len(already_downloaded)))
df = df.set_index('image_name').drop(already_downloaded).reset_index()
print('Need to download ' + str(len(df)))
# use three years of images to find one that matches search critera
min_year = 2014
min_month = 1
max_year = 2016
max_month = 12
for _, r in tqdm(df.iterrows(), total=df.shape[0]):
lat = r.image_lat
lon = r.image_lon
name = r.image_name
try:
im = imd.download_image(lat, lon, min_year, min_month, max_year, max_month)
if im is None:
resolved = False
for _ in range(NUM_RETRIES):
    time.sleep(WAIT_TIME)
    im = imd.download_image(lat, lon, min_year, min_month, max_year, max_month)
    if im is None:
        continue
    else:
        # save under the same lat_lon naming convention used below
        plt.imsave(os.path.join(IMAGE_DIR, name), im)
        resolved = True
        break
if not resolved:
# print(f'Could not download {lat}, {lon} despite several retries and waiting')
continue
else:
pass
else:
# no issues, save according to naming convention
plt.imsave(os.path.join(IMAGE_DIR, name), im)
except Exception as e:
# logging.error(f"Error-could not download {lat}, {lon}", exc_info=True)
continue
return
if __name__ == '__main__':
create_folders()
arg = '--all'
if len(sys.argv) >= 2:
arg = sys.argv[1]
assert arg in ['--all', '--generate-download-locations', '--download-images']
if arg == '--all':
print('Generating download locations...')
generate_country_download_locations()
df_download = pd.read_csv(os.path.join(GRID_DIR, 'image_download_locs.csv'))
print('Downloading images. Might take a while...')
download_images(df_download)
elif arg == '--generate-download-locations':
print('Generating download locations...')
generate_country_download_locations()
elif arg == '--download-images':
df_download = pd.read_csv(os.path.join(GRID_DIR, 'image_download_locs.csv'))
print('Downloading images. Might take a while...')
download_images(df_download)
else:
raise ValueError('Args not handled correctly')
| 31.803828 | 99 | 0.641643 | 890 | 6,647 | 4.608989 | 0.301124 | 0.049732 | 0.024378 | 0.017065 | 0.228425 | 0.213554 | 0.155534 | 0.155534 | 0.10629 | 0.08825 | 0 | 0.006274 | 0.256657 | 6,647 | 208 | 100 | 31.956731 | 0.823922 | 0.18294 | 0 | 0.175573 | 1 | 0 | 0.123002 | 0.028211 | 0 | 0 | 0 | 0 | 0.022901 | 1 | 0.030534 | false | 0.007634 | 0.137405 | 0 | 0.183206 | 0.061069 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d86a1d3f8be7cc0277c9fbefaafac0fb5dcf11a6 | 1,351 | py | Python | wildlifecompliance/migrations/0219_auto_20190611_1047.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 1 | 2020-12-07T17:12:40.000Z | 2020-12-07T17:12:40.000Z | wildlifecompliance/migrations/0219_auto_20190611_1047.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 14 | 2020-01-08T08:08:26.000Z | 2021-03-19T22:59:46.000Z | wildlifecompliance/migrations/0219_auto_20190611_1047.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 15 | 2020-01-08T08:02:28.000Z | 2021-11-03T06:48:32.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2019-06-11 02:47
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('wildlifecompliance', '0218_auto_20190611_1026'),
]
operations = [
migrations.CreateModel(
name='AllegedOffence',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('offence', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='alleged_offence_offence', to='wildlifecompliance.Offence')),
('section_regulation', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='alleged_offence_section_regulation', to='wildlifecompliance.SectionRegulation')),
],
options={
'verbose_name': 'CM_Alleged_Offence',
'verbose_name_plural': 'CM_Alleged_Offences',
},
),
migrations.AddField(
model_name='offence',
name='alleged_offences',
field=models.ManyToManyField(through='wildlifecompliance.AllegedOffence', to='wildlifecompliance.SectionRegulation'),
),
]
| 39.735294 | 208 | 0.655811 | 132 | 1,351 | 6.484848 | 0.492424 | 0.037383 | 0.049065 | 0.077103 | 0.200935 | 0.200935 | 0.200935 | 0.200935 | 0.200935 | 0.200935 | 0 | 0.031549 | 0.225759 | 1,351 | 33 | 209 | 40.939394 | 0.786807 | 0.050333 | 0 | 0.076923 | 1 | 0 | 0.283594 | 0.164844 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.115385 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |